An IBM system softens offensive content on social media while keeping the meaning intact, New Scientist reports. The IBM team harvested nearly 10 million posts from Twitter and Reddit. The team then built an artificial intelligence (AI) algorithm with several parts: one part parsed a given offensive sentence to work out its meaning, a second used this to create a more palatable version, and a third evaluated the rewritten sentence to see whether the tone had changed. The new sentence was then translated back to see how closely it resembled the original. If there were any errors, the system was tweaked accordingly.
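The three-part loop described above can be sketched as a toy pipeline. This is a minimal illustration, not IBM's system: the substitution table, function names, and similarity threshold are all hypothetical stand-ins for the learned models, and a simple string-similarity ratio stands in for the back-translation check.

```python
import difflib

# Hypothetical substitution table standing in for the learned rewriting
# model (the real system learns rewrites from millions of posts).
SOFTER = {"f***sake": "hell sake"}


def is_offensive(sentence: str) -> bool:
    # Part 1 (stand-in): parse the sentence and flag offensive wording.
    return any(w.strip(",.!?").lower() in SOFTER for w in sentence.split())


def soften(sentence: str) -> str:
    # Part 2 (stand-in): generate a more palatable version, keeping
    # surrounding punctuation attached to each rewritten word.
    out = []
    for w in sentence.split():
        core = w.strip(",.!?")
        if core.lower() in SOFTER:
            out.append(w.replace(core, SOFTER[core.lower()]))
        else:
            out.append(w)
    return " ".join(out)


def tone_changed(original: str, rewritten: str) -> bool:
    # Part 3 (stand-in): confirm the offensive tone is gone.
    return is_offensive(original) and not is_offensive(rewritten)


def meaning_preserved(original: str, rewritten: str, threshold: float = 0.7) -> bool:
    # Back-check (stand-in for back-translation): does the rewrite
    # still closely resemble the original?
    ratio = difflib.SequenceMatcher(None, original, rewritten).ratio()
    return ratio >= threshold


sentence = "for f***sake, first world problems are the worst"
softened = soften(sentence)
print(softened)  # for hell sake, first world problems are the worst
print(tone_changed(sentence, softened))
print(meaning_preserved(sentence, softened))
```

In the real system the check-and-compare steps feed corrections back into training; here they simply report pass/fail for one sentence.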
“For f***sake, first world problems are the worst” became “for hell sake, first world problems are the worst”, for example. Experts believe the system will be able to handle posts containing hate speech, racism and sexism. But this raises complex issues of censorship and the restriction of free speech. Eventually, the AI algorithm may be used as a prompt: telling online posters that what they’re about to say is offensive, and offering a kinder option.