

AI aims to make Wikipedia friendlier and better

Published : Dec 2, 2015, 10:26 pm IST
Updated : Dec 2, 2015, 10:26 pm IST

Software trained to tell the difference between an honest mistake and intentional vandalism is being rolled out in an effort to make editing Wikipedia less psychologically bruising. It was developed by the Wikimedia Foundation, the nonprofit organisation that supports Wikipedia.

One motivation for the project is a significant decline in the number of people considered active contributors to the flagship English-language Wikipedia: their ranks have fallen by 40 per cent over the past eight years, to about 30,000. Research indicates that the problem is rooted in Wikipedians’ complex bureaucracy and their often hard-line responses to newcomers’ mistakes, enabled by semi-automated tools that make deleting new changes easy.

Aaron Halfaker, a senior research scientist at the Wikimedia Foundation who helped diagnose that problem, is now leading the project trying to fight it, which relies on algorithms with a sense for human fallibility. His ORES system, short for “Objective Revision Evaluation Service,” can be trained to score the quality of new changes to Wikipedia and to judge whether an edit was made in good faith.
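ORES is exposed as a public web service that editing tools can query over HTTP. As a rough illustration of the kind of request such a tool might make, the Python sketch below asks the service to score one revision against the “damaging” and “goodfaith” models; it assumes the v3 REST interface ORES has publicly documented, and the wiki name and revision ID are placeholders, not examples from the article.

import requests

# Hedged sketch: query the public ORES scoring service for the
# "damaging" and "goodfaith" models. Endpoint shape and response
# layout follow ORES's documented v3 REST interface; the revision
# ID used below is purely illustrative.
ORES_URL = "https://ores.wikimedia.org/v3/scores/{wiki}"

def score_revision(wiki: str, rev_id: int) -> dict:
    """Return ORES probability scores for a single revision."""
    resp = requests.get(
        ORES_URL.format(wiki=wiki),
        params={"models": "damaging|goodfaith", "revids": rev_id},
        timeout=10,
    )
    resp.raise_for_status()
    # Response nests scores by wiki, then revision ID, then model name.
    scores = resp.json()[wiki]["scores"][str(rev_id)]
    return {
        model: result["score"]["probability"]
        for model, result in scores.items()
    }

# Example call: probabilities that revision 123456 on English
# Wikipedia is damaging, and that it was made in good faith.
print(score_revision("enwiki", 123456))

A tool built on these two scores can treat a “damaging but good-faith” edit differently from outright vandalism, for instance by prompting the reviewer to leave an explanation rather than silently reverting.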

Halfaker invented ORES in hopes of improving the tools that help Wikipedia editors by showing recent edits and making it easy to undo them with a single click. Those tools were invented to meet a genuine need for better quality control after Wikipedia became popular, but an unintended consequence is that new editors can find their first contributions wiped out without explanation because they unwittingly broke one of Wikipedia’s many rules.