Intensifying its fight against child abuse imagery on the web, Google is now deploying artificially intelligent systems to help organisations identify and report child sexual abuse material (CSAM).
In a blog post, Google announced it will start using cutting-edge AI on top of its existing technologies to help service providers, NGOs, and other technology companies review this disturbing content at scale.
The approach will leverage deep neural networks for image processing to identify new images quickly, thereby cutting down on response time. It will also flag content that previously went unflagged.
Google will provide the new AI tools for free to NGOs and industry partners through its Content Safety API, a toolkit that increases the capacity to review CSAM online while requiring less human inspection.
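To illustrate the general idea of classifier-driven triage described above, here is a minimal, entirely hypothetical sketch. It does not use Google's actual Content Safety API; the function names, scores, and threshold are invented for illustration. It assumes a classifier has already assigned each item a risk score, and shows how scoring lets the highest-priority content reach human reviewers first.

```python
def triage(scored_items, flag_threshold=0.9):
    """Split classifier-scored items into an auto-flag list and a
    prioritised human-review queue.

    scored_items: list of (item_id, risk_score) pairs, where risk_score
    is a hypothetical value in [0, 1] from an image classifier.
    Returns (flagged, review_queue): items at or above the threshold are
    flagged immediately; the rest are sorted highest-risk first so
    reviewers see the most likely matches soonest.
    """
    flagged = [item for item in scored_items if item[1] >= flag_threshold]
    review_queue = sorted(
        (item for item in scored_items if item[1] < flag_threshold),
        key=lambda item: item[1],
        reverse=True,
    )
    return flagged, review_queue


# Example: three items with made-up classifier scores.
flagged, queue = triage([("img_a", 0.25), ("img_b", 0.95), ("img_c", 0.60)])
```

Sorting the review queue by score is what "cuts down on response time" in practice: reviewer effort is spent on the content most likely to need action, rather than on a first-come, first-served backlog.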