Saturday, Sep 22, 2018 | Last Update : 04:30 AM IST
Are AIs safe? Can they be trusted with real-world responsibilities?
It’s hard to predict how Artificial Intelligence (AI) will behave. In 2016, Microsoft released Tay, an AI Twitter bot designed to interact with people through conversational language and get smarter along the way by observing, learning and mimicking. The designers perhaps underestimated the Twitterati’s penchant for mischief.
Soon after Tay was launched, people started sending it racist, misogynistic and hateful tweets. And Tay, true to its design, absorbed and internalised this behaviour, and began tweeting similar nasty sentiments back at them. Within hours, the AI had morphed into a racist, abusive troll. Microsoft had to scramble and work overtime to delete the offensive tweets.

While this episode generated a lot of mirth, it raises many serious questions. Are AIs safe? Can they be controlled? Can they be trusted with real-world responsibilities? If one of the world’s premier technology companies couldn’t anticipate its AI’s behaviour, how will others fare?

Although the field of AI has existed since the 1950s, it has begun to mature only in the past decade or so. Industries are increasingly turning to AI and machine learning for more intelligent solutions. Yet while the term is frequently bandied about, there is much confusion about what AI is. An AI is a machine that mimics the human mind’s cognitive functions, such as learning and problem solving. The Turing Test, proposed in 1950 by English computer scientist Alan Turing, gives us one perspective on what an AI is: a computer passes the test if its responses to human interrogation cannot be distinguished from those of a real person. By this definition, an AI is a machine that can think as well as act like a human.
By another definition, an AI is a machine that can think and act rationally (as opposed to human behaviour, which can often be irrational), and is capable of learning and solving problems autonomously. In the current context, perhaps the best way to understand AI is to think of it as any task that a computer can perform as well as, or better than, humans.
There are three main kinds of AI at present: algorithm-based systems, neural-network-based machine learning, and deep learning.
The first kind follows explicitly programmed rules. Consider Google’s recommendation engines, which determine which ads to show you, and which stories and videos to recommend to you, based on your past preferences.
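To make the distinction concrete, here is a deliberately simple sketch of the algorithm-based kind: a recommender whose behaviour is entirely determined by a fixed, hand-written rule (ranking videos by how many tags they share with a user’s history). The data and scoring rule are invented for illustration and bear no relation to Google’s actual systems.

```python
# Rule-based recommendation: no learning, just a fixed scoring rule.
# Catalogue, tags and the overlap-count score are illustrative only.

def recommend(history_tags, catalogue, top_n=2):
    def score(item):
        # Fixed rule: count tags the item shares with the user's history.
        return len(set(item["tags"]) & set(history_tags))
    ranked = sorted(catalogue, key=score, reverse=True)
    return [item["title"] for item in ranked[:top_n]]

catalogue = [
    {"title": "Go basics", "tags": ["board games", "strategy"]},
    {"title": "Cooking pasta", "tags": ["food", "howto"]},
    {"title": "Chess openings", "tags": ["board games", "strategy", "chess"]},
]
picks = recommend(["strategy", "board games"], catalogue)
```

Because the rule is written by hand, the program’s behaviour is entirely predictable, which is precisely what separates this kind of AI from the learning systems described next.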
Machine learning, on the other hand, relies on neural networks, which are complex computer systems modelled on the human brain. Neural networks utilise multi-level statistical and probabilistic analysis that replicates the way the human brain’s network of neurons and synapses processes and interprets data. Such programs “learn” autonomously, and as a result, even their programmers cannot accurately predict how they will derive solutions and solve problems.
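The learning process described above can be sketched in miniature. Below, a single artificial “neuron” learns the logical OR function purely from examples, nudging its connection weights (loosely analogous to synapse strengths) whenever it makes a mistake. This toy is orders of magnitude simpler than a real neural network, and every name in it is illustrative.

```python
# A single neuron learning from its own errors (the perceptron rule):
# nobody writes the OR rule into the program; it emerges from examples.

def train_neuron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]   # connection weights, adjusted during learning
    b = 0.0          # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out            # learn from the mistake
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Training examples for OR: output is 1 if either input is 1.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_neuron(data)
```

Even in this tiny example, the final weights are a by-product of the training history rather than anything the programmer specified, which hints at why the behaviour of large networks is so hard to predict.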
Consider “Go”, an incredibly complex and abstract two-player board game with a near-infinite set of possible moves. DeepMind’s Go-playing software, AlphaGo Zero, equipped with nothing more than the basic rules of the game, became the strongest Go player in history simply by playing against itself millions of times and learning along the way. This is an example of machine learning.
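Self-play learning can be illustrated with a far humbler game than Go. In the sketch below, an agent is given only the rules of one-pile Nim (take one to three stones; whoever takes the last stone wins), plays thousands of games against itself, and learns move values from wins and losses. This resembles AlphaGo Zero in spirit only; the game, the update rule and all names are simplifications chosen for brevity.

```python
import random

# Self-play learning for one-pile Nim. The agent starts knowing only
# the legal moves and improves by playing itself and recording outcomes.

random.seed(0)
value = {}  # value[(stones, move)] ~ estimated chance that `move` wins

def best_move(stones, explore=0.0):
    moves = [m for m in (1, 2, 3) if m <= stones]
    if random.random() < explore:
        return random.choice(moves)   # occasionally try something new
    return max(moves, key=lambda m: value.get((stones, m), 0.5))

def self_play_game():
    stones, history, player = 21, [], 0
    while stones > 0:
        m = best_move(stones, explore=0.3)
        history.append((player, stones, m))
        stones -= m
        winner = player               # whoever moved last took the last stone
        player = 1 - player
    for p, s, m in history:           # nudge estimates toward the outcome
        target = 1.0 if p == winner else 0.0
        old = value.get((s, m), 0.5)
        value[(s, m)] = old + 0.1 * (target - old)

for _ in range(20000):
    self_play_game()
```

After training, the agent has worked out for itself, for instance, that taking the last stone is always good and that leaving a single stone for the opponent is always bad; nothing of the sort was programmed in.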
Deep learning uses multiple layers of neural networks, working both independently as well as in concert, each analysing different data and solving different problems before the system combines the disparate outcomes into an integrated whole. Applications of deep learning include image recognition, automatic speech recognition and on-the-fly translation, visual art processing, real-time facial recognition systems, drug discovery, cancer treatment, medical diagnostics, genetics, self-driving vehicles, and much more.
Artificial Intelligence has undoubted benefits. It can automate tasks that are either too difficult or too dangerous for humans to perform. It can eliminate human error, save lives, discover new medicines and treatments for diseases, vastly improve the quality of our lives, and eliminate the need for human supervision or intervention in a host of tasks, thereby freeing us for more productive and meaningful work.

On the other hand, AI, in tandem with robotics, is poised to automate many tasks, possibly entire industries, rendering millions of jobs redundant and potentially driving unemployment up significantly. AI could make the rich wealthier than ever before while marginalising the working classes.
AI can enable governments to monitor citizens 24/7, making privacy a thing of the past. This can have serious implications for human rights and freedom, especially in autocratic and repressive regimes. China, for example, is implementing a Social Credit System which will use government data to determine citizens’ economic and social status and potentially what rights and freedoms they get.
Future AIs may enable media corporations to control every aspect of people’s lives. Consider the military applications of AI. It can power autonomous drones that do not require human intervention to decide which targets to bomb. Such autonomous drones are already on the drawing boards of several militaries. Now imagine a similar system in charge of nuclear weapons.
Finally, there is a remote, though real, possibility that AI-enabled machines and systems, if linked together, may give rise to an immortal, godlike artificial hyperintelligence, possibly even a self-aware one that controls every economy and weapons system and determines the fate of human civilisation. This is the doomsday scenario Elon Musk has repeatedly warned of.

Ultimately, all technology has the potential either to benefit humanity, or to harm, even destroy us. It depends on how we use it. Humanity stands at the cusp of a new technological era. If AI is regulated and used for good, it will cure diseases and may even end hunger and poverty. If used for military purposes, as it increasingly is, it may well bring us back to the Stone Age. May we have the wisdom to use it well.
(The writer is a theoretical physicist whose research interests include dark matter, dark energy, black hole physics, quantum gravity, and the physics of the very early universe)