Sanjeev Ahluwalia | Why the whole world needs to keep the AI genie bottled

The Asian Age.  | Sanjeev Ahluwalia

Opinion, Columnists

Is Artificial Intelligence (AI) different, and if so, how much time do we have left?

The time horizon of the AI tipping point is uncertain. (Image: jcomp on Freepik)

Humans have faced existential threats before during the 300,000 years they have been around. Self-inflicted harm exceeds damage from natural calamities. Science has grown our understanding of how human progress itself seeds new existential threats. And yet, humans have proliferated and benefited from each scientific iteration that outsourced work to machines. Is Artificial Intelligence (AI) different, and if so, how much time do we have left?

Elon Musk is being quite honest when he warns that AI would eventually eat all human jobs, rendering us redundant, although a mite optimistic when claiming that universal, abundant wealth -- well beyond what Karl Marx and succeeding fellow romantics dreamed of -- could be ours.

He recognises a problem: how to retain a sense of purpose and self-worth in a life of leisure -- a conundrum familiar to the progeny of billionaires.

Humans are actually quite good at wasting time, as evidenced by America’s $600 billion sports industry and by the Olympic Games, where puny humans compete at jumping higher, running faster, throwing javelins further and wrestling each other, whilst machines and missiles outperform humans on the ground, in the air and in the water. Expect more of the same futile struggles for vanity trophies in the age of plenty.

The time horizon of the AI tipping point is uncertain. Given our experience with climate change, we are likely to wake up to reality too late. After all, a significant segment of the rich world still believes that climate change does not exist, and even more believe that we are likely to make the transition safely. Humans tend to be optimistic about the distant future but anxious about immediate outcomes -- will my doctor see me on time? -- and they are happy to be proven wrong when they survive despite the darkest predictions.

The median estimate at a 2015 conference of AI experts in Puerto Rico was 2045, with others reckoning in centuries, as the likely year by which generalised AI -- capable of acting autonomously of human management and therefore potentially harmful rather than beneficial to humanity -- could exist. That is about five decades earlier than the climate apocalypse expected around 2100, and it underlines the need for early, concerted global action.

The AI Safety Summit at Bletchley Park, about 80 km north-west of London, billed as the world’s first, was organised by the UK recently. Another will follow about six months later, co-hosted by South Korea, with France hosting the 2024 summit. The summit projects the UK as a nimble, pragmatic, technological lynchpin between the rumbustious free markets of the United States, the innovation-unfriendly, rules-bound constraints of the European Union and the heavy hand of the state in China. To be sure, the UK has what it takes to walk the middle path independently and survive. According to a UK government-sponsored AI sector study published in 2023, it has 3,170 active AI companies, of which 60 per cent are dedicated AI businesses.

London, the southeast and the east of England account for 75 per cent of registered AI office addresses. Elon Musk, prodded at the meeting by Prime Minister Rishi Sunak, said that the Bay Area in California and Greater London are the global drivers of innovation. Regional clusters in the UK specialise in automotive, industrial automation and machinery; energy, utilities and renewables; health, well-being and medical practice; and agricultural technology. Annual revenues are GBP 10.6 billion, with 50,040 full-time equivalent jobs. The contribution of AI to national gross value added (GVA) is 1.3 per cent, or GBP 3.7 billion. Larger companies contribute more to GVA than smaller ones, illustrating the capital-intensive, high-R&D nature of deep technology.

The UK announced that it would establish the world’s first AI Safety Institute to complement international efforts including at G-7, OECD, Council of Europe, United Nations and the Global Partnership on AI. The US has similarly decided on establishing an AI Safety Institute.

Clones are likely to proliferate. Twenty-eight nations, including the US, EU, China, India, Japan, Australia, Brazil, Germany, France, Israel, Ireland, Kenya, Saudi Arabia, Nigeria and the United Arab Emirates participated and endorsed the Bletchley Declaration.

The contents of the declaration are anodyne and designed to promote consensus. Motherhood-and-apple-pie statements dominate. Savour this extract: “We welcome… the recognition that protection of human rights, transparency and explainability, fairness, accountability, regulation, safety, appropriate human oversight, ethics, bias mitigation, privacy and data protection needs to be addressed”. This strategy of globalising the concerns around AI safety appears to replicate the non-binding nature of recent international agreements, like the climate template. The risk here is that the longer the list of unfunded “concerns”, the more difficult it becomes to arrive at a workable, efficient, meaningful agreement.

Consider the Bletchley resolve to build “risk-based policies across our countries to ensure safety in light of such (AI related) risks, collaborating as appropriate, while recognising our approaches may differ, based on national circumstances and applicable legal frameworks”.

The “One Problem, Many Solutions” template can become a cop-out, which doggedly pursues a formal “agreement” on generalities, without any bite -- somewhat like the climate action agreement.

China was invited to attend, and it did. That can only have good consequences. Excluding the second largest economy would have diminished the summit. But the fact that Rishi Sunak publicly sought and got Elon Musk’s endorsement of this outreach shows the extent of our splintered geopolitics and the demise of global collaboration. Possibly, a proxy yes vote from the US tech industry was the best possible olive branch whilst the pre-US elections standoff with China continues.

Consider another rhetorical resolve to identify “AI safety risks of shared concern, building a shared scientific and evidence-based understanding of these risks”. In a world adrift, with a war in the heart of Europe and another in West Asia -- both partly to shore up the political interests of the concerned governments -- and selective trade restrictions proliferating, a shared world view and common aspirations must precede meaningful scientific collaboration amongst the major AI-capable countries. We are far from there yet, and therein lies the real threat to humanity.

Amidst this dark reality, myths surround AI that should be debunked. Robots are not killer machines unless they are programmed to be so, like heat-seeking missiles. Robots are only as intelligent as you make them. So, keep the AI genie bottled well, and make sure that you don’t wish for more than what you want.