
AI: A scary forecast

These dangers were mentioned in an open letter signed by 1000 technology practitioners in the field in March this year
12:02 AM Nov 25, 2023 IST | Vivek Katju

At a time when the upheaval in one of the world’s leading artificial intelligence (AI) companies, OpenAI, has been attracting worldwide attention, Prime Minister Narendra Modi announced to the G20 countries, at the virtual summit he hosted for its leaders on November 22, that India will hold the Global AI Partnership Summit next month. Modi told his peers and the representatives of other G20 states, as well as those who had been invited to the physical G20 summit in September, “In the era of Artificial Intelligence, there is need to use the technology in a responsible manner. There is growing concern over the negative use of AI all over the world. India firmly believes that we should work together on global regulation of AI. We have to move forward, understanding the seriousness of how DeepFake is for society, for the individual”. Saying this, Modi expressed the hope that “all of you will cooperate in this as well”.


The world is in the fourth industrial revolution, which involves progress in artificial intelligence, robotics and the internet of things. Artificial intelligence products such as ChatGPT, produced by OpenAI, have created considerable excitement all across the world. They are able to put together, on the basis of the information available on the internet, different kinds of writing on a whole variety of subjects. Some of ChatGPT’s output needs correction, but OpenAI and AI experts claim that its accuracy will increase over time. It therefore holds the promise of doing, at a minimum, the basic secretarial functions which are performed by human beings today. But this is only the beginning for AI; as Modi has cautioned, it has the potential of posing dangers to individuals as well as societies.


These dangers were mentioned in an open letter signed by 1000 technology practitioners in the field in March this year. According to media reports, the letter called for AI labs to pause research on and production of systems beyond GPT-4 because they may constitute great “risks” to humanity. The letter, according to the New York Times, noted that AI companies are “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one—not even their creators—can understand, predict or reliably control”. If what the letter writers have written is true, then it breaks the basic rule on which all science and technology has developed since the beginning of time. That rule is that human beings remain the masters of what scientific discoveries unfold and of the technology that they produce. Indeed, it has hitherto been inconceivable that human beings could create artificial “minds” that acquire decision-making powers and can override human will. Yet that is precisely what the letter writers are cautioning the world at large about.


Predictably, AI companies acknowledge that AI products need to be carefully managed and monitored but are unwilling to pause research and development of newer and more powerful AI products which can perform analytical functions and learn, as human beings do, through experience. Indeed, the recent and dramatic turmoil at OpenAI was on account of CEO Sam Altman’s unwillingness to stop working on more sophisticated systems. The OpenAI board dismissed Altman, but the tables were swiftly turned on the anti-Altman board members when Microsoft proved willing to take him in and almost all of OpenAI’s employees indicated they were willing to join him there. The result is that Altman is back, and it is not unlikely that he will pursue higher-level AI products with even greater zeal.


That there is a need to regulate AI is obvious because of the dangers involved. The question is whether the United States will be willing to let the international community be a participant in doing so, because the major internet and AI companies are in the US and the technology is being developed under their ownership. Chips and hardware are being produced elsewhere too, and China is putting in a major effort in the AI field, but it is still the US which is ahead in the game. In the case of the internet there are really no effective international governmental organisations to regulate it. And it is undeniable that the internet has ushered in the digital age as nothing else has. Countries have national laws and regulations to monitor and control digital products such as WhatsApp and X, but there are no international regulations in these areas.


This is vastly different from the way the powers with nuclear weapons controlled their spread by bringing in the Nuclear Non-Proliferation Treaty (NPT) and creating an international organisation, the International Atomic Energy Agency (IAEA), to regulate civilian nuclear activity. This was possible because the US did not have a monopoly on either nuclear weapons or nuclear technology. International bodies to control the spread of chemical and biological weapons of mass destruction also exist because no country has, in those fields, the kind of monopoly that the US has in the AI industry as of now.


In the current scene it will be interesting to observe the stand the US takes on exposing its AI industry to the monitoring and control of inter-governmental organisations. The cautionary letter written by AI industry participants may mention problems for humanity, but it is directed to US lawmakers and really seeks, at most, US governmental regulation. It is unlikely that the letter writers themselves would be willing to accept intergovernmental regulation and monitoring. In these circumstances it will be most interesting to see the practical positions, as distinct from rhetoric, that the developed world, specifically the US, takes at the summit called by India.

