AI: Humanity’s greatest tool or most dangerous gamble?
Artificial. The word itself often carries a sting. Artificial gold, artificial silver, artificial fabric, even an artificial pen: all are seen as second-class duplicates, imitations, copies of the “real thing.” For centuries, we have dismissed the artificial as inferior. But today, the conversation has shifted. We are no longer talking about fake ornaments or knock-off goods. Artificial Intelligence (AI) is real, a force that is not only reshaping manufacturing, healthcare, communication, and banking, but also redefining what it means to be human in a world where machines can think, learn, and even build themselves.
Artificial Intelligence is touching our daily lives, leapfrogging ahead and delivering surprises and shocks with every passing year. It is growing so fast that even experts sometimes feel disbelief and unease. As AI expert Matt Shumer says, “AI is now building the next AI.” Think about that carefully. If intelligence can improve itself, then we are standing at the edge of a revolution that moves on its own.
Today, every major country wants to lead in AI. Governments see AI as important for defence, space research, transport systems, and even daily administration. Companies see it as a powerful tool to increase productivity. Young people, especially Gen Z, are already using AI tools that solve in seconds what once took millennials hours.
But there is another side to this story. AI can also become a weapon. It can be misused by hostile nations, extremist groups, or even corporations that are not properly regulated. International law controls nuclear weapons and chemical arms, however imperfectly, but it has not yet fully addressed AI.
AI is not like a missile or a bomb. It is code, invisible but powerful. Imagine drones selecting targets without human control. Imagine fake news campaigns run entirely by machines, influencing millions of people within seconds. Or imagine hackers feeding wrong data into AI systems to create chaos. The Geneva Conventions were written for wars fought by humans. But what happens if algorithms start deciding who is a threat and who is not? AI warfare may not follow traditional rules.
AI Building AI
Matt Shumer points out that advanced AI models like GPT-5.3 Codex were partly built using earlier AI versions. In simple words, AI helped create a smarter version of itself. This is not imagination; it is already happening. Experts call this the “intelligence explosion”: a cycle in which each new AI system helps build an even more powerful one. According to Dario Amodei, CEO of Anthropic, we may be just one or two years away from a stage where AI independently builds the next generation of AI. Machines may soon improve faster than humans can regulate them. Shumer admits honestly that only a small group of researchers and companies is shaping this powerful technology. That concentration of power is worrying. A few labs and corporations are making decisions that may affect the entire world. Where is the global discussion? Where is the democratic oversight?
Growing Humanitarian Concern
AI is an advanced technology growing worldwide at breakneck speed. But are we thinking of the people who live in fear, not knowing how safe their tomorrow, their future, is going to be? Many fear job losses. AI is already writing software, answering customer queries, and even doing creative work. Shumer, in his article, says he now simply describes what he wants in plain English, and the AI builds it. Work gives us more than money; it gives us dignity and purpose. If machines replace large numbers of workers, what happens to people’s sense of identity? One fears that society will split into a new divide between those who control AI and those who depend on it. International law recognises the right to work and the right to dignity. AI challenges both. If machines become better than humans at most tasks, how do we protect human relevance?
Consider the pace of AI development. It is shocking, as Matt Shumer summarises it: in 2022, AI struggled with simple maths; in 2023, it cleared professional exams; in 2024, it wrote working software and explained complex science; in 2025, many engineers began handing over coding tasks to AI; and now, in 2026, new models have made older systems look outdated. Sam Altman, at the recent summit in New Delhi, said superintelligence will exist by the end of 2028, and that it will be better at research than our best scientists and better at management than the greatest CEOs.
This is not slow progress. It is like a tidal wave. Soon, AI may complete month-long projects in a few hours. Amodei predicts that by 2026 or 2027, AI systems could be smarter than almost all humans at almost every task. If that happens, we must ask: what will be the human role?
PM Modi, in his address at the India AI Impact Summit 2026, rightly drew a comparison: just as nuclear power has demonstrated both destructive and constructive capacities, AI too can either disrupt humanity or solve its deepest challenges, depending on how it is guided. The central question, he argued, is not what AI can do, but what humanity chooses to do with it now.
PM Modi said that the core objective of the summit was to transform AI from a “machine-centric” system into a “human-centric” one, making it sensitive, responsive and aligned with human welfare rather than profit or power alone. He launched the “MANAV” framework, with the message that AI must remain under human command, rooted in ethics, transparency and inclusivity, if it is to become humanity’s greatest tool rather than its most dangerous gamble.
The biggest concern is not just that AI is powerful; it is that it may slip beyond our control. If AI designs new AI systems, who is responsible when something goes wrong? The programmer? The company? Or no one? At present, international law has no clear answers. Experts and governments voice concern on many platforms, yet hardly any major step has been taken by the international community or by the global watchdogs that matter. Shumer warns that this is no longer a theoretical discussion. The future is already here; it simply has not affected everyone equally yet. It is only a matter of time before the common masses feel the brunt.
So what should be done?
First, AI must be treated not only as a technology issue but also as a legal and humanitarian one. Just as the world created treaties for nuclear weapons, it must urgently create rules for AI. We need global agreements, empowered oversight bodies, and accountability mechanisms.
Second, human dignity must be protected. AI should support human work, not completely erase it. Governments must invest in retraining programmes, better education, and strong social safety systems.
Third, AI decision-making should not remain in the hands of a few companies. There must be global cooperation and wider public participation in shaping AI policies.
Finally, we must remember: technology is a tool, not our destiny. AI can help cure diseases, fight climate change, and improve education. But it can also deepen inequality, spread misinformation, and intensify warfare. The next two to five years may bring changes that many of us are not prepared for. AI is moving fast, rewriting social rules and changing the industrial world. The real question, despite the assurances offered in seminars and summits, is simple: will AI serve humanity, or will humanity struggle to keep up with AI? The answer depends on what we decide, globally, today.
Author is National Editor, Greater Kashmir