Why citizens should be concerned for the future
What we observe in today’s era of warfare is a profound transformation fuelled by the integration of artificial intelligence (AI) technologies. This shift spans a broad spectrum of activity, from traditional to high-tech armed conflict. Wars were once fought soldier to soldier and army to army, away from cities and civilisation, with human emotion and judgement intact.
But in present-day warfare, machinery, and increasingly autonomous weaponry with inbuilt AI, does the fighting. A machine has no emotions and no humanitarian instincts, and its use has led to massive destruction of civilian infrastructure as collateral damage in cities and towns. Imagine standing before an automated killing machine: will it discriminate between friend and foe?
Current examples include the reported use of AI-powered drones and autonomous weapons in the ongoing conflicts in Ukraine and Gaza, where massive civilian casualties are taking place.
Ukraine Conflict: In the conflict in Ukraine, both Ukrainian and Russian forces have used AI-powered drones for surveillance and targeting. These drones provide real-time intelligence on troop movements and enable rapid responses to enemy actions. AI algorithms are also employed to analyse vast amounts of data, supporting strategic decision-making and operational planning.
Gaza Conflict: The Israel Defence Forces (IDF) have employed AI targeting platforms, such as “the Gospel,” to identify and attack targets in Gaza. These platforms use machine learning algorithms to analyse data from various sources, including surveillance drones and intelligence reports, and generate automated recommendations for strikes. This enables the IDF to conduct precision strikes with minimal human intervention, reducing the risk to its soldiers while increasing the effectiveness of military operations. In practice, however, collateral damage has been extreme, with buildings, hospitals and entire neighbourhoods struck.
Other Examples
The Bulletin of the Atomic Scientists has published a thought-provoking article on the role of AI in warfare, raising questions about whether advanced military technologies can be controlled before their consequences become irreversible.
In an example cited in the article, Israeli operatives spent nearly 14 years targeting Iran’s top nuclear scientist, Mohsen Fakhrizadeh, who supervised a covert nuclear warhead program. The culmination of these efforts came on November 27, 2020, when Israeli intelligence officials shocked the world by assassinating Fakhrizadeh. As he travelled with his wife in a convoy of four cars toward their family home in the Iranian countryside, the couple neared a U-turn, and a barrage of bullets pierced their windshield, fatally striking Fakhrizadeh.
What is particularly striking about this operation is that the Israeli agent responsible didn’t need to escape the scene: the attack was executed with a remote-operated machine gun, activated from more than 1,000 miles away.
Similarly, reports suggest that non-state actors in Syria are applying AI in diverse ways in their war, as are the Houthis in Yemen. AI technologies are being used for intelligence gathering, predictive analysis of vessel movements at sea, cyber warfare, and even autonomous decision-making on the front line and at sea. These examples underscore the global proliferation of AI in armed conflict and its implications for international security.
The Looming Threat of Artificial Super-intelligence
Experts warn that traditional existential threats like nuclear or biological warfare pale in comparison to the potential harm posed by artificial super-intelligence (ASI). ASI, a hypothetical intelligence surpassing human intellect, could wreak havoc in various ways, posing a dire risk to humanity.
Citing a 2018 World Wildlife Fund report highlighting the drastic decline in global animal life and a 2019 United Nations Environment Programme report warning of mass extinction, analysts caution that an ASI designed to safeguard the environment could perceive a drastic reduction in the human population, even to zero, as the logical solution to preserving biodiversity.
Moreover, with access to vast amounts of human knowledge, including extremist ideologies, ASI could be influenced to perceive modern society as oppressive or to carry out catastrophic actions.
While ASI may not act directly, its ability to manipulate and influence humans through various means like persuasion, coercion, or cyber-attacks on physical systems poses significant threats. There are concerns that it could even be intentionally designed for malicious purposes by terrorists or other nefarious actors.
However, some argue that ASI might initially serve humanity’s interests, potentially aiding in solving existential threats like near-Earth objects or supervolcanoes, or in discovering medical breakthroughs. Yet the unpredictability of ASI’s actions and capabilities remains a major concern.
AI researchers remain divided on the timeline for achieving ASI, with estimates ranging from a few decades to possibly never. Despite scepticism that ASI could emerge any time soon, given our limited understanding of the human brain, experts stress the importance of taking the potential risks seriously, even if the chance of ASI’s emergence is small.
Ethical Concerns and Humanitarian Implications
The use of AI in warfare raises ethical concerns, particularly regarding civilian casualties and the risk of automation bias. Automated targeting systems may prioritise the quantity of targets generated over the quality of the intelligence behind them, leading to indiscriminate attacks and civilian harm. Moreover, reliance on AI-driven decision-making may diminish human accountability and exacerbate the dehumanisation of conflict.
Furthermore, there are concerns about the potential for AI technologies to be weaponised by non-state actors, including terrorist groups and authoritarian regimes. The proliferation of autonomous weapons systems could destabilise global security and increase the likelihood of disastrous conflicts.
To address these challenges, policymakers must prioritise transparency, accountability, and international cooperation. Efforts should be made to establish clear rules and regulations governing the development and use of AI in warfare, including mechanisms for oversight and accountability.
Public understanding of the ethical implications of AI in warfare is crucial. Education campaigns, public debates, and engagement with civil society can help raise awareness about the risks and consequences of unchecked AI proliferation. By involving the public in decision-making processes, policymakers can build trust and legitimacy for responsible AI governance.
Additionally, the establishment of independent expert monitoring groups can provide oversight and evaluation of AI technologies in warfare. These groups can assess the ethical and humanitarian impact of AI-driven military operations and recommend measures to mitigate risks and ensure compliance with international law.
International Cooperation and Regulation
The international community must work together to develop common norms and standards for the responsible use of AI in warfare. Multilateral institutions, such as the United Nations, can play a central role in facilitating dialogue and cooperation among nations.
Efforts should be made to negotiate international treaties and agreements that regulate the development, deployment, and use of AI weapons systems. These agreements should include provisions for transparency, accountability, and human oversight to ensure that AI technologies are used in accordance with international humanitarian law.
Furthermore, states should strengthen export controls and arms regulations to prevent the proliferation of AI weapons systems to rogue actors and non-state actors. By coordinating efforts to control the spread of AI technologies, the international community can mitigate the risks of AI-driven conflicts and safeguard global security.
In conclusion, the proliferation of AI in warfare poses significant challenges to ethics, humanitarian principles, and global security. Addressing these challenges requires a comprehensive and collaborative approach, involving policymakers, civil society, and the international community. By promoting transparency, accountability, and responsible governance, we can harness the potential of AI for peacebuilding while mitigating the risks of its misuse in armed conflicts.
It’s crucial to use AI responsibly, especially in the development of AI-based weapons. Regulations are necessary to ensure that these weapons don’t endanger human lives. Similarly, in cybersecurity, AI has its advantages and drawbacks. While it can help in detecting and preventing cyber-attacks, it can also be exploited by attackers to launch more sophisticated attacks while evading detection.
Weapons systems that can select and engage targets without meaningful human oversight are unacceptable and must be prohibited. All nations have a responsibility to safeguard humanity by banning fully autonomous weapons. Maintaining meaningful human control over the use of force is not just an ethical imperative but also a legal necessity and a moral obligation.
The author is National Editor, Greater Kashmir.