Man, Machine & Morals
Machines and computers are becoming increasingly sophisticated and self-sustaining. Thanks to AI, the distinction between man and machine is blurring. As we integrate these technologies into our daily lives, questions of moral integrity and best practice arise.
A changing world requires renegotiating our current standards. Without best practices to guide how we use and interact with these complex machines, our dealings with them could turn disastrous. AI has also been used to create ‘deepfakes’ that can turn what we call reality inside out and depict people in compromising situations.
“Artificial Intelligence tools may pose a threat to writers of thrillers and science fiction, but lack the originality and humour to challenge serious novelists. Given that Hollywood is constantly creating new versions of the same film, AI could be used to draft screenplays,” remarked the renowned writer Salman Rushdie.
If and when the day comes that machines can think for themselves, independent of human input, only then can we conclude that we have a huge threat or a wonderful ally on our hands, depending on how one views LLMs, for instance, and on the extent to which AI can be kept from going off the rails. It will also reopen the debate over whether machines can have consciousness, for to be self-aware and to ruminate on the nature of the higher Self has, so far, required being human.
How should machines treat people? What would it take to develop an ethical machine? These questions were the focus of a 2006 special issue, as well as the Association for the Advancement of Artificial Intelligence’s 2005 Fall Symposium on Machine Ethics. Since then, ethical issues surrounding AI and data analytics, lately gathered under the umbrella term FATE (Fairness, Accountability, Transparency and Ethics), have captured the attention of researchers and practitioners across a broad range of fields. We reflect on how ethics research has evolved, highlighting key points of progress while also noting challenges and pitfalls that we must bear in mind.
There is a great irony in the fact that one of the leading edges of scientific and technological development, represented by robotics and AI, is at last coming to see the importance of ethics; yet it is hardly a surprise if it should not yet see that importance clearly or broadly. Hans Jonas noted nearly four decades ago that the developments in science and technology that have so greatly increased human power in the world have “by a necessary complementarity eroded the foundations from which norms could be derived…. The very nature of the age which cries out for an ethical theory makes it suspiciously look like a fool’s errand.” As the use and impact of autonomous and intelligent systems (A/IS) become pervasive, we need to establish societal and policy guidelines in order for such systems to remain human-centric, serving humanity’s values and ethical principles.
These systems must behave in ways that benefit people, beyond merely reaching functional goals and solving technical problems. This will build the elevated level of trust between people and technology that is needed for its fruitful, pervasive use in our daily lives.
It is no secret that algorithms are becoming self-taught and capable of independent decision-making. As intelligent, autonomous systems continue to evolve, human judgment and behaviour are influencing machine decision-making. The field of machine ethics seeks to ensure that the behaviour of AI-enabled machines is ethically acceptable. According to Nell Watson, AI and Robotics Faculty at Singularity University and Co-Founder of EthicsNet, “machine ethics is an emerging domain which is concerned with teaching machines about human values.” Machine ethics should not be confused with computer ethics or robo-ethics, lines of thought that deal with the ethical use of technology by humans.
Way back in 1942, the American science fiction writer Isaac Asimov formulated his Three Laws of Robotics. The First Law states: a robot may not harm a human or, through inaction, allow a human to come to harm. The Second Law: a robot must obey orders given by humans except where these conflict with the First Law. The Third Law: a robot must protect its own existence so long as doing so does not conflict with the other two Laws.
That’s fine, but what happens when robots become more human than humans themselves? Some say that turning point may not be far off. Advanced AI systems are learning to learn on their own, without help from humans. ASI, or Artificial Super Intelligence, may develop “thinking skills more advanced than any humans”. Scientists have justifiably warned that ASI could “surpass human control…leading to unforeseen consequences and even existential risks” to humanity. Science fiction? Perhaps. But science fiction has an uncomfortable way of becoming science fact. Will a future AI Asimov frame the Three Laws of Humans: a human may not harm a robot…
If machines are to learn to make decisions that benefit humans, how do we define that benefit? According to the IEEE, “Common metrics of success include profit, occupational safety, and fiscal health. While important, these metrics fail to encompass the full spectrum of well-being for individuals or society. Psychological, social, and environmental factors matter. Well-being metrics capture such factors, allowing the benefits arising from technological progress to be more comprehensively evaluated, providing opportunities to test.”
The possibility of harmful side effects from enhancement technologies will always be worrisome. But the deeper dilemma is not simply the regulatory question of what is “safe” but more fundamental questions about the proper shape of a human life. Some danger, after all, is central to noble activity. The pursuit of excellence in one area of life will inevitably create distortions in others. The question is how far such distortions can go before the quest for excellence becomes destructive of the very humanity of the one undertaking it. Having shown us why the most obvious concerns are not the deepest concerns, Beyond Therapy strives to offer, in outline, a picture of genuinely human excellence, a realistic account of what it means to live a fully human life. In doing so, the Council stands against some of the most powerful ideas - new and old - behind enhancement efforts.
It dips into very difficult waters - thinking about the relationship between mind, body, and the “dignity of human activity.”
The argument begins with a respect for — but hardly an acquiescence in — the “naturally given.” The point is not that nature has created us the best of all possible beings, or that our circumstances in nature are ideal. Unlike most other animals, we are capable by nature of fundamental alterations to our naturally given condition, and much that is valuable in human life stems from just such alteration. But we are also limited by being embodied in the way we are, and by the specific qualities of our individual bodies, and by the changes to our bodies over time. We are not “hardwired” to accomplish our ends, and yet we are not responsible for building what we are from the ground up.
The idea that we should respect some of the limitations of our given humanity, and the belief that our limits might be inextricably linked with our virtues, stands in stark opposition to those who proudly advocate a “post-human future.” These advocates, called “extropians,” “transhumanists,” or “extinctionists,” see human life as a temporary stage in an ongoing evolutionary process by which what is given will inevitably change. Since we have the power to modify the given, they argue, there is every reason to use it to direct evolution beyond the given. Our successors might then see us as we see our pre-human ancestors: as primitive cousins.
After all, isn’t the notion of a “fundamental” limit simply an artifact of the technological capacities of a given moment? Machinery has long allowed us to surpass the limits of human strength; what is the difference between lifting tons of steel with a hand-operated crane and lifting those same tons with a crane operated, as a third hand, through a brain-machine interface (BMI)? The difference is that we don’t have three hands, and being two-handed creatures may be significant for living fully and truly as human beings.
We are embodied in a particular way. Enhancements that seek to make the most of our embodiment are distinguishable from those that seek to alter it in completely novel ways. Enhancements to the body itself are distinguishable from enhanced performance through the use of tools. This argument will never convince those who see our bodies as machines, as complex assemblages of molecular parts, whose workings become more manipulable the more we understand them. In this view, the history of our interactions with tools is a story of relatively crude interfacing between two different machines. But today, neuroscience, artificial intelligence, and nanotechnology are opening the door to more efficient interfaces. The human mind is making the human machine better.
The error here is thinking of ourselves simply as “inputs that produce outputs,” an error that lies at the heart of many of the fantasies of artificial intelligence. In one of its most thought-provoking examples, Beyond Therapy distinguishes between chess playing as a human performance and chess playing as a machine output. The machine has “no uncertainty, no nervousness, no sweaty palms, no active mind.”
It may defeat human beings, but is the machine really “playing chess”? Inventor and visionary Ray Kurzweil has a computer program that can produce representational figure drawings seemingly indistinguishable from human artwork. But is the program an artist? Even if our bodies are, as Beyond Therapy sometimes concedes, in some sense like complex machines, such biochemical reductionism does not tell the whole story about “being human.” The trouble is that the rest of the story — the heart of the story, which is our lived experience of ourselves in the world — is not so easily told, at least not in an age that demands scientific precision about body and psyche.
Bhushan Lal Razdan, formerly of the Indian Revenue Service, retired as Director General of Income Tax (Investigation), Chandigarh.