The Digital Deluge
Technologies are ideas, and ideas cannot be quelled. Yet discussions about disruptive technologies have largely remained elite pursuits: talking points for business-class lounges, panel discussions, op-eds and presentation halls. Most of humanity does not worry about these things yet.
But the reality is that we stand on the brink of a new era, one defined by the convergence of artificial intelligence (AI), artificial general intelligence (AGI), quantum computing, nanotechnology and synthetic biology. This technological surge promises not just to revolutionize how we live and work, but to fundamentally alter the fabric of society itself.
Consider the astonishing progress we have already witnessed. Robots are performing intricate surgeries autonomously, a feat once relegated to the realm of science fiction. Diseases that were once death sentences, like sickle cell anaemia, are now potentially curable through technologies like CRISPR. Quantum computing’s applications in climate forecasting, drug discovery, cryptography and communication are only the areas that have come to light so far; its massive potential remains relatively under-discussed. And on the Internet, a single algorithm can help a small business grow into a huge corporation.
The Dilemma
Yet to overlook the perils of this wave is to fall into the pessimism-aversion trap. We might convince ourselves that such scenarios are confined to the realm of techno-fantasia, but the truth is that these technologies already govern our work and leisure. Billions of hours of human life are consumed, shaped, distorted and enriched by them.
While creators of technology rarely contemplate 'revenge effects', the ways in which a technology can go awry and contradict its original purpose, history is replete with examples. Gutenberg's printing press catalysed the scientific revolution, even though his initial aim was simply to profit from printing Bibles. Pesticides bred pests resistant to treatment. The overuse of antibiotics and other antimicrobials has made bacteria, viruses, fungi and parasites resistant to drugs, driving antimicrobial resistance (AMR), now a leading cause of death worldwide. The Holocaust remains one of the most shameful examples of the misuse of technology in the modern age.
With the massive repository of data available online, knowledge is accessible to all, and knowledge is power. Hamas's missiles and drones were built in garages; the drone skirmishes over Kyiv were fought by hobbyists and enthusiasts. These developments represent a colossal transfer of power from traditional states to anyone with the capacity and motivation. Anyone with the drive to learn can purchase a DNA synthesizer or clone genes with the right techniques.
Promises and Perils
AI holds immense promise for education. It can be an equalizer, catering to individual students' needs and capabilities. With conversational AI, each student can receive a personalized learning experience tailored to their unique learning style. Technologies like augmented and virtual reality could likewise revolutionize education, allowing for immersive learning experiences that were once unimaginable. Adaptive learning, automated grading and feedback, and intelligent content creation are game changers that enhance the learning experience, save time, and aid teachers and administrators in their work.
However, this potential comes with its own set of challenges. The increasing dependence on technology in education raises questions about privacy, equity and the future of work. As AI becomes more prevalent in classrooms, it risks exacerbating existing inequalities if not implemented thoughtfully. Evaluation and critical thinking could be greatly compromised; training teachers, parents and students in the ethics of AI is therefore imperative. Students must develop the ability to learn, comprehend, analyse and filter the vast ocean of information available online, and no machine can teach that.
Furthermore, the rise of AI poses significant challenges to the job market. While it has the potential to create new opportunities and streamline existing processes, it also threatens to automate many jobs, leading to widespread unemployment and economic disruption. Technology has always been about allowing us to do more, but crucially with humans still doing the doing. Nobody explicitly trained ChatGPT to write like Dan Brown, draw up customized diet charts or develop marketing strategies. If not properly contained, such self-learning technology could render the average writer, dietician or marketing agent redundant.
Stuart Russell's "gorilla problem" offers insight into the threat posed by intelligent machines: although gorillas are physically stronger and mightier than humans, it is they who are caged and put in zoos by us. By creating a technology smarter than ourselves, we could soon find ourselves in the gorillas' position.
Strides and Suppression: Striking a Balance
Yet, despite these challenges, there is reason for hope. Governments and organizations around the world are beginning to recognize the importance of preparing for the AI revolution. Initiatives like India's Education 4.0 and the G20's commitment to equitable and inclusive AI in education are steps in the right direction. The Central Board of Secondary Education has introduced Artificial Intelligence and the Internet of Things into the curriculum for classes 6 to 12. India's workforce in frontier technologies numbers fewer than 1,50,000; there is an urgent need to educate and skill students to prepare them for twenty-first-century jobs and to lead the way.
Additionally, investments in research and development are crucial to harnessing the full potential of these technologies while mitigating their risks. According to the Department of Science and Technology, India's total investment in R&D reached $17.2 billion in 2020-21. China's R&D expenditure reached about 458.5 billion U.S. dollars in 2023, an 8.1 percent year-on-year increase. India recently launched the IndiaAI Mission with a budget of ₹10,372 crore over five years; however, a conscious effort is needed to build technology that is safe, private and equitable.
Countries like the USA, China, the UK, Israel and Canada are leading the way in AI research and technology, but navigating the challenges and opportunities ahead will require global cooperation. The Global Partnership on Artificial Intelligence (GPAI) is an international, multi-stakeholder initiative to guide the responsible development and use of AI, grounded in human rights, inclusion, diversity, innovation and economic growth.
A country’s progress is often determined by its technological advancement. The Prime Minister’s extension of the slogan “Jai Jawan, Jai Kisan, Jai Vigyan” to “Jai Jawan, Jai Kisan, Jai Vigyan, Jai Anusandhan” is intended to reinforce the spirit of research and innovation for development. The announcement in the interim budget 2024-25 of a ₹1 lakh crore corpus for research and innovation has sparked enthusiasm in the scientific community. Sculpting the coming wave will require collaboration not only within the scientific community at national and international levels, but also among tech enthusiasts and hobbyists. Ultimately, this new technological era represents a monumental shift in human civilization. How we navigate this wave will determine the course of our future.
By Haya Qazi