In a conversation broadcast on Newsbook and 103FM, Professor Andrew Azzopardi and I explored the key themes in Artificial Intelligence (AI): its historical development, its potential risks and benefits, and the course we should take to ensure its alignment with human values.
The Evolution of AI
AI’s development started in the 1940s, coinciding with the invention of the computer. The first scientific paper on neural networks, the architecture underlying today’s AI, was published in 1943 in the Bulletin of Mathematical Biophysics.
Over the past 80 years, generations of AI scientists have been born and educated, have worked, and in many cases have passed away without witnessing the payoff we see today. Today, AI engineers whose grandparents or even great-grandparents helped shape the field’s founding ideas work tirelessly to make AI a reality, despite significant fear and uncertainty.
What is AI, and should we fear it?
AI refers to the application of mathematics and software to teach computers how to understand, synthesise, and generate knowledge like humans. It’s a computer program whose outputs are useful in fields such as medicine, law, and the creative arts. AI is not the deadly software or the world-destroying robots depicted in movies. Rather than being a threat to humanity, it can potentially help the world find new solutions.
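To make the phrase ‘mathematics and software’ concrete, here is a minimal toy sketch in Python (my own illustration, not something presented in the broadcast) of a single artificial neuron in the spirit of that 1943 paper; the weights and threshold are arbitrary values chosen purely for the example:

    # Toy illustration: a single artificial neuron in the 1943 McCulloch-Pitts spirit.
    # It weights its inputs, sums them, and 'fires' (outputs 1) when the total
    # reaches a threshold. Modern AI stacks millions of such units and learns the
    # weights from data, but at its core it is still arithmetic like this.
    def neuron(inputs, weights, threshold):
        total = sum(x * w for x, w in zip(inputs, weights))  # weighted sum of inputs
        return 1 if total >= threshold else 0                # fire or stay silent

    # With these hand-picked weights the unit behaves like a logical AND gate:
    # it only fires when both inputs are present.
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", neuron([a, b], weights=[1, 1], threshold=2))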
AI has the potential to improve academic achievement, job performance, income, creativity, health, and many other areas. It is already in use across many domains, and its adoption will only accelerate in the future.
AI’s Benefits for Malta
In the new era of AI, everyone — from children to scientists, artists, engineers, businesspeople, and doctors — will have AI assistants to improve their capabilities and results. AI will drive productivity growth, economic growth, scientific discoveries, technological advances, and a golden age in the creative arts. It can address challenges previously thought impossible, from disease eradication to sustainable travel.
Why the Panic?
Despite AI’s positive potential, there is a moral panic surrounding it. Historically, new technologies have triggered similar panics, raising concerns and at times hindering progress. That said, regulation is genuinely needed to address potential AI risks.
Will AI Kill us All?
The fear that AI will annihilate humanity rests on a profound misunderstanding. AI is not a living creature with motivations of its own but a creation of humanity. Even so, the discussion around AI risk has developed into a cult-like phenomenon, with some extreme beliefs and behaviour.
The greatest immediate risk from AI use is the easy generation and spread of plausible false content, which undermines the very concept of truth. Yet the primary existential risk in Malta comes from a lack of planning, an unsustainable environment and ecology, and politics without a vision for the future. On a global level, global warming, growing nuclear threats (such as those involving Russia), and rapid global ageing pose significant catastrophic risks. For future generations, the most significant risk we face, triggered by global warming, is the loss of biodiversity.
Nothing but regulation, ethics, and education can address this. The possibility has been recognised at least since Alan Turing in 1950. It seems inevitable that AI capabilities will continue to develop and that AI systems will be used to invent ways to further develop those capabilities.
AI Alignment & Will AI Ruin Society?
Another AI risk is the concern that AI will ruin our society by generating harmful outputs such as hate speech and misinformation. The solution lies in AI alignment, which involves aligning AI models with human values, but also aligning human society, companies, and governments with one another.
This is a turning point, perhaps the most important of this century, where we decide how governing bodies should regulate this phenomenon, how we can distribute the wealth that AI creates more equitably, how we should redefine ‘work’, and how we should rethink public policy on matters like education and the economy.
It’s not just a technical debate or a debate about digital tools; it’s a debate about politics and coordination. Viewing it this way gives us far more leverage. So we must build trust through alignment. This requires open dialogue and public engagement to ensure that the AI we build reflects the value system we cherish, the value system that can create positive wellbeing for our society.
The Lump of Labour Fallacy
The fear that AI might one day replace all our jobs is a persistent one. This anxiety has surfaced throughout history with every significant technological advancement but has never materialised. The misconception stems from the ‘Lump of Labour Fallacy’, which erroneously assumes there is a fixed amount of work in the economy to be divided between machines and people.
However, technology increases productivity, reduces the cost of goods and services, and creates new industries and jobs. Higher productivity leads to higher wages as workers become more valuable in technology-driven businesses. In a market economy with freely introduced technology, there’s a perpetual upward cycle of economic and job growth.
The introduction of AI could result in even greater productivity growth and increased consumer wellbeing. Far from taking over all human work, AI is in fact an opportunity for unprecedented economic growth and prosperity.
Rather than worrying about losing our jobs, we should see ourselves as privileged to witness such advancements.
AI & Education: Shaping the Future with Technology
Technology can serve as a bridge between various learning strategies. Educational technologies can be utilised to deliver syllabi in an interactive and engaging manner. They can also support adaptive learning by providing individual feedback, facilitating collective projects, and giving students access to a broad array of resources and perspectives.
Incorporating elements that foster problem-solving skills, digital literacy, and a culture of learning is essential. These skills are critical for students to adapt to a continually evolving world. Sending a soldier into battle without proper preparation or adequate equipment is seen as a dereliction of duty; yet we regularly send children into the digital world without the skills they need to understand complex information structures, sources, truths, and facts online.
The encouragement of lifelong learning will help ensure that students remain adaptable and resilient in the face of change. This involves not just providing knowledge but also cultivating curiosity, critical thinking, and the ability to learn independently.
Therefore, employability today is less about what you already know and more about your ability to learn and adapt.
Work, Marx, and the Potential for Re-humanisation
The concept of work is fundamental. The Marxist struggle between labour and capital has run through the last 70 years of political history. AI now introduces automated labour: non-human, requiring no pay, and available at a marginal cost of zero.
With AI potentially able to automate up to 45% of existing tasks, we need to reassess how we formulate economic policy, how we distribute wealth, and how we understand employability, especially in the public sector, which remains the largest sector in many countries.
The narrative that work is meaningful because we need people to do it or because we need people to find meaning in it is flawed. Many jobs are rule-based.
Maybe not in a year or two, but in 10 or 20 years: isn’t there a possibility for re-humanisation here, as AI automates the parts of our jobs that we are not best placed to do?