
Notes on AI from Davos

DAVOS World Economic Forum 2024 - Artificial Intelligence

This is a week we’ve all been looking forward to. At this year’s Davos World Economic Forum, the conversations around AI weren’t just about technological advancements; they were about the future of our society. The forum’s focus on AI – from its ethical deployment to its impact on jobs and global economies – reflects a broader realisation that AI is no longer a distant future; it’s a present reality shaping our lives. In turn, this comes with its own ramifications for the way we think about consciousness, jobs and statehood.

AI and the World Economic Forum

My five key take-aways from the Forum relate to the fundamental aspects of AI that are shaping the way we think, work and develop prosperity.

1. The Ethical Imperative in AI: I was struck by the urgent call for ethical frameworks in AI. As we dive deeper into this AI-infused era, the necessity for robust, ethical guidelines becomes increasingly apparent. The initiatives by the U.S. and the G7 are steps in the right direction, but they also raise a question: are we moving fast enough to keep pace with AI’s rapid development? The challenge is not just to create these frameworks but to ensure they evolve as quickly as the technology they seek to regulate. My sense is that the EU’s own AI Act is already struggling to manage some emerging aspects of Generative AI. Below, I’m linking to some frameworks and ethical guidelines I have found useful in my own reading on the subject:

2. The Generative AI Revolution: Andrew Ng’s emphasis on generative AI marked a turning point in the discussion here. It’s a reminder that AI’s potential extends far beyond the tech industry; it’s a tool that can revolutionise (and, in many areas, already has revolutionised) healthcare, finance, and more. This pivot towards more meaningful applications of AI is not just a technological advancement; it’s a societal one. It challenges us to think about how we can harness AI to solve some of our most pressing problems. That, to me, is a key deliverable of education: the ability to unlock human potential in solving interesting problems with new technologies. It is, perhaps, what defines my own career.

3. AI and the Global Job Market: The World Economic Forum highlighted the potential impact of AI on jobs, which is not a new concern. It estimates that nearly 40% of jobs worldwide could be affected, with developed economies at greater risk than emerging markets and low-income countries. The forum called for proactive policy measures to address these challenges and prevent AI from exacerbating social tensions.

To me, the potential impact of AI on jobs is a double-edged sword. AI promises efficiency and innovation, yet it simultaneously puts nations in a tight spot to ensure job security for their most vulnerable citizens. This dichotomy calls for a balanced approach: we need policies that don’t just mitigate the risks but also leverage AI’s potential to create new opportunities. It’s about finding harmony between advancing technology and protecting our workforce.

Work is a fundamental concept in national politics, and the Marxist struggle between labour and capital has run through the last 70 years of political history. With AI automating work at near-zero marginal cost, I believe we must rethink the entire economic model for the generation and distribution of wealth.

4. India’s Emergence in AI: I found India’s growing prominence in the AI landscape particularly noteworthy. India was recognised as a strong contender in the global AI race, thanks to its large youth population and thriving startup ecosystem. This represents a shift in global AI dynamics, emphasising the importance of diversity in AI development and application. It isn’t just about technology; it’s about representation and ensuring that the benefits of AI are accessible across different geographies and economies.

So far, investment in AI (a key indicator of a country’s commitment to technological development) has been led principally by the United States and China, with billions invested in research and startups. The EU, although more fragmented, is increasing its investment, especially in ethical AI. The Global AI Index of 2023 reports this ranking:

5. AI in Warfare, a Red Line: The strong stance against the militarisation of AI is a sobering reminder of the technology’s potential for harm when used by bad actors. It underscores a crucial aspect of AI ethics: some lines should not be crossed, and the use of AI in warfare is emphatically one of them. This is a call for international cooperation and norms to ensure AI is used for the betterment of humanity, not its destruction. The key elements of the discussion that struck me were:

  • Risks of Escalatory Conflict: The use of AI in warfare could lead to faster and more precise attacks, potentially increasing the risk of escalation and unintended consequences. This could exacerbate the destructiveness of warfare and raise ethical concerns about autonomous weapons systems.

  • International Norms and Regulations: There is a growing need for international norms and regulations to govern the development and use of AI in warfare, ensuring that it is used responsibly and ethically. The principles of a ‘just war’ (jus ante bellum) need to be updated to consider new thinking about AI decision-making. I found this 2023 paper by Mitt Regan and Jovana Davidovic to be a good primer on thinking about this problem.

  • Human Control and Oversight: Maintaining human control and oversight over AI systems in warfare is essential to prevent unintended consequences and to ensure that AI is used for legitimate defence purposes. Human-in-the-loop oversight is also a key principle of the EU AI Act: under Article 14(1), high-risk systems must be designed and developed in such a way that they can be ‘effectively overseen by natural persons during the period in which the AI system is in use’. In effect, Article 14(1) compels designers of high-risk AI to build a human control function into their products as a safeguard against malfunctions of AI (a minimal illustrative sketch of such a control function follows this list).

  • Non-proliferation of Autonomous Weapons: International efforts should focus on preventing the proliferation of autonomous weapons systems that could operate without human oversight, posing significant ethical and safety risks.
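To make that design obligation a little more concrete, here is a minimal sketch in Python of what a human-in-the-loop control function could look like. All names here are hypothetical and not drawn from the AI Act or any real system: the AI component proposes an action, a named human reviewer must approve it before anything is executed, and every decision is logged for later oversight.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Callable, List


@dataclass
class ProposedAction:
    """An action suggested by an AI component, awaiting human review."""
    description: str
    model_confidence: float  # confidence reported by the model, 0.0 - 1.0


@dataclass
class Decision:
    """Audit record of a single human review."""
    action: ProposedAction
    approved: bool
    reviewer: str
    timestamp: str


class HumanInTheLoopGate:
    """Hypothetical oversight gate: no AI-proposed action is executed
    without explicit approval from a named human reviewer."""

    def __init__(self, reviewer: str, ask_human: Callable[[ProposedAction], bool]):
        self.reviewer = reviewer
        self.ask_human = ask_human            # e.g. a UI prompt or review queue
        self.audit_log: List[Decision] = []   # record kept for oversight and audit

    def execute(self, action: ProposedAction, run: Callable[[], None]) -> bool:
        approved = self.ask_human(action)     # the human makes the final call
        self.audit_log.append(Decision(
            action=action,
            approved=approved,
            reviewer=self.reviewer,
            timestamp=datetime.now(timezone.utc).isoformat(),
        ))
        if approved:
            run()                             # only runs after human sign-off
        return approved


# Example usage: a console prompt stands in for a real review interface.
if __name__ == "__main__":
    def console_review(action: ProposedAction) -> bool:
        answer = input(f"Approve '{action.description}' "
                       f"(model confidence {action.model_confidence:.0%})? [y/N] ")
        return answer.strip().lower() == "y"

    gate = HumanInTheLoopGate(reviewer="duty_officer", ask_human=console_review)
    gate.execute(ProposedAction("flag transaction for manual audit", 0.87),
                 run=lambda: print("Action executed."))
```

The detail that matters in the sketch is that the approval step and the audit trail are part of the system’s architecture rather than an afterthought, which is the spirit of Article 14.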

Davos 2024

Yet, the general emphasis on AI’s societal implications is perhaps the most crucial takeaway. AI is not just a technical subject; it’s a social one. It challenges us to think about bias, transparency, and accountability. How we respond to these challenges will shape not just the future of technology but the future of our societies.

I felt that the discussions at Davos 2024 revealed a complex picture of AI’s role in our world. It’s a picture that demands careful consideration: we must keep reviewing AI’s impact on our future, and that is not a simple task.

The conversation serves as a crucial reminder of AI’s dual nature: as a force for immense good and potential disruption. The decisions we make now — about how we develop, deploy, and regulate it — will shape our increasingly global society. What is clear from Davos is that AI is a human issue, deeply intertwined with our choices, ethics, politics and the collective future we aspire to build.

Davos 2024 Breakout space

