
AI and the birth of true autonomy

First appearing in Next 12, a publication by Seed Consultancy, this article explores a fundamental topic of importance to business leaders and policy-makers: autonomy.

AI’s coming of age

As Artificial Intelligence further untethers itself from human supervision, we are starting to see the first significant results of true autonomy. Autonomy (often, and incorrectly, confused with ‘automation’) implies that AI technology can perform well amid the uncertainty and unpredictability of our world, despite unintended technical failures and without external intervention. It is no surprise that the global AI market is projected to reach $267bn by 2027, exhibiting a compound annual growth rate (CAGR) of 33.2% from 2020 to 2027[1].
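To see what such a growth rate implies, a quick compounding calculation can work backwards from the 2027 projection. This is only a sketch: reading ‘2020 to 2027’ as seven annual compounding periods is an assumption here, not a figure stated in the report.

    # Rough sanity check of the projection above. Treating '2020 to 2027'
    # as seven annual compounding periods is an assumption for illustration.
    cagr = 0.332
    projected_2027 = 267.0  # projected market size, $bn

    implied_2020 = projected_2027 / (1 + cagr) ** 7
    print(f"Implied 2020 market size: ${implied_2020:.1f}bn")  # ~$35.9bn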

It is natural that this trajectory (and its continuing acceleration) will significantly impact the functioning of our society. And because technology structures our experiences and shapes how we live, it has enormous ethical significance[2].

AI is ‘intelligent’ in a purely algorithmic, computational sense. Yet despite this increased level of intelligence and autonomy, the claim that AI can attain capabilities such as human consciousness (although definitions of ‘consciousness’ vary) is presently unfounded. Consciousness of intelligence (thinking about thinking) has subjective, first-person causal powers, and consciousness is not inherently computational in the way computer programs are. The human mind has a number of intrinsic characteristics, such as subjectivity, intentionality, teleology and rationality, which a computer can only simulate. Moreover, machines do not have access to the metaphysical nature of reality. Mirroring reason is not the same as reasoning. And reasoning is not the same thing as consciousness.

This essay thus frames AI as a human creation developed to be an instrument for positive change in society.

AI is poised to replace tasks (and jobs)

Although global employment will invariably be affected, most AI deployments this year will still rely on human judgement where complex cognitive tasks are at play. However, in low-skill, linear and predictable jobs, automation is likely to displace around 10-12% of roles, according to the OECD[3].

When enterprises adopt AI to automate production, employment is affected through three main thrusts[4]. First, new technologies directly substitute jobs and tasks currently performed by employees (the ‘displacement effect’); second, there is a complementary increase in the jobs and tasks needed to use, run and supervise AI technology (the ‘skill-complementarity effect’); and third, there is a demand effect arising both from lower prices and from a general increase in disposable income due to higher productivity (the ‘productivity effect’). Because these thrusts are not simultaneous but sequential and progressive, countries that adapt their education systems and economies to AI realities are unlikely to register unmanageable unemployment surges: the automation of tasks will occur before the automation of jobs, and even the latter will unfold over a period of time. Moreover, multiple studies suggest that AI will create roughly the same number of jobs as it displaces[5], although this will place increased stress on the educational system to provide the right formative environment for the new roles to take shape.
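To make the interplay of the three thrusts concrete, consider a toy calculation. Every figure below is hypothetical and chosen only to show the mechanics; only the displacement rate echoes the OECD’s 10-12% range cited above.

    # Toy illustration of the three employment effects described above.
    # All rates are hypothetical; only the displacement rate echoes the
    # OECD's 10-12% range cited earlier.
    workforce = 1_000_000
    displacement_rate = 0.11     # displacement effect: jobs substituted by AI
    complementarity_rate = 0.06  # skill-complementarity effect: new AI-adjacent roles
    productivity_rate = 0.05     # productivity effect: jobs from increased demand

    displaced = workforce * displacement_rate
    created = workforce * (complementarity_rate + productivity_rate)
    print(f"Displaced: {displaced:,.0f}  Created: {created:,.0f}  Net: {created - displaced:+,.0f}")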

This in turn opens a major public policy issue: it is imperative that governments consider policies which provide the necessary social security measures for affected workers while investing heavily in the rapid development of the skills needed to take advantage of the new jobs created. Governments should support the re-training of affected individuals so that they can positively participate in, and contribute to, the workforce.

Workers in Jakarta

A pandemic of misinformation

While the world is still grappling with the worst pandemic in many years, we should also focus on a different, AI-driven crisis: the erosion of trust through automated misinformation.

The role of choosing and filtering the stories we read has moved from the hands of an editor to the algorithmic muscle of the channel we most commonly use: be it social media, search, app or voice assistant. In the case of the latter, it is not only the content but also its context and detail that is determined for us. False news is a significant problem in a democratic society polarised through the aggregation of people and ideas with analogous interests. As Cass Sunstein, who headed the Office of Information and Regulatory Affairs under President Obama, put it: ‘it is precisely the people most likely to filter out opposing views who most need to hear them’.

Present online revenue models don’t help either. The Internet has commoditised most services we were previously ready to pay for. Paying for a newspaper is largely unheard of; instead, content is prioritised by how many times it is clicked, and revenue flows where eyeballs go. Truth is often the casualty. Francis Bacon’s maxim ‘ipsa scientia potestas est’ (knowledge itself is power) seems less apt in an age where knowledge and ideas are second to the ability to game an algorithm to influence the masses.
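The mechanics are simple enough to sketch: when a feed optimises purely for clicks, accuracy never enters the ranking function. A toy illustration, with all stories and figures invented:

    # Toy sketch: a click-driven feed. Note that the 'accurate' flag
    # plays no part in the ranking; revenue follows eyeballs.
    stories = [
        {"headline": "Careful investigative report", "clicks": 1_200, "accurate": True},
        {"headline": "Outrage-bait falsehood", "clicks": 48_000, "accurate": False},
        {"headline": "Dry policy analysis", "clicks": 300, "accurate": True},
    ]

    feed = sorted(stories, key=lambda s: s["clicks"], reverse=True)
    for story in feed:
        print(story["clicks"], story["headline"])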

Democracy is served through its fourth pillar, a free press. But is the press enslaved by a manipulative mechanism which rewards falsehoods and deceptions that spread fast through link bait? Although democracy has been an enduring feature of European society, it can be eroded unless it is upheld and protected through a trusted system and a public policy framework which recognises both the opportunities and the challenges that AI brings to the table.

The informed citizen

Misinformation therefore poses a two-fold threat to democracy: it leaves citizens ill-informed, and it undermines trust in, and engagement with, accurate content. To provide relevant experiences to citizens, we need to invest in fact-checking services dedicated to examining the facts and claims behind published content.

In turn, fact-checking services (often themselves based on AI tools) must be transparent. And next to transparency sits the question of responsibility in AI. Transparent AI makes our underlying values explicit and encourages companies to take responsibility for AI-based decisions. Consequently, responsible AI is AI built with ethical considerations in place and aligned with the core principles of the technology provider, the requirements of the law, societal norms and user expectations.

Indeed, the informed citizen does not require the trivialisation of AI, such as the publication of core algorithms which will not be understood (and could create intellectual property debacles), but rather a clear and easily understood explanation of how a decision is made by an AI model. The level of detail and disclosure should naturally depend on the impact of the technology and the audience to be reached.
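One common way to provide such an explanation, without publishing the algorithm itself, is to report which inputs most influenced the model’s decisions. A minimal sketch using scikit-learn’s permutation importance follows; the dataset and model are placeholders for illustration, not a reference to any particular deployed system.

    # Minimal sketch: explain a model by ranking the inputs its decisions
    # depend on, rather than disclosing the underlying algorithm.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    model = RandomForestClassifier(random_state=0).fit(X, y)

    # Shuffle each feature in turn and measure how much accuracy drops:
    # the larger the drop, the more the decision relied on that feature.
    result = permutation_importance(model, X, y, n_repeats=5, random_state=0)
    for i in result.importances_mean.argsort()[::-1][:3]:
        print(f"{X.columns[i]}: {result.importances_mean[i]:.3f}")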

Dr. Gege Gatt - Next 12 Publication - Seed Consultancy

Human-Centric Accountability

AI technology, like science, is a human endeavour guided by values. We develop AI to fulfil specific objectives, so the view that technology is ‘neutral’ is dangerous: it offers a convenient escape route from responsibility. We must therefore ensure that AI, and technology platforms, carry specific elements of accountability.

Companies that develop, deploy or use AI technology need a framework of accountability which they adhere to, and which provides human accountability (and agency) for the outcomes of the technology delivered or adopted. This human-centric approach is not simply a way to direct the ethical development of the technology; it implies that human beings (whether acting as natural persons or through a body corporate) will develop, deploy and use AI systems, and that they ought to do so in a responsible, ethical and lawful manner. Accountability does not simply evoke legal frameworks of civil and criminal liability, contract or tort; more importantly, it highlights principles of good governance.

It is perhaps for this reason that Saudi Arabia’s stunt of granting the robot Sophia legal personality seems absurd[6]. AI stakeholders should not be able to evade responsibility for legal or ethical faults by ascribing pseudo-legal personality to their creations, irrespective of the degree of AI autonomy. Accountability should be construed so as to always keep humans in a position to modify, monitor or control AI systems, and consequently to be accountable for them and their actions.

Green AI Innovation

While growth and acceleration remain the focus of most company boards, environmental sustainability is becoming an ever more critical political issue. AI is a significant contributor to the carbon footprint or, if managed well, can help reduce it.

Developing AI-mediated outcomes is highly compute-intensive. AI software processes a great deal of data, increasing the need for servers and the dependence on energy to cool the data-centre facilities within which they are housed. A study by the University of Massachusetts concluded that training an AI model for Natural Language Processing (NLP) can produce carbon dioxide equivalent to five times the lifetime emissions of a car (or roughly 300 round-trip flights between San Francisco and New York)[7].
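The arithmetic behind such estimates is straightforward: hardware power draw multiplied by training time, adjusted for data-centre overhead, then converted to CO2 through the grid’s carbon intensity. A back-of-envelope sketch in the spirit of [7] follows; the input figures are illustrative, not the study’s exact numbers.

    # Back-of-envelope training-emissions estimate. All inputs are
    # illustrative; only the method mirrors the cited study.
    gpu_power_kw = 0.25        # average draw per GPU, kW
    num_gpus = 8
    training_hours = 24 * 14   # two weeks of continuous training
    pue = 1.58                 # power usage effectiveness (data-centre overhead)
    kg_co2_per_kwh = 0.4       # grid carbon intensity, kg CO2e per kWh

    energy_kwh = gpu_power_kw * num_gpus * training_hours * pue
    emissions_kg = energy_kwh * kg_co2_per_kwh
    print(f"Energy: {energy_kwh:,.0f} kWh  Emissions: {emissions_kg:,.0f} kg CO2e")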

As environmental sustainability becomes more important, we need far more information about the impact this technology is having on our world. We need to track and report what is happening, but also create methods of reducing harmful carbon emissions. We have witnessed ingenious attempts to address the latter. For example, Google has developed AI that teaches itself to minimise the energy used to cool its data centres, reducing its energy requirements by 35%. Microsoft has boldly committed to becoming carbon negative by 2030. In Africa and in Asia, AI is transforming agriculture so that less energy is required to monitor and manage crops, while also reducing the use of water and fertiliser through smart methods (such as AI-enabled computer vision) to detect plant activity in the field.

AI can and will improve the circumstances we find ourselves in, but our urgency in positively demanding green AI will ensure that the technology’s growth path remains sustainable.

AI: The next privacy trend?

As AI evolves, it will magnify the ability to use personal data, and in turn this could intrude on privacy interests. Most AI innovations are ushered in with haste to secure a competitive lead, but they carry the unfortunate (and often intended) consequence of exerting social and political control. This erodes basic norms, such as citizen consent, and challenges the privacy models we have espoused so far.

AI-based facial recognition systems have provided a glimpse of the troubling privacy issues emerging. The multiplicity of camera devices has produced a huge amount of data to ingest, and totalitarian regimes have used it as a method of societal control and oppression.

China has deployed drones which use AI technology to determine which citizens are not wearing a facemask and can subsequently file an automated report[8]. The success of these measures correlates directly with the social acceptance of surveillance in the country. It also has a bearing on algorithmic discrimination, whereby algorithms produce unlawful or undesired discrimination in their decisions as a result of data bias or inaccuracy. This opens a discussion not only about absolute morality but also about the delicate balance between guaranteeing individual rights and protecting collective interests.

Therefore, companies that understand citizen concerns around privacy, and find ways to address those concerns through their products, are likely to win user trust and affection at the same speed at which Facebook is losing it.

Regarding privacy and information rights, the poor and the marginalised are particularly threatened because of their current lack of power and agency. Governments therefore need to find ways of empowering the poor through information, education and investment in skills enhancement.

Re-establishing trust

Trust is a construct encompassing a wide variety of relationships we establish in everyday life. We trust our friends, the scientific community and perhaps even politicians. ‘Trusting’ an inanimate object like AI is somewhat of a paradox, as it anthropomorphises the technology, ascribing to it human and moral sentiment. Yet ‘trusting’ the AI tools which are part of our world is a precursor to their use and acceptance.

In the case of AI, the features, functions and outcomes of a system make it trustworthy for achieving a goal, and these are objective reasons to trust that system. But beyond that, citizens must have the certainty that compliance with fundamental rights is ensured, that AI systems will be used only for specifically defined tasks, and that individuals remain in control of their data. This in turn allows users to form subjective reasons to trust an AI tool, as it does not merely meet the benchmark but provides inherent benefits such as speed or improved quality.

As we build more societal trust in AI, AI must serve good governance, including the identification and prevention of illegal activity. However, when AI is in the hands of companies alone, the revenues from AI might not be redistributed equitably, which in turn opens a new debate about social divide and the marginalisation of the poor. A new world order is already emerging: countries that are able to leverage AI technology to accelerate societal and economic growth, and those that are unable to do so. The latter revert to outdated, generalised political systems with stunted economic outcomes, whilst the rest race ahead. The countries left behind will be those which experience higher levels of inequality, laggard healthcare systems and economic decline.

However, our discussion on AI trust, reliability and ethics should be grounded in the here and now rather than in imaginary future technology. It should be grounded in the technical and social developments which we can pragmatically control and influence. In turn, this also creates a perspective for a moral economy which wields the power of AI for good, fairness and justice.


[1] Fortune Business Insights, https://www.fortunebusinessinsights.com/industry-reports/artificial-intelligence-market-100114

[2] Ronald Sandler, Nanotechnology: The Social and Ethical Issues, 2009

[3] OECD Social, Employment and Migration Working Papers, ISSN 1815-199X, https://www.oecd-ilibrary.org/fr/employment/automation-skills-use-and-training_2e2f4eea-en?mlang=en

[4] Acemoglu and Restrepo, 2017; Chiacchio, Petropoulos and Pichler, 2018; Vivarelli, 2014.

[5] BBC News, https://www.bbc.com/news/business-44849492

[6] https://www.dw.com/en/saudi-arabia-grants-citizenship-to-robot-sophia/a-41150856

[7] Emma Strubell, Ananya Ganesh, Andrew McCallum, ‘Energy and Policy Considerations for Deep Learning in NLP’, arXiv:1906.02243 [cs.CL], 2019

[8] Bruno Macaes, City Journal, 1st April 2020. Accessed 4th April 2020. https://www.city-journal.org/covid-19-and-technology

Photo by Fauzan on Unsplash
Photo by Igor Son on Unsplash 
