In conversation with Rebecca Anastasi from Digital Island magazine about Malta’s capability to build an AI ecosystem which benefits both the private and the public sector. Can businesses harness the potential of Artificial Intelligence? Is AI-Hype (and all things Sophia) shifting the discussion away from the crucial issues?
Once associated with sci-fi and cutting-edge research labs, AI has become a ubiquitous term. But is there room for AI in the small to medium enterprise? And if so, how can these organisations embrace AI?
Technology is not an instrument in the way a production machine is. It is a way of ‘seeing’ the world, not merely of creating artefacts for it. Therefore, the underlying message for Maltese businesses is not about how to adopt AI, but about how to embrace positive change in the midst of the deep societal transformation we’re witnessing.
In my experience, businesses are less constrained by budgets than by cultural legacy and resistance to change. The most common problem is the lack of an innovation strategy, which leads to short-term ‘projects’ that don’t align with an overarching goal. An innovation strategy should set the direction of innovation and govern its operational implementation; without one, innovation efforts risk pulling in different directions.
How should IT consultancies start developing teams able to deliver business-ready AI-powered solutions suited to a variety of pockets and tech sophistication?
A common issue I encounter is the restriction of the innovation process to one functional group (such as the notorious ‘IT Department’) within a company. The myth that one group is more suited to innovate than others is a hindrance to the pace of innovation as each business unit can provide a unique perspective for driving successful adoption of AI.
Therefore, businesses must develop cross-functional teams which work seamlessly across the various processes of the company. Collaboration is key to innovation, and while many organisations understand the importance of collaborating internally, collaborating externally can be equally important. Innovation ecosystems bring together industry partners, consultants, customers, and even competitors to drive innovation. Since AI has a degree of inherent complexity, the Big Four should strive to create innovation ecosystems which harness the best available resources to tackle specific, repeatable use cases that are ready for automation.
The touted widespread adoption of Blockchain tech did not quite happen (yet), despite the technology’s promise. Do you envision AI being viewed similarly?
There’s a lot of AI hype which unfortunately ‘hides’ the real discussions we should be having around business benefits, or the tough questions we need to ask about AI bias, ethics and human agency. Hype shifts the discussion away from AI literacy towards sponsored marketing-speak, which is deplorable. This unfortunate approach has marred the blockchain debate since its inception in 2008, when the pseudonymous Satoshi Nakamoto published an overview of a peer-to-peer electronic cash system built on a blockchain, the system we now call Bitcoin.
AI must engender trust and to do so it must have wide societal acceptance. For this to occur we should take time to explain key AI concepts, like classification and confidence levels, ethics and fairness in machine learning, for non-technical audiences. And we must do this in simple terms to allow anyone to follow.
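In that spirit of plain explanation, the two concepts mentioned above can be shown in a few lines of code. The sketch below is purely illustrative (the labels, scores and threshold are invented for the example): a classifier produces raw scores, a softmax turns them into confidence levels, and a simple threshold decides when the machine’s answer is trusted and when a human should review it instead.

```python
import math

def softmax(scores):
    """Convert raw model scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores a spam classifier might produce for one email.
labels = ["spam", "not spam"]
scores = [2.0, 0.5]

probs = softmax(scores)
label, confidence = max(zip(labels, probs), key=lambda pair: pair[1])

# A confidence threshold decides when the prediction is trusted
# and when the case is deferred to a human reviewer instead.
THRESHOLD = 0.9
decision = label if confidence >= THRESHOLD else "refer to a human"
```

Here the model leans towards “spam” with roughly 82% confidence, which falls short of the 90% threshold, so the email is routed to a person. That simple hand-off is the essence of keeping humans in the loop.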
AI requires a deep partnership between the technologists and the humanists and so it presents the need for interdisciplinary education which fosters better communication and explainability. We’re not thinking enough about this in our education system and in our public policy formulation.
What are the legal implications in the adoption of AI for businesses – what should businesses watch out for in this regard?
It’s not how intelligent the machines are, it’s how much control we give them. Thus, legal implications aside, an AI system must first earn the broad confidence of a business before it is adopted and rolled out.
Once that hurdle is overcome, one of the most common concerns is liability in case an AI system causes damage. Who is responsible? The AI software? The manager or employee using it? The software house developing it? The company implementing it? A further issue is GDPR and privacy regulation. Under GDPR, data has become a regulated asset that demands careful handling, which may in turn constrain the data collection that an AI tool depends on.
As a general rule, however, an AI system should follow four principles. It must be: (i) lawful (respecting all applicable laws and regulations), (ii) ethical (respecting the ethical principles and values of the countries within which it has an impact), (iii) robust (from both a technical and a social perspective), and (iv) explainable (making its processes understandable).
In the wider discourse, AI and its offshoot, automation, invite concerns of job loss at a scale unprecedented since the industrial revolution. Do you think enough emphasis is being placed on the economic, social and ethical implications of AI?
As with any revolution, some jobs will go whilst others will be created. This is a reality we need to get comfortable with. However, in the foreseeable future, AI will find it hard to negotiate complex social relationships or be creative in a way which requires self-awareness and consciousness. This is specifically where human ability will trump computing activity. New jobs must be focused on knowledge and the soft skills that drive it. That’s why I think that today employability is less about what you already know and more about your capacity to learn and adapt.
All AI initiatives must ultimately focus on human rights and well-being, ensuring that AI enriches and enables fundamental human rights. To achieve this we need a new governance framework and standards, the translation of existing legal obligations into informed policy, room for new cultural norms and legal frameworks to develop, and, throughout, complete human control over AI systems.