In an interview with Dot Financial News in Cyprus, I discuss the key capabilities that Artificial Intelligence has today. The anchor, Andrew Mreana, covers topics from AI-dependency to intellect, and AI-supervision to redress. He asks 8 key questions.
Here are the extracts:
How dependent are we on AIs?
Unlike visible technology (such as machinery), AI is not seen. There is no “AI inside” label. Therefore we consume AI without knowing it: from music selection and seat selection on a plane to the routing of our Amazon packages, it is all-pervasive. But AI also consumes us, and the data footprint we leave online.
Which areas of a business are expanding due to AI?
Anywhere there is a linear, describable process, it is ready for automation. Examples include customer service, bank tills, KYC, AML and document verification. We are also seeing sector expansion where AI provides the ability to analyse large swathes of data and create meaningful outcomes, such as predicting customer attrition, or in scenario and risk planning.
Which areas of a business have been limited by AI in the past year?
Any stage of a business which requires creativity or emotional intelligence is untouched by AI. To some extent AI also reduces spontaneity, as we normalise actions and workflows to follow a ‘mean average’, which is ultimately where AI gravitates.
Are AIs a viable solution to replace a whole marketing team?
No. Humans and machines are good at fundamentally different things. People have intentionality; we form plans and make decisions in complicated situations. We’re less good at making sense of enormous amounts of data. Computers are exactly the opposite: they excel at efficient data processing, but they struggle to make basic judgments that would be simple for any human.
Should we give separate legal personality to AI?
No. And this argument is often used as an excuse to ‘hide’ the responsibility of human decision making behind an inanimate object. It is perhaps for this reason that Saudi Arabia’s stunt of giving the robot Sophia legal personality seems absurd to me.
AI stakeholders should not be able to evade responsibility for legal prejudices or ethical faults by ascribing personality to their creations, irrespective of the degree of AI autonomy. Accountability should be construed so that humans always remain in a position to modify, monitor or control AI systems, and are subsequently accountable for them and their actions.
How smart is AI?
If by smart we mean intelligent, and if intelligence is understood as pure computation (algorithmic calculation), there’s no reason to believe AI can’t be more intelligent than humans. If intelligence is elevated to include consciousness of intelligence (thinking about thought), then AI cannot be said to be intelligent.
What do we do when AI is wrong?
We need to introduce principles which allow for redress. Perhaps we should think of “redress by design”, just as we have the principle of “privacy by design” for systems managing personal data. This is because even if AI decisions are right 99% of the time (which is better than human decision making), we still need to allow a redress mechanism for the remaining 1%. Think of this in terms of the Chinese population (1.4bn): if AI were managing the tax computation of all citizens, a 1% error rate would mean 14,000,000 citizens receiving the wrong computation. Not a small number. In these wrong cases (whether false positives or false negatives), an appeal procedure may not exist or, if it exists, it may be ineffective or its cost may be excessive. Thus we cannot design AI systems without building redress mechanisms in.
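The scale argument above is simple arithmetic, but it is worth making explicit: a small relative error rate becomes a very large absolute number at population scale. A minimal sketch (the figures are the illustrative ones used above, not official statistics):

```python
# Illustrative only: how a 1% error rate scales with population size.
population = 1_400_000_000  # approximate population of China, as in the text
error_rate = 0.01           # AI wrong 1% of the time

affected = int(population * error_rate)
print(f"Citizens receiving a wrong computation: {affected:,}")
# → Citizens receiving a wrong computation: 14,000,000
```

Each of those 14 million cases would need a working redress mechanism, which is why redress cannot be an afterthought bolted onto an otherwise finished system.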
Are we trusting AI?
If companies do not actively work to keep clarity, discipline and consistency in balance, trust starts to break down.