Artificial Intelligence / IT Law / Politics / Technology

Navigating AI Safety, Ethics, and Regulatory Challenges

Dr Gege Gatt, Malta IT Law Association (MITLA)

The Malta IT Law Association (MITLA) held its annual conference on the 23rd of June 2023. This year's theme was ‘Digital Safeguards’. During a panel dedicated to a review of upcoming AI legislation, I led a discussion covering questions of safety, ethics, and regulation.

AI Safety & Ethics

Understanding AI’s function and impacts is integral to safety and ethics. As AI systems grow increasingly complex, the challenge lies in making AI explainable and interpretable. We can approach this by fostering transparency in AI algorithms, investing in research that demystifies AI processes, and creating a “translator” layer that turns AI’s complex language into a human-understandable format. Moreover, the upcoming EU AI Act mandates that AI systems be explainable and interpretable. Panelists also considered the ability of AI systems to self-report their decision-making process, which would increase AI’s explainability.

Dr. Gege Gatt leading an AI discussion for the Malta IT Law Association (MITLA)


Legal Personality for AI

Sophia’s Saudi Arabian citizenship opened a Pandora’s box. Assigning legal personhood to AI creates a problematic scenario in which human agency is lost and AI becomes an escape mechanism for tort and liability. The panel discussed whether an AI entity can (or should) possess rights and duties, or be held morally responsible. While this remains an open academic debate, panelists agreed that it is not an ideal scenario.

For autonomous vehicles, current practices often hold the deployer (e.g., the car manufacturer or autonomous software developer) liable, because they have the most control over the system’s behaviour. But it might be better to consider a shared liability model involving developers, deployers, and users, to distribute responsibility fairly. Here too, national approaches to liability are still emerging and diverging.

Overseeing High-Risk AI Systems

The EU’s AI Act demands human oversight over high-risk AI systems. While challenging, humans can be trained to understand and monitor these systems with appropriate support and tools. More importantly, humans in the loop can provide the necessary checks and balances to prevent misuse or misunderstanding of AI technology.

The Act also mandates self-assessments to reduce risk, yet given the failures of self-assessments in other sectors, it is reasonable to argue for external audits for all high-risk AI systems. Revisions to the Act could certainly mandate this to ensure impartial and rigorous examination of AI systems’ safety and ethics.

Regulatory Concerns

We can protect personal data while enabling AI to evolve through strict regulations on data usage, anonymisation techniques, and robust consent mechanisms. Ensuring AI doesn’t infringe upon data ownership rights calls for a comprehensive legal and technical solution, including clear guidelines on acceptable data usage and advanced privacy-preserving techniques like differential privacy.
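To make the idea of differential privacy concrete, here is a minimal sketch (in Python, with illustrative names; not any specific library's API) of a differentially private count query: Laplace noise, scaled to the query's sensitivity and a privacy budget epsilon, is added to the true count so that no single individual's data meaningfully changes the published result.

```python
import math
import random

def dp_count(values, predicate, epsilon=1.0):
    """Return a differentially private count of items matching `predicate`.

    A count query has sensitivity 1 (one person changes it by at most 1),
    so Laplace noise with scale 1/epsilon gives epsilon-differential privacy.
    Smaller epsilon = stronger privacy, noisier answers.
    """
    true_count = sum(1 for v in values if predicate(v))
    # Sample Laplace(0, 1/epsilon) noise via inverse transform sampling.
    u = random.random() - 0.5
    scale = 1.0 / epsilon
    noise = -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return true_count + noise
```

The noisy answer is random, but averaged over many hypothetical releases it is centred on the true count, which is what lets aggregate statistics remain useful while individual records stay protected.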

Privacy-friendly AI development can be supervised and enabled through stringent regulation, privacy-by-design practices, and technology like federated learning, which allows models to learn from decentralized data sources without compromising privacy.
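The core aggregation step of federated learning can be sketched in a few lines. In this simplified illustration (function and variable names are my own, not a real framework's API), each client trains locally and sends only model parameters to the server, which combines them weighted by each client's data volume, so raw data never leaves the client.

```python
def federated_average(client_updates):
    """Combine locally trained model weights without seeing raw data.

    client_updates: list of (weights, n_samples) pairs, where `weights`
    is a list of floats from one client's local training run.
    Returns the sample-weighted average of the weights (the FedAvg idea).
    """
    total_samples = sum(n for _, n in client_updates)
    dim = len(client_updates[0][0])
    averaged = [0.0] * dim
    for weights, n_samples in client_updates:
        for i, w in enumerate(weights):
            # Each client's contribution is proportional to its data size.
            averaged[i] += w * n_samples / total_samples
    return averaged

# Two hypothetical clients: the second holds three times as much data,
# so its weights dominate the aggregate.
global_model = federated_average([([1.0, 2.0], 1), ([3.0, 4.0], 3)])
```

Real deployments add secure aggregation and often differential privacy on top of this averaging step, but the privacy property regulators care about (data stays local) is visible even in this toy version.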

To protect citizens against emerging AI-driven threats, we need a multipronged approach: deploying AI tools to detect and combat these problems, raising public awareness, and creating legislation to penalise offenders. Transparency and auditability standards for AI systems would also allow us to understand and control these systems better.

Regulating AI: A discussion led by Dr Gege Gatt for MITLA

Algorithmic Decision-Making

The rights of access, rectification, and opposition in algorithmic decision-making should be protected, akin to the rights under the GDPR, which itself remains an effective tool within the AI regulatory regime. This means making AI systems transparent, providing individuals with information about decisions made about them, and giving them the ability to challenge and rectify those decisions.

As AI continues to evolve, questions around its safety, ethics, and regulation will persist. The answers will evolve too, and MITLA’s approach to these questions will be equally dynamic, ensuring that there is an effective ongoing dialogue, rigorous research, and agile policy-making.



MITLA Annual Conference: Digital Safeguards


