
Should AI have distinctive rights?


Granting distinct legal personality to AI systems could undermine the fundamental principle of human-centric AI.

I co-authored a reaction paper (full version here) on AI ethics and how Malta should address this developing framework. The Government of Malta published a consultation document (here) which put forward some initial ideas on the subject. Writing with colleagues at MITLA, we reached the following key conclusions:


  1. On Trustworthiness: Rather than relying on the term ‘trustworthiness’, greater emphasis should be placed on creating and strengthening legal mechanisms which address and redress the intrinsic biases of AI, being the product of human input, so as to ensure a human-centric approach. The term carries divergent social and ethical connotations, and the trustworthiness of AI cannot simply be derived from the systems which implement it. If trustworthiness is not an appropriate metric against which to measure the suitability of AI for contemporary and future society, what should form the basis of a truly ethical AI framework? The principles which MITLA proposes should replace, or at least supplement, the notion of trustworthiness are outlined in chapter 2 of the full document.
  2. On Digital Rights: Our principal assertion is that, before any major development or amendment of the existing legal framework, digital rights should first be introduced into the Maltese Constitution. This would put Malta at the forefront of the movement towards building an ethical AI framework and would entrench rights which are becoming ever more important in contemporary society and will continue to gain importance for future generations. These include the right to informational self-determination, the right to unhindered development of personality, and the right of access to truthful sources of information, balanced against freedom of expression, the potential role of Government in deterring the spread of false or misleading information, and the provision of guidelines for the use of AI systems by which potentially harmful content may be flagged and dealt with.
  3. On Legislative Needs: We call for a systematic analysis of the legislative changes needed to create new liability mechanisms, or to adapt existing legal liability models, to cater for harm caused to another party by stakeholders operating an AI system.
  4. On Accountability: We recommend that, in order to counter potential malfunctioning (or worse) of AI systems, such systems should have built-in ‘kill-switches’ making it easy for humans to manually override them, take over, and redress any unwanted consequences. (A minimal sketch of such an override mechanism is included after this list.)
  5. Legal Personality: We make a strong recommendation to Government not to grant distinct legal personality to AI systems, as doing so would undermine the fundamental principles of human-centric AI, respect for fundamental human rights, and accountability in AI.
  6. European-driven legislative thinking: Any local legislative effort towards the regulation of AI should take into account previous and current European and global coordination efforts; otherwise, Malta risks falling out of step with the legal mechanisms tackling the use of AI. At the same time, Government has an equal obligation to ensure that AI systems adhere to and reflect the ethical and moral principles of Maltese society.
  7. AI & Competition Law: We propose that local competition law enforcement authorities should proactively monitor progress in AI technology and the use and commercialisation of big data. Tied to this, Government should undertake regular reviews of competition law to ensure that consumers stand to gain from the development and deployment of AI systems. In tandem, Government should create the infrastructures necessary to promote open access to datasets for academic research and/or non-commercial use via open source or other similar licensing arrangements. Data held by public sector bodies, private sector bodies and industry organisations should be accessible and open, as appropriate and where possible. 
  8. AI Ethics Officer: We propose that the development, deployment and use of AI systems should require the appointment of an AI Ethics Officer, whose role would mimic the responsibilities and functions of the Data Protection Officer under the General Data Protection Regulation. 
  9. AI & IP Law: While recognising that current local, European and international IP frameworks do not adequately cater for AI-enabled and/or AI-created works, we recommend that Government considers and actively monitors the development of legislation in this regard which provides adequate protection while, as far as possible, not hindering the flow of data within, and the development of, AI systems.
  10. Data Agency: On the subject of data agency, it is time for the industry and also legislators to recognise that digital consent is not serving the purpose of informing citizens about the use of their data, including the extent thereof and their rights attached thereto. A new and effective way of informing and engaging data subjects must be found. 
  11. AI for the benefit of society: AI must not be used for its own sake, but for the sake of society – industry must provide evidence of the advantages to be gained by adopting an AI system and of its fitness for the purpose at hand.
  12. Explainability: Another obligation which we propose industry should bear is that of always being in a position to trace back the origins of, and the logic behind, a particular decision or outcome, so as to counter the dangers of machine learning and bias. (A sketch of such a decision audit trail is also included after this list.)
  13. Education: From an academic perspective, modules on ethics should form an intrinsic part of, and be integrated within, any science, technology, engineering and mathematics programme of studies, and not merely be considered an afterthought.
  14. Cross Sectoral Activity: Government should allocate resources and create incentives to bring together different stakeholders in the field of AI, including engineers and designers, scholars and social scientists. 
  15. Privacy: Finally, Government should invest its finances and resources in securing and protecting its data by actively participating in efforts on a European level to address Europe’s current dependence on data centres located outside the European Economic Area. 
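
To make the ‘kill-switch’ recommendation in point 4 more concrete, here is a minimal sketch of what a human-override gate around an automated decision loop could look like. It is written in Python purely for illustration; all names (KillSwitch, run_ai_system, and the stub functions) are hypothetical and do not reflect any specific system discussed in the paper.

```python
# Minimal sketch of a human-override ("kill-switch") gate around an AI system.
# All names here are hypothetical illustrations, not part of any real framework.

import threading

class KillSwitch:
    """A manual override that a human operator can trigger at any time."""
    def __init__(self):
        self._engaged = threading.Event()

    def engage(self, reason: str) -> None:
        print(f"Kill-switch engaged: {reason}")
        self._engaged.set()

    @property
    def engaged(self) -> bool:
        return self._engaged.is_set()

def run_ai_system(decide, apply_action, kill_switch, inputs):
    """Run the automated decision loop, checking the kill-switch before every action."""
    for item in inputs:
        if kill_switch.engaged:
            print("Human override active: halting automated decisions.")
            break
        action = decide(item)
        apply_action(action)

# Example usage with stub functions standing in for a real AI system.
if __name__ == "__main__":
    switch = KillSwitch()
    decide = lambda x: f"approve({x})"
    apply_action = lambda a: print(f"Applying {a}")
    # An operator could call switch.engage("unexpected behaviour") from another thread.
    run_ai_system(decide, apply_action, switch, range(3))
```

The point of the design is simply that the override sits outside the decision logic, so a human can halt the system without having to understand or modify it.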
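Point 12’s traceability obligation can be read as a requirement to keep an audit trail of every automated decision. The following is a minimal sketch under the assumption that each decision can be serialised as a record of its inputs, output and rationale; the field names, storage format and class names are all assumptions made for illustration.

```python
# Minimal sketch of a decision audit trail supporting point 12 (explainability).
# Hypothetical illustration only: field names and storage format are assumptions.

import json
import time
from dataclasses import dataclass, asdict, field

@dataclass
class DecisionRecord:
    """One traceable automated decision: inputs, output, and the stated rationale."""
    inputs: dict
    output: str
    rationale: str
    model_version: str
    timestamp: float = field(default_factory=time.time)

class AuditTrail:
    """Append-only log that lets a decision be traced back to its origins."""
    def __init__(self, path: str):
        self.path = path

    def record(self, rec: DecisionRecord) -> None:
        with open(self.path, "a", encoding="utf-8") as f:
            f.write(json.dumps(asdict(rec)) + "\n")

    def trace(self) -> list:
        with open(self.path, encoding="utf-8") as f:
            return [DecisionRecord(**json.loads(line)) for line in f]

# Example usage with a stub decision.
if __name__ == "__main__":
    trail = AuditTrail("decisions.log")
    trail.record(DecisionRecord(
        inputs={"applicant_income": 30000},
        output="approved",
        rationale="income above threshold of 25000",
        model_version="v0.1-demo",
    ))
    for rec in trail.trace():
        print(rec.output, "because", rec.rationale)
```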

The full reaction paper is available here.
