
Trustworthy AI in Healthcare

Data science is increasingly being used to accelerate processes, compare services, find solutions, and produce answers… but will citizens be prepared to trust machines and the companies behind them? This question is even more pertinent in the healthcare sector, since health is a deeply personal, sensitive and emotional subject. In this event, hosted by the Med-Tech World Forum, I touched on the building blocks of trustworthy AI, built on the pillars of lawfulness, ethics, robustness and explainability.

The World Health Organisation provides six key principles to guide the ethical development of AI in health. These are:

  • Protect autonomy: Humans should have oversight of and the final say on all health decisions — they shouldn’t be made entirely by machines, and doctors should be able to override them at any time. AI shouldn’t be used to guide someone’s medical care without their consent, and their data should be protected. 
  • Promote human safety: Developers should continuously monitor any AI tools to make sure they’re working as they’re supposed to and not causing harm. 
  • Ensure transparency: Developers should publish information about the design of AI tools. One regular criticism of these systems is that they’re “black boxes,” making it too hard for researchers and doctors to know how they make decisions. The WHO wants to see enough transparency that tools can be fully audited and understood by users and regulators (a minimal sketch of one such audit follows this list). 
  • Foster accountability: When something goes wrong with an AI technology, such as a tool’s decision leading to patient harm, there should be mechanisms for determining who is responsible (such as manufacturers and clinical users).
  • Ensure equity: That means making sure tools are available in multiple languages and trained on diverse sets of data. In the past few years, close scrutiny of common health algorithms has found that some have racial bias built in. 
  • Promote AI that is sustainable: Developers should be able to regularly update their tools, and institutions should have ways to adjust if a tool seems ineffective. Institutions or companies should also only introduce tools that can be repaired, even in under-resourced health systems.
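
To make the transparency point concrete, here is a minimal sketch of the kind of audit a regulator or clinician might run against a “black box” model: permutation importance (via scikit-learn) reveals which inputs actually drive a classifier’s predictions. The model, the feature names and the data below are entirely hypothetical, chosen only to illustrate the technique.

```python
# Minimal sketch: auditing which inputs drive a "black box" health model
# with permutation importance from scikit-learn. All data is synthetic
# and the feature names are hypothetical illustrations.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical patient features; the last one is deliberately irrelevant.
feature_names = ["age", "systolic_bp", "cholesterol", "unrelated_noise"]
X = rng.normal(size=(1000, 4))
# Synthetic outcome driven mainly by the first two features.
y = (0.8 * X[:, 0] + 0.6 * X[:, 1] + rng.normal(scale=0.5, size=1000)) > 0

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle one feature at a time and measure how much test accuracy
# degrades: a model-agnostic check on what the tool actually relies on.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for name, mean, std in zip(feature_names,
                           result.importances_mean,
                           result.importances_std):
    print(f"{name:>16}: {mean:.3f} +/- {std:.3f}")
```

An audit like this shows users and regulators what a deployed tool depends on, and it works without access to the model’s internals; it complements, rather than replaces, the published design documentation the WHO calls for.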

To ensure patients trust AI and the innovation it brings, vendors must become genuinely transparent about the benefits and risks of AI technology and about how, in practical terms, it works.

To most patients, AI is inherently complex, so we must help them understand its benefits and be clear about how the technology can support their care. There is a lot of AI hype, which ‘hides’ the tough questions and shifts the discussion away from AI literacy towards AI marketing-speak. This must change.

Dr. Gege Gatt at Med-Tech World Forum

Header Photo by Jean-Philippe Delberghe on Unsplash
