As technology continues to evolve at a rapid pace, its impact on the healthcare industry is becoming increasingly profound. A significant enabler in this transformation is AI, which presents a wealth of opportunities for improving patient outcomes and revolutionising healthcare delivery. A recent symposium hosted by the University of Malta brought together a panel of experts to discuss AI’s role in this rapidly evolving landscape.
I was invited to speak about the intersection of technology, law, ethics, marketing, and healthcare – which represents an exciting frontier for the next couple of years. I focused on the potential AI has to extend healthcare accessibility to marginalised groups, thereby promoting health equity.
AI: Beyond Chatbots
While AI-powered chatbots like ChatGPT offer considerable advantages, the capabilities of AI in healthtech extend far beyond this. The exponential growth curve of AI means that future developments are set to make today’s technology seem primitive in comparison. One of the primary objectives of healthtech is patient engagement: empowering patients and aiding healthcare professionals to provide better services. Technologies under its umbrella range from automated diagnostic tools and patient record systems to drug discovery.
Addressing Legal and Ethical Implications
However, the adoption of AI in healthcare is not without its challenges, particularly the legal and ethical implications concerning liability, insurance, and data protection when using AI tools. OpenAI, the company behind ChatGPT, currently disclaims all forms of liability, transferring responsibility onto the user.
I believe that as the role of such technologies grows, companies should be held accountable for the technologies they introduce to the market. This would ensure that no technology causing harm is released without its creators assuming responsibility for user safety. The EU’s AI Act is a good step in this direction, as it uses a risk-based framework to forecast and manage possible harm.
Naturally, although AI can facilitate healthcare access, especially for those with low digital literacy, it is not a substitute for doctors and should not be the sole source of medical advice.
Evolving Patient-Doctor Interaction
There is a notable shift from a reactive to a proactive care model, thanks to better access to information and health knowledge. AI can play a crucial role in facilitating this access, complementing rather than replacing human healthcare providers.
AI’s Future in Healthcare
AI is not set to replace the healthcare workforce; rather, its role is to take on the tasks that technology can perform more efficiently than humans. AI could automate the initial assessment stage of most patient pathways, leaving subsequent stages to the patient-doctor interface.
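To make the idea concrete, here is a minimal sketch (with entirely hypothetical function and outcome names, not any real clinical protocol) of a pathway in which only the initial assessment is automated and every outcome hands over to a human-led stage:

```python
# Hypothetical sketch: an automated first stage routes patients,
# it never diagnoses. All outcomes lead to the patient-doctor interface.

RED_FLAGS = {"chest pain", "shortness of breath", "severe bleeding"}

def initial_assessment(symptoms: set[str]) -> str:
    """Automated initial stage of a patient pathway (illustrative only)."""
    if symptoms & RED_FLAGS:
        # Anything potentially urgent is escalated straight to a clinician.
        return "escalate_to_clinician_urgent"
    if len(symptoms) > 2:
        # Multiple symptoms: book a routine appointment with a doctor.
        return "book_gp_appointment"
    # Low-complexity cases get self-care advice plus safety-netting,
    # with a human follow-up always available.
    return "self_care_advice_with_safety_netting"

print(initial_assessment({"chest pain"}))
print(initial_assessment({"cough"}))
```

The design choice mirrors the point above: the automated stage only routes, and a clinician owns everything downstream.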
Looking to the future, we envisage AI-human collaboration becoming the standard in healthcare. The goal is not to replace roles entirely with AI, but to enhance job functions through the automation of certain tasks. Blended services, part AI and part human, are likely to be adopted more widely by patients due to their potential for increased convenience and speed.

Reflecting on EBO’s work with the UK National Health Service (NHS), it is remarkable to observe the shift in service culture. Recent research found that a large proportion of NHS patients would be willing to interact with automated AI services, signifying a promising movement towards the adoption of AI in healthcare. This transition is largely driven by the convenience and speed that AI can offer. By accurately interpreting the intent behind a request, AI can provide answers more quickly and efficiently than a human agent could for most common queries.
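A toy illustration of that last point, using invented intent names and a deliberately naive keyword matcher (production systems use trained language models, not keyword lookup): common administrative intents are answered automatically, and anything unrecognised is handed to a human agent.

```python
# Hypothetical sketch of intent-based routing for common queries.
# Intent names and keywords are invented for illustration.

INTENT_KEYWORDS = {
    "reschedule_appointment": {"reschedule", "change my appointment"},
    "opening_hours": {"opening hours", "what time"},
    "repeat_prescription": {"repeat prescription", "refill"},
}

def classify_intent(message: str) -> str:
    """Return the first matching intent, else hand over to a human."""
    text = message.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return intent
    # The blended-service fallback: unrecognised requests go to a person.
    return "handover_to_human"

print(classify_intent("Can I reschedule my appointment for Friday?"))
print(classify_intent("I have been feeling dizzy since yesterday"))
```

The fallback branch is the design point: automation handles the high-volume routine queries, and everything ambiguous defaults to a human.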
Reflecting on EBO’s implementations in the UK, there are several key lessons that come to light. The first is that cultural attitudes towards technology and AI in healthcare are rapidly evolving. The second lesson is that AI should be applied where it can improve service, keeping in mind its limitations in areas requiring emotional intelligence. AI is intended to augment human capabilities, not replace them. The most successful projects are those that strike the right balance between human and AI inputs. The third lesson revolves around the importance of adoption and ethical considerations. We need to build trustworthy AI systems that are lawful, ethical, robust, and explainable. Establishing trust in AI systems is pivotal to their successful implementation, requiring both internal adoption (by staff) and external adoption (by customers).
A collaborative approach is needed to maximise potential benefits while addressing challenges. The input of specialists across various disciplines will be vital to successfully navigating this new frontier. While AI offers exciting possibilities for the future of healthcare, we must remember that it serves as a tool to aid, not replace, the valuable work of healthcare professionals.