2025: Crisis Management Days Book of Abstracts
International and EU security, Public health aspects of crises and local community preparedness, Crisis situation analyses and learned lessons

From innovation to accountability: Legal dimensions of AI-powered robots in EU medicine

Jelena Levak
University of Applied Sciences Velika Gorica
Maja Nišević
Research Unit KU Leuven Centre for IT & IP Law (CiTiP)
Duško Milojević
Research Unit KU Leuven Centre for IT & IP Law (CiTiP)

Published 2025-05-16

Keywords

  • AI-powered robots
  • medical liability
  • healthcare regulation
  • explainable AI

How to Cite

Levak, J., Nišević, M., & Milojević, D. (2025). From innovation to accountability: Legal dimensions of AI-powered robots in EU medicine. Crisis Management Days. Retrieved from https://ojs.vvg.hr/index.php/DKU/article/view/715

Abstract

Introduction

The integration of artificial intelligence (AI)-powered robots into healthcare systems is transforming the landscape of medical practice by enhancing diagnostic accuracy, operational efficiency, and patient-specific treatment. These systems, which combine advanced algorithms with robotic capabilities, have the potential to minimize human error, streamline clinical workflows, and improve treatment outcomes. However, this technological evolution brings significant legal, ethical, and regulatory challenges, particularly within the European Union (EU), where existing legal frameworks struggle to keep pace with rapid AI developments.

The complexity of AI ecosystems—comprising developers, manufacturers, software designers, and healthcare operators—complicates the attribution of legal responsibility when adverse outcomes arise. Furthermore, the opacity of AI systems, often described as the “black box” problem, makes it difficult to determine causality in cases of harm, thereby challenging conventional legal principles based on fault and intent. These issues raise pressing questions about liability distribution, accountability, and risk management in the context of medical AI.

This paper focuses on how the EU is responding to these challenges through its evolving legal instruments. It examines how liability is—or should be—allocated among various stakeholders and considers how legal clarity can promote safer, more transparent AI adoption in medicine. In doing so, the paper addresses the crucial need for harmonized legislation that upholds patient safety without stifling innovation.

Methodology

The study employs a qualitative, interdisciplinary approach to examine liability issues associated with AI-powered robots in healthcare. Legal analysis is conducted using primary sources such as the Artificial Intelligence Act, the proposed AI Liability Directive (since withdrawn by the European Commission), the General Data Protection Regulation (GDPR), and the Medical Devices Regulation (MDR). These are supplemented by academic literature, a short survey, expert interviews, regulatory guidelines, media reports, and official EU communications.

A comparative analysis is also conducted to assess how other jurisdictions approach liability for autonomous systems, helping to identify best practices that could inform EU-level harmonization. Case examples and real-world incidents are used to illustrate gaps between theoretical frameworks and practical implementation. Particular attention is given to how different stakeholders—healthcare providers, software developers, and manufacturers—share or evade liability under current legislation.

The methodology emphasizes the interplay between law, medicine, and technology, aiming to generate holistic recommendations grounded in the operational realities of clinical environments.

Main Results

The study identifies several structural and conceptual shortcomings within the European Union’s current liability framework as it applies to AI-powered robots in medicine. One of the key issues is the unclear distribution of liability. Present legal models often place the burden of responsibility primarily on healthcare professionals, without adequately considering the broader network of stakeholders involved in the development, design, and operation of AI systems. This narrow attribution of fault does not reflect the collaborative and distributed nature of AI deployment, and it may hinder innovation by exposing clinicians to disproportionate legal risk.

Another significant challenge is the so-called “black box” problem, which refers to the opaque and often inscrutable decision-making processes of AI systems. This lack of transparency complicates efforts to determine the cause of medical errors or adverse events, thereby undermining accountability. In legal systems that rely on principles of causation and foreseeability, such opacity introduces considerable ambiguity and weakens the effectiveness of traditional liability mechanisms.

The findings also reveal persistent fragmentation and inconsistencies within the legal landscape. Although the EU has introduced important legislative initiatives such as the AI Act, and had proposed an AI Liability Directive before its withdrawal, these frameworks still leave unresolved issues related to autonomous decision-making, unexpected system malfunctions, and regulatory discrepancies across member states. These inconsistencies pose obstacles to the coherent and safe deployment of AI technologies in cross-border healthcare contexts.

Data protection and cybersecurity present further areas of concern. The use of AI in healthcare involves the processing of vast amounts of sensitive personal data, which must comply with the GDPR. However, current safeguards are not always sufficient or uniformly applied, raising questions about data integrity, privacy, and the security of AI systems operating in clinical environments.
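To give a concrete sense of one such safeguard, the following sketch illustrates pseudonymisation, a technique recognised in GDPR Article 4(5) for reducing re-identification risk before clinical records enter an AI pipeline. The record fields, key handling, and function names below are hypothetical and serve only as illustration, not as part of the study itself.

```python
import hmac
import hashlib

# Hypothetical secret key; in practice this would be kept in a key vault,
# separate from the pseudonymised data (GDPR Art. 4(5) requires the
# "additional information" needed for re-identification to be held apart).
PSEUDONYM_KEY = b"replace-with-a-managed-secret"

def pseudonymise_id(patient_id: str) -> str:
    """Replace a direct identifier with a keyed, non-reversible pseudonym.

    HMAC-SHA256 keeps the mapping consistent (the same patient always
    receives the same pseudonym, so longitudinal analysis still works)
    while preventing reversal without the key.
    """
    return hmac.new(PSEUDONYM_KEY, patient_id.encode(), hashlib.sha256).hexdigest()[:16]

def prepare_record_for_ai(record: dict) -> dict:
    """Strip direct identifiers from a (hypothetical) clinical record."""
    safe = dict(record)
    safe["patient_id"] = pseudonymise_id(record["patient_id"])
    safe.pop("name", None)     # drop fields the model does not need
    safe.pop("address", None)  # (data minimisation, GDPR Art. 5(1)(c))
    return safe

record = {"patient_id": "HR-2025-0042", "name": "A. Patient",
          "address": "Velika Gorica", "age": 57, "diagnosis_code": "I25.1"}
print(prepare_record_for_ai(record))
```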

The study emphasizes the critical importance of explainable and trustworthy AI. Legal responsibility is closely tied to a system’s ability to provide transparent and interpretable outputs. Without sufficient explainability, healthcare professionals and patients may struggle to understand or contest AI-generated decisions, thereby weakening trust and complicating liability assessments. Aligning AI systems with GDPR principles and medical standards for informed consent requires technical and legal mechanisms that promote system transparency and accountability.
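As an illustration of the kind of interpretable output such mechanisms might produce, the following sketch applies permutation importance, a model-agnostic explanation technique, to a synthetic classifier standing in for a clinical AI system. The feature names, model, and data are all hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Hypothetical stand-ins for clinical input features.
features = ["age", "blood_pressure", "cholesterol", "bmi"]
rng = np.random.default_rng(0)
X = rng.normal(size=(500, len(features)))
# Synthetic outcome driven mainly by the first two features.
y = (X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=500) > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: shuffle one feature at a time and measure how
# much the model's accuracy drops -- a model-agnostic explanation that
# works even when the model itself is a "black box".
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(features, result.importances_mean),
                          key=lambda p: -p[1]):
    print(f"{name:>15}: {score:.3f}")
```

Attributions of this kind do not fully open the black box, but they provide a concrete, contestable artefact for clinicians, patients, and courts when accountability is assessed.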

In response to these challenges, the paper proposes a shared liability model that reflects the collaborative nature of AI development and use. This model envisions a more balanced distribution of responsibility among developers, manufacturers, healthcare institutions, and end-users. It is supported by the introduction of updated insurance schemes and risk management strategies tailored to the unique characteristics of AI ecosystems. Such a model could foster legal clarity while promoting innovation and ensuring patient safety.

References

  1. Bathaee, Y. (2017). The artificial intelligence black box and the failure of intent and causation. Harvard Journal of Law & Technology, 31, 889–938.
  2. Chang, A. (2023). The role of artificial intelligence in digital health. In Digital health entrepreneurship (pp. 75–85). Cham, Switzerland: Springer International Publishing.
  3. Choudhury, A., & Asan, O. (2020). Role of artificial intelligence in patient safety outcomes: Systematic literature review. JMIR Medical Informatics, 8(7), e18599. https://doi.org/10.2196/18599
  4. Elendu, C., Amaechi, D. C., Elendu, T. C., Jingwa, K. A., Okoye, O. K., Okah, M. J., … Alimi, H. A. (2023). Ethical implications of AI and robotics in healthcare: A review. Medicine, 102(50), e36671.
  5. European Parliament and Council. (2016). Regulation (EU) 2016/679 on the protection of natural persons with regard to the processing of personal data (General Data Protection Regulation). Official Journal of the European Union.
  6. European Parliament and Council. (2017). Regulation (EU) 2017/745 on medical devices (Medical Devices Regulation). Official Journal of the European Union.
  7. European Parliament and Council. (2024). Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). Official Journal of the European Union.