Our paper “On Assessing Trustworthy AI in Healthcare: Best Practice for Machine Learning as a Supportive Tool to Recognize Cardiac Arrest in Emergency Calls” has been accepted for publication in Frontiers in Human Dynamics.

……………………………………………………………………………………………………………

Frontiers in Human Dynamics | doi: 10.3389/fhumd.2021.673104

https://www.frontiersin.org/articles/10.3389/fhumd.2021.673104/abstract

ORIGINAL RESEARCH. The full article will be available online soon.

Frontiers in Human Dynamics publishes rigorously peer-reviewed research that aims to address the sociological and demographic patterns of resilience and adaptation to our ever-changing societies and environment.

……………………………………………………………………………………………………………

On Assessing Trustworthy AI in Healthcare

Best Practice for Machine Learning as a Supportive Tool to Recognize Cardiac Arrest in Emergency Calls.

Roberto V. Zicari 2,29, James Brusseau 21, Stig Nikolaj Blomberg 31, Helle Collatz Christensen 31, Megan Coffee 33, Marianna B. Ganapini 20, Sara Gerke 30, Thomas Krendl Gilbert 28, Eleanore Hickman 11, Elisabeth Hildt 25, Sune Holm 6, Ulrich Kühne 14, Vince I. Madai 8,41,42, Walter Osika 17, Andy Spezzatti 4, Eberhard Schnebel 1, Jesmin Jahan Tithi 32, Dennis Vetter 1, Magnus Westerlund 2, Renee Wurth 10, Julia Amann 34, Vegard Antun 7, Valentina Beretta 38, Frédérick Bruneault 13, Erik Campano 39, Boris Düdder 3, Alessio Gallucci 9, Emmanuel Goffi 12, Christoffer Bjerre Haase 16, Thilo Hagendorff 18, Pedro Kringen 1, Florian Möslein 36, Davi Ottenheimer 40, Matiss Ozols 5, Laura Palazzani 35, Martin Petrin 37, Karin Tafur 19, Jim Tørresen 23, Holger Volland 24, Georgios Kararigas 22

1 Frankfurt Big Data Lab, Goethe University Frankfurt, Germany
2 Arcada University of Applied Sciences, Helsinki, Finland
3 Department of Computer Science (DIKU), University of Copenhagen (UCPH), Denmark
4 Industrial Engineering and Operation Research, University of California, Berkeley, USA
5 University of Manchester and Wellcome Sanger Institute, UK
6 Department of Food and Resource Economics, Faculty of Science, University of Copenhagen, Denmark
7 Department of Mathematics, University of Oslo, Norway
8 CLAIM – Charité Lab for AI in Medicine, Charité Universitätsmedizin Berlin, Germany
9 Department of Mathematics and Computer Science Eindhoven University of Technology, The Netherlands
10 Fitbiomics, San Francisco, USA
11 Faculty of Law, University of Cambridge, UK
12 Observatoire Ethique & Intelligence Artificielle de l’Institut Sapiens, Paris and aivancity, School for Technology, Business and Society, Paris-Cachan, France
13 École des médias, Université du Québec à Montréal and Philosophie, Collège André-Laurendeau, Canada
14 Hautmedizin Bad Soden, Germany
16 Section for Health Service Research and Section for General Practice, Department of Public Health, University of Copenhagen, Denmark
17 Center for Psychiatry Research, Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden
18 Cluster of Excellence “Machine Learning: New Perspectives for Science”, University of Tuebingen, Germany
19 Independent AI Researcher (Law and Ethics) and Legal Tech Entrepreneur, Barcelona, Spain
20 Montreal AI Ethics Institute, Canada and Union College USA
21 Philosophy Department, Pace University, New York, USA
22 Department of Physiology, Faculty of Medicine, University of Iceland, Reykjavik, Iceland
23 Department of Informatics, University of Oslo, Norway
24 Head of community and communications, Z-Inspection® Initiative
25 Center for the Study of Ethics in the Professions, Illinois Institute of Technology Chicago, USA
28 Center for Human-Compatible AI, University of California, Berkeley, USA
29 Data Science Graduate School, Seoul National University, South Korea
30 Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics, Harvard Law School, USA
31 University of Copenhagen, Copenhagen Emergency Medical Services, Denmark
32 Parallel Computing Labs, Intel, Santa Clara, California, USA
33 Department of Medicine and Division of Infectious Diseases and Immunology, NYU Grossman School of Medicine, New York, USA
34 Health Ethics and Policy Lab, Department of Health Sciences and Technology, ETH Zurich, Switzerland
35 Philosophy of Law, LUMSA University, Rome, Italy
36 Institute of the Law and Regulation of Digitalization, Philipps-University Marburg, Germany
37 Western University, Canada, and University College London, UK
38 Department of Economics and Management, Università degli studi di Pavia, Italy
39 Department of Informatics, Umeå University, Sweden
40 Inrupt, San Francisco, USA
41 QUEST Center for Transforming Biomedical Research, Berlin Institute of Health (BIH), Charité Universitätsmedizin Berlin, Germany
42 School of Computing and Digital Technology, Faculty of Computing, Engineering and the Built Environment, Birmingham City University, United Kingdom

Abstract

Artificial Intelligence (AI) has the potential to greatly improve the delivery of healthcare and other services that advance population health and wellbeing. However, the use of AI in healthcare also brings potential risks that may cause unintended harm.

To guide future developments in AI, the High-Level Expert Group on AI (AI HLEG), set up by the European Commission (EC), recently published ethics guidelines for what it terms “trustworthy” AI (AI HLEG, 2019). These guidelines are aimed at a variety of stakeholders and, in particular, guide practitioners towards more ethical and more robust applications of AI.

In line with efforts of the EC, AI ethics scholarship focuses increasingly on converting abstract principles into actionable recommendations. However, the interpretation, relevance, and implementation of trustworthy AI depend on the domain and the context in which the AI system is used.

The main contribution of this paper is to demonstrate how to use the general AI HLEG trustworthy AI guidelines in practice in the healthcare domain. To this end, we present a best practice of assessing the use of machine learning as a supportive tool to recognize cardiac arrest in emergency calls.

The AI system under assessment is currently in use in the city of Copenhagen in Denmark. The assessment was carried out by an independent team composed of philosophers, policy makers, social scientists, and technical, legal, and medical experts. By leveraging an interdisciplinary team, we aim to expose the complex trade-offs involved and the necessity of thorough human review when tackling socio-technical applications of AI in healthcare.

For the assessment, we use Z-Inspection®, a process for assessing trustworthy AI, to identify specific challenges and potential ethical trade-offs that arise when AI is applied in practice.

Keywords: Artificial Intelligence, Cardiac Arrest, Case Study, Ethical Trade-Off, Explainable AI, Healthcare, Trust, Trustworthy AI.