On the Ethics of AI-Based Algorithmic Decision-Making in Healthcare

Co-design of Trustworthy AI. Best Practice: Deep Learning-Based Skin Lesion Classifiers.

In cooperation with

German Research Center for Artificial Intelligence GmbH (DFKI)

Approach

The team of Dr. Andreas Dengel at the German Research Center for Artificial Intelligence (DFKI) used a well-trained, high-performing neural network to classify three types of skin tumours, i.e., Melanocytic Naevi, Melanoma, and Seborrheic Keratosis, and performed a detailed analysis of its latent space.
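To give a rough sense of what such a latent-space analysis involves, the sketch below shows one common way to inspect the latent space of a trained image classifier: take the penultimate-layer activations as embeddings and project them into two dimensions. The ResNet-18 backbone, the class names, and the PCA projection are illustrative assumptions, not the DFKI team's actual model or pipeline, which is described in the publication linked below.

```python
# Minimal sketch (placeholder, not the DFKI team's code): extract
# penultimate-layer embeddings from a trained skin-lesion classifier
# and project them to 2-D for latent-space inspection.
import torch
import torchvision
from sklearn.decomposition import PCA

CLASS_NAMES = ["melanocytic_naevus", "melanoma", "seborrheic_keratosis"]  # assumed labels

# Stand-in for a trained classifier: a ResNet-18 with a 3-class head.
model = torchvision.models.resnet18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, len(CLASS_NAMES))
model.eval()

# Drop the final classification layer so the forward pass returns
# the penultimate-layer features (the "latent" representation).
feature_extractor = torch.nn.Sequential(*list(model.children())[:-1])

@torch.no_grad()
def embed(images: torch.Tensor) -> torch.Tensor:
    """Map a batch of images (N, 3, 224, 224) to latent vectors (N, 512)."""
    feats = feature_extractor(images)  # (N, 512, 1, 1)
    return feats.flatten(start_dim=1)  # (N, 512)

# Dummy batch standing in for real dermoscopic images.
images = torch.randn(16, 3, 224, 224)
latents = embed(images)

# Project the latent vectors to two dimensions, e.g. to check whether
# the three lesion classes form separable clusters.
coords_2d = PCA(n_components=2).fit_transform(latents.numpy())
print(coords_2d.shape)  # (16, 2)
```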

The result of their work is available here: IJCNN_Interpretability (1)

We have been working with Andreas Dengel and his team, applying our Z-Inspection® process to assess the ethical, technical, and legal implications of using Deep Learning in this context.

Andreas Dengel

German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany

Ethically aligned co-design

Co-Design of a Trustworthy AI System in Healthcare: Deep Learning Based Skin Lesion Classifier.

Roberto V. Zicari, Sheraz Ahmed, Julia Amann, Stephan Alexander Braun, John Brodersen, Frédérick Bruneault, James Brusseau, Erik Campano, Megan Coffee, Andreas Dengel, Boris Düdder, Alessio Gallucci, Thomas Krendl Gilbert, Philippe Gottfrois, Emmanuel Goffi, Christoffer Bjerre Haase, Thilo Hagendorff, Eleanore Hickman, Elisabeth Hildt, Sune Holm, Pedro Kringen, Ulrich Kühne, Adriano Lucieri, Vince I. Madai, Pedro A. Moreno-Sánchez, Oriana Medlicott, Matiss Ozols, Eberhard Schnebel, Andy Spezzatti, Jesmin Jahan Tithi, Steven Umbrello, Dennis Vetter, Holger Volland, Magnus Westerlund and Renee Wurth

Front. Hum. Dyn. | Human and Artificial Collaboration for Medical Best Practices, July 13, 2021

View the original research article

The main contribution of our work is to show the use of an ethically aligned co-design methodology to ensure trustworthiness in the early design of an artificial intelligence (AI) system component for healthcare. The system is aimed at explaining the decisions made by deep learning networks when used to analyze images of skin lesions. For that, we use a holistic process, called Z-Inspection®, which requires a multidisciplinary team of experts working together with the AI designers and their managers to explore and investigate possible ethical, legal, and technical issues that could arise from the future use of the AI system. Our research work addresses the need for the co-design of trustworthy AI using a holistic approach, rather than static ethical checklists. Our results can also serve as guidance for other similar early-phase AI tool developments.

Key Lessons learned:

  1. Mission: “…Aid the development of designs with reduced end-user vulnerability…”
  2. “…Socio-technical scenarios can be used to broaden stakeholders’ understanding of one’s own role in the technology, as well as awareness of stakeholders’ interdependence…”
  3. “…Recurrent, open-minded, and interdisciplinary discussions involving different perspectives of the broad problem definition…”
  4. “…The early involvement of an interdisciplinary panel of experts broadened the horizon of AI designers which are usually focused on the problem definition from a data and application perspective…”
  5. “…Consider the aim of the future AI system as a claim that needs to be validated before the AI system is deployed…”
  6. “…Involve patients at every stage of the design process … it is particularly important to ensure that the views, needs, and preferences of vulnerable and disadvantaged patient groups are taken into account to avoid exacerbating existing inequalities…”

Thank you, Roberto V. Zicari and the rest of the team for these insights!

Helga Brogger, President of the Norwegian Society of Radiology