On the Ethics of AI-Based Algorithmic Decision-Making in Healthcare.
Co-Design of Trustworthy AI. Best Practice: Deep Learning-Based Skin Lesion Classifiers.
In cooperation with
German Research Center for Artificial Intelligence GmbH (DFKI)
Approach
The team of Dr. Andreas Dengel at the German Research Center for Artificial Intelligence (DFKI) used a well-trained, high-performing neural network to classify three types of skin lesions, namely melanocytic naevi, melanoma, and seborrheic keratosis, and performed a detailed analysis of its latent space.
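For readers unfamiliar with this kind of analysis, the sketch below shows one common way such a latent-space inspection can be set up. It is a minimal illustration, not the DFKI team's actual code: the ResNet-18 backbone, the class names, and the random input batch are all assumptions made for the example. The idea is to take embeddings from the penultimate layer of the trained classifier and project them to two dimensions for inspection.

```python
import numpy as np
import torch
import torchvision.models as models
from sklearn.manifold import TSNE

# Hypothetical class names for the three lesion types discussed above.
CLASSES = ["melanocytic_naevus", "melanoma", "seborrheic_keratosis"]

# Stand-in for the trained classifier: a ResNet-18 (torchvision >= 0.13)
# with a 3-way head. In practice this would be the fine-tuned model.
model = models.resnet18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, len(CLASSES))
model.eval()

# Drop the final classification layer so the forward pass yields the
# 512-dimensional latent representation instead of class logits.
encoder = torch.nn.Sequential(*list(model.children())[:-1])

@torch.no_grad()
def latent_vectors(images: torch.Tensor) -> np.ndarray:
    """Map a batch of (N, 3, 224, 224) images to (N, 512) latent vectors."""
    feats = encoder(images)          # shape: (N, 512, 1, 1)
    return feats.flatten(1).numpy()  # shape: (N, 512)

# Toy random batch in place of real dermoscopic images.
batch = torch.randn(64, 3, 224, 224)
z = latent_vectors(batch)

# Project the latent space to 2-D; per-class clusters (or clusters driven
# by image artifacts rather than pathology) are what such an analysis
# looks for.
z_2d = TSNE(n_components=2, perplexity=30).fit_transform(z)
print(z_2d.shape)  # (64, 2)
```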
The results of their work are available here: IJCNN_Interpretability
We have been working with Andreas Dengel and his team and applied our Z-inspection® process to assess the ethical, technical and legal implications of using Deep Learning in this context.
Co-Design of a Trustworthy AI System in Healthcare: Deep Learning Based Skin Lesion Classifier.
Roberto V. Zicari, Sheraz Ahmed, Julia Amann, Stephan Alexander Braun, John Brodersen, Frédérick Bruneault, James Brusseau, Erik Campano, Megan Coffee, Andreas Dengel, Boris Düdder, Alessio Gallucci, Thomas Krendl Gilbert, Philippe Gottfrois, Emmanuel Goffi, Christoffer Bjerre Haase, Thilo Hagendorff, Eleanore Hickman, Elisabeth Hildt, Sune Holm, Pedro Kringen, Ulrich Kühne, Adriano Lucieri, Vince I. Madai, Pedro A. Moreno-Sánchez, Oriana Medlicott, Matiss Ozols, Eberhard Schnebel, Andy Spezzatti, Jesmin Jahan Tithi, Steven Umbrello, Dennis Vetter, Holger Volland, Magnus Westerlund and Renee Wurth
Front. Hum. Dyn. | Human and Artificial Collaboration for Medical Best Practices, July 13, 2021
The main contribution of our work is to show the use of an ethically aligned co-design methodology to ensure trustworthiness in the early design of an artificial intelligence (AI) system component for healthcare. The system aims to explain the decisions made by deep learning networks when used to analyze images of skin lesions. For that, we use a holistic process, called Z-inspection®, which requires a multidisciplinary team of experts working together with the AI designers and their managers to explore and investigate possible ethical, legal, and technical issues that could arise from the future use of the AI system. Our work addresses the need for the co-design of trustworthy AI using a holistic approach, rather than static ethical checklists. Our results can also serve as guidance for the development of other similar AI tools in their early phases.
Trustworthy AI Co-Design
Basic Concepts
Z-inspection® can be considered an ethically aligned co-design methodology, as defined by Robertson et al. (2019), who propose a design process for robotics and autonomous systems using a co-design approach, applied ethics, and values-driven methods. In the following, we illustrate some key concepts.
(Source: Front. Hum. Dyn., 13 July 2021, Sec. Digital Impacts, Volume 3, 2021 | https://doi.org/10.3389/fhumd.2021.688152)
Co-Design
Co-design is defined as collective creativity, applied across the whole span of a design process, that engages end-users and other relevant stakeholders (Robertson et al., 2019). In their methodology, Robertson et al. (2019) suggest that the design process is open, in the sense that within this process “interactions occur in a broader socio-technical context”; this is why “stakeholder engagement should not be restricted to end-user involvement but should encourage and support the inclusion of additional stakeholder groups” that are part of the design process or are impacted by the designed product. The ethical aspects of the process and product must also be considered in relation to the “existing regulatory environment (…) to facilitate the integration of such provisions in the early stages” of the co-design.
Vulnerability
Robertson et al. (2019) mention that “within a socio-technical system where humans interact with partially automated technologies, an end-user is vulnerable to failures from both humans and the technology”. These failures and the risks associated with them are symptomatic of power asymmetries embedded in these technologies. This stresses the importance of an “exposure analysis that employs a metric of end-user exposure capable of attributing variations across measurements to specific contributors [which] can aid the development of designs with reduced end-user vulnerability”.
Exposure
Exposure represents an evaluation of the contact potential between a hazard and a receptor (Robertson et al., 2019). For this reason, the authors state: “A threat to an end-user engaging with a technological system is only significant if it aligns with a specific weakness of that system resulting in contact that leads to exposure”. Conversely, every weakness can potentially be targeted by a threat, either external or arising from a component’s failure to achieve “fitness for purpose”, and so the configuration of the system’s weaknesses shapes the end-user’s exposure. They accordingly emphasize that, in the case of autonomous systems, the “analysis of the ‘exposure’ of the system provides a numerical and defensible measure of the weaknesses” of that system, and thus must be an integral part of the co-design process.
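To make the idea of a numerical exposure measure concrete, here is a purely illustrative toy sketch. It is not the metric of Robertson et al. (2019), and every weakness, threat, severity, and likelihood in it is hypothetical; it only demonstrates the intuition that exposure accumulates where a threat is likely to align with a severe weakness.

```python
# Purely illustrative toy score; NOT the metric of Robertson et al. (2019).
# All weaknesses, threats, severities, and likelihoods are hypothetical.

# weakness -> severity of harm if a threat makes contact (0..1)
weaknesses = {
    "dataset_bias_toward_light_skin": 0.8,
    "overconfidence_on_atypical_images": 0.6,
}

# threat -> (weakness it aligns with, likelihood of alignment 0..1)
threats = {
    "deployment_in_a_new_population": ("dataset_bias_toward_light_skin", 0.5),
    "smartphone_photos_as_input": ("overconfidence_on_atypical_images", 0.7),
}

def exposure_score(weaknesses, threats):
    """Sum, over all threats, of alignment likelihood x weakness severity."""
    return sum(p * weaknesses[w] for w, p in threats.values())

print(f"end-user exposure: {exposure_score(weaknesses, threats):.2f}")
# 0.5 * 0.8 + 0.7 * 0.6 = 0.82
```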
In this best practice, we focused on the part of the co-design process that helps to identify possible exposures when designing a system. In the framework for Trustworthy AI, exposures are defined as ethical, technical, and legal issues related to the use of the AI system.
Key Lessons Learned:
- Mission: “…Aid the development of designs with reduced end-user vulnerability…”
- “…Socio-technical scenarios can be used to broaden stakeholders’ understanding of one’s own role in the technology, as well as awareness of stakeholders’ interdependence…”
- “…Recurrent, open-minded, and interdisciplinary discussions involving different perspectives of the broad problem definition…”
- “…The early involvement of an interdisciplinary panel of experts broadened the horizon of AI designers, who are usually focused on the problem definition from a data and application perspective…”
- “…Consider the aim of the future AI system as a claim that needs to be validated before the AI system is deployed…”
- “…Involve patients at every stage of the design process … it is particularly important to ensure that the views, needs, and preferences of vulnerable and disadvantaged patient groups are taken into account to avoid exacerbating existing inequalities…”
Thank you, Roberto V. Zicari and the rest of the team for these insights!
Helga Brogger, President of the Norwegian Society of Radiology