Ethical Implications of AI: Assessing Trustworthy AI in Practice

—Series of Lectures—

Seoul National University

Remote Zoom video calls and in-person sessions

65% of lectures remote, 35% in person

Class schedule: Every Tuesday and Thursday, 5:00pm–6:15pm Korean time (KST) (10:00am–11:15am CET)

Duration of a class: 1 hour and 15 minutes.

Course starts on September 1, 2022 and ends on December 14, 2022

NOTE: The first lesson will be on Thursday, September 15.

The course carries 3 credits.

Course Coordinators: 

Prof. Roberto V. Zicari, SNU

Visiting Assistant Prof. Heejin Kim, SNU



Dennis Vetter, Goethe University Frankfurt

Target Students: Master's and PhD students from interdisciplinary backgrounds (e.g., Computer Science, Data Science, Machine Learning, Law, Medicine, Social Science, Ethics, Public Policy).

How to get credit points

To receive the final credit points, you need to write a mid-term report and a final report at the end of the semester.

Course Description

Applications based on Machine Learning and/or Deep Learning carry specific (mostly unintentional) risks that are considered within AI ethics. As a consequence, the quest for trustworthy AI has become a central issue for governance and technology impact assessment efforts, which have intensified over the last four years, with a focus on identifying both ethical and legal principles.

As AI capabilities have grown exponentially, it has become increasingly difficult to determine whether model outputs or system behaviors protect the rights and interests of an ever-wider group of stakeholders – let alone evaluate them as ethical or legal, or as meeting goals of improving human welfare and freedom.

For example, what if decisions made using an AI-driven algorithm benefit some socially salient groups more than others?

And what if we fail to identify and prevent these inequalities because we cannot explain how decisions were derived?

Moreover, we also need to consider how the adoption of these new algorithms and the lack of knowledge and control over their inner workings may impact those in charge of making decisions.

This course will help students to assess Trustworthy AI systems in practice by using the Z-Inspection® process.

The Z-Inspection® process is the result of 4.5 years of applied research by the Z-Inspection® initiative, a network of world-class experts led by Prof. Roberto V. Zicari.

Z-Inspection® is a holistic process used to evaluate the trustworthiness of AI-based technologies at different stages of the AI lifecycle. It focuses, in particular, on the identification and discussion of ethical issues and tensions through the elaboration of socio-technical scenarios. It applies the European Union's High-Level Expert Group (EU HLEG) guidelines for trustworthy AI.

Students will work in small groups to evaluate the trustworthiness of real AI systems in various domains, e.g., healthcare, government and public administration, and justice.


1. Fundamental rights as moral and legal entitlements

2. From fundamental rights to ethical principles

3. Introduction to the EU guidelines for Trustworthy AI

3.1 Trustworthy AI ethical principles:

– Respect for human autonomy

– Prevention of harm

– Fairness

– Explicability

3.2 Ethical Tensions and Trade-offs

3.3 Seven Requirements (+ sub-requirements) for Trustworthy AI

– Human agency and oversight. Including fundamental rights, human agency and human oversight

– Technical robustness and safety. Including resilience to attack and security, fallback plan and general safety

– Privacy and data governance. Including respect for privacy, quality and integrity of data, and access to data

–  Transparency. Including traceability, explainability and communication

– Diversity, non-discrimination and fairness. Including the avoidance of unfair bias, accessibility and universal design, and stakeholder participation

– Societal and environmental wellbeing. Including sustainability and environmental friendliness, social impact, society and democracy

– Accountability. Including auditability, minimisation and reporting of negative impact, trade-offs and redress.

4. Assessing Trustworthy AI in Practice.

4.1 The Z-Inspection® process in detail

– Set Up Phase

– Assess Phase

– Resolution Phase

4.2 Additional Frameworks

– The Claims, Arguments and Evidence (CAE) framework

– The Fundamental Rights and Algorithm Impact Assessment (FRAIA)

4.3 Tools

– The ALTAI Web tool


EU Trustworthy AI Guidelines

– Ethics Guidelines for Trustworthy AI. Independent High-Level Expert Group on Artificial Intelligence. European Commission, 8 April 2019. Link to .PDF

– WHITE PAPER. On Artificial Intelligence – A European approach to excellence and trust. European Commission, Brussels, 19.2.2020 COM(2020) 65 final. Link to .PDF

EU Draft Proposed AI Law


Socio-Technical Scenarios.

– Ethical Framework for Designing Autonomous Intelligent Systems. J. Leikas et al. Journal of Open Innovation, 2019, 5, 1. Link

Catalog of Ethical tensions

–  Ethical and societal implications of algorithms, data, and artificial intelligence: a roadmap for research. Whittlestone et al. (2019) – Link

Claims Arguments and Evidence (CAE)

– R. Bloomfield and K. Netkachova, “Building Blocks for Assurance Cases,” 2014 IEEE International Symposium on Software Reliability Engineering Workshops, 2014, pp. 186-191, doi: 10.1109/ISSREW.2014.72. LINK

Fundamental Human Rights

– The Fundamental Rights and Algorithm Impact Assessment (FRAIA) helps to map the risks to human rights in the use of algorithms and to take measures to address these risks. Link.

The Z-Inspection® process

– Z-Inspection®: A Process to Assess Trustworthy AI.

Roberto V. Zicari, John Brodersen, James Brusseau, Boris Düdder, Timo Eichhorn, Todor Ivanov, Georgios Kararigas , Pedro Kringen, Melissa McCullough, Florian Möslein, Karsten Tolle, Jesmin Jahan Tithi, Naveed Mushtaq, Gemma Roig , Norman Stürtz, Irmhild van Halem, Magnus Westerlund.

IEEE Transactions on Technology and Society, VOL. 2, NO. 2, JUNE 2021

Print ISSN: 2637-6415 Online ISSN: 2637-6415

Digital Object Identifier: 10.1109/TTS.2021.3066209


–  How to Assess Trustworthy AI in Practice.

Roberto V. Zicari, Julia Amann, Frédérick Bruneault, Megan Coffee, Boris Düdder, Eleanore Hickman, Alessio Gallucci, Thomas Krendl Gilbert, Thilo Hagendorff, Irmhild van Halem, Elisabeth Hildt, Georgios Kararigas, Pedro Kringen, Vince I. Madai, Emilie Wiinblad Mathez, Jesmin Jahan Tithi, Dennis Vetter, Magnus Westerlund, Renee Wurth

On behalf of the Z-Inspection® initiative (2022)


This report is a methodological reflection on Z-Inspection®. Z-Inspection® is a holistic process used to evaluate the trustworthiness of AI-based technologies at different stages of the AI lifecycle. It focuses, in particular, on the identification and discussion of ethical issues and tensions through the elaboration of socio-technical scenarios. It applies the European Union's High-Level Expert Group (EU HLEG) guidelines for trustworthy AI. This report illustrates for both AI researchers and AI practitioners how the EU HLEG guidelines for trustworthy AI can be applied in practice. We share the lessons learned from conducting a series of independent assessments to evaluate the trustworthiness of AI systems in healthcare. We also share key recommendations and practical suggestions on how to ensure a rigorous trustworthy AI assessment throughout the life-cycle of an AI system.

Cite as: arXiv:2206.09887 [cs.CY] The full report is available on arXiv. Link

You can download the full report as .PDF

Best Practices

How to assess an AI system that is already deployed

On Assessing Trustworthy AI in Healthcare. Machine Learning as a Supportive Tool to Recognize Cardiac Arrest in Emergency Calls

Roberto V. Zicari • James Brusseau • Stig Nikolaj Blomberg • Helle Collatz Christensen • Megan Coffee • Marianna B. Ganapini • Sara Gerke • Thomas Krendl Gilbert • Eleanore Hickman • Elisabeth Hildt • Sune Holm • Ulrich Kühne • Vince I. Madai • Walter Osika • Andy Spezzatti • Eberhard Schnebel • Jesmin Jahan Tithi • Dennis Vetter • Magnus Westerlund • Renee Wurth • Julia Amann • Vegard Antun • Valentina Beretta • Frédérick Bruneault • Erik Campano • Boris Düdder • Alessio Gallucci • Emmanuel Goffi • Christoffer Bjerre Haase • Thilo Hagendorff • Pedro Kringen • Florian Möslein • Davi Ottenheimer • Matiss Ozols • Laura Palazzani • Martin Petrin • Karin Tafur • Jim Tørresen • Holger Volland • Georgios Kararigas

Front. Hum. Dyn., Human and Artificial Collaboration for Medical Best Practices, 08 July 2021


How to work with engineers on the co-design of an AI system

–  Co-Design of a Trustworthy AI System in Healthcare: Deep Learning Based Skin Lesion Classifier.

Roberto V. Zicari, Sheraz Ahmed, Julia Amann, Stephan Alexander Braun, John Brodersen, Frédérick Bruneault, James Brusseau, Erik Campano, Megan Coffee, Andreas Dengel, Boris Düdder, Alessio Gallucci, Thomas Krendl Gilbert, Philippe Gottfrois, Emmanuel Goffi, Christoffer Bjerre Haase, Thilo Hagendorff, Eleanore Hickman, Elisabeth Hildt, Sune Holm, Pedro Kringen, Ulrich Kühne, Adriano Lucieri, Vince I. Madai, Pedro A. Moreno-Sánchez, Oriana Medlicott, Matiss Ozols, Eberhard Schnebel, Andy Spezzatti, Jesmin Jahan Tithi, Steven Umbrello, Dennis Vetter, Holger Volland, Magnus Westerlund and Renee Wurth.

Front. Hum. Dyn., Human and Artificial Collaboration for Medical Best Practices, July 13, 2021


– Assessing Trustworthy AI in times of COVID-19. Deep Learning for predicting a multi-regional score conveying the degree of lung compromise in COVID-19 patients

Himanshi Allahabadi, Julia Amann, Isabelle Balot, Andrea Beretta, Charles Binkley, Jonas Bozenhard, Frédérick Bruneault, James Brusseau, Sema Candemir, Luca Alessandro Cappellini, Genevieve Fieux Castagnet, Subrata Chakraborty, Nicoleta Cherciu, Christina Cociancig, Megan Coffee, Irene Ek, Leonardo Espinosa-Leal, Davide Farina, Geneviève Fieux-Castagnet, Thomas Frauenfelder, Alessio Gallucci, Guya Giuliani, Adam Golda, Irmhild van Halem, Elisabeth Hildt, Sune Holm, Georgios Kararigas, Sebastien A. Krier, Ulrich Kühne, Francesca Lizzi, Vince I. Madai, Aniek F. Markus, Serg Masis, Emilie Wiinblad Mathez, Francesco Mureddu, Emanuele Neri, Walter Osika, Matiss Ozols, Cecilia Panigutti, Brendan Parent, Francesca Pratesi, Pedro A. Moreno-Sánchez, Giovanni Sartor, Mattia Savardi, Alberto Signoroni, Hanna Sormunen, Andy Spezzatti, Adarsh Srivastava, Annette F. Stephansen, Lau Bee Theng, Jesmin Jahan Tithi, Jarno Tuominen, Steven Umbrello, Filippo Vaccher, Dennis Vetter, Magnus Westerlund, Renee Wurth, Roberto V. Zicari

in IEEE Transactions on Technology and Society, 2022.

Digital Object Identifier: 10.1109/TTS.2022.3195114


We present the results of a post-hoc self-assessment to evaluate the trustworthiness of an AI system for predicting a multi-regional score conveying the degree of lung compromise in COVID-19 patients, developed and verified by an interdisciplinary team with members from academia, public hospitals, and industry during the pandemic.

The AI system aims to help radiologists estimate and communicate the severity of damage to a patient's lungs from chest X-rays. It has been experimentally deployed in the radiology department of the ASST Spedali Civili clinic in Brescia (Italy) since December 2020.

The methodology we have applied for our post-hoc assessment, called Z-Inspection®, uses socio-technical scenarios to identify ethical, technical and domain-specific issues in the use of the AI system in the context of the pandemic.

Index Terms—Artificial Intelligence, Case Study, COVID-19, Ethical Trade-off, Explainable AI, Healthcare, Pandemic, Trust, Trustworthy AI, Radiology, Ethics, Z-Inspection®

To view the article abstract page, please use this URL.

Download Early Access Preview Version of the Article as .PDF.

(Citation information: DOI 10.1109/TTS.2022.3195114)

– Using Sentence Embeddings and Semantic Similarity for Seeking Consensus when Assessing Trustworthy AI

Dennis Vetter, Jesmin Jahan Tithi, Magnus Westerlund, Roberto V. Zicari, Gemma Roig. LINK

Cite as: arXiv:2208.04608