Ethical Implications of AI: Assessing Trustworthy AI in Practice

Fall 2024

—Series of Lectures—

Seoul National University


Admin

Sessions are held both remotely via Zoom and in person.

65% of the lectures are remote, 35% in person.

Class schedule:   

Duration of a class: 1 hour and 15 minutes.

Course starts on  

NOTE: the first lesson will be on

The course carries 3 credits.

Course Instructor: 

Prof. Roberto V. Zicari, SNU

Assistant: 

Sarah Hyojin, SNU


Target Students

SNU master's and PhD students from interdisciplinary backgrounds (e.g., Computer Science, Data Science, Machine Learning, Law, Medicine, Social Science, Ethics, Public Policy). Maximum 30 students, admitted on a first-come, first-served basis.

How to get credit points  

To earn the credit points, you must submit a mid-term report and a final report at the end of the semester.


Course Description

Applications based on Machine Learning and/or Deep Learning carry specific (mostly unintentional) risks that fall within the scope of AI ethics. As a consequence, the quest for trustworthy AI has become a central issue for governance and technology impact assessment efforts, and these efforts have intensified in the last four years, focusing on the identification of both ethical and legal principles.

As AI capabilities have grown exponentially, it has become increasingly difficult to determine whether model outputs or system behaviors protect the rights and interests of an ever-wider group of stakeholders, let alone to evaluate them as ethical or legal, or as meeting the goal of improving human welfare and freedom.

For example, what if decisions made using an AI-driven algorithm benefit some socially salient groups more than others?

And what if we fail to identify and prevent these inequalities because we cannot explain how decisions were derived?

Moreover, we need to consider how the adoption of these new algorithms, and the lack of knowledge and control over their inner workings, may impact those in charge of making decisions.

This course will help students to assess Trustworthy AI systems in practice by using the Z-Inspection® process.

The Z-Inspection® process is the result of five years of applied research by the Z-Inspection® initiative, a network of world-class experts led by Prof. Roberto V. Zicari.

Z-Inspection® is a holistic process used to evaluate the trustworthiness of AI-based technologies at different stages of the AI lifecycle. It focuses, in particular, on the identification and discussion of ethical issues and tensions through the elaboration of socio-technical scenarios. It uses the general European Union’s High-Level Expert Group’s (EU HLEG) guidelines for trustworthy AI.
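As a rough illustration only (not part of the official Z-Inspection® materials), the core mapping the process produces, from ethical issues identified in socio-technical scenarios to the EU HLEG requirements, can be sketched as a simple data structure. The class and field names below are the editor's own illustration, not a prescribed schema:

```python
# Purely illustrative sketch (not official Z-Inspection® tooling): the
# process elaborates socio-technical scenarios, identifies ethical issues
# and tensions, and maps them to the EU HLEG trustworthy AI requirements.
from dataclasses import dataclass

# The seven EU HLEG requirements for trustworthy AI (from the guidelines).
EU_HLEG_REQUIREMENTS = [
    "Human agency and oversight",
    "Technical robustness and safety",
    "Privacy and data governance",
    "Transparency",
    "Diversity, non-discrimination and fairness",
    "Societal and environmental wellbeing",
    "Accountability",
]


@dataclass
class EthicalIssue:
    """An issue identified while discussing a socio-technical scenario."""
    description: str
    scenario: str                    # the scenario the issue arose from
    mapped_requirements: list[str]   # EU HLEG requirements it touches

    def validate(self) -> None:
        """Check that every mapping targets a real EU HLEG requirement."""
        unknown = set(self.mapped_requirements) - set(EU_HLEG_REQUIREMENTS)
        if unknown:
            raise ValueError(f"Not EU HLEG requirements: {unknown}")


# Hypothetical example issue from a healthcare use case.
issue = EthicalIssue(
    description="Clinicians may over-rely on the model's risk score.",
    scenario="Emergency-call triage support",
    mapped_requirements=["Human agency and oversight", "Transparency"],
)
issue.validate()
```

In the actual process this mapping is produced through expert discussion, not automated checks; the sketch only shows the shape of the output.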

Students will work in small groups and will evaluate the trustworthiness of real AI systems in various domains, e.g., healthcare, government and public administration, and justice.


Syllabus

1. Fundamental rights as moral and legal entitlements

2. From fundamental rights to ethical principles

3. Introduction to the EU guidelines for Trustworthy AI

3.1. Trustworthy AI Ethical principles:

– Respect for human autonomy

– Prevention of harm

– Fairness

– Explicability

3.2 Ethical Tensions and Trade-offs

3.3. Seven Requirements (+ sub-requirements) for Trustworthy AI

– Human agency and oversight. Including fundamental rights, human agency and human oversight

– Technical robustness and safety. Including resilience to attack and security, fallback plan and general safety

– Privacy and data governance. Including respect for privacy, quality and integrity of data, and access to data

– Transparency. Including traceability, explainability and communication

– Diversity, non-discrimination and fairness. Including the avoidance of unfair bias, accessibility and universal design, and stakeholder participation

– Societal and environmental wellbeing. Including sustainability and environmental friendliness, social impact, society and democracy

– Accountability. Including auditability, minimisation and reporting of negative impact, trade-offs and redress.

4. Assessing Trustworthy AI in Practice.

4.1 The Z-Inspection® process in detail

– Set Up Phase

– Assess Phase

– Resolution Phase

4.2 Additional Frameworks

The Claims, Arguments and Evidence (CAE) framework

4.3 Tool 

– The ALTAI Web tool


Schedule

September 12: Get-together in presence. Introduction to the course.

from 11 till 12.15 (Korean time).

in presence. SNU Graduate School of Data Science, Bldg. 942, Room # 302

Lesson slides: Zicari.SNUAIEthicsCourse.INTRO.Fall.2023

Watch YouTube Video: The Ethics of Artificial Intelligence (AI) (Dr. Emmanuel Goffi) [video]


September 13: EU Ethics Guidelines for Trustworthy AI. Part I

from 11 till 12.15 (Korean time).

Lesson slides: EUFramework.SNU.2023.

Readings:

Ethics Guidelines for Trustworthy AI. Independent High-Level Expert Group on Artificial Intelligence. European Commission, 8 April 2019. Link to .PDF

Hodges (2016) – Ethical Business Regulation: Understanding the Evidence. Link to .PDF


September 19: EU Ethics Guidelines for Trustworthy AI. Part II

from 11 till 12.15 (Korean time). Zoom. 

Lesson slides: EUFramework.SNU.2023.

Reading: Notes on EU Treaties and the concept of Proportionality (Prof. Heejin Kim): NotesEUTreatiesandProportionality


September 20: Z-Inspection® Process: An Overview  

from 11 till 12.15 (Korean time).  

in presence. SNU Graduate School of Data Science, Bldg. 942, Room # 302

Lesson slides: SNU-Z-Inspection.Overview2023

Readings: Z-Inspection®: A Process to Assess Trustworthy AI. DOWNLOAD THE PAPER


September 26: Q&As, team building, selection of AI systems.

from 11 till 12.15 (Korean time).

in presence. SNU Graduate School of Data Science, Bldg. 942, Room # 302


September 27: Q&As, team building, selection of AI systems: final composition of teams and final selection of AI systems.

from 11 till 12.15 (Korean time).

in presence. SNU Graduate School of Data Science, Bldg. 942, Room # 302


October 3: Z-Inspection®: Socio-Technical Scenarios

from 11 till 12.15 (Korean time). Zoom.

Lesson slides: SNU-Z-Inspection.SocioTechnicalScenarios.2023

Readings:

– Ethical Framework for Designing Autonomous Intelligent Systems. J. Leikas et al. J. of Open Innovation, 2019, 5, 1. Link

– How to Assess Trustworthy AI in Practice. Download the full report as .PDF


October 4: Claim, Arguments and Evidence

from 11 till 12.15 (Korean time). Zoom. 

Lesson slides: SNU-Z-Inspection.Claim Arguments and Evidence.2023

Readings:

– Claims, Arguments and Evidence (CAE), doi: 10.1109/ISSREW.2014.72. LINK

– Brundage et al. (2020) – Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims. LINK to .PDF


October 10: Ethical Tensions and Trade-offs

from 11 till 12.15 (Korean time). Zoom.

Lesson slides: SNU-Z-Inspection.EthicalTensions.2023

Readings:

– Ethical and societal implications of algorithms, data, and artificial intelligence: a roadmap for research. Whittlestone et al. (2019) – Link


October 11: Workshop (Claims, Arguments and Evidence / Ethical Tensions)

from 11 till 12.15 (Korean time).

in presence. SNU Graduate School of Data Science, Bldg. 942, Room # 302

Readings:

Claims, Arguments, Evidence (CAE) framework. Source: Adelard LLP (2020). A short illustrative code sketch of the CAE structure follows the resource list below.

CAE Concepts: https://claimsargumentsevidence.org/notations/claims-arguments-evidence-cae/

CAE CONCISE GUIDANCE: https://claimsargumentsevidence.org/notations/concise-guidance/

CAE BUILDING BLOCKS: https://claimsargumentsevidence.org/notations/cae-building-blocks/

Helping Hand – CAE Framework: https://claimsargumentsevidence.org/notations/helping-hand/

Study Medical Devices Safety and Assurance: https://claimsargumentsevidence.org/medical-devices/

DOWNLOADABLE RESOURCES: https://claimsargumentsevidence.org/resources/downloadable-resources/

TOOLS FOR CASES: https://claimsargumentsevidence.org/resources/tools-for-cases/
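To make the CAE structure concrete, here is a minimal, purely illustrative Python sketch of an assurance case as a tree of claims supported by arguments, which are in turn backed by evidence. The class names, the naive support rule, and the example fairness claim are the editor's assumptions for illustration; they are not part of the Adelard CAE notation or tooling:

```python
# Minimal, purely illustrative sketch of a CAE-style assurance case.
from dataclasses import dataclass, field


@dataclass
class Evidence:
    """A piece of evidence supporting an argument (e.g. a test report)."""
    description: str


@dataclass
class Argument:
    """An argument explaining why the evidence supports the claim."""
    description: str
    evidence: list[Evidence] = field(default_factory=list)


@dataclass
class Claim:
    """A claim about a property of the system, possibly decomposed
    into subclaims."""
    statement: str
    arguments: list[Argument] = field(default_factory=list)
    subclaims: list["Claim"] = field(default_factory=list)

    def is_supported(self) -> bool:
        """Deliberately naive rule: a claim counts as supported if it has
        at least one argument or subclaim, every argument is backed by
        evidence, and every subclaim is itself supported."""
        args_ok = bool(self.arguments or self.subclaims) and all(
            a.evidence for a in self.arguments
        )
        subs_ok = all(c.is_supported() for c in self.subclaims)
        return args_ok and subs_ok


# Hypothetical example: a fairness claim about an AI triage system.
claim = Claim(
    statement="The triage model does not systematically disadvantage "
              "any patient group.",
    arguments=[
        Argument(
            description="Subgroup performance was compared across "
                        "demographic groups.",
            evidence=[Evidence("Disaggregated evaluation report, test set v2.")],
        )
    ],
)
print(claim.is_supported())  # True: the single argument is backed by evidence
```

In a real assessment, whether evidence actually supports a claim is a matter of expert judgment and structured argumentation, not a boolean check; the sketch only shows the shape of the case.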


October 17: Z-Inspection®: Mapping to EU Trustworthy AI Framework

from 11 till 12.15 (Korean time). Zoom.

Lesson slides: SNU-Z-Inspection.Mappings2023

Readings:

On Assessing Trustworthy AI in Healthcare. Machine Learning as a Supportive Tool to Recognize Cardiac Arrest in Emergency Calls. Front. Hum. Dyn., Human and Artificial Collaboration for Medical Best Practices, 08 July 2021 | VIEW ORIGINAL RESEARCH article


October 18: Z-Inspection®: Mapping to EU Trustworthy AI Framework, Part II. Use Case: “On Assessing Trustworthy AI in Healthcare. Machine Learning as a Supportive Tool to Recognize Cardiac Arrest in Emergency Calls.”

from 11 till 12.15 (Korean time). Zoom.

Lesson slides: SNU-Z-Inspection.Mappings2023



Mid-Term Report: due October 25


October 24: Q&As, review of the AI systems selected by the teams.

from 11 till 12.15 (Korean time).

in presence. SNU Graduate School of Data Science, Bldg. 942, Room # 302

Assignment: Use the ALTAI web tool

You can watch an ALTAI demo (2021). YouTube video: https://youtu.be/85eSIuB4M0s

To register for the web tool: LINK

Step-by-step instructions on how to use the ALTAI web tool: ALTAI Intro


October 25: Z-Inspection®: Trade-offs and Giving Recommendations

from 11 till 12.15 (Korean time). Zoom. 

Lesson slides : 

Readings:

– Assessing Trustworthy AI in times of COVID-19. Deep Learning for predicting a multi-regional score conveying the degree of lung compromise in COVID-19 patients. IEEE Transactions on Technology and Society, 2022. Download Early Access Preview Version of the Article as .PDF.


October 31: Workshop I (Claims, Arguments and Evidence / Ethical Tensions)

from 11 till 12.15 (Korean time).

in presence. SNU Graduate School of Data Science, Bldg. 942, Room # 302


November 1: Workshop II (using the ALTAI Web tool)

from 11 till 12.15 (Korean time).

in presence. SNU Graduate School of Data Science, Bldg. 942, Room # 302


November 7: Q&As with teams

in presence. SNU Graduate School of Data Science, Bldg. 942, Room # 302

from 11 till 12.15 (Korean time).


November 8: Workshop III

from 11 till 12.15 (Korean time). Zoom. 


November 14: Workshop IV

from 11 till 12.15 (Korean time).

in presence. SNU Graduate School of Data Science, Bldg. 942, Room # 302


November 15: Teams presenting their work so far.

from 11 till 12.15 (Korean time).

in presence. SNU Graduate School of Data Science, Bldg. 942, Room # 302

Requirements for the team presentation:

  • Each team will give a PowerPoint presentation.
  • Time limit: 7 minutes.
  • Content of the presentation:
  1. Brief introduction of the team members
  2. Brief introduction to the AI system chosen
  3. Main results obtained in the Mid-Term Report
  4. Next steps: what will be done for the Final Report

November 22: Teams presenting their work so far.

from 11 till 12.15 (Korean time).

in presence. SNU Graduate School of Data Science, Bldg. 942, Room # 302

Requirements for the team presentation:

  • Each team will give a PowerPoint presentation.
  • Time limit: 7 minutes.
  • Content of the presentation:
  1. Brief introduction of the team members
  2. Brief introduction to the AI system chosen
  3. Main results obtained in the Mid-Term Report
  4. Next steps: what will be done for the Final Report

Final Report: due December 2


Resources

EU Trustworthy AI Guidelines

Ethics Guidelines for Trustworthy AI. Independent High-Level Expert Group on Artificial Intelligence. European Commission, 8 April 2019. Link to .PDF

WHITE PAPER. On Artificial Intelligence – A European approach to excellence and trust. European Commission, Brussels, 19.2.2020 COM(2020) 65 final. Link to .PDF


EU Draft Proposed AI Law

– Proposal for a REGULATION OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL LAYING DOWN HARMONISED RULES ON ARTIFICIAL INTELLIGENCE (ARTIFICIAL INTELLIGENCE ACT) AND AMENDING CERTAIN UNION LEGISLATIVE ACTS. LINK


The Assessment List for Trustworthy AI (ALTAI) Webtool. LINK

To Register: LINK


Socio-Technical Scenarios.

Ethical Framework for Designing Autonomous Intelligent Systems. J Leikas et al. J. of Open Innovation, 2019, 5, 1. Link


Catalog of Ethical Tensions

– Ethical and societal implications of algorithms, data, and artificial intelligence: a roadmap for research. Whittlestone et al. (2019) – Link


Claims Arguments and Evidence (CAE)

R. Bloomfield and K. Netkachova, “Building Blocks for Assurance Cases,” 2014 IEEE International Symposium on Software Reliability Engineering Workshops, 2014, pp. 186-191, doi: 10.1109/ISSREW.2014.72. LINK

Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims.
April 2020, arXiv:2004.07213 [cs.CY]. LINK


The Z-Inspection® process

Z-Inspection®: A Process to Assess Trustworthy AI.

Roberto V. Zicari, John Brodersen, James Brusseau, Boris Düdder, Timo Eichhorn, Todor Ivanov, Georgios Kararigas, Pedro Kringen, Melissa McCullough, Florian Möslein, Karsten Tolle, Jesmin Jahan Tithi, Naveed Mushtaq, Gemma Roig, Norman Stürtz, Irmhild van Halem, Magnus Westerlund.

IEEE Transactions on Technology and Society, VOL. 2, NO. 2, JUNE 2021

Print ISSN: 2637-6415 Online ISSN: 2637-6415

Digital Object Identifier: 10.1109/TTS.2021.3066209

DOWNLOAD THE PAPER


– How to Assess Trustworthy AI in Practice.

Roberto V. Zicari, Julia Amann, Frédérick Bruneault, Megan Coffee, Boris Düdder, Eleanore Hickman, Alessio Gallucci, Thomas Krendl Gilbert, Thilo Hagendorff, Irmhild van Halem, Elisabeth Hildt, Georgios Kararigas, Pedro Kringen, Vince I. Madai, Emilie Wiinblad Mathez, Jesmin Jahan Tithi, Dennis Vetter, Magnus Westerlund, Renee Wurth

On behalf of the Z-Inspection® initiative (2022)

Abstract

This report is a methodological reflection on Z-Inspection®. Z-Inspection® is a holistic process used to evaluate the trustworthiness of AI-based technologies at different stages of the AI lifecycle. It focuses, in particular, on the identification and discussion of ethical issues and tensions through the elaboration of socio-technical scenarios. It uses the general European Union’s High-Level Expert Group’s (EU HLEG) guidelines for trustworthy AI. This report illustrates for both AI researchers and AI practitioners how the EU HLEG guidelines for trustworthy AI can be applied in practice. We share the lessons learned from conducting a series of independent assessments to evaluate the trustworthiness of AI systems in healthcare. We also share key recommendations and practical suggestions on how to ensure a rigorous trustworthy AI assessment throughout the life-cycle of an AI system.

Cite as: arXiv:2206.09887 [cs.CY]. The full report is available on arXiv. Link

You can download the full report as .PDF


Best Practices

How to assess an AI system already deployed

On Assessing Trustworthy AI in Healthcare. Machine Learning as a Supportive Tool to Recognize Cardiac Arrest in Emergency Calls

Roberto V. Zicari • James Brusseau • Stig Nikolaj Blomberg • Helle Collatz Christensen • Megan Coffee • Marianna B. Ganapini • Sara Gerke • Thomas Krendl Gilbert • Eleanore Hickman • Elisabeth Hildt • Sune Holm • Ulrich Kühne • Vince I. Madai • Walter Osika • Andy Spezzatti • Eberhard Schnebel • Jesmin Jahan Tithi • Dennis Vetter • Magnus Westerlund • Renee Wurth • Julia Amann • Vegard Antun • Valentina Beretta • Frédérick Bruneault • Erik Campano • Boris Düdder • Alessio Gallucci • Emmanuel Goffi • Christoffer Bjerre Haase • Thilo Hagendorff • Pedro Kringen • Florian Möslein • Davi Ottenheimer • Matiss Ozols • Laura Palazzani • Martin Petrin • Karin Tafur • Jim Tørresen • Holger Volland • Georgios Kararigas

Front. Hum. Dyn., Human and Artificial Collaboration for Medical Best Practices, 08 July 2021 |

VIEW ORIGINAL RESEARCH article


How to work with engineers in the co-design of an AI system

– Co-Design of a Trustworthy AI System in Healthcare: Deep Learning Based Skin Lesion Classifier.

Roberto V. Zicari, Sheraz Ahmed, Julia Amann, Stephan Alexander Braun, John Brodersen, Frédérick Bruneault, James Brusseau, Erik Campano, Megan Coffee, Andreas Dengel, Boris Düdder, Alessio Gallucci, Thomas Krendl Gilbert, Philippe Gottfrois, Emmanuel Goffi, Christoffer Bjerre Haase, Thilo Hagendorff, Eleanore Hickman, Elisabeth Hildt, Sune Holm, Pedro Kringen, Ulrich Kühne, Adriano Lucieri, Vince I. Madai, Pedro A. Moreno-Sánchez, Oriana Medlicott, Matiss Ozols, Eberhard Schnebel, Andy Spezzatti, Jesmin Jahan Tithi, Steven Umbrello, Dennis Vetter, Holger Volland, Magnus Westerlund and Renee Wurth.

Front. Hum. Dyn. | Human and Artificial Collaboration for Medical Best Practices, July 13, 2021

VIEW ORIGINAL RESEARCH article


– Assessing Trustworthy AI in times of COVID-19. Deep Learning for predicting a multi-regional score conveying the degree of lung compromise in COVID-19 patients

Himanshi Allahabadi, Julia Amann, Isabelle Balot, Andrea Beretta, Charles Binkley, Jonas Bozenhard, Frédérick Bruneault, James Brusseau, Sema Candemir, Luca Alessandro Cappellini, Genevieve Fieux Castagnet, Subrata Chakraborty, Nicoleta Cherciu, Christina Cociancig, Megan Coffee, Irene Ek, Leonardo Espinosa-Leal, Davide Farina, Geneviève Fieux-Castagnet, Thomas Frauenfelder, Alessio Gallucci, Guya Giuliani, Adam Golda, Irmhild van Halem, Elisabeth Hildt, Sune Holm, Georgios Kararigas, Sebastien A. Krier, Ulrich Kühne, Francesca Lizzi, Vince I. Madai, Aniek F. Markus, Serg Masis, Emilie Wiinblad Mathez, Francesco Mureddu, Emanuele Neri, Walter Osika, Matiss Ozols, Cecilia Panigutti, Brendan Parent, Francesca Pratesi, Pedro A. Moreno-Sánchez, Giovanni Sartor, Mattia Savardi, Alberto Signoroni, Hanna Sormunen, Andy Spezzatti, Adarsh Srivastava, Annette F. Stephansen, Lau Bee Theng, Jesmin Jahan Tithi, Jarno Tuominen, Steven Umbrello, Filippo Vaccher, Dennis Vetter, Magnus Westerlund, Renee Wurth, Roberto V. Zicari

in IEEE Transactions on Technology and Society, 2022.

Digital Object Identifier: 10.1109/TTS.2022.3195114

Abstract—

We present the results of a post-hoc self-assessment to evaluate the trustworthiness of an AI system for predicting a multi-regional score conveying the degree of lung compromise in COVID-19 patients, developed and verified by an interdisciplinary team with members from academia, public hospitals, and industry in time of pandemic.

The AI system aims to help radiologists to estimate and communicate the severity of damage in a patient’s lung from Chest X-rays. It has been experimentally deployed in the radiology department of the ASST Spedali Civili clinic in Brescia (Italy) since December 2020 during pandemic time.

The methodology we have applied for our post-hoc assessment, called Z-Inspection®, uses socio-technical scenarios to identify ethical, technical and domain-specific issues in the use of the AI system in the context of the pandemic.

Index Terms—Artificial Intelligence, Case Study, COVID-19, Ethical Trade-off, Explainable AI, Healthcare, Pandemic, Trust, Trustworthy AI, Radiology, Ethics, Z-Inspection®

To view the article abstract page, please use this URL 

Download Early Access Preview Version of the Article as .PDF.

(Citation information: DOI 10.1109/TTS.2022.3195114)


– Using Sentence Embeddings and Semantic Similarity for Seeking Consensus when Assessing Trustworthy AI

Dennis Vetter, Jesmin Jahan Tithi, Magnus Westerlund, Roberto V. Zicari, Gemma Roig. LINK

Cite as: arXiv:2208.04608
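The idea in the title of this paper can be illustrated with a minimal sketch: embedding assessors' statements as vectors and comparing them by cosine similarity to surface points of agreement. The model name and the example statements below are the editor's assumptions for illustration; see the paper for the actual method.

```python
# Illustrative sketch: sentence embeddings + cosine similarity to find
# semantically similar statements made by different assessors.
from sentence_transformers import SentenceTransformer, util

# Hypothetical statements raised independently by three assessors.
statements = [
    "The training data may under-represent elderly patients.",
    "Older patients are likely missing from the dataset.",
    "The user interface is hard to read in bright sunlight.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # a common small model
embeddings = model.encode(statements, convert_to_tensor=True)

# Pairwise cosine similarities; high values suggest two assessors are
# raising the same underlying issue, a candidate point of consensus.
similarity = util.cos_sim(embeddings, embeddings)
for i in range(len(statements)):
    for j in range(i + 1, len(statements)):
        print(f"({i},{j}) similarity = {similarity[i][j].item():.2f}")
```

On such examples, the first two statements should score markedly higher against each other than against the third, which is the signal used when seeking consensus.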

(Optional) Fundamental Human Rights

– The Fundamental Rights and Algorithm Impact Assessment (FRAIA) helps to map the risks to human rights posed by the use of algorithms and to take measures to address them. Link.
