Ethical Implications of AI: Assessing Trustworthy AI in Practice

—Series of Lectures—

Seoul National University


Remote Zoom sessions and in-person sessions

65% of the lectures are remote, 35% in person

Class schedule: every Tuesday and Thursday, from 5pm till 6.15pm Korean time (KST) (10.00am – 11.15am CET)

Duration of a class: 1 hour and 15 minutes.

The course starts on September 1, 2022 and ends on December 14, 2022.

NOTE: the first lesson will be on Thursday, September 15.

The course is worth 3 credits.

Course Coordinators: 

Prof. Roberto V. Zicari, SNU

Visiting Assistant Prof. Heejin Kim, SNU

Assistants: 

Sarah Hyojin, SNU

Dennis Vetter, Goethe University Frankfurt


Target students: SNU Master's and PhD students from interdisciplinary backgrounds (e.g. Computer Science, Data Science, Machine Learning, Law, Medicine, Social Science, Ethics, Public Policy, etc.)

How to get credit points

To receive the final credit points, you need to write a mid-term report and a final report at the end of the semester.


Course Description

Applications based on Machine Learning and/or Deep Learning carry specific (mostly unintentional) risks that are considered within AI ethics. As a consequence, the quest for trustworthy AI has become a central issue for governance and technology impact assessment efforts, and these efforts have intensified over the last four years, with a focus on identifying both ethical and legal principles.

As AI capabilities have grown exponentially, it has become increasingly difficult to determine whether model outputs or system behaviors protect the rights and interests of an ever-wider group of stakeholders, let alone evaluate them as ethical or legal, or as meeting goals of improving human welfare and freedom.

For example, what if decisions made using an AI-driven algorithm benefit some socially salient groups more than others?

And what if we fail to identify and prevent these inequalities because we cannot explain how decisions were derived?

Moreover, we also need to consider how the adoption of these new algorithms and the lack of knowledge and control over their inner workings may impact those in charge of making decisions.

This course will help students assess Trustworthy AI systems in practice by using the Z-Inspection® process.

The Z-Inspection® process is the result of 4.5 years of applied research by the Z-Inspection® initiative, a network of world-class experts led by Prof. Roberto V. Zicari.

Z-Inspection® is a holistic process used to evaluate the trustworthiness of AI-based technologies at different stages of the AI lifecycle. It focuses, in particular, on the identification and discussion of ethical issues and tensions through the elaboration of socio-technical scenarios. It uses the European Union's High-Level Expert Group (EU HLEG) guidelines for trustworthy AI.

Students will work in small groups to evaluate the trustworthiness of real AI systems in various domains, e.g. healthcare, government and public administration, and justice.


Syllabus

1. Fundamental rights as moral and legal entitlements

2. From fundamental rights to ethical principles

3. Introduction to the EU guidelines for Trustworthy AI

3.1. Trustworthy AI Ethical principles:

– Respect for human autonomy

– Prevention of harm

– Fairness

– Explicability

3.2 Ethical Tensions and Trade-offs

3.3. Seven Requirements (+ sub-requirements) for Trustworthy AI

– Human agency and oversight. Including fundamental rights, human agency and human oversight

– Technical robustness and safety. Including resilience to attack and security, fallback plan and general safety

– Privacy and data governance. Including respect for privacy, quality and integrity of data, and access to data

– Transparency. Including traceability, explainability and communication

– Diversity, non-discrimination and fairness. Including the avoidance of unfair bias, accessibility and universal design, and stakeholder participation

– Societal and environmental well-being. Including sustainability and environmental friendliness, social impact, society and democracy

– Accountability. Including auditability, minimisation and reporting of negative impact, trade-offs and redress.

4. Assessing Trustworthy AI in Practice.

4.1 The Z-Inspection® process in detail

– Set Up Phase

– Assess Phase

– Resolution Phase

4.2 Additional Frameworks

– The Claims, Arguments and Evidence (CAE) framework

– The Fundamental Rights and Algorithm Impact Assessment (FRAIA)

4.3 Tools

– The ALTAI Web tool


Schedule

September 15, 2022, from 5pm till 6.15pm (Korean time). Get-together in person. SNU Graduate School of Data Science, Bldg. 942, Room #302


September 20: Introduction to the course (Zicari).

from 5pm till 6.15pm (Korean time). Zoom

Lesson slides: Ethical Implications of AI: Assessing Trustworthy AI in Practice: Introduction (Prof. Roberto V. Zicari): Zicari.SNUAIEthicsIntro.2022

Recording of the Lecture on Tuesday, September 20: https://youtu.be/HqEHranj3RA

Watch YouTube Video:

The Ethics of Artificial Intelligence (AI) (Dr. Emmanuel Goffi) [video]

Readings:

– Ethics Guidelines for Trustworthy AI. Independent High-Level Expert Group on Artificial Intelligence. European Commission, 8 April 2019. Link to .PDF


September 22: EU Ethics Guidelines for Trustworthy AI (Zicari)

from 5pm till 6.15pm (Korean time). Zoom. 

Recording of the Lecture on Thursday, September 22: https://youtu.be/ACKFKOccipE


September 27: EU Ethics Guidelines for Trustworthy AI (Zicari)

from 5pm till 6.15pm (Korean time).  Zoom.

Lesson slides (Sept. 22 and Sept. 27): EUFramework.SNU.2022

Notes on EU Treaties and the concept of Proportionality (Kim): NotesEUTreatiesandProportionality

Recording of the Lecture on Tuesday, September 27 (YouTube link): https://youtu.be/H1dvXImYtWY


September 29: Q&A, team building, selection of AI systems.

from 5pm till 6.15pm (Korean time). In person. SNU Graduate School of Data Science, Bldg. 942, Room #302

Readings:

– Ethical Framework for Designing Autonomous Intelligent Systems. J Leikas et al. J. of Open Innovation, 2019, 5, 1. Link

– Z-Inspection®: A Process to Assess Trustworthy AI. DOWNLOAD THE PAPER


October 4: Z-Inspection® Overview and Socio-Technical Scenarios (Zicari)

from 5pm till 6.15pm (Korean time). Zoom. 

Recording of the Lecture on October 4: https://youtu.be/Z1dANbEHoPc

Readings:

–  Ethical and societal implications of algorithms, data, and artificial intelligence: a roadmap for research. Whittlestone et al. (2019) – Link

– How to Assess Trustworthy AI in Practice. Download the full report as .PDF


October 6: Claims, Arguments and Evidence; Ethical Tensions (Zicari)

from 5pm till 6.15pm (Korean time). Zoom. 

Recording of the Lecture on October 6: https://www.youtube.com/watch?v=w0-NLazwn_I

Lesson slides (October 4 and 6): SNU-Z-Inspection.PartI.2022


October 11: Ethical Tensions and Trade-offs (Zicari)

from 5pm till 6.15pm (Korean time). Zoom. 

Lesson slides (October 11): SNU-Z-Inspection.PartIII.2022

Recording of the Lecture on October 11: https://youtu.be/ZxPupf5waEQ

Readings:

– Claims, Arguments and Evidence (CAE), ISSREW.2014.72. LINK

– On Assessing Trustworthy AI in Healthcare. Machine Learning as a Supportive Tool to Recognize Cardiac Arrest in Emergency Calls. Front. Hum. Dyn., Human and Artificial Collaboration for Medical Best Practices, 08 July 2021 | VIEW ORIGINAL RESEARCH article

–  Co-Design of a Trustworthy AI System in Healthcare: Deep Learning Based Skin Lesion Classifier. Front. Hum. Dyn. |Human and Artificial Collaboration for Medical Best Practices, July 13, 2021 VIEW ORIGINAL RESEARCH article


October 13:  The ALTAI web tool (Vetter)

NOTE: there will be no Zoom lesson.

You can watch an ALTAI Demo (2021). Youtube video: https://youtu.be/85eSIuB4M0s 

To register for the web tool:

Assessment List for Trustworthy AI (ALTAI) Webtool. LINK

To Register: LINK

Step by step instructions on how to use the ALTAI web tool: ALTAI Intro


October 18:  Introduction to Fundamental Rights (Kim)

from 5pm till 6.15pm (Korean time). Zoom. 

Recording of the Lecture on October 18:  https://youtu.be/op-9U3USgMM

Lesson slides (October 18): GSDS Part 1



October 20: The Fundamental Rights and Algorithm Impact Assessment (FRAIA) (Kim)

from 5pm till 6.15pm (Korean time). Zoom. 

Lesson slides (October 20): GSDS Part 2 copy

Recording of the Lecture on October 20: https://youtu.be/mb4t7xnGHlE


October 25: Q&As, review of the AI systems selected by the teams.

from 5pm till 6.15pm (Korean time). In person. SNU Graduate School of Data Science, Bldg. 942, Room #302



Mid-Term Report: due October 28


October 27: The Fundamental Rights and Algorithm Impact Assessment (FRAIA) (Kim)

from 5pm till 6.15pm (Korean time). Zoom. 

Lesson slides (October 27): GSDS Part 3

Recording of the Lecture on October 27: https://youtu.be/Sm19Zzz7R8g

Readings:

– How to Assess Trustworthy AI in Practice. Download the full report as .PDF


November 1: AI regulation in Korea. Guest Lecture: Prof. Seongwook Heo, Professor of Law, SNU

from 5pm till 6.15pm (Korean time). Zoom.  

Recording of the Lecture on November 1: https://youtu.be/ZNu-OkpZJ2M

Readings:

– EU Draft Proposed AI Law. LINK

Resources (in Korean only):

1. 사람이 중심이 되는 AI 윤리기준 (과기정통부) [Human-Centered AI Ethics Standards, Ministry of Science and ICT]

2. 이용자 중심의 지능정보사회를 위한 원칙 (방통위) [Principles for a User-Centered Intelligent Information Society, Korea Communications Commission]

3. 인공지능 개발과 활용에 관한 인권 가이드라인 (국가인권위원회) [Human Rights Guidelines on the Development and Use of Artificial Intelligence, National Human Rights Commission of Korea]


November 3: Z-Inspection®: Mapping to EU Trustworthy AI Framework (Zicari)

from 5pm till 6.15pm (Korean time). Zoom. 

Lesson slides (November 3): SNU-Z-Inspection.PartIV

The recording is available here: https://snu-ac-kr.zoom.us/rec/share/pxVBwsUlhDZ7puE8z1mCCZ0B_TAhLQp0LA6SOwvKQxOlkk89Lm88vqL25M-NraZ3.9qHSfzMHenQqezFs?startTime=1665532390000

Readings:

– On Assessing Trustworthy AI in Healthcare. Machine Learning as a Supportive Tool to Recognize Cardiac Arrest in Emergency Calls. Front. Hum. Dyn., Human and Artificial Collaboration for Medical Best Practices, 08 July 2021 | VIEW ORIGINAL RESEARCH article


November 8: EU AI Act draft proposal for AI regulation. Guest Lecture: Prof. Dr. Florian Möslein, LL.M. (London), Professorship for Civil Law, German and European Business Law, Philipps-Universität Marburg

from 5pm till 6.15pm (Korean time). Zoom. 

Recording of the Lecture on November 8: https://youtu.be/3uNsNjXWqgQ

Lesson slides: Artificial Intelligence Act_Seoul Nov 22

Readings:

– “How to Assess Trustworthy AI in practice” (Zicari), Graduate School of Data Science, Seoul National University, October 11, 2022:

https://z-inspection.org/wp-content/uploads/2022/10/TalkSNU.October12.pdf


November 10: Q&As with teams (Kim)

In person. Bldg. 942, Room #302

Readings:

– Assessing Trustworthy AI in times of COVID-19. Deep Learning for predicting a multi-regional score conveying the degree of lung compromise in COVID-19 patients. IEEE Transactions on Technology and Society, 2022. Download Early Access Preview Version of the Article as .PDF.


November 15: Z-Inspection®: Giving Recommendations (Zicari)

from 5pm till 6.15pm (Korean time). Zoom. 

Recording of the Lecture on November 15: https://youtu.be/B9nXwI2WyZc

Lesson slides (November 15): SNU-Z-Inspection.PartV


November 17: Q&A in person (NOT team presentations)

Bldg. 942, Room #302


November 22: Teams present their work so far.

In person. Bldg. 942, Room #302

Team presentations will be on Tuesday, Nov. 22 and Thursday, Nov. 24.

The requirements for the team presentations are as follows.

Each team will give a PowerPoint presentation.

Time limit: 7 minutes.

Content of the presentation:

  1. Brief introduction of the team members
  2. Brief introduction to the AI system chosen
  3. Main results obtained in the Mid-Term report
  4. Next steps: what will be done for the Final Report

November 24: Teams present their work so far.

In person. Bldg. 942, Room #302


Final Report: due December 2


Resources

EU Trustworthy AI Guidelines

– Ethics Guidelines for Trustworthy AI. Independent High-Level Expert Group on Artificial Intelligence. European Commission, 8 April 2019. Link to .PDF

– WHITE PAPER. On Artificial Intelligence – A European approach to excellence and trust. European Commission, Brussels, 19.2.2020 COM(2020) 65 final. Link to .PDF


EU Draft Proposed AI Law

–  Proposal for a REGULATION OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL LAYING DOWN HARMONISED RULES ON ARTIFICIAL INTELLIGENCE (ARTIFICIAL INTELLIGENCE ACT) AND AMENDING CERTAIN UNION LEGISLATIVE ACTS. LINK


The Assessment List for Trustworthy AI (ALTAI) Webtool. LINK

To Register: LINK


Socio-Technical Scenarios

–Ethical Framework for Designing Autonomous Intelligent Systems. J Leikas et al. J. of Open Innovation, 2019, 5, 1. Link


Catalog of Ethical Tensions

–  Ethical and societal implications of algorithms, data, and artificial intelligence: a roadmap for research. Whittlestone et al. (2019) – Link


Claims Arguments and Evidence (CAE)

R. Bloomfield and K. Netkachova, “Building Blocks for Assurance Cases,” 2014 IEEE International Symposium on Software Reliability Engineering Workshops, 2014, pp. 186-191, doi: 10.1109/ISSREW.2014.72. LINK

Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims
April 2020  arXiv:2004.07213 [cs.CY]. LINK 


Fundamental Human Rights

– The Fundamental Rights and Algorithm Impact Assessment (FRAIA) helps to map the risks to human rights in the use of algorithms and to take measures to address them. Link.


The Z-Inspection® process

Z-Inspection®: A Process to Assess Trustworthy AI.

Roberto V. Zicari, John Brodersen, James Brusseau, Boris Düdder, Timo Eichhorn, Todor Ivanov, Georgios Kararigas, Pedro Kringen, Melissa McCullough, Florian Möslein, Karsten Tolle, Jesmin Jahan Tithi, Naveed Mushtaq, Gemma Roig, Norman Stürtz, Irmhild van Halem, Magnus Westerlund.

IEEE Transactions on Technology and Society, VOL. 2, NO. 2, JUNE 2021

Print ISSN: 2637-6415 Online ISSN: 2637-6415

Digital Object Identifier: 10.1109/TTS.2021.3066209

DOWNLOAD THE PAPER


–  How to Assess Trustworthy AI in Practice.

Roberto V. Zicari, Julia Amann, Frédérick Bruneault, Megan Coffee, Boris Düdder, Eleanore Hickman, Alessio Gallucci, Thomas Krendl Gilbert, Thilo Hagendorff, Irmhild van Halem, Elisabeth Hildt, Georgios Kararigas, Pedro Kringen, Vince I. Madai, Emilie Wiinblad Mathez, Jesmin Jahan Tithi, Dennis Vetter, Magnus Westerlund, Renee Wurth

On behalf of the Z-Inspection® initiative (2022)

Abstract

This report is a methodological reflection on Z-Inspection®. Z-Inspection® is a holistic process used to evaluate the trustworthiness of AI-based technologies at different stages of the AI lifecycle. It focuses, in particular, on the identification and discussion of ethical issues and tensions through the elaboration of socio-technical scenarios. It uses the European Union's High-Level Expert Group (EU HLEG) guidelines for trustworthy AI. This report illustrates for both AI researchers and AI practitioners how the EU HLEG guidelines for trustworthy AI can be applied in practice. We share the lessons learned from conducting a series of independent assessments to evaluate the trustworthiness of AI systems in healthcare. We also share key recommendations and practical suggestions on how to ensure a rigorous trustworthy AI assessment throughout the life-cycle of an AI system.

Cite as: arXiv:2206.09887 [cs.CY] The full report is available on arXiv. Link

You can download the full report as .PDF


Best Practices

How to assess an AI already deployed

On Assessing Trustworthy AI in Healthcare. Machine Learning as a Supportive Tool to Recognize Cardiac Arrest in Emergency Calls

Roberto V. Zicari • James Brusseau • Stig Nikolaj Blomberg • Helle Collatz Christensen • Megan Coffee • Marianna B. Ganapini • Sara Gerke • Thomas Krendl Gilbert • Eleanore Hickman • Elisabeth Hildt • Sune Holm • Ulrich Kühne • Vince I. Madai • Walter Osika • Andy Spezzatti • Eberhard Schnebel • Jesmin Jahan Tithi • Dennis Vetter • Magnus Westerlund • Renee Wurth • Julia Amann • Vegard Antun • Valentina Beretta • Frédérick Bruneault • Erik Campano • Boris Düdder • Alessio Gallucci • Emmanuel Goffi • Christoffer Bjerre Haase • Thilo Hagendorff • Pedro Kringen • Florian Möslein • Davi Ottenheimer • Matiss Ozols • Laura Palazzani • Martin Petrin • Karin Tafur • Jim Tørresen • Holger Volland • Georgios Kararigas

Front. Hum. Dyn., Human and Artificial Collaboration for Medical Best Practices, 08 July 2021

VIEW ORIGINAL RESEARCH article


How to work with engineers in co-design of an AI

–  Co-Design of a Trustworthy AI System in Healthcare: Deep Learning Based Skin Lesion Classifier.

Roberto V. Zicari, Sheraz Ahmed, Julia Amann, Stephan Alexander Braun, John Brodersen, Frédérick Bruneault, James Brusseau, Erik Campano, Megan Coffee, Andreas Dengel, Boris Düdder, Alessio Gallucci, Thomas Krendl Gilbert, Philippe Gottfrois, Emmanuel Goffi, Christoffer Bjerre Haase, Thilo Hagendorff, Eleanore Hickman, Elisabeth Hildt, Sune Holm, Pedro Kringen, Ulrich Kühne, Adriano Lucieri, Vince I. Madai, Pedro A. Moreno-Sánchez, Oriana Medlicott, Matiss Ozols, Eberhard Schnebel, Andy Spezzatti, Jesmin Jahan Tithi, Steven Umbrello, Dennis Vetter, Holger Volland, Magnus Westerlund and Renee Wurth.

Front. Hum. Dyn., Human and Artificial Collaboration for Medical Best Practices, July 13, 2021

VIEW ORIGINAL RESEARCH article


– Assessing Trustworthy AI in times of COVID-19. Deep Learning for predicting a multi-regional score conveying the degree of lung compromise in COVID-19 patients

Himanshi Allahabadi, Julia Amann, Isabelle Balot, Andrea Beretta, Charles Binkley, Jonas Bozenhard, Frédérick Bruneault, James Brusseau, Sema Candemir, Luca Alessandro Cappellini, Genevieve Fieux Castagnet, Subrata Chakraborty, Nicoleta Cherciu, Christina Cociancig, Megan Coffee, Irene Ek, Leonardo Espinosa-Leal, Davide Farina, Geneviève Fieux-Castagnet, Thomas Frauenfelder, Alessio Gallucci, Guya Giuliani, Adam Golda, Irmhild van Halem, Elisabeth Hildt, Sune Holm, Georgios Kararigas, Sebastien A. Krier, Ulrich Kühne, Francesca Lizzi, Vince I. Madai, Aniek F. Markus, Serg Masis, Emilie Wiinblad Mathez, Francesco Mureddu, Emanuele Neri, Walter Osika, Matiss Ozols, Cecilia Panigutti, Brendan Parent, Francesca Pratesi, Pedro A. Moreno-Sánchez, Giovanni Sartor, Mattia Savardi, Alberto Signoroni, Hanna Sormunen, Andy Spezzatti, Adarsh Srivastava, Annette F. Stephansen, Lau Bee Theng, Jesmin Jahan Tithi, Jarno Tuominen, Steven Umbrello, Filippo Vaccher, Dennis Vetter, Magnus Westerlund, Renee Wurth, Roberto V. Zicari

in IEEE Transactions on Technology and Society, 2022.

Digital Object Identifier: 10.1109/TTS.2022.3195114

Abstract—

We present the results of a post-hoc self-assessment to evaluate the trustworthiness of an AI system for predicting a multi-regional score conveying the degree of lung compromise in COVID-19 patients, developed and verified by an interdisciplinary team with members from academia, public hospitals, and industry during the pandemic.

The AI system aims to help radiologists estimate and communicate the severity of damage in a patient's lungs from chest X-rays. It has been experimentally deployed in the radiology department of the ASST Spedali Civili clinic in Brescia (Italy) since December 2020, during the pandemic.

The methodology we have applied for our post-hoc assessment, called Z-Inspection®, uses socio-technical scenarios to identify ethical, technical and domain-specific issues in the use of the AI system in the context of the pandemic.

Index Terms—Artificial Intelligence, Case Study, COVID-19, Ethical Trade-off, Explainable AI, Healthcare, Pandemic, Trust, Trustworthy AI, Radiology, Ethics, Z-Inspection®

To view the article abstract page, please use this URL 

Download Early Access Preview Version of the Article as .PDF.

(Citation information: DOI 10.1109/TTS.2022.3195114)


– Using Sentence Embeddings and Semantic Similarity for Seeking Consensus when Assessing Trustworthy AI

Dennis Vetter, Jesmin Jahan Tithi, Magnus Westerlund, Roberto V. Zicari, Gemma Roig. LINK

Cite as: arXiv:2208.04608
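
For a flavor of the general approach behind this paper: each assessor's written statement is mapped to a vector with a sentence-embedding model, and cosine similarity between vectors is used to surface statements that likely describe the same underlying issue. Below is a minimal illustrative sketch, not the authors' code, assuming the open-source sentence-transformers library and its all-MiniLM-L6-v2 model; the statements and the similarity threshold are hypothetical.

# Minimal sketch: surfacing semantically similar assessment statements
# with sentence embeddings and cosine similarity.
# Requires: pip install sentence-transformers
from sentence_transformers import SentenceTransformer, util

# Hypothetical statements collected from different assessors
statements = [
    "The training data under-represents elderly patients.",
    "Older patients are not well represented in the dataset.",
    "Clinicians cannot see why the system flagged a call.",
    "The system offers no explanation for its predictions.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(statements, convert_to_tensor=True)

# Pairwise cosine similarities between all statements
scores = util.cos_sim(embeddings, embeddings)

# Report pairs above an illustrative threshold as candidate shared issues
THRESHOLD = 0.6
for i in range(len(statements)):
    for j in range(i + 1, len(statements)):
        if scores[i][j].item() > THRESHOLD:
            print(f"Possible shared issue (similarity {scores[i][j].item():.2f}):")
            print(f"  - {statements[i]}")
            print(f"  - {statements[j]}")

In practice, the choice of embedding model and the similarity threshold strongly affect which statements get grouped together and would need tuning for the assessment texts at hand.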

##