Ethical Implications of AI

Series of Lectures

Course Coordinators: 
Prof. Roberto V. Zicari,  
Prof. Heo Seong Wook

Assistants: 

Jaewon Chang,
Dennis Vetter,

Seoul National University

Remote:  Zoom video call

Class schedule: Mon. and Wed., 18:00-19:15 Korean time (KST) (10:00-11:15 CET)


The course starts on Wednesday, March 3, and ends on Wednesday, June 16 (16 weeks).

Course Outline (Korean; translated)

Welcome to the Ethical Implications of AI course!

AI is becoming a sophisticated tool in the hands of a variety of stakeholders, including political leaders. Some AI applications may raise new ethical and legal questions and, in general, have a significant impact on society (an impact that can be good or bad). The important questions for AI are how to keep it from going out of control, how to understand how its decisions are made, and what consequences this has for society at large. In this course, students learn the ethical implications of the use of Artificial Intelligence (AI), addressing questions such as: "What is the impact on society, humanity, and individuals? Does AI serve human beings?" Discussion of ethical issues, and interdisciplinary debate about them, is an essential part of professional development because it can establish a mature community. Through ethical reflection, students can gain orientation/competencies that will help them in ethical decision making. The course covers topics such as the seven key principles and requirements for trustworthy AI defined by the European Commission's AI experts: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental wellbeing; accountability. Divided into small groups, students learn how to assess trustworthy AI in practice using Z-Inspection (https://z-inspection.org).

Course Description

AI is becoming a sophisticated tool in the hands of a variety of stakeholders, including political leaders. Some AI applications may raise new ethical and legal questions, and in general have a significant impact on society (for good, for bad, or both).

People’s motivation plays a key role here. With AI, the important questions are how to prevent it from going out of control, how to understand how decisions are made, and what the consequences are for society at large.

Students will learn the ethical implications of the use of Artificial Intelligence (AI). 

What are the consequences for society? For human beings and individuals? Does AI serve humankind?

Discussion and debate of ethical issues is an essential part of professional development—both within and between disciplines—as it can establish a mature community of responsible practitioners.

Through ethical reflection students can gain orientation / competencies that will help them in their ethical decision making.

Students will work in small groups and learn to assess the use of an AI system in the healthcare domain.

The course will cover topics such as the seven key principles and requirements (values) for trustworthy AI, as defined by the European Commission’s High-Level Expert Group on AI:

  • Human agency and oversight,
  • Technical robustness and safety,
  • Privacy and data governance,
  • Transparency,
  • Diversity, non-discrimination and fairness,
  • Societal and environmental wellbeing,
  • Accountability.

Pre-requisites

Students should have an interest in reflecting on what is right or wrong, and it is assumed that they are capable of discussing a scenario and taking a view on whether an action is ethical.

We encourage students with different backgrounds, knowledge, and geographies to enroll in this course. The topic is highly interdisciplinary and therefore requires different points of view, expertise, and attitudes.

How to get credit points

Assignments – Each week, students will watch two video lessons and read two papers.

To receive the final credit points, you need to write a mid-term report and, at the end of the semester, a final report.

Recommended Lecture Schedule

MARCH 3: Intro Lesson 1 (Live)
(Prof. Roberto V. Zicari)

Copy of slides: https://z-inspection.org/wp-content/uploads/2021/03/Zicari.SNUAIEthicsCourse.2021.pdf

YouTube: Ethical Implications of AI – SNU 2021 – Live Lecture, March 3rd: LINK

Assignments of the week:

Paper 1: Whittlestone et al. (2019) – Ethical and societal implications of algorithms, data, and artificial intelligence: a roadmap for research
—–
Paper 2: Independent High-Level Expert Group on Artificial Intelligence (2019) – Ethics Guidelines for Trustworthy AI
[paper1]


 [paper2]
MARCH 8: Intro Lesson 2 (Live)
(Prof. Roberto V. Zicari)
Ethical Implications of AI – SNU 2021 – Live Lecture, March 8th
LINK (YouTube video)
MARCH 10: Intro Lesson 3 (Live)

The Ethics of Artificial Intelligence (AI) 
(Prof. Roberto V. Zicari)
Video (YouTube) Ethical Implications of AI – SNU 2021 – Live Lecture, March 10th


[video]
Assignments of the week:

Paper: High-Level Expert Group on Artificial Intelligence (2020) – Assessment List for Trustworthy Artificial Intelligence (ALTAI) for self-assessment
—–
Web tool: Try out the ALTAI web tool


 [paper]


[web tool]
MARCH 15: (Live) Demo of the ALTAI web tool (Dennis Vetter) and (Live) Intro Lesson 3 and Q&A (Roberto V. Zicari)


The Ethics of Artificial Intelligence (AI)
(Dr. Emmanuel Goffi)
ALTAI Live Demo. Presentations of the teams and the AI systems. YouTube video: https://youtu.be/85eSIuB4M0s


[video]
MARCH 17: (Live) Intro Lesson 4 and Q&A (Roberto V. Zicari)

Ethics, Moral Values, Humankind, Technology, AI Examples.
(Prof. Rafael A. Calvo)
YouTube Video of the Live Lecture: https://youtu.be/xO8V8oFHT4A

[video]
Assignments of the week:
Paper 1: Leikas et al. (2019) – Ethical Framework for Designing Autonomous Intelligent Systems
—– 
Paper 2: Rajkomar et al. (2018) – Ensuring Fairness in Machine Learning to Advance Health Equity 
[paper1]

[paper2]
MARCH 22: Ethics, Moral Values, Humankind, Technology, AI Examples.
(Dr. Emmanuel Goffi)
 [video]
MARCH 24: On the ethics of algorithmic decision-making in healthcare
(Dr. Thomas Grote)
Assignments of the week:
Paper 1: Grote & Berens (2019) – On the ethics of algorithmic decision-making in healthcare  
—– 
Paper 2: Wendehorst et al. (2019) – Opinion of the Data Ethics Commission
[paper1]

[paper2]
MARCH 29: (Live lecture) Assessing Trustworthy AI. Best Practice: Machine learning as a supportive tool to recognize cardiac arrest in emergency calls. Roberto V. Zicari

Fairness, Bias and Discrimination in AI 
(Prof. Gemma Roig) 
Live Lecture, March 29th
https://youtu.be/OtbVMKT6WEA

[video]
MARCH 31: (Live lecture) Assessing Trustworthy AI. Best Practice: Machine learning as a supportive tool to recognize cardiac arrest in emergency calls. Roberto V. Zicari

AI and Trust: Explainability, Transparency
(Prof. Dragutin Petkovic)
Live Lecture, March 31
https://youtu.be/CNDRi9-bA5w


[video p1] [video p2] [Q&A]
Assignments of the week:

Paper 1
“Z-Inspection®: A Process to Assess Ethical AI”
Roberto V. Zicari, John Brodersen, James Brusseau, Boris Düdder, Timo Eichhorn, Todor Ivanov, Georgios Kararigas, Pedro Kringen, Melissa McCullough, Florian Möslein, Karsten Tolle, Jesmin Jahan Tithi, Naveed Mushtaq, Gemma Roig, Norman Stürtz, Irmhild van Halem, Magnus Westerlund.
IEEE Transactions on Technology and Society, 2021.
ISSN: 2637-6415
DOI: 10.1109/TTS.2021.3066209
Link to download the paper:
https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9380498

——–
Paper 2: Hodges (2016) – Ethical Business Regulation: Understanding the Evidence

[paper2]

APRIL 5: AI Privacy, Responsibility, Accountability, Safety and Human-in-the-loop
(Dr. Magnus Westerlund)
[video]
APRIL 7: (Live lecture) Assessing Trustworthy AI. Best Practice: Machine learning as a supportive tool to recognize cardiac arrest in emergency calls. Roberto V. Zicari



Trustworthy AI: A Human-Centred Perspective 
(Dr. Christopher Burr)
Live Lecture YouTube: https://youtu.be/DXrFcDkN8CI
Copy of Slides of Live Lectures March 29, March 31 and April 7: https://z-inspection.org/wp-content/uploads/2021/04/Zicari.112-ML.UseCaseMarch25.2021.pdf


  [video]
Assignments of the week:
Paper 1: Brundage et al. (2020) – Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims
—–
Paper 2: Arya et al. (2019) – One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques
[paper1]

[paper2]
APRIL 12: (Live) Lesson: AI and Privacy – a brief overview of privacy protection law and regulation in Korea – Professor Heo, Seong Wook, Ph.D. in Law (Professor of Law)




Emerging Rules on Artificial Intelligence:
Trojan Horses of Ethics in the Realm of Law?

(Prof. Florian Möslein)
Slides (Korean)
Slides (English)
Live Recorded lecture: https://drive.google.com/file/d/1bmI0IDI1M0uyrv22evSyUAAr1RxWErKe/view?usp=sharing


[video]
APRIL 14: (Live) Lesson: Assessing Trustworthy AI. Best Practice: Machine learning as a supportive tool to recognize cardiac arrest in emergency calls. Ensuring Fairness. Roberto V. Zicari


– Mindful Use of AI.  Z-Inspection®: A holistic and analytic process to assess Ethical AI 
– Talk (1 hour).  (Prof. Roberto V. Zicari)
YouTube video
Copy of the slides: Zicari.Lecture.October15.2020
YouTube Live lesson: https://youtu.be/-tm2M9qzM1A
APRIL 19: AI Fairness and AI Explainability software tools
(Romeo Kienzler)
[video]
Assignments of the week:

Paper 1: ICO & The Alan Turing Institute (2020) – Explaining decisions made with AI. [paper1]

—————————–
Claims, Arguments, Evidence (CAE) framework
Source: Adelard LLP (2020).
CAE CONCEPTS: https://claimsargumentsevidence.org/notations/claims-arguments-evidence-cae/
CAE CONCISE GUIDANCE: https://claimsargumentsevidence.org/notations/concise-guidance/
CAE BUILDING BLOCKS: https://claimsargumentsevidence.org/notations/cae-building-blocks/
HELPING HAND – CAE FRAMEWORK: https://claimsargumentsevidence.org/notations/helping-hand/
Case Study Medical Devices Safety and Assurance: https://claimsargumentsevidence.org/medical-devices/
DOWNLOADABLE RESOURCES: https://claimsargumentsevidence.org/resources/downloadable-resources/
TOOLS FOR CASES: https://claimsargumentsevidence.org/resources/tools-for-cases/
APRIL 21: Design of Ethics Tools for AI Developers (Dr. Carl-Maria Mörch) [video]
APRIL 26: (Live) Lesson: Roberto V. Zicari, Dennis Vetter: Using CAE and ALTAI

Opinion of the German Data Ethics Commission
(Prof. Christiane Wendehorst)
Live lesson YouTube link: https://youtu.be/ar5f1e2fZmE
Copy of slides: https://z-inspection.org/wp-content/uploads/2021/04/CAE-Framework.pdf

[video]
 Assignments of the week:
Paper 1: Hind et al. (2019) – Experiences with Improving the Transparency of AI Models and Services
—–
Paper 2: Obermeyer et al. (2019) – Dissecting racial bias in an algorithm used to manage the health of populations
 [paper1]

[paper2]
APRIL 28: Increasing Trust in AI
(Dr. Michael Hind)
[video]
MAY 3: Live Lesson: Discussion (teams' feedback). Roberto V. Zicari and Dennis Vetter.

Assessing AI use cases. Ethical tensions, Trade offs.
(Dr. Estella Hebert)
Live Lesson YouTube video: https://youtu.be/Xabsd0CiwSE

[video]
Assignments of the week:
Paper 1: Peters et al. (2020) – Responsible AI – Two Frameworks for Ethical Design Practice
—–
Paper 2: Gebru et al. (2020) – Datasheets for Datasets
[paper1]

[paper2]
MAY 5: Assignments of the week:
Putting Fairness Principles into Practice: Challenges, Metrics, and Improvements. Alex Beutel, Jilin Chen, Tulsee Doshi, Hai Qian, Allison Woodruff, Christine Luu, Pierre Kreitmann, Jonathan Bischof, Ed H. Chi (Submitted on 14 Jan 2019). Link to .PDF
———-
One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques, Vijay Arya, Rachel K. E. Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Q. Vera Liao, Ronny Luss, Aleksandra Mojsilovic, Sami Mourad, Pablo Pedemonte, Ramya Raghavendra, John Richards, Prasanna Sattigeri, Karthikeyan Shanmugam, Moninder Singh, Kush R. Varshney, Dennis Wei, Yunfeng Zhang, 2019, Link to .PDF
MAY 10: Live Lesson: Discussion (teams' feedback). Roberto V. Zicari

MID-TERM REPORT
MAY 12: Assignments of the week:
AI auditing framework – draft guidance for consultation, 14 February 2020, Version 1.0, ICO.
https://ico.org.uk/media/about-the-ico/consultations/2617219/guidance-on-the-ai-auditing-framework-draft-for-consultation.pdf
ICO is the UK’s independent body set up to uphold information rights.
MAY 17: AI and Privacy (Live Lesson)
Professor Heo, Seong Wook, Ph.D. in Law (Professor of Law)
Slides 1, Slides 2
Paper 1, Paper 2
MAY 19
MAY 24: Jurisprudence and AI – finding ‘law’ in the age of AI (Live Lesson)
Professor Heo, Seong Wook, Ph.D. in Law (Professor of Law)
Slides
Paper 1
Paper 2
MAY 26
MAY 31: AI ethics and law in healthcare (Live Lesson)
Professor Heo, Seong Wook, Ph.D. in Law (Professor of Law)
Copy of Slides (Korean): https://z-inspection.org/wp-content/uploads/2021/05/2021-AI-Ethics-and-Law-0531-헬스케어-AI-SWH.pdf

Copy of Slides (English): https://z-inspection.org/wp-content/uploads/2021/06/2021-AI-Ethics-and-Law-헬스케어-AI-SWHENG.pdf
JUNE 2: Live Lesson: Teams' feedback. Roberto V. Zicari and Dennis Vetter
JUNE 7: Administrative decision making through AI and judicial review (Live Lesson)
Professor Heo, Seong Wook, Ph.D. in Law (Professor of Law)
Slides 1, Slides 2
Paper
JUNE 9: Live Lesson: Teams' feedback. Roberto V. Zicari and Dennis Vetter
JUNE 14: FINAL REPORT
JUNE 16

Resources

YouTube Playlist: Videos of the recordings of the lectures.

AI and Ethics: Reports/Papers classified by topics

Instructors

Roberto V. Zicari, Course coordinator. Roberto V. Zicari is Adjunct Professor at Seoul National University and Affiliate Professor at Yrkeshögskolan Arcada, Helsinki, Finland. For 29 years he was Professor of Database and Information Systems (DBIS) at Goethe University Frankfurt, Germany. He is an internationally recognized expert in the field of databases and Big Data, and his interests also extend to ethics and AI, innovation, and entrepreneurship. He is the editor of the ODBMS.org web portal and of the ODBMS Industry Watch blog. From 2014 to 2018 he was a visiting professor with the Center for Entrepreneurship and Technology within the Department of Industrial Engineering and Operations Research at UC Berkeley (USA).

Heo, Seong Wook, Ph.D. in Law,
Professor of Law,
Seoul National University, Law School

Professor Heo is a public law professor at Seoul National University Law School. He teaches administrative law, environmental law, and law and economics. He received his Ph.D. in law and his LL.M. degree from Seoul National University Graduate School of Law, and his bachelor’s degree in economics from the Seoul National University Department of Economics.

He is interested in researching economic regulation with the analytic tools of economics. More recently, his main interests have been climate change law, energy law, food safety law, IT and privacy law, and the judicial system. He also participated in the process of framing the Green Growth Act and the Emission Trade Act in Korea.

Before joining the SNU Law School as a faculty member in 2006, he served as a judge at the Seoul Central District Court in Korea. He was presiding judge of the specialized panel for intellectual property law cases at the Seoul Central District Court from 2005 to 2006.

He was a visiting scholar at the Stanford University Asia Pacific Research Center in the U.S. from August 2010 to August 2011. He also stayed at the Munich University College of Law in Germany from December 2009 to February 2010.

He is currently a board member of the Korean Public Law Association, the Korean Environmental Law Association, the Korean Law and Economics Association, and the Korean Regulation Law Association. He was chief editor of the Korean Journal of Law and Economics from 2015 to 2019.

Prof. Dr. Gemma Roig, Group Leader, Computational Vision & Artificial Intelligence, Goethe University Frankfurt. I am currently a professor at the Computer Science Department of Goethe University Frankfurt and a research affiliate at MIT. Before that I was an assistant professor at the Singapore University of Technology and Design. Previously, I was a postdoctoral fellow at MIT in the Center for Brains, Minds and Machines with Prof. Tomaso Poggio, and I was also affiliated with the Laboratory for Computational and Statistical Learning. I pursued my doctoral degree in computer vision at ETH Zurich. My research focuses on understanding the underlying computational principles of visual intelligence in humans and artificial systems, with the aim of developing a general artificial intelligence framework. Such a general artificial intelligence system is fundamental for designing machine models that mimic or surpass human performance in specific domains and that can automatically learn new tasks.

Dr. Emmanuel R. Goffi, Director, Observatoire éthique & intelligence artificielle | Observatory on Ethics & Artificial Intelligence at the Institut Sapiens, Paris. Emmanuel R. Goffi is an expert in the ethics of artificial intelligence. He was the Director of the Creéia – Centre de recherche et expertise en éthique et intelligence artificielle, and a Professor of ethics at the ILERI – Institut libre d’étude des relations internationales. He holds a PhD in Political Science from Sciences Po-CERI. Emmanuel is a research fellow with the Centre for Defence and Security Studies at the University of Manitoba (UofM) in Winnipeg, and a research member of the Centre FrancoPaix at the Université du Québec à Montréal. He is also a member of the Mines Action Canada Board.
Emmanuel served in the French Air Force for 25 years. He lectured at the French Air Force Academy and has taught at several universities and colleges in France and Canada.

Dr. Thomas Grote, Ethics and Philosophy Lab, Cluster of Excellence “Machine Learning: New Perspectives for Science”, University of Tübingen, 72076 Tübingen, Germany. Dr. Thomas Grote is a postdoctoral researcher at the Ethics and Philosophy Lab (EPL) of the Cluster of Excellence Machine Learning: New Perspectives for Science at the University of Tübingen. His research focuses on issues related to machine learning at the intersection of epistemology and ethics.

Prof. Dr. Florian Möslein, Professor of Law at the Philipps-University Marburg, Director of the Institute for the Law and Regulation of Digitalisation (IRDi, www.irdi.institute). Florian Möslein is Director of the Institute for Law and Regulation of Digitalisation (www.irdi.institute) and Professor of Law at the Philipps-University Marburg, where he teaches contract law, company law, and capital markets law. He previously held academic positions at the Universities of Bremen, St. Gallen, and Berlin, and visiting fellowships in Italy (Florence, European University Institute), the US (Stanford and Berkeley), Australia (University of Sydney), Spain (CEU San Pablo, Madrid), and Denmark (Aarhus). Having graduated from the Faculty of Law in Munich, he also holds academic degrees from the University of Paris-Assas (licence en droit) and London (LL.M. in International Business Law). Florian Möslein has published three monographs and over 80 articles and book contributions, and has edited seven books. His current research focus is on regulatory theory, corporate sustainability, and the legal challenges of the digital age.

Prof. Dragutin Petkovic, Professor, Associate Chair, Undergraduate Advisor, IEEE Life Fellow, Director, Center for Computing for Life Sciences (CCLS), Coordinator for Graduate Certificates in AI Ethics and SW Engineering. Prof. D. Petkovic obtained his Ph.D. at UC Irvine in the area of biomedical image processing. He spent over 15 years at the IBM Almaden Research Center as a scientist and in various management roles. His contributions ranged from the use of computer vision for inspection to multimedia and content management systems. He is the founder of IBM’s well-known QBIC (query by image content) project, which significantly influenced the content-based retrieval field. Dr. Petkovic received numerous IBM awards for his work and became an IEEE Fellow in 1998 and an IEEE Life Fellow in 2018 for leadership in the content-based retrieval area. He also held various technical management roles in Silicon Valley startups. In 2003 he joined the SFSU Computer Science Department as Chair, and in 2005 he founded the SFSU Center for Computing for Life Sciences. Currently, Dr. Petkovic is the Associate Chair of the SFSU Department of Computer Science and Director of the Center for Computing for Life Sciences. He led the establishment of the SFSU Graduate Certificate in AI Ethics, jointly with the SFSU Schools of Business and Philosophy. His research and teaching interests include machine learning with emphasis on explainability and ethics, teaching methods for global software engineering and engineering teamwork, and the design and development of easy-to-use systems.

Dr. Christopher Burr is a philosopher of cognitive science and artificial intelligence. He is a Senior Research Associate at the Alan Turing Institute and a Research Associate at the Digital Ethics Lab, University of Oxford. His current research explores philosophical and ethical issues related to data-driven technologies and human-computer interaction, including the opportunities and risks that such technologies present for mental health and well-being. A primary goal of this research is to develop robust and pragmatic guidance to support the governance, responsible innovation, and sustainable use of data-driven technology within a digital society. To support this goal, he has worked with a number of public-sector bodies and organisations, including NHSX; the UK Government’s Department for Health and Social Care; the Department for Digital, Culture, Media and Sport; the Centre for Data Ethics and Innovation; and the Ministry of Justice. He previously held posts at the University of Bristol, where he explored the ethical and epistemological impact of big data and artificial intelligence as a postdoctoral researcher and completed his PhD in 2017. Research interests: philosophy of cognitive science and artificial intelligence, digital ethics, bioethics, decision theory, public policy, and human-computer interaction.

DSc Magnus Westerlund, Principal Lecturer, Head of the Master’s Degree Programme in Big Data Analytics, Arcada University of Applied Sciences, Helsinki, Finland. Magnus Westerlund (DSc) is the programme director of the master’s degree programme in big data analytics and deputy head of the business and analytics department at Arcada University of Applied Sciences in Helsinki, Finland. He has a private-sector background in telecom and information management and earned his doctoral degree in information systems at Åbo Akademi University, Finland. Magnus has research publications in the fields of analytics, IT security, cyber regulation, and distributed ledger technology. His current research topics lie in the decentralized platform area of distributed applications and the application of intelligent and secure autonomous systems. His long-term aim is to help define what we mean by autonomous systems that are trustworthy, accountable, and able to learn from interaction.

Prof. Rafael A. Calvo, Chair in Engineering Design, Faculty of Engineering, Dyson School of Design Engineering, Imperial College London. Rafael A. Calvo, PhD (2000), is Professor at Imperial College London focusing on the design of systems that support wellbeing in the areas of mental health, medicine, and education, and on the ethical challenges raised by new technologies. In 2015 Calvo was appointed a Future Fellow of the Australian Research Council to study the design of wellbeing-supportive technology.
Rafael is Director for Research at the Dyson School of Design Engineering and co-lead at the Leverhulme Centre for the Future of Intelligence.

Dr. Estella Hebert, Goethe University Frankfurt. Estella Hebert is a postdoctoral researcher and lecturer in the department of education at Goethe University Frankfurt, focusing her research on questions of digitalisation within educational contexts. She finished her PhD on the relationship between identity, agency, and personal data in 2019. Coming from a media-pedagogical and educational-philosophical perspective, her interests lie in post-digital perspectives on the social, ethical, and cultural transformations caused by digitality, questions of datafication, and media-critical perspectives.

Romeo Kienzler, IBM Center for Open Source Data and AI Technologies, San Francisco, CA, USA. Romeo Kienzler is Chief Data Scientist at the IBM Center for Open Source Data and AI Technologies (CODAIT) in San Francisco. He holds an M.Sc. (ETH) in Computer Science with specialisation in Information Systems, Bioinformatics and Applied Statistics from the Swiss Federal Institute of Technology Zurich. He works as Associate Professor for Artificial Intelligence at the Swiss University of Applied Sciences Berne and Adjunct Professor for Information Security at the Swiss University of Applied Sciences Northwestern Switzerland (FHNW). His current research focus is on cloud-scale machine learning and deep learning using open-source technologies including TensorFlow, Keras, and the Apache Spark stack. He recently joined the Linux Foundation AI as lead of the Trusted AI technical workgroup, with a focus on deep learning adversarial robustness, fairness, and explainability. He also contributes to various open-source projects and regularly speaks at international conferences, with significant publications in the areas of data mining, machine learning, and blockchain technologies. Romeo is lead instructor of the Advanced Data Science specialisation on Coursera, with courses on scalable data science, advanced machine learning, signal processing, and applied AI with deep learning.

Carl Mörch, Postdoctoral Fellow, Algora Lab – MILA, OBVIA. Carl is currently a postdoctoral fellow at the Université de Montréal and Mila. He has been awarded a postdoctoral fellowship by the International Observatory on the Societal Impacts of Artificial Intelligence and Digital Technologies (OBVIA). He is also a lecturer and adjunct professor at UQÀM (Montréal, Canada). His research is oriented towards the creation of AI ethics tools. His objective is to contribute to the concrete application of high-level ethical principles by developing lists of standards in high-risk areas (health, finance). In general, he is interested in the responsible development of technologies in society, health care, and psychology. He co-created canadaprotocol.com, an open-access tool for AI developers working in mental health. He is also working on the ethical evaluation of free mobile applications and on the concept of moral competence in AI. Finally, he is leading “Reach Me”, an m-health project to improve pregnant women’s access to prenatal services using text messaging. He holds an M.Psy. (ICP, France) and a Ph.D. in Psychology (UQÀM, Canada).

Dr. Michael Hind, Distinguished Research Staff Member, IBM Research AI Department, IBM Thomas J. Watson Research Center. Dr. Hind has authored over 50 publications, served on over 50 program committees, and given several keynotes and invited talks at top universities, conferences, and government settings. Michael has led dozens of researchers in successfully transferring technology to various parts of IBM and helped launch several successful open-source projects, such as AI Fairness 360 and AI Explainability 360. His 2000 paper on adaptive optimization was recognized as the OOPSLA’00 Most Influential Paper, and his work on Jikes RVM was recognized with the SIGPLAN Software Award in 2012. Michael is an ACM Distinguished Scientist and a member of IBM’s Academy of Technology.

Prof. Christiane Wendehorst, Professor of Civil Law at the University of Vienna. Christiane Wendehorst has been Professor of Civil Law at the University of Vienna since 2008. Among other functions, she is a founding member and President of the European Law Institute (ELI), chair of the Academy Council of the Austrian Academy of Sciences (ÖAW), Co-Head of the Department of Innovation and Digitalisation in Law, and a member of the Managing Board of the Austrian Jurists’ Association (ÖJT), the Academia Europaea (AE), the International Academy of Comparative Law (IACL), the American Law Institute (ALI), and the Bioethics Committee at the Austrian Federal Chancellery. She was Co-chair of the German Data Ethics Commission from 2018 to 2019. Currently, her work is focussed on legal challenges arising from digitalisation, and she has worked as an expert on subjects such as digital content, the Internet of Things, artificial intelligence, and the data economy for, inter alia, the European Commission, the European Parliament, the German Federal Government, the ELI, and the ALI. Prior to moving to Vienna, she held chairs in Göttingen (1999-2008) and Greifswald (1998-99) and was Managing Director of the Sino-German Institute of Legal Studies (2000-2008).

———————————————————————————————————————

Mid-Term Report Requirements

The goal of the mid-term report is to select an AI system (i.e., an AI product and/or an AI-based service) used in healthcare, and to start the evaluation process.

In teams of two students:
Choose a data-driven product/solution in healthcare and assess its trustworthiness based on the EU Ethics Guidelines for Trustworthy AI, adapted to the healthcare domain, and the Z-Inspection process.

For example, the use of AI/machine learning approaches and technologies to optimise the management of emergencies (e.g., tracking devices, predictions of the number of infections, etc., during COVID-19).

The report contributes to your grade and every team member will receive the same grade.

Scope

The report must be delivered as a Google Doc (we will create one document per team).

It must be between 3 and 5 pages long, including references, and written with the Google Docs presets (i.e., normal text: Arial, 11 pt).

The mid-term report should cover the following:

  1. Define and agree upon the boundaries and context of the assessment.
  2. Analyze socio-technical scenarios.
  3. Identify ethical issues and tensions.

In particular, the mid-term report should relate to the topics covered by the lecture recordings and recommended papers up until the due date, as well as the ALTAI assessment list published by the EU and the ALTAI web tool. Questions that need to be answered in the mid-term report are:

Analyze Socio-technical scenarios

By collecting relevant resources, socio-technical scenarios should be created and analyzed by the team of students:

  1. Describe the aim of the AI system;
  2. Who are the actors?
  3. What are the actors' expectations?
  4. How do the actors interact with each other and with the AI system?
  5. In which processes is the AI system used?
  6. What AI technology is used?
  7. In what context is the AI used?
  8. What legal and contractual obligations apply to the use of AI in this context?

Add anything else you consider relevant.

Identify Ethical Issues and Tensions

We use the term ‘tension’ as defined in [Whittlestone et al. 2019]: “tensions between the pursuit of different values in technological applications rather than an abstract tension between the values themselves.”

Use the catalog of examples of ethical tensions below (a brief illustration of the first tension follows the list):

  • Accuracy vs. Fairness
  • Accuracy vs. Explainability
  • Privacy vs. Transparency
  • Quality of services vs. Privacy
  • Personalisation vs. Solidarity
  • Convenience vs. Dignity
  • Efficiency vs. Safety and Sustainability
  • Satisfaction of Preferences vs. Equality
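
To make the first tension concrete, below is a minimal Python sketch with invented toy data (the scores, true labels, and group memberships are all hypothetical). On this data, the decision threshold that maximizes accuracy also produces a larger gap in positive-prediction rates between the two groups, so accuracy and (demographic-parity) fairness pull in opposite directions:

```python
# Toy illustration of the Accuracy vs. Fairness tension (hypothetical data).
# Each individual has a model score, a true label, and a group membership.
scores = [0.9, 0.8, 0.3, 0.2, 0.6, 0.4, 0.5, 0.1]
labels = [1,   1,   0,   0,   1,   1,   0,   0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

def evaluate(threshold):
    preds = [1 if s >= threshold else 0 for s in scores]
    accuracy = sum(p == y for p, y in zip(preds, labels)) / len(labels)
    # Demographic parity: rate of positive predictions per group.
    rate = {g: sum(p for p, gg in zip(preds, groups) if gg == g) / groups.count(g)
            for g in set(groups)}
    return accuracy, abs(rate["A"] - rate["B"])

for t in (0.45, 0.55):
    acc, gap = evaluate(t)
    print(f"threshold={t}: accuracy={acc:.2f}, parity gap={gap:.2f}")
```

On this toy data, threshold 0.45 gives accuracy 0.75 with a parity gap of 0.00, while threshold 0.55 gives accuracy 0.88 with a gap of 0.25. Choosing between them is exactly the kind of tension the report should make explicit.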

Classify ethical tensions

according to the three types of dilemma defined in [Whittlestone et al. 2019]:

– true dilemma, i.e. “a conflict between two or more duties, obligations, or values, both of which an agent would ordinarily have reason to pursue but cannot”;

– dilemma in practice, i.e. “the tension exists not inherently, but due to current technological capabilities and constraints, including the time and resources available for finding a solution”;

– false dilemma, i.e. “situations where there exists a third set of options beyond having to choose between two important values”.

Remarks

NEVER EVER COPY AND PASTE text from the internet or other sources.

Two options:

1. You can describe what the source says in your own words, citing the source.

e.g. As indicated in [Roig and Vetter 2020], the moon is flat. Reference: [Roig and Vetter 2020] Why the Moon is Flat. Roig Gemma, Vetter Dennis. Journal of Dreams, Issue No. 1, November 2020.

2. You can quote what the source says, using their words in quotation marks.

e.g. [Roig and Vetter 2020] made the case that “the moon is completely flat and not round”. Reference: [Roig and Vetter 2020] Why the Moon is Flat. Roig Gemma, Vetter Dennis. Journal of Dreams, Issue No. 1, November 2020.

Make sure to READ the sources you use, so that you understand the context! In the example above, if you quote that “the moon is flat” without understanding that this was a dream and not scientific evidence, you are using the quote in the WRONG way!

Grading
Each team receives 0-5 points for the mid-term report and again for the final report. To pass the course you need to receive at least one point for both the mid-term and the final report, and a total of at least 3 points.

The points will then be translated into a grade (more points = better grade).

References

[Whittlestone et al. 2019] Whittlestone, J., Nyrup, R., Alexandrova, A., Dihal, K., Cave, S. (2019). Ethical and societal implications of algorithms, data, and artificial intelligence: a roadmap for research. London: Nuffield Foundation.


Final Report Requirements

The goal of the final report is to continue working on the selected AI system (i.e., an AI product and/or an AI-based service) used in healthcare, and to finish the evaluation process.

Scope

The report must be delivered using the same Google Doc. The new part must be between 3 and 5 pages long (excluding references). The combined references (mid-term + final) must be no more than 2 pages. You should use the same style as for the mid-term report (normal text: Arial, 11 pt, 1.15 line spacing, extra space after paragraphs; for references you can reduce the font size to 9 pt).

The final report should cover the following 5 points:


1. Identify any CLAIMS made by the producer of the AI system

According to the definition provided by Brundage et al. (2020, p. 65), “Claims are assertions put forward for general acceptance. They’re typically statements about a property of the system or some subsystem. Claims asserted as true without justification are assumptions, and claims supporting an argument are subclaims.” Furthermore, “AI developers regularly make claims regarding the properties of AI systems they develop as well as their associated societal consequences. Claims related to AI development might include, e.g.:

  • We will adhere to the data usage protocols we have specified;
  • The cloud services on which our AI systems run are secure;
  • We will evaluate risks and benefits of publishing AI systems in partnership with appropriately qualified third parties;
  • We will not create or sell AI systems that are intended to cause harm;
  • We will assess and report any harmful societal impacts of AI systems that we build; and
  • Broadly, we will act in a way that aligns with society’s interests.” (Brundage et al., 2020, p. 64)

2. Develop an evidence base

Following Brundage et al. (2020), this step consists of reviewing and creating an evidence base to verify/support any claims made by producers of the AI system and other relevant stakeholders:

“Evidence serves as the basis for justification of a claim. Sources of evidence can include the design, the development process, prior experience, testing, or formal analysis” (Brundage et al., 2020, p. 65), and “Arguments link evidence to a claim, which can be deterministic, probabilistic, or qualitative. They consist of “statements indicating the general ways of arguing being applied in a particular case and implicitly relied on and whose trustworthiness is well established” [144], together with validation of any scientific laws used. In an engineering context, arguments should be explicit” (Brundage et al., 2020, p. 65).

NOTE: You can, for example, use the “helping hand” from the Claims, Arguments, Evidence (CAE) framework (Adelard LLP, 2020).
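
As an informal illustration only (this is not the official CAE notation, and the example claim is hypothetical), a team could record a claim together with the arguments and evidence that support it as a small nested structure, mirroring the definitions quoted above:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Evidence:
    description: str   # e.g. test results, design documents, prior experience
    source: str        # where the evidence comes from

@dataclass
class Argument:
    statement: str     # how the evidence supports the claim
    evidence: List[Evidence] = field(default_factory=list)

@dataclass
class Claim:
    assertion: str     # statement put forward for general acceptance
    arguments: List[Argument] = field(default_factory=list)
    subclaims: List["Claim"] = field(default_factory=list)

# Hypothetical example for a healthcare AI system:
claim = Claim(
    assertion="The model performs equally well across patient subgroups.",
    arguments=[Argument(
        statement="Subgroup performance was measured on a held-out test set.",
        evidence=[Evidence(
            description="Per-subgroup sensitivity/specificity tables",
            source="Producer's validation report (if available)")])])
```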


3. Map Ethical issues to Trustworthy AI Areas of Investigation

The basic idea of this step is to identify, from the list of ethical issues, which areas require inspection. Therefore, map the ethical issues to some or all of the seven requirements for trustworthy AI (a minimal sketch of such a mapping follows the list below):

  • Human agency and oversight,
  • Technical robustness and safety,
  • Privacy and data governance,
  • Transparency,
  • Diversity, non-discrimination and fairness,
  • Societal and environmental wellbeing,
  • Accountability.
(High-Level Expert Group on Artificial Intelligence, 2019, p. 14)
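
As a minimal sketch of this mapping step (the issue texts below are hypothetical examples, not taken from any real assessment):

```python
# Map each ethical issue identified in the mid-term report to one or more
# of the seven requirements for trustworthy AI; the union of the mapped
# requirements gives the areas of investigation for the assessment.
REQUIREMENTS = {
    "Human agency and oversight",
    "Technical robustness and safety",
    "Privacy and data governance",
    "Transparency",
    "Diversity, non-discrimination and fairness",
    "Societal and environmental wellbeing",
    "Accountability",
}

issue_to_requirements = {
    "Model performance differs across patient subgroups":
        {"Diversity, non-discrimination and fairness",
         "Technical robustness and safety"},
    "Clinicians cannot see why the system raised an alarm":
        {"Transparency", "Human agency and oversight"},
}

# Every mapped requirement must be one of the seven.
for reqs in issue_to_requirements.values():
    assert reqs <= REQUIREMENTS

areas = sorted(set().union(*issue_to_requirements.values()))
print(areas)
```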

4. Use the ALTAI web tool to answer the questions for the corresponding areas of trustworthy AI that you have mapped.

5. Critically evaluate the result of the ALTAI assessment and whether it is relevant for the use case you have chosen. Motivate your analysis.

Grading

Each team receives 0-5 points for the final report. To pass the course you need to receive at least one point for both the mid-term and the final report, and a total of at least 3 points. The points will then be translated into a grade (more points = better grade).

References
Adelard LLP (2020). Helping Hand – CAE Framework. Retrieved December 16, 2020, from https://claimsargumentsevidence.org/notations/helping-hand/

Brundage, M., Avin, S., Wang, J., Belfield, H., Krueger, G., Hadfield, G., Khlaaf, H., Yang, J., Toner, H., Fong, R., Maharaj, T., Koh, P. W., Hooker, S., Leung, J., Trask, A., Bluemke, E., Lebensold, J., O’Keefe, C., Koren, M., … Anderljung, M. (2020). Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims. arXiv:2004.07213 [cs]. http://arxiv.org/abs/2004.07213

High-Level Expert Group on Artificial Intelligence. (2019). ​Ethics guidelines for trustworthy AI​. European Commission. https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai

——–

Discrimination

In the United States, legal arguments around discrimination follow one of two frameworks: disparate treatment or disparate impact [6].

Disparate treatment is the intentional and direct use of a protected class for a prohibited purpose.

An example of this type of discrimination was argued in McDonnell Douglas Corp. v. Green [48], in which the U.S. Supreme Court found that an employer had fired an employee on the basis of their race. An element of disparate treatment arguments is establishing the protected attribute as a cause of the biased decision [17].

Source: https://arxiv.org/pdf/1707.08120.pdf

Discrimination does not have to involve a direct use of a protected class; class membership may not even take part in the decision. Discrimination can also occur due to correlations between the protected class and other attributes. The legal framework of disparate impact [49] addresses such cases by first requiring significantly different outcomes for the protected class, regardless of how the outcomes came to be.

An association between loan decisions and race due to the use of applicant address, which is itself associated with race, is an example [33] of this type of discrimination.

Source: https://arxiv.org/pdf/1707.08120.pdf
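
To make "significantly different outcomes" concrete: one common operationalization in U.S. practice is the four-fifths (80%) rule, which compares rates of favorable outcomes across groups. The sketch below uses hypothetical loan-approval counts and illustrates the idea only; it is not legal guidance:

```python
# Disparate impact check via the four-fifths rule (hypothetical counts).
# Group membership itself is never used by the decision rule, yet
# outcomes can still differ significantly between groups.
outcomes = {
    "group_a": {"approved": 80, "applicants": 100},
    "group_b": {"approved": 50, "applicants": 100},
}

rates = {g: v["approved"] / v["applicants"] for g, v in outcomes.items()}
ratio = min(rates.values()) / max(rates.values())

print(f"approval rates: {rates}")
print(f"impact ratio: {ratio:.2f}")
if ratio < 0.8:  # four-fifths rule of thumb
    print("potential disparate impact: ratio below 0.8")
```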

“Proxy”

Discrimination arising due to the use of features correlated with protected classes is referred to as discrimination by proxy in U.S. legal literature, or indirect discrimination in other jurisdictions such as the U.K. [3].

We will use the term “proxy” to refer to a feature correlated with a protected class whose use in a decision procedure can result in indirect discrimination.

Source: Proxy Discrimination in Data-Driven Systems: Theory and Experiments with Machine Learnt Programs, https://arxiv.org/pdf/1707.08120.pdf
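
As a rough, first-pass illustration (the data is invented, and the cited paper develops a more precise notion of proxy use that also requires the feature to actually influence the decision), one can screen a candidate feature, such as an address-derived region code, for association with the protected class:

```python
# Screen a candidate feature for association with the protected class
# (hypothetical data). A feature that predicts class membership better
# than the majority-class baseline is a candidate proxy.
protected = [1, 1, 1, 1, 0, 0, 0, 0]   # protected-class membership
region    = [1, 1, 1, 0, 0, 0, 0, 1]   # address-derived region code

agreement = sum(p == r for p, r in zip(protected, region)) / len(protected)
baseline = max(sum(protected), len(protected) - sum(protected)) / len(protected)

print(f"agreement={agreement:.2f} vs baseline={baseline:.2f}")
if agreement > baseline:
    print("region code is informative about the protected class: possible proxy")
```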