Articles and Reports

Lessons Learned in Performing a Trustworthy AI and Fundamental Rights Assessment

Marjolein Boonstra, Frédérick Bruneault, Subrata Chakraborty, Tjitske Faber, Alessio Gallucci, Eleanore Hickman, Gerard Kema, Heejin Kim, Jaap Kooiker, Elisabeth Hildt, Annegret Lamadé, Emilie Wiinblad Mathez, Florian Möslein, Genien Pathuis, Giovanni Sartor, Marijke Steege, Alice Stocco, Willy Tadema, Jarno Tuimala, Isabel van Vledder, Dennis Vetter, Jana Vetter, Magnus Westerlund, Roberto V. Zicari

This report shares the experiences, results and lessons learned from conducting the pilot project “Responsible use of AI” in cooperation with the Province of Friesland and Rijks ICT Gilde, part of the Ministry of the Interior and Kingdom Relations (BZK) (both in the Netherlands), together with a group of members of the Z-Inspection® Initiative. The pilot project took place from May 2022 through January 2023. During the pilot, the practical application of a deep learning algorithm used by the Province of Fryslân was assessed. The AI maps heathland grassland from satellite images to support the monitoring of nature reserves. Environmental monitoring is one of the crucial activities carried out by society for purposes ranging from maintaining standards for drinking water to quantifying the CO2 emissions of a particular state or region. Using satellite imagery and machine learning to support decisions is becoming an important part of environmental monitoring. The main focus of this report is to share the experiences, results and lessons learned from performing a Trustworthy AI assessment using the Z-Inspection® process and the EU framework for Trustworthy AI, and combining it with a Fundamental Rights assessment using the Fundamental Rights and Algorithms Impact Assessment (FRAIA), as recommended by the Dutch government for the use of AI algorithms by Dutch public authorities.

Comments: On behalf of the Z-Inspection® Initiative

Subjects: Computers and Society (cs.CY)

Cite as: arXiv:2404.14366 [cs.CY] (or arXiv:2404.14366v1 [cs.CY] for this version)

…………………………………………………………………………………………………………………………………………………………………………………………………………………………………………

Lessons Learned from Assessing Trustworthy AI in Practice.

Dennis Vetter, Julia Amann, Frédérick Bruneault, Megan Coffee, Boris Düdder, Alessio Gallucci, Thomas Krendl Gilbert, Thilo Hagendorff, Irmhild van Halem, Eleanore Hickman, Elisabeth Hildt, Sune Holm, Georgios Kararigas, Pedro Kringen, Vince I. Madai, Emilie Wiinblad Mathez, Jesmin Jahan Tithi, Magnus Westerlund, Renee Wurth, Roberto V. Zicari & the Z-Inspection® initiative (2023)

Digital Society (DSO), 2, 35 (2023). Springer.

Link: https://link.springer.com/article/10.1007/s44206-023-00063-1

………………………………………………………………………………………………………………………………………………………………………………………………………………………………………….

Assessing Trustworthy AI in times of COVID-19. Deep Learning for predicting a multi-regional score conveying the degree of lung compromise in COVID-19 patients.

Himanshi Allahabadi, Julia Amann, Isabelle Balot, Andrea Beretta, Charles Binkley, Jonas Bozenhard, Frédérick Bruneault, James Brusseau, Sema Candemir, Luca Alessandro Cappellini, Subrata Chakraborty, Nicoleta Cherciu, Christina Cociancig, Megan Coffee, Irene Ek, Leonardo Espinosa-Leal, Davide Farina, Geneviève Fieux-Castagnet, Thomas Frauenfelder, Alessio Gallucci, Guya Giuliani, Adam Golda, Irmhild van Halem, Elisabeth Hildt, Sune Holm, Georgios Kararigas, Sebastien A. Krier, Ulrich Kühne, Francesca Lizzi, Vince I. Madai, Aniek F. Markus, Serg Masis, Emilie Wiinblad Mathez, Francesco Mureddu, Emanuele Neri, Walter Osika, Matiss Ozols, Cecilia Panigutti, Brendan Parent, Francesca Pratesi, Pedro A. Moreno-Sánchez, Giovanni Sartor, Mattia Savardi, Alberto Signoroni, Hanna-Maria Sormunen, Andy Spezzatti, Adarsh Srivastava, Annette F. Stephansen, Lau Bee Theng, Jesmin Jahan Tithi, Jarno Tuominen, Steven Umbrello, Filippo Vaccher, Dennis Vetter, Magnus Westerlund, Renee Wurth, and Roberto V. Zicari

in IEEE Transactions on Technology and Society

* Publication Date: December 2022

* Volume: 3, Issue: 4

* On Page(s): 272-289

* Print ISSN: 2637-6415

* Online ISSN: 2637-6415

* Digital Object Identifier: 10.1109/TTS.2022.3195114

Link: https://ieeexplore.ieee.org/document/9845195

Link to PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9845195

…………………………………………………………………………………………………………………………………………………………………………………………………………………………………….

How to Assess Trustworthy AI in Practice.
Roberto V. Zicari, Julia Amann, Frédérick Bruneault, Megan Coffee, Boris Düdder, Eleanore Hickman, Alessio Gallucci, Thomas Krendl Gilbert, Thilo Hagendorff, Irmhild van Halem, Elisabeth Hildt, Georgios Kararigas, Pedro Kringen, Vince I. Madai, Emilie Wiinblad Mathez, Jesmin Jahan Tithi, Dennis Vetter, Magnus Westerlund, Renee Wurth
On behalf of the Z-Inspection® initiative (2022)

Abstract
This report is a methodological reflection on Z-Inspection®. Z-Inspection® is a holistic process used to evaluate the trustworthiness of AI-based technologies at different stages of the AI lifecycle. It focuses, in particular, on the identification and discussion of ethical issues and tensions through the elaboration of socio-technical scenarios. It uses the European Union’s High-Level Expert Group (EU HLEG) guidelines for trustworthy AI. This report illustrates for both AI researchers and AI practitioners how the EU HLEG guidelines for trustworthy AI can be applied in practice. We share the lessons learned from conducting a series of independent assessments to evaluate the trustworthiness of AI systems in healthcare. We also share key recommendations and practical suggestions on how to ensure a rigorous trustworthy AI assessment throughout the life-cycle of an AI system.

Cite as: arXiv:2206.09887 [cs.CY] (v1, submitted 20 June 2022)

The full report is available on arXiv.

Download the full report as PDF.

…………………………………………………………………………………………………………………………………………………………………………………………………………………………………….

Using Sentence Embeddings and Semantic Similarity for Seeking Consensus when Assessing Trustworthy AI.

D. Vetter, J. J. Tithi, M. Westerlund, R. V. Zicari, G. Roig.

1st International Workshop on Imagining the AI Landscape After the AI Act (in conjunction with the First International Conference on Hybrid Human-Artificial Intelligence), 2022.
Available version on arXiv.
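
For readers unfamiliar with the technique named in the title, the following short Python sketch illustrates the general idea of comparing assessors’ statements via sentence embeddings and cosine similarity. It is an illustration only, under stated assumptions (the third-party sentence-transformers library and a few invented example statements); it is not the method or data used in the paper.

    # Illustrative only: flag near-duplicate concerns raised by different assessors
    # by comparing sentence embeddings with cosine similarity.
    from sentence_transformers import SentenceTransformer, util

    # Invented example statements (not from the paper).
    statements = [
        "The training data may under-represent older patients.",
        "Elderly patients are likely under-represented in the dataset.",
        "The model's decisions are hard to explain to clinicians.",
    ]

    model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose embedding model
    embeddings = model.encode(statements, convert_to_tensor=True)

    # Pairwise cosine similarities; high values suggest overlapping concerns.
    similarity = util.cos_sim(embeddings, embeddings)
    for i in range(len(statements)):
        for j in range(i + 1, len(statements)):
            if similarity[i][j] > 0.6:  # threshold chosen for illustration only
                print(f"Possible shared concern: {statements[i]} / {statements[j]}")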

…………………………………………………………………………………………………………………………………………………………………………………………………………………………………….

To explain or not to explain?—Artificial intelligence explainability in clinical decision support systems. 

Julia Amann, Dennis Vetter, Stig Nikolaj Blomberg, Helle Collatz Christensen, Megan Coffee, Sara Gerke, Thomas K. Gilbert, Thilo Hagendorff, Sune Holm, Michelle Livne, Andy Spezzatti, Inga Strümke, Roberto V. Zicari, Vince Istvan Madai, on behalf of the Z-Inspection® initiative (2022)

PLOS Digit Health 1(2): e0000016, February 17, 2022

……………………………………………………………………………………………………………………………………………………………………………………………………………………………………….

On Assessing Trustworthy AI in Healthcare. Machine Learning as a Supportive Tool to Recognize Cardiac Arrest in Emergency Calls

Roberto V. Zicari, James Brusseau, Stig Nikolaj Blomberg, Helle Collatz Christensen, Megan Coffee, Marianna B. Ganapini, Sara Gerke, Thomas Krendl Gilbert, Eleanore Hickman, Elisabeth Hildt, Sune Holm, Ulrich Kühne, Vince I. Madai, Walter Osika, Andy Spezzatti, Eberhard Schnebel, Jesmin Jahan Tithi, Dennis Vetter, Magnus Westerlund, Renee Wurth, Julia Amann, Vegard Antun, Valentina Beretta, Frédérick Bruneault, Erik Campano, Boris Düdder, Alessio Gallucci, Emmanuel Goffi, Christoffer Bjerre Haase, Thilo Hagendorff, Pedro Kringen, Florian Möslein, Davi Ottenheimer, Matiss Ozols, Laura Palazzani, Martin Petrin, Karin Tafur, Jim Tørresen, Holger Volland, Georgios Kararigas

Front. Hum. Dyn., 08 July 2021

…………………………………………………………………………………………………………………………………………………………………………………………………………………………………………

Co-design of a Trustworthy AI System in Healthcare: Deep Learning based Skin Lesion Classifier. 

Roberto V. Zicari, Sheraz Ahmed, Julia Amann, Stephan Alexander Braun, John Brodersen, Frédérick Bruneault, James Brusseau, Erik Campano, Megan Coffee, Andreas Dengel, Boris Düdder, Alessio Gallucci, Thomas Krendl Gilbert, Philippe Gottfrois, Emmanuel Goffi, Christoffer Bjerre Haase, Thilo Hagendorff, Eleanore Hickman, Elisabeth Hildt, Sune Holm, Pedro Kringen, Ulrich Kühne, Adriano Lucieri, Vince I. Madai, Pedro A. Moreno-Sánchez, Oriana Medlicott, Matiss Ozols, Eberhard Schnebel, Andy Spezzatti, Jesmin Jahan Tithi, Steven Umbrello, Dennis Vetter, Holger Volland, Magnus Westerlund, Renee Wurth.

Published on 13 July 2021. Front. Hum. Dyn. DOI: https://doi.org/10.3389/fhumd.2021.688152

…………………………………………………………………………………………………………………………………………………………………………………………………………………………………………

Z-Inspection®: A Process to Assess Trustworthy AI.

Roberto V. Zicari, John Brodersen, James Brusseau, Boris Düdder, Timo Eichhorn, Todor Ivanov, Georgios Kararigas, Pedro Kringen, Melissa McCullough, Florian Möslein, Karsten Tolle, Jesmin Jahan Tithi, Naveed Mushtaq, Gemma Roig, Norman Stürtz, Irmhild van Halem, Magnus Westerlund.

IEEE Transactions on Technology and Society, 2021. Print ISSN: 2637-6415, Online ISSN: 2637-6415. Digital Object Identifier: 10.1109/TTS.2021.3066209.

…………………………………………………………………………………………………………………………………………………………………………………………………………………………………………

Brusseau, J. (2020). What a Philosopher Learned at an AI Ethics Evaluation. AI Ethics Journal, 1(1)-4.

Zicari, R. V. (2020). KI, Ethik, Vertrauen, Risiken, Audit (AI, ethics, trust, risks, audit). Position paper presented at the Enquete-Kommission „Künstliche Intelligenz – Gesellschaftliche Verantwortung und wirtschaftliche, soziale und ökologische Potenziale“ of the German Bundestag, February 10, 2020, Berlin. Link to PDF: https://z-inspection.org/wp-content/uploads/2022/01/Zicari.AIEthikVertrauenRisikenAudit.pdf

Zicari, R. V. (2018). Big Data and Artificial Intelligence: Ethical and Societal Implications. In Wolff, B. (Ed.), Whither artificial intelligence? Debating the policy challenges of the upcoming transformation (p. 68). Mercator Science-Policy Fellowship-Programme.

Book Chapters

Florian Möslein, Roberto V. Zicari. Certifying Artificial Intelligence. Book chapter in “Research Handbook on Big Data Law”, edited by Roland Vogl, CodeX – The Stanford Center for Legal Informatics. Edward Elgar Publishing, May 2021. ISBN: 978 1 78897 281 9.

Roberto V. Zicari. Assessing the trustworthiness of artificial intelligence. Book chapter in “Data Science in Economics and Finance for Decision Makers”, edited by Per Nymand-Andersen. Risk Books, 2021.

Boris Düdder, Florian Möslein, Norman Stürtz, Magnus Westerlund, Roberto V. Zicari. Ethical Maintenance of Artificial Intelligence Systems. Book chapter in “Artificial Intelligence for Sustainable Value Creation”, edited by Margherita Pagani and Renaud Champion, emlyon business school. Edward Elgar Publishing, 2021 (to appear).


Talks

How to Assess Trustworthy AI in Practice

Roberto V. Zicari

February 13, 2023

Technology Innovation Management (TIM) program, sponsored by the Special Interest Group on Digital Disruption and Transformation of the International Society of Professional Innovation Managers (ISPIM), in collaboration with Carleton University and the Sprott School of Business, Canada.

Recording (YouTube)

……………………………………………………………………………………………………………………………………………………………………………………………………………………..

“How to Assess Trustworthy AI in practice”, Graduate School of Data Science, Seoul National University, October 11, 2022:

https://z-inspection.org/wp-content/uploads/2022/10/TalkSNU.October12.pdf

How to Assess Trustworthy AI in Practice. Innovation, Governance and AI4Good, The Responsible AI Forum, Munich, December 6, 2021.

Download: HowtoAssessTrustworthyAI

  • “Mindful Use of AI. Z-Inspection: A process to assess Trustworthy AI” – Talk (30 minutes), Prof. Roberto V. Zicari, AI4EU Workshop, Nov. 13, 2020. YouTube video.
  • “Mindful Use of AI. Z-Inspection: A holistic and analytic process to assess Ethical AI” – Talk (1 hour), Prof. Roberto V. Zicari, Frankfurt Big Data Lab, July 2, 2020. YouTube video and PDF of the slides: Zicari.Lecture.July2.2020
  • “Introduction to Z-Inspection. A framework to assess Ethical AI” – Talk (2 hours), Prof. Roberto V. Zicari, May 27, 2020. [slides] [video]
  • The Ethics of Artificial Intelligence (AI) – Lecture (2 hours), Prof. Roberto V. Zicari, April 22, 2020. [slides] [video]

Videos

  • Mindful Use of AI. Z-Inspection: A holistic and analytic process to assess Ethical AI – Roberto V. Zicari, Frankfurt Big Data Lab, July 2, 2020. YouTube video: https://www.youtube.com/watch?v=NJ2XASEHdWA; PDF slides: http://www.bigdata.uni-frankfurt.de/wp-content/uploads/2019/01/Zicari.Lecture.July2_.2020.pdf
  • Prof. Zicari gave an impulse talk to the high-profile German parliamentary AI inquiry committee. At its session on Monday, February 10, 2020, the members of the Enquete-Kommission „Künstliche Intelligenz – Gesellschaftliche Verantwortung und wirtschaftliche, soziale und ökologische Potenziale“ (Artificial Intelligence – Societal Responsibility and Economic, Social and Ecological Potential) of the German Bundestag examined the finer points of a possible regulation of algorithmic decision-making (ADM) systems. Five experts presented on the topic in a public session chaired by commission member Ronja Kemmer (CDU/CSU). Professor Roberto V. Zicari (Z-Inspection® initiative) emphasized in his talk the importance of trust in the use of artificial intelligence systems. He repeatedly referred to the recommendations of the German Data Ethics Commission and recommended, among other things, that the state should not use proprietary AI systems that prevent transparency by invoking trade secrets.
  • The video of the hearing is available in the Bundestag media library: https://www.bundestag.de/dokumente/textarchiv/2020/kw07-pa-enquete-ki-681576
