Articles

Assessing Trustworthy AI in times of COVID-19. Deep Learning for predicting a multi-regional score conveying the degree of lung compromise in COVID-19 patients.

H. Allahabadi et al., in IEEE Transactions on Technology and Society, 2022, doi: 10.1109/TTS.2022.3195114.

Download the preview version of the article as .PDF (Citation information: DOI 10.1109/TTS.2022.3195114)

…………………………………………………………………………………………………………………………………………………………………………………………………………………………………….

How to Assess Trustworthy AI in Practice.
Roberto V. Zicari, Julia Amann, Frédérick Bruneault, Megan Coffee, Boris Düdder, Eleanore Hickman, Alessio Gallucci, Thomas Krendl Gilbert, Thilo Hagendorff, Irmhild van Halem, Elisabeth Hildt, Georgios Kararigas, Pedro Kringen, Vince I. Madai, Emilie Wiinblad Mathez, Jesmin Jahan Tithi, Dennis Vetter, Magnus Westerlund, Renee Wurth
On behalf of the Z-Inspection® initiative (2022)

Abstract
This report is a methodological reflection on Z-Inspection®. Z-Inspection® is a holistic process used to evaluate the trustworthiness of AI-based technologies at different stages of the AI lifecycle. It focuses, in particular, on the identification and discussion of ethical issues and tensions through the elaboration of socio-technical scenarios. It uses the general European Union’s High-Level Expert Group’s (EU HLEG) guidelines for trustworthy AI. This report illustrates for both AI researchers and AI practitioners how the EU HLEG guidelines for trustworthy AI can be applied in practice. We share the lessons learned from conducting a series of independent assessments to evaluate the trustworthiness of AI systems in healthcare. We also share key recommendations and practical suggestions on how to ensure a rigorous trustworthy AI assessment throughout the life-cycle of an AI system.

Cite as: arXiv:2206.09887 [cs.CY] [v1] Mon, 20 Jun 2022 16:46:21 UTC (463 KB)

The full report is available on arXiv.

Download the full report as .PDF

…………………………………………………………………………………………………………………………………………………………………………………………………………………………………….

Using Sentence Embeddings and Semantic Similarity for Seeking Consensus when Assessing Trustworthy AI.

D. Vetter, J. J. Tithi, M. Westerlund, R. V. Zicari, G. Roig.

1st International Workshop on Imagining the AI Landscape After the AI Act (in conjunction with the First International Conference on Hybrid Human-Artificial Intelligence), 2022. To be published in CEUR Workshop Proceedings.

…………………………………………………………………………………………………………………………………………………………………………………………………………………………………….

To explain or not to explain?—Artificial intelligence explainability in clinical decision support systems. 

Julia Amann, Dennis Vetter, Stig Nikolaj Blomberg, Helle Collatz Christensen, Megan Coffee, Sara Gerke, Thomas K. Gilbert, Thilo Hagendorff, Sune Holm, Michelle Livne, Andy Spezzatti, Inga Strümke, Roberto V. Zicari, Vince Istvan Madai, on behalf of the Z-Inspection initiative (2022)

PLOS Digit Health 1(2): e0000016, February 17, 2022

……………………………………………………………………………………………………………………………………………………………………………………………………………………………………….

On Assessing Trustworthy AI in Healthcare. Machine Learning as a Supportive Tool to Recognize Cardiac Arrest in Emergency Calls

Roberto V. Zicari • James Brusseau • Stig Nikolaj Blomberg • Helle Collatz Christensen • Megan Coffee • Marianna B. Ganapini • Sara Gerke • Thomas Krendl Gilbert • Eleanore Hickman • Elisabeth Hildt • Sune Holm • Ulrich Kühne • Vince I. Madai • Walter Osika • Andy Spezzatti • Eberhard Schnebel • Jesmin Jahan Tithi • Dennis Vetter • Magnus Westerlund • Renee Wurth • Julia Amann • Vegard Antun • Valentina Beretta • Frédérick Bruneault • Erik Campano • Boris Düdder • Alessio Gallucci • Emmanuel Goffi • Christoffer Bjerre Haase • Thilo Hagendorff • Pedro Kringen • Florian Möslein • Davi Ottenheimer • Matiss Ozols • Laura Palazzani • Martin Petrin • Karin Tafur • Jim Tørresen • Holger Volland • Georgios Kararigas

Front. Hum. Dyn., 08 July 2021

…………………………………………………………………………………………………………………………………………………………………………………………………………………………………………

Co-design of a Trustworthy AI System in Healthcare: Deep Learning based Skin Lesion Classifier. 

Roberto V. Zicari, Sheraz Ahmed, Julia Amann, Stephan Alexander Braun, John Brodersen, Frédérick Bruneault, James Brusseau, Erik Campano, Megan Coffee, Andreas Dengel, Boris Düdder, Alessio Gallucci, Thomas Krendl Gilbert, Philippe Gottfrois, Emmanuel Goffi, Christoffer Bjerre Haase, Thilo Hagendorff, Eleanore Hickman, Elisabeth Hildt, Sune Holm, Pedro Kringen, Ulrich Kühne, Adriano Lucieri, Vince I. Madai, Pedro A. Moreno-Sánchez, Oriana Medlicott, Matiss Ozols, Eberhard Schnebel, Andy Spezzatti, Jesmin Jahan Tithi, Steven Umbrello, Dennis Vetter, Holger Volland, Magnus Westerlund, Renee Wurth.

Published on 13 July 2021. Front. Hum. Dyn. doi: https://doi.org/10.3389/fhumd.2021.688152

…………………………………………………………………………………………………………………………………………………………………………………………………………………………………………

Z-Inspection®: A Process to Assess Trustworthy AI.

Roberto V. Zicari, John Brodersen, James Brusseau, Boris Düdder, Timo Eichhorn, Todor Ivanov, Georgios Kararigas, Pedro Kringen, Melissa McCullough, Florian Möslein, Karsten Tolle, Jesmin Jahan Tithi, Naveed Mushtaq, Gemma Roig, Norman Stürtz, Irmhild van Halem, Magnus Westerlund.

IEEE Transactions on Technology and Society, 2021. ISSN: 2637-6415. Digital Object Identifier: 10.1109/TTS.2021.3066209.

…………………………………………………………………………………………………………………………………………………………………………………………………………………………………………

Brusseau, J., What a Philosopher Learned at an AI Ethics Evaluation. AI Ethics Journal, 2020, 1(1)-4.

Zicari, R. V. (2020). KI, Ethik, Vertrauen, Risiken, Audit. Position paper presented at the Enquete-Kommission „Künstliche Intelligenz – Gesellschaftliche Verantwortung und wirtschaftliche, soziale und ökologische Potenziale“ of the German Bundestag, February 10, 2020, Berlin. PDF: http://z-inspection.org/wp-content/uploads/2022/01/Zicari.AIEthikVertrauenRisikenAudit.pdf

Zicari, R. V. (2018). Big Data and Artificial Intelligence: Ethical and Societal Implications. In Wolff, B. (Ed.), Whither artificial intelligence? Debating the policy challenges of the upcoming transformation (pp. 68). Mercator Science-Policy Fellowship-Programme.

Book Chapters

Florian Möslein, Roberto V. Zicari. Certifying Artificial Intelligence. Book chapter in “Research Handbook on Big Data Law“, editor Roland Vogl, CodeX – The Stanford Center for Legal Informatics, Edward Elgar Publishing, May 2021. ISBN: 978 1 78897 281 9.

Roberto V. Zicari. Assessing the trustworthiness of artificial intelligence. In Data Science in Economics and Finance for Decision Makers (editor Per Nymand-Andersen), Risk Books, 2021.

Boris Düdder, Florian Möslein, Norman Stürtz, Magnus Westerlund, Roberto V. Zicari. Ethical Maintenance of Artificial Intelligence Systems. Book chapter in “Artificial Intelligence for Sustainable Value Creation“, editors Margherita Pagani and Renaud Champion, em Lyon Business School. Edward Elgar Publishing, 2021, to appear.


Talks

How to Assess Trustworthy AI in Practice. Innovation, Governance and AI4Good, The Responsible AI Forum, Munich, December 6, 2021.

Download: HowtoAssessTrustworthyAI

  • “Mindful Use of AI. Z-Inspection: A process to assess Trustworthy AI” – Talk (30 minutes), Prof. Roberto V. Zicari, AI4EU Workshop, Nov. 13, 2020. YouTube video.
  • “Mindful Use of AI. Z-Inspection: A holistic and analytic process to assess Ethical AI” – Talk (1 hour), Prof. Roberto V. Zicari, Frankfurt Big Data Lab, July 2, 2020. YouTube video and slides of the talk: Zicari.Lecture.July2.2020
  • “Introduction to Z-inspection. A framework to assess Ethical AI” – Talk (2 hours), Prof. Roberto V. Zicari, May 27, 2020 [slides] [video]
  • “The Ethics of Artificial Intelligence (AI)” – Lecture (2 hours), Prof. Roberto V. Zicari, April 22, 2020. [slides] [video]

Videos

  • Mindful Use of AI. Z-Inspection: A holistic and analytic process to assess Ethical AI – Roberto V. Zicari, Frankfurt Big Data Lab, July 2, 2020. YouTube video: https://www.youtube.com/watch?v=NJ2XASEHdWA and PDF slides: http://www.bigdata.uni-frankfurt.de/wp-content/uploads/2019/01/Zicari.Lecture.July2_.2020.pdf
  • Prof. Zicari gave an impulse talk at the high-profile German Parliamentary AI examination committee. On Monday, February 10, 2020, the members of the Enquete-Kommission „Künstliche Intelligenz – Gesellschaftliche Verantwortung und wirtschaftliche, soziale und ökologische Potenziale“ discussed in their session the finer points of a possible regulation of algorithmic decision-making systems (ADM systems). Five experts presented on the topic in a public session chaired by committee member Ronja Kemmer (CDU/CSU). In his talk, Professor Roberto V. Zicari (Z-inspection initiative) emphasized the importance of trust in the use of artificial intelligence systems. Zicari repeatedly referred to the recommendations of the Data Ethics Commission (Datenethik-Kommission). Among other things, he recommended that the state should not use proprietary AI systems that prevent transparency by invoking trade secrets.
  • The video of the hearing is available in the Bundestag’s media library: https://www.bundestag.de/dokumente/textarchiv/2020/kw07-pa-enquete-ki-681576
