Z-Inspection®: A process to assess trustworthy AI
The Process
Z-Inspection® is a process to assess trustworthy AI in practice.
The Z-Inspection® process has the potential to play a key role in the context of the new EU Artificial Intelligence (AI) regulation.
Our work is distributed under the terms and conditions of the Creative Commons Attribution-NonCommercial-ShareAlike (CC BY-NC-SA) license.
Z-Inspection® is listed in the OECD Catalogue of AI Tools & Metrics
We have reached a major milestone!
After 5 years of applied research work, we produced two full reports containing the lessons we have learned and a list of practical suggestions:
– Lessons Learned from Assessing Trustworthy AI in Practice. Digital Society (DSO), 2, 35 (2023). Springer.
– The full report is available on arXiv and can be downloaded as a PDF. Cite as: arXiv:2206.09887 [cs.CY] (v2, 28 June 2022).
Z-Inspection®: A Process to Assess Trustworthy AI
Roberto V. Zicari, John Brodersen, James Brusseau, Boris Düdder, Timo Eichhorn, Todor Ivanov, Georgios Kararigas, Pedro Kringen, Melissa McCullough, Florian Möslein, Karsten Tolle, Jesmin Jahan Tithi, Naveed Mushtaq, Gemma Roig, Norman Stürtz, Irmhild van Halem, Magnus Westerlund.
IEEE Transactions on Technology and Society, Vol. 2, No. 2, June 2021
Print ISSN: 2637-6415 | Online ISSN: 2637-6415
Digital Object Identifier: 10.1109/TTS.2021.3066209
Assessing Trustworthy AI in times of COVID-19.
Deep Learning for predicting a multi-regional score conveying the degree of lung compromise in COVID-19 patients.
The AI system aims to help radiologists estimate and communicate the severity of damage in a patient’s lung from chest X-rays. It has been experimentally deployed in the radiology department of the public hospital ASST Spedali Civili in Brescia, Italy, since December 2020, during the pandemic.
In cooperation with the Department of Information Engineering and the Department of Medical and Surgical Specialties, Radiological Sciences, and Public Health, University of Brescia, Brescia, Italy.
IEEE Transactions on Technology and Society, Vol. 3, No. 4, pp. 272-289, December 2022
Print ISSN: 2637-6415 | Online ISSN: 2637-6415
Digital Object Identifier: 10.1109/TTS.2022.3195114
Co-design of Trustworthy AI. Best Practice.
Co-Design of a Trustworthy AI System in Healthcare: Deep Learning Based Skin Lesion Classifier.
Roberto V. Zicari, Sheraz Ahmed, Julia Amann, Stephan Alexander Braun, John Brodersen, Frédérick Bruneault, James Brusseau, Erik Campano, Megan Coffee, Andreas Dengel, Boris Düdder, Alessio Gallucci, Thomas Krendl Gilbert, Philippe Gottfrois, Emmanuel Goffi, Christoffer Bjerre Haase, Thilo Hagendorff, Eleanore Hickman, Elisabeth Hildt, Sune Holm, Pedro Kringen, Ulrich Kühne, Adriano Lucieri, Vince I. Madai, Pedro A. Moreno-Sánchez, Oriana Medlicott, Matiss Ozols, Eberhard Schnebel, Andy Spezzatti, Jesmin Jahan Tithi, Steven Umbrello, Dennis Vetter, Holger Volland, Magnus Westerlund and Renee Wurth.
Front. Hum. Dyn. | Human and Artificial Collaboration for Medical Best Practices, July 13, 2021
Assessing Trustworthy AI. Best Practice.
On Assessing Trustworthy AI in Healthcare. Machine Learning as a Supportive Tool to Recognize Cardiac Arrest in Emergency Calls
Roberto V. Zicari • James Brusseau • Stig Nikolaj Blomberg • Helle Collatz Christensen • Megan Coffee • Marianna B. Ganapini • Sara Gerke • Thomas Krendl Gilbert • Eleanore Hickman • Elisabeth Hildt • Sune Holm • Ulrich Kühne • Vince I. Madai • Walter Osika • Andy Spezzatti • Eberhard Schnebel • Jesmin Jahan Tithi • Dennis Vetter • Magnus Westerlund • Renee Wurth • Julia Amann • Vegard Antun • Valentina Beretta • Frédérick Bruneault • Erik Campano • Boris Düdder • Alessio Gallucci • Emmanuel Goffi • Christoffer Bjerre Haase • Thilo Hagendorff • Pedro Kringen • Florian Möslein • Davi Ottenheimer • Matiss Ozols • Laura Palazzani • Martin Petrin • Karin Tafur • Jim Tørresen • Holger Volland • Georgios Kararigas
Front. Hum. Dyn., Human and Artificial Collaboration for Medical Best Practices, 08 July 2021
Pilot Project: Assessment for Responsible Artificial Intelligence, together with the Rijks ICT Gilde (Ministry of the Interior and Kingdom Relations, BZK) and the province of Fryslân (The Netherlands)
“The results of this pilot are of great importance for the entire Dutch government, because we have developed a best practice with which administrators can really get started and actually incorporate ethical values into the algorithms used.”
— Rijks ICT Gilde, Ministry of the Interior and Kingdom Relations (BZK)
Artificial Intelligence (AI) plays a role in more and more aspects of our lives. The technology, based on algorithms and data, is embedded in numerous devices and can help address social issues, for example those related to energy, sustainability, or poverty. As a government, we have an exemplary role. If we want to seize the opportunities of AI, important questions about ethics, technology, transparency, and the possible effects of AI applications on our society must be answered.
The pilot “Assessment for Responsible AI” is a step in this process. During a six-month pilot, the Rijks ICT Gilde (Ministry of the Interior and Kingdom Relations, BZK), in cooperation with the province of Fryslân and a team of experts of the Z-Inspection® Initiative led by Prof. Zicari, investigated a deep learning algorithm in practice.
👉 Download the full project report: Lessons Learned in Performing a Trustworthy AI and Fundamental Rights Assessment
New Pilot Project: Assessing the Trustworthiness of the Use of Generative AI in Higher Education.
This pilot project of the Z-Inspection® initiative (https://z-inspection.org) aims to assess the use of Generative AI in higher education, considering specific use cases.
For this pilot project, we will assess the ethical, technical, domain-specific (i.e., education), and legal implications of the use of Generative AI products/services within the university context.
We follow the UNESCO guidance for policymakers on Generative AI and education, in particular the policy recommendation: Pilot testing, monitoring and evaluation, and building an evidence base.
Participants
– Affiliated Labs: https://z-inspection.org/affiliated-labs/
– Members of the Z-Inspection® initiative: https://z-inspection.org
– Ministries and Universities
– Specialized agencies
– Others
Approach
An interdisciplinary team of experts will assess the trustworthiness of Generative AI for selected use cases in higher education using the Z-Inspection® process: https://z-inspection.org
First World Z-Inspection® Conference. Ateneo Veneto, Venice, Italy, March 10-11, 2023 – CONFERENCE READER
The interdisciplinary meeting welcomed over 60 international scientists and experts from AI, ethics, human rights, and domains such as healthcare, ecology, business, and law. Two use cases were presented and discussed:
– The pilot project “Assessment for Responsible Artificial Intelligence”, together with the Rijks ICT Gilde (part of the Ministry of the Interior and Kingdom Relations, BZK) and the province of Fryslân (The Netherlands);
– The assessment of the use of AI in times of COVID-19 at the Brescia Public Hospital (“ASST Spedali Civili di Brescia“).
Two panel discussions on “Human Rights and Trustworthy AI” and “How do we trust AI?“ provided an interdisciplinary view on the relevance of data and AI ethics in the human rights and business context.
The main message of the conference was the need for a Mindful Use of AI (#MUAI).
This first World Z-Inspection® Conference was held in cooperation with the Global Campus of Human Rights and the Venice Urban Lab, and was supported by Arcada University of Applied Sciences, Merck, Roche, and Zurich Insurance Company.
The Second World Z-Inspection® Conference was successfully completed on Friday, August 23 and Saturday, August 24, 2024, in Hamburg, Germany.