Assessing Trustworthy AI. Best Practice: AI Medical Device for Predicting Cardiovascular Risks

We have used Z-Inspection® to evaluate a non-invasive AI medical device designed to assist medical doctors in the diagnosis of cardiovascular diseases.

Assessment completed.

Learn more.

Assessing Trustworthy AI. Best Practice: Machine learning as a supportive tool to recognize cardiac arrest in 112 emergency calls for the City of Copenhagen.

Jointly with the Emergency Medical Services Copenhagen, we completed the first part of our trustworthy AI assessment.

We use a holistic approach

“Ethical impact evaluation involves evaluating the ethical impact of a technology’s use, not just on its users, but often, also on those indirectly affected, such as their friends and families, communities, society as a whole, and the planet.”

– Peters et al.

Lessons Learned from Assessing Trustworthy AI in Practice.

Dennis Vetter, Julia Amann, Frederick Bruneault, Megan Coffee, Boris Düdder, Alessio Gallucci, Thomas Krendl Gilbert, Thilo Hagendorff, Irmhild van Halem, Eleanore Hickman, Elisabeth Hildt, Sune Holm, George Kararigas, Pedro Kringen, Vince Madai, Emilie Wiinblad Mathez, Jesmin Jahan Tithi, Magnus Westerlund, Renee Wurth, Roberto V. Zicari & the Z-Inspection® initiative

Digital Society (DSO) 2, 35 (2023). Springer

Co-design of Trustworthy AI. Best Practice: Deep Learning based Skin Lesion Classifier.

We used Z-Inspection® as an ethically aligned co-design methodology and helped ML engineers ensure a trustworthy early design of an artificial intelligence (AI) system component for healthcare.

In cooperation with

German Research Center for Artificial Intelligence GmbH (DFKI)

Co-Design of a Trustworthy AI-driven Voice-based System in Mental Health Monitoring

We use Z-Inspection® as an ethically aligned co-design methodology and help data scientists and AI engineers ensure a trustworthy early design of an artificial intelligence (AI) system component for healthcare.

This work is part of the “ExplainMe” project, which aims to develop innovative IT tools that explain features of a person’s way of speaking, thereby supporting health diagnostics and monitoring.

In cooperation with

Systems Research Institute Polish Academy of Sciences, Warsaw, Poland

Download Presentation

Project web site

Validation of a Trustworthy AI-based Clinical Decision Support System for Improving Patient Outcome in Acute Stroke Treatment

We use Z-Inspection® as an ethically aligned co-design methodology and help data scientists and AI engineers ensure a trustworthy early design of an artificial intelligence (AI) system component for healthcare.

Building on pre-clinical evidence, previously developed models, and an available prototype of a clinical decision support system (patent pending), we set out in this project to further develop, test, and validate a prognostic tool for outcome prediction in acute stroke patients.

This work is part of the “VALIDATE” EU project.

In cooperation with

Charité Lab for AI in Medicine (CLAIM) and QUEST center at the Berlin Institute of Health of Charité University Hospital, Berlin, Germany

Download Presentation

Project web site

Assessing Trustworthy AI in times of COVID-19: Deep Learning for predicting a multi-regional score conveying the degree of lung compromise in COVID-19 patients.

We conducted a self-assessment together with the

Department of Information Engineering and the Department of Medical and Surgical Specialties, Radiological Sciences, and Public Health, University of Brescia, Brescia, Italy

Assessing Trustworthiness of the Use of Generative AI for Higher Education.

Use of AI in Marking and Providing Feedback to Graduate Students for Individual Written Assignments in Health Informatics Courses.

Marking and providing feedback on narrative assignments is time-consuming and cognitively taxing. This leads to delayed and terse feedback that may not be satisfactory to the learner. Can AI be used to speed up marking and provide more substantive feedback to learners?

We are conducting a self-assessment together with the

Co-Design of a Trustworthy AI System in Healthcare: An Interpretable and Explainable Polygenic Risk Score AI Tool to Predict Type 2 Diabetes using Genome Data.

The polygenic risk score (PRS) is an important method for assessing genetic susceptibility to traits and diseases. Significant progress has been made in applying PRS to conditions such as obesity, cancer, and type 2 diabetes. Studies have demonstrated that PRS can effectively identify individuals at high risk, enabling early screening, personalized treatment, and targeted interventions.

One of the current limitations of PRS, however, is a lack of interpretability tools. To address this problem, a team of researchers at the Graduate School of Data Science at Seoul National University introduced eXplainable PRS (XPRS), an interpretation and visualization tool that decomposes PRSs into gene/region and single nucleotide polymorphism (SNP) contribution scores via Shapley additive explanations (SHAP), which provide insight into the specific genes and SNPs that contribute most to an individual’s PRS.
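To make the decomposition concrete, here is a minimal, hypothetical Python sketch (not the XPRS implementation; all names and numbers are illustrative assumptions). For a purely additive PRS, the SHAP value of each SNP reduces to its effect size times the deviation of the individual’s allele dosage from the population mean, so the per-SNP contributions plus a population baseline sum back to the individual’s score:

```python
# Minimal sketch: decomposing an additive polygenic risk score (PRS)
# into per-SNP SHAP contributions. Hypothetical data, not the XPRS tool.
import numpy as np

rng = np.random.default_rng(0)
n_individuals, n_snps = 1000, 5

betas = rng.normal(0.0, 0.2, n_snps)  # hypothetical GWAS effect sizes
dosages = rng.integers(0, 3, (n_individuals, n_snps)).astype(float)  # 0/1/2 allele counts

# Classic additive PRS: weighted sum of allele dosages per individual.
prs = dosages @ betas

# For a linear model, the exact SHAP value of SNP i is beta_i * (x_i - mean(x_i)).
mean_dosage = dosages.mean(axis=0)
shap_values = betas * (dosages - mean_dosage)  # per-individual, per-SNP contributions

# Sanity check: baseline plus contributions recovers each individual's PRS.
baseline = float(mean_dosage @ betas)
assert np.allclose(baseline + shap_values.sum(axis=1), prs)

# Rank the SNPs driving one individual's score, as an XPRS-style report might.
i = 0
for j in np.argsort(-np.abs(shap_values[i])):
    print(f"SNP_{j}: dosage={dosages[i, j]:.0f}, contribution={shap_values[i, j]:+.3f}")
```

The real tool works on fitted models and aggregates SNP contributions up to genes and regions; this sketch only illustrates why a SHAP decomposition yields per-SNP scores that sum back to the individual’s PRS.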

In this best practice, we use a co-design approach to help various stakeholders embed ethics across the whole span of the design and implementation of the XPRS system. For that, we use Z-Inspection®, an ethically aligned Trustworthy AI co-design methodology.

Project web site

Co-design of a Trustworthy Efficient AI for Cloud-Edge Computing

The EU MANOLO project aims to deliver a complete stack of trustworthy algorithms and tools that help AI systems reach greater efficiency and seamless optimization of the operations, resources, and data required to train, deploy, and run high-quality, lighter AI models in both centralised and cloud-edge distributed environments.

MANOLO employs the Z-Inspection® process to assess the trustworthiness of AI systems based on the European Ethics Guidelines.

Download Deliverable D1.1 Trustworthy Efficient AI for the Cloud-edge Deep Dive

Download D6.1 Use Cases Design and Deployment Plans

Brochure

News: Refining the TAI Approach: Fraunhofer IIS and Arcada Collaborate on Z-Inspection® within the MANOLO EU Project

Project web site

Lessons Learned in Performing a Trustworthy AI and Fundamental Rights Assessment

This report shares the experiences, results, and lessons learned from conducting the pilot project ‘Responsible use of AI’ in cooperation with the Province of Friesland, Rijks ICT Gilde, part of the Ministry of the Interior and Kingdom Relations (BZK) (both in the Netherlands), and a group of members of the Z-Inspection® Initiative. The pilot project ran from May 2022 through January 2023. During the pilot, the practical application of a deep learning algorithm from the province of Fryslân was assessed.

The AI maps heathland grassland by means of satellite images for monitoring nature reserves. Environmental monitoring is one of the crucial activities carried out by society for purposes ranging from maintaining drinking water standards to quantifying the CO2 emissions of a particular state or region. Using satellite imagery and machine learning to support such decisions is becoming an important part of environmental monitoring.

The main focus of this report is to share the experiences, results, and lessons learned from performing both a Trustworthy AI assessment, using the Z-Inspection® process and the EU framework for Trustworthy AI, and a Fundamental Rights assessment, using the Fundamental Rights and Algorithms Impact Assessment (FRAIA) recommended by the Dutch government for the use of AI algorithms by Dutch public authorities.


arXiv:2404.14366 [cs.CY]