Assessing Trustworthy AI. Best Practice: AI Medical Device for Predicting Cardiovascular Risks

We used Z-Inspection® to evaluate a non-invasive AI medical device designed to assist medical doctors in the diagnosis of cardiovascular diseases.

  Assessment Completed.

Learn more.

Assessing Trustworthy AI. Best Practice: Machine learning as a supportive tool to recognize cardiac arrest in 112 emergency calls for the City of Copenhagen.

Jointly with the Emergency Medical Services Copenhagen, we completed the first part of our trustworthy AI assessment.

Assessment Completed.

We use a holistic approach

“Ethical impact evaluation involves evaluating the ethical impact of a technology’s use, not just on its users, but often, also on those indirectly affected, such as their friends and families, communities, society as a whole, and the planet.”

–Peters et al.

Lessons Learned from Assessing Trustworthy AI in Practice.

Dennis Vetter, Julia Amann, Frederick Bruneault, Megan Coffee,
Boris Düdder, Alessio Gallucci, Thomas Krendl Gilbert, Thilo Hagendorff,
Irmhild van Halem, Eleanore Hickman, Elisabeth Hildt, Sune Holm,
George Kararigas, Pedro Kringen, Vince Madai, Emilie Wiinblad Mathez,
Jesmin Jahan Tithi, Magnus Westerlund, Renee Wurth,
Roberto V. Zicari & Z-Inspection® initiative

Digital Society (DSO), 2, 35 (2023). Springer.

Co-design of Trustworthy AI. Best Practice: Deep Learning-based Skin Lesion Classifier.

We used Z-Inspection® as an ethically aligned co-design methodology and helped ML engineers ensure a trustworthy early design of an artificial intelligence (AI) system component for healthcare.

In cooperation with

German Research Center for Artificial Intelligence GmbH (DFKI)

Assessment Completed.

Co-Design of a Trustworthy AI-driven Voice-based System in Mental Health Monitoring

We use Z-Inspection® as an ethically aligned co-design methodology and help data scientists and AI engineers ensure a trustworthy early design of an artificial intelligence (AI) system component for healthcare.

This work is part of the “ExplainMe” project, which aims to develop innovative IT tools that explain a person’s way of speaking, thus supporting health diagnostics and monitoring.

In cooperation with

Systems Research Institute Polish Academy of Sciences, Warsaw, Poland

→ Ongoing project.

Download Presentation

Project web site

Validation of a Trustworthy AI-based Clinical Decision Support System for Improving Patient Outcome in Acute Stroke Treatment

We use Z-Inspection® as an ethically aligned co-design methodology and help data scientists and AI engineers ensure a trustworthy early design of an artificial intelligence (AI) system component for healthcare.

Building on pre-clinical evidence, previously developed models, and an available prototype of a clinical decision support system (patent pending), we set out in this project to further develop, test, and validate a prognostic tool for outcome prediction in acute stroke patients.

This work is part of the “VALIDATE” EU project.

In cooperation with

Charité Lab for AI in Medicine (CLAIM) and QUEST center at the Berlin Institute of Health of Charité University Hospital, Berlin, Germany

→ Ongoing project.

Download Presentation

Project web site

Assessing Trustworthy AI in times of COVID-19: Deep Learning for predicting a multi-regional score conveying the degree of lung compromise in COVID-19 patients.

We conducted a self-assessment together with the

Department of Information Engineering and Department of Medical and Surgical Specialties, Radiological Sciences, and Public Health – University of Brescia, Brescia, Italy 

Assessment Completed.

Assessing the Trustworthiness of the Use of Generative AI in Higher Education.

Use of AI in Marking and Providing Feedback to Graduate Students for Individual Written Assignments in Health Informatics Courses.

Marking and providing feedback on narrative assignments is time-consuming and cognitively taxing. This leads to delayed and terse feedback that may not be satisfactory to the learner. Can AI be used to speed up marking and provide more substantive feedback to learners?

We are conducting a self-assessment together with the

Co-Design of a Trustworthy AI System in Healthcare: An Interpretable and Explainable Polygenic Risk Score AI Tool to Predict Type 2 Diabetes using Genome Data.

The polygenic risk score (PRS) is an important method for assessing genetic susceptibility to traits and diseases. Significant progress has been made in applying PRS to conditions such as obesity, cancer, and type 2 diabetes. Studies have demonstrated that PRS can effectively identify individuals at high risk, enabling early screening, personalized treatment, and targeted interventions.

One of the current limitations of PRS, however, is a lack of interpretability tools. To address this problem, a team of researchers at the Graduate School of Data Science at Seoul National University introduced eXplainable PRS (XPRS), an interpretation and visualization tool that decomposes PRSs into gene/region and single nucleotide polymorphism (SNP) contribution scores via Shapley additive explanations (SHAP), which provide insights into the specific genes and SNPs that contribute significantly to an individual’s PRS.
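To make the idea concrete, here is a minimal, hypothetical Python sketch (not the actual XPRS implementation) of the underlying technique: a toy additive PRS over synthetic SNP dosages, with SHAP used to decompose an individual’s predicted risk into per-SNP contribution scores. All data, effect sizes, and names below are invented for illustration.

```python
# Minimal illustrative sketch -- NOT the XPRS implementation.
# A PRS is typically an additive score, PRS = sum_i beta_i * dosage_i;
# SHAP then attributes a model's predicted risk back to individual SNPs.
import numpy as np
import shap
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_people, n_snps = 500, 20

# Synthetic genotype matrix: allele dosages in {0, 1, 2} for each SNP.
X = rng.integers(0, 3, size=(n_people, n_snps)).astype(float)

# Synthetic per-SNP effect sizes (in practice: GWAS summary statistics).
beta = rng.normal(0.0, 0.5, size=n_snps)
prs = X @ beta  # the classic additive polygenic risk score

# Synthetic disease labels driven by the PRS plus noise.
y = (prs + rng.normal(0.0, 1.0, size=n_people) > np.median(prs)).astype(int)

# A simple risk model over SNP dosages, standing in for the PRS model.
model = LogisticRegression(max_iter=1000).fit(X, y)

# SHAP decomposes each person's predicted risk into per-SNP contributions,
# the core idea behind XPRS's gene/region and SNP-level visualizations.
explainer = shap.LinearExplainer(model, X)
shap_values = explainer.shap_values(X)

# Show the five SNPs contributing most to one individual's score.
person = 0
top = np.argsort(np.abs(shap_values[person]))[::-1][:5]
for snp in top:
    print(f"SNP_{snp}: dosage={X[person, snp]:.0f}, "
          f"SHAP contribution={shap_values[person, snp]:+.3f}")
```

In XPRS, per-SNP scores of this kind are further aggregated to genes and regions, which is what makes the output interpretable for clinicians and researchers.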

In this best practice, we use a co-design approach to help the various stakeholders embed ethics across the whole span of the XPRS system’s design and implementation. For that, we use Z-Inspection®, an ethically aligned Trustworthy AI co-design methodology.

Project web site

→ Ongoing project.

Co-design of a Trustworthy Efficient AI for Cloud-Edge Computing

The EU MANOLO project aims to deliver a complete stack of trustworthy algorithms and tools that help AI systems reach better efficiency and seamless optimization of the operations, resources, and data required to train, deploy, and run high-quality, lighter AI models in both centralised and cloud-edge distributed environments.

MANOLO employs the Z-Inspection® process to assess the trustworthiness of AI systems based on the European Ethics Guidelines.

Download Deliverable D1.1 Trustworthy Efficient AI for the Cloud-edge Deep Dive

Download D6.1 Use Cases Design and Deployment Plans

Brochure

News: Refining the TAI Approach: FRAUNHOFER IIS and ARCADA Collaborate on Z-Inspection® within the MANOLO EU Project

Project web site

→ Ongoing project.

Lessons Learned in Performing a Trustworthy AI and Fundamental Rights Assessment

This report shares the experiences, results, and lessons learned from conducting the pilot project ‘Responsible use of AI’ in cooperation with the Province of Friesland, Rijks ICT Gilde (part of the Ministry of the Interior and Kingdom Relations, BZK) (both in the Netherlands), and a group of members of the Z-Inspection® Initiative. The pilot project took place from May 2022 through January 2023. During the pilot, the practical application of a deep learning algorithm from the province of Fryslân was assessed.

The AI maps heathland grassland by means of satellite images for monitoring nature reserves. Environmental monitoring is one of the crucial activities carried out by society, for purposes ranging from maintaining standards for drinkable water to quantifying the CO2 emissions of a particular state or region. Using satellite imagery and machine learning to support decisions is becoming an important part of environmental monitoring.
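To give a sense of what such a pipeline looks like, the following is a simplified, hypothetical sketch of per-pixel land-cover classification from multispectral band values. The assessed system was a deep learning algorithm; here a random forest on synthetic data stands in, and all names and values are invented for illustration.

```python
# Simplified, hypothetical sketch of satellite-based land-cover mapping.
# The assessed system used deep learning; the framing is the same:
# predict a class (e.g. heathland vs. grassland) per pixel from band values.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_pixels, n_bands = 2000, 4  # e.g. red, green, blue, near-infrared

# Synthetic reflectances per pixel (real inputs: e.g. Sentinel-2 bands).
X = rng.random((n_pixels, n_bands))

# NDVI-style vegetation index as an extra feature: (NIR - red) / (NIR + red).
ndvi = (X[:, 3] - X[:, 0]) / (X[:, 3] + X[:, 0] + 1e-9)
X = np.column_stack([X, ndvi])

# Synthetic labels: 0 = grassland, 1 = heathland (real labels: field surveys).
y = (ndvi + rng.normal(0.0, 0.1, n_pixels) > float(ndvi.mean())).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(f"Held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

A real pipeline would, of course, train on labelled field-survey data and validated satellite products rather than synthetic reflectances.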

The main focus of this report is to share the experiences, results, and lessons learned from performing a Trustworthy AI assessment using the Z-Inspection® process and the EU framework for Trustworthy AI, combined with a Fundamental Rights assessment using the Fundamental Rights and Algorithms Impact Assessment (FRAIA), as recommended by the Dutch government for the use of AI algorithms by Dutch public authorities.


arXiv:2404.14366 [cs.CY]

Optimizing the Provision of Rehabilitation Services Through Explainable Recommender Systems (SmartRehab)

Main contact:

PD Dr. Luis Terán

Lucerne University of Applied Sciences and Arts

The complexity of the data and the limited ability to explore it can lead to suboptimal treatment decisions, resulting in over-treatment or under-treatment. This increases healthcare costs and has significant implications for patient health and well-being.

Overview

This project aims to contribute to this area of research by implementing recommender systems that support healthcare professionals’ decision-making processes.

The project will use data from a rehabilitation clinic in Switzerland that cares for persons with spinal cord injury (SCI). Our system will analyze data from the first rehabilitation, which follows acute care and lasts six to eight months. SCI is damage to the spine that leads to irreversible physical impairment, also known as paraplegia or tetraplegia. First rehabilitation is designed to support persons with SCI in adjusting to their new condition and to give them the tools to become as independent as possible. The healthcare professionals involved during this phase include physicians, nurses, physiotherapists, logopedists, and psychologists, among other therapists.

This project is led by the Lucerne University of Applied Sciences and Arts (HSLU). HSLU is responsible for designing and implementing explainable methods in recommendations that will be integrated into the beta version of SmartRehab.

In addition, this project collaborates with the University of Lucerne (UniLU), health professionals at the Swiss Paraplegic Center (SPC), and Swiss Paraplegic Research (SPF) to comprehensively analyze the use case and to thoroughly test the developed system. The objective is to provide feedback on the system and ensure it is functional and user-friendly. The project’s ultimate goal is to deploy a prototype of SmartRehab that can optimize medical decision-making in SCI rehabilitation, which will translate into improved patient functioning and reduced cost of services. The project also gives rigorous attention to ethical considerations. As a result, it provides a solid foundation for expanding current solutions by establishing a standard for the use of health recommender systems (HRSs) in rehabilitation services.

A task of this project is to use the Z-Inspection® process for Trustworthy AI co-design.

→ Ongoing project.

Trustworthy AI Assessment in Agriculture: From Plant Disease Recognition to Animal Welfare Monitoring

Main contact

Prof. Alvaro Fuentes
Research Professor, Jeonbuk National University (JBNU), South Korea
Core Research Institute of Intelligent Robots, Electronics Engineering

Personal website: https://lnkd.in/eAF5XxP2

Research Background

Artificial Intelligence has become an essential tool for transforming modern agriculture, offering new possibilities for precision crop management, disease detection, and livestock welfare assessment. Yet, despite major advances in accuracy and automation, trustworthiness (including transparency, explainability, robustness, and ethical accountability) remains insufficiently addressed in agricultural AI systems.

While other domains (e.g., healthcare, finance) have started to operationalize Trustworthy AI principles, agriculture presents distinct challenges: heterogeneous environments, evolving biological conditions, limited labeled data, and ethical concerns related to animal welfare and sustainability. A systematic approach to assessing and enhancing trust in agricultural AI models is therefore urgently needed.

Over the past decade, my research group at JBNU (see here) has advanced the development of deep learning systems for plant disease recognition (Dong et al., 2025a,b), crop growth and environmental monitoring, autonomous greenhouse management, and cattle behavior and welfare analysis (Han et al., 2025; Nasir et al., 2025). These platforms, validated under real-world conditions, provide a robust empirical foundation for investigating how trustworthiness can be measured, interpreted, and improved in AI systems designed for agricultural decision-making.

Research Objectives

This study aims to develop and validate a Trustworthy AI Assessment Framework tailored for agricultural applications through two complementary case studies.

Case Study 1: Plant Disease Recognition

Objective: Assess robustness and interpretability of AI models under domain shifts caused by environmental and crop variability.

Case Study 2: Animal Behavior Recognition and Welfare Assessment

Objective: Examine model reliability and ethical implications in automated cattle behavior analysis.

A task of this project is to use the Z-Inspection® process for Trustworthy AI co-design.

→ Ongoing project.