Assessing Trustworthy AI. Best Practice: AI Medical device for Predicting Cardiovascular Risks
We have used Z-Inspection® to evaluate a non-invasive AI medical device developed to assist medical doctors in diagnosing cardiovascular diseases.
√ Assessment Completed.
Assessing Trustworthy AI. Best Practice: Machine learning as a supportive tool to recognize cardiac arrest in 112 emergency calls for the City of Copenhagen.
Jointly with the Emergency Medical Services Copenhagen, we completed the first part of our trustworthy AI assessment.
√ Assessment Completed.
We use a holistic approach
“Ethical impact evaluation involves evaluating the ethical impact of a technology’s use, not just on its users, but often, also on those indirectly affected, such as their friends and families, communities, society as a whole, and the planet.”
–Peters et al.
Lessons Learned from Assessing Trustworthy AI in Practice.
Digital Society (DSO), 2, 35 (2023). Springer
Co-design of Trustworthy AI. Best Practice: Deep Learning based Skin Lesion Classifier.
We used Z-Inspection® as an ethically aligned co-design methodology and helped ML engineers ensure a trustworthy early design of an artificial intelligence (AI) system component for healthcare.
In cooperation with
German Research Center for Artificial Intelligence GmbH (DFKI)
√ Assessment Completed.
Co-Design of a Trustworthy AI-driven Voice-based System in Mental Health Monitoring
We use Z-Inspection® as an ethically aligned co-design methodology and help data scientists and AI engineers ensure a trustworthy early design of an artificial intelligence (AI) system component for healthcare.
This work is part of the “ExplainMe” project, which aims to develop innovative IT tools that explain characteristics of a person’s way of speaking, thus supporting health diagnostics and monitoring.
In cooperation with
Systems Research Institute Polish Academy of Sciences, Warsaw, Poland
→ Ongoing project.
Validation of a Trustworthy AI-based Clinical Decision Support System for Improving Patient Outcome in Acute Stroke Treatment
We use Z-Inspection® as an ethically aligned co-design methodology and help data scientists and AI engineers ensure a trustworthy early design of an artificial intelligence (AI) system component for healthcare.
Based on pre-clinical evidence and previously developed models, and an available prototype of a clinical decision support system (patent pending), we set out in this project to further develop, test, and validate a prognostic tool for outcome prediction of acute stroke patients.
This work is part of the “VALIDATE” EU project.
In cooperation with
Charité Lab for AI in Medicine (CLAIM) and QUEST center at the Berlin Institute of Health of Charité University Hospital, Berlin, Germany
→ Ongoing project.
Assessing Trustworthy AI in times of COVID-19: Deep Learning for predicting a multi-regional score conveying the degree of lung compromise in COVID-19 patients.
We conducted a self-assessment together with the
Department of Information Engineering and Department of Medical and Surgical Specialties, Radiological Sciences, and Public Health – University of Brescia, Brescia, Italy
√ Assessment Completed.
Assessing the Trustworthiness of the Use of Generative AI in Higher Education.
Use of AI in Marking and Providing Feedback to Graduate Students for Individual Written Assignments in Health Informatics Courses.
Marking and providing feedback on narrative assignments is time-consuming and cognitively taxing. This leads to delayed and terse feedback that may not satisfy the learner. Can AI be used to speed up marking and provide more substantive feedback to learners?
We are conducting a self-assessment together with the
LINK to Overview of the Use Case (LinkedIn)
→ Ongoing project.
Co-Design of a Trustworthy AI System in Healthcare: An Interpretable and Explainable Polygenic Risk Score AI Tool to Predict Type 2 Diabetes using Genome Data.
The polygenic risk score (PRS) is an important method for assessing genetic susceptibility to traits and diseases. Significant progress has been made in applying PRS to conditions such as obesity, cancer, and type 2 diabetes. Studies have demonstrated that PRS can effectively identify individuals at high risk, enabling early screening, personalized treatment, and targeted interventions.
One of the current limitations of PRS, however, is a lack of interpretability tools. To address this problem, a team of researchers at the Graduate School of Data Science at Seoul National University introduced eXplainable PRS (XPRS), an interpretation and visualization tool that decomposes PRSs into gene/region and single nucleotide polymorphism (SNP) contribution scores via SHapley Additive exPlanations (SHAP), providing insights into the specific genes and SNPs that contribute most to an individual’s PRS.
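As a minimal, hypothetical illustration of this kind of decomposition (this is not the XPRS implementation, and the genotypes and effect weights below are invented), note that for a purely additive PRS the Shapley value of each SNP reduces to its weighted, cohort-centered genotype:

```python
import numpy as np

# Hypothetical data: genotypes (0/1/2 allele counts) for 5 individuals x 4 SNPs,
# and per-SNP effect weights (betas), e.g. from GWAS summary statistics.
genotypes = np.array([
    [0, 1, 2, 1],
    [1, 0, 1, 2],
    [2, 2, 0, 0],
    [0, 1, 1, 1],
    [1, 2, 2, 0],
], dtype=float)
betas = np.array([0.8, -0.3, 0.5, 0.2])

# Polygenic risk score: weighted sum of allele counts per individual.
prs = genotypes @ betas

# For a purely additive (linear) score, the Shapley value of SNP j for
# individual i reduces to beta_j * (g_ij - mean(g_j)): the SNP's contribution
# relative to the cohort average genotype.
contributions = betas * (genotypes - genotypes.mean(axis=0))

# Additivity check: per-SNP contributions plus the cohort-mean score
# reconstruct each individual's PRS exactly.
assert np.allclose(contributions.sum(axis=1) + prs.mean(), prs)
```

Tools such as XPRS aggregate scores of this kind to the gene or region level; for non-linear models, the contributions would instead be estimated with a SHAP explainer rather than computed in closed form.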
In this best practice we use a co-design approach to help various stakeholders embed ethics across the whole span of the XPRS system’s design and implementation process. For that, we use Z-Inspection®, an ethically aligned Trustworthy AI co-design methodology.
In cooperation with the
Graduate School of Data Science, Seoul National University, Seoul, South Korea
→ Ongoing project.
Co-design of a Trustworthy Efficient AI for Cloud-Edge Computing
The EU MANOLO project aims to deliver a complete stack of trustworthy algorithms and tools that improve the efficiency and optimization of the operations, resources, and data required to train, deploy, and run high-quality, lighter AI models in both centralised and cloud-edge distributed environments.
MANOLO employs the Z-Inspection® process to assess the trustworthiness of AI systems based on the European Ethics Guidelines.
Download Deliverable D1.1 Trustworthy Efficient AI for the Cloud-edge Deep Dive
Download D6.1 Use Cases Design and Deployment Plans
→ Ongoing project.
Assessing the application of a deep learning algorithm for Nature Monitoring.
This report shares the experiences, results, and lessons learned from conducting a pilot project, ‘Responsible use of AI’, in cooperation with the Province of Friesland, Rijks ICT Gilde, part of the Ministry of the Interior and Kingdom Relations (BZK) (both in the Netherlands), and a group of members of the Z-Inspection® Initiative. The pilot project took place from May 2022 through January 2023. During the pilot, the practical application of a deep learning algorithm from the province of Fryslân was assessed.
The AI maps heathland grassland from satellite images to monitor nature reserves. Environmental monitoring is one of the crucial activities carried out by society, for purposes ranging from maintaining drinking-water standards to quantifying the CO2 emissions of a particular state or region. Using satellite imagery and machine learning to support decisions is becoming an important part of environmental monitoring.
The main focus of this report is to share the experiences, results, and lessons learned from performing a Trustworthy AI assessment using the Z-Inspection® process and the EU framework for Trustworthy AI, combined with a fundamental rights assessment using the Fundamental Rights and Algorithms Impact Assessment (FRAIA), as recommended by the Dutch government for the use of AI algorithms by Dutch public authorities.
arXiv:2404.14366 [cs.CY]
Nordic Applied Ethical AI Consortium (NAEAIC): Applied Ethical AI on Nordic Patient Records
The NAEAIC project (also known as FEHLS) applies federated learning to train AI models on Nordic patient records stored in Denmark (Research Unit for General Practice, RUGP) and Norway (Norwegian Centre for Headache Research, NTNU) without exchanging sensitive health data across borders. The project uses the GRACE platform (2021.AI) for AI governance and federated model orchestration.
Z-Inspection® was used to conduct a trustworthiness barrier assessment of the federated AI system, evaluating ethical, legal, societal, and technical tensions in line with the EU AI Act, GDPR, EHDS, and the European Commission’s framework for Trustworthy AI. The assessment maps socio-technical scenarios, identifies non-functional requirements, and resolves ethical issues through an interdisciplinary co-design process. Arcada University of Applied Sciences and Tampere University lead the Z-Inspection® barrier assessment and business case evaluation.
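The cross-border setup can be pictured with a generic federated-averaging (FedAvg) sketch. This is not the GRACE platform’s API, and the site names, sample sizes, and parameter values below are invented for illustration:

```python
import numpy as np

# Hypothetical local model updates: each site trains on its own patient
# records and shares only parameter vectors, never the raw health data.
site_weights = {
    "RUGP_Denmark": np.array([0.21, -0.40, 0.88]),
    "NTNU_Norway": np.array([0.19, -0.35, 0.91]),
}
site_sizes = {"RUGP_Denmark": 1200, "NTNU_Norway": 800}


def federated_average(weights, sizes):
    """Combine site parameters with a weighted average (FedAvg),
    proportional to each site's number of training records."""
    total = sum(sizes.values())
    return sum(sizes[name] / total * w for name, w in weights.items())


# The orchestrator aggregates the updates into one global model.
global_model = federated_average(site_weights, site_sizes)
```

Only the parameter vectors cross the border; the patient records never leave their home institutions, which is precisely the property a trustworthiness assessment scrutinizes against the GDPR and the EHDS.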
Collaborators
Lead institution(s): 2021.AI (Copenhagen, Denmark)
Partner institutions: Arcada University of Applied Sciences (Helsinki, Finland), Tampere University (Tampere, Finland), Research Unit for General Practice / RUGP (Aalborg, Denmark), Norwegian Centre for Headache Research / NTNU (Trondheim, Norway), Horten Law Firm (Copenhagen, Denmark)
Country/countries: Denmark, Finland, Norway
Funding Source
Nordic Innovation (Norden), Case No. 407-7003-P22027. Total grant: NOK 5,629,336 (50% of project costs). Programme: Life Science & Health Tech.
Start date: Jan 2024 End date: Dec 2025
√ Assessment Completed.
Optimizing the Provision of Rehabilitation Services Through Explainable Recommender Systems (SmartRehab)
The complexity of rehabilitation data and the limited ability to explore it can lead to suboptimal treatment decisions, resulting in over-treatment or under-treatment. This increases healthcare costs and has significant implications for patient health and well-being.
Overview
This project aims to contribute to this area of research by implementing recommender systems that support healthcare professionals’ decision-making processes.
The project will use data from a rehabilitation clinic in Switzerland that cares for persons with spinal cord injury (SCI). Our system will analyze data from the first rehabilitation, which happens after acute care and lasts six to eight months. SCI is damage to the spine that leads to irreversible physical impairment, also known as paraplegia and tetraplegia. First rehabilitation is designed to support persons with SCI to get used to their new condition, and to give them the tools to make them as independent as possible. The healthcare professionals involved during this phase include physicians, nurses, physiotherapists, logopedists, and psychologists, among other therapists.
This project is led by the Lucerne University of Applied Sciences and Arts (HSLU). HSLU is responsible for designing and implementing explainable methods in recommendations that will be integrated into the beta version of SmartRehab.
In addition, this project collaborates with the University of Lucerne (UniLU), health professionals at the Swiss Paraplegic Center (SPC), and Swiss Paraplegic Research (SPF) to comprehensively analyze the use case and thoroughly test the developed system. The objective is to provide feedback on the system and ensure it is functional and user-friendly. The project’s ultimate goal is to deploy a prototype of SmartRehab that can optimize medical decision-making in SCI rehabilitation. This will translate into optimizing patient functioning and reducing the cost of services. The project also applies rigorous ethical considerations. As a result, it provides a solid foundation for expanding current solutions by establishing a standard for using health recommender systems (HRSs) in rehabilitation services.
A task of this project is to use the Z-Inspection® process for Trustworthy AI co-design.
Main contact:
PD Dr. Luis Terán, Lucerne University of Applied Sciences and Arts
→ Ongoing project.
Trustworthy AI Assessment in Agriculture: From Plant Disease Recognition to Animal Welfare Monitoring
Research Background
Artificial Intelligence has become an essential tool for transforming modern agriculture, offering new possibilities for precision crop management, disease detection, and livestock welfare assessment. Yet, despite major advances in accuracy and automation, trustworthiness, including transparency, explainability, robustness, and ethical accountability, remains insufficiently addressed in agricultural AI systems.
While other domains (e.g., healthcare, finance) have started to operationalize Trustworthy AI principles, agriculture presents distinct challenges: heterogeneous environments, evolving biological conditions, limited labeled data, and ethical concerns related to animal welfare and sustainability. A systematic approach to assessing and enhancing trust in agricultural AI models is therefore urgently needed.
Over the past decade, my research group at JBNU (see here) has advanced the development of deep learning systems for plant disease recognition (Dong et al., 2025a,b), crop growth and environmental monitoring, autonomous greenhouse management, and cattle behavior and welfare analysis (Han et al., 2025; Nasir et al., 2025). These platforms, validated under real-world conditions, provide a robust empirical foundation for investigating how trustworthiness can be measured, interpreted, and improved in AI systems designed for agricultural decision-making.
Research Objectives
This study aims to develop and validate a Trustworthy AI Assessment Framework tailored for agricultural applications through two complementary case studies.
Case Study 1: Plant Disease Recognition
Objective: Assess robustness and interpretability of AI models under domain shifts caused by environmental and crop variability.
Case Study 2: Animal Behavior Recognition and Welfare Assessment
Objective: Examine model reliability and ethical implications in automated cattle behavior analysis.
A task of this project is to use the Z-Inspection® process for Trustworthy AI co-design.
Main contact
Prof. Alvaro Fuentes
Research Professor, Jeonbuk National University (JBNU), South Korea
→ Ongoing project.
Trustworthy AI Co-creation for Unmanned Vehicle-based solutions for Port Security
Geopolitical developments underscore the importance of maritime security. The effective monitoring and protection of port infrastructures are particularly critical to overall infrastructure security. The EU-funded COLOSSUS project aims to improve digital security, resilience, and maintenance of port infrastructures, aligning with the updated EU Maritime Security Strategy (EUMSS 2023). Collaborating with small and medium-sized enterprises (SMEs) from various Member States, the project addresses challenges in the EU’s Blue Growth strategy and the European Green Deal. It will develop solutions and test them through laboratory stress tests and real-world scenarios at the Arsenal do Alfeite (AASA) naval base in Portugal. A key component is the integrated perimeter surveillance system, which will provide live surveillance to enhance overall port security.
In the context of COLOSSUS, the Z-Inspection® process is deployed initially as an ethically aligned co-design methodology that helps the stakeholders of the three envisioned use cases assess their initial design decisions against the EU Framework for Trustworthy AI. Overall, the core of the co-design process is a cross-disciplinary conversation.
The initial outcome of this co-design process is the redefinition of each AI design’s goal and purpose. This will be achieved by discussing a number of socio-technical scenarios; for each COLOSSUS use case, a shared socio-technical scenario file will be created.
HORIZON.2.3 – Civil Security for Society
Coordinated by
TECNOVERITAS SERVICOS DE ENGENHARIA E SISTEMAS TECNOLOGICOS LDA, Portugal
https://colossus-project.eu/artificial-intelligence/
→ Ongoing project.
