The Process


Artificial intelligence systems raise significant ethical and societal concerns.

Z-Inspection® is a general inspection process for Ethical AI that can be applied to a variety of domains, such as business, healthcare, and the public sector. It is grounded in applied ethics.

To the best of our knowledge, Z-Inspection® is the first process that combines a holistic and analytic approach to assess Trustworthy AI in practice.

Core Idea

The core idea of our assessment is to create an orchestration process that helps teams of skilled experts assess the ethical, technical and legal implications of using an AI product or service within a given context.


Trustworthy AI

Framework for Trustworthy AI

Our assessment takes into account the Framework for Trustworthy AI and the seven key requirements that AI systems should meet in order to be deemed trustworthy, defined by the independent High-Level Expert Group on Artificial Intelligence set up by the European Commission, as well as the Organisation for Economic Co-operation and Development (OECD) AI Principles.

Four ethical principles

Z-Inspection® is based on four ethical principles grounded in fundamental rights:

Respect for human autonomy,

Prevention of harm,

Fairness,

Explicability.

Seven key requirements for trustworthy AI

Human agency and oversight

Technical robustness and safety

Privacy and data governance

Transparency

Diversity, non-discrimination and fairness

Societal and environmental wellbeing

Accountability


A step forward in translating ethical principles into actionable strategies in practice.

“Professor Zicari and his team are developing an ethical framework to identify the moral hazards of AI.

The challenge is how to trust something that is difficult to explain. In his view, there is a clear gap between the engineers who drive the technology forward and the rest of society, for whom AI should ultimately be available as a technology that at once adds value and is ethically sound.

His research aims to establish trust that AI can be used responsibly to this end. To achieve this, the gap that currently exists between thinkers, society, politics and technologists must be closed.”


“If governments deploy systems on human populations without frameworks for accountability, they risk losing touch with how decisions have been made, thus making it difficult for them to identify or respond to bias, errors, or other problems.

The public will have less insight into how agencies function, and have less power to question or appeal decisions”

“An ethical assessment would also benefit vendors (AI developers) that prioritize fairness, accountability, and transparency in their offerings. Companies that are best equipped to help agencies and researchers study their systems would have a competitive advantage over others.

Cooperation would also help improve public trust, especially at a time when skepticism of the societal benefits of AI is on the rise.”

Algorithmic Impact Assessment: A Practical Framework for Public Agency Accountability, AI Now, April 2018