Artificial intelligence systems raise pressing ethical and societal concerns.

We defined a novel process based on applied ethics, namely Z-Inspection®, to assess whether an AI system is trustworthy. We use the definition of trustworthy AI given by the European Commission's High-Level Expert Group on AI.

Z-Inspection®: A Process to Assess Trustworthy AI

Roberto V. Zicari, John Brodersen, James Brusseau, Boris Düdder, Timo Eichhorn, Todor Ivanov, Georgios Kararigas, Pedro Kringen, Melissa McCullough, Florian Möslein, Karsten Tolle, Jesmin Jahan Tithi, Naveed Mushtaq, Gemma Roig, Norman Stürtz, Irmhild van Halem, Magnus Westerlund.

Submitted for publication.

GO TO The Process in a nutshell

Ethical Principles

Z-Inspection® takes into account the "Framework for Trustworthy AI" as defined by the independent High-Level Expert Group on Artificial Intelligence, set up by the European Commission.

The framework rests on four ethical principles rooted in fundamental rights:

– Respect for human autonomy,

– Prevention of harm,

– Fairness, and

– Explicability.

Trustworthy AI Requirements

The framework further defines seven key requirements:

– Human agency and oversight,

– Technical robustness and safety,

– Privacy and data governance,

– Transparency,

– Diversity, non-discrimination and fairness,

– Societal and environmental well-being,

– Accountability.

We added two further requirements to these seven:

– Assessing whether the ecosystem respects the values of Western European democracy,

– Avoiding concentration of power.

How and When To Use Z-Inspection®

Z-Inspection® can be used

  • during the AI design phase, as a validation process to ensure trustworthy AI;
  • after AI deployment, as a validation process to assess trustworthy AI;
  • as part of AI certification, to certify trustworthy AI;
  • as part of an AI audit, to help verify claims;
  • as part of the monitoring of AI over time.
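As an illustration only, and not part of the Z-Inspection® process itself, the nine requirements above could be tracked during an assessment as a simple checklist structure. All names in this sketch are hypothetical:

```python
# Illustrative sketch: a minimal checklist for tracking which
# trustworthy-AI requirements have been assessed. This is an
# assumption for illustration, not Z-Inspection(R) tooling.

# The seven EU requirements plus the two Z-Inspection(R) additions.
REQUIREMENTS = [
    "Human agency and oversight",
    "Technical robustness and safety",
    "Privacy and data governance",
    "Transparency",
    "Diversity, non-discrimination and fairness",
    "Societal and environmental well-being",
    "Accountability",
    "Respect for values of Western European democracy",
    "Avoiding concentration of power",
]

def open_items(assessed: dict) -> list:
    """Return the requirements not yet assessed as satisfied."""
    return [r for r in REQUIREMENTS if assessed.get(r) is not True]

# Example: only two requirements assessed so far; seven remain open.
status = {"Transparency": True, "Accountability": True}
print(len(open_items(status)))  # prints 7
```

A structure like this could be revisited at each stage listed above (design, deployment, certification, audit, monitoring) to record which requirements still need attention.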

Best Practices

We are using Z-Inspection® to assess real-life use cases.

GO TO Best Practices