The Process

Motivation

The ethical and societal implications of artificial intelligence systems continue to raise concerns.

We – a team of international experts – have defined a novel holistic and analytic process to assess Ethical AI, called Z-Inspection®.

Z-Inspection® is a general inspection process for Ethical AI that can be applied to a variety of domains, such as business, healthcare, and the public sector, among many others. It is grounded in applied ethics. To the best of our knowledge, Z-Inspection® is the first process that combines a holistic and an analytic approach to assess Ethical AI in practice.

Core Idea

The core idea of our assessment is to create an orchestration process that helps teams of skilled experts assess the ethical, technical, and legal implications of using an AI product/service within a given context. Wherever possible, Z-Inspection® allows us to use existing frameworks and checklists, and to “plug in” existing tools to perform specific parts of the verification. The goal is to customize the assessment process for AI systems deployed in different domains and in different contexts.
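As an illustration of the “plug in” idea, the following is a minimal sketch of how existing verification tools could be attached to an assessment and run over a system description. The interface and names are hypothetical assumptions for this sketch, not an official Z-Inspection® API.

    from typing import Protocol


    class VerificationTool(Protocol):
        """Interface that a plugged-in tool is assumed to expose (hypothetical)."""
        name: str

        def run(self, system_description: dict) -> dict:
            """Return tool-specific findings for the AI system under assessment."""
            ...


    class Assessment:
        """Orchestrates the verification step by delegating specific checks to pluggable tools."""

        def __init__(self, context: str) -> None:
            self.context = context                    # e.g. "healthcare" or "public sector"
            self.tools: list[VerificationTool] = []

        def plug_in(self, tool: VerificationTool) -> None:
            # Customize the process for the domain/context by choosing which tools to attach.
            self.tools.append(tool)

        def run_verifications(self, system_description: dict) -> dict:
            # Collect findings per tool; the expert team interprets them in context.
            return {tool.name: tool.run(system_description) for tool in self.tools}

Under these assumptions, any concrete tool that provides a name and a run method can be plugged in without changing the orchestration itself, which is the point of customizing the assessment per domain and context.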

Absence of conflicts of interest

In our opinion, one cornerstone of conducting a neutral, effective AI ethics assessment is the absence of conflicts of interest (direct and indirect).

This means:

  1. Ensure that no conflict of interest exists between the inspectors and the entity/organization to be assessed;
  2. Ensure that no conflict of interest exists between the inspectors and the vendors of the tools, toolkits, or frameworks to be used in the inspection;
  3. Assess the potential bias of the team of inspectors.

This results in one of three possible outcomes:

→ GO, if all three conditions above are satisfied;
→ still GO, but with restricted use of specific tools, if condition 2 is not satisfied;
→ NoGO, if condition 1 or condition 3 is not satisfied.
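For illustration only, this decision rule can be written down as a small function. The function and parameter names below are hypothetical; this is a sketch of the rule stated above, not part of the official Z-Inspection® process.

    from enum import Enum


    class Verdict(Enum):
        GO = "GO"
        GO_RESTRICTED = "GO, with restricted use of specific tools"
        NO_GO = "NoGO"


    def conflict_of_interest_verdict(
        no_conflict_with_entity: bool,        # condition 1
        no_conflict_with_tool_vendors: bool,  # condition 2
        inspector_bias_assessed: bool,        # condition 3
    ) -> Verdict:
        """Encode the GO / NoGO rule described above (hypothetical helper)."""
        if not (no_conflict_with_entity and inspector_bias_assessed):
            return Verdict.NO_GO          # condition 1 or 3 not satisfied
        if not no_conflict_with_tool_vendors:
            return Verdict.GO_RESTRICTED  # only condition 2 not satisfied
        return Verdict.GO                 # all three conditions satisfied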

A Holistic and Analytic Process to Assess Ethical AI

The goals for creating Z-Inspection® were to help close the gap between “principles” (the “what” of AI ethics) and “practices” (the “how”), and to foster ethical values and ethical actions.

In detail…

Z-Inspection® is designed by integrating and complementing two well-known approaches:

– A holistic approach, which tries to grasp the whole without considering its individual parts; and

– An analytic approach, which considers each part of the problem domain separately.

“If governments deploy systems on human populations without frameworks for accountability, they risk losing touch with how decisions have been made, thus making it difficult for them to identify or respond to bias, errors, or other problems. The public will have less insight into how agencies function, and have less power to question or appeal decisions.”

“An ethical assessment would also benefit vendors (AI developers) that prioritize fairness, accountability, and transparency in their offerings. Companies that are best equipped to help agencies and researchers study their systems would have a competitive advantage over others. Cooperation would also help improve public trust, especially at a time when skepticism of the societal benefits of AI is on the rise.”

Algorithmic Impact Assessments: A Practical Framework for Public Agency Accountability, AI Now Institute, April 2018

A step forward in translating ethical principles into actionable strategies in practice.

Goals

– To help assess whether the use of AI in a given context is appropriate;

– To help minimize the risks associated with using AI in a given context (benefits versus harms);

– To help establish trust in AI;

– To help improve the design of the AI system from a socio-legal-technical viewpoint;

– To help identify opportunities for using AI in a given context;

– To help foster ethical values and ethical actions (i.e. stimulate new kinds of innovation).

Trustworthy AI

Our assessment takes into account the Framework for Trustworthy AI and the seven key requirements that AI systems should meet in order to be deemed trustworthy, defined by the independent High-Level Expert Group on Artificial Intelligence set up by the European Commission, as well as the AI principles of the Organisation for Economic Co-operation and Development (OECD), the first intergovernmental policy guidelines on AI.

Framework for Trustworthy AI

Z-Inspection® builds on four ethical principles rooted in fundamental rights:

– Respect for human autonomy;
– Prevention of harm;
– Fairness;
– Explicability.

Z-Inspection® uses the seven key requirements (values) for trustworthy AI:

– Human agency and oversight;
– Technical robustness and safety;
– Privacy and data governance;
– Transparency;
– Diversity, non-discrimination and fairness;
– Societal and environmental wellbeing;
– Accountability.
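For illustration only, the seven requirements above can be treated as a fixed checklist against which assessment findings are recorded. The following minimal sketch assumes a simple record type for one finding; the names, structure, and example are illustrative assumptions, not prescribed by Z-Inspection®.

    from dataclasses import dataclass, field

    # The seven key requirements for Trustworthy AI listed above.
    TRUSTWORTHY_AI_REQUIREMENTS = (
        "Human agency and oversight",
        "Technical robustness and safety",
        "Privacy and data governance",
        "Transparency",
        "Diversity, non-discrimination and fairness",
        "Societal and environmental wellbeing",
        "Accountability",
    )


    @dataclass
    class Finding:
        """One issue identified during an assessment, mapped to the requirements it touches."""
        description: str
        requirements: list[str] = field(default_factory=list)

        def __post_init__(self) -> None:
            # Guard against typos: every mapped requirement must be one of the seven above.
            unknown = set(self.requirements) - set(TRUSTWORTHY_AI_REQUIREMENTS)
            if unknown:
                raise ValueError(f"Unknown requirement(s): {unknown}")


    # Hypothetical example of recording a finding.
    finding = Finding(
        description="Training data under-represents one patient group.",
        requirements=["Diversity, non-discrimination and fairness",
                      "Technical robustness and safety"],
    )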