On the Ethics of AI-Based Algorithmic Decision-Making in Healthcare

Assessing Trustworthy AI. Best Practice: AI for Predicting Cardiovascular Risks

We analyze the ethical and societal implications of using AI to predict the risk of cardiovascular disease.

The Problem Domain

Cardiovascular diseases (CVDs) are the leading cause of death globally, taking an estimated 17.9 million lives each year. Over the past decade, several machine-learning techniques have been used for cardiovascular disease diagnosis and prediction. The potential of AI in cardiovascular medicine is high; however, ignoring its challenges may overshadow its potential clinical impact.

Scenario

The product we assessed was a non-invasive AI medical device that used machine learning to analyze sensor data (i.e., the electrical signals of the heart) from patients in order to predict the risk of cardiovascular disease.

The company uses a traditional machine-learning pipeline approach, which transforms raw data into features that better represent the predictive task. The features are interpretable, and the role of machine learning is to map this representation to the output.
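To make this concrete, here is a minimal sketch of such a feature-based pipeline, assuming hand-crafted features extracted from an ECG-like signal and a logistic regression classifier. The features, data, and model choice are our own illustrative assumptions, not the vendor's implementation.

    # Minimal sketch of a feature-based pipeline (hypothetical features and
    # data; not the assessed product's actual code).
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    def extract_features(raw_ecg):
        """Map a raw ECG-like trace to a few interpretable features."""
        return np.array([
            raw_ecg.mean(),                  # baseline level
            raw_ecg.std(),                   # overall variability
            np.abs(np.diff(raw_ecg)).max(),  # steepest slope, a crude QRS proxy
        ])

    # Hypothetical training set: one raw trace per patient, binary risk label.
    rng = np.random.default_rng(0)
    signals = [rng.standard_normal(5000) for _ in range(100)]
    labels = rng.integers(0, 2, size=100)

    X = np.vstack([extract_features(s) for s in signals])
    model = make_pipeline(StandardScaler(), LogisticRegression())
    model.fit(X, labels)

    # The coefficients refer to named, human-readable features, which is what
    # keeps the mapping from representation to output interpretable.
    print(model.named_steps["logisticregression"].coef_)

Because each coefficient is attached to a named feature, clinicians and inspectors can in principle trace how each input contributes to the predicted risk.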

Approach

We use the Z-inspection® methodology, considering the following perspectives:

a)  Social (relevant for policy makers, users/patients, and society at large);

b)  Domain-specific (relevant for key stakeholders in health care);

c)  Technology-specific (relevant for machine-learning engineers).

In the context of Z-inspection®, we call these perspectives Layers; additional layers are possible. The reason for introducing layers is to highlight and address the fact that each layer typically has its own culture, behavior, procedures, and terminology.

This also means that a team composed of experts from each layer needs to ensure that communication and collaboration work. Importantly, this should be checked between experts within the same layer as well.

To do so, a process of Concept Building should be performed. This is a continuous process and should be documented, e.g., in the form of a log and/or at least as a set of definitions and/or mappings/translations between concepts from different layers.

One of the goals of Z-inspection® is to identify Ethical Tensions (E), i.e., situations in which values are in conflict. In the domain of health care, for example, input variables might be omitted in order to avoid discrimination, even if this can negatively affect accuracy.
In many cases during an inspection, it is possible to identify issues and decisions whose impact on, or implications for, the ethical values are unclear. We call these issues Flags (F).
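The sketch below illustrates how such a tension could be quantified, assuming a tabular dataset with a sensitive attribute. The variable names, synthetic data, and model are hypothetical; the point is only to show how the accuracy cost of omitting a variable might be measured.

    # Sketch: measure the accuracy cost of omitting a sensitive variable
    # (synthetic data and hypothetical variable names, for illustration only).
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    n = 1000
    age = rng.normal(60, 10, n)
    sex = rng.integers(0, 2, n)            # the sensitive attribute
    cholesterol = rng.normal(200, 30, n)
    risk = (0.03 * age + 0.5 * sex + 0.01 * cholesterol
            + rng.normal(0, 1, n)) > 4.0   # synthetic binary outcome

    X_full = np.column_stack([age, sex, cholesterol])
    X_reduced = np.column_stack([age, cholesterol])  # sensitive variable dropped

    clf = make_pipeline(StandardScaler(), LogisticRegression())
    acc_full = cross_val_score(clf, X_full, risk, cv=5).mean()
    acc_reduced = cross_val_score(clf, X_reduced, risk, cv=5).mean()
    print(f"with sensitive variable:    {acc_full:.3f}")
    print(f"without sensitive variable: {acc_reduced:.3f}")

Making the accuracy difference explicit in this way turns a vague concern into a measurable trade-off that the inspection team can weigh against the risk of discrimination.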

The core of Z-inspection® is an iterative process in which Paths (P) of investigation are defined, assigned to teams, and executed in order to better understand the ethical tensions, the flags, and their consequences. As these paths are executed, new ethical tensions or flags may be identified, and previous ones may be further substantiated or discarded; the process is thus continuous.
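As one possible way to document this iterative process, the sketch below models tensions, flags, and paths as simple records in a running log. Z-inspection® does not prescribe a specific data format, so the structure shown is purely illustrative.

    # Sketch of a running inspection log (illustrative only; Z-inspection®
    # does not prescribe a data format).
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Tension:            # Ethical Tension (E): values in conflict
        id: str
        values_in_conflict: List[str]
        description: str

    @dataclass
    class Flag:               # Flag (F): issue with unclear ethical impact
        id: str
        description: str
        status: str = "open"  # open / substantiated / discarded

    @dataclass
    class Path:               # Path (P): an investigation assigned to a team
        id: str
        team: str
        targets: List[str]    # ids of the tensions/flags under investigation
        findings: List[str] = field(default_factory=list)

    log = {
        "tensions": [Tension("E1", ["non-discrimination", "accuracy"],
                             "Omitting a sensitive variable may reduce accuracy")],
        "flags": [Flag("F1", "Unclear impact of sensor noise on risk scores")],
        "paths": [Path("P1", "technical-layer team", ["E1", "F1"])],
    }

As paths are executed, their findings can update a flag's status or add new entries, which keeps the continuous nature of the process visible in the log.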

Recommendations

Depending on the list of classified ethical issues (dilemmas in practice and true ethical dilemmas), the team of inspectors may give recommendations. When possible, these recommendations help improve the design of the AI: developers can use the feedback and results derived from the areas of the assessment, and from the ethical aspects identified, to improve the technical aspects of the AI and/or to better address the ethical challenges.

Such recommendations should be considered a source of qualified information that helps decision makers make good decisions, and that supports the decision-making process in defining appropriate trade-offs.

Trade-Offs

The purpose of the investigation is to help define a degree of confidence that the AI under analysis is trustworthy with respect to a number of indicators, taking into account the context (e.g., ecosystems), people, data, processes, and the selected areas of investigation.

We envision that the results of the investigation will be useful to the relevant stakeholders who are responsible for the final decision on whether the AI is appropriate for use in the given context. The results would also help continue the discussion by engaging additional stakeholders in the decision-making process.