On the Ethics of AI-Based Algorithmic Decision-Making in Healthcare

Assessing Trustworthy AI. Best Practice: AI for Predicting Cardiovascular Risks

We analyzed the ethical and societal implications, and assessed the technical and legal aspects, of using AI to predict the risk of cardiovascular heart disease.

What a Philosopher Learned at an AI Ethics Evaluation

Our expert James Brusseau (PhD, Philosophy) of Pace University, New York City, USA, wrote an essay documenting the lessons he learned while working with our team and applying Z-Inspection® to this use case.

These are nine lessons he learned about applying ethics to AI in the real world.


The Problem Domain

Cardiovascular diseases (CVDs) are the number 1 cause of death globally, taking an estimated 17.9 million lives each year. Over the past decade, several machine-learning techniques have been used for cardiovascular disease diagnosis and prediction. The potential of AI in cardiovascular medicine is high; however, ignorance of the challenges may overshadow its potential clinical impact.

Scenario

The product we assessed was a non-invasive AI medical device that used machine learning to analyze sensor data (i.e., electrical signals of the heart) from patients in order to predict the risk of cardiovascular heart disease.

The company uses a traditional machine learning pipeline approach, which transforms raw data into features that better represent the predictive task. The features are interpretable, and the role of machine learning is to map this representation to an output.
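To make this pipeline structure concrete, here is a minimal sketch in Python. The specific features (heart rate, beat-interval variability, amplitude range), the synthetic signals, and the logistic-regression model are all illustrative assumptions, not the company's actual feature set or model; the point is only the two-stage shape: raw signal → interpretable features → learned mapping to an output.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def extract_features(signal, fs=250):
    """Map a raw ECG-like signal to interpretable features.

    Hypothetical features for illustration only: a crude peak
    detector yields inter-beat intervals, from which we compute
    mean heart rate, a variability proxy, and amplitude range.
    """
    # Interior samples that exceed both neighbors and a threshold.
    peaks = np.flatnonzero(
        (signal[1:-1] > signal[:-2])
        & (signal[1:-1] > signal[2:])
        & (signal[1:-1] > 0.5)) + 1
    rr = np.diff(peaks) / fs                # inter-beat intervals (s)
    if len(rr) == 0:
        return np.zeros(3)
    return np.array([
        60.0 / rr.mean(),                   # mean heart rate (bpm)
        rr.std(),                           # variability proxy
        signal.max() - signal.min(),        # amplitude range
    ])

# Synthetic "patients": sine-like signals with varying beat rates.
rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / 250)               # 10 s at 250 Hz
X = np.array([extract_features(np.sin(2 * np.pi * f * t))
              for f in rng.uniform(0.8, 2.5, 40)])
y = (X[:, 0] > 90).astype(int)              # toy "high-risk" label

# The ML stage only maps the feature representation to an output.
model = LogisticRegression(max_iter=1000).fit(X, y)
```

Because each feature has a clinical reading (e.g., mean heart rate in bpm), the model's coefficients can be inspected and discussed by domain experts, which is the usual argument for this pipeline design over end-to-end learning on raw signals.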

Approach

We used the Z-Inspection® methodology, considering the following perspectives:

a) Social (relevant for policy makers, users/patients, and society at large);

b) Domain-specific (relevant for key stakeholders in health care);

c) Technology-specific (relevant for machine learning engineers).

Recommendations

Based on the list of classified ethical issues (dilemmas in practice and true ethical dilemmas), the team of inspectors may give recommendations. Where possible, these help improve the design of the AI: developers can use the feedback and results from the assessed areas, together with the ethical aspects identified, to improve the technical aspects of the AI and/or to better address the ethical challenges.

Such recommendations should be considered a source of qualified information that helps decision makers make sound decisions, and that supports the decision-making process of defining appropriate trade-offs.

Trade-Offs

The purpose of the investigation is to help define a degree of confidence that the AI under analysis is trustworthy with respect to a number of indicators, taking into account the context (e.g. ecosystems), people, data, processes, and selected areas of investigation.

We envision that the results of the investigation would be useful for the relevant stakeholders who are responsible for making the final decision on whether use of the AI is appropriate in the given context. They would also help continue the discussion by engaging additional stakeholders in the decision-making process.