On the Ethics of AI-Based Algorithmic Decision-Making in Healthcare

Assessing Trustworthy AI. Best Practice: An AI Medical Device for Predicting Cardiovascular Risk

We analyzed the ethical and societal implications, and assessed the technical and legal aspects, of the use of an AI medical device to predict the risk of cardiovascular heart disease.

You can read a summary of the results of the assessment here:

“Z-Inspection®: A Process to Assess Ethical AI”
IEEE Transactions on Technology and Society, 2021
Digital Object Identifier: 10.1109/TTS.2021.3066209
DOWNLOAD THE PAPER

What a Philosopher Learned at an AI Ethics Evaluation

Our expert James Brusseau (PhD, Philosophy) – Pace University, New York City, USA – wrote an essay documenting what he learned from working with our team and applying Z-Inspection® to this use case.

These are nine lessons he learned about applying ethics to AI in the real world.

DOWNLOAD PDF

The Problem Domain

Cardiovascular diseases (CVDs) are the leading cause of death globally, taking an estimated 17.9 million lives each year. Over the past decade, several machine-learning techniques have been used for cardiovascular disease diagnosis and prediction. The potential of AI in cardiovascular medicine is high; however, ignoring its challenges may overshadow its potential clinical impact.

Scenario

The product we assessed was a non-invasive AI medical device that used machine learning to analyze sensor data (i.e., the electrical signals of the heart) of patients to predict the risk of cardiovascular heart disease.

The company uses a traditional machine-learning pipeline approach, which transforms raw data into features that better represent the predictive task. The features are interpretable, and the role of machine learning is to map this representation to the output.
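
To make this concrete, here is a minimal sketch of such a feature-based pipeline: hand-crafted, interpretable features are extracted from raw heart-signal data, and a simple classifier maps them to a risk output. The feature definitions, model choice, and synthetic data below are our own illustrative assumptions and do not describe the assessed device's actual implementation.

# Illustrative sketch (Python/scikit-learn), not the assessed device's design:
# interpretable features are extracted from raw signals, then a classifier
# maps those features to a risk prediction.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def extract_features(signal: np.ndarray) -> np.ndarray:
    """Compute simple, interpretable descriptors of one recording."""
    diffs = np.diff(signal)
    return np.array([
        signal.mean(),                 # baseline level
        signal.std(),                  # overall variability
        np.abs(diffs).mean(),          # average slope magnitude
        signal.max() - signal.min(),   # peak-to-peak amplitude
    ])

# Synthetic stand-in data: 100 recordings with binary risk labels.
rng = np.random.default_rng(0)
signals = rng.normal(size=(100, 2500))
labels = np.tile([0, 1], 50)

# The classifier only sees the named features, so each prediction can be
# traced back to interpretable inputs.
X = np.vstack([extract_features(s) for s in signals])
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, labels)
print(model.predict_proba(X[:1]))

Because each prediction depends on a small set of named features, the mapping from input to output remains inspectable, which is what distinguishes this kind of pipeline from an end-to-end model trained directly on raw signals.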

Recommendations

Based on the list of classified ethical issues we discovered (dilemmas in practice and true ethical dilemmas), we gave recommendations. These recommendations would help improve the design of the AI: developers could use the feedback and the results derived from the areas of the assessment, and from the ethical aspects identified, to improve the technical aspects of the AI and/or to better address the ethical challenges.

Such recommendations should be considered a source of qualified information that helps decision makers make good decisions and supports the decision-making process for defining appropriate trade-offs.

Trade-Offs

The purpose of the investigation was to help define a degree of confidence that the AI analyzed is trustworthy with respect to a number of indicators, taking into account the context (e.g., ecosystems), people, data, processes, and selected areas of investigation.

We envisioned that the results of the investigation would be useful to the relevant stakeholders responsible for the final decision on whether or not the AI is appropriate to use in the given context. The results would also help continue the discussion by engaging additional stakeholders in the decision-making process.