Kick-off Meeting (April 15, 2021): Assessing Trustworthy AI. Best Practice: Deep Learning for predicting a multi-regional score conveying the degree of lung compromise in COVID-19 patients. In cooperation with the Department of Information Engineering and the Department of Medical and Surgical Specialties, Radiological Sciences, and Public Health, University of Brescia, Brescia, Italy

On April 15, 2021 we held a very successful kick-off meeting for this use case:

Assessing Trustworthy AI. Best Practice: Deep Learning for predicting a multi-regional score conveying the degree of lung compromise in COVID-19 patients.

71 experts from all over the world attended.

Worldwide, the saturation of healthcare facilities, due to the high contagiousness of the SARS-CoV-2 virus and the significant rate of respiratory complications, is among the most critical aspects of the ongoing COVID-19 pandemic.
The team of Alberto Signoroni and colleagues implemented an end-to-end deep learning architecture designed to predict, from chest X-ray images (CXR), a multi-regional score conveying the degree of lung compromise in COVID-19 patients.
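To make the task concrete, here is a minimal illustrative sketch of multi-regional scoring, not the authors' published model: it splits a CXR array into left/right lung halves with three zones each and assigns a severity score per region. The six-region layout, the 0–3 score range, and the intensity-based placeholder scorer are all assumptions for illustration; the real system uses a trained deep network.

```python
import numpy as np

def predict_regional_scores(cxr, n_rows=3, score_fn=None):
    """Toy multi-regional scoring: split a CXR into left/right halves,
    each divided into `n_rows` vertical zones, and assign a severity
    score to each of the 2 * n_rows regions."""
    if score_fn is None:
        # Placeholder "network": mean intensity mapped to a 0-3 score.
        score_fn = lambda region: int(np.clip(region.mean() * 4, 0, 3))
    h, w = cxr.shape
    scores = {}
    for side, cols in (("L", slice(0, w // 2)), ("R", slice(w // 2, w))):
        for i in range(n_rows):
            rows = slice(i * h // n_rows, (i + 1) * h // n_rows)
            scores[f"{side}{i}"] = score_fn(cxr[rows, cols])
    return scores

# Usage: a synthetic 6x4 "image" with intensities in [0, 1].
img = np.random.default_rng(0).random((6, 4))
regional = predict_regional_scores(img)
total = sum(regional.values())  # an aggregate severity over all regions
```

In the real architecture a convolutional network would replace `score_fn`, producing one score per anatomical region end-to-end rather than from fixed geometric zones.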

We will work with Alberto Signoroni and his team and apply our Z-inspection® process to assess the ethical, technical and legal implications of using Deep Learning in this context.

For more information: https://z-inspection.org/best-practice-deep-learning-for-predicting-a-multi-regional-score-conveying-the-degree-of-lung-compromise-in-covid-19-patients/

This AI detects cardiac arrests during emergency calls

Jointly with the Emergency Medical Services Copenhagen, we completed the first part of our trustworthy AI assessment.
An ML system is currently used as a supportive tool to recognize cardiac arrest in 112 emergency calls.
A team of multidisciplinary experts used Z-Inspection® and identified ethical, technical and legal issues in using such an AI system.
This confirms some of the ethical concerns raised by Kay Firth-Butterfield back in June 2018.

“This is another example of the need to test and verify algorithms,” says Kay Firth-Butterfield, Head of Artificial Intelligence and Machine Learning at the World Economic Forum.

“We all want to believe that AI will ‘wave its magic wand’ and help us do better, and this sounds as if it is a way of getting AI to do something extremely valuable.”

“But,” Firth-Butterfield added, “it still needs to meet the requirements of transparency and accountability and protection of patient privacy. As it is in the EU, it will be caught by GDPR, so it is probably not a problem.” However, the technology raises the fraught issue of accountability, as Firth-Butterfield explains: who is liable if the machine gets it wrong? The AI manufacturer, the human being advised by it, or the centre using it? This is a much-debated question within AI that we need to solve urgently: when do we accept that if the AI is wrong it doesn’t matter, because it is significantly better than humans? Does it need to be 100% better than us, or just a little better? At what point is using, or not using, this technology negligent?

Source: https://www.weforum.org/agenda/2018/06/this-ai-detects-cardiac-arrests-during-emergency-calls/

Image: CPR

The full report has been submitted for publication. Contact me if you are interested in knowing more. RVZ

Resources:

Article: World Economic Forum, 6 June 2018.

Download the Z-Inspection® Process

“Z-Inspection®: A Process to Assess Ethical AI”
Roberto V. Zicari, John Brodersen, James Brusseau, Boris Düdder, Timo Eichhorn, Todor Ivanov, Georgios Kararigas, Pedro Kringen, Melissa McCullough, Florian Möslein, Karsten Tolle, Jesmin Jahan Tithi, Naveed Mushtaq, Gemma Roig, Norman Stürtz, Irmhild van Halem, Magnus Westerlund.
IEEE Transactions on Technology and Society, 2021
Print ISSN: 2637-6415
Online ISSN: 2637-6415
Digital Object Identifier: 10.1109/TTS.2021.3066209
DOWNLOAD THE PAPER

The Z-Inspection® Process is available for Download!


Professor Sibrand Poppema, member of the Z-Inspection® Advisory Board, receives German Order of Merit

The Order of Merit of the Federal Republic of Germany is the highest tribute Germany pays for services to the nation. Professor Sibrand Poppema, member of the Advisory Board of the Z-Inspection® initiative and President of Sunway University in Malaysia, has been awarded the Officer’s Cross of the Order of Merit in recognition of his outstanding contribution to research and education relations between Germany and the Netherlands. Professor Poppema has recently joined the Advisory Board of the Z-Inspection® initiative. The Advisory Board is responsible for supporting the researchers of the Z-Inspection® initiative in scientific and strategic matters with external expertise. The Advisory Board currently consists of forty-two international experts.

Award ceremony with Ambassador Dr. Peter Blomeyer
Photo: Sunway University

Naveed Mushtaq. May he rest in peace.

Our team member, colleague and friend Naveed Mushtaq passed away last night. His heart did not make it.

He suffered a sudden cardiac arrest a few weeks ago.

May he rest in peace.  

We pray for the family, that God will grant them serenity in this very difficult time in their lives.

What a Philosopher Learned at an AI Ethics Evaluation

Our expert James Brusseau (PhD, Philosophy) – Pace University, New York City, USA – wrote an essay documenting the lessons he learned while working on a Z-Inspection® assessment performed on an existing, deployed, and functioning AI medical device.

What a Philosopher Learned at an AI Ethics Evaluation

By James Brusseau

AI ethics increasingly focuses on converting abstract principles into practical action. This case study documents nine lessons for the conversion learned while performing an ethics evaluation on a deployed AI medical device. The utilized ethical principles were adopted from the Ethics Guidelines for Trustworthy AI, and the conversion into practical insights and recommendations was accomplished by an independent team composed of philosophers, technical and medical experts.

This essay contributes to the conversion of abstract principles into concrete artificial intelligence applications by documenting learnings acquired from a robust ethics evaluation performed on an existing, deployed, and functioning AI medical device. 

The ethics evaluation formed part of a larger inspection, involving technical and legal aspects of the device, that was organized by computer scientist Roberto Zicari (2020). This document is limited to the applied ethics, and to my experience as a philosopher.

These are nine lessons I learned about applying ethics to AI in the real world.

DOWNLOAD PDF

Zicari, Roberto (2020). Z-Inspection: A process to assess trustworthy AI. 

Dipayan Ghosh, Ph.D., Co-Director of the Digital Platforms & Democracy Project and Senior Fellow at the Shorenstein Center on Media, Politics and Public Policy, Harvard Kennedy School, joined our Advisory Board!

We are proud to announce that Dipayan Ghosh, Ph.D., Co-Director of the Digital Platforms & Democracy Project and Senior Fellow at the Shorenstein Center on Media, Politics and Public Policy, Harvard Kennedy School, USA, has joined our Advisory Board!

His research and writing have been cited and published widely, with recent analysis appearing in The New York Times, The Washington Post, The Wall Street Journal, The Atlantic, The Guardian, Foreign Affairs, Harvard Business Review, Foreign Policy, Time, and CNN. He has also appeared on CNN, MSNBC, CNBC, NPR and BBC. A computer scientist by training, Ghosh previously worked on global privacy and public policy issues at Facebook, where he led strategic efforts to address privacy and security issues at the company. Prior to Facebook, Ghosh was a technology and economic policy advisor in the Obama White House, where he served across the Office of Science & Technology Policy and the National Economic Council. He focused on issues concerning big data’s impact on consumer privacy and the digital economy. He has also served as a public interest technology fellow at New America, a Washington-based public policy think tank. Ghosh received a Ph.D. in electrical engineering & computer science at Cornell University, where he conducted research at the Wireless Intelligent Systems Lab, and completed post-doctoral work at the University of California, Berkeley.

Professor Margaret Levi, Sara Miller McCune Director of the Center for Advanced Study in the Behavioral Sciences (CASBS) at Stanford University joined our Advisory Board!

We are proud to announce that Professor Margaret Levi, Sara Miller McCune Director of the Center for Advanced Study in the Behavioral Sciences (CASBS) at Stanford University, Professor of Political Science at Stanford University, and Senior Fellow at the Stanford Woods Institute for the Environment, USA, has joined our Advisory Board!

CASBS @ Stanford brings together deep thinkers from diverse disciplines and communities to advance understanding of the full range of human beliefs, behaviors, interactions, and institutions. A leading incubator of human-centered knowledge, CASBS facilitates collaborations across academia, policy, industry, civil society, and government to collectively design a better future.

Arcada’s researchers are contributing to the development of an ethical assessment of Artificial Intelligence

25.11.2020 

What is Artificial Intelligence (AI) and how should we handle the use of AI? These are issues that are being pondered in most EU institutions and large companies today. AI is a loose term that includes most methods (self-learning algorithms) that are used primarily to process collected data and to create decisions based on patterns found in data. In fact, even today, most people use and are controlled by an AI without thinking about it. AI as a software technology is a little different and requires a deeper analysis than regular software. What characterizes AI is that it can be intrusive, it constantly needs more data and is continuously retrained to become better at finding patterns in our data.

Arcada participates in Z-Inspection®, an international research network that advocates “a mindful use of AI”, i.e. an ethical use of artificial intelligence. Professor Roberto Zicari at Goethe University has created an international interdisciplinary network that works to understand how we should handle AI. The network includes both the private sector and academics who work together to establish ideas for how we get an AI use that is compatible with European values. We probably all realize that AI will have a great impact on society and therefore such systems must be safe and resilient to use. This is what researcher Magnus Westerlund, who leads Arcada’s Master’s programme in Big Data Analytics and is also part of the Z-Inspection network, says. The challenge, however, is that technological development is only a small part, the big work lies in helping people to use AI, to enable a better experience and to understand when and to what extent we can trust AI.

Arcada’s researchers look, among other things, at the healthcare sector, where the impact will be great when AI is integrated into the systems. This requires the healthcare sector to implement a digitization process that gives AI access to reliable data. It can, for example, include a variety of digital systems and sensors that digitize the physical space. The healthcare sector has a clear challenge in keeping focus when it comes to the development of digital processes. It makes small sprints here and there, but the vision of where it is going, and the ability to take us there, is the problem.

– We can, for example, look at Estonia and note that a clear improvement would be an open platform that connects the different competent actors and defines how we get data. This is of utmost importance for being able to use AI, says Westerlund.

A significant difference from Estonia, for example, has been all the legacy systems that exist in Finland and the challenge of connecting these in a safe way. Data security for the systems has become one of the major obstacles to driving a faster change process. At the same time, too much emphasis has been placed on purchasing large, fully developed IT systems from abroad. Instead of nurturing a domestic software industry, we have relied on consultant-driven adaptations of these systems to Finnish needs. The question is how these systems will now be able to deliver data to the various companies and researchers who wish to use AI to improve decision-making in healthcare.

The future of AI-supported services

According to reports, the European Commission will propose new AI legislation, which will impose requirements on AI-supported services. This work is reminiscent of the process behind the General Data Protection Regulation (GDPR). It is likely that in the future there will be demands on AI services that have an impact on both the individual and society. One of the basic requirements should be to perform an ethical evaluation when using AI services.

According to Westerlund, transparency and openness are needed in our systems, so that we understand how an AI decision came about.

– The algorithm may be wrong and that is something we must dare to address. Attempts to automate may reduce costs, but it is important to understand that AI has no understanding of the broader relevance of what it proposes. If we start introducing services that replace people, there will be problems, especially in government decisions or when it comes to life and death, such as in health care and traffic. Our society has not yet had time to adapt to an AI making autonomous decisions –  we expect a human being to be legally responsible.

Westerlund points out that artificial intelligence should in the medium term be seen as a complement to the human expert, which contributes to making decisions and diagnoses better and more reliable.

Concrete proposals for opening up healthcare to AI use are to open up data warehouses, modularize the software in use, and improve digital identities and data security. Initiatives such as Z-Inspection® can then be used to evaluate how the AI solutions meet the ethical requirements.

The research at Arcada wants to solve problems related to:

  • Better use of digital tools, in the future including AI to, for example, detect or support certain functions.
  • An open architecture for infrastructure.
  • Distributed security suitable for modular systems.
  • Evaluating the use of AI.

Editor, Elina Sagne-Ollikainen, Master in Health Care and Social Service, Specialist in Research and Development at Arcada.

Article written by Magnus Westerlund, DSc., Principal Lecturer in Information Technology.

Originally published here.