The Center for the Study of Ethics in the Professions at Illinois Institute of Technology (Chicago, USA) launches The Ethical and Trustworthy AI Lab based on the Z-Inspection® Process.

Chicago, March 15, 2022

The Ethical and Trustworthy AI Lab at Illinois Institute of Technology’s Center for the Study of Ethics in the Professions is an interdisciplinary group of researchers interested in the social and ethical implications of Artificial Intelligence (AI).

The group investigates philosophical, ethical, and social aspects of AI, including trustworthiness and the question of what makes uses of AI ethical, just, and trustworthy; the roles of ethics codes, ethical guidelines, and policy-making in the regulation of AI technology; as well as AI applications in agricultural and medical contexts.

The mission of the Lab is to involve stakeholders from many fields, including computer science, technology, engineering, philosophy, and the social sciences, as well as practitioners and students, in an interdisciplinary reflection on the ethical uses of AI.

The Lab collaborates closely with the AI@IllinoisTech initiative, in particular its AI Ethics Working Group (AIEWG), and with the international Z-Inspection® network. The Z-Inspection® assessment method for Trustworthy AI is an approach based on the Ethics Guidelines for Trustworthy AI by the European Commission's High-Level Expert Group on Artificial Intelligence.

The head of the new Lab is Prof. Elisabeth Hildt.

More information here.

Vision 2022

1. Trustworthy AI Labs are established, based on the Z-Inspection® process

Like this one in Helsinki;

2. Z-Inspection® process modules are incorporated into Master and PhD programs at selected universities, to equip a new generation of interdisciplinary students for the assessment of ethical AI;

3. The Z-Inspection® process is leveraged in future European policy and regulation relating to AI.

Arcada University of Applied Sciences (Helsinki, Finland) launches The Laboratory for Trustworthy AI based on the Z-Inspection® Process.

Helsinki, November 12, 2021

The Laboratory for Trustworthy AI at Arcada University of Applied Sciences (Helsinki, Finland) is a transdisciplinary and international research community that trains organizations and actors to assess the use of artificial intelligence. The lab connects academia and civil society, including developers of AI solutions, students, end-users, researchers, and stakeholders.

The Lab promotes a human-centric approach to AI and works towards closing the gap between ethically sound AI development and current technical and methodological practices. The Lab embraces technical innovation and assists organizations in mapping the socio-technical scenarios used to assess risk.

The Lab collaborates closely with international networks such as the Z-Inspection® assessment method for Trustworthy AI. The Z-Inspection® approach is a validated assessment method that helps organizations deliver ethically sustainable, evidence-based, trustworthy, and user-friendly AI-driven solutions. The method is published in IEEE Transactions on Technology and Society.

More information about the Lab here.

Lessons Learned: Co-design of Trustworthy AI. Best Practice. By Helga Brogger, President of the Norwegian Society of Radiology

Mission: “…Aid the development of designs with reduced end-user vulnerability…”

– “…Socio-technical scenarios can be used to broaden stakeholders’ understanding of one’s own role in the technology, as well as awareness of stakeholders’ interdependence…”

– “…Recurrent, open-minded, and interdisciplinary discussions involving different perspectives of the broad problem definition…”

– “…The early involvement of an interdisciplinary panel of experts broadened the horizon of the AI designers, who are usually focused on the problem definition from a data and application perspective…”

– “…Consider the aim of the future AI system as a claim that needs to be validated before the AI system is deployed…”

– “…Involve patients at every stage of the design process … it is particularly important to ensure that the views, needs, and preferences of vulnerable and disadvantaged patient groups are taken into account to avoid exacerbating existing inequalities…”

Thank you, Roberto V. Zicari and the rest of the team for these insights!

— Helga Brogger, President of the Norwegian Society of Radiology

………………………………………………………………………………………………………………………………………………………………………….

Co-Design of a Trustworthy AI System in Healthcare: Deep Learning Based Skin Lesion Classifier.


Front. Hum. Dyn. | Human and Artificial Collaboration for Medical Best Practices, July 13, 2021


Our paper “Co-design of a Trustworthy AI System in Healthcare: Deep Learning Based Skin Lesion Classifier” has been accepted for publication in Frontiers in Human Dynamics.

Co-design of a Trustworthy AI System in Healthcare: Deep Learning based Skin Lesion Classifier.

Roberto V. Zicari (1)(2)(3), Sheraz Ahmed (4), Julia Amann (5), Stephan Alexander Braun (6)(7), John Brodersen (8)(9), Frédérick Bruneault (10), James Brusseau (11), Erik Campano (12), Megan Coffee (13), Andreas Dengel (4)(14), Boris Düdder (15), Alessio Gallucci (16), Thomas Krendl Gilbert (17), Philippe Gottfrois (18), Emmanuel Goffi (19), Christoffer Bjerre Haase (20), Thilo Hagendorff (21), Eleanore Hickman (22), Elisabeth Hildt (23), Sune Holm (24), Pedro Kringen (1), Ulrich Kühne (25), Adriano Lucieri (4)(14), Vince I. Madai (26)(27)(28), Pedro A. Moreno-Sánchez (29), Oriana Medlicott (30), Matiss Ozols (31)(32), Eberhard Schnebel (1), Andy Spezzatti (33), Jesmin Jahan Tithi (34), Steven Umbrello (35), Dennis Vetter (1), Holger Volland (36), Magnus Westerlund (2), Renee Wurth (37).

(1) Frankfurt Big Data Lab, Goethe University Frankfurt, Germany
(2) Arcada University of Applied Sciences, Helsinki, Finland
(3) Data Science Graduate School, Seoul National University, South Korea
(4) German Research Center for Artificial Intelligence (DFKI) Kaiserslautern, Germany
(5) Health Ethics and Policy Lab, Swiss Federal Institute of Technology (ETH Zurich), Switzerland
(6) Department of Dermatology, University Clinic Münster, Germany
(7) Dept. of Dermatology, Medical Faculty, Heinrich-Heine University, Düsseldorf, Germany
(8) Section of General Practice and Research Unit for General Practice, Department of Public Health, Faculty of Health and Medical Sciences, University of Copenhagen, Denmark
(9) Primary Health Care Research Unit, Region Zealand, Denmark
(10) École des médias, Université du Québec à Montréal and Philosophie, Collège André-Laurendeau, Canada
(11) Philosophy Department, Pace University, New York, USA
(12) Department of Informatics, Umeå University, Sweden
(13) Department of Medicine and Division of Infectious Diseases and Immunology, NYU Grossman School of Medicine, New York, USA
(14) Department of Computer Science, TU Kaiserslautern, Germany
(15) Department of Computer Science (DIKU), University of Copenhagen (UCPH), Denmark
(16) Department of Mathematics and Computer Science, Eindhoven University of Technology, The Netherlands.
(17) Center for Human-Compatible AI, University of California, Berkeley, USA
(18) Department of Biomedical Engineering, Basel University, Switzerland
(19) The Global AI Ethics Institute, France
(20) Section for Health Service Research and Section for General Practice, Department of Public Health, University of Copenhagen, Denmark; Centre for Research in Assessment and Digital Learning, Deakin University, Melbourne, Australia
(21) Ethics & Philosophy Lab, University of Tuebingen, Germany
(22) Faculty of Law, University of Cambridge, UK
(23) Center for the Study of Ethics in the Professions, Illinois Institute of Technology Chicago, USA
(24) Department of Food and Resource Economics, Faculty of Science, University of Copenhagen, Denmark
(25) “Hautmedizin Bad Soden”, Germany
(26) Charité Lab for AI in Medicine, Charité Universitätsmedizin Berlin, Germany
(27) QUEST Center for Transforming Biomedical Research, Berlin Institute of Health (BIH), Charité Universitätsmedizin Berlin, Germany
(28) School of Computing and Digital Technology, Faculty of Computing, Engineering and the Built Environment, Birmingham City University, United Kingdom
(29) School of Healthcare and Social Work, Seinäjoki University of Applied Sciences (SeAMK), Finland
(30) Freelance researcher, writer and consultant in AI Ethics, UK
(31) Division of Cell Matrix Biology and Regenerative Medicine, The University of Manchester, UK
(32) Human Genetics, Wellcome Sanger Institute, UK
(33) Industrial Engineering & Operation Research, UC Berkeley, USA
(34) Intel Labs, Santa Clara, CA, USA
(35) Institute for Ethics and Emerging Technologies, University of Turin, Italy

(36) Z-Inspection® Initiative
(37) T.H Chan School of Public Health, Harvard University, USA

* Correspondence: Roberto V. Zicari

Z-inspection® is a registered trademark

Accepted on 09 June 2021
Front. Hum. Dyn. doi: 10.3389/fhumd.2021.688152

Abstract

This paper documents how an ethically aligned co-design methodology ensures trustworthiness in the early design phase of an artificial intelligence (AI) system component for healthcare. The system explains decisions made by deep learning networks analyzing images of skin lesions. The co-design of trustworthy AI developed here used a holistic approach rather than a static ethical checklist and required a multidisciplinary team of experts working with the AI designers and their managers. Ethical, legal, and technical issues potentially arising from the future use of the AI system were investigated. This paper is a first report on co-designing in the early design phase. Our results can also serve as guidance for other similar early-phase AI tool developments.

Keywords: Artificial Intelligence, Healthcare, Explainable AI, Trust, Case-Studies, Trustworthy AI, Ethics, Malignant Melanoma, Z-inspection®, Ethical co-design.

Our paper “On Assessing Trustworthy AI in Healthcare. Best Practice for Machine Learning as a Supportive Tool to Recognize Cardiac Arrest in Emergency Calls” has been accepted for publication in Frontiers in Human Dynamics.

What did we learn in assessing Trustworthy AI in practice?

If you are interested and have time, you can watch this video.

It will give you an overview of our research work over the last 2.5 years in assessing trustworthy AI in healthcare in practice:

What did we learn in assessing Trustworthy AI in practice?

Roberto V. Zicari, Z-Inspection® Initiative

AI Ethics online–Chalmers, April 20, 2021

YouTube: https://www.youtube.com/watch?v=Jt63ZUbrBJM

Download Copy of the Presentation: https://z-inspection.org/wp-content/uploads/2021/04/Zicari.CHALMERSApril20.2021.pdf

For any questions and/or ideas for possible collaboration, please do not hesitate to contact me.

Stay safe

Roberto

…………………………………….

Prof. Roberto V. Zicari

Z-Inspection® Initiative

Affiliated Professor, Yrkeshögskolan Arcada, Helsinki

Adjunct Professor, Seoul National University, South Korea

The road is long.

The legislative proposal for AI by the European Commission has been published today.

The highly anticipated legislative proposal for AI by the European Commission has been published today.

Read the EU Regulatory Proposal on AI:

https://digital-strategy.ec.europa.eu/en/library/proposal-regulation-european-approach-artificial-intelligence

EU Press Release

Press release 21 April 2021 Brussels

Europe fit for the Digital Age: Commission proposes new rules and actions for excellence and trust in Artificial Intelligence

Kick-off Meeting (April 15, 2021): Assessing Trustworthy AI. Best Practice: Deep Learning for predicting a multi-regional score conveying the degree of lung compromise in COVID-19 patients. In cooperation with the Department of Information Engineering and the Department of Medical and Surgical Specialties, Radiological Sciences, and Public Health, University of Brescia, Brescia, Italy

On April 15, 2021, we had a great kick-off meeting for this use case:

Assessing Trustworthy AI. Best Practice: Deep Learning for predicting a multi-regional score conveying the degree of lung compromise in COVID-19 patients.

71 experts from all over the world attended.

Worldwide, the saturation of healthcare facilities, due to the high contagiousness of the SARS-CoV-2 virus and the significant rate of respiratory complications, is indeed one of the most critical aspects of the ongoing COVID-19 pandemic.
The team of Alberto Signoroni and colleagues implemented an end-to-end deep learning architecture designed for predicting, on chest X-ray images (CXR), a multi-regional score conveying the degree of lung compromise in COVID-19 patients.
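To make "multi-regional score" concrete: a common convention in this setting (and the one we assume here, the six-zone Brixia scheme) divides each lung into three zones, grades each of the six zones from 0 to 3, and sums the grades into a global severity score from 0 to 18. The sketch below shows only this aggregation step as an illustration; the deep learning model that produces the per-zone grades is not reproduced, and the helper function and example grades are hypothetical:

```python
def aggregate_brixia_score(zone_scores):
    """Sum six per-zone severity grades (each 0-3) into a global score (0-18).

    Illustrative helper only, assuming the six-zone Brixia convention;
    in the real system the per-zone grades would come from a deep
    learning model applied to a chest X-ray.
    """
    if len(zone_scores) != 6:
        raise ValueError("expected exactly six lung-zone scores")
    if any(not 0 <= s <= 3 for s in zone_scores):
        raise ValueError("each zone score must be between 0 and 3")
    return sum(zone_scores)

# Hypothetical example: moderate compromise in the lower zones only.
print(aggregate_brixia_score([0, 0, 1, 1, 2, 2]))  # 6
```

The point of predicting zone-wise grades rather than a single number is that the aggregate score remains clinically interpretable: a reviewer can check which regions drive a high score.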

We will work with Alberto Signoroni and his team and apply our Z-Inspection® process to assess the ethical, technical, and legal implications of using deep learning in this context.

For more information: https://z-inspection.org/best-practice-deep-learning-for-predicting-a-multi-regional-score-conveying-the-degree-of-lung-compromise-in-covid-19-patients/

This AI detects cardiac arrests during emergency calls

Jointly with the Emergency Medical Services Copenhagen, we completed the first part of our trustworthy AI assessment. An ML system is currently used as a supportive tool to recognize cardiac arrest in 112 emergency calls. A team of multidisciplinary experts used Z-Inspection® and identified ethical, technical, and legal issues in using such an AI system. This confirms some of the ethical concerns raised by Kay Firth-Butterfield back in June 2018.

“This is another example of the need to test and verify algorithms,” says Kay Firth-Butterfield, head of Artificial Intelligence and Machine Learning at the World Economic Forum.

“We all want to believe that AI will ‘wave its magic wand’ and help us do better, and this sounds as if it is a way of getting AI to do something extremely valuable.”

“But,” Firth-Butterfield added, “it still needs to meet the requirements of transparency and accountability and protection of patient privacy. As it is in the EU, it will be caught by GDPR, so it is probably not a problem.” However, the technology raises the fraught issue of accountability, as Firth-Butterfield explains. Who is liable if the machine gets it wrong? The AI manufacturer, the human being advised by it, the centre using it? This is a much-debated question within AI which we need to solve urgently: when do we accept that if the AI is wrong it doesn’t matter, because it is significantly better than humans? Does it need to be 100% better than us or just a little better? At what point is using, or not using, this technology negligent?

Source: https://www.weforum.org/agenda/2018/06/this-ai-detects-cardiac-arrests-during-emergency-calls/


The full report has been submitted for publication. Contact me if you are interested in knowing more. RVZ

Resources:

Article World Economic Forum, 06 Jun 2018.

Download the Z-Inspection® Process

“Z-Inspection®: A Process to Assess Ethical AI”
Roberto V. Zicari, John Brodersen, James Brusseau, Boris Düdder, Timo Eichhorn, Todor Ivanov, Georgios Kararigas, Pedro Kringen, Melissa McCullough, Florian Möslein, Karsten Tolle, Jesmin Jahan Tithi, Naveed Mushtaq, Gemma Roig, Norman Stürtz, Irmhild van Halem, Magnus Westerlund.
IEEE Transactions on Technology and Society, 2021
Print ISSN: 2637-6415
Online ISSN: 2637-6415
Digital Object Identifier: 10.1109/TTS.2021.3066209
DOWNLOAD THE PAPER