Lessons Learned from Assessing Trustworthy AI in Practice.

We published "Lessons Learned from Assessing Trustworthy AI in Practice".

Dennis Vetter, Julia Amann, Frédérick Bruneault, Megan Coffee, Boris Düdder, Alessio Gallucci, Thomas Krendl Gilbert, Thilo Hagendorff, Irmhild van Halem, Eleanore Hickman, Elisabeth Hildt, Sune Holm, George Kararigas, Pedro Kringen, Vince Madai, Emilie Wiinblad Mathez, Jesmin Jahan Tithi, Magnus Westerlund, Renee Wurth, Roberto V. Zicari & Z-Inspection® initiative (2023)

Digital Society (DSO), 2, 35 (2023). Springer

Link: https://link.springer.com/article/10.1007/s44206-023-00063-1

Abstract

Building artificial intelligence (AI) systems that adhere to ethical standards is a complex problem. Even though a multitude of guidelines for the design and development of such trustworthy AI systems exist, these guidelines focus on high-level and abstract requirements for AI systems, and it is often very difficult to assess if a specific system fulfills these requirements.

The Z-Inspection® process provides a holistic and dynamic framework to evaluate the trustworthiness of specific AI systems at different stages of the AI lifecycle, including intended use, design, and development. It focuses, in particular, on the discussion and identification of ethical issues and tensions through the analysis of socio-technical scenarios and a requirement-based framework for ethical and trustworthy AI.

This article is a methodological reflection on the Z-Inspection® process. We illustrate how high-level guidelines for ethical and trustworthy AI can be applied in practice and provide insights for both AI researchers and AI practitioners. We share the lessons learned from conducting a series of independent assessments to evaluate the trustworthiness of real-world AI systems, as well as key recommendations and practical suggestions on how to ensure a rigorous trustworthiness assessment throughout the lifecycle of an AI system.

The results presented in this article are based on our assessments of AI systems in the healthcare sector and environmental monitoring, where we used the framework for trustworthy AI proposed in the Ethics Guidelines for Trustworthy AI by the European Commission’s High-Level Expert Group on AI. However, the assessment process and the lessons learned can be adapted to other domains and include additional frameworks.

An extended version of this article is available in Zicari et al. (2022).

Announcing a New Pilot Project: Assessing the Trustworthiness of the Use of Generative AI in Higher Education.

This pilot project of the Z-inspection® initiative (https://z-inspection.org) aims at assessing the use of Generative AI in higher education, considering specific use cases.

For this pilot project, we will assess the ethical, technical, domain-specific (i.e., education), and legal implications of the use of a Generative AI product or service within the university context.

We follow the UNESCO guidance for policymakers on AI and education, in particular policy recommendation 6: "Pilot testing, monitoring and evaluation, and building an evidence base."

Expected output: a white paper and a peer-reviewed journal article documenting best practice, plus a set of recommendations for each specific use case. Such recommendations could also be useful in clarifying the guidelines that individual universities are currently developing on this topic.

Participants

– Affiliated Labs: https://z-inspection.org/affiliated-labs/

– Members of the Z-inspection® initiative: https://z-inspection.org

– Ministries and Universities

– Specialized agencies

– Others

Approach

An interdisciplinary team of experts will assess the trustworthiness of Generative AI for selected use cases in higher education using the Z-Inspection® process: https://z-inspection.org

Z-Inspection® is a holistic process for evaluating new technologies, in which ethical issues are discussed through the elaboration of socio-technical scenarios. In particular, Z-Inspection® can be used to perform independent assessments and/or self-assessments together with the stakeholders owning the use case.

For the context of this pilot project, we define ethics in line with the essence of modern democracy, i.e., "respect for others, expressed through support for fundamental human rights". We take into consideration that "trust" in the development, deployment, and use of AI systems concerns not only the technology's inherent properties, but also the qualities of the socio-technical systems involving AI applications.

Specifically, we consider the Ethics Guidelines for Trustworthy AI defined by the EU High-Level Expert Group on AI, which define trustworthy AI as:

(1) lawful – respecting all applicable laws and regulations 

(2) ethical – respecting ethical principles and values 

(3) robust – both from a technical perspective and taking into account its social environment

We also use the four ethical principles, rooted in fundamental rights, defined in the Ethics Guidelines, acknowledging that tensions may arise between them:

(1) Respect for human autonomy 

(2) Prevention of harm 

(3) Fairness 

(4) Explicability 

Furthermore, we also consider the seven requirements of Trustworthy AI defined by the EU High-Level Expert Group on AI. Each requirement has a number of sub-requirements, as indicated in Table 1.

Table 1. Requirements and sub-requirements of Trustworthy AI.

1. Human agency and oversight: including fundamental rights, human agency, and human oversight

2. Technical robustness and safety: including resilience to attack and security, fallback plan and general safety, accuracy, reliability, and reproducibility

3. Privacy and data governance: including respect for privacy, quality and integrity of data, and access to data

4. Transparency: including traceability, explainability, and communication

5. Diversity, non-discrimination and fairness: including the avoidance of unfair bias, accessibility and universal design, and stakeholder participation

6. Societal and environmental wellbeing: including sustainability and environmental friendliness, social impact, society and democracy

7. Accountability: including auditability, minimization and reporting of negative impact, trade-offs, and redress

While we consider the seven requirements comprehensive, we believe additional ones can still bring value. Two such additional requirements proposed by the Z-Inspection® initiative are "Assessing if the ecosystems respect values of Western modern democracy" and "Avoiding concentration of power".
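To make this requirement catalogue easier to handle during an assessment, it can help to encode it as a simple machine-readable checklist that the team annotates as the inspection proceeds. The sketch below is purely illustrative: the data structure and names are our own assumptions, not an official schema of the EU framework or of the Z-Inspection® process.

```python
# Illustrative only: the seven EU requirements of Trustworthy AI and their
# sub-requirements (Table 1), plus the two additional requirements proposed
# by the Z-Inspection® initiative, encoded as a checklist skeleton.
requirements = {
    "Human agency and oversight": [
        "Fundamental rights", "Human agency", "Human oversight"],
    "Technical robustness and safety": [
        "Resilience to attack and security", "Fallback plan and general safety",
        "Accuracy", "Reliability and reproducibility"],
    "Privacy and data governance": [
        "Respect for privacy", "Quality and integrity of data", "Access to data"],
    "Transparency": [
        "Traceability", "Explainability", "Communication"],
    "Diversity, non-discrimination and fairness": [
        "Avoidance of unfair bias", "Accessibility and universal design",
        "Stakeholder participation"],
    "Societal and environmental wellbeing": [
        "Sustainability and environmental friendliness", "Social impact",
        "Society and democracy"],
    "Accountability": [
        "Auditability", "Minimization and reporting of negative impact",
        "Trade-offs", "Redress"],
    # Additional requirements proposed by the Z-Inspection® initiative:
    "Respect for the values of Western modern democracy": [],
    "Avoiding concentration of power": [],
}

# Example: enumerate every item an assessment team would have to discuss.
for requirement, subs in requirements.items():
    for sub in subs or ["(no sub-requirements listed)"]:
        print(f"{requirement} :: {sub}")
```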

………………………………………………………………………………………………………………………………………………………..

The UNESCO guidance for policymakers on AI and education sets out policy recommendations in seven areas:

(https://unesdoc.unesco.org/ark:/48223/pf0000376709)

1. A system-wide vision and strategic priorities 

2. Overarching principles for AI and education policies

3. Interdisciplinary planning and inter-sectoral governance 

4. Policies and regulations for equitable, inclusive, and ethical use of AI 

5. Master plans for using AI in education management, teaching, learning, and assessment

6. Pilot testing, monitoring and evaluation, and building an evidence base 

7. Fostering local AI innovations for education 

……………………………………………………………………………………………………………………………………………………………….

From the Guidelines for the use of AI in teaching at the University of Helsinki (Academic Affairs Council, 16 February 2023):

(https://teaching.helsinki.fi/instructions/article/artificial-intelligence-teaching)

“Large artificial intelligence (AI)-based language models such as Chat GPT, Google Bard, and DeepL have evolved to the point where they can produce human-like text and conversations and correct and transform text at such a high level that it can be difficult to distinguish the result from human-generated text. It is foreseeable that more such models will emerge, and their functionalities will continue to evolve, so their existence should be taken into account in university teaching and research.

The existence of large language models should be seen as an opportunity. Degree programmes and teachers are encouraged to use AI in their teaching and to prepare students for a society of the future where AI methods will be widely used.

As AI brings new possibilities for producing text whose origin and reliability is unclear, they should be used in a controlled way. Use may be restricted in teaching in situations where the use would not promote student learning.

At EU level, an AI regulation is under preparation, which will also apply to AI systems in education. In addition, there is an ethical policy on AI and its use, as well as an ethical code for teachers. The University’s guidelines may be further specified in the light of future regulation and technological developments.”

______________________________________________________

Resources

1. Policy paper: Generative artificial intelligence in education. The UK Department for Education’s (DfE) position on the use of generative artificial intelligence (AI) in education. Link to .PDF

2. Do Foundation Model Providers Comply with the Draft EU AI Act? Stanford researchers evaluate foundation model providers like OpenAI and Google for their compliance with proposed EU law on AI.  Link

They identified a final list of 12 requirements and scored the 10 models using a 5-point rubric. The methodology for the study can be found here. (A toy sketch of the rubric arithmetic appears after this resource list.)

3. Leading universities in the UK (the Russell Group universities) have developed a set of principles on the use of generative AI tools in education. Here is the link.

4. Frontier AI Regulation: Managing Emerging Risks to Public Safety arXiv:2307.03718 [cs.CY] LINK

5. Zhaki Abdullah, Students, teachers will learn to properly use tools like ChatGPT: (Singapore Education Minister) Chan Chun Sing, Straits Times, 12 February 2023, LINK

6. U.S. Department of Education, Office of Educational Technology, Artificial Intelligence and Future of Teaching and Learning: Insights and Recommendations, Washington D.C., May 2023, LINK

7. Japanese schools to be allowed limited use of generative AI, Kyodo News, 22 June 2023, LINK

8. Higher Education Webinar: Implications of Artificial Intelligence in Higher Education , Tuesday, June 27, 2023, Council on Foreign Relations, LINK

9. Novelli, C., Casolari, F., Rotolo, A. et al. Taking AI risks seriously: a new assessment model for the AI Act. AI & Soc (2023). LINK

10. Academics Without Borders, Bimonthly Newsletter n°59, July 2023.

11. On the use of artificial intelligence and in particular of ChatGPT in higher education. (UNESCO). Link to .PDF

12. KU Leuven, Responsible use of Generative Artificial Intelligence (GenAI) in research. These guidelines will be updated with new information and insights to keep them in line with the rapidly evolving technology (last updated June 24, 2023). The further integration of teaching and research guidelines is still on the agenda. LINK

13. Stanford HAI: ChatGPT Out-scores Medical Students on Complex Clinical Care Exam Questions. Jul 17, 2023, by Adam Hadhazy.

14. The Norwegian Consumer Council published a detailed report “Ghost in the machine – Addressing the consumer harms of generative AI” outlining the harms, legal frameworks, and possible ways forward. In conjunction with this launch, the Norwegian Consumer Council and 14 consumer organizations from across the EU and the US demand that policymakers and regulators act.  https://storage02.forbrukerradet.no/media/2023/06/generative-ai-rapport-2023.pdf

15. University of Melbourne: Inquiry into the use of generative AI in the education system. Submission to the House Standing Committee on Employment, Education and Training 14 July 2023. https://about.unimelb.edu.au/__data/assets/pdf_file/0032/396446/UoM-Submission-Inquiry-into-Generative-AI-in-Education-FINAL.pdf

16. Cornell University: Generative Artificial Intelligence for Education and Pedagogy. July 18, 2023. https://teaching.cornell.edu/sites/default/files/2023-08/Cornell-GenerativeAIForEducation-Report_2.pdf

17. The University of North Carolina at Chapel Hill: Teaching Use Guidelines for Generative Artificial Intelligence, https://provost.unc.edu/wp-content/uploads/2023/07/Teaching-Generative-AI-Use-Guidance_UNC-AI-Committee-June-15-202348.pdf

18. University of Sydney: 13 March, 2023 Students answer your questions about generative AI – part 2: Ethics, integrity, and the value of university. https://educational-innovation.sydney.edu.au/teaching@sydney/students-answer-your-questions-about-generative-ai-part-2-ethics-integrity-and-the-value-of-university/

19. The Berkman Klein Center for Internet & Society at Harvard University: Exploring the Impacts of Generative AI on the Future of Teaching and Learning https://cyber.harvard.edu/story/2023-06/impacts-generative-ai-teaching-learning

20. Stanford University: Pedagogic strategies for adapting to generative AI chatbots. Eight strategic steps to help instructors adapt to generative AI tools and chatbots. June 19, 2023, Center for Teaching and Learning. https://docs.google.com/document/d/1la8jOJTWfhUdNna5AJYiKgNR2-54MBJswg0gyBcGB-c/edit

21. Council of Europe: Artificial Intelligence and Education: A critical view through the lens of human rights, democracy and the rule of law. November 2022.

https://rm.coe.int/artificial-intelligence-and-education-a-critical-view-through-the-lens/1680a886bd

22. ANU Centre for Learning and Teaching: Chat GPT and other generative AI tools: What ANU academics need to know February 2023. https://teaching.weblogs.anu.edu.au/files/2023/02/Chat_GPT_FAQ-1.pdf

23. Guidance for Generative AI in education and research | UNESCO, 7 September 2023
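As a side note on resource 2 above, the arithmetic of such a rubric is straightforward: with 12 requirements each graded on a 5-point (0 to 4) scale, a provider's total score lies between 0 and 48. The snippet below is a toy sketch of that arithmetic only; the requirement names and scores are invented for illustration and do not reproduce the study's data.

```python
# Toy sketch of a 12-requirement, 5-point (0..4) compliance rubric.
# Names and scores are invented; see the linked Stanford study for real data.
def total_compliance(scores):
    assert len(scores) == 12, "the rubric covers 12 requirements"
    assert all(0 <= s <= 4 for s in scores.values()), "each grade is 0..4"
    return sum(scores.values())

example = {f"requirement_{i}": 2 for i in range(1, 13)}  # dummy mid-range grades
print(total_compliance(example), "out of", 12 * 4)  # -> 24 out of 48
```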

 

Z-Inspection® is a mandatory course for the new Online Master Program xAIM

xAIM is a new Online Interdisciplinary Master’s Program at the Intersection of AI and Health Care.

Students will learn how to apply the process to real use cases of AI in healthcare.

A number of teaching resources are available here.

STARTING PERIOD FEB 2023
EXPECTED GRADUATION APR 2024

The eXplainable Artificial Intelligence in Healthcare Management master’s programme is developed within the xAIM project, supported by the Connecting Europe Facility in Telecom (project INEA/CEF/ICT/A2020/2276680). Our aim is to advance the development of highly qualified professionals to address the lack of highly specialized digital skills in AI. The master is designed for anyone interested in understanding the needs of explainable AI (xAI) in healthcare, in particular health-related professionals, with a particular focus on the exploitation of possible applications.

xAIM Web Site

Pilot Project: Assessment for Responsible Artificial Intelligence together with the Rijks ICT Gilde (Ministry of the Interior and Kingdom Relations, BZK) and the Province of Fryslân (The Netherlands)

The original full text is available in Dutch from the Rijks ICT Gilde (Ministry of the Interior and Kingdom Relations, BZK) website.

https://www.rijksorganisatieodi.nl/rijks-ict-gilde/mycelia/pilot-kunstmatige-intelligentie

Artificial Intelligence (AI) is present in more and more aspects of our lives. The technology, based on algorithms and data, is in numerous devices and can be useful in solving social issues, for example about energy, sustainability, or poverty. As a government, we have an exemplary role. If we want to seize the opportunities of AI, important questions about ethics, technology, transparency, and the possible effects of AI applications on our society must be answered.

The pilot “Assessment for Responsible AI” is a step in this process. During a three-month pilot, the Rijks ICT Gilde (Ministry of the Interior and Kingdom Relations, BZK), in cooperation with the province of Fryslân and AI authority Prof. Dr. Zicari and his team, is investigating a deep learning algorithm in practice.

Reason for the pilot
During the conference “AI and the future of Europe” in Brussels on March 30, 2022, Secretary of State Alexandra van Huffelen told us that the digital transition and the use of AI should always be human-centered and based on our democratic values and rights. Governments should lead by example in this regard.

As a government, we want to seize the opportunities of AI, but the technology still raises many important questions. How reliable are algorithms? Can an algorithm discriminate? What are the ethical and social effects of AI and how transparent is its use? In addition, the use of AI must always be human-centered and based on our democratic values and rights.

With that comes an impressive number of rules and frameworks in the field of AI. How do you apply them in practice? What do you need to pay attention to? And how do you integrate them into the development and use of AI?

With the pilot “Assessment for responsible AI” we hope to get answers to these questions and more. First, to stimulate awareness and dialogue about AI within government. And then to be able to confidently deploy the technology for the questions of tomorrow.

Background for the pilot

During this six-month pilot, the practical application of a deep learning algorithm from the province of Fryslân will be investigated and assessed. The algorithm maps grass encroachment on heathland by means of satellite images, for monitoring nature reserves. The testing of this algorithm is done in collaboration with an international interdisciplinary team, based on the Z-Inspection® method, a process to assess Trustworthy AI in practice.

This involves testing the algorithm for social, ethical, technical, and legal implications. This is done in interdisciplinary teams according to a holistic methodology: it looks at the coherence, arrangement, and interaction of a system’s features. The holistic nature of the method leaves room for different dimensions and views, and focuses in particular on the interpretation and discussion of ethical issues and tensions.

Pilot objectives
The pilot has multiple objectives, so that each party can derive its own benefit from it:

A science-based assessment in practice based on a concrete AI application;
To understand how to carefully and responsibly organize your processes for developing and using AI;
Learning which frameworks, laws, and regulations are important and must be tested in the different phases of development and use;
Increasing insight and overview through recommendations;
Stimulating dialogue and increasing awareness about applying reliable AI.

Z-Inspection®: A method for responsible AI
One of the main characteristics of the Z-Inspection® method is its interdisciplinary nature. The complexity of an AI application is reflected in the composition of the team. The diversity of participants provides a more inclusive assessment of the reliability of an AI application.

Another strong feature of this method is its dynamic application. A standard checklist does not adapt to the case. In contrast, the holistic approach determines which issues are central to the use case (the real-world application) at different stages of the process, and assesses which aspects of the case are most important, moving back and forth between intra- and interdisciplinary discussions.

With the application of this method, awareness grows within one’s own organization, and a mindset emerges for responsible data use and the trustworthy application of AI. Both the Province of Fryslân and the Rijks ICT Gilde are testing the method in practice to learn together how to look at an AI application from different dimensions.

Pilot outcome
Using the Z-Inspection® method, the advantages and disadvantages of the AI application under investigation are described. This is done in an ongoing, iterative (repeating and increasingly refining) research process. Participants are given space to openly reflect on and document what is known (and unknown) about the capabilities of an AI application, as a basis for later evaluations.

Participating parties
Several parties are working together in the pilot. The Province of Fryslân, AI authority Prof. Dr. Zicari and his team, and the UBR | Rijks ICT Gilde are jointly investigating the reliability of AI applications and their responsible use. The Leeuwarden municipality, the University of Groningen/Campus Fryslân, and policy advisors of the Ministry of the Interior and Kingdom Relations are participating as observers.

Read more about the participating parties below.

Province of Friesland
During the pilot project, a deep learning algorithm of the Province of Fryslân will be evaluated. This algorithm visualizes grass encroachment on heathland for monitoring nature reserves. The Province of Fryslân is investing in the coming years in the smart and effective use of data. The province sees that almost all provincial developments and social tasks contain a data component. This lends urgency to the subject. To respond responsibly to technological developments as a province, a sharp vision on data and AI is needed. Participation in the pilot helps design the future digital infrastructure and outline ethical frameworks.

Read more > https://www.fryslan.frl/

UBR | Rijks ICT Gilde
The Rijks ICT Gilde (RIG), part of the Ministry of the Interior and Kingdom Relations (BZK), is an ambitious tech organization that implements projects across the central government. The organization uses its knowledge and network to create smart partnerships and solutions. It employs specialists with a drive to help the Netherlands move forward in the fields of data, software, and security.

Mycelia is a young, dynamic, and energetic programme of the Rijks ICT Gilde that promotes the responsible use of data and AI to achieve impact. We do this by addressing relevant questions without compromising the public values and fundamental rights that the government stands for.

With enthusiasm and specific knowledge, Mycelia’s data and AI experts work on impactful and ethically responsible projects for the public sector and collaborate with organizations and experts in order to learn, grow and develop from each other.

With the growth of data and the rise of AI, we see a government that will never be the same again. The responsible use of AI, and trust in it, will therefore become increasingly important. We believe in helping and supporting each other and in an honest and transparent government. We want to leave a better world for the next generations.

In the pilot, the RIG provides expertise on AI, ethics and responsible data use within government.

Read more > https://www.ubrijk.nl/service/rijks-ict-gilde

Update (January 25, 2023)

Timeline
The Assessment for Responsible AI pilot took place from May 2022 through January 2023.

Project Members:

Sara M. Beery,
Marjolein Boonstra,
Frédérick Bruneault,
Subrata Chakraborty,
Tjitske Faber,
Alessio Gallucci,
Eleanore Hickman,
Gerard Kema,
Heejin Kim,
Jaap Kooiker,
Ruth Koole,
Elisabeth Hildt,
Annegret Lamadé,

Emilie Wiinblad Mathez 
Florian Möslein,
Genien Pathuis,
Rosa Maria Roman-Cuesta,
Marijke Steege,
Alice Stocco,
Willy Tadema,
Jarno Tuimala,
Isabel van Vledder,
Dennis Vetter,
Jana Vetter,
Elise Wendt,
Magnus Westerlund,
Roberto V. Zicari

Update (July 15, 2023)

Rijks ICT Gilde has recently published (in Dutch):

– the main lessons learned

– the technical results of the assessment

– the human rights and ethical results of the assessment

– the ecological evaluations

Update (August 3, 2023)

Rijks ICT Gilde has published (in English):

Announcing a new pilot project with the Province of Friesland and the UBR Rijks ICT Gilde (part of the Ministry of the Interior and Kingdom Relations), the Netherlands.

Excited to announce that on May 16, 2022 we had our kick-off of the pilot project “Assessment for responsible AI” together with the Province of Friesland (Fryslân), a team of the Z-inspection® initiative, and the UBR Rijks ICT Gilde (part of the Ministry of the Interior and Kingdom Relations, the Netherlands).

Together we will investigate the reliability of AI applications for the Province of Friesland and their responsible use, using the Z-inspection® process and the EU Framework for Trustworthy AI.

The Leeuwarden Municipality, University of Groningen/Campus Fryslân and policy advisors from the Ministry of the Interior were also invited as observers.

Announcement in Dutch :

Kick-off krachtig samenwerkingsverband voor verantwoorde AI

Nieuwsbericht | 17-05-2022 | 13:42 | Rijks ICT Gilde.

English Translation:

Kick-off powerful partnership for responsible AI

News release | 17-05-2022 | 13:42
Yesterday was the kick-off of the pilot ‘Assessment for responsible AI’. The Province of Fryslân, AI authority Prof. Dr. Zicari and the UBR Rijks ICT Gilde are jointly investigating the reliability of AI applications and their responsible use. Leeuwarden municipality, Groningen University/Campus Fryslân and policy advisors from the Ministry of the Interior are invited as observers.

Artificial Intelligence (AI) is appearing in more and more aspects of our lives. It is in all kinds of devices we use in our work and private lives. The technology, based on data and algorithms, can be useful in solving social issues about energy, sustainability or even poverty, for example.

As a government, we want to exploit the opportunities of AI, but the technology still raises many important questions.

How reliable are algorithms? Can an algorithm discriminate? And how transparent is the use of AI?

In the three-month pilot ‘Assessment for responsible AI’ we are looking for answers to the questions:

How do you as a government steer the development and use of responsible AI?
What frameworks, laws and regulations are important, and how do we test them in the development and use of AI?
How do you analyze, assess and improve AI applications? And are the applications in line with public values and human rights?

In the pilot we assess an algorithm of the province of Fryslân. We will analyze it using the Z-inspection® method of Roberto Zicari: a self-assessment in which participants discuss critical issues such as the purpose of the algorithm, the development process, ethical dilemmas, and conflicts of interest. The Z-inspection® method is a way of working to analyze, assess, and improve AI applications in a sustainable, demonstrable, and transparent manner. This enables organizations to develop and use responsible AI applications in a structured way.

Furthermore, it is very important that the knowledge and experiences from the pilot are shared. First of all, to stimulate digital awareness and dialogue about AI within the government. And then to be able to confidently deploy the technology for the questions of tomorrow.

Lessons Learned: Co-design of Trustworthy AI. Best Practice. By Helga Brogger, President of the Norwegian Society of Radiology

Mission: …Aid the development of designs with reduced end-user vulnerability…

– “…Socio-technical scenarios can be used to broaden stakeholders’ understanding of one’s own role in the technology, as well as awareness of stakeholders’ interdependence…”

– “…Recurrent, open-minded, and interdisciplinary discussions involving different perspectives of the broad problem definition…”

– “…The early involvement of an interdisciplinary panel of experts broadened the horizon of AI designers, who are usually focused on the problem definition from a data and application perspective…”

– “…Consider the aim of the future AI system as a claim that needs to be validated before the AI system is deployed…”

– “…Involve patients at every stage of the design process … it is particularly important to ensure that the views, needs, and preferences of vulnerable and disadvantaged patient groups are taken into account to avoid exacerbating existing inequalities…”

Thank you, Roberto V. Zicari and the rest of the team for these insights!

— Helga Brogger, President of the Norwegian Society of Radiology

………………………………………………………………………………………………………………………………………………………………………….

Co-Design of a Trustworthy AI System in Healthcare: Deep Learning Based Skin Lesion Classifier.

Roberto V. Zicari, Sheraz Ahmed, Julia Amann, Stephan Alexander Braun, John Brodersen, Frédérick Bruneault, James Brusseau, Erik Campano, Megan Coffee, Andreas Dengel, Boris Düdder, Alessio Gallucci, Thomas Krendl Gilbert, Philippe Gottfrois, Emmanuel Goffi, Christoffer Bjerre Haase, Thilo Hagendorff, Eleanore Hickman, Elisabeth Hildt, Sune Holm, Pedro Kringen, Ulrich Kühne, Adriano Lucieri, Vince I. Madai, Pedro A. Moreno-Sánchez, Oriana Medlicott, Matiss Ozols, Eberhard Schnebel, Andy Spezzatti, Jesmin Jahan Tithi, Steven Umbrello, Dennis Vetter, Holger Volland, Magnus Westerlund and Renee Wurth.

Front. Hum. Dyn. | Human and Artificial Collaboration for Medical Best Practices, July 13, 2021

VIEW ORIGINAL RESEARCH article

Learn more

Our paper “On Assessing Trustworthy AI in Healthcare. Best Practice for Machine Learning as a Supportive Tool to Recognize Cardiac Arrest in Emergency Calls” has been accepted for publication in Frontiers in Human Dynamics.

The legislative proposal for AI by the European Commission has been published today.

The highly anticipated legislative proposal for AI by the European Commission has been published today.

Read the EU Regulatory Proposal on AI:

https://digital-strategy.ec.europa.eu/en/library/proposal-regulation-european-approach-artificial-intelligence

EU Press Release

Press release 21 April 2021 Brussels

Europe fit for the Digital Age: Commission proposes new rules and actions for excellence and trust in Artificial Intelligence

Kick off Meeting (April 15, 2021) Assessing Trustworthy AI. Best Practice: Deep Learning for predicting a multi-regional score conveying the degree of lung compromise in COVID-19 patients. In cooperation with Department of Information Engineering and Department of Medical and Surgical Specialties, Radiological Sciences, and Public Health – University of Brescia, Brescia, Italy

On April 15, 2021 we had a really great kick-off meeting for this use case:

Assessing Trustworthy AI. Best Practice: Deep Learning for predicting a multi-regional score conveying the degree of lung compromise in COVID-19 patients.

71 experts from all over the world attended.

Worldwide, the saturation of healthcare facilities, due to the high contagiousness of the SARS-CoV-2 virus and the significant rate of respiratory complications, is indeed one of the most critical aspects of the ongoing COVID-19 pandemic.
The team of Alberto Signoroni and colleagues implemented an end-to-end deep learning architecture, designed for predicting, from chest X-ray images (CXR), a multi-regional score conveying the degree of lung compromise in COVID-19 patients.
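For readers unfamiliar with such multi-regional scores, here is a minimal sketch of the aggregation step, assuming a Brixia-style scheme in which each chest X-ray is divided into six lung zones, each graded from 0 (no abnormality) to 3 (severe compromise), giving a global severity score between 0 and 18. This is an illustration of the scoring arithmetic only, not the authors' implementation, which predicts the zone scores with a deep network.

```python
from typing import Sequence

def global_severity_score(zone_scores: Sequence[int]) -> int:
    """Sum per-zone lung compromise grades into a global severity score.

    Assumes a Brixia-style scheme: six lung zones (three per lung),
    each graded 0..3, so the global score ranges from 0 to 18.
    """
    if len(zone_scores) != 6:
        raise ValueError("expected exactly six zone scores")
    if any(s not in (0, 1, 2, 3) for s in zone_scores):
        raise ValueError("each zone score must be an integer in 0..3")
    return sum(zone_scores)

# Example: compromise concentrated in the lower zones.
print(global_severity_score([0, 0, 1, 1, 2, 2]))  # -> 6
```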

We will work with Alberto Signoroni and his team and apply our Z-inspection® process to assess the ethical, technical and legal implications of using Deep Learning in this context.

For more information: https://z-inspection.org/best-practice-deep-learning-for-predicting-a-multi-regional-score-conveying-the-degree-of-lung-compromise-in-covid-19-patients/

This AI detects cardiac arrests during emergency calls

Jointly with the Emergency Medical Services Copenhagen, we completed the first part of our trustworthy AI assessment.
An ML system is currently used as a supportive tool to recognize cardiac arrest in 112 emergency calls.
A team of multidisciplinary experts used Z-Inspection® and identified ethical, technical, and legal issues in using such an AI system.
This confirms some of the ethical concerns raised by Kay Firth-Butterfield back in June 2018.

“This is another example of the need to test and verify algorithms,” says Kay Firth-Butterfield, head of Artificial Intelligence and Machine Learning at the World Economic Forum.

“We all want to believe that AI will ‘wave its magic wand’ and help us do better, and this sounds as if it is a way of getting AI to do something extremely valuable.

“But,” Firth-Butterfield added, “it still needs to meet the requirements of transparency and accountability and protection of patient privacy. As it is in the EU, it will be caught by GDPR, so it is probably not a problem.” However, the technology raises the fraught issue of accountability, as Firth-Butterfield explains. Who is liable if the machine gets it wrong? The AI manufacturer, the human being advised by it, the centre using it? This is a much-debated question within AI which we need to solve urgently: when do we accept that if the AI is wrong it doesn’t matter, because it is significantly better than humans? Does it need to be 100% better than us, or just a little better? At what point is using, or not using, this technology negligent?

Source: https://www.weforum.org/agenda/2018/06/this-ai-detects-cardiac-arrests-during-emergency-calls/


The full report has been submitted for publication. Contact me if you are interested in knowing more. RVZ

Resources:

Article: World Economic Forum, 06 Jun 2018.

Download the Z-Inspection® Process

“Z-Inspection®: A Process to Assess Ethical AI”
Roberto V. Zicari, John Brodersen, James Brusseau, Boris Düdder, Timo Eichhorn, Todor Ivanov, Georgios Kararigas, Pedro Kringen, Melissa McCullough, Florian Möslein, Karsten Tolle, Jesmin Jahan Tithi, Naveed Mushtaq, Gemma Roig, Norman Stürtz, Irmhild van Halem, Magnus Westerlund.
IEEE Transactions on Technology and Society, 2021
Electronic ISSN: 2637-6415
Digital Object Identifier: 10.1109/TTS.2021.3066209
DOWNLOAD THE PAPER