Impressions of the First World Z-inspection® Conference: Ateneo Veneto, March 10-11, 2023, Venice, Italy.

World Z-inspection® Conference

Ateneo Veneto, March 10-11, 2023, Venice, Italy.

Campo S. Fantin, 1897, 30124 Venezia

Press Release (Ateneo Veneto): https://ateneoveneto.org/world-z-inspection-conference-sullintelligenza-artificiale/

Friday afternoon, March 10, and all day Saturday, March 11, 2023

Location: Ateneo Veneto (https://en.wikipedia.org/wiki/Ateneo_Veneto),

Reading Room (Convention) + Tommaseo Room for coffee break and lunch

In cooperation with Global Campus of Human Rights (https://gchumanrights.org) and Venice Urban Lab (https://www.veniceurbanlab.org/en)

Supporters: Arcada University of Applied Sciences, Merck, Roche, Zurich Insurance Company.

……………………………………………………………………………………………………………………………………………


Friday (afternoon), March 10, 2023

Moderator: Holger Volland (CEO, brand eins, Germany)

2:00 pm

– Welcome remarks

Antonella Magaraggia, President Ateneo Veneto,

Sergio Pascolo, President, Venice Urban Lab,

George Ulrich, Academic Director, Global Campus of Human Rights.

2:20 pm

– “Singing tuning with Ahhh.” Alessandro Donati

2:30 pm

– The Z-inspection® initiative

Roberto V. Zicari (Lead Z-inspection® initiative),

3:15 pm

– Presentation of selected Affiliated Trustworthy AI Labs

Magnus Westerlund (The Laboratory for Trustworthy AI at Arcada University of Applied Sciences, Helsinki, Finland),

Sune Holm, Boris Düdder (Trustworthy AI Lab at the University of Copenhagen, Denmark),

Gemma Roig, Karsten Tolle (Trustworthy AI Lab at Goethe University Frankfurt, Germany),

Vince Madai (Trustworthy AI in Healthcare Lab at the QUEST Centre for Responsible Research, Berlin Institute of Health at Charité (BIH), Germany),

Pedro Moreno Sanchez (Trustworthy AI for Healthcare Lab, Tampere University, Finland),

Roberto Francischello (Trustworthy AI Lab at the Imaging Lab, University of Pisa, Italy)

Coffee break (4:30 pm)

5:15 pm

– Panel: “Human Rights and Trustworthy AI”

Panelists:

George Ulrich (Academic Director, Global Campus of Human Rights),

Elisabeth Hildt (Director, Center for the Study of Ethics in the Professions, Illinois Institute of Technology, and L3S Research Center, Leibniz University Hannover)

Emilie Wiinblad Mathez (Senior Ethics Adviser),

Frédérick Bruneault (Adjunct Professor, École des médias, Université du Québec à Montréal),

Peter G. Kirchschlaeger (Professor of Ethics, Director of the Institute of Social Ethics ISE, University of Lucerne),

Giovanni Sartor (Professor of Legal Informatics at the University of Bologna)

Moderator: Holger Volland (CEO, brand eins, Germany)

Aperitif (6:30 pm)

………………………………………………………………………………………………………………………………………………….

Saturday (all day), March 11, 2023

Moderator: Holger Volland (CEO, brand eins, Germany)

10:00 am

– Welcome remarks

Gianpaolo Scarante, Past President Ateneo Veneto

10:10 am

– “Singing tuning with Ahhh.” Alessandro Donati

10:15 am

– Pilot Project “Responsible use of AI” with Rijks ICT Gilde (Ministry of the Interior and Kingdom Relations) and the Province of Friesland, The Netherlands

Willy Tadema (AI Ethics Lead, Rijks ICT Gilde, The Netherlands),

Marijke Steege (Senior Consultant Strategy, Innovation and Data, Rijks ICT Gilde, The Netherlands),

Gerard Kema (Innovation Manager, Province of Friesland, The Netherlands)

Coffee break (11 a.m.)

11:30 am

– Trustworthy AI in Practice: Best practices


Alberto Signoroni (University of Brescia, Italy),

Mattia Savardi (University of Brescia, Italy),

Davide Farina (University of Brescia, Italy),

Hanna Sormunen (Finnish Tax Administration),

Vince Madai (QUEST, Berlin)

Lunch (1:00 p.m.)

2:30 pm

– Panel: “How do we trust AI?”

Panelists:

Jean Enno Charton (Director Bioethics & Digital Ethics, Merck),

Bryn Roberts (Global Head of Data & Analytics, Roche),

Sarah Gadd (Head of Data & Artificial Intelligence Solutions, Credit Suisse),

Lisa Bechtold (Global Lead AI Assurance & Data Governance, Zurich Insurance Company)

Moderator: Holger Volland (CEO brand eins, Germany)

Coffee break (4 p.m.)

4:45 pm

– Trustworthy AI in Practice: Best practices

Ulrich Kühne (Hautmedizin Bad Soden, Germany)

Adriano Lucieri (DFKI, Germany)

James Brusseau (Pace University, USA)

Adarsh Srivastava (Roche, India)

Concluding Remarks

Roberto V. Zicari (Lead Z-inspection® initiative), 

Sergio Pascolo (President, Venice Urban Lab)

Aperitif (6 p.m.)

Frédérick Bruneault and Pedro Kringen in front of an original Tintoretto.

Pilot Project: Assessment for Responsible Artificial Intelligence together with Rijks ICT Gilde (Ministry of the Interior and Kingdom Relations, BZK) and the province of Fryslân (The Netherlands)

The original full text is available in Dutch from the Rijks ICT Gilde (Ministry of the Interior and Kingdom Relations, BZK) website.

Artificial Intelligence (AI) appears in more and more aspects of our lives. The technology – based on algorithms and data – is in numerous devices and can be useful in solving social issues, for example concerning energy, sustainability, or poverty. As a government, we have an exemplary role. If we want to seize the opportunities of AI, important questions about ethics, technology, transparency, and the possible effects of AI applications on our society must be answered.

The pilot “Assessment for Responsible AI” is a step in this process. During a three-month pilot, the Rijks ICT Gilde (Ministry of the Interior and Kingdom Relations, BZK), in cooperation with the province of Fryslân and AI authority Prof. Dr. Zicari and his team, is investigating a deep learning algorithm in practice.

Reason for the pilot
During the conference “AI and the future of Europe” in Brussels on March 30, 2022, Secretary of State Alexandra van Huffelen told us that the digital transition and the use of AI should always be human-centered and based on our democratic values and rights. Governments should lead by example in this regard.

As a government, we want to seize the opportunities of AI, but the technology still raises many important questions. How reliable are algorithms? Can an algorithm discriminate? What are the ethical and social effects of AI and how transparent is its use? In addition, the use of AI must always be human-centered and based on our democratic values and rights.

With that comes an impressive number of rules, frameworks, and regulations in the field of AI. How do you apply them in practice? What do you need to pay attention to? And how do you integrate them into the development and use of AI?

With the pilot “Assessment for responsible AI” we hope to get answers to these questions and more. First, to stimulate awareness and dialogue about AI within government. And then to be able to confidently deploy the technology for the questions of tomorrow.

Background for the pilot

During this three-month pilot, the practical application of a deep learning algorithm from the province of Fryslân will be investigated and assessed. The algorithm maps grass encroachment in heathland from satellite images for monitoring nature reserves. The testing of this algorithm is done in collaboration with an international interdisciplinary team, based on the Z-Inspection® method – a process to assess AI for trustworthiness.
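The province's actual model is a deep learning system whose internals are not described here. Still, the general idea of labeling vegetation from multispectral satellite bands can be sketched with a deliberately simplified, hypothetical example: everything below – the `ndvi` helper, the `classify_heathland` function, the threshold value, and the synthetic reflectance data – is an illustration of the kind of remote-sensing classification being assessed, not the system under review.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index, computed per pixel."""
    # Small epsilon avoids division by zero on dark pixels.
    return (nir - red) / (nir + red + 1e-9)

def classify_heathland(nir, red, grass_threshold=0.6):
    """Label each pixel: 1 = grass-dominated, 0 = heath/other.

    Grasses typically show higher NDVI than dwarf-shrub heath,
    so a fixed threshold crudely separates the two classes;
    the threshold here is illustrative, not calibrated.
    """
    return (ndvi(nir, red) > grass_threshold).astype(int)

# Synthetic 2x2 tile: near-infrared and red reflectance bands.
nir = np.array([[0.8, 0.5], [0.9, 0.4]])
red = np.array([[0.1, 0.3], [0.1, 0.3]])
labels = classify_heathland(nir, red)
# labels == [[1, 0], [1, 0]]: the left column is flagged as grass-dominated.
```

A deep learning model replaces the hand-set threshold with features learned from labeled imagery, which is precisely why an assessment must probe its training data, failure modes, and downstream effects rather than a single formula.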

This involves testing the algorithm for its social, ethical, technical, and legal implications. This is done in interdisciplinary teams according to a holistic methodology: it looks at the coherence, arrangement, and interaction of the features of a system. The holistic nature of the method leaves room for different dimensions and views, and focuses in particular on the interpretation and discussion of ethical issues and tensions.

Pilot objectives
The pilot has multiple objectives, so that each party can gain something from it:

A science-based assessment in practice based on a concrete AI application;
To understand how to carefully and responsibly organize your processes for developing and using AI;
Learning which frameworks, laws, and regulations are important and must be tested in the different phases of development and use;
Increasing insight and overview through recommendations;
Stimulating dialogue and increasing awareness about applying reliable AI.

Z-Inspection® : A method for responsible AI
One of the main characteristics of the Z-Inspection® method is its interdisciplinary nature. The complexity of an AI application is reflected in the composition of the team. The diversity of participants provides a more inclusive assessment of the reliability of an AI application.

Another strong feature of this method is its dynamic application. A standard checklist does not adapt to the case. In contrast, the holistic approach determines which issues are central to the use case (real-world application) at different stages of the process, and assesses which aspects of the case are most important, moving back and forth between intra- and interdisciplinary discussions.

With the application of this method, awareness grows within one's own organization, and a mindset emerges for responsible data use and the trustworthy application of AI. Both the Province of Fryslân and the Rijks ICT Gilde are testing the method in practice to learn together how to look at an AI application from different dimensions.

Pilot outcome
Using the Z-Inspection® method, the advantages and disadvantages of the AI application under investigation are described. This is done through an ongoing, iterative (repeating and increasingly refined) research process. Participants are thereby given space to openly reflect on and document what is known (and unknown) about the capabilities of an AI application, as a basis for later evaluations.

Participating parties
Several parties are working together in the pilot. The Province of Fryslân, AI authority Prof. Dr. Zicari and his team, and the UBR | Rijks ICT Gilde are jointly investigating the reliability of AI applications and their responsible use. Leeuwarden municipality, the University of Groningen/Campus Fryslân, and policy advisors of the Ministry of the Interior and Kingdom Relations are participating as observers.

Read more about the participating parties below.

Province of Friesland
During the pilot project, a deep learning algorithm of the Province of Fryslân will be evaluated. This algorithm maps grass encroachment in heathland for monitoring nature reserves. The Province of Fryslân is investing in the coming years in the smart and effective use of data. The province sees that almost all provincial developments and social tasks contain a data component, which creates urgency around the subject. To respond responsibly to technological developments as a province, a sharp vision on data and AI is needed. Participation in the pilot helps design the future digital infrastructure and outline ethical frameworks.

Read more > https://www.fryslan.frl/

UBR | Rijks ICT Gilde
The Rijks ICT Gilde (RIG) (Ministry of the Interior and Kingdom Relations, BZK) is an ambitious tech organization that implements projects across the central government. The organization uses its knowledge and network to create smart partnerships and solutions. It employs specialists with a drive to help the Netherlands move forward in the fields of data, software, and security.

Mycelia is a young, dynamic, and energetic programme of the Rijks ICT Gilde that pursues impact through the responsible use of data and AI. We do this by addressing relevant questions without compromising the public values and fundamental rights that the government stands for.

With enthusiasm and specific knowledge, Mycelia's data and AI experts work on impactful and ethically responsible projects for the public sector, and collaborate with organizations and experts in order to learn, grow, and develop together.

With the growth of data and the rise of AI, we see a government that will never be the same again. Responsible use of AI and trust is and will therefore become increasingly important. We believe in helping and supporting each other and an honest and transparent government. We want to leave a better world for the next generations.

In the pilot, the RIG provides expertise on AI, ethics and responsible data use within government.

Read more > https://www.ubrijk.nl/service/rijks-ict-gilde


Update (January 25, 2023)

Timeline
The Assessment for Responsible AI pilot took place from May 2022 through January 2023.

Project Members:

Sara M. Beery,
Marjolein Boonstra,
Frédérick Bruneault,
Subrata Chakraborty,
Tjitske Faber,
Alessio Gallucci,
Eleanore Hickman,
Gerard Kema,
Heejin Kim,
Jaap Kooiker,
Ruth Koole,
Elisabeth Hildt,
Annegret Lamadé,
Emilie Wiinblad Mathez,
Florian Möslein,
Genien Pathuis,
Rosa Maria Roman-Cuesta,
Marijke Steege,
Alice Stocco,
Willy Tadema,
Jarno Tuimala,
Isabel van Vledder,
Dennis Vetter,
Jana Vetter,
Elise Wendt,
Magnus Westerlund,
Roberto V. Zicari

Announcing a new Pilot project with the Province of Friesland and UBR Rijks ICT Gilde (part of the Ministry of the Interior and Kingdom Relations), the Netherlands.

Excited to announce that on May 16, 2022 we had our kick-off of the pilot project “Assessment for responsible AI” together with the Province of Friesland (Fryslân), a team of the Z-inspection® initiative, and the UBR Rijks ICT Gilde (part of the Ministry of the Interior and Kingdom Relations, the Netherlands).

Together we will investigate the reliability of AI applications for the Province of Friesland and their responsible use, using the  Z-inspection® process and the EU Framework for Trustworthy AI.

The Leeuwarden Municipality, University of Groningen/Campus Fryslân and policy advisors from the Ministry of the Interior were also invited as observers.

Announcement in Dutch :

Kick-off krachtig samenwerkingsverband voor verantwoorde AI

Nieuwsbericht | 17-05-2022 | 13:42 – Rijks ICT Gilde.

English Translation:

Kick-off powerful partnership for responsible AI

News release | 17-05-2022 | 13:42
Yesterday was the kick-off of the pilot ‘Assessment for responsible AI’. The Province of Fryslân, AI authority Prof. Dr. Zicari and the UBR Rijks ICT Gilde are jointly investigating the reliability of AI applications and their responsible use. Leeuwarden municipality, Groningen University/Campus Fryslân and policy advisors from the Ministry of the Interior are invited as observers.

Artificial Intelligence (AI) is appearing in more and more aspects of our lives. It is in all kinds of devices we use in our work and private lives. The technology, based on data and algorithms, can be useful in solving social issues about energy, sustainability or even poverty, for example.

As a government, we want to exploit the opportunities of AI, but the technology still raises many important questions.

How reliable are algorithms? Can an algorithm discriminate? And how transparent is the use of AI?

In the three-month pilot ‘Assessment for responsible AI’ we are looking for answers to the questions:

How do you as a government steer the development and use of responsible AI?
What frameworks, laws and regulations are important, and how do we test them in the development and use of AI?
How do you analyze, assess and improve AI applications? And are the applications in line with public values and human rights?

In the pilot we assess an algorithm of the province of Fryslân. We will analyze it using the Z-inspection® method of Roberto Zicari; a self-assessment in which participants discuss critical issues such as: the purpose of the algorithm, the development process, ethical dilemmas and conflicts of interest. The Z-inspection® method is a working method to analyze, assess and improve AI applications in a sustainable, demonstrable and transparent way. This enables organizations to develop and use responsible AI applications in a structured way.

Furthermore, it is very important that the knowledge and experiences from the pilot are shared. First of all, to stimulate digital awareness and dialogue about AI within the government. And then to be able to confidently deploy the technology for the questions of tomorrow.

Z-inspection®: A process to assess trustworthy AI in Practice has won an ISSIP Distinguished Recognition Award for Service Innovation!

The results of the ISSIP Excellence In Service Innovation Awards are in, and it is our great pleasure to inform you that Z-Inspection: A process to assess trustworthy AI in Practice has won a Distinguished Recognition Award for Service Innovation. 

The Distinguished Recognition Award is given each year to submissions that, in the judgment of the ISSIP Award Committee, represent innovative and impactful service designs and implementations from which service innovators around the world can derive inspiration.

The judging criteria are based on the uniqueness, creativity, technical merit, value generation and impact of the innovative solution. 

We plan to recognize your achievement at the next ISSIP Board of Directors & Progress call, July 27, 2022, 3-4 pm EDT. 

Again Congratulations! 

cc Ralph Badinelli, Chair of the ISSIP Awards Committee

cc Jim Spohrer, ISSIP Board of Directors 

Michele Carroll, Executive Director

www.issip.org

List of Awards here.

Z-Inspection® process module is incorporated into a new Interdisciplinary Master’s Program on eXplainable Artificial Intelligence in Healthcare to equip a new generation of interdisciplinary students in assessment of Trustworthy AI.

Pavia, April 9, 2022

Z-Inspection® process module is incorporated into a new Interdisciplinary Master’s Program on eXplainable Artificial Intelligence in Healthcare, to equip a new generation of interdisciplinary students in assessment of Trustworthy AI.

With the Z-Inspection® process students will learn how to assess trustworthiness of AI systems for healthcare using socio-technical scenarios.

The new Master program is offered by a consortium composed of the University of Pavia, Goethe University (D), Keele University (UK), Leibniz University Hannover (D), and the University of Ljubljana (SL). It is funded by the European Commission's “Connecting Europe Facility (CEF) Telecom” programme, which provides co-financing to stimulate and support the spread of European Digital Service Infrastructures (DSI) in various sectors, including the design and implementation of specialized master's programs in Artificial Intelligence (AI).

The overall grant is 1,664,557 euros.

The Master’s degree will be awarded by the University of Pavia.

The Master lasts one and a half years (2,250 hours) and corresponds to 90 ECTS.

The Content of the Master is available here: 

https://xaim.eu/pdf-files/xAIM%20Master’s%20Brochure.pdf

The application for the first intake (October 2022) opens in Spring 2022

For more information:

https://xaim.eu

https://xaim.eu/faq/

Professor Roberto V. Zicari gave a Guest Lecture at the Seoul National University Law School.

Professor Roberto V. Zicari was invited to give a guest lecture by Professor Heo, Seongwook, Professor of Law at the Seoul National University Law School.

Professor Zicari introduced to his law students:

i) the EU Framework for Trustworthy AI,

ii) our research work on assessing Trustworthy AI, and

iii) the EU AI Act.

The English presentation starts at minute 28 (before that, it is in Korean): Watch here.

The Center for the Study of Ethics in the Professions at Illinois Institute of Technology (Chicago, USA) launches The Ethical and Trustworthy AI Lab based on the Z-Inspection® Process.

Chicago, March 15, 2022

The Ethical and Trustworthy AI Lab at Illinois Institute of Technology’s Center for the Study of Ethics in the Professions is an interdisciplinary group of researchers interested in the social and ethical implications of Artificial Intelligence (AI).

The group investigates philosophical, ethical, and social aspects of AI including trustworthiness and the question of what it is that makes AI uses ethical, just, and trustworthy; the roles of ethics codes, ethical guidelines, and policy-making in the regulation of AI technology; as well as AI applications in agriculture and medical contexts. 

The mission of the Lab is to involve stakeholders from all fields, such as computer science, technology, engineering, philosophy, social sciences, practitioners, and students, in an interdisciplinary reflection on the ethical uses of AI.

The Lab collaborates closely with the AI@IllinoisTech initiative, in particular its AI Ethics Working Group (AIEWG), and with the international Z-Inspection® network. The Z-Inspection® assessment method for Trustworthy AI is an approach based on the Ethics Guidelines for Trustworthy AI by the European Commission's High-Level Expert Group on Artificial Intelligence.

The head of the new Lab is Prof. Elisabeth Hildt.

More information here.

Vision 2022

1. Trustworthy AI Labs are established, based on the Z-Inspection® process

Like this one in Helsinki;

2. Z-Inspection® process modules are incorporated into Master and PhD programs at selected universities, to equip a new generation of interdisciplinary students in assessment of ethical AI;

3. The Z-Inspection® process is leveraged in future European policy and regulation relating to AI.

Arcada University of Applied Sciences (Helsinki, Finland) launches The Laboratory for Trustworthy AI based on the Z-Inspection® Process.

Helsinki, November 12, 2021

The Laboratory for Trustworthy AI at Arcada University of Applied Sciences (Helsinki, Finland) is a transdisciplinary and international research community that trains organizations and actors to assess the use of artificial intelligence. The lab connects academia and civil society, including developers of AI solutions, students, end-users, researchers, and stakeholders.

The Lab promotes a human-centric approach to AI and works towards closing the gap between ethically sound AI development and technical and methodological practice. The Lab embraces technical innovativeness and assists organizations in mapping the socio-technical scenarios used to assess risk.

The Lab collaborates closely with international networks such as the Z-Inspection® initiative. The Z-Inspection® approach is a validated assessment method that helps organizations deliver ethically sustainable, evidence-based, trustworthy, and user-friendly AI-driven solutions. The method is published in IEEE Transactions on Technology and Society.

More information about the Lab here.

Lessons Learned: Co-design of Trustworthy AI. Best Practice. By Helga Brogger, President of the Norwegian Society of Radiology

Mission: …Aid the development of designs with reduced end-user vulnerability…

-“…Socio-technical scenarios can be used to broaden stakeholders’ understanding of one’s own role in the technology, as well as awareness of stakeholders’ interdependence…”

– “…Recurrent, open-minded, and interdisciplinary discussions involving different perspectives of the broad problem definition….”

– “…The early involvement of an interdisciplinary panel of experts broadened the horizon of AI designers which are usually focused on the problem definition from a data and application perspective…”

– “…Consider the aim of the future AI system as a claim that needs to be validated before the AI system is deployed…”

-“…Involve patients at every stage of the design process … it is particularly important to ensure that the views, needs, and preferences of vulnerable and disadvantaged patient groups are taken into account to avoid exacerbating existing inequalities…”

Thank you, Roberto V. Zicari and the rest of the team for these insights!

— Helga Brogger, President of the Norwegian Society of Radiology

………………………………………………………………………………………………………………………………………………………………………….

Co-Design of a Trustworthy AI System in Healthcare: Deep Learning Based Skin Lesion Classifier.

Roberto V. Zicari, Sheraz Ahmed, Julia Amann, Stephan Alexander Braun, John Brodersen, Frédérick Bruneault, James Brusseau, Erik Campano, Megan Coffee, Andreas Dengel, Boris Düdder, Alessio Gallucci, Thomas Krendl Gilbert, Philippe Gottfrois, Emmanuel Goffi, Christoffer Bjerre Haase, Thilo Hagendorff, Eleanore Hickman, Elisabeth Hildt, Sune Holm, Pedro Kringen, Ulrich Kühne, Adriano Lucieri, Vince I. Madai, Pedro A. Moreno-Sánchez, Oriana Medlicott, Matiss Ozols, Eberhard Schnebel, Andy Spezzatti, Jesmin Jahan Tithi, Steven Umbrello, Dennis Vetter, Holger Volland, Magnus Westerlund and Renee Wurth.

Front. Hum. Dyn. | Human and Artificial Collaboration for Medical Best Practices, July 13, 2021
