Successfully completed the Third World Z-inspection® Conference in Seoul, May 20 and 21, 2025

“Creating international AI ethics standards”: Seoul National University hosts the ‘Z-Inspection Conference’ (Korean press headline)

👉 Third World Z-inspection® Conference,
at Seoul National University, Seoul, May 20 and 21, 2025.

More than 50 experts from around the world gathered to discuss important issues such as:

– How to assess Trustworthy AI in practice,
– AI regulation vs. AI Innovation vs. AI Ethics,
– AI in Education,
– AI Certification,
– AI and Society,
– AI Governance seen from different worldviews,
and more…

Supported by Merck

In Cooperation with

Trustworthy AI Lab at the Graduate School of Data Science, Seoul National University

Graduate School of Data Science, Seoul National University (SNU)

Z-Inspection® Initiative

Press Coverage


Our pilot project, “Assessment for Responsible Artificial Intelligence,” conducted together with Rijks ICT Gilde (Ministry of the Interior and Kingdom Relations, BZK) and the province of Fryslân (The Netherlands), is the winner of the 2025 ISSIP Award for Excellence in Service Innovation with Distinguished Recognition – Impact to Society.

📣 The results of the International Society of Service Innovation Professionals (ISSIP) Excellence in Service Innovation Awards are in.

It is our distinct pleasure to inform you that the Assessment for Responsible AI project has been selected by the ISSIP Awards Committee as the winner of Excellence in Service Innovation with Distinguished Recognition – Impact to Society.

This prestigious award is given once each year to a company or organization that has designed, developed, or deployed a novel solution that, in the judgment of the ISSIP Awards Committee, is the most innovative of all the submissions for that year in its category.

The categories are ‘impact to business,’ ‘impact to society’ and ‘impact to innovation.’

💡 Your project came first in the category of ‘Distinguished Recognition – Impact to Society.’

The judging criteria are based on the uniqueness, creativity, technical merit, value generation, and impact of the innovative solution.

ISSIP is proud to recognize the outstanding commitment of your team and Rijks ICT Gilde (Dutch Ministry of the Interior and Kingdom Relations (BZK)), Province of Fryslân, The Netherlands, and Z-Inspection® Initiative to service innovation which should serve as an inspiration to other innovators across the globe.

Again congratulations!
Haroon Abbu, Chair of the ISSIP Awards Committee
cc Michele Carroll, Jim Spohrer


In March of this year, we started the trustworthiness assessment of the ExplainMe project using the Z-Inspection® methodology.

This assessment is co-led by Jesmin Jahan Tithi, PhD, Hanna Sormunen, and Megan Coffee, with the support of Roberto V. Zicari.

Our multi-disciplinary team has already grown to 32 experts, covering various disciplines, including computer science, explainable artificial intelligence, software engineering, psychiatry, law, ethics, philosophy, and other areas of medicine and social sciences.

Our aim is the co-design of a Trustworthy Explainable AI system for mental health monitoring using speech.

The motivation behind ExplainMe stems from the observation that most state-of-the-art systems supporting remote mental health monitoring lack transparency in their reasoning and decision-making. At the same time, research confirms that acoustic features extracted from speech serve as valid markers for assessing the severity of manic and depressive symptoms. ExplainMe addresses this gap by designing an Explainable AI system for mental health monitoring using speech.

The project “ExplainMe: Explainable Artificial Intelligence for Monitoring Acoustic Features Extracted from Speech” (FENG.02.02-IP.05-0302/23) coordinated by Katarzyna Kaczmarek-Majer is carried out within the First Team programme of the FNP Foundation for Polish Science co-financed by the European Union under the European Funds for Smart Economy 2021-2027 (FENG).

Successfully completed the Second World Z-inspection® Conference, Friday, August 23 and Saturday, August 24, 2024, in Hamburg, Germany

Short video Impressions (Link)

Posts and Quotes

“‘The pilot we as the Dutch government conducted together with the Z-Inspection® Initiative and Trustworthy AI Labs was one of the best experiences in my professional life so far,’ I said as a panelist at the Z-Inspection World Conference in Hamburg last Friday, and with that I was not exaggerating.” — Willy Tadema, AI Governance & AI Ethics at Rijksoverheid

https://www.linkedin.com/posts/roberto-v-zicari-087863_zinspection-ai-trustworthyai-activity-7233920228861132800-TJ8r?utm_source=share&utm_medium=member_desktop

https://www.linkedin.com/posts/z-inspection%C2%AE-trustworthy-ai-labs_zinspection-ai-trustworthyai-activity-7233446907325431809-JQVG?utm_source=share&utm_medium=member_desktop

https://www.linkedin.com/posts/activity-7233858821012475904-q6De?utm_source=share&utm_medium=member_desktop



https://www.linkedin.com/posts/marta-bienkiewicz_i-was-privileged-to-participate-in-the-activity-7233896647389323265-RjFL?utm_source=share&utm_medium=member_desktop

https://www.linkedin.com/feed/update/urn:li:activity:7234080960152031232/



https://www.linkedin.com/posts/willytadema_home-activity-7234098217007677440-O4TV?utm_source=share&utm_medium=member_desktop

“Last week I had the pleasure to be part of the second world Z-Inspection® Initiative and Trustworthy AI Labs conference – such a great opportunity to discuss ideas on responsible AI with like-minded experts from all around the world!” — Jean Enno Charton, Director Digital Ethics & Bioethics, Merck Group.


Full Conference Program



Lessons Learned in Performing a Trustworthy AI and Fundamental Rights Assessment

I am super proud of our work together and very happy to share with you that the big report for our pilot project is now publicly available on arXiv!

“This report is made public. The results of this pilot are of great importance for the Dutch government, serving as a best practice with which public administrators can get started, and incorporate ethical and human rights values when considering the use of an AI system and/or algorithms. It also sends a strong message to encourage public administrators to make the results of AI assessments like this one transparent and available to the public.”

Lessons Learned in Performing a Trustworthy AI and Fundamental Rights Assessment

Marjolein Boonstra, Frédérick Bruneault, Subrata Chakraborty, Tjitske Faber, Alessio Gallucci, Eleanore Hickman, Gerard Kema, Heejin Kim, Jaap Kooiker, Elisabeth Hildt, Annegret Lamadé, Emilie Wiinblad Mathez, Florian Möslein, Genien Pathuis, Giovanni Sartor, Marijke Steege, Alice Stocco, Willy Tadema, Jarno Tuimala, Isabel van Vledder, Dennis Vetter, Jana Vetter, Magnus Westerlund, Roberto V. Zicari

This report shares the experiences, results and lessons learned in conducting a pilot project, “Responsible use of AI,” in cooperation with the Province of Friesland, Rijks ICT Gilde (part of the Ministry of the Interior and Kingdom Relations, BZK), both in The Netherlands, and a group of members of the Z-Inspection® Initiative. The pilot project took place from May 2022 through January 2023. During the pilot, the practical application of a deep learning algorithm from the province of Fryslân was assessed. The AI maps heathland grassland by means of satellite images for monitoring nature reserves. Environmental monitoring is one of the crucial activities carried out by society for several purposes, ranging from maintaining standards on drinkable water to quantifying the CO2 emissions of a particular state or region. Using satellite imagery and machine learning to support decisions is becoming an important part of environmental monitoring. The main focus of this report is to share the experiences, results and lessons learned from performing both a Trustworthy AI assessment using the Z-Inspection® process and the EU framework for Trustworthy AI, and combining it with a Fundamental Rights assessment using the Fundamental Rights and Algorithms Impact Assessment (FRAIA), as recommended by the Dutch government for the use of AI algorithms by Dutch public authorities.

Comments: On behalf of the Z-Inspection® Initiative

Subjects: Computers and Society (cs.CY)

Cite: arXiv:2404.14366 [cs.CY] (or arXiv:2404.14366v1 [cs.CY] for this version)

The Z-inspection® initiative creates an Advisory Body to address Generative AI for Higher Education in Practice.

Kronberg, December 6, 2023

The Z-inspection® initiative lead, Roberto V. Zicari, announced on Wednesday, December 6th, the creation of a 12-member advisory body to address Generative AI for Higher Education in Practice. The advisory board will support the Z-inspection® initiative’s international community in its efforts to assess the risks and opportunities of using Generative AI in higher education in practice, within the pilot project “Assessing the use of Generative AI in Higher Education.”

The project follows the UNESCO guidance for policymakers on Generative AI and education: Pilot testing, monitoring and evaluation, and building an evidence base

The members are listed below:

Kiran Bhujun, Professor, Director of the Tertiary Education and Scientific Research Division of the Ministry of Education, Tertiary Education, Science and Technology of the Republic of Mauritius.

Yves Deville, Professor, Université catholique de Louvain. Senior Advisor to the President for the Digital University at UCLouvain. Belgium

Julio Cesar Duhalde, Technical, economic and legal advisor at the Ministry of Economy – Buenos Aires, Argentina.

Erja Heikkinen, PhD, Director General (temp), Ministry of Education and Culture, Helsinki, Finland

Lambert Hogenhout, Chief Data, Analytics and Emerging Technologies, United Nations, New York, USA.

Jonathan Michie OBE FAcSS, Professor of Innovation and Knowledge Exchange, University of Oxford, Pro-Vice-Chancellor (without portfolio), President of Kellogg College, University of Oxford, UK.

Irina Mirkina, AI Lead, Office of Innovation, UNICEF, Stockholm, Sweden.

Victor Ochen, The African Youth Initiative Network (AYINET). Member of the Advisory Group to the United Nations High Commissioner for Refugees at the United Nations. Lira, Uganda.

Sung Jae Park, Ph.D. Research fellow, Korean Educational Development Institute (KEDI). Former Senior Advisor to the Deputy Prime Minister and Minister of Education of the Republic of Korea. Seoul, South Korea.

Gerald Santucci, President, European Education New Society Association (ENSA), France.

Willy Tadema, AI Ethics Lead, Rijks ICT Gilde (RIG), Ministry of Interior and Kingdom Relations; AI policy advisor, Ministry of Interior and Kingdom Relations; Member of the Dutch National Standards Body for AI (NEN), The Netherlands.

Peter J. Wells, Head of Education Southern Africa, UNESCO, Harare, Zimbabwe.

Find out more on Pilot Project: Assessing Trustworthiness of the use of Generative AI for Higher Education at:

https://z-inspection.org/pilot-project-assessing-trustworthiness-of-the-use-of-generative-ai-for-higher-education/

Background

Z-Inspection® is a holistic process for evaluating new technologies in which ethical issues are discussed through the elaboration of socio-technical scenarios. In particular, Z-Inspection® can be used to perform independent assessments and/or self-assessments together with the stakeholders owning the use case.

Z-inspection® is a registered trademark.

This work is distributed under the terms and conditions of the Creative Commons (Attribution-NonCommercial-ShareAlike CC BY-NC-SA) license.


Lessons Learned from Assessing Trustworthy AI in Practice.

We published the Lessons Learned from Assessing Trustworthy AI in Practice.

Dennis Vetter, Julia Amann, Frédérick Bruneault, Megan Coffee, Boris Düdder, Alessio Gallucci, Thomas Krendl Gilbert, Thilo Hagendorff, Irmhild van Halem, Eleanore Hickman, Elisabeth Hildt, Sune Holm, George Kararigas, Pedro Kringen, Vince Madai, Emilie Wiinblad Mathez, Jesmin Jahan Tithi, Magnus Westerlund, Renee Wurth, Roberto V. Zicari & Z-Inspection® initiative (2022)

Digital Society (DSO), 2, 35 (2023). Springer

Link: https://link.springer.com/article/10.1007/s44206-023-00063-1

Abstract

Building artificial intelligence (AI) systems that adhere to ethical standards is a complex problem. Even though a multitude of guidelines for the design and development of such trustworthy AI systems exist, these guidelines focus on high-level and abstract requirements for AI systems, and it is often very difficult to assess if a specific system fulfills these requirements.

The Z-Inspection® process provides a holistic and dynamic framework to evaluate the trustworthiness of specific AI systems at different stages of the AI lifecycle, including intended use, design, and development. It focuses, in particular, on the discussion and identification of ethical issues and tensions through the analysis of socio-technical scenarios and a requirement-based framework for ethical and trustworthy AI.

This article is a methodological reflection on the Z-Inspection® process. We illustrate how high-level guidelines for ethical and trustworthy AI can be applied in practice and provide insights for both AI researchers and AI practitioners. We share the lessons learned from conducting a series of independent assessments to evaluate the trustworthiness of real-world AI systems, as well as key recommendations and practical suggestions on how to ensure a rigorous trustworthiness assessment throughout the lifecycle of an AI system.

The results presented in this article are based on our assessments of AI systems in the healthcare sector and environmental monitoring, where we used the framework for trustworthy AI proposed in the Ethics Guidelines for Trustworthy AI by the European Commission’s High-Level Expert Group on AI. However, the assessment process and the lessons learned can be adapted to other domains and include additional frameworks.

An extended version of this article is available in Zicari et al. (2022).

Announcing a New Pilot Project: Assessing Trustworthiness of the use of Generative AI for Higher Education.

This pilot project of the Z-inspection® initiative (https://z-inspection.org) aims at assessing the use of Generative AI in higher education, considering specific use cases.

For this pilot project, we will assess the ethical, technical, domain-specific (i.e., education) and legal implications of the use of Generative AI products/services within the university context.

We follow the UNESCO guidance for policymakers on AI and education, in particular policy recommendation 6: Pilot testing, monitoring and evaluation, and building an evidence base.

Expected output: a white paper and a peer-reviewed journal article as best practice, together with a set of recommendations for each specific use case. Such recommendations could also be useful in helping to clarify the guidelines that each university is creating for this purpose.

Participants

– Affiliated Labs: https://z-inspection.org/affiliated-labs/

– Members of the Z-inspection® initiative: https://z-inspection.org

– Ministries and Universities

– Specialized agencies

– Others

Approach

An interdisciplinary team of experts will assess the trustworthiness of Generative AI for selected use cases in higher education using the Z-Inspection® process: https://z-inspection.org

Z-Inspection® is a holistic process for evaluating new technologies in which ethical issues are discussed through the elaboration of socio-technical scenarios. In particular, Z-Inspection® can be used to perform independent assessments and/or self-assessments together with the stakeholders owning the use case.

For the context of this pilot project, we define ethics in line with the essence of modern democracy, i.e., “respect for others, expressed through support for fundamental human rights.” We take into consideration that “trust” in the development, deployment and use of AI systems concerns not only the technology’s inherent properties, but also the qualities of the socio-technical systems involving AI applications.

Specifically, we consider the ethics guidelines for trustworthy artificial intelligence defined by the EU High-Level Expert Group on AI, which define trustworthy AI as:

(1) lawful – respecting all applicable laws and regulations 

(2) ethical – respecting ethical principles and values 

(3) robust – both from a technical perspective and taking into account its social environment

And we use the four ethical principles, rooted in fundamental rights, defined in [13], acknowledging that tensions may arise between them:

(1) Respect for human autonomy 

(2) Prevention of harm 

(3) Fairness 

(4) Explicability 

Furthermore, we also consider the seven requirements of Trustworthy AI defined by the High-Level Expert Group set up by the EU. Each requirement has a number of sub-requirements, as indicated in Table 1.

Table 1. Requirements and sub-requirements of Trustworthy AI.

1. Human agency and oversight: including fundamental rights, human agency and human oversight

2. Technical robustness and safety: including resilience to attack and security, fall-back plan and general safety, accuracy, reliability and reproducibility

3. Privacy and data governance: including respect for privacy, quality and integrity of data, and access to data

4. Transparency: including traceability, explainability and communication

5. Diversity, non-discrimination and fairness: including the avoidance of unfair bias, accessibility and universal design, and stakeholder participation

6. Societal and environmental wellbeing: including sustainability and environmental friendliness, social impact, society and democracy

7. Accountability: including auditability, minimization and reporting of negative impact, trade-offs and redress.

While we consider the seven requirements comprehensive, we believe additional ones can still bring value. Two such additional requirements proposed by the Z-Inspection® initiative are “Assessing if the ecosystems respect values of Western modern democracy” and “Avoiding concentration of power.”

………………………………………………………………………………………………………………………………………………………..

The UNESCO guidance for policymakers on AI and education sets out policy recommendations in seven areas:

(https://unesdoc.unesco.org/ark:/48223/pf0000376709)

1. A system-wide vision and strategic priorities 

2. Overarching principle for AI and education policies 

3. Interdisciplinary planning and inter-sectoral governance 

4. Policies and regulations for equitable, inclusive, and ethical use of AI 

5. Master plans for using AI in education management, teaching, learning, and assessment

6. Pilot testing, monitoring and evaluation, and building an evidence base 

7. Fostering local AI innovations for education 

……………………………………………………………………………………………………………………………………………………………….

From the Guidelines for the use of AI in teaching at the University of Helsinki, Academic Affairs Council, 16 February 2023:

(https://teaching.helsinki.fi/instructions/article/artificial-intelligence-teaching)

“Large artificial intelligence (AI)-based language models such as Chat GPT, Google Bard, and DeepL have evolved to the point where they can produce human-like text and conversations and correct and transform text at such a high level that it can be difficult to distinguish the result from human-generated text. It is foreseeable that more such models will emerge, and their functionalities will continue to evolve, so their existence should be taken into account in university teaching and research.

The existence of large language models should be seen as an opportunity. Degree programmes and teachers are encouraged to use AI in their teaching and to prepare students for a society of the future where AI methods will be widely used.

As AI brings new possibilities for producing text whose origin and reliability is unclear, they should be used in a controlled way. Use may be restricted in teaching in situations where the use would not promote student learning.

At EU level, an AI regulation is under preparation, which will also apply to AI systems in education. In addition, there is an ethical policy on AI and its use, as well as an ethical code for teachers. The University’s guidelines may be further specified in the light of future regulation and technological developments.”

______________________________________________________

Resources

1. Policy paper. Generative artificial intelligence in education. UK: The Department for Education’s (DfE) position on the use of generative artificial intelligence (AI) in higher education. Link to .PDF

2. Do Foundation Model Providers Comply with the Draft EU AI Act? Stanford researchers evaluate foundation model providers like OpenAI and Google for their compliance with proposed EU law on AI.  Link

They identified a final list of 12 requirements and scored the 10 models using a 5-point rubric. The methodology for the study can be found here.

3. Leading universities in the UK (the Russell Group universities) have developed a set of principles on the use of generative AI tools in education. Here is the link.

4. Frontier AI Regulation: Managing Emerging Risks to Public Safety arXiv:2307.03718 [cs.CY] LINK

5. Zhaki Abdullah, Students, teachers will learn to properly use tools like ChatGPT: (Singapore Education Minister) Chan Chun Sing, Straits Times, 12 February 2023, LINK

6. U.S. Department of Education, Office of Educational Technology, Artificial Intelligence and Future of Teaching and Learning: Insights and Recommendations, Washington D.C., May 2023, LINK

7. Japanese schools to be allowed limited use of generative AI, Kyodo News, 22 June 2023, LINK

8. Higher Education Webinar: Implications of Artificial Intelligence in Higher Education , Tuesday, June 27, 2023, Council on Foreign Relations, LINK

9. Novelli, C., Casolari, F., Rotolo, A. et al. Taking AI risks seriously: a new assessment model for the AI Act. AI & Soc (2023). LINK

10. Academic without Borders, Bimonthly Newsletter n°59, July 2023.

11. On the use of artificial intelligence and in particular of ChatGPT in higher education. (UNESCO). Link to .PDF

12. KU Leuven, Responsible use of Generative Artificial Intelligence (GenAI) in research. These guidelines will be updated with new information and insights to keep them in line with the rapidly evolving technology (last updated June 24, 2023). The further integration of teaching and research guidelines is still on the agenda. LINK

13. Stanford HAI, ChatGPT Out-scores Medical Students on Complex Clinical Care Exam Questions. Jul 17, 2023 | Adam Hadhazy

14. The Norwegian Consumer Council published a detailed report “Ghost in the machine – Addressing the consumer harms of generative AI” outlining the harms, legal frameworks, and possible ways forward. In conjunction with this launch, the Norwegian Consumer Council and 14 consumer organizations from across the EU and the US demand that policymakers and regulators act.  https://storage02.forbrukerradet.no/media/2023/06/generative-ai-rapport-2023.pdf

15. University of Melbourne: Inquiry into the use of generative AI in the education system. Submission to the House Standing Committee on Employment, Education and Training 14 July 2023. https://about.unimelb.edu.au/__data/assets/pdf_file/0032/396446/UoM-Submission-Inquiry-into-Generative-AI-in-Education-FINAL.pdf

16. Cornell University: Generative Artificial Intelligence for Education and Pedagogy. July 18, 2023. https://teaching.cornell.edu/sites/default/files/2023-08/Cornell-GenerativeAIForEducation-Report_2.pdf

17. The University of North Carolina at Chapel Hill: Teaching Use Guidelines for Generative Artificial Intelligence, https://provost.unc.edu/wp-content/uploads/2023/07/Teaching-Generative-AI-Use-Guidance_UNC-AI-Committee-June-15-202348.pdf

18. University of Sydney: 13 March, 2023 Students answer your questions about generative AI – part 2: Ethics, integrity, and the value of university. https://educational-innovation.sydney.edu.au/teaching@sydney/students-answer-your-questions-about-generative-ai-part-2-ethics-integrity-and-the-value-of-university/

19. The Berkman Klein Center for Internet & Society at Harvard University: Exploring the Impacts of Generative AI on the Future of Teaching and Learning https://cyber.harvard.edu/story/2023-06/impacts-generative-ai-teaching-learning

20. Stanford University: Pedagogic strategies for adapting to generative AI chatbots. Eight strategic steps to help instructors adapt to generative AI tools and chatbots. June 19, 2023, Center for Teaching and Learning. https://docs.google.com/document/d/1la8jOJTWfhUdNna5AJYiKgNR2-54MBJswg0gyBcGB-c/edit

21. Council of Europe: Artificial Intelligence and Education: A critical view through the lens of human rights, democracy and the rule of law. November 2022.

https://rm.coe.int/artificial-intelligence-and-education-a-critical-view-through-the-lens/1680a886bd

22. ANU Centre for Learning and Teaching: Chat GPT and other generative AI tools: What ANU academics need to know February 2023. https://teaching.weblogs.anu.edu.au/files/2023/02/Chat_GPT_FAQ-1.pdf

23. Guidance for Generative AI in education and research | UNESCO, 7 September 2023

 

Z-Inspection® is a mandatory course for the new Online Master Program xAIM

xAIM is a new Online Interdisciplinary Master’s Program at the Intersection of AI and Health Care.

Students will learn how to apply the Z-Inspection® process to real use cases of AI in healthcare.

A number of teaching resources are available here:

STARTING PERIOD FEB 2023
EXPECTED GRADUATION APR 2024

The eXplainable Artificial Intelligence in Healthcare Management Master’s is developed within the xAIM project, supported by the Connecting Europe Facility in Telecom (project INEA/CEF/ICT/A2020/2276680). Our aim is to advance the development of highly qualified professionals and address the lack of highly specialized digital skills in AI. The master’s is designed for anyone interested in understanding the needs of explainable AI (xAI) in healthcare, and in particular for health-related professionals, with a particular focus on the exploitation of possible applications.

xAIM Web Site

Impressions of the First World Z-inspection® Conference: Ateneo Veneto, March 10-11, 2023, Venice, Italy.

World Z-inspection® Conference

Ateneo Veneto, March 10-11, 2023, Venice, Italy.

Campo S. Fantin, 1897, 30124 Venezia

Press Release (Ateneo Veneto) : https://ateneoveneto.org/world-z-inspection-conference-sullintelligenza-artificiale/

Friday afternoon, March 10, and all day Saturday, March 11, 2023

Location: Ateneo Veneto (https://en.wikipedia.org/wiki/Ateneo_Veneto),

Reading Room (Convention) + Tommaseo Room for coffee break and lunch

In cooperation with Global Campus of Human Rights (https://gchumanrights.org) and Venice Urban Lab (https://www.veniceurbanlab.org/en)

Supporters: Arcada University of Applied Sciences, Merck, Roche, Zurich Insurance Company.

……………………………………………………………………………………………………………………………………………

Antonella Magaraggia, President Ateneo Veneto,
Sergio Pascolo, President Venice Urban Lab

Friday (afternoon), March 10, 2023

Moderator: Holger Volland (CEO, brand eins, Germany)

2:00 pm

– Welcome remarks

Antonella Magaraggia, President Ateneo Veneto,

Sergio Pascolo, President, Venice Urban Lab,

George Ulrich, Academic Director, Global Campus of Human Rights.

2:20 pm – “Singing tuning with Ahhh.” Alessandro Donati

2:30 pm

– The Z-inspection® initiative

Roberto V. Zicari (Lead Z-inspection® initiative),

3:15 pm

– Presentation of selected Affiliated Trustworthy AI Labs

Magnus Westerlund (The Laboratory for Trustworthy AI, Arcada University of Applied Sciences, Helsinki, Finland)

Sune Holm, Boris Düdder (Trustworthy AI Lab, University of Copenhagen, Copenhagen, Denmark)

Gemma Roig, Karsten Tolle (Trustworthy AI Lab, Goethe University Frankfurt, Frankfurt, Germany)

Vince Madai (Trustworthy AI in Healthcare Lab, QUEST Centre for Responsible Research, Berlin Institute of Health at Charité (BIH), Germany)

Pedro Moreno Sanchez (Trustworthy AI for Healthcare Lab, Tampere University, Finland)

Roberto Francischello (Trustworthy AI Lab at the Imaging Lab, University of Pisa, Pisa, Italy)

Coffee break (4:30 pm)

5:15 pm

– Panel: “Human Rights and Trustworthy AI”

Panelists:

George Ulrich (Academic Director, Global Campus of Human Rights),

Elisabeth Hildt (Director, Center for the Study of Ethics in the Professions, Illinois Institute of Technology, and L3S Research Center, Leibniz University Hannover)

Emilie Wiinblad Mathez (Senior Ethics Adviser) 

Frédérick Bruneault (adjunct professor, École des médias, Université du Québec à Montréal)

Peter G. Kirchschlaeger (Ethics-Professor, Director of the Institute of Social Ethics ISE, University of Lucerne)

Giovanni Sartor (Professor in Legal Informatics at the University of Bologna)

Moderator: Holger Volland (CEO, brand eins, Germany)

Aperitif (6:30 pm)

………………………………………………………………………………………………………………………………………………….

Saturday (all day), March 11, 2023

Moderator: Holger Volland (CEO, brand eins, Germany)

10:00 am

– Welcome remarks

Gianpaolo Scarante, Past President Ateneo Veneto

10:10 am – “Singing tuning with Ahhh.” Alessandro Donati

10:15 am

– Pilot Project “Responsible use of AI” with Rijks ICT Gilde (Ministry of the Interior and Kingdom Relations) and the Province of Friesland, The Netherlands

Willy Tadema (AI Ethics Lead, Rijks ICT Gilde, The Netherlands),

Marijke Steege (Senior Consultant Strategy Innovation and Data, Rijks ICT Gilde, The Netherlands),

Gerard Kema (Innovation Manager, Province of Friesland, The Netherlands),

Coffee break (11 a.m.)

11:30am

– Trustworthy AI in Practice: Best practices

Alberto Signoroni, (University of Brescia, Italy),

Mattia Savardi, (University of Brescia, Italy),

Davide Farina (University of Brescia, Italy),

Hanna Sormunen (Finnish Tax Administration),

Vince Madai (QUEST, Berlin)

Lunch (1:00 p.m.)

2:30 pm

– Panel: “How do we trust AI?”

Panelists:

Jean Enno Charton (Director Bioethics & Digital Ethics, Merck),

Bryn Roberts (Global Head of Data & Analytics, Roche),

Sarah Gadd (Head of Data & Artificial Intelligence Solutions, Credit Suisse),

Lisa Bechtold (Global Lead AI Assurance & Data Governance, Zurich Insurance Company)

Moderator: Holger Volland (CEO brand eins, Germany)

Coffee break (4 p.m.)

4:45 pm

– Trustworthy AI in Practice: Best practices

Ulrich Kühne (Hautmedizin Bad Soden, Germany)

Adriano Lucieri (DFKI, Germany)

James Brusseau (Pace University, USA)

Adarsh Srivastava (Roche, India)

Concluding Remarks

Roberto V. Zicari (Lead Z-inspection® initiative), 

Sergio Pascolo (President, Venice Urban Lab)

Aperitif (6 p.m.)

Frédérick Bruneault and Pedro Kringen in front of an original Tintoretto.