I am super proud of our work together and very happy to share that the full report for our pilot project is now publicly available on arXiv!
“This report is made public. The results of this pilot are of great importance for the Dutch government: they serve as a best practice with which public administrators can get started when incorporating ethical and human rights values into the consideration of AI systems and/or algorithms. The report also sends a strong message encouraging public administrators to make the results of AI assessments like this one transparent and available to the public.”
Lessons Learned in Performing a Trustworthy AI and Fundamental Rights Assessment
This report shares the experiences, results and lessons learned in conducting the pilot project “Responsible use of AI” in cooperation with the Province of Friesland and Rijks ICT Gilde, part of the Ministry of the Interior and Kingdom Relations (BZK) (both in the Netherlands), together with a group of members of the Z-Inspection® Initiative. The pilot project took place from May 2022 through January 2023. During the pilot, the practical application of a deep learning algorithm from the province of Fryslân was assessed. The AI maps heathland grassland by means of satellite images for the purpose of monitoring nature reserves. Environmental monitoring is one of the crucial activities carried out by society for purposes ranging from maintaining standards for drinkable water to quantifying the CO2 emissions of a particular state or region. Using satellite imagery and machine learning to support decisions is becoming an important part of environmental monitoring. The main focus of this report is to share the experiences, results and lessons learned from performing both a Trustworthy AI assessment, using the Z-Inspection® process and the EU framework for Trustworthy AI, and a Fundamental Rights assessment, using the Fundamental Rights and Algorithms Impact Assessment (FRAIA) recommended by the Dutch government for the use of AI algorithms by Dutch public authorities.
Comments: On behalf of the Z-Inspection® Initiative
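For readers who want a concrete picture of the kind of system assessed in the pilot, the following is a minimal, purely illustrative Python/PyTorch sketch of a satellite-image land-cover segmentation pipeline. It is not the province's model: the architecture, the band count and the class labels are all invented for illustration.

```python
# Hypothetical sketch only: illustrates the general shape of a
# satellite-image land-cover segmentation pipeline. It is NOT the
# model assessed in the pilot; all names and numbers are invented.
import torch
import torch.nn as nn

class TinySegmenter(nn.Module):
    """Toy stand-in for a per-pixel land-cover classification network."""
    def __init__(self, in_bands: int = 4, n_classes: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_bands, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, n_classes, kernel_size=1),  # per-pixel class scores
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# A random stand-in for a 4-band (e.g. R, G, B, NIR) 256x256 satellite tile.
tile = torch.rand(1, 4, 256, 256)
model = TinySegmenter()
with torch.no_grad():
    logits = model(tile)

# Per-pixel prediction; the labels (0 = other, 1 = heathland) are illustrative.
heathland_mask = logits.argmax(dim=1)
print(heathland_mask.shape)  # torch.Size([1, 256, 256])
```

A real pipeline would add georeferenced tiling, radiometric preprocessing and a trained model; an assessment then asks, among other things, how the accuracy, reproducibility and transparency of exactly these steps are evidenced.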
Announcing a New Pilot Project: Assessing Trustworthiness of the Use of Generative AI for Higher Education
This pilot project of the Z-Inspection® initiative (https://z-inspection.org) aims to assess the use of Generative AI in higher education, considering specific use cases.
For this pilot project, we will assess the ethical, technical, domain-specific (i.e. education) and legal implications of the use of Generative AI products/services within the university context.
We follow the UNESCO guidance for policymakers on AI and education, in particular policy recommendation 6: pilot testing, monitoring and evaluation, and building an evidence base.
Expected output: a white paper and a peer-reviewed journal article serving as best practice, together with a set of recommendations for each specific use case. Such recommendations could also be useful in helping clarify the guidelines that each university is currently creating for this purpose.
An interdisciplinary team of experts will assess the trustworthiness of Generative AI for selected use cases in higher education using the Z-Inspection® process: https://z-inspection.org
Z-Inspection® is a holistic process for evaluating new technologies in which ethical issues are discussed through the elaboration of socio-technical scenarios. In particular, Z-Inspection® can be used to perform independent assessments and/or self-assessments together with the stakeholders owning the use case.
For the context of this pilot project, we define ethics in line with the essence of modern democracy, i.e. “respect for others, expressed through support for fundamental human rights”. We take into consideration that “trust” in the development, deployment and use of AI systems concerns not only the technology’s inherent properties, but also the qualities of the socio-technical systems involving AI applications.
Specifically, we consider the Ethics Guidelines for Trustworthy Artificial Intelligence defined by the EU High-Level Expert Group on AI, which define trustworthy AI as:
(1) lawful – respecting all applicable laws and regulations
(2) ethical – respecting ethical principles and values
(3) robust – both from a technical perspective and taking into account its social environment
We also use the four ethical principles, rooted in fundamental rights, defined in [13], acknowledging that tensions may arise between them:
(1) Respect for human autonomy
(2) Prevention of harm
(3) Fairness
(4) Explicability
Furthermore, we consider the seven requirements of Trustworthy AI defined by the High-Level Expert Group on AI set up by the EU. Each requirement has a number of sub-requirements, as indicated in Table 1.
Table 1. Requirements and sub-requirements of Trustworthy AI.
1. Human agency and oversight: including fundamental rights, human agency and human oversight.
2. Technical robustness and safety: including resilience to attack and security, fallback plan and general safety, accuracy, reliability and reproducibility.
3. Privacy and data governance: including respect for privacy, quality and integrity of data, and access to data.
4. Transparency: including traceability, explainability and communication.
5. Diversity, non-discrimination and fairness: including the avoidance of unfair bias, accessibility and universal design, and stakeholder participation.
6. Societal and environmental wellbeing: including sustainability and environmental friendliness, social impact, society and democracy.
7. Accountability: including auditability, minimization and reporting of negative impact, trade-offs and redress.
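As an illustration only, and not part of the Z-Inspection® process or the EU framework, Table 1 can be encoded as a plain data structure, for example to keep a structured log of findings during an assessment. The `open_issue` helper below is hypothetical.

```python
# Illustrative encoding of Table 1 (EU requirements and sub-requirements
# of Trustworthy AI) as plain Python data; the helper is hypothetical.
REQUIREMENTS = {
    1: ("Human agency and oversight",
        ["fundamental rights", "human agency", "human oversight"]),
    2: ("Technical robustness and safety",
        ["resilience to attack and security", "fallback plan and general safety",
         "accuracy", "reliability and reproducibility"]),
    3: ("Privacy and data governance",
        ["respect for privacy", "quality and integrity of data", "access to data"]),
    4: ("Transparency",
        ["traceability", "explainability", "communication"]),
    5: ("Diversity, non-discrimination and fairness",
        ["avoidance of unfair bias", "accessibility and universal design",
         "stakeholder participation"]),
    6: ("Societal and environmental wellbeing",
        ["sustainability and environmental friendliness", "social impact",
         "society and democracy"]),
    7: ("Accountability",
        ["auditability", "minimization and reporting of negative impact",
         "trade-offs", "redress"]),
}

def open_issue(requirement_id: int, note: str) -> dict:
    """Attach an assessment note to one of the seven requirements."""
    name, subs = REQUIREMENTS[requirement_id]
    return {"requirement": name, "sub_requirements": subs, "note": note}

print(open_issue(4, "Model documentation lacks an explainability section."))
```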
While we consider the seven requirements comprehensive, we believe additional ones can still bring value. Two such additional requirements proposed by the Z-Inspection® initiative are “Assessing if the ecosystems respect values of Western Modern democracy” and “Avoiding concentration of power”.
“Large artificial intelligence (AI)-based language models such as ChatGPT, Google Bard, and DeepL have evolved to the point where they can produce human-like text and conversations, and can correct and transform text at such a high level that it can be difficult to distinguish the result from human-generated text. It is foreseeable that more such models will emerge and that their functionalities will continue to evolve, so their existence should be taken into account in university teaching and research.
The existence of large language models should be seen as an opportunity. Degree programmes and teachers are encouraged to use AI in their teaching and to prepare students for a society of the future in which AI methods will be widely used.
As AI brings new possibilities for producing text whose origin and reliability are unclear, these models should be used in a controlled way. Their use may be restricted in teaching in situations where it would not promote student learning.
At EU level, an AI regulation is under preparation, which will also apply to AI systems in education. In addition, there is an ethical policy on AI and its use, as well as an ethical code for teachers¹. The University’s guidelines may be further specified in the light of future regulation and technological developments.”
1. Policy paper: Generative artificial intelligence in education. The Department for Education’s (DfE) position on the use of generative artificial intelligence (AI) in higher education, UK. Link to .PDF
2. Do Foundation Model Providers Comply with the Draft EU AI Act? Stanford researchers evaluate foundation model providers like OpenAI and Google for their compliance with the proposed EU law on AI. They identified a final list of 12 requirements and scored the 10 models using a 5-point rubric. The methodology for the study can be found here. Link
3. Leading universities in the UK (the Russell Group universities) have developed a set of principles on the use of generative AI tools in education. Here is the link.
4. Frontier AI Regulation: Managing Emerging Risks to Public Safety. arXiv:2307.03718 [cs.CY]. LINK
5. Zhaki Abdullah: “Students, teachers will learn to properly use tools like ChatGPT”: (Singapore Education Minister) Chan Chun Sing. Straits Times, 12 February 2023. LINK
6. U.S. Department of Education, Office of Educational Technology: Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations. Washington, D.C., May 2023. LINK
7. Japanese schools to be allowed limited use of generative AI. Kyodo News, 22 June 2023. LINK
8. Higher Education Webinar: Implications of Artificial Intelligence in Higher Education. Council on Foreign Relations, June 27, 2023. LINK
9. Novelli, C., Casolari, F., Rotolo, A., et al.: Taking AI risks seriously: a new assessment model for the AI Act. AI & Soc (2023). LINK
10. Academics Without Borders, Bimonthly Newsletter no. 59, July 2023.
11. UNESCO: On the use of artificial intelligence, and in particular ChatGPT, in higher education. Link to .PDF
12. KU Leuven: Responsible use of Generative Artificial Intelligence (GenAI) in research. These guidelines will be updated with new information and insights to keep them in line with the rapidly evolving technology (last updated June 24, 2023). The further integration of teaching and research guidelines is still on the agenda. LINK
13. Council of Europe: Artificial Intelligence and Education: A critical view through the lens of human rights, democracy and the rule of law. November 2022.