This pilot project of the Z-inspection® initiative (https://z-inspection.org) aims to assess the use of Generative AI in higher education, considering specific use cases.
For this pilot project, we will assess the ethical, technical, domain-specific (i.e. education) and legal implications of the use of Generative AI products/services within the university context.
We follow the UNESCO guidance for policymakers on AI and education, in particular policy recommendation 6: Pilot testing, monitoring and evaluation, and building an evidence base.
Expected output: a white paper and a peer-reviewed journal article presenting best practices, together with a set of recommendations for each specific use case. Such recommendations could also be useful in helping clarify the guidelines that individual universities are currently creating on this topic.
– Affiliated Labs: https://z-inspection.org/affiliated-labs/
– Members of the Z-inspection® initiative: https://z-inspection.org
– Ministries and Universities
– Specialized agencies
An interdisciplinary team of experts will assess the trustworthiness of Generative AI for selected use cases in higher education using the Z-Inspection® process: https://z-inspection.org
Z-Inspection® is a holistic process for evaluating new technologies in which ethical issues are discussed through the elaboration of socio-technical scenarios. In particular, Z-Inspection® can be used to perform independent assessments and/or self-assessments together with the stakeholders owning the use case.
For the context of this pilot project we define ethics in line with the essence of modern democracy i.e. “respect for others, expressed through support for fundamental human rights”. We take into consideration that “trust” in the development, deployment and use of AI systems concerns not only the technology’s inherent properties, but also the qualities of the socio-technical systems involving AI applications.
Specifically, we consider the Ethics Guidelines for Trustworthy Artificial Intelligence of the EU High-Level Expert Group on AI, which define trustworthy AI as:
(1) lawful – respecting all applicable laws and regulations
(2) ethical – respecting ethical principles and values
(3) robust – both from a technical perspective and taking into account its social environment
And we use the four ethical principles, rooted in fundamental rights, acknowledging that tensions may arise between them:
(1) Respect for human autonomy
(2) Prevention of harm
(3) Fairness
(4) Explicability
Furthermore, we also consider the seven requirements of Trustworthy AI defined by the High-Level Expert Group on AI set up by the EU. Each requirement has a number of sub-requirements, as indicated in Table 1.
Table 1. Requirements and sub-requirements Trustworthy AI.
1. Human agency and oversight: including fundamental rights, human agency and human oversight
2. Technical robustness and safety: including resilience to attack and security, fallback plan and general safety, accuracy, reliability and reproducibility
3. Privacy and data governance: including respect for privacy, quality and integrity of data, and access to data
4. Transparency: including traceability, explainability and communication
5. Diversity, non-discrimination and fairness: including the avoidance of unfair bias, accessibility and universal design, and stakeholder participation
6. Societal and environmental wellbeing: including sustainability and environmental friendliness, social impact, society and democracy
7. Accountability: including auditability, minimization and reporting of negative impact, trade-offs and redress
While we consider the seven requirements comprehensive, we believe additional ones can still bring value. Two such additional requirements proposed by the Z-Inspection® initiative are “Assessing if the ecosystems respect values of Western Modern democracy” and “Avoiding concentration of power”.
The UNESCO guidance for policymakers on AI and education sets out policy recommendations in seven areas:
1. A system-wide vision and strategic priorities
2. Overarching principle for AI and education policies
3. Interdisciplinary planning and inter-sectoral governance
4. Policies and regulations for equitable, inclusive, and ethical use of AI
5. Master plans for using AI in education management, teaching, learning, and assessment
6. Pilot testing, monitoring and evaluation, and building an evidence base
7. Fostering local AI innovations for education
From the Guidelines for the use of AI in teaching at the University of Helsinki (Academic Affairs Council, 16 February 2023):
“Large artificial intelligence (AI)-based language models such as Chat GPT, Google Bard, and DeepL have evolved to the point where they can produce human-like text and conversations and correct and transform text at such a high level that it can be difficult to distinguish the result from human-generated text. It is foreseeable that more such models will emerge, and their functionalities will continue to evolve, so their existence should be taken into account in university teaching and research.
The existence of large language models should be seen as an opportunity. Degree programmes and teachers are encouraged to use AI in their teaching and to prepare students for a society of the future where AI methods will be widely used.
As AI brings new possibilities for producing text whose origin and reliability is unclear, they should be used in a controlled way. Use may be restricted in teaching in situations where the use would not promote student learning.
At EU level, an AI regulation is under preparation, which will also apply to AI systems in education. In addition, there is an ethical policy on AI and its use, as well as an ethical code for teachers. The University’s guidelines may be further specified in the light of future regulation and technological developments.”
1. Policy paper. Generative artificial intelligence in education. UK: The Department for Education’s (DfE) position on the use of generative artificial intelligence (AI) in higher education. Link to .PDF
2. Do Foundation Model Providers Comply with the Draft EU AI Act? Stanford researchers evaluate foundation model providers like OpenAI and Google for their compliance with proposed EU law on AI. Link
They identified a final list of 12 requirements and scored the 10 models using a 5-point rubric. The methodology for the study can be found here.
5. Zhaki Abdullah, Students, teachers will learn to properly use tools like ChatGPT: Chan Chun Sing (Singapore Education Minister), Straits Times, 12 February 2023, LINK
6. U.S. Department of Education, Office of Educational Technology, Artificial Intelligence and Future of Teaching and Learning: Insights and Recommendations, Washington D.C., May 2023, LINK
7. Japanese schools to be allowed limited use of generative AI, Kyodo News, 22 June 2023, LINK
8. Higher Education Webinar: Implications of Artificial Intelligence in Higher Education, Tuesday, June 27, 2023, Council on Foreign Relations, LINK
9. Novelli, C., Casolari, F., Rotolo, A. et al. Taking AI risks seriously: a new assessment model for the AI Act. AI & Soc (2023). LINK
10. Academic without Borders, Bimonthly Newsletter n°59, July 2023.
11. On the use of artificial intelligence and in particular of ChatGPT in higher education. (UNESCO). Link to .PDF
12. KU Leuven, Responsible use of Generative Artificial Intelligence (GenAI) in research. These guidelines will be updated with new information and insights to keep them in line with the rapidly evolving technology (last updated June 24, 2023). The further integration of teaching and research guidelines is still on the agenda. LINK
14. The Norwegian Consumer Council published a detailed report “Ghost in the machine – Addressing the consumer harms of generative AI” outlining the harms, legal frameworks, and possible ways forward. In conjunction with this launch, the Norwegian Consumer Council and 14 consumer organizations from across the EU and the US demand that policymakers and regulators act. https://storage02.forbrukerradet.no/media/2023/06/generative-ai-rapport-2023.pdf
15. University of Melbourne: Inquiry into the use of generative AI in the education system. Submission to the House Standing Committee on Employment, Education and Training 14 July 2023. https://about.unimelb.edu.au/__data/assets/pdf_file/0032/396446/UoM-Submission-Inquiry-into-Generative-AI-in-Education-FINAL.pdf
16. Cornell University: Generative Artificial Intelligence for Education and Pedagogy. July 18, 2023. https://teaching.cornell.edu/sites/default/files/2023-08/Cornell-GenerativeAIForEducation-Report_2.pdf
17. The University of North Carolina at Chapel Hill: Teaching Use Guidelines for Generative Artificial Intelligence, https://provost.unc.edu/wp-content/uploads/2023/07/Teaching-Generative-AI-Use-Guidance_UNC-AI-Committee-June-15-202348.pdf
18. University of Sydney: 13 March, 2023 Students answer your questions about generative AI – part 2: Ethics, integrity, and the value of university. https://educational-innovation.sydney.edu.au/teaching@sydney/students-answer-your-questions-about-generative-ai-part-2-ethics-integrity-and-the-value-of-university/
19. The Berkman Klein Center for Internet & Society at Harvard University: Exploring the Impacts of Generative AI on the Future of Teaching and Learning https://cyber.harvard.edu/story/2023-06/impacts-generative-ai-teaching-learning
20. Stanford University: Pedagogic strategies for adapting to generative AI chatbots. Eight strategic steps to help instructors adapt to generative AI tools and chatbots. June 19, 2023, Center for Teaching and Learning. https://docs.google.com/document/d/1la8jOJTWfhUdNna5AJYiKgNR2-54MBJswg0gyBcGB-c/edit
21. Council of Europe: Artificial Intelligence and Education. A critical view through the lens of human rights, democracy and the rule of law. November 2022
22. ANU Centre for Learning and Teaching: Chat GPT and other generative AI tools: What ANU academics need to know February 2023. https://teaching.weblogs.anu.edu.au/files/2023/02/Chat_GPT_FAQ-1.pdf
23. Guidance for Generative AI in education and research | UNESCO, 7 September 2023