Professor Sibrand Poppema, Member of the Z-Inspection® Advisory Board, Receives German Order of Merit

Award ceremony with Ambassador Dr. Peter Blomeyer
Photo: Sunway University
Naveed Mushtaq. May he rest in peace.

Our team member, colleague, and friend Naveed Mushtaq passed away last night. His heart did not make it.
He suffered a sudden cardiac arrest a few weeks ago.
May he rest in peace.
We pray for his family, that God will grant them serenity in this very difficult time in their lives.
What a Philosopher Learned at an AI Ethics Evaluation
Our expert James Brusseau (PhD, Philosophy) – Pace University, New York City, USA – wrote an essay documenting the lessons he learned from a Z-Inspection® assessment performed on an existing, deployed, and functioning AI medical device.

AI ethics increasingly focuses on converting abstract principles into practical action. This case study documents nine lessons for the conversion learned while performing an ethics evaluation on a deployed AI medical device. The utilized ethical principles were adopted from the Ethics Guidelines for Trustworthy AI, and the conversion into practical insights and recommendations was accomplished by an independent team composed of philosophers, technical and medical experts.
This essay contributes to the conversion of abstract principles into concrete artificial intelligence applications by documenting learnings acquired from a robust ethics evaluation performed on an existing, deployed, and functioning AI medical device.
The ethics evaluation formed part of a larger inspection, organized by computer scientist Roberto Zicari (2020), that also covered technical and legal aspects of the device. This document is limited to the applied ethics, and to his experience as a philosopher.
These are nine lessons I learned about applying ethics to AI in the real world.
Zicari, Roberto (2020). Z-Inspection®: A Process to Assess Trustworthy AI.
Dipayan Ghosh, Ph.D., Co-Director of the Digital Platforms & Democracy Project and Senior Fellow at the Shorenstein Center on Media, Politics and Public Policy, Harvard Kennedy School, joined our Advisory Board!
We are proud to announce that Dipayan Ghosh, Ph.D., Co-Director of the Digital Platforms & Democracy Project and Senior Fellow at the Shorenstein Center on Media, Politics and Public Policy, Harvard Kennedy School, USA, has joined our Advisory Board!
His research and writing have been cited and published widely, with recent analysis appearing in The New York Times, The Washington Post, The Wall Street Journal, The Atlantic, The Guardian, Foreign Affairs, Harvard Business Review, Foreign Policy, Time, and CNN. He has also appeared on CNN, MSNBC, CNBC, NPR and BBC. A computer scientist by training, Ghosh previously worked on global privacy and public policy issues at Facebook, where he led strategic efforts to address privacy and security issues at the company. Prior to Facebook, Ghosh was a technology and economic policy advisor in the Obama White House where he served across the Office of Science & Technology Policy and the National Economic Council. He focused on issues concerning big data’s impact on consumer privacy and the digital economy. He has also served as a public interest technology fellow at New America, a Washington-based public policy think tank. Ghosh received a Ph.D. in electrical engineering & computer science at Cornell University where he conducted research at the Wireless Intelligent Systems Lab, and completed post-doctoral work at University of California, Berkeley.
Professor Margaret Levi, Sara Miller McCune Director of the Center for Advanced Study in the Behavioral Sciences (CASBS) at Stanford University joined our Advisory Board!

We are proud to announce that Professor Margaret Levi – Sara Miller McCune Director of the Center for Advanced Study in the Behavioral Sciences (CASBS) at Stanford University, Professor of Political Science at Stanford University, and Senior Fellow at the Stanford Woods Institute for the Environment, USA – has joined our Advisory Board!
CASBS @ Stanford brings together deep thinkers from diverse disciplines and communities to advance understanding of the full range of human beliefs, behaviors, interactions, and institutions. A leading incubator of human-centered knowledge, CASBS facilitates collaborations across academia, policy, industry, civil society, and government to collectively design a better future.
Arcada’s researchers are contributing to the development of an ethical assessment of Artificial Intelligence
25.11.2020
What is Artificial Intelligence (AI), and how should we handle its use? These are issues being pondered in most EU institutions and large companies today. AI is a loose term covering most methods (self-learning algorithms) used primarily to process collected data and to make decisions based on patterns found in that data. In fact, even today, most people use, and are influenced by, AI without thinking about it. AI as a software technology is somewhat different from regular software and requires a deeper analysis. What characterizes AI is that it can be intrusive: it constantly needs more data and is continuously retrained to become better at finding patterns in our data.
Arcada participates in Z-Inspection®, an international research network that advocates “a mindful use of AI”, i.e. an ethical use of artificial intelligence. Professor Roberto Zicari at Goethe University has created an international, interdisciplinary network that works to understand how we should handle AI. The network includes both the private sector and academics, who work together to establish ideas for how we achieve AI use that is compatible with European values.
– We probably all realize that AI will have a great impact on society, and therefore such systems must be safe and resilient to use, says researcher Magnus Westerlund, who leads Arcada’s Master’s programme in Big Data Analytics and is part of the Z-Inspection® network. The challenge, however, is that technological development is only a small part; the big work lies in helping people use AI, enabling a better experience, and understanding when and to what extent we can trust AI.
Arcada’s researchers look, among other things, at the healthcare sector, where the impact will be great when AI is integrated into its systems. This requires the healthcare sector to implement a digitization process that gives AI access to reliable data, which can include, for example, a variety of digital systems and sensors that digitize the physical space. The healthcare sector has a clear challenge in keeping focus when it comes to developing its digital processes: it makes small sprints here and there, but the vision for where it is going, and the ability to take us there, remains the problem.
– We can, for example, look at Estonia and conclude that a clear improvement would be an open platform that connects different competent actors and defines how we get data. This is of utmost importance for being able to use AI, says Westerlund.
A significant difference from Estonia, for example, has been all the legacy systems that exist in Finland and the challenge of connecting them in a safe way. Data security for these systems has become one of the major obstacles to driving a faster change process. At the same time, too much emphasis has been placed on purchasing large, fully developed IT systems from abroad. Instead of nurturing a domestic software industry, we have relied on consultant-driven adaptations of these systems to Finnish needs. The question is how such systems will now be able to deliver data to the various companies and researchers who wish to use AI to improve decision-making in healthcare.
The future of AI-supported services
According to reports, the European Commission will propose new AI legislation that will impose requirements on AI-supported services. This work is reminiscent of the process behind the General Data Protection Regulation (GDPR). It is likely that in the future there will be requirements for AI services that have an impact on both the individual and society. One of the basic requirements should be to perform an ethical evaluation when using AI services.
According to Westerlund, transparency and openness are needed in our systems, so that we understand how an AI decision came about.
– The algorithm may be wrong and that is something we must dare to address. Attempts to automate may reduce costs, but it is important to understand that AI has no understanding of the broader relevance of what it proposes. If we start introducing services that replace people, there will be problems, especially in government decisions or when it comes to life and death, such as in health care and traffic. Our society has not yet had time to adapt to an AI making autonomous decisions – we expect a human being to be legally responsible.
Westerlund points out that artificial intelligence should, in the medium term, be seen as a complement to the human expert, helping to make decisions and diagnoses better and more reliable.
Concrete proposals for opening up healthcare to AI use are to open up data warehouses, modularize the software used, and improve digital identities and data security. Initiatives such as Z-Inspection® can then be used to evaluate how well AI solutions meet ethical requirements.
The research at Arcada aims to solve problems related to:
- Better use of digital technology, in the future including AI to, for example, detect or support certain functions.
- Open architecture for infrastructure.
- Distributed security suitable for modular systems.
- Evaluating the use of AI.
Editor: Elina Sagne-Ollikainen, Master in Health Care and Social Services, Specialist in Research and Development at Arcada.
Article written by Magnus Westerlund, DSc., Principal Lecturer in Information Technology.
Originally published here.
Presentation on Z-Inspection® at the [AI4EU] Trustworthy AI workshop. November 13, 2020
Roberto V. Zicari gave a 30-minute presentation at the [AI4EU] Trustworthy AI workshop on our research on Z-Inspection®, a process to assess Trustworthy AI.

YouTube: Link to when the presentation starts
PERISCOPE Project Started on November 1, 2020. Pan-European Response to the ImpactS of COVID-19 and future Pandemics and Epidemics
PERISCOPE is a Horizon 2020 research project that was funded with almost 10 million Euros under the Coronavirus Global Response initiative launched in May 2020 by the European Commission President Ursula von der Leyen.
PERISCOPE started on 1 November 2020 and will last until 31 October 2023.
The impact of the COVID-19 pandemic has been deep and wide. In spite of unprecedented efforts to understand the COVID-19 disease and its causative virus SARS-CoV-2, months after the emergence of the first local case in Europe (San Matteo hospital, Pavia, 21 February 2020) significant knowledge gaps persist. While social and natural scientists have managed to develop new research and shed light on the dynamics of the outbreak and the most effective possible containment measures, governments have increasingly faced the need to adopt urgent decisions. Against this background, PERISCOPE plans to contribute to a dramatically deeper understanding of the dynamics of the outbreak by means of intense multidisciplinary research, both theoretical and experimental, and by considering different viewpoints: clinical and epidemiological; humanistic and psychological; socio-economic and political; statistical and technological.
The overarching objectives of PERISCOPE are to map and analyse the unintended impacts of the COVID-19 outbreak; develop solutions and guidance for policymakers and health authorities on how to mitigate the impact of the outbreak; enhance Europe’s preparedness for future similar events; and reflect on the future multi-level governance in the health as well as other domains affected by the outbreak.
In pursuing this objective, PERISCOPE sheds new light on the unintended and indirect consequences of the outbreak and the related government responses, with the intention to preserve evidence-based policymaking by collecting an unprecedented amount of data and information on the social, economic and behavioural consequences of the current pandemic. At the same time, PERISCOPE will produce new information on the conditions that led to the impact of the pandemic, the differences in “policy mix” adopted at the national level in EU and associated countries, and the behavioural impacts of both the outbreak and the policies adopted.
About the project PERISCOPE
The project runs from November 1, 2020, until October 31, 2023. It brings together economists, engineers, journalists, communication experts, lawyers, political scientists, experts in regulatory governance, mathematicians, policymakers, health authorities, physicians, social psychologists, sociologists, statisticians, experts in ethics and new technologies and representatives of patients’ organizations. The 32 European partners come from Italy, Belgium, Austria, Czechia, France, Germany, Luxembourg, the Netherlands, Poland, Portugal, Romania, Serbia, Spain, Sweden, Switzerland, and the UK.
About Horizon 2020
Horizon 2020 is the EU’s main research and innovation program, which has nearly EUR 80 billion of funding for the period 2014–2020. It ensures that research and innovation policies in a wide range of areas are implemented, securing Europe’s global competitiveness. It has an emphasis on science, industrial leadership and initiatives that tackle societal challenges.
Read the Press Release.