Arcada’s researchers are contributing to the development of an ethical assessment of Artificial Intelligence

25.11.2020 

What is Artificial Intelligence (AI) and how should we handle the use of AI? These are issues that most EU institutions and large companies are pondering today. AI is a loose term covering most methods (self-learning algorithms) used primarily to process collected data and to make decisions based on patterns found in that data. In fact, most people already use, and are steered by, an AI without thinking about it. As a software technology, AI is somewhat different from regular software and requires deeper analysis. What characterizes AI is that it can be intrusive: it constantly needs more data and is continuously retrained to become better at finding patterns in our data.
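
As a rough illustration of what "continuously retrained" means, the sketch below (hypothetical data and model choice, not a description of any particular system) shows a simple classifier being refit each time a new batch of collected data arrives, so that its decisions follow whatever patterns that data contains:

```python
# Minimal sketch, hypothetical data: a classifier that is updated on each
# new batch of collected data and then used to make decisions.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier()                      # a simple, incrementally trainable model
classes = np.array([0, 1])

for day in range(30):                        # pretend each iteration is one day of new data
    X_new = rng.normal(size=(100, 5))        # 100 new observations, 5 features each
    y_new = (X_new[:, 0] > 0).astype(int)    # the "pattern" hidden in this batch
    model.partial_fit(X_new, y_new, classes=classes)  # retrain on the new batch only

print(model.predict(rng.normal(size=(3, 5))))  # decisions based on the learned patterns
```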

Arcada participates in Z-Inspection®, an international research network that advocates “a mindful use of AI”, i.e. an ethical use of artificial intelligence. Professor Roberto Zicari at Goethe University has created an international, interdisciplinary network that works to understand how we should handle AI. The network brings together the private sector and academics to establish ideas for how AI can be used in a way that is compatible with European values. We probably all realize that AI will have a great impact on society, and such systems must therefore be safe and resilient to use, says researcher Magnus Westerlund, who leads Arcada’s Master’s programme in Big Data Analytics and is part of the Z-Inspection® network. The challenge, however, is that technological development is only a small part; the bigger task lies in helping people use AI, enabling a better experience, and understanding when and to what extent we can trust AI.

Arcada’s researchers look, among other things, at the healthcare sector, where the impact will be significant when AI is integrated into its systems. This requires healthcare to carry out a digitization process that gives AI access to reliable data, which can include, for example, a variety of digital systems and sensors that digitize the physical space. The healthcare sector has a clear challenge in keeping focus in the development of these digital processes: it makes small sprints here and there, but the vision of where it is heading, and the ability to take us there, is the problem.

– We can, for example, look at Estonia and conclude that a clear improvement would be an open platform that connects the various competent actors and defines how data is accessed. This is of utmost importance for being able to use AI, says Westerlund.

A significant difference compared with Estonia, for example, has been all the legacy systems that exist in Finland and the challenge of connecting them in a secure way. Data security for these systems has become one of the major obstacles to driving a faster change process. At the same time, too much emphasis has been placed on purchasing large, fully developed IT systems from abroad. Instead of nurturing a domestic software industry, we have relied on consultant-driven adaptations of these systems to Finnish needs. The question is how these systems will now be able to deliver data to the various companies and researchers who wish to use AI to improve decision-making in healthcare.

The future of AI-supported services

According to reports, the European Commission will propose new AI legislation that will impose requirements on AI-supported services. This work is reminiscent of the process behind the General Data Protection Regulation (GDPR). It is likely that requirements will in future be placed on AI services that have an impact on both individuals and society. One of the basic requirements should be to perform an ethical evaluation when AI services are used.

According to Westerlund, transparency and openness are needed in our systems, so that we understand how an AI decision came about.

– The algorithm may be wrong, and that is something we must dare to address. Attempts to automate may reduce costs, but it is important to understand that AI has no understanding of the broader relevance of what it proposes. If we start introducing services that replace people, there will be problems, especially in government decisions or when it comes to life and death, such as in healthcare and traffic. Our society has not yet had time to adapt to an AI making autonomous decisions – we expect a human being to be legally responsible.

Westerlund points out that, in the medium term, artificial intelligence should be seen as a complement to the human expert, helping to make decisions and diagnoses better and more reliable.

Concrete proposals for opening up healthcare to AI use include opening up data warehouses, modularizing the software in use, and improving digital identities and data security. Initiatives such as Z-Inspection® can then be used to evaluate how AI solutions meet ethical requirements.

The research at Arcada aims to solve problems related to:

  • Better use of digital tools, and in the future the inclusion of AI to, for example, detect or support certain functions.
  • Open architecture for infrastructure.
  • Distributed security suitable for modular systems.
  • Evaluation of AI use.

Editor, Elina Sagne-Ollikainen, Master in Health Care and Social Service, Specialist in Research and Development at Arcada.

Article written by Magnus Westerlund, DSc., Principal Lecturer in Information Technology.

Originally published here.

Presentation on Z-Inspection® at the [AI4EU] Trustworthy AI workshop. November 13, 2020

Roberto V. Zicari gave a 30-minute presentation at the [AI4EU] Trustworthy AI workshop on our research on Z-Inspection®, a process to assess Trustworthy AI.

YouTube: Link to when the presentation starts 

PERISCOPE Project Started on November 1, 2020. Pan-European Response to the ImpactS of COVID-19 and future Pandemics and Epidemics

PERISCOPE is a Horizon 2020 research project that was funded with almost 10 million Euros under the Coronavirus Global Response initiative launched in May 2020 by the European Commission President Ursula von der Leyen. 

PERISCOPE started on 1 November 2020 and will last until 31 October 2023.

The impact of the COVID-19 pandemic has been deep and wide. In spite of unprecedented efforts to understand the COVID-19 disease and its causative virus SARS-CoV-2, months after the emergence of the first local case in Europe (San Matteo hospital, Pavia, 21st February 2020) significant knowledge gaps persist. While social and natural scientists managed to develop new research and shed light on the dynamics of the outbreak and the most effective possible containment measures, governments have been increasingly faced with the need to adopt urgent decisions. Against this background, PERISCOPE plans to contribute to a dramatically deeper understanding of the dynamics of the outbreak by means of intense multi-disciplinary research, both theoretical and experimental, and the consideration of different viewpoints: clinical and epidemiological; humanistic and psychological; socio-economic and political; statistical and technological.

The overarching objectives of PERISCOPE are to map and analyse the unintended impacts of the COVID-19 outbreak; develop solutions and guidance for policymakers and health authorities on how to mitigate the impact of the outbreak; enhance Europe’s preparedness for future similar events; and reflect on the future multi-level governance in health as well as in other domains affected by the outbreak.

In pursuing this objective, PERISCOPE sheds new light on the unintended and indirect consequences of the outbreak and the related government responses, with the intention to preserve evidence-based policymaking by collecting an unprecedented amount of data and information on the social, economic and behavioural consequences of the current pandemic. At the same time, PERISCOPE will produce new information on the conditions that led to the impact of the pandemic, the differences in “policy mix” adopted at the national level in EU and associated countries, and the behavioural impacts of both the outbreak and the policies adopted.

About the project PERISCOPE

The project runs from November 1, 2020, until October 31, 2023. It brings together economists, engineers, journalists, communication experts, lawyers, political scientists, experts in regulatory governance, mathematicians, policymakers, health authorities, physicians, social psychologists, sociologists, statisticians, experts in ethics and new technologies and representatives of patients’ organizations. The 32 European partners come from Italy, Belgium, Austria, Czechia, France, Germany, Luxembourg, the Netherlands, Poland, Portugal, Romania, Serbia, Spain, Sweden, Switzerland, and the UK.

About Horizon 2020

Horizon 2020 is the EU’s main research and innovation program, which has nearly EUR 80 billion of funding for the period 2014–2020. It ensures that research and innovation policies in a wide range of areas are implemented, securing Europe’s global competitiveness. It has an emphasis on science, industrial leadership and initiatives that tackle societal challenges.  

Read the Press Release.

Prof. Sang Kyun Cha, Founding Dean of Seoul National University Graduate School of Data Science, joined our Advisory Board!

Prof. Sang Kyun Cha, Founding Dean of Seoul National University Graduate School of Data Science; Founding Director of SNU Big Data Institute; School of Electrical and Computer Engineering, Seoul National University, South Korea.

EU Commission Grant: PERISCOPE will investigate the broad socio-economic and behavioral impacts of the COVID-19 pandemic, to make Europe more resilient and prepared for future large-scale risks. We plan to use Z-Inspection® in PERISCOPE to assess trustworthy AI solutions and products.

PRESS RELEASE

PERISCOPE will investigate the broad socio-economic and behavioral impacts of the COVID-19 pandemic, to make Europe more resilient and prepared for future large-scale risks.

The European Commission approved PERISCOPE (PAN-EUROPEAN RESPONSE TO THE IMPACTS OF COVID-19 AND FUTURE PANDEMICS AND EPIDEMICS), a large-scale research project that brings together 32 European institutions and is coordinated by the University of Pavia. PERISCOPE is a Horizon 2020 research project that was funded with almost 10 million Euros under the Coronavirus Global Response initiative launched in May 2020 by the European Commission President Ursula von der Leyen. The goal of PERISCOPE is to shed light on the broad socio-economic and behavioral impacts of COVID-19. A multidisciplinary consortium will bring together experts in all aspects of the current outbreak: clinical and epidemiological; socio-economic and political; statistical and technological.

The partners of the consortium will carry out theoretical and experimental research to contribute to a deeper understanding of the short- and long-term impacts of the pandemic and the measures adopted to contain it. Such research-intensive activities will allow the consortium to propose measures to prepare Europe for future pandemics and epidemics in a relatively short timeline.

The main goals of PERISCOPE are:

  • to gather data on the broad impacts of COVID-19 in order to develop a comprehensive, user-friendly, openly accessible COVID Atlas, which should become a reference tool for researchers and policymakers, and a dynamic source of information to disseminate to the general public;
  • to perform innovative statistical analysis on the collected data, with the help of various methods including machine learning tools;
  • to identify successful practices and approaches adopted at the local level, which could be scaled up at the pan-European level for a better containment of the pandemic and its related socio-economic impacts; and
  • to develop guidance for policymakers at all levels of government, in order to enhance Europe’s preparedness for future similar events and to propose reforms in the multi-level governance of health.

PERISCOPE will also provide guidance on a data-driven approach to healthcare, including the use of machine learning to optimise the management of emergencies. Particular attention will be given to the deployment of trustworthy AI solutions, based on the EU Ethics Guidelines for Trustworthy AI adapted to the healthcare domain, and on the Z-Inspection® process.

Developing guidance on Trustworthy Artificial Intelligence in healthcare.

AI will be used extensively in government and in healthcare over the coming years, as evidenced by the fact that public services and healthcare were two of the three priority sectors chosen by the EU High Level Expert Group on AI. “Trustworthy Artificial Intelligence” is defined as AI that meets three cumulative requirements: legal compliance, ethical alignment, and socio-technical robustness. The EU Ethics Guidelines for Trustworthy AI observe that any human-centric approach to AI requires compliance with fundamental rights, independently of whether these are explicitly protected by EU Treaties or by the Charter of Fundamental Rights of the EU. The EU Guidelines identify four key principles (defined as ethical “imperatives”) for Trustworthy AI: respect for human autonomy, prevention of harm, fairness, and explicability.

The Guidelines then go beyond the four imperatives and put forward seven requirements that AI systems should comply with in order to be considered Trustworthy. Perhaps the most innovative feature of the Ethics Guidelines is the attempt to operationalise the requirements through a detailed (self-)assessment list, about to be released in software form in June 2020. Recent studies show that most start-ups in healthcare have limited or non-existent participation and impact in the publicly available scientific literature, and that healthcare products not subjected to peer review but based on internal data generation alone may be problematic and non-trustworthy. This sub-task will develop tailored guidance on trustworthy AI for the healthcare sector, and will thus also contribute to the advancement of EU policy in this field.

PERISCOPE started on 1 November 2020 and will last until 31 October 2023.

Paul NEMITZ, Laura Galindo-Romero, Kay Firth-Butterfield, Andrea Renda, and Geneviève Fieux Castagnet joined our Advisory Board!

We are pleased to announce that

Paul NEMITZ 

Principal Advisor, European Commission Directorate-General for Justice and Consumers, 
Brussels, Belgium

Laura Galindo-Romero,
AI Policy Consultant at OECD AI Policy Observatory, J.S.M Stanford University, France.

Kay Firth-Butterfield
Head of Artificial Intelligence and Machine Learning
World Economic Forum, USA

Geneviève Fieux Castagnet 
Ethics Officer,
SNCF – DIRECTION de l’Ethique Groupe, France

Andrea Renda 
Senior Research Fellow and Head of Global Governance, Regulation,
Innovation and the Digital Economy, CEPS, Belgium.
Professor of Digital Policy, School of Transnational Governance,
European University Institute.
Former Member of the EU High Level Expert Group on AI.

have joined our Advisory Board!

Assessing Trustworthy AI. Best Practice: Machine learning as a supportive tool to recognize cardiac arrest in emergency calls.

We started assessing this new use case in cooperation with Emergency Medical Services Copenhagen and the Department of Clinical Medicine, University of Copenhagen, Denmark.

Emergency medical dispatchers fail to identify approximately 25% of cases of out-of-hospital cardiac arrest, thus losing the opportunity to provide the caller with instructions in cardiopulmonary resuscitation.

A team led by Stig Nikolaj Blomberg (Emergency Medical Services Copenhagen, and Department of Clinical Medicine, University of Copenhagen, Denmark) examined whether a machine learning framework could recognize out-of-hospital cardiac arrest from audio files of calls to the emergency medical dispatch center.
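
As a purely illustrative sketch of how such a supportive tool could be structured (this is not the framework evaluated in the study; the data, features, and model below are hypothetical), one could imagine a classifier trained on transcribed call text labelled with confirmed cardiac-arrest outcomes:

```python
# Illustrative sketch only -- not the system evaluated in the study.
# Hypothetical transcripts labelled 1 if the case was later confirmed as
# out-of-hospital cardiac arrest, else 0. A real system works on streaming
# audio and far richer features than word n-grams.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

transcripts = [
    "he is not breathing and does not respond",
    "she collapsed and I cannot feel a pulse",
    "my son has a high fever and a rash",
    "caller reports a broken arm after a fall",
]
labels = [1, 1, 0, 0]

# Text features plus a linear classifier form the prediction pipeline.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(transcripts, labels)

# A dispatcher-support tool would flag calls whose predicted probability of
# cardiac arrest exceeds a threshold, prompting CPR instructions to the caller.
print(clf.predict_proba(["he suddenly stopped breathing"])[0][1])
```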

You can read more here