Prof. Sang Kyun Cha, Founding Dean of Seoul National University Graduate School of Data Science, joined our Advisory Board!

Prof. Sang Kyun Cha, Founding Dean of Seoul National University Graduate School of Data Science, Founding Director of SNU Big Data Institute, School of Electrical and Computer Engineering, Seoul National University, South Korea.

EU Commission Grant: PERISCOPE will investigate the broad socio-economic and behavioral impacts of the COVID-19 pandemic, to make Europe more resilient and prepared for future large-scale risks. We plan to use Z-Inspection® in PERISCOPE to assess trustworthy AI solutions and products.

EU Commission Grant

PRESS RELEASE

PERISCOPE will investigate the broad socio-economic and behavioral impacts of the COVID-19 pandemic, to make Europe more resilient and prepared for future large-scale risks.

The European Commission approved PERISCOPE (PAN-EUROPEAN RESPONSE TO THE IMPACTS OF COVID-19 AND FUTURE PANDEMICS AND EPIDEMICS), a large-scale research project that brings together 32 European institutions and is coordinated by the University of Pavia. PERISCOPE is a Horizon 2020 research project funded with almost 10 million Euros under the Coronavirus Global Response initiative launched in May 2020 by European Commission President Ursula von der Leyen. The goal of PERISCOPE is to shed light on the broad socio-economic and behavioral impacts of COVID-19. A multidisciplinary consortium will bring together experts in all aspects of the current outbreak: clinical and epidemiological; socio-economic and political; statistical and technological.

The partners of the consortium will carry out theoretical and experimental research to contribute to a deeper understanding of the short- and long-term impacts of the pandemic and the measures adopted to contain it. Such research-intensive activities will allow the consortium to propose measures to prepare Europe for future pandemics and epidemics in a relatively short timeline.

The main goals of PERISCOPE are:

  • to gather data on the broad impacts of COVID-19 in order to develop a comprehensive, user-friendly, openly accessible COVID Atlas, which should become a reference tool for researchers and policymakers, and a dynamic source of information to disseminate to the general public;
  • to perform innovative statistical analysis on the collected data, with the help of various methods including machine learning tools;
  • to identify successful practices and approaches adopted at the local level, which could be scaled up at the pan-European level for a better containment of the pandemic and its related socio-economic impacts; and
  • to develop guidance for policymakers at all levels of government, in order to enhance Europe’s preparedness for future similar events and to propose reforms in the multi-level governance of health.

PERISCOPE will also provide guidance on a data-driven approach to healthcare, including the use of machine learning to optimise the management of emergencies. Particular attention will be given to the deployment of trustworthy AI solutions, based on the EU Ethics Guidelines for Trustworthy AI adapted to the healthcare domain, and on the Z-Inspection® process.

Developing guidance on Trustworthy Artificial Intelligence in healthcare.

AI will be used extensively in government and in healthcare over the coming years, as testified by the fact that public services and healthcare were two of the three priority sectors chosen by the EU High Level Expert Group on AI. “Trustworthy Artificial Intelligence” is defined as AI that meets three cumulative requirements: legal compliance, ethical alignment, and socio-technical robustness. The EU Ethics Guidelines for Trustworthy AI observe that any human-centric approach to AI requires compliance with fundamental rights, independently of whether these are explicitly protected by the EU Treaties or by the Charter of Fundamental Rights of the EU. The EU Guidelines identify four key principles (defined as ethical “imperatives”) for Trustworthy AI: respect for human autonomy, prevention of harm, fairness, and explicability.

The Guidelines then go beyond the four imperatives and put forward seven requirements that AI systems should comply with in order to be considered Trustworthy. Perhaps the most innovative feature of the Ethics Guidelines is the attempt to operationalise the requirements through a detailed (self-)assessment list, about to be released in software form in June 2020. Recent studies show that most start-ups in healthcare have limited or non-existent participation and impact in the publicly available scientific literature, and that healthcare products not subjected to peer review but based on internal data generation alone may be problematic and non-trustworthy. This sub-task will develop tailored guidance on trustworthy AI for the healthcare sector, and will thus also contribute to the advancement of EU policy in this field.

PERISCOPE started on 1 November 2020 and will last until 31 October 2023.

Paul Nemitz, Laura Galindo-Romero, Kay Firth-Butterfield, Andrea Renda, and Geneviève Fieux Castagnet joined our Advisory Board!

We are pleased to announce that

Paul Nemitz
Principal Advisor, European Commission Directorate-General for Justice and Consumers,
Brussels, Belgium

Laura Galindo-Romero
AI Policy Consultant at the OECD AI Policy Observatory, J.S.M., Stanford University, France.

Kay Firth-Butterfield
Head of Artificial Intelligence and Machine Learning
World Economic Forum, USA

Geneviève Fieux Castagnet 
Ethics Officer,
SNCF – Direction de l’Ethique Groupe, France

Andrea Renda 
Senior Research Fellow and Head of Global Governance, Regulation,
Innovation and the Digital Economy, CEPS, Belgium.
Professor of Digital Policy, School of Transnational Governance,
European University Institute.
Former Member of the EU High Level Expert Group on AI.

have joined our Advisory Board!

Assessing Trustworthy AI. Best Practice: Machine learning as a supportive tool to recognize cardiac arrest in emergency calls.

We started assessing this new use case in cooperation with Emergency Medical Services Copenhagen and the Department of Clinical Medicine, University of Copenhagen, Denmark.

Emergency medical dispatchers fail to identify approximately 25% of out-of-hospital cardiac arrest cases, and thus lose the opportunity to give the caller instructions in cardiopulmonary resuscitation.

A team led by Stig Nikolaj Blomberg (Emergency Medical Services Copenhagen, and Department of Clinical Medicine, University of Copenhagen, Denmark) examined whether a machine learning framework could recognize out-of-hospital cardiac arrest from audio files of calls to the emergency medical dispatch center.
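The framework examined in that study is not reproduced here; purely as a hedged illustration of the general shape of such a system, the Python sketch below trains a toy classifier on MFCC summaries of call audio, with synthetic recordings standing in for real emergency calls. The library choices (librosa, scikit-learn), the features, and the data are illustrative assumptions, not the assessed product.

```python
# Hypothetical sketch only: a minimal audio-call classifier of the kind described
# above, NOT the framework assessed in this use case. Synthetic noise stands in
# for real emergency-call recordings.
import numpy as np
import librosa                                  # audio feature extraction
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

RNG = np.random.default_rng(0)
SR = 16000  # assumed sample rate of call recordings


def call_features(audio, sr=SR, n_mfcc=20):
    """Summarize one call as a fixed-length vector of MFCC means and standard deviations."""
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])


# Synthetic stand-ins: 40 ten-second "calls" with alternating labels
# (1 = confirmed out-of-hospital cardiac arrest, 0 = other emergency).
calls = [RNG.normal(size=SR * 10).astype(np.float32) for _ in range(40)]
labels = np.array([i % 2 for i in range(40)])

X = np.stack([call_features(c) for c in calls])
clf = LogisticRegression(max_iter=1000).fit(X[:30], labels[:30])

# Sensitivity (recall) is the figure of merit here: a missed arrest is the costly error.
print("sensitivity on held-out calls:", recall_score(labels[30:], clf.predict(X[30:])))
```

In this setting, sensitivity (the share of true cardiac arrests the system flags) matters more than raw accuracy, since a missed arrest means losing the chance to guide the caller through cardiopulmonary resuscitation.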

You can read more here