Arcada’s researchers are contributing to the development of an ethical assessment of Artificial Intelligence

25.11.2020 

What is Artificial Intelligence (AI), and how should we handle its use? These are questions being pondered in most EU institutions and large companies today. AI is a loose term covering most methods (self-learning algorithms) used primarily to process collected data and to make decisions based on patterns found in that data. In fact, most people already use, and are steered by, AI without thinking about it. As a software technology, AI is somewhat different from regular software and requires deeper analysis. What characterizes AI is that it can be intrusive: it constantly needs more data and is continuously retrained to become better at finding patterns in our data.

Arcada participates in Z-Inspection®, an international research network that advocates “a mindful use of AI”, i.e. an ethical use of artificial intelligence. Professor Roberto Zicari at Goethe University has created an international, interdisciplinary network that works to understand how we should handle AI. The network brings together the private sector and academics, who work jointly to develop ideas for AI use that is compatible with European values.

– We probably all realize that AI will have a great impact on society, and such systems must therefore be safe and resilient to use, says researcher Magnus Westerlund, who leads Arcada’s Master’s programme in Big Data Analytics and is part of the Z-Inspection® network. The challenge, however, is that technological development is only a small part; the big work lies in helping people use AI, enabling a better experience, and understanding when, and to what extent, we can trust AI.

Arcada’s researchers focus, among other things, on the healthcare sector, where the impact will be great when AI is integrated into its systems. This requires the healthcare sector to carry out a digitalization process that gives AI access to reliable data, for example through a variety of digital systems and sensors that digitize the physical space. The sector has a clear challenge in keeping focus in the development of its digital processes: there are small sprints here and there, but the vision for where it is going, and the ability to take us there, remains the problem.

– We can, for example, look at Estonia and note that a clear improvement would be an open platform that connects different competent actors and defines how we get access to data. This is of utmost importance for being able to use AI, says Westerlund.

A significant difference compared to, for example, Estonia has been all the legacy systems that exist in Finland and the difficulty of connecting them in a safe way. Data security for these systems has become one of the major obstacles to driving a faster change process. At the same time, too much emphasis has been placed on purchasing large, fully developed IT systems from abroad. Instead of nurturing a domestic software industry, we have relied on consultant-driven adaptations of these systems to Finnish needs. The question is how such systems will now be able to deliver data to the companies and researchers who wish to use AI to improve decision-making in healthcare.

The future of AI-supported services

According to reports, the European Commission will propose new AI legislation that imposes requirements on AI-supported services, work reminiscent of the process behind the General Data Protection Regulation (GDPR). It is likely that requirements will in future be placed on AI services that affect both the individual and society. One of the basic requirements should be to perform an ethical evaluation when AI services are used.

According to Westerlund, transparency and openness are needed in our systems, so that we understand how an AI decision came about.

– The algorithm may be wrong, and that is something we must dare to address. Attempts to automate may reduce costs, but it is important to understand that AI has no understanding of the broader relevance of what it proposes. If we start introducing services that replace people, there will be problems, especially in government decisions or when life and death are at stake, such as in healthcare and traffic. Our society has not yet had time to adapt to an AI making autonomous decisions – we still expect a human being to be legally responsible.

Westerlund points out that, in the medium term, artificial intelligence should be seen as a complement to the human expert, one that helps make decisions and diagnoses better and more reliable.

Concrete proposals for opening up healthcare to AI use include opening data warehouses, modularizing the software in use, and improving digital identities and data security. Initiatives such as Z-Inspection® can then be used to evaluate how AI solutions meet the ethical requirements.

The research at Arcada aims to solve problems related to:

  • Better use of digital tools, and in the future AI, to, for example, detect or support certain functions.
  • Open architecture for infrastructure.
  • Distributed security suitable for modular systems.
  • Evaluation of AI use.

Editor: Elina Sagne-Ollikainen, Master in Health Care and Social Service, Specialist in Research and Development at Arcada.

Article written by Magnus Westerlund, DSc., Principal Lecturer in Information Technology.

Originally published here.

Prof. Sang Kyun Cha, Founding Dean of Seoul National University Graduate School of Data Science, joined our Advisory Board!

Prof. Sang Kyun Cha, Founding Dean of Seoul National University Graduate School of Data Science, Founding Director of SNU Big Data Institute, School of Electrical and Computer Engineering, Seoul National University, South Korea.

Paul Nemitz, Laura Galindo-Romero, Kay Firth-Butterfield, Andrea Renda, and Geneviève Fieux Castagnet joined our Advisory Board!

We are pleased to announce that

Paul Nemitz

Principal Advisor, European Commission Directorate-General for Justice and Consumers, 
Brussels, Belgium

Laura Galindo-Romero
AI Policy Consultant at the OECD AI Policy Observatory, J.S.M. Stanford University, France

Kay Firth-Butterfield
Head of Artificial Intelligence and Machine Learning
World Economic Forum, USA

Geneviève Fieux Castagnet
Ethics Officer,
SNCF – Direction de l’Ethique Groupe, France

Andrea Renda 
Senior Research Fellow and Head of Global Governance, Regulation,
Innovation and the Digital Economy, CEPS, Belgium.
Professor of Digital Policy, School of Transnational Governance,
European University Institute.
Former Member of the EU High Level Expert Group on AI.

have joined our Advisory Board!