Award ceremony with Ambassador Dr. Peter Blomeyer
Photo: Sunway University
Our team member, colleague, and friend Naveed Mushtaq passed away last night. His heart did not make it.
He suffered a sudden cardiac arrest a few weeks ago.
May he rest in peace.
We pray for the family, that God will grant them serenity in this very difficult time in their lives.
Our expert James Brusseau (PhD, Philosophy) – Pace University, New York City, USA – wrote an essay documenting the lessons he learned from working on a Z-Inspection® performed on an existing, deployed, and functioning AI medical device.
What a Philosopher Learned at an AI Ethics Evaluation
AI ethics increasingly focuses on converting abstract principles into practical action. This case study documents nine lessons for the conversion learned while performing an ethics evaluation on a deployed AI medical device. The utilized ethical principles were adopted from the Ethics Guidelines for Trustworthy AI, and the conversion into practical insights and recommendations was accomplished by an independent team composed of philosophers, technical and medical experts.
This essay contributes to the conversion of abstract principles into concrete artificial intelligence applications by documenting learnings acquired from a robust ethics evaluation performed on an existing, deployed, and functioning AI medical device.
The ethics evaluation formed part of a larger inspection involving technical and legal aspects of the device that was organized by computer scientist Roberto Zicari (2020). This document is limited to the applied ethics, and to my experience as a philosopher.
These are nine lessons I learned about applying ethics to AI in the real world.
Zicari, Roberto (2020). Z-Inspection: A process to assess trustworthy AI.
We are proud to announce that Dipayan Ghosh, Ph.D., Co-Director of the Digital Platforms & Democracy Project and Senior Fellow at the Shorenstein Center on Media, Politics and Public Policy, Harvard Kennedy School, USA, has joined our Advisory Board!
His research and writing have been cited and published widely, with recent analysis appearing in The New York Times, The Washington Post, The Wall Street Journal, The Atlantic, The Guardian, Foreign Affairs, Harvard Business Review, Foreign Policy, Time, and CNN. He has also appeared on CNN, MSNBC, CNBC, NPR and BBC. A computer scientist by training, Ghosh previously worked on global privacy and public policy issues at Facebook, where he led strategic efforts to address privacy and security issues at the company. Prior to Facebook, Ghosh was a technology and economic policy advisor in the Obama White House where he served across the Office of Science & Technology Policy and the National Economic Council. He focused on issues concerning big data’s impact on consumer privacy and the digital economy. He has also served as a public interest technology fellow at New America, a Washington-based public policy think tank. Ghosh received a Ph.D. in electrical engineering & computer science at Cornell University where he conducted research at the Wireless Intelligent Systems Lab, and completed post-doctoral work at University of California, Berkeley.
We are proud to announce that Professor Margaret Levi, Sara Miller McCune Director of the Center for Advanced Study in the Behavioral Sciences (CASBS) at Stanford University, Professor of Political Science at Stanford University, and Senior Fellow at the Stanford Woods Institute for the Environment, USA, has joined our Advisory Board!
CASBS @ Stanford brings together deep thinkers from diverse disciplines and communities to advance understanding of the full range of human beliefs, behaviors, interactions, and institutions. A leading incubator of human-centered knowledge, CASBS facilitates collaborations across academia, policy, industry, civil society, and government to collectively design a better future.
What is Artificial Intelligence (AI), and how should we handle its use? These are questions being pondered in most EU institutions and large companies today. AI is a loose term covering most methods (self-learning algorithms) used primarily to process collected data and to make decisions based on patterns found in that data. In fact, even today, most people use, and are controlled by, AI without thinking about it. As a software technology, AI is somewhat different from regular software and requires a deeper analysis. What characterizes AI is that it can be intrusive: it constantly needs more data and is continuously retrained to become better at finding patterns in our data.
Arcada participates in Z-Inspection®, an international research network that advocates “a mindful use of AI”, i.e. an ethical use of artificial intelligence. Professor Roberto Zicari at Goethe University has created an international, interdisciplinary network that works to understand how we should handle AI. The network includes both the private sector and academics, who work together to develop ideas for achieving AI use that is compatible with European values. We probably all realize that AI will have a great impact on society, and therefore such systems must be safe and resilient to use, says researcher Magnus Westerlund, who leads Arcada’s Master’s programme in Big Data Analytics and is part of the Z-Inspection® network. The challenge, however, is that technological development is only a small part; the big work lies in helping people use AI, enabling a better experience, and understanding when and to what extent we can trust AI.
Arcada’s researchers look, among other things, at the healthcare sector, where the impact will be great when AI is integrated into its systems. This requires the healthcare sector to implement a digitization process that gives AI access to reliable data, for example through a variety of digital systems and sensors that digitize the physical space. The healthcare sector has a clear challenge in keeping its focus when developing digital processes: it makes small sprints here and there, but the vision for where it is going, and the ability to take us there, is the problem.
– We can, for example, look at Estonia and conclude that a clear improvement would be an open platform that connects different competent actors and defines how we get data. This is of utmost importance for being able to use AI, says Westerlund.
A significant difference from, for example, Estonia has been all the legacy systems that exist in Finland, and the challenge of connecting them in a safe way. Data security for these systems has become one of the major obstacles to driving a faster change process. At the same time, too much emphasis has been placed on purchasing large, fully developed IT systems from abroad. Instead of nurturing a domestic software industry, we have relied on consultant-driven adaptations of these systems to Finnish needs. The question is how these systems will now be able to deliver data to the companies and researchers who wish to use AI to improve decision-making in healthcare.
The future of AI-supported services
According to reports, the European Commission will propose new AI legislation that will impose requirements on AI-supported services. This work is reminiscent of the process behind the General Data Protection Regulation (GDPR). It is likely that, in the future, there will be requirements for AI services that have an impact on both individuals and society. One of the basic requirements should be to perform an ethical evaluation when using AI services.
According to Westerlund, transparency and openness are needed in our systems, so that we understand how an AI decision came about.
– The algorithm may be wrong and that is something we must dare to address. Attempts to automate may reduce costs, but it is important to understand that AI has no understanding of the broader relevance of what it proposes. If we start introducing services that replace people, there will be problems, especially in government decisions or when it comes to life and death, such as in health care and traffic. Our society has not yet had time to adapt to an AI making autonomous decisions – we expect a human being to be legally responsible.
Westerlund points out that, in the medium term, artificial intelligence should be seen as a complement to the human expert, one that helps make decisions and diagnoses better and more reliable.
Concrete proposals for opening up healthcare to AI use include opening up data warehouses, modularizing the software in use, and improving digital identities and data security. Initiatives such as Z-Inspection® can then be used to evaluate how well AI solutions meet ethical requirements.
Research at Arcada aims to solve problems related to:
- Better use of digital tools, in the future including AI to, for example, detect or support certain functions.
- Open architectures for infrastructure.
- Distributed security suitable for modular systems.
- Evaluating the use of AI.
Editor, Elina Sagne-Ollikainen, Master in Health Care and Social Service, Specialist in Research and Development at Arcada.
Article written by Magnus Westerlund, DSc., Principal Lecturer in Information Technology.
Originally published here.
Prof. Sang Kyun Cha, Founding Dean of the Seoul National University Graduate School of Data Science, Founding Director of the SNU Big Data Institute, School of Electrical and Computer Engineering, Seoul National University, South Korea
We are pleased to announce that
Principal Advisor, European Commission Directorate-General for Justice and Consumers,
AI Policy Consultant at the OECD AI Policy Observatory, J.S.M., Stanford University, France.
Head of Artificial Intelligence and Machine Learning
World Economic Forum, USA
Geneviève Fieux Castagnet
SNCF – DIRECTION de l’Ethique Groupe, France
Senior Research Fellow and Head of Global Governance, Regulation,
Innovation and the Digital Economy, CEPS, Belgium.
Professor of Digital Policy, School of Transnational Governance,
European University Institute.
Former Member of the EU High Level Expert Group on AI.
have joined our Advisory Board!