In March of this year, we started the trustworthiness assessment of the ExplainMe project using the Z-Inspection® methodology.

This assessment is co-led by Jesmin Jahan Tithi, Ph.D., Hanna Sormunen, and Megan Coffee, with the support of Roberto V. Zicari.

Our multi-disciplinary team has already grown to 32 experts, covering disciplines including computer science, explainable artificial intelligence, software engineering, psychiatry, law, ethics, philosophy, and other areas of medicine and the social sciences.

Our aim is the co-design of a Trustworthy Explainable AI system for mental health monitoring using speech.

The motivation behind ExplainMe stems from the observation that most state-of-the-art systems supporting remote mental health monitoring lack transparency in their reasoning and decision-making. At the same time, research confirms that acoustic features extracted from speech serve as valid markers for assessing the severity of manic and depressive symptoms. ExplainMe addresses this gap and aims to design an Explainable AI system for mental health monitoring using speech.

The project “ExplainMe: Explainable Artificial Intelligence for Monitoring Acoustic Features Extracted from Speech” (FENG.02.02-IP.05-0302/23), coordinated by Katarzyna Kaczmarek-Majer, is carried out within the First Team programme of the Foundation for Polish Science (FNP), co-financed by the European Union under the European Funds for Smart Economy 2021–2027 (FENG).
