
WANTED!

Find collaborators for projects/funding proposals

(or leave your own project idea here)

On these pages we highlight current initiatives that are looking for additional U.S. or Finnish collaborators (for example, to utilize joint funding calls by the Academy of Finland and its U.S. partners).


TRUSTWORTHY AI FOR HEALTHCARE LABORATORY

Tampere University

Project Idea: 

In healthcare, Artificial Intelligence (AI) promises to support healthcare professionals in their decision-making, for example in detecting diseases or predicting risk situations for patients. However, when an AI system's outputs affect a patient's life, its adoption in clinical routine faces barriers related to the trustworthiness and understandability of those outputs. These have become key values for all stakeholders across the lifecycle of AI solutions in the healthcare domain.
The Trustworthy AI for Healthcare Laboratory at Tampere University is a group of researchers who aim to promote Trustworthy AI, and explainable AI in particular, in academia and civil society. Their goal is to enhance AI solutions for healthcare and improve their uptake by the different healthcare stakeholders. The Lab is affiliated with the Z-inspection® initiative (www.z-inspection.org).
We currently participate in research projects and international collaborations in which explainable and trustworthy AI are put into practice together with clinicians in several medical domains, including cardiovascular diseases, emergency patient admission and resource management, and mental well-being in chronic diseases.
The project idea of the lab aims to:
•    Design, develop, and evaluate decision-support models, in healthcare fields of common interest to Tampere University and other FARIA network partners, that foster trustworthy AI principles, with an emphasis on explainable AI and the risk of bias.
•    Raise awareness of trustworthy AI in the FARIA network's health data science community through dissemination actions such as MOOC courses or seminars held within research institutions.
•    Explore further joint collaborations, such as project grant applications or support for research activities, in areas of health data science where trustworthy and explainable AI are essential components of clinical decision support systems.

What we are looking for:

We are looking for research entities or departments that are interested either in investigating Trustworthy AI aspects of their existing clinical decision support systems or in developing ML-based decision-support models to tackle a specific clinical issue.

What you should know:

I work as a postdoctoral research fellow in the Decision Support for Health research group at the Faculty of Medicine and Health Technology. My research focuses on explainable AI and trustworthy AI applied to clinical prediction models. Our research group develops methods that help healthcare professionals, patients, and people who want to stay healthy make sense of often complex health-related data, so that they can make informed decisions leading to action. We specialize in data-driven methods, typically based on combinations of biomedical signal processing, (explainable) AI and ML, and statistical analysis. Our methods are designed to work with real-life ('ugly') data that can be noisy and have artefacts and missing components. A guiding theme for us is that our methods should be well accepted by end users and have an actual, measurable impact and function in real life.
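To make this kind of work a little more concrete, below is a minimal, illustrative sketch (not the lab's actual pipeline) of a data-driven clinical risk model that tolerates missing values and is paired with a simple model-agnostic explanation via permutation importance. Everything in it, including the synthetic features, the feature names, and the choice of scikit-learn, is an assumption made for illustration only.

import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic 'ugly' data: three hypothetical clinical features with noise and
# roughly 20% of the measurements missing. None of this is real patient data.
rng = np.random.default_rng(0)
n = 1000
age = rng.normal(65, 12, n)
sbp = rng.normal(130, 20, n)      # systolic blood pressure
crp = rng.exponential(5, n)       # inflammation marker
X = np.column_stack([age, sbp, crp])
logit = 0.04 * (age - 65) + 0.15 * (crp - 5) + rng.normal(0, 1, n)
y = (logit > 0).astype(int)       # simulated adverse-outcome label
X[rng.random(X.shape) < 0.2] = np.nan   # knock out ~20% of entries

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

# HistGradientBoostingClassifier handles NaNs natively, so no separate
# imputation step is needed for this sketch.
model = HistGradientBoostingClassifier(random_state=0).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")

# Permutation importance as a simple explanation: how much does performance
# drop when each feature is shuffled on the held-out set?
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for name, importance in zip(["age", "systolic_bp", "crp"], result.importances_mean):
    print(f"{name}: {importance:.3f}")

In practice, the models and explanation methods would be chosen together with clinicians and tailored to the specific clinical question; the sketch only shows the general shape of a model that works on noisy, incomplete data and reports which inputs drive its predictions.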

LOOKING FOR COLLABORATORS?

Add your project idea to the site!

