
Promoting research

The initiative supports and encourages interdisciplinary research and, among other things, issues calls for proposals to support research students and researchers. Grant recipients in 2024 include graduate students, research groups, and professional discussion groups.

Below are the winning research projects for the years 2023-2024:

From Responsible Genomics to Responsible Medical AI
The research is being conducted by:

AI is poised to become ubiquitous in clinical practice, with diverse applications across all healthcare sectors. Because responsible AI use in healthcare is still in its nascent stages, there is an urgent need for responsible governance of AI healthcare systems, achieved by mapping the uncertainties, benefits, and risks of AI systems both in general and with regard to specific healthcare sectors.

This project aims to bridge this gap by developing frameworks that ensure the responsible use of AI technologies in medicine, addressing both ethical and legal considerations.


Empathetic Wellbeing: Using AI Responsibly in the Art Museum

The research is being conducted by:

 

The study explores the intersection of art, empathy, and AI in the context of museum spaces, highlighting a recent experiment that showed how gallery environments can strengthen visitors’ sense of trust – as a basis for empathy – towards artworks created using AI. This finding raises the need for a critical discussion about the responsible use of AI in such contexts.

 

Through an artistic-scientific dialogue, the researchers seek to create an art installation that is both an exhibition and a research laboratory – encouraging visitors to confront conflicts surrounding the concept of empathy, while examining the different roles of sound and image in driving human responsiveness and the involvement of artificial intelligence in this process.

 

Understanding the connection between empathy and artificial intelligence is critical, especially in light of previous studies showing that artificial intelligence may even outperform human doctors in empathetic communication; such an understanding can contribute to deeper insight in diverse fields.

Evaluation of Language Models and ICU Nurses for Clinical Decision Support

The research is being conducted by:

Advances in artificial intelligence have enabled new clinical decision support tools through natural language processing models. However, there are also important concerns regarding the safe and responsible integration of AI in high-acuity healthcare settings. 

This research compares the decision-making capabilities of AI language models and ICU nurses in common critical care scenarios, focusing on the safe and responsible integration of AI into critical care environments. The study will examine key attributes such as transparency, robustness, and fairness in critical care decision-making.

This approach aims to provide insights into AI's potential applications and limitations in intensive care settings.


"Advancing Responsible AI in Medicine to Design Trusted Decision Support Systems"

The research is being conducted by:

My research is centered on Explainable AI (XAI), wherein I aim to establish mathematical foundations for elucidating model predictions. The overarching goal is to discern the limitations and possibilities associated with explanations generated by AI models.

My theoretical research delves into the interpretability of AI models, exploring feature importance scores with a focus on local (e.g., an individual patient's diagnosis) and global (e.g., a gene's impact on disease) interpretations.
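To illustrate the local/global distinction in general terms, the sketch below contrasts a population-level ("global") permutation importance with a crude per-patient ("local") attribution obtained by replacing one feature of a single record with its population mean. This is a generic example on synthetic data; the model, dataset, and perturbation scheme are assumptions made for illustration, not the project's method.

# Minimal sketch (not the project's actual method): contrasting a global
# feature importance score with a local, per-patient explanation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for patient records: rows are patients, columns are features.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Global view: how much each feature matters across the whole population,
# estimated by permuting it and measuring the drop in accuracy.
global_imp = permutation_importance(model, X, y, n_repeats=10, random_state=0)
print("global importance:", np.round(global_imp.importances_mean, 3))

# Local view: how the prediction for one patient changes when a single
# feature is replaced by its population mean (a crude per-patient attribution).
patient = X[0:1]
base = model.predict_proba(patient)[0, 1]
for j in range(X.shape[1]):
    perturbed = patient.copy()
    perturbed[0, j] = X[:, j].mean()
    print(f"feature {j}: local effect = {base - model.predict_proba(perturbed)[0, 1]:.3f}")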

 

"I distinguish between explaining data, akin to a scientist drawing conclusions from encoded information, and explaining the model, resembling an engineer's vigilance for system reliability, crucial in healthcare contexts".

 

On the practical front, my research involves studying the risks of induction during childbirth, specifically for women opting for voluntary induction. Utilizing causality and statistical methods, I aim to unravel the nuanced connections between induction procedures and birth success, contributing to both the understanding of medical practices and potential policy implications for more informed and patient-centric birthing processes.


“Usable, Secure, Privacy-Preserving Genomic Data Sharing for AI”

The research is being conducted by:

The artificial intelligence era has transformed many fields, including medicine, where genomic data, among other sources, is employed to offer personalized and effective medical treatments. Still, privacy remains a major concern that may hinder the development and deployment of such approaches: to become accurate, AI models need to be trained on a multitude of real patient records, which may leak from the resulting systems and models.

 

"Our research aims to close these gaps by offering a system to accurately measure the empirical privacy risk of genomic AI models, under different settings, and visualize these in an accessible, usable manner."

 

This research will also help us better understand how scientists and health professionals perceive privacy risk, and identify defenses that best address privacy concerns while maintaining model utility. Ultimately, the study will give them a holistic view of the benefits and risks of genomic AI models, allowing them to advance science and medicine effectively while guarding individuals’ privacy.
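As a rough illustration of what measuring empirical privacy risk can mean in practice, the sketch below runs a toy membership-inference test: it checks how well a model's confidence separates records it was trained on from held-out records. This is an assumption about the general approach, not a description of the project's system; the data and model here are synthetic stand-ins.

# Illustrative sketch only: a toy membership-inference measurement of
# empirical privacy risk on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for genomic records (rows = individuals, columns = variants).
X, y = make_classification(n_samples=2000, n_features=50, random_state=1)
X_train, X_out, y_train, y_out = train_test_split(X, y, test_size=0.5, random_state=1)

model = GradientBoostingClassifier(random_state=1).fit(X_train, y_train)

def confidence(model, X, y):
    # Probability the model assigns to the true label; training members tend to score higher.
    proba = model.predict_proba(X)
    return proba[np.arange(len(y)), y]

scores = np.concatenate([confidence(model, X_train, y_train),
                         confidence(model, X_out, y_out)])
is_member = np.concatenate([np.ones(len(y_train)), np.zeros(len(y_out))])

# AUC of 0.5 means an attacker cannot distinguish members from non-members;
# values close to 1.0 indicate substantial leakage of training records.
print("membership-inference AUC:", round(roc_auc_score(is_member, scores), 3))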


“Approaching Explainable Data in Medical Studies”

The research is being conducted by:

The intersection of AI and medicine has opened exciting possibilities and raised many questions, particularly around responsible AI. Our lab focuses on improving feature importance scores to better explain real-world medical data, where current scores often struggle.

My research centers on refining the approach to feature importance through the exploration and application of Marginal Contribution Feature Importance (MCI). Starting from a set of axioms that a feature importance score for explaining data should satisfy, MCI emerges as the single score that encapsulates all of the desired properties.

This approach not only addresses the challenges posed by correlated features but also ensures a more accurate and reliable understanding of how specific properties contribute to medical outcomes. This line of research, with MCI at its core, advances responsible AI in medicine and is poised to enhance the transparency, interpretability, and ethical deployment of AI models in healthcare settings.
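For readers unfamiliar with the idea, the brute-force sketch below illustrates a marginal-contribution style score in the spirit of MCI: a feature's importance is taken as its largest marginal gain in predictive value over any subset of the remaining features, which is what prevents a correlated duplicate of an informative feature from erasing its score. This is a simplified toy version on synthetic data, not the lab's implementation; the evaluation function and dataset are assumptions for the example.

# Simplified sketch of a marginal-contribution style feature importance
# (a toy reading of MCI; brute force, small feature sets only).
from itertools import combinations
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Small synthetic task that includes a redundant (correlated) feature.
X, y = make_classification(n_samples=400, n_features=4, n_informative=2,
                           n_redundant=1, random_state=2)
features = list(range(X.shape[1]))

def value(subset):
    # Evaluation function v(S): predictive value of using only the features in S.
    if not subset:
        return 0.5  # chance level for a balanced binary task
    model = LogisticRegression(max_iter=1000)
    return cross_val_score(model, X[:, list(subset)], y, cv=5).mean()

def mci(i):
    # Importance of feature i: its largest marginal gain over any subset of the others.
    others = [f for f in features if f != i]
    gains = []
    for r in range(len(others) + 1):
        for S in combinations(others, r):
            gains.append(value(set(S) | {i}) - value(set(S)))
    return max(gains)

for i in features:
    print(f"feature {i}: MCI-style score = {mci(i):.3f}")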

 

"As we embark on these future studies, the goal is clear—to pioneer methodologies that not only unravel the complexities of AI in medicine but also fortify its responsible integration for the betterment of patient outcomes".


Thanks for applying!

Contact us

Want to join our activities?
Would you like to connect with researchers, industry professionals, and regulators, or find out more about research grants and other activities?

Contact us!

Tel Aviv University, Haim Levanon 30, Ramat Aviv, Tel Aviv 69978

