Fundamental AI Research

Advancing the foundations of Artificial Intelligence through research on safety, trustworthiness, human–AI ecosystems, and multi-agent autonomy, exploring how intelligent systems learn, collaborate, and evolve over time.

AI Safety & Trustworthiness

We develop methods to ensure AI systems are reliable, transparent, and equitable: addressing bias, quantifying uncertainty, ensuring robustness to real-world clinical variation, mitigating security vulnerabilities and demographic leakage, and enabling the safe use of generative AI in radiology and beyond.

[ Placeholder: Diagram — Bias / Robustness / Security ]

Human–AI Ecosystem

We investigate how AI systems can collaborate with each other and with humans, learning across sites, agents, and tasks. This includes developing SheLL (Shared Experience Lifelong Learning), multi-agent reasoning methods, and foundations for autonomous research workflows.

[ Placeholder: Diagram — SheLL / Multi-Agent Learning ]