AI Safety & Trustworthiness

We study when and how AI systems fail, and how to make their behavior more reliable in safety-critical settings.

This includes:

The goal is to design evaluation frameworks and mitigation strategies that go beyond accuracy metrics, placing safety and trust at the center of AI deployment.