Pushing the boundaries of machine learning, uncertainty quantification, and explainable AI.
Developing neural networks that quantify uncertainty in predictions, crucial for high-stakes applications in healthcare and autonomous systems.
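A minimal sketch of one common technique in this space, Monte Carlo dropout, where dropout stays active at inference time and the spread across stochastic forward passes serves as an uncertainty estimate. The network and layer sizes below are illustrative placeholders, not any specific model of ours:

```python
import torch
import torch.nn as nn

# Illustrative network; layer sizes are placeholder assumptions.
model = nn.Sequential(
    nn.Linear(16, 64), nn.ReLU(), nn.Dropout(p=0.2), nn.Linear(64, 1)
)

def mc_dropout_predict(model, x, n_samples=50):
    """Estimate predictive mean and uncertainty by keeping dropout
    stochastic at inference time and averaging many forward passes."""
    model.train()  # keep dropout layers active
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.std(dim=0)  # prediction, uncertainty

mean, std = mc_dropout_predict(model, torch.randn(8, 16))
```

A high standard deviation flags inputs where the prediction should not be trusted on its own, which is exactly the signal high-stakes deployments need.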
Creating interpretable models that provide insights into AI decision-making processes, ensuring transparency and trust.
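One simple post-hoc interpretability technique is input-gradient saliency: the gradient of a class score with respect to the input shows which features drove the decision. The classifier below is a toy stand-in used only for illustration:

```python
import torch
import torch.nn as nn

model = nn.Linear(16, 3)  # placeholder classifier with 3 classes

def input_saliency(model, x, target_class):
    """Post-hoc attribution: the gradient of the target logit w.r.t.
    the input highlights which features influenced the decision."""
    x = x.clone().requires_grad_(True)
    model(x)[0, target_class].backward()
    return x.grad.abs().squeeze(0)  # per-feature importance scores

scores = input_saliency(model, torch.randn(1, 16), target_class=0)
```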
Advancing transformer architectures for domain-specific language processing with improved context understanding.
Building AI systems resilient to attacks and adversarial examples through novel training methodologies.
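A representative building block here is adversarial training with the fast gradient sign method (FGSM): perturb inputs along the loss gradient's sign, then train on the perturbed batch. The model, optimizer, and epsilon below are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

model = torch.nn.Linear(16, 3)  # placeholder classifier
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

def fgsm_adversarial_step(x, y, eps=0.03):
    """One adversarial-training step: craft an FGSM perturbation along
    the loss gradient's sign, then update the model on the perturbed batch."""
    x_adv = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad, = torch.autograd.grad(loss, x_adv)
    x_adv = (x + eps * grad.sign()).detach()  # adversarial example
    optimizer.zero_grad()
    F.cross_entropy(model(x_adv), y).backward()
    optimizer.step()

fgsm_adversarial_step(torch.randn(4, 16), torch.tensor([0, 1, 2, 0]))
```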
Optimizing model architectures for faster inference and reduced computational costs without sacrificing accuracy.
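As one concrete example of this family of optimizations, post-training dynamic quantization stores linear-layer weights in int8, shrinking the model and typically speeding up CPU inference. The model below is a placeholder:

```python
import torch

model = torch.nn.Sequential(torch.nn.Linear(64, 64), torch.nn.ReLU(),
                            torch.nn.Linear(64, 8))  # placeholder model

# Dynamic quantization: Linear weights are converted to int8, cutting
# memory use and often improving CPU inference latency.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)
```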
Enabling collaborative model training across distributed datasets while preserving privacy and security.
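The canonical aggregation step in federated learning is FedAvg: the server averages client weights, weighted by local dataset size, so raw data never leaves the clients. A minimal sketch, assuming all parameters are floating-point tensors and using stand-in local models:

```python
import torch
import torch.nn as nn

def federated_average(client_states, client_sizes):
    """FedAvg-style aggregation: average client model weights on the
    server, weighted by local dataset size; raw data stays on-device."""
    total = sum(client_sizes)
    return {
        key: sum(state[key] * (n / total)
                 for state, n in zip(client_states, client_sizes))
        for key in client_states[0]
    }

clients = [nn.Linear(16, 4) for _ in range(3)]  # stand-in local models
global_state = federated_average(
    [c.state_dict() for c in clients], client_sizes=[100, 250, 50]
)
```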
Developing infrastructure for real-time monitoring and oversight of AI systems at scale, ensuring continuous safety and performance evaluation.
Investigating safety degradation under adaptive and adversarial pressure to build resilient AI systems that maintain safety guarantees.
Creating uncertainty-aware control, abstention, and deferral mechanisms that allow AI systems to recognize and act appropriately on their limitations.
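In its simplest form, an abstention mechanism acts only above a confidence threshold and defers everything else to a human or safe fallback. Softmax confidence below is a deliberately simple stand-in for a calibrated uncertainty estimate, and the model and threshold are illustrative:

```python
import torch

def predict_or_defer(model, x, threshold=0.9):
    """Act only when confident: below the threshold the system abstains
    and defers to a human or a safe fallback path."""
    probs = torch.softmax(model(x), dim=-1)
    confidence, label = probs.max(dim=-1)
    return label, confidence, confidence < threshold  # defer mask

model = torch.nn.Linear(16, 3)  # placeholder classifier
labels, conf, defer = predict_or_defer(model, torch.randn(5, 16))
```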
Designing hybrid deterministic–learned safety systems that combine the reliability of traditional methods with the flexibility of machine learning.
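A toy sketch of the pattern: a learned policy proposes an action, and a fixed deterministic rule layer enforces hard bounds before the action is applied. The controller and limits here are hypothetical:

```python
def safe_action(state, learned_policy, limits=(-1.0, 1.0)):
    """Hybrid control: the learned policy proposes, the deterministic
    rule layer disposes by clamping into a verified safe range."""
    proposal = learned_policy(state)
    low, high = limits
    return min(max(proposal, low), high)  # deterministic override

# Toy usage: an assumed learned controller whose output may stray
# outside the verified safe range.
action = safe_action(0.5, learned_policy=lambda s: 3.2 * s)
```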
Detecting early-warning signals from model internals and system dynamics to anticipate and prevent failures before they occur.
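One simple instance of such a signal is activation drift: when a layer's activation statistics move far from a reference profile, that shift can precede visible failures. The reference statistics and threshold below are illustrative assumptions:

```python
import torch

class ActivationDriftMonitor:
    """Flags when a layer's activation statistics drift away from a
    reference profile, a simple early-warning signal of failure."""
    def __init__(self, ref_mean, ref_std, z_threshold=4.0):
        self.ref_mean, self.ref_std, self.z = ref_mean, ref_std, z_threshold

    def check(self, activations):
        score = abs(activations.mean().item() - self.ref_mean) / (self.ref_std + 1e-8)
        return score > self.z  # True = raise an alert

monitor = ActivationDriftMonitor(ref_mean=0.0, ref_std=1.0)
alert = monitor.check(torch.randn(256) + 6.0)  # shifted batch -> alert
```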
Ensuring reliability and robustness of tool-using and internet-connected LLMs through advanced safety mechanisms and monitoring.
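One basic safeguard in this setting is a deterministic gate between the LLM and the outside world: every tool call and outbound URL must pass an allowlist before execution. The tool names and domains below are hypothetical examples, not a real registry:

```python
from urllib.parse import urlparse

ALLOWED_TOOLS = {"search", "calculator"}  # hypothetical tool registry
ALLOWED_HOSTS = {"example.com"}           # hypothetical domain allowlist

def vet_tool_call(tool_name: str, url: str | None = None) -> bool:
    """Deterministic gate: reject any tool invocation or outbound URL
    that is not explicitly allowlisted, before the LLM's request runs."""
    if tool_name not in ALLOWED_TOOLS:
        return False
    if url is not None and urlparse(url).hostname not in ALLOWED_HOSTS:
        return False
    return True

assert vet_tool_call("search", "https://example.com/query")
assert not vet_tool_call("shell")  # unknown tool is refused
```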
We are open to partnering with leading universities and research institutions worldwide to advance AI science.