Towards trustworthy deployable LLM-centric AI systems

EVENT DATE
18 Jul 2025
TIME
10:30 am to 12:00 pm
LOCATION
SUTD Think Tank 23 (Building 2, Level 4, Room 2.413)

The rapid advancement of large language models (LLMs) has revolutionised AI capabilities. Yet the transformative potential of LLMs in industrial systems is tempered by critical trustworthiness gaps, raising two pivotal questions:

  1. What factors define the trustworthiness of LLM-centric AI systems?
  2. Do existing trustworthiness evaluation/enhancement approaches adequately represent real-world deployment requirements and user trust needs?

This talk presents our recent efforts to bridge these gaps, ranging from human-aligned evaluation and enhancement of LLM safety and over-safety, region-aware studies of AI social values, and investigation of mechanism reliability, to downstream industrial applications, collectively advancing trustworthy, deployable LLM-centric AI.

Speaker’s profile

Dongxia Wang is currently an Assistant Professor at the College of Control Science and Engineering, Zhejiang University. She obtained her PhD degree from the School of Computer Science and Engineering at Nanyang Technological University, Singapore, in 2018. She was a Postdoctoral Researcher in the Department of Computer Science at the University of Oxford and the School of Computing and Information Systems at Singapore Management University. Her research interests include trustworthy AI systems, multi-agent systems, and industrial solutions. She has published in top-tier conferences and journals such as ICML, KDD, ACL, SIGIR, AAMAS, IJCAI, TOIS, TDSC, and TIFS. She also serves as a program committee member for multiple AI conferences.
