MinerAlert
Date: January 26, 2026, 10:30 am – 11:30 am
Location: CCSB 1.0410.
Speaker: Min Xian, Ph.D., Associate Professor, University of Idaho
Abstract: The widespread deployment of artificial intelligence (AI) in high-stakes domains—such as healthcare, finance, and energy systems—demands a move beyond mere predictive performance toward the development of trustworthy AI systems. Trustworthiness necessitates that AI is not only accurate but also reliable, transparent, and secure. This presentation will outline the technical pathway for trustworthy AI, founded on three interdependent pillars: (1) Interpretable AI, which provides human-understandable insights into model predictions and decision logic; (2) Uncertainty Quantification (UQ), which enables models to express their confidence (or lack thereof) in predictions, distinguishing between reliable and ambiguous cases; and (3) Adversarial Robustness, which ensures model resilience against deliberate manipulations and distribution shifts. In addition, the presentation will discuss state-of-the-art techniques and practical case studies, and how their integration creates AI systems that are more auditable, dependable, and safe for real-world deployment.
Biosketch: Dr. Min Xian is an Associate Professor in the Department of Computer Science at the University of Idaho. He received his Ph.D. in Computer Science from Utah State University, Logan, Utah, in 2017, and his M.S. in Pattern Recognition and Intelligent Systems from Harbin Institute of Technology, Harbin, China, in 2011. Dr. Xian directs the Machine Intelligence and Data Analytics (MIDA) lab, a research-oriented, collaborative core that drives interdisciplinary research. He is an affiliate professor and doctoral supervisor in the Bioinformatics and Computational Biology (BCB) program at the University of Idaho and a participating faculty member of the Institute for Modeling, Collaboration, and Innovation (IMCI). He leads projects on AI-enhanced cancer detection (NIH) and material characterization and development (DOE). His research interests include trustworthy AI, deep learning, applied AI in critical areas, adversarial learning, biomedical data analytics, and digital image understanding. Dr. Xian is a guest editor for Healthcare, an area chair for the AAAI conference, and an active reviewer for many prestigious international journals, including Pattern Recognition, IEEE Transactions on Medical Imaging, Medical Image Analysis, Medical Physics, Scientific Reports, Neurocomputing, and Artificial Intelligence in Medicine.