Purdue University
Time: 1:30 PM - 2:20 PM
Topic: Interpretable, Robust, and Trustworthy AI Systems
This talk will introduce several innovative technologies designed to enable the development of interpretable, robust, and trustworthy AI systems. In particular, we will showcase AI frameworks that can be reliably deployed for real-time prediction of complex dynamical systems, with applications aimed at enhancing their stability and efficiency. To illustrate these concepts, I will present case studies on COVID-19 pandemic forecasting and personalized prediction in Alzheimer's disease, demonstrating how to construct data-driven models that are both interpretable and trustworthy.
About the Speaker: Prof. Guang Lin is the Associate Dean for Research and Innovation and the Director of Data Science Consulting Service. He is also the Chair of the Initiative for Data Science and Engineering Applications at the College of Engineering and a Full Professor in the School of Mechanical Engineering and Department of Mathematics at Purdue University. Prof. Lin has received various awards, including the NSF CAREER Award, Mid-Career Sigma Xi Award, University Faculty Scholar, and Mathematical Biosciences Institute Early Career Award.
This paper analyzes l_1-regularized linear regression in the challenging scenario where only adversarially corrupted data are available for training. The paper proves that, under existing deterministic adversarial attacks, support recovery is still achievable with only a few samples, and then considers a more general stochastic adversary model, yielding counter-intuitive results about how adversarial corruption affects sample complexity.
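A minimal sketch of the setting, under illustrative assumptions: a sparse ground-truth coefficient vector, a hypothetical sign-flip corruption of a small fraction of responses, and an arbitrary regularization strength. This shows what "support recovery with l_1 regularization from corrupted data" means in code; it is not the paper's construction or proof technique.

```python
# Hypothetical illustration: recover the support of a sparse coefficient vector
# with an l_1-regularized (Lasso) fit when a fraction of the training responses
# is adversarially corrupted.  Corruption model, sample sizes, and alpha are
# illustrative assumptions, not the paper's setting.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, d, k = 200, 50, 5                          # samples, features, true support size

beta = np.zeros(d)
beta[:k] = rng.choice([-1.0, 1.0], size=k)    # sparse ground-truth coefficients

X = rng.standard_normal((n, d))
y = X @ beta + 0.1 * rng.standard_normal(n)

# Hypothetical deterministic adversary: flip the sign of a small fraction of responses.
corrupted = rng.choice(n, size=n // 20, replace=False)
y[corrupted] *= -1.0

model = Lasso(alpha=0.1).fit(X, y)
recovered = np.flatnonzero(np.abs(model.coef_) > 1e-3)

print("true support:     ", np.flatnonzero(beta))
print("recovered support:", recovered)
```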
Graphs offer a powerful framework for modeling relational data across various modalities, showing significant potential in enhancing foundation models by capturing relationships and knowledge among real-world entities. This talk will delve into how relational information embedded in graphs can be integrated with foundation models, with a central focus on privacy challenges posed by learning from sensitive relational data, particularly in augmenting and fine-tuning pretrained models.
This presentation studies the shallow Ritz method for solving one-dimensional elliptic problems, showing that the method dramatically improves the order of approximation for non-smooth problems. A damped block Newton method is developed to achieve an optimal or nearly optimal order of approximation at a computational cost of O(n) per iteration.
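A minimal sketch of the shallow Ritz idea, assuming the model problem -u'' = f on (0,1) with zero Dirichlet data enforced by a boundary penalty. The trial function is a one-hidden-layer ReLU network with frozen breakpoints, so minimizing the Ritz energy over the outer coefficients reduces to a single linear solve; this is only the linear "block" of the picture, not the damped block Newton method itself, and all numerical choices are illustrative.

```python
# Shallow Ritz sketch: u(x) = sum_i c_i * relu(x - b_i) with frozen breakpoints b_i,
# Ritz energy E(c) = 0.5 c^T A c - g^T c minimized by solving A c = g.
import numpy as np

rhs = lambda x: np.pi**2 * np.sin(np.pi * x)   # exact solution: u(x) = sin(pi x)

n_neurons, gamma = 30, 1e6                     # network width, boundary penalty weight
b = np.linspace(0.0, 1.0, n_neurons, endpoint=False)   # frozen ReLU breakpoints

x = np.linspace(0.0, 1.0, 2001)                # quadrature grid (trapezoid rule)
phi  = np.maximum(x[:, None] - b[None, :], 0.0)         # basis values
dphi = (x[:, None] > b[None, :]).astype(float)          # basis derivatives

def trapz_inner(u, v):
    """Trapezoid-rule approximation of the matrix of inner products (u_i, v_j)."""
    w = np.full(x.size, x[1] - x[0])
    w[[0, -1]] *= 0.5
    return (u * w[:, None]).T @ v

# Stiffness matrix plus boundary penalty, and load vector.
A = trapz_inner(dphi, dphi) + gamma * (np.outer(phi[0], phi[0]) + np.outer(phi[-1], phi[-1]))
g = trapz_inner(phi, rhs(x)[:, None]).ravel()
c = np.linalg.solve(A, g)

u = phi @ c
print("max error vs sin(pi x):", np.max(np.abs(u - np.sin(np.pi * x))))
```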
Disturbances such as atmospheric turbulence and aero-optic effects lead to wavefront aberrations. This paper introduces ReVAR (Re-Whitened Vector Auto-Regression), a novel algorithm for data-driven aero-optic phase screen generation that trains on experimental time-series data and generates synthetic data capturing its statistics. The algorithm is computationally efficient and produces high-quality synthetic phase screens.
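A minimal sketch of the fit-then-sample idea behind data-driven phase screen generation: fit a plain first-order vector autoregression to a multivariate time series, then roll the fitted model forward with bootstrapped residuals to produce a synthetic series with similar second-order statistics. This is an illustration on made-up data, not the ReVAR algorithm, which adds a re-whitening step and trains on experimental wavefront measurements.

```python
# Plain VAR(1) fit and synthetic-data generation on hypothetical data.
import numpy as np

rng = np.random.default_rng(1)

# Made-up "experimental" data: T snapshots of a d-dimensional state.
T, d = 5000, 8
A_true = 0.9 * np.linalg.qr(rng.standard_normal((d, d)))[0]   # stable dynamics
X = np.zeros((T, d))
for t in range(1, T):
    X[t] = X[t - 1] @ A_true.T + 0.1 * rng.standard_normal(d)

# Fit VAR(1) by least squares:  X[t] ~ X[t-1] @ A_hat.T
B, *_ = np.linalg.lstsq(X[:-1], X[1:], rcond=None)
A_hat = B.T
residuals = X[1:] - X[:-1] @ A_hat.T

# Generate a synthetic series by iterating the fitted model with resampled residuals.
Y = np.zeros((T, d))
for t in range(1, T):
    Y[t] = Y[t - 1] @ A_hat.T + residuals[rng.integers(residuals.shape[0])]

# Compare a lag-1 autocovariance of the data and the synthetic series.
print("lag-1 autocovariance (data):     ", np.round(np.cov(X[:-1].T, X[1:].T)[0, d], 3))
print("lag-1 autocovariance (synthetic):", np.round(np.cov(Y[:-1].T, Y[1:].T)[0, d], 3))
```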
This talk presents Geometry-Aware Superpixels (GAS) to address limitations in current 3D scene understanding approaches. GAS extends 2D+time superpixels to efficiently tessellate a 3D volume and dynamically adjusts the complexity of the tessellation based on scene content, allowing for both computational efficiency and high-quality scene understanding.
See invited speaker section above for details.
Pattern formation is found throughout the natural world. This talk demonstrates the application of Approximate Approximate Bayesian Computation (AABC) to two biological systems: an agent-based model of zebrafish skin patterns and a vertex-based model of meristem development in fern gametophytes, showing the utility of AABC in understanding complex biological processes.
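A minimal sketch of plain rejection Approximate Bayesian Computation on a toy stochastic simulator. The AABC variant discussed in the talk additionally approximates an expensive simulator (such as the agent-based zebrafish model) from a small set of pilot runs, but the accept/reject logic against a summary-statistic distance is the same; the simulator, prior, and tolerance below are made-up assumptions.

```python
# Rejection ABC on a toy simulator: sample parameters from the prior, simulate,
# and keep parameters whose summary statistics land close to the observed ones.
import numpy as np

rng = np.random.default_rng(2)

def simulator(theta, n=100):
    """Toy stochastic model: n noisy observations with unknown mean theta."""
    return theta + rng.standard_normal(n)

def summary(data):
    return np.array([data.mean(), data.std()])

observed = simulator(theta=1.5)            # stand-in for experimental data
s_obs = summary(observed)

tol, accepted = 0.2, []
for _ in range(20000):
    theta = rng.uniform(-5.0, 5.0)         # uniform prior on the parameter
    s_sim = summary(simulator(theta))
    if np.linalg.norm(s_sim - s_obs) < tol:
        accepted.append(theta)

accepted = np.array(accepted)
print(f"posterior mean ~ {accepted.mean():.2f} from {accepted.size} accepted draws")
```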
Recent advances in machine learning have brought new challenges in optimization, particularly for non-convex and min-max problems. This presentation examines partially second-order methods for min-max optimization and explores the higher-order smoothness regime, extending higher-order methods to structured non-monotone problems with higher-order smoothness.
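A minimal sketch, on the classic bilinear toy problem f(x, y) = x*y with its saddle point at the origin, of why plain first-order simultaneous gradient descent-ascent struggles on min-max problems: its iterates spiral outward, while the extragradient method, a simple look-ahead relative of the beyond-first-order methods discussed in the talk, contracts toward the saddle. The step size and iteration count are arbitrary choices, and this is not the talk's algorithm.

```python
# Compare simultaneous gradient descent-ascent (GDA) and extragradient on f(x, y) = x*y.
import numpy as np

def grad(x, y):
    # f(x, y) = x*y:  df/dx = y (x descends), df/dy = x (y ascends)
    return y, x

eta, steps = 0.1, 500

# Simultaneous gradient descent-ascent: diverges on the bilinear problem.
x, y = 1.0, 1.0
for _ in range(steps):
    gx, gy = grad(x, y)
    x, y = x - eta * gx, y + eta * gy
print(f"GDA distance to saddle after {steps} steps:           {np.hypot(x, y):.3f}")

# Extragradient: evaluate the gradient at a look-ahead point before updating.
x, y = 1.0, 1.0
for _ in range(steps):
    gx, gy = grad(x, y)
    x_half, y_half = x - eta * gx, y + eta * gy          # look-ahead step
    gx, gy = grad(x_half, y_half)
    x, y = x - eta * gx, y + eta * gy                    # actual update
print(f"Extragradient distance to saddle after {steps} steps: {np.hypot(x, y):.3f}")
```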