150 Western Avenue, Allston, MA 02134

First Talk

Title: Adaptive Resource Allocation for Improving HIV Testing Processes

Speaker: Davin Choo, Postdoctoral Fellow at Harvard SEAS

Abstract: HIV testing programs face severe resource constraints while operating under uncertainty and evolving information. In this talk, I present recent work on adaptive resource allocation for improving HIV testing processes, framed as sequential decision-making problems in which actions reveal new information and shape future opportunities. I will introduce algorithmic models that capture frontier-based testing, partial observability, and multi-round stochastic arrivals. I then show how tools such as Gittins indices, branching bandits, and diffusion models can be used to design principled adaptive policies. Beyond HIV testing, these methods offer broader insights into AI-driven approaches for resource-constrained decision-making in public health and beyond.

Speaker Bio: Davin is a postdoctoral fellow at Teamcore, Harvard University. He earned his PhD in Computer Science from the National University of Singapore (NUS) as an AISG PhD fellow, a Master's degree in Computer Science from ETH Zürich, and two undergraduate degrees in Computer Science and Applied Mathematics from NUS. Between his undergraduate and Master's studies, he worked as an applied research scientist at DSO National Laboratories on projects at the intersection of AI and security. During his PhD at NUS, he focused on the foundations of AI and machine learning, working on statistical models, causal inference, and the design of resource-efficient algorithms. His current postdoctoral research at Harvard explores how principled algorithmic and AI techniques can be applied to real-world problems with the goal of achieving meaningful social impact.

 

Second Talk

Title: Perception as Generation: Navigating Ambiguity with Diffusion Models

Speaker: Xinran (Nicole) Han, PhD Candidate at Harvard SEAS

Abstract: Recovering 3D structure from 2D images is a central problem in computer vision, yet it is fundamentally ambiguous: many different 3D worlds can give rise to the same image. In this talk, I argue that instead of seeking a single “best” estimate, vision systems should be generative and model the distribution of plausible interpretations, akin to how humans respond to visual illusions that induce multiple distinct interpretations. I demonstrate how this behavior emerges from training a patch-based diffusion model on everyday objects. I further show that small motions, together with architectural inductive biases that encourage bottom-up and top-down integration, enable joint reasoning over shape and material. More broadly, this line of work points toward a deeper understanding of human perception and suggests new directions for building more robust embodied systems.

Speaker Bio: Xinran (Nicole) Han is a PhD student at Harvard University, working with Prof. Todd Zickler. Previously, she graduated from the University of Pennsylvania, advised by Prof. Jianbo Shi. Her research interests span computer vision and human perception, with an emphasis on 3D understanding. Her work focuses on combining physics-based insights and learning-based neural priors to build data- and compute-efficient models that generalize to unseen scenarios.

 

There will be pretzels and coffee before the talk at 2:15 pm outside of LL2.224.
