ICML 2026 Workshop on Decision-Making from Offline Datasets to Online Adaptation: Black-Box Optimization to Reinforcement Learning

This workshop brings together researchers from academia and industry to discuss major challenges, outline recent advances, and highlight future directions for novel and existing problems in decision-making from offline datasets, including synergies between methods for offline RL and offline black-box optimization. It aims to crystallize and understand diverse research perspectives on safe, efficient, and scalable decision-making from offline datasets. We will convene the different communities that work on this general problem space: deep generative models, RL, Bayesian optimization, contextual bandits, causal ML, and AI for Science. In particular, we believe that the community of researchers working on offline model-based optimization (still in its early stages) will greatly benefit from interacting with the offline RL community, leading to cross-fertilization of ideas. Similarly, the offline RL community will learn about the challenges that arise when applying existing methods to LLM applications, which will inspire new research methodologies. We will discuss the unification of general principles across disciplines (e.g., uncertainty quantification and conservative behavior), as well as the need for new, reliable benchmarks and evaluation protocols that mimic real-world applications as closely as possible.

Speakers

Jake Gardner

University of Pennsylvania

Wen Sun

Cornell University

Clara Wong-Fannjiang

Genentech

Eytan Bakshy

Meta

Aarti Singh

Carnegie Mellon University

Sergey Levine

UC Berkeley

Organizing Committee

Jana Doppa

Aryan Deshwal

Haruka Kiyohara

Syrine Belakaria

Willie Neiswanger

Nghia Hoang

Thanh Nguyen-Tang