This workshop aims to bring together researchers from academia and industry to discuss major challenges, outline recent advances, and highlight future directions for novel and existing problems in decision-making from offline datasets, including synergies between methods for offline RL and offline black-box optimization. We aim to crystallize and connect diverse research perspectives on safe, efficient, and scalable decision-making from offline datasets. The workshop will bring together the different communities that work on this general problem space: deep generative models, RL, Bayesian optimization, contextual bandits, causal ML, and AI for Science. In particular, we believe that the community of researchers working on offline model-based optimization (which is still in its early stages) will benefit greatly from interacting with the offline RL community, leading to cross-fertilization of ideas. Similarly, the offline RL community will learn about the challenges that arise when applying existing methods to LLM applications, which will inspire new research methodologies. We will discuss the unification of general principles across these disciplines (e.g., uncertainty quantification and conservative behavior). We will also discuss the need for new, reliable benchmarks and evaluation protocols that mimic real-world applications as closely as possible.