
# Optimization
In this session, you’ll learn how to evaluate and optimize AI systems for production tradeoffs like quality, latency, and cost. We’ll cover how to benchmark model performance, compare baselines, and use evaluation results to make practical decisions about model selection and optimization.
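The benchmarking idea described above can be sketched as a small harness that scores each candidate model on both quality and latency. Everything here is illustrative: `model_a` and `model_b` are toy stand-ins for real model calls, and `benchmark` is a hypothetical helper, not part of any SDK.

```python
# Toy benchmark harness: score candidate "models" on accuracy and
# average per-call latency, then compare the results side by side.
import time

def model_a(x):
    # Fast stand-in for a small model.
    return x * 2

def model_b(x):
    # Slower stand-in for a larger model (simulated latency).
    time.sleep(0.001)
    return x + x

def benchmark(model, cases):
    # cases: list of (input, expected_output) pairs.
    start = time.perf_counter()
    correct = sum(model(x) == expected for x, expected in cases)
    avg_latency = (time.perf_counter() - start) / len(cases)
    return {"accuracy": correct / len(cases), "avg_latency_s": avg_latency}

cases = [(i, i * 2) for i in range(10)]
results = {name: benchmark(fn, cases)
           for name, fn in [("model_a", model_a), ("model_b", model_b)]}
```

With real models, the same structure lets you weigh a quality gap against a latency or cost gap when choosing which model to ship.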
You’ll also explore fine-tuning and distillation patterns: preparing a dataset, generating teacher outputs, training a smaller model, and measuring whether the optimized model maintains quality. This session is geared toward builders looking to move beyond “it works” toward systems that are measurable, efficient, and production-ready.
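The four distillation steps above (prepare a dataset, generate teacher outputs, train a smaller model, measure quality) can be sketched as a toy loop. All names here (`teacher_generate`, `train_student`, `evaluate`) are illustrative placeholders, and the "models" are deliberately trivial so the example runs anywhere.

```python
# Toy distillation workflow: a "teacher" labels prompts, a smaller
# "student" is fit to those labels, and we measure whether the student
# agrees with the teacher on held-out prompts.

def teacher_generate(prompt: str) -> str:
    # Stand-in for a large teacher model producing an output.
    return "positive" if "great" in prompt else "negative"

def train_student(dataset: list[tuple[str, str]]) -> dict[str, str]:
    # Stand-in for fine-tuning: memorize word -> label associations.
    model: dict[str, str] = {}
    for prompt, label in dataset:
        for word in prompt.split():
            model[word] = label
    return model

def student_predict(model: dict[str, str], prompt: str) -> str:
    votes = [model[w] for w in prompt.split() if w in model]
    return max(set(votes), key=votes.count) if votes else "negative"

def evaluate(model: dict[str, str], prompts: list[str]) -> float:
    # Fraction of held-out prompts where student matches teacher.
    agree = sum(student_predict(model, p) == teacher_generate(p)
                for p in prompts)
    return agree / len(prompts)

# 1) Prepare a dataset of prompts, 2) generate teacher outputs,
train_prompts = ["great service", "bad food", "great value", "slow bad support"]
dataset = [(p, teacher_generate(p)) for p in train_prompts]
# 3) train the smaller student model,
student = train_student(dataset)
# 4) measure whether the optimized model maintains quality.
agreement = evaluate(student, ["great prices", "bad smell"])
```

In a real pipeline, the teacher would be a large model, the student a fine-tuned smaller one, and `evaluate` a task-specific eval suite; the shape of the loop stays the same.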
Speaker
Sean Lubbers
Technical Enablement Manager @ OpenAI
May 28, 5:00 PM GMT
Online

