How are cutting-edge LLMs trained? Find out in today's exceptional episode with Julien Launay, who digs into pre-training (self-supervised learning) and post-training (reinforcement learning) in eloquent detail.
Julien:
• CEO and co-founder of Adaptive ML, a remarkably fast-growing startup focused on enabling A.I. models to learn from experience.
• Previously led the extreme-scale research teams at Hugging Face and LightOn, where he helped develop state-of-the-art open-source models.
• Organizer of the "Efficient Systems for Foundation Models" workshop at ICML (the prestigious International Conference on Machine Learning).
Today's episode will appeal most to hands-on practitioners, but anyone open to getting into the technical weeds of Large Language Model (LLM) training should also listen in.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.