LLM Pre-Training and Post-Training 101, with Julien Launay

Added on August 12, 2025 by Jon Krohn.

How are cutting-edge LLMs trained? Find out in today's exceptional episode with Julien Launay, who digs into pre-training (supervised learning) and post-training (reinforcement learning) in eloquent detail.

Julien:

• CEO and co-founder of Adaptive ML, a remarkably fast-growing startup focused on enabling A.I. models to learn from experience.

• Previously led the extreme-scale research teams at Hugging Face and LightOn, where he helped develop state-of-the-art open-source models.

• Organizer of the "Efficient Systems for Foundation Models" workshop at ICML (the prestigious International Conference on Machine Learning).

Today's episode will appeal most to hands-on practitioners, but anyone open to getting into the technical weeds on Large Language Model (LLM) training should also listen in.
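
For a concrete flavor of the distinction before you press play, here's a minimal PyTorch sketch (a toy illustration, not code from the episode or from Adaptive ML) contrasting the two phases: pre-training as supervised next-token prediction with a cross-entropy loss, and post-training as a REINFORCE-style policy-gradient update from a scalar reward. The tiny model, the random "corpus," and the reward function are all stand-in assumptions.

```python
import torch
import torch.nn.functional as F

vocab_size, d_model = 100, 32

# A toy "language model": an embedding followed by a linear head over the vocab.
model = torch.nn.Sequential(
    torch.nn.Embedding(vocab_size, d_model),
    torch.nn.Linear(d_model, vocab_size),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# --- Pre-training: supervised next-token prediction ---
tokens = torch.randint(0, vocab_size, (8, 17))  # stand-in for a text corpus
inputs, targets = tokens[:, :-1], tokens[:, 1:]
logits = model(inputs)  # (batch, seq_len, vocab_size)
pretrain_loss = F.cross_entropy(
    logits.reshape(-1, vocab_size), targets.reshape(-1)
)
pretrain_loss.backward()
opt.step()
opt.zero_grad()

# --- Post-training: REINFORCE-style policy gradient ---
prompt = torch.randint(0, vocab_size, (8, 5))
next_logits = model(prompt)[:, -1, :]  # distribution over the next token
dist = torch.distributions.Categorical(logits=next_logits)
sampled = dist.sample()  # the model "acts" by generating a token
reward = (sampled % 2 == 0).float()  # toy reward function: pure assumption
pg_loss = -(dist.log_prob(sampled) * reward).mean()  # reinforce rewarded tokens
pg_loss.backward()
opt.step()
opt.zero_grad()
```

Real pipelines replace the toy reward with human preference data or a learned reward model, and the single gradient step with trillions of tokens, but the two objectives above are the conceptual core Julien and I unpack in the episode.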

The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
