Jon Krohn

Get More Language Context out of your LLM

Added on June 2, 2023 by Jon Krohn.

The "context window" limits the number of words that can be input to (or output by) a given Large Language Model. Today's episode introduces FlashAttention, a trick that allows for much larger context windows.


The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.

Categories: Five-Minute Friday, SuperDataScience, YouTube. Tags: flash attention, ai, Data Science