Jon Krohn


Six Reasons Why Building LLM Products Is Tricky

Added on June 16, 2023 by Jon Krohn.

Many of my recent podcast episodes have focused on the bewildering potential of fine-tuning open-source Large Language Models (LLMs) to your specific needs. There are, however, six big challenges when bringing LLMs to your users:

1. Context windows are strictly limited
2. LLMs are slow and compute-intensive at inference time
3. "Engineering" reliable prompts can be tricky
4. Prompt-injection attacks make you vulnerable to data and IP theft
5. LLMs aren't (usually) products on their own
6. There are legal and compliance issues
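The first challenge above can be made concrete with a sketch: before sending a prompt to an LLM, applications typically count tokens and drop (or summarize) older conversation history so the request fits within the model's context window. A minimal illustration follows, with whitespace splitting standing in for a real tokenizer (actual token counts from a production tokenizer will differ) and `fit_to_window` as a hypothetical helper, not any particular library's API:

```python
# Sketch of coping with a strictly limited context window.
# Assumption: whitespace splitting approximates tokenization;
# a real tokenizer would give different counts.

def count_tokens(text: str) -> int:
    """Approximate token count via whitespace splitting."""
    return len(text.split())

def fit_to_window(system_prompt: str, history: list[str],
                  user_msg: str, max_tokens: int = 4096) -> str:
    """Drop the oldest history turns until the prompt fits the window."""
    kept = list(history)

    def build(turns: list[str]) -> str:
        return "\n".join([system_prompt, *turns, user_msg])

    while kept and count_tokens(build(kept)) > max_tokens:
        kept.pop(0)  # discard the oldest turn first
    return build(kept)

# A long early turn gets dropped; the recent turn survives.
prompt = fit_to_window("You are a helpful assistant.",
                       ["early turn " + "x " * 3000, "recent turn"],
                       "Latest question?", max_tokens=50)
```

Real applications often replace the simple "drop oldest" policy with summarization or retrieval, but the core constraint is the same: everything the model sees must fit in the window.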

The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
