
Jon Krohn


The Four Types of Memory Every AI Agent Needs, with Richmond Alake

Added on April 22, 2026 by Jon Krohn.

To build an effective AI agent, getting its memory right is essential. In today's episode, our guide to agent memory is the brilliant (and very funny!) machine-learning architect and engineer Richmond Alake.

More on Richmond:
• Director of AI developer experience at Oracle.
• Previous roles include staff developer advocate for AI/ML at MongoDB, ML architect at Slalom, writer for NVIDIA, and computer-vision engineer at Loveshark.
• Holds a master's in ML and robotics from the University of Surrey.

In this episode, Richmond magnificently covers:
• How agent memory encapsulates the systems (embedding models, rerankers, databases, and LLMs) that allow AI agents to learn and adapt to new information over time, rather than starting from scratch every session.
• The four types of agent memory (all drawn from human cognition).
• Memory-first agent harnesses.
• Predictions for a flattening of AI engineering roles, where the future developer will need end-to-end understanding of the full agent stack.
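The episode summary doesn't name the four memory types, so as a rough illustration only, here is a minimal sketch assuming one common taxonomy drawn from human cognition (working, episodic, semantic, and procedural memory). The `embed` function is a toy bag-of-words stand-in for the real embedding models and rerankers mentioned above; all class and function names here are hypothetical.

```python
import math
from collections import Counter
from dataclasses import dataclass, field


def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real agent would call an embedding model.
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


@dataclass
class MemoryStore:
    # A vector store in miniature: texts paired with their embeddings.
    entries: list = field(default_factory=list)

    def add(self, text: str) -> None:
        self.entries.append((text, embed(text)))

    def recall(self, query: str, k: int = 2) -> list:
        # Retrieve the k stored texts most similar to the query.
        q = embed(query)
        ranked = sorted(self.entries, key=lambda e: cosine(q, e[1]), reverse=True)
        return [text for text, _ in ranked[:k]]


@dataclass
class AgentMemory:
    # One assumed taxonomy of the four memory types (not confirmed by the episode):
    working: list = field(default_factory=list)                  # current-session context
    episodic: MemoryStore = field(default_factory=MemoryStore)   # past interactions
    semantic: MemoryStore = field(default_factory=MemoryStore)   # facts and knowledge
    procedural: list = field(default_factory=list)               # learned skills / how-to steps


if __name__ == "__main__":
    mem = AgentMemory()
    mem.episodic.add("user prefers Python examples")
    mem.episodic.add("user asked about databases yesterday")
    # Before responding, the agent recalls relevant past interactions
    # instead of starting from scratch every session.
    print(mem.episodic.recall("which language does the user prefer", k=1))
```

In a production agent, `MemoryStore` would be backed by a database with a real embedding model and reranker, but the retrieve-before-respond loop is the same idea.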

The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.

Categories: Data Science, Interview, Podcast, SuperDataScience, YouTube · Tags: #superdatascience, #agenticAI, #AIagent, #AgentMemory, #LLMs, #LLM