To build an effective AI agent, getting its memory right is essential. In today's episode, our guide to agent memory is the brilliant (and very funny!) machine-learning architect and engineer Richmond Alake.
More on Richmond:
• Director of AI developer experience at Oracle.
• Previous roles include: staff developer advocate for AI/ML at MongoDB, ML architect at Slalom, writer for NVIDIA, and computer-vision engineer at Loveshark.
• Holds a master's in ML and robotics from the University of Surrey.
In this episode, Richmond magnificently covers:
• How agent memory encapsulates the systems (embedding models, rerankers, databases, and LLMs) that allow AI agents to learn from and adapt to new information over time, rather than starting from scratch every session.
• The four types of agent memory (all drawn from human cognition).
• Memory-first agent harnesses.
• Predictions for a flattening of AI engineering roles, where the future developer will need end-to-end understanding of the full agent stack.
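To make the idea of agent memory concrete, here is a minimal, illustrative sketch of a long-term memory store that persists facts across sessions and retrieves the most relevant ones for a new query. This is not from the episode: the `AgentMemory` class is hypothetical, and a simple bag-of-words vector with cosine similarity stands in for the embedding model and vector database a production system would use.

```python
from collections import Counter
import math

class AgentMemory:
    """Toy long-term memory store: persists facts across sessions and
    retrieves the most relevant ones for a new query.  A real agent
    would use an embedding model, a reranker, and a vector database;
    here a bag-of-words vector stands in for the embedding."""

    def __init__(self):
        self.entries = []  # list of (text, vector) pairs

    @staticmethod
    def _embed(text):
        # Hypothetical stand-in for an embedding model:
        # token counts serve as a sparse vector.
        return Counter(text.lower().split())

    @staticmethod
    def _cosine(a, b):
        dot = sum(a[t] * b[t] for t in a)
        norm = (math.sqrt(sum(v * v for v in a.values()))
                * math.sqrt(sum(v * v for v in b.values())))
        return dot / norm if norm else 0.0

    def remember(self, text):
        # Store the raw text alongside its vector representation.
        self.entries.append((text, self._embed(text)))

    def recall(self, query, k=2):
        # Return the k stored memories most similar to the query.
        qv = self._embed(query)
        ranked = sorted(self.entries,
                        key=lambda e: self._cosine(qv, e[1]),
                        reverse=True)
        return [text for text, _ in ranked[:k]]

memory = AgentMemory()
memory.remember("user prefers concise answers")
memory.remember("user is building a retrieval pipeline in Python")
memory.remember("meeting scheduled for Friday")
print(memory.recall("what language is the user coding in", k=1))
# → ['user is building a retrieval pipeline in Python']
```

Because the store outlives any single conversation, the agent can consult it at the start of each session instead of starting from scratch, which is the core idea Richmond describes.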
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.