
Jon Krohn


Attention, World Models and the Future of AI, with Prof. Kyunghyun Cho

Added on March 24, 2026 by Jon Krohn.

What's going to be the next big step function in A.I.? To find out, I sat down with Prof. Kyunghyun Cho, whose 200,000 citations put him among the most influential A.I. researchers in the world... and he's a delight to listen to!

In case you aren't already aware of Kyunghyun:

  • Iconic New York University professor of computer science and data science.

  • Co-directs the Global A.I. Frontier Lab alongside Yann LeCun.

  • Regularly keynotes at the most prestigious academic A.I. conferences, including NeurIPS 2025.

  • Was a postdoc under Yoshua Bengio at the Université de Montréal, where they coauthored a paper introducing attention for neural networks, a technique that is ubiquitous within the transformer-based LLMs that enable most A.I. capabilities today.

  • Has authored many other hugely influential papers on deep recurrent neural networks, neural machine translation, visual attention, speech recognition and multivariate time-series modeling.
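For readers curious about the attention mechanism mentioned above: the original formulation from Kyunghyun's work with Bengio's group used an additive score inside an RNN decoder, but the idea is easiest to see in the scaled dot-product form that later transformers adopted. The sketch below is illustrative only, not a reproduction of any specific paper's method:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Scaled dot-product attention: score each query against every key,
    # turn the scores into weights, then return a weighted blend of values.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = softmax(scores, axis=-1)
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 4))   # 2 queries, dimension 4
K = rng.normal(size=(3, 4))   # 3 keys
V = rng.normal(size=(3, 4))   # 3 values
out, weights = attention(Q, K, V)
print(out.shape)  # (2, 4): one blended value per query
```

The key property is that each row of the attention weights sums to 1, so every output is a convex combination of the values, with the mixing decided by query-key similarity.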

In today's episode, which will be of particular interest to hands-on A.I. practitioners, Kyunghyun eloquently discusses:

  • The human story behind the invention of attention.

  • World models.

  • Why today’s models have already captured most correlations in passive data, making the real challenge about actively choosing which data to collect.

  • Whether A.I. needs high-fidelity, step-by-step imagination or whether a high-level latent representation that lets it skip ahead is sufficient.

  • How he's adapting computer-science and A.I. education at the university level now that such capable code-generating agents exist.
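On the point about actively choosing which data to collect: one classic framing of that idea is active learning, where the learner queries labels for the examples it is least sure about. The toy sketch below (uncertainty sampling via entropy) is my own illustration of that general framing, not a description of anything discussed in the episode:

```python
import numpy as np

def entropy(probs):
    # Shannon entropy of each row of predicted class probabilities.
    p = np.clip(probs, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=-1)

def pick_most_uncertain(probs, k=2):
    # Select the k unlabeled examples whose predicted class
    # distribution has the highest entropy: these are the points
    # where a new label would be most informative.
    return np.argsort(-entropy(probs))[:k]

# Hypothetical model predictions on four unlabeled examples.
probs = np.array([
    [0.98, 0.02],   # confident
    [0.55, 0.45],   # uncertain
    [0.50, 0.50],   # maximally uncertain
    [0.90, 0.10],
])
print(pick_most_uncertain(probs))  # indices of the 2 most uncertain rows
```

The contrast with passive learning is the point: instead of absorbing whatever correlations a fixed dataset happens to contain, the learner decides where to look next.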

The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
