Jon Krohn

The “100x Engineer”: How to Be One, But Should You?

Added on February 27, 2026 by Jon Krohn.

[Image: a 3x3 grid of terminal windows in which nine code-generating agents run under supervision; one of Peter Steinberger's tricks for being a "100x Engineer." Read on for his other tricks.]

THE PHASE SHIFT

  • Andrej Karpathy (OpenAI co-founder, former Tesla AI director) recently went from 80% manual coding to 80% AI agent coding in just weeks; he says he's now "mostly programming in English."

  • This rapid phase shift was facilitated by tools like Anthropic's Claude Code, which (as many of us have experienced personally) have vastly improved their accuracy and capability in the past few months.

THE 100x ENGINEER

  • Developer Peter Steinberger racked up ~6,500 commits over two months, adding 2.5 million lines of code (and removing 1.9 million). Many engineering teams ship a few hundred commits per month; he was averaging more than 100 commits per day!

  • His setup: 3–9 AI coding agents (e.g., Claude Code) running simultaneously in a 3x3 grid of terminal windows, rotating attention across them like a conductor directing an orchestra.
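
This grid-of-agents setup can be sketched in miniature: launch several agent processes at once and rotate attention across them, collecting each one's output as it finishes. This is an illustrative sketch, not Steinberger's actual tooling; the nine `sys.executable` one-liners stand in for real coding-agent CLIs such as Claude Code.

```python
import subprocess
import sys
import time

def supervise(commands):
    """Launch one subprocess per agent command and poll them round-robin.

    A minimal stand-in for rotating attention across a grid of agents.
    """
    procs = [subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)
             for cmd in commands]
    outputs = [None] * len(procs)
    pending = set(range(len(procs)))
    while pending:
        for i in sorted(pending):            # visit each agent in turn
            if procs[i].poll() is not None:  # this agent has finished
                outputs[i], _ = procs[i].communicate()
                pending.discard(i)
        time.sleep(0.05)                     # brief pause between "rotations"
    return outputs

# Nine trivial stand-in "agents"; a real grid would run a coding-agent
# CLI in each slot instead (an assumption, not shown here).
results = supervise([[sys.executable, "-c", f"print('agent {i} done')"]
                     for i in range(9)])
```

The point of the round-robin loop is that no single agent blocks the supervisor: attention keeps moving, and finished agents hand back their output as they complete.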

THE COUNTERINTUITIVE 100x WORKFLOW

  • Steinberger now spends *more* time planning, not less. His ratio has flipped from the traditional ~20% planning / 80% coding to ~60% planning / 40% AI execution.

  • He uses a voice-first spec system: dictates raw ideas, uses AI to structure them into a design doc, then asks a fresh AI context to tear the specification apart. He iterates until the critiques become increasingly niche -- his signal that the spec is solid.

  • The key insight from both Karpathy and Steinberger: shift from imperative ("do this step by step") to declarative ("here are the success criteria, figure it out"). Write tests first, then let the agent pass them.
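
The spec-hardening loop above (draft, adversarial critique in a fresh context, revise, stop once critiques turn niche) can be sketched as a plain control loop. `critique_spec` and `revise` are hypothetical stand-ins for LLM calls, and the stopping rule below is just one reasonable reading of "increasingly niche":

```python
def harden_spec(spec, critique_spec, revise, max_rounds=5, niche_threshold=2):
    """Iterate draft -> critique -> revise until only niche critiques remain.

    critique_spec(spec) stands in for a fresh AI context returning a list of
    {"severity": ..., "note": ...} dicts; revise(spec, critiques) stands in
    for an AI call that folds the major objections back into the spec.
    """
    for _ in range(max_rounds):
        critiques = critique_spec(spec)   # fresh context tears the spec apart
        major = [c for c in critiques if c["severity"] == "major"]
        if len(major) < niche_threshold:  # only nitpicks left: spec is solid
            return spec
        spec = revise(spec, major)        # address the big objections
    return spec
```

The fresh context on every round matters: a critic that has already seen its own earlier feedback tends to repeat itself rather than find new holes.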
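
The declarative, tests-first handoff can be sketched in miniature: the success criteria are written before any implementation exists, and a green test run, not a line-by-line code review, is the quality gate. The `slugify` spec here is a made-up example, not from the post:

```python
# Success criteria, written *before* any implementation exists.
# In the declarative workflow, these tests -- not step-by-step
# instructions -- are what you hand to the coding agent.
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  spaces   everywhere ") == "spaces-everywhere"
    assert slugify("already-a-slug") == "already-a-slug"

# What an agent might produce to satisfy the criteria (illustrative).
import re

def slugify(text):
    """Lowercase, drop punctuation, and join words with hyphens."""
    words = re.findall(r"[a-z0-9]+", text.lower())
    return "-".join(words)

test_slugify()  # the quality gate: green tests, not read code
```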

LIMITATIONS/DOWNSIDES

  • AI agents no longer make simple syntax errors — their mistakes have evolved into subtle conceptual ones, such as incorrect assumptions they charge ahead on without checking.

  • Karpathy notes his manual coding ability is atrophying. Steinberger admits he ships code he never reads — relying on tests as the quality gate.

SHOULD YOU BE A 100x ENGINEER?

  • In my view, "lines of code committed" is not the best benchmark of quality... perhaps aiming for 2x–10x volume increases with a closer eye on quality is wiser than chasing 100x.

  • The main effect shouldn't be speed — it should be an expansion of what's possible because you can now tackle problems that wouldn't have been worth the effort before.

BOTTOM LINE: Think declaratively, invest in specs and testing, and treat AI agents as extraordinary amplifiers of your expertise. Dream up something big and go build it... it's never been easier!

The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
