Two books have been published so far in Pearson's "Jon Krohn A.I. Signature Series"... and now both are near the top of Amazon's "Artificial Intelligence" bestsellers list!
Sinan Ozdemir's "Building Agentic A.I." (circled in purple) is in 9th while Sadie St Lawrence's "Becoming an A.I. Orchestrator" (circled in red) is in 11th.
Both books are excellent (as the Amazon reviews quantify) and they are complementary — I (of course!) highly recommend them both.
Book Two in the Pearson AI Signature Series Has Arrived
Announcing today: The second book in my "Pearson AI Signature Series" is "Becoming an AI Orchestrator" by the inimitable Sadie St. Lawrence!
Creative Machines: AI in Music and Art, with Prof. Maya Ackerman
Humans haven't been able to distinguish Mozart from machine since the 1980s... why is it that we're only now freaking out about A.I. creativity? Prof. Maya Ackerman explains in today's episode!
More on Maya:
• Associate professor of computer science and engineering at Santa Clara University, where she specializes in generative A.I. research, a field she’s been immersed in for decades.
• Author of the brand-new book "Creative Machines: A.I., Art & Us".
• Co-founder and CEO of lyric- and music-generation startup WaveAI.
• Holds a PhD in computer science from the University of Waterloo.
Today's episode should be accessible — and fascinating! — to any interested listener.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
Odds of AGI by 2040? LEAP Expert Forecasts and Workforce Implications
What are the odds of AGI (roughly, a machine with all the cognitive abilities of a human adult) by 2040? Based on predictions by >300 experts, read on for the skinny...
LLMs Are Delighted to Help Phishing Scams
Reuters recently tested 6 major LLMs (Grok, ChatGPT, Meta AI, Claude, DeepSeek, Gemini) to assess whether they'd create phishing content... with minor prompt adjustments, 4 out of 6 complied — yikes!
THE INVESTIGATION
Reporters from Reuters requested phishing emails targeting elderly people, fake IRS/bank messages, and tactical scam advice.
THE RESULTS
• Despite initial refusals across the board, relatively simple prompt modifications bypassed safety guardrails.
• Grok, for example, generated a fake charity phishing email targeting the elderly with urgency tactics like "Click now to act before it's too late!"
• When tested on 100 California seniors, the A.I.-generated messages persuaded some of them to click on malicious links, often because the messages seemed urgent or familiar.
REAL-WORLD IMPACT
• The FBI reports phishing is the #1 cybercrime in the U.S., with billions of messages sent daily.
• BMO Bank, as one corporate example, currently blocks 150,000-200,000 phishing emails per month targeting employees... a representative says the problem is escalating: "The numbers never go down, they only go up."
• Cybersecurity experts state criminals are already using A.I. for faster, more sophisticated phishing campaigns.
IMPLICATIONS FOR THOSE OF US IN THE AI INDUSTRY
• LLM misuse is an industry-wide challenge affecting all major frontier labs.
• The investigation reveals a fundamental tension between making AI "helpful" and making it "harmless", highlighting the need for more robust safety guardrails across AI systems.
KEY TAKEAWAYS
• For A.I. Builders: Keep security implications front and center when developing applications.
• For users: The same LLMs that help you write emails can help bad actors craft convincing scams... stay vigilant and educate vulnerable populations (e.g., seniors) about A.I.-enhanced phishing threats, which will only become more compelling and more frequent.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
Automating Code Review with AI, feat. CodeRabbit’s David Loker
Today, enjoy hearing from the super-intelligent engineer David Loker on how A.I. is transforming software development by dramatically accelerating code reviews and automatically improving code bases. It's a great one!
(He also, like me, is a big fan of GPT-5... hear why later in the episode.)
More on David:
• Director of A.I. at CodeRabbit (who've raised $88m in venture capital including a $60m Series B a couple weeks ago, congrats!)
• Previously Lead Data Scientist, ML Engineer and Senior Software Engineer at firms like Netflix and Amazon.
• Holds a Master of Mathematics in Computer Science from the University of Waterloo.
Today's episode will be particularly appealing to software developers and other hands-on practitioners (data scientists, ML engineers, etc.) but David is an outstanding communicator of complex info so any interested listener will enjoy it.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
AI is Disrupting Journalism: The Good, The Bad and The Opportunity
Back in Episode #896, I argued that AI probably won’t be taking your job anytime soon. I followed that up in Episode #904 by discussing how some industries are nevertheless being rapidly and thoroughly disrupted by AI; in that episode, I focused on how AI is overhauling the advertising industry in particular. My post announcing the episode on LinkedIn generated a lot of discussion in the comments and garnered over 50,000 impressions within the first few hours of posting. That response led me to the idea of a series of Friday episodes covering how particular industries are, like advertising, being rapidly and thoroughly overhauled by AI, with lessons for everyone on how we can adapt to this inevitable change and potentially leverage it to thrive professionally.
Neuroscience, AI and the Limitations of LLMs, with Dr. Zohar Bronfman
I was blown away by today's guest, the brilliant dual-PhD Zohar Bronfman, as we discussed neuroscience, A.I., and why predictive models offer a better ROI than generative ones. Enjoy!
Dr. Bronfman:
• Is the co-founder and CEO of Pecan AI, a predictive analytics platform that has raised over $100m in venture capital.
• Holds two PhDs — one in computational neuroscience and another in philosophy — bringing a deep, multidisciplinary lens to the design and impact of A.I. systems.
• Focuses on the evolution of machine learning from statistical models to agentic systems that influence real-world outcomes.
Today’s episode will be fascinating for every listener.
In it, Zohar details:
• The trippy implications of the reality that your brain makes decisions hundreds of milliseconds before you're consciously aware of them.
• The intelligence feat that bumblebees can perform but current A.I. cannot, with implications for the realization of human-like intelligence in machines.
• Why predictive models are more important than generative models for businesses, but how generative LLMs can nevertheless make building and deploying predictive models much easier and more accessible.
• The rollercoaster journey that led him to create a sensationally successful A.I. startup immediately upon finishing his academic degrees.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
Why RAG Makes LLMs Less Safe (And How to Fix It), with Bloomberg’s Dr. Sebastian Gehrmann
In today's episode, A.I. researcher Dr. Sebastian Gehrmann details what RAG is and why it makes LLMs *less* safe... despite popular perception of the opposite.
Sebastian:
• Is Head of Responsible A.I. at Bloomberg, the huge New York-based financial, software, data, and media company with 20,000 employees.
• Previously, as Head of NLP at Bloomberg, he directed the development and adoption of language technology to bring the best A.I.-enhanced products to the Bloomberg Terminal.
• Prior to Bloomberg, he was a senior researcher at Google, where he worked on the development of large language models, including the groundbreaking PaLM model, and he contributed to the open BLOOM model.
• Holds a Ph.D. in computer science from Harvard University.
Today’s episode skews slightly toward our more technical listeners like data scientists, A.I. engineers and software developers, but anyone who’d like to be up to date on the latest A.I. research may want to give it a listen.
In today’s episode, Sebastian details:
• The shocking discovery that retrieval-augmented generation (RAG) actually makes LLMs LESS safe, despite the popular perception of the opposite.
• Why the difference between 'helpful' and 'harmless' A.I. matters more than you may think.
• The hidden “attack surfaces” that emerge when you combine RAG with enterprise data (a minimal RAG sketch follows this list).
• The problems that can happen when you push LLMs beyond their intended context window limits.
• What you can do to ensure your LLMs are Helpful, Honest and Harmless for your particular use cases.
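To make the retrieval-then-generation flow concrete, here is a minimal RAG sketch of my own (not taken from the episode). A toy keyword-overlap retriever stands in for a real vector store, and the retrieved text is stuffed into the prompt; note how anything in the retrieved documents reaches the model, which is exactly the kind of attack surface discussed above. It assumes the openai package is installed and an OPENAI_API_KEY is set; the model name and sample documents are placeholders.

```python
# Minimal RAG sketch: retrieve relevant documents, stuff them into the prompt,
# then ask the model to answer only from that context.
# Assumes `pip install openai` and OPENAI_API_KEY set; "gpt-4o-mini" and the
# sample documents below are placeholders for illustration only.
from openai import OpenAI

DOCUMENTS = [
    "Q3 revenue grew 12% year over year, driven by subscription renewals.",
    "The May 4 incident was caused by an expired TLS certificate.",
    "Employee travel must be booked through the approved corporate portal.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Toy retriever: rank documents by keyword overlap with the query
    (real systems use vector embeddings and a vector database)."""
    q_words = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return ranked[:k]

def answer(query: str) -> str:
    context = "\n".join(retrieve(query, DOCUMENTS))
    # New attack surface: whatever ends up in `context` (e.g. instructions
    # hidden inside enterprise documents) is passed straight to the model.
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {query}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(answer("What caused the May 4 incident?"))
```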
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
My Four-Hour Agentic AI Workshop is Live and 100% Free
In case you missed my post last week, my four-hour Agentic A.I. workshop (with Ed Donner, pictured) is live. 8,000 people have already watched it! Here's what they're saying...
Agentic AI Hands-On in Python: MCP, CrewAI and OpenAI Agents SDK (by Jon Krohn and Ed Donner)
Now live! Four hours long and 100% free, this hands-on workshop covers all the Agentic A.I. theory and tools you need to develop and deploy multi-agent teams with Python.
Beautifully shot by a professional film crew (led by the exceptional Lucie McCormick) at the Open Data Science Conference (ODSC) East in Boston a few weeks ago and then meticulously edited by SuperDataScience's inimitable Mario Pombo, this training (within the GenAI-forward Cursor IDE) features all of today's essential agent frameworks:
OpenAI Agents SDK
CrewAI
Anthropic's Model Context Protocol (MCP)
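To give a flavor of what developing a multi-agent team in Python can look like, here is a minimal CrewAI sketch. It is my own illustration rather than an excerpt from the workshop: the agent roles, goals, and tasks are placeholders, and it assumes the crewai package is installed and an OpenAI API key is configured for CrewAI's default underlying LLM.

```python
# Minimal CrewAI sketch: two agents collaborate on sequential tasks.
# Assumes `pip install crewai` and an OPENAI_API_KEY in the environment
# for CrewAI's default LLM; roles and task descriptions are placeholders.
from crewai import Agent, Task, Crew

researcher = Agent(
    role="Research Analyst",
    goal="Gather key points about agentic AI frameworks",
    backstory="You track AI tooling and distill developments for engineers.",
)
writer = Agent(
    role="Technical Writer",
    goal="Turn research notes into a short, readable briefing",
    backstory="You write concise technical summaries for a general audience.",
)

research_task = Task(
    description="List three key points about modern multi-agent frameworks.",
    expected_output="A bulleted list of three points.",
    agent=researcher,
)
writing_task = Task(
    description="Write a one-paragraph briefing based on the research notes.",
    expected_output="One concise paragraph.",
    agent=writer,
)

# Tasks run in order; the writer receives the researcher's output as context.
crew = Crew(agents=[researcher, writer], tasks=[research_task, writing_task])
print(crew.kickoff())
```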
From design considerations through to practical implementation tips, completing all four modules in this video will give you all the knowledge and skills needed to create effective multi-agent systems. The four modules are:
Defining Agents
Designing Agents
Developing Agents
The Future of Agents
The coding elements are led by the wonderful Ed Donner, whom many of you will already know as one of the very best in the world at creating and teaching hands-on A.I. content.
We received rave reviews for the session at ODSC East and the lecture hall was standing-room only for the entire duration, so I anticipate that you'll love it too!
Watch the full training here: youtu.be/LSk5KaEGVk4
Conversational AI is Overhauling Data Analytics, with Martin Brunthaler
Fascinating new episode for you from serial entrepreneur/CTO Martin Brunthaler on how GenAI and Agentic A.I. are transforming data analytics today... and how analytics will continue to evolve in the coming years.
Martin Brunthaler:
• CTO of Adverity, an Austrian data analytics platform he co-founded a decade ago and that has since raised over $160m in venture capital.
• Before Adverity, Martin was co-founder and CTO at two other European tech start-ups, giving him over 20 years of combined experience in starting, scaling and exiting companies across multiple industries including eCommerce, media and mobile.
• Holds an engineering diploma (equivalent to a Bachelor's degree) from the Salzburg University of Applied Sciences in Austria.
Today’s episode should be of interest to just about anyone who listens to this podcast because it touches on data analytics, transforming user experiences with modern AI capabilities, and growing tech businesses.
In today’s episode, Martin details:
• How a childhood fascination with computer programming evolved into founding a globally leading platform for marketing data analytics.
• What "data democratization" really means and how the traditional dashboard-based approach to data reporting is failing businesses.
• Why data analysts are spending too much time on "busy work" instead of delivering business value.
• How conversational AI is overhauling how data insights are gleaned for hands-on data practitioners and business users alike.
• His no-nonsense tips for tech startup success.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
Generative AI for Business, with Kirill Eremenko and Hadelin de Ponteves
Craving an intro to building and deploying commercially successful Generative A.I. applications? In today's episode, superstar data-science instructors Kirill and Hadelin (>5 million students between them) will fill you in!
Kirill Eremenko is one of our two guests today. He's:
• Founder and CEO of SuperDataScience, an e-learning platform.
• Founded the SuperDataScience Podcast in 2016 and hosted the show until he passed me the reins four years ago.
Our second guest is Hadelin de Ponteves:
• Was a data engineer at Google before becoming a content creator.
• In 2020, took a break from data science content to produce and star in a Bollywood film featuring "Miss Universe" Harnaaz Sandhu.
Together, Kirill and Hadelin:
• Have created dozens of data science courses; they are the most popular data science instructors on the Udemy platform, with over five million students between them!
• They also co-founded CloudWolf, an education platform for quickly mastering Amazon Web Services (AWS) certification.
• And, in today’s episode, they announce (for the first time anywhere!) another brand-new venture they co-founded together.
Today’s episode is intended for anyone who’s interested in real-world, commercial applications of Generative A.I. — a technical background is not required.
In today’s episode, Kirill and Hadelin detail:
• What generative A.I. models like Large Language Models are and how they fit within the broader category of “Foundation Models”.
• The 12 crucial factors to consider when selecting a foundation model for a given application in your organization.
• The 8 steps to ensure foundation models are deployed with commercial success.
• Many real-world examples of how companies are customizing A.I. models quickly and at remarkably low cost.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
Flexible AI Deployments Are Critical, with Chris Bennett and Joseph Balsamo
Today's episode features heavy hitters from Dell (Chris Bennett) and Iternal (Joseph Balsamo) detailing why we must have flexibility in our A.I. model deployment (and why generative A.I. is overhyped)!
In a bit more detail, today's guests are:
• Chris Bennett: Global CTO for Data & A.I. Solutions at Dell Technologies
• Joseph Balsamo: Sr. VP of Product Development at Iternal Technologies
This episode was filmed live at Insight Partners' ScaleUp:AI conference in New York a few weeks ago. Thanks to George Mathew, Jennifer Jordan, Kristen Zeck and Deanna Uzarski for inviting me and making the magic of this session happen.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
Virtual Humans and AI Clones, with Natalie Monbiot
Today, the clever and astoundingly well-spoken Natalie Monbiot provides a fascinating, mind-expanding episode on virtual humans, A.I. clones and the emerging virtual-human economy.
Natalie:
• Is Head of Strategy and a Founding Team member of Hour One, a leader in virtual-human video generation that raised $20m in a Series A led by Insight Partners.
• Through her own consultancy, EKLEKTIK, she advises virtual-human and A.I.-clone companies.
• Regularly speaks at the world's largest conferences, including Web Summit and SXSW.
• Holds a Master's in Languages and Literature from the University of Oxford.
Today's episode will be of interest to everyone. In it, Natalie details:
• What virtual humans are.
• How virtual humans will buy us time and unleash a virtual-human economy.
• The ethical quandaries and challenges associated with creating virtual twins.
• What distinguishes virtual humans from deep fakes.
(P.S.: This is the first time we've ever shot an episode with three video cameras... if you watch the video version, let me know if you think it's worth the extra effort and investment!)
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
NotebookLM: Jaw-Dropping Podcast Episodes Generated About Your Documents
Today’s episode is on Google’s newly released (and frankly sensational) product NotebookLM. All you need is a Google login, which is as easy as having a Gmail account, and use of NotebookLM is totally free.
How to Be a Supercommunicator, with Charles Duhigg
Today, Pulitzer Prize winner and NY Times bestselling author Charles Duhigg reveals how you can become a "Supercommunicator", allowing you to connect with anyone, form deep bonds and get more done with others.
More on Charles:
• Pulitzer Prize-winning journalist who currently writes for The New Yorker.
• His first book, "The Power of Habit", was published in 2012, spent over three years on New York Times bestseller lists and was translated into 40 languages.
• His second book, "Smarter Faster Better", was published in 2016 and was also a New York Times bestseller.
• Is a graduate of Yale University and Harvard Business School.
Today’s episode should be of great interest to everyone. In it, Charles provides the key takeaways from "Supercommunicators" including:
• Step-by-step instructions on how to connect meaningfully with anyone.
• The three types of conversation and how to ascertain which one you’re in at any given moment.
• How to have productive conflicts without the conversation spiraling out of control.
• How generative A.I. is transforming our conversations today and how the technology may transform them even more dramatically in the future.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
Aligning Large Language Models, with Sinan Ozdemir
For today’s quick Five-Minute Friday episode, the exceptional author, speaker and entrepreneur Sinan Ozdemir provides an overview of what it actually means for an LLM to be “aligned”.
More on Sinan:
• Is Founder and CTO of LoopGenius, a generative AI startup.
• Has authored several excellent books, including, most recently, the bestselling "Quick Start Guide to Large Language Models".
• Is a serial AI entrepreneur, including founding a Y Combinator-backed generative AI startup way back in 2015 that was later acquired.
This episode was filmed live at the Open Data Science Conference (ODSC) East in Boston last month. Thanks to ODSC for providing recording space.
This is episode #784!
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
Generative A.I. for Solar Power Installation, with Navdeep Martin
A startling 70% of solar-power projects fail. In today's episode, hear how Navdeep Martin's startup Flypower is using Generative A.I. to ensure we install renewable energy sources more effectively and efficiently.
Navdeep:
• Co-founder and CEO of Flypower, a generative A.I. startup dedicated to ensuring clean-energy projects, particularly solar-power projects, succeed.
• Previously held senior product leadership roles at VC-backed Bay Area AI startups as well as for AI products at Comcast and The Washington Post.
• Before that, was a software engineer for the CIA.
• Holds a degree in computer science from William & Mary and an MBA from the University of Virginia.
Today’s episode will appeal to anyone who’d like to hear about the evolution of generative A.I. technologies in products and applications, including how you can best make use of the various categories of Gen-A.I. technologies today and how, in particular, A.I. is being used to overcome the social and regulatory hurdles associated with combating climate change.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
2024 Data Science Trend Predictions
What are the big A.I. trends going to be in 2024? In today's episode, the magnificent data-science leader and futurist Sadie St. Lawrence fills us in by methodically making her way from the hardware layer (e.g., GPUs) up to the application layer (e.g., GenAI apps).