Modern, cutting-edge A.I. depends almost entirely on the Transformer. But now the first serious contender to the Transformer has emerged, and it's called Mamba; we've got the full paper, "Mamba: Linear-Time Sequence Modeling with Selective State Spaces", written by researchers at Carnegie Mellon and Princeton.
How to Speak so You Blow Listeners’ Minds, with Cole Nussbaumer Knaflic
Cole Nussbaumer Knaflic's book, "storytelling with data", has sold over 500k copies... wild! In today's episode, Cole details the best tricks from her latest book, "storytelling with you" — a goldmine on how to inform and profoundly engage people.
Cole:
• Is the author of “storytelling with data”, which has sold half a million copies, been translated into over 20 languages and is used by more than 100 universities. Nearly a decade after publication, it is still the #1 bestseller in several Amazon categories.
• Also wrote the follow-on, hands-on “storytelling with data: let’s practice!” a bestseller in its own right.
• Serves as the Founder and CEO of the storytelling with data company, which provides data-storytelling workshops and other resources.
• Previously she was a People Analytics Manager at Google.
• Holds a degree in math as well as an MBA from the University of Washington.
Today’s episode will be of interest to anyone who’d like to communicate so effectively and compellingly that people are blown away.
In this episode, Cole details:
• Her top tips for planning, creating and delivering an incredible presentation.
• A few special tips for communicating data effectively for all of you data nerds like me.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
AlphaGeometry: AI is Suddenly as Capable as the Brightest Math Minds
Google DeepMind's open-sourced AlphaGeometry blends "fast thinking" (like intuition) with "slow thinking" (like careful, conscious reasoning) to enable a big leap forward in A.I. capability and match human Math Olympiad gold medalists on geometry problems.
KEY CONTEXT
• A couple of weeks ago, DeepMind published on AlphaGeometry in the prestigious peer-reviewed journal Nature.
• DeepMind focused on geometry due to its demand for high-level reasoning and logical deduction, posing a unique challenge that traditional ML models struggle with.
MASSIVE RESULTS
• AlphaGeometry tackled 30 International Mathematical Olympiad problems, solving 25. This outperforms human Olympiad bronze and silver medalists' averages (who solved 19.3 and 22.9, respectively) and closely rivals gold medalists (who solved 25.9).
• This new system crushes the previous state-of-the-art A.I., which solved only 10 out of 30 problems.
• Beyond solving problems, AlphaGeometry also generates understandable proofs, making A.I.-generated solutions more accessible to humans.
HOW?
• AlphaGeometry uses a new method of generating synthetic theorems and proofs, simulating 100 million unique examples to overcome the limitations of (expensive, laborious) human-generated proofs.
• It combines a neural (deep learning) language model for intuitive guesswork with a symbolic deduction engine for logical problem-solving, mirroring "fast" and "slow thinking" processes akin to human cognition (per Daniel Kahneman's "Thinking, Fast and Slow" book).
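The neural-plus-symbolic loop can be caricatured in a few lines of Python (a toy illustration with made-up facts and rules, not DeepMind's actual code): the symbolic engine forward-chains deductions until it stalls, then the "neural" side proposes an auxiliary construction and deduction resumes.

```python
# Toy sketch of an AlphaGeometry-style loop (not DeepMind's actual code).
# A "neural" proposer suggests auxiliary facts ("fast thinking") and a
# symbolic engine deduces consequences by forward chaining ("slow thinking").

RULES = {  # toy deduction rules: premises -> conclusion
    ("A", "B"): "C",
    ("C", "D"): "GOAL",
}

def symbolic_deduce(facts):
    """Forward-chain over RULES until no new fact can be added."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES.items():
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def neural_propose(facts):
    """Stand-in for the language model: propose a missing auxiliary fact."""
    return "D"  # e.g. "construct the midpoint of segment AB"

def prove(facts, goal, max_constructions=3):
    for _ in range(max_constructions):
        closure = symbolic_deduce(facts)
        if goal in closure:
            return True  # pure deduction now suffices
        # Deduction stalled: ask the "neural" model for a new construction.
        facts = list(facts) + [neural_propose(facts)]
    return False

print(prove(["A", "B"], "GOAL"))  # True: deduction succeeds after one proposal
```

In the real system the proposer is a language model trained on 100 million synthetic proofs and the deduction engine handles geometric relations, but the alternation between guessing and grinding is the same.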
IMPACT
• A.I. that can "think fast and slow" like AlphaGeometry could generalize across mathematical fields and potentially other scientific disciplines, pushing the boundaries of human knowledge and problem-solving capabilities.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
Brewing Beer with A.I., with Beau Warren
In today's episode, Beau Warren of the innovative "Species X" brewery details how we collaborated on an A.I. model to craft the perfect beer. The result is dubbed "Krohn&Borg" lager, and you can join us in Columbus, Ohio on Thursday night to try it yourself! 🍻
A Code-Specialized LLM Will Realize AGI, with Jason Warner
Don't miss this mind-blowing episode with Jason Warner, who compellingly argues that code-specialized LLMs will bring about AGI. His firm, poolside, was launched to achieve this and facilitate an "AI-led, developer-assisted" coding paradigm en route.
Jason:
• Is Co-Founder and CEO of poolside, a hot venture-capital-backed startup that will shortly launch its code-specialized Large Language Model and an accompanying interface designed specifically for people who code, like software developers and data scientists.
• Previously was Managing Director at the renowned Bay-Area VC Redpoint Ventures.
• Before that, held a series of senior software-leadership roles at major tech companies including being CTO of GitHub and overseeing the Product Engineering of Ubuntu.
• Holds a degree in computer science from Penn State University and a Master's in CS from Rensselaer Polytechnic Institute.
Today’s episode should be fascinating to anyone keen to stay abreast of the state of the art in A.I. today and what could happen in the coming years.
In today’s episode, Jason details:
• Why a code-generation-specialized LLM like poolside’s will be far more valuable to humans who code than generalized LLMs like GPT-4 or Gemini.
• Why he thinks AGI itself will be brought about by a code-specialized ML model like poolside’s.
AI is Disadvantaging Job Applicants, But You Can Fight Back
In today's important episode, the author, professor and journalist Hilke Schellmann details how specific HR-tech firms misuse A.I. to facilitate biased hiring, promotion, and firing decisions. She also covers how you can fight back and how A.I. can be done right!
Hilke’s book, "The Algorithm: How A.I. Decides Who Gets Hired, Monitored, Promoted, and Fired and Why We Need to Fight Back Now", was published earlier this month. In the exceptionally clear and well-written book, Hilke draws on exclusive information from whistleblowers, internal documents and real‑world tests to detail how many of the algorithms making high‑stakes decisions are biased, racist, and do more harm than good.
In addition to her book, Hilke:
• Is Assistant Professor of Journalism and A.I. at New York University.
• Previously worked in journalism roles at The Wall Street Journal, The New York Times and VICE Media.
• Holds a Master’s in investigative reporting from Columbia University.
Today’s episode will be accessible and interesting to anyone. In it, Hilke details:
• Examples of specific HR-technology firms that employ misleading Theranos-like tactics.
• How A.I. *can* be used ethically for hiring and throughout the employment lifecycle.
• What you can do to fight back if you suspect you’ve been disadvantaged by an automated process.
The Five Levels of AGI
Artificial General Intelligence (AGI) is a term thrown around a lot, but it's been poorly defined. Until now!
A Continuous Calendar for 2024
Today's super-short episode provides a "Continuous Calendar" for 2024. In my view, far superior to the much more common Weekly or Monthly calendar formats, a Continuous Calendar can keep you on top of all your projects and commitments all year 'round.
I know I’m not the only one who loves Continuous Calendars, because my annual blog post providing an updated Continuous Calendar for the new year is reliably one of my most popular blog posts. The general concept is that Continuous Calendars enable you to:
1. Overview large blocks of time at a glance (I can easily fit six months on a standard piece of paper).
2. Get a more realistic representation of how much time there is between two given dates because the dates don’t get separated by arbitrary 7-day or ~30-day cutoffs.
Continuous Calendars work so effectively because they are one big matrix in which every row corresponds to a week and every column corresponds to a day of the week.
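Because the layout is just a week-by-weekday matrix, you can sketch one in a few lines of Python (a minimal illustration; the function name and layout choices here are mine, not the spreadsheet's):

```python
from datetime import date, timedelta

def continuous_calendar(year):
    """Build a continuous calendar: one row per week, one column per weekday."""
    # Start on the Monday on or before January 1st of the given year.
    day = date(year, 1, 1)
    day -= timedelta(days=day.weekday())
    rows = []
    while day.year <= year:
        # One row = one week, Monday through Sunday.
        rows.append([day + timedelta(days=i) for i in range(7)])
        day += timedelta(weeks=1)
    return rows

weeks = continuous_calendar(2024)
print(len(weeks))               # 53 rows cover all of 2024
print(weeks[0][0].isoformat())  # first cell: 2024-01-01 (a Monday)
```

Because the rows run continuously, the gap between any two dates is just the count of cells between them, with no month boundaries getting in the way.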
So if you’d like to get started today with your own super-efficient Continuous Calendar in 2024, simply head to jonkrohn.com/cal24.
At that URL, you’ll find a Google Sheet with the full 52 weeks of the year, which will probably suit most people’s needs. If you print it on standard US 8.5” x 11” paper, it should get split exactly so that the first half of the year is on page one and the second half of the year is on page two.
The calendar template is simple: It’s all black except that we’ve marked U.S. Federal Holidays with red dates. If you’re in another region, or you’d like to adapt our continuous calendar for any reason at all, simply make a copy of the sheet or download it, and then customize it to your liking.
2024 Data Science Trend Predictions
What are the big A.I. trends going to be in 2024? In today's episode, the magnificent data-science leader and futurist Sadie St. Lawrence fills us in by methodically making her way from the hardware layer (e.g., GPUs) up to the application layer (e.g., GenAI apps).
To a Peaceful 2024
Today I reflect on the wild advances in A.I. over the past year, opine on how A.I. could make the world more peaceful, and wrap 2023 up by singing a tune. Thanks to all eight humans of the Super Data Science Podcast for their terrific work all year 'round:
• Ivana Zibert: Podcast Manager
• Natalie Ziajski: Operations & Revenue
• Mario Pombo: Media Editor
• Serg Masís: Researcher
• Sylvia Ogweng: Writer
• Dr. Zara Karschay: Writer
• Kirill Eremenko: Founder
It's these terrifically talented and diligent people that make it possible for us to create 104 high-quality podcast episodes per year for now over seven years running 🙏
I'm looking forward to the next 104 episodes with awesome guests and (no doubt!) oodles of revolutionary new machine learning breakthroughs to cover. To a wonderful and hopefully much more peaceful 2024 🥂
How to Integrate Generative A.I. Into Your Business, with Piotr Grudzień
Want to integrate Conversational A.I. ("chatbots") into your business and ensure it's a (profitable!) success? Then today's episode with Quickchat AI co-founder Piotr Grudzień, covering both customer-facing and internal use cases, will be perfect for you.
Piotr:
• Is Co-Founder and CTO of Quickchat AI, a Y Combinator-backed conversation-design platform that lets you quickly deploy and debug A.I. assistants for your business.
• Previously worked as an applied scientist at Microsoft.
• Holds a Master’s in computer engineering from the University of Cambridge.
Today's episode should be accessible to technical and non-technical folks alike.
In this episode, Piotr details:
• What it takes to make a conversational A.I. system successful, whether that A.I. system is externally facing (such as a customer-support agent) or internally facing (such as a subject-matter expert).
• What it’s been like working in the fast-developing Large Language Model space over the past several years.
• What his favorite Generative A.I. (foundation model) vendors are.
• What the future of LLMs and Generative A.I. will entail.
• What it takes to succeed as an A.I. entrepreneur.
Happy Holidays from All of Us
Today's podcast episode is a quick one from all eight of us humans at the SuperDataScience Podcast, wishing you the happiest of holiday seasons ☃️
How to Visualize Data Effectively, with Prof. Alberto Cairo
The renowned data-visualization professor and many-time bestselling author Dr. Alberto Cairo is today's guest! Want a copy of his fantastic new book, "The Art of Insight"? I'm giving away ten physical copies; see below for how to get one.
Alberto:
• Is the Knight Chair in Infographics and Data Visualization at the University of Miami.
• Leads visualization efforts at the University of Miami’s Institute for Data Science and Computing.
• Is a consultant for Google, the US government and many more prominent institutions.
• Has written three bestselling books on data visualization, all in the past decade.
• His fourth book, "The Art of Insight", was just published.
Today’s episode will be of interest to anyone who’d like to understand how to communicate with data more effectively.
In this episode, which tracks the themes covered in his "The Art of Insight" book, Alberto details:
• How data visualization relates to the very meaning of life.
• What it takes to enter a meditation-like flow state when creating visualizations.
• When the “rules” of data communication should be broken.
• His data visualization tips and tricks.
• How infographics can drive social change.
• How extended reality, A.I. and other emerging technologies will change data viz in the coming years.
Q*: OpenAI’s Rumored AGI Breakthrough
Today’s episode is all about a rumored new model out of OpenAI called Q* (pronounced “Q star”) that has been causing quite a stir, both for its purported role in Altmangate and its implications for Artificial General Intelligence (AGI).
Key context:
• Q* is reported to have advanced capabilities in solving complex math problems expressed in natural language, indicating a significant leap in A.I.
• The rumors about Q* emerged during OpenAI's corporate drama involving the firing and re-hiring of CEO Sam Altman.
• Reports suggested a connection between Q*'s development and the OpenAI upheaval, with staff expressing concerns about its potential dangers to humanity (no definitive evidence links Q* to the OpenAI CEO controversy, however, leaving its role in the incident ambiguous).
Research overview:
• OpenAI's recently published research on solving grade-school word-based math problems (e.g., “The cafeteria had 23 apples. They used 20 for lunch and bought 6 more. How many apples do they have?”) hints at broader implications of step-by-step reasoning in A.I.
• While today's Large Language Models (LLMs) show better results on logical problems when we use chain-of-thought prompting ("work through the problem step by step"), the contemporary LLMs do so linearly (they don't go back to correct themselves or explore alternative intermediate steps), which limits their capability.
• To develop a model that can be trained and evaluated at each intermediate step, OpenAI gathered tons of human feedback on math-word problems, amassing a dataset of 800,000 individual intermediate steps across 75,000 problems.
• Their approach involves an LLM generating solutions at each step and a second model acting as a verifier.
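That generate-then-verify loop can be sketched in toy form (the function names and scoring here are invented for illustration; this is not OpenAI's code): a generator proposes several candidate next steps, a verifier scores each, and the system commits to the best-rated one rather than being locked into the first thing generated.

```python
import random

# Toy sketch of step-level generation plus verification (illustrative only).

def generate_steps(problem, partial, n=4):
    """Stand-in for the generator LLM: propose n candidate next steps."""
    return [f"candidate-{i}" for i in range(n)]

def verify_step(problem, partial, step):
    """Stand-in for the trained verifier: score how promising a step looks."""
    return random.random()

def solve(problem, max_steps=3):
    partial = []  # the chain of reasoning steps committed to so far
    for _ in range(max_steps):
        candidates = generate_steps(problem, partial)
        # Unlike plain linear chain-of-thought, keep whichever candidate
        # the verifier rates highest at this intermediate step.
        best = max(candidates, key=lambda s: verify_step(problem, partial, s))
        partial.append(best)
    return partial

steps = solve("The cafeteria had 23 apples...")
print(len(steps))  # 3 committed steps, each chosen by the verifier
```

Training the verifier on step-level human labels (rather than only on final answers) is what lets the system catch a derailed solution midway instead of at the end.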
The Q* connection:
• The above research merges LLM reasoning abilities with search-tree methods, inspired by Google DeepMind's AlphaGo algorithm and its ilk.
• Q* itself is a decades-old reinforcement-learning concept, used when training models to simulate and evaluate prospective moves.
• Q*'s potential for automated self-play could lead to significant advancements in AGI, particularly by reducing reliance on (expensive) human-generated training data.
Implications:
• Q* could yield significant societal benefits (e.g., by solving mathematical proofs humans can't or discovering new physics), albeit with potentially high inference costs.
• Q* raises concerns about security and the unresolved challenges in achieving AGI.
• While Q* isn't the final leap towards AGI, it would represent a major milestone in general reasoning abilities.
AI is Eating Biology and Chemistry, with Dr. Ingmar Schuster
For today's exceptional episode, I traveled to Berlin to find out how the visionary Dr. Ingmar Schuster is using A.I. to transform biology and chemistry research, thereby helping solve the world's most pressing problems, from cancer to climate change.
Ingmar:
• Is CEO and co-founder of Exazyme, a German biotech startup that aims to make chemical design as easy as using an app.
• Previously he worked as a research scientist and senior applied scientist at Zalando, the gigantic European e-retailer.
• Completed his PhD in Computer Science at Leipzig University and postdocs at the Université Paris Dauphine and the Freie Universität Berlin, throughout which he focused on using Bayesian and Monte Carlo approaches to model natural language and time series.
Today’s episode is on the technical side so may appeal primarily to hands-on practitioners such as data scientists and machine learning engineers.
In this episode, Ingmar details:
• What kernel methods are and how he uses them at Exazyme to dramatically speed the design of synthetic biological catalysts and antibodies for pharmaceutical firms and chemical producers, with applications including fixing carbon dioxide more effectively than plants and allowing our own immune system to detect and destroy cancer.
• When “shallow” machine learning approaches are more valuable than deep learning approaches.
• Why the benefits of A.I. research far outweigh the risks.
• What it takes to become a deep-tech entrepreneur like him.
Engineering Biomaterials with Generative AI, with Dr. Pierre Salvy
Today, the brilliant Dr. Pierre Salvy details the "double deep-tech sandwich" that blends cutting-edge A.I. (generative LLMs) with cutting-edge bioengineering (creating new materials). This is a fascinating one, shot live at the Merantix AI Campus in Berlin.
Pierre:
• Has been at Cambrium for three years: initially as Head of Computational Biology and, for the past two years, as Head of Engineering, growing the team from 2 to 7 to bridge the gap between wet-lab biology, data science, and scientific computing.
• Holds a PhD in Biotechnology from EPFL in Switzerland and a Master’s in Math, Physics and Engineering Science from Mines in Paris.
Today’s episode touches on technical machine learning concepts here and there, but should largely be accessible to anyone.
In it, Pierre details:
• How data-driven R&D allowed Cambrium to go from nothing to tons of physical product sales inside two years.
• How his team leverages Large Language Models (LLMs) to be the biological-protein analogue of a ChatGPT-style essay generator.
Scikit-learn’s Past, Present and Future, with scikit-learn co-founder Dr. Gaël Varoquaux
For today's massive episode, I traveled to Paris to interview Dr. Gaël Varoquaux, co-founder of scikit-learn, the standard library for machine learning worldwide (downloaded over 1.4 million times PER DAY 🤯). In it, Gaël fills us in on sklearn's history and future.
More on Gaël:
• Actively leads the development of the ubiquitous scikit-learn Python library today, which has several thousand people contributing open-source code to it.
• Is Research Director at the famed Inria (the French National Institute for Research in Digital Science and Technology), where he leads the Soda ("social data") team that is focused on making a major positive social impact with data science.
• Has been recognized with the Innovation Prize from the French Academy of Sciences and many other awards for his invaluable work.
Today’s episode will likely be of primary interest to hands-on practitioners like data scientists and ML engineers, but anyone who’d like to understand the cutting edge of open-source machine learning should listen in.
In this episode, Gaël details:
• The genesis, present capabilities and fast-moving future direction of scikit-learn.
• How to best apply scikit-learn to your particular ML problem.
• How ever-larger datasets and GPU-based acceleration impact the scikit-learn project.
• How (whether you write code or not!) you can get started on contributing to a mega-impactful open-source project like scikit-learn yourself.
• Hugely successful social-impact data projects his Soda lab has had recently.
• Why statistical rigor is more important than ever and how software tools could nudge us in the direction of making more statistically sound decisions.
How to Officially Certify your AI Model, with Jan Zawadzki
In today's episode, learn from Jan Zawadzki how independent certification of A.I. models makes them safer and more reliable, gives you an advantage over your competitors, and, in the EU at least, will soon be mandatory!
Jan:
• Is CTO and Co-Managing Director of CertifAI, a startup that is an early mover in the fast-developing A.I. certification ecosystem.
• Was previously the Head of A.I. at CARIAD, the software development subsidiary of Volkswagen Group, where he grew the team from scratch to over 50 engineers.
A.I. Product Management, with Google DeepMind's Head of Product, Mehdi Ghissassi
The elite team at Google DeepMind cranks out one world-changing A.I. innovation after another. In today's episode, their affable Head of Product Mehdi Ghissassi shares his wisdom on how to design and release successful A.I. products.
Mehdi:
• Has been Head of Product at Google DeepMind — the world’s most prestigious A.I. research group — for over four years.
• Spent an additional three years at DeepMind before that as their Head of A.I. Product Incubation and a further four years before that in product roles at Google, meaning he has more than a decade of product leadership experience at Alphabet.
• Is a member of the Board of Advisors at CapitalG, Alphabet’s renowned venture capital and private equity fund.
• Holds five (!!!) Master’s degrees, including Master’s degrees in computer science and engineering from the École Polytechnique, one in International Relations from Sciences Po, and an MBA from Columbia Business School.
Today’s episode will be of interest to anyone who’s keen to create incredible A.I. products.
In this episode, Mehdi details:
• Google DeepMind’s bold mission to achieve Artificial General Intelligence (AGI).
• Game-changing DeepMind A.I. products such as AlphaGo and AlphaFold.
• How he stays on top of fast-moving A.I. innovations.
• The key ethical issues surrounding A.I.
• A.I.’s big social-impact opportunities.
• His guidance for investing in A.I. startups.
• Where the big opportunities lie for A.I. products in the coming years.
Humanoid Robot Soccer, with the Dutch RoboCup Team
In today's unique episode, robots from the Dutch Nao Team (Naos are the little humanoids shown in the photo) compete against each other at football (⚽️) while Dário Catarrinho, a developer on the team, describes the machine learning involved.
The Dutch Nao Team is one of many international teams that compete annually in RoboCup Federation tournaments. The lofty goal of the RoboCup competitions is to develop a team of humanoid robots able to win against the human World Cup Championship team by the year 2050. Very cool.
Dário, my human guest in today's episode, is Secretary of the Dutch Nao Team as well as a software developer on the team. He's also pursuing a degree in A.I. at the University of Amsterdam.
Most of today’s episode should be accessible to anyone, but occasionally Dário and I talk a bit technically about ML algorithms, so those brief parts might be most meaningful to hands-on practitioners.