What are the big A.I. trends going to be in 2024? In today's episode, the magnificent data-science leader and futurist Sadie St. Lawrence fills us in by methodically making her way from the hardware layer (e.g., GPUs) up to the application layer (e.g., GenAI apps).
To a Peaceful 2024
Today I reflect on the wild advances in A.I. over the past year, opine on how A.I. could make the world more peaceful, and wrap 2023 up by singing a tune. Thanks to all eight humans of the Super Data Science Podcast for their terrific work all year 'round:
• Ivana Zibert: Podcast Manager
• Natalie Ziajski: Operations & Revenue
• Mario Pombo: Media Editor
• Serg Masís: Researcher
• Sylvia Ogweng: Writer
• Dr. Zara Karschay: Writer
• Kirill Eremenko: Founder
It's these terrifically talented and diligent people that make it possible for us to create 104 high-quality podcast episodes per year for now over seven years running 🙏
I'm looking forward to the next 104 episodes with awesome guests and (no doubt!) oodles of revolutionary new machine learning breakthroughs to cover. To a wonderful and hopefully much more peaceful 2024 🥂
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
How to Integrate Generative A.I. Into Your Business, with Piotr Grudzień
Want to integrate Conversational A.I. ("chatbots") into your business and ensure it's a (profitable!) success? Then today's episode with Quickchat AI co-founder Piotr Grudzień, covering both customer-facing and internal use cases, will be perfect for you.
Piotr:
• Is Co-Founder and CTO of Quickchat AI, a Y Combinator-backed conversation-design platform that lets you quickly deploy and debug A.I. assistants for your business.
• Previously worked as an applied scientist at Microsoft.
• Holds a Master’s in computer engineering from the University of Cambridge.
Today's episode should be accessible to technical and non-technical folks alike.
In this episode, Piotr details:
• What it takes to make a conversational A.I. system successful, whether that A.I. system is externally facing (such as a customer-support agent) or internally facing (such as a subject-matter expert).
• What it’s been like working in the fast-developing Large Language Model space over the past several years.
• What his favorite Generative A.I. (foundation model) vendors are.
• What the future of LLMs and Generative A.I. will entail.
• What it takes to succeed as an A.I. entrepreneur.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
Happy Holidays from All of Us
Today's podcast episode is a quick one from all eight of us humans at the SuperDataScience Podcast, wishing you the happiest of holiday seasons ☃️
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
How to Visualize Data Effectively, with Prof. Alberto Cairo
The renowned data-visualization professor and many-time bestselling author Dr. Alberto Cairo is today's guest! Want a copy of his fantastic new book, "The Art of Insight"? I'm giving away ten physical copies; see below for how to get one.
Alberto:
• Is the Knight Chair in Infographics and Data Visualization at the University of Miami.
• Leads visualization efforts at the University of Miami’s Institute for Data Science and Computing.
• Is a consultant for Google, the US government and many more prominent institutions.
• Has written three bestselling books on data visualization, all in the past decade.
• Just published his fourth book, "The Art of Insight".
Today’s episode will be of interest to anyone who’d like to understand how to communicate with data more effectively.
In this episode, which tracks the themes covered in his "The Art of Insight" book, Alberto details:
• How data visualization relates to the very meaning of life.
• What it takes to enter a meditation-like flow state when creating visualizations.
• When the “rules” of data communication should be broken.
• His data visualization tips and tricks.
• How infographics can drive social change.
• How extended reality, A.I. and other emerging technologies will change data viz in the coming years.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
Q*: OpenAI’s Rumored AGI Breakthrough
Today’s episode is all about a rumored new model out of OpenAI called Q* (pronounced “Q star”) that has been causing quite a stir, both for its purported role in Altmangate and its implications for Artificial General Intelligence (AGI).
Key context:
• Q* is reported to have advanced capabilities in solving complex math problems expressed in natural language, indicating a significant leap in A.I.
• The rumors about Q* emerged during OpenAI's corporate drama involving the firing and re-hiring of CEO Sam Altman.
• Reports suggested a connection between Q*'s development and the OpenAI upheaval, with staff expressing concerns about its potential dangers to humanity (no definitive evidence links Q* to the OpenAI CEO controversy, however, leaving its role in the incident ambiguous).
Research overview:
• OpenAI's recently published research on solving grade-school word-based math problems (e.g., “The cafeteria had 23 apples. They used 20 for lunch and bought 6 more. How many apples do they have?”) hints at broader implications of step-by-step reasoning in A.I.
• While today's Large Language Models (LLMs) show better results on logical problems when we use chain-of-thought prompting ("work through the problem step by step"), the contemporary LLMs do so linearly (they don't go back to correct themselves or explore alternative intermediate steps), which limits their capability.
• To develop a model that can be trained and evaluated at each intermediate step, OpenAI gathered tons of human feedback on math-word problems, amassing a dataset of 800,000 individual intermediate steps across 75,000 problems.
• Their approach involves an LLM generating solutions at each step and a second model acting as a verifier.
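The generator/verifier setup above can be sketched in a few lines. This is a minimal illustration, assuming a `verifier` callable that scores an individual reasoning step; the helper names and the product-of-scores rule are illustrative assumptions, not OpenAI's actual implementation:

```python
def pick_best_solution(candidates, verifier):
    """Score each candidate solution (a list of intermediate reasoning
    steps) by multiplying per-step scores from the verifier -- so a
    single bad step sinks the whole chain -- then return the top one.
    (Illustrative sketch, not OpenAI's actual implementation.)"""
    def score(steps):
        total = 1.0
        for step in steps:
            total *= verifier(step)  # verifier: step -> P(step is correct)
        return total
    return max(candidates, key=score)
```

The key idea is that supervision happens at the level of individual intermediate steps rather than only on the final answer, which is what lets the system catch a reasoning chain that goes wrong in the middle.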
The Q* connection:
• The above research merges LLM reasoning abilities with search-tree methods, inspired by Google DeepMind's AlphaGo algorithm and its ilk.
• The name nods to Q*, a decades-old reinforcement-learning concept: the optimal action-value function used to simulate and evaluate prospective moves.
• Q*'s potential for automated self-play could lead to significant advancements in AGI, particularly by reducing reliance on (expensive) human-generated training data.
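For readers unfamiliar with the reinforcement-learning namesake: Q*(s, a) denotes the optimal action-value function, which tabular Q-learning estimates by repeatedly applying the Bellman update. A minimal sketch on a toy four-state corridor (the environment and hyperparameters are purely illustrative):

```python
import random

# Tabular Q-learning on a tiny corridor: states 0..3, reward at state 3.
# Q-learning estimates Q*(s, a) via the Bellman update.
N_STATES, ACTIONS = 4, (-1, +1)  # actions: move left or right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
lr, gamma = 0.5, 0.9

random.seed(0)
for _ in range(500):
    s = random.randrange(N_STATES - 1)           # sample a starting state
    a = random.choice(ACTIONS)                   # explore uniformly
    s_next = min(max(s + a, 0), N_STATES - 1)    # corridor dynamics
    r = 1.0 if s_next == N_STATES - 1 else 0.0   # reward only at the goal
    target = r + gamma * max(Q[(s_next, b)] for b in ACTIONS)
    Q[(s, a)] += lr * (target - Q[(s, a)])       # Bellman update

# The greedy policy learns to move right, toward the goal
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
```

The speculation around Q* is essentially about marrying this kind of learned value estimation (evaluating candidate moves) with an LLM's ability to generate candidate reasoning steps.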
Implications:
• Q* could yield significant societal benefits (e.g., by solving mathematical proofs humans can't or discovering new physics), albeit with potentially high inference costs.
• Q* raises concerns about security and the unresolved challenges in achieving AGI.
• While Q* isn't the final leap towards AGI, it would represent a major milestone in general reasoning abilities.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
AI is Eating Biology and Chemistry, with Dr. Ingmar Schuster
For today's exceptional episode, I traveled to Berlin to find out how the visionary Dr. Ingmar Schuster is using A.I. to transform biology and chemistry research, thereby helping solve the world's most pressing problems, from cancer to climate change.
Ingmar:
• Is CEO and co-founder of Exazyme, a German biotech startup that aims to make chemical design as easy as using an app.
• Previously worked as a research scientist and senior applied scientist at Zalando, the gigantic European e-retailer.
• Completed his PhD in Computer Science at Leipzig University and postdocs at the Université Paris Dauphine and the Freie Universität Berlin, throughout which he focused on using Bayesian and Monte Carlo approaches to model natural language and time series.
Today’s episode is on the technical side so may appeal primarily to hands-on practitioners such as data scientists and machine learning engineers.
In this episode, Ingmar details:
• What kernel methods are and how he uses them at Exazyme to dramatically speed the design of synthetic biological catalysts and antibodies for pharmaceutical firms and chemical producers, with applications including fixing carbon dioxide more effectively than plants and allowing our own immune system to detect and destroy cancer.
• When “shallow” machine learning approaches are more valuable than deep learning approaches.
• Why the benefits of A.I. research far outweigh the risks.
• What it takes to become a deep-tech entrepreneur like him.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
Engineering Biomaterials with Generative AI, with Dr. Pierre Salvy
Today, the brilliant Dr. Pierre Salvy details the "double deep-tech sandwich" that blends cutting-edge A.I. (generative LLMs) with cutting-edge bioengineering (creating new materials). This is a fascinating one, shot live at the Merantix AI Campus in Berlin.
Pierre:
• Has been at Cambrium for three years: initially as Head of Computational Biology and, for the past two years, as Head of Engineering, growing the team from 2 to 7 to bridge the gap between wet-lab biology, data science, and scientific computing.
• Holds a PhD in Biotechnology from EPFL in Switzerland and a Master’s in Math, Physics and Engineering Science from Mines in Paris.
Today’s episode touches on technical machine learning concepts here and there, but should largely be accessible to anyone.
In it, Pierre details:
• How data-driven R&D allowed Cambrium to go from nothing to tons of physical product sales inside two years.
• How his team leverages Large Language Models (LLMs) to build the biological-protein analogue of a ChatGPT-style essay generator.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
Scikit-learn’s Past, Present and Future, with scikit-learn co-founder Dr. Gaël Varoquaux
For today's massive episode, I traveled to Paris to interview Dr. Gaël Varoquaux, co-founder of scikit-learn, the standard library for machine learning worldwide (downloaded over 1.4 million times PER DAY 🤯). In it, Gaël fills us in on sklearn's history and future.
More on Gaël:
• Actively leads the development of the ubiquitous scikit-learn Python library, to which several thousand people have contributed open-source code.
• Is Research Director at the famed Inria (the French National Institute for Research in Digital Science and Technology), where he leads the Soda ("social data") team that is focused on making a major positive social impact with data science.
• Has been recognized with the Innovation Prize from the French Academy of Sciences and many other awards for his invaluable work.
Today’s episode will likely be of primary interest to hands-on practitioners like data scientists and ML engineers, but anyone who’d like to understand the cutting edge of open-source machine learning should listen in.
In this episode, Gaël details:
• The genesis, present capabilities and fast-moving future direction of scikit-learn.
• How to best apply scikit-learn to your particular ML problem.
• How ever-larger datasets and GPU-based accelerations impact the scikit-learn project.
• How (whether you write code or not!) you can get started on contributing to a mega-impactful open-source project like scikit-learn yourself.
• Hugely successful social-impact data projects his Soda lab has had recently.
• Why statistical rigor is more important than ever and how software tools could nudge us in the direction of making more statistically sound decisions.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
How to Officially Certify your AI Model, with Jan Zawadzki
In today's episode, learn from Jan Zawadzki how independent certification of A.I. models makes them safer and more reliable, gives you an advantage over your competitors, and, in the EU at least, will soon be mandatory!
Jan:
• Is CTO and Co-Managing Director of CertifAI, a startup that is an early mover in the fast-developing A.I. certification ecosystem.
• Was previously the Head of A.I. at CARIAD, the software development subsidiary of Volkswagen Group, where he grew the team from scratch to over 50 engineers.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
A.I. Product Management, with Google DeepMind's Head of Product, Mehdi Ghissassi
The elite team at Google DeepMind cranks out one world-changing A.I. innovation after another. In today's episode, their affable Head of Product Mehdi Ghissassi shares his wisdom on how to design and release successful A.I. products.
Mehdi:
• Has been Head of Product at Google DeepMind — the world’s most prestigious A.I. research group — for over four years.
• Spent an additional three years at DeepMind before that as their Head of A.I. Product Incubation and a further four years before that in product roles at Google, meaning he has more than a decade of product leadership experience at Alphabet.
• Member of the Board of Advisors at CapitalG, Alphabet’s renowned venture capital and private equity fund.
• Holds five (!!!) Master’s degrees, including Master’s degrees in computer science and engineering from the École Polytechnique, one in International Relations from Sciences Po, and an MBA from Columbia Business School.
Today’s episode will be of interest to anyone who’s keen to create incredible A.I. products.
In this episode, Mehdi details:
• Google DeepMind’s bold mission to achieve Artificial General Intelligence (AGI).
• Game-changing DeepMind A.I. products such as AlphaGo and AlphaFold.
• How he stays on top of fast-moving A.I. innovations.
• The key ethical issues surrounding A.I.
• A.I.’s big social-impact opportunities.
• His guidance for investing in A.I. startups.
• Where the big opportunities lie for A.I. products in the coming years.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
Humanoid Robot Soccer, with the Dutch RoboCup Team
In today's unique episode, robots from the Dutch Nao Team (Naos are the little humanoids shown in the photo) compete against each other at football (⚽️) while Dário Catarrinho, a developer on the team, describes the machine learning involved.
The Dutch Nao Team is one of many international teams that competes annually in RoboCup Federation tournaments. The lofty goal of the RoboCup competitions is to develop a team of humanoid robots that is able to win against the human World Cup Championship team by the year 2050. Very cool.
Dário, my human guest in today's episode, is Secretary of the Dutch Nao Team as well as a software developer on the team. He's also pursuing a degree in A.I. at the University of Amsterdam.
Most of today’s episode should be accessible to anyone, but occasionally Dário and I talk a bit technically about ML algorithms, so those brief parts might be most meaningful to hands-on practitioners.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
OpenAssistant: The Open-Source ChatGPT Alternative, with Dr. Yannic Kilcher
Yannic Kilcher — famed Machine Learning YouTuber and creator of OpenAssistant, the best-known open-source conversational A.I. — is today's rockstar guest! Hear from this luminary where the biggest A.I. opportunities are in the coming years 😎
If you’re not already aware of him, Dr. Yannic:
• Has over 230,000 subscribers on his machine learning YouTube channel.
• Is the CTO of DeepJudge, a Swiss startup that is revolutionizing the legal profession with AI tools.
• Led the development of OpenAssistant, a leading open-source alternative to ChatGPT, which has over 37,000 stars (⭐️⭐️⭐️!!!) on GitHub.
• Holds a PhD in A.I. from the outstanding Swiss technical university, ETH Zürich.
Despite being such a technical expert himself, most of today’s episode should be accessible to anyone who’s interested in A.I., whether you’re a hands-on practitioner or not.
In this episode, Yannic details:
• The behind-the-scenes stories and lasting impact of his OpenAssistant project.
• The technical and commercial lessons he’s learned while growing his A.I. startup.
• How he stays up to date on ML research.
• The important, broad implications of adversarial examples in ML.
• Where the biggest opportunities are in A.I. in the coming years.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
Data Science for Astronomy, with Dr. Daniela Huppenkothen
Our planet is a tiny little blip in a vast universe. In today's episode, the astronomical data scientist and talented simplifier of the complex, Dr. Daniela Huppenkothen, explains how we collect data from space and use ML to understand the universe.
Daniela:
• Is a Scientist at both the University of Amsterdam and the SRON Netherlands Institute for Space Research.
• Was previously an Associate Director of the Institute for Data-Intensive Research in Astronomy and Cosmology at the University of Washington, and was also a Data Science Fellow at New York University.
• Holds a PhD in Astronomy from the University of Amsterdam.
Most of today’s episode should be accessible to anyone but there is some technical content in the second half that may be of greatest interest to hands-on data science practitioners.
In today’s episode, Daniela details:
• The data earthlings collect in order to observe the universe around us.
• The three categories of ways machine learning is applied to astronomy.
• How you can become an astronomer yourself.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
A.I. Agents Will Develop Their Own Distinct Culture, with Nell Watson
Nell Watson is the most insightful person I've spoken to on where A.I. is going in the coming decades and how it will overhaul our lives. In today's mind-bending episode, she conveys these insights with amusing analogies and clever literary references.
This sensational guest, Nell:
• Is the A.I. Ethics Certification Maestro at the IEEE (Institute of Electrical and Electronics Engineers), a role in which she engineers mechanisms into A.I. systems in order to safeguard trust and safety in algorithms.
• Also works for Apple as an Executive Consultant on philosophical matters related to machine ethics and machine intelligence.
• Is President of EURAIO, the European Responsible Artificial Intelligence Office.
• Is renowned and sought-after as a public speaker, including at venerable venues like The World Bank and the United Nations General Assembly.
• On top of all that, she’s currently wrapping up a PhD in Engineering from the University of Gloucestershire in the UK.
Today’s episode covers rich philosophical issues that will be of great interest to hands-on data science practitioners but the content should be accessible to anyone. And I do highly recommend that everyone give this extraordinary episode a listen.
In this episode, Nell details:
• The distinct, and potentially dangerous, new phase of A.I. capabilities that our society is stumbling forward into.
• How you yourself can contribute to IEEE A.I. standards that can offset A.I. risks.
• How we together can craft regulations and policies to make the most of A.I.’s potential, thereby unleashing a fast-moving second renaissance and potentially bringing about a utopia in our lifetimes.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
How GitHub Operationalizes AI for Teamwide Collaboration and Productivity, with GitHub COO Kyle Daigle
Today's episode features the exceptionally passionate GitHub COO Kyle Daigle detailing how generative A.I. tools improve not only the way individuals work, but also dramatically transform the way people across entire firms collaborate.
Kyle was my on-stage guest for a "fireside chat" live on stage at Insight Partners' ScaleUp:AI conference in New York. It was a terrifically slick conference and a ton of fun to collaborate on stage with Kyle! He's an energizing and inspiring speaker.
Check out the episode for all of our conversation; some of the key takeaways are:
• Generative AI tools like GitHub Copilot are most useful and efficient when they’re part of your software-development flow.
• These kinds of in-flow generative AI tools can be used not just individually but also for collaboration, such as speeding up code review.
• "Innersourcing" takes open-source principles and applies them within an organization, to its proprietary assets.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
Universal Principles of Intelligence (Across Humans and Machines), with Prof. Blake Richards
Today's episode is wild! The exceptionally lucid Prof. Blake Richards will blow your mind on what intelligence is, why the "AGI" concept isn't real, why AI doesn't pose an existential risk to humans, and how AI could soon directly update our thoughts.
Blake:
• Is Associate Professor in the School of Computer Science and Department of Neurology and Neurosurgery at the revered McGill University in Montreal.
• Is a Core Faculty Member at Mila, one of the world’s most prestigious A.I. research labs, which is also in Montreal.
• Leads a lab that investigates universal principles of intelligence applying to both natural and artificial agents, research that has earned him a number of major awards.
• Obtained his PhD in neuroscience from the University of Oxford and his Bachelor’s in cognitive science and AI from the University of Toronto.
Today’s episode contains tons of content that will be fascinating for anyone. A few topics near the end, however, will probably appeal primarily to folks who have a grasp of fundamental machine learning concepts like cost functions and gradient descent.
In this episode, Blake details:
• What intelligence is.
• Why he doesn’t believe in Artificial General Intelligence (AGI).
• Why he’s skeptical about existential risks from A.I.
• The many ways that A.I. research informs our understanding of how the human brain works.
• How, in the future, A.I. could practically and directly influence your thoughts and behaviors through brain-computer interfaces (BCIs).
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
Use Contrastive Search to get Human-Quality LLM Outputs
Historically, when we deployed a machine learning model into production, the parameters the model learned during its training on data were the sole driver of the model’s outputs. With the Generative LLMs that have taken the world by storm in the past few years, however, the model parameters alone are not enough to get reliably high-quality outputs. For that, the so-called decoding method we choose when deploying our LLM into production is also critical.
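Concretely, contrastive search scores each candidate next token by balancing the model's confidence against a degeneration penalty: the candidate's maximum similarity to the tokens already generated. A minimal sketch of that scoring rule with NumPy, where the toy embeddings and the α value are illustrative only:

```python
import numpy as np

def contrastive_score(prob, cand_emb, context_embs, alpha=0.6):
    """Contrastive-search scoring rule (sketch):
    (1 - alpha) * model confidence  -  alpha * degeneration penalty,
    where the penalty is the max cosine similarity between the candidate
    token's representation and all previously generated tokens'."""
    sims = context_embs @ cand_emb / (
        np.linalg.norm(context_embs, axis=1) * np.linalg.norm(cand_emb))
    return (1 - alpha) * prob - alpha * sims.max()

# Representations of the tokens generated so far (toy 2-D embeddings)
context = np.array([[1.0, 0.0], [0.0, 1.0]])

# A high-probability token that merely repeats the context...
repetitive = contrastive_score(0.9, np.array([1.0, 0.0]), context)
# ...loses to a lower-probability token that adds something new
novel = contrastive_score(0.5, np.array([0.6, -0.8]), context)
```

In practice you rarely implement this by hand: in Hugging Face `transformers`, for example, passing `penalty_alpha` (the α above) together with `top_k` to `model.generate(...)` switches decoding to contrastive search.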
Seven Factors for Successful Data Leadership
Today's episode is a fun one with the jovial EIGHT-time book author, Ben Jones. In it, Ben covers the seven factors of successful data leadership — factors he's gleaned from administering his data literacy assessment to 1000s of professionals.
Ben:
• Is the CEO of Data Literacy, a firm that specializes in training and coaching professionals on data-related topics like visualization and statistics.
• Has published eight books, including bestsellers "Communicating Data with Tableau" (O'Reilly, 2014) and "Avoiding Data Pitfalls" (Wiley, 2019).
• Has been teaching data visualization at the University of Washington for nine years.
• Previously worked for six years as a director at Tableau.
Today’s episode should be broadly accessible to any interested professional.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
Neuroscience + Machine Learning, with Google DeepMind’s Dr. Kim Stachenfeld
Today's episode is one of my favorite conversations ever. In it, the hilarious and fascinating Dr. Kimberly Stachenfeld (of both DeepMind and Columbia) blows my mind by detailing relationships between human neuroscience and A.I.
More on Kim:
• Research Scientist at Google DeepMind, the world’s leading A.I. research group.
• Affiliate Professor of Theoretical Neuroscience at Columbia University.
• Research interests include deep learning, reinforcement learning, representation learning, graph neural networks and a brain structure called the hippocampus.
• Holds a PhD in Computational Neuroscience from Princeton.
Today’s episode should be fascinating for anyone (🧠 + 🤖 = 🤯).
In it, Kim details:
• Her research on computer-based simulations of how the human brain simulates the real world.
• What today’s most advanced A.I. systems (like Large Language Models) can do… and what they can’t.
• How language serves as an efficient compression mechanism for both humans and machines.
• How a leading neuroscience theory called the dopamine reward-prediction error hypothesis relates to reinforcement learning in machines.
• The special role of our brain’s hippocampus in memory formation.
• The best things we personally can do to improve our cognitive abilities.
• What it might take to realize Artificial General Intelligence (AGI).
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.