Today’s episode focuses on Nick Bostrom's latest book, Deep Utopia. Published a couple of weeks ago, it delves into the possibilities of a future where artificial intelligence has solved humanity's deepest problems.
The Mamba Architecture: Superior to Transformers in LLMs
Modern, cutting-edge A.I. depends almost entirely on the Transformer. But now the first serious contender to the Transformer has emerged, and it’s called Mamba; we’ve got the full paper, "Mamba: Linear-Time Sequence Modeling with Selective State Spaces," written by researchers at Carnegie Mellon and Princeton.
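For readers curious what "selective state spaces" means in practice, here is a toy, heavily simplified sketch of the core idea: a linear-time recurrence whose update parameters depend on the current input, so the model can choose what to remember or forget. This is illustrative only (made-up parameter shapes, a single channel) and omits Mamba's gating and hardware-aware parallel scan.

```python
import numpy as np

def selective_ssm(x, A, w_B, w_C, w_delta):
    """Toy single-channel selective state-space recurrence.
    x: (seq_len,) input sequence; A, w_B, w_C: (d_state,) learned vectors;
    w_delta: learned scalar controlling the discretization step size."""
    h = np.zeros_like(A)                          # recurrent hidden state
    outputs = []
    for x_t in x:
        B = w_B * x_t                             # input-dependent ("selective") input projection
        C = w_C * x_t                             # input-dependent output projection
        delta = np.log1p(np.exp(w_delta * x_t))   # softplus keeps the step size positive
        A_bar = np.exp(delta * A)                 # discretized (diagonal) state transition
        h = A_bar * h + delta * B * x_t           # O(1) work per token, i.e. linear time overall
        outputs.append(float(C @ h))
    return np.array(outputs)
```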
AlphaGeometry: AI is Suddenly as Capable as the Brightest Math Minds
Google DeepMind's open-sourced AlphaGeometry blends "fast thinking" (like intuition) with "slow thinking" (like careful, conscious reasoning) to enable a big leap forward in A.I. capability and match human Math Olympiad gold medalists on geometry problems.
KEY CONTEXT
• A couple of weeks ago, DeepMind published AlphaGeometry in the prestigious, peer-reviewed journal Nature.
• DeepMind focused on geometry due to its demand for high-level reasoning and logical deduction, posing a unique challenge that traditional ML models struggle with.
MASSIVE RESULTS
• AlphaGeometry tackled 30 International Mathematical Olympiad problems, solving 25. This outperforms human Olympiad bronze and silver medalists' averages (who solved 19.3 and 22.9, respectively) and closely rivals gold medalists (who solved 25.9).
• This new system crushes the previous state-of-the-art A.I., which solved only 10 out of 30 problems.
• Beyond solving problems, AlphaGeometry also generates understandable proofs, making A.I.-generated solutions more accessible to humans.
HOW?
• AlphaGeometry uses a new method of generating synthetic theorems and proofs, simulating 100 million unique examples to overcome the limitations of (expensive, laborious) human-generated proofs.
• It combines a neural (deep learning) language model for intuitive guesswork with a symbolic deduction engine for logical problem-solving, mirroring "fast" and "slow thinking" processes akin to human cognition (per Daniel Kahneman's "Thinking, Fast and Slow" book).
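To make the "fast and slow" pairing concrete, here is a heavily simplified, pseudocode-style sketch of the kind of loop such a neuro-symbolic system runs. The function and object names are hypothetical; this is not DeepMind's actual implementation.

```python
def solve_geometry_problem(premises, goal, language_model, symbolic_engine, max_steps=100):
    """Alternate symbolic deduction ("slow thinking") with a neural language
    model that proposes auxiliary constructions ("fast thinking")."""
    known_facts = set(premises)
    proof = []
    for _ in range(max_steps):
        # Slow thinking: exhaustively deduce everything that follows logically.
        new_facts, steps = symbolic_engine.deduce(known_facts)
        known_facts |= new_facts
        proof.extend(steps)
        if goal in known_facts:
            return proof  # a complete, human-readable chain of deductions
        # Fast thinking: the language model guesses a helpful new construction
        # (e.g., "add the midpoint of segment AB") that unlocks further deduction.
        construction = language_model.propose_construction(known_facts, goal)
        known_facts.add(construction)
        proof.append(construction)
    return None  # no proof found within the step budget
```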
IMPACT
• A.I. that can "think fast and slow" like AlphaGeometry could generalize across mathematical fields and potentially other scientific disciplines, pushing the boundaries of human knowledge and problem-solving capabilities.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
The Five Levels of AGI
Artificial General Intelligence (AGI) is a term thrown around a lot, but it's been poorly defined. Until now!
A Continuous Calendar for 2024
Today's super-short episode provides a "Continuous Calendar" for 2024. In my view, far superior to the much more common Weekly or Monthly calendar formats, a Continuous Calendar can keep you on top of all your projects and commitments all year 'round.
I know I’m not the only one who loves Continuous Calendars because my annual blog post providing an updated continuous calendar for the new year is reliably one of my most popular posts. The general concept is that Continuous Calendars enable you to:
1. Overview large blocks of time at a glance (I can easily fit six months on a standard piece of paper).
2. Get a more realistic representation of how much time there is between two given dates because the dates don’t get separated by arbitrary 7-day or ~30-day cutoffs.
Continuous Calendars work so effectively because they are one big matrix in which every row corresponds to a week and every column corresponds to a day of the week.
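Because it's just a matrix of dates, you can also generate a Continuous Calendar programmatically. Here's a minimal Python sketch that prints 2024 as one week-per-row grid; the formatting choices are purely illustrative.

```python
from datetime import date, timedelta

def continuous_calendar(year=2024):
    """Print a continuous calendar: one row per week (Mon-Sun), no month breaks."""
    start = date(year, 1, 1)
    start -= timedelta(days=start.weekday())  # back up to the Monday of the first week
    end = date(year, 12, 31)
    print("Mon  Tue  Wed  Thu  Fri  Sat  Sun")
    week_start = start
    while week_start <= end:
        row = []
        for offset in range(7):
            d = week_start + timedelta(days=offset)
            # Show the month abbreviation on the 1st so month boundaries stay visible.
            label = d.strftime("%b") if d.day == 1 else f"{d.day:2d}"
            row.append(f"{label:>3}")
        print("  ".join(row))
        week_start += timedelta(days=7)

continuous_calendar(2024)
```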
So if you’d like to get started today with your own super-efficient Continuous Calendar in 2024, simply head to jonkrohn.com/cal24.
At that URL, you’ll find a Google Sheet with the full 52 weeks of the year, which will probably suit most people’s needs. If you print it on standard US 8.5” x 11” paper, it should get split exactly so that the first half of the year is on page one and the second half of the year is on page two.
The calendar template is simple: It’s all black except that we’ve marked U.S. Federal Holidays with red dates. If you’re in another region, or you’d like to adapt our continuous calendar for any reason at all, simply make a copy of the sheet or download it, and then customize it to your liking.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
To a Peaceful 2024
Today I reflect on the wild advances in A.I. over the past year, opine on how A.I. could make the world more peaceful, and wrap 2023 up by singing a tune. Thanks to all eight humans of the Super Data Science Podcast for their terrific work all year 'round:
• Ivana Zibert: Podcast Manager
• Natalie Ziajski: Operations & Revenue
• Mario Pombo: Media Editor
• Serg Masís: Researcher
• Sylvia Ogweng: Writer
• Dr. Zara Karschay: Writer
• Kirill Eremenko: Founder
It's these terrifically talented and diligent people who make it possible for us to create 104 high-quality podcast episodes per year, now over seven years running 🙏
I'm looking forward to the next 104 episodes with awesome guests and (no doubt!) oodles of revolutionary new machine learning breakthroughs to cover. To a wonderful and hopefully much more peaceful 2024 🥂
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
Happy Holidays from All of Us
Today's podcast episode is a quick one from all eight of us humans at the SuperDataScience Podcast, wishing you the happiest of holiday seasons ☃️
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
Humanoid Robot Soccer, with the Dutch RoboCup Team
In today's unique episode, robots from the Dutch Nao Team (Naos are the little humanoids shown in the photo) compete against each other at football (⚽️) while Dário Catarrinho, a developer on the team, describes the machine learning involved.
The Dutch Nao Team is one of many international teams that competes annually in RoboCup Federation tournaments. The lofty goal of the RoboCup competitions is to develop a team of humanoid robots that is able to win against the human World Cup Championship team by the year 2050. Very cool.
Dario, my human guest in today's episode, is Secretary of the Dutch Nao Team as well as a software developer on the team. He's also pursuing a degree in A.I. at the University of Amsterdam.
Most of today’s episode should be accessible to anyone, but occasionally Dario and I talk a bit technically about ML algorithms, so those brief parts might be most meaningful to hands-on practitioners.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
Data Science for Astronomy, with Dr. Daniela Huppenkothen
Our planet is a tiny little blip in a vast universe. In today's episode, the astronomical data scientist and talented simplifier of the complex, Dr. Daniela Huppenkothen, explains how we collect data from space and use ML to understand the universe.
Daniela:
• Is a Scientist at both the University of Amsterdam and the SRON Netherlands Institute for Space Research.
• Was previously an Associate Director of the Institute for Data-Intensive Research in Astronomy and Cosmology at the University of Washington, and was also a Data Science Fellow at New York University.
• Holds a PhD in Astronomy from the University of Amsterdam.
Most of today’s episode should be accessible to anyone but there is some technical content in the second half that may be of greatest interest to hands-on data science practitioners.
In today’s episode, Daniela details:
• The data earthlings collect in order to observe the universe around us.
• The three categories of ways machine learning is applied to astronomy.
• How you can become an astronomer yourself.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
How GitHub Operationalizes AI for Teamwide Collaboration and Productivity, with GitHub COO Kyle Daigle
Today's episode features the exceptionally passionate GitHub COO Kyle Daigle detailing how generative A.I. tools improve not only the way individuals work, but also dramatically transform the way people across entire firms collaborate.
Kyle was my on-stage guest for a "fireside chat" live on stage at Insight Partners' ScaleUp:AI conference in New York. It was a terrifically slick conference and a ton of fun to collaborate on stage with Kyle! He's an energizing and inspiring speaker.
Check out the episode for all of our conversation; some of the key takeaways are:
• Generative AI tools like GitHub Copilot are most useful and efficient when they’re part of your software-development flow.
• These kinds of in-flow generative AI tools can be used for collaboration (such as speeding up code review), not just on an individual basis.
• "Innersourcing" takes open-source principles but applies them within an organization on their proprietary assets.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
Use Contrastive Search to get Human-Quality LLM Outputs
Historically, when we deploy a machine learning model into production, the parameters that the model learned during its training on data were the sole driver of the model’s outputs. With the Generative LLMs that have taken the world by storm in the past few years, however, the model parameters alone are not enough to get reliably high-quality outputs. For that, the so-called decoding method that we choose when we deploy our LLM into production is also critical.
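As a concrete example, the Hugging Face transformers library exposes contrastive search through two generation arguments, penalty_alpha and top_k. The sketch below is minimal and illustrative: the model choice is arbitrary and the hyperparameter values are common defaults from the contrastive-search literature, not recommendations for any particular deployment.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2-large"  # any causal LM works; this one is just a small, public example
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("DeepMind Company is", return_tensors="pt")

# Contrastive search: each candidate token is scored by model confidence minus a
# degeneration penalty (its similarity to the context so far), which curbs the
# repetitive loops that greedy search and beam search often produce.
outputs = model.generate(
    **inputs,
    penalty_alpha=0.6,    # weight of the degeneration penalty
    top_k=4,              # number of candidate tokens considered at each step
    max_new_tokens=128,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```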
Decoding Speech from Raw Brain Activity, with Dr. David Moses
Dr. David Moses and his colleagues have pulled off a miracle with A.I.: allowing paralyzed patients to "speak" through a video avatar in real time — using brain waves alone. In today's episode, David details how ML makes this possible.
David:
• Is an adjunct professor at the University of California, San Francisco.
• Is the project lead on the BRAVO (Brain-Computer Interface Restoration of Arm and Voice) clinical trial.
• Led the BRAVO project to such success that it resulted in an article in the prestigious journal Nature and a YouTube video that already has over 3 million views.
Today’s episode does touch on specific machine learning (ML) terminology at points, but otherwise should be fascinating to anyone who’d like to hear how A.I. is facilitating real-life miracles.
In this episode, David details:
• The genesis of the BRAVO project.
• The data and the ML models they’re using on the BRAVO project in order to predict text, speech sounds and facial expressions from the brain activity of paralyzed patients.
• What’s next for this exceptional project including how long it might be before these brain-to-speech capabilities are available to anyone who needs them.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
AI Emits Far Less Carbon Than Humans (Doing the Same Task)
There's been a lot of press about Large Language Models (LLMs), such as those behind ChatGPT, using vast amounts of energy per query. In fact, however, a person doing the same work emits 12x to 45x more carbon from their laptop alone.
Today’s "Five-Minute Friday" episode is a quick one on how “The Carbon Emissions of Writing and Illustrating Are Lower for AI than for Humans”. Everything in today’s episode is based on an ArXiV preprint paper with that title by researchers from UC Irvine, the Massachusetts Institute of Technology and other universities.
For writing a page of text, for example, the authors estimate:
• BLOOM open-source LLM (including training) produces ~1.6g CO2/query.
• OpenAI's GPT-3 (including training) produces ~2.2g CO2/query.
• Laptop usage for 0.8 hours (the average time to write a page) emits ~27g CO2 (12x GPT-3).
• A desktop for the same writing time emits ~72g CO2 (32x GPT-3).
For creating a digital illustration:
• Midjourney (including training) produces ~1.9g CO2/query.
• DALL-E 2 produces ~2.2g CO2/query.
• A human takes ~3.2 hours for the same work, emitting ~100g CO2 (45x DALL-E 2) on a laptop or ~280g CO2 (127x DALL-E 2) on a desktop.
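For anyone who wants to sanity-check those multiples, the arithmetic is just a ratio of per-task emissions. A quick sketch using the approximate figures above (small differences from the exact multiples quoted are rounding):

```python
# Approximate per-task CO2 figures (grams), as cited above from the preprint.
emissions_g = {
    "gpt3_query": 2.2,
    "human_laptop_page": 27,      # ~0.8 h of laptop use to write one page
    "human_desktop_page": 72,
    "dalle2_image": 2.2,
    "human_laptop_image": 100,    # ~3.2 h to produce one illustration
    "human_desktop_image": 280,
}

print(emissions_g["human_laptop_page"] / emissions_g["gpt3_query"])      # ~12x
print(emissions_g["human_desktop_page"] / emissions_g["gpt3_query"])     # ~32-33x
print(emissions_g["human_laptop_image"] / emissions_g["dalle2_image"])   # ~45x
print(emissions_g["human_desktop_image"] / emissions_g["dalle2_image"])  # ~127x
```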
There are complexities here, such as what humans do with their time instead of writing or illustrating; if that time is spent driving, for example, the net impact would be worse. As someone who’d love to see the world reach net-negative carbon emissions ASAP through innovations like nuclear fusion and carbon capture, I had been getting antsy about how much energy state-of-the-art LLMs use, but this simple article turned that perspective upside down. I’ll continue to use A.I. to augment my work wherever I can... and hopefully get my day done earlier so I can get away from my machine and enjoy some time outdoors.
Hear more detail in today's episode or check out the video version to see figures as well.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
OpenAI’s DALL-E 3, Image Chat and Web Search
Today's episode details three big releases from OpenAI: (1) DALL-E 3 text-to-image model, which "exactly" adheres to your prompt. (2) Image-to-text chat. (3) Real-time web search integrated into ChatGPT (which seems to lag behind Google's Bard).
So, first, DALL-E 3 text-to-image generation:
• Appears to generate images that are on par with Midjourney V5, the current state-of-the-art.
• The big difference is that apparently DALL-E 3 will actually generate images that adhere “exactly” to the text you provide.
• In contrast, incumbent state-of-the-art models typically ignore words or key parts of the description, even though their image quality is often stunning.
• This adherence to prompts extends even to language that you’d like to include in the image, which is mega.
• Watch today's YouTube version for examples of all the above.
In addition, using Midjourney is a really bizarre user experience because it's done through Discord where you provide prompts and get results alongside dozens of other people at the same time. DALL-E 3, in contrast, will be within the slick ChatGPT Plus environment, which could completely get rid of the need to develop text-to-image prompt-engineering expertise in order to get great results. Instead, you can simply have an iterative back-and-forth conversation with ChatGPT to produce the image of your dreams.
Next up is image-to-text chat in ChatGPT Plus:
• We've known this was coming for a while.
• Works stunningly well in the testing I've done so far.
• Today's YouTube version also shows an example of this.
Finally, real-time web search with Bing is now integrated into ChatGPT Plus:
• In my personal (anecdotal) tests, this lagged behind Google's Bard.
• Bard is also free, so if real-time web search is what you're after, there doesn't seem to be a reason to pay for ChatGPT Plus. That said, for state-of-the-art general chat plus now image generation and text-to-image chat (per the above), ChatGPT Plus is well worth the price tag.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
ChatGPT Custom Instructions: A Major, Easy Hack for Data Scientists
Thanks to Shaan Khosla for tipping me off to a crazy easy hack to get markedly better results from GPT-4: providing Custom Instructions that prompt the algorithm to iterate upon its own output while critically evaluating and improving it.
Here's Shaan's full Custom Instructions text, which he himself has been iterating on in recent months:
"I need you to help me with a task. To help me with the task, first come up with a detailed outline of how you think you should respond, then critique the ideas in this outline (mention the advantages, disadvantages, and ways it could be improved), then use the original outline and the critiques you made to come up with your best possible solution.
"Overall, your tone should not be overly dramatic. It should be clear, professional, and direct. Don't sound robotic or like you're trying to sell something. You don't need to remind me you're a large language model, get straight to what you need to say to be as helpful as possible. Again, make sure your tone is clear, professional, and direct - not overly like you're trying to sell something."
Try it out! If you haven't used Custom Instructions before, in today's episode I talk you through how to set it up and explain why this approach is so effective. In the video version, I provide a screenshare that makes getting started foolproof.
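If you work through the OpenAI API rather than the ChatGPT interface, you can approximate the same trick by putting the Custom Instructions text into the system message. Here's a minimal sketch with the current OpenAI Python library (v1+); the model name and user prompt are just placeholders, and Shaan's instructions are abbreviated.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

custom_instructions = (
    "I need you to help me with a task. To help me with the task, first come up "
    "with a detailed outline of how you think you should respond, then critique "
    "the ideas in this outline (mention the advantages, disadvantages, and ways "
    "it could be improved), then use the original outline and the critiques you "
    "made to come up with your best possible solution. "
    "Overall, your tone should be clear, professional, and direct."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        # The system message plays the role that Custom Instructions play in ChatGPT.
        {"role": "system", "content": custom_instructions},
        {"role": "user", "content": "Draft an outline for a talk on LLM decoding methods."},
    ],
)
print(response.choices[0].message.content)
```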
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
Using A.I. to Overcome Blindness and Thrive as a Data Scientist
Today's guest is the remarkable Tim Albiges, who lost the ability to see as an adult. Thanks to A.I. tools, as well as learning how to learn by sound and touch, he is now thriving as a data scientist and pursuing a fascinating PhD!
Tim was working as a restaurant manager eight years ago when he tragically lost his sight.
In the face of countless alarming and discriminatory acts against him on account of his blindness, he taught himself Braille and auditory learning techniques (and to raise math equations and diagrams using a special thermoform machine so that he can feel them) in order to be able to return to college and study computing and data science.
Not only did he succeed in obtaining a Bachelor’s degree in computing (with First-Class Honours), he is now pursuing a PhD at Bournemouth University full-time, in which he’s applying machine learning to solve medical problems. His first paper was published in the peer-reviewed journal Sensors earlier this year.
Today’s inspiring episode is accessible to technical and non-technical listeners alike.
In it, Tim details:
• Why a career in data science can be ideal for a blind person.
• How he’s using ML to automate the diagnosis of chronic respiratory diseases.
• The techniques he employs to live a full and independent life, with a particular focus on the A.I. tools that assist him both at work and at leisure.
• How, as a keen athlete, he's adapted his approach to fitness in order to run the London Marathon and enjoy a gripping team sport called goalball.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
Code Llama
Meta's Llama 2 offered state-of-the-art performance for an "open-source"* LLM... except on tasks involving code. Now Code Llama is here and it magnificently fills that gap by outperforming all other open-source LLMs on coding benchmarks.
ChatGPT Code Interpreter: 5 Hacks for Data Scientists
The ChatGPT Code Interpreter is surreal: It creates and executes Python code for whatever task you describe, debugs its own runtime errors, displays charts, does file uploads/downloads, and suggests sensible next steps all along the way.
Whether you write code yourself today or not, you can take advantage of GPT-4's stellar natural-language input/output capabilities to interact with the Code Interpreter. The mind-blowing experience is equivalent to having an expert data analyst, data scientist or software developer with you to instantaneously respond to your questions or requests.
As an example of these jaw-dropping capabilities (and given the data science-focused theme of my show), I use today's episode to demonstrate the ChatGPT Code Interpreter's full automation of data analysis and machine learning. If you watch the episode on YouTube, you can even see the Code Interpreter hands-on in action while I interact with it solely through natural language.
Over the course of today's episode/video, the Code Interpreter does the following (a rough Python sketch of an equivalent workflow appears after this list):
1. Receives a sample data file that I provide it.
2. Uses natural language to describe all of the variables that are in the file.
3. Performs a four-step Exploratory Data Analysis (EDA), including histograms, scatterplots comparing key variables, and key summary statistics (all explained in natural language).
4. Preprocesses all of my variables for machine learning.
5. Selects an appropriate baseline ML model, trains it and quantitatively evaluates its performance.
6. Suggests alternative models and approaches (e.g., grid search) to get even better performance and then automatically carries these out.
7. Optionally provides Python code every step of the way and is delighted to answer any questions I have about the code.
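For the curious, here is a rough sketch of the kind of Python the Code Interpreter writes for steps like these. The file name, column names and model choice are all hypothetical; your own data would of course lead it down a different path.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Steps 1-2: load the uploaded file and describe its variables.
df = pd.read_csv("sample_data.csv")  # hypothetical file name
print(df.describe(include="all"))

# Step 3: light EDA, e.g. histograms of every numeric variable.
df.hist(figsize=(10, 8))

# Step 4: preprocess numeric and categorical variables for ML.
target = "label"  # hypothetical target column
numeric_cols = df.drop(columns=[target]).select_dtypes("number").columns
categorical_cols = df.drop(columns=[target]).select_dtypes("object").columns
preprocess = ColumnTransformer([
    ("num", StandardScaler(), numeric_cols),
    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical_cols),
])

# Step 5: train and evaluate a baseline model on a held-out split.
X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns=[target]), df[target], test_size=0.2, random_state=42)
pipeline = Pipeline([("prep", preprocess),
                     ("model", RandomForestClassifier(random_state=42))])
pipeline.fit(X_train, y_train)
print("baseline accuracy:", pipeline.score(X_test, y_test))

# Step 6: grid search over a few hyperparameters to beat the baseline.
grid = GridSearchCV(pipeline, {"model__n_estimators": [100, 300],
                               "model__max_depth": [None, 10]}, cv=3)
grid.fit(X_train, y_train)
print("tuned accuracy:", grid.score(X_test, y_test))
```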
The whole process is a ton of fun and, again, requires no coding abilities to use (the "Code Interpreter" moniker could be misleadingly intimidating to non-coding folks). Even as an experienced data scientist, however, I would estimate that in many everyday situations use of the Code Interpreter could decrease my development time by a crazy 90% or more.
The big caveat with all of this is whether you're comfortable sharing your code with OpenAI. I wouldn't provide proprietary company code to it without clearing that with your firm first, and if you do use proprietary code with it, turn "Chat history & training" off in your ChatGPT Plus settings. To circumvent the data-privacy issue entirely, you could alternatively try Meta's newly released "Code Llama — Instruct 34B" Large Language Model on your own infrastructure. Code Llama won't, however, be as good as the Code Interpreter in many circumstances and will require some technical savvy to get up and running.
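If you do go the self-hosted route, here is a minimal sketch of loading a Code Llama Instruct checkpoint with Hugging Face transformers. Note that the 34B variant needs serious GPU memory, so the smaller 7B checkpoint is shown as an easier starting point, and you may need to accept Meta's license on the Hugging Face Hub before the weights will download.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Swap in "codellama/CodeLlama-34b-Instruct-hf" if you have the GPU memory for it.
model_name = "codellama/CodeLlama-7b-Instruct-hf"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.float16, device_map="auto")  # device_map needs `accelerate`

# Code Llama's instruct variants expect the Llama 2 [INST] ... [/INST] prompt format.
prompt = "[INST] Write a Python function that checks whether a string is a palindrome. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```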
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
Jon’s “Generative A.I. with LLMs” Hands-on Training
Today's episode introduces my two-hour "Generative A.I. with LLMs" training, which is packed with hands-on Python demos in Colab notebooks. It covers both open-source LLM options (Hugging Face; PyTorch Lightning) and commercial ones (the OpenAI API).
LLaMA 2 — It’s Time to Upgrade your Open-Source LLM
If you've been using fine-tuned open-source LLMs (e.g., for generative A.I. functionality or natural-language conversations with your users), it's very likely time to switch your starting model over to Llama 2. Here's why: