What are the big A.I. trends going to be in 2024? In today's episode, the magnificent data-science leader and futurist Sadie St. Lawrence fills us in by methodically making her way from the hardware layer (e.g., GPUs) up to the application layer (e.g., GenAI apps).
How to Integrate Generative A.I. Into Your Business, with Piotr Grudzień
Want to integrate Conversational A.I. ("chatbots") into your business and ensure it's a (profitable!) success? Then today's episode with Quickchat AI co-founder Piotr Grudzień, covering both customer-facing and internal use cases, will be perfect for you.
Piotr:
• Is Co-Founder and CTO of Quickchat AI, a Y Combinator-backed conversation-design platform that lets you quickly deploy and debug A.I. assistants for your business.
• Previously worked as an applied scientist at Microsoft.
• Holds a Master’s in computer engineering from the University of Cambridge.
Today's episode should be accessible to technical and non-technical folks alike.
In this episode, Piotr details:
• What it takes to make a conversational A.I. system successful, whether that A.I. system is externally facing (such as a customer-support agent) or internally facing (such as a subject-matter expert).
• What it’s been like working in the fast-developing Large Language Model space over the past several years.
• What his favorite Generative A.I. (foundation model) vendors are.
• What the future of LLMs and Generative A.I. will entail.
• What it takes to succeed as an A.I. entrepreneur.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
How to Visualize Data Effectively, with Prof. Alberto Cairo
The renowned data-visualization professor and many-time bestselling author Dr. Alberto Cairo is today's guest! Want a copy of his fantastic new book, "The Art of Insight"? I'm giving away ten physical copies; see below for how to get one.
Alberto:
• Is the Knight Chair in Infographics and Data Visualization at the University of Miami.
• Leads visualization efforts at the University of Miami’s Institute for Data Science and Computing.
• Is a consultant for Google, the US government and many more prominent institutions.
• Has written three bestselling books on data visualization, all in the past decade.
• His fourth book, "The Art of Insight", was just published.
Today’s episode will be of interest to anyone who’d like to understand how to communicate with data more effectively.
In this episode, which tracks the themes covered in his "The Art of Insight" book, Alberto details:
• How data visualization relates to the very meaning of life.
• What it takes to enter a meditation-like flow state when creating visualizations.
• When the “rules” of data communication should be broken.
• His data visualization tips and tricks.
• How infographics can drive social change.
• How extended reality, A.I. and other emerging technologies will change data viz in the coming years.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
Q*: OpenAI’s Rumored AGI Breakthrough
Today’s episode is all about a rumored new model out of OpenAI called Q* (pronounced “Q star”) that has been causing quite a stir, both for its purported role in Altmangate and its implications for Artificial General Intelligence (AGI).
Key context:
• Q* is reported to have advanced capabilities in solving complex math problems expressed in natural language, indicating a significant leap in A.I.
• The rumors about Q* emerged during OpenAI's corporate drama involving the firing and re-hiring of CEO Sam Altman.
• Reports suggested a connection between Q*'s development and the OpenAI upheaval, with staff expressing concerns about its potential dangers to humanity (no definitive evidence links Q* to the OpenAI CEO controversy, however, leaving its role in the incident ambiguous).
Research overview:
• OpenAI's recent published research on solving grade-school word-based math problems (e.g., “The cafeteria had 23 apples. They used 20 for lunch and bought 6 more. How many apples do they have?”) hints at broader implications of step-by-step reasoning in A.I.
• While today's Large Language Models (LLMs) show better results on logical problems when we use chain-of-thought prompting ("work through the problem step by step"), they do so linearly (they don't go back to correct themselves or explore alternative intermediate steps), which limits their capability.
• To develop a model that can be trained and evaluated at each intermediate step, OpenAI gathered tons of human feedback on math-word problems, amassing a dataset of 800,000 individual intermediate steps across 75,000 problems.
• Their approach involves an LLM generating candidate solutions step by step, with a second model acting as a verifier that scores each intermediate step (see the illustrative sketch below).
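To make the generator-plus-verifier idea concrete, here is a minimal, purely illustrative Python sketch (not OpenAI's actual implementation). The functions propose_next_steps and verifier_score are hypothetical stand-ins for the two models; the point is that the search keeps the highest-scoring partial solutions at each depth rather than committing to a single linear chain of thought.

```python
import heapq

# Hypothetical stand-ins for the two models described above: a generator that
# proposes candidate next reasoning steps and a verifier that scores how
# promising a partial solution looks. Both are placeholders, not real models.
def propose_next_steps(partial_solution, n_candidates=3):
    return [partial_solution + [f"step {len(partial_solution) + 1}, option {i}"]
            for i in range(n_candidates)]

def verifier_score(partial_solution):
    return 1.0 / (1 + len(partial_solution))  # placeholder score in (0, 1]

def solve(problem, max_depth=4, beam_width=2):
    """Verifier-guided search over intermediate steps: low-scoring branches are
    pruned and promising ones expanded, unlike a single linear chain of thought."""
    beam = [([], verifier_score([]))]  # (partial solution, score) pairs
    for _ in range(max_depth):
        candidates = []
        for partial, _ in beam:
            # In a real system, `problem` would condition the generator.
            for nxt in propose_next_steps(partial):
                candidates.append((nxt, verifier_score(nxt)))
        beam = heapq.nlargest(beam_width, candidates, key=lambda c: c[1])
    return beam[0][0]  # best-scoring chain of intermediate steps

print(solve("The cafeteria had 23 apples. They used 20 for lunch and bought 6 more..."))
```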
The Q* connection:
• The above research merges LLM reasoning abilities with tree-search methods, inspired by Google DeepMind's AlphaGo algorithm and its ilk.
• The Q* name echoes a decades-old reinforcement-learning concept used to train models that simulate and evaluate prospective moves (a minimal example follows this list).
• Q*'s potential for automated self-play could lead to significant advancements in AGI, particularly by reducing reliance on (expensive) human-generated training data.
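For background on that reinforcement-learning concept, here is a tiny, self-contained sketch of tabular Q-learning on a toy five-state corridor (entirely illustrative and unrelated to OpenAI's work). The update rule in the loop is the classic, decades-old recipe for estimating Q*, the optimal action-value function.

```python
import random

# Tabular Q-learning on a toy corridor: reach the rightmost state to earn reward 1.
n_states, n_actions = 5, 2               # actions: 0 = move left, 1 = move right
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, epsilon = 0.1, 0.9, 0.1    # learning rate, discount, exploration rate

def choose_action(s):
    # Explore occasionally (or when all values are tied); otherwise act greedily.
    if random.random() < epsilon or max(Q[s]) == min(Q[s]):
        return random.randrange(n_actions)
    return max(range(n_actions), key=lambda a: Q[s][a])

for _ in range(500):                      # episodes of simulated "self-play"
    s = 0
    while s != n_states - 1:              # the rightmost state is the goal
        a = choose_action(s)
        s_next = max(0, s - 1) if a == 0 else s + 1
        reward = 1.0 if s_next == n_states - 1 else 0.0
        # Q-learning update: nudge Q(s, a) toward reward + gamma * max_a' Q(s', a')
        Q[s][a] += alpha * (reward + gamma * max(Q[s_next]) - Q[s][a])
        s = s_next

print(Q)  # the learned table approximates Q* for this toy environment
```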
Implications:
• Q* could yield significant societal benefits (e.g., by proving mathematical theorems humans can't or discovering new physics), albeit with potentially high inference costs.
• Q* raises concerns about security and the unresolved challenges in achieving AGI.
• While Q* wouldn't be the final leap towards AGI, it would represent a major milestone in general reasoning abilities.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
AI is Eating Biology and Chemistry, with Dr. Ingmar Schuster
For today's exceptional episode, I traveled to Berlin to find out how the visionary Dr. Ingmar Schuster is using A.I. to transform biology and chemistry research, thereby helping solve the world's most pressing problems, from cancer to climate change.
Ingmar:
• Is CEO and co-founder of Exazyme, a German biotech startup that aims to make chemical design as easy as using an app.
• Previously he worked as a research scientist and senior applied scientist at Zalando, the gigantic European e-retailer.
• Completed his PhD in Computer Science at Leipzig University and postdocs at the Université Paris Dauphine and the Freie Universität Berlin, throughout which he focused on using Bayesian and Monte Carlo approaches to model natural language and time series.
Today’s episode is on the technical side so may appeal primarily to hands-on practitioners such as data scientists and machine learning engineers.
In this episode, Ingmar details:
• What kernel methods are and how he uses them at Exazyme to dramatically speed up the design of synthetic biological catalysts and antibodies for pharmaceutical firms and chemical producers, with applications that include fixing carbon dioxide more effectively than plants do and enabling our own immune systems to detect and destroy cancer.
• When “shallow” machine learning approaches are more valuable than deep learning approaches.
• Why the benefits of A.I. research far outweigh the risks.
• What it takes to become a deep-tech entrepreneur like him.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
Engineering Biomaterials with Generative AI, with Dr. Pierre Salvy
Today, the brilliant Dr. Pierre Salvy details the "double deep-tech sandwich" that blends cutting-edge A.I. (generative LLMs) with cutting-edge bioengineering (creating new materials). This is a fascinating one, shot live at the Merantix AI Campus in Berlin.
Pierre:
• Has been at Cambrium for three years: initially as Head of Computational Biology and, for the past two years, as Head of Engineering, growing the team from 2 to 7 to bridge the gap between wet-lab biology, data science, and scientific computing.
• Holds a PhD in Biotechnology from EPFL in Switzerland and a Master’s in Math, Physics and Engineering Science from Mines in Paris.
Today’s episode touches on technical machine learning concepts here and there, but should largely be accessible to anyone.
In it, Pierre details:
• How data-driven R&D allowed Cambrium to go from nothing to tons of physical product sales inside two years.
• How his team leverages Large Language Models (LLMs) to be the biological-protein analogue of a ChatGPT-style essay generator.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
Scikit-learn’s Past, Present and Future, with scikit-learn co-founder Dr. Gaël Varoquaux
For today's massive episode, I traveled to Paris to interview Dr. Gaël Varoquaux, co-founder of scikit-learn, the standard library for machine learning worldwide (downloaded over 1.4 million times PER DAY 🤯). In it, Gaël fills us in on sklearn's history and future.
More on Gaël:
• Actively leads the development of the ubiquitous scikit-learn Python library today, which has several thousand people contributing open-source code to it.
• Is Research Director at the famed Inria (the French National Institute for Research in Digital Science and Technology), where he leads the Soda ("social data") team that is focused on making a major positive social impact with data science.
• Has been recognized with the Innovation Prize from the French Academy of Sciences and many other awards for his invaluable work.
Today’s episode will likely be of primary interest to hands-on practitioners like data scientists and ML engineers, but anyone who’d like to understand the cutting edge of open-source machine learning should listen in.
In this episode, Gaël details:
• The genesis, present capabilities and fast-moving future direction of scikit-learn.
• How to best apply scikit-learn to your particular ML problem.
• How ever-larger datasets and GPU-based accelerations impact the scikit-learn project.
• How (whether you write code or not!) you can get started on contributing to a mega-impactful open-source project like scikit-learn yourself.
• Hugely successful social-impact data projects his Soda lab has had recently.
• Why statistical rigor is more important than ever and how software tools could nudge us in the direction of making more statistically sound decisions.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
A.I. Product Management, with Google DeepMind's Head of Product, Mehdi Ghissassi
The elite team at Google DeepMind cranks out one world-changing A.I. innovation after another. In today's episode, their affable Head of Product Mehdi Ghissassi shares his wisdom on how to design and release successful A.I. products.
Mehdi:
• Has been Head of Product at Google DeepMind — the world’s most prestigious A.I. research group — for over four years.
• Spent an additional three years at DeepMind before that as their Head of A.I. Product Incubation and a further four years before that in product roles at Google, meaning he has more than a decade of product leadership experience at Alphabet.
• Is a member of the Board of Advisors at CapitalG, Alphabet’s renowned venture capital and private equity fund.
• Holds five (!!!) Master’s degrees, including Master’s degrees in computer science and in engineering from the École Polytechnique, one in International Relations from Sciences Po, and an MBA from Columbia Business School.
Today’s episode will be of interest to anyone who’s keen to create incredible A.I. products.
In this episode, Mehdi details:
• Google DeepMind’s bold mission to achieve Artificial General Intelligence (AGI).
• Game-changing DeepMind A.I. products such as AlphaGo and AlphaFold.
• How he stays on top of fast-moving A.I. innovations.
• The key ethical issues surrounding A.I.
• A.I.’s big social-impact opportunities.
• His guidance for investing in A.I. startups.
• Where the big opportunities lie for A.I. products in the coming years.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
Data Science for Astronomy, with Dr. Daniela Huppenkothen
Our planet is a tiny little blip in a vast universe. In today's episode, the astronomical data scientist and talented simplifier of the complex, Dr. Daniela Huppenkothen, explains how we collect data from space and use ML to understand the universe.
Daniela:
• Is a Scientist at both the University of Amsterdam and the SRON Netherlands Institute for Space Research.
• Was previously an Associate Director of the Institute for Data-Intensive Research in Astronomy and Cosmology at the University of Washington, and was also a Data Science Fellow at New York University.
• Holds a PhD in Astronomy from the University of Amsterdam.
Most of today’s episode should be accessible to anyone but there is some technical content in the second half that may be of greatest interest to hands-on data science practitioners.
In today’s episode, Daniela details:
• The data earthlings collect in order to observe the universe around us.
• The three categories of ways machine learning is applied to astronomy.
• How you can become an astronomer yourself.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
A.I. Agents Will Develop Their Own Distinct Culture, with Nell Watson
Nell Watson is the most insightful person I've spoken to on where A.I. is going in the coming decades and how it will overhaul our lives. In today's mind-bending episode, she conveys these insights with amusing analogies and clever literary references.
This sensational guest, Nell:
• Is the A.I. Ethics Certification Maestro at the IEEE (the Institute of Electrical and Electronics Engineers), a role in which she engineers mechanisms into A.I. systems in order to safeguard trust and safety in algorithms.
• Also works for Apple as an Executive Consultant on philosophical matters related to machine ethics and machine intelligence.
• Is President of EURAIO, the European Responsible Artificial Intelligence Office.
• Is renowned and sought-after as a public speaker, including at venerable venues like The World Bank and the United Nations General Assembly.
• On top of all that, she’s currently wrapping up a PhD in Engineering from the University of Gloucestershire in the UK.
Today’s episode covers rich philosophical issues that will be of great interest to hands-on data science practitioners but the content should be accessible to anyone. And I do highly recommend that everyone give this extraordinary episode a listen.
In this episode, Nell details:
• The distinct, and potentially dangerous, new phase of A.I. capabilities that our society is stumbling forward into.
• How you yourself can contribute to IEEE A.I. standards that can offset A.I. risks.
• How we together can craft regulations and policies to make the most of A.I.’s potential, thereby unleashing a fast-moving second renaissance and potentially bringing about a utopia in our lifetimes.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
Universal Principles of Intelligence (Across Humans and Machines), with Prof. Blake Richards
Today's episode is wild! The exceptionally lucid Prof. Blake Richards will blow your mind on what intelligence is, why the "AGI" concept isn't real, why AI doesn't pose an existential risk to humans, and how AI could soon directly update our thoughts.
Blake:
• Is Associate Professor in the School of Computer Science and Department of Neurology and Neurosurgery at the revered McGill University in Montreal.
• Is a Core Faculty Member at Mila, one of the world’s most prestigious A.I. research labs, which is also in Montreal.
• His lab investigates universal principles of intelligence that apply to both natural and artificial agents and he has received a number of major awards for his research.
• He obtained his PhD in neuroscience from the University of Oxford and his Bachelor’s in cognitive science and AI from the University of Toronto.
Today’s episode contains tons of content that will be fascinating for anyone. A few topics near the end, however, will probably appeal primarily to folks who have a grasp of fundamental machine learning concepts like cost functions and gradient descent.
In this episode, Blake details:
• What intelligence is.
• Why he doesn’t believe in Artificial General Intelligence (AGI).
• Why he’s skeptical about existential risks from A.I.
• The many ways that A.I. research informs our understanding of how the human brain works.
• How, in the future, A.I. could practically and directly influence your thoughts and behaviors through brain-computer interfaces (BCIs).
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
Use Contrastive Search to get Human-Quality LLM Outputs
Historically, when we deployed a machine learning model into production, the parameters the model had learned during training were the sole driver of the model's outputs. With the Generative LLMs that have taken the world by storm in the past few years, however, the model parameters alone are not enough to get reliably high-quality outputs. For that, the so-called decoding method we choose when deploying our LLM into production is also critical.
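As a concrete example of how much the decoding method matters, here is a minimal sketch of contrastive search using the Hugging Face transformers library, where it is enabled by passing penalty_alpha together with top_k to generate(). The model choice and parameter values below are just common starting points, not prescriptions.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# A small open model purely for illustration; swap in your own LLM.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The future of data science is", return_tensors="pt")

# Contrastive search: penalty_alpha balances model confidence against a
# degeneration penalty, while top_k restricts the candidate pool per step.
outputs = model.generate(
    **inputs,
    penalty_alpha=0.6,    # weight of the degeneration penalty
    top_k=4,              # number of candidate tokens considered each step
    max_new_tokens=60,
)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Setting penalty_alpha to 0 should fall back to ordinary greedy decoding, which makes it easy to compare the two methods on the same prompt and model.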
Seven Factors for Successful Data Leadership
Today's episode is a fun one with the jovial EIGHT-time book author, Ben Jones. In it, Ben covers the seven factors of successful data leadership — factors he's gleaned from administering his data literacy assessment to 1000s of professionals.
Ben:
• Is the CEO of Data Literacy, a firm that specializes in training and coaching professionals on data-related topics like visualization and statistics.
• Has published eight books, including bestsellers "Communicating Data with Tableau" (O'Reilly, 2014) and "Avoiding Data Pitfalls" (Wiley, 2019).
• Has been teaching data visualization at the University of Washington for nine years.
• Previously worked for six years as a director at Tableau.
Today’s episode should be broadly accessible to any interested professional.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
Decoding Speech from Raw Brain Activity, with Dr. David Moses
Dr. David Moses and his colleagues have pulled off a miracle with A.I.: allowing paralyzed patients to "speak" through a video avatar in real time — using brain waves alone. In today's episode, David details how ML makes this possible.
David:
• Is an adjunct professor at the University of California, San Francisco.
• Is the project lead on the BRAVO (Brain-Computer Interface Restoration of Arm and Voice) clinical trial.
• The success of this extraordinary BRAVO project led to an article in the prestigious journal Nature and a YouTube video that already has over 3 million views.
Today’s episode does touch on specific machine learning (ML) terminology at points, but otherwise should be fascinating to anyone who’d like to hear how A.I. is facilitating real-life miracles.
In this episode, David details:
• The genesis of the BRAVO project.
• The data and the ML models they’re using on the BRAVO project in order to predict text, speech sounds and facial expressions from the brain activity of paralyzed patients.
• What’s next for this exceptional project, including how long it might be before these brain-to-speech capabilities are available to anyone who needs them.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
Mathematical Optimization, with Jerry Yurchisin
Mathematical Optimization complements Machine Learning and Statistics in the data scientist's tool belt, but before today's episode with Mathematical Optimization guru Jerome Yurchisin, I knew almost nothing about the powerful technique.
Jerry:
• Works as a Data Science Strategist at Gurobi Optimization, a leading decision-intelligence company that provides mathematical optimization solutions to the likes of Uber, Air France and the National Football League.
• Spent eight years as a mathematical consultant at Booz Allen Hamilton where he paired mathematical optimization with ML, statistics and simulation to inform decision-making.
• Was also previously an instructor at the University of North Carolina at Chapel Hill, where he obtained his Master’s in Operations Research and Statistics.
• Also holds an additional Master’s in Applied Math from Ohio University.
Today’s episode will appeal most to hands-on data science practitioners such as data scientists and ML engineers.
In this episode, Jerry details:
• What mathematical optimization is and how it works.
• Specific real-world examples where mathematical optimization is a better choice than a statistical or machine learning approach.
• His recommended resources for getting started with mathematical optimization in Python (or whatever your preferred programming language is) today.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
AI Emits Far Less Carbon Than Humans (Doing the Same Task)
There's been a lot of press about Large Language Models (LLMs), such as those behind ChatGPT, using vast amounts of energy per query. In fact, however, a person doing the same work emits 12x to 45x more carbon from their laptop alone.
Today’s "Five-Minute Friday" episode is a quick one on how “The Carbon Emissions of Writing and Illustrating Are Lower for AI than for Humans”. Everything in today’s episode is based on an arXiv preprint paper with that title by researchers from UC Irvine, the Massachusetts Institute of Technology and other universities.
For writing a page of text, for example, the authors estimate:
• BLOOM open-source LLM (including training) produces ~1.6g CO2/query.
• OpenAI's GPT-3 (including training) produces ~2.2g CO2/query.
• Laptop usage for 0.8 hours (the average time to write a page) emits ~27g CO2 (12x GPT-3).
• Desktop usage for the same amount of writing time emits ~72g CO2 (32x GPT-3).
For creating a digital illustration:
• Midjourney (including training) produces ~1.9g CO2/query.
• DALL-E 2 produces ~2.2g CO2/query.
• A human takes ~3.2 hours for the same work, emitting ~100g CO2 (45x DALL-E 2) on a laptop or ~280g CO2 (127x DALL-E 2) on a desktop; see the quick sanity check below.
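As a quick sanity check, the multiples quoted above follow directly from the per-page and per-illustration figures; a few lines of Python reproduce them:

```python
# Back-of-the-envelope check of the multiples quoted above (grams of CO2),
# using only the figures from the paper as summarized in this post.
gpt3_per_page, dalle2_per_image = 2.2, 2.2

print(f"laptop writing:  {27 / gpt3_per_page:.0f}x GPT-3")         # ~12x
print(f"desktop writing: {72 / gpt3_per_page:.0f}x GPT-3")         # ~33x (quoted as 32x)
print(f"laptop drawing:  {100 / dalle2_per_image:.0f}x DALL-E 2")  # ~45x
print(f"desktop drawing: {280 / dalle2_per_image:.0f}x DALL-E 2")  # ~127x
```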
There are complexities here, such as what humans do with their time instead of writing or illustrating; if it's spent driving, for example, then the net impact would be worse. As someone who'd love to see the world reach net-negative carbon emissions ASAP through innovations like nuclear fusion and carbon capture, I had been getting antsy about how much energy state-of-the-art LLMs use, but this simple article turned that perspective upside down. I'll continue to use A.I. to augment my work wherever I can... and hopefully get my day done earlier so I can get away from my machine and enjoy some time outdoors.
Hear more detail in today's episode or check out the video version to see figures as well.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
Quantum Machine Learning, with Dr. Amira Abbas
Brilliant, eloquent Dr. Amira Abbas introduces us to Quantum Machine Learning in today's episode. She details the key concepts (like qubits), what's possible today (Quantum SVMs) and what the future holds (e.g., Quantum Neural Networks).
Amira:
• Is a postdoctoral researcher at the University of Amsterdam as well as QuSoft, a world-leading quantum-computing research institution also in the Netherlands.
• Was previously on the Google Quantum A.I. team and did Quantum ML research at IBM.
• Holds a PhD in Quantum ML from the University of KwaZulu-Natal, during which she was a recipient of Google's PhD fellowship.
Much of today’s episode will be fascinating to anyone interested in how quantum computing is being applied to machine learning; there are, however, some relatively technical parts of the conversation that might be best-suited to folks who already have some familiarity with ML.
In this episode, Amira details:
• What Quantum Computing is, how it’s different from the classical computing that dominates the world today, and where quantum computing excels relative to its classical cousin.
• Key terms such as qubits, quantum entanglement, quantum data and quantum memory.
• Where Quantum ML shows promise today and where it might in the coming years.
• How to get started in Quantum ML research yourself.
• Today’s leading software libraries for Quantum ML.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
OpenAI’s DALL-E 3, Image Chat and Web Search
Today's episode details three big releases from OpenAI: (1) DALL-E 3 text-to-image model, which "exactly" adheres to your prompt. (2) Image-to-text chat. (3) Real-time web search integrated into ChatGPT (which seems to lag behind Google's Bard).
So, first, DALL-E 3 text-to-image generation:
• Appears to generate images that are on par with Midjourney V5, the current state-of-the-art.
• The big difference is that apparently DALL-E 3 will actually generate images that adhere “exactly” to the text you provide.
• In contrast, today's incumbent state-of-the-art models typically ignore words or key parts of the description, even though their image quality is often stunning.
• This adherence to prompts extends even to language that you’d like to include in the image, which is mega.
• Watch today's YouTube version for examples of all the above.
In addition, using Midjourney is a really bizarre user experience because it's done through Discord where you provide prompts and get results alongside dozens of other people at the same time. DALL-E 3, in contrast, will be within the slick ChatGPT Plus environment, which could completely get rid of the need to develop text-to-image prompt-engineering expertise in order to get great results. Instead, you can simply have an iterative back-and-forth conversation with ChatGPT to produce the image of your dreams.
Next up is image-to-text chat in ChatGPT Plus:
• We've known this was coming for a while.
• Works stunningly well in the tests I've done so far.
• Today's YouTube version also shows an example of this.
Finally, real-time web search with Bing is now integrated into ChatGPT Plus:
• In my personal (anecdotal) tests, this lagged behind Google's Bard.
• Bard is also free, so if real-time web search is all you're after, there doesn't seem to be a reason to pay for ChatGPT Plus. That said, for state-of-the-art general chat, plus now image generation and image-to-text chat (per the above), ChatGPT Plus is well worth the price tag.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
Computational Mathematics and Fluid Dynamics, with Prof. Margot Gerritsen
Today, the extremely intelligent and super delightful Prof. Margot Gerritsen returns to the show to introduce what Computational Mathematics is, detail countless real-world applications of it, and relate it to the field of data science.
Margot:
• Has been faculty at Stanford University for more than 20 years, including eight years as Director of the Institute for Computational and Mathematical Engineering.
• In 2015, co-founded Women in Data Science (WiDS) Worldwide, an organization that supports, inspires and lowers barriers to entry for women across over 200 chapters in over 160 countries.
• Hosts the corresponding Women in Data Science podcast.
• Holds a PhD from Stanford in which she focused on Computational Fluid Dynamics — a passion she has retained throughout her academic career.
Today’s episode should appeal to anyone.
In this episode, Margot details:
• What computational mathematics is.
• How computational math is used to study fluid dynamics, with fascinating in-depth examples across traffic, water, oil, sailing, F1 racing, the flight of pterodactyls and more.
• Her synaesthesia, a rare perceptual phenomenon that in her case means she sees numbers in specific colors, and how it relates to her lifelong interest in math.
• The genesis of her Women in Data Science organization and the impressive breadth of its global impact today.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
ChatGPT Custom Instructions: A Major, Easy Hack for Data Scientists
Thanks to Shaan Khosla for tipping me off to a crazy easy hack to get markedly better results from GPT-4: providing Custom Instructions that prompt the algorithm to iterate upon its own output while critically evaluating and improving it.
Here's Shaan's full Custom Instructions text, which he himself has been iterating on in recent months:
"I need you to help me with a task. To help me with the task, first come up with a detailed outline of how you think you should respond, then critique the ideas in this outline (mention the advantages, disadvantages, and ways it could be improved), then use the original outline and the critiques you made to come up with your best possible solution.
"Overall, your tone should not be overly dramatic. It should be clear, professional, and direct. Don't sound robotic or like you're trying to sell something. You don't need to remind me you're a large language model, get straight to what you need to say to be as helpful as possible. Again, make sure your tone is clear, professional, and direct - not overly like you're trying to sell something."
Try it out! If you haven't used Custom Instructions before, in today's episode I talk you through how to set it up and explain why this approach is so effective. In the video version, I provide a screenshare that makes getting started foolproof.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.