Jon Krohn

Generative Deep Learning, with David Foster

Added on June 13, 2023 by Jon Krohn.

Today, bestselling author David Foster provides a fascinating technical introduction to cutting-edge Generative A.I. concepts including variational autoencoders, diffusion models, contrastive learning, GANs and (my favorite!) "world models".

David:
• Wrote the O'Reilly book “Generative Deep Learning”; the first edition from 2019 was a bestseller while the second edition was released just last week.
• Is a Founding Partner of Applied Data Science Partners, a London-based consultancy specializing in end-to-end data science solutions.
• Holds a Master’s in Mathematics from the University of Cambridge and a Master’s in Management Science and Operational Research from the University of Warwick.

Today’s episode is deep in the weeds on generative deep learning pretty much from beginning to end and so will appeal most to technical practitioners like data scientists and ML engineers.

In the episode, David details: 
• How generative modeling differs from the discriminative modeling that dominated machine learning until just the past few months.
• The range of application areas of generative A.I.
• How autoencoders work and why variational autoencoders are particularly effective for generating content.
• What diffusion models are and how latent diffusion in particular results in photorealistic images and video.
• What contrastive learning is.
• Why “world models” might be the most transformative concept in A.I. today.
• What transformers are, how variants of them power different classes of generative models such as BERT architectures and GPT architectures, and how blending generative adversarial networks with transformers supercharges multi-modal models.

The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.

In Computer Science, Interview, Podcast, SuperDataScience, YouTube Tags ai, ml, Data Science, generativeai

Open-Source “Responsible A.I.” Tools, with Ruth Yakubu

Added on June 9, 2023 by Jon Krohn.

In today's episode, Ruth Yakubu details what Responsible A.I. is and open-source options for ensuring we deploy A.I. models — particularly the Generative variety that are rapidly transforming industries — responsibly.

Ruth:
• Has been a cloud expert at Microsoft for nearly seven years; for the past two, she’s been a Principal Cloud Advocate who specializes in A.I.
• Previously worked as a software engineer and manager at Accenture.
• Has been a featured speaker at major global conferences like Websummit.
• Studied computer science at the University of Minnesota.

In this episode, Ruth details:
• The six principles that determine whether a given A.I. model is responsible.
• The open-source Responsible A.I. Toolbox that allows you to quickly assess how your model fares across a broad range of Responsible A.I. metrics.


The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.

In Data Science, Interview, Podcast, SuperDataScience, YouTube Tags AI, Microsoft, generativeai, responsible AI

Tools for Building Real-Time Machine Learning Applications, with Richmond Alake

Added on June 6, 2023 by Jon Krohn.

Today, the astonishingly industrious ML Architect and entrepreneur Richmond Alake crisply describes how to rapidly develop robust and scalable Real-Time Machine Learning applications.

Richmond:
• Is a Machine Learning Architect at Slalom Build, a huge Seattle-based consultancy that builds products embedded with analytics and ML.
• Is Co-Founder of two startups: one uses computer vision to correct people’s form in the gym and the other is a generative A.I. startup that works with human speech.
• Creates/delivers courses for O'Reilly and writes for NVIDIA.
• Previously worked as a Computer Vision Engineer and as a Software Developer.
• Holds a Master’s in Computer Vision, ML and Robotics from the University of Surrey.

Today’s episode will appeal most to technical practitioners, particularly those who incorporate ML into real-time applications, but there’s a lot in this episode for anyone who’d like to hear about the latest tools for developing real-time ML applications from a leader in the field.

In this episode, Richmond details:
• The software choices he’s made up and down the application stack — from databases to ML to the front-end — across his startups and the consulting work he does.
• The most valuable real-time ML tools he teaches in his courses.
• Why writing for the public is an invaluable career hack that everyone should be taking advantage of.

The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.

In Data Science, Interview, Podcast, Professional Development, SuperDataScience, YouTube Tags ML, AI, ML applications, ML Architect

Get More Language Context out of your LLM

Added on June 2, 2023 by Jon Krohn.

The "context window" limits the number of words that can be input to (or output by) a given Large Language Model. Today's episode introduces FlashAttention, a trick that allows for much larger context windows.
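
To see why long context windows are expensive, note that standard attention materializes an n-by-n matrix of scores, so memory grows with the square of the sequence length. The back-of-the-envelope sketch below is purely illustrative (it assumes 16-bit values and 32 attention heads — hypothetical round numbers — and counts a single layer); FlashAttention's trick is to compute attention without ever materializing this full matrix in GPU memory.

    # Rough memory cost of materializing one layer's n-by-n attention-score matrices.
    # Illustrative assumptions: fp16 values (2 bytes) and 32 attention heads.
    BYTES_PER_VALUE = 2
    HEADS = 32

    def attention_matrix_gb(context_length: int) -> float:
        """GB needed to hold all of one layer's attention scores at once."""
        return context_length ** 2 * HEADS * BYTES_PER_VALUE / 1e9

    for n in (2_048, 8_192, 32_768):
        print(f"context length {n:>6}: ~{attention_matrix_gb(n):.1f} GB of attention scores")
    # Each 4x increase in context length costs ~16x more memory, which is why
    # avoiding the full matrix (as FlashAttention does) enables larger windows.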


The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.

In Five-Minute Friday, SuperDataScience, YouTube Tags flash attention, ai, Data Science

Contextual A.I. for Adapting to Adversaries, with Dr. Matar Haller

Added on May 30, 2023 by Jon Krohn.

Today, the wildly intelligent Dr. Matar Haller introduces Contextual A.I. (which considers adjacent, often multimodal information when making inferences) as well as how to use ML to build a moat around your company.

Matar:
• Is VP of Data and A.I. at ActiveFence, an Israeli firm that has raised over $100m in venture capital to protect online platforms and their users from malicious behavior and malicious content.
• Is renowned for her top-rated presentations at leading conferences.
• Previously worked as Director of Algorithmic A.I. at SparkBeyond, an analytics platform.
• Holds a PhD in neuroscience from the University of California, Berkeley.
• Prior to data science, taught soldiers how to operate tanks.

Today’s episode has some technical moments that will resonate particularly well with hands-on data science practitioners, but for the most part it will be interesting to anyone who wants to hear from a brilliant person on cutting-edge A.I. applications.

In this episode, Matar details:
• The “database of evil” that ActiveFence has amassed for identifying malicious content.
• Contextual A.I. that considers adjacent (and potentially multimodal) information when classifying data.
• How to continuously adapt A.I. systems to real-world adversarial actors.
• The machine learning model-deployment stack she uses.
• The data she collected directly from human brains and how this research relates to the brain-computer interfaces of the future.
• Why being a preschool teacher is a more intense job than the military.


The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.

In Data Science, Interview, Podcast, SuperDataScience, YouTube, Professional Development Tags contextual AI, AI, Data Science

Business Intelligence Tools, with Mico Yuk

Added on May 26, 2023 by Jon Krohn.

Today's guest is the straight shooter Mico Yuk, who pulls absolutely no punches in her assessment of, well, anything! ...but particularly about vendors in the business intelligence and data analytics space. Enjoy!

Mico:
• Is host of the popular Analytics on Fire Podcast (top 2% worldwide).
• Co-founded the BI Brainz Group, an analytics consulting and solutions company that has taught analytics, visualization and data storytelling courses to over 15,000 students, including at major multinationals like Nestlé, FedEx and Procter & Gamble.
• Authored the "Data Visualization for Dummies" book.
• Is a sought-after keynote speaker and TV-news commentator.

In this episode, Mico details:
• Her BI (business intelligence) and analytics framework that persuades executives with data storytelling.
• What the top BI tools are on the market today.
• The BI trends she’s observed that could predict the most popular BI tools of the coming years.

The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.

In Computer Science, Interview, Podcast, Professional Development, SuperDataScience, YouTube Tags data visualization, data analytics, Data Science, BI

XGBoost: The Ultimate Classifier, with Matt Harrison

Added on May 23, 2023 by Jon Krohn.

XGBoost is typically the most powerful ML option whenever you're working with structured data. In today's episode, world-leading XGBoost XPert (😂) Matt Harrison details how it works and how to make the most of it.

Matt:
• Is the author of seven best-selling books on Python and Machine Learning.
• His most recent book, "Effective XGBoost", was published in March.
• Teaches "Exploratory Data Analysis with Python" at Stanford University.
• Through his consultancy MetaSnake, he’s taught Python at leading global organizations like NASA, Netflix, and Qualcomm.
• Previously worked as a CTO and Software Engineer.
• Holds a degree in Computer Science from Stanford.

Today’s episode will appeal primarily to practicing data scientists who are keen to learn about XGBoost or keen to become an even deeper expert on XGBoost by learning about it from a world-leading educator on the library.

In this episode, Matt details:
• Why XGBoost is the go-to library for attaining the highest accuracy when building a classification model.
• Modeling situations where XGBoost should not be your first choice.
• The XGBoost hyperparameters to adjust to squeeze every bit of juice out of your tabular training data, and his recommended library for automating hyperparameter selection (a minimal usage sketch follows this list).
• His top Python libraries for other XGBoost-related tasks such as data preprocessing, visualizing model performance, and model explainability.
• Languages beyond Python that have convenient wrappers for applying XGBoost.
• Best practices for communicating XGBoost results to non-technical stakeholders.
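
For listeners who want a concrete starting point, here is a minimal, hedged XGBoost classification sketch; the dataset and hyperparameter values are illustrative placeholders rather than recommendations from the episode.

    # Minimal XGBoost classification sketch (illustrative hyperparameter values only).
    from sklearn.datasets import load_breast_cancer
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split
    from xgboost import XGBClassifier

    X, y = load_breast_cancer(return_X_y=True)  # a small, built-in tabular dataset
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

    model = XGBClassifier(
        n_estimators=200,      # number of boosted trees
        max_depth=4,           # depth of each tree
        learning_rate=0.1,     # shrinkage applied to each tree's contribution
        subsample=0.8,         # row sampling per tree
        colsample_bytree=0.8,  # column sampling per tree
    )
    model.fit(X_train, y_train)
    print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))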

The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.

In Computer Science, Data Science, SuperDataScience, YouTube Tags XGBoost, Python, Python libraries

Automating Industrial Machines with Data Science and the Internet of Things (IoT)

Added on May 19, 2023 by Jon Krohn.

Despite poor lighting on my face in today's video version (my bad!), we've got a fascinating episode with the brilliant (and well-lit!) Allegra Alessi, who details how data science is automating industrial machines.

Allegra:
• Is Product Owner for IoT (Internet of Things) devices at BOBST, a Swiss industrial manufacturing giant.
• Previously, she worked as a Product Owner and Data Scientist for Rolls-Royce in the UK and as a Data Scientist for Alstom, the enormous train manufacturing company, in Paris.
• She holds a Master’s in Engineering from Politecnico di Milano in Italy.

In this episode, Allegra details:
• How modern industrial machinery depends on data science for real-time performance analytics, predicting issues before they happen, and fully automating their operations.
• The tech stack her team uses to build data-driven IoT platforms.
• The key methodologies she uses to be effective at product management.
• The kinds of data scientists that might be ideally suited to moving into a product role.

The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.

In Computer Science, Podcast, Professional Development, SuperDataScience, YouTube Tags Automation, industrial, data science, IoT

The A.I. and Machine Learning Landscape, with investor George Mathew

Added on May 16, 2023 by Jon Krohn.

Today, razor-sharp investor George Mathew (of Insight Partners, which has a whopping $100-billion AUM 😮) brings us up to speed on the Machine Learning landscape, with a particular focus on Generative A.I. trends.

George:
• Is a Managing Director at Insight Partners, an enormous New York-based venture capital and growth equity firm ($100B in assets under management) that has invested in the likes of Twitter, Shopify, and Monday.com.
• Specializes in investing in A.I., ML and data "scale-ups" such as the data and A.I. company Databricks, the fast-growing generative A.I. company Jasper, and the popular MLOps platform Weights & Biases.
• Prior to becoming an investor, was a deep operator at fast-growing companies such as Salesforce, SAP, the analytics automation platform Alteryx (where he was President & COO) and the drone-based aerial intelligence platform Kespry (where he was CEO & Chairman).

Today’s episode will appeal to technical and non-technical listeners alike — anyone who’d like to be brought up to speed on the current state of the data and machine learning landscape by a razor-sharp expert on the topic.

In this episode, George details:
• How sensational generative A.I. models like GPT-4 are bringing about a deluge of opportunity for domain-specific tools and platforms.
• The four layers of the "Generative A.I. Stack" that supports this enormous deluge of new applications.
• How RLHF — reinforcement learning from human feedback — provides an opportunity for you to build your own powerful and defensible models with your proprietary data.
• The new LLMOps field that has emerged to support the suddenly ubiquitous LLMs (Large Language Models), including generative models.
• How investment criteria differ depending on whether the prospective investment is seed stage, venture-capital stage, or growth stage.
• The flywheel that enables the best software companies to scale extremely rapidly.

The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.

In Data Science, SuperDataScience, YouTube, Personal Improvement Tags LLM, investing, AI, generativeai, GPT-4

StableLM: Open-source “ChatGPT”-like LLMs you can fit on one GPU

Added on May 12, 2023 by Jon Krohn.

Stability AI is best known for its hugely popular text-to-image generator Stable Diffusion. The company's recent release of the first models in its open-source StableLM suite of language models marks a significant advancement in the A.I. domain.

In Five-Minute Friday, Data Science, Podcast, SuperDataScience, YouTube Tags StableLM, Open-source, ChatGPT, GPT, GPU, LLM

Digital Analytics with Avinash Kaushik

Added on May 9, 2023 by Jon Krohn.

Today's guest is an icon, a bestselling author and world-leading authority on digital analytics. In this interview, Avinash Kaushik masterfully describes how A.I. is transforming analytics and how you can capitalize on it to deliver joy to your customers.

Avinash:
• Is Chief Strategy Officer at Croud, a leading marketing agency.
• Was until recently Sr. Director of Global Strategic Analytics at Google, where he spent 16 years and where he launched the ubiquitous Google Analytics tool.
• Is a multi-time author, including the industry-standard book "Web Analytics 2.0".
• Is an authority on marketing analytics through his widely-read "Occam's Razor" blog and "The Marketing Analytics Intersect" newsletter (55k subscribers).
• His prodigious posting of useful analytics insights has landed him 200k Twitter followers and 300k followers on LinkedIn.

Today’s episode has a few deeply technical moments but for the most part is accessible to anyone who’d like to glean practical digital analytics insights from a world leader in the space.

In this episode, Avinash details:
• The distinction between brand analytics and performance analytics, and why both are critical for commercial success.
• His “four clusters of intent” for understanding your audience, delivering joy to them, and accelerating business profit.
• Why it’s a superpower for executives to be hands-on with data tools and programming.
• His favorite data tools and programming languages.
• How A.I. is transforming analytics today and his concrete vision for how A.I. will transform analytics in the coming years.

The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.

In Data Science, YouTube, SuperDataScience Tags marketing, web analytics, marketing analytics, digitalmarketing, digital analytics

52nd St. Gallen Symposium Recap

Added on May 8, 2023 by Jon Krohn.

The St. Gallen Symposium, held annually in Switzerland since the student riots of the 1960s, promotes cross-generational dialogue. This year's theme of "A New Generational Contract" set a path for a more resilient, sustainable future. Throughout the week, I reconnected with many inspiring old friends from previous Symposia and met many exceptional new ones, particularly a large number of electrifying social-impact-oriented entrepreneurs and business leaders. A *lot* happened over my three days there; below are the highlights.

In Professional Development, Personal Improvement Tags St. Gallen Symposium, future, professional development

The Chinchilla Scaling Laws

Added on May 5, 2023 by Jon Krohn.

The Chinchilla Scaling Laws dictate the amount of training data needed to optimally train a Large Language Model (LLM) of a given size. For Five-Minute Friday, I cover this ratio and the LLMs that have arisen from it (incl. the new Cerebras-GPT family).
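
For a rough sense of the ratio discussed in the episode: the Chinchilla paper's widely cited heuristic is on the order of 20 training tokens per model parameter. The quick calculation below is a back-of-the-envelope illustration of that approximation, not an exact law.

    # Chinchilla-style back-of-the-envelope: roughly 20 training tokens per parameter.
    TOKENS_PER_PARAM = 20  # approximate compute-optimal ratio from Hoffmann et al.

    def compute_optimal_tokens(n_params: float) -> float:
        """Approximate number of training tokens for a compute-optimal LLM."""
        return n_params * TOKENS_PER_PARAM

    for name, params in [("1B model", 1e9), ("13B model", 13e9), ("70B model", 70e9)]:
        print(f"{name}: ~{compute_optimal_tokens(params) / 1e9:,.0f}B training tokens")
    # e.g., a 70B-parameter model is compute-optimal at roughly 1.4 trillion tokens.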

In Five-Minute Friday, Podcast, YouTube, SuperDataScience Tags SuperDataScience, data science, chinchilla scaling laws, LLM, NLP, GPT

Pandas for Data Analysis and Visualization

Added on May 2, 2023 by Jon Krohn.

Today's episode is jam-packed with practical tips on using the Pandas library in Python for data analysis and visualization. Super-sharp Stefanie Molin — a bestselling author and sought-after instructor on these topics — is our guide.

Stefanie:
• Is the author of the bestselling book "Hands-On Data Analysis with Pandas".
• Provides hands-on pandas and data viz tutorials at top industry conferences.
• Is a software engineer and data scientist at Bloomberg, the financial data giant, where she tackles problems revolving around data wrangling/visualization and building tools for gathering data.
• Holds a degree in operations research from Columbia University as well as a master’s in computer science, with an ML specialization, from Georgia Tech.

Today’s episode is intended primarily for hands-on practitioners like data analysts, data scientists, and ML engineers — or anyone who would like to be in a technical data role like these in the future.

In this episode, Stefanie details:
• Her top tips for wrangling data in pandas.
• In what data viz circumstances you should use pandas, matplotlib, or Seaborn.
• Why everyone who codes, including data scientists, should develop expertise in Python package creation as well as contribute to open-source projects.
• The tech stack she uses in her role at Bloomberg.
• The productivity tips she honed by simultaneously working full-time, completing a master’s degree, and writing a bestselling book.

The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.

In Data Science, SuperDataScience, YouTube Tags dataviz, data sci, data visualization, SuperDataScience, pandas, matplotlib, python

Parameter-Efficient Fine-Tuning of LLMs using LoRA (Low-Rank Adaptation)

Added on April 28, 2023 by Jon Krohn.

Large Language Models (LLMs) are capable of extraordinary NLP feats, but are so large that they're too expensive for most organizations to train. The solution is Parameter-Efficient Fine-Tuning (PEFT) with Low-Rank Adaptation (LoRA).

This discussion comes in the wake of the introduction of models like Alpaca, Vicuña, GPT4All-J, and Dolly 2.0, which demonstrated the power of fine-tuning with thousands of instruction-response pairs.

Training LLMs, even those with tens of billions of parameters, can be prohibitively expensive and technically challenging. One significant issue is "catastrophic forgetting," where a model, after being retrained on new data, loses its ability to perform previously learned tasks. This challenge necessitates a more efficient approach to fine-tuning.

PEFT

By reducing the memory footprint and the number of parameters needed for training, PEFT methods like LoRA and AdaLoRA make it feasible to fine-tune large models on standard hardware. These techniques are not only space-efficient, with model weights requiring only megabytes of space, but they also avoid catastrophic forgetting, perform better with small data sets, and generalize better to out-of-training-set instructions. They can also be applied to other A.I. use cases — not just NLP — such as machine vision.
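
In practice, PEFT methods like LoRA are often applied via Hugging Face's peft library. The sketch below is a hedged illustration of wrapping a causal language model with a LoRA configuration; the base model name ("gpt2") and the target_modules value are assumptions made for this example, since the right module names depend on the architecture being adapted.

    # Hedged sketch: adding LoRA adapters to a causal LM with the Hugging Face peft library.
    # The model name and target_modules below are illustrative assumptions.
    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, TaskType, get_peft_model

    base_model = AutoModelForCausalLM.from_pretrained("gpt2")  # stand-in base model

    lora_config = LoraConfig(
        task_type=TaskType.CAUSAL_LM,
        r=8,                        # rank of the low-rank update matrices
        lora_alpha=16,              # scaling applied to the LoRA update
        lora_dropout=0.05,
        target_modules=["c_attn"],  # attention projection(s) to adapt; model-specific
    )

    model = get_peft_model(base_model, lora_config)
    model.print_trainable_parameters()  # typically well under 1% of total parameters
    # Only the LoRA matrices receive gradient updates during fine-tuning,
    # so saved adapter checkpoints are just megabytes in size.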

LoRA

LoRA stands out as a particularly effective PEFT method. It involves inserting low-rank decomposition matrices into each layer of a transformer model. These matrices represent data in a lower-dimensional space, simplifying computational processing. The key to LoRA's efficiency is freezing all original model weights except for the new low-rank matrices. This strategy reduces the number of trainable parameters by approximately 10,000 times and lowers the memory requirement for training by about three times. Remarkably, in certain scenarios LoRA not only matches but even outperforms full-model training. This efficiency does not come at the cost of effectiveness, making LoRA an attractive option for fine-tuning LLMs.
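
To make those mechanics concrete, here is a minimal from-scratch PyTorch sketch of a LoRA-adapted linear layer: the original weight matrix is frozen and only the two small low-rank matrices are trained. The dimensions and rank are arbitrary illustrative values, and a real adapter would start from pretrained weights rather than random ones.

    # Minimal from-scratch sketch of a LoRA-adapted linear layer (illustrative only).
    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        def __init__(self, in_features: int, out_features: int, r: int = 8, alpha: float = 16.0):
            super().__init__()
            # Frozen "pretrained" weight (random here purely for illustration).
            self.weight = nn.Parameter(torch.randn(out_features, in_features), requires_grad=False)
            # Trainable low-rank update: delta_W = B @ A, with rank r << dimensions.
            self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)
            self.lora_B = nn.Parameter(torch.zeros(out_features, r))  # starts at zero: no initial change
            self.scale = alpha / r

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # Frozen path plus scaled low-rank update: (W + scale * B @ A) x
            return x @ self.weight.T + self.scale * (x @ self.lora_A.T @ self.lora_B.T)

    layer = LoRALinear(in_features=4096, out_features=4096, r=8)
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    total = sum(p.numel() for p in layer.parameters())
    print(f"trainable params: {trainable:,} of {total:,} ({100 * trainable / total:.2f}%)")
    # Only ~0.4% of this layer's parameters are trained; the frozen weights are shared untouched.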

AdaLoRA

AdaLoRA, a recent innovation by researchers at Georgia Tech, Princeton, and Microsoft, builds on the foundations of LoRA. It differs by adaptively fine-tuning parts of the transformer architecture that benefit most from it, potentially offering enhanced performance over standard LoRA.

These developments in PEFT and the emergence of tools like LoRA and AdaLoRA mark an incredibly exciting and promising time for data scientists. With the ability to fine-tune large models efficiently, the potential for innovation and application in the field of AI is vast and continually expanding.

The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.

In Data Science, SuperDataScience, YouTube Tags LLaMA, LoRA, LLM, NLP, PEFT

Taipy, the open-source Python application builder

Added on April 25, 2023 by Jon Krohn.

An A.I. expert for nearly 40 years, Vincent Gosselin adores the field's lingua franca, Python. In today's episode, hear how he created the open-source Taipy library so you can easily build Python-based web apps and scalable, reusable data pipelines.

Vincent:
• Is CEO and co-founder of Taipy (taipy.io), whose open-source Python library works up and down the stack to both easily build web applications and back-end data pipelines.
• Having obtained his Master’s in CS and A.I. from the Université Paris-Saclay in 1987, he’s amassed a wealth of experience across a broad range of industries, including semiconductors, finance, aerospace, and logistics.
• Has held roles including Director of Software Development at ILOG, Director of Advanced Analytics at IBM, and VP of Advanced Analytics at DecisionBrain.

Today’s episode will appeal primarily to hands-on practitioners who are keen to hear about how they can be accelerating their productivity in Python, whether it’s on the front end (to build a data-driven web-application) or on the back end (to have scalable, reusable and maintainable data pipelines). That said, Vincent’s breadth of wisdom — honed over his decades-long A.I. career — may prove to be fascinating and informative to technical and non-technical listeners alike.

In this episode, Vincent details:
• The critical gaps in Python development that led him to create Taipy.
• How much potential there is for data-pipeline engineering to be improved.
• How shifting toward lower-code environments can accelerate Python development without sacrificing any flexibility.
• The 50-year-old programming language that was designed for A.I. and that he was nostalgic for until Python emerged on the scene.

The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.

In Data Science, Podcast, SuperDataScience, YouTube Tags Taipy, Python, data pipeline, data science

Open-source “ChatGPT”: Alpaca, Vicuña, GPT4All-J, and Dolly 2.0

Added on April 21, 2023 by Jon Krohn.

Want a GPT-4-style model on your own hardware and fine-tuned to your proprietary language-generation tasks? Today's episode covers the key open-source models (Alpaca, Vicuña, GPT4All-J, and Dolly 2.0) for doing this cheaply on a single GPU 🤯

We begin with a retrospective look at Meta AI's LLaMA model, which was introduced in episode #670. LLaMA's 13-billion-parameter version achieves performance comparable to GPT-3 while being significantly smaller and more manageable. This efficiency makes it possible to train the model on a single GPU, democratizing access to advanced AI capabilities.

The focus then shifts to four models that surpass LLaMA in terms of power and sophistication: Alpaca, Vicuña, GPT4All-J, and Dolly 2.0. Each of these models presents a unique blend of innovation and practicality, pushing the boundaries of what's possible with AI:

Alpaca

Developed by Stanford researchers, Alpaca is an evolution of the 7 billion parameter LLaMA model, fine-tuned with 52,000 examples of instruction-following natural language. This model excels in mimicking GPT-3.5's instruction-following capabilities, offering high performance at a fraction of the cost and size.

Vicuña

Vicuña, a product of collaborative research across multiple institutions, builds on both the 7 billion and 13 billion parameter LLaMA models. It's fine-tuned on 70,000 user-shared ChatGPT conversations from the ShareGPT repository, achieving GPT-3.5-like performance with unique user-generated content.

GPT4All-J

GPT4All-J, released by Nomic AI, is based on EleutherAI's open source 6 billion parameter GPT-J model. It's fine-tuned with an extensive 800,000 instruction-response dataset, making it an attractive option for commercial applications due to its open-source nature and Apache license.

Dolly 2.0

Dolly 2.0, from data analytics giant Databricks, builds upon EleutherAI's 12 billion parameter Pythia model. It's fine-tuned with 15,000 human-generated instruction-response pairs, offering another open-source, commercially viable option for AI applications.

These models represent a significant shift in the AI landscape, making it economically feasible for individuals and small teams to train and deploy powerful language models. With a few hundred to a few thousand dollars, it's now possible to create proprietary, ChatGPT-like models tailored to specific use cases.

The advancements in AI models that can be trained on a single GPU mark a thrilling era in data science. These developments not only showcase the rapid progression of AI technology but also significantly lower the barrier to entry, allowing a broader range of users to explore and innovate in the field of artificial intelligence.

The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.

In Data Science, Five-Minute Friday, SuperDataScience, YouTube, Podcast Tags LLaMA, CHATGPT, GPT4, Alpaca, Vicuña, data science, SuperDataScience, AI, ML

Cloud Machine Learning

Added on April 18, 2023 by Jon Krohn.

As ML models, particularly LLMs, have scaled up to having trillions of trainable parameters, cloud compute platforms have never been more essential. In today's episode, Hadelin and Kirill cover how data scientists can make the most of the cloud.

Kirill:
• Is Founder and CEO of SuperDataScience, an e-learning platform.
• Founded the SuperDataScience Podcast in 2016 and hosted the show until he passed me the reins in late 2020.

Hadelin:
• Was a data engineer at Google before becoming a content creator.
• Took a break from Data Science content in 2020 to produce and star in a Bollywood film.

Together, Kirill and Hadelin:
• Are the most popular data science instructors on the Udemy platform, with over two million students.
• Have created dozens of data science courses.
• Recently returned from a multi-year course-creation hiatus to publish their “Machine Learning in Python: Level 1" course as well as their brand-new course on cloud computing.

Today’s episode is all about the latter, so it will appeal primarily to hands-on practitioners like data scientists who are keen to be introduced to — or brush up on — analytics and ML in the cloud.

In this episode, Kirill and Hadelin detail:
• What cloud computing is.
• Why data scientists increasingly need to know how to use the key cloud computing platforms such as AWS, Azure, and the Google Cloud Platform.
• The key services the most popular cloud platform AWS offers, particularly with respect to databases and machine learning.

*Note that it is a coincidence that AWS sponsored this show with a promotional message about their hardware accelerators. Kirill and Hadelin did not receive any compensation for developing content on AWS nor for covering AWS topics in this episode.


The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.

In Data Science, SuperDataScience, YouTube, Podcast Tags cloud, machine learning, AWS, SuperDataScience, data science, ai

LLaMA: GPT-3 performance, 10x smaller

Added on April 16, 2023 by Jon Krohn.

By training (relatively) small LLMs for (much) longer, Meta AI's LLaMA architectures achieve GPT-3-like outputs at as little as a thirteenth of GPT-3's size. This means cost savings and much faster execution time.

LLaMA, a clever nod to LLMs (Large Language Models), is Meta AI's latest contribution to the AI world. Based on the Chinchilla scaling laws, LLaMA adopts a principle that veers away from the norm. Unlike its predecessors, which boasted hundreds of billions of parameters, LLaMA emphasizes training smaller models for longer durations to achieve enhanced performance.

The Chinchilla Principle in LLaMA

The Chinchilla scaling laws, introduced by Hoffmann and colleagues, postulate that extended training of smaller models can lead to superior performance. LLaMA, with its 7 billion to 65 billion parameter models, is a testament to this principle. For perspective, GPT-3 has 175 billion parameters, making the smallest LLaMA model just a fraction of its size.

Training Longer for Greater Performance

Meta AI's LLaMA pushes the boundaries by training these relatively smaller models for significantly longer periods than conventional approaches. This method contrasts with earlier top models like Chinchilla, GPT-3, and PaLM, which relied on undisclosed training data. LLaMA, however, uses entirely open-source data, including datasets like English Common Crawl, C4, GitHub, Wikipedia, and others, adding to its appeal and accessibility.

LLaMA's Remarkable Achievements

LLaMA's achievements are notable. The 13 billion parameter model (LLaMA 13B) outperforms GPT-3 in most benchmarks, despite having 13 times fewer parameters. This implies that LLaMA 13B can offer GPT-3-like performance on a single GPU. The largest LLaMA model, 65B, competes with giants like Chinchilla 70B and PaLM — and it did so even before the release of GPT-4.
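
Some quick arithmetic on weight memory alone (ignoring activations and the KV cache, and assuming 16-bit or 8-bit weights purely for illustration) shows why a 13-billion-parameter model is single-GPU territory for inference while a 175-billion-parameter model is not.

    # Rough inference-memory arithmetic for model weights alone (ignores activations, KV cache).
    def weight_memory_gb(n_params: float, bytes_per_param: float) -> float:
        return n_params * bytes_per_param / 1e9

    for name, params in [("LLaMA 13B", 13e9), ("LLaMA 65B", 65e9), ("GPT-3 175B", 175e9)]:
        fp16 = weight_memory_gb(params, 2)  # 16-bit weights
        int8 = weight_memory_gb(params, 1)  # 8-bit quantized weights
        print(f"{name}: ~{fp16:.0f} GB at fp16, ~{int8:.0f} GB at int8")
    # LLaMA 13B at 8-bit (~13 GB) fits on a single modern 24 GB GPU;
    # GPT-3-scale weights (~350 GB at fp16) do not.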

This approach signifies a shift in the AI paradigm – achieving state-of-the-art performance without the need for enormous models. It's a leap forward in making advanced AI more accessible and environmentally friendly. The model weights, though intended for researchers, have been leaked and are available for non-commercial use, further democratizing access to cutting-edge AI.

LLaMA not only establishes a new benchmark in AI efficiency but also sets the stage for future innovations. Building on LLaMA's foundation, models like Alpaca, Vicuña, and GPT4All have emerged, fine-tuned on thoughtful datasets to exceed even LLaMA's performance. These developments herald a new era in AI, where size doesn't always equate to capability.


The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.

In Five-Minute Friday, SuperDataScience, YouTube, Computer Science, Data Science Tags LLaMA, Meta, AI, SuperDataScience, data science

Streaming, reactive, real-time machine learning

Added on April 11, 2023 by Jon Krohn.

Real-time, reactive data processing and streaming machine learning: in today's episode, the positively brilliant researcher and entrepreneur Adrian Kosowski, PhD, fills us in on what the future of ML will be like.

Adrian:
• Is Co-Founder and Chief Product Officer at Pathway, a framework for real-time, reactive data processing that is based in Paris.
• Has over 15 years of research experience, including 9 years at Inria (a prestigious French computer science center), leading to the co-authorship of over 100 articles in a range of fields (theoretical computer science, physics, and biology) covering topics like network science, distributed algorithms and complex systems.
• Previously co-founded and led business development for Spoj.com, a competitive programming platform used by millions of software developers.
• Obtained his PhD in Computer Science at the ripe old age of 20.

Adrian has also generously offered to ship a Pathway hoodie (to anywhere in the world!) to the first ten commenters on this post who request one!

Today’s episode will appeal primarily to hands-on practitioners like data scientists, ML engineers, and data engineers. However, we do our best to break down technical terms and provide concrete examples of topics so that anyone can enjoy learning about the cutting edge in training ML models.

In this episode, Adrian details:
• What streaming data processing is and why it’s superior in many ways to the batch training of ML models that historically dominated data science (a minimal illustrative sketch follows this list).
• How streaming data processing allows efficient, real-time model training.
• How reactive data processing enables data applications to react instantly and automatically to never-before-seen input data, potentially saving firms vast sums.
• When a computer scientist should become a product leader.
• What programming languages Pathway selected for their platform & why.
• The big up-and-coming opportunity for data and ML start-ups.
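
As promised above, here is a minimal from-scratch sketch of the streaming idea: a model whose state is updated incrementally as each event arrives, rather than being retrained on an accumulated batch. It is plain Python for illustration only and is not Pathway's API.

    # Minimal online-learning sketch: the model updates one event at a time
    # (plain Python for illustration; this is not Pathway's API).
    class OnlineMean:
        """Streaming estimate of a sensor's average reading, updated per event."""
        def __init__(self):
            self.n = 0
            self.mean = 0.0

        def learn_one(self, x: float) -> None:
            self.n += 1
            self.mean += (x - self.mean) / self.n  # incremental update, O(1) per event

        def predict(self) -> float:
            return self.mean

    model = OnlineMean()
    for reading in [12.0, 11.5, 13.2, 40.0]:  # events arriving over time
        model.learn_one(reading)
        print(f"after {model.n} events, running estimate = {model.predict():.2f}")
    # A batch approach would instead re-aggregate the full history on every update.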

The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.

In Data Science, Podcast, SuperDataScience, YouTube Tags ML, data science, reactive data processing