Today’s episode is all about RFM-1, an LLM trained for robotics applications that completely blows my mind because of the implications for what can now be accomplished so easily with robotics.
LLaMA 2 — It’s Time to Upgrade your Open-Source LLM
If you've been using fine-tuned open-source LLMs (e.g., for generative A.I. functionality or natural-language conversations with your users), it's very likely time to switch your starting model over to Llama 2. Here's why:
NLP with Transformers, feat. Hugging Face’s Lewis Tunstall
Lewis Tunstall — brilliant author of the bestseller "NLP with Transformers" and an ML Engineer at Hugging Face — today details how to train and deploy your own LLMs, the race for an open-source ChatGPT, and why RLHF leads to better models.
Dr. Tunstall:
• Is an ML Engineer at Hugging Face, one of the most important companies in data science today because they provide much of the most critical infrastructure for A.I. through open-source projects such as their ubiquitous Transformers library, which has a staggering 100,000 stars on GitHub.
• Is a member of Hugging Face’s prestigious research team, where he is currently focused on bringing us closer to having an open-source equivalent of ChatGPT by building tools that support RLHF (reinforcement learning from human feedback) and large-scale model evaluation.
• Authored “Natural Language Processing with Transformers”, an exceptional bestselling book that was published by O'Reilly last year and covers how to train and deploy Large Language Models (LLMs) using open-source libraries.
• Prior to Hugging Face, was an academic at the University of Bern in Switzerland and held data science roles at several Swiss firms.
• Holds a PhD in theoretical and mathematical physics from the University of Adelaide in Australia.
Today’s episode is definitely on the technical side, so it will likely appeal most to folks like data scientists and ML engineers, but as usual I made an effort to break down the technical concepts Lewis covered so that anyone who’s keen to be aware of the cutting edge in NLP can follow along.
In the episode, Lewis details:
• What transformers are.
• Why transformers have become the default model architecture in NLP in just a few years.
• How to train NLP models when you have little to no labeled data available.
• How to optimize LLMs for speed when deploying them into production.
• How you can optimally leverage the open-source Hugging Face ecosystem, including their Transformers library and their hub for ML models and data.
• How RLHF aligns LLMs with the outputs users would like.
• How open-source efforts could soon meet or surpass the capabilities of commercial LLMs like ChatGPT.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
The Chinchilla Scaling Laws
The Chinchilla Scaling Laws dictate the amount of training data needed to optimally train a Large Language Model (LLM) of a given size. For Five-Minute Friday, I cover this ratio and the LLMs that have arisen from it (incl. the new Cerebras-GPT family).
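To make the ratio concrete, here is a minimal sketch in Python assuming the widely cited "roughly 20 training tokens per model parameter" heuristic from the Chinchilla paper; the helper function and the example model sizes are my own illustration, not figures from the episode.

```python
# A minimal sketch of the Chinchilla compute-optimal heuristic:
# roughly 20 training tokens per model parameter.
# The function name and example model sizes are illustrative, not from the episode.

def chinchilla_optimal_tokens(n_params: float, tokens_per_param: float = 20.0) -> float:
    """Approximate compute-optimal number of training tokens for a model with n_params parameters."""
    return n_params * tokens_per_param

for n_params in (1e9, 7e9, 70e9):  # 1B, 7B, and 70B parameter models
    tokens = chinchilla_optimal_tokens(n_params)
    print(f"{n_params / 1e9:>4.0f}B params -> ~{tokens / 1e12:.2f}T training tokens")
```

Under this rule of thumb, a 70B-parameter model would call for on the order of 1.4 trillion training tokens, which is why compute-optimal models in this family are trained on far more data than earlier LLMs of comparable size.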
Parameter-Efficient Fine-Tuning of LLMs using LoRA (Low-Rank Adaptation)
Large Language Models (LLMs) are capable of extraordinary NLP feats, but are so large that they're too expensive for most organizations to train. The solution is Parameter-Efficient Fine-Tuning (PEFT) with Low-Rank Adaptation (LoRA).
This discussion comes in the wake of introducing models like Alpaca, Vicuña, GPT4All-J, and Dolly 2.0, which demonstrated the power of fine-tuning with thousands of instruction-response pairs.
Training LLMs, even those with tens of billions of parameters, can be prohibitively expensive and technically challenging. One significant issue is "catastrophic forgetting," where a model, after being retrained on new data, loses its ability to perform previously learned tasks. This challenge necessitates a more efficient approach to fine-tuning.
PEFT
By reducing the memory footprint and the number of parameters that need to be trained, PEFT methods like LoRA and AdaLoRA make it feasible to fine-tune large models on standard hardware. These techniques are not only space-efficient, with the fine-tuned adapter weights requiring only megabytes of space, but they also avoid catastrophic forgetting, perform better with small datasets, and generalize better to out-of-training-set instructions. They can also be applied to A.I. use cases beyond NLP, such as machine vision.
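To give a sense of what this looks like in practice, here is a hedged sketch using Hugging Face's peft library; the base model, target modules, and hyperparameter values are illustrative assumptions rather than recommendations from the episode.

```python
# A minimal sketch of parameter-efficient fine-tuning with Hugging Face's peft library.
# The base model, target modules, and hyperparameters below are illustrative assumptions.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("gpt2")  # small stand-in for a larger LLM

lora_config = LoraConfig(
    r=8,                        # rank of the low-rank decomposition matrices
    lora_alpha=16,              # scaling factor applied to the LoRA update
    lora_dropout=0.05,
    target_modules=["c_attn"],  # GPT-2's fused attention projection; varies by architecture
    task_type="CAUSAL_LM",
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only a tiny fraction of the weights are trainable
```

After fine-tuning, only the small LoRA adapter weights need to be saved and shared; they can be loaded on top of the unchanged base model at inference time.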
LoRA
LoRA stands out as a particularly effective PEFT method. It involves inserting small low-rank decomposition matrices into each layer of a transformer model; these matrices represent the weight updates in a much lower-dimensional space, which makes them cheap to compute and store. The key to LoRA's efficiency is freezing all of the original model weights and training only the new low-rank matrices. This strategy reduces the number of trainable parameters by roughly 10,000 times and cuts the GPU memory required for training by about a factor of three. Remarkably, in some scenarios LoRA not only matches but outperforms full-model fine-tuning, so this efficiency does not come at the cost of effectiveness, making it an attractive option for fine-tuning LLMs.
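For readers who want to see the mechanism itself rather than a library call, here is a minimal from-scratch sketch of a LoRA-style linear layer in PyTorch; the class and variable names are my own illustration of the idea, not code from the LoRA paper.

```python
# A from-scratch illustration of the LoRA idea: freeze the original weight W
# and learn a low-rank update B @ A, so y = base(x) + (alpha / r) * x A^T B^T.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, in_features: int, out_features: int, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)   # original weights are frozen
        self.base.bias.requires_grad_(False)
        # Low-rank decomposition matrices: A projects down to rank r, B projects back up.
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))  # zero init => no change at start
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        frozen = self.base(x)
        update = (x @ self.lora_A.T) @ self.lora_B.T
        return frozen + self.scaling * update

layer = LoRALinear(768, 768)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable params: {trainable:,} of {total:,}")  # only the low-rank matrices train
```

Because lora_B is initialized to zeros, the layer initially behaves exactly like the frozen base layer, and the low-rank update is learned gradually during fine-tuning.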
AdaLoRA
AdaLoRA, a recent innovation by researchers at Georgia Tech, Princeton, and Microsoft, builds on the foundations of LoRA. It differs by adaptively fine-tuning parts of the transformer architecture that benefit most from it, potentially offering enhanced performance over standard LoRA.
These developments in PEFT and the emergence of tools like LoRA and AdaLoRA mark an incredibly exciting and promising time for data scientists. With the ability to fine-tune large models efficiently, the potential for innovation and application in the field of AI is vast and continually expanding.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
NLP with GPT Architectures (ChatGPT, GPT-4, and other LLMs)
Large Language Models have revolutionized the field of Natural Language Processing, powering mind-blowing tools like ChatGPT and GPT-4. Today, we released the recording of a half-day conference I hosted on the topic.
In partnership with my publisher Pearson, the "A.I. Catalyst" conference was held earlier this month on the O'Reilly Media platform. It has now been cleaned up and released for anyone to view as a standalone three-hour video. In it, we cover the full Large Language Model (LLM) lifecycle, from development to deployment.
The presenters are at the absolute vanguard of their topics:
• Sinan Ozdemir: The A.I. entrepreneur and author introduces the theory behind Transformer Architectures and LLMs like BERT, GPT, and T5.
• Melanie Subbiah: A first author on the original GPT-3 paper, Melanie leads interactive demos of the broad range of LLM capabilities.
• Shaan Khosla: A data scientist on my team at Nebula.io, he details practical tips on training, validating, and productionizing LLMs.
If you don't have access to the O'Reilly online platform through your employer or school, you can use my special code "SDSPOD23" to get a 30-day trial and enjoy the video for free!
Check it out here: learning.oreilly.com/videos/catalyst-conference-nlp/9780138224912/
Astonishing CICERO negotiates and builds trust with humans using natural language
Meta AI's CICERO algorithm — which negotiates and builds trust with humans to perform in the top decile at the game of Diplomacy — is (in my view) the most astounding A.I. feat yet. Hear all about it from Alexander.
As published in the prestigious academic journal Science in November, CICERO is capable of using natural-language conversation to coordinate with humans, develop strategic alliances, and ultimately win in Diplomacy, an extremely complex board game.
Excelling in a game with incomplete information and vastly more possible states of play than games previously conquered by A.I., such as chess and Go, would be a wild feat in and of itself, but CICERO’s generative capacity to converse and negotiate in real time with six other human players in order to strategize victoriously is the truly mind-boggling capability.
To detail for you how the game of Diplomacy works, why Meta chose to tackle this game with A.I., and how they developed a model that competes in the top decile of human Diplomacy players without any other players even catching a whiff that CICERO could possibly be a machine, my guest in today's episode is Alexander Holden Miller, a co-author of the CICERO paper.
Alex:
• Has been working in Meta AI’s Fundamental AI Research group, FAIR, for nearly eight years.
• Currently serves as a Senior Research Engineering Manager within FAIR.
• Has supported researchers working in most ML sub-domains but has been especially involved in conversational A.I. research and more recently reinforcement learning and planning.
• Holds a degree in Computer Science from Cornell University.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
Open-Source Tools for Natural Language Processing
In today's episode, the brilliant Vincent Warmerdam regales us with invaluable ideas and open-source software libraries for developing A.I. (particularly Natural Language Processing) applications. Enjoy!
Vincent:
• Is an ML Engineer at Explosion, the German software company that specializes in developer tools for A.I. and NLP such as spaCy and Prodigy.
• Is renowned for several open-source tools of his own, including Doubtlab.
• Is behind an educational platform called Calmcode that has over 600 short and conspicuously enjoyable video tutorials about software engineering concepts.
• Was Co-Founder and Chair of PyData Amsterdam.
• Has delivered countless amusing and insightful PyData talks.
• Holds a Master’s in Econometrics and Operations Research from Vrije Universiteit Amsterdam (VU Amsterdam).
Today’s episode will appeal primarily to technical listeners, as it focuses on ideas and open-source software libraries that are indispensable for data scientists, particularly those developing A.I. or NLP applications.
In this episode, Vincent details:
• The prompt recipes he developed to enable OpenAI GPT architectures to perform tremendously helpful NLP tasks such as data labeling.
• The super-popular open-source libraries he’s developed on his own as well as with Explosion.
• The software tools he uses daily including several invaluable open-source packages made by other folks.
• How linguistics and operations research are extremely useful fields for becoming a better NLP practitioner and ML practitioner, respectively.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.