Today’s episode is all about an LLM trained for robotics applications called RFM-1 that completely blows my mind because of the implications for what can now suddenly be accomplished so easily with robotics.
Llama 2, Toolformer and BLOOM: Open-Source LLMs with Meta’s Dr. Thomas Scialom
Thomas Scialom, PhD is behind many of the most popular Generative A.I. projects including Llama 2, the world's top open-source LLM. Today, the Meta A.I. researcher reveals the stories behind Llama 2 and what's in the works for Llama 3.
Thomas:
• Is an A.I. Research Scientist at Meta.
• Is behind some of the world’s best-known Generative A.I. projects including Llama 2, BLOOM, Toolformer and Galactica.
• Is contributing to the development of Artificial General Intelligence (AGI).
• Has lectured at many of the top A.I. labs (e.g., Google, Stanford, MILA).
• Holds a PhD from Sorbonne University, where he specialized in Natural-Language Generation with Reinforcement Learning.
Today’s episode should appeal equally to hands-on machine learning practitioners and to folks who may not be hands-on but are nevertheless keen to understand the state of the art in A.I. from someone right on the cutting edge of it all.
In this episode, Thomas details:
• Llama 2, today’s top open-source LLM, including what it was like behind the scenes developing it and what we can expect from the eventual Llama 3 and related open-source projects.
• The Toolformer LLM that learns how to use external tools (a toy sketch of the idea follows this list).
• The Galactica science-specific LLM, why it was brought down after a few days, and how it might eventually re-emerge in a new form.
• How RLHF — reinforcement learning from human feedback — shifts the distribution of generative A.I. outputs away from the average of human responses and toward excellent, often superhuman, quality.
• How soon he thinks AGI — artificial general intelligence — will be realized and how.
• How to make the most of the Generative A.I. boom as an entrepreneur.
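Because the Toolformer idea comes up above, here is a toy, purely illustrative sketch of the mechanism it popularized: the model emits inline API-call annotations in its text, and a lightweight executor runs those calls and splices the results back in before generation continues. The tool registry, call syntax, and function names below are my own simplifications, not Meta's implementation.

```python
import re

# Toy illustration of the Toolformer idea: the model emits inline calls such as
# "[Calculator(400 / 1400)]", and an executor fills in the results.

TOOLS = {
    # Hypothetical tool registry; a real system would wrap a calculator,
    # a search API, a calendar, and so on.
    "Calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
}

CALL_PATTERN = re.compile(r"\[(\w+)\((.*?)\)\]")

def execute_tool_calls(generated_text: str) -> str:
    """Replace every inline [Tool(args)] span with '[Tool(args) -> result]'."""
    def _run(match: re.Match) -> str:
        tool, args = match.group(1), match.group(2)
        if tool not in TOOLS:
            return match.group(0)  # leave unknown tools untouched
        return f"[{tool}({args}) -> {TOOLS[tool](args)}]"
    return CALL_PATTERN.sub(_run, generated_text)

# Example: text a Toolformer-style model might generate mid-sentence.
draft = "Out of 1400 participants, 400 (or [Calculator(400 / 1400 * 100)] %) passed."
print(execute_tool_calls(draft))
```

In the actual Toolformer work, the model teaches itself where such calls help via self-supervised filtering; the snippet above only shows the execution side.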
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
Code Llama
Meta's Llama 2 offered state-of-the-art performance for an "open-source"* LLM... except on tasks involving code. Now Code Llama is here and it magnificently fills that gap by outperforming all other open-source LLMs on coding benchmarks.
Open-source “ChatGPT”: Alpaca, Vicuña, GPT4All-J, and Dolly 2.0
Want a GPT-4-style model on your own hardware and fine-tuned to your proprietary language-generation tasks? Today's episode covers the key open-source models (Alpaca, Vicuña, GPT4All-J, and Dolly 2.0) for doing this cheaply on a single GPU 🤯
We begin with a retrospective look at Meta AI's LLaMA model, which was introduced in episode #670. LLaMA, with its 13 billion parameters, achieves performance comparable to GPT-3 while being significantly smaller and more manageable. This efficiency makes it possible to train the model on a single GPU, democratizing access to advanced AI capabilities.
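The "single GPU" claim typically rests on parameter-efficient fine-tuning rather than full-parameter training. Below is a minimal sketch of that approach using the Hugging Face transformers, datasets, and peft libraries; the base checkpoint, LoRA settings, and dataset are illustrative assumptions, not the exact recipes used by the projects discussed here.

```python
# Minimal sketch of single-GPU, parameter-efficient fine-tuning with LoRA.
# Assumes: pip install transformers datasets peft accelerate
from datasets import load_dataset
from peft import LoraConfig, TaskType, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base = "EleutherAI/pythia-1.4b"  # assumed, freely downloadable stand-in; swap in a LLaMA-family checkpoint you have access to
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")

# LoRA freezes the base model and trains only small adapter weights,
# which is what makes a single consumer GPU sufficient.
model = get_peft_model(
    model, LoraConfig(task_type=TaskType.CAUSAL_LM, r=8, lora_alpha=16, lora_dropout=0.05)
)

data = load_dataset("tatsu-lab/alpaca", split="train[:1%]")  # assumed example dataset
def tokenize(example):
    return tokenizer(f"{example['instruction']}\n{example['output']}",
                     truncation=True, max_length=512)
data = data.map(tokenize, remove_columns=data.column_names)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-out", per_device_train_batch_size=2,
                           gradient_accumulation_steps=8, num_train_epochs=1),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```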
The focus then shifts to four instruction-fine-tuned models that go beyond base LLaMA in practical capability: Alpaca, Vicuña, GPT4All-J, and Dolly 2.0. Each of these models presents a unique blend of innovation and practicality, pushing the boundaries of what's possible with AI:
Alpaca
Developed by Stanford researchers, Alpaca is an evolution of the 7 billion parameter LLaMA model, fine-tuned with 52,000 examples of instruction-following natural language. This model excels in mimicking GPT-3.5's instruction-following capabilities, offering high performance at a fraction of the cost and size.
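For a sense of what that fine-tuning data looks like in practice, here is a rough sketch of Alpaca-style prompt formatting; the template wording is approximate, and the tatsu-lab/alpaca dataset identifier is an assumption about where a public copy of the 52,000 examples lives.

```python
# Sketch of Alpaca-style instruction formatting (template wording approximate).
from datasets import load_dataset

PROMPT_WITH_INPUT = (
    "Below is an instruction that describes a task, paired with an input.\n\n"
    "### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:\n"
)
PROMPT_NO_INPUT = (
    "Below is an instruction that describes a task.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

def to_training_text(example: dict) -> dict:
    """Turn one instruction/input/output record into a single training string."""
    template = PROMPT_WITH_INPUT if example["input"] else PROMPT_NO_INPUT
    return {"text": template.format(**example) + example["output"]}

alpaca = load_dataset("tatsu-lab/alpaca", split="train")  # assumed public mirror of the 52K examples
alpaca = alpaca.map(to_training_text)
print(alpaca[0]["text"][:300])
```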
Vicuña
Vicuña, a product of collaborative research across multiple institutions, builds on both the 7 billion and 13 billion parameter LLaMA models. It's fine-tuned on 70,000 user-shared ChatGPT conversations from the ShareGPT repository, achieving GPT-3.5-like performance with unique user-generated content.
GPT4All-J
GPT4All-J, released by Nomic AI, is based on EleutherAI's open-source 6 billion parameter GPT-J model. It's fine-tuned with an extensive 800,000-example instruction-response dataset, making it an attractive option for commercial applications thanks to its open-source nature and Apache 2.0 license.
Dolly 2.0
Dolly 2.0, from database giant Databricks, builds upon EleutherAI's 12 billion parameter Pythia model. It's fine-tuned with 15,000 human-generated instruction-response pairs, offering another open-source, commercially viable option for AI applications.
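Because these licenses are commercially permissive, models like Dolly 2.0 can be pulled down and run locally with standard tooling. A minimal sketch, assuming the smaller databricks/dolly-v2-3b sibling checkpoint on Hugging Face (the 12 billion parameter variant works the same way but needs far more GPU memory):

```python
import torch
from transformers import pipeline

# trust_remote_code lets transformers load Dolly's custom instruction-following pipeline.
generate_text = pipeline(
    model="databricks/dolly-v2-3b",  # assumed smaller sibling of the 12B model discussed above
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)

result = generate_text("Explain, in two sentences, why instruction fine-tuning matters.")
print(result[0]["generated_text"])
```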
These models represent a significant shift in the AI landscape, making it economically feasible for individuals and small teams to train and deploy powerful language models. With a few hundred to a few thousand dollars, it's now possible to create proprietary, ChatGPT-like models tailored to specific use cases.
The advancements in AI models that can be trained on a single GPU mark a thrilling era in data science. These developments not only showcase the rapid progression of AI technology but also significantly lower the barrier to entry, allowing a broader range of users to explore and innovate in the field of artificial intelligence.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
GPT-4: Apocalyptic stepping stone?
The final episode in our trilogy on GPT-4 is on the risks posed by the model today and the potentially existential risks posed by the models it paves the way for. Our guest for this is Jeremie Harris, a world leader on A.I. safety.
Jeremie:
• Is co-founder of Gladstone AI, an advisor to US and Canadian government entities on A.I. risk.
• Co-hosts "Last Week in A.I.", the premier podcast on ML news.
• Wrote the new (released this week!) book "Quantum Physics Made Me Do It" that covers human consciousness and speculates on the future of A.I.
• Co-founded SharpestMinds, a Y Combinator-backed A.I.-career mentorship platform.
In today's episode, Jeremie details:
• How GPT-4 is a “dual-use technology” — capable of tremendous good but also of being wielded malevolently.
• How RLHF — reinforcement learning from human feedback — has made GPT-4 outputs markedly more aligned with the outputs humans would like to see, but how this doesn’t necessarily mean we’re in the clear with respect to A.I. acting in the broader interest of humans.
• Emerging approaches for how we might ensure A.I. is aligned with humans, not only today but — critically — as machines overtake human intelligence, the “singularity” event that may occur in the coming decades, or even in the coming years.
The SuperDataScience GPT-4 trilogy comprises:
• #666 (last Friday): a ten-minute GPT-4 overview by yours truly.
• #667 (Tuesday): world-leading A.I. monetization expert Vin Vashishta on the unprecedented commercial opportunity of GPT-4.
• #668 (today): GPT-4 risks
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
Harnessing GPT-4 for your Commercial Advantage
Episode two in our trilogy on GPT-4 is dedicated to how you can leverage GPT-4 to your commercial benefit. In it, I'm joined by Vin Vashishta — perhaps the best person on the planet for covering A.I. monetization.
Vin:
• Is Founder of V Squared, a consultancy that specializes in monetizing machine learning by helping Fortune 100 companies with A.I. strategy.
• Is the creator of a four-hour course on “GPT Monetization Strategy” which teaches how to build new A.I. products, startups, and business models with GPT models like ChatGPT and GPT-4.
• Is author of the forthcoming book “From Data to Profit: How Businesses Leverage Data to Grow Their Top and Bottom Lines”, which will be published by Wiley.
Today’s episode will be broadly appealing to anyone who’d like to drive commercial value with the powerful GPT-4 model that is taking the world by storm.
In this episode, Vin details:
• What makes GPT-4 so much more commercially useful than any previous A.I. model.
• The levels of A.I. capability that have been unleashed by GPT-4 and how we can automate or augment specific types of human tasks with these new capabilities.
• The characteristics that enable individuals and organizations to take full advantage of foundation models like GPT-4 and thereby overtake their competitors commercially.
The SuperDataScience GPT-4 trilogy comprises:
• #666 (last Friday): a ten-minute GPT-4 overview by yours truly.
• #667 (today): GPT-4 commercial opportunities.
• #668 (this Friday): world-leading A.I.-safety expert Jeremie Harris joins me to detail the (existential!) risks of GPT-4 and the models it paves the way for.
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
GPT-4 Has Arrived
SuperDataScience episode #666 — appropriate for an algorithm that has folks (quixotically) signing a letter to pause all A.I. development. In this first episode of the GPT-4 trilogy, I introduce GPT-4's staggering capabilities in ten minutes.
A Leap in AI Safety and Accuracy
GPT-4 marks a significant advance over its predecessor, GPT-3.5, in terms of both safety and factual accuracy. It is reportedly 82% less likely to respond with disallowed content and 40% more likely to produce factually correct responses. Despite improvements, challenges like sociodemographic biases and hallucinations persist, although they are considerably reduced.
Academic and Professional Exam Performance
The prowess of GPT-4 becomes evident when revisiting queries initially tested on GPT-3.5. Its ability to summarize complex academic content accurately and its human-like response quality are striking. In one test, GPT-4’s output was mistaken for human writing by GPTZero, an AI-detection tool, underscoring its sophistication. In another, the Uniform Bar Exam, GPT-4 scored in the 90th percentile, a massive leap from GPT-3.5's 10th percentile.
Multimodality
GPT-4 introduces multimodality, handling both language and visual inputs. This capability allows for innovative interactions, like recipe suggestions based on fridge contents or transforming drawings into functional websites. This visual aptitude notably boosted its performance in exams like the Biology Olympiad, where GPT-4 scored in the 99th percentile.
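For readers who want to try image-plus-text prompting themselves: at the time this episode was recorded, image input was demoed but not yet generally available through the API. The shape of such a request in the current OpenAI Python client looks roughly like the following; the model name is an assumption, so substitute whichever vision-capable model you have access to.

```python
# Rough sketch of a vision-capable chat request with the OpenAI Python client (v1+).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # assumed vision-capable model name
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What could I cook with the ingredients in this photo?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/fridge.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)
```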
The model also demonstrates proficiency in numerous languages, including low-resource ones, outperforming other major models in most languages tested. This linguistic versatility extends to its translation capabilities between these languages.
The Secret Behind GPT-4’s Success
While OpenAI has not disclosed the exact number of model parameters in GPT-4, it's speculated that they significantly exceed GPT-3's 175 billion. This increase, coupled with more and better-curated training data, and the ability to handle vastly more context (up to 32,000 tokens), are likely contributors to GPT-4's enhanced performance.
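Context length is measured in tokens rather than words, so it is worth checking how much of a 32,000-token window a given document actually consumes. A small sketch using OpenAI's tiktoken library (cl100k_base is the encoding family GPT-4 uses):

```python
# Count how many tokens a document would occupy in GPT-4's context window.
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")  # the tokenizer family GPT-4 uses

def fits_in_context(text: str, context_window: int = 32_000) -> bool:
    n_tokens = len(encoding.encode(text))
    print(f"{n_tokens:,} tokens of a {context_window:,}-token window")
    return n_tokens <= context_window

fits_in_context("GPT-4 can attend to vastly more context than its predecessors. " * 1000)
```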
Reinforcement Learning from Human Feedback (RLHF)
GPT-4 incorporates RLHF, a method that refines its output based on user feedback, allowing it to align more closely with desired responses. This approach has already proven effective in previous models like InstructGPT.
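RLHF has several moving parts: a reward model trained on human preference rankings, then a policy-optimization step (typically PPO) against that reward signal. The toy sketch below illustrates only the reward-model intuition, via best-of-n reranking with a placeholder scoring function; it is a simplification for building intuition, not OpenAI's pipeline.

```python
# Toy illustration of the reward-model half of RLHF: generate several candidate
# responses, score each with a (placeholder) reward model, and prefer the best.
# Full RLHF goes further and updates the policy with PPO against this reward.
from typing import Callable, List

def reward_model(prompt: str, response: str) -> float:
    """Placeholder: a real reward model is a neural network fine-tuned on
    human preference comparisons. Here we just reward concise, on-topic text."""
    on_topic = sum(word in response.lower() for word in prompt.lower().split())
    return on_topic - 0.01 * len(response.split())

def best_of_n(prompt: str, candidates: List[str],
              score: Callable[[str, str], float] = reward_model) -> str:
    return max(candidates, key=lambda c: score(prompt, c))

prompt = "Explain reinforcement learning from human feedback in one sentence."
candidates = [
    "Reinforcement learning from human feedback tunes a model so its outputs match what human raters prefer.",
    "It is a thing with feedback and learning and many other words that go on without saying much.",
]
print(best_of_n(prompt, candidates))
```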
GPT-4 represents a monumental step in AI development, balancing unprecedented capabilities with improved safety measures. Its impact is far-reaching, offering new possibilities in various fields and highlighting the importance of responsible AI development and use. As we continue to explore its potential, the conversation around AI safety and ethics becomes increasingly vital.
The SuperDataScience GPT-4 trilogy comprises:
• #666 (today): an introductory overview by yours truly
• #667 (Tuesday): world-leading A.I.-monetization expert Vin Vashishta joins me to detail how you can leverage GPT-4 to your commercial advantage
• #668 (next Friday): world-leading A.I.-safety expert Jeremie Harris joins me to detail the (existential!) risks of GPT-4 and the models it paves the way for
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
NLP with GPT Architectures (ChatGPT, GPT-4, and other LLMs)
Large Language Models have revolutionized the field of Natural Language Processing, powering mind-blowing tools like ChatGPT and GPT-4. Today, we released the recording of a half-day conference I hosted on the topic.
In partnership with my publisher Pearson, the "A.I. Catalyst" conference was held earlier this month on the O'Reilly Media platform. The recording has now been cleaned up and released as a standalone three-hour video that anyone can view. In it, we cover the full Large Language Model (LLM) lifecycle from development to deployment.
The presenters are at the absolute vanguard on their topics:
• Sinan Ozdemir: The A.I. entrepreneur and author introduces the theory behind Transformer Architectures and LLMs like BERT, GPT, and T5.
• Melanie Subbiah: A first author on the original GPT-3 paper, Melanie leads interactive demos of the broad range of LLM capabilities.
• Shaan Khosla: A data scientist on my team at Nebula.io, he details practical tips on training, validating, and productionizing LLMs.
If you don't have access to the O'Reilly online platform through your employer or school, you can use my special code "SDSPOD23" to get a 30-day trial and enjoy the video for free!
Check it out here: learning.oreilly.com/videos/catalyst-conference-nlp/9780138224912/
MIT Study: ChatGPT Dramatically Increases Productivity
With all of this ChatGPT and GPT-4 news, I was wondering whether these generative A.I. tools actually result in the productivity gains everyone supposes them to. Well, wonder no more…