OpenAI has released many of the most revolutionary A.I. models of recent years, including DALL-E 2, GPT-3, and Codex. Dr. Miles Brundage led the A.I. policy considerations behind each of these transformative releases.
Miles:
• Is Head of Policy Research at OpenAI.
• Has been integral to the rollout of OpenAI’s game-changing models such as the GPT series, the DALL-E series, Codex, and CLIP.
• Previously worked as an A.I. Policy Research Fellow at the University of Oxford’s Future of Humanity Institute.
• Holds a PhD in the Human and Social Dimensions of Science and Technology from Arizona State University.
Today’s episode should be deeply interesting to technical experts and non-technical folks alike.
In this episode, Miles details:
• Considerations you should take into account when rolling out any A.I. model into production.
• Specific considerations OpenAI concerned themselves with when rolling out:
• The GPT-3 natural-language-generation model,
• The mind-blowing DALL-E artistic-creativity models,
• Their software-writing Codex model, and
• Their remarkably label-efficient image-classification model, CLIP.
• Differences between the related fields of AI Policy, AI Safety, and AI Alignment.
• His thoughts on the risks of AI displacing versus augmenting humans in the coming decades.
The SuperDataScience show's available on all major podcasting platforms, YouTube, and at SuperDataScience.com.
GPT-3 for Natural Language Processing
With its human-level performance on tasks as diverse as question-answering, translation, and arithmetic, GPT-3 is a game-changer for A.I. This week's brilliant guest, Melanie Subbiah, was a lead author of the GPT-3 paper.
GPT-3 is a natural language processing (NLP) model with 175 billion parameters that has demonstrated unprecedented and remarkable "few-shot learning" on the diverse tasks mentioned above (translation between languages, question-answering, performing three-digit arithmetic) as well as on many more (discussed in the episode).
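To make "few-shot learning" concrete, here is a minimal sketch of what a few-shot prompt for three-digit arithmetic might look like: a handful of worked examples are written out in plain text, followed by an unanswered query for the model to complete. The helper function and prompt format below are illustrative assumptions, not OpenAI's actual API.

```python
# Sketch of a few-shot prompt for three-digit addition, the kind of task
# GPT-3 handles from examples alone, without any gradient updates.
# The format and helper are illustrative, not OpenAI's actual API.

def build_few_shot_prompt(examples, query):
    """Format worked examples plus an unanswered query as one text prompt."""
    lines = [f"Q: What is {a} + {b}? A: {a + b}" for a, b in examples]
    lines.append(f"Q: What is {query[0]} + {query[1]}? A:")
    return "\n".join(lines)

prompt = build_few_shot_prompt([(123, 456), (250, 250)], (111, 222))
print(prompt)
```

The entire "training" here is the prompt itself: the model is expected to infer the task from the demonstrations and continue the pattern.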
Melanie's paper sent shockwaves through the mainstream media and was recognized with an Outstanding Paper Award from NeurIPS (the most prestigious machine learning conference) in 2020.
Melanie:
• Developed GPT-3 while she worked as an A.I. engineer at OpenAI, one of the world’s leading A.I. research outfits.
• Previously worked as an A.I. engineer at Apple.
• Is now pursuing a PhD at Columbia University in the City of New York specializing in NLP.
• Holds a bachelor's in computer science from Williams College.
In this episode, Melanie details:
• What GPT-3 is.
• Why applications of GPT-3 have transformed not only the field of data science but also the broader world.
• The strengths and weaknesses of GPT-3, and how these weaknesses might be addressed with future research.
• Whether transformer-based deep learning models spell doom for creative writers.
• How to address the climate change and bias issues that cloud discussions of large natural language models.
• The machine learning tools she’s most excited about.
This episode has technical elements that will appeal primarily to practicing data scientists, but Melanie and I made an effort to explain concepts and provide context wherever we could. Hopefully, much of this fun, laugh-filled episode will be engaging and informative to anyone who's keen to learn about the state of the art in natural language processing and A.I.
Transformers for Natural Language Processing
This week's guest is award-winning author Denis Rothman. He details how Transformer models (like GPT-3) have revolutionized Natural Language Processing (NLP) in recent years. He also explains Explainable AI (XAI).
Denis:
• Is the author of three technical books on artificial intelligence
• Won this year's Data Community Content Creator Award in the technical-book-author category for his most recent book, "Transformers for NLP"
• Spent 25 years as co-founder of the French A.I. company Planilog
• Has been patenting A.I. algorithms, such as those for chatbots, since 1982
In this episode, Denis fills us in on:
• What Natural Language Processing is
• What Transformer architectures are (e.g., BERT, GPT-3)
• Tools we can use to explain *why* A.I. algorithms provide a particular output
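One common family of explanation techniques is perturbation-based attribution: occlude parts of the input and measure how the model's output changes. The sketch below illustrates the idea with a toy keyword-based sentiment scorer standing in for a real model; the scorer, function names, and word list are all hypothetical, and the occlusion technique itself is the point.

```python
# Toy occlusion-based attribution: remove each token in turn and measure
# how much the model's score drops. The keyword scorer is a stand-in for
# a real classifier; only the occlusion technique is being illustrated.

POSITIVE = {"great", "love", "excellent"}

def score(tokens):
    """Toy sentiment score: fraction of tokens that are positive words."""
    return sum(t in POSITIVE for t in tokens) / max(len(tokens), 1)

def occlusion_attribution(tokens):
    """Attribute the score to each token via leave-one-out occlusion."""
    base = score(tokens)
    return {
        t: base - score(tokens[:i] + tokens[i + 1:])
        for i, t in enumerate(tokens)
    }

attrib = occlusion_attribution(["this", "film", "is", "great"])
print(max(attrib, key=attrib.get))  # the token the score depends on most
```

Tokens whose removal lowers the score receive positive attribution, which is one simple way to answer *why* a model produced a particular output.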
We covered audience questions from Serg, Chiara, and Jean-Charles during filming. For those we didn't get to, Denis is kindly answering via a LinkedIn post today!
The episode's available on all major podcasting platforms, YouTube, and at SuperDataScience.com.