The DataScienceGO conference is this weekend — registration for Friday and Saturday is 100% free! I'm speaking Saturday on the pros and cons of TensorFlow vs PyTorch for training and deploying deep-learning models.
Awesome speakers — whom you may already be familiar with from recent SuperDataScience episodes — include:
• Erica Greene (episode #435)
• Harpreet Sahota (#457)
• Andrew Jones (#483)
I don't (yet!) personally know the other speakers pictured here but their weighty reputations precede them and I'm looking forward to getting to know them better over the course of the weekend: Gabriela de Queiroz, Karen JEAN-FRANCOIS, Yudan Lin, Ken Jee, and Danny Ma.
Free registration here!
Monetizing Machine Learning
This week's guest is the legendary Vin Vashishta! Vin details his A.I. commercialization strategy, which allows data science teams and machine learning companies alike to be profitable and successful long-term.
Vin is founder of and chief data scientist at V Squared, his own consulting practice that specializes in monetizing machine learning by helping Fortune 100 companies with A.I. strategy. He's also the creator of several platforms (including The ML Rebellion) for learning about critical skill gaps related to artificial intelligence such as commercial strategy, data science leadership, and model explainability.
In addition to the episode's focus on A.I. strategy, Vin answers questions from SuperDataScience listeners (thanks, Serg, Joe, Daniel, Nikhil, and Michael!), including on:
• Efficiency gains from no-code or low-code machine learning tools
• The biggest skills gaps that data scientists have
• The most disturbing data sets
• Investing in socially beneficial models
• The most challenging problem with commercializing AI
Listen or watch here.
(With thanks to Harpreet Sahota for another stellar guest suggestion!)
The Power Rule on a Function Chain — Topic 61 of Machine Learning Foundations
This is the FINAL of the nine videos in my Machine Learning Foundations series on the Derivative Rules. It merges the Power Rule and the Chain Rule into a single easy step.
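For reference, here's the combined step in standard notation (my own one-line summary, not a transcript of the video):

```latex
\frac{d}{dx}\,[u(x)]^{n} = n\,[u(x)]^{n-1}\cdot\frac{du}{dx},
\qquad \text{e.g.,}\quad \frac{d}{dx}\,(3x^{2}+1)^{4} = 4\,(3x^{2}+1)^{3}\cdot 6x
```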
Next begins a chunk of long, meaty videos on Automatic Differentiation — i.e., using the PyTorch and TensorFlow libraries to, well, automatically differentiate equations (e.g., ML models) instead of needing to do it painstakingly by hand.
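If you'd like a sneak peek, here's a minimal sketch of what autodiff looks like in PyTorch (the variable names are mine and purely illustrative):

```python
import torch

# Track operations on x so PyTorch can compute dy/dx for us
x = torch.tensor(5.0, requires_grad=True)

y = x ** 2    # forward pass: y = x^2
y.backward()  # reverse-mode automatic differentiation

print(x.grad)  # tensor(10.) -- dy/dx = 2x evaluated at x = 5
```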
Because these forthcoming videos are so meaty, we're moving from a twice-weekly publishing schedule to a weekly one: Starting next week, we'll publish a new video to YouTube every Wednesday.
My growing "Calculus for ML" course available on YouTube here.
More detail about my broader "ML Foundations" curriculum and all of the associated open-source code is available on GitHub here.
The Price of Your Attention
Time is money. Every second of your life is yours to use, and one of your options is to generate income. You can do this hourly or, as a data scientist, invest time in a digitally sharable product with a huge potential ROI.
Listen or watch here.
Advanced Exercises on Derivative Rules — Topic 60 of Machine Learning Foundations
Having now covered the Product Rule, Quotient Rule, and Chain Rule, you're well prepared for advanced exercises that confirm your comprehension of all of the derivative rules in my Machine Learning Foundations series.
There’s just one quick derivative rule left after this — one that conveniently combines two of the rules we’ve already covered — and then we’re ready to move on to the next segment of videos, on Automatic Differentiation with PyTorch and TensorFlow.
New videos are published every Monday and Thursday to my "Calculus for ML" course, which is available on YouTube here.
More detail about my broader "ML Foundations" curriculum and all of the associated open-source code is available on GitHub here.
TensorFlow vs PyTorch @ DataScienceGO Virtual
The DataScienceGO Virtual conference is coming up next Saturday and it is FREE! I'm giving a talk on TensorFlow vs PyTorch with lots of time for audience questions.
Fixing Dirty Data
My guest this week is the fixer of dirty data herself, the one and only Susan Walsh. We have a lot of laughs in this episode as we discuss how organizations can save substantial sums by tidying up their data.
Susan has worked for a decade as a data-quality specialist for a wide range of firms across the private and public sectors. For the past four years, she's been doing this work as the founder and managing director of her own company, The Classification Guru Ltd. She's also the author of the forthcoming book, "Between the Spreadsheets", and she hosts her own video interview show called "Live from the Data Den".
Listen or watch here.
The Chain Rule for Derivatives — Topic 59 of Machine Learning Foundations
Today's video introduces the Chain Rule — arguably the single most important differentiation rule for ML. It facilitates several of the most ubiquitous ML algorithms, such as gradient descent and backpropagation.
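As a quick refresher (standard Leibniz notation, not specific to the video): if y = f(u) and u = g(x), then

```latex
\frac{dy}{dx} = \frac{dy}{du}\cdot\frac{du}{dx}
```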
Gradient descent and backprop will be covered in great detail later in my "Machine Learning Foundations" video series. This video is critical for understanding those applications.
New videos are published every Monday and Thursday to my "Calculus for ML" course, which is available on YouTube.
More detail about my broader "ML Foundations" curriculum and all of the associated open-source code is available on GitHub.
The History of Calculus
Y'all seem to love these "History of..." episodes, so for Five-Minute Friday this week, here's another one. It's on the History of Calculus! Enjoy 😄
(Leibniz and Newton, who independently devised modern calculus around the same time, are pictured.)
Listen or watch here.
The Quotient Rule for Derivatives — Topic 58 of Machine Learning Foundations
This is the penultimate Derivative Rule; after one more, we move onward to AutoDiff with TensorFlow and PyTorch! The Quotient Rule is analogous to the Product Rule introduced on Monday but applies to division instead of multiplication.
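For reference, the Quotient Rule in standard notation (my own summary, not a quote from the video):

```latex
\frac{d}{dx}\left[\frac{u(x)}{v(x)}\right] = \frac{v(x)\,u'(x) - u(x)\,v'(x)}{[v(x)]^{2}}
```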
New videos are published every Monday and Thursday. The playlist for my "Calculus for ML" course is here.
More detail about my broader "ML Foundations" series and all of the associated open-source code is available on GitHub here.
Upcoming O'Reilly Calculus Classes
Starting a week from today, I'm offering my entire "ML Foundations" curriculum as a series of 14 live, interactive workshops via O'Reilly Media. The first five classes are open for registration; two are already waitlist-only, so grab a spot now:
• Jul 14 — Intro to Linear Algebra (waitlisted)
• Jul 21 — LinAlg II: Matrix Tensors (5 spots remaining)
• Jul 28 — LinAlg III: Eigenvectors (waitlisted)
• Aug 12 — Intro to Calculus (143 spots remaining)
• Aug 18 — Calc II: AutoDiff (148 spots remaining)
REGARDING THE WAITLIST: I have made a request to O'Reilly to increase the maximum class size from 600 students to 1,000, so if you sign up for a waitlisted class now, you should still be able to get in.
Overall, there will be four subject areas covered:
• Linear Algebra (3 classes)
• Calculus (4 classes)
• Probability and Statistics (4 classes)
• Computer Science (3 classes)
Sign-up opens about two months prior to each class. All 14 training dates, running from next week through December, are provided at jonkrohn.com/talks.
A detailed curriculum and all of the code for my ML Foundations series are available open-source on GitHub here.
Financial Data Engineering
This week's guest is Doug Eisenstein, an exceptionally clear and content-rich communicator. He fills us in on the complexity of engineering a coherent source of truth for financial models, integrating hundreds of data sources.
Topics covered in the episode include:
• A breakdown of the primary financial sectors and departments
• Why data source integration for finance is wildly complicated
• Specific data engineering approaches that resolve these issues, including entity resolution, knowledge-graph mapping, and tri-temporality
Twenty years ago, Doug founded the consulting firm Advanti, which has since become a critical provider of solutions to the complex data engineering problems faced by some of the world's largest banks and asset managers, including Morgan Stanley, Bank of America, Citibank, and State Street.
Listen or watch here.
The Product Rule for Derivatives
Today's video is on the Product Rule, a relatively advanced Derivative Rule. Only a couple of such rules remain, and then we move onward to Automatic Differentiation with PyTorch and TensorFlow.
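For reference, the Product Rule in standard notation (my own summary, not a quote from the video):

```latex
\frac{d}{dx}\left[u(x)\,v(x)\right] = u(x)\,v'(x) + v(x)\,u'(x)
```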
New videos are published every Monday and Thursday. The playlist for my "Calculus for ML" course is here.
More detail about my broader "ML Foundations" series and all of the associated open-source code is available on GitHub here.
Algorithm Aversion
Exercises on Derivative Rules — Topic 56 of Machine Learning Foundations
Today's YouTube video uses five fun exercises to test your understanding of the derivative rules we’ve covered so far: the Constant Rule, Power Rule, Constant-Multiple Rule, and Sum Rule.
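If you'd like a cheat sheet while working through them, here are those four rules in standard notation (my own recap):

```latex
\begin{aligned}
\text{Constant Rule:} \quad & \frac{d}{dx}\,c = 0 \\
\text{Power Rule:} \quad & \frac{d}{dx}\,x^{n} = n\,x^{n-1} \\
\text{Constant-Multiple Rule:} \quad & \frac{d}{dx}\,[c\,f(x)] = c\,\frac{df}{dx} \\
\text{Sum Rule:} \quad & \frac{d}{dx}\,[f(x) + g(x)] = \frac{df}{dx} + \frac{dg}{dx}
\end{aligned}
```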
New videos are published every Monday and Thursday. The playlist for my "Calculus for ML" course is here.
More detail about my broader "ML Foundations" series and all of the associated open-source code is available on GitHub here.
If you’d like a detailed walkthrough of the solutions to all of the exercises in this video, you can check out my Udemy course, Mathematical Foundations of Machine Learning. See jonkrohn.com/udemy
Finalist for Technical Author of the Year
I finished watching last week's Data Community Content Creator Awards and discovered that I was one of three finalists for Favorite Technical Author, alongside the iconic Aurélien Géron and the category winner, Denis Rothman!
This is venerable company to be in, and I'm honored — thanks to everyone who voted for me in this category! (The full category name was "Author of Instructional, Technical, or Textbook".)
Many thanks are due to the artist Aglae Bassens, who did an incredible job illustrating "Deep Learning Illustrated", thereby making the book a unique and popular addition to the field. Thanks are also due to co-author Grant Beyleveld who lent his expertise and colorful writing style to many of the topics.
This also couldn't have happened without Debra Williams, Chris Zahn, Julie Nahil, Betsy Hardinger, and many others at Pearson who put extra effort into a book with unusually complex development and production requirements.
Finally, thanks again to Kate Strachnyi and Harpreet Sahota for devising and executing the awards ceremony flawlessly.
You can watch the entire ceremony on YouTube here. And you can get more info on my book at deeplearningillustrated.com.
Setting Yourself Apart in Data Science Interviews
For this week's guest episode, I interrogated Andrew Jones on his data science interview secrets. If you want to improve your interview performance — especially if you're in a data-related career — this episode's for you.
Andrew has held a number of senior data roles over the past decade, including at the tech giant Amazon. In those roles, Andrew interviewed hundreds upon hundreds of data scientists, leading him to create his Data Science Infinity educational program, a curriculum that provides you with the hard and soft skills you need to set yourself apart from other data scientists during the interview process.
Listen or watch here.
The Sum Rule for Derivatives
Thus far in this set of videos on Differentiation Rules, we’ve covered the Constant, Power, and Constant-Multiple rules. Today's video is on the Sum Rule. On Thursday, we'll have comprehension exercises on all four key rules!
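In a nutshell (standard notation, my own summary): the derivative of a sum is the sum of the derivatives; combined with the Power and Constant-Multiple Rules, for example:

```latex
\frac{d}{dx}\,[f(x) + g(x)] = \frac{df}{dx} + \frac{dg}{dx},
\qquad \text{e.g.,}\quad \frac{d}{dx}\,(x^{3} + 2x) = 3x^{2} + 2
```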
Continuous Calendars
Extremely practical post for you today! It's on the Continuous Calendar, which in my opinion is vastly superior to the standard monthly calendar in every imaginable respect. Click through for more detail.
The Constant Multiple Rule for Derivatives
Continuing my short series on Differentiation Rules, today’s video covers the Constant Multiple Rule. This rule is often used in conjunction with the Power Rule, which was covered in the preceding video, released on Monday.
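For reference (standard notation, my own summary): constants simply factor out of derivatives, and pairing this with the Power Rule gives

```latex
\frac{d}{dx}\,[c\,f(x)] = c\,\frac{df}{dx},
\qquad\text{so}\quad \frac{d}{dx}\,[c\,x^{n}] = c\,n\,x^{n-1}
```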
New videos are published every Monday and Thursday. The playlist for my "Calculus for ML" course is here.
More detail about my broader "ML Foundations" series and all of the associated open-source code is available on GitHub here.