In our most recent session, we began learning about word vectors and vector-space embeddings, a stepping stone toward applying Deep Learning techniques to natural language.
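For anyone who hasn't met the idea, here is a toy sketch in Python: each word becomes a dense vector, and geometric closeness stands in for semantic closeness. The three-dimensional vectors below are invented purely for illustration; real embeddings (word2vec's, for instance) are learned from large text corpora and run to hundreds of dimensions:

```python
# Toy word vectors, invented for illustration; real embeddings are learned.
import numpy as np

embeddings = {
    "king":  np.array([0.80, 0.65, 0.10]),
    "queen": np.array([0.78, 0.70, 0.12]),
    "apple": np.array([0.10, 0.05, 0.90]),
}

def cosine_similarity(u, v):
    """Similarity of direction: ~1.0 for near-identical, ~0 for unrelated."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # high
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # low
```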
Deep Learning Study Group VIII: Unsupervised Learning, Regularisation, and Venture Capital →
At the eighth iteration of our Deep Learning Study Group, we discussed unsupervised learning techniques and enjoyed presentations from Raffi Sapire and Katya Vasilaky on venture capital investment for machine-learning start-ups and L2 regularisation, respectively.
Fundamental Deep Learning code in TFLearn, Keras, Theano, and TensorFlow →
I had a ball giving this talk on Deep Learning, hosted by the Open Statistical Programming Meetup at eBay New York last week. It was a packed house, with many thoughtful questions and fruitful discussions. This post summarises the content and includes the full video, the slides, Jupyter notebooks, and an introduction to the code snippets I presented.
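For a flavour of the material, here is a minimal sketch of a shallow dense network in Keras. It is illustrative only, not one of the talk's actual snippets, and it assumes flattened 784-pixel inputs and ten output classes (MNIST-style):

```python
# Minimal illustrative dense network in Keras (not the talk's actual code).
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(64, activation='relu', input_shape=(784,)))  # single hidden layer
model.add(Dense(10, activation='softmax'))                   # ten output classes

model.compile(optimizer='sgd',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.summary()
# Train with, e.g., model.fit(X_train, y_train, epochs=10, batch_size=128)
```

Part of the appeal of a high-level API like Keras is how little code such a model requires; the same ideas can also be expressed at lower levels in Theano or TensorFlow.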
Introductory Talk on Deep Learning at Wilfrid Laurier University →
Last week, I gave an introductory talk to the Faculty of Science at Wilfrid Laurier University on the Artificial Intelligence approach Deep Learning: what it is, how it's impacting each of us today, and how it is poised to become ever more ubiquitous in the years to come. Here is a summary write-up.
Upcoming Talk: Deep Learning with Artificial Neural Networks →
Next week, I'll be giving an introductory public lecture on the topics I'm most passionate about -- Deep Learning and AI -- at Wilfrid Laurier University. If you're in Waterloo, Ontario, I'd love to share it with you live on January 4th at 4pm. Click through the title for details.
Deep Learning Study Group #6: A History of Machine Vision →
A fortnight ago, our Deep Learning Study Group held a particularly fun and engaging session.
This was our first meeting since completing Michael Nielsen’s Neural Networks and Deep Learning text. Nielsen’s work provided us with a solid foundation for exploring more thoroughly the convolutional neural nets that are the de facto standard in contemporary machine-vision applications.
Deep Learning Study Group #5: How Deep Convolutional Neural Networks Work and How to Improve Them →
This week, our Deep Learning Study Group focused on a popular neural network architecture for machine vision: the Convolutional Neural Network. This technique is used, for example, to recognise your face on Facebook and to enable Tesla's self-driving cars to identify traffic. Modelled loosely on the primate visual cortex, the approach is powerful and has become ubiquitous within contemporary Artificial Intelligence.
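For concreteness, here is a minimal convolutional network sketched in Keras. It is a toy example of the architecture, not taken from the session materials, and it assumes 28x28 greyscale images in ten classes:

```python
# Toy convolutional network in Keras; assumes 28x28 greyscale inputs
# (channels-last) and ten classes. Illustrative only.
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu',
                 input_shape=(28, 28, 1)))    # learn small local visual features
model.add(MaxPooling2D(pool_size=(2, 2)))     # downsample the feature maps
model.add(Flatten())                          # unroll maps into a vector
model.add(Dense(10, activation='softmax'))    # output class probabilities
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
```

The convolutional layer's small, shared filters are what give the architecture its loose resemblance to the receptive fields of the visual cortex.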
“Fundamentals of Deep Learning” talk to Data Science + FinTech Meetup →
On Wednesday, I had the joy of presenting to a highly engaged audience at the Data Science + FinTech meetup. Click through for a quick summary, my full slides, and video clips.
Deep Learning Study Group Session #4: Proofs of Key Neural Net Properties →
In our fourth session, we discussed two key properties of neural networks: that they can compute any function, and that deep networks tend to have unstable (typically, vanishing) gradients.
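A quick numerical sketch makes the vanishing-gradient half of that claim concrete. Assuming sigmoid activations and, generously, unit weights with every neuron at its steepest operating point, the gradient reaching the early layers still shrinks geometrically with depth:

```python
# Back-of-the-envelope illustration of vanishing gradients. Assumes sigmoid
# activations, unit weights, and every neuron at z = 0 (its steepest point).
import numpy as np

def sigmoid_prime(z):
    s = 1.0 / (1.0 + np.exp(-z))
    return s * (1.0 - s)  # peaks at 0.25 when z = 0

# Backpropagation multiplies one such derivative per layer, so the gradient
# arriving at the first layer scales like 0.25 ** depth at best.
for depth in [1, 5, 10, 20]:
    grad = np.prod([sigmoid_prime(0.0) for _ in range(depth)])
    print(f"depth {depth:2d}: gradient factor ~ {grad:.2e}")
```

With twenty sigmoid layers the factor falls below 10^-12, which is why the early layers of deep sigmoid networks learn so slowly.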
Three Themes of Seth Moulton’s Inaugural Term, Visualized with Data →
In 2014, I volunteered on Congressman Seth Moulton's first political campaign. Two years on, I've dusted off my media-analytics toolkit to summarize his inaugural term in office with data visualizations.
It's a joy to see Seth working with Democrats and Republicans alike to deliver for his country and his constituents in Massachusetts. With Seth running unopposed this November, I'm excited to see the contributions his across-the-aisle diplomacy will bring in his second term.
Deep Learning Study Group Session #3: Improving Neural Networks →
During the third installment of our Deep Learning Study Group, we examined rules of thumb for improving neural networks. This post summarises what we covered, including step-by-step details for tuning a network's hyperparameters.
Deep Learning Study Group Session #2: The Backpropagation Algorithm →
Our Deep Learning Study Group moved forward yesterday by focusing on the ubiquitous Backpropagation Algorithm.
Minimizing Unwanted Bias with Recruitment Algorithms →
I woke up this morning to a thoughtful Financial Times piece on the “risks of relying on robots for fairer staff recruitment”.
While the author advances well-founded concerns for our industry, the risks associated with integrating algorithms into the talent acquisition process are appreciably offset by the benefits: scalability, access to a broader candidate pool, and, vitally, openness.
Deep Learning Study Group Session #1: Perceptrons and Sigmoid Neurons →
Last week, it was my pleasure to host the heavily oversubscribed inaugural session of the Deep Learning Study Group at our untapt HQ in New York.
I learned a ton from the broad range of well-prepared, articulate, and intelligent engineers and scientists who attended.
Here's a recap of what we covered and an overview of where we're going next.
Deep Learning Study Group →
I am launching a study group for folks interested in applying techniques from the explosively influential field of Deep Learning.
Our first meeting will be held at untapt HQ in two weeks. Details, including recommended preparatory work, can be found by clicking through to the full post.
Joint Statistical Meetings 2016 →
It was a delight to represent untapt at the Joint Statistical Meetings in Chicago, the largest gathering of statisticians this year, where we presented our novel approach to hierarchical Bayesian modelling, which enables us to efficiently automate decision-making with our data.
The End of Code →
This is a brief lay piece on the increasingly widespread field of Deep Learning that conveys the technique's beauty, its mystery, and the power it has over us all.
"As our technological and institutional creations have become more complex, our relationship to them has changed. Instead of being masters of our creations, we have learned to bargain with them, cajoling and guiding them in the general direction of our goals. We have built our own jungle, and it has a life of its own."
Optimal Resume Length →
Drawing on our internal data, we found that the optimal resume length is 250 to 350 words.
Hiring Quickly Matters →
Using our internal data set of ten thousand applications to technology roles, we found that hiring managers respond significantly more quickly in situations that eventually lead to a hire than in those that don't.
A Data Science Approach to Maximizing Data Scientist Salary →
Does a Ph.D. improve your salary? In the field of Data Science, it does a tiny bit, but it is hardly worth the investment of years of your life (if pay is primarily why you're pursuing the degree). On the other hand, learning a single cutting-edge analytics technique is associated with a pay jump of $12,000. Here are my thoughts on using data to maximise your pay, based on a recent talk at the Women in Machine Learning & Data Science meetup.