The biggest AI news of 2020 so far is the success of OpenAI’s monstrous new language model, GPT-3. In … Keep reading
Category: Neural networks
Organizing applied machine learning research
Over the past three years, I’ve spent >50% of my time thinking about what the applied research teams I’ve been part of should be building, and how. This post is about some of the challenges we’ve faced organizing applied machine … Keep reading
Predicting the performance of deep learning models
It’s widely acknowledged that the recent successes of Deep Learning rest heavily upon the availability of huge amounts of data. Vision was the first domain in which the promise of DL was realised, probably because of the availability of large … Keep reading
Abuse detection on Twitter: a collaboration with Amnesty International
At the NeurIPS 2018 workshop on AI for Social Good we presented work carried out in collaboration with Amnesty International. We combined crowdsourcing and deep learning to study the nature and quantity of abuse suffered by prominent … Keep reading
Driverless cars and the attention economy
It’s increasingly hard to ignore the buzz surrounding driverless cars. Tesla and Google/Waymo are at the front of the pack, logging hundreds of thousands of driverless miles on the roads of California. If you believe the hype, then our roads … Keep reading
Deep Learning Practical 2: Decoding MNIST
MNIST (Modified National Institute of Standards and Technology) is a database of handwritten digits. Compiled by Yann LeCun and colleagues, it’s a classic benchmark problem in machine learning. Pleasingly, you can write a model in TensorFlow that does a … Keep reading
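The teaser above mentions writing a TensorFlow model for MNIST. As a rough illustration of what such a model can look like, here is a minimal sketch assuming TensorFlow 2.x with its bundled Keras API; the layer sizes and optimizer are illustrative choices, not necessarily those used in the post:

```python
# A minimal MNIST classifier sketch, assuming TensorFlow 2.x.
# Layer sizes here are illustrative, not taken from the post.
import tensorflow as tf

# MNIST digits are 28x28 grayscale images, flattened to 784-d vectors.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(784,)),
    tf.keras.layers.Dense(128, activation="relu"),    # hidden layer
    tf.keras.layers.Dense(10, activation="softmax"),  # one unit per digit class
])

model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",  # labels are integer digits 0-9
    metrics=["accuracy"],
)
```

Trained on the standard 60,000-image MNIST training split (available via `tf.keras.datasets.mnist`), even a small model like this reaches high accuracy, which is part of why MNIST became such a popular benchmark.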