The most significant event in the recent history of technology is perhaps the explosion in the power of neural networks since 2012. This was when growth in labeled datasets, increases in computation power, and innovations in algorithms came together and reached a critical mass. Since then, deep neural networks have made previously unachievable tasks achievable and boosted accuracy in other tasks, pushing them beyond academic research and into practical applications in domains such as speech recognition, image labeling, generative models, and recommendation systems, just to name a few.
It was against this backdrop that our team at Google Brain started developing TensorFlow.js. When the project started, many regarded “deep learning in JavaScript” as a novelty, perhaps a gimmick: fun for certain use cases, but not something to be pursued seriously. While Python already had several well-established and powerful frameworks for deep learning, the JavaScript machine-learning landscape remained splintered and incomplete. Of the handful of JavaScript libraries available back then, most supported only the deployment of models pretrained in other languages (usually Python). For the few that supported building and training models from scratch, the scope of supported model types was limited. Considering JavaScript’s popularity and its ubiquity across the client and server sides, this was a strange situation.