This is the age of artificial intelligence. Machine learning and predictive analytics are now established and integral to just about every modern business, but artificial intelligence expands the scale of what’s possible within those fields. It’s what makes deep learning possible: systems with greater autonomy and complexity can solve similarly complex problems.
If deep learning systems can solve more complex problems and perform more sophisticated tasks, building them is naturally a bigger challenge for data scientists and engineers. Luckily, a growing range of frameworks makes it a little easier to build deep learning solutions of some complexity. This wave of frameworks is another manifestation of a wider trend across modern technology: engineering communities developing their own tools that offer a higher level of abstraction and simplify potentially difficult programming tasks. Every framework is different, built for a different purpose and offering a unique range of features. Understanding what this landscape looks like, however, will help inform how you take on your next deep learning challenge, and give you a better sense of what’s available to help you.
Top 10 Deep Learning Frameworks
1. TensorFlow

One of the most popular deep learning libraries out there, TensorFlow was developed by the Google Brain team and open-sourced in 2015. Positioned as a ‘second-generation machine learning system’, TensorFlow is a Python-based library capable of running on multiple CPUs and GPUs. It is available on all platforms, desktop and mobile. It also has support for other languages such as C++ and R, and can be used either directly to create deep learning models or via wrapper libraries (e.g. Keras) built on top of it.
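As a taste of the library, here is a minimal sketch of TensorFlow’s core abstraction, operations on tensors. It assumes a TensorFlow 2.x install, where eager execution is the default (earlier releases required building and running an explicit graph):

```python
# Minimal TensorFlow sketch: multiply two small matrices.
# Assumes TensorFlow 2.x (eager execution enabled by default).
import tensorflow as tf

a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[1.0, 0.0], [0.0, 1.0]])  # identity matrix
product = tf.matmul(a, b)

print(product.numpy())  # same values as `a`, since b is the identity
```

The same operations run unchanged on CPU or GPU; TensorFlow places them on whatever device is available.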
2. Theano

One of the first deep learning libraries, Theano is Python-based and particularly good at numerical computation on CPUs and GPUs. Like TensorFlow, Theano is a low-level library: you can use it on its own to create deep learning models, or use wrapper libraries on top of it to simplify the process. It is, however, not as scalable as other deep learning frameworks, and it lacks multi-GPU support. Even so, it remains the choice of many developers around the world for general-purpose deep learning.
3. Keras

While Theano and TensorFlow are very good deep learning libraries, creating models with them directly can be a challenge, as they’re pretty low-level. To tackle this, Keras was built as a simplified interface for building efficient neural networks. Keras can be configured to run on top of either Theano or TensorFlow. Written in Python, it is very lightweight and straightforward to learn. It has very good documentation despite being relatively new, and you can build a neural network with Keras in just a few lines of code.
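To illustrate just how compact a Keras model definition is, here is a small fully connected network in a few lines. This sketch assumes the Keras API bundled with TensorFlow 2.x (`tensorflow.keras`); standalone Keras with a Theano backend uses the same model-building calls:

```python
# Minimal Keras sketch: a two-layer fully connected network.
# Assumes the Keras bundled with TensorFlow 2.x.
from tensorflow import keras

model = keras.Sequential([
    keras.Input(shape=(10,)),                        # 10 input features
    keras.layers.Dense(32, activation="relu"),       # hidden layer
    keras.layers.Dense(1, activation="sigmoid"),     # binary output
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# 10*32 + 32 weights in the hidden layer, 32 + 1 in the output: 385 total
print(model.count_params())
```

Compare this with the equivalent graph you would have to assemble by hand in raw Theano or TensorFlow, and the appeal of the wrapper approach is obvious.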
4. Caffe

Built with expression, speed and modularity in mind, Caffe is one of the first deep learning libraries, developed mainly by the Berkeley Vision and Learning Center (BVLC). It is a C++ library with a Python interface, and finds its primary application in modeling convolutional neural networks (CNNs). One of the major benefits of this library is the Caffe Model Zoo, which offers a number of pre-trained networks available for immediate use. If you’re interested in modeling CNNs or solving image-processing problems, you might want to consider this library.
Following in the footsteps of Caffe, Facebook also recently open-sourced Caffe2, a new lightweight, modular deep learning framework that offers greater flexibility for building high-performance deep learning models.
5. Torch

Torch is a Lua-based deep learning framework that has been used and developed by big players such as Facebook, Twitter and Google. It makes use of C/C++ libraries as well as CUDA for GPU processing. Torch was built with the aim of achieving maximum flexibility and making the process of building your models extremely simple. More recently, the Python implementation of Torch, called PyTorch, has found popularity and is gaining rapid adoption.
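The flexibility shows up most clearly in PyTorch’s autograd machinery, which computes gradients from ordinary Python code rather than a pre-declared graph. A minimal sketch, assuming a recent `torch` install:

```python
# Minimal PyTorch sketch: automatic differentiation with autograd,
# the mechanism that underlies training in Torch/PyTorch.
import torch

x = torch.tensor(3.0, requires_grad=True)
y = x ** 2 + 2 * x    # y = x^2 + 2x
y.backward()          # populates x.grad with dy/dx = 2x + 2

print(x.grad)         # tensor(8.) at x = 3
```

Because the graph is built on the fly as the code runs, control flow like loops and conditionals can shape the model dynamically, which is a large part of PyTorch’s appeal.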
6. DeepLearning4j

DeepLearning4j (or DL4J) is a popular deep learning framework written in Java, with support for other JVM languages as well. It is very slick, and is widely used as a commercial, industry-focused distributed deep learning platform. The advantage of DL4J is that you can bring the power of the whole Java ecosystem to bear on efficient deep learning, as it can run on top of popular Big Data tools such as Apache Hadoop and Apache Spark.
7. MXNet

MXNet is one of the most broadly language-supported deep learning frameworks, with bindings for languages such as R, Python, C++ and Julia. This is helpful because, if you know any of these languages, you won’t need to step out of your comfort zone to train your deep learning models. Its backend is written in C++ and CUDA, and it is able to manage its own memory, like Theano. MXNet is also popular because it scales very well across multiple GPUs and machines, which makes it very useful for enterprises. This is also one of the reasons why Amazon made MXNet its reference library for deep learning.
8. Microsoft Cognitive Toolkit
Microsoft Cognitive Toolkit, previously known by its acronym CNTK, is an open-source toolkit for training deep learning models. It is highly optimized, and has support for languages such as Python and C++. Known for efficient resource utilization, the Cognitive Toolkit makes it easy to implement reinforcement learning models or Generative Adversarial Networks (GANs). It is designed for high scalability and performance, and is known to deliver significant performance gains over toolkits like Theano and TensorFlow when running on multiple machines.
9. Lasagne

Lasagne is a high-level deep learning library that runs on top of Theano. It has been around for quite some time now, and was developed with the aim of abstracting away the complexities of Theano and providing a friendlier interface for building and training neural networks. It requires Python, and has much in common with Keras, which we just saw above. If we are to draw a distinction between the two, however, Keras is faster and has better documentation in place.
10. BigDL

BigDL is a distributed deep learning library for Apache Spark, designed to scale very well. With BigDL, you can run deep learning applications directly on Spark or Hadoop clusters by writing them as Spark programs. It has rich deep learning support, and uses Intel’s Math Kernel Library (MKL) to ensure high performance. Using BigDL, you can also load pre-trained Torch or Caffe models into Spark. If you want to add deep learning functionality to a massive dataset stored on your cluster, this is a very good library to use.
There are many other deep learning libraries and frameworks available today; DSSTNE, Apache Singa and Veles are just a few worth an honourable mention. The list above presents an interesting question, then: which deep learning framework would best suit your needs? Ultimately, that depends on a number of factors. If you want to get started with deep learning, your safest bet is a popular Python-based framework like TensorFlow or Theano. For seasoned professionals, the efficiency of the trained model, ease of use, speed and resource utilization are all important considerations when choosing the best deep learning framework.