Performance of running neural networks across Azure GPU-series Data Science Virtual Machines

One of the questions I regularly get from students and academics is:

Which NN framework runs best on Azure?

Caffe2
MXNet
Gluon
CNTK
PyTorch
Tensorflow
Keras(CNTK)
Chainer
Keras(TF)
Lasagne(Theano)
Keras(Theano)

NNs on Azure

Azure has broad support for these frameworks, including prebuilt Azure Batch Shipyard container recipes; see https://github.com/Azure/batch-shipyard/tree/master/recipes

One of our colleagues, Ilia Karmanov, has developed a set of Jupyter notebooks specifically to compare the performance of these frameworks running on the Microsoft Azure Data Science Virtual Machine.

Ilia stresses that the notebooks are not written for speed; instead, they aim to provide an easy comparison between the frameworks.

The notebooks are run on an NVIDIA K80 GPU on the Microsoft Azure Data Science Virtual Machine for Linux (Ubuntu), with each framework updated to its latest version.
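
For reproducibility it helps to record the exact GPU and framework versions each run used. A minimal sketch of such a check (assuming, purely for illustration, a CUDA-enabled DSVM with TensorFlow and PyTorch installed; the repository's notebooks may record this differently):

    import subprocess
    import sys

    import tensorflow as tf
    import torch

    # Record the Python and framework versions so results can be tied to an exact setup.
    print("Python:", sys.version.split()[0])
    print("TensorFlow:", tf.__version__)
    print("PyTorch:", torch.__version__)

    # Query the GPU name and driver version via nvidia-smi (present on the GPU DSVM images).
    print(subprocess.check_output(
        ["nvidia-smi", "--query-gpu=name,driver_version", "--format=csv"]
    ).decode())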

Goal of the Notebooks

Create a Rosetta Stone of deep-learning frameworks to allow data scientists to easily leverage their expertise from one framework to another (by translating, rather than learning from scratch), and to make the models more transparent for comparison in terms of training time and default options.

A lot of online tutorials use very low-level APIs, which are verbose and, given that higher-level helpers are available, don't make much sense for most use cases unless you plan to create new layers. Here we try to apply the highest-level API possible, conditional on being able to override conflicting defaults, to allow an easier comparison between frameworks. It will be demonstrated that, once higher-level APIs are used, the code structure becomes very similar across frameworks and can be roughly represented by the following steps (a code sketch follows the list):

  • Load data; x_train, x_test, y_train, y_test = cifar_for_library(channel_first=?, one_hot=?)
  • Generate CNN/RNN symbol (usually no activation on final dense-layer)
  • Specify loss (cross-entropy comes bundled with softmax), optimiser and initialise weights + sessions
  • Train on mini-batches from train-set using custom iterator (common data-source for all frameworks)
  • Predict on fresh mini-batches from test-set
  • Evaluate accuracy
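
As a concrete illustration of that shared structure, here is a minimal, self-contained sketch of the six steps in Keras (TensorFlow backend). It is written against the tf.keras API and loads CIFAR-10 via tf.keras.datasets rather than the repository's cifar_for_library helper, so the layer sizes and hyperparameters are illustrative only, not the benchmark's actual settings:

    import numpy as np
    import tensorflow as tf

    # 1. Load data (here via tf.keras.datasets, not the repo's cifar_for_library helper).
    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
    x_train = x_train.astype("float32") / 255.0
    x_test = x_test.astype("float32") / 255.0

    # 2. Generate the CNN symbol -- note there is no activation on the final dense layer.
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(50, 3, padding="same", activation="relu",
                               input_shape=(32, 32, 3)),
        tf.keras.layers.Conv2D(50, 3, padding="same", activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Dropout(0.25),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(512, activation="relu"),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(10),   # logits only; softmax is folded into the loss
    ])

    # 3. Specify the loss (cross-entropy bundled with softmax) and optimiser;
    #    weights are initialised when the model is built.
    model.compile(
        optimizer=tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9),
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=["accuracy"],
    )

    # 4. Train on mini-batches from the training set.
    model.fit(x_train, y_train, batch_size=64, epochs=10, verbose=1)

    # 5. Predict on mini-batches from the test set.
    y_pred = np.argmax(model.predict(x_test, batch_size=64), axis=-1)

    # 6. Evaluate accuracy.
    accuracy = float(np.mean(y_pred == y_test.squeeze()))
    print("Test accuracy:", accuracy)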

Since we are essentially comparing a series of deterministic mathematical operations (albeit with random initialization), it does not make sense to compare accuracy across frameworks; instead, accuracies are reported as checks we want to match, to make sure we are comparing the same model architecture.
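
In practice this means each notebook simply checks that its accuracy lands close to a common reference value rather than ranking frameworks by it. A hedged sketch of such a check (the expected value and tolerance below are placeholders, not figures from the repository):

    def check_accuracy(accuracy, expected=0.77, tolerance=0.02):
        # 'expected' and 'tolerance' are illustrative placeholders, not repository figures.
        # The check only confirms the compared architectures are equivalent;
        # it is not itself a benchmark result.
        assert abs(accuracy - expected) < tolerance, (
            f"Accuracy {accuracy:.3f} is outside {expected} +/- {tolerance}; "
            "the compared models are probably not equivalent."
        )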

Getting the notebooks to run the performance tests

You can download and use these notebooks from Ilia Karmanov's GitHub at https://github.com/ilkarman/DeepLearningFrameworks or view each of the notebooks below.

Caffe2
MXNet
Gluon
CNTK
PyTorch
Tensorflow
Keras(CNTK)
Chainer
Keras(TF)
Lasagne(Theano)
Keras(Theano)