• Native implementation of Deep Learning models for GPU-optimized backends (MXNet, Caffe, TensorFlow, etc.)
• Train user-defined or pre-defined deep learning models for image/text/H2OFrame classification from Flow, R, Python, Java, Scala or the REST API (see the Python sketch after this list)
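
As a quick illustration, here is a minimal sketch of training a pre-defined network from the Python API. It assumes the H2O cluster started by the Deepwater container is reachable on the default port, and that "my_data.csv", the "label" column, the network name and the hyperparameters are placeholders rather than anything prescribed by the product:

    import h2o
    from h2o.estimators.deepwater import H2ODeepWaterEstimator

    # connect to the H2O instance started by the Deepwater container (default port 54321)
    h2o.init(ip="localhost", port=54321)

    # "my_data.csv" and the "label" column are illustrative placeholders
    frame = h2o.import_file("my_data.csv")
    frame["label"] = frame["label"].asfactor()

    # pre-defined LeNet topology on the MXNet backend; hyperparameters are examples only
    model = H2ODeepWaterEstimator(network="lenet", backend="mxnet", epochs=10)
    model.train(x=[c for c in frame.columns if c != "label"],
                y="label",
                training_frame=frame)

    print(model)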

You can pull and start your H2O Deepwater container via:

    nvidia-docker run -it --rm opsh2oai/h2o-deepwater
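
If you also want to reach the H2O Flow UI from your host or laptop, you will typically publish H2O's default port (54321) and mount a data directory; the port mapping and host path below are assumptions for illustration, not part of the image's documented invocation:

    nvidia-docker run -it --rm -p 54321:54321 -v /data:/data opsh2oai/h2o-deepwater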

Whether you are inside the container or on your GPU-equipped host VM, you can monitor processes and GPU utilization with the nvidia-smi tool. To watch it refresh live, run:

    watch -d -n 1 nvidia-smi
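
If you would rather log utilization over time than watch it interactively, nvidia-smi can also emit periodic CSV samples; the query fields and five-second interval below are just one possible selection:

    nvidia-smi --query-gpu=timestamp,utilization.gpu,memory.used --format=csv -l 5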

I hope this information was useful for getting started with GPU-powered Deep Learning workloads in the cloud. And don't forget to shut down your virtual machines after finishing your jobs, or to use the auto-shutdown feature in Azure VMs.
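
For example, you can deallocate a VM or schedule a daily auto-shutdown from the Azure CLI; the resource group and VM names below are placeholders, and the shutdown time is interpreted as UTC:

    # stop and deallocate the VM so you stop paying for compute
    az vm deallocate --resource-group MyResourceGroup --name MyGpuVM

    # schedule a daily automatic shutdown at 19:00 UTC
    az vm auto-shutdown --resource-group MyResourceGroup --name MyGpuVM --time 1900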