https://www.youtube.com/watch?v=hrXiqKGb7OQ
This is the first in a series of videos where we are going to talk about leveraging the power of GPUs to accelerate machine learning and deep learning algorithms using TensorFlow running on Mesosphere DC/OS.
Mesosphere DC/OS (and the Apache Mesos software at the core of our platform) provides rich capabilities to support multiple workloads on your infrastructure — such as containers, machine learning, and data services — while also making it easy to run some of these workloads using GPU-based scheduling.
TensorFlow is a popular open source machine learning library. It was developed by the Google Brain team within Google's Machine Intelligence research organization for internal use, was later open-sourced to the general public, and provides many tools for deep learning and neural network research.
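To give a flavor of the library, here is a minimal sketch of a TensorFlow program, written against the TensorFlow 1.x graph-and-session API that was current at the time of this tutorial; the values are arbitrary.

```python
import tensorflow as tf

# Build a small computation graph: multiply two constant matrices.
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[1.0, 1.0], [0.0, 1.0]])
c = tf.matmul(a, b)

# The graph is only executed when run inside a session.
with tf.Session() as sess:
    print(sess.run(c))
```

TensorFlow first builds a computation graph and only executes it when you run it in a session, which is what lets the same program be placed on CPUs or GPUs.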
The first step in developing any machine learning algorithm is to test it on your local machine with a small dataset. Once you are sure the algorithm runs correctly locally, you can move it to production for training on larger datasets.
In our first step-by-step video, we show you how to run TensorFlow locally, with and without GPUs, for development purposes, and demonstrate the speed advantage of running with NVIDIA GPUs. We will be using the TensorFlow Docker image with nvidia-docker. Docker is a container platform that lets you package your machine learning algorithm together with its associated libraries and run it anywhere.
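To see the GPU speedup for yourself, you can run a quick benchmark inside the container (started, for example, with `nvidia-docker run -it tensorflow/tensorflow:latest-gpu python`; the image tag is an assumption about your setup). The sketch below assumes the TensorFlow 1.x graph-and-session API and a GPU-enabled TensorFlow build; the matrix size and device names are illustrative.

```python
import time
import tensorflow as tf

def benchmark(device_name, n=4000):
    """Time one n x n matrix multiplication pinned to the given device."""
    tf.reset_default_graph()
    with tf.device(device_name):
        a = tf.random_normal([n, n])
        b = tf.random_normal([n, n])
        c = tf.matmul(a, b)
    with tf.Session() as sess:
        start = time.time()
        sess.run(c)
        return time.time() - start

print("CPU time: %.3fs" % benchmark("/cpu:0"))
print("GPU time: %.3fs" % benchmark("/gpu:0"))  # requires a visible NVIDIA GPU
```

On a machine with a supported NVIDIA GPU, the GPU timing should come out substantially lower than the CPU timing for matrices of this size.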
In follow-up videos and tutorials, we will show you how to move your TensorFlow machine learning model into production with Mesosphere DC/OS and run it even faster on larger and more powerful GPU-enabled machines in any datacenter or public cloud. With DC/OS, multiple teams can also share the infrastructure for increased utilization while guaranteeing performance through resource isolation.
For a detailed step-by-step guide to this tutorial, please check here.