Using GPUs with Python Deep Learning

Prelude

To get set up properly and test your environment, you will want to allocate a compute node that has a GPU. Here are a couple of examples:

srun --pty -p gpu -c 2 -t 12:00:00 --gres=gpu:1 bash 

or, if the gpu partition is busy, try to scavenge idle PI GPUs:

srun --pty -p scavenge -c 2 -t 12:00:00 --gres=gpu:1 bash 

Next, we'll walk through the setup and activation of your environment.

One-time Setup

Modules

Load the modules for either Farnam or Grace:

# load modules for Farnam
module purge
module load GCC/7.3.0-2.30
module load cuDNN/7.1.4-CUDA-9.0.176
module load Python/miniconda

# or load modules for Grace
module purge
module load Langs/GCC/5.2.0
module load GPU/cuDNN/9.0-v7
module load Langs/Python/miniconda

Then save your modules as a collection.

# save module environment
module save cuda
module purge

Create Your Python Environment

For more information on conda environments, see our documentation page on the topic.

# create conda environment for deep learning/neural networks
conda create -y -n dlnn python=3.6 anaconda
source activate dlnn

# install libraries
pip install --force --upgrade setuptools
pip install keras
pip install Theano
pip install https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.8.0-cp36-cp36m-linux_x86_64.whl
pip install http://download.pytorch.org/whl/cu90/torch-0.4.0-cp36-cp36m-linux_x86_64.whl 
pip install torchvision
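
Before moving on, it is worth confirming that the installs succeeded. A minimal sketch of a check, run inside the dlnn environment, is shown below; it simply imports each library installed above and prints its version:

# quick sanity check: import each library installed above and print its version
import keras
import theano
import tensorflow as tf
import torch
import torchvision

print("Keras:", keras.__version__)
print("Theano:", theano.__version__)
print("TensorFlow:", tf.__version__)
print("PyTorch:", torch.__version__)
print("torchvision:", torchvision.__version__)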

Getting to Work

Now, to re-enter your deep learning environment, you just need the following:

module restore cuda
source activate dlnn
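
As a final check that the frameworks can actually see the allocated GPU, you can run a short script like the sketch below. It relies only on the tf.test.is_gpu_available and torch.cuda.is_available calls available in the TensorFlow 1.8 and PyTorch 0.4 builds installed above:

# verify that TensorFlow and PyTorch both detect the GPU on the compute node
import tensorflow as tf
import torch

# TensorFlow: True if a CUDA-capable device is visible to this process
print("TensorFlow sees a GPU:", tf.test.is_gpu_available())

# PyTorch: report the CUDA device name if one is available
if torch.cuda.is_available():
    print("PyTorch GPU:", torch.cuda.get_device_name(0))
else:
    print("PyTorch does not see a GPU")

If either check comes back negative, make sure you are on a node allocated with --gres=gpu:1 and that the cuda module collection has been restored.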