Using GPUs with Python Deep Learning


In order to get set up properly and test your environment, you will want to install and compile on a compute node with GPUs allocated to you. To do this with Slurm:

srun --pty -p gpu -c 2 -t 12:00:00 --gres=gpu:2 --gres-flags=enforce-binding bash 

or, if the gpu queue is busy, try scavenging some PI GPUs:

srun --pty -p scavenge -c 2 -t 12:00:00 --gres=gpu:2 --gres-flags=enforce-binding bash 
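
Once the interactive session starts, it is worth confirming that Slurm actually bound the GPUs you asked for. A minimal check (nvidia-smi is only present on GPU nodes, so the snippet degrades gracefully elsewhere):

```shell
# Show which GPU device IDs Slurm bound to this job (empty/unset if none)
echo "CUDA_VISIBLE_DEVICES: ${CUDA_VISIBLE_DEVICES:-unset}"

# List the physical GPUs, if the NVIDIA driver tools are available
if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi -L
else
    echo "nvidia-smi not found -- are you on a GPU node?"
fi
```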

Next, we'll walk through the setup and activation of your environment:

One-Time Setup:


# load modules 
module load GCC/5.4.0-2.26
module load CUDA/8.0.44 
module load cuDNN/5.1-CUDA-8.0.44
# save module environment
module save cuda
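
Before moving on, you can verify that the CUDA toolkit is actually on your path (a quick sketch; `nvcc` only exists after the CUDA module has loaded successfully):

```shell
# Confirm the CUDA compiler is usable after loading the modules
if command -v nvcc >/dev/null 2>&1; then
    nvcc --version    # should report release 8.0
else
    echo "nvcc not on PATH -- re-check 'module load CUDA/8.0.44'"
fi
```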

Create Your Python Environment:

module restore cuda
# install Anaconda (first download the Linux installer from anaconda.com;
# substitute the actual installer filename you downloaded)
bash Anaconda3-<version>-Linux-x86_64.sh -b -p $HOME/anaconda3
echo "# Next line makes anaconda my default python, comment out with # to disable this" >> ~/.bashrc
echo 'export PATH="$HOME/anaconda3/bin:$PATH"' >> ~/.bashrc
source ~/.bashrc
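
After sourcing your `.bashrc`, check that the shell now resolves `python` from the Anaconda install rather than the system copy (the fallback to `python3` below is just so the check works on any machine):

```shell
# Which python will run? Expect $HOME/anaconda3/bin/python after the setup above
command -v python || command -v python3
```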

# create conda environment for deep learning/neural networks
conda create -y -n dlnn python=3.6 anaconda
source activate dlnn

# install libraries 
conda install -y pygpu
pip install --upgrade --force setuptools
pip install --upgrade Theano
pip install --upgrade tensorflow-gpu
pip install torch        # pick the PyTorch build that matches your CUDA version
pip install torchvision
pip install keras
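
After the installs finish, a quick sanity check helps catch a package that silently failed to install. This is a sketch to run inside the activated `dlnn` environment; `available_frameworks` is a helper name of our choosing, not part of any of these libraries:

```python
import importlib.util

def available_frameworks(names=("theano", "tensorflow", "torch", "keras")):
    """Map each package name to True if it is importable, else False."""
    return {name: importlib.util.find_spec(name) is not None for name in names}

if __name__ == "__main__":
    for name, ok in available_frameworks().items():
        print(f"{name}: {'OK' if ok else 'MISSING'}")
```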

To get to work:

Now, to re-enter your Deep Neural Network environment, you just need the following:

module restore cuda
source activate dlnn
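
Finally, a short smoke test you can run once the environment is active to confirm TensorFlow can actually see a GPU. This is a sketch (the `tensorflow_gpu_status` helper is our own name); it degrades gracefully if run off a GPU node:

```python
def tensorflow_gpu_status():
    """Return the GPU device TensorFlow sees, or a reason it cannot."""
    try:
        import tensorflow as tf
    except ImportError:
        return "tensorflow not installed"
    name = tf.test.gpu_device_name()  # e.g. "/device:GPU:0" on a GPU node
    return name if name else "no GPU visible to TensorFlow"

if __name__ == "__main__":
    print(tensorflow_gpu_status())
```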