Specify GPU device

Tags: notes, Pytorch, Jupyter, CUDA

How to specify GPU device

TL;DR

A generic way to specify which GPU device(s) a program may use is to set the CUDA_VISIBLE_DEVICES environment variable. When running from the terminal:

CUDA_VISIBLE_DEVICES=0,1 [executable] [params]
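For instance, the following stands in for an actual GPU program with a one-liner that simply echoes the variable, confirming it is set for that single invocation only:

```shell
# Set CUDA_VISIBLE_DEVICES for one invocation; the one-liner below is a
# placeholder for a real GPU executable and just prints what CUDA would see.
CUDA_VISIBLE_DEVICES=0,1 python3 -c 'import os; print(os.environ["CUDA_VISIBLE_DEVICES"])'
# prints: 0,1
```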

When running inside a Jupyter notebook, use the following snippet. To avoid potential conflicts, use this before any CUDA/GPU-related modules are imported:

import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"

Breakdown

  1. Setting CUDA_VISIBLE_DEVICES restricts the devices visible to the executed program. Internally, CUDA device IDs are remapped: cuda:0 refers to the first visible device, which may not be the first physical GPU. For example, with CUDA_VISIBLE_DEVICES=1, cuda:0 maps to the second physical GPU.
  2. Setting CUDA_DEVICE_ORDER=PCI_BUS_ID (or os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID" in Python) ensures that devices are always enumerated by their PCI bus ID, matching the ordering reported by nvidia-smi and avoiding potential non-determinism in how GPU devices are listed.
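The two points above can be combined in a single snippet. This is a sketch assuming PyTorch; the torch import is shown only as a comment, since the essential point is that both variables must be set before any CUDA-using module is imported:

```python
import os

# Enumerate GPUs by PCI bus ID so device numbering matches nvidia-smi.
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
# Expose only the second physical GPU; the program will see it as cuda:0.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

# Only now import CUDA/GPU-related modules, so they pick up the settings:
# import torch
# torch.cuda.device_count()  # would report a single visible device
```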

Resources

  • Post on the NVIDIA technical blog.
  • Stack Overflow thread specifically for Jupyter notebooks. Uses the CUDA_DEVICE_ORDER environment variable.
  • Pytorch documentation on the torch.cuda.set_device() method (which also notes that using CUDA_VISIBLE_DEVICES is usually the better option).