Specify GPU device
Tags: PyTorch, Jupyter, CUDA
How to specify GPU device
TL;DR
A generic solution to specify the GPU device(s) to use is to set the `CUDA_VISIBLE_DEVICES` environment variable. When running from a terminal:

```bash
CUDA_VISIBLE_DEVICES=0,1 [executable] [params]
```

When running inside a Jupyter notebook, use the following snippet. To avoid potential conflicts, run it before any CUDA/GPU-related modules are imported:
```python
import os

os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"
```

Breakdown
- Using `CUDA_VISIBLE_DEVICES` restricts the devices visible to the executed program. Internally, CUDA device IDs are remapped, i.e. `cuda:0` will be mapped to the first visible device, which could be the second physical GPU if `CUDA_VISIBLE_DEVICES=1`.
- `CUDA_DEVICE_ORDER=PCI_BUS_ID` (or `os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"` in Python) ensures that devices are always listed according to their PCI bus ID, avoiding potential non-determinism when listing GPU devices. A combined sketch is shown after this list.
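Putting the two together, a minimal sketch of how the remapping behaves in a PyTorch session (the chosen device index and tensor size are arbitrary examples):

```python
import os

# Set both variables before importing torch (or any other CUDA-using library),
# since CUDA reads them when its context is initialized.
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

import torch

# Only the selected physical GPU is visible, and it is remapped to cuda:0.
print(torch.cuda.device_count())      # 1
print(torch.cuda.get_device_name(0))  # name of the GPU with PCI-ordered index 1

x = torch.ones(3, device="cuda:0")    # allocated on the only visible GPU
```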
Resources
- Post in the NVIDIA technical blog.
- Stack Overflow thread specifically for Jupyter notebooks. Uses the `CUDA_DEVICE_ORDER` environment variable.
- PyTorch documentation on the `torch.cuda.set_device()` method (which also indicates that using `CUDA_VISIBLE_DEVICES` is usually a better option); a short sketch of this alternative follows below.
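For completeness, a minimal sketch of the `torch.cuda.set_device()` alternative (device index 1 is an arbitrary example); note that, unlike `CUDA_VISIBLE_DEVICES`, it does not hide the other GPUs from the process:

```python
import torch

if torch.cuda.is_available():
    # Make cuda:1 the default CUDA device for this process.
    torch.cuda.set_device(1)
    x = torch.zeros(3, device="cuda")  # "cuda" resolves to the current device, cuda:1
```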