


pytorch - RuntimeError: CUDA out of memory. Tried to allocate... but memory is empty

I'm trying to run the training script from this U-Net implementation with its default hyperparameters and batch size = 1.

I have a GTX 970 with 4 GB of VRAM, and I configured Windows to use the integrated graphics so the GPU is free for training.

When I run nvidia-smi, it reports that the GPU memory is almost entirely free (52 MiB / 4096 MiB) with "No running processes found", and PyTorch is using the GPU, not the integrated graphics.

I do not understand what is using the memory:

RuntimeError: CUDA out of memory. Tried to allocate 150.00 MiB (GPU 0; 4.00 GiB total capacity; 2.77 GiB already allocated; 72.46 MiB free; 2.82 GiB reserved in total by PyTorch).



1 Answer


GPU memory is not allocated all at once. As the program loads the data and moves the model to the GPU, memory usage grows gradually until training actually starts. In your case, PyTorch has already allocated 2.77 GiB and is trying to allocate another 150 MiB before training even starts, but only about 72 MiB are free. That is also why nvidia-smi looks empty: you ran it before the training process claimed anything. 4 GB of GPU memory is usually too small for computer-vision deep learning models such as a U-Net, even at batch size 1.
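
If you want to watch this growth yourself, here is a minimal sketch using torch.cuda.memory_allocated() and torch.cuda.memory_reserved(); the tiny convolutional model is a stand-in for illustration, not the U-Net from the question:

```python
import torch
import torch.nn as nn

def log_cuda_memory(tag: str) -> None:
    # memory_allocated(): bytes currently held by live tensors.
    # memory_reserved(): bytes the caching allocator has claimed from the
    # driver -- the "reserved in total by PyTorch" figure in the error.
    alloc = torch.cuda.memory_allocated() / 2**20
    reserved = torch.cuda.memory_reserved() / 2**20
    print(f"[{tag}] allocated={alloc:.1f} MiB, reserved={reserved:.1f} MiB")

device = torch.device("cuda")
log_cuda_memory("startup")                  # near zero, matching nvidia-smi

model = nn.Sequential(                      # stand-in model, not the U-Net
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1),
).to(device)
log_cuda_memory("after model.to(device)")   # weights now live on the GPU

x = torch.randn(1, 3, 512, 512, device=device)
out = model(x)                              # activations kept for backward
log_cuda_memory("after forward")

out.sum().backward()                        # gradients add to weight memory
log_cuda_memory("after backward")
```

If the full-resolution U-Net still does not fit at batch size 1, the usual levers are smaller input crops, mixed precision via torch.cuda.amp, or gradient checkpointing (torch.utils.checkpoint), all of which cut the activation memory that typically dominates the allocated total.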

