Channel: Active questions tagged ubuntu - Stack Overflow

torch.OutOfMemoryError: CUDA out of memory


I am working on a video anomaly detection model using the code from BN-WVAD. The authors reported high accuracy on two datasets (XD-Violence and UCF-Crime), but since the UCF-Crime-specific code wasn't released, I used code shared by others in the Issues tab of the repository.

However, I keep encountering a CUDA Out of Memory (OOM) error during training, both on my local GPU and on Google Colab.

Below is the error log:

WARNING:root:Found CUDA without GPU_NUM_DEVICES. Defaulting to PJRT_DEVICE=CUDA with GPU_NUM_DEVICES=1
wandb: Using wandb-core as the SDK backend. Please refer to https://wandb.me/wandb-core for more information.
wandb: Tracking run with wandb version 0.19.1
wandb: Run `wandb offline` to turn off syncing.
wandb: Syncing run train
Traceback (most recent call last):
  File "/content/drive/MyDrive/fyp/models/translayer.py", line 57, in forward
    dots = torch.matmul(q, k.transpose(-1, -2)) * self.scale  # root(d_k) (128, 4, 200, 200)
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 6.79 GiB. GPU 0 has a total capacity of 14.75 GiB of which 6.61 GiB is free. Process 28173 has 100.00 MiB memory in use. Process 272388 has 8.04 GiB memory in use. Of the allocated memory 7.85 GiB is allocated by PyTorch, and 70.97 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
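For context on where the memory goes: the line in the traceback materializes the full attention score matrix of shape (batch, heads, seq, seq), which is typically the peak allocation in a transformer layer. One common workaround (a sketch, not the BN-WVAD authors' code; `attention_explicit` and `attention_fused` are hypothetical names, and I'm assuming `self.scale` equals `1/sqrt(d_k)` as the `root(d_k)` comment suggests) is to replace the explicit matmul-softmax-matmul with `torch.nn.functional.scaled_dot_product_attention` (PyTorch ≥ 2.0), which can dispatch to a memory-efficient kernel that avoids keeping the full score matrix resident:

```python
import torch
import torch.nn.functional as F

def attention_explicit(q, k, v):
    """Pattern from the traceback: materializes a (B, H, N, N) score tensor."""
    scale = q.shape[-1] ** -0.5  # assumed equal to self.scale in translayer.py
    dots = torch.matmul(q, k.transpose(-1, -2)) * scale  # the tensor that OOMs
    attn = dots.softmax(dim=-1)
    return torch.matmul(attn, v)

def attention_fused(q, k, v):
    """Fused kernel; default scale is 1/sqrt(d_k), matching the explicit path."""
    return F.scaled_dot_product_attention(q, k, v)

# Tiny CPU check that both paths compute the same result.
q = torch.randn(2, 4, 16, 8)
k = torch.randn(2, 4, 16, 8)
v = torch.randn(2, 4, 16, 8)
out_a = attention_explicit(q, k, v)
out_b = attention_fused(q, k, v)
print(torch.allclose(out_a, out_b, atol=1e-5))
```

If the model applies a mask or dropout inside the attention, those have to be passed through to `scaled_dot_product_attention` as well; otherwise reducing the batch size or enabling `PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True` (as the error message itself suggests) are the simpler first steps.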
