
`ptxas warning : Registers are spilled to local memory` on Tensorflow and PyTorch


In one of our research projects, we are using TensorFlow and PyTorch with other major models. Whenever we use the data server at our university, we can use the full GPU during model training. That computer has an Nvidia Titan Xp 12GB GPU.

On the other hand, my home computer has an Nvidia GeForce RTX 3060 12GB GDDR6 GPU. The problem is that, however I try, the models do not use the full GPU during training: they use almost 8.5 GB of GPU memory even though the rest is free and no other application is using it. As a result, each epoch takes longer. I am also receiving a warning message in each epoch, even though the dataset and notebooks are the same.

The code block containing the epoch-related code is given below:

num_epoch = 100

history = model.fit(
    train_dataset,
    epochs=num_epoch,
    steps_per_epoch=len(train_paths) // batch_size,
    validation_data=test_dataset,
    validation_steps=len(test_paths) // batch_size
)

I am also receiving the following warning output in each epoch, though it might be irrelevant:

WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
I0000 00:00:1715325398.195254   36322 service.cc:145] XLA service 0x7f1648003120 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
I0000 00:00:1715325398.195290   36322 service.cc:153]   StreamExecutor device (0): NVIDIA GeForce RTX 3060, Compute Capability 8.6
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
I0000 00:00:1715325400.187164   36503 asm_compiler.cc:369] ptxas warning : Registers are spilled to local memory in function 'triton_gemm_dot_4361', 112 bytes spill stores, 112 bytes spill loads
I0000 00:00:1715325456.896302   36322 device_compiler.h:188] Compiled cluster using XLA!  This line is logged at most once for the lifetime of the process.

I was wondering whether there is any way to explicitly force the notebook to use a fixed amount of GPU memory, which might reduce the epoch time on my home computer. Please let me know what you think about it.
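For reference, here is a minimal sketch of what I have in mind, assuming TensorFlow 2.x; tf.config.set_logical_device_configuration can cap how much GPU memory TensorFlow allocates (the 10 GB limit below is just an example value, not something from my notebook):

import tensorflow as tf

gpus = tf.config.list_physical_devices('GPU')
if gpus:
    # Cap TensorFlow's allocation on the first GPU at roughly 10 GB.
    # memory_limit is in megabytes; 10240 here is only an example.
    tf.config.set_logical_device_configuration(
        gpus[0],
        [tf.config.LogicalDeviceConfiguration(memory_limit=10240)]
    )

As far as I understand, this must run before any operation initializes the GPU, and I am not sure whether pinning the allocation this way would actually shorten the epochs, which is part of my question.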

