Traceback (most recent call last):
File "cluster_resnet_9.py", line 361, in <module>
train_model.save_model(model_path)
File "cluster_resnet_9.py", line 211, in eval_model
if self.cuda_id > -1:
File "/home/linlin/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.py", line 224, in __call__
result = self.forward(*input, **kwargs)
File "/home/linlin/anaconda2/lib/python2.7/site-packages/torchvision-0.1.9-py2.7.egg/torchvision/models/resnet.py", line 139, in forward
File "/home/linlin/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.py", line 224, in __call__
result = self.forward(*input, **kwargs)
File "/home/linlin/anaconda2/lib/python2.7/site-packages/torch/nn/modules/conv.py", line 254, in forward
self.padding, self.dilation, self.groups)
File "/home/linlin/anaconda2/lib/python2.7/site-packages/torch/nn/functional.py", line 52, in conv2d
return f(input, weight, bias)
RuntimeError: tensors are on different GPUs
==============================
I checked my code; all the cuda_ids were the same, but I still got this error.
==============================
Reason: I evaluate model_ft after saving the model.
In PyTorch, to save the checkpoint I move model_ft to the CPU before calling torch.save. However, for evaluation, model_ft must be back on the GPU.
As a result, model_ft was left on the CPU while the input variables were on the GPU. This is why the error appears.
==============================
As a forum reply puts it: this is a CPU/GPU mismatch, and the error message can be misleading. If some inputs are on the GPU and others on the CPU, you get the same message.
From: https://discuss.pytorch.org/t/runtime-error-tensors-are-on-different-gpus-but-i-have-only-one-gpu/8316
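The failure mode and the fix can be sketched as below. This is a minimal illustration, not the original cluster_resnet_9.py: a small nn.Linear stands in for the real ResNet, and the checkpoint filename is made up. The key point is moving the model back to its device after saving.

```python
import torch
import torch.nn as nn

# Use the GPU when one is available; the same pattern works on CPU-only machines.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = nn.Linear(4, 2).to(device)

# Saving: moving the model to CPU keeps the checkpoint device-agnostic,
# but it silently leaves the *live* model on the CPU afterwards.
model.cpu()
torch.save(model.state_dict(), "checkpoint.pth")

# Fix: move the model back to its device before evaluating. Without this,
# GPU inputs meet CPU weights and conv2d raises
# "RuntimeError: tensors are on different GPUs" (or a CPU/GPU mismatch error).
model.to(device)

x = torch.randn(1, 4, device=device)
with torch.no_grad():
    out = model(x)
print(out.shape)
```

A more robust habit is to save with `torch.save(model.state_dict(), path)` without moving the model at all, and use `map_location` when loading, so evaluation code never sees a surprise device change.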