When using DataParallel with a list of GPUs like [dev0, dev1, ...], all the inputs that you give to the module have to be on dev0.
We can use the following code to move a tensor onto dev0:

    with torch.cuda.device(dev0):
        t = t.cuda()
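As a minimal sketch of the whole pattern (the model, tensor shapes, and device ids here are hypothetical and assume a machine with at least two GPUs), the snippet below wraps a toy module in DataParallel and places the input on the first listed device before the forward pass:

    import torch
    import torch.nn as nn

    # Hypothetical two-GPU setup; adjust the ids to your machine.
    dev0, dev1 = 0, 1

    # DataParallel replicates the module across device_ids and
    # scatters each input batch along dimension 0. The module's
    # parameters must already live on device_ids[0].
    model = nn.Linear(16, 4).cuda(dev0)
    model = nn.DataParallel(model, device_ids=[dev0, dev1])

    # Inputs must be on dev0, the first device in device_ids.
    with torch.cuda.device(dev0):
        x = torch.randn(8, 16).cuda()  # lands on dev0

    out = model(x)     # outputs are gathered back onto dev0
    print(out.device)  # cuda:0

Using the torch.cuda.device context manager rather than an explicit .cuda(dev0) call keeps the placement consistent even when several tensors are created in the same block.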