gll89's personal blog http://blog.sciencenet.cn/u/gll89

Blog post

Pytorch training with multi GPUs

Viewed 1093 times | 2019-5-27 22:00 | Personal category: DeepLearning | System category: Research notes

When using DataParallel with a list of GPUs such as [dev0, dev1, ...], all inputs you pass to the module must reside on the first device in the list (dev0). The following code moves a tensor there:

    with torch.cuda.device(dev0):
        t = t.cuda()


From https://discuss.pytorch.org/t/how-to-solve-the-problem-of-runtimeerror-all-tensors-must-be-on-devices-0/15198/6
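A minimal sketch of the idea above, assuming a toy two-GPU setup (the names dev0/dev1 and the small Linear model are illustrative, not from the original post). The model is wrapped in DataParallel, and the input tensor is moved to device_ids[0] before the forward pass; on a machine without two GPUs the sketch falls back to plain CPU execution:

```python
import torch
import torch.nn as nn

# Hypothetical devices and model for illustration.
dev0, dev1 = 0, 1
model = nn.Linear(16, 4)
t = torch.randn(8, 16)

if torch.cuda.device_count() >= 2:
    # device_ids[0] is the "primary" device; replicas live on both GPUs.
    model = nn.DataParallel(model.cuda(dev0), device_ids=[dev0, dev1])
    # Inputs must be on device_ids[0]; DataParallel scatters them from there.
    with torch.cuda.device(dev0):
        t = t.cuda()  # equivalent to t.cuda(dev0)

out = model(t)  # outputs are gathered back onto dev0 (or stay on CPU)
print(out.shape)
```

Note that `with torch.cuda.device(dev0)` only changes the default device for the `t.cuda()` call inside the block; calling `t = t.cuda(dev0)` directly achieves the same result.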



https://blog.sciencenet.cn/blog-1969089-1181522.html

