(1) TensorDataset
TensorDataset packs several tensors together, much like Python's zip function. The class indexes every tensor along its first dimension, so all tensors passed in must have the same size in that dimension.
import torch
from torch.utils.data import TensorDataset, DataLoader

a = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9], [1, 2, 3], [4, 5, 6], [7, 8, 9], [1, 2, 3], [4, 5, 6], [7, 8, 9], [1, 2, 3], [4, 5, 6], [7, 8, 9]])
b = torch.tensor([44, 55, 66, 44, 55, 66, 44, 55, 66, 44, 55, 66])
train_ids = TensorDataset(a, b)  # works like zip
# Slice output
print(train_ids[0:1])
print('=' * 60)
# Iterate over individual samples
for x_train, y_label in train_ids:
    print(x_train, y_label)
# Wrap the dataset with DataLoader
print('=' * 60)
train_loader = DataLoader(dataset=train_ids, batch_size=4, shuffle=True)  # shuffle=True randomizes sample order
# enumerate returns two values: an index and the data (features plus labels);
# the second argument 1 starts the numbering at 1
for i, data in enumerate(train_loader, 1):
    x_data, label = data
    print(' batch:{0} x_data:{1} label: {2}'.format(i, x_data, label))
Reference: https://blog.csdn.net/qq_40211493/article/details/107529148
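To make the first-dimension constraint above concrete, here is a small sketch (tensor shapes chosen for illustration): TensorDataset rejects tensors whose first dimensions disagree, and its length equals that shared first dimension.

```python
import torch
from torch.utils.data import TensorDataset

# Every tensor must share the same first dimension;
# a mismatch is rejected at construction time.
a = torch.arange(12).reshape(4, 3)  # first dimension: 4
bad_labels = torch.tensor([0, 1])   # first dimension: 2

try:
    TensorDataset(a, bad_labels)
except AssertionError:
    print("size mismatch rejected")

# Matching first dimensions work, and len() equals that dimension.
labels = torch.tensor([0, 1, 0, 1])
ds = TensorDataset(a, labels)
print(len(ds))  # 4
```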
(2) DataLoader
DataLoader wraps the dataset you are using and yields one batch of data at a time.
import torch
import torch.utils.data as Data

BATCH_SIZE = 5
x = torch.linspace(1, 10, 10)
y = torch.linspace(10, 1, 10)
# Pack the data into a dataset
torch_dataset = Data.TensorDataset(x, y)
loader = Data.DataLoader(
    # draw BATCH_SIZE samples from the dataset each time
    dataset=torch_dataset,
    batch_size=BATCH_SIZE,
    shuffle=True,
    num_workers=0,
)

def show_batch():
    for epoch in range(10):
        for step, (batch_x, batch_y) in enumerate(loader):
            # training would go here
            print("step:{}, batch_x:{}, batch_y:{}".format(step, batch_x, batch_y))

if __name__ == '__main__':
    show_batch()
Reference: https://www.cnblogs.com/demo-deng/p/10623334.html
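One detail worth knowing about the batch-at-a-time behavior: when the dataset size is not divisible by batch_size, the final batch is simply smaller, unless you pass drop_last=True to discard it. A minimal sketch (batch size 4 over the same 10-sample dataset, chosen for illustration):

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

x = torch.linspace(1, 10, 10)
y = torch.linspace(10, 1, 10)
dataset = TensorDataset(x, y)

# 10 samples with batch_size=4: the last batch holds the 2 leftovers...
loader = DataLoader(dataset, batch_size=4, shuffle=False)
print([len(batch_x) for batch_x, batch_y in loader])  # [4, 4, 2]

# ...unless drop_last=True discards the incomplete final batch.
loader = DataLoader(dataset, batch_size=4, shuffle=False, drop_last=True)
print([len(batch_x) for batch_x, batch_y in loader])  # [4, 4]
```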