Marking down here the Conv and BatchNorm operations commonly used in image processing, with reference links!!
(I) Conv1D and Conv2D implementation
(1) A detailed explanation of nn.Conv1d in PyTorch (recommended to read this first)
(2) Then see: nn.Conv1d and nn.Conv2d in PyTorch (PyTorch library)
Neural networks - Conv1D and Conv2D implementation (uses the Keras library)
Implementation code and the input data format each convolution requires:
import torch
import torch.nn as nn
import torch.nn.functional as F


class myCNN(torch.nn.Module):
    def __init__(self):
        super(myCNN, self).__init__()
        self.conv1 = nn.Conv2d(1, 6, 5)   # 1 input channel, 6 output channels, 5x5 kernel
        self.conv2 = nn.Conv2d(6, 16, 5)  # 6 input channels, 16 output channels, 5x5 kernel
        self.fc1 = nn.Linear(5 * 5 * 16, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        # input x -> conv1 -> ReLU -> max pooling with a 2x2 window
        x = self.conv1(x)
        x = F.relu(x)
        x = F.max_pool2d(x, 2)
        print(x.size())
        # input x -> conv2 -> ReLU -> max pooling with a 2x2 window
        x = self.conv2(x)
        x = F.relu(x)
        x = F.max_pool2d(x, 2)
        print(x.size())
        # view flattens the tensor to (batch, features) for the fully connected
        # layers; the total number of features is unchanged
        x = x.view(x.size()[0], -1)
        print(x.size())
        x = F.relu(self.fc1(x))
        print(x.size())
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x


if __name__ == "__main__":
    cnnmy = myCNN()
    input = torch.randn((1, 1, 32, 32))
    out = cnnmy(input)
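With the 1x1x32x32 input above, the printed sizes follow from the usual convolution formula output = (input - kernel)/stride + 1: conv1 (5x5) gives 28x28, the first 2x2 pooling gives 14x14, conv2 (5x5) gives 10x10, and the second pooling gives 5x5. Flattening therefore yields 16 x 5 x 5 = 400 features, which is why fc1 is declared as nn.Linear(5 * 5 * 16, 120).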
Summary:
Note in particular: when PyTorch processes images (2D convolution), the input must be four-dimensional [batch, channel, height, width]; for vectors (1D convolution), the required input is [batch, channel, sequence length]!!! (See the sketch below.)
Figure source: see the post "PyTorch的Tensor(张量)".
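A minimal sketch contrasting the two input formats (the channel counts, kernel sizes, and sequence length are illustrative assumptions, not taken from the referenced posts):

import torch
import torch.nn as nn

# Conv1d expects input of shape (batch, channel, sequence length)
conv1d = nn.Conv1d(in_channels=8, out_channels=16, kernel_size=3)
x = torch.randn(4, 8, 100)       # 4 sequences, 8 channels, length 100
print(conv1d(x).size())          # torch.Size([4, 16, 98]): 100 - 3 + 1 = 98

# Conv2d expects input of shape (batch, channel, height, width)
conv2d = nn.Conv2d(in_channels=1, out_channels=6, kernel_size=5)
z = torch.randn(4, 1, 32, 32)    # 4 single-channel 32x32 images
print(conv2d(z).size())          # torch.Size([4, 6, 28, 28])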
(II) BatchNorm1d and BatchNorm2d
BatchNorm1d, BatchNorm2d, and BatchNorm3d in PyTorch
The batch normalization functions BatchNorm1d and BatchNorm2d in PyTorch
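A minimal usage sketch of the two layers (the feature/channel counts and tensor shapes below are illustrative assumptions):

import torch
import torch.nn as nn

# BatchNorm1d normalizes each of C features over the batch;
# it accepts input of shape (batch, C) or (batch, C, length)
bn1d = nn.BatchNorm1d(num_features=16)
print(bn1d(torch.randn(4, 16)).size())         # torch.Size([4, 16])
print(bn1d(torch.randn(4, 16, 100)).size())    # torch.Size([4, 16, 100])

# BatchNorm2d normalizes each of C channels over (batch, height, width);
# it expects input of shape (batch, C, height, width)
bn2d = nn.BatchNorm2d(num_features=6)
print(bn2d(torch.randn(4, 6, 28, 28)).size())  # torch.Size([4, 6, 28, 28])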
Sharing little by little, benefiting us all! Add oil!