(一) Commonly used module methods in 1-D and 2-D
In 2-D learning, the ones mainly used are:
# 2-D convolution: 1 input channel -> 5 output channels, 7x7 kernel, stride 2, padding 1
self.conv1 = nn.Conv2d(in_channels=1, out_channels=5, kernel_size=7, stride=2, padding=1)
self.fc1 = nn.Linear(2432, 512)  # in_features must match the flattened conv output for your input size
F.max_pool2d(self.conv1(x), 2)  # convolve, then 2x2 max pooling
1-D
# 1-D convolution: same idea, but operating on sequence data of shape (N, C, L)
self.conv1 = nn.Conv1d(in_channels=1, out_channels=5, kernel_size=7, stride=2, padding=1)
self.fc1 = nn.Linear(2432, 512)  # in_features again depends on the flattened conv output
F.max_pool1d(self.conv1(x), 2)  # convolve, then max pooling with window 2
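To show how these pieces fit together, here is a minimal 2-D model sketch. The class name, layer sizes, and the assumed 1x28x28 input are illustrative assumptions, not taken from the original notes (the 2432 above corresponds to a different input size).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallNet(nn.Module):  # hypothetical example class
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels=1, out_channels=5, kernel_size=7, stride=2, padding=1)
        # 180 = 5 channels * 6 * 6 spatial positions for the assumed 1x28x28 input
        self.fc1 = nn.Linear(180, 512)
        self.fc2 = nn.Linear(512, 10)

    def forward(self, x):
        x = F.max_pool2d(self.conv1(x), 2)  # convolution followed by 2x2 max pooling
        x = x.view(x.size(0), -1)           # flatten to (batch, features)
        x = F.relu(self.fc1(x))
        return self.fc2(x)

net = SmallNet()
print(net(torch.randn(4, 1, 28, 28)).shape)  # torch.Size([4, 10])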
(二) There are many kinds of activation functions, used to shape and normalize outputs, etc. (a small usage example follows the list)
torch.nn.functional.threshold(input, threshold, value, inplace=False)
torch.nn.functional.relu(input, inplace=False)
torch.nn.functional.hardtanh(input, min_val=-1.0, max_val=1.0, inplace=False)
torch.nn.functional.relu6(input, inplace=False)
torch.nn.functional.elu(input, alpha=1.0, inplace=False)
torch.nn.functional.leaky_relu(input, negative_slope=0.01, inplace=False)
torch.nn.functional.prelu(input, weight)
torch.nn.functional.rrelu(input, lower=0.125, upper=0.3333333333333333, training=False, inplace=False)
torch.nn.functional.logsigmoid(input)
torch.nn.functional.hardshrink(input, lambd=0.5)
torch.nn.functional.tanhshrink(input)
torch.nn.functional.softsign(input)
torch.nn.functional.softplus(input, beta=1, threshold=20)
torch.nn.functional.softmin(input)
torch.nn.functional.softmax(input)
torch.nn.functional.softshrink(input, lambd=0.5)
torch.nn.functional.log_softmax(input)
torch.nn.functional.tanh(input)
torch.nn.functional.sigmoid(input)
Note: ReLU and the like actually come in many variants, with various improvements;
https://www.jianshu.com/p/68bd249327ce lists several of them.
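As a small illustration, these functionals are applied element-wise to a tensor; the values below are arbitrary placeholders:
import torch
import torch.nn.functional as F

x = torch.tensor([-2.0, -0.5, 0.0, 0.5, 2.0])  # arbitrary example values

print(F.relu(x))                             # negatives clipped to 0
print(F.leaky_relu(x, negative_slope=0.01))  # small slope kept for negative inputs
print(F.elu(x, alpha=1.0))                   # smooth exponential branch for negatives
print(torch.sigmoid(x))                      # squashes values into (0, 1)
print(F.softmax(x, dim=0))                   # normalizes into a probability distribution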
(三) There are also a dozen or so optimizers, all built on the base class Optimizer; a minimal usage sketch follows the list below
Stochastic gradient descent: SGD
torch.optim.SGD(params, lr=<required>, momentum=0, dampening=0, weight_decay=0, nesterov=False)
Averaged stochastic gradient descent: ASGD
torch.optim.ASGD(params, lr=0.01, lambd=0.0001, alpha=0.75, t0=1000000.0, weight_decay=0)
AdaGrad
torch.optim.Adagrad(params, lr=0.01, lr_decay=0, weight_decay=0)
Adaptive learning-rate adjustment: Adadelta
torch.optim.Adadelta(params, lr=1.0, rho=0.9, eps=1e-06, weight_decay=0)
RMSprop
torch.optim.RMSprop(params, lr=0.01, alpha=0.99, eps=1e-08, weight_decay=0, momentum=0, centered=False)
Adaptive moment estimation: Adam
torch.optim.Adam(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0)
Adamax (the infinity-norm variant of Adam)
torch.optim.Adamax(params, lr=0.002, betas=(0.9, 0.999), eps=1e-08, weight_decay=0)
SparseAdam
torch.optim.SparseAdam(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08)
L-BFGS
torch.optim.LBFGS(params, lr=1, max_iter=20, max_eval=None, tolerance_grad=1e-05, tolerance_change=1e-09, history_size=100, line_search_fn=None)
Resilient backpropagation: Rprop
torch.optim.Rprop(params, lr=0.01, etas=(0.5, 1.2), step_sizes=(1e-06, 50))
This blog post covers them in detail:
https://blog.csdn.net/shanglianlm/article/details/85019633
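For context, here is a minimal training-step sketch showing how any of these optimizers plugs in; the model, data, and loss below are placeholder assumptions:
import torch
import torch.nn as nn

model = nn.Linear(10, 2)                               # placeholder model, purely for illustration
x, y = torch.randn(8, 10), torch.randint(0, 2, (8,))  # placeholder batch

optimizer = torch.optim.Adam(model.parameters(), lr=0.001, betas=(0.9, 0.999))
criterion = nn.CrossEntropyLoss()

for epoch in range(5):
    optimizer.zero_grad()              # clear accumulated gradients
    loss = criterion(model(x), y)
    loss.backward()                    # backpropagate
    optimizer.step()                   # update parameters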
(四) There are also quite a few pooling methods; a short example follows the list below
# https://blog.csdn.net/HowardWood/article/details/79508805
torch.nn.functional.avg_pool1d(input, kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True)
torch.nn.functional.avg_pool2d(input, kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True)
torch.nn.functional.max_pool1d(input, kernel_size, stride=None, padding=0, dilation=1, ceil_mode=False, return_indices=False)
torch.nn.functional.max_pool2d(input, kernel_size, stride=None, padding=0, dilation=1, ceil_mode=False, return_indices=False)
torch.nn.functional.max_unpool1d(input, indices, kernel_size, stride=None, padding=0, output_size=None)
torch.nn.functional.max_unpool2d(input, indices, kernel_size, stride=None, padding=0, output_size=None)
torch.nn.functional.lp_pool2d(input, norm_type, kernel_size, stride=None, ceil_mode=False)
torch.nn.functional.adaptive_max_pool1d(input, output_size, return_indices=False)
torch.nn.functional.adaptive_max_pool2d(input, output_size, return_indices=False)
torch.nn.functional.adaptive_avg_pool1d(input, output_size)
torch.nn.functional.adaptive_avg_pool2d(input, output_size)
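A brief illustration of fixed-window versus adaptive pooling; the tensor shape below is an arbitrary assumption:
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 32, 32)  # arbitrary (batch, channels, H, W)

print(F.max_pool2d(x, kernel_size=2).shape)            # torch.Size([1, 3, 16, 16])
print(F.avg_pool2d(x, kernel_size=4, stride=4).shape)  # torch.Size([1, 3, 8, 8])
print(F.adaptive_avg_pool2d(x, output_size=1).shape)   # torch.Size([1, 3, 1, 1]), regardless of input H and W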