洁云分享 http://blog.sciencenet.cn/u/zhguoqin


A Matlab Modeling Experiment on Fine-Scale Forecasting of the Masson Pine Caterpillar (Dendrolimus punctatus) Based on Neural Networks

Viewed 3118 times | 2018-2-13 21:49 | Personal category: Biological disaster science | System category: Paper exchange | Tags: pine caterpillar, neural network, modeling

Click to download the original article:

基于神经网络的马尾松毛虫精细化预报Matlab建模试验.doc


Zhang Guoqin (张国庆)

(Qianshan County Forestry Bureau, Anhui Province)

1. Data sources

The occurrence-quantity and occurrence-period data for the Masson pine caterpillar come from Qianshan County monitoring records; the meteorological data come from the National Climate Center.

2. Data preprocessing

To keep the development of the Masson pine caterpillar complete in time, the overwintering-generation data were merged with the previous year's second-generation data during preprocessing. This preserves the temporal integrity of one full caterpillar generation and makes modeling and prediction more convenient.
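The merging step can be sketched as follows. This is an illustrative Python sketch only; the record layout (`year`, `gen`, `area`) is a hypothetical simplification of the actual monitoring data, not the paper's format.

```python
# Merge overwintering-generation records into the previous year's second
# generation, so that each Masson pine caterpillar generation is complete
# in time. The record layout here is a hypothetical example.

def merge_overwintering(records):
    """records: list of dicts with 'year', 'gen' ('1', '2', 'overwinter'), 'area'."""
    merged = {}
    for r in records:
        if r["gen"] == "overwinter":
            # Attribute the overwintering brood to the prior year's 2nd generation
            key = (r["year"] - 1, "2")
        else:
            key = (r["year"], r["gen"])
        merged[key] = merged.get(key, 0.0) + r["area"]
    return merged

recs = [
    {"year": 2000, "gen": "2", "area": 120.0},
    {"year": 2001, "gen": "overwinter", "area": 30.0},
    {"year": 2001, "gen": "1", "area": 80.0},
]
print(merge_overwintering(recs))  # the 2001 overwintering brood joins (2000, '2')
```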

(1) Meteorological data processing

Drawing on academic references such as Integrated Management of Pine Caterpillars and Pine Caterpillars in China, together with recent papers on monitoring and forecasting the Masson pine caterpillar, 16 meteorological factors with some correlation to occurrence quantity and occurrence period were initially selected: egg-stage extreme minimum temperature, egg-stage mean temperature, egg-stage accumulated temperature (degree-days), and egg-stage rainfall; instar 1–2 extreme minimum temperature, instar 1–2 mean temperature, instar 1–2 accumulated temperature (degree-days), and instar 1–2 rainfall; larval-stage extreme minimum temperature, larval-stage mean temperature, larval-stage accumulated temperature (degree-days), and larval-stage rainfall; whole-generation extreme minimum temperature, whole-generation mean temperature, whole-generation accumulated temperature (degree-days), and whole-generation rainfall. The raw meteorological data from the National Climate Center were converted, by year and by generation, into series for these 16 variables.
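As one illustration of how such a series can be derived, the accumulated temperature (degree-days) over a stage can be computed from daily mean temperatures above a developmental threshold. This is a generic sketch; the base temperature of 10 °C below is a placeholder assumption, not a value taken from the paper.

```python
def degree_days(daily_means, base=10.0):
    """Accumulated temperature (degree-days) above a base temperature.

    daily_means: daily mean temperatures (deg C) over one stage, e.g. the egg stage.
    base: developmental threshold; 10.0 is a placeholder, not from the paper.
    """
    return sum(max(t - base, 0.0) for t in daily_means)

# Egg-stage example over three days of mean temperatures
print(degree_days([12.0, 9.0, 15.0]))  # 2.0 + 0.0 + 5.0 = 7.0
```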

(2) Occurrence-quantity data processing

To allow occurrence intensity to be analyzed during modeling, the raw Qianshan County monitoring data for 1983–2014 were classified into three intensity grades ("light", "moderate", "severe") and summed by generation for each year.
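The per-grade tally can be sketched as follows; the tuple layout `(year, generation, severity, area)` is a hypothetical example of the monitoring records, not the actual file format.

```python
from collections import defaultdict

def tally_by_severity(records):
    """Sum occurrence area per (year, generation, severity) class.

    records: (year, generation, severity, area) tuples; severity is one of
    'light', 'moderate', 'severe'. The layout is a hypothetical example.
    """
    totals = defaultdict(float)
    for year, gen, severity, area in records:
        totals[(year, gen, severity)] += area
    return dict(totals)

recs = [(1990, 1, "light", 50.0), (1990, 1, "light", 25.0),
        (1990, 1, "severe", 10.0)]
print(tally_by_severity(recs))
```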

(3) Occurrence-period data processing

The raw occurrence-period monitoring data for Qianshan County, 1983–2014, were first summarized by generation for each year; the dates were then converted to calendar days (day-of-year numbers) so that they became numeric and convenient for modeling and analysis.
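The date-to-calendar-day conversion is a standard day-of-year computation, sketched here in Python for illustration:

```python
from datetime import date

def to_calendar_day(d):
    """Convert a calendar date to its day-of-year number (1..366)."""
    return d.timetuple().tm_yday

print(to_calendar_day(date(2014, 1, 1)))    # 1
print(to_calendar_day(date(2014, 12, 31)))  # 365 (2014 is not a leap year)
```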

3. Factor variable selection

Based on correlation analysis and comparative modeling trials, the factor variables were chosen as follows. For first-generation occurrence quantity: instar 1–2 extreme minimum temperature, egg-stage extreme minimum temperature, previous-generation control efficacy, and previous-generation control area. For second-generation occurrence quantity: instar 1–2 extreme minimum temperature, egg-stage extreme minimum temperature, previous-generation control efficacy, previous-generation control area, instar 1–2 rainfall, and egg-stage rainfall. For the first-generation larval peak period: instar 1–2 mean temperature, instar 1–2 accumulated temperature (degree-days), instar 1–2 extreme minimum temperature, and egg-stage extreme minimum temperature. For the second-generation larval peak period: date of first adult sighting, egg-stage mean temperature, egg-stage accumulated temperature (degree-days), and instar 1–2 extreme minimum temperature.

The first-generation occurrence-quantity target variable was named s1y and its factor (predictor) variables s1x; the second-generation occurrence-quantity target was named s2y and its predictors s2x; the first-generation larval-peak target was named t1y and its predictors t1x; and the second-generation larval-peak target was named t2y and its predictors t2x.
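In the scripts that follow, the opening statements (x = s1x'; t = s1y') transpose these arrays, because MATLAB's train expects one column per sample rather than one row per sample. A minimal Python sketch of that arrangement (the 3×2 example data are hypothetical):

```python
def transpose(m):
    """Rows-of-samples -> columns-of-samples, mirroring MATLAB's s1x'."""
    return [list(col) for col in zip(*m)]

# 3 samples x 2 factors (e.g. instar 1-2 minimum temp, egg-stage minimum temp)
s1x = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
print(transpose(s1x))  # 2 x 3: one column per sample
```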

4. First-generation occurrence-quantity modeling experiment

4.1 Program code

The program code (Simple Script) is:

% Solve an Input-Output Fitting problem with a Neural Network

% Script generated by Neural Fitting app

% Created Wed Oct 28 19:28:48 CST 2015

%

% This script assumes these variables are defined:

%

%   s1x - input data.

%   s1y - target data.

x = s1x';

t = s1y';

% Choose a Training Function

% For a list of all training functions type: help nntrain

% 'trainlm' is usually fastest.

% 'trainbr' takes longer but may be better for challenging problems.

% 'trainscg' uses less memory. NFTOOL falls back to this in low memory situations.

trainFcn = 'trainlm';  % Levenberg-Marquardt

% Create a Fitting Network

hiddenLayerSize = 10;

net = fitnet(hiddenLayerSize,trainFcn);

% Setup Division of Data for Training, Validation, Testing

net.divideParam.trainRatio = 90/100;

net.divideParam.valRatio = 5/100;

net.divideParam.testRatio = 5/100;

% Train the Network

[net,tr] = train(net,x,t);

% Test the Network

y = net(x);

e = gsubtract(t,y);

performance = perform(net,t,y)

% View the Network

view(net)

% Plots

% Uncomment these lines to enable various plots.

%figure, plotperform(tr)

%figure, plottrainstate(tr)

%figure, plotfit(net,x,t)

%figure, plotregression(t,y)

%figure, ploterrhist(e)

The program code (Advanced Script) is:

% Solve an Input-Output Fitting problem with a Neural Network

% Script generated by Neural Fitting app

% Created Wed Oct 28 19:29:03 CST 2015

%

% This script assumes these variables are defined:

%

%   s1x - input data.

%   s1y - target data.

x = s1x';

t = s1y';

% Choose a Training Function

% For a list of all training functions type: help nntrain

% 'trainlm' is usually fastest.

% 'trainbr' takes longer but may be better for challenging problems.

% 'trainscg' uses less memory. NFTOOL falls back to this in low memory situations.

trainFcn = 'trainlm';  % Levenberg-Marquardt

% Create a Fitting Network

hiddenLayerSize = 10;

net = fitnet(hiddenLayerSize,trainFcn);

% Choose Input and Output Pre/Post-Processing Functions

% For a list of all processing functions type: help nnprocess

net.input.processFcns = {'removeconstantrows','mapminmax'};

net.output.processFcns = {'removeconstantrows','mapminmax'};

% Setup Division of Data for Training, Validation, Testing

% For a list of all data division functions type: help nndivide

net.divideFcn = 'dividerand';  % Divide data randomly

net.divideMode = 'sample';  % Divide up every sample

net.divideParam.trainRatio = 90/100;

net.divideParam.valRatio = 5/100;

net.divideParam.testRatio = 5/100;

% Choose a Performance Function

% For a list of all performance functions type: help nnperformance

net.performFcn = 'mse';  % Mean squared error

% Choose Plot Functions

% For a list of all plot functions type: help nnplot

net.plotFcns = {'plotperform','plottrainstate','ploterrhist', ...

'plotregression', 'plotfit'};

% Train the Network

[net,tr] = train(net,x,t);

% Test the Network

y = net(x);

e = gsubtract(t,y);

performance = perform(net,t,y)

% Recalculate Training, Validation and Test Performance

trainTargets = t .* tr.trainMask{1};

valTargets = t  .* tr.valMask{1};

testTargets = t  .* tr.testMask{1};

trainPerformance = perform(net,trainTargets,y)

valPerformance = perform(net,valTargets,y)

testPerformance = perform(net,testTargets,y)

% View the Network

view(net)

% Plots

% Uncomment these lines to enable various plots.

%figure, plotperform(tr)

%figure, plottrainstate(tr)

%figure, plotfit(net,x,t)

%figure, plotregression(t,y)

%figure, ploterrhist(e)

% Deployment

% Change the (false) values to (true) to enable the following code blocks.

if (false)

% Generate MATLAB function for neural network for application deployment

% in MATLAB scripts or with MATLAB Compiler and Builder tools, or simply

% to examine the calculations your trained neural network performs.

genFunction(net,'myNeuralNetworkFunction');

y = myNeuralNetworkFunction(x);

end

if (false)

% Generate a matrix-only MATLAB function for neural network code

% generation with MATLAB Coder tools.

genFunction(net,'myNeuralNetworkFunction','MatrixOnly','yes');

y = myNeuralNetworkFunction(x);

end

if (false)

% Generate a Simulink diagram for simulation or deployment with.

% Simulink Coder tools.

gensim(net);

end

4.2 Network training process

[Figure: neural network training window]

4.3 Training results

The training results were as follows:

The R values for the training, validation, and test samples were 0.875337, 1, and 1, respectively.

[Figure: error histogram]

[Figure: regression plots for the training, validation, and test samples and for all data]

The validation-sample and test-sample R values were both 1.
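The R values reported throughout are correlation coefficients between network outputs and targets (what MATLAB's plotregression displays). A minimal sketch of that computation, with hypothetical example data:

```python
import math

def pearson_r(targets, outputs):
    """Correlation coefficient R between targets and network outputs."""
    n = len(targets)
    mt = sum(targets) / n
    mo = sum(outputs) / n
    cov = sum((t - mt) * (o - mo) for t, o in zip(targets, outputs))
    st = math.sqrt(sum((t - mt) ** 2 for t in targets))
    so = math.sqrt(sum((o - mo) ** 2 for o in outputs))
    return cov / (st * so)

print(pearson_r([1.0, 2.0, 3.0], [1.1, 1.9, 3.2]))  # close to 1
```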

5. Second-generation occurrence-quantity modeling experiment

5.1 Program code

The program code (Simple Script) is:

% Solve an Input-Output Fitting problem with a Neural Network

% Script generated by Neural Fitting app

% Created Wed Oct 28 20:04:18 CST 2015

%

% This script assumes these variables are defined:

%

%   s2x - input data.

%   s2y - target data.

x = s2x';

t = s2y';

% Choose a Training Function

% For a list of all training functions type: help nntrain

% 'trainlm' is usually fastest.

% 'trainbr' takes longer but may be better for challenging problems.

% 'trainscg' uses less memory. NFTOOL falls back to this in low memory situations.

trainFcn = 'trainlm';  % Levenberg-Marquardt

% Create a Fitting Network

hiddenLayerSize = 10;

net = fitnet(hiddenLayerSize,trainFcn);

% Setup Division of Data for Training, Validation, Testing

net.divideParam.trainRatio = 90/100;

net.divideParam.valRatio = 5/100;

net.divideParam.testRatio = 5/100;

% Train the Network

[net,tr] = train(net,x,t);

% Test the Network

y = net(x);

e = gsubtract(t,y);

performance = perform(net,t,y)

% View the Network

view(net)

% Plots

% Uncomment these lines to enable various plots.

%figure, plotperform(tr)

%figure, plottrainstate(tr)

%figure, plotfit(net,x,t)

%figure, plotregression(t,y)

%figure, ploterrhist(e)

The program code (Advanced Script) is:

% Solve an Input-Output Fitting problem with a Neural Network

% Script generated by Neural Fitting app

% Created Wed Oct 28 20:04:31 CST 2015

%

% This script assumes these variables are defined:

%

%   s2x - input data.

%   s2y - target data.

x = s2x';

t = s2y';

% Choose a Training Function

% For a list of all training functions type: help nntrain

% 'trainlm' is usually fastest.

% 'trainbr' takes longer but may be better for challenging problems.

% 'trainscg' uses less memory. NFTOOL falls back to this in low memory situations.

trainFcn = 'trainlm';  % Levenberg-Marquardt

% Create a Fitting Network

hiddenLayerSize = 10;

net = fitnet(hiddenLayerSize,trainFcn);

% Choose Input and Output Pre/Post-Processing Functions

% For a list of all processing functions type: help nnprocess

net.input.processFcns = {'removeconstantrows','mapminmax'};

net.output.processFcns = {'removeconstantrows','mapminmax'};

% Setup Division of Data for Training, Validation, Testing

% For a list of all data division functions type: help nndivide

net.divideFcn = 'dividerand';  % Divide data randomly

net.divideMode = 'sample';  % Divide up every sample

net.divideParam.trainRatio = 90/100;

net.divideParam.valRatio = 5/100;

net.divideParam.testRatio = 5/100;

% Choose a Performance Function

% For a list of all performance functions type: help nnperformance

net.performFcn = 'mse';  % Mean squared error

% Choose Plot Functions

% For a list of all plot functions type: help nnplot

net.plotFcns = {'plotperform','plottrainstate','ploterrhist', ...

'plotregression', 'plotfit'};

% Train the Network

[net,tr] = train(net,x,t);

% Test the Network

y = net(x);

e = gsubtract(t,y);

performance = perform(net,t,y)

% Recalculate Training, Validation and Test Performance

trainTargets = t .* tr.trainMask{1};

valTargets = t  .* tr.valMask{1};

testTargets = t  .* tr.testMask{1};

trainPerformance = perform(net,trainTargets,y)

valPerformance = perform(net,valTargets,y)

testPerformance = perform(net,testTargets,y)

% View the Network

view(net)

% Plots

% Uncomment these lines to enable various plots.

%figure, plotperform(tr)

%figure, plottrainstate(tr)

%figure, plotfit(net,x,t)

%figure, plotregression(t,y)

%figure, ploterrhist(e)

% Deployment

% Change the (false) values to (true) to enable the following code blocks.

if (false)

% Generate MATLAB function for neural network for application deployment

% in MATLAB scripts or with MATLAB Compiler and Builder tools, or simply

% to examine the calculations your trained neural network performs.

genFunction(net,'myNeuralNetworkFunction');

y = myNeuralNetworkFunction(x);

end

if (false)

% Generate a matrix-only MATLAB function for neural network code

% generation with MATLAB Coder tools.

genFunction(net,'myNeuralNetworkFunction','MatrixOnly','yes');

y = myNeuralNetworkFunction(x);

end

if (false)

% Generate a Simulink diagram for simulation or deployment with.

% Simulink Coder tools.

gensim(net);

end

5.2 Network training process

[Figure: neural network training window]

5.3 Training results

The training results were as follows:

The R values for the training, validation, and test samples were 0.942388, 0.999999, and 1, respectively.

[Figure: error histogram]

[Figure: regression plots for the training, validation, and test samples and for all data]

The validation-sample and test-sample R values were both 1; the training-sample R value was 0.94239, and the R value over all data was 0.89479.

6. First-generation larval peak period modeling experiment

6.1 Program code

The program code (Simple Script) is:

% Solve an Input-Output Fitting problem with a Neural Network

% Script generated by Neural Fitting app

% Created Wed Oct 28 20:16:32 CST 2015

%

% This script assumes these variables are defined:

%

%   t1x - input data.

%   t1y - target data.

x = t1x';

t = t1y';

% Choose a Training Function

% For a list of all training functions type: help nntrain

% 'trainlm' is usually fastest.

% 'trainbr' takes longer but may be better for challenging problems.

% 'trainscg' uses less memory. NFTOOL falls back to this in low memory situations.

trainFcn = 'trainlm';  % Levenberg-Marquardt

% Create a Fitting Network

hiddenLayerSize = 10;

net = fitnet(hiddenLayerSize,trainFcn);

% Setup Division of Data for Training, Validation, Testing

net.divideParam.trainRatio = 90/100;

net.divideParam.valRatio = 5/100;

net.divideParam.testRatio = 5/100;

% Train the Network

[net,tr] = train(net,x,t);

% Test the Network

y = net(x);

e = gsubtract(t,y);

performance = perform(net,t,y)

% View the Network

view(net)

% Plots

% Uncomment these lines to enable various plots.

%figure, plotperform(tr)

%figure, plottrainstate(tr)

%figure, plotfit(net,x,t)

%figure, plotregression(t,y)

%figure, ploterrhist(e)

The program code (Advanced Script) is:

% Solve an Input-Output Fitting problem with a Neural Network

% Script generated by Neural Fitting app

% Created Wed Oct 28 20:17:08 CST 2015

%

% This script assumes these variables are defined:

%

%   t1x - input data.

%   t1y - target data.

x = t1x';

t = t1y';

% Choose a Training Function

% For a list of all training functions type: help nntrain

% 'trainlm' is usually fastest.

% 'trainbr' takes longer but may be better for challenging problems.

% 'trainscg' uses less memory. NFTOOL falls back to this in low memory situations.

trainFcn = 'trainlm';  % Levenberg-Marquardt

% Create a Fitting Network

hiddenLayerSize = 10;

net = fitnet(hiddenLayerSize,trainFcn);

% Choose Input and Output Pre/Post-Processing Functions

% For a list of all processing functions type: help nnprocess

net.input.processFcns = {'removeconstantrows','mapminmax'};

net.output.processFcns = {'removeconstantrows','mapminmax'};

% Setup Division of Data for Training, Validation, Testing

% For a list of all data division functions type: help nndivide

net.divideFcn = 'dividerand';  % Divide data randomly

net.divideMode = 'sample';  % Divide up every sample

net.divideParam.trainRatio = 90/100;

net.divideParam.valRatio = 5/100;

net.divideParam.testRatio = 5/100;

% Choose a Performance Function

% For a list of all performance functions type: help nnperformance

net.performFcn = 'mse';  % Mean squared error

% Choose Plot Functions

% For a list of all plot functions type: help nnplot

net.plotFcns = {'plotperform','plottrainstate','ploterrhist', ...

'plotregression', 'plotfit'};

% Train the Network

[net,tr] = train(net,x,t);

% Test the Network

y = net(x);

e = gsubtract(t,y);

performance = perform(net,t,y)

% Recalculate Training, Validation and Test Performance

trainTargets = t .* tr.trainMask{1};

valTargets = t  .* tr.valMask{1};

testTargets = t  .* tr.testMask{1};

trainPerformance = perform(net,trainTargets,y)

valPerformance = perform(net,valTargets,y)

testPerformance = perform(net,testTargets,y)

% View the Network

view(net)

% Plots

% Uncomment these lines to enable various plots.

%figure, plotperform(tr)

%figure, plottrainstate(tr)

%figure, plotfit(net,x,t)

%figure, plotregression(t,y)

%figure, ploterrhist(e)

% Deployment

% Change the (false) values to (true) to enable the following code blocks.

if (false)

% Generate MATLAB function for neural network for application deployment

% in MATLAB scripts or with MATLAB Compiler and Builder tools, or simply

% to examine the calculations your trained neural network performs.

genFunction(net,'myNeuralNetworkFunction');

y = myNeuralNetworkFunction(x);

end

if (false)

% Generate a matrix-only MATLAB function for neural network code

% generation with MATLAB Coder tools.

genFunction(net,'myNeuralNetworkFunction','MatrixOnly','yes');

y = myNeuralNetworkFunction(x);

end

if (false)

% Generate a Simulink diagram for simulation or deployment with.

% Simulink Coder tools.

gensim(net);

end

6.2 Network training process

[Figure: neural network training window]

6.3 Training results

The training results were as follows:

The R values for the training, validation, and test samples were 0.875337, 1, and 1, respectively.

[Figure: error histogram]

[Figure: regression plots for the training, validation, and test samples and for all data]

The validation-sample and test-sample R values were both 1.

7. Second-generation larval peak period modeling experiment

7.1 Program code

The program code (Simple Script) is:

% Solve an Input-Output Fitting problem with a Neural Network

% Script generated by Neural Fitting app

% Created Wed Oct 28 20:22:04 CST 2015

%

% This script assumes these variables are defined:

%

%   t2x - input data.

%   t2y - target data.

x = t2x';

t = t2y';

% Choose a Training Function

% For a list of all training functions type: help nntrain

% 'trainlm' is usually fastest.

% 'trainbr' takes longer but may be better for challenging problems.

% 'trainscg' uses less memory. NFTOOL falls back to this in low memory situations.

trainFcn = 'trainlm';  % Levenberg-Marquardt

% Create a Fitting Network

hiddenLayerSize = 10;

net = fitnet(hiddenLayerSize,trainFcn);

% Setup Division of Data for Training, Validation, Testing

net.divideParam.trainRatio = 90/100;

net.divideParam.valRatio = 5/100;

net.divideParam.testRatio = 5/100;

% Train the Network

[net,tr] = train(net,x,t);

% Test the Network

y = net(x);

e = gsubtract(t,y);

performance = perform(net,t,y)

% View the Network

view(net)

% Plots

% Uncomment these lines to enable various plots.

%figure, plotperform(tr)

%figure, plottrainstate(tr)

%figure, plotfit(net,x,t)

%figure, plotregression(t,y)

%figure, ploterrhist(e)

The program code (Advanced Script) is:

% Solve an Input-Output Fitting problem with a Neural Network

% Script generated by Neural Fitting app

% Created Wed Oct 28 20:22:29 CST 2015

%

% This script assumes these variables are defined:

%

%   t2x - input data.

%   t2y - target data.

x = t2x';

t = t2y';

% Choose a Training Function

% For a list of all training functions type: help nntrain

% 'trainlm' is usually fastest.

% 'trainbr' takes longer but may be better for challenging problems.

% 'trainscg' uses less memory. NFTOOL falls back to this in low memory situations.

trainFcn = 'trainlm';  % Levenberg-Marquardt

% Create a Fitting Network

hiddenLayerSize = 10;

net = fitnet(hiddenLayerSize,trainFcn);

% Choose Input and Output Pre/Post-Processing Functions

% For a list of all processing functions type: help nnprocess

net.input.processFcns = {'removeconstantrows','mapminmax'};

net.output.processFcns = {'removeconstantrows','mapminmax'};

% Setup Division of Data for Training, Validation, Testing

% For a list of all data division functions type: help nndivide

net.divideFcn = 'dividerand';  % Divide data randomly

net.divideMode = 'sample';  % Divide up every sample

net.divideParam.trainRatio = 90/100;

net.divideParam.valRatio = 5/100;

net.divideParam.testRatio = 5/100;

% Choose a Performance Function

% For a list of all performance functions type: help nnperformance

net.performFcn = 'mse';  % Mean squared error

% Choose Plot Functions

% For a list of all plot functions type: help nnplot

net.plotFcns = {'plotperform','plottrainstate','ploterrhist', ...

'plotregression', 'plotfit'};

% Train the Network

[net,tr] = train(net,x,t);

% Test the Network

y = net(x);

e = gsubtract(t,y);

performance = perform(net,t,y)

% Recalculate Training, Validation and Test Performance

trainTargets = t .* tr.trainMask{1};

valTargets = t  .* tr.valMask{1};

testTargets = t  .* tr.testMask{1};

trainPerformance = perform(net,trainTargets,y)

valPerformance = perform(net,valTargets,y)

testPerformance = perform(net,testTargets,y)

% View the Network

view(net)

% Plots

% Uncomment these lines to enable various plots.

%figure, plotperform(tr)

%figure, plottrainstate(tr)

%figure, plotfit(net,x,t)

%figure, plotregression(t,y)

%figure, ploterrhist(e)

% Deployment

% Change the (false) values to (true) to enable the following code blocks.

if (false)

% Generate MATLAB function for neural network for application deployment

% in MATLAB scripts or with MATLAB Compiler and Builder tools, or simply

% to examine the calculations your trained neural network performs.

genFunction(net,'myNeuralNetworkFunction');

y = myNeuralNetworkFunction(x);

end

if (false)

% Generate a matrix-only MATLAB function for neural network code

% generation with MATLAB Coder tools.

genFunction(net,'myNeuralNetworkFunction','MatrixOnly','yes');

y = myNeuralNetworkFunction(x);

end

if (false)

% Generate a Simulink diagram for simulation or deployment with.

% Simulink Coder tools.

gensim(net);

end

7.2 Network training process

[Figure: neural network training window]

7.3 Training results

The training results were as follows:

The R values for the training, validation, and test samples were 0.402150, 1, and 1, respectively.

[Figure: error histogram]

[Figure: regression plots for the training, validation, and test samples and for all data]




https://blog.sciencenet.cn/blog-3344-1099751.html
