Personal blog of zsh2270365028: http://blog.sciencenet.cn/u/zsh2270365028

Blog post

[Repost] Python least-squares fitting

Viewed 1232 times, 2021-4-7 23:41 | Personal category: Python | System category: Research notes | Source: repost

Refer to:

python4mpia.github.io/fitting_data/least-squares-fitting.html


Least-squares fitting in Python

Many fitting problems (though by far not all) can be expressed as least-squares problems.

What is least squares?

  • Minimise \chi^2 = \sum_{n=1}^N \left(\frac{y_n - f(x_n)}{\sigma_n}\right)^2

  • If and only if the data’s noise is Gaussian, minimising \chi^2 is identical to maximising the likelihood \mathcal L\propto e^{-\chi^2/2}.

  • If the data’s noise model is unknown, then minimise J(\theta) = \sum_{n=1}^N \left[y_n - f(x_n;\theta)\right]^2

  • For non-Gaussian data noise, least squares is just a recipe (usually) without any probabilistic interpretation (no uncertainty estimates).
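The \chi^2 definition above can be evaluated directly. A minimal sketch, using made-up toy data and a hypothetical straight-line model (none of these numbers come from the tutorial):

```python
import numpy

# Hypothetical toy data with per-point Gaussian uncertainties sigma_n.
x = numpy.array([0.0, 1.0, 2.0, 3.0])
y = numpy.array([0.1, 1.1, 1.9, 3.2])
sigma = numpy.array([0.2, 0.2, 0.2, 0.2])

def model(x, a, b):
    # Straight line f(x) = a + b*x.
    return a + b * x

# chi^2 = sum of squared, sigma-weighted residuals for a=0, b=1.
chi2 = numpy.sum(((y - model(x, 0.0, 1.0)) / sigma) ** 2)
print(chi2)
```

A fitting routine would then search parameter space for the (a, b) that make this sum smallest.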

scipy.optimize.curve_fit

curve_fit is part of scipy.optimize and a wrapper for scipy.optimize.leastsq that overcomes its poor usability. Like leastsq, curve_fit internally uses a Levenberg-Marquardt gradient method (greedy algorithm) to minimise the objective function.

Let us create some toy data:

import numpy

# Generate artificial data = straight line with a=0 and b=1
# plus some noise.
xdata = numpy.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
ydata = numpy.array([0.1, 0.9, 2.2, 2.8, 3.9, 5.1])

# Initial guess.
x0 = numpy.array([0.0, 0.0, 0.0])

Data errors can also easily be provided:

sigma = numpy.array([1.0,1.0,1.0,1.0,1.0,1.0])

Here the objective function is defined simply (though less generally) as the model itself:

def func(x, a, b, c):
    return a + b*x + c*x*x

Usage is very simple:

import scipy.optimize as optimization

print(optimization.curve_fit(func, xdata, ydata, x0, sigma))

This outputs the parameter estimates (a=0.1, b=0.88142857, c=0.02142857) and the 3x3 covariance matrix.
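For reference, the steps above can be collected into one self-contained script. Unpacking the return value into best-fit parameters and covariance matrix, the one-sigma parameter uncertainties are the square roots of the covariance diagonal (the names popt, pcov, perr are my own, not from the tutorial):

```python
import numpy
import scipy.optimize as optimization

# Same toy data as above: straight line with a=0, b=1, plus noise.
xdata = numpy.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
ydata = numpy.array([0.1, 0.9, 2.2, 2.8, 3.9, 5.1])
sigma = numpy.array([1.0, 1.0, 1.0, 1.0, 1.0, 1.0])
x0 = numpy.array([0.0, 0.0, 0.0])

def func(x, a, b, c):
    # Quadratic model a + b*x + c*x^2.
    return a + b * x + c * x * x

# popt: best-fit parameters; pcov: 3x3 covariance matrix.
popt, pcov = optimization.curve_fit(func, xdata, ydata, p0=x0, sigma=sigma)
perr = numpy.sqrt(numpy.diag(pcov))  # one-sigma uncertainties
print(popt)
print(perr)
```

The printed popt should reproduce the estimates quoted above.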

scipy.optimize.leastsq

Scipy provides a method called leastsq as part of its optimize package. However, there are two problems:

  • This method is not well documented (no easy examples).

  • Error/covariance estimates on fit parameters not straight-forward to obtain.

Internally, leastsq uses a Levenberg-Marquardt gradient method (greedy algorithm) to minimise the score function.

The first step is to declare the objective function that should be minimised:

# The function whose square is to be minimised.
# params ... list of parameters tuned to minimise function.
# Further arguments:
# xdata ... design matrix for a linear model.
# ydata ... observed data.
def func(params, xdata, ydata):
    return ydata - numpy.dot(xdata, params)

The toy data now needs to be provided in a more complex way:

# Provide data as design matrix: straight line with a=0 and b=1 plus some noise.
xdata = numpy.transpose(numpy.array([[1.0, 1.0, 1.0, 1.0, 1.0, 1.0],
                                     [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]]))

Now, we can use the least-squares method. Note that the initial guess must now have only two entries, one per column of the design matrix:

x0 = numpy.array([0.0, 0.0])
print(optimization.leastsq(func, x0, args=(xdata, ydata)))

Note the args argument, which is necessary in order to pass the data to the function.

This only provides the parameter estimates (a=0.02857143, b=0.98857143).
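Since the plain call returns only the parameter estimates, one way to recover a covariance estimate is leastsq's full_output mode, which additionally returns cov_x; scaling cov_x by the residual variance gives the usual covariance matrix. A sketch under that recipe, reusing the design-matrix setup from above:

```python
import numpy
from scipy.optimize import leastsq

# Design matrix: column of ones (intercept a) and the x values (slope b).
xdata = numpy.transpose(numpy.array([[1.0, 1.0, 1.0, 1.0, 1.0, 1.0],
                                     [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]]))
ydata = numpy.array([0.1, 0.9, 2.2, 2.8, 3.9, 5.1])

def func(params, xdata, ydata):
    # Residuals of the linear model.
    return ydata - numpy.dot(xdata, params)

x0 = numpy.array([0.0, 0.0])
params, cov_x, infodict, mesg, ier = leastsq(
    func, x0, args=(xdata, ydata), full_output=True)

# Scale cov_x by the residual variance (chi^2 / degrees of freedom)
# to obtain the covariance matrix of the fitted parameters.
residuals = func(params, xdata, ydata)
s_sq = numpy.sum(residuals ** 2) / (len(ydata) - len(params))
cov = cov_x * s_sq

print(params)                        # best-fit a and b
print(numpy.sqrt(numpy.diag(cov)))   # one-sigma uncertainties
```

This reproduces the estimates quoted above while also yielding the error bars that the plain call withholds.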




https://blog.sciencenet.cn/blog-587102-1280835.html
