【Python】Part Three: Scipy - "Scientific Computing with Python"


1. Least Squares Fitting#

Suppose we have a set of experimental data (x[i], y[i]) and we know the functional relationship between them, y = f(x). Based on this information we want to determine some of the parameters in the function. For example, if f is a linear function f(x) = k*x + b, then k and b are the parameters we need to determine. If we denote these parameters as p, then we are looking for the set of p values that minimizes the function S in the following formula:
S(p) = Σ_i [y[i] - f(x[i], p)]^2
This algorithm is called Least Squares Fitting.


In other words, we use the known dataset (x, y) to determine a set of parameters (k, b) such that the sum of the squared differences between the actual y values and the predicted y values is minimized.
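
Using the leastsq function that is introduced in the next paragraph, a minimal sketch for this linear case might look as follows (the data, the true values k = 2.5 and b = 1.0, and the noise level are all made up for illustration):

import numpy as np
from scipy.optimize import leastsq

# Illustrative data generated from k = 2.5, b = 1.0 plus a little noise
x = np.linspace(0, 10, 50)
y = 2.5 * x + 1.0 + 0.3 * np.random.randn(len(x))

def residuals(p, y, x):
    k, b = p
    return y - (k * x + b)   # the differences whose squares S sums up

plsq = leastsq(residuals, [1.0, 0.0], args=(y, x))
print(plsq[0])               # fitted [k, b], should come out close to [2.5, 1.0]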


The optimize sub-library in scipy provides a leastsq function that implements the least squares fitting algorithm. Below is an example of data fitting using leastsq:

# -*- coding: utf-8 -*-
# Note that if there are Chinese characters in the code, the above line must be added at the beginning of the file, or add "#encoding:utf-8".
import numpy as np
from scipy.optimize import leastsq
import pylab as pl

def func(x, p):
    """
    Function used for data fitting: A*sin(2*pi*k*x + theta)
    """
    A, k, theta = p
    return A*np.sin(2*np.pi*k*x+theta)   

def residuals(p, y, x):
    """
    The difference between experimental data x, y and the fitting function, p is the coefficient to be found for fitting
    """
    return y - func(x, p)

x = np.linspace(0, -2*np.pi, 100)
A, k, theta = 10, 0.34, np.pi/6 # Real data function parameters
y0 = func(x, [A, k, theta]) # Real data
y1 = y0 + 2 * np.random.randn(len(x)) # Experimental data with added noise    

p0 = [7, 0.2, 0] # Initial guess for the function fitting parameters

# Call leastsq for data fitting
# residuals is the function for calculating errors
# p0 is the initial value for fitting parameters
# args are the experimental data to be fitted
plsq = leastsq(residuals, p0, args=(y1, x))

print u"Real parameters:", [A, k, theta]
print u"Fitted parameters", plsq[0] # Parameters after fitting experimental data

pl.plot(x, y0, label=u"Real data")
pl.plot(x, y1, label=u"Experimental data with noise")
pl.plot(x, func(x, plsq[0]), label=u"Fitted data")
pl.legend()
pl.show()

In this example, the function we want to fit is a sine wave function, which has three parameters A, k, theta, corresponding to amplitude, frequency, and phase angle, respectively. Suppose our experimental data is a set of noisy data x, y1, where y1 is obtained by adding noise to the real data y0.

By using the leastsq function to fit the noisy experimental data x, y1, we can find the three parameters A, k, theta that describe the sine relationship between x and the real data y0.
Finally, we find that although the fitted parameters are completely different from the real parameters, due to the periodic nature of the sine function, the function obtained from the fitted parameters is actually consistent with the function corresponding to the real parameters; the short check below verifies this numerically.
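
Continuing with the variables x, y0, func and plsq defined in the script above, one way to confirm this is to compare the two curves point by point (a small check, not part of the original program):

# The fitted parameters may differ from [A, k, theta], for example by a sign
# flip of A combined with a phase shift, yet the two curves themselves coincide.
print(np.max(np.abs(func(x, plsq[0]) - y0)))  # should be small compared to the amplitude A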

2. Function Minimization#

The optimize library provides several algorithms for finding the minimum of a function: fmin, fmin_powell, fmin_cg, fmin_bfgs. The following program demonstrates the functionality of fmin by solving the inverse operation of convolution.

For a discrete linear time-invariant system h, if its input is x, then its output y can be represented by the convolution of x and h:
y = x*h
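
As a quick standalone illustration of what convolution means numerically (the arrays here are made up for the example), numpy provides np.convolve:

import numpy as np

x = np.array([1.0, 2.0, 3.0])   # system input
h = np.array([0.0, 1.0, 0.5])   # impulse response of the system
y = np.convolve(x, h)           # full convolution, length len(x) + len(h) - 1
print(y)                        # 0.0, 1.0, 2.5, 4.0, 1.5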

The problem at hand is: given the system's input x and its output y, how do we compute the transfer function h? Or, given the transfer function h and the output y, how do we compute the input x? This operation is called deconvolution. It is quite difficult in general, especially since in practical applications the measured output always contains some error.

The following uses fmin to calculate deconvolution. This method can only be used on very small sequences, so it does not have much practical value, but it is good for evaluating the performance of the fmin function.

# -*- coding: utf-8 -*-
# This program uses various fmin functions to solve the inverse operation of convolution

import scipy.optimize as opt
import numpy as np

def test_fmin_convolve(fminfunc, x, h, y, yn, x0):
    """
    x (*) h = y, (*) represents convolution
    yn is the result of adding some interference noise to y
    x0 is the initial value for solving x
    """
    def convolve_func(h):
        """
        Calculate the sum of squared errors (the "power") of yn - x (*) h.
        fmin will minimize this value.
        """
        return np.sum((yn - np.convolve(x, h))**2)

    # Call fmin function with x0 as the initial value
    h0 = fminfunc(convolve_func, x0)

    print(fminfunc.__name__)
    print("---------------------")
    # Output the relative error between x (*) h0 and y
    print("error of y:", np.sum((np.convolve(x, h0)-y)**2)/np.sum(y**2))
    # Output the relative error between h0 and h
    print("error of h:", np.sum((h0-h)**2)/np.sum(h**2))
    print()

def test_n(m, n, nscale):
    """
    Randomly generate the sequences x, h, y, yn, x0, and call the various fmin functions to solve for h.
    m is the length of x, n is the length of h, nscale is the strength of the interference noise.
    """
    x = np.random.rand(m)
    h = np.random.rand(n)
    y = np.convolve(x, h)
    yn = y + np.random.rand(len(y)) * nscale
    x0 = np.random.rand(n)

    test_fmin_convolve(opt.fmin, x, h, y, yn, x0)
    test_fmin_convolve(opt.fmin_powell, x, h, y, yn, x0)
    test_fmin_convolve(opt.fmin_cg, x, h, y, yn, x0)
    test_fmin_convolve(opt.fmin_bfgs, x, h, y, yn, x0)

if __name__ == "__main__":
    test_n(200, 20, 0.1)

if __name__ == "__main__":

The above line means that the statements under the if are executed only when this file is run directly as a script. If this file is imported from another file, the statements under the if are not executed.
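
A minimal sketch of this behaviour, using two hypothetical files named mymodule.py and main.py:

# mymodule.py (hypothetical file name)
def greet():
    print("hello from mymodule")

if __name__ == "__main__":
    greet()          # runs only when executing "python mymodule.py" directly

# main.py (hypothetical file name)
import mymodule      # importing mymodule does not execute the guarded call above
mymodule.greet()     # but the function can still be called explicitly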


3. Solving Nonlinear Equations#

The fsolve function in the optimize library can be used to solve systems of nonlinear equations. Its basic calling form is as follows:

fsolve(func, x0)

func(x) is a function that computes the errors of the equation system: its argument x is a vector representing one candidate solution for the unknowns, and func returns the errors obtained by substituting x into the equations. x0 is the initial value of the unknown vector. If we want to solve the following system of equations:

f1(u1,u2,u3) = 0
f2(u1,u2,u3) = 0
f3(u1,u2,u3) = 0

Then func can be defined as follows:

def func(x):
    u1,u2,u3 = x
    return [f1(u1,u2,u3), f2(u1,u2,u3), f3(u1,u2,u3)]

Below is a practical example to solve the following system of equations:

5*x1 + 3 = 0
4*x0*x0 - 2*sin(x1*x2) = 0
x1*x2 - 1.5 = 0

The program is as follows:

from scipy.optimize import fsolve
from math import sin,cos

def f(x):
    x0 = float(x[0])
    x1 = float(x[1])
    x2 = float(x[2])
    return [
        5*x1+3,
        4*x0*x0 - 2*sin(x1*x2),
        x1*x2 - 1.5
    ]

result = fsolve(f, [1,1,1])

print(result)
print(f(result))

The output is:

[-0.70622057 -0.6        -2.5       ]
[0.0, -9.1260332624187868e-14, 5.3290705182007514e-15]

Since fsolve passes an array to the function f when calling it, doing the arithmetic directly on the array elements (which are numpy scalars) is relatively slow. Therefore we first use the float function to convert the elements into standard Python floating-point numbers, and then call the functions from the standard math library for the calculations.
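
As an alternative sketch (not the article's version), f can also be written with numpy's vectorized functions and left to operate on the array elements directly; for a system this small the speed difference is negligible:

import numpy as np
from scipy.optimize import fsolve

def f(x):
    x0, x1, x2 = x                  # numpy scalars, no float() conversion
    return [
        5*x1 + 3,
        4*x0*x0 - 2*np.sin(x1*x2),
        x1*x2 - 1.5
    ]

print(fsolve(f, [1, 1, 1]))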

When solving the system of equations, fsolve will automatically calculate the Jacobian matrix of the system of equations. If there are many unknowns in the system of equations, and relatively few unknowns related to each equation, i.e., the Jacobian matrix is sparse, passing a function that computes the Jacobian matrix can significantly improve computation speed.
Jacobian Matrix
The Jacobian matrix is a matrix of first-order partial derivatives arranged in a certain way, providing the optimal linear approximation of a differentiable equation at a given point, similar to the derivative of a multivariable function. For example, the Jacobian matrix of the previous functions f1, f2, f3 and unknowns u1, u2, u3 is as follows:
[ ∂f1/∂u1   ∂f1/∂u2   ∂f1/∂u3 ]
[ ∂f2/∂u1   ∂f2/∂u2   ∂f2/∂u3 ]
[ ∂f3/∂u1   ∂f3/∂u2   ∂f3/∂u3 ]
An example of using the Jacobian matrix with fsolve is given below. The function j that computes the Jacobian matrix is passed to fsolve through the fprime parameter. Like f, j takes a vector x of candidate solutions as its argument and returns the Jacobian matrix of the nonlinear equation system evaluated at x. Since there are only a few unknowns in this example, supplying the Jacobian matrix does not bring a speed improvement.

# -*- coding: utf-8 -*-
from scipy.optimize import fsolve
from math import sin,cos
def f(x):
    x0 = float(x[0])
    x1 = float(x[1])
    x2 = float(x[2])
    return [
        5*x1+3,
        4*x0*x0 - 2*sin(x1*x2),
        x1*x2 - 1.5
    ]
    
def j(x):
    x0 = float(x[0])
    x1 = float(x[1])
    x2 = float(x[2])    
    return [
        [0, 5, 0],
        [8*x0, -2*x2*cos(x1*x2), -2*x1*cos(x1*x2)],
        [0, x2, x1]
    ]
 
result = fsolve(f, [1,1,1], fprime=j)
print(result)
print(f(result))

4. Numerical Integration#

Numerical integration is the numerical evaluation of definite integrals; for example, it can be used to calculate the area of a shape. Let us consider how to calculate the area of a semicircle with a radius of 1. According to the formula for the area of a circle, this area should be equal to PI/2. The unit semicircle (the half of the unit circle above the X-axis) can be represented by the following function:

def half_circle(x):
    return (1-x**2)**0.5

The following program uses the classic method of calculating the total area by dividing into small rectangles to calculate the area of the unit semicircle:

>>> import numpy as np
>>> N = 10000
>>> x = np.linspace(-1, 1, N)
>>> dx = 2.0/N # This value is actually the distance between each two adjacent points in x, which is the width of the rectangles
>>> y = half_circle(x) # The y-coordinate of each point, each point to the next point is considered to have the same height. Thus, multiplying the y value of each point by dx gives the area of the rectangle between the current point and the next point.
>>> dx * np.sum(y) * 2 # Calculate twice the area for easier viewing of the pi value
3.1412751679988937

With the same series of points on the circle calculated above, we can also use numpy.trapz for the numerical integration:

>>> import numpy as np
>>> np.trapz(y, x) * 2 # Twice the area
3.1415893269316042

This function calculates the area enclosed by the polyline with coordinates x, y and the X-axis. With the same number of division points, the result of the trapz function is closer to the exact value.

If we call the quad function in the scipy.integrate library, we will get a very accurate result:

>>> from scipy import integrate
>>> pi_half, err = integrate.quad(half_circle, -1, 1)
>>> pi_half*2
3.1415926535897984

This post covers much less than the original section; readers who want to learn more are advised to study the original at the address below.
This article is referenced from: http://old.sebug.net/paper/books/scipydoc/scipy_intro.html#id1
