The derivative of a function takes a definite value at each point, so differentiation produces a new function, which may itself be differentiable. In that case its derivative is called the second derivative of the original function and is denoted F''(x).
The first derivative is defined as the limit of the ratio of the increment of the function to the increment of the argument, i.e.:

F'(x_0) = lim (F(x) – F(x_0))/(x – x_0) as x → x_0.

The second derivative of the original function is the derivative of the function F'(x) at the same point x_0, namely:

F''(x_0) = lim (F'(x) – F'(x_0))/(x – x_0) as x → x_0.
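The limit definition can be checked numerically: as the increment of the argument shrinks, the difference quotient approaches the derivative. A minimal Python sketch (the name diff_quotient and the choice of the exponential as a test function are illustrative assumptions, not from the source):

```python
import math

def diff_quotient(f, x0, dx):
    # (F(x) - F(x_0)) / (x - x_0) with x = x_0 + dx
    return (f(x0 + dx) - f(x0)) / dx

# the quotient tends to the derivative of exp at 1, i.e. exp(1), as dx -> 0
for dx in (1e-1, 1e-3, 1e-5):
    print(dx, diff_quotient(math.exp, 1.0, dx))
```

Each halving of dx roughly halves the error here, since the leading error term of the one-sided quotient is proportional to dx.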
For complicated functions whose second derivatives are difficult to obtain analytically, methods of numerical differentiation are applied. The following approximate formulas are used:

F''(x) = (F(x + h) – 2*F(x) + F(x - h))/h^2 + α(h^2)

F''(x) = (-F(x + 2*h) + 16*F(x + h) – 30*F(x) + 16*F(x - h) – F(x – 2*h))/(12*h^2) + α(h^4).
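Both formulas translate directly into code. A sketch in Python (the function names and the use of sin as a test function are illustrative; the stencils themselves are the ones given above):

```python
import math

def d2_central(f, x, h):
    # 3-point stencil, truncation error of order h^2
    return (f(x + h) - 2*f(x) + f(x - h)) / h**2

def d2_five_point(f, x, h):
    # 5-point stencil, truncation error of order h^4
    return (-f(x + 2*h) + 16*f(x + h) - 30*f(x)
            + 16*f(x - h) - f(x - 2*h)) / (12 * h**2)

# the exact second derivative of sin at x = 1 is -sin(1)
x, h = 1.0, 1e-3
exact = -math.sin(1.0)
print(abs(d2_central(math.sin, x, h) - exact))
print(abs(d2_five_point(math.sin, x, h) - exact))
```

For the same step h, the five-point formula is markedly more accurate, at the cost of two extra function evaluations per point.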
Methods of numerical differentiation are based on approximating the function by an interpolation polynomial. The formulas above are obtained by differentiating the interpolation polynomials of Newton and Stirling twice.
The parameter h is the approximation step adopted for the calculation, and α(h^2) is the truncation error. As with numerical first derivatives, the total error also contains a rounding component; for these formulas it is inversely proportional to h^2, so it grows as the step gets smaller. Therefore, to minimize the total error it is important to choose an optimal value of h. The choice of the optimal value of h is called step regularization. It is assumed that there exists a value of h for which the following holds:

|F(x + h) – F(x)| > ε, where ε is some small quantity.
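The trade-off between truncation and rounding error can be seen by sweeping h. A sketch in Python (the step values and the test function sin are illustrative assumptions):

```python
import math

def d2_central(f, x, h):
    # 3-point approximation of the second derivative
    return (f(x + h) - 2*f(x) + f(x - h)) / h**2

exact = -math.sin(1.0)  # true second derivative of sin at x = 1
errors = {h: abs(d2_central(math.sin, 1.0, h) - exact)
          for h in (1e-1, 1e-3, 1e-5, 1e-7)}
for h, err in errors.items():
    print(h, err)
# truncation error falls like h^2, but rounding error in the numerator
# grows like 1/h^2, so shrinking h past the optimum makes the result worse
```

On typical double-precision hardware the error here reaches its minimum around h ≈ 1e-3 and then grows again, which is the behaviour that motivates step regularization.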
There is another algorithm for minimizing the approximation error. It consists in selecting several points of the domain of F near the starting point x_0, computing the values of the function at those points, and fitting a regression curve through them, which smooths F on a small interval.
The values obtained from the regression represent a partial sum of the Taylor series: G(x) = F(x) + R, where G(x) is the smooth function and R is the approximation error. Differentiating twice gives G''(x) = F''(x) + R'', where R'' = G''(x) – F''(x). The value of R'', being the deviation of the approximate second derivative from its true value, is the minimized approximation error.
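One concrete way to realize this smoothing approach is to fit a quadratic by least squares through symmetric points around x_0 and take the second derivative of the fitted polynomial. A sketch in Python (the function name, the choice of a quadratic model, and the node count 2k+1 are assumptions for illustration, not prescribed by the source):

```python
import math

def d2_by_quadratic_fit(f, x0, h, k=3):
    # Sample 2k+1 symmetric points around x0 and fit y = a*t^2 + b*t + c
    # by least squares, with t = x - x0; then F''(x0) is approximated
    # by G''(x0) = 2*a, the second derivative of the smooth fit G.
    ts = [i * h for i in range(-k, k + 1)]
    ys = [f(x0 + t) for t in ts]
    n = len(ts)
    S2 = sum(t**2 for t in ts)
    S4 = sum(t**4 for t in ts)
    Sy = sum(ys)
    S2y = sum(t**2 * y for t, y in zip(ts, ys))
    # the normal equations decouple because the nodes are symmetric
    # (all odd power sums vanish), leaving a 2x2 system for a and c
    a = (n * S2y - S2 * Sy) / (n * S4 - S2**2)
    return 2 * a

# compare with the exact value -sin(1)
print(d2_by_quadratic_fit(math.sin, 1.0, 1e-3), -math.sin(1.0))
```

Because the fit averages over several samples, it is less sensitive to noise in the function values than a single finite-difference stencil, which is the point of the smoothing construction described above.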