mathutil_nwchem
esta.externalBag.mathutil_nwchem
numderiv(func, x, step, eps)
Use central differences to compute the gradient and the diagonal elements of the Hessian.

func(x) = function to be differentiated
x[] = (array) point at which to differentiate
step[] = (array) remembers the finite-difference step between successive calls. Set to zero on the first call, or set close to an appropriate value.
eps = expected precision in func
Some care is taken to adjust the step so that the gradient and Hessian diagonal are estimated with about 4 digits of precision, but some noise is unavoidable, due either to noise in the function or to cubic and higher-order terms in the Taylor expansion.
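As an illustration of the central-difference scheme (a minimal sketch only; the real routine's adaptive step adjustment is reduced here to the standard eps**(1/3) heuristic):

```python
import numpy as np

def numderiv_sketch(func, x, step, eps):
    # Central differences: g[i]     ~ (f(x+h*e_i) - f(x-h*e_i)) / (2h)
    #                      hdiag[i] ~ (f(x+h*e_i) - 2f(x) + f(x-h*e_i)) / h^2
    x = np.asarray(x, dtype=float)
    n = len(x)
    g, hdiag = np.zeros(n), np.zeros(n)
    f0 = func(x)
    for i in range(n):
        # Reuse the remembered step, or pick a default on the first call
        h = step[i] if step[i] != 0.0 else max(abs(x[i]), 1.0) * eps ** (1.0 / 3.0)
        e = np.zeros(n)
        e[i] = h
        fp, fm = func(x + e), func(x - e)
        g[i] = (fp - fm) / (2.0 * h)
        hdiag[i] = (fp - 2.0 * f0 + fm) / (h * h)
        step[i] = h  # remember the step for the next call
    return g, hdiag
```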
quadfit(alpha0, f0, alpha1, f1, alpha2, f2)
Given three points, compute the gradient and Hessian at point 0 using a quadratic fit.
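The algebra amounts to fitting f(alpha) ≈ f0 + g*(alpha - alpha0) + 0.5*h*(alpha - alpha0)**2 through the three points and solving for g and h. A minimal sketch:

```python
def quadfit_sketch(alpha0, f0, alpha1, f1, alpha2, f2):
    # Fit f(a) ~ f0 + g*(a - alpha0) + 0.5*h*(a - alpha0)**2
    d1, d2 = alpha1 - alpha0, alpha2 - alpha0
    s1 = (f1 - f0) / d1          # = g + 0.5*h*d1
    s2 = (f2 - f0) / d2          # = g + 0.5*h*d2
    h = 2.0 * (s1 - s2) / (d1 - d2)
    g = s1 - 0.5 * h * d1
    return g, h                  # gradient and curvature at alpha0
```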
jacobi(ainput)
Diagonalize a real symmetric matrix using the variable threshold cyclic Jacobi method.
(v,e) = jacobi(a)
Input: a[n][n] is a real symmetric matrix
Returns: (v,e) where v is the list of eigenvectors and e is an array of the corresponding eigenvalues in ascending order. v[k] is a vector containing the kth eigenvector. These satisfy
A*Vt = Vt*e
or
V*A = e*V
or
sum(j)(a[i][j]*v[k][j]) = e[k]*v[k][i]
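A minimal sketch of the cyclic Jacobi method under the eigenvector convention above (rows of the returned v are eigenvectors; the variable-threshold logic is omitted here):

```python
import numpy as np

def jacobi_sketch(a, tol=1e-12, max_sweeps=50):
    # Cyclic Jacobi: repeatedly annihilate off-diagonal elements with
    # plane rotations; the accumulated rotations give the eigenvectors.
    a = np.array(a, dtype=float)
    n = a.shape[0]
    v = np.eye(n)
    for _ in range(max_sweeps):
        if np.sqrt(np.sum(np.tril(a, -1) ** 2)) < tol:
            break
        for i in range(n - 1):
            for j in range(i + 1, n):
                if abs(a[i, j]) < tol:
                    continue
                # Rotation angle that zeroes a[i][j]
                theta = 0.5 * np.arctan2(2.0 * a[i, j], a[i, i] - a[j, j])
                c, s = np.cos(theta), np.sin(theta)
                r = np.eye(n)
                r[i, i] = r[j, j] = c
                r[i, j], r[j, i] = -s, s
                a = r.T @ a @ r
                v = v @ r
    e = np.diag(a)
    order = np.argsort(e)           # eigenvalues in ascending order
    return v[:, order].T, e[order]  # row k of v is the kth eigenvector
```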
hessian_update_bfgs(hp, dx, g, gp)
Apply the BFGS update to the approximate Hessian h[][].
hp[][] = Hessian matrix from the previous iteration
dx[] = step from the previous iteration (dx[] = x[] - xp[], where xp[] is the previous point)
g[] = gradient at the current point
gp[] = gradient at the previous point

Returns the updated Hessian.
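This is the standard BFGS formula B+ = B + y*y^T/(y^T s) - (B s)(B s)^T/(s^T B s) with s = dx and y = g - gp. A minimal sketch; the guard on the curvature condition is an assumption for robustness, not necessarily what this routine does:

```python
import numpy as np

def hessian_update_bfgs_sketch(hp, dx, g, gp):
    # BFGS update of the (direct) Hessian approximation:
    #   B+ = B + y y^T / (y^T s) - (B s)(B s)^T / (s^T B s)
    hp = np.asarray(hp, dtype=float)
    s = np.asarray(dx, dtype=float)
    y = np.asarray(g, dtype=float) - np.asarray(gp, dtype=float)
    bs = hp @ s
    ys, sbs = y @ s, s @ bs
    if ys <= 0.0:        # curvature condition violated; skip the update
        return hp
    return hp + np.outer(y, y) / ys - np.outer(bs, bs) / sbs
```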
quasinr(func, guess, tol, eps, printvar=None)
Unconstrained minimization of a function of n variables without analytic derivatives, using quasi-Newton with the BFGS update and numerical gradients.
func(x) is a function that takes an array of n values and returns the function value
guess[] is an array of n values for the initial guess
tol is the convergence criterion for the maximum value of the gradient
eps is the expected precision in the function value
printvar(x) is an optional user function to print the parameter values at each macro iteration
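A minimal sketch of how the pieces above fit together, reusing the numderiv and BFGS sketches; the real routine also performs a line search along the quasi-Newton step in addition to the adaptive differencing described under numderiv:

```python
import numpy as np

def quasinr_sketch(func, guess, tol, eps, printvar=None, maxiter=100):
    x = np.asarray(guess, dtype=float)
    n = len(x)
    step = np.zeros(n)               # remembered finite-difference steps
    h = np.eye(n)                    # initial Hessian approximation
    g, _ = numderiv_sketch(func, x, step, eps)
    for _ in range(maxiter):
        if np.max(np.abs(g)) < tol:  # converged on the max gradient element
            break
        dx = -np.linalg.solve(h, g)  # quasi-Newton step
        x = x + dx
        gp, g = g, numderiv_sketch(func, x, step, eps)[0]
        h = hessian_update_bfgs_sketch(h, dx, g, gp)
        if printvar is not None:
            printvar(x)
    return x
```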
cgminold(func, dfunc, guess, tol)
Simple conjugate gradient assuming analytic derivatives.
cgmin(func, dfunc, guess, tol, precond=None, reset=None)
Conjugate gradient with optional preconditioning and use of analytic gradients.
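A minimal sketch of preconditioned nonlinear CG with a Polak-Ribiere update; the backtracking line search and the precond(g) -> M^{-1}g calling convention are assumptions for illustration, not necessarily what cgmin itself uses:

```python
import numpy as np

def cgmin_sketch(func, dfunc, guess, tol, precond=None, reset=None, maxiter=200):
    x = np.asarray(guess, dtype=float)
    n = len(x)
    reset = reset or n
    g = np.asarray(dfunc(x), dtype=float)
    z = precond(g) if precond else g
    d = -z                                   # initial (steepest-descent) direction
    for it in range(maxiter):
        if np.max(np.abs(g)) < tol:
            break
        alpha, f0 = 1.0, func(x)             # crude backtracking line search
        while func(x + alpha * d) > f0 and alpha > 1e-12:
            alpha *= 0.5
        x = x + alpha * d
        gnew = np.asarray(dfunc(x), dtype=float)
        znew = precond(gnew) if precond else gnew
        beta = max(0.0, znew @ (gnew - g) / (z @ g))  # Polak-Ribiere+
        if (it + 1) % reset == 0:
            beta = 0.0                       # forced conjugacy reset
        d = -znew + beta * d
        g, z = gnew, znew
    return x
```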
cgmin2(func, guess, tol, eps, printvar=None, reset=None)
Unconstrained minimization of a function of n variables without analytic derivatives using conjugate gradient with diagonal preconditioning.
func(x) is a function that takes an array of n values and returns the function value
guess[] is an array of n values for the initial guess
tol is the convergence criterion for the maximum value of the gradient
eps is the expected precision in the function value
printvar(x) is an optional user function to print the parameter values at each iteration
reset is the number of iterations between forced resets of the conjugacy. In principle this could be n, but noise in the numerical gradients makes a smaller number a better choice.
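A hypothetical usage sketch (the return value is assumed here to be the optimized parameter array, which the documentation above does not state):

```python
def rosenbrock(x):
    # Classic test function with minimum at (1, 1)
    return (1.0 - x[0]) ** 2 + 100.0 * (x[1] - x[0] ** 2) ** 2

xopt = cgmin2(rosenbrock, [-1.2, 1.0], tol=1e-4, eps=1e-12,
              printvar=print, reset=10)
```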