# Lsqr

`aprod(mode, m, n, x, iw, rw)`

This is the simplest example for testing LSQR. The matrix A is taken directly from the real workspace, $$A = rw$$. If mode = 1, aprod computes $$y = A x$$; if mode = 2, aprod computes $$y = A^T x$$.
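For illustration, a minimal Python sketch of an aprod-style callback obeying this protocol (NumPy-based, with A stored in rw and iw unused; a stand-in for the interface, not the shipped source):

```python
import numpy as np

def aprod(mode, m, n, x, iw, rw):
    """Matrix-vector products for LSQR; the matrix A is stored in rw."""
    A = np.asarray(rw).reshape(m, n)
    if mode == 1:
        return A @ x     # y = A x   (x has length n)
    if mode == 2:
        return A.T @ x   # y = A' x  (x has length m)
    raise ValueError("mode must be 1 or 2")
```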

`cgls(Aname, shift, b, m, n, kmax, tol, prnt)`

Solves a symmetric system

$$(A^T A + shift I)x = A^T b$$ or $$N x = A^T b$$,

where A is a general m by n matrix and shift may be positive or negative. The method should be stable if $$N = (A^T A + shift I)$$ is positive definite; it may be unstable otherwise.

Aname names an M-file, e.g. gemat.m, such that:

- y = gemat(0, x, m, n) identifies gemat (if you wish),
- y = gemat(1, x, m, n) gives $$y = A x$$,
- y = gemat(2, x, m, n) gives $$y = A^T x$$.

The M-file must not be called Aname.m!
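In Python terms (purely illustrative; the real interface is a MATLAB M-file, and the test matrix here is invented), the three modes could look like this:

```python
import numpy as np

def gemat(mode, x, m, n):
    """Follows the three-mode interface above; A is a fixed test matrix."""
    A = np.tril(np.ones((m, n)))   # any concrete matrix will do
    if mode == 0:
        return "gemat: m-by-n lower-triangular test matrix"  # identification
    if mode == 1:
        return A @ x               # y = A x
    if mode == 2:
        return A.T @ x             # y = A' x
    raise ValueError("mode must be 0, 1, or 2")
```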

**Usage**

```
[x, resNE, k, info] = cgls(Aname, shift, b, m, n, kmax, tol, prnt)
```

**Inputs**

- Aname – name of the M-file defining the products with A (see above)
- shift – regularization parameter; if 0, cgls is Hestenes and Stiefel’s specialized form (see Note below)
- b – right-hand side of the system $$N x = A^T b$$
- m, n – dimensions of the matrix A
- kmax – maximum number of iterations
- tol – desired relative residual size, $$norm(rNE)/norm(A^T b)$$, where $$rNE = A^T b - N x$$
- prnt – 1 gives an iteration log, 0 suppresses it

**Outputs**

- x – computed solution of $$N x = A^T b$$
- resNE – relative residual for the normal equations, $$norm(A^T b - N x)/norm(A^T b)$$
- k – final number of iterations performed
- info – gives information on the result:
  - 1 if the required accuracy was achieved,
  - 2 if the iteration limit kmax was reached,
  - 3 if N seems to be singular or indefinite,
  - 4 if instability seems likely (N indefinite and norm(x) decreased).

**Note**

When $$shift = 0$$, cgls is Hestenes and Stiefel’s specialized form of the conjugate-gradient method for least-squares problems. The general shift is a simple modification.
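A minimal NumPy sketch of the iteration (shown with an explicit array A rather than an M-file, and with invented variable names; not the shipped code):

```python
import numpy as np

def cgls(A, shift, b, kmax, tol):
    """Minimal CGLS for (A'A + shift*I) x = A'b; returns (x, resNE, k, info)."""
    m, n = A.shape
    x = np.zeros(n)
    r = b.astype(float).copy()     # r = b - A x  (x = 0 initially)
    s = A.T @ r                    # s = rNE = A'b - N x
    p = s.copy()
    norm_Atb = np.linalg.norm(s)   # norm(A'b), for the relative residual
    gamma = norm_Atb ** 2
    resNE = 1.0
    for k in range(1, kmax + 1):
        q = A @ p
        delta = q @ q + shift * (p @ p)
        if delta <= 0:
            return x, resNE, k, 3  # N seems singular or indefinite
        alpha = gamma / delta
        x += alpha * p
        r -= alpha * q
        s = A.T @ r - shift * x    # recompute rNE = A'b - N x
        gamma_new = s @ s
        resNE = np.sqrt(gamma_new) / norm_Atb
        if resNE <= tol:
            return x, resNE, k, 1  # required accuracy achieved
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x, resNE, kmax, 2       # iteration limit reached
```

Called, for example, as `x, resNE, k, info = cgls(A, 0.0, b, 100, 1e-8)`.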

`lsqr(m, n, aprodname, b, damp, atol, btol, conlim, itnlim, show, covindex, userstop)`

LSQR solves $$A x = b$$, or $$min ||b - A x||_2$$ if $$damp = 0$$, or otherwise

$$min \left\| \begin{pmatrix} b \\ 0 \end{pmatrix} - \begin{pmatrix} A \\ damp\, I \end{pmatrix} x \right\|_2.$$

A is an m by n matrix defined by y = aprod(mode, m, n, x, iw, rw), where aprod is a function handle that performs the matrix-vector operations. If mode = 1, aprod must return $$y = A x$$ without altering x. If mode = 2, aprod must return $$y = A^T x$$ without altering x.
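Setting the gradient of this damped objective to zero gives the regularized normal equations, which also shows how lsqr relates to cgls above (take $$shift = damp^2$$):

$$(A^T A + damp^2 I)\, x = A^T b.$$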

**Usage**

```
[x, istop, itn, r1norm, r2norm, anorm, acond, arnorm, xnorm, var, cov, userout] = ...
    lsqr(m, n, aprodname, b, damp, atol, btol, conlim, itnlim, show, covindex, userstop)
```

**Inputs**

- m, n – dimensions of the matrix A

- aprodname – function handle performing the products with A (see above)

- b – right-hand side of $$A x = b$$

- atol, btol – stopping tolerances. If both are 1.0e-9 (say), the final residual norm should be accurate to about 9 digits. (The final x will usually have fewer correct digits, depending on cond(A) and the size of damp.)

- conlim – is also a stopping tolerance. lsqr terminates if an estimate of cond(A) exceeds conlim. For compatible systems $$A x = b$$, conlim could be as large as 1.0e+12 (say). For least-squares problems, conlim should be less than 1.0e+8. Maximum precision can be obtained by setting atol = btol = conlim = 0, but the number of iterations may then be excessive.

- itnlim – an explicit limit on iterations (for safety)

- show – 1 gives an iteration log, 0 suppresses output

- covindex – index set selecting the rows and columns of the covariance matrix to be estimated

- userstop – a function handle that checks a user-defined stopping criterion, of the form [istop, atol, userout] = userstop(istop, atol, arnorm, itn)

If userstop returns istop > 0, lsqr will stop; if it returns istop = 0, lsqr will continue. The userstop function may also alter the tolerance atol. A sketch of such a callback follows.
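A callback matching this contract might look as follows (a hedged sketch; the stagnation test, the window/factor parameters, and the userout payload are invented for the example):

```python
def make_userstop(window=5, factor=0.999):
    """Build a userstop callback that halts when norm(A'r) stagnates."""
    history = []
    def userstop(istop, atol, arnorm, itn):
        history.append(arnorm)
        userout = {"itn": itn, "arnorm": arnorm}  # arbitrary user data
        # istop > 0 tells lsqr to stop; istop = 0 lets it continue.
        if len(history) >= window and history[-1] >= factor * history[-window]:
            istop = 9
        return istop, atol, userout               # atol could be adjusted here
    return userstop
```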

**Outputs**

- x – is the final solution.
- istop – gives the reason for termination:
  - 1 means x is an approximate solution to $$A x = b$$;
  - 2 means x approximately solves the least-squares problem.
- itn – the number of iterations performed.
- r1norm – $$norm(r)$$, where $$r = b - A x$$.
- r2norm – $$sqrt(norm(r)^2 + damp^2 * norm(x)^2)$$; equals r1norm if $$damp = 0$$.
- anorm – estimate of the Frobenius norm of $$\bar{A} = \begin{pmatrix} A \\ damp\, I \end{pmatrix}$$.
- acond – estimate of $$cond(\bar{A})$$.
- arnorm – estimate of $$norm(A^T r - damp^2 x)$$.
- xnorm – $$norm(x)$$.
- var – (if present) estimates all diagonals of $$(A^T A)^{-1}$$ (if $$damp = 0$$) or, more generally, of $$(A^T A + damp^2 I)^{-1}$$. This is well defined if A has full column rank or $$damp > 0$$. (Not sure what var means if $$rank(A) < n$$ and $$damp = 0$$.)
- cov – (if present) estimates the rows and columns of the covariance matrix specified by the covindex set.
- userout – output data from the userstop function.
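As a point of reference, SciPy ships a port of this routine with the same core outputs (scipy.sparse.linalg.lsqr; it takes the matrix directly rather than an aprod handle and has no covindex or userstop arguments):

```python
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 20))
b = rng.standard_normal(100)

x, istop, itn, r1norm, r2norm, anorm, acond, arnorm, xnorm, var = lsqr(
    A, b, damp=1e-3, atol=1e-9, btol=1e-9, show=False, calc_var=True)

print(istop, itn, r1norm, r2norm)  # termination reason, iterations, residual norms
```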

LSQR uses an iterative (conjugate-gradient-like) method. For further information, see:

1. C. C. Paige and M. A. Saunders (1982a). LSQR: An algorithm for sparse linear equations and sparse least squares, ACM TOMS 8(1), 43-71.
2. C. C. Paige and M. A. Saunders (1982b). Algorithm 583. LSQR: Sparse linear equations and least squares problems, ACM TOMS 8(2), 195-209.
3. M. A. Saunders (1995). Solution of sparse rectangular systems using LSQR and CRAIG, BIT 35, 588-604.

`lsqraprod(mode, m, n, x, iw, rw)`

If mode = 1, lsqraprod computes $$y = A x$$; if mode = 2, lsqraprod computes $$y = A^T x$$, for the matrix A shown below.

This is a simple example for testing LSQR. It uses the leading m by n submatrix of:

**Example**

```
A = [ 1
      1 2
        2 3
          3 4
            ...
              n ]
```
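Reading the layout as a lower-bidiagonal matrix (diagonal entries 1..n, subdiagonal entries 1..n−1), a NumPy sketch of the product routine could be (an illustration of the structure, not the shipped source; iw and rw are unused):

```python
import numpy as np

def lsqraprod(mode, m, n, x, iw=None, rw=None):
    """Products with the leading m-by-n submatrix of the bidiagonal
    matrix above: A[i, i] = i + 1 and A[i, i - 1] = i (0-based)."""
    d = np.arange(1, n + 1, dtype=float)
    if mode == 1:                        # y = A x, length m
        y = np.zeros(m)
        k = min(m, n)
        y[:k] = d[:k] * x[:k]            # diagonal part
        j = min(m - 1, n)
        y[1:j + 1] += d[:j] * x[:j]      # subdiagonal part
        return y
    if mode == 2:                        # y = A' x, length n
        y = np.zeros(n)
        k = min(m, n)
        y[:k] = d[:k] * x[:k]            # diagonal part
        j = min(m - 1, n)
        y[:j] += d[:j] * x[1:j + 1]      # subdiagonal part
        return y
    raise ValueError("mode must be 1 or 2")
```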


`lsqrtest(m, n, damp)`

If $$m = n$$ and $$damp = 0$$, this sets up a system $$A x = b$$ and calls lsqr.m to solve it. Otherwise, the usual least-squares or damped least-squares problem is solved. lsqraprod.m defines the m by n matrix A.
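A rough Python analogue of such a test harness, assuming the lsqraprod sketch above and SciPy's lsqr (the choice of xtrue is arbitrary here):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, lsqr

def lsqrtest(m, n, damp):
    """Set up b = A @ xtrue for the lsqraprod matrix and solve with LSQR."""
    xtrue = np.arange(n, 0, -1, dtype=float)   # arbitrary exact solution
    A = LinearOperator((m, n),
                       matvec=lambda v: lsqraprod(1, m, n, v),
                       rmatvec=lambda v: lsqraprod(2, m, n, v))
    b = A.matvec(xtrue)
    x = lsqr(A, b, damp=damp, atol=1e-10, btol=1e-10)[0]
    return np.linalg.norm(x - xtrue) / np.linalg.norm(xtrue)

print("relative error:", lsqrtest(40, 20, 0.0))
```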