Due 23:59 Sunday, 2022-04-24
Let \(A\in\mathbb{R}^{n\times n}\) be nonsingular. Show that \(A\) has an LU decomposition if and only if for each \(k\) with \(1\le k \le n\), the upper left \(k\times k\) block A[1:k,1:k] is nonsingular. Prove that this LU decomposition is unique.
Write an R function LUdecomp with interface
LUdecomp(obj, tol=1e-8)
The decomposition must be done in place. That is, if \(A=LU \in \mathbb{R}^{n\times n}\), then \(U\) should overwrite the upper triangular part of the input matrix A, and the strictly lower triangular part of A should be overwritten by the corresponding part of the computed \(L\) matrix. (Where does the diagonal part of \(L\) go?) Since R does not allow in-place modification of atomic objects, you are recommended to use a Reference Class (RC) object.
The RC for this homework can be declared by
setRefClass("LUclass",
fields = list(
mat = "matrix",
vec = "vector"
) )
An RC object can be created, for instance, by
A <- matrix(c(1.0, 0.667, 0.5, 0.333), nrow=2)
b <- c(1.5, 1.0)
obj <- new("LUclass", mat=A, vec=b)
Matrix A stored in obj can be referenced by obj$mat, and vector b can be referenced by obj$vec (field vec is reserved for the next question).
You must also implement partial pivoting: function LUdecomp must return a list of two elements. The first element of the list is the array of the permutation indices of the rows; the second element is the indicator of success: if A is (numerically) singular, the function must return the row index at which singularity occurs (where may a singularity occur in the LU decomposition?) as the second return value; otherwise return 0. Use tol to determine singularity.
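A minimal sketch of such an in-place decomposition with partial pivoting, assuming the LUclass RC declared above (field vec is unused here), is the following; this is one way to meet the spec, not the required implementation:

```r
# Sketch of LUdecomp: U overwrites the upper triangle of obj$mat, L's
# multipliers overwrite the strict lower triangle (L's unit diagonal is
# implicit and never stored).
LUclass <- setRefClass("LUclass",
  fields = list(mat = "matrix", vec = "vector"))

LUdecomp <- function(obj, tol = 1e-8) {
  n <- nrow(obj$mat)
  perm <- 1:n                                  # row permutation indices
  for (k in seq_len(n - 1)) {
    # partial pivoting: largest |entry| in column k among rows k..n
    p <- which.max(abs(obj$mat[k:n, k])) + k - 1
    if (p != k) {                              # swap rows k and p in place
      obj$mat[c(k, p), ] <- obj$mat[c(p, k), ]
      perm[c(k, p)] <- perm[c(p, k)]
    }
    if (abs(obj$mat[k, k]) < tol)              # numerically singular pivot
      return(list(perm, k))
    rows <- (k + 1):n
    obj$mat[rows, k] <- obj$mat[rows, k] / obj$mat[k, k]   # multipliers
    obj$mat[rows, rows] <- obj$mat[rows, rows] -           # Schur complement
      obj$mat[rows, k] %*% t(obj$mat[k, rows])
  }
  if (abs(obj$mat[n, n]) < tol) return(list(perm, n))
  list(perm, 0)                                # 0 signals success
}

A <- matrix(c(2, 4, -2, 4, 9, -3, -2, -3, 7), nrow = 3)
obj <- new("LUclass", mat = A, vec = numeric(3))
res <- LUdecomp(obj)   # res[[1]]: row permutation, res[[2]]: 0 on success
```

One can verify the result by unpacking L and U from obj$mat and checking that their product equals the row-permuted A.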
Using the LUdecomp function written above, write a function LUsolve0 that solves the linear equation \(Ax = b\) with interface
LUsolve0(obj)
again in place. That is, in addition to A being overwritten by LUdecomp, vector b should be overwritten by the solution \(A^{-1}b\). Then write a wrapper function
LUsolve(A, b)
which alters neither A nor b but solves \(Ax=b\) by calling LUsolve0. Compare your results with the R expression solve(A, b).
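The core of LUsolve0 is a pair of triangular solves on the packed factors. A sketch of just that step, written as a plain helper (forwardBackSolve is a hypothetical name, used here so the snippet runs standalone; inside LUsolve0 the same operations would overwrite obj$vec):

```r
# Sketch of the triangular solves inside LUsolve0.
# LU holds U on and above the diagonal and L's multipliers strictly below;
# perm is the row permutation returned by LUdecomp. Assumes n >= 2.
forwardBackSolve <- function(LU, perm, b) {
  n <- nrow(LU)
  x <- b[perm]                            # apply the row permutation to b
  for (i in 2:n)                          # forward substitution, unit-diagonal L
    x[i] <- x[i] - sum(LU[i, 1:(i - 1)] * x[1:(i - 1)])
  for (i in n:1) {                        # back substitution with U
    if (i < n) x[i] <- x[i] - sum(LU[i, (i + 1):n] * x[(i + 1):n])
    x[i] <- x[i] / LU[i, i]
  }
  x                                       # in LUsolve0 this overwrites obj$vec
}

# Packed factors of A = [[2, 1], [1, 3]] (no pivoting needed): L21 = 0.5,
# U = [[2, 1], [0, 2.5]].
LU <- matrix(c(2, 0.5, 1, 2.5), nrow = 2)
x <- forwardBackSolve(LU, c(1, 2), c(3, 4))
```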
Write your functions in a separate .R file within your branch.
심송용, Chapter 6, Exercises 6.1 and 6.2.
From the lecture note on QR decomposition, explain why classical Gram-Schmidt (cgs()) fails with the given matrix A.
From the same lecture note, explain why the modified Gram-Schmidt (mgs()) fails with the given matrix B. Will the classical Gram-Schmidt succeed?
Implement the Householder QR decomposition in R.
Your function should follow the way the LAPACK routine geqrf is designed. Note that \(\mathbf{Q}\) can be recovered from \(\mathbf{u}_1, \mathbf{u}_2, \ldots, \mathbf{u}_p\). The function interface should be
householder <- function(qrobj)
taking a Reference Class (RC) object
setRefClass("QRclass",
fields = list(
mat = "matrix",
vec = "vector"
) )
and a function
recoverQ <- function(qrobj)
that recovers \(Q\). (Hint. \(\mathbf{Q}=\mathbf{Q}\mathbf{I}\).)
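A sketch of the two steps, written here on a plain matrix and a list of Householder vectors rather than the QRclass object so that it runs standalone (the packed geqrf-style storage is left to you):

```r
# Householder QR sketch: H_j = I - 2 u_j u_j^T applied to rows j..n.
householderQR <- function(A) {
  n <- nrow(A); p <- ncol(A)
  R <- A
  us <- vector("list", p)                 # Householder vectors u_1, ..., u_p
  for (j in 1:p) {
    x <- R[j:n, j]
    u <- x
    # add sign(x_1) * ||x|| to the first entry to avoid cancellation
    u[1] <- u[1] + (if (x[1] >= 0) 1 else -1) * sqrt(sum(x^2))
    u <- u / sqrt(sum(u^2))
    us[[j]] <- u
    blk <- R[j:n, j:p, drop = FALSE]
    R[j:n, j:p] <- blk - 2 * u %*% (t(u) %*% blk)
  }
  list(us = us, R = R)
}

# Q = Q I: apply H_1 ... H_p to the identity, rightmost factor first.
recoverQ <- function(us, n) {
  Q <- diag(n)
  for (j in length(us):1) {
    u <- us[[j]]
    blk <- Q[j:n, , drop = FALSE]
    Q[j:n, ] <- blk - 2 * u %*% (t(u) %*% blk)
  }
  Q
}

set.seed(1)
A <- matrix(rnorm(15), 5, 3)
out <- householderQR(A)
Q <- recoverQ(out$us, 5)   # Q^T Q = I and Q %*% out$R = A up to roundoff
```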
Apply your functions to the matrices A and B of the previous question. Compare the orthogonality of the computed \(Q\) matrices.

The Longley data set of labor statistics was one of the first used to test the accuracy of least squares computations. This data set, available at the National Institute of Standards and Technology (NIST) website, consists of one response variable (number of people employed) and six predictor variables (GNP implicit price deflator, Gross National Product, number of unemployed, number of people in the armed forces, ‘noninstitutionalized’ population \(\ge\) 14 years of age, year) observed yearly from 1947 to 1962, and is also available in an R package.
Download the data set from the NIST website, read into R, and construct a data matrix \(\mathbf{X}\) for linear model \(\mathbf{y}=\mathbf{X}\boldsymbol{\beta} + \boldsymbol{\varepsilon}\). Include an intercept in your model.
Using the R function svd(), list the 7 singular values of \(\mathbf{X}\). What is the condition number of \(\mathbf{X}\)?

Using the function you wrote for Q3, compute the regression coefficients \(\hat{\boldsymbol{\beta}}\), their standard errors, and variance estimate \(\hat{\sigma}^2\). Verify your results using the corresponding R function.
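For the singular-value part, a sketch using the copy of the data shipped with base R (the longley data frame in the datasets package); the file downloaded from NIST should give the same values:

```r
# Singular values and 2-norm condition number of the Longley design matrix,
# using the longley data frame from base R's datasets package.
y <- longley$Employed
X <- cbind(1, as.matrix(longley[, setdiff(names(longley), "Employed")]))
d <- svd(X)$d             # the 7 singular values, largest first
cond2 <- max(d) / min(d)  # 2-norm condition number sigma_1 / sigma_7
```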
Using the Cholesky decomposition of the Gram matrix \(\mathbf{G} = \mathbf{X}^T\mathbf{X}\), compute the regression coefficients \(\hat{\boldsymbol{\beta}}\), their standard errors, and variance estimate \(\hat{\sigma}^2\). Compare the results with those of the previous question.
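The Cholesky route can be sketched as follows, shown on a small simulated problem so the snippet is self-contained (taking \(\mathbf{G}\) to be the Gram matrix, as above):

```r
# Least squares via the Cholesky decomposition of G = X^T X,
# sketched on a small simulated problem.
set.seed(2)
n <- 20; p <- 3
X <- cbind(1, matrix(rnorm(n * (p - 1)), n, p - 1))
y <- rnorm(n)

G  <- crossprod(X)                        # G = X^T X
Rc <- chol(G)                             # G = Rc^T Rc, Rc upper triangular
# solve G beta = X^T y by two triangular solves
beta <- backsolve(Rc, forwardsolve(t(Rc), crossprod(X, y)))
s2 <- sum((y - X %*% beta)^2) / (n - p)   # sigma^2 hat
se <- sqrt(s2 * diag(chol2inv(Rc)))       # s.e. via (X^T X)^{-1} = chol2inv(Rc)
```

The same quantities can be checked against lm(), which fits the identical model.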
Show that the norm \(\|\mathbf{x}\|_{\delta}\) in the lecture note on iterative methods is indeed a vector norm.
심송용, Chapter 3, Exercise 3.3.
Consider solving the linear system of equations \(Ax=b\) using Jacobi’s method, where \[ A = \begin{bmatrix} 2 & -1 & & & \\ -1 & 2 & -1 & & \\ & \ddots & \ddots & \ddots & \\ & & -1 & 2 & -1 \\ & & & -1 & 2 \end{bmatrix} \in \mathbb{R}^{n\times n}. \] Note the eigenvalues of \(A\) are analytically given by \[ \lambda_i = 2 - 2\cos\frac{i\pi}{n+1}, \quad i = 1, 2, \dotsc, n. \]
Find the spectral radius of \(R\), when the Jacobi iteration is written as \[ x^{(k+1)} = Rx^{(k)} + c. \] Will the iterative algorithm converge?
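Here \(D = 2I\), so the iteration matrix is \(R = I - A/2\) and its eigenvalues follow directly from the \(\lambda_i\) given above. A quick numerical check of this relationship:

```r
# Numerical check: for this A, D = 2I, so the Jacobi iteration matrix is
# R = I - A/2, whose eigenvalues are 1 - lambda_i/2 = cos(i*pi/(n+1)).
n <- 10
A <- diag(2, n)
A[abs(row(A) - col(A)) == 1] <- -1                   # tridiagonal -1, 2, -1
Rj <- diag(n) - A / 2
rho <- max(abs(eigen(Rj, symmetric = TRUE)$values))  # spectral radius
```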
Let the eigenvector of \(A\) associated with the eigenvalue \(\lambda_i\) be \(v_i \in \mathbb{R}^n\). Then we can express the error of the iteration as \[ \varepsilon^{(k)} := x^{(k)} - x^* = a^{(k)}_1v_1 + a^{(k)}_2v_2 + \dotsb + a^{(k)}_nv_n, \] for some scalars \(a^{(k)}_1,\dotsc,a^{(k)}_n\); \(x^*\) is the true solution of \(Ax=b\). What can you say about the attenuation of the sequence \(\{a^{(k)}_i\}_{k=1,2,\dotsc}\) for different values of \(i\)?