POLYNOMIAL SCALING IN THE CONJUGATE GRADIENT METHOD AND RELATED TOPICS IN MATRIX SCALING
- Author:
- CHEN, SO CHENG
- Physical Description:
- 164 pages
- Additional Creators:
- Pennsylvania State University
- Summary:
- One purpose of scaling a matrix is to reduce its condition number. Scaling also improves the accuracy of the computed solution of a linear system and the stability of the algorithm used to solve it. When an iterative method is used to solve the linear system, scaling can increase the rate of convergence as well.
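As an illustration of the first point (a minimal sketch with a hypothetical matrix, not one from the thesis), two-sided diagonal scaling D A D with D = diag(a_ii)^(-1/2) can sharply reduce the 2-norm condition number of a badly scaled symmetric positive definite matrix:

```python
import numpy as np

# Illustrative only: a symmetric positive definite matrix whose rows
# and columns are badly scaled.
A = np.array([[1.0e6, 2.0e2],
              [2.0e2, 1.0]])

# Two-sided diagonal scaling D A D with D = diag(a_ii)^(-1/2),
# which gives the scaled matrix a unit diagonal.
d = 1.0 / np.sqrt(np.diag(A))
A_scaled = d[:, None] * A * d[None, :]

print(np.linalg.cond(A))         # roughly 1e6
print(np.linalg.cond(A_scaled))  # 1.5 for this example
```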
In this thesis, we study how to scale a matrix A in order to reduce its condition number. Matrix scaling in the Frobenius norm is studied first. We then discuss the relationships among three kinds of iterative methods: variational methods, semi-iterative methods, and relaxation methods. Each of these methods can minimize a norm of the error vector at each iteration. We then investigate how to choose a matrix that approximates the inverse of A. We use this approximating matrix to scale the conjugate gradient method, a special case of the variational methods, in a way that can be implemented efficiently on parallel or vector computers. If the scaling matrix instead approximates A itself, as with an incomplete LU decomposition, the scaled iterative algorithm may not utilize parallel or vector machines efficiently.
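A minimal sketch of such a scaled (preconditioned) conjugate gradient iteration, assuming the scaling matrix is applied through a user-supplied routine apply_Minv that multiplies a vector by an approximation of A^{-1} (the names scaled_cg and apply_Minv are illustrative, not from the thesis); because the scaling enters only through matrix-vector products, the iteration vectorizes and parallelizes well:

```python
import numpy as np

def scaled_cg(A, b, apply_Minv, tol=1e-10, maxiter=1000):
    # Conjugate gradient on SPD A, "scaled" by an approximate inverse:
    # apply_Minv(r) should return M r for some SPD M approximating A^{-1}.
    x = np.zeros_like(b)
    r = b - A @ x                # residual
    z = apply_Minv(r)            # scaled (preconditioned) residual
    p = z.copy()                 # search direction
    rz = r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)    # step length along p
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = apply_Minv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```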
If A is a positive definite matrix, a polynomial in A will sometimes approximate A^{-1} very well, but we need to estimate the extreme eigenvalues of A. The optimal polynomial for the set of matrices A with the same condition number cond_2(A) is a properly shifted and scaled Chebyshev polynomial. Let λ_1 and λ_n be the smallest and the largest eigenvalues of A, and let λ_1' and λ_n' be their estimates. We show that if λ_n' is a little larger than λ_n (this is easy to achieve), there exists a large range between λ_1 and λ_n for λ_1' such that the scaled conjugate gradient method with λ_1' and λ_n' converges faster than that with the exact extreme eigenvalues λ_1 and λ_n.
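A minimal sketch of this idea, assuming a standard Chebyshev iteration (as found, e.g., in Saad's Iterative Methods for Sparse Linear Systems; the function name and the arguments lam1_est and lamn_est, standing for the estimates λ_1' and λ_n', are illustrative, not from the thesis): k steps starting from z = 0 apply a fixed shifted and scaled Chebyshev polynomial in A to a vector r, so the result approximates A^{-1} r using only matrix-vector products:

```python
import numpy as np

def chebyshev_apply_Ainv(A, r, lam1_est, lamn_est, k=5):
    # Run k steps of Chebyshev iteration on A z = r starting from z = 0.
    # The result is p(A) r for a shifted and scaled Chebyshev polynomial p
    # built from the eigenvalue estimates, so p(A) approximates A^{-1}.
    theta = 0.5 * (lamn_est + lam1_est)   # center of [lam1_est, lamn_est]
    delta = 0.5 * (lamn_est - lam1_est)   # half-width of the interval
    sigma = theta / delta
    rho = 1.0 / sigma
    z = np.zeros_like(r)
    res = r.copy()                        # residual of A z = r at z = 0
    d = res / theta                       # first correction direction
    for _ in range(k):
        z = z + d
        res = res - A @ d
        rho_next = 1.0 / (2.0 * sigma - rho)
        d = rho_next * rho * d + (2.0 * rho_next / delta) * res
        rho = rho_next
    return z
```

Under these assumptions, the routine can serve as the scaling step of the conjugate gradient sketch above, e.g. apply_Minv = lambda r: chebyshev_apply_Ainv(A, r, lam1_est, lamn_est).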
- Dissertation Note:
- Ph.D., The Pennsylvania State University, 1982.
- Note:
- Source: Dissertation Abstracts International, Volume: 43-10, Section: B, page: 3293.
- Part Of:
- Dissertation Abstracts International
43-10B