
IT Licentiate theses 2002-007

Semi-Toeplitz Preconditioning for Linearized Boundary Layer Problems

SAMUEL SUNDBERG

UPPSALA UNIVERSITY Department of Information Technology

Semi-Toeplitz Preconditioning for Linearized Boundary Layer Problems

BY

SAMUEL SUNDBERG

December 2002

DEPARTMENT OF INFORMATION TECHNOLOGY, SCIENTIFIC COMPUTING, UPPSALA UNIVERSITY

UPPSALA SWEDEN

Dissertation for the degree of Licentiate of Philosophy in Numerical Analysis at Uppsala University 2002

Semi-Toeplitz Preconditioning for Linearized Boundary Layer Problems

Samuel Sundberg

Samuel.Sundberg@tdb.uu.se

Department of Information Technology Scientific Computing Uppsala University

Box 337 SE-751 05 Uppsala

Sweden

http://www.it.uu.se/

© Samuel Sundberg 2002
ISSN 1404-5117

Printed by Eklundshofs Grafiska AB

Abstract

We have defined and analyzed a semi-Toeplitz preconditioner for time-dependent and steady-state convection-diffusion problems. Analytic expressions for the eigenvalues of the preconditioned systems are obtained. An asymptotic analysis shows that the eigenvalue spectrum of the time-dependent problem is reduced to two eigenvalues as the number of grid points goes to infinity. The numerical experiments support the results of the theoretical analysis, and the preconditioner exhibits robust behavior for stretched grids.

A semi-Toeplitz preconditioner for the linearized Navier–Stokes equations for compressible flow is proposed and tested. The preconditioner is applied to the linear system of equations to be solved in each time step of an implicit method. The equations are solved with flat plate boundary conditions and are linearized around the Blasius solution. The grids are stretched in the direction normal to the plate, and the ratio between the time step and the space step is varied. The preconditioner works well in all tested cases and outperforms the method without preconditioning both in number of iterations and in execution time.

Keywords: Iterative solution, preconditioning, finite difference methods, boundary layer flows.


Contents

1 Summary
  1.1 Numerical context
  1.2 Solving linear systems of equations
      1.2.1 Krylov subspaces
      1.2.2 Methods for Hermitian matrices
      1.2.3 Methods for non-Hermitian matrices
  1.3 Preconditioning
      1.3.1 Application of the preconditioner
      1.3.2 Some important classes of preconditioners
  1.4 Semi-Toeplitz preconditioning
      1.4.1 Analysis of a semi-Toeplitz preconditioner for a convection-diffusion problem
      1.4.2 Solving the linearized Navier–Stokes equations using semi-Toeplitz preconditioning
  1.5 Conclusions

A Analysis of a semi-Toeplitz preconditioner for a convection-diffusion problem

B Solving the linearized Navier–Stokes equations using semi-Toeplitz preconditioning


Chapter 1

Summary

In order to present my research on semi-Toeplitz preconditioning properly, some context is needed. The field of scientific computing is connected to several other fields of science, such as mathematics, computer science, physics, chemistry, and the biosciences. We therefore need to understand these connections in order to understand the role of numerical methods properly.

Many different fields of science use mathematical modeling, to some extent, to describe the essential features of the objects they study in just a few equations. These equations are elegant, since they contain an implicit description of complex interactions in a dense format. They suffer, however, from the drawback that they generally lack a solution that can be expressed as an explicit formula. In order to predict the behavior of an actual object, we need ways to solve these equations approximately. This is done using numerical methods.

Numerical methods have a long and rich history that predates the computer age. It was, however, the invention of the electronic computer that made these methods interesting to a wider audience. Today scientists use numerical simulation as a research tool as important as theory and experiment. Computational methods serve as an engineering tool in the automobile industry to simulate crash tests, they are used in the aeronautical industry to design aeroplanes that emit less noise, and there exist many other applications as well.


1.1 Numerical context

The research presented here is concerned with iterative methods for solving linear systems of equations. These systems typically arise in scientific computing when we discretize Partial Differential Equations (PDEs) using finite differences or some other discretization method, such as finite elements or finite volumes. There are of course other applications, such as circuit theory and signal processing, where iterative methods are used to solve systems of equations, but here we have restricted ourselves to the study of iterative methods in a PDE setting.

The use of preconditioning to enhance the performance of iterative solvers in [2, 19] was the turning point that made modern iterative methods widely used in numerical computations. Finding an efficient preconditioner for a given problem may be even more important than choosing the right iterative method. Here we study a preconditioning strategy that is based on knowledge of the origin of the system [25, 24].

In the reports that comprise this licentiate thesis we study the solution of equations that are derived from boundary layer problems in fluid dynamics. The ultimate goal is to make it feasible to use semi-Toeplitz preconditioning in Navier–Stokes solvers, and therefore we make some effort to ensure that the model problems we solve exhibit the same essential features.

The rest of this overview presents some material on iterative methods, on their preconditioning in general, and on semi-Toeplitz preconditioners.

1.2 Solving linear systems of equations

There are several methods to solve a linear system of equations. The most well-known method is Gaussian Elimination (GE), which is a direct method. For a linear system of equations

Ax = b, (1.1)

where A is an n-by-n nonsingular matrix, GE requires 2n³/3 arithmetic operations as well as storage of n² entries in the general case. For the large sparse matrices that often appear in scientific computing this is very inefficient, and we need to exploit the fact that only a few of the entries in the matrix are nonzero. This is difficult with GE, and other direct methods, such as frontal, multifrontal, and supernodal methods, have been developed to handle sparse matrices [4, 5]. Despite large improvements in this area there are still


many situations where iterative methods are the only feasible choice, e.g. for discretizations of three-dimensional PDEs.
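The gap between sparse storage and dense factorization cost can be made concrete with a back-of-the-envelope count. The following sketch (an illustration added here, not taken from the thesis) uses a 1-D Poisson (tridiagonal) matrix, for which only about 3n of the n² entries are nonzero:

```python
# Illustration: storage and work for a tridiagonal (1-D Poisson) matrix
# versus dense Gaussian elimination on the same n-by-n system.
n = 1000
nnz = 3 * n - 2            # nonzeros in a tridiagonal matrix of order n
dense_entries = n * n      # entries a dense factorization must store
ge_flops = 2 * n**3 // 3   # leading-order operation count of dense GE

print(f"nonzeros: {nnz}")              # ~3n
print(f"dense storage: {dense_entries}")  # n^2
print(f"GE operations ~ {ge_flops}")      # 2n^3/3
```

For n = 1000, the sparse matrix holds fewer than 3000 numbers, while dense GE stores a million entries and performs on the order of 10⁹ operations; the disparity only grows for three-dimensional discretizations.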

1.2.1 Krylov subspaces

The idea behind most iterative methods in use today is to find the solution in the subspace spanned by successive multiplications with A. Here we use b as the starting vector, although a different starting vector is sometimes used. We thus find x_1 ∈ span{b}, and then compute the matrix-vector product Ab to find x_2 ∈ span{b, Ab}. At step k of this process we find the approximate solution as

x_k ∈ span{b, Ab, …, A^{k−1}b}.  (1.2)

This subspace is usually called the Krylov subspace of the matrix A with initial vector b at step k. We use the notation K_k(A, b) to denote this subspace.
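As a sketch of the construction (an illustration added here, not code from the thesis), one can build the basis of K_k(A, b) by repeated multiplication with A and then pick the member of the subspace minimizing the residual via least squares. In practice the raw power basis becomes ill-conditioned and one orthogonalizes it instead, e.g. with the Arnoldi process:

```python
import numpy as np

# Build the Krylov subspace K_k(A, b) by repeated multiplication with A,
# then find the residual-minimizing element x_k = V y, with y solving
# min_y ||b - A V y|| by least squares.  (A didactic sketch only: real
# solvers orthogonalize the basis, since the columns b, Ab, A^2 b, ...
# quickly become nearly linearly dependent.)
rng = np.random.default_rng(0)
n, k = 50, 10
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))  # well-conditioned test matrix
b = rng.standard_normal(n)

V = np.empty((n, k))          # columns: b, Ab, ..., A^{k-1} b
V[:, 0] = b
for j in range(1, k):
    V[:, j] = A @ V[:, j - 1]

y, *_ = np.linalg.lstsq(A @ V, b, rcond=None)
x_k = V @ y
print("relative residual:", np.linalg.norm(b - A @ x_k) / np.linalg.norm(b))
```

Because the eigenvalues of this test matrix cluster around 1, even k = 10 steps shrink the relative residual substantially.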

To find the best approximate solution in a given Krylov subspace is not a trivial task. First, there are several ways to define the criterion an optimal solution should fulfill: some methods minimize the norm of the residual, ‖b − Ax_k‖, while other methods find a residual that is orthogonal to the subspace. Second, there is a choice of how much iteration overhead¹ we will allow.

1.2.2 Methods for Hermitian matrices

For real symmetric and complex Hermitian matrices the task is not so difficult, however, as there exist methods with a fixed amount of overhead that generate the optimal solution in some sense. The best-known method in this class is the Conjugate Gradient (CG) method [14], which minimizes the A-norm² of the error over the subspace. CG requires the matrix to be positive definite in order to compute its three-term recurrence, which is based on LU-decomposition of the Lanczos matrix. For general Hermitian matrices, the Minimum Residual (MINRES) method [21] generates the approximation that minimizes the residual over the Krylov subspace. The coefficients of the MINRES recurrence are computed by means of Givens rotations.
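The division of labor between the two methods can be seen with SciPy's built-in implementations (an illustration added here, not code from the thesis): CG is applied to a symmetric positive definite matrix, MINRES to a symmetric indefinite one.

```python
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import cg, minres

# CG requires a symmetric positive definite matrix; MINRES handles
# general symmetric (possibly indefinite) matrices.  Both use short
# recurrences, so the per-iteration overhead is fixed.
n = 60
# 1-D Poisson matrix: symmetric positive definite -> CG applies.
A_spd = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
# Shifted copy: still symmetric, but indefinite -> use MINRES instead.
A_indef = (A_spd - identity(n, format="csr")).tocsr()
b = np.ones(n)

x_cg, info_cg = cg(A_spd, b)        # info == 0 signals convergence
x_mr, info_mr = minres(A_indef, b)

print("CG residual:    ", np.linalg.norm(A_spd @ x_cg - b) / np.linalg.norm(b))
print("MINRES residual:", np.linalg.norm(A_indef @ x_mr - b) / np.linalg.norm(b))
```

Applying CG to the indefinite matrix here would be unsafe: its recurrence can break down when the Lanczos matrix has no stable LU-decomposition, which is exactly the gap MINRES fills.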

For CG and MINRES we have sharp estimates of the quality of the approximations these methods yield [7]. The residual r_k at step k generated by

¹The extra work done in each iteration besides the matrix-vector multiply.
²‖·‖_A denotes the A-norm, ‖v‖_A = √⟨v, Av⟩.


MINRES fulfills

‖r_k‖₂ / ‖r₀‖₂ ≤ min_{p_k} max_{i=1,…,n} |p_k(λ_i)|,

where the minimum is taken over polynomials p_k of degree at most k with p_k(0) = 1, and λ_i denote the eigenvalues of A.