18.409 The Behavior of Algorithms in Practice    28 Feb    Lecture 6

Lecturer: Dan Spielman
Scribe: Brian Sutton

Outline:
- When Gaussian elimination with partial pivoting fails
- Analysis of complete pivoting
- Speeding up the solution of linear systems

1 When partial pivoting fails

1.1 An example

In lecture 3, we saw an example of a linear system that Matlab fails to solve. Here is a similar system, with 1's on the diagonal, -1's below the diagonal, and 1's in the last column:

    [  1   0   0  ...  0   1 ]
    [ -1   1   0  ...  0   1 ]
    [ -1  -1   1  ...  0   1 ]  x = b
    [  .   .   .       .   . ]
    [ -1  -1  -1  ... -1   1 ]

We will solve this system using Gaussian elimination with partial pivoting. First, we permute the rows so that the (1,1)-entry has the largest magnitude in the first column. Actually, we do not need to permute; our pivot is already in the right place. Eliminating the nonzero entries in the first column (adding the first row to each row below it) doubles the last entry of every lower row:

    [  1   0   0  ...  0   1 ]
    [  0   1   0  ...  0   2 ]
    [  0  -1   1  ...  0   2 ]
    [  .   .   .       .   . ]
    [  0  -1  -1  ... -1   2 ]

The rest of the reduction proceeds similarly, obtaining

    [  1   0   0  ...  0     1     ]
    [  0   1   0  ...  0     2     ]
    [  0   0   1  ...  0     4     ]
    [  .   .   .       .     .     ]
    [  0   0   0  ...  1  2^(n-1)  ]

The exponentially large magnitude of the lower-right entry is an artifact of the reduction. Because of rounding, the lower-right entry of the original matrix may not be recoverable from the reduced matrix. Technically, Matlab solves this particular system, because every entry in the last column is exactly a power of two. See the file kahan.m for a very similar system that Matlab cannot solve.

2 Smoothed analysis

What does smoothed analysis tell us about the example?
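The blow-up in the example above is easy to reproduce. Below is a short pure-Python sketch of Gaussian elimination with partial pivoting on this matrix (our own illustration; this is not the kahan.m file mentioned in the notes, and the function names are ours):

```python
def gepp(A):
    """Gaussian elimination with partial pivoting; returns the reduced matrix U."""
    A = [row[:] for row in A]
    n = len(A)
    for k in range(n):
        # Partial pivoting: bring the largest-magnitude entry of column k,
        # among rows k..n-1, up to row k.
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
    return A

def example_matrix(n):
    """1 on the diagonal, -1 below it, 1 in the last column."""
    return [[1.0 if j == i or j == n - 1 else (-1.0 if j < i else 0.0)
             for j in range(n)] for i in range(n)]

n = 20
U = gepp(example_matrix(n))
print(U[n - 1][n - 1])  # -> 524288.0, i.e. 2^19
```

Every elimination step doubles the trailing entries of the last column, so the lower-right entry of U grows like 2^(n-1) even though every entry of the input has magnitude 1.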
On one hand, we have already seen that if the entire matrix has noise in it, then Gaussian elimination requires relatively few bits of precision, even without pivoting. On the other hand, if we perturb only the nonzero entries of the system above, then the error still creeps in. Hence, our current application of smoothed analysis to Gaussian elimination fails for this example. Applying smoothed analysis to Gaussian elimination with complete pivoting may resolve this issue.

3 Complete pivoting

Gaussian elimination with complete pivoting permutes rows and columns to ensure that the pivot is the largest-magnitude entry in the entire submatrix that remains to be row reduced. It is easy to check that complete pivoting guarantees ||L||_max <= 1. The algorithm also gives

Theorem 1 (Wilkinson). ||U||_max <= ||A||_max * n^((1/4) log n + O(1)).

Therefore, complete pivoting requires no more than O(log^2 n) bits of precision for the best possible accuracy. However, people do not use complete pivoting in practice. Partial pivoting rarely fails, and complete pivoting requires twice as many floating-point operations, because of the work spent finding the largest-magnitude entry (the pivot). We should also note that it may be possible to improve upon Wilkinson's theorem. Nearly always, the growth factor is actually less than n. Perhaps by assuming that there is noise in the input, we can find a better bound. Or perhaps we can calculate a better bound directly.

3.1 Proof of Wilkinson's theorem

Let A_r denote the lower-right r-by-r submatrix that is obtained after n - r eliminations, so that A_n = A. Assume that complete pivoting has already been performed, so that the largest-magnitude entry in A_r is its upper-left entry. Let p_r denote this entry, i.e. the pivot, and let q_r = log p_r. Then ||A||_max = p_n and ||A_r||_max = p_r, and since ||U||_max <= max_r ||A_r||_max = max_r p_r, we need to bound p_r / p_n. Wilkinson's theorem will be a consequence of the next theorem, stated for the last pivot p_1; the intermediate pivots are bounded similarly.

Theorem 2.

    q_1 - q_n <= log( (2 * 3^(1/2) * 4^(1/3) * ... * n^(1/(n-1)))^(1/2) ) + (1/2) log n.

Proof. The result follows from two facts:

    det(A_r) = prod_{i=1}^{r} p_i,                (1)

    |det(A_r)| <= r^(r/2) * p_r^r.                (2)

Fact (1) holds because elimination does not change determinants, so det(A_r) is the product of the pivots used in reducing A_r. Inequality (2) is Hadamard's inequality: each of the r columns of A_r has Euclidean norm at most r^(1/2) * p_r.

Immediately, (1) and (2) give

    prod_{i=1}^{r} p_i <= r^(r/2) * p_r^r,

which implies

    sum_{i=1}^{r} q_i <= (r/2) log r + r * q_r.   (3)

Now, rewriting: from (3), sum_{i=1}^{r-1} q_i <= (r/2) log r + (r-1) q_r; dividing by r(r-1) and using 1/(r(r-1)) = 1/(r-1) - 1/r, we obtain

    (1/(r-1)) sum_{i=1}^{r-1} q_i - (1/r) sum_{i=1}^{r} q_i <= (log r) / (2(r-1)).   (4)

Also, from (1) with r = n,

    sum_{i=1}^{n} q_i = log |det(A_n)|.           (5)

Summing inequality (4) over r = 2, ..., n, the left-hand side telescopes, and with (5) we get

    q_1 - (1/n) log |det(A_n)| <= sum_{r=2}^{n} (log r)/(2(r-1))
                                = log( (2 * 3^(1/2) * 4^(1/3) * ... * n^(1/(n-1)))^(1/2) ),

so

    q_1 <= (1/n) log |det(A_n)| + log( (2 * 3^(1/2) * ... * n^(1/(n-1)))^(1/2) ).

Using (2) with r = n, we can bound the first term on the right-hand side:

    (1/n) log |det(A_n)| <= (1/2) log n + q_n.

Thus

    q_1 <= q_n + log( (2 * 3^(1/2) * ... * n^(1/(n-1)))^(1/2) ) + (1/2) log n,

and finally,

    q_1 - q_n <= log( n^(1/2) * (2 * 3^(1/2) * 4^(1/3) * ... * n^(1/(n-1)))^(1/2) ).

Exponentiating, p_1 <= n^(1/2) * (2 * 3^(1/2) * ... * n^(1/(n-1)))^(1/2) * p_n, which is at most n^((1/4) log n + O(1)) * p_n.

4 Speeding up the solution of linear systems

If a matrix A = (a_ij) is banded within b diagonals of the main diagonal, i.e. a_ij != 0 implies |i - j| <= b, then Gaussian elimination with partial pivoting can reduce the matrix in time O(nb^2) and space O(nb). You can check that the upper-triangular matrix U in the factorization is banded within approximately 2b diagonals.

Given an arbitrary matrix, we would like to permute the rows and columns so that the matrix is banded. If the matrix is symmetric, then this problem becomes a graph problem. Define the graph of a symmetric n-by-n matrix A = (a_ij) to be the undirected graph on n vertices in which there is an edge (i, j) if and only if a_ij != 0. The diagonal of A is irrelevant.

4.1 Breadth-first search

4.1.1 Cuthill-McKee '69

One of the first algorithms to transform matrices into banded matrices was developed by Cuthill and McKee in 1969. Given an arbitrary vertex v, the algorithm is as follows:

1. Perform a breadth-first search on the graph of A rooted at v. Label each vertex with its distance from v, so that the vertices are partitioned into distinct levels.
2. Order the vertices by level, i.e. find a permutation sigma such that if sigma(i) <= sigma(j), then the level of i is no higher than the level of j.
3. Permute the rows and columns of A according to the permutation sigma.

Note that if the BFS partitions the vertices so that each level contains relatively few vertices, then the resulting matrix is banded into a relatively small
region. To be precise, if L_i is the set of vertices in level i, then the bandwidth of the permuted matrix is at most 3 max_i |L_i|.

4.1.2 Turner '86

In 1986, Turner analyzed Cuthill-McKee, finding an improvement in the process. His improvement was:

1. Start with a random root v.
2. Run Cuthill-McKee.
3. Let w be the last vertex, corresponding to row and column n.
4. Run Cuthill-McKee again from w.

Turner's analysis was probabilistic. For a fixed b, he constructed a random (0,1)-matrix in which each entry within b diagonals of the main diagonal was 1 with probability p and all other entries were 0. He found that for p >= (log n)/n, the bandwidth of the matrix is almost certainly b. Cuthill-McKee produced a permuted matrix with bandwidth about 3b. In contrast, Turner's improvement achieved a bandwidth of b + O(log n).

4.1.3 Feige-Krauthgamer '98

Feige and Krauthgamer performed a more general analysis. Starting with an arbitrary (0,1)-matrix of bandwidth b, they flipped entries within the band with probability p. The case in which the original matrix is the zero matrix is Turner's analysis. Feige and Krauthgamer's analysis deals with a certain class of perturbed matrices, in the spirit of smoothed complexity analysis.

4.2 Graph separators

For a graph G = (V, E), an alpha-vertex separator is a subset C of V such that V \ C is the disjoint union of sets A and B with no edges between A and B, and |A|, |B| <= alpha * |V|. If we can find an alpha-vertex separator, then we can make progress toward rearranging the matrix into banded form, since there are no edges between the vertices in A and the vertices in B.

Theorem 3 (Lipton-Tarjan). Every n-node planar graph has a (2/3)-vertex separator C satisfying |C| <= sqrt(8n).

Definition 4. If S is a family of graphs closed under taking subgraphs, then S satisfies an f(n)-separator theorem if there exists an alpha < 1 such that every graph in S has an alpha-separator of size at most f(|V|).

Theorem 5 (Lipton-Rose-Tarjan). Suppose S satisfies a c * n^gamma separator theorem. If G is the graph of the non-zero structure of a linear system and G is in S, then we can solve the system with space (= fill) and time given by the following table:

                     space = fill      time
    gamma = 1/2      O(n log n)        O(n^(3/2))
    gamma > 1/2      O(n^(2 gamma))    O(n^(3 gamma))

4.3 Non-planar graphs

What if the graph of a matrix is not planar? There are a number of heuristic contenders; one of the first was developed by Kernighan and Lin. A newer, promising approach uses the eigenvectors of the Laplacian matrix of the graph of A. Each diagonal entry of the Laplacian matrix of a graph is the degree of the corresponding vertex; the off-diagonal entries are the negated edge weights (-1 for each edge of an unweighted graph). The Laplacian matrix is positive semidefinite, with 0 as an eigenvalue of multiplicity 1 if the graph is connected. The eigenvector corresponding to the second-smallest eigenvalue can be used to order the vertices, and the ordering can be used to cut the graph efficiently. Details will be provided in the next lecture.
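To illustrate Wilkinson's bound on complete pivoting numerically, here is a small pure-Python experiment (our own; the random Gaussian test matrix and all names are ours, not from the notes). It tracks the growth factor max_r ||A_r||_max / ||A||_max during elimination with complete pivoting and compares it with the bound n^(1/2) * (2 * 3^(1/2) * ... * n^(1/(n-1)))^(1/2); as remarked above, the observed growth is typically far below the bound, and below n:

```python
import math
import random

def growth_complete_pivoting(A):
    """Eliminate with complete pivoting; return max_r ||A_r||_max / ||A||_max."""
    A = [row[:] for row in A]
    n = len(A)
    a0 = max(abs(A[i][j]) for i in range(n) for j in range(n))
    g = a0
    for k in range(n):
        # Complete pivoting: largest-magnitude entry of the trailing submatrix.
        i0, j0 = max(((i, j) for i in range(k, n) for j in range(k, n)),
                     key=lambda t: abs(A[t[0]][t[1]]))
        A[k], A[i0] = A[i0], A[k]          # row swap
        for row in A:                      # column swap
            row[k], row[j0] = row[j0], row[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
        if k + 1 < n:
            g = max(g, max(abs(A[i][j])
                           for i in range(k + 1, n) for j in range(k + 1, n)))
    return g / a0

def wilkinson_bound(n):
    """n^(1/2) * (2 * 3^(1/2) * 4^(1/3) * ... * n^(1/(n-1)))^(1/2)."""
    return math.sqrt(n) * math.sqrt(
        math.prod(r ** (1.0 / (r - 1)) for r in range(2, n + 1)))

random.seed(0)
n = 30
A = [[random.gauss(0, 1) for _ in range(n)] for _ in range(n)]
g = growth_complete_pivoting(A)
print(g, wilkinson_bound(n))
```

On random instances like this one, the measured growth is a small constant while the bound for n = 30 is in the hundreds, consistent with the suspicion that Wilkinson's theorem can be improved.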
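The Cuthill-McKee procedure described above is short enough to sketch directly. The following pure-Python sketch is our own illustration of the version in the notes: it orders vertices level by level via BFS (without the within-level degree sorting used in the full algorithm), then measures the bandwidth of the relabeled matrix pattern:

```python
from collections import deque

def cuthill_mckee(adj, root):
    """Order vertices level by level via BFS from root (the lecture's version,
    without sorting within levels by degree)."""
    n = len(adj)
    order, seen = [], {root}
    q = deque([root])
    while q:
        u = q.popleft()
        order.append(u)
        for w in sorted(adj[u]):
            if w not in seen:
                seen.add(w)
                q.append(w)
    order.extend(v for v in range(n) if v not in seen)  # other components
    return order

def bandwidth(adj, order):
    """Max |i - j| over edges after relabeling vertices by position in order."""
    pos = {v: i for i, v in enumerate(order)}
    return max((abs(pos[u] - pos[w]) for u in range(len(adj)) for w in adj[u]),
               default=0)

# Example: a path 0-1-...-7 whose vertex labels have been scrambled, so the
# natural labeling has large bandwidth.
relabel = [4, 0, 6, 2, 7, 1, 5, 3]
n = 8
adj = [set() for _ in range(n)]
for i in range(n - 1):
    u, w = relabel[i], relabel[i + 1]
    adj[u].add(w)
    adj[w].add(u)

before = bandwidth(adj, list(range(n)))
after = bandwidth(adj, cuthill_mckee(adj, relabel[0]))
print(before, after)  # -> 6 1: the BFS order recovers the path
```

Rooted at an endpoint of the path, every BFS level is a single vertex, so the bound of 3 max_i |L_i| on the bandwidth is as good as it gets.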
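The spectral ordering described in the last paragraph can be sketched with nothing but power iteration. In the pure-Python sketch below (our own; the shift c and the deterministic start vector are implementation choices, not from the notes), sorting by the eigenvector of the second-smallest Laplacian eigenvalue recovers the natural order of a scrambled path graph:

```python
def laplacian(n, edges):
    """Graph Laplacian: degree on the diagonal, -1 for each (unweighted) edge."""
    L = [[0.0] * n for _ in range(n)]
    for u, v in edges:
        L[u][u] += 1.0
        L[v][v] += 1.0
        L[u][v] -= 1.0
        L[v][u] -= 1.0
    return L

def fiedler_order(n, edges, iters=2000):
    """Order vertices by the eigenvector of the second-smallest eigenvalue of L.

    Power iteration on cI - L, projecting out the all-ones eigenvector of
    eigenvalue 0; c = 2 * (max degree) + 1 dominates every eigenvalue of L.
    """
    L = laplacian(n, edges)
    c = 2.0 * max(L[i][i] for i in range(n)) + 1.0
    x = [float(i) for i in range(n)]  # deterministic start vector
    for _ in range(iters):
        mean = sum(x) / n
        x = [xi - mean for xi in x]   # stay orthogonal to the ones vector
        y = [c * x[i] - sum(L[i][j] * x[j] for j in range(n)) for i in range(n)]
        norm = max(abs(v) for v in y) or 1.0
        x = [v / norm for v in y]
    return sorted(range(n), key=lambda i: x[i])

# A path graph 0-1-2-3-4-5 with scrambled labels; the spectral ordering
# recovers the path order (up to reversal).
relabel = [3, 0, 5, 1, 4, 2]
edges = [(relabel[i], relabel[i + 1]) for i in range(5)]
order = fiedler_order(6, edges)
print(order)
```

For a path graph the second eigenvector is monotone along the path, which is why sorting by it reproduces the path; the next lecture's details explain how the same vector is used to cut more general graphs.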