You are expected to be in class every day. On the other hand, if you need to miss a class on a day when there is no exam, just do it. (In such a case, permission from me is not required.)
This page pertains only to Professor Taylor's section of Mathematics 5150, for the spring semester of 2002. As far as I know, this is the only section occurring this semester. For other sections or other semesters, other details and regulations will no doubt apply.
An attempt will be made to keep this page up-to-date, but this is not guaranteed. Students are responsible for every assignment made in class, whether or not it ultimately appears on this page.
Likely topics:
Text: Matrix Analysis, by Roger A. Horn and Charles R. Johnson (Cambridge University Press). ISBN: 0-521-38632-2.
Our book has blue letters on the cover. There is another book by these authors, with a similar title, which has red letters on the cover. Two years ago, one of the bookstores here got mixed up and was offering the one with red letters. Do not buy that one; it is not correct for this course.
Final Grades and comments on the final exam
1 2 0
2 1 1
0 1 1
(Maybe another matrix to come. Stay tuned.)
Find an orthogonal matrix O such that O'BO is diagonal, where B is the matrix
2 0 1
0 1 0
1 0 2
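A quick numerical sanity check for this sort of problem (a numpy sketch, not a substitute for the hand computation the exercise wants): for a real symmetric matrix, `numpy.linalg.eigh` returns an orthonormal matrix of eigenvectors, which serves as the O in O'BO.

```python
import numpy as np

# The symmetric matrix B from the exercise.
B = np.array([[2.0, 0.0, 1.0],
              [0.0, 1.0, 0.0],
              [1.0, 0.0, 2.0]])

# For a symmetric matrix, eigh returns the eigenvalues (ascending)
# and an orthonormal set of eigenvectors stacked as the columns of O.
vals, O = np.linalg.eigh(B)
D = O.T @ B @ O   # should equal diag(vals)
```

Here the eigenvalues come out as 1, 1, 3, so the diagonal form is diag(1, 1, 3).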
In other words, find an orthogonal matrix O such that O'AO is a rotation about the z-axis, where A is the matrix
 .60  .48  -.64
-.80  .36  -.48
 .00  .80   .60
Can you tell the angle of this rotation before you get O? (Hint: trace.)
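The hint can be checked numerically (a sketch, assuming the nine decimal entries form the rows of A in the order given): for a 3 x 3 rotation matrix, trace(A) = 1 + 2 cos(theta), so the angle is available before any similarity is computed.

```python
import numpy as np

A = np.array([[ 0.60, 0.48, -0.64],
              [-0.80, 0.36, -0.48],
              [ 0.00, 0.80,  0.60]])

# A is orthogonal with det(A) = +1, i.e. a rotation.  Its trace
# determines the rotation angle via trace(A) = 1 + 2 cos(theta).
theta = np.arccos((np.trace(A) - 1) / 2)
```

With trace(A) = 1.56 this gives cos(theta) = 0.28, an angle of about 73.7 degrees.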
 3  18  -25
-4   1  -25
 0  20   -5
Find the Jordan canonical form of each of the following four matrices:

4 1 2 3     4 1 2 3     4 0 2 3     4 0 2 2
0 4 2 2     0 4 0 2     0 4 2 2     0 4 2 2
0 0 4 1     0 0 4 1     0 0 4 0     0 0 4 0
0 0 0 4     0 0 0 4     0 0 0 4     0 0 0 4

(While you're at it, what is the minimal polynomial of each of them? This is easy, once you have the Jordan form.)
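A sketch of how one could check an answer here (assuming one plausible reading of the first matrix; the four matrices appear to have been printed side by side): the Jordan block sizes for the single eigenvalue 4 can be read off from the ranks of the powers of the nilpotent part.

```python
import numpy as np

# One reading of the first matrix in the list.
M = np.array([[4, 1, 2, 3],
              [0, 4, 2, 2],
              [0, 0, 4, 1],
              [0, 0, 0, 4]], dtype=float)

# With the single eigenvalue 4, set N = M - 4I (nilpotent).  The
# number of Jordan blocks of size >= k is rank(N^(k-1)) - rank(N^k).
N = M - 4 * np.eye(4)
ranks = [np.linalg.matrix_rank(np.linalg.matrix_power(N, k)) for k in range(5)]
```

Here ranks = [4, 3, 2, 1, 0], so this matrix has a single Jordan block of size 4, and its minimal polynomial is (x - 4)^4.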
Solve the differential equation x'' - 2x' + x = 0 by the methods outlined on pages 133-134. Also solve the system of differential equations
x_1'' + 2 x_1 + x_2 = 0
x_2'' + 2 x_2 + x_1 = 0
by the same method.
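A numerical sanity check, not the textbook method: the characteristic polynomial r^2 - 2r + 1 has the double root r = 1, so the general solution of the first equation should be (c1 + c2 t) e^t. The sketch below verifies this with finite differences.

```python
import numpy as np

# Trial solution coming from the double characteristic root r = 1.
def x(t, c1=1.0, c2=1.0):
    return (c1 + c2 * t) * np.exp(t)

h = 1e-4
t = np.linspace(0.0, 2.0, 11)
x1 = (x(t + h) - x(t - h)) / (2 * h)            # central difference for x'
x2 = (x(t + h) - 2 * x(t) + x(t - h)) / h**2    # central difference for x''
residual = x2 - 2 * x1 + x(t)                    # should be ~ 0
```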
Identify the quadric surface 2x^2 + y^2 + 2z^2 + 2xz = 10, by the method implied in 4.0.2 (pp 167-168).
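The principal-axes computation behind that method can be sketched numerically (the exercise of course wants it done by hand): write the quadratic form as v'Qv with Q symmetric, then diagonalize Q orthogonally.

```python
import numpy as np

# 2x^2 + y^2 + 2z^2 + 2xz = v'Qv, where the cross term 2xz
# contributes 1 to each of the (1,3) and (3,1) slots of Q.
Q = np.array([[2.0, 0.0, 1.0],
              [0.0, 1.0, 0.0],
              [1.0, 0.0, 2.0]])

vals, O = np.linalg.eigh(Q)   # O'QO = diag(vals), O orthogonal
# In the rotated coordinates u = O'v the equation becomes
#   vals[0] u1^2 + vals[1] u2^2 + vals[2] u3^2 = 10.
```

The eigenvalues are 1, 1, 3, all positive, so the surface is the ellipsoid u1^2 + u2^2 + 3 u3^2 = 10 in the rotated coordinates.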
Let A be the 2 x 2 matrix
0 1
1 1
Prove (by induction) that the n-th power of A is the matrix
F(n-1) F(n)
F(n)   F(n+1)
where F(n) is the Fibonacci sequence 0, 1, 1, 2, 3, 5, 8, 13, ... (i.e. F(0) = 0, F(1) = 1, F(2) = 1, F(3) = 2, and so on). Apply Corollary 5.6.14 (using a convenient matrix norm) to evaluate the limit as n approaches infinity of the n-th-root sequence F(n+2)^{1/n}. (To do this, you will need first to evaluate rho(A) by solving the characteristic equation.) An arbitrary linear recurrence relation can be studied in the same way; one example should suffice to explain the basics. Let the sequence G(n) be defined by the recurrence G(n) = G(n-1) + G(n-2) + G(n-3) (with any three starting values G(0), G(1), G(2)). This sequence can be analyzed with powers of the 3x3 matrix B given by
0 1 0
0 0 1
1 1 1
Let v be the vector whose three entries are G(0), G(1), G(2). Prove (by induction) that the three entries of B^n v are G(n), G(n+1), G(n+2). Estimate rho(B) by finding some large powers of the matrix and applying 5.6.14 (you probably cannot solve exactly for the eigenvalues). What is the asymptotic value of G(n)?
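Both claims are easy to check by machine (a numpy sketch; the exercise itself asks for proofs). The second half illustrates the estimate rho(B) ~ ||B^n||^(1/n) from Corollary 5.6.14, using the infinity norm.

```python
import numpy as np

# Fibonacci via powers of A; expect A^10 = [[F(9), F(10)], [F(10), F(11)]].
A = np.array([[0, 1], [1, 1]], dtype=np.int64)
A10 = np.linalg.matrix_power(A, 10)

# The recurrence matrix B, and the spectral-radius estimate
# rho(B) ~ ||B^n||^(1/n) (Corollary 5.6.14, infinity norm).
B = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 1, 1]], dtype=float)
n = 60
est = np.linalg.norm(np.linalg.matrix_power(B, n), np.inf) ** (1.0 / n)
```

A^10 comes out as [[34, 55], [55, 89]], and est is close to 1.839, the real root of x^3 = x^2 + x + 1, which governs the asymptotic growth of G(n).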
Let A be the 3 x 3 matrix
0 0 1
1 E 0
0 1 0
Think of E as a sort of error or uncertainty in measurement, which might be close to zero. What can one learn about the three eigenvalues of A from a direct application of Gershgorin's Theorem? (Not much, eh!) Now let w be exp(2 pi i/3), a non-real cube root of 1 lying in the upper half of the complex plane. (The other non-real cube root of 1 is w^2, which happens also to be the conjugate of w.) Let U be the unitary matrix 1/sqrt(3) times
1  1    1
1  w^2  w
1  w    w^2
It is known (and you should check) that U yields a unitary diagonalization of A when E = 0. What are the eigenvalues in this case? Next compute the matrix U*AU in the case of arbitrary E (don't forget the conjugates in U*!). If E is relatively small, then U*AU will be very strongly diagonally dominant and Gershgorin's Theorem may profitably be applied. So, what can one learn about the three eigenvalues of A from an application of Gershgorin's Theorem to U*AU? (Give three disks that contain the eigenvalues.) This is an example of a perturbation result -- we begin with a method (namely a similarity via U) that yields a full answer for E = 0, and then look for estimates on how the answer may shift when one goes to non-zero E. (P.S. Calculations with w are easier if you keep in mind that w^3 = 1, that the conjugate of w is w^2, and that 1 + w + w^2 = 0.)
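A sketch of the perturbation computation for one assumed value of E (here E = 0.05; the exercise asks for the general pattern): form U*AU and read off the Gershgorin disks.

```python
import numpy as np

E = 0.05   # an assumed small uncertainty
A = np.array([[0, 0, 1],
              [1, E, 0],
              [0, 1, 0]], dtype=complex)

w = np.exp(2j * np.pi / 3)
U = np.array([[1, 1,    1],
              [1, w**2, w],
              [1, w,    w**2]]) / np.sqrt(3)

M = U.conj().T @ A @ U      # nearly diagonal when E is small

# Gershgorin disks of M: centers on the diagonal, radii equal to the
# off-diagonal absolute row sums (each radius is 2|E|/3 here).
centers = np.diag(M)
radii = np.abs(M).sum(axis=1) - np.abs(centers)
```

The centers sit near the three cube roots of 1 (shifted by E/3), so for small E one gets three small disjoint disks, one per eigenvalue.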
In this and the next two exercises, we find some approximate eigenvectors, in the sense of Corollary 8.1.29 (page 493). For example, let the 3x3 matrix B be defined as in the 9-th homework set, and let the sequence G(n) be as defined there, with the starting values G(0) = G(1) = G(2) = 1. For a given m, say m = 10, let x be the transpose of (G(m), G(m+1), G(m+2)). What are the best values of alpha and beta satisfying the hypothesis of 8.1.29, namely alpha x <= B x <= beta x? (By best, one means the greatest alpha and the smallest beta.) Hence what estimate does 8.1.29 provide for rho(B)? Try a larger value of m, and see how closely you can approximate rho(B).
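The computation is short enough to sketch in full (numpy, with m = 10 as in the exercise):

```python
import numpy as np

B = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 1, 1]], dtype=float)

# G(n) with starting values G(0) = G(1) = G(2) = 1.
G = [1.0, 1.0, 1.0]
for _ in range(20):
    G.append(G[-1] + G[-2] + G[-3])

m = 10
x = np.array(G[m:m + 3])        # (G(m), G(m+1), G(m+2)), a positive vector
ratios = (B @ x) / x            # componentwise ratios of Bx to x
alpha, beta = ratios.min(), ratios.max()   # best constants in 8.1.29
# 8.1.29 then gives alpha <= rho(B) <= beta.
```

Increasing m tightens the bracket; rho(B) is about 1.8393.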
Let Dn be the n x n tridiagonal matrix
0 1 0 0 ... 0 0 0
1 0 1 0 ... 0 0 0
0 1 0 1 ... 0 0 0
        ...
0 0 0 0 ... 1 0 1
0 0 0 0 ... 0 1 0
Using the transpose of (5, 8, 10, 10, ..., 10, 10, 8, 5) as an approximate eigenvector for Dn, show that rho(Dn) lies between 1.6 and 2.0 (for n > 3). Verify that the transpose of (sin pi/(n+1), sin 2pi/(n+1), ..., sin n pi/(n+1)) is a positive eigenvector of Dn. What is its eigenvalue? Hence, what is the precise value of rho(Dn)? Then verify that, as n goes to infinity, the limit of rho(Dn) is 2.0. (You may have to brush up your trigonometry!)
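Both parts can be sandbox-checked for one value of n (here n = 8; the exercise asks for a proof valid for all n):

```python
import numpy as np

n = 8
Dn = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)

# The approximate eigenvector from the exercise.
u = np.array([5, 8] + [10] * (n - 4) + [8, 5], dtype=float)
r = (Dn @ u) / u
alpha, beta = r.min(), r.max()      # come out as 1.6 and 2.0

# The exact positive eigenvector and its eigenvalue.
k = np.arange(1, n + 1)
v = np.sin(k * np.pi / (n + 1))
lam = 2 * np.cos(np.pi / (n + 1))   # = rho(Dn); tends to 2 as n grows
```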
Let C be the 3x3 matrix
1 1 1
1 2 1
1 1 1
and let x be the transpose of (12, 17, 12). What are the best alpha and beta for which one may invoke 8.1.29? What bounds does one therefore obtain on the top eigenvalue rho(C)? Next, find another approximate eigenvector, with integer entries, which yields a sharper estimate of the eigenvalue rho(C). Finally, find the precise value of rho(C), and an associated eigenvector. (If you can _guess_ an eigenvector from what has come before, so much the better!)
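A sketch of the first part (it also reveals the answer to the last part, so skip it if you want to guess for yourself):

```python
import numpy as np

C = np.array([[1.0, 1.0, 1.0],
              [1.0, 2.0, 1.0],
              [1.0, 1.0, 1.0]])
x = np.array([12.0, 17.0, 12.0])

r = (C @ x) / x                  # componentwise ratios of Cx to x
alpha, beta = r.min(), r.max()   # 8.1.29: alpha <= rho(C) <= beta
```

Here Cx = (41, 58, 41), so alpha = 58/17 and beta = 41/12. The exact top eigenvalue is 2 + sqrt(2), with eigenvector (1, sqrt(2), 1) -- which explains why (12, 17, 12) works well: 17/12 is a good rational approximation to sqrt(2).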
This exercise may help elucidate the power method, which was introduced in the somewhat mysterious Exercise 7 on page 63 (assigned for February 16). Here we are able to carry out the power method for C, and moreover to analyze the situation with the methods of Section 8.2. (But be aware: the analysis of Section 8.2 applies only to non-negative matrices, whereas the power method is applicable to a wider class of matrices -- not all matrices, but not just non-negative ones.) Let us continue with the matrix C of the previous exercise. According to Theorem 8.2.8 (page 499), the powers of C/rho(C) approach a very simple matrix L. All columns of L are multiples of a single vector x, where x is an eigenvector with eigenvalue rho(C). Moreover, according to clause (j) on page 498, the convergence with respect to the infinity-norm is of the order of K r^n, where r is any number larger than the quotient of the next-to-top eigenvalue to the top eigenvalue, and K is a constant. What is this ratio for the matrix C? What is the eighth power of this ratio? Compute the eighth power of C. (Don't worry about the scalar factor that should really be there in a fastidious invocation of Theorem 8.2.9.) Let c be the first column of the eighth power of C. How close is c to being an eigenvector of C? (You could answer this question by again obtaining the best alpha and beta for an application of 8.1.29. One expects alpha and beta to agree within roughly alpha*r^8.)
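The whole computation can be sketched in a few lines (numpy; the eigenvalue ratio is obtained here from a library routine, whereas the exercise wants it from the previous exercise's exact answer):

```python
import numpy as np

C = np.array([[1.0, 1.0, 1.0],
              [1.0, 2.0, 1.0],
              [1.0, 1.0, 1.0]])

# Ratio of the next-to-top eigenvalue to the top one; this governs
# the convergence rate of the power method.
vals = np.sort(np.abs(np.linalg.eigvals(C)))[::-1]
ratio = vals[1] / vals[0]            # (2 - sqrt 2)/(2 + sqrt 2), ~ 0.1716

C8 = np.linalg.matrix_power(C, 8)
c = C8[:, 0]                         # first column: nearly an eigenvector

r = (C @ c) / c
alpha, beta = r.min(), r.max()       # agree to roughly alpha * ratio^8
```

ratio^8 is about 7.5e-7, and indeed beta - alpha comes out far smaller than the bracket obtained from the crude approximate eigenvector (12, 17, 12).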
Optional. Carry out a similar power analysis of the matrix B from the 9-th Exercise Set, including the rate of convergence. You should already have found (above) an approximate value of rho(B), but this exercise will also require an approximate value of the next-to-largest eigenvalue. This may be obtained either by deflation (Ex 8, page 63), or simply by dividing the characteristic polynomial by (x - r), where r is the approximate value of rho(B).
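The polynomial-division route can be sketched as follows (using the approximate value r = 1.8393 for rho(B); the characteristic polynomial of B is x^3 - x^2 - x - 1, which you should verify):

```python
import numpy as np

# Characteristic polynomial of B, highest degree first.
p = np.array([1.0, -1.0, -1.0, -1.0])

r = 1.8393                      # approximate rho(B) from the earlier estimate
q, rem = np.polydiv(p, np.array([1.0, -r]))   # divide out (x - r)

# The two remaining eigenvalues are the roots of the quadratic q;
# they form a complex-conjugate pair, so they share one modulus.
other = np.roots(q)
second = np.abs(other).max()
```

This gives a next-to-largest eigenvalue modulus of about 0.737, so the relevant convergence ratio for the power method on B is roughly 0.737/1.839, or about 0.40 per step.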