Computer Science 6676, Spring 1999                                         March 3, 1999
 

Assignment 5

Due Wednesday, April 7

Chapter 6, problems 13, 17, 20.


 

Class Project, Phase II

Due Wednesday, April 7

  1. Code and debug the remaining routines required for a complete trust region method for unconstrained optimization, using finite difference gradients and Hessians. Algorithm D6.1.1 (the main driver for unconstrained optimization) should serve as the guide for the routines you need and for the driver program. You may use either the dogleg step or the hook step; that is, you will need to implement either

       HOOKDRIVER / HOOKSTEP / TRUSTREGUP

     or

       DOGDRIVER / DOGSTEP / TRUSTREGUP

     as well as the stopping routines UMSTOP and UMSTOP0, and the appropriate driver program. You do not need to implement scaling (equivalently, you may assume typx(i) = S_x(i) = 1 for each i, and typf = 1). Sketches of the main pieces appear after this list.

  2. Verify that your program runs correctly on each of the five test functions in the appendix. For problems 1-3, use n = 2, 4, and 6, respectively (as well as larger values if you like). Run each of these problems from the initial points x_0, 10 x_0, and 100 x_0, where x_0 is the point given in the appendix. For stopping tolerances, use gradtol = (macheps)^(1/3), steptol = (macheps)^(2/3), and itnlimit = 200. For each problem, print the termination code, the final value of x, the function value at that point, and the number of iterations taken. Also tabulate and print the total number of function evaluations taken, not counting those used for finite difference derivatives. Finally, for each problem accumulate and print out (or record by hand): the number of iterations that take the Newton step; the number of iterations in which the trust region is decreased within the iteration (and how many times it was decreased in each case); and the number of iterations in which the trust region is increased within the iteration (and how many times it was increased in each case). The sketches below suggest one way to accumulate these counts.
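
For the finite difference gradients and Hessians in part 1, here is a minimal Python sketch of the standard forward-difference formulas, with scaling dropped (typx(i) = 1) as the assignment permits. The relative step sizes macheps^(1/2) for the gradient and macheps^(1/3) for the function-value-only Hessian are the usual choices; the names and interfaces are illustrative, not the book's.

    import numpy as np

    def fd_gradient(f, x, macheps=np.finfo(float).eps):
        """Forward-difference gradient: g_i ~ (f(x + h_i e_i) - f(x)) / h_i."""
        n = len(x)
        fx = f(x)
        g = np.empty(n)
        for i in range(n):
            # Step proportional to |x_i| (with a floor of 1), signed like x_i.
            h = np.sqrt(macheps) * max(abs(x[i]), 1.0) * (1.0 if x[i] >= 0 else -1.0)
            xp = x.copy()
            xp[i] += h
            h = xp[i] - x[i]            # recompute h to reduce rounding error
            g[i] = (f(xp) - fx) / h
        return g

    def fd_hessian(f, x, macheps=np.finfo(float).eps):
        """Symmetric Hessian approximation from function values only,
        using second forward differences; costs O(n^2) f-evaluations."""
        n = len(x)
        fx = f(x)
        h = np.array([macheps ** (1.0 / 3.0) * max(abs(xi), 1.0) for xi in x])
        fplus = np.empty(n)
        for i in range(n):
            xp = x.copy(); xp[i] += h[i]
            fplus[i] = f(xp)
        H = np.empty((n, n))
        for i in range(n):
            for j in range(i, n):
                xpp = x.copy(); xpp[i] += h[i]; xpp[j] += h[j]
                H[i, j] = (f(xpp) - fplus[i] - fplus[j] + fx) / (h[i] * h[j])
                H[j, i] = H[i, j]
        return H

Note that these evaluations of f are exactly the ones part 2 says not to count toward the total number of function evaluations.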
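
If you choose the DOGDRIVER / DOGSTEP route, the central computation is selecting a step along the dogleg curve. The following is a minimal single-dogleg sketch, assuming the model Hessian H is symmetric positive definite; the book's DOGSTEP builds the slightly more elaborate double dogleg curve, but the structure is the same. The returned flag records whether the full Newton step was taken, which is one of the statistics part 2 asks for; the name and interface are ours.

    import numpy as np

    def dogleg_step(g, H, delta):
        """Approximately minimize the quadratic model
        m(s) = f + g.s + 0.5 s.H.s  subject to  ||s|| <= delta.
        Returns (step, took_newton_step)."""
        s_newton = -np.linalg.solve(H, g)      # unconstrained model minimizer
        if np.linalg.norm(s_newton) <= delta:
            return s_newton, True              # Newton step fits: take it
        # Cauchy point: model minimizer along the steepest descent direction.
        s_cauchy = -((g @ g) / (g @ H @ g)) * g
        norm_cp = np.linalg.norm(s_cauchy)
        if norm_cp >= delta:
            # Even the Cauchy point lies outside: steepest descent to the boundary.
            return -(delta / np.linalg.norm(g)) * g, False
        # Walk from the Cauchy point toward the Newton point until the path
        # crosses the boundary: solve ||s_cauchy + tau*d|| = delta, tau in (0, 1].
        d = s_newton - s_cauchy
        a = d @ d
        b = 2.0 * (s_cauchy @ d)
        c = norm_cp ** 2 - delta ** 2
        tau = (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
        return s_cauchy + tau * d, False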
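
Part 2 also asks you to count within-iteration trust region decreases and increases. One natural place to accumulate those counts is the routine that accepts or rejects a candidate step. The sketch below is a deliberately simplified stand-in for TRUSTREGUP and its internal doubling, not the book's algorithm: the acceptance threshold 1e-4, the agreement threshold 0.75, the halving/doubling factors, and the stats keys are all illustrative choices. Arguments are NumPy arrays, and step_fn can be a routine like dogleg_step above.

    def trust_region_iteration(f, x, fx, g, H, delta, step_fn, stats):
        """One outer iteration: find an acceptable step, shrinking or (via
        internal doubling) growing the trust radius along the way."""
        decreases = increases = 0
        # Shrink until the actual reduction is an acceptable fraction of the
        # reduction predicted by the quadratic model.
        while True:
            s, took_newton = step_fn(g, H, delta)
            pred = -(g @ s + 0.5 * (s @ (H @ s)))   # predicted reduction (> 0)
            f_new = f(x + s)
            if fx - f_new >= 1e-4 * pred:
                break                               # acceptable step
            delta *= 0.5                            # poor step: shrink and retry
            decreases += 1
        # Internal doubling: while agreement is very good and the step is on
        # the boundary, try a doubled radius; keep the new step only if it helps.
        while not took_newton and fx - f_new >= 0.75 * pred:
            s2, took_newton2 = step_fn(g, H, 2.0 * delta)
            f2 = f(x + s2)
            if f2 >= f_new:
                break                               # larger step did not help
            delta *= 2.0
            increases += 1
            s, f_new, took_newton = s2, f2, took_newton2
            pred = -(g @ s + 0.5 * (s @ (H @ s)))
        # Record the per-problem statistics requested in part 2: the list
        # lengths give the number of iterations with a decrease/increase, and
        # the entries give how many times in each case.
        stats['newton_steps'] += int(took_newton)
        if decreases:
            stats.setdefault('decrease_counts', []).append(decreases)
        if increases:
            stats.setdefault('increase_counts', []).append(increases)
        return x + s, f_new, delta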
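
Finally, the stopping routine. UMSTOP's two convergence tests are a relative gradient test and a relative step test; with scaling dropped they reduce to the checks below. With IEEE double precision (macheps about 2.2e-16), the assigned tolerances work out to gradtol of roughly 6.1e-6 and steptol of roughly 3.7e-11. The code values returned here are illustrative placeholders, not necessarily the book's termination codes.

    import numpy as np

    def umstop_sketch(x_plus, x_prev, f_plus, g, itncount,
                      gradtol, steptol, itnlimit=200):
        """Termination code: 0 = continue, 1 = gradient test satisfied,
        2 = step test satisfied, 3 = iteration limit reached."""
        # Relative gradient: max_i |g_i| * max(|x_i|, 1) / max(|f|, 1).
        relgrad = np.max(np.abs(g) * np.maximum(np.abs(x_plus), 1.0)) \
                  / max(abs(f_plus), 1.0)
        if relgrad <= gradtol:
            return 1
        # Relative step: max_i |x+_i - x_i| / max(|x+_i|, 1).
        relstep = np.max(np.abs(x_plus - x_prev) / np.maximum(np.abs(x_plus), 1.0))
        if relstep <= steptol:
            return 2
        if itncount >= itnlimit:
            return 3
        return 0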