http://interval.louisiana.edu/courses/487/487-spring-2004-notebooks.html

Math. 487-01, Spring, 2004 Notebooks

These are Mathematica notebooks used during class.  They are not all perfect, but they illustrate how to explore, experiment, and learn.  Most of them were created during class itself.

R. Baker Kearfott


  1. January 15, 2004 (Simple examples from the "tour," a Newton's method equation, numeric versus symbolic computations, a simple 3-dimensional plot.)
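      As a taste of the kinds of computations in this notebook, here is a brief sketch (the cubic and the plotted surface are illustrative choices, not necessarily the ones used in class):

          1/3 + 1/7                                 (* exact rational arithmetic: 10/21 *)
          N[1/3 + 1/7, 20]                          (* the same sum, numerically, to 20 digits *)
          FindRoot[x^3 - 2 x - 5 == 0, {x, 2}]      (* a Newton-type iteration started at x = 2 *)
          Plot3D[Sin[x y], {x, 0, Pi}, {y, 0, Pi}]  (* a simple 3-dimensional plot *)
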
  2. January 20, 2004 (Formatting text and titles, improper integrals, numerical versus symbolic integration.)
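      A sketch of the symbolic-versus-numerical contrast (the improper integral here is an illustrative choice):

          Integrate[Exp[-x^2], {x, 0, Infinity}]    (* symbolic: Sqrt[Pi]/2 *)
          NIntegrate[Exp[-x^2], {x, 0, Infinity}]   (* numerical quadrature of the same integral *)
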
  3. January 22, 2004 (Working with series, including using series as objects in arithmetic, and approximating integrals using series.  Also, using DSolve to solve a differential equation, and solving a differential equation approximately using series.  Manipulating the output of DSolve and the series to define other functions and plot them.)  Note: On the board, we reviewed Taylor polynomials, including the error term, and talked about the "Big Oh" notation.
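      A sketch of these ideas (the equation y' = y is an illustrative stand-in for the one used in class):

          ser = Series[Exp[x], {x, 0, 5}]       (* a series object, carrying its O[x]^6 term *)
          ser - ser^2                           (* series objects participate in ordinary arithmetic *)
          Integrate[Normal[ser], x]             (* approximate an integral via the truncated series *)

          sol = DSolve[{y'[x] == y[x], y[0] == 1}, y[x], x]
          f[x_] = y[x] /. First[sol]            (* manipulate the output rule to define a function *)
          Plot[f[x], {x, 0, 2}]
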
  4. January 27, 2004 (Using DSolve to solve higher order equations both directly and by rewriting them as systems; using ParametricPlot to graph phase planes; more manipulation using rules and the substitution operator, as well as more on plotting; numerical solution of more general differential equations; graphical comparison of the solutions to y'' + y = 0 and y'' + sin(y) = 0.)
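      A sketch of the comparison (the initial conditions are illustrative choices):

          sol = NDSolve[{y''[t] + Sin[y[t]] == 0, y[0] == 1, y'[0] == 0}, y, {t, 0, 20}]
          ParametricPlot[Evaluate[{y[t], y'[t]} /. sol], {t, 0, 20}]     (* phase plane *)

          lin = NDSolve[{y''[t] + y[t] == 0, y[0] == 1, y'[0] == 0}, y, {t, 0, 20}]
          Plot[Evaluate[{y[t] /. sol, y[t] /. lin}], {t, 0, 20}]         (* pendulum vs. linearization *)
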
  5. January 29, 2004 Illustration of lists and vectors in the context of differential equations;  more on using Plot to depict solutions of differential equations; use of vectors in solving algebraic equations.  (Also, a discussion of accumulation of error in sums with many terms was presented on the board.)
  6. February 3, 2004: We discussed floating point numbers and the relationship to relative and absolute error on the board in class.  Although we discussed the relation to the Mathematica options AccuracyGoal and PrecisionGoal, we did not produce any notebooks.
  7. February 5, 2004 We used DSolve and NDSolve to explore the effects of changing the relative and absolute error goals on a simple initial value problem that we could solve by hand.  I discussed the use of standard existence / uniqueness theory to guide analysis of numerical solutions.  I discussed the fact that heuristics are used in determining step size and other method parameters, and that these heuristics may sometimes fail.  I also talked about interpolation versus extrapolation.
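      A sketch of the kind of experiment described (y' = y is a stand-in for the hand-solvable problem we used):

          approx = NDSolve[{y'[t] == y[t], y[0] == 1}, y, {t, 0, 5},
                           AccuracyGoal -> 4, PrecisionGoal -> 4]   (* loosened absolute/relative goals *)
          (y[5] /. First[approx]) - E^5          (* compare against the exact solution E^t *)
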
  8. Tuesday, February 10, 2004  The notebook contains illustrations of solving systems of equations, checking the residual, computing matrix inverses, and computing and using LU-factorizations, both for symbolic matrices and numerical (approximate) matrices.  In conjunction with that material, I reviewed the Gaussian elimination with back substitution process, briefly explained what an LU-factorization is, and explained how to interpret condition numbers (the relative error in  the output is bounded by the condition number times the relative error in the input).
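      A sketch of these computations on a small illustrative matrix (not the matrices from the notebook):

          a = {{4., 3.}, {6., 3.}};  b = {10., 12.};
          x = LinearSolve[a, b]
          a.x - b                            (* check the residual *)
          Inverse[a]
          {lu, perm, cnum} = LUDecomposition[a]
          (* lu packs both triangular factors, perm records the pivoting,
             and cnum estimates the condition number of a *)
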
  9. Thursday, February 12, 2004 We began the class by reviewing the difference between exact and approximate computations, and reviewed the concept of a condition number.  The notebook contains a construction of a 20 by 20 Hilbert matrix, and indicates that the bound on the condition number is around 10^18, indicating a total loss of accuracy.  We also mentioned the ODE solutions we saw on February 5, and how a numerical approximate solution may be present even over intervals in which an actual solution doesn't exist.  We discussed situations where exact solutions are impractical or impossible to compute (such as for large systems of linear equations or for integration of the system of differential equations modelling the position of an asteroid); yet, for such systems, the approximate solution provides no mathematical proof of where the exact solution lies, or even of whether the exact solution exists.  I spent most of the remainder of the period introducing interval arithmetic, and showing how interval arithmetic can give rigorous bounds on the exact result.  The notebook contains some elementary computations with interval arithmetic in Mathematica.  We will add linear algebra with interval arithmetic to this notebook next time.  For a fairly high-level introduction to interval arithmetic, see my Euromath Bulletin article.  Also see the transparencies for a talk I gave in 2002 at Southeastern Louisiana University.
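      A sketch of the two themes of the day (HilbertMatrix is built into current Mathematica versions; in older versions it came from a standard add-on package):

          h = N[HilbertMatrix[20]];             (* the 20 by 20 Hilbert matrix, in machine precision *)
          {lu, perm, cnum} = LUDecomposition[h];
          cnum                                  (* an enormous condition estimate: no digits are trustworthy *)

          Interval[{1, 2}] + Interval[{3, 4}]   (* Interval[{4, 6}] *)
          Interval[{1, 2}] Interval[{-1, 1}]    (* products account for all sign combinations *)
          Sin[Interval[{0, Pi}]]                (* rigorous enclosure of the range: Interval[{0, 1}] *)
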
  10. Tuesday, February 17, 2004:  We discussed interpolating polynomials on the board, including both the error in interpolating polynomials as approximations to functions (the classical error formula was presented and interpreted) and the error in computing the coefficients of interpolating polynomials by solving the Vandermonde system.  I showed how a Vandermonde system is set up by constructing one for a simple example (showing where the equations come from), then I used Mathematica to solve the Vandermonde system so formed, and to form and plot the associated interpolating polynomial.  We also looked at interval solution of a linear system of equations with interval coefficients using Mathematica.  We briefly examined Mathematica's For loop.  Finally, we used Table within Mathematica first to generate the set of x_i in the third assignment, then to use that list of x_i to generate the entire Vandermonde matrix.  (Sorry, the in-class computations somehow appear to have been lost.  See me if you have questions about them.)
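      A sketch of building a Vandermonde system with Table (the nodes x_i here are illustrative, not the assignment's):

          xi = Table[i/5, {i, 1, 5}]                        (* five sample nodes *)
          vand = Table[xi[[i]]^(j - 1), {i, 5}, {j, 5}];    (* the Vandermonde matrix *)
          c = LinearSolve[N[vand], N[Sin[Pi xi]]]           (* interpolate data from sin(pi x) *)
          p[x_] = Sum[c[[k]] x^(k - 1), {k, 5}];
          Plot[{p[x], Sin[Pi x]}, {x, 0, 1}]                (* interpolant versus the function *)
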
  11. Thursday, February 19, 2004:  We first discussed, on the board, Mathematica's For, While, and Do functions, and their relationship to similar constructs in earlier, traditional programming languages such as Fortran and "C".  We then produced an initial notebook that illustrates these three constructs, and that also contains some symbolic computations related to a model of total error (roundoff plus approximation error) for approximation of a derivative by a forward difference quotient.  We then examined the notebook from the March 13, 2002 lecture, which uses a "Do" loop to study total error (roundoff plus approximation) as the step in the difference quotient is decreased.  (In addition to running that notebook in interval arithmetic, as it stands posted, we also computed the difference quotients with floating point arithmetic, to illustrate that we obtain similar results.)  Finally, we examined the notebook from the March 1, 2002 lecture, which contains an illustration of a "While" construct.  We improved upon this notebook by programming Newton's method in general as a Mathematica function.  This appears in our second in-class notebook for the day.
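      A sketch along the lines of the Newton's method function we wrote (not the class notebook itself):

          newton[f_, x0_, tol_, maxit_] :=
            Module[{x = N[x0], k = 0},
              While[Abs[f[x]] > tol && k < maxit,
                x = x - f[x]/f'[x];              (* the Newton step *)
                k++];
              {x, k}]                            (* return the approximate root and step count *)

          newton[Function[t, t^2 - 2], 1, 10^-12, 25]      (* converges to Sqrt[2] in a few steps *)
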
  12. Thursday, February 26, 2004: We first discussed details of the assignment due Tuesday, March 2, including setting up, interpretation, and usefulness of the interval computations, the approximation error in polynomial interpolation (and how it can be large), and other details.  I then reviewed power series solution of differential equations, and examined a notebook from February 8, 2002 illustrating how we can use Mathematica to solve such systems of equations.  I have posted an improved version of this notebook that contains a plot, etc.  In the remainder of the class, I introduced Matlab, explaining that, although it does not have notebooks, it has the "diary" command and "m" files, and that "m" files can contain either scripts or functions.  I recommended that students have the Recktenwald text on hand for the remainder of the course.
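      A sketch of the power series approach for the simple equation y' = y, y(0) = 1 (an illustrative stand-in for the equations in the February 8, 2002 notebook):

          yser = Sum[c[k] x^k, {k, 0, 5}];                               (* series with unknown coefficients *)
          eqs = LogicalExpand[Series[D[yser, x] - yser, {x, 0, 4}] == 0] (* match coefficients of each power *)
          rules = First[Solve[eqs && c[0] == 1, Table[c[k], {k, 0, 5}]]];
          yser /. rules            (* 1 + x + x^2/2 + x^3/6 + x^4/24 + x^5/120, as expected *)
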
  13. Tuesday, March 2, 2004: We used Matlab to do problems 1 and 6 of the assignment due today.  In the process, we saw some elementary ways of manipulating matrices in Matlab, we examined the For loop, and we saw how to create Matlab scripts.
  14. For Thursday, March 4, 2004:  I introduced the next assignment, due Tuesday, March 16.  I then explained the difference between Matlab functions and Matlab scripts, and we examined two simple Matlab functions, x_squared.m and difference_quotient.m.  After looking at the Matlab function fprintf (see pp. 102-105 of our Recktenwald text), we produced a Matlab script, difference_table.m.  We discussed the output from difference_table.m from the point of view of roundoff and truncation error.
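      The in-class script was in Matlab; the same experiment can be sketched in Mathematica notation as follows (f = exp and x = 1 are illustrative choices):

          (* total error of the forward difference quotient (f(x+h) - f(x))/h at x = 1 *)
          Table[{10.^-k, Abs[(Exp[1 + 10.^-k] - Exp[1])/10.^-k - Exp[1]]},
                {k, 1, 12}] // TableForm
          (* the error first shrinks with h (truncation error) and then grows again
             (roundoff error), with the best h near 10^-8 in machine arithmetic *)
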
  15. For Tuesday, March 9, 2004: There were no notebooks.  However, we accomplished the following:
  16. For Thursday, March 11, 2004: After reviewing our analysis of Newton's method, we carefully constructed a Matlab function to perform Newton's method.  We debugged this function in class, then we tried it on f(x) = sin(x), for various starting points.  We looked at what was actually happening inside the iteration by setting breakpoints and using Matlab's interactive debugger.
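      The class function was in Matlab; for the flavor of the experiment, here is the same trial using the newton sketch given under the February 19 entry above:

          Map[First[newton[Sin, #, 10^-12, 50]] &, {1.2, 2.0, 4.0, 7.0}]
          (* each run lands on some multiple of Pi, and not necessarily
             on the root nearest the starting point *)
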
  17. For Tuesday, March 16, 2004: I introduced the concept of vector norm with the abstract definition, and I discussed the 1, 2, and infinity norms.  I then explained the definition of the derived matrix norm, and followed that by a derivation of the classical condition number of a matrix.  I provided some simple examples of norm and condition number computation in Matlab in class.
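      A few such computations, sketched here in Mathematica notation (the class examples were in Matlab, and the vector and matrix below are illustrative):

          v = {3., -4.};
          {Norm[v, 1], Norm[v], Norm[v, Infinity]}    (* 7., 5., 4. *)

          a = {{1., 2.}, {3., 4.}};
          {Norm[a, 1], Norm[a, Infinity]}             (* induced matrix norms: 6., 7. *)
          Norm[a] Norm[Inverse[a]]                    (* the 2-norm condition number *)
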
  18. For Thursday, March 18, 2004:
  19. For Tuesday, March 23, 2004:
  20. For Thursday, March 25, 2004:
  21. For Tuesday, March 30, 2004: Basically, material from Chapter 9 of the Recktenwald text.
  22. For Thursday, April 1, 2004: I exhibited the normal equations for linear least squares for general basis functions, in terms of discrete dot products of these basis functions.  I then explained what orthogonal matrices are, and what a QR factorization is.  Finally, I showed how QR factorizations relate to least squares.  I did a short set of Matlab computations illustrating the QR factorization, but I will post that next time, together with least squares computations involving QR factorizations.
  23. For Tuesday, April 6, 2004: I reviewed setting up general least squares, and I illustrated the concepts and notation with the simple example previously introduced in class.  I also reviewed in detail, and with the example, the QR factorization, and use of QR factorizations for solving least squares problems.  I illustrated the use of QR factorizations to solve our example least squares problem in a Matlab session.
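      The session itself was in Matlab; as a sketch, here is the same computation in Mathematica notation, using the class example y(0)=0, y(1)=4, y(2)=5, y(3)=8:

          a = N[{{1, 0}, {1, 1}, {1, 2}, {1, 3}}];   (* basis functions 1, x at the data points *)
          b = {0., 4., 5., 8.};
          {q, r} = QRDecomposition[a];               (* here Transpose[q].r == a *)
          LinearSolve[r, q.b]                        (* {0.5, 2.5}: the least squares line 0.5 + 2.5 x *)
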
  24. For Thursday, April 8, 2004: We first reviewed the derivation of the finite difference collocation equations from March 25.  We then discussed what would happen if we solved the resulting tridiagonal system of equations directly with a general Gaussian elimination program, based on the operations counts we discussed on March 23.  I then discussed the need for using efficient algorithms to solve cutting-edge scientific problems; I followed that with an explanation of why we need to pivot in Gaussian elimination and with mention that pivoting is unnecessary for diagonally dominant systems.  I then briefly pointed out how we can solve a diagonally dominant tridiagonal system without pivoting.  I concluded with a brief overview of sparse matrix technology, including a description of the way that sparse matrices are represented in Matlab.
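      A sketch of the sparse idea in Mathematica notation (the class discussion used Matlab's sparse format; the system below is an illustrative diagonally dominant tridiagonal matrix):

          n = 1000;
          t = SparseArray[{{i_, i_} -> 2., {i_, j_} /; Abs[i - j] == 1 -> -1.}, {n, n}];
          b = Table[1., {n}];
          x = LinearSolve[t, b];      (* the solver can exploit the banded, sparse structure *)
          Norm[t.x - b]               (* residual check *)
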
  25. For Tuesday, April 20, 2004:  I first reviewed the concept of eigenvalue and eigenvector, then illustrated the eigenvalue decomposition A = V D V^(-1), where D is a diagonal matrix containing the eigenvalues of A on the diagonal and V is a matrix whose columns are the corresponding eigenvectors.  (I talked about when this can be done, and when V can be chosen to be orthogonal.)  I then introduced the singular value decomposition (SVD) for arbitrary rectangular matrices as a generalization of the eigenvalue decomposition.  Using computations on the matrices and their norms, I showed how the singular value decomposition can be used to compute the condition number of a matrix.  I did some computations in Matlab to illustrate these singular value decomposition concepts.
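      A sketch of these decompositions on a small illustrative matrix:

          a = {{2., 1.}, {1., 3.}};
          {vals, vecs} = Eigensystem[a];
          v = Transpose[vecs];                    (* columns of v are the eigenvectors *)
          v.DiagonalMatrix[vals].Inverse[v]       (* reconstructs a, up to roundoff *)

          {u, s, w} = SingularValueDecomposition[a];
          Max[Diagonal[s]]/Min[Diagonal[s]]       (* the 2-norm condition number of a *)
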
  26. For Thursday, April 22, 2004: I showed in detail how to use the SVD to find least squares solutions of overdetermined systems and to find minimum-norm solutions to underdetermined systems.  (I explained why the computations work the way they do.)  I then talked about rank-deficient overdetermined systems, and how the SVD can be used to project a matrix that is almost rank-deficient onto a rank-deficient matrix, so that the computations are better conditioned.  I then used Matlab and the SVD to compute the least squares line to the simple example previously introduced (y(0)=0, y(1)=4, y(2)=5, y(3)=8).  Sorry; I failed to properly save that Matlab session.
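      The lost Matlab session cannot be reproduced here, but the computation it contained can be sketched, in Mathematica notation, as follows:

          a = N[{{1, 0}, {1, 1}, {1, 2}, {1, 3}}];
          b = {0., 4., 5., 8.};
          {u, s, v} = SingularValueDecomposition[a];
          Sum[(u[[All, k]].b/s[[k, k]]) v[[All, k]], {k, 2}]
          (* {0.5, 2.5} again: expand b in the left singular vectors, divide by
             the singular values, and recombine with the right singular vectors *)
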
  27. For Tuesday, April 27, 2004: I reviewed use of the singular value decomposition to compute least squares solutions, solutions of minimum norm, and least squares solutions of minimum norm.  I then showed how to form the pseudo-inverse of a matrix from its singular value decomposition, and did various computations illustrating use of the pseudo-inverse.
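      A sketch of the pseudo-inverse computations (same illustrative data as above):

          a = N[{{1, 0}, {1, 1}, {1, 2}, {1, 3}}];
          PseudoInverse[a].{0., 4., 5., 8.}     (* the least squares solution {0.5, 2.5} *)
          PseudoInverse[a].a                    (* recovers the 2 by 2 identity, since a has full rank *)
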
  28. For Thursday, April 29, 2004:  I first showed how the roots of the characteristic equation of a higher-order constant coefficient differential equation determine its solutions; I then described the relationship between these roots and the eigenvalues of the matrix A for the linear system w' = Aw one gets when one rewrites the higher-order differential equation as a system of linear first-order equations.  I then explained the physical significance of the real parts of the eigenvalues of A, and of the eigenvectors of A.  I then showed how Euler's method for approximately solving initial value problems w' = Aw, w(0) given, produces iterates of the form w^(k) = (I + hA)^k w(0), and how this can be written in terms of an eigenvalue-eigenvector expansion of the initial condition w(0).  I concluded by noting that eigenvalue analyses are important in structural engineering, etc.  Example computations supporting these ideas are found in the Matlab diary for the day.
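      A sketch of the Euler growth-factor observation, in Mathematica notation (the system below, from y'' + y = 0, is an illustrative choice; the day's computations were in Matlab):

          a = {{0., 1.}, {-1., 0.}};                       (* y'' + y = 0 written as w' = A w *)
          Eigenvalues[a]                                   (* purely imaginary: +/- I *)
          h = 0.1;  w0 = {1., 0.};
          MatrixPower[IdentityMatrix[2] + h a, 100].w0     (* 100 Euler steps *)
          (* each step multiplies the eigencomponents by 1 + h lambda, and
             |1 + 0.1 I| > 1, so Euler's method slowly spirals outward here *)
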