http://interval.louisiana.edu/courses/487/487-spring-2004-notebooks.html
Math. 487-01, Spring, 2004 Notebooks
These are Mathematica notebooks used during class, most of them created
during the class meetings themselves. They are not all polished, but they
illustrate how to explore, experiment, and learn.
Home page for the course;
Entry page for the list of all courses;
R. Baker Kearfott
-
January 15, 2004 (Simple examples from the "tour," a Newton's method equation,
numeric versus symbolic computations, a simple 3-dimensional plot.)
-
January 20, 2004 (Formatting text and titles, improper integrals, numerical
versus symbolic integration.)
-
January 22, 2004 (Working with series, including using series as objects in
an arithmetic; approximating integrals using series. Also, using DSolve
to solve a differential equation, and solving a differential equation approximately
using series. Manipulating the output of DSolve and the series
to define other functions and to plot them.) Note: On the board, we reviewed Taylor
polynomials, including the error term, and talked about the "Big Oh" notation.
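For reference, the standard (Lagrange) form of the error term is: if p_n is the
degree-n Taylor polynomial of f about x = a, then
f(x) - p_n(x) = f^(n+1)(c) (x - a)^(n+1) / (n+1)! for some c between a and x,
so in particular the error is O((x - a)^(n+1)) as x approaches a.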
-
January 27, 2004 (using DSolve to solve higher order equations both
directly and by rewriting as a system; using ParametricPlot to
graph phase planes; more manipulation using rules and the substitution
operator, as well as more on plotting; numerical solution of more general
differential equations; graphical comparison of the solutions to
y'' + y = 0 and y'' + sin(y) = 0.)
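For reference, the rewriting as a system works by introducing w1 = y and
w2 = y'; for example, y'' + sin(y) = 0 becomes the first-order system
w1' = w2, w2' = -sin(w1).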
-
January 29, 2004 Illustration of lists and vectors in the context of differential
equations; more on using Plot to depict solutions of differential
equations; use of vectors in solving algebraic equations. (Also,
a discussion of accumulation of error in sums with many terms was presented
on the board.)
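To illustrate the point about accumulated roundoff, here is a small sketch I am
adding (in Matlab rather than Mathematica; it is not from the class notebooks):
a long sum of the single-precision value 0.1 drifts well away from the exact answer.

    % Accumulated roundoff in a long sum (illustrative sketch).
    % The exact sum of 0.1 taken 10^7 times is 10^6.
    s_single = single(0);
    s_double = 0;
    for k = 1:1e7
        s_single = s_single + single(0.1);
        s_double = s_double + 0.1;
    end
    fprintf('single: %.8g   double: %.16g   exact: 1000000\n', ...
            s_single, s_double)
    % The single-precision sum is off by a large amount, while the
    % double-precision sum is off only in the last few digits.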
-
February 3, 2004: We discussed floating-point numbers and their relationship
to relative and absolute error on the board in class. Although we
discussed the relation to the Mathematica options AccuracyGoal and
PrecisionGoal, we did not produce any notebooks.
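For reference: if x* approximates x, the absolute error is |x* - x| and the
relative error is |x* - x| / |x|. Roughly speaking, Mathematica's AccuracyGoal
refers to the absolute error of a computed result and PrecisionGoal to its
relative error.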
-
February 5, 2004 We used DSolve and NDSolve to explore the
effects of changing the relative and absolute error goals on a simple initial
value problem that we could solve by hand. I discussed the use of
standard existence / uniqueness theory to guide analysis of numerical solutions.
I discussed the fact that heuristics are used in determining stepsize,
etc. in methods, and heuristics may sometimes fail. I also talked
about interpolation versus extrapolation.
-
Tuesday, February 10, 2004 The notebook contains illustrations of solving
systems of equations, checking the residual, computing matrix inverses,
and computing and using LU-factorizations, both for symbolic matrices and
numerical (approximate) matrices. In conjunction with that material,
I reviewed the Gaussian elimination with back substitution process, briefly
explained what an LU-factorization is, and explained how to interpret condition
numbers (the relative error in the output is bounded by the condition
number times the relative error in the input).
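The class notebook is in Mathematica; as a rough Matlab analogue of the same
computations (an illustration I am adding, with a made-up matrix), one might do:

    % Solve a small system, check the residual, and factor the matrix.
    A = [4 -1 0; -1 4 -1; 0 -1 4];
    b = [1; 2; 3];
    x = A \ b;             % solve A x = b
    r = b - A*x;           % residual; should be at roundoff level
    [L, U, P] = lu(A);     % LU factorization with partial pivoting: P*A = L*U
    x2 = U \ (L \ (P*b));  % solve again using the factors (forward, then back
                           % substitution); x2 should agree with x
    cond(A)                % condition number: the relative error in x can be
                           % about cond(A) times the relative error in A and b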
-
Thursday, February 12, 2004 We began the class by reviewing the difference between
exact and approximate computations, and reviewed the concept of a condition
number. The notebook contains a construction of a 20 by 20 Hilbert
matrix, and indicates that the bound on the condition number is around 10^18,
indicating a total loss of accuracy. We also mentioned the ODE solutions
we saw on February 5, and how a numerical approximate solution may be
produced even over intervals on which an actual solution doesn't exist.
We discussed situations where exact solutions are impractical or impossible
to compute (such as for large systems of linear equations or for integration
of the system of differential equations modelling the position of an asteroid);
yet, for such systems, the approximate solution provides no mathematical
proof of where the exact solution lies, or even of whether the exact solution exists.
I spent most of the remainder of the period introducing interval arithmetic,
and showing how interval arithmetic can give rigorous bounds on the exact
result. The notebook contains some elementary computations with interval
arithmetic in Mathematica. We will add to this notebook with linear
algebra with interval arithmetic next time. For a fairly high-level
introduction to interval arithmetic, see my Euromath
Bulletin article. Also see the transparencies
for a talk I gave in 2002 at Southeastern Louisiana University.
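A Matlab version of the Hilbert-matrix experiment mentioned above (the in-class
notebook is in Mathematica; this sketch is only an analogue):

    H = hilb(20);    % 20 by 20 Hilbert matrix, H(i,j) = 1/(i+j-1)
    cond(H)          % the computed estimate is on the order of 10^18 in double
                     % precision, so essentially no correct digits can be expected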
-
Tuesday, February 17, 2004: We discussed interpolating polynomials
on the board, including both the error in interpolating polynomials as
approximations to functions (the classical error formula was presented
and interpreted) and the error in computing the coefficients of interpolating
polynomials by solving the Vandermonde system. I showed how
a Vandermonde system is set up by constructing it for a simple example (showing
where the equations come from); then I used Mathematica to solve the Vandermonde
system so formed, and to form and plot the associated interpolating polynomial.
We also looked at interval solution of a linear system of equations with
interval coefficients using Mathematica. We briefly examined Mathematica's
For loop. Finally, we used Table within Mathematica first to generate the
set of x_i in the third assignment, and then to use that list
of x_i to generate the entire Vandermonde matrix.
(Sorry, the in-class computations somehow appear to have been lost.
See me if you have questions about them.)
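Since the in-class computations were lost, here is a rough Matlab sketch of the
same idea (my own example nodes and data, not the assignment's, and not the
lost notebook):

    xi = [0; 1; 2; 3];                   % interpolation nodes
    yi = [1; 2; 0; 5];                   % data to be interpolated
    V  = vander(xi);                     % Vandermonde matrix, highest power first
    c  = V \ yi;                         % coefficients of the interpolating cubic
    t  = linspace(0, 3, 200);
    plot(t, polyval(c, t), xi, yi, 'o')  % the cubic passes through the data points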
-
Thursday, February 19, 2004: We first discussed, on the board, Mathematica's
For, While, and Do functions, and the relationship with similar constructs
in earlier, traditional programming languages such as Fortran and "C".
We then produced an initial notebook that illustrates these three constructs, and
also contains some symbolic computations related to a model of total error
(roundoff plus approximation error) for approximation of a derivative by
a forward difference quotient. We then examined the notebook
from the March 13, 2002 lecture, which uses a "Do" loop to study total
error (roundoff plus approximation) as the step in the difference quotient
is decreased. (In addition to running that notebook in interval arithmetic,
as it stands posted, we also computed the difference quotients with floating
point arithmetic to illustrate that we obtain similar results.) Finally,
we examined the notebook
from the March 1, 2002 lecture, which contains an illustration of a
"While" construct. We improved upon this notebook, by programming
Newton's method in general as a Mathematica function. This appears
in our second
in-class notebook for the day.
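As a rough Matlab sketch of the total-error experiment (the class versions are
the Mathematica notebooks linked above; this is only an analogue), one can watch
the error in a forward difference quotient first shrink and then grow as h decreases:

    f = @(x) sin(x);
    exact = cos(1);                        % f'(1)
    h = 1;
    for k = 1:15
        approx = (f(1 + h) - f(1)) / h;    % forward difference quotient
        fprintf('h = %9.1e   error = %9.2e\n', h, abs(approx - exact));
        h = h / 10;
    end
    % Truncation error behaves like (h/2)*|f''|, while roundoff contributes
    % roughly eps/h, so the total error is smallest near h of about 1e-8.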
-
Thursday, February 26, 2004: We first discussed details of the assignment
due Tuesday, March 2, including setting up, interpretation, and usefulness
of the interval computations, the approximation error in polynomial interpolation
(and how it can be large), and other details. I then reviewed power
series solution of differential equations, and examined a notebook from
February 8, 2002 illustrating how we can use Mathematica to solve such
systems of equations. I have posted an improved
version of this notebook that contains a plot, etc.
In the remainder of the class, I introduced Matlab, explaining that,
although it does not have notebooks, it has the "diary" command and "m"
files, and that "m" files can contain either scripts or functions. I recommended
students now have on hand the
Recktenwald text for the remainder of the course.
-
Tuesday, March 2, 2004: We used Matlab to do problem 1 and problem 6 of
the assignment
due today. I produced the following:
During this process, we saw some elementary ways of manipulating matrices
in Matlab, we examined the for loop, and we saw how to create
Matlab scripts.
-
For Thursday, March 4, 2004: I introduced the next
assignment, due Tuesday, March 16. I then explained the difference
between Matlab functions and Matlab scripts, and we examined two simple
Matlab functions, x_squared.m
and difference_quotient.m.
After looking at the Matlab function fprintf (see pp. 102-105
of our Recktenwald text), we produced a Matlab script, difference_table.m.
We discussed the output from difference_table.m
from the point of view of roundoff and truncation error.
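As a reminder of what a Matlab function file looks like, here is a hypothetical
sketch in the spirit of the files above (it is not a copy of the posted
x_squared.m or difference_quotient.m):

    function y = x_squared_sketch(x)
    % X_SQUARED_SKETCH  Return the square of the input (works elementwise).
    % (This would be saved in its own m-file, e.g. x_squared_sketch.m.)
    y = x.^2;

A script, by contrast, is simply a sequence of commands stored in an m-file with
no function line; it runs in the base workspace rather than taking arguments and
returning values.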
-
For Tuesday, March 9, 2004: There were no notebooks. However, we
accomplished the following:
-
I handed back Assignment
3 and explained several aspects of it.
-
I briefly reviewed Assignment 4 (due Tuesday, March 16).
-
I discussed programming style, exhibiting and explaining portions of Chapter
4 of our Recktenwald text. In particular, I emphasized the fact
that a computer program is meant to communicate both with the machine and
with people, as well as to serve as a kind of notes for the programmer.
I mentioned various aspects of style, including choice of variable names,
what goes in the prologue, and appropriate use of in-line documentation.
I also gave some examples of white space. We also discussed the importance
of a consistent style. I gave several examples of consistent style,
both in computer programs and in traditional technical writing.
-
Finally, I reviewed Newton's method, and (next class) will program it
as a simple case study of designing a program and following a good style
pattern.
-
For Thursday, March 11, 2004: After reviewing our analysis of Newton's
method, we carefully constructed a Matlab
function to perform Newton's method. We debugged this function
in class, then we tried it on f(x) = sin(x), for various
starting points. We looked at what was actually happening inside
the iteration by setting breakpoints and using Matlab's interactive debugger.
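A rough sketch of such a function (this is an illustration I am adding, not the
newton.m written in class; the name and stopping criterion are my own choices):

    function [x, niter] = newton_sketch(f, fprime, x, tol, maxit)
    % NEWTON_SKETCH  Newton's method for a single equation f(x) = 0.
    % (This would be saved in its own m-file, e.g. newton_sketch.m.)
    for niter = 1:maxit
        dx = f(x) / fprime(x);   % Newton correction
        x  = x - dx;
        if abs(dx) < tol         % stop when the step is small
            return
        end
    end

For example, newton_sketch(@sin, @cos, 3, 1e-12, 20) converges quickly to pi,
while a starting point where cos(x) is near zero sends the first iterate far away.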
-
For Tuesday, March 16, 2004: I introduced the concept of vector norm with
the abstract definition, and I discussed the 1, 2, and infinity norms.
I then explained the definition of derived matrix norm, and followed that
by a derivation of the classical condition number of a matrix. I
provided some simple
examples of norm and condition number computation in Matlab in class.
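Some computations of the kind in the posted examples (this is a fresh sketch,
not the posted file):

    x = [3; -4; 1];
    norm(x, 1)       % 1-norm: sum of absolute values = 8
    norm(x, 2)       % 2-norm: sqrt(9 + 16 + 1)
    norm(x, inf)     % infinity-norm: largest absolute value = 4
    A = [1 2; 3 4];
    norm(A, inf)                        % derived infinity-norm: max absolute row sum
    norm(A, inf) * norm(inv(A), inf)    % condition number in the infinity norm
    cond(A, inf)                        % the same number, computed directly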
-
For Thursday, March 18, 2004:
-
We first reviewed the geometric interpretation of Newton's method for a
single variable in terms of replacing the graph of a function by its tangent
line.
-
We talked about systems of equations, introducing a notation for systems
that is almost identical to the notation for single equations in a single
variable. We illustrated the concepts with the simple system as in
the Matlab m-file simple_2d.m.
We did a couple of graphs
in Mathematica illustrating this system, and we talked about the solution
set of such a system as the intersection of the curves along which the surfaces
of the function components intersect the coordinate hyperplane.
-
We then talked about tangent planes and tangent hyperplanes, gradients,
and Jacobi matrices. The Jacobi matrix of simple_2d.m
is simple_2d_prime.m.
-
We derived Newton's method for systems in terms of these concepts.
We made minor modifications to our previous program newton.m
for Newton's method for a single variable and single equation, to produce
newton_sys.m,
Newton's method for systems. We tried newton_sys.m
with the function simple_2d.m and Jacobi matrix simple_2d_prime.m,
and found that it worked for the initial point x^(0) = (1, 2)^T.
(A rough sketch of such an iteration appears below.)
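The sketch below is my own illustration of the same iteration (it is not the
posted newton_sys.m, and the example system merely plays the role of simple_2d.m):

    f  = @(x) [x(1)^2 + x(2)^2 - 4;  x(1) - x(2)];   % F(x) = 0
    fp = @(x) [2*x(1), 2*x(2); 1, -1];               % Jacobi (Jacobian) matrix
    x  = [1; 2];                                     % initial point
    for k = 1:10
        dx = fp(x) \ f(x);       % solve  J(x) dx = F(x)
        x  = x - dx;             % Newton step
        if norm(dx) < 1e-12, break, end
    end
    x                            % converges to (sqrt(2), sqrt(2))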
-
For Tuesday, March 23, 2004:
-
I handed back the assignments and made a short
assignment due next Tuesday.
-
I reviewed sections of Recktenwald that are relevant to what we have been
discussing:
-
Condition numbers, §8.3.2, p. 399, and also 8.3.3 through 8.3.6.
-
Gaussian Elimination, §8.3.4
-
Nonlinear systems, §8.5, and Newton's method, in particular, on p. 427.
I recommended that people read all of Chapter 8, for a more complete understanding
of systems of equations.
-
I reviewed LU-decomposition and the back-substitution process on the simple
example Ax = b, where (using Matlab notation) A =
[1 2 3; 4 5 6; 7 8 10] and b = [-1;0;1]. I then proceeded
to count
the total number of multiplications for the factorization phase, writing
this as a sum and simplifying it with a very
short Mathematica program. I also counted the total number of multiplications
needed to do both the forward phase and the back substitution on a single
right-hand side. (The leading-order totals are summarized in a note at the
end of this list.)
-
I interpreted the result in terms of how one would expect the time to solve
a system to increase if the order of the system were doubled, etc.
-
I briefly discussed computing the inverse, indicating that, using Cramer's
rule, the amount of computation is at least proportional to n^4,
where n is the order of the system, while the more efficient, standard
way of computing the inverse takes roughly four times as many operations
as just solving Ax = b, for n large.
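Note on the operation counts from the March 23 review (these are the standard
leading-order results, consistent with what the in-class sum gives): the
factorization phase takes roughly n^3/3 multiplications, and the forward and
back substitution on one right-hand side takes roughly n^2, so doubling the
order of the system multiplies the factorization work by about eight and the
substitution work by about four.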
-
For Thursday, March 25, 2004:
-
I began by introducing the idea of fixed point, or Picard, iteration,
x^(n+1) = f(x^(n)),
indicating that this framework is used in a variety of contexts, including
single equations, linear and nonlinear systems of algebraic equations,
and operator equations.
-
I gave an example of Picard iteration for a matrix equation of the form
x = G x + k. This example occurs in a
short in-class Matlab session.
-
I analyzed the convergence of Picard (fixed point) iteration in terms of
||G||, and illustrated the concepts in the in-class
Matlab session.
-
I derived a linear system of equations A x = b corresponding
to finite-difference collocation, then illustrated the splitting A = D - C
to put the system in the form x = G x + k, as an illustration
of a class of equations in which fixed point iteration works.
-
I indicated that fixed point iteration applied to such linear systems is
called the "Jacobi method".
-
I showed that the Jacobi method can be set up by solving the i-th
equation for the i-th variable, then plugging in the present values
in the right-hand-sides to get the new values of the components of x.
-
I indicated that, in the iteration, one might use the new values of the
components of x whenever they are available, rather than waiting
until all components have been recomputed before starting to use the new
values. The iteration where the new values are used as soon as they
are computed is called the "Gauss-Seidel method". (A small sketch of both
the Jacobi and Gauss-Seidel iterations appears below.)
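The following is a small illustrative sketch of the two iterations (my own
example of a diagonally dominant system, not from the class session):

    A = [4 -1 0; -1 4 -1; 0 -1 4];
    b = [2; 4; 10];
    n = length(b);
    x = zeros(n, 1);                 % Jacobi: use only the old values
    for sweep = 1:25
        xnew = x;
        for i = 1:n
            xnew(i) = (b(i) - A(i, [1:i-1, i+1:n]) * x([1:i-1, i+1:n])) / A(i, i);
        end
        x = xnew;
    end
    xgs = zeros(n, 1);               % Gauss-Seidel: use new values right away
    for sweep = 1:25
        for i = 1:n
            xgs(i) = (b(i) - A(i, [1:i-1, i+1:n]) * xgs([1:i-1, i+1:n])) / A(i, i);
        end
    end
    [x, xgs, A\b]                    % both agree with the exact solution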
-
For Tuesday, March 30, 2004: Basically, material from Chapter 9 of the
Recktenwald text.
-
I introduced linear regression with a general discussion of fitting approximate
data with mathematical functions of a particular form.
-
I derived the normal equations for a fitting line, for a simple example.
I produced a Matlab diary for this example. (A sketch of this kind of
computation appears after this list.)
-
I exhibited the normal equations for more general polynomial fits, and
I showed the form of the fitting function for general linear least squares
(with basis functions other than powers).
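A sketch of the normal-equations computation for a fitting line (illustrative,
not the posted diary; the data are the same ones listed in the April 22 entry below):

    t = [0; 1; 2; 3];
    y = [0; 4; 5; 8];
    A = [ones(size(t)), t];     % columns correspond to the basis functions 1 and t
    c = (A' * A) \ (A' * y);    % normal equations  (A'A) c = A'y
    % c(1) + c(2)*t is then the least squares line through the data.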
-
For Thursday, April 1, 2004: I exhibited the normal equations for linear
least squares for general basis functions, in terms of discrete dot products
of these basis functions. I then explained what orthogonal matrices
are, and what a QR-factorization is. Finally, I showed how QR factorizations
relate to least squares. I did a short set of Matlab computations
illustrating the QR factorization, but I will post that next time, together
with least squares computations involving QR factorizations.
-
For Tuesday, April 6, 2004: I reviewed setting up general least squares,
and I illustrated the concepts and notation with the simple example previously
introduced in class. I also reviewed in detail, and with the example,
the QR factorization, and use of QR factorizations for solving least squares
problems. I illustrated the use of QR
factorizations to solve our example least squares problem in a Matlab session.
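A small sketch of the QR approach (illustrative, not the posted session; the
data are again those listed in the April 22 entry below):

    t = [0; 1; 2; 3];
    y = [0; 4; 5; 8];
    A = [ones(size(t)), t];
    [Q, R] = qr(A, 0);       % "economy size" factorization: A = Q*R, R is 2 by 2
    c = R \ (Q' * y);        % least squares coefficients; mathematically the same
                             % as the normal equations, but better conditioned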
-
For Thursday, April 8, 2004: We first reviewed the derivation of the finite
difference collocation equations from March
25. We then discussed what would happen if we solved the resulting
tridiagonal system of equations directly with a general Gaussian elimination
program, based on the operations
counts we discussed on March 23. I then discussed the need for
using efficient algorithms to solve cutting-edge scientific problems; I
followed that with an explanation of why we need to pivot in Gaussian elimination
and with mention that pivoting is unnecessary for diagonally dominant systems.
I then briefly pointed out how we can solve a diagonally dominant tridiagonal
system without pivoting. I concluded with a brief overview of sparse
matrix technology, including a description of the way that sparse matrices
are represented to Matlab.
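A small example of the sparse representation (my own illustration; the matrix is
the standard tridiagonal finite-difference matrix of the kind derived on March 25):

    n = 1000;
    e = ones(n, 1);
    A = spdiags([-e, 2*e, -e], -1:1, n, n);   % tridiagonal matrix, stored sparsely
    b = e / (n + 1)^2;
    u = A \ b;        % backslash exploits the band structure, so the work grows
                      % like n rather than like n^3
    nnz(A)            % only about 3n nonzero entries are actually stored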
-
For Tuesday, April 20, 2004: I first reviewed the concept of eigenvalue
and eigenvector, then illustrated the eigenvalue decomposition
A = V D V^(-1), where D is a diagonal matrix containing
the eigenvalues of A on the diagonal and V is a matrix whose
columns are the corresponding eigenvectors. (I talked about when
this can be done, and when V can be chosen to be orthogonal.)
I then introduced the singular value decomposition (SVD) for arbitrary
rectangular matrices as a generalization of the eigenvalue decomposition.
Using computations on the matrices and their norms, I showed how the singular
value decomposition can be used to compute the condition number of a matrix.
I did some computations
in Matlab to illustrate these singular value decomposition concepts.
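A short sketch of these decompositions (an illustration with a made-up matrix,
not the posted computations):

    A = [2 1; 1 3];
    [V, D] = eig(A);      % A*V = V*D; the columns of V are eigenvectors
    V * D / V             % reconstructs A, i.e. V*D*inv(V)
    [U, S, W] = svd(A);   % A = U*S*W' with U, W orthogonal and S diagonal
    sigma = diag(S);
    sigma(1) / sigma(end) % condition number = largest/smallest singular value
    cond(A)               % the same value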
-
For Thursday, April 22, 2004: I showed in detail how to use the SVD to
find least squares solutions of overdetermined systems and to find minimum-norm
solutions to underdetermined systems. (I explained why the computations
work the way they do.) I then talked about rank-deficient overdetermined
systems, and how the SVD can be used to project a matrix that is almost
rank-deficient onto a rank-deficient matrix, so that the computations are
better conditioned. I then used Matlab and the SVD to compute
the least squares line to the simple example previously introduced (y(0)=0,
y(1)=4, y(2)=5, y(3)=8). Sorry; I failed to properly save that Matlab
session.
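Since that session was lost, here is a rough sketch of the kind of computation
(my own reconstruction, not the in-class session):

    t = [0; 1; 2; 3];
    y = [0; 4; 5; 8];
    A = [ones(size(t)), t];
    [U, S, V] = svd(A, 0);     % economy-size SVD of the 4 by 2 matrix
    c = V * (S \ (U' * y));    % least squares solution built from the SVD
    % For these data the fitted line works out to y = 0.5 + 2.5 t, the same
    % answer the normal equations or the QR factorization would give.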
-
For Tuesday, April 27, 2004: I reviewed use of the singular value decomposition
to compute least squares solutions, solutions of minimum norm, and least
squares solutions of minimum norm. I then showed how to form the
pseudo-inverse of a matrix from its singular value decomposition, and did
various computations illustrating use of the pseudo-inverse.
-
For Thursday, April 29, 2004: I first reviewed the characteristic equation
of a higher-order constant-coefficient differential equation and the role its
roots play in the solutions, then I described the relationship between these
roots and the eigenvalues of the matrix A for the linear system
w' = Aw one gets when one rewrites the higher-order differential
equation as a system of linear first-order equations. I then explained
the physical significance of the real parts of the eigenvalues and eigenvectors
of A. I then showed how Euler's method for approximately solving
initial value problems w' = Aw, w(0) given, is of the form
w(k) = (I + hA)^k w(0), and how this can be written
in terms of an eigenvalue-eigenvector expansion of the initial condition
w(0). I concluded by noting that eigenvalue analyses are important
in structural engineering, etc. Example computations supporting these
ideas are found in the Matlab
diary for the day.
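A small sketch of the idea (illustrative only; not the posted diary):

    A = [0 1; -1 0];              % y'' + y = 0 rewritten as w' = A*w, w = [y; y']
    h = 0.01;
    w = [1; 0];                   % initial condition w(0)
    E = eye(2) + h*A;             % one Euler step is multiplication by E
    for k = 1:round(2*pi/h)
        w = E * w;                % after k steps, w = E^k * w(0)
    end
    w     % approximates [cos(2*pi); -sin(2*pi)] = [1; 0]; the slight growth in
          % norm reflects the eigenvalues 1 +/- i*h of E, which have modulus > 1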