This page contains a running commentary on what was covered in
the lectures of math 380.
Thursday 10-11:30 am.
lecture 1, 8/24: we talked about R^n, dot product, length,
triangle inequality, Cauchy-Schwarz inequality. We proved
Cauchy-Schwarz and used it to prove the triangle inequality. Matrices
and matrix multiplication were also briefly mentioned.
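A quick numerical sanity check of both inequalities (my own illustration, not something we did in class):

```python
import math
import random

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    return math.sqrt(dot(u, u))

# Check |u . v| <= |u| |v| (Cauchy-Schwarz) and |u + v| <= |u| + |v|
# (triangle inequality) on a batch of random vectors in R^3.
random.seed(0)
ok = True
for _ in range(1000):
    u = [random.uniform(-10.0, 10.0) for _ in range(3)]
    v = [random.uniform(-10.0, 10.0) for _ in range(3)]
    ok = ok and abs(dot(u, v)) <= norm(u) * norm(v) + 1e-9
    ok = ok and norm([a + b for a, b in zip(u, v)]) <= norm(u) + norm(v) + 1e-9
```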
lecture 2, 8/26: We saw that matrices define linear maps and
that any linear map from R^n to R^m defines a unique matrix. This is
very important. The columns of the matrix corresponding to a linear
map L are the images under L of the standard basis vectors. Matrix
multiplication is defined the way it is so that it corresponds to the
composition of linear maps.
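To make the correspondence concrete, here is a small sketch; rotation by 90 degrees is my own toy example, not necessarily one from lecture:

```python
import numpy as np

# The matrix of a linear map L has columns L(e_1), ..., L(e_n).
def L(p):
    x, y = p
    return np.array([-y, x])      # rotation by 90 degrees in R^2

e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
A = np.column_stack([L(e1), L(e2)])     # the matrix of L

v = np.array([2.0, 3.0])
matches_map = np.allclose(A @ v, L(v))
# Matrix multiplication corresponds to composition of linear maps:
matches_composition = np.allclose((A @ A) @ v, L(L(v)))
```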
We then briefly talked about determinants. What
do they measure? They measure the (signed) volume of the image of the
unit cube. We talked about various ways of computing determinants
and briefly reviewed their properties.
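A sketch of the signed-volume interpretation in R^2 (the matrix is my own example):

```python
import numpy as np

# The image of the unit square under A is the parallelogram spanned by
# the columns of A; its signed area is the 2-d cross product of the
# columns, which is exactly det A.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
a1, a2 = A[:, 0], A[:, 1]
signed_area = a1[0] * a2[1] - a1[1] * a2[0]   # cross product of columns
det = float(np.linalg.det(A))
```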
lecture 3, 8/29: We talked about limits. It is important to
understand what limits are, for otherwise we cannot define
differentiability. The official epsilon/delta definition was discussed.
It was then interpreted geometrically, in terms of open balls.
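In symbols, for f : R^n -> R^m and a point a, the definition and its open-ball interpretation read:

```latex
% lim_{x -> a} f(x) = L  means:
\[
\forall \varepsilon > 0 \;\; \exists \delta > 0 :\quad
  0 < \|x - a\| < \delta \;\Longrightarrow\; \|f(x) - L\| < \varepsilon,
\]
% or, in terms of open balls,
\[
f\bigl(B(a,\delta)\setminus\{a\}\bigr) \subseteq B(L,\varepsilon).
\]
```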
lecture 4, 8/31: We spent a bit of time discussing a function
of two variables that had partial derivatives at the origin but was
not continuous. The moral was: for a function to be
differentiable at a point it's not enough for the partials to
exist. We then defined the derivative of a function to be a
linear map that approximates the function "well."
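A standard counterexample along these lines (possibly not the exact function from lecture) is f(x, y) = xy/(x^2 + y^2):

```python
# Both partials of f exist at the origin (f vanishes on both axes),
# yet f is not even continuous there: it equals 1/2 all along y = x.
def f(x, y):
    if x == 0.0 and y == 0.0:
        return 0.0
    return x * y / (x**2 + y**2)

h = 1e-6
fx0 = (f(h, 0.0) - f(0.0, 0.0)) / h      # partial in x at the origin: 0
fy0 = (f(0.0, h) - f(0.0, 0.0)) / h      # partial in y at the origin: 0
on_diagonal = f(1e-9, 1e-9)              # stays at 1/2 arbitrarily close to 0
```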
lecture 5, 9/02: We continued talking about differentiability.
If a function is differentiable then the first order partials exist.
Conversely if the first order partials exist and are continuous
then the function is differentiable. We talked briefly about the
geometric meaning of differentiability: the tangent plane to the graph
of the function is the graph of the derivative (differential).
Finally, two special cases of taking derivatives were mentioned. (1)
The derivative of a real-valued function can be thought of as a
vector, the gradient vector. (2) The derivative of a curve is its
velocity (tangent) vector.
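Both special cases can be checked numerically; the functions below are my own illustrations:

```python
import math

# (1) The derivative of a real-valued f: R^2 -> R is the gradient vector.
def f(x, y):
    return x**2 + 3.0 * y

h = 1e-6
x0, y0 = 1.0, 2.0
grad = ((f(x0 + h, y0) - f(x0, y0)) / h,     # ~ 2*x0 = 2
        (f(x0, y0 + h) - f(x0, y0)) / h)     # ~ 3

# (2) The derivative of a curve c: R -> R^2 is its velocity vector c'(t).
def c(t):
    return (math.cos(t), math.sin(t))

t0 = 0.5
vel = ((c(t0 + h)[0] - c(t0)[0]) / h,        # ~ -sin(t0)
       (c(t0 + h)[1] - c(t0)[1]) / h)        # ~  cos(t0)
```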
lecture 6, 9/07: The lecture dealt with the chain rule: the
derivative (differential) of the composition of two maps is
the composition of the differentials.
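In matrix terms, the Jacobian of g∘f at p is the product of the two Jacobians. A numerical sketch (the maps f and g are my own examples):

```python
import numpy as np

def f(p):                       # f: R^2 -> R^2
    x, y = p
    return np.array([x * y, x + y])

def g(q):                       # g: R^2 -> R^2
    u, v = q
    return np.array([u + 2.0 * v, u * v])

def jacobian(F, p, h=1e-6):
    # Forward-difference Jacobian, one column per standard basis vector.
    p = np.asarray(p, dtype=float)
    cols = [(F(p + h * e) - F(p)) / h for e in np.eye(len(p))]
    return np.column_stack(cols)

p = np.array([1.0, 2.0])
chain_lhs = jacobian(lambda q: g(f(q)), p)        # D(g∘f)(p)
chain_rhs = jacobian(g, f(p)) @ jacobian(f, p)    # Dg(f(p)) Df(p)
chain_holds = np.allclose(chain_lhs, chain_rhs, atol=1e-3)
```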
lecture 7, 9/09:
We talked about directional derivatives. We proved, using the chain
rule, that the directional derivative of a function f in the
direction v is the dot product of the gradient of f with v.
We then discussed a subtle point: a function can have all the
directional derivatives at a point and yet it need not be
differentiable at the point. Lastly we talked about why a function
increases fastest in the direction of the gradient, and we proved
that the gradient is perpendicular to the level sets of the function.
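A numerical sketch of both facts, with f(x, y) = x^2 + xy as my own example:

```python
def f(x, y):
    return x**2 + x * y

p = (1.0, 2.0)
grad = (2.0 * p[0] + p[1], p[0])     # analytic gradient (2x + y, x) = (4, 1)

# Directional derivative in the unit direction v equals grad f . v.
v = (0.6, 0.8)
h = 1e-6
dir_deriv = (f(p[0] + h * v[0], p[1] + h * v[1]) - f(*p)) / h
dot = grad[0] * v[0] + grad[1] * v[1]

# The gradient is perpendicular to the level set {f = f(p)}: a tangent
# direction to the level curve at p is (-f_y, f_x).
tangent = (-grad[1], grad[0])
perp = grad[0] * tangent[0] + grad[1] * tangent[1]   # = 0
```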
lecture 8, 9/12: We spent the lecture discussing the implicit
function theorem and applying it to a few examples. It is an
important theorem and you should understand its statement.
lecture 9, 9/14: More examples of the implicit function
theorem. A brief discussion of the inverse function theorem
and why it's true.
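The standard circle example illustrates the theorem's derivative formula g'(x) = -F_x/F_y (my own sketch):

```python
import math

# F(x, y) = x^2 + y^2 - 1.  Near (0.6, 0.8) the equation F = 0 can be
# solved for y = g(x) = sqrt(1 - x^2), and the implicit function theorem
# predicts g'(x0) = -F_x/F_y = -x0/y0.
x0, y0 = 0.6, 0.8

def g(x):
    return math.sqrt(1.0 - x**2)

h = 1e-6
g_prime = (g(x0 + h) - g(x0)) / h
predicted = -x0 / y0            # = -0.75
```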
lecture 10, 9/16: I gave a proof of the implicit function
theorem assuming the inverse function theorem. Since the
proof is not in the book, I posted it on the web as a pdf file
(see the homework page). We then spent a bit of time
discussing vector fields, sketching a few examples. We also
discussed the important concept of a flow line of a vector field.
lecture 11, 9/19: I defined divergence and curl of a vector
field. We proved that curl (grad f) = 0 and div (curl F) =
0. The reason is that mixed partials commute. The question as
to when a given vector field is the gradient of a function or
the curl of another vector field is more subtle and we put it off.
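Both identities can be verified symbolically; here is a sketch using the sympy library on example fields of my own choosing:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

def curl(F):
    return [sp.diff(F[2], y) - sp.diff(F[1], z),
            sp.diff(F[0], z) - sp.diff(F[2], x),
            sp.diff(F[1], x) - sp.diff(F[0], y)]

# curl(grad f) = 0, because mixed partials commute.
f = x**2 * y + sp.sin(y * z)
grad_f = [sp.diff(f, v) for v in (x, y, z)]
curl_grad = [sp.simplify(c) for c in curl(grad_f)]

# div(curl F) = 0, for the same reason.
F = [x * y * z, sp.exp(x) * y, sp.cos(x) + z**2]
div_curl = sp.simplify(sum(sp.diff(c, v) for c, v in zip(curl(F), (x, y, z))))
```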
lecture 12, 9/21: We discussed Taylor's theorem (no proofs)
and computed a few examples.
lecture 13, 9/23: More of Taylor's theorem. Maxima and
minima, critical points. It is important to understand why
maxima and minima are critical points. Hessian as the
generalization of the second derivative of the function of one
variable. Classification of critical points in terms of
Hessians (the test involving principal minors of the
corresponding matrix). We also discussed a few shortcuts in
computing the Hessians.
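A sketch of the principal-minor test on a small example of my own:

```python
import numpy as np

# f(x, y) = x^2 + x*y + y^2 has a critical point at the origin with
# Hessian H = [[2, 1], [1, 2]].
H = np.array([[2.0, 1.0],
              [1.0, 2.0]])

d1 = H[0, 0]                    # first leading principal minor: 2
d2 = float(np.linalg.det(H))    # second leading principal minor: 3
# d1 > 0 and d2 > 0  =>  H positive definite  =>  local minimum.
is_local_min = d1 > 0 and d2 > 0
# Cross-check: positive definite iff all eigenvalues are positive.
eigs = np.linalg.eigvalsh(H)
```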
lecture 14, 9/25: We reviewed classification of symmetric
matrices into positive definite, negative definite and indefinite
matrices and the application of this classification to the
classification of the critical points of a function.
We discussed compact sets and the fact that a continuous function on a
compact set achieves its maximum and minimum on the set. We then
talked about constrained maxima and minima and, in particular, the
method of Lagrange multipliers for one constraint. It's important to
understand why the method works.
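A minimal worked example of the one-constraint method (my own choice of f and g):

```python
import math

# Extremize f(x, y) = x + y on the circle g(x, y) = x^2 + y^2 - 1 = 0.
# grad f = λ grad g gives (1, 1) = λ(2x, 2y), so x = y, and the
# constraint then forces x = y = ±1/sqrt(2).
x = y = 1.0 / math.sqrt(2.0)
lam = 1.0 / (2.0 * x)

# grad f - λ grad g vanishes at the candidate point:
residual = max(abs(1.0 - lam * 2.0 * x), abs(1.0 - lam * 2.0 * y))
constraint = abs(x**2 + y**2 - 1.0)
f_max = x + y                   # = sqrt(2), the constrained maximum
```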
lecture 15, 9/27: We discussed the method of Lagrange
multipliers for 2 constraints. The important points were: the
restriction of a function f to a constraint S has an extremal point at
P in S if and only if the gradient of f at P is perpendicular to S.
If the constraint S is cut out by the functions g_1 and g_2, then at
any point Q of S the vectors perpendicular to S lie in the span of
grad g_1(Q) and grad g_2(Q). Therefore, to find the extremal points
of f on S look for points where grad f is a linear combination of grad
g_1 and grad g_2.
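This criterion can be checked with linear algebra; the circle S below (a sphere cut by a plane) is my own example:

```python
import numpy as np

# Extremize f(x, y, z) = z on S = {x^2+y^2+z^2 = 1} ∩ {x+y+z = 1}.
# The point (0, 0, 1) lies on S and maximizes z; at an extremum grad f
# must lie in span{grad g_1, grad g_2}.
p = np.array([0.0, 0.0, 1.0])
grad_f = np.array([0.0, 0.0, 1.0])
grad_g1 = 2.0 * p                       # gradient of x^2+y^2+z^2 - 1
grad_g2 = np.array([1.0, 1.0, 1.0])     # gradient of x+y+z - 1

span = np.column_stack([grad_g1, grad_g2])
rank_span = np.linalg.matrix_rank(span)
rank_with_f = np.linalg.matrix_rank(np.column_stack([grad_g1, grad_g2, grad_f]))
# Adding grad f does not raise the rank: grad f = (1/2) grad g1 + 0 grad g2.
in_span = rank_span == rank_with_f
```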
lecture 16, 9/30: We reviewed integration over regions in R^2
and R^3. In particular we discussed how to set up an integral over a
region D in R^2 as an iterated integral. There may be several ways of
doing it, with one often more convenient than the others.
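A sketch of both orders of iteration for one region (the triangle and integrand are my own example):

```python
# Integral of f(x, y) = x*y over the triangle D = {0 <= y <= x <= 1},
# set up in both orders; the exact value is 1/8.
n = 100000
h = 1.0 / n

# Order dy dx: the inner integral over 0 <= y <= x gives x * x^2/2 = x^3/2.
I1 = sum((((i + 0.5) * h) ** 3 / 2.0) * h for i in range(n))

# Order dx dy: the inner integral over y <= x <= 1 gives y * (1 - y^2)/2.
I2 = sum((((j + 0.5) * h) * (1.0 - ((j + 0.5) * h) ** 2) / 2.0) * h
         for j in range(n))
```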
lecture 17, 10/3:
Change of variables in double and triple integrals.
lecture 18, 10/5:
Change of variables in double and triple integrals.
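The standard polar-coordinates example, as a numerical sketch: the Jacobian factor r is exactly what makes the area of the unit disk come out to pi.

```python
import math

# x = r cos θ, y = r sin θ has Jacobian determinant r, so
#   area(unit disk) = ∫_0^{2π} ∫_0^1 r dr dθ = π.
n = 1000
dr = 1.0 / n
# Midpoint sum for ∫_0^1 r dr; the θ-integral just contributes 2π.
radial = sum((i + 0.5) * dr * dr for i in range(n))
area = 2.0 * math.pi * radial
```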
lecture 19, 10/7: group work on the true/false questions in
preparation for the midterm on Wednesday, October 12. The pdf file of
the question sheet is here.
lecture 20, 10/10: I answered questions in preparation for the
midterm on Wednesday, October 12 and handed out answers to the
question sheet.
lecture 21, 10/12: midterm exam
lecture 22, 10/14: line integrals of functions and vector fields.
lecture 23, 10/17: I explained what covectors are and defined
1-forms as covector fields. I discussed integration of 1-forms over
curves in two different ways. The first one is an algorithm: you
parameterize and plug in x(t) for x, y(t) for y, x'(t) dt for dx
etc. The second way is conceptual: you feed a tangent vector to the
curve into a 1-form and get a function. You then integrate the
function. The second method is not really in the book, but check
chapter 8 for more on differential forms.
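A numerical sketch of the plug-in algorithm for ω = x dy over the unit circle (my own example):

```python
import math

# ∫_C x dy over the unit circle:  x = cos t, y = sin t, dy = cos t dt,
# so the integral becomes ∫_0^{2π} cos^2 t dt = π.
n = 10000
dt = 2.0 * math.pi / n
integral = sum(math.cos((k + 0.5) * dt) ** 2 * dt for k in range(n))
```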
lecture 24, 10/19: We reviewed 1-forms and discussed the
correspondence with vector fields. Under the correspondence gradients
correspond to total differentials. We then proved that line integrals
of vector fields depend only on the direction in which the paths are
traversed, not on the choice of parameterization. Reversing the
direction of the path changes the sign of the integral.
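A numerical sketch of the sign change under reversal, using the field F = (-y, x) on the unit circle (my own example):

```python
import math

# Work integral of F = (-y, x) over the unit circle; reversing the
# direction of traversal flips the sign of the result.
def work(reverse=False):
    n = 10000
    dt = 2.0 * math.pi / n
    sign = -1.0 if reverse else 1.0
    total = 0.0
    for k in range(n):
        t = sign * (k + 0.5) * dt
        x, y = math.cos(t), math.sin(t)
        dx = -math.sin(t) * sign * dt
        dy = math.cos(t) * sign * dt
        total += -y * dx + x * dy
    return total

ccw = work()              # counterclockwise
cw = work(reverse=True)   # clockwise: same magnitude, opposite sign
```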
lecture 25, 10/21:
We discussed Green's theorem for vector fields and for differential forms.
In particular I defined 2-forms and exterior derivatives of 1-forms.
lecture 26, 10/24: We continued the discussion of Green's
theorem; in particular its statement for differential forms. We then
proved it for a rectangle (this is a simpler computation than the one
in the book), then for regions that are unions of rectangles and
finally for general regions by a limiting argument. We then stated
Green's theorem in terms of the curl of the vector field.
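On a rectangle the theorem can be checked by hand; here is the computation for one field of my own choosing, written out as a small script:

```python
# (P, Q) = (-y, x) on the rectangle R = [0, a] x [0, b]:
# Green's theorem says  ∮_{∂R} P dx + Q dy = ∫∫_R (Q_x - P_y) dA.
a, b = 1.0, 2.0

# Left side, traversing ∂R counterclockwise, one edge at a time:
bottom = 0.0        # y = 0:  ∫_0^a -(0) dx = 0
right = a * b       # x = a:  ∫_0^b   a dy  = a*b
top = a * b         # y = b:  ∫_a^0 -(b) dx = a*b
left = 0.0          # x = 0:  ∫_b^0   0 dy  = 0
boundary = bottom + right + top + left

# Right side: Q_x - P_y = 1 - (-1) = 2, constant on R.
double_integral = 2.0 * a * b
```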
lecture 27, 10/26: We started out by proving the divergence
theorem for vector fields in the plane.
I then stated two theorems on the dependence of line integrals on the path.
To do that we needed to discuss simply connected regions.
lecture 28, 10/28: We proved the two theorems stated in the
previous lecture: theorem 3.3 and theorem 3.5 of the textbook.
lecture 29, 10/31: We discussed conservative vector fields and
two related methods of finding a potential of a conservative vector
field. We then discussed the differential forms analogues of theorems
3.3 and 3.5 (I am not sure if this is in the textbook; look in chapter 8).
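A sketch of the integration method of finding a potential, on a field of my own choosing:

```python
# F = (2xy + 1, x^2).  Integrating P = 2xy + 1 with respect to x gives
# f = x^2 y + x + g(y); then f_y = x^2 + g'(y) must equal Q = x^2,
# so g is constant and f = x^2 y + x is a potential.
def F(x, y):
    return (2.0 * x * y + 1.0, x**2)

def f(x, y):
    return x**2 * y + x

# Numerical check that grad f = F at a sample point.
h = 1e-6
x0, y0 = 1.3, -0.7
fx = (f(x0 + h, y0) - f(x0, y0)) / h
fy = (f(x0, y0 + h) - f(x0, y0)) / h
P, Q = F(x0, y0)
```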
lecture 30, 11/02: Parameterizations of surfaces. Surface
integrals of functions.
lecture 31, 11/04: Normal vectors to surfaces, tangent planes,
orientability of surfaces.
lecture 33, 11/07: Examples of flux integrals, geometric
meaning of flux integrals. Dependence of surface and flux integrals
on the choice of parameterization.
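A numerical sketch of a flux integral (the field and surface are my own standard example):

```python
import math

# Flux of F(x, y, z) = (x, y, z) through the unit sphere, outward normal.
# On the sphere F equals the unit normal, so F·n = 1 and the flux is the
# surface area 4π.
n = 10000
dphi = math.pi / n
# dS = sin φ dφ dθ on the unit sphere; the θ-integral contributes 2π.
flux = 2.0 * math.pi * sum(math.sin((i + 0.5) * dphi) * dphi
                           for i in range(n))
```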
lecture 34, 11/09: Stokes and Gauss (divergence) theorems.
lecture 35, 11/11: Two examples of use of Gauss theorem.
lecture 36, 11/14: Geometric meaning of divergence and curl.
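The geometric meaning of divergence (flux out of a small box per unit volume) can be sketched numerically; the field is my own example:

```python
# div F(p) is the limiting flux per unit volume out of a small box at p.
def F(x, y, z):
    return (x * x, y, 0.0)      # div F = 2x + 1

p = (1.0, 2.0, 3.0)
eps = 1e-3
x, y, z = p
face = (2.0 * eps) ** 2         # area of each face of the small cube
vol = (2.0 * eps) ** 3
# Approximate F as constant on each of the six faces:
flux = ((F(x + eps, y, z)[0] - F(x - eps, y, z)[0])
        + (F(x, y + eps, z)[1] - F(x, y - eps, z)[1])
        + (F(x, y, z + eps)[2] - F(x, y, z - eps)[2])) * face
div_estimate = flux / vol       # should be close to 2*1 + 1 = 3
```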
lecture 37, 11/16: Differential forms.
lecture 38, 11/18: More differential forms.
lecture 39, 11/28: Pull-backs of differential forms.
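A small worked pull-back computation (my own example, in the spirit of chapter 8):

```latex
% Pull back the 1-form ω = x dy − y dx along the curve φ(t) = (cos t, sin t):
\[
\varphi^{*}\omega
  = (x\circ\varphi)\,d(y\circ\varphi) - (y\circ\varphi)\,d(x\circ\varphi)
  = \cos t\,d(\sin t) - \sin t\,d(\cos t)
  = (\cos^{2}t + \sin^{2}t)\,dt
  = dt.
\]
```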
lecture 40, 11/30: my notes of the lecture.
lecture 41, 12/02: my notes of the lecture (corrected).
lecture 42, 12/05: my notes of the lecture. All you wanted to know and more about orientations.
Namely, how to use a k-form to orient a k-manifold and induce the correct
orientation on the boundary.
lecture 43, 12/07: my notes of the lecture. More on orientation and lots of examples.
Last modified: Tue Dec 13 16:47:57 CST 2005