# Math 104 vs Math 113

## Choosing between Math 104 and Math 113

Where to go after taking Math 51?

A student who completes Math 51 is in a position to take a number of other Math classes that will be useful for further studies both within mathematics and in other fields (e.g., natural sciences, engineering, economics, computer science, etc.). This is a brief guide concerning two courses that are natural follow-ups: Math 104 and Math 113 (both offered every autumn, winter, and spring), each of which develops linear algebra beyond what is covered in Math 51, though they go in quite different directions.

Both of these courses give a broader understanding of how to work with matrices, and they cover the fundamental concepts of eigenvalues and eigenvectors in much more depth than there is time for in Math 51.

## Math 104

This is a course in applied linear algebra. One of the themes in Math 51 is that modern techniques for analyzing data, no matter the discipline in which they arise, rely on doing very large-scale linear algebra computations. Math 51 teaches that, once one has set up an appropriate language, understanding the basic principles of linear algebra in R² or R³ is not so different from understanding them in Rⁿ for possibly very large values of n.

However, when it comes to doing actual computations and applications, there is a big difference between smaller and larger dimensions. With Rⁿ for n ≤ 5 it is not particularly laborious for a computer to solve linear systems by the process of eliminating variables (Gaussian elimination), or to compute determinants to check whether matrices are invertible, etc. However, these techniques become less and less feasible when n is large. In real-world problems, n can be on the scale of millions or billions, and then computer calculations must be done with great forethought.
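To make the contrast concrete, here is a minimal pure-Python sketch of the elimination process described above (the names and code are illustrative, written for this guide rather than taken from either course); it is perfectly adequate for small systems, but its cost grows rapidly with n:

```python
# Illustrative sketch: Gaussian elimination with partial pivoting for a
# small n x n system Ax = b. All names here are ours, not from any course.

def solve_linear_system(A, b):
    """Solve Ax = b by elimination; A is a list of rows, b a list."""
    n = len(A)
    # Build the augmented matrix [A | b] so row operations act on both.
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        # Partial pivoting: swap up the row with the largest entry in
        # this column, which improves numerical stability.
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    # Back-substitution on the resulting upper-triangular system.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

# Example: the system  x + 2y = 5,  3x + 4y = 11  has solution (1, 2).
print(solve_linear_system([[1.0, 2.0], [3.0, 4.0]], [5.0, 11.0]))
```

For an n × n system this performs on the order of n³ arithmetic operations, which is one way to see why such direct methods become impractical at very large scale.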

There is an entirely different side to linear algebra: finding efficient, numerically stable algorithms for work in very large dimensions. The importance of numerical stability is briefly touched upon in the Math 51 book (e.g., in discussions related to the QR-decomposition), but there is a lot more to the story. Math 104 addresses such considerations. The emphasis in the course is on acquiring practical and conceptual fluency with some of the most important techniques in applied linear algebra. (Key ideas in the proofs of major results are discussed in Math 104, but proof-writing is generally not a primary emphasis.)
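As a concrete illustration of the QR theme (a sketch written for this guide, not an excerpt from either course), here is the "modified" Gram–Schmidt procedure, a rearrangement of the Gram–Schmidt process from Math 51 that is commonly preferred in floating-point arithmetic for its better numerical stability:

```python
# Illustrative sketch: QR decomposition of a matrix with linearly
# independent columns via modified Gram-Schmidt, a variant preferred
# over the "classical" version for numerical stability. Names are ours.
import math

def qr_mgs(A):
    """Return (Q, R) with A = QR; A is given as a list of columns."""
    n_cols = len(A)
    V = [col[:] for col in A]          # working copies of the columns
    Q = []
    R = [[0.0] * n_cols for _ in range(n_cols)]
    for j in range(n_cols):
        R[j][j] = math.sqrt(sum(v * v for v in V[j]))
        q = [v / R[j][j] for v in V[j]]            # normalize
        Q.append(q)
        for k in range(j + 1, n_cols):
            # Subtract the component along q from each *later* column
            # immediately; this reordering of the subtractions is what
            # distinguishes modified from classical Gram-Schmidt.
            R[j][k] = sum(q[i] * V[k][i] for i in range(len(q)))
            V[k] = [V[k][i] - R[j][k] * q[i] for i in range(len(q))]
    return Q, R
```

In exact arithmetic this produces the same Q and R as the classical process; the payoff of the reordering only appears in floating point, where accumulated roundoff can make the classical version's output far from orthonormal.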

For example, in practice it is sometimes important to be able to estimate “how long” a given computation takes. This is the beginning of the study of computational complexity. If you are solving a problem involving an n × n matrix, does the computation need only around n steps (this is regarded as very good), or n³ steps (less good, but still quite tolerable), or 10ⁿ steps (an utter disaster)?
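A quick numerical sketch (with these growth classes as illustrative stand-ins; real algorithms have their own exact step counts) shows why the distinction matters so much:

```python
# Comparing how step counts grow with the matrix size n. The three
# growth classes here illustrate "very good", "tolerable", and
# "infeasible"; actual algorithms have their own exact counts.

def step_counts(n):
    """Return (linear, cubic, exponential) step counts for size n."""
    return n, n ** 3, 10 ** n

for n in (10, 100, 1000):
    linear, cubic, exponential = step_counts(n)
    print(f"n = {n:>4}:  n = {linear:,},  n^3 = {cubic:,},  "
          f"10^n has {len(str(exponential))} digits")

# At n = 1000, n^3 is a billion steps (feasible on modern hardware),
# while 10^1000 steps is beyond any conceivable computation.
```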

Also, many problems simply cannot be handled “exactly”. For example, the computation of the eigenvalues of an n×n matrix involves finding the roots of a specific polynomial of degree n (the “characteristic polynomial”), and not only is there no exact formula for those roots (for n ≥ 5), but once n is even of moderate size it is not numerically feasible to use that polynomial to find the eigenvalues. Instead, one employs efficient and clever algorithms for numerically approximating the eigenvalues: one devises a sequence of matrices from which one extracts sequences of numbers converging to the eigenvalues of the original matrix.
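One of the simplest schemes of this kind is power iteration; the sketch below (our own illustrative code, not necessarily an algorithm as covered in Math 104) repeatedly applies the matrix to a vector, producing estimates that converge to the eigenvalue of largest absolute value when one such eigenvalue dominates:

```python
# Illustrative sketch: power iteration, one of the simplest iterative
# eigenvalue algorithms. Repeatedly applying A to a vector and
# normalizing yields estimates converging to the eigenvalue of A of
# largest absolute value (when one dominates). Names are ours.
import math

def power_iteration(A, steps=100):
    """Estimate the dominant eigenvalue of the square matrix A."""
    n = len(A)
    x = [1.0] * n                      # arbitrary nonzero starting vector
    for _ in range(steps):
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]  # y = Ax
        norm = math.sqrt(sum(v * v for v in y))
        x = [v / norm for v in y]      # normalize to avoid overflow
    # The Rayleigh quotient x . Ax (with x a unit vector) estimates
    # the dominant eigenvalue.
    Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    return sum(x[i] * Ax[i] for i in range(n))

# Example: [[2, 0], [0, 1]] has eigenvalues 2 and 1; the estimate tends to 2.
print(power_iteration([[2.0, 0.0], [0.0, 1.0]]))
```

Note that no characteristic polynomial is ever written down: the eigenvalue emerges as the limit of a sequence of numbers, exactly in the spirit described above.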

Such material and much more, including a broader mastery of eigenvalues and the related fundamental “singular value decomposition” (introduced briefly in Chapter 27 of the Math 51 book), is essential for applications of linear algebra throughout data science, the natural sciences, and engineering. There is a lot of excellent software implementing such algorithms, but the best scientists and engineers have a good sense of what is going on inside these tools: in any real-life situation, it is only a matter of time before one needs to make a computation or solve a problem for which the tools in hand are not good enough, and then one needs to dig deeper to make things work. This is one among many reasons for learning the computational theory of linear algebra with the breadth and depth developed in Math 104.

## Math 113

Further coursework in pure as well as many parts of applied mathematics involves knowing how to read and write proofs. Math 51 provides a flavor of how the conceptual side of math can involve ways of thinking that are rather different from how one usually encounters math in classes before Math 51 (e.g., the importance of precise definitions, and the utility of thinking in terms of properties rather than explicit numbers).

Gaining familiarity and facility with reading and writing proofs is best done in the context of learning a substantive subject (just as learning how to cook is best done by preparing actual meals). The Math department offers several classes that have proof-writing as one of their course-level learning goals: Math 110 (proofs in the context of applied number theory and cryptography, offered once each year), Math 115 (proofs of the core theorems of single-variable calculus, offered every autumn and spring), and Math 113 (proofs in the context of linear algebra). The ability to write proofs is assumed in many 100-level Math courses (e.g., Math 107 on graph theory, Math 143 on differential geometry, Math 151 on probability theory, and Math 152 on classical number theory) and is very helpful in parts of physics, CS, etc.

As an introduction to proof-writing in the context of linear algebra, Math 113 presents a lot of Math 51 material in the broader setting of vector spaces (with complete proofs throughout), and goes much further. In particular, it teaches how to think about linear algebra without the crutch of ambient “coordinates” (akin to the standard basis in Math 51); this perspective is very useful in further mathematics as well as in applications of linear algebra to differential equations and beyond. The type of reasoning covered in Math 113 is at the heart of nearly all contemporary mathematics, and is tremendously useful in parts of other disciplines too (e.g., theoretical computer science, quantum computation, quantitative finance, theoretical physics, computer graphics). An example from another discipline is: “What are the theoretical limits of anonymity when I am making queries on a large data set?” For instance, can one get useful data out of large collections of genomic information, or even just health records, without compromising the anonymity of the people from whom this data was collected? (This is a field called “differential privacy”, by the way.)
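To give a small taste of the coordinate-free proof style (a sample written for this guide, not an excerpt from the course): even the uniqueness of the zero vector is a theorem, and it follows from the vector space axioms alone, with no coordinates in sight.

```latex
% Sample of the proof style (not from any particular course): the zero
% vector of a vector space V is unique, argued purely from the axioms.
\begin{proof}
Suppose $0$ and $0'$ are both additive identities in $V$; that is,
$v + 0 = v$ and $v + 0' = v$ for every $v \in V$. Applying the first
identity with $v = 0'$ and the second with $v = 0$ gives
\[
  0' = 0' + 0 = 0 + 0' = 0,
\]
where the middle equality uses commutativity of addition.
Hence $0' = 0$.
\end{proof}
```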