Can someone explain (in an easy-to-understand way), or direct me to something online about, some of the uses of matrices? What kinds of things are they used for?
I have an idea of how to multiply and add them at a basic level, but I would like to understand real contexts where they are used.
The books I have looked at just mention some contexts by name without highlighting how or why they are used in those situations. For example, I know they are used in 'networks', but what are these in the real world?
I have become really interested in them but want to understand them better; they seem different to most of the maths taught in schools. I wish writers of maths textbooks would write so that it was easier to understand this stuff. Also, what on earth is an eigenvalue or eigenvector?
Any help appreciated.
Matrices (the singular is matrix; plural is matrices, pronounced MAY-truh-sees) have many uses. For a good understanding of them, I strongly suggest you take a course in linear algebra (or do some searches for it online).
Matrices can be used to represent and solve systems of linear equations (look up Gaussian elimination), to represent linear transformations, and much more, and they have loads of applications in computer graphics (linear transformations), in encryption (e.g., look up the Hill cipher), in economics, in graph theory, in linear programming, in networks (such as traffic networks and electrical networks), and in many other areas.
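To make one of those applications concrete, here is a toy sketch of the Hill cipher in Python (my own illustration, not from this thread, using the classic textbook key [[3, 3], [2, 5]], which is invertible mod 26):

```python
import numpy as np

# Toy Hill cipher sketch. Letters are encoded 0-25 (A=0, ..., Z=25).
# Key matrix must be invertible mod 26 (det = 3*5 - 3*2 = 9, gcd(9, 26) = 1).
KEY = np.array([[3, 3],
                [2, 5]])

def encrypt_pair(p1, p2):
    """Encrypt two letters (as numbers 0-25): multiply by KEY, reduce mod 26."""
    v = np.array([p1, p2])
    c = KEY.dot(v) % 26
    return int(c[0]), int(c[1])

# 'HI' -> (7, 8): 3*7 + 3*8 = 45 % 26 = 19 ('T'); 2*7 + 5*8 = 54 % 26 = 2 ('C')
print(encrypt_pair(7, 8))  # (19, 2)
```

Decryption works the same way with the matrix inverse of the key (computed mod 26), which is exactly why invertibility matters here.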
Again, an elementary linear algebra class should give you the necessary foundation, as well as an indication of potential applications.
As Reckoner mentioned, matrices are exceedingly useful for networks. The first place I saw a real world application (which, conveniently, was at the same time I was taking my first linear algebra course) was in the solving of simple electric circuits.
Electric circuits are nice because they always have a solution: the currents and voltages are always well defined. So, as long as you set up your matrix correctly, you will have a solution. Unfortunately, this is not true for many other applications of linear algebra: usually you have more information than you need, and a large part of the work is figuring out how best to fit the data using transformations and projections.
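As a concrete sketch of that "more information than you need" situation (my own example, not from this thread): with three noisy measurements and only two unknowns, there is no exact solution, and NumPy's least-squares routine finds the best fit by projecting the data onto the column space of the matrix.

```python
import numpy as np

# Fit y = c0 + c1*t to three noisy measurements (overdetermined:
# 3 equations, 2 unknowns, so no exact solution exists).
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])     # columns: [1, t]
b = np.array([1.1, 1.9, 3.2])  # measured y values at t = 1, 2, 3

# lstsq projects b onto the column space of A (the least-squares fit)
x, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)
print(x)  # best-fit intercept c0 and slope c1
```

The returned x minimizes the distance between A·x and b, which is the projection idea in action.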
I'm not sure how familiar you are with circuits, but they're pretty simple to understand. Basically, there are a few results from physics that are really useful when analyzing a circuit. These results are even simpler if we have a simple circuit- that is, one composed of only resistors. Resistors, if you don't know, dissipate energy.
The rules we're really interested in are Ohm's Law, Kirchhoff's Voltage Law (KVL), and Kirchhoff's Current Law (KCL).
Ohm's Law: V=IR
KVL: The sum of the voltages around a closed loop in a circuit is zero (conservation of energy: you can only get out what you've already put in).
KCL: The sum of the currents going into a circuit node is equal to the sum of the currents leaving the node.
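A quick worked example of these rules together (my own toy numbers, not from the original post): a 9 V battery driving R1 = 1 Ω and R2 = 2 Ω in series.

```latex
\text{KVL + Ohm: } 9 = I R_1 + I R_2 = I(1 + 2) = 3I
\;\Rightarrow\; I = 3\,\text{A},
\qquad V_{R_1} = I R_1 = 3\,\text{V},
\quad V_{R_2} = I R_2 = 6\,\text{V}
```

The two voltage drops sum back to 9 V, exactly as KVL demands.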
Using these three rules, you can take any simple circuit (or any circuit actually, but the analysis gets a little harder and I'm trying to give a simple example) and characterize it with some equations.
What will happen is that you will have some unknown currents and unknown voltages. Using KCL and KVL, you will generate a bunch of linear equations in those unknowns. Then you will generate a matrix from these linear equations. So, for example, suppose I have a circuit characterized by

I1 = 1 A
2*I1 - I2 = 0 A

where A stands for amperes (the unit of measurement for current). We can express this set of linear equations using matrix notation as:

[ 1   0 ] [I1]   [1]
[ 2  -1 ] [I2] = [0]
This is a matrix equation, and in fact one we can solve! If you're familiar with linear algebra, you'll know that the most basic way to do this is to multiply both sides by the inverse matrix. You could also just row reduce, which is essentially the same thing.
Now, you may notice that I gave you a trivial example: since I1=1 A, it's pretty clear that I2=2 A by inspection. However, when you have a circuit with many branches and many nodes, solving the linear equations by hand becomes much harder. Generating a matrix from the equations we find gives us a way to organize the data, and we can use cool tricks like Cramer's Rule to find the answer really quickly, as long as the matrix isn't too big.
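For circuits too big to solve by inspection, the matrix goes straight to a computer. A minimal sketch in Python/NumPy, using a made-up 2x2 system whose solution is I1 = 1 A and I2 = 2 A (the original post's equations aren't reproduced here, so this system is my own):

```python
import numpy as np

# Hypothetical circuit equations (chosen so that I1 = 1 A, I2 = 2 A):
#   I1        = 1   (a known branch current)
#   2*I1 - I2 = 0   (KCL at a node where two branches of I1 merge)
A = np.array([[1.0,  0.0],
              [2.0, -1.0]])
b = np.array([1.0, 0.0])

# solve() does an LU factorization, i.e. organized row reduction
currents = np.linalg.solve(A, b)
print(currents)  # [1. 2.]
```

For an n-by-n system this scales far better than doing Cramer's rule or hand elimination, which is exactly why the matrix bookkeeping pays off on big circuits.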
There is a good source for this with an example worked out exhaustively here
This is a method for finding voltages and currents that was beaten into me by one of my electrical engineering professors. In practice, I usually found myself just solving the linear equations by hand because it was easier for me, and Cramer's rule gets tedious for matrices that are larger than 3x3. However, it's possible to do it this way, and useful for some people. It's also a great way to organize data if you're going to try to plug this stuff into a computer to solve it.
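For anyone curious, Cramer's rule itself is only a few lines for small systems. Here is a sketch of mine using NumPy determinants (fine for small n, tedious and slow beyond that):

```python
import numpy as np

def cramer_solve(A, b):
    """Solve A x = b by Cramer's rule: x_i = det(A_i) / det(A),
    where A_i is A with column i replaced by b. Practical only for small n."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    d = np.linalg.det(A)
    if abs(d) < 1e-12:
        raise ValueError("matrix is (numerically) singular")
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b      # swap column i for the right-hand side
        x[i] = np.linalg.det(Ai) / d
    return x

print(cramer_solve([[1, 0], [2, -1]], [1, 0]))  # [1. 2.]
```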
A great introductory text on linear algebra is "Linear Algebra and Its Applications" by David C. Lay. I used it in my first linear algebra course, and I still keep it on hand because it's the clearest explanation of basic linear algebra I've found. You can find out all about eigenvalues and eigenvectors there; the definitions are confusing unless you've studied some of the other basics of linear algebra first.
Thanks guys. I found that really useful and interesting.
That was a great explanation osmenog!
I'd be interested in any other input from anyone else too.
I am interested in linear transformations; the maths there isn't clear to me. E.g. the way that a coordinate, say (3, 5), of a 2D shape is reflected in the x-axis. The book I have says the new point has coordinates (ax + by, cx + dy), and I don't really get what the a, b, c and d refer to.
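In case it helps: a, b, c, and d are the four entries of a 2x2 transformation matrix, and (ax + by, cx + dy) is just that matrix multiplied by the point (x, y). For reflection in the x-axis, a = 1, b = 0, c = 0, d = -1, so (3, 5) maps to (3, -5):

```latex
\begin{pmatrix} a & b \\ c & d \end{pmatrix}
\begin{pmatrix} x \\ y \end{pmatrix}
=
\begin{pmatrix} ax + by \\ cx + dy \end{pmatrix},
\qquad
\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}
\begin{pmatrix} 3 \\ 5 \end{pmatrix}
=
\begin{pmatrix} 3 \\ -5 \end{pmatrix}
```

Different choices of a, b, c, d give rotations, scalings, shears, and so on; that is the sense in which matrices *are* linear transformations.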