Archive for the ‘Mathematics’ Category

Linear Aljava

The Indian Institute of Technology in Bombay, India, hosts this site, filled with wonderful applets demonstrating basic linear algebra concepts. There are also mathematical demonstrations interleaved with the applets. Topics range from vector operations to linear transformations to eigenvectors.

Book iz gud

About two months ago, Scott Aaronson reviewed and recommended Timothy Gowers’ Princeton Companion to Mathematics (here, for instance). I won’t reiterate his enthusiasm for the book except to note that its awesomeness-to-price ratio is much greater than one. Which, considering the cost of your average math textbook (although the Companion isn’t strictly a textbook), is both impressive and vexing. Let’s just say that this thin red PDE book I’m glaring at right now cost me about twice as much for a tenth of the content (if not less). If you know the one I’m talking about, holla.

The Eigenvalue Problem (A Refresher)

One of the most interesting functions in mathematical analysis and calculus is the exponential function, \displaystyle x(t) = e^t.

For one, this is, up to a constant multiple, the only function that is its own derivative (the constant zero gives the trivial solution x(t)=0). That is, it is the solution to this very simple differential equation:

\displaystyle \frac{dx}{dt} = x , with initial conditions x(0)=1

(Reminder: You solve it by separating the variables and then integrating. See here for a demonstration).
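In brief: dividing both sides by x and integrating,

\displaystyle \int \frac{dx}{x} = \int dt \implies \ln|x| = t + C \implies x(t) = e^{C}e^{t},

and the initial condition x(0)=1 forces e^{C}=1, leaving x(t) = e^{t}.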

This “definition” of the exponential has interesting implications when you turn away from a simple equation and look toward systems of equations.

Consider this matrix equation:

\mathbf{x'} = \mathbf{Ax}

… where x_i'(t) is in the i^{th} place of the vector \mathbf{x'}, x_i(t) is in the i^{th} place of the vector \mathbf{x}, i goes from 1 to n, and \mathbf{A} is just any n \times n matrix.

The form of this matrix equation is identical to the differential equation written above. Indeed, you might want to just call it a generalization to arbitrary dimensions. That might also invite you to guess, as before, a solution involving exponentials, except this time you must account for the vectorized nature of the equation. So try this:

\displaystyle \mathbf{x} = e^{at}\mathbf{z}, where \mathbf{z} is a constant vector, unknown at this point.

Differentiating with respect to t, we get:

\displaystyle \mathbf{x'} = ae^{at}\mathbf{z}.

Plugging into our matrix equation:

\displaystyle ae^{at}\mathbf{z} = e^{at}\mathbf{Az}.

Since the exponentials can never be zero, we can divide them out, getting the following:

\mathbf{Az}=a\mathbf{z}

This particular equation is the so-called eigenvalue problem. Waxing ecstatic about the eigenvalue problem is all too easy, because it’s quite a profound little tool, both in mathematics and in science. It gives rise to the ideas of operator calculus, wherein one thinks of a complicated system of equations as an operator acting on a simpler system of equations. This allows you to reduce the system so that it’s (hopefully) solvable.

In the case of the eigenvalue problem, we’ve transformed a system of differential equations into a purely algebraic question: find the vectors that \mathbf{A} simply scales by a. Solving it takes a bit of what universities sometimes call ‘linear algebra’, which I’ll slightly gloss over but which you can look up easily in a linear algebra text or online.

Moving everything to one side, we get:

\mathbf{Az}-a\mathbf{z}=\mathbf{0}

Using the fact that \mathbf{Ix}=\mathbf{x}, for any vector \mathbf{x}, where \mathbf{I} is the identity matrix, we can write this equation as:

\mathbf{Az}-a\mathbf{Iz}=\mathbf{0}

Removing the common factor \mathbf{z} (which requires the previous step; you can’t subtract a scalar from a matrix!), the equation resolves to:

\left(\mathbf{A}-a\mathbf{I} \right) \mathbf{z}=\mathbf{0}

Now, we wish this equation to have a non-trivial solution–that is, one where \mathbf{z} \neq \mathbf{0}.

There’s a particular theorem in linear algebra (sometimes called the Invertible Matrix Theorem) which will guide us here. First, a quick definition.

A matrix \mathbf{B} is said to be invertible if there exists a matrix \mathbf{C} such that \mathbf{BC}=\mathbf{CB}=\mathbf{I}. The matrix \mathbf{C} is called the inverse of \mathbf{B} and is written \mathbf{B^{-1}}.

The Invertible Matrix Theorem states that a matrix \mathbf{B} is invertible if and only if the matrix equation \mathbf{Bx}=\mathbf{0} has only the trivial solution \mathbf{x}=\mathbf{0}. Which means if we want a non-trivial solution out of our equation, we need to ensure that \mathbf{A}-a\mathbf{I}, which we’ll just call \mathbf{B}, is not invertible. To ensure this, let’s take a look at a hypothetical inverse for \mathbf{B}.

Assume, for a moment, that \mathbf{B} is the following:

\mathbf{B} = \left(\begin{array}{cc}b_1 & b_2\\ b_3 & b_4\end{array}\right)

You may recall that if \mathbf{B} has an inverse, it must be:

\mathbf{B^{-1}} = \frac{1}{\det{\mathbf{B}}} \left(\begin{array}{cc}b_4 & -b_2\\ -b_3 & b_1\end{array}\right)

This rule generalizes to arbitrary finite dimensions: the inverse is always some matrix divided by \det{\mathbf{B}} (here, \det{\mathbf{B}} = b_1 b_4 - b_2 b_3). Which means if \det{\mathbf{B}} = 0, then the matrix \mathbf{B} is not invertible, because you’re dividing by zero. Hurrah! (Here, \det is the determinant, in case you have not seen it before).

So we get this equation:

\det{\mathbf{B}} = \det{\left(\mathbf{A}-a\mathbf{I}\right)} = 0

The left side of this equation turns out to be a polynomial of degree n in a, the so-called characteristic polynomial. Solving for its zeroes gives you your different scale factors, called eigenvalues (these are your a’s). What do you do with them? Plug ’em back into the equation and solve for your \mathbf{z}’s!
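For concreteness, here’s a quick worked example (any small matrix would do):

\mathbf{A} = \left(\begin{array}{cc}2 & 1\\ 1 & 2\end{array}\right), \qquad \det{\left(\mathbf{A}-a\mathbf{I}\right)} = (2-a)^2 - 1 = (a-1)(a-3) = 0,

so the eigenvalues are a=1 and a=3. Plugging a=3 back in, \left(\mathbf{A}-3\mathbf{I}\right)\mathbf{z}=\mathbf{0} forces z_1 = z_2, so \mathbf{z} = (1,1)^T works; similarly, a=1 gives \mathbf{z} = (1,-1)^T.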

Since you already know an expression for \mathbf{x} in terms of \mathbf{z}, you can now solve for various values of \mathbf{x}.
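If you’d rather let a computer grind through all of this, Scilab’s spec function does the whole computation. Here’s a minimal sketch, using the example matrix from above (the constants c would come from your initial conditions):

// Eigenvalue route to solving x' = Ax, sketched in Scilab.
A = [2 1; 1 2];

// spec returns the eigenvectors (as the columns of R) and a
// diagonal matrix holding the eigenvalues.
[R, diagevals] = spec(A);
a = diag(diagevals); // column vector of eigenvalues

// The general solution is x(t) = sum of c_i * exp(a_i*t) * z_i,
// where z_i is the i-th column of R. Evaluate it at, say, t = 1:
c = [1; 1];
t = 1;
x = R * (c .* exp(a*t));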

MathIM and Adium

In July of last year, Sum 1 to N said the following:

Someone somewhere should make a Web 2.0-ish collaboration site where people can do math together. Maybe like an IM service meshed with LaTeX capabilities, but without the coding hassle. Point-and-click type stuff. And an ability to save the notes and do projects together. Oh, and some basic graphing capabilities. And the ability to export the LaTeX code, and import and share MATLAB, Mathematica, (etc.) code.

Well, close enough.

Also, if you happen to use a Mac, there is always Adium X, which, Cosmic Variance happily informs us, can be made to support LaTeX equations. You can even click an equation and get the raw TeX back out of it if you want. Super!

Oh, My, God. Pointy-Headed Biden Talks Calculus!

You will not believe this. Joe Biden, at a campaign rally in Wooster, Ohio yesterday, used mathematics in an analogy to demonstrate why you shouldn’t vote for McCain. And I’m not talking about arithmetic or even algebra. He used calculus in his analogy–specifically, inflection points! WTF?

This is not change I can believe in. Candidates for president are not allowed to even give the slightest impression that they are intelligent, curious people. I strongly disapprove.

Check out the video at around the 2:35 mark or just read the transcript and you can scoff along:

Biden: Folks, remember your calculus classes from undergraduate school, you few crazy people who were engineers, God love ya? But all kidding aside, remember your calculus class you learned about an inflection point? That’s the point at which, like, you’re driving your car, where the steering wheel is dead straight, and once you make a move, even to a degree, you commit that automobile hurdling in a direction you can’t immediately change. Well, in American history, there have been about four or five inflection points… [etcetera]

HOW DARE HE INSULT MY STUPIDITY WITH HIS ELABORATELY ABSTRACT MATHEMATICAL METAPHORS THAT ONLY D&D NERDS WILL UNDERSTAND.

Math on YouTube

I’ve been watching this channel on YouTube for a while. You should too! It’s a guy who posts quickie lectures on vector calculus, Fourier analysis, quantum mechanics, fluid mechanics, and more. He has over two hundred videos up, so be prepared to be entertained.

A Cute Lil Somethin’

Over at this math blog I came across this, and found it totally adorable. In an Aww, you wyke yer wittle chew toy, dontcha? Yes you dooooo kind of way.

Basically, the idea is this. You take two functions f, g : \mathbb{R} \rightarrow \mathbb{R} and you plot g as if f were its x-axis. When you’re done you get some pretty sweet lookin’ plots.
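Concretely (and this is exactly what the code at the bottom of this post computes), the transformed curve walks along the graph of f and pushes out a distance g(x) along the normal direction:

\displaystyle T(g)(x) = \left( x - g(x)\sin\theta, \; f(x) + g(x)\cos\theta \right), \quad \text{where } \theta = \arctan f'(x).

As a sanity check, if f(x)=0 then \theta = 0 and T(g) is just the ordinary graph of g.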

Here are some of the ones I tried. The function f is in black, and T(g) is in green, (where T is the transformation discussed above). Note that using the same domains for f and T pretty much zooms f out of the picture a lot of the time.

f(x)=\lfloor x \rfloor
g(x)=\lfloor x \rfloor

f(x)=\lfloor x \rfloor
g(x)={\lfloor x \rfloor}^2

f(x)=\cos x
g(x)=\lfloor x \rfloor

f(x)=x \cos x^2
g(x)=x^2

f(x)=0.5x^2
g(x)=x^3 \cos x

f(x)=\cos x
g(x)=\exp (\cos x^2 )

(That last one? A fountain.)

And here’s the Scilab code I used, if you’re interested:


function [r] = perpfunc(f,g,p) //f and g are functions
//p is a vector of x-values you want to plot

//The function perpfunc plots g as if f were its x-axis.

//Numerical derivative of f at x.
deff('y=df(f,x)','y=derivative(f,x)');
//T sends x to the point (x, f(x)) pushed a distance g(x) along
//the unit normal to the graph of f.
deff('y=T(x)','y=[x-g(x)*sin(atan(df(f,x))),f(x)+g(x)*cos(atan(df(f,x)))]');

lengthp=length(p);
b=ones(lengthp,2);

for i = 1:lengthp
    b(i,:)=T(p(i));
end

plot2d(b(:,1),b(:,2),style=3); //T(g), in green
fplot2d(p,f,1); //f itself, in black

r=b; //return the plotted points

endfunction
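To reproduce, say, the third pair above, a call along these lines should do it (an untested sketch; the domain and number of sample points are arbitrary):

deff('y=f(x)','y=cos(x)');
deff('y=g(x)','y=floor(x)');
pts = perpfunc(f, g, linspace(-10, 10, 2000));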