A Special Case

My statistics professor told a joke today that, while relatively funny, is also a bit interesting. It goes like this:

A math professor is proving some general theorem, and ends up with a result involving x's and y's. One student raises his hand and asks, “Can you show us a special case of that theorem?” In other words, the student wanted a specific application of the theorem. The professor proceeds to erase the x's and y's and replace them with x_0's and y_0's. The student replies, “Ah! That makes more sense.”

Now, as cute as that is, it actually demonstrates a real frustration math students have with mathematical notation. By convention, people who use math tend to take variable names from the end of the alphabet, constants from the beginning of the alphabet (or by subscripting a nought), and indices from the middle of the alphabet (unless it's the Greek alphabet, in which case we tend to take indices from anywhere in the alphabet). And students get used to this convention. Defy it, and bewilderment ensues.
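To make the convention concrete, here's a small LaTeX fragment (the specific expression is just an illustration I've chosen, not from any particular textbook):

```latex
% Typical symbol conventions, illustrated:
%   variables   -- end of the alphabet (x, y, z)
%   constants   -- beginning of the alphabet (a, b, c), or noughts (c_0)
%   indices     -- middle of the alphabet (i, j, k)
\[
  f(x) = c_0 + \sum_{i=1}^{n} a_i x^i
\]
% Here x is the variable, c_0 and the a_i are constants,
% and i is the summation index.
```

Swap the roles (say, "let x be a constant and let c vary") and the expression means the same thing formally, but most readers will stumble.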

Now, I wonder, should math teachers regularly replace symbols to root out this problem early (which might annoy a lot of instructors, especially the incapable ones), or is the problem entirely benign? On the one hand, it's useful to keep to conventions because it lets you understand notations more quickly. If you look at some expression involving variables and constants, and you know that constants are usually c_i's, then when an instructor says “Let c be some constant”, you absorb that fact more readily than if she says “Let x be some constant”. On the other hand, keeping to conventions puts you into a mental rut where you ignore the relationship between the constituents of an expression in favor of simply memorizing the formula.

Now, some other conventions make perfect sense. Matching the relative size of a mathematical object to the case of its symbol makes sense. For instance, members of a set are often written in lower case, while the set itself is in upper case. Subsets are upper case because they're sets in and of themselves, with their own constituent members. Matrices are written in upper case, while their elements (and vectors) are usually lower case. This convention serves as a mnemonic device, and is quite helpful to that end. Do the choices of symbols themselves have a mnemonic purpose? I don't know. Perhaps someone more versed in the history of such things could explain.

In any case, the conventions aren't likely to change. And it's not like it affects me much anymore, although I have encountered students in upper-level math classes whom it still affects to a degree.

