Archive for the ‘Math’ Category

Honeybee genealogy

Friday, February 15th, 2008

Male honeybees are born from unfertilized eggs. Female honeybees are born from fertilized eggs. Therefore males have only a mother, but females have both a mother and a father.

Take a male honeybee and graph his ancestors. Let B(n) be the number of bees at the nth level of the family tree. At the first level of the tree is our male honeybee all by himself, so B(1) = 1. At the next level of our tree is his mother, all by herself, so B(2) = 1.

Pick one of the bees at level n of the tree. If this bee is male, he has a mother at level n+1, and a grandmother and grandfather at level n+2. If this bee is female, she has a mother and father at level n+1, and one grandfather and two grandmothers at level n+2. In either case, the number of grandparents is one more than the number of parents. Therefore B(n) + B(n+1) = B(n+2).

To summarize, B(1) = B(2) = 1, and B(n) + B(n+1) = B(n+2). These are the initial conditions and recurrence relation that define the Fibonacci numbers. Therefore the number of bees at level n of the tree equals F(n), the nth Fibonacci number.
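The argument can be checked numerically. Here is a short Python sketch (my own, not from the post) that tracks males and females separately, using the rule that a male has only a mother while a female has a mother and a father:

```python
# Count bees at each level of a male honeybee's family tree.
# A male has one parent (a mother); a female has two (a mother and a father).
def bee_ancestors(levels):
    males, females = 1, 0  # level 1: our male bee, all by himself
    counts = []
    for _ in range(levels):
        counts.append(males + females)
        # Every bee at this level has a mother; only the females also have
        # a father. So next level: fathers = current females,
        # mothers = current total.
        males, females = females, males + females
    return counts

print(bee_ancestors(10))  # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
```

The counts are exactly the Fibonacci numbers, as the recurrence predicts.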

This is a more realistic demonstration of Fibonacci numbers in nature than the oft-repeated rabbit problem.

Chebyshev polynomials

Saturday, February 9th, 2008

I posted a four-page set of notes on Chebyshev polynomials on my web site this morning. These polynomials have many elegant properties that are simple to prove. They’re also useful in applications.

[Drawing of Pafnuty Lvovich Chebyshev]

Mr. Chebyshev may have the honor of the most variant spellings for a mathematician’s name. I believe “Chebyshev” is now standard, but his name has been transliterated from the Russian as Chebychev, Chebyshov, Tchebycheff, Tschebyscheff, etc. His polynomials are denoted Tn(x) based on his initial in one of the older transliterations.
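One of the best-known elegant properties is the identity Tn(cos θ) = cos(nθ), which follows from the recurrence T0(x) = 1, T1(x) = x, Tn+1(x) = 2x Tn(x) − Tn−1(x). A minimal Python sketch (an illustration of the standard recurrence, not code from the notes):

```python
import math

def chebyshev_T(n, x):
    # Evaluate T_n(x) via the recurrence:
    # T_0 = 1, T_1 = x, T_{n+1}(x) = 2x T_n(x) - T_{n-1}(x)
    t_prev, t = 1.0, x
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        t_prev, t = t, 2 * x * t - t_prev
    return t

# The defining identity: T_n(cos theta) = cos(n theta)
theta = 0.7
for n in range(6):
    assert abs(chebyshev_T(n, math.cos(theta)) - math.cos(n * theta)) < 1e-12
```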

Proofs of false statements

Wednesday, February 6th, 2008

Mark Dominus brought up an interesting question last month: have there been major screw-ups in mathematics? He defines a “major screw-up” to be a flawed proof of an incorrect statement that was accepted for a significant period of time. He excludes the case of incorrect proofs of statements that were nevertheless true.

It’s remarkable that he can even ask the question. Can you imagine someone asking with a straight face whether there have ever been major screw-ups in, say, software development? And yet it takes some hard thought to come up with examples of really big blunders in math.

No doubt there are plenty of flawed proofs of false statements in areas too obscure for anyone to care about. But in mainstream areas of math, blunders are usually uncovered very quickly. And there are examples of theorems that were essentially correct but neglected some edge case. Proofs of statements that are just plain wrong are hard to think of. But Mark Dominus came up with a few.

Yesterday he gave an example of a statement by Kurt Gödel that was flat-out wrong but accepted for over 30 years. Warning: reader discretion advised. His post is not suitable for those who get queasy at the sight of symbolic logic.

Laws of large numbers and small numbers

Thursday, January 24th, 2008

In case my previous note on the law of small numbers confused anyone, I’ll compare it to the law of large numbers.

The law of large numbers is a mathematical theorem; the law of small numbers is an observation about human psychology.

The name “law of large numbers” is a standard term applied to a theorem about the convergence of random variables. (OK, actually two theorems. Like nuclear forces, the law of large numbers comes in a strong and a weak form.)
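The weak form says the sample mean converges (in probability) to the expected value as the sample grows. A quick Python sketch with fair die rolls (the seed and sample sizes are my choices for illustration):

```python
import random

# Law of large numbers in action: the sample mean of fair die rolls
# settles down to the expected value 3.5 as the sample size grows.
def sample_mean(n, seed=42):
    rng = random.Random(seed)
    return sum(rng.randint(1, 6) for _ in range(n)) / n

for n in (10, 1000, 100000):
    print(n, sample_mean(n))
```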

The name “law of small numbers” is a pun, and I don’t believe the term is commonly used. Too bad. It’s a convenient label for a common phenomenon.

The law of small numbers

Thursday, January 24th, 2008

The book Judgment under uncertainty analyzes common fallacies in how people estimate probabilities. The book asserts that no one has good intuition about probability. Statisticians do better than the general public, not because their intuition is much better, but because they know not to trust their intuition; they know they need to rely on calculations.

One of the common fallacies listed in the book is the “law of small numbers.” In general, people grossly underestimate the variability in small samples. This phenomenon comes up all the time. It’s good to know someone has given it a name.
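A simulation makes the point concrete. The sketch below (parameters are mine, purely illustrative) estimates a 50% rate from samples of 10 versus samples of 1000 and measures how much the estimates bounce around:

```python
import random
import statistics

# How variable is an estimated proportion, as a function of sample size?
def spread_of_sample_means(sample_size, trials=2000, seed=1):
    rng = random.Random(seed)
    means = [sum(rng.random() < 0.5 for _ in range(sample_size)) / sample_size
             for _ in range(trials)]
    return statistics.pstdev(means)

# Small samples produce roughly 10x more variable estimates of the
# true rate 0.5 than samples 100x larger.
print(spread_of_sample_means(10))
print(spread_of_sample_means(1000))
```

Theory says the spread scales like 1/√n, so a sample 100 times larger is only 10 times more precise.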

Thick tails

Friday, January 18th, 2008

Bart Kosko in his book Noise argues that thick-tailed probability distributions such as the Cauchy distribution are common in nature. This is the opposite of what I was taught in college. I remember being told that the Cauchy distribution, a distribution with no mean or variance, is a mathematical curiosity more useful for constructing academic counterexamples than for modeling the real world. Kosko disagrees. He writes

… all too many scientists simply do not know that there are infinitely many different types of bell curves. So they do not look for these bell curves and thus they do not statistically test for them. The deeper problem stems from the pedagogical fact that thick-tailed bell curves get little or no attention in the basic probability texts that we still use to train scientists and engineers. Statistics books for medicine and the social sciences tend to be even worse.

We see thin-tailed distributions everywhere because we don’t think to look for anything else. If we see samples drawn from a thick-tailed distribution, we may throw out the “outliers” before we analyze the data, and then a thin-tailed model fits just fine.

How do you decide what’s an outlier? Two options. You could use your intuition and discard samples that “obviously” don’t belong, or you could use a formal test. But your intuition may implicitly be informed by experience with thin-tailed distributions, and your formal test may also circularly depend on the assumption of a thin-tailed model.
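The strangeness of the Cauchy distribution is easy to see in simulation. Since it has no mean, sample averages never settle down, no matter how much data you collect. A sketch (my own illustration, using inverse-CDF sampling for the Cauchy):

```python
import math
import random

# Sample means from a normal distribution settle down as n grows;
# sample means from a Cauchy distribution never do (it has no mean).
def mean_of_sample(n, draw, seed=0):
    rng = random.Random(seed)
    return sum(draw(rng) for _ in range(n)) / n

def normal(rng):
    return rng.gauss(0.0, 1.0)

def cauchy(rng):
    # Inverse-CDF sampling for the standard Cauchy distribution
    return math.tan(math.pi * (rng.random() - 0.5))

for n in (100, 10000, 1000000):
    print(n, mean_of_sample(n, normal), mean_of_sample(n, cauchy))
```

The normal means shrink toward zero; the Cauchy means jump around erratically, dominated by occasional enormous draws. (The sample median of Cauchy data is well behaved; it’s the mean that fails.)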

Quick TeX to graphic utility

Thursday, January 17th, 2008

Here’s a web site where you can type in some TeX code, click a button, and get back a GIF with a transparent background. Handy for pasting equations into HTML.

For example:

[Image: gaussian integral]
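Judging from the image label (“gaussian integral”), the example was presumably the classic Gaussian integral. The TeX one would type into such a tool would look something like:

```latex
\int_{-\infty}^{\infty} e^{-x^2} \, dx = \sqrt{\pi}
```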

Coping with exponential growth

Thursday, January 17th, 2008

Everything is supposedly growing exponentially these days. But when most people say “exponential,” they don’t mean what they say. They mean “fast.” Exponential growth can indeed be fast. Or it can be slow. Excruciatingly slow.

If you earn a million dollars a day, your wealth is growing quickly, but not exponentially. And if you have $100 in the bank earning 3% compound interest, your money is growing slowly, but it is growing exponentially.

Linear growth is a constant amount of increase per unit of time. Exponential growth is a constant percentage increase per unit of time. If you buy a pack of baseball cards every Friday, the size of your baseball card collection will grow linearly. But if you breed rabbits with no restriction, the size of your bunny herd will grow exponentially.
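The contrast is easy to simulate. The sketch below (the starting value, step, and rate are made up for illustration) finds the period at which a modest 3% compounding growth finally overtakes a generous constant increase:

```python
# Linear growth adds a constant amount each period; exponential growth
# multiplies by a constant factor. Exponential always wins eventually.
def overtake_period(start=100.0, step=50.0, rate=1.03):
    linear = exponential = start
    period = 0
    while exponential <= linear:
        period += 1
        linear += step      # constant amount per period
        exponential *= rate # constant percentage per period
    return period

# Adding 50 per period vs. growing 3% per period, from the same start:
print(overtake_period())
```

With these numbers it takes well over a hundred periods, which is the point of the next paragraphs: exponential growth can be excruciatingly slow before it is fast.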

It matters a great deal whether you’re growing linearly or exponentially.

When you start a new enterprise — a company, a web site, etc. — it may truly grow exponentially. Growth may be determined by word of mouth, which is exponential (at first). The number of new people who hear each month depends on the number of people who talk, and hearers become talkers. But that process can be infuriatingly slow when it’s just getting started. If the number of visitors to your web site is growing 5% per month, that’s great in the long term, but disappointing at first when it means going from 40 visitors one month to 42 the next.
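The 5% example compounds like this (a two-line illustration of the arithmetic):

```python
# 5% monthly growth looks like nothing at first, then takes off.
# 40 visitors becomes 42 after a month, but it keeps compounding.
for months in (1, 12, 24, 48):
    print(months, round(40 * 1.05 ** months))
```

After one month you have 42 visitors; after four years, more than ten times your starting number.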

How do you live on an exponential curve? You need extraordinary patience. While any exponential curve will eventually pass any linear curve, it may take a long time. If you’re making barely perceptible but compounding progress, be encouraged that you’re on the right curve. Eventually you’ll have all the growth you can handle. Realize that you may be having a harder time initially because you’re on the exponential curve rather than the linear curve.

How do you know whether you’re on an exponential curve? This is not as easy as it sounds. Because of random noise, it may be hard to tell from a small amount of data whether growth is linear or exponential, or even to tell growth from stagnation. Eventually the numbers will tell you. But until enough data come in to reveal what’s going on, look at the root causes of your growth. If you’re growing because customers are referring customers, that’s a recipe for exponential growth. If you’re growing because you’re working more hours, that’s linear growth.

Nothing grows exponentially forever. Word of mouth slows down when the message reaches saturation, when the talkers run into fewer people who haven’t heard. Rabbit farms slow down when they can’t feed all the rabbits. Most of the things we call exponential growth are more accurately logistic growth: exponential growth slows to linear growth, then linear growth begins to plateau.

How do you live on a logistic curve? Realize that initial exponential growth doesn’t last. Watch the numbers. They’ll tell you when you’ve gone from approximately exponential to approximately linear. Understand the mechanisms that turn exponential into logistic growth in your context.
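The three phases of a logistic curve show up clearly in a few numbers. A sketch (the capacity, rate, and midpoint are arbitrary parameters chosen for illustration):

```python
import math

# A logistic curve: roughly exponential at first, roughly linear in the
# middle, then a plateau as it approaches capacity.
def logistic(t, capacity=1000.0, rate=0.1, midpoint=60.0):
    return capacity / (1.0 + math.exp(-rate * (t - midpoint)))

for t in range(0, 121, 20):
    print(t, round(logistic(t)))
```

Early values grow multiplicatively, the middle of the curve climbs steadily, and the late values crawl toward the capacity of 1000.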

Stochastic independence

Wednesday, January 16th, 2008

Independence in probability can be both intuitive and mysterious. Intuitively, two events are independent if they have nothing to do with each other. Suppose I ask you to guess whether the next person walking down the street is left-handed. I stop this person, and before I ask which hand he writes with, I ask him for the last digit of his phone number. He says it’s 3. Does knowing the last digit of his phone number make you change your estimate of the chances this stranger is a southpaw? No, phone numbers and handedness are independent. Presumably about 10% of right-handers and the same percentage of left-handers have phone numbers ending in 3. Even if the phone company is more or less likely to assign numbers ending in 3, there’s no reason to believe they take customer handedness into account when handing out numbers. On the other hand, if I tell you the stranger is an artist, that should change your estimate: a disproportionate number of artists are lefties.

Formally, two events A and B are independent if P(A and B) = P(A) P(B). This implies that P(A | B), the probability of A happening given that B happened, is just P(A). Similarly P(B | A) = P(B).  Knowing whether or not one of the events happened tells you nothing about the likelihood of the other. Knowing someone’s phone number doesn’t help you guess which hand they write with, unless you use the phone number to call them and ask about their writing habits.
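For finite sample spaces you can verify the definition by brute-force enumeration. A sketch using two fair dice (my example, not from the post):

```python
from fractions import Fraction
from itertools import product

# All 36 equally likely outcomes of rolling two fair dice.
outcomes = list(product(range(1, 7), repeat=2))

def prob(event):
    return Fraction(sum(1 for o in outcomes if event(o)), len(outcomes))

first_even = lambda o: o[0] % 2 == 0   # first die shows an even number
second_high = lambda o: o[1] > 4       # second die shows 5 or 6

# The dice don't influence each other: P(A and B) = P(A) P(B).
assert prob(lambda o: first_even(o) and second_high(o)) == \
       prob(first_even) * prob(second_high)

# But an event involving both dice can depend on the first one:
sum_high = lambda o: o[0] + o[1] > 8   # total greater than 8
assert prob(lambda o: first_even(o) and sum_high(o)) != \
       prob(first_even) * prob(sum_high)
```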

Now let’s extend the definition to more events. A set of events is mutually independent if the probability of any subset of two or more events is the product of the probabilities of each event separately.

So, let’s look at three events: A, B, and C. If we know P(A and B and C) = P(A) P(B) P(C), are the three events mutually independent? Not necessarily. It is possible for the above equation to hold and yet P(A and B) is not equal to P(A) P(B). The definition of mutual independence requires something of every subset of {A, B, C} with two or more elements, not just the subset consisting of all elements. So we have to look at the subsets {A, B}, {B, C}, and {A, C} as well.

What if A and B are independent, B and C are independent, and A and C are independent? In other words, every pair of events is independent. Is that enough for mutual independence? Surprisingly, the answer is no. It is possible to construct a simple example where

  • P(A and B) = P(A) P(B)
  • P(B and C) = P(B) P(C)
  • P(A and C) = P(A) P(C)

and yet P(A and B and C) does not equal P(A) P(B) P(C).
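The classic example is two fair coin flips: let A be “first flip is heads,” B be “second flip is heads,” and C be “the two flips match.” Every pair is independent, yet the three events together are not (knowing any two of them determines the third). Checked by enumeration:

```python
from fractions import Fraction
from itertools import product

# Two fair coin flips, four equally likely outcomes.
outcomes = list(product("HT", repeat=2))

def prob(event):
    return Fraction(sum(1 for o in outcomes if event(o)), len(outcomes))

A = lambda o: o[0] == "H"      # first flip is heads
B = lambda o: o[1] == "H"      # second flip is heads
C = lambda o: o[0] == o[1]     # the two flips match

# Every pair is independent...
assert prob(lambda o: A(o) and B(o)) == prob(A) * prob(B)
assert prob(lambda o: A(o) and C(o)) == prob(A) * prob(C)
assert prob(lambda o: B(o) and C(o)) == prob(B) * prob(C)

# ...but the three together are not: P(A,B,C) = 1/4, not 1/8.
assert prob(lambda o: A(o) and B(o) and C(o)) != prob(A) * prob(B) * prob(C)
```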

There are no short cuts to the definition of mutual independence.

Irrelevant uncertainty

Monday, January 14th, 2008

Suppose I asked where you want to eat lunch. Then I told you I was about to flip a coin and asked again where you want to eat lunch. Would your answer change? Probably not, but sometimes the introduction of irrelevant uncertainty does change our behavior.

In a dose-finding trial, it is often the case that a particular observation has no immediate importance to decision making. Suppose Mr. Smith’s outcome is unknown. We calculate what the next dose will be if he responds to treatment and what it will be if he does not respond. If both doses are the same, why wait to know his outcome before continuing? Some people accept this reasoning immediately, while others are quite resistant.

Not only may a patient’s outcome be irrelevant, the outcome of an entire clinical trial may be irrelevant. I heard of a conversation with a drug company where a consultant asked what the company would do if their trial were successful. He then asked what they would do if it were not successful. Both answers were the same. He then asked why do the trial at all, but his question fell on deaf ears.

While it is irrational to wait to resolve irrelevant uncertainty, it is a human tendency. For example, businesses may delay a decision on some action pending the outcome of a presidential election, even if they would take the same action regardless of which candidate won. I see how silly this is when other people do it, but it’s not too hard for me to think of analogous situations where I act the same way.

Complementary validation

Thursday, January 10th, 2008

Edsger Dijkstra quipped that software testing can only prove the existence of bugs, not the absence of bugs. His research focused on formal techniques for proving the correctness of software, with the implicit assumption that proofs are infallible. But proofs are written by humans, just as software is, and are also subject to error. Donald Knuth had this in mind when he said “Beware of bugs in the above code; I have only proved it correct, not tried it.” The way to make progress is to shift from thinking about the possibility of error to thinking about the probability of error.

Testing software cannot prove the impossibility of bugs, but it can increase your confidence that there are no bugs, or at least lower your estimate of the probability of running into a bug. And while proofs can contain errors, they’re generally less error-prone than source code. (See a recent discussion by Mark Dominus about how reliable proofs have been.) At any rate, people tend to make different kinds of errors when proving theorems than when writing software. If software passes tests and has a formal proof of correctness, it’s more likely to be correct. And if theoretical results are accompanied by numerical demonstrations, they’re more believable.

Leslie Lamport wrote an article entitled How to Write a Proof where he addresses the problem of errors in proofs and recommends a pattern of writing proofs which increases the probability of the proof being valid. Interestingly, his proofs resemble programs. And while Lamport is urging people to make proofs more like programs, the literate programming folks are urging us to write programs that are more like prose. Both are advocating complementary modes of validation, adding machine-like validation to prosaic proofs and adding prosaic explanations to machine instructions.

Integration and pragmatism

Wednesday, January 9th, 2008

When I first saw integration techniques in calculus, I thought they were a waste of time because software packages could do any integral I could do by hand. Besides, you can always just use Simpson’s rule to compute integrals numerically.

In short, I thought symbolic integration was useless and numerical integration was trivial. Of course I was wrong on both counts. I’ve solved numerous problems at work by being able to compute an integral in closed form, and I’ve had a lot of fun cracking challenging numerical integration problems.
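For the record, here is the kind of thing I dismissed as trivial: composite Simpson’s rule, checked against a closed form. (This sketch is mine; the integrand and interval are arbitrary choices.)

```python
import math

def simpson(f, a, b, n=100):
    # Composite Simpson's rule on [a, b] with n subintervals (n must be even).
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3

# Compare against a closed form: the integral of exp(-x^2) over [0, 1]
# equals (sqrt(pi)/2) * erf(1).
approx = simpson(lambda x: math.exp(-x * x), 0.0, 1.0)
exact = math.sqrt(math.pi) / 2 * math.erf(1.0)
print(approx, exact)
```

Even here the “trivial” method repays attention: Simpson’s rule is exact for cubics and converges at fourth order, which is why 100 subintervals already agree with the closed form to many digits.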

Many of the things I thought were impractical when I was in college have turned out to be very practical. And many things I thought would be supremely useful I have yet to apply. Paying too much attention to what is “practical” can be self-defeating. Pragmatism is impractical.