Is 0.999... = 1? (spoiler alert: no, it is not)

You may have encountered the popular claim that \( 0.999... = 1 \), where the three dots signify that the decimal continues forever. This is a somewhat weird claim, since it would mean that mathematics is broken: there should be no way for two different numbers to have the same value. What makes it weirder is just how popular the claim is. I've even seen mathematicians say that it's true! But is it, though?

One popular proof goes like this: first denote \( S = 0.999... \), then multiply by \( 10 \) to get \( 10S = 9.999... \), subtract \( S \) to get \( 10S - S = 9.000... \), and finally divide by \( 9 \), which yields \( S = 1.000... = 1 \). And there we have it: \( 0.999... = 1 \)!
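Written out as a chain of equations, the argument is:

\[
S = 0.999\ldots \;\Longrightarrow\; 10S = 9.999\ldots \;\Longrightarrow\; 10S - S = 9S = 9 \;\Longrightarrow\; S = 1.
\]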

However, there's a problem. This short derivation is not, strictly speaking, correct. It is veeeery close to being correct, and to see why, let's look at finite decimals first.

Let's say that \( S = 0.999 \) (note that this is not the same as \( S = 0.999... \)). Let's do the same trick as before: multiply by ten, subtract \( S \), and divide by \( 9 \). We get \( \frac{10S - S}{9} = \frac{9.99 - 0.999}{9} = \frac{8.991}{9} = 0.999 \). We just recovered the original \( S \) and not some other number! You can try it with any longer decimal you want; the result is always the same.
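Here is a quick numerical check of this (a minimal Python sketch; the helper nines and the use of Decimal for exact arithmetic are my own choices):

```python
from decimal import Decimal, getcontext

getcontext().prec = 50  # plenty of precision for these short decimals

def nines(k):
    """Return the finite decimal 0.999...9 with k nines."""
    return Decimal("0." + "9" * k)

for k in (3, 5, 10):
    S = nines(k)
    recovered = (10 * S - S) / 9
    print(S, recovered, recovered == S)
# In every case the trick just returns the original S.
```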

This is not particularly surprising, since \( \frac{nS - S}{n - 1} = S \) and not some other number. This holds for any \( S \) and any \( n \neq 1 \).
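The algebra behind this is a single factoring step:

\[
\frac{nS - S}{n - 1} = \frac{(n - 1)S}{n - 1} = S \qquad (n \neq 1).
\]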

But wait: if we always just recover the original number and not some other number, then what changes when there are infinitely many decimals? The answer is that nothing changes; it is the original "proof" that is flawed. It does not take into account what multiplication by \( 10 \) means conceptually: it simply moves the decimal point one position to the right. So \( 0.999... \) becomes \( 9.99... \), which means that \( 9.99... - 0.999... = 8.99... \) with a \( 1 \) after infinitely many decimal places.
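With finite truncations you can watch that displaced \( 1 \) drift further out as the number of nines grows (the same kind of sketch, with the same assumed nines helper):

```python
from decimal import Decimal

def nines(k):
    """Return the finite decimal 0.999...9 with k nines."""
    return Decimal("0." + "9" * k)

for k in (3, 5, 8):
    S = nines(k)
    print(k, 10 * S - S)
# 3 8.991
# 5 8.99991
# 8 8.99999991
# The trailing 1 always sits one place past the last 9 of the original S.
```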

In other words, one seemingly reasonable assumption in the proof is misguided: that if you multiply a number with infinitely many decimals by \( 10 \), the new number still has infinitely many decimals. In fact, you would have infinity \( - 1 \) decimal places. You could say that the original number is always "ahead" by one decimal place.

Let's look at the earlier example with \( S = 0.999 \). If we multiply by ten and subtract one \( S\), we get \(10S - S = 9.99-0.999 = 8.991\). For the proof to work, we need to get \( 9 \) on the right hand side. The difference is \(9 - 8.991 = 0.009\), so we can write \( 10S - S = 9 - 0.009 \). Adding more decimals to the original \( S \), we notice that the only thing that changes is that the number we must subtract from \( 9\) becomes smaller and smaller.
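The shrinking remainder is easy to tabulate as well (again a small sketch using exact Decimal arithmetic):

```python
from decimal import Decimal

def nines(k):
    """Return the finite decimal 0.999...9 with k nines."""
    return Decimal("0." + "9" * k)

for k in (3, 5, 8):
    S = nines(k)
    print(k, Decimal(9) - (10 * S - S))
# 3 0.009
# 5 0.00009
# 8 0.00000009
# The gap from 9 is exactly 9 * 10**(-k), shrinking with every added nine.
```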

If we take this all the way to infinitely many decimals, so that we approach \( S = 0.999... \), we immediately see that \( 10S - S = 9 - dS \), where \( dS \) is an infinitesimal number (a number that is closer to zero than any real number, but is still nonzero). Therefore, we can carry out the rest of the proof and divide by \( 9 \) to get \( S = 1 - dS/9 \); and since \( dS/9 \) is just another infinitesimal, call it \( dS' \), we conclude that \( S = 0.999... = 1 - dS' \).
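Spelled out, the division step reads:

\[
10S - S = 9 - dS \;\Longrightarrow\; 9S = 9 - dS \;\Longrightarrow\; S = 1 - \frac{dS}{9} = 1 - dS'.
\]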

All of a sudden it all makes sense! The number \(0.999... \) is not equal to \( 1 \), but it is veeeeeery close. Infinitesimally close, that is!
