There's an extremely subtle point here about the hyperreals that the author glosses over (and is perhaps unaware of):
If you take 0.999... to mean the sum of 9/10^n where n ranges over every standard natural, then the author is correct that it equals 1 - eps for some infinitesimal eps in the hyperreals.
This does not violate the transfer principle, because there are nonstandard naturals in the hyperreals. If you instead take the sum over all of N* (nonstandard naturals included), then 0.999... = 1 in the hyperreals too.
(this is how the transfer principle works - you map sums over N to sums over N* which includes the nonstandards as well)
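A quick sketch of the distinction, using the usual geometric-sum identity:

    \sum_{n=1}^{N} \frac{9}{10^n} = 1 - 10^{-N}

Plugging an infinite hypernatural N into the right-hand side gives 1 minus an infinitesimal, while the transferred series (the limit taken over all of N*) comes out to exactly 1.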
The kicker is that as far as I know there cannot be any first-order predicate that distinguishes the two, so the author is on very confused ground mathematically imo.
(not to mention that defining the hyperreals in the first place requires extremely non-constructive objects like non-principal ultrafilters)
The right way to approach this is to ask a question: What does 0.999... mean? What is the mathematical definition of this notation? It's not "what you get when you continue to infinity" (which is not a well-defined notion). It's the value you are approaching as you continue to add digits.
When applying the correct definition for the notation (the limit of a sequence) there's no question of "do we ever get there?". The question is instead "can we get as close to the target as we want if we go far enough?". If the answer is yes, the notation can be used as another way to represent the target.
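Spelled out, that definition (a standard-analysis sketch) is:

    0.999\ldots := \lim_{n\to\infty} \sum_{k=1}^{n} \frac{9}{10^k} = \lim_{n\to\infty} \left(1 - 10^{-n}\right) = 1

and the "as close as we want" question is settled by the 10^{-n} term: pick any tolerance, and a large enough n gets within it.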
So the author tries to be rigorous, but again falls into the same traps that the people who claim 0.9… != 1 fall into.
“0.999… = 1 - infinitesimal”
But this is simply not true. Only then do they get back to a true statement:
“Inequality between two reals can be stated this way: if you subtract a from b, the result must be a nonzero real number c”.
This post doesn’t clear things up, nor is it mathematically rigorous.
Pointing towards hyperreals is another red herring, because again there 0.999… equals 1.
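Applying that quoted subtraction criterion directly (a quick check in the standard reals):

    1 - 0.999\ldots = \lim_{n\to\infty} \bigl(1 - (1 - 10^{-n})\bigr) = \lim_{n\to\infty} 10^{-n} = 0

so there is no nonzero real c left over, and the two numbers are equal.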
Where school kids tend to get stuck is that they'll hold contradictory views on how fractions can be represented.
First it'll be uncontroversial that ⅓ = 0.333..., usually because it's familiar to them and they've seen it frequently with calculators.
However, they'll then get stuck with 0.999... and posit that it is not equal to 1/1, because there must "always be some infinitesimally small difference from one".
Herein lies the contradiction: on one hand they accept that 0.333... is equal to ⅓, and not some infinitesimally small amount away from ⅓, but on the other hand they won't extend that standard to 0.999...
Once you tackle the problem of "you have to be consistent in your rules for representing fractions", then you've usually cracked the block in their thinking.
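Written out, the consistency argument (taking ⅓ = 0.333... as the accepted starting point) is just:

    3 \times \tfrac{1}{3} = 1 \quad\text{and}\quad 3 \times 0.333\ldots = 0.999\ldots \;\Rightarrow\; 0.999\ldots = 1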
Another way of thinking about it is to suggest that 0.999... is indistinguishable from 1.
Any confusion about this should go away as soon as you make clear what exactly you are talking about. If you construct the real numbers using Cauchy sequences and define the* decimal representation of a number using a Maclaurin series at x=1/10 then it's perfectly clear that 0.9... and 1.0... are two different representations of the same number. So it's the same equivalence class, but not the same representation. Thus, if you're talking about the representation of the abstract number 1, they're not equal but equivalent. If you're talking about the numbers they represent, they're equal.
* As the example shows, the decimal representation isn't unique, so perhaps we should say "_a_ decimal representation".
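A concrete sketch of that setup: the two representations correspond to the Cauchy sequences

    (0.9,\ 0.99,\ 0.999,\ \ldots) \quad\text{and}\quad (1,\ 1,\ 1,\ \ldots)

whose term-wise difference is 10^{-n} \to 0, so they sit in the same equivalence class and hence denote the same real number.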
To me the most obvious proof is that there are no numbers in between 0.999... and 1. Therefore it must be the same number.
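One way to make that step precise (a sketch): if 0.999... were strictly less than 1, their average would be a real number strictly between them,

    0.999\ldots < \frac{0.999\ldots + 1}{2} < 1

contradicting the fact that nothing lies between them, so the two must be equal.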
I don't get what the author is trying to do here. I mean, he complains that talking about the limit of a sequence is too abstract and unfamiliar to most people, so the explanation is not satisfying. But then he name-drops the notion of an Archimedean group and introduces the hyperreals with a big ol' handwave to solve this very straightforward high-school math problem…
Now don't get me wrong, it is nice and good to have blogs presenting these math ideas in an easy, if not rigorous, way by attaching them to known concepts. Maybe that was the real intent here, the 0.99… = 1 "controversy" is just bait, and I am too out of the loop to get the new meta.
FWIW, there's an old Arxiv paper with this same argument:
https://arxiv.org/abs/0811.0164
It feels intuitively correct is what I'll say in its favor.
This is why I love HN. One post about advanced SQL ACID concepts, the next about mathematics, yet another about history.
What a community.
All these supposed proofs are totally wrong. Students correctly interpret them as hand-waving by people who themselves do not have a good answer, because that is exactly the case.
The reason 0.999... and 1 are equal comes down to the definition of equality for real numbers. The informal formulation would be that two real numbers are equal if and only if their difference in magnitude is smaller than every positive rational number.
(Formally, two real numbers are equal iff they belong to the same equivalence class of Cauchy sequences, where two sequences are in the same equivalence class iff their element-wise difference converges to zero, i.e. eventually stays smaller in magnitude than every positive rational.)
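For the two sequences in question this is easy to check (a sketch): the element-wise difference of (1, 1, 1, ...) and (0.9, 0.99, 0.999, ...) is

    (10^{-1},\ 10^{-2},\ 10^{-3},\ \ldots)

which eventually drops below any positive rational p/q you choose, so the two sequences land in the same equivalence class.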
It baffles me how there are still blogposts with a serious attitude about this topic. It’s akin to discussing possible loopholes of how homeopathy might be medicinally helpful beyond placebo, again and again.
Why are hyperreals even mentioned? This post is not about hyperreals or non-standard math, it’s about standard math, very basic one at that, and then comes along with »well under these circumstances the statement is correct« – well no, absolutely not, these aren’t the circumstances the question was posed under.
We don’t see posts saying »1+2 = 1 because well acktchually if we think modulo 2«, what’s with this 0.9… thing then?
I think rational thinking just doesn't work when it comes to infinity math. I'd say the same thing about probabilities.
ps: based on the title I thought this would be about IEEE 754 floats.
I'd say the key point is to understand the difference between a number and the decimal representation of a number. 0.99999... is one possible representation of the number 1. 1 is another one. Once one understands the definition of the decimal representation, it's just a simple proof to show that 0.99999... = 1.
The explanation is that the number is _not_ the infinite string of characters, but the sum of the scaled digits of the string. This sum is defined as the limit of the partial sums. In Germany, this is something you can already understand in high school.
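In those terms it's just the geometric series:

    \sum_{k=1}^{\infty} 9 \cdot 10^{-k} = 9 \cdot \frac{1/10}{1 - 1/10} = 1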
0.999... -> 1 because you are correcting a supposed carry from a decimal forever. This is close to adding +1 to every odd number ever: no matter how much you try, you will get an even number.
“By definition, there is no real number between 0.9r and 1 therefore they are the same” … was how I heard it explained.
> The belief that 0.x must be less than 1.y makes perfect sense to rational people
what is meant here by this notation 0.x and 1.y ?
Maybe this can be fixed for good using (axiomatic) hyperreal numbers:
0.9̅ = 0.9̂ + ε = 1
For some definition of 0.9̂ = 1 - ε
are there no longer infinitely many floating point numbers between every two floating point numbers?
Don't say that near Richard Dedekind, he'll cut you.
and 0.3… + 0.3… + 0.3… = 0.9… = 1.0
Maybe there is a difference, but it's intangible.
Maybe it is to the number line what the Planck length is to physical measurement.
As a non-math-guy, I understand and accept it, but I feel like we can have both without breaking math.
In a non-idealized system, such as our physical reality, if we divide an object into 3 pieces, then no matter what that object was, we can never add our 3 pieces back together in a way that perfectly recreates the object prior to division. Is there some sort of "unquantifiable loss" at play?
So yea, upvoting because I too am fascinated by this and its various connections in and out of math.
The way I was taught decimals in school (in Romania) always made 0.99... seem like an absurdity to me: we were always taught that fractions are the "real" representation of rational numbers, and decimal notation is just a shorthand. Doing arithmetic with decimal numbers was seen as suspect, and never allowed for decimals with infinite expansions. So, for example, if a test asked you to calculate 2 × 0.2222... [which we notated as 2 × 0,(2)], then the right solution was to expand it:
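Presumably along these lines: convert to the fraction, do the arithmetic there, and convert back,

    2 \times 0{,}(2) = 2 \times \tfrac{2}{9} = \tfrac{4}{9} = 0{,}(4)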
Once you're taught that this is how the numbers work, it's easy(ish) to accept that 0.999... is just a notational trick. At the very least, you're "immune" to certain legit-looking manipulations. So, in this view, 0.3 or 0.333... are not numbers in the proper sense, they're just a convenient notation for 3/10 and 1/3 respectively. And there simply is no number whose notation would be 0.999..., it's just an abuse of the decimal notation.