It is also obvious that `3.3 * 2.0` is numerically identical to `6.6`: since the multiplication is by a power of two, it amounts to nothing more than an increment of the binary exponent. You can see this in the following:
     | s exponent    significand
-----+------------------------------------------------------------------
 1.1 | 0 01111111111 0001100110011001100110011001100110011001100110011010
 2.2 | 0 10000000000 0001100110011001100110011001100110011001100110011010
 3.3 | 0 10000000000 1010011001100110011001100110011001100110011001100110
 6.6 | 0 10000000001 1010011001100110011001100110011001100110011001100110
Above you see the binary representation of the floating-point numbers 1.1, 2.2, 3.3 and 6.6. The only difference between 3.3 and 6.6 is the exponent, since one is just the other multiplied by two. We know that IEEE-754:
- approximates a decimal number with the smallest possible numerical error
- can represent all integers up to 2^53 exactly (for binary64)
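You can reproduce the table above yourself. A small sketch using only the standard `struct` module (the helper name `bits` is my own) that reinterprets a binary64 float as a 64-bit integer and splits it into its sign, exponent and significand fields:

```python
import struct

def bits(x: float) -> str:
    """Return the sign | exponent | significand fields of a binary64 float."""
    [n] = struct.unpack(">Q", struct.pack(">d", x))  # reinterpret as uint64
    b = f"{n:064b}"
    return f"{b[0]} {b[1:12]} {b[12:]}"

for x in (1.1, 2.2, 3.3, 6.6):
    print(f"{x:>4} | {bits(x)}")
```

Running this prints exactly the rows of the table: 1.1/2.2 and 3.3/6.6 share their significand bits and differ only in the exponent field.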
So since 2.0 is exactly representable, a multiplication by this number is nothing more than a change in the exponent. Hence all of the following produce the same floating-point number:
6.6 == 0.825 * 8.0 == 1.65 * 4.0 == 3.3 * 2.0 == 13.2 * 0.5 == ...
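You can check this chain directly (note `0.825 * 8.0`, since 0.825 is 6.6 divided by the power of two 8):

```python
# Each factor pair differs from 6.6 only by an exact power-of-two scale,
# so every product is bit-for-bit the same double:
products = [0.825 * 8.0, 1.65 * 4.0, 3.3 * 2.0, 13.2 * 0.5]
print(all(p == 6.6 for p in products))  # → True
```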
Does this mean that `2.2 * 3.0` is different from `6.6` because of the significand? No, that difference is just due to rounding error in the multiplication: 3.0 is exactly representable too, but the exact product of the stored approximations rounds to a neighbouring double.
An example where it would have worked is `5.5 * 2.0 == 2.2 * 5.0 == 11.0`. Here the rounding was favourable.
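The contrast between the two cases can be demonstrated in a couple of lines:

```python
# 3.0 is exactly representable, yet 2.2 * 3.0 misses 6.6 by one ulp,
# because the exact product of the approximations of 2.2 and 3.0
# rounds to the neighbouring double:
print(2.2 * 3.0 == 6.6)    # → False
print(2.2 * 3.0 - 6.6)     # one ulp of 6.6 (~8.9e-16)

# With 11.0 the products land on the same double:
print(5.5 * 2.0 == 11.0)   # → True (exact: power-of-two scaling)
print(2.2 * 5.0 == 11.0)   # → True (the rounding was favourable)
```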
Don't use `==`. Use `abs(a-b) < threshold` if you really want to. `abs(a-b) <= rel_prec * max(abs(a), abs(b))` is better (with `rel_prec` close to 1e-16, for instance, for Python's double-precision floats). In addition to this, the case of a zero value should be handled too. I did not fully check this, but the following might work: `abs(a-b) <= rel_prec * (max(abs(a), abs(b)) if a != 0 != b else 1)`.

Use `==`. It works correctly.

"Never use `==` to compare two floating-point numbers." That's horrible advice, and it enforces unjustified superstitions, like those surrounding `goto` and floating-point numbers. Situations where `==` is appropriate: if you can prove no roundoff error will occur, you use `==`. For instance, Graham's scan can be implemented correctly with `double`s if your points have, say, integer coordinates in [-2^24, 2^24]. The cases where you *can't* use `==` are scarier, since it means you probably need to fall back to MPFR to see whether your predicate is actually true or actually false.
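A minimal sketch of the relative comparison suggested above, with the zero case handled. The name `almost_equal` is my own, and the default tolerance here is deliberately looser than the 1e-16 mentioned above (a production version might instead use a few-ulp tolerance, or the standard library's `math.isclose`):

```python
def almost_equal(a: float, b: float, rel_prec: float = 1e-12) -> bool:
    """Relative comparison; falls back to an absolute scale of 1 near zero."""
    scale = max(abs(a), abs(b)) if a != 0 != b else 1.0
    return abs(a - b) <= rel_prec * scale

print(almost_equal(2.2 * 3.0, 6.6))  # → True (off by one ulp only)
print(almost_equal(0.0, 1e-13))      # → True (zero handled via scale = 1)
print(almost_equal(1.0, 1.1))        # → False
```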