
I'm just reviewing some basics of Python and there's a tricky problem about comparing floating point numbers.

2.2 * 3.0 == 6.6
3.3 * 2.0 == 6.6

I thought these should both return False. However, the second one gave me True.

Please help me here. Thanks!

  • You should never compare two floats with ==. Use abs(a - b) < threshold if you really need to. Commented Sep 29, 2014 at 2:22
  • @Steve: A correct comparison is more complicated than that, since the precision of a float is a number of digits, not an absolute numerical value. Something like abs(a-b) <= rel_prec * max(abs(a), abs(b)) is better (with rel_prec close to 1e-16, for instance, for Python's double-precision floats). In addition, the case of a zero value should be handled too. I did not fully check this, but the following might work: abs(a-b) <= rel_prec * (max(abs(a), abs(b)) if a != 0 != b else 1). Commented Sep 29, 2014 at 2:47
  • @Steve: Huh? You can compare two floating-point numbers for equality using ==. It works correctly. Commented Sep 29, 2014 at 4:30
  • @abarnert: No, Steve said "don't use == to compare two floating-point numbers." That's horrible advice and it reinforces unjustified superstitions. Commented Sep 29, 2014 at 5:46
  • @abarnert: I prefer the rule "understand your tools before using them in serious code" to some hodgepodge of superstitions about goto and floating-point numbers. Situations where == is appropriate: if you can prove no roundoff error will occur, use ==. For instance, Graham's scan can be implemented correctly with doubles if your points have, say, integer coordinates in [-2^24, 2^24]. The cases where you can't use == are scarier, since they mean you probably need to fall back to MPFR to see whether your predicate is actually true or actually false. Commented Sep 29, 2014 at 13:04
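The tolerance-based comparison sketched in these comments is available in the standard library as math.isclose (Python 3.5+), which applies a relative tolerance along the same lines; a minimal sketch:

```python
import math

a = 2.2 * 3.0
b = 3.3 * 2.0

# Exact equality fails because the two products round differently.
print(a == b)              # False

# math.isclose uses a relative tolerance (default rel_tol=1e-09),
# the same idea as the hand-rolled check suggested above.
print(math.isclose(a, b))  # True

# The hand-rolled version of the same idea:
rel_prec = 1e-9
print(abs(a - b) <= rel_prec * max(abs(a), abs(b)))  # True
```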

3 Answers


This might be illuminating:

>>> float.hex(2.2 * 3.0)
'0x1.a666666666667p+2'
>>> float.hex(3.3 * 2.0)
'0x1.a666666666666p+2'
>>> float.hex(6.6)
'0x1.a666666666666p+2'

Although they are all displayed in decimal as 6.6, when you inspect the internal representation, two of them are represented in the same way, while one of them is not.
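The gap visible in the last hex digit is exactly one unit in the last place; assuming Python 3.9 or later (where math.ulp exists), you can confirm this directly:

```python
import math

a = 2.2 * 3.0  # 0x1.a666666666667p+2
b = 3.3 * 2.0  # 0x1.a666666666666p+2

# The two products differ by exactly one ulp at 6.6 (2**-50).
print(a - b == math.ulp(6.6))  # True
```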


1 Comment

But they're not all displayed in decimal as 6.6. In 3.x, print(2.2 * 3.0) or just 2.2 * 3.0 will show 6.6000000000000005. In 2.x, the print will truncate, but just 2.2 * 3.0 will still show 6.6000000000000005. So, this is simpler to see than the answer implies.

In order to complete Amadan's good answer, here is a more obvious way of seeing that 2.2*3. and 3.3*2. are not represented by the same float: in a Python shell,

>>> 2.2 * 3.
6.6000000000000005
>>> 3.3 * 2.
6.6

In fact, the Python shell displays the repr of numbers, which by definition must contain enough digits for the corresponding float to be rebuilt exactly from the string, so you see the actual numerical approximation of 2.2 * 3. that Python computes. The fact that 2.2 * 3. != 3.3 * 2. is obvious once all the necessary digits are shown, as above.
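The round-trip property of repr mentioned above can be checked directly; a small sketch:

```python
x = 2.2 * 3.0

# repr gives the shortest decimal string that maps back to the same float,
# so it must show the extra digits...
s = repr(x)
print(s)              # '6.6000000000000005'
print(float(s) == x)  # True

# ...whereas rounding the display to fewer significant digits
# collapses the distinction with 6.6.
print(float(f"{x:.10g}"))  # 6.6
```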

3 Comments

The part about the representation vs. the friendly version is relevant to Python 2.x, but not to 3.x. In 3.x, print(2.2*3.) will still give you 6.6000000000000005. In 3.x, both str and repr return the shortest string that will evaluate back to the same float; in 2.x, str truncates to a platform-specific number of digits, while repr doesn't.
Interesting. There was no "part about the representation vs the friendly version" before your comment, though. :D
Reference about Python 3's str change: stackoverflow.com/questions/25898733/….

It is also easy to see that 3.3 * 2.0 is numerically identical to 6.6: the latter computation is nothing more than an increment of the binary exponent, since it is a multiplication by a power of two. You can see this in the following:

    | s exponent    significand
----+-------------------------------------------------------------------
1.1 | 0 01111111111 0001100110011001100110011001100110011001100110011010
2.2 | 0 10000000000 0001100110011001100110011001100110011001100110011010
3.3 | 0 10000000000 1010011001100110011001100110011001100110011001100110
6.6 | 0 10000000001 1010011001100110011001100110011001100110011001100110

Above you see the binary representations of the floating-point numbers 3.3 and 6.6. The only difference between the two is the exponent, since 6.6 is just 3.3 multiplied by two. We know that IEEE-754:

  • approximates a decimal number with the smallest possible numerical error
  • can represent all integers up to 2^53 exactly (for binary64)

So since 2.0 is exactly representable, multiplying by it is nothing more than a change of exponent. All of the following therefore produce the same floating-point number:

6.6 == 0.825 * 16.0 == 1.65 * 4.0 == 3.3*2.0 == 13.2 * 0.5 == ...
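The bit fields in the table above can be reproduced with the standard struct module; a small sketch (the helper bits is hypothetical, not part of any library):

```python
import struct

def bits(x: float) -> str:
    """Format a double as its sign | exponent | significand bit fields."""
    b = struct.unpack(">Q", struct.pack(">d", x))[0]  # raw 64-bit pattern
    s = f"{b:064b}"
    return f"{s[0]} {s[1:12]} {s[12:]}"

for v in (1.1, 2.2, 3.3, 6.6):
    print(f"{v} | {bits(v)}")
```

Running this shows that 3.3 and 6.6 share the same significand and differ only in the exponent field, as in the table.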

Does this mean that 2.2 * 3.0 is fundamentally different from 6.6? No, the mismatch in the last bit of the significand is just due to rounding error in the multiplication.

An example where it would have worked is 5.5 * 2.0 == 2.2 * 5.0 == 11.0. Here the rounding happened to be favourable.
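The claim that multiplying by 2.0 only increments the exponent can also be checked with math.frexp, which splits a float into a significand and a power-of-two exponent:

```python
import math

m1, e1 = math.frexp(3.3)        # 3.3 == m1 * 2**e1
m2, e2 = math.frexp(3.3 * 2.0)  # same significand, exponent + 1

print(m1 == m2)      # True
print(e2 - e1 == 1)  # True

# The favourable-rounding case: 2.2 * 5.0 happens to round to exactly 11.0.
print(5.5 * 2.0 == 2.2 * 5.0 == 11.0)  # True
```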

