I have a Postgres-driven application which rounds quantities up to the next (not nearest) 4th decimal place. For example, 0.00341 becomes 0.0035.
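The rounding itself is just ceil() scaled to the 4th decimal place. A minimal sketch of what I mean, using the example value above (exact numeric output formatting may differ):
-- Scale to the 4th decimal place, round up with ceil(), scale back down.
-- For 0.00341 I'd expect this to return 0.0035 (possibly padded with trailing zeros).
SELECT ceil(0.00341 * 10000) / 10000;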
While implementing it, I came upon a situation where exactly 0.0012 is being rounded up to 0.0013, even though it shouldn't be, since the value is exactly 0.0012. At first glance, even ceil() agrees:
postgres=> SELECT ceil(((0.0012) * 10000)) / 10000;
?column?
------------------------
0.00120000000000000000
(1 row)
We know that no spurious extra precision is introduced here; the result compares equal to 0.0012:
postgres=> SELECT (ceil(((0.0012) * 10000)) / 10000) = 0.0012;
?column?
----------
t
(1 row)
Yet, when the figure 0.0012 is arrived at by way of a computation, the situation changes:
postgres=> SELECT (12::double precision / 60) * 0.006;
?column?
----------
0.0012
(1 row)
postgres=> SELECT ((12::double precision / 60) * 0.006) = 0.0012;
?column?
----------
f
(1 row)
It would appear that the computed 0.0012 is greater than the literal 0.0012:
postgres=> SELECT ((12::double precision / 60) * 0.006) > 0.0012;
?column?
----------
t
(1 row)
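Presumably the double precision result is carrying hidden digits that the default output drops. Raising extra_float_digits should expose them (the output noted below is what I would expect, not verified; on PostgreSQL 12+ the default display is already shortest-exact):
-- Print float8 values at full precision (pre-PG12 rounds the display to 15 significant digits).
SET extra_float_digits = 3;
-- I'd expect something like 0.0012000000000000001 here, i.e. slightly above 0.0012.
SELECT (12::double precision / 60) * 0.006;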
Predictably, this leads the rounding mechanism to round 0.0012 "up" to 0.0013, which is obviously wrong if the expression really evaluates to exactly 0.0012:
postgres=> SELECT ceil(((12::double precision / 60) * 0.006) * 10000) / 10000;
?column?
----------
0.0013
(1 row)
So, clearly I'm missing something here about how the expression is evaluated and/or how the data types involved are cast. There's additional precision introduced that shouldn't be there.
Any help would be appreciated!
Note: SELECT ((12::numeric(16,8) / 60) * 0.006) = 0.0012; returns t
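So doing the arithmetic in numeric rather than double precision appears to sidestep the issue. The full rounding expression written that way (I'd expect 0.0012, though I haven't checked other inputs):
-- Same rounding as above, but with numeric arithmetic throughout.
-- Expected result: 0.0012 (numeric output may show trailing zeros).
SELECT ceil(((12::numeric(16,8) / 60) * 0.006) * 10000) / 10000;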