Do pressure-time integral using integer values

Linus Torvalds torvalds at linux-foundation.org
Sun Jan 6 23:58:33 PST 2013


Now that the pressure_time calculations are done in our "native"
integer units (millibar and seconds), we might as well keep using
integer variables.

We still do floating point calculations at various stages for the
conversions (including turning a depth in mm into a pressure in mbar),
so it's not like this avoids floating point per se. And the final
approximation is still done as a fraction of the pressure-time values,
using floating point. So floating point is very much involved, but
it's used for conversions, not (for example) to sum up lots of small
values.
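
To make that concrete, here's a minimal sketch of the shape of the code
(hypothetical names and a made-up conversion helper, not the actual
Subsurface code): the depth-to-pressure conversion is floating point,
but the running sum itself is a plain integer in mbar*seconds.

struct sample {
	int time;	/* seconds */
	int depth;	/* mm */
};

/* made-up conversion helper: salt water, 1013 mbar at the surface */
static int depth_to_mbar(int mm)
{
	return 1013 + (int)(mm * 1.03 * 0.0980665 + 0.5);
}

static int pressure_time_integral(const struct sample *s, int n)
{
	int i, sum = 0;	/* mbar * seconds */

	for (i = 1; i < n; i++) {
		int mbar = depth_to_mbar((s[i].depth + s[i-1].depth) / 2);
		sum += mbar * (s[i].time - s[i-1].time);
	}
	return sum;
}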

With floating point, I had to think about the dynamic range in order
to convince myself that summing up small values would not subtly lose
precision.

With integers, those kinds of issues do not exist. The "lost
precision" case is not subtle: it would be a very obvious overflow,
and it's easy to think about. It turns out that for the pressure-time
integral to overflow in "just" 31 bits, we'd need pressures and times
that aren't even close to the range of scuba cylinder air use (e.g.
"spend more than a day at a depth of 200+ m").
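
For the record, the back-of-the-envelope numbers behind that claim look
roughly like this (salt water, rounding up generously):

  ~200 m ambient pressure  ~= 21 bar  = 21000 mbar
  a full day                          = 86400 s
  21000 * 86400 = 1,814,400,000 mbar*s, still below INT_MAX = 2,147,483,647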

Signed-off-by: Linus Torvalds <torvalds at linux-foundation.org>
---

Maybe I have a thinko somewhere, and an 'int' really isn't sufficient.
I doubt it, but at least the overflow will result in *obvious*
problems, rather than very subtle rounding errors.

Somewhat interestingly, 'int' (even signed, so with just a 31-bit
range) should be perfectly sufficient, but a 'float' - commonly a
32-bit floating point value with a 24-bit mantissa - would *not*
necessarily have sufficient range to avoid rounding issues.

So while a 'double' - with a 52-bit mantissa - was plenty, I think
using a 'float' could actually have exposed that possible loss of
precision in the floating point math. In practice it probably
wouldn't ever have mattered, but I just get a bit worried when
floating point math gets subtle enough that you may start seeing the
difference between 'float' and 'double'.
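
To make the 'float' worry concrete, here's a deliberately extreme toy
(nothing to do with the actual patch) that shows the 24-bit mantissa
running out of integer range:

#include <stdio.h>

int main(void)
{
	float f = 0;
	int i, n = 0;

	for (i = 0; i < 20000000; i++) {
		f += 1.0f;
		n += 1;
	}
	/* prints "int 20000000 vs float 16777216": once the float
	 * reaches 2^24, adding 1 no longer changes it at all */
	printf("int %d vs float %.0f\n", n, f);
	return 0;
}

A 'double' hides the problem only because its 52-bit mantissa pushes
the same cliff out to around 2^52, far beyond anything a dive log
would ever sum up.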

Basically, I'm trying to explain why I'm *not* nervous about using
floating point for "conversion" style math, but I *am* nervous about
using floating point for "integration" style math. Doing integration
well with floating point involves dividing the range up into equal
parts and then summing them up separately, exactly because doing it
with lots of close-to-zero values can mean adding small values to big
values, with catastrophic loss of precision.
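
For completeness, "dividing the range up" means something like this
generic pairwise-summation sketch (again, not code from the patch):

static double pairwise_sum(const double *v, int n)
{
	if (n <= 0)
		return 0.0;
	if (n == 1)
		return v[0];
	/* sum each half separately so every addition combines
	 * values of roughly similar magnitude */
	return pairwise_sum(v, n / 2) +
	       pairwise_sum(v + n / 2, n - n / 2);
}

Sticking with integers means none of that machinery is needed in the
first place.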
-------------- next part --------------
A non-text attachment was scrubbed...
Name: patch.diff
Type: application/octet-stream
Size: 2428 bytes
Desc: not available
URL: <http://lists.hohndel.org/pipermail/subsurface/attachments/20130106/108bfb90/attachment.obj>

