Odd calculated deco "ripples" (was Re: RFC: color change for calculated deco)
torvalds at linux-foundation.org
Sun Jan 13 16:42:44 PST 2013
On Sun, Jan 13, 2013 at 4:07 PM, Dirk Hohndel <dirk at hohndel.org> wrote:
> Exactly. That's why I think it's got to be a weird rounding and clipping
> effect in our code. The noise of data from the depth sensor should drown
> out harmonic changes like that.
Yes. However, if that is the case, I really don't see why it would
seem to be divecomputer-specific. Very odd.
But maybe it's just that I don't have a lot of different dives and
divecomputers to go by, and it really just happened to be that way.
>> Our own depth rounding does *not* seem to be an issue. Yes, we do
>> depth to "only" a precision of mm, and we do this in integers, but the
>> fact is, the sample precision itself is much less than that (the
>> Suunto depth samples, for example, seems to actually be in whole feet,
>> so the dive computer itself will have rounded the depth to *much* more
>> than mm).
> Yeah - that seems just too unlikely.
Actually, it turns out that the two Suuntos I have do things in cm.
Some *other* Suunto dive computers (like the older Vyper Air) do
things in whole feet, but not the ones I have. My old Gekko did it,
and it looks like the import from SDM will then have rounded the
feet conversion to whole cm.
But the Uemis is also in cm, so we have extra precision there. Doing a
linear interpolation to mm is going to be *very* exact, and should
absolutely not have any ripples at any longer timeframe. Sure, we'll
round, but we round those things correctly to the nearest mm, so we
should not have any macro-level visible noise from that rounding.
And the Cobalt seems to have its depth samples in millibar - good
choice - converted to meters by libdivecomputer. And that's actually
the same one-centimeter basic precision (although it's not representable
as *whole* centimeters, so our xml files have mm in them). Again, any
interpolation to mm should be more than plenty good enough.
Because none of the depth interpolation has any *cumulative* errors.
So the only possible rounding issues I can see would be in deco.c,
where it does a lot of incremental math. But that uses 'double', and
with a 52-bit mantissa and just a few thousand samples, I don't see
how that could lose enough precision to produce visible ripples
either. The math would have to be very unstable.
Do we ever add big values with tiny values? That's where FP math tends
to lose precision. The Buehlmann factors introduce a
five-orders-of-magnitude difference, but that's still "only" fifteen
binary digits apart. With a 52-bit mantissa, you'd still have
effective precision of 35+ bits. But I didn't look at the rest of the
math.
Hmm.. Looking at it now, the *factors* may be "only" 15 bits apart,
but they are used to multiply a difference. And that *is* the other
way to get catastrophic precision loss: subtracting two values that
are close to each other.
So maybe that *is* where the ripples come from.
Hmm. Sadly, I don't have access to any machine with 128-bit FP "long
double". I wonder if a 32-bit version with the 80-bit "long double"
would make any difference. The Intel 80-bit FP format has 64 bits of
mantissa, iirc, so it would be an extra 12 bits, and if the ripple