Dive data import error with Hollis DG03

Hamish Moffatt hamish at cloud.net.au
Mon Mar 24 06:01:10 PDT 2014


On 24/03/14 23:49, Jef Driesen wrote:
> On 2014-03-24 12:35, Hamish Moffatt wrote:
>> On 24/03/14 21:21, Jef Driesen wrote:
>>>
>>> Or make it persistent by disabling the low_latency flag when opening 
>>> the serial port in libdivecomputer. See the attached patch for a 
>>> prototype implementation. On Linux, it will clear the low_latency 
>>> flag without setting the latency_timer value, because that requires 
>>> root permissions. But based on the info we have gathered so far, 
>>> clearing the flag should already do the trick. On Mac OS X, it seems 
>>> possible to set the latency timer directly, although I have no idea 
>>> whether that actually works.
>>>
>>> Can anyone give this a try?
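
For reference, clearing that flag on Linux boils down to something like 
this (a minimal, untested sketch rather than the attached patch; fd is 
assumed to be the already-opened serial port):

    #include <sys/ioctl.h>
    #include <linux/serial.h>

    static int clear_low_latency (int fd)
    {
        struct serial_struct ss;
        if (ioctl (fd, TIOCGSERIAL, &ss) < 0)
            return -1;  /* driver doesn't support the ioctl */
        ss.flags &= ~ASYNC_LOW_LATENCY;  /* clearing needs no root */
        return ioctl (fd, TIOCSSERIAL, &ss);
    }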
>>>
>> Isn't this the wrong solution? As best I can tell, the dive computer
>> needs a bit of delay between commands, which it gets by accident due
>> to the read latency. The default read latency works on Windows and
>> Mac, but the user could have changed those defaults. The default read
>> latency on Linux doesn't work.
>>
>> A short delay within libdivecomputer won't even be noticeable on
>> systems where the default read latency is already larger.
>
> I'm certainly not claiming it's a perfect solution, but the 
> alternative is far from perfect either.
>
> If we just add some unconditional delay before the write, then we're 
> also increasing the download time for systems where there is already a 
> delay due to the higher latency timer.
>
> Let's say we have a Windows system where the default latency timer is 
> 16ms. There, every read operation (or at least the first one for each 
> packet) will already take approximately 16ms. If we add an extra 16ms 
> sleep before writing the request, then we are doubling the total wait 
> time, even though it isn't necessary. Compared to the total time it 
> takes to request and receive a packet, that is a lot: roughly a 100% 
> increase. The fact that we are dealing with tiny 16-byte packets 
> doesn't make it any nicer. With 64K of memory (4096 packets), a full 
> dump would take about 40 seconds extra (with a 10ms delay).
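
Spelling out that estimate: 65536 bytes / 16 bytes per packet = 4096 
packets, and 4096 packets x 10 ms = 40.96 s, hence the roughly 40 
seconds of extra download time.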
>
> Ideally we need a way to add this small delay only for those systems 
> that really need it. We could throw in a conditionally compiled sleep 
> for Linux only:
>
> #ifdef __linux__
>     serial_sleep (device, 10);
> #endif
>
> But besides the fact that I don't like seeing conditionally compiled 
> code in otherwise platform-independent code (in the serial code itself 
> I don't mind, because that's where it belongs), it won't help other 
> systems that have low latency enabled, either by default or because 
> the user changed the default. It will also break, or slow down, if the 
> kernel ever changes the default latency timer again.
>
> Ideally we need a solution that only introduces a delay when strictly 
> necessary. For example, by checking the low_latency flag and/or the 
> latency timer, or by measuring it (like we do for the halfduplex 
> workaround).
>
> Maybe we can introduce some kind of adaptive delay? We start with no 
> delay at all, and if we get too many NAKs or timeouts, we increase 
> the delay. On systems where no delay is necessary, the extra delay 
> would never get activated, but on systems where it is necessary, it 
> would kick in after a few failed packets. What do you think?
>
> This adaptive delay is something I already considered a long time ago 
> for the delay between error retries. Right now that delay is hardcoded 
> to 100ms, but it might be better to increase it dynamically after 
> successive failures. I just never had time to try this.
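
The adaptive delay Jef proposes could be as simple as the following 
untested sketch. The names are illustrative rather than actual 
libdivecomputer API, apart from serial_sleep() from the snippet above; 
serial_t is assumed to be the serial port handle:

    static unsigned int delay_ms = 0;

    /* Call before writing each command packet. */
    static void pace_write (serial_t *device)
    {
        if (delay_ms)
            serial_sleep (device, delay_ms);
    }

    /* Call on every NAK or timeout; cap at the 16ms
       Windows latency timer discussed above. */
    static void on_packet_failure (void)
    {
        if (delay_ms < 16)
            delay_ms += 4;
    }

On a system that never fails, delay_ms stays at zero and pace_write() 
costs nothing; on a system that does need the gap, it would settle at 
a working value after a few retries.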

Right, I agree it wouldn't be good to add more delay on systems where 
we're already waiting 16ms. But if we know that commands should be 
separated by at least (let's say) 4ms, could we time the read() calls 
and only add whatever delay is necessary to ensure there's at least 4ms 
between commands?
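
Concretely, that pacing could look like this (an untested sketch; 
serial_t and serial_sleep() as above, and CLOCK_MONOTONIC is assumed 
to be available):

    #include <time.h>

    #define MIN_GAP_MS 4

    static struct timespec last_read;  /* updated after every read() */

    static void enforce_min_gap (serial_t *device)
    {
        struct timespec now;
        clock_gettime (CLOCK_MONOTONIC, &now);
        long gap_ms = (now.tv_sec - last_read.tv_sec) * 1000 +
                      (now.tv_nsec - last_read.tv_nsec) / 1000000;
        if (gap_ms < MIN_GAP_MS)
            serial_sleep (device, MIN_GAP_MS - gap_ms);
    }

The caller records the timestamp once each read() completes, e.g. 
clock_gettime (CLOCK_MONOTONIC, &last_read), so the next write only 
sleeps for whatever is left of the 4ms gap.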

Otherwise you could look up the current latency via the low_latency 
flag plus sysfs on Linux, the registry on Windows, and the ioctl on 
Mac (if there's one to read the current setting), but that's pretty 
ugly and doesn't port readily to other kernels. Or maybe the adaptive 
delay would work.
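
The Linux sysfs half of that lookup would be roughly as follows 
(untested; the latency_timer attribute is specific to the usb-serial 
driver, so it only exists for FTDI-style adapters):

    #include <stdio.h>

    /* Returns 0 and stores the timer (in ms) in *ms on success. */
    static int read_latency_timer (const char *tty, int *ms)
    {
        char path[128];
        snprintf (path, sizeof (path),
                  "/sys/bus/usb-serial/devices/%s/latency_timer", tty);
        FILE *fp = fopen (path, "r");
        if (!fp)
            return -1;
        int ok = (fscanf (fp, "%d", ms) == 1);
        fclose (fp);
        return ok ? 0 : -1;
    }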

Hamish

