Hi Robert,

<div class="moz-cite-prefix">Am 12.10.14 21:47, schrieb Robert
Helling:<br>
</div>
<blockquote
cite="mid:4E17B2C3-670E-4253-A05C-36AD93DC5F1A@atdotde.de"
type="cite">the image data design of subsurface is by far not
final and indeed we currently access images by their local
filename. The problems with that are obvious and as soon as you
want to take your log to a different computer you are in trouble.
OTOH you don’t want to download some hundreds of pictures from a
web server every time you scroll through you dive log, so at least
some cache has to be local. We still have to figure out how to do
this right.</blockquote>

Totally agree on that: it needs to be local, so you can view it on the
liveaboard when you need it.

> But for the time being, why do you include the actual image in your
> download API and not just a URL? You are running a web service that
> actually hosts those pictures. So downloading would be much faster,
> and if some downloader really wanted the image file, it could still
> GET it from the URL without any hassle. What do you think?

Well, the idea behind the way it's implemented now is: you want to
transfer your log with all its data from one machine to another and
have it available offline. That includes the pictures, and you get
them in one go with a single request (and one big file). The
alternative is of course to just provide URLs to the pictures and keep
the DLD file small, but in my case that would mean another 2090 HTTP
requests just to get all the pictures...

The API does offer a bit more than Subsurface uses at the moment,
though, which would remove the need to download all this data every
time: there is a call to /xml_available_dives.php that gives you a
list of all dives with ID, date and time, as well as the last-change
timestamp of each dive. Based on that list, you COULD check which
dives are already there by date and time (which should be unique for
each dive, as most people can't do two dives at the same moment) and
request only the IDs you don't have yet (or want to update based on
the last-change timestamp). The IDs are an optional parameter to the
retrieval call for the DLD and will then limit the export to only
those.

Using that, each dive would only be downloaded once, including the
pictures...
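
To illustrate, here is a rough sketch of what such an incremental sync
could look like on the client side (in Python). Only
/xml_available_dives.php comes from the API as described above; the
retrieval endpoint name, its "ids" parameter, the XML attribute names,
and the login step are placeholders made up for the example:

import requests
import xml.etree.ElementTree as ET

BASE = "https://divelogs.de"

def available_dives(session):
    """Fetch the list of all dives (ID, date/time, last-change
    timestamp) via /xml_available_dives.php."""
    resp = session.get(BASE + "/xml_available_dives.php")
    resp.raise_for_status()
    dives = {}
    for dive in ET.fromstring(resp.content).iter("dive"):
        # Attribute names are assumptions, not the real schema.
        dives[dive.get("id")] = {
            "datetime": dive.get("datetime"),
            "last_change": int(dive.get("last_change")),
        }
    return dives

def ids_to_fetch(remote, local):
    """'local' maps a dive's date/time to the last-change timestamp of
    the copy we already have; date+time serves as the matching key,
    since it is effectively unique per dive."""
    return [dive_id for dive_id, info in remote.items()
            if info["datetime"] not in local
            or info["last_change"] > local[info["datetime"]]]

def fetch_dld(session, ids):
    """Retrieve a DLD limited to the given IDs. The endpoint and the
    'ids' parameter are placeholder names for the real retrieval
    call."""
    resp = session.get(BASE + "/retrieve_dld.php",
                       params={"ids": ",".join(ids)})
    resp.raise_for_status()
    return resp.content  # a much smaller DLD, pictures included

with requests.Session() as s:
    # ... authenticate here ...
    remote = available_dives(s)
    local = {}  # filled from the dives already in the local log
    missing = ids_to_fetch(remote, local)
    if missing:
        dld = fetch_dld(s, missing)

With something like that, the big all-in-one download only happens the
first time; every later sync transfers just the dives that are new or
have changed.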

But I'm open to discussion about this.

Rainer