GPS data from jpegs and more

Dirk Hohndel dirk at hohndel.org
Thu Jan 31 21:30:11 PST 2013


Hi Robert,

"Robert C. Helling" <helling at atdotde.de> writes:
> here is my attempt to use jpeg EXIF tags (e.g. as created by taking
> pictures with my iPhone) to obtain GPS data for dives.

Cool
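
For reference, a minimal sketch (not Robert's script, just an
illustration of the idea) of pulling the relevant EXIF tags out of a
jpeg with Image::ExifTool; the tag names are the standard ExifTool
ones:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Image::ExifTool qw(ImageInfo);

    my $file = shift or die "usage: $0 image.jpeg\n";
    my $info = ImageInfo($file, 'GPSLatitude', 'GPSLongitude',
                                'GPSLatitudeRef', 'GPSLongitudeRef',
                                'DateTimeOriginal');
    die "$file has no GPS tags\n" unless $info->{GPSLatitude};
    # By default ExifTool returns human readable strings
    # (e.g. 48 deg 8' 24.00"); converting them to decimal degrees
    # for the dive log is left to the real script.
    print "taken:     ", $info->{DateTimeOriginal} // "unknown", "\n";
    print "latitude:  $info->{GPSLatitude} $info->{GPSLatitudeRef}\n";
    print "longitude: $info->{GPSLongitude} $info->{GPSLongitudeRef}\n";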

> I have to apologize, however, for several reasons: Mainly, given the
> shortness of time, it was beyond my capabilities to include this in
> the main C binary (and add GUI stuff for this etc). Thus I wrote a
> separate perl script (possibly you will not like this) that operates
> on the XML files to add this data. Another reason for using perl was
> to make use of a number of powerful libraries (modules in perl speak,
> and yes, I read what Linus wrote on G+ about using C and libraries) to
> handle XML, EXIF, and do all kinds of date/time manipulations (on the
> upside, this makes the script fully time-zone, DST and leap second
> aware). It can even handle images that are not local files but that
> are given as URIs.

There are good things and bad things about this. I like the idea of
having a tool to do this - so I guess any implementation is a step in
the right direction. I can't figure out how to include this in a
shipping binary distribution (how and where would this be installed
for a Windows user? what about the dependencies?), but I can see
myself including this in the sources and possibly providing it over
the web.

> It also adds "image events" to the dive log for images whose time
> stamps fall during a dive (I took the liberty of creating events
> with the event number 666).
>
> Input of .gpx files, as created by all kinds of GPS loggers, is not
> yet fully functional. The script should be able to read these files
> (I could not test this since I currently don't have such a file),
> but it does not yet use the track data to manipulate the XML file.
>
> How to get this to run:
>
> I hope the shebang works for you (I had some trouble here as MacPorts
> decides to install its own version of perl at a weird non-standard
> path).
>
> It is quite likely that you do not have all required modules installed
> on your computer (in that case the script will stop with an error
> message about @INC not containing <module_name>). But installing them
> is as simple as
>
> sudo cpan module_name 
>
> (you might have to answer a few questions about mirrors and installing
> dependencies the first time, but it should be safe to accept the
> suggested answers by pressing return).
>
> Once that works, you can run the script as
>
> ./import_data.pl -i input.xml -o output.xml image1.jpeg http://www.example.com/image3.jpeg

Can it take a path and (recursively?) look at all the pictures in that
path? That would seem like a very useful feature...
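
Something along these lines would probably do it (hypothetical, using
only the core File::Find module; the script does not do this yet):

    use strict;
    use warnings;
    use File::Find;

    # collect all jpegs below the directories given on the command line
    my @images;
    find(sub {
        push @images, $File::Find::name if -f && /\.jpe?g$/i;
    }, @ARGV ? @ARGV : ('.'));

    # @images could then be passed on exactly like explicitly
    # listed files
    print "$_\n" for @images;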

> For the matching to work, it is quite essential that the camera and
> your computer (and the dive computer) have roughly the same idea about
> the current time. I added the possibility to correct for this: To
> determine the time difference between your camera and your desktop,
> first run
>
> ./import_data.pl -t
>
> This prints the epoch every second. Now take your camera and take a
> picture of your screen. Transfer this image to the computer and write
> down the last time stamp that can be seen on the picture. Then run
>
> ./import_data.pl -c <timestamp> <image.jpeg>
>
> This will print the offset between that timestamp and the timestamp
> recorded by the camera. You can use this offset as the -s option for
> all other image operations.

Wow. That's elaborate :-)
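
For the record, the arithmetic behind -c presumably boils down to
something like this (my guess at the sign convention, not Robert's
actual code; the camera is assumed to be set to local time):

    use strict;
    use warnings;
    use Image::ExifTool qw(ImageInfo);
    use Time::Local qw(timelocal);

    # usage: offset.pl <epoch shown on the screen> <image.jpeg>
    my ($screen_epoch, $file) = @ARGV;
    my $info = ImageInfo($file, 'DateTimeOriginal');
    die "no DateTimeOriginal in $file\n" unless $info->{DateTimeOriginal};
    # EXIF dates look like "2013:01:31 21:30:11"
    my ($Y, $M, $D, $h, $m, $s) = $info->{DateTimeOriginal} =~
        /(\d+):(\d+):(\d+) (\d+):(\d+):(\d+)/;
    my $camera_epoch = timelocal($s, $m, $h, $D, $M - 1, $Y);
    printf "camera clock is %+d seconds off; use this with -s\n",
        $camera_epoch - $screen_epoch;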

> If your dive computer is not set to the correct time (as is usually
> the case with mine), you can use
>
> ./import_data.pl -a <HH:MM> -d <device_id> -i <input.xml> -o <output.xml>
>
> where HH:MM is the current time shown by the dive computer with the
> corresponding device_id. This corrects all times of dives recorded
> with this computer. If you are unsure about the device_id, you can
> list all dive computers in an XML file with
>
> ./import_data.pl -l -i <input.xml>

Ok, that seems manageable...
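
If I read this right, the correction is simply the difference between
what the dive computer displays right now and the host clock; a
hypothetical sketch of that calculation (assumed, not taken from the
script):

    use strict;
    use warnings;
    use POSIX qw(mktime);

    my $shown = '14:05';                # what the dive computer displays now
    my ($h, $m) = split /:/, $shown;
    my @now = localtime();
    # today at HH:MM according to the host clock
    my $dc_epoch = mktime(0, $m, $h, @now[3..5]);
    my $offset = $dc_epoch - time();    # seconds the dive computer is ahead
    printf "dive computer is %+d seconds off\n", $offset;

Every dive recorded by the given device_id would then be shifted by
that many seconds.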

> The script has by now grown fairly complex and there are several good
> reasons to reject it (e.g. it's not C and not part of the main binary),
> but maybe it can be useful for somebody. Let me know what you think.

I think it could be very useful and I certainly am willing to include it
in the sources and make it available via the web site. I need to play
with it a bit to see how robust and useful it already is, but it sounds
promising.

One thing I wonder about is the XML parsing that you do. How much effort
is spent on understanding Subsurface's XML file format and keeping it
intact? Might it be easier to create an XML file in a format similar to
what we get from the webservice? It would be easy enough to add an
option to Subsurface to import that file and merge its data with the
dive log...
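
Purely for illustration (this is not the actual webservice schema,
just the sort of thing I have in mind), the script could emit a small
standalone file such as

    <dives>
      <dive date='2013-01-31' time='10:23:00' gps='48.136000 11.577000'/>
    </dives>

and Subsurface would merge those entries into the existing log by date
and time.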

/D

