git backend: actually update local cache from remote

Davide DB dbdavide at gmail.com
Thu Jun 11 00:08:02 PDT 2015


I look forward to using this feature ASAP (as soon as it doesn't bite me).
I edit my logbook from three different machines and it's a PITA paying
attention to keep them in sync.
I do not like Dropbox. Actually I like it, but I do not like donating
my personal data to another stakeholder on the net.
In my workplace we have strict proxy/firewall rules, hence Dropbox
works but Google Drive does not.

I think that as a starting point a straightforward, simple approach as Dirk
suggested is enough.
It's a shame that Subsurface doesn't have a fixed status bar: we could have
some icons there showing whether you are connected/disconnected and whether
your local repo is up to date (my main concern).
From the main menu we could have an option to manually trigger a push
to the remote.
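
A minimal sketch of what such a status-bar indicator and a manual push
action could look like in Qt; all names here (cloudIcon, setCloudState,
pushToRemote) are made up for illustration, not existing Subsurface code:

    #include <QMainWindow>
    #include <QStatusBar>
    #include <QMenuBar>
    #include <QLabel>
    #include <QPixmap>

    class MainWindow : public QMainWindow {
        Q_OBJECT
    public:
        MainWindow()
        {
            // Permanent widget so the icon survives temporary status messages
            cloudIcon = new QLabel(this);
            statusBar()->addPermanentWidget(cloudIcon);
            setCloudState(false, false);
            // Manual "push now" entry, as suggested above
            menuBar()->addMenu(tr("&Cloud"))
                     ->addAction(tr("Push to remote"), this, SLOT(pushToRemote()));
        }
        // Reflect connectivity and whether the local repo matches the remote
        void setCloudState(bool online, bool upToDate)
        {
            cloudIcon->setPixmap(QPixmap(!online    ? ":/icons/cloud-offline.png"
                                         : upToDate ? ":/icons/cloud-ok.png"
                                                    : ":/icons/cloud-dirty.png"));
            cloudIcon->setToolTip(!online    ? tr("Offline - using local cache")
                                  : upToDate ? tr("In sync with remote")
                                             : tr("Local changes not pushed"));
        }
    private slots:
        void pushToRemote();  // would call into the git backend's push
    private:
        QLabel *cloudIcon;
    };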

Bye

On Thu, Jun 11, 2015 at 7:25 AM, Robert C. Helling <helling at atdotde.de> wrote:
>
> On 11 Jun 2015, at 06:16, Dirk Hohndel <dirk at hohndel.org> wrote:
>
> Good morning,
>
>> This is kind of a hybrid of what you are talking about. If you are on a
>> boat or on an island with shitty network you can simply work from the
>> local cache. And if you suddenly have some connectivity you can trigger an
>> upload to the remote in a very straight forward fashion.
>>
>> What am I missing?
>
> I would aim at making it maximally transparent to the user (as Dropbox does, for example): there are two states, a good network connection or none. Subsurface tries to guess this (for example, by asking the OS about network connectivity beyond localhost, by trying to ping the cloud server, or by quickly downloading some data from the cloud server to estimate the connection speed), but there is a switch for the user to set it manually (like a little cloud icon that gets greyed out when there is no good connectivity).
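
A minimal sketch of that "quickly download some data" probe in Qt: a
tiny HEAD request with a short timeout stands in for a real bandwidth
estimate. The probe URL is a placeholder and cloudReachable() is a
made-up name, not existing Subsurface code:

    #include <QNetworkAccessManager>
    #include <QNetworkRequest>
    #include <QNetworkReply>
    #include <QEventLoop>
    #include <QTimer>
    #include <QUrl>

    // Returns true when the cloud server answers a tiny request quickly.
    // Blocking for brevity; real code would run this off the GUI thread.
    static bool cloudReachable()
    {
        QNetworkAccessManager mgr;
        QNetworkRequest req(QUrl("https://cloud.example.org/probe")); // placeholder
        QNetworkReply *reply = mgr.head(req);

        QEventLoop loop;
        QTimer timer;
        timer.setSingleShot(true);
        QObject::connect(reply, SIGNAL(finished()), &loop, SLOT(quit()));
        QObject::connect(&timer, SIGNAL(timeout()), &loop, SLOT(quit()));
        timer.start(2000);          // >2s counts as "no good connectivity"
        loop.exec();

        bool ok = timer.isActive() && reply->error() == QNetworkReply::NoError;
        reply->deleteLater();
        return ok;
    }
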
>
> Without net, the remote server is ignored (obviously).
>
> When transitioning from the bad to the good state (or on startup), we trigger a sync (fetch and merge, then possibly push) in the background, and after a save we trigger a push (again in the background).
>
> Writing this, I wonder if we really should differentiate between push and pull, or if we should (again, as Dropbox does) just try as much as possible to keep local and remote in sync (i.e. always fetch, merge and push in one go).
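
A rough sketch of that fetch/merge/push in one go, using libgit2 (the
library Subsurface's git backend builds on). Error paths, the actual
fast-forward, and real merge/conflict handling are elided; "origin" and
"master" are assumptions about the repo layout, and sync_with_remote()
is a made-up name:

    #include <git2.h>

    // Fetch, fast-forward if possible, then push - all in one pass.
    static int sync_with_remote(git_repository *repo)
    {
        git_remote *remote = NULL;
        git_reference *upstream = NULL;
        git_annotated_commit *their = NULL;
        git_merge_analysis_t analysis;
        git_merge_preference_t pref;

        if (git_remote_lookup(&remote, repo, "origin") < 0)
            return -1;
        if (git_remote_fetch(remote, NULL, NULL, "background sync") == 0 &&
            git_reference_lookup(&upstream, repo, "refs/remotes/origin/master") == 0 &&
            git_annotated_commit_from_ref(&their, repo, upstream) == 0 &&
            git_merge_analysis(&analysis, &pref, repo,
                               (const git_annotated_commit **)&their, 1) == 0) {
            if (analysis & GIT_MERGE_ANALYSIS_FASTFORWARD) {
                /* move local master to the fetched commit (elided) */
            } else if (!(analysis & GIT_MERGE_ANALYSIS_UP_TO_DATE)) {
                /* diverged: this is where real merge/conflict handling lives */
            }
            /* push local commits back out in the same pass */
            char *spec = (char *)"refs/heads/master";
            git_strarray refspecs = { &spec, 1 };
            git_remote_push(remote, &refspecs, NULL);
        }
        git_annotated_commit_free(their);
        git_reference_free(upstream);
        git_remote_free(remote);
        return 0;
    }
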
>
> I would do all cloud access in the background and offer the user (via a modal dialog) the option to reload when the background operation has finished and implied an update to the currently displayed log.
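
That background-plus-prompt flow could look like this in Qt, run via
QtConcurrent so the GUI thread stays responsive; startBackgroundSync(),
syncWithRemote() and reloadLog() are hypothetical stand-ins, not
existing Subsurface code:

    #include <QtConcurrent/QtConcurrent>
    #include <QFutureWatcher>
    #include <QMessageBox>

    bool syncWithRemote();  // hypothetical worker doing fetch/merge/push;
                            // returns true if the remote had changes we merged

    void MainWindow::startBackgroundSync()
    {
        QFutureWatcher<bool> *watcher = new QFutureWatcher<bool>(this);
        connect(watcher, &QFutureWatcher<bool>::finished, this, [this, watcher]() {
            // Only interrupt the user when the sync actually changed the log
            if (watcher->result() &&
                QMessageBox::question(this, tr("Cloud storage"),
                        tr("The dive log was updated remotely. Reload it?"))
                    == QMessageBox::Yes)
                reloadLog();  // hypothetical
            watcher->deleteLater();
        });
        watcher->setFuture(QtConcurrent::run(syncWithRemote));
    }
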
>
> I never thought these operations would be that hard to design a good workflow/UI for. I thought only conflict resolution and version management, beyond opening a terminal and asking the user to fix it on the command line, would be hard…
>
> Best
> Robert
>
> --
> .oOo.oOo.oOo.oOo.oOo.oOo.oOo.oOo.oOo.oOo.oOo.oOo.oOo.oOo.oOo.oOo.oOo.oOo.oOo.oO
> Robert C. Helling     Elite Master Course Theoretical and Mathematical Physics
>                       Scientific Coordinator
>                       Ludwig Maximilians Universitaet Muenchen, Dept. Physik
> print "Just another   Phone: +49 89 2180-4523  Theresienstr. 39, rm. B339
>     stupid .sig\n";   http://www.atdotde.de
>



-- 
Davide
https://vimeo.com/bocio/videos

