git backend: actually update local cache from remote

Dirk Hohndel dirk at hohndel.org
Thu Jun 11 06:23:44 PDT 2015


On Thu, Jun 11, 2015 at 07:25:46AM +0200, Robert C. Helling wrote:
> 
> > This is kind of a hybrid of what you are talking about. If you are on a
> > boat or on an island with a shitty network you can simply work from the
> > local cache. And if you suddenly have some connectivity you can trigger an
> > upload to the remote in a very straightforward fashion.
> > 
> > What am I missing?
> 
> I would aim at making it maximally transparent to the user (as Dropbox
> does, for example): there are two states, good network connection or no
> good network connection. Subsurface tries to guess that (for example by
> asking the OS about network connectivity beyond localhost, by trying to
> ping the cloud server, or by quickly downloading some data from the cloud
> server to estimate the connection speed), but there is a switch for the
> user to set it manually (like a little cloud icon that gets greyed out
> when there is no good connectivity).

It's not easy to test this in a reliable manner. Ping is blocked in a ton
of environments. We'd really have to open a direct https connection to the
cloud server to figure out if we have connectivity or not.
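Something along these lines is what I have in mind. Just a sketch: the URL
and the five second timeout are placeholders, and in the real thing this
would of course have to run asynchronously (or on a worker thread) so it
doesn't block the UI:

#include <QNetworkAccessManager>
#include <QNetworkRequest>
#include <QNetworkReply>
#include <QEventLoop>
#include <QTimer>
#include <QUrl>

// probe the cloud server with a quick HTTPS request instead of ping
static bool haveCloudConnectivity()
{
	QNetworkAccessManager mgr;
	QNetworkRequest req(QUrl("https://cloud.subsurface-divelog.org/"));
	QNetworkReply *reply = mgr.head(req);

	QEventLoop loop;
	QTimer timeout;
	timeout.setSingleShot(true);
	QObject::connect(&timeout, SIGNAL(timeout()), &loop, SLOT(quit()));
	QObject::connect(reply, SIGNAL(finished()), &loop, SLOT(quit()));
	timeout.start(5000);		// give up after five seconds
	loop.exec();

	bool ok = reply->isFinished() && reply->error() == QNetworkReply::NoError;
	reply->deleteLater();
	return ok;
}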

Thanks to some asking around I learned a minute ago that libgit2 might
soon get support for using libcurl as a transport, and libcurl in turn
supports https via a proxy. This hasn't landed in master yet (there's just
a pull request so far), but the code looks reasonably mature.

https://github.com/libgit2/libgit2/pull/3183

The timeliness of this is stunning - the pull request was opened a week
ago :-)
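
Just to illustrate what would go through that transport once it's merged,
here is roughly what the fetch side looks like with the newer fetch-options
API in libgit2 master. A sketch, not our actual code: the remote name and
reflog message are made up, credential handling is left out, and libcurl
should then honor the usual https_proxy environment variable for a
connection like this:

#include <git2.h>

/* assumes git_libgit2_init() has already been called */
static int fetch_from_cloud(git_repository *repo)
{
	git_remote *origin = NULL;
	git_fetch_options opts = GIT_FETCH_OPTIONS_INIT;
	int error;

	error = git_remote_lookup(&origin, repo, "origin");
	if (error < 0)
		return error;

	/* opts.callbacks.credentials is where the cloud credentials
	 * callback would get hooked up */
	error = git_remote_fetch(origin, NULL, &opts, "update from cloud");
	git_remote_free(origin);
	return error;
}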

> Without a network connection, the remote server is ignored (obviously).
> 
> When transitioning from bad to good state (or on startup), we trigger a
> sync (fetch and merge and then possibly push) in the background, and
> after a save we trigger a push (again in the background).

The save in the background when quitting is dangerous. With a larger data
set or a slow connection this can take a while - I hate it when software
does something like this and then "hangs" instead of closing.

> Writing this, I wonder if we really should differentiate between push
> and pull, or if we should (again like Dropbox) just try as much as
> possible to keep local and remote in sync (i.e. always fetch, merge and
> push in one go).

Dropbox runs as a service, by definition in the background. We're an
application - I think that's different.

> I would do all cloud access in the background and offer the user (via a
> modal dialog) the option to reload when the background operation has
> finished and implies an update to the currently displayed log.
> 
> I never thought these operations were that hard to design a good
> workflow/UI for. I thought only conflict resolution and version
> management beyond opening a terminal and asking the user to fix it on
> the command line would be hard…

I think it's not that simple. Yes, if the network were instantaneous and
there were no delay at all, I could see how this could be done completely
transparently, but even on my 100Mb/s home internet connection it takes
several seconds to fetch my data file.

But maybe we could do almost what you suggest.

A Subsurface preference asks you whether you want automatic or manual sync
to the cloud. If you are in manual mode, you have to start a "sync to
cloud" manually from the menu.

If you are in automatic mode, Subsurface acts like this (rough sketch below):
- if there is network connectivity, it displays a spinner and prevents
  modifications in the UI when opening or saving the data file
- if there is no network connectivity, it displays a message somewhere
  that cloud sync is currently not available
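
In rough code terms (again just a sketch to illustrate the flow; all the
helper names here are made up, and haveCloudConnectivity() would be the
probe from further up):

void MainWindow::openOrSaveCloudStorage()
{
	if (haveCloudConnectivity()) {
		// online: block edits and show a spinner while we sync
		showProgressSpinner(true);
		blockUiModifications(true);
		syncWithCloud();	// fetch / merge / push against the remote
		blockUiModifications(false);
		showProgressSpinner(false);
	} else {
		// offline: keep working from the local cache and say why
		showOfflineNotice(tr("Cloud sync is currently not available"));
	}
}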

/D

