git backend: actually update local cache from remote

Dirk Hohndel dirk at hohndel.org
Wed Jun 10 21:16:48 PDT 2015


On Wed, Jun 10, 2015 at 06:18:17PM -0700, Linus Torvalds wrote:
> On Wed, Jun 10, 2015 at 2:04 PM, Linus Torvalds
> <torvalds at linux-foundation.org> wrote:
> >
> > I'll try to get the "push to remote" done too.
> 
> Ok, I think this is it.

First of all, THANKS.

I was tempted to try to implement this last weekend. But with the inventor
of git on our team it seemed preposterous to do this - and I knew that
you'd chew my code to bits and spit it out in disgust, anyway :-)

> It's not particularly well-tested, but it is fairly straightforward.
> HOWEVER, note the big commit message comment about why technically
> this works in the sense that it can now sync both ways to the remote
> git repository, but the whole "cloud sync" needs more thought from an
> interface standpoint.

Yes, what you do right now is straightforward and logical, but I bet that
to the layman it's highly unintuitive.

> I could add some "sync up to remote" after doing a local save, but I
> didn't add that, because I do believe we need more smarts than that
> (or if not "smarts", then an actual GUI). Because I don't think it's a
> good idea to just try to do the whole network thing unconditionally,
> when you often don't have good (or any) internet access when actually
> diving with subsurface.

Yes, especially with f*@#$ing libgit2 being too stupid to use proxies. My
Linux VM is basically ALWAYS behind a proxy. It never ever has direct
internet access. So I am sorely tempted to hack libgit2 to add HTTPS proxy
support to it...

Anyone looking for a fun little project? I'll pay you for the work...
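
For the record, here's roughly what I'd want the calling side to look
like once someone adds this (a sketch only - the proxy_opts fields are
entirely made up, since libgit2 has nothing like them today, which is the
whole problem; the lookup/fetch calls are its normal remote API):

  #include <git2.h>

  /* hypothetical sketch: the proxy_opts fields below do NOT exist in
   * libgit2, they just show the shape of the API I'd like someone to add */
  static int fetch_via_proxy(git_repository *repo, const char *proxy_url)
  {
          git_remote *remote = NULL;
          git_fetch_options opts = GIT_FETCH_OPTIONS_INIT;
          int error;

          error = git_remote_lookup(&remote, repo, "origin");
          if (error < 0)
                  return error;

          /* route the HTTPS transport through the given proxy */
          opts.proxy_opts.type = GIT_PROXY_SPECIFIED;   /* made up */
          opts.proxy_opts.url = proxy_url;              /* made up */

          error = git_remote_fetch(remote, NULL, &opts, "sync from cloud");
          git_remote_free(remote);
          return error;
  }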

> So I think the local git cache should basically always load/store to
> the local cache, and we should add some GUI to actually trigger this
> "sync with cloud" code. Right now it always triggers when you load
> from the cache (but *not* when saving to it).

I've been going back and forth about this a bit, and here's my current
thinking (which may be silly, but it's much easier to start a discussion
by putting something out there and then having Davide come up with
something better :-D)

- when you start Subsurface (with cloud storage picked as default) or open
  the cloud storage from the menu, if there is a local cache, the local
  cache is read and the UI is shown. At the same time a background thread
  tries to connect to the remote.
  If it gets an error it stops and displays a message at the bottom of the
  window "can't connect to cloud storage, you can continue working from
  the local cache" but doesn't otherwise disrupt operation.
  If the initial connection succeeds it brings up a spinner and a
  message "syncing with remote storage" - I guess it should allow
  "cancel" for that, but unless cancelled it should disallow modifications
  to the data while this is happening.

- when you click save or quit we immediately save to the local cache but
  also bring up a dialog that asks "should I attempt to write this data to
  the cloud storage?".
  If the user says no, just ignore the remote.
  If the user says yes, sync in the background (if this was save) or show
  a spinner while saving (if this was quit).
  (a rough sketch of both flows follows below)
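
To make that concrete, here's a very rough Qt sketch of the two flows
(every helper name in here is a placeholder, this is just the shape of
what I have in mind, not working code):

  #include <QtConcurrent/QtConcurrent>
  #include <QFutureWatcher>
  #include <QMessageBox>

  // placeholders for whatever these end up being called in Subsurface
  bool tryConnectToRemote();          // test the initial connection only
  void loadFromLocalCache();
  void showStatusMessage(const QString &msg);
  void startRemoteSyncWithSpinner();  // cancellable, blocks edits meanwhile
  void saveToLocalCache();
  void syncToRemoteWithSpinner();
  void syncToRemoteInBackground();

  void openCloudStorage(QObject *parent)
  {
          loadFromLocalCache();   // show the UI right away from the cache

          // try the remote in a background thread, don't block the UI
          auto *watcher = new QFutureWatcher<bool>(parent);
          QObject::connect(watcher, &QFutureWatcher<bool>::finished, [watcher] {
                  if (!watcher->result())
                          showStatusMessage(QObject::tr("can't connect to cloud "
                                  "storage, you can continue working from the "
                                  "local cache"));
                  else
                          startRemoteSyncWithSpinner();
                  watcher->deleteLater();
          });
          watcher->setFuture(QtConcurrent::run(tryConnectToRemote));
  }

  void saveData(QWidget *window, bool quitting)
  {
          saveToLocalCache();     // the local cache is always written first

          if (QMessageBox::question(window, QObject::tr("Cloud storage"),
                          QObject::tr("Should I attempt to write this data "
                                  "to the cloud storage?")) == QMessageBox::Yes) {
                  if (quitting)
                          syncToRemoteWithSpinner();      // spinner, then exit
                  else
                          syncToRemoteInBackground();     // keep working meanwhile
          }
  }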

This is kind of a hybrid of what you are talking about. If you are on a
boat or on an island with shitty network you can simply work from the
local cache. And if you suddenly have some connectivity you can trigger an
upload to the remote in a very straightforward fashion.

What am I missing?

> Dirk, I haven't actually tried your cloud infrastructure, I've just
> been testing with my "remote repository on local disk" setup.

Fact of the matter is, exactly ONE PERSON has tried this in the past 24
hours. Apparently this is far less exciting than I thought.

I applied your patches and will play with them to see if they work with
the existing backend implementation.

Again, thanks for working on this, Linus. I'm glad RC7/RC8 are fairly
quiet :-)

/D

