Open / Save to cloud on Windows 10

Dirk Hohndel dirk at hohndel.org
Mon Feb 20 09:41:08 PST 2017


On Mon, Feb 20, 2017 at 09:49:54PM +0700, Jérémie Guichard wrote:
> Hey guys,
> 
> I narrowed it down!

Excellent.

> It took me a little while to get the Windows dev env in place, but the
> instructions in packaging/windows/mxe-based-build.sh were very helpful.
> Only one little call to curl's buildconf script is missing from
> mxe-based-build.sh, but once I ran that manually the rest went smoothly.
> I may fix this in a separate commit, maybe together with automatically
> cloning dependencies from github...

Yes, please. I don't recall ever calling curl's buildconf script, so I'm
curious what you ran into.

> It "only" took 8h to build mxe inside the VM; thanks to the warning in
> the comments and in Dirk's last message, I started the build before
> going to bed :)

Yes, it is somewhat heavy...

> Back to the non-ASCII user name/path for the local checkout of the
> repository: the issue is caused by the call to stat (sys/stat.h), which
> does not support non-ASCII paths. I could not find a place that states
> this explicitly, but it seems a reasonable explanation. Since the cloning
> and checkout itself works in libgit2, I did some digging in that code;
> there, a set of portability functions (p_stat) is used, with a simple
> mapping to the system's stat() on POSIX systems and specific
> implementations using win32 APIs in the Windows case.
> 
> As a proof of concept I added an ugly "extern int p_stat(const char* path,
> struct stat* buf);" at the top of git-access.c and used it in
> get_remote_repo and is_git_repository (these are the only 2 places the
> stat function is used in the whole project). It fixed my issue (I could
> Open and Save cloud storage)...

Cool. Very cool.
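
For the record, the root cause is that the Microsoft C runtime's stat()
interprets char* paths in the current ANSI codepage, while the paths we
hand it are UTF-8. A portability wrapper along the lines of libgit2's
p_stat boils down to something like this (a minimal sketch for
illustration only, not libgit2's actual code):

#include <windows.h>
#include <sys/stat.h>

static int utf8_stat(const char *path, struct _stati64 *buf)
{
	wchar_t wpath[MAX_PATH];

	/* MSVCRT's stat() mangles UTF-8 encoded non-ASCII characters,
	 * so convert to UTF-16 and call the wide-char variant instead */
	if (!MultiByteToWideChar(CP_UTF8, 0, path, -1, wpath, MAX_PATH))
		return -1;
	return _wstati64(wpath, buf);
}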

> Now what should we take as a fix?
> 1. The ugly extern
> Cons:
> I call it ugly since (in libgit2) that function is not actually exposed
> in the headers; I am not sure how the Android and iOS apps are built
> (actually linked...), so it could cause some issues there.
> I also have to mention the "bad design" / hidden dependency on libgit2
> internals argument.

Yeah, that one has bad design written all over it.

> 2. Implement our own portable is_dir
> In all the places the stat function is used, it is to check whether the
> path exists and is a directory. Going through a portable stat is a bit
> "overkill" (it has to do the processing to fill in everything the output
> "struct stat" needs) when we only need to know whether we have a
> directory. Not that this part of the code is actually perf critical, but
> it is always good to ask ourselves the question.
> Plus it makes the code a bit more readable. I think this option is my
> favorite, and it would avoid issues on other platforms (if they exist at
> all).

Not a bad idea - but read on.

> 3. Implement our own portable stat
> The choice between this option and the previous is_dir proposal is
> always hard when it comes to portability (specifically with Windows).
> Whether you should port system calls or write client code against
> higher level abstractions is a perennial debate.
> Furthermore, in bigger projects that use multiple 3rd party libs the
> porting is often already done in several of the included libs, so
> writing yet another port adds code that typically already exists
> multiple times in your final binary...

No, that seems like the wrong approach.

> I more often work with C++, so I tend to love abstractions; when it
> comes to C the choice is a bit harder.
> Subsurface core is, from my point of view, higher level code, so I would
> go for the abstraction, but I'm curious to hear other points of view.
> 
> 4. Other proposals?
> There could be other solutions I have overlooked - do you guys see
> something else?

How about writing a wrapper function that can be called from C in
qthelper.cpp (we have quite a few of them there already) and have that do
something like

#include <QDir>

extern "C" bool dir_exists(const char *dir)
{
	/* QDir::exists() returns false if the path names a plain file,
	 * so this also covers the S_ISDIR() half of the old check */
	return QDir(dir).exists();
}

That shoves the responsibility for a platform-independent implementation
down to Qt, which in the past has worked very well for us. Do you think
that would work? I'm not sure this covers all the checks we need, so
please expand as necessary for the two call sites.
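
For illustration, a call site in git-access.c would then shrink to
something like the following (a hand-written sketch - the helper name
local_repo_exists is made up, and the real code around the two call
sites obviously does more):

/* in git-access.c; the declaration of dir_exists() would live in a
 * shared header - sketch only */
#include <stdbool.h>

extern bool dir_exists(const char *dir);	/* implemented in qthelper.cpp */

static bool local_repo_exists(const char *localdir)
{
	/* replaces: stat(localdir, &st) == 0 && S_ISDIR(st.st_mode) */
	return dir_exists(localdir);
}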

> I'm still a bit new to this project and do not yet have an overview of
> all the use cases, platforms and targets that are involved, which is why
> I preferred to ask before proposing a pull request with changes that
> could have side effects...

Excellent plan. Time zones mean that it may take a bit longer to get
answers, but I think that's usually worth it.

We support roughly the following:

Any type of Linux, with binaries being built for about 30 platforms, from
odd ARM versions to i686/x86_64 for Fedora, OpenSUSE, Ubuntu and related
OSs. Plus a generic Linux AppImage that runs on most (but not all) x86
Linux flavors. With occasional strange bugs :-)

Also, native build on Mac (because of our dependencies we now support 10.8
and newer, I believe). Native build on Mac for iOS (that's
Subsurface-mobile). Build on either Mac or Linux for Android
(Subsurface-mobile) and I believe at least on Linux you can build full
Subsurface for Android (I don't know if anyone has tried that from a Mac).
Subsurface on Android is more of a proof of concept - the UI isn't really
usable there.

And finally we cross build Windows i686 binaries on an Ubuntu box (but
this should work on many Linux systems) using MXE. More on that below.

> On a related point, I have not looked yet on how testing is done, (and how
> far it goes) I would be more than happy to extend the test with this use
> case, I'd only need a bit of guidance.

Testing is done using the QTEST framework, and the tests are run locally -
which means testing Windows specific things has not really been solved
yet. Automated tests are run on Travis in a Trusty VM (this is a recent
achievement, thanks to Anton) - right now we see random failures of the
gitstorage test there; I need to investigate what's going on. The tests
can also be run locally on a Mac. I don't do that often, though. Not sure
if Robert does.
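
If you want to give the non-ASCII path case a test, a minimal QTEST
skeleton would look roughly like this (the file and class names are made
up - model the real thing on the existing files under tests/):

// tests/testdirexists.cpp - hypothetical; follow the pattern of the
// existing tests in tests/
#include <QTest>
#include <QObject>
#include <QDir>

extern "C" bool dir_exists(const char *dir);

class TestDirExists : public QObject {
	Q_OBJECT
private slots:
	void nonAsciiPath()
	{
		// create, check and remove a directory with a non-ASCII name
		QString name = QStringLiteral("tauchgänge");
		QVERIFY(QDir().mkpath(name));
		QVERIFY(dir_exists(name.toUtf8().constData()));
		QVERIFY(QDir().rmdir(name));
	}
};

QTEST_GUILESS_MAIN(TestDirExists)
#include "testdirexists.moc"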

> While we are talking about porting, is there a thread I could look into
> to understand the choice that was made to go for cross compilation
> rather than building on the target? cmake is quite helpful in that
> regard. Maybe the main reason is that most developers do not want to
> install a win VM to check that their code builds there :)

There is no single thread about this. I have never been a Windows
developer and have always hated the problems with either cmd.exe or
bash.exe on Windows. I tried several times to get native builds going and
wasted days on that with no success - both in the early Gtk days, then
again when we switched to Qt (first with qmake), and then AGAIN when we
switched to cmake.

Lubomir is the only one who ever got this to work and his description
should be somewhere in the email archives. But he himself called this
extremely fragile and hacky, including finding random weird pre-compiled
MinGW build chains and hacking around the lack of a package manager. And
as far as I remember, he posted a while ago that this no longer worked for
him, either.

Long story short: none of the main developers / divers had any experience
with Windows. I did a crude port of a very early version with MinGW, and
thankfully Lubomir came along and helped us over and over along the way.
At some point my MinGW build environment broke (I don't recall exactly
what happened - I think there was something we mis-compiled, I upgraded,
things went to hell in a handbasket, I asked around, and people
recommended that I switch to the much more actively supported MXE).

Which results in the odd situation that Windows is the platform most of
our end users are on, yet we don't build on it, don't test on it, and
tend to treat it as a stepchild - because almost everyone who writes code
besides Lubomir does so on (predominantly) Linux or (Robert, sometimes
me, I'm sure some others) Mac.

If you want to spend time on getting a native build going, I am certainly
not opposed to that. I'll be happy to set it up in a VM and use it for
better testing, etc. I doubt that I would switch over to building the
official binaries that way, unless there was a clear, meaningful advantage
to doing so, simply because my current environment, in which I build all
of the binaries, is already insane and complicated enough and held
together by odd scripts and habits of mine - adding another "and then
boot that VM and manually do the following" step would need rather
significant motivation. But for better testing? Sure, that sounds useful.

Let me know if you have more questions.

/D

