new very large sample files

Linus Torvalds torvalds at linux-foundation.org
Thu Aug 30 08:58:51 PDT 2018


On Thu, Aug 30, 2018 at 7:47 AM Dirk Hohndel <dirk at hohndel.org> wrote:
>
> For those who like to test with very large sample data, I have created
> (with the help of one of our members here - thanks for that) an anonymized
> dive log of substantial size. 8064 dives, 4.19M samples.

Heh. We're doing something *horribly* bad with "Select all".

Try this:

 - load your monster set over git:

    ./subsurface https://github.com/Subsurface-divelog/large-anonymous-sample-data[git]

 - pick one dive at random

 - edit it to have a random divemaster, apply changes to that one dive

 - now, do "Select All".

 - First problem: that changes the current dive. WHAT?

 - go find that original dive, and use control-click to unselect it
and select it again to make it the active one.

 - try to edit that random divemaster to another name ("Multiple dives
are being edited"). Things are slow.

 - Do "Apply changes". Things go from "slow" to "hung".

The above sequence *should* change just the one dive, since the logic
for multi-dive edits is that it should only change changed fields, and
should only change them in other dives if they match (which in this
case shouldn't be happening).

So it *should* be fairly cheap.

It's not.

Doing a profile on it seems to indicate that it's spending a lot of
time in get_dive/get_divenr/get_dive_by_unique_id.

I'm not checking why; I need to do some kernel work.

             Linus
