Because of the discussion that started here:
I feel like we (or at least I) haven't been too careful about the effects of fixed overhead on performance. In some situations, like finding short paths in a huge graph, almost all of the time can be spent on error checking or heap initialization. Should we change this?
Maybe we should add more benchmarks to check when, in practice, the error-checking time overtakes the time the algorithm itself takes?
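A minimal sketch of what such a benchmark could look like, assuming the error checking is an upfront pass over every edge weight (which would explain why it dominates a trivial query). The graph builder, `validate`, and the Dijkstra implementation below are hypothetical stand-ins, not the library's actual code:

```python
# Hypothetical micro-benchmark: compare a shortest-path query with and
# without an upfront pass that validates every edge weight.
import heapq
import math
import timeit


def dense_graph(n):
    """Adjacency list of a complete graph on n vertices, unit weights."""
    return [[(v, 1.0) for v in range(n) if v != u] for u in range(n)]


def validate(adj):
    """Upfront O(E) weight check -- fixed cost paid before any search."""
    for nbrs in adj:
        for _, w in nbrs:
            if w < 0 or math.isnan(w):
                raise ValueError("invalid edge weight")


def dijkstra(adj, src, dst):
    dist = [math.inf] * len(adj)
    dist[src] = 0.0
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            return d
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist[dst]


adj = dense_graph(200)
# Trivial query (src == dst), so the search itself is nearly free and
# any upfront validation shows up as pure overhead.
t_plain = timeit.timeit(lambda: dijkstra(adj, 0, 0), number=20)
t_check = timeit.timeit(lambda: (validate(adj), dijkstra(adj, 0, 0)), number=20)
print(f"no check: {t_plain:.4f}s   with check: {t_check:.4f}s")
```

On a dense graph the validation pass touches every edge even when the query finishes immediately, which matches the pattern in the numbers below.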
For a full graph of 10,000 vertices (about 50,000,000 edges), using A* to find the path from 0 to 0, repeated 100x:
| full graph, no weight error checking | 0.014s | 0.011s | 0.003s |
| full graph, with error checking | 11.8s | 11.7s | 0.007s |
Things like heap initialization scale with the number of nodes, so they're less of a problem for a full graph. Here's a ring graph for comparison, 50,000,000 vertices and edges, run once:
| ring graph, no weight error checking | 0.343s | 0.206s | 0.136s |
| ring graph, with error checking | 0.469s | 0.319s | 0.15s |
So for dense graphs, error checking alone can have a big impact; for sparse graphs, both error checking and initialization do.
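The initialization cost mentioned above can be illustrated with a sketch (again hypothetical code, not the library's): an array-based Dijkstra pays O(V) to set up its distance array even for a one-hop query, while a dict-based variant only touches the vertices it actually visits.

```python
# Hypothetical comparison of eager vs. lazy initialization in Dijkstra.
import heapq
import math
import timeit


def ring_graph(n):
    """Each vertex u is connected to u-1 and u+1 (mod n), unit weights."""
    return [[((u + 1) % n, 1.0), ((u - 1) % n, 1.0)] for u in range(n)]


def dijkstra_array(adj, src, dst):
    dist = [math.inf] * len(adj)      # O(V) setup, even for a short query
    dist[src] = 0.0
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            return d
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist[dst]


def dijkstra_lazy(adj, src, dst):
    dist = {src: 0.0}                 # grows only with vertices visited
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            return d
        if d > dist.get(u, math.inf):
            continue
        for v, w in adj[u]:
            if d + w < dist.get(v, math.inf):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist.get(dst, math.inf)


adj = ring_graph(200_000)
# A 2-hop query on a large sparse graph: the array version's setup cost
# dwarfs the actual search work.
t_array = timeit.timeit(lambda: dijkstra_array(adj, 0, 2), number=20)
t_lazy = timeit.timeit(lambda: dijkstra_lazy(adj, 0, 2), number=20)
print(f"array init: {t_array:.4f}s   lazy init: {t_lazy:.4f}s")
```

Whether the library could take a lazy approach without hurting the dense-graph case is exactly the kind of question more benchmarks would answer.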