Replies: 3 comments
-
Also, thank you so much for developing this amazing library and showing what can be done with modern GPU infrastructure from Python!
-
Hi! Sorry for the late response. Tagging @almarklein, since lots of this pertains to pygfx. For performance, you are broadly describing LOD (level of detail). It's something we've talked about at a surface level for a while, but we haven't had the time to implement it. If you are interested in implementing a generalized LOD mechanism, we would be very interested!

Have you tried right click + drag to scale? (video: scale-2025-05-09_04.13.40.mp4)

I'll get back to your other points soon.
-
I'll have a proper look into the camera-zoom issue soon. As for the numerical issues when viewing the end of a large time series, it would be great if we could accommodate this in Pygfx, e.g. using positional offsets or something.
-
I am interested in plotting very long time series and allowing interactive viewing of them (i.e., a 2D line plot where the x-coordinates of the points are `np.arange(N)` and the y-coordinates are the data). By long, I mean up to `N=10**9`, and ideally I'd like to plot a few of these series on the same plot. The plot should be interactive, with the possibility of zooming from viewing the whole dataset down to viewing only a few points.

Conceptually, everything I need (and more!) is provided by `LineCollection`, but this use case pushes its limits and runs into some limitations. I therefore started developing a library to work around these limitations (https://github.com/cassiersg/scaviz), which I discuss below. Any comment/input is welcome. I am also interested in contributing upstream if some features are of interest to fastplotlib.

Performance
The first issue is performance: in quick experiments on my iGPU, I see the frame rate dropping (and overall high system resource usage) when going above ~`10**6` points in total. This is expected, since calling the vertex shader for that many points is costly and won't scale to larger datasets. I combined two solutions to solve this issue:

1. Render only the points whose indices fall in the currently visible x range.
2. Pool every `k` points into one: we take the min and max of the `k` points, then color the whole (min, max) interval along the y axis for the corresponding x coordinate.

The current implementation does 1. using a custom shader which is very similar to pygfx's Line, but stores only the y-coordinate buffer to save VRAM (z is a uniform and x is computed from the vertex index). For this to work, the range of vertex indices that gets rendered needs to change on camera updates, which is currently done with a hack to pygfx (pygfx/pygfx#1078).
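For step 1, the per-frame CPU side can be as simple as clamping the camera's x extent to valid vertex indices, since `x_i = i`. A minimal sketch with made-up names, not the actual pygfx hack:

```python
import math

def visible_index_range(x_min: float, x_max: float, n: int, pad: int = 2):
    """Clamp the camera's visible x extent [x_min, x_max] to valid
    vertex indices, with a small padding so line segments that cross
    the screen edge don't pop in and out."""
    i0 = max(0, math.floor(x_min) - pad)
    i1 = min(n, math.ceil(x_max) + pad + 1)
    return i0, i1

print(visible_index_range(1234.7, 5678.2, 10**9))  # (1232, 5682)
```

The returned `(i0, i1)` would then be fed to the draw call as the vertex range on every camera update.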
For 2., the pooling is done with numpy when the graphic is created, and the pooled vectors are concatenated and transferred to the GPU. At every camera update, we select a "pooling scale" in the data, along with the rendered range, and update a uniform buffer with this information. The rest of the rendering still needs to be done properly: currently I simply render a line that zig-zags between the min and max (this works reasonably well, but shows small artifacts).
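The pooling in step 2 can be sketched in a few lines of numpy (an illustration, not the actual scaviz implementation; trailing samples that don't fill a full window of `k` are simply dropped here):

```python
import numpy as np

def minmax_pool(y: np.ndarray, k: int) -> np.ndarray:
    """Pool every k samples into a (min, max) pair.

    Returns an array of shape (len(y) // k, 2); each row covers one
    window of k consecutive samples.
    """
    n = (len(y) // k) * k          # drop the incomplete last window
    blocks = y[:n].reshape(-1, k)  # one row per window
    return np.stack([blocks.min(axis=1), blocks.max(axis=1)], axis=1)

y = np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0], dtype=np.float32)
pooled = minmax_pool(y, k=4)
print(pooled)  # [[1. 4.], [2. 9.]]
```

Presumably the same reduction would be applied at several scales (`k`, `k**2`, ...) so that an appropriate pooling scale can be selected at each camera update, as described above.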
Numerical issues
Given that 32-bit floating-point numbers have 23 bits of mantissa, they have an effective precision of 24 bits: they can represent integers exactly up to `2**24-1`, which is approximately 16e6 and therefore lower than the maximum x coordinate I want to show. In many use cases this limitation is not an issue: for most scientific data, a value of e.g. `10**9` versus `10**9+50` doesn't make a difference, so the limited precision can be ignored.

However, for my use case it actually matters when zooming in at the end of the sequence: all points in a range like `[10**9, 10**9+50]` get collapsed. In that case, the shader essentially computes `x_orig = 10**9` (offset at which data rendering starts) and `delta = 20` (index of the rendered point), then computes the world x coordinate `x_orig + delta`. Applying the camera transform then essentially subtracts `10**9`, giving the computation `(10**9+20) - 10**9`, which will be incorrect in float32.

I think this can be fixed by setting `x_orig = 0` in the shader and passing it a non-standard camera matrix whose position is the true camera position shifted by the original `x_orig`. The camera computations themselves are done in Python and are lightweight, so they can be done with 64-bit floats (maybe they already are), which solves the problem (with 52 bits of mantissa, the data will not fit in any memory/storage before precision becomes an issue).

The same precision issues might appear in the axes/tick computations, which might need a similar adaptation.
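Both the collapse and the 64-bit offset trick can be reproduced outside the shader with numpy (a standalone sketch, not pygfx code):

```python
import numpy as np

# The quantities as a float32 shader would see them.
x_orig = np.float32(10**9)  # offset at which data rendering starts
delta = np.float32(20)      # index of the rendered point
cam_x = np.float32(10**9)   # camera x position

# Naive pipeline: the float32 spacing around 1e9 is 64, so
# 10**9 + 20 rounds back to 10**9 and the point collapses.
naive = (x_orig + delta) - cam_x
print(naive)  # 0.0 instead of the expected 20.0

# Offset trick: do the large-magnitude subtraction in float64 on the
# CPU (shift the camera by x_orig), so the shader works near zero.
cam_shifted = np.float64(cam_x) - np.float64(10**9)  # exactly 0.0
fixed = (np.float32(0) + delta) - np.float32(cam_shifted)
print(fixed)  # 20.0
```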
Other
- After `add_graphic(..., center=True)`, zooming may end up showing only the background (no axes or graphic visible anymore), due to z=0 falling outside the camera's frustum (i.e., in front of the near plane or behind the far plane). This is a numerical-accuracy problem in the camera computation: the x dimension is very large while the z range is small. Due to how `show_object` works when `fov==0`, the camera position ends up very far from z=0 (approximately as far away as the x dimension is wide). The result of a large z distance combined with a small z range is that the z range gets shifted a bit. See pygfx/pygfx#1081 (orthographic camera `show_object` distance).
- When zoomed into a range like `(10**5, 10**5+100)`, all the tick labels read `1e5`, so we don't get any sense of the scale anymore. I haven't investigated this issue yet; it seems to me that the best solution is to dynamically adjust the precision of the ticks according to the displayed range. This actually looks like it is already done (I can see the labels go from 1 up to 4 significant digits when zooming in, but then it stops; maybe there is a hard-coded threshold).