
Conversation

@almarklein
Member

@almarklein almarklein commented Mar 7, 2025

Fixes #1005

  • Rename "kwargs" to "template-vars".
  • Split into two groups, one for init and one for bindings, so that the latter can be cleared; when values are then not set, this is detected, the hash changes, and a recompile is triggered.
  • Check clearing of _binding_codes in BindingDefinitions -> Not strictly necessary, since on a recompile all definitions are recreated. Done, using clear().
  • Have a look at the performance of the hashing. -> With the mesh in gltf_unlit.py, re-hashing takes 12 us 🤷
  • Documentation.
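The hash-based change detection described above can be illustrated with a minimal sketch. This is not pygfx's actual code; the function name `hash_template_vars` and the json-based serialization are assumptions based on the discussion in this thread:

```python
import json

def hash_template_vars(template_vars):
    # Serialize to a canonical JSON string; sort_keys makes the result
    # independent of dict insertion order, so equal contents hash equally.
    return hash(json.dumps(template_vars, sort_keys=True))

# Clearing the binding group before a draw means that any template var
# that is not set again produces a different hash, triggering a recompile.
before = hash_template_vars({"use_colormap": True, "n_lights": 2})
after = hash_template_vars({"n_lights": 2})  # "use_colormap" not re-set
assert before != after
```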

@Korijn
Collaborator

Korijn commented Mar 7, 2025

Remark about hashing using json: isn't it a lot faster to convert to a frozendict and call hash on that? Or (named)tuples?

@almarklein
Member Author

Remark about hashing using json: isn't it a lot faster to convert to a frozendict and call hash on that? Or (named)tuples?

I tried. Yes it's faster, going from 12us to 3us for the mesh in gltf_unlit.py. However, the mesh shader has one template var which is a dict (used_uv), so the code will have to check for these.

Implemented a variant where dicts and lists are tuple-ized; at 9 us it's still faster. Will open a new PR, because this is rather orthogonal.
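The tuple-izing variant could look roughly like the following. This is a hedged sketch, not the actual implementation; `freeze` and `tuple_hash` are hypothetical names, and string dict keys are assumed:

```python
def freeze(value):
    # Recursively convert dicts and lists into hashable tuples.
    # Dict items are sorted so that insertion order does not matter.
    if isinstance(value, dict):
        return tuple(sorted((k, freeze(v)) for k, v in value.items()))
    if isinstance(value, (list, tuple)):
        return tuple(freeze(v) for v in value)
    return value

def tuple_hash(template_vars):
    return hash(freeze(template_vars))
```

The isinstance checks in `freeze` are exactly the per-object overhead mentioned below: each value must be inspected before it can be hashed, which eats into the gain over the json approach.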

@almarklein almarklein marked this pull request as ready for review March 7, 2025 14:24
@almarklein almarklein requested a review from Korijn as a code owner March 7, 2025 14:24
@Korijn Korijn merged commit f0cb7ea into main Mar 7, 2025
14 checks passed
@Korijn Korijn deleted the binding-tracking branch March 7, 2025 14:44
@almarklein
Member Author

Implemented a variant where dicts and lists are tuple-ized, and with 9 us it's still faster.

To do this well, the isinstance checks quickly add up, bringing it to about the same performance as the json implementation. Also, we can calculate 100 hashes in 1 ms, so I'm not looking into this further right now.

@almarklein
Member Author

The only way, I think, that this hashing can be made faster is if you can traverse the object tree and calculate the hash on the fly, rather than producing a big tuple of tuples and hashing that. Updating a hash incrementally is easy with the tools in hashlib, but these operate on bytes, not Python objects. Maybe something for another day.
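Such an on-the-fly traversal might be sketched as follows, assuming string dict keys and repr-able leaf values (hypothetical names, not a proposed implementation):

```python
import hashlib

def _update(h, value):
    # Feed a type tag plus the value's bytes into the running hash,
    # recursing into containers instead of building nested tuples.
    if isinstance(value, dict):
        h.update(b"d")
        for k in sorted(value):
            _update(h, k)
            _update(h, value[k])
    elif isinstance(value, (list, tuple)):
        h.update(b"l")
        for v in value:
            _update(h, v)
    else:
        h.update(repr(value).encode())

def hash_on_the_fly(obj):
    h = hashlib.md5()
    _update(h, obj)
    return h.hexdigest()
```

Note that this still pays the per-object isinstance cost discussed above; the saving is only in not materializing the intermediate tuple structure.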



Development

Successfully merging this pull request may close these issues.

Bug when dynamically applying or removing env_map
