Releases: e3nn/e3nn

0.5.8

07 Oct 02:04
d766146

Full Changelog: 0.5.7...0.5.8

0.5.7

04 Sep 00:33
92c87a5

Full Changelog: 0.5.6...0.5.7

0.5.6

22 Mar 20:33
cf17804

Full Changelog: 0.5.5...0.5.6

0.5.5

02 Feb 23:07
c370f49

Full Changelog: 0.5.4.1...0.5.5

0.5.4.1

01 Feb 16:13
f952979

What's Changed

  • hack to turn off jit script in _spherical_harmonics by @mitkotak in #485
  • Fixes #493 by @mitkotak in #494
  • explicitly specify device placement in random number generator in e3nn.math._normalize_activation.moment by @lyuwen in #492
  • Force CodeGenMixin to entirely respect e3nn.set_optimization_defaults by @Linux-cpp-lisp in #484
  • Replace flake8 with ruff by @mitkotak in #498

Full Changelog: 0.5.4...0.5.4.1

0.5.4

06 Nov 02:07
ef93f87

Full Changelog: 0.5.2...0.5.4

2024-07-26

27 Jul 03:01
7f6be7d

[0.5.2] - 2024-07-26

Added

  • o3.experimental.FullTensorProductv2 | ElementwiseTensorProductv2 for compatibility with torch.compile(..., fullgraph=True)
  • enable pip caching in CI
  • Optional scalar bias term in _batchnorm.py
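
The optional scalar bias can be pictured outside of e3nn; a minimal sketch in plain Python (a hypothetical `batch_norm` helper written for illustration, not the actual `_batchnorm.py` code):

```python
from statistics import fmean, pstdev

def batch_norm(xs, bias=None, eps=1e-5):
    """Normalize to zero mean / unit variance, then optionally add a
    scalar bias. A conceptual sketch only, not e3nn's BatchNorm."""
    mu = fmean(xs)
    sigma = pstdev(xs)
    out = [(x - mu) / (sigma + eps) for x in xs]
    if bias is not None:
        out = [y + bias for y in out]
    return out
```

With `bias=0.5` the output mean shifts from 0 to 0.5 while the shape of the distribution is unchanged.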

Changed

  • refactor to use pyproject.toml for packaging
  • refactor gh community files
  • move pylint, coverage and flake8 configuration to pyproject.toml

Fixed

  • Fix TorchScript warning "doesn't support instance-level annotations" (#437)

2022-12-12

12 Dec 21:42

Added

  • L=12 spherical harmonics
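
Degree-12 harmonics are within easy reach of the classic three-term recurrence; a minimal sketch in pure Python (the textbook Bonnet recurrence for the m = 0 line, unrelated to e3nn's generated code):

```python
def legendre(l, x):
    """Evaluate the Legendre polynomial P_l(x) via Bonnet's recurrence:
    (k + 1) P_{k+1}(x) = (2k + 1) x P_k(x) - k P_{k-1}(x).
    The m != 0 spherical harmonics use the associated P_l^m, which
    satisfy a similar recurrence (omitted here)."""
    p_prev, p = 1.0, x  # P_0(x) and P_1(x)
    if l == 0:
        return p_prev
    for k in range(1, l):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p
```
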

Fixed

  • TensorProduct.visualize now works even if the TP is on the GPU.
  • GitHub Actions only triggers a push to Coveralls if the corresponding token is set in GitHub secrets.
  • Batchnorm

2022-04-13

13 Apr 19:24

[0.5.0] - 2022-04-13

Added

  • Sparse Voxel Convolution
  • Clebsch-Gordan coefficients are computed via a change of basis from the complex to the real basis (see #341)
  • o3, nn, and io are accessible through e3nn, e.g. e3nn.o3.rand_axis_angle
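
For l = 1 the complex-to-real change of basis is small enough to write out; a minimal sketch in pure Python (a common textbook convention for the real harmonics — sign and ordering conventions differ between references, so this is illustrative, not e3nn's internal matrix):

```python
import math

s = 1.0 / math.sqrt(2.0)

# Rows map the complex harmonics (Y_{1,-1}, Y_{1,0}, Y_{1,1})
# to the real ones; any valid change of basis must be unitary.
U = [
    [1j * s, 0.0, 1j * s],   # ~ i/sqrt(2) * (Y_{1,-1} + Y_{1,1})
    [0.0,    1.0, 0.0],      # =  Y_{1,0}
    [s,      0.0, -s],       # ~ 1/sqrt(2) * (Y_{1,-1} - Y_{1,1})
]

def unitary_defect(M):
    """Max |(M M^H - I)_{ij}|; zero exactly when M is unitary."""
    n = len(M)
    d = 0.0
    for i in range(n):
        for j in range(n):
            acc = sum(M[i][k] * M[j][k].conjugate() for k in range(n))
            acc -= 1.0 if i == j else 0.0
            d = max(d, abs(acc))
    return d
```
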

Changed

  • The code is no longer tested against torch==1.8.0; it is now tested only against torch>=1.10.0

Fixed

  • wigner_3j now always returns a contiguous copy regardless of dtype or device

2021-12-15

16 Dec 08:48

[0.4.4] - 2021-12-15

Fixed

  • Remove CartesianTensor._rtp; the ReducedTensorProduct is now recomputed every time. The user can save the ReducedTensorProduct to avoid creating it each time.
  • *equivariance_error no longer keeps around unneeded autograd graphs
  • CartesianTensor builds ReducedTensorProduct with correct device/dtype when called without one

Added

  • Created a module of reflected imports allowing nicer syntax for creating irreps, e.g. from e3nn.o3.irreps import l3o # same as Irreps("3o")
  • Add uvu<v mode for TensorProduct, which computes only the upper triangular part of the uv terms.
  • (beta) TensorSquare: computes x \otimes x and decomposes it.
  • *equivariance_error now tells you which arguments had which error
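
The saving behind a uvu<v-style mode is just the triangle-number count; a minimal sketch in pure Python (illustrating only the pair counting, not the TensorProduct API):

```python
from itertools import combinations, product

def uv_pairs(n):
    """All ordered (u, v) channel pairs: n**2 paths."""
    return list(product(range(n), repeat=2))

def uv_upper_pairs(n):
    """Only pairs with u < v -- the strict upper triangle: n*(n-1)/2 paths."""
    return list(combinations(range(n), 2))
```

For 16 channels this cuts 256 paths down to 120.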

Changed

  • Drop support for Python 3.6; set python_requires='>=3.7' in setup
  • Slightly optimize ReducedTensorProduct: solve the linear system only once per irrep instead of 2L+1 times.
  • Do not scale line width by path_weight in TensorProduct.visualize
  • *equivariance_error now converts its inputs to float64 by default, regardless of the dtype used for the calculation itself