
Releases: ORNL/HydraGNN

HydraGNN v4.0

15 Aug 21:28
3ac07be


HydraGNN v4.0 provides additional core capabilities, including:

  • Inclusion of the multi-body atomic cluster expansion (MACE), the polarizable atom interaction neural network (PAiNN), and equivariant principal neighborhood aggregation (PNAEq) among the supported message passing layers
  • Inclusion of graph transformers to directly model long-range interactions between nodes that are distant in the graph topology
  • Integration of graph transformers with message passing layers by combining the graph embeddings generated by the two mechanisms, which improves the expressivity of the HydraGNN architecture (see the first sketch after this list)
  • Improved re-implementation of multi-task learning (MTL) to enable stabilized training across imbalanced, multi-source, multi-fidelity data
  • Introduction of multi-task parallelism, a newly proposed type of model parallelism specifically for MTL architectures, which allows different output decoding heads to be dispatched to different GPU devices (see the second sketch after this list)
  • Integration of multi-task parallelism with pre-existing distributed data parallelism to enable a 2D parallelization for distributed training
  • Improved portability of the distributed training across Intel GPUs, which has been tested on the ALCF exascale supercomputer Aurora
  • Inclusion of 2-level fine-grained energy profilers, portable across NVIDIA, AMD, and Intel GPUs, to monitor the power and energy consumption associated with the different functions executed by the HydraGNN code during data pre-loading and training
  • Restructuring of previous examples and inclusion of new sets of examples to illustrate the download, preprocessing, and training of HydraGNN models on new large-scale open-source datasets for atomistic materials modeling (e.g., Alexandria, Transition1x, OMat24, OMol25)
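
The following is a minimal PyTorch sketch of the embedding-fusion idea described above, not HydraGNN's actual implementation: `GCNConv` and `MultiheadAttention` stand in for the real message passing layers and graph transformer, and the class name `FusedEncoder` and its concatenation-plus-MLP fusion are illustrative assumptions.

```python
# Minimal sketch (hypothetical names): fuse local message passing features with
# a global attention-based embedding before the decoding heads.
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv


class FusedEncoder(nn.Module):
    """Combine local message passing with global self-attention over nodes."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        self.mp = GCNConv(hidden_dim, hidden_dim)            # local, edge-based messages
        self.attn = nn.MultiheadAttention(hidden_dim, num_heads=4,
                                          batch_first=True)  # global, all-pairs attention
        self.fuse = nn.Sequential(                           # merge the two embeddings
            nn.Linear(2 * hidden_dim, hidden_dim),
            nn.SiLU(),
            nn.Linear(hidden_dim, hidden_dim),
        )

    def forward(self, x: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        h_local = self.mp(x, edge_index).relu()
        h_global, _ = self.attn(x.unsqueeze(0), x.unsqueeze(0), x.unsqueeze(0))
        return self.fuse(torch.cat([h_local, h_global.squeeze(0)], dim=-1))


# Tiny example: 5 nodes with 64 features and a few edges.
x = torch.randn(5, 64)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])
print(FusedEncoder(64)(x, edge_index).shape)  # torch.Size([5, 64])
```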
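
The second sketch illustrates multi-task parallelism as described above: each output decoding head is placed on its own GPU, and the shared node embedding is copied to that device before decoding. The class `MultiTaskParallelHeads` and its arguments are hypothetical and not part of HydraGNN's API.

```python
# Minimal sketch (hypothetical names): dispatch each MTL decoding head
# to its own device and run the heads on the shared embedding.
import torch
import torch.nn as nn


class MultiTaskParallelHeads(nn.Module):
    def __init__(self, heads: list[nn.Module], devices: list[torch.device]):
        super().__init__()
        assert len(heads) == len(devices)
        self.devices = devices
        # Register each head after moving it to its assigned device.
        self.heads = nn.ModuleList(h.to(d) for h, d in zip(heads, devices))

    def forward(self, shared_embedding: torch.Tensor) -> list[torch.Tensor]:
        outputs = []
        for head, device in zip(self.heads, self.devices):
            # Copy the shared embedding to the head's device and decode there.
            outputs.append(head(shared_embedding.to(device, non_blocking=True)))
        return outputs


# Example with two hypothetical property heads (falls back to CPU without 2 GPUs).
devs = ([torch.device(f"cuda:{i}") for i in range(2)]
        if torch.cuda.device_count() >= 2 else [torch.device("cpu")] * 2)
heads = [nn.Linear(64, 1), nn.Linear(64, 3)]  # e.g., a scalar head and a vector head
print([t.shape for t in MultiTaskParallelHeads(heads, devs)(torch.randn(5, 64))])
```

One way to obtain the 2D parallelization mentioned above is to combine this head-level placement with distributed data parallelism, which replicates the shared encoder across ranks.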

HydraGNN v3.0 Release

10 Nov 15:25
6635212


Summary

New or improved capabilities included in the v3.0 release are as follows:

  1. Enhancement of the message passing layers through generalization of the class inheritance to enable the inclusion of a broader set of message passing policies (see the sketch after this list)
  2. Inclusion of equivariant message passing layers adapted from their original implementations
  3. Restructuring of the class inheritance for data management
  4. Support of DDStore (https://github.com/ORNL/DDStore) capabilities for improved distributed data parallelism on large volumes of data that cannot fit within node-local memory
  5. Large-scale system support for OLCF-Crusher and OLCF-Frontier
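
As a rough illustration of the class-inheritance generalization in item 1 (hypothetical names, not HydraGNN's actual hierarchy), a shared base model can own the generic forward pass while each subclass supplies only its message passing policy through a single overridable hook:

```python
# Minimal sketch: a base model defines the generic layer stack and forward pass;
# adding a new message passing policy only requires overriding get_conv().
import torch.nn as nn
from torch_geometric.nn import GINConv  # example of an interchangeable layer


class BaseGNN(nn.Module):
    def __init__(self, hidden_dim: int, num_layers: int):
        super().__init__()
        self.layers = nn.ModuleList(
            self.get_conv(hidden_dim) for _ in range(num_layers)
        )

    def get_conv(self, hidden_dim: int) -> nn.Module:
        raise NotImplementedError  # each subclass supplies its message passing policy

    def forward(self, x, edge_index):
        for layer in self.layers:
            x = layer(x, edge_index)
        return x


class GINStack(BaseGNN):
    def get_conv(self, hidden_dim: int) -> nn.Module:
        mlp = nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
                            nn.Linear(hidden_dim, hidden_dim))
        return GINConv(mlp)
```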

HydraGNN v2.0.0 Release

20 Jan 15:37
9fae102


Summary

New or improved capabilities included in the v2.0.0 release are as follows:

  • Enhancement in message passing layers through class inheritance
  • Addition of transformations to ensure translation and rotation invariance (see the sketch after this list)
  • Support for various optimizers
  • Atomic descriptors
  • Integration with continuous integration (CI) testing
  • Distributed printouts and timers
  • Profiling
  • Support of ADIOS2 for scalable data loading
  • Large-scale system support, including Summit (ORNL) and Perlmutter (NERSC)
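
As a rough illustration of the invariance transformation mentioned above (not HydraGNN's actual code), one common way to obtain translation and rotation invariance is to build edge features from interatomic distances, which do not change under rigid-body motions of the structure:

```python
# Minimal sketch: distance-based edge features are invariant to translations
# and rotations of the atomic structure.
import torch


def invariant_edge_features(pos: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
    """pos: (num_atoms, 3) Cartesian coordinates; edge_index: (2, num_edges)."""
    src, dst = edge_index
    rel = pos[dst] - pos[src]                # relative vectors cancel translations
    return rel.norm(dim=-1, keepdim=True)    # norms are additionally rotation invariant


# Example: a 3-atom structure with two bonds (0-1 and 0-2).
pos = torch.tensor([[0.00, 0.00, 0.0],
                    [0.96, 0.00, 0.0],
                    [-0.24, 0.93, 0.0]])
edge_index = torch.tensor([[0, 0], [1, 2]])
print(invariant_edge_features(pos, edge_index))  # identical for any rotated/shifted copy
```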

Capabilities provided in the v1.0.0 release (Oct 2021)

Major capabilities included in the previous release v1.0.0 are as follows:

  • Multi-task graph neural network training with enhanced message passing layers
  • Distributed Data Parallelism (DDP) support (see the sketch below)
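
For reference, a minimal PyTorch DistributedDataParallel (DDP) training loop of the kind this capability builds on is sketched below; the model and data are placeholders rather than HydraGNN components.

```python
# Minimal DDP sketch: each rank trains a replica of the model and gradients
# are synchronized automatically during backward().
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def main():
    # Launched e.g. with `torchrun --nproc_per_node=4 train.py`;
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE in the environment.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(16, 1).cuda(local_rank)   # placeholder for a GNN
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    for _ in range(10):
        x = torch.randn(32, 16, device=local_rank)    # placeholder batch per rank
        loss = model(x).pow(2).mean()
        optimizer.zero_grad()
        loss.backward()        # gradients are all-reduced across ranks here
        optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```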