

Comparing changes

base repository: modern-fortran/neural-fortran
base: v0.21.0
head repository: modern-fortran/neural-fortran
compare: v0.22.0
  • 3 commits
  • 36 files changed
  • 2 contributors

Commits on May 2, 2025

  1. Generic conv & maxpool (#220)

    * Generic conv constructor for specific conv1d and conv2d layers
    
    * Generic maxpool constructor for maxpool1d_layer and maxpool2d_layer
    
    * Fix arguments in 2d CNN
    
    * Update src/nf/nf_layer_constructors.f90 (x4; co-authored by Jeremie Vandenplas <[email protected]>)
    
    * Add generic locally_connected wrapper around locally_connected1d
    
    Authored by milancurcic and jvdp1 on May 2, 2025 (commit 402b84a)
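With #220 in place, a model can be built with the generic conv and maxpool names and the library selects the 1-d or 2-d implementation from the arguments. The sketch below shows how such a model might be declared; the keyword names (filters, kernel_width, kernel_height, pool_width, pool_height, stride) are assumptions for illustration and may differ from the exact interface added in this release.

    ! Usage sketch only: keyword names are assumed, not confirmed from the
    ! diff above. Check the #220 changes for the exact signatures.
    program generic_conv_sketch
      use nf, only: network, input, conv, maxpool, flatten, dense
      implicit none
      type(network) :: net

      ! The same generic names are meant to cover conv1d/maxpool1d for
      ! 1-d inputs and conv2d/maxpool2d for image inputs.
      net = network([ &
        input(3, 32, 32), &                                   ! 3-channel 32x32 input
        conv(filters=16, kernel_width=3, kernel_height=3), &  ! resolves to a conv2d layer
        maxpool(pool_width=2, pool_height=2, stride=2), &     ! resolves to a maxpool2d layer
        flatten(), &
        dense(10) &
      ])

      call net % print_info()
    end program generic_conv_sketch

The new generic locally_connected wrapper plays the same role for locally_connected1d.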

Commits on Jul 30, 2025

  1. Apply optimizer to model weights without data copy (#222)

    * WIP optimizer refactor w/ pointers
    
    * WIP optimizer optimization
    
    * Sending the data to the optimizer without a copy now works for dense layers
    
    * Get weights and weight gradients as 1-d arrays
    
    * get_params_ptr and get_gradients_ptr for conv1d, conv2d, and locally_connected1d
    
    * Define an optimizer instance per layer to preserve optimizer memory across layers
    
    * Initialization of a network-wide optimizer is no longer needed now that we have switched to per-layer optimizer instances
    
    * Bookkeeping for velocity, rms_gradient, etc.; optimizer tests now pass
    
    * Update optimizer flow for linear2d
    
    * Update optimizer flow for layernorm
    
    * Previous bookkeeping for successive calls to optim % minimize() assumed 2 calls per batch; this is now generalized to allow any number of calls until size(params) is exhausted
    
    * Remove get_gradients from network, layer, dense, conv1d, conv2d
    
    * Remove the optimizer as a component of the network class
    
    Authored by milancurcic on Jul 30, 2025 (commit 1c968ce). A standalone sketch of the no-copy update pattern follows this commit list.
  2. Commit b2073fa
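The refactor in #222 has each layer expose its weights and weight gradients to the optimizer as 1-d pointer views (get_params_ptr, get_gradients_ptr), so the per-layer optimizer updates the weights in place rather than working on copies. The standalone sketch below illustrates the underlying mechanism, pointer bounds remapping of a rank-2 weight array onto a rank-1 view followed by an in-place SGD step; it is an illustration of the idea, not the library's actual internals.

    ! Standalone sketch of the no-copy update pattern; not neural-fortran's
    ! actual get_params_ptr/get_gradients_ptr implementation.
    program no_copy_update_sketch
      implicit none
      real, target :: weights(3, 4), gradients(3, 4)  ! a layer's weights and their gradients
      real, pointer :: w(:), dw(:)                    ! flat views handed to the optimizer
      real, parameter :: learning_rate = 0.01

      call random_number(weights)
      call random_number(gradients)

      ! Remap the rank-2 arrays onto rank-1 pointers; no data is copied.
      w(1:size(weights)) => weights
      dw(1:size(gradients)) => gradients

      ! A plain SGD step through the flat views modifies the original
      ! weights array in place, which is what the per-layer optimizer
      ! relies on when minimize() is called.
      w = w - learning_rate * dw

      print *, 'updated weights:', weights
    end program no_copy_update_sketch

Because each layer now holds its own optimizer instance, state such as velocity or rms_gradient stays associated with the parameters it belongs to, and minimize() can be called any number of times until size(params) is exhausted, as noted in the commit messages.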