Comparing changes
base repository: modern-fortran/neural-fortran
base: v0.18.0
head repository: modern-fortran/neural-fortran
compare: v0.19.0
- 6 commits
- 41 files changed
- 5 contributors
Commits on Sep 13, 2024
- Update for cmake use of neural-fortran (#192) (commit d516437)
  * Update for cmake use of neural-fortran
  * Update readme
  * Fix up comment in cmake file
  * Add CMakeLists.txt to CI
  * Update cmake
  * Remove -DSERIAL=1
  * Remove -DSERIAL=1
  Co-authored-by: Milan Curcic <[email protected]>
Commits on Feb 16, 2025
- Add Input2d layer (commit a28a9be; usage sketch below)
  * Add Input2d layer by redesigning the input parameters to input layer constructors
  * input2d: add forwards and backwards for 2d, create separate `predict_batch` interface
  * input2d: add output2d
  * input2d: tests
  * input2d: update cmake
  * input2d: update readme
  * Tidy up
  * Bump version & update copyright year
  * Tidy up
  Co-authored-by: Mikhail Voronov <[email protected]>
- Generic flatten (2d and 3d) (#202) (commit 4ad75bc)
  * Generic flatten() with 2-d and 3-d inputs
  * Explicitly enable preprocessing for fpm builds
  * Update README
  * generic-flatten: use assumed-rank instead of generics
  Co-authored-by: Mikhail Voronov <[email protected]>
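To illustrate the two commits above, here is a minimal, hypothetical sketch (not code from this diff) of a network built on a 2-d input and collapsed by the now-generic flatten(). All layer sizes are made up, and the assumption that predict accepts a single 2-d sample follows from the commit description rather than from the diff itself.

```fortran
program input2d_flatten_sketch
  ! Hypothetical sketch: a 2-d input (16 steps x 8 features) is flattened
  ! to a 128-element vector and fed to a small dense layer. Sizes are
  ! illustrative only.
  use nf, only: network, input, flatten, dense
  implicit none
  type(network) :: net
  real :: sample(16, 8)

  net = network([ &
    input(16, 8), &   ! 2-d input layer (first commit above)
    flatten(), &      ! generic flatten accepts 2-d as well as 3-d input
    dense(4) &
  ])

  call random_number(sample)
  print *, net % predict(sample)   ! assumed: predict takes a single 2-d sample
end program input2d_flatten_sketch
```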
Commits on Feb 17, 2025
- Add linear2d layer (commit c316ee1)
  * linear2d_layer forward implementation
  * implement backward
  * introduce concurrency, outtroduce stupidity
  * fix style
  * add parameters api to linear2d_layer
  * add constructor for linear2d_layer
  * add integration for linear2d layer
  * set usage rules for linear2d_layer
  * add linear2d_layer to public api
  * update tests for linear2d layer
  * remove extra comment
  * remove rubbish
  * move linear2d layer logic into submodule
  * update cmake for linear2d_layer
  * update tests for linear2d_layer
  * update linear2d_layer tests
  * update linear2d_layer tests for batch last
  * make linear2d_layer with batch as last dimension (performance)
  * linear2d_layer: fix gradient updates
  * linear2d_layer: make it 2d
  * linear2d_layer: forgot a file
  * linear2d_layer: temporarily remove api
  * Don't expose the concrete layer type via nf
  * Report success to stdout
  * Include linear2d test in cmake
  * Add Linear2d to README
  * Plumbing of linear2d with input2d and linear2d
  * linear2d_layer: add flatten2d layer
  * linear2d_layer: make linear2d layer work with input2d and flatten2d
  * update cmake
  * linear2d_layer: use flatten layer instead of flatten2d
  * linear2d_layer: remove flatten2d layer
  * linear2d_layer: remove public api
  * linear2d_layer: update cmakelists
  * linear2d_layer: workaround cpu imprecision to make ci happy
  * Add linear2d example
  * linear2d_layer: remove redundant constructor args
  * linear2d_layer: make example converge
  * linear2d_layer: make weights init with normal distribution
  * linear2d_layer: add loss stopping and more iterations
  * linear2d_layer: update tests
  * Tidy up
  * Require passing only out_features to linear2d(); tidy up
  * Remove linear2d example
  Co-authored-by: milancurcic <[email protected]>
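A minimal sketch of how the new linear2d layer might be used, assuming (per the commit message) that its constructor takes only out_features and that it projects the feature dimension of a 2-d (sequence x features) input. The sizes below are illustrative, not from the removed linear2d example.

```fortran
program linear2d_sketch
  ! Hypothetical sketch: an 8 x 4 input is projected to 8 x 16 by
  ! linear2d(16), flattened, and passed to a small dense head.
  use nf, only: network, input, linear2d, flatten, dense
  implicit none
  type(network) :: net

  net = network([ &
    input(8, 4), &    ! 8 sequence steps, 4 features per step (made up)
    linear2d(16), &   ! only out_features is passed, per this commit
    flatten(), &
    dense(2) &
  ])

  call net % print_info()
end program linear2d_sketch
```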
Commits on Feb 21, 2025
- Add dropout layer (commit 039638d; usage sketch below)
  * First stab at dropout; conflict with base type TODO
  * Partial dropout integration
  * Test uninitialized dropout layer
  * Test dropout state that follows an input layer
  * Enable forward pass for dropout; backward pass TODO
  * Version bump and add dropout to the features table
  * Add dropout to CMake
  * Enable preprocessing in fpm.toml (needed with recent versions of fpm)
  * Small change in scale implementation
  * Integration of backward pass for dropout
  * Reduce tolerance in conv2d convergence tests
  * Fix bug in dropout scaling (Co-authored-by: Ricardo Orsi <@ricor07>)
  * disable dropout in inference mode (net % predict); TODO enable in net % train
  * Set dropout's training mode to true in net % train(); add tests
  * WIP dropout tests
  * Dropout layers always in training mode, except when predict is called, when they are in inference mode
  * Update the layers table
  * Ensure the actual dropout rate == requested dropout rate in most cases
  * Accumulate the gradient in dropout % backward and flush in network % update
  * Guard against bad dropout rate
  * Connect the backward pass; expand tests
  * Expand tests
  * Use the reference scaling in dropout; don't accumulate gradients because it's not needed
  * Add dropout to MNIST example; small model changes
  * Add reference
  * Update print_info dropout
  * Update print_info
  * Compute scale once in dropout constructor
  * dropout % backward() doesn't need input from the previous layer
  * Timing info of dropout
  Co-authored-by: Vandenplas, Jeremie <[email protected]>
- Add multi-head attention and self_attention layers (commit ed8b340)
  * linear2d_layer forward implementation
  * linear2d_layer: temporarily remove api
  * Don't expose the concrete layer type via nf
  * Plumbing of linear2d with input2d and linear2d
  * linear2d_layer: add flatten2d layer
  * linear2d_layer: make linear2d layer work with input2d and flatten2d
  * update cmake
  * linear2d_layer: remove flatten2d layer
  * linear2d_layer: remove public api
  * linear2d_layer: update cmakelists
  * Add linear2d example
  * linear2d_layer: remove redundant constructor args
  * linear2d_layer: make example converge
  * linear2d_layer: add loss stopping and more iterations
  * start implementing MultiHeadAttention
  * scaled dot product attention
  * combine attention heads
  * forward (not working)
  * rearrange attention dimensions in more efficient way
  * initial forward implementation for multi-head attention
  * tests for multihead_attention%forward
  * multihead_attention: move most logic to subroutines (performance)
  * multihead_attention: update tests
  * multihead_attention: concurrency
  * multihead_attention: proof of concept backward (works, but not mathematically correct)
  * multihead_attention: fix minor scaling issue
  * multihead_attention: complete backward implementation
  * multihead_attention: add comments for forward prop
  * multihead_attention: add tests for backward
  * multihead_attention: adjust expected test values for updated scaling
  * multihead_attention: calculate scaling factor only once
  * multihead_attention: use heap-allocated arrays during back prop
  * multihead_attention: use heap-allocated arrays in forward
  * multihead_attention: set values from correct shape to tests
  * multihead_attention: fix issues with shapes (softmax prime became even more monstrous)
  * multihead_attention: minor refactoring and optimization
  * multihead_attention: fix comments
  * multihead_attention: tests, add checks for attention weights
  * multihead_attention: remove some of the copypaste comments
  * multihead_attention: optimize shapes
  * multihead_attention: params api
  * multihead_attention: fix incorrect dw bug
  * multihead_attention: tests for updated parameters
  * multihead_attention: remove reshape crutches
  * multihead_attention: rename common forward and backward calls
  * multihead_attention: tidy mha up
  * multihead_attention: self attention
  * multihead_attention: add cross attention
  * multihead_attention: add more comments
  * multihead_attention: arrange attention into submodule
  * multihead_attention: update cmakelists
  * multihead_attention: update attention in accordance with linear2d
  * multihead_attention: remove redundant constructor args for attention layers
  * multihead_attention: use pure and elemental where necessary
  * multihead_attention: plumbing
  * multihead_attention: add reference
  * multihead_attention: remove rebase artifact
  * multihead_attention: remove redundant args
  * multihead_attention: update tests
  * multihead_attention: add the most important lines to tests
  * multihead_attention: simple MHA example
  * multihead_attention: update cmake
  * multihead_attention: remove debug line from tests
  * multihead_attention: set slightly higher margin for fp imprecision (due to IEEE_DENORMAL)
  * Rename mha_simple example
  * Update src/nf/nf_multihead_attention.f90 (4 review suggestions; Co-authored-by: Jeremie Vandenplas <[email protected]>)
  * Tidy up
  * Add self_attention to the layers table
  Co-authored-by: milancurcic <[email protected]>
  Co-authored-by: Jeremie Vandenplas <[email protected]>
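A hedged sketch of the dropout layer described in the first commit above; it is not the MNIST example from the diff. The 0.2 rate, the layer sizes, the random training data, and the single-argument dropout(rate) constructor are assumptions consistent with the commit message (dropout is active during net % train and bypassed during net % predict).

```fortran
program dropout_sketch
  ! Hypothetical sketch: dropout(0.2) drops roughly 20% of activations
  ! while training and runs in inference mode inside net % predict.
  use nf, only: network, input, dense, dropout, sgd, relu, softmax
  implicit none
  type(network) :: net
  real :: x(784, 64), y(10, 64)

  net = network([ &
    input(784), &
    dense(64, relu()), &
    dropout(0.2), &
    dense(10, softmax()) &
  ])

  call random_number(x)   ! placeholder data, just to make the sketch runnable
  call random_number(y)

  call net % train(x, y, batch_size=16, epochs=1, optimizer=sgd(learning_rate=0.01))
  print *, net % predict(x(:, 1))   ! dropout is inactive here
end program dropout_sketch
```

And a sketch loosely modeled on the mha_simple example mentioned in the second commit; the number of heads, all sizes, and the assumption that self_attention takes the number of heads as its only constructor argument are illustrative, not taken from the example itself.

```fortran
program self_attention_sketch
  ! Hypothetical sketch: a 2-d (sequence x features) input passes through
  ! multi-head self-attention, is flattened, and ends in a softmax head.
  use nf, only: network, input, self_attention, flatten, dense, softmax
  implicit none
  type(network) :: net

  net = network([ &
    input(8, 16), &        ! 8 steps, 16 features per step (made up)
    self_attention(4), &   ! assumed: 4 attention heads
    flatten(), &
    dense(2, softmax()) &
  ])

  call net % print_info()
end program self_attention_sketch
```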
The full diff is too large to render here. To see the complete comparison between the two tags, run this command locally:
git diff v0.18.0...v0.19.0