Tags: FluxML/Flux.jl


v0.16.3

[Diff since v0.16.2](v0.16.2...v0.16.3)

**Merged pull requests:**
- fix `cpu(dataloader)` (#2587) (@CarloLucibello)
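
A minimal sketch of the call fixed by #2587, with made-up toy data; the exact recursion behaviour is my assumption, and `gpu` is a no-op on machines without a GPU backend:

```julia
using Flux

# Made-up toy data; `cpu`/`gpu` move the loader's batches between devices.
X, Y = rand(Float32, 4, 100), rand(Float32, 1, 100)
loader = Flux.DataLoader((X, Y); batchsize = 16, shuffle = true)

gpu_loader = gpu(loader)   # batches are delivered on the GPU (no-op without a GPU backend)
cpu_loader = cpu(loader)   # the call fixed by #2587

first(cpu_loader)          # an (x, y) batch of plain CPU arrays
```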

**Closed issues:**
- Data loading & preprocessing pipeline feature (#1282)
- Infinite time of gradient (#2585)

v0.16.2

[Diff since v0.16.1](v0.16.1...v0.16.2)

**Merged pull requests:**
- Update deps & bump to 0.16.1 (#2574) (@pxl-th)

**Closed issues:**
- New Gradients ruin everything (#2580)
- Failure to precompile on 1.12: cannot declare Flux.destructure public; it is already declared exported (#2583)

v0.16.1

[Diff since v0.16.0](v0.16.0...v0.16.1)

**Merged pull requests:**
- Adding RecurrentLayers to ecosystem.md (#2555) (@MartinuzziFrancesco)
- Fixed typo in recurrence documentation (#2556) (@MartinuzziFrancesco)
- Adding return state option to recurrent layers (#2557) (@MartinuzziFrancesco) (see the sketch after this list)
- update Schedulers docs (#2560) (@CarloLucibello)
- collapse doc string in layers docs (#2562) (@CarloLucibello)
- fix test enzyme (#2563) (@CarloLucibello)
- Remove 2 items from public, to fix 1.12 (#2569) (@mcabbott)
- Add reactant forward and reverse pass tests (#2576) (@wsmoses)
- cleanup Reactant and Enzyme tests (#2578) (@CarloLucibello)
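
As referenced above, #2557 adds a way to get the final state back from the recurrent layers. A minimal sketch, assuming the keyword is `return_state` and the input follows the (features, time, batch) layout:

```julia
using Flux

# Assumed keyword from #2557: return the final state alongside the output.
lstm = LSTM(3 => 5; return_state = true)

x = rand(Float32, 3, 10, 16)   # (features, sequence length, batch)
y, state = lstm(x)             # y spans every time step; `state` is the final (h, c)
```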

**Closed issues:**
- cell output is not clearly distinguishable from the state (#2548)
- Flux.cpu and Flux.gpu no longer move data on views (#2553)
- remove usage example of old optimiser (#2558)
- Optimizing over `AbstractMatrix` subtypes (#2559)
- introduce a FlattenLayer (#2561)
- [enzyme] broken MeanPool gradient (#2564)
- [enzyme] broken BatchNorm gradient (#2566)
- [enzyme] broken recurrent cell loss (#2568)

v0.16.0

[Diff since v0.15.2](v0.15.2...v0.16.0)

**Merged pull requests:**
- Recurrence layer (#2549) (@CarloLucibello)
- Add `WeightNorm` reparametrization (#2550) (@pxl-th)
- Change cells' return to `out, state` (#2551) (@CarloLucibello) (sketched below)
- fix: `gpu_device` not defined in `Flux.DistributedUtils` (#2552) (@AntonOresten)
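
The breaking change sketched below is #2551: cells now return the output separately from the carried state. Shapes and the `initialstates` helper (added in v0.15.1) are as I understand the new cell interface:

```julia
using Flux

cell = LSTMCell(3 => 5)
state = Flux.initialstates(cell)   # zero (h, c) state matching the cell
x = rand(Float32, 3, 16)           # one time step for a batch of 16

out, state = cell(x, state)        # since #2551 the output is distinct from the state
```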

**Closed issues:**
- Feature request: Weight normalization (#942)
- recurrent dropout  (#1040)
- Stacked RNN in Flux.jl?  (#2452)

v0.15.2

[Diff since v0.15.1](v0.15.1...v0.15.2)

**Merged pull requests:**
- hotfix LSTM output (#2547) (@CarloLucibello)

v0.15.1

[Diff since v0.15.0](v0.15.0...v0.15.1)

**Merged pull requests:**
- Re-write "basics" page of docs (#2535) (@mcabbott)
- Adding initialstates function to RNNs (#2541) (@MartinuzziFrancesco)
- Update NEWS.md highlighting breaking changes (#2542) (@CarloLucibello)
- relax identity test for devices (#2544) (@CarloLucibello)
- fix `Flux.@functor` (#2546) (@CarloLucibello)

**Closed issues:**
- `Flux.@functor` is broken on 0.15 (#2545)

v0.15.0

[Diff since v0.14.25](v0.14.25...v0.15.0)

**Merged pull requests:**
- Use `NNlib.bias_act!` (#2327) (@mcabbott)
- Allow `Parallel(+, f)(x, y, z)` to work like broadcasting, and enable `Chain(identity, Parallel(+, f))(x, y, z)` (#2393) (@mcabbott)
- Epsilon change in normalise for stability (#2421) (@billera)
- Add more `Duplicated` methods for Enzyme.jl support  (#2471) (@mcabbott)
- Export Optimisers and remove params and Optimise from tests (#2495) (@CarloLucibello)
- RNNs redesign (#2500) (@CarloLucibello)
- Adjust docs & `Flux.@functor` for Functors.jl v0.5, plus misc. depwarns (#2509) (@mcabbott)
- GPU docs (#2510) (@mcabbott)
- CompatHelper: bump compat for Optimisers to 0.4, (keep existing compat) (#2520) (@github-actions[bot])
- Distinct init for kernel and recurrent (#2522) (@MartinuzziFrancesco)
- Functors v0.5 + tighter version bounds (#2525) (@CarloLucibello)
- deprecation of params and Optimise (continued) (#2526) (@CarloLucibello) (see the training-loop sketch after this list)
- Bump codecov/codecov-action from 4 to 5 (#2527) (@dependabot[bot])
- updates for Functors v0.5 (#2528) (@CarloLucibello)
- fix comment (#2529) (@oscardssmith)
- set expand option as default for `@layer` (#2532) (@CarloLucibello)
- misc stuff for v0.15 release (#2534) (@CarloLucibello)
- Tweak quickstart.md (#2536) (@mcabbott)
- Remove usage of global variables in linear and logistic regression tutorial training functions (#2537) (@christiangnrd)
- Fix linear regression example (#2538) (@christiangnrd)
- Update gpu.md (#2539) (@AdamWysokinski)
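
Several items above (notably #2495 and #2526, flagged in the list) retire implicit `Flux.params` and the old `Flux.Optimise` module in favour of explicit parameters with Optimisers.jl. A minimal sketch of the explicit-style update step, with a made-up model and batch:

```julia
using Flux

model = Chain(Dense(2 => 8, relu), Dense(8 => 1))
opt_state = Flux.setup(Adam(1e-3), model)           # replaces Flux.params + Flux.Optimise

x, y = rand(Float32, 2, 32), rand(Float32, 1, 32)   # made-up batch

loss, grads = Flux.withgradient(m -> Flux.mse(m(x), y), model)
Flux.update!(opt_state, model, grads[1])            # explicit gradients, no implicit params
```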

**Closed issues:**
- RNN layer to skip certain time steps (like `Masking` layer in keras) (#644)
- Backprop through time (#648)
- Initial state in RNNs should not be learnable by default (#807)
- Bad recurrent layers training performance  (#980)
- flip function assumes the input sequence is a Vector or List; it can be a Matrix as well. (#1042)
- Regression in package load time (#1155)
- Recurrent layers can't use Zeros() as bias (#1279)
- Flux.destructure doesn't preserve RNN state (#1329)
- RNN design for efficient CUDNN usage (#1365)
- Strange result with gradient (#1547)
- Call of Flux.stack results in StackOverflowError for approx. 6000 sequence elements of a model output of an LSTM (#1585)
- Gradient dimension mismatch error when training rnns (#1891)
- Deprecate Flux.Optimisers and implicit parameters in favour of Optimisers.jl and explicit parameters (#1986)
- Pull request #2007 causes Flux.params() calls to not get cached (#2040)
- gradient of `Flux.normalise` return NaN when `std` is zero (#2096)
- explicit differentiation for RNN gives wrong results (#2185)
- Make RNNs blocked (and maybe fixing gradients along the way) (#2258)
- Should everything be a functor by default? (#2269)
- Flux new explicit API does not work but old implicit API works for a simple RNN (#2341)
- Adding Simple Recurrent Unit as a recurrent layer (#2408)
- deprecate Flux.params (#2413)
- Implementation of `AdamW` differs from PyTorch (#2433)
- `gpu` should warn if cuDNN is not installed (#2440)
- device movement behavior inconsistent (#2513)
- mark as public any non-exported but documented interface (#2518)
- broken image in the quickstart (#2530)
- Consider making the `:expand` option the default in `@layer` (#2531) (sketched below)
- `Flux.params` is broken (#2533)
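
On the `:expand` display default discussed in #2531/#2532 (noted in the list above): a sketch of a custom container layer whose inner layers now print expanded, like a `Chain`, without passing `:expand` to `@layer`. The `SkipBlock` name is made up for illustration:

```julia
using Flux

struct SkipBlock{L}
    layers::L
end
(m::SkipBlock)(x) = m.layers(x) .+ x    # simple residual connection

Flux.@layer SkipBlock                   # `:expand` is now the default display behaviour

block = SkipBlock(Chain(Dense(4 => 4, relu), Dense(4 => 4)))
block                                   # at the REPL, the inner Dense layers print expanded
```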

v0.14.25

[Diff since v0.14.24](v0.14.24...v0.14.25)

**Merged pull requests:**
- reintroduce FluxCUDAAdaptor etc. to smooth out the transition (#2512) (@CarloLucibello)

v0.14.24

[Diff since v0.14.23](v0.14.23...v0.14.24)

**Merged pull requests:**
- deprecate properly GPU_BACKEND (#2511) (@CarloLucibello)

v0.14.23

[Diff since v0.14.22](v0.14.22...v0.14.23)

**Merged pull requests:**
- Support for lecun normal weight initialization (#2311) (@RohitRathore1)
- Some small printing upgrades (#2344) (@mcabbott)
- simplify test machinery (#2498) (@CarloLucibello)
- Correct dead link for "quickstart page" in README.md (#2499) (@zengmao)
- make `gpu(x) = gpu_device()(x)` (#2502) (@CarloLucibello) (see the sketch after this list)
- some cleanup (#2503) (@CarloLucibello)
- unbreak some data movement cuda tests (#2504) (@CarloLucibello)
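
Two of the changes above in one sketch: the lecun normal initializer from #2311 and `gpu(x)` delegating to `gpu_device()(x)` from #2502 (flagged in the list). The exact exported names are my assumption from the PR titles:

```julia
using Flux

# Assumed initializer name from #2311.
layer = Dense(3 => 4; init = Flux.lecun_normal)

# After #2502, `gpu` is a thin wrapper around the device returned by `gpu_device()`.
device = Flux.gpu_device()
layer = layer |> device    # falls back to the CPU when no GPU backend is loaded
```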

**Closed issues:**
- Add support for lecun normal weight initialization (#2290)
- `using Flux, cuDNN` freezes, but `using Flux, CUDA, cuDNN` works (#2346)
- Problem with RNN and CUDA. (#2352)
- since new version: Flux throws error for train! / update! even on quick start problem (#2358)
- Cannot take `gradient` of L2 regularization loss (#2441)
- Potential bug of RNN training flow (#2455)
- Problem with documentation (#2485)
- Flux has no Lecun Normalization weight init function? (#2491)
- Zygote fails to differentiate through Flux.params on Julia v1.11 (#2497)
- ERROR: UndefVarError: `ADAM` not defined in `Main` in flux (#2507)