Releases: FluxML/Flux.jl

v0.16.9

01 Feb 21:52
ce4b8a0

Flux v0.16.9

Diff since v0.16.8

Merged pull requests:

v0.16.8

23 Jan 18:28
d15c7dc

Flux v0.16.8

Diff since v0.16.7

Merged pull requests:

Closed issues:

  • Local Response Normalisation (#192)
  • generic_matmul! hit in back! because type-promotion in activation function (#613)
  • Layer Transpose (#635)
  • add trainstep! (#666)
  • Hypernetwork API (#797)
  • Optimizer handling of infinite loss (#821)
  • Flux Optimizers should define equality (#823)
  • Model optimization fails (NaNs) with Zygote.pullback but works with Tracker.forward (#876)
  • more issue labels? (#879)
  • Diagonal does not return same size due to broadcast (#890)
  • Integrate epochs within Flux.train! (#1058)
  • Simplest linear model on housing data w/ Flux (#1122)
  • OneHotVector(i, n) when i > n (#1300)
  • Support DirectML (#1347)
  • Feature request: Modifying Dense Layer to accommodate kernel/bias constraints and kernel/bias regularisation (#1389)
  • Flux.softmax returns wrong result with CuArray (#1425)
  • Rethink train design and better callbacks support (#1461)
  • tied weights (by transposition) are not tied when sent to gpu (#1504)
  • Regularization example from docs can't be differentiated (#1588)
  • Tied weights using Flux layers (#1592)
  • Import Flux on worker crashes (#1625)
  • Unclear wording in "Composing Optimizers" section of docs (#1627)
  • Recurrent network interface updates/design (#1678)
  • Please do not deprecate Dense(...;initb=...) (#1684)
  • Triage Meetings (#1709)
  • Differentiating a Model While Resetting the Parameters at every Epoch (#1766)
  • BatchNorm on GPU without affine or tracking statistics (#1810)
  • Float32 parameters in structs unsupported? (#1817)
  • using Flux -> InitError: IOError: mkdir("/pbs/software/centos-7-x86_64/julia/1.7.0/share/julia/packages/Flux/BPPNj/src/data/../../deps"; mode=0o777): read-only file system (EROFS) (#1839)
  • Easy to make mistake with gpu() (#1887)
  • Inline printing for OneHotArray is not GPU-friendly (#1905)
  • Freezing layers at model construction time (#1931)
  • Unable to precompile -- "allequal not defined" (#1934)
  • Weird Side Effects of loadparams! (#1979)
  • Taking serialization seriously (#1988)
  • Issue with logitcrossentropy on onehotencoded input on GPU (#2002)
  • Add default inner constructor to Dense (#2158)
  • docs on freezing layers should be ported to the explicit syntax (#2216)
  • Loading Flux 0.13.15 for the first time results in error (#2232)
  • Default for init_score in early_stopping (#2639)
  • Does not compile anymore in conjunction with CUDA due to dependency on MLDataDevices (#2647)

v0.16.7

09 Dec 06:51
64c3979

Flux v0.16.7

Diff since v0.16.6

Merged pull requests:

Closed issues:

  • Docker images for Floydhub and similar (#148)
  • Implement einsum function/macro à la PyTorch and TF (#297)
  • Flux and Images (#326)
  • "Tracing" memory pre-allocator (#349)
  • make Juno dependency conditional (#454)
  • Encoding array dimensions in flux type system? (#614)
  • Gradient Interface Design (#628)
  • New New Optimisers (#637)
  • Clipping (#672)
  • CUDA Programming Model (#706)
  • LBFGS Optimizer (#719)
  • Flux plots (#729)
  • "ADAM" and friends should be called "Adam" (#795)
  • Add lookahead optimizer (#838)
  • ADAM does not accept keyword arguments (#871)
  • Compatibility with Tracker (#883)
  • Numerical issues for (logit)binarycrossentropy (#914)
  • Change abstract argument names to meaningful ASCII (#915)
  • Roadmap to Flux 1.0 (#961)
  • Zygote gives extra gradient entries for BatchNorm (#1018)
  • Helper methods for extracting RNN final state in a GPU compatible way (#1043)
  • helper function for selecting a gpu in multi-gpu setting (#1074)
  • Provide iper-simple examples directly in readme.md (#1115)
  • gpu function does nothing, but only on first run (#1119)
  • Behavior of chunk (#1120)
  • ArrayFire (#1126)
  • MethodError: no method matching zero(::Type{Array{Float32,2}}) In Flux Loss function (#1134)
  • Parameter collection and GPU movement fail on models defined via functions (#1201)
  • Derivative in loss function error (#1464)
  • Document OneHotArray (#1519)
  • Second order derivative (#1582)
  • Conv is not working for Complex when using CUDA (#1655)
  • Flux installation errors in julia 1.7.0-rc1, WSL2 (#1757)
  • Two-arg update!(x, d) is never used (#1860)
  • cpu() type stability (#1878)

v0.16.6

08 Dec 20:55
19534df

Flux v0.16.6

Diff since v0.16.5

Merged pull requests:

Closed issues:

  • Enzyme gradient example broken (#2554)
  • [enzyme] broken Bilinear gradient (#2565)
  • [enzyme] broken MultiHeadAttention gradient (#2567)
  • Regression: cpu function is incompatible with DataFrames since Functors became opt-in (#2617)
  • Flux on GPU is incompatible with NVIDIA driver version 13 (#2618)
  • Views built within MLUtils.kfolds scalar indexing error on gpu (#2620)
  • Simple single layer line fit does not converge for most sets of sample points (#2623)
  • Julia 1.12: Deadlock detected in loading Flux ext (#2625)
  • Enzyme Documentation example does not work on CPU (#2627)
  • Minor documentation issue ("Gradients and Layers") (#2628)
  • Deadlock when loading FluxCUDAcuDNNExt on Julia 1.12.2 (works on 1.11.7) (#2631)
  • [Metal] error in forward pass with tanh activation (#2633)

v0.16.5

23 Jul 18:57
461a1b6

Flux v0.16.5

Diff since v0.16.4

Merged pull requests:

  • Fix Typos in Old Tutorials Documentation (#2610) (@leopardracer)
  • CompatHelper: bump compat for AMDGPU in [weakdeps] to 2, (keep existing compat) (#2613) (@github-actions[bot])
  • Bump to 0.16.5 (#2614) (@pxl-th)

Closed issues:

  • unsafe_free! from MLDataDevices fails for CuArray{CartesianIndex{4}, 1, CUDA.DeviceMemory}) (#2612)

v0.16.4

02 Jun 13:23
676c816

Flux v0.16.4

Diff since v0.16.3

Merged pull requests:

Closed issues:

  • Reduce hcat creates dense matrix (#1596)
  • Update GSoC 2025 Idea List (#2586)
  • Type piracy breaks (dev::AbstractDevice)(d::DataLoader) (#2592)
  • Dropout erroring on CUDA, when using cu but not gpu (#2594)

v0.16.3

06 Feb 21:12
9147e84

Flux v0.16.3

Diff since v0.16.2

Merged pull requests:

Closed issues:

  • Data loading & preprocessing pipeline feature (#1282)
  • Infinite time of gradient (#2585)

v0.16.2

21 Jan 16:00
009d984

Flux v0.16.2

Diff since v0.16.1

Merged pull requests:

Closed issues:

  • New Gradients ruin everything (#2580)
  • Failure to precompile on 1.12: cannot declare Flux.destructure public; it is already declared exported (#2583)

v0.16.1

13 Jan 17:33
44695a0

Flux v0.16.1

Diff since v0.16.0

Merged pull requests:

Closed issues:

  • cell output is not clearly distinguishable from the state (#2548)
  • Flux.cpu and Flux.gpu no longer move data on views (#2553)
  • remove usage example of old optimiser (#2558)
  • Optimizing over AbstractMatrix subtypes (#2559)
  • introduce a FlattenLayer (#2561)
  • [enzyme] broken MeanPool gradient (#2564)
  • [enzyme] broken BatchNorm gradient (#2566)
  • [enyzme] broken recurrent cell loss (#2568)

v0.16.0

15 Dec 15:34
bcbecab

Flux v0.16.0

Highlights

This release has a single breaking change:

  • The forward pass of the recurrent cells RNNCell, LSTMCell, and GRUCell has been changed to
    $y_t, state_t = cell(x_t, state_{t-1})$ (#2551). Previously, a cell call returned only the new state: $state_t = cell(x_t, state_{t-1})$.

Other highlights include:

  • Added the WeightNorm normalization layer.
  • Added the Recurrence layer, which wraps a recurrent cell into a layer that processes an entire sequence at once.
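The breaking change above can be sketched as follows. This is a minimal, hypothetical example of the new cell interface, assuming an RNNCell with a Float32 vector input and a manually constructed zero initial state; exact constructor and state conventions should be checked against the Flux 0.16 documentation.

```julia
using Flux

# An RNN cell mapping 3 input features to a 5-dimensional hidden state.
cell = RNNCell(3 => 5)

x = rand(Float32, 3)      # input at one time step
h = zeros(Float32, 5)     # initial hidden state (assumed zero here)

# New in v0.16: the cell returns both the output and the new state.
y, h = cell(x, h)

# Before v0.16, the call returned only the new state:
#   h = cell(x, h)
```

For plain RNN cells the output and the new hidden state coincide, but returning them separately gives LSTM-style cells (whose state carries more than the output) a uniform interface.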

Diff since v0.15.2

Merged pull requests:

Closed issues:

  • Feature request: Weight normalization (#942)
  • recurrent dropout (#1040)
  • Stacked RNN in Flux.jl? (#2452)