
Deprecate Flux.Optimisers and implicit parameters in favour of Optimisers.jl and explicit parameters #1986

Closed
@CarloLucibello

Description


All the pieces are now in place, so we can move away from the current pattern using implicit params:

using Flux
ps = Flux.params(model)
opt = Flux.Optimise.ADAM()
gs = gradient(() -> loss(model(x), y), ps)
Flux.Optimise.update!(opt, ps, gs)

to the one using explicit parameters and Optimisers.jl:

using Flux, Optimisers
opt_state = Optimisers.setup(Optimisers.Adam(), model)
∇model = gradient(m -> loss(m(x), y), model)[1]
opt_state, model = Optimisers.update!(opt_state, model, ∇model)
# or the non-mutating version:
# opt_state, model = Optimisers.update(opt_state, model, ∇model)
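For reference, a minimal sketch of a full training loop in the explicit style. The model, learning rate, loss, and the `data` iterator of (x, y) batches are illustrative assumptions, not taken from the issue:

using Flux, Optimisers

model = Chain(Dense(2 => 16, relu), Dense(16 => 1))
opt_state = Optimisers.setup(Optimisers.Adam(1f-3), model)

for (x, y) in data  # `data` is an assumed iterator of (input, target) batches
    # differentiate with respect to the model itself, not an implicit Params set
    ∇model = gradient(m -> Flux.mse(m(x), y), model)[1]
    opt_state, model = Optimisers.update!(opt_state, model, ∇model)
end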

Code

One known surprise: calling Flux.params inside the loss while differentiating explicitly currently returns no gradient at all:

julia> using Flux, LinearAlgebra

julia> gradient(m -> sum(norm, Flux.params(m)), (x=[1,2.0], y=[3.0]))
(nothing,)
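A sketch of a workaround, assuming the same toy NamedTuple model: write the penalty over the fields explicitly instead of collecting them with Flux.params, so the explicit gradient can flow:

using Flux
using LinearAlgebra: norm

m = (x = [1, 2.0], y = [3.0])
# summing over explicit fields keeps the computation differentiable w.r.t. `m`
∇m = gradient(m -> sum(norm, (m.x, m.y)), m)[1]
# ∇m is now a NamedTuple of gradients rather than `nothing`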

Documentation

Examples

  • Port model zoo examples -- tag "update"
  • Help with porting downstream libraries and check that there are no surprises
    • GraphNeuralNetworks.jl

@mcabbott @ToucheSir @darsnack feel free to add to this
