FiniteDiff.GradientCache - Method
GradientCache(c1, c2, c3, fx, fdtype = Val(:central), returntype = eltype(fx), inplace = Val(false))

Construct a non-allocating gradient cache.

Arguments

  • c1, c2, c3: (Non-aliased) caches for the input vector.
  • fx: Cached function value, i.e. the result of calling the function at x.
  • fdtype = Val(:central): Method for computing the finite difference.
  • returntype = eltype(fx): Element type for the returned function value.
  • inplace = Val(false): Whether the function is evaluated in-place or out-of-place.

Output

The output is a GradientCache struct.

julia> using FiniteDiff: GradientCache, finite_difference_gradient!

julia> x = [1.0, 3.0]
2-element Vector{Float64}:
 1.0
 3.0

julia> _f = x -> x[1] + x[2]
#13 (generic function with 1 method)

julia> fx = _f(x)
4.0

julia> gradcache = GradientCache(copy(x), copy(x), copy(x), fx)
GradientCache{Float64, Vector{Float64}, Vector{Float64}, Vector{Float64}, Val{:central}(), Float64, Val{false}()}(4.0, [1.0, 3.0], [1.0, 3.0], [1.0, 3.0])
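Once the cache is built, it can be reused across gradient evaluations without further allocation. As a minimal sketch continuing the example above (the buffer name df is introduced here only for illustration), the gradient is written into a preallocated vector with finite_difference_gradient!:

julia> df = similar(x);  # preallocated output buffer for the gradient

julia> finite_difference_gradient!(df, _f, x, gradcache);  # fills df using central differences and the cached buffers

For this linear function, df comes out approximately equal to [1.0, 1.0].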