API Manual

Compiling Tools (Reexported from NiLangCore)

NiLangCore.@assignMacro
@assign a b [invcheck]

Perform the assignment a = b in a reversible program. The invertibility check is turned off if invcheck is false.

NiLangCore.@assignbackMacro
@assignback f(args...) [invcheck]

Assign the input variables with the output values, i.e. args... = f(args...). The invertibility check is turned off if the second argument is false.

NiLangCore.@code_juliaMacro
@code_julia ex

Get the interpreted expression of ex.

julia> @code_julia x += exp(3.0)
quote
    var"##results#267" = ((PlusEq)(exp))(x, 3.0)
    x = var"##results#267"[1]
    try
        (NiLangCore.deanc)(3.0, var"##results#267"[2])
    catch e
        @warn "deallocate fail: `3.0 → var"##results#267"[2]`"
        throw(e)
    end
end

julia> @code_julia @invcheckoff x += exp(3.0)
quote
    var"##results#257" = ((PlusEq)(exp))(x, 3.0)
    x = var"##results#257"[1]
end
NiLangCore.@code_preprocessMacro
@code_preprocess ex

Preprocess ex and return the symmetric reversible IR.

julia> NiLangCore.rmlines(@code_preprocess if (x < 3, ~) x += exp(3.0) end)
:(if (x < 3, x < 3)
      x += exp(3.0)
  end)
NiLangCore.@code_reverseMacro
@code_reverse ex

Get the reversed expression of ex.

julia> @code_reverse x += exp(3.0)
:(x -= exp(3.0))
NiLangCore.@dualMacro
@dual f invf

Define f and invf as a pair of dual instructions, i.e. reverse to each other.
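
The following is a minimal sketch (not from the library): ADD2 and SUB2 are hypothetical instructions following the convention, used elsewhere in this manual, that instructions return their updated arguments.

julia> ADD2(a, b) = (a + b, b);

julia> SUB2(a, b) = (a - b, b);

julia> @dual ADD2 SUB2;

julia> (~ADD2)(3, 2)
(1, 2)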

NiLangCore.@fieldviewMacro
@fieldview fname(x::TYPE) = x.fieldname
@fieldview fname(x::TYPE) = x[i]
...

Create a field view function fname that can be accessed by a reversible program.

julia> struct GVar{T, GT}
           x::T
           g::GT
       end

julia> @fieldview xx(x::GVar) = x.x

julia> chfield(GVar(1.0, 0.0), xx, 2.0)
GVar{Float64, Float64}(2.0, 0.0)
NiLangCore.@iMacro
@i function fname(args..., kwargs...) ... end
@i struct sname ... end

Define a reversible function/type.

julia> @i function test(out!, x)
           out! += identity(x)
       end

julia> test(0.2, 0.8)
(1.0, 0.8)

See test/compiler.jl for more examples.

NiLangCore.@invcheckMacro
@invcheck x val

The macro version of NiLangCore.deanc, with a more informative error message.
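
A minimal sketch (assuming the default NiLangCore.GLOBAL_ATOL[]); the check passes silently when the values agree and throws otherwise.

julia> x = 1.0 - 1e-12;

julia> @invcheck x 1.0;   # passes, x ≈ 1.0 within the tolerance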

NiLangCore.@withMacro

For example, @with x.y = val returns a new object similar to x, but with the y field changed to val.

NiLangCore._zeroMethod
_zero(T)
_zero(x::T)

Create a zero of type T by recursively applying zero to its fields.
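
For example (a sketch; for plain number types this simply gives zero of that type):

julia> NiLangCore._zero(Float64)
0.0

julia> NiLangCore._zero(ComplexF64)
0.0 + 0.0im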

NiLangCore.almost_sameMethod
almost_same(a, b; atol=GLOBAL_ATOL[], kwargs...) -> Bool

Return true if a and b are approximately equal with respect to atol.
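
For example, with the default NiLangCore.GLOBAL_ATOL[] (a sketch):

julia> NiLangCore.almost_same(1.0, 1.0 + 1e-10)
true

julia> NiLangCore.almost_same(1.0, 1.1)
false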

NiLangCore.assign_varsMethod
assign_vars(args, symres, invcheck)

Get the expression of assigning symres to args.

NiLangCore.check_invMethod
check_inv(f, args; atol::Real=1e-8, verbose::Bool=false, kwargs...)

Return true if f(args..., kwargs...) is reversible.
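
A minimal sketch, using a hypothetical reversible function f1 in the style of the @i example above:

julia> @i function f1(out!, x)
           out! += identity(x)
       end

julia> check_inv(f1, (0.2, 0.8))
true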

NiLangCore.chfieldFunction
chfield(x, field, val)

Change a field of an object x.

The field can be a Val type

julia> chfield(1+2im, Val(:im), 5)
1 + 5im

or a function

julia> using NiLangCore

julia> struct GVar{T, GT}
           x::T
           g::GT
       end

julia> @fieldview xx(x::GVar) = x.x

julia> chfield(GVar(1.0, 0.0), xx, 2.0)
GVar{Float64, Float64}(2.0, 0.0)
NiLangCore.compile_exMethod
compile_ex(m::Module, ex, info)

Compile a NiLang statement into a regular Julia statement.

NiLangCore.deancFunction
deanc(a, b)

Deallocate variable a with value b. It will throw an error if

  • a and b are objects with different types, or
  • a is not equal to b (for floating point numbers, an error within NiLangCore.GLOBAL_ATOL[] is allowed).
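
The calls below succeed (a minimal sketch; a mismatch such as deanc(0.0, 1.0) would throw instead):

julia> NiLangCore.deanc(0.0, 0.0);           # identical values

julia> NiLangCore.deanc(1.0, 1.0 + 1e-12);   # within NiLangCore.GLOBAL_ATOL[]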
NiLangCore.get_argnameMethod
get_argname(ex)

Return the argument name of a function argument expression, e.g. x::Float64 = 4 gives x.
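
For instance (a sketch restating the example above):

julia> NiLangCore.get_argname(:(x::Float64 = 4))
:x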

NiLangCore.get_ftypeMethod
get_ftype(fname)

Return the function type, e.g.

  • obj::ABC => ABC
  • f => typeof(f)
NiLangCore.isprimitiveMethod
isprimitive(f)

Return true if f is an instruction that cannot be decomposed anymore.

NiLangCore.match_functionMethod
match_function(ex)

Analyze a function expression and return a tuple of (macros, function name, arguments, type parameters (in where {...}), statements in the body).

NiLangCore.nilang_irMethod
nilang_ir(m::Module, ex; reversed::Bool=false)

Get the NiLang reversible IR from the function expression ex; the reversed function is returned if reversed is true.

This IR is not directly executable in Julia; use macroexpand(Main, :(@i function .... end)) to get the Julia expression of a reversible function.

julia> ex = :(@inline function f(x!::T, y) where T
                @routine begin
                    anc ← zero(T)
                    anc += identity(x!)
                end
                x! += y * anc
                ~@routine
           end);

julia> NiLangCore.nilang_ir(Main, ex) |> NiLangCore.rmlines
:(@inline function f(x!::T, y) where T
          begin
              anc ← zero(T)
              anc += identity(x!)
          end
          x! += y * anc
          begin
              anc -= identity(x!)
              anc → zero(T)
          end
      end)

julia> NiLangCore.nilang_ir(Main, ex; reversed=true) |> NiLangCore.rmlines
:(@inline function (~f)(x!::T, y) where T
          begin
              anc ← zero(T)
              anc += identity(x!)
          end
          x! -= y * anc
          begin
              anc -= identity(x!)
              anc → zero(T)
          end
      end)
NiLangCore.precomMethod
precom(module, ex)

Precompile a function definition; returns a tuple of (macros, function name, arguments, type parameters, function body).

NiLangCore.precom_exMethod
precom_ex(module, ex, info)

Precompile a single statement ex, where info is a PreInfo instance.

NiLangCore.protectfMethod
protectf(f)

Protect a function from being inverted; useful when working with a callable object.

NiLangCore.rmlinesMethod
rmlines(ex::Expr)

Remove line number nodes for pretty printing.

NiLangCore.unzipped_broadcastMethod
unzipped_broadcast(f, args...)

Unzipped broadcast for arrays and tuples. For example, SWAP.([1,2,3], [4,5,6]) performs an in-place element-wise swap and returns [4,5,6], [1,2,3].

NiLangCore.DivEqType
DivEq{FT} <: Function
DivEq(f)

Called when executing the out /= f(args...) instruction. See PlusEq for details.

NiLangCore.InvType
Inv{FT} <: Function
Inv(f)

The inverse of a function.

NiLangCore.MinusEqType
MinusEq{FT} <: Function
MinusEq(f)

Called when executing the out -= f(args...) instruction. See PlusEq for details.

NiLangCore.MulEqType
MulEq{FT} <: Function
MulEq(f)

Called when executing the out *= f(args...) instruction. See PlusEq for details.

NiLangCore.PlusEqType
PlusEq{FT} <: Function
PlusEq(f)

Called when executing the out += f(args...) instruction. The following two statements are equivalent:

julia> x, y, z = 0.0, 2.0, 3.0
(0.0, 2.0, 3.0)

julia> x, y, z = PlusEq(*)(x, y, z)
(6.0, 2.0, 3.0)

julia> x, y, z = 0.0, 2.0, 3.0
(0.0, 2.0, 3.0)

julia> @instr x += y*z


julia> x, y, z
(6.0, 2.0, 3.0)
NiLangCore.XorEqType
XorEq{FT} <: Function
XorEq(f)

Called when executing the out ⊻= f(args...) instruction. See PlusEq for details.
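
As a quick sketch of the family, MinusEq(*) undoes the PlusEq(*) call shown above, and XorEq applies ⊻ to the output slot:

julia> MinusEq(*)(6.0, 2.0, 3.0)
(0.0, 2.0, 3.0)

julia> XorEq(|)(false, true, false)
(true, true, false)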

Instructions

NiLang.@zerosMacro

Create zero-valued variables of a specific type.

julia> @i function f(x)
           @zeros Float64 a b c
           # do something
       end
source
NiLang.HADAMARDMethod
HADAMARD(x::Real, y::Real)

Hadamard transformation that returns ((x + y)/√2, (x - y)/√2).
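
A minimal sketch (the second output is exactly zero when the two inputs coincide):

julia> x, y = HADAMARD(1.0, 1.0);   # ((1 + 1)/√2, (1 - 1)/√2)

julia> y
0.0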

source
NiLang.ROTMethod
ROT(a!, b!, θ) -> a!', b!', θ

\[\begin{align} {\rm ROT}(a!, b!, \theta) = \begin{bmatrix} \cos(\theta) & - \sin(\theta)\\ \sin(\theta) & \cos(\theta) \end{bmatrix} \begin{bmatrix} a!\\ b! \end{bmatrix}, \end{align}\]

source
NiLang.allocFunction
alloc(f, args...)

Allocate the function output space (the first argument); args contains only the last N-1 arguments.

source
NiLang.bennett!Method
bennett!(step, state::Dict, args...; k, N, logger=BennettLog(), do_uncomputing=false, kwargs...)
  • step is a reversible step function,
  • state is the state dictionary, with state[1] the input state; the return value is stored in state[N+1],
  • k is the number of steps in each Bennett's recursion,
  • N is the total number of steps,
  • logger=BennettLog() is the logger for Bennett's algorithm,
  • args... and kwargs... are additional arguments passed to step.
source
NiLang.bennettMethod
bennett(step, y, x, args...; k, N, logger=BennettLog(), kwargs...)
  • step is a reversible step function,
  • y is the output state,
  • x is the input state,
  • k is the number of steps in each Bennett's recursion,
  • N is the total number of steps,
  • logger=BennettLog() is the logger for Bennett's algorithm,
  • args... and kwargs... are additional arguments passed to step.
source
NiLang.i_ascending!Method
i_ascending!(xs!, inds!, arr)

Find the ascending sequence in arr, storing the values into xs! and the corresponding indices into inds!. This function can be used to get the maximum value and its index.

source
NiLang.i_cor_covMethod
i_cor_cov(rho!, cov!, a, b)

Get the Pearson correlation and covariance of two vectors a and b.

source
NiLang.i_dirtymulMethod
i_dirtymul(out!, x, anc!)

"dirty" reversible multiplication that computes out! *= x approximately for floating point numbers, the anc! is anticipated as a number ~0.

source
NiLang.i_filter!Method
i_filter!(f, out!, iter)

Reversible filter function; out! is an emptied vector.

source
NiLang.i_inv!Method
i_inv!(out!, A)

Get the inverse of A.

This function is implemented as a primitive.
source
NiLang.i_logsumexpMethod
i_logsumexp(logout!, out!, xs!, inds!, x)

Compute logout! = log(sum(exp.(x))).

Arguments

  • out!, the output,
  • logout!, the logged output,
  • xs!, an empty vector to cache the ascending values (same type as x),
  • inds!, an empty vector to cache the ascending indices (integer type),
  • x, the input vector.
source
NiLang.i_mapfoldlMethod
i_mapfoldl(map, fold, out!, iter)

Reversible mapfoldl function; map can be irreversible, but fold should be reversible.

source
NiLang.i_mul!Method
i_mul!(out!, x, y)

Compute x * y (x and y are matrices) and store the result in out!.

source
NiLang.i_umm!Method
i_umm!(x!, θ)

Compute unitary matrix multiplication on x, where the unitary matrix is parameterized by (N+1)*N/2 θs.

source
NiLang.i_var_mean_sumMethod
i_var_mean_sum(varinfo, sqv)
i_var_mean_sum(var!, varsum!, mean!, sum!, v)

Compute the variance, the accumulated variance, the mean and the sum. varinfo is the VarianceInfo object that stores the outputs.

source
NiLang.unwrapMethod
unwrap(x)

Unwrap a wrapper instance (recursively) to get the content value.

source
NiLang.IWrapperType
IWrapper{T} <: Real

IWrapper{T} is a wrapper for data of type T. It forwards the >, <, >=, <=, and ≈ operations.

source
NiLang.NoGradType
NoGrad{T} <: IWrapper{T}
NoGrad(x)

A NoGrad(x) is equivalent to GVar^{-1}(x), which cancels the GVar wrapper.

source
NiLang.PartialType

Partial{FIELD, T, T2} <: IWrapper{T2}

Take a field FIELD without dropping information. This operation can be undone by calling ~Partial{FIELD}.

source

Automatic Differentiation

NiLang.AD.check_gradMethod
check_grad(f, args; atol::Real=1e-8, verbose::Bool=false, iloss::Int, kwargs...)

Return true if the gradient of f(args..., kwargs...) is reversible.
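
A minimal sketch, assuming NiLang and its AD submodule are loaded; loss is a hypothetical reversible function whose first argument accumulates the loss value:

julia> @i function loss(out!, x)
           out! += x^2
       end

julia> NiLang.AD.check_grad(loss, (0.0, 2.0); iloss=1);   # returns true when the AD and numerical gradients agree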

source
NiLang.AD.hessian_backbackMethod
hessian_backback(f, args; iloss::Int, kwargs...)

Obtain the Hessian matrix of f(args..., kwargs...) by back-propagating the adjoint program.

source
NiLang.AD.jacobianMethod
jacobian(f, args...; iin::Int, iout::Int=iin, kwargs...)

Get the Jacobian matrix for the function f(args..., kwargs...) using vectorized variables in the gradient field. One can use the keyword arguments iin and iout to specify the input and output tensors.

source
NiLang.AD.jacobian_repeatMethod
jacobian_repeat(f, args...; iin::Int, iout::Int=iin, kwargs...)

Get the Jacobian matrix for the function f(args..., kwargs...) by repeatedly computing the gradient for each output. One can use the keyword arguments iin and iout to specify the input and output tensors.

source
NiLang.AD.GVarType
GVar{T,GT} <: IWrapper{T}
GVar(x)

Add gradient information to variable x, where x can be a real number or a general structure. If x is a non-integer real number, it is wrapped with a gradient field; otherwise GVar propagates into the structure and wraps its elements. Running a program backward updates the gradient fields of the GVars. The following is a toy use case.

Example

julia> using NiLang.AD: GVar, grad

julia> struct A{T}
           x::T
       end

julia> GVar(A(2.0+3im), A(3.0+3im))
A{Complex{GVar{Float64, Float64}}}(GVar(2.0, 3.0) + GVar(3.0, 3.0)*im)

julia> @i function f(a::A, b::A)
           a.x += log(b.x)
       end

julia> outputs = f(A(2.0+3im), A(2.0-1im))  # forward pass
(A{ComplexF64}(2.8047189562170503 + 2.536352390999194im), A{ComplexF64}(2.0 - 1.0im))

julia> outputs_with_gradients = (GVar(outputs[1], A(3.0+3im)), GVar(outputs[2]))  # wrap `GVar`
(A{Complex{GVar{Float64, Float64}}}(GVar(2.8047189562170503, 3.0) + GVar(2.536352390999194, 3.0)*im), A{Complex{GVar{Float64, Float64}}}(GVar(2.0, 0.0) - GVar(1.0, -0.0)*im))

julia> inputs_with_gradients = (~f)(outputs_with_gradients...)  # backward pass
(A{Complex{GVar{Float64, Float64}}}(GVar(2.0, 3.0) + GVar(3.0, 3.0)*im), A{Complex{GVar{Float64, Float64}}}(GVar(2.0, 1.8) - GVar(1.0, -0.6000000000000002)*im))

julia> grad(inputs_with_gradients)
(A{ComplexF64}(3.0 + 3.0im), A{ComplexF64}(1.8 + 0.6000000000000002im))

The outputs of ~f carry the gradients of the input variables; one can use grad to extract the gradient fields recursively.

source
NiLang.AD.NGradType
NGrad{N,FT} <: Function

Obtain gradients via Grad(f)(Val(i), args..., kwargs...), where i is the index of the loss in args. A Grad object runs the forward pass first and then the backward pass.

Note

Val(1) is specially optimized, so putting the loss as the first parameter can avoid potential overhead.
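
A hedged sketch, reusing the hypothetical loss function from the check_grad entry above; the returned values are assumed to carry the gradients in their gradient fields:

julia> res = NiLang.AD.Grad(loss)(Val(1), 0.0, 2.0);   # forward pass, then backward pass from the loss slot

julia> dx = NiLang.AD.grad(res[2]);                    # gradient w.r.t. x, expected 2x = 4.0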

source