Essentials

Initialization

CUDA.functional — Method
functional(show_reason=false)

Check if the package has been configured successfully and is ready to use.

This call is intended for packages that support conditionally using an available GPU. If you do not check whether CUDA is functional, actual use of the functionality may warn or error.
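A minimal sketch of such a conditional code path, assuming CUDA.jl is loaded (the array computation is illustrative):

```julia
using CUDA

# Fall back to the CPU when no functional GPU set-up is available.
if CUDA.functional()
    a = CUDA.rand(Float32, 1024)   # allocate on the GPU
else
    @warn "CUDA is not functional; using CPU arrays"
    a = rand(Float32, 1024)
end
b = a .+ 1                         # broadcast works for both array types
```

Passing `show_reason=true` additionally prints why the set-up is not functional, which is useful when diagnosing a user's machine.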

CUDA.has_cuda — Function
has_cuda()::Bool

Check whether the local system provides an installation of the CUDA driver and toolkit. Use this function if your code loads packages that require CUDA.jl.

Note that CUDA-dependent packages might still fail to load if the installation is broken, so it's recommended to guard against that and print a warning to inform the user:

using CUDA
if has_cuda()
    try
        using CuArrays
    catch ex
        @warn "CUDA is installed, but CuArrays.jl fails to load" exception=(ex,catch_backtrace())
    end
end

CUDA.has_cuda_gpu — Function
has_cuda_gpu()::Bool

Check whether the local system provides an installation of the CUDA driver and toolkit, and if it contains a CUDA-capable GPU. See has_cuda for more details.

Note that this function initializes the CUDA API in order to check for the number of GPUs.
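For example, a package might only enable its GPU code paths when a CUDA-capable device is actually present; a hedged sketch:

```julia
using CUDA

if has_cuda_gpu()
    @info "CUDA-capable GPU detected" ndevices = length(CUDA.devices())
else
    @info "No CUDA-capable GPU; falling back to CPU code paths"
end
```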

Global state

CUDA.context — Function
context()::CuContext

Get or create a CUDA context for the current thread (as opposed to CuCurrentContext which may return nothing if there is no context bound to the current thread).
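A short sketch of the difference, assuming a functional CUDA set-up:

```julia
using CUDA

ctx = context()                    # gets, or lazily creates, the context for this thread
@assert CuCurrentContext() == ctx  # the thread is now bound to that context
```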

CUDA.context! — Method
context!(ctx::CuContext)

Bind the current host thread to the context ctx.

Note that the contexts used with this call should have been previously acquired by calling context, and not arbitrary contexts created by calling the CuContext constructor.

CUDA.context! — Method
context!(f, ctx)

Sets the active context for the duration of f.
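This method composes well with Julia's do-block syntax; a sketch assuming a functional CUDA set-up (here the "other" context is simply the current thread's context):

```julia
using CUDA

other = context()      # in real code, a context previously acquired for another device
context!(other) do
    # code here runs with `other` as the active context
    CUDA.zeros(16)
end
# the previously active context is restored afterwards
```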

CUDA.device! — Method
device!(dev::Integer)
device!(dev::CuDevice)

Sets dev as the current active device for the calling host thread. Devices can be specified by integer id, or as a CuDevice (slightly faster).

Although this call is fairly cheap (50-100ns), it is only intended for interactive use, or for initial set-up of the environment. If you need to switch devices on a regular basis, work with contexts instead and call context! directly (5-10ns).

If your library or code needs to perform an action when the active context changes, add a hook using CUDA.atcontextswitch.
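A minimal sketch of both ways of specifying a device, assuming at least one CUDA-capable GPU:

```julia
using CUDA

device!(0)             # select device 0 for this host thread, by integer id
dev = CuDevice(0)      # or construct a CuDevice, which is slightly faster to switch to
device!(dev)
@show device()         # the now-active device
```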

CUDA.device! — Method
device!(f, dev)

Sets the active device for the duration of f.
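As with context!, this composes with do-block syntax; a sketch assuming device 0 exists:

```julia
using CUDA

# Temporarily run on device 0; the previously active device is restored afterwards.
total = device!(0) do
    sum(CUDA.rand(Float32, 32))
end
```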

CUDA.device_reset! — Function
device_reset!(dev::CuDevice=device())

Reset the CUDA state associated with a device. This call will release the underlying context, at which point any objects allocated in that context will be invalidated.
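A sketch of the invalidation behaviour, assuming a functional CUDA set-up:

```julia
using CUDA

a = CUDA.rand(Float32, 8)   # allocated in the current device's context
device_reset!()             # releases that context; `a` must no longer be used
```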

If you have a library or application that maintains its own global state, you might need to react to context or task switches:

CUDA.attaskswitch — Function
CUDA.attaskswitch(f::Function)

Register a function to be called after switching tasks on a thread. The function is passed two arguments: the thread ID, and the task switched to.

Use this hook to invalidate thread-local state that depends on the current task.
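A minimal sketch of registering such a hook with do-block syntax (the logging body is illustrative):

```julia
using CUDA

CUDA.attaskswitch() do tid, task
    # invalidate or refresh any state keyed on the current task here
    @debug "thread $tid switched to a new task" task
end
```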

CUDA.atcontextswitch — Function
CUDA.atcontextswitch(f::Function)

Register a function to be called after switching contexts on a thread. The function is passed two arguments: the thread ID, and the context switched to.

If the new context is nothing, this indicates that the context is being unbound from this thread (typically during device reset).

Use this hook to invalidate thread-local state that depends on the current device or context.
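A sketch of a hook that maintains a hypothetical per-context handle cache (the `handles` dictionary is illustrative, not part of CUDA.jl):

```julia
using CUDA

# Hypothetical per-context cache of library handles.
const handles = Dict{CuContext,Any}()

CUDA.atcontextswitch() do tid, ctx
    if ctx === nothing
        # the context is being unbound (e.g. device reset): drop cached state
        empty!(handles)
    end
end
```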