CUDAapi.find_libcudadevrt - Method
find_libcudadevrt(toolkit_dirs::Vector{String})

Look for the CUDA device runtime library in any of the CUDA toolkit directories toolkit_dirs.

CUDAapi.find_libdevice - Method
find_libdevice(toolkit_dirs::Vector{String})

Look for the CUDA device library supporting the given targets in any of the CUDA toolkit directories toolkit_dirs. On CUDA >= 9.0, a single unified library is discovered and returned as a string. On older toolkits, individual libraries for each of the targets are returned as a vector of strings.

CUDAapi.find_toolkit - Method
find_toolkit()::Vector{String}

Look for directories where (parts of) the CUDA toolkit might be installed. This returns a (possibly empty) list of paths that can be used as an argument to other discovery functions.

The behavior of this function can be overridden by defining the CUDA_PATH, CUDA_HOME or CUDA_ROOT environment variables, which should point to the root of the CUDA toolkit.
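For instance, toolkit discovery can be redirected before launching Julia; the path below is a hypothetical install location, not a value prescribed by CUDAapi:

```shell
# Point CUDAapi's toolkit discovery at a specific installation
# (hypothetical path; adjust to the actual install location).
export CUDA_PATH=/usr/local/cuda
```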

CUDAapi.has_cuda - Method
has_cuda()::Bool

Check whether the local system provides an installation of the CUDA driver and toolkit. Use this function if your code loads packages that require CUDA, such as CuArrays.jl.

Note that CUDA-dependent packages might still fail to load if the installation is broken, so it's recommended to guard against that and print a warning to inform the user:

using CUDAapi
if has_cuda()
    try
        using CuArrays
    catch ex
        @warn "CUDA is installed, but CuArrays.jl fails to load" exception=(ex,catch_backtrace())
    end
end
CUDAapi.has_cuda_gpu - Method
has_cuda_gpu()::Bool

Check whether the local system provides an installation of the CUDA driver and toolkit, and whether it contains a CUDA-capable GPU. See has_cuda for more details.

Note that this function initializes the CUDA API in order to check for the number of GPUs.

CUDAapi.usable_cuda_gpus - Method
usable_cuda_gpus(; suppress_output=false)::Int

Returns the number of CUDA GPUs that are available for use on the local system.

Note that this function initializes the CUDA API in order to check for the number of GPUs.

CUDAapi.@checked - Macro
@checked function foo(...)
    rv = ...
    return rv
end

Macro for wrapping a function definition returning a status code. Two versions of the function will be generated: foo, with the function body wrapped by an invocation of the @check macro (to be implemented by the caller of this macro), and unsafe_foo where no such invocation is present and the status code is returned to the caller.
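The generated pair can be sketched by hand without the macro; here a plain function stands in for the caller-provided @check macro, and all names are hypothetical:

```julia
# Stand-in for the caller-provided @check macro: raise on a nonzero status,
# otherwise pass the status through.
check(status) = (status == 0 || error("API call failed with status $status"); status)

# unsafe_foo: the status code is returned to the caller, unchecked.
unsafe_foo(x) = x < 0 ? 1 : 0   # hypothetical status-returning body

# foo: the same body, wrapped by the checker.
foo(x) = check(unsafe_foo(x))
```

Calling foo(1) returns the status 0, while foo(-1) throws; unsafe_foo(-1) instead returns the raw status 1 and leaves error handling to the caller.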

CUDAapi.@runtime_ccall - Macro
@runtime_ccall((function_name, library), returntype, (argtype1, ...), argvalue1, ...)

Extension of ccall that performs the lookup of function_name in library at run time. This is useful when library might not be available, in which case a function containing a regular ccall to that library would fail to compile.

After a slower first call to load the library and look up the function, no additional overhead is expected compared to regular ccall.
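The run-time lookup it automates can be sketched with Libdl primitives. The library and function below (strlen in the system C library) are assumptions for illustration, not part of CUDAapi:

```julia
using Libdl

# Sketch of the runtime-lookup pattern: open the library and resolve the
# symbol on first use, cache the handle, then ccall through the pointer.
const clib = Sys.isapple() ? "libSystem.dylib" : "libc.so.6"  # assumption: a host C library
const clib_handle = Ref{Ptr{Cvoid}}(C_NULL)

function runtime_strlen(s::AbstractString)
    if clib_handle[] == C_NULL
        clib_handle[] = Libdl.dlopen(clib)      # slow first call: load the library
    end
    fptr = Libdl.dlsym(clib_handle[], :strlen)  # look up the function
    ccall(fptr, Csize_t, (Cstring,), s)         # regular ccall through the pointer
end
```

After the first call loads the library, subsequent calls only pay for the symbol lookup and the ccall itself.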

CUDAapi.find_binary - Method
find_binary(names; locations=String[])

Similar to find_library, this function performs an exhaustive search for a binary in various subdirectories of locations, and finally in PATH.

CUDAapi.find_library - Method
find_library(names; locations=String[], versions=VersionNumber[], word_size=Sys.WORD_SIZE)

Wrapper for Libdl.find_library, performing a more exhaustive search:

  • variants of the library name (including version numbers, platform-specific tags, etc.);
  • various subdirectories of the locations list, and finally system library directories.

Returns the full path to the library.
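A minimal sketch of the underlying idea, built directly on Libdl; the function name locate_library and the libm example are assumptions, not CUDAapi's implementation:

```julia
using Libdl

# Try the given name variants in the explicit locations first, then fall
# back to the system library directories; resolve the hit to a full path.
function locate_library(names::Vector{String}; locations::Vector{String}=String[])
    name = Libdl.find_library(names, locations)
    isempty(name) ? nothing : Libdl.dlpath(name)
end
```

On a typical Linux system, locate_library(["libm", "libm.so.6"]) would return the full path to the math library, or nothing when no candidate resolves.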