CUDA Toolkit

Overview

The NVIDIA CUDA Toolkit is a development platform that provides the tools needed to add GPU acceleration to software, such as applications used in high-performance computing. Graphics cards excel at parallel workloads because they contain a large number of cores, which software can take advantage of through the CUDA API.

Availability

Cluster   Module/Version
BOSE      cuda/10.0, cuda/11.4, cuda/12.6
BGSC      cuda/10.2, cuda/11.4

Note: You can simply use module load cuda to activate the most recently installed version of this software; however, we highly recommend targeting a specific version, since the default changes as new versions are installed and may break your workflows.

Already Being Used?

By default, most software on our cluster will automatically load the CUDA module if it has support for utilizing our GPUs. This module is only necessary in a select few cases, such as building your own software.

You can verify this by running module list after loading another module and checking whether CUDA appears in the output.

Sample Slurm Script

submit.sh
#!/bin/bash
# -- SLURM SETTINGS -- #
# [..] other settings here [..]

# The following settings are for the overall request to Slurm
#SBATCH --ntasks-per-node=32     # How many CPU cores to request
#SBATCH --nodes=1                # How many nodes to request
#SBATCH --gpus=1                 # How many GPUs to request (GPU request syntax
                                 # may vary by cluster; check our Slurm docs)

# -- SCRIPT COMMANDS -- #

# Load the needed modules
module load cuda/12.6    # Load the CUDA Toolkit
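
Once the module is loaded, the toolkit's nvcc compiler is available for building your own GPU code. The following is a minimal, illustrative sketch (not part of our cluster documentation): a vector-addition kernel that could be compiled inside the job script above with nvcc vector_add.cu -o vector_add.

```cuda
// vector_add.cu -- minimal CUDA example (illustrative sketch only)
#include <cstdio>
#include <cuda_runtime.h>

// Each GPU thread adds one pair of elements
__global__ void add(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);
    float *a, *b, *c;

    // Unified memory keeps the example short; explicit
    // cudaMemcpy between host and device buffers also works
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; i++) { a[i] = 1.0f; b[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    add<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);

    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

Note that the nvcc version provided by each module matches the CUDA release in its name, so code built against cuda/12.6 should also be run under that module.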

Real Example

Has your research group used CUDA in a project? Contact the HPC Team and we'd be glad to feature your work.

Citation

Please include the following citation in your papers to support continued development of CUDA.

John Nickolls, Ian Buck, Michael Garland, and Kevin Skadron. 2008. Scalable Parallel Programming with CUDA: Is CUDA the parallel programming model that application developers have been waiting for? Queue 6, 2 (March/April 2008), 40–53. https://doi.org/10.1145/1365490.1365500

Resources