Interactive Mode

When you use sbatch to submit a job to our cluster, all the commands listed in your script run automatically on a compute node until the job completes. Sometimes you want to run the commands yourself, interactively, so you can watch the output or test for errors, and that doesn't work well with batch submission.
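
For context, a batch job is simply a script whose commands Slurm runs for you unattended. A minimal sbatch script might look like the sketch below (the module and script names are placeholders, matching the examples later on this page):

#!/bin/bash
#SBATCH --ntasks=1         # request 1 core
#SBATCH --time=08:00:00    # request 8 hours of wall time

module load python-libs
python my-file.py          # runs unattended; output is written to a slurm-<jobid>.out file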

Our custom sinteract command logs you into one of our compute nodes with the same resources you can request with sbatch, giving you the ability to run each command manually.

Compiling Software?

Are you going to be compiling software, or are you seeing references to missing packages that end in "-devel" or "-dev"? You can use the sdevelop command to run jobs on our development node instead. This node is configured like our compute nodes, but also includes additional development headers and libraries.

The usage of sdevelop is the same as sinteract.

sdevelop # Defaults to 16 cores for 8 hours
sdevelop --ntasks=4 --mem=100G  # Request 4 cores and 100GB of memory
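
As a rough sketch, a compile inside an sdevelop session might look like the following (the compiler module name and project directory are placeholders for your own):

module load gcc                              # load a compiler module (name may differ on our module system)
cd ~/my-project                              # your source directory
./configure --prefix=$HOME/apps/my-project   # configure an install location in your home directory
make -j $SLURM_NTASKS                        # build using the cores allocated to this job
make install
exit                                         # release the development node when finished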

Starting A Session

Command:

# Defaults to 1 core for 8 hours 
sinteract

You can request any resources needed (CPU cores, memory) using arguments from salloc, which are similar to those supported by sbatch.

More salloc Options: https://slurm.schedmd.com/salloc.html#SECTION_OPTIONS
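
For example, standard salloc options such as wall time and CPUs per task work the same way with sinteract (the values below are only illustrative):

# Request 4 tasks with 2 CPUs each for 2 hours
sinteract --ntasks=4 --cpus-per-task=2 --time=02:00:00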

CPU (week partition):

# Request 8 cores and 10GB of memory
sinteract --ntasks=8 --mem=10G
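
Once the session starts, you can check what was actually granted through the environment variables Slurm sets for the job (SLURM_NTASKS and SLURM_MEM_PER_NODE are standard salloc output variables):

# Inside the interactive session
echo $SLURM_NTASKS        # number of tasks granted (8 in this example)
echo $SLURM_MEM_PER_NODE  # memory granted per node, in MB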

GPU:

# Request 1 GPU card and 8 CPU cores
sinteract --partition=GPU --gres=gpu:1 --ntasks=8
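
After the GPU session starts, nvidia-smi (NVIDIA's standard utility) shows the card assigned to your job, and Slurm typically restricts you to it via CUDA_VISIBLE_DEVICES:

# Inside the GPU session
nvidia-smi                     # show the allocated GPU and its current utilization
echo $CUDA_VISIBLE_DEVICES     # GPU index/indices assigned to this job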

Running Software

Once you log into a compute node, you'll be able to use the module system to load the software required to run your calculations or simulations.

module load python-libs
python my-file.py
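
If you are unsure which modules exist or what you currently have loaded, the standard module subcommands will tell you:

module avail            # list all available software modules
module avail python     # search for modules whose names match "python"
module list             # show modules loaded in the current session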

Ending A Session

To end the job and release its resources, simply type:

exit

Slurm will terminate the shell session and return you to the head node shell.

Scratch Space

By default, all temporary files are created locally in /local/scratch/job_XXXXXX, which is accessible through the $TMPDIR environment variable.

[user@bose ~]$ sinteract
---------------------------------------------------------
Starting Interactive Session...

Please hold while we try to find an available node with your requested resources within 30 seconds.
---------------------------------------------------------
salloc: Pending job allocation 12345
salloc: job 12345 queued and waiting for resources
salloc: job 12345 has been allocated resources
salloc: Granted job allocation 12345
salloc: Nodes cn01 are ready for job

[user@cn01 ~]$ echo $TMPDIR
/local/scratch/job_12345

# Change directory to scratch location
[user@cn01 ~]$ cd $TMPDIR
[user@cn01 job_12345]$

If you need more scratch space than the local disk provides (more than 800GB), you can use network scratch instead by prefixing your sinteract command with USE_NETWORK_SCRATCH=1.

[user@bose ~]$ USE_NETWORK_SCRATCH=1 sinteract
---------------------------------------------------------
Starting Interactive Session...

Please hold while we try to find an available node with your requested resources within 30 seconds.
---------------------------------------------------------
salloc: Pending job allocation 12345
salloc: job 12345 queued and waiting for resources
salloc: job 12345 has been allocated resources
salloc: Granted job allocation 12345
salloc: Nodes cn01 are ready for job

[user@cn01 ~]$ echo $TMPDIR
/local/network/job_12345

Note: All scratch files will be automatically purged upon job completion.
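
Because scratch is wiped when the job ends, a typical workflow is to stage input into $TMPDIR, run there, and copy results home before exiting (the file and directory names below are placeholders):

# Stage input into scratch, run, then save results before the job ends
cp ~/data/input.dat $TMPDIR/
cd $TMPDIR
module load python-libs
python ~/my-file.py input.dat > results.txt
cp results.txt ~/results/      # anything left in scratch is purged at job completion
exit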