CS 146 Cluster Walkthrough
Part 1: Logging in to Open OnDemand
For your convenience, the cluster can be accessed through a graphical user interface (GUI) called Open OnDemand. This interface will allow you to access all of the features you will need for CS 146.
- Navigate to https://ondemand.hpc.uwec.edu (Requires on-campus or VPN connection).
- Enter your UWEC login and authenticate with Okta.
- Complete the first-time sign-in process if you have never used the cluster before. Follow the on-screen instructions exactly, and make sure you accept all the terms of service.
Trouble with setup?
If you're struggling with first-time sign-in or accepting the terms of service, try following our detailed guide at https://docs.hpc.uwec.edu/ood/first-time/
Part 2: Demo 1 - Linux + Interactive Job
- Click on the "BOSE Cluster Shell Access" Pinned App.
- You will be presented with something similar to the prompt shown below. This is the main interface of a terminal/shell environment:
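The exact prompt varies by configuration, but it should look something like this, with your own username in place of `<username>`:

```
[<username>@bose ~]$
```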
- On first login, your shell runs on the cluster's head node, named `bose`. We want to change to a node that is meant to handle our work. We can achieve this through a program called Slurm, our job management software. Change to a new node by typing:
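As noted later in this walkthrough, the command is `sinteract`, which asks Slurm for an interactive session on a compute node (any extra options it accepts are cluster-specific):

```bash
sinteract
```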
- Confirm your current working directory is correct. It should appear similar to `/data/users/<your-username>`. Check by typing:
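The standard command for this is `pwd` ("print working directory"):

```bash
pwd
```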
- Change your current working directory to the `my_cs146` project folder by typing:
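Assuming `my_cs146` sits directly under your data directory, `cd` gets you there:

```bash
cd my_cs146
```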
- Copy all files from the `course_files` directory into your current directory by typing:
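A sketch of the copy command; `/path/to/course_files` is a hypothetical stand-in, since the actual location of `course_files` is course-specific:

```bash
# "*" matches everything inside course_files; "." is the current directory
cp /path/to/course_files/* .
```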
Look Closely!
If you look closely, you'll see a period at the end of the command. The period refers to "the current location", so we are saying: copy everything (`*`) under the `course_files` directory into the current directory.
- Confirm the files copied. The following command shows all files in your current directory. Type:
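`ls` lists the contents of the current directory:

```bash
ls
```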
- Activate the cluster's Python module by typing:
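On clusters that use environment modules, this is typically done with `module load`; the exact module name below is an assumption and may differ on BOSE:

```bash
module load python   # the module's exact name/version may differ
```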
- Assuming your files copied, there should be a file titled `run-script.py`. The `.py` extension signifies that the file is a Python file. You can run its code with the following command:
- Take note of the output. As mentioned, we used software called Slurm when we typed the `sinteract` command. Your output should show the specific details of your job.
- Finish it off by saying "I'm done": exit your job session on our cluster.
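Typing `exit` (or pressing Ctrl+D) ends the interactive session:

```bash
exit
```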
It should bring you back to the host named "bose". Keep this tab open for now and go back to the tab with Open OnDemand.
Part 3: Demo 2 - Jupyter
Back To OnDemand
Head back to the dashboard on Open OnDemand with all the tiles. This may be another tab in your browser.
Starting a Jupyter session
To use Jupyter, you must request resources from the cluster. First, click on the "Jupyter Notebook" tile on the OnDemand home screen. Once you click it, request the following resources on the next screen:
- Accounting Group: 2255.cs.146.001
- Slurm Partition: Week (7 days) - Default
- CPU Cores: 4
- Memory: 15G
- #GPU Cards: No GPUs - Default
- Number of Hours: 2 (This is how long Jupyter will run for before automatically stopping)
- Working Directory: (Leave this blank)
- Email Notifications: None - No Email
Double-check that all of your values are right, then click the blue Launch button.
Accessing your session
Once you launch the session, it will need to start. This can take some time, but once your session changes from "Starting" to "Running", click the blue Connect to Jupyter button.
Opening the demo notebook
Once you open Jupyter, you will be in your "Home Directory". You should see a folder called `my_cs146`. Open this folder, then click on the `Demo-2.ipynb` file inside. Once opened, you will be on the Jupyter Notebook screen, and you can now begin the demo.
Part 4: Demo 3 - Batch Job
Back to the command line interface on BOSE!
- Earlier we copied over files from `course_files`. One of these files is titled `run-demo-3.sh`. Look at the contents of this file by typing:
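`cat` prints the file's contents to the terminal:

```bash
cat run-demo-3.sh
```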
Note the following section at the top of the file. This top section is where we specify the resource requirements of our job:

```bash
#!/bin/bash
#SBATCH --partition=week                # Partition to submit to
#SBATCH --time=0-01:00:00               # Time limit for this job (DD-HH:MM:SS)
#SBATCH --nodes=1                       # Nodes to be used for this job during runtime. Use MPI for jobs with multiple nodes.
#SBATCH --ntasks=8                      # Number of CPUs. Cannot be greater than the number of CPUs on the node.
#SBATCH --mem=5G                        # Total memory for this job (M = Megabytes, G = Gigabytes, T = Terabytes)
#SBATCH --job-name="MandelbrotDemo"     # Name of this job in the work queue
#SBATCH --output=jobOutputs/job-%j.out  # Output file name (%j = Automatic Job ID)
#SBATCH --error=jobOutputs/job-%j.err   # Error file name (%j = Automatic Job ID)
#SBATCH --mail-type=END,FAIL            # Email notification type (BEGIN, END, FAIL, ALL). For multiple types, use a comma-separated list, e.g. END,FAIL.
#SBATCH --gpus=0                        # Number of GPU cards to use (applies only to BOSE and is required to use the GPUs)
```
- Run the following command to submit `run-demo-3.sh` to Slurm:
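Slurm's standard submission command is `sbatch`:

```bash
sbatch run-demo-3.sh
```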
- Check your job's status by typing:
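Using the cluster's `myjobs` helper (see the note below):

```bash
myjobs
```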
Note
`myjobs` will only show your job while it is actively running; once it finishes or is terminated, it no longer appears in the list.
- Return to the main dashboard of Open OnDemand and click the "Home Directory" Pinned App.
- You will see a file browser in front of you. Click on `my_cs146`, then click on the `.gif` file that matches your job's number.
- The result you get is the work of parallel processing through the cluster. Enjoy!