Cluster: BGSC

Also known as the "Blugold Supercomputing Cluster", BGSC is the original general-purpose cluster at UW-Eau Claire. Funded by Blugold Differential Tuition in 2013 and extended through hardware purchases by individual faculty members, this cluster was the first system opened to all faculty, staff, and students at UW-Eau Claire.

This cluster is now used only for smaller single-node (serial) jobs, smaller research projects, and classroom instruction.

Are you looking to transition from BGSC to BOSE?

How To Connect

Off Campus?

If you are not on campus, you'll need to first connect to the UW-Eau Claire VPN before you can access our computing resources.

SSH / SFTP / SCP - Command Line

| Setting  | Value |
| -------- | ----- |
| Hostname | bgsc.hpc.uwec.edu |
| Port     | 22 |
| Username | Your UW-Eau Claire username (all lowercase) |
| Password | Your UW-Eau Claire password |
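
From a terminal, a connection might look like the following sketch; the username and file name are placeholders to replace with your own.

```bash
# Open an interactive shell on BGSC (replace "username" with your lowercase UW-Eau Claire username)
ssh username@bgsc.hpc.uwec.edu

# Copy a local file into your home directory on the cluster over SCP
scp ./results.csv username@bgsc.hpc.uwec.edu:~/
```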

Hardware Specs

Overall Specs

| Hardware Overview | # w/ Slurm | # Total* |
| ----------------- | ---------- | -------- |
| # of Nodes        | 20         | 27       |
| CPU Cores         | 404        | 452      |
| GPU Cores         | 22,952     | 29,952   |
| Memory            | 1.32 TB    | 1.43 TB  |
| Network Scratch   | 11 TB      | 11 TB    |

*Note: BGSC has some nodes that are not available through the normal Slurm job submission process and require special access. Their specs are listed here for completeness.

Node Specs

| Node Name | CPU Model | CPU Cores (Total) | CPU Clock (Base) | Memory (Slurm) | Local Scratch | Slurm? |
| --------- | --------- | ----------------- | ---------------- | -------------- | ------------- | ------ |
| compute[29-33], compute[35-36] | Intel Xeon E5430 (x2) | 8 | 2.66 GHz | 16 GB | 125 TB | No |
| compute[37-40] | Intel Xeon X5450 (x2) | 12 | 2.67 GHz | 24 GB | 450 GB | Yes |
| compute[58-70] | Intel Xeon E5-2670 v2 (x2) | 20 | 2.5 GHz | 80 GB | 900 GB | Yes |
| compute[71-73] | Intel Xeon E5-2683 v4 (x2) | 32 | 2.1 GHz | 60 GB | 100 GB | Yes |
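
Once connected, standard Slurm commands can report the live state of these nodes; the node name below is just one example taken from the table above.

```bash
# Summarize every node Slurm manages, including CPUs, memory, and state
sinfo -N -l

# Show the full configuration of a single node, e.g. compute58
scontrol show node compute58
```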

Graphics Cards

We have three nodes, each containing four NVIDIA Tesla K20m graphics cards (5 GB of memory each), available under the "GPU" partition.
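
A minimal sketch of a GPU job script is shown below; it assumes the GPUs are exposed to Slurm as a generic resource named "gpu", which may be configured differently on BGSC.

```bash
#!/bin/bash
#SBATCH --partition=GPU      # GPU partition (see Slurm Partitions below)
#SBATCH --gres=gpu:1         # request one GPU; the gres name "gpu" is an assumption
#SBATCH --time=0-04:00:00    # 4 hours, within the partition's 7-day limit
#SBATCH --ntasks=1

# List the GPUs visible to this job
nvidia-smi
```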

Slurm Partitions

| Name | # of Nodes | Max Time Limit | Purpose |
| ---- | ---------- | -------------- | ------- |
| week | 9 | 7 days | General-use partition. It should be used when your job will take less than a week, or when you can restart your job from a checkpoint to continue running it. This partition is highly recommended for most jobs. |
| batch | 12 | 30 days | General-use partition for longer jobs that need to run up to a month. |
| GPU | 3 | 7 days | Partition consisting exclusively of nodes that contain GPUs; use it only when your job requires GPUs. |
| extended | 4 | 104 days | Special partition for long-running jobs that need up to 104 days. Availability is extremely limited. |
| scavenge | 19 | 5 days | Low-priority partition available on all nodes; jobs may be requeued if their resources are needed by a job in another partition. |
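
As a sketch, a single-node serial job targeting the week partition could be submitted with a script like the one below; the job name, resource requests, and program are placeholders.

```bash
#!/bin/bash
#SBATCH --job-name=example_job   # placeholder job name
#SBATCH --partition=week         # recommended general-use partition
#SBATCH --time=2-00:00:00        # 2 days, within the 7-day limit
#SBATCH --ntasks=1               # single-node serial job
#SBATCH --mem=4G                 # placeholder memory request

# Run your program (placeholder command)
./my_program
```

Submit the script with `sbatch job.sh` and check its status with `squeue -u $USER`.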

Acknowledgement

We ask that you support our group by including the following acknowledgement in any papers that use the BGSC supercomputing cluster for research.

The computational resources of the study were provided by the Blugold Center for High-Performance Computing.