Zeno

Part of the training infrastructure, Zeno is a computing cluster that supports GPU computing and high-performance networking for highly parallel jobs.

You can use Zeno for training and teaching purposes. You can also use Zeno for research computations if there is no training scheduled.

To take advantage of the processing power of the GPUs, a program must be compiled against the CUDA libraries. The CUDA 4.2 environment, including OpenCL, is currently available on Zeno.
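
For example, a minimal build might look like the following, assuming a hypothetical source file my_kernel.cu and the nvidia/cuda module described under Modules below:

module load nvidia/cuda
nvcc -o my_kernel my_kernel.cu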

Specifications


  • 1 head node: zeno.usask.ca
    • 2 x six-core Intel Xeon E5649 2.53 GHz processors
    • 24 GB RAM
    • 2800 GB RAIDed storage (not backed up)
    • TORQUE/Maui scheduling software (see the example job script after this list)
  • 8 computational nodes: compute-0-0 to compute-0-7
    • 2 x six-core Intel Xeon E5649 2.53 GHz processors (12 cores per node)
    • 24 GB ECC RAM
    • 500 GB 7200 RPM hard drive
    • Tesla M2075 GPU
  • 4X InfiniBand interconnect
  • CentOS 6.3 Linux / ROCKS clustering software
  • 2.8 TB RAID storage exported to nodes
  • Local hard drives on computational nodes for high performance, non-network scratch
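
Jobs on the computational nodes are submitted through the TORQUE/Maui scheduler listed above. A minimal job script sketch is shown below; the job name, resource requests, and program name are illustrative only, not site policy:

#!/bin/bash
#PBS -N example_job
#PBS -l nodes=1:ppn=12
#PBS -l walltime=01:00:00
cd $PBS_O_WORKDIR
./my_program

Submit the script (saved here under the hypothetical name example_job.pbs) with:

qsub example_job.pbs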

Software

Most of the software you will need to use the Zeno cluster can be found here:

/share/apps

Applications:

MATLAB - /usr/local/bin/matlab
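
For example, MATLAB can be started without its graphical desktop, or pointed at a script in batch mode (my_script.m is a hypothetical file name):

/usr/local/bin/matlab -nodisplay -nosplash
/usr/local/bin/matlab -nodisplay -nosplash -r "run('my_script.m'); exit"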

Environment variables for specific applications are configured via the "module" software.

To see your currently configured modules, use

module list

To see the list of available modules, use

module avail

To load a module for one login shell/session:

module add <modulename>
or
module load <modulename>

To load a module for every future login shell/session:

module initadd <modulename>

For more information, see 

man module

Compilers


/usr/bin/gcc
/usr/bin/gfortran
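
For example, a serial C or Fortran program can be compiled as follows (the source and output file names are illustrative):

gcc -O2 -o my_c_program my_c_program.c
gfortran -O2 -o my_fortran_program my_fortran_program.f90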

Modules

The module command controls which environment settings are loaded at each shell invocation, including which of the MPI versions installed on the cluster is used by default.

Modules available:

nvidia/cuda

Required for using CUDA (i.e. GPGPU processing)

openmpi/1.6

Required for OpenMPI

openmpi/1.6_noib

Required if you want to use OpenMPI, but not the InfiniBand interconnect

rocks-openmpi

Loads the default OpenMPI installation shipped with ROCKS; not recommended.

Use the "module initadd" command to add the appropriate modules to every future login session. The following is recommended:

module initadd openmpi/1.6 nvidia/cuda
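
Once openmpi/1.6 is loaded, a typical compile-and-run sequence looks like this (the source file, executable name, and process count are illustrative):

mpicc -O2 -o mpi_program mpi_program.c
mpirun -np 12 ./mpi_program

In practice, the mpirun step would normally be placed inside a TORQUE job script such as the one shown under Specifications, rather than run directly on the head node.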