Socrates (Training Cluster)

Part of our training infrastructure, Socrates is a high-performance computing cluster designed for tasks that require parallel computing resources. You can use Socrates for training, teaching, or research purposes.

Specifications


  • 1 head node: socrates.usask.ca
    • 2 x Quad core Intel Xeon processors
    • 8GB ECC RAM
    • 1 TB of RAIDed storage
    • TORQUE/Maui scheduling software
  • 8 capability nodes: compute-0-0 to compute-0-7
    • Sun Fire X4150
    • 2 x Quad core Intel Xeon L5420 at 2.5GHz (8 cores)
    • 32 GB ECC RAM
    • 146 GB 10000 rpm SAS hard drive
    • 1 Gb NIC
  • 28 capacity nodes: compute-0-8 to compute-0-35
    • Sun Fire X2250
    • 2 x Quad core Intel Xeon L5420 at 2.5GHz (8 cores)
    • 8 GB ECC RAM
    • 250 GB 7200 rpm SATA hard drive
    • 1 Gb NIC
  • 1 Gigabit Ethernet private network (48 port GigE switch)
  • RHEL 5.3 Linux/OSCAR clustering software
  • 1 TB of RAIDed storage on head node
  • Local hard drives on nodes for scratch space

Software

Depending on the class, there may be class-specific software, but most software is located here:

/share/apps

Applications:

MATLAB - /usr/local/bin/matlab
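
For non-interactive or batch use, MATLAB can be started without its graphical desktop; the M-file name below is only a placeholder for your own script:

/usr/local/bin/matlab -nodisplay -nosplash -r "run('my_analysis.m'); exit"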

Compilers

/usr/bin/gcc
/usr/bin/g77
/usr/bin/gfortran
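
As a quick example, serial programs can be compiled directly with these compilers; the source file names below are placeholders:

/usr/bin/gcc -O2 -o hello_c hello.c
/usr/bin/gfortran -O2 -o hello_f hello.f90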

Intel Compilers

To configure your session to use the Intel compilers for "mpif90" and "mpicc", first ensure that no other MPI configuration is loaded:

module unload rocks-openmpi

Then use the following command to load the correct configuration:

module load intel/xe12 intel/mpi_xe12
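
With the Intel modules loaded, "mpicc" and "mpif90" can be used to compile MPI programs; the source file names below are placeholders only:

mpicc -O2 -o mpi_hello_c mpi_hello.c
mpif90 -O2 -o mpi_hello_f mpi_hello.f90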

Use the "module initadd" command so that the correct libraries are also loaded when your programs run in the batch system.

module initadd intel/xe12 intel/mpi_xe12
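
As a sketch only, a minimal TORQUE submission script might look like the following; the job name, resource requests, program name and process count are placeholders rather than site defaults:

#!/bin/bash
#PBS -N mpi_hello
#PBS -l nodes=2:ppn=8
#PBS -l walltime=00:10:00
cd $PBS_O_WORKDIR
mpirun -np 16 ./mpi_hello

Submit the script with "qsub" and monitor it with "qstat":

qsub mpi_hello.pbs
qstat -u $USER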

Note that the Intel compilers include the optimised Math Kernel Library (MKL) with BLAS, LAPACK, ScaLAPACK, LINPACK and FFT libraries. Intel's MKL Link Line Advisor can help you determine the correct library linking procedure.
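
For example, assuming "mpicc" invokes the Intel C compiler as configured above, a program that calls LAPACK routines can often be linked against MKL with the Intel compilers' "-mkl" option (the source file name here is only an example); more involved cases, such as ScaLAPACK, may need the explicit link line suggested by that utility:

mpicc -O2 -o solver solver.c -mkl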

Modules (Set up environment for MPI)

The module command controls which parallel processing environment is loaded at each shell invocation for the default installed versions of MPI on the cluster. If your class uses a different version, another module will apply. Unless you need a different environment, the OpenMPI environment is recommended.

module initadd mpi_gnu
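
As an illustration, the commands below list the available and currently loaded modules, load the OpenMPI environment for the current session, and then compile and launch a small MPI test interactively; the source file and process count are placeholders:

module avail
module list
module load mpi_gnu
mpicc -O2 -o mpi_hello mpi_hello.c
mpirun -np 8 ./mpi_hello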