Getting Started with SLAC's MPI "Pinto" Cluster

The Pinto Cluster

SLAC's "pinto" cluster consists of 32 SunFire x2270 systems, each with two quad-core Xeon processors running at 2.93 GHz. Thus there is a total of 256 cpus available in the cluster. The systems have an InfiniBand low-latency interconnect. Two of the machines, available via the generic pool name 'pinto', are open to interactive login to allow you to build and test your code.

MPI Environment

The pinto machines are installed with RHEL5, including the OpenMPI packages supplied by Red Hat. The current version is OpenMPI 1.4. When you log into a pinto, your environment should be set up to use this version (unless you have used Red Hat's mpi-selector script, or your own login scripts, to override the default). You can check that your PATH is correct by issuing the command

which mpirun

Currently, this should return

/usr/lib64/openmpi/1.4-gcc/bin/mpirun

Future updates to the MPI version may change the exact details of this path.

In addition, your LD_LIBRARY_PATH should include /usr/lib64/openmpi/1.4-gcc/lib (or something similar).

The following assumes you are logged onto a pinto and your environment is set up as described.

Compiling Code

MPI wrappers for the common GNU compilers are available, such as mpicc, mpic++, and mpif90. For example, if you have the hello.c program from the MPI Tutorial, you can say
mpicc hello.c -o hello
to compile and link the code into an executable named hello. You can execute the program interactively by just saying
./hello
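
If you do not have the tutorial's hello.c handy, the following minimal MPI hello-world program can be used for testing. It is an illustrative sketch and may differ in detail from the tutorial's version.

/* hello.c -- minimal MPI "hello world" sketch (illustrative; may differ
 * from the MPI Tutorial's version). */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size, namelen;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);                  /* start the MPI environment */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's rank, 0..size-1 */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of MPI processes */
    MPI_Get_processor_name(name, &namelen);  /* name of the host running this rank */

    printf("Hello from rank %d of %d on %s\n", rank, size, name);

    MPI_Finalize();                          /* shut down MPI */
    return 0;
}

When run interactively as ./hello, the program typically runs as a single MPI process and prints one line; under a batch job it prints one line per MPI process.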

Submitting jobs

Batch jobs can be submitted to the pinto cluster by specifying -a mympi on your bsub command. This will send the job to the mpi-ibq LSF queue by default. You can also specify the queue explicitly with -q mpi-ibq, but that should not be necessary. You specify how many CPUs your job needs with the -n option. So if you have the hello program (from above) in your home directory and want to run on 96 CPUs, you would say:
bsub -a mympi -n 96 ~/hello
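
Assuming a hello program along the lines of the sketch above, the job's output should contain one greeting line per MPI rank (96 in this example); the order of the lines is not deterministic.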

Further Information

Check the links at the top of this page for more information on using MPI and batch at SLAC.

For more help, send email to unix-admin@slac.stanford.edu .




John Bartelt. Last Modified: 2010 August 09.