Stanford Linear Accelerator Center

Public Machines at SLAC

Updated: 3 April 2014

SLAC Computing Division provides a number of UNIX machines for use by the SLAC community. This page describes the machines that are available, their intended uses, and how to select an appropriate one for your work.

Many of these machines are monitored with Ganglia and Nagios.

The SLAC UNIX Compute Farm has machines for compute-intensive interactive work (e.g., compiling and linking C, C++, and FORTRAN code, interactive debugging, and interactive data analysis), batch production jobs, and tape access. For interactive work that is not compute-intensive, SLAC has a number of hosts for e-mail (e.g., pine), web browsing (e.g., firefox), general editing (e.g., emacs, vi), and other lightweight interactive work. Details are given in the tables below.

All SLAC systems are for use only by authorized users for SLAC business. Violators are subject to criminal and civil penalties. All activities may be monitored and recorded, and these records may be provided to law enforcement officials; by using these systems you expressly consent to such monitoring.

Workstations

The SLAC Computing Division provides public workstations in the lobbies of the Computer Building (B50). Currently these are Dell Windows machines with SSH and SCP clients and an X Window System server for access to the public UNIX systems.
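
For example, from one of these workstations (or from any machine with an SSH client and a local X server) you can open a session with X11 forwarding to one of the public UNIX pools listed below. The command below is only a sketch: it assumes a command-line OpenSSH client and uses the flora pool's fully qualified name from note 1; the graphical SSH clients installed on the Windows workstations provide an equivalent X11-forwarding option.

    ssh -Y your_account@flora.best.slac.stanford.edu    # -Y requests trusted X11 forwarding; flora is the pool for X applications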

Servers

flora (pool of 4)
    Software: Solaris 10
    Hardware: Sun Fire V210; dual 1000MHz UltraSPARC CPUs, 2GB memory
    Intended use: X applications and light interactive work (notes 1, 2)

tersk (pool of 4)
    Software: Solaris 10, Sun Studio 11
    Hardware: Sun Fire V240; dual 1000MHz UltraSPARC CPUs, 6GB memory
    Intended use: Interactive compute-intensive work (notes 1, 3)

iris (pool of 3)
    Software: RedHat Enterprise Linux 5 (32-bit kernel)
    Hardware: Sun Fire V20z; dual 1.8GHz Opteron CPUs, 2GB memory (note 7)
    Intended use: Light interactive work (notes 1, 2)

rhel5-32 (aka noric, yakut) (pool of 6)
    Software: RedHat Enterprise Linux 5 (32-bit kernel), GCC 4.1.2, GCC 4.4.7, glibc-2.5; /usr/work local scratch space
    Hardware: 2 CPU cores (3.07GHz Intel Xeon), 16GB memory (note 7)
    Intended use: Interactive compute-intensive work (notes 1, 3)

rhel5-64 (pool of 3)
    Software: RedHat Enterprise Linux 5 (64-bit kernel), GCC 4.1.2, GCC 4.4.7, GCC 4.8.2 (note 8), glibc-2.5; /usr/work local scratch space
    Hardware: 2 CPU cores (3.07GHz Intel Xeon), 16GB memory (note 7)
    Intended use: Interactive compute-intensive work (notes 1, 3)

rhel6-32 (pool of 3)
    Software: RedHat Enterprise Linux 6 (32-bit kernel), GCC 4.4.7, glibc-2.12; /usr/work local scratch space
    Hardware: 2 CPU cores (3.07GHz Intel Xeon), 16GB memory (note 7)
    Intended use: Interactive compute-intensive work (notes 1, 3)

rhel6-64 (pool of 13)
    Software: RedHat Enterprise Linux 6 (64-bit kernel), GCC 4.4.7, GCC 4.8.2 (note 8), glibc-2.12; /usr/work local scratch space
    Hardware: 2 CPU cores (3.07GHz Intel Xeon), 16GB memory (note 7)
    Intended use: Interactive compute-intensive work (notes 1, 3)

suncron (1 host)
    Software: Solaris 10
    Hardware: Sun T5120; quad-core 1.2GHz UltraSPARC-T2 CPU, 8GB memory
    Intended use: Host for user cron jobs (notes 2, 6)

lnxcron (1 host)
    Software: RedHat Enterprise Linux 5 (64-bit kernel)
    Hardware: Sun Fire X4150; dual quad-core 2.33GHz Intel Xeon E5410 CPUs, 4GB memory (note 7)
    Intended use: Host for user cron jobs (notes 2, 6)
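
To log in interactively, connect to a pool name rather than to an individual host; as described in note 1 below, a pool name resolves to the least-loaded machine in that pool. A minimal sketch, assuming an OpenSSH client and the name forms given in note 1 (the .best form for rhel6-64 follows the pattern note 1 shows for flora):

    ssh rhel6-64                            # least-loaded host in the rhel6-64 pool (short names resolve on the SLAC network)
    ssh rhel6-64.best.slac.stanford.edu     # fully qualified form to try if the short name does not resolve at your site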

Batch Workers

kiso (pool of 68)
    Software: RedHat Enterprise Linux 6 (64-bit kernel), GCC 4.4.7, glibc-2.12
    Hardware: Dell R410; dual hexa-core 2.66GHz Intel Xeon X5650 CPUs, 48GB memory (note 7)
    Intended use: Batch work only; no logins (note 4)

dole (pool of 38)
    Software: RedHat Enterprise Linux 6 (64-bit kernel), GCC 4.4.7, glibc-2.12
    Hardware: Dell R410; dual hexa-core 2.66GHz Intel Xeon X5650 CPUs, 24GB memory (note 7)
    Intended use: Batch work only; no logins (note 4)

hequ (pool of 192)
    Software: RedHat Enterprise Linux 5 (64-bit kernel), GCC 4.1.2, glibc-2.5
    Hardware: Dell R410; dual quad-core 2.93GHz Intel Xeon X5570 CPUs, 24GB memory (note 7)
    Intended use: Batch work only; no logins (note 4)

yili (pool of 25)
    Software: RedHat Enterprise Linux 5 (64-bit kernel), GCC 4.1.2, glibc-2.5
    Hardware: Sun Fire X4100; dual dual-core 2.2GHz Opteron 275 CPUs, 4GB memory (note 7)
    Intended use: Batch work only; no logins (notes 4, 5)

bullet (pool of 185)
    Software: RedHat Enterprise Linux 6 (64-bit kernel), GCC 4.4.7, glibc-2.12
    Hardware: Dell PowerEdge M620; dual 8-core 2.2GHz Intel Xeon E5-2660 CPUs, 64GB memory (note 7)
    Intended use: Batch work only; no logins (notes 4, 5)

fell (pool of 368)
    Software: RedHat Enterprise Linux 5 (64-bit kernel), GCC 4.1.2, glibc-2.5
    Hardware: Dell PowerEdge 1950; dual quad-core 2.66GHz Xeon X5355 CPUs, 16GB memory (note 7)
    Intended use: Batch work only; no logins (notes 4, 5)

Notes:

  1. The flora, tersk, iris, rhel6-64, rhel6-32, rhel5-32, rhel5-64, etc. names (and variants such as flora-old) log you in to the least-loaded machine in the corresponding pool. (If these names do not resolve at your site, try the fully qualified form, e.g., flora.best.slac.stanford.edu.) Suffixes such as "-new" and "-old" (e.g., flora-old) are used from time to time to identify pools running software other than the version currently recommended by Computing, e.g., for testing or for backwards compatibility. When no newer or older version is available, these hyphenated names remain synonyms for the corresponding base name for some period of time, but may then be retired until a different version is available again.
  2. CPU usage on these interactive machines is limited to 20% of the available CPU (or less), measured over 15-minute intervals.
  3. These are part of the SLAC UNIX Compute Farm and are for interactive compute-intensive work only. As a rule of thumb, a process is deemed non-interactive if it consumes a significant fraction of a CPU continuously for more than 12 hours. Running CPU-intensive processes in the background or through non-interactive mechanisms such as trscron, especially several such processes at the same time (either on a single machine or across several machines within the pool), is also considered non-interactive use. Such work should be moved to batch; otherwise it may be reniced to a lower priority or, if the abuse is repeated, killed, possibly without notice. In addition, any process that seriously interferes with the usability of the machine in other ways, e.g., by filling up most of memory or /tmp space, may be killed with little or no notice.
  4. These are part of the SLAC UNIX Compute Farm. Various scientific programs (namely FGST/GLAST and BaBar) have priority on a large subset of these machines. Other users may use them, but in some cases running jobs may be preempted, or users may be asked to stop, if the resources are needed by these science programs. Science program priorities will influence system availability and upgrade schedules.
  5. 32-bit or 64-bit machines may be specifically requested through the use of a batch resource. Otherwise a job will be dispatched on the next available worker.
  6. User cron jobs are only permitted on the cron servers, suncron and lnxcron, in order to avoid spikes of poor performance for interactive users on our other public machines.
  7. Memory use is limited to 50% of addressable memory by default, or to a maximum of 80% with the limit or ulimit shell command (see the example after these notes).
  8. GCC 4.8.2 is available via the Developer Toolset (DTS) and Software Collections (SCL).
    Here are two ways to use DTS:
    scl enable devtoolset-2 'bash'       # start a new shell with the DTS toolchain (including GCC 4.8.2) on your PATH
    scl enable devtoolset-2 'gcc ...'    # run a single gcc command under DTS without starting a new shell
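
As an illustration of note 7, here is one way to raise a shell's memory limit toward the 80% ceiling on a 16GB interactive node. This is only a sketch: it assumes the cap is applied as a per-process virtual-memory limit (80% of 16GB is roughly 13421772 kilobytes); check your shell's documentation for the exact resource names.

    ulimit -v                      # bash/ksh: show the current virtual-memory limit, in kilobytes
    ulimit -v 13421772             # bash/ksh: raise it to ~12.8GB, about 80% of a 16GB node
    limit vmemoryuse 13421772      # csh/tcsh equivalent (value also in kilobytes)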


Owner: Karl Amrhein