Controls Computing Infrastructure for Test Facilities

 

Overview

The ARD Test Facilities, used to develop and test solutions for accelerator systems at SLAC, are mostly controlled by EPICS. The current facilities include XTA, GTF, ASTA, and NLCTA, among others. The new controls computing infrastructure for the facilities, designed to support all programs in the facilities in a centralized and scalable fashion, is based on Linux (RHEL5) and SCCS AFS. The first program to use the new infrastructure was XTA, followed by GTF and ASTA. We plan to migrate the remaining facilities (e.g., NLCTA) to the new infrastructure.

The controls computing infrastructure for Test Facilities resides on an IFZ (Internet Free Zone) network (ACCTESTFAC), which allows full access to the SCCS resources the infrastructure depends on, yet protects the infrastructure from malicious attacks from the Internet. It is independent of the infrastructures for LCLS and FACET operations, which are completely standalone and provide high-availability services as required. The networking and computing architecture is shown in the diagram below.

 

 

Controls Servers

The controls infrastructure is hosted on a number of RHEL5 servers listed in the table below:

nodename          Functions                                                OS        User Access
testfac-srv01     - interactive work                                       RHEL5-32  yes
                  - build/release platform
testfac-srv02     - HLA applications launched from Controls Panels         RHEL5-32  no
                    (e.g. xtahome)
testfac-daemon1   - soft IOCs                                              RHEL5-32  no
                  - screen host for IOC consoles
testfac-daemon2   - daemon host (CMLOG, alh, ChannelWatcher, Xvfb,         RHEL5-32  no
                    PV Gateway, etc.)
                  - cron system
testfac-archeng   - archiver sampling system                               RHEL5-32  no
                  - archiver data server

Users should log in to testfac-srv01 as tfprod to launch xtahome or astahome. tfprod is the production account for Test Facilities and has a Matlab license. The full Test Facilities environment should be set up automatically upon login as tfprod:

$ ssh testfac-srv01 -l tfprod

$ xtahome&

Guided by the controls panels in xtahome or astahome, you are set to fly ...

If you are new to Test Facilities, it is well worth consulting Cecile Limborg for a few minutes; she is the founder of the controls infrastructure for Test Facilities and its most experienced user.


Please ask Ken for help if you have a problem accessing tfprod.

For any other issue with your personal SLAC account (e.g., a Matlab license failure), please consult unix-admin@slac directly. All the servers listed in the table above are managed by Controls; if there is an issue with any other machine (such as ar-asta*) not listed in the table, please contact unix-admin@slac (which manages SLAC-wide Linux machines) or ithelp@slac (which manages SLAC-wide Windows machines) directly for a quick response.

Developers (e.g. IOC engineers who want to check out and test IOCs for Test Facilities) should be sure to read the rest of this document.

Environment Setup

You should log in to testfac-srv01 with your SLAC UNIX account to do your work for Test Facilities. The rest of the servers are dedicated to hosting the controls services, and only authorized persons have login access. All the daemon services are managed by the acctf account. Please check with Ken for access if you need to manage the controls daemon services.

If your login process does not include the environment setup for Test Facilities, you can set it up after login (make sure you are in a bash shell) with

$ source /afs/slac/g/acctest/tools/script/ENVS_acctest.bash

To make this automatic upon login, add the following lines to your .bashrc so you don't have to type "source /afs/slac/g/acctest/tools/script/ENVS_acctest.bash" each time you log in to testfac-srv01:


#####################
# environment setup
#####################
case $HOSTNAME in
"testfac-srv01")
    source /afs/slac/g/acctest/tools/script/ENVS_acctest.bash
    ;;
"other-machine")
    # source your_own_environment_setup
    ;;
*)
    source /afs/slac/g/lcls/tools/script/ENVS.bash
    ;;
esac

Then launch, e.g., xtahome.
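As a quick sanity check after sourcing the script, you can verify that the Test Facilities variables described in the next section are set before launching an application (a minimal sketch; the exact set of variables exported by ENVS_acctest.bash is assumed here):

$ source /afs/slac/g/acctest/tools/script/ENVS_acctest.bash
$ echo $ACCTEST_ROOT      # expected to point to /afs/slac/g/acctest
$ echo $ACCTEST_DATA      # expected to point to /nfs/slac/g/acctest
$ xtahome &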

Filesystem Structure

The root filesystem for the applications is SCCS AFS based and is in /afs/slac/g/acctest, accessible via $ACCTEST_ROOT; the root filesystem for data is SCCS NFS based and is in /nfs/slac/g/acctest, accessible via $ACCTEST_DATA. The communication is shown in the diagram below.

 

 

The filesystem structure, similar to LCLS and FACET production, is described in the table below.

filesystem               purposes                            ENV variable   Comments
/afs/slac/g/acctest/     Applications                        ACCTEST_ROOT   *
    epics/               EPICS applications
    rtems/               RTEMS                                              Centrally managed by Till for all AFS based programs.
    physics/             HLA applications
    tools/               Controls tools and configurations
    packages/            Third party packages                               Centrally managed by Jingchen et al. for AFS based programs.
/nfs/slac/g/acctest      Data                                ACCTEST_DATA   *

* Unfortunately, the desired path names based on testfac (e.g. /afs/slac/g/testfac and /nfs/slac/g/testfac) were already taken before this infrastructure was created, so after consulting with Test Facilities we ended up using acctest.

As usual, the IOC boot area is in $IOC, the data in $IOC_DATA, and the IOC application area in $EPICS_IOC_TOP.
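For orientation, here is a hedged sketch of browsing these areas from testfac-srv01 using the environment variables above (assuming the Test Facilities environment has been sourced; the exact subdirectory contents may differ):

$ ls $ACCTEST_ROOT         # application root on AFS: epics/, rtems/, physics/, tools/, packages/
$ ls $ACCTEST_DATA         # data root on NFS
$ ls $IOC/All/Acctest      # common IOC startup files (see the next section)
$ ls $IOC_DATA             # per-IOC data areas (save/restore, iocInfo, logging)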

EPICS Environment

The EPICS environment for soft and hard IOCs is defined in $IOC/All/Acctest in soft_pre_st.cmd and pre_st.cmd:

[jingchen@testfac-srv01 Acctest]$ ls
pre_st.cmd  screenrc          soft_pre_st.cmd
post_st.cmd  screeniocs  soft_post_st.cmd

The Channel Access server and client variables are defined as:


Environment Variable      Value                     Comments
EPICS_CA_SERVER_PORT      5058                      For Channel Access clients
EPICS_CA_REPEATER_PORT    5059                      For Channel Access clients
EPICS_CAS_SERVER_PORT     EPICS_CA_SERVER_PORT      For Channel Access servers
EPICS_CAS_BEACON_PORT     EPICS_CA_REPEATER_PORT    For Channel Access servers
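As a hedged example, on testfac-srv01 (with the Test Facilities environment sourced) you can confirm the client-side settings and try a read; TEST:FAC:EXAMPLE:PV is a hypothetical PV name used only for illustration:

$ env | grep EPICS_CA
$ caget TEST:FAC:EXAMPLE:PV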


The hard IOCs will boot from afsnfs2, and you may need the following information:

Server IP:  134.79.19.29
Gateway IP: 172.27.96.1

Netmask: 255.255.252.0

DNS server 1: 134.79.18.40
DNS server 2: 134.79.18.41

NTP server 1: 134.79.18.40
NTP server 2: 134.79.18.41
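A few hedged sanity checks from a host on the IOC network can confirm these addresses are reachable (a sketch; nslookup and ntpdate may not be installed everywhere, and ICMP may be filtered on some subnets):

$ ping -c 3 134.79.19.29                     # afsnfs2, the boot/NFS server
$ nslookup slac.stanford.edu 134.79.18.40    # query DNS server 1
$ ntpdate -q 134.79.18.40                    # query NTP server 1 without setting the clock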

NFS and DHCP:

As shown in the diagram above, Test Facilities IOCs boot from afsnfs2 and their data (e.g. save/restore, iocInfo, logging) is written to SCCS NFS. Please send a request directly to unix-admin to enable your hard IOC's access to SCCS NFS.

Please send a request to unix-admin for DHCP service, if needed. Please consult Ken Brobeck if there are any network-service-related issues.

IOC engineers may use the IOC Development and Release Procedures listed in the References below. Just like LCLS and FACET, when an IOC is born, please send a message to Ken, Ernest and Jingchen, who will create $IOC/nodename and $IOC_DATA/nodename and integrate your IOC into the Test Facilities.

If there is any EPICS-related development or release issue, please consult Ernest Williams. For all production-related issues, please consult Jingchen Zhou.

PV Gateway:

A PV gateway is set up to provide public read-only access to Test Facilities PVs. One can read the Test Facilities PVs from any SLAC public network if the environment is defined as follows:
 
$ export EPICS_CA_ADDR_LIST=testfac-daemon2
$ export EPICS_CA_SERVER_PORT=5048
$ export EPICS_CA_REPEATER_PORT=5049
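For example, a read through the gateway might then look like this (TEST:FAC:EXAMPLE:PV is a hypothetical PV name; writes will fail since the gateway is read-only):

$ export EPICS_CA_AUTO_ADDR_LIST=NO    # optional: keep the client from searching other networks
$ caget TEST:FAC:EXAMPLE:PV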

AIDA:

Based on the agreement with Test Facilities, there is no AIDA support for Test Facilities.  To access Archiver data from Matlab, one can use one of the following methods recommended by Murali:

1) Use ArchiveViewer and export to Matlab.
2) Use ArchiveExport (this requires access to the main index file /nfs/slac/g/cd/tf_archiver/arch_acctf/current_and_all_index).
3) The ArchiveViewer uses the XML-RPC protocol, so you can use Python to fetch data from the XML-RPC URL http://testfac-archeng.slac.stanford.edu/cgi-bin/ArchiveDataServer.cgi and then pass it to Matlab (a quick connectivity check is sketched below).
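As a hedged connectivity check for method 3, one can query the data server URL directly; this sketch assumes the server implements the standard Channel Archiver XML-RPC method archiver.info:

$ curl -s -H 'Content-Type: text/xml' \
       -d '<?xml version="1.0"?><methodCall><methodName>archiver.info</methodName><params/></methodCall>' \
       http://testfac-archeng.slac.stanford.edu/cgi-bin/ArchiveDataServer.cgi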

Please consult Justin May for other solutions.

References:

1. Test Facility Controls Computing Infrastructure Design
2. IOC Development and Release Procedure (soft)
3. IOC Development and Release Procedure (hard)
4. When an IOC is born
5. Production accounts

 

 


Contact: Jingchen Zhou (X4661, jingchen@slac). Last edited on 4/18/13.