Controls Computing Infrastructure for Test Facilities
The ARD Test Facilities, used to develop and test solutions for accelerator systems at SLAC, are mostly controlled by EPICS. The current programs in Test Facilities include ASTA, XTA, GTF, and NLCTA. The new controls computing infrastructure is designed and implemented to support these programs in a centralized, scalable fashion, and is based on Linux (RHEL6 64) and SCCS AFS. The first program to use the new infrastructure was XTA, followed by GTF and ASTA.
The controls computing infrastructure for Test Facilities resides on an IFZ (Internet Free Zone) network (ACCTESTFAC), which allows full access to the SCCS resources the infrastructure depends on, yet protects the infrastructure from malicious attacks from the Internet. It is independent of the infrastructures for LCLS and FACET operations, which are completely standalone and provide high-availability services as required. The networking and computing architecture is shown in the diagram below.
The controls infrastructure is hosted on a number of RHEL6 64 servers listed in the table below:
nodename        | Functions                                                                                | OS       | User Access
testfac-srv01   | interactive work; build/release platform                                                 | RHEL6 64 | yes
testfac-srv02   | HLA applications launched from Controls Panels (e.g. astahome)                           | RHEL6 64 | no
testfac-daemon1 | soft IOCs; screen host for IOC consoles                                                  | RHEL6 64 | no
testfac-daemon2 | daemon host (Message Logging, alh, ChannelWatcher, Xvfb, PV Gateways, etc.); cron system | RHEL6 64 | no
testfac-archapp | archiver sampling system; archiver data server                                           | RHEL6 64 | no
testfac-camsrv* | a cluster of camera servers in the field                                                 | RHEL6 64 | no
Users should log in to testfac-srv01 as tfprod to launch xtahome or astahome. tfprod is the production account for Test Facilities and has a Matlab license. The entire environment for Test Facilities should be set up automatically upon login as tfprod:
$ ssh testfac-srv01 -l tfprod
Guided by the controls panels in xtahome or astahome, you are set to fly ...
If you are new to Test Facilities, it is well worth consulting Cecile Limborg for a few minutes: she is the founder of the controls infrastructure for Test Facilities and its most experienced user.
Please ask Ken for help if you have problems accessing tfprod.
For any other issue with your personal SLAC account (e.g., a Matlab license failure), please consult unix-admin@slac directly. All the servers listed in the table above are managed by Controls. If there is an issue with any other machine (such as ar-asta*) not listed in the table, please contact unix-admin@slac (which manages SLAC-wide Linux machines) or ithelp@slac (which manages SLAC-wide Windows machines) directly for a quick response.
For developers (e.g. IOC engineers who want to check out and test IOCs for Test Facilities), please be sure to read the rest of this document.
You should log in to testfac-srv01 with your SLAC UNIX account to do your work for Test Facilities. The rest of the servers are dedicated to hosting the controls services, and only authorized persons have login access. All the daemon services are managed by the acctf account. Please check with Ken for access if you need to manage the controls daemon services.
If your login process does not include the environment setup for Test Facilities, you can set it up after login (make sure you are in a bash shell) with
$ source /afs/slac/g/acctest/tools/script/ENVS_acctest.bash
To make this automatic upon login, add the following lines to your .bashrc, so you don't have to source the script each time you log in to testfac-srv01:
# environment setup
case $HOSTNAME in
  testfac-*) source /afs/slac/g/acctest/tools/script/ENVS_acctest.bash ;;
  *) ;;  # source your_own_environment_setup on other hosts
esac
Then launch the controls applications, e.g. xtahome.
The root filesystem for applications is SCCS AFS based and is in /afs/slac/g/acctest, accessible via $ACCTEST_ROOT; the root filesystem for data is SCCS NFS based and is in /nfs/slac/g/acctest, accessible via $ACCTEST_DATA. The communication is shown in the diagram below.
The filesystem structure, similar to LCLS and FACET production, is described in the table below.
filesystem           | Purpose                            | ENV variable | Comments
/afs/slac/g/acctest/ | Applications                       | ACCTEST_ROOT | *
    epics/           |                                    |              |
    rtems/           | RTEMS                              |              | Centrally managed by Till for all AFS based programs
    physics/         |                                    |              |
    tools/           | Controls tools and configurations  |              |
    packages/        | Third party packages               |              | Centrally managed by Jingchen et al. for AFS based programs
/nfs/slac/g/acctest  | Data                               | ACCTEST_DATA | *
* Unfortunately, the desired path names with testfac (e.g. /afs/slac/g/testfac and /nfs/slac/g/testfac) had all been taken before this infrastructure was born, so after consulting with Test Facilities we ended up using acctest.
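As a quick orientation, the two roots and their environment variables can be sketched as below. This is only an illustration using the values from the table above; on testfac hosts these variables should already be set by the login environment.

```shell
# The two filesystem roots for Test Facilities (values from the table above).
export ACCTEST_ROOT=/afs/slac/g/acctest   # applications (SCCS AFS)
export ACCTEST_DATA=/nfs/slac/g/acctest   # data (SCCS NFS)
# ls "$ACCTEST_ROOT"   # on testfac hosts: epics/ rtems/ physics/ tools/ packages/
```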
As usual, the IOC boot area is in $IOC, the data in $IOC_DATA, and the IOC application area in $EPICS_IOC_TOP.
The EPICS environment for soft and hard IOCs is defined in $IOC/All/Acctest in soft_pre_st.cmd and pre_st.cmd:
[jingchen@testfac-srv01 Acctest]$ ls
pre_st.cmd screenrc soft_pre_st.cmd
post_st.cmd screeniocs soft_post_st.cmd
The Channel Access server and client variables are defined as:
Environment Variable   | Value                  | Comments
EPICS_CA_SERVER_PORT   | 5058                   | For Channel Access client
EPICS_CA_REPEATER_PORT | 5059                   | For Channel Access client
EPICS_CAS_SERVER_PORT  | EPICS_CA_SERVER_PORT   | For Channel Access server
EPICS_CAS_BEACON_PORT  | EPICS_CA_REPEATER_PORT | For Channel Access server
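For example, a Channel Access client on the Test Facilities network would be configured as sketched below (port values from the table above; normally the login environment sets these for you):

```shell
# Channel Access client configuration for Test Facilities
# (port values from the table above; normally set by the login environment).
export EPICS_CA_SERVER_PORT=5058
export EPICS_CA_REPEATER_PORT=5059
echo "CA clients will search on port $EPICS_CA_SERVER_PORT"
```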
The hard IOCs will boot from afsnfs2, and you may need the following information:
Server IP: 188.8.131.52
Gateway IP: 172.27.96.1
DNS server 1: 184.108.40.206
DNS server 2: 220.127.116.11
NTP server 1: 18.104.22.168
NTP server 2: 22.214.171.124
NFS and DHCP:
As shown in the diagram above, Test Facilities IOCs boot from afsnfs2, and their data (e.g. save/restore, iocInfo, logging) is written to SCCS NFS. Please send a request directly to unix-admin to enable your hard IOC's access to SCCS NFS.
Please send a request to unix-admin for DHCP service if needed. Please consult Ken Brobeck if there are any network-service related issues.
IOC engineers may use the IOC Development and Release Procedures listed in References below. Just as for LCLS and FACET, when an IOC is born, please send a message to Ken, Ernest, and Jingchen, who will create $IOC/nodename and $IOC_DATA/nodename and integrate your IOC into Test Facilities.
If there is any EPICS related development and release issue, please consult Ernest Williams. For all production related issues, please consult Jingchen Zhou.
PV Gateway:
A PV gateway is set up to provide public read-only access to Test Facilities PVs. One can read Test Facilities PVs from any SLAC public network if the environment is defined as follows:
$ export EPICS_CA_ADDR_LIST=testfac-daemon2
$ export EPICS_CA_SERVER_PORT=5048
$ export EPICS_CA_REPEATER_PORT=5049
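As a minimal sketch, the gateway settings can be combined with EPICS_CA_AUTO_ADDR_LIST=NO (a standard Channel Access client variable) so the client queries only the gateway; the caget call is shown commented out, and the PV name is hypothetical:

```shell
# Read-only access through the Test Facilities PV gateway.
# Assumes the EPICS base client tools (e.g. caget) are on your PATH.
export EPICS_CA_ADDR_LIST=testfac-daemon2
export EPICS_CA_AUTO_ADDR_LIST=NO   # query only the gateway, not the local subnet
export EPICS_CA_SERVER_PORT=5048
export EPICS_CA_REPEATER_PORT=5049
# caget XTA:HYPOTHETICAL:PV         # hypothetical PV name; read-only via the gateway
```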
Based on the agreement with Test Facilities, there is no AIDA support for Test Facilities. To access Archiver data from Matlab, one can use one of the following methods recommended by Murali:
1) Use ArchiveViewer and export to Matlab
2) Use ArchiveExport (this requires access to the main index file /nfs/slac/g/cd/tf_archiver/arch_acctf/current_and_all_index).
3) The ArchiveViewer uses the XML-RPC protocol, so one can use Python to fetch data via the XML-RPC URL http://testfac-archeng.slac.stanford.edu/cgi-bin/ArchiveDataServer.cgi and then pass it to Matlab.
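The XML-RPC query in method 3 can also be sketched from the shell with curl. This assumes the server follows the standard Channel Archiver data-server protocol, where archiver.info reports server capabilities; the curl call is commented out because it must be run from a network that can reach testfac-archeng.

```shell
# Sketch: query the archiver's XML-RPC data server from the shell.
# archiver.info is a standard Channel Archiver data-server method (assumption:
# this server follows the standard protocol).
URL=http://testfac-archeng.slac.stanford.edu/cgi-bin/ArchiveDataServer.cgi
cat > /tmp/archreq.xml <<'EOF'
<?xml version="1.0"?>
<methodCall><methodName>archiver.info</methodName><params/></methodCall>
EOF
# curl -s -H 'Content-Type: text/xml' --data @/tmp/archreq.xml "$URL"
```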
Please consult Justin May for other solutions.
References:
1. Test Facility Controls Computing Infrastructure Design
2. IOC Development and Release Procedure (soft)
3. IOC Development and Release Procedure (hard)
4. When an IOC is born
5. Production accounts
Contact: Jingchen Zhou (X4661, jingchen@slac). Last edited on 4/18/13 .