1.1 BASIC ARCHITECTURE
    1.1.1 Main Drive Line
    1.1.2 Communications Line
    1.1.3 Computing Nodes
    1.1.4 Consoles
    1.1.5 Software Organization
1.2 MAJOR FACILITIES
    1.2.1 Timing And Synchronization
    1.2.2 Modulator And Klystron Control
    1.2.3 Beam Position Monitors
    1.2.4 Magnet Control
1.3 DATA STRUCTURES
    1.3.1 Introduction
    1.3.2 The Database
    1.3.3 Configurations
    1.3.4 The T-Matrix
1.4 COMMUNICATIONS STRUCTURES
    1.4.1 SLCNET
    1.4.2 Message Service
    1.4.3 Error Reporting
1.5 HOST ARCHITECTURE
    1.5.1 Hardware Description
    1.5.2 SLC Control Program
    1.5.3 Paranoia
    1.5.4 Other Host Processes
1.6 MICROCLUSTER ARCHITECTURE
    1.6.1 Hardware Description
    1.6.2 Software Organization
    1.6.3 Software Development

CHAPTER 1

THE SLC CONTROL SYSTEM

The SLC control system has as a basic goal the collision of high intensity bunches of electrons and positrons at the final focus. It must achieve this goal while enhancing the Linac's ability to fill the storage rings and deliver other experimental beams. The basic philosophy of the system is to generate configurations of machine parameters based on Quantum Electrodynamics and to utilize closed loop feedback networks to correct errors.

1.1 BASIC ARCHITECTURE

All control signals flow along two major paths. The first is the Main Drive Line (MDL), which transmits all phase and fiducial timing information. The second is the Communications Line (Comm Line), a broadband cable carrying all information that can tolerate variations in propagation delay of a few nanoseconds.

1.1.1 Main Drive Line

The MDL is the existing temperature-stabilized coaxial line. It carries the 476 MHz phase reference signal at a power level of less than 100 watts. During normal operation, this signal is continuously present on the MDL. Approximately one millisecond before injection, a fiducial consisting of one double-amplitude cycle is coupled onto the MDL. This signal appears at a nominal rate of 360 Hz, with detailed adjustments of its timing accommodating the AC lines, storage ring phases, and other constraints. The SLC timing system is based on a series of programmable modules which are synchronized by the fiducial and count the 476 MHz waveform to produce appropriately timed output signals.

1.1.2 Communications Line

The Comm Line is a CATV hard coaxial line with a mid-split system of cable amplifiers feeding an up-converter in an inverted tree topology. Approximately 140 MHz of useful, bidirectional channel space is available, divided by industry custom into 6 MHz subchannels. These subchannels have been allocated to computer-computer networks, terminal-computer networks, high speed command broadcast links, private high bandwidth feedback control loops, general purpose video channels, audio communications channels, and Comm Line diagnostic channels.

1.1.3 Computing Nodes

The SLC control system uses a distributed system of about 50 computing nodes to monitor and control the machine. The logical topology is a star network with a Host machine coordinating the microclusters. The task organization for the microclusters is geographical rather than functional; each cluster controls all functions for a given area rather than one function for a larger area.

The system employs a hierarchical distribution of intelligence. Where possible, first level processing of signals is performed by the I/O modules themselves. The microclusters perform the conversion to and from engineering units and execute standard algorithms in response to commands from the Host computer. The Host machine determines the operational configuration of the accelerator, maintains a centralized database, generates periodic checking and error reporting, and handles all operator interface functions such as display formatting and command transmission.

1.1.4 Consoles

The SLC control system interface to people is provided by multiple independent control consoles. These consoles are (in principle) portable and receive information only from the Comm Line. Two different levels of consoles have been designed: Consoles On Wheels (COWs), which have a full complement of interface hardware, and CALFs, which are small COWs that handle a subset of COW functionality at a small fraction of the COW investment. The COW consists of a microcluster package, a high resolution color graphics display, a touch panel, a set of general purpose knobs, a computer terminal, a video monitor, and an audio intercom. The COW connects only to the Comm Line and AC power.


The CALF consists of a standardized terminal and an audio intercom coupled to a terminal-Host network interface. Interface software permits the terminal to emulate the display screen, the touch panel, and the real terminal. In many applications, particularly development, the audio link is superfluous and a simple terminal can emulate all the console functionality.

1.1.5 Software Organization

The software for the SLC control system has three major levels: System software, Facilities, and Applications. The System level includes not only language and network support but also the software that defines the underlying architecture. It includes standardized routines for terminal I/O, touch panel communication, knob handling, database access, message services, error reporting, and display generation. These routines are carefully layered to isolate the other software levels from the details of network protocols. They provide structured and disciplined access to system resources.

The SLC Facilities provide standard functions and displays for each of the major hardware systems, including Timing, Klystrons, Beam Position Monitors, Magnets, and Status. Each Facility has a dedicated job in the microclusters; a collection of control, display, and I/O routines in the Host; and a well defined set of functions and error codes for message communication. System and Facility code is designed, written, and maintained by the SLC software group. It encompasses a broad set of tools for use by the Applications programs.


Applications code provides the customization necessary for the individual machine groups while making extensive use of the System and Facilities software. The Applications level includes specific database entries, touch panel layouts, tailored displays, and functions not provided by the standard Facilities. These aspects of the software are the responsibility of the operations department, and they are typically implemented by applications programmers assigned to the individual machine groups.

1.2 MAJOR FACILITIES

1.2.1 Timing And Synchronization

The machine produces a pulse defined by a "beam code." The beam code is selected and broadcast by a dedicated microcluster called the Master Pattern Generator (MPG) about 2 milliseconds before the fiducial signal is put on the Main Drive Line. The beam code is broadcast on a dedicated subchannel of the Comm Line and is demodulated and received by the Pattern Receiver Interrupt Module (PRIM) in each microcluster about 50 microseconds after transmission by the MPG. Upon receipt of the beam code, the PRIM interrupts the micro to execute a routine that loads timing data into a set of Programmable Delay Units (PDUs). The PDU is a Camac module that counts the fourth sub-harmonic of the 476 MHz rf on the MDL, beginning at the receipt of the fiducial. When the count is reached, the PDU outputs a logic pulse. The PDU can be programmed between linac pulses either to produce no pulse or to produce one at a specified time after the fiducial. Alternatively, the PDU can be programmed to produce pulses after each fiducial without subsequent programming. The PDU signals are used to control all synchronization on the machine; in particular, klystron thyratron triggers, RF diagnostic triggers, beam position monitor triggers, and kicker triggers are derived from these signals.

The beam code as broadcast is a two byte message. The first byte is the actual code and the second is a command byte used to specify other synchronous activities for the system. Thus 256 different beams can be selected by the beam code. These beams are specified by a set of Beam Matrices stored in the micro memory or in the PDU itself (Version II). Each row of the Beam Matrix corresponds to a particular beam code, and the elements of each row are the PDU timings that correspond to a particular beam. Specifically, each row contains the data describing whether a particular klystron will produce a null, accelerate, or standby pulse for a given beam. Slow changes to a beam, for example changing a klystron from accelerate to standby timing, are accomplished by rewriting the appropriate data in the Beam Matrix.

The Master Pattern Generator is responsible for generating the proper sequence of beam codes in response to general strategies received from higher level computing nodes and to lower level data indicating experiment and machine readiness. The MPG is also responsible for collating diagnostic requests from higher level programs with the sequence of beam codes it is generating and attaching appropriate command codes to the beam code. For example, a process might request a beam position monitor scan of the trajectory of a particular pulse down the machine; such a scan is synchronized by PDUs loaded in response to a command code.
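
The pulse-to-pulse selection logic can be sketched in C. This is a minimal illustration rather than the production code: the channel count, the pdu_load() helper, and the routine name are assumptions; only the 256-row Beam Matrix and the PRIM interrupt flow come from the text above.

    #include <stdint.h>

    #define NUM_BEAM_CODES   256   /* one row per beam code           */
    #define NUM_PDU_CHANNELS 16    /* hypothetical channels per micro */

    /* Beam Matrix: each row holds the PDU timings (in ticks) for one
     * beam code; slow changes rewrite entries in this table. */
    static uint32_t beam_matrix[NUM_BEAM_CODES][NUM_PDU_CHANNELS];

    /* Assumed helper that writes a count into one PDU channel. */
    extern void pdu_load(int channel, uint32_t ticks);

    /* Sketch of the PRIM interrupt routine: on receipt of a broadcast
     * beam code, load the matching Beam Matrix row into the PDUs
     * before the fiducial arrives on the MDL. */
    void prim_interrupt(uint8_t beam_code, uint8_t command_byte)
    {
        const uint32_t *row = beam_matrix[beam_code];
        for (int ch = 0; ch < NUM_PDU_CHANNELS; ch++)
            pdu_load(ch, row[ch]);
        /* command_byte would arm other synchronous activities, e.g.
         * a BPM trajectory scan (see section 1.2.3). */
        (void)command_byte;
    }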

1.2.2 Modulator And Klystron Control

Successful operation of the SLC implies very precise control and monitoring of the phase and amplitude of the RF power. A set of instruments has been developed to monitor the klystron phase relative to a reference output of the X6 frequency multiplier; the rf forward power; and various modulator parameters on each pulse. Additionally, this package controls and monitors the modulator and klystron, including the phase shifter and attenuators. The rf phase and amplitude video signals are developed in an rf head, using double balanced mixers and diodes. The video signals are gated and digitized on an adjacent digital board. Digital signals from this board are processed by a Parallel Input Output Processor (PIOP). The PIOP is a multi-purpose, microprocessor-controlled (Intel 8088) Camac module. The PIOP also controls a set of peripherals constituting the Modulator-Klystron (MK) package. The MK will handle interlocks and protection as well as monitoring and control.

1.2.3 Beam Position Monitors

A Beam Position Monitor (BPM) system allows a "snapshot" observation of the transverse position of a bunch as it transits the SLC. The basic monitor is a set of four striplines at a radius of approximately one cm. The four signals from a monitor are processed to produce two difference signals and a sum; the bunch displacement is then proportional to the difference over the sum. The processor is gated by the PDU to identify the bunch to about 20 ns; the processor then generates an internal gate to integrate one half of the bipolar stripline signals. The BPMs are gated and processed in response to a command broadcast by the MPG. The microclusters actually cause the PDUs to generate gates in response to one command and then process the data in response to a second command. The micros reduce the data to transverse positions in millimeters and asynchronously transmit the data to the requesting process using the Message services.
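
The difference-over-sum reduction can be written in a few lines of C. This is schematic only: the variable names and the scale factor k are assumptions, and the real processing would also apply calibration data from the database.

    /* Sketch of the BPM difference-over-sum reduction.  The four
     * integrated stripline signals yield two difference signals and a
     * sum; the displacement is proportional to difference over sum.
     * k is a scale factor of order the stripline radius. */
    typedef struct { double x_mm; double y_mm; } bpm_position;

    bpm_position bpm_reduce(double v_left, double v_right,
                            double v_top, double v_bottom, double k)
    {
        bpm_position p;
        p.x_mm = k * (v_right - v_left) / (v_right + v_left);
        p.y_mm = k * (v_top - v_bottom) / (v_top + v_bottom);
        return p;
    }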

1.2.4 Magnet Control

The magnet control facility consists of general purpose command and display routines in the Host and setting and monitoring routines in the microcluster. All parametric information, including magnetization polynomials, power supply transfer functions, channel assignments, tolerances, and set points, is stored in the database. The microcluster has a repertoire of five commands which are executed upon their reception from a SCP, PARANOIA, or (later) FEEDBACK (a sketch of the Trim iteration follows the list):

1. CHECK sets status lists indicating whether set points and actual values are within Check tolerances.

2. TRIM sets actual values to within Trim tolerances, iterating if necessary.

3. CALIBRATE determines the power supply transfer function and checks whether it is within tolerance of the database nominal value.

4. STANDARDIZE cycles a magnet so that it is at a known point on its hysteresis curve.

5. PERTURB sets a magnet current by extrapolation from its current value without using the readback system. Perturb is used for making changes small compared to setting tolerances for some feedback applications and for hand knobbing of magnets.
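
As an illustration, the TRIM command's iteration can be sketched in C. The helper names, the return convention, and the correction step are assumptions; the loop-until-within-tolerance behavior is what the text describes.

    #include <math.h>

    /* Assumed readback and adjustment helpers for one magnet channel. */
    extern double magnet_readback(int device);
    extern void   magnet_adjust(int device, double delta);

    /* Sketch of TRIM: iterate until the actual value is within the
     * Trim tolerance of the set point.  In the real system the set
     * point and tolerance come from the database.  Returns 0 on
     * success. */
    int magnet_trim(int device, double setpoint, double tolerance,
                    int max_iter)
    {
        for (int i = 0; i < max_iter; i++) {
            double error = setpoint - magnet_readback(device);
            if (fabs(error) <= tolerance)
                return 0;                 /* within Trim tolerance */
            magnet_adjust(device, error); /* apply the correction  */
        }
        return -1;  /* did not converge; would be reported as an error */
    }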



1.3 DATA STRUCTURES

1.3.1 Introduction

Data for the SLC control system is stored in three interrelated data structures. The first, called the database, stores static and quasi-static information about the machine. This information describes all control connections, device characteristics (e.g. magnetization polynomial fits), and quasi-static machine characteristics (e.g. values of magnetic fields). The database is hierarchical, identifying devices by symbolic names, locations, and sequence numbers. Actual data is retrieved by reference to secondary symbolic device attributes. No attempt is made to store information in the database relevant to only one pulse of the machine.

Information organizing the quasi-static values of large sets of devices is stored as "configurations". A configuration is a formatted file identifying specific devices, secondary controllable attributes, and values for these attributes. The canonical example of a configuration is a set of explicitly identified magnets and their field values describing an operable lattice on the machine. Standardized facilities have been developed for the manipulation of configurations, e.g. saving or restoring configurations delineated by sector, region, or full machine.

The last data structure is used to configure the machine to a previously developed "Beam Definition" on a pulse to pulse basis. This data structure is called the T-Matrix and can hold data for all the timing elements of the machine (PDUs) for up to 256 separate beams.



1.3.2 The Database

The database is a structure designed explicitly for the distributed control architecture of the SLC control system. It consists of a master section in the Host machine that is divided into sections for each microcluster. These sections are further divided into supertypes according to access requirements. The supertypes are 1) internal pointers (write accessible only by the database generation code), 2) parametric data (write accessible only by the database generation code), 3) control data (write accessible only by the Host), 4) returned data (write accessible only by the microclusters), and 5) local data (not accessible by the microclusters). Supertype sections 1 through 4 are maintained in each microcluster. Data updating of a non-local section is done over the network in response to specific update requests or "out of tolerance" conditions. Data transfers between the microcluster and the Host are managed by a DataBase EXecutive process (DBEX). Proper access to the database is provided by a set of structured routines that include transparent updating of non-local sections and list facilities for multiple, rapid accesses.

Database generation is an offline activity and has two major phases: definition of the schema, i.e. the definition of primary and secondary names and their attributes; and declaration of the existence of specific devices with values for their secondary characteristics. In this approach, the design of the first phase is part of the control system design and is critical to basic operation of the control facilities. The second phase is an operations task, essentially ensuring that actual devices are properly described.
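
The access rules above suggest a layout along the following lines. The C names are illustrative assumptions; the five supertypes and their write permissions come directly from the text.

    /* Supertypes of a microcluster's database section, with the write
     * access each one permits. */
    typedef enum {
        DB_POINTERS,    /* internal pointers: written only at generation  */
        DB_PARAMETRIC,  /* parametric data:   written only at generation  */
        DB_CONTROL,     /* control data:      write accessible to Host    */
        DB_RETURNED,    /* returned data:     written by the microcluster */
        DB_LOCAL        /* local data:        not visible to microcluster */
    } db_supertype;

    /* Supertypes 1 through 4 are replicated in each microcluster;
     * local data is kept only in the Host's master section. */
    static int replicated_in_micro(db_supertype t)
    {
        return t != DB_LOCAL;
    }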


Standalone facilities to examine and edit database values exist. The database on the Host is implemented using a memory segment of order 10 megabytes in size. Management of this block is done entirely by the VAX Virtual Memory System. The remote Multibus machines support only 1 megabyte of address space, and database size on the microclusters is of some concern.

1.3.3 Configurations

The database contains the (supposedly) accurate description of the present state of the machine. Configurations are used to store and manage future and past states of the machine, in the sense of storing the output of TRANSPORT or other beam design software for future use, or of saving a state of the machine for possible future use. Configurations save only the directly controllable parameters of the machine, i.e. magnetic field values as opposed to magnet-to-power-supply connection data. In general, controllable parameters have a CONfiguration value, coming from a stored configuration or possibly from an online calculation; a DESired value, which starts from the CON value but which may be adjusted by operators or feedback control loops; and an ACTual value, indicating the last reported measurement of the relevant sensor. (Note that sensors with accuracies of at least the setting tolerances are required for all devices in the system.)

Structurally, a configuration is a VMS ASCII file. A hierarchy exists in the sense that a Sector configuration is a set of device identifiers and values (with suitable headers), a Region configuration is a list of Sector configurations, and a Machine configuration is a list of Region configurations. The configuration facilities maintain an index to the configurations and permit manipulation of the configuration components. The configurations are stored as ASCII files to permit easy examination and modification with a text editor. Saving a configuration is essentially copying ACTual or DESired values to a configuration file; specifically what is saved is determined by a configuration template file. Restoring a configuration is essentially copying values from the configuration files into database CONfiguration and DESired values.
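
In outline, save and restore amount to copying between database value slots and an ASCII file. The sketch below assumes hypothetical db_read/db_write helpers and an invented record layout; the actual file and template formats are not specified in this document.

    #include <stdio.h>

    /* One controllable parameter as it might appear in a
     * configuration record (illustrative layout only). */
    typedef struct {
        char   device[16];    /* symbolic device identifier  */
        char   attribute[8];  /* controllable attribute name */
        double value;
    } config_record;

    /* Assumed database helpers; the slot selects CON, DES, or ACT. */
    typedef enum { VAL_CON, VAL_DES, VAL_ACT } value_slot;
    extern double db_read(const char *dev, const char *attr, value_slot s);
    extern void   db_write(const char *dev, const char *attr,
                           value_slot s, double v);

    /* Save: copy a DESired (or ACTual) value into the ASCII file. */
    void config_save_record(FILE *f, const char *dev, const char *attr)
    {
        fprintf(f, "%s %s %.6f\n", dev, attr,
                db_read(dev, attr, VAL_DES));
    }

    /* Restore: copy a file record into both the CON and DES slots. */
    void config_restore_record(const config_record *r)
    {
        db_write(r->device, r->attribute, VAL_CON, r->value);
        db_write(r->device, r->attribute, VAL_DES, r->value);
    }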

1.3.4 The T-Matrix

In addition to the quasi-static description of the machine, which is primarily magnetic field values and "nominal" values of the RF amplitudes and phases, the behavior of the machine on a given pulse is determined by the triggerable devices, e.g. thyratrons for klystrons, phase shifters for SLED control, or kickers for beam manipulations in the Damping Rings. As previously described, all timing is derived from the MDL by counting the fourth subharmonic of the RF following an AC line synchronized fiducial. A "Beam Definition" is a set of values for those counters. The control system can support up to 256 independently defined beams, whose sequence is arbitrary and determined by the Master Pattern Generator. The data structure containing all of the timing information is stored in the Host machine as the T-Matrix. The appropriate subsets of the T-Matrix are transferred to the corresponding microclusters, to be stored either in local memory or in the PDUs (counters) themselves, depending on what generation of PDU modules is in use. Timing facilities exist to manipulate elements of the T-Matrix and transparently communicate changes over SLCNET to the microclusters.

While elements of the T-Matrix are in units of 1/(119 MHz), or "ticks", the elements are more easily understood as times relative to two additive displacements. The first displacement is called TREF and has a value in the database for each PDU. For a beam injected 1024 microseconds after the fiducial, as measured by an arbitrary but well defined procedure at the injector, TREF is the count required in the PDU for it to produce a pulse synchronized with the arrival of a beam-derived pulse from the Beam Position Monitor module in the same Camac crate as the PDU. (Note that the BPM cables are cut so that all signals to one crate arrive together, so it does not matter which cables are used.) Defining times relative to TREF removes the beam and fiducial propagation times from the problem, leaving only delays associated with the controlled device. For example, in this way a distribution of thyratron trigger times is directly meaningful in terms of thyratron delay characteristics.

The second displacement is called T_NOMINAL, for a nominal beam time relative to the reference beam (at 1024 microseconds after the fiducial). T_NOMINALs are defined for each beam and each microcluster and are also stored in the T-Matrix. The canonical example for the use of T_NOMINAL is a value of 8 ticks (1/2 damping ring revolution) for the ring and subsequent machine to extract the "second" positron bunch. This use of T_NOMINALs permits one canonical device time, PDUT, to be kept in the database for each device receiving timing signals from a PDU, and simple software routines can manipulate the data to define various beams.
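
The decomposition into TREF, T_NOMINAL, and PDUT means the count actually loaded into a PDU is just their sum. The sketch below makes the arithmetic concrete; the tick period and the 8-tick example come from the text, while the composition into a single sum is an assumption about how the pieces combine.

    /* One tick is a period of the fourth subharmonic of 476 MHz. */
    #define TICK_HZ  119.0e6             /* 119 MHz           */
    #define TICK_NS  (1.0e9 / TICK_HZ)   /* about 8.4 ns/tick */

    /* Assumed composition of a PDU count from its additive
     * displacements: TREF (per PDU, removes propagation delays),
     * T_NOMINAL (per beam and microcluster), and PDUT (the canonical
     * device time kept in the database). */
    unsigned long pdu_count(unsigned long tref, long t_nominal, long pdut)
    {
        return tref + t_nominal + pdut;
    }

    /* Example from the text: extracting the "second" positron bunch
     * uses T_NOMINAL = 8 ticks, half a damping ring revolution, i.e.
     * about 67 ns after the reference beam. */
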
The manipulation of timing elements then has two stages, which are rather unrelated. The first is an offline exercise in designing beams: deciding what devices will be used on a particular beam code and determining how this structure will interact with other activities in the machine. For example, different beams might be used to store bunches in the Damping Ring, separately extract bunches, "stack" bunches in the damping ring, store e- in PEP, store e+ in PEP, etc. These exercises are performed using an interactive program called the Beam Design Language (BDL). The second aspect of T-Matrix control is real time control to adjust a time, swap klystrons, etc. These functions are handled by the SCP and by the FEEDBACK process, and have nothing to do with BDL. Simple SCP functionality for adjusting T-Matrix values exists; more complete SCP functionality is currently being defined; and the FEEDBACK process is a future project. The T-Matrix, like the database and configurations, is stable across Host crashes and normal system development. Additionally, the set of BDL commands that define the T-Matrix is normally saved as a file so the T-Matrix can be regenerated even when the physical components of the machine have changed, implying changes in either the values or the structure of the T-Matrix to get back to "the same old" machine. Finally, sections of the T-Matrix can be saved and restored in a manner similar to configurations.

1.4 COMMUNICATIONS STRUCTURES



1.4.1 SLCNET

SLCNET is a broadband local area network used for communication between the Host machine and remote microclusters in the SLC control system. The network is a logical star with the Host at the center and the other nodes as rays. The medium is a broadband CATV cable subchannel operating in a fully disciplined mode, so that collision detection and recovery during normal operation is unnecessary. The basic protocol is half-duplex SDLC, and the RF modems operate as a fully transparent layer on this system. The Host machine supports a VAX SLCNET Channel (VSC), which is a high speed (AMD 2903, 2910) bit slice microprocessor 16 bits wide with a Unibus buffered data path interface and a parallel-to-SDLC FIFO buffered interface. The microclusters use Computrol SDLC Multibus interfaces with custom firmware replacing the vendor code.

The disciplined protocol admits only Host to microcluster communication, and the microclusters may transmit only in response to the Host. In the idle state, the VSC polls each microcluster in turn (POLL message), receiving a negative acknowledgment (null RPOLL) indicating that the microcluster is alive and well but has no traffic for the Host. This POLL-RPOLL interchange is expected to take about 200 microseconds (though at present it takes about 800 microseconds), so that 50 microclusters can be polled in 10 milliseconds. Correspondingly, the mean time for the VSC to find a microcluster with a positive acknowledgment (non-null RPOLL) is 5 milliseconds.
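
A minimal sketch of the disciplined poll cycle follows, assuming hypothetical send/receive primitives; the frame handling and failure path are illustrative only.

    #define NUM_NODES 50   /* about 50 microclusters, per the text */

    /* Assumed primitives for the VSC's half-duplex SDLC exchange. */
    extern void send_poll(int node);
    extern int  recv_rpoll(int node, void *buf, int maxlen); /* 0 = null */
    extern void deliver_to_host(int node, const void *buf, int len);

    /* Idle-state poll loop: each node transmits only when polled, so
     * no collision detection is needed.  A null RPOLL means "alive,
     * no traffic"; a non-null RPOLL carries data for the Host. */
    void vsc_poll_cycle(void)
    {
        static unsigned char buf[8192];  /* max buffer, per the text */
        for (int node = 0; node < NUM_NODES; node++) {
            send_poll(node);
            int len = recv_rpoll(node, buf, sizeof buf);
            if (len > 0)
                deliver_to_host(node, buf, len);
            /* len == 0: node alive but idle; a timeout here would be
             * flagged as a dead node (assumed behavior). */
        }
    }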


The basic protocols support synchronized and unsynchronized reads and writes of buffers up to 8192 bytes. In addition, other protocols support interrupt and broadcast functions as well as bootstrap downloading procedures for the microclusters. These protocols are described in detail in the SLCNET manual. The network performs the following major logical functions:

1. The network performs the initial loading of the remote microclusters, including the loading of their operating system. Only a minimal bootstrap program to operate the network is stored in PROM on the SBCs.

2. The network maintains the SLC database by passing messages between the database executive and the microclusters using the "Database Services" high level protocol.

3. The network controls microcluster activity and relays responses using the "Message Services" high level protocol.

4. The network communicates with the COWs, transmitting graphics for the displays and touch panel, and receiving touch panel and knob data.

5. The network is used to implement the cross debugging facilities.

6. The network drives remote line printers. A VMS spooler transmits printer queues over the network to dedicated line printer micro nodes.

1.4.2 Message Service

Communication among processes in the SLC system passes either through the shared data structures or through the SLC message service. Messages are used to convey commands to the microclusters and command completion responses back to the VAX. They are also used for error messages or for data from a single pulse, such as beam position monitor readings. All SLC messages have a simple structure consisting of ten words of header information followed by up to 502 words of data. The header includes source and destination specifiers, a time stamp, a function code, and a data length word. The source and destination are 4 character ASCII names identifying either a VAX process or a microcluster. VAX names are of the form V0nn, where nn is the SLCNET interrupt used by the process; microcluster names are the standard database names. The function code has two bytes: a high byte to identify the Facility and a low byte to specify the explicit function to be executed. A standard set of interface routines is available to construct messages, to synchronize commands and responses from multiple microclusters, to implement single pulse commands through the MPG, and to signal error messages.
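
The ten-word header and 502-word data limit suggest a layout like the following C sketch. Only the fields named above are taken from the text; the field order, the two-word time stamp, and the padding are assumptions made to fill out ten 16-bit words.

    #include <stdint.h>

    #define MSG_MAX_DATA_WORDS 502

    typedef struct {
        char     source[4];       /* 4-char ASCII name, e.g. "V0nn"  */
        char     destination[4];  /* VAX process or microcluster     */
        uint16_t timestamp[2];    /* time stamp (two words assumed)  */
        uint8_t  facility;        /* high byte of the function code  */
        uint8_t  function;        /* low byte: explicit function     */
        uint16_t data_length;     /* number of valid data words      */
        uint16_t reserved[2];     /* padding to ten header words     */
    } msg_header;                 /* ten 16-bit words = 20 bytes     */

    typedef struct {
        msg_header header;
        uint16_t   data[MSG_MAX_DATA_WORDS];
    } slc_message;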

1.4.3 Error Reporting

The SLC control system uses a uniform protocol for error reporting based on the VMS Message Utility. The same protocol is used by stand-alone VMS programs, by components of the VMS SLC control system, and by the microclusters under iRMX. When an error condition is detected, it is reported using one of a set of predefined error messages. Error messages from microclusters or SCPs are sent first to PARANOIA (if the severity level is high enough), which obtains and formats the message text and puts it in the error log. The message is then relayed by PARANOIA to the remote destination for display.

1.5 HOST ARCHITECTURE

1.5.1 Hardware Description

The requirements for the Host computer were a mainframe in the nominally one megaflop class and a modern operating system. VMS operating on a VAX 11/780 was selected as the main Host, although a more powerful mainframe will be desirable when it becomes available. The computing requirements were set by the modeling calculations and by the operator interface processing. The requirement for a modern operating system was dictated by the limited budget available for the project. The basic architecture in the VMS Host machine is centered on the three shared data structures described above: the database, the configurations, and the T-Matrix. This implies that the Host machine is necessarily a single memory system. A second Host machine with parallel data structures is used for system development. This machine has a full copy of the database and T-Matrix with identical values for all parametric data, but no attempt is made to have the control and returned data reflect the current state of the production machine. Each machine runs its own copies of the control and support programs. Communication between the two machines is over a standard DECNET link.



1.5.2 SLC Control Program

Each user of a console (COW or CALF) has his own copy of the SLC Control Program (SCP). The SCP is the console interface to the database, configurations, T-Matrix, and the Message services. The SCP manages the console, translates user directives, and generates displays. The SCP handles only one user, and there is no requirement that all SCPs be identical. Because there is a separate copy of the SCP for each user, the SCP code is relatively easy to optimize and debug. The single user software is conceptually and actually much simpler than a multi-user version would be, and development is made possible by the co-existence of several versions of the SCP. A natural consequence of the SCP design is that the SCP executes interactively on a terminal (as opposed to batch). This terminal is used to receive messages from the SCP and other processes, and to conduct dialogs that are unsuited for touch panel communication.

The primary control interface for each console is a flexible touch panel system. The touch panel is a transparent screen capable of sensing the position of an operator's fingers, and it is backed by a general purpose monochrome CRT display. Each panel is generated interpretively from a fixed format text file. A logical name service is used to provide communication between the application code and the panel description files. The logical name routines allow data to be moved from the panel, and allow the panel to manipulate displays and subroutines that have been identified to the logical name service. The structure is deliberately limited to ensure that the panels may be easily understood and debugged. System routines read and interpret the panel files and implement the buttons when pushed. Additional routines provide for output of character data to the panel from the user software.

The COW supports up to eight knobs for the input of quasi-continuous variables to the program. The actual hardware consists of 250 count shaft encoders polled at 5 Hz by hardware. Each knob also has a software-writable legend of 16 characters. Four knobs are simulated by the CALF software. A standard set of knob access routines is available for use by the facilities and applications.
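
As an illustration of how a facility might consume knob input, here is a hedged sketch. The access-routine names and the counts-to-amps mapping are assumptions; the eight knobs, 250-count encoders, 5 Hz polling, and 16-character legends come from the text, and PERTURB is the magnet command from section 1.2.4.

    #define NUM_KNOBS  8    /* up to eight knobs per COW       */
    #define LEGEND_LEN 16   /* software-writable legend length */

    /* Assumed members of the standard knob access routines. */
    extern int  knob_read_delta(int knob);  /* counts since last read */
    extern void knob_set_legend(int knob, const char legend[LEGEND_LEN]);

    /* Assumed helper: PERTURB applies changes small compared to the
     * setting tolerances, suitable for hand knobbing of magnets. */
    extern void magnet_perturb(int device, double delta_amps);

    /* Example: attach a knob to a magnet current.  The encoders are
     * polled at 5 Hz by hardware; this routine drains the counts. */
    void knob_service(int knob, int device, double amps_per_count)
    {
        int counts = knob_read_delta(knob);
        if (counts != 0)
            magnet_perturb(device, counts * amps_per_count);
    }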

1.5.3 Paranoia

Paranoia is a Host process with two major responsibilities: error handling and machine monitoring. All errors detected by the SCPs or other Host programs are sent to a dedicated mailbox for Paranoia; those detected in the microclusters are sent to a dedicated SLCNET interrupt owned by Paranoia. Paranoia receives the messages, logs them, formats them, and broadcasts them to the appropriate control consoles. Messages from an SCP, or detected in a microcluster while executing a command from an SCP, are broadcast only to the originating SCP. Messages generated by the monitoring activities of either Paranoia or the microclusters themselves are broadcast to all active consoles.

Paranoia's monitoring activities are an evolving collection of diagnostic functions. It requests periodic checks of machine parameters such as magnet settings, klystron phases, temperatures, etc. from the individual microclusters. Error conditions reported are metered and broadcast to the consoles. Paranoia also checks the microclusters, consoles, and network and performs various maintenance and cleanup tasks. As the system develops and recurrent problems are recognized, additional monitoring functions are added to Paranoia. Future improvements will include global error analysis algorithms and more sophisticated diagnostics.

1.5.4 Other Host Processes

In addition to PARANOIA and the individual SCP programs running each COW or CALF console, several other programs are included in the Host software. The DataBase EXecutive process, DBEX, handles all network transfers of database information between the VAX and the microclusters. When a VAX program changes control data in the database, the database access routines send a message to DBEX requesting that the appropriate information be updated in the microclusters. When database information is returned by a microcluster, it is sent to DBEX, which then writes the new values to the VAX database. Another process, FEEDBACK, will be responsible for the real time management of the SLC, primarily coordinating klystron replacement and the subsequent retuning of the machine, and the management of the feedback systems. Other programs provide network monitoring and diagnostics, error log formatting and filtering, time logging of data, and Camac or database checking.
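
The two DBEX directions described above can be summarized in a short C sketch. Everything here is schematic except the flow itself: access routines notify DBEX when control data changes, and DBEX applies returned data to the VAX database.

    /* Assumed primitives for the two DBEX directions. */
    extern void dbex_queue_update(const char *micro,
                                  const void *data, int len);
    extern void vax_db_write(const char *micro,
                             const void *data, int len);

    /* Host side: a database access routine that changes control data
     * also asks DBEX to propagate the change to the owning micro. */
    void db_set_control(const char *micro, const void *data, int len)
    {
        vax_db_write(micro, data, len);       /* update master copy  */
        dbex_queue_update(micro, data, len);  /* DBEX sends it over
                                                 SLCNET              */
    }

    /* DBEX side: returned data arriving from a microcluster (e.g. an
     * out-of-tolerance readback) is written into the VAX database. */
    void dbex_on_returned_data(const char *micro,
                               const void *data, int len)
    {
        vax_db_write(micro, data, len);
    }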

1.6 MICROCLUSTER ARCHITECTURE



1.6.1 Hardware Description

The microcluster structure requires a bus capable of supporting multiple masters, so that almost arbitrary computing power can be applied when needed by simply adding more computers to a node. This requirement eliminated CAMAC as a contender for the microcluster bus. System due dates eliminated FASTBUS and VERSABUS, leaving MULTIBUS as the only industrially supported reasonable possibility. Thus a Multibus structure is used to house a set of Single Board Computers (SBCs), a network interface to the Comm Line, and a channel interface to the other peripherals. Evaluation of several possible operating systems resulted in the selection of iRMX, which runs on an Intel series of SBCs. The particular SBC being used is the Intel 86/30, containing an 8086 with an 8087 floating point co-processor, 768 kilobytes of RAM, and 8 kilobytes of EPROM. Such a machine has a CPU power in the neighborhood of 15% of a VAX 11/780.

Multibus was considered an inappropriate bus structure for the other peripherals of the system; Camac was chosen as a standard. Each microcluster includes a Camac channel communicating with up to 16 Camac crates over a 5 Mbaud serial line. The MultiBus Camac Driver board is a high-speed direct memory access device that directly executes lists of Camac commands from the Multibus address space. The Camac crates are interfaced to Multibus through high-speed (10 microsecond/cycle) low-cost serial crate controllers. These crates house magnet control DACs and Scanning Analog Modules, Programmable Delay Units, Parallel Input Output Processors, etc. Each crate always contains a Crate Verifier to ascertain proper Camac protocol functionality.
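
Since the Camac driver executes command lists directly from Multibus memory, the microcluster software mainly builds lists. The sketch below is hypothetical throughout: the packed crate/N/A/F encoding and the list terminator are invented for illustration and are not the actual MBCD list format.

    #include <stdint.h>

    /* Invented packing of one Camac command (crate, station N,
     * subaddress A, function F) into a list word. */
    static uint32_t camac_cmd(int crate, int n, int a, int f)
    {
        return ((uint32_t)crate << 14) | ((uint32_t)n << 9)
             | ((uint32_t)a << 5) | (uint32_t)f;
    }

    #define LIST_END 0xFFFFFFFFu   /* invented list terminator */

    /* Build a list in Multibus memory that reads (F=0) one channel of
     * a module at the same station in each of four crates; the DMA
     * driver then executes the list without further CPU involvement. */
    int build_read_list(uint32_t *list, int station, int channel)
    {
        int i = 0;
        for (int crate = 0; crate < 4; crate++)
            list[i++] = camac_cmd(crate, station, channel, 0);
        list[i++] = LIST_END;
        return i;   /* words written */
    }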



1.6.2 Software Organization

The microcluster software was designed to take advantage of the multitasking capabilities of the iRMX operating system. Each microcluster runs 10 to 15 jobs which timeshare according to their assigned priorities. The jobs are organized by function, with a separate job for each of the Facilities, e.g. Magnets or Klystrons, plus a number of additional server jobs. Each Facility job is responsible for performing all of the monitoring and local control algorithms for its class of devices. These jobs execute commands received through the message service and use a shared common database. Network input is handled by two separate network server jobs: one for database updates, the other for messages. At initialization, each job creates its own mailbox to receive communications from other jobs or from the VAX via the message server job. All messages are in the standard message service format. Each job performs its own network output to the Host, including database updates, response messages, and error reporting.
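
A Facility job's main loop is therefore essentially: wait on the mailbox, execute the requested function, reply. The sketch below uses invented primitives in the spirit of the iRMX mailbox services (not the actual iRMX system call names), and slc_message is the layout sketched in section 1.4.2.

    /* Invented primitives standing in for the iRMX mailbox services. */
    typedef struct slc_message slc_message;
    typedef struct mailbox mailbox;
    extern mailbox     *mailbox_create(void);
    extern slc_message *mailbox_receive(mailbox *mb);       /* blocks */
    extern slc_message *execute_magnet_function(const slc_message *cmd);
    extern void         network_send(const slc_message *reply);

    /* Skeleton of a Facility job (e.g. Magnets): created at
     * initialization, it loops forever serving its mailbox. */
    void magnet_facility_job(void)
    {
        mailbox *mb = mailbox_create();  /* registered with the
                                            message server job */
        for (;;) {
            slc_message *cmd   = mailbox_receive(mb); /* wait for work */
            slc_message *reply = execute_magnet_function(cmd);
            network_send(reply);   /* each job does its own output */
        }
    }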

1.6.3 Software Development

All microcluster software is generated on the Host machine using modern cross compilers and linkers. Additionally, cross debuggers are used to interactively debug a microcluster program. (A cross debugger is one which executes on the Host, uses symbol tables resident on the Host, and targets the microcluster over the network through the micro's operating system.) In this architecture, compilers, linkers, editors, and other language support of the micro operating system are irrelevant and have not been implemented; neither have the micro's disk or other virtual memory management facilities been used. Instead, a rather minimal real time support nucleus has been generated, with enhancements to support the cross debugger.
 