Minutes from the 11/24/04 LCLS SLC IOC meetings. Next meeting is 12/01/04, 9:30am, second floor.

(1) Coding Standards: Steph, Diane, and Debbie will look at the ESD coding standards and choose what makes sense for the SLC IOC project. In general, we all agree that we like the file and function naming standards (except use upper/lower case instead of "_"), argument naming and order, and the use of suffixes in variable names. No goto's. Early returns are allowed, to prevent too many levels of indentation.

(2) dballoc: The database utilities dblist, dblunits, and dblget will allocate memory as needed and return it to their callers; it is the calling task's responsibility to free that memory before exiting. Debbie will first allocate a small amount of memory, as is done on the SLC micros, and if that later proves insufficient, she'll reallocate with a larger amount. She may use two separate memory pools for this function. However, we agree that tasks can better estimate how much memory is needed, so we recommend that tasks call dballoc before calling dblist/dblunits/dblget to obtain an appropriate amount of memory up-front.

(3) Memory for the BPM Job: Diane is converting her prototype to use a memory pool for messages to and from the Alpha. However, for larger messages (e.g., 128K) from the BPM job, she thinks a mechanism different from that used for the rest of the relatively smaller messages may be needed. From Tony: When the BPM subjob in the micro receives a BPM "prep" request, the code can tell from parameters in the message how much memory will be needed (possibly lots, for scans over a large number of beam pulses or a large number of ring turns), and it allocates all the needed hardware readout buffers right then. If that memory is unavailable, whatever was obtained for that "prep" request is deallocated and the code returns a failure status in the "immediate reply" to that "prep"; the SCP then knows not to expect any "data reply" from that micro for this request.
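The prep-time allocate-or-fail behavior Tony describes can be sketched as below. This is a hypothetical illustration, not the real micro code: the names PrepSetup, prep_request, and prep_release are invented, and the real code derives the buffer count and sizes from the message parameters.

```c
#include <stdlib.h>

/* Sketch: on a "prep" request, allocate every hardware readout buffer
 * up front; on any failure, free whatever was already obtained and
 * return failure (reported in the "immediate reply"), so the SCP
 * knows no "data reply" will follow. Names are hypothetical. */

#define PREP_OK   0
#define PREP_FAIL 1

typedef struct {
    size_t nbuffers;     /* derived from the prep request parameters */
    size_t buffer_size;  /* bytes per hardware readout buffer */
    void **buffers;      /* NULL until prep succeeds */
} PrepSetup;

/* Returns PREP_OK with all buffers allocated, or PREP_FAIL with the
 * setup left exactly as it was before the call (everything freed). */
int prep_request(PrepSetup *setup)
{
    size_t i;

    setup->buffers = calloc(setup->nbuffers, sizeof(void *));
    if (setup->buffers == NULL)
        return PREP_FAIL;

    for (i = 0; i < setup->nbuffers; i++) {
        setup->buffers[i] = malloc(setup->buffer_size);
        if (setup->buffers[i] == NULL) {
            /* Deallocate whatever was obtained for this prep. */
            while (i > 0)
                free(setup->buffers[--i]);
            free(setup->buffers);
            setup->buffers = NULL;
            return PREP_FAIL;
        }
    }
    return PREP_OK;
}

/* Corresponds to the separate SCP message asking the micro to
 * deallocate the hardware readout buffers for a setup. */
void prep_release(PrepSetup *setup)
{
    size_t i;
    if (setup->buffers == NULL)
        return;
    for (i = 0; i < setup->nbuffers; i++)
        free(setup->buffers[i]);
    free(setup->buffers);
    setup->buffers = NULL;
}
```

The key property is that prep_request is all-or-nothing: a caller never has to clean up after a failed prep.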
When the scan ends, the code allocates additional memory for the "data reply" message; that allocation may fail, in which case the SCP will eventually time out that micro's "data reply". But in all cases the BPM subjob cleans up afterward. Diane needs to know more about the BPM message requirements before she can finish her message memory pool design. This should come from saa and teg, but saa is behind, so Diane may pursue this on her own. From Tony: "Data reply" message size is more complicated, but routine Meas_compute_replysize in source file REF_RMX_BPM:MEASPROC.C86 answers exactly that question, and it is actually called at "prep" time (so the SCP can know from the "immediate reply" exactly how much memory to allocate for receiving the subsequent "data reply"). To reiterate: the "prep" request causes the micro to set up hardware readout buffers for the requested acquisition. The actual hardware data readout (driven by PNET information across however many beam pulses) can't possibly start until the SCP receives success/fail/size "immediate replies" from all micros involved in the given acquisition, because only then (and only if at least one micro's "immediate reply" indicates success) does the SCP send the message to the MPG saying "please queue up my YY request". Then, after the MPG broadcasts the "enable" YY code on PNET, all the micros that succeeded for that prep step through the hardware data acquisition. For each such micro, whenever its hardware data acquisition sequence completes, the micro allocates memory for the "data reply" message(s), converts the raw data, packages it into that "data reply" (which may be big for a long scan), and sends it to the requesting SCP. At that point the micro deallocates the reply message memory but keeps the hardware readout buffers, so the SCP can again send the MPG the same "please queue up my YY request" message without needing another prep to the micros.
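The data-reply lifecycle above (size known at prep time; reply memory allocated only after acquisition, then freed after sending, while the readout buffers are kept) can be sketched as follows. Everything here is illustrative: meas_compute_replysize merely stands in for the real Meas_compute_replysize in REF_RMX_BPM:MEASPROC.C86, and the field names and per-reading size are assumptions.

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical acquisition parameters, as would arrive in a "prep". */
typedef struct {
    size_t npulses;   /* beam pulses (or ring turns) in the scan */
    size_t nbpms;     /* BPMs read out per pulse */
} Acquisition;

/* Called at "prep" time so the "immediate reply" can tell the SCP
 * exactly how much memory to set aside for the eventual "data reply".
 * The 16-byte record size is an assumption for illustration. */
size_t meas_compute_replysize(const Acquisition *acq)
{
    const size_t bytes_per_reading = 16;
    return acq->npulses * acq->nbpms * bytes_per_reading;
}

/* After the hardware acquisition completes: allocate the reply,
 * package the raw data, hand it to the message service (stubbed out
 * here), then free the reply memory. Returns the number of bytes
 * "sent", or -1 if the reply allocation failed, in which case the
 * SCP eventually times out waiting for this data reply. */
long send_data_reply(const Acquisition *acq, const void *raw)
{
    size_t len = meas_compute_replysize(acq);
    char *reply = malloc(len);

    if (reply == NULL)
        return -1;

    memcpy(reply, raw, len);  /* stands in for raw-data conversion */
    /* ... reply would be handed to the message service here ... */
    free(reply);              /* reply memory released; the hardware
                                 readout buffers are deliberately kept
                                 for a possible re-queue of the same
                                 YY request */
    return (long)len;
}
```

Note the asymmetry: reply memory is transient per data reply, while readout buffers live until the SCP's explicit deallocate message.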
There is a separate message from the SCP to the micros saying "please deallocate the hardware readout buffers for this setup" (or possibly for all remaining setups that that SCP may have made; this last happens whenever you kill or exit an SCP).

(4) Memory for Messages to/from DBEX: Debbie does not plan to use the same kind of memory pool management for DBEX messages as for SCP messages. DBEX traffic is a lot smaller and a lot less frequent.

(5) LCLS CPUs: These should probably have 128MB, and 256MB would be better. The ethernet RMX micros have 32MB, and Tony mentioned that sometimes there is not enough memory for all BPM acquisitions.

(6) Integration Test Plan: At some point, we need an integration test plan to make sure we test things like the Alpha down/up, proxy down/up, etc., along with normal running. Such a plan won't be started until next year. Everybody should keep notes on what should go into the integration test plan. If we had a QA department, we'd dump it in their laps.

(7) IOC Shell Help: In addition to the normal IOC shell help provided with registration (you can get a list of all registered IOC shell functions and, for each function, see its arguments), it would be nice to have "dbHelp", "slcHelp", etc., that provide more detail on what each function does (like nfsHelp, etc., at the vxWorks prompt). This is a good idea, though we may implement it at a later date. Debbie and Diane will produce an "SLC-Aware IOC Shell Command User's Guide". Perhaps these two efforts can be done at the same time.

(8) Phase I and II: At some point, we need to prioritize our tasks so that we get the important ones done first.

(9) dbStop and dbRestart: This would allow a database to be downloaded without restarting any tasks, meaning tasks would not deallocate any resources but just clear and reinitialize memory instead. We decided not to add this feature; the code required to implement it is more significant than we thought. So downloading a database will always require a restart of all tasks.
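The "dbHelp"/"slcHelp" idea in item (7) could be as simple as a static table of command names and detailed descriptions, consulted by one help command, in the spirit of nfsHelp at the vxWorks prompt. The sketch below is a minimal stand-alone illustration of that approach; the command names and descriptions are placeholders, and it deliberately sidesteps the actual iocsh registration API.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical help table for SLC-aware shell commands. Entries and
 * wording are illustrative, not the real registered command set. */
typedef struct {
    const char *name;
    const char *detail;
} HelpEntry;

static const HelpEntry slc_help_table[] = {
    { "dballoc", "Pre-allocate database memory for the calling task" },
    { "dblist",  "List database items; caller must free returned memory" },
    { "dblget",  "Get database values; caller must free returned memory" },
};
static const size_t slc_help_count =
    sizeof(slc_help_table) / sizeof(slc_help_table[0]);

/* With a NULL argument, print every entry; with a command name, print
 * just that entry. Returns the number of entries printed, so 0 means
 * "no such command". */
int slcHelp(const char *name)
{
    int printed = 0;
    size_t i;

    for (i = 0; i < slc_help_count; i++) {
        if (name == NULL || strcmp(name, slc_help_table[i].name) == 0) {
            printf("%-10s %s\n", slc_help_table[i].name,
                   slc_help_table[i].detail);
            printed++;
        }
    }
    return printed;
}
```

One attraction of the table approach is that the same table could later feed both the shell help and the "SLC-Aware IOC Shell Command User's Guide", keeping the two in sync.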
(10) Database and Message Service Work: About 6 months of total work is left to do. Diane and Debbie will manage and synchronize the work between them. Both are almost finished with (and sick of!) a very good first draft of their documentation. We all agree it's time to get back to coding and to plan an update to the documentation later for a second draft, and a third draft if necessary.