Network Rack Requirements

The following are Gary's requirements for the new racks. They are followed by comments from Boris and John:

Here is Gary's first-pass response. Note that he is using the IETF (RFC 2119) definitions of *must* and *should*.

Size

Just as in NYC, one can build up or out. If one wants shorter racks, one simply needs more of them, since equipment comes in fixed RU sizes. One can place 2 switches in 42RU (with zero RU unused), but only 1 switch in anything less. At least some of the new space must end up with some new switches: Les looked only at existing equipment, not the replacements that *must* occur during the usage lifetime of the racks, and that actually *should* occur during the move, since we want front/back airflow rather than the side-to-side airflow of the old chassis. That in itself means more racks (you cannot place high-power, side-to-side-airflow equipment side by side in racks), and we want to be able to "stage"/"install" at least one full-height switch at a time. One can presumably do the math.
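The math is simple enough to sketch. Below is a minimal Python illustration, assuming a full-height switch occupies 21RU (inferred from "2 switches in 42RU with zero unused"); the switch counts and rack heights are illustrative, not a capacity plan.

    # Minimal sketch of the rack-count arithmetic. SWITCH_RU = 21 is
    # inferred from "2 switches in 42RU with zero unused"; the example
    # figures below are illustrative only.
    import math

    SWITCH_RU = 21  # inferred: 42RU / 2 switches, zero unused

    def racks_needed(num_switches: int, rack_ru: int) -> int:
        """Racks required to hold num_switches full-height switches."""
        per_rack = rack_ru // SWITCH_RU
        if per_rack == 0:
            raise ValueError(f"a {rack_ru}RU rack cannot hold a {SWITCH_RU}RU switch")
        return math.ceil(num_switches / per_rack)

    # A 42RU rack holds 2 full-height switches; a 40RU rack holds only 1,
    # so shorter racks roughly double the rack count for switches.
    print(racks_needed(8, 42))  # -> 4
    print(racks_needed(8, 40))  # -> 8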

The cabinets *must* be capable of accepting equipment at least 28 inches deep, plus space for cable management (these are generically called "deep" racks, and are typically 32-40" deep).

Power

Power *should* be dual-sourced (one house, one UPS), with each feed serviceable independently, so that if we need (for example) additional circuits installed, or simply need to work on the panel, we do not have to turn off the entire row. Alternatively, we need two rows (both dual-fed), so that if we are required to power down one row for service work, the other can remain active, providing redundant services for "critical" functions. The "two row" option does not change the total power or the number of racks required (just the arrangement of racks and equipment). Power utilization will not exceed a safe utilization based on full failover (i.e. 80% of 50% of the derated capacity of the sources), both at the panel and at each upstream level, and the upstreams of those upstreams.
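As a worked example of that budget (the 80% and 50% factors come from the requirement above; the capacity figure is illustrative):

    # Minimal sketch of the failover power budget: 50% so that either
    # source can carry the full load alone on failover, then 80% of
    # that for safety headroom. The 10 kW figure is illustrative only.

    def safe_load_watts(derated_capacity_watts: float) -> float:
        """Maximum planned load on a dual-sourced feed."""
        return derated_capacity_watts * 0.50 * 0.80

    # A feed derated to 10 kW should carry no more than 4 kW, and the
    # same check applies at the panel and every upstream level.
    print(safe_load_watts(10_000))  # -> 4000.0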

Doors

All cabinets *should* have closeable/lockable doors (or the possibility of a "cage"). These doors must be a mesh type to allow front/back airflow (the alternative of ducted airflow could be considered). Note that this closable-door issue for network equipment is in addition to the "EPN" system space, since all network admins (and some equipment) are also specified as "moderate" qualified according to the DoE PSCP if we run "business systems". This requirement does not exist if someone is prepared to say we will *not allow* non-scientific systems at SLAC (or one can plan to replace all the racks/cabinets, or move the equipment yet again, if SLAC is unable to meet that statement). As with the EPN "cage", this is a defensive installation plan (as in, we will not have to do it again if we lose our "wish"). An alternative to doors/cages is to allocate network racks within the logical "EPN" cage space (the requirements for protection will come together, or not at all), which would make that yellow-tape space "larger".

Cable Management

All cabinets *must* have vertical cable management space on both sides of the rack proper, with at least one front/back horizontal "thru" for every 15U, each capable of passing at least 20 square inches of cabling between the side management spaces. The side cable management space (sometimes called a "vertical patching channel") will be no less than 6 inches wide and no less than 6 inches deep on each side of the rack, front and back, and will allow open access (i.e. the channel must have doors if the cabinets have doors) and include cable management "fingers" to provide strain relief for cables to the racked equipment itself.
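A quick sketch of the "thru" count rule (the 15U interval and 20 square inch minimum are from the requirement above; the rack heights are illustrative):

    # Minimal sketch of the thru-count requirement: at least one
    # front/back horizontal thru per 15U, each passing at least
    # 20 square inches of cabling.
    import math

    RU_PER_THRU = 15
    MIN_THRU_AREA_SQ_IN = 20

    def min_thrus(rack_ru: int) -> int:
        """Minimum number of horizontal thrus for a rack of rack_ru."""
        return math.ceil(rack_ru / RU_PER_THRU)

    # Both a 40RU and a 42RU rack need at least 3 thrus.
    for ru in (40, 42):
        print(f"{ru}RU rack: {min_thrus(ru)} thrus of {MIN_THRU_AREA_SQ_IN} sq in each")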

Installation Posts

All cabinets *must* have movable posts for equipment installation, and *should* have the option to install additional posts for installation of equipment of multiple (fixed) depths. (Note that if one cannot install equipment of multiple fixed depths, which is far less than ideal in any case, one is forced to use additional racks based on the fixed-depth differences.) To be clear, this generally means that if you put a 6500 in a rack, you are very limited in what else can go in that rack and still keep things neat/tidy/manageable. The usual answer is "only another 6500 or similar" (which for short racks may mean "nothing"). All equipment *should* be rack mounted. Any equipment to be moved whose rack mount kits are no longer available *must* have replacement rack mount kits ordered.

Cables

All equipment and cables will be inside the racks (nothing is to "stick out", as we see in some current locations with smaller racks). Doors must close. Installed cables must not impede airflow. Cables must be installed into equipment based on the natural design for maintenance (i.e. no cables are to cross removable power supplies, fans, etc.). All inter-rack cabling will utilize the vertical cable management spaces and use above-rack cable management (i.e. no cables crossing through two or more vertical cable management trays).

All cables removed from service will be removed from the rack and cable trays at the time they are taken out of service. Cables not used for more than 30 days are considered removed from service, and are to be pulled out.

Labelling

All cables will be labeled with a "TO and FROM" device/port, with the "TO" being the equipment end (i.e. where to "plug" it). This label is the "physical" definition.

Example of cable on the AFS04 side:
TO: AFS04 Pci0, Port 0
FROM: SWH-SERV2 Gi9/1

Example of cable on the SWH-SERV2 side:
TO: SWH-SERV2 Gi9/1
FROM: AFS04 Pci0, Port 0
Yes, each end is unique, since each end IS unique. Additional function labels may be useful (e.g. Usage: AFS FILE SERVICES FOR GENERAL USE), but are not required. It is encouraged that cables running through patch panels include the ultimate end connection points in addition to the physical ones. The only exception to the cable labeling requirement is for cables which are entirely within a rack, are of limited length (usually equipment cross-connects), can be clearly seen and accessed at each end, and for which the equipment replacement (and recabling) outage can acceptably run to a few hours to get everything back right. Serial cables come to mind: if you have to replace a digi, it may be acceptable for it to take a few hours to get the cables back working; so might redundant cross-connects within a frame. The labels are intended to minimize time to repair.
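A minimal sketch of the convention, reproducing the AFS04/SWH-SERV2 example above (the class and function names are illustrative, not from the source):

    # Minimal sketch of the TO/FROM labeling convention; TO is always
    # the local end, i.e. where to plug the cable in.
    from dataclasses import dataclass

    @dataclass
    class CableEnd:
        device: str
        port: str

    def labels(a: CableEnd, b: CableEnd) -> tuple[str, str]:
        """Return the label text for each end of the cable."""
        label_a = f"TO: {a.device} {a.port}\nFROM: {b.device} {b.port}"
        label_b = f"TO: {b.device} {b.port}\nFROM: {a.device} {a.port}"
        return label_a, label_b

    # Reproduces the example from the text: each end's label is unique.
    a_side, b_side = labels(CableEnd("AFS04", "Pci0, Port 0"),
                            CableEnd("SWH-SERV2", "Gi9/1"))
    print(a_side)  # TO: AFS04 ...,   FROM: SWH-SERV2 ...
    print(b_side)  # TO: SWH-SERV2 ..., FROM: AFS04 ...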

More Comments

From Boris:
> -----Original Message-----
> From: owner-core-neteng@slac.stanford.edu
> [mailto:owner-core-neteng@slac.stanford.edu] On Behalf Of Ilinets,
> Boris
> Sent: Thursday, July 03, 2008 6:36 AM
> To: Cottrell, Les; core-neteng
> Cc: Boeheim, Charles T.; Weisskopf, John A.
> Subject: RE: New racks
>
> Les,
>
> Considering that the false ceiling of Rm. 210 is just 105" from the
> raised floor, and at least 3" would be needed for the IsoBase, I was
> wondering if you could use shorter racks.
> Otherwise the installation would come at the same or even higher cost
> than Networking Row.
>
From John:

Since you are only utilizing 4 of the "8" racks I plan to place in this area dedicated to your use, I see no good reason to go with the taller racks. A 40RU rack will help keep installation costs down, and we also won't have to "stretch" the cables already above the racks to raise the raceway.