-- Stanford Linear Accelerator Center

Package Maintenance

UNIX at SLAC
Updated: 23 Jul 2003
--

Introduction

This document is intended for people who install and maintain third-party software in SLAC's AFS file system. We intend the term "third-party" to be taken rather loosely: the approach described here is also used to maintain certain optional software from our primary computer vendors (e.g., compilers), as well as software being developed and maintained by people here at SLAC for a wider audience (e.g., the EGS system).

Most third-party software is stored in an area of our AFS file system called package space. The root of package space is /afs/slac/package; each directory appearing immediately below the root corresponds to a package, that is, an independently-maintained collection of software. Usually a package contains a single product (e.g., /afs/slac/package/frame for FrameMaker); sometimes it represents a collection of related products (e.g., /afs/slac/package/gnu for assorted utilities from the Free Software Foundation). Generally speaking, a single package is considered to include all variants of the product or collection, that is, binaries for each supported machine architecture as well as successive releases of the software with new features.

By way of contrast, here are some examples of software which should not be maintained in package space:

  • Software for your personal use. This belongs somewhere under your home directory.
  • Software for an experiment at SLAC. This is usually stored somewhere in group space (i.e., under /afs/slac/g/group-name) or in some other area of SLAC's file system reserved for use by the collaborators in a particular experiment.
  • Basic operating system software, which is normally installed locally on each machine.

Software is frequently distributed with Web-based documentation and, copyright permitting, it may be appropriate to install such documents somewhere in SLAC's Web space, i.e., the area of SLAC's AFS file system which is accessible from SLAC's UNIX Web server. This space mainly comprises all the files and directories under /afs/slac/www/. Thus, a package maintainer may optionally request an additional AFS volume mounted at

/afs/slac/www/comp/unix/package/pkgname
The URL corresponding to this Web volume would be
http://www.slac.stanford.edu/comp/unix/package/pkgname

Packages vary widely in size and requirements, and no single maintenance strategy will work well in all cases. Thus, we cannot provide a comprehensive and detailed reference manual for package maintenance. Instead, we explain the layout of SLAC's package space and the basic AFS commands you will need for maintaining your package, along with some illustrative examples and a few general guidelines.

We assume you are already familiar with basic AFS concepts and such AFS commands as klog, tokens, and a few of the file server (fs) family of commands, including fs listquota, fs listacl and fs setacl. If not, you should first review the SLAC AFS Users' Guide. As a package maintainer you will need to use a few other AFS commands, including some having to do with the directory protection server (the pts family of commands) and the volume server (the vos family). We will touch briefly on a few of these, but for more complete information you should refer to the AFS Administration Reference Manual.

Here are the major steps involved in installing a new package:

  1. Decide on a package layout
  2. Request creation of a new package
  3. Consider License and Copyright Issues
  4. Install, build and test the software
  5. vos release read-only clones
  6. Make the executables accessible
  7. Announce availability of new software
When it's time to install a new release, the procedure is a little different:
  1. Obtain space for new release
  2. Install, build and test new release
  3. Announce upcoming changes
  4. Update links and vos release
Finally, there is some additional administrative information that package maintainers should be aware of; see the sections on package mailing lists and the index of package maintainers at the end of this document.

Decide on a Package Layout

Before beginning, you should give some thought to how you want to lay out your package. Of course, the basic source tree is usually determined by the author and encoded in the form of a make file and a tar distribution. However, you will probably have to decide how to deal with different machine architectures and new releases, and how to map the resulting directory tree onto one or more AFS volumes.

There are three main aspects of AFS that have implications for your package layout: size constraints on volumes, read-only clone volumes, and the AFS @sys variable.

Size Constraints on Volumes

You may already be familiar with the administrative constraints we place on AFS volume sizes here at SLAC. Though these are not hard limits, efficient management of the space on the AFS file servers requires that we keep most volumes to relatively modest sizes. In the past, we strongly recommended that larger packages allocate one volume for each machine architecture and, when multiple releases were to be simultaneously available, one for each release. Fortunately, disks have gotten larger and faster and, in addition, we have converted most of our AFS space to fast RAID systems. As a result, we've raised our (soft) volume size limit to about 500 MB. Most package maintainers should now have the option to consolidate either architectures or releases (if not both) into single volumes. Nevertheless, as we'll see below, there may still be some organizational advantages to a larger number of volumes.

Read-only Clones

There is another feature of AFS volumes that can be quite useful in package maintenance. A normal read-write volume can be set up to have one or more read-only replication sites or clones. Although such clones can provide performance and redundancy benefits for very heavily used or critical volumes, most packages do not really need this level of service. On the other hand, clones have another feature that can aid in making a smooth transition to a new release of a package. Changes to files in the read-write volume are not immediately reflected in the clones. Instead, a user (with the appropriate authority) must give an explicit AFS command to update the clones. Users accessing the clone see all the changes at once, thus assuring them of a consistent view of the software in the package.

The @sys Variable and Machine Architecture

NFS file systems are usually set up to hide the architecture-specific details of a "canonical" path name. For example, the canonical path to a third-party executable, named, say, foo, is usually /usr/local/bin/foo. Until recently, this was implemented at SLAC by setting up slightly different sets of symbolic links near the root of each NFS client machine's file system: /usr/local/bin was actually a symbolic link to /usr/local/bin.archname where archname was aix6000 on an IBM RS/6000, sun5 on a Sun Solaris machine, and so forth.

AFS provides an alternative approach for handling architecture dependencies. When the string "@sys" is encountered in a path name, AFS translates it on the fly to the appropriate architecture-related string for the particular client machine. In principle, symbolic links are not required on the client machines in this scheme, and all of the architecture-related details could be handled within the shared AFS file system. (However, in practice the /usr/local directory tree at SLAC is organized using a combination of symbolic links and the @sys variable.)

The maintainers of AFS have defined a set of standard @sys values, called sysnames. These values typically encode not only the machine architecture, but also some OS release information, giving them greater granularity than the SLAC-invented archname values. For example, instead of a single string, aix6000, for all RS/6000s regardless of AIX release, AFS uses sysnames like rs_aix32 for AIX 3.2.x, rs_aix41 for AIX 4.1.x, and so forth. To discover the sysname for a particular machine, you can run the command, fs sysname. You can get a list of the sysnames that are currently being actively maintained at SLAC by issuing the command, plug -sysnames (the plug utility, described below, is used to link your package into the /usr/local directory tree). For a more complete list of sysnames that have been used at SLAC at one time or another, browse through the /afs/slac/ directory and some of the existing packages.
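
For example, here is roughly what this might look like on a Solaris 2.6 machine (the exact wording of the fs sysname output may vary with the AFS client version, and the plug -sysnames output is elided here):

ljm@flora02 11:05 > fs sysname
Current sysname is 'sun4x_56'
ljm@flora02 11:06 > plug -sysnames
[...list of the sysnames currently maintained at SLAC...]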

Although this fine granularity in sysnames is occasionally useful, it often gives many more variants than are really needed (this is especially true for Sun's older machines). Thus, it is usually sufficient to build a single binary for a given hardware/OS family, and then use symbolic links (within AFS package space, not on the client machines) to tie together all the related sysnames. Since binary executables are usually forward compatible across OS releases, you should normally build the binary for a given architecture on a machine running the oldest version of the OS you're planning to support.

A Typical Example

NB: Some of the examples used in this document are based on a set of sysnames that are no longer being actively maintained by the plug utility.
Let's suppose you want to set up a package for a product called "foo", which is currently being distributed at release 2.1. It is under active development, so you expect you may from time to time need to support not only the current or "production" release, but also either a newer or an older release (or, if you're unlucky, both newer and older releases simultaneously). To avoid having to specify explicit release numbers in the canonical paths to the production, new and old releases, you decide to use the strings "pro", "new" and "old" for this purpose; however, it's also helpful to have an invariant way to access a particular release number. Thus you decide to make the relative release names, "pro", "new" and "old", symbolic links pointing to directories with names based on the absolute release number. You've also decided to support AIX 3 and 4 (sysnames rs_aix32 and rs_aix4[123]) and Solaris 2.5.1 and 2.6 (sun4x_5[56]) but not other architectures or OS releases, such as NeXT or pre-Solaris versions of SunOS. The next question you must resolve is whether to put the various architectures above or below the different releases in the directory tree. There are arguments for and against each approach, and probably no "best" answer even for a given package, let alone a general rule for all (or most) packages.

The Release/Architecture Layout

This is probably the simplest layout. In our hypothetical example, the directory tree would look something like this:

/afs/slac/package/foo/pro@ -> r2_1/
                      r2_1/rs_aix32/
                           rs_aix41/
                           rs_aix42@ -> rs_aix41/
                           rs_aix43@ -> rs_aix41/
                           sun4_55/
                           sun4x_55@ -> sun4_55/
                           sun4x_56@ -> sun4_55/
There are several things to notice here. First of all, we have chosen to set up separate directories for AIX 3 (rs_aix32/, since AIX 3.2.5 is the only release of AIX 3 that is currently in use at SLAC) and AIX 4 (rs_aix41/, corresponding to AIX 4.1.x, plus symbolic links for two other AIX releases, 4.2.x and 4.3.x). Two directories for AIX would not normally be needed -- you would link all the AIX sysnames to a single directory. However, for the purpose of illustration we will assume that you want to take advantage of some new features available in foo under AIX 4 but not under AIX 3.

Notice also that we have linked two releases of Solaris to a single directory with a name, sun4_55/, that does not actually correspond to any official sysname. In Sun sysnames, the letter before the underscore identifies architecture variants, so rather than singling out one such variant as more fundamental, package maintainers often name the actual directory for a given SunOS release without any letter (e.g., sun4_55/) and make all the real sysnames symbolic links to it.

Finally, notice the somewhat confusing Sun sysnames: Solaris release 2.x is also known as SunOS release 5.x, and the official sysnames use the latter release number for continuity with pre-Solaris SunOS releases. When people at SLAC refer to "SunOS" they usually mean a pre-Solaris release of Sun's operating system. The sysnames for such releases have a digit less than 5 immediately following the underscore, e.g., sun4c_412. (Things get even more confusing with the next release of Solaris, which Sun is now calling simply "Solaris 7" rather than "Solaris 2.7", and for which the official AFS sysname is sun4x_57.)
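
As a sketch, the Solaris portion of the tree above could be created with ordinary mkdir and ln -s commands (if the volume has read-only clones, you would do this via the /afs/.slac.stanford.edu/... read-write path, as described later in this document):

   cd /afs/slac/package/foo/r2_1
   mkdir sun4_55
   ln -s sun4_55 sun4x_55
   ln -s sun4_55 sun4x_56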

Beneath each of the architecture directories, you would normally have at least a bin/ directory, and possibly a few others, such as lib/.

/afs/slac/package/foo/r2_1/rs_aix32/bin/
                                    lib/
                           rs_aix41/bin/
                                    lib/
                           rs_aix42@ -> rs_aix41/
                           rs_aix43@ -> rs_aix41/
                           sun4_55/bin/
                                   lib/
                           ...etc...

In addition to the architecture-specific directories, a package usually has some architecture-independent directories for such things as the source code and configuration files. You might also want to have a scratch directory in which to build the software for a particular architecture without disturbing either the original source code or any already built executables or libraries. Thus, we might add to the above,

/afs/slac/package/foo/r2_1/build/
                           common/
                           src/

The Architecture/Release Layout

Some experienced package maintainers prefer to put architectures (plus architecture-independent directories, like common/ and src/) at the top of the package tree. In our hypothetical example, this might look like:
/afs/slac/package/foo/build/
                      common/
                      src/
                      rs_aix32/
                      rs_aix41/
                      rs_aix42@ -> rs_aix41/
                      rs_aix43@ -> rs_aix41/
                      sun4_55/
                      sun4x_55@ -> sun4_55/
                      sun4x_56@ -> sun4_55/
Under each architecture, you could use the same relative and absolute release names (e.g., .../pro@ -> r2_1/) as above, with .../bin/ and .../lib/ directories at the next level down. You'd also need release directories under .../src/ and .../common/, though you could probably dispense with the relative release names since these paths are not likely to be embedded in scripts. One advantage of this layout is that you would probably not need multiple releases under .../build/, since you would normally only build one release and architecture at a time. You would thus end up with something like this:
/afs/slac/package/foo/build/
                      common/r2_1/
                      src/r2_1/
                      rs_aix32/pro@ -> r2_1/
                               r2_1/bin/
                               r2_1/lib/
                      rs_aix41/pro@ -> r2_1/
                               r2_1/bin/
                               r2_1/lib/
                      rs_aix42@ -> rs_aix41/
                      rs_aix43@ -> rs_aix41/
                      sun4_55/pro@ -> r2_1/
                              r2_1/bin/
                              r2_1/lib/
                      sun4x_55@ -> sun4_55/
                      sun4x_56@ -> sun4_55/

Request Creation of a New Package

The next step is to map this directory tree onto one or more AFS volumes and then send a request to unix-admin@slac.stanford.edu to set up the new package. Your request should include the requested package name, the names and mount points of any needed subvolumes, the estimated space needed in each volume, and whether or not any of the volumes should have read-only clones (and if so, how many). You may also request a Web volume associated with your package if you know you'll have HTML documentation to install.

Mapping Directories to Volumes

In preparing this request, you should be aware of the following guidelines.

  • The size of any single volume should be kept below 500 MB.
  • Changing a volume's mount point is relatively time-consuming since it requires coordination between you, your users, and an AFS administrator.
  • Periods should be avoided in mount point names (notice the choice of "r2_1" in the path names for "release 2.1" in the hypothetical example of the previous section).
  • Volumes should not be mounted "too deeply" in your package's directory tree.
The last two points are due to our volume naming convention, which is designed to map volume names into their mount points. This convention is needed because there is no built-in way -- other than exhaustive search -- to determine where in the AFS file system a particular volume is mounted. The convention is that slashes in the mount point name map to periods in the volume name (hence the third guideline), with a few short abbreviations corresponding to the mount points for major divisions of SLAC's AFS file space. In the case of package space, the mount point /afs/slac/package, is mapped to the volume prefix "pkg". Thus, the topmost volume for package foo, mounted at /afs/slac/package/foo, will be named pkg.foo; a subvolume mounted at .../foo/bar, will be named pkg.foo.bar; and so forth. There is also a maximum length of 22 characters for a volume name, which is the reason for the last guideline (which is really a limit on the length of the path name rather than on the number of directory levels).
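
The forward mapping, from a mount point to the volume mounted there, can be checked with the fs lsmount command; for example (output format approximate):

ljm@flora02 11:07 > fs lsmount /afs/slac/package/foo
'/afs/slac/package/foo' is a mount point for volume '#pkg.foo'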

Because of the second guideline, we strongly recommend that you choose generic names for your actual volumes and mount points and use symbolic links to provide an alternate, more meaningful set of names. This will give you more freedom to rearrange your directory tree without requiring intervention by an AFS administrator.

The Release/Architecture Volume Map

The use of generic volume names is particularly straightforward in the release/architecture layout described in the previous section, since successive releases are likely to be roughly the same size and can thus be easily mapped to a set of generic volumes just below the top level of your package. In our hypothetical example, you would initially request a small (say 5 MB), top-level "stub" volume, named pkg.foo, plus a single larger subvolume, named "pkg.foo.vola", say, for the initial release. The top level of your package would then look like this:
/afs/slac/package/foo/pro@ -> r2_1@
                      r2_1@ -> vola/
                      vola/build/
                           common/
                           src/
                           rs_aix32/
                           ...etc...
When you're ready to install release 2.2 of foo, you would send a request to unix-admin for a new subvolume named "pkg.foo.volb", mounted at .../package/foo/volb, and add a symlink .../r2_2@ -> volb/. Eventually, you could retire release 2.1 and recycle its volume for a newer release.
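
In terms of commands, and assuming pkg.foo has a read-only clone, the transition to release 2.2 might be sketched as follows (volume and link names as in the example above; the vos_release command is described later in this document):

   cd /afs/.slac.stanford.edu/package/foo
   ln -s volb r2_2
   [...install, build and test release 2.2 under volb/...]
   vos_release pkg.foo.volb
   vos_release pkg.foo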

The Architecture/Release Volume Map

In this layout you would probably map the different architectures, plus some, but not necessarily all, of the other top-level directories onto separate volumes. Unless the package was very large, you would probably not need to further subdivide your volumes by release. Because the volume mapping is likely to be more stable, and also because some of the volumes are likely to be of significantly different sizes, the use of generic volume names may be less attractive. Nevertheless, if your set of supported architectures changes over time, generic names may still prove useful. In our hypothetical example, let's assume that you initially estimate that you'll need about 100 MB for source, 300 MB for binaries in each architecture, 300 MB for building, and no more than 1 MB for the .../common/ directory. You decide to use specific names for the .../src/ and .../build/ directories, generic names for the architecture directories, and to leave .../common/ in the top-level volume because it's so small. Your request to unix-admin should include a list of volumes, sizes, and mount points something like this:
   Volume Name     Size, MB   Mount Point
   -----------     --------   -----------
   pkg.foo         5          .../package/foo
   pkg.foo.src     100        .../package/foo/src
   pkg.foo.build   300        .../package/foo/build
   pkg.foo.arch1   300        .../package/foo/arch1
   pkg.foo.arch2   300        .../package/foo/arch2
   pkg.foo.arch3   300        .../package/foo/arch3
When you are notified that these volumes are ready, you could create additional directories and symbolic links, resulting in something like this:
/afs/slac/package/foo/arch1/
                      arch2/
                      arch3/
                      build/
                      common/
                      rs_aix32@ -> arch1/
                      rs_aix41@ -> arch2/
                      rs_aix42@ -> rs_aix41@
                      rs_aix43@ -> rs_aix41@
                      src/
                      sun4_55@ -> arch3/
                      sun4x_55@ -> sun4_55@
                      sun4x_56@ -> sun4_55@
If you later decide to drop AIX 3 and add support for (Intel) Linux (sysname i386_linux22), you could simply recycle arch1, replacing the link rs_aix32@ -> arch1/ with i386_linux22@ -> arch1/.
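
A rough sketch of that change, again assuming read-only clones and the generic volume names used above (vos_release is described later in this document):

   cd /afs/.slac.stanford.edu/package/foo
   rm rs_aix32
   ln -s arch1 i386_linux22
   [...rebuild the contents of arch1/ for Linux...]
   vos_release pkg.foo.arch1
   vos_release pkg.foo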

To Clone or Not To Clone

Your request to unix-admin should also specify which volumes, if any, should have read-only clones, and how many clones each volume needs. As explained above, read-only clones make it possible to make and test changes to your package without disturbing your users, and then activate a set of changes by issuing a single AFS command. However, the use of clones also adds a little extra complexity to the package maintenance process:
  • If you have read-only clones, they will be mounted within the "normal" part of the AFS package space, e.g., under /afs/slac.stanford.edu/package/foo/. In order to make changes, however, you'll have to access the read-write volume which is located under a separate but parallel tree beginning with /afs/.slac.stanford.edu/package/foo/ (note the period before "slac.stanford.edu").
  • The "activation" command, vos release, is normally only available to AFS administrators. SCS provides an automatic mechanism for package maintainers to have this command issued on their behalf on the volumes for which they are responsible. However, this mechanism requires typing in your password, and thus is not particularly easy to automate.
  • The vos release command operates on a single AFS volume, so if you have clones at two levels (e.g., the main stub volume and a subvolume), you must think carefully about the order in which you issue the vos release commands.
Nevertheless, read-only clones provide quite useful protection for your users, and we will assume that they are used throughout the remainder of this document.

For the vast majority of packages that do choose to use read-only clones, a single clone should be all that's needed.

ACLs and AFS groups

In addition to creating the initial volumes for your package, our package creation process also creates two AFS groups, owner-pkg-pkgname and maint-pkg-pkgname. The maint- group is intended to define the set of people who can make changes to the files and directories of the package; normally, this group should be given the WRITE set of privileges (i.e., "rlidwk") on all the directories of the package. The owner- group is intended to define the set of users with authority to change the membership of either group or to change the ACLs of a package directory; normally it should have ALL privileges ("rlidwka") on all directories in the package. As the package requester, you will be the sole initial member of both groups. You can use various pts commands, such as pts adduser to change the groups' membership. For a complete list of pts commands, try pts help.
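
For example, to add a colleague to the maintainers group of the hypothetical foo package and verify the result (the userid jdoe is for illustration only):

   pts adduser -user jdoe -group maint-pkg-foo
   pts membership maint-pkg-foo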

It is strongly recommended that you never attach ACLs for individual users to package directories, since this can make it very difficult to keep the ACLs correct when responsibilities change. If the standard two-group structure outlined above is not adequate for your package (for example, if you want to subdivide maintenance responsibilities for different parts of the package among several different users), you can use the pts creategroup command to create additional groups (be sure to specify the -owner option on this command in order to assign ownership of the newly-created group to one of your package groups instead of to you, personally).
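
Here is a sketch of creating such an additional group, owned by your package's owner- group, and giving it write access to one subdirectory (the group name, userid and directory are hypothetical):

   pts creategroup -name owner-pkg-foo:docs -owner owner-pkg-foo
   pts adduser -user jdoe -group owner-pkg-foo:docs
   fs setacl -dir /afs/.slac.stanford.edu/package/foo/common -acl owner-pkg-foo:docs write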

Consider License and Copyright Issues

Software

AFS is a world-wide file system, but not all software should be made freely available to the whole world. It is your responsibility to review the terms of the software license and implement whatever access restrictions may be necessary. In some cases, this can be done fairly easily by means of the AFS ACLs. For example, packages are set up by default with the READ set of privileges ("rl") for the two special groups, system:slac and system:authuser. If you have only a site license for your package, however, it probably should not be available to people running on a remote computer with a SLAC AFS token (e.g., someone at CERN who has done a "klog -cell slac.stanford.edu"). In such a case, you should probably remove the system:authuser ACL from the directories in your package.

Similarly, if your license is restricted to a specific set of users you could create an AFS group, e.g., owner-pkg-pkgname:licensees, to be given READ privileges and then remove the ACLs for both the system:slac and system:authuser special groups.
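
For example, the following sketch removes the system:authuser entry from every directory of the hypothetical foo package's read-write tree; for the licensee-only case you would also set system:slac to none:

   cd /afs/.slac.stanford.edu/package/foo
   find . -type d -exec fs setacl -dir {} -acl system:authuser none \;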

Some software is distributed with "floating licenses" requiring an active license manager. SLAC has some dedicated license server machines for this purpose, and we are already running a couple of the popular license managers, FLEXlm and NetLS, on them. If your software needs a license manager, we suggest you talk to us (send mail to unix-admin@slac.stanford.edu) about running it on one of our servers.

Documentation

If the package includes Web-based documentation, you should also consider the most appropriate way to make it available. Unless there is a clear indication to the contrary in the distributed package, it is safest to assume that access to the documentation should be restricted similarly to the software itself.

If the access requirements permit, it is most convenient for users if you place the documents in a Web volume associated with your package. For example, this would probably be fine for software that was freely available (e.g., under the terms of the GNU General Public License). Note that installing documents into such a volume does not cause them to be linked automatically into the SLAC Web; however, it does make them visible to anyone in the world who knows (or can guess) the URL.

To restrict Web documentation to SLAC nodes only (e.g., to correspond to software with a site license), removing the system:authuser ACL is not sufficient, since SLAC's UNIX Web server does not enforce AFS access restrictions. Instead, you should also place such documents under a directory named "slaconly".

Documents with greater restrictions (e.g., an "internals" manual for commercial software) should probably not be placed within a Web volume at all, but should be left in package space and viewed via a file: (rather than http:) URL.

Linking your Web documents from the SLAC Web is beyond the scope of this document; consult SLAC's Web Policy and Resources page.

Install, Build and Test the Software

This is mainly a matter of following the instructions that came with the software. Here are just a couple of tips:
  • As always in AFS, don't forget to make sure you have a token.
  • If you have read-only clones, remember that you must access the read-write volume (via the /afs/.slac.stanford.edu/... path) in order to make changes or see the changes for testing purposes before doing a vos release.

Note concerning Web documentation:   If your package includes documents that you are installing into a Web volume, you should physically locate the files in the Web volume rather than simply providing symbolic links from that volume into your regular package area. Otherwise, there is some risk of inadvertently exposing other files outside SLAC's official Web space to access from anywhere in the world. Symbolic links pointing in the other direction are perfectly OK.

Note for M account users:   Some package maintainers may have previously used an M account to maintain software in SLAC's NFS file system. These were special no-login accounts, with names that, by convention, began with a capital "M" followed by the package name (e.g., "Memacs"). The purpose of these accounts was to control access to the shared NFS file space. With the richer set of access control mechanisms in AFS, M accounts are no longer needed.

vos release Read-only Clones

As mentioned above, if you have read-only clones defined for a volume in your package, a vos release command must be issued after making changes in the read-write volume, mounted somewhere below /afs/.slac.stanford.edu/package/pkgname/..., in order to make those changes visible in the clones at their normal mount points below /afs/slac.stanford.edu/package/pkgname/....

However, because vos release is a privileged command, you cannot issue it directly. Instead, you must:

  1. Be a member of the maint-pkg-pkgname AFS group for the package.
  2. Issue the command,
    vos_release { vol_name | mount_point }
    to request that the vos release be done on your behalf. You will be prompted to enter your AFS password for authentication. If you specify the mount_point rather than the vol_name, you must specify the absolute path to the read-write volume, i.e., it must begin with /afs/.slac.stanford.edu/...

If you have several volumes to release, you can avoid having to type your password repeatedly by first issuing a special version of the klog command, klog.krb. Once you've successfully done a klog.krb on a given host, you'll remain authenticated for the vos_release command (on that particular host) for up to 25 hours, just as for a regular AFS token (in fact, as a byproduct you'll also get a fresh AFS token).

Suppose you've decided to use the Release/Architecture layout and map it onto generic volumes as suggested above. After you've completed your installation and testing, your read-write volume should look something like this:

/afs/.slac.stanford.edu/package/foo/pro@ -> r2_1@
                                    r2_1@ -> vola/
                                    vola/build/
                                         common/
                                         src/
                                         rs_aix32/
                                         ...etc...
However, assuming you have not yet done a vos_release, the clone volumes will appear to be empty. You will need to vos_release two volumes, pkg.foo (to make the top-level symbolic links visible) and pkg.foo.vola (to make the contents of the subvolume visible). Although in the case of a new package the order does not much matter, it is probably a good idea to get into the habit of vos_release-ing the subvolume(s) first followed by the top-level volume. This order will usually make for smoother transitions to new releases once you have active users.

Here's what this might look like:

ljm@flora02 11:09 > klog.krb
Password: [...type in your regular AFS password...]
ljm@flora02 11:09 > vos_release pkg.foo.vola
Recloning RW volume ...

pkg.foo.vola 
    RWrite: 536886656     ROnly: 536886657     Backup: 536886658 
    number of sites -> 3
       server afs11.slac.stanford.edu partition /vicepb RW Site 
       server afs08.slac.stanford.edu partition /vicepa RO Site 
       server afs11.slac.stanford.edu partition /vicepb RO Site 
This is a complete release of the volume 536886656
Updating existing ro volume 536886657 on afs08.slac.stanford.edu ...
Starting ForwardMulti from 536886657 to 536886657 on afs08.slac.stanford.edu.
updating VLDB ... done
Released volume pkg.foo.vola successfully
ljm@flora02 11:10 > vos_release pkg.foo
Recloning RW volume ...

pkg.foo 
    RWrite: 536876792     ROnly: 536876793     Backup: 536876794 
    number of sites -> 3
       server afs11.slac.stanford.edu partition /vicepb RW Site 
       server afs08.slac.stanford.edu partition /vicepa RO Site 
       server afs11.slac.stanford.edu partition /vicepb RO Site 
This is a complete release of the volume 536876792
Updating existing ro volume 536876793 on afs08.slac.stanford.edu ...
Starting ForwardMulti from 536876793 to 536876793 on afs08.slac.stanford.edu.
updating VLDB ... done
Released volume pkg.foo successfully
Note that it can take a significant amount of time (tens of seconds to minutes, depending on the size of the volume and the load on the file servers) for the vos_release to complete.

One word of caution: it is not a good idea to rely on keeping changes in the read-write volume from appearing in the clones for long periods. If you share responsibility for a package with others, someone else may do a vos_release before you. In addition, from time to time an AFS administrator may need to move a volume from one file server to another, which has the same effect as the vos release command.

Make the Executables Accessible

If the software in your package is of interest to only a handful of SLAC users, it may be sufficient to tell your users to add the appropriate directory, e.g., /afs/slac/package/foo/pro/@sys/bin, to their PATH environment variable (see the example below). Most package maintainers, however, will want to provide symbolic links to their executables from /usr/local/bin.
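
For the PATH approach, a csh/tcsh user could, for example, add the first line below to their shell startup file, and a ksh/bash user the second (path taken from the running foo example):

   setenv PATH /afs/slac/package/foo/pro/@sys/bin:$PATH
   export PATH=/afs/slac/package/foo/pro/@sys/bin:$PATH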

Package maintainers are not normally given the authority to make changes directly in the /usr/local directory tree. Instead, a SLAC-written facility, named plug, runs several times a day to update symbolic links from /usr/local into package space. This facility reads a control file, named PACKAGE.LINKS, in the top-level directory of each package to determine what symbolic links should be made and how they should be mapped into the package. In addition to the executables, this facility can also handle other types of files that should appear in /usr/local, such as libraries and man pages. After making changes to their PACKAGE.LINKS file, package maintainers can run the plug command in test mode in order to see what will happen at the next production plug run. For more complete information, please see man plug.
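
For the hypothetical foo package, the net effect of a production plug run might be a link such as the following (the actual link names and targets are determined by your PACKAGE.LINKS entries):

   /usr/local/bin/foo@ -> /afs/slac/package/foo/pro/@sys/bin/foo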

Announce Availability of the New Software

The last step in creating a new package is to announce its availability to your users. The primary place for doing this is the mailing list, comp-change@slac.stanford.edu (and the associated USENET newsgroup, slac.computing.changes). Unfortunately, due to the increasing problem of SPAM, we have been forced to restrict this mailing list to SCS personnel only. We plan to add package maintainers to the list of allowed posters in the near future, but until then you'll have to send a draft of your proposed announcement to unix-admin@slac.stanford.edu, with a request to repost it to the comp-change mailing list.

Please keep your announcements brief and to the point, and include pointers to more complete information. We try very hard to keep the traffic on this announcement list (and its twin, comp-out, used for announcing outages) to a minimum so that as many people as possible will subscribe.

You also may wish to post your announcement elsewhere, e.g., in an experimental group's news system.


Obtain Space for New Release

At some point you'll probably need to install a new release of your software (or, possibly, support for a new architecture). The first couple of times this happens, you may simply request the necessary space, in the form of additional AFS volumes and/or quota, by sending mail to unix-admin@slac.stanford.edu. Eventually, however, you should be prepared to recycle some of your existing package space. This may require some lead time to permit any users of the obsolete release or architecture to find alternatives (or to try to convince you not to retire their favorite version ;-).

It can be difficult to know for sure who might be using your package, so it's always a good idea to post announcements of potentially disruptive changes (such as withdrawal of a release or architecture) to the comp-change@slac.stanford.edu mailing list (via unix-admin@slac.stanford.edu), as well as to more specifically targeted fora, such as a group's news system.

Continuing with our by now somewhat tired example, suppose some time has passed and you're now running release 7.6 of foo in production, plus the previous release, 7.5, as "old". Let us further assume that some time back a small group of users developed some systems that are critically dependent on release 7.1, and that you have agreed to leave a "frozen" copy of this release in place indefinitely, but outside the normal "new" -> "pro" -> "old" progression. Assuming the release/architecture layout, package foo may now look something like this:

/afs/slac/package/foo/old@ -> r7_5@
                      pro@ -> r7_6@
                      r7_1@ -> volb/
                      r7_4@ -> vold/
                      r7_5@ -> volc/
                      r7_6@ -> vola/
                      vola/
                      volb/
                      volc/
                      vold/
Note that there is currently no "new" symbolic link (you removed it shortly after release 7.6 went into production), and that the former "old" release, 7.4, still exists.

You first send out an announcement stating your intention to remove release 7.4 in a few days' time in order to make way for the recently announced new and improved release 8.0. On the stated day, you remove the r7_4 symbolic link and do a vos_release of pkg.foo, then remove the contents of vold/ and vos_release pkg.foo.vold.
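
In command form, that retirement might look roughly like this (volume names as in the example above):

   cd /afs/.slac.stanford.edu/package/foo
   rm r7_4
   vos_release pkg.foo
   rm -r vold/*
   vos_release pkg.foo.vold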

An alternative strategy is to remove such "older" releases a few days after moving a new release into production, e.g., at the same time that you remove the "new" symbolic link. This saves you some lead time at the next release, but the trade-off is slightly more peremptory treatment of your users.

Install, Build and Test New Release

At this point, you can install, build and test your new release in vold/, and create new symbolic links,
/afs/.slac.stanford.edu/package/foo/new@ -> r8_0@
                                    r8_0@ -> vold/
These will not be visible via the /afs/slac/... path until you vos_release the two volumes (i.e., pkg.foo.vold first, and then pkg.foo). You may want to defer these vos_release commands until just before you're ready to announce that release 8.0 is available for testing.
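
A sketch of the corresponding commands, assuming the generic volume names used above:

   cd /afs/.slac.stanford.edu/package/foo
   ln -s vold r8_0
   ln -s r8_0 new
   [...install, build and test release 8.0 under vold/...]
   vos_release pkg.foo.vold
   vos_release pkg.foo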

Announce Upcoming Changes

As for new packages and withdrawal of old releases, announcements of new releases should be posted to (at least) comp-change@slac.stanford.edu (via unix-admin@slac.stanford.edu). If you are simply introducing a "new" (i.e., non-production) release, you can usually make the announcement after the release is available. However, if you are changing the production release, you should usually give some warning and, if possible, a chance to test the new release before it goes into production. In practice, you can usually combine the two announcements:
Subject: New release of foo available for testing

Release 8.0 of foo is now available for testing. To try it, add /afs/slac/package/foo/new/@sys/bin to your PATH environment variable. For a summary of the new features of this release, see the file, /afs/slac/package/foo/new/common/doc/release-notes.

Please send any comments or questions to your-email-address-here. If no serious problems are discovered before then, release 8.0 will become the production release on Tuesday, 29 February 2000.

Update Links and vos release

Assuming your new release has been available for user testing for at least a few days, and that you've given reasonable notice of a change to the production version, you should be able to move the new version into production (and make the current production version the "old" version) by changing a few symbolic links in the top-level volume and then doing a vos_release of just that volume.
ljm@flora02 20:04 > cd /afs/.slac.stanford.edu/package/foo
ljm@flora02 20:04 > rm old && ln -s r7_6 old
ljm@flora02 20:04 > rm pro && ln -s r8_0 pro
ljm@flora02 20:04 > vos_release pkg.foo
Password: [...type in your regular AFS password...]
Recloning RW volume ...

pkg.foo 
    RWrite: 536876792     ROnly: 536876793     Backup: 536876794 
    number of sites -> 3
       server afs11.slac.stanford.edu partition /vicepb RW Site 
       server afs08.slac.stanford.edu partition /vicepa RO Site 
       server afs11.slac.stanford.edu partition /vicepb RO Site 
This is a complete release of the volume 536876792
Updating existing ro volume 536876793 on afs08.slac.stanford.edu ...
Starting ForwardMulti from 536876793 to 536876793 on afs08.slac.stanford.edu.
updating VLDB ... done
Released volume pkg.foo successfully
At this point, you should send out one more announcement confirming that the new release is now in production, repeating instructions on reporting problems, and warning that the now redundant "new" symbolic link will be removed in a few days (don't forget to do so).

Congratulations! You are now an experienced package maintainer.


Package Mailing Lists

The plug utility that maintains the /usr/local symlinks for packages also maintains a UNIX mailing alias for each package to make it easier to communicate with the maintainers of that package. The names of these aliases are of the form "maint-pkgname", and the members are the users in the corresponding maint-pkg-pkgname AFS group. From time to time, someone on the unix-admin mailing list may forward user problem reports concerning your package to this mailing list and request that you respond directly to the user.

You may also want to advertise this mailing list to your users when you announce changes to your package. However, if you do so please remember that these UNIX aliases do not have corresponding entries in our central mailrouter database, so you must specify the host name, "mailbox", when using them, e.g., maint-foo@mailbox.slac.stanford.edu.

We also combine all the individual package mailing lists into a single master mailing alias, maint-all, which we use from time to time to make announcements to all package maintainers. If you are responsible for more than one package, your email address will only occur once in the maint-all alias, so you shouldn't be inundated with a lot of duplicate mail.

Index of Package Maintainers

We also maintain a master index of package maintainers and the packages each one is responsible for. You can find it at /afs/slac/package/PACKAGE.MAINTAINERS. To review your own list of packages (substituting your userid in the pattern), you can do:

   grep '^userid ' /afs/slac/package/PACKAGE.MAINTAINERS

Len Moss