Les Cottrell, Chuck Boeheim, SLAC, last update April 13, 1997
There were over 400 attendees at CHEP, a bit more than at the previous CHEP (Brazil), and about 280 talks. The conference center had to be moved in the last week before the start due to a fire in the planned center. The conference was held in the former East German Staatssicherheit (Stasi) secret police headquarters. There were about 30 Windows NT desktops available for logon. Unfortunately the Internet connection was a single 128kbps ISDN link. It was usable for character-oriented mail and WWW. X servers were disabled after the first day when it was found that supporting them was not possible. The Internet link was also inadequate to support MBONE/CuSeeMe feeds.
The major topics appeared to be: the understanding and impact of poor Internet performance; the challenge of maximizing the effectiveness of geographically dispersed and disparate collaborations; the emergence, growth and promise of Web-enabled applications and Java; the move to object-oriented programming and databases; the emergence of HPSS as the mass storage system of choice; the ascendance of Intel (over RISC); the question of Unix versus Windows NT; commodity hardware and software; and the need for focus on the management of a diverse/distributed environment.
We will not report on all the sessions that we attended. In particular, we won't report on talks given by others from SLAC or on sessions which will probably be covered by others at SLAC. Rather we will focus on information of particular interest to networking and communications, mass storage, and farm systems.
Since I was a session organizer for the Networking and Communication track, I mainly attended these talks, so most of my information will cover this area. The main areas covered were: video conferencing and collaborative environments, WWW, the ICFA initiative and network monitoring.
A major challenge for HEP is "maximizing the cohesiveness and effectiveness of geographically dispersed and disparate collaborations" - Richard Mount. This is particularly difficult over the WAN if WAN performance only doubles every 8.2 years, as in the past, where the evolution is constrained by markets that are in many cases cartel- and/or regulation-driven. This is in contrast to driving factors such as desktop performance and newly enabled applications such as WWW and video conferencing, which are growing much faster, together with the commercialization of and public demand for Internet access. HEP's needs do not appear to be being met by the public networks, so we must not passively accept poor performance; we must be informed and lobby for better performance.
The emerging themes and enabling technologies according to Stu Loken are:
There are 2 types of video conferencing: circuit switched and packet switched. The advantage of circuit switched (typically using ISDN) is that the resources are reserved, which eliminates interference and guarantees service. However it can be costly, especially for international conferences, and requires a professional service infrastructure for scheduling etc. Packet switched is essentially "free" apart from some minimal initial hardware and software costs (a few hundred dollars) and uses existing facilities (desktops, the LAN and WAN). However, there is no resource reservation today so one cannot guarantee quality of service; the de facto standards have not settled; not all platforms are supported by any given protocol; there are several competing protocols (e.g. CuSeeMe and the nv/vat suite) which do not fully interwork; and some implementations do not appear to have addressed Internet traffic concerns (e.g. by using multicast). Packetized video conferencing requires about 200kbps sustained bit rate.
One way to minimize the Internet traffic requirements for packetized video-conferences is to use the MBONE; unfortunately the technology is limited in the granularity with which one can set up conferences with a view to restricting traffic. To meet this concern HEPNRC have developed a Multi Session Bridge (MSB). This is built on the RTP-2 protocols (vic, vat, rat, wb, nt...). It works well and is network efficient for sparse-to-many conferences. It can unicast to sites that do not support multicast or are logically not in the multicast area (TTL) defined for a conference, and it can also by-pass poor Internet connectivity using an ISDN call.
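The TTL scoping that the MSB works around can be illustrated with a minimal sketch; the group address, port and TTL value here are assumptions for illustration only, not MSB parameters:

```python
import socket
import struct

# Hypothetical session address/port, chosen only for illustration.
GROUP, PORT = "224.2.127.254", 9875

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# The TTL bounds how far multicast packets propagate; receivers
# beyond this scope never see them, which is why a bridge like the
# MSB must fall back to unicast for such sites.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL,
                struct.pack("b", 15))
try:
    sock.sendto(b"conference audio packet", (GROUP, PORT))
except OSError:
    # No multicast route available on this host.
    pass
```

A site outside the TTL-15 region simply never receives these packets, without any per-conference access control.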
Joe Izen reported on his experiences of using CuSeeMe. It runs on most Macs and PCs. Joe claimed it is easier to set up than MBONE conferences, but said it is not better than MBONE based video conferences, only different. It uses "reflectors" to reduce traffic and is extensively used by BaBar and BES. Resolutions of 160x120 and 320x240 pixels are supported; at the 160x120 resolution, clients typically require 90kbps. Packet losses of <5% result in a healthy conference, 10% is usable but the effect is noticeable, and at 20% conversations are difficult. There are commercial and freeware versions; the commercial version supports a whiteboard. Slides can be scanned in advance and/or made available on WWW.
An interesting collaborative tool based on Java aims at providing integrated full audio-video teleconferencing, real time text/audio chat, whiteboard, email/newsgroups, calendar scheduling, all Java enabled browser accessible. It will be a commercial product, and the current version only supports email, calendar, an interactive slide show and whiteboard.
HEPNRC have set up a commercial Web indexing/search engine (Verity) to provide searchable indexes for several HEP collaborations including BaBar, SDSS, Dart, CMS and D0. The indexing may be done at HEPNRC or at the remote site (in which case the results are transferred to HEPNRC for the search engine).
A mini workshop was arranged by David Williams and Les Cottrell on the Sunday preceding CHEP97. There were about 40 attendees representing about 15 countries. Many, but not all, were also attending CHEP97. There were also two evening follow-on sessions during CHEP97 itself. This was in response to requests from the International Committee of Future Accelerators (ICFA). The morning was devoted to presentations and discussions on the current status of, and plans and initiatives for, the Internet. Areas covered included: Internet II and the NGI, European status and TEN-34 (a plan to connect 10 countries with up to 34Mbps links), ESnet status and plans in particular to improve university to ESnet links, and transatlantic links and how they are interconnected. The afternoon was devoted to understanding what people in various countries are doing for monitoring Internet connections and how to coordinate activities.
It was agreed to form an ICFA Networking Task Force (NTF) to serve for 18 months. The task force will consist of: a steering committee composed of working group convenors and others designated by ICFA; and a larger group of members who will also serve on the working groups. Membership in the task force will be open to all those interested and willing to work. Most of the work will be carried out by working groups, which include:
A mandate is under preparation and the first set of reports will be presented to ICFA at their July meeting (I forget where it is). There will be an ICFA NTF meeting (for those who can make it, ideally including the working group chairs) to coordinate with the ESSC and ESSC International group meeting at Santa Fe in September 1997. This meeting will report on the ICFA reactions to the July presentations, give updates on the status of the working groups and presentations by them, and get the proposal group organized.
HEP research is performed by large, geographically dispersed groups,
each site chooses its own Internet Service Provider (ISP) and there is
no single point for reporting problems or coordinated way of tracking or
resolving them. HEP communities in many countries/sites have undertaken
extensive monitoring of their Internet connectivity. In the U.S. an ESCC
Network Monitoring Focus Group (NMFG) including SLAC, HEPNRC, ORNL and
BNL is coordinating activities ESnet wide, building on the SLAC base and
developing for deployment more widely. The monitoring is based on ping to
provide end-to-end performance measures. Over 200 sites are currently being
monitored using this code. CERN, DESY, INFN, KEK, RAL, TRIUMF and MSU have
agreed to join. The approach is hierarchical: remote sites, collecting
sites and (top-level) analysis sites, with information being available
on WWW. The hierarchical approach reduces the scalability problems and
network impact of "fully meshed" monitoring; provides a common, well
understood (right down to details such as the frequency of pinging, payloads,
timeouts etc.), documented, reviewed, believable monitoring methodology;
and adds considerable weight to the conclusions drawn, which are thus less
easily disputed.
Much of the discussion of where farm computing is going was dominated by the 'commodity computing' discussion -- the implementation of farms with a very low cost per node by basing them on mass-market components such as Pentium processors. Many labs have feasibility studies currently underway, with most farms in the size range of five to ten nodes. The NILE project has reported substantial success with a Pentium-based farm under the Linux operating system. They are planning a 100-node farm purchase in the near future. The D0 collaboration is experimenting with an NT farm, and is considering a potentially very large NT farm for Run II.
The division between the Linux supporters and the NT supporters was quite evident. Both operating systems appear to do the job of running physics code. Both support Fortran/C/C++ development environments. The Linux environment is primarily the GNU tools; NT has a number of slick commercial development environments. Both environments adhere quite strictly to the ANSI Fortran standards, causing some porting problems for code that used Fortran extensions. Licensing software for NT farms sounded somewhat painful, as most software was commercial and required per-node licenses; most Linux software is free. NT has a much larger base of commercial software, though it is not clear how much of that is relevant to farm nodes. Both NT and Linux support web servers, Java, database access, and other emerging technologies. Neither yet supports Objectivity, HPSS, or a few other critical pieces for BaBar. LSF does have support for NT, though it is reportedly not as nice as the UNIX support.
Both NT and Linux seemed to make the hardware perform well. Linux often seemed to have better network and disk performance. Raw code benchmarks were inconclusive, showing wide variations between different codes on the same compilers. Both have multiprocessor support, and both were reported to be quite stable. Linux was often touted for the ability to get fixes for problems the next day via email from the developers.
Below are a few comments/observations which do not fit under the network & communications topics.
An STK speaker did a nice job of reviewing data recording technology futures. He showed one plot of areal density trends that indicates that magnetic disk areal densities (Mb/in^2) have reached those of optical disk, which might indicate that optical disk is now constrained to special niches. His graph also showed that the rate of improvement for magnetic disk has exceeded that of tape since the late 80's. There is still a factor of 10 advantage in the $/MB for tape compared to disk, but maybe this will erode. At the same time the access times for disk are 2-3 orders of magnitude better than those of tape systems. The emergence in the last 10 years of effective automated tape libraries has improved the $/MB storage costs by over a factor of 100. Somewhere in the future we may expect holographic/volumetric optical technologies to start to become important, since they have the potential to improve on today's access times by 1-2 orders of magnitude at similar costs ($/MB).
An IBM speaker showed that single mode fiber has a bandwidth of about 25,000 GHz with 0.2db/km. This can be wavelength multiplexed into 10,000 1Gbps channels each separated by 120GHz. IBM today sells a multiplexer that provides 10 full-duplex channels on 1 fiber (using 20 wavelengths). David Williams said that the best (in the laboratory) fiber capacity is about 300 Tbps x 1000km, which was achieved with 52-way multiplexing.
"I am dazzled by our ability to exagerate our abilities" - Joel Butler.
"Dramatic changes are expected near-term as our computation centers workloads will shift from one of NIC to data-intensive computing. This is being fueled by a 1000-fold increase in the ability to acquire and store data and new and modified applications producing large amounts of data which need to be analyzed" -Reagan Moore, SDSC.
"The trend toward user-friendliness will cause (storage systems) to grow faster than the data." - Gartner Group, April 1995.
"The old metaphors are constantly dying off into literalness", - quoted by John Gage, Sun. He also gave some examples of the metaphors including the computer as a sketchpad, desktop, flying carpet, command & control desk, market, conversation.
"this is a major shift, based on netcentric programs based on Java. We are at a point of change equivalent to the 1980's move to microprocessors"- quoted by John Gage Sun.
There was another quote that I missed, but it said something about yesterday's religious arguments being made irrelevant by technology advances.
Runs on existing desktops - Win, Mac, Unix. The codec is implemented in software (there is a choice of encodings supported; the most universal appears to be DVI) - so no special hardware is needed, and there are both freeware & commercial versions. The whiteboard requires everyone to be using the commercial WhitePine client. Includes a chat window. Voice-activated as well as push-button audio. Two resolutions are supported, 160x120 pixels & 320x240. At 160x120 pixels, 1.5-4 frames/sec = 90kbps. There is support for a color codec, but it loses crispness. Uses UDP (no retransmit). The video protocols are non-standard; WhitePine are said to be working on implementing standards. Nothing was said about whether the freeware version might be expected to support standards even if WhitePine get them going for the commercial version.
Multipoint conferences are supported by means of a reflector, which runs on Unix, Windows NT and Windows 95. The reflector has an interface to nv/vat. One can coordinate reflectors (e.g. to optimize traffic) but this is more complex to set up. The reflector network strategy can be used to avoid bad network congestion points: e.g. SLAC-UK is poor but SLAC-CERN is good, so feed the UK through the CERN reflector. Most support requests are for interconnection between CuSeeMe and nv/vat.
Joe characterizes the impact of packet loss as: < 5% the conference is healthy; < 10% the impact is noticeable but the conference is usable; 10-20% annoying to impossible. He also characterized the frame rates as: > 15 fps full motion video; 4-8 fps jerky motion; 1-2 fps (typical) one can see smiles, boredom, nods; at 0.5 fps, is the listener still present? The whiteboard is nice for presentations; one can also use WWW for the same purpose, though it is less interactive (e.g. can't do interactive markup). It would work over the 64kbps link to Beijing, but net usage on the link is now too high (90% in both directions in March). They also use it for tape-delayed broadcasts. An automatic scanner is needed to prescan presentation pictures. Support is now in the desktop group at SLAC. The HEP CuSeeMe reflector is centered at SLAC. SLAC was subsidizing phone conferences, but is not any more, so interest in CuSeeMe increases. Hybrid CuSeeMe/plain old telephone system (POTS) conferences are under development. First attempts, by dangling microphones by speakerphones, failed; electrical connections are needed. It is distressing to see nv dropping support for CuSeeMe. Joe hopes for a Unix port of CuSeeMe.
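Joe's packet-loss rules of thumb could be encoded as a simple classifier (a hypothetical helper for illustration, not part of CuSeeMe; the category labels follow his characterization above):

```python
def conference_quality(loss_pct):
    """Map a packet-loss percentage onto Joe Izen's rule-of-thumb
    conference quality categories."""
    if loss_pct < 5:
        return "healthy"
    if loss_pct < 10:
        return "noticeable but usable"
    if loss_pct <= 20:
        return "annoying to impossible"
    return "unworkable"
```

For example, `conference_quality(3)` reports a healthy conference, while a 15% loss rate falls in the "annoying to impossible" range.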
20 people at CEBAF in the Central Computing (CC) group.
One division has said "no more Macs"
They brought up AFS (could not wait for DFS), and are now looking for users (who are busy with the accelerator turn-on).
They now have an ESnet connection.
They are exploring packet video conferencing. They use LSF. Mail is with PMDF (on VMS, looking at moving to Digital Unix), dataless clients (DMS). They have not used 9GB disks due to concerns over reliability from their major vendor (Digital). CEBAF, RAL and SLAC echoed this concern based on their experience with such disks. She has concerns about license managers (e.g. flexlm) in case one of the managers dies.
Migration from VM is a major effort. Batch was shut off end Jan-97, users stop end July, the plug is pulled end Nov-97. Users are down from a peak of 1600 to 1000. There are concerns over the manageability of a Unix distributed environment. They are using ATM (155Mbps, which provides about 100Mbps/card) to connect tape servers to disk servers and user access. They had a lot of growing pains with drivers, and no broadcast (and none anytime soon) requires special gymnastics for NIS (they have concerns about how the lack of broadcast will affect DCE). Tapes are 3480/3490, connected to 6 STK silos. They plan to go to Redwoods (partially to stay in step with CERN & SLAC), with an order for 4 drives this Spring. They are awaiting the DLT7000 from STK, as is RAL. For data interchange they are worried about users' ability to fill interchange tapes, in particular if using Redwood tapes. They are using AFS for home directories and group files. They are moving to RAID for operational as well as availability reasons; they have encountered problems with RAID5. They have their own (BQS) batch system. Drives are allocated and tapes mounted via DIVA using TMS. Backup & archive is via AFS, Elliot (a front end to ADSM which makes it look like Legato) and ADSM.
Plans are to upgrade to AIX 4, HP-UX 10 and AFS 3.4, install a DCE test bed, possibly HPSS, and find something like VM's spool space, only better.
Migrating email to IMAP. Awaiting ACAP (IMAP clients) such as Pine (summer?), Netscape v4, Mulberry (Mac & Windows), a Java applet (supermail). They will experiment with DFS and with NC/Java, and improve NT management. Running Wincenter; they now (Wincenter v3 - Feb-97) have NIS-based accounts and passwords, and automatic NFS mount of the NIS home directory. Wincenter issues are mainly to unify login between Unix, VM and NT (not only Wincenter). Other issues for Wincenter are NFS as the cross-platform file sharing protocol, performance that is not good (maybe move the server to a faster Ethernet connection), and application problems in a multiuser context, which it is hoped will improve with time.
They have a contract for computing, from 1993 through 1999, to use IN2P3 for most computing. Running AIX 4.1 on an SP2. AFS clients are on Solaris; they have Load Leveller (LL) clients on Suns.
|April 3|April 4|Leave Stanford|
|April 4|April 5|Arrive Berlin|
|April 16|April 17|Leave Berlin|