IN2P3 Visit 3/27/98
Gilles Farrache (email@example.com), Jerome Bernier, Rolf Rumler (firstname.lastname@example.org), Elisabeth Piotelat (email@example.com), Chafia Tifra (firstname.lastname@example.org), Helene Jamet (email@example.com)
CCIN2P3 is the computer center for about 17 HENP laboratories (IN2P3) in France. They have a simulation farm, an interactive farm, STK Redwood drives with tape servers, and a RAID disk server on ATM with PVCs. There was a problem with SVCs in AIX 3.2 that has been fixed in 4.2. The machines are connected at 155 Mbps (no 622 Mbps interface is available for the IBM computers yet) and the inter-switch (Cisco LightStream 1010) connections are 622 Mbps (there is no IP emulation on the 622 Mbps links). There is a direct connection between the data server and the compute server. They are using Classical IP over ATM; they do not have broadcast (not using LAN emulation). They do not expect to be able to connect machines at 622 Mbps for some time (lack of interfaces), and even longer before they can have >= 1 Gbps machine connections with ATM. They are thus planning to use Cisco 5500 switches and to move to Gigabit Ethernet. They will keep the existing ATM and use LAN emulation. The C5500s will provide 100 Mbps and 1 Gbps connections to workstations and to each other. The existing LS1010s will not be connected together; they will be connected to the C5500s. The compute people are also looking at using Fibre Channel Standard (FCS) or HIPPI interfaces. Rumler reports that the limitation today appears to be disk write speed (for SSA disks), which appears to be under 7 Mbytes/sec. They use big packets, which should work OK for LAN emulation.
They are not using HSRP, VLANs (they have only one building, so the need for VLANs is minimal), or route switch modules. So far they have not played with Fast EtherChannel; it could be interesting in the future based on BaBar needs. They do not support DHCP yet; there has been little demand.
IN2P3 will be hosting BaBar data (for 30 BaBar/France physicists out of 2000 French physicists; they expect 100 Tbytes/year from BaBar), and also D0 (starting 2001). The storage work will be useful for LHC in the future, but the BaBar requirement is very demanding early on. They can store up to 1.7 Pbytes in the silos. IN2P3 are finding it hard to follow BaBar's change in emphasis from IBM (they are at AIX 4.1.5, and BaBar are promoting AIX 4.2) to Sun platforms (i.e. the builds are done on Sun first). There are also concerns about being able to distribute the data via Objectivity, especially due to the security and reliability issues. IN2P3 have been asked (by Jurgen Mai/CERN) to look into an Intel farm, but they do not have any users clamoring for one.
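As a rough sanity check, the 100 Tbytes/year BaBar figure can be compared with the ~7 Mbytes/sec SSA disk write ceiling reported earlier. The numbers are those from the notes; the steady-rate assumption (data arriving evenly across the year, which it will not) is ours:

```python
# Back-of-envelope: average BaBar ingest rate vs. the observed SSA disk
# write ceiling. Figures from the visit notes; steady-rate assumption is ours.
SECONDS_PER_YEAR = 365 * 24 * 3600           # ~3.15e7 seconds
babar_bytes_per_year = 100e12                # 100 Tbytes/year from BaBar
ssa_write_limit = 7e6                        # < 7 Mbytes/sec per SSA disk path

avg_rate = babar_bytes_per_year / SECONDS_PER_YEAR   # bytes/sec
print(f"average ingest rate: {avg_rate / 1e6:.1f} Mbytes/sec")
print(f"fraction of one SSA write path: {avg_rate / ssa_write_limit:.0%}")
```

Even averaged out, the BaBar stream alone would consume roughly half of a single SSA write path, which is why the early storage requirement is described as demanding.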
DECnet & AppleTalk
In the past PHYnet supported native DECnet, but it is now migrating to RENATER, which supports IP only. DECnet Phase IV or V is used internally. Before the end of June, all international and WAN DECnet will cease to be supported by the computing center. There are no VMS nodes at CCIN2P3; there are some at other centers. They are using AppleTalk internally for special purposes such as printing. There is no support for AppleTalk over the WAN.
Mail at IN2P3
Mail started with AFS mail on an IBM RS/6000 370 with RAID disk; it was reliable but required people to log on. So they started to support POP for administrative folks with Macs and PCs. They are using the Netscape mail server (chosen because they thought it would have good support etc., but it is not clear this has worked out). They also tested the CMU and University of Washington servers. The Netscape 3.1 server (it provides the SMTP service as well as IMAP/POP) does not appear to work well when it gets big bursts of email, and users see long times to read their mail. They will be testing a new version (moving from the current 3.1 to 3.5), and so are not pushing the move to the Netscape server. Administering the Netscape server is very easy. A few people in the CC have moved to IMAP. They will also upgrade the machine to an IBM F40 (the IMAP server requires a lot of memory). Today they have about 10 simultaneous IMAP connections (IN2P3 sees a need for about 2 Mbytes of memory per connection; Netscape claims 700 kbytes on this release and 256 kbytes on the next) and 200-300 with POP. IN2P3 says the initial CERN IMAP service got a bad name since it was slow, and IN2P3 wants to avoid getting a bad name from too early an introduction.
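The memory figures quoted above turn into a simple capacity sketch. The connection count and per-connection sizes are those reported in the notes; comparing them side by side shows why the F40 upgrade matters:

```python
# Rough IMAP server memory sizing from the figures quoted in the notes.
per_connection_bytes = {
    "observed (IN2P3)": 2_000_000,     # ~2 Mbytes per IMAP connection
    "Netscape 3.x claim": 700_000,     # 700 kbytes on this release
    "Netscape next claim": 256_000,    # 256 kbytes on the next release
}
concurrent_imap = 10                    # today's simultaneous IMAP connections

for label, bytes_per in per_connection_bytes.items():
    total = concurrent_imap * bytes_per
    print(f"{label}: {total / 1e6:.1f} Mbytes for {concurrent_imap} connections")
```

At the observed 2 Mbytes/connection, even a modest growth in simultaneous IMAP users multiplies quickly, while the vendor's claimed figures would make the same load far cheaper.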
They allow a quota of 50 Mbytes/user. There is no way for the user to know how much of their quota is in use. They use IMAP accounts separate from Unix (different userids and passwords); a database keeps track of the IMAP and Unix accounts. They are supporting the Pine and Netscape IMAP clients and will look at Eudora. They have seen a lot of bugs with Netscape 4.0; Pine 3.96 has been fine. They have no current interest in an NT IMAP server.
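Since users currently have no way to see how much of their 50 Mbyte quota is in use, a small script along these lines could report it. This is only a sketch: the idea of summing files under a per-user mail directory is ours, and any spool path passed in is hypothetical:

```python
import os

QUOTA_BYTES = 50 * 1024 * 1024   # the 50 Mbyte per-user quota noted above

def mailbox_usage(root):
    """Sum the size of every file under a user's mail directory.
    `root` is a hypothetical per-user mail-spool path."""
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            total += os.path.getsize(os.path.join(dirpath, name))
    return total

def quota_report(root, quota=QUOTA_BYTES):
    """Return (bytes used, fraction of quota) for one user's mail."""
    used = mailbox_usage(root)
    return used, used / quota
```

Something like `quota_report("/var/spool/imap/user/smith")` (path invented for illustration) could then feed a periodic per-user report.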
Filtering: they filter IP broadcasts on the LAN (against smurf attacks) and block address spoofing. IRC (Internet Relay Chat) is blocked, since it was causing more problems than help. NFS will be stopped on the WAN at the end of June; it is allowed within the LAN. TFTP is blocked. ICMP (ping uses about 2.5% of the 600 kbps trans-Atlantic link) is blocked except to one or two special machines (this could cause problems, e.g. by blocking ICMP source quench and other things). They don't block X. NetBIOS is blocked to certain Labs (this will be extended to all Labs). There is no restriction on which machines may use SMTP.
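For scale, the ICMP figure quoted above is straight arithmetic on the numbers in the notes:

```python
# ICMP share of the guaranteed trans-Atlantic bandwidth, per the notes.
link_bps = 600_000          # guaranteed 600 kbps trans-Atlantic
icmp_fraction = 0.025       # ~2.5% of the link used by ICMP (mostly ping)

icmp_bps = link_bps * icmp_fraction
print(f"ICMP load: {icmp_bps / 1000:.0f} kbps of {link_bps / 1000:.0f} kbps")
```

About 15 kbps of a 600 kbps guarantee: small in absolute terms, but enough to motivate blocking ICMP to all but a couple of special machines.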
They are looking at using a transparent cache server to provide HTTP offsite access. They will probably use the redirect feature in the Cisco router together with a squid server, as opposed to the Cisco cache server. There are discussions on staleness of data. They are already running a squid server and asking people to use the proxy, but want to make it transparent so users have to do nothing. They started out with a 5 Gbyte cache. About 40 machines are using the squid caches (non-transparently). They see the caches are quite well used (>40% hit rate today) and take this to mean that people are not surfing but focusing on specific sites. CCIN2P3 acts as a parent cache for the IN2P3 sites that run their own squid servers (each IN2P3 site will have one eventually), and will provide a cache server for the sites that do not. They will only be parents for sites that have the same routing policies (e.g. for IN2P3 sites but not for commercial sites).
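The >40% figure is the kind of number that can be pulled from squid's own access log. A minimal sketch, assuming squid's native access.log format (where the fourth whitespace-separated field is the result code, e.g. TCP_HIT/200; the sample lines are invented):

```python
def hit_ratio(lines):
    """Fraction of requests served from cache, from squid native
    access.log lines (4th field is result/status, e.g. TCP_HIT/200)."""
    hits = total = 0
    for line in lines:
        fields = line.split()
        if len(fields) < 4:
            continue
        total += 1
        if "HIT" in fields[3]:     # TCP_HIT, TCP_MEM_HIT, TCP_IMS_HIT, ...
            hits += 1
    return hits / total if total else 0.0

# Invented sample records in squid's native log layout.
sample = [
    "887990000.123  45 134.158.1.1 TCP_HIT/200 2048 GET http://cern.ch/ - NONE/- text/html",
    "887990001.456 310 134.158.1.2 TCP_MISS/200 5120 GET http://slac.stanford.edu/ - DIRECT/slac text/html",
]
print(f"hit ratio: {hit_ratio(sample):.0%}")
```

Running this over a day's log rather than two sample lines would reproduce the quoted utilization figure.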
They are not using any commercial tools to manage their routers. They have about 40 routers, including border routers at remote sites such as Orsay. They monitor the WAN (but not the LAN). They take the IP accounting from the Cisco routers and collect it on a machine at CCIN2P3. This provides destination/source and packets/bytes every 20 minutes. It is kept for security reasons and will also be used for monitoring. They are working on analyzing this data and will generate alarms, tied into an existing alarm annunciation system. They also use SNMP alarms, and ping any router that does not support SNMP alarms. They do not have a 24x7 NOC; there is a computer center operator, who can call the networkers at home. The alarms will be able to be sent as email and also to pop up a window on a workstation and make an audible alert. The networkers also have ISDN lines to their homes, but do not carry pagers.
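The analysis they describe amounts to aggregating the collected records per traffic pair. A sketch, assuming each collected record carries the same fields as Cisco IP accounting output (source, destination, packets, bytes; the record tuples and addresses here are invented):

```python
from collections import defaultdict

def aggregate(records):
    """Sum packets and bytes per (source, destination) pair from the
    IP accounting records collected off the routers every 20 minutes.
    Each record is a hypothetical (src, dst, packets, bytes) tuple."""
    totals = defaultdict(lambda: [0, 0])
    for src, dst, pkts, nbytes in records:
        totals[(src, dst)][0] += pkts
        totals[(src, dst)][1] += nbytes
    return dict(totals)

# Invented sample records for illustration.
records = [
    ("134.158.1.1", "198.51.100.9", 120, 90_000),
    ("134.158.1.1", "198.51.100.9", 80, 60_000),
    ("134.158.2.2", "192.0.2.7", 10, 4_000),
]
print(aggregate(records))
```

Per-pair totals like these are what an alarm generator would threshold on (e.g. an unexpected destination suddenly accounting for a large byte count).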
They have not looked at NetFlow yet. NetFlow has filter lists and can record complete sessions; it needs a big link to the logging machine and a lot of disk space. NetFlow runs on the Cisco and talks to a collection machine. FNAL use it to record suspicious sessions. On site (CCIN2P3) they have one person who worries about security (he is also a Unix system admin); there is one similar person in each Lab, plus a head for all of these activities located at Orsay.
They are part of the consortium that has a "fat pipe" (2*E1) across the Atlantic. They are guaranteed 600 kbps and can burst to 1.54 Mbps. The partners are CERN, WHO, the UN, and IN2P3, plus one other. IN2P3 will upgrade its participation to 1 Mbps soon. They are very happy with the trans-Atlantic service. The guarantee is implemented by Olivier Martin's Frame Relay IGX box.
If the CERN link goes down it would be hard to redirect the traffic (Renater will not announce IN2P3 routes outside France, so the traffic has to go via CERN). National (French) traffic is routed via Renater. Some IN2P3 sites are still on PHYnet; others are on Renater. Renater works well within France, apart from when France Telecom is working on the routing (e.g. two weeks ago a couple of Labs were cut off from the network for 3 days over a weekend). Renater does not have enough capacity on its trans-Atlantic connection, so it does not work well to the U.S. The problems with Renater are greater than the problems they had with leased lines and star points at Lyon and Paris.
They like the idea of centralized special-purpose machines (they prefer that to pinging general-purpose machines), but would like to see some analysis or firm plans first.
LDAP - Helene Jamet
They are evaluating the Sun and the Netscape LDAP servers. They ran into problems with sharing the directory between several Suns.
They support c=fr, o=in2p3. The goal is to have one server to query within IN2P3, but with each organizational unit located at its own Lab (e.g. Orsay, etc.). There is a problem with the Sun server: a request to the top-level LDAP server is not forwarded to the subsidiary servers, in particular when a common name exists at multiple sites (e.g. smith at ccin2p3 and smith at lal). The Netscape server does not suffer from this bug, but it has a lot of other problems that Netscape do not appear to be addressing very well. Helene has spent more time so far on the Netscape server. No decision has been made on which product to go with. CERN did not see the Sun problems since there is only a single directory at CERN, and CERN is very happy with the Sun server. It is possible the Sun server could be made to work if all LDAP queries came to CCIN2P3, which then farmed them out to the LDAP servers at the various IN2P3 sites.
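The fan-out workaround suggested at the end can be sketched as a toy model: a front-end at CCIN2P3 queries every per-site directory and merges all matches, so a common name that exists at multiple sites (smith at ccin2p3 and smith at lal) returns both entries. The site contents below are invented for illustration:

```python
# Toy model of fanning out an LDAP-style common-name lookup to per-site
# directories and merging the results. Site data invented for illustration.
site_directories = {
    "ccin2p3": {"smith": "cn=smith,ou=ccin2p3,o=in2p3,c=fr"},
    "lal":     {"smith": "cn=smith,ou=lal,o=in2p3,c=fr",
                "jouvin": "cn=jouvin,ou=lal,o=in2p3,c=fr"},
}

def lookup(cn):
    """Query every site's directory and return all matching entries,
    rather than stopping at the first (the Sun server's failure mode)."""
    return [entries[cn] for entries in site_directories.values() if cn in entries]

print(lookup("smith"))    # both smith entries, one per site
```

This is exactly what an LDAP referral chain is supposed to do automatically; the front-end merely reimplements it above servers that don't forward correctly.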
CCIN2P3 are not looking at ACAP; Michel Jouvin at LAL is looking at the Simeon ACAP server.
They have about 3 dial-in connections for physicists and 3 for engineers (both PPP), plus an X.25 connection to Minitel (via an old Micom X.25 multiplexer) and an ISDN/PRI into a Cisco router.
The official policy is that there are no private Web servers or private Web pages. This is a policy across the IN2P3 Labs. They want to restrict the number of Web servers and scan for open Web ports. They expect to restrict the usage of Web servers and will probably block them at the border router.