DARPA/NSF/DoE Joint NGI PI meeting, Washington Jan 5-7, 2002

Author: Les Cottrell. Created: Jan 5 '02

Contents
Attendees | Introduction | Discussions

Attendees: 

There were about 100 attendees, including Nagi Rao from ORNL, Vint Cerf of WorldCom, Dave Cheriton of Stanford, Matt Mathis of PSC, Hal Edwards of Nortel/NTON, Bill Lennon of LLNL/NTON, Wu Feng of LANL, Richard Baraniuk of Rice, Danny Cohen of CNRI, Bob Braden of ISI/USC, Bob Kahn of CNRI, kc claffy of NLANR, and Jin Guojun of LBL. Presentations were 20 minutes, including 2 minutes for questions. Transparencies are to be sent to Julie Judd by Tuesday Jan 8 '02. The meeting was at the Hilton McLean Tysons Corner near Washington DC.

Introduction

DARPA has a new director, Tony Tether, whose arrival has sparked a reorganization. There are new programs for counter-terrorism and airport security.

Architecture for efficient networking of LEOS & wired/wireless terrestrial nets - Vincent Chan MIT

Showed how one can effectively use multiple channels, and under what conditions one should use satellite-to-satellite cross-links versus going back to earth, over fiber between gateways, and then back up. They have a collaboration with Motorola/Iridium, GlobalStar and Qualcomm. Links are usually 9.6kbps with 450 msec RTT. One issue is how to most effectively use the power of the satellite. The power in the battery can vary depending on whether the satellite is in the sun or not (typical orbits presented have 90 minute periods and are often polar). A "greedy" algorithm will grant all the power requested and not defer requests in favor of possibly higher revenue requests coming later when the satellite is in shadow.
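
To make the greedy pitfall concrete, here is a minimal sketch (my illustration, not from the talk; the power levels, revenues and reserve are made-up numbers) contrasting a greedy allocator with one that holds back a power reserve for the eclipse period, when the battery cannot recharge:

    # Each request: (power needed, revenue, arrives while satellite is in shadow).
    def greedy(requests, battery):
        revenue = 0.0
        for power, value, in_shadow in requests:
            if power <= battery:          # grant anything that fits, first come first served
                battery -= power
                revenue += value
        return revenue

    def reserve_for_shadow(requests, battery, reserve):
        revenue = 0.0
        for power, value, in_shadow in requests:
            # While in sunlight, keep `reserve` untouched for the eclipse period.
            available = battery if in_shadow else battery - reserve
            if power <= available:
                battery -= power
                revenue += value
        return revenue

    reqs = [(40, 1.0, False), (40, 1.0, False), (40, 5.0, True)]
    print(greedy(reqs, battery=100))                   # 2.0: high-revenue shadow request starved
    print(reserve_for_shadow(reqs, 100, reserve=50))   # 6.0: the reserve leaves room for it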

Hybrid satellite & mobile multi-hop wireless for NGI - HRL

Want to assure broadband access in a dynamic mobile (war zone) environment, using hybrid satellite & multi-hop wireless networks. This requires fault tolerance, using a divide & conquer approach with backups deployed in an intelligent way. Some components are hard to duplicate (e.g. expensive satellites). CCAs might be truck-mounted and may be destroyed by the enemy, may be compromised, or may be intermittent due, for example, to terrain blockage. Solutions can be mobile initiated or CCA initiated, and can be reactive or proactive. Proactive detection can be done using keep-alive/status messages sent to all CCAs every "t" time period. Passive detection could have CCAs keep a list of who is alive and broadcast advertisements listing the live CCAs. The most effective is CCA-initiated reactive. They are starting to look at using multicast, mainly using ns-2.
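
A minimal sketch of the proactive keep-alive idea (my illustration, not HRL's design; the period and miss threshold are assumed values): each CCA is considered alive if a keep-alive has been heard within the last few reporting periods.

    import time

    KEEPALIVE_PERIOD = 5.0   # "t": seconds between status messages (assumed)
    MISSED_ALLOWED = 3       # periods missed before a CCA is declared dead (assumed)

    last_heard = {}          # CCA id -> time its last keep-alive was received

    def on_keepalive(cca_id):
        last_heard[cca_id] = time.monotonic()

    def alive_ccas():
        now = time.monotonic()
        cutoff = MISSED_ALLOWED * KEEPALIVE_PERIOD
        # Anyone silent for more than MISSED_ALLOWED periods is presumed dead.
        return [c for c, t in last_heard.items() if now - t < cutoff]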

Inferring AS relationships - S. Agarwal - UCB

95% of routes in the Internet are ISP-to-customer routes; only 5% are peer-to-peer.

Space-Time Array Radios - Babak Daneshrad - UCLA

Want to maximize spectral use efficiency. Take the signal and break it up into multiple channels; then for each channel one has a matrix of amplitudes, phase shifts (e.g. trellis) etc. They have built hardware based on FPGAs, and they are also using simulation. One critical quantity that needs to be minimized is training time.

Very high speed spectral efficiencies for wireless channels - Alex Pidwerbetsky - Lucent

Use multiple transmit/receive antennas to permit multiple propagation channels. This gets around the error rate and throughput limitations of a single channel. They made experiments over paths with varying noise impairments (e.g. manmade structures, or different amounts of foliage depending on the time of year). Scheduled to demonstrate 18 bps/Hz per frequency channel in FY02.

Regional testbed optical access network for IP multicast & diffServ - Don Stevenson MCNC

The scheduling & wavelength assignment objective is to simultaneously schedule and assign an outgoing wavelength to all packets belonging to a single flow. Helios is also looking at how to do multicast, which is difficult since the receivers are only slowly tunable to different frequencies. Transmitting via tunable lasers can change frequencies in << 1 usec (32 possible wavelengths with 992 possible combinations). They have developed extensions to ns-2 to support wavelengths, and they have a wavelength scheduling algorithm.

WDM all optical label swapping (AOLS) supporting sub-carrier & serial addressing - Dan Blumenthal - UCSB

Intel predicts that by 2008 CPU chips will dissipate 10KW of power. This does not scale for multi-CPU systems. He wants to avoid electronics where possible and use All Optical Systems (AOS). There may also be some security advantages, since one never needs to see the contents of a packet, only the labels. The labels are put at the head of the serial flow and are handled by electronics (to make the decision as to what wavelength to use). Only the header is handled by electronics; the complete packets are shipped in parallel by AOS. The wavelength changes are done by tunable lasers within nanoseconds (~5 nsec from one given wavelength to another) based on the label, including adding a new label. Label swapping is done at 40Gbps. Labels are NRZ, packet content is RZ, and the NRZ labels are not seen by packet processing. They have demonstrated add/drop muxing, OTDM to WDM multiplexing, and multicast (e.g. replicating one packet on multiple wavelengths).

Terabit burst switching - Jon Turner Washington University

Main focus is the control mechanism required to coordinate forwarding of packets through wavelength routers.

IP-HORNET: A novel IP-over-WDM Multiple access WAN - Ian White Stanford

With RPR (Resilient Packet Ring, i.e. rings with round robin), as one scales beyond Terabit rates the latency for MANs starts to increase exponentially. IP-HORNET was stated to scale easily beyond 1Tbit/s. IP-HORNET is a ring network which uses a combination of tunable transmitters + wavelength routing, allowing packets to go straight from source to destination, bypassing all other nodes. The MAC protocol has to find an empty time slot before inserting a packet, and may need to buffer a packet if it will not fit in the available empty slots (they avoided the ATM design of fixed frames=slots). They have demonstrated survivability, since the ring has redundancy and allows rearrangement of paths. A multi-access ring is inherently unfair, so a fairness protocol is needed; this is a focus for future study.
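
A minimal sketch of the slot-insertion idea (my illustration, not the actual HORNET MAC): before transmitting, a node checks whether the empty region it observes on its wavelength is long enough for the packet at the head of its queue, and otherwise keeps buffering.

    from collections import deque

    queue = deque()   # packet lengths (bytes) waiting to be inserted

    def try_transmit(gap_bytes):
        """Called when an empty region of gap_bytes is observed on the ring."""
        if queue and queue[0] <= gap_bytes:
            return queue.popleft()    # packet fits: insert it into the slot
        return None                   # buffer until a big enough gap appears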

PMD and NGI: Alan Willner - USC

Prior to the 1990's, PMD (Polarization Mode Dispersion) was not well understood, so much of the fiber laid in the 90's is poor with respect to PMD, and good PMD fiber is difficult to acquire. Causes of PMD include the fiber core not being perfectly round, and optical amplifiers treating the polarizations differently. PMD becomes very important as one goes to higher speeds (e.g. going from 10Gbps to 40Gbps it becomes a major factor). It is a stochastic (Maxwellian) process and so has a long tail; this means there can be outages of a few minutes/year due to PMD. Different polarizations/wavelengths can travel at different speeds (tens of ps/nm over hundreds of km for different wavelengths). They have demonstrated higher-order PMD compensation as well as mitigation of rapid nonlinear PMD effects. Different wavelengths in WDM can interfere.
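
To see how a Maxwellian tail translates into minutes of outage per year, here is a minimal Monte Carlo sketch (my numbers, not the talk's: the mean DGD and the outage threshold are assumptions); outage is declared when the instantaneous differential group delay exceeds a fraction of the 40Gbps bit period.

    import random, math

    def maxwellian_sample(mean_dgd):
        # A Maxwellian variate is the norm of a 3-D Gaussian vector;
        # its mean is sigma*2*sqrt(2/pi), so scale sigma accordingly.
        sigma = mean_dgd / (2.0 * math.sqrt(2.0 / math.pi))
        return math.sqrt(sum(random.gauss(0, sigma) ** 2 for _ in range(3)))

    mean_dgd_ps = 2.5                 # assumed mean DGD: 10% of the bit period
    bit_period_ps = 25.0              # 40 Gbps -> 25 ps bit period
    threshold = 0.3 * bit_period_ps   # assumed outage threshold

    n = 1_000_000
    outages = sum(maxwellian_sample(mean_dgd_ps) > threshold for _ in range(n))
    frac = outages / n
    print(frac, "->", frac * 525960, "minutes/year")   # ~4e-5, i.e. tens of min/yr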

BOSSnet Status & future directions: Peter Schulz MIT Lincoln Lab

BOSSNET is a project to develop high speed wide area optical networking techniques for critical military applications. There are two fibers between Boston and Washington DC via NY, and it connects to SuperNet. The tradeoffs between a few very high speed channels versus more channels at lower speed may be different for the military vs. the commercial world. The military is also very interested in analog transmission, whereas the commercial world is mainly digital. One application is at the Kwajalein Atoll that is used for missile testing. They tested KG189 encryption with NSA & General Dynamics at OC48.

Wideband networked sensors: Frank Robey - MIT LL

Used for missile defense, e.g. to detect which parts of a missile complex are decoys. A second use is satellite threat assessment, i.e. determining the purpose of an enemy satellite (e.g. what it can see and when). By looking at a radar image one can discern what is a solar panel and what is the body, then from that get its shape (using a library of shapes) and how it is oriented (what it can see). They use X band & Ku band radars. Radar signals are received at the Haystack site near Boston and processed at Lincoln Lab or in Washington DC, using BOSSnet to transport the signals. Data rates are about 500Mbps per radar detector, and they have 2 detectors. In the analysis they combine the data from the two signals to get agreement. They hope for deployment into the field, with covert duty naval ships in service and planned. The Army is looking at using and combining multiple sensors.

Network architecture development for wideband terrestrial VLBI system: Alan Whitney - MIT Haystack Observatory

Talked about an astronomy application. Very Long Baseline Interferometry (VLBI) uses multiple scattered radio sensors. Traditionally data were recorded onto tape and sent to a correlator that creates a synthesis of the data as if it came from a single antenna. It can be used to look at quasar hot spots. A second use is to look at geodynamics, since one has to know the precise location, rotation and orientation of the earth. Positions can be measured to an accuracy of cm, and from this one can derive continental drift. They want to remove the expensive, unreliable, and bandwidth-limited (tape is limited to a few Gbps) tape recorders from the field. Sensitivity is proportional to sqrt(bandwidth). One can also get remote performance monitoring and control capability in near real time. Data are now recorded on disk for buffering and transmitted over BOSSnet in quasi-real time or real time. UT1 earth rotation observations with stations in Germany & Hawaii (1 hour of observation every 24 hours) generated 200GB of data per day; they need faster turnaround and are exposed to shipping disruptions (e.g. the September 11, 2001 shutdown of the airlines). The earth's rotation period can change by a couple of msec over a few days; this is highly correlated with the earth's weather (due to exchange of angular momentum between the earth's atmosphere and the solid earth).

Precision targeting enabled by collaborative networking (PTCN): Dave Martinez - MIT LL

Need high security, high capacity, high reliability data links to provide predictive battlespace awareness (PBA). Need to increase the probability of detection while minimizing the false alarm rate. The idea is to have devices like Predator gather signals and put them (multiple Predator signals from below the clouds) together with AWACS & satellite sensors to improve target ID and location tracking with a minimized timeline. The time scale for putting ordnance on target is seconds. Key enabling technologies are wideband nets, networked computing, data fusion and exploitation, and advanced signal processing. Today one can get resolution down to 1 meter; the goal is to get down to 1 foot. This helps to better ID targets and reduces false alarms. With control one wants to get a sensor to go back and re-measure at a higher resolution (1 foot instead of 1 meter) within a short time frame (re-tasking). Also need to put together the real-time information with archival information (e.g. where are roads, rivers, lakes: tanks are not likely to be in lakes, more likely on roads). Want to use the GRID to facilitate the processing and data access, and to increase robustness. For precision targeting one wants to locate a target in the presence of a lot of noise, e.g. a truck on a busy freeway.

Geographic web, prototype index of text documents organized by geographic relevance: John Frank - MetaCarta

Deriving lat-long from postal addresses, phone numbers, and super-unique names is easy. But Media is a place in Morocco, and there are multiple places called Paris. 80% of documents may refer to specific places, e.g. "DARPA NGI PI meeting Jan 5, 2002." It is like a super Google with geographic information added, plus the ability to drag a document found into a map and locate it, and then look for other things close by. A lot of attention is paid to speed of search, with new indexing technologies.

Advances in creating and visualizing the digital earth: SRI

The ultimate goal is to start at an image of the globe and drill down to buildings etc. Need information, e.g. from web crawlers, from people registering data, from clients like GeoStar for a peer-to-peer model, and from web sites like MapQuest. A requirement is the ability to merge data from multiple sources and multiple purposes (e.g. building details and map coordinates). This is referred to as the democratization of spatial information. Can distribute over tens of thousands of servers, e.g. one for each minute. Heavy use of VRML for visualization (TerraVision) on the web; the source code is available. They collaborate with OpenGIS. GeoStar is an application to allow people to share photos by geographic location; they hope it will be a way to draw attention to these tools.

Immersive & interactive telepresence over the NGI: Teddy Kumar, Sarnoff Corporation

The goal is to develop a virtual camera system to visualize in real time a dynamic 3D scene from any angle. Hundreds of cameras can be deployed to visualize a scene. Need tools to correlate multiple views into a global view, and to separate in each view what is transient (e.g. soldiers moving) from what is fixed (background).

3-D Teleimmersion: Henry Fuchs, UNC

Telepresence is important, but one needs it in the user's environment, i.e. in her/his office as opposed to in a conference room. To do this in 3D requires multiple cameras (likened to light bulbs in ubiquity). Need to acquire the environment, display it (in 3D), transport it (networking), and have applications that are transparent to use. Today's proof of concept demo is with 15 cameras and 5 trinocular stereo streams. They also want to re-create (taking longer, not real time) scenes with more detail for later training purposes. For medical trauma there are all kinds of legal problems. There are also issues of how one scales to multiple participants, and of what metaphors one uses, especially for multiple people at one site.

Distributed Classroom: J Beavers - Microsoft

High quality, low latency video conferencing with commodity PCs and 1394 cameras, using Windows XP video optimization; it is deployable today at well connected sites (e.g. universities) and scalable to help solve existing problems. They use MS video (like a 4th generation MPEG3, with compression of 216:1), target low latency (< 250 msec), and make it work with multicast. Multicast turns out to be difficult due to inter-operability problems between routers. End-to-end connectivity can be a problem to many sites outside I2. They are working on network diagnostic tools for multicast, presentations and whiteboard, and wireless devices in the classroom. Voice and video are separate streams; they are not trying to get voice-video synchronization. The MS video is software only, highly optimized for the Athlon processor instruction set.

NGI multimedia applications and architecture: Colin Perkins - USC/ISI

One goal is to scale to large audiences (a digital amphitheatre). A second goal is HDTV over IP networks. They want to do uncompressed HDTV to avoid compression artifacts & loss (e.g. for medical apps) and the extra delay from MPEG encoders. This requires very high speed (1.5Gbps) networks. Formats are SMPTE 274M 1920x1080 and SMPTE 296M 1280x720. Cards are becoming available with GE, OC-48, and HDTV frame grabbers; they have achieved up to 928Mbps on GE/UDP and 850Mbps on GE/RTP (not quite enough for raw HDTV).
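
A quick sanity check of why raw HDTV needs roughly 1.5Gbps class networks (my arithmetic, not from the talk; 10-bit 4:2:2 sampling at 30 frames/s is an assumption):

    width, height = 1920, 1080
    bits_per_pixel = 20        # 10-bit luma + 10-bit chroma (4:2:2), assumed
    frames_per_sec = 30

    bps = width * height * bits_per_pixel * frames_per_sec
    print(bps / 1e9, "Gbps")   # ~1.24 Gbps of active video; more with blanking,
                               # which is why 850-928 Mbps on GE falls short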

Matisse SuperNet project: Danny Cohen & Bob Kahn - CNRI

Want to do resource sharing over the network for MEMS design and MEMS testing. MEMS = micro-electro-mechanical systems (like VLSI with moving parts). Matisse files are very large: e.g. a file containing 8000 images can be 10-160GB, and computing resources are expensive. They need Gbit SuperNet, and have run into 2 problems: availability, and effective bandwidth use (drinking from a fire hose). They want max app-to-app performance over TCP sockets, which needs tuned TCP buffers, async I/O, parallel sockets ... The approach is to develop a Matisse grid focused on real time needs.

Migrate: An end-to-end network architecture for intelligent routing: Hari Balakrishnan - MIT

Need a general solution for applications to deal with mobility & discover resources in an increasingly heterogeneous Internet. Need to be able to get to a resource by generic name. Need to be able to go out of range of a cell phone and reconnect to 802.11 or Bluetooth, or suspend a laptop, move from home to work, and pick up network connections where one left off. Today connections are defined by socket 4-tuples; an IP address IDs an interface (not a host), and so today's socket is a poor definition for a connection. Today one has to terminate and retry, or preserve the IP address. Forcing a constant address for an end-point is similar to a web proxy, or a mobile cell phone where the number moves with the phone. He feels mobile IP is the wrong approach, since it requires added network support & infrastructure, many mobile apps don't care for seamlessness, and applications can't be made aware of mobility. Mobility is an end-to-end problem. What stays constant is the name of the host. One has to deal with consistency of name mapping, correctness around the time of movement, security, and how to maintain the semantics of an IP connection across the move. The idea is to introduce a new Migrate SYN message and a Migrate SYN/ACK. Migrate code is now available for Linux 2.2. Project web page (code & papers): http://nms.lcs.mit.edu/migrate/
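
A minimal conceptual sketch of why a negotiated token beats the 4-tuple (my illustration of the idea, not the MIT implementation): the server keys connection state by the token, so a client whose IP address changes can re-attach by presenting the token in a Migrate SYN.

    import secrets

    connections = {}   # token -> connection state (sequence numbers, buffers, ...)

    def new_token():
        return secrets.token_hex(8)    # negotiated during the initial handshake

    def on_syn(client_addr):
        token = new_token()
        connections[token] = {"addr": client_addr, "state": "established"}
        return token

    def on_migrate_syn(token, new_client_addr):
        conn = connections.get(token)
        if conn is None:
            return "RST"                     # unknown token: refuse
        conn["addr"] = new_client_addr       # rebind the 4-tuple, keep all state
        return "MIGRATE-SYN/ACK"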

Secure Internet access in the TRIAD project: Dave Cheriton - Stanford 

IPv6 is dead, NAT has won (1998)! IPv6 being needed for mobility is a fantasy. The alternative is to use DHCP and NAT, but the problem was security; the speaker has set out to address this issue without requiring IPv6. We need Internet access as an amenity (like power, light, water ...), just limiting what one can do (you don't need a password to use the bathroom) based on who you are (employee, manager, visitor, conference speaker allowed to connect to the overhead display server (not like today's direct analog cable connection to the laptop), etc.). Need to protect accessors and providers, so need authentication. He described a secure access mechanism (SIAP/SLAP) for wireless Access Points that extends to wired networks. Basically it pushes the security up to a higher level (i.e. it is not an IP issue).

Integrated Network & content aware protocols (INCAPs): Raja Suresh & Mitchell Swanson - General Dynamics

Looking at content aware routing, where routing is based on the data which a node requires or can provide. There is big interest for Smart Mobile Networks (SMN).

Interplanetary Internet Status & Plans: Vint Cerf - WorldCom

An extension of the Internet to interplanetary distances. They have published the top level IPN architecture as an Internet Draft and begun its peer review, and are defining a core set of protocols. The backbone network between planets has long delays, and transaction sizes are small compared to the bandwidth-delay product. Backbone contact periods are short relative to the delay, possibly one-way, and may be separated by days or weeks; one cannot guarantee an end-to-end path; operations are driven by power (it costs power to deliver packets, so one may need to decide whether it is worth the power to transmit, i.e. protocols need to be power aware), weight and volume; the data can be of high value; and everything is mobile (e.g. delays change, at least in a predictable way). Need non-chatty protocols (reduce the number of round trips) and store & forward (message switching with custody transfers). Sending/receiving needs to be time synchronized, for example to be pointing in the right direction to receive a signal transmitted days ago. They chose to build a layer above the transport layer, referred to as the bundle layer. There will be different IP address spaces in regions with communication devices (one does not want to use up all the bandwidth mapping IP addresses to names), so names are bound late (and rebound as a bundle goes from one region to another). There appear to be applications to other more earth oriented network requirements, especially military ones (e.g. submarines: a submarine may be able to receive but not transmit while in stealth mode). They did 2 prototype implementations on the desktop. Next steps are to get closer to reality, to look at overlap with earth bound applications, and to open up to the public (e.g. via the InterPlanetary Research Group within the Internet Society).
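
A minimal sketch of custody transfer in a store & forward bundle layer (my illustration, not the IPN design; next_hop is an assumed object with a send method): a node keeps a bundle in stable storage until the next hop explicitly accepts custody, so an end-to-end path never needs to exist all at once.

    stored_bundles = {}   # bundle id -> payload held in custody here

    def accept_custody(bundle_id, payload):
        stored_bundles[bundle_id] = payload    # persist before acknowledging
        return "CUSTODY-ACCEPTED"

    def forward_when_contact(bundle_id, next_hop):
        """Called when a (possibly brief) contact window to next_hop opens."""
        payload = stored_bundles[bundle_id]
        reply = next_hop.send(bundle_id, payload)    # may take a very long time
        if reply == "CUSTODY-ACCEPTED":
            del stored_bundles[bundle_id]    # responsibility has moved on
        # otherwise keep custody and retry at the next contact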

New Architecture for an Internet: Dave Clark - MIT

The current Internet is an economic reality: there is an erosion of trust, a rise of third party involvement (a tussle of interests), a broader class of users, new application requirements (QoS, placement, delegation), and new technology features (mobility, embedded processing, location aware computing). All of the above are not fully understood. Examples: it used to be transparent end-to-end, now we have firewalls; there used to be little state, now we have header compression etc.; bursty traffic & aggregation are fundamental; we recognize that people & societal issues are critical. A new architecture must better preserve itself and be more tolerant of evolving requirements; a better model of how to build applications will in turn guide the network architecture. The most fundamental change is the loss of trust; others are the Internet as an economic entity, heterogeneity etc. Trust is assuming that another will act in our best interest even though not externally constrained; constraint is the opposite of trust, and the Internet implies global trust. Users want selective transparency, regulated by trust relationship, so a framework for identity is central, and identity theft is destructive. Need mechanisms for control of transparency (firewalls of the future delegate trust: who, not just what). Economic fundamentals: a number of entities have a central interest; ISPs want to make money; we need to make desired behavior be good for the general interest. If users could pick routes it would drive competition. For the next generation transparency is not enough; we need to explicitly talk about division of responsibility. Don't design for a rigid outcome; allow a tussle.

Bio-Net a biologically designed network: Tatsuya Suda - UCI

A model of cyber entities (CEs, e.g. people, services) with energy (e.g. money) exchange. Evolution of CEs occurs through diversity and natural selection. There are no central or coordinating entities.

Survivable Real-time network services: David Mills - U of Delaware

Can address sensors dropped onto a battlefield (security is a big issue), or dropped onto the surface of Mars (less of a security concern): fire & forget software. Goals include robustness to various kinds of failures. Timekeeping is a special case. Need an expanding ring search, minimized overhead (e.g. polling), and hierarchy (e.g. stratum clock levels, with servers to first order replying only if they are of equal or lower stratum). They have an RFC and implementations (NTP version 4, http://www.ntp.org/). Key management is a big issue, especially due to the need to regenerate keys at regular intervals (e.g. hourly).
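
A minimal sketch of the stratum-hierarchy idea (my illustration, not the NTPv4 algorithm): in an expanding ring search, a server only answers a client whose stratum is no better than its own, and a client prefers the lowest-stratum server that replies.

    def should_reply(server_stratum, client_stratum):
        # Stratum 1 is best (directly attached reference clock); a server
        # only serves clients at its own stratum or worse.
        return server_stratum <= client_stratum

    def pick_server(candidates):
        """candidates: list of (server_id, stratum) that replied."""
        return min(candidates, key=lambda c: c[1])   # prefer the lowest stratum

    print(pick_server([("a", 3), ("b", 2), ("c", 4)]))   # ("b", 2)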

A new paradigm for content distribution: Bob Lindell - USC/ISI

YOID (Your Own Internet Distribution) is a multicast application protocol. IP multicast is not widely deployed, yet users need multicast services for conferencing and file transfers; Yoid can provide the functionality without waiting for multicast deployment. It uses tree building to maintain topologies, and allows loop detection. They attempt to support nodes behind NATs; this is also a future research direction. Demonstrated with MBone tools (vic/rat, wb/wbd) and MS NetMeeting (H.323). They have an IP multicast gateway application, and they also have a traffic generator and monitoring tools. Looking at redundancy for the rendezvous point to improve robustness.

Ambient Computing Environment (ACE): Gary Minden - University of Kansas

Want to go to a place/work space (hotel room, conference room) that provides the secure/private computing/networking environment needed to get to where one wants to go and to do what one needs to do.

Next generation VPN with applications for regional collaboration: Tom Harris SAIC & Andrei Ghetie Telcordia

High performance applications require suitable end-to-end performance, but a high bandwidth connection does not guarantee the bandwidth needed to support the application. They are developing an application to application (AA) VPN to impact end-to-end performance. Besides security it also provides dynamic QoS. Applications are in medicine and in national defense.

GRIP Gigabit Rate IPsec: Tom Lehman & Jaroslav Flidr - USC/ISI

The goal is to make a PCI (66/64) compatible hardware accelerator with a GE interface, working in the context of a COTS workstation, and to allow scaling up to 10Gbits/s. Built on Linux 2.4 with a modified FreeS/WAN IPsec stack. They have measured 750Mbits/s with a 3K MTU.

Internet measurements: myths about Internet data: kc claffy, CAIDA

Myths are things people believe that are wrong, in four categories including workload and topology. An example: the claim that traffic doubles every 90 days (this happened in 1995-1996, but there has been no real data since 1995 (the NSFnet sunset); the doubling time is more like 12-18 months).

Conclusion is we need more data measurement/analysis.

MINC (multicast inference of network characteristics): Don Towsley - UMass

The idea is to come up with measurements of how the net is working (tomography, losses, RTTs etc.). During the project they realized they had to focus on unicast (since multicast is not ubiquitous). They also use RTCP to measure tomography: receivers send loss reports, and an observer taps in and gathers the loss reports; the observer can be the source. They have tools to do the analysis. The unicast tool is called string (striped-based ping). MINC is the first rigorous framework for network tomography. The approach surpassed expectations and stimulated significant follow-on work. It was not funded by NSF: too risky? NIMI was invaluable for numerous studies. Multicast is not prevalent, hence the late emphasis on unicast. The AT&T partnership created tensions with other ISPs. Validated with "ns", with limited validation on the Internet; access to AT&T WorldNet never became available. It is unclear if it is scalable. Future directions include: intrusion detection/isolation; use of unicast application traffic; fast, rough estimation that is not as rigorous; scalability.
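
A minimal sketch of the core MINC inference for a two-receiver multicast tree (my illustration; the link pass probabilities are made-up): from per-probe receipt reports alone one can estimate the loss rate of the shared link, since P(r1)=A*b1, P(r2)=A*b2 and P(both)=A*b1*b2 imply A = P(r1)*P(r2)/P(both).

    import random

    A_true, b1, b2 = 0.9, 0.95, 0.8   # pass probabilities: shared link, two branches
    n = 200_000
    r1 = r2 = both = 0
    for _ in range(n):
        shared = random.random() < A_true          # probe survives the shared link
        got1 = shared and random.random() < b1
        got2 = shared and random.random() < b2
        r1 += got1; r2 += got2; both += got1 and got2

    A_est = (r1 / n) * (r2 / n) / (both / n)       # infer the shared link from the leaves
    print(round(A_est, 3))                         # should be close to 0.9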

High speed network monitoring & measurement with commodity parts: Wu Feng - LANL

Talked about MAGNET & TICKET. MAGNET monitors traffic between the protocol stack and the application, to understand the modulation caused by the stack and to gather traces. TICKET is network capture on steroids.

What is the future of high performance networking: why high performance networking in supercomputers & clusters is not equal to high performance networking in grids. HPN in supercomputing requires low latency (usec). Hosts can have excessive copying. The best bus is PCI with 64 bits at 66MHz = 4.2Gbps. 10GE has a 1.2us packet inter-arrival time (1500B MTU), while a null system call in Linux takes 5-10us: network speeds outstrip OSs. Even with jumbograms one gets excessive CPU utilization. Also the net bandwidth is over-running the host I/O bandwidth. Deliverable bandwidth is at most max packet size/interrupt latency (e.g. 1500B/50us = 30MB/s = 240Mbps). Solutions: eliminate excessive copying; interrupt coalescing (but this increases latency); jumbograms (though it is difficult to build switches to switch large packets); OS bypass (avoiding TCP). Some of these solutions do not scale to the WAN, and TCP is here to stay.
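
A quick check of the interrupt-per-packet bound quoted above (my arithmetic): if the host takes one interrupt per packet, throughput cannot exceed packet size divided by interrupt service latency.

    mtu_bytes = 1500
    interrupt_latency_s = 50e-6      # 50 us per interrupt, as in the talk

    bytes_per_sec = mtu_bytes / interrupt_latency_s
    print(bytes_per_sec / 1e6, "MB/s")       # 30.0 MB/s
    print(bytes_per_sec * 8 / 1e6, "Mbps")   # 240.0 Mbps

    # With 9000-byte jumbograms the same bound rises to ~1.44 Gbps, which is
    # one reason jumbograms and interrupt coalescing help.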

Adaptation bottlenecks are in two areas: flow control and congestion control. Recent Linux 2.4.x does web based, sender-side auto-tuning of aggregated TCP connections. For flow control adaptation, today's default max window sizes are way too small. Need faster convergence in congestion control (there is a new binomial congestion control being worked on). AIMD is good for fair sharing, but not well suited for high performance.
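
A minimal sketch of the binomial congestion control family (my illustration based on the published idea, not the code being worked on): the window increases by a/w^k per RTT and decreases by b*w^l on loss, with AIMD as the special case k=0, l=1.

    def binomial_update(w, loss, a=1.0, b=0.5, k=0.0, l=1.0):
        if loss:
            return max(1.0, w - b * (w ** l))   # multiplicative decrease when l=1
        return w + a / (w ** k)                 # additive increase when k=0

    # AIMD is k=0, l=1; the SQRT variant (k=0.5, l=0.5) backs off more
    # gently, at the cost of slower increase at large windows.
    w_aimd = w_sqrt = 10.0
    for rtt in range(50):
        loss = (rtt % 10 == 9)                  # contrived periodic loss
        w_aimd = binomial_update(w_aimd, loss)
        w_sqrt = binomial_update(w_sqrt, loss, k=0.5, l=0.5)
    print(round(w_aimd, 1), round(w_sqrt, 1))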

Network instrumentation for end-to-end performance: Nagi Rao - ORNL

IQ-Echo: Interactive QoS across heterogeneous hardware/software platforms: Karsten Schwan - Georgia Inst. of Tech.

An End-to-end approach to network storage: Micah Beck - UTK

Cannot rely on low delay or a high probability of delivery; this model fits storage accessed over the global Internet (it would be unnecessary if storage operated with predictable delay and reliability). There is no reliance on timely or accurate delivery of any particular packet; it needs fairness, and it needs to be scalable. They developed IBP (Internet Backplane Protocol): one requests an allocation of space for a period of time on a depot; this is negotiated with the depot, which offers its capability. Then there is a store from a source host and a retrieve from a sink. IBP provides redundancy, reliability, fragmentation etc. Depot servers make allocations of primitive byte arrays, not blocks (more abstract) and not files (weaker semantics: no directory structure, no accounting, no backup). See http://loci.cs.utk.edu/. Today they have a low level IBP API (to use like IP directly), and a higher level one. Looking at putting it under the Unix file interface.
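
A minimal sketch of the allocate/store/retrieve pattern described above (my invented names, not the real IBP API): byte arrays are leased on a depot for a limited time, with capability-style handles and no directory or file semantics.

    import time

    class Depot:
        def __init__(self):
            self.arrays = {}   # capability -> (lease expiry time, bytearray)

        def allocate(self, size, duration_s):
            cap = f"cap-{len(self.arrays)}"    # opaque capability (illustrative)
            self.arrays[cap] = (time.time() + duration_s, bytearray(size))
            return cap

        def store(self, cap, offset, data):
            expiry, buf = self.arrays[cap]
            assert time.time() < expiry, "lease expired"
            buf[offset:offset + len(data)] = data

        def retrieve(self, cap, offset, length):
            expiry, buf = self.arrays[cap]
            assert time.time() < expiry, "lease expired"
            return bytes(buf[offset:offset + length])

    d = Depot()
    cap = d.allocate(1024, duration_s=3600)   # lease 1KB for an hour
    d.store(cap, 0, b"hello")
    print(d.retrieve(cap, 0, 5))              # b"hello"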

Discussions etc

I met with Richard Baraniuk of Rice and Wu Feng of LANL to discuss INCITE status and futures. We will write up a report of our meeting, progress and action items and send it to Thomas Ndousse. Wu Feng and I talked about using a new version of their adaptive TCP window optimization (it runs at the application layer rather than requiring an OS modification) and how to deploy it with bbcp for evaluation.

I was introduced to Micah Beck (mbeck@cs.utk.edu) of UTK by Nagi Rao & Thomas Ndousse. Micah is putting together a proposal to DoE for an Internet Backplane Protocol (IBP). This basically consists of store and forward units at network edges (which could be on site or at an ISP). These units provide centralized functions such as allowing the use of multiple paths, an optimized TCP stack or replacement, checkpointing, and caching. For example, rather than every application/host at a site needing to have its TCP stack optimized for the WAN (e.g. with large windows or multiple streams) or being able to use multiple paths (e.g. via RENATER and CERN to IN2P3), only one unit needs this. It could also be used for sending large mail attachments which would otherwise be barred by the mail transfer agents due to their size, or for peer-to-peer applications (e.g. MP3 file fetching, where caching might be useful). They envisage deploying the units at multiple sites such as ORNL, SLAC and IN2P3 which have large data transfer needs. At the moment they have 700GB aggregate at sites such as the Ecole Normale Superieure (ENS) in Lyon, UTK, and Texas A&M. Micah is interested in coming to SLAC for a Friday at the end of January (the 25th) to present what he is doing, and to discuss possible applications with users/physicists. A further use of IBP is for communicating with entities which are not available all the time, e.g. interplanetary communications, or communications with submarines that may not be able to receive or send for long periods.

Bill Lennon feels NTON will be in limbo for some time to come. Some of the NTON goals are now being met by CalREN. In particular the U Washington people are proposing "LightRail", a high speed experimental network. Hal Edwards of Nortel says he is still hopeful that something will come out of NTON and hopes he can say something in a month or two; however, most of the infrastructure that used to be in place has been removed, since the main NTON carrier went bankrupt and Time Warner, which took over, was not interested in continuing NTON.

Over lunch, Thomas Ndousse (the DoE project manager for INCITE and IEPM/PingER) suggested submitting to Mary Anne's Grid program at DoE a 2 page abstract of a proposal for developing a high speed, efficient file copy and measurement tool to assist in understanding data grid throughput rates (comparing/contrasting various mechanisms such as windows, streams, self pacing, QoS, compression, different bandwidth estimators and various file copy applications), to see if she might be willing to fund it. Thomas also felt that SLAC should have more of a presence at DoE/Washington, to be better in tune/informed with what is going on.

I briefly met with Bob Kahn of CNRI, who funds our IPEX/CNRI proposal. We commiserated on how long (about 12 months) and difficult it was to get agreement on the CRADA between SLAC/DoE and CNRI/DARPA.

I talked to Jin Guojun of LBL about our pipechar results (i.e. the poor correlation between pipechar predictions and iperf measurements, especially above 100 Mbps). Jin is very interested, but will need an account at SLAC with sudo for pipechar to study what is going on; I agreed to get him an account. Jin also said that pipechar, when used with a SysKonnect NIC, can measure bandwidths for links which have higher bandwidths than the measuring host's link.

