Problems with link to/from BNL

Adnan Iqbal. Page created: Jan 05, 2006


Problem

Pathchirp analysis reported alerts for several different monitored sites within the same time span.

All alerts came from the same monitoring node, IEPM-BW.BNL.GOV.

Monitored Node               Alerts (date/time, %drop)
IEPM-BW.SLAC.STANFORD.EDU    1) 12/30/2005 01:32:34, %drop = 40.9
NODE1.CACR.CALTECH.EDU       1) 12/28/2005 23:10:15, %drop = 37.5
                             2) 12/29/2005 09:41:17, %drop = 46.7
NODE1.PD.INFN.IT             1) 12/28/2005 21:28:54, %drop = 45.6
                             2) 12/29/2005 05:32:21, %drop = 51.4
                             3) 12/29/2005 22:33:49, %drop = 49.4
IEPM-BW.CERN.CH              1) 12/29/2005 04:21:48, %drop = 48.5
                             2) 12/29/2005 06:46:51, %drop = 48.5
NODE1.CACR.CALTECH.EDU       1) 12/29/2005 09:41:17, %drop = 46.7
NODE1.DL.AC.UK               1) 12/29/2005 01:52:17, %drop = 34.4

Total nodes = 6
Number of alerts = 10
Earliest detection: 12/28/2005 21:28:54
Latest detection:   12/30/2005 01:32:34

We received a number of e-mails from iepm-bw.bnl.gov reporting alerts for different monitored nodes in the period 12/28/2005 21:28:54 - 12/30/2005 01:32:34. These events were detected by the pathchirp analysis tool using the plateau algorithm.
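To give a feel for what the plateau algorithm is flagging, the sketch below compares the mean of the most recent few measurements against the mean of the history immediately before them and raises an alert when the relative drop exceeds a threshold. This is a minimal Python illustration, not the production IEPM-BW code; the buffer lengths and the 30% threshold are assumptions chosen for the example.

    # Minimal sketch of a plateau-style %drop check on a throughput series.
    # Buffer lengths and threshold are illustrative, not the IEPM-BW settings.
    def pct_drop_alerts(samples, hist_len=10, trig_len=3, threshold=30.0):
        """Compare the mean of the last trig_len samples against the mean of
        the hist_len samples preceding them; report drops above threshold."""
        alerts = []
        for i in range(hist_len + trig_len, len(samples) + 1):
            hist = samples[i - hist_len - trig_len : i - trig_len]
            trig = samples[i - trig_len : i]
            hist_mean = sum(hist) / hist_len
            trig_mean = sum(trig) / trig_len
            drop = 100.0 * (hist_mean - trig_mean) / hist_mean
            if drop > threshold:
                alerts.append((i - 1, drop))
        return alerts

    # Example: throughput steps down from ~100 to ~55, roughly a 45% drop.
    series = [100, 98, 101, 99, 100, 102, 97, 99, 101, 100, 56, 54, 55]
    for idx, drop in pct_drop_alerts(series):
        print("alert at sample %d: %%drop = %.1f" % (idx, drop))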

Observations

First we looked for route changes in the specific time period. Two nodes experienced a route change, but the changes were minor and cannot be associated with any adverse effect.

 

Our second step was to examine the effect on the other nodes monitored by iepm-bw.bnl.gov. The majority of monitored sites experienced a decrease in performance, some showed an increase in performance, and a few showed no change.

NODE                        COMMENT
iepmbw.cacr.caltech.edu     -ve, no route change, trigger buffer unstable, 1 alert
node1.cacr.caltech.edu      -ve, no route change, trigger buffer stabilizing, 2 alerts
node1.sdsc.edu              -ve, no route change, trigger buffer getting worse, 0 alerts
node1.stanford.edu          +ve, no route change, getting better
iepmbw.slac.stanford.edu    -ve, no route change, trigger buffer unstable, 1 alert
node2.slac.stanford.edu     -ve, no route change, trigger buffer getting worse, 0 alerts
node1.indiana.edu           +ve, no route change, getting better
node1.dl.ac.uk              -ve, no route change, trigger buffer unstable, 1 alert
node1.mcs.anl.gov           -ve, no route change, trigger buffer unstable, 0 alerts
node1.desy.de               +ve, no route change, getting better
node1.fzk.de                +ve, no route change, getting better
iepmbw.cern.ch              -ve, no route change, trigger buffer stabilizing, 2 alerts
node1.bnp.nsk.su            no change
node1.pd.infn.it            -ve, no route change, trigger buffer unstable, 3 alerts
node1.utdallas.edu          +ve, 2 minor route changes, getting better
node2.nslabs.ufl.edu        -ve, no route change, trigger buffer unstable, 0 alerts
node1.utoronto.ca           -ve, 2 minor route changes, trigger buffer unstable, 0 alerts

 

Next we tried to determine whether any particular network path showed bad behavior. For this purpose we separated the monitored nodes by traceroute similarity and compared the reported performance within each group of similar traceroutes. The result was that there is no correlation between the network (path) behavior and the node behavior. The following sheet describes this in detail; matching colors in the original sheet mark the same path up to that hop. A simple sketch of the grouping step is given after the table.
NODE COMMENT H1 H2 H3 H4 H5 H6 H7 H8 H9 H10 H11 H12 H13 H14 H15 H16 H17
iepmbw.cacr.caltech.edu -ve, no route change, trigger unstable, 1 alert anubis.s80.bnl.gov 130.199.80.124 0.867 ms shu.v400.bnl.gov 130.199.136.9 0.349 ms amon 130.199.3.24 0.694 ms esbnl-bnl.es.net 198.124.216.113 0.623 ms aoacr1-oc48-bnl.es.net 134.55.209.129 2.342 ms chicr1-oc192-aoacr1.es.net 134.55.209.57 22.347 ms snvcr1-oc192-chicr1.es.net 134.55.209.53 70.417 ms snv2sdn1-snvcr1.es.net 134.55.217.10 70.443 ms cenichpr-1-lo-jmb-704.snvaca.pacificwave.net 207.231.244.1 70.600 ms lax-hpr--svl-hpr-10ge.cenic.net 137.164.25.12 78.258 ms lax-hpr.losnettos-hpr.cenic.net 137.164.27.246 78.444 ms Booth-RSM.ilan.caltech.edu 131.215.254.253 78.604 ms CACR-ITS-BMR.caltech.edu 131.215.5.147 78.550 ms iepm-bw.cacr.caltech.edu 131.215.xxx.xxx 78.309 ms      
node1.cacr.caltech.edu -ve, no route change, trigger buf looking stablizing, 2 alert anubis.s80.bnl.gov 130.199.80.124 0.326 ms shu.v400.bnl.gov 130.199.136.9 0.401 ms amon 130.199.3.24 0.781 ms esbnl-bnl.es.net 198.124.216.113 0.739 ms aoacr1-oc48-bnl.es.net 134.55.209.129 2.444 ms chicr1-oc192-aoacr1.es.net 134.55.209.57 27.952 ms snvcr1-oc192-chicr1.es.net 134.55.209.53 70.418 ms snv2sdn1-snvcr1.es.net 134.55.217.10 70.469 ms cenichpr-1-lo-jmb-704.snvaca.pacificwave.net 207.231.244.1 70.941 ms lax-hpr--svl-hpr-10ge.cenic.net 137.164.25.12 78.379 ms lax-hpr.losnettos-hpr.cenic.net 137.164.27.246 78.558 ms Booth-RSM.ilan.caltech.edu 131.215.254.253 78.530 ms CACR-ITS-BMR.caltech.edu 131.215.5.147 78.505 ms node1.cacr.caltech.edu 131.215.xxx.xxx 78.211 ms      
node1.sdsc.edu -ve, no route change, trg buf getting worse, 0 alert anubis.s80.bnl.gov 130.199.80.124 0.241 ms shu.v400.bnl.gov 130.199.136.9 0.310 ms amon 130.199.3.24 0.654 ms esbnl-bnl.es.net 198.124.216.113 0.566 ms aoacr1-oc48-bnl.es.net 134.55.209.129 2.321 ms chicr1-oc192-aoacr1.es.net 134.55.209.57 22.353 ms snvcr1-oc192-chicr1.es.net 134.55.209.53 70.435 ms snv2sdn1-snvcr1.es.net 134.55.217.10 70.521 ms cenichpr-1-lo-jmb-704.snvaca.pacificwave.net 207.231.244.1 70.697 ms lax-hpr--svl-hpr-10ge.cenic.net 137.164.25.12 78.301 ms riv-hpr--lax-hpr-10ge.cenic.net 137.164.25.5 84.360 ms hpr-sdsc-sdsc2--riv-hpr-ge.cenic.net 137.164.27.54 84.145 ms thunder.sdsc.edu 132.249.30.5 84.076 ms node1.sdsc.edu 132.249.xxx.xxx 84.136 ms      
node1.stanford.edu +ve, no route change getting better anubis.s80.bnl.gov 130.199.80.124 0.317 ms shu.v400.bnl.gov 130.199.136.9 0.284 ms amon 130.199.3.24 0.656 ms esbnl-bnl.es.net 198.124.216.113 0.568 ms aoacr1-oc48-bnl.es.net 134.55.209.129 2.298 ms chicr1-oc192-aoacr1.es.net 134.55.209.57 22.328 ms snvcr1-oc192-chicr1.es.net 134.55.209.53 70.428 ms snv2sdn1-snvcr1.es.net 134.55.217.10 70.522 ms cenichpr-1-lo-jmb-704.snvaca.pacificwave.net 207.231.244.1 70.667 ms hpr-stan-ge--svl-hpr.cenic.net 137.164.27.162 70.791 ms bbr2-rtr.Stanford.EDU 171.64.1.133 71.226 ms rtf-rtr.Stanford.EDU 171.64.1.162 71.815 ms noc6.Stanford.EDU 171.64.xxx.xxx 70.940 ms        
iepmbw.slac.stanford.edu -ve, no route change, trg buf unstable, 1 alert anubis.s80.bnl.gov 130.199.80.124 0.289 ms shu.v400.bnl.gov 130.199.136.9 0.361 ms amon 130.199.3.24 0.777 ms esbnl-bnl.es.net 198.124.216.113 0.842 ms aoacr1-oc48-bnl.es.net 134.55.209.129 2.476 ms chicr1-oc192-aoacr1.es.net 134.55.209.57 22.522 ms snvcr1-oc192-chicr1.es.net 134.55.209.53 70.558 ms snv1mr1-snvcr1.es.net 134.55.218.22 70.668 ms snv2mr1-snv1mr1.es.net 134.55.217.6 79.437 ms slacmr1-snv2mr1.es.net 134.55.217.1 71.003 ms slacrt4-slacmr1.es.net 134.55.209.94 71.141 ms rtr-dmz1-vlan400.slac.stanford.edu 192.68.191.149 71.119 ms iepm-bw.slac.stanford.edu 134.79.xxx.xxx 71.115 ms        
node2.slac.stanford.edu -ve, no route change, trg buf getting worse, 0 alert anubis.s80.bnl.gov 130.199.80.124 0.202 ms shu.v400.bnl.gov 130.199.136.9 0.279 ms amon 130.199.3.24 0.736 ms esbnl-bnl.es.net 198.124.216.113 0.779 ms aoacr1-oc48-bnl.es.net 134.55.209.129 2.449 ms chicr1-oc192-aoacr1.es.net 134.55.209.57 22.494 ms snvcr1-oc192-chicr1.es.net 134.55.209.53 70.541 ms snv1mr1-snvcr1.es.net 134.55.218.22 70.564 ms snv2mr1-snv1mr1.es.net 134.55.217.6 79.117 ms slacmr1-snv2mr1.es.net 134.55.217.1 70.961 ms slacrt4-slacmr1.es.net 134.55.209.94 71.108 ms rtr-dmz1-vlan400.slac.stanford.edu 192.68.191.149 71.078 ms node2.slac.stanford.edu 134.79.xxx.xxx 71.053 ms        
node1.indiana.edu +ve, no route change getting better anubis.s80.bnl.gov 130.199.80.124 0.535 ms shu.v400.bnl.gov 130.199.136.9 0.393 ms amon 130.199.3.24 0.657 ms esbnl-bnl.es.net 198.124.216.113 0.626 ms aoacr1-oc48-bnl.es.net 134.55.209.129 2.432 ms chicr1-oc192-aoacr1.es.net 134.55.209.57 22.744 ms chi-esnet.abilene.iu.edu 198.125.140.54 22.411 ms iplsng-chinng.abilene.ucaid.edu 198.32.8.77 31.373 ms ul-abilene.indiana.gigapop.net 192.12.206.250 26.407 ms 149.165.254.230 149.165.254.230 26.491 ms icr2-ibs1-pb.noc.iu.edu 149.166.2.32 26.418 ms node1.indiana.edu 134.68.xxx.xxx 26.176 ms          
node1.dl.ac.uk -ve, no route change, trg buf unstable, 1 alert anubis.s80.bnl.gov 130.199.80.124 0.267 ms shu.v400.bnl.gov 130.199.136.9 0.399 ms amon 130.199.3.24 0.744 ms esbnl-bnl.es.net 198.124.216.113 0.587 ms aoacr1-oc48-bnl.es.net 134.55.209.129 2.345 ms esnet.ny1.ny.geant.net 62.40.105.25 2.406 ms ny.uk1.uk.geant.net 62.40.96.170 71.155 ms po2-0-0.gn2-gw1.ja.net 62.40.124.198 71.104 ms po1-1.lond-scr3.ja.net 146.97.35.97 71.066 ms po0-0.read-scr.ja.net 146.97.33.38 72.266 ms po3-0.warr-scr.ja.net 146.97.33.54 75.939 ms po1-0.manchester-bar.ja.net 146.97.35.166 76.308 ms gw-nnw.core.netnw.net.uk 146.97.40.202 76.505 ms gw-liv.core.netnw.net.uk 194.66.25.10 77.320 ms gw-fw.dl.ac.uk 193.63.74.233 77.758 ms alan3.dl.ac.uk 193.63.74.129 77.961 ms node1.dl.ac.uk 193.62.xxx.xxx 78.096 ms
node1.mcs.anl.gov -ve, no route change, trg buf unstable, 0 alert anubis.s80.bnl.gov 130.199.80.124 0.267 ms shu.v400.bnl.gov 130.199.136.9 0.399 ms amon 130.199.3.24 0.744 ms esbnl-bnl.es.net 198.124.216.113 0.587 ms aoacr1-oc48-bnl.es.net 134.55.209.129 2.345 ms esnet.ny1.ny.geant.net 62.40.105.25 2.406 ms ny.uk1.uk.geant.net 62.40.96.170 71.155 ms po2-0-0.gn2-gw1.ja.net 62.40.124.198 71.104 ms po1-1.lond-scr3.ja.net 146.97.35.97 71.066 ms po0-0.read-scr.ja.net 146.97.33.38 72.266 ms po3-0.warr-scr.ja.net 146.97.33.54 75.939 ms po1-0.manchester-bar.ja.net 146.97.35.166 76.308 ms gw-nnw.core.netnw.net.uk 146.97.40.202 76.505 ms gw-liv.core.netnw.net.uk 194.66.25.10 77.320 ms gw-fw.dl.ac.uk 193.63.74.233 77.758 ms alan3.dl.ac.uk 193.63.74.129 77.961 ms node1.dl.ac.uk 193.62.xxx.xxx 78.096 ms
node1.desy.de +ve, no route change getting better anubis.s80.bnl.gov 130.199.80.124 0.251 ms shu.v400.bnl.gov 130.199.136.9 0.347 ms amon 130.199.3.24 0.718 ms esbnl-bnl.es.net 198.124.216.113 0.739 ms aoacr1-oc48-bnl.es.net 134.55.209.129 2.340 ms esnet.ny1.ny.geant.net 62.40.105.25 2.575 ms ny.uk1.uk.geant.net 62.40.96.170 71.118 ms uk.fr1.fr.geant.net 62.40.96.89 78.028 ms fr.ch1.ch.geant.net 62.40.96.29 86.197 ms so-7-2-0.rt1.fra.de.geant2.net 62.40.112.22 94.267 ms dfn-gw.rt1.fra.de.geant2.net 62.40.124.34 94.354 ms cr-berlin1-po1-0.x-win.dfn.de 188.1.18.53 111.416 ms cr-hamburg1-po10-0.x-win.dfn.de 188.1.18.110 111.574 ms 188.1.47.42 188.1.47.42 111.456 ms node1.desy.de 131.169.xxx.xxx 111.445 ms    
node1.fzk.de +ve, no route change getting better anubis.s80.bnl.gov 130.199.80.124 0.229 ms shu.v400.bnl.gov 130.199.136.9 0.356 ms amon 130.199.3.24 94.310 ms esbnl-bnl.es.net 198.124.216.113 0.576 ms aoacr1-oc48-bnl.es.net 134.55.209.129 2.340 ms esnet.ny1.ny.geant.net 62.40.105.25 2.467 ms ny.uk1.uk.geant.net 62.40.96.170 71.066 ms uk.fr1.fr.geant.net 62.40.96.89 78.006 ms fr.ch1.ch.geant.net 62.40.96.29 86.145 ms so-7-2-0.rt1.fra.de.geant2.net 62.40.112.22 94.341 ms dfn-gw.rt1.fra.de.geant2.net 62.40.124.34 94.235 ms ar-karlsruhe1-po6-0.x-win.dfn.de 188.1.18.94 97.230 ms kr-fzk1.g-win.dfn.de 188.1.38.222 97.193 ms node1.fzk.de 192.108.xxx.xxx 97.395 ms node1.fzk.de 192.108.xxx.xxx 97.167 ms    
iepmbw.cern.ch -ve, no route change, trg buf stablizing, 2 alert anubis.s80.bnl.gov 130.199.80.124 0.207 ms shu.v400.bnl.gov 130.199.136.9 0.258 ms amon 130.199.3.24 0.617 ms esbnl-bnl.es.net 198.124.216.113 0.609 ms aoacr1-oc48-bnl.es.net 134.55.209.129 2.259 ms esnet.ny1.ny.geant.net 62.40.105.25 5.777 ms ny.uk1.uk.geant.net 62.40.96.170 71.077 ms uk.fr1.fr.geant.net 62.40.96.89 77.937 ms fr.ch1.ch.geant.net 62.40.96.29 86.139 ms swiCE2-10GE-1-1.switch.ch 62.40.124.22 91.433 ms e513-e-rci76-1-swice2.cern.ch 192.65.184.222 114.837 ms e513-e-rci65-3-vlan2.cern.ch 192.65.192.6 114.819 ms iepm-bw.cern.ch 192.91.xxx.xxx 114.570 ms        
node1.bnp.nsk.su no change anubis.s80.bnl.gov 130.199.80.124 0.275 ms shu.v400.bnl.gov 130.199.136.9 0.357 ms amon 130.199.3.24 0.652 ms esbnl-bnl.es.net 198.124.216.113 0.566 ms aoacr1-oc48-bnl.es.net 134.55.209.129 2.272 ms esnet.ny1.ny.geant.net 62.40.105.25 2.452 ms ny.uk1.uk.geant.net 62.40.96.170 71.207 ms uk.se1.se.geant.net 62.40.96.125 106.129 ms rbnet-gw.se1.se.geant.net 62.40.103.42 153.827 ms MSK-M9-RBNet-7.RBNet.ru 195.209.14.185 153.950 ms NSK-RBNet-3.RBNet.ru 195.209.14.74 201.869 ms 217.79.60.46 217.79.60.46 204.263 ms warrior.inp.nsk.su 193.124.164.42 208.407 ms zhmur.inp.nsk.su 193.124.164.17 211.262 ms 193.124.164.5 193.124.164.5 215.733 ms nko-315-1.inp.nsk.su 193.124.165.54 211.630 ms node1.binp.nsk.su 193.124.xxx.xxx 210.823 ms
node1.pd.infn.it -ve, no route change, trg buf unstable, 3 alert anubis.s80.bnl.gov 130.199.80.124 0.287 ms shu.v400.bnl.gov 130.199.136.9 0.307 ms amon 130.199.3.24 0.804 ms esbnl-bnl.es.net 198.124.216.113 0.806 ms aoacr1-oc48-bnl.es.net 134.55.209.129 2.531 ms esnet.ny1.ny.geant.net 62.40.105.25 2.641 ms ny.at1.at.geant.net 62.40.96.110 106.432 ms at.de2.de.geant.net 62.40.96.58 118.971 ms de.it1.it.geant.net 62.40.96.62 128.013 ms garr-gw.it1.it.geant.net 62.40.103.190 128.141 ms rt-mi1-rt-pd1.pd1.garr.net 193.206.134.46 131.287 ms infnpd-rtg.pd.garr.net 193.206.142.18 131.446 ms node1.pd.infn.it 193.206.xxx.xxx 142.159 ms        
node1.utdallas.edu +ve, 2 minor route change getting better anubis.s80.bnl.gov 130.199.80.124 0.341 ms shu.v400.bnl.gov 130.199.136.9 0.322 ms amon 130.199.3.24 0.790 ms esbnl-bnl.es.net 198.124.216.113 0.636 ms aoacr1-oc48-bnl.es.net 134.55.209.129 2.393 ms dccr1-oc48-aoacr1.es.net 134.55.209.62 6.646 ms atlcr1-oc48-dccr1.es.net 134.55.209.66 80.640 ms abilene-atlcr1.es.net 198.124.216.142 22.238 ms hstnng-atlang.abilene.ucaid.edu 198.32.8.33 41.774 ms ntg-gw1-so-3-0-0.northtexasgigapop.org 206.223.141.69 47.083 ms utd-ntg-gw1.northtexasgigapop.org 206.223.141.74 47.006 ms 129.110.5.20 129.110.5.20 48.472 ms node1.utdallas.edu 129.110.xxx.xxx 47.510 ms        
node2.nslabs.ufl.edu -ve, no route change, trg buf unstable, 0 alert anubis.s80.bnl.gov 130.199.80.124 0.300 ms shu.v400.bnl.gov 130.199.136.9 0.303 ms amon 130.199.3.24 0.718 ms esbnl-bnl.es.net 198.124.216.113 0.666 ms aoacr1-oc48-bnl.es.net 134.55.209.129 2.407 ms dccr1-oc48-aoacr1.es.net 134.55.209.62 6.530 ms atlcr1-oc48-dccr1.es.net 134.55.209.66 22.176 ms abilene-atlcr1.es.net 198.124.216.142 22.203 ms jax-flrcore-7609-1-te23-1800.net.flrnet.org 198.32.155.193 28.649 ms ssrb230a-ewan-msfc-1-v1805-1.ns.ufl.edu 198.32.155.222 33.839 ms ssrb6c-nexus-msfc-1-v30-1.ns.ufl.edu 128.227.236.81 33.723 ms ssrb230a-core-msfc-1-v20-1.ns.ufl.edu 128.227.236.2 33.804 ms ssrb201-nslabs-rsm-1-v222-1.ns.ufl.edu 128.227.129.158 34.335 ms node2.nslabs.ufl.edu 128.227.xxx.xxx 33.593 ms      
node1.utoranto.ca -ve, 2 route change (minor) , trg buf unstable, 0 alert anubis.s80.bnl.gov 130.199.80.124 0.516 ms shu.v400.bnl.gov 130.199.136.9 0.308 ms amon 130.199.3.24 0.728 ms esbnl-bnl.es.net 198.124.216.113 0.743 ms aoacr1-oc48-bnl.es.net 134.55.209.129 2.509 ms chislsdn1-chicr1.es.net 134.55.207.34 22.626 ms c4-tor01.canet4.net 198.125.140.142 33.012 ms c4-orano-tor.canet4.net 205.189.32.213 32.871 ms DIST1-TORO-GE1-1.IP.orion.on.ca 66.97.16.122 33.236 ms GTANET-ORION-RNE.DIST1-TORO.IP.orion.on.ca 66.97.23.58 33.300 ms utoronto-ut-hub-if.gtanet.ca 205.211.94.130 33.393 ms esc-gateway2.gw.utoronto.ca 128.100.200.102 34.166 ms mcl-gpb.gw.utoronto.ca 128.100.96.10 33.912 ms bigmac-fw.physics.utoronto.ca 128.100.190.2 34.854 ms node1.utoronto.ca 128.100.xxx.xxx 34.349 ms    
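The grouping itself is just a prefix comparison over the hop lists above. The Python sketch below illustrates it; the abbreviated hop names and the depth of four shared hops are assumptions for the example, not the values used in the analysis.

    # Sketch of grouping destinations by shared traceroute prefix.
    # Hop names are abbreviated stand-ins for the routes in the table above.
    from collections import defaultdict

    routes = {
        "node1.cacr.caltech.edu": ["bnl", "esnet-aoa", "esnet-chi", "esnet-snv", "cenic", "caltech"],
        "node1.sdsc.edu":         ["bnl", "esnet-aoa", "esnet-chi", "esnet-snv", "cenic", "sdsc"],
        "iepmbw.cern.ch":         ["bnl", "esnet-aoa", "geant-ny", "geant-uk", "geant-fr", "cern"],
        "node1.pd.infn.it":       ["bnl", "esnet-aoa", "geant-ny", "geant-at", "garr", "infn"],
    }

    def group_by_prefix(routes, depth=4):
        """Group destination nodes whose traceroutes agree on the first depth hops."""
        groups = defaultdict(list)
        for name, hops in routes.items():
            groups[tuple(hops[:depth])].append(name)
        return groups

    for prefix, members in group_by_prefix(routes).items():
        print(" -> ".join(prefix), ":", sorted(members))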

Our next step was to analyze the behavior of iepm-bw.bnl.gov itself. The statistics below were gathered on 12/30/05 at about 17:00:00, by which time the storm had passed.

 Host at BNL = IEPM-BW.BNL.GOV

/usr/bin/top showed the host >98% idle, 70 MB free memory, swap space barely used

/sbin/ifconfig showed no interface errors

The host had been up for 2 days:

21:45:27  up 2 days,  9:48,  1 user,  load average: 0.01, 0.03, 0.06

Possibly after it rebooted 2 days ago something was not set right.

The maximum TCP window/buffer settings were OK, and in any case these would only affect the iperf measurements, not pathchirp.

cat /proc/sys/net/core/wmem_max = 1048576
cat /proc/sys/net/core/rmem_max = 1048576
cat /proc/sys/net/core/rmem_default = 1048576
cat /proc/sys/net/core/wmem_default = 1048576
cat /proc/sys/net/ipv4/tcp_rmem = 262144  1048576  8388608
cat /proc/sys/net/ipv4/tcp_wmem = 262144  1048576  8388608

The ping RTT to SLAC is ~70 ms, so if we take the bottleneck bandwidth to be about 500 Mbits/s, the bandwidth-delay product is ~4 MB; doubling that gives ~8 MB.
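For reference, the arithmetic behind those numbers as a quick Python check (the 500 Mbits/s bottleneck figure is the assumption stated above):

    # Bandwidth-delay product for the BNL-SLAC path, using the numbers above.
    rtt_s = 0.070              # ping RTT ~ 70 ms
    bottleneck_bps = 500e6     # assumed bottleneck bandwidth ~ 500 Mbits/s
    bdp_bytes = rtt_s * bottleneck_bps / 8
    print("BDP ~ %.1f MB, doubled ~ %.1f MB" % (bdp_bytes / 2**20, 2 * bdp_bytes / 2**20))
    # prints: BDP ~ 4.2 MB, doubled ~ 8.3 MB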

Iperf from BNL to SLAC (iepm-resp), 20 second measurements unless otherwise noted:

Streams   Window   Achievable throughput

64        256KB    552Mbits/s

64        128KB    496Mbits/s

32        256KB    522Mbits/s

16        128KB    261Mbits/s

16        512KB    383Mbits/s

8         1024KB   247Mbits/s

4         2048KB   173Mbits/s

2         4096KB   87Mbits/s

1         8192KB   43Mbits/s      

64        256KB    544Mbits/s     #200 secs

Thus it appears that the host is not the limit: it can push aside other TCP-friendly traffic and achieve over 500 Mbits/s throughput.
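For context, a sweep like the one above could be scripted roughly as follows. This is a sketch only: it assumes the classic iperf client flags -c (server), -P (parallel streams), -w (window) and -t (duration), and the responder hostname is illustrative.

    # Sketch of the stream/window sweep behind the table above.
    # The server name is illustrative and the flags assume classic iperf.
    SERVER = "iepm-resp.slac.stanford.edu"   # assumed responder host
    cases = [(64, "256K"), (64, "128K"), (32, "256K"), (16, "128K"), (16, "512K"),
             (8, "1M"), (4, "2M"), (2, "4M"), (1, "8M")]

    for streams, window in cases:
        print("iperf -c %s -P %d -w %s -t 20" % (SERVER, streams, window))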

Another interesting observation from the iepmbw.cacr.caltech.edu data was that the ping maximum RTT showed behavior similar to the pathchirp throughput, which suggested that ping might serve as a replacement for the rather complex pathchirp tool. The data from which the graph is generated is available here.
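One simple way to quantify how closely the two series track each other is a correlation over paired samples, as in the sketch below; the sample values are placeholders, not the linked data.

    # Sketch: correlation between ping max RTT and pathchirp throughput.
    # The two series are placeholders; the real data is linked above.
    from statistics import mean

    def pearson(xs, ys):
        """Pearson correlation coefficient of two equal-length series."""
        mx, my = mean(xs), mean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        var_x = sum((x - mx) ** 2 for x in xs)
        var_y = sum((y - my) ** 2 for y in ys)
        return cov / (var_x * var_y) ** 0.5

    ping_max_rtt = [78.3, 78.6, 79.1, 85.4, 86.2, 85.9]   # ms (placeholder)
    pathchirp_bw = [480, 470, 465, 290, 285, 280]         # Mbits/s (placeholder)
    print("correlation = %.2f" % pearson(ping_max_rtt, pathchirp_bw))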

 

Cause

After all this analysis we concluded that:

1- A route change is not the cause.

2- Network behavior/congestion is not the cause.

3- The only remaining possibility is that iepm-bw.bnl.gov was itself experiencing degraded performance, which caused some of the monitored nodes to show negative results.