Program Logic Manual of the SLAC WAN Bandwidth Tests
Connie Logg
February 11, 2003
Acknowledgements
Several people have contributed to this package including: Les Cottrell, Manish Bhargava, Fabrizio Coccetti,
Jerrod Williams, and I-Heng Mei.
This package has been developed to send various types of probes to specific nodes on the internet. It performs the
probes, and then saves, analyzes, and reports the results via the WWW. The probes may use network
measurement tools such as ping, traceroute, and iperf, or, file transfer
applications such as bbftp and
bbcp.
Directory and Data Structure Formats
Iperf Information
Bbcp Information
Bbftp Information
Bandwidth Tests Home Page
Porting Document
SLAC WAN Bandwidth Test: how to install the CGI scripts
Oracle Database Access Information
Topology Graphs' Documentation
- Since our upgrade to Version 2 of the IEPM-BW tests in February, three new features have been added to the package: getbwversions, ck-win-str, and make-test-parms-html. These features were deemed necessary to aid in troubleshooting everyday problems you may see with your monitoring hosts. All of the features are packaged in a single tar file (newfeat.tar). They should be downloaded and detarred into your 'v2src' directory and executed from within '$SRCDIR/overnight-processing-script'. For a short description of what 'getbwversions' and 'ck-win-str' do, refer to the section titled "Overnight Analysis". A description of 'make-test-parms-html' can be found in "After Run Analysis".
- $SRCDIR/getbwversions - 'Get BW Versions' was created to provide a handy web page that includes the configurations of all the hosts you are monitoring. Included on this page are each host's operating system information, CPU speed, the versions of the tests being run (i.e. iperf, bbcp, and bbftp), and the location on the monitoring host where temporary files are written.
- $SRCDIR/ck-win-str - 'Check Windows & Streams' creates a link from the timeseries plots, called 'Windows and Streams', that logs the date, time and value of each change made to the number of streams or the window size used on particular nodes for each test being executed.
- $SRCDIR/make-test-parms-html - 'Make Test Parameters' creates an all-inclusive web page to display the current parameters used for each test we run. This web page lists the window sizes and numbers of streams, specified port numbers, specific options, and even lets you know whether each test is set to be run or not. MUCH EASIER THAN REFERRING TO THE ALIASES FILE!!! Even the username for the particular host, the ssh version, the host operating system, and the CPU speed are included on this page.
How To Perform the Upgrade
- Download the tar file (newfeat.tar) to your source directory ($SRCDIR).
- Detar 'newfeat.tar'. Five new files should be created in your source directory: make-test-parms-html, getbwversions, ck-win-str, bw-version.txt, and bw-versionpl.
- Add an entry in '$SRCDIR/overnight-processing-script' for ck-win-str and getbwversions.
# call ck-win-str to check for changes in window sizes and number of streams
@ans = `$SRCDIR/ck-win-str 2>&1`;
# call the bwversions processing
@ans = `$SRCDIR/getbwversions 2>&1`;
- Add an entry in '$SRCDIR/post-test-processing-script' for make-test-parms-html.
# make the web page for the test parameters
@ans = `$SRCDIR/make-test-parms-html`;
Monitoring Host - (MH) The node that sends the probes
Target Node - (TN) A node that the probes are being sent to
Probe - A script which runs a measurement from a MH to a TN and returns the results. The script
may use a network measurement tool or a network application to perform the measurement.
Monitoring Host Configuration File - (MHCF) - The configuration file
which specifies the directories for the source code and data & analysis, the
various paths required by the code, and other MH specific
definitions. For SLAC, this is a file with perl definitions. The information could be
contained in a data base or another form. In that case the porting site must write the script
monhost-getcfg to provide the information to the probes and analysis.
Alias File - (ALIASFILE) - The target node configuration file which contains the specifications
for all of the probes to all of the target nodes. This information could be in a data base or
other format, however, then the porting site must rewrite the
getparms-testname scripts
described below to return the parameters for each probe and target node.
Linux or Solaris
Perl and perl modules required:
Date::Calc qw(Add_Delta_Days Delta_Days)
Date::Manip qw(ParseDate UnixDate DateCalc)
Time::localtime
Sys::Hostname
getopts.pl
getopt.pl
IO::Handle
SOAP::Lite
At SLAC we have always used perl code snippets for the monitoring and target host
configuration files. Some remote monitoring host users have indicated that they would
rather keep the configuration information in an SQL database or encode it in XML. To
facilitate that, this release has several interfaces to the configuration files which can be
rewritten by the users to fetch the information from whatever configuration structure
they wish to use. Except for the code related to the cgi scripts, all measurement and
analysis code fetches any information required via these interface files.
- $SRCDIR/monhost-getcfg
returns the monitoring host configuration information. Specific options can be requested,
however if none are provided all the information is returned.
The calling sequence is: $SRCDIR/monhost-getcfg or
$SRCDIR/monhost-getcfg -o options
The options available at this time are:
("BASEDIR='/afs/slac/package/netmon/bandwidth-tests'",
"SRCDIR='/afs/slac/package/netmon/bandwidth-tests/v2src'",
"ALIASFILE='/afs/slac/package/netmon/bandwidth-tests/v2src/slaconly/aliases'",
"REPORTSDIR='/nfs/slac/g/net/iepm-bw/bandwidth-tests/antonia'",
"WEBPATH='http://www.slac.stanford.edu/comp/net/bandwidth-tests/antonia'",
"PRIVATEDIR='/afs/slac/package/netmon/bandwidth-tests/v2src/slaconly'",
"IPERFTOOL='/afs/slac/package/scsutils/bin.\@sys/iperf'",
"BBCPTOOL='/afs/slac/package/bbcp/prod/bin/\@sys/bbcp'",
"BBFTPTOOL='/afs/slac/public/software/bbftp/\@sys/bbftp'",
"UDPMONTOOL='N/A'",
"TRACETOOL='/usr/sbin/traceroute -m 10 -q 1 -w 3'",
"LOGDIR='/nfs/slac/g/net/iepm-bw/bandwidth-tests/antonia/slaconly'",
"PINGTOOL='/bin/ping -s 1000 -c 10 -w 20'",
"MAILTOOL='/bin/mail'",
"GREPTOOL='/bin/grep'",
"SSHTOOL='/usr/local/bin/ssh'",
"GNUPLOTTOOL='/usr/local/bin/gnuplot'",
"MAILTO='cal\@slac.stanford.edu'",
"DAYSTOANALYZE='28'",
"DATATOANALYZE='ping.avg=w i lt 1;ping.min=w i lt 0;bbcpdisk.Kbits/s=w p lt 16;bbcpmem.Kbits/s=w p lt 12;bbftp.Kbits/s=w p lt 5;iperf.Kbits/s=w p lt 8;bbcpdisk.eff-Kb/s;ping.max;ping.loss;trace.numhops'",
"TESTDATADIR='/nfs/slac/g/net/iepm-bw/bandwidth-tests/testfiles'",
"SITE='SLAC.Stanford.Edu'",
"MNUSER='iepm'",
"SECURITY='AFS,SSH'",
"REMOTEOSDIR='/afs/slac/package/netmon/bandwidth-tests/v2src/remoteos'",
"DATADIR='/nfs/slac/g/net/iepm-bw/bandwidth-tests/antonia/plotdata'",
"WEB100DIR='N/A'",
"TOOLKITDIR='/afs/slac/package/netmon/bandwidth-tests/v2src/toolkit/scripts.list'",
"RUNDURATION='7200'",
"NETFLOW='N/A'",
"NETFLOW_EXTRACTED='N/A'",
"MASTERTIMEOUT='60'"
)
For example:
@ans = `$SRCDIR/monhost-getcfg -o "SRCDIR,REPORTSDIR"`;
returns
("SRCDIR='/afs/slac/package/netmon/bandwidth-tests/v2src'","REPORTSDIR='/nfs/slac/g/net/iepm-bw/bandwidth-tests/antonia'")
in $ans[0], and this can be evaluated by
chop @ans;
#now define the parms
eval("\@parms = $ans[0];");
grep (eval("\$$_;"),@parms);
- $SRCDIR/get-nodelist
@ans = `$SRCDIR/get-nodelist`
returns in @ans a string which can be evaluated to produce an associative array %PHONY.
eval("$ans[0];"); will evaluate the returned value.
The keys of %PHONY are the aliasnames and the value of $PHONY{$key} is the real ipname or
address.
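As a self-contained illustration, the returned string can be evaluated like this (the alias name, node name, and exact string format here are invented for the example; the real string comes from `$SRCDIR/get-nodelist`):

```perl
# Hypothetical example of evaluating the string returned by get-nodelist.
# The alias and node names below are assumptions for illustration only.
my @ans = ("\%PHONY = ('caltech1' => 'node1.cacr.caltech.edu');");
eval("$ans[0];");   # defines %PHONY
print "caltech1 is $PHONY{'caltech1'}\n";
```

After the eval, each alias name keys the real ipname or address of its target node.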
- $SRCDIR/get-nodeval -n aliasnodename -o vals
is useful for fetching various run and analysis parameters which are specified by the
target host configuration.
@ans = `get-nodeval -n node1.cacr.caltech.edu -o "REMUSER,REMHOST"`
returns ("REMHOST='OS=Linux 2.4&CPU=1000 MHz&NIC=1000 Mbps'","REMUSER='cottrell'") in $ans[0].
This can be evaluated by
chop @ans;
#now define the parms
eval("\@parms = $ans[0];");
grep (eval("\$$_;"),@parms);
$REMUSER will equal "cottrell", and $REMHOST will equal 'OS=Linux 2.4&CPU=1000 MHz&NIC=1000 Mbps'
- Other supporting scripts are those which are named getparms-
testname. These
provide the parameters for running the various measurement tests.
The MHCF is the SLAC method of describing to the bw tests code the source and
analysis directory paths, the probe tool paths, the
path to gnuplot, and other required information. More information is also
available in the directory and
data file structure documentation.
The critical components are:
- $BASEDIR - the base directory for the source code and documentation
- $SRCDIR = $BASEDIR/v2src - the source code resides under here
- $PRIVATEDIR = $SRCDIR/slaconly - where the Target Node configuration file is
stored.
At SLAC the web server will not serve anything off site which has 'slaconly' in
its path.
- $ALIASFILE - the target host configuration file. At SLAC this is in
$PRIVATEDIR/aliases.
- $MNUSER - the userid the main bw tests script is to run under on the MH.
- $REPORTSDIR - is the base directory for all the data and analysis.
See also the directory and
data file structure documentation
- $LOGDIR - where the raw output from the probe tools is stored
- $DATADIR - where the data extracted from the raw probe output is stored.
- $WEBPATH - the string to be used for referring to the web pages created by
the analysis
- $DAYSTOANALYZE - the number of days worth of data to analyze in the general
analysis
- $DATATOANALYZE - the data values to analyze. See also the
directory and
data file structure documentation.
- $SITE - The string to be used in the html pages to define the
site where the monitoring host is located.
- $MASTERTIMEOUT - the maximum number of seconds any probe is allowed to run
before being timed out (TPTO on the failure analysis page)
- Various sensor tool and utility paths: $IPERFTOOL, $BBCPTOOL, $BBFTPTOOL,
$GREPTOOL, $GNUPLOTTOOL, $PINGTOOL,
$TRACETOOL, etc.
- $PERLPATH - the path for perl. This is used by the distribution tool
kit to set the proper first line of the perl scripts.
- $SECURITY - indicates the type of security. For example at SLAC, we use
AFS and SSH.
- $TESTDATADIR - the directory where the data files used for file transfer
probes are stored
- Additional variables for special purposes and not part of the general
distribution at this point
- $TOOLKITDIR - is the directory where the toolkit for distribution
resides. This is for SLAC use.
- $NETFLOW & $NETFLOW_EXTRACTED - used by SLAC for active/passive analysis -
currently disabled
- $WEB100DIR - where the web100 utilities are stored - currently disabled
The following are conventions used in the implementation for passing arguments
to the scripts. These descriptions will
not be repeated for each calling sequence. However, deviations from these
conventions should be documented in this
PLM's description of the deviant code.
- -c "fully qualified filename" - is the fully qualified MHCF. If the
environment variable "BW_CFG" is
defined to be this name, it is not necessary to provide it via "-c".
- -s "fully qualified directory specification" - is the source code directory.
If the
environment variable "BW_SRC" is
defined to be this name, it is not necessary to provide it via "-s".
- -d "date" - This is used to specify a date to any of the scripts
which take one. It can be of the
form "today", "yesterday", or "6/20/02".
- -i "number of days" - is used to indicate the number of days of
data that certain scripts are to process
when they run. Its most prolific use is in the time series plots. The parameter
$DAYSTOANALYZE must be specified in the
MHCF. "-i" is primarily an internal parameter.
- -n "alias node name" - When a script takes a node name as input,
it must be the alias name of the node. In
the case of "run-bw-tests" if "-n all" is provided the tests are run on all nodes.
- -w "a number" - This is for web100 invocation, which is currently under
development. The number
is provided for future expansion to facilitate specifying various levels of web100
variable dumping.
- -h "hostname" - This is used to specify the name of the monitoring host
that is running the measurements. It is
primarily an internal parameter, which allows the analysis of the data for some
monitoring hosts to be run on other
machines. If required, and not provided, the hostname of the machine the analysis
is running on is used.
- -D n - This is the debug flag and is particularly important for
run-bw-tests when running it from the
command line. '-D 0' indicates that the measurements are to
be run, but that no data is to be written to the data files, only the terminal.
'-D 1'
indicates that the measurements are to be run and the data is to
be written to the data files, and not the terminal. '-D 2' indicates that
the measurements
are to be run and the data is to
be written to the data files, and the terminal. If called from a crontab, '-D 2'
can be used in production, and the output will be
printed to STDOUT.
- -g n - This is used to tell the html creation scripts whether the
cgi interfaces should be included in the html
pages. If '-g 1' is specified, they are included. If it is specified as
anything else, or omitted, then the cgi scripts are not
included in the web pages.
The following describes the basic code that is needed to run the tests,
analyze the code
performance, and perform analysis of the basic results.
CONVENTIONS: files ending in ".pl" are code fragments. Some of these are
"required" in the scripts and
others are samples of how to do certain manipulations
such as dates.
Note that all scripts require that the environment variable
BW_SRC be set to the base path of the
source code. If using the SLAC provided monitoring host configuration
file format, BW_CFG must be set to
the fully qualified name of the file. In the case where the -c and -s parameters
are available for specifying these in the script calling sequences, the scripts
themselves will set the environment variables for their processes.
Measurement Engine (Connie Logg)
run-bw-tests - This is the main SLAC
driver script.
It is scheduled by the unix cron facility. The calling sequence is:
run-bw-tests -n [nodenames | all] -D n -c MHCF -s source_directory
-t [list of tests] -g 1
[-w1]
where:
- "-n" is optional, but can be used to specify specific nodes to be probed
- "-D" is optional, but IMPORTANT: If -D is not
specified or is
0, then the measurements will be run
and the results
printed to STDOUT only. If -D is 1, the measurements are run and the results
are logged to
the data files. If -D = 2, the measurements are run, the output is printed to
STDOUT and
the results are logged to the data files. This is an
important change from the past.
- "-g 1" indicates whether the text to invoke cgi scripts is embedded into the html
pages. If it is to be, -g must be set to 1.
- "-c" fully qualified name of the monitoring host configuration file. If
this parameter is not specified, the information
must be provided by the environment variable "BW_CFG".
- "-s" fully qualified name of the source code directory. If
this parameter is not specified, the information
must be provided by the environment variable "BW_SRC".
- "-t" is the probe type to be used. If it is not provided, all probes are done.
Currently it can be:
"ping", "trace", "bbcpmem", "bbcpdisk", "bbftp", or "iperf".
- -w1 indicates that the web100 variables are to be
fetched (currently disabled);
actually any non-zero value will work, but I recommend 1 since it will allow
for further options at
a later date. If -w is specified, $WEB100DIR must be specified in the MHCF.
run-bw-tests calls the following probe scripts. A probe
script runs a given measurement, extracts the
results, analyzes the success or failure of the measurement, and returns the
results to run-bw-tests. The probe scripts can actually be
called from the command line or any other scheduling mechanism. They only return
the results. They do not save the results anywhere.
It is up to the calling program to save the results. Each probe script calls
monhost-getcfg and a
getparms-testname to fetch the parameters for the
{aliasnodename, test} pair.
Each script also calls $SRCDIR/failures/failanal -m text -f failure_code_file to do
the failure analysis. This process is described later on in this document. The "text" is the output of the measurement tool, and
the "failure_code_file" is a list of character strings that indicate failures. Please see the example in the $SRCDIR/failures
directory.
- $SRCDIR/do_ping -n aliasnodename calls
$SRCDIR/getparms-ping -n aliasnodename
- $SRCDIR/do_trace -n aliasnodename
calls $SRCDIR/getparms-trace -n aliasnodename
- $SRCDIR/do_iperf -n aliasnodename
calls $SRCDIR/getparms-iperf -n aliasnodename
- $SRCDIR/do_bbcpdisk -n aliasnodename calls
$SRCDIR/getparms-bbcpdisk -n aliasnodename
- $SRCDIR/do_bbftp -n aliasnodename calls
$SRCDIR/getparms-bbftp -n aliasnodename
- $SRCDIR/do_bbcpmem -n aliasnodename calls
$SRCDIR/getparms-bbcpmem -n aliasnodename
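Since the probe scripts only return their results and never save them, a caller other than run-bw-tests must store the results itself. A minimal sketch, with `echo` standing in for a probe script such as do_ping, and a hypothetical log path:

```perl
# Minimal sketch of the caller's responsibility: run a probe and save
# what comes back. "echo" stands in here for a probe script such as
# `$SRCDIR/do_ping -n aliasnodename`; the log path is hypothetical.
my @ans = `echo "ping result line"`;      # run the probe, capture output
open(my $log, ">", "/tmp/ping-demo.log") or die "cannot open log: $!";
print $log @ans;                          # the caller saves the results
close($log);
```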
currently not active: web100.pl - Is the code that is conditionally loaded for web100 (only loaded if
running with
web100 enabled) - (I-Heng Mei)
Timing out probes - If anything can hang, it will. The MHCF contains a variable $MASTERTIMEOUT which is
used by run-bw-tests. The target node configuration file also provides timeout information.
The parameter $TIME2RUN{"nodealiasname"} is the maximum amount of time that any command to a target node is allowed to
take. It can be overwritten for any given node for specific tests by "IPERFTIME2RUN",
"BBCPDTIME2RUN", "BBCPMTIME2RUN", or "BBFTPTIME2RUN"
parameters in the target node configuration file.
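The manual does not show how the timeout is enforced; a common Perl idiom for imposing such a limit is alarm() wrapped around the command, sketched here with invented values (this is not the actual run-bw-tests code):

```perl
# Illustrative sketch of timing out a probe with alarm(). The 2-second
# timeout and the sleeping command are stand-ins for $MASTERTIMEOUT (or a
# node's TIME2RUN value) and a hung probe.
my $MASTERTIMEOUT = 2;
my (@ans, $timedout);
eval {
    local $SIG{ALRM} = sub { die "timeout\n" };
    alarm($MASTERTIMEOUT);
    @ans = `sleep 5; echo done`;    # a probe that hangs
    alarm(0);                       # cancel the alarm on success
};
$timedout = ($@ eq "timeout\n") ? 1 : 0;
print "probe timed out\n" if $timedout;
```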
After Run Analysis (Connie Logg)
$SRCDIR/post-test-processing-script -g n -c MHCF -s source_directory - This is called by "run-bw-tests" to process
the data after a run. Note the '-g' parameter: if -g = 1, as passed to
the run-bw-tests script, the html files include the text for invoking the
cgi scripts; if it is anything else, the cgi text is not included.
$SRCDIR/post-test-processing-script
calls the following
scripts:
- $SRCDIR/create-analysis-data - This uses the parameters $DAYSTOANALYZE and
$DATATOANALYZE
in the monitoring host configuration to put together
all the data for the specified period of time for the analysis by calling $SRCDIR/combine-data
for each node
- $SRCDIR/create-ts-plots - This plots the data created by
$SRCDIR/create-analysis-data by calling $SRCDIR/timeseries/plotem-ts for each node
- $SRCDIR/create-individual-plots - This calls plotem-each
to create an individual plot for each probe and node.
- $SRCDIR/failures/make-failanal-html - Makes the failure analysis web page
- $SRCDIR/make-bw-html -g n - makes the main bw results page. Note the -g parameter is
what was passed down to $SRCDIR/post-test-processing-script by run-bw-tests.
Overnight Analysis (Connie Logg)
There are several things that need to be taken care of after midnight. The '$SRCDIR/overnight-processing-script -g n' is called to do this.
Note that '-g' is that parameter which indicates whether the text for invoking the cgi
scripts is included in the html. '$SRCDIR/overnight-processing-script -c MHCF -s source_directory' calls:
- $SRCDIR/create-diurnal-plots to perform the diurnal analysis
- $SRCDIR/getbwversions - creates a webpage to show remote host configurations complete with version names/numbers of code on the hosts.
- $SRCDIR/ck-win-str - records the date, time and value of each change made to the number of streams or the window size used for particular nodes on each test.
- $SRCDIR/make-test-parms-html - creates a webpage to display the current values of each test parameter for each node.
- $SRCDIR/histograms/make-histograms to create the histogram panel grid
- $SRCDIR/failures/make-failanal-html -d yesterday to create the final failure analysis page for 'yesterday'
- $SRCDIR/scatterplots/make-scatter-plots - creates the individual scatter plots
- $SRCDIR/scatterplots/plot-scatter-combined-plots - creates the large scatterplots
- $SRCDIR/scatterplots/make-scatter-plots-html - creates the html pages for the
individual scatterplots
- $SRCDIR/scatterplots/make-combined-plots-html - creates the large scatterplots html page
- $SRCDIR/history/create-history-pages -d yesterday - does all the processing for the historical data
Summarizing the data for Historical Purposes (Connie Logg)
$SRCDIR/history/create-history-pages - calls the following scripts to perform the historical tasks.
Note that it is called by the overnight processing.
- $SRCDIR/history/make-weekly-summary - Makes the summary files of 4 points per day for the weekly graphs
- $SRCDIR/history/make-monthly-summary - Makes the summary files of 1 point per day for the monthly graphs
- $SRCDIR/history/combine-history-data - Makes the data files for the plots
- $SRCDIR/history/plotem-ts-history - Creates the weekly and monthly plots
- $SRCDIR/history/make-csv - Makes CSV (actually blank separated) files for use with the
pinger analysis structure.
Calling sequence: "$SRCDIR/history/make-csv -h host -d date"
After all the plots are done, it creates the historical html pages
Site Customization Code Snippets (Connie Logg) -
called by "make-bw-html". The snippets can be customized by non-SLAC sites to tailor the web page to their own site.
$SRCDIR/site-customization/top-of-page.pl - can be customized to provide site specific top of page boilerplate
$SRCDIR/site-customization/bottom-of-page.pl - can be customized to provide site specific bottom of page boilerplate
Other Useful Snippets of Code (Connie Logg)
$SRCDIR/date.pl - Code snippet which is useful for processing the date information in the schema used in the bw-tests
$SRCDIR/opt_c.pl - This snippet can be copied into code to process the -c parameter
$SRCDIR/evalparms.pl - This snippet can be copied into code to process the results coming back from the
configuration interface scripts.
Run Control and Automated Processing
(Connie Logg)
The following crontab entries control the running of the bw tests scripts at SLAC.
#Antonia: run the bw tests every 90 minutes
antonia;120 35 0,3,6,9,12,15,18,21 * * * /afs/slac/package/netmon/bandwidth-tests/v2src/run-bw-tests -c /afs/slac/package/netmon/bandwidth-tests/v2src/antonia.cfg -g 1 -D 2 -s /afs/slac/package/netmon/bandwidth-tests/v2src
antonia;120 5 2,5,8,11,14,17,20,23 * * * /afs/slac/package/netmon/bandwidth-tests/v2src/run-bw-tests -c /afs/slac/package/netmon/bandwidth-tests/v2src/antonia.cfg -g 1 -D 2 -s /afs/slac/package/netmon/bandwidth-tests/v2src
# do the overnight processing
antonia;120 30 2 * * * /afs/slac/package/netmon/bandwidth-tests/v2src/overnight-processing-script -c /afs/slac/package/netmon/bandwidth-tests/v2src/antonia.cfg -g 1 -s /afs/slac/package/netmon/bandwidth-tests/v2src
# run the hung processes cleanup script once an hour
antonia;30 1 * * * * /afs/slac/package/netmon/bandwidth-tests/v2src/bw-cleanup -c /afs/slac/package/netmon/bandwidth-tests/v2src/antonia.cfg -s /afs/slac/package/netmon/bandwidth-tests/v2src
Loading the Target Nodes (Les Cottrell)
(Jerrod Williams)
remoteos/remoteos - This is the code which downloads all of the configuration files
needed to load the target nodes.
Prediction code which is still under development (Les Cottrell) (Connie Logg)
$bandsrc/bw_predict UNDER DEVELOPMENT
This returns a "prediction" for a given set of filtered data, using a given method, and given parameters.
Syntax:
($status,$average,$stdev,$min,$max,$num_samples,$tile25,$tile50,$tile75)=bw_predict($node,$datatype,$samplespec,$starttime);
($status,$average,$stdev,$min,$max,$num_samples,$tile25,$tile50,$tile75) = bw_predict -n node -t datatype -i $samplespec -d $starttime
Where:
- $node indicates the node which is the input data source
- node1.in2p3.fr
- node1.ccs.ornl.gov
- etc.
- $datatype is the type of data we are interested in
- iperf
- bbcpmem
- bbcpdisk
- bbftp
- $samplespec describes the range of data going back from $starttime
- "nH" is the number of hours to go back from $starttime
- "nD" is the number of days to go back from $starttime
- "n" is the number of measurements you want to go back from $starttime. Note that it goes back until
"n" samples have been found.
- $starttime is the time to go back from. It must be of the form "mm/dd/yyyy hh[:mm]"
or can be just a date, in which case hh:mm:ss = 00:00:00.
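For illustration, a $samplespec string could be interpreted as follows (the subroutine name and return convention are invented; bw_predict's internals may differ):

```perl
# Hypothetical interpretation of $samplespec: "nH" = hours back, "nD" =
# days back, plain "n" = number of measurements back from $starttime.
sub parse_samplespec {
    my ($spec) = @_;
    return ('hours',   $1) if $spec =~ /^(\d+)H$/;
    return ('days',    $1) if $spec =~ /^(\d+)D$/;
    return ('samples', $1) if $spec =~ /^(\d+)$/;
    return ('invalid', 0);
}
my ($unit, $n) = parse_samplespec("6H");
print "$n $unit\n";
```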
And where:
- $status is a return code.
- $status = "" if the request was processed ok
- $status = "no data" if there was no data
- $status = "partial" if a request was made for n samples and there were not n samples available
- $average is the average of the samples (in megabits/sec)
- $stdev is the standard deviation of the samples (in megabits/sec)
- $min is the minimum values found (in megabits/sec)
- $max is the maximum value found (in megabits/sec)
- $num_samples is the actual number of valid samples found
- $tile25 is the 25%tile
- $tile50 is the 50%tile (median)
- $tile75 is the 75%tile
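As an illustration of the returned statistics, here is a self-contained sketch (this is not the actual bw_predict code; it uses a simple nearest-rank percentile, and the sample values are invented):

```perl
# Illustrative computation of the values bw_predict returns for a set of
# throughput samples in Mbits/s.
sub summarize {
    my @s = sort { $a <=> $b } @_;
    my $n = scalar @s;
    return ("no data") unless $n;
    my ($sum, $sumsq) = (0, 0);
    foreach my $x (@s) { $sum += $x; $sumsq += $x * $x; }
    my $average = $sum / $n;
    my $stdev = $n > 1 ? sqrt(($sumsq - $n * $average**2) / ($n - 1)) : 0;
    my $tile = sub { $s[int($_[0] * ($n - 1) + 0.5)] };   # nearest rank
    return ("", $average, $stdev, $s[0], $s[-1], $n,
            $tile->(0.25), $tile->(0.50), $tile->(0.75));
}
my ($status, $average, $stdev, $min, $max, $num_samples,
    $tile25, $tile50, $tile75) = summarize(80, 90, 100, 110, 120);
```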
Back to the Porting Document
ERRATA:
- 3/2/03 - CAL - One of the iperf plotdata columns is mislabeled in some cases. The column that is labeled
"sumMbps" should actually be labeled "sumMByt". This is the total number of bytes transferred by iperf. It is not
used for anything. I have just kept it as a sanity check, and it is useless without more information.