traffic stats -- was argus 2 changes

Russell Fulton r.fulton at auckland.ac.nz
Tue Jul 11 18:00:30 EDT 2000


On Tue, 11 Jul 2000 13:47:50 -0400 Mark Poepping <poepping at cmu.edu> 
wrote:

> 
> 
> > >    If it is, "what percentage of the link utilization
> > > was contributed by this flow", well now there is a stat
> > > we could put into each flow record.  I would do it by noting
> > > the total interface packets and bytes at the start of the
> > > flow report, and the total interface packets and bytes at
> > > the end of the flow report, and dividing the difference
> > > into the totals for the flow.
> > > 
> > 
> > 	This too would probably be interesting if it isn't too difficult to 
> > generate. The more data we have, the better, as far as I'm concerned!
> 
> As long as you're doing this, it might be useful to think about
> doing statistics on it...  a standard deviation might be useful,
> though I'm not sure yet how to think about the difference between
> relative use (mine/total) and relative utilization (mine/available),
> especially as it relates to the effect on the statistics.
> 
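
To pin down what that per-flow number might look like, here is a rough 
sketch in Python (not argus code; the field names, link speed and 
intervals are hypothetical) of 'relative use' versus 'relative 
utilization' computed from interface counters snapshotted at the start 
and end of a flow's report interval:

    # Sketch only: flow_bytes and the interface counters are hypothetical
    # inputs, not actual argus record fields.

    def flow_link_share(flow_bytes, iface_bytes_start, iface_bytes_end):
        """Relative use (mine/total): fraction of the bytes the interface
        carried during the flow's report interval that belonged to this flow."""
        interval_bytes = iface_bytes_end - iface_bytes_start
        if interval_bytes <= 0:
            return 0.0
        return flow_bytes / interval_bytes

    def flow_capacity_share(flow_bytes, link_capacity_bps, interval_seconds):
        """Relative utilization (mine/available): fraction of the link's raw
        capacity this flow consumed over the same interval."""
        available_bytes = link_capacity_bps / 8.0 * interval_seconds
        return flow_bytes / available_bytes

    # Example: a flow moving 12 MB in a 60 s interval during which the
    # interface counted 90 MB in total, on a 34 Mbit/s link.
    print(flow_link_share(12e6, 500e6, 590e6))      # ~0.13 of the traffic
    print(flow_capacity_share(12e6, 34e6, 60))      # ~0.05 of the capacity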

If you want this sort of data I suggest that you look at NeTraMet, our 
implementation of the IETF Real Time Traffic Flow Measurement 
architecture (RFCs 2720-2724).  The RTFM working group pages can be 
found at http://www.auckland.ac.nz/net/Internet/rtfm/, and from there 
you will find pointers to the NeTraMet distribution.

The RTFM architecture is explicitly designed to let you define network 
flows and collect statistics or counts associated with them.  The one 
catch is that you do need to have some idea of what you are looking for 
when you set up the rule sets for the meters, i.e. you define the flows 
before you collect data.
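
To illustrate the idea (this is only a toy sketch in Python, not 
NeTraMet's actual ruleset language, and the attribute names are made 
up), a meter conceptually checks each packet against rules written in 
advance and only accumulates counts for the flows those rules describe:

    # Toy illustration only -- not NeTraMet's ruleset language.  The point
    # is that the flow definitions exist before any packet is counted;
    # packets matching no rule are simply never measured.

    from collections import defaultdict

    RULESET = [
        ("web",  lambda p: p["proto"] == "tcp" and p["dport"] == 80),
        ("mail", lambda p: p["proto"] == "tcp" and p["dport"] == 25),
    ]

    counters = defaultdict(lambda: {"packets": 0, "bytes": 0})

    def meter(packet):
        """Count the packet against the first flow definition it matches."""
        for name, matches in RULESET:
            if matches(packet):
                counters[name]["packets"] += 1
                counters[name]["bytes"] += packet["length"]
                return

    meter({"proto": "tcp", "dport": 80, "length": 1500})
    meter({"proto": "udp", "dport": 53, "length": 80})   # no rule, not counted
    print(dict(counters))   # {'web': {'packets': 1, 'bytes': 1500}}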

I rigged up a smurf alarm using NeTraMet by examining running averages 
in the days when we brought international traffic on behalf of all the 
NZ universities.  (We paid by the 95th percentile of traffic rates, 
taken over a month, on a pipe without rate limiting -- if smurf attacks 
lasted too long then it cost us big money...)
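
Roughly the shape of it, as a sketch (the window, threshold and the 
percentile arithmetic here are illustrative, not the values we actually 
ran with):

    # Sketch of the running-average alarm idea, plus the billing figure.

    from collections import deque

    WINDOW = 30           # running average over 30 ten-second samples
    THRESHOLD_BPS = 8e6   # alarm if the smoothed rate climbs past this

    samples = deque(maxlen=WINDOW)

    def check_sample(bits_per_second):
        """Feed one traffic-rate sample; True means the running average has
        climbed past the threshold (e.g. a smurf flood in progress)."""
        samples.append(bits_per_second)
        return sum(samples) / len(samples) > THRESHOLD_BPS

    def percentile_95(rates_for_month):
        """The billing figure: drop the top 5% of samples and charge on the
        highest rate that remains."""
        ordered = sorted(rates_for_month)
        return ordered[int(0.95 * (len(ordered) - 1))]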

The functionality Mark suggests for argus would still be useful for 
'after the incident' analysis on data already collected.

If you want to have a look at one project I did with NeTraMet, see:

http://kaka.itss.auckland.ac.nz:999/

There you can see what pathetic Internet access we have.  The stats for 
these plots are based on 10-second samples at the meter (currently an 
old 486) which are read by a collector every 10 minutes.
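
The arithmetic behind the plots is no more than counter deltas over the 
sample interval, roughly along these lines (a sketch only; the 
collector's real output format is not shown here):

    # Sketch only: turning cumulative byte counters sampled every 10
    # seconds into bit-per-second figures for the plots.  Counter wrap on
    # the old meter hardware is ignored here.

    SAMPLE_INTERVAL = 10  # seconds between meter readings

    def rates(byte_counters):
        """byte_counters: cumulative byte counts, one reading per sample."""
        return [(curr - prev) * 8 / SAMPLE_INTERVAL
                for prev, curr in zip(byte_counters, byte_counters[1:])]

    print(rates([0, 1250000, 2500000]))   # [1000000.0, 1000000.0] bits/s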

There are a few holes in the data for the last few weeks because of 
hardware problems and my absence from the office...

Cheers, Russell.


