Argus tweaking and design considerations

Carter Bullard carter at qosient.com
Thu Feb 22 19:14:17 EST 2001


Based on comments today, it's option #2: increase the number of
flows processed per turn once we exceed 256K flows.  I've got the
numbers down for beta.7, which may be out tonight.  We'll
process flows at the high end to ensure that we get
through all the flows in the queue every 16 seconds.

Carter

Carter Bullard
QoSient, LLC
300 E. 56th Street, Suite 18K
New York, New York  10022

carter at qosient.com
Phone +1 212 588-9133
Fax   +1 212 588-9134

-----Original Message-----
From: Mark Poepping [mailto:poepping at cmu.edu]
Sent: Thursday, February 22, 2001 5:50 PM
To: Carter Bullard
Cc: Argus (E-mail)
Subject: RE: Argus tweaking and design considerations


My comments mirror Peter's.

I think we should optimize for performance to get as much through as
possible.  Once we exceed maximum performance, it would be good to
degrade as gracefully as possible, but I think we should prefer
beefier/specialized hardware to overflow handling that produces
suboptimal results anyway.  I think that accuracy is *highly*
important, up to the maximum data flow possible.

I would therefore suggest, for post-2.0 releases, considering
opportunities for parallelization to increase performance, and an
architecture for scaling to 'arbitrary' flow rates.
Though that's impossible in the general case, I think we'd all like to
be able to go as fast as any interface can get packets through the OS,
i.e. a way for argus to *not* be the limiting factor.  Requiring
special hardware is okay at this level (multi-processor or
multi-computer).  TopLevel is one strategy to help with the scaling,
but there are others.  It's a fact that everyone's first inclination
is to put the monitor at the most central (highest-load) point in the
network, so their first question is always, "how fast can it go?"

mark.
