argus-clients informal survey
Russell Fulton
r.fulton at auckland.ac.nz
Thu Jun 21 18:06:41 EDT 2001
On Thu, 21 Jun 2001 14:00:25 -0300 Chris Newton <newton at unb.ca> wrote:
> 190,000 flows in 30 seconds. Basically, every packet sent by these two remote
> goofs, became a flow to argus. If this one 'issue' could be resolved, I
> believe Argus could survive quite well on large links.
The way NeTraMet (to give it its proper capitalization ;-) deals with
this is to have a 'fallback rule set' and a settable 'high water
mark'. Once the number of flows passes the 'high water mark', NeTraMet
changes to the fallback rule set. For our campus accounting this
involves dumping the remote address completely so that we have just one
flow per local address. Now NeTraMet is rather a different beast to
argus and I am not sure whether this concept can be adapted, but it is
worth a thought.
[before you ask: there is also a 'low water mark' that sets the point
where the meter flips back to the normal rule set]
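To make the high/low water mark idea concrete, here is a minimal sketch of
that hysteresis in Python. The thresholds, rule set labels, and function name
are all illustrative assumptions, not configuration taken from NeTraMet or
argus:

```python
# Hypothetical sketch of a high/low water mark rule set switch.
# HIGH_WATER, LOW_WATER, and the rule set names are made up for
# illustration; they are not real NeTraMet or argus settings.

HIGH_WATER = 150_000  # active flows: switch to the fallback set above this
LOW_WATER = 50_000    # active flows: switch back to normal below this

def choose_ruleset(active_flows, current):
    """Return which rule set the meter should use, with hysteresis."""
    if current == "normal" and active_flows > HIGH_WATER:
        return "fallback"  # stressed: aggregate more aggressively
    if current == "fallback" and active_flows < LOW_WATER:
        return "normal"    # load has dropped: resume full detail
    return current

# Crossing the high water mark flips the meter to the fallback set,
# and it stays there until the count falls below the low water mark.
state = choose_ruleset(190_000, "normal")  # -> "fallback"
state = choose_ruleset(120_000, state)     # still "fallback"
state = choose_ruleset(40_000, state)      # -> "normal"
```

The gap between the two marks is the point: without it, a flow count
hovering near a single threshold would make the meter flap between rule
sets on every sample.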
The question is what information argus could discard to reduce the
number of flows in these sorts of circumstances. In your particular
attack, aggregating the destination address and ports would do the
trick.
Carter, is the flow aggregation working in the server now? I know you
planned to do it. If so, the NeTraMet model would work without change:
when the meter gets stressed you swap to an alternative flow model.
What would be really neat would be to be able to say "For source IP
addresses with over X active flows, aggregate their destination peer
and port addresses."
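That per-source rule could look something like the sketch below. The flow-key
layout (source, destination, destination port), the threshold name X, and the
helper function are assumptions for illustration; argus's real flow records
are richer than this:

```python
from collections import defaultdict

X = 1000  # hypothetical per-source active-flow threshold

def aggregate_keys(flow_keys, threshold=X):
    """Collapse flow keys for sources with too many active flows.

    flow_keys: iterable of (src, dst, dport) tuples.
    Sources with more than `threshold` distinct flows are reduced to a
    single aggregate key (destination and port dropped); quieter
    sources keep full per-flow detail.
    """
    per_src = defaultdict(set)
    for key in flow_keys:
        per_src[key[0]].add(key)

    result = set()
    for src, keys in per_src.items():
        if len(keys) > threshold:
            result.add((src, None, None))  # aggregate away dst addr/port
        else:
            result.update(keys)
    return result
```

So a scanner hitting thousands of destinations costs the meter one flow
record, while normal hosts are still accounted for in full detail.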
Russell Fulton, Computer and Network Security Officer
The University of Auckland, New Zealand