argus-clients informal survey

Chris Newton newton at unb.ca
Thu Jun 21 18:43:35 EDT 2001


Yes, this is the sort of solution I was imagining; however, I have seen 
cases where the single-direction concept (drop the remote side, keep 
locals) wouldn't be enough.  We have had a couple of cases where a trojan 
got installed in some unix lab run by a department that is less interested 
in security than mine is (the main computing center).  That lab was on a 
student network with a netmask of 255.255.240.0.  The interesting thing 
about this trojan is that it used the netmask it found on the machines as 
a guide to determine the range of addresses it could spoof on that network 
and still have them all be valid.  So, boom, an outgoing attack from 
'thousands' of machines on our student network, aimed at some poor soul 
out there in the big murky internet, which our routers gladly sent on 
their merry way, since they were all valid IP addresses, routes, TTLs, ...
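
  (Just to illustrate the arithmetic, not what the trojan actually ran: 
a 255.255.240.0 mask is a /20, so roughly four thousand addresses on that 
network were valid spoofing candidates.)

/* Illustration only: deriving the spoofable range from a local
 * address and its netmask.  255.255.240.0 is a /20, which leaves
 * 4094 usable host addresses to forge. */
#include <stdio.h>
#include <arpa/inet.h>

int main(void)
{
    in_addr_t addr = inet_addr("131.202.160.212");  /* a local address */
    in_addr_t mask = inet_addr("255.255.240.0");    /* the /20 netmask */

    in_addr_t net   = ntohl(addr) & ntohl(mask);    /* network base    */
    in_addr_t bcast = net | ~ntohl(mask);           /* broadcast       */

    struct in_addr lo = { htonl(net + 1) };
    struct in_addr hi = { htonl(bcast - 1) };

    /* inet_ntoa() reuses a static buffer, so print one at a time */
    printf("first spoofable: %s\n", inet_ntoa(lo));
    printf("last spoofable:  %s\n", inet_ntoa(hi));
    printf("usable hosts:    %u\n", bcast - net - 1);
    return 0;
}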

  So, I think the high water mark would need to work both ways...  maybe 
if a count were added to each IP address that argus was tracking...

  So, if 131.202.160.212, a local address, is being attacked, increment a 
counter for that IP each time you create a new flow associated with it.  
Reset that counter each time you print a MAR record.  If, within the 
interval between MAR records, the IP accumulates more than xxx records, 
start folding all further info into a single flow record.  And maybe, in 
this record, keep a count of the number of unique remote IP addresses you 
are including in the aggregation.  Do this logic on both sides (remote 
and local), and I think argus would be stable at these high packet-count 
levels.
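
  Something like this, in rough C (the names and threshold are made up 
for illustration; these aren't real argus structures):

/* Per-local-address bookkeeping for the fallback idea above.
 * All names invented for illustration; not argus internals. */
#define FLOW_THRESHOLD 1000        /* the "xxx" above, per MAR interval */

struct addr_stats {
    unsigned int new_flows;        /* flows created this MAR interval   */
    unsigned int unique_remotes;   /* remotes folded into the aggregate */
    int          aggregating;      /* over threshold: one record only   */
};

/* Called for every new flow whose local side is this address.
 * (Deciding whether the remote peer is new, e.g. with a hash
 * table lookup, is elided here.) */
void note_new_flow(struct addr_stats *a, int remote_is_new)
{
    if (++a->new_flows > FLOW_THRESHOLD)
        a->aggregating = 1;        /* stop per-flow records, fold them  */
    if (a->aggregating && remote_is_new)
        a->unique_remotes++;       /* just count distinct peers instead */
}

/* Called each time a MAR record is printed. */
void on_mar_record(struct addr_stats *a)
{
    a->new_flows = 0;              /* reset the interval counter        */
    a->aggregating = 0;            /* give the address a fresh start    */
    a->unique_remotes = 0;
}

The same struct would hang off remote addresses as well, to cover both 
directions.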

Chris




>===== Original Message From Russell Fulton <r.fulton at auckland.ac.nz> =====
>On Thu, 21 Jun 2001 14:00:25 -0300 Chris Newton <newton at unb.ca> wrote:
>
>> 190,000 flows in 30 seconds.  Basically, every packet sent by these two
>> remote goofs became a flow to argus.  If this one 'issue' could be
>> resolved, I believe Argus could survive quite well on large links.
>
>The way NeTraMet (to give it its proper capitalization ;-) deals with
>this is to have a 'fallback rule set' and a settable 'high water
>mark'.  Once the number of flows passes the high water mark, NeTraMet
>changes to the fallback ruleset.  For our campus accounting this
>involves dumping the remote address completely, so that we have just one
>flow per local address.  Now NeTraMet is rather a different beast from
>argus and I am not sure whether this concept can be adapted, but it is
>worth a thought.
>
>[before you ask, there is also a 'low water mark' that sets the point
>where the meter flips back to the normal ruleset]
>
>The question is what information argus could discard to reduce the
>number of flows in these sorts of circumstances.  In your particular
>attack, aggregating away the destination addresses and ports would do
>the trick.
>
>Carter, is the flow aggregation working in the server now?  I know you
>planned to do it.  If so, the NeTraMet model would work without change:
>when the meter gets stressed, you swap to an alternative flow model.
>What would be really neat would be to be able to say "for source IP
>addresses with over X active flows, aggregate their destination peer
>and port addresses."
>
>Russell Fulton, Computer and Network Security Officer
>The University of Auckland,  New Zealand
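
  The high/low water mark flip Russell describes is simple hysteresis; 
something like this, with invented names and thresholds (nothing here is 
NeTraMet or argus source):

/* Hysteresis between the normal and fallback rulesets, as I read
 * Russell's description.  Thresholds and names are invented. */
#define HIGH_WATER 150000      /* active flows: switch to the fallback  */
#define LOW_WATER   50000      /* active flows: switch back to normal   */

static int fallback = 0;       /* 0 = normal ruleset, 1 = fallback      */

/* Call this whenever the active flow count changes. */
void check_water_marks(unsigned int active_flows)
{
    if (!fallback && active_flows > HIGH_WATER)
        fallback = 1;          /* e.g. start dropping the remote side   */
    else if (fallback && active_flows < LOW_WATER)
        fallback = 0;          /* load is back down; full detail again  */
}

The gap between the two marks keeps the meter from thrashing back and 
forth when the flow count hovers near a single threshold.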

_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/

Chris Newton, Systems Analyst
Computing Services, University of New Brunswick
newton at unb.ca 506-447-3212(voice) 506-453-3590(fax)

"The best way to have a good idea is to have a lot of ideas."
Linus Pauling (1901 - 1994) US chemist


