argus-clients informal survey

Chris Newton newton at unb.ca
Thu Jun 21 18:05:57 EDT 2001


Hehe, yea, reporting problems certainly helps get them solved :)  My problem 
was, I wasn't sure how/what to report.  When I report a problem, I try to have 
copies of the errors, snippets from log files, and a first-hand account of 
what occurred.  In the cases I'm talking about, I didn't manage to get that 
information, and so I'd wait for the next occurrence.  I don't like sending 
people on a wild goose chase, unless I'm darn sure I saw the goose! :)

  Yup, I did change that record writer value you mentioned before, up to 
1024.  That has helped, for sure.  It seems to have lived through the last few 
occurrences.  As it happens, as soon as I pressed 'send' on my last email, we 
had another 'incident' on the network.  In this case, argus grew to 572 MB in 
size, but managed to cope.  I'm not sure of the bit rate, as I haven't had 
time to look at those logs yet.  I'll glance in their direction tonight, and 
see what info I can conjure from them.

  What became obvious, though, is that the problem stems from Argus deciding 
each of these tiny little packets is a unique flow.  Because of this, the 
attacker will _always_ be able to outgun any argus server... simply because 
all his machine needs to do is generate a single, simple packet, which in turn 
causes argus a lot of work (ie: argus will decide this single packet is a 
unique flow and generate a lot of 'flow' information about it).  Now, when he 
does 200,000 of them a second....   In terms of solutions, I was thinking that 
since argus is doing this counting anyway, it might have some 'high water 
mark': once a certain number of flows in memory share a similar target or 
source, it starts aggregating them, instead of considering them unique.
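
  Just to make the idea concrete, here's a rough sketch of what I mean, in 
Python (all the names and the threshold are made up by me -- argus has nothing 
like this today, this is only the shape of the aggregation I'm suggesting):

```python
from collections import defaultdict

HIGH_WATER_MARK = 1000  # hypothetical per-source limit on tracked flows

flows = {}                      # (src, dst, sport, dport, proto) -> packet count
per_source = defaultdict(int)   # src -> number of distinct flows tracked for it
aggregated = defaultdict(int)   # src -> packets folded into one aggregate record

def record_packet(src, dst, sport, dport, proto):
    """Track each packet as its own flow until the source crosses the
    high-water mark, then fold further packets into a single aggregate."""
    key = (src, dst, sport, dport, proto)
    if src in aggregated or (key not in flows
                             and per_source[src] >= HIGH_WATER_MARK):
        # Source is too noisy: stop creating per-flow state for it.
        aggregated[src] += 1
        return
    if key not in flows:
        per_source[src] += 1
    flows[key] = flows.get(key, 0) + 1

# A flood of 5000 single-packet "flows" from one attacking host:
for port in range(5000):
    record_packet("10.0.0.1", "10.0.0.2", port, 80, "tcp")

# Only HIGH_WATER_MARK per-flow records exist; the other 4000 packets
# collapsed into one aggregate counter instead of 4000 flow records.
```

  The point being that the attacker's cost stays one packet per packet, but 
argus's cost stops growing once the mark is hit.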

Chris


>===== Original Message From <carter at qosient.com> =====
>Hey Chris,
>[snip]
>> the confused stream output that argus gets into (when it
>> actually lives
>> through the attack, lots of times it crashes , not being able
>> to dump out the
>> flow records (1 record per packet basically) fast enough).
>
>It is very difficult to get it right if there are problems
>that are not being reported.  Is argus actually core dumping,
>or is it just exiting?
>
>It would also be nice if you were to test whether writing
>the records to the local disk would give you better
>performance.  I'm not saying that it will solve the problems,
>but it may help to figure out what the problem is.
>Did you change the queue sizes to try to solve the problem?
>Do you have enough memory on the machine?
>
>
>Carter
>
>Carter Bullard
>QoSient, LLC
>300 E. 56th Street, Suite 18K
>New York, New York  10022
>
>carter at qosient.com
>Phone +1 212 588-9133
>Fax   +1 212 588-9134
>http://qosient.com

_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/

Chris Newton, Systems Analyst
Computing Services, University of New Brunswick
newton at unb.ca 506-447-3212(voice) 506-453-3590(fax)

"The best way to have a good idea is to have a lot of ideas."
Linus Pauling (1901 - 1994) US chemist
