Argus and rasqlinsert problems
Leif Tishendorf
ltishend at gmail.com
Tue Apr 19 18:35:08 EDT 2011
> You could also archive argus records to disk (although doing that fast
> enough is neither easy nor cheap :-))
We've actually been doing that without loss or issue for months. We were
wanting to switch over to DB logging for speed and ease of dealing with
the data.
> exist although I have been thinking about it as the way argus needs to go to
> be able to deal with 40 and 100 gig links :-)).
Our traffic is far below 40Gig. We have a 10G aggregate, but probably
only see 5-6 at peak currently.
-Leif
On 04/19/2011 02:24 PM, Peter Van Epp wrote:
> On Tue, Apr 19, 2011 at 12:30:39PM -0700, Leif Tishendorf wrote:
>> Carter,
>>
>> I think this may be a pure volume problem. Our normal setup has 3
>> instances of argus running on load balanced Dag card channels. I
>> cut out 2 instances so I'm just running the one and, while it hasn't
>> been running that long, I'm not experiencing the instant
>> rasqlinsert/radium stop responding issue. Now I'm just not sure how
>> to work around that issue.
>>
>> -Leif
>>
> It's easy to describe your options; it's just hard to actually implement
> any of them :-). Basically you either need to speed up the mysql box so
> it can keep up with the traffic volume (I'd guess this is going to be hard
> although I have no experience with mysql) or spread the load across multiple
> boxes until the chain can keep up with traffic. Argus can help with this to
> some extent because it will recombine separated flows (such as capturing and
> archiving the transmit and receive sides of an fdx flow and then recombining the
> two streams in to a single argus file with the complete flow later). I suspect
> in the database case, you may need to use filters to split traffic (probably
> based on local address as that's what you likely have the least spread in)
> across 2 or more database machines till the load on the database machine is
> low enough for them to keep up. Unfortunately you now need to run queries
> against more than one database to get all the argus records you want, but at
> least you should be able to do so. It should be obvious that a bad traffic
> distribution such that all your traffic concentrates on a single mysql box
> can still cause an overload and the only way to cure that would be with a
> dynamic filter set based on traffic (which doesn't as far as I know currently
> exist although I have been thinking about it as the way argus needs to go to
> be able to deal with 40 and 100 gig links :-)).
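The static split across database machines that Peter describes might look roughly like this; a sketch only, assuming the usual ra* client conventions (`-S` for the upstream radium, `-w mysql://...` for the target table, and a trailing argus filter expression after a bare `-`). The host names, database names, and CIDR ranges are all placeholders, and exact option spellings vary by argus-clients version:

```shell
# Feed each half of the local address space to a different mysql box.
# Everything here (hosts, db/table names, CIDR blocks) is illustrative.

# Box 1: flows whose local side is in the first half of our space
rasqlinsert -S radium.example.org:561 \
    -w mysql://argus@db1.example.org/argusdata/flows \
    - src net 10.0.0.0/9 or dst net 10.0.0.0/9 &

# Box 2: flows whose local side is in the second half
rasqlinsert -S radium.example.org:561 \
    -w mysql://argus@db2.example.org/argusdata/flows \
    - src net 10.128.0.0/9 or dst net 10.128.0.0/9 &
```

As noted above, the cost is that any query now has to be run against both databases and the results merged afterward.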
> You could also archive argus records to disk (although doing that fast
> enough is neither easy nor cheap :-)) and then feed the database from the
> archives later. Obviously this only really works if the traffic has peaks
> (you are essentially caching) since if the total volume is greater than mysql
> can handle you will fall behind and lose data. As well it adds latency to the
> collection which (assuming you are trying to do detection real time) may not
> be acceptable.
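The cache-and-replay approach can be sketched the same way; again a rough illustration with placeholder hosts and paths, assuming rasplit's usual time-based `-w` strftime rotation and rasqlinsert's ability to read archived argus files with `-r`:

```shell
# Collect to disk at line rate; rasplit rolls the archive into
# time-based files. Rotation interval and paths are illustrative.
rasplit -S radium.example.org:561 -M time 5m \
    -w /argus/archive/%Y/%m/%d/argus.%H.%M.%S &

# Later, replay each completed archive file into mysql off-peak,
# when the database has headroom to absorb it.
for f in /argus/archive/2011/04/19/argus.*; do
    rasqlinsert -r "$f" \
        -w mysql://argus@db1.example.org/argusdata/flows
done
```

This only buys you anything if traffic is bursty, as Peter says: the archive smooths peaks, but if sustained volume exceeds what mysql can insert you fall behind regardless.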
> Another option (which Q1 Labs did in Qradar) is to switch to a higher
> performance database engine to effectively increase the performance of the
> database. I don't know of any open source projects in this area but there may
> well be. It may be worthwhile asking about the status of the CMU eddy (sp?)
> project which was trying to put argus records in to a database. I haven't
> heard anything about it in some time, so it may have stalled. Hope this helps
> some and good luck (and good funding, as I suspect you will need both :-)).
>
> Peter Van Epp
--
--Leif