Argus Database.

Chris Keladis chris at cmc.optus.net.au
Sun Mar 13 09:42:05 EST 2005


Hi Peter,

Thinking about it some more, there are a few other ways to do this.

What may be more efficient (although less current) could be to load 
Argus data into the DB in chunks.

Say, running ragator to compress the flows, then grabbing the time 
window of data you're interested in, CSV-ifying (or TSV-ifying) it, and 
using something like MySQL's 'mysqlimport' on the Argus host to 
bulk-load the data into the DB.
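
A rough, untested sketch of the aggregation and CSV steps (the file 
names are placeholders, and the awk one-liner is just one way to turn 
ra's whitespace-separated output into comma-separated lines):

  ragator -r argus.raw -w flows.agg
  ra -r flows.agg -n | awk 'BEGIN { OFS = "," } { $1 = $1; print }' > flows.csv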

mysqlimport also supports network compression (assuming the server 
supports it), which may give a performance boost as well, given that the 
information is all text.
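
The load itself might then look something like this (the 'argus' 
database name is just an example; mysqlimport derives the table name 
from the file name, so flows.csv loads into a table called 'flows', and 
--compress turns on the client/server compression mentioned above):

  mysqlimport --local --compress --fields-terminated-by=',' argus flows.csv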

Loads could be done at whatever frequency is necessary; perhaps ra's 
'-t' option would come in handy to feed you the dataset for the time 
window you're interested in, to save extra processing costs.
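
For instance, something along these lines to pull out a single day 
before converting it (untested; check the ra man page for the exact 
'-t' range syntax in your version):

  ra -r flows.agg -n -t "2005/03/12.00:00:00-2005/03/13.00:00:00" | \
      awk 'BEGIN { OFS = "," } { $1 = $1; print }' > flows.csv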

And (as most have noted in this thread) if we are only taking the output 
of 'ra' and not the full Argus flow record, it would be possible to 
store quite a bit of data.

Just thinking out loud. Thoughts, comments?




Regards,

Chris.

Peter Van Epp wrote:

> 	I've come to the conclusion (which Russell suggested long ago :-)) that
> instead of fighting with memory exhaustion in perl scripts post-processing 
> argus output, it's time to let mysql do it for me. So far that's gotten as far
> as me actually installing mysql on one of my test machines and no further. It's
> my intent to start with the ra fields my scripts are currently using (which
> are a subset) and see what happens, but I expect progress to be slow. As noted,
> the start will be ra -> perl -> mysql, but assuming I find something that works,
> the correct answer would be a new ra client that writes directly to mysql. The
> performance issues can probably be most easily beaten by being able to split
> argus streams across multiple boxes (I don't currently have enough volume to 
> need to do that, but the scripts as they stand can do it). Then, assuming that
> you are doing once-a-day summarization as I am, you can combine the outputs
> of multiple merged streams again into a single output database.
> 
> Peter Van Epp / Operations and Technical Support 
> Simon Fraser University, Burnaby, B.C. Canada
> 
> On Sat, Mar 12, 2005 at 05:07:05PM +1100, Chris Keladis wrote:
> 
>>Hi all,
>>
>>I know this topic has come up before, but I was wondering how work was 
>>going on adding database support for Argus output?
>>
>>I've played around with raxml and managed to use a python script to 
>>create a MySQL schema from the XML DTD (although it is very inefficient, 
>>it's got the basic structure).
>>
>>I was thinking about performance with database output, and it might be 
>>best to use the same method Snort (IDS) uses to support high-speed 
>>monitoring with database output.
>>
>>Snort employs a high-speed output file format called unified output, 
>>which is read by a post-processor that, using checkpoints, writes the 
>>data into the RDBMS, leaving Snort free to handle the task of performing IDS.
>>
>>Perhaps a similar tool would be useful with Argus?
>>
>>Would appreciate your thoughts.
>>
>>
>>
>>
>>
>>Regards,
>>
>>Chris.



