Huge argus files and racluster

Marco listaddr at gmail.com
Tue Feb 7 09:45:01 EST 2012


2012/2/7 Carter Bullard <carter at qosient.com>:
> Hey Marco,
> No limit in size, other than that by default racluster.1 will consume all
> your RAM and start swapping, which looks to be the issue.  You are tracking
> too many flows for the memory available on your machine.
>
> You should take the huge files that you have and split them into more
> manageable files using rasplit.1.
>
>    rasplit -r 1_40.argus 41_50.argus -M time 5m -w split/%Y/%m/%d/argus.%Y.%m.%d.%H.%M.%S
>
> This will split the data into daily directories, with a file for each 5
> minutes in the day.  Then have racluster.1 process these individual files.
> The "-M ind" is really important; without it, you will not be doing anything
> different.
>
>    racluster -R split -M ind replace
>
> This will replace the files you have in the daily archive with aggregated
> files.  You can then merge the files back into a single file if you like:
>
>    ra -R split -w all.the.argus.data

Thanks. But what about long-lived flows that last more than 5 minutes?
Will they be merged, or will they appear once per 5-minute file in the
result? The whole point of clustering is to have a single entry for each
of them, AFAIK.


