a memory oddness (possibly a Mac oddness)
Carter Bullard
carter at qosient.com
Tue Feb 6 10:29:58 EST 2007
Hey Karl,
argus-3.0 records are larger, though they don't have to be, so we
can make trimming them a priority after we release the code.
Getting a sense of what we want to do in this area, like very
fast support for sorting, aggregating, merging, splitting, filtering,
searching, X-ing ..., whatever, will be a good effort in the coming
months.
Until then, if you want multiple programs reading the same set of
streams of argus data, radium() is the program of choice. argus()
is designed to support multiple output streams, but it's a strain,
as each record then needs to be copied. There is a configuration
option to keep argus from supporting more than one reader at a time,
to avoid the additional processing burden of supporting them. But
failures are not the desired outcome, so we need to work on solving
that problem.
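In rough strokes, the fan-out looks something like this (a minimal
sketch; the interface, port numbers and options here are assumptions,
so check the argus(), radium() and ra() man pages):

   # argus exports its records on the standard argus port
   argus -i en0 -P 561 -d

   # radium attaches to argus once and redistributes the stream
   radium -S localhost:561 -P 562 -d

   # any number of readers can now attach to radium instead of argus
   ra -S localhost:562
   racluster -S localhost:562 -w /tmp/clustered.out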
Can't elaborate right now (in a meeting), but let's keep on this thread
to get it fixed.
Carter
Karl Tatgenhorst wrote:
> I forgot that we had run into this very same problem. I found that
>racluster was very piggish in its use of memory. We had two separate
>streams that we were merging into one, so we would merge border1.file
>with border2.file. racluster was never able to do it for us.
>
> I will add one other problem we've encountered with argus. On our
>collector (the machine running ra, not argus), if we start a second
>program reading from the argus socket (rahist, ra, etc.), the main ra
>process will eventually (within a short time) die. I find that very
>frustrating.
>
>Karl
>
>On Mon, 2007-02-05 at 20:44 -0500, Carter Bullard wrote:
>
>
>>Hmmm, racluster will aggregate across files, and so you may just be
>>running out of memory. There are two things to consider. Try running
>>racluster() with a conf file that sets an idle time; this will flush
>>out the very short tcp flows and scanners. The other thing to do is
>>have racluster() process each file individually, rather than
>>aggregating across all the files, by using the "-M ind" option,
>>"process files independently". That will help, but it will not merge
>>data that crosses files.
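>>In rough strokes, that looks something like this (the conf syntax is
>>documented in racluster.5; the model and timer values here are just
>>assumptions to illustrate):
>>
>>   # racluster.conf: flush tcp flows after 60 seconds of idle time
>>   filter="tcp" model="saddr daddr proto sport dport" status=120 idle=60
>>
>>   racluster -f racluster.conf -r com_argus.*.gz -w clustered.out
>>
>>   # or process each file on its own, without cross-file merging
>>   racluster -M ind -r com_argus.*.gz -w clustered.out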
>>
>>Carter
>>
>>
>>Peter Van Epp wrote:
>>
>>
>>
>>> I'm trying to run racluster against an archive. I started with a
>>>script that does 24 hours (one hour at a time) and discovered that my 2 gig
>>>Mac ran out of memory a couple of files in. Rebooting (which seems to be
>>>required to free VM) and running them individually seems to still run out
>>>of memory (perhaps indicating a memory leak) on about the second file:
>>>
>>>test4:/var/log/argus vanepp$ /usr/local/bin/racluster -r /archive/argus3/com_argus.archive/2007/01/30/com_argus.2007.01.30.09.00.00.0.gz -w /archive/argus3c/com_argus.archive/2007/01/30/com_argus.2007.01.30.09.00.00.0
>>>test4:/var/log/argus vanepp$ /usr/local/bin/racluster -r /archive/argus3/com_argus.archive/2007/01/30/com_argus.2007.01.30.10.00.01.0.gz -w /archive/argus3c/com_argus.archive/2007/01/30/com_argus.2007.01.30.10.00.01.0
>>>racluster(364) malloc: *** vm_allocate(size=1069056) failed (error code=3)
>>>racluster(364) malloc: *** error: can't allocate region
>>>racluster(364) malloc: *** set a breakpoint in szone_error to debug
>>>Bus error
>>>
>>> I just started moving the source archive from the Mac to one of the
>>>IBMs with 4 gigs of ram to see if the same thing will happen there. I'd
>>>think that the racluster task ending should release all the memory it's
>>>holding, which doesn't seem to be happening (but that may be a Mac issue
>>>rather than argus).
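>>>
>>>For what it's worth, the per-file script is nothing fancy; a sketch of
>>>it (one racluster process per archive file, on the assumption that all
>>>memory goes back to the OS when each process exits):
>>>
>>>#!/bin/sh
>>>for f in /archive/argus3/com_argus.archive/2007/01/30/*.gz
>>>do
>>>    out=`basename $f .gz`
>>>    /usr/local/bin/racluster -r $f \
>>>        -w /archive/argus3c/com_argus.archive/2007/01/30/$out
>>>done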
>>>
>>>Peter Van Epp / Operations and Technical Support
>>>Simon Fraser University, Burnaby, B.C. Canada
>>>
>>>
>>>
>>>
>>>
>
>
>
>