[ARGUS] Best Hardware
Thorbjörn Axelsson
thx+argus at medic.chalmers.se
Tue Oct 12 11:55:31 EDT 2004
Here are the dumps from Defcon mentioned below (Black Hat):
http://www.shmoo.com/cctf/
/Thorbjörn
On 2004-10-12, at 17.09, Peter Van Epp wrote:
> I don't know of anything that will anonymize tcpdump traces, which
> makes for a problem. If you are only looking for test data, then the
> traces from the various "capture the flag" contests at the Black Hat
> conferences should do what you want. The tcpdumps from the test
> network are online somewhere; a quick check didn't turn up the URL,
> but googling on "blackhat conference" should find it. They are on a
> site with an odd name, something like shmoo or such like.
> Which brings up the question: how fast are your links? Tcpreplay (in
> the current version) only seems to manage 85 megabits or so (unless
> Aaron has gotten to tweaking in one of the later releases since I
> last tried). The original version would saturate a 100 megabit link
> for files that fit in the buffer cache, but is limited by disk/buffer
> cache speed beyond that. As well, at about 200 megabits or so it is
> said that libpcap becomes a limiting factor (I haven't verified that,
> not having anything that goes that fast yet, although my latest gig
> sensor's network has achieved over 900 megabits on netperf, just not
> so far while argus has been watching). If you are going faster than
> that, moving to gargoyle is probably a better bet, since the capture
> code in gargoyle no longer uses libpcap (and for really high speeds
> you need DAG cards, which process on the card).
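>
> For reference, a replay test along these lines might look something
> like this (a sketch assuming a tcpreplay 3.x-style command line with
> --intf1/--topspeed/--loop/--mbps; older releases used different
> flags, and the interface and file names here are made up):
>
>     # replay a CTF dump at top speed out eth1, looping 5 times
>     tcpreplay --intf1=eth1 --topspeed --loop=5 ctf_dump.pcap
>
>     # or cap the rate (here 85 Mbps) to find where the sensor
>     # starts dropping packets
>     tcpreplay --intf1=eth1 --mbps=85 ctf_dump.pcap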
>
> Peter Van Epp / Operations and Technical Support
> Simon Fraser University, Burnaby, B.C. Canada
>
> On Mon, Oct 11, 2004 at 11:40:25PM -0400, slif at bellsouth.net wrote:
>> Does anyone have sanitized tcpdumps that they are willing to share?
>> I'd like to run some load/time trials through tcpreplay to see if my
>> sensors can bear the traffic without loss.
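>>
>> One rough way to check the "without loss" part on the receiving
>> sensor is tcpdump's exit-time statistics (a proxy only, not
>> argus-specific; the interface name is a placeholder):
>>
>>     # capture on the sensor for the duration of the replay; at exit
>>     # tcpdump prints "packets received by filter" and
>>     # "packets dropped by kernel"
>>     tcpdump -i eth1 -n -w /dev/null
>>
>> A nonzero "dropped by kernel" count means the host can't keep up at
>> that offered load.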
>> TIA
>> -Mike Slifcak
>>
>>>
>>> From: Peter Van Epp <vanepp at sfu.ca>
>>> Date: 2004/10/11 Mon PM 10:37:15 EDT
>>> To: argus-info at lists.andrew.cmu.edu
>>> Subject: Re: [ARGUS] Best Hardware
>>>
>>> Sounds somewhat similar to where I am. Our commodity link (one of
>>> the 4 argus instances now running) does around 60 to 80 megabytes
>>> compressed per hour. Each hour the logs get post processed (on a box
>>> separate from both the sensor and archiving boxes), and at 6 AM
>>> (other than this morning :-)) they get summarized for the last 24
>>> hours. The main problem for me is memory (currently 750 megs, which
>>> maxes out the motherboard on a P3 600), as the post processing
>>> sometimes starts swapping and gets very slow. While I have a pair of
>>> dual Athlon 1.4 GHz SMP boxes, both are currently serving as gigabit
>>> sensors, and the archive and post processing hosts are both P3 600s.
>>> Likely the most useful thing I can do is take an ugly hour (such as
>>> last night at 22:00), run it through all the machines I have here,
>>> and time it (22:00 seems to be port scan time our way :-)):
>>>
>>> -rw-r--r--  1 argus  argus  62153949 Oct 10 22:02 com_argus.2004.10.10.21.00.00.0.gz
>>> -rw-r--r--  1 argus  argus  87990295 Oct 10 23:03 com_argus.2004.10.10.22.00.00.0.gz
>>> -rw-r--r--  1 argus  argus  67225107 Oct 11 00:02 com_argus.2004.10.10.23.00.00.0.gz
>>> A quick test indicates it depends more on disk speed than CPU :-),
>>> so a RAID controller may be more of a factor than CPU, since
>>> performance is about the same on each machine (by wall clock times,
>>> anyway).
>>>
>>> P3 600 MHz, 750 megs of RAM:
>>>
>>> test6% time ra -r com_argus.2004.10.10.22.00.00.0.gz -nn >/dev/null
>>> 150.437u 48.115s 3:19.11 99.7% 222+1302k 0+0io 0pf+0w
>>>
>>> test6% time ra -r com_argus.2004.10.10.22.00.00.0.gz -nn > /usr/local/t
>>> 153.730u 67.459s 3:42.35 99.4% 224+1325k 2+1901io 0pf+0w
>>> test6% ls -l /usr/local/t
>>> -rw-r--r-- 1 vanepp wheel 249200362 Oct 11 19:19 /usr/local/t
>>>
>>> test6% time ra -r com_argus.2004.10.10.22.00.00.0.gz -nn host 142.58.200.82 > /dev/null
>>> 40.572u 28.910s 1:09.62 99.7% 208+1209k 0+0io 0pf+0w
>>>
>>> test6% time ra -r com_argus.2004.10.10.22.00.00.0.gz -nn host 142.58.200.82 > ./t
>>> 41.139u 29.120s 1:10.43 99.7% 208+1227k 0+55io 0pf+0w
>>>
>>> SMP 1.4 GHz Athlon machine, 500 megs of RAM (both machines have
>>> 7200 RPM IDE disks):
>>>
>>> %time ra -r com_argus.2004.10.10.22.00.00.0.gz -nn > /dev/null
>>> 34.810u 47.074s 1:21.81 100.0% 213+1304k 1+0io 0pf+0w
>>>
>>> %time ra -r com_argus.2004.10.10.22.00.00.0.gz -nn > /usr/local/t
>>> 36.880u 56.285s 1:33.12 100.0% 214+1324k 0+1901io 0pf+0w
>>> %ls -l /usr/local/t
>>> -rw-r--r-- 1 vanepp wheel 249200362 Oct 11 19:15 /usr/local/t
>>>
>>> %time ra -r com_argus.2004.10.10.22.00.00.0.gz -nn host 142.58.200.82 > /dev/null
>>> 5.944u 28.070s 0:33.90 100.3% 205+1242k 0+0io 0pf+0w
>>>
>>> %time ra -r com_argus.2004.10.10.22.00.00.0.gz -nn host 142.58.200.82 > ./t
>>> 6.039u 28.347s 0:34.26 100.3% 205+1262k 0+54io 0pf+0w
>>>
>>> An older Mac G4 (533 MHz?), 1.5 gigs of RAM, IDE disk:
>>>
>>> [test4:~] vanepp% time /usr/local/bin/ra -r com_argus.2004.10.10.22.00.00.0.gz -nn > /dev/null
>>> 156.000u 150.690s 5:46.06 88.6% 0+0k 0+2io 0pf+0w
>>>
>>> Which is a fair bit slower at post processing than the PCs (a new
>>> G5 may do better). This is similar to previous times I've tried
>>> this: a 600 MHz PC with less memory seems to do better on post
>>> processing (capture may be a different matter, though).
>>>
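>>> To repeat this kind of timing across all the hourly files (and
>>> across machines), a small loop over the archive does it. A minimal
>>> sh sketch, assuming the same ra and file naming as above:
>>>
>>>     #!/bin/sh
>>>     # time an unfiltered ra pass over each hourly archive file;
>>>     # run the same script on each machine and compare wall times
>>>     for f in com_argus.2004.10.10.*.gz
>>>     do
>>>         echo "== $f =="
>>>         time ra -r "$f" -nn > /dev/null
>>>     done
>>>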
>>> A few runs of ps on the SMP machine indicate that ra is probably
>>> only using one CPU (which was at 97%), arguing against SMP as well
>>> (this is on FreeBSD 4.10):
>>>
>>> USER     PID %CPU %MEM  VSZ  RSS  TT  STAT STARTED     TIME COMMAND
>>> vanepp 24462 87.7  0.3 2084 1492  p0  R+   7:19PM   0:07.04 ra -r com_argus.2004.10.10.22.00.00.0.gz -nn
>>> vanepp 24465  6.7  0.0  608  248  p0  S+   7:19PM   0:00.63 gzip -dc com_argus.2004.10.10.22.00.00.0.gz
>>> vanepp 24448  0.0  0.2 1320  956  p1  Ss   7:18PM   0:00.01 -csh (csh)
>>>
>>> USER     PID %CPU %MEM  VSZ  RSS  TT  STAT STARTED     TIME COMMAND
>>> vanepp 24462 91.9  0.3 2084 1492  p0  R+   7:19PM   0:37.26 ra -r com_argus.2004.10.10.22.00.00.0.gz -nn
>>> vanepp 24465  5.9  0.0  608  248  p0  S+   7:19PM   0:02.81 gzip -dc com_argus.2004.10.10.22.00.00.0.gz
>>> vanepp 24448  0.0  0.2 1320  956  p1  Ss   7:18PM   0:00.01 -csh (csh)
>>>
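>>> Since one ra run only occupies a single CPU (plus a little for the
>>> gzip child, as the ps output shows), the easy way to get value from
>>> an SMP box is probably to run independent queries side by side
>>> rather than hoping a single ra spreads across CPUs. A sketch (the
>>> second filter and the output names are just examples):
>>>
>>>     #!/bin/sh
>>>     # run two independent ra queries concurrently, one per CPU
>>>     ra -r com_argus.2004.10.10.22.00.00.0.gz -nn host 142.58.200.82 > host.out &
>>>     ra -r com_argus.2004.10.10.22.00.00.0.gz -nn port 22 > port22.out &
>>>     wait    # block until both background queries finish
>>>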
>>> Peter Van Epp / Operations and Technical Support
>>> Simon Fraser University, Burnaby, B.C. Canada
>>>
>>>
>>> On Tue, Oct 12, 2004 at 08:57:02AM +1000, Andrew Hall wrote:
>>>>
>>>> I am looking for the best hardware for the following:
>>>>
>>>> - a dedicated box for running multiple (>100) different ra queries
>>>>   over 1 GB compressed argus files each day
>>>>
>>>> - This host will not be running argus captures itself.
>>>>
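>>>> If the gzip step turns out to dominate per-query time, one option
>>>> is to pay the decompression cost once and run all the queries
>>>> against the uncompressed copy. A sketch (the file names, and a
>>>> queries.txt holding one ra filter expression per line, are
>>>> hypothetical):
>>>>
>>>>     #!/bin/sh
>>>>     # decompress once instead of once per query
>>>>     gzip -dc com_argus.gz > /tmp/argus.raw
>>>>
>>>>     # run each filter in queries.txt against the uncompressed file
>>>>     n=0
>>>>     while read filter
>>>>     do
>>>>         n=`expr $n + 1`
>>>>         ra -r /tmp/argus.raw -nn $filter > query.$n.out
>>>>     done < queries.txt
>>>>
>>>>     rm /tmp/argus.raw
>>>>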
>>>> Andrew
>>>>
>>>> On 12/10/04 8:36 AM, "Steve McInerney" <spm at healthinsite.gov.au>
>>>> wrote:
>>>>
>>>>> "Best" is a somewhat loaded term. :-)
>>>>> Best for what? How much data? How much money? How fast are the
>>>>> results
>>>>> needed etc etc etc
>>>>>
>>>>> From previous comments (Carter's and Peter Van Epp's?), I
>>>>> understand that Mac platforms make excellent argus capture boxen;
>>>>> something about the endianness being the right way around, from
>>>>> memory. That discussion should be in the archives from the last
>>>>> 12 months or so.
>>>>>
>>>>>
>>>>> I stopped using argus on my Solaris boxes a while ago; it wasn't
>>>>> as stable as on x86/Linux, and those boxes were waaaay overworked
>>>>> as it was - which may have influenced the first problem too.
>>>>>
>>>>>
>>>>> In our case, the work is currently done on a 1 GHz P3 - but then
>>>>> we don't have a lot of data, and how long it takes is largely
>>>>> irrelevant.
>>>>>
>>>>>
>>>>> Sorry to be so vague...
>>>>>
>>>>>
>>>>> - Steve
>>>>>
>>>>>
>>>>> Andrew Hall wrote:
>>>>>> What hardware do people recommend for the best crunching of
>>>>>> argus files with the various argus clients, in particular ra and
>>>>>> racount?
>>>>>>
>>>>>> Intel, AMD, Sparc ??
>>>>>>
>>>>>> Do these clients make use of an SMP machine (assuming the box
>>>>>> is dedicated to log analysis)?
>>>>>>
>>>>>> Thanks,
>>>>>>
>>>>>> Andrew
>>>>>>
>>>>>>
>>>