argus-2.0.0Q, RA clients that buffer and dump data.

Chris Newton newton at unb.ca
Tue Jan 16 14:05:51 EST 2001


I just gave it a quick whirl, and the only problem I see is that:

 ra -w file -S servername

  dumps the data out to 'file' in argus format.  I had expected the argus 
daemon to be the one writing that format, and expected ra to dump to 'file' 
in the same format it writes to stdout when no -w is given.

  So, if I do:

 ra -S servername > file

  and then move that file every XX seconds, will ra still handle this 
properly?
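
  Something like the following minimal loop is what I have in mind (just a 
sketch; 'process-flows' is a placeholder for whatever does the real work, 
and whether ra keeps writing sensibly across the mv is exactly what I'm 
asking):

 ra -S servername > file &

 # rotate and process every 10 seconds (the 'XX' above)
 while true; do
     sleep 10
     mv file file.old
     ./process-flows file.old    # placeholder processing step
 done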

Chris

>===== Original Message From <carter at qosient.com> =====
>Hey Chris,
>   No problem, it goes in the FAQ so it was worth
>the effort.
>
>   If you are set on the 10-second thing, be sure
>to run argus with '-S 2', '-S 3' or '-S 5' to see
>what works best for you.
>
>Hope this helps.  Send mail to the list when you
>get it the way you like.
>
>Carter
>
>Carter Bullard
>QoSient, LLC
>300 E. 56th Street, Suite 18K
>New York, New York  10022
>
>carter at qosient.com
>Phone +1 212 813-9426
>Fax   +1 212 813-9426
>
>> -----Original Message-----
>> From: Chris Newton [mailto:newton at unb.ca]
>> Sent: Tuesday, January 16, 2001 12:56 PM
>> To: Argus (E-mail); Carter Bullard
>> Subject: RE: argus-2.0.0Q, RA clients that buffer and dump data.
>>
>>
>> Thanks for putting a bunch of thought into this, and for being
>> willing to make some changes.  But before you go to any work, I
>> think I'll try your first suggestion, as I believe it may work well
>> for me.  I envision:
>>
>>   argus -w detailedfile &
>>
>> then:
>>
>>   ra -S argushost -w logfile
>>
>>
>> then,
>>
>>   have a perl script that every 10 seconds mv's the 'logfile' to
>> logfile.old and processes logfile.old.  I know that some of the
>> flows listed in logfile.old will have started in the last 10-second
>> period, or even before, but I think I can deal with that.
>>
>> Chris
>>
>> >===== Original Message From <carter at qosient.com> =====
>> >Hey Chris,
>> >You will want to use ragator(), as this is one of the things it
>> >does very well.  But getting a 10-second interval stat will require
>> >that you think about some things you may not have considered.  With
>> >the existing 2.0 code and a few command line options, ragator may
>> >be able to provide you with believable 120-second stats or better.
>> >I've included the configuration file needed to do this below.
>> >
>> >Problems with existing software and your application.
>> >
>> >Argus outputs microflow audit records based on state and a time
>> >interval.  The -S option specifies what that time interval will be;
>> >the default is 60 seconds.  What this does is guarantee that the
>> >maximum time duration of any argus audit record is 60 seconds.
>> >With data at that granularity you can't derive a 10-second event
>> >counter; the best you could do would be a 180-second event counter
>> >(3 * period).  In order to get 10-second link stats, you will need
>> >to run Argus with -S 2 or -S 3, i.e., print status for flows every
>> >2 or 3 seconds while they are alive.  Depending on your traffic
>> >loads, this may or may not be a lot of records.
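>> >
>> >(Putting numbers on that 3 * period rule: with the default -S 60
>> >the best counter you can derive is roughly 3 * 60 = 180 seconds,
>> >while -S 2 or -S 3 gives roughly 3 * 2 = 6 or 3 * 3 = 9 seconds,
>> >which fits inside a 10-second bin.)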
>> >
>> >OK, with that out of the way, another thing to consider is that
>> >Argus is pretty lazy about when it will print out its records.
>> >This is so it has maximum cycles for packet processing rather than
>> >data output.  Argus can be easily tuned to be more timely in
>> >reporting audit events, but without that tuning Argus could take as
>> >long as 30-120 seconds to print out a particular record, depending
>> >on the protocol and when the last packet was seen.
>> >
>> >So Argus presents an interesting time map for its data events.
>> >I'll try to draw a graph.  The Ax are Argus records in output
>> >order.  The bars are the times that the data covers.  S is the time
>> >interval specified by the -S option; we'll say it's 5 seconds.  The
>> >A's on the X axis are the times when the A records are actually
>> >reported.
>> >
>> >
>> >
>> >A1 +      +---------+
>> >A2 +                    +---+
>> >A3 +                                      ++
>> >A4 +  +---+
>> >A5 +                             +----+
>> >   |
>> >   +----+----+----+----+----+----+----+----+----+----+
>> >        5    10   15   20   25   30   35   40   45   50
>> >                        secs               A A A A A
>> >                                           1 2 3 4 5
>> >
>> >So, several things you'll need to do and one thing I will need to
>> >do.  Get your Argus status interval to about 1/3 of your desired
>> >stats interval, and if it's really tight, tune argus to be more
>> >aggressive in processing its internal queues.
>> >
>> >I'll need to think about whether there is anything I can do to
>> >enable this type of application, say add a "-B 120" option to
>> >ragator() to hold and time-sort records for 120 seconds before it
>> >starts aggregating them.
>> >
>> >Test this out and see if it does what you want.
>> >
>> >   ragator -S remotehost -f flowmodel.conf
>> >
>> >Where this is the contents of flowmodel.conf:
>> >#
>> ># label  id   SrcCIDRAddr  DstCIDRAddr  Proto  SrcPort  DstPort  ModelList  Duration
>> >Flow     106       *            *         *       *        *       100        120
>> >
>> ># label  id   SrcAddrMask  DstAddrMask  Proto  SrcPort  DstPort
>> >Model    100    0.0.0.0      0.0.0.0      no      no       no
>> >
>> >
>> >If you want to go for 10 second stats, run
>> >   argus -S 2 ........
>> >
>> >and change the 120 in the above file to 10.
>> >
>> >If you want to do the same thing but count based on IP protocol,
>> >put a "yes" in the proto field of Model 100.  Anyway, read the
>> >./examples/fmodel.conf file for suggestions on configuring ragator().
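>> >
>> >For the 10-second case, the edited flowmodel.conf would look roughly
>> >like this (same fields as above, with the Duration changed to 10 and
>> >the proto field of Model 100 set to yes for per-protocol counts):
>> >
>> ># label  id   SrcCIDRAddr  DstCIDRAddr  Proto  SrcPort  DstPort  ModelList  Duration
>> >Flow     106       *            *         *       *        *       100         10
>> >
>> ># label  id   SrcAddrMask  DstAddrMask  Proto  SrcPort  DstPort
>> >Model    100    0.0.0.0      0.0.0.0     yes      no       no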
>> >
>> >Hope this helps.
>> >
>> >Carter
>> >
>> >Carter Bullard
>> >QoSient, LLC
>> >300 E. 56th Street, Suite 18K
>> >New York, New York  10022
>> >
>> >carter at qosient.com
>> >Phone +1 212 813-9426
>> >Fax   +1 212 813-9426
>> >
>> >
>> >> -----Original Message-----
>> >>
>> >>   The idea is to 'right now' (within 10 seconds of now) generate
>> >> byte counts and packet counts for the link, i.e., quasi-realtime.
>> >> I don't want to process an hour's or day's worth of flow data at
>> >> the end of the hour/day.  So the idea is to receive a burst that
>> >> represents all the IN/OUT traffic that happened in that 10
>> >> seconds.  I'll then use something like MRTG to graph it, or some
>> >> other tool (rrdtool, gdchart, or others... haven't decided what
>> >> it will be yet).  I know I could generate byte and packet counts
>> >> in easier ways, but I want the flow logs around to look at later
>> >> if I see problems.
>>
>> _/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/
>>
>> Chris Newton, Systems Analyst
>> Computing Services, University of New Brunswick
>> newton at unb.ca 506-447-3212(voice) 506-453-3590(fax)
>>
>>

_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/

Chris Newton, Systems Analyst
Computing Services, University of New Brunswick
newton at unb.ca 506-447-3212(voice) 506-453-3590(fax)


