FlowScan w/argus?

Carter Bullard carter at qosient.com
Wed Jan 9 16:48:48 EST 2002


Hey Dave, 
Ok, let's get terminology down so we can communicate effectively.
Argus generates argus records; the ra* programs collect, aggregate,
process, and manage streams, files, and archives of argus data.

You want to work with a ra* (read argus) style program.

Why don't we just write an argusfile -> flowfile converter?
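
Schematically it's just a read-translate-write loop over the records.
A minimal sketch of the shape of it is below; the two structs and the
field names are simplified placeholders, not the real argus or cflow
record layouts, and a real converter would use the ra* parsing
routines on the input side rather than a raw fread():

    /* Sketch of an argusfile -> flowfile converter: read each record,
     * map the fields, write the result.  The structs are simplified
     * placeholders, NOT the real argus or cflow layouts. */
    #include <stdio.h>
    #include <stdint.h>

    struct in_rec {                 /* placeholder "argus record" */
        uint32_t src, dst, pkts, bytes;
        uint16_t sport, dport;
        uint8_t  proto;
    };

    struct out_rec {                /* placeholder "flow record" */
        uint32_t srcaddr, dstaddr, dPkts, dOctets;
        uint16_t srcport, dstport;
        uint8_t  prot;
    };

    int main(int argc, char **argv)
    {
        struct in_rec  a;
        struct out_rec f;
        FILE *in, *out;

        if (argc != 3) {
            fprintf(stderr, "usage: %s argusfile flowfile\n", argv[0]);
            return 1;
        }
        if ((in = fopen(argv[1], "rb")) == NULL ||
            (out = fopen(argv[2], "wb")) == NULL) {
            perror("fopen");
            return 1;
        }

        /* read each input record, map the fields, write the output */
        while (fread(&a, sizeof(a), 1, in) == 1) {
            f.srcaddr = a.src;    f.dstaddr = a.dst;
            f.srcport = a.sport;  f.dstport = a.dport;
            f.prot    = a.proto;
            f.dPkts   = a.pkts;   f.dOctets = a.bytes;
            fwrite(&f, sizeof(f), 1, out);
        }

        fclose(in);
        fclose(out);
        return 0;
    }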

Carter

Carter Bullard
QoSient, LLC
300 E. 56th Street, Suite 18K
New York, New York  10022

carter at qosient.com
Phone +1 212 588-9133
Fax   +1 212 588-9134
http://qosient.com




> -----Original Message-----
> From: owner-argus-info at lists.andrew.cmu.edu 
> [mailto:owner-argus-info at lists.andrew.cmu.edu] On Behalf Of 
> Dave Plonka
> Sent: Wednesday, January 09, 2002 2:36 PM
> To: argus-info at lists.andrew.cmu.edu
> Subject: Re: FlowScan w/argus?
> 
> 
> On Wed, Jan 09, 2002 at 12:55:47PM -0500, Carter Bullard wrote:
> > Hey Dave,
> >    So many concepts in one piece of mail.  I believe that
> > FlowScan could easily use argus data, although you may
> > have to change a few things to use the semantics that
> > argus data provides that netflow doesn't.
> > 
> >    As Mark mentioned in his e-mail, we're doing a lot of
> > cricket based graphing of real-time stats using argus
> > data at CMU, and the cricket part was pretty painful.
> > So interfacing to FlowScan may be a better ticket.
> 
> Yes, thanks Mark for the (out-of-band) response.  The graphs look
> good...  If you don't do them already, adding pkts/sec and flows/sec
> graphs is really useful to identify anomalies such as DoS attacks.
> 
> >    The argus way of things should make it pretty trivial
> > for you to generate your 5-minute batches and process them
> > with perl.  ra() is very flexible, allowing you to specify
> > whatever output format you want, so perl processing is
> > really simple if you use ra() to read the data.
> 
> I know that will be too costly performance-wise to convert to ASCII and
> back, so I'll want to read the raw binary file directly.
> 
> I could have been more clear in characterizing how the Cflow perl
> module would work with argus files.  "Cflow.pm" is an XSUB extension
> to perl written in C.  The "ratemplate.c" you mention below may be
> instructive, but for FlowScan, I can't give up control to the main()
> that is embedded in "argus_parse.c", hence the need for me to learn
> some of the argus internal API, such as ArgusReadStream.
> 
> Ideally argus would have a "mainloop" function instead of an embedded
> main().  (Forgive me if I overlooked something - as I said, I've just
> had a cursory look at the code.)  The way this is done with FlowScan
> is that you essentially pass function pointers to a "looping" function.
> In Cflow, that "looping" subroutine is called "find" and you pass it a
> reference to a "wanted" subroutine that gets called once per flow, and
> an optional "perfile" subroutine that gets called once per file, as in
> the SYNOPSIS here:
> 
>    http://net.doit.wisc.edu/~plonka/Cflow/Cflow.html
> 
> If I do end up patching Cflow to read argus files, I'll try to
> introduce a reasonable C API so that that "looping" function can
> possibly be patched into the argus source in case any future users
> need to write their own main().
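
Just so we're picturing the same thing, something like the sketch
below is what I take you to mean.  All of the type and function names
here are made up for the illustration; nothing like this exists in the
argus source today:

    /* Sketch of a callback-driven "looping" function, in the spirit of
     * Cflow::find(): the caller passes a per-record "wanted" routine
     * and an optional per-file routine, and the loop owns the file
     * handling.  Everything here is invented for illustration. */
    #include <stdio.h>

    struct flow {                       /* stand-in for a parsed record */
        unsigned int src, dst, pkts, bytes;
    };

    typedef void (*wanted_fn)(const struct flow *);
    typedef void (*perfile_fn)(const char *filename);

    int flow_find(wanted_fn wanted, perfile_fn perfile,
                  char **files, int nfiles)
    {
        struct flow f;
        int i;

        for (i = 0; i < nfiles; i++) {
            FILE *fp = fopen(files[i], "rb");
            if (fp == NULL)
                return -1;
            if (perfile != NULL)
                perfile(files[i]);          /* once per file */
            while (fread(&f, sizeof(f), 1, fp) == 1)
                wanted(&f);                 /* once per record */
            fclose(fp);
        }
        return 0;
    }

A Cflow-style XSUB would then wrap something like flow_find() and call
back into the perl "wanted" subroutine for each record.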
> 
> >    In terms of writing programs to use argus data,
> > there are lots of things that can be done.  Many people
> > on the list have their own Perl scripts and programs for
> > parsing argus data, and Russell's scan detector scripts
> > are just one example.
> > 
> >    If you want to do perl, you may find that the output
> > of raxml() will be the easiest way to parse all the
> > optional fields that argus supports.  I believe that
> > perl has an xml parser that's pretty good, and it may be
> > that that approach would provide the fastest way of
> > doing the perl thing.
> 
> Yes, I've had good luck with the perl XML::Simple module.
> Fast to code for, slow to run though.
>  
> >    If you want to look at writing your own argus data
> > parsing programs, use the ratemplate.c in the ./clients
> > directory.  All you have to do is supply the appropriate
> > routines; something like racount() is a simple example, and all
> > the ugly stuff is handled by the libraries.  That way you can get
> > near-real-time argus server access, multi-probe collection, and
> > even netflow conversion for free.
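
(To give a feel for what "supply the appropriate routines" means: a
ratemplate-based client fills in hooks roughly like the skeleton
below.  The names and signatures are from memory and only approximate,
and the usual client includes are left out -- ratemplate.c itself is
the reference.)

    /* Rough skeleton of a ratemplate.c-style client: the client
     * library owns main() and the read loop, and the client code just
     * fills in hooks.  Names and signatures below are approximate;
     * see ./clients/ratemplate.c for the real ones. */

    void ArgusClientInit(void)
    {
        /* one-time setup: open output files, initialize counters */
    }

    void RaProcessRecord(struct ArgusRecord *argus)
    {
        /* called once per argus record; per-flow work goes here,
           e.g. the counting that racount() does */
    }

    void RaParseComplete(int sig)
    {
        /* called when input is exhausted (or on a signal);
           flush and report results */
    }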
> > 
> >    Performance with the free stuff seems to be good 
> > on the type of hardware you mention.  Hundreds of megabits,
> > 100K packets per second, around 20-30K records per second
> > peak.  We do rather well with DDoS attacks; at least we
> > don't stop running.
> 
> Great!
> 
> >    If you want to do some integration work, just send me
> > mail, and we can try to figure out what is required.
> 
> OK, I'll keep you posted.  I haven't decided how to prioritize this...
> It sounds like more fun than some of the maintenance/documentation
> work on deck for the next FlowScan release, but one thing that's nice
> about doing freely-available stuff is that, sometimes, you get to do what
> _you_ want to do. ;^)
> 
> Thanks for the info!
> Dave
> 
> -- 
> plonka at doit.wisc.edu  http://net.doit.wisc.edu/~plonka  
> ARS:N9HZF  Madison, WI
> 
> 


