Argus-2.0 Issues

Carter Bullard carter at qosient.com
Wed Aug 9 21:21:56 EDT 2000


Hey Russell,
   I use Argus quite a bit on files, primarily
for debugging the flow engines.  There are some
packet sources that generate only files, and I'm
thinking about providing a version that can be
linked in with Ethereal's wiretap library.  That
would give us a good number of packet capture
formats.  I won't put this in on the first wave,
but it is something to consider.

   The real win in the architecture is output
processing.  We get buffering of records out of
the flow modeler, which lets it get back to packet
processing a bit faster, and with a separate thread
for each client or file we get independent filtering
and buffering.  This helps make the whole Argus
system much more reliable.
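
Roughly, the idea is something like this (a Python sketch
with made-up names, not the actual Argus code): the flow
modeler only hands records off, and every attached client or
file gets its own thread, filter, and buffer, so a slow reader
can't stall packet processing.

    import queue
    import threading

    class ClientOutput(threading.Thread):
        """One output thread per attached client or file."""
        def __init__(self, sink, record_filter):
            super().__init__(daemon=True)
            self.records = queue.Queue(maxsize=4096)  # independent buffering
            self.sink = sink                          # socket or file-like object
            self.record_filter = record_filter        # independent filter

        def run(self):
            while True:
                record = self.records.get()
                if record is None:                    # shutdown marker
                    break
                if self.record_filter(record):        # filter before transmitting
                    self.sink.write(record)

    class OutputProcessor:
        def __init__(self):
            self.clients = []

        def attach(self, sink, record_filter=lambda r: True):
            client = ClientOutput(sink, record_filter)
            client.start()
            self.clients.append(client)

        def dispatch(self, record):
            # Called by the flow modeler: hand the record off and get
            # back to packet processing; a slow client only fills its
            # own queue.
            for client in self.clients:
                try:
                    client.records.put_nowait(record)
                except queue.Full:
                    pass                              # drop rather than stall the modeler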

   Yes, I'm going to support changing the filter
on the fly, but modifying the flow modeler on 
the fly is a bit tricky.  We may have to put some
restrictions on what you can do as a client and
what you have to do on the command line.

Carter


-----Original Message-----
From: owner-argus at lists.andrew.cmu.edu
[mailto:owner-argus at lists.andrew.cmu.edu] On Behalf Of Russell Fulton
Sent: Wednesday, August 09, 2000 5:27 PM
To: Argus (E-mail)
Subject: Re: Argus-2.0 Issues



On Wed, 9 Aug 2000 08:38:30 -0400 Carter Bullard <carter at qosient.com> 
wrote:

> Gentle people,
>   Currently we support reading from one packet file at a
> time.  Is there a need to support multiple "-r filename"
> command line directives in argus()?
> 

Argus or argus clients?  I very rarely use argus to read tcpdump files 
- so far only when preparing bug reports ;-)

In the clients it could be useful where one wants to maintain state 
between files, raconnections for example.  However, as Neil has pointed 
out, anything that maintains lots of state soon runs out of memory.  I 
suspect that most of us tune our log rollover period to the size of log 
file that can be easily processed on whatever resources we have.

When I want to process a day's or week's worth of files I use a Perl 
script that runs ra on each file in the series.
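
The script itself isn't reproduced here, but the idea amounts to
no more than this (sketched in Python, with an example path):

    import glob
    import subprocess

    # Run ra over each log file in the series; the path pattern is
    # just an example.
    for logfile in sorted(glob.glob("/var/log/argus/argus.out.*")):
        subprocess.run(["ra", "-r", logfile], check=True)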


In response to Carter's "Ground Breaking" post:

I like the architecture.  I take it that the flow modeler will be rather 
like the Netramet meter; this will be a very useful addition.

> independent filtering on the output streams.  Once connected,
> remote clients will be able to transmit ASCII filters to the
> Output Processor so that filtering is done before transmitting
> Argus data onto the wire.

I also like the feature of filtering before sending the data over 
sockets.  Will it be capable of changing the filter on the fly?
One of the things I have talked about before is wanting to be able
to capture actual packets from particular streams.  E.g. a process 
monitoring TCP connections (say just the SYNs -- will this be possible?) 
sees what appears to be a scan from IP address A; it then asks argus
to capture and return *all* data from A for a while.
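
However the control protocol ends up looking, the client side of
that request might be as simple as pushing a new ASCII filter to
the Output Processor, along the lines Carter describes above.  A
minimal sketch, assuming a tcpdump-style filter expression and
simple newline framing (both assumptions):

    import socket

    def request_full_capture(argus_host, suspect_addr, port=561):
        # 561 is argus's usual monitor port; the newline framing
        # and filter syntax here are guesses, not the real protocol.
        new_filter = "host " + suspect_addr   # tcpdump-style filter expression
        with socket.create_connection((argus_host, port)) as s:
            s.sendall((new_filter + "\n").encode("ascii"))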

Cheers, Russell.


