Argus 2.0 wishes

Peter Van Epp vanepp at sfu.ca
Mon Mar 6 20:14:17 EST 2000


	By and large I think Russell has captured it. I've inserted a few 
comments below.

<snip>
> 
> So my focus is very much on detecting unwanted activity entering and 
> leaving our network, either in (more or less) real time or well after 
> the fact.
> 
> What I like about argus:
> 1/ Having the ability to process historical data, be it yesterday, last 
> week or last month.  This is something you don't get with standard NIDS.
> 2/ being able to keep several months' data on line on a fairly modest PC 
> (OK, we also have a fairly modest Internet feed (around 1Mbps), which 
> helps).

	Even with a less modest feed it's possible with PC hardware (I'm
becoming concerned that won't be true anymore when my feed to CA*net3 goes
to gigabit Ethernet later this summer on its way higher, though ...). Big
IDE disks in a castoff 486 serve as a fine archive at the moment, pulled off
a fast sensor machine with fast SCSI disks. Logging everything in and out
and keeping the record is invaluable in case of a problem. For starters,
it's possible to verify whether a reported problem really originated on my
net (as opposed to being forged from someone else's net ...). That alone
justifies keeping the current functionality of summarizing records.
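
	As an aside, checking the archive for a reported host is exactly the
sort of thing a few lines of perl make painless. A minimal sketch (the
one-file-per-day archive layout under /argus/archive is made up; ra's
-r/filter usage is as documented):

#!/usr/bin/perl -w
# Search an archive of daily argus logs for traffic involving a
# reported host.  The archive layout here is hypothetical.
use strict;

my $host = shift or die "usage: $0 host [yyyymmdd ...]\n";
my @days = @ARGV ? @ARGV : ("*");          # default: every day on file

for my $day (@days) {
    for my $file (glob("/argus/archive/argus.$day")) {
        print "==> $file\n";
        # ra reads a saved argus file with -r; the filter follows "-"
        system("ra", "-n", "-r", $file, "-", "host", $host) == 0
            or warn "ra failed on $file: $?\n";
    }
}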

> 
> Weakness:  From a NIDS point of view (which isn't what argus was 
> designed as) not being able to get into the packet payload is a 
> limitation.  What I really want is a cross between argus and snort.
> 

	I'm torn on this one :-) I don't know if it's better to have Argus
keep doing what it does now (look only at headers, and summarize) and have
another box that does full capture, or to combine the two. I expect the
answer is to keep argus as it is now and let it run on the same box as (but
separate from) a more complex analyser if desired. It strikes me that NFR
has the right idea: a sensor box with no kernel or file system that boots
from read-only media and works entirely in memory. I believe they then pass
the recreated streams on to an analysis station. I'd be tempted not to do
that by default, but rather to pass the argus data along to an analysis
station which then has the capability of querying the sensor for the full
data stream of any connection that shows an anomaly (the stream would then
be dumped over the link to the analysis station and saved for later
analysis). With enough memory in the sensor box it should be possible to do
the analysis and still capture the packets of interest from the sensor
before they get overwritten.

	The sensor box should also reassemble the TCP stream (and detect and
dump the full packet stream to the analysis box when odd fragments with
overlapping offsets, or variable TTLs that may indicate an attack, occur)
and be capable of scanning for patterns passed back from the analysis box
to detect content-type attacks. When something is found, dump the entire
data stream from memory to the analysis station for later processing. This
should reduce the volume of data enough to be doable (although only trying
it would tell). There are of course also the privacy issues surrounding
this, but they are there with any type of content scanning. Given the
various viruses in email and the content holes in web and other services, I
expect the content is going to have to get scanned (with appropriate
privacy protection) anyway.
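
	To make the sensor half of that concrete, a very rough perl sketch
(using Net::Pcap; the per-connection buffer depth and the flagged() test
are placeholders for whatever the analysis box would feed back):

#!/usr/bin/perl -w
# Keep the tail of every IPv4 conversation in memory so it can be
# dumped to the analysis station on demand.
use strict;
use Net::Pcap;

my $DEPTH = 64;                  # packets kept per conversation (arbitrary)
my %buf;                         # "src:dst" => list of raw packets

my $err;
my $pcap = Net::Pcap::open_live("eth0", 1514, 1, 100, \$err)
    or die "pcap: $err\n";
Net::Pcap::loop($pcap, -1, \&pkt, undef);

sub pkt {
    my ($user, $hdr, $raw) = @_;
    return unless unpack("n", substr($raw, 12, 2)) == 0x0800;  # IPv4 only
    my ($src, $dst) = unpack("a4 a4", substr($raw, 26, 8));
    my $key = join(":", map { join(".", unpack("C4", $_)) } $src, $dst);

    push @{ $buf{$key} }, $raw;
    shift @{ $buf{$key} } while @{ $buf{$key} } > $DEPTH;

    dump_stream($key) if flagged($key);
}

sub flagged { 0 }                # placeholder: the real test comes from
                                 # patterns the analysis box passes back
sub dump_stream {
    my $key = shift;             # here just report; really, ship the packets
    print scalar(@{ $buf{$key} }), " packets buffered for $key\n";
}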


> Wish List:
> 
> 1/ Better reporting of 'anomalous' tcp traffic.  One way of doing this 
> would be to have a separate record class, one that used the status bits 
> differently, for things that are not part of a 'proper' tcp stream.
> 
> 2/ a perl client ;-) I have wondered about building perl callouts for a 
> client.  I have never done anything involving a C/Perl interface, so I 
> don't know how difficult or otherwise that would be, and when it comes 
> down to it, starting up ra from within perl seems to work OK.

	I tend to think ra is the appropriate answer here. Most of what I
find perl useful for in argus is summarizing and keeping state across time
(such as finding scans, fast or slow) and/or sorting or summarizing by
address or content with associative arrays, and none of that is
particularly time sensitive. Thus the additional step of processing through
ra doesn't seem a problem to me, weighed against the additional work of
maintaining a perl interface to argus, but I may well be wrong or not
thinking far enough ahead.
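
	For what it's worth, the pattern looks something like this (a minimal
sketch; the log path is made up, and treating the first dotted-quad on each
line as the source address is deliberately crude):

#!/usr/bin/perl -w
# Pipe ra output into perl and summarize by source address with an
# associative array.
use strict;

my %count;
open(RA, "ra -n -r /argus/argus.log - tcp |") or die "can't run ra: $!\n";
while (<RA>) {
    next unless /(\d+\.\d+\.\d+\.\d+)/;    # first address on the line
    $count{$1}++;
}
close RA;

for my $src (sort { $count{$b} <=> $count{$a} } keys %count) {
    printf "%8d  %s\n", $count{$src}, $src;    # busiest sources first
}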

> 
> I have toyed with the idea of logging content from addresses that are 
> scanning (to look for exploit attempts) but this is probably a waste of 
> time since most scanning is automated and any exploit attempts are 
> likely to come from somewhere else entirely.  To do this you need to be 
> able to start your packet capture very quickly (I tried to fork tcpdump 
> from my watcher script but it gets real messy trying to keep track of 
> processes and kill them off at appropriate times).
> 
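
	For what it's worth, most of that messiness is process group
bookkeeping; something along these lines (the interface, filter and five
minute window are made-up examples) keeps it manageable:

#!/usr/bin/perl -w
# Fork tcpdump into its own process group and kill the whole group
# after a fixed capture window, so nothing is left behind.
use strict;
use POSIX qw(setsid);

sub capture_for {
    my ($secs, $filter, $outfile) = @_;
    my $pid = fork;
    die "fork: $!\n" unless defined $pid;
    if ($pid == 0) {
        setsid();                # child: new session, new process group
        exec("tcpdump", "-i", "eth0", "-w", $outfile, $filter)
            or die "exec tcpdump: $!\n";
    }
    sleep $secs;
    kill 'TERM', -$pid;          # signal the whole group
    waitpid($pid, 0);            # reap it, no zombies
}

capture_for(300, "host 10.0.0.1", "/tmp/suspect.pcap");
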
> I am happy to see argus stay a UNIX tool for the moment.

	Or as a network tool that runs on a stripped-down box that really
isn't Unix anymore (since Argus is really mostly a state machine that
understands the network).

> 
> Cheers, Russell.
> 

	And finally, one wish list item of my own (to do with network
performance): I'd like to see argus at least count non-IP packets on the
network interface, with a view to providing an indication of how busy the
network is over a time interval. That is, since it is seeing all packets on
the network (at least we hope it is), it could count them and calculate how
busy the network has been over some, possibly programmable, interval. I'd
have to admit that it may be a better bet to leave this to NeTraMet as the
tool designed for the job, but argus has the information (and already keeps
track of who is talking, at least for IP).
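
	A sketch of what I mean, using Net::Pcap (the 60 second interval and
eth0 are arbitrary; the real thing would live inside argus itself):

#!/usr/bin/perl -w
# Count IP vs. non-IP packets and bytes, printing one line per interval.
use strict;
use Net::Pcap;

my $INTERVAL = 60;
my ($ip, $other, $bytes, $edge) = (0, 0, 0, 0);

my $err;
my $pcap = Net::Pcap::open_live("eth0", 68, 1, 100, \$err)
    or die "pcap: $err\n";
Net::Pcap::loop($pcap, -1, \&count, undef);

sub count {
    my ($user, $hdr, $raw) = @_;
    $edge ||= $hdr->{tv_sec} - $hdr->{tv_sec} % $INTERVAL;
    if ($hdr->{tv_sec} >= $edge + $INTERVAL) {        # interval rolled over
        printf "%d: %d IP, %d non-IP, %d bytes\n", $edge, $ip, $other, $bytes;
        ($ip, $other, $bytes) = (0, 0, 0);
        $edge = $hdr->{tv_sec} - $hdr->{tv_sec} % $INTERVAL;
    }
    unpack("n", substr($raw, 12, 2)) == 0x0800 ? $ip++ : $other++;
    $bytes += $hdr->{len};
}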

Peter Van Epp / Operations and Technical Support 
Simon Fraser University, Burnaby, B.C. Canada


