'known flows' configuration file

Chas DiFatta chas at difatta.org
Thu Feb 28 02:00:52 EST 2002


For what it's worth, I'll repost my last two messages since they
didn't make it to the archive.

	...cd

>-----Original Message-----
>From: Chas DiFatta [mailto:chas at difatta.org]
>Sent: Thursday, February 14, 2002 3:14 PM
>To: Russell Fulton
>Cc: 'newton'; 'Mark Poepping'; 'Yann Berthier';
>argus-info at lists.andrew.cmu.edu; carter at qosient.com
>Subject: RE: 'known flows' configuration file
>
>
>Russell,
>
>You can already filter on the server, but with no aggregation.
>You could write something simple in C or Perl that reads
>from stdin and writes to port 561 and get exactly what you
>want.  Call it stdin_to_561, and then on the server,
>
>	argus -w - | ragator -F config.file -w - | stdin_to_561
>
>then try this,
>	
>	argus -w - | ragator -F config.file -w - | sshd -p 561
>
>then...
>
>	ssh -p 561 monitor | ra -r - 
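>
>In case it's useful, here's a minimal sketch of what stdin_to_561
>could look like in C.  This is an untested, single-client sketch:
>it listens on TCP port 561, accepts one connection, and copies
>stdin to it.  A real version would fork per connection and handle
>errors more gracefully.
>
>	/* stdin_to_561.c - copy stdin to the first client that
>	 * connects on TCP port 561.  Single-client sketch only. */
>	#include <stdio.h>
>	#include <stdlib.h>
>	#include <string.h>
>	#include <unistd.h>
>	#include <sys/types.h>
>	#include <sys/socket.h>
>	#include <netinet/in.h>
>	#include <arpa/inet.h>
>
>	int main(void)
>	{
>	    int srv, cli;
>	    struct sockaddr_in addr;
>	    char buf[8192];
>	    ssize_t n;
>
>	    if ((srv = socket(AF_INET, SOCK_STREAM, 0)) < 0)
>	        perror("socket"), exit(1);
>
>	    memset(&addr, 0, sizeof(addr));
>	    addr.sin_family = AF_INET;
>	    addr.sin_addr.s_addr = htonl(INADDR_ANY);
>	    addr.sin_port = htons(561);    /* the usual argus port */
>
>	    if (bind(srv, (struct sockaddr *)&addr, sizeof(addr)) < 0)
>	        perror("bind"), exit(1);
>	    if (listen(srv, 1) < 0)
>	        perror("listen"), exit(1);
>	    if ((cli = accept(srv, NULL, NULL)) < 0)
>	        perror("accept"), exit(1);
>
>	    /* pump the ragator output through to the client */
>	    while ((n = read(STDIN_FILENO, buf, sizeof(buf))) > 0)
>	        if (write(cli, buf, (size_t)n) != n)
>	            break;
>
>	    close(cli);
>	    close(srv);
>	    return 0;
>	}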
>
>I understand your concern about the large amount of data sent
>from the server to the clients, but it's just not that much.
>We sample "everything", full bore, from both of our cores, and
>Argus is auditing a stream of about 300-400 Mb/s with no
>loss.  Now if I look at only the audit data flowing between the
>server and the client receiving it, over a 5-minute sample
>period we're getting about 1.3 Mb/s.  I.e.
>
>ipAddr        inMB  outMB  inKpkt  outKpkt  flows  iMb/s  oMb/s  inpkt/s  outpkt/s  services
>xx.xx.xx.xx      6     48      95      104     39    0.2    1.3      315       346  argus(6)
>
>That's not much considering the load, and we're not doing
>any filtering or aggregating at the server end.  This probably
>scales linearly downward, so if you had a 10 Mb/s stream you were
>auditing, the audit stream from Argus would be between 33 kb/s
>and 50 kb/s.  I know if I changed the configuration to something
>more common, I could reduce these numbers by at least 30 to 50%.
>If I filtered and aggregated, it would then be on the order of 10
>to 100 times less.  With all the available bandwidth, reporting
>everything isn't much of an auditing-stream load on the network
>considering the rest of the traffic.
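>
>(To make the "scales linearly" arithmetic explicit: 1.3 Mb/s of
>audit data for a 300-400 Mb/s stream is roughly 0.3-0.4%
>overhead, so a 10 Mb/s stream works out to roughly 33-43 kb/s,
>in line with the 33-50 kb/s figure above.)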
>
>Beyond GigE, as in say OC-48, it makes sense to tell the server
>what you want to see and filter accordingly.  I'm not concerned
>that you couldn't build a probe to keep up.  The first-stage
>preprocessing problem would be an interesting one because of its
>scale; that's what I'm interested in.  Yes, filtering at the probe
>would be needed, but it would have to be very dynamic: you'd send
>the probe a trigger set so it could audit more or less depending
>on the traffic it saw.
>
>You bring up a good point regarding architecture.  Communication
>with the server to tell it what you want is not only a good thing,
>it is essential.
>
>	...Chas
>
>>-----Original Message-----
>>From: owner-argus-info at lists.andrew.cmu.edu
>>[mailto:owner-argus-info at lists.andrew.cmu.edu]On Behalf Of Russell
>>Fulton
>>Sent: Thursday, February 14, 2002 12:35 PM
>>To: carter at qosient.com
>>Cc: 'newton'; 'Mark Poepping'; 'Yann Berthier';
>>argus-info at lists.andrew.cmu.edu
>>Subject: RE: 'known flows' configuration file
>>
>>
>>On Fri, 2002-02-15 at 04:48, Carter Bullard wrote:
>>> Hey Guys,
>>>    If you implement it with filters, the filter is
>>> sent to the argus, and the filtering is done on the
>>> remote side.  If you implement it with a flow model
>>> such as ragator, then no, the aggregation is done
>>> on the local side.
>>
>>
>>Hmmmm... would it be worth implementing the aggregation on the server
>>end?  So that if ragator is reading from a socket, it would/could
>>(I think this should be optional) ship the config file off to the server
>>and just display the results.  Actually this sounds like an ra option,
>>i.e. add a -f option to ra which sends the file to the server, which
>>kicks off a ragator process and feeds it the filtered records.  The
>>ragator process then sends the records to a standard output process...
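>>
>>(In the meantime one could approximate that by running ragator on
>>the monitor host itself and shipping only the aggregated records
>>back, something like the following untested sketch, assuming ra's
>>usual -S option for attaching to a remote argus:
>>
>>	scp config.file monitor:
>>	ssh monitor "ra -S localhost -w - | ragator -F config.file -w -" | ra -r -
>>
>>so only the aggregated stream crosses the wide-area link.)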
>>
>>Why bother?  Well, imagine you are monitoring a wide-area network with
>>sensors scattered all over the country; then doing data aggregation on
>>the server makes sense.  I actually did this nearly 10 years ago using
>>NeTraMet when I was helping to manage Tuia (the NZ Academic & Research
>>network): we had meters at each site which I 'read' from my workstation
>>every 15 minutes.
>>
>>There are times when you need to minimize the network bandwidth used by
>>monitoring.  We are probably more aware of these issues here, where
>>network bandwidth is still extremely expensive.  The network I monitored
>>was based on frame relay, and most of the links had a 48 kbps CIR (yes,
>>that is K, not M).  We have come a fair way since then, thank heavens,
>>but there is still nothing akin to Internet2 in NZ, although we are
>>working on it.
>>
>>-- 
>>Russell Fulton, Computer and Network Security Officer
>>The University of Auckland,  New Zealand
>>
>>


