'known flows' configuration file

Chas DiFatta chas at difatta.org
Thu Feb 28 02:01:26 EST 2002


And the other post...

	...cd

>-----Original Message-----
>From: Chas DiFatta [mailto:chas at difatta.org]
>Sent: Thursday, February 14, 2002 4:25 PM
>To: Russell Fulton
>Cc: 'newton'; 'Mark Poepping'; 'Yann Berthier';
>argus-info at lists.andrew.cmu.edu; carter at qosient.com
>Subject: RE: 'known flows' configuration file
>
>
>Russell,
>
>I forgot to mention this; no need to reinvent the wheel. On host probeA do,
>
>	argus -w - - | ragator -F config.file -w - | argus -X 
>
>We won't need ssh or perl.  Then on another host acting as
>your archiving host you could,
>
>	ra -S probeA -w archive.file -w - | argus -X
>
>Then the only load on your archiving host would be writing the
>file and servicing connections from other argus clients reading
>the argus stream. You could also just combine the probe streams,
>
>	ra -S probeA -S probeB -w - | argus -X
>
>You could also combine streams from different probes on the
>archive host while keeping separate archive files per probe.  This
>also lets the archive host map different port numbers to different probes,
>
>	ra -S probeA -w archiveA.file -w - | argus -X -P 562
>	ra -S probeB -w archiveB.file -w - | argus -X -P 563
>	ra -S localhost:562 -S localhost:563 -w - | argus -X
>
>Note that port 561 (the argus default, since the last command gives
>no -P) carries the combined audit records from both probes.
>
>	...Chas
>
>>-----Original Message-----
>>From: owner-argus-info at lists.andrew.cmu.edu
>>[mailto:owner-argus-info at lists.andrew.cmu.edu]On Behalf Of Russell
>>Fulton
>>Sent: Thursday, February 14, 2002 12:35 PM
>>To: carter at qosient.com
>>Cc: 'newton'; 'Mark Poepping'; 'Yann Berthier';
>>argus-info at lists.andrew.cmu.edu
>>Subject: RE: 'known flows' configuration file
>>
>>
>>On Fri, 2002-02-15 at 04:48, Carter Bullard wrote:
>>> Hey Guys,
>>>    If you implement it with filters, the filter is
>>> sent to the argus, and the filtering is done on the
>>> remote side.  If you implement it with a flow model
>>> such as ragator, then no, the aggregation is done
>>> on the local side.
>>
>>
>>Hmmmm... would it be worth implementing the aggregation on the server
>>end?  So that if ragator is reading from a socket it would/could
>>(I think this should be optional) ship the config file off to the server
>>and just display the results.  Actually this sounds like an ra option,
>>i.e. add a -f option to ra which sends the file to the server, which kicks
>>off a ragator process and feeds it the filtered records.  The
>>ragator process then sends the records to a standard output process...
>>
>>Why bother?  Well, imagine you are monitoring a wide area network with
>>sensors scattered all over the country: then doing data aggregation on
>>the server end makes sense.  I actually did this nearly 10 years ago using
>>NeTraMet when I was helping to manage Tuia (the NZ Academic & Research
>>network): we had meters at each site which I 'read' from my workstation
>>every 15 minutes.
>>
>>There are times when you need to minimize the network bandwidth used by
>>monitoring.  We are probably more aware of these issues here, where
>>network bandwidth is still extremely expensive.  The network I monitored
>>was based on frame relay, and most of the links had a 48 Kbps CIR (yes,
>>that is K, not M).  We have come a fair way since then, thank heavens,
>>but there is still nothing akin to Internet2 in NZ, although we are
>>working on it.
>>
>>-- 
>>Russell Fulton, Computer and Network Security Officer
>>The University of Auckland,  New Zealand
>>
>>
