Question regarding on how to use flows in real time

Carter Bullard carter at
Tue May 26 17:12:44 EDT 2015

Hey Sebas,
This is a job for racluster(1), but you are asking for something that I would suggest is not practical and is ill advised.  You cannot build a real-time sensor with state-based reporting.
And if you are interested in behavior-based anomaly detection and prevention, state-based reporting is the last thing you want to use, for many, many reasons.

How can you stop something if you only realize it exists after it is finished?
My recommendation is to lower the status interval to 1 second and deal with the intermediate records.
Or use racluster to flush on specific state, which I don't recommend.
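As a sketch of the recommendation above (the 1-second value is the suggested setting from this thread, not a default):

```
# argus.conf fragment (sketch): emit flow status records every second,
# so a flow's final record appears almost as soon as the connection
# closes; downstream tools then merge the intermediate records.
ARGUS_FLOW_STATUS_INTERVAL=1
```

The per-flow intermediate records can then be merged back together downstream, e.g. with racluster's default flow-key aggregation: `racluster -r argus.out -w merged.out`.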

> On May 26, 2015, at 4:48 PM, el draco <eldraco at> wrote:
> Hi list. We come across some special need with argus and I would like
> to have your opinion on it.
> I will try to make myself clear.
> The need:
> In our project in the Univ we would like to have something similar to
> 'real time' argus flows. The idea is to have a running argus
> generating flows from a network and have them reported to us as soon
> as possible. However, we _strongly_ depend on the _real_ time
> differences between flows, so we don't want to use the "reporting
> interval" of argus. We need the real flows being reported 'naturally'.
> (More info on
> Our first attempted solution was to use
> ARGUS_FLOW_STATUS_INTERVAL=3600. This is based on the idea that since
> the protocols in argus time out at 60, 30 and 5 secs, the
> connections would time out before argus reports them, which in
> practice is like having no argus reporting at all. This was good
> for some time. However, although using a reporting
> interval of 3600 s stops argus from reporting intermediate flows,
> argus does not output the flows that have already finished in real time.
> Example:
> Let me show an example that is attached in this email. I generated it
> with the command :
> for i in `seq 1 10`; do wget --limit-rate 1000 -N $i; sleep 60; done
> That is, 10 downloads separated by 1 minute. Each download lasts a few
> seconds and is completely done and terminated.
> In our ideal case I would like to have argus reporting each flow as
> soon as it finishes. However, using a reporting time of 5 seconds
> gives us intermediate flows:
> (notice that I'm using -f to simulate a never-ending pcap file)
> argus -f -S 5 -r 3/3.pcap -w - |ra -Z b -n -F /etc/ra.conf -r -
> StartTime,Dur,Proto,SrcAddr,Sport,Dir,DstAddr,Dport,State,sTos,dTos,TotPkts,TotBytes
> 2015/05/25 15:11:49.627257,4.717779,tcp,x.x.x.x,57384,->,,80,SPA_SPA,0,0,27,7536
> 2015/05/25 15:11:55.056756,2.979670,tcp,x.x.x.x,57384,->,,80,FA_FPA,0,0,16,3823
> 2015/05/25 15:12:57.933572,4.620449,tcp,x.x.x.x,57453,->,,80,SPA_SPA,0,0,37,7974
> 2015/05/25 15:13:02.946189,3.511176,tcp,x.x.x.x,57453,->,,80,FA_FPA,0,0,23,4844
> 2015/05/25 15:14:06.354719,4.507918,tcp,x.x.x.x,57525,->,,80,SPA_SPA,0,0,37,7974
> (and 14 more identical flows)
> And using a reporting time of 3600 seconds gives us nothing until 1 hour passes:
> argus -f -S 3600 -r 3/3.pcap -w - |ra -Z b -n -F /etc/ra.conf -r -
> What we need is argus reporting each flow as soon as it finishes
> without adding intermediate _reported_ flows.
> Do you think this is possible? Or are there important drawbacks to
> this reasoning?
> (Maybe there is a way of differentiating the _reported_ flows from the
> completed ones?)
> Thanks a lot!
> Sebas
> -- 
> <3.pcap>
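One way to approximate the differentiation Sebas asks about (a hedged sketch, not a built-in argus feature): for TCP, the State column printed above summarizes the flags seen (SPA_SPA for intermediate records, FA_FPA once a FIN appears), so a simple post-filter can keep only records where the flow has started closing. The field position (9) is an assumption tied to the exact CSV column layout shown in the example output.

```shell
# Sketch: keep only flow records whose TCP State field (column 9 in
# the CSV layout shown above) contains 'F', i.e. a FIN was seen and
# the connection is closing or closed; intermediate SPA_SPA-style
# status records are dropped.
argus -f -S 5 -r 3/3.pcap -w - | ra -Z b -n -F /etc/ra.conf -r - \
  | awk -F, '$9 ~ /F/'
```

This still reports the intermediate records internally; it only hides them from the reader, and it does not cover flows that end by timeout rather than FIN/RST.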

More information about the argus mailing list