[ARGUS] argus occasionally shows unrealistically high packet counts

Carter Bullard carter at qosient.com
Fri Nov 3 17:20:54 EDT 2023


Hey Ming,
These look pretty terrible. Normally you see bad values in the byte counts, since in some situations those are derived from the TCP sequence numbers. The interesting thing is that in these records (if I'm reading them right) the problem is in the packet counts, which are always observed directly, never derived.
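
A minimal sketch (not the Argus source; the sequence values are made up) of how a byte count derived from 32-bit TCP sequence numbers explodes when the subtraction isn't done modulo 2^32:
--------------------------------------------------------
/* Sketch: sequence-derived byte counting across a 32-bit wrap. */
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint32_t seq_first = 4294900000u;  /* near the top of the sequence space */
    uint32_t seq_last  = 1500000u;     /* sequence space has wrapped once */

    /* Unsigned 32-bit subtraction wraps correctly for a single turnover. */
    uint32_t good = seq_last - seq_first;           /* 1567296 bytes */

    /* Widening before subtracting loses the modular wrap and yields a
     * huge bogus value, the same magnitude as the broken records below. */
    uint64_t bad = (uint64_t)seq_last - (uint64_t)seq_first;

    printf("modular delta: %u bytes\n", good);
    printf("naive delta:   %llu bytes\n", (unsigned long long)bad);

    /* Note: even the modular form undercounts if the sequence space
     * wraps more than once within one status interval. */
    return 0;
}
--------------------------------------------------------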

The biggest problem is sequence-number turnover: with flow records generated every 30 seconds, the 32-bit sequence space can wrap within a single status interval.
I would shorten the flow-generation status interval to 5 seconds and see whether the problem goes away. If it does, stay with 5 seconds; if not, let's debug further.
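
For scale, a quick back-of-the-envelope check (the ~2 Gbit/s rate is taken from the report below; a single connection may of course run slower):
--------------------------------------------------------
/* How long can a flow run before its 32-bit sequence number wraps? */
#include <stdio.h>

int main(void) {
    double rate_bps  = 2e9;             /* ~2 Gbit/s, from the report */
    double rate_Bps  = rate_bps / 8.0;  /* 250 MB/s */
    double seq_space = 4294967296.0;    /* 2^32 bytes */

    printf("wrap time: %.1f s\n", seq_space / rate_Bps);  /* ~17.2 s */
    /* A 30 s status interval can therefore span a turnover (nearly
     * two), while a 5 s interval cannot at this rate. */
    return 0;
}
--------------------------------------------------------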

Carter


> On Nov 3, 2023, at 4:00 PM, Ming Fu via Argus-info <argus-info at lists.andrew.cmu.edu> wrote:
> 
> Hi 
> 
> We noticed occasional cases where the Argus archive shows unrealistically high packet counts.
> 
> Here is an example:
> --------------------------------------------------------
> ra-3.0.8.3 -L -1 -c ' ' -n -s stime,ltime,saddr,daddr,proto,spkts,dpkts,sport,dport,sbytes,dbytes  -r /path/to/archive/argus.*| grep tcp | awk '{ print $6+$7, $0 }'| sort -n -r| head
> 14988828386791653376 22:07:48.757785 22:08:18.758381 10.100.250.137 10.63.36.11 tcp 67108864 14988828386724545280 51380 2051 0 288793326608450864
> 9251874556556148736 22:07:18.731168 22:07:48.757773 10.100.250.137 10.63.36.11 tcp 2939797658325221376 6312076898230927432 51380 2051 288230378135882496 144115188126187552
> 8200002023161069568 22:08:18.758494 22:08:48.842602 10.100.250.137 10.63.36.11 tcp 7911771646942248960 288230376218820608 51380 2051 664144640 0
> 2932566 22:15:08.701928 22:15:38.735385 10.100.250.137 10.63.36.11 tcp 2746314 186252 51476 2051 3829903796 26056252
> 2902567 22:25:14.426295 22:25:44.470556 10.100.250.137 10.63.36.11 tcp 2754226 148341 51680 2051 3937714540 18801530
> 2758335 22:21:20.908170 22:21:50.932300 10.100.250.137 10.63.36.11 tcp 2457196 301139 51588 2051 3327733318 45132426
> 2679080 22:06:48.672480 22:07:18.731156 10.100.250.137 10.63.36.11 tcp 2499495 179585 51380 2051 3583055390 21369210
> 2557546 22:06:10.561644 22:06:35.690115 10.100.250.137 10.63.36.11 tcp 2426367 131179 51354 2051 3478126178 15642994
> 2443147 21:18:43.083593 21:19:11.514668 10.100.250.137 10.63.36.11 tcp 2291548 151599 49712 2051 3276577321 26749882
> 2068048 22:11:23.654426 22:11:53.660680 10.100.250.137 10.63.36.11 tcp 1918509 149539 51450 2051 2614658170 24009070
> -------------------------------------------------------
> We ran 3.0.8.1 and 3.0.8.3 in parallel on the same input stream. The problem occurs in both versions. One interesting observation: the bad records do not appear at the same times in the two versions of Argus. They do, however, always appear on high-volume connections. The incoming stream is ~2 Gbit/s with a fair amount of duplicated packets, and Argus drops a small portion of the traffic (~2%) when the load is high. The stream is full-size Ethernet traffic, with an average packet size of over 1000 bytes.
> 
> Any suggestion on how to debug further?
> Regards,
> Ming
> 
