BPF tweak; negative impact?

Scott A. McIntyre scott at xs4all.nl
Thu Feb 22 09:19:08 EST 2001


>    This should tell you if you have packet drops, and if
>    argus was reporting them appropriately.  It is highly
>    unlikely that all the processes would lose the same
>    number of packets, so any variation in numbers is
>    indicative of loss.
> 
>    Something like this should do the trick.

Indeed, it appears something is up, but not, perhaps, much.  

The three tcpdumps produced the following when processed by argus:

racount: totrcds        162470  rcds    162470  pkts      2683025 bytes   987200265
racount: totrcds        162704  rcds    162704  pkts      2689725 bytes   990460125
racount: totrcds        162677  rcds    162677  pkts      2685350 bytes   988406158

And when the same time period is evaluated against argus itself:

racount: totrcds        162670  rcds    162670  pkts      2686006 bytes   988515753
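
In case it's useful to anyone else, the check is roughly this (a sketch only; the interface, file names, and output path are placeholders, and the exact option spellings are worth double-checking against the argus and racount man pages):

# Run a few tcpdumps in parallel over the same interval
# (placeholder interface and file names):
tcpdump -i fxp0 -s 1500 -w dump1.pcap &
tcpdump -i fxp0 -s 1500 -w dump2.pcap &
tcpdump -i fxp0 -s 1500 -w dump3.pcap &
# ...let them run for the test window, then stop them...

# Feed each capture through argus and count the resulting records:
for f in dump1 dump2 dump3
do
    argus -r $f.pcap -w $f.argus
    racount -r $f.argus
done

# Then count the same time window out of the live argus data
# (path is a placeholder; trim to the matching window first):
racount -r /var/log/argus/argus.out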

And yet the reported drops have gone to 0:

22 Feb 01 14:56:54    man  pkts    109668  bytes     38973861  drops 0  CON
22 Feb 01 14:57:04    man  pkts    117379  bytes     49108813  drops 0  CON
22 Feb 01 14:57:14    man  pkts    117866  bytes     49683499  drops 0  CON
22 Feb 01 14:57:24    man  pkts    120616  bytes     50229128  drops 0  CON
22 Feb 01 14:57:34    man  pkts    107885  bytes     40253941  drops 0  CON
22 Feb 01 14:57:44    man  pkts    102618  bytes     38007788  drops 0  CON
22 Feb 01 14:57:54    man  pkts    112879  bytes     38502433  drops 0  CON

[ snip ]

I'm assuming that capturing user data bytes could increase the lossage,
since a bigger snaplen means more bytes per packet have to be pushed
through the BPF buffer?
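
To put a rough number on that (back-of-the-envelope, using the man
records above): the interface is seeing on the order of 110,000 packets
per 10-second interval, i.e. roughly 11,000 packets/sec.  Capturing an
extra 64 bytes of user data per packet (just a made-up figure) would
push an additional 64 * 11,000 = ~700 KB/sec through the BPF buffer, so
the buffer fills that much faster between reads and the chance of drops
goes up.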

Nothing other than the tcpdumps and argus was running at the time of
these tests (well, nothing CPU- or I/O-intensive, at least).

Scott


