BPF tweak; negative impact?
Peter Van Epp
vanepp at sfu.ca
Thu Feb 22 10:41:47 EST 2001
While you need to be careful with Linux (the default 3c905 driver
hangs at high volume, for instance, and there was reputed to be a BPF
implementation that didn't report drops), testing with tcpreplay to provide
a known packet stream at 100 Mbps half duplex indicates (on the same machine,
using an early beta) that Linux indeed captured everything while FreeBSD lost
some packets below the level of BPF. Running argus and tcpdump on the same
interface on a busy 100 Mbps link under FreeBSD will cause packet loss
(I don't know about Linux because I haven't tried).
On current evidence a Linux box with a 905B and the decent driver for it
(our Beowulf maintainer has done a fair amount of network performance
benchmarking on Linux with various cards and has settled on 905Bs), or a
Solaris box (an E450 with an HME interface in my case, a little pricey),
may be a better bet than FreeBSD at the moment: both captured full speed
100 Mbps, Linux on the same machine where FreeBSD didn't.
It would also be interesting to compare the output of the three tcpdumps
to see if there are packets missing from all of the dump streams, which
would indicate loss at the interface level (which BPF/argus won't see).
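That comparison could be scripted. A minimal sketch (assuming the dumps are
classic libpcap .pcap files; the file names at the bottom are hypothetical)
that counts the packet records in each dump so the totals can be diffed:

```python
import struct

def count_packets(path):
    """Count packet records in a classic libpcap capture file."""
    with open(path, "rb") as f:
        hdr = f.read(24)                      # pcap global header
        magic = struct.unpack("<I", hdr[:4])[0]
        if magic == 0xA1B2C3D4:               # written little-endian
            endian = "<"
        elif magic == 0xD4C3B2A1:             # written big-endian
            endian = ">"
        else:
            raise ValueError("%s: not a classic pcap file" % path)
        count = 0
        while True:
            rec = f.read(16)                  # per-packet record header
            if len(rec) < 16:
                break
            _ts, _us, incl_len, _orig = struct.unpack(endian + "IIII", rec)
            f.seek(incl_len, 1)               # skip the captured bytes
            count += 1
        return count

# Hypothetical dump names -- diff the counts to spot interface-level loss.
# for dump in ("dump1.pcap", "dump2.pcap", "dump3.pcap"):
#     print(dump, count_packets(dump))
```

(A real comparison would also match packets by timestamp and contents, not
just count them, but mismatched counts alone already point at loss.)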
Capturing user data is certain to make packet loss worse: BPF has to copy
more of each packet across the kernel/user boundary, eating more memory
bandwidth.
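The effect of the snap length on that copy cost is simple arithmetic; a toy
illustration (the packet lengths below are made up) of the bytes BPF must
move for a given snaplen, as set with tcpdump's -s option:

```python
def bpf_copy_bytes(pkt_lens, snaplen):
    """Total bytes copied across the kernel/user boundary when each
    captured packet is truncated to snaplen before the copy."""
    return sum(min(length, snaplen) for length in pkt_lens)

# Made-up traffic: two full-size Ethernet frames and one small one.
pkts = [1500, 1500, 64]
print(bpf_copy_bytes(pkts, 68))    # headers only: 68 + 68 + 64 = 200
print(bpf_copy_bytes(pkts, 1500))  # full user data: 3064
```

On a busy link the difference between a header-only snaplen and a full-frame
one is roughly an order of magnitude in copy traffic, which is why grabbing
user data pushes the capture box closer to dropping.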
Peter Van Epp / Operations and Technical Support
Simon Fraser University, Burnaby, B.C. Canada
>
> > This should tell you if you have packet drops, and if
> > argus was reporting them appropriately. It is highly
> > unlikely that all the processes would lose the same
> > number of packets, so any variation in numbers is
> > indicative of loss.
> >
> > Something like this should do the trick.
>
> Indeed, it appears something is up, but not, perhaps, much.
>
> The three tcpdumps produced the following when processed by argus:
>
> racount: totrcds 162470 rcds 162470 pkts 2683025 bytes 987200265
> racount: totrcds 162704 rcds 162704 pkts 2689725 bytes 990460125
> racount: totrcds 162677 rcds 162677 pkts 2685350 bytes 988406158
>
> And when the same time period is evaluated against argus itself:
>
> racount: totrcds 162670 rcds 162670 pkts 2686006 bytes 988515753
>
> And yet the drops have reduced to 0:
>
> 22 Feb 01 14:56:54 man pkts 109668 bytes 38973861 drops 0 CON
> 22 Feb 01 14:57:04 man pkts 117379 bytes 49108813 drops 0 CON
> 22 Feb 01 14:57:14 man pkts 117866 bytes 49683499 drops 0 CON
> 22 Feb 01 14:57:24 man pkts 120616 bytes 50229128 drops 0 CON
> 22 Feb 01 14:57:34 man pkts 107885 bytes 40253941 drops 0 CON
> 22 Feb 01 14:57:44 man pkts 102618 bytes 38007788 drops 0 CON
> 22 Feb 01 14:57:54 man pkts 112879 bytes 38502433 drops 0 CON
>
> [ snip ]
>
> I'm assuming that the capturing of user data bytes could increase
> the loss?
>
> No other applications other than the tcpdumps and argus were running at
> the time of these tests (well, nothing cpu or i/o intensive at least).
>
> Scott