packet loss measure with real data
Carter Bullard
carter at qosient.com
Mon Nov 26 06:49:54 EST 2007
Hey Robert,
Argus can be run against your packet file to generate flow status records, which you can use to understand the loss.
For some protocols, those with usable sequence numbers, argus tracks the number of lost packets in each flow status report. For connectionless protocols that do not have useful sequence numbers, you have to compare the received packet counts against the offered counts, so having the packets from both ends is important.
Ping is a measure of connectivity, and each connectivity failure is reported as a single lost packet, which is misleading. A ping involves 2 packets, 2 paths and a reflector, which is a bit more complicated to reason about: did the echo request get dropped? Did the response get dropped? Is the ping target faulty or overwhelmed? The real analysis can be complicated.
Run argus against your packets, assigning a unique source-id to the flow records from each capture point. Then use racluster() and ra() to generate the stats you want.
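A minimal sketch of that workflow, assuming captures named client1.pcap and client2.pcap (hypothetical filenames) and the -e source-id option and ra loss fields as described in the argus man pages:

```shell
# On each capture host, generate flow records from the pcap,
# tagging them with a unique source-id (-e) so the two sides
# can be distinguished after merging.
argus -e client1 -r client1.pcap -w client1.argus
argus -e client2 -r client2.pcap -w client2.argus

# Merge and aggregate the matching flows from both sides.
racluster -r client1.argus client2.argus -w merged.argus

# Print per-flow packet counts and loss estimates.
ra -r merged.argus -s srcid saddr daddr proto pkts loss ploss
```

Comparing the pkts counts for the same flow as seen by each source-id shows where packets went missing on the path between the two capture points.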
Read the man pages and send questions after you get familiar with the tools.
Carter
Carter Bullard
QoSient LLC
150 E. 57th Street Suite 12D
New York, New York 10022
+1 212 588-9133 Phone
+1 212 588-9134 Fax
-----Original Message-----
From: "Robert B." <robertb at semtix24.de>
Date: Thu, 22 Nov 2007 10:42:38
To: argus-info at lists.andrew.cmu.edu
Subject: [ARGUS] packet loss measure with real data
Hello !
I want to measure what packet loss we really have with our communication. So i
can make some statements about the quality of the connection. My idea is to
make a tcp dump on both sides and then analyse thies dumps with argus. But i do
not know how ! Have installed argus-server and argus-client packet on a debian
machine. i know there is a argus daemon running. But how can i use argus to
measure packet loss ? My test Environment could be something of this:
Client--debianargus1--router--internet--router--debianargus2--client2
Would be fine if someone can give me some hints or simply a direction on point
in the manpages on how to measure packet loss between client1 and client2. (have
read the manpages of argus and ra)
For those who want to read more about why I want to measure this:
I am a student from Germany writing my degree dissertation. My project is about
improving the quality of connectivity of a group communication protocol which is
being developed at our university. As part of this project I have to measure some
data about the connection (packet delay, loss, duplication). I started with
simple ping measurements and some simulations with netem, tc and so on,
because we want to simulate WAN connections in our laboratory. I noticed that my
simulated packet loss is not comparable with the packet loss that we have on a
real WAN connection between Germany and other countries.
The problem is that the packet loss measurement is inaccurate. We measure the
packet loss with a simple ping command (over 10 minutes) and its statistics,
and get a packet loss of 2-5% with this method. The problem is that in some
cases the connection between our clients (over the group communication protocol)
breaks on that WAN connection. Yet if I simulate a packet loss of 40%(!) and a
delay of 1000ms, our protocol still works fine! I think that is because of the
distribution of the lost packets. Netem simulates packet loss with a random
function, which (I know) can be adjusted with the correlation parameter, but I
think that is still far from a real situation. For now I only want to measure
packet loss.
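[For reference, the netem loss and correlation parameters mentioned above are set through tc roughly like this; the interface name and percentages are placeholder values, not ones from this thread:]

```shell
# Emulate 2% random packet loss, with 25% correlation between
# successive loss decisions, plus a fixed 1000ms delay.
tc qdisc add dev eth0 root netem loss 2% 25% delay 1000ms

# Remove the emulation again.
tc qdisc del dev eth0 root netem
```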
Greetings, Robert!