Argus giving wrong bytes results ?

Carter Bullard carter at qosient.com
Fri Jul 9 15:12:06 EDT 2010


Hey Mike,
Well, this isn't any good.  I know what the problem is: racluster() thinks the
errant records have their packet and byte counts encoded as (long long) values
when they are actually encoded as (short).  So racluster() grabs 64 bits for
the packets and bytes, rather than the 16 bits that are actually in the record.
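
As an illustration only (this is a hypothetical byte layout, not the actual
argus record format), misreading a 16-bit counter as a 64-bit one swallows
the neighboring fields and produces the kind of huge bogus values you're seeing:

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    int main(void) {
        /* Hypothetical record fragment: 16-bit packet and byte counters,
         * followed by whatever metric data happens to come next.        */
        uint16_t record[4] = { 5, 403, 0x0225, 0xC17D };

        uint16_t pkts16;
        uint64_t pkts64;

        memcpy(&pkts16, record, sizeof(pkts16));   /* correct (short) read     */
        memcpy(&pkts64, record, sizeof(pkts64));   /* errant (long long) read  */

        /* On either byte order the 64-bit read is a huge, meaningless number. */
        printf("as (short):     %u packets\n", pkts16);
        printf("as (long long): %llu packets\n", (unsigned long long)pkts64);
        return 0;
    }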

The question is whether the culprit is radium() or racluster().

Can you do me a favor?  Could you have ra() collect enough of the records,
rather than the current radium() -> racluster() chain, so we can see whether
the bug is in writing the records out or in reading them back in?  Also, if
you could have ra() print the netflow records rather than writing them to
disk, that may show that the netflow-to-argus conversion is fine and that
writing the records to disk is where the bug is.
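
Something along these lines should do it (the host, port, and file names
here are just placeholders):

    # have ra() collect from the radium feed and write the records to disk:
    ra -S 127.0.0.1:561 -w /tmp/ra.test.out
    racluster -L0 -nr /tmp/ra.test.out

    # and separately, have ra() print the records straight to the screen:
    ra -S 127.0.0.1:561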

I'm trying to modify argus() so that it can generate argus records from the
packet stream, but that will take quite a bit of time.

Thanks!!!!

Carter



On Jul 8, 2010, at 10:14 PM, Mike Tancsa wrote:

> At 12:39 PM 7/6/2010, Carter Bullard wrote:
>> Hey  Mike,
>> I did make some changes to the clients to deal with your netflow problem.
>> Did you get a chance to test whether the latest radium() is behaving better?
> 
> Hi Carter,
>        Sorry for the delay. I just tried with the latest version, and still I am getting bogus netflow results. I built radium on FreeBSD 7.3 just with the default ./configure
> 
> 
> racluster -L0 -nr radium.arg - srcid 192.168.1.82 | head
>         StartTime    Flgs  Proto            SrcAddr  Sport   Dir         DstAddr  Dport  TotPkts   TotBytes State
>   21:57:00.190000 Ne         udp    192.168.1.118.123       ->     192.168.1.82.123 86469112 1037629354   INT
>   21:57:11.670000 Ne         tcp     192.168.1.81.62830     ->      10.1.1.3.9010    5        403   FIN
>   21:57:14.706000 Ne         tcp     192.168.1.81.59931     ->      10.1.1.3.9010 36028797 8358962383   FIN
>   21:57:16.186000 Ne         tcp     192.168.1.81.51973     ->      10.1.1.3.9010    5        403   FIN
>   21:57:19.730000 Ne         tcp     192.168.1.81.58569     ->      10.1.1.3.9010    5        403   FIN
>   21:57:23.886000 Ne         tcp     192.168.1.81.55401     ->      10.1.1.3.9010    6        407   FIN
>   21:57:24.794000 Ne         tcp     192.168.1.81.63774     ->      10.1.1.4.9010    5        402   FIN
>   21:57:26.006000 Ne         tcp     192.168.1.81.51267     ->      10.1.1.3.9010    5        403   FIN
>   21:57:26.134000 Ne         tcp     192.168.1.81.53147     ->      10.1.1.3.9010 36028797 8431019977   FIN
> 
>        ---Mike
> 
> 
>> Carter
>> 
>> On Jun 10, 2010, at 11:11 AM, Mike Tancsa wrote:
>> 
>> > At 11:08 AM 6/10/2010, Carter Bullard wrote:
>> >> Hey Mike,
>> >> I'll need to get either your argus data file, or a flow-tools like netflow file, or the pcap to fix this problem,
>> >> if that is workable for you.    Can't seem to replicate it here with anything I have.
>> >
>> > Hi,
>> >        I was just putting it together when your email came in :) I will send offlist.
>> >
>> >        ---Mike
>> >
>> >> Carter
>> >>
>> >> On Jun 8, 2010, at 2:44 PM, Mike Tancsa wrote:
>> >>
>> >> >
>> >> > Actually, I think I found what might be a bug in radium at least.  Using the latest dev version (Radium Version 3.0.3.11), I get some strange netflow results as compared to radium 3.0.2.  The packet counts should be 5, not 36028797.  I can send pcaps offline of the netflow data if you like.  It's just from a tiny Cisco Frame router.  Starting up the old version of radium gives the correct results.  I use the same .conf file for both.
>> >> >
>> >> > 12:28:53.988000 Ne         tcp     192.168.135.81.51857     -> 10.10.197.3.9010         5        402 FSPA_
>> >> >   12:28:54.709000 Ne         tcp     192.168.135.81.55318     -> 10.10.197.3.9010         5        386 FSPA_
>> >> >   12:28:55.517000 Ne         tcp     192.168.135.81.53690     -> 10.10.197.3.9010         5        403 FSPA_
>> >> >   12:28:55.617000 Ne         tcp     192.168.135.81.60303     -> 10.10.197.3.9010         5        403 FSPA_
>> >> >   12:28:58.433000 Ne         tcp     192.168.135.81.60752     -> 10.10.197.3.9010         5        403 FSPA_
>> >> >   12:28:58.713000 Ne         tcp     192.168.135.81.57136     -> 10.10.197.3.9010         5        403 FSPA_
>> >> >   12:29:02.969000 Ne         tcp     192.168.135.81.60152     -> 10.10.197.3.9010         5        401 FSPA_
>> >> >   12:29:04.297000 Ne         tcp     192.168.135.81.53716     -> 10.10.197.3.9010  36028797 8358962383 FSPA_
>> >> >   12:29:07.231000 Ne         tcp     192.168.135.81.51299     -> 10.10.197.3.9010  36028797 8358962383 FSPA_
>> >> >   12:29:15.879000 Ne         tcp     192.168.135.81.58679     -> 10.10.197.3.9010         5        401 FSPA_
>> >> >   12:29:17.887000 Ne         tcp     192.168.135.81.50642     -> 10.10.197.3.9010         5        403 FSPA_
>> >> >   12:29:19.111000 Ne         tcp     192.168.135.81.55935     -> 10.10.197.3.9010  36028797 8358962383 FSPA_
>> >> >   12:29:19.603000 Ne         tcp     192.168.135.81.51737     -> 10.10.197.3.9010  36028797 8431019977 FSPA_
>> >> >   12:29:25.389000 Ne         tcp     192.168.135.81.56375     -> 10.10.197.3.9010         5        411 FSPA_
>> >> >   12:29:29.993000 Ne         tcp     192.168.135.81.64352     -> 10.10.197.3.9010  36028797 8286904789 FSPA_
>> >> >   12:29:30.121000 Ne         tcp     192.168.135.81.53483     -> 10.10.197.3.9010         5        403 FSPA_
>> >> >   12:29:41.178000 Ne         tcp     192.168.135.81.51055     -> 10.10.197.3.9010         5        386 FSPA_
>> >> >   12:29:46.166000 Ne         tcp     192.168.135.81.52996     -> 10.10.197.3.9010         5        386 FSPA_
>> >> >   12:29:46.590000 Ne         tcp     192.168.135.81.65290     -> 10.10.197.3.9010         5        401 FSPA_
>> >> >   12:29:48.298000 Ne         tcp     199.212.135.83.54323     -> 10.10.197.3.9010         5        387 FSPA_
>> >> >   12:29:50.778000 Ne         tcp     192.168.135.81.53839     -> 10.10.197.3.9010  36028797 7926616819 FSPA_
>> >> >   12:29:52.671000 Ne         tcp     192.168.135.81.55540     -> 10.10.197.3.9010  36028797 7998674413 FSPA_
>> >> >   12:29:54.507000 Ne         tcp     192.168.135.81.59975     -> 10.10.197.3.9010         5        386 FSPA_
>> >> >
>> >> >
>> >> > At 06:33 AM 6/8/2010, carter at qosient.com wrote:
>> >> >> There is a HUGE difference between per-transaction flow data and interface counters.  If you simply print out your Argus data, you can see this:
>> >> >> 
>> >> >>    ra -r argus.data.file
>> >> >> 
>> >> >> You have to transform the bi-directional flow data, which accounts for conversations, into RMON-style data, which counts ingress and egress packets based on a layer 2 address.  If you want to compare SNMP interface counters with Argus data, you will need to use an aggregator, such as racluster, ragator, or rabins, in "rmon" mode, modifying the flow key to track one of the MAC addresses in the records:
>> >> >> 
>> >> >>    racluster -m smac -M rmon -r argus.data.file
>> >> >> 
>> >> >> Now the src and dst counters will look like interface egress and ingress counters, respectively.  ragraph() supports this style of aggregation:
>> >> >> 
>> >> >>    ragraph sbytes dbytes -t time 5s -m smac -M rmon -r argus.data.file
>> >> >> 
>> >> >> BUT, you will have to modify your argus.conf to enable ARGUS_GENERATE_MAC_DATA (ARGUS_GENERATE_MAC_DATA=yes) so that you have layer 2 information in your argus data.
>> >> >> 
>> >> >> Carter
>> >> >>
>> >> >> Sent from my Verizon Wireless BlackBerry
>> >> >> From: Reykjavik hindisvik <hindisvik at gmail.com>
>> >> >> Date: Mon, 7 Jun 2010 12:23:23 +0200
>> >> >> To: <carter at qosient.com>
>> >> >> Cc: <argus-info-bounces+carter=qosient.com at lists.andrew.cmu.edu>; Argus<argus-info at lists.andrew.cmu.edu>
>> >> >> Subject: Re: [ARGUS] Argus giving wrong bytes results ?
>> >> >> Hello,
>> >> >>
>> >> >> Thank you for your answers.  I have tried using sapp_bytes and dapp_bytes; the result when downloading a file seems to be correct, but it does not fix my issue: outbound traffic is not really OK, and inbound is absolutely wrong (50Mb instead of 100Mb...).
>> >> >>
>> >> >> What I would like to do is to use the result of racount -r xxx.xxx.xxx.xxx.ra to draw a graph with cacti.
>> >> >> One problem is that the ra file will be huge, so I'm compelled to rotate it every 5 minutes, and I have to tell Cacti it's a Gauge data source, not a Counter data source.
>> >> >> Has anyone ever tried to do this?
>> >> >> Is there an argus command that would be more appropriate than racount?
>> >> >>
>> >> >> Before using Argus I was using SNMP with InOctets and OutOctets, and on Linux devices I was using iptables + accounting (which was giving me a COUNTER-type cacti value).
>> >> >>
>> >> >> Here is my agent server conf file :
>> >> >>
>> >> >> ARGUS_FLOW_TYPE="Bidirectional"
>> >> >> ARGUS_FLOW_KEY="CLASSIC_5_TUPLE"
>> >> >> ARGUS_DAEMON=yes
>> >> >> ARGUS_MONITOR_ID=`hostname`
>> >> >> ARGUS_ACCESS_PORT=561
>> >> >> ARGUS_INTERFACE=eth1
>> >> >> ARGUS_SET_PID=yes
>> >> >> ARGUS_PID_PATH="/var/run"
>> >> >> ARGUS_FLOW_STATUS_INTERVAL=0.5
>> >> >> ARGUS_MAR_STATUS_INTERVAL=60
>> >> >> ARGUS_DEBUG_LEVEL=0
>> >> >> ARGUS_GENERATE_RESPONSE_TIME_DATA=no
>> >> >> ARGUS_GENERATE_PACKET_SIZE=no
>> >> >> ARGUS_GENERATE_JITTER_DATA=no
>> >> >> ARGUS_GENERATE_MAC_DATA=no
>> >> >> ARGUS_GENERATE_APPBYTE_METRIC=yes
>> >> >>
>> >> >> Thank you for your ideas, I'm a bit stuck...
>> >> >>
>> >> >> H.
>> >> >>
>> >> >>
>> >> >> 2010/6/7 <carter at qosient.com>
>> >> >> Also, Argus uses a different definition for source and destination since Argus works with flow data not interface data, and that can cause confusion.
>> >> >>
>> >> >> What are the differences that you are seeing? How are you running the client programs?
>> >> >>
>> >> >> Carter
>> >> >>
>> >> >> Sent from my Verizon Wireless BlackBerry
>> >> >>
>> >> >> ----------
>> >> >> From: Reykjavik hindisvik <hindisvik at gmail.com>
>> >> >> Date: Sun, 6 Jun 2010 09:31:42 +0200
>> >> >> To: <argus-info at lists.andrew.cmu.edu>
>> >> >> Subject: [ARGUS] Argus giving wrong bytes results ?
>> >> >>
>> >> >> Hello,
>> >> >>
>> >> >> I would like to use argus to draw a graph of bandwidth usage for our network.  Today I'm using SNMP, which gives me a graph of my bandwidth, and I've set up Argus to draw the same graph for the same network interface, but it does not give me the same results at all...
>> >> >>
>> >> >> I can't believe it's a bug; I bet it's just a different way of counting the packets, and maybe there's an option to get the same results as I have with SNMP.
>> >> >>
>> >> >> For example: when I download a 130Mb file, SNMP shows me 130MB, but Argus shows me much more (maybe it includes the size of the headers or something that SNMP doesn't...), and to me the SNMP result looks like the right one.
>> >> >> So my questions are:
>> >> >> 
>> >> >> 1) What exactly makes the difference?
>> >> >> 2) Is there a way to get the same results (an option or something...)?
>> >> >> 3) Maybe I can recalculate it afterwards with a formula to get the same results, but which formula?
>> >> >>
>> >> >> Thanks for your ideas.
>> >> >>
>> >> >> Best regards,
>> >> >>
>> >> >> H.
>> >> >>
>> >> >
>> >> > --------------------------------------------------------------------
>> >> > Mike Tancsa,                                      tel +1 519 651 3400
>> >> > Sentex Communications,                            mike at sentex.net
>> >> > Providing Internet since 1994                    www.sentex.net
>> >> > Cambridge, Ontario Canada                         www.sentex.net/mike
>> >> >
>> >> >
>> >>
>> >> Carter Bullard
>> >> CEO/President
>> >> QoSient, LLC
>> >> 150 E 57th Street Suite 12D
>> >> New York, New York  10022
>> >>
>> >> +1 212 588-9133 Phone
>> >> +1 212 588-9134 Fax
>> >>
>> >>
>> >>
>> >>
>> >
>> >
>> 
>> Carter Bullard
>> CEO/President
>> QoSient, LLC
>> 150 E 57th Street Suite 12D
>> New York, New York  10022
>> 
>> +1 212 588-9133 Phone
>> +1 212 588-9134 Fax
>> 
>> 
>> 
>> 
> 
> 
