Too many inputs

carter at qosient.com
Tue Dec 7 06:45:28 EST 2010


Hey Rafael,
Keeping the output as files that match the original packet data is a habit that works for many of us.
I work with 5-minute files by default, as that solves performance and distribution issues that I run into routinely.  But there are lots of strategies that work, so not to worry.
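
For example, a minimal sketch of the one-argus-file-per-pcap approach (paths are placeholders; each pcap gets its own matching argus file):

$> for i in pcap/*; do argus -r "$i" -w "$i.argus"; done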

The preliminary argus-clients-3.0.3.20 may fix the inconsistent times; please give it a try if you have the time.

Carter 

Sent from my Verizon Wireless BlackBerry

-----Original Message-----
From: Rafael Barbosa <rrbarbosa at gmail.com>
Date: Mon, 6 Dec 2010 17:13:34 
To: <carter at qosient.com>; <argus-info-bounces+carter=qosient.com at lists.andrew.cmu.edu>; Argus <argus-info at lists.andrew.cmu.edu>
Subject: Re: [ARGUS] Too many inputs

Hi all,

Just wanted to share my latest attempts. Besides the workaround with mergecap,
I found out that argus, by default, appends new records when saving to an
existing file. From the manpage:

-w <file | stream ["filter"]>
        Append transaction status records to output-file or write records
        to the URL based stream.


So I tried the following (was there a reason to generate individual argus
files for each pcap file in CS Lee's suggestion?):

$>  for i in file*; do echo $i; argus -r $i -w argus.out; done

Although the outputs of the mergecap workaround and of this new one are not
identical (e.g. when compared with 'diff'), the contents seem to be basically
the same. But I do get some differences, for instance:

$> diff <(ra -r file.argus - host X.X.X.X and port 1084) \
        <(ra -r file2.argus - host X.X.X.X and port 1084)
...
<    15:10:58.624644  e         udp         X.X.X.X.1084      ->         Y.Y.Y.Y.2423          6        708   INT
<    15:11:04.623560  e         udp         X.X.X.X.1084      ->         Y.Y.Y.Y.2423          6        708   INT
<    15:11:10.622486  e         udp         X.X.X.X.1084      ->         Y.Y.Y.Y.2423          5        590   INT
---
>    15:10:58.624644  e         udp         X.X.X.X.1084      ->         Y.Y.Y.Y.2423          5        590   INT
>    15:11:03.623752  e         udp         X.X.X.X.1084      ->         Y.Y.Y.Y.2423          6        708   INT
>    15:11:09.622670  e         udp         X.X.X.X.1084      ->         Y.Y.Y.Y.2423          6        708   INT
...

The individual records differ, but the saved information is the same: the 5 s
status-interval boundaries apparently fall differently depending on input
order, so packets are binned into different records while the totals match. I
guess this can be considered the expected result.
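
One way to confirm that only the record boundaries differ is to aggregate both
outputs before diffing, e.g. (a sketch; racluster merges the status records of
each flow, so the totals should line up):

$> diff <(racluster -r file.argus - host X.X.X.X and port 1084) \
        <(racluster -r file2.argus - host X.X.X.X and port 1084)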

--
Rafael Barbosa
http://www.vf.utwente.nl/~barbosarr/



On Tue, Nov 30, 2010 at 2:01 PM, Rafael Barbosa <rrbarbosa at gmail.com> wrote:

> I managed to work around the limitation by merging all pcap files at once
> with mergecap. For that I had to raise the maximum number of open files to
> 1000 (just to be safe). On a Mac:
> $> ulimit -S -n 1000
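>
> For reference, the one-shot merge was something like this (a sketch with
> placeholder filenames; mergecap orders the packets by timestamp as it
> merges):
>
> $> mergecap -w merged.pcap file*
> $> argus -r merged.pcap -w flows.argus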
>
> Still, I don't know what went wrong with my first attempt. I had merged the
> files in intermediate steps, i.e. part0 = file1,file2,...,file149; part1 =
> file150,...,file300 and so on. In the end, I ran argus:
> $> argus -r part* -w flows.argus
>
> Probably there was some mistake in the order I fed files to mergecap
> and/or to argus (although I checked it twice). @Dave - I also checked the
> order of the pcap files with a method similar to the one you proposed.
>
> The problem I have with CS Lee's method is that I would lose some
> fine-grained information about the flows. For instance, by default argus
> generates a flow status record every 5 s; that is, a flow lasting more than
> 5 s is split into several flow records. In my understanding, racluster()
> would merge all these records into a single one.
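>
> A minimal sketch of that aggregation step (assuming racluster's default
> merging on the flow key):
>
> $> racluster -r flows.argus -w flows-merged.argus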
>
> If I then use ragraph() to generate a time series of the byte counts every
> 5 s for a given flow, I would get the exact information from the output of
> argus, whereas I would only get an average from the output of racluster.
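>
> The time-series step would then be something like this (a sketch; the exact
> ragraph options vary by version, and "bytes" and "5s" are assumptions):
>
> $> ragraph bytes -M 5s -r flows.argus - host X.X.X.X and port 1084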
>
> Thanks for all the info.
>
> --
> Rafael Barbosa
> http://www.vf.utwente.nl/~barbosarr/
>
>
>
> On Tue, Nov 30, 2010 at 7:11 AM, Dave Edelman <dedelman at iname.com> wrote:
>
>> You might want to figure out the correct sequence of the pcap files using
>> something like tcpdump to look at the timestamp of the first packet in each.
>>
>>
>>
>> for i in file*; do echo -n "$i  "; tcpdump -c 1 -r $i; done
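>>
>> A variant that sorts the files by the first packet's timestamp might look
>> like this (a sketch; assumes tcpdump's -tttt date-and-time output):
>>
>> for i in file*; do echo "$(tcpdump -tttt -c 1 -r $i 2>/dev/null | cut -d' ' -f1,2) $i"; done | sort | awk '{print $3}'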
>>
>>
>>
>> From: argus-info-bounces+dedelman=iname.com at lists.andrew.cmu.edu On Behalf Of Rafael Barbosa
>> Sent: Monday, November 29, 2010 10:01 AM
>> To: carter at qosient.com
>> Cc: argus-info-bounces+carter=qosient.com at lists.andrew.cmu.edu; Argus
>> Subject: Re: [ARGUS] Too many inputs
>>
>>
>>
>> Hi,
>>
>>
>>
>> In this test I ran version 3.0.2. I think the last time I updated the
>> clients, I forgot to update argus... I will update my binaries before
>> continuing.
>>
>>
>>
>> Trying to solve my problem, I used 'mergecap' (part of Wireshark) to merge
>> the files, and then loaded them into argus. However, I had problems with
>> packet timestamps, such as:
>>
>>
>>
>> argus[4311]: 29 Nov 10 15:49:01.766800 ArgusInterface timestamps wayyy out
>> of order: now 1233014770 then 1233577523
>>
>>
>>
>> Now I am trying to understand where the out-of-order packets are coming
>> from. Kinda frustrating...
>>
>>
>>
>> --
>> Rafael Barbosa
>>
>> http://www.vf.utwente.nl/~barbosarr/
>>
>>
>>
>> On Mon, Nov 29, 2010 at 2:46 PM, <carter at qosient.com> wrote:
>>
>> Hey Rafael,
>> The number of inputs is a constant defined in the ./argus/ArgusSource.h
>> include file. You can increase that number to whatever you need, but you
>> may run into system limits on the number of open fd's.
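>>
>> For example (a sketch; the exact macro name varies between versions), the
>> limit is a simple compile-time constant along the lines of:
>>
>> #define ARGUS_MAXINTERFACE   5   /* raise to accept more -r inputs */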
>>
>> What version are you running? I couldn't find your exact error string in
>> the 3.0.3 codebase. Just curious.
>>
>> Carter
>>
>> Sent from my Verizon Wireless BlackBerry
>> ------------------------------
>>
>> From: Rafael Barbosa <rrbarbosa at gmail.com>
>> Sender: argus-info-bounces+carter=qosient.com at lists.andrew.cmu.edu
>> Date: Fri, 26 Nov 2010 14:34:58 +0100
>> To: Argus <argus-info at lists.andrew.cmu.edu>
>> Subject: [ARGUS] Too many inputs
>>
>>
>>
>> Hi all,
>>
>>
>>
>> When trying to read several hundred small pcap files (100 MB each) to
>> create an argus flow file, I ran into a problem. When I tried:
>>
>> $> argus -r dump* -w file.argus
>>
>>
>>
>> I got the following error:
>>
>> argus[34458]: 26 Nov 10 14:29:02.394286 ArgusOpenInputPacketFile: too many
>> inputs max is 5
>>
>>
>>
>> Is it possible to overcome this limitation without merging the files
>> manually?
>>
>>
>>
>> Thanks,
>> Rafael Barbosa
>>
>> http://www.vf.utwente.nl/~barbosarr/
>>
>>
>>
>>
>>
>
>
