Why is rabins() "ramping up" counts?

Carter Bullard carter at qosient.com
Wed Jul 31 19:04:40 EDT 2013


Hey Matt,

On Jul 31, 2013, at 2:29 PM, Matt Brown <matthewbrown at gmail.com> wrote:

> Hey Carter...
> 
> Thanks for replying quickly.
> 
> Hope you're ready for one of my novels...
> 
> If not, this message can be summarized into a question:
> 
> Should I be "throwing away" any data returned within the first "ARGUS_FLOW_STATUS_INTERVAL" when using rabins() as it appears to be inaccurately reported?
> 
> 
No, don't throw anything away.
Let's try to figure out what is causing your problem.

> --
> /etc/argus.conf: ARGUS_FLOW_STATUS_INTERVAL=60
> Does 60 seconds qualify in the "very large in comparison to 5 seconds" category?
> 
> --
> I definitely have a small number of _flows_ per five second interval for this specific BPF.
> Am I right to assume that rabins() with `-M hard` will take whatever flows are occurring within each bin and treat it solo, not discarding it in the next bins (this is what `-M nomodify` is for, right?)?
> 
> -
First of all, you need to understand what the options do.
Stop using "-B 5s" when reading a file, regardless of what
you think you are getting.

"-M hard" sets the start and stop times for the aligned flows to
the bin time boundaries.  I don't know what it means to
"treat it solo".

"-M nomodify" means "don't modify the records"; this will result
in bins whose data extends beyond the bin's time boundaries.
This is useful for doing some correlation analytics, but you do
not want to use this option in this situation.
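As a rough mental model of those two options (a sketch of the behavior described here, not argus source code):

```python
def bin_times(rec_start, rec_end, bin_start, bin_end, mode="clip"):
    """Start/stop times a record contributes to a [bin_start, bin_end) bin."""
    if mode == "hard":        # "-M hard": snap to the bin's boundaries
        return bin_start, bin_end
    if mode == "nomodify":    # "-M nomodify": leave the record's times
        return rec_start, rec_end   # alone, even if they spill past the bin
    # default: the record's own times, clipped to the bin
    return max(rec_start, bin_start), min(rec_end, bin_end)

# A 79-microsecond record sitting inside the 5-10 s bin:
print(bin_times(9.012184, 9.012263, 5.0, 10.0, "hard"))  # (5.0, 10.0)
print(bin_times(9.012184, 9.012263, 5.0, 10.0))          # (9.012184, 9.012263)
```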

> -
> Here is the outcome of what you've described to help me understand rabins():
> 1) grab the `seq` of the `-N 1` record:
> #ra -N 1 -r ~/working/2013-07-25_argus_09\:00\:00 -s seq - port 5432 and src host 192.168.10.22
>    Seq
>    12187458

> 
> 2) write the single flow record to an argus binary file:
> #ra -N 1 -r ~/working/2013-07-25_argus_09\:00\:00 -w - - port 5432 and src host 192.168.10.22 > ~/temp.argus
> 
> 3) If I look at a field that is a summation (`pkts`) [not an aggregate itself, like `rate` is], without using field aggregation (`-m`), I get the TotPkts:
> #ra -r ~/temp.argus -s seq ltime pkts - port 5432 and src host 192.168.10.22
>    Seq                        LastTime  TotPkts
>    12187458 2013-07-25 09:59:17.698748    59326

rate is NOT an aggregate value.  rabins() will aggregate records, using the default
flow key definition, when you don't use the "-m keyfield(s)" option.  Again, you are not
exercising any function of rabins() in this case.

> 
> 4) If I then look at the output of rabins() running against the same `seq`, it appears that rabins() shows `pkts` within each bin, whose sum IS equal to the above TotPkts:
> #rabins -M hard time 5s -r ~/temp.argus -s seq ltime pkts - port 5432 and src host 192.168.10.22
> ...snipped output...
> 
> Cool!  A summation works with a field that isn't, itself, an aggregate.  [Note the output is the same with or without `-B 5s`]
> 
Why aren't you printing out the output of the command?
How can anyone tell if you're getting correct results or not?

> 
> What about a field that is, itself, an aggregate (`rate`)?
> 
> #ra -r ~/temp.argus -s seq ltime rate - port 5432 and src host 192.168.10.22
>    Seq                        LastTime         Rate
>    12187458 2013-07-25 09:59:17.698748    16.675105
> 
> #rabins -M hard time 5s -B 5s -r ~/temp.argus -s seq ltime rate - port 5432 and src host 192.168.10.22
> 
> Cool! If I average the resultant Rates, I get 16.4646067416... so not exactly correct, but good enough(?). [Note the output is the same with or without `-B 5s`]
> 
The values we generate are going to be correct, more often than not.
Not being arrogant, just that we've put a massive amount of statistical
effort into getting the right numbers out of the tools.

rate is a calculated value, not an aggregate value: rate = pkts / (ltime - stime).
It is NOT avg(rate1, rate2, …, rateX).
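As a quick illustration (plain Python with made-up numbers, not argus output): averaging per-bin rates weights every bin equally regardless of its duration, which is why it drifts away from the true overall rate.

```python
# Each bin: (packets, duration in seconds).  Numbers are invented
# purely to illustrate the arithmetic.
bins = [(10, 0.5), (50, 5.0), (5, 5.0)]

total_pkts = sum(p for p, _ in bins)
total_dur = sum(d for _, d in bins)

overall_rate = total_pkts / total_dur                     # pkts / (ltime - stime)
mean_of_rates = sum(p / d for p, d in bins) / len(bins)   # avg(rate1, ..., rateX)

print(f"overall rate:  {overall_rate:.2f}")   # 65 / 10.5    -> 6.19
print(f"mean of rates: {mean_of_rates:.2f}")  # (20+10+1)/3  -> 10.33
```

The short, high-rate bin dominates the naive mean, even though it covers almost none of the elapsed time.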

> --
> The (`-m`) aggregator does not cause the "ramp up"...
> 
> proof:
> I do not see a difference when using an aggregator (`-m saddr`, because my BPF considers a single src host) with rabins() if I use an aggregate field (`rate`) against a live feed:
> 
> #timeout 60s rabins -M hard time 5s -B 5s -S 127.0.0.1:561 -s seq ltime saddr rate - port 5432 and src host 192.168.10.22 > ~/rabins_aggr.out & timeout 60s rabins -M hard time 5s -B 5s -S 127.0.0.1:561 -m saddr -s seq ltime saddr rate - port 5432 and src host 192.168.10.22 > ~/rabins_aggr_saddr.out
> 
> I can manually sum the per-bin output of the rabins() run without the `-m` aggregator, and the totals equate to the `-m saddr` values [+/-~0.5].
> 
> So aggregation is not the cause of the "ramp up".
> 
> 
> --
> "Ramp up" is exhibited on both aggregated fields and non-aggregated fields.
> 
> proof:
> # rabins -M hard time 5s -B 5s -m saddr -S 127.0.0.1:561 -s seq ltime pkts - port 5432 and src host 192.168.10.22
>    Seq                        LastTime  TotPkts
>    15103267 2013-07-31 13:46:35.000000       41
>    14983890 2013-07-31 13:46:40.000000       75
>    14983890 2013-07-31 13:46:45.000000      144
>    14983890 2013-07-31 13:46:50.000000      255
>    14983890 2013-07-31 13:46:55.000000      377
>    14983890 2013-07-31 13:47:00.000000      368
>    15103267 2013-07-31 13:47:05.000000      373
>    14983890 2013-07-31 13:47:10.000000      446
>    14983890 2013-07-31 13:47:15.000000      570
>    14983890 2013-07-31 13:47:20.000000      567
>    14983890 2013-07-31 13:47:25.000000      575
>    14983890 2013-07-31 13:47:30.000000      637
>    15103267 2013-07-31 13:47:35.000000      647
> 
> # rabins -M hard time 5s -B 5s -S 127.0.0.1:561 -m saddr -s seq ltime saddr rate - port 5432 and src host 192.168.10.22
>    Seq                        LastTime            SrcAddr         Rate
>    14667433 2013-07-31 13:43:45.000000       192.168.10.22    15.200000
>    14667433 2013-07-31 13:43:50.000000       192.168.10.22    38.600000
>    14667433 2013-07-31 13:43:55.000000       192.168.10.22    61.800000
>    14667433 2013-07-31 13:44:00.000000       192.168.10.22    61.000000
>    14667433 2013-07-31 13:44:05.000000       192.168.10.22    60.600000
>    14667433 2013-07-31 13:44:10.000000       192.168.10.22    75.200000
>    14667433 2013-07-31 13:44:15.000000       192.168.10.22    99.400000
>    14667433 2013-07-31 13:44:20.000000       192.168.10.22    99.200000
>    14667433 2013-07-31 13:44:25.000000       192.168.10.22   101.400000
>    14667433 2013-07-31 13:44:30.000000       192.168.10.22   113.400000
>    14667433 2013-07-31 13:44:35.000000       192.168.10.22   123.400000
>    14667433 2013-07-31 13:44:40.000000       192.168.10.22   130.600000
>    14667433 2013-07-31 13:44:45.000000       192.168.10.22   129.800000
> 
> 
Here is what I do.  Here is some known starting data:

thoth:clients carter$ ra -r /tmp/data.out -s stime dur saddr daddr  spkts dpkts sbytes dbytes state
                 StartTime        Dur            SrcAddr            DstAddr  SrcPkts  DstPkts     SrcBytes     DstBytes State 
2013/07/31.18:11:05.476059 775534.50*                  0                  0        0        0            0            0   STA
2013/07/31.18:11:09.012184   0.000079      192.168.0.164       192.168.0.68        1        1          302           66   CON
2013/07/31.18:11:10.981317   0.000085       192.168.0.68     74.112.184.214        1        1           66           70   CON
2013/07/31.18:11:11.408303   0.000000       192.168.0.33      192.168.0.255        1        0          194            0   INT
2013/07/31.18:11:12.568137   4.821330       192.168.0.68       192.168.0.70       48       48         3168        18024   CON
2013/07/31.18:11:13.505094   0.000000  01:80:c2:00:00:0e  74:44:01:8f:82:fe        1        0           64            0   REQ
2013/07/31.18:11:13.824966   4.691803       192.168.0.66       192.168.0.68        8        8         4300          528   CON
2013/07/31.18:11:14.212201   2.400279      192.168.0.164       192.168.0.68        2        2          472          132   CON
2013/07/31.18:11:16.150103   0.228961       192.168.0.68        66.39.3.162        2        2          132          246   CON
2013/07/31.18:11:16.168094   0.000000       192.168.0.66      192.168.0.255        1        0          220            0   REQ
2013/07/31.18:11:16.423683   0.235413       192.168.0.68        66.39.3.162        2        2          132          246   CON
2013/07/31.18:11:16.793751   0.000087       192.168.0.68        66.39.3.162        1        1           66          123   CON


Let's run rabins() against this data, aggregating on the "srcid".  There is only one
srcid in the data set, so we should get one record per bin.

thoth:clients carter$ rabins -M time 5s -m srcid -r /tmp/data.out -s stime dur saddr daddr spkts dpkts sbytes dbytes state
                 StartTime        Dur            SrcAddr            DstAddr  SrcPkts  DstPkts     SrcBytes     DstBytes State 
2013/07/31.18:11:09.012184   0.000079            0.0.0.0            0.0.0.0        1        1          302           66   CON
2013/07/31.18:11:10.981402   4.018598            0.0.0.0            0.0.0.0       30       28         3219         9280   CON
2013/07/31.18:11:15.000000   3.516769            0.0.0.0            0.0.0.0       37       36         5595        10089   CON


See how the first record starts 4.012184 seconds into a 5 second bin ?????
And really is only 0.000079 seconds in duration ?????

Now, here is what your command line options generate with this data.  I've added the
duration so you can see something curious about your options.

thoth:clients carter$ rabins -M hard time 5s -r /tmp/data.out -m srcid -s seq ltime dur pkts rate
    Seq                        LastTime        Dur  TotPkts         Rate 
           2 2013/07/31.18:11:10.000000   5.000000        2     0.200000
      199170 2013/07/31.18:11:15.000000   5.000000       58    11.400000
         304 2013/07/31.18:11:20.000000   5.000000       73    14.400000

I believe that you would think that there is some ramp up here.
Because of the "-M hard" option, the data in the first bin, which
represented only 0.000079 seconds duration of network activity, was changed
to represent 5.0 seconds.  This is correct, because the "-M hard" option is
designed to force the new records to have the start and stop times of the
bins.  This makes the duration of the data 5 seconds, in this case.

Now, let's get rid of the "-M hard":

thoth:clients carter$ rabins -M 5s -r /tmp/data.out -m srcid -s seq ltime dur pkts rate
    Seq                        LastTime        Dur  TotPkts         Rate 
           2 2013/07/31.18:11:09.012263   0.000079        2 12658.228516
      199170 2013/07/31.18:11:15.000000   4.018598       58    14.184051
         304 2013/07/31.18:11:18.516769   3.516769       73    20.473339

Now that is different, and now we have ramp down….  See, your assumptions
about how the tools work, and about the purpose of the options, are not really accurate.
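A quick arithmetic check on the two runs above (an observation about this sample output, not a statement about the argus source): the printed Rate column fits (TotPkts - 1) / Dur, so forcing Dur to the 5 s bin width with "-M hard" is exactly what turns the enormous 12658 rate into 0.2.

```python
# (TotPkts, Dur, printed Rate) rows copied from the two runs above.
hard_rows = [(2, 5.0, 0.2), (58, 5.0, 11.4), (73, 5.0, 14.4)]        # -M hard time 5s
soft_rows = [(2, 0.000079, 12658.228516), (58, 4.018598, 14.184051),
             (73, 3.516769, 20.473339)]                              # -M 5s

for pkts, dur, printed in hard_rows + soft_rows:
    # every printed rate matches (packets - 1) / duration
    assert abs((pkts - 1) / dur - printed) / printed < 1e-3
```

If the rate were simply TotPkts / Dur, the first hard bin would print 0.4, not 0.2; the offset by one looks like an intervals-per-second measure, but treat that reading as a guess inferred from these numbers.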


So, doing some experiments here may help to illuminate the situation.
Let's grab the 4.8 second duration flow from our starting data, and see how
rabins() deals with it under different options.

Here is the single flow record.  

% ra -r /tmp/data.out -s stime dur saddr daddr spkts dpkts sbytes dbytes state - host 192.168.0.70
                 StartTime        Dur            SrcAddr            DstAddr  SrcPkts  DstPkts     SrcBytes     DstBytes State 
2013/07/31.18:11:12.568137   4.821330       192.168.0.68       192.168.0.70       48       48         3168        18024   CON

Let's run rabins() against the single record, but with "-M time 1s", since it's only 4.8 seconds in duration.

% rabins -M time 1s -r /tmp/data.out -s stime dur saddr daddr spkts dpkts sbytes dbytes state - host 192.168.0.70
                 StartTime        Dur            SrcAddr            DstAddr  SrcPkts  DstPkts     SrcBytes     DstBytes State 
2013/07/31.18:11:12.568137   0.431863       192.168.0.68       192.168.0.70        4        4          263         1502   CON
2013/07/31.18:11:13.000000   1.000000       192.168.0.68       192.168.0.70       10       10          659         3755   CON
2013/07/31.18:11:14.000000   1.000000       192.168.0.68       192.168.0.70       10       10          659         3755   CON
2013/07/31.18:11:15.000000   1.000000       192.168.0.68       192.168.0.70       10       10          659         3755   CON
2013/07/31.18:11:16.000000   1.000000       192.168.0.68       192.168.0.70       10       10          659         3755   CON
2013/07/31.18:11:17.000000   0.389467       192.168.0.68       192.168.0.70        4        4          269         1502   CON
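The carving above is consistent with a uniform, overlap-proportional split of the flow's metrics across the bins it touches.  Here is a sketch under that assumption (argus's per-bin integer rounding may differ slightly):

```python
import math

def split_into_bins(stime, dur, pkts, binsize):
    """Spread `pkts` across time bins in proportion to the flow's
    overlap with each bin; yields (record_start, overlap, pkts_share)."""
    end = stime + dur
    b = math.floor(stime / binsize) * binsize
    out = []
    while b < end:
        lo, hi = max(stime, b), min(end, b + binsize)
        out.append((lo, hi - lo, pkts * (hi - lo) / dur))
        b += binsize
    return out

# The 192.168.0.70 flow: starts 12.568137 s into the minute,
# runs 4.821330 s, carries 48 source packets.
for start, overlap, share in split_into_bins(12.568137, 4.821330, 48, 1.0):
    print(f"{start:9.6f}  {overlap:.6f}  {share:5.2f}")
```

The fractional shares come out 4.30, 9.96, 9.96, 9.96, 9.96, 3.88, which round to the 4, 10, 10, 10, 10, 4 SrcPkts column in the output above.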

Let's try "-M time 0.5s".

% rabins -M time 0.5s -r /tmp/data.out -s stime dur saddr daddr spkts dpkts sbytes dbytes state - host 192.168.0.70
                 StartTime        Dur            SrcAddr            DstAddr  SrcPkts  DstPkts     SrcBytes     DstBytes State 
2013/07/31.18:11:12.568137   0.431863       192.168.0.68       192.168.0.70        4        4          263         1502   CON
2013/07/31.18:11:13.000000   0.500000       192.168.0.68       192.168.0.70        5        5          329         1877   CON
2013/07/31.18:11:13.500000   0.500000       192.168.0.68       192.168.0.70        5        5          329         1877   CON
2013/07/31.18:11:14.000000   0.500000       192.168.0.68       192.168.0.70        5        5          329         1877   CON
2013/07/31.18:11:14.500000   0.500000       192.168.0.68       192.168.0.70        5        5          329         1877   CON
2013/07/31.18:11:15.000000   0.500000       192.168.0.68       192.168.0.70        5        5          329         1877   CON
2013/07/31.18:11:15.500000   0.500000       192.168.0.68       192.168.0.70        5        5          329         1877   CON
2013/07/31.18:11:16.000000   0.500000       192.168.0.68       192.168.0.70        5        5          329         1877   CON
2013/07/31.18:11:16.500000   0.500000       192.168.0.68       192.168.0.70        5        5          329         1877   CON
2013/07/31.18:11:17.000000   0.389467       192.168.0.68       192.168.0.70        4        4          273         1506   CON


See, it seems to be doing the right thing…..


Let's do another experiment with the original data set, making the time bin
0.5s instead of 5s.  Pay particular attention to the Rate; this is
a calculated value: TotPkts / Dur.

% rabins -M 0.5s -r /tmp/data.out -m srcid -s seq ltime dur pkts rate
    Seq                        LastTime        Dur  TotPkts         Rate 
           2 2013/07/31.18:11:09.012263   0.000079        2 12658.228516
      199170 2013/07/31.18:11:10.981402   0.000085        2 11764.706055
      199272 2013/07/31.18:11:11.408303   0.000000        1     0.000000
         304 2013/07/31.18:11:13.000000   0.431863        8    16.208843
         304 2013/07/31.18:11:13.500000   0.500000       10    18.000000
         304 2013/07/31.18:11:14.000000   0.500000       13    24.000000
         304 2013/07/31.18:11:14.500000   0.500000       14    26.000000
         304 2013/07/31.18:11:15.000000   0.500000       12    22.000000
         304 2013/07/31.18:11:15.500000   0.500000       12    22.000000
         304 2013/07/31.18:11:16.000000   0.500000       12    22.000000
         304 2013/07/31.18:11:16.500000   0.500000       19    36.000000
         304 2013/07/31.18:11:17.000000   0.500000       18    34.000000
         304 2013/07/31.18:11:17.389467   0.389467        8    17.973282
           1 2013/07/31.18:11:18.516769   0.000037        2 27027.025391

This is doing exactly as I would expect.  Let's see what the "-M hard" option does.

% rabins -M 0.5s -r /tmp/data.out -m srcid -s seq ltime dur pkts rate -M hard
    Seq                        LastTime        Dur  TotPkts         Rate 
           2 2013/07/31.18:11:09.500000   0.500000        2     2.000000
      199170 2013/07/31.18:11:11.000000   0.500000        2     2.000000
      199272 2013/07/31.18:11:11.500000   0.500000        1     0.000000
         304 2013/07/31.18:11:13.000000   0.500000        8    14.000000
         304 2013/07/31.18:11:13.500000   0.500000       10    18.000000
         304 2013/07/31.18:11:14.000000   0.500000       13    24.000000
         304 2013/07/31.18:11:14.500000   0.500000       14    26.000000
         304 2013/07/31.18:11:15.000000   0.500000       12    22.000000
         304 2013/07/31.18:11:15.500000   0.500000       12    22.000000
         304 2013/07/31.18:11:16.000000   0.500000       12    22.000000
         304 2013/07/31.18:11:16.500000   0.500000       19    36.000000
         304 2013/07/31.18:11:17.000000   0.500000       18    34.000000
         304 2013/07/31.18:11:17.500000   0.500000        8    14.000000
           1 2013/07/31.18:11:19.000000   0.500000        2     2.000000

All of this is caused by aliasing effects.  And all of the answers are correct,
based on the assumptions and on how the data is treated.  You have to understand
the tools to understand the numbers.

And of course, if you want to see the bins that don't have anything in them,
use the "-M zero" option.

thoth:clients carter$ rabins -M 0.5s -r /tmp/data.out -m srcid -s seq ltime dur pkts rate -M hard zero
    Seq                        LastTime        Dur  TotPkts         Rate 
           2 2013/07/31.18:11:09.500000   0.500000        2     2.000000
           0 2013/07/31.18:11:10.000000   0.500000        0     0.000000
           0 2013/07/31.18:11:10.500000   0.500000        0     0.000000
      199170 2013/07/31.18:11:11.000000   0.500000        2     2.000000
      199272 2013/07/31.18:11:11.500000   0.500000        1     0.000000
           0 2013/07/31.18:11:12.000000   0.500000        0     0.000000
           0 2013/07/31.18:11:12.500000   0.500000        0     0.000000
         304 2013/07/31.18:11:13.000000   0.500000        8    14.000000
         304 2013/07/31.18:11:13.500000   0.500000       10    18.000000
         304 2013/07/31.18:11:14.000000   0.500000       13    24.000000
         304 2013/07/31.18:11:14.500000   0.500000       14    26.000000
         304 2013/07/31.18:11:15.000000   0.500000       12    22.000000
         304 2013/07/31.18:11:15.500000   0.500000       12    22.000000
         304 2013/07/31.18:11:16.000000   0.500000       12    22.000000
         304 2013/07/31.18:11:16.500000   0.500000       19    36.000000
         304 2013/07/31.18:11:17.000000   0.500000       18    34.000000
         304 2013/07/31.18:11:17.500000   0.500000        8    14.000000
           0 2013/07/31.18:11:18.000000   0.500000        0     0.000000
           0 2013/07/31.18:11:18.500000   0.500000        0     0.000000
           1 2013/07/31.18:11:19.000000   0.500000        2     2.000000

Your own experiments don't really show much at all.  Print more fields, modify
the parameters of your command.  Use deterministic input, so you can see
how the output relates to the input.

> 
> FINAL QUESTION:
> Should I simply be "throwing away" any data returned within the first "ARGUS_FLOW_STATUS_INTERVAL" when using rabins() as it appears to be inaccurately reporting?
> 

No, don't throw away data.  The data is not inaccurate.

> 
> SORRY ONE MORE... :)
> Also, does ra() only report flows (`seq`) that have flow records reporting, while rabins() (with `-M hard`) reports all flows that have any activity within the bin?
> 
> 
ra() prints out every record.  rabins() prints out the contents of bins, but like argus(),
rabins() does not, by default, generate any output when there isn't any data.
Use the "-M zero" option to print out empty bins as well.

ragraph() uses all of these options; take a look at the source code of raprint.pl to
see how that is being done.

> 
> Thanks,
> 
> Matt
> 
> On Jul 30, 2013, at 4:32 PM, Carter Bullard <carter at qosient.com> wrote:
> 
>> Hey Matt,
>> Have to see the data that generated the output to know if
>> there is a problem.
>> 
>> The key here is the ARGUS_FLOW_STATUS_INTERVAL.  If it is
>> very large in comparison to your bin size, and you 
>> have a small number of records, then this kind of
>> skewing can occur.  But have to see the data.
>> 
>> Your rabins() call will cut flow records into 5 second bins,
>> normally distributing the metrics (pkts, bytes, appbytes, etc…),
>> and then when its time to output the bins, it will apply the
>> aggregation strategy to all the flow records that are in
>> each bin.
>> 
>> Your -B 5s will throw away records that precede the apparent
>> start time of the stream, and it is only used when reading live data.
>> Don't use the "-B secs" option when reading files.
>> That may clear up your problem.
>> 
>> So grab a single flow record's status records, writing them out to a file.
>> Then run rabins() to see how it carves up the flow record.
>> You should see that it processes well.
>> 
>> Carter
>> 
>> On Jul 30, 2013, at 4:19 PM, Matt Brown <matthewbrown at gmail.com> wrote:
>> 
>>> Hello all,
>>> 
>>> 
>>> Does rabins() "ramp up to normal" over N bins?
>>> 
>>> 
>>> I'd like to start working on calculating moving averages to help
>>> identify performance outliers (like "spikes" in `loss` or `rate`).
>>> 
>>> For this purpose, I believe grabbing data from the output of rabins()
>>> would serve me well.
>>> 
>>> 
>>> For example, if I take historic argus data and run it through the
>>> following rabins() invocation, I see some odd things that can only be
>>> noted as "ramping up":
>>> 
>>> 
>>> for f in $(ls -m1 ~/working/*) ; do (
>>> rabins -M hard time 5s -B 5s -r $f -m saddr -s ltime rate - port 5432
>>> and src host 192.168.10.22
>>> ) >> ~/aggregated_rate ; done
>>> 
>>> 
>>> The first few and the last few resulting records per file seem to not
>>> be reporting correctly.
>>> 
>>> For example, these dudes at 192.168.10.22 utilize a postgres DB
>>> replication package called bucardo.  During idle time, bucardo sends
>>> heartbeat info, and appears to be holding at about 47-49 packets per
>>> second (rate).
>>> 
>>> However, I am seeing the following in my rabins() resultant data (note
>>> the presence of a field label header == the start of a new rabins() from
>>> the above for..loop):
>>> 
>>> 2013-07-25 00:59:25.000000    47.400000
>>> 2013-07-25 00:59:30.000000    47.400000
>>> 2013-07-25 00:59:35.000000    48.000000
>>> 2013-07-25 00:59:40.000000    48.000000
>>> 2013-07-25 00:59:45.000000    40.600000
>>> 2013-07-25 00:59:50.000000    21.400000
>>> 2013-07-25 00:59:55.000000    15.400000
>>> 2013-07-25 01:00:00.000000     5.000000
>>> 2013-07-25 01:00:05.000000     0.000000
>>>                LastTime         Rate
>>> 2013-07-25 01:00:05.000000     0.200000
>>> 2013-07-25 01:00:10.000000     0.600000
>>> 2013-07-25 01:00:15.000000     0.400000
>>> 2013-07-25 01:00:35.000000     0.400000
>>> 2013-07-25 01:00:40.000000     1.000000
>>> 2013-07-25 01:00:45.000000     6.200000
>>> 2013-07-25 01:00:50.000000    25.400000
>>> 2013-07-25 01:00:55.000000    32.400000
>>> 2013-07-25 01:01:00.000000    41.800000
>>> 2013-07-25 01:01:05.000000    47.600000
>>> 2013-07-25 01:01:10.000000    48.600000
>>> 
>>> [The source files were written with rastream().]
>>> 
>>> 
>>> It is well worth noting that if I start an rabins() reading from the
>>> argus() socket with the following invocation, the same sort of thing
>>> occurs:
>>> # rabins -M hard time 5s -B 5s -S 127.0.0.1:561 -m saddr -s ltime rate
>>> - port 5432 and src host 192.168.10.22
>>>                LastTime         Rate
>>> 2013-07-30 15:42:55.000000     1.400000
>>> 2013-07-30 15:43:00.000000     0.600000
>>> 2013-07-30 15:43:05.000000    33.800000
>>> 2013-07-30 15:43:10.000000    47.400000
>>> 2013-07-30 15:43:15.000000    58.600000
>>> 2013-07-30 15:43:20.000000    87.600000
>>> 2013-07-30 15:43:25.000000    96.200000
>>> 2013-07-30 15:43:30.000000    96.000000
>>> 2013-07-30 15:43:35.000000   134.200000
>>> 2013-07-30 15:43:40.000000   137.200000
>>> 2013-07-30 15:43:45.000000   137.400000
>>> 2013-07-30 15:43:50.000000   136.600000
>>> 2013-07-30 15:43:55.000000   139.800000
>>> 2013-07-30 15:44:00.000000   136.200000 <-- `rate` averages about here
>>> going forward
>>> 
>>> 
>>> It's irrelevant which field I utilize, the same instance occurs:
>>> # rabins -M hard time 5s -B 5s -S 127.0.0.1:561 -m saddr -s ltime load
>>> - port 5432 and src host 192.168.10.22
>>>                LastTime     Load
>>> 2013-07-30 15:50:15.000000 1461.19*
>>> 2013-07-30 15:50:20.000000 42524.7*
>>> 2013-07-30 15:50:25.000000 54329.5*
>>> 2013-07-30 15:50:30.000000 55244.8*
>>> 2013-07-30 15:50:35.000000 90164.8*
>>> 2013-07-30 15:50:40.000000 92539.1*
>>> 2013-07-30 15:50:45.000000 94827.1*
>>> 2013-07-30 15:50:50.000000 95292.7*
>>> 2013-07-30 15:50:55.000000 96286.3*
>>> 2013-07-30 15:51:00.000000 94857.6*
>>> 2013-07-30 15:51:05.000000 130699.*
>>> 2013-07-30 15:51:10.000000 149979.*
>>> 2013-07-30 15:51:15.000000 149320.*
>>> [killed]# rabins -M hard time 5s -B 5s -S 127.0.0.1:561 -m saddr -s
>>> ltime load - port 5432 and src host 192.168.2.22
>>>                LastTime     Load
>>> 2013-07-30 15:52:35.000000 33894.4*
>>> 2013-07-30 15:52:40.000000 3134.84*
>>> 2013-07-30 15:52:45.000000 39262.4*
>>> 2013-07-30 15:52:50.000000 40024.0*
>>> 2013-07-30 15:52:55.000000 41188.7*
>>> 2013-07-30 15:53:00.000000 40259.2*
>>> 2013-07-30 15:53:05.000000 75057.6*
>>> 2013-07-30 15:53:10.000000 97160.0*
>>> 2013-07-30 15:53:15.000000 106520.*
>>> 2013-07-30 15:53:20.000000 138504.*
>>> 2013-07-30 15:53:25.000000 153835.*
>>> 2013-07-30 15:53:30.000000 152892.*
>>> 2013-07-30 15:53:35.000000 154017.* <-- `load` averages here going forward
>>> 
>>> This happens whether or not I perform field aggregation (`-m saddr`).
>>> 
>>> 
>>> Why is this happening?
>>> 
>>> 
>>> This seems like it will really screw up calculating moving averages
>>> (figuring out spikes, etc.) from the rabins() resultant data.
>>> 
>>> 
>>> Thanks!
>>> 
>>> Matt
>>> 
>> 
