lost flows and memory leak in radium

Craig Merchant cmerchant at responsys.com
Wed Jan 30 19:28:46 EST 2013


So, it appears to be a problem related to my .rarc file.  And it seems that the problem is with the first line:

RA_ARGUS_SERVER=192.168.1.40:561  ## This is the radium server, not the argusd daemon that is defined in radium.conf

I set that value to be the radium server so that my ra clients could automatically connect to it without having to specify it on the command line.  But (I’m guessing) because radium is considered an ra client, it sees that value in the .rarc file and tries to connect to itself?

Is there a way to make radium ignore the .rarc file while other clients can still use it?
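
One workaround, if radium really is picking up the client .rarc: run radium from an account whose home directory has no .rarc, and give radium its upstream source explicitly in radium.conf.  A minimal sketch, where the sensor address 192.168.1.20 is a placeholder:

    # ~/.rarc for interactive users: ra* clients default to the radium collector
    RA_ARGUS_SERVER=192.168.1.40:561

    # /etc/radium.conf: radium's own upstream is the argusd on the sensor
    RADIUM_ARGUS_SERVER=192.168.1.20:561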

Thanks.

Craig

From: Carter Bullard [mailto:carter at qosient.com]
Sent: Wednesday, January 30, 2013 5:55 AM
To: Craig Merchant
Cc: Argus (argus-info at lists.andrew.cmu.edu)
Subject: Re: [ARGUS] lost flows and memory leak in radium

Hey Craig,
There is no state in radium() that would carry over from one run to the next.  There are potential problems if you have multiple radii running with the exact same configuration.  How are you restarting radium?

Yes, it would appear that there is a memory leak in the label code.
I'm 'on the road' today, but will look into it tonight.

Sorry for any inconvenience,

Carter

On Jan 29, 2013, at 2:10 PM, Craig Merchant <cmerchant at responsys.com> wrote:
The first thing I did was get radium working without labels.  I was able to test various ra clients against it and they worked perfectly.  I then added label support to radium.conf and restarted.  That’s when I saw the high CPU utilization and the lost flows.  So, I assumed the problem was with the labels.

I tried label files with a small number of networks, a small number of hosts, long labels, short labels, etc.  Each time, I restarted radium after changing the label file.

It wasn’t until I tried rebooting the entire machine that radium started behaving normally again.  I rebooted the server without any label support in radium.conf.  Radium came up working normally.  I restarted radium with the init script and experienced the high CPU utilization and flow loss.

I enabled label support in radium.conf and used my 1500+ line label file and rebooted again.  Radium came up appearing to run normally: CPU around 20% and memory around 2%.  But after twelve hours or so, the memory use was up to about 55%.  No ra clients connected to radium during that time.

So, from what I can tell, there are two issues, and they don’t seem to be related to labels.  Restarting radium (with labels on or off) causes it to run at around 185% CPU, and flows get dropped.  Radium will run normally after a machine reboot (with labels on or off), but there appears to be a slow memory leak.
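
One way to put numbers on the leak is to sample radium’s CPU and resident memory on a schedule; a minimal sketch, assuming a Linux ps, with an arbitrary log path and five-minute interval:

    # log radium's CPU% and RSS (in KB) every five minutes so the
    # growth rate can be measured over a 12-hour window
    while true; do
        echo "$(date '+%F %T') $(ps -C radium -o %cpu=,rss=)"
        sleep 300
    done >> /tmp/radium-mem.log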

Thx.

Craig


From: Carter Bullard [mailto:carter at qosient.com]
Sent: Tuesday, January 29, 2013 3:01 AM
To: Craig Merchant
Cc: Argus (argus-info at lists.andrew.cmu.edu)
Subject: Re: [ARGUS] lost flows and memory leak in radium

Hey Craig,
Hmmm, you've got too much going on to know what is happening.
Your problem was that radium() had poor performance when providing labels.  Now it's when you restart radium().  Which problem are we working on?

If you restart radium, all the clients will close their connections, and then reconnect.  Is that where the data loss occurs?  Why are you restarting radium?

Carter



On Jan 29, 2013, at 1:08 AM, Craig Merchant <cmerchant at responsys.com> wrote:
Carter,

After much testing, it doesn’t appear that the problem is with the size or makeup of the IANA label file.  Restarting radium is the problem.  Restarting the service, even with labeling commented out in the /etc/radium.conf file, causes the spike in CPU and the data loss for ra clients.

What kind of data can I provide that would be helpful to you?

Thx.

Craig


From: Carter Bullard [mailto:carter at qosient.com]
Sent: Saturday, January 26, 2013 3:32 PM
To: Craig Merchant
Cc: Argus (argus-info at lists.andrew.cmu.edu)
Subject: Re: [ARGUS] lost flows and memory leak in radium

Hey Craig,
This is interesting, as we haven't had much in the way of pure radium performance
reports with labeling.  The cycle requirements for labels will vary quite a bit depending
on the strategy.  Address-based labeling will perform the best, as we have a pretty fast
Patricia tree structure for address and label lookup.  The flow-based labeling may be
the worst performing, as we have to switch out the search contexts for each rule.
And there's no telling how fast the GeoIP lookups go, but it's been the most used label
to date, so I think they do a pretty good job.

Can you try a few sample label strategies, just to tease out where the loads are?
Maybe start with a single rule in each label strategy, doing one strategy at a time,
and then ramp them up with 2, 4, 8, etc. rules, until we get to your complexity.

A good sample would be a label rule that labels everything with a small label,
vs. a rule that labels everything with a large label, so that we're accounting for
label size as an impact on performance.
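
For instance, a hypothetical pair of test files for that comparison, assuming the
same address/label line format as the existing 1500-line file (the catch-all
prefix and the label text are illustrative):

    # small-label test: one catch-all rule, short label
    0.0.0.0/0    x

    # large-label test: the same rule with a long label, padded to a few
    # hundred characters, so label size is the only variable
    0.0.0.0/0    aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa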

That will help.  There are a lot of queues, a lot of buffering, a lot of things going on.

Can you share your radium.conf file, and the ralabel.conf-style classifier file?

Carter

On Jan 26, 2013, at 2:45 PM, Craig Merchant <cmerchant at responsys.com> wrote:

I tried rebooting the server with the label options commented out in radium.conf.  When the server came up, radium was running at 11% CPU and there were no pauses or loss of flows when clients connected.  I added the labeling config back to radium.conf and restarted.  The CPU ran at over 190% and the flow loss and pauses returned.

I commented those lines back out again and restarted radium.  Radium ran at around 150% with flow loss and pauses.  I rebooted the server again, and radium was back to normal.

From: argus-info-bounces+cmerchant=responsys.com at lists.andrew.cmu.edu [mailto:argus-info-bounces+cmerchant=responsys.com at lists.andrew.cmu.edu] On Behalf Of Craig Merchant
Sent: Friday, January 25, 2013 4:44 PM
To: Argus (argus-info at lists.andrew.cmu.edu)
Subject: [ARGUS] lost flows and memory leak in radium

We’ve got one data center currently running argus on our IDS sensor (CentOS 6.2) and it listens on a DNA/libzero interface thanks to code from Chris Wakelin.  So, we do experience the bug in PF_RING where some select() call causes argusd to run at 100% CPU all the time.

We probably average between 4-8 Gbps of traffic.  A separate host runs radium and pulls the flows off of the sensor by connecting to TCP 561.  Top shows radium running at 190% CPU most of the time.
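
For reference, a minimal sketch of what the collector’s radium.conf would contain for that topology; the sensor address is a placeholder:

    # run radium as a daemon on the collector host
    RADIUM_DAEMON=yes
    # pull flows from argusd on the sensor
    RADIUM_ARGUS_SERVER=192.168.1.20:561
    # port that local ra clients attach to
    RADIUM_ACCESS_PORT=561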

If I connect any of the ra clients to radium (such as ra -S radium:561), flows will appear for 10-30 seconds and then pause for 30-60 seconds.  If I connect the ra clients directly to the remote argusd instance, they work fine.  We’ll be deploying argus in a second data center soon, so we’d really like to take advantage of radium’s ability to dedup flows.

Radium’s memory usage slowly climbed whether an ra client was connected or not.

I tried commenting out the two RADIUM_CLASSIFIER settings and restarted radium.  Our label file is something like 1500 lines long, so I thought that could be causing problems.  Radium used about 30% less CPU, and memory stayed at 0.8%.  The intermittent pauses still happened, though.

I then tried setting RADIUM_CLASSIFIER=no instead of commenting it out, and the CPU went back up by 30% and the memory usage climbed steadily with no ra clients connected.  Does that not disable labeling in radium?
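
To make the three states concrete, a hypothetical view of the relevant radium.conf lines (the directive names follow the two classifier settings mentioned above; the path is illustrative):

    # labeling enabled
    RADIUM_CLASSIFIER=yes
    RADIUM_CLASSIFIER_FILE="/etc/ralabel.conf"

    # labeling off by omission (the state that used ~30% less CPU)
    #RADIUM_CLASSIFIER=yes
    #RADIUM_CLASSIFIER_FILE="/etc/ralabel.conf"

    # labeling explicitly disabled (the state that still climbed in memory)
    RADIUM_CLASSIFIER=no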

I’m not sure how to diagnose it any further.  My argus.conf and radium.conf are in the spreadsheet I sent you earlier.  Let me know what I can do to help diagnose this further.

Thanks.

Craig



