A couple troubleshooting questions...

Carter Bullard carter at qosient.com
Thu Jul 24 06:02:27 EDT 2014


Hey Craig,
Did your man records ever print out?

I think you should use the default IP and TCP timeouts.  You're holding onto
caches way too long.
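
For example (a sketch only: if memory serves, the sample argus.conf ships with
much shorter compiled-in defaults, on the order of 30s for IP and 60s for TCP,
so commenting the overrides out should fall back to them; verify against the
sample argus.conf in your build):

   # comment these out to fall back to the compiled-in defaults
   # ARGUS_IP_TIMEOUT=900
   # ARGUS_TCP_TIMEOUT=1800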

Carter

On Jul 23, 2014, at 7:24 PM, Craig Merchant <craig.merchant at oracle.com> wrote:

> I am aggregating my flows using the standard 5-tuple model every five minutes.  We’ve got Gigamon taps between all of our top of rack switch clusters and the core switches.
> 
> What does TCP 0 mean if the flows are aggregated with racluster and I’m using a 5-tuple model?
>  
> Is it possible to get the management records by connecting to argus or radium rather than a file?  I tried:  ra -S argus_ip:561 -M xml - man, but that didn’t give me any records.  I tried the same thing against my radium instance and I got data, but nothing that includes any performance data.  It looked like:
>  
> <ArgusFlowRecord  StartTime = "2014-07-23T15:46:32.000131" Flags = " * g     " Proto = "tcp" SrcAddr = "15.23.223.33" SrcPort = "10503" Dir = "<?>" DstAddr = "23.67.242.93" DstPort = "47536" Pkts = "9" Bytes = "6997" State = "FIN"></ArgusFlowRecord>
>  
> My argus.conf looks like this.  Am I missing something?
>  
> ARGUS_FLOW_TYPE="Bidirectional"
> ARGUS_FLOW_KEY="CLASSIC_5_TUPLE"
> ARGUS_DAEMON=no
> ARGUS_MONITOR_ID="argus01"
> ARGUS_ACCESS_PORT=561
> ARGUS_BIND_IP="10.10.10.10"
> ARGUS_INTERFACE=dnacluster:10@28
> ARGUS_GO_PROMISCUOUS=no
> ARGUS_OUTPUT_STREAM=udp://224.0.20.21:561
> ARGUS_SET_PID=yes
> ARGUS_PID_PATH="/var/run"
> ARGUS_FLOW_STATUS_INTERVAL=5
> ARGUS_MAR_STATUS_INTERVAL=60
> ARGUS_IP_TIMEOUT=900
> ARGUS_TCP_TIMEOUT=1800
> ARGUS_GENERATE_RESPONSE_TIME_DATA=yes
> ARGUS_GENERATE_PACKET_SIZE=yes
> ARGUS_GENERATE_APPBYTE_METRIC=yes
> ARGUS_GENERATE_TCP_PERF_METRIC=yes
> ARGUS_GENERATE_BIDIRECTIONAL_TIMESTAMPS=yes
> ARGUS_CAPTURE_DATA_LEN=10
> ARGUS_SELF_SYNCHRONIZE=yes
> ARGUS_KEYSTROKE="yes"
>  
> As you can see from the interface setting, we are using the PF_RING DNA/Libzero drivers.  I compiled the latest ixgbe drivers from Intel and tried those, and the packet loss was as bad as or worse than with the PF_RING drivers.  We are using the 5.3.3 version of the drivers, which, I believe, have that SELECT() bug that causes argus to run at 100% CPU all of the time.
>  
> I grabbed a bunch of non-aggregated flow records that had gaps in the packets and added the sgap and dgap fields.  I’ll send that to you offline.  For whatever reason, the header row didn’t get printed; the last two fields are sgap and dgap.  I opened it in Excel: the average sgap is 21,903 and the average dgap is 10,812.
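> 
> Those averages can also be pulled straight from the records on the command line; a sketch, assuming ra’s -c delimiter option and a hypothetical file name (the numeric guard skips a header row if one is printed):
> 
>    ra -r flows.arg -s sgap,dgap -c ',' - tcp | \
>      awk -F',' '$1 ~ /^[0-9]/ { s += $1; d += $2; n++ } END { if (n) print s/n, d/n }'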
>  
> Each argus instance that we’re running probably sees 3-8 Gbps pretty much 24/7. 
>  
>  
>  
> Thanks for your help!
>  
> From: Carter Bullard [mailto:carter at qosient.com] 
> Sent: Wednesday, July 23, 2014 3:23 PM
> To: Craig Merchant
> Cc: Argus
> Subject: Re: [ARGUS] A couple troubleshooting questions...
>  
> Hey Craig,
> Here are a few suggestions on what to look for.  If you do find something
> please send your observations to the list.
>  
> A couple of things first.  Are you basing these observations on primitive
> argus data (data straight from argus) or on processed data (aggregated
> argus flows)?
>  
> If these observations are coming from primitive Argus data: 
> the ?’s and ‘g’aps can be indications that your Argus is either not
> getting all the packets from the wire, or that there is asymmetric routing,
> such that all the packets don’t come down the wire/interface that you’re
> monitoring.
>  
> Argus management records have the argus packet drop rate in them.  If argus
> isn’t getting all the packets and the libpcap interface is dropping packets,
> then the ‘man’ record will show this.  When you print the man records using
> xml, it will show the number of dropped packets during the reporting interval.
>  
>    ra -S argus.source -M xml - man
>    ra -r repository.file(s) -M xml - man
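> 
> For a quick check, you can filter the xml output for that counter; a sketch, assuming the attribute is spelled PktsDropped as it appears in the records:
> 
>    ra -S argus.source -M xml - man | grep -o 'PktsDropped="[0-9]*"'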
>  
> If the ‘PktsDropped’ number is greater than 0, then argus is having problems keeping up
> with the captured load, and the packet loss is between the libpcap interface
> and argus reading packets from the interface.  This is the only place where
> we can directly report on packet capture infrastructure loss.  If the packets
> are lost in the switch that is port mirroring packets, or if they are dropped
> by the sensor’s capture ethernet interface, there isn’t any way that we
> can “know” that they were dropped.
>  
> The ‘g’ap tracking is our way of indicating that we are seeing gaps, which means
> we didn’t see all the packets for this flow.  You can print the size of the gaps
> from the TCP records with “-s +sgap +dgap” in order to understand how much we
> missed, which can help in your understanding of the problem.
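> 
> For example, against a file of primitive records (the file name is hypothetical):
> 
>    ra -r argus.out -s +sgap +dgap - tcp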
>  
> Because some TCP flow idle times do exceed the Argus default TCP idle time,
> there will be TCP status flow records that have the ‘?’ in them.  To understand
> whether this is the case, look earlier in your archive to see if you saw this
> flow before.  If so, there is an answer; if not, we’re back to thinking that
> we aren’t seeing all the packets.
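> 
> A sketch of that kind of archive lookup, using the endpoints from your example record above (the archive path is hypothetical):
> 
>    ra -r /archive/2014/07/23/* - host 15.23.223.33 and host 23.67.242.93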
>  
> All of that can help in figuring out how bad the issue is and where it might be.
> Packet loss in the collection infrastructure is expected above 1G.  If you are
> port mirroring, it can be expected at any speed, depending on how the mirroring
> is implemented.
>  
>  
> If these observations are coming from aggregated Argus data:
>  
> the 0 TCP port number and ‘g’aps can be expected when using non-default
> aggregation rules.  If so, we have a somewhat long conversation ahead of us,
> but it is important to email about it if this is the case.
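> 
> For reference, racluster’s default key is the full 5-tuple; a sketch of a non-default aggregation (file names hypothetical) that drops the ports from the key, after which aggregated records can report port 0:
> 
>    racluster -r flows.arg -m saddr daddr proto -w agg.arg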
>  
>  
> Carter
>  
> On Jul 23, 2014, at 5:41 PM, Craig Merchant <craig.merchant at oracle.com> wrote:
> 
> 
> I’ve been trying to troubleshoot why Argus is having a tough time determining the direction of flows (approximately 40% of flows).  We also seem to be seeing a fairly high number of flows with gaps (approximately 15%).  Oddly enough, though, only about 20% of flows with questionable direction have gaps in them.
>  
> What I am seeing is that the overwhelming majority of traffic with gaps in the sequence numbers has either TCP 0 or TCP 25 as the source port or TCP 25 as the destination.  After doing a little reading (http://www.lovemytool.com/blog/2013/08/the-strange-history-of-port-0-by-jim-macleod.html), it seems TCP 0 doesn’t mean that the source port was defined as 0, but rather that no Layer 4 header was included in the packet.  The article implies that packet fragmentation is often a cause of this, but I’m not seeing TCP flags indicating any kind of fragmentation.
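> 
> One way to check for fragments directly, since fragmentation is signaled in the IP header rather than in TCP flags: a standard tcpdump filter on the IP flags/fragment-offset field, assuming you have a pcap from the same tap (the file name is hypothetical):
> 
>    tcpdump -nr capture.pcap 'ip[6:2] & 0x3fff != 0'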
>  
> What does a packet with TCP 0 as a source port mean in Argus?
>  
> Is there anything special about SMTP that might generate a higher volume of gaps than other types of traffic?  We’re an ESP, so we send and receive a ton of email on behalf of our customers.  But I’m also not seeing gaps in other types of traffic (like HTTPS) between us and the Internet.
>  
> Thanks.
>  
> Craig
