Partial Fragments

elof2 at sentor.se
Mon Oct 27 09:53:10 EDT 2014


Thanks for the explanation.

I see that argus has dropped 0 packets, so I guess the missing first 
fragments were dropped outside of the sensor due to an extreme spike of 
packets.
(There were multiple sessions like this one on other ports running in 
parallel. I guess the SPAN port in the switch got choked.)

/Elof


On Sun, 26 Oct 2014, Carter Bullard wrote:

> So, partial fragment flows are where argus doesn’t see the first packet of the fragment.
> The first packet has the UDP header, which is where the port numbers are, and the
> fragments only have an IP header.  So if we don’t see the first packet of the fragment,
> we don’t see all the identifiers needed to match the packet to the flow, and so we report
> the ‘partially fragmented’ flow.
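The point about why only the first fragment can be matched on ports can be sketched in Python (an illustration of IPv4 fragmentation fields, not argus code): only the fragment at offset 0 carries the UDP header.

```python
import struct

def fragment_info(ip_header: bytes):
    """Extract fragmentation fields from a raw IPv4 header (first 20 bytes)."""
    # Bytes 6-7: 3 flag bits + 13-bit fragment offset (in 8-byte units).
    flags_frag = struct.unpack("!H", ip_header[6:8])[0]
    more_fragments = bool(flags_frag & 0x2000)   # MF ("more fragments") bit
    offset = (flags_frag & 0x1FFF) * 8           # byte offset of this fragment
    first = offset == 0
    # Only the first fragment (offset 0) carries the UDP header, and
    # therefore the port numbers a flow monitor needs for the 5-tuple key.
    return first, more_fragments, offset

def make_hdr(flags_frag):
    """Build a minimal synthetic IPv4 header; only bytes 6-7 matter here."""
    hdr = bytearray(20)
    hdr[6:8] = struct.pack("!H", flags_frag)
    return bytes(hdr)

first_frag = make_hdr(0x2000)        # MF set, offset 0: UDP header present
later_frag = make_hdr(0x2000 | 23)   # MF set, offset 23*8 = 184: IP header only
print(fragment_info(first_frag))     # (True, True, 0)
print(fragment_info(later_frag))     # (False, True, 184)
```

A monitor that never sees the offset-0 packet only ever sees the second kind, so it cannot recover the port numbers.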
>
> Argus tracks these partial fragments by adding the fragid to its flow tracking key.  If
> you printed the ‘sipped’ field, you would see that they are all different.  When you
> racluster() this data, all of those individual flows will be aggregated together.
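The aggregation Carter describes can be mimicked outside argus (a minimal Python sketch of the idea, not argus internals): with the fragment id in the tracking key every partial fragment is a distinct flow, and dropping the fragid from the key merges them, the way racluster() does.

```python
from collections import defaultdict

# Per-fragment flow records as argus might emit them: the tracking key
# includes the IP fragment id, so each record looks like a distinct flow.
# (Addresses and fragids here are made up for illustration.)
records = [
    {"saddr": "10.10.10.10", "daddr": "10.20.20.20", "proto": "udp",
     "fragid": fid, "pkts": 1, "bytes": 135}
    for fid in (0x1A2B, 0x1A2C, 0x1A2D, 0x1A2E)
]

def aggregate(records, key_fields):
    """Merge records that share the given key fields, summing the counters."""
    flows = defaultdict(lambda: {"pkts": 0, "bytes": 0})
    for r in records:
        key = tuple(r[f] for f in key_fields)
        flows[key]["pkts"] += r["pkts"]
        flows[key]["bytes"] += r["bytes"]
    return dict(flows)

# With fragid in the key: four separate single-packet flows.
print(len(aggregate(records, ("saddr", "daddr", "proto", "fragid"))))  # 4

# Aggregating on addresses and protocol only: one flow, counters summed.
merged = aggregate(records, ("saddr", "daddr", "proto"))
print(merged[("10.10.10.10", "10.20.20.20", "udp")])  # {'pkts': 4, 'bytes': 540}
```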
>
>
> Argus is designed to track fragments and match them to the first packet of the fragment
> using the fragment id.  The first packet is the glue for tying a fragment to its parent
> flow.  When there are fragments, argus puts the ‘F’ in the flow status field.
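The "first packet as glue" idea can be sketched as a small lookup keyed by (saddr, daddr, fragid) (illustrative Python only, with made-up field names, not argus code): fragments that find a cached first packet get matched ("F"), and orphans are reported as partial fragments ("f").

```python
# (saddr, daddr, ipid) -> full 5-tuple flow key learned from the first packet.
frag_cache = {}

def classify(pkt):
    """Return (flow_key, status) for a fragment: 'F' if matched, 'f' if partial."""
    key = (pkt["saddr"], pkt["daddr"], pkt["ipid"])
    if pkt["offset"] == 0:
        # First fragment: UDP header present, so remember the full flow key.
        flow = (pkt["saddr"], pkt["sport"], pkt["daddr"], pkt["dport"], "udp")
        frag_cache[key] = flow
        return flow, "F"                 # fragment matched to its flow
    flow = frag_cache.get(key)
    if flow is not None:
        return flow, "F"                 # later fragment, first packet was seen
    return key, "f"                      # partial fragment: first packet missing

seen_first = {"saddr": "10.10.10.10", "sport": 443, "daddr": "10.20.20.20",
              "dport": 59297, "ipid": 7, "offset": 0}
follow_up  = {"saddr": "10.10.10.10", "daddr": "10.20.20.20",
              "ipid": 7, "offset": 1480}
orphan     = {"saddr": "10.10.10.10", "daddr": "10.20.20.20",
              "ipid": 8, "offset": 1480}

print(classify(seen_first)[1])   # F
print(classify(follow_up)[1])    # F
print(classify(orphan)[1])       # f -> reported as a partial fragment
```

If a load balancer steers the first fragment down a different path, the cache entry never appears and every remaining fragment of that datagram lands in the "f" branch.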
>
> This situation, where you see a lot of partial fragments as well as full fragments,
> can happen when there is content-based or flow-based load balancing, or NATing.
>
> The load balancer does the same thing argus does, and treats the first packet of a
> fragment differently than the other fragments, because the packet looks like a different flow.
> The first packet is forwarded down one path, and the other fragments are potentially
> forwarded down another.  So argus sees a lot of fragments that don’t have a first packet.
>
>
> Hopefully this was useful?
> Carter
>
>
>> On Oct 24, 2014, at 6:31 AM, elof2 at sentor.se wrote:
>>
>> Hi Carter!
>>
>> stime            flgs        proto      saddr               dir     daddr                 spkts    dpkts       sbytes       dbytes         state      ltime
>> ...
>> ...
>> 02:56:56.508553  e    f      udp        10.10.10.10         ->      10.20.20.20               1        0          135            0           INT    02:56:56.508553
>> 02:56:56.509244  e    f      udp        10.10.10.10         ->      10.20.20.20               1        0          135            0           INT    02:56:56.509244
>> 02:56:56.509580  e    f      udp        10.10.10.10         ->      10.20.20.20               1        0          135            0           INT    02:56:56.509580
>> 02:56:56.516669  e    f      udp        10.10.10.10         ->      10.20.20.20               1        0          135            0           INT    02:56:56.516669
>> 02:56:56.516748  e    f      udp        10.10.10.10         ->      10.20.20.20               1        0          135            0           INT    02:56:56.516748
>> 02:56:56.531167  e    f      udp        10.10.10.10         ->      10.20.20.20               1        0          135            0           INT    02:56:56.531167
>> 02:56:56.531575  e    f      udp        10.10.10.10         ->      10.20.20.20               1        0          135            0           INT    02:56:56.531575
>> 02:56:56.531704  e    f      udp        10.10.10.10         ->      10.20.20.20               1        0          135            0           INT    02:56:56.531704
>> 02:56:56.535491  e    f      udp        10.10.10.10         ->      10.20.20.20               1        0          135            0           INT    02:56:56.535491
>> 02:56:56.536158  e    f      udp        10.10.10.10         ->      10.20.20.20               1        0          135            0           INT    02:56:56.536158
>> 02:56:56.537324  M    F      udp      10.20.20.20.59297  <->        10.10.10.10.443        8725    44994      1273843     45412854           CON    02:57:01.718200
>> 02:56:56.540186  e    f      udp        10.10.10.10         ->      10.20.20.20               1        0          135            0           INT    02:56:56.540186
>> 02:56:56.541504  e    f      udp        10.10.10.10         ->      10.20.20.20               1        0          135            0           INT    02:56:56.541504
>> 02:56:56.543402  e    f      udp        10.10.10.10         ->      10.20.20.20               1        0          135            0           INT    02:56:56.543402
>> 02:56:56.546622  e    f      udp        10.10.10.10         ->      10.20.20.20               1        0          135            0           INT    02:56:56.546622
>> 02:56:56.550524  e    f      udp        10.10.10.10         ->      10.20.20.20               1        0          135            0           INT    02:56:56.550524
>> 02:56:56.552348  e    f      udp        10.10.10.10         ->      10.20.20.20               1        0          135            0           INT    02:56:56.552348
>> 02:56:56.560982  e    f      udp        10.10.10.10         ->      10.20.20.20               1        0          135            0           INT    02:56:56.560982
>> 02:56:56.566569  e    f      udp        10.10.10.10         ->      10.20.20.20               1        0          135            0           INT    02:56:56.566569
>> 02:56:56.568299  e    f      udp        10.10.10.10         ->      10.20.20.20               1        0          135            0           INT    02:56:56.568299
>> 02:56:56.570430  e    f      udp        10.10.10.10         ->      10.20.20.20               1        0          135            0           INT    02:56:56.570430
>> ...
>> ...and so on for tens of thousands of lines...
>>
>>
>>
>> So, 10.20.20.20:59297 is talking to 10.10.10.10:443 via UDP.
>> 44994 of the responses over the next minute were matched and hence connected to the above udp flow.
>> Within this traffic (and/or in the 8725 packets from the client), there were fragments, so this flow is flagged with an "F".
>>
>> However, over this minute (and before), there also seem to have been tens of thousands of "partial fragments" (flagged "f").
>> Each such UDP packet gets logged as a new flow.
>> Result: the argus logfile balloons, growing by one flow record per orphaned fragment.
>>
>> (I noticed this because the traffic logged above ran constantly for two hours, filling my hard drive.)
>>
>> Questions:
>>
>> Q1: When does argus flag something as a "Partial Fragment" ("f") rather than a Fragment ("F")?
>>
>> Q2: Why don't these packets also get matched to the udp flow, increasing the 44994 counter by tens of thousands instead?
>>
>> /Elof
>>
>