Argus-2.0 Clients

Carter Bullard carter at qosient.com
Tue Jul 25 07:43:54 EDT 2000


Hey Peter,
   I believe that we will catch your overlapping offset
problem in the existing code base.  Our fragment reassembly
validation algorithm is pretty simple.  We are adding up
the total bytes received in a given fragment stream, and
comparing the total to the original packet length as
reported in the first packet.  When the two numbers are
equal, we've got a good fragment.  Overlapping fragments
should give us a length that is larger, and, as a result,
we'll generate a special fragment incomplete (frag) record
indicating a problem.  We do this so we can account for
all the packets, but if we didn't see the first packet in
the fragment stream, we don't know which parent flow's
metrics to update, so we generate a separate record.
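
   Roughly, the accounting looks something like the sketch below.  The
names are hypothetical rather than the actual Argus structures, and for
simplicity it fixes the expected total from the last fragment (MF == 0)
rather than from the length reported in the first packet:

#include <stdint.h>

/*
 * Minimal sketch of the byte-accounting idea described above.
 * One of these would hang off each fragment stream.
 */
struct fragstream {
    uint32_t bytes_seen;    /* sum of fragment payload bytes seen so far */
    uint32_t expect_len;    /* total payload length, 0 until known */
    int      saw_last;      /* set once the MF == 0 fragment arrives */
};

enum fragstate { FRAG_PENDING, FRAG_COMPLETE, FRAG_OVERLAP };

/*
 * offset is in bytes (IP fragment offset field * 8), len is the payload
 * length of this fragment, more_frags is the IP MF bit.
 */
enum fragstate
frag_update(struct fragstream *fs, uint32_t offset, uint32_t len,
            int more_frags)
{
    fs->bytes_seen += len;

    if (!more_frags) {                  /* last fragment fixes the total */
        fs->saw_last   = 1;
        fs->expect_len = offset + len;
    }

    if (!fs->saw_last)
        return FRAG_PENDING;

    if (fs->bytes_seen == fs->expect_len)
        return FRAG_COMPLETE;           /* account to the parent flow */
    if (fs->bytes_seen > fs->expect_len)
        return FRAG_OVERLAP;            /* overlap or duplicate: emit a frag record */
    return FRAG_PENDING;                /* still waiting on a hole */
}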

   The unfortunate thing about this strategy is that we
end up with one Argus record per fragmented-packet error.
That is a potential source of Argus record flooding, so
I'm rethinking how we report packet problems with
fragments.

Carter




-----Original Message-----
From: owner-argus at lists.andrew.cmu.edu
[mailto:owner-argus at lists.andrew.cmu.edu] On Behalf Of Peter Van Epp
Sent: Sunday, July 23, 2000 3:48 PM
To: argus
Subject: Re: RE: Argus-2.0 Clients


>
> Hey Peter,
>    Running Argus against "standardized" attacks is right
> on the money for validation as well as baseline deviation
> prediction.  I thought CIDF was going to generate packet
> capture files of popular attacks, did this go away?

	It is probably still there (I remember reading about this and adding
it to the reference pile, but I haven't gotten back to it yet). If it is
up, I'll steal that instead of grabbing my own exploits :-).

>
> OK on to specifics:
>    Argus is already doing fragment offset calculations
> and so adding some logic to report this type of problem
> shouldn't be an issue, as long as we can have a set
> number of things to look for.

	The most interesting one would be new fragments whose offset overlaps
(but does not match exactly) a previously received fragment. This shouldn't
happen in normal use (even multipath MTU fragmentation shouldn't create
overlaps, I don't think, although I may be wrong in pathological cases),
which should make it a good indication of a fragmentation attack and worth
looking at in detail in any case. There is a good paper from a few years
ago on defeating network IDS systems with exactly this technique; I can dig
up the reference if you haven't seen it. I disagree with its conclusion
that it makes network IDS useless, but it certainly can lead to a
combinatorial explosion while trying to predict what any one host will see
(because of TTL changes). Outputting the complete packet of a fragment
whose TTL differs from its fellows would also be interesting, allowing a
non-real-time process to examine the packets at different TTL points in the
network to see what turns up (such as an attack that depends on one of the
frags dying from TTL before reaching the host). This might turn out to be
too complex, but it's worth a shot. The very fact that such frags appear
should be a warning of a possible attack.
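
	As a rough illustration of the overlap test I have in mind (made-up
names, not anything from the Argus sources):

#include <stdint.h>

/* One record per fragment already seen in this datagram. */
struct fragrange {
    uint32_t off;                   /* byte offset within the datagram */
    uint32_t len;                   /* payload bytes in this fragment */
    struct fragrange *next;
};

/*
 * Returns 1 if [off, off+len) intersects an existing fragment without
 * being an exact retransmission of it (that is the case worth flagging).
 */
int
frag_overlaps(const struct fragrange *seen, uint32_t off, uint32_t len)
{
    for (; seen != NULL; seen = seen->next) {
        int exact   = (off == seen->off && len == seen->len);
        int overlap = (off < seen->off + seen->len) &&
                      (seen->off < off + len);
        if (overlap && !exact)
            return 1;
    }
    return 0;
}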

>
>    I'm not sure I follow the VJ compression analogy.
> Can you elaborate on that idea?
>

	Essentially, if the next packet matches our forward header prediction,
we don't need to keep any additional state about it, because we can derive
it from the last packet's state. If it deviates from the prediction, we
record an exception record so that we can reproduce the exact flow of the
connection later (argus may already do this, of course). This would catch,
for instance, the unused bits in the header being set: that would cause an
exception the first time it happened and then be accepted as current state
for subsequent packets.
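
	A toy version of that bookkeeping, just to make it concrete
(hypothetical structures, not Argus code, and only a few header fields
shown):

#include <stdint.h>

/* The handful of header fields we "predict" for the next packet. */
struct hdrstate {
    uint8_t  tos;
    uint8_t  ttl;
    uint16_t ip_off;                /* IP flags + fragment offset word */
    uint8_t  tcp_flags;
};

/* What gets written out when a packet deviates from prediction. */
struct hdrexcept {
    uint32_t pktnum;                /* which packet in the flow deviated */
    struct hdrstate newstate;       /* the fields as they now appear */
};

/*
 * Returns 0 if the packet matches prediction (keep no extra state);
 * otherwise fills *ex and accepts the new values as current state.
 */
int
hdr_check(struct hdrstate *expected, const struct hdrstate *pkt,
          uint32_t pktnum, struct hdrexcept *ex)
{
    if (pkt->tos == expected->tos && pkt->ttl == expected->ttl &&
        pkt->ip_off == expected->ip_off &&
        pkt->tcp_flags == expected->tcp_flags)
        return 0;

    ex->pktnum   = pktnum;
    ex->newstate = *pkt;
    *expected    = *pkt;            /* deviation becomes the new prediction */
    return 1;
}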

>    I am very interested in looking into using Argus
> records to drive your traffic generators, as this was
> one of the things on the "wouldn't it be cool"
> list for pre Argus-1.5.
>

	The header compression analogy is essentially for that: a way to keep
detailed header changes without storing the complete headers. Say, for
instance, that ten packets into a flow some of the unused header bits
suddenly get set. The header exception record would tell us to do the same
thing as we recreate the stream.
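
	Something like the following could then drive the generator side,
walking the flow and applying each exception record at the packet where it
was recorded (again hypothetical types, with a stand-in emit_packet()):

#include <stdint.h>
#include <stdio.h>

/* Minimal stand-ins for the types from the earlier sketch. */
struct hdrstate  { uint8_t tos, ttl, tcp_flags; };
struct hdrexcept { uint32_t pktnum; struct hdrstate newstate; };

/* Stand-in for whatever actually builds and sends the packet. */
static void
emit_packet(uint32_t pktnum, const struct hdrstate *hdr)
{
    printf("pkt %u: tos=%u ttl=%u flags=%#x\n", (unsigned)pktnum,
           (unsigned)hdr->tos, (unsigned)hdr->ttl, (unsigned)hdr->tcp_flags);
}

/* Replay npkts packets, applying the nex exception records in order. */
void
replay_flow(struct hdrstate cur, uint32_t npkts,
            const struct hdrexcept *ex, uint32_t nex)
{
    uint32_t i = 0, pkt;

    for (pkt = 0; pkt < npkts; pkt++) {
        if (i < nex && ex[i].pktnum == pkt)
            cur = ex[i++].newstate;     /* apply the recorded deviation */
        emit_packet(pkt, &cur);         /* all other packets follow prediction */
    }
}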

Peter Van Epp / Operations and Technical Support
Simon Fraser University, Burnaby, B.C. Canada


