client problems on Linux (fwd)

Peter Van Epp vanepp at sfu.ca
Wed Mar 3 12:41:54 EST 1999


	When I try it (on FreeBSD 3.1) I see much the same thing. I usually
do an ra -r <argus_file> -c -n port 31337 (for example). This time I tried

zcat argus.log.Feb10_08_06.gz | ra -c -n port 31337 

(which I expect is what Russell meant to write in the commands below; the
-r option would attempt to read directly from the file rather than stdin, no?)

	This produced no output at all (although there are BO probes in the
file, such as:

Tue 02/09 09:20:38      udp     142.58.25.0.31337 <-   153.37.243.165.1903  0      1       0         26       TIM

So then I did:

zcat argus.log.Feb10_08_06.gz | ra -c -n

which gives output for a while and then:

...
Tue 02/09 07:50:40      tcp  204.244.200.18.1754   ->   142.58.110.12.80    5      4       294       102      CLO
Segmentation fault

	Although I haven't found a core dump anywhere to look at, it looks
like FreeBSD does the same thing (probably for the same reason) as Linux. My
hazy recollection from years ago is that stdin is a character device, and thus
it feels free to return however many characters it has left in its current
buffer, followed by an EOF while it refills the buffer (if you do a getc()
rather than check whether a character is available to be read). So if argus
needs 60-byte records, I think it will need to be prepared to do multiple
reads of stdin into a 60-byte buffer to guarantee complete 60-byte records
from stdin (although I may be remembering wrong!).

Peter Van Epp / Operations and Technical Support 
Simon Fraser University, Burnaby, B.C. Canada


> Russell,
> 
> Sorry, this is completely new to me, as that is
> not how computers are supposed to work ;o)
> 
> I would recommend that you always run ra with
> the '-n' option, so that name lookup is not
> involved in reading large sets of argus data.
> 
> I do this sort of thing all the time and have
> never noticed these kinds of problems.  I test other
> variations of this code using some 10 meg argus
> record files and have never seen this type of problem,
> but that doesn't mean that you aren't seeing one.
> When I'm doing this on Linux it is done on Red Hat,
> but you and I both know that it is highly unlikely
> that the kernel is causing corruption issues.
> 
> In the Debian case, I would suspect that the problem
> is due to partial reads from stdin.  Ra() is designed
> to read fixed-length 60-byte blocks from either a
> file or stdin.  There isn't any logic for partial reads,
> i.e. where the read returned fewer than 60 bytes.
> I can imagine that Debian may let this happen, where
> RH and other Unixes may be better behaved in this area.
> That would explain ra() getting out of sync with the
> data stream.
> 
> I'll look into putting some protection around this in
> 1.8.
> 
> But in your RH case, my assumption is that you are losing
> records.  That is completely different, and the only
> way that would happen, at least as I imagine it, is for
> ra() to be seeing premature EOFs from stdin.  If this
> is true, then the missing argus records should all be
> at the end of the file.
> 
> Is this true?
> 
> 
> Carter
> 
> -----Original Message-----
> From: Russell Fulton [mailto:r.fulton at auckland.ac.nz]
> Sent: Tuesday, March 02, 1999 8:45 PM
> To: Carter Bullard
> Subject: client problems on Linux
> 
> 
> Hi Carter,
> 	  You may remember that I was bitching a while back about 
> problems when clients read data from stdin on Debian Linux.  You may 
> also recall that I stated that I did not have this problem on Red Hat.  
> The latter statement is not entirely true, more's the pity.
> 
> Under Debian the problem is very obvious: ra prints a few records, then
> spews garbage and eventually crashes.  On RH the problems are more
> subtle: two runs of ra with the same input (from stdin) and the same
> command line parameters produce different, incomplete outputs :-(
> 
> So people on RH may not realise there is a problem.
> 
> I discovered this because I have a perl script that reads ra output.
> It was written on my Debian system and therefore unzips files into /tmp
> and then uses the -r option to read them.  Yesterday it reported some
> interesting traffic so I used
> 
> zcat <argus file>.gz | ra -r /tmp/<argus file> host xxxxxxx
> 
> to get the data and was very puzzled when I did not get all the output
> I expected.  I then did:
> 
> zcat <argus file>.gz | ra -r /tmp/<argus file> host xxxxxxx > file1
> zcat <argus file>.gz | ra -r /tmp/<argus file> host xxxxxxx > file2
> 
> diff file1 file2
> 
> diff printed many records that were in one file but not the other.
> 
> It seems strange that I am the only person to be having these sorts of 
> problems.  Has anyone else complained about this?
> 
> Cheers, Russell.
> 
