argus-clients-3.0.2.beta.10 segfault when reading argus 2 data

Peter Van Epp vanepp at sfu.ca
Thu Jul 30 18:09:44 EDT 2009


On Thu, Jul 30, 2009 at 12:28:53PM +0100, Robert Kerr wrote:
> Hi,
> 
> I'm in the process of migrating various argus 2 sensors and clients
> over to 3 and have run into a few problems. I've got an argus 2 sensor
> which is both logging to a local file and listening on a socket to which
> an argus 3 client connects. An argus 3 client on a remote system is then
> in turn writing the collected data to a local file.
> 
> I find that the files written by argus 3 result in a seg fault when read
> by the same version of argus. I will see a small number of initial
> records read fine before the seg fault, and for any given file the fault
> always occurs after the same number of correct records has been output. The
> last line output before the seg fault is:
> 
<snip>

	I think SFU is seeing similar results to this, except probably entirely
on 3.0. As far as I can gather, the segfaults are in the clients reading 3.0
data that was collected by a 3.0 argus on a sensor box (probably an IBM PPC
machine) writing only to a socket, and archived using ra on another box
(probably an Intel) writing to disk. As far as I know that process is working,
at least without segfaulting, but sometimes when doing an ra query on the
archived data they are getting segfaults. This sounds like it may be a
variation on the old zero-length argus record problem that we had a while
(maybe quite a while :-)) ago, which I think was a timing problem of some kind.
	I think the 2.0.6 implementation at SFU has been shut down, and the
3.0 one is only semi-official (in that DSCC replaced argus for most things
when I retired), so it is getting a limited amount of attention. I have been
encouraging them to send the gdb "where" output from a segfault either to the
list or to Carter to see if we can figure out what's happening. I also suggested
changing to radium, rather than ra listening on a socket, as the approved
method of archiving to see if that helps.
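	For anyone trying this, the two suggestions above look roughly like the
following sketch. The hostname, port, and archive paths are placeholders, not
the actual SFU setup; check the ra(1) and radium(8) man pages for the exact
options in your version of the clients.

```shell
# 1) Get a backtrace from a segfaulting ra query on an archived file.
#    -r reads from a file, -n suppresses name lookups.
gdb --args ra -r /path/to/archive.file -n
#    At the (gdb) prompt:
#      run
#      ...wait for the segfault...
#      where        <- this is the output to send to the list or Carter

# 2) Archive via radium attached to the remote sensor, instead of an
#    ra process listening on the socket (-S remote source, -w output file).
radium -S sensor.example.com:561 -w /path/to/archive.file
```

	If the segfault is a record-parsing problem, the gdb "where" output
should show which client routine is choking and make it much easier to match
against the old zero-length record symptom.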

Peter Van Epp
