hole in the argus archive on theorygroup.org

Peter Van Epp vanepp at sfu.ca
Fri Mar 1 13:09:00 EST 2002


> 
> 
> Are you going to ship more than 8-900 Mbps so you'll need bonding?  Or
> are you just planning ahead? 

	Just planning ahead. I need to upgrade my argus hosts for Gig, and as
long as I'm spending money I want to get as much as I can. Our commodity link
will be bandwidth limited (or more correctly cost limited by the mechanism
of bandwidth limiting :-)) and likely not a big problem (we won't be able to
afford huge bandwidth there). The link to C4/I2 is currently projected to be
a separate gig link (the cost of long haul GBICs may crimp that plan) with
no traffic shaping and an underlying OC48/OC192 backbone beyond it. There is
currently no traffic charge there, and I expect that to be the potential
challenge. We could see a full speed attack/misconfiguration there, although
I don't expect that to be the norm. The world's largest MP3 site is a
possibility (we are also getting a 25 Tbyte network file store that will be
on C4, probably on its own gig link / fibre). We already see MP3 transfers
on the current C3 link to other Canadian university res nets (thank god our
resnet is being outsourced!)


>				 Do you know of a system that can ingest
> that?  I guess that's what you want to find out..  We haven't tried
> choking our fastest stuff yet..

	Just from looking at hardware specs, I expect finding a host to keep up
at this rate (let alone 10 gigE, which we have offered to beta test for one
of the vendors :-)) is going to be exciting. Getting close to a gig we start
hitting both memory bandwidth and I/O bandwidth issues (without even
considering interrupt load on the CPUs). Given our client base (bright
students with time on their hands and access to C4) I expect we may see some
attempts at filling the pipes, and we will need to be ready to combat that.
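
	For what it's worth, the rough way I expect to gauge whether a capture
box is keeping up is to sample the kernel's own counters and watch packet,
drop and interrupt rates. A minimal sketch only (assuming a Linux capture
host; the interface name and sample interval are placeholders, and on older
kernels some of these counters are only 32 bits wide, so they can wrap at
these speeds):

    #!/usr/bin/env python
    # Sample /proc/net/dev and /proc/interrupts twice and report
    # per-second packet, drop and interrupt rates for one interface.
    import time

    IFACE = "eth1"     # capture interface (placeholder)
    INTERVAL = 5       # seconds between samples (placeholder)

    def rx_counts(iface):
        # /proc/net/dev lines: "iface: rx_bytes rx_packets rx_errs rx_drop ..."
        for line in open("/proc/net/dev"):
            if ":" in line:
                name, data = line.split(":", 1)
                if name.strip() == iface:
                    f = data.split()
                    return int(f[1]), int(f[3])   # rx packets, rx drops
        raise ValueError("no such interface: " + iface)

    def irq_total():
        # Sum per-CPU counts on every numbered IRQ line of /proc/interrupts.
        total = 0
        for line in open("/proc/interrupts"):
            parts = line.split()
            if parts and parts[0].rstrip(":").isdigit():
                total += sum(int(p) for p in parts[1:] if p.isdigit())
        return total

    p1, d1 = rx_counts(IFACE)
    i1 = irq_total()
    time.sleep(INTERVAL)
    p2, d2 = rx_counts(IFACE)
    i2 = irq_total()

    print("%s: %.0f pkts/s  %.0f drops/s  %.0f interrupts/s (all IRQs)"
          % (IFACE, (p2 - p1) / float(INTERVAL),
             (d2 - d1) / float(INTERVAL), (i2 - i1) / float(INTERVAL)))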

> 
> We're doing everything on linux (2.4.16 right now), mostly because we're
> lazy and it's been keeping up (using Carter's commercial code -
> gargoyle, which kicks butt on the argus stuff by the way).
> 

	Sounds like I'd better consider becoming a customer too :-). Linux has
appeal (I can make OS maintenance Martin's problem, since he is the Linux
expert, for instance :-)), especially if that's what is working for you folks.
I certainly don't need to reinvent the wheel (or find new problems), even if
it would be interesting.

> We have several GigE cards, seem to have the best luck with the
> SysKonnect fiber, but have both generation Intel (the second gen were

	Yes, this is what Martin ended up selecting for his Beowulf cluster,
so I intend to buy a couple of the SysKonnect cards.

> *much better*, but we've had some issues with them as eth2) and some
> 3Com's too.  Had the 2-port SysKonnect for a while, but it seemed to be
> designed more for redundancy than performance, so I shipped it back.  I
> admit we haven't been too systematic about the analysis mostly lacking
> money to create time for that diligence. 
> 
> There seems to be a fair disparity in the interrupt load of the various
> cards, which isn't surprising, but I haven't had the time to look into
> real counts (from /proc/net since netstat can't count too high).
> 
> We'd talked about trying to do some more benchmarking on performance
> numbers, but it wasn't much of a priority in our conversations with
> Carter, especially since it's pretty easy to catch up by splitting
> flows..
> 

	I'll probably just follow along, find something that works and leave
the benchmarking for when we get time (should such ever occur :-)).
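
	For my own notes, I take the "splitting flows" comment above to mean
dividing the traffic among several argus instances, with a split that is
symmetric in source/destination so both directions of a conversation land on
the same sensor. A toy sketch of the idea only (the sensor count and example
addresses are placeholders; a real deployment would presumably do the split
at the tap or filter layer):

    # Assign each flow to one of N sensor instances. Sorting the address
    # pair first makes the choice direction independent, so both halves
    # of a conversation are seen by the same instance.
    import zlib

    N_SENSORS = 2   # number of argus instances sharing the load (placeholder)

    def sensor_for_flow(saddr, daddr, n=N_SENSORS):
        a, b = sorted((saddr, daddr))
        key = (a + "|" + b).encode("ascii")
        return zlib.crc32(key) % n

    # Both directions of the same conversation map to the same sensor:
    print(sensor_for_flow("142.58.1.1", "10.0.0.2"))
    print(sensor_for_flow("10.0.0.2", "142.58.1.1"))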
	Thanks for the info!


Peter Van Epp / Operations and Technical Support 
Simon Fraser University, Burnaby, B.C. Canada


