janc
Next Grex Hardware   Feb 17 17:16 UTC 2003

At Saturday's meeting, STeve Andre proposed that Grex purchase hardware for
the next Grex system now, and that the remaining development work be done on
that system.  Most people seemed to be willing to buy that idea, so there was
quite a bit of discussion of what hardware to get.  I want to move that
discussion on-line.

First, universal agreement was reached on using an x86 system, not a SPARC.
A number of people strongly prefer an AMD Athlon over an Intel Pentium, and
nobody really objects to this, so we are likely going that way.

There is a lot of concern over quality.  I believe that in recent years the
PC marketplace has shifted from competition based on performance, to
competition based on price.  It used to be that new desktop machines held
price steady at a bit over $1000, while the performance steadily improved.
But lately the prices have been falling (while performance has still steadily
improved).  This has placed substantial pressure on all manufacturers to cut
cost where they can - power supplies and cases have been getting crappier,
mechanical components of drives have gotten less reliable, and so forth.

The feeling was that this trend had impacted a lot of companies that used to
produce good stuff.  Dell's servers, for example, aren't as solid as they used
to be (though they are more powerful).  The best approach to acquiring a good
new computer was to carefully buy separate components and integrate it
ourselves.  STeve Andre is likely to take the lead on this, though there are
other staff members with plentiful experience building systems (Dan
Gryniewicz, for one).

STeve brought to the meeting a draft suggestion for a system.  He is still
working on refining this.  His suggestion was:

   Athlon XP 2800 (I think this is 2.2 GHz) - about $400

   Motherboard - STeve wants to buy two, keeping one as spare.  I don't think
   a particular model was discussed.  About $145 each.

   RAM - buy lots.  It's cheap.  Say 1.5G for $270 or so.

   Case/Power Supply.  STeve likes Antec.  About $250.

   Misc parts, fans, etc.  STeve wants lots of cooling.  About $100.

   NIC - STeve likes Intel.  100 Mbit.  $33

   SCSI controller.  Ultra 160 at least, ultra 320 if possible.  About $200.

   SCSI drives, two 18G IBM.  About $142.

   CD rom, floppy, this and that maybe $250.

Adding up to around $2000.  STeve also included in his list a monitor and
keyboard, but Dan says he can probably donate these.  He also suggested an
80G IDE drive for about $100.  This has lower performance and reliability
than the SCSI drives, but is fine for stashing non-critical or rarely used
data.  With this, and various additional slough factors, we were mostly
talking about something in the $2500 range.
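As a sanity check on the arithmetic, the itemized figures above can be totaled
in a few lines.  These are the rough ballpark prices quoted above (two
motherboards, one of everything else), not vendor quotes:

```python
# Rough cost estimate for the proposed system, using the approximate
# prices quoted above.  All figures are ballpark, not final quotes.
parts = {
    "Athlon XP 2800": 400,
    "Motherboards (2 @ $145)": 2 * 145,
    "RAM (1.5G)": 270,
    "Case/power supply": 250,
    "Fans and misc": 100,
    "Intel NIC": 33,
    "SCSI controller": 200,
    "SCSI drives (two 18G)": 142,
    "CD-ROM, floppy, etc.": 250,
}

total = sum(parts.values())
print(f"Base system: ${total}")              # $1935 - "around $2000"
print(f"With 80G IDE drive: ${total + 100}") # $2035
```

The base total comes to $1935, which matches the "around $2000" figure, and
the extra slough factors get it to the $2500 range.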
547 responses total.
scott
response 1 of 547:   Feb 17 17:50 UTC 2003

The spare motherboard is so that we can have two *identical* motherboards -
often the "same" motherboard a month later will have some minor revisions
which can cause problems with existing software configurations - I've noticed
this as well as STeve.
janc
response 2 of 547:   Feb 17 20:00 UTC 2003

You know, if we ordered two motherboards from the same vendor at the same
time, I wouldn't be too amazed if we received two that were *not* the same.
It's probably worth specifying when we order them that we want identical
twins.
dang
response 3 of 547:   Feb 18 19:16 UTC 2003

Incidentally, I'm not sure which Antec in particular it was that STeve
wanted, but you can get an Antec (SX1040BII) case with 400 watt power
supply at CompUSA for $120.  I have this case, and it's a wonderful case.
It's a full tower, and easily fits my dual-athlon setup in it.  It has
good cooling (4 80mm case fan slots, comes with two fans, I have three),
is solid, and the power supply has been like a rock.  I can understand if
we're not sure 400 watts is enough.

Monitor and keyboard are not an issue. I have several of each I can donate.

As to motherboards, we might want to consider 64-bit/66 MHz PCI, as that
will give us much better performance out of our SCSI, especially if we
get Ultra 320.
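Dan's point can be quantified: peak PCI throughput is just bus width times
clock rate, and plain 32-bit/33 MHz PCI can't even keep up with Ultra 160,
let alone Ultra 320.  A quick sketch (theoretical peaks, decimal megabytes;
real sustained throughput is lower):

```python
def pci_peak_mb_per_s(bus_bits: int, clock_mhz: float) -> float:
    """Theoretical peak PCI bandwidth in MB/s: width (bytes) x clock."""
    return bus_bits / 8 * clock_mhz

print(pci_peak_mb_per_s(32, 33))  # 132.0 - below Ultra 160's 160 MB/s
print(pci_peak_mb_per_s(64, 66))  # 528.0 - comfortably above Ultra 320
```

So a 64-bit/66 MHz slot is the only flavor of conventional PCI with headroom
for a fully loaded Ultra 320 channel.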

janc
response 4 of 547:   Feb 23 16:30 UTC 2003

Thanks Dan.
cmcgee
response 5 of 547:   Feb 26 00:51 UTC 2003

Now linked to Coop as Item 176; Garage 147
aruba
response 6 of 547:   Feb 26 04:58 UTC 2003

I suspect the board will start the process of buying the hardware for the
new Grex at the meeting on Thursday.  So if people have strong opinions on
what items we should buy, they should speak up soon.
cross
response 7 of 547:   Feb 26 13:39 UTC 2003

One thing I would suggest is a rack-mount case.  While it's a little
more expensive, it's also probably a little more rugged and can easily
be fit into a colocation facility if, at some point in the future,
that becomes desirable.

I would suggest that, as part of this, grex either move out of the
pumpkin, or try to do as much as possible to make it a more habitable
place for the grex machines.  In particular, the descriptions I've
heard make it sound like it's just too hot during the summer.  I suspect
that has a lot more to do with grex's system reliability problems than
any concerns of component quality or load.
scott
response 8 of 547:   Feb 26 13:53 UTC 2003

Grex has had very few hardware problems in the Pumpkin, though.
gull
response 9 of 547:   Feb 26 14:15 UTC 2003

Nearly all new hardware supports internal temperature monitoring.  If
OpenBSD supports this, we could monitor the CPU core and case
temperatures and see if they really are reaching unreasonable levels.
That would, in my opinion, be a much better indication than the ambient
room temperature.

I know Linux supports reading most sensor chips via the 'sensors'
package, but I don't know if OpenBSD has support for any of this yet.
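On Linux, the lm_sensors 'sensors' command prints readings as lines like
"CPU Temp: +48.0°C", so a small script could scrape those and flag anything
running hot.  A sketch against that typical output format (the labels, the
sample readings, and the 60°C alarm point here are all invented for
illustration, and whether OpenBSD exposes anything comparable is an open
question):

```python
import re

# Example output in the style of lm_sensors' `sensors` command.
# Labels and readings are made up for illustration.
SAMPLE = """\
CPU Temp:  +48.0°C  (high = +70.0°C)
Case Temp: +34.5°C  (high = +55.0°C)
"""

TEMP_LINE = re.compile(r"^(?P<label>[\w /]+):\s*\+?(?P<temp>[\d.]+)\s*°C")

def read_temps(text: str) -> dict[str, float]:
    """Map each sensor label to its temperature in degrees C."""
    temps = {}
    for line in text.splitlines():
        m = TEMP_LINE.match(line)
        if m:
            temps[m.group("label")] = float(m.group("temp"))
    return temps

for label, t in read_temps(SAMPLE).items():
    status = "HOT" if t > 60.0 else "ok"   # 60C is an arbitrary threshold
    print(f"{label}: {t:.1f}C {status}")
```

Logged periodically from cron, readings like these would show whether the
Pumpkin's ambient heat actually translates into worrying core temperatures.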

I agree a rack-mount case would be a good idea, but I don't feel too
strongly about it because it would be relatively easy to shift the
hardware into a rack-mount case later.

What brand of motherboard are you thinking of using?  Abit has had a lot
of problems with defective capacitors lately and maybe should be
avoided.  I'm not sure who makes the best AMD boards right now.
keesan
response 10 of 547:   Feb 26 14:30 UTC 2003

Jim asks if grex would be interested in his basement once he gets it
insulated, and the house rewired.  He could have a separate entrance
accessible at all hours.  Might be a few years though.
mdw
response 11 of 547:   Feb 26 15:31 UTC 2003

Grex's hardware reliability problems in the past few years have mainly
been:
 (1) DSL line flakiness.  Almost certainly not heat related.
 (2) random power weirdness.  Almost certainly not heat related.
     (Unless you count all air conditioners in the state of Michigan.)
 (3) random disk failures.  These probably are heat related.
 (4) weird modem problems.  Wide range of potential causes.

The only one of the four we can control is (3).  *However* -- we've gone to
some effort to secure the best cooling we can given our environment.
Some of this has included the use of extra large enclosures and a fair
amount of extra room.  In a colocation, we'd have much less
space--smaller enclosures, less room, etc. Right there our improvements
go out the window.  No doubt things are much better in NJ, but here in
SE Mich, it's not hard to collect interesting tales of various
colocation heating and cooling disasters.  Backups are important - and
we definitely want to maintain our current advantage in terms of making
removable tape backups; this isn't just for disk failures, but also
covers floods and fires (both known risks in the local colocation
market) and vandals (a special and unique risk we also have to deal
with, which makes mirrored disks, normally a useful backup strategy,
much less attractive to us.)  When you measure up cost, cooling, and
backup convenience, the pumpkin suddenly starts looking a lot less bad.

I said that our disk disasters probably are heat related.  I suppose I
should expand on that.  We've had several failures.  We used to have
lousy disk enclosures.  Eventually, we resorted to using box fans.  It
was noisy and crude, but worked.  We eventually got better disk
enclosures.  Those have been basically adequate.  We have been luckier
in our failures than perhaps we deserve -- our failures have generally
given us notice, often showing up during backups, so we've generally been
able to simply restore from the last backup.  Some have shown up as heat
sensitivity - letting the disk cool often eliminates the errors (at
least long enough for that last backup).  In at least one memorable
case, the completely dead disk turned out to simply be packed with dust
-- cleaning it thoroughly resulted in proper operation, although we got
nervous and replaced it before it had a chance to turn traitor on us.
I'd like to think our luck is mostly due to backups, observation, and
paranoia.  But, to the extent that heat has played a factor, it may have
actually worked in our favour, although I'd hardly recommend it as a
good strategy.

Keep in mind that we're running mostly used disks of elderly vintage,
and basically running them until they give up the ghost.  This strategy is
guaranteed to eventually produce 100% mortality -- but it may
paradoxically produce more reliable storage meanwhile than constantly
purchasing new disks even though most of those won't fail before being
replaced.  Perhaps this just shows that you can prove anything you want
with statistics.

I'll be the first to admit the pumpkin is far from perfect, but even so,
I'd have to say that in terms of dealing with disk disasters, it still
comes out way ahead of what we could manage for colocation deals.  If
you're looking for that huge advantage that's bound to convince us to
move to colocation, this isn't it.  The certain convenience in terms of
access and space is known to us.
scott
response 12 of 547:   Feb 26 15:38 UTC 2003

Sindi - we really want Grex to be in a neutral property, instead of someone's
house.
jmsaul
response 13 of 547:   Feb 26 15:58 UTC 2003

Besides, it would take Jim years to get set up to draw the copper wire for
the cabling.  And we don't want to run Grex on a refurbished 486, powered by
a bicycle generator.
cross
response 14 of 547:   Feb 26 16:27 UTC 2003

Regarding #11; Well, you mention several things that disturb me.  Notably,
dust and heat conditions in the pumpkin.  If you're going to stay there,
I suggest you make an effort to mitigate those to whatever extent is
possible.  Perhaps that means putting in a wall-mount A/C unit, or a
bigger one if necessary; perhaps it means putting in a humidifier to keep
down the dust; perhaps it means going over the whole room with a dust mop;
perhaps it means throwing out old yellowing sheets of paper that have no
further importance; it almost certainly means going with a new, server
grade case with adequate cooling.  Perhaps it also means something else.
I don't know, but it strikes me, and has been stated by several others,
that grex could do a bit better to make sure the conditions in the
pumpkin don't kill your new servers.

I have no idea what the colocation facilities in New Jersey are like,
since I live in New York City.  Here, our colo facilities are, umm,
quite different from the way you describe your options.  That's fine,
but if putting in a wall-mount A/C unit and giving the pumpkin a thorough
cleaning and removing a bunch of garbage from it will help improve grex's
chances of not having a disk failure, I'd say go for it.  In fact, that's
all I'm saying.
jep
response 15 of 547:   Feb 26 17:00 UTC 2003

How is the new system going to be financed?  Might it make some sense 
to look at how much money is going to be available?  Is Grex just going 
to write a check for the amount of the new computer?

I don't see a tape drive listed.

The computer I just ordered can have 2 GB installed.  Whatever Grex 
gets, it'd seem to me to make sense to max out the RAM.
scott
response 16 of 547:   Feb 26 17:04 UTC 2003

Indoor dust comes from people - the Pumpkin is quite dust-free, actually. 
My guess is that the bulk of the dust in that drive came from its previous
life.
keesan
response 17 of 547:   Feb 26 17:21 UTC 2003

The pumpkin does not have windows.  The owners might not appreciate a hole
in the wall made by grex for an air conditioner.
aruba
response 18 of 547:   Feb 26 17:47 UTC 2003

Right, I think a wall-mount AC unit is not an option in the Pumpkin.

Re #15: We plan to have a fundraiser to help pay for the hardware.
mary
response 19 of 547:   Feb 26 18:12 UTC 2003

Ideally it would have been nice to have a fundraiser and buy hardware
based on the money raised plus what we have already set aside for upgraded
hardware.  But instead what we have is a bit of a time crunch.  Staff has
time to put this together, nowish, but a big chunk of the work needs to be
done before May. 

So instead of fundraiser first, purchase later, we are going to make a
leap of faith that the users will want this badly enough to donate what
they can, and get the project started.  Do folks think this is a
reasonable thing to do? 

keesan
response 20 of 547:   Feb 26 18:35 UTC 2003

Wasn't there already a fundraiser for the last grex hardware, which ended up
getting donated instead, plus a $1024 donation for new hardware that has not
been spent yet?
gull
response 21 of 547:   Feb 26 18:42 UTC 2003

Re #11: I'd also add that modern disks tend to run cooler.  I bet the
Pumpkin will be considerably cooler when Grex's old hardware is retired.

Good airflow should definitely be a consideration when picking a case,
of course.  Thanks to the overclocker market, you can now get cases with
truly awe-inspiring numbers of fans.  Since noise isn't much of a
consideration where Grex is, we should take advantage of that.

Re #15: I'd guess our current tape drive will work with the new system.
If I remember right, it's an external SCSI drive.  These are quite
standardized; it'll just be a matter of the right cable, most likely.
aruba
response 22 of 547:   Feb 26 19:17 UTC 2003

Re #20: In 1998, we had a fundraiser for spare parts for the current Grex
machine.  Then most of the spare parts we needed were donated to Grex, so we
asked everyone who had donated what to do with what they had sent in - some
of it was refunded or converted into membership dues, the rest was converted
into miscellaneous donations.

The $1024 which is currently in the infrastructure fund came from a single
user in 2001.  Its purpose is indeed to upgrade Grex's hardware, so the goal
of a fundraiser would be to add to that fund.
jep
response 23 of 547:   Feb 26 19:31 UTC 2003

Why is there a deadline of May, and what has to be accomplished by 
then?  Is the goal or plan to get Grex actively on the PC machine by 
then?

Why not start the fundraising plan now?  I bet we could get at least 
some idea how much money will be available by the time of the next 
Board meeting, if people are asked for pledges.  If there's a lot more 
(or less) money coming in for the upgrade than what's expected, it 
might affect what would be purchased.
aruba
response 24 of 547:   Feb 26 19:37 UTC 2003

The board meeting is tomorrow.  I decided to wait until then to give people
a little time to discuss what hardware we'd like to buy.  I expect to start
the fundraiser on Friday.
 

- Backtalk version 1.3.30 - Copyright 1996-2006, Jan Wolter and Steve Weiss