| 25 new of 547 responses total. |
scg
|
|
response 79 of 547:
|
Mar 9 04:18 UTC 2003 |
I'll put in another plea for a rack mount case. Even if you don't want to
colocate somewhere now (a bad decision being made by refusing to look at
current information, but not really worth arguing about at this point), it
will keep that option open for the future. More to the point, the rack Grex
already has could easily hold a rackmount PC server case, the DSL router,
modems, a spare server for development work, the keyboard and monitor, and have
lots of room to spare, freeing up the rest of the space in the Pumpkin for
whatever it is that people think we need the Pumpkin for.
|
pvn
|
|
response 80 of 547:
|
Mar 9 10:54 UTC 2003 |
re #79: Rack mount cases are appropriate for a lot of things; grex
isn't one of them. First, they tend to be far less forgiving of their
environment. Second, they tend to be a lot more expensive. Shelves
for racks holding standard PC cases are a lot cheaper and give the same
space-saving quality. Throw out the rack, and cheap plastic shelving
units perform the same function. If grex ever has to colo, then a cheap
2U case bought at that time is probably better.
|
cross
|
|
response 81 of 547:
|
Mar 9 15:52 UTC 2003 |
Regarding #80: what do you mean they're less forgiving of environment?
It's been my experience that rackmount cases are far more rugged than
your average tower.
|
scg
|
|
response 82 of 547:
|
Mar 9 23:01 UTC 2003 |
Without a rack, rackmount cases are less convenient than mini-tower cases.
With a rack, the rackmount cases become much more convenient. I just worry
when I see people talking about getting a really expensive full tower case
that the case will become a big limiter of future options. Full tower cases
are fine when you've got one or two of them in a room (Grex's current
situation), but they really don't scale.
|
gull
|
|
response 83 of 547:
|
Mar 10 03:26 UTC 2003 |
Re #81: Rack mount cases are very restricted inside, which means cooling
is more difficult and the ambient room temperature is much more
critical. We have some 1U rack-mount servers at work. They have about
five fans each, and the air that comes out of them is pretty hot.
If we ever do decide to go with colocation, the hardware could be moved
into a rack case.
|
scg
|
|
response 84 of 547:
|
Mar 10 04:13 UTC 2003 |
1U rackmount cases are a special kind of beast, requiring special components
to go inside (normal PCI expansion cards don't have room to stick vertically
out of the motherboard). 1U rackmount cases certainly aren't the only kind
of rackmount case out there.
|
pvn
|
|
response 85 of 547:
|
Mar 11 10:05 UTC 2003 |
Here we go, re-arranging deck chairs on the Titanic ---
If you want hardware that will replace the current machine in its
current environment, then cheap commodity PC stuff is the
way to go. My only question is whether high-end 'server class'
SCSI drives on a PC platform are the way to go, nothing more.
(Is it RU or U? I'm not clear.)
Point being, if you don't have a reasonable temperature environment, then
rack mounted is not the way to go. Which is meaningless drift.
I don't think anyone is suggesting spending the bucks for stupid
racks instead of PC boxes. I merely suggest two things: first, that one
reconsider SCSI in the first place, and second, that one budget
for power supplies and have them on the shelf at least.
|
jared
|
|
response 86 of 547:
|
Mar 13 15:26 UTC 2003 |
Just to give some of my experiences:
With PC hardware, you want to have your /var/mail (mail spool) and
swap on SCSI disk. The rest tends to be less relevant.
I "skipped to the end", so if backups haven't been discussed, I
suggest grabbing some cheap disk and having hot backups available on
already-spinning media in the same room. I've found this invaluable
in my environment, going across a crossover ethernet cable.
You probably want daily (or even hourly?) backups of /etc (hourly of
/etc/passwd perhaps?) in order to allow for easy recovery. I might be
able to donate some hardware towards this.
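The hourly /etc snapshot idea could be as simple as a cron job around something like this. This is a hypothetical sketch, not anything grex actually runs; the paths and the 24-snapshot retention count are made-up assumptions:

```python
# Hypothetical sketch of rotating /etc snapshots onto cheap local disk.
# Paths and retention count are assumptions, not an existing grex setup.
import shutil
import time
from pathlib import Path

def snapshot_etc(src="/etc", dest_root="/backup/etc-snapshots", keep=24):
    """Copy src into a timestamped directory under dest_root,
    then prune all but the newest `keep` snapshots."""
    root = Path(dest_root)
    root.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    target = root / stamp
    shutil.copytree(src, target)
    # Prune the oldest snapshots beyond the retention window.
    snapshots = sorted(p for p in root.iterdir() if p.is_dir())
    for old in snapshots[:-keep]:
        shutil.rmtree(old)
    return target
```

Run hourly from cron, this keeps a day's worth of /etc (and /etc/passwd with it) on already-spinning media for quick recovery.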
|
pvn
|
|
response 87 of 547:
|
Mar 15 07:22 UTC 2003 |
re #86: With a lot of RAM, why worry so about swap space? And in grex's
case - all data at best over a thin pipe - with proper tuning even a
micro$oft OS can keep up with email over DSL or cable speeds using IDE
drives (hopefully you don't have all that much local email over even
10 Mbps ethernet). With high-density IDE drives you have a lot of spare
space to do lots of backups. And with mirroring IDE controllers, or even
software RAID, you have a lot of fault tolerance. Even with neither, and
simply more big disks (JABOCD), you still have lots of fault tolerance if
you put your mind to it.
As for RACK cases: that dog don't hunt. If you need to cram a lot of
CPUs into a small, expensive space with climate control, then rackmounts
are surely the way to go. Even in a modern office environment with
HVAC and cleaning services, rackmount is questionable. I don't know what
grex's current physical environment is, but I bet it's far more 'dirty'
and with a much higher range of temperature than a modern office. More
like 'home', and thus a cheap conventional PC case with lots of 'dead
space' is the way to go for that - big fan and lots of room for dust.
|
jared
|
|
response 88 of 547:
|
Mar 15 23:52 UTC 2003 |
re #87
Virtually all modern unices swap out unused processes, and with a high
smtp and other load on the system you will see a continued need to swap
out a few processes, since memory is then used more efficiently as disk
buffers.
I'd rather have my shell process swapped out while I'm in bbs, so the
password file can stay cached for background smtp delivery, instead
of keeping my shell in memory and making the password lookups slower.
|
jkd
|
|
response 89 of 547:
|
Mar 16 04:54 UTC 2003 |
Here are a few comments from a perspective that a) is West Coast, and b) has
developed over the past 25 years of building and operating large-scale data
centers on a daily basis.
First, I live in Silicon Valley, and visit the surplus sources here basically
every weekend with a couple of friends for entertainment purposes. We've been
doing this for nearly three years now. So, every time I hear about people
making regular purchases at CompUSA, Best Buy, and similar national chains,
I *CRINGE*. However, I lived in Ky. for 10 years or so before moving out here,
so I understand how people get into that. Anyway, if Grex wants me to compare
prices that I see out here with what's available in MI or via mail-order, let
me know. I'll be happy to.
Second, out here in Silicon Valley, the .com meltdown means that, literally,
200,000 jobs EVAPORATED. This has had an enormous impact on availability and
pricing of all manner of commercial hardware. Example, I just bought what I
refer to as my new "Not H-P" machine. It's "surplus." It has NO dust in it
and comprises a 3.06 GHz P4, 512MB of PC2100 memory, a Radeon 9700 Pro
graphics card, on-board Ethernet, a 120GB Maxtor IDE drive, floppy, CDRW+DVDRW
combo drive, a DVD-RAM drive, and a modem card. My Price? $1200. No Kidding.
Why do I call it "Not H-P?" Because it was made for H-P and was an overstock
item. So, H-P surplused it and forced the OEM to paste pieces of plastic on the
sides of the case where the H-P logo is normally visible. However, to anyone
who has ever seen an H-P PC, it's OBVIOUS what it is.
I saw an earlier mention of a "Liebert UPS" in this thread. I hope that means
that Grex is in possession of what used to be regularly known as a "True
Online" UPS. One where Utility AC power is converted by the UPS to DC, then
BACK to AC so that the hardware connected to the UPS is fed power that is
totally clean because it has gone through a complete AC->DC->AC conversion
and therefore has a perfect 60 Hz sinewave ALL THE TIME. No surges, no sags,
etc. The value of such a UPS design cannot be overstated. It will prevent
countless numbers of problems from ever occurring. To me, this issue is even
more important than the details of the power supply and cooling within the
case. Any dollars invested in a *real* UPS will last far longer than dollars
invested in the computer itself.
Finally, I would like to disagree on the subject of tower vs. rackmount cases.
I'd vote for a rackmount unit. It doesn't have to be 1U. As someone mentioned
earlier that has undesirable side-effects. But consider that with rackmount
cases, you can easily get LOAD SHARING POWER SUPPLIES! Such a case will be
equipped with two of them and the load can be handled by only one. Should a
PS fail, the load is picked up by the surviving unit. Then, you just slide
out the failing one and replace it. No rebooting, etc.
John
|
gull
|
|
response 90 of 547:
|
Mar 16 05:08 UTC 2003 |
There are tower cases available with hot-swappable power supplies, as well.
It's nice, but given the reliability of power supplies and the
non-critical nature of Grex, I think it's probably unnecessary.
(Heck, Grex is often taken offline just to run backups.)
It's worth doing if it doesn't cost much more, though. The big
disadvantage I see, other than the cost of the case, is that you
generally have to use 'special' power supplies then, instead of standard
ATX ones.
|
pvn
|
|
response 91 of 547:
|
Mar 17 09:18 UTC 2003 |
'Special' meaning higher price? Like the 'special' SCSI drives (36G
total) instead of the 360G that one could get for the same price or
less? I mean, if you are going 'commodity' PC hardware for the MB, why
not go with commodity drives? If you want, for about the same amount of
money as the SCSI drives, you could do Fibre (using an obsolete
controller, I'll grant you) and kick SCSI's butt. You could
theoretically have 1Gbps over a medium that could theoretically deliver
such over a 10 km distance using optical fibre. Odd thing is, even at 66 MHz
64-bit PCI they all seem to be about the same when content is delivered
over yer average Internet connection....
|
jared
|
|
response 92 of 547:
|
Mar 17 12:55 UTC 2003 |
Because your 'commodity' drives rely on the central processor for all disk I/O,
whereas using SCSI offloads that to a separate processor (on the SCSI
controller). The performance gains from SCSI are plain to see.
On any system that gets the volume of mail and users that grex does, you need
fast disk for the day-to-day operations. Swap, mail, and /etc/passwd all
take quite a hit. On my own personal mail/web/whatnot server, once I
made a recent switch to SCSI from IDE (with the exception of my truly mass
storage, ie: /home and /mp3 partitions ;-) ), the system performance
increased greatly. With a userbase the size of grex's, that
type of benefit cannot be ignored.
|
gull
|
|
response 93 of 547:
|
Mar 17 13:36 UTC 2003 |
Re #91: "Special" meaning proprietary to the particular case manufacturer.
|
mdw
|
|
response 94 of 547:
|
Mar 19 05:59 UTC 2003 |
Re rack mount case. I think the cooling thing is a non-issue. There
are plenty of people who take short cuts on cooling. A tower case with
"bad" cooling is no worse than a rack mount case with good cooling.
Any inherent disadvantage rack mount cases might have is probably going
to be cancelled out by the fact that rackmount cases go into
environments where more is expected of them. *Neither* of these
cases -- tower or rack mount -- is going to have cooling anywhere near
equal to our current sun hardware. That's a reality we've already
accepted by going to x86 hardware. I think for grex the real issues
are:
A rack mount case is going to be *slightly* more expensive.
(the estimate I heard was $150 higher.)
I don't think grex is actually likely to move into a
rack mountable space in the next 18 months
If we do move, the expense to buy another case and move our
guts is probably the least of our "moving" expenses.
If we do rackmount, we should probably get 2U [ which would
probably impact our rental costs slightly were we
ever to colocate. ]
We could certainly do rackmount in the pumpkin today - we have the very
heavy sun rack mount case sitting there empty today. It's even got
some very impressive fans of its own. I see mostly small disadvantages
to the rackmount (slightly more expensive case, more electricity for
fans) that doesn't quite equal the potential "advantage" of moving to a
colocation space where we'd have to be rackmountable. But frankly this
doesn't seem like a big point to me.
Perhaps we should commission John Doyle to find really cheap
rackmountable cases in SV. If he can get cases no more expensive than
good tower cases, then I think that makes the difference insignificant
and worth going rackmountable. The negative to buying everything
surplus is much like our past bottom-feeding habits - except the cost
is slightly more.
Jared is right that older IDE drives relied on the CPU to do
"programmed I/O". But this is no longer true, besides which there were
also even stupider SCSI controllers that also did programmed I/O
(mostly for the scanner market, so that's thankfully all been replaced
by USB today.) Basically, SCSI and IDE have been playing leapfrog with
each other, so today's fast IDE subsystem will outperform yesterday's
best SCSI. I don't think it's ever really been true that IDE drives
took fewer components than SCSI. The main win IDE used to have is that
it required fewer components *overall*; but I suspect this is both no
longer true (with ide dma) and no longer important (with the degree of
component integration we have today). What I think matters the most to
us is the relative markets SCSI and IDE aim for; SCSI aims for server
configurations, IDE aims for personal machines. Server configurations
are going to have greater demands for reliability and random I/O
throughput - at a price. We're going to have to pay attention to be
sure the advantage continues to be real, and that the price remains
acceptable. We will also have to accept that whatever we buy today
*will* be outclassed by something out next year - which will be both
faster *and* cheaper, and maybe even more reliable.
|
jared
|
|
response 95 of 547:
|
Mar 22 22:02 UTC 2003 |
Marcus,
I've noticed that even modern systems using the latest (E)IDE technology
still see a considerable hit on the CPU for any disk I/O.
This is something I think is important to keep in mind when planning
for a good price-performance ratio for Grex.
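One rough way to put a number on that kind of CPU-hit claim, rather than eyeballing it: compare CPU time against wall-clock time around a burst of disk writes. This is a hypothetical sketch (the file name and sizes are arbitrary assumptions); kernel time spent servicing the I/O shows up in the system-time column of os.times():

```python
# Hypothetical sketch: estimate the CPU cost of a burst of disk I/O.
# The scratch file name and block counts are arbitrary assumptions.
import os
import time

def io_cpu_cost(path="scratch.bin", blocks=256, block_size=64 * 1024):
    """Write `blocks` blocks of `block_size` bytes, fsync, and return
    (user_cpu, system_cpu, wall) seconds consumed by the burst."""
    t0 = os.times()
    w0 = time.time()
    with open(path, "wb") as f:
        for _ in range(blocks):
            f.write(os.urandom(block_size))
        f.flush()
        os.fsync(f.fileno())  # force the data out to the drive
    t1 = os.times()
    os.unlink(path)
    return (t1.user - t0.user, t1.system - t0.system, time.time() - w0)
```

A controller doing programmed I/O should show system CPU time close to the wall-clock time of the burst; with working DMA, the system-time share should be much smaller.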
|
lk
|
|
response 96 of 547:
|
Mar 25 04:19 UTC 2003 |
IF the rationale for SCSI is reliability, then perhaps you should also
consider mirrored IDE drives. (If at least one is in a removable bay,
a backup could be as simple as swapping drives and letting the new
drive be rebuilt. I haven't done this, so I'm not sure about the implementation.
The same would hold for SCSI, but it will be faster -- and more expensive.)
IF drive speed is a concern, get the 15K RPM U320 SCSI drives.
(I don't believe this has been mentioned, so that might be the plan
rather than 10K RPM drives. I'm not sure if these are available in
18 GB denominations or just 36 and above.)
Lastly, as bdh mentioned, IBM doesn't really make their own SCSI
drives any more. I'm not sure if this is true across the line, but some
recent 36 GB 15K U320 drives I installed were actually Hitachi drives.
(You should be able to get IBM 18 GB U160 10K drives for about $100.)
|
mdw
|
|
response 97 of 547:
|
Mar 26 08:00 UTC 2003 |
It would be interesting to know what the CPU bottleneck is with (E)IDE
these days. I sure haven't had the time to actually look. It shouldn't
be DMA, so a good kernel profiler would be entertaining to run.
IBM sold *all* their hard disk stuff to Hitachi. They've been busy
getting rid of all their magnetic storage stuff. Given the length of
time they've been in the field, the only reason I can see for them doing
this is that they have good reason to believe magnetic storage is going to
become obsolete fairly soon. I don't think this is of any immediate
importance to grex, but if I were investing in the stock market, I might
consider this very interesting.
I believe STeve is looking for 15K U320 SCSI.
There are at least 2 problems with mirrored IDE -- proprietary
controllers, and performance during that "rebuild". The most common
chipset seems to be adaptec "aac" - there are linux & openbsd drivers
for this, but it's not fully functional. The raid management stuff in
particular loses; I'm not 100% convinced we would necessarily even know
we lost a disk -- until we lost the 2nd one and were screwed. Regarding
performance - I think the "24/7" shop most people have in mind with RAID
includes windows of relative idleness. If you have a truly disk-intensive
load with no letup, then the rebuild never completes.
Fortunately, grex doesn't have that, but we have seen that disk
intensive things start slowing everything else enough that the load
average starts to pile up and build. If we had to go visit the machine
in person to install a new drive, then leave it in single-user mode
during the rebuild, I'm not sure we've really gained all that much vs.
the traditional "restore from tape" model, *especially* if this is
liable to happen more often.
There's another issue to think about too -- our reason for doing tape
backups is *not* just hardware reliability, but also to cover the case
of vandals destroying information. Online backups don't protect against
this - if a vandal can destroy active filesystems, he can get at the
backup just as easily - and one of the more attractive attacks he can
make is to install a trojan in the backup then destroy the active
filesystem. So, um, ya the mirrored IDE is an interesting option, but
I'm skeptical that it makes sense for us. Sure, if we had extra time &
money, mirrored storage could be fun, but I don't see it as really
replacing the need either for reliable hardware in the first place, or
backups to cover the case of vandals in the second.
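On the "would we even know we lost a disk" worry: at least with Linux software RAID (md), a degraded mirror is visible in /proc/mdstat, where an underscore in the `[UU]`-style status string marks a failed or missing member. A hypothetical monitoring sketch (the parsing approach is an assumption, not an existing grex tool):

```python
# Hypothetical sketch: spot degraded Linux md (software RAID) arrays
# by parsing /proc/mdstat. An underscore in the "[UU]"-style status
# string means a member disk is missing or failed.
import re

def degraded_arrays(mdstat_text):
    """Return names of md arrays whose status string (e.g. [U_])
    shows a failed or missing member."""
    degraded = []
    current = None
    for line in mdstat_text.splitlines():
        m = re.match(r"^(md\d+)\s*:", line)
        if m:
            current = m.group(1)  # remember which array this stanza describes
        status = re.search(r"\[([U_]+)\]", line)
        if current and status and "_" in status.group(1):
            degraded.append(current)
            current = None
    return degraded
```

Run from cron against the contents of /proc/mdstat and mail the result, and a dead mirror member at least gets noticed before the second disk goes.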
|
cross
|
|
response 98 of 547:
|
Mar 26 12:28 UTC 2003 |
Most ``24/7'' shops I've seen really are 24 hours a day. They use disk
subsystems a lot more interesting than what you think, though.
|
mdw
|
|
response 99 of 547:
|
Mar 26 21:30 UTC 2003 |
Most activity I've seen is actually centered (somehow) around human
schedules. Even in hospitals and in the travel industry this is true.
To get something approximating 24 hours of real activity you pretty much
need some sort of global presence (or some sort of artificial
constraint that causes humans to rearrange their schedule to suit the
computer). Despite the recent ubiquity of the internet, and the even
more recent fall of the dollar in international markets, I doubt this is
nearly as true of US business in general as Dan's experience apparently
indicates. And, of course, silicon valley isn't necessarily designing
for Dan's world either, despite the illusion their marketing droids
cast. If they were, there'd be a lot more discussion about the possible
performance hit while rebuilding a portion of a raid array.
|
keesan
|
|
response 100 of 547:
|
Mar 26 22:00 UTC 2003 |
Many factories in places like China run 24 hours to keep costs down.
|
slynne
|
|
response 101 of 547:
|
Mar 26 22:12 UTC 2003 |
Lots of factories in places like the US run 24 hours a day too.
|
gull
|
|
response 102 of 547:
|
Mar 26 23:35 UTC 2003 |
I remember one site I had a mail account on that had some kind of
external SCSI RAID storage array. One day they had a disk fail and the
rebuild, which had to be done offline, took a week to complete. They
were not amused.
|
styles
|
|
response 103 of 547:
|
Mar 29 18:35 UTC 2003 |
#101: :)
|