jhudson
response 75 of 547: Mar 7 16:26 UTC 2003

Good point. SCSI needed.
mdw
response 76 of 547: Mar 7 23:25 UTC 2003

Grex has had equipment fundraisers for hardware before.  Since we've
previously gone with trailing-edge CPUs, previous fundraisers have been
for memory, hard disks, etc.

I don't know if IDE commonly supports overlapped seeks yet.  With only 2
devices per channel, there is of course less advantage to overlapping
seeks, but all other things being equal, a 2-disk IDE chain that can't
do overlapped seeks is going to perform less well than a 2-disk SCSI
chain which can.  With small block transfers, overlapped seeks and more
spindles per given capacity (i.e., smaller drives) may be more important
to us than transfer rates.
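
As a back-of-envelope illustration of why spindle count can matter more
than transfer rate for small random I/O, here is a minimal Python
sketch.  The seek times, rpm figures, and drive mix below are
illustrative assumptions, not measurements of any particular drives.

    # Small random reads are dominated by seek time plus rotational
    # latency, so aggregate IOPS scales roughly with spindle count.
    def random_iops(avg_seek_ms, rpm):
        rotational_latency_ms = 0.5 * 60000.0 / rpm   # half a revolution on average
        return 1000.0 / (avg_seek_ms + rotational_latency_ms)

    one_big_ide = random_iops(avg_seek_ms=9.0, rpm=7200)        # one large IDE disk
    four_scsi   = 4 * random_iops(avg_seek_ms=5.5, rpm=10000)   # four smaller SCSI disks
    print("1 x 7200 rpm IDE : %5.0f IOPS" % one_big_ide)
    print("4 x 10K rpm SCSI : %5.0f IOPS" % four_scsi)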

There are 2, 4, and 6 channel IDE controllers.  A 6 channel IDE
controller can attach up to 12 IDE disks using one PCI slot.  I've heard
people claim that some of these mega-channel IDE based systems are very
fast disk machines, and that it even makes more sense to do software
RAID than hardware, on account of the CPU having so much more memory to
buffer things.  I don't know how much truth there is in all this.
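
To make the software RAID idea concrete, here is a toy sketch of the
block bookkeeping that RAID-0 style striping does on the host CPU.  The
12-disk figure follows the 6-channel controller example above; the
chunk size is an arbitrary assumption.

    # Map a logical block number onto (disk, physical block) across N
    # spindles, the core of what a software striping layer computes.
    def stripe(logical_block, n_disks, chunk_blocks=16):
        stripe_index, within = divmod(logical_block, chunk_blocks)
        disk = stripe_index % n_disks
        physical = (stripe_index // n_disks) * chunk_blocks + within
        return disk, physical

    for lb in (0, 16, 17, 200):
        print("logical block %3d -> disk %d, block %d" % ((lb,) + stripe(lb, n_disks=12)))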

One difference that is likely to be important to grex is that SCSI
drives today typically go into "server" machines.  IDE drives available
via retail channels are most commonly going into desktop machines.
There is a real split in the PC x86 world between server, desktop, and
home machines, with a corresponding descent in quality (and reduction in
reliability) between the three.  This is a recent development, so I'm
afraid Sindi won't have seen this in any of the machines she sees.
Since SCSI drives mainly go into server class machines, there is a
chance they'll be more rugged and reliable.
tonster
response 77 of 547: Mar 8 02:51 UTC 2003

Looking at the prices you guys are looking to pay for stuff, it really
seems like you're spending a hell of a lot more money than you need to.
 You can get a 54X CD-ROM brand new for $28 at Sky-Tech in Ann Arbor. 
And it's not some cheap knock-off drive.  Floppy drives cost no more
than $14.95 locally, and you don't have to pay shipping.  A lot of those
costs that were listed really are inflated for what you're getting.
pvn
response 78 of 547: Mar 8 07:05 UTC 2003

re #76:  IDE drives are currently optimized for large storage
(windoze bloat) and sequential access at high rate.  (Speaking
really generally; it seems to me it's been years since there
was any increase in rpm -- 7200 has been around for a while now
and that is tops.)  I absolutely do not disagree that SCSI drives
have always been far superior at chunky random access.  However, the
reason IDE is not generally seen in server class machines is
because it is not hot swappable -- it's not a problem or question of
being less reliable or lower quality manufacture.  Indeed I think WD,
for example, has a 5-year warranty on drives and I don't recall any SCSI
manufacturer offering more.  The major reason IDE drives are so much
cheaper is economy of scale -- you sell a ton of them for every SCSI
drive.  Plus the IDE drives really are stupider, although that
advantage has gone away over time.

What you gain by using really fast SCSI drives you lose by going with
PC hardware (remember what PC stands for).  Your motherboard itself
is suboptimal as a 'server class' machine in the first place - and
I think this is an argument you know full well.  That being said,
the pure integer compute that modern PC CPUs deliver overcomes
that by brute force - so what if you "share IRQs" when you can do it
so fast.  So for grex the PC motherboard is sure appropriate.

Steve is theoretically correct in that if grex were a server delivering
an indexed 36G of data - such as a database - over fast media such
as GigE, then absolutely the SCSI solution is the way to go.  I'm just
not sure that is a good model for what grex actually is or does.
With a couple gig of memory I wouldn't be surprised if most browsing
of conferences were satisfied from memory, in which case disk
speed is irrelevant.  I'm also sure that the use of SCSI won't hurt,
I'm just not sure it will help as much as folk think.  Again, to your
users you are delivering ascii content over a thin pipe.
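
Some rough arithmetic behind that "thin pipe" argument.  Every number
below is an assumption picked for illustration, not a measurement of
grex; the point is only the relative orders of magnitude.

    users          = 100                 # simultaneous interactive logins (assumed)
    per_user_bps   = 33600               # text scrolling at dial-up-ish rates
    dsl_uplink_bps = 768 * 1000          # a typical DSL uplink of the era
    ide_disk_Bps   = 30 * 1024 * 1024    # one commodity IDE drive, sequential

    print("aggregate user demand : %6.2f Mbit/s" % (users * per_user_bps / 1e6))
    print("DSL uplink            : %6.2f Mbit/s" % (dsl_uplink_bps / 1e6))
    print("one IDE drive         : %6.2f Mbit/s" % (ide_disk_Bps * 8 / 1e6))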

re #72: And as mdw already pointed out, yer modern MB typically already
has 2 IDE channels (2 IRQs) for 4 drives total.  And there are PCI IDE
controllers that can be added in (typically sharing an IRQ). I personally
run one 'server' that has a total of 8 IDE drives (cheap Maxtor add-on
controller).  Theoretically I could easily be serving close to a
TB of disk.  (In fact I'm running mostly a bunch of 500M drives that
I got from the trash of a firm that decided the proper method of
data destruction of old drives was to toss them about 50 feet across
a room into a dumpster. Of the 20 or so drives I 'dived', 17 were
apparently good (data was probably intact, although I simply built
linux filesystems on them).  My big difficulty was sheet-metal screwing
together the frames of two old 'tower' cases front to back and sawing
off the opposite sides of the cases in order to have enough bays for
all the drives - that and the y-cables... it sure don't look too good,
but it works and has now for about 2 years at least.)

re#73:  HA/Clustering/whatever you want to call it has been around
a long time now. This is no longer rocket science.  Once set up there
really isn't that much more to do especially for something like grex
where generally the users all do the same thing they ever do, over
and over, and over again.  The advantage of spending a little more
time on the front end is that your single point of failure becomes
your upstream connection (which is a significant POF in my opinion
but one that you shouldn't bother to address - nobody dies if they
can't get logged into grex).  The other advantage is that it gives
you the ability to do rolling backups and rolling upgrades.  Unless
you are a hardware maintenance organization the concept of having
perfectly good hardware sitting gathering dust is silly.
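
A minimal sketch of the kind of front-end work that buys you a warm
standby: poll the primary and decide when the spare should take over.
The hostname, port, and the promotion step are hypothetical
placeholders; a real setup would also move an IP address or update DNS.

    import socket
    import time

    PRIMARY = ("grex.example.org", 23)     # hypothetical primary login host

    def primary_alive(addr, timeout=5):
        try:
            with socket.create_connection(addr, timeout=timeout):
                return True
        except OSError:
            return False

    while True:                            # run on the standby machine
        if not primary_alive(PRIMARY):
            print("primary unreachable -- promote the standby / page a human")
        time.sleep(60)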

Again, it's just my 2 cents' worth and a couple minutes of typing
based on years of experience fixing problems involving systems a
little larger than grex.  And if you are gonna have an identical MB on
the shelf gathering dust, then you should make sure that you have at
least 2 identical power supplies as well.  (And no, you really don't need
to pay for server class hardware either.  So what if grex is down for
a couple days?  You don't lose money and nobody dies.)
scg
response 79 of 547: Mar 9 04:18 UTC 2003

I'll put in another plea for a rack mount case.  Even if you don't want to
colocate somewhere now (a bad decision being made by refusing to look at
current information, but not really worth arguing about at this point), it
will keep that option open for the future.  More to the point, the rack Grex
already has could easily hold a rackmount PC server case, the DSL router,
modems, a spare server for development work, the keyboard and router, and have
lots of room to spare, freeing up the rest of the space in the Pumpkin for
whatever it is that people think we need the Pumpkin for.
pvn
response 80 of 547: Mar 9 10:54 UTC 2003

re #79:  Rack mount cases are appropriate for a lot of things; grex
isn't one of them.  First, they tend to be far less forgiving of
environment.  Second, they tend to be a lot more expensive.  Shelves
for racks holding standard PC cases are a lot cheaper and give the same
space-saving quality.  Throw out the rack and cheap plastic shelving
units perform the same function.  If grex ever has to colo, then a cheap
2RU case at that time is probably better.
cross
response 81 of 547: Mar 9 15:52 UTC 2003

Regarding #80: what do you mean they're less forgiving of environment?
It's been my experience that rackmount cases are far more rugged than
your average tower.
scg
response 82 of 547: Mar 9 23:01 UTC 2003

Without a rack, rackmount cases are less convenient than mini-tower cases.
With a rack, the rackmount cases become much more convenient.  I just worry
when I see people talking about getting a really expensive full tower case
that the case will become a big limiter of future options.  Full tower cases
are fine when you've got one or two of them in a room (Grex's current
situation), but they really don't scale.
gull
response 83 of 547: Mar 10 03:26 UTC 2003

Re #81: Rack mount cases are very restricted inside, which means cooling
is more difficult and the ambient room temperature is much more
critical.  We have some 1U rack-mount servers at work.  They have about
five fans each, and the air that comes out of them is pretty hot.

If we ever do decide to go with colocation, the hardware could be moved
into a rack case.
scg
response 84 of 547: Mar 10 04:13 UTC 2003

1U rackmount cases are a special kind of beast, requiring special components
to go inside (normal PCI expansion cards don't have room to stick vertically
out of the motherboard).  1U rackmount cases certainly aren't the only kind
of rackmount case out there.
pvn
response 85 of 547: Mar 11 10:05 UTC 2003

Here we go, re-arranging deck chairs on the Titanic ---
If you want hardware that will replace the current machine in its
current environment, then cheap commodity PC stuff is the
way to go.  My only question is whether high end 'server class'
SCSI drives on a PC platform are the way to go, nothing more.

(Is it RU or U?  I'm not clear.)

Point being, if you don't have a reasonable temperature environment, then
rack mount is not the way to go.  Which is meaningless drift.
I don't think anyone is suggesting spending the bucks for stupid
racks instead of PC boxes.  I merely suggest two things: that we
reconsider SCSI in the first place, and that we budget for power
supplies and have them on the shelf at least.
jared
response 86 of 547: Mar 13 15:26 UTC 2003

Just to give some of my experiences:
with PC hardware, you want to have your /var/mail (mail spool) and
swap on SCSI disk.  The rest tends to be less relevant.

I "skipped to the end", so if backups haven't been discussed, I
suggest grabbing some cheap disk and having hot backups available on
already-spinning media in the same room.  I've found this invaluable
in my environment, going across a crossover ethernet cable.

You probably want daily (or even hourly?) backups of /etc (hourly of
/etc/passwd perhaps?) in order to allow for easy recovery.  I might be
able to donate some hardware towards this.
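
A minimal sketch of that hourly /etc snapshot, assuming the cheap
backup disk is simply mounted locally.  The paths are placeholders, and
a real setup would also prune old snapshots and push copies across the
crossover cable.

    import os
    import tarfile
    import time

    SRC  = "/etc"
    DEST = "/backup/etc-snapshots"       # mount point of the spare disk (assumed)

    def snapshot():
        os.makedirs(DEST, exist_ok=True)
        name = os.path.join(DEST, "etc-%s.tar.gz" % time.strftime("%Y%m%d-%H%M"))
        with tarfile.open(name, "w:gz") as tar:
            tar.add(SRC, arcname="etc")  # recursively archives /etc
        return name

    if __name__ == "__main__":           # run from cron, e.g. hourly
        print("wrote", snapshot())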
pvn
response 87 of 547: Mar 15 07:22 UTC 2003

re #86:  With a lot of RAM, why worry so much about swap space?  And in grex's
case - all data going out over a thin pipe at best - with proper tuning even a
micro$oft OS can keep up with email over DSL or cable speeds using IDE
drives (hopefully you don't have all that much local email over even
10 Mbps ethernet).  With high density IDE drives you have a lot of spare
space to do lots of backups.  And with mirroring IDE controllers or even
software RAID you have a lot of fault tolerance.  Even with neither, and
simply more big disks (JABOCD), you've still got lots of fault tolerance if
you put your mind to it.
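
To make the mirroring idea concrete, here is a toy sketch of what
software mirroring amounts to: every write goes to two devices, so
either one alone can satisfy reads after a failure.  The image files
below stand in for two IDE drives; an actual system would use the OS's
software RAID layer rather than anything hand-rolled like this.

    def mirrored_write(path_a, path_b, offset, data):
        # The whole trick: the same block lands on both devices.
        for path in (path_a, path_b):
            with open(path, "r+b") as dev:
                dev.seek(offset)
                dev.write(data)

    # Two scratch files standing in for disks:
    for p in ("diskA.img", "diskB.img"):
        with open(p, "wb") as f:
            f.truncate(1024 * 1024)
    mirrored_write("diskA.img", "diskB.img", 4096, b"important block")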

As for RACK cases: that dog don't hunt.  If you need to cram a lot of
CPUs into a small, expensive space with climate control, then rackmounts
surely are the way to go.  Even in a modern office environment with
HVAC and cleaning services, rackmount is questionable.  I don't know what
grex's current physical environment is, but I bet it's far more 'dirty'
and with a much wider range of temperature than a modern office.  More
like 'home', and thus a cheap conventional PC case with lots of 'dead
space' is the way to go for that - big fan and lots of room for dust.
jared
response 88 of 547: Mar 15 23:52 UTC 2003

re #87
Virtually all modern Unices swap out unused processes, and with high
SMTP and other load on the system you will see a continued need to swap
out a few processes, since that memory is being used more efficiently
as disk buffers.

I'd rather have my shell process swapped out while I'm in bbs, in order
to allow caching of the password file for background SMTP delivery,
instead of keeping my shell in memory and making the password lookups
slower.
jkd
response 89 of 547: Mar 16 04:54 UTC 2003

Here are a few comments from a perspective that a) is West Coast, and b) has
developed over the past 25 years of building and operating large-scale data
centers on a daily basis.

First, I live in Silicon Valley, and visit the surplus sources here basically
every weekend with a couple of friends for entertainment purposes. We've been
doing this for nearly three years now. So, every time I hear about people
making regular purchases at CompUSA, Best Buy, and similar national chains,
I *CRINGE*. However, I lived in Ky. for 10 years or so before moving out here,
so I understand how people get into that. Anyway, if Grex wants me to compare
prices that I see out here with what's available in MI or via mail-order, let
me know. I'll be happy to.

Second, out here in Silicon Valley, the .com meltdown means that, literally,
200,000 jobs EVAPORATED. This has had an enormous impact on availability and
pricing of all manner of commercial hardware. For example, I just bought what I
refer to as my new "Not H-P" machine. It's "surplus." It has NO dust in it
and is comprised of a 3.06 GHz P4, 512MB of PC2100 memory, a Radeon 9700 Pro
graphics card, on-board Ethernet, a 120GB Maxtor IDE drive, floppy, CDRW+DVDRW
combo drive, a DVD-RAM drive, and a modem card. My price? $1200. No kidding.
Why do I call it "Not H-P?" Because it was made for H-P and was an overstock
item. So, H-P surplused it and forced the OEM to paste pieces of plastic on the
sides of the case where the H-P logo is normally visible. However, to anyone
who has ever seen an H-P PC, it's OBVIOUS what it is.

I saw an earlier mention of a "Liebert UPS" in this thread. I hope that means
that Grex is in possession of what used to be regularly known as a "True
Online" UPS: one where utility AC power is converted by the UPS to DC, then
BACK to AC, so that the hardware connected to the UPS is fed power that is
totally clean because it has gone through a complete AC->DC->AC conversion
and therefore has a perfect 60 Hz sine wave ALL THE TIME. No surges, no sags,
etc. The value of such a UPS design cannot be overstated. It will prevent
countless problems from ever occurring. To me, this issue is even
more important than the details of the power supply and cooling within the
case. Any dollars invested in a *real* UPS will last far longer than dollars
invested in the computer itself.

Finally, I would like to disagree on the subject of tower vs. rackmount cases.
I'd vote for a rackmount unit. It doesn't have to be 1U; as someone mentioned
earlier, that has undesirable side-effects. But consider that with rackmount
cases, you can easily get LOAD-SHARING POWER SUPPLIES! Such a case will be
equipped with two of them and the load can be handled by only one. Should a
PS fail, the load is picked up by the surviving unit. Then you just slide
out the failing one and replace it. No rebooting, etc.

John


gull
response 90 of 547: Mar 16 05:08 UTC 2003

There are tower cases available with hot-swappable power supplies, as well.

It's nice, but given the reliability of power supplies and the
non-critical nature of Grex, I think it's probably unnecessary.
(Heck, Grex is often taken offline just to run backups.)
It's worth doing if it doesn't cost much more, though.  The big
disadvantage I see, other than the cost of the case, is that you
generally have to use 'special' power supplies then, instead of standard
ATX ones.
pvn
response 91 of 547: Mar 17 09:18 UTC 2003

'Special' meaning higher price?  Like the 'special' SCSI drives (36G
total) instead of the 360G that one could get for the same price or
less?  I mean, if you are going 'commodity' PC hardware for the MB, why
not go with commodity drives?  If you want, for about the same amount of
money as the SCSI drives you could do Fibre Channel (using an obsolete
controller, I'll grant you) and kick SCSI's butt.  You could
theoretically have 1 Gbps over a medium that could theoretically deliver
that over a 10 km distance using optical fibre.  Odd thing is, even at
66 MHz 64-bit PCI they all seem to be about the same when content is
delivered over yer average Internet connection....
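
For what it's worth, here are the rough theoretical ceilings of the
pieces being compared, which is where the "they all seem about the
same" observation comes from.  These are peak spec numbers, and the DSL
figure is a generous assumption.

    links_mbit = {
        "64-bit/66 MHz PCI bus": 64 * 66,    # ~4224 Mbit/s
        "1 Gbps Fibre Channel" : 1000,
        "Ultra320 SCSI"        : 320 * 8,    # 2560 Mbit/s
        "ATA/100 IDE"          : 100 * 8,    #  800 Mbit/s
        "DSL to the users"     : 1.5,        # downstream, being generous
    }
    for name, mbit in sorted(links_mbit.items(), key=lambda kv: -kv[1]):
        print("%-22s %8.1f Mbit/s" % (name, mbit))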
jared
response 92 of 547: Mar 17 12:55 UTC 2003

Because your 'commodity' drives rely on the central processor for all disk I/O,
whereas using SCSI offloads that to a separate processor (on the SCSI
controller).  And the performance gains from SCSI are clear to see.
On any system that gets the volume of mail and users that grex does, you need
fast disk for the day-to-day operations.  Swap, mail, and /etc/passwd all
take quite a hit.  On my own personal mail/web/whatnot server, once I
made a recent switch to SCSI from IDE (with the exception of my truly mass
storage, ie: /home and /mp3 partitions ;-) ) the system performance
increased greatly.  With a userbase the size of grex's, that
type of benefit cannot be ignored.
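
One rough way to put a number on that CPU cost: compare process CPU
time against wall-clock time while streaming a large file off the disk
in question.  The path is a placeholder, and the result will vary with
the controller and DMA settings; treat it as a sanity check, not a
benchmark.

    import os
    import time

    PATH = "/var/tmp/bigfile"            # any large file on the disk under test

    def cpu_cost_of_read(path, bufsize=1 << 20):
        t0_wall = time.time()
        t0_cpu  = sum(os.times()[:2])    # user + system CPU seconds so far
        with open(path, "rb") as f:
            while f.read(bufsize):
                pass
        wall = time.time() - t0_wall
        cpu  = sum(os.times()[:2]) - t0_cpu
        return cpu, wall

    cpu, wall = cpu_cost_of_read(PATH)
    print("CPU %.2fs over %.2fs wall (%.0f%% of one CPU)" % (cpu, wall, 100.0 * cpu / wall))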
gull
response 93 of 547: Mar 17 13:36 UTC 2003

Re #91: "Special" meaning proprietary to the particular case manufacturer.
mdw
response 94 of 547: Mar 19 05:59 UTC 2003

Re rack mount case.  I think the cooling thing is a non-issue.  There
are plenty of people who take shortcuts on cooling.  A tower case with
"bad" cooling is no worse than a rack mount case with good cooling.
Any inherent disadvantage rack mount cases might have is probably going
to be cancelled out by the fact that rackmount cases go into
environments where more is expected of them.  *Neither* of these
cases--tower or rack mount--is going to have cooling anywhere near
equal to our current Sun hardware.  That's a reality we've already
accepted by going to x86 hardware.  I think for grex the real issues
are:

A rack mount case is going to be *slightly* more expensive.
        (the estimate I heard was $150 higher.)
I don't think grex is actually likely to move into a
        rack mountable space in the next 18 months
If we do move, the expense to buy another case and move our
        guts is probably the least of our "moving" expenses.
If we do rackmount, we should probably get 2U [ which would
        probably impact our rental costs slightly were we
        ever to colocate. ]

We could certainly do rackmount in the pumpkin today - we have the very
heavy Sun rack mount case sitting there empty today.  It's even got
some very impressive fans of its own.  I see mostly small disadvantages
to the rackmount (slightly more expensive case, more electricity for
fans) that don't quite equal the potential "advantage" of moving to a
colocation space where we'd have to be rackmountable.  But frankly this
doesn't seem like a big point to me.

Perhaps we should commission John Doyle to find really cheap
rackmountable cases in SV.  If he can get cases no more expensive than
good tower cases, then I think that makes the difference insignificant
and worth going rackmountable.  The negative to buying everything
surplus is that it's much like our past bottom-feeding habits - except
the cost is slightly higher.

Jared is right that older IDE drives relied on the CPU to do
"programmed I/O".  But this is no longer true; besides which, there were
also even stupider SCSI controllers that did programmed I/O
(mostly for the scanner market, so that's thankfully all been replaced
by USB today). Basically, SCSI and IDE have been playing leapfrog with
each other, so today's fast IDE subsystem will outperform yesterday's
best SCSI.  I don't think it's ever really been true that IDE drives
took fewer components than SCSI.  The main win IDE used to have is that
it required fewer components *overall*; but I suspect this is both no
longer true (with IDE DMA) and no longer important (with the degree of
component integration we have today).  What I think matters the most to
us is the relative markets SCSI and IDE aim for; SCSI aims for server
configurations, IDE aims for personal machines.  Server configurations
are going to have greater demands for reliability and random I/O
throughput - at a price.  We're going to have to pay attention to be
sure the advantage continues to be real, and that the price remains
acceptable.  We will also have to accept that whatever we buy today
*will* be outclassed by something out next year - which will be both
faster *and* cheaper, and maybe even more reliable.
jared
response 95 of 547: Mar 22 22:02 UTC 2003

Marcus,

I've noticed that even modern systems using the latest (E)IDE technology
still see a considerable hit on the CPU for any disk I/O.

This is something I think is important to keep in mind for Grex when
planning for a good price/performance ratio.
lk
response 96 of 547: Mar 25 04:19 UTC 2003

If the rationale for SCSI is reliability, then perhaps you should also
consider mirrored IDE drives. (If at least one is in a removable bay,
a backup could be as simple as swapping drives and letting the new
drive be rebuilt. I haven't done this so I'm not sure about implementation.
The same would hold for SCSI, but it will be faster -- and more expensive.)

If drive speed is a concern, get the 15K RPM U320 SCSI drives.
(I don't believe this has been mentioned, so that might be the plan
rather than 10K rpm drives. I'm not sure if these are available in
18 GB sizes or just 36 and above.)

Lastly, as bdh mentioned, IBM doesn't really make their own SCSI
drives any more. I'm not sure if this is true across the line, but some
recent 36 GB 15K U320 drives I installed were actually Hitachi drives.

(You should be able to get IBM 18 GB U160 10K drives for about $100.)
mdw
response 97 of 547: Mar 26 08:00 UTC 2003

It would be interesting to know what the CPU bottleneck is with (E)IDE
these days.  I sure haven't had the time to actually look.  It shouldn't
be DMA, so a good kernel profiler would be entertaining to run.

IBM sold *all* their hard disk stuff to Hitachi.  They've been busy
getting rid of all their magnetic storage stuff.  Given the length of
time they've been in the field, the only reason I can see for them doing
this is that they have good reason to believe magnetic storage is going to
become obsolete fairly soon.  I don't think this is of any immediate
importance to grex, but if I were investing in the stock market, I might
consider this very interesting.

I believe Steve is looking for 15K U320 SCSI.

There are at least 2 problems with mirrored IDE -- proprietary
controllers, and performance during that "rebuild".  The most common
chipset seems to be Adaptec "aac" - there are linux & openbsd drivers
for this, but they're not fully functional.  The raid management stuff in
particular loses; I'm not 100% convinced we would necessarily even know
we lost a disk -- until we lost the 2nd one and were screwed.  Regarding
performance - I think the "7/24" shop most people have in mind with RAID
includes windows of relative idleness.  If you have a truly disk
intensive load with no letup, then the rebuild never completes.
Fortunately, grex doesn't have that, but we have seen that disk
intensive things start slowing everything else enough that the load
average starts to pile up and build.  If we had to go visit the machine
in person to install a new drive, then leave it in single-user mode
during the rebuild, I'm not sure we've really gained all that much vs.
the traditional "restore from tape" model, *especially* if this is
liable to happen more often.
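
A back-of-envelope version of that rebuild worry, assuming the array
only rebuilds during idle gaps.  The capacity, rebuild rate, and busy
fractions are all illustrative assumptions.

    def rebuild_hours(capacity_gb, rebuild_mb_per_s, busy_fraction):
        effective = rebuild_mb_per_s * (1.0 - busy_fraction)
        if effective <= 0:
            return float("inf")          # no idle time: the rebuild never finishes
        return capacity_gb * 1024.0 / effective / 3600.0

    for busy in (0.0, 0.5, 0.9, 1.0):
        print("disk %3d%% busy -> %5.1f hours to rebuild a 36 GB mirror"
              % (busy * 100, rebuild_hours(36, 20, busy)))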

There's another issue to think about too -- our reason for doing tape
backups is *not* just hardware reliability, but also to cover the case
of vandals destroying information.  Online backups don't protect against
this - if a vandal can destroy active filesystems, he can get at the
backup just as easily - and one of the more attractive attacks he can
make is to install a trojan in the backup then destroy the active
filesystem.  So, um, yeah, the mirrored IDE is an interesting option, but
I'm skeptical that it makes sense for us.  Sure, if we had extra time &
money, mirrored storage could be fun, but I don't see it as really
replacing the need either for reliable hardware in the first place, or
for backups to cover the case of vandals in the second.
cross
response 98 of 547: Mar 26 12:28 UTC 2003

Most ``24/7'' shops I've seen really are 24 hours a day.  They use disk
subsystems a lot more interesting than what you think, though.
mdw
response 99 of 547: Mar 26 21:30 UTC 2003

Most activity I've seen is actually centered (somehow) around human
schedules.  Even in hospitals and in the travel industry this is true.
To get something approximating 24 hours of real activity you pretty much
need some sort of global presence (or some sort of artificial
constraints that cause humans to rearrange their schedule to suit the
computer).  Despite the recent ubiquity of the internet, and the even
more recent fall of the dollar in international markets, I doubt this is
nearly as true of US business in general as Dan's experience apparently
indicates.  And, of course, silicon valley isn't necessarily designing
for Dan's world either, despite the illusion their marketing droids
cast.  If they were, there'd be a lot more discussion about the possible
performance hit while rebuilding a portion of a raid array.