twenex
The Unix Hardware Item   Sep 16 23:25 UTC 2006

This item is reserved for discussion of Unix hardware - that is, hardware
created specifically for the purpose of running a Unix operating system. The
PDP-11, Interdata 7/32 and 8/32, and DEC VAX have an honorary place in this
item as, although not specifically created to run Unix, they were nevertheless
the first four hardware platforms to run it.
27 responses total.
maus
response 1 of 27:   Sep 19 04:43 UTC 2006

On a somewhat newer topic than the original UNIX hardware, has anyone
dealt with adding additional processor boards to a cPCI-based system?

I just want to confirm what I think I understand from the assorted
literature out on the Internet. In a cPCI system, the additional system
boards (satellite or slave boards) add extra compute resources but are
still part of the same "system"; that is, they share the same state, the
same operating environment kernel image, and the same IP number. Is this
correct? If the main system board fails, will the system stay up and
limp along on the slave board, or is its presence tied to the primary
system board?
cross
response 2 of 27:   Sep 20 04:12 UTC 2006

I think it depends heavily on the operating system running on the system.  Some
versions are going to be resilient to things like failures of other system
boards on the bus; others are going to want to run on some "master" and aren't
going to take kindly to it going away suddenly.  Which is which varies.
maus
response 3 of 27:   Sep 20 04:17 UTC 2006

So I would probably want to run Solaris, QNX or a port of Carrier-Grade
Linux, rather than a BSD? 
cross
response 4 of 27:   Sep 20 04:24 UTC 2006

I guess it depends on what you want to do with it?  If you need high
availability, I'd go with QNX, which has a proven track record in this area.
But it's really impossible to say without knowing more about your specific
application....
maus
response 5 of 27:   Sep 28 04:26 UTC 2006

I was looking at running OBSD and using sysjail, or Solaris and zones, to
provide lightweight Virtual Private Environments to allow colleagues an
opportunity to play as root in OBSD or Solaris without giving away the
server, and also to allow them to try out non-Intel gear (most are used
to either Linux or Windows on Intel, with a few FBSD partisans). The
other thing I wanted to do was demonstrate that uptimes of over a couple
of months are a reasonable thing to look at. I figured that with either
a resilient BSD or UNIX and resilient, redundant, hot-swappable hardware,
with all users being kept in a jail environment, this puppy could do the
whole uptime-for-uptime's-sake thing.
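
For the Solaris half, the zone setup is pretty painless. Roughly, from
memory (the zone name and paths are made up):

    # zonecfg -z play1
    zonecfg:play1> create
    zonecfg:play1> set zonepath=/zones/play1
    zonecfg:play1> set autoboot=true
    zonecfg:play1> commit
    zonecfg:play1> exit
    # zoneadm -z play1 install
    # zoneadm -z play1 boot
    # zlogin -C play1

Hand a colleague zlogin access to a zone like that and they get what
looks like root on their own box, without being able to hurt the global
zone.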
cross
response 6 of 27:   Sep 28 05:23 UTC 2006

Oh, hmm.  Solaris might be able to handle that; I doubt that OpenBSD has the
support for hot-swappability of, e.g., CPU cards.
maus
response 7 of 27:   Sep 28 14:08 UTC 2006

You're probably right, now that I think about it. 
twenex
response 8 of 27:   Sep 28 15:52 UTC 2006

More to the point, I doubt Intel hardware has support for hot-swappable CPUs.
cross
response 9 of 27:   Sep 28 16:02 UTC 2006

Some does, but certainly the commodity stuff isn't likely to.
maus
response 10 of 27:   Sep 28 21:29 UTC 2006

Which is the reason I am not doing this on commodity Intel hardware. The
system is a 3-slot cPCI chassis from Marathon with a 440MHz Sun CP1500
board (UltraSPARC IIe processor, half a gig of RAM on a mezzanine board,
hme and SCSI built in, serial console), along with redundant,
hot-swappable power supplies and redundant, hot-swappable SCSI drives. I
would like to use a CP2060 or CP2080 to add resources. If this works
well, I have a couple of identical systems which I will put up in the
NOC as a test-bed for some of my guys.
tod
response 11 of 27:   Sep 28 21:32 UTC 2006

Awesome
twenex
response 12 of 27:   Sep 28 21:37 UTC 2006

Re: #10. Oh right, lost track, sorry.
ball
response 13 of 27:   Sep 28 22:23 UTC 2006

Re #10: Damn shiny.  Hot swap CPUs are a good thing (I've used them on
VME) but I don't know of an OS that could gracefully handle CPUs
disappearing on the fly.  It would be nice if you could mark a CPU as
"pending shutdown" or something and have processes gradually migrate to
other boards.  Fit solenoids so that a board can physically eject itself
once it's shut down. :-)
cross
response 14 of 27:   Sep 28 22:29 UTC 2006

I believe that Solaris lets you take a CPU out of production.  The FreeBSD
people might be working on that sort of thing too.
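
If I'm remembering the commands right, it's psradm(1M) and psrinfo(1M),
something like this (the CPU id is made up):

    # psrinfo                 (list processors and their state)
    # psradm -f 1             (take processor 1 off-line)
    # psradm -n 1             (bring it back on-line later)

That doesn't get you hot physical removal by itself, but it does let the
OS stop scheduling onto a CPU before you pull the board.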
maus
response 15 of 27:   Sep 29 02:21 UTC 2006

Re #13: I think more along the lines of creating a resource pool as a
small cluster and then putting resource partitioning on top of the
resource pool, so if a piece of kit fails, it can either be swapped live,
or just put a tag on it and ignore it until RMA day. As an analogy, think
of no longer thinking of discs, but instead have a big-ass RAID array and
then slice that up and take slices as needed.
cross
response 16 of 27:   Sep 29 02:52 UTC 2006

Well, I think what you're doing is neat, but have you considered virtual
machines under VMWare or Xen or even qemu?
nharmon
response 17 of 27:   Sep 29 03:14 UTC 2006

> As an analogy, think of no longer thinking of discs, but instead have a 
> big-ass RAID array and then slice that up and take slices as needed.

That sounds just about like how a SAN works. And when you couple a
SAN with Vmware you're left with a very reliable server infrastructure.
We're in the process of implementing the SAN+Vmware thing at work. We
were waiting to see if the new Vmware 3 supported iSCSI, which it does.
maus
response 18 of 27:   Sep 29 03:44 UTC 2006

I was actually using the disc virtualization as an analogy, but yeah,
SAN for storage is kind of spiffy, though the cost of entry is pretty
high, both in dollars and in learning. I think my company is just now
starting to dick around with iSCSI-SAN for our bigger customers.

I have tried VMWare and Xen, and both are interesting, though they are
really rather heavyweight and too resource intensive for what I plan to
do. qemu runs like a wounded dog with only two legs and a hangover.
Right now I am playing around with OpenVZ (the open-source version of
Virtuozzo) on RHEL 4, which seems to be an interesting and fun way of
doing things. If you have tried the zone facility in Solaris 10, jail in
FBSD, or sysjail in OBSD, you know they are rather spiffy and provide
sufficient isolation and manageability.
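
The OpenVZ workflow is about this simple, if I have it right (the
container ID, template name, and address are made up):

    # vzctl create 101 --ostemplate centos-4-i386-minimal
    # vzctl set 101 --ipadd 10.0.0.101 --hostname ve101 --save
    # vzctl start 101
    # vzctl enter 101

One container per colleague, each looking like its own box, all sharing
a single kernel.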
cross
response 19 of 27:   Sep 29 05:27 UTC 2006

Well, if you think that's good enough....  I think the thing about jails and
sysjails is that they're more oriented towards logical separation of system
spaces rather than real or physical separation.  Zones are a bit different
again, somewhere in between the two.

But, if you feel like jails are sufficient, then go for it.  SuperMax would
be something like VMWare.  Have you tried the version of qemu with the x86
accelerator?  (Though that description cracked me up...)
nharmon
response 20 of 27:   Sep 29 11:19 UTC 2006

VMware server and workstation are bottom-heavy, I will admit. But vmware
infrastructure (formerly known as ESX) is quite lean. It is basically a
stripped down Linux kernel until it starts up vmware, at which time the
linux kernel is swapped out for a vmware kernel (although the stripped
down linux is still there, it's running as a VM). It is really quite
slick, and we're finding that even with our low end 2-proc systems, we
can run around 4 or 5 virtual win2k3 servers under low-load conditions.

As for a SAN, the cost of entry may not be as bad as you think. In fact,
I think there is software for Linux to make it an iSCSI target. Ah, here
it is: http://iscsitarget.sourceforge.net/.
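
The target side is just a few lines in /etc/ietd.conf, something along
these lines (the IQN, backing device, and credentials are made up):

    Target iqn.2006-09.com.example:storage.lun1
        Lun 0 Path=/dev/vg0/lun1,Type=fileio
        IncomingUser iscsiuser secretpass

Any iSCSI initiator (VMware's included) should then be able to log in to
it over plain Ethernet.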

maus
response 21 of 27:   Sep 29 15:45 UTC 2006

re #19: True, the isolation is not as absolute as emulating hardware,
but unless the person can break something in Ring0, it should still
isolate them. The machine would run with a positive securelevel, which
should keep even people with root access from dicking around in Ring0
space without going through the normal access mechanisms and protections
put into place by the kernel. And according to Sun, zones were based on
the FBSD jail facility, with a little deep wizardry added in to make it
sexier and more Enterprise and stuff. And stuff. I've never heard of
SuperMax and I don't think I have seen any sort of accelerator for qemu.
If all of the instances are going to be running the same operating
environment, I do not get the reason for the extra abstraction layers
here and the duplication of kernels and low-level shit. As far as I can
tell, the only thing that would protect against is a kernel panic or a
crash, which probably means something even more serious is wrong, unless
I am misunderstanding.
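
On the BSD side that's just the securelevel knob, something like this
(from memory):

    # sysctl kern.securelevel        (check the current level)
    # sysctl kern.securelevel=2      (can be raised at runtime, never
                                      lowered while multiuser)

or kern_securelevel_enable="YES" and kern_securelevel="2" in rc.conf on
FBSD so it comes up that way at boot.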

re #20: I've never tried ESX, though it sounds conceptually nifty. How
friendly is it about letting you oversubscribe resources? That is one
thing I would like to be able to do if I go with something that makes
virtual hardware. The VMWare I am familiar with is VMWare Server, which
I occasionally use when I am modeling multi-tier networks (the ability
to create all sorts of virtual networks is its spiffiest feature,
IM(ns)HO) on my laptop; it's great, but with a half dozen virtual
servers and Windows paging like crazy, it is a bit slow. What I really
need to do is get IBM to give me a free small 390. I believe the
smallest one can give you 40 hardware-based LPARs and just run RHEL or
MontaVista in each of those.
nharmon
response 22 of 27:   Sep 29 19:54 UTC 2006

Well, some resources you could end up oversubscribing, like memory and
CPU time. Other resources are quite static, like disk space.

While the virtual networks and VLAN tagging in Vmware are cool, I'll
tell you vmware's greatest strengths are in its Virtual Center
management system and Vmotion. Vmotion especially, as it lets you
transfer virtual machines to different servers without interrupting them.
It takes about a minute, but during that time the VM is totally unaware
of it.
cross
response 23 of 27:   Sep 29 19:55 UTC 2006

Regarding #21: SuperMax is an actual type of prison; it's where they held,
e.g., John Gotti Sr. before he died.  Qemu, when running on x86, can be used
in conjunction with something called kqemu which passes through most
instructions to the base hardware (as opposed to interpreting the
instructions in software).
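
If I remember right, using it is just a module load plus a flag (the
disk image name is made up):

    # modprobe kqemu
    $ qemu -hda disk.img -m 256                  (user-mode code
                                                  accelerated automatically)
    $ qemu -kernel-kqemu -hda disk.img -m 256    (run guest kernel code
                                                  on the bare CPU too)

Not as fast as running natively, but a big step up from pure emulation.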

I think the big reason you'd want virtual separation of machines is so that,
if a bug in the actual kernel were found, you wouldn't be affected by it.
This doesn't just mean panics, but also security violations and the like.
It also lets you run multiple, different kernels on the same hardware.  For
your application, it may not be necessary, but it might be easier to get
going than zones, jails, sysjails, or anything else....
gull
response 24 of 27:   Oct 2 22:16 UTC 2006

Re #13: I seem to remember some people working on hot-swappable CPU
support in the Linux kernel, but I'm not sure.

One thing to keep in mind about jails is they can be hard to do 
securely.  It's awfully easy to screw up and leave the jail with access 
to something that can let the jailed user "escape."