

Grex Systems Item 21: The Unix Hardware Item
Entered by twenex on Sat Sep 16 23:25:49 UTC 2006:

This item is reserved for discussion of Unix hardware - that is, hardware
created specifically for the purpose of running a Unix operating system. The
PDP-11, Interdata 7/32 and 8/32, and DEC VAX have an honorary place in this
item as, although not specifically created to run Unix, they were nevertheless
the first four hardware platforms to run it.

27 responses total.



#1 of 27 by maus on Tue Sep 19 04:43:52 2006:

On a somewhat newer topic than the original UNIX hardware, has anyone
dealt with adding additional processor boards to a cPCI-based system? 

I just want to confirm what I think I understand from the assorted
literature out on the Internet. In a cPCI system, the additional system
boards (satellite or slave boards) add compute resources but are still
part of the same "system"; that is, they share the same state, the same
operating-environment kernel image, and the same IP number. Is this
correct? If the main system board fails, will the system stay up and
limp along on the slave board, or is its presence tied to the primary
system board?


#2 of 27 by cross on Wed Sep 20 04:12:35 2006:

I think it depends heavily on the operating system.  Some versions are
going to be resistant to things like failures of other system boards on
the bus; others are going to want to run on some "master" and aren't
going to take kindly to it going away suddenly.  Which is which varies.


#3 of 27 by maus on Wed Sep 20 04:17:18 2006:

So I would probably want to run Solaris, QNX, or a port of Carrier
Grade Linux, rather than a BSD?


#4 of 27 by cross on Wed Sep 20 04:24:04 2006:

I guess it depends on what you want to do with it?  If you need high
availability, I'd go with QNX which has a proven track record in this area.
But it's really impossible to say without knowing more about your specific
application....


#5 of 27 by maus on Thu Sep 28 04:26:35 2006:

I was looking at running OBSD with sysjail, or Solaris with zones, to
provide lightweight Virtual Private Environments that would give
colleagues an opportunity to play as root in OBSD or Solaris without my
giving away the server, and also to let them try out non-Intel gear
(most are used to either Linux or Windows on Intel, with a few FBSD
partisans). The other thing I wanted to do is demonstrate that uptimes
of over a couple of months are a reasonable thing to expect. I figured
that with either a resilient BSD or UNIX and resilient, redundant,
hot-swappable hardware, with all users kept in a jail environment, this
puppy could do the whole uptime-for-uptime's-sake thing.


#6 of 27 by cross on Thu Sep 28 05:23:31 2006:

Oh, hmm.  Solaris might be able to handle that; I doubt that OpenBSD has the
support for hot-swappability of, e.g., CPU cards.


#7 of 27 by maus on Thu Sep 28 14:08:39 2006:

You're probably right, now that I think about it. 


#8 of 27 by twenex on Thu Sep 28 15:52:33 2006:

More to the point, I doubt Intel hardware has support for hot-swappable CPUs.


#9 of 27 by cross on Thu Sep 28 16:02:01 2006:

Some does, but certainly the commodity stuff isn't likely to.


#10 of 27 by maus on Thu Sep 28 21:29:31 2006:

Which is the reason I am not doing this on commodity Intel hardware. The
system is a 3-slot cPCI chassis from Marathon with a 440 MHz Sun CP1500
board (UltraSPARC IIe processor, half a gig of RAM on a mezzanine board,
hme and SCSI built in, serial console), along with redundant,
hot-swappable power supplies and redundant, hot-swappable SCSI drives. I
would like to use a CP2060 or CP2080 to add resources. If this works
well, I have a couple of identical systems which I will put up in the
NOC as a test-bed for some of my guys.


#11 of 27 by tod on Thu Sep 28 21:32:26 2006:

Awesome


#12 of 27 by twenex on Thu Sep 28 21:37:22 2006:

Re: #10. Oh right, lost track, sorry.


#13 of 27 by ball on Thu Sep 28 22:23:45 2006:

Re #10: Damn shiny.  Hot-swap CPUs are a good thing (I've used them on
VME) but I don't know of an OS that could gracefully handle CPUs
disappearing on the fly.  It would be nice if you could mark a CPU as
"pending shutdown" or something and have processes gradually migrate to
other boards.  Fit solenoids so that a board can physically eject
itself once it's shut down. :-)


#14 of 27 by cross on Thu Sep 28 22:29:30 2006:

I believe that Solaris lets you take a CPU out of production.  The FreeBSD
people might be working on that sort of thing too.
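
For reference, the Solaris knob is psradm(1M), which sits on top of the
p_online(2) syscall. A minimal sketch of calling it directly (it is
Solaris-specific and must run as root; the CPU id is a made-up example,
see psrinfo(1M) for real ones):

    #include <sys/types.h>
    #include <sys/processor.h>
    #include <stdio.h>

    int
    main(void)
    {
        processorid_t cpu = 1;  /* example CPU id */

        /* Take the processor out of production.  p_online() returns
           the processor's previous state, or -1 on failure. */
        int prev = p_online(cpu, P_OFFLINE);
        if (prev == -1) {
            perror("p_online");
            return 1;
        }
        printf("CPU %d offline; previous state %d\n", (int)cpu, prev);
        return 0;
    }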


#15 of 27 by maus on Fri Sep 29 02:21:48 2006:

Re #13: I was thinking more along the lines of creating a resource pool
from a small cluster and then putting resource partitioning on top of
the pool, so if a piece of kit fails, it can either be swapped live, or
just get a tag put on it and be ignored until RMA day. As an analogy,
stop thinking in terms of individual discs; instead have a big-ass RAID
array, then slice that up and take slices as needed.


#16 of 27 by cross on Fri Sep 29 02:52:53 2006:

Well, I think what you're doing is neat, but have you considered virtual
machines under VMWare or Xen or even qemu?


#17 of 27 by nharmon on Fri Sep 29 03:14:03 2006:

> As an analogy, think of no longer thinking of discs, but instead have a 
> big-ass RAID array and then slice that up and take slices as needed.

That sounds just about like how a SAN works. And when you couple a SAN
with VMware you're left with a very reliable server infrastructure.
We're in the process of implementing the SAN+VMware thing at work. We
were waiting to see if the new VMware 3 supported iSCSI, which it does.


#18 of 27 by maus on Fri Sep 29 03:44:41 2006:

I was actually using the disc virtualization as an analogy, but yeah, a
SAN for storage is kind of spiffy, though the cost of entry is pretty
high, both in dollars and in learning. I think my company is just now
starting to dick around with iSCSI SANs for our bigger customers.

I have tried VMware and Xen, and both are interesting, though they are
really rather heavyweight and too resource-intensive for what I plan to
do. qemu runs like a wounded dog with only two legs and a hangover.
Right now I am playing around with OpenVZ (the free version of
Virtuozzo, under an open-source license) on RHEL 4, which seems to be an
interesting and fun way of doing things. If you have tried the zone
facility in Solaris 10, jail in FBSD, or sysjail in OBSD, they are
rather spiffy and provide sufficient isolation and manageability.


#19 of 27 by cross on Fri Sep 29 05:27:37 2006:

Well, if you think that's good enough....  I think the thing about jails
and sysjail is that they're oriented more towards logical separation of
system spaces than real or physical separation.  Zones are a bit
different again, somewhere in between the two.

But if you feel like jails are sufficient, then go for it.  SuperMax
would be something like VMware.  Have you tried the version of qemu with
the x86 accelerator?  (Though that description cracked me up...)


#20 of 27 by nharmon on Fri Sep 29 11:19:54 2006:

VMware Server and Workstation are bottom-heavy, I will admit. But VMware
Infrastructure (formerly known as ESX) is quite lean. It is basically a
stripped-down Linux kernel until it starts up VMware, at which time the
Linux kernel is swapped out for a VMware kernel (although the
stripped-down Linux is still there; it's running as a VM). It is really
quite slick, and we're finding that even with our low-end 2-proc
systems, we can run around 4 or 5 virtual Win2k3 servers under low-load
conditions.

As for a SAN, the cost of entry may not be as bad as you think. In fact,
I think there is software for Linux to make it an iSCSI target. Ah, here
it is: http://iscsitarget.sourceforge.net/.
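
That project (the iSCSI Enterprise Target) is configured through
/etc/ietd.conf. A minimal sketch of an export, assuming a made-up IQN
and a spare /dev/sdb to serve out:

    Target iqn.2006-09.net.example:storage.disk1
        # Serve the device as LUN 0, file-backed I/O
        Lun 0 Path=/dev/sdb,Type=fileio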



#21 of 27 by maus on Fri Sep 29 15:45:57 2006:

re #19: True, the isolation is not as absolute as emulating hardware,
but unless the person can break something in ring 0, it should still
isolate them. The machine would run with a positive securelevel, which
should keep even people with root access from dicking around in ring 0
without going through the normal access mechanisms and protections put
in place by the kernel. And according to Sun, zones were based on the
FBSD jail facility, with a little deep wizardry added in to make it
sexier and more Enterprise and stuff. And stuff. I've never heard of
SuperMax and I don't think I have seen any sort of accelerator for qemu.
If all of the instances are going to be running the same operating
environment, I do not get the reason for the extra abstraction layers
and the duplication of kernels and low-level shit. As far as I can tell,
the only thing that would protect against is a kernel panic or a crash,
which probably means something even more serious is wrong, unless I am
misunderstanding.
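
For the record, the securelevel in question is the BSD kern.securelevel
sysctl. A minimal, BSD-specific sketch of checking it from C:

    #include <sys/types.h>
    #include <sys/sysctl.h>
    #include <stdio.h>

    int
    main(void)
    {
        int mib[2] = { CTL_KERN, KERN_SECURELVL };
        int level;
        size_t len = sizeof(level);

        /* Read kern.securelevel; a running system can only raise
           it, never lower it, which is the point of the exercise. */
        if (sysctl(mib, 2, &level, &len, NULL, 0) == -1) {
            perror("sysctl");
            return 1;
        }
        printf("kern.securelevel = %d\n", level);
        return 0;
    }

At securelevel 1 or higher, even root can no longer open /dev/mem for
writing or clear immutable flags without going through the kernel's
rules.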

re #20: I've never tried ESX, though it sounds conceptually nifty. How
friendly is it about letting you oversubscribe resources? That is one
thing I would like to be able to do if I go with something that makes
virtual hardware. The VMWare I am familiar with is VMWare Server, which
I occasionally use when I am modeling multi-tier networks (the ability
to create all sorts of virtual networks is it's spiffiest feature,
IM(ns)HO) on my laptop; it's great, but with a half dozen of Virtual
Servers and Windows pagin like crazy, it is a bit slow. What I really
need to do is get IBM to give me a free small 390. I believe the
smallest one can give you 40 hardware-based LPARs and just run RHEL or
Monta Vista in each of those. 


#22 of 27 by nharmon on Fri Sep 29 19:54:46 2006:

Well, some resources you could end up oversubscribing, like memory and
CPU time. Other resources are quite static, like disk space.

While the virtual networks and VLAN tagging in VMware are cool, I'll
tell you VMware's greatest strengths are its VirtualCenter management
system and VMotion. VMotion especially, as it lets you transfer virtual
machines to different servers without interrupting them. It takes about
a minute, but during that time the VM is totally unaware of it.


#23 of 27 by cross on Fri Sep 29 19:55:45 2006:

Regarding #21: SuperMax is an actual type of prison; it's where they
held, e.g., John Gotti Sr. before he died.  Qemu, when running on x86,
can be used in conjunction with something called kqemu, which passes
most instructions through to the base hardware (as opposed to
interpreting the instructions in software).

I think the big reason you'd want virtual separation of machines is so that,
if a bug in the actual kernel were found, you wouldn't be affected by it.
This doesn't just mean panics, but also security violations and the like.
It also lets you run multiple, different kernels on the same hardware.  For
your application, it may not be necessary, but it might be easier to get
going than zones, jails, sysjails, or anything else....


#24 of 27 by gull on Mon Oct 2 22:16:27 2006:

Re #13: I seem to remember some people working on support for
hot-swappable CPUs in the Linux kernel, but I'm not sure.


One thing to keep in mind about jails is that they can be hard to do
securely.  It's awfully easy to screw up and leave the jail with access
to something that lets the jailed user "escape."


#25 of 27 by maus on Thu Oct 19 04:51:43 2006:

Well, after discussion with colleagues, the OS will be Solaris 10u1. The
box only has a single system board for the time being (unforeseen hits
to the budget precluded adding a satellite board), but I have confidence
in the reliability of this board and will keep a spare on hand just in
case. Virtual environments will be built from zones, with basic
functionality coming from loopback-mounted, read-only copies of the
system /bin, /sbin, /lib, etc.
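
A rough zonecfg(1M) session for one such zone might look like the
following (the zone name, path, and address are made-up examples; the
plain "create" gives a sparse root that inherits /lib, /sbin, /usr, and
/platform read-only via lofs):

    # zonecfg -z guest1
    zonecfg:guest1> create
    zonecfg:guest1> set zonepath=/zones/guest1
    zonecfg:guest1> add net
    zonecfg:guest1:net> set physical=hme0
    zonecfg:guest1:net> set address=192.0.2.11
    zonecfg:guest1:net> end
    zonecfg:guest1> commit
    zonecfg:guest1> exit
    # zoneadm -z guest1 install
    # zoneadm -z guest1 boot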

Anyone have a V880 or V890 that they don't need anymore? I could run a
fairly large database on one of those and have plenty of muscle left to
run a whole slew of full-root zones. If I remember right, that one had a
standard configuration of 8 processors, 16 GB of RAM, 8 hard drives,
dual NICs, and a combined LOM/remote-console-over-IP. Inasmuch as
computing resources can be sexy, that one is sexy.

Re: #24: In this instance, a jail is one created with the jail() or
sysjail() facility, not simply a chroot() jail. The jail() or sysjail()
mechanisms make it very hard to escape, as they presume that the
contents will be running as root and are hostile. In addition to
pivoting the root of the filesystem, they also pivot the root of the
process tree, and will not allow even root to chroot() or chdir() back
out with ".." tricks or the like.

http://sysjail.bsd.lv/ or
http://www.onlamp.com/pub/a/bsd/2006/03/09/jails-virtualization.html may
provide some interesting reading for the insomniac. 
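
To give a feel for how small the interface is, here is a minimal sketch
of creating a FreeBSD jail with the jail(2) syscall (the path, hostname,
and address are made-up examples; it must run as root, and the jail
root must already be populated):

    #include <sys/param.h>
    #include <sys/jail.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <unistd.h>

    int
    main(void)
    {
        struct in_addr ia;
        struct jail j;

        inet_aton("192.0.2.10", &ia);   /* example jail address */

        j.version = 0;                  /* original jail(2) API */
        j.path = "/jails/guest";        /* becomes the new root */
        j.hostname = "guest.example.net";
        j.ip_number = ntohl(ia.s_addr); /* host byte order */

        if (jail(&j) == -1) {
            perror("jail");
            return 1;
        }
        /* From here on this process is imprisoned: the filesystem
           root, the process tree, and the IP are all pivoted, even
           for root. */
        execl("/bin/sh", "sh", (char *)NULL);
        perror("execl");
        return 1;
    }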


#26 of 27 by gull on Tue Oct 24 23:12:26 2006:

Ah, gotcha.  I misunderstood and thought you were talking about a 
chroot()-style jail.


#27 of 27 by maus on Wed Oct 25 16:06:47 2006:

Nah. A chroot() jail would require giving the system root password to
the users and would not allow them a truly isolated region. If one of
them decided to be a prick, he could break out of his confinement and
stomp on someone else.
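
The classic break-out works because chroot() by itself does nothing
about a working directory left outside the new root. A sketch of the
well-known escape (it requires root inside the chroot, which is exactly
the scenario above; jail(2) refuses this trick):

    #include <sys/stat.h>
    #include <unistd.h>

    int
    main(void)
    {
        int i;

        /* Enter a deeper chroot without chdir()ing into it, leaving
           our working directory outside the active root. */
        mkdir("x", 0700);
        chroot("x");

        /* ".." is only clamped at the current root, which we are no
           longer under, so walk up to the real / and re-root there. */
        for (i = 0; i < 64; i++)
            chdir("..");
        chroot(".");
        return execl("/bin/sh", "sh", (char *)NULL);
    }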

