twenex
The Microkernel vs. Monolithic kernel Item
Sep 16 23:29 UTC 2006
This item is reserved for discussing the relative merits of microkernel and
monolithic operating system kernels (and everything in between).

15 responses total.

remmers
response 1 of 15:
Sep 18 15:11 UTC 2006
For background, see the famous Tanenbaum-Torvalds online debate from 1992,
where computer science professor and microkernel advocate Andy Tanenbaum
collides head-on with brash young computer science student and monolithic
Linux kernel designer Linus Torvalds. Transcripts can be found at various
places on the web, e.g. here:
http://people.fluidsignal.com/~luferbu/misc/Linus_vs_Tanenbaum.html

other
response 2 of 15:
Sep 18 20:00 UTC 2006
If you had to summarize the major fundamental point of contention
between the monolithic and micro kernel camps, in a way useful to
someone with a little bit of technical knowledge but understandable to a
lay person, what would you say?

twenex
response 3 of 15:
Sep 18 20:35 UTC 2006
I am, of course, assuming that this question was not specifically directed
at Remmers, but generally at anyone who can answer.
1. The difference between the two depends on how the system is divided up
between what is commonly termed "kernel space" and "userspace". Kernel space
(and its associated mode) is privileged and can do anything on the system;
userspace is limited to a basic subset of functions, and different userspace
programs are isolated from each other.
2. Monolithic kernels attempt to place "as much as possible" in the
(privileged) kernel, for reasons of simplicity and performance: in general,
it is much easier to design a monolithic kernel than a microkernel, and, for
technical reasons, their performance tends to be better.
3. The ideal microkernel would place the minimum amount of system functions
in kernel space, with everything else in userspace. Typical subsystems which
microkernels abstract into userspace include filesystems (for storing data)
and networking drivers. Many advocates of monolithic kernels object that the
interfaces between a microkernel and its userspace drivers tend to be complex,
and that the message passing they require hurts performance.
Almost all Unix implementations are monolithic kernels; the only prominent
counter-examples are QNX (a Unix-like OS for real-time systems) and Minix (a
pedagogic, minimalist Unix-clone, and the inspiration for (the monolithic)
Linux).
The difficulty of designing microkernels has led some teams to design "hybrid
kernels"; their detractors consider this term to mean "We wanted to design
a microkernel, but we had so many problems that we ended up with a monolithic
kernel". Windows NT, and its descendants Win2K and XP, are hybrid kernels.
4. A distinction must be drawn between microkernels and monolithic kernels
that load kernel modules from separate files. In such systems, which include
Linux and Windows NT, the loaded modules share kernel space with the rest of
the monolithic kernel. Conversely, it is possible to ship a microkernel system
as a single file in which only the microkernel proper runs in kernel space,
while the rest of the code in that file runs in userspace.
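
To make the driver-interface point concrete, here is a toy sketch in C of how
a read request might reach the filesystem under each design. This is not any
real kernel's API: fs_msg, msg_send, msg_receive, fs_server_port and
fs_driver_read are all invented for illustration.

    /* Toy sketch only; all names invented, no real kernel's API. */
    #include <stddef.h>
    #include <string.h>
    #include <sys/types.h>

    enum fs_op { FS_READ };

    struct fs_msg {
        enum fs_op op;
        int        fd;
        size_t     len;
        char       data[4096];
    };

    extern ssize_t fs_driver_read(int fd, void *buf, size_t len);
    extern void    msg_send(int port, const struct fs_msg *m);
    extern void    msg_receive(int port, struct fs_msg *m);
    extern int     fs_server_port;

    /* Monolithic: the filesystem lives in kernel space, so a read
     * is just a function call in one privileged address space. */
    ssize_t sys_read_monolithic(int fd, void *buf, size_t len)
    {
        return fs_driver_read(fd, buf, len);
    }

    /* Microkernel: the filesystem is a userspace server; the kernel
     * only passes messages.  The extra copies and context switches
     * are the performance cost point 3 mentions. */
    ssize_t sys_read_microkernel(int fd, void *buf, size_t len)
    {
        struct fs_msg req = { .op = FS_READ, .fd = fd, .len = len };
        struct fs_msg rep;

        msg_send(fs_server_port, &req);     /* hop to the server     */
        msg_receive(fs_server_port, &rep);  /* hop back with a reply */
        if (rep.len > len)
            rep.len = len;
        memcpy(buf, rep.data, rep.len);
        return (ssize_t)rep.len;
    }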

twenex
response 4 of 15:
Sep 18 20:50 UTC 2006
The GNU system, which provides most of the command-line utilities for the Linux
kernel, was originally to be built as a microkernel-based OS, with the Mach
microkernel serving as a base on top of which a group of "servers" called "the
HURD", or Hird of Unix-Replacing Daemons, would be implemented. At the time of
writing, emphasis has shifted towards implementing
the HURD on the high-performance L4 microkernel, but the HURD is not much
farther forward than it was when Linus (Torvalds) started programming Linux.
One particularly ugly (from a programming standpoint) but common approach is
to "place" a monolithic system on top of a microkernel: this was done with
the original, prototypical microkernel, Mach - the BSD Unix system, a
monolithic kernel, was integrated with Mach as a shortcut for people who
wanted to work on Mach and still have a usable system. The approach was
continued in NeXTSTEP, the OS for Steve Jobs' NeXT computers (an ancestor of
Mac OS X), and MkLinux, the "official" port of Linux to the PowerPC-based Macs
(no "h"). At this point, I suspect MkLinux is abandonware.

twenex
response 5 of 15:
Sep 18 21:02 UTC 2006
Some advantages of microkernels (as touted by their advocates) include
stability and security: a serious bug in any part of a monolithic kernel can
cause a kernel panic (also known as a Blue Screen of Death, Guru Meditation,
or system crash), but a bug in a userspace filesystem driver causes only that
driver to fail; one need only fix the bug in that driver and restart it.
However, it must be noted that whether a system uses a microkernel or not,
if a filesystem crashes in the middle of writing your data, your data is still
liable to be munged (hackish jargon, from the recursive acronym "Mung Until
No Good").
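
To illustrate the "restart it" part: in a microkernel the filesystem is an
ordinary process, so keeping it alive takes nothing more than a supervisor
like this minimal POSIX sketch (the /sbin/fs-server path is made up):

    /* Minimal supervisor: respawn a userspace driver when it dies.
     * fork/execl/waitpid are plain POSIX; /sbin/fs-server is made up. */
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        for (;;) {
            pid_t pid = fork();
            if (pid < 0) {
                perror("fork");
                return 1;
            }
            if (pid == 0) {
                /* Child: run the (imaginary) filesystem server. */
                execl("/sbin/fs-server", "fs-server", (char *)NULL);
                _exit(127);               /* exec failed */
            }
            int status;
            waitpid(pid, &status, 0);     /* block until it dies */
            fprintf(stderr, "fs-server died (status %d); restarting\n",
                    status);
            sleep(1);                     /* avoid a tight crash loop */
        }
    }

A monolithic kernel has no equivalent: the crashed driver was sharing an
address space with everything else, so there is nothing left to restart.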

other
response 6 of 15:
Sep 19 21:06 UTC 2006
If you add a journalled filesystem onto a microkernel, and take into
account how rarely it seems that filesystems crash (as opposed to other
things), why do your comments reflect an obvious preference for the
monolithic model?

gull
response 7 of 15:
Sep 19 21:11 UTC 2006
I think his point is there's no real advantage to the microkernel in
practice. Most of the time, if a device driver crashes, the system is
going down anyway. It doesn't matter if it causes a kernel panic or
not.
An example is Windows 2000. Display drivers run outside of the kernel,
in a different process space. In theory, that means that if the
display driver crashes, the machine can keep running. In reality, if
the display driver crashes in Windows, you're not going to have a lot
of options other than hitting the big red switch.

twenex
response 8 of 15:
Sep 19 21:16 UTC 2006
gull puts it nicely. To be fair, if Windows' GUI ran "over" a CLI, and there
was a problem with it (as with the (in)famous Ubuntu bug of a few weeks ago),
the system would be recoverable; however, as was pointed out in relation to
Ubuntu, few, if any, members of its target market would be able to deal with
putting the GUI right from the CLI.
To add, it's certainly true that the data in a journalled filesystem could
be "resurrected" if the filesystem crashed in the middle of writing the data
- but what if it crashed in the middle of writing the *meta*data (the
journal)?

cross
response 9 of 15:
Sep 20 04:22 UTC 2006
The advantage of the protected domain model of the microkernel varies
depending on the target application environment. For something like Windows,
it can rightly be seen to have very little benefit. On the other hand, if
my microkernel is running my nuclear reactor, and the logging filesystem dies
because of a dead disk, I'd rather it kept running and kept the cooling tower
going regardless.
Another approach is to structure the kernel as you would a microkernel, and
then implement it as a monolithic kernel. Microkernels give a nice
conceptual model for thinking about how to structure an operating system; if
for nothing else, this gives them value.
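
A tiny sketch of that last idea, with all names invented: give each subsystem
the narrow interface a microkernel server would have, but link everything into
one kernel image.

    /* Sketch of "microkernel structure, monolithic implementation".
     * All names are invented and the stubs do nothing useful. */
    #include <stddef.h>
    #include <sys/types.h>

    /* Every subsystem "server" exposes the same narrow interface... */
    struct server_ops {
        int     (*open)(const char *name);
        ssize_t (*read)(int handle, void *buf, size_t len);
        int     (*close)(int handle);
    };

    /* ...and the filesystem is just one implementation of it. */
    static int     myfs_open(const char *name)             { return 3; }
    static ssize_t myfs_read(int h, void *buf, size_t len) { return 0; }
    static int     myfs_close(int h)                       { return 0; }

    const struct server_ops fs_server = {
        .open  = myfs_open,
        .read  = myfs_read,
        .close = myfs_close,
    };

    /* The rest of the kernel only ever goes through the ops table.
     * Swap the table for IPC stubs and it becomes a microkernel;
     * leave the direct calls and it stays monolithic, but with the
     * microkernel's clean structure either way. */
    ssize_t kernel_read(const struct server_ops *srv, int h,
                        void *buf, size_t len)
    {
        return srv->read(h, buf, len);
    }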

gull
response 10 of 15:
Sep 20 23:21 UTC 2006
Re response 8: I don't think journalling filesystems usually work quite
that way. The idea is to protect the filesystem from corruption, not
to ensure that the data always gets written. It's a bit like a
transaction-based database; if the system crashes at any point, the
filesystem can be "rolled back" to a sane state.
In practice, I find that a power cut during writing to a journalled
filesystem usually results in some truncated or zero-length files.

cross
response 11 of 15:
Sep 20 23:34 UTC 2006
Regarding #8: the whole point of the journal is integrity, as David points
out. You have something akin to transactions; your write doesn't succeed
unless the transaction succeeds. If your power goes out halfway through a
write, the write call won't have returned yet anyway and the program should
never have assumed it was successful. Of course, this implies that
application programs detect failures and act in a sane way, not just the
filesystem.
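
For the curious, the write-ahead idea looks roughly like this crude C sketch.
The record format and apply_to_disk() are invented, and real journals batch,
checksum, and do far more; this only shows why a half-finished write is
harmless.

    /* Crude write-ahead journal sketch; the format and
     * apply_to_disk() are invented for illustration. */
    #include <stdio.h>

    enum rec_type { REC_DATA, REC_COMMIT };

    struct journal_rec {
        enum rec_type type;
        long          block;          /* destination block number */
        char          payload[512];
    };

    extern void apply_to_disk(const struct journal_rec *rec);

    /* Append the change, then a commit record, and force both to
     * stable storage *before* touching the real data.  Only after
     * this returns 0 may the caller's write() report success. */
    int journal_append(FILE *journal, const struct journal_rec *data)
    {
        struct journal_rec commit = { .type = REC_COMMIT };

        if (fwrite(data, sizeof *data, 1, journal) != 1)     return -1;
        if (fwrite(&commit, sizeof commit, 1, journal) != 1) return -1;
        if (fflush(journal) != 0)                            return -1;
        /* a real system would fsync(fileno(journal)) here */
        return 0;
    }

    /* Recovery after a crash: replay only records followed by a
     * commit; a trailing uncommitted record is simply dropped -
     * the "rolled back to a sane state" of response 10. */
    void journal_replay(FILE *journal)
    {
        struct journal_rec rec, pending;
        int have_pending = 0;

        while (fread(&rec, sizeof rec, 1, journal) == 1) {
            if (rec.type == REC_DATA) {
                pending = rec;
                have_pending = 1;
            } else if (rec.type == REC_COMMIT && have_pending) {
                apply_to_disk(&pending);
                have_pending = 0;
            }
        }
    }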

mcnally
response 12 of 15:
Sep 22 15:34 UTC 2006
re #2:
> If you had to summarize the major fundamental point of contention
> between the monolithic and micro kernel camps, in a way useful to
> someone with a little bit of technical knowledge but understandable
> to a lay person, what would you say?
The debate so far:
Is not!
Is too!
Nuh uh!
Yuh huh!
Seriously, though, many of the microkernel vs. monolithic (macro-?) kernel
"debates" quickly degenerate into personality clashes between prominent
figures in the two camps.

cross
response 13 of 15:
Sep 22 17:37 UTC 2006
That reminds me of a quote from Dave Presotto, back when he was at Bell Labs.
With respect to why Plan 9 wasn't a microkernel, he said something along the
lines of, "Because in order to implement a microkernel, you have to have a
brain the size of a planet, and we only have egos that big."
I thought it was funny.

twenex
response 14 of 15:
Oct 8 00:17 UTC 2006
Heh.
I've been accused of a bias towards monolithic systems. I don't think that's
quite accurate, though I can see how you can get the idea from my comments.
However, for one thing, I'm not a programmer, much less a kernel programmer,
so I don't have as much of a stake in these things as some might. What I will
say is that the majority of systems I've used have been monolithic; it may
be the case that designing a "quick and dirty" system, as some might call a
monolithic kernel, is easier in the "real world" than what everyone would
surely PREFER to write - a pristine, legacy-free system. It's noticeable,
however, that the microkernel systems I HAVE used - QNX and the AmigaOS - have
in practice been much smaller and "done more with less" than the monolithic
kernels.
(1. I WILL admit to a bias towards monolithic kernels in that I don't consider
hybrid kernels as having anything to do, in practice, with microkernels; this
is why I don't count NT and its successors as microkernels. It's also
noticeable that whether you consider the NT kernel monolithic or not, the
OS just keeps growing! 15GB at last count!
2. Some might not consider the AmigaOS a true microkernel, as it did not
enforce any separation between kernel space and userspace; however, it shares
with microkernels the property that things like filesystems can be updated
simply by updating or adding a library.)

cross
response 15 of 15:
Oct 8 03:31 UTC 2006
There's nothing wrong with having preferences, and of course, one part of
the equation that's often overlooked is what the purpose of the system under
construction is. If your goal is to do real-world work, then you have some
sort of cost/benefit metric (not necessarily economic in nature) which is
going to heavily influence your design. I.e., it might be a lot easier to get
a monolithic program together that's only a few KLOC to run your cheap
consumer electronics gadget; it wouldn't justify the hoops one has to jump
through to write a teeny microkernel from scratch. You might be able to
verify and reason about the 2K of code that runs your kid's Speak & Spell
a lot more easily than one could verify and reason about a microkernel.
Similarly, if a life depends on it, you might find that it makes more sense
to enforce protection boundaries to minimize the chances of one part of
the system crashing and taking down, say, a nuclear reactor or smoke
detector or car's braking system, and that might best be implemented as a
microkernel. Or, you might be writing a research system to push the
boundaries of operating systems research. In that sense, monolithic kernels
have more or less been done. Again, the real answer is that there is no
right or wrong. It all boils down to what you need to accomplish.