Grex > Systems > #24: The Microkernel vs. Monolithic kernel

cross
response 9 of 15:
Sep 20 04:22 UTC 2006
The advantage of the protected domain model of the microkernel varies
depending on the target application environment. For something like Windows,
it can rightly be seen to have very little benefit. On the other hand, if
my microkernel is running my nuclear reactor, and the logging filesystem dies
because of a dead disk, I'd rather it kept running and kept the cooling tower
going regardless.
Another approach is to structure the kernel as you would a microkernel, and
then implement it as a monolithic kernel. Microkernels give a nice
conceptual model for thinking about how to structure an operating system;
if nothing else, that alone gives them value.
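To make that concrete, a made-up sketch (the names are mine, not any real
kernel's): subsystems talk only through a narrow message interface, and a
build switch decides whether a "call" crosses into a separate server via
IPC or is just a plain function call:

    /* All names here are invented for illustration. */
    #include <stddef.h>

    struct msg { int op; void *buf; size_t len; };

    /* The filesystem "server" entry point; a trivial stub here so the
       sketch stands alone. */
    int fs_handle(struct msg *m) { (void)m; return 0; }

    #ifdef MICROKERNEL
    #define FS_PORT 7                       /* hypothetical port */
    int ipc_send(int port, struct msg *m);  /* hypothetical IPC primitive */
    int fs_call(struct msg *m) { return ipc_send(FS_PORT, m); }
    #else
    /* Monolithic build: the structure on paper is identical, but the
       "message send" compiles down to an ordinary function call within
       one address space. */
    int fs_call(struct msg *m) { return fs_handle(m); }
    #endif

Callers of fs_call() never know which build they got, which is the sense in
which you keep the microkernel's conceptual structure either way.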

gull
response 10 of 15:
Sep 20 23:21 UTC 2006
Re resp:8: I don't think journalling filesystems usually work quite
that way. The idea is to protect the filesystem from corruption, not
to ensure that the data always gets written. It's a bit like a
transaction-based database; if the system crashes at any point, the
filesystem can be "rolled back" to a sane state.
In practice, I find that a power cut during writing to a journalled
filesystem usually results in some truncated or zero-length files.
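A toy sketch of that rollback idea (the record format and names are
invented, not ext3's or anyone else's): each change is appended to the
journal, and its commit flag is set only once the whole record is safely on
disk; recovery replays the committed prefix and simply discards the rest:

    /* Toy write-ahead journal; everything here is made up. */
    #include <stdio.h>

    struct jrec {
        unsigned long block;      /* which metadata block this updates */
        unsigned char data[512];  /* the new contents of that block */
        int committed;            /* set only after the record is on disk */
    };

    /* Stand-in for writing a block back to its real home on disk. */
    static void apply_to_disk(unsigned long block, unsigned char *data)
    {
        printf("replaying block %lu\n", block);
        (void)data;
    }

    /* After a crash: replay the committed prefix, discard the rest.
       Discarding the uncommitted tail is the "roll back" - a half-
       finished transaction never becomes visible in the filesystem. */
    void recover(struct jrec *log, int nrecs)
    {
        for (int i = 0; i < nrecs; i++) {
            if (!log[i].committed)
                break;
            apply_to_disk(log[i].block, log[i].data);
        }
    }

Note that nothing in this scheme promises your file data made it out, only
that the metadata stays consistent; which is exactly why you can still find
truncated or zero-length files afterwards.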

cross
response 11 of 15:
Sep 20 23:34 UTC 2006
Regarding #8: the whole point of the journal is integrity, as David points
out. You have something akin to transactions; your write doesn't succeed
unless the transaction succeeds. If your power goes out halfway through a
write, the write call won't have returned yet anyway, and the program should
never have assumed it was successful. Of course, this implies that
application programs detect failures and act in a sane way, not just the
filesystem.
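For what "act in a sane way" might look like on the application side, a
minimal sketch (write() and fsync() are the ordinary POSIX calls; the
write_all() helper is invented):

    #include <errno.h>
    #include <unistd.h>

    int write_all(int fd, const char *buf, size_t len)
    {
        while (len > 0) {
            ssize_t n = write(fd, buf, len);
            if (n < 0) {
                if (errno == EINTR)
                    continue;   /* interrupted, not failed: retry */
                return -1;      /* real failure: caller must handle it */
            }
            buf += n;
            len -= (size_t)n;
        }
        /* write() returning success only means the kernel accepted the
           data; fsync() is what forces it (and the journal) to disk. */
        return fsync(fd);
    }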

mcnally
response 12 of 15:
Sep 22 15:34 UTC 2006
re #2:
> If you had to summarize the major fundamental point of contention
> between the monolithic and micro kernel camps, in a way useful to
> someone with a little bit of technical knowledge but understandable
> to a lay person, what would you say?
The debate so far:
Is not!
Is too!
Nuh uh!
Yuh huh!
Seriously, though, many of the microkernel vs. monolithic (macro-?) kernel
"debates" quickly degenerate into personality clashes between prominent
figures in the two camps.

cross
response 13 of 15:
Sep 22 17:37 UTC 2006
That reminds me of a quote from Dave Presotto, back when he was at Bell Labs.
With respect to why Plan 9 wasn't a microkernel, he said something along the
lines of, "Because in order to implement a microkernel, you have to have a
brain the size of a planet, and we only have egos that big."
I thought it was funny.

twenex
response 14 of 15:
Oct 8 00:17 UTC 2006
Heh.
I've been accused of a bias towards monolithic systems. I don't think that's
quite accurate, though I can see how you might get that idea from my
comments. For one thing, I'm not a programmer, much less a kernel
programmer, so I don't have as much of a stake in these things as some
might. What I will say is that the majority of systems I've used have been
monolithic; it may be that designing a "quick and dirty" system, as some
might call a monolithic kernel, is easier in the "real world" than what
everyone would surely PREFER to write - a pristine, legacy-free system.
It's noticeable, however, that the microkernel systems I HAVE used - QNX
and the AmigaOS - have in practice been much smaller, and "done more with
less", than the monolithic kernels.
(1. I WILL admit to a bias towards monolithic kernels in that I don't
consider hybrid kernels as having anything to do, in practice, with
microkernels; this is why I don't count NT and its successors as
microkernels. It's also noticeable that whether you consider the NT kernel
monolithic or not, the OS just keeps growing - 15GB at last count!
2. Some might not consider the AmigaOS a true microkernel, as it did not
enforce a separation between kernel and userspace; however, it shares with
microkernels the property that things like filesystems can be updated
easily, by simply updating or adding a library.)

cross
response 15 of 15:
Oct 8 03:31 UTC 2006
There's nothing wrong with having preferences, and one part of the equation
that's often overlooked is the purpose of the system under construction.
If your goal is to do real-world work, then you have some sort of
cost/benefit metric (not necessarily economic in nature) which is going to
heavily influence your design. I.e., it might be a lot easier to get a
monolithic program together that's only a few Kloc to run your cheap
consumer electronics gadget: it wouldn't justify the hoops one has to jump
through to write a teeny microkernel from scratch. You might be able to
verify and reason about the 2k of code that runs your kid's Speak & Spell
a lot more easily than one could verify and reason about a microkernel.
Similarly, if a life depends on it, you might find that it makes more sense
to enforce separation of boundaries to minimize the chances of one part of
the system crashing and taking down, say, a nuclear reactor or smoke
detector or a car's braking system, and that might best be implemented as
a microkernel. Or you might be writing a research system to push the
boundaries of operating systems research; in that sense, monolithic kernels
have more or less been done. Again, the real answer is that there is no
right or wrong. It all boils down to what you need to accomplish.
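As a crude userspace caricature of that separation (entirely invented, and
real microkernels use separate address spaces and IPC rather than fork()):
run the risky component as its own process and restart it when it dies, so
a crash there never takes the critical part with it:

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* Stand-in for the untrusted component (driver, filesystem, ...). */
    static void driver_main(void) { sleep(1); }

    int main(void)
    {
        for (;;) {
            pid_t pid = fork();
            if (pid == 0) {           /* child: the isolated component */
                driver_main();
                _exit(0);
            }
            if (pid < 0)
                return 1;             /* couldn't even fork; give up */
            int status;
            waitpid(pid, &status, 0); /* supervisor just waits... */
            fprintf(stderr, "component died; restarting\n");
            /* ...and the reactor/brakes/smoke detector keep running. */
        }
    }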