Grex > Coop > #297: Grex: It's time to switch operating systems.

25 new of 29 responses total.
vsrinivas
response 5 of 29:
Nov 30 15:14 UTC 2010
Using FreeBSD would allow Grex to switch to the ZFS filesystem; I think
that alone would be an excellent reason in its favour.
FreeBSD release branches are supported for considerably longer than
OpenBSD releases. While that isn't a major deal for Grex (it is keeping
up-to-date fairly well), it would keep upgrade pains confined to longer
intervals.
-- vs

cross
response 6 of 29:
Nov 30 15:23 UTC 2010
ZFS is another great point.

tsty
response 7 of 29:
Dec 1 19:45 UTC 2010
Did a little digging ... especially because of the starved-RAM problem cross
ran into.
These may be worth the click & read:
http://www.undeadly.org/cgi?action=article&sid=20100618041150
http://www.osnews.com/comments/23978
http://mongers.org/openbsd/interview-espie-ports
http://onlamp.com/pub/a/bsd/2004/03/18/marc_espie.html

tsty
response 8 of 29:
Dec 1 19:59 UTC 2010
Also, from sys.cf there is this:
Item 100: Linus Torvalds on OpenBSD
Entered by John H. Remmers (remmers) on Thu, Jul 17, 2008 (10:18):
Ran across this today at
http://article.gmane.org/gmane.linux.kernel/706950

cross
response 9 of 29:
Dec 1 20:30 UTC 2010
Pretty much all of that, except for the OpenBSD 4.8 announcement
link, is a couple of years old. Grex is running OpenBSD 4.8.
I found the problem installing packages; an old dependency that had
been removed in the GNOME libraries (needed through a complicated
set of dependencies by the RT package) had been installed as a port,
but then that port had been removed. When pkg_add (the tool Marc
Espie wrote to add or update packages under OpenBSD) went to upgrade
that package, it got itself into an infinite loop trying to navigate
what it thought was a cyclic dependency chain. I guess every time
through that loop it set a variable, or added something onto an
array (these tools are written in Perl) or something similar until,
eventually, the thing just ran out of memory. I tracked it down
by manually following the dependency chain until I found the cycle,
and force-removing the no-longer-existing package (and everything
that depended on it).
Now, this to me sort of exemplifies what I dislike about OpenBSD.
Espie's package tools are often held up as examples of what they
do *right*, but actually, they've got some pretty serious bugs in
them. I mean, really; the tool didn't bother to keep some sort of
"visited node" list when it traversed the package dependency graph?
Detecting a cycle in a directed graph isn't that hard. Similarly,
having some new packages and some old packages on the system, without
doing any sort of maintenance of the original dependency information,
just invites troubles. Databases get around this by having a notion
of an atomic transaction: either everything succeeds, or it all
fails and is "rolled back" in a way that is transparent to the
consumer of the data...none of this half-updated business.
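As a sketch of that "visited node" idea: a depth-first search that tracks which packages are on the current path will find a dependency cycle instead of looping forever. This is illustrative Python, not pkg_add's actual code (which is Perl), and the package names are made up.

```python
# Illustrative only: cycle detection in a directed dependency graph using
# a "visited" set, the kind of check described above.  Names are hypothetical.

def find_cycle(deps):
    """Return a list of packages forming a cycle (first node repeated at the
    end), or None if the dependency graph is acyclic.

    deps maps each package name to the list of packages it depends on.
    """
    on_path = set()   # packages on the current DFS path
    done = set()      # packages fully explored (known cycle-free)
    path = []

    def visit(pkg):
        on_path.add(pkg)
        path.append(pkg)
        for dep in deps.get(pkg, ()):
            if dep in on_path:                     # back edge: found a cycle
                return path[path.index(dep):] + [dep]
            if dep not in done:
                cycle = visit(dep)
                if cycle:
                    return cycle
        path.pop()
        on_path.discard(pkg)
        done.add(pkg)
        return None

    for pkg in list(deps):
        if pkg not in done:
            cycle = visit(pkg)
            if cycle:
                return cycle
    return None

# A contrived dependency chain like the one described above:
deps = {"rt": ["gnome-lib"], "gnome-lib": ["old-port"], "old-port": ["gnome-lib"]}
print(find_cycle(deps))   # ['gnome-lib', 'old-port', 'gnome-lib']
```

Each package is visited at most once, so the walk terminates even when the graph contains a cycle.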
The FreeBSD people seem to do a lot better with portsnap and
portupgrade.
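The all-or-nothing transaction behaviour described above can be sketched with SQLite, whose connections roll back automatically when an update fails partway through. The schema and package names here are hypothetical, not OpenBSD's actual package database format.

```python
# Illustrative sketch of atomic package-database updates using SQLite;
# the table and package names are made up for the example.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE installed (name TEXT PRIMARY KEY, version TEXT)")
db.execute("INSERT INTO installed VALUES ('rt', '3.8')")
db.commit()

try:
    # "with db" opens a transaction: it commits on success and rolls
    # back automatically if an exception escapes the block.
    with db:
        db.execute("UPDATE installed SET version = '3.9' WHERE name = 'rt'")
        raise RuntimeError("simulated failure partway through the upgrade")
except RuntimeError:
    pass

# The half-finished update was rolled back; no "half-updated business".
version = db.execute(
    "SELECT version FROM installed WHERE name = 'rt'").fetchone()[0]
print(version)   # 3.8
```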

tsty
response 10 of 29:
Dec 2 05:46 UTC 2010
Re 9 ... great about finding the prob & fix!! Now, it should never appear
again? pkg_add was too new compared to that old dependency for it to
have been considered? Just asking.

cross
response 11 of 29:
Dec 2 08:50 UTC 2010
No, it'll probably happen again (in fact, it did later, with another package).
The point about pkg_add is that it has bugs in it, and those bugs appear, at
first inspection, to be pretty deep into its architecture.

cross
response 12 of 29:
Dec 2 09:34 UTC 2010
Here's another interesting point about OpenBSD: they will categorize
security problems that affect their system, but weren't discovered
by them, as "reliability" problems. For instance, the recent OpenSSL
vulnerability (for which the FreeBSD project released a security
advisory) was listed as a reliability problem by OpenBSD.
Well, I guess their project can continue to claim that they've had
some artificially low number of security holes in "a heck of a long
time" if they just don't call security holes security holes.
At the last staff meeting back in March, Steve made mention of a
bug in FreeBSD's ftpd, citing that as a reason we should stick with
OpenBSD. Unfortunately, that *exact same bug* affected OpenBSD and
existed on Grex. Again, the OpenBSD project marked this as a
"reliability fix." Despite their supposedly superior auditing,
they didn't catch either of these problems.
So this is another way of saying that I don't buy OpenBSD's security
claims. I'm not saying they don't do a good job, but so does everyone
else. In a lot of ways, it appears they do a better job, but that's
partly self-selection on their part: when your definition of a
"security hole" is so tightly focused, it's not that hard to make
it look like you're light years ahead of everybody else, but are you
really? I say no. And in that case, the main rationale for why we've
stayed on OpenBSD for so long is, I think, demonstrably false.

remmers
response 13 of 29:
Dec 3 13:53 UTC 2010
I've done some setting up of FreeBSD systems, and as a result - although
I'm not nearly as experienced in the nuts-and-bolts of system
administration as Dan is - I've found FreeBSD straightforward to manage
and tend to agree that FreeBSD would be a better choice for a system
like Grex than OpenBSD is. The fact that FreeBSD supports modern
processor architectures better than OpenBSD is another point in its favor.
How stable is ZFS on FreeBSD nowadays? The last time I looked (which
was a while ago) the implementation was somewhat experimental.
FreeBSD is well-supported in the cloud, OpenBSD not so much I think.
Given the likelihood of Grex's moving to the cloud eventually, that's
another reason for abandoning OpenBSD.
Should cloud support factor into decisions we make now, even if we're
not going to move to the cloud quite yet? I'm thinking of Amazon's EC2
service, which is widely used by some big players (e.g. by Netflix) and
offers Linux and Solaris virtual machines but not FreeBSD at this point.
At the risk of being accused of heresy ;-), should we be considering
going with Linux?

veek
response 14 of 29:
Dec 3 14:19 UTC 2010
Solaris has stable ZFS (from what I've seen at work, but my exposure is
minimal). Why don't we create a test partition and see?
I think we are looking at it the wrong way (Linux/ZFS/etc.). We have a
bunch of volunteers, right? A, B, C, etc. If A wants to try something,
let him try it so long as he doesn't create unwanted work for B! Or, in
other words, B pre-approves the task. E.g.: if cross wants ZFS, he should
get it, provided tsty approves of it beforehand (because TS, or someone
else, will have to go reset the box / re-install the box if ZFS b0rks);
so get it pre-approved by WHOEVER has to clean up the mess.
This won't create problems and heated debates about which is the
"better" solution. The actual "better" solution is ultimately the people
using the box: if you've got 50 people in a bullock-cart and 1 in a car,
the cart is better simply because it serves more people.

cross
response 15 of 29:
Dec 3 15:44 UTC 2010
ZFS on FreeBSD is quite stable nowadays; certainly production-ready.
I think that it's always good to think ahead: cloud support (as you
put it) should definitely be a consideration. It may be a while
before we move there, but the whole world is heading in that direction
and I think it would be silly if Grex tried to resist that tide. I
believe one of the reasons we're in the malaise we are in now is that
we spent too long trying to hold back other tides with teaspoons.

remmers
response 16 of 29:
Dec 4 20:48 UTC 2010
Interesting - you were the resister and quite opposed to a move to the
cloud when I suggested it a couple of years ago. What's changed since then?
And how might a move to the cloud in the future affect our choice of OS
today?

cross
response 17 of 29:
Dec 5 01:04 UTC 2010
resp:16 I wasn't opposed in the long term; I was opposed in the
near term, and still am (in the near term). I don't think the
present offerings are mature enough, or offer a compelling enough
price point over our own hardware. I also think there are a host
of legal issues to be thought through, and I think most of the
technical benefits of virtualization can be realized by a combination
of a remote console capability at the hardware level, and maybe
virtualizing our own hardware (e.g., run Grex under Xen or VMware
or something, but on a computer we own and control).
That said, I think in the long term, jumping into the cloud is
inevitable. One cannot fight the march of time. Grex has tried,
and I think a lot of the current predicament is a result. I also
think that, in about five years, precedent will have been set for
the legal ramifications of running a service like Grex on a virtualized
hosting provider, and the price point will continue to get better
for the sorts of capabilities we'd like.
In other words, I don't think it's the best avenue of approach now,
but I think we would do well to prepare for it in the future.

remmers
response 18 of 29:
Dec 8 16:38 UTC 2010
In addition to getting a handle on the legal ramifications, we'd also
want a cloud hosting service whose TOS are compatible with the kind of
system Grex is and wants to continue to be: Free speech in the forums,
and full access by users to the full range of Unix tools and programming
languages after minimal verification. Offhand I'm not so sure how easy
that will be to come by.
Currently, there's a kind of "Grex annex" that I've been donating and
that resides in the cloud. It's a FreeBSD virtual machine hosted at
rootbsd.net. I've been using it for a while to keep an offsite (as in,
somewhere in North Carolina) backup of the conferences and the
grexconfig repository -- these are sync'd several times a day using
rsync, so that in case some catastrophe befalls Provide, we can recover
some essential data. More recently, Dan used it to test pnewuser before
installing it on Grex and has been doing some other work on it as well.
Any other root staffers on Grex are welcome to access it should they
wish to. The general idea is to provide a resource separate from Grex
that staff can use to test things of possible use on Grex.
That machine works fine for the purpose for which it's being used, but
if you look at the rootbsd.net terms of service, it's pretty clear that
it wouldn't be a satisfactory host for the real Grex. Too many
restrictions on what kinds of activities are allowed. Part of moving
Grex to the cloud would entail finding a hosting service that gives us
the same flexibility that we have currently.
In the meantime, let's get back to the question of what we should be
doing now to prepare for an eventual move to the cloud. Running Grex in
a VM on a machine that we actually own, as Dan suggests, would be a
reasonable step to take and pretty easy to set up, once we have
sufficiently modern hardware to do it on. I'm wondering if FreeBSD is
the best choice of OS though, as Linux and Solaris seem to have more
widespread "cloud support" at the moment.
Or maybe the OS choice doesn't matter that much. What we should
probably be working toward is making Grex portable, in the sense that we
could drop it into any Unix-ish environment (FreeBSD, Linux, Solaris, OS
X, whatever) and be able to bring up a fully flexible Grex system in an
automated way. (Dan and I have already discussed this a bit.)

cross
response 19 of 29:
Dec 9 01:58 UTC 2010
Yes, John points out some of the bigger issues; ie, terms of service and so
forth. I think most of these things will be somewhat settled in the next few
years, but haven't been yet. Then, where the data lives and so on is still
an important question.
With respect to OS, I think that FreeBSD is a good happy medium for now. With
Oracle basically killing OpenSolaris, I expect Solaris mind and market share
to dwindle in the coming few years. That said, as John indicates, he
and I both believe that Grex can and should be a portable layer on top of
pretty much any reasonable operating system.

remmers
response 20 of 29:
Dec 14 20:20 UTC 2010
Speaking of the cloud, Netflix (definitely a big-time enterprise)
recently moved most of its ever-expanding services to Amazon's AWS cloud
service, as opposed to beefing up their own data centers. Here's an
interesting post on the rationale, from Netflix's official "tech" blog:
http://techblog.netflix.com/2010/12/four-reasons-we-choose-amazons-cloud-as.html
(http://tinyurl.com/3yqxs5s)

cross
response 21 of 29:
Dec 15 00:44 UTC 2010
I just read a blog post saying that FreeBSD now boots on Amazon's service. Huzzah.

remmers
response 22 of 29:
Dec 15 17:04 UTC 2010
Cool!

remmers
response 23 of 29:
Dec 15 17:40 UTC 2010
Ars Technica post: "FBI accused of planting backdoor in OpenBSD IPSEC
stack"
http://arstechnica.com/open-source/news/2010/12/fbi-accused-of-planting-backdoor-in-openbsd-ipsec-stack.ars
(http://tinyurl.com/32vrot7)
From the article:
"The prospect of a federal government agency paying open source
developers to inject surveillance-friendly holes in operating systems is
also deeply troubling. It's possible that similar backdoors could
potentially exist on other software platforms. It's still too early to
know if the claims are true, but the OpenBSD community is determined to
find out if they are."

cross
response 24 of 29:
Dec 15 18:13 UTC 2010
Yikes! That's a huge bummer....
Yeah, we really need to switch.

remmers
response 25 of 29:
Dec 16 03:23 UTC 2010
Well, it's alleged, not confirmed. And even if it's true, other OS's
might be similarly compromised.

cross
response 26 of 29:
Dec 16 09:58 UTC 2010
We need to switch for other reasons. This is the last domino in the list
of reasons we went with OpenBSD in the first place.

remmers
response 27 of 29:
Dec 21 14:39 UTC 2010
Here's a link to a blog post about accessing FreeBSD on Amazon EC2:
http://www.daemonology.net/blog/2010-12-20-FreeBSD-on-EC2-FAQ.html

kentn
response 28 of 29:
Dec 21 15:11 UTC 2010
(Where EC2=Elastic Compute Cloud)

scholar
response 29 of 29:
Jan 2 04:51 UTC 2011
This item isn't going to be exciting until someone tells Steve about it.