Grex > Oldcoop > #380: Cyberspace Communications finances for November 2006

spooked
response 75 of 124:
Dec 20 12:02 UTC 2006
*MANY* solutions exist today, in plain sight on the Internet, which could
easily catch 95%+ of the spam coming into the Grex mail server. They could
be implemented quickly by any staff member with half a degree of
intelligence.
Unfortunately, Grex is so backward and naive (did I mention
anti-progressionary?) that it (in particular its staff) will find any
excuse not to move forward from its ancient software and system
architecture base.
So glad I resigned from those cronies.

nharmon
response 76 of 124:
Dec 20 13:01 UTC 2006
You don't seem glad.

spooked
response 77 of 124:
Dec 20 13:11 UTC 2006
*giggles* Thanks for the light amusement :)

mary
response 78 of 124:
Dec 20 13:22 UTC 2006
The reason I bring up the rack-mount issue is I believe we'll
someday need to fit into the smallest space possible at some
other location than Provide. When we moved from the Pumpkin to
Provide, we were very lucky that Provide had the space and inclination
to allow our hardware to occupy a footprint outside of their racks.
Every other affordable ISP I contacted wanted us in a rack and charged
for service based on the amount of rack space (and bandwidth) used.
I would really like to see space considerations made part of any
hardware decisions we make at this point. So, thanks for all
the information on this.

ric
response 79 of 124:
Dec 20 13:34 UTC 2006
I don't know of any easily implemented spam-fighting systems that
actually eliminate 95% of spam without also blocking desired email.
Even greylisting, combined with all sorts of DNS blacklists, does *NOT* reduce
the spam intake on my server by 95%, and I even use some of the more
aggressive DNS blacklists like spamcop.
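For readers unfamiliar with how a DNS-blacklist check works, here is a minimal sketch. This is not Grex's actual setup, and the zone name is just a common example: the mail server reverses the connecting client's IPv4 octets, appends the blacklist zone, and does an ordinary DNS lookup; if the name resolves (typically to a 127.0.0.x code), the address is listed.

```python
import socket

def dnsbl_query_name(ip, zone="zen.spamhaus.org"):
    """Build the DNSBL lookup name: reverse the IPv4 octets
    and append the blacklist zone."""
    octets = ip.split(".")
    if len(octets) != 4:
        raise ValueError("expected a dotted-quad IPv4 address")
    return ".".join(reversed(octets)) + "." + zone

def is_listed(ip, zone="zen.spamhaus.org"):
    """True if the address is on the blacklist. A hit resolves;
    an NXDOMAIN (socket.gaierror) means the address is clean."""
    try:
        socket.gethostbyname(dnsbl_query_name(ip, zone))
        return True
    except socket.gaierror:
        return False
```

So a connection from 192.0.2.1 would trigger a lookup of `1.2.0.192.zen.spamhaus.org` before the SMTP transaction is allowed to proceed.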

nharmon
response 80 of 124:
Dec 20 14:00 UTC 2006
I understand and appreciate the need to keep our physical footprint as
small as possible. If we needed to put Grex into a rack mountable case
right now, we would need one that was at least 3U to accommodate the PC
components that were used to build Grex (Rack space is measured in U's,
with each U being about 1.75 inches).
http://www.directron.com/ra349c00300w.html
This is a 3U rack chassis that would accommodate Grex's present
motherboard and cards as well as two of the drive cages that maus is
proposing.
If we wanted to venture into 2U or 1U territory we would be looking at a
complete system repurchase, and we might even have to get 2.5" (read:
laptop) hard drives in the case of a 1U solution. And laptop hard drives
are NOT cheap, nor are they as reliable or as spacious as 3.5" drives.
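The 1U = 1.75-inch figure above makes the size math easy to check; a trivial sketch:

```python
def rack_units_to_inches(u):
    """One rack unit (1U) is 1.75 inches of vertical space."""
    return u * 1.75
```

So a 3U chassis like the one linked above stands 5.25 inches tall, and a full 42U rack is about 73.5 inches of usable height.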

maus
response 81 of 124:
Dec 20 20:22 UTC 2006
I like that chassis you showed. And anything smaller than 3U would
require reengineering. Laptop hard drives are not standard for a 1U
chassis, but you would be limited to two or three normal-sized drives,
which means putting all of our eggs into one basket (in performance as
well as redundancy). If we only had 2U of space, what we could do is put
just our system drives into the host and the data drives into a separate
drive shelf.
An alternative solution might be to find out if our ISP offers a managed
SAN option. In that case, we would simply pay the monthly fee instead of
amortizing the cost of installing this storage equipment
ourselves. At worst, we would have to buy a gigabit NIC and an initiator
program (though I have heard that the initiator program from NetBSD
can be ported to OpenBSD with little work).
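The buy-versus-rent trade-off raised here is simple break-even arithmetic; a sketch with made-up numbers (neither figure comes from the thread):

```python
import math

def break_even_months(upfront_cost, monthly_fee):
    """Months of managed-SAN fees after which buying the storage
    hardware outright would have been the cheaper path."""
    return math.ceil(upfront_cost / monthly_fee)
```

For example, $1,200 of storage hardware against a hypothetical $100/month managed option breaks even at 12 months; past that point, owning wins (ignoring power, spares, and admin time on the owned side).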

aruba
response 82 of 124:
Dec 20 23:15 UTC 2006
maus - thanks a lot for your work on this. Your RFQ looks very
professional. I just want to make sure it doesn't commit us to anything,
if we get a bid. Going forward with a RAID array will require some time to
get the board and staff on board, so I don't want you to be annoyed if we
get a quote and then sit on it for a while. I hope we'll discuss it in
depth soon, but the process of agreeing on what we want and then the
logistics of the changeover may be much more elaborate than the actual
purchase.
Here is Grex's current case:
http://www.antec.com/pdf/drawings/PLUS1080AMG.pdf
It has room for eight 5.25" drives. I tend to agree with Mary, though, that we
should think in terms of rack mounting in the future.

cross
response 83 of 124:
Dec 21 00:07 UTC 2006
I think it's reasonable to look at 3U as a lower bound on space required.
Regarding #51; The thing about ccd or the like is that you can't boot off of
it. I'd be less worried about a controller going bad and more worried about
having a good hot-swappable disk system.
Regarding #55; I agree, we need to make things as simple as possible. I
further agree that a SATA RAID solution really looks promising for grex.
Regarding #56; RAID-5 *does* have multiple spindles, but they're all required
for reads and writes. Something like RAID 0+1 would be a better fit for grex,
I think.
With respect to spam and newuser ... You need a decent foundation to build
off of.
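The capacity side of the RAID-5 versus RAID 0+1 trade-off mentioned above can be made concrete; a rough sketch (ignoring hot spares and controller overhead):

```python
def usable_disks(n, level):
    """Disks' worth of usable capacity out of n identical drives.
    RAID 5 spends one disk's worth of space on parity; RAID 0+1
    keeps a full mirror copy, so half the disks go to redundancy."""
    if level == "raid5":
        if n < 3:
            raise ValueError("RAID 5 needs at least 3 disks")
        return n - 1
    if level == "raid0+1":
        if n < 4 or n % 2:
            raise ValueError("RAID 0+1 needs an even disk count, at least 4")
        return n // 2
    raise ValueError("unknown RAID level: " + level)
```

With four drives, RAID 5 yields three disks' worth of space against RAID 0+1's two; the price of RAID 5's extra capacity is that writes touch the parity disk, while the mirrored layout can survive more failure patterns and rebuilds faster.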

maus
response 84 of 124:
Dec 21 00:36 UTC 2006
I would like to have one of our Legal Weasels read over the RFQ and make
sure it does not obligate us to anything. I think we should also wait
before sending it to the vendors until we get a commitment from the board
that we will start the selection process as soon as the deadline passes,
and time it so that the deadline is something like a week before a BOD
meeting so that we can decide on it fairly quickly. We should probably
have a stanza in there that says something to the effect of "we will
notify vendors within *** days of our decision".
Should we have a standardized worksheet for RFQs so that in the future,
if we need gear over a certain dollar amount (maybe arbitrarily over
$400 or something), we can just fill in a few blanks and email it off?
Also, who would be the recipient both for bids and for
questions/clarifications?

maus
response 85 of 124:
Dec 21 00:40 UTC 2006
Cross, I know, ccd is only for a data volume, such as /var/www or
something that needs to be big without needing super performance. If you
need big and performance, RAID 1+0 is your friend.
I still think we should use the new RAID for our equivalent of /home and
/var and have the system on the existing SCSI drives (maybe using
RAIDFrame (the software RAID) to mirror two system drives).

keesan
response 86 of 124:
Dec 21 02:11 UTC 2006
Ric, what percentage of spam do you eliminate? Remmers, when do you expect
to have something set up for people to use who are averse to copying over two
files and changing the login name and want a script to do it for them?

cross
response 87 of 124:
Dec 21 03:02 UTC 2006
Regarding #85; Personally, I'd like to see the entire system on hot-swappable
media: both the user data and the operating system. We had an occasion
once where the root filesystem got lost, and grex was down for at least
several days. If that filesystem had been RAIDed, we could have avoided that
downtime. I don't believe you can boot from RAIDframe, either, which implies
that the root filesystem cannot be as redundant as we'd (perhaps) like.
I'm in favor of moving to a rack mount case with the storage system you
proposed, and disposing of the SCSI disks. Perhaps selling them and the SCSI
controller would be a way to offset the cost --- at least partially --- of
getting this new hardware.

maus
response 88 of 124:
Dec 21 04:59 UTC 2006
If I remember correctly, the way you do it is to make every filesystem
except / a software RAID and keep an identical copy of / on the first
slice of the second drive. So no, it is not fault tolerant live, but if
you cannot boot from the normal /, you just issue your boot command to
bring the system up on the alternate copy of /. Things could have changed,
though, since I have not done RAIDframe-based RAID in a while, mostly
relying on 3ware boards for mirroring Serial ATA or IDE drives, and LSI
or Adaptec boards for mirroring SCSI drives.
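For reference, a RAIDframe mirror of the kind described above is driven by a small configuration file handed to raidctl. This is a hypothetical sketch only; the device names and tuning values are assumptions, not grex's actual layout:

```
# hypothetical /etc/raid0.conf: mirror two SCSI slices
START array
# numRow numCol numSpare
1 2 0

START disks
/dev/sd0e
/dev/sd1e

START layout
# sectorsPerStripeUnit SUsPerParityUnit SUsPerReconUnit RAID_level
128 1 1 1

START queue
fifo 100
```

The "1" at the end of the layout line selects RAID level 1 (mirroring); everything on /dev/sd0e is duplicated onto /dev/sd1e.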

cross
response 89 of 124:
Dec 21 14:12 UTC 2006
That is what you're supposed to do, but then you have to have some mechanism
for mirroring the root filesystem over to the spare partition; that gets
ugly after a bit. What's more, if one of the disks running the root
filesystem goes down, you still have to manually reboot. A hardware RAID
solution is better in the sense that this is handled for you automatically;
if one of the disks holding / dies, you just throw in another hot-swappable
disk and go on your merry way. Sure, one can approximate this using our
existing SCSI disks and RAIDframe and mirroring the root filesystem, but
why bother?

maus
response 90 of 124:
Dec 21 14:50 UTC 2006
I recommended keeping our existing SCSI infrastructure to capitalize on
sunk costs and because by keeping the system separate from the data, we
decrease a bottleneck. I agree: with good, inexpensive hardware RAID,
RAIDframe kind of blows in comparison.

cross
response 91 of 124:
Dec 21 15:50 UTC 2006
You decrease a bottleneck, but at what cost? Then you have the associated
maintenance costs if the root disk fails, which is what we're trying to avoid.
I'd say that the goal of an increase in performance at the expense of added
(or, unchanged) administrative burden is the opposite of what we're trying
to achieve (or, what we *should* be trying to achieve). As for the sunk
costs.... Well, grex cold do several things with the existing SCSI disks and
controller. (1) Put up a satellite machine ala gryps to offload some of the
processing from the main machine. For instance, a basic spam/virus blocker
for mail before it gets to the `main' grex machine, or running proxy servers
for web and/or DNS, having a serial port plugged into the serial console of
grex itself, etc. (2) Sell them and use the proceeds to offset the new
hardware costs. (3) I'm sure there are others.
The other factor is that I *really* don't believe that grex gets enough usage
to worry about bottlenecks throught he I/O controller right now.

maus
response 92 of 124:
Dec 21 16:43 UTC 2006
That's a fair assessment. Consider my mumblings about the bottleneck the
ramblings of a weary mind, and please ignore said mumblings.
So, silly question: if we are thinking about moving the system-space
onto a new disc subsystem, does this mean a fresh, new installation? Can
we use the opportunity to request new commands to be added and to
implement new controls and move to standards from odd Grex-isms?

cross
response 93 of 124:
Dec 21 17:01 UTC 2006
No, not at all; I think it's good to be challenged and be asked to justify
one's conclusions. I thank you for that.
I think you can always request the installation of additional software. And
yes, I *do* think it would mean a new installation of the basic system. But,
that might not be a bad thing. Any opportunity to move to standard commands
from weird customizations is a plus, in my opinion.

maus
response 94 of 124:
Dec 21 17:15 UTC 2006
I agree that moving to standards would be a good thing, provided nothing
is broken in the process (if a command that users or staff depend on is
broken by the move to standardize, then the standardization is crap; if
no-one is hurt and we make the system easier to maintain and easier to
upgrade and actually match what the man pages and web-pages say, then we
have done a good thing by standardizing and deserve doughnuts).
I didn't have specific commands or software in mind (or, at least,
nothing appropriate for this system), but I figured that if we were
facing a fresh installation, this would be the time to ask people what
commands they would like to see on here, and also see if there are
commands that users would want to see upgraded or replaced.

maus
response 95 of 124:
Dec 21 17:29 UTC 2006
On the new commands front, I was just looking through /usr/local/bin and
noticed javac and java and jar. I thought the port of native java to
OBSD was still a couple of years away. Did we build this by way of RHEL
emulation or something else entirely? Have we published somewhere how we
managed it? Hurray!

remmers
response 96 of 124:
Dec 21 17:44 UTC 2006
Hmm... I seem to recall that I saw the Java stuff sitting in either the
OpenBSD ports or packages collection a few months ago and installed it.
Didn't do any testing or anything (I'm not a Java person), so whether it
all works is another issue.
Oh, I remember now. Have a look at /usr/ports/lang/kaffe.

cross
response 97 of 124:
Dec 21 18:11 UTC 2006
Regarding #94; That can be relative. For instance, on the Sun4, staff
depended on a custom command to edit password information, because the
password stuff was so hacked. But, the standard commands are better;
we made a net gain by leaving behind the old stuff we *had* depended on
and moving to a newer system. Someone definitely deserved a Krispy
Kreme on that one.

maus
response 98 of 124:
Dec 21 19:17 UTC 2006
Krispy Kreme? Ewwww!!!! Give me a nice Shipley's or a Dunkin' Donuts any
day.
giggle

cross
response 99 of 124:
Dec 21 19:27 UTC 2006
Blasphemy!