25 new of 184 responses total.
gelinas
response 25 of 184:  Jul 23 02:11 UTC 2003

That seems to be fairly standard in the industry, from what I've heard.
Metering is expensive to do on the fly, so the information is usually
aggregated and billed after the fact.
other
response 26 of 184:  Jul 23 02:23 UTC 2003

I'm concerned that if the bandwidth is not actively capped, the traffic on
our machine would swell until it's limited only by the speed and settings
of our system.  Especially with the new machine.
janc
response 27 of 184:  Jul 23 02:30 UTC 2003

That could be difficult to live with.

I agree that a Grex staff member should talk to these people.  I'm just not
sure which staff member.  I really know nothing about these kinds of
facilities.  Marcus and STeve would be better choices if they weren't so much
pressed for time.  Steve Gibbard would be perfect, except he's not on staff
anymore.  Steve Weiss?  Joe?  Kip?
janc
response 28 of 184:  Jul 23 02:32 UTC 2003

It would make things like spam filtering pretty critical.  A long spam message
sent to all our users could cost us a lot of money.  We'd have to think of
all the ways we can control bandwidth usage from our end.
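As a back-of-the-envelope sketch of that cost, the numbers below are made-up
assumptions for illustration, not Grex's actual user count, message size, or
billing rate:

```python
# Rough transfer cost of one large mail delivered to every user.
# All figures here are illustrative assumptions, not Grex's real numbers.

def spam_transfer_mb(message_kb, num_users):
    """Total outbound transfer, in MB, for one message sent to all users."""
    return message_kb * num_users / 1024

# e.g. a hypothetical 100 KB message to 3,000 mailboxes:
total_mb = spam_transfer_mb(100, 3000)
print(f"{total_mb:.0f} MB of transfer")
```

With usage-based billing, repeated incidents like that add up, which is why
filtering at the source matters.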
gelinas
response 29 of 184:  Jul 23 02:37 UTC 2003

I was thinking of stopping by there tomorrow morning, since I'm going to be
in the vicinity anyway.  I'll ask the questions that have been raised here
but not answered.
scg
response 30 of 184:  Jul 23 07:39 UTC 2003

I suppose if somebody wanted to buy me a plane ticket I could fly out and take
a look at the place, but I don't really think that's necessary.

I may be mistaken, but I think Online Technologies is the company that
used to be BizServe.  Somebody from there was involved in the early days
of the HVCN project, I think.  Later I used to see their equipment in at
least one of the Metro Detroit colo facilities, but I don't know much
about them.

"A rack" has a pretty standard meaning.  It should be 19 inches wide, and
seven feet high.  The ideal is to use rack mountable equipment, which screws
into the rack, but the alternative if you've got extra space is to get rack
mountable shelves (Graybar or Alltel in Livonia will have them; a few years
ago there didn't seem to be a good source for such things in Ann Arbor) and
put non rack mountable equipment on those.

The standard method of usage based billing is to collect five minute averages
of bandwidth use over a month (why five minutes?  Because that's the default
in MRTG, the program originally used to do this), and then take the 95th
percentile.  That means you can be over your limit for a little over an hour
a day (36 hours in a 30 day month) without it mattering.  But not all usage
based billing is done that way, so it's worth asking.
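A minimal sketch of that billing method (the sample values below are invented
for illustration): sort a month's worth of 5-minute samples, discard the top
5%, and bill the highest remaining sample.  A 30-day month has 30 * 24 * 12 =
8640 samples, so the discarded 5% is 432 samples, or 36 hours.

```python
# 95th-percentile billing: the top 5% of 5-minute bandwidth samples are
# discarded, and the highest remaining sample sets the billed rate.

def billable_rate(samples_mbps):
    """Return the 95th-percentile value of a list of rate samples."""
    ordered = sorted(samples_mbps)
    idx = int(len(ordered) * 0.95) - 1   # last sample below the top 5%
    return ordered[idx]

# A 30-day month: 8640 samples, with 36 hours (432 samples) of bursting.
samples = [1.0] * 8208 + [9.9] * 432
print(billable_rate(samples))   # the bursts fall in the discarded 5%
```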

A Cisco router or switch should be able to very aggressively rate limit
the connection, to the point of dropping packets and letting TCP figure
out it needs to back off whenever the connection gets over the rate limit.
There's enough processor and administrative overhead for that that it's
unlikely the colo provider would be willing to do the rate limiting on
their own equipment, but if Grex wanted to do that itself the necessary
equipment appears to be going for around $800 on EBay.  Packeteer makes
a much nicer box to do that, which I was going to say would be much more
expensive, but I see some on EBay now for around $600.
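The dropping behavior described above can be sketched as a token bucket.
This is an illustrative model only, not Cisco's or Packeteer's actual
implementation; the rate and burst figures are arbitrary:

```python
# Minimal token-bucket rate limiter: packets are forwarded while tokens
# remain and dropped otherwise; TCP's congestion control then backs off
# in response to the drops.  Real routers do this in the forwarding path.

class TokenBucket:
    def __init__(self, rate_bytes_per_sec, burst_bytes):
        self.rate = rate_bytes_per_sec
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = 0.0

    def allow(self, packet_bytes, now):
        # refill tokens for the elapsed time, capped at the burst size
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True    # forward the packet
        return False       # drop it; TCP will slow down

bucket = TokenBucket(rate_bytes_per_sec=64_000, burst_bytes=8_000)
print(bucket.allow(1500, now=0.0))   # within the initial burst
```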

Having spent a lot of time recently designing colo facilities, here are my
thoughts on what to look for:

I wouldn't worry about open rack vs. closed rack vs. cage so much as
what the chances are that the equipment will get messed with, and what sort
of access you'll have to your own equipment.  When dealing with a relatively
small amount of equipment, unsupervised access to a cage or a locked cabinet
are pretty interchangeable, and supervised access to an open rack should be
fine too provided there isn't a large per-use charge for the supervision.

I would hope they have sufficient redundancy in the routing infrastructure
that a single router falling over and dying won't take down colo customers,
but there's no one right answer to network architecture, and while I can say
how I'd do it, they could do things rather differently and still be reliable.
A question to ask would be whether they are multi-homed and run their own
AS (do they use BGP to switch between upstream providers if one goes down?), 
but that's probably sufficiently basic at this point that the answer would 
always be yes.  Another good thing to look for is that they should be using
HSRP (Hot Standby Router Protocol) or some non-Cisco equivalent on the
routers their colo customers talk to, meaning that if one of those routers
goes down, your default gateway address should seamlessly move to the other
one.
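For reference, an HSRP setup like that looks roughly like the following on
each Cisco router.  The addresses (from the documentation range) and the
group number are illustrative placeholders, not the facility's actual config:

```
! Sketch of HSRP: two routers share virtual gateway address 192.0.2.1.
! Customers use 192.0.2.1 as their default gateway; if the active
! router dies, the standby takes over the virtual address.
interface FastEthernet0/0
 ip address 192.0.2.2 255.255.255.0
 standby 1 ip 192.0.2.1
 standby 1 priority 110
 standby 1 preempt
```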

I would pay particular attention to the cable plant, largely because it's an
issue that's often neglected in networks set up by computer geeks rather than
telco geeks, and it's an area where confusion tends to lead to outages.  As
antithetical as it is to the normal Grex way of doing things, good datacenters
are run by "cable nazis," with all cables labeled clearly on both ends and
neatly tied into place, and run on ladder racks across the ceiling -- never
across the fronts of other racks or across the floor.  Any connections to
things outside of Grex's rack should be delivered by the facility to the
rack -- customers running their own cables outside of their racks have a
tendency both to make a mess of things and to accidentally unplug the wrong
stuff.

Cooling is a big issue.  If they're reasonably full, do they have enough
capacity to keep the place pretty close to 68 degrees on a hot day?  If
they're reasonably empty, how much currently unused capacity do they have in
their cooling system?  Does the air conditioner run at night and on weekends
(often a problem in multi-tenant buildings)?

What's the backup power situation?  Ideally they should have a generator that
kicks in automatically in the event of an outage.  Battery backup is also
needed to bridge the gap between utility power going out and the generator
kicking in, but whether that's something the customers supply in their own
racks or something the facility supplies is really more an issue of what the
customer needs to budget for than anything else.

What's the on-site support like?  Nice colo facilities tend to have somebody
there 24 hours a day who you can call and get to cycle power, check cables,
or swap cards for you.  If not, you'll still be stuck sending people over
there if something breaks.  On the other hand, having people there constantly
costs money, and raises prices.

There's a lot I'm not thinking of, but I'm tired and need to sleep.  Really,
just about anything would be a safer and less hostile environment than the
Pumpkin, so my ideals probably aren't anything to hold anybody to.
kip
response 31 of 184:  Jul 23 12:38 UTC 2003

OpenBSD has a couple of software solutions for self-limiting or traffic
shaping.
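For example, on an OpenBSD 3.3-era system, pf's ALTQ can cap outbound
bandwidth with something roughly like the following pf.conf fragment.  The
interface name and rate here are assumptions, and the exact syntax should be
checked against the pf.conf(5) man page for the release in use:

```
# Cap all outbound traffic on fxp0 to 512 Kbit/s using CBQ queueing.
altq on fxp0 cbq bandwidth 512Kb queue { q_def }
queue q_def bandwidth 100% cbq(default)

pass out on fxp0 from any to any keep state queue q_def
```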

I'm probably overstepping here, but is there room here to discuss something
between staying at the Pumpkin and moving to a professional colo space?
davel
response 32 of 184:  Jul 23 12:53 UTC 2003

Why would such a question be overstepping?
gull
response 33 of 184:  Jul 23 13:32 UTC 2003

Re #30: Considering that any cooling and any network redundancy they
have is going to be better than what we have now, I wonder how important
those concerns really are for us.

I think a major concern for Grex is going to be after-hours access,
since we have volunteer staff members with day jobs.
mary
response 34 of 184:  Jul 23 14:11 UTC 2003

Gawd no, Kip.  Anything you have to offer here is welcome.
And I hope I never have to say that again. ;-)

I've heard that ICNET also offers co-lo, of a kind.
They host machines, allowing 24 hour access, but the machines
aren't individually secured.  Co-lo is not their focus.
janc
response 35 of 184:  Jul 23 14:19 UTC 2003

Thanks, Steve.  That was very informative.
scg
response 36 of 184:  Jul 23 16:00 UTC 2003

re 33:
        I agree.  I'm looking at this from the perspective of how to design
such facilities with as close to 100% reliability as possible, but a lot less
than that would still be an improvement for Grex.

One thing I neglected to mention but should have was network separation. 
Would Grex get its own VLAN (or separate physical LAN entirely, but that's
probably overkill), or would it be put on a shared LAN with other customers?
Shared LANs are bad both because they make it easy for a single customer's
misconfiguration to take down other customers, and because they can lead to
broadcast storms saturating the network.
janc
response 37 of 184:  Jul 23 20:37 UTC 2003

I think my biggest concern with this would be being sure that we could keep
the bandwidth cost within our budget.  If that can be dealt with, the rest
looks like a pure win to me, at this point in time.
cross
response 38 of 184:  Jul 23 20:40 UTC 2003

Another question is whether one really gets a whole rack, or just a couple
U of space in an existing rack.  Also, if we *do* decide to go colo (and
honestly, it seems like a *much* better deal than the pumpkin + DSL), would
it make sense to buy a rackmount case for nextgrex and move the guts of the
new grex computer into it before moving?  As Kip noted, OpenBSD has traffic
shaping functions built in that could force it to stay under the bandwidth
limit.
gelinas
response 39 of 184:  Jul 24 03:04 UTC 2003

I stopped by Online Technologies this morning.  First, I gave them a
printout of this item, through response 32.  I also gave them the URL,
suggesting they may want to check out the discussion, or even join in. :)

I spoke to Ty, who was surprised by how quickly Mary has moved on this,
and Bob (I missed his last name, despite two tries. :( )

Yes, this is the company that used to be known as bizserve.com.  No names
were mentioned, but the Board of Directors has brought in a new management
team to change from the previous president's focus on web-hosting to
colocation.

They are in the process of expanding their existing machine room and
moving the offices to the second floor.  This morning, when WEMU was
reporting a temperature of 69F, the machine room was noticeably warm.
They specifically noted that improving the cooling was in progress.

I triple-checked the insurance:  they find it easier and cheaper to get
an umbrella policy that protects everyone than to risk a loss caused by
an uninsured client.

They CAN do network throttling, if we need it.  The cabling was neat,
in ladders along the back and under the raised floor.

They have a couple of really large APC UPSes in tandem, with a backup
generator in the next room.  They are looking to replace the UPSes with
an even larger one, as part of the expansion.  So we won't need our own UPS.

The price quoted to Mary was for a 1U space, not for an entire rack.  I
realise now I should have asked for ballpark quotes on more space. 
(1U is roughly six inches high, I think.)
scg
response 40 of 184:  Jul 24 03:31 UTC 2003

Thanks, Joe.

I forget the exact measurement for a "rack unit," or "U", but it's a little
over an inch and a half.
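The arithmetic, as a quick sketch (1U is 1.75 inches):

```python
# Rack-unit conversions.  A rack unit ("U") is 1.75 inches.
U_INCHES = 1.75

def u_to_inches(u):
    """Height in inches of a given number of rack units."""
    return u * U_INCHES

def rack_units(height_inches):
    """Number of rack units that fit in a given height."""
    return height_inches / U_INCHES

print(u_to_inches(1))        # 1.75 inches, not "roughly six"
print(rack_units(7 * 12))    # units in a nominal 7-foot rack
```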

Needless to say, they can't possibly be offering your own locked cage for 1
U of rack space.

There are plenty of nice, powerful, 1-U intel-based servers out there,
although I suspect nextgrex probably has more disks than would fit in such
a case.
gelinas
response 41 of 184:  Jul 24 03:34 UTC 2003

I never was much good at estimating distances. :/
mary
response 42 of 184:  Jul 24 12:24 UTC 2003

If someone would make an estimate of the space we'd
need I'll contact OTC and get an updated price quote.

I'll make sure it reflects the cage and 24/7 access.

Thanks for the road trip, Joe.
mooncat
response 43 of 184:  Jul 24 18:02 UTC 2003

Yes, thanks for making the trip out there, Joe.
aruba
response 44 of 184:  Jul 25 02:58 UTC 2003

Yes, thanks, Joe.  It would be good to find out what the phone line
situation is; do we pay per line, or per connection, or what?  What are the
prices?
gelinas
response 45 of 184:  Jul 25 04:19 UTC 2003

The demarc is on the second floor; they can run as many connections as we
want.  I didn't ask about pricing, though.  'Twill probably be much the same
as it is now:  we pay installation to OTC and monthly fees to SBC.
scg
response 46 of 184:  Jul 25 07:13 UTC 2003

It's fairly common for colo providers to charge a cross connect fee for any
connections between their tenants and other tenants or phone company demarcs.
It's also entirely possible that this colo provider isn't charging cross
connect fees, but it's certainly something to ask about rather than making
assumptions.
mary
response 47 of 184:  Jul 25 15:56 UTC 2003

I emailed OTC asking about modem, rack space, phone line connection and
what I found out is we'll need to get more specific about our needs for
any cost estimate to be helpful. 

OTC will be able to supply 24/7 access to various sizes of locked cabinets
designed for this purpose.  They can go "at least as small as 1/3 of a
rack if not 1/4".  So we need to know how much space we'll require. 

How many IP addresses are needed?  I'd assume two but not sure here. 

How many servers are involved? 

Should we price this out at 0.5 Mbps?

Do we have any special power needs?

We can supply our own modems.  SBC (or whomever we choose for land line
service) will charge a one time connection fee and for monthly service. 
OTC will need to get a quote from their installers as to the charge for
connecting our machine to the point at which the phone line enters their
building but there won't be an ongoing monthly charge for access. 

I'm starting to sense this might be too expensive for our modest means. 
But who knows.  I'd still like to take this to a fairly close estimate. 
And yes, I know we're like 9 months away from needing a decision, but this
is Grex, and you know what that means.  ;-) 

mary
response 48 of 184:  Jul 25 15:58 UTC 2003

This response has been erased.

mary
response 49 of 184:  Jul 25 16:00 UTC 2003

Actually, make that 6 months.
- Backtalk version 1.3.30 - Copyright 1996-2006, Jan Wolter and Steve Weiss