

Grex Agora41 Item 205: Comcast considering heavy usage surcharges
Entered by gull on Fri May 24 21:37:13 UTC 2002:

http://www.freep.com/money/tech/comc24_20020524.htm

Comcast and AT&T's cable division (which are about to merge) say they're
probably going to start charging heavy users extra fees in the future.
AT&T's chairman notes that on their network 30% of the capacity is consumed
by only 1% of the subscribers.

I don't find this surprising; in fact I think it's pretty much inevitable.
But it'll be interesting to see how it plays out.  It could be a bit of a PR
problem for Comcast if their competitors don't quickly follow suit, but I
suspect they're eager to try this, too, and just don't want to be the first.

49 responses total.



#1 of 49 by bru on Sat May 25 03:49:40 2002:

well, apparently Comcast is now being sued for selling or giving away
information about where its users surfed.


#2 of 49 by bdh3 on Sat May 25 06:30:08 2002:

I believe it is being sued by one person for collecting information.
I don't recall the suit claiming Comcast actually sold the collected
information.  


#3 of 49 by senna on Sat May 25 08:01:54 2002:

What is this one percent doing?  mp3s? :)


#4 of 49 by jazz on Sat May 25 16:13:42 2002:

        Knew it.


#5 of 49 by gull on Sat May 25 21:04:41 2002:

Re #2: Right...the suit alleges they were collecting the information for the
purpose of selling it, but no one's claiming they actually sold any.


#6 of 49 by keesan on Sun May 26 15:51:45 2002:

Comcast promised never to send me any more junk mail and a piece of it arrived
yesterday.  I doubt they are competent enough to store and retrieve info.


#7 of 49 by scg on Sun May 26 22:29:02 2002:

Companies can be quite competent in some departments and amazingly incompetent
in other departments.  Companies that want to stay in business should probably
make their billing and accounting departments high priorities.  They should
also be nice and not send junk mail to Sindi when she tells them not to, but
I have a hard time imagining that they'd see that as financially important.


#8 of 49 by scg on Sun May 26 22:47:44 2002:

When heavy ISP users mostly used modems, there was the possibility that a user
could hog enough modem time that the cost of the ISP's phone line would be
more than the user was paying for the account, but it wouldn't have been much
more.  Some ISPs tried to charge heavy users extra, while others either
decided it was too much trouble to keep track of, or that they got better
sales by advertising unlimited use.  Some tried to have it both ways,
advertising unlimited use, and then making up excuses to go after heavy users
anyway.

The situation is a bit different with high speed connections.  There are no
longer modems to tie up, but the bandwidth is still expensive.  Available
bandwidth is great enough that the difference between a normal user and a 
user pushing multiple megabits of MP3s 24 hours a day is probably far more
significant than the difference between a half hour a day modem user and a
24 hour a day modem user.  For a modem-based ISP, if your modem pool fills
up all you have to do is add more modems and phone lines, a relatively minor
expense.  If a group of users manages to saturate a neighborhood's cable modem
infrastructure, does that require infrastructure upgrades out in the
neighborhood?  I don't know how scalable the current cable modem technology
is, so I'm not sure of the answer to that question.

In addition, ISPs a few years ago were mostly trying to increase market share,
assuming they'd figure out the financial part of it once they'd won the market
share battle.  Now ISPs are generally feeling a tremendous pressure to make
a profit -- no more money from investors seems to be forthcoming, so if they
run out of money they're gone.  If it costs them more to deal with these heavy
users, they presumably have a choice between raising the rates for the heavy
users a lot, or raising the rates for everybody, making the light users
subsidize the heavy users.


#9 of 49 by slynne on Mon May 27 14:49:13 2002:

Sometimes the best way to charge heavy users extra is to advertise 
unlimited use at the regular price (which has been raised to account 
for the heavy users) but then to quietly offer a discounted "budget 
plan" to the light users. 


#10 of 49 by drew on Tue May 28 02:46:10 2002:

If a high speed internet provider wishes to sell X bit-per-second service to
Y customers, the amount of upstream bandwidth that is needed is X * Y bits
per second. Not some fraction of that in the hope that their customers aren't
going to take full advantage of it. Design for the Worst Case.
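
The worst-case provisioning drew describes is just multiplication; a tiny
Python sketch with made-up figures (the user count and the 20:1
oversubscription ratio below are illustrative, not any real provider's
numbers):

```python
def upstream_needed(rate_per_user_bps, users, oversubscription=1.0):
    """Upstream capacity required if every user is sold rate_per_user_bps
    and the provider assumes only 1/oversubscription of the sold
    bandwidth is ever in use at once (1.0 = drew's worst case)."""
    return rate_per_user_bps * users / oversubscription

# Worst case: 1.5 Mb/s sold to 10,000 users -> 15 Gb/s upstream
print(upstream_needed(1_500_000, 10_000))
# A hypothetical 20:1 oversubscribed design -> 750 Mb/s upstream
print(upstream_needed(1_500_000, 10_000, 20.0))
```

The responses that follow argue that nobody actually builds to the 1.0 case.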


#11 of 49 by mdw on Tue May 28 03:07:01 2002:

I'd hate to think what our water & sewer systems would look like, if
they were designed with that goal in mind.


#12 of 49 by gull on Tue May 28 15:26:43 2002:

Or our road system.  Can you imagine what it would look like if we
assumed every single person would try to drive to, say, Meijer
simultaneously?

No one designs that way because it's inefficient and makes service
unacceptably expensive.  (That's why 'business' service is so expensive
-- they're assuming you *will* use all that bandwidth.)  Not even the
phone company designs that way.  If every single person in Ann Arbor
picked up their phone simultaneously and tried to make a call, not all
of them would get through.
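
The phone-network behavior gull describes (not every simultaneous call gets
through) is classically sized with the Erlang B formula, which gives the
probability that a call is blocked for a given offered load and trunk count.
A sketch, with made-up traffic figures:

```python
def erlang_b(offered_erlangs, trunks):
    """Blocking probability for `offered_erlangs` of offered load on
    `trunks` lines, via the standard iterative Erlang B recurrence."""
    b = 1.0
    for m in range(1, trunks + 1):
        b = offered_erlangs * b / (m + offered_erlangs * b)
    return b

# Made-up example: 90 erlangs of offered traffic on 100 trunks
print(erlang_b(90, 100))   # a small blocking probability, well under 1
# Halve the trunks and blocking gets much worse
print(erlang_b(90, 50))
```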



#13 of 49 by flem on Tue May 28 17:04:10 2002:

If Comcast adds usage surcharges, I will find another provider.  Which will
suck, as I've been happy with Comcast so far. 


#14 of 49 by bhelliom on Tue May 28 19:40:02 2002:

That's one of the irritating things about lack of competition with many 
services these days.  <sigh>


#15 of 49 by glenda on Tue May 28 22:00:11 2002:

Re #13:  Same here.  STeve is already looking at alternatives.


#16 of 49 by russ on Wed May 29 04:08:14 2002:

Re #12:  Except with the huge bandwidth of optical fiber, we
really *can* let everyone on the Net, at full DSL or cable
speeds, at the same time.  We have more than enough fiber in
the ground.  The problem is the fiber is mostly dark.

The way to get this fiber lit is to stop companies like Comcrap
from artificially constraining the demand for bandwidth.


#17 of 49 by scg on Wed May 29 04:47:57 2002:

What makes that, at the moment, impossible, is switching speeds.

There's also really no point, especially now that money is no longer thought
to be infinite.  Most of us like to sometimes do things other than sending
data across the Net.  Even somebody who spent 24 hours a day reading web pages
would spend a lot of time reading stuff they'd already downloaded, rather than
receiving data constantly.  Having enough capacity to handle peak loads with
a bit of room to spare is a good idea, but spending lots of money on capacity
that will never be used doesn't benefit anybody except those being paid to
build the capacity.


#18 of 49 by gull on Wed May 29 13:17:27 2002:

You may have noticed this already, but the bottleneck usually isn't your
Comcast bandwidth cap.  It's out farther -- probably Comcast's Internet
connection.  If they spend a lot of money (and raise your rates) to upgrade
it, it would just push that bottleneck out farther, probably to some
overloaded backbone in Chicago.  (That's usually where the bottleneck for my
DSL downloads is.)


#19 of 49 by tpryan on Wed May 29 16:31:58 2002:

        Channel 56, Detroit PBS, has a special on at 8pm tonight
(Wed, 5/29) about telephone companies in Michigan, and why we still
don't have a choice.  The preview hinted at addressing internet
options.


#20 of 49 by janc on Wed May 29 17:13:19 2002:

Hmmm...Valerie and I share the net connection and spend a lot of time 
on the net, between work and this and that.  But I expect that we 
wouldn't count as a "heavy user".  Most of the time we are just typing 
stuff.  Hardly any MP3 or porn downloads.  Whether usage fees would bug 
me depends an awful lot on the fee structure.


#21 of 49 by gull on Wed May 29 19:09:28 2002:

Re #19: Where are they talking about, specifically?  I can choose between at
least two local phone companies on regular lines, and I think Comcast does
phone service, too.  So far no one's had a better deal for me than
Ameritech, though.


#22 of 49 by scg on Wed May 29 22:26:21 2002:

My impression is that Internet backbone congestion is getting pretty rare at
this point, although I don't know about Ameritech's and Comcast's
infrastructure specifically.  For my PacBell DSL circuit, the bottleneck is
pretty definitely the 1.5Mb/s DSL circuit.  If you're having bottlenecks
elsewhere, reaching a variety of sites, it sounds like somebody's
oversubscribing things more than they should.


#23 of 49 by gull on Thu May 30 13:05:00 2002:

Anything that has to go through Sprintlink around Chicago is slow.  That's
just life; Sprintlink has always been terrible.  What used to be MCI.net is
usually pretty bad, too.  (We used to say MCI stood for 'Might Connect,
Intermittently'.) My experience both with my Michigan Tech ethernet account
and with my Ameritech DSL account is that the best you can usually expect
from non-local sites on the Internet is about 400 kilobits/second.  There
are occasional sites I hit the full DSL bandwidth on, though.

Incidentally, that's 'local' network wise, not geography wise.  I live about
two miles from where I work, but there's 11 hops from my DSL modem to the T1
router at work, several of them through Sprintlink, and most of the time
communicating between the two is not terribly fast.


#24 of 49 by scg on Thu May 30 18:02:31 2002:

I can't speak for my employer here, so I probably need to stop discussing
this.


#25 of 49 by drew on Thu May 30 18:38:21 2002:

Re #17 re #16:
    Are switching speeds keeping up with Moore's Law? How long before they
can keep up with a 600 nm laser beam? Perhaps we need only sit tight and wait
a few years?


#26 of 49 by scg on Thu May 30 23:02:47 2002:

Again, though, why?  If there's a demand for that capacity, I'm sure the
technology will get developed.  But there's no reason for every user to be
able to talk to every other user at the same time.  The capacity would cost
something, and would never get used.


#27 of 49 by bdh3 on Sun Jun 2 06:47:09 2002:

Here in chicagoland the bottlenecks seem to be the NAPs - points
where networks connect.  The MCI one (or whomever they call
themselves) is notorious and I think is still subject to litigation.
The argument between Ameritech and MCI was who should pay for the
access.  MCI didn't feel that it should pay to allow its competition
high bandwidth connectivity to its network and of course Ameritech
felt the same. Alter.net seems to be another one although that seems
to be more a reliability issue rather than bandwidth.

The big problem that a lot of ISPs are having is not that they
don't have the bandwidth to support heavy users - they do.  The
reason they want to start charging heavy users is that their
current subscription revenue from the existing base is not profitable.
Many ISPs overbuilt and overbought, and now, with the economic
downturn, stock performance in the pits, credit lines drying
up, the inability to float bonds, and other factors, the
turkeys are coming home to roost.  They need to fund operations
from existing users and know that they can't raise the rates
across the board without their competition doing the same.  Perhaps
they are also counting on some sort of cachet in being a 'heavy
user' causing said users not to object to the raise in cost.  Another
source of revenue that many ISPs are going to go after is that
of 'servers'.  Many new subscriber agreements already prohibit
the operation of 'servers', especially those with static IPs.
I'd expect to see more 'charges' associated with allowing them
cropping up in the future. 

To sum up, it's not a bandwidth issue but a revenue issue.


#28 of 49 by scg on Mon Jun 3 05:56:30 2002:

re first paragraph of #27:

Huh?


#29 of 49 by bdh3 on Mon Jun 3 08:12:26 2002:

re#28: Uh. Huh? Let's say yer town all has fiber or whatever.  You
all can connect to each other at warp speed.  But let's say the town
next to you has only a 64K link.  Yer effective bandwidth to any user
from yer town to that town is 64K divided by the simultaneous number
of users, minus the overhead.  Who pays for upgrading the link?
You or them?  Let's say they have the hot porno site everyone in
your town wants to visit and all your town has to offer is town
meetings plus the crop report.  Why should they pay for the higher
speed link to allow your town to visit their site?  Because it allows
their users to visit your crop report? 50-50? They should pay half
so your users can visit their hot porno site while they get nothing
in return?
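
bdh3's effective-bandwidth figure is a single division; as a sketch (the
20-user count and the 10% overhead below are made up for illustration):

```python
def effective_bps(link_bps, simultaneous_users, overhead_fraction=0.1):
    """Each user's share of a shared link, after protocol overhead."""
    return link_bps * (1 - overhead_fraction) / simultaneous_users

# A 64 kb/s inter-town link shared by 20 simultaneous users
print(effective_bps(64_000, 20))   # about 2880 bits/s each
```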


#30 of 49 by scg on Tue Jun 4 01:04:19 2002:

Yup, you've basically summed up a lot of the peering debates.  The "huh" was
in regard to your apparent confusion about various NAPs and Internet
backbones.  I really was wondering what you had been trying to say.


#31 of 49 by other on Tue Jun 4 06:54:58 2002:

Of course if they pay for an upgrade, then they can visit their own hot 
porno site much more expeditiously, not to mention being better able to 
make money selling site access to other people in other towns who might 
otherwise go to other hot porno sites with faster links.


#32 of 49 by other on Tue Jun 4 06:55:26 2002:

What were we talking about?


#33 of 49 by bdh3 on Tue Jun 4 08:07:09 2002:

NAPs and taking naps and why the Internet as we know it is
doomed.


#34 of 49 by russ on Wed Jun 5 03:58:55 2002:

Getting the subject back to routing and capacity limitations and
whatnot, I understand that a big part of the bottleneck is that
routers are a lot slower than fibers and thus get expensive.

Routers are expensive in part because they have to have routing
tables of what addresses are where, and they need to be able to
scan them for each packet that comes in.  If the router received
data packets with the routing information already encoded in the
header (with the next stop at the top, so it's stripped off and
the rest of the packet sent on), the router would have a lot less
work to do and could be much faster without any more hardware.

Design something like this to be dumb enough, and purely optical
technology will be able to handle it soon.  The electronic hardware
would only have to use smarts to discover a route from the packet
source to its desired destination, after which the originating node
could handle the work for all subsequent traffic to that destination.

If the routers appended "where I've been" routing info to the end
of the packet, the receiving node wouldn't have any work to do to
return responses (source route just has to be reversed) and it would
be trivial to find the sources of DDoS attacks...
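
What russ is sketching resembles old-style strict source routing more than
any deployed protocol; a toy model of his pop-the-next-hop,
append-where-I've-been scheme (the node names are invented):

```python
def forward(packet):
    """One forwarding step.  packet = (route, visited, payload): pop the
    next hop off the front of the route, append it to the trail."""
    route, visited, payload = packet
    next_hop, rest = route[0], route[1:]
    return next_hop, (rest, visited + [next_hop], payload)

def reply_route(packet):
    """Route for the receiver's answer: reverse the trail, drop self."""
    _route, visited, _payload = packet
    return list(reversed(visited))[1:]

pkt = (["r1", "r2", "dst"], ["src"], "hello")
while pkt[0]:                 # each hop needs no routing table at all
    hop, pkt = forward(pkt)
print(pkt[1])                 # ['src', 'r1', 'r2', 'dst']
print(reply_route(pkt))       # ['r2', 'r1', 'src']
```

The "where I've been" trail is also what would make the DDoS tracing he
mentions trivial in this scheme.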

If you had enough bandwidth to be able to use it at a small fraction
of its total capacity (and what else is offered by our glut of fiber?), 
you wouldn't even have to do collision handling; if two packets want
to use the same outbound fiber at the same time, they're lost and TCP
does the recovery.  Oh, you could use TDMA to avoid collisions, but
that's requiring something to be *smart* and *coordinated* to work;
making things that don't need to be more than dumb is probably better.

Let's see, at 40 fibers in a bundle and 100 Gb/s per fiber, a bundle
represents 4 Tb/sec.  If you allow 1% of that to be used, you get
40 Gb/s.  That's enough to give 10 Mb/sec to 4000 users, or 1 Mb/sec
to 40,000 users (24/7).  That's about a city's worth, no?  If you
can use 10% effectively, it's 400,000; ten bundles serves 4 million,
or a Michigan's worth of households.

Ten bundles doesn't seem like much when I see four or more conduits
going into the ground along every road, even in the middle of nowhere.
Why do we have all these bandwidth bottlenecks, anyway?
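
russ's figures check out as arithmetic (these are his assumed numbers, not
measured capacities):

```python
Gb = 10**9
bundle = 40 * 100 * Gb               # 40 fibers x 100 Gb/s = 4 Tb/s
usable = bundle // 100               # allow 1% utilization -> 40 Gb/s
print(usable // (10 * 10**6))        # users at 10 Mb/s each: 4000
print(usable // 10**6)               # users at 1 Mb/s each: 40000
print(10 * (bundle // 10) // 10**6)  # ten bundles at 10% use: 4000000
```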


#35 of 49 by gelinas on Wed Jun 5 04:12:52 2002:

What happens when one of the links goes away, Russ?  How does that information
get transmitted to all of the endpoints that need to know?


#36 of 49 by bdh3 on Wed Jun 5 05:17:14 2002:

(psst.  he's an engineer.  he thinks machines designed by engineers
never break.)


#37 of 49 by mdw on Wed Jun 5 07:19:58 2002:

Source routing went out with uucp.  With IP, the end nodes are not
responsible for deciding how packets get from point A to B; that's the
responsibility of the routers in between.  Modern routers are designed to
handle failing nodes (given a sufficient # of machines, there is
virtually a 100% chance some node is out of order), and some smarter
machines may also do load sharing.

Besides the probability of machine failure, there is also the "back hoe"
issue, which is actually more serious than it seems.  There are a
surprising number of points in our transcontinental communications
system where there are "choke points"; only one way to get from point A
to B.  There may be 3 dozen telecommunications companies offering
service from points near A to B, so it may seem like there are lots of
choices - but it generally turns out they mostly all bought or rent
cable rights that all go via one mountain pass or over one bridge --
very often that turns out to be some railroad right of way first laid
out for steam over a century ago, somewhere out west.

In order to handle the case that any given machine may be out of
service, most modern routing algorithms are designed to be
"distributed", with each machine generally only computing only the "next
hop".  One of the proofs for correctness of any given routing algorithm
is that it not route packets in a circle.  Another requirement is that
when overloaded, routing algorithms should "fail gracefully" (which is
harder than it seems, and imperfect at best.)

For routers on the periphery of the internet, these are all generally
easy problems; typically there are a few local attached networks, one or
a small number of other routers with local networks, and some single
"upstream" point where everything else goes.  It may only take a couple
of K of data to represent all this, and computing the "next hop" from
this data is generally fast.  For such machines, exchanging the whole
route table periodically is often sufficient, and handling network outages
is usually simply a matter of noticing some machine hasn't been heard
from in a while, and deleting it from the very small routing table.

For "backbone" routers, the routing problem is a nightmare, and modern
backbone routers probably start with having >64M of ram just to store
all the networks they have to know about.  One of the important
requirements of backbone routing protocols is to reduce the amount of
routing traffic as much as possible, simply to avoid shipping 64M of data
every time some remote network interface flaps (goes up or down).  This
is all a very big problem; there are numerous dissertations on the
design problem, and dozens of discarded routing protocols that seemed
like a good idea at the time, then turned out to have expensive
problems.

All of this is complicated to solve, and people who are truly competent
at it are scarce.  That is why network engineers can pull down some big
bucks, and sometimes have some truly strange educational backgrounds.
This is also why there are really only a few companies that make good
"backbone" routers - those are typically very specialized machines with
some terribly expensive and fast hardware in them.
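
The distributed, next-hop-only computation mdw describes is the heart of
distance-vector protocols like RIP; a minimal Bellman-Ford-style sketch on
an invented three-node topology:

```python
import math

links = {("a", "b"): 1, ("b", "c"): 1, ("a", "c"): 5}  # symmetric costs
nodes = {"a", "b", "c"}
nbrs = {n: {} for n in nodes}
for (u, v), c in links.items():
    nbrs[u][v] = c
    nbrs[v][u] = c

# Each node stores only (cost, next_hop) per destination -- no full paths.
table = {n: {d: (0 if d == n else math.inf, None) for d in nodes}
         for n in nodes}

changed = True
while changed:                       # relax until a fixed point
    changed = False
    for n in nodes:
        for v, link_cost in nbrs[n].items():
            for d in nodes:
                cand = link_cost + table[v][d][0]
                if cand < table[n][d][0]:
                    table[n][d] = (cand, v)  # cheaper route via neighbor v
                    changed = True

print(table["a"]["c"])   # (2, 'b'): a reaches c via b, total cost 2
```

Real protocols add the hard parts mdw lists: detecting dead neighbors,
damping flaps, and avoiding routing loops (this naive version is prone to
count-to-infinity).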


#38 of 49 by oval on Wed Jun 5 07:34:19 2002:

interesting ..



#39 of 49 by bdh3 on Wed Jun 5 08:19:58 2002:

Yes, and marcus is well underpaid.



- Backtalk version 1.3.30 - Copyright 1996-2006, Jan Wolter and Steve Weiss