drew
response 25 of 49: May 30 18:38 UTC 2002

Re #17 re #16:
    Are switching speeds keeping up with Moore's Law? How long before they
can keep up with a 600 nm laser beam? Perhaps we need only sit tight and wait
a few years?
scg
response 26 of 49: May 30 23:02 UTC 2002

Again, though, why?  If there's a demand for that capacity, I'm sure the
technology will get developed.  But there's no reason for every user to be
able to talk to every other user at the same time.  The capacity would cost
something, and would never get used.
bdh3
response 27 of 49: Jun 2 06:47 UTC 2002

Here in Chicagoland the bottlenecks seem to be the NAPs - points
where networks connect.  The MCI one (or whatever they call
themselves these days) is notorious and I think is still subject to
litigation.  The argument between Ameritech and MCI was over who
should pay for the access.  MCI didn't feel that it should pay to
give its competition high-bandwidth connectivity to its network, and
of course Ameritech felt the same.  Alter.net seems to be another
one, although that seems to be more a reliability issue than a
bandwidth one.

The big problem that a lot of ISPs are having is not that they
don't have the bandwidth to support heavy users - they do.  The
reason they want to start charging heavy users is that their
current subscription revenue from the existing base is not profitable.
Many ISPs overbuilt and overbought, and now, with the economic
downturn, stock performance in the pits, credit lines drying up,
the inability to float bonds, and other factors, the
turkeys are coming home to roost.  They need to fund operations
from existing users and know that they can't raise rates
across the board without their competition doing the same.  Perhaps
they are also counting on the cachet of being a 'heavy
user' keeping said users from objecting to the increase in cost.  Another
source of revenue that many ISPs are going to go after is that
of 'servers'.  Many new subscriber agreements already prohibit
the operation of 'servers', especially those with static IPs.
I'd expect to see more 'charges' associated with allowing them
cropping up in the future.

To sum up, it's not a bandwidth issue but a revenue issue.
scg
response 28 of 49: Jun 3 05:56 UTC 2002

re first paragraph of #27:

Huh?
bdh3
response 29 of 49: Jun 3 08:12 UTC 2002

Re #28: Uh. Huh?  Let's say everyone in your town has fiber or
whatever.  You can all connect to each other at warp speed.  But
let's say the town next to you has only a 64K link.  Your effective
bandwidth to any user in that town is 64K divided by the number of
simultaneous users, minus the overhead.  Who pays for upgrading the
link?  You or them?  Let's say they have the hot porno site everyone
in your town wants to visit, and all your town has to offer is town
meetings plus the crop report.  Why should they pay for the
higher-speed link to allow your town to visit their site?  Because it
allows their users to visit your crop report?  50-50?  Should they
pay half so your users can visit their hot porno site while they get
nothing in return?
scg
response 30 of 49: Jun 4 01:04 UTC 2002

Yup, you've basically summed up a lot of the peering debates.  The "huh" was
in regard to your apparent confusion about various NAPs and Internet
backbones.  I really was wondering what you had been trying to say.
other
response 31 of 49: Jun 4 06:54 UTC 2002

Of course if they pay for an upgrade, then they can visit their own hot 
porno site much more expeditiously, not to mention being better able to 
make money selling site access to other people in other towns who might 
otherwise go to other hot porno sites with faster links.
other
response 32 of 49: Jun 4 06:55 UTC 2002

What were we talking about?
bdh3
response 33 of 49: Jun 4 08:07 UTC 2002

NAPs and taking naps and why the Internet as we know it is
doomed.
russ
response 34 of 49: Jun 5 03:58 UTC 2002

Getting the subject back to routing and capacity limitations and
whatnot, I understand that a big part of the bottleneck is that
routers are a lot slower than fibers and thus get expensive.

Routers are expensive in part because they have to have routing
tables of what addresses are where, and they need to be able to
scan them for each packet that comes in.  If the router received
data packets with the routing information already encoded in the
header (with the next stop at the top, so it's stripped off and
the rest of the packet sent on), the router would have a lot less
work to do and could be much faster without any more hardware.

Design something like this to be dumb enough, and purely optical
technology will be able to handle it soon.  The electronic hardware
would only have to use smarts to discover a route from the packet
source to its desired destination, after which the originating node
could handle the work for all subsequent traffic to that destination.

If the routers appended "where I've been" routing info to the end
of the packet, the receiving node wouldn't have any work to do to
return responses (source route just has to be reversed) and it would
be trivial to find the sources of DDoS attacks...
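
To make it concrete, here's a toy sketch (Python; the packet format and
all the names are made up for illustration, not any real protocol) of
what such a packet and a dumb router might look like:

from dataclasses import dataclass, field

@dataclass
class Packet:
    remaining_hops: list          # hops still to visit, next stop first
    visited: list = field(default_factory=list)   # "where I've been" trailer
    payload: bytes = b""

def forward(router_id, packet):
    """A dumb router's entire job: pop the next stop, note itself, pass it on."""
    next_stop = packet.remaining_hops.pop(0)      # no routing-table lookup at all
    packet.visited.append(router_id)
    return next_stop, packet                      # hand off toward next_stop

def reply_route(packet):
    """The receiver just reverses the recorded path to answer the sender."""
    return list(reversed(packet.visited))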

If you had enough bandwidth to be able to use it at a small fraction
of its total capacity (and what else is offered by our glut of fiber?), 
you wouldn't even have to do collision handling; if two packets want
to use the same outbound fiber at the same time, they're lost and TCP
does the recovery.  Oh, you could use TDMA to avoid collisions, but
that's requiring something to be *smart* and *coordinated* to work;
making things that don't need to be more than dumb is probably better.

Let's see, at 40 fibers in a bundle and 100 Gb/s per fiber, a bundle
represents 4 Tb/sec.  If you allow 1% of that to be used, you get
40 Gb/s.  That's enough to give 10 Mb/sec to 4000 users, or 1 Mb/sec
to 40,000 users (24/7).  That's about a city's worth, no?  If you
can use 10% effectively, it's 400,000; ten bundles serve 4 million,
or a Michigan's worth of households.
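
Spelled out, with the same assumed numbers:

fibers_per_bundle = 40
gbps_per_fiber = 100
bundle_gbps = fibers_per_bundle * gbps_per_fiber      # 4,000 Gb/s, i.e. 4 Tb/s

at_1_percent_mbps = bundle_gbps * 0.01 * 1000         # 40 Gb/s usable = 40,000 Mb/s
print(at_1_percent_mbps / 10)                         # 4,000 users at 10 Mb/s
print(at_1_percent_mbps / 1)                          # 40,000 users at 1 Mb/s

at_10_percent_mbps = bundle_gbps * 0.10 * 1000        # 400 Gb/s usable
print(at_10_percent_mbps / 1)                         # 400,000 users at 1 Mb/s per bundle,
                                                      # so ten bundles cover 4 million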

Ten bundles doesn't seem like much when I see four or more conduits
going into the ground along every road, even in the middle of nowhere.
Why do we have all these bandwidth bottlenecks, anyway?
gelinas
response 35 of 49: Jun 5 04:12 UTC 2002

What happens when one of the links goes away, Russ?  How does that information
get transmitted to all of the endpoints that need to know?
bdh3
response 36 of 49: Jun 5 05:17 UTC 2002

(psst.  he's an engineer.  he thinks machines designed by engineers
never break.)
mdw
response 37 of 49: Jun 5 07:19 UTC 2002

Source routing went out with uucp.  With IP, the end nodes are not
responsible for deciding how packets get from point A to point B; that's
the responsibility of the routers in between.  Modern routers are designed
to handle failing nodes (given a sufficient number of machines, there is
virtually a 100% chance some node is out of order), and some smarter
machines may also do load sharing.

Besides the probability of machine failure, there is also the "back hoe"
issue, which is actually more serious than it seems.  There are a
surprising number of points in our transcontinental communications
system where there are "choke points"; only one way to get from point A
to B.  There may be 3 dozen telecommunications companies offering
service from points near A to B, so it may seem like there are lots of
choices - but it generally turns out they mostly all bought or rented
cable rights that all go via one mountain pass or over one bridge --
very often that turns out to be some railroad right of way first laid
out for steam over a century ago, somewhere out west.

In order to handle the case that any given machine may be out of
service, most modern routing algorithms are designed to be
"distributed", with each machine generally computing only the "next
hop".  One of the proofs of correctness for any given routing algorithm
is that it does not route packets in a circle.  Another requirement is
that when overloaded, routing algorithms should "fail gracefully" (which
is harder than it seems, and imperfect at best).

For routers on the periphery of the Internet, these are all generally
easy problems; typically there are a few locally attached networks, one or
a small number of other routers with local networks, and some single
"upstream" point where everything else goes.  It may only take a couple
of K of data to represent all this, and computing the "next hop" from
this data is generally fast.  For such machines, exchanging the whole
route table periodically is often sufficient, and handling network outages
is usually simply a matter of noticing that some machine hasn't been heard
from in a while and deleting it from the very small routing table.
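
A minimal sketch of that sort of edge-router bookkeeping (Python; the
names and the timeout value are assumptions, not any particular routing
protocol):

import time

ROUTE_TIMEOUT = 180          # seconds of silence before a route is dropped (assumed value)
routes = {}                  # destination prefix -> (next_hop, last_heard)

def update_from_neighbor(neighbor, advertised_prefixes):
    """A neighbor sent its whole (small) table: it can reach these prefixes."""
    now = time.time()
    for prefix in advertised_prefixes:
        routes[prefix] = (neighbor, now)

def next_hop(prefix, upstream):
    """Use a local entry if it's still fresh; everything else goes upstream."""
    entry = routes.get(prefix)
    if entry and time.time() - entry[1] < ROUTE_TIMEOUT:
        return entry[0]
    return upstream

def expire_stale():
    """Handle outages by deleting anything we haven't heard about in a while."""
    now = time.time()
    for prefix, (_, heard) in list(routes.items()):
        if now - heard >= ROUTE_TIMEOUT:
            del routes[prefix]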

For "backbone" routers, the routing problem is a nightmare, and modern
backbone routers probably start with having >64M of ram just to store
all the networks it has to know about.  One of the important
requirements of backbone routing protocols is to reduce the amount of
routing traffic as much as possibly simply to avoid shipping 64M of data
every time some remote network interface flaps (goes up or down).  This
is all a very big problem; there are numerous dissertations on the
design problem, and dozens of discarded routing protocols that seemed
like a good idea at the time, then turned out to have expensive
problems.

All of this is complicated to solve, and people who are truly competent
at it are scarce.  That is why network engineers can pull down some big
bucks, and sometimes have some truly strange educational backgrounds.
This is also why there are really only a few companies that make good
"backbone" routers - those are typically very specialized machines with
some terribly expensive and fast hardware in them.
oval
response 38 of 49: Jun 5 07:34 UTC 2002

interesting ..

bdh3
response 39 of 49: Jun 5 08:19 UTC 2002

Yes, and Marcus is well underpaid.
scg
response 40 of 49: Jun 5 20:12 UTC 2002

Marcus is pretty much right.  256 MB is generally considered the required
minimum backbone router memory at this point.  You might be able to get away
with 128 MB, if you're careful.  A full Internet routing table at this point
appears to be a bit over 110,000 routes.
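
As a back-of-the-envelope check (the route count is from above; the peer
count and per-path size are just assumptions), the memory figure isn't
surprising once you remember a backbone router holds a copy of the table
per full-feed BGP peer, plus its forwarding structures:

routes = 110_000        # full Internet table, per the estimate above
full_feed_peers = 10    # assumed number of peers sending complete tables
bytes_per_path = 200    # assumed: prefix, AS path, attributes, pointers

ram_mb = routes * full_feed_peers * bytes_per_path / 2**20
print(round(ram_mb))    # roughly 210 MB -- in the same ballpark as 256 MB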
russ
response 41 of 49: Jun 6 01:42 UTC 2002

Re #35:  What happens now?  New route information gets sent around
the net eventually, and until it does packets using the wrong route
get dropped on the floor.  The difference is that routing would only
have to be done once per connection (or set of connections between
two particular nodes), not once per packet.  This relieves the burden
on the routers because most of the data-handling requires no smarts.

If you wanted to use some smarts, you could always use the header-edit
system to re-route packets bound for one port to an alternate route
via some different set of hops.  You'd only have to do this once per
change in the routing table.
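
Roughly, that header-edit step might look like this (a purely
hypothetical sketch, following the made-up packet format above):

reroutes = {}    # hop known to be bad -> replacement hop list (updated on table changes)

def note_route_change(bad_hop, replacement_hops):
    """Done once per routing-table change, not once per packet."""
    reroutes[bad_hop] = replacement_hops

def patch_header(remaining_hops):
    """Splice replacement hops over any hop we know is broken or congested."""
    patched = []
    for hop in remaining_hops:
        patched.extend(reroutes.get(hop, [hop]))
    return patched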

Re #37:  It may have gone out, but there's a really good argument
for bringing it back; the expense of the backbone might make it a good
idea to push more of the routing functions nearer to the edges.
gelinas
response 42 of 49: Jun 6 04:25 UTC 2002

As it is now, if a node goes away, the nodes connected to it notice and find
a new route.  With your scheme, the endpoints would NEVER find a new route;
there is no mechanism to get the location of the break back to the endpoints.
If you fix that problem, you are right back where we are today.
scott
response 43 of 49: Jun 6 12:56 UTC 2002

Actually, the Internet today is heavily loaded with traffic and so a broken
node tends to result in massive loss of data.
russ
response 44 of 49: Jun 7 01:11 UTC 2002

Re #42:  Not so.  There is a mechanism for discovering routes between
nodes; if things start failing, the retry mechanism will just perform
the route-discovery step again and pick up where it left off.  My point
is that there is no need to check the route again unless the current
route either fails or becomes congested, and I'll bet that failures on
the backbone routes are rare enough that the overhead of doing route
re-discovery wouldn't come anywhere near eating the benefits.
scg
response 45 of 49: Jun 7 06:12 UTC 2002

At the very least, you're talking about a complete protocol redesign,
requiring software updates or replacement of every computer or other device
connected to the Internet.  Even if it were a good idea, it wouldn't be likely
to happen.

The path from any given destination to any other given destination tends to
be reasonably static in most cases.  Still, even in the current setup,
somebody reconfiguring their transit connections, or having a link to one of
their transit providers failing, will be seen by BGP speaking routers all over
the Net.  Networks releasing enough bad routing information have caused some
fairly significant Internet instability several times.  Since you're talking
about every edge device needing to know considerably more routing information
than today's core backbone routers (which only need to know the next hop, path
length, and a few other bits of information for making routing decisions),
and since most edge devices are optimized for something other than routing,
you're talking about a pretty significant job for the edge devices.

The number of routes in the table would have to increase dramatically as well.
Currently, the Internet is divided up into several thousand "autonomous
systems," each of which generally announces big aggregate blocks of IP address
space.  For example, an AS might have lots of customers with /29s (blocks of
8 IP addresses), but would announce them all to the world as a single /18
(16,384 IP addresses).  The rest of the world would only know to send anything
within that /18 to the AS making the announcement; the AS's internal routing
protocol, which doesn't share its data with the rest of the world, would take
it from there and get the data to the right place.  If you want all the
routing information to be known to all the edge devices, the edge devices all
over the world would have to know not only what's in the current global
routing tables, but the internal routing information of every network in the
world.  The overhead would be extremely significant.
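
For the record, the arithmetic of that aggregation is easy to see with
Python's standard ipaddress module (the address block here is a made-up
example, not anyone's real allocation):

import ipaddress

aggregate = ipaddress.ip_network("10.0.0.0/18")            # one /18: 16,384 addresses
customers = list(aggregate.subnets(new_prefix=29))         # up to 2,048 customer /29s inside it

print(aggregate.num_addresses)                             # 16384
print(len(customers), customers[0].num_addresses)          # 2048 8

# The rest of the world needs only the one aggregate route to cover all of them:
print(all(c.subnet_of(aggregate) for c in customers))      # True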
russ
response 46 of 49: Jun 10 02:41 UTC 2002

Re #45:

>At the very least, you're talking about a complete protocol redesign,
>requiring software updates or replacement of every computer or other device
>connected to the Internet.

Why?  At the *absolute* worst, it could be handled like tunneling
TCP/IP through another protocol.  At the edges, routers smart enough to
speak the host-addressed format would cache recently-used routes and use
those known routes to wrap packets going to known destinations.  The
advantage is that those edge routers are only handling the traffic going
to their little section of the address space, not the masses of data
going over the backbone.

>Even if it were a good idea, it wouldn't be likely to happen.

Depends on how much there would be to gain.  You could still route
traffic by sending it through all of the backbone routers and having
intelligence at each one determine the next hop; that's how route
discovery would probably be done.  Routing stuff beneath the smarts
of the routers would be a way to get faster transport with fewer
resources consumed.  Everybody would like to do that.
 
>The number of routes in the table would have to increase dramatically as well.

That conclusion is based on a faulty premise.

>If you want all the routing information to be known to all the edge devices

You don't; you only need to hold routing information for the connections
which are currently active.  You could push this all the way out to the
user's computer, taking the overhead off of the routers almost completely.

>the edge devices all over the world would have to know not only what's in
>the current global routing tables, but the internal routing information of
>every network in the world.  The overhead would be extremely significant.

No they don't.  The edge devices only have to know the routing information
for the pieces of those networks that they are actually using right that
minute.  If your computer has 100 connections open, you need to be holding
100 routes (if you want to take advantage of the faster transfer rates and
reduced overhead, that is; otherwise you can contend for CPU time on the
congested backbone routers).
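
In other words, something like a per-connection route cache (a rough
sketch of the idea; the names and structure are made up):

route_cache = {}     # destination -> hop list, held only while a connection is open

def send(destination, payload, transmit, discover):
    """Send along the cached route; rediscover a path only if it fails."""
    if destination not in route_cache:
        route_cache[destination] = discover(destination)     # the expensive step, done once
    try:
        transmit(route_cache[destination], payload)           # dumb pre-routed forwarding
    except ConnectionError:
        route_cache[destination] = discover(destination)      # path broke: redo it once
        transmit(route_cache[destination], payload)

def close(destination):
    """Connection done: forget the route, so only active destinations are held."""
    route_cache.pop(destination, None)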

Eventually you get to a point where the overhead of discovering routes to
hosts (opening a connection) takes as much muscle as routing each packet
once did and the routers are once again congested to uselessness, but
you'd still be moving much more data per router than you were before.  The
greater capability of the faster Internet makes it even more indispensable
to civilization and concentrates even more resources on the problem.  This
is progress to the state of nerdvana, which is why I think it's inevitable.
scg
response 47 of 49: Jun 13 23:20 UTC 2002

Getting back to the subject of heavy usage, a bunch of engineering people from
various cable modem and DSL ISPs have been saying that a majority of the
traffic they're now seeing is Kazaa, the Napsteresque file sharing program.

If you're provisioning a network for end users, you can generally assume that
the users will sit there sending tiny amounts of text in telnet sessions, or
downloading files or web pages every few minutes.  You can further assume that
most of your users will have other things to do with some of their time, and
won't do even that constantly.  Kazaa, on the other hand, not only grabs files
for download, but serves the files it already has to other Kazaa users.  If
left running, it has a tendency to suck up lots of bandwidth, 24 hours a day.
I assume it's the Kazaa users, not just people who need to read mail and so
forth while working at home, who would be targeted by a heavy usage surcharge.
It would certainly make sense to do so.
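
Some rough oversubscription arithmetic shows why that kind of user matters
so much (every number here is an assumption for illustration, not anybody's
real figures):

subscribers = 10_000
typical_avg_kbps = 15          # bursty web/mail user, averaged over the day (assumed)
kazaa_avg_kbps = 1_000         # node uploading and downloading around the clock (assumed)

def upstream_needed_gbps(share_heavy):
    """Aggregate capacity needed if a given fraction of users run Kazaa 24/7."""
    heavy = int(subscribers * share_heavy)
    light = subscribers - heavy
    return (heavy * kazaa_avg_kbps + light * typical_avg_kbps) / 1_000_000

print(upstream_needed_gbps(0.0))    # ~0.15 Gb/s if everyone is a light user
print(upstream_needed_gbps(0.1))    # ~1.14 Gb/s if 10% leave Kazaa running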
gull
response 48 of 49: Jun 17 14:24 UTC 2002

Re #37: Another issue is that the routing tables are just plain getting too
big, even for those expensive backbone routers.  This is starting to cause
some subnets to 'disappear' and become unroutable from parts of the
Internet, as network providers trim out some of the smaller subnets to get
the tables to fit into memory.  I actually see this pretty often -- there
are sites I can't reach directly at all, but can reliably reach through
proxies.  There was a paper written last year about this "dark address
space" -- there's an article about it here:

http://online.securityfocus.com/news/282
scg
response 49 of 49: Jun 18 01:31 UTC 2002

Such blackholes are more likely to be the result of peering disputes than
small subnets.  What needs to be done, in terms of aggregation, to get a route
past the various filters and routed by everyone is pretty well documented.