92 responses total.
steve
response 50 of 92:   Mar 2 20:26 UTC 1995

   Rob, I like your statement, but maybe we can modify it for one more
point?  We can ban for legal or technical reasons, but not moral.
   I say this because the binary newsgroups are still huge.  Even
bigger these days than before.  So, given the limited nature of
whatever pipe we construct for news traffic, we'll be able to do
a lot more if we don't carry the binary groups.  We did this before,
when the Egale's disk space was a factor.  The next generation of
news machine won't have that problem, with falling disk prices, but
we'll still have the more formidable limitation of the bandwidth getting
to us.
nephi
response 51 of 92:   Mar 2 22:02 UTC 1995

Would it be workable to only carry the newsgroups that someone requests?  
I think this would really pare down the bandwidth requirements. 
steve
response 52 of 92:   Mar 3 02:51 UTC 1995

   No, not really, because then we'd be dealing with hundreds and hundreds
of requests.  I think it would be much better to carry all of rec, comp, soc,
talk, misc, mi and news that we can (minus the legal and technical deletions),
and then add some of the other groups as need be.
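The hierarchy-based policy steve proposes (carry a fixed set of top-level hierarchies, minus specific deletions) can be sketched as a simple filter. This is only an illustration: the banned group and the sample group names are invented, not Grex's actual configuration.

```python
# Hypothetical sketch of the "carry these hierarchies, minus specific
# deletions" policy from response 52.  CARRIED lists the top-level
# hierarchies named there; BANNED is an invented example of a
# "legal/technical" deletion.

CARRIED = {"rec", "comp", "soc", "talk", "misc", "mi", "news"}
BANNED = {"alt.binaries.warez"}   # example only

def carried(group):
    """True if this newsgroup would be accepted under the proposed policy."""
    top = group.split(".", 1)[0]
    return top in CARRIED and group not in BANNED

groups = ["rec.humor", "comp.unix.misc", "alt.binaries.pictures", "mi.jobs"]
print([g for g in groups if carried(g)])
# -> ['rec.humor', 'comp.unix.misc', 'mi.jobs']
```

The point of filtering on the first dot-separated component is that one rule covers an entire hierarchy, so the policy doesn't have to enumerate thousands of individual groups.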
tsty
response 53 of 92:   Mar 3 09:08 UTC 1995

In the beginning, yes, possibly hundreds of requests +many+ of
which would be overlaps. Not possibly, probably. That's a
startup factor not different from startup expenses, a stairstep
discontinuity from "normal." A disruption in the Force, if you will.
  
But with 70+ newusers per day (still that high?) and no usenet, we
can afford to discriminate, or censor, or whatever PC word
raises the hackles the most. <g>
  
Btw, this former candidate did not hedge on the survey, and given
the charged language, would point out that +every+ "selection"
inherently involves censorship and discrimination.
  
Aside from that, when a person should decide to start trn or rn
or whatever ... are THEY gonna be faced with 11,000 (!! 450+ screens
full !!) of "selections/censors/discriminations" before they
read their first posting?
robh
response 54 of 92:   Mar 3 11:29 UTC 1995

Curiously, I read more newsgroups from the "alt" domain
than any other.  I guess I'm an alternative kinda guy.  >8)
steve
response 55 of 92:   Mar 3 14:08 UTC 1995

   I should have included alt in my above list.  It's certainly an
important part of the net.
lilmo
response 56 of 92:   Mar 3 19:22 UTC 1995

May I humbly suggest:  start a new cf, called NewsRequest (or something),
start one item per "parent" hierarchy (alt, misc, comp, rec, et cetera), and
let everyone interested in news post what 1st-level "child" hierarchies
they are interested in receiving (e.g., rec.humor, not rec.humor.funny).

And this ought to be done well in advance of the time that Usenet access
is restored.
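lilmo's NewsRequest scheme amounts to tallying requested second-level hierarchies, which is the kind of thing that could be done with a few lines rather than hand-typed lists. A minimal sketch, with invented request lines:

```python
# Sketch of the NewsRequest tally from response 56: truncate each
# request to its first two components (rec.humor, not rec.humor.funny)
# and count how many people asked for each.  The request list here is
# a made-up example.
from collections import Counter

requests = ["rec.humor", "rec.humor.funny", "comp.lang.c", "rec.arts.books"]

tally = Counter(".".join(r.split(".")[:2]) for r in requests)
print(tally.most_common())
```

Duplicate and overlapping requests (the "startup factor" tsty mentions) collapse into a single count per hierarchy, so the overlap costs nothing.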
popcorn
response 57 of 92:   Mar 3 19:48 UTC 1995

Re 49: It's not our concern until that irate parent decides to sue Grex
for showing naughty stuff to their child.  I'm still not sure how
to draw a line on this question: *I* believe in making as much material
as possible available; I don't believe in censoring what children
see; but I also don't want to see Grex on the wrong end of that kind
of legal action.

Re 56: I think that would require a *lot* of time from some
administrator person, typing in the names of each group someone
requested.
ajax
response 58 of 92:   Mar 3 19:51 UTC 1995

The same idea could be largely automated, but automating things would take
some work too.
ajax
response 59 of 92:   Mar 3 21:54 UTC 1995

  I just read some recent UUNET Usenet usage stats (starting from
http://www.netpart.com/janus/usenet.html), and the hierarchies Steve
listed seem to account for 95% of Usenet by volume, currently around
200MB/day.  Skipping hierarchies for continents like Australia (aus.*)
just doesn't save that much :).  (Or is UUNET not comprehensive,
internationally?)
 
  Scrapping all binary newsgroups should help a lot though...articles
in alt.* (which contain many binaries) are on average three times the
size of those in rec.* or comp.*, and alt.* takes up the majority
of Usenet volume.
 
  Some other interesting trivia for calculating Usenet space needs:
add 10% if you want an overview database for threaded news readers.
For 1K block file systems, add another 23% in wasted round-off space.
And add about 10% more for "some spare space in your news partition
or else your operating system is going to complain."
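Ajax's rules of thumb compound, so the partition ends up roughly half again the raw article volume. A quick sketch of the arithmetic, using the 200MB/day figure from the UUNET stats he cites; the 7-day expiry is an assumed example, not a stated plan:

```python
# Space estimate using the overheads from response 59: +10% for an
# overview database, +23% round-off waste on a 1K-block filesystem,
# and +10% spare so the OS doesn't complain.  200MB/day is from the
# UUNET stats cited above; 7 days of retention is just an example.

def news_partition_mb(mb_per_day, days_kept):
    raw = mb_per_day * days_kept
    return raw * 1.10 * 1.23 * 1.10   # overview * round-off * spare

print(round(news_partition_mb(200, 7)))   # -> 2084, i.e. roughly 2 GB
```

Note the factors multiply rather than add: 1.10 × 1.23 × 1.10 ≈ 1.49, so "a week of full feed" really means about 1.5 weeks' worth of raw volume on disk.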
steve
response 60 of 92:   Mar 4 02:36 UTC 1995

   I really really think we shouldn't create any more work for ourselves
than we must.  Automatically taking the 6 "base" news hierarchies, and
making the appropriate changes, makes the best sense.  Then we can add
things from other, less-used newsgroups as needed.
marcvh
response 61 of 92:   Mar 4 16:58 UTC 1995

Is the real problem one of disk space, or link bandwidth?
ajax
response 62 of 92:   Mar 4 18:57 UTC 1995

  Both are limited resources, so both pose "problems."  Even
with many gigs of disk space, there's a tradeoff between
keeping articles around longer, or not carrying all articles.
Link bandwidth is similar; when people transfer big files off
Grex through the net, it slows down other net users.  With
Usenet binaries this would happen more often.
steve
response 63 of 92:   Mar 4 19:33 UTC 1995

   Disk isn't the problem.  Whatever machine we come up with for
news, disk is going to be pretty cheap.  IDE disks are the cheapest,
but SCSI disks aren't that much more per MB.  We can also get more
disks as time goes on.
   Link bandwidth is a problem.  That's the reason why we can't get
all the newsgroups that exist.
tsty
response 64 of 92:   Mar 6 08:14 UTC 1995

What about those 450 screens full of choices? Ok, so maybe
only 250 if we can't get +all+ the groups some day.
ajax
response 65 of 92:   Mar 6 15:36 UTC 1995

  Re 63, I'd say disk is *relatively* cheap :)...to keep a full newsfeed
for a week would take nearly 4 gigs, which is in the $1200 range.  It's
not that much, but it's 4 gigs that Grex doesn't have right now :).
lilmo
response 66 of 92:   Mar 6 22:21 UTC 1995

Re #57 and 58:  That's why I didn't suggest that each newsgroup be
requested individually, but by the second-from-top-level hierarchy.
Obviously, listing each newsgroup by hand would be tedious and inefficient,
but that's not what I suggested.  :-)
humdog
response 67 of 92:   Mar 7 08:08 UTC 1995

it is my opinion that if you are going to carry usenet,
you ought to carry all of it if possible.

cyberspace is becoming restricted enough without
increasing the amount of restriction.
robh
response 68 of 92:   Mar 7 10:29 UTC 1995

So which would you rather have, 20% of the articles in every
newsgroup, or all of the articles in the newsgroups which Grexers
actually read?

The choice seems obvious to me.
ajax
response 69 of 92:   Mar 7 14:53 UTC 1995

  The parameters of your choice aren't necessarily Grex's tradeoff.
With the satellite feed, for instance, the newsfeed is no problem;
storage and downloading articles *from* Grex are the hold-ups.  In
that case the question becomes: what do you want, some articles
that last a week or two, or all articles, deleted after 2
or 3 days (along with more link congestion and possible lawsuits
from PlayBoy :)?
steve
response 70 of 92:   Mar 7 19:02 UTC 1995

   If we go the PageSat route, we should be able to carry a LOT of
news.  It's then a function of the groups we don't want to carry,
and how often we buy new disks to hold more news.  That will be an
ongoing process.
gregc
response 71 of 92:   Mar 7 19:53 UTC 1995

The PageSat system looks very promising and inexpensive in the long run;
however, there is one big drawback to it. If we get news from an
Internet service provider and our link goes down, the news still sits on
the other machine until the link comes back up. With the PageSat system,
news is constantly broadcast by a satellite to *everyone*,
simultaneously. If we go down for N hours, then that chunk of news is
gone. Irretrievable. If someone is waiting for a response to something
he posted in a particular newsgroup and that response arrives during a
downtime window, it's gone; he'll never see it. This is something to
think about.

I also believe I remember that Chinet in Chicago has been using this system
for several years now. We may want to contact them and see what opinions
they have about the system. I believe they also run Picospan.
lilmo
response 72 of 92:   Mar 7 21:21 UTC 1995

I think that the consideration brought up in #71 is very relevant
(considering Grex's record on downtime), and of great concern.
steve
response 73 of 92:   Mar 8 00:04 UTC 1995

   Unfortunately, with the ever-increasing volume of news, storing a
backlog of it is becoming impractical.  At 150M of news
per day, a 12-hour outage means that on average, 75M of news would be
sitting there in the queue, waiting to come over.  That would then
have to make it over here before the current day's 150M could start.
   As news gets bigger, the flow will increase to the point that the
PageSat system will be overloaded, if you want a full feed.   I don't
yet know the effective throughput rate for news via this system, but
whatever it is, it's only a matter of time before this happens.
   Note, however, that the PageSat system would / will be able to keep
up with the increasing load long after any V.34 (or forthcoming
V.43bis) modem could.
   Unfortunately, it's getting to the point that news needs to be thought
of as a continuous stream, and not the batch-oriented thing that it
has been in the past.  This is a big change in the news world.
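Steve's backlog arithmetic in response 73 is worth making explicit: an outage leaves behind a queue proportional to the daily volume times the fraction of a day lost, and it all has to drain before current traffic can flow. A small sketch, where the 150M/day figure is his and the outage length is an example:

```python
# Backlog arithmetic from response 73: news queued upstream during an
# outage is (daily volume) * (outage hours / 24).  150 MB/day is the
# figure given in the post; the 12-hour outage is an example.

def backlog_mb(mb_per_day, outage_hours):
    return mb_per_day * outage_hours / 24

print(backlog_mb(150, 12))   # -> 75.0 MB queued after a 12-hour outage
```

This also shows why the satellite feed changes the picture: a broadcast feed has no upstream queue at all, so the same outage simply loses that 75M instead of delaying it.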
popcorn
response 74 of 92:   Mar 8 04:34 UTC 1995

The thing that concerns me about the PageSat thing is that it's
very single source.  I mean, we would have to sink something like
$700 into some single-purpose hardware, which would enable us to
get news from exactly one company.  If the company goes out of
business or dramatically raises its prices or does something else
that makes it hard for us to keep dealing with them, we're left
with an expensive piece of *useless* hardware with no resale value.
- Backtalk version 1.3.30 - Copyright 1996-2006, Jan Wolter and Steve Weiss