 
25 new of 382 responses total.
popcorn
response 25 of 382:  Nov 16 16:26 UTC 1995

Re commercial pages on Grex: Commercial pages tend to get a *lot* more hits
than personal pages.  I mean, compare the traffic to the Coca-Cola web page
to the traffic that Joe Blow from Idaho's web page gets.  Because of the
traffic, I could see us not wanting commercial web pages on Grex.
davel
response 26 of 382:  Nov 16 16:52 UTC 1995

I halfway agree.  I'm concerned about the inevitable crises about what's
commercial, say, when Mary posts an announcement of her neighbor's garage
sale with tons of old & maybe rare LPs?  Having such a rule is going to
require an enforcement mechanism, and that's going to mean someone's having
to pass judgment *at least* whenever someone complains ... and if *only* when
someone complains, then those who we clamp down on are likely to complain
(rightly) that they're being singled out.  I hate the thought of the extra
link traffic, but I *really* hate the management job this involves.
janc
response 27 of 382:  Nov 16 17:49 UTC 1995

We probably have the capability to keep track of how many bytes of data were
sent due to hits on any particular user's web page.  If that gets to be
excessive for some user (where our notion of excessive might be tempered by
whether the user is a member), we might start sending the person mail
asking him to seek a new home.

That's not a real formal policy, but it's more or less the way we handle
things with mail.

It might be good to write some guidelines for Grex web pages:
   - no pictures stored on Grex.
   - total file space less than whatever.
   - Try to do many small pages rather than single large pages.
   - If you really have a lot of traffic, we recommend you talk to...
etc.
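
The byte-accounting idea above could be sketched from an httpd access log
like this (a minimal sketch in Python; the log format and usernames here
are hypothetical examples, not Grex's actual logs):

```python
import re
from collections import defaultdict

# Hypothetical Common Log Format lines: the /~user/ path identifies whose
# page was hit, and the final field is the bytes transferred.
LOG_LINES = [
    '1.2.3.4 - - [16/Nov/1995:16:00:00 +0000] "GET /~janc/index.html HTTP/1.0" 200 2048',
    '5.6.7.8 - - [16/Nov/1995:16:01:00 +0000] "GET /~janc/faq.html HTTP/1.0" 200 1024',
    '9.9.9.9 - - [16/Nov/1995:16:02:00 +0000] "GET /~davel/home.html HTTP/1.0" 200 512',
]

def bytes_per_user(lines):
    """Sum bytes served per user directory (/~user/...)."""
    totals = defaultdict(int)
    pattern = re.compile(r'"GET /~([^/]+)/\S* HTTP/[\d.]+" \d+ (\d+)')
    for line in lines:
        m = pattern.search(line)
        if m:
            totals[m.group(1)] += int(m.group(2))
    return dict(totals)
```

Summing the last field per /~user/ directory is enough to spot the handful
of pages generating most of the traffic, without any formal quota machinery.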
rcurl
response 28 of 382:  Nov 16 18:37 UTC 1995

I prefer to specify some numerical limits (filespace, bytes transferred,
etc), with of course the capability to permit more by justified request,
because it requires less management time and helps prevent inequities.
I would particularly not want to give members a higher maximum than
non-members, unless we adopt a policy to that effect. We should not have
any hidden resource "perks" for members. 
robh
response 29 of 382:  Nov 17 00:31 UTC 1995

Re 25 - Yes, but what do you think the odds are that
the Coca-Cola Corporation would *want* a Web page here?  >8)
I would assume that anyone who was willing to set up
a commercial Web page on a system that didn't allow graphics
or CGI at all would be pretty desperate, and probably not
have a lot of money.

I'm not sure I like the idea of limiting how many accesses
a single user can have to their Web space here, or how
much bandwidth they can use, etc., just because that's not
something a user can control easily.  Let's say that people
are accessing my pages "too often", and the other staffers
tell me about this.  What am I supposed to do about it?
I can't control how many people try to access my pages.
What, I should scatter a few hundred typos all over the
place and hope it drives people away?  Put in lots of
obscenities?  (That might attract even more accesses...  >8)
ajax
response 30 of 382:  Nov 17 05:47 UTC 1995

  In that situation, you could pay $2/mo to one of the organizations
you suggest, and change the links in your primary web page to that
site.  I don't see a need for web disk quotas beyond normal disk
quotas, nor for web bandwidth/access quotas until or unless there is
evidence they're using a sizable chunk of Grex's resources.  But *if*
that happens, users (including those using Grex for profit) are able
to move their pages elsewhere if they want to keep them available.
srw
response 31 of 382:  Nov 17 06:12 UTC 1995

Re #11 - Greg and I seem to agree about the issues of efficiency and
extravagance. I am going to back away (a bit) from some of my claims that http
shuts down too often due to link traffic. I believe it used to do that
much more than it seems to do any more. I am inclined to believe that
the recent changes to the packet fragmenting parameters
have resolved much of the problem and made the link
more efficient. Http usage has been improved a bit because of this.
It is still extraordinarily slow when the link is busy, and that is by design.

The big disagreement Greg and I seem to have is over the value of 
bringing a web-based front end to conferencing. I am not talking about
an extravagant or glitzy thing. I can do it with no graphics at all.
I believe it will allow the power of picospan to reach users who would 
not otherwise find it.  I also believe that this would be a good thing.
gregc
response 32 of 382:  Nov 17 09:27 UTC 1995

Actually, no. That issue is merely a matter of opinion, and you might 
actually convince me otherwise if you can come up with a neat design.

Our area of "big disagreement" was over your idea of making http packets
high priority. That is simply wrong. If everyone on the system voted
to do it, it would still be wrong. Like the apocryphal story of some
southern state's legislature voting to change "pi" to "3" so it would be
easier to deal with, there are certain things you can't vote into "rightness".
robh
response 33 of 382:  Nov 18 00:23 UTC 1995

Re 30 - Rob, that's a silly policy idea as it stands, and I'll
illustrate why:  (one of the few advantages of working at Meijer,
my brain is free to come up with bizarre scenarios like this)

Let's say that I'm a hacker-type who's royally ticked off at
janc because he's fixed the bug in party that let people read
private channels.  How do I take my revenge upon him?  Simple.
I go to another system, whip up a small shell script that accesses
janc's page, oh, ten thousand times, and sit back and let
it run.

At which point I have to tell janc that he can't have his pages
here on Grex any more, because he's generating too much traffic.
Does this make any kind of sense?  Do we really need to encourage
yet another potential hacker problem?  Especially one that
will take up huge amounts of bandwidth?

"But they can do that now", I hear you say.  Yes, but if they do
it now we won't reward them by removing the other user's pages.

"Well, if we know it's a hacker-type we won't do anything."
Do you want to be the one sorting through the entire log file
trying to figure out if a page is being "attacked" or not?
I think I have better things to do than play forensic analyst.

(And the sad thing is, I can't afford another $2 a month for
computer access, and would probably have to deperm all of my
Web space if that happened.)

I really do not understand why we want to punish users for
having comparatively popular pages.
ajax
response 34 of 382:  Nov 18 01:13 UTC 1995

  I've already stated my opposition to web quotas at this point, in
case that wasn't clear, but to answer your last paragraph....
 
  Same reason we "punish" users for having graphical web pages, for
having a large mail spool, or for using a lot of disk space - to
keep the system working alright for the rest of our users.  Given
Grex's limited resources, we've prioritized what's important to us.
 
  If it comes down to it, if http traffic starts interfering with
e-mail delivery, you can guess which will be deemed more important.
Among possible solutions would be limiting busy pages, as we plan
to limit big mail spools.  Once mail quotas are in place, malicious
users will also be able to flood your mailbox (I think?).
 
  You raise a good point if web access quotas are ever seriously
considered, though I don't think it invalidates the idea.  An attempt
at detection of such "sabotage" could be made.  But hopefully there
won't be a need for such quotas anyway.
kaplan
response 35 of 382:  Nov 18 01:33 UTC 1995

It makes no sense IMHO to set limits (for example 100KB storage for
web space, home directory disk quotas at 1MB, or some amount of http
traffic generated) on the computing resources any one grex user is
entitled to use.  The problem is that any person who wants to increase
his/her allocation needs only to run newuser and create new accounts.  It
is trivial to link my web pages to my alter-ego's web pages.  

Such limits per account will deter the casual resource hog, but they will
soak up time and effort of both serious resource hogs and staff trying to
fight them which might have more productive uses.

I think that given the free nature of grex, the best limitation on people's
web usage should be the user's own patience.  Leave http traffic with a
lower priority than other interactive traffic and if http seems too slow
for any user, that user will stop using it and there will be more
bandwidth for the patient ones.
robh
response 36 of 382:  Nov 18 01:47 UTC 1995

Exactly what I say.

Re 34 - But there is clearly a difference between accesses
on a Web page and the size/contents of the page.  The user
can control how big his/her page is, and can choose not to
include huge pictures files.  (On Grex, their choice doesn't
really enter into it, of course.)  But a user cannot control
what other users at other Internet sites do.
adbarr
response 37 of 382:  Nov 18 02:59 UTC 1995

<with serious trepidation, I ask> If you had the hardware and the connectivity
would the above (all of the above) be an issue? Where should the effort
be concentrated?
robh
response 38 of 382:  Nov 18 03:30 UTC 1995

If we had a T3 connection and a lot more CPU, then none of
this would be much of an issue, no.  This is the problem
with overpopulation, more and more people are trying to use
the same resources.  Of course, if some of those people
became members...
gregc
response 39 of 382:  Nov 18 04:40 UTC 1995

Actually, no, adbarr and robh, use grows to fill the available disk space,
CPU and net bandwidth. If we had an 8 processor Sparc 20 and a T3 connection
that together would support 600 simultaneous users, we would grow until
we *had* 600 simultaneous users. And the system would be as slow as it
is now. (And if you thought agora was a problem now......)
janc
response 40 of 382:  Nov 18 04:49 UTC 1995

I'm against rigid quotas on accesses.  But there are clearly some web pages
that generate a heck of a lot more traffic than others.  If we had someone
write a web-page that's so trendy and cool it gets written up in Newsweek,
Wired and all the other trendy web media, we'd have a problem.  We'd buckle
under the load.  We'd really be forced to ask the user to move his page
elsewhere, and leave a pointer here to there.

So the basic, unwritten rule of Grex pages must be "You can have a Web page
here, so long as it is of modest size, and basically boring".

Sure, the rule sounds dumb, but I don't think Grex could survive a really
popular web page, even if it wasn't all that big.

I'm sure, however, that our staff can distinguish someone using the web to
attack a page on our system from hordes of genuinely interested people.

The staff currently does concern itself with people who send or receive too
much Email.  We don't have any fixed limit on how much Email traffic a
Grexer may generate, but we know a problem when we see one.  I'd think
Web Traffic would have to be handled the same way.
ajax
response 41 of 382:  Nov 18 06:12 UTC 1995

  I was thinking the same thing.  If web traffic is ever a big problem,
staff can list the top couple sites by access, and ask the authors if
they can move some of their pages elsewhere.  If people were consistently
uncooperative, there are plenty of other solutions to too much traffic
(automated quotas being one, which I'd place pretty far down the list).
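
Listing the top couple of sites by access, as suggested above, is a
one-liner over the same kind of access data (the hit records here are
hypothetical; this is a sketch of the idea, not an actual Grex tool):

```python
from collections import Counter

# Hypothetical access log reduced to (user, path) hit records.
HITS = ([("janc", "/~janc/index.html")] * 5
        + [("davel", "/~davel/home.html")] * 2
        + [("ajax", "/~ajax/fun.html")] * 9)

def top_sites(hits, n=2):
    """Return the n users with the most page accesses, busiest first."""
    counts = Counter(user for user, _ in hits)
    return counts.most_common(n)
```

Staff could run something like this periodically and only talk to the
few authors at the top of the list, rather than policing everyone.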
robh
response 42 of 382:  Nov 18 13:11 UTC 1995

How about if we ask the owners of the more popular sites
to include a pointer on becoming a member of Grex?  >8)
adbarr
response 43 of 382:  Nov 18 15:27 UTC 1995

Give him lemons . . . 
danr
response 44 of 382:  Nov 19 15:48 UTC 1995

re #42:  Sounds like another statistics program is in order. :)
rcurl
response 45 of 382:  Nov 19 18:53 UTC 1995

I would think that it could be set up so that a web site could be read
only a couple of times (in some time period) from the same source address,
which would defeat the access-bomb described by Rob in #33. 
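
The per-source-address restraint described above might look something
like this fixed-window check (a sketch assuming a limit of 2 reads per
60 seconds per address; nothing here reflects what the httpd of the day
actually supported):

```python
import time

class SourceRateLimiter:
    """Allow at most `limit` reads per source address per `window` seconds."""

    def __init__(self, limit=2, window=60.0):
        self.limit = limit
        self.window = window
        self._seen = {}  # addr -> timestamps of recent accesses

    def allow(self, addr, now=None):
        """True if this access is permitted; expired timestamps are dropped."""
        now = time.time() if now is None else now
        recent = [t for t in self._seen.get(addr, []) if now - t < self.window]
        if len(recent) >= self.limit:
            self._seen[addr] = recent
            return False
        recent.append(now)
        self._seen[addr] = recent
        return True
```

A repeat-fetch script from a single host would be refused after its
first couple of hits, while ordinary readers coming from many different
addresses would never notice the limit.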

Everyone seems to agree that there should be some sorts of restraints on
resource hogs, but there is little agreement on how to implement this.
Currently it is done on a subjective basis by staff, which takes a lot of
their time that could be better used elsewhere. The option of making it
automatic (limits, in particular) has raised all sorts of scenarios where
limits would not be fair, or could be defeated. Well, being an engineer, I
tend to think that there is a solution to every problem (even if only an
"engineering solution" :>), so I think there are solutions to these
problems. One step would be the "statistics program" that Dan refers to in
#44 (if I understand that correctly), so that staff is automatically
alerted to large resource demands without having to hang out all the time
to look for them. Another step would be to choose a suite of limits as
*guidelines* to users, to inform them that if they use more than this or
that (of various resources), they should consider using a different
service. Another step would be to double that "warning" limit to make an
absolute limit. 
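
The guideline-plus-doubled-hard-limit scheme in the paragraph above
amounts to a simple three-way check (the units and numbers would come
from whatever suite of limits was chosen; this just shows the shape of
the test):

```python
def usage_status(used, guideline):
    """Classify usage against a soft guideline and a hard limit set at
    double the guideline, per the scheme sketched above."""
    hard = 2 * guideline
    if used > hard:
        return "over hard limit"
    if used > guideline:
        return "warn"
    return "ok"
```

A statistics program could run this check against each user's totals and
only bother staff (or the user) when the status changes.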

I've been thinking about this subject because I have created a homepage
here for an organization. I *expect* it to receive relatively little
traffic. If it receives a lot, and especially if some policies are adopted
to tell me what a lot is, I would seek another provider for placing the
page. At the moment, however, there are no policies except an implicit
threat that "too much" traffic might be "frowned" upon. I would much prefer
explicit policies, so I will know what actions I have to take without
being subject to unexpected "warnings" or criticisms. 

dpc
response 46 of 382:  Nov 23 01:28 UTC 1995

I would really hope we take a "hands-off" approach to Web pages for the
time being.  The whole phenomenon is just too new for us to be able to
make any judgments about what "contributes" to Grex.  Just call me
a free-marketer!   8-)
        Frankly, I thought the whole Web would be a passing fad, and
that it would collapse back to relative obscurity in about 6 months,
due in part to the molasses-like speed of viewing all those graphics.
(I even *prefer* lynx at this point for that reason!)
        Setting limits also means allocating staff time to enforcing
those limits.  Plus cries of censorship would inevitably arise.
        If we really do need to "do something" because Grex is
becoming constipated from Web pages, why not say that people who
exceed their directory quotas for any reason are asked to pay, say,
$2 per month extra?
rcurl
response 47 of 382:  Nov 23 06:54 UTC 1995

Limits can be automatic.

Well, I will agree about the Web phenomenon. I think it will start
toning down as a fad before long, as people learn they need to maintain
their web pages or they become silly. Commercial use is a different
matter, of course. I am getting a bit sick of all the cross-listing,
which makes me go around in circles. I'm calling it the CobWeb, now.
mdw
response 48 of 382:  Nov 23 15:44 UTC 1995

The general approach we've had with limits is to wait until something
becomes a demonstrable problem, and then, if we can't convince users to
stop doing it, we implement some sort of automatic limit.  For instance,
we have a disk space policy, but we've never had to automate a solution,
because we have enough extra space that just warning users has been
sufficient.  We did have a problem with people sending very large files
through mail, and choking up the whole system.  Mail, today, has special
logic to detect large files, and feed them through more slowly.  We had
a problem with people putting fancy glitzy graphics on grex, and
incidently eating up tons of disk space, *and* posting graphics of a
questionable legal nature on grex.  The current "no graphics" policy
stems directly from that, and we would have an automated solution in
place today, except that our httpd software out-dumbed us.
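
The special mail logic mentioned above (detect large files and feed them
through more slowly) can be illustrated as a paced-chunk schedule. The
threshold, chunk size, and delay here are made-up numbers, and this is a
sketch of the idea, not the actual mailer code:

```python
def delivery_schedule(size_bytes, threshold=100_000, chunk=10_000, delay=1.0):
    """Small messages go out at once; large ones are split into chunks,
    each scheduled `delay` seconds apart, so one big mail can't choke
    the whole system.  Returns a list of (bytes, send_time) pairs."""
    if size_bytes <= threshold:
        return [(size_bytes, 0.0)]
    schedule = []
    t = 0.0
    remaining = size_bytes
    while remaining > 0:
        send = min(chunk, remaining)
        schedule.append((send, t))
        remaining -= send
        t += delay
    return schedule
```

The point of the pacing is fairness: a 250KB message still arrives, it
just trickles out over a few minutes instead of monopolizing the link.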

The feedback we've gotten from users affected by the "no graphics"
policy I think could be best described as "a hostile feeling of spoilt
entitlement".  That is to say, I think it's safe to say these people
have no idea how much of the system they're eating up, no interest in
paying for that share, and little respect for the people who paid
for the system or make it work.

Let me be a bit more graphic about what I'm talking about.  The number
one offender of the "no graphics" rule has been posting pictures that
purport to be of sex with underage children.  Last time he was caught,
he had managed to consume about 4 megabytes of disk space here with this
sort of thing.  He consistently either fails to respond to mail, or
isn't at all cooperative about moving his business elsewhere.  He has
never been a grex member, and I doubt ever will be.  We aren't talking
about a cute "start" button here.  We are talking about something *way*
out in left field from what I think most people here would like to see
Grex be about.  Now, I'll grant you, this person is an extreme case, and
I can assure you, nobody on staff wants to be in the position of
censoring anything, or even appearing to do so, but I sure think we want
to be damn sure whatever policy we design will be 100% effective against
this guy.

A more typical example is the person who thinks a couple of 20K gif's
are "no trouble at all".  They aren't, if you have a T3 connected
computer.  They aren't even bad, if you're the *only* consumer of a
28.8k link, or even if you're only one of 2-10 web pages at a site.  But
I don't think we could extend the privilege of having 20K gif's to too
many home pages before we ran into serious network congestion problems.
Not only won't we be able to do any of the other things I think we value
on Grex, we won't *even* be able to do a decent job with those Gif's! Ah
yes, there's a technical detail that bears mentioning here.  When
netscape fetches a web page with 8 pictures on it, it doesn't fetch each
graphic one at a time.  It starts up all 8 at once.  If you have 8
netscape's each fetching the same page "at the same time", they'll all
pile 64 requests in within a matter of seconds.  This is a great thing
when you have a fast server and a slow workstation.  It's obviously just
about exactly the worst thing you could do to grex.
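
The arithmetic behind that worst case is easy to check: 8 Netscapes
times 8 inline images is 64 concurrent transfers, all dividing the same
28.8k link. This rough estimate ignores protocol overhead entirely:

```python
def transfer_time_seconds(n_requests, bytes_each, link_bytes_per_sec):
    """Approximate time for n concurrent transfers to all finish on a
    shared link: total bytes divided by link capacity.  Ignores TCP and
    HTTP overhead, so the real figure would be worse."""
    return n_requests * bytes_each / link_bytes_per_sec
```

Sixty-four 20K GIFs on a 28.8kbit link (about 3600 bytes/sec) works out
to roughly six minutes before the last image finishes loading.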
srw
response 49 of 382:  Nov 23 22:53 UTC 1995

Disk space and censorship were not the reason for our "no graphics" policy.
Usage of the internet link was the only reason. 

That said, if we ever did decide to allow graphics, I think we'd have to
consider the question of whether the graphics violated Michigan law. If we
suspected they did, I don't see how they could be allowed to remain.
- Backtalk version 1.3.30 - Copyright 1996-2006, Jan Wolter and Steve Weiss