cross
Thinking about Grex: May 2017 edition.  May 22 01:23 UTC 2017

I've been thinking about Grex recently.  Specifically about the software. 
This item contains some of those thoughts; some are brutally honest.
23 responses total.
cross
response 1 of 23: May 22 02:35 UTC 2017

A little history.

I should write a blog post about this stuff or something.

I've been using Grex for nearly 17 or 18 years now; since about
1999 or early 2000.  The system grew out of the earlier "M-Net"
system in Ann Arbor and got started in 1991; a detailed account of
early history is here: http://www.unixpapa.com/conf/oldhistory.html
(sadly, the author of that link -- Jan Wolter -- passed away a few
years ago.  I really liked Jan; he was one of the most *reasonable*
people I ever came across on Grex).

I heard about Grex on a mailing list somewhere; maybe cypherpunks
or bugtraq.  I believe Marcus Watts mentioned that 'they' were going
to use a modified version of the MIT Kerberos KDC for authentication
on their public access Unix system, and a tiny amount of research
showed that these fools gave an account with more or less unrestricted
shell access on a Sun machine running SunOS 4 to literally anyone
who logged in off the Internet.  What craziness!  How could that
POSSIBLY work?!  I had to login and find out for myself.

Have you ever seen the movie, "The Life Aquatic with Steve Zissou"?
Logging into Grex was kind of like walking onto Zissou's boat.
Everything was sort of run-down and old-fashioned, but juxtaposed
with this sense that people took it seriously.  It was bizarre:
"Let me tell you about my boat."
https://www.youtube.com/watch?v=d1RnYfFZK2k

"The bearing-casings aren't supposed to look like that, but we can't
afford to repair them this year."  There was also this pride in
being something of a species of bottom-feeder: Grex liked to acquire
old and slightly broken hardware and press it into service.  While
laudable in some sense, in an era of geometrically increasing
computing power at plummeting cost, it didn't seem like a particularly
good use of the scarcest of resources: volunteer time.

Grex also had its heroes and local flavor already.  It was hard to
come in as a Unix-guy and make suggestions and be taken seriously;
if you weren't one of the local favorites, no one knew to consider
you different than any number of other users who would blow in off
the Internet, make some drive-by comments and then disappear again.
It took me a few years to get folks to believe that anything I had
to say wasn't just uninformed hot air.  Here, well into the 1990s,
people were writing user-interface code for a time-shared system
as if it were the early 1980s.  It was all a bit surreal and,
frankly, frustrating.

By any objective measure, one should have logged out and never
looked back.

But like a favorite stuffed animal that's been mauled through
childhood and is now missing an ear and half of its stuffing, Grex
kind of gets under your skin and is hard to let go of.

In part, there's the challenge of keeping things going.  Part of
it is the retro-charm of the place and nostalgia for a simpler and
a lost style of computing: time-sharing.  As Dennis Ritchie once
said,

    "What we wanted to preserve was not just a good environment
     in which to do programming, but a system around which a
     fellowship could form. We knew from experience that the essence
     of communal computing, as supplied by remote-access, time-shared
     machines, is not just to type programs into a terminal instead
     of a keypunch, but to encourage close communication."
        (from http://cm.bell-labs.co/who/dmr/hist.pdf)

Grex really got the fellowship part, though of course in a far
different context than the development of Unix.

Sadly, I think the local focus held Grex back for a *very* long
time.  While the bottom-feeder mentality just didn't make sense on
a modern piece of hardware, it was mostly harmless: like aging
hippies dropping off potluck dinners you don't want, the obsolete
hardware accumulated wasn't a real burden.  But the local hero thing
became a problem; Grex damn near succumbed to Founder's Syndrome
(https://en.wikipedia.org/wiki/Founder%27s_syndrome).

When the bulk of your users are no longer local, then such a strong
regional focus and culture stops making a lot of sense.  Yet Grex
tried to hold onto that and fought tooth-and-nail NOT to evolve as
its user-base changed out from underfoot.  That was problematic.
The staff, many of whom had been founders, were very resistant to
change; consequently, most of the users went elsewhere.  Grex now
sees a fraction of the traffic it received at its peak of popularity
back in the 90s and early 2000s.

The Picospan program is a representative example.  Written by one
of the founding members (Marcus Watts), it was (is?), unfortunately,
closed source and only two people locally had access to the source
code.  When Grex moved off of the Sun and onto x86 hardware, getting
an updated version when we upgraded the operating system was
sufficiently difficult that it became a blocker.  But there was
significant resistance to replacing Picospan with an open-source
equivalent.  Eventually pragmatism won out and it happened, but it
was a slog.  Similarly with abandoning some of the customization
made to the Sun computer: despite them no longer being relevant on
a modern machine, there was serious talk about bringing them forward.
It was bizarre.

However, most of those folks have drifted away now.  While sad in
some sense, it presents a unique opportunity in others.

Wandering into Grex now is like wandering into an abandoned city,
but one with a fully-functional infrastructure.  It's like you can
take it and make it into the kind of place you always wanted to
live!
cross
response 2 of 23: May 22 02:57 UTC 2017

Things to do: Rewrite backtalk/fronttalk.

Backtalk was a neat idea: a "skinable" wrapper around a Picospan-like
conferencing system.  Backtalk itself is actually a language interpreter for
a (relatively) small, concatenative programming language that vaguely
resembles PostScript.  The various "interfaces" are then programs written in
that language.  The command-line interface, fronttalk, is then one of those
programs.  Or rather, one half of fronttalk is like that; the other
half deals with interacting with the user and is written in Perl.

While a neat idea at the time, it seems a bit antiquated now.  The "modern"
way we would address this would be to generate consistent structured markup
and then use CSS to "skin" the presentation to the user.  Actual manipulation
of the conference would be done using a RESTful API and a structured data
format like XML or JSON.  A web interface could be written that would plug
into this framework and provide a user-interface; another program could
provide a command-line interface.

As well as fronttalk/backtalk have served Grex for the past decade or so,
I think it's time to start talking seriously about putting both out to
pasture and moving towards something both more maintainable and modern.
cross
response 3 of 23: May 22 03:00 UTC 2017

Things to do: A place for community-contributed software.

I know that this is done on the WELL; I don't know if it's done on SDF,
but I suspect it is.  In a nutshell, trusted users are allowed to
install "user-maintained" software in a known location.  Anyone that
wants to contribute to that repository is free to do so; root access
is not required.

I propose we do something similar.  <tfurrows> is already doing some
interesting work with gboard; we should encourage more of the same by
providing a DIY mechanism for motivated users.
cross
response 4 of 23: May 22 03:04 UTC 2017

Colleen McGee (user:cmcgee) once told me that I should take Grex
and make it what I wanted.  I think that is now true for lots of
the newer users; we've got a good foundation, but the rest of the
structure needs work.  I've always at least tried to be somewhat
cognizant of the issues around Founder's Syndrome that I saw when
I first stumbled onto Grex, and to be more accepting of outside
views.  That's not to say that I've been perfect, and I certainly
think it's fine to challenge new proposals on technical merits, but
let's try and be open to new ideas!
tfurrows
response 5 of 23: May 24 20:16 UTC 2017

Awesome thread, looks like things are moving in a good direction. On the
subject of community-contributed software, I think it's a great idea. User
papa had the idea of ~username/share/bin folders, which he and I and possibly
a few others have already started using, so that we could share
scripts/utilities/filters. It's a nice way to use things from known users;
you can reference them directly of course, or place that user's share/bin in
your own path.

I think it would be nice to have a shared folder on the system, where anyone
could contribute. I have one such utility shared that way on SDF, but a
different "metaARPA" user had to place my script in the folder for me... in
any case, like you postulated, the ability is there. On the LCM's 3b2/1000
there is also such a folder for community-contributed items. So there is
certainly a precedent on current multi-user systems.
cross
response 6 of 23: May 24 20:22 UTC 2017

Re: rewriting fronttalk/backtalk.  Let's talk first about picking
a programming language.  This came up on `party` the other day, and
I thought it was worth recording here.

For the types of programs we run on Grex, the language should be
something type- and memory- safe.  That rules out the entire C
family, unfortunately.  (I have a soft-spot for C.)

I'd like something with garbage collection or clear memory semantics;
that'd rule out anything like Pascal and is another strike against
the C family.

The desire for strict static typing rules out basically all of the
dynamic interpreted languages, so no python, perl, ruby, Lua, Io,
Lisp, clojure, etc.

I'd like to avoid JVM languages due to the high overhead and startup
cost of the JVM itself, so that rules out Java, Scala, Groovy,
Clojure, etc.  This is kind of sad because I sort of like Scala and
Clojure (and Kotlin).

What's left? SML, Haskell, OCaml, Go, Rust...those are the popular(ish)
options.

SML is too niche; I'm going to rule it out.

For this sort of thing, you need something that can interface with
the underlying system fairly intelligently and has a large(ish) set
of support libraries.  That would tend, I think, to rule out OCaml
and probably Haskell.

Haskell is nice, but it's too unfamiliar to too many people.  There
are probably libraries on Hackage to do all the low-level systems-y
stuff we need, but we don't want to require folks to wrap their
heads around Monads just to understand how the BBS works.

So that really leaves Go and Rust.

Between the two, right now, I'd probably pick Go for purely pragmatic
reasons.  It's relatively familiar, relatively light-weight, has a
good implementation, a wide standard library, but is garbage collected
and type/memory-safe.  Unlike Rust, it's also a relatively stable
and mature *language*.

So...Go feels like a win at the moment.
cross
response 7 of 23: May 24 20:25 UTC 2017

resp:5 It's an old idea.  That's where /usr/local came from.  :-)

I like the idea of there being some minimal amount of gatekeeping;
so not something world-writable, but definitely a group that can
be trusted not to be idiots about installing things.

The ~/share idea seems reasonable, but one can find that one quickly
accumulates a very long $PATH that way.
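
For what it's worth, here's a hedged sketch of folding everyone's
share/bin into $PATH without listing each one by hand.  The
/cyberspace/home layout is an assumption on my part; adjust to taste:

```shell
# Hypothetical helper: append every BASE/*/share/bin directory to PATH.
# (The /cyberspace/home layout below is an assumption, not a documented
# Grex convention.)
add_share_bins() {
    for d in "$1"/*/share/bin; do
        # Skip non-directories (including the literal glob when nothing
        # matches).
        [ -d "$d" ] && PATH="$PATH:$d"
    done
}

add_share_bins /cyberspace/home
export PATH
```

Of course, this still grows $PATH linearly with the number of sharing
users, which is exactly the concern above; a single gatekept contrib
directory avoids that.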
cross
response 8 of 23: May 26 01:17 UTC 2017

And /cyberspace/contrib exists now. Some folks can write to it.
cross
response 9 of 23: May 26 02:20 UTC 2017

In the interest of trying to move forward with *actually* writing
some code to replace front/backtalk, I've written a simple parser
for the item file format.  It is a rather short program; I've run
it against all item files on both Grex and M-Net, and as far as I
can tell it seems to work for both.

It's perhaps worth noting that in the more than 60 (!!) combined
years that the two systems have both been operational, some amount
of data corruption has crept into both BBSes.  This usually manifests
itself as a missing disk block in the middle of a file, with the
result that part of the data making up an item is suddenly 4 or so
kilobytes of zeros; sometimes this is true of an entire item file.

Some other corruption is the evident result of bugs in whatever BBS
software created the files in question; YAPP on M-Net seems to have
had a problem in the past scribbling responses.  Occasionally a
line that should be special to the conferencing software is
obviously malformed.  While most of these are easy fixes, some are
a bit harder; for example, we don't *know* if someone intended to
scribble or hide a response because the response's flags are
missing.  I would argue we should not expose these very old posts
for fear we'd be acting counter to the interests of the posts'
author.  Anyway....

The parser can tolerate these problems, but isn't too pleased
about it: it makes the program somewhat uglier.  There isn't
a lot we can do about it.

Anyway, the simple parser is in /a/c/r/cross/r.go
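
As a hedged illustration of the kind of tolerance I mean (this is a
sketch, not code taken from r.go), coping with a lost disk block's
worth of zeros might look something like:

```go
// Sketch (not the actual r.go) of tolerating the corruption described
// above: a lost disk block shows up as a run of NUL bytes in the middle
// of an item file.
package main

import (
	"bytes"
	"fmt"
)

// stripNULRuns removes zero bytes from item data, reporting whether any
// were found so the caller can flag the item as damaged rather than
// silently failing to parse it.
func stripNULRuns(data []byte) (clean []byte, damaged bool) {
	if !bytes.ContainsRune(data, 0) {
		return data, false
	}
	return bytes.ReplaceAll(data, []byte{0}, nil), true
}

func main() {
	// Simulate a header line followed by a (tiny) lost block of zeros.
	raw := append([]byte("header line\n"), make([]byte, 8)...)
	clean, damaged := stripNULRuns(raw)
	fmt.Printf("%q damaged=%v\n", clean, damaged)
}
```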
cross
response 10 of 23: May 27 03:31 UTC 2017

Thinking more about backtalk/YAPP/Picospan file formats....

Jan Wolter wrote a very nice write-up of the various file formats
used in a Picospan-like conferencing system here:
unixpapa.com/backtalk/stab/doc/format.html

One will notice, in particular, that these formats are very simple,
line-oriented and mostly text-based.

A valid question is: how many are still relevant?

I suggest that only a handful still make sense: in particular, the
item format seems to make sense because there is so much existing
data in that format (on Grex, about 360MB; on M-Net, about 330MB; so
roughly 700MB total between the two systems). It would be nice
to get rid of the custom format and replace it with something like
say, multipart-MIME-encoded Mailbox format files. I think that Jan
had some idea to do this at some point, but never got around to it.
But the sheer amount of data makes it not unreasonable to retain
the current format.

However, the conference list, conference configuration, etc, formats
no longer make a lot of sense. All of that can be replaced with
JSON and things that format JSON. The question then becomes: to what
extent do we retain *aspects* of the existing formats?

For example, the conference list file contains both a "default"
conference as well as syntax for describing *aliases* for a given
conference. These are great for an interactive program like Picospan;
but to what extent do they continue to make sense for a RESTful
server that's really not meant for direct interactive use? I would
think very little. So would it be reasonable to separate out some
of that metadata from the conflist file itself? For example, does
it make more sense to move the description of the default conference
to another file? Perhaps similarly with the abbreviated names and
aliases?

If nothing else, it's worth a bit of an experiment.
cross
response 11 of 23: May 31 04:07 UTC 2017

Today I decided to take some of the code from r.go and start fleshing
it out.  In particular, I had written a 'LineReader' struct with some
functions on that type (think of this as being approximately Go's nod
to object-oriented programming).  This got me something that could
read lines of text from some source in a controllable way; this was,
of course, for parsing BBS data.  I wasn't quite happy with it, though,
so I pulled it out into its own package, changed how it's created and
started putting the BBS code (or rather, what will turn into the BBS
code) into a proper, Go-project directory hierarchy.  Next, I want to
clean up the parsing code and have it populate a proper typed structure
representing the various BBS objects: items, responses, etc.  Writing
a wrapper program to spit those out in, e.g., JSON format gets us a
good chunk of the way there towards a RESTful server for at least
inspecting the state of the database.

It's probably time to put all of this under revision control.  I'd
prefer to use Git, but right now Grex uses Subversion so that's not
really an option....
cross
response 12 of 23: Jun 1 04:06 UTC 2017

I want to take a moment and think about WHY things are, sometimes, the way
that they are.

Now, I'm a firm believer that when it comes to technical and engineering
decisions, there is at least some room for art, opinion, and subjective
decision making. But the bulk of the decision should rest on a firm basis
of data and be based on a grasp of theory. It is perfectly legitimate to
ask WHY a thing is the way that it is and the answer should at least be
supportable on technical merits.

Then there is the idea that a decision made should be open for revisiting
in the face of related change: sometimes, the technical landscape changes
out from under foot, and seemingly-settled decisions can come up again due
to changes in the environment. Decisions come with an implicit context; if
the context changes, perhaps the decision should as well.

But that's not the way that Grex operated historically. Grex was more of an
autocracy from a technical perspective; decisions were made unilaterally by
some staff members with no real effort to present an argument for WHY
they were made. That was mostly fine, except when those things got in the
way of *actually running the system*. Then they became issues and worthy of
debate and justification, but that was rarely done. Instead, the usual
response was to repeat that the solution put into place was the "right" one
and that one shouldn't question it; that it was done by volunteer effort
and thus not open to question, etc.

An example was mail delivery. Now, the "modern" solution for delivering
mail is to write it into one's home directory; the older solution is to
write it into a file in a central directory somewhere (/usr/spool/mail,
then /var/spool/mail, and finally in /var/mail/$USER).  That was fine,
but it becomes an issue when you start bumping into limits for how many
files can be in a directory; on a big system with many user accounts,
like Grex, M-Net, or SDF you can actually hit this limit.

On the Sun computers, this was especially bad: SunOS 4 and earlier didn't
gracefully handle large directories with lots of files in them.  The
solution in those days was to create a set of "hierarchical mail spool"
directories. Much like user home directories on Grex now, where the user's
login name determines the path to his/her home directory, so the login
name determined the path to the user's mail spool file on the Suns.
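
I don't recall the exact Sun-era layout, but the scheme was something
like the following sketch; the two-level hashing here is hypothetical,
shown only to illustrate the idea:

```go
// Hypothetical sketch of a "hierarchical mail spool": derive the spool
// path from the first letters of the login name, the same trick used
// for hashed home directories.  The exact layout Grex used on the Suns
// isn't recorded here, so this particular scheme is an assumption.
package main

import (
	"fmt"
	"path"
)

// spoolPath maps a login name to a nested spool file, spreading users
// across subdirectories so no single directory grows too large.
func spoolPath(login string) string {
	prefix := login
	if len(prefix) > 2 {
		prefix = prefix[:2]
	}
	return path.Join("/var/spool/mail", prefix[:1], prefix, login)
}

func main() {
	fmt.Println(spoolPath("cross"))
}
```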

When Grex moved to OpenBSD on x86, there were plans to migrate this
forward.  However, by then the de facto standard of writing into a spool
file in the user's home directory had already been adopted by the major
mailers of the day; postfix, qmail, etc, all worked that way.

Yet, Grex chose not to go that way. Even worse, Grex chose to just go with
the traditional method of a flat set of files in /var/mail; yuck.

Suggestions that we go to delivery into the home directory were met with
a lot of resistance.

Why?

I suspect a lot of it came from the proposals not coming from the right
people. It also came from a reluctance to acknowledge that the environment
had changed out from underneath Grex.

I just changed Grex to home-directory mailbox delivery. It wasn't hard,
and I doubt many (if any) will notice. But it's amazing that it took nearly
15 years to get it to happen.

Why?

I think it's at least partially still an open question.
cross
response 13 of 23: Jun 5 22:06 UTC 2017

I want to take a moment and talk about engineering software. Not
software engineering (that stolid, coarse topic) but rather the
distinct activity of engineering a program.

I'm not particularly enamored of the "agile" school of thought; I
think that they pair a fascination with measurement with a
cowboy mentality that emphasizes the act of coding and justifies
it with platitudes about "testing" making all bugs trivially
transparent and thus claiming that tests "prove" the software
correct.

That is utter rubbish; a good suite of tests doesn't "prove" your
programs are correct.  The tests merely show that, within the context
of the highly-constrained and controlled testing environment, some
assertion can be made about the expected behavior of the software
under test.  That's it.  That's not at all the same as the software
being "correct".

Anyway.  The agile people seem to think that testing cures all evils
(and cancer and the common cold...), and that the right way to do
"design" is to just write a bunch of tests, as if the "correct"
design will simply "fall" out of the exercise of testing.  Sadly,
this doesn't lead to working programs beyond the complexity of the
absurdly silly "bowling kata".  Here are some words about this and
other fallacies of the "agile" gurus.
http://pub.gajendra.net/2012/09/stone_age (note: some of the links
on that page are broken because the Agile guys like moving things
around.  Apparently, their tests don't extend to preventing other
people's links from breaking.)

So how, then, do we actually go about writing a sizeable program?
Jan Wolter and Steve Weiss put about 50,000 lines into Backtalk and
Fronttalk combined.  Some of that code was borrowed library code;
for example regular expressions and stuff.  But still, it was fairly
large.  And backtalk was in C!  Not C++, but C!  (Not related to
anything, but as an interesting aside, the 6th Edition Unix *kernel*
was less than 10,000 lines of code.)  How did they do it?

I've found in the quarter century or so that I've been writing
programs for fun and profit that writing software isn't that hard,
really.  One first decides on an abstraction, then decides on the
specifics surrounding that abstraction, and then implements and
tests those specifics, finally combining them into the whole program,
which is then tested again.  Really, that's it.

I wasn't there, but I gather that Jan and Steve decided on an interpreter
for a programming language as the central abstraction in backtalk
(and by extension fronttalk).  That did most of the heavy-lifting
of the conferencing functionality and the rest was implemented in
terms of it.  Done.

As I mentioned earlier in this thread, it was a neat idea for the
time, but a bit dated now: working in terms of a representational
interface to structured data gives us a simpler, more composable
solution.

So what, then, should the abstraction we base our program on be?

Fred Brooks (former manager of the IBM OS/360 project) had a
memorable quote in "The Mythical Man Month" that I think is relevant
here:

    'Show me your flowcharts and conceal your tables, and
     I shall continue to be mystified. Show me your tables, and I
     won't usually need your flowcharts; they'll be obvious.'

We don't program in terms of "tables" and "flowcharts" anymore, but
we *do* program in terms of data structures and types.  Therefore,
I would argue the abstraction we build on should be a set of types,
and their representation in some structured data notation.

So, perhaps the next step in our backtalk/fronttalk replacement is
to come up with a set of types that describe the various conference
data we care about.  Careful selection of these will make the rest
of the program relatively straightforward.
cross
response 14 of 23: Jun 8 03:59 UTC 2017

So thinking about types a bit....  To focus the discussion a little
bit, let's talk about some of the concrete data objects present in
a Picospan/YAPP/Backtalk/fronttalk-kind of conferencing system.

Picospan-like conferencing systems are actually rather straightforward:
they are three-level hierarchies.

At the top level, one has a conference; this is analogous to a
discussion forum, newsgroup, mailing list, etc.  On SDF, this is
equivalent to each 'board' in `BBOARD`.  However, conferences are
richer than SDF's boards in that they can have access lists,
bulletins, can display login and logout files, etc.  To make things
concrete: in the existing systems, conferences are represented by
filesystem directories.  Creating a conference requires administrator
intervention.

The next level down is the "item": conferences, in some sense, can
be thought of as containers for items.  Items are unique threads
of discussion about various topics; they can be created on demand
by users.  Items are represented by files in the conference
directories; items can be "linked" into other conferences: this is
actually handled using hard links in the Unix sense.

The final level in the hierarchy is the response: responses are
individual comments or messages in a thread of conversation.  They
are represented as a range of bytes inside an item file; one can
*not* link them into other items or conferences, though there is a
backtalk syntax for *referring* to them via hyperlinks in the web
front-end (this is somewhat analogous to symbolic links vs hardlinks,
incidentally).

Clearly, it seems reasonable that there should be a type to represent
each of these objects.  However, complexity is never too far away,
and we start running into some points for design decisions rather
early on.

Picospan-style conferencing systems have a fairly rich vocabulary
for specifying "ranges" of objects.  That is, sequences of items or
responses (curiously, there doesn't seem to be much in the way of
doing that for conferences) that match some criteria (an obvious
one: "give me a list of items with new responses since the last
time I checked").

So....Should the item type include all of the responses to the item,
or should there be other types to represent these ranges?

I'm leaning towards the latter; I think it's cleaner.
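
Sketching that leaning in Go, keeping responses *out* of the item type
and adding a separate range type (all names here are illustrative only,
not a settled design):

```go
// Illustrative types for the three-level hierarchy described above,
// with a separate range type rather than embedding responses in Item.
package main

import "fmt"

type Conference struct {
	Name  string
	Dir   string // conferences are directories on disk
	Items []int  // item numbers; items live in their own files
}

type Item struct {
	Conference string
	Number     int
	Title      string
	NumResps   int
}

type Response struct {
	Item   int
	Seq    int
	Author string
	Text   string
}

// RespRange selects responses [First, Last] of an item.  The "items
// with new responses since last read" query becomes a range whose First
// comes from the user's participation data.
type RespRange struct {
	Item        int
	First, Last int
}

func (r RespRange) Contains(seq int) bool {
	return seq >= r.First && seq <= r.Last
}

func main() {
	rr := RespRange{Item: 1, First: 5, Last: 23}
	fmt.Println(rr.Contains(10), rr.Contains(3))
}
```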
cross
response 15 of 23: Jul 14 18:16 UTC 2017

I wanted to mention this again. It's been a little while since I
wrote.  ARRL field day came and went and took away some attention,
and I've been quite busy with work lately. But I've been thinking
about this in the back of my mind.

Let's talk about item ranges.

The more I think about it, the more that I think that the item
range should, in some way, be the "unit of abstraction" for the
backend of a Picospan replacement.  That is, the fundamental
operation in *reading* would be selecting and reporting a range
of responses from some set of items (possibly in a set of
conferences) and the fundamental analogous operation for writing
would be storing a range of responses. One may think that this
latter requirement is a bit strange; after all, doesn't the user
enter a single response at a time? Well, yes...but consider that
we may want to do something like share conferences between
systems (eventually, anyway), and reconciling items between
machines may require writing multiple responses at a time.

So, the response range will be the output of reading, and the
unit for writing. Okay; let's just keep that in the back of our
minds for right now.
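
As a strawman, the backend contract might look something like this; the
interface and method names are assumptions, and the in-memory store is
just to show the shape:

```go
// Strawman backend contract for the "range as unit of abstraction"
// idea: everything goes through range reads and range writes.  A single
// new response is a one-element write; reconciling items between
// systems is a larger one.
package main

import "fmt"

type Response struct {
	Seq    int
	Author string
	Text   string
}

// Store is the proposed contract; names here are illustrative.
type Store interface {
	ReadRange(conf string, item, first, last int) ([]Response, error)
	WriteRange(conf string, item int, resps []Response) error
}

// memStore is a toy in-memory implementation, just to show the shape.
type memStore struct {
	resps map[string][]Response // key: conf/item
}

func key(conf string, item int) string { return fmt.Sprintf("%s/%d", conf, item) }

func (m *memStore) ReadRange(conf string, item, first, last int) ([]Response, error) {
	var out []Response
	for _, r := range m.resps[key(conf, item)] {
		if r.Seq >= first && r.Seq <= last {
			out = append(out, r)
		}
	}
	return out, nil
}

func (m *memStore) WriteRange(conf string, item int, rs []Response) error {
	k := key(conf, item)
	m.resps[k] = append(m.resps[k], rs...)
	return nil
}

func main() {
	s := &memStore{resps: map[string][]Response{}}
	s.WriteRange("grex", 1, []Response{{Seq: 0, Author: "cross", Text: "hi"}})
	got, _ := s.ReadRange("grex", 1, 0, 0)
	fmt.Println(len(got))
}
```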
rdj
response 16 of 23: Sep 3 22:53 UTC 2017

Question: is this web front-end meant to replace logging into Grex to use
fronttalk, or supplement it?
cross
response 17 of 23: Sep 6 18:57 UTC 2017

The intent would be to replace both fronttalk and backtalk.
mijk
response 18 of 23: Feb 6 14:53 UTC 2018

You mentioned sharing conferences between systems. Is this sharing
between alike systems (the backtalk replacement system without a name
yet), or between other types of conferencing/forum software systems?

One of the reasons I ask is that I am interested to see connections
between systems like grex (where a lot of people log in via a text-only
system) and general web-based conferencing systems, like, say,
Simple Machines Forums (which coincidentally is what runs Randy Suess's
CBBS; they have no text-based system now after running YAPP, which I
believe is very close to picospan).
cross
response 19 of 23: Feb 7 20:21 UTC 2018

I think the general idea is to share between systems running the
same backend (the unnamed backtalk replacement: I think the name
I settled on was `attospan`).

I've been busy with work lately and just haven't had time to tinker
with it much at all, which is kind of sad: but there it is.

The idea is to have a text front end, similar to the current
"fronttalk" as well as a web frontend. Nothing's happened on that
front, though.

However, I *can* read all the data on both Grex and M-Net. So we've
got that going for us, which is nice.
swolf154
response 20 of 23: Mar 13 00:48 UTC 2018

Excuse me, but I'm a bit confused. When you say "new users," what type of
new users is Grex looking for? I'm sure it would be great to get a bunch
of programmers as new users, but I really don't see the attraction. I've
noticed a lot of "web bashing" here, so I guess doing something with the
web side of things is out. What type of devices is Grex planning for in
the future? Or is the plan to just eventually go dark? Is money the main
issue, so Grex needs more users to pay dues/donate? Phew...I'm "Dazed and
Confused."
cross
response 21 of 23: Mar 13 02:25 UTC 2018

Definitely do something with the web if that floats your boat. Personally,
I think the web bashing is overdone and would rather see bashing on Gopher
instead (really guys: it's a dead protocol).

Something that I think would be kind of neat would be a simple publishing
system for text-only, gopher-like web pages using a simple terminal browser
like lynx. The challenge is generating simple HTML from an equally simple
textual description of pages. Think something like Jekyll, but oriented
towards text and super-slim presentation.
mijk
response 22 of 23: Mar 17 22:00 UTC 2018

For me, as a non-programmer with nothing to do with IT, Grex and
picospan cast a spell of the forgotten age, where internet communities were
places you could ask any question and likely get the answer from someone who
really knew what they were talking about. As opposed to today's world, where
each subject, interest, and field of endeavour has its own niche, its own
separate web forum where you only go to talk with people in the know when you
want to talk about that particular topic. It seems kinda one-sided belonging
to communities where you only share one interest. I see no reason why people
should not strive to build communities on the net, with the spirit of the
older ideals of sharing in a more general pool of knowledge and experience,
and building new things from many sources and opinions, from many fields of
interest.
Grex, picospan, the Well still fire the imagination of the times they were
born: the counterculture, the spirit of freedom, and the courage to live
life in a simpler way, without the boundaries of mainstream media and
consumerism, and to marry the fellowship of humanity with the technology
that enables us to span the whole globe, and to keep in mind that the better
tomorrow is within our grasp today, each day. The older generation said:
"Go West, my daughter/son!" and with us, as with the founders of Grex, The
Well, and the arpanet: West is any direction we point our modem. Kinda. :)
Seriously, there is still magic in places like this, and it's inspiring
finding people still wanting to build, and take part in, inclusive,
non-elitist communities like this.
mijk
response 23 of 23: Dec 27 10:54 UTC 2018

I look forward to reading about Grex old and (especially) new. 
 

- Backtalk version 1.3.30 - Copyright 1996-2006, Jan Wolter and Steve Weiss