cross
response 20 of 28:  Mar 4 15:23 UTC 2009

resp:19 In general I dislike globals for a variety of reasons, among
the biggest that they make it hard to test code and obscure it by
diffusing side effects throughout the program.  In general, one
should strive to limit scope as much as possible, with well-defined
accessors for state manipulation.  This makes it easy to figure out
where state changes.  For instance, private data members in an
object can only be modified by methods on that class (or, in C++,
by 'friend' classes), which makes it fairly easy to find where they
change.  (Unless, of course, you just have a bunch of methods to
'get' and 'set' the object's fields and you program extrinsically,
in which case, you might as well just make the data public).
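
A contrived C++ sketch of the difference (the classes and names here
are mine, purely for illustration):

    #include <stdexcept>

    // State changes only through withdraw(), which enforces the
    // invariant, so every mutation site is trivial to find.
    class Account {
    public:
        explicit Account(long cents) : balance_cents_(cents) {}

        void withdraw(long cents) {
            if (cents > balance_cents_)
                throw std::runtime_error("insufficient funds");
            balance_cents_ -= cents;
        }

        long balance() const { return balance_cents_; }

    private:
        long balance_cents_;
    };

    // By contrast, a bare get/set pair adds ceremony but no
    // protection; the invariant lives (or doesn't) in every caller.
    class AccountBag {
    public:
        AccountBag() : balance_(0) {}
        long getBalance() const { return balance_; }
        void setBalance(long b) { balance_ = b; }  // might as well be public
    private:
        long balance_;
    };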

That said, hard laws in programming should be banned.  I'm not
married to the idea that globals are totally without merit.  Consider
some object that represents 'the standard debug stream' for a command
line program; where else would one put that?  As long as I can set
it up the way I want at the beginning of the program, I see no
reason not to expose it globally, one way or another.
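
Something like this minimal sketch, say (the dbg namespace and its
functions are my invention, not any real library):

    #include <fstream>
    #include <iostream>
    #include <ostream>

    // One deliberately global "standard debug stream".  It defaults
    // to std::clog, but main() can point it anywhere, once, at startup.
    namespace dbg {
        std::ostream*& stream() {
            static std::ostream* s = &std::clog;
            return s;
        }
        std::ostream& out() { return *stream(); }
    }

    int main(int argc, char** argv) {
        static std::ofstream logfile;
        if (argc > 1) {                   // e.g. ./prog debug.log
            logfile.open(argv[1]);
            dbg::stream() = &logfile;     // set it up the way we want
        }
        dbg::out() << "debug output goes wherever we pointed it\n";
        return 0;
    }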

With respect to the program snippet you mentioned, I would submit
that, if it were procedural-style code, there would be at least one
more global variable accessed in the function that reads the next
part from the server.

But, since I mentioned classes, it would not be unreasonable to
assume that those discrete functions would actually be methods in
a class operating on some object, and the global datum would likely
be fields in that class.  Certainly, that was my intent.

If one keeps one's classes short and well-defined, then the number
of fields tends to remain small and scoping problems are not so
much of an issue.  A good general principle here is that classes
should have one reason to change.  E.g., the representation of
something changes, or its behavior changes.  Anything else, and
your class is probably doing too much.
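
To make that concrete (a contrived C++ sketch of my own, not real
code): a class that both stores an employee's data and formats
reports about it has two reasons to change, so split it.

    #include <string>

    // Employee owns the representation; a change to how an employee
    // is stored is its one reason to change.
    class Employee {
    public:
        Employee(const std::string& name, long salary_cents)
            : name_(name), salary_cents_(salary_cents) {}
        const std::string& name() const { return name_; }
        long salaryCents() const { return salary_cents_; }
    private:
        std::string name_;
        long salary_cents_;
    };

    // PayReport owns the behavior; a change to report layout is
    // *its* one reason to change, and it never touches Employee.
    class PayReport {
    public:
        static std::string render(const Employee& e) {
            return e.name() + ": " + std::to_string(e.salaryCents())
                   + " cents";
        }
    };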

In "Clean Code", Robert C Martin makes the argument that methods
should read like stories, and each one should descend exactly one
level of abstraction from the one it is called from, and should
consist of statements that are all at approximately the same level
of abstraction.  Given good names, a method would then be read aloud
as, 'TO (name of method) WE (sequence of method statements).'  If
you put small methods like that in small classes, ordered so that
the calling method appears before the called method in the source
file, then each class reads like a story to comprise one discrete
chunk of data and logic in the program.  It also makes it fairly
easy to do things like dependency injection so that one can write
automated unit tests for the class.
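
A sketch of the shape I'm describing, in C++ (PartSource and
PartLoader are hypothetical names of mine, loosely echoing the
earlier snippet):

    #include <string>
    #include <vector>

    // The injected dependency: a unit test can hand PartLoader a
    // fake source instead of a live server connection.
    class PartSource {
    public:
        virtual ~PartSource() {}
        virtual bool done() const = 0;
        virtual std::string nextPart() = 0;
    };

    class PartLoader {
    public:
        explicit PartLoader(PartSource& source) : source_(source) {}

        // Reads as: "TO loadAllParts WE fetch each part and store it."
        void loadAllParts() {
            while (!source_.done())
                storePart(fetchPart());
        }

    private:
        // Each helper is one level of abstraction below its caller,
        // and callers appear above callees in the file.
        std::string fetchPart() { return source_.nextPart(); }
        void storePart(const std::string& part) { parts_.push_back(part); }

        PartSource& source_;
        std::vector<std::string> parts_;
    };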

I think that one *can* go overboard on it, but as a first order
approximation, it's not bad.

But really, that code was just meant to be a contrived example with
no relation to real code.  The idea in particular was to illustrate
how dividing things up into smaller methods with descriptive names
can result in clearer, more readable code.
remmers
response 21 of 28:  Mar 4 18:38 UTC 2009

Right, the idea that you might have intended those to be class fields
and hence of limited scope occurred to me after I posted resp:19.

The importance of modularization, limited scoping, and explicit levels
of abstraction was not always well understood.  A long time ago --
early 1980s -- I had the joy of having to add significant functionality
to a fairly large program that was written in Macro-10, the assembly
language for the Tops-10 operating system.  A few hundred pages of code,
all of which resided in one monolithic source file.  Dozens and dozens
of variables, all global of course.  Yuck.  

At one point I mentioned to the original author of the program that
sometime it might be worthwhile to rewrite it in modular fashion in a
high level structured language like C or Pascal.  The response was oh
no, can't do that, it would run much too slowly.  Even though 1980s
hardware had a fraction of the power of today's, I'm skeptical of that
point, and I'm sure I could have done the enhancements in a fraction of
the time it took me if the software had been written sensibly in a high
level language.
cross
response 22 of 28:  Mar 4 20:05 UTC 2009

resp:21 I can empathize with the pain you must have experienced,
though from what I recall, the macro part of Macro-10 included many
higher-level control structures, and the PDP-10 instruction set was
sufficiently rich that it was almost like programming in a high-level
language.  As far as assembly language programming went, it wasn't
that bad.  (TOPS-20 was written in Macro-10 and from first-line-of-code
to multi-user operation took a smallish team about nine months.)
That said, I would certainly NOT choose to write a large application
in Macro-10 nowadays, if given the choice.

I worked on a project that was similarly painful in 1999.  I won't
go into the details here, for lack of time, but it involved moving
an existing application from VB/ASP/Transact SQL under Windows to
C++/Oracle under Solaris.  Most of our "objects" on the C++ side
were very "object-oriented", which meant that a consulting company
came in and wrote a bunch of classes in which all of the data members
were private, but each had a 'get' and 'set' method associated with
it, and the classes contained no behavior.  There was tight coupling
between many of the objects (indeed, the 'Person' class inherited
from 'Address'; I guess the programmer who had written that code
had just finished reading 'We').  Maintenance was a pain; due to
the tight coupling between classes and a misunderstanding of how
'make' worked, making a one-line change required rebuilding the
entire system, which took about an hour on the main development
server.  There was no automated unit testing, and the tight coupling
would have made it prohibitively difficult, anyway.  Requests to
refactor the code and rewrite the Makefile were met with outright
hostility by management, who considered such things a waste of time.
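
To make the smell concrete, a tiny sketch (my reconstruction, not the
actual code):

    #include <string>

    struct Address {
        std::string street, city;
    };

    // What the consultants wrote, in effect: inheritance with no
    // "is-a" relationship, coupling every Person to Address's layout.
    //
    //     class Person : public Address { /* ... */ };
    //
    // Composition says what is actually meant, and contains changes:
    class Person {
    public:
        Person(const std::string& name, const Address& home)
            : name_(name), home_(home) {}
        const Address& home() const { return home_; }
    private:
        std::string name_;
        Address home_;   // a person HAS an address; it isn't one
    };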

It seems that programming underwent something of a renaissance in
the early 2000s.  The convergence of vastly increased hardware
capacity at vastly reduced prices, the emergence of good reference
implementations of some key standards, the maturation of good
compilers for decent languages and appearance of decent operating
systems all contributed.  The so-called "agile" practices pushed
things forward quite a bit and a lot of antiquated notions about
things like performance have fallen by the wayside.  More importantly,
people are beginning to understand the value of refactoring, of
keeping the codebase tidy, and of automated testing.

This is probably a good thing, but it can still be abused.  And
attitudes like you encountered in the 1980s are still common today.
Indeed, that's what this item is all about.  Let's get back to
that....

I've seen programs that sparkled in their object oriented purity,
but could easily have been replaced with a couple lines of C or
shell.  A *lot* of software is massively over-engineered, and we
still see stupid things done in the name of efficiency in some
places, while efficiency is totally ignored in others.  For example,
I once had a developer complain to me that I should rewrite a
one-line awk script I was using on a project in Java because, "no
one else besides you knows awk."  Nevermind that any reasonably
competent programmer could read the awk manpage and understand the
program after five minutes of reading (it was, literally, one line,
and it wasn't a complicated one; like, '{print $1, $2, $3}' or
something).

We're nowhere near the point of figuring out how to develop software
in a consistent, predictable manner, and some notions that are
popular now are simply wrong.

Testing is a good example: the mantra now is 'test-driven development,'
and a solid body of automated unit tests is considered requisite
for refactoring.  But both of these practices are too reliant on
the notion that "all tests passing" means that your code is correct,
which is certainly NOT the case.  Dijkstra's old maxim, "testing can
only reveal the presence of bugs, not their absence," still holds, but
it's largely ignored.
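
A concrete (and contrived, and mine) illustration: a "green" test
suite around a function that is still broken for inputs nobody
thought to test.

    #include <cassert>

    int midpoint(int lo, int hi) {
        return (lo + hi) / 2;   // overflows when lo + hi exceeds INT_MAX
    }

    int main() {
        // Both tests pass, so the suite is green -- but midpoint() is
        // still wrong near INT_MAX.  Passing tests demonstrate only
        // the absence of the bugs we thought to look for.
        assert(midpoint(2, 4) == 3);
        assert(midpoint(0, 10) == 5);
        return 0;
    }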

I think that's for several reasons.  As I mentioned before, programmers
like to somehow consider themselves separate from the obligations
of other disciplines, particularly engineering, which is probably
what programming is closest to.  Instead, programmers often like
to think of themselves more as craftsmen, heirs to the medieval
guilds of Europe, and the act of writing software then conjures
up notions of a skilled artisan lovingly hewing a beautiful piece
of wood in some New Yankee workshop somewhere.  The schedule slips?
"Well, that's what it takes to get good craftsmanship, my boy; I
take PRIDE in my work, it can't be rushed!"  Yeah, great.  Too bad
if the company goes bankrupt waiting for you to finish your
masterpiece.

I think it's a subtle way of ducking out of the professional aspect
of the job.  Software *is* hard to get right, but a lot of that is
because we, as programmers, *make* it hard.  We get too personally
involved and don't think about the problem domain, or the larger
picture.  While it's just now beginning to dawn on people that
programs will be read (by humans) many more times than they will
be written, we're still not quite willing to pony up the human
capital necessary to write software that is readable and maintainable,
and a lot of that is just our attitudes.

An interesting experiment: try replacing the word, "bug" with
"defect" for a day when working on a programming project.  Really,
bugs are defects in a program, but it's amazing how different
people's reactions are.
sholmes
response 23 of 28:  Mar 5 04:44 UTC 2009

heh, that's what my ex-company did

The term was "defect" and never "bug";
we had defect reports,
       defects per KLOC, etc., etc.
       
cross
response 24 of 28:  Mar 5 05:48 UTC 2009

It's a pretty subtle thing.  It's odd the way people react to it, too; at
first it's almost like they didn't hear you, but often, they quickly start
getting defensive about it.  Bugs?  Not so much.
remmers
response 25 of 28:  Mar 5 19:19 UTC 2009

Can software have bugs without being defective?  For example, I'm sure
OS X has bugs, but I don't think of it as "defective" software, i.e. as
something I would want to return for a refund.

(Re an earlier sub-thread:  Yes, as assembly languages go, Macro-10 was
rather pleasant to program in, although I don't recall that it had much
in the way of HLL constructs.)
cross
response 26 of 28:  Mar 6 04:40 UTC 2009

Software bugs are defects by definition, but we must be careful to
define what we mean by "defective."  In some pedantic, absolute
sense, yes, the software is defective if it has a bug in it --- but
that's the same kind of defective that a car is if, say, it has a
spark plug that doesn't fire one out of every few-million times or
something similarly insignificant, or if a plastic knob breaks off
of the stereo or something.  That component is defective, but the
car is not overall.

Regarding Macro....  It seems I was thinking about the MACRO
facilities available under TOPS-20, not TOPS-10 (which was probably
rather more primitive).  This document goes into more detail about
it: http://tenex.opost.com/hbook.html

In particular, note the section on "Implementation Language" that I
quote below, though the entire thing is pretty interesting.  Of the
four most common operating systems for PDP-series computers (TOPS-10,
TWENEX/TOPS-20, ITS and SAIL) I'd still say that TOPS-20 is the
most interesting, followed closely by ITS.  There were some ideas
in TOPS-20 that *still* are not in "modern" systems.  And ITS had
some good ideas that have, unfortunately, fallen by the wayside
(e.g., PCLSR).

-->BEGIN QUOTE<--
Implementation Language

Almost all system code for the 36-bit architecture machines had
been written in MACRO since the first boot routine of the PDP-6.
Because of its regular structure and powerful set of operations,
36-bit assembly language was reasonably pleasant to write in, and
so less pressure existed for a higher level language than on most
other machines. BLISS was gaining some adherents in some parts of
DEC, but its various peculiarities (e.g. the "dot" notation and use
of underscore for the assignment operator in the early -10 version)
generated major resistance elsewhere. Hence, new TOPS-20 code was
written in MACRO, as TENEX had been.

During the development of release 1 and continuing thereafter,
various features were added to the MACRO programming conventions
used in TOPS-20. These were mostly implemented by macros, and they
gave certain higher-level language capabilities to MACRO. The first
of these involved mechanisms for representing data structures in
one place (i.e. a "header" file) such that the representation would
cause appropriate code to be generated for references. This started
out as a simple macro to select "left half" or "right half" of a
36-bit word (instead of an explicit HLxx or HRxx). Next came macros
to select the correct bit test instruction (TLxx, TRxx, TDxx) for
the mask at hand. Eventually, the macros were capable of defining
and representing complex multi-word record-type structures with
fields of various sizes.

Secondly, macros were used to provide procedure-call and automatic
(stack) storage using named variables. A procedure entry point could
be declared using symbolic names for the parameters, and these names
would expand to the appropriate stack reference when used in an
instruction. Similarly, local stack storage could be defined
symbolically and local registers saved automatically. These conventions
greatly reduced the occurrence of explicit stack PUSH and POP
instructions and the bugs that often resulted from them.

Finally, macros were implemented to provide control structures
without explicit use of labels. Semantically, these resembled typical
language IF/THEN or IF/THEN/ELSE constructs, and arbitrary nesting
was permitted. As a result, a typical page of TOPS-20 code using
these conventions often contained only one label -- the procedure
entry point.

All of this made for better programming, but it did not, of course,
remove dependence on the PDP-10 instruction set. Portability was
not an objective we thought to be concerned with until much too
late.
-->END QUOTE<--
remmers
response 27 of 28:  Mar 6 23:20 UTC 2009

(The R&D division of the major American automobile company for which I
did this Macro-10 job was not advanced enough in 1982 to be using
TOPS-20...)
cross
response 28 of 28:  Mar 9 20:38 UTC 2009

Heh.  I didn't mean that....