In March 1968, Edsger Dijkstra formally denounced the "GOTO" statement
in a two-page letter to the editor of the "Communications of the ACM"
(then a respected journal, not the shadow-of-itself IT trade rag it
has become in recent years), entitled "Go To Statement Considered
Harmful." This sparked a revolution in Computer Science comparable in
size, scope and effect to that of the Copernican revolution, led to
the creation of Structured Programming, and the eventual downfall of
FORTRAN....
Okay, I exaggerate (I'm kidding! Kidding! Really!), but it did more
or less usher in the era of Structured Programming, and FORTRAN did
start to become less-used around this time. Whether Dijkstra's
letter had much to do with the latter is another matter, but the
letter was very influential, is often cited, and had a profound impact
on the field.
Perhaps some evidence of the wide distribution of Dijkstra's letter,
as well as its impact, is how often it is emulated: there have been a
number of "XXX Considered Harmful" papers published, either formally
or informally, in the computing field since 1968, and while the phrase
was in common journalistic use around that time, it is widely
understood that these are at least named in homage to Dijkstra. See,
for example, the Wikipedia article on "Considered harmful":
http://en.wikipedia.org/wiki/Considered_harmful.
However, I claim that the real intent of these papers shouldn't be the
establishment of (more of the many) laws of computing or programming,
or wholesale castigations of particular technologies or techniques,
but rather impassioned arguments for the adoption of certain
guidelines.
Members of the computing field often have extraordinarily strong
feelings about particular technologies, techniques, companies,
programming paradigms, languages, operating systems, etc. The field
is full of zealots.
But zealotry is not a particularly good use of intellectual power,
nor does it lead to better, more reliable software. Strongly held
beliefs are one thing; most people have those. But when people become
so strongly attached to a particular way of doing things, or of
thinking about things, they blind themselves to other possibilities.
Take, as a concrete example, the so-called "Law of Demeter." Stated
briefly, the law basically says that you shouldn't chain method
invocations. That is, a method in some object should only call
methods either on its own object or fields of its own object, on its
arguments, or on objects that it creates. In particular, it shouldn't
call methods on objects that are returned by method calls on other
objects. So an "order.print" method that refers to
"self.customer.address.state" is 'bad', because it reaches through the
customer object and its address object to get at the state; this
increases coupling (now the order knows about the internal structure
of the customer's address object; what if the customer lives in
another country where they have provinces, not states?).
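To make this concrete, here's a hypothetical sketch in Python (the class names are mine, invented for illustration): the "bad" method reaches through two objects, while the "good" one delegates, so the order only ever talks to its immediate collaborator.

```python
class Address:
    def __init__(self, state):
        self.state = state

class Customer:
    def __init__(self, address):
        self.address = address

    def state(self):
        # Delegation: the customer answers for its own address, so
        # callers never reach into the Address object directly.
        return self.address.state

class Order:
    def __init__(self, customer):
        self.customer = customer

    def label_bad(self):
        # Violates the Law of Demeter: reaches through the customer
        # into its address to grab the state.
        return "Ship to state: " + self.customer.address.state

    def label_good(self):
        # Obeys it: asks the customer, which asks its address.
        return "Ship to state: " + self.customer.state()

order = Order(Customer(Address("MI")))
print(order.label_bad())   # prints "Ship to state: MI"
print(order.label_good())  # same output, less coupling
```

If the customer moves to a country with provinces, only Customer.state() (or Address) has to change; every Order keeps working.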
This is the sort of thing that Object-Oriented pundits would have you
believe really is a law; that if you violate it, you are committing
some great wrong. It's the type of thing someone would write
a "Considered Harmful" paper on: "Violations of the Law of Demeter
Considered Harmful." Though my cursory Google search didn't turn
anything up on the first search results page, I wouldn't be surprised
if this is hiding somewhere in some back corner of the web.
But is this really that bad? What is often neglected is that there
are legitimate reasons to violate such laws. Phil Haack covers this
neatly on his blog
(http://haacked.com/archive/2009/07/14/law-of-demeter-dot-counting.aspx)
with, essentially, the following example: Consider the case of a
graphical presentation of some object; you may very well *want* to
have some sort of view object that pulls all of the data out of an
object tree (a tree by composition, not inheritance, in this case).
The Law of Demeter is all about data hiding through encapsulation,
but presentation is all about showing the data, not hiding it; at
that point, violating the law may be a useful thing. This is a
compelling argument. Sure, one could suggest that a generic printer
object be passed to a "print" method that then passes that down the
object tree (e.g., calls the address.print() method with the "printer"
as an argument). That's a nice solution that would avoid the
violation, but is it worth it? More to the point, is it really
*necessary*?
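For what it's worth, a minimal sketch of that printer-passing alternative (again, the names are mine, not Haack's): each object accepts a generic printer and forwards it down the tree, so nothing reaches through anything else's fields.

```python
class Printer:
    # A generic sink; a real view layer would format, not just print.
    def emit(self, label, value):
        print(f"{label}: {value}")

class Address:
    def __init__(self, state):
        self.state = state

    def print_to(self, printer):
        printer.emit("state", self.state)

class Customer:
    def __init__(self, address):
        self.address = address

    def print_to(self, printer):
        # Each object hands the printer down to its own parts.
        self.address.print_to(printer)

class Order:
    def __init__(self, customer):
        self.customer = customer

    def print_to(self, printer):
        self.customer.print_to(printer)

Order(Customer(Address("MI"))).print_to(Printer())  # prints "state: MI"
```

No Demeter violation anywhere, but notice the cost: three pass-through methods exist only to ferry the printer along.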
Maybe. But the fact that there's a grey area at all is why I think
that things like this should not be laws. They shouldn't be
considered universally harmful because they are not universally
harmful; the situation dictates.
I leave this quote, for your consideration:
"If you want to go somewhere, goto is the best way to get there."
- Ken Thompson, co-creator of Unix.
1 response total.
I took a class on TRS-80 Level I BASIC, and the instructor wanted everyone to approach it in a way that carried over to a similar framework for PASCAL and so forth. So, he only allowed 1 GOTO statement in a single program. A typical program would start with all of the variable init statements, then a GOSUB for every subroutine, and finally a GOTO back to the first GOSUB subroutine after an END which had an IF-THEN trap door. There's a very nice intro to BASIC at http://www.vavasour.ca/jeff/level1/entry2.html but it doesn't quite relay Dijkstra's sentiment (as I was taught, GOTO is NAUGHTY!)
- Backtalk version 1.3.30 - Copyright 1996-2006, Jan Wolter and Steve Weiss