Almost 20 years ago, Remmers entered an item in M-Net's programming conference about structured programming and the use of goto. He quoted a letter to the editor of the Communications of the ACM that took exception to the idea that the `goto' statement should be banned outright as a matter of course in programming (and, in a larger sense, computer science) education. The assertion of the letter was that judicious use of goto could enhance the clarity and readability of programs, and in some cases, their correctness.

The letter stated that its author gave the following task to a group of 12 or so "expert programmers" (however that was decided): "Let X be an N by N matrix of integers. Write a program that will print the number of the first all-zero row of X, if any." (Most of the solutions offered in the thread generalized to an M by N matrix, where M and N might differ. This should not materially affect the answer.) Various solutions were offered, two of which were incorrect. Three or so used goto's. The rest did not.

The question is: How would you implement this? (Without cheating and looking at the answers on M-Net!) Would you use a goto? Would that improve the efficiency and/or readability of the resulting program? What do people on grex have to say? Do we still believe the dogma that goto is only ever harmful, or are there cases where it is acceptable?

Bearing in mind that the major languages of the time the question was posed included FORTRAN, COBOL, Pascal and others, how have the advances in languages changed the way we think about these problems in the last twenty years? Feel free to post a solution in any language you like and let's talk about it!
45 responses total.
(It would be cheating for me to post my solution, but I await others' with interest.)
I'm certainly NOT an expert programmer, but here is what I came up with
in Perl:
@matrix = (
    [1, 2, 3, 4],
    [0, 5, 6, 0],
    [0, 0, 0, 0],
    [7, 0, 8, 9]
);
my ($rows, $cols) = (scalar @matrix, scalar @{ $matrix[0] });
my $row = 0;
my $col;
my $found;
while ( $row < $rows ) {
    $col = 0;
    while ( $col < $cols ) {
        $found = $row + 1;
        if ( $matrix[$row][$col] != 0 ) {
            $col = $cols;
            $found = 0;
        }
        $col++;
    }
    if ( $found != 0 ) {
        $row = $rows;
    }
    $row++;
}
if ( $found != 0 ) {
    print "The first all-zero row is row number $found.\n";
}
else {
    print "No all-zero rows found.\n";
}
Hmm, that will work, but there are better solutions. Early on in the thread,
Remmers posted the letter's author's solution, which looked something like
this (in Pascal, slightly reformatted):
for i := 1 to n
do begin
    for j := 1 to n do
        if x[i,j] <> 0
            then goto reject;
    writeln('1st all-zero row is ', i);
    break;
reject:
end
In C, we'd rewrite this as:
for (i = 0; i < n; i++) {
    for (j = 0; j < n; j++)
        if (x[i][j] != 0)
            goto reject;
    printf("1st all-zero row is %d\n", i + 1);
    break;
reject: ;
}
Nate, you gave a go in Perl: how would you recast this code directly
into Perl, without using goto? (Hint: look at loop control statements.)
This response has been erased.
Woops
Make that:
for ($i = 0; $i < $n; $i++) {
    $rowsum = 0;
    for ($j = 0; $j < $n; $j++) {
        $rowsum = $rowsum + $matrix[$i][$j];
    }
    if ( $rowsum != 0 ) {
        next;
    }
    print "1st all-zero row is ".($i+1)."\n";
    last;
}
That's closer to what I was thinking about, but still not optimal. In particular, once you hit a non-zero element in some column, why do you keep going? You know that row is not all-zero at that point: is there a way to avoid looking at the rest of the row? And do you actually need $rowsum?
I see what you mean, Dan.
for ($i = 0; $i < $n; $i++) {
    for ($j = 0; $j < $n; $j++) {
        if ( $matrix[$i][$j] != 0 ) {
            last;
        }
    }
    if ( $matrix[$i][$j] == 0 ) {
        print "1st all-zero row is ".($i+1)."\n";
        last;
    }
}
Okay, can you spot the bug?
It seems to test okay with real data. Maybe you can clue me in? :)
Consider the case where a row is all zero. Then the inner for loop will terminate naturally, and $j will be equal to $n. But arrays in Perl are origin 0, meaning that their indices range from 0 .. ($n - 1). In your if statement, you index $matrix[$i][$j], but on an all-zero row, $matrix[$i][$j] is actually one element beyond the end of the array that represents the row; a classic off-by-one error. I could be mistaken, but I'm fairly certain there's no guarantee in Perl that going one past the end of the array always yields a zero, and that this is, in fact, an error.

You could do a test on $j instead of looking at the array to do essentially the same thing, but then you're doing an extra comparison in the outer loop that you could avoid in the goto case. There is another solution using the facilities of Perl; anyone know what it is?
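As an aside, the "test the index instead of the element" idea has since become a first-class construct elsewhere. Here's a sketch in (modern) Python, not from the thread and with a function name of my own invention, where the for/else clause runs the else only when the inner loop finished without a break, so there's no out-of-range index and no flag variable:

```python
# Sketch (my own, not from the thread): Python's for/else runs the else
# branch only when the loop completed without hitting break.
def first_all_zero_row(matrix):
    for i, row in enumerate(matrix):
        for value in row:
            if value != 0:
                break              # reject this row; skips the else clause
        else:
            return i + 1           # loop never broke: row was all zero
    return 0                       # no all-zero row found

print(first_all_zero_row([[1, 2], [0, 0], [3, 4]]))  # prints 2
```

The else clause is doing exactly the job of the out-of-range test above, but the language guarantees its meaning.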
Is this item about discussing the merits of goto, writing a problem solution, or both?
Both.
Nate saw my solution on M-Net, so I'm going to post it. Basically, I used
the labeled loop functionality of Perl. Here's my Perl-based M x N matrix
solution:
# Now that we have loop control statements, do we need a fancy algorithm?
my @a = ([0, 1, 2], [0, 0, 0], [0, 1, 0]);
my $N = 3;
my $M = 3;
my ($i, $j);
row: for ($i = 0; $i < $M; $i++) {
    for ($j = 0; $j < $N; $j++) {
        next row if $a[$i][$j] != 0;
    }
    print "1st all-zero row is ", $i + 1, "\n";
    last row;
}
if ($i == $M) { print "No all-zero row found.\n"; }
Note something about this solution: basically, we're using functionality of the language that effectively implements a goto. But, clearly, someone recognized the need for such a thing and added it to the language, thus obviating the need to use goto to break out of a nested loop structure.

Actually, we have several things of this nature in modern programming. For instance, exceptions are nothing more than glorified goto's with some additional state information associated with them. So the concept is useful, and what you've seen in the 40 years since Dijkstra wrote ``Go To Statement Considered Harmful'' in Communications of the ACM is a selective identification of the instances where goto really *added* something to programming, and an extraction of those paradigms into codified programming constructs. In other words, the language designers took what was useful and recast it as a first-class language component. Multi-level loop control statements (e.g., as implemented in Perl's labeled loops) and exceptions are the two important examples.

So, in a language where we have those things, is goto still useful? I'm not sure. But, from the pro-goto side of the house, let's quote Ken Thompson from the Plan 9 fortune file: "If you want to go somewhere, goto is the best way to get there. K Thompson"

My own sense is that goto is never strictly necessary, and hardly ever useful, particularly in modern programming languages where we have exceptions and multi-level loop control constructs (when I *have* used goto in real programs, it was almost always to break out of nested loops).

By the way: there's still another algorithm for the matrix problem that is both elegant and efficient, which avoids goto and doesn't require the loop control constructs I used (in fact, it can be implemented entirely in ISO standard Pascal and doesn't require a BREAK or CYCLE). Can anyone find it?
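To make the "exceptions are glorified goto's" point concrete, here is a sketch (mine, in Python, with invented names, not code from the thread) of the matrix problem using a user-defined exception as a structured multi-level break:

```python
# Sketch (not from the thread): an exception serving as a structured
# "goto reject" that abandons the current row from any loop depth.
class Reject(Exception):
    """Raised when a row is found to contain a non-zero element."""

def first_all_zero(matrix):
    for i, row in enumerate(matrix):
        try:
            for value in row:
                if value != 0:
                    raise Reject()   # the "goto": bail out of this row
            return i + 1             # inner loop ran to completion: all zero
        except Reject:
            continue                 # resume scanning at the next row
    return 0

print(first_all_zero([[1, 1], [1, 0], [0, 0]]))  # prints 3
```

The try/except carries the same control transfer as the Pascal `goto reject', but the jump target is a block the language checks for us.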
I'm most curious, once it's discovered (I know at least one person reading this knows the solution, since he made it up, and Nate probably does too, since he read the M-Net thread, so it *will* be discovered) to compare it to a solution that uses the `modern' language functions. Which is more obvious? Which is more efficient? What other metrics would one judge something like this by?
Regarding #14; Interesting. Nice way to generalize to a totally unexpected problem domain. I wonder if you can't avoid the left join by using a subselect, however...
Let me repost that:
So lets say this matrix was stored in a MySQL Database. :-)
mysql> select * from x;
+----+------+------+-------+
| id | row  | col  | value |
+----+------+------+-------+
|  1 |    1 |    1 |     0 |
|  2 |    1 |    2 |     2 |
|  3 |    2 |    1 |     0 |
|  4 |    2 |    2 |     0 |
+----+------+------+-------+
4 rows in set (0.00 sec)
mysql> SELECT x.row FROM x
    LEFT JOIN (SELECT row, SUM(value) AS sum FROM x GROUP BY row) AS x2
    ON x.row = x2.row
    WHERE x2.sum = 0 ORDER BY row LIMIT 1;
+------+
| row  |
+------+
|    2 |
+------+
1 row in set (0.02 sec)
I'm not exactly sure if this gives you the correct answer. But I put the join in there because MySQL does not allow aggregate functions in the WHERE expression.
My opinion is that in general goto's should not be employed, but there are always exceptions. Certainly generous use of goto's can make code unmaintainable (especially for the inheritors). Almost never should a goto be used to exit from the middle of a block to some other different place, or used to jump into the middle of a block. Something like a "break", which does a "goto" to the end of a block, is not too hazardous.
Interesting. What about exception handling? If I understand your prohibition about jumping *out* of the middle of a block, that would rule out jumping out of nested loops, no? I actually think that should be allowed, since the alternatives often involve flag variables and lots of additional checks. Ick!
I don't know any Pascal so I'm not sure if it qualifies, but here's some
Python that only uses a while:
matrix = list()
matrix.append ([1,0,0])
matrix.append ([0,0,1])
matrix.append ([0,0,0])
matrix.append ([1,1,1])
matrix.append ([0,1,0])

def blah (matrix):
    m = len (matrix)
    n = len (matrix [0])
    row = 0
    column = 0
    while (row < m):
        if (matrix [row][column] != 0):
            row += 1
            column = 0
        else:
            column += 1
        if (column == n):
            print row
            row = m

blah (matrix)
Excellent. That is very similar to the method that Remmers used on M-Net,
(20 years ago, in Pascal). Here is his solution:
From Remmers (excerpt copied without permission):
----
Let (i,j) be the current (row,column) position. Assume that
all rows prior to the ith have been "rejected" (contain at least
one non-zero element) and that in the ith row, all columns
after the jth have been "accepted" (are all zero). We can
make progress toward termination by updating i and j as follows:
If x[i,j] = 0, then decrement j; if x[i,j] <> 0, then increment
i and set j to n. Termination occurs when we accept an entire
row (j = 0) or reject the nth row. (If we use j = -1 to flag
this, the loop test is simply "j > 0".)
This leads directly to the algorithm:
i := 1; j := n;
while j > 0 do
    if x[i,j] = 0 then j := j-1
    else if i = n then j := -1
    else begin i := i+1; j := n end;
if j = 0 then writeln('1st all-zero row is ', i)
----
Whereas he tested against the column index, here you test against the
row index, but the idea is the same. You could avoid the second if
statement inside the loop by using an elif (observe that, if
matrix[row][column] != 0, then we reset column to 0 and we can never
get the case where column == n; that can only happen after we increment
column, and we needn't increment column if that's the case). So,
we get the following modification:
def findnonzero(matrix):
    m = len (matrix)
    n = len (matrix [0])
    k = n - 1
    row = 0
    column = 0
    while (row < m):
        if (matrix [row][column] != 0):
            row += 1
            column = 0
        elif (column == k):
            print row
            row = m
        else:
            column += 1
I'm not too stoked about the appearance of the temporary k, but that is
just there to deal with the fact that Python arrays are also origin 0, and
we want to (explicitly) avoid the cost of calculating either (column + 1)
or (n - 1) in the conditional (though the python interpreter is probably
smart enough to do that for us).
I claim this code is equivalent to Remmers's. Clearly, it operates from
the point of view (if one will) of the row instead of the column. Whether
it will be faster, however, depends on whether non-zero elements appear
frequently in the matrix or not.
Er, where I say ``we needn't increment column if that's the case'' I mean the case that we've reset column to 0. Sorry, that was ... unclear.
"error traps" are not goto's in the conventional sense - the code running at the time of an exception didn't want to abort and wildly jump somewhere else to execute something different.
I'm not talking specifically about error traps in that sense. I meant exception handling in the more general sense: I encounter some sort of exceptional condition, and want to deal with it, which might involve breaking out of a nested loop structure or something of that nature. What about goto's in that context? E.g., I have a block of code that includes some conditional that detects something bad has happened; is it okay to `goto' out of it? Maybe after doing some cleanup? Or perhaps you mean goto out of the middle of a block meaning to execute a goto while I'm in the middle of some processing, with the expectation that I'll execute another goto back into the block?
Something occurred to me about this problem: there is an almost unstated bias
that we know about the representation of the matrix before we dive into it.
Modern programming paradigms, most specifically, Object Oriented Programming,
have invalidated this assumption. Sure, we know that we have an M x N matrix,
but we really have no guarantee that we know how that's represented
internally. Is that assumption still valid?
For instance, we might now have a system where the matrix is represented as
a `Matrix object'. That might in turn be something like a list of `Row'
objects. Perhaps we don't have direct access to the rows and columns of the
matrix itself; so what do we do? Maybe we have to call a method on one of
the objects in order to solve the problem.... For instance, consider the
following code, written in Ruby. It represents the matrix as a list of objects
of type Row, each of which in turn contains an array of some type of data.
The Row objects expose a method to determine whether they consist of all 0's
or not, building on the underlying Array object's `detect' method which finds
the first element in the array for which some block of code evaluates to
`True.' No goto's there, just using what the system already provides for us.
The Matrix object then exposes a method, ``first_all_zero'' that finds the
first row consisting of all zero's. Again, it builds on the underlying
Array object's `detect' method, only this time, the block we pass calls a
method on the row. Again, no goto's.
#
# -+=Ruby=+-
#
class Row
  def initialize(list)
    @list = list
  end

  def is_all_zero
    not @list.detect { |column| column != 0 }
  end
end

class Matrix
  def initialize(rows)
    @rows = rows
  end

  def first_all_zero
    @rows.detect { |row| row.is_all_zero }
  end
end

matrix = Matrix.new([
  Row.new([1, 2, 3]),
  Row.new([0, 0, 0]),
  Row.new([3, 2, 1])])

p matrix.first_all_zero
(Of course, all the error checking you would expect on the data is missing.)
This code doesn't return an index (such a thing would be meaningless), but
instead returns a Row object (granted, in this case, the Row object itself
is pretty boring, but imagine it was more fleshed out, with a way to get at
the column entries, etc).
But the overall idea is sound. Not only do we not have to worry about
goto's, but also don't have to worry about the actual representation of the
matrix. This can be useful; for instance, suppose we have a very large
matrix, but which is mostly sparse. We might then decide to represent
the matrix as a list of Row's, where each Row also contains its own row
number (so that we don't waste space storing empty rows), and Rows might be
represented as a list of Columns (which, again, contain their column number,
so that we don't have to store empty columns). Then the access methods for
a specific entry in the matrix can manufacture a new column and put it in
the appropriate row as needed. Imagine if we had to deal with that in our
earlier examples! Here, hiding data is a pretty good idea.
The essential point of this example is that how we perceive the problem,
and in particular, how we represent the data, surely influence how we go
about solving the problem. In our sparse matrix example, the algorithm
for finding the first all-zero row just changes to looking for the first
skipped entry in the row list, which is a strictly O(M) operation, as
opposed to searching through the entire matrix, which is O(M*N). The salient
characteristic of the solution is that just because we are told that we
have an M x N matrix doesn't mean that it is represented in the canonical
way!
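To sketch the sparse case concretely (in Python rather than Ruby, with invented names; this is my illustration, not code from the thread): if only rows containing non-zero entries are stored, keyed by row number, then the first all-zero row is simply the first missing key:

```python
# Sketch (my own): a sparse matrix as a dict {row_number: {col: value}}
# holding only non-zero entries. The first all-zero row is the first gap
# in the row keys -- an O(M) scan that never touches the N columns.
def first_all_zero_row_sparse(nonempty_rows, m):
    for i in range(m):
        if i not in nonempty_rows:
            return i + 1    # 1-based, to match the earlier examples
    return 0                # every row has at least one non-zero entry

sparse = {0: {1: 2}, 2: {0: 5}}   # rows 0 and 2 non-zero; row 1 all zero
print(first_all_zero_row_sparse(sparse, 3))  # prints 2
```

The access-method details of the real Matrix object don't matter here; only the fact that absent rows are known to be empty.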
Re resp:15: I suspect the prohibition on using GOTO was never meant as an absolute, but rather as a restriction to encourage good habits in programming students. It's easy to fall into the trap of using GOTOs excessively as a crutch, instead of thinking through how the program should be structured.

Interestingly enough, the example of using GOTO to break out of a loop was often a no-no in BASIC, the language where GOTOs were unavoidable. Depending on your version of BASIC, it could lead to memory leaks or odd behavior in later loops. A lot of BASICs provided no good way to terminate a FOR loop early, and you were better off using a WHILE loop with some kind of flag variable in the test if you wanted to be able to break out of the loop.
Yet another reason not to program in BASIC? :-)
I believe that Dijkstra's position was that goto *should* be totally
eliminated. Here's his letter: http://www.acm.org/classics/oct95/. To
quote:
`... I became convinced that the go to statement should be abolished
from all "higher level" programming languages (i.e. everything except,
perhaps, plain machine code).'
However, later on he alludes to the idea that non-linear flow of control
constructs *are* reasonable, if packaged correctly. He makes an allusion to
what one might consider a prototypical idea of an exception:
`The remark about the undesirability of the go to statement is far from
new. I remember having read the explicit recommendation to restrict
the use of the go to statement to alarm exits, but I have not been able
to trace it; presumably, it has been made by C. A. R. Hoare. In [1,
Sec. 3.2.1.] Wirth and Hoare together make a remark in the same
direction in motivating the case construction: "Like the conditional,
it mirrors the dynamic structure of a program more clearly than go to
statements and switches, and it eliminates the need for introducing a
large number of labels in the program."'
Case statements are often implemented as `computed goto's' so it appears
that he's open to the *idea*, just not the *expression* in the form of the
too-blunt `goto' statement:
`The go to statement as it stands is just too primitive; it is too much
an invitation to make a mess of one's program.'
I wonder if the language designers have finally caught up to Dijkstra. Do
we still *need* goto, or are our languages sufficiently rich now, and our
dominant paradigms sufficiently expressive that we can make do without it?
It's interesting to me to note that the only places I have *ever* used gotos
are in older languages that lacked more modern constructs that would have
made them unnecessary.
The thought process involved in arriving at my solution in #22 is
perhaps more significant than the solution itself, since it
generalizes to other problems, so I'll elaborate on it a bit.
One starts by formulating the problem specification in the notation of
predicate logic and through a series of more-or-less systematic steps
arriving at an algorithm. This approach was pioneered by Edsger
Dijkstra in his book _A Discipline of Programming_.
I like 0-origin indexing these days, so I'll restate the problem as
follows: Given an m*n array of numbers A[0..m-1, 0..n-1] (m>0, n>0),
find the first all-zero row or determine that none exists.
Start by stating a formal *postcondition*, i.e. the condition that
should hold at the conclusion of the program:
(0 <= i <= m) and (1)
(for all p, 0 <= p < i, row A[p] is not all 0) and (2)
((A[i] is all 0) or (i = m)) (3)
Now, we can trivially establish (1) and (2) by setting i = 0.
(Note that (2) is "vacuously" true.) The hard part is to
establish (3). But if we weaken (3) by requiring only that a
trailing segment of A[i] be all 0, we get
(0 <= j <= n) and (A[i][j..n-1] is all 0) (3')
which is easily established by setting j = n. Moreover, if j > 0 and
A[i,j-1] = 0, then the assignment --j maintains the truth of (3'),
meaning that we can probably solve the problem with a loop that
proceeds by decrementing j until "done". Note that if j = 0, then
(3') implies (3) provided that i < m.
This analysis suggests (if you're familiar with Dijkstra's
methodology) writing a loop that manipulates two variables i and j,
whose initialization establishes (1) and (2) and (3'), whose body
maintains these conditions (i.e. is a "loop invariant") while making
progress toward termination, and which terminates when i = m (implying
that there's no all zero row) or j = 0. Here's the algorithm,
expressed in C syntax:
i = 0; j = n;
while (j != 0) {
    --j;
    if (A[i][j] != 0) {
        ++i;                /* this row not all 0, so go to the next */
        if (i == m) j = 0;  /* if i = m, no more rows */
        else j = n;         /* otherwise, begin scan of new row */
    }
}
if (i < m)
    intone("We have discovered the first all zero row!");
else
    intone("There are no all zero rows!");
(Hm, this time the solution came out a little different from either
Dan's or mine in #22, although it's in the same spirit.)
The technique of weakening a postcondition to obtain a condition that
is easily established and maintained (illustrated here in the
transformation of condition (3) to (3')) is very effective in
determining what variables will be useful and working out the details of
manipulating them. Dijkstra's book has many examples; see also David
Gries's _The Science of Programming_.
As an exercise, I'll pose the following related C-language problem:
Let s be an array declared as "char *s[m];", where each s[i] is known
to point to a null-terminated string. (The lengths of the strings are
not known and may vary with i.) Write a program fragment in C that
finds the first all-blank string (i.e. all characters are "whitespace"
as determined by the isspace() function) or determines that no such
string exists.
Interesting problem.... How about this solution (which is not optimally
efficient on several levels):
#include <ctype.h>
#include <stdio.h>

int
isallspace(char *s)
{
    while (*s)
        if (!isspace(*s++))
            return 0;
    return 1;
}

int
findallspace(char *arr[], int m)
{
    for (int i = 0; i < m; i++)
        if (isallspace(arr[i]))
            return i;
    return -1;
}

int
main(void)
{
    int d;

    d = findallspace(s, m);    /* s and m as declared in the problem statement */
    if (d < 0)
        printf("No all-whitespace element found!\n");
    else
        printf("The first all-whitespace string is at index %d\n", d);
    return 0;
}
I think what John wanted, though, was something along these lines:
int
findallspace(char *arr[], int m)
{
    char *p;
    int i;

    for (i = 0, p = arr[0]; *p; ) {
        if (isspace(*p)) {
            p++;            /* advance only past whitespace characters */
            continue;
        }
        i++;                /* non-blank character: reject this string */
        if (i == m) p = "";
        else p = arr[i];
    }
    if (i < m)
        printf("The first totally blank string is at index %d\n", i);
    else
        printf("There is no all-whitespace string in the set.\n");
    return i;
}
Though I claim that the first version might be better in a real program,
since the ``isallspace'' function can be reused in other instances.
Now, discuss the use of a for loop and continue here; to me, that is a
style improvement (and separate from the goto/no-goto debate).
I didn't "want" any particular style of solution, but was curious what people would come up with. It's true, though, that if you mimic closely the technique I described in #29, you come up with essentially the last solution in #30. I'm somewhat agnostic on the for-continue issue, but lean towards using the for loop in C when it fits the logic of the solution. I think it's a bit harder to recast my solution in #29 using for and continue.

But hey, as long as we're doing for loops, let's take a nice big swig of the C KoolAid, throw in conditional expressions, use the fact that C does short-circuited evaluation of &&, and solve the whole problem with a one-liner: for (i=0, p=s[0]; *p && (isspace(*p) ? ++p : (++i, p = (i<m) ? s[i] : "")); );
"Your honor! I object! That line of code should be taken out and shot!"
Re: error detection and goto's, having programmed a lot in a language without goto's (Pascal), I had to solve this by making an error state something checked for in loops, so as to allow for an early and orderly exit out of a loop. I think that was better than trying to use or rely on being able to just jump out to somewhere when the heat got turned up.
I've found that that was a far worse way to go: you ended up with much more
complex code for not a lot of benefit. Particularly when you were in
situations where you successively built up a number of resources that had to
be checked for and deleted along the way, not necessarily using loops, just
if's and the like. It was too easy to forget to release something somewhere
along the way. For instance, consider this code:
int
getresources(void)
{
    char *a, *b, *c, *d;
    Lock *o, *v, *w;

    a = malloc(SOMETHING);
    if (a == NULL)
        return FAILURE;
    o = acquire(some_lock);
    if (o == NULL) {    /* Ie, we failed to get the lock.... */
        free(a);
        return FAILURE;
    }
    b = malloc(SOMETHING_ELSE);
    if (b == NULL) {
        release(o);
        free(a);
        return FAILURE;
    }
    v = acquire(some_other_lock);
    if (v == NULL) {
        free(b);
        release(o);
        free(a);
        return FAILURE;
    }
    c = malloc(ANOTHER_THING);
    if (c == NULL) {
        release(v);
        free(b);
        release(o);
        free(a);
        return FAILURE;
    }
    w = acquire(a_third_lock);
    if (w == NULL) {
        free(c);
        release(v);
        free(b);
        release(o);
        free(a);
        return FAILURE;
    }
    d = malloc(THE_FINAL_THING);
    if (d == NULL) {
        release(w);
        free(c);
        release(v);
        free(b);
        release(o);
        free(a);
        return FAILURE;
    }
    /* Do something with the resources we got.... */
    return SUCCESS;
}
Well, what if I have to add another resource in there? Then I need to be
sure that I release it everywhere appropriate...it's easy to forget something,
and this is just a contrived example!
Okay, you say, you could just nest the if's. Let's see what the code looks
like:
int
getresources(void)
{
    char *a, *b, *c, *d;
    Lock *o, *v, *w;

    a = malloc(SOMETHING);
    if (a != NULL) {
        o = acquire(some_lock);
        if (o != NULL) {
            b = malloc(SOMETHING_ELSE);
            if (b != NULL) {
                v = acquire(some_other_lock);
                if (v != NULL) {
                    c = malloc(ANOTHER_THING);
                    if (c != NULL) {
                        w = acquire(a_third_lock);
                        if (w != NULL) {
                            d = malloc(THE_FINAL_THING);
                            if (d != NULL) {
                                /* Do something with our
                                   acquired resources... */
                                free(d);
                                return SUCCESS;
                            }
                            release(w);
                        }
                        free(c);
                    }
                    release(v);
                }
                free(b);
            }
            release(o);
        }
        free(a);
    }
    return FAILURE;
}
Well, we don't repeat the code to release the resources, which is good,
but man is that ugly. In particular, the (presumably important) code is
obscured because it's indented almost the entire way to the right! I
really don't like that. Of course, we could split it off into another
routine or something, but still.... And I guess one could argue that
visually, the code sort of `funnels' the programmer into the important
segment, but I'd rather have the main line of code flow vertically down
the page. Here's the solution with goto's emulating exceptions:
#define throw goto /* Maybe this will make the purists feel better... */

int
getresources(void)
{
    char *a, *b, *c, *d;
    Lock *o, *v, *w;

    b = c = d = NULL;
    o = v = w = NULL;
    a = malloc(SOMETHING);
    if (a == NULL)
        throw exception;
    o = acquire(some_lock);
    if (o == NULL)
        throw exception;
    b = malloc(SOMETHING_ELSE);
    if (b == NULL)
        throw exception;
    v = acquire(some_other_lock);
    if (v == NULL)
        throw exception;
    c = malloc(ANOTHER_THING);
    if (c == NULL)
        throw exception;
    w = acquire(a_third_lock);
    if (w == NULL)
        throw exception;
    d = malloc(THE_FINAL_THING);
    if (d == NULL)
        throw exception;
    /* Do something useful with our resources.... */
    return SUCCESS;

exception:
    if (d != NULL) free(d); /* Can't happen, but why not? */
    if (w != NULL) release(w);
    if (c != NULL) free(c);
    if (v != NULL) release(v);
    if (b != NULL) free(b);
    if (o != NULL) release(o);
    if (a != NULL) free(a);
    return FAILURE;
}
Here, the benefit is that the main logic (which would go where the, ``Do
something useful with our resources.....'' comment is) isn't obscured all
the way over to the right side; there is a slight risk in that the
programmer could theoretically not add a test in the exception handling code
to remove a new resource, but they might forget to free it in the code that
creates it, too. The only real guarantee that you get with the nested `if'
version is that resources are definitely released in reverse order of
allocation. But I think the additional risk is worth the extra readability.
I guess you could just reformat the nested if code so that everything is
sensibly indented, and all the closing braces are on one line or something,
but I think the exception model is better, and in newer languages, you might
get ordering guaranteed by the underlying implementation.
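For what it's worth, here is a sketch of what "ordering guaranteed by the underlying implementation" looks like. It's my own illustration in Python (the acquire/release names are stand-ins, not the C code above): contextlib.ExitStack unwinds registered cleanups in reverse order of acquisition, and only for the resources actually acquired, however the block exits:

```python
# Sketch (not from the thread): exception-safe cleanup with guaranteed
# reverse-order release, the pattern the goto version emulates by hand.
from contextlib import ExitStack

log = []

def acquire(name):          # stand-in for malloc()/acquire()
    log.append("acquire " + name)
    return name

def release(name):          # stand-in for free()/release()
    log.append("release " + name)

def getresources():
    with ExitStack() as stack:
        for name in ("a", "o", "b"):
            stack.callback(release, acquire(name))
        # ... do something with the resources; an exception raised
        # here still releases exactly what was acquired so far ...
    # on exit, cleanups ran in LIFO order: b, then o, then a

getresources()
print(log[-3:])  # prints ['release b', 'release o', 'release a']
```

The programmer can't forget to add a release to the handler, because registering the cleanup happens at the acquisition site.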
Re resp:33: Delphi got around that problem by adding exception handling to its Pascal-based language. (And goto, but you rarely needed it for anything.)
Re #32 re #31: I wouldn't presume to defend my one-liner on grounds of
readability or maintainability, of course. For that, I'd recast it the
way it was done in #30, or live a little more on the edge and do:
i = 0;
p = s[0];
while (*p)
    if (isspace(*p))
        ++p;
    else if ((++i) < m)
        p = s[i];
    else
        break;
/* If i < m, we have the first all-blank row, else there isn't one */
/* Look Ma, no curly braces! */
But they're all the same solution, really -- except for some language-
specific tweaking -- in that they use the same two variables
initialized, updated, and tested in exactly the same ways. The
predicate-logic based approach to algorithm development pioneered by
Dijkstra does tend to lead to economical, maintainable solutions, I've
found. Once you have such a solution, *then* you can concern yourself
with expressing it in a style appropriate to the programming language at
hand. As David Gries put it, you should program *into* a language, not
*in* a language.
Regarding #31; I realized that. #32 was a joke. That said, my question still stands: Dijkstra was working in an environment where the primitive tools consisted of loops, conditions, etc. We now have a richer vocabulary in a variety of languages; does that change our cognitive approach to the problem? That is, how does one's expressive capability - independent of specific language - affect the way in which one applies the predicate-logic based approach?
Of course #32 was a joke. I even laughed. But the question you raise
in #37 is interesting. I haven't thought about it all that much, but I
imagine that if the language departs significantly from the "Von Neumann
machine" model - e.g. a functional programming language - it probably
would.
However, doesn't programming in most of today's widely-used languages
ultimately come down to manipulating variables with loops, conditionals,
etc? I'm thinking of languages like Java, JavaScript, Perl, Python,
Ruby. For those, I'd think the predicate-logic approach would be pretty
much the same.
I'll pose the following exercise:
foreach language in (Perl, Java, JavaScript, Ruby, Python, etc)
{
    Rephrase the "first all-blank string" problem in a form
    sensible for that language;
    Solve it;
}
(Rephrasing is necessary since different languages have different
representations and operators for strings.)
Regarding #38; (I know that you know that I know that you knew that I was joking... :-)). I have solutions for Perl, Python, and Ruby. But I'm going to hold off on posting them for a few days to give other folks a chance (I've been responding too much too quickly; where's the fun in that? A conversation between two people is boring at best).

As for the primitives we employ, I'd say no, at least not at the level the programmer should be concerned about. Today, in a number of languages, regular expressions are first-class objects, as are exceptions, classes with behavior that can be modified at runtime, etc. Our vocabulary of constructs has greatly expanded, and we're working at a level far above that first projected by JvN et al. I'd say that the method can still work, but we've got to recognize that we can add new primitives such as ``matches this pattern'' or ``responds to this method'' to our set.
- Backtalk version 1.3.30 - Copyright 1996-2006, Jan Wolter and Steve Weiss