| 25 new of 45 responses total. |
nharmon
|
|
response 18 of 45:
|
Jan 26 20:18 UTC 2007 |
I'm not exactly sure if this gives you the correct answer. But I put the
join in there because MySQL does not allow aggregate functions in the
WHERE expression.
|
albaugh
|
|
response 19 of 45:
|
Jan 26 20:22 UTC 2007 |
My opinion is that in general goto's should not be employed, but there are
always exceptions. Certainly generous use of goto's can make code
unmaintainable (especially for the inheritors). Almost never should a goto
be used to exit from the middle of a block to some other different place, or
used to jump into the middle of a block. Something like a "break", which does
a "goto" to the end of a block is not too hazardous.
|
cross
|
|
response 20 of 45:
|
Jan 26 20:39 UTC 2007 |
Interesting. What about exception handling? If I understand your prohibition
about jumping *out* of the middle of a block, that would rule out jumping out
of nested loops, no? I actually think that should be allowed, since the
alternatives often involve flag variables and lots of additional checks. Ick!
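For what it's worth, the two alternatives can be put side by side; here's a small Python sketch (the function and data names are invented for illustration) contrasting the flag-variable exit with simply wrapping the loops in a function and returning:

```python
# Two ways to escape a nested loop without goto (illustrative sketch;
# find_target_* and GRID are made-up names).

GRID = [[1, 2], [3, 4], [5, 6]]

def find_target_flag(grid, target):
    # Flag variable plus an extra check on every outer iteration.
    found = None
    done = False
    for i, row in enumerate(grid):
        if done:
            break
        for j, value in enumerate(row):
            if value == target:
                found = (i, j)
                done = True
                break
    return found

def find_target_return(grid, target):
    # Wrap the loops in a function and just return; no flag needed.
    for i, row in enumerate(grid):
        for j, value in enumerate(row):
            if value == target:
                return (i, j)
    return None

print(find_target_flag(GRID, 4))    # (1, 1)
print(find_target_return(GRID, 4))  # (1, 1)
```

The second version is what the nested-loop break debate usually resolves to when goto is off the table.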
|
easlern
|
|
response 21 of 45:
|
Jan 26 21:55 UTC 2007 |
I don't know any Pascal so I'm not sure if it qualifies, but here's some
Python that only uses a while:
matrix = list()
matrix.append ([1,0,0])
matrix.append ([0,0,1])
matrix.append ([0,0,0])
matrix.append ([1,1,1])
matrix.append ([0,1,0])

def blah (matrix):
    m = len (matrix)
    n = len (matrix [0])
    row = 0
    column = 0
    while (row < m):
        if (matrix [row][column] != 0):
            row += 1
            column = 0
        else:
            column += 1
            if (column == n):
                print row
                row = m

blah (matrix)
|
cross
|
|
response 22 of 45:
|
Jan 26 22:26 UTC 2007 |
Excellent. That is very similar to the method that Remmers used on M-Net
(20 years ago, in Pascal). Here is his solution:
From Remmers (excerpt copied without permission):
----
Let (i,j) be the current (row,column) position. Assume that
all rows prior to the ith have been "rejected" (contain at least
one non-zero element) and that in the ith row, all columns
after the jth have been "accepted" (are all zero). We can
make progress toward termination by updating i and j as follows:
If x[i,j] = 0, then decrement j; if x[i,j] <> 0, then increment
i and set j to n. Termination occurs when we accept an entire
row (j = 0) or reject the nth row. (If we use j = -1 to flag
this, the loop test is simply "j > 0".)
This leads directly to the algorithm:
i := 1; j := n;
while j > 0 do
  if x[i,j] = 0 then j := j-1
  else if i = n then j := -1
  else begin i := i+1; j := n end;
if j = 0 then writeln('1st all-zero row is ',i)
----
Whereas he tested against the column index, here you test against the
row index, but the idea is the same. You could avoid the second if
statement inside the loop by using an elif (observe that, if
matrix[row][column] != 0, then we reset column to 0 and we can never
get the case where column == n; that can only happen after we increment
column, and we needn't increment column if that's the case). So,
we get the following modification:
def findnonzero(matrix):
    m = len (matrix)
    n = len (matrix [0])
    k = n - 1
    row = 0
    column = 0
    while (row < m):
        if (matrix [row][column] != 0):
            row += 1
            column = 0
        elif (column == k):
            print row
            row = m
        else:
            column += 1
I'm not too stoked about the appearance of the temporary k, but that is
just there to deal with the fact that Python arrays are also origin 0, and
we want to (explicitly) avoid the cost of calculating either (column + 1)
or (n - 1) in the conditional (though the python interpreter is probably
smart enough to do that for us).
I claim this code is equivalent to Remmers's. Clearly, it operates from
the point of view (if one will) of the row instead of the column. Whether
it will be faster, however, depends on whether non-zero elements appear
frequently in the matrix or not.
|
cross
|
|
response 23 of 45:
|
Jan 26 22:34 UTC 2007 |
Er, where I say ``we needn't increment column if that's the case,'' I mean
the case where we've reset column to 0. Sorry, that was ... unclear.
|
albaugh
|
|
response 24 of 45:
|
Jan 26 23:22 UTC 2007 |
"error traps" are not goto's in the conventional sense - the code running at
the time of an exception didn't want to abort and wildly jump somewhere else
to execute something different.
|
cross
|
|
response 25 of 45:
|
Jan 26 23:27 UTC 2007 |
I'm not talking specifically about error traps in that sense. I meant
exception handling in the more general sense: I encounter some sort of
exceptional condition, and want to deal with it, which might involve
breaking out of a nested loop structure or something of that nature. What
about goto's in that context? E.g., I have a block of code that includes
some conditional that detects something bad has happened; is it okay to
`goto' out of it? Maybe after doing some cleanup? Or perhaps you mean goto
out of the middle of a block meaning to execute a goto while I'm in the
middle of some processing, with the expectation that I'll execute another
goto back into the block?
|
cross
|
|
response 26 of 45:
|
Jan 27 00:10 UTC 2007 |
Something occurred to me about this problem: there is an almost unstated bias
that we know about the representation of the matrix before we dive into it.
Modern programming paradigms, most specifically, Object Oriented Programming,
have invalidated this assumption. Sure, we know that we have an M x N matrix,
but we really have no guarantee that we know how that's represented
internally. Is that assumption still valid?
For instance, we might now have a system where the matrix is represented as
a `Matrix object'. That might in turn be something like a list of `Row'
objects. Perhaps we don't have direct access to the rows and columns of the
matrix itself; so what do we do? Maybe we have to call a method on one of
the objects in order to solve the problem.... For instance, consider the
following code, written in Ruby. It represents the matrix as a list of objects
of type Row, each of which in turn contains an array of some type of data.
The Row objects expose a method to determine whether they consist of all 0's
or not, building on the underlying Array object's `detect' method which finds
the first element in the array for which some block of code evaluates to
`True.' No goto's there, just using what the system already provides for us.
The Matrix object then exposes a method, ``first_all_zero'' that finds the
first row consisting of all zero's. Again, it builds on the underlying
Array object's `detect' method, only this time, the block we pass calls a
method on the row. Again, no goto's.
#
# -+=Ruby=+-
#
class Row
  def initialize(list)
    @list = list
  end

  def is_all_zero
    not @list.detect { |column| column != 0 }
  end
end

class Matrix
  def initialize(rows)
    @rows = rows
  end

  def first_all_zero
    @rows.detect { |row| row.is_all_zero }
  end
end

matrix = Matrix.new([
  Row.new([1, 2, 3]),
  Row.new([0, 0, 0]),
  Row.new([3, 2, 1])])
p matrix.first_all_zero
(Of course, all the error checking you would expect on the data is missing.)
This code doesn't return an index (such a thing would be meaningless), but
instead returns a Row object (granted, in this case, the Row object itself
is pretty boring, but imagine it was more fleshed out, with a way to get at
the column entries, etc).
But the overall idea is sound. Not only do we not have to worry about
goto's, but we also don't have to worry about the actual representation of the
matrix. This can be useful; for instance, suppose we have a very large
matrix, but which is mostly sparse. We might then decide to represent
the matrix as a list of Row's, where each Row also contains its own row
number (so that we don't waste space storing empty rows), and Rows might be
represented as a list of Columns (which, again, contain their column number,
so that we don't have to store empty columns). Then the access methods for
a specific entry in the matrix can manufacture a new column and put it in
the appropriate row as needed. Imagine if we had to deal with that in our
earlier examples! Here, hiding data is a pretty good idea.
The essential point of this example is that how we perceive the problem,
and in particular, how we represent the data, surely influence how we go
about solving the problem. In our sparse matrix example, the algorithm
for finding the first all-zero row just changes to looking for the first
skipped entry in the row list, which is a strictly O(M) operation, as
opposed to searching through the entire matrix, which is O(M*N). The salient
characteristic of the solution is that just because we are told that we
have an M x N matrix doesn't mean that it is represented in the canonical
way!
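To make the sparse-row point concrete, here is a rough Python sketch (the dict-of-dicts representation and all the names are invented for illustration, not anything from the post above): storing only non-empty rows makes "first all-zero row" a scan of the row keys, O(M), without ever touching the entries.

```python
# Sparse matrix as {row_index: {col_index: value}}; absent rows are all zero.
# (Hypothetical representation, purely for illustration.)
class SparseMatrix:
    def __init__(self, m, n, entries):
        self.m = m
        self.n = n
        self.rows = {}
        for (i, j), v in entries.items():
            if v != 0:                      # never store explicit zeros
                self.rows.setdefault(i, {})[j] = v

    def first_all_zero_row(self):
        # The first index missing from self.rows is the first all-zero
        # row: O(m) in the number of rows, never O(m*n).
        for i in range(self.m):
            if i not in self.rows:
                return i
        return None

m = SparseMatrix(4, 3, {(0, 0): 1, (1, 2): 5, (3, 1): 7})
print(m.first_all_zero_row())  # -> 2 (row 2 stores no entries)
```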
|
gull
|
|
response 27 of 45:
|
Jan 27 22:01 UTC 2007 |
Re resp:15: I suspect the prohibition on using GOTO was never meant as
an absolute, but rather as a restriction to encourage good habits in
programming students. It's easy to fall into the trap of using GOTOs
excessively as a crutch, instead of thinking through how the program
should be structured.
Interestingly enough, the example of using GOTO to break out of a loop
was often a no-no in BASIC, the language where GOTOs were unavoidable.
Depending on your version of BASIC, it could lead to memory leaks or odd
behavior in later loops. A lot of BASICs provided no good way to
terminate a FOR loop early, and you were better off using a WHILE loop
with some kind of flag variable in the test if you wanted to be able to
break out of the loop.
|
cross
|
|
response 28 of 45:
|
Jan 28 00:01 UTC 2007 |
Yet another reason not to program in BASIC? :-)
I believe that Dijkstra's position was that goto *should* be totally
eliminated. Here's his letter: http://www.acm.org/classics/oct95/. To
quote:
`... I became convinced that the go to statement should be abolished
from all "higher level" programming languages (i.e. everything except,
perhaps, plain machine code).'
However, later on he alludes to the idea that non-linear flow of control
constructs *are* reasonable, if packaged correctly. He makes an allusion to
what one might consider a prototypical idea of an exception:
`The remark about the undesirability of the go to statement is far from
new. I remember having read the explicit recommendation to restrict
the use of the go to statement to alarm exits, but I have not been able
to trace it; presumably, it has been made by C. A. R. Hoare. In [1,
Sec. 3.2.1.] Wirth and Hoare together make a remark in the same
direction in motivating the case construction: "Like the conditional,
it mirrors the dynamic structure of a program more clearly than go to
statements and switches, and it eliminates the need for introducing a
large number of labels in the program."'
Case statements are often implemented as `computed goto's' so it appears
that he's open to the *idea*, just not the *expression* in the form of the
too-blunt `goto' statement:
`The go to statement as it stands is just too primitive; it is too much
an invitation to make a mess of one's program.'
I wonder if the language designers have finally caught up to Dijkstra. Do
we still *need* goto, or are our languages sufficiently rich now, and our
dominant paradigms sufficiently expressive that we can make do without it?
It's interesting to me to note that the only places I have *ever* used gotos
are in older languages that lacked more modern constructs that would have
made them unnecessary.
|
remmers
|
|
response 29 of 45:
|
Jan 31 12:38 UTC 2007 |
The thought process involved in arriving at my solution in #22 is
perhaps more significant than the solution itself, since it
generalizes to other problems, so I'll elaborate on it a bit.
One starts by formulating the problem specification in the notation of
predicate logic and, through a series of more-or-less systematic steps,
arrives at an algorithm. This approach was pioneered by Edsger
Dijkstra in his book _A Discipline of Programming_.
I like 0-origin indexing these days, so I'll restate the problem as
follows: Given an m*n array of numbers A[0..m-1, 0..n-1] (m>0, n>0),
find the first all-zero row or determine that none exists.
Start by stating a formal *postcondition*, i.e. the condition that
should hold at the conclusion of the program:
(0 <= i <= m) and (1)
(for all p, 0 <= p < i, row A[p] is not all 0) and (2)
((A[i] is all 0) or (i = m)) (3)
Now, we can trivially establish (1) and (2) by setting i = 0.
(Note that (2) is "vacuously" true.) The hard part is to
establish (3). But if we weaken (3) by requiring only that a
trailing segment of A[i] be all 0, we get
(0 <= j <= n) and (A[i][j..n-1] is all 0) (3')
which is easily established by setting j = n. Moreover, if j > 0 and
A[i,j-1] = 0, then the assignment --j maintains the truth of (3'),
meaning that we can probably solve the problem with a loop that
proceeds by decrementing j until "done". Note that if j = 0, then
(3') implies (3) provided that i < m.
This analysis suggests (if you're familiar with Dijkstra's
methodology) writing a loop that manipulates two variables i and j,
whose initialization establishes (1) and (2) and (3'), whose body
maintains these conditions (i.e. is a "loop invariant") while making
progress toward termination, and which terminates when i = m (implying
that there's no all zero row) or j = 0. Here's the algorithm,
expressed in C syntax:
i = 0; j = n;
while (j != 0) {
    --j;
    if (A[i][j] != 0) {
        ++i;                /* this row not all 0, so go to the next */
        if (i == m) j = 0;  /* if i = m, no more rows */
        else j = n;         /* otherwise, begin scan of new row */
    }
}
if (i < m)
    intone("We have discovered the first all zero row!");
else
    intone("There are no all zero rows!");
(Hm, this time the solution came out a little different from either
Dan's or mine in #22, although it's in the same spirit.)
The technique of weakening a postcondition to obtain a condition that
is easily established and maintained (illustrated here in the
transformation of condition (3) to (3')) is very effective in
determining what variables will be useful and working out the details of
manipulating them. Dijkstra's book has many examples; see also David
Gries's _The Science of Programming_.
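The invariant can even be made executable. Here is the same algorithm transcribed into Python (the transcription and the test data are illustrative, not part of the original post), with conditions (1), (2), and (3') checked by assertions at the top of every iteration:

```python
# John's algorithm with the loop invariant made executable as assertions.
# (The asserts would be dropped in real code; they are here to show the
#  invariant actually holding on every iteration.)
def first_all_zero_row(A, m, n):
    i, j = 0, n
    while j != 0:
        # (1): bounds; (2): rows 0..i-1 each contain a non-zero;
        # (3'): A[i][j..n-1] is all zero.
        assert 0 <= i < m and 0 <= j <= n
        assert all(any(x != 0 for x in A[p]) for p in range(i))
        assert all(A[i][q] == 0 for q in range(j, n))
        j -= 1
        if A[i][j] != 0:
            i += 1
            if i == m:
                j = 0       # no more rows: force termination
            else:
                j = n       # begin scan of the next row
    return i if i < m else None

A = [[1, 0], [0, 0], [1, 1]]
print(first_all_zero_row(A, 3, 2))  # -> 1, the first all-zero row
```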
As an exercise, I'll pose the following related C-language problem:
Let s be an array declared as "char *s[m];", where each s[i] is known
to point to a null-terminated string. (The lengths of the strings are
not known and may vary with i.) Write a program fragment in C that
finds the first all-blank string (i.e. all characters are "whitespace"
as determined by the isspace() function) or determines that no such
string exists.
|
cross
|
|
response 30 of 45:
|
Jan 31 15:51 UTC 2007 |
Interesting problem.... How about this solution (which is not optimally
efficient on several levels):
#include <ctype.h>
#include <stdio.h>

int
isallspace(char *s)
{
    while (*s)
        if (!isspace(*s++))
            return 0;
    return 1;
}

int
findallspace(char *arr[], int m)
{
    for (int i = 0; i < m; i++)
        if (isallspace(arr[i]))
            return i;
    return -1;
}

int
main(void)
{
    int d;

    /* s and m as declared in the problem statement */
    d = findallspace(s, m);
    if (d < 0)
        printf("No all-whitespace element found!\n");
    else
        printf("The first all-whitespace string is at index %d\n", d);
    return 0;
}
I think what John wanted, though, was something along these lines:
int
findallspace(char *arr[], int m)
{
    char *p;
    int i;

    for (i = 0, p = arr[i]; *p; ) {
        if (isspace(*p)) {
            p++;
            continue;
        }
        i++;
        if (i == m) p = "";
        else p = arr[i];
    }
    if (i < m)
        printf("The first totally blank string is at index %d\n", i);
    else
        printf("There is no all-whitespace string in the set.\n");
    return(i);
}
Though I claim that the first version might be better in a real program,
since the ``isallspace'' function can be reused in other instances.
Now, discuss the use of a for loop and continue here; to me, that is a
style improvement (and separate from the goto/no-goto debate).
|
remmers
|
|
response 31 of 45:
|
Jan 31 18:32 UTC 2007 |
I didn't "want" any particular style of solution, but was curious what
people would come up with. It's true, though, that if you mimic closely
the technique I described in #29, you come up with essentially the last
solution in #30.
I'm somewhat agnostic on the for-continue issue, but lean towards using
the for loop in C when it fits the logic of the solution. I think it's
a bit harder to recast my solution in #29 using for and continue.
But hey, as long as we're doing for loops, let's take a nice big swig of
the C KoolAid, throw in conditional expressions, use the fact that C
does short-circuited evaluation of &&, and solve the whole problem with
a one-liner:
for (i=0, p=s[0]; *p && (isspace(*p) ? ++p : (++i, p = (i<m) ? s[i] : "")); );
|
cross
|
|
response 32 of 45:
|
Jan 31 20:11 UTC 2007 |
"Your honor! I object! That line of code should be taken out and shot!"
|
albaugh
|
|
response 33 of 45:
|
Jan 31 20:44 UTC 2007 |
Re: error detection and goto's, having programmed a lot in a language without
goto's (Pascal), I had to solve this by making an error state something
checked for in loops, so as to allow for an early and orderly exit out of a
loop. I think that was better than trying to use or rely on being able to
just jump out to somewhere when the heat got turned up.
|
cross
|
|
response 34 of 45:
|
Jan 31 21:16 UTC 2007 |
I've found that that was a far worse way to go: you ended up with much more
complex code for not a lot of benefit. Particularly when you were in
situations where you successively built up a number of resources that had to
be checked for and deleted along the way, not necessarily using loops, just
if's and the like. It was too easy to forget to release something somewhere
along the way. For instance, consider this code:
int
getresources(void)
{
    char *a, *b, *c, *d;
    Lock *o, *v, *w;

    a = malloc(SOMETHING);
    if (a == NULL)
        return FAILURE;
    o = acquire(some_lock);
    if (o == NULL) {        /* Ie, we failed to get the lock.... */
        free(a);
        return FAILURE;
    }
    b = malloc(SOMETHING_ELSE);
    if (b == NULL) {
        release(o);
        free(a);
        return FAILURE;
    }
    v = acquire(some_other_lock);
    if (v == NULL) {
        free(b);
        release(o);
        free(a);
        return FAILURE;
    }
    c = malloc(ANOTHER_THING);
    if (c == NULL) {
        release(v);
        free(b);
        release(o);
        free(a);
        return FAILURE;
    }
    w = acquire(a_third_lock);
    if (w == NULL) {
        free(c);
        release(v);
        free(b);
        release(o);
        free(a);
        return FAILURE;
    }
    d = malloc(THE_FINAL_THING);
    if (d == NULL) {
        release(w);
        free(c);
        release(v);
        free(b);
        release(o);
        free(a);
        return FAILURE;
    }
    /* Do something with the resources we got.... */
    return SUCCESS;
}
Well, what if I have to add another resource in there? Then I need to be
sure that I release it everywhere appropriate...it's easy to forget something,
and this is just a contrived example!
Okay, you say, you could just nest the if's. Let's see what the code looks
like:
int
getresources(void)
{
    char *a, *b, *c, *d;
    Lock *o, *v, *w;

    a = malloc(SOMETHING);
    if (a != NULL) {
        o = acquire(some_lock);
        if (o != NULL) {
            b = malloc(SOMETHING_ELSE);
            if (b != NULL) {
                v = acquire(some_other_lock);
                if (v != NULL) {
                    c = malloc(ANOTHER_THING);
                    if (c != NULL) {
                        w = acquire(a_third_lock);
                        if (w != NULL) {
                            d = malloc(THE_FINAL_THING);
                            if (d != NULL) {
                                /* Do something with our
                                   acquired resources... */
                                free(d);
                                return SUCCESS;
                            }
                            release(w);
                        }
                        free(c);
                    }
                    release(v);
                }
                free(b);
            }
            release(o);
        }
        free(a);
    }
    return FAILURE;
}
Well, we don't repeat the code to release the resources, which is good,
but man is that ugly. In particular, the (presumably important) code is
obscured because it's indented almost the entire way to the right! I
really don't like that. Of course, we could split it off into another
routine or something, but still.... And I guess one could argue that
visually, the code sort of `funnels' the programmer into the important
segment, but I'd rather have the main line of code flow vertically down
the page. Here's the solution with goto's emulating exceptions:
#define throw goto    /* Maybe this will make the purists feel better... */

int
getresources(void)
{
    char *a, *b, *c, *d;
    Lock *o, *v, *w;

    b = c = d = NULL;
    o = v = w = NULL;
    a = malloc(SOMETHING);
    if (a == NULL)
        throw exception;
    o = acquire(some_lock);
    if (o == NULL)
        throw exception;
    b = malloc(SOMETHING_ELSE);
    if (b == NULL)
        throw exception;
    v = acquire(some_other_lock);
    if (v == NULL)
        throw exception;
    c = malloc(ANOTHER_THING);
    if (c == NULL)
        throw exception;
    w = acquire(a_third_lock);
    if (w == NULL)
        throw exception;
    d = malloc(THE_FINAL_THING);
    if (d == NULL)
        throw exception;
    /* Do something useful with our resources.... */
    return SUCCESS;

exception:
    if (d != NULL) free(d);    /* Can't happen, but why not? */
    if (w != NULL) release(w);
    if (c != NULL) free(c);
    if (v != NULL) release(v);
    if (b != NULL) free(b);
    if (o != NULL) release(o);
    if (a != NULL) free(a);
    return FAILURE;
}
Here, the benefit is that the main logic (which would go where the, ``Do
something useful with our resources.....'' comment is) isn't obscured all
the way over to the right side; there is a slight risk in that the
programmer could theoretically not add a test in the exception handling code
to remove a new resource, but they might forget to free it in the code that
creates it, too. The only real guarantee that you get with the nested `if'
version is that resources are definitely released in reverse order of
allocation. But I think the additional risk is worth the extra readability.
I guess you could just reformat the nested if code so that everything is
sensibly indented, and all the closing braces are on one line or something,
but I think the exception model is better, and in newer languages, you might
get ordering guaranteed by the underlying implementation.
|
gull
|
|
response 35 of 45:
|
Feb 3 21:37 UTC 2007 |
Re resp:33: Delphi got around that problem by adding exception handling
to its Pascal-based language. (And goto, but you rarely needed it for
anything.)
|
remmers
|
|
response 36 of 45:
|
Feb 5 13:27 UTC 2007 |
Re #32 re #31: I wouldn't presume to defend my one-liner on grounds of
readability or maintainability, of course. For that, I'd recast it the
way it was done in #30, or live a little more on the edge and do:
i = 0;
p = s[0];
while (*p)
    if (isspace(*p))
        ++p;
    else if ((++i) < m)
        p = s[i];
    else
        break;
/* If i < m, we have the first all-blank row, else there isn't one */
/* Look Ma, no curly braces! */
But they're all the same solution, really -- except for some language-
specific tweaking -- in that they use the same two variables
initialized, updated, and tested in exactly the same ways. The
predicate-logic based approach to algorithm development pioneered by
Dijkstra does tend to lead to economical, maintainable solutions, I've
found. Once you have such a solution, *then* you can concern yourself
with expressing it in a style appropriate to the programming language at
hand. As David Gries put it, you should program *into* a language, not
*in* a language.
|
cross
|
|
response 37 of 45:
|
Feb 7 18:59 UTC 2007 |
Regarding #31; I realized that. #32 was a joke. That said, my question still
stands: Dijkstra was working in an environment where the primitive tools
consisted of loops, conditions, etc. We now have a richer vocabulary in a
variety of languages; does that change our cognitive approach to the problem?
That is, how does one's expressive capability - independent of specific
language - affect the way in which one applies the predicate-logic based
approach?
|
remmers
|
|
response 38 of 45:
|
Feb 7 22:30 UTC 2007 |
Of course #32 was a joke. I even laughed. But the question you raise
in #37 is interesting. I haven't thought about it all that much, but I
imagine that if the language departs significantly from the "Von Neumann
machine" model - e.g. a functional programming language - it probably
would.
However, doesn't programming in most of today's widely-used languages
ultimately come down to manipulating variables with loops, conditionals,
etc? I'm thinking of languages like Java, JavaScript, Perl, Python,
Ruby. For those, I'd think the predicate-logic approach would be pretty
much the same.
I'll pose the following exercise:
foreach language in (Perl, Java, JavaScript, Ruby, Python, etc)
{
Rephrase the "first all-blank string" problem in a form
sensible for that language;
Solve it;
}
(Rephrasing is necessary since different languages have different
representations and operators for strings.)
|
cross
|
|
response 39 of 45:
|
Feb 8 03:26 UTC 2007 |
Regarding #38; (I know that you know that I know that you knew that I was
joking... :-)).
I have solutions for Perl, Python, and Ruby. But I'm going to hold off on
posting them for a few days to give other folks a chance (I've been
responding too much too quickly; where's the fun in that? A conversation
between two people is boring at best).
As for the primitives we employ, I'd say no, at least not at the level the
programmer should be concerned about. Today, in a number of languages,
regular expressions are now first class objects, as are exceptions,
classes with behavior that can be modified at runtime, etc. Our vocabulary
of constructs has greatly expanded, and we're working at a level far above
that first projected by JvN etc. I'd say that the method can still work,
but we've got to recognize that we can add new primitives such as, ``matches
this pattern'' or ``responds to this method'' to our set.
|
cross
|
|
response 40 of 45:
|
Feb 11 21:14 UTC 2007 |
Okay, no one has responded in the last five days, so here are my solutions
to Remmers's puzzle for Ruby, Python and Perl. Critique away.
# -+=Ruby=+-
def find1stblank(strings)
  strings.detect { |str| str =~ /^\s+$/ }
end

strings = [ "this", " is ", "a", " ", "string" ]
p find1stblank(strings)

strings = [ "this", " is ", "a", "string" ]
p find1stblank(strings)
# -+=Python=+-
def find1stblank(strings):
    for i in range(len(strings)):
        if strings[i].isspace():
            return i
    return -1

strings = [ 'This is a test', ' ', 'hi there' ]
print(find1stblank(strings))

strings = [ 'This ', 'is ', 'another ', 'test', ' ' ]
print(find1stblank(strings))

strings = [ 'This ', 'is ', 'another ', 'test' ]
print(find1stblank(strings))
# -+=Perl=+-
use warnings;
use strict;

sub find1stblank
{
    for (my $i = 0; $i < @_; $i++) {
        return $i if $_[$i] =~ /^\s+$/;
    }
    return -1;
}

my @lines = ("This", "is ", " ", "a", "test");
print find1stblank(@lines), "\n";

@lines = ("This", "is ", "another ", "test", " ");
print find1stblank(@lines), "\n";

@lines = ("This", "is ", " not ", "a", "test");
print find1stblank(@lines), "\n";
Note that for the Ruby code, I take a more Ruby-ish approach, returning a
reference to the row in question instead of just an index.
Note also that these solutions take advantage of facilities of the language
that are beyond that of our more primitive examples (e.g., in Ruby and Perl,
we have regular expressions as a basic idea, whereas in Python we can ask the
string whether it's blank).
Two related questions: is taking advantage of these language features too much
of, ``programming in a language'' instead of ``programming into a language''?
That is, can we legitimately frame the question in terms of regular
expressions and objects at a very high level, or is that ``cheating''? The
second question, even if it *is* cheating, do we care? Should we?
|
cross
|
|
response 41 of 45:
|
Feb 14 23:02 UTC 2007 |
Another day, and no additional posts. Okay, here's the Java version:
/**
 * Test of something for John Remmers.
 */
public class Find1stBlank {
    /**
     * Find the first all-blank string in the passed array.
     */
    public static int Find1stBlank(String[] strings) {
        for (int i = 0; i < strings.length; i++)
            if (strings[i].matches("^\\s+$"))
                return(i);
        return(-1);
    }

    /**
     * Main program entry point.
     */
    public static void main(String[] args) {
        String[] strings = { "This ", " is", "a", "string" };
        System.out.println("Find1stBlank == " + Find1stBlank(strings));

        String[] strs = { " But this ", " is ", " ", " a blank." };
        System.out.println("Find1stBlank == " + Find1stBlank(strs));
    }
}
|
remmers
|
|
response 42 of 45:
|
Apr 8 11:31 UTC 2007 |
Responses #40 and #41 illustrate the effect of language primitives (in
this case, regular expressions) on the shape of a solution.
(Aside: This item has showed up as a shared bookmark on del.icio.us.
Have a look at http://del.icio.us/tag/grex/ and
http://del.icio.us/url/93a9640a15e68aef7b1d7837f006cc01 ...)
|