Confused about testing? Don't know the difference between unit level, systems, and integration testing? Want to know more about unit testing frameworks or how to apply them? Ask here!
4 responses total.
Yeah, how do you do that stuff?
First, you locate and install a program which provides an integrated development and testing environment, such as Eclipse. Then you locate and install the modules for whatever language you're working in. Then you spend a vast amount of time trying to properly configure the environment and modules to suit the needs of your project, programming style, available screen real estate, network structure, hardware systems, etc. Then you start writing code (or tests, depending on whether you prefer test-driven development or code-driven testing) and testing it. Voila!
You could do that. It needn't be so elaborate. I found and modified
a minimal testing suite for C once that was about six lines of preprocessor
macros, and I started using it for grexsoft. Here's my current "myunit.h"
that I use for testing C code under Unix:
/*
 * My own simple unit-testing framework.  A simplified version
 * of the minunit used on grex.
 */
#include <stdio.h>

/* Test counters; the test program must define these. */
extern int myu_ntests;
extern int myu_nfailed;

/* Reset the counters before running any tests. */
#define myuinit() \
    do { \
        myu_ntests = 0; \
        myu_nfailed = 0; \
    } while (0)

/*
 * Assert a condition inside a test function.  On failure, print a
 * printf-style message to stderr and return non-zero from the test.
 */
#define myuassert(test, ...) \
    do { \
        int r = (test); \
        if (!r) { \
            (void)fprintf(stderr, "ERROR: "); \
            (void)fprintf(stderr, __VA_ARGS__); \
            (void)fprintf(stderr, "\n"); \
            return (!r); \
        } \
    } while (0)

/* Run one test function with the given arguments, recording the result. */
#define myuruntest(test, ...) \
    do { \
        int r = test(__VA_ARGS__); \
        myu_ntests++; \
        if (r) \
            myu_nfailed++; \
    } while (0)

/* Run a suite: a function that itself calls myuruntest(). */
#define myurunsuite(test) \
    do { \
        test(); \
    } while (0)

/* Print a summary (the percentage is NaN if no tests were run). */
#define myureport() \
    do { \
        (void)printf("Tests run: %d, failed: %d (%2.2f%%).\n", \
            myu_ntests, myu_nfailed, \
            (float)myu_nfailed / (float)myu_ntests * 100.0); \
    } while (0)
Here's an example of its usage:
/*
 * Test bit vector code.
 *
 * $Id: bitvec_test.c,v 1.2 2005/06/03 19:22:46 cross Exp $
 *
 * Dan Cross <cross@math.psu.edu>
 */
#include "bitvec.h"
#include "myunit.h"

/* Definitions of the counters declared extern in myunit.h. */
int myu_ntests, myu_nfailed;

int
test_bv_get(BITVEC_T *bp, int pos, int expected)
{
    myuassert(bv_get(bp, pos) == expected, "bv_get(bp, %d) != %d",
        pos, expected);
    return 0;
}

int
main(void)
{
    BITVEC_T bv;

    myuinit();

    /* Initialize and free vectors of assorted sizes. */
    bv_init(&bv, 12);
    bv_free(&bv);
    bv_init(&bv, 33);
    bv_free(&bv);
    bv_init(&bv, 32);

    /* Set some bits and check both set and unset positions. */
    bv_set(&bv, 0);
    bv_set(&bv, 1);
    bv_set(&bv, 2);
    myuruntest(test_bv_get, &bv, 0, 1);
    myuruntest(test_bv_get, &bv, 1, 1);
    myuruntest(test_bv_get, &bv, 2, 1);
    myuruntest(test_bv_get, &bv, 7, 0);
    myuruntest(test_bv_get, &bv, 8, 0);

    /* Clear a bit and re-check. */
    bv_clr(&bv, 2);
    myuruntest(test_bv_get, &bv, 0, 1);
    myuruntest(test_bv_get, &bv, 1, 1);
    myuruntest(test_bv_get, &bv, 2, 0);

    /* An out-of-range position should yield -1. */
    myuruntest(test_bv_get, &bv, 64, -1);

    bv_free(&bv);
    myureport();
    return 0;
}
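Assuming the bit vector implementation lives in a file called bitvec.c (a guess on my part), you'd build and run it with something like:

    $ cc -o bitvec_test bitvec_test.c bitvec.c
    $ ./bitvec_test
    Tests run: 9, failed: 0 (0.00%).

where the last line assumes all nine checks pass.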
I used to do all of this via printf() statements combined with grep or
just eyesight, but I find this much better. I've also used CppUnit,
Check, JUnit, and a few others to good effect.
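For contrast, here's roughly what that older printf()-and-grep style
looks like (a hypothetical sketch, reusing the bitvec names above):
each check prints PASS or FAIL, and you run something like
"./bitvec_test | grep FAIL" to spot problems:

    (void)printf("%s: bv_get(&bv, 0) == 1\n",
        bv_get(&bv, 0) == 1 ? "PASS" : "FAIL");

No counting, no summary, and nothing stops you from simply not noticing
a failure scroll past.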
The extreme programming people have a lot of experience with this sort
of thing:
http://www.xprogramming.com/
Let's talk a little bit about "test infection" and "test-driven development." First, let me state plainly: I think that testing is good. Tests can give you some assurance that your assumptions about how something works under some specific set of conditions are valid (or not). Given that many, many bugs are simple "gotchas" that can be easily detected by testing, I think that having tests for code is a good way to reduce defect counts. And working with a unit testing framework, even a minimalist one like the one I posted in resp:3, is a lot nicer than writing ad hoc test drivers. In general, I think the formalization of test writing and the emphasis on testing as an intrinsic part of the development process is a good thing that's doing a lot to eliminate the typical sort of bugs in edge cases and the like, and generally to increase software quality.

However, tests cannot prove the absence of bugs. Nor does a good body of passing tests guarantee that a program is "correct." Tests are useful to give you some assurance that things are working the way that you expect, and libraries of tests can give you some confidence that a change you make in one part of the code doesn't break your assumptions in other parts of the code, but they are no excuse for not reasoning about the code and understanding it. And that's something that I've got to take exception with; too often these days, the "test infected" crowd, as exemplified by practitioners of "test-driven development," seems to think that working tests basically mean correct code.

Consider the short essay, written by Robert C. Martin, here:

http://www.objectmentor.com/resources/articles/craftsman5.pdf

In this story, the protagonist "arrives" at an algorithm, guided by the tests, but he doesn't really understand the algorithm, at least not at first, or the obvious ways to make it better. The knowledge that the tests pass is "good enough" to declare the code correct. And the tests don't really capture the stated assumptions encoded in the solution (there's a buffer overflow just waiting to happen in there if a currently-valid assumption ever changes). And there's an obvious way to increase the efficiency of the solution.

My feeling is this: a solution should not just "sneak up" on a programmer guided by tests. The solution should be *understood*. If the programmer gets that understanding by writing some tests, then that's one thing and that's fine, but if the programmer relies solely on the tests to decide "correctness" without understanding, then one's sitting on a powder keg. Put another way, tests are wonderful for detecting simple flaws based on bad assumptions. But they are no excuse for not understanding or being able to reason about one's programs.
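To make that concrete, here's a hypothetical illustration (my own
sketch, not the code from Martin's essay): a function whose tests all
pass under a currently-valid assumption, but which harbors a buffer
overflow the moment that assumption changes:

    #include <stdio.h>
    #include <string.h>

    /*
     * Hypothetical: the assumption "names are at most 15 characters"
     * happens to hold for every test input, so the tests stay green,
     * but nothing in the code or the tests enforces it.
     */
    static void
    format_report(char *out, const char *name, int score)
    {
        char buf[16];

        strcpy(buf, name);      /* overflows for names over 15 chars */
        (void)sprintf(out, "%s: %d", buf, score);
    }

    static int
    test_format(void)
    {
        char out[64];

        format_report(out, "alice", 42);    /* short name: passes */
        return strcmp(out, "alice: 42") != 0;
    }

    int
    main(void)
    {
        (void)printf("test_format %s\n",
            test_format() ? "FAILED" : "passed");
        return 0;
    }

The suite is green, yet the program is one changed assumption away from
memory corruption; only reading and reasoning about the code reveals that.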