Blog 4: Testing


We are here one more time for another edition of our 3081: Program Design and Development writing assignment, this time taking on the topic of testing.

Testing? What do I mean by testing in terms of computer science? Testing is about producing failures in our own code to make sure it does what it is supposed to do. It is the most effective technique for building confidence that small pieces of code work as expected. As developers, we spend time debugging, printing variables' values throughout the code and verifying that those values are what we expected them to be. However, we also know that this is not enough for testing purposes, so the next question is: how do we test a method, a function, a class, a whole project? I found an answer in regression testing, which states that every failed execution must yield a test case, and that test case remains part of the project's test suite ("Seven principles of software testing", 2008).

There are two kinds of test cases: manual and automatic. Automatic tests are derived from the specifications of the project, while manual tests are written and run by hand rather than generated from a specification, which is the case for the tests from Lab 5. Recalling Lab 5, there are three tests of the readInput method. The first test provides no file name by changing the argument count from 2 to 1, in order to check that this case is handled correctly. For the second test, we pass a non-existing file to the function makeArgs (const char *a0, const char *a1), which returns the expected null pointer. The last test confirms that, when given the name of an existing file, the returned pointer is not null. Manual tests like these are good for building an understanding of a method's functionality and its arguments.

Based on the premise of finding all the bugs in our code, not just one, we should consider an empirical assessment strategy called random testing. Random testing observes that successive bugs may be of a different nature; hence, we should try as many tests as our creativity and knowledge allow, to uncover as many failures as possible as a function of time. In other words, we should not underestimate a seemingly dumb testing strategy before we try it.

Testing is far easier in the long run if we put the "pay-as-you-go" model into practice. This means writing a simple test for each small routine as we go along with the project, consistently spending some time at the end of each day testing our code, rather than rushing at the last minute to fix too many bugs and rework many lines of code. In conclusion, testing is not a waste of time, and it should not be something that happens only toward the end of a project.


I don't know if it's possible to find "all the bugs in our code, not just one."

Even if it is possible, is it always worthwhile? In the "real world", maybe making something bug-free is less important than releasing it, or having fewer programmers working on it. It is really economics that determines how many resources should be spent on testing.

Proposed model: the more testing that goes on, the fewer bugs are found per unit of time. This might approximate something like a logarithmic curve. In my opinion, for most projects, there is a point along this curve where it is most economical to stop testing.

You may want to slightly change your definition of testing. Instead of saying “Testing is about producing failures,” I would say that testing is an attempt to make a program fail.

“Program testing can be used to show the presence of bugs, but never to show their absence!” – Edsger W. Dijkstra


About this Entry

This page contains a single entry by ayal0034 published on November 7, 2011 10:01 PM.

Blog 3: Subversion and Source Control was the previous entry in this blog.
