Much of the time, your code is not flawless the first time you write it. Neither is mine. Neither is a programming veteran's. To make matters worse, we sometimes don't even try out the small new sections of code we write, especially trivial functions, because there was no way we could have messed them up. Then, after writing a lot of code, we finally decide to test how it all works together and are shocked to see failing tests. Development time increases as we trace through the code and debug it to find out where things went wrong, and the culprit may only turn up after digging through the low-level depths of the code.
Luckily, we can use unit tests to avoid situations like these while legitimately boosting confidence in our code. Typos and logic errors can find their way into anyone's code without being noticed, and even if the program compiles, that says nothing about whether the software performs correctly. Only once you have tested even the simplest pieces of code can you be sure they were written correctly and build on top of them with confidence in their functionality. Unit testing is intended to exercise the small, low-level functionality of our programs, and two guidelines should be followed to get the most use out of it.
Guideline one: unit tests should answer questions along the lines of "does function x do what it is supposed to do?" Once testing covers an entire class, component, or the whole system, we are no longer in the realm of unit testing. Take, for example, some unit testing my partner and I performed early on in our translator project. A function called scan took in a character array and output a linked list of C structs representing the tokens in the array. In a short time, we wrote four small tests with varying simple inputs to scan and checked that the proper tokens were in the linked list. Had we written a test that included the file input phase, the scanning phase, and the token parsing phase, we would have been testing at the level of integration testing or higher.
Guideline two: test appropriately. If you only test the average case, or write tests long after you have written a section of code, you are not taking full advantage of unit testing. The reason is that when you are designing or have just implemented a unit of code, you are most familiar with how it should behave. Using this knowledge, tests can cover not only the intended use cases but also reasonable boundary cases and inputs that should lead to errors. Coming back to a function a week or two after designing or writing it, you might not be able to come up with all of the special cases required to ensure the unit is fully functional. Let's look at a simple example of proper boundary coverage. Another function in our translator, readInput, took a filename and output a character array of the contents of the file at that filename. Among the cases we tested were filenames of files that did and did not exist, as well as empty filenames. Such tests ensure that the suite covers not only the intended case of reading input from an existing file but also the boundary cases where the function must behave correctly when it has no file to process.
But how does one write unit tests? Hard-code a bunch of print statements and eyeball the output to make sure it is what you wanted? Going down that route removes any way of automating test verification and can hinder development when output code must constantly be commented out and back in. Instead, use a testing framework. With a testing framework, you write the tests separately from the program and update them only when needed. Then, whenever the program changes, you quickly recompile the tests via an automated build and run them. In short order you have a good picture of the progress made so far, or an early warning of what to fix.

My partner and I use CxxTest for our translator project. For each suite of tests, we start by creating a header file for the unit or group of units under test. Within this header file, we write a subclass of the CxxTest class TestSuite, adding a method prefixed with "test" for each test we want performed, along with its body. CxxTest provides many assertion macros, such as TS_ASSERT(expr), intended to be used in these bodies to verify correct execution. Next, we invoke a Makefile target that calls a Perl script included with the framework to generate a .cpp file for the test suite, and then compiles that .cpp file. We can then run the executable to see how the pieces of our project are holding up, and quickly rebuild the test executables whenever our code changes. Whenever an assertion fails, the exact line of the test's header file is printed to tell us what went wrong, along with the number of failed tests out of the total in that file. When all tests pass, the test executable simply reports "OK."
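For illustration, here is a minimal sketch of such a suite header. The suite name and the tested scan function are hypothetical; this is a fragment, not a standalone program, since CxxTest's generator script must turn the header into a runnable .cpp test runner.

```cpp
// ScanTests.h -- a minimal CxxTest suite sketch.
#include <cxxtest/TestSuite.h>

class ScanTests : public CxxTest::TestSuite {
public:
    // Each method prefixed with "test" becomes one test case.
    void testEmptyInputYieldsNoTokens() {
        TS_ASSERT(scan("") == nullptr);   // assumes a scan() like ours
    }

    void testSingleToken() {
        Token* list = scan("42");
        TS_ASSERT(list != nullptr);
        TS_ASSERT(list->next == nullptr);
    }
};
```

The Makefile target then runs the generator over this header (the exact script name and flags depend on your CxxTest installation) and compiles the generated .cpp into the test executable described above.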
As you can see, unit testing is intended to be simple and to steady your development cycle. Keep to testing small sections of code that you have written, test them promptly and over a wide range of cases, and you will build on top of those sections with confidence in their functionality. Testing frameworks such as CxxTest let you create tests easily and keep them separate from your actual code, making testing fast as well as easy to automate. No one's code is perfect at first, but through unit testing you can at least avoid building on top of buggy code.