As part of my research assistant duties here at the PNLC, I've been working on the administration and evaluation of a recent professional development program run by our center. One of the objectives of the evaluation was to measure participant improvements in knowledge, skills, and ability across various aspects of their work. Typically, the chosen method for evaluating this and similar types of programs is a combination of simple pre- and post-tests assessing self-perceived skills and abilities. The pre-test is administered at the beginning of the program to get a "baseline" on participant skill level. After the program, a post-test assessing the same skills is administered. The difference between the two is analyzed to determine how successful the program was at accomplishing its goals.
At the suggestion of a PNLC professor, I began to explore a variation on this traditional pre-post assessment: a retrospective pre-post test. The basic idea of this type of assessment is the same as its more traditional counterpart. The difference is that both the pre- and post-tests are administered at the end of the program. Yes, the pre-test is at the end.
In my opinion, the reasoning is pretty genius in its simplicity. The idea is that, until someone goes through the program, they can't really know what they do and don't know. The technical term for this problem is "response-shift bias," which occurs when participants either overestimate or underestimate themselves at the beginning of a program.
Using this methodology, assessment becomes one test with pairs of questions posed to participants. The first in each pair reads something like this: "Think back to before the program began. How would you rate your skill level in area X?"
The second assesses the same skill, but in the present: "Now rate your skill level in area X after having participated in the program."
The potential for a response-shift bias is reduced with a retrospective pre-post test because, having gone through the program, participants have become aware of exactly what they did and didn't know before the program started, and what they do and don't know now. Thus, the assessment tool is a more accurate gauge of participant changes.
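For anyone curious what the analysis side of this looks like, the paired-question design boils down to computing per-participant differences between the retrospective "before" rating and the "now" rating. Here's a minimal sketch in Python; the ratings are hypothetical 1-to-5 self-ratings from ten imagined participants, purely for illustration:

```python
# Hypothetical retrospective pre-post data: each participant answers
# both questions in a pair, so the ratings stay matched by person.
from statistics import mean

pre = [2, 1, 3, 2, 2, 1, 3, 2, 1, 2]    # "Think back to before the program..."
post = [4, 3, 4, 3, 4, 3, 5, 4, 3, 4]   # "Now rate your skill level..."

# Per-participant change preserves the pairing, rather than just
# comparing group averages.
changes = [after - before for before, after in zip(pre, post)]
avg_change = mean(changes)
print(f"Average self-rated improvement: {avg_change:.1f} points")
```

Because both questions come from the same post-program sitting, there are no unmatched or missing pre-tests to reconcile, which is a nice practical side benefit of the retrospective design.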
While a test of subjective skills and abilities will never have the same level of accuracy that more objective measures have (think simple math tests: either you can do the long division or you can't), I think a retrospective pre-post test goes a long way towards increasing the accuracy of these types of measurements.
Is evaluation something most nonprofits are doing on a regular basis? Is this retrospective pre-post methodology something that could be useful for nonprofits trying to evaluate and improve their programs on limited budgets and with limited time?