Assessing Success

As part of my research assistant duties here at the PNLC, I’ve been working on the administration and evaluation of a recent professional development program run by our center. One of the objectives of the evaluation was to measure participant improvements in knowledge, skills, and ability across various aspects of their work. Typically, the chosen method for evaluating this and similar types of programs is a combination of simple pre- and post-tests assessing self-perceived skills and abilities. The pre-test is administered at the beginning of the program to get a "baseline" on participant skill level. After the program, a post-test assessing the same skills is administered. The difference between the two is analyzed to determine how successful the program was at accomplishing its goals.

At the suggestion of a PNLC professor, I began to explore a variation on this traditional pre-post assessment – a retrospective pre-post test. The basic idea of this type of assessment is the same as its more traditional counterpart. The difference is that both the pre- and post-tests are administered at the end of the program. Yes, the pre-test is at the end.

In my opinion, the reasoning is pretty genius in its simplicity. The idea is that, until someone goes through a program, they can’t really know what they do and don’t know. The technical term for this problem is "response-shift bias," which occurs when participants either overestimate or underestimate themselves at the beginning of a program.

Using this methodology, assessment becomes one test with pairs of questions posed to participants. The first in each pair reads something like this: "Think back to before the program began. How would you rate your skill level in area X?"

The second assesses the same skill, but in the present: "Now rate your skill level in area X after having participated in the program."

The potential for a response-shift bias is reduced with a retrospective pre-post test because, having gone through the program, participants have become aware of exactly what they did and didn’t know before the program started, and what they do and don’t know now. Thus, the assessment tool is a more accurate gauge of participant changes.
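For anyone curious what the analysis side might look like, here's a minimal sketch in Python of how the paired "before" and "after" ratings could be compared. The 1-5 rating scale, the sample responses, and the choice of a paired t-test are all my own assumptions for illustration, not a prescription from any particular evaluation handbook.

# A minimal sketch of analyzing retrospective pre-post data.
# Assumes each participant gave a pair of 1-5 ratings for one
# skill area: (retrospective "before" rating, "after" rating).
# The numbers here are hypothetical.
from scipy import stats

responses = [(2, 4), (3, 4), (1, 3), (2, 5), (3, 3), (2, 4)]

pre = [before for before, _ in responses]
post = [after for _, after in responses]

# The mean change is the size of the self-reported improvement.
mean_change = sum(post) / len(post) - sum(pre) / len(pre)
print(f"Mean self-rated change: {mean_change:+.2f}")

# A paired t-test asks whether a change this large is likely to
# be chance. (With small samples of ordinal ratings, a Wilcoxon
# signed-rank test, stats.wilcoxon, is a common nonparametric
# alternative.)
t_stat, p_value = stats.ttest_rel(post, pre)
print(f"Paired t-test: t = {t_stat:.2f}, p = {p_value:.3f}")

The same arithmetic can be done in a spreadsheet; the point is simply that pairing each participant's "before" and "after" ratings on one instrument makes the comparison direct.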

While a test of subjective skills and abilities will never have the same level of accuracy that more objective measures have (think simple math tests – either you can do the long division or you can’t), I think a retrospective pre-post test goes a long way towards increasing the accuracy of these types of measurements.

Is evaluation something most nonprofits are doing on a regular basis? Is this retrospective pre-post methodology something that could be useful for nonprofits trying to evaluate and improve their programs on limited budgets and with limited time?

Comments

I think that evaluating is really important for any organization. Maybe even more so for non-profits, as it can be more difficult for them to evaluate based on results. The important thing is to make sure that they are being evaluated on the right things.

