November 19, 2013

Private benefits from short-term outcomes

The students in my recent lecture for the Penn State University Agricultural Extension and Education program asked me several useful questions. One was whether the public value message structure (below) implied that private benefits can only accrue after condition changes have occurred. Note that the thin, black arrow leading to the participant's private benefits implies exactly this.

In fact, I think the original construction--with the arrow extending from condition changes to private benefits--misleads. A program participant may very well enjoy benefits long before conditions have improved. Indeed, many participants directly benefit from improvements in knowledge and skills. For example, a participant in a leadership development program may see personal career benefits from the enhanced leadership and facilitation abilities he acquired through the program. And this may occur well before the program can claim improvements in community conditions. So, I wonder if the diagram should be redrawn (as I did above) to indicate that those private benefits can, indeed, arise from the intermediate stages in public value creation. What do you think (besides that I need to employ someone with better graphic design skills than mine :) )?

November 4, 2013

How long is the long-term?

In the diagram below, I map the outcomes section of the University of Wisconsin Extension logic model to the components of a public value message. In the parlance of the UWEX model, learning outcomes are short-term, behavior changes are medium-term, and condition changes are long-term.

Participants in two recent seminars--one for the UM Center for Integrative Leadership and one for Penn State University's Agricultural Extension and Education program--challenged this pattern. They argued that some programs may generate public-value-level outcomes in less time than it takes other programs to generate behavior changes. In these cases, doesn't labeling the outcomes as short-, medium-, and long-term cause confusion?

[Image: logic model diagram mapping outcomes to public value message components]

I think this is a useful point. What matters for the generation of public value is that the desired community-level condition changes are achieved, not how long it took to get there. If a program is able to alter economic, social, civic, or environmental conditions in ways that matter to critical stakeholders, then those impacts can be the basis of a public value message, even if they arose in the course of weeks or months, rather than the years or generations it may take for other programs to see impacts.

September 3, 2013

Critical conversation about public value

Listen in on a thought-provoking conversation about "The role of evaluation in determining the public value of Extension" with this eXtension Network Literacy Critical Conversation video. Participants are Nancy Franz (Iowa State University), Mary Arnold (Oregon State University), and Sarah Baughman (Virginia Tech), and the host is Bob Bertsch (North Dakota State University).

The participants make a number of interesting points about where Extension program evaluation has been and where it is headed. Several of my own take-aways from the video influenced my recent presentations at the 2013 AAEA Annual Conference and the APLU meeting on Measuring Excellence in Extension. I've paraphrased three of those insights below.

==Nancy Franz: Extension should work toward changing the unit of program evaluation analysis from the individual to the system.

==Mary Arnold: Extension is still too often limiting evaluation to private value and isolated efforts. We need to move closer to systems-level evaluation.

==Sarah Baughman: We need to do more program evaluation for program improvement, not only evaluation for accountability.

June 12, 2012

Searching for public value-level impacts

An Extension program creates public value when its positive impact extends beyond the program participants to the greater community. Documenting that a program has created public value, therefore, requires measuring system- or community-level impacts. How often do evaluation studies measure the kinds of impacts that can be classified as public value? In an article in the most recent (April 2012) Journal of Extension, Jeffrey Workman and Scott Scheer try to answer that question. The article, titled "Evidence of Impact: Examination of Evaluation Studies Published in the Journal of Extension," examines program evaluations published in JOE to determine the levels of impact they reached. They considered two impact hierarchies. In Bennett's Hierarchy (from "Up the hierarchy," by C. Bennett in the March 1975 JOE) the highest of seven levels is "end results," which would include such community-level impacts as a stronger economy or improved environmental conditions. In the Logic Model, these kinds of changes in social, economic, civic, and environmental conditions are called "long-term outcomes." Workman and Scheer found that about 5.6 percent of the studies they surveyed reported impacts at these levels.

The authors conclude that "more higher-level evidence of impact is needed." They write that Extension's "ultimate goal is to remain relevant and of value to the public. The strongest method to demonstrate relevancy and public value is to document 'true impact' (end results/long-term outcomes)."

Do the authors' findings ring true for you? Are only a small percentage of programs able to demonstrate public value-level impacts? Is it because few programs are achieving that level of impact? Or because the resources have not been available to measure programs' long-term impacts?

November 10, 2009

Logically speaking about public value

Many of you use the University of Wisconsin Extension logic model to guide program development and evaluation. Below is my first attempt at mapping the elements of the logic model to a public value message.

[Image: logic model diagram mapped to a public value message]

The "short-term" or "learning outcomes" in the logic model are a means to achieving the behavior changes and outcomes contained in the public value message. These learning outcomes lead the way to public value--and we must identify and measure them--but they are not the focus of the public value message. A skeptical stakeholder is unlikely to be persuaded of a program's value be hearing that a participant learned or became aware of something. The stakeholder is concerned with what the participant actually did with that knowledge.

What I call "changes" in the public value message are called "intermediate" or "medium term outcomes" in the logic model. What I call "outcomes" are the logic model's "long-term outcomes" or changes in conditions.

It seems to me that public value typically arises from a program's long-term outcomes. In some cases, a program's logic model will already include the outcomes that a stakeholder cares about (public value). In other cases, the public value exercise will tell us which additional outcomes we need to monitor--how we should extend the logic model--in order to substantiate our public value messages.

I believe that the public value approach must work hand in hand with program evaluation: it is through good program evaluation that we are able to make credible statements about our programs' public value.