
November 19, 2013

Private benefits from short-term outcomes

The students in my recent lecture for the Penn State University Agricultural Extension and Education program asked me several useful questions. One was whether the public value message structure (below) implied that private benefits can only be accrued after condition changes have occurred. Note that the thin, black arrow leading to the participant's private benefits implies exactly this.
[Diagram: public value message structure, showing the arrow to the participant's private benefits]
In fact, I think the original construction--with the arrow extending from condition changes to private benefits--misleads. A program participant may very well enjoy benefits long before conditions have improved. Indeed, many participants benefit directly from improvements in knowledge and skills. For example, a participant in a leadership development program may see personal career benefits from the enhanced leadership and facilitation abilities he acquired through the program. And this may occur well before the program can claim improvements in community conditions. So, I wonder if the diagram should be redrawn (as I did above) to indicate that those private benefits can, indeed, arise from the intermediate stages in public value creation. What do you think (besides that I need to employ someone with better graphic design skills than mine)? :)

November 4, 2013

How long is the long-term?

In the diagram below, I map the outcomes section of the University of Wisconsin Extension logic model to the components of a public value message. In the parlance of the UWEX model, learning outcomes are short-term, behavior changes are medium-term, and condition changes are long-term.

Participants in two recent seminars--one for the UM Center for Integrative Leadership and one for Penn State University's Agricultural Extension and Education program--challenged this pattern. They argued that some programs may generate public-value-level outcomes in less time than it takes other programs to generate behavior changes. In these cases, doesn't labeling the outcomes as short-, medium-, and long-term cause confusion?

[Diagram: UWEX logic model outcomes mapped to the components of a public value message]

I think this is a useful point. What matters for the generation of public value is that the desired community-level condition changes are achieved, not how long it took to get there. If a program is able to alter economic, social, civic, or environmental conditions in ways that matter to critical stakeholders, then those impacts can be the basis of a public value message, even if they arose in the course of weeks or months, rather than the years or generations it may take for other programs to see impacts.

October 23, 2013

Does public value magnitude matter?

Is it enough for a stakeholder to learn that your program produced public value, or do stakeholders want to know how much value was created? Put another way, is it adequate to demonstrate that a program has a positive return on investment for a community? Or does it have to have a higher return than all the alternative investments the community could have made?

I was asked this question today at a Center for Integrative Leadership Research Forum where I presented "How Cross Sector Leadership Builds Extension's Public Value." It seems that the answer has to be yes: it does matter whether a program generates a large or small amount of public benefit relative to its cost.

A potential funder wants to direct dollars toward their highest use. Ideally, all programs would report a return using a common metric. The metric could be dollars for programs whose impacts can be monetized (i.e., converted to dollars); it could be some other common metric (e.g., amount of pollution remediation, high school graduation rate) for programs addressing similar issues. With common metrics, a funder could rank programs by return on investment and fund as many of the top programs as they can.
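To make that ranking idea concrete, here is a minimal sketch of a funder's exercise. The programs, benefits, costs, and budget are all hypothetical; it simply illustrates ranking by return per dollar and funding down the list (a simple greedy rule, not a full optimization of the funder's portfolio).

```python
# Hypothetical programs: (name, expected benefit $, cost $).
programs = [
    ("Water quality", 120_000, 40_000),
    ("Youth leadership", 90_000, 45_000),
    ("Farm safety", 60_000, 15_000),
]

budget = 60_000

# Rank by return on investment: expected benefit per dollar of cost.
ranked = sorted(programs, key=lambda p: p[1] / p[2], reverse=True)

funded = []
for name, benefit, cost in ranked:
    if cost <= budget:  # fund the next-best program if it still fits the budget
        funded.append(name)
        budget -= cost

print(funded)  # ['Farm safety', 'Water quality']
```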

Such apples-to-apples comparisons must be rare, though, even for programs with common objectives. I also imagine that the magnitude of a program's expected public value--if it is known--will inform, but not drive, a funder's decision.

What has your experience been? Have you sought funding from a source that demands to know the expected return on their investment in terms of dollars or some other metric? Do you know of funders that use those metrics to rank programs by return?


October 14, 2013

Working Differently in Extension Podcast

Interested in a short introduction to "Building Extension's Public Value"? Check out this Working Differently in Extension Podcast, featuring a conversation between Bob Bertsch of Agricultural Communication at North Dakota State University and me. If you'd like to actually see us converse, check out the video of the podcast below.

September 10, 2013

Evaluation and accountability

Last month at the 2013 AAEA Annual Conference, I was one of three presenters in a concurrent session on "Creating and Documenting Extension Programs with Public Value-Level Impacts." I learned a lot from both of my co-presenters, but here are just two quick ideas I took away from the session:

From Jo Ann Warner of the Western Center for Risk Management Education: As a funder, the WCRME demands evaluation plans--and ultimately results--from grant applicants. But the Center itself is accountable to its own funders, who demand that the Center demonstrate the effectiveness of the programs it has funded. So, generally, grant applicants should keep in mind to whom the granting agency is accountable and provide the kind of results that will help the agency demonstrate effectiveness.

From Kynda Curtis of Utah State University: Intimidated about evaluating the effectiveness of your extension program? Don't be! It's not as hard as you think it is, won't take as much additional time as you think it will, and costs less than you expect. No excuses!

September 3, 2013

Critical conversation about public value

Listen in on a thought-provoking conversation about "The role of evaluation in determining the public value of Extension" with this eXtension Network Literacy Critical Conversation video. Participants are Nancy Franz (Iowa State University), Mary Arnold (Oregon State University), and Sarah Baughman (Virginia Tech), and the host is Bob Bertsch (North Dakota State University).

The participants make a number of interesting points about where Extension program evaluation has been and where it is headed. Several of my own take-aways from the video influenced my recent presentations at the 2013 AAEA Annual Conference and the APLU meeting on Measuring Excellence in Extension. I've paraphrased three of those insights below.

*Nancy Franz: Extension should work toward changing the unit of program evaluation analysis from the individual to the system.

*Mary Arnold: Extension is still too often limiting evaluation to private value and isolated efforts. We need to move closer to systems-level evaluation.

*Sarah Baughman: We need to do more program evaluation for program improvement, not only evaluation for accountability.

May 28, 2013

Mark Moore: Recognizing Public Value

My UM Extension colleague Neil Linscheid has alerted me to this recording of Harvard Kennedy School's Mark Moore discussing his new book, Recognizing Public Value. I haven't read the book yet, but I've ordered it from the library, and I'll write more once I've read it. However, I can share some quick impressions from having listened to the recording of Moore's talk.

Both Moore and his discussant, Tiziana Dearing of the Boston College School of Social Work, commented on the limitations of monetizing public value created by service providers. Starting around 28:00 of the recording, Moore explains that to measure the value created by government, we are often challenged with "describing material conditions that are hard to monetize....We don't know how to convert them into some kind of cash value." He doesn't give examples, but it's easy to imagine changes in civic or social conditions as falling into this category. Moore then decries the amount of effort that goes into trying to monetize these effects and says that effort "mostly distorts the information rather than clarifies." For clarity, he would prefer a concrete description of the effect (presumably something like units of pollutant reduction or increase in graduation rates) to "somebody's elaborate method to try monetize it."

Dearing shares Moore's skepticism of monetizing government and nonprofit benefits, and around 42:00 of the recording says, "It's a very dangerous thing outside of the private sector to have the same enthusiasm for the bottom line." Later she warns that "it's so easy to follow the metrics that follow the dollar that it becomes shorthand for a theory of change."

In this post, I also urged caution about monetizing the impacts of Extension programs. Nevertheless, when they are carefully framed and rigorously executed, dollar-denominated metrics such as cost-benefit analyses allow for comparison across programs. I wonder how Moore and Dearing would advise a local government that is choosing between anti-pollution and education investments when the only impact data they have is a "concrete description of the effects"? How would they balance units of pollutant mitigation with a percentage change in the graduation rate?
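One non-monetary possibility (my speculation, not anything Moore or Dearing propose): express each concrete effect as a fraction of a stated target, so effects measured in different units can at least be compared on progress toward goals. A minimal sketch with made-up numbers:

```python
# Hypothetical impacts, each as (achieved, target) in its own natural units.
impacts = {
    "pollutant reduction (tons)": (35, 50),
    "graduation rate gain (pct. points)": (3, 5),
}

# Dividing by the target makes the units drop out of the comparison.
for name, (achieved, target) in impacts.items():
    print(f"{name}: {achieved / target:.0%} of target")
# pollutant reduction (tons): 70% of target
# graduation rate gain (pct. points): 60% of target
```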

Have you read Moore's new book or at least listened to his talk? What do you think about what Dearing calls our "enthusiasm for the bottom line"? Do we go too far in trying to translate program impacts into dollars? What other contributions of the new book are particularly relevant for Extension?

May 7, 2013

Can public value help you get promoted?

Last year I presented to faculty from the University of Minnesota's Research and Outreach Centers (ROCs) about communicating the public value of research, outreach, and engagement scholarship for promotion and tenure purposes. Since then, others have asked me whether and how the public value approach can be useful in documenting scholarly accomplishments.

The guidance I came up with is pretty similar to what I wrote about here: begin with the end in mind. Basically, a scholar who aims early on for public value-level impacts and outcomes, and then evaluates and documents those outcomes, will have built a strong case for her work's public value. Because each step in creating public value involves scholarship--of research, engagement, teaching, and evaluation--the scholar who meticulously documents her contributions should, in the end, be well-positioned to defend her record.

I know, with all of the demands on outreach and engagement faculty, this is easier said than done. I know clients and community members expect these faculty to engage in activities that are hard to classify as scholarship. But I do think that leading with the end game can help a faculty member prioritize for success.


I also wrote in this blog entry about ways that the public value approach can help close the loop between research and engagement. The research-design-engagement-evaluation loop illustrated in that entry provides a number of opportunities for a scholar focusing on engagement work--in contrast with outreach education--to document her contributions and impact. How did you contribute to (1) the research that underpins the program curriculum, (2) the program design, (3) the engagement itself, (4) the program evaluation? It seems to me that viewing your engagement program in the context of the loop can bring to mind scholarly contributions that you might not have thought to document. Perhaps it can even lead to a more complete promotion and tenure case.

May 1, 2013

National resources for impacts and public value

At last week's 2013 PILD conference, I heard about a couple of national initiatives that, once developed, should help Extension organizations share impact data that can inform public value messages. In my own comments at the conference, I supported cross-state sharing of ideas, so I was encouraged to hear about these national projects.

First, I think it was NIFA director Dr. Sonny Ramaswamy who mentioned an effort this summer to develop a national portal for impact reporting that will consolidate Extension program impact results.

Second, I believe it was ECOP chair Dr. Daryl Buchholz who highlighted ECOP's Measuring Excellence initiative. It appears that this project is meant to define and demonstrate excellence and to report impacts for Cooperative Extension as a whole. From the website: "Cooperative Extension has advanced from merely reporting inputs and outputs to documenting outcomes and impacts of its programs. However, most of these measures are tied to specific programs. They are not generally assessed or considered at the organizational level." While the "excellence" part of the website is well-developed, the pages having to do with impact are still under construction. I look forward to the work that will populate these pages with resources and guidance for Extension impact teams. Meanwhile, I have to give a shout out for the public value statements on the front page of the website!

By the way, my notes from the PILD keynote talks are a little sparse. If I am wrong about which speaker spoke about which initiative, please correct me in the comments. And if you know more about how the portal or the Measuring Excellence project can strengthen Extension's public value case, please share that, too!


June 12, 2012

Searching for public value-level impacts

An Extension program creates public value when its positive impact extends beyond the program participants to the greater community. Documenting that a program has created public value, therefore, requires measuring system- or community-level impacts. How often do evaluation studies measure the kinds of impacts that can be classified as public value? In an article in the most recent (April 2012) Journal of Extension, Jeffrey Workman and Scott Scheer try to answer that question. The article, titled "Evidence of Impact: Examination of Evaluation Studies Published in the Journal of Extension," examines program evaluations published in JOE to determine the levels of impact they reached.

The authors considered two impact hierarchies. In Bennett's Hierarchy (from "Up the hierarchy," by C. Bennett in the March 1975 JOE), the highest of seven levels is "end results," which would include such community-level impacts as a stronger economy or improved environmental conditions. In the Logic Model, these kinds of changes in social, economic, civic, and environmental conditions are called "long-term outcomes." Workman and Scheer found that about 5.6 percent of the studies they surveyed reported impacts at these levels.

The authors conclude that "more higher-level evidence of impact is needed." They write that Extension's "ultimate goal is to remain relevant and of value to the public. The strongest method to demonstrate relevancy and public value is to document 'true impact' (end results/long-term outcomes)."

Do the authors' findings ring true for you? Are only a small percentage of programs able to demonstrate public value-level impacts? Is it because few programs are achieving that level of impact? Or because the resources have not been available to measure programs' long-term impacts?

May 15, 2012

Discussing public value from youth programs

There has been a lively discussion about communicating the public value of youth development programs on "Youth Development Insight," a blog of the University of Minnesota Extension Center for Youth Development. Community program specialist Joanna Tzenis argues that youth programs can create public value by having society-level impacts, such as building trust among community members and youth becoming agents for change in their communities. You can check out the discussion here.

May 4, 2010

Build muscles, bones, and public value!

University of Missouri Extension's Stay Strong, Stay Healthy program (SSSH) is a strength training program that leads older adults to feel more active, flexible, and energetic. The SSSH team has created a public value message that they use to generate awareness and support for the program. Here is the message, complete with an estimate of health-related cost savings, excerpted from the program website:

"When you support MU Extension's Stay Strong, Stay Healthy program, participants will increase their physical activity and may improve strength, balance and flexibility, resulting in reduced risk for falls, better overall health and greater independence. These health benefits decrease the likelihood of a participant entering a nursing home, which costs on average $24,455 per year in Missouri. The money saved benefits the community by keeping more discretionary income in circulation locally. It also keeps people actively, independently contributing to society longer."

Do you have a similar program in your state? How do you explain your program's public value?

April 26, 2010

Evidence Based Living Blog from Cornell

Looking for ways to support your public value message? Spend some time exploring the Evidence Based Living Blog, written by Cornell Cooperative Extension's Karl Pillemer, Associate Dean for Extension and Outreach, and Rhoda Meador, Associate Director of Outreach and Extension in the College of Human Ecology as well as Associate Director of the Bronfenbrenner Life Course Center. The blog highlights research on the outcomes arising from all kinds of programs and interventions, particularly in the areas of youth development and health and wellness. As the authors say, "The blog is based on one key principle: Now more than ever, people need help separating the good scientific information from the bad. We are all about assessing the scientific evidence on human problems and looking at how to use it every day." Does that sound familiar? Does your extension program make an effort to "separate the good scientific information from the bad"?

Consider adding the Evidence Based Living Blog to your blog reader, so you can see when the authors post about the latest research or media stories on youth behavior and health. Take a stroll through the archives and read the discussions and evidence assembled therein. You may come across ideas for new research projects or findings that you can use to make the case for your own programs. If you find something useful, go ahead and share it in a comment on the Cornell blog or here.

April 21, 2010

Hunting for public value?

Last month a group of about 30 Extension professionals from around the country participated in a train-the-trainer course for "Building Extension's Public Value." One of the participants, Jonathan Ferris of Purdue Extension, shared his ideas for a public value message for Purdue Extension's Venison Workshop. The program teaches participating hunters proper techniques for field dressing deer and safe methods for storing and preserving venison. Educators also update participants about chronic wasting disease in Indiana.

Regarding the evaluation methodology for the venison program, Jonathan reports: "For years, we only asked questions like 'did you pick up some butchering tips,' or 'did you learn something about food safety,' etc. Last year, however, we decided that since we have many return attendees, we would begin asking them if they 1) hunted or fished more as a result of attending our program (we also do fish programs), and 2) do they keep or bring home more fish and game as a result of our programs."

With affirmative responses to those evaluation questions, Jonathan and his colleagues argue that the hunter/fisherman programs create public value by generating hunting and fishing license fees for the state (provided that the program participants hunt and fish in accordance with state regulations). Moreover, wild game and fish are low in saturated fats and sodium, and are generally part of a healthy diet. Sportsmen and women who bring home more wild game and fish and incorporate it into their diets may see improvements in health. When these health improvements lead to lower public health costs, we can see that the Extension programs have generated public value.

Additionally, if the venison team can produce evidence that program participants identify and report animals that show signs of chronic wasting disease, they may be able to make a "natural resource protection" argument, as well.

Do you have hunting and fishing programs in your state? Have you tried to make a case for public funding for such programs? How do you explain the programs' public value?

April 2, 2010

2010 National Extension and Research Administrative Officers' Conference

On May 18 in Madison, WI, I will lead a breakout session at the National Extension and Research Administrative Officers' Conference (NERAOC). I will present an overview--and the basic concepts--of the "Building Extension's Public Value" workshop, and talk about how to make a case for funding for outreach, extension, and research. If you are planning to attend the conference, please join me at the 10:15 session.

January 25, 2010

What the doctor ordered

What should an Extension program team have on hand to draft a public value message that secures a skeptical stakeholder's support? Here's my prescription:

[Image: my prescription list]

What's yours?

December 28, 2009

This I believe to be true today

Substantiating the claims that we make about Extension programs' public value is crucial to Extension's credibility. However, we don't always have enough time in a "Building Extension's Public Value" workshop to assemble the documentation (journal articles, program evaluation reports, etc.) to support the claims embedded in a newly drafted public value message. The purpose of the "Research Agenda" workshop module is to list those claims and create a plan for assembling the supporting documents, or even for conducting new program evaluations or research.
[Image: research agenda]
Sometimes, a workshop group is torn between wanting to draft a public value message that is persuasive--but maybe a bit aspirational--and one that contains only claims for which the team has strong supporting evidence. I usually encourage groups to be creative and persuasive during the workshop and worry about the documentation later, but not to use a public value message publicly until they are sure it is defensible. Understandably, this guidance occasionally leads to draft public value messages that include some pretty far-fetched claims.

Cynthia Crawford, Family Financial Education Specialist and County Program Director for University of Missouri Extension in Saline County, MO, has a suggestion for helping workshop groups stay creative while not veering too far into "aspirational" territory. Cynthia suggests telling teams drafting public value statements that they don't have to have the documentation to substantiate their claims today (during the workshop), but they do have to believe the statements are true today. Cynthia reports that this bit of direction has led to remarkably strong--and credible--draft public value messages in short amounts of work time.

I will definitely adopt Cynthia's "you have to believe it today" guidance the next time I teach a BEPV workshop. Do you have any other suggestions for helping teams "think big" while staying grounded?

December 21, 2009

Extension, Show me the money! Or not.

While the objective of the "Building Extension's Public Value" workshop is to draft a qualitative message about a program's public value, many of our stakeholders are concerned about programs' financial impacts. For example, county commissioners and state legislators want to know how much a program will cost, and whether its impacts will reduce strain on the county or state budget. A lot of us, therefore, are eager to quantify the impacts of Extension programs and, wherever possible, convert those impacts into dollars and cents.
Some exciting work is being done on monetizing Extension program impacts. These economic impact briefs from Texas AgriLife Extension are a strong example, and I know there are many more studies.

In future blog entries, I'll write more about ways researchers and program evaluators are quantifying and monetizing Extension program impacts. However, as persuasive as a dollars-and-cents case can be with some stakeholders, I can think of two reasons to proceed with caution as we pursue more financial and fiscal impact studies.

First, Cooperative Extension does not yet have all the resources and tools necessary to estimate the financial and fiscal benefits of all of our programs. To do a credible job, applied economists, program evaluators and others would need to devote many more hours to this effort than are currently available. Data must be collected and analyzed, models built and tested, reports written and vetted. The likely result of pressuring program teams to estimate financial impacts while providing them with inadequate resources is a collection of poor quality analyses that erode Extension's credibility.

Second, some programs' public value lends itself more readily to monetization than others. For example, a program that helps reduce a county's cost of managing its waste can make a strong, straightforward, dollars-and-cents case. On the other hand, methodologies for estimating the fiscal impact of social capital improvements are less well-developed.
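To see why the first kind of program monetizes so easily, consider how little machinery the arithmetic needs: a benefit-cost ratio falls out of two numbers (the figures below are hypothetical). Nothing comparably simple exists yet for, say, gains in social capital.

```python
# Hypothetical county waste program: avoided disposal costs vs. program cost.
avoided_costs = 75_000  # $ per year saved in waste management
program_cost = 25_000   # $ per year to run the program

benefit_cost_ratio = avoided_costs / program_cost
print(f"Benefit-cost ratio: {benefit_cost_ratio:.1f}")  # 3.0, i.e., $3 returned per $1 spent
```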

Because so many of Extension's stakeholders focus on monetary value, I am concerned that programs whose public value is more easily monetized will rise to the top of the priority list--not because they contribute more public value, but because their value is easier to translate into currency.

The objective of the BEPV workshop is to make strong qualitative cases for all Extension programs that create public value. I hope we can keep doing this, even while we seek the resources necessary to estimate the financial and fiscal impacts of those programs for which that is possible.

November 10, 2009

Logically speaking about public value

Many of you use the University of Wisconsin Extension logic model to guide program development and evaluation. Below is my first attempt at mapping the elements of the logic model to a public value message.

[Diagram: logic model elements mapped to a public value message]

The "short-term" or "learning outcomes" in the logic model are a means to achieving the behavior changes and outcomes contained in the public value message. These learning outcomes lead the way to public value--and we must identify and measure them--but they are not the focus of the public value message. A skeptical stakeholder is unlikely to be persuaded of a program's value be hearing that a participant learned or became aware of something. The stakeholder is concerned with what the participant actually did with that knowledge.

What I call "changes" in the public value message are called "intermediate" or "medium term outcomes" in the logic model. What I call "outcomes" are the logic model's "long-term outcomes" or changes in conditions.

It seems to me that public value typically arises from a program's long-term outcomes. In some cases, a program's logic model will already include the outcomes that a stakeholder cares about (public value). In other cases, the public value exercise will tell us which additional outcomes we need to monitor--how we should extend the logic model--in order to substantiate our public value messages.
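For those who like their mappings explicit, here is the correspondence from the last few paragraphs written out as a simple lookup (my paraphrase of the UWEX terms, not an official crosswalk):

```python
# UWEX logic model term -> role in the public value message (my paraphrase).
logic_model_to_message = {
    "short-term (learning) outcomes": "a means to an end; measured, but not the message's focus",
    "medium-term (behavior) outcomes": "the 'changes' in the public value message",
    "long-term (condition) outcomes": "the 'outcomes'; the usual source of public value",
}
```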

I believe that the public value approach must work hand in hand with program evaluation: it is through good program evaluation that we are able to make credible statements about our programs' public value.

November 9, 2009

Is program evaluation research?

In the previous entry I wrote about two steps that help close the loop between research and Extension (or engagement): conducting research and conducting program evaluation. When I presented this at Purdue's Scholarship of Engagement workshop, a participant asked how I differentiate between research and program evaluation.


For the purpose of the "closing the loop" diagram, I was thinking of research as applied, scholarly investigation, which may or may not have an intended application to an engagement or outreach program. Really, I was envisioning laboratories and experimental plots.

But I also see many kinds of program evaluation as research; certainly evaluation requires the application of research methods (e.g., cost-benefit analysis, survey data analysis, economic impact analysis). However, it may not be the kind of research that subject-matter scholars (e.g., scientists and social scientists) can publish in their own fields' professional journals. So, I think some program evaluation is research and some research is program evaluation. Am I right? Is there more to the distinction than I am making here?

November 5, 2009

Closing the loop between research and Extension

When I ask Extension professionals to name Extension's strengths relative to other providers of outreach education, the connection between Extension programs and university research inevitably is the first item on the list. We build on that key strength when we deliver programs based on the best research and let the community's needs inform the research agenda: that is, when we close the loop between research and Extension. I focused on this relationship--substituting "engagement" for "Extension"--at the Purdue Scholarship of Engagement Workshop last week.

[Diagram: the research-Extension loop]

Here's how I think an Extension team can close that loop: They (or someone else) conduct research that leads to a discovery (knowledge creation) that could help address a condition of concern in a community (middle left box in the diagram). The team designs their Extension or engagement program with a curriculum that is based on the new knowledge, as well as existing best practices regarding program design and delivery (middle box). If the team is truly engaged with their community partner, then the partner's needs and strengths will also inform the design of the program. The team conducts their program (middle right box) while also collecting data and observations that can be used to inform the research agenda (top box). This way, what is observed and learned "in the field" makes its way back to the lab to influence the direction of future research. The team also implements their program evaluation plan, which helps them evaluate the impact of the Extension or engagement program (lower right box). The results of the evaluation help them improve the program design (lower middle box), so greater impact will result next time.

[Diagram: detail of the research-Extension loop]

Where does public value come into this scheme? I can think of at least two places: First, in the design phase, the team will plan how they expect the program to create public value. What are the expected impacts and outcomes, and how do they create benefits for stakeholders who are not the program's direct beneficiaries? Second, in the evaluation phase, team members will assess whether those expected outcomes were generated: whether public value was created.

[Diagram: where public value enters the loop]
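For readers who think in code, here is one way to sketch the loop's structure: each stage maps to the stages its outputs feed. The stage names paraphrase my diagram, and the two public value checkpoints are marked in comments.

```python
# Stage -> stages its outputs feed (paraphrasing the loop diagram above).
loop = {
    "research": ["program design"],
    "program design": ["program delivery"],          # plan expected public value here
    "program delivery": ["evaluation", "research"],  # field observations feed future research
    "evaluation": ["program design"],                # assess whether public value was created;
                                                     # results improve the next design
}
```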

I can think of a few ways a team can increase their success at closing the loop:

*Form a team that includes researchers, Extension educators, and program evaluators.
*Embed the program evaluation plan into program design.
*Develop and implement a plan for collecting observations and data arising from the Extension or engagement program.
*Keep up to date on relevant research developments.
*Plan for steps to take once the program ends (e.g., analyzing data and revisiting the program design).

Do you think closing the loop between research and engagement is crucial? Can you suggest ways to make it happen more systematically in Extension?