University of Minnesota Extension
http://www.extension.umn.edu/
612-624-1222


What's the definition of youth work insanity?


How often in programs do we continue to do something because "It's that time of year again"? A quote from Albert Einstein reads, "Insanity: doing the same thing over and over again and expecting different results." I don't think Einstein was thinking about evaluation when he coined this phrase, but it nicely articulates our tendency to continue to offer youth programs without "checking under the hood" periodically.

As an educator in program evaluation, I believe strongly in using evaluation to guide and improve youth programs and to prove their worth to others. I know that others will agree with me on this. But I think we often fail to intentionally build evaluation into our program design, and as a result our programs suffer. Jane Powers's research on youth participatory evaluation demonstrates that intentionally engaging youth in the evaluation experience helps build not only stronger programs but also youth development skills in participating youth.

Youth workers are natural evaluators -- we intuitively modify environments to fit participants' needs. Being intentional about evaluation could include strategies such as:

  • Checking the "pulse" of a group as they conduct an activity.
  • Forming an advisory group from a mix of youth workers and youth to plan and critique program offerings.
  • Embedding reflection into daily practice. The Youth Work Institute has an excellent toolkit on reflection.
  • Administering pre-tests and post-tests to youth in a program. The Harvard Family Research Project has some excellent resources on methods and design.
  • Analyzing evaluation data with the help of youth. Youth will be able to add their own perspectives on understanding data and in making changes to a program.

Evaluation is one natural way to determine if your program is going in the right direction or just going insane. How are you finding ways to embed evaluation in your daily practice? Do you have any tips to make it easier or more natural?

-- Samantha Grant, assistant Extension professor, program evaluation

15 Comments

Kate Walker said:

Nice post, Sam! I regularly hear people make a distinction between “Big E” and “little e” evaluation. You make a nice case for embedding “little e” evaluation or evaluative thinking into our daily practice. We’re always gathering information and making judgments about the merit and worth of a program. We check in, get feedback, take stock, and adjust. A great resource is Yoland Wadsworth’s book, “Everyday Evaluation on the Run” – it offers a variety of strategies for non-evaluators to build evaluation into busy, everyday practice.

Samantha Grant said:

Thanks for sharing the resource, Kate. I also appreciate your distinction between big E and little e. As practitioners I think we have to keep both on our radar, but the little e activities help us to remain responsive to our programming. A good educator has natural evaluation skills, and stating this helps me to remove barriers when working with individuals who don't see the value of evaluation or are intimidated by evaluation.

Others, do you have great resources to build evaluation into your practice?

Josey Landrieu said:

Great post, Sam! Another resource that I got not long ago is "Collaborative Evaluations" by Liliana Rodriguez-Campos. Her approach is guided by the need to involve key individuals and groups in a meaningful way throughout the evaluation process. She says, "If successful, evaluators using this model should greatly improve the utility, feasibility, propriety, and accuracy of their evaluations." I'm eager to start reading and hoping to implement some of her ideas.

Kate also shared a great resource!

Samantha Grant said:

Josey,

Evaluators who focus their practice on collaborative, participatory, or empowerment evaluation (like Liliana Rodriguez-Campos, whom you discussed) have an evaluation theory that meshes well with practitioners, as it is built on the idea that evaluations should be useful. What you give up in "control" of your evaluation, you gain two-fold through the investment in the process. Those of you who are looking for ways to involve your staff in the process should check out information from evaluators with these perspectives. A personal favorite of mine is "Youth Participatory Evaluation" by Kim Sabo Flores. It shares concrete strategies to complete evaluations with youth driving the process.

jennifer skuza said:

Hi Sam –

Thanks for the interesting blog; it is chock-full of resources on building evaluation into program design. It is important for youth workers, and other stakeholders, to know the difference a program is making in the lives of young people and to know the effectiveness of their programs so improvements can be made.

I think it is equally important for youth to know the progress they are making in programs based on standards that are important to them. Could you comment on this angle of evaluation?

Sam Grant said:

Jen,
As you noted, many of the evaluation angles that you discussed are equally important for program design. I see your idea of youth driving the standards for a program as an equally important program design element. Anytime that youth can be involved and invested in planning their learning (and then helping with the evaluation component), there are gains in program investment as well as growth in youth development and leadership skills.

I know that the Urban program does an excellent job of involving youth in the process. Do you have any tips to share?

Erin Harris said:

I am a researcher at Harvard Family Research Project. Thanks for directing your audience to our evaluation resources! One way youth have the potential to be involved in the evaluation process is by using digital media to collect evaluation data—youth are very adept at using these new technologies, which can be used as tools to ensure that their voices are represented in the evaluation.

I also strongly agree that evaluation should be embedded in the program's culture, and not seen as an add-on. We are developing a series of tutorials for out-of-school time practitioners on how to conduct an evaluation—in this guide we recommend committing to an ongoing evaluation strategy to promote continuous learning and improvement, rather than seeing evaluation as a one-time event for accountability. We hope to pilot this tool on our website this fall, at which time we will seek feedback from users on how we can make it more useful.

Joyce Walker said:

As I read this whole string of commentary -- and particularly Erin's reference to young people using digital media -- I think how valuable it is to tap into "What's going on here?" and "Is this working?" on a regular basis. Very different from a once-a-year accountability evaluation too often aimed at documenting "Is what we're doing to/for them changing them or making the difference we think is important?"

It seems to me that by involving young people and allowing them to frame the questions, we come closer to knowing what really matters to them, why they participate, what they are really learning. My bias is that both research and evaluation need to attend to internally generated questions of practice (for improvement) at least in equal measure to externally generated questions (for stakeholder accountability). It's both, I know, but I'm very drawn to the power of the little "e".

Kate Walker said:

Thanks for your comment, Erin. HFRP has so many great evaluation resources – I can’t wait for this new tutorial series! I agree with you and Joyce about technology as a hook to capture youth voice and engage young people in evaluating their programs. Do you have any favorite tools or resources?

jennifer skuza said:

Hi Sam -

Thanks for your response. I was thinking about assessment tools and techniques that could help youth gauge their own growth and learning. I shared some of those in the following blog. I am interested in your ideas and remarks on that angle of assessment.

Again, thanks for the content you provide in your blog.

Erin Harris said:

Kate, I'm glad you are looking forward to HFRP's evaluation tutorial series! One piece that I have found particularly useful related to youth engagement and technology is Henry Jenkins's paper on participatory culture. While not framed specifically around evaluation, his discussion of participatory culture is very relevant to evaluation, and has implications for youth involvement in the process.

And Joyce, I completely understand the competing pressures of evaluation for accountability vs. for improvement. However, I don't think that they necessarily need to be in conflict: when funders and other external stakeholders are really bought into the importance of evaluation for improvement, programs can work to create a coordinated set of evaluation methods and strategies that can meet both needs. I know, though, that reconciling the two can be a challenge!

Dale Blyth said:

Thanks, Sam - a very interesting discussion around evaluation or insanity or both! While I very much agree with the value and possibilities of little e evaluation and of an improvement approach that engages young people, I also think as a field we need to be mindful of accountability issues and their linkage to what we do every day. Erin's point that these can be coordinated is essential to note. The future of youth work depends on evidence of both an improvement and summative type, and on data at a youth and program level as well as a system and city level. One of the major issues in the field is not whether a program can make a difference, but whether we can make the variety of opportunities programs offer systematically available to a large enough set of young people to make a difference in their learning and development. This deals with issues of impact, but also access and quality. This will be the subject of my blog on 6-29, as I look at what driving with data means.
Dale

Samantha Grant said:

Dale,

Great points. I agree with you that we must also be focused on accountability; however, I try to start working with people by focusing on the small changes that they can make that result in big changes in the long run (both in terms of the actual program and in the way that practitioners approach program development and evaluation). Looking forward to your blog post that will continue this conversation.

Thanks for the rich discussion on this topic, everyone! There will be more evaluation conversations from me in the future, and I hope to see this woven into future posts and comments.

Samantha

Karen said:

I came across this great blog post via the SparkAction e-letter. I thought you would be interested in another resource developed by Jane Powers: web pages introducing youth participatory evaluation to practitioners. She discusses the principles, benefits and challenges, and resources available to those who want to involve youth as evaluators: http://www.actforyouth.net/youth_development/evaluation/evaluators/

Glad to learn about all these resources!

Karen

Samantha Grant said:

Karen, Thanks for joining in the discussion and sharing the resource. I especially like the Continuum of Youth Involvement in Evaluation and Research. Wouldn't it be great if we assessed this in every evaluation project that we conducted in youth organizations?


  • © 2014 Regents of the University of Minnesota. All rights reserved.
  • The University of Minnesota is an equal opportunity educator and employer. Privacy