My second presentation today is Investigating and Evaluating Program Outcomes.
1 What is outcome evaluation?
So, What is outcome evaluation?
1.1 Evaluation
Let's review the slide about Evaluation from my first presentation.
There are many different ways that people use the term 'evaluation'.
I use the word ‘evaluation’ in its broadest sense to refer to any systematic process to judge merit, worth, or significance by combining evidence and values.
1.2 Outcome evaluation
As to outcome evaluation, in a narrow sense it refers to examining how well a program is performing relative to its desired outcomes,
or whether, and how well, an intervention has met its goals.
In a more comprehensive sense, it should be a process that makes judgements about how well a program is doing against (predetermined and non-predetermined) objectives,
or how well a program is achieving (intended and unintended) outcomes.
Or, simply speaking, it is a systematic process to judge program outcomes by combining evidence and values.
That is, a more comprehensive outcome evaluation will consider unintended outcomes (positive and negative) as well as intended outcomes identified as objectives.
1.3 Investigate outcomes and evaluate programs
There are some differences between investigating outcomes and evaluating program outcomes.
Investigating program outcomes is a process to construct the outcomes of a program and figure out why and how these outcomes were achieved.
Evaluating program outcomes is a process to judge a program through its outcomes and their reasons, and to use the results to provide insights for progress.
You might hear some people use terms like "measuring outcomes" or "outcome assessment" to refer to investigating outcomes, and some people use the words "assess" and "evaluate" interchangeably. This can be confusing, and the word "measuring" is inappropriate for qualitative evaluation. Thus, here I use "investigating outcomes" and "evaluating program outcomes".
The former is more like the data collection, data analysis, results, and discussion parts of a research project, while the latter needs more effort on the discussion and implication sections, if you think of it as a research report.
2 Why conduct outcome evaluations?
Well, why should NPOs conduct outcome evaluations?
Experts stress that [outcome] evaluation can:
Improve program design and implementation.
It is important to periodically assess and adapt your activities to ensure they are as effective as they can be. Evaluation can help you identify areas for improvement and ultimately help you realize your goals more efficiently. Additionally, when you share your results about what was more and less effective, you help advance your field.
It can also help you demonstrate program [results].
Evaluation enables you to demonstrate your program’s success or progress. The information you collect allows you to better communicate your program's [results] to others, which is critical for public relations, staff morale, and attracting and retaining support from current and potential funders.
However, there are some situations where evaluation may not be a good idea, such as:
when the program is unstable, unpredictable, and has no consistent routine.
when those involved cannot agree about what the program is trying to achieve.
when a funder or manager refuses to include important and central issues in the evaluation.
3 Some confusing concepts
Before we go into the general process of an outcome evaluation, we need to clarify some confusing concepts.
3.1 Formative or summative
First, Formative or summative.
[Outcome] Evaluations can be formative or summative.
Formative evaluations, conducted during a program, offer feedback to the educator about what is going well and what adjustments should be made.
Summative evaluations [investigate] final outcomes after a program has ended and can be used to inform funders and administrators about the worth of the program and help them make decisions about whether to continue it.
However, many evaluations fall somewhere in between formative and summative because educators and funders want both to make improvements and document outcomes.
3.2 Process or outcome
Second, Process or outcome.
Process or Implementation Evaluation
Examines the process of implementing the program and determines whether the program is operating as planned. Can be done continuously or as a one-time assessment. Results are used to improve the program.
A process evaluation of an EE program may focus on the number and type of participants reached and/or determining how satisfied these individuals are with the program.
Outcome Evaluation
Investigates to what extent the program is achieving its outcomes. These outcomes are the […] changes in program participants that result directly from the program.
For example, EE outcome evaluations may examine improvements in participants’ knowledge, skills, attitudes, intentions, or behaviors.
3.3 Program satisfaction or program outcome
Third, Program satisfaction or program outcome.
Satisfaction with a program (not positive attitudes toward the substantive issues addressed through the program, such as a school trip program making pupils think that contact with nature is pleasant) is an overall judgement of a program based on the focus and aims that the stakeholder put into the program.
It is an output rather than an outcome, because it tells nothing about what changes the program has brought about, though it may result from the achievement of certain program outcomes that the stakeholder values. Stakeholders may also be satisfied because of the achievement of other outputs.
4 The general process of an outcome evaluation
Let's see The general process of an outcome evaluation.
As we discussed in my first presentation, a general process may contain several phases and steps. This process is from My Environmental Education Evaluation Resource Assistant (MEERA).
5 Phase 1
Phase 1 is Understanding Your Program.
That is, Developing a good understanding of how your program works and identifying the resources you have for evaluation.
5.1 Step 1: Before You Get Started
Before You Get Started, you should consider some important issues.
5.1.1 The relationship between program and evaluation processes
First is the relationship between program and evaluation processes. We can use the training and HRD process model to figure it out. These two processes are intertwined.
After the needs assessment or a front-end evaluation, program objectives will be defined. Meanwhile, at this early stage of a program, its evaluation criteria should be selected. The front-end evaluation results may also be used as pre-test data in your evaluation if you choose a "pre- and post-tests" design for it. Whether or not you choose this kind of design, the results of a front-end evaluation will be compared with your outcome evaluation results. So, actually you should think about your evaluation and program development carefully and simultaneously.
Needs assessment: determines the need for a project or program by considering aspects such as available resources, the extent of the problem and the need to address it, audience interest and knowledge, etc. This is also known as a front-end evaluation.
5.1.2 Issues to think about
Other issues that you should think about before you start your outcome evaluation include:
What types of resources will you need to invest in the evaluation?
Do you have sufficient experience to carry out the evaluation?
How much time are you willing and able to dedicate to the evaluation?
How much are you willing to spend on the evaluation?
How to find and work with an internal or external evaluator?
How to involve program managers, staff and others?
How to obtain approval for the evaluation and consent from participants?
5.2 Step 2: Clarify Program Logic
5.2.1 Logic model
The Logic model is often used for analyzing Program Logic.
Resources → Activities → Outputs → Outcomes → Impact
It says that NPOs leverage resources to carry out activities that produce certain outputs. These outputs lead to the sought-after results. Results are measured through short-term, intermediate, and long-term outcomes, which lead to an ultimate impact.
5.2.2 Outputs, outcomes, impacts
We should make the meaning of Outputs, outcomes, impacts more clear.
Inputs are the resources used by the program
Examples: program staff, funding, time, external partners, volunteers, materials.
Outputs include activities, audiences, and participant satisfaction.
Activities are what the program does with its inputs to fulfill its mission.
Examples: events, informational materials, products, workshops, trainings, conferences, exhibits, curricula.
Audiences refers to the participants, clients, or customers reached by the program.
Examples: number of people attending an event, workshop, or training; type of participants (grade levels, ages, genders, etc. of participants)
Satisfaction refers to participants' satisfaction with their experience in the program and how it was implemented.
Outcomes are the results of your program. They are the changes that take place during or after the program for participants.
Examples: awareness, knowledge, skills, attitudes, behaviors.
Impacts are the changes that the outcomes bring about. They are the ultimate goals of the program.
Examples: improved environmental quality, community development, or individual wellbeing resulting from the behaviors that participants formed through your program.
5.2.3 Theory of change
Figuring out what the outputs are and what should be listed as outcomes or impacts of your program is not enough. You should further clarify the relationships between your activities and your program outcomes. That is, you may want to create a theory of change for your program.
A theory of change is simply a road map that shows what you want to change and the steps to get there.
It has three major components.
First is the ultimate outcome; that's the big change you want to see. It might be something like changing environmental behaviors.
Second, the intermediate changes or outcomes; these are the steps that get you to your ultimate outcome. It might be things like efficacy for environmental skills or attitudes.
Finally, the program activities. These are what you actually do with the participants in your programs.
After that, you should create a narrative. It contains the problem you want to address, the context of your organization, and your objectives and activities.
And, most important, you should state your reasoning or logic about why the intermediate outcomes lead to the ultimate outcomes. You might say, for example, if I can change children's attitude, like forming positive attitudes toward nature, and that's your intermediate outcome, then they will change behavior and do something to protect nature. Behavior is the ultimate outcome.
The other kind of reasoning is why activities lead to outcomes. So you might say for example, if I take children on a field trip to a city park then they'll change their attitude about nature.
5.2.4 Why make a theory of change?
So, why make a theory of change?
It helps you to Plan and improve programs.
Helps grasp program objectives: what exactly do you want to achieve with this program?
Helps identify your assumptions: why, based on your experience and the research, do you believe that changing X will lead to a change in Y? For example, why do you believe that changing attitudes will lead to a change in behavior, or that taking kids on a field trip to the park will change their environmental attitudes?
Helps engage in the discussion about your objectives, actions, and assumptions, helping you do these other things like improve your program.
Forces you to start with the outcomes, with the ultimate thing that you want to change, the thing that's at the top of your diagram, instead of with your activities. The activities are there to move toward that ultimate outcome.
It also helps you to Evaluate program outcomes.
Helps clarify what outcomes you will evaluate.
Helps answer how likely these outcomes are to be achieved and whether the program is worth an outcome evaluation.
Helps explain the results of evaluation and why a certain result occurred the way it did.
Helps you to use the results of evaluation for improving your program.
5.2.5 How to make a theory of change?
As to How to make a theory of change?
For the diagram, you identify your ultimate goal; do what's called backwards mapping to identify your intermediate goals and then identify the activities to reach those intermediate goals. Then you also write up your narrative.
6 Phase 2
Phase 2 is Planning Your Evaluation.
You will create and clarify your evaluation questions, and you will identify how to collect data to answer these questions.
6.1 Step 3: Set Goals and Indicators
6.1.1 Objectives/outcomes in environmental education
First, what Objectives/outcomes can a program achieve? Take environmental education for example. You may refer to them for developing and investigating your program's objectives/outcomes.
6.1.2 Three criteria for defining of objectives
When developing your program's objectives/outcomes, please remind yourself that the definition of an objective/outcome should satisfy three criteria:
Each objective should contain only one idea. An objective statement that contains two ideas should be divided into two parts, each idea expressed as a distinct objective.
Each objective should be distinct from every other objective. If objectives overlap, they may express the same idea and so should be differentiated.
Objectives should employ action verbs (for example, increase, improve, reduce), avoiding the passive voice.
6.1.3 Develop evaluation questions
Based on your program objectives/outcomes, you can develop evaluation questions. You can:
Investigate all outcomes in an open, emergent, emancipatory way.
Select some specific outcomes to investigate their properties and dimensions.
Here we come to a Question.
People from different backgrounds have very different assumptions about the role of NPOs/NGOs. Advocacy, environmental-protection, or social justice programs may focus on behavior change. So the outcomes of their programs can be quite educational, such as knowledge, skills, attitudes, behaviors, and self-efficacy (listed in a former slide). How about those service-delivery programs? What kinds of outcomes will they prefer?
6.1.4 Outcomes classification framework in different fields
In my opinion, the types of outcomes of environmental education, capacity-building, and service-delivery programs have something in common. Referring to and adapting the outcomes classification framework proposed by the World Bank, I use it to analyze the three and show their similarity.
Definition
The outcome level is where the shift happens from outputs to changes in the status quo or in behavior, as a consequence of the outputs.
Environmental Education
Form
• Environmental awareness
• Environmental knowledge
• Environmental skills
• Environmental attitudes
• Environmental behaviors
Encourage
• Social interaction
Attain
• Self development
Capacity-building
Gain new skills
• Capacities learned: (worth within the training, such as “How much have been learned?”)
Perform capabilities
• Capacity behaviors: (worth in the workplace after the training, such as “What new skills can the trainee demonstrate?”)
Service-delivery
Gain service access
• Service-using awareness, knowledge, skills/self-efficacy, and attitudes: (perceived affordances)
Perform service access
• service-using behaviors: (citizens access to better-quality services)
6.1.5 Outcomes and impacts classification framework
We can add the impacts of the three kinds of programs in this framework.
Impact on the issue level is where the shift happens from a change in the status quo or in behavior to meaningful changes in the lives of ultimate beneficiaries or in the environment of concern.
Environmental Education
Solve the environmental issue of concern through behaviors, improving
• Environmental quality
• Community development
• Individual wellbeing
Capacity-building
Apply new capabilities to solve work problems
• Work performance: (the value of training in terms of indices of performance, such as operational, financial, and personnel indices)
Service-delivery
Improve well-being
• Improved service access or improved access to better-quality services improves well-being [physical and psychological].
The broader impact level is where the shift happens to more sustained changes in delivery, governance, or citizens’ well-being, and to positive changes in the broader environment.
Environmental Education
Affect broader environment and environmental issues.
Capacity-building
Worth on criteria not directly related to work performance, based upon social, moral, political, or philosophical criteria.
Service-delivery
Worth on criteria not directly related to work performance, based upon social, moral, political, or philosophical criteria.
Let's see some examples from the World Bank Group.
Judgments of the worth of a program can be made within the outcome level, the impact level, or the broader impact level.
Mind the gaps between behaviors and impacts on the issue, and between impacts on the issue and broader impacts.
6.2 Step 4: Choose Design and Tool
6.2.1 Approaches
Two main Approaches.
Quantitative approach: uses numerical data to make sense of information.
Examples: scores on a test or survey answers on a five-point scale.
Allows collection and analysis of large amounts of data relatively quickly.
Analysis is perceived to be less open to interpretation and thus typically considered more “objective”.
Qualitative approach: uses narrative forms, such as thoughts or feelings, to describe what is being evaluated.
Examples: observations, interview transcripts, focus groups, photographs, or videotapes.
Can provide rich context for examining participants’ experiences and how a program operates.
Allows for questions to be investigated in-depth. [W: Instead of pretending to be “objective” as quantitative researchers usually do, qualitative researchers show readers how they managed their subjectivity and clarify as far as possible their perspectives and “pre-judice” which are inevitable and vital for activities involving human interpretation and criticism.]
A quantitative question may look like:
Question: What is your level of agreement with the following statement: "[I think snakes should be protected]"
(Scale: 1 = Disagree … 10 = Agree)
A qualitative question may look like:
Question: [How did this trip affect your views on protecting snakes?]
Answer: [I used to think snakes were bad, they deserved to die. But now I think they are important and timid, and should not be disturbed or bullied by people.]
Some Further reading for you.
What goals can qualitative research help you achieve?
It explained briefly that "Qualitative and quantitative methods are not simply different ways of doing the same thing. Instead, they have different strengths and logics, and are often best used to address different kinds of questions and goals".
7 Phase 3
Phase 3 is Implementing Your Evaluation
7.1 Step 5: Collect Data
7.1.1 Qualitative Data sampling
The intent of qualitative research is to gain an in-depth understanding of a situation. As such, most sampling strategies involve purposefully selecting individuals according to criteria that the evaluator considers most valuable for answering the evaluation questions.
Below are examples of purposeful sampling strategies based on those suggested by Patton (1990):
Extreme or deviant case sampling
Learn what works or doesn't work for your program by studying the extremely atypical cases -- e.g., either those individuals for whom the program is unusually successful or those for whom the program seems to have no impact.
Maximum variation sampling
This strategy captures common outcomes that cut across a variety of participants or programs. For example, if you provide outdoor EE to schools of varying socio-economic status, you might try to collect data from schools at each socio-economic level.
Homogeneous sampling
As the name implies, this sampling method seeks out individuals who are homogeneous with respect to certain variables (e.g., teachers with fewer than 5 years' experience, 3rd-grade girls, minority students in the free-lunch program, etc.).
Typical case sampling
Choosing a sample of "typical" cases allows you to describe the average experience of your participants to someone not familiar with your program. If your program is implemented in multiple schools, for instance, you would select the schools where the program seems to have an average impact and avoid the ones that exhibit unusual or extreme results.
Stratified purposeful sampling
This strategy involves sampling from below average, average, and above average cases in order to capture the main variations in your program's outcomes.
7.1.2 Qualitative data collection tools
Interview
Focus group
Standardized open-ended questionnaire
Drawing
Statue theatre game
Observation
7.1.3 Interview
To use interviews, you need to develop an interview outline (or some other mechanism/skill that ensures rapid, effective, and coherent responses in your interviews).
Here is an interview outline example.
The research question is: How did an EE program influence the environmental learning, social interaction, and self development aspects of participants?
Data generated from interviews may look like these. You will get text data, for example, what the interviewees said. You will read them and group them under sub-themes and themes.
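As a minimal sketch of this grouping step, the snippet below tallies coded interview excerpts under sub-themes and themes; all quotes and theme names here are invented for illustration, not from an actual evaluation.

```python
from collections import defaultdict

# Hypothetical coded interview excerpts: (quote, sub-theme, theme).
coded_excerpts = [
    ("I learned how mangroves filter water", "ecosystem knowledge", "environmental learning"),
    ("We helped each other find the crabs", "peer cooperation", "social interaction"),
    ("I felt braver walking the mudflat trail", "confidence", "self development"),
    ("I can now name three wetland birds", "species knowledge", "environmental learning"),
]

# Group quotes under theme -> sub-theme for reporting.
grouped = defaultdict(lambda: defaultdict(list))
for quote, sub_theme, theme in coded_excerpts:
    grouped[theme][sub_theme].append(quote)

for theme, sub_themes in grouped.items():
    print(theme)
    for sub_theme, quotes in sub_themes.items():
        print(f"  {sub_theme}: {len(quotes)} excerpt(s)")
```

In practice the coding itself is the interpretive work; a structure like this only helps you keep track of which excerpts support which theme.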
7.1.4 Standardized open-ended questionnaire
Standardized open-ended questionnaire uses a questionnaire format but its questions are open-ended.
The example is to investigate the outcomes of an environmental education program.
Its open-ended questions are:
How does what you learned today in the program relate to your life?
If you were the instructor, what would you have taught from today’s lesson and why?
Participants will choose one question to answer.
7.1.5 Drawing (attitudes)
Drawing can be used to investigate participants' attitudes. This research attempted to answer the questions of what kind of stigma urban pupils attached to wild animals and how urban pupils’ perceptions and attitudes changed because of the lesson.
We asked participants to follow the prompt: ‘Please draw a scene with yourself and wild animals in it, and please write a paragraph describing the scene.’
Here are the data and results.
The pre-lesson drawings by our participants were discerned into four themes: Physical Abuse to Wild Animals, Wild Animals as Hazards, Ignorance of Wild Animals, and Wild Animals as Pets, which revealed the existing stigma that our participants attached to wild animals. This stigmatization was manifested in the form of visualized negative scenes around discriminatory behaviors toward wild animals.
The post-lesson drawing themes were identified as Not Hurting Wild Animals, Wild Animals Are Not Hazards, No Absence of Wild Animals, and Wild Animals Are Not Pets. In most of these drawings, indicators of stigma are absent. Most participants showed no physical bullying, spreading misinformation (i.e. diffusion of negative stereotypes), restriction of resources (i.e. ignorance), and unwanted benevolence toward wild animals in the scenes that they drew.
7.1.6 Drawing (concepts/mental model/knowledge)
Drawing can also be used to investigate participants’ mental models. We used it in answering the research question of how participants’ mental models of the local mangrove ecosystem changed through a half-day school trip at a nature reserve.
We asked participants: "Please draw anything you can think of when you hear the phrases ‘Futian Mangrove Nature Reserve’, and please write a paragraph explaining what you drew."
Here are the data and results.
We sorted the participants into three groups based on their pre-drawing scores and found that all three groups could enhance their knowledge about the mangrove ecosystem, which means their mental models were further developed not only on average but also when individual differences are considered.
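A minimal sketch of this grouping-and-comparison step, using made-up scores rather than our actual data:

```python
# Each entry is (pre_score, post_score) for one participant; real drawing
# scores would come from a scoring rubric applied to the drawings.
scores = [(2, 5), (3, 6), (5, 7), (6, 8), (8, 9), (9, 10)]

# Sort by pre-lesson score and split into three equal-sized groups
# (low, middle, high prior knowledge).
ranked = sorted(scores, key=lambda s: s[0])
n = len(ranked) // 3
groups = {"low": ranked[:n], "middle": ranked[n:2 * n], "high": ranked[2 * n:]}

# Compare mean pre- and post-scores within each group to see whether
# every group gained knowledge, not just the average participant.
for name, members in groups.items():
    pre = sum(s[0] for s in members) / len(members)
    post = sum(s[1] for s in members) / len(members)
    print(f"{name}: pre={pre:.1f}, post={post:.1f}, gain={post - pre:.1f}")
```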
7.1.7 Statue theatre game
You may want to use some creative theatre games for generating evaluation data. For example, some organizations conduct team-building programs using theatre activities, which are very popular in enterprise training. The statue theatre game is a relatively simple and easy one. In this activity, participants shape their bodies to create a frozen “statue” that represents a situation, feeling, or idea.
Directions:
Invite participants in pairs to find their own space in the room.
Introduce the activity.
One person (A) is the sculptor and the other (B) is the block of clay. In a moment I will give you a word/theme/situation to explore. The sculptor’s job is to shape the clay’s body to create a frozen statue that represents the sculptor’s response/thoughts/feelings to the theme. ‘B’ begins by standing in a neutral position; you can let ‘A’ slowly move your body into a new position, or you can let ‘A’ demonstrate their statue while you imitate and hold it. Facial expressions can be shown by the sculptor for the clay to copy.
Offer the prompt.
For example: Make a statue of one thing you learned from today’s activity. Or, Make a statue that shows the relationship between you and wild animals. Or, Make a statue that shows the situation when you encounter wild animals. Or, Create a statue that represents a XX moment, concept or person.
Give participants a moment to think.
Count backwards from ten to one while they create their images.
Once statues are made, choose a way to look at the images.
Ask half the group to relax and half the group to hold their statues and take time to look at and interpret the statues with the other half of the group; then switch. Or, single out some statues to remain frozen while others relax, to invite and focus further interpretation and discussion. You may like to give the sculptors paper and pen so that they can write a title or caption for their masterpiece and put it in front of the statue.
Switch A and B roles for another round.
Reflection:
What sort of statues did we make?
How did we use our bodies to represent the idea we were working on today?
How did it feel to take on that pose in your body?
What did we discover about our inquiry through this activity?
Data will be generated in participants’ interpretation and reflection.
7.2 Step 6: Analyze Data
7.2.1 Data analysis tools
Thematic analysis
Grounded theory coding procedures
7.2.2 Characteristics of qualitative analysis
Qualitative analysis is a cyclical and iterative process, with many rounds of investigating evidence, modifying [claims], and revisiting the data in a new light. You will need to reexamine data repeatedly as new questions and connections emerge and as you gain a more thorough understanding of the information you collected.
Throughout the process of examining and reexamining data, concentrate on the following:
Patterns, recurring themes, similarities, and differences
Ways in which these patterns (or lack thereof) help answer evaluation questions
Any deviations from these patterns and possible explanations for these
Interesting or particularly insightful stories
Specific language people use to describe phenomena
To what extent patterns are supported by past studies or other evaluations (and if not, what might explain the differences)
To what extent patterns suggest that additional data may need to be collected
7.3 Step 7: Report Results
7.3.1 Analyze your data → Analyze your results
Results: analyze and make arguments about data; what do you construct in the data/what do you think the data said to you.
Discussion: analyze and make arguments about results; make sense of it, and communicate to others what you think your results mean.
7.3.2 Reports
Consider providing your results through one or more of the following reporting opportunities:
Formal evaluation report
Oral briefing
Newsletter article
Website feature
Popular press article
Conference presentation
Journal article
…
8 Phase 4
Phase 4 is Improving Your Program.
Your evaluation results can help you decide to expand successful activities, discontinue or modify those that are not working as well, or take an entirely new approach to achieving a program goal.
Findings from one evaluation can even be used to initiate another evaluation.
Results may give rise to an improved model of your program’s logic, generating new evaluation questions, and helping to kick off another cycle of evaluation and program development, leading your program to achieve greater success.
8.1 Step 8: Improve Program
8.1.1 How to use evaluation results to benefit programs?
Improve your program's visibility and outreach
Improve how your program is delivered or implemented
Improve the content of your program
Inform future evaluations
Help advance the field
8.1.2 How to ensure the report is used for improvement?
Finalize and distribute your report promptly
This way you will not miss opportunities to influence important decisions. If you delay distribution, your findings may no longer be relevant when they are shared. There are some exceptions, that is, times when you may want to wait. For example, it may make sense to share your report just before an annual planning meeting or other important decision-making meeting, when participants will have a concrete reason to pay close attention to the findings.
Be strategic about how you share your results
Directly communicate your findings to those who you want to use the information and do so in ways that will appeal to them. For example, rather than disseminating an evaluation report and hoping it will be read, develop tailored presentations of your results for specific individuals or groups. Provide information that is most relevant to stakeholders’ priorities. Suggest ways that you plan to address their priorities and include specific actions they can take to help implement the recommendations.
Follow up!
After sharing your report or recommended changes with intended users, make sure they have a chance to discuss to what extent and how to best implement them. One way you can help ensure the recommendations are acted upon is by coming up with an implementation plan and indicating how you will help to carry out the plan.
8.1.3 Other ways to get the most from the evaluation
Evaluate the evaluation!
At this stage of the process, take some time to reflect on the evaluation. What went well? What would you do differently if you could do it over? Specifically, what would you do differently to help ensure that recommendations will be acted on? Reflecting on the evaluation and its influence will help to improve future evaluations.
Document changes to the organization that may have occurred as a result of the evaluation
Not only can the evaluation’s recommendations, when acted on, help to improve your program but so can the process of conducting the evaluation. A positive evaluation experience can stimulate improvements in organizational culture, teamwork, and relationships. The evaluation process typically increases participants’ understanding of the program, and increases their motivation to help the program succeed. The capacity-building effects of the process can lead participants to develop lasting skills and habits in critical thinking, problem-solving, leadership, and in the practice of evaluation itself. Documenting benefits to staff and the organization as a whole will help to justify further investments in evaluation.
9 Value learning outcomes
An additional but important step is Valuing learning outcomes. It means placing dollar values on learning outcomes.
9.1 Return on investment (ROI) of program
It is especially popular for training and development initiatives.
It is the ratio of the net benefits of an investment compared to its total costs.
It can also be applied to many other types of programs.
It requires the evaluator to place dollar values on [program benefits].
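As a hedged illustration of the ratio described above, the snippet below computes ROI as a percentage from benefit and cost figures; the numbers are hypothetical, not from any real program.

```python
def roi_percent(total_benefits: float, total_costs: float) -> float:
    """ROI as a percentage: net benefits divided by total costs.

    Assumes program benefits have already been converted to dollar values.
    """
    net_benefits = total_benefits - total_costs
    return net_benefits / total_costs * 100

# Hypothetical figures: a training program costing $40,000
# that produced $90,000 in monetized benefits.
print(roi_percent(90_000, 40_000))  # → 125.0
```

An ROI above 0% means the monetized benefits exceeded the costs; the hard part, as the next section shows, is the conversion of benefits into dollars.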
9.2 Convert program benefits into monetary value
Ask stakeholders how much they think something is worth in dollars
Ask the same person to express how certain they are of their estimates
Convert the level of certainty (a percentage) to a decimal
Multiply the dollar figure (perceived value) by the decimal (level of certainty)
Arrive at a conservative conversion estimate (in dollars)
Qualitatively report any unconverted outcomes along with the ROI figure
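The conversion steps above amount to a small calculation, sketched below with hypothetical figures.

```python
def conservative_value(perceived_value: float, certainty_pct: float) -> float:
    """Discount a stakeholder's dollar estimate by their stated certainty.

    certainty_pct is the stakeholder's confidence as a percentage (0-100);
    it is converted to a decimal and multiplied by the perceived value,
    yielding a conservative dollar estimate.
    """
    return perceived_value * (certainty_pct / 100)

# Hypothetical estimate: a manager values improved teamwork at $10,000
# but is only 70% certain of that figure.
print(conservative_value(10_000, 70))  # → 7000.0
```

Outcomes that stakeholders cannot credibly price should stay out of this calculation and be reported qualitatively, as the last step says.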
9.3 Valuing deliverables or material impacts is not enough
As we have seen before, a program can have outputs or deliverables, learning outcomes, impacts on the issue, and broader impacts.
Program deliverables and material impacts may be appealing and relatively easy to convert into monetary value, because they are considered tangible and “solid”. More intangible or “liquid” results, such as learning/empowerment outcomes, may face more challenges in monetization. However, they should not be neglected when valuing the worth of a program. The worth of the work of a program coordinator or trainer should be judged fairly; focusing only on the worth of program deliverables may lead to exploitation of their labor.
Gaps often appear between learning outcomes (behaviors) and impacts on the issue, even when other influences have been controlled for and separated from the program (if they really can be) during analysis. This is especially true for programs designed not under the vocational neoclassical ideology of training specific, predetermined skills, but under a liberal-progressive or socially-critical ideology, which seeks to provide participants with opportunities for positive development and to encourage autonomous thinking, active dialogue, and the self-determined establishment of their own objectives and plans for action. Impact is not an appropriate indicator from which to infer learning outcomes. Focusing only on the worth of impacts may underestimate the program’s benefits to participants.
9.4 Notes
It is important to recognize that some outcomes cannot be easily measured and converted to monetary values.
Attempting to put a dollar value on outcomes such as customer satisfaction, a less stressful work environment, and employee satisfaction can be extremely difficult, and the results may be of questionable value.
Trying too hard to attach a business value may call into question the credibility of the entire evaluation effort.
As a result, the Phillips methodology recommends that evaluators do not try to convert those “soft” business measures, and instead report them as intangible benefits along with the “hard” business improvement outcomes such as increase in sales, reduced defects, time savings, etc.
Question
Why do some NPOs not evaluate their program outcomes? How can you help or encourage them to do so, if they need to?
What's your opinion on valuing program outcomes?