All along the way, social impact organizations ask, and are asked, a very important question: “Are you measuring and achieving significant and positive changes for those you serve?” Many nonprofit organizations have invested significant time and money in systems to track and monitor program implementation as a way of answering that question, and a smaller group of organizations is also investing in objectively measuring results, a.k.a. “outcomes.” To date, these efforts have been laden with challenges. In this blog series, I will present the most common and unaddressed challenges in impact measurement, as well as an innovative and cost-effective solution for addressing them.

The program metric that impact organizations collect most often is how much service has been delivered. More specifically, program implementers are asked to monitor and document how much service they deliver, often at every key intervention point along the way. These data are then “rolled up” by someone else, perhaps with the aid of technology, to synthesize and communicate who showed up, how much service was delivered, and sometimes how the implementers performed. Across the social sector, these implementer-centric measures of productivity, or “outputs,” are where much of our evaluation effort is spent.

What we are not measuring is each beneficiary’s perception of the quantity and quality of their program experience. I am NOT talking about their “satisfaction” with the provider or organization. What I AM referring to are specific questions posed to each beneficiary about their perceived dosage and quality of engagement with each of the program’s design elements. I often like to say, “When it comes to any intervention that has to go through the mind, the recipient is the final decider on dosage and engagement, not the implementer.” We aren’t delivering pills, after all. If you are skeptical of this claim and view “self-report” as biased, I simply ask you to consider children in a classroom. If a teacher delivers a lesson as intended, exhibiting all of the best practices in doing so, do all the kids in the classroom get exactly the same amount and quality of uptake? Not likely: depending on their mindset and attention during the lesson, some are fully engaged, some are mostly engaged, some are distracted, and some might even be asleep.

Another way to view this is to imagine that the beneficiary has to pay for the service in full, rather than having it charitably paid for by someone else. In this “beneficiary is the buyer” analogy, whom should the business interview, ask, or observe to determine the uptake and quality of the service: the buyer (i.e., the beneficiary) or the implementer? The answer seems obvious: the buyer. If a company creates a product, delivers it, and puts it on the shelf, do you ask those who deliver and stock the product about its usage and quality? Hopefully not. You ask the consumer.

Gathering program-experience data from the beneficiary can provide a less biased view than relying solely on an implementer to tell you what they provided and how much. Why? Asking implementers about the quality and quantity of their own efforts creates a potential conflict of interest: program dosage and the implementer’s performance (i.e., productivity) become intertwined. To honestly understand what works, we must also listen to the beneficiary on matters of dosage and quality of the program experience. I am not arguing for eliminating provider-based point-of-service tracking and monitoring, but rather for adding another important, and perhaps less biased, perspective: the beneficiary’s.

It is time to address our sector’s beneficiary blindness in gathering program data if we are ever to fully learn what works. Learn more about how Algorhythm is addressing this challenge through its iLearning Systems: http://algorhythmio.wpengine.com