Finding a Proxy for Impact
Many of us in the social sector are obsessed with measuring impact. I count myself among them, off and on, throughout my past 10 years at GlobalGiving.
GlobalGiving is a funder, connector, and trainer of nonprofits around the world. Since the early days in 2002, it’s been impossible (and undesirable) for GlobalGiving to collect extensive outcome or impact data from the nonprofits that fundraise through our marketplace. So instead, we’ve invited our vetted nonprofit partners to be transparent with donors in whichever format they choose: stories, photos, or quantitative data. This approach is an expression of trust.
But we frequently get asked by donors to compare or recommend nonprofits. So we sought to learn what characteristics high-performing organizations share, as a proxy for effectiveness. Our hypothesis reflected learning from the corporate sector: organizations demonstrating learning behaviors—“Listen, Act, Learn. Repeat.” (LALR)—are more likely to be effective. “Learning orientation” became our One Metric to Rule Them All, so to speak.
Over several years (and with input from our nonprofit partners), we built an incentive program, awarding points and badges for demonstrating learning orientation. Organizations with more points were more visible to donors through our algorithms, and more likely to benefit from the millions of “extra” dollars we drive through our marketplace each year.
Our own Listening, Acting, and Learning
In some ways we were successful. We were able to learn stories of organizations that took LALR to heart and actively sought to improve. We were proud to highlight relatively unknown organizations that had demonstrated a commitment to learning.
We also grew more aware of the pitfalls of this One Metric to Rule Them All approach. We knew folks would try to game the system, but the extent of it frankly surprised us. While many of our partners did legitimately participate in our incentive program, just 2% of organizations earned 80% of the points awarded. On more than one occasion, nonprofits actually built internet bots to earn points.
So our incentives worked, but they were problematic. Instead of freeing up time spent on fundraising (and rewarding learning organizations with more funding), we were simply requiring them to do more work for us to get extra cash. Many of our nonprofit leaders told us they appreciated being able to do something to earn higher visibility, but others also said it was a waste of their time.
We also conducted an impact study. We did not find evidence that we were having a measurable impact on organizations’ learning behaviors, but we did find evidence that we were having a measurable impact on organizations using feedback to become more community-led. Our community continually reinforced the value of community-led-ness.
So our Theory of Change began to shift to focus on community-led change. We updated our mission statement: transforming aid and philanthropy to accelerate community-led change. Then we asked some fundamental questions, investing in research with community members around the world to understand, “What does it mean to be community-led?”
Some of us went into this community-led research project hoping it would help us get to a new Metric to Rule Them All. Could we understand how to measure/rate organizations’ community-led-ness as a proxy for effectiveness, predicting impact?
But again, nonprofit leaders in the research warned us against this type of measurement and incentivization. The community leaders helped us create a tool for understanding community-led-ness, but then they explicitly said, “and you should never tie it to funding.” They said it wouldn’t be fair to compare nonprofits given how contexts vary among 170+ countries, and that people would just write what we wanted to read. Instead, they suggested, GlobalGiving should simply offer tools enabling rich conversations to the organizations that want them.
Putting “Learning Cycles” To Rest
Seven years after taking on this effort to develop a proxy for effectiveness and incentivize organizations to spend time “improving” rather than fundraising, we’re ending it. We’re moving away from incentivizing our partners with effectiveness points. And perhaps more importantly, we’re no longer searching for One Metric to Rule Them All as a proxy for effectiveness or predictor of impact.
Instead, we’re focusing our impact measurement first on our own behavior change. How are we working in a way that enables organizations to be more accountable to their communities (rather than accountable to us as an intermediary or funder)?
Tactically, this means building and modeling community-led capabilities (in our hiring, nonprofit and corporate partner selection, program development, grantmaking, internal and external communications, etc.), and providing tools and resources, at our partners’ request, that enable them to do the same. It means using the tools developed by community leaders to measure the extent to which we’re operating in an equitable, trust-based, community-led way, as described by community members.
We’re shifting away from measuring our impact as a sum of the impact of our partners, and toward measuring how we transform ourselves. This is how we believe we’ll model transformation for aid and philanthropy.