Performance Indicators: Tips & Lessons Learned

“People operate off of beliefs and biases. To the extent you can eliminate both and replace them with data, you gain a clear advantage.” ―Michael Lewis, Author, Moneyball: The Art of Winning an Unfair Game

When it comes to tracking progress towards your organization’s desired impact, indicators are essential. They serve as a framework for monitoring effectiveness and provide actionable data to drive continuous improvement.

Michael Lewis’s Moneyball offers a powerful example of how data and indicators can be used strategically, and many of the book’s lessons are transferable to the public and nonprofit sector. Lewis tells the story of Billy Beane, general manager of the Oakland A’s from 1998-2016. When Beane started, the A’s had limited financial resources for acquiring talent. To make the most of what they had, rather than drafting players based on gut instinct or observation, Beane and his team used data to pinpoint and recruit the most valuable players.

But he didn’t use conventional metrics like home runs and RBIs; instead, he prioritized less flashy measures (like on-base and slugging percentage) that were actually stronger predictors of success. Beane’s strategy was widely replicated because it proved enormously effective: in 2002, the A’s became the first team in over a century to win 20 games in a row. Beane’s approach was also economical: in 2006, the A’s had the fifth-best season record of 30 teams, but paid out comparatively less in player salaries (24th out of 30 teams).

The story of the Oakland A’s has many parallels to measurement in the nonprofit community. Too often, we allow gut feelings, hunches, or emotions to drive decision-making. To truly accelerate change, however, it’s critical to pair intuition with data. This is where indicators come in. Strong indicators provide clear, specific ways to track impact and enable us to make data-informed decisions when allocating scarce resources.

While outcomes are broad, indicators are focused. For example, if a job training program’s outcome is “Improve job-seekers’ skills for high-growth careers,” indicators might include the number of participants who complete the program and the number who obtain jobs within six months of graduation. Implementation indicators (like program participation) help you understand how effectively a program is being delivered, while impact indicators (like job placement) evaluate program outcomes.

Both implementation and impact indicators should encompass the following:

  • Who and What is changing?

  • How many do we expect will succeed?

  • How much do we expect to happen?

  • By when does this need to happen?
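For teams that track indicators in a spreadsheet or database, the components above (who/what, how many, how much, by when) map naturally onto a simple record. The sketch below is illustrative only; the field names and the sample job training indicators are hypothetical, not drawn from any particular program.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Indicator:
    """One measurable indicator tied to a program outcome."""
    kind: str       # "implementation" or "impact"
    who_what: str   # who or what is changing
    target: int     # how many / how much we expect
    deadline: date  # by when this needs to happen

# Hypothetical indicators for the job training example above
indicators = [
    Indicator("implementation",
              "job-seekers completing the training program",
              100, date(2025, 6, 30)),
    Indicator("impact",
              "graduates employed within six months",
              60, date(2025, 12, 31)),
]

for ind in indicators:
    print(f"[{ind.kind}] {ind.who_what}: target {ind.target} by {ind.deadline}")
```

Writing each indicator down in this structured form makes it easy to spot one that is missing a target or a deadline, and to pair each implementation indicator with a corresponding impact indicator.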

Strong indicators should have proxy power, data power, and communication power, described in more detail in the Results Based Accountability Guide.

Proxy Power: Don’t be afraid of non-traditional indicators

Your indicators should really get to the heart of the matter and say something of central importance about the outcome. Too often, we’re drawn to the indicators that are easiest to measure. In education, for example, this means using traditional metrics like test scores to capture student success. While test scores may have value, they provide an incomplete picture. Evidence suggests that indicators like students’ level of grit or passion for learning, while they may be more challenging to capture, might be better predictors of long-term success.

Think outside the box and ensure your indicators really capture what you’re hoping to accomplish. Recall the Moneyball example: in recruiting players, Beane prioritized less commonplace indicators, which ultimately had a greater impact on the desired outcomes. Rather than choosing indicators that are flashy or familiar, aim for those that are most meaningful.

Data Power: Don’t try to measure everything under the sun

Data power is all about data collection feasibility—ensuring that it’s realistic to gather data on your indicators. To assess data power, consider whether you have timely access to the data, whether collecting it will overburden stakeholders, and whether it will be of high quality. Will a certain indicator require you to develop a new method of data collection (e.g., a new survey), or do you already collect data on your proposed indicators?

Be realistic about what data your organization can and cannot collect. Use a focused approach that includes the articulation of no more than five indicators per outcome. To streamline your measurement efforts, consider the connections between various aspects of your programs and create indicators for key categories (e.g., all community building or training activities) accordingly.

Communication Power: Remember your audience

Your indicators must be clear to diverse audiences. A needs assessment can help you refine your indicators in the early stages of program implementation to ensure they reflect the needs of your stakeholders. Keep in mind that as a program evolves, so might its indicators. Design indicators that are meaningful to your program now, and don’t be afraid to iterate over time.

In addition to thinking about how indicators inform your internal programming, consider how you communicate about your indicators with external stakeholders. Ask yourself whether your indicators would pass a public square and/or social media test. For example, if you had to explain to a diverse audience what you meant by the indicator “children are healthy and ready for school,” what data would you use to demonstrate that? How would you make that data meaningful to various audiences?

Just as Beane’s data-informed approach made the A’s more cost-effective and successful, organizations that use performance measurement well can do more with less. With ready access to relevant data, they can also adapt to changing circumstances and communicate effectively about their impact to a wide range of stakeholders.

Check out these additional resources for learning more about developing indicators:

  • Measurement as Learning: This Bridgespan article presents an overview of performance measurement in the nonprofit sector and includes a discussion of indicators.

  • PerformWell: This site, led by Urban Institute, Child Trends, and Social Solutions, offers evidence-based tools, sample surveys/assessments and indicators, and ideas for measuring outcomes.