The Sorting Hat of Indicators: A Wizarding Approach to Shared Measurement

When it comes to measuring collaborative and collective impact, we’re often asked: how can we make it easier for organisations to learn from each other, save money on evaluation costs, and build an evidence base of what works? While strictly speaking, this can’t be achieved with the wave of a magic wand, we’ve found that a shared approach to measurement – where organisations apply the same indicator or set of indicators of change to a common area of work – can deliver some pretty magical results.

But how do you go about finding and sorting indicators of change to facilitate this approach? And can the process for selecting indicators make a difference to how useful the data will be for learning, improvement, and ultimately, social impact? Well, that depends on the Sorting Hat and how your choices are made. But first, you’ll need to engage in a little game of Quidditch…

Become a Seeker

What is the most efficient way to find indicators? By getting hold of the Golden Snitch of course! And by Golden Snitch, we mean the fantastic community-level indicator databases that you can use to identify and long-list indicators for which validated data sets already exist. In Victoria, these include:

Using indicators from one of these sources will save you time and resources on developing your own indicators and/or data collection processes, while also providing a level of assurance regarding the quality and validity of the data. These existing resources are highly recommended as your first port of call.

However, not all indicators relevant to your work and context will be contained within these existing databases. A good way to expand the search is to talk to key informants in the sector and to review existing relevant government or national outcomes frameworks, such as the Victorian Gender Equality Strategy and the Victorian Health and Wellbeing Outcomes Framework.

There are also other population-level data collection tools, such as the National Community Attitudes Survey, conducted by external providers such as ANROWS or VicHealth, that may contain indicators and data that prove useful to your work.

You will likely end up with a long list of possible indicators for your program or initiative. In a recent project, we identified 75 potential indicators that were relevant to the client’s strategy. We were aiming for seven headline indicators to measure the collaborative impact of the partnership over the life of the strategy, and to frame the theory of change. Which leads us to the next question: how do you decide which indicators to use? This is where the Sorting Hat comes in.

The Sorting Hat of Indicator Selection

There are a couple of options to help with sorting your indicators. Here are the processes we employed to help narrow down our indicators of change from 75 to seven.

  1. Developing mathematically weighted criteria. Once you’ve assigned a value to each indicator based on set criteria, indicator selection comes down to how the numbers stack up. Potential criteria for ranking each indicator could include: whether or not it is strategic; relevant; aligned to existing reporting requirements; or has existing data available. The decision on which criteria to use should also be guided by the purpose of the shared measurement framework and the information needs of partners (Quinn Patton, 2008). And if this seems a little complicated, you could also simplify the categories to just “Essential” or “Desirable” to help with your selection.
  2. Developing or using an existing set of principles to inform your choices. These principles could be grounded in the values of, and what matters most to, the organisations that adopt them. This approach is useful when developing a shared measurement system (and selecting indicators) in complex, dynamic environments. For example: inspirational and motivational; comprehensive thinking and action; and courage to take risks and innovate (Quinn Patton, 2017).
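To make the first approach concrete, here is a minimal sketch of weighted-criterion scoring. All the indicator names, criteria labels, and weights below are hypothetical illustrations, not the ones used in our project: each candidate indicator is assessed against each criterion, the assessments are multiplied by agreed weights and summed, and the highest-scoring indicators rise to the top of the long list.

```python
# Hypothetical criteria weights -- in practice, agree these with your partners.
# "Essential" criteria carry higher weights than "Desirable" ones.
WEIGHTS = {
    "strategic": 3,
    "relevant": 3,
    "aligned_to_reporting": 2,
    "data_available": 2,
}

# Each candidate indicator is assessed against each criterion (1 = meets it, 0 = does not).
# These example indicators are illustrative only.
candidates = {
    "Community attitudes to gender equality": {"strategic": 1, "relevant": 1, "aligned_to_reporting": 0, "data_available": 1},
    "Rates of family violence incidents":     {"strategic": 1, "relevant": 1, "aligned_to_reporting": 1, "data_available": 1},
    "Self-reported wellbeing":                {"strategic": 0, "relevant": 1, "aligned_to_reporting": 1, "data_available": 1},
}

def score(assessment):
    """Weighted sum of a single indicator's criterion assessments."""
    return sum(WEIGHTS[criterion] * value for criterion, value in assessment.items())

# Rank indicators by score, highest first; take the top N as headline indicators.
ranked = sorted(candidates, key=lambda name: score(candidates[name]), reverse=True)
for name in ranked:
    print(f"{score(candidates[name]):>2}  {name}")
```

The point of the sketch is that once weights are agreed, selection becomes transparent and repeatable; the judgment calls move upstream, into choosing the criteria and weights with your partners.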

Our client used both approaches at different points in the process. The first was applied during the internal selection process by individual organisations, and was pragmatic in nature. We employed the second during the partnership-level voting process to ensure the selection of headline indicators was holistic and aligned with the values and aspirations of the partnership.

Did the use of principles in the second stage change which indicators were selected? Yes. Without applying principles to the indicator selection process, the collaboration would have missed important population change indicators and associated long-term outcomes (which would have been excluded for not meeting the criterion of data availability). The program partners ended up adding two headline indicators that were important to their understanding of social change and in line with their mission. In other words, the Sorting Hat process helped them stay true to their (Gryffindor) house and values. This in turn led to a more complete and sophisticated theory of change for achieving their long-term goal.

In sum, becoming a Seeker and using the Sorting Hat to help select the right set of indicators for your collaboration can help you to ultimately defeat Voldemort (and other social evils).

We’d love to know – has anyone else used principles to guide selection of population level indicators?  Did it make a difference to the process? Where have you found population level indicators and data?

Resources and Suggested Further Reading

New Philanthropy Capital (2013), ‘Blueprint for Shared Measurement’, Inspiring Impact Series.

Quinn Patton (2017), ‘Principles-Focused Evaluation: The Guide’, Guilford Press, USA.

Quinn Patton (2008), ‘Utilization-Focused Evaluation: Fourth Edition’, Sage Publications, USA.
