Measuring personal outcomes: Challenges and strategies

Insight 12
Published in Insights on 5 Jan 2012

This Insight, written by Dr Emma Miller, Honorary Senior Research Associate at Glasgow School of Social Work, will consider some of the challenges of measuring outcomes and emerging responses to these.

Iriss has created a storyboard (an animated, engaging video) of Insight 12, which is a useful summary: Measuring personal outcomes: Challenges and strategies (video storyboard)

Key Points

  • A focus on personal outcomes offers the potential to refocus on what matters to people who use services, with potential benefits for the individuals involved, staff and organisations
  • It is important to be clear about the purpose of measuring outcomes. In particular, whether the measurement is primarily for improvement purposes or for judgement - in practice it may well be both
  • There is potential to link outcomes measurement to the organisational value base and a range of approaches and tools are emerging to support this
  • There are many identified challenges of measuring outcomes, but the evidence highlights various recommendations and strategies that can help
  • Outcomes tools are sometimes designed with a very specific user group in mind, whilst others can be used more generally with different user groups.

Measuring outcomes

For many years there has been an emphasis on measuring the outcomes of human services. It is important to distinguish between personal outcomes, which are defined by the individual, and service-defined outcomes, which are pre-determined on behalf of beneficiaries. The reasons for measuring personal outcomes can be understood from various perspectives. Research demonstrates that it cannot be assumed that service users' views on their outcomes will correspond with those of organisations and practitioners (Felton 2005). Further, for people who use services and their families, being involved in defining the outcomes they want to achieve can be empowering and result in increased relevance of support (Qureshi 2001, Beresford et al 2011). For staff, working with individuals to develop outcome-focused plans, and reviewing the outcomes achieved, can help achieve clarity of purpose (Thompson 2008). For organisations, an outcomes approach can help to reconnect with their value base and ensure that they are focused on the difference they make to people's lives, as well as the activities undertaken (Miller 2011). Measuring outcomes is not enough in itself but can provide the 'missing piece of the information jigsaw' in relation to evaluating and improving services, and increasing accountability to the public and regulatory bodies. This Insight will consider some of the challenges of measuring outcomes and emerging responses to these.

Policy context: Scotland

Outcomes have been emphasised in Scottish policy for several years. Better outcomes for older people (Scottish Executive 2004) strongly advocated an outcomes focus. In 2006 the Scottish Government stated that less time should be spent on measuring what goes into services and how money has been spent, and that more time should be invested in what funding achieves for individuals and communities (Scottish Government 2006). This was followed by the overarching Single Outcome Agreement (SOA) (Scottish Government 2007), which set out a new relationship between central and local government, allowing for more flexibility at the point of delivery. Sitting underneath the overarching SOA are Getting it Right for Every Child (GIRFEC) (Scottish Government 2008a), the Community Care Outcomes Framework (Scottish Government 2008) and the National Outcomes and Standards for Criminal Justice (Scottish Government 2010). The Housing Support Enablement Unit also recently produced a specific tool for relevant providers (HSEU 2011).

Defining outcomes

Key evaluation concepts can be defined as follows:

Summary of main definitions

  • Inputs: all the resources a group needs to carry out its activities
  • Activities: the actions, tasks and work a project or organisation carries out to create its outputs and outcomes, and achieve its aims
  • Outputs: products, services or facilities that result from an organisation's or project's activities
  • Outcomes: the changes, benefits, learning or other effects that result from what the project or organisation makes, offers or provides
  • Impact: broader or longer-term effects of a project's or organisation's outputs, outcomes and activities

Adapted from: (Wainwright 2002, CES 2004)

The Social Policy Research Unit identified three main categories of outcome, which their research found to be important to people using social care services:

  • Quality of Life outcomes (or maintenance outcomes) are the aspects of a person's whole life that they are working to achieve or maintain
  • Process outcomes relate to the experience that individuals have seeking, obtaining and using services and supports
  • Change outcomes relate to the improvements in physical, mental or emotional functioning that individuals are seeking from any particular service intervention or support (Qureshi et al 2001)

Specific services may emphasise particular types of outcome but research has shown that there are benefits to considering the different categories of outcome. For example, Beresford and Branfield (2006) caution against a tendency in service-led discussions about evaluation to separate process from outcome, because their research with service users demonstrated that the process, or how services engage with people, is inseparable from, and shapes, the outcome.

Challenges with measuring outcomes

Despite the long-standing policy focus, measuring outcomes remains challenging. Some of the key challenges are outlined below.

1. Clarity of purpose

It is important to be clear about the purpose of measuring outcomes. In particular, there is the question of whether the measurement is primarily for improvement purposes or for judgement:

In the former case, the information is used as a 'tin opener' for internal use, designed to prompt further investigation and action where needed, and not as a definitive measure of performance in itself. In the latter case, the information is used as a 'dial' - an unambiguous measure of performance where there is no doubt about attribution, and which may be linked to explicit incentives for good performance (pay for performance) and sanctions for poor performance (Raleigh and Foot 2010, p6).

Table 2: Characteristics of indicators used for judgement (reporting for external scrutiny and comparison) and improvement (using information to make improvements within the organisation)

Indicators for judgement | Indicators for improvement
Unambiguous interpretation | Variable interpretation possible
Unambiguous attribution | Ambiguity tolerable
Definitive marker of quality | Screening tool
Good data quality | Poor data quality tolerable
Good risk-adjustment | Partial risk-adjustment tolerable
Statistical reliability necessary | Statistical reliability preferred
Cross-sectional | Time trends
Used for punishment/reward | Used for learning/changing practice
For external use | Mainly for internal use
Stand-alone | Allowance for context possible
Risk of unintended consequences | Lower risk of unintended consequences

Table adapted from: (Raleigh and Foot 2010)

In practice, most systems will need to consider measurement both for improvement and judgement. The emphasis given to each can result in very different approaches to the selection of measures, collection of data, interpretation and use, which in turn will influence the culture of the organisation.

2. Measurable or meaningful?

One of the policy priorities in service improvement is that the results should be measurable. Recent research highlighted the limitations of quality measurement, including the tendency to miss areas where evidence or data are not available, and to exclude less quantifiable aspects of quality (Raleigh and Foot 2010). This is of particular concern given that what is deemed easy to measure can in turn determine and limit the priorities and activities of services. Further, the delivery of a quality service does not necessarily guarantee good outcomes, so measuring quality alone is not sufficient.

The evidence reveals the adverse effects of prioritising external reporting, particularly in the form of targets (Raleigh and Foot 2010), and the risk of 'severely dysfunctional consequences' arising from performance systems which are insufficiently vigilant to unintended effects (Smith 2007, p304). Other research has shown the importance of moving beyond a sole focus on external accountability to the need to link evaluation and measurement to the organisational value base (Whitman 2008). Further, it has been argued that measuring the outcomes of a service should be part of a wider shift of focus onto the person and their outcomes; without this shift of focus, the outcomes tool may become another form which is mechanistically completed by practitioners (MacKeith and Graham 2007). Equally important is the emphasis on involving staff, as the Audit Commission notes:

Corporate leadership on data and information quality is vital... However, one of the biggest factors underlying poor data quality is the lack of understanding among frontline staff of the reasons for, and benefits of, the information they are collecting. The information collected is too often seen as irrelevant to patient care and focused on the needs of the "centre" rather than frontline service delivery (Audit Commission 2004, p5).

A recent guide in Scotland focuses on the critical role of staff in recording outcomes, and includes some common errors and practical examples (Miller and Cook 2011).

3. Hard and soft outcomes

Several authors highlight the limitations of only focusing on 'hard' or easily measured outcomes. In many such cases, what are categorised as hard outcomes could be described as outputs, such as numbers of individuals completing a training course, or numbers who achieve employment following a training scheme. In contrast, soft outcomes give a fuller picture of the overall value and success of projects. Measuring soft outcomes is also supported by inclusion of qualitative as well as quantitative data. Although this presents its own challenges in terms of data management, resources are available to support this (Evaluation Support Scotland 2009b).

Some funders, including the Big Lottery Fund, require that soft outcomes are considered. However, interim findings from a longitudinal study of the third sector in Scotland found that many agencies were unable to demonstrate their value because of the tendency of some funders to focus on hard outcomes. The most vulnerable users were viewed as missing out because they were less likely to achieve quick and measurable outcomes:

The focus on attaining quick, clear results with clients had, it was argued, led to those with some of the greatest need being overlooked in the pursuit of targets. For instance, the outcomes-focused approach encouraged competition between services for groups of clients who can easily have measurable 'positive' outcomes (Scottish Government 2011).

Recent research by the Standards We Expect project in England examined the development of person-centred support from the perspective of service users, carers, practitioners and frontline managers. They identified efforts to develop 'softer' targets and measures consistent with independent living as one of the key developments in overcoming barriers to person-centred support (Beresford et al 2011). The following example illustrates the value and inclusivity of focusing on soft outcomes:

Example: What would become of the 90 year old widower who gained the confidence to learn computing skills to write his autobiography for his family... He will neither be getting a job nor going on to accredited courses and yet the soft outcomes keep him active and involved rather than confined to a retirement home (Butcher and Marsden 2004, p4).

4. Challenges of attribution

One of the most frequently cited challenges of measuring personal outcomes is that of establishing cause and effect, or attribution. The challenge of isolating the impact of any one service is further complicated where there is multi-agency involvement (Ellis and Gregory 2008). Was it the individual, their family, the service, other services or other factors that influenced the outcomes? A recent Learning Point paper from the Improvement Service (McGuire 2010) acknowledged the complexity of attribution due to the number of partners involved and the range of external factors. Some agencies highlight the benefit of obtaining the perspectives of users, carers and staff to help to identify causal chains (Culpitt and Ellis 2003).

5. Variation in service users

The final challenge of measuring outcomes to be covered here is variation in the characteristics of service users, which makes interpretation of the data more difficult. This is related to the challenge of attribution. To avoid unfair comparisons across different services, account should be taken of such variations, as responses can be influenced by service user characteristics unrelated to the quality of care, such as age, gender, region of residence, self-reported health status, type of care and expectations (Raleigh and Foot 2010).

Recommendations/strategies

There are no easy answers to many of the identified challenges of measuring outcomes, but the evidence highlights various recommendations and strategies that can help, and being mindful of these challenges can be a useful starting point.

Theory-driven evaluation

Theory-driven evaluation provides an alternative to traditional input-output approaches to evaluation, and it has been suggested that it is better suited to complex real-world interventions. It involves developing a programme theory, which sets out what the project planners expect from the intervention; this means making implicit assumptions explicit, and then checking the programme theory with staff and key stakeholders.

In brief, theory-driven evaluation first attempts to map out the programme theory lying behind the intervention and then designs a research evaluation to test out that theory. The aim is not to find out 'whether it works,' as the answer is almost always 'yes, sometimes'. The purpose is to establish when, how and why the intervention works, to unpick the complex relationships between context, content, application and outcomes, and to develop a necessarily contingent and situational understanding of effectiveness (Walshe 2007, p58).

Theory-driven evaluation means developing a hypothesis which can be tested out in practice. Logic modelling, discussed below, is an example of a theory-driven approach.

Logic modelling

Logic modelling involves an organisation (staff, users, carers, etc) working to define the endpoint that they want to reach, and then considering what activities and processes are required to achieve it. It can help organisations adopt an outcomes approach by improving their clarity about what they are aiming to achieve. Guides are available to support the development of a logic model (Evaluation Support Scotland 2009a).

The Charities Evaluation Service (CES) has used logic modelling to demonstrate how soft outcomes can be viewed as outcomes in their own right and can contribute to longer term or more strategic outcomes (which could be applied to the Single Outcome Agreement in Scotland).

Example: From inputs to long-term change - The Women's Project (Culpitt and Ellis 2003)

The Women's Project aims to reduce unwanted teenage pregnancy by offering support and group work to young women

Inputs: Staff; Budget; Venue; Advertising
Outputs: One-to-one support; Group work; Outings
Outcomes: Increased confidence; Understand alternatives to young parenthood; Be ambitious; Able to access training
Long-term change: Increased social inclusion; Reduced teenage pregnancy
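
To make the structure of a logic model more concrete, the Women's Project example above can be expressed as a simple data structure. The sketch below is a minimal illustration only, assuming a hypothetical LogicModel class whose field names mirror the four columns; it is not part of the CES guidance or any published tool.

```python
# Minimal sketch: the Women's Project logic model held as a data structure.
# LogicModel and its field names are hypothetical, used only for illustration.
from dataclasses import dataclass, field


@dataclass
class LogicModel:
    """The four columns of a simple logic model, each an independent list."""
    inputs: list[str] = field(default_factory=list)
    outputs: list[str] = field(default_factory=list)
    outcomes: list[str] = field(default_factory=list)
    long_term_change: list[str] = field(default_factory=list)


# The Women's Project example expressed in this structure
womens_project = LogicModel(
    inputs=["Staff", "Budget", "Venue", "Advertising"],
    outputs=["One-to-one support", "Group work", "Outings"],
    outcomes=[
        "Increased confidence",
        "Understand alternatives to young parenthood",
        "Be ambitious",
        "Able to access training",
    ],
    long_term_change=["Increased social inclusion", "Reduced teenage pregnancy"],
)

# Reading left to right mirrors the planning questions: what do we put in,
# what do we deliver, what changes for people, and to what longer-term end?
for stage in ("inputs", "outputs", "outcomes", "long_term_change"):
    print(f"{stage}: {', '.join(getattr(womens_project, stage))}")
```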

A project might bring about changes before reaching its final outcome. For example, someone using a drugs project is likely to change in various ways before they stop using drugs. The project may not always reach all its final outcomes in its lifetime, or individuals might move on before doing so, so it is important to record changes on the way.

Example: Outcomes on the way - Employment Training Service (Culpitt and Ellis 2003)

Project aim: To reduce social exclusion
Outcomes on the way: Improve motivation and aspirations; Improve confidence and self-esteem; Improve communication skills; Improve job search skills; Increase work skills; Improved chance of qualifications
Long-term outcome: Improved opportunity to re-enter education and to find work

Choosing or designing outcomes tools

There are many outcomes tools across service sectors, with varying formats and content. Although it is possible to find tools which measure outcomes at a single point in time, it is more common for outcomes to be measured at two or more points, providing a picture of the person's journey towards their intended outcomes. Outcomes tools are sometimes designed with a very specific user group in mind, whilst others can be used more generally with different user groups. Earlier research on measuring soft outcomes concluded that a generic model for soft outcomes was neither desirable nor achievable, and that a flexible approach was needed for interventions which were holistic, integrated and geared to the individual needs of users (Dewson et al 2000).

Some agencies and organisations have reported benefits from designing their own outcomes tools. A key advantage is that the process of engaging staff in designing a tool can develop an outcomes orientation within the organisation and promote ownership by staff. However, some authors urge caution against investing too much effort in devising the perfect tool, as the tool should be seen as an accompaniment and enabler, rather than a replacement for the worker's professional judgement (Butcher and Marsden 2004). Where an agency decides to develop their own tool, some guides recommend that they adapt an existing tool (MacKeith and Graham 2007). The Coalition of Care and Support Providers in Scotland have produced a summary guide of existing tools (CCPS 2010), including any costs where relevant.

Outcomes tools can be based on different types of questions. Examples highlighted by MacKeith and Graham (2007) include concrete questions, subjective scales which ask where the person thinks they are in relation to a specified outcome, and defined scales which ask where the person is on a journey of change towards an outcome, based on pre-determined intervals. Other approaches such as Talking Points (Cook and Miller 2010) adopt a more flexible, conversational approach, structured around a set of outcomes. Selection of the type of question or structure of the tool should be influenced by the relevant population: concrete questions and tightly specified pre-defined scales can present challenges to people with communication support needs.
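
As a simple illustration of measuring at two or more points in time, the sketch below records a single outcome on a subjective scale at an initial conversation and again at review, then calculates the change, or distance travelled. The ScaledOutcome record, the 1-5 scale and the example outcome are hypothetical and do not represent any of the published tools mentioned here.

```python
# Minimal sketch of a subjective scale recorded at two intervals.
# ScaledOutcome, the 1-5 scale and the example data are illustrative only.
from dataclasses import dataclass
from datetime import date


@dataclass
class ScaledOutcome:
    """One personal outcome rated on a 1-5 scale at a point in time."""
    outcome: str        # the outcome that matters to the person
    rating: int         # where the person places themselves on the scale (1-5)
    recorded_on: date


def distance_travelled(baseline: ScaledOutcome, review: ScaledOutcome) -> int:
    """Change between two recordings: positive means movement towards the outcome."""
    return review.rating - baseline.rating


# The same outcome recorded at the initial conversation and at review
baseline = ScaledOutcome("Feeling safe at home", rating=2, recorded_on=date(2011, 9, 1))
review = ScaledOutcome("Feeling safe at home", rating=4, recorded_on=date(2012, 1, 5))

print(distance_travelled(baseline, review))  # prints 2
```

A recording like this shows direction of travel only; the numbers still need to be interpreted alongside the conversation that produced them.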

SMART principles can be usefully employed when discussing and recording outcomes. Traditionally, SMART outcomes have followed the first definition in each pair below, as set out by Doran (1981). However, various alternatives are in use, and the alternative definitions shown in brackets have been found to be more compatible with outcomes approaches (Miller and Cook 2011):

  • S - Specific (or Significant)
  • M - Measurable (or Meaningful)
  • A - Attainable (or Action-Oriented)
  • R - Relevant (or Rewarding)
  • T - Time-bound (or Trackable)

Conclusion

A focus on personal outcomes within human services offers potential to refocus on what matters to people who use those services, with potential benefits for the individuals involved, staff and organisations. Although outcomes have been prevalent in policy for some time, a range of challenges remain with regard to their measurement. The key challenges covered in this paper all relate to the meaningfulness of measures. There is a need to decide whether the emphasis is weighted towards measuring for improvement or towards measuring for judgement and externally driven performance management, with the concern that the improvement potential can be compromised when the predominant emphasis is on judgement. Related considerations are the selection of hard or soft outcomes and the challenge of attribution. Acknowledging these challenges is a necessary step in progressing towards meaningful measurement. The literature suggests that there is real potential to link outcomes measurement to the organisational value base, and a range of approaches and tools are emerging to support this. There is also a significant role for funders and policymakers in ensuring that agencies involved in direct support are not overburdened by demands for measures that are system-driven rather than people-driven.

References