As part of our learning week, we had a day where we focused on our internal evaluation process, as well as on how we can better use the data we already collect to show our impact to partners and stakeholders.
Big reputation
Evaluation seems to have a reputation as something everyone has heard about but is a bit wary of - it sounds big and official and is often strongly tied to funding for organisations or teams, particularly in the public and third sector. It can feel like a necessary evil whose purpose we are not always sure of, other than to provide data to funders; and it isn't always clear how we can actually use it to our own advantage. There is also the concern that collecting numbers about what we do will never tell the full story of what we are actually achieving, or prove that we are having long-term or preventative impacts, particularly within short funding cycles. While in some cases a lot of these things might be true, we want to share how we are trying to use evaluation not just to show our impact, but also for our own learning.
The prophecy
Setting up questions and tools for evaluations can feel a bit daunting, like you need to predict the future or read minds to know what people will want to know from you in six months, a year or three years from now. While other people's expectations often play a part in why we set up evaluation tools, they do not need to be our main motivation. Evaluations can be a great way to better understand our own organisation and to make time and space to reflect on what we have done in the past and what we want to do moving forward. For us, the motivation was to become more strategic in how we show our impact to our funders and board members, but also to help us internally mark our achievements and figure out how we can better support the people we work with.
Question…?
As an organisation, we sat down and tried to figure out what is important to us, what we want to achieve and how we think this will impact the people we work with. We wanted to ensure that this would work with our new strategy and, most importantly, that it would work for us individually in our day-to-day work. We tried to answer some of the following questions when figuring out how we wanted to evaluate and what others might need in order to understand the impact we have:
- Who are we working with directly and how does our work (hopefully) impact them?
- What do we think is the purpose of our projects and organisational work?
- In what way are we already collecting data on our work and are we currently using this to show others our impact?
- What are the things we currently do well? Are we able to show others that we do this?
- How do we find the right balance between collecting data for the sake of collecting data and collecting meaningful data that can tell our story as accurately as possible?
All of those questions may seem fairly straightforward, but getting together to think about them turned out to be more complex than we expected. For example, we would have loved to include an indicator on how our work impacts people who receive support from social work and social care. However, we realised that as we don't work directly with these individuals, it would be nearly impossible for us to get meaningful data on our impact on them. Because of this, we focused our outcomes and impacts on the stakeholders and partners we work with directly. It took a few sessions with Evaluation Support Scotland for us to narrow down the key aspects we want to focus on and then to figure out the practicalities of putting our evaluation in place. From those sessions, we came away with a small but focused set of outcomes, impacts and indicators to collect data on moving forward.
We also realised that we were already collecting a lot of data, but it was often stored within individual projects and we didn't use it to its full potential to show our impact as an organisation. Because of this, we ended up building on the systems we already had in place rather than starting from scratch. We looked at what was working, what wasn't, and where we had gaps in our data collection. It was also important for us to know that we can always change the ways we collect our data if something isn't working the way we need it to. As the current financial year comes to an end, we used our learning week to start looking more closely at actually doing our evaluation in its current format for the first time.
So it goes…
At our learning week session, our resident Swiftie (Research Analysis Lead) Katie led a session on our upcoming evaluation. Because evaluation can often feel daunting or boring, Katie tried to distract us from that with Taylor Swift themed slides and friendship bracelets. The friendship bracelets were all (somewhat) evaluation themed and were meant to show what we would like to see happen with our work. For those of you who are not Swifties, fans made A LOT of friendship bracelets for Taylor Swift's latest tour because of one lyric in a song ("So make the friendship bracelets, take the moment and taste it"). The bracelets became a key element of the tour, and spread to other artists' tours and fandoms. Because of them, companies like Hobbycraft noted a 500% increase in sales of bracelet-making tools, showing not only a cultural but also an economic impact. Katie thought the bracelets could nicely symbolise the impact we would like our work to have: for everyone who engages with us to take a little nugget from our work, make it their own, and create a society-wide impact beyond what we might imagine.
The actual work of the day started with going over why we are doing our evaluation and what practical things we need in place to do it well. These include seemingly small details, such as when to send reminders internally, how we can best account for the broad variety of projects we have, and when we can best get feedback from our partners on what working with us is like for them.
We then moved on to do some of our evaluation work. Through Mentimeter polls and group discussions, we tried to identify what makes us hopeful and confident about our work, what we have learned over the past year and how that shapes our work for the coming year, and what we are most excited about for the year ahead.
It was really nice for all of us to have the time to sit down and reflect together on what we have achieved and to figure out what we are hoping for in the future. One of the main things we agreed on was that working with a lot of different stakeholders who seem genuinely interested in driving change makes us hopeful. One of the things that makes us excited for the coming year is being able to continue a number of projects that started as one-year projects but have grown into longer-term work because of the impact they have already had.
Is it over now?
While our learning week might be over, our evaluation journey is just beginning. We will be working closely as a team and with partners to understand what makes Iriss's work helpful for others and how we can continue this in future. We found that evaluation does not need to be just a tick-box exercise but can be something that is actually useful for us: we have already identified some things we hope to incorporate into our work moving forward. We look forward to seeing how all things evaluation continue to develop, and to sharing our impact with you in the next financial year!