    USER INVOLVEMENT IN VOLUNTARY ORGANISATIONS

    SHARED LEARNING GROUP

    Evaluating user involvement

1. Introduction

This paper is a short ‘think-piece’ reflecting the discussions of the User Involvement in Voluntary Organisations Shared Learning Group about evaluating user involvement. This is a working document that is likely to be developed over time - it does not necessarily reflect the views of all members of the Shared Learning Group. It is hoped that the ideas presented here will be useful to other voluntary organisations to help them think about the different ways they work with service users, carers and members of the public.

2. What is ‘evaluation’?

    Evaluation can be defined as:

- collecting information that allows an assessment of how well a programme, service or organisation does what it has set out to do, in terms of its effectiveness and efficiency. (1)

- a process of assessment which identifies and analyses the nature and impact of processes and programmes. Evaluation ideally starts as the project or programme begins and continues throughout the project’s life (and after). (2)

‘Realistic evaluation’ is a commonly-used approach to evaluating projects that involve complex interactions between people (3). It involves looking at three different aspects of the project:

- Context - particularly the culture

- Mechanism - what methods or processes are used

- Outcome - the impact

This approach does more than identify ‘what works’. It is also able to explain ‘what works, for which people and in what circumstances’. This is crucial to evaluating complex interventions like UI because the attitudes, opinions and motivation of the individuals involved all have a big influence on its success.

3. Why evaluate user involvement?

Evaluating user involvement means addressing the question: Is user involvement making a difference?

Evaluating user involvement can help:

- identify what works (or not)

- generate evidence of the value of UI - ‘prove that it works’

- celebrate success - recognise achievements

- share learning

- improve the planning of future projects

- provide another mechanism for involving users

As the rest of this paper describes, evaluating user involvement is a difficult challenge. A proper evaluation is likely to require careful planning and investment of considerable time and resources. So it is important to be clear at the beginning about what you hope to achieve by an evaluation - then you can be sure the process will answer your questions.

It is also important to think about when to do an evaluation. Some evaluations may be ongoing, as part of the development of a project, so that they help to shape its direction (formative evaluation). Others involve an assessment at the end of a specific, time-limited project to assess what has been achieved (summative evaluation).

    There are other ways that user involvement is often assessed including:

- Assessing plans for UI in a project proposal - this is likely to involve using a checklist to assess whether a project proposal has clarified the purpose and aims of involving people and properly planned for their involvement (e.g. budgeted appropriately for users’ expenses).

- Assessing whether standards of good practice are being met - this often takes the form of a self-assessment tool or an independent assessment that uses a checklist of ‘indicators’ to look for evidence of good practice (e.g. checking whether all the people involved have been offered training prior to their involvement).

- Monitoring UI - this involves checking whether agreed plans for UI have been implemented and delivered: ‘did we do what we said we would do?’ It simply involves checking whether agreed actions have been carried out (e.g. checking that people did all receive papers two weeks before a meeting).

- Measuring performance - either of an individual or of a whole organisation. This involves measuring whether agreed targets have been met (e.g. whether 80% of all committee meetings are attended by at least two service users/carers) or whether a desired quality of involvement has been achieved (e.g. whether 100% of the people who were involved feel they have made a difference). A minimal sketch of this kind of target check is given below.
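
As an illustration of the ‘measuring performance’ example above, the sketch below checks a hypothetical attendance target from a simple list of meeting records. The data, the 80% target and the two-attendee threshold are invented for illustration only; they are not taken from any organisation’s actual monitoring system.

    # Illustrative only: checking a hypothetical performance target of the kind
    # described above (80% of committee meetings attended by at least two
    # service users/carers). All figures are invented.

    # Number of service users/carers who attended each committee meeting.
    attendance = [2, 3, 0, 2, 1, 4, 2, 2, 3, 2]

    TARGET_SHARE = 0.80          # target: 80% of meetings...
    MIN_USERS_PER_MEETING = 2    # ...attended by at least two service users/carers

    meetings_on_target = sum(1 for n in attendance if n >= MIN_USERS_PER_MEETING)
    share = meetings_on_target / len(attendance)

    print(f"{share:.0%} of meetings had at least {MIN_USERS_PER_MEETING} service users/carers present")
    print("Target met" if share >= TARGET_SHARE else "Target not met")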

There is some overlap across these activities as they may involve asking similar questions. An evaluation may also involve assessing whether good practice was followed and whether agreed actions were taken forward. But the key distinction is that an evaluation goes on to ask: what difference has this made?

4. The challenges of evaluating user involvement

Evaluating user involvement has proved to be a difficult challenge. This is because users can be involved in so many different ways for many different purposes. It is therefore almost impossible to develop a standard set of measures that can be used to assess all types of UI activity. There is no ‘single way’ to do it.

For example, at Macmillan Cancer Support, staff and users were asked about how they measured success in their projects involving users. A BME worker described one measure of successful involvement as BME users being asked to get involved right from the beginning of a project rather than some time after the work has begun. A Macmillan researcher, who had involved users in co-running focus groups of cancer patients, described success in terms of the impact on the group discussions and the richness of the data that they gathered. It is hard to imagine a simple indicator that could measure impact in both these examples.

    User involvement also tends to have an impact on the quality of a project or programme, rather than changing its outcome entirely. It helps to shape opinions and decisions, change the way information is provided, alter priorities and generate buy-in. These more qualitative aspects are also difficult to capture. For example, one health professional described the impact of user involvement in the following way:

“I spoke to a user who made me think about the issue differently, which meant that in the next meeting I argued the case in a way that I don’t normally do, which made someone else get angry and argue their point more passionately, which then persuaded the others to do something different... How do you describe that impact? You may have to experience user involvement before you can fully appreciate the difference that it makes”.

    There are other barriers to evaluating user involvement including (4):

- Power differences between professionals and service users - this can make it difficult to be honest in evaluations. It is also important in terms of who sets the stage for the evaluation (who decides what will be evaluated and how?).

- The way service users are involved is very varied, from continuous involvement to one-off engagement, and there may be many different ways service users are involved in any particular project. This can make it difficult to prove the precise links between user involvement and the final outcomes.

- Commitment to the principle of involvement can make it difficult to be objective about the difference it is making.

- The culture in an organisation, including staff attitudes, can be hostile to evaluation. There may be fears about what an evaluation will discover.

- Lack of resources - evaluation needs to be built in from the beginning, along with the resources to carry it out. Although project funding might include evaluation, often it does not include evaluation of user involvement.

- Service users and professionals may have different views of what success looks like and what contribution user involvement has made. Tokenism occurs when an organisation feels satisfied that it has ticked the boxes, yet the reality is experienced very differently by service users and carers.

- Professionals and users can find it difficult to recognise and be clear about the impact that user involvement has made. There are many reasons for this: the people involved may focus on the process of user involvement rather than outcome; success is rarely celebrated; individuals tend to focus on what went wrong rather than what went right; thinking about involvement and providing feedback to service users on the impact of their involvement is not always part of the culture of organisations.

In summary, there is no easy answer to the question ‘How do we know whether user involvement has made a difference?’ It depends on (4):

- who we ask ... and who does the asking

- when we ask the question

- how we ask it

- what we ask about

- how people feel about being asked (what has their past experience been?)

- whether they think it will make a difference if they bother to answer

- whether they feel they can be honest

- the different power of the people involved

- what 'being involved' has been like in the past

5. Criteria for evaluating user involvement

    Given the varied nature of user involvement there are few generally agreed criteria for success. In 2001 the Department of Health identified the criteria listed below as a way of measuring successful UI (5). However, there is not a commonly agreed set of indicators to measure whether these criteria have been met in practice.

- Effective in representing and strengthening the voice of patients and communities

- Accessible at a local level to people using health services

- Accountable in a clear and transparent way

- Integrated to match the structures of the NHS

- Independent to be able to scrutinise the NHS

- Adaptable, building on the best existing local practice and ensuring high quality

A set of principles and indicators to assess whether user involvement in a research project has been successful has also been published (6). However, these 10 principles all relate to process, i.e. how the user involvement was carried out and then followed up. They do not attempt to measure the impact of UI on research.

    It is easier to agree upon a common set of standards to assess whether the process of UI has been done well. Such standards could include the following:

- roles were agreed between the organisation and users

- staff budgeted for the costs of UI

- respect was shown for the differing skills, knowledge and experiences of users

- users were offered training and personal support to be involved

- staff had the skills needed to involve users - their own training needs were met

- users were involved in decisions about the project and kept informed of progress

- feedback was given to users on the difference their involvement made

However, ensuring good practice is not necessarily an indicator of successful involvement, and standards need not always be applied consistently. For example, a simple standard might state that a minimum of two users should be invited onto any group or board. However, having one user join a group may be all that is possible in a group that is hostile to UI. That one individual could have a tremendous impact and therefore be considered a great success. In contrast, having two members on a group could be totally ineffective, even though the agreed ‘standard’ was being met.

In conclusion, it seems that while some generalised criteria may be useful, the most effective approach to evaluating the impact of user involvement is likely to be through the development of mutually agreed and project-specific criteria (7). UI is best evaluated through a process of involving all stakeholders in first identifying what success would look like and then measuring whether that ‘success’ has been achieved.

6. Involving users in evaluating UI - ideas about how best to do it

Planning and designing an evaluation

The SCIE Resource guide states that ‘It is not possible to declare which methods of evaluation are best for what kinds of involvement’ (4). However, they have identified ‘nine big questions’ (Box 1) and a list of twenty pointers (Box 2) to help organisations find the best approach to evaluating user involvement. The resource guide provides help with thinking through the various issues.

    BOX 1: The Nine Big Questions (4):

1. Why bother to evaluate?

    Are there good reasons for finding out whether and how involvement is making a difference?

    2. What stops us from finding out whether involvement makes a difference? What are the barriers to evaluation?

    3. What do we mean by making a difference?

    Can all improvements be easily measured? If we just feel better or understand more, is that a result in itself?

    4. When do we decide to find out whether a difference is being made? Is the timing right? Have things had time to develop?

    5. Who says?

    Who does the evaluating? Will everyone get a say?

    6. How do we find out?

    What methods might be used to find out whether being involved has really made a difference?

    7. What tools and resources do we need?

    What is going to help us to find out whether involvement has made a difference? How do we make sure it is meaningful?

    8. What about differences?

    How will differences be handled? What if there are conflicts?

    9. What happens next?

    How is the information from the evaluation collected and made sense of? Let’s have some ‘for instances’ of changes. How will we get feedback? Who owns these findings and what will happen as a result of them?

BOX 2: Checklist of 20 pointers (4):

1. PURPOSE: Are you clear about the purpose of the evaluation? Why is service user and carer involvement being evaluated?

2. CHANGE: What kinds of change might you expect user and carer involvement to have made, and at what levels is it expected to make a difference - individual experiences, staff attitudes, agency policies, local or national strategies?

3. TIMING: When will you measure these changes? Are you looking for short-term results, longer-term outcomes or both? Do you have indicators of progress?

4. PROCESS OF INVOLVEMENT: How might the experience be evaluated?

5. SUPPORT and SUPPORTERS: What kinds of support might be needed to make the evaluation an effective and independent one? What part might supporters and facilitators play in evaluating the results of involvement?

6. SKILLS: What skills are needed to make an evaluation of involvement?

7. TRAINING: What kinds of training are needed to help people to evaluate the effects of involvement? Is this training available?

8. RESOURCES: What resources are needed to evaluate involvement? Are resources such as budget available?

9. ORGANISATIONAL CULTURE: How open to involvement is the organisation or group? Does the climate or culture in the organisation support involvement, and how do you find out about this?

10. PRACTICE: How participative is practice in the organisation or group? How do you evaluate the way service users and carers are involved in practice?

11. STRUCTURE: Is evaluation of involvement a regular feature of the organisation or group? Is it part of the structure? How might evaluation help it become part of the structure?

12. POWER: What differences in power are there between the people involved (service users, carers, professionals, managers, etc.)? How might these affect the evaluation? What can you do to change these differences in power? How will you involve people who are seldom heard?

13. TOKENISM: How will you avoid tokenism? In other words, how will you evaluate whether the involvement has been real and meaningful?

14. THOROUGH AND FAIR: How will you make sure that your evaluation listens to the negative messages as well as the positive ones, taking note of disadvantages of involvement as well as advantages?

15. LINKING INVOLVEMENT TO CHANGES: How might you find out whether any changes are indeed a result of involvement and not something else?

16. OWNERSHIP: How will service users and carers participate in deciding what will be evaluated and how? Who will undertake the evaluation and how independent should they be from the process? Who will own the information gathered? Are there any other ethical issues that you will need to consider (for example, about confidentiality)?

17. FEEDBACK: How do people find out about the results of the evaluation of service user and carer involvement?

18. IMPLEMENTATION: How are the findings from the evaluation to be used? Who will implement recommendations? What further changes should you expect as a result of the evaluation?

19. CONTINUITY: Is evaluation a one-off event or an ongoing process and part of the way the organisation or group works all the time?

20. PUBLICITY: How do other organisations and groups learn from your experience of evaluating the difference that involvement has made?

InterAct* recommend the following general approach:

- be clear about objectives and the purpose of evaluation

- agree principles, e.g. openness, honesty, transparency, involvement of users

- design the process to ensure lessons from the evaluation can be made easily accessible to those who need to understand and implement them

- identify the needs and capabilities of target audiences when designing the evaluation

- involve users in identifying indicators

    Ideally planning an evaluation should be an integral part of planning any project with UI. Evaluation needs to be embedded into the project so that systems can be set up for reporting results, for ongoing review and for analysing and interpreting evaluation data at regular intervals. This is the approach being adopted by Macmillan Cancer Support. They have developed a planning tool for UI projects which incorporates planning to measure success.

    It is also important to think about who will need to know about the lessons from an evaluation and why those people need to know. This makes it possible to plan:

- how the lessons from the evaluation will be shared

- how to ensure people act upon recommendations

- how progress on recommendations will be monitored and assessed

Richard Murray from the New Economics Foundation has developed a quality and impact toolkit for social enterprise called ‘Measuring what matters’ (see Box 3). The general principles of this participatory approach to designing and carrying out an evaluation would also be relevant and useful in evaluating UI.

* InterAct is an alliance of experienced practitioners, researchers, writers and policymakers committed to putting participatory, deliberative and co-operative approaches at the heart of debate, decision-making and action across the UK.

Box 3: An outline of the participatory approach in ‘Measuring what matters’.

1. Know why you are evaluating.

2. Know where you are going. Clarify objectives, mission and values.

3. Identify your stakeholders.

4. Map the story (what you expect will change) and describe the milestones. This will determine the scope of your evaluation.

5. Choose your indicators - what you will measure as proof of success (be challenging but realistic about what you can measure).

6. Make a plan for how you will measure - your methods.

7. Collect the information.

8. Analyse the information and draw conclusions.

9. Share it with others.

10. Learn from it and take action.

Further details and supporting materials are available at www.proveandimprove.org

InterAct have produced a guide for evaluating participatory processes based on a ‘realistic evaluation’ framework (see Box 4). They recommend looking at the context, mechanism and outcomes (2).

Box 4: What to look for when evaluating participatory projects

Context

- What were the objectives of the project?

- How clear were they and how were they communicated?

- How were they set? How much did the various stakeholders participate in setting them?

- Have they changed over time? If so, how and why?

- To what extent have the stated objectives been met or fulfilled?

- Is this project part of a wider programme/strategy?

- How does it relate to that larger strategy, structurally and informally?

- What other factors have influenced the outcome (e.g. people, groups, budget, organisational policies, professional expectations, limits to expertise)?

- What impact have these factors had?

Mechanism / process

- What is the level of involvement in this project? Is this an appropriate level of involvement for this particular project and circumstances?

- What methods were used? Were these appropriate?

- Were users given training?

- Was each method evaluated and the lessons used to improve future involvement?

- Were users involved in identifying the methods?

- Which users were involved? How many? Which types of people and groups were involved?

- What were their roles?

- Do stakeholders believe these people were representative (e.g. included hard-to-hear groups)?

Outputs

- Were the deliverables delivered (e.g. were reports, events, questionnaires and interviews completed)?

Outcomes

What changes have been achieved as a result of UI, including:

- impact on individuals

- groups of people

- institutions and organisations

- immediate or long-term change

- small-scale or systematic changes

- increased trust amongst stakeholders

- increased level of ownership of process

- increased capacity amongst stakeholders

- changes in values, priorities, aims and objectives

- new relationships formed

- increased openness and transparency

    The criteria used to measure outcomes need to be specific to the project.

Choosing project-specific indicators

Indicators are the criteria used to measure success; they provide a structure for collecting evaluation data. The process of choosing indicators needs to involve all stakeholders.

The New Economics Foundation has developed a simple guide for choosing effective indicators - AIMS (see Box 5).

Box 5: Choosing effective indicators - AIMS

Action focused

    If there is no action that can be taken as a result of collecting data on a particular indicator, it is probably not worth using that indicator.

Important

    Indicators must be chosen to be meaningful and important to stakeholders as well as evaluators.

Measurable

    It must be possible to collect data relevant to the indicator.

Simple

Collecting data must be relatively easy, and whatever data is collected must be widely understandable.
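
The AIMS criteria in Box 5 can be applied as a simple screening checklist when stakeholders draw up a shortlist of indicators. The sketch below is illustrative only: the candidate indicators and the yes/no judgements are invented, and in practice these judgements would be made jointly by stakeholders rather than in code.

    # Illustrative only: screening candidate indicators against the four AIMS
    # criteria (Action focused, Important, Measurable, Simple). All examples
    # and judgements here are hypothetical.

    CRITERIA = ("action_focused", "important", "measurable", "simple")

    # Each candidate indicator is judged True/False against every criterion.
    candidates = {
        "Proportion of involved users who feel they made a difference": {
            "action_focused": True, "important": True, "measurable": True, "simple": True,
        },
        "General goodwill towards the organisation": {
            "action_focused": False, "important": True, "measurable": False, "simple": False,
        },
    }

    for name, judgements in candidates.items():
        keep = all(judgements[c] for c in CRITERIA)
        print(f"{'KEEP' if keep else 'DROP'}: {name}")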

    Involve has also made recommendations on general indicators that could be used to measure impact on an organisation (see Box 6).

Box 6: Possible impacts of UI on an organisation

- Has this initiative encouraged more users to use our services?

- Do more people think the organisation is doing a good job?

- Has it encouraged people to get involved again because they think it is worthwhile?

- Are we now more accountable for the way we spend our money?

- Are we able to reach people from different backgrounds?

- Have we enabled people to make new contacts/join new networks, and has this increased equality of access to decision-making or services?

- Have we saved money by making services more reflective of users’ needs and not spending money on unwanted services?

- Has it been quicker or easier to make decisions about priorities?

    Involve: The true costs of participation: The Framework. (www.involve.org.uk)

7. How are Shared Learning Group members evaluating UI?

Action for Blind People have developed a UI impact recording system for the whole organisation that measures impact using the following criteria (a minimal illustrative tally of these categories appears after the list):

- New decisions - new services influenced by UI groups/service users

- Changed decisions - changes to existing services

- Supported decisions - projects supported by UI groups/service users

- Decisions with no influence - projects/services/decisions not influenced by UI groups/service users

- Negative decisions - negative decisions from UI groups/service users regarding Action’s activities
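
As a purely illustrative sketch (not Action for Blind People’s actual system), the snippet below shows how decisions logged under these five categories could be tallied to give a simple picture of where UI is, and is not, influencing decisions. The example decisions are invented.

    # Illustrative only: tallying logged decisions by UI impact category.
    # The categories mirror the list above; the decisions are invented examples.
    from collections import Counter

    CATEGORIES = (
        "new decision", "changed decision", "supported decision",
        "no influence", "negative decision",
    )

    # Hypothetical log of (decision, category assigned when it was recorded).
    decisions = [
        ("New peer-support group set up", "new decision"),
        ("Home visiting service redesigned", "changed decision"),
        ("Annual report format agreed", "no influence"),
        ("Proposed office relocation", "negative decision"),
    ]

    counts = Counter(category for _, category in decisions)
    for category in CATEGORIES:
        print(f"{category:<22}{counts.get(category, 0)}")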

Bliss has commissioned a group at Warwick University to carry out an evaluation of UI in the managed clinical care networks that oversee neonatal care. The interim report suggests that having users attend network board meetings as the sole means of involvement may not be the most effective approach, and a full range of methods is being evaluated. It has also highlighted how hard it is to track the impact of users on procedures and care delivery. The final report will be completed in May 2008.
