Understand evaluation as a mandatory, integrated task over the course of a project. Design the project so that project work, participation and evaluation are synchronised and interconnected. The intended project structure – including participation measures – paves the way for evaluation. For example, if your project contains crucial milestones, these may also be appropriate evaluation steps. Make clear to yourself which cause-and-effect relationships, which intended (and unintended) effects, and which influenceable and external parameters could shape project success. You should have a substantiated overview of project inputs and outputs and – with regard to participation evaluation – of the short-term to long-term outcomes of the project.
Your overall goal is to collect accurate, high-quality, reliable and valid data. Therefore you have to decide which tool (e.g. interview, questionnaire) you will use to obtain the information you want. The data you collect and the methods you use can range from simple to complex; how comprehensively you design your evaluation approach depends on your resources and your evaluation goals. The same applies to data analysis: you can start with simple descriptive statistical methods and end with highly complex data analysis procedures.
Bear in mind that data collection, analysis and interpretation can be time-intensive (e.g. conducting a large survey) and cost-intensive (staff, software, printing …).
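The simple descriptive statistics mentioned above can be computed with very modest tooling. A minimal sketch using Python's standard library is shown below; the survey responses are hypothetical (satisfaction ratings on a 1–5 scale), purely for illustration.

```python
# Simple descriptive statistics for evaluation data, using only the
# Python standard library. The responses below are hypothetical
# satisfaction ratings on a 1-5 scale.
import statistics
from collections import Counter

responses = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5, 1, 4]  # hypothetical survey data

mean = statistics.mean(responses)        # average rating
median = statistics.median(responses)    # middle value, robust to outliers
stdev = statistics.stdev(responses)      # spread of the answers
distribution = Counter(responses)        # how often each rating occurs

print(f"n={len(responses)}, mean={mean:.2f}, median={median}, stdev={stdev:.2f}")
print("distribution:", dict(sorted(distribution.items())))
```

Even this level of analysis (sample size, central tendency, spread, frequency distribution) is often enough to summarise a small questionnaire before deciding whether more complex procedures are worthwhile.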
A crucial step in the evaluation process is to find comprehensive indicators that allow you to measure the success of a participation process. Consider the requirements of ‘good’ indicators: they have to be specific, measurable, attainable, relevant and trackable 14. If you conduct a goal-based evaluation with precise objectives, it may be easier to derive indicators. A goal-free evaluation is a more open approach that considers inherent and even unintended effects of a project. That makes the definition of indicators much more complex, especially if you try to evaluate qualitative aspects that are hard to ascertain, such as behavioural change. In those cases you will use proxy indicators: indirect measures that approximate or represent a phenomenon in the absence of a direct measure (e.g. the growing number of volunteers in a regional rail project as a proxy indicator for citizens’ changing attitude towards participation).
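A proxy indicator like the volunteer count only becomes informative when tracked over time. The sketch below illustrates this with hypothetical figures (the project, years and counts are invented for the example).

```python
# Hypothetical volunteer counts in a regional rail project, used as a
# proxy indicator for citizens' changing attitude towards participation.
volunteers_per_year = {2019: 12, 2020: 18, 2021: 27}

years = sorted(volunteers_per_year)
# Year-over-year relative growth of the proxy indicator.
growths = [
    (volunteers_per_year[curr] - volunteers_per_year[prev]) / volunteers_per_year[prev]
    for prev, curr in zip(years, years[1:])
]

for (prev, curr), growth in zip(zip(years, years[1:]), growths):
    # A sustained rise suggests growing engagement; the proxy does not
    # prove an attitude change, it only approximates it.
    print(f"{prev} -> {curr}: {growth:+.0%}")
```

Note the limitation built into any proxy: the trend approximates the phenomenon but does not demonstrate it, so proxy data should be interpreted alongside qualitative evidence.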
In general you can distinguish between quantitative (numbers, statistics) and qualitative (text, voice, visual) information. Quantitative data can derive from primary data: original data that has not previously been collected. You can tailor the collection of primary data so that it really answers your needs, but you will need resources to conduct a primary data survey. The other source is secondary data: data that has been previously collected, usually for another purpose. Such data is often easy to access and very useful if you want to conduct comparative evaluations, but unfortunately it is often not specific enough with respect to your evaluation goals and your temporal and spatial context.
A qualitative evaluation approach tries to reveal the hidden catalysts and motivations that induce a behavioural change. So it is an evaluation “beyond the numbers game” 15. Even if you use a lot of quantitative data as proxy indicators, try to evaluate in a qualitative, holistic, questioning and process-orientated way. Very often you will combine quantitative and qualitative data, and typically you will collect your own primary data. Do not underestimate the effort needed to generate a set of qualitative information, e.g. conducting a comprehensive expert-interview campaign.