
    Internal Project M&E System and Development of Evaluation Capacity: Experience of the World Bank-funded Rural Development Program

Presentation by Krzysztof Jaszczołt, Tomasz Potkański, Stanisław Alwasiak

INTRODUCTION:

    Conclusions and opinions presented herein result from our practical experience in implementing the Local Government Administration Component of the World Bank-funded Rural Development Program (RDP, $200 million). These grass-roots findings provide a basis for reviewing the main factors which determine evaluation capacity at the program and sector levels.

    After presenting basic data and the strategic context of the Rural Development Program, we will explain our approach to organizing the M&E system under the Local Government Administration Component. A discussion of the inter-relationships between the monitoring and evaluation functions will be followed by a review of the linkages between the program-specific and sector-related aspects of the final evaluation. In the last part of our presentation we will list lessons learned, conclusions and recommendations concerning the development of evaluation capacity.

Definition of key terms and concepts:

Evaluation (DAC definition)

    What? Evaluation is an assessment that refers to the design, implementation and results of a completed or on-going project / program / policy.

    How? Evaluation should be systematic and objective. Key criteria to be used are: relevance, fulfillment of objectives, developmental efficiency, effectiveness, impact and sustainability.

    Why? Evaluation should provide credible and useful information to enable the incorporation of lessons learned into the decision-making process (recipients and donors).

Monitoring (WB definition)

    What? Monitoring is an integral part of day-to-day management.

    How? Monitoring embodies the regular tracking of inputs, activities, outputs, reach, outcomes, and impacts of development activities at the project, program, sector and national levels.

    Why? Monitoring provides information by which management can identify and solve implementation problems and assess progress towards the project's objectives.

Evaluation Capacity Development, ECD (WB definition)

    "ECD is the process of setting up a country-based system to conduct and use M&E". To develop evaluation capacity, three main elements of the system should be simultaneously addressed: (1) demand for evaluation in public administration and interest in evaluations' findings among media and civil society; (2) supply of professional evaluation services based on properly developed information systems; and (3) institutional framework allowing stakeholders for effective incorporation of evaluation findings in follow up activities.

RURAL DEVELOPMENT PROGRAM, LOCAL GOVERNMENT ADMINISTRATION COMPONENT - KEY FACTS

    The Rural Development Program (RDP) has been designed to provide medium-term support to the development of the rural sector in Poland. RDP contributes to three overall objectives: (i) increasing the level of off-farm employment in rural areas, (ii) supporting the on-going decentralization of self-government and regional development, and (iii) building institutional capacity to absorb EU pre-accession and structural funds.

    RDP is implemented through a series of components addressing various aspects of the Government's Strategy for Rural Development: Component A: Micro-credit, Component B-1: Labor Redeployment, Component B-2: Education, Component B-3: Local Government Administration, and Component C: Rural Infrastructure. Each component is managed by the relevant ministry or program management unit, with overall coordination by the Ministry of Agriculture.

    Diagram 1: RDP Structure

    The Local Government Administration Component (B-3) aims to increase the level of efficiency and effectiveness in local and regional administration (project purpose). Key results and outputs expected at the end of the program include:

    Result 1: Participating LG units have resources to use external management support.
    Key outputs: providing massive management training to over 4,000 LG officials from 600 LG units, using modern education methodology (group-based training, individual mentoring for groups, distance learning tools, project development focus).

    Result 2: LG units have easy access to effective management tools developed under the program.
    Key outputs: designing and pilot testing (in 33 LG units) a management tool to diagnose, plan and implement institutional development in LG offices (the "IDP methodology"); creating a database of best practices in public administration management.

    Result 3: LG units are willing to invest time and money to improve management.
    Key outputs: identifying legal deficiencies constraining effective management and suggesting appropriate revisions to the legal framework; strengthening capacity and institutional cooperation between the Ministry and LG associations; creating a basis for a performance benchmarking system in Poland; promoting ethical standards in public administration at the local and regional levels.

DESCRIPTION OF THE M&E SYSTEM CREATED UNDER THE LG ADMINISTRATION COMPONENT OF RDP

    1. A Logical Framework Matrix constitutes the central element of the project management system. It defines the project's objectives and describes the approach taken to implement it. The Logical Framework is accompanied and complemented by other monitoring and evaluation tools. A full set of M&E instruments is presented in the diagram below. The main sources of data / information are listed in the first column. They are addressed and utilized in the M&E system via specific tools, which are provided in the second column of the table. Observations and conclusions are forwarded to the Project Management Team, implementers, sponsors and beneficiaries, as presented in the last column.

    Diagram 2 (RDP, Component B-3 M&E system) maps the main sources of data (external reports and statistics, beneficiaries' opinions, external experts' opinions, quantitative data on project progress, implementers' periodic reports, products delivered by contractors, and direct observation) to the M&E tools that use them (final and mid-term evaluations, surveys and statistical analysis, structured interviews and focus groups, Pipeline Analysis, review of periodic reports, review of products and services, and site visits) and to the resulting follow-up documents and actions (Final and Mid-Term Evaluation Reports, the Quarterly Monitoring Report, management meetings, work plans, the report approval process, Product Review Reports and Site Visit Reports). The instruments are arranged against the intervention logic, from overall objectives and project purpose down to results, activities and means / resources, and from evaluation at the top to monitoring at the bottom.

Diagram 2: M&E System as applied under the LG Administration Component of RDP

    2. The tools and procedures can be assigned to the monitoring function, to the evaluation function or, in some cases, to both of them (e.g. the internal Mid-Term Evaluation). M&E instruments are presented in a vertical order: those at the bottom of the list, such as Site Visits and Product Reviews, are purely monitoring instruments, and the higher an instrument's position on the list, the more evaluative its character.

    3. Through intensive site visits, comprehensive technical product reviews and careful analysis of periodic reports, the Project Team develops its opinion on the quality and timeliness of the services provided by individual implementers. These findings are verified in a partnership dialogue with contractors and project beneficiaries. Final opinions, suggestions and recommendations are forwarded to the implementers via Site Visit Reports (SVR), Product Review Reports (PRR) and at the management meetings. Eventually this process leads to approving implementers' periodic reports and transferring payments to their accounts.

    4. Implementers submit quantitative information on a monthly or quarterly basis (pipeline data). This allows for calculating project monitoring indicators and assessing the dynamics of project activities (Pipeline Analysis).
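
    The paper does not describe the tooling behind the Pipeline Analysis, so the sketch below is only an illustration of the idea: periodic pipeline submissions are aggregated into a cumulative indicator and a simple measure of activity dynamics. The field names, the reporting periods and the sample figures are hypothetical; only the 4,000-official training target comes from the results framework above.

        # Illustrative sketch only: turning implementers' periodic "pipeline"
        # submissions into simple monitoring indicators. Field names, periods
        # and sample figures are hypothetical, not taken from RDP documents.
        from dataclasses import dataclass

        @dataclass
        class PipelineRecord:
            period: str             # reporting period, e.g. "2003-Q3"
            officials_trained: int  # LG officials who completed training in the period

        TARGET_OFFICIALS = 4000     # program-level target from the results framework

        def pipeline_analysis(records):
            """Aggregate periodic data into cumulative progress and recent dynamics."""
            records = sorted(records, key=lambda r: r.period)
            cumulative = sum(r.officials_trained for r in records)
            previous = records[-2].officials_trained if len(records) > 1 else 0
            return {
                "cumulative_officials_trained": cumulative,
                "progress_towards_target": round(cumulative / TARGET_OFFICIALS, 2),
                "change_vs_previous_period": records[-1].officials_trained - previous,
            }

        print(pipeline_analysis([
            PipelineRecord("2003-Q2", 310),
            PipelineRecord("2003-Q3", 420),
        ]))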

    5. In the Quarterly Monitoring Report (QMR), operational data are aggregated, summarized and converted into more general opinions on the project's progress towards its objectives (result indicators). Thus the QMR links the monitoring and evaluation aspects. Internal mid-term and final evaluations complement the M&E system: the Project Team's conclusions are reconsidered in the strategic context and against the background of the opinions of external experts and project beneficiaries. Finally, recommendations concerning necessary revisions to the implementation approach are formulated.

    MONITORING AND EVALUATION - DIFFERENCES AND SIMILARITIES

    Monitoring is focused on daily management issues. The typical questions are: "How many?" "When?" "How?" "For how much?" By monitoring we try to assess whether activities are implemented effectively and efficiently. Evaluation addresses strategic questions: "So what?" (impact and sustainability) and "Why?" (relevance). Here the analysis goes deeper and seeks actual cause-effect relationships and the eventual implications of the observed situations. It perceives the program not as a series of piecemeal activities but seeks "big picture" conclusions.

    By "monitoring" we usually mean a system. Data should be collected and analyzed more or less frequently, according to a predefined timetable (Performance Measurement Plan). It requires regularity and continuity with regard to the type of data being gathered and the methodology used to analyze it. Evaluation differs from this description: stakeholders have significant flexibility in specifying which aspects of the program should be assessed, when and how.

    Monitoring is one of the components of modern project management. First of all, it is expected to generate useful information for the project manager: Where are the bottlenecks? How are we doing towards our objectives? Are expenses under control? We can say that utility is the primary feature of a properly organized monitoring system. Evaluation serves different audiences, such as sponsors, assistance recipients and the wider public, who are potentially interested in the results of the investment made. They expect an objective response to the very basic questions: Have we achieved our goals? Are our results sustainable? Have we learned anything for the future? The clearer and better justified the evaluation's responses, the more added value is found in the assessment. Attention is put on the transparency of the evaluator's approach and his or her ability to reveal cause-effect relationships between subsequent layers of analysis.

    MONITORING & EVALUATION - COMPARATIVE CHARACTERISTICS

    Subject: evaluation is usually focused on strategic aspects; monitoring addresses operational management issues.
    Character: evaluation is incidental, with flexible subject and methods; monitoring is continuous, regular and systematic.
    Primary client: evaluation serves stakeholders and external audiences; monitoring serves program management.
    Approach: evaluation stresses objectivity and transparency; monitoring stresses utility.
    Methodology: evaluation uses rigorous research methodologies and sophisticated tools; monitoring uses rapid appraisal methods.
    Primary focus: evaluation focuses on relevance, outcomes, impact and sustainability; monitoring focuses on operational efficiency and effectiveness.
    Objectives: evaluation checks outcomes and impact, verifies the developmental hypothesis and documents successes and lessons learned; monitoring identifies and resolves implementation problems and assesses progress towards objectives.

    Monitoring refers to a pre-defined program strategic framework that guides implementation. It is expected to generate timely information on operational efficiency and effectiveness. To fulfill this need, monitoring utilizes "rapid assessment methods", which provide fast feedback and are not very expensive.

    Evaluation, to produce objective and exact information, uses more rigorous research methodologies, such as representative surveys and comprehensive quantitative analyses. The review of program outputs, outcomes and impact is conducted against the background of well-recognized trends in the surrounding environment. On this basis the evaluator judges whether the program's developmental hypothesis was optimal in the given circumstances.

    Along with the differences, we found important links and similarities between monitoring and evaluation. A comprehensive approach to monitoring includes on-going review of progress towards results and outcomes, as well as gathering data for measuring impact. The project team can use some of the evaluation tools to develop an internal assessment of selected strategic aspects of the program. Sometimes the logical framework developed in the programming phase is out of date by the time the program is actually implemented. In such cases, the project team needs to reconsider, revise and/or complement the original assumptions and the program's approach.

Examples of linkages between monitoring and evaluation:

    - Both monitoring and evaluation refer to the same logical framework that organizes the program as a whole. The monitoring system utilizes some of the result, outcome and impact indicators to observe program progress towards its final objectives. In exchange, it tests the indicators' formulas and verifies data sources. A well-organized monitoring system creates a solid basis for the proper design of the final evaluation.

    - Evaluation, even if expected at the end of a program, influences its current implementation. The inevitable assessment by an independent, external expert puts significant pressure on the project management team and contractors. As a result, they act more diligently and see their operational-level activities in the strategic context.

    - Sometimes monitoring faces important implementation issues which cannot be properly explained with simplified research methodologies and within limited time and budget resources. A profound review of some of these issues can be contracted out to external experts, for instance by including them in the scope of work (SOW) for a mid-term evaluation.

STRATEGIC CONTEXT FOR EVALUATION: PROGRAM VS. SECTOR PERSPECTIVES

    The explanation of inter-relationships between the intervention logic of a program and the strategic development structure of the respective sector is usually considered straightforward and obvious. The recommended approach calls for identifying the key problems hampering sector development and exploring logical cause-effect linkages between the identified issues. The resulting diagram, known as a "problem tree", serves as a basis for selecting a strategic approach for a program (intervention logic) and drafting a relevant Logical Framework. In such an approach the program is closely related to the respective sector strategy or simply duplicates selected parts of it. Program achievements and related changes in the sector can then be measured under one evaluation research effort and against the same set of impact indicators.

    Our experience with the Rural Development Program, a multi-sectoral program, proves that the actual situation can be more complicated.

    The first issue refers to the multi-sector character of complex structural undertakings. Program logic and activities are subordinated to specific objectives in a priority sector. At the same time, individual components of the program can refer to a number of other areas, which contribute to the ultimate goal but simultaneously have their own logic and development structures. This is the case of our project. As a component of RDP it contributes to the objective phrased as "development of the rural sector in Poland". At the same time, it has a role in the context of the Local Government Development Strategy. While RDP is coordinated by the Ministry of Agriculture, our Component is implemented by the Ministry of Public Administration (MoPA). Of course, MoPA is primarily concerned with strengthening the management capacity of central and local government administration. In the end the program has two important dimensions. Both should be taken into account at each subsequent phase of program planning, implementation and evaluation.

    The Log Frame concept puts more attention on the vertical "intervention logic". A program is deemed successful when inputs produce outputs, outputs create results, and results subsequently convert into a wider impact on the targeted priority sector. Using this logic, the final evaluation of the LG Administration Component should check whether its activities caused improvement in rural areas. But there is also a horizontal dimension, where program achievements can be interpreted in terms of their impact on particular elements of the Local Government Development Strategy (see Diagram 3).

    Another aspect worth considering refers to the "developmental hypothesis", which sets the basis for a program. To simplify the situation, let us imagine that our program is directly related to only one sector.

    Diagram 3 juxtaposes the sector perspective with the program perspective of the LG Administration Program. On the sector side, the development of the rural sector and an effective, efficient and accountable local government are supported by conditions such as ethical standards, a clear legal framework, a stable LG finance system, effective management, SME development, human resources, infrastructure, institutional capacity in local government and a focus on clients. On the program side, the Component's results (participating LG units have resources to improve management, developed management tools are accessible, participating LG units are committed to change) are underpinned by the key outputs listed earlier (training of 4,000 officials from 600 LG units, designing and testing the ID methodology, a database of management best practices, recommendations to the legal framework, capacity and cooperation between the Ministry and LG associations, promotion of ethical standards, and a basis for a performance benchmarking system).

Diagram 3: LG Administration Program Strategic Context

    Instead of developing rural areas, we want to improve management in the local government sector. A logical framework for the relevant sector strategy is presented on the left side of Diagram 3. It assumes that in order to improve the management effectiveness of LG units three major conditions must be met: (1) effective management tools are easily available to interested LG units, (2) local governments have the human and technological resources to adapt these tools to their specific local conditions, and (3) local authorities are committed to improving management. Each of these strategic objectives can be further disaggregated into operational objectives such as "LG personnel is proactive and skillful", "Management tools are documented and ready for replication", "Law stimulates management improvements", and so on. Having such a strategic framework, we can develop a set of indicators to monitor changes in the sector.
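
    Purely as an illustration of this last point, and not of any tool used under the program, the sketch below shows one way such a framework could be written down so that every operational objective carries its own monitoring indicators. The objective texts are paraphrased from the strategy described above; the indicator names, units and data sources are hypothetical.

        # Illustrative sketch: encoding the sector strategic framework so that
        # each operational objective carries indicators for monitoring change.
        # Indicator names, units and data sources are hypothetical examples.
        sector_framework = {
            "goal": "Improved management effectiveness of local government units",
            "strategic_objectives": [
                {"objective": "Effective management tools are easily available to LG units",
                 "operational_objectives": [
                     {"objective": "Management tools are documented and ready for replication",
                      "indicators": [{"name": "Tools documented in the best-practice database",
                                      "unit": "count", "source": "program database"}]}]},
                {"objective": "LG units have human and technological resources to adapt the tools",
                 "operational_objectives": [
                     {"objective": "LG personnel is proactive and skillful",
                      "indicators": [{"name": "Officials completing management training",
                                      "unit": "count", "source": "training records"}]}]},
                {"objective": "Local authorities are committed to improving management",
                 "operational_objectives": [
                     {"objective": "Law stimulates management improvements",
                      "indicators": [{"name": "Legal revisions proposed and adopted",
                                      "unit": "count", "source": "ministry records"}]}]},
            ],
        }

        # Flatten the framework into a simple indicator list for a monitoring plan.
        for so in sector_framework["strategic_objectives"]:
            for oo in so["operational_objectives"]:
                for ind in oo["indicators"]:
                    print(so["objective"], "->", oo["objective"], "->",
                          ind["name"], f"[{ind['unit']}]")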

    In the planning phase a critical decision is made concerning the selection of the operational objectives to be addressed by the program. One could imagine that the most effective approach would be to focus resources on the one or two elements of the system which are the weakest. Somebody else would vote for spreading resources among a bigger number of objectives to assure small but simultaneous improvements in all important aspects. Since the availability of historical data is limited, it is difficult to estimate the correlation between changes in individual elements and the resultant improvement at the level of the ultimate sector objective. The choice is usually more or less arbitrary but has a tremendous impact on the final results of the program. Honestly speaking, we do not have a good idea of how to evaluate this very strategic aspect of program design. Methodologically it would make sense to compare program achievements with the estimated potential results of applying alternative strategies, but technically this task seems very difficult if not impossible.

FACTORS WHICH DETERMINE EVALUATION CAPACITY

    As per the World Bank definition, the level of evaluation capacity can be described in relation to three aspects: (1) the strength of the demand for evaluation, (2) the market's ability to supply it, and (3) an appropriate institutional framework to guarantee that evaluation findings are utilized. These general conditions can be further disaggregated to help us identify and categorize the factors influencing evaluation capacity.

Demand

    Demand for evaluation is a function of: existing formal and legal requirements for conducting evaluations, the degree of public interest in the quality of public administration performance, and the knowledge and understanding of evaluation, its function and utility, among public officials.

Formal requirements

    There is no doubt that external donors' requirement to conduct evaluations played a very important role in introducing and popularizing the M&E concept and tools in Poland. In spite of this, many officials, having a choice between spending money on something concrete, like a kilometer of sewage pipe, and something abstract, like an evaluation report, would tend to select the former option. Does it mean that they are poor managers? Not necessarily. Evaluation is sometimes perceived as something externally imposed and alien. It has already been accepted as a formal and inevitable requirement, but in many cases it has not been internalized as something really needed and useful. This situation is likely to change when M&E refers to our Polish public funds and is required by Polish law. The argument that evaluation is a kind of strange bureaucratic impediment created by donors to limit our access to money will no longer be valid. It is also possible that public interest in the effectiveness of spending public resources will be higher than in the case of externally funded programs.

Development of civil society

    It would be wrong to say that the media are not interested in the performance of public authorities. The problem is that this interest is limited to investigating negative cases. We have not heard of any research on the degree to which journalists use evaluation reports as a source of objective information on the failures, but also the successes, of public programs. Taking into account the limited access to such publications, their complex content and technical language, the results could be discouraging. One could say that the media tend to provide the information the wider public is interested in. This is true; as a society we are far from Tocqueville's ideal. We do not know our rights and are not used to the role of clients who can really influence the way they are served by the administration. For sure, the degree of public interest in the quality of administration performance is one of the strategic factors which will pave the way to accountability in the public sector and enhance the role of evaluation.

Knowledge and understanding of evaluation

    Equally important is the knowledge and understanding of M&E concepts and tools among potential clients of evaluations. It seems that a big part of the reservation towards hiring evaluators stems from a lack of trust that somebody external to the organization could provide new information that is worth spending money on. Doubtless, numerous cases of sub-standard reports or fat, "user-hostile" elaborations that landed on the lower shelves do not help to change this stereotype. Personal experience in managing projects and external experts is one of the key factors influencing officials' interest in evaluation. It is recommended that success stories of useful evaluations be actively disseminated. Equally important is training for public officials in designing, contracting and managing evaluation measures. As we know, the quality of an evaluation depends as much on the professionalism of the evaluator as on the ability of the program manager to prepare good Terms of Reference for the research project.

Supply

    Looking at the supply side of the market, we should consider the existence of appropriate expertise, the availability of credible data sources and information systems, and access to effective evaluation tools and methodologies.

Evaluators, their tools and methodologies

    We have very limited knowledge of other specialists active in the field of evaluation. It seems that many people get involved in M&E projects just by chance. To some extent this is also our case. We have never attended any workshops or training sessions devoted specifically to M&E. The knowledge we have today results from our ten years' experience of managing Phare, USAID and World Bank projects. Our evaluation-related skills were developed through reading and practicing. We made numerous mistakes and spent a lot of time re-inventing ideas that had earlier been explored by others. The conclusion is that it is too early to say that evaluation is a real profession in Poland. There is an urgent need for professional standards, a code of ethics, specialist training, and opportunities to exchange experience, opinions and information among M&E professionals. Discussions about creating a professional association for evaluators are very promising. Such an initiative should be built on individuals rather than institutions. It deserves attention from government and international organizations.

    Evaluation methodologies are known, even the very sophisticated ones. Statisticians, social scientists and good expertise in relevant technical fields are available. The real challenge for evaluators in Poland relates to the modest practical experience in actually doing evaluations and the lack of opportunities to master tools and approaches through a professional debate with interested clients or other evaluators. Many experts involved in evaluation projects treat them as a marginal addition to their primary lines of business. They are not eager to invest substantial time and effort to perfect their approach and methods. Passive clients are not very demanding, so this shortsighted attitude seems sufficient and beneficial. As a result, we are missing the very practical skill of keeping a balance between the type of information that is needed and the approach taken to address it in a meaningful way. The strategic perspective is often treated as an additional piece included only because of formal requirements and not as the central axis that organizes the program. A participatory approach to implementing assessment projects is under-utilized. Report writing and presentation skills are probably among the weakest elements of the market as we see it today.

    In this context a separate comment should be made on evaluation language. Since Project Cycle Management and the Log Frame were widely adopted by development agencies, evaluators have been obliged to assess project success against wider goals, results, outputs and inputs. They are expected to refer to developmental assumptions and anchor their opinions in a solid system of sophisticated indicators. All this makes perfect sense and is logical and convincing. The problem is that a big part of our clients (top-level officials, the media, the wider public and politicians) do not use this language in their professional work. The Log Frame table itself is a wonderful tool for presenting all the key aspects of a program in a concise way. Each question asked separately sounds clear and straightforward. But when all this information is included in one table labeled with an esoteric name, it looks extremely complex to an inexperienced reader. In several years the situation may change significantly, but as of now substantial effort must be put into simplifying the language and format used for presenting evaluation findings to a wider audience.

Data sources and information systems

    We do not have enough knowledge, or audacity, to make judgments on the relevance and credibility of the information in the national statistical system. There are huge amounts of data being gathered by the Main Statistical Office and its regional branches. Separate databases are maintained by various ministries and central agencies. Usually a problem appears when somebody needs to check a very concrete thing, e.g. how many new jobs were created at the regional or county level. It takes significant time to work out who could possibly have this kind of information. If the data exist but are not part of a routine publication, one has to send a formal request to be authorized to get them. Well, it is difficult to specify what you need when you do not know what they have. If one is finally successful and the data set lands on one's desk, one is often surprised that it is in a different format than expected, or that there are other deficiencies that limit its utility. The point we are trying to make is that the system is very overwhelming and unfriendly. You cannot avoid feeling that unless you have a PhD in statistics you will not be able to get through it. Since most of us do not fulfill this condition, additional research is funded under individual programs. Because of time constraints, limited skills and budgets, the acquired data have questionable credibility and the research is not repeated in the following years. Proceeding this way, we will never be able to develop useful monitoring systems at the sector or regional levels.

    To overcome this situation we should start by defining sector and sub-sector development strategies. Once we know what we are trying to achieve, relevant indicators can be specified. Then a thorough review of available data should be conducted to identify gaps in the current system and agree on the needs for additional research. When launching costly surveys, we should think not only about a particular medium-term program but also consider a long-term monitoring system closely linked to the sector strategy (e.g. developing benchmarking to measure public sector performance).
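
    The review of available data described above can be pictured as a simple gap analysis between the indicators a strategy requires and the data sources that already exist. The sketch below is only an illustration of that idea; the indicator and source names are invented examples, not an inventory of actual Polish data sources.

        # Illustrative sketch: comparing the indicators required by a sector
        # strategy with the contents of existing data sources, to identify
        # gaps that call for additional research. All names are invented.
        required_indicators = {
            "off-farm jobs created (county level)",
            "LG units using the IDP methodology",
            "citizen satisfaction with LG services",
        }

        available_sources = {
            "Main Statistical Office": {"off-farm jobs created (county level)"},
            "Program database": {"LG units using the IDP methodology"},
        }

        covered = set().union(*available_sources.values())
        gaps = required_indicators - covered

        print("Covered by existing sources:", sorted(covered))
        print("Requires additional research:", sorted(gaps))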

Institutional arrangements

    Reviewing institutional arrangements, we should observe whether the overall system is conducive to incorporating evaluation conclusions and recommendations in managing ongoing activities, documenting the successes and failures of completed measures, planning follow-up interventions and, generally speaking, holding the public sector accountable for its operations. It is also worth considering whether examples of successful utilization of evaluation reports are known and available to project managers and responsible public officials.

Conducive system

    The third group of factors determining evaluation capacity deals with the way evaluation reports are used once they are developed. The format for submitting financing proposals to international donor institutions requires that a review of earlier projects, evaluations and reports concerning the subject sector be conducted. Unfortunately, this is not a standard way of doing things in the traditional type of administration. In order to review a report one needs to get it first. Even if a relevant evaluation report was produced, it is not an easy task to learn about it, find it and actually get a copy. Of course it is not impossible, but it is time-consuming and not required. Two conclusions are obvious: an incentive mechanism making people learn from previous experience should be built into the system, and there should also be a kind of central depository of evaluation reports, so that anybody can have easy access to them.

    Sometimes it happens that the scope of work for a follow-up program is foreclosed before the results of the previous investment are fully reviewed. At the other extreme, momentum can be lost when programming the next stages of the development process lasts too long. Proper consideration should be given to the issue of time synergy if evaluation is to play a major role in managing structural programs.

    In order to promote evaluation concepts and tools, disseminate good practices in building effective M&E systems and maintain a database of evaluation reports, some sort of evaluation excellence center could be established. Beyond training and on-line technical support for public officials, project managers and evaluators, it could serve a useful role in the process of incorporating evaluation provisions into the public administration system at the central and local levels.

     Examples of successful evaluations

    Generally speaking, modern result-oriented management is not yet the prevailing model in Polish public administration. Very often an external assessment of our activities is not perceived as a learning opportunity and a chance to improve our approach, but as a threat, a risk factor which should be closely controlled. There are many reasons for this situation: bad experience of the previous "centralistic" system, a low organizational culture in public institutions, a psychological tendency to avoid problems and difficult questions, etc. In our opinion the best approach to overcoming this problem would be to show some real and positive cases of evaluations that led to actual improvements in public administration services. In order to fight the negative stereotypes connected with evaluation, it would be advisable to publicly recognize the authors and implementers of the most successful programs. People should see that external assessment is not only a way to identify problems but also a mechanism to recognize successes.

    Evaluation Capacity Factors

    1. Demand for / interest in evaluation:
    1.1 Formal provisions for conducting evaluations
    1.2 Civil society development and public interest in public sector performance
    1.3 Knowledge and understanding of evaluation among stakeholders

    2. Supply of quality evaluation:
    2.1 Availability of professional evaluators
    2.2 Access to proper data sources
    2.3 Knowledge of evaluation methods and tools

    3. Organization's ability to learn from evaluation findings:
    3.1 Institutional system conducive to using / learning from evaluations
    3.2 Skills to apply evaluation findings in managing, programming and reporting

CONCLUSIONS CONCERNING POTENTIAL ACTIVITIES TO STRENGTHEN EVALUATION CAPACITY IN POLAND

    1. Work on introducing a "polonized" evaluation system, supported by relevant legislative provisions, training and information, should be continued. We need institutional arrangements that stimulate proactive attitudes aimed at systematically applying, not only documenting, the lessons of successes and failures.

    2. Development of a national evaluation system should be seen in the context of building a transparent and accountable public administration and civil society. Initiatives such as benchmarking of public services, promoting citizens' participation and spreading ethical standards in public administration can magnify the eventual impact of the evaluation system.

    3. There is a need for massive training on monitoring and evaluation issues for project managers, public officials, NGOs and the media. A step-by-step manual on designing and managing evaluation projects would be very much needed to increase the quality and ownership of evaluation reports. To this end, EU or other international organizations' handbooks could be translated and adapted to local conditions.

    4. Efforts to establish and strengthen a national professional association of evaluators should be supported. In our opinion such an organization could play a major role in setting professional standards and facilitating the development of technical skills among M&E specialists.

    5. There is a need to simplify the very technical language used in evaluation reports. At a minimum, a requirement to add short summaries written in common language could be introduced.

    6. While evaluation of individual projects is fairly common, much more could be done at the sector / national level. Structural Funds create a unique opportunity to stimulate a strategic approach to managing sector reforms. To monitor structural changes we need to review currently available data sources and develop better ways of cooperation and coordination between data providers, holders and users.

    7. An incentive mechanism making people learn from previous experience should be built into the system. To allow this to happen, a widely accessible depository of evaluation reports is needed.
