Performance of operations research and impact of results
This set of 27 indicators collectively evaluates operations research (OR) studies in terms of (a) the process of conducting the study and (b) its impact (i.e., use of the study results to change service delivery or to influence policy).
An additional four indicators measuring context and other factors are listed, but they provide background only; evaluators do not score them in measuring performance of the OR team.
Assessment by an external evaluator based on available information. Evaluators score each of the 25 items on a scale of 1 (not at all) to 3 (very much so). If indicator I-1 is negative (the intervention was not effective), then I-3, I-4, and I-5 are non-applicable.
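The scoring rule above, including the dependency between I-1 and the indicators that follow it, can be sketched in code. This is an illustrative sketch only; the indicator identifiers are taken from the text, but the function name and data structure are hypothetical, not part of the instrument.

```python
# Sketch of the scoring rule: each indicator is scored 1-3, and when
# indicator I-1 is negative (the intervention was not effective),
# indicators I-3, I-4, and I-5 are marked non-applicable instead of scored.

def apply_scores(raw_scores, intervention_effective):
    """raw_scores: dict mapping indicator id (e.g. 'I-1') to an int 1-3."""
    dependent = ("I-3", "I-4", "I-5")  # non-applicable if I-1 is negative
    scored = {}
    for indicator, score in raw_scores.items():
        if not intervention_effective and indicator in dependent:
            scored[indicator] = "N/A"
        else:
            if score not in (1, 2, 3):
                raise ValueError(f"{indicator}: score must be 1, 2, or 3")
            scored[indicator] = score
    return scored

# Example: I-1 was negative, so I-3 becomes non-applicable.
result = apply_scores({"I-1": 1, "I-2": 2, "I-3": 3},
                      intervention_effective=False)
```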
Project documents, in particular the project's final report, and interviews with key informants, including researchers (especially the Lead Investigator), program managers and other providers in the service delivery organizations that stand to benefit from the OR, donor agency staff, policymakers, and other key decision-makers.
The set of OR indicators allows the evaluator to arrive at a set of numerical scores, each supported by a qualitative justification, for each OR study under review. The 3-point scale for each item distinguishes among studies that performed well (3), those that performed satisfactorily but with notable problems (2), and those that did not perform satisfactorily on the relevant indicator (1).
Although other formats are possible, the OR indicators developed under the USAID-funded FRONTIERS Program use a grid format, which doubles as a data collection tool and a reporting format. Evaluators can use the blank grid as an interview guide to ensure consistency among key informants, evaluators, and projects, and can present results for each project in the same grid format. In addition, they can summarize the numerical scores of multiple projects in a table, making it easy to compare overall performance of studies (comparing columns) or specific indicators across studies (comparing rows). This enables evaluators to identify areas of consistent strength and those requiring improvement.
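The column-versus-row comparison described above can be sketched as a small data structure. The indicator and study names below are invented placeholders, not actual FRONTIERS indicators; the sketch only shows how the same table supports both comparisons.

```python
# Hypothetical summary table: scores keyed first by indicator (rows),
# then by study (columns). A "column" is one study's performance across
# all indicators; a "row" is one indicator compared across studies.

scores = {
    "P-1": {"Study A": 3, "Study B": 2},  # placeholder process indicator
    "P-2": {"Study A": 2, "Study B": 1},  # placeholder process indicator
}

def project_scores(scores, study):
    """A column: all indicator scores for one study."""
    return {ind: by_study[study] for ind, by_study in scores.items()}

def indicator_scores(scores, indicator):
    """A row: one indicator compared across studies."""
    return dict(scores[indicator])

print(project_scores(scores, "Study A"))   # {'P-1': 3, 'P-2': 2}
print(indicator_scores(scores, "P-2"))     # {'Study A': 2, 'Study B': 1}
```

Keeping the raw grid rather than a single aggregate preserves exactly the two comparisons the text calls for.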
The OR indicators fall into three categories: process, impact, and context/other. Process indicators relate to the conduct of the study; evaluators can assess them immediately upon completion of the study. By contrast, evaluators should assess impact indicators three years later, although impact occurring sooner certainly "counts." The instrument contains four contextual and other indicators that provide insight into the process but do not reflect "performance."
While each indicator has a score, the set of indicators does not lend itself to a summary score. Indicators measure different aspects of process and impact that are not necessarily of equal importance, nor is there sufficient experience in evaluating OR to reliably weight them. Rather, evaluators can compare scores individually or in groups of indicators that measure similar aspects, such as participation of the implementing agency at various stages of the study or conduct of subsequent research.