Process Indicators (P-1 through P-15)
P-1 The implementing/collaborating organization(s) actively participated in the design of the operations research (OR) project
The design of the OR project is the formulation of the study, which includes identifying the problem, establishing the objectives, designing the intervention, and selecting a research methodology. “Active” participation involves contributing original ideas to the work, not simply attending meetings.
P-2 The implementing/collaborating organization(s) actively participated in the implementation of the OR study
“Active participation” indicates that the organization was involved in decision-making and played a technical role in the implementation of the study, for example hiring new staff, conducting training, or analyzing and interpreting results.
P-3 The implementing/collaborating organization(s) participated in developing programmatic recommendations
This indicator asks whether these organizations participated and, if so, how: for example, by collaborating in report preparation, attending formal meetings, or joining working groups at dissemination conferences.
P-4 The study accomplished its research objectives
Each study is designed with one or more objectives. This indicator determines whether the study achieved each of its objectives.
P-5 The intervention was implemented as planned (or with some modifications)
Changes between the proposal and implementation of the intervention frequently occur and often are for the better. This indicator seeks to determine whether the organization carried out all of the activities specified in the intervention, allowing for some change in response to local realities. If not, the reviewer should identify any changes between the design and actual realization of these activities.
This indicator is not intended to penalize an organization for making modifications. Rather, it ascertains that the organization made some meaningful change in service delivery (that there was “something to evaluate”). An intervention study may fail to show any change in the desired outcome for one of two plausible reasons: (1) the organization never implemented the intervention, or implemented it so weakly that the study hardly constituted a fair test of its potential effectiveness, or (2) the organization fully implemented the intervention but it failed to show the expected results. This indicator attempts to eliminate the first possibility by determining that the intervention was in fact implemented.
P-6 The researcher(s) completed the study without delays (or other adjustments to the timeline) that would compromise the validity of the research design
Study activities are often delayed. This indicator seeks to identify delays that affected the timing of the intervention or that could have reduced the effectiveness of certain activities (e.g., a delay in training diluted the effects of the activity; the period between the intervention and final data collection had to be cut short, leaving insufficient time for the desired change to take place).
P-7 Key personnel remained constant over the life of the OR project
“Key personnel” are any personnel with a decision-making role in the design or implementation of the subproject. Such personnel include the principal investigator, the study coordinator, and counterparts in the collaborating agencies, including key service personnel or government officials actively participating in implementation.
P-8 The study design was methodologically sound (free of flaws that could have affected the final results)
Evaluators should assess this item based on the methodology section of the report and (if appropriate) on discussions with the researchers. Generally, the external evaluator (not a staff member of any of the participating organizations) makes an “informed decision” on this point; key informants may have less knowledge or experience to make this judgment.
P-9 The research design was feasible in the local context
“Feasible” here means “reasonable” or “manageable,” a design that could be repeated without unduly draining financial or human resources. “Local context” includes not only program-related factors but also sociocultural or political factors, among others.
P-10 The implementing/collaborating organization(s) judged the OR technical assistance to be useful and provided in a collegial manner
To qualify for a full score, both elements must be positive. If, for example, the advice was technically sound, but counterparts reacted negatively to the manner in which the OR team provided assistance (e.g., in an offensive or condescending way, “imposed upon them”), then the study should receive a lower score on this indicator.
P-11 Stakeholders judge results of the OR study to be credible/valid in the local context
This indicator refers to the judgment of stakeholders (policymakers, researchers, donors, program managers). Utilization of results would likely be limited if stakeholders seriously questioned the validity of the results.
P-12 Research was programmatically relevant
The perceptions of the same stakeholders listed above determine relevance. Relevant research addresses a priority problem of the program, whether a national program of the MOH or a more local program of an NGO.
P-13 Results were disseminated to key audiences, including policymakers, program managers, service providers, and donors
All studies involve dissemination of results. This indicator seeks to determine whether the dissemination strategies used effectively reached the target audience. “Key audiences” are those in a position to act on the results (e.g., policymakers, key decision-makers or service providers in implementing/collaborating agencies, donor agency staff). In addition, dissemination efforts may reach other interested parties (e.g., students at the local university, members of the international reproductive health community), but the indicator refers only to those in a position to act upon the results.
P-14 Results are readily available in written form
This indicator verifies the existence of a document on the key findings of the study that is well presented (of professional quality) and is locally available in sufficient quantity. This document may appear in a variety of media (e.g., website, CD-ROM) in addition to print. Ideally, results should be available in various formats appropriate to the intended audience: final reports and journal articles for donors and the academic reproductive health community, summaries or research briefs for decision-makers and program managers.
P-15 The study included an assessment of costs of the intervention
Evaluators should mention any data collected on the cost of the intervention, primarily for the purpose of cost-effectiveness analysis. This indicator serves informational purposes only, since not all OR studies need an assessment of cost.