During the mid-1990s, USAID introduced a new agency-wide approach to monitoring its programs, known as Performance Monitoring Plans (PMPs). Central to the PMP is the results framework, a planning, communications, and management tool. The results framework includes the strategic objective and all intermediate results necessary to achieve it, whether funded by USAID or its partners. The framework also conveys the development hypothesis implicit in the strategy and the cause-and-effect linkages between the intermediate results and the objective, along with any critical assumptions of the development hypothesis that must hold for the objective to be achieved. Typically, the framework appears in graphic form supplemented by a narrative (PricewaterhouseCoopers, 2001). Evaluators can apply this conceptual framework to country-level programs as well as to the programs of specific USAID-funded projects. Other donor agencies have similar program monitoring tools that require evaluators to select appropriate indicators to track progress.
The results framework has several advantages. First, it has shifted the focus from a simplistic "bean counting" of activities toward the achievement of results. Although evaluators may still count the number of nurses trained, the key question is, "What result did the program achieve?" Did the nurses deliver quality services? Did people adopt a contraceptive method after receiving services from trained nurses? Second, the framework's widespread application throughout USAID has greatly enhanced the understanding of program monitoring and evaluation among program personnel at all levels. Whereas evaluation used to be passed off to a lone evaluator in a back office, the results framework is now an integral part of program design. Program managers are accountable for achieving the results outlined in their results frameworks, and program personnel understand that these indicators represent the criteria on which their efforts will be evaluated.
The results framework does have limitations. First, the framework describes how program interventions contribute to ultimate results through cause-and-effect linkages (e.g., lower-level results that contribute to the achievement of the strategic objective). In reality, however, the framework rarely traces all possible influences; factors other than the program can affect the observed results in ways the framework does not depict (i.e., the attribution problem). Second, the desired results are not always easy to measure in quantifiable terms. For example, the Central American HIV/AIDS Prevention Project (Programa Acción SIDA de Centro America, or PASCA) was designed to change the policy environment for HIV/AIDS programs and to strengthen NGO capacity to implement prevention programs in the region. The need for indicators of the policy environment led to the now widely used AIDS Program Effort Index (API), but this measure was entirely untested when it was first applied. Similarly, the measures for assessing institutional capacity building represent the "best guesses" of the expert group assembled to develop them; they lack the methodological rigor of more established indicators, such as the prevalence of contraceptive use or breastfeeding.
A third limitation lies not in the framework itself but in its application. If program managers fear failing to change behavior at the population level (i.e., outcomes), they may retreat to output measures that fall within their manageable interest. In RH areas where output and outcome measures are closely linked (e.g., immunization), this shortcoming is less evident. In contrast, in areas where the link between outputs and outcomes is more tenuous (e.g., women's nutrition), this limitation is more serious.