Service Delivery (Core)

Rationale for Evaluating the Service Delivery Environment

The service delivery environment is the situation prospective clients find when they seek services, both in terms of tangible factors (e.g., the physical plant, personnel, equipment, and supplies) and the intangibles (e.g., treatment received from the staff, educational messages received). The stronger the input from each of these functional areas, the better the services available to clients will be.

Program evaluation in the early days tended to focus on counting activities performed (e.g., the number of persons trained), results obtained in terms of service delivery (e.g., number of clients, number of visits), rates of use (e.g., contraceptive prevalence rate), or long-term outcomes (e.g., infant mortality rate). Curiously, evaluators treated the actual functioning of the program itself as somewhat of a “black box.” In general, evaluation did not probe the quality of the services or their availability to the population of the catchment area.

This situation has changed markedly as evaluators increasingly focus on the two defining characteristics of the service delivery environment: access to services and quality of care. The rationale for evaluating access and quality is twofold. First, evaluation of these topics serves to focus staff attention on the need to strive for improvements in these areas. Second, this type of evaluation measures the objectives that the different functional areas (management, training, commodity security and logistics, etc.) are working to achieve: better services and programs. Some have argued that evaluators should assess these functional areas in terms of behavioral change among clients/participants in the program or in the population at large. This argument fails to recognize that although the functional areas contribute to achieving program objectives, they do so by creating an improved service delivery environment, which in turn increases service utilization and desired behavior, as outlined in the conceptual framework below.


Conceptual Framework

Methodological Challenges of Evaluating the Service Delivery Environment

The challenges for evaluating access to services are quite different from those for evaluating quality of care. Indeed, each type of access presents different methodological issues. As for quality of care, most of the indicators are derived from one of four sources: facility audits, observation, client exit interviews, or review of medical records/clinic registries. The methodological issues that surface in connection with the measurement of quality relate to the concept of quality, data collection techniques, or sampling bias.

  • The opinions of actual clients may differ from those of international experts on “what is important” in terms of quality of care.

International experts who define the items on the instruments generally try to encompass a large range of issues, and in doing so may give less weight to the key issues for clients. A similar problem may occur when different stakeholders disagree about which indicators are most important.

  • Data collected by two or more observers may have low inter-rater reliability.

If observers are well trained, direct observation is generally the best method to measure compliance with standards of care. However, inadequate training of observers can result in low inter-rater reliability, which seriously compromises the validity of the findings. (A brief sketch of one common agreement statistic, Cohen’s kappa, appears after this list.)

  • Providers may perform better than usual when observed (i.e., the Hawthorne effect).

The Hawthorne effect refers to the tendency for persons to perform differently (usually better) when they know they are being observed (Rossi, Freeman, and Lipsey, 1999). Thus, the presence of an observer in the room during a counseling or clinical session may cause providers to be especially attentive to their duties.

  • Direct observation is not always feasible.

Although the direct observation of care is the preferred method to measure the level of compliance with clinical standards, in reality it is seldom used for monitoring maternal care because of the following limitations: emergency care is difficult to observe; deliveries often happen at night when observers are absent; deliveries can last many hours; and the opportunity to observe rare events is low.

  • Clients may not accurately remember the events that took place during the counseling and clinical sessions (recall bias).

Clients may not remember exactly what occurred during the session with the provider. The reliability of their responses may vary with the provider action in question.

  • Clients may report that they are satisfied with services, even if they are not (courtesy bias).

Studies have shown that clients are likely to report they feel satisfied with the services they’ve received and will not speak negatively about the clinic or clinic staff during exit interviews (Williams, Schutt-Aine, and Cuca, 2000).  Hence, results from the client exit interview tend to be positively skewed on the question of satisfaction.

  • The unit of analysis differs for the different data collection instruments.

The unit of analysis for both the client exit interview and for the observation is the client; however, the unit of analysis for the facility audit is the clinic.

  • Evaluators have difficulty estimating an appropriate sample size when client volume differs substantially across the reproductive health (RH) services to be evaluated.

Often the evaluator wishes to collect data on different RH services within a given facility or within a set of facilities. However, because client volume may differ by service, the evaluator will have difficulty establishing a sampling strategy that yields the appropriate number of cases for each service. (A brief sample-size sketch illustrating this point appears after this list.)

  • Standards defining quality of care may not be available or consistent across countries.

Indicators for evaluating the quality of maternal and neonatal care services require standards or guidelines as a reference for measurement. Some countries lack clinical standards for their programs, or the standards that do exist are not evidence-based. If the purpose of the evaluation is to compare the quality of maternal care among countries, then the standards need to be consistent across countries.

  • Special (periodic) studies of quality fail to address the need for regular monitoring.

Useful as periodic studies of quality, such as the Service Provision Assessment (SPA) and the Quick Investigation of Quality (QIQ), are to program managers, they fail to provide an ongoing measurement of performance. Relatively few programs have regular, ongoing systems to monitor quality systematically.
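
To make the inter-rater reliability point above more concrete, the following is a minimal sketch, not drawn from the original text, of how an evaluator might compute Cohen’s kappa, one common agreement statistic, for two observers who scored the same counseling sessions on a yes/no checklist item. The observer scores shown are hypothetical illustration data.

```python
# Minimal sketch: Cohen's kappa for two observers scoring the same sessions.
# The scores below are hypothetical illustration data, not real study results.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Agreement between two raters, corrected for chance agreement."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement based on each rater's marginal proportions.
    expected = sum((counts_a[c] / n) * (counts_b[c] / n)
                   for c in set(rater_a) | set(rater_b))
    return (observed - expected) / (1 - expected)

# Two observers rate whether the provider discussed side effects in 10 sessions.
observer_1 = ["yes", "yes", "no", "yes", "no", "yes", "yes", "no", "yes", "yes"]
observer_2 = ["yes", "no", "no", "yes", "no", "yes", "yes", "yes", "yes", "yes"]
print(round(cohens_kappa(observer_1, observer_2), 2))  # 0.47 (moderate agreement)
```

A kappa in this range would usually prompt additional observer training or clarification of the checklist definitions before fielding the instrument.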
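
The sample-size challenge noted above can also be illustrated with a short sketch. Assuming the evaluator wants to estimate a proportion for each service to within plus or minus 5 percentage points at 95% confidence, and assuming the hypothetical daily client volumes shown below, the number of facility-days of exit interviews required differs sharply by service.

```python
# Minimal sketch: facility-days of exit interviews needed per service,
# given a target precision and hypothetical daily client volumes.
import math

def sample_size_for_proportion(p=0.5, margin=0.05, z=1.96):
    """Clients needed to estimate a proportion within +/- margin at 95% confidence."""
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

# Hypothetical average clients per facility per day, by RH service.
daily_volume = {"family planning": 25, "antenatal care": 12, "postnatal care": 3}

target_n = sample_size_for_proportion()  # 385 clients per service
for service, volume in daily_volume.items():
    days = math.ceil(target_n / volume)
    print(f"{service}: about {days} facility-days to reach n = {target_n}")
```

A real sampling strategy would also account for clustering and design effects; the point here is simply that a data collection period adequate for the high-volume service may fall far short for the low-volume one.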

Service Delivery Indicators in this Database

The cross-cutting service delivery indicators in this database are organized under Access, Quality of Care, and Gender Equity/Sensitivity. Most of the specific programmatic areas also have indicators for access and quality pertaining directly to that technical area.

___________

References:

Rossi, P.H., H. Freeman, and M. Lipsey. 1999. Evaluation: A Systematic Approach. Thousand Oaks, CA: Sage Publications.

Williams, T., J. Schutt-Aine, and Y. Cuca. 2000. “Measuring Family Planning Service Quality Through Client Satisfaction Exit Interviews.” International Family Planning Perspectives 26(2).