Training in Service Delivery

Welcome to the programmatic area on training in service delivery within MEASURE Evaluation’s Family Planning and Reproductive Health Indicators Database. Training is one of the subareas found in the service delivery section of the database. All indicators for this area include a definition, data requirements, data source(s), purpose, issues and—if relevant—gender implications.

On the surface, one might consider training an “easy” area to evaluate, thanks to the pre- and post-tests often used in connection with training activities. Although such instruments continue to serve a useful function, they by no means capture the full range of training effects.

Moreover, the evaluation of training has changed substantially. First, organizations are no longer content to evaluate based on the number of training events, the number of participants, improved scores on post-test instruments, or other process indicators. Instead, competency-based training has become the standard in organizations worldwide.

Second, whereas in the past training was viewed as an isolated set of activities, often the panacea for whatever was ailing a service delivery system, today training programs are expected to address a broader range of issues, including contextual factors that affect a person’s ability to perform satisfactorily and that go far beyond the traditional limits of training. Programs have moved beyond conventional training to a process known as “Performance Improvement” (PI). The rationale for PI and the role that indicators play in this process are summarized in the “About” section of this database.

Third, where possible, evaluators attempt to measure the effects of training on the service delivery environment itself (i.e., improved access, enhanced quality). However, this type of “linkage” cannot be established without a special study, based on an experimental or quasi-experimental design or on multivariate longitudinal analysis, to demonstrate that the facilities receiving training perform better on one or more specific measures than those that did not receive the training. Unless program managers and donors are willing to commit funds to such special studies, they essentially operate on the assumption that good training results in improved performance and enhanced quality of care in the service delivery environment.

No universally accepted word exists in English to describe the person who attends a training event. We have used “trainee” in this section, but recognize the existence of other terms, such as “participant” (which implies more active involvement), “learner” (which reflects the absorption of new knowledge and skills), or “student” (especially in a pre-service education institution). Readers are encouraged to use the term most widely accepted in their local work environment or most appropriate for the activity in question.

A large portion of the personnel to be trained in the context of reproductive health programs will work in a clinical setting, such as a family planning clinic, STI treatment center, or obstetrical care ward. However, a growing proportion of persons to be trained will work in a non-clinical setting; such groups include community health workers, teachers, peer educators, journalists, women’s groups, and others. Whereas this section focuses on training for service delivery, many of the concepts can be adapted to other types of program implementation.

Methodological Challenges of Evaluating Training Programs

Specific methodological challenges of evaluating training programs include the following:

  • “Training” takes many different forms and levels of intensity.

A given training program may address learning objectives that require as little as a couple of hours to achieve, or it may last a month or more. Moreover, “training” may constitute an isolated activity (which has generally been the case in the past), or it may be one part of an ongoing and integrated program to deal with multiple problems in the service delivery environment. Thus, the evaluator must clarify the type of training event that is being evaluated and the intended objectives.

  • Training is designed to have multiplier effects, but the evaluation of training rarely captures such effects.

Some training programs are set up explicitly to have multiplier effects, such as “cascade training,” in which one level of program personnel is trained at a central location and these trainers then train groups of providers, all based on specific training standards and materials. Other training programs may in fact produce a spin-off effect when the trained person returns to the service delivery setting and shares content and skills with co-workers, either formally or informally. Because a trained provider may be immediately promoted to another level of care or to an administrative position, the evaluator may have trouble ascertaining the added or amplified effects at that level. In theory, one could conduct a special study to capture the effects of the training at different levels of the system, but such a study would be complex and expensive. In practice, the multiplier effects of training tend to be overlooked in the evaluation process. However, if such effects were overt objectives (and adequate human and financial resources were available), evaluators could measure them.

  • The training — however well executed — may be of little value to the program if organizations select inappropriate participants.

Traditional group-based training is often considered a “perk.” It allows an individual to obtain new (and generally marketable) skills, often in an enjoyable environment away from the pressures or routine of the workplace, with the added benefit of a cash payment to cover living expenses (in the case of traditional, off-site training). As a result, the demand to attend a given training course may outstrip the number of slots available. Moreover, officials in high positions may use training opportunities as a means of repaying favors, whether or not the person selected is the most appropriate for the task. Seniority, as well as politics, also plays a role in selecting participants for training. Although one hopes that this practice is on the decline, it complicates the evaluation of the effects of training on the service delivery environment. Training organizations have identified several means of addressing the problem. Some have developed ways to ensure that appropriate participants are selected for training, while involving administrator-level staff members (who are sometimes sent to a training course to enlist their support) in the training process in a different way. Alternatively, many organizations are developing other approaches to training, such as distance learning, self-directed learning, peer learning, and on-the-job training.

  • The guidelines and standards against which to evaluate performance may differ by country.

A number of the indicators refer to guidelines or standards against which service provider practices are to be evaluated. Some international standards do exist, such as WHO’s Medical Eligibility Criteria for Contraceptive Use (2010). However, most governments prefer to establish their own standards and guidelines (or to adapt international ones to their own situation). The benefit of country-specific standards is their relevance to the local context; it is unrealistic to expect a very poor developing country to provide the same quality of care as a country that has “graduated” from donor funding in a given area. Commitment from key constituencies also tends to be greater if the standards are developed with local input. However, this practice yields results that are not comparable across countries. Because the major purpose of program evaluation is to improve service delivery program implementation in a given country setting, differences in standards across countries should not be considered a major limitation.

  • Ideally, evaluators should assess training in terms of changes in the service delivery or program environment, but doing so requires technical and financial resources.

Training programs are generally designed to improve performance in a service delivery or program setting. However, evaluating the extent to which a training intervention achieves positive change requires an experimental or quasi-experimental design or multivariate longitudinal analysis. Many training organizations recognize the importance of demonstrating the effectiveness of training, but they lack the financial or technical resources to conduct such evaluations. Although training programs are often asked to “justify” their work through concrete examples of their effectiveness, few program administrators or donor agency representatives are willing to fund evaluations of training effectiveness. This problem is by no means unique to training, but it has hindered the advancement of evaluation in this area.

  • Those who attempt experimental or quasi-experimental designs run into problems of “clustering” and intra-class correlation in evaluating training.

Evaluators often use the individual as the unit of analysis, but individuals from the same service delivery point, or those taught by the same trainers using a classroom or group approach, are more likely to perform in a similar manner (have less variance) than are those from different locations or those taught by different trainers. This clustering has important ramifications not only for the analysis of the data but also for sample size calculations. Evaluators should consult a statistician or an expert in sampling to discuss the best strategy for addressing this problem in the design of their evaluation.
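To illustrate how clustering affects sample size, the standard design effect approximation from survey statistics (a general formula, not one specific to this database) can be used:

DEFF = 1 + (m − 1) × ICC

where m is the average number of individuals per cluster (for example, trainees per facility) and ICC is the intra-class correlation. With, say, 10 trainees per facility and an ICC of 0.10, DEFF = 1 + 9 × 0.10 = 1.9, meaning nearly twice as many individuals would be needed to achieve the same precision as an unclustered sample.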

The training indicators presented here distinguish two levels of effects: individual and organizational. Whereas the evaluation of training has tended in the past to focus on the individual service provider, there is now increased emphasis on evaluating training programs in terms of their effects on the service delivery system (e.g., that of the Ministry of Health in a given country).