The organization systematically evaluates its training program to improve effectiveness

To systematically evaluate its training program, an organization routinely applies indicators such as the first five in this section to its training activities. This evaluation requires systematic data collection, analysis, and reporting of the results to those involved in the training.

Data Requirement(s):

A list of all training events; a list of the indicators and instruments used to evaluate them; and a copy of the results

Data Source(s):

Program records; occasional special studies

As training organizations attempt to develop a “culture of evaluation” to improve their programs, this indicator tracks that trend. It provides concrete evidence that training organizations (or units) are obtaining systematic feedback and discussing it with those involved in training efforts.

Training evaluation should form part of the training strategy, and the institution should have an evaluator on staff or a regular consultant. Training evaluations should systematically examine the capacity of the trainers; their training materials, tools, and methods; and the evaluation methodology itself (e.g., whether checklists measure the intended areas, whether they need updating, and how to adapt tests to different audiences and learners).

Examining job performance after training (level three of Kirkpatrick’s training evaluation framework, 1998) should take place every two to three years in an ongoing training program, if possible. In the interim, training trainers to function as evaluators (working with line supervisors) and adapting training tools (knowledge tests, skills checklists) for monitoring and observation can document trends in performance.

The evaluation of training can take various forms, ranging from the simplest to the most sophisticated. At the very least, training programs will monitor gains in learning using pre- and post-knowledge tests. However, few training organizations consider tests alone an adequate evaluation of the course, and most prefer (where funds permit) to track the skill level of trained providers, both upon completion of the course and at a defined interval afterward (e.g., 6 or 12 months).
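
The sketch below is one hypothetical way to summarize such data; the field names and scores are invented for illustration, and it is not a prescribed analysis for this indicator. It simply contrasts each trainee's pre- and post-test scores and a follow-up skills-checklist score.

```python
# Minimal sketch (hypothetical data and field names): summarizing knowledge-test
# gains and skill retention at a follow-up point, the simplest level of training
# evaluation described above.
from statistics import mean

# Each record is one trainee: test scores before and immediately after the
# course, plus an observed skills-checklist score at follow-up (e.g., 6 months).
trainees = [
    {"pre": 52, "post": 81, "followup_skill": 74},
    {"pre": 60, "post": 88, "followup_skill": 85},
    {"pre": 45, "post": 70, "followup_skill": 58},
    {"pre": 68, "post": 90, "followup_skill": 83},
]

gains = [t["post"] - t["pre"] for t in trainees]
retention = [t["followup_skill"] - t["post"] for t in trainees]

print(f"Mean knowledge gain (post - pre): {mean(gains):.1f} points")
print(f"Mean change at follow-up (skill - post): {mean(retention):.1f} points")
print(f"Trainees scoring below their post-test at follow-up: "
      f"{sum(1 for r in retention if r < 0)} of {len(trainees)}")
```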

Results from a study in Indonesia (Kim et al., 2000) on reinforcement via self-assessments and support groups of providers indicate that providers lost skills and knowledge acquired through training within six months, except those who performed self-assessment exercises, who actually improved.

These evaluation methods refer to the individual trainee. In contrast, many of the indicators in this section refer to the organizational capacity of the system to design and implement effective training. Yet to truly evaluate the effectiveness of training, one must link the training activity to improvements in the service delivery environment. This linkage requires a special study using a quasi-experimental design, in which one contrasts a group of clinics whose providers are trained with a group of clinics whose providers have yet to be trained. This type of operations research study is relatively rare because of the resources required and the burden placed on service delivery to hold “everything else constant.” However, those wishing to definitively demonstrate the link between training and improvement in the service delivery environment will need to undertake such studies.

Other techniques involve multivariate analysis that combines data from facility-based and household surveys (e.g., Dietrich, Guilkey, and Mancini, 1998). Short of that, one simply works on the assumption that improving the competency of individual providers and increasing the number of locations in which they operate will improve quality of and access to service delivery.
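
As a rough illustration of the quasi-experimental contrast described above, the sketch below computes a difference-in-differences style estimate from hypothetical clinic-level quality scores; the data, group labels, and scoring scale are assumptions, and a real study would require proper sampling and statistical testing.

```python
# Minimal sketch (hypothetical data): contrasting clinics whose providers were
# trained with comparison clinics whose providers have yet to be trained, using
# a clinic-level quality score measured before and after the training round.
from statistics import mean

clinics = [
    # (group, quality_before, quality_after)
    ("trained", 61, 78),
    ("trained", 55, 70),
    ("trained", 66, 80),
    ("comparison", 60, 64),
    ("comparison", 58, 61),
    ("comparison", 63, 65),
]

def mean_change(group):
    """Average before-to-after change in quality score for one group."""
    return mean(after - before for g, before, after in clinics if g == group)

trained_change = mean_change("trained")
comparison_change = mean_change("comparison")

# Netting out the change seen in untrained clinics over the same period gives
# a crude estimate of the improvement attributable to training.
print(f"Mean change, trained clinics:    {trained_change:.1f}")
print(f"Mean change, comparison clinics: {comparison_change:.1f}")
print(f"Difference-in-differences:       {trained_change - comparison_change:.1f}")
```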

Related content

Program Management in Sexual and Reproductive Health Services