Number or percent of trainees who have mastered relevant knowledge and/or skills at the conclusion of the training

Number or percent of pre-service education (PSE) or in-service training (IST) participants, students, or learners who can demonstrate acquisition of knowledge and/or skills following a training, where “mastery” is defined in terms specific to a given context.

As a percent, this indicator is calculated as:

(Number of trainees who have mastered knowledge or skills / Total number of trainees tested) x 100
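As a minimal sketch of the arithmetic, the calculation can be illustrated as follows; the counts are hypothetical and not drawn from any program data:

```python
# Hypothetical worked example of the indicator calculation.
# The counts below are illustrative only; "mastery" must be defined
# using the program's own scoring criteria.

trainees_tested = 50      # total number of trainees tested
trainees_mastered = 42    # number meeting the context-specific mastery criterion

percent_mastered = (trainees_mastered / trainees_tested) * 100
print(f"{percent_mastered:.1f}% of trainees mastered the relevant knowledge/skills")
# Output: 84.0% of trainees mastered the relevant knowledge/skills
```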

Number of trainees; number who have mastered knowledge or skills; scoring criteria to define “mastery”; evidence of mastery of knowledge or skills

Mastery may be ascertained through pretests and post-tests, return skill demonstrations with models or clients, and adherence to established procedures that were taught in the training.
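Where mastery is ascertained through post-tests, the numerator can be derived by applying the program’s scoring criterion to individual scores. The sketch below is illustrative only: the scores, the 80-point threshold, and the trainee identifiers are hypothetical assumptions, since each program defines “mastery” in its own terms.

```python
# Minimal sketch of deriving the indicator from post-test scores.
# The scores, the 80-point passing threshold, and the trainee IDs are
# hypothetical; each program defines "mastery" in context-specific terms.

post_test_scores = {
    "trainee_01": 92,
    "trainee_02": 74,
    "trainee_03": 88,
    "trainee_04": 81,
}
mastery_threshold = 80  # context-specific passing score (e.g., 60-100 percent)

mastered = [t for t, score in post_test_scores.items() if score >= mastery_threshold]
percent = len(mastered) / len(post_test_scores) * 100
print(f"{len(mastered)} of {len(post_test_scores)} trainees "
      f"({percent:.0f}%) met the mastery criterion")
# Output: 3 of 4 trainees (75%) met the mastery criterion
```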

This indicator can be disaggregated by age, sex, urban/rural status, cadre, sector, and type of trainee.

Administrative records (training files); written tests (e.g., pre- and post-tests of accurate, up-to-date knowledge); trainee demonstrations

This indicator, commonly used to evaluate training, measures the trainees’ ability to retain key information in the short term (during and at the end of training) or to demonstrate successful application of a new skill (e.g., inserting an intrauterine device, removing a contraceptive implant). Low post-test scores reflect inadequacies in the course and/or the inability of trainees to absorb the information. Every training organization that has developed or uses training manuals has identified the knowledge that a category of trainees should acquire on a specific subject; pre- and post-tests measure this knowledge.

The test results indicate whether the trainee understands certain key points, even though the number and definition of key points will differ by context. The items included in the test should be those most relevant to a particular training exercise and to program performance. If the same questions appear on subsequent tests, this indicator can monitor trends over time within a program and can determine knowledge retention as part of formal training evaluations.

This indicator has two limitations. First, tests lack standardized items. Some training organizations have a list of questions, or a list of steps the trainees must follow, which they encourage host country organizations to adopt for testing purposes on a given topic, but some countries opt to design their own questions or steps. This lack of standardization makes it difficult to compare the results from this indicator across countries, and even across programs within a given country. Second, the concept of “mastery” is not consistent across settings; for example, in some countries a passing grade may be 60 percent, whereas in others the required score may be 100 percent. More broadly, improved knowledge is only one indication of training effectiveness; by itself, it does not necessarily ensure improved performance.

Despite these limitations, training organizations routinely use this indicator to control the quality of training conducted in connection with their activities.

health system strengthening (HSS), training