Social and Behavior Change Communication

Welcome to the programmatic area on social and behavior change communication (SBCC) within MEASURE Evaluation’s Family Planning and Reproductive Health Indicators Database. SBCC is one of the subareas found in the service delivery section of the database. All indicators for this area include a definition, data requirements, data source(s), purpose, issues and—if relevant—gender implications.

  • A common thread running through all reproductive health (RH) programs is behavior change. SBCC programs are designed to bring about behaviors that will improve health status and related long-term outcomes.
  • SBCC programs include a wide range of interventions that fall into three broad categories: mass media, interpersonal communication, and community mobilization. Any of these three types of communication can generate the results measured by the core indicators presented here.
  • Additional SBCC indicators can be found on the Social and Behavior Change Indicator Bank for Family Planning and Service Delivery.

A common thread running through all reproductive health (RH) programs is behavior change. Behavior Change Communication (BCC) programs are designed to bring about behaviors that will improve health status and related long-term outcomes. Previously known as Information-Education-Communication (IEC), the change in name reflects a shift from materials production to strategically designed programs that influence behavior.

BCC programs include a wide range of interventions that fall into three broad categories:

  • Mass media (radio, television, billboards, print material, the internet);
  • Interpersonal communication (client-provider interaction, group presentations); and
  • Community mobilization.

Any of these three types of communication can generate the results measured by these core indicators, including changes in knowledge, attitudes, intentions, and behavior. However, to date evaluators have applied these indicators primarily to communication programs with a mass media component, presumably because the population-based surveys needed to collect such data are relatively expensive and are only appropriate where the communication program is far reaching.

With regard to community mobilization, Bertrand and Kincaid (1996) have published a list of process indicators, which measure activities conducted but not results achieved. The dynamics of community mobilization projects present a number of methodological challenges to evaluators. For example, the objectives of the intervention may not be fully developed at the time of the baseline survey. The intervention itself may vary markedly across the communities that implement it. The critical analytic approach used in evaluation may also be antithetical to the trust-building efforts that characterize community mobilization programs. Moreover, the objectives of community mobilization extend beyond changing health behaviors to include developing managerial skills, strengthening organizational infrastructure, encouraging participation from different subgroups, and so forth. In this database, preference is given to indicators that have been tested in the field. Because the evaluation of community development initiatives is still young, we omit separate indicators for this type of communication.

Methodological Challenges of Evaluating BCC

Evaluating BCC programs presents a number of methodological challenges, including but not limited to the following.

  • For programs with a mass media component, evaluators cannot identify an appropriate (or any) control group and thus they cannot rule out confounding factors (the attribution problem).

In field-based program evaluation, many evaluators have had difficulty establishing a control group, often for administrative rather than for technical reasons. However, in the case of programs with a mass media component, it is often virtually impossible to establish a control group (with random allocation of subjects) or even a comparison group (a population with similar socio-demographic characteristics) that is not exposed to the communication intervention in question. Without a control or comparison group, one cannot definitively answer the question: “what would have happened in the absence of the intervention?” Thus, even if the evaluation shows the desired increase in the outcome variables, one cannot unequivocally attribute this effect to the communication intervention.

An alternative approach for evaluating the effectiveness of BCC interventions consists of conducting a baseline and a follow-up survey among the intended audience that measures:

  • A series of ideational variables that represent sequential steps toward behavior change (see the conceptual framework below);
  • Socio-demographic characteristics of respondents; and
  • Data on the intensity of exposure to the specific messages of the communication program (e.g., based on recall of specific messages, number of channels, or some combination of each).

Data on intensity of exposure allow the evaluator to test “dose-response” as a possible determinant of each ideational variable, after controlling for socio-demographic factors (e.g., education) known to influence health behavior. Although this description oversimplifies the design and statistical analysis involved, it does explain the choice of indicators included here.
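For concreteness, the sketch below shows how such a dose-response test might be set up. It is only an illustrative Python example, not part of the indicator database: the file name, the variable names (modern_method_use, exposure_level, education, age, urban), and the choice of a logistic model are all assumptions.

```python
# Illustrative dose-response sketch (hypothetical data and variable names):
# regress a binary behavior on intensity of message exposure while
# controlling for socio-demographic factors such as education.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical follow-up survey with one row per respondent
survey = pd.read_csv("followup_survey.csv")

# exposure_level: e.g., 0 = no recall, 1 = recalled messages on one channel,
# 2 = recalled messages on two or more channels
model = smf.logit(
    "modern_method_use ~ C(exposure_level) + education + age + C(urban)",
    data=survey,
).fit()

print(model.summary())

# Adjusted odds ratios: a monotonic rise across exposure levels is
# consistent with (though does not by itself prove) a dose-response effect.
print(np.exp(model.params).round(2))
```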

Conditions for Inferring Causal Attribution:

1. Observation of a change or difference in the population of interest

2. Correlation between exposure to the intervention and the intended outcome

3. Evidence that exposure to the intervention occurred before change in the outcome

4. Control or removal of confounding factors (or spurious effects).

Kincaid outlines these four conditions that must be present for evaluators to infer causal attribution. Although the conditions do not entirely overcome the lack of a control group, Kincaid argues that the combination of theory-based evaluation and paths of influence observable in the data can lead to convincing results on the effects of communication programs, even those with a mass media component.

The different functional areas of FP/RH programs all labor under some pressure to demonstrate their effectiveness, yet this pressure has been particularly strong for BCC programs. They have the burden of providing the most rigorous evidence that observed changes in behavior at the population level result from BCC interventions and not from confounding factors in the environment.

  • A baseline survey was conducted but lacked key indicators.

In some cases, the group conducting the baseline survey is different from the group performing the evaluation. As such, the former may not think to include certain indicators a subsequent evaluator may want or need. Events may occur that were unforeseen at the baseline, or specific questions may have been eliminated from the questionnaire (to shorten the interview time, to improve the flow of questions, or for other reasons that seemed logical to the baseline team). The result is the absence of key information at the time of analysis.

  • The interval between the communication program and the follow-up evaluation was too short or too long.

There is no “established” or “correct” interval between the launch of a communication program and the follow-up evaluation. Advertising firms with multi-million dollar budgets in developed countries would expect to see the effects reflected in sales data in a matter of weeks. However, change in culturally entrenched RH behaviors can take months if not years to achieve. Thus, the interval between launch and evaluation is often dictated by administrative decisions rather than by technical reasons. As a result, the interval may be too short or too long.

“Too short” tends to occur when a given project must be completed within a donor-specified period of time. A program may fail to show results, simply because too little time has elapsed for program implementation and for such effects to take hold.

“Too long” occurs when the program implementation occurs on schedule, but the follow-up evaluation is delayed, often for administrative reasons (e.g., funds not yet available, personnel deployed to another activity) or for factors beyond the control of the program (e.g., the monsoon season). When the interval between the communication intervention and the follow-up evaluation increases dramatically, there is greater opportunity for confounding factors to intercede.

  • The organization lacks the time or money to do an adequate evaluation.

This problem is by no means unique to BCC. In terms of time, the need to conduct the baseline survey before the communication launch may create an unacceptable delay if the program is under a tight schedule. In terms of money, program managers often feel that the funds would be better spent on doing more of the program, even if it means doing less of the evaluation; or they may have budgeted for an evaluation but were forced to reallocate the funds during the project.

  • Obtaining consent forms for human subjects often proves difficult, time-consuming, and expensive.

Some organizations, if not most, require that all research proposals involving human subjects undergo a thorough review process. In most developing countries with communication programs, language and cultural barriers contribute to misunderstandings about informed consent. First, many people cannot read and therefore may be unwilling or unable to sign any document. Second, people may be suspicious about signing any document that looks official, for fear that it will be used for other purposes. Furthermore, in many settings, the perception exists that the need to sign something signals a hidden, perhaps dangerous, agenda.

Obtaining approval from an ethics review board frequently takes months, with an average of three to four months. This delay poses serious scheduling problems for programs.

The Conceptual Framework: How BCC Works

Model of Strategic Communication and Behavior Change

In this conceptual framework of strategic communication and behavior change, communication is treated as an outside factor that affects the other variables in the model. Communication designed to improve skills is identified as instruction, communication for removing environmental constraints is identified as advocacy, and communication designed to change ideational factors is identified as promotion. The model specifies how and why communication affects intention and behavior: indirectly through its effects on skills, ideation, and environmental constraints.

“Promotion” is central to this section, because it leads to ideational change (that is, a change in the way individuals or populations perceive given practices or behaviors). Promotion is designed to have cognitive, emotional, and social effects, which in turn influence a person’s intent to practice a certain behavior and to follow through in doing so. The actual behavior is the “desired result” in almost all BCC programs, whatever the specific area or topic. Evaluators often label this behavior the “intermediate outcome” (if measured at the population level).

In addition to obtaining data on the actual behavior, evaluators should collect data on all ideational variables that may be relevant to the behavior of interest. Communication is designed to affect ideational variables in order to change behavior. In a pre-post evaluation design, evaluators can compare baseline measures of these variables with post-intervention data. Also, they can assess program effects on the ideational variables by comparing the level of each variable among those exposed and unexposed to the communication program. Evaluators can then use results on the relationship between ideational variables and program exposure to track changes over time and to refine and/or reinforce the communication messages. Research has shown that ideational variables operate as “proximate determinants” and that communication can influence contraceptive use not only directly, but also indirectly through ideation (Kincaid, 2000; Babalola et al., 2001).
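As a rough illustration of the pre-post and exposed-versus-unexposed comparisons described above, the following Python sketch tabulates one hypothetical ideational variable. The file name and column names (round, exposed, approves_fp) are placeholders, not standard survey variables.

```python
# Minimal sketch: pre-post change in an ideational variable and an
# exposed vs. unexposed comparison at follow-up. All names are hypothetical.
import pandas as pd
from scipy import stats

df = pd.read_csv("pooled_surveys.csv")  # baseline + follow-up rounds pooled

# Population-level change in the ideational variable between survey rounds
print(df.groupby("round")["approves_fp"].mean())

# Exposed vs. unexposed respondents at follow-up
followup = df[df["round"] == "followup"]
table = pd.crosstab(followup["exposed"], followup["approves_fp"])
chi2, p, dof, expected = stats.chi2_contingency(table)
print(table)
print(f"chi-square p-value: {p:.3f}")
```

A difference between exposed and unexposed groups in such a cross-tabulation is suggestive rather than conclusive, for the attribution reasons discussed earlier; multivariate adjustment for socio-demographic factors, as in the earlier sketch, strengthens the comparison.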

Even if one can convince individuals that certain courses of action are desirable, environmental constraints to behavior change often exist in the form of politically based barriers, resource limitations, legal constraints, and other factors. Advocacy becomes a powerful tool to confront these constraints at the macro level and to minimize barriers to positive behavior at the individual level.

The section contains two indicators to monitor communication on the Internet. The Internet has traditionally not been a primary channel for behavior change communication, especially in the developing world, but it has now emerged as a leading source of information for millions of people in developed countries and for those with access to this medium in developing countries. Originally used by researchers and program managers, the Internet has become a source of reliable, confidential information on RH topics, especially for computer-savvy adolescents.

___________

References:

Babalola, S., C. Vondrasek, J. Brown, and R. Trao. 2001. “The Impact of a Regional Family Planning Promotion Initiative in West Africa: Evidence from Cameroon.” International Family Planning Perspectives 27(4).

Bertrand, J.T. and D.L. Kincaid. 1996. Evaluating Information-Education-Communication (IEC) Programs for Family Planning and Reproductive Health. WG-IEC-03. University of North Carolina: Carolina Population Center. Available online at http://www.cpc.unc.edu/measure.

Kincaid, D.L. 2000. “Mass Media, Ideation, and Behavior: A Longitudinal Analysis of Contraceptive Change in the Philippines.” Communication Research 27(6): 723-763.