Your Toolkit to Design Interventions that Impact Real Lives

Glossary

The majority of these definitions are from:

D&I Research in Health

Rabin, B.A. and Brownson, R.C. (2012). Developing the terminology for dissemination and implementation research in health. In Brownson, R.C., Colditz, G.A., & Proctor, E.K. (Eds.), Dissemination and Implementation Research in Health: Translating Science to Practice. New York: Oxford University Press.

ISBN: 9780199751877


A

Academic detailing
Academic detailing is a form of university-based outreach.144 Its goals are to align the prescribing of targeted drugs with medical evidence, support patient safety, promote cost-effective medication choices, and, overall, improve patient care. A key feature of non-commercial or university-based academic detailing programs is that the people involved (academic detailers, management, staff, program developers, etc.) have no financial links to the pharmaceutical industry.145

Acceptability 
Acceptability is related to the ideas of complexity and relative advantage; it refers to a specific intervention and describes whether the potential implementers, based on their knowledge of or direct experience with the intervention, perceive it as agreeable, palatable, or satisfactory.51

Adaptation
See Reinvention/Adaptation

Adoption 
Adoption is the decision of an organization or community to commit to and initiate an evidence-based intervention.18,44,45

Appropriateness 
Appropriateness is related to the idea of compatibility and is defined as the perceived fit and relevance of the intervention for a given context (i.e., setting, user group) and/or its perceived relevance and ability to address a particular issue. Organizational culture and organizational climate might explain whether an intervention is perceived as appropriate by a potential group of implementers.51

Assimilation gap
Assimilation gap refers to the population-level (or public health) impact of interventions and describes the phenomenon in which interventions that are adopted by individuals or organizations are not deployed widely (e.g., at the population level) and/or not sustained sufficiently at the individual or organizational level.79-81

Audience Segmentation 
Audience segmentation is the process of distinguishing between different sub-groups of users and creating targeted marketing and distribution strategies for each sub-group. Dearing and Kreuter suggest that "segmentation of intended audience members on the basis of demographic, psychographic, situational, and behavioral commonalities" allows for the design of products and messages that are perceived to be more relevant by the intended target audience.41 
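
For illustration, here is a minimal Python sketch of rule-based segmentation; the user fields, cut points, and segment names are hypothetical, not drawn from Dearing and Kreuter.

```python
# Hypothetical audience segmentation: the fields, cut points, and segment
# names are invented for illustration only.
from collections import defaultdict

def segment(user):
    """Assign a user to a sub-group based on demographic and behavioral traits."""
    age_group = "older-adult" if user["age"] >= 65 else "working-age"
    channel = "clinic" if user["visits_per_year"] >= 2 else "mass-media"
    return f"{age_group}/{channel}"

users = [
    {"id": 1, "age": 70, "visits_per_year": 3},
    {"id": 2, "age": 34, "visits_per_year": 0},
    {"id": 3, "age": 68, "visits_per_year": 1},
]

segments = defaultdict(list)
for u in users:
    segments[segment(u)].append(u["id"])

# Each segment then receives its own targeted message and distribution strategy.
for name, ids in sorted(segments.items()):
    print(name, ids)
```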


B

Barriers and challenges
Factors that promote or impede the dissemination and implementation of evidence-based interventions. Also see factors associated with the speed and extent of D&I.


C

Capacity building
Any activity (e.g., training, identification of alternative resources, building internal assets) that builds durable resources and enables the recipient setting or community to continue the delivery of an evidence-based intervention after the external support from the donor agency is terminated.46,49,52 Other terms that are commonly used in the literature to refer to program continuation include incorporation, integration, local or community ownership, confirmation, durability, stabilization, and sustained use.50

Champion
Effective, influential individuals at the implementation site who can facilitate the implementation of the intervention by mobilizing internal support. The presence of strong champions is often correlated with greater implementation fidelity.

Change agent 
Change agents are representatives of change agencies that are external to an organization or community, and their goal is to influence the innovation decisions of members of the organization or community. Change agents often use opinion leaders from an organization or community to facilitate the dissemination and adoption process.18

Channel of dissemination 
Route of message delivery (e.g., mass media, community, interpersonal).141

Channels of evidence-based intervention delivery
Pathway by which intervention is delivered to participants (e.g., face-to-face; small group; telephone).141

Characteristics of the adopters 
Characteristics of the adopters can be discussed at the individual and organizational/community level. Attributes of the organization/community include its size, formalization, perceived complexity, and readiness for the implementation of the innovation. The characteristics, attitudes, and behaviors of individuals within an adopting organization (e.g., position in the organization, education, individual concerns and motivations) may also determine the uptake and use of an innovation.107 Rogers classifies the individual adopters according to their degree of innovativeness into five categories: (1) innovators, (2) early adopters, (3-4) early and late majority, and (5) laggards.18,105
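
Rogers' categories are conventionally cut at standard deviations from the mean time of adoption (roughly 2.5% innovators, 13.5% early adopters, 34% early majority, 34% late majority, and 16% laggards). A minimal Python sketch of this classification, with hypothetical adoption times:

```python
# Classify adopters by time of adoption relative to the mean, following
# Rogers' standard diffusion-curve cut points; the data are hypothetical.
from statistics import mean, stdev

def classify_adopters(adoption_times):
    times = list(adoption_times.values())
    m, s = mean(times), stdev(times)
    cuts = [
        (m - 2 * s, "innovator"),
        (m - s, "early adopter"),
        (m, "early majority"),
        (m + s, "late majority"),
    ]
    return {
        unit: next((name for cut, name in cuts if t < cut), "laggard")
        for unit, t in adoption_times.items()
    }

clinics = {"A": 1, "B": 3, "C": 4, "D": 5, "E": 9}  # months until adoption
print(classify_adopters(clinics))
```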

Characteristics of the intervention 
Rogers identifies five perceived attributes of an innovation that are likely to influence the speed and extent of its adoption: (1) relative advantage (effectiveness and cost efficiency relative to alternatives), (2) compatibility (the fit of the innovation to the established ways of accomplishing the same goal), (3) observability (the extent to which the outcomes can be seen), (4) trialability (the extent to which the adopter must commit to full adoption), and (5) complexity (how simple the innovation is to understand).18,105 Relative advantage and compatibility are particularly important in influencing adoption rates.18

Clinical/study clinic
Facility with defined responsibilities for recruiting, enrolling, treating, and following patients or subjects in a clinical trial.141

Communities of practice
Communities of Practice can be defined, in part, as a process of social learning that occurs when people who have a common interest in a subject or area collaborate over an extended period of time, sharing ideas and strategies, determining solutions, and building innovations. Wenger gives a simple definition: "Communities of practice are groups of people who share a concern or a passion for something they do and learn how to do it better as they interact regularly." Note that this allows for, but does not require, intentionality. Learning can be, and often is, an incidental outcome that accompanies these social processes.143

Comparative effectiveness research to accelerate translation (CER-T) 
Comparative effectiveness research (CER) is defined as "the conduct and synthesis of research comparing the benefits and harms of different interventions and strategies to prevent, diagnose, treat and monitor health conditions in 'real-world' settings. The purpose of this research is to improve health outcomes by developing and disseminating evidence-based information to patients, clinicians, and other decision-makers, responding to their expressed needs, about which interventions are most effective for which patients under specific circumstances."84

Compatibility
Degree to which an innovation is perceived as consistent with the existing values, past experiences, and needs of potential adopters.141

Complexity
Degree to which an innovation is perceived as relatively difficult to understand and to use.141

Conceptual models
See findings and impact.

Contextual factors
Contextual factors may include the political, social, and organizational setting for the implementation of the intervention and include social support, legislations and regulations, social networks, and norms and culture.28,108 Understanding the delivery context for the intervention is essential for the success of the D&I and closely linked to the concepts of fidelity and adaptation.109 Recent efforts in the organizational change literature discussed context in terms of the inner (organizational) context, including structural and cultural features, and system readiness and the outer (interorganizational) context, including interorganizational networks and collaborations.19 They also identified several core aspects of context including leadership, infrastructure, and unit variability.110

Core elements (or components)  
The term core elements or components can refer to the intervention (core intervention elements or components) and is defined as the active ingredients of the intervention that are essential to achieving the desired outcomes of the intervention.9 Some authors differentiate between core intervention elements or components and customizable components; the latter can be modified to local context without harming the effectiveness of the intervention.47 While understanding of the core elements or components of an intervention or the implementation process can facilitate the adaptation and sustainability of the intervention in a new context (i.e., setting, audience),47 the identification of these core elements is not always straightforward.9 Identification can be facilitated by detailed description of the elements or components, but as Fixsen and colleagues noted, "the eventual specification of the core intervention components for any evidence-based program or practice may depend upon careful research and well-evaluated experiential learning from a number of attempted replications."9(p.26) Core elements or components can also refer to the implementation process (core implementation elements or components) and indicate the drivers of the implementation process that are indispensable for the successful implementation of an intervention.9


D

Data collection
Methods of gathering data during the evaluation of an intervention.

Definition of D&I
See dissemination and implementation.

Description of intervention
The intervention as defined by the developer or adopter.

Designing for dissemination and implementation (D4D&I) 
Designing for Dissemination and Implementation refers to a set of processes that are considered and activities that are undertaken throughout the planning, development, and evaluation of an intervention to increase its dissemination and implementation potential. Some authors emphasize understanding and considering the user context (receiver "pull").41 D4D&I builds on the premises that (1) effective dissemination of interventions requires an active, systematic, planned, and controlled approach;28 (2) planning for D&I in the early stages of conceptualization and development of the intervention can increase the success of later D&I efforts;101 (3) early involvement of and partnership with target users in the conceptualization and development process can increase the likelihood of success for later dissemination and implementation efforts;41 (4) close understanding of and building on the characteristics, beliefs, norms, and wants of target adopters can positively influence their perception of a new intervention and consequently will increase the likelihood of adoption, implementation, and sustained use of the intervention;41 and (5) study designs and measures that generate practice-relevant evidence facilitate and inform later-stage D&I efforts.102

Diffusion
Diffusion is the passive, untargeted, unplanned, and uncontrolled spread of new interventions. Diffusion is part of the diffusion-dissemination-implementation continuum and it is the least focused and intense approach.38,39

Diffusion can also be thought of as the spread, over time, of a new idea, practice, or program among the members of a social system, such as a network of healthcare providers or the residents of a community. Diffusion can occur as a result of purposive dissemination activity, though diffusion processes among social system members often occur in the absence of purposive activity, too.

Diffusion of innovations
The Diffusion of Innovations theory was proposed by Rogers to explain the processes and factors influencing the spread and adoption of innovations through certain channels over time.18 Key components of the diffusion theory are: (1) perceived attributes of the innovation, (2) innovativeness of the adopter, (3) the social system, (4) individual adoption process, and (5) the diffusion system.43

Dissemination 
Dissemination is an active approach of spreading evidence-based interventions to the target audience via determined channels using planned strategies.38,39

Dissemination research 
Dissemination research is the systematic study of processes and factors that lead to widespread use of an evidence-based intervention by the target population. Its focus is to identify the best methods that enhance the uptake and utilization of the intervention.44,72

Dissemination strategy 
Dissemination strategies describe mechanisms and approaches that are used to communicate and spread information about interventions to targeted users.40 Dissemination strategies are concerned with the packaging of the information about the intervention and the communication channels that are used to reach potential adopters and target audience. Passive dissemination strategies include mass mailings, publication of information including practice guidelines, and untargeted presentations to heterogeneous groups.28 Active dissemination strategies include hands on technical assistance, replication guides, point-of-decision prompts for use, and mass media campaigns.28 It is consistently stated in the literature that dissemination strategies are necessary but not sufficient to ensure wide-spread use of an intervention.41,42


E

Economic evaluation
Comparison of the relationship between costs and outcomes of alternative healthcare interventions, such as cost-benefit analysis, cost-effectiveness analysis, and cost-utility analysis.141
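
For example, cost-effectiveness analysis is commonly summarized with the incremental cost-effectiveness ratio (ICER), the extra cost per extra unit of health effect of one intervention relative to an alternative:

```latex
\mathrm{ICER} = \frac{C_{\text{new}} - C_{\text{alternative}}}{E_{\text{new}} - E_{\text{alternative}}}
```

In cost-utility analysis, the effect in the denominator is typically expressed in quality-adjusted life years (QALYs).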

Effectiveness research 
Effectiveness research determines the impact of an intervention with demonstrated efficacy when it is delivered under "real-world" conditions. As a result, effectiveness trials often must use methodological designs that are better suited for large and/or less controlled research environments, with the major purpose of obtaining more externally valid (generalizable) results.44,70

Efficacy research
Efficacy research evaluates the initial impact of an intervention (whether it does more good than harm among the individuals in the target population) when it is delivered under optimal or laboratory conditions (or in an ideal setting). Efficacy trials typically use random allocation of participants and/or units and ensure highly controlled conditions for implementation. This type of study focuses on internal validity or on establishing a causal relationship between exposure to an intervention and an outcome.44,70

Engagement
Process of matching evidence-based intervention (EBI) characteristics with interests and needs of potential adopting organizations or their decision makers.141

Evaluation
Assessment of the efficacy, effectiveness, dissemination, or implementation of an intervention. Also see formative research, implementation outcomes, measurement considerations, and outcome variables.

Evidence-based intervention
The objects of D&I activities are interventions with proven efficacy and effectiveness (i.e., evidence-based). Interventions within D&I research should be defined broadly and may include programs, practices, processes, policies, and guidelines.28 More comprehensive definitions of evidence-based interventions are available elsewhere.29-33 In D&I research, we often encounter complex interventions (e.g., interventions using community-wide education) where the description of core intervention components and their relationships involves multiple settings, audiences, and approaches.19,34

External validity
External validity is concerned with the generalizability or real-world applicability of findings from a study and determines whether the results and inferences from the study can be applied to the target population and settings.133,134 Standardized and detailed reporting on factors that influence external validity (such as the ones recommended in the RE-AIM framework) can contribute to more successful D&I efforts.81,102,133


F

Factors associated with the speed and extent of D&I
Several factors (i.e., moderators) influence the extent to which D&I of evidence-based interventions occur in various settings.18 Moderators are factors that alter the causal effect of an independent variable on a dependent variable.89 For example, organizational capacity can moderate the effect of an intervention on a desired outcome. These factors can be classified as the characteristics of the intervention, characteristics of the adopter (organizational and individual), and contextual factors. Adoption rate will be influenced by the interaction among the attributes of the innovation, characteristics of the intended adopters, and the given context.19
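
In regression terms, moderation is commonly represented as an interaction term; a sketch, where Y is the outcome, X the exposure to the intervention, and M the moderator (e.g., organizational capacity):

```latex
Y = \beta_0 + \beta_1 X + \beta_2 M + \beta_3 (X \times M) + \varepsilon
```

A nonzero \beta_3 indicates that the effect of X on Y depends on the level of M.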

Feasibility
Feasibility is closely related to the concepts of compatibility and trialability and refers to the actual fit, suitability, or practicability of an intervention in a specific setting. Perceived feasibility plays a key role in the early adoption process.51

Fidelity
Fidelity measures the degree to which an intervention is implemented as it is prescribed in the original protocol.16,44 Fidelity is commonly measured by comparing the original evidence-based intervention and the disseminated and implemented intervention in terms of: (1) adherence to the program protocol, (2) dose or amount of program delivered, (3) quality of program delivery, and (4) participant reaction and acceptance.103
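
As a rough illustration only (equal weighting and a 0-1 scale are assumptions for the sketch, not a validated scoring rule), the four dimensions could be combined into a composite fidelity score in Python:

```python
# Illustrative composite fidelity score; equal weights and the 0-1 scale
# are assumptions, not a published scoring rule.
def fidelity_score(adherence, dose_delivered, dose_planned, quality, acceptance):
    """adherence, quality, and acceptance are scaled 0-1; doses are session counts."""
    dose = min(dose_delivered / dose_planned, 1.0)  # cap at full delivery
    components = [adherence, dose, quality, acceptance]
    return sum(components) / len(components)

# e.g., 90% protocol adherence, 8 of 10 planned sessions delivered,
# 0.85 observer-rated quality, 0.70 participant acceptance
print(round(fidelity_score(0.90, 8, 10, 0.85, 0.70), 2))  # -> 0.81
```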

Findings & impact
Results from the evaluation of the efficacy, effectiveness, dissemination, or implementation of an intervention.

Fit
The degree to which the characteristics of an evidence-based intervention are compatible with the structure and values of the delivery system.

Formative research
Formative research is research conducted during the development of your program or intervention to help you choose and describe your target audience, understand the factors that influence their behavior, and determine the best ways to reach them. Formative research is also called formative assessment, market research, consumer research, or audience research.134

Fundamental (or Basic) research 
Fundamental or basic research develops laboratory-based, etiologic models to provide theoretical explanation for generic or more specific phenomena of interest.44

Funding
Financial support and opportunities provided for D&I research. See the Resource Library for D&I funding opportunities.


G

Generalizability
See external validity.

Go Sun Smart
The goal of this program is to provide employees with information about ways to protect their skin from the sun. Program materials include posters, newsletter articles, tip cards, and an interactive website. Researchers are also studying how sun safety information is passed from employees to ski area guests. See the Go Sun Smart page in the Narrative Library.


I

Implementation 
Implementation is the process of putting to use or integrating evidence-based interventions within a setting.40

Implementation cost
Implementation cost (or incremental cost) is defined as the cost impact of an implementation effort and depends on the costs of the particular intervention, the implementation strategy used, and the characteristics of the setting(s) where the intervention is being implemented. Understanding implementation cost can be especially important for comparative effectiveness research.51

Implementation effectiveness trial
Test of the effectiveness of an efficacious program when implementation can vary, or is deliberately varied, so that both availability and acceptance can vary.141

Implementation evaluation
Assessment of how, and at what level, a program is implemented, and what and how much were received by the target population (i.e., a type of process evaluation).141

Implementation gap
Implementation gap refers to the phenomenon in which interventions that are adopted by individuals and organizations are not implemented with sufficient fidelity and consistency to produce optimal benefits.42,79

Implementation outcomes 
Implementation outcomes are distinct from system outcomes (e.g., organizational-level measures) and individual-level behavior and health outcomes and are defined as "the effects of deliberate and purposive actions to implement new treatments, practices, and services."51(p. 65) Implementation outcomes are measures of implementation success, proximal indicators of implementation processes, and key intermediate outcomes of effectiveness and quality of care. The main value of implementation outcomes is to distinguish intervention failure (i.e., when an intervention is ineffective in a new context) from implementation failure (i.e., when the incorrect deployment of a good intervention causes lack of previously documented desirable outcomes).51

Implementation research 
Implementation research seeks to understand the processes and factors that are associated with successful integration of evidence-based interventions within a particular setting (e.g., a worksite or school).73 Implementation research assesses whether the core components of the original intervention were faithfully transported to the real-world setting (i.e., the degree of fidelity of the disseminated and implemented intervention with the original study) and is also concerned with the adaptation of the implemented intervention to local context.73 Another often overlooked but essential component of implementation research involves the enhancement of readiness through the creation of effective climate and culture in an organization or community.19,74

Implementation site
The delivery system where the evidence-based intervention is being implemented.

Implementation strategy 
Implementation strategies refer to the systematic processes, activities, and resources that are used to integrate interventions into usual settings.43 Some authors refer to implementation strategies as core implementation components or implementation drivers and list staff selection, pre-service and in-service training, ongoing consultation and coaching, staff and program evaluation, facilitative administrative support, and systems interventions as components.9,42

Importance of D&I
See the Junior and Senior D&I Expert Interviews in the Narrative Library.

Innovation
The term innovation can refer to "an idea, practice, or object that is perceived as new by an individual or other unit of adoption."18(p. 12) Some authors use this term interchangeably with the term evidence-based intervention.

Institutionalization
Institutionalization assesses the extent to which the evidence-based intervention is integrated within the culture of the recipient setting or community through policies and practice.45,46,48 Three stages that determine the extent of institutionalization are: (1) passage (i.e., a single event that involves a significant change in the organization's structure or procedures such as transition from temporary to permanent funding), (2) cycle or routine (i.e., repetitive reinforcement of the importance of the evidence-based intervention through including it into organizational or community procedures and behaviors, such as the annual budget and evaluation criteria), and (3) niche saturation (the extent to which an evidence-based intervention is integrated into all subsystems of an organization).46,49,50

Interdisciplinary approach
An approach that involves expertise from multiple disciplines (e.g., behavioral sciences, clinical sciences, marketing) for the design, evaluation, and/or dissemination and implementation of an intervention.

Interviewee's name, title, affiliation
See the Junior and Senior D&I Expert Interviews in the Narrative Library.

Interviewee's role on project
See the Junior and Senior D&I Expert Interviews in the Narrative Library.


J

Junior D&I expert
See Junior D&I Expert Interviews in the Narrative Library.


K

Knowledge broker
A knowledge broker is an intermediary (individual or organization) who facilitates and fosters the interactive process between producers (i.e., researchers) and users (i.e., practitioners, policy makers) of knowledge through a broad range of activities (see "Knowledge Brokering").60,64 More broadly, knowledge brokers assist in the organizational problem-solving process through drawing analogic links between solutions learned from resolving past problems, often in diverse domains, and demands of the current project. Knowledge brokers also help "make the right knowledge available to the right people at the right time."64(p. 67)

Knowledge brokering
Knowledge brokering has emerged from the understanding that there is a belief, value, and practice gap between producers (i.e., researchers) and users (i.e., practitioners, policy makers) of knowledge and it involves the organization of the interactive process between these two groups to facilitate and drive the transfer and implementation of research evidence.60-63 Specific tasks include synthesis and interpretation of relevant knowledge, facilitation of interaction and setting of shared agendas, building of new networks, and capacity building for knowledge use.60,61 Knowledge brokering is described as a two-way process that not only aims at facilitating the uptake and use of evidence by practitioners and policy-makers, but also focuses on prompting researchers to produce more practice-based evidence.61

Knowledge exchange
Knowledge exchange is the term used by the Canadian Health Services Research Foundation and describes the interactive and iterative process of imparting meaningful knowledge between knowledge users (i.e., stakeholders) and producers, such that knowledge users receive relevant and easily usable information and producers receive information about users' research needs.26,53 In contrast to the terms knowledge translation and knowledge transfer, this term was introduced to highlight the bi- or multi-directional nature of the knowledge transmission process (relationship model).26,53,56

Knowledge-for-action terms
The terms knowledge translation, knowledge transfer, knowledge exchange, and knowledge integration are commonly used, especially outside of the United States, to refer to the entire D&I process or some aspects of it. This glossary uses definitions coined by the CIHR, Graham and colleagues, and Best and colleagues to define these terms.5,26,53 As Best and colleagues suggested, these terms can be classified as linear (knowledge translation and transfer), relationship (knowledge exchange), or systems (knowledge integration) models of D&I.53

Knowledge integration
The term was introduced by Best and colleagues as the systems model for the knowledge transmission process and is defined as "the effective incorporation of knowledge into the decisions, practices and policies of organizations and systems."53 The key assumptions around the knowledge integration process are that (1) it is tightly woven within priorities, culture, and context; (2) it is mediated by complex relationships; (3) it needs to be understood from a systems perspective (i.e., in the context of organizational context and strategic processes); and (4) it requires integration with the organization(s) and its systems.53

Knowledge transfer
Knowledge transfer is a commonly used term both within and outside of the health care sector and is defined as the process of getting (research) knowledge from producers to potential users (i.e., stakeholders).26,53 This term is often criticized for its linear (unidirectional) notion and its lack of concern with the implementation of transferred knowledge.26

Knowledge translation
Knowledge translation is the term used by the Canadian Institutes of Health Research (CIHR) to denote "a dynamic and iterative process that includes synthesis, dissemination, exchange and ethically sound application of knowledge."5 Knowledge translation occurs within a complex social system of interactions between researchers and knowledge users and with the purpose of improving population health, providing more effective health services and products, and strengthening the health care system.5,26 

Knowledge utilization 
Knowledge utilization refers to the use of broadly defined knowledge including not only research evidence but also scholarly practice and programmatic interventions. It can be regarded as an overarching term that encompasses both research utilization and evidence-based practice.57,58


L

Lessons learned
Read the lessons learned from the Go Sun Smart and Ozioma projects from the Narrative Library.

Logistics
Activities supporting the development, evaluation, and/or dissemination and implementation of an intervention. Also see interviews with Junior and Senior D&I experts in the Narrative Library.


M

Maintenance 
Maintenance refers to the ability of the recipient setting or community to continuously deliver the health benefits achieved when the intervention was first implemented.46

Measurement considerations 
In the context of measures of the D&I process, three main components should be considered: moderators (i.e., factors associated with the speed and extent of dissemination and implementation), mediators (i.e., process variables), and outcomes. The measurement of moderators and mediators can help to identify the factors and processes that lead to the success or failure of an evidence-based intervention to achieve certain outcomes. To reflect the complexity of interventions and diversity in the interest of potential stakeholders (i.e., policymakers, practitioners, clinicians), in D&I research we commonly measure multiple moderators, mediators, and outcomes and assess their relationship.130

Mixed-method designs 
Mixed-methods designs involve the collection and analysis of multiple types of data, both quantitative and qualitative, in a single study to answer research questions using a parallel (quantitative and qualitative data collected and analyzed concurrently), sequential (one type of data informs the collection of the other type), or converted (data is converted - qualitized or quantitized - and reanalyzed) approach. The mixed-methods research design can generate rich data from multiple levels and a number of stakeholders and hence is appropriate for answering complex research questions (also see Systems thinking).127,128
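
A small Python sketch of the converted (quantitized) approach, turning hypothetical qualitative interview codes into counts that can enter a quantitative analysis:

```python
# Quantitizing qualitative data: convert interview codes to frequency counts.
# The codes below are hypothetical.
from collections import Counter

interview_codes = [
    ["staff_buy_in", "time_pressure"],
    ["time_pressure"],
    ["staff_buy_in", "leadership_support", "time_pressure"],
]

code_counts = Counter(code for interview in interview_codes for code in interview)
print(code_counts.most_common())  # counts can then be analyzed quantitatively
```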

Mode I and II science 
A similar model for the classification of research (knowledge production) established by Gibbons and colleagues was considered by the National Cancer Institute of Canada Working Group on Translational Research and Knowledge Transfer.53,77 This model suggests the distinction of Mode I and Mode II science. Mode I science refers to traditional investigator-initiated scientific methods designed to produce discipline-based generalizable knowledge and is characterized by a clear hypothesis, transparent methods, and replicability. Mode II science is defined as "science in the context of its application" and is described as context-driven, problem-focused research with the production of interdisciplinary knowledge.53 Mode II science is concerned with contextual factors such as organizational structure, geography, attitudes, economics, and ethics.53 Graham Harris introduces the concept of Mode III science that is not only done "in the context of its application but which also influences the context and application through engagement in a contextual and recursive debate." He further suggests that "to achieve this aspirational goal requires the establishment of a collaborative 'magic circle', a creative collaboration linking the worlds of science, governance, industry, the media and the community."78


N

Natural experiment 
Natural experiment is a form of observational study design and is defined as "naturally occurring circumstances in which subsets of the population have different levels of exposure to a supposed causal factor, in a situation resembling an actual experiment where human subjects would be randomly allocated to groups."89


O

Observability
Degree to which the results of an intervention are visible to others.141

Opinion leader 
Opinion leaders are members of a community or organization who have the ability to influence attitudes and behaviors of other members of the organization or community. Opinion leadership is based on perceived competence, accessibility, and conformity to system norms and is not a function of formal position. Opinion leaders serve as models for other members of the organization or community for innovation decisions, and hence they can facilitate or impede the dissemination and adoption process.18

Organizational change
Organizational change occurs when a company makes a transition from its current state to some desired future state.138 Also see Organizational readiness for change.

Organizational climate 
Organizational climate refers to the employees' perceptions of and reaction to the characteristics of the work environment.115-121

Organizational culture 
Organizational culture is defined as the organizational norms and expectations regarding how people behave and how things are done in an organization.111,112 This includes implicit norms, values, shared behavioral expectations, and assumptions that guide the behaviors of members of a work unit.113 Organizational culture refers to the core values of an organization, its services, or products as well as how individuals and groups within the organization treat and interact with each other. Schein defined it as "the pattern of shared basic assumptions that was learned by a group as it solved its problems of external adaptation and internal integration, and that has worked well enough to be considered valid and, therefore, to be taught to new members as the correct way to perceive, think, and feel in relation to those problems."114 (p.17)

Organizational readiness for change 
Organizational readiness for change is defined as the extent to which organizational members are psychologically and behaviorally prepared to implement a new intervention. Organizational readiness is widely regarded as an essential antecedent to successful implementation of change in healthcare and social service organizations.122-124

Origin of intervention
Ideas and inspiration that led to the development of the intervention. Also see prior experience and interviews in the Narrative Library.

Originating group
Group that developed the intervention or conducted the original efficacy testing on the intervention.141

Outcome variables 
Outcome variables, the end results of evidence-based interventions, are often different in D&I research from those in traditional health research and have to be defined broadly, including short- and long-term outcomes, individual and organizational- or population-level outcomes, impacts on quality of life, adverse consequences, and economic evaluation.28 Although individual-level variables can also be important (e.g., behavior change variables such as smoking or physical activity), outcome measures in D&I research are typically measured at the organizational, community, or policy level (e.g., organizational change, community readiness for change).

Ozioma
The goal of the Ozioma programs and research is to improve the reach and effectiveness of cancer information for minority populations. The Ozioma News Service and Ozioma online tool aim to provide greater access to locally relevant cancer and health data, which may help in building healthier communities. See the Ozioma page in the Narrative Library.


P

Partnership
Equitable collaboration of different stakeholders in the development, evaluation, and/or dissemination and implementation of an intervention. Also see interviews with Junior and Senior D&I experts in the Narrative Library.

Participant selection and recruitment
See the Junior and Senior D&I Expert Interviews and the Go Sun Smart and Ozioma projects in the Narrative Library.

Peer review
Peer review is a process of self-regulation by a profession or a process of evaluation involving qualified individuals within the relevant field. Peer review methods are employed to maintain standards, improve performance, and provide credibility. In academia, peer review is often used to determine an academic paper's suitability for publication and/or a research proposal's suitability for funding.139 See interviews in the Narrative Library on how peer review might influence dissemination and implementation of evidence-based interventions.

Pilot test
Preliminary study designed to indicate whether a larger study is practical (also known as a feasibility study).142

Plausibility design 
Plausibility design is used to document impact and rule out alternative explanations when an RCT approach is not feasible or acceptable (i.e., because of the complexity of the intervention, known efficacy or effectiveness at small scale, or ethical concerns). Plausibility studies include comparison groups and also address potential confounders.126

Population health intervention research 
Population health intervention research (PHIR) emerged from the work of Hawe and colleagues and is supported by the Canadian Institutes of Health Research (CIHR) through their Population Health Intervention Research Initiative for Canada (PHIRIC).82 PHIR uses scientific methods to produce knowledge on interventions operating either within or outside the health sector with potential to impact health at the population level.24 Population health interventions include programs, policies, and resource distribution processes and are often aimed at multiple systems, use multiple strategies, and are implemented both within and outside of the health sector into dynamic and complex systems.82 PHIR integrates the components of evaluation research and community-based intervention research into traditional intervention research and is concerned with multiple aspects of an intervention, including efficacy and effectiveness, processes by which change is brought about, contextual factors that favor desirable outcomes, reach, differential uptake, dissemination, and sustainability.83 PHIR considers both controlled and uncontrolled intervention designs and produces practice-relevant knowledge for real-world decision making.83

Pragmatic (or practical) clinical trial (PCT) 
Pragmatic (or practical) clinical trials (PCTs) are clinical trials that are concerned with producing answers to questions faced by decision makers.129 Tunis and colleagues define PCTs as studies that "(1) select clinically relevant alternative interventions to compare, (2) include a diverse population of study participants, (3) recruit participants from heterogeneous practice settings, and (4) collect data on a broad range of health outcomes."129 PCTs that take into consideration, rather than "take out of" consideration (i.e., control for), the large number of mediators and moderators that influence the D&I process are more likely to produce practice-based evidence than their highly controlled counterparts.125

Prior experience
Prior experience refers to any previous studies conducted by the research team that might have informed the development, evaluation, and/or dissemination and implementation of an intervention. Also see interviews with Junior and Senior D&I experts in the Narrative Library.

Priority
Priority in this context refers to the perceived importance of a certain topic for stakeholders at the implementation site. See interviews with Junior and Senior D&I experts in the Narrative Library for how priority influences dissemination and implementation of an intervention.

Process evaluation
Evaluation that uses a mixture of methods to identify and describe the factors that promote or impede the implementation of an intervention.141

Professional associations
A professional association (also called a professional body, professional organization, or professional society) is usually a nonprofit organization seeking to further a particular profession, the interests of individuals engaged in that profession, and the public interest.140 See interviews on Go Sun Smart in the Narrative Library on how professional associations might facilitate the dissemination and implementation of evidence-based interventions.

Program evaluation
Evaluation conducted to identify a program's accomplishments and effectiveness.141

Provider reminders or checklists
Intervention strategy that informs, cues, or reminds providers or other healthcare professionals that individual clients are due (reminder) or overdue (recall) for a checkup or medical procedure. Techniques for delivery include notes in client charts (written or electronic) or a memorandum or letter.141

Provider or system-oriented interventions
Category of intervention strategies that intends to effect change in providers or in the systems within which providers work.141

Publishing
See peer review and Junior and Senior D&I expert interviews in the Narrative Library.


Q

Qualitative research
Qualitative research explores the subjective world. It attempts to understand why people behave the way they do and what meaning experiences have for people. Qualitative research relevant to effectiveness reviews may include interviews, observational studies, and process evaluations.141

Quasi-experimental design with control group
An experiment in which units are not randomly assigned to conditions and which includes a control group that receives no treatment. The control group is selected to be as similar as possible to the treatment group.141


R

Randomized controlled trial (RCT)
An experiment in which two or more interventions, possibly including a control intervention or no intervention, are compared by being randomly allocated to participants. In most trials one intervention is assigned to each individual but sometimes assignment is to defined groups of individuals (for example, in a household) or interventions are assigned within individuals (for example, in different orders or to different parts of the body).141
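
A minimal Python sketch of both allocation styles, with hypothetical identifiers; a real trial would add elements such as stratification, blocking, and allocation concealment:

```python
# Sketch of random allocation in a two-arm trial; identifiers are hypothetical.
import random

def randomize_individuals(participants, arms=("intervention", "control"), seed=1):
    rng = random.Random(seed)  # a fixed seed makes the allocation reproducible
    return {p: rng.choice(arms) for p in participants}

def randomize_clusters(clusters, arms=("intervention", "control"), seed=1):
    rng = random.Random(seed)
    shuffled = list(clusters)
    rng.shuffle(shuffled)
    # alternate arms over the shuffled order to keep group counts balanced
    return {c: arms[i % len(arms)] for i, c in enumerate(shuffled)}

print(randomize_individuals(["p1", "p2", "p3", "p4"]))
print(randomize_clusters(["household-A", "household-B", "household-C", "household-D"]))
```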

RE-AIM framework 
The RE-AIM framework developed by Glasgow and colleagues70,81,100 provides a conceptual model to guide researchers and practitioners in the development of adequate multistage (reach, adoption, implementation, maintenance) and multilevel (individual, setting) indicators when evaluating D&I efforts.88 A more comprehensive description of the RE-AIM framework and related tools can be found at: http://www.re-aim.org/
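
As an illustration (all counts below are hypothetical), the multistage indicators can be computed as simple proportions at the individual and setting levels:

```python
# Illustrative RE-AIM-style indicators computed as proportions;
# every count below is hypothetical.
def proportion(numerator, denominator):
    return numerator / denominator if denominator else 0.0

reach = proportion(240, 1200)       # participants / eligible individuals
adoption = proportion(18, 30)       # settings adopting / settings approached
implementation = proportion(9, 10)  # intervention components delivered / planned
maintenance = proportion(12, 18)    # adopting settings still delivering at follow-up

print(f"reach={reach:.0%} adoption={adoption:.0%} "
      f"implementation={implementation:.0%} maintenance={maintenance:.0%}")
```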

Reasons for sparse D&I
The factors that contribute to the infrequent dissemination and implementation of evidence-based interventions as perceived by Junior and Senior D&I experts. Also see the Narrative Library.

Receiving group
Group that adopts and uses an evidence-based intervention.141

Reinvention/Adaptation 
For the success of D&I, interventions often need to be reinvented or adapted to fit the local context (i.e., needs and realities).7 Reinvention or adaptation is defined as the degree to which an evidence-based intervention is changed or modified by a user during adoption and implementation to suit the needs of the setting or to improve the fit to local conditions.18 The need for adaptation and understanding of context has been called Type 3 evidence (i.e., the information needed to adapt and implement an evidence-based intervention in a particular setting or population).29,37 Ideally, adaptation will lead to intervention effects at least equal to those shown in the original efficacy or effectiveness trial. To reconcile the tension between fidelity and adaptation, the core components (or essential features) of an intervention (i.e., those responsible for its efficacy/effectiveness) must be identified and preserved during the adaptation process.104

Relative advantage
Stakeholders' perception of the advantage of implementing the intervention versus an alternative solution.146

Replication
Testing of the same intervention to verify that it produces the same effect.141

Representativeness
The degree to which the study group reflects the larger target population on a set of agreed characteristics. Representativeness may also be pursued through a sampling method that deliberately samples for heterogeneity, aiming to reflect diversity on presumptively important dimensions even when the sample is not formally randomized.141

Research team
Individuals who develop, evaluate, and/or disseminate and implement the intervention. Also see interdisciplinary approach. An appropriate composition of the research team can be a major driver of successful intervention development, dissemination, and implementation.

Research utilization 
Research utilization is a form of knowledge utilization with long traditions in the nursing literature; it refers to "the process by which specific research-based knowledge (science) is implemented in practice."58,59 Research utilization, similarly to knowledge translation and knowledge transfer, follows a linear model and is primarily concerned with moving research knowledge into action.26

Resources
Physical, staffing, or financial supports necessary to carry out the evidence-based intervention. A response of sufficient resources indicates that the study described the physical, staffing, or financial supports necessary and demonstrated that these resources were present in the setting where implementation occurred.141


S

Scale up and scaling up 
The term is commonly used in the international health and development literature and refers to "deliberate efforts to increase the impact of health service innovations successfully tested in pilot or experimental projects so as to benefit more people and to foster policy and programme development on a lasting basis."6,65,66 Scaling up most commonly refers to expanding the coverage of successful interventions; however, it can also be concerned with the financial, human, and capital resources necessary for the expansion.6,67 It is suggested that sustainable scale up requires a combination of horizontal (e.g., replication and expansion) and vertical (institutional, policy, political, legal) scaling-up efforts that benefit from different D&I strategies (i.e., training, technical assistance, and hands-on support versus networking, policy dialogue, and advocacy).7 Furthermore, some researchers suggest that scale up has a broader reach and scope than D&I and expands to national and international levels.68 The National Implementation Research Network uses the term going to scale when an evidence-based intervention reaches 60% of the target population that could benefit from it.42 Additional terms used to describe some aspect of the D&I process include knowledge cycle, knowledge management, knowledge mobilization, research transfer, research translation, expansion, linkage and exchange.5,7

Science-to-service gap 
Science-to-service gap refers to the phenomenon when the interventions that are adopted by individuals and organizations are not the ones that are known to be effective and hence most likely to benefit the target population.42,79

Senior D&I expert
See Senior D&I Expert Interviews in the Narrative Library.

Setting
Where an intervention is delivered.141

Stage models 
Stage models propose that D&I of interventions occurs as a series of successive phases rather than as one event.16,18,85,86 Although different stage models vary in the number and names of the identified stages,16 all models suggest that D&I does not stop at the level of initial uptake; further steps are necessary to ensure the long-term utilization of an intervention.87 The stages are identified as dissemination, adoption, implementation, and sustainability. Other commonly used models are the innovation-decision process (knowledge, persuasion, decision, implementation, and confirmation)18 and the stages of the RE-AIM framework (reach, adoption, implementation, maintenance).88 The different stages of the D&I process can be thought of as process variables or mediating factors (i.e., factors that lie in the causal pathway between an independent variable [e.g., the exposure to the intervention] and a dependent variable [e.g., an outcome such as organizational change]); they require different strategies and are influenced by different moderating variables.89
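
In path-model notation, a single mediator M lying between exposure X and outcome Y is commonly sketched as two linked regressions, with the indirect (mediated) effect given by the product of the paths:

```latex
M = i_1 + a X + e_1, \qquad Y = i_2 + c' X + b M + e_2
% indirect (mediated) effect = a b; total effect c = c' + a b
```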

Standard care
Comparison group that mimics typical practice.141

Study design
A study design is a specific plan or protocol for conducting the study, which allows the investigator to translate the conceptual hypothesis into an operational one.137 

Success factors
Success factors refer to any activities and strategies undertaken by the research team during the development and evaluation of an intervention that contributed to its high D&I success as perceived by the developers and/or the adopters of the intervention. Also see interviews with Junior and Senior D&I experts in the Narrative Library.

Sustainability 
Sustainability describes the extent to which an evidence-based intervention can deliver its intended benefits over an extended period of time after external support from the donor agency is terminated.46 Most often sustainability is measured through the continued use of intervention components; however, Scheirer and Dearing suggest that measures for sustainability should also include considerations of maintained community- or organizational-level partnerships; maintenance of organizational or community practices, procedures, and policies that were initiated during the implementation of the intervention; sustained organizational or community attention to the issue that the intervention is designed to address; and efforts for program diffusion and replication in other sites.47 Three operational indicators of sustainability are: (1) maintenance of a program's initial health benefits, (2) institutionalization of the program in a setting or community, and (3) capacity building in the recipient setting or community.46

Systems thinking 
Systems thinking is the process of understanding how things influence one another within a whole. It is based on the premise that societal problems are complex and that responding to these complex problems is only possible by intervening at multiple levels and with the engagement of stakeholders and settings across the different levels, including the home, school, workplace, community, region, and country.131,132 Systems thinking is not only concerned with applying multiple strategies at multiple levels but also focuses on the interrelationships within and across levels and on how interventions need to take these relationships into account in their design and implementation.131,132


T

T1 research 
T1 translational research uses discoveries generated through laboratory and/or preclinical research to develop and test treatment and prevention approaches. In other words, T1 clinical research moves science from "the bench" (fundamental research, methods development) to the patient's "bedside" (efficacy research).44,69

T2 research 
T2 translational research focuses on the enhancement of widespread use of efficacious interventions by the target audience. This type of research includes effectiveness research, diffusion research, dissemination research, and implementation research44 and is also referred to as "bedside to (clinical) practice (or trench)" translation.69,71

Target beneficiaries
Individuals who will be impacted by the intervention. For cancer control interventions, these are often patients or consumers. Target beneficiaries can be, but most often are not, the users of the intervention.

Target users
Individuals who will be implementing the actual intervention. For cancer control interventions, target users are often not the end beneficiary of the intervention.

Technology transfer 
Technology transfer is closely related to (some suggest that it is a subset of) knowledge transfer, and it refers to the process of sharing technological developments with potential users.54,55 While knowledge transfer often refers to individuals as the recipients of the knowledge, technology transfer more often focuses on transfer to larger entities such as organizations, countries, or the public at large.55 The object of technology transfer is often defined broadly as a process, product, know-how, or resource, but its focus is still narrower than the focus of the more encompassing knowledge transfer.55

Theories and frameworks 
There are a number of theories, theoretical frameworks, and models that shape the way we think about D&I research and guide our planning and evaluation activities.12,72 The most commonly used theories and frameworks include the Diffusion of Innovations theory,18,87 theories of organizational change,90 Social Marketing theory,91 theories of communication,92 individual and organizational decision making,93 community organizing models,94 the RE-AIM framework,81 the Precede-Proceed model,95 the Interactive Systems Framework for D&I,96 the Practical, Robust Implementation and Sustainability Model (PRISM),97 the Knowledge-to-Action (KTA) model,26 and the Promoting Action on Research Implementation in Health Services (PARiHS) framework.98,99

Training
Level of instruction and skill development provided to those who deliver the evidence-based intervention.141

Trialability (easy to pilot)
Degree to which an innovation may be experimented with on a limited basis.141

Type 1 evidence 
Type 1 evidence defines the cause of a particular outcome (e.g., health condition). This type of evidence includes factors such as magnitude and severity of the outcome (i.e., number, incidence, prevalence) and the actionability of the cause (i.e., preventability or changeability) and often leads to the conclusion that "something should be done."31,37

Type 2 evidence 
Type 2 evidence focuses on the relative impact of a specific intervention to address a particular outcome (e.g., health condition). This type of evidence includes information on the effectiveness or cost-effectiveness of a strategy compared with others and points to the conclusion that "specifically, this should be done."31 Type 2 evidence (interventions) can be classified based on the source of the evidence (i.e., study design) as evidence-based, efficacious, promising, and emerging interventions.37

Type 3 evidence 
Type 3 evidence is concerned with the type of information that is needed for the adaptation and implementation of an evidence-based intervention.29 This type of evidence includes information on how and under which contextual conditions interventions were implemented and how they were received, and it addresses the issue of "how something should be done." Type 3 is the type of evidence we have the least of; it derives from the context of an intervention.37

Types of evidence 
The types of evidence available for decision making in health can be classified as Type 1, Type 2, and Type 3 evidence.37 These evidence types differ in their characteristics, scope, and quality.


U

Usability testing
Usability testing is a systematic way of observing actual and potential users of a product as they work with [the product] under controlled conditions.135 Usability is defined as "the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction."136
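
A minimal Python sketch summarizing the three dimensions (effectiveness, efficiency, satisfaction) over observed test sessions; the session fields and scales are hypothetical:

```python
# Illustrative usability summary over observed test sessions; the fields
# and scales are hypothetical.
sessions = [
    {"task_completed": True,  "seconds": 95,  "satisfaction": 4},  # 1-5 scale
    {"task_completed": True,  "seconds": 130, "satisfaction": 5},
    {"task_completed": False, "seconds": 240, "satisfaction": 2},
]

effectiveness = sum(s["task_completed"] for s in sessions) / len(sessions)
efficiency = sum(s["seconds"] for s in sessions) / len(sessions)  # mean time
satisfaction = sum(s["satisfaction"] for s in sessions) / len(sessions)

print(f"completion={effectiveness:.0%}, mean time on task={efficiency:.0f}s, "
      f"mean satisfaction={satisfaction:.1f}/5")
```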


Reference list

1. National Cancer Institute. The National Cancer Institute Strategic Plan for Leading the Nation. 2006.
2. National Cancer Institute. Designing for Dissemination. http://cancercontrol.cancer.gov/d4d/. Accessed August 9, 2010.
3. National Institute for Occupational Safety and Health. Communication and Information Dissemination. http://www.cdc.gov/niosh/programs/cid/. Accessed August 9, 2010.
4. National Institute on Disability and Rehabilitation Research. NIDRR's Core Areas of Research. http://www2.ed.gov/rschstat/research/pubs/core-area.html#kdu. Accessed August 9, 2010.
5. Canadian Institutes of Health Research. Knowledge Translation at CIHR. http://www.cihr-irsc.gc.ca/e/33747.html.
6. Mangham LJ, Hanson K. Scaling up in international health: what are the key issues? Health Policy and Planning. 2010;25:85-96.
7. World Health Organization. Practical guidance for scaling up health service innovations. 2009.
8. National Center for the Dissemination of Disability Research. A review of the literature on dissemination and knowledge utilization. 1996.
9. Fixsen DL, Naoom SF, Blase KA, Friedman RM, Wallace F. Implementation Research: A synthesis of the literature. Tampa, Florida: National Implementation Research Network, University of South Florida; 2005.
10. Kerner J, Rimer B, Emmons K. Introduction to the special section on dissemination: dissemination research and research dissemination: how can we close the gap? Health Psychol. Sep 2005;24(5):443-446.
11. Glasgow RE, Marcus AC, Bull SS, Wilson KM. Disseminating effective cancer screening interventions. Cancer. Sep 1 2004;101(5 Suppl):1239-1250.
12. Crosswaite C, Curtice L. Disseminating research results--the challenge of bridging the gap between health research and health action. Health Promotion International. 1994;9(4):289-296.
13. Ciliska D, Robinson P, Armour T, et al. Diffusion and dissemination of evidence-based dietary strategies for the prevention of cancer. Nutr J. Apr 8 2005;4(1):13.
14. Lost in clinical translation. Nat Med. Sep 2004;10(9):879.
15. Dobbins M. Is scientific research evidence being translated into new public health practice? Toronto, Ontario: Central East Health Information Partnership; 1999.
16. Mayer JP, Davidson WS. Dissemination of innovations. In: Rappaport J, Seidman E, eds. Handbook of community psychology. New York: Plenum Publishers; 2000:421-438.
17. Green LW, Johnson JL. Dissemination and utilization of health promotion and disease prevention knowledge: theory, research and experience. Can J Public Health. Nov-Dec 1996;87 Suppl 2:S11-17.
18. Rogers EM. Diffusion of innovations. Fifth ed. New York: Free Press; 2003.
19. Greenhalgh T, Robert G, Macfarlane F, Bate P, Kyriakidou O. Diffusion of innovations in service organizations: systematic review and recommendations. Milbank Q. 2004;82(4):581-629.
20. Solomon J, Card JJ, Malow RM. Adapting efficacious interventions: advancing translational research in HIV prevention. Eval Health Prof. Jun 2006;29(2):162-194.
21. Proctor EK, Landsverk J, Aarons G, Chambers D, Glisson C, Mittman B. Implementation research in mental health services: an emerging science with conceptual, methodological, and training challenges. Adm Policy Ment Health. 2009;36(1):24-34.
22. Saul J, Duffy J, Noonan R, et al. Bridging science and practice in violence prevention: addressing ten key challenges. Am J Community Psychol. 2008;41(3-4):197-205.
23. Bowen DJ, Sorensen G, Weiner BJ, Campbell M, Emmons K, Melvin C. Dissemination research in cancer control: where are we and where should we go? Cancer Causes Control. May 2009;20(4):473-485.
24. Hawe P, Potvin L. What is population health intervention research? Canadian Journal of Public Health. 2009;100(1):Suppl I8-14.
25. Tetroe J. Knowledge Translation at the Canadian Institutes of Health Research: A Primer: National Center for the Dissemination of Disability Research; 2007.
26. Graham ID, Logan J, Harrison MB, et al. Lost in Knowledge Translation: Time for a Map? The Journal of Continuing Education in the Health Professions. 2006;26:13-24.
27. Balbach ED. Using case studies to do program evaluation. Sacramento, CA: California Department of Health Services; 1999.
28. Rabin BA, Brownson RC, Kerner JF, Glasgow RE. Methodologic challenges in disseminating evidence-based interventions to promote physical activity. Am J Prev Med. Oct 2006;31(4 Suppl):S24-34.
29. Rychetnik L, Hawe P, Waters E, Barratt A, Frommer M. A glossary for evidence based public health. J Epidemiol Community Health. Jul 2004;58(7):538-545.
30. Sackett DL, Rosenberg WM, Gray JA, Haynes RB, Richardson WS. Evidence based medicine: what it is and what it isn't. BMJ. Jan 13 1996;312(7023):71-72.
31. Brownson RC, Baker EA, Leet TL, Gillespie KN. Evidence-based public health. New York: Oxford University Press; 2003.
32. Guyatt G, Rennie D, eds. Users' guides to the medical literature. A manual for evidence-based clinical practice. Chicago, IL: American Medical Association Press; 2002.
33. Jenicek M. Epidemiology, evidenced-based medicine, and evidence-based public health. J Epidemiol. Dec 1997;7(4):187-197.
34. Hawe P, Shiell A, Riley T. Complex interventions: how "out of control" can a randomised controlled trial be? BMJ. Jun 26 2004;328(7455):1561-1563.
35. Borkovec TD, Castonguay LG. What Is the Scientific Meaning of Empirically Supported Therapy? Journal of Consulting and Clinical Psychology. 1998;66(1):136-142.
36. Gambrill E. Evidence-based practice: Implications for knowledge development and use in social work. In: Rosen A, Proctor E, eds. Developing practice guidelines for social work intervention. New York: Columbia University Press; 2003:37-58.
37. Brownson RC, Fielding JE, Maylahn CM. Evidence-Based Public Health: A Fundamental Concept for Public Health Practice. Annual Review of Public Health. 2009;30:175-201.
38. Lomas J. Diffusion, dissemination, and implementation: who should do what? Ann N Y Acad Sci. Dec 31 1993;703:226-235; discussion 235-237.
39. MacLean DR. Positioning dissemination in public health policy. Can J Public Health. Nov-Dec 1996;87 Suppl 2:S40-43.
40. National Institutes of Health. PA-10-038: Dissemination and Implementation Research in Health (R01). 2010.
41. Dearing JW, Kreuter MW. Designing for diffusion: how can we increase uptake of cancer communication innovations? Patient Educ Couns. Dec 2010;81 Suppl:S100-110.
42. Blase KA, Fixsen DL, Duda MA, Metz AJ, Naoom SF, Van Dyke AK. Implementing and sustaining evidence-based programs: Have we got a sporting chance? Blueprints Conference. University of North Carolina, Chapel Hill; 2010.
43. National Institutes of Health. PA-08-166: Dissemination, Implementation, and Operational Research for HIV Prevention Interventions (R01). 2009.
44. Sussman S, Valente TW, Rohrbach LA, Skara S, Pentz MA. Translation in the health professions: converting science into action. Eval Health Prof. Mar 2006;29(1):7-32.
45. Hoelscher DM, Kelder SH, Murray N, Cribb PW, Conroy J, Parcel GS. Dissemination and adoption of the Child and Adolescent Trial for Cardiovascular Health (CATCH): a case study in Texas. J Public Health Manag Pract. Mar 2001;7(2):90-100.
46. Shediac-Rizkallah MC, Bone LR. Planning for the sustainability of community-based health programs: conceptual frameworks and future directions for research, practice and policy. Health Educ Res. Mar 1998;13(1):87-108.
47. Scheirer MA, Dearing JW. Agenda for Research on Sustainability. Under review.
48. Goodman RM, Steckler A. A model for the institutionalization of health promotion programs. Fam Community Health. 1989;11(4):63-78.
49. Pluye P, Potvin L, Denis J. Making public health programs last: conceptualizing sustainability. Evaluation and Program Planning. 2004;27:121-133.
50. Johnson K, Hays C, Center H, Daley C. Building capacity and sustainable prevention innovations: a sustainability planning model. Evaluation and Program Planning. 2004;27:135-149.
51. Proctor E, Silmere H, Raghavan R, et al. Outcomes for Implementation Research: Conceptual Distinctions, Measurement Challenges, and Research Agenda. Adm Policy Ment Health. 2011;38:65-76.
52. Community Partnership for Healthy Children. Spotlight: Funding alternatives. A Sierra Health Foundation Initiative. 2002;4(3):1-4.
53. Best A, Hiatt RA, Norman CD. Knowledge integration: Conceptualizing communications in cancer control systems. Patient Education and Counseling. 2008;71(3):319-327.
54. National Science Foundation. Science and engineering indicators 2006. 2006.
55. Oliver ML. The transfer process: Implications for evaluation. In: Ottoson SM, Hawe P, eds. Knowledge utilization, diffusion, implementation, transfer, and translation: Implications for evaluation. Vol 124. San Francisco: Jossey-Bass and the American Evaluation Association; 2009:61-73.
56. Mitton C, Adair CE, McKenzie E, Patten SB, Perry BW. Knowledge Transfer and Exchange: Review and Synthesis of the Literature. The Milbank Quarterly. 2007;85(4):729-768.
57. Loomis ME. Knowledge utilization and research utilization in nursing. Image J Nurs Sch. Spring 1985;17(2):35-39.
58. Estabrooks CA. The conceptual structure of research utilization. Res Nurs Health. Jun 1999;22(3):203-216.
59. Estabrooks CA, Wallin L, Milner M. Measuring Knowledge Utilization in Healthcare. International Journal of Policy Evaluation & Management. 2003;1:3-36.
60. Ward VL, House AO, Hamer S. Knowledge brokering: exploring the process of transferring knowledge into action. BMC Health Serv Res. 2009;9:12.
61. van Kammen J, de Savigny D, Sewankambo N. Using knowledge brokering to promote evidence-based policy-making: The need for support structures. Bull World Health Organ. Aug 2006;84(8):608-612.
62. Lomas J. Using 'linkage and exchange' to move research into policy at a Canadian foundation. Health Aff (Millwood). May-Jun 2000;19(3):236-240.
63. Caplan N. The two-communities theory and knowledge utilization. American Behavioral Scientist. 1979;22(3):459-470.
64. Hargadon AB. Brokering Knowledge: Linking learning and innovation. Research in Organizational Behavior. 2002;24:41-85.
65. Simmons R, Fajans P, Ghiron L. Introduction. In: Simmons R, Fajans P, Ghiron L, eds. Scaling up health service delivery: from pilot innovations to policies and programmes. Geneva: World Health Organization; 2007:vii-xvii.
66. Johns B, Torres TT. Costs of scaling up health interventions: a systematic review. Health Policy and Planning. 2005;20(1):1-13.
67. Hanson K, Cleary S, Schneider H, Tantivess S, Gilson L. Scaling up health policies and services in low- and middle-income settings. BMC Health Services Research. 2010;10(Suppl1):I1.
68. Norton WE, Mittman B. Scaling-Up Health Promotion/Disease Prevention Programs in Community Settings: Barriers, Facilitators, and Initial Recommendations: The Patrick and Catherine Weldon Donaghue Medical Research Foundation; 2010.
69. Woolf SH. The Meaning of Translational Research and Why It Matters. JAMA. 2008;299:211-213.
70. Glasgow RE, Lichtenstein E, Marcus AC. Why don't we see more translation of health promotion research to practice? Rethinking the efficacy-to-effectiveness transition. Am J Public Health. Aug 2003;93(8):1261-1267.
71. National Institutes of Health. Roadmap for medical research. 2002.
72. Johnson JL, Green LW, Frankish CJ, MacLean DR, Stachenko S. A dissemination research agenda to strengthen health promotion and disease prevention. Can J Public Health. Nov-Dec 1996;87 Suppl 2:S5-10.
73. National Institutes of Health. PA-10-040: Dissemination and Implementation Research in Health (R21). 2010.
74. Center for Mental Health in Schools at UCLA. Systemic change and empirically-supported practices: The implementation problem. Los Angeles, CA; 2006.
75. Szilagyi PG. Translational Research in Pediatrics. Academic Pediatrics. 2009;9(2):71-80.
76. Khoury MJ, Gwinn M, Yoon PW, Dowling N, Moore C, Bradley C. The continuum of translation research in genomic medicine: how can we accelerate the appropriate integration of human genome discoveries into health care and disease prevention? Genetics in Medicine. 2007;9(10):665-674.
77. Gibbons M, Limoges C, Nowotny H, Schwartzman S, Scott P. The new production of knowledge: the dynamics of science and research in contemporary societies. London: Sage; 1994.
78. Harris G. Wicked meso-scale problems and a new kind of science. Seeking Sustainability in an Age of Complexity. First ed: Cambridge University Press; 2007.
79. Panzano P. Images of Implementation; 2010.
80. Fichman RG, Kemerer CF. The illusory diffusion of innovation: An examination of assimilation gaps. Information Systems Research. 1999;10(3):255-275.
81. Glasgow RE, Vogt TM, Boles SM. Evaluating the public health impact of health promotion interventions: the RE-AIM framework. Am J Public Health. Sep 1999;89(9):1322-1327.
82. Canadian Institutes of Health Research. Population Health Intervention Research.  http://www.cihr-irsc.gc.ca/e/33503.html.
83. Institute of Population and Public Health. Population health intervention research. Spotlight on research. Toronto, ON; 2007.
84. Pawson R, Greenhalgh T, Harvey G, Walshe K. Realist review: A new method of systematic review designed for complex policy interventions. Journal of Health Services Research and Policy. 2005;10:S21-S39.
85. Goodman RM, Tenney M, Smith DW, Steckler A. The adoption process for health curriculum innovations in schools: a case study. Journal of Health Education. 1992;23:215-220.
86. Brownson RC, Ballew P, Dieffenderfer B, et al. Evidence-based interventions to promote physical activity: what contributes to dissemination by state health departments? Am J Prev Med. 2007;33(1 Suppl):S66-73; quiz S74-78.
87. Oldenburg B, Glanz K. Diffusion of Innovations. In: Glanz K, Rimer BK, Viswanath K, eds. Health Behavior and Health Education. Fourth ed: John Wiley & Sons, Inc.; 2008:313-334.
88. Dzewaltowski DA, Estabrooks PA, Glasgow RE. The future of physical activity behavior change research: what is needed to improve translation of research into health promotion practice? Exerc Sport Sci Rev. Apr 2004;32(2):57-63.
89. Last JM. A dictionary of epidemiology. 4th ed. New York: Oxford University Press; 2001.
90. Butterfoss FD, Kegler MC, Francisco VT. Mobilizing Organizations for Health Promotion: Theories of Organizational Change. In: Glanz K, Rimer BK, Viswanath K, eds. Health Behavior and Health Education. Fourth ed: John Wiley & Sons, Inc.; 2008:335-362.
91. Storey JD, Saffitz GB, Rimon JG. Social Marketing. In: Glanz K, Rimer BK, Viswanath K, eds. Health Behavior and Health Education. Fourth ed: John Wiley & Sons, Inc.; 2008:435-464.
92. Finnegan JRJ, Viswanath K. Communication Theory and Health Behavior Change: The Media Studies Framework. In: Glanz K, Rimer BK, Viswanath K, eds. Health Behavior and Health Education. Fourth ed: John Wiley & Sons, Inc.; 2008:363-388.
93. Kegler MC, Glanz K. Perspectives on Group, Organization, and Community Interventions. In: Glanz K, Rimer BK, Lewis FM, eds. Health Behavior and Health Education. Fourth ed: John Wiley & Sons, Inc.; 2008:389-404.
94. Bracht NK, Rissel C. A five stage community organization model for health promotion: empowerment and partnership strategies. In: Bracht N, ed. Health promotion at the community level: new advances. Thousand Oaks, CA: Sage Publications; 1999.
95. Green LW, Kreuter M. Health program planning: An educational and ecological approach. Fourth ed: McGraw-Hill; 2005.
96. Wandersman A, Duffy J, Flaspohler P, et al. Bridging the gap between prevention research and practice: the interactive systems framework for dissemination and implementation. Am J Community Psychol. Jun 2008;41(3-4):171-181.
97. Feldstein AC, Glasgow RE. A practical, robust implementation and sustainability model (PRISM) for integrating research findings into practice. Jt Comm J Qual Patient Saf. 2008;34(4):228-243.
98. Kitson A, Harvey G, McCormack B. Enabling the implementation of evidence based practice: a conceptual framework. Qual Health Care. Sep 1998;7(3):149-158.
99. Rycroft-Malone J, Kitson A, Harvey G, et al. Ingredients for change: revisiting a conceptual framework. Qual Saf Health Care. Jun 2002;11(2):174-180.
100. Glasgow RE, Klesges LM, Dzewaltowski DA, Bull SS, Estabrooks P. The future of health behavior change research: what is needed to improve translation of research into health promotion practice? Ann Behav Med. Feb 2004;27(1):3-12.
101. Kerner JF, Guirguis-Blake J, Hennessy KD, et al. Translating research into improved outcomes in comprehensive cancer control. Cancer Causes Control. Oct 2005;16 Suppl 1:27-40.
102. Green LW, Glasgow RE. Evaluating the relevance, generalization, and applicability of research: Issues in translation methodology. Eval Health Prof. 2006;29(1):126-153.
103. Rohrbach LA, Grana R, Sussman S, Valente TW. Type II translation: transporting prevention interventions from research to real-world settings. Eval Health Prof. Sep 2006;29(3):302-333.
104. Castro FG, Barrera M, Jr., Martinez CR, Jr. The cultural adaptation of prevention interventions: resolving tensions between fidelity and fit. Prev Sci. Mar 2004;5(1):41-45.
105. Dearing JW. Evolution of diffusion and dissemination theory. J Public Health Manag Pract. Mar-Apr 2008;14(2):99-108.
106. Brownson RC, Kreuter MW, Arrington BA, True WR. Translating scientific discoveries into public health action: How can schools of public health move us forward? Public Health Reports. 2006;121:97-103.
107. Elliott JS, O'Loughlin J, Robinson K, et al. Conceptualizing dissemination research and activity: The case of the Canadian Heart Health Initiative. Health Education and Behavior. 2003;30(3):267-282.
108. Waters E, Doyle J, Jackson N, Howes F, Brunton G, Oakley A. Evaluating the effectiveness of public health interventions: the role and activities of the Cochrane Collaboration. J Epidemiol Community Health. Apr 2006;60(4):285-289.
109. Bauman LJ, Stein RE, Ireys HT. Reinventing fidelity: the transfer of social technology among settings. Am J Community Psychol. Aug 1991;19(4):619-639.
110. Stetler CB, Ritchie J, Rycroft-Malone J, Schultz A, Charns M. Improving quality of care through routine, successful implementation of evidence-based practice at the bedside: an organizational case study protocol using the Pettigrew and Whipp model of strategic change. Implement Sci. 2007;2:3.
111. Gilson L, Schneider H. Managing scaling up: what are the key issues? Health Policy and Planning. 2010;25:97-98.
112. Verbeke W, Volgering M, Hessels M. Exploring the conceptual expansion within the field of organizational behaviour: Organizational climate and organizational culture. J Manage Stud. 1998;35:303-330.
113. Cooke RA, Rousseau DM. Behavioral norms and expectations: A quantitative approach to the assessment of organizational culture. Group Organ Stud. 1988;13:245-273.
114. Schein E. Organizational culture and leadership. Third ed. San Francisco: Jossey-Bass; 2004.
115. Glisson C, James LR. The cross-level effects of culture and climate in human service teams. J Organ Behav. 2002;23:767-794.
116. James LR, Hater JJ, Gent MJ, Bruni JR. Psychological climate: Implications from cognitive social learning theory and interactional psychology. Pers Psychol. 1978;31:783-813.
117. James LR, Sells SB. Psychological climate: Theoretical perspectives and empirical research. In: Magnusson D, ed. Toward a psychology of situations: An international perspective. Hillsdale, NJ: Erlbaum; 1981:275-295.
118. Litwin G, Stringer R. Motivation and Organizational Climate. Cambridge, MA: Harvard University Press; 1968.
119. Hellriegel D, Slocum JWJ. Organizational climate: Measures, research and contingencies. Acad Manage J. 1974;17:255-280.
120. Reichers A, Schneider B. Climate and culture: An evolution of constructs. In: Schneider B, ed. Organizational Climate and Culture. San Francisco: Jossey-Bass; 1990.
121. Schneider B. Organizational climate: An essay. Pers Psychol. 1975;28.
122. Lehman WE, Greener JM, Simpson DD. Assessing organizational readiness for change. J Subst Abuse Treat. Jun 2002;22(4):197-209.
123. Weiner BJ. A theory of organizational readiness for change. Implement Sci. 2009;4:67.
124. Weiner BJ, Amick H, Lee SY. Conceptualization and measurement of organizational readiness for change: a review of the literature in health services research and other fields. Med Care Res Rev. Aug 2008;65(4):379-436.
125. Rabin BA, Glasgow RE, Kerner JF, Klump MP, Brownson RC. Dissemination and implementation research on community-based cancer prevention: A systematic review. Am J Prev Med. 2010;38(4):443-456.
126. Victora C, Habicht J, Bryce J. Evidence-Based Public Health: Moving Beyond Randomized Trials. Am J Public Health. 2004;94(3):400-406.
127. Tashakkori A, Teddlie C. Handbook of mixed methods in social and behavioral research. Second ed. Thousand Oaks, CA: SAGE Publications; 2010.
128. Johnson RB, Onwuegbuzie AJ. Mixed Methods Research: A Research Paradigm Whose Time Has Come. Educational Researcher. 2004;33(7):14-26.
129. Tunis SR, Stryer DB, Clancy CM. Practical clinical trials: increasing the value of clinical research for decision making in clinical and health policy. JAMA. Sep 24 2003;290(12):1624-1632.
130. Glasgow RE. What outcomes are the most important in translational research? Paper presented at: Proceedings of "From clinical science to community: The science of translating diabetes and obesity research" conference; 2004; Bethesda, Maryland.
131. Leischow SJ, Best A, Trochim WM, et al. Systems thinking to improve the public's health. Am J Prev Med. Aug 2008;35(2 Suppl):S196-203.
132. Trochim WM, Cabrera DA, Milstein B, Gallagher RS, Leischow SJ. Practical challenges of systems thinking and modeling in public health. Am J Public Health. Mar 2006;96(3):538-546.
133. Rothwell PM. External validity of randomised controlled trials: "to whom do the results of this trial apply?" Lancet. Jan 1-7 2005;365(9453):82-93.
134. Centers for Disease Control and Prevention. Social Marketing: Nutrition and Physical Activity. http://www.cdc.gov/nccdphp/dnpa/socialmarketing/training/glossary.htm.
135. Dumas J, Loring B. Moderating usability tests: principles and practices for interacting. Burlington, MA: Morgan Kaufmann; 2008; p.2.
136. International Organization for Standardization. ISO 9241-11:1998(E): Ergonomic requirements for office work with visual display terminals (VDTs) - Part 11: Guidance on usability; p.2.
137. World Health Organization. Training manual for community-based initiatives: a practical tool for trainers and trainees; p.416. http://www.emro.who.int/dsaf/dsa736.pdf.
138. U.S. Legal. Managing Organizational Change Law & Legal Definition. http://definitions.uslegal.com/m/managing-organizational-change/.
139. Wikipedia. Peer review. http://en.wikipedia.org/wiki/Peer_review.
140. Wikipedia. Professional associations. http://en.wikipedia.org/wiki/Professional_association.
141. Center for Health Dissemination and Implementation Research. Glossary. http://www.research-practice.org/glossary.htm.
142. Meinert C. Clinical trials: Design, conduct, and analysis. Oxford: Oxford University Press; 1986. 
143. Learning-Theories. Communities of Practice (Lave and Wenger). http://www.learning-theories.com/communities-of-practice-lave-and-wenger.html
144. Soumerai SB, Avorn J. Principles of educational outreach ('academic detailing') to improve clinical decision making. JAMA. 1990;263(4):549-556.
145. Wikipedia. Academic detailing. http://en.wikipedia.org/wiki/Academic_detailing#cite_note-0.
146. Consolidated Framework for Implementation Research (CFIR). http://wiki.cfirwiki.net/index.php?title=Relative_Advantage