In your initial post, reflect on your understanding of the concept of sustainability, as applied to program design in human services. Based on the course readings, list and explain three examples of tools for organizational sustainability that must be considered when developing a program. How do these tools apply to the hypothetical program you are developing as a response to the call for action inspired by this course’s media piece?
Program Development in the 21st Century: An Evidence-Based Approach to Design, Implementation, and Evaluation
By: Nancy G. Calley
Developing the Outcomes Evaluation Plan

As is true of all evaluation efforts, the outcomes evaluation is a complex process and, as such, requires significant planning. In addition to conducting all the preparatory work needed to effectively support the evaluation process, there are three major areas that compose the planning process:

- Determining the evaluation design
- Selecting the assessment instruments
- Establishing the evaluation time frames

Outcomes Evaluation Design

Similar to fidelity assessment and process evaluation, outcome evaluations are highly complex and require a tremendous amount of planning, effort, and attention to detail. Outcome evaluations are the driving force behind the development of evidence-based practices. Indeed, without an effective program evaluation, an evidence basis cannot be established. As a result, the highest degree of scientific rigor is required in order to evaluate outcomes most effectively. To this end, randomized clinical trials (RCTs) have become the gold standard for establishing treatment efficacy, because of their potential to maximize internal validity (i.e., to attribute outcomes directly to the treatment rather than to other causes; Del Boca & Darkes, 2007). The interdependent relationship shared by fidelity assessment, process evaluation, and outcomes evaluation is further underlined by the requirements of RCTs, because RCTs depend on treatment fidelity as well as on an effective process evaluation. Whereas RCTs are indeed the gold standard, experimental designs that use a control group (i.e., withhold treatment) are not always feasible in practice settings for ethical as well as logistical reasons. As a result, program developers must be well versed in the various types of program evaluation design to ensure that the most effective and ethically sound evaluation is used.
Although there are multiple types of design that can be used, I would like to focus briefly on quasi-experimental design, since it may represent the most rigorous type of design possible given the inherent challenges of research in practice settings. One of the most commonly used types of quasi-experimental design in mental health and human services is the pre/post-test design. In the pre/post-test design, clients can be randomly assigned or deliberately assigned either to one of two treatments or to a treatment or control group (i.e., no treatment is provided). But again, because ethical considerations prohibit withholding treatment from those in need, pre/post-test design in practice settings typically involves assigning clients to one of two treatments, an evaluation design that is increasingly common today (Heppner, Wampold, & Kivlighan, 2008). A pre/post-test design allows you to examine pretest differences, a critical aspect when comparing nonequivalent groups. Because of this, the pre/post-test design is stronger and more interpretable than a post-test-only design (Heppner et al., 2008). However, it should be noted that one threat to this type of design relates to potential problems with external validity that can occur as a result of pretest sensitization, in which the pretest itself creates a difference between the two groups. Most agree, however, that this threat is minor and that the benefits of the design outweigh the risk (Heppner et al., 2008; Kazdin, 2003). The aspect of random assignment should be briefly discussed, particularly as it can significantly impact the integrity of the pre/post-test design. In order to maximize the strength of research findings, the two treatment groups being evaluated should be equal. Such equality among treatment participants is referred to as a between-groups design, meaning that the groups are equal prior to treatment. By ensuring this, you are able to more easily attribute post-treatment differences to the treatment.
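To make the pre/post-test logic concrete, the following Python sketch first checks pretest equivalence and then compares mean change scores across two groups. The data, group labels, and function name are invented for illustration only; they are not drawn from the text.

```python
from statistics import mean

def change_scores(pre, post):
    """Per-client change from pretest to post-test (negative values
    indicate improvement when lower scores mean fewer symptoms)."""
    return [after - before for before, after in zip(pre, post)]

# Hypothetical symptom-severity scores (lower = fewer symptoms).
treatment1_pre, treatment1_post = [20, 18, 22, 19], [12, 10, 15, 11]
treatment2_pre, treatment2_post = [21, 19, 20, 18], [17, 16, 18, 15]

# Step 1: examine pretest differences (a key strength of this design).
print("Pretest means:", mean(treatment1_pre), mean(treatment2_pre))

# Step 2: compare mean change between the two treatments.
print("Mean change, Treatment 1:", mean(change_scores(treatment1_pre, treatment1_post)))
print("Mean change, Treatment 2:", mean(change_scores(treatment2_pre, treatment2_post)))
```

Here the similar pretest means support the comparison, and the larger negative mean change for the first group would be the kind of between-groups difference a pre/post-test design is built to detect.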
Accounting for between-groups equality requires random assignment of participants to one of the two or more groups. Random assignment, or randomization, means that participants have been assigned without bias. Randomization can be accomplished in several ways, such as assignment to one of the two groups based on order of program admission: Client 1 is assigned to Treatment 1, Client 2 to Treatment 2, Client 3 to Treatment 1, Client 4 to Treatment 2, and so on. In this type of randomization, all odd-numbered admissions are assigned to Treatment 1, while all even-numbered admissions are assigned to Treatment 2. While quite straightforward, this type of randomization requires that clients enter in a sequential manner and that the sequence of client admission is tracked. This may not be feasible in some practice settings, particularly when clients are admitted en masse. To address this issue, the old and trusted hat trick may serve better: simply place the names of all clients in a hat, and use the order in which the names are pulled to assign the clients to one of the two treatment groups (i.e., the first name pulled is assigned to Treatment 1, the second name to Treatment 2, the third name to Treatment 1, etc.). As with every aspect of research design, the issue of random assignment must be given adequate attention in order to further strengthen your overall design. While basic research design composes one part of an evaluation program, the other area that must be given thoughtful attention is that of research methods; however, such a discussion is outside the scope of this text. A number of quantitative and qualitative research methods may be used in program evaluation as relevant to the study design, and a good resource for use in designing the program evaluation is Heppner et al.'s (2008) Research Design in Counseling.
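The two randomization schemes just described, sequential alternation by admission order and the hat draw, can be sketched in Python as follows. The function names and group labels are illustrative only.

```python
import random

def assign_by_admission_order(clients):
    """Alternating assignment: odd-numbered admissions to Treatment 1,
    even-numbered admissions to Treatment 2."""
    groups = {"Treatment 1": [], "Treatment 2": []}
    for i, client in enumerate(clients, start=1):
        group = "Treatment 1" if i % 2 == 1 else "Treatment 2"
        groups[group].append(client)
    return groups

def assign_by_hat_draw(clients, rng=None):
    """The 'hat trick': shuffle all names, then alternate assignment
    in the order the names are drawn."""
    pool = list(clients)
    (rng or random.Random()).shuffle(pool)
    return assign_by_admission_order(pool)

admissions = ["Client 1", "Client 2", "Client 3", "Client 4"]
print(assign_by_admission_order(admissions))
# Clients 1 and 3 go to Treatment 1; Clients 2 and 4 to Treatment 2.
```

Note that the hat draw reuses the alternation logic; the only difference is that the order of names is shuffled first, which removes any dependence on tracking sequential admissions.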
Whereas little can replace firm knowledge of research design, attention to specific issues related to design may provide basic guidance for planning the research design (see Box 12.2).

BOX 12.2 BASIC GUIDE TO DETERMINING THE RESEARCH DESIGN

- Consult your profession's ethical standards related to research.
- Review each of the potential types of research design to identify the most rigorous design that can be implemented within the practice setting.
- Seek out new knowledge as needed to increase your own competency level with regard to research design and analysis.
- Consult with experts in research design and statistical analysis as needed.
- Involve staff and other stakeholders in the design selection process to promote early engagement in the research.

Most importantly, before embarking on any research design project, ensure that you have a comprehensive grasp of research design and methods. This often requires revisiting research methods coursework, pursuing new coursework, attending training and workshops focused on research design and methods, and utilizing the vast array of texts and tools available to you in this area. Failure to do so will almost certainly threaten your ability to effectively evaluate your program's outcomes. It is often at this stage that we finally get to put into practice in the real world what we have learned only conceptually or through academic assignments. As a result, this stage should be viewed as an exciting opportunity to grow. Regardless of any previous struggles you may have experienced with research design and/or methods, applying research in practice often signifies the opening of a window that has not been opened before. So rather than facing it with discomfort or fear, pursue it with zest and perseverance; once you have thoroughly engaged in it, the stimulation and sense of accomplishment will reinforce how truly rewarding the process of learning is.
Selecting Assessment Tools

The research design will guide the selection of assessment instruments. This is particularly true if you have selected a pre/post-test design, since instruments that assess change over time are needed rather than instruments developed to assess issues that are static or not prone to change. For instance, there are assessment instruments that evaluate potential risk based on events that occurred at a particular time in a person's life, such as the age at which an individual committed his/her first crime or the age at which an individual was physically abused. The results of this type of assessment will not change over time; therefore, the assessment is intended to be used just once to gain specific information. However, other assessment instruments are developed precisely to evaluate change over time and, therefore, are designed to be administered multiple times. Examples of outcomes suited to such repeated measurement include mental health functioning, recidivism, sobriety, and employment. These types of outcomes lend themselves to evaluation through a pre/post-test research design, whereas the former (i.e., static outcomes) do not. Selecting both the most effective and the most relevant assessment tools for an outcomes evaluation requires quite a bit of work and investigation; however, much of this work should have been completed during the program design stage (discussed in Chapter 5). At this step, then, it means revisiting the program design in order to review previously selected assessment instruments and determine whether additional instruments are needed. Several guiding factors should be considered in the selection of assessment instruments:

- Use only assessment instruments with established psychometric properties.
- Use assessment instruments only for their intended purpose and in the manner in which they have been found to be effective.
- Ensure a thorough understanding of the strengths and limitations of assessment instruments.
- Review the qualification level needed to administer a test, and ensure that individuals charged with test administration have the required qualifications.
- Ensure a firm understanding of the ethical standards that guide the use of assessment instruments.
- Use assessment instruments for the purpose of treatment planning and improving treatment and services, not for denying or limiting services that would otherwise be provided.
- Ensure a firm understanding of the role that testing conditions play in test performance, work to promote optimal testing conditions, and include a discussion of testing conditions and their potential impact on test scores in the testing summary.
- Gather and maintain assessment data, as you do all client information, in a confidential manner, and protect it in accordance with all relevant state and federal laws.
- Practice additional compliance as required by all research protocols, policies, and laws when using assessment data for research purposes (e.g., outcomes/program evaluation).
- Gain consent and/or authorization, when required, from the oversight organization with indirect responsibility for the client, in addition to gaining consent from clients and/or other authorized individuals (in the case of minors and vulnerable adults).
- Conduct all research in accordance with laws regarding the protection of human subjects and with authorization and oversight by an institutional review board.

In addition, a valuable resource regarding the use of tests is Responsibilities of Users of Standardized Tests, promulgated by the Association for Assessment in Counseling (2003).
The publication provides broad-based guidance in seven key areas:

- Qualifications of test users
- Technical knowledge
- Test selection
- Test administration
- Test scoring
- Interpreting test results
- Communicating test results

The publication is available at no cost through the association's website (www.theaaceonline.com).

Establishing Evaluation Time Frames

Conducting any type of evaluation is always a lengthy process, and understandably so, given all that is involved. This is why planning for the evaluation is such a critical aspect. Part of this planning process requires the establishment of evaluation time frames: the time frames in which the actual evaluation will be conducted. For instance, a basic research design may include one pretest and one post-test, given at program admission and at program discharge, respectively. However, multiple other time frames may be used based on the research design, some of which allow you to evaluate progress during the treatment process and others of which allow you to evaluate long-term treatment gains. Table 12.2 provides examples of time frames for conducting assessment activities based on the research design.

Table 12.2 Evaluation Time Frame Samples

In addition to identifying time frames for conducting specific assessment activities, developing a timeline to facilitate the collection of data for the development, maintenance, and revision of the program evaluation plan is recommended (Gard, Flannigan, & Cluskey, 2004, p. 177). Again, I cannot stress enough the usefulness of specific planning tools in organizing the planning process. Whereas timelines are often essential, Gantt charts and project maps may also prove indispensable, not only in managing your time but also in communicating plans to others.
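As one way of operationalizing such time frames, the sketch below builds a simple assessment schedule for a pre/post-test design with a single follow-up. The program length, follow-up interval, and function name are hypothetical choices for illustration, not values taken from Table 12.2.

```python
from datetime import date, timedelta

def evaluation_schedule(admission, program_weeks=16, followup_months=6):
    """Illustrative time frames: pretest at admission, post-test at
    discharge, and one long-term follow-up after discharge."""
    discharge = admission + timedelta(weeks=program_weeks)
    return {
        "pretest (admission)": admission,
        "post-test (discharge)": discharge,
        "follow-up": discharge + timedelta(days=30 * followup_months),
    }

# Print the assessment calendar for a client admitted on a given date.
for assessment, when in evaluation_schedule(date(2024, 1, 8)).items():
    print(assessment, when.isoformat())
```

A schedule generated per client in this way can feed directly into the timelines, Gantt charts, and project maps recommended above.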
Comprehensive Evaluation Planning

Whereas a large part of this chapter has been devoted to outcomes evaluation and its various aspects, mental health professionals should be engaging in comprehensive evaluation that includes all three types of evaluation discussed above: fidelity assessment, process evaluation, and outcomes evaluation. This is because each has specific relevance and, therefore, is conducted for specific purposes. Indeed, comprehensive evaluation planning should include the use of multiple types of evaluation and should be used to guide long-term evaluation activities. When engaging in comprehensive program evaluation, several issues must be attended to, including, but not limited to, the following:

- Identify specifically what is being evaluated and why, and how the evaluation results will be disseminated and used to inform treatment and services.
- Engage all stakeholders in the evaluation process early in order to sustain engagement throughout the evaluation process.
- Provide orientation and training to all evaluation participants to promote knowledge and understanding of evaluation procedures, rationale, and methods.
- Establish a comprehensive evaluation plan with identified evaluation types and time frames as part of initial program planning.

In addition to promoting effective program management, engaging in comprehensive evaluation planning ensures that the evaluation processes are well organized and are a pivotal part of program development.

Considerations in Evaluation

As mentioned earlier, evaluation can be one of the most rewarding endeavors in which you engage, particularly as viewed from a program management perspective.
In order to fully consider all that is involved in and related to evaluation efforts, there are three key areas that I would like to highlight:

- Evaluation as a tool for organizational sustainability
- Costs and benefits of evaluation
- Creating a culture of evaluation

Evaluation as a Tool for Organizational Sustainability

Evaluation is a tool, one used by professionals to gain critical information. In the case of process and outcomes evaluations, data is collected and analyzed to determine the efficiency, effectiveness, and impact of programs and services (Boulmetis & Dutwin, 2005). As such, evaluation demonstrates accountability: the accountability of a provider of services to the recipient/client, to the funder or contractor of services, to the public, to the profession to which the provider belongs, and to the industry in which the provider works. It is this accountability that may allow the provider to continue providing services, just as, conversely, a lack of accountability may result in the discontinuation of practice. Because the appropriate and effective use of evidence-based practices guides the mental health and human service fields today, evaluation is no longer an optional activity for those who are so moved but a required activity that must be incorporated into all aspects of practice. Indeed, when called on to provide evidence of program or intervention effectiveness, mental health professionals can effectively draw on information gathered from the evaluations that they have instituted (Astramovich & Coker, 2007). As such, evaluation is a tool necessary for the long-term sustainability of the service, the program, the organization, and the counseling and other mental health professions.

The Costs and Benefits of Evaluation

As any good manager knows, you should never embark on a new endeavor without conducting a thorough cost-benefit analysis.
Otherwise, you may find that your investment far outweighs your return, that what you received was far from what you originally hoped for, or any number of other unfortunate surprises. There are multiple potential costs and benefits related to engaging in evaluation efforts. I use the term potential since, ultimately, the outcomes will dictate the actual costs and benefits. Table 12.3 provides a sample of some of the costs and benefits typically associated with evaluation efforts.

Table 12.3 Sample of Potential Costs and Benefits of Engaging in Evaluation Efforts

The point here, although rather obvious, is that the benefits resulting from engaging in evaluation efforts should always greatly outweigh the costs. And while only a few costs and benefits are economic and easily quantified, the benefits to the professionals involved in evaluation efforts are priceless. This is of particular significance in a profession that does not naturally provide immediate feedback about the impact of our work. Unlike the car salesperson who gains immediate feedback about her/his selling ability upon completing a car sale, or the senator who witnesses the passage of a bill that s/he authored, many mental health professionals rarely gain substantial feedback about their work unless an evaluation has been conducted. It is in this regard, then, that evaluations provide significant and meaningful information about our work. Even when anticipated outcomes have not been attained, evaluation data usually provides other significant information that is useful for service/treatment improvement efforts and, as such, provides essential input to our work.

Creating a Culture of Evaluation

The environment in which an evaluation is conducted plays a major role in any evaluation process. This is due to many reasons, not least of which is the very intent of evaluation: to assess or evaluate how one is doing. And in this case, it means evaluating the work of mental health professionals.
For many of us, regardless of how otherwise healthy we might be, the notion of having our work evaluated has a tendency to make us a bit uneasy. It is because of this that the climate created within practice environments is key to effectively supporting evaluation efforts. Creating a culture or climate for evaluation requires close attention to several details and adequate preparation of the work environment. Murray (2005) identifies two of these issues:

- Encouraging an atmosphere of openness and trust throughout the evaluation process
- Including all relevant stakeholders throughout the evaluation process

In addition, all stakeholders must understand the purpose of the evaluation and how the results will be used. Often, uncertainty about how results will be used causes the greatest anxiety for stakeholders in an evaluation process. Therefore, an environment must be created in which continuous improvement is the overarching goal, and the philosophy that there is no failure, only room for improvement, is used to guide the process. It is only in this type of environment that evaluation can be viewed as a necessary and positive experience, regardless of the results. However, this also means that evaluation results cannot be used for punitive measures, since such measures are counterproductive to creating a healthy evaluation environment. The following activities should also be used to promote a culture of evaluation:

- Before starting the evaluation process, identify where you would like it to lead and all that should result from the evaluation.
- Openly and frequently discuss the relationship between evaluation, accountability, and long-term sustainability.
- Incorporate progress updates into existing forums so that evaluation information and activities are consistently shared among stakeholders as part of the ongoing communication cycle.
- Share various types of results with stakeholders frequently to keep the evaluation process alive.
- Explain precisely how each set of evaluation results will be used, and then provide ongoing updates regarding their use.
- Celebrate evaluation processes as a core part of work life.

Summary

Evaluation is an integral part of comprehensive program development and one that is specifically connected to program design and program implementation. The significance of evaluation has grown steadily over the past several decades, and evaluation today is viewed as standard practice in mental health and human services. Moreover, the significance of the various types of evaluation has also continued to grow as our understanding of the influence of treatment fidelity and process implementation on evaluation has developed. While there is still room for growth in broad-based acknowledgment of the role that fidelity and process implementation play in comprehensive program evaluation, there are signs today that this knowledge will only continue to evolve. As such, fidelity and process implementation may soon reach the same level of significance that outcomes evaluation holds today. The manner in which mental health professionals perceive evaluation as a core part of program development, and thereby embed evaluation activities throughout programs and organizations, is largely indicative of their commitment not only to quality and accountability but also to long-term sustainability. Whereas there continues to be a need to bring in external evaluation experts to handle evaluation activities on behalf of the organization, evaluation knowledge and skills are essential for all mental health professionals. As a result, there is increased understanding of the link between program design, implementation, and evaluation, and a much more intimate relationship between the treatment provider and the treatment.
This is not only a basic right of accountability to which all consumers are entitled but also what consumers most prefer: a closer relationship between the product and the seller, ensuring that the seller is intrinsically aware of all that the treatment and/or service is and is not able to provide.

CASE ILLUSTRATION

Alana and Ava had been cofacilitating a treatment program for adults with panic disorder and agoraphobia for the past year and a half. Their interventions consisted of individual and group therapy using cognitive-behavioral interventions. Whereas cognitive-behavioral interventions had been found to be effective in addressing panic disorder, Ava and Alana knew that they needed to evaluate their approach to determine whether it was indeed evidence based, and they also needed to explore existing evidence-based models. Adding a sense of urgency, Alana and Ava were increasingly being recognized in their community as specialists in treating panic disorder, and therefore, they were eager to ensure that they were providing the best treatment they could to their clients. After reviewing the research, Ava discovered a treatment approach that was evidence based and shared the details with Alana. The approach had been rigorously evaluated, with strong outcomes over multiple evaluations, reinforcing their excitement to implement the approach with their clients. Alana obtained all the details of the model, examining all the components and how each was implemented so that she and Ava could implement it as designed, thus retaining the model's fidelity. At the same time, Ava designed the evaluation components, including a fidelity assessment, a process implementation assessment, and an outcomes evaluation. The outcomes that would be measured were determined based on the research about expected outcomes for panic disorder and the results of previous outcomes evaluations. The assessment tools were identified based on the research as well as on the previous outcomes evaluations.
Because they wanted to evaluate their existing program as well, Alana and Ava decided to use a quasi-experimental design to evaluate their existing treatment approach against the evidence-based model. They would do this by randomly assigning clients to one of the two treatments and evaluating treatment outcomes during treatment, at discharge, and at 6 months post-discharge. Neither Ava nor Alana had conducted a formal evaluation before, so they consulted with an evaluator for guidance in finalizing the evaluation design, thereby learning how to design and conduct the evaluation. They then developed informed consent forms for their clients and obtained approval through the agency's human subjects committee to conduct the study. Ava and Alana developed a timeline to guide the evaluation, including the implementation date for the new treatment model, which was also the date that the evaluations would begin for both the existing and the new model. Following implementation, Alana and Ava met to review the initial fidelity and process evaluation data and were pleased to note that they had implemented the evidence-based model as designed. After 4 months, they had their initial outcomes data set, which did in fact show significant differences between the two treatment groups, with clients who had received the evidence-based treatment showing greater improvements (i.e., fewer panic symptoms and less frequent episodes) than those who had received the existing treatment. Whereas Ava and Alana realized that these short-term outcomes might not translate into long-term outcomes, they were eager to learn what the long-term outcomes would be. Soon enough, they witnessed the first four groups complete treatment and had enough data to analyze the post-treatment outcomes.
The post-treatment outcomes also revealed significant differences between the two treatment groups, with the clients who had received the evidence-based treatment continuing to show even greater improvements (i.e., fewer panic symptoms and less frequent episodes) than those who had received the existing treatment. In addition, Ava and Alana's existing treatment did not produce significant positive outcomes in comparison with the evidence-based model, and the findings did not indicate any significant change for this group. Because both the fidelity assessment and the process implementation assessment results indicated that Alana and Ava had implemented the treatments as originally designed and intended, and because they had effectively conducted the evaluations, they were confident that the results of the outcomes evaluation were valid. Unfortunately, the outcomes did not provide evidence that the treatment approach they had been using was effective, and therefore, they planned to stop using it immediately. In its place, they would continue to use the evidence-based treatment model that they had now become comfortable using and, more importantly, that had yielded significant positive outcomes for their clients. Guided in this decision making by the evaluation data, Ava and Alana were excited about their newly adopted treatment approach, their outcomes, and the continuation of their evaluation program, which would continue to inform and guide their practice well into the future.