Sunday, December 5, 2010

Evaluation

Oh evaluation. This is a term near and dear to my heart at this point in my life, as I will be doing a program evaluation for my thesis! I found this week's lecture to be particularly interesting, obviously because of my thesis, but also because evaluation is so important. Programs and interventions are great tools for helping people and organizations create real change, but as I think Manpreet pointed out, they only accomplish this goal if they are in fact effective programs. I also found reviewing the different types of evaluation (process, impact, and outcome) helpful for understanding evaluation through a systematic lens. I am excited to use some of these tools when I begin my research, because while I helped design the health program at the school where I teach and I think it is fantastic (I have to gloat a little), I won't really know its impact until I do the impact evaluation. Likewise, I won't know what worked well, what did not work at all, and how I could recommend improvements unless I do the evaluation.

Final Thoughts...

This week's lecture about evaluation is the culmination of our semester's journey through health communication methods. Evaluation is probably one of, if not the, most important means of reflecting on and justifying a program. Evaluation can aid in correcting and consolidating programs, as well as offer the feedback needed to help the program progress and catch any errors.

In my current internship I record and enter the data for the PHE presentation evaluations. During this process I note any comments made by participants or instructors and review their quiz responses to see if they are adequately absorbing the information. This process allows the program and the Peer Health Educators to improve their presentation skills and the content of their material. Without evaluation we would never know whether our health campaigns are successful or whether we are even making a difference, negative or positive. As a health promotion student, evaluation is not my favorite area of research, but I understand the need and appreciate the outcome.

In my future career in health promotion, I will value the lessons taught in health communication and try to use them to further my message and accurately disseminate my information. Without accurate evaluation, all of our work will be in vain. The moral of this story: speak softly but carry a big stick.

Evaluation

This week we talked about the importance of evaluation. I agree with what Liz said in class: "It's worse to have a bad intervention than no intervention at all." If your intervention isn't doing what it's supposed to do, then why put the time, money, and effort into it? But how do you know if it's a "bad intervention?" Evaluation!

If you think about it, you need evaluation results to see if what you are doing works, prove that the method works or doesn't work, develop and inform future projects, show key stakeholders what you have accomplished, and make notes on what, if anything, you need to change as you carry out the process. What surprised me was what I learned about the D.A.R.E. program. My school took part in that program when I was in elementary school, and I recently learned that, after evaluating the program, researchers found it did not have a positive effect on its target population. This program was widely used, and if they had pilot tested the program or evaluated as they went along, they would not have ended up putting so much time and so many resources into it. It's rather unfortunate.

The big take-away message from this lecture is that evaluation is extremely necessary, regardless of whether the program/intervention is working or not, because researchers, consumers, stakeholders, etc. should know whether what they are investing in will give the biggest bang for the buck!

Janice's Last Reflection

Evaluation is the 'icing on the cake' for a health campaign. In other words, to 'top it all off,' one must know whether or not his or her campaign is working for the better. As Sheila mentioned in class, there are campaigns that have negative effects on the target population, e.g., the D.A.R.E. program. Evaluation via pre- and post-tests, as well as evaluation during a health campaign, is needed to be sure that it is working in a positive manner, i.e., in a way in which people ARE making behavior modifications and changes to improve their health status.

Evaluation is an arena I already take part in through my current occupation. After completing a training for parents about the importance of oral health, the program I work for needs feedback regarding the information I disseminated. Making sure I have reached, or even exceeded, the audience's expectations with the training session allows my co-workers and me to plan our educational material accordingly. If parents learned something new after attending my presentation, great; however, whether or not they learned something new and are going to put that new knowledge to use in the future is what we, in the public health field, are really trying to target.

Positive feedback from the surveys I hand out at the end of the trainings is good, but negative or constructive feedback is better. Allowing the target population to offer feedback creates new insight into one's program and gives the general public the opportunity to constructively criticize it, making way for changes that can serve as building blocks for a more effective health campaign.

Friday, December 3, 2010

Health Communication Evaluation

This week's presentation about evaluation of the Truth campaign made me think about my days as an internal evaluator for the California Smokers' Helpline. When I was an undergrad, I worked for 1-800-NO-BUTTS, which is partnered with UCSD. My job was to call clients who had used the program at different time intervals (I think 6 and 12 months after using the service) in order to assess their satisfaction with the program and their current behaviors. Boy, this was a memorable learning experience about behavior change! At times it was really hard to follow up with clients and obtain their feedback if they had not been successful in quitting smoking. For some people, talking about their relapse and/or inability to reach or maintain the target behavior change was difficult and uncomfortable. On the other hand, those clients who did agree to participate in the 15-minute telephone survey provided valuable feedback for the program. A small monetary incentive (I think it was a $10 or $15 check) was also mailed to clients who completed the evaluation. Back then, I knew that the information we were collecting was valuable for improving the program. But now I realize that the information we obtained was applicable to process, impact, and outcome evaluation of the program.