Every year the Association for Public Policy Analysis and Management awards the Peter H. Rossi Award for Contributions to the Theory or Practice of Program Evaluation. The 2010 recipient was Howard S. Bloom, the Chief Social Scientist at MDRC. MDRC is a social policy research organization that evaluates various policies and programs throughout the US. As part of his acceptance speech Howard S. Bloom shared nine important lessons he learned about evaluation research during his career in academia and at MDRC. I think these lessons are interesting and important to keep in mind as program evaluation becomes more and more important to non-profits and their funders.
Howard S. Bloom’s nine lessons about doing evaluation research include:
- The three keys to success are “design, design, design” (just like “location, location, location” in real estate). No form of statistical analysis can fully rescue a weak research design.
- You cannot get the “right answer” unless you pose the right question. Disagreements among capable researchers often reflect differences in the research questions that motivate them (explicitly or implicitly). Thus, it is well worth spending the time needed to clearly articulate your research questions, being as specific as possible about the intervention, population, and outcomes of interest.
- A “fair test” of an intervention requires that there be a meaningful treatment contrast (the difference in services received by treatment group and control or comparison group members). This condition has two subparts: (1) the intervention must be implemented properly, and (2) services to control or comparison group members cannot be too substantial.
- The most credible evidence is that which is based on assumptions that are clear and convincing. Thus, researchers should put all of their cards on the table when explaining what they did, what they found, and what they think it means.
- The old saying, “keep it simple, stupid,” is crucial for meeting the preceding condition. This is especially important for evaluation research, because no matter how simple a research design is, the resulting study will be more complicated because of its interaction with the real world.
- You probably don’t fully understand something if you cannot explain it. The best way to avoid this trap is to teach everyone who is willing to listen about what you are trying to do and how you are trying to do it.
- Thoughtful and constructive feedback is a researcher’s best friend. Hence, you should seek review early and often.
- Evaluation research is a team sport. It is impossible to overstate the importance of complementary policy, programmatic, data, research, and dissemination skills on an evaluation team.
- The best way to change how evaluation researchers do their work is to change how they are taught to think about it. Thus, methodological training is essential both during graduate school and throughout one’s career.
To read the entire speech, click here.
To learn more about MDRC, click here.
To learn more about the Association for Public Policy Analysis and Management, click here.