Leading Successful Programs: Using Evidence to Assess Effectiveness

Program Session(s):
April 29, 2018 - May 4, 2018

Application Deadline(s):
February 9, 2018

Program Fee: $8,500

Program fee includes: tuition, housing, curricular materials, and most meals.

This program aligns with the Executive Core Qualifications (ECQs).

This program qualifies for an Executive Certificate.

Faculty Chairs: Dan Levy, Julie Wilson

Program Director: Anna Shanley

Formerly named Using Evidence to Improve Policy and Programs.  

Managers of government agencies and programs are under increasing pressure to provide evidence of their programs' effectiveness. But what constitutes reliable and valid evidence of effectiveness? How should an organization generate evidence about the effectiveness of its programs? What data should organizations collect, and how should managers use those data? How does one assess and apply evidence that others have generated about what works? Answering these questions can help managers lead their organizations to design policy and implement programs more effectively.

Leading Successful Programs: Using Evidence to Assess Effectiveness, one of the newest Executive Education programs at Harvard University's John F. Kennedy School of Government, addresses the challenges managers face in identifying useful strategies for assessing and improving program effectiveness. The program will help managers become better commissioners and consumers of the evidence they need to make sound decisions for their organizations.

The program will explore:
  • What are the big questions managers need to ask about the effectiveness of programs in their organization?
  • How should managers decide what evidence needs to be gathered?
  • What kinds of evaluations and other forms of assessment need to be conducted?
  • Since evaluations can be expensive and time-consuming, how should managers decide which programs to evaluate?
  • What are the key methods for evaluating the impact of a program, and when should each be used?
  • What role do randomized experiments play in evaluating the impact of a program?
  • What data should be collected and when?
  • Does it all have to be numbers? How can managers make sense of mixed-methods evaluations and integrate quantitative and qualitative information to design and implement better programs?
The program considers a wide range of evaluation types (including design, process, and impact evaluations) and evaluation methods. The curriculum pays special attention to using evaluation results and other types of evidence to help managers make better decisions about their programs.


Copyright © 2016 The President and Fellows of Harvard College