Handbook on the Theory and Practice of Program Evaluation

  • Elgar original reference

Edited by Albert N. Link and Nicholas S. Vonortas

As this volume demonstrates, a wide variety of methodologies exist for evaluating the objectives and outcomes of research and development programs in particular. These include surveys, statistical and econometric estimations, patent analyses, bibliometrics, scientometrics, network analyses, case studies, and historical tracings. Contributors divide these and other methods and applications into four categories – economic, non-economic, hybrid and data-driven – in order to discuss the many factors that affect the utility of each technique and how those factors shape the technological, economic and societal forecasts of the programs in question.

Chapter 6: Logic modeling: a tool for designing program evaluations

Gretchen B. Jordan

Extract

Logic modeling is a process, and a management and evaluation tool, for developing a succinct picture of a program’s goals and the strategies for achieving them within a broader context. The process makes explicit what is often implicit. A logic model is a plausible and sensible model of how the program will work under certain environmental conditions to solve identified problems (Bickman, 1987). The elements of the logic model are resources, activities, outputs, customers reached, short-, intermediate- and longer-term outcomes, and the relevant external contextual influences on the program (McLaughlin and Jordan, 1999, 2010). The process of developing the model builds a shared understanding of the program and performance expectations, as well as a short, clear “performance story” for those less familiar with the program, such as senior managers and Congress. The primary use of logic modeling is to design program evaluations and performance measurement systems. Managers and evaluators can use the logic model to identify appropriate measures and indicators of success to demonstrate progress from inputs to outcomes – that is, all along the program’s performance spectrum – and to identify researchable issues or questions that need to be evaluated. The process also informs program design or redesign.
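As an illustrative sketch only (the class, field names, and example program below are hypothetical, not drawn from the chapter), the elements the extract lists – resources through longer-term outcomes, plus external influences – can be arranged as a simple ordered structure, so that measures and indicators can be attached at each point along the performance spectrum:

```python
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    """Minimal logic-model skeleton: the stages of the performance
    spectrum, plus external contextual influences on the program.
    All names are illustrative, not the chapter's terminology."""
    resources: list
    activities: list
    outputs: list
    customers_reached: list
    short_term_outcomes: list
    intermediate_outcomes: list
    longer_term_outcomes: list
    external_influences: list = field(default_factory=list)

    def performance_spectrum(self):
        """Return the stages in causal order, from inputs to outcomes,
        as (stage name, elements) pairs for attaching indicators."""
        return [
            ("resources", self.resources),
            ("activities", self.activities),
            ("outputs", self.outputs),
            ("customers_reached", self.customers_reached),
            ("short_term_outcomes", self.short_term_outcomes),
            ("intermediate_outcomes", self.intermediate_outcomes),
            ("longer_term_outcomes", self.longer_term_outcomes),
        ]

# Hypothetical R&D program, invented purely for illustration.
model = LogicModel(
    resources=["grant funding", "lab staff"],
    activities=["applied research projects"],
    outputs=["publications", "prototypes"],
    customers_reached=["industry partners"],
    short_term_outcomes=["knowledge transfer"],
    intermediate_outcomes=["pilot adoption"],
    longer_term_outcomes=["commercialized technology"],
    external_influences=["market conditions"],
)

for stage, elements in model.performance_spectrum():
    print(stage, "->", elements)
```

Walking the spectrum in order mirrors how the chapter suggests the model is used: an evaluator would pick indicators at each stage to tell the program's performance story from inputs through outcomes.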
