Handbook on the Theory and Practice of Program Evaluation

Edited by Albert N. Link and Nicholas S. Vonortas

As this volume demonstrates, a wide variety of methodologies exist to evaluate, in particular, the objectives and outcomes of research and development programs. These include surveys, statistical and econometric estimations, patent analyses, bibliometrics, scientometrics, network analyses, case studies, and historical tracings. The contributors divide these and other methods and applications into four categories – economic, non-economic, hybrid and data-driven – in order to discuss the many factors that affect the utility of each technique and how those factors shape the technological, economic and societal forecasts of the programs in question.
Chapter 10: Evaluating cooperative research centers: a strategy for assessing proximal and distal outcomes and associated economic impacts

Drew Rivers and Denis O. Gray


This chapter describes a methodology, refined during a recent feasibility study, for capturing economic impact data from private sector firms involved in cooperative research with universities. We focus on a specific science, technology and innovation (STI) programmatic initiative – cooperative research centers (CRCs) – and explore a possible approach for overcoming barriers to gathering return on investment and other financial data from CRC member firms. Building on the foundation laid by the ongoing evaluation of the National Science Foundation (NSF) Industry/University Cooperative Research Centers (IUCRC) program, our study had two objectives: first, to assess the existing program evaluation protocols with regard to capturing estimates of economic outcomes generally; and second, to pilot test an interview-based approach for gathering credible and persuasive data on more distal economic outcomes. We first analyzed archival data sources and concluded that the existing IUCRC evaluation strategy does a good job of addressing the program’s explicit partnership and capacity-building objectives, and of measuring outputs and proximal outcomes, such as publications and patent applications based on center research. While these instruments do capture economic outcomes to some extent (for example, cost savings or cost avoidances), they do not consistently capture more distal (and likely more significant) economic outcomes derived from process, product and service innovations. Most respondents to these existing evaluation instruments appeared either unable or unwilling to disclose these types of outcomes.