Resources for Evaluators
1.1 OVERVIEW
The Industry University Cooperative Research Center (I/UCRC) Program and model grew out of an experimental program (The Experimental R&D Incentives Program) which operated from 1972 to about 1979. Beginning in 1980-81, NSF began systematically building I/UCRCs based on the industrial consortia approach developed and validated through that program. A more detailed description of the history and evolution of the I/UCRC program and its evaluation can be found in the sources listed in the publications list.
The ongoing evaluation of the individual Centers and the I/UCRC Program began in 1982-83. The evaluation effort has several key features: coordinating mechanisms for the national data collection effort; an on-site evaluator at each Center; standardized data collection protocols and instruments; and longitudinal data collection, reporting, and analysis. Although the specifics of the evaluation have changed over time, these features remain the bedrock of the evaluation effort. Support for the evaluation effort is provided by the I/UCRC Program. NSF also provides funding for the other coordinating mechanisms and various supplemental studies.
There are four coordinating mechanisms for the evaluation effort. First, NSF I/UCRC staff provide overall guidance for the evaluation effort. Second, all evaluators together form a coordinating committee for the evaluation effort. This group meets twice each year (typically January and June) to share briefings on the findings of different components of the evaluation effort, exchange information, orient new evaluators, and vote on issues related to the evaluation effort. Third, evaluators are hired and paid through a third-party contractor. Finally, a team from North Carolina State University (NCSU) has been contracted to coordinate and support the national-level I/UCRC evaluation activities. The NCSU team also maintains and updates this Evaluator website and analyzes structural data collected from Directors.
The I/UCRC Evaluation project has four goals:
Primary
1) To help NSF and local Centers objectively evaluate their impact by documenting I/UCRC outcomes and accomplishments;
2) To promote continuous improvement by giving actionable, timely, data-based (formally collected and observational) feedback, analysis, and advice to NSF and local Centers;
3) To identify and communicate information about I/UCRC best practices to NSF and local Centers;
Secondary
4) To help promote a better understanding of industry-university-government research cooperation.
The remainder of this page focuses on the role and responsibilities of the on-site evaluator. As a reminder, Centers (and as a result, Evaluators) are funded under different solicitations, many of which are active at any given time. Please be aware of which solicitation(s) apply to you and your Centers.