The American Institute of Biological Sciences (AIBS) has just published a literature review summarizing empirical tests of the validity of peer review decisions, as assessed by impact measures of investigator output.
These results were published as part of a research topic for the journal Frontiers in Research Metrics and Analytics. Fewer than half of the studies were US-based, and the majority focused on bibliometric measures of applicant or project success. Only 25% used more than one type of metric.
Nevertheless, the vast majority of studies provided evidence for at least some predictive validity of review decisions, although many detected sizable type I and type II errors. Moreover, many of the observed effects were small, and several studies suggest that review has only a coarse power to discriminate: it can separate poor proposals from better ones, but not distinguish among the top-tier proposals or applicants.
The article can be accessed here.