# News
Citations in clinical practice guidelines do not always reflect true impact of research
Study identifies hidden and symbolic citation patterns that distort the measurement of scientific impact in health policies
Formal citations in official documents can exaggerate or downplay the true role of research in the formation of health policies | Image: Unsplash
In academia, counting citations has become synonymous with measuring impact. The logic is simple: if a study is cited, it has influenced other researchers. The same reasoning has been extended to public policy: references to scientific articles in clinical practice guidelines are now treated as evidence of science’s social impact.
However, a new study by an international group of researchers from Germany, Denmark, and the Netherlands suggests that this relationship is more complex than it seems. The study, published in the journal Research Policy in February, developed a natural language processing tool to analyze clinical practice guidelines on diabetes and identify their citations of scientific articles.
These guidelines are essential for evidence-based decision-making in the diagnosis, treatment, and management of diseases. They are usually written by expert groups at a national or international level, coordinated by organizations such as the World Health Organization (WHO).
Three citation types, three distinct functions
The researchers identified three ways that scientific articles are cited in clinical practice guidelines:
- Citation with textual credit: The article is formally referenced and its conclusions support the document’s recommendations. This is the approach most strongly associated with real impacts on clinical practice and patients’ lives.
- Token citation: The study is referenced but does not directly influence the content of the guidelines. It is often used to lend scientific credibility to the document or acknowledge influential figures in the field.
- Hidden citation: The research informs the recommendations but is not formally cited. This may occur because the work is considered established knowledge, because editorial limits cap the number of references, or because the studies have not yet been published.
By analyzing these patterns in detail, the team found that they relate to the characteristics of the articles and guidelines in different ways, suggesting distinct roles for these citations within policy documents.
“Institutional pressures shape not only the uptake of scientific knowledge but also its visibility—or lack thereof—in official documents,” say the study’s authors.
Credibility versus utility
Citations with textual credit were strongly associated with concrete impacts on patients’ lives and healthcare practices. The others, especially token citations, primarily reflected the academic impact of the articles, functioning as indicators of prestige rather than practical relevance.
Hidden citations, meanwhile, point to evidence of high clinical relevance: research that guides decisions without appearing in the references, either because it is treated as established knowledge or because of editorial limits on bibliographies. In some cases, they draw on studies that have not yet been published, or that have circulated only in a limited context but have already been incorporated into practice.
The authors also observed that basic research rarely appeared as hidden citations and tended to take longer to contribute to clinical recommendations. Basic research that is highly cited in the academic literature, however, did appear in the guidelines as token citations.
To better understand the results, the authors interviewed those responsible for producing high-level clinical guidelines related to diabetes. The accounts suggest that committees sometimes include token citations to meet expectations regarding comprehensiveness or to acknowledge influential figures, even when the cited study does not directly inform the recommendations.
Tight deadlines and delays in updating guidelines can also encourage the informal use of evidence yet to be published.
Implications for science funding
The findings have direct implications for evaluation and funding systems that measure impact based on citations and formal records of engagement.
“These criteria may fail to capture less visible forms of research use, such as hidden citations, and may overestimate the social impact of certain studies,” the researchers say.
Articles published in high-impact journals or authored by influential figures, for example, may be cited to lend credibility to a document without necessarily influencing its content—a bias that traditional impact indicators do not capture.
*
This article may be republished online under the CC-BY-NC-ND Creative Commons license.
The text must not be edited and the author(s) and source (Science Arena) must be credited.