Library:Scholarly Communications/Citation Analysis


This guide brings together information on metrics and tools that are commonly used in the process of citation analysis.

It is important to note that judging the quality of a publication, whether an entire journal or specific article, is a fairly subjective process. Use of the journal impact factor as a measure of the impact of individual research publications has been widely criticized, most notably in the recent San Francisco Declaration on Research Assessment. For more information, see Criticism of Impact Factors.


Metrics

Traditional scholarly metrics count publications and citations in journals, books, etc. Altmetrics are newer metrics that count downloads, views, comments on scholarly websites and blogs, and other online activity.

Scholarly Metrics

[Figure: h-index determined from a plot of papers ranked by decreasing citation counts]

Individual author/article citation counts Citation tracking counts the number of times an individual article or a particular author has been cited by other scholars. Higher counts are associated with greater impact and influence. Article- and author-level citation counts are available in Web of Science, Google Scholar, PLoS, BioMed Central, and numerous discipline-specific databases.

H-index A measure of author influence: an author's h-index is the largest number h such that h of their papers have each been cited at least h times. For instance, an author with an h-index of 10 has published 10 papers that have each been cited at least 10 times. The h-index is the first (and best-known) of many author-level metrics. Available on Web of Science, and on Google Scholar if the scholar has created a user profile.
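
To make the definition concrete, here is a minimal Python sketch (not taken from any particular tool) that computes an h-index from a list of per-paper citation counts:

    def h_index(citations):
        """Return the largest h such that h papers have >= h citations each."""
        # Sort citation counts in descending order, then find the last rank
        # at which the count is still at least the rank.
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, count in enumerate(ranked, start=1):
            if count >= rank:
                h = rank
            else:
                break
        return h

    # Example: five papers cited 10, 8, 5, 4, and 3 times give an h-index of 4
    print(h_index([10, 8, 5, 4, 3]))  # 4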

Journal Impact Factor (used by ISI Web of Science) A Journal Impact Factor (JIF) is calculated by dividing the number of citations a journal receives in a given year to items it published in the two previous years by the number of citable items it published in those two years. The JIF is used to indicate the relative importance of a journal within a given field, and a higher JIF is seen as lending more "authority" or "weight" to a researcher's work.
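
A hypothetical worked example in Python (all numbers invented for illustration):

    # Hypothetical figures for a journal's 2013 JIF.
    citations_in_2013 = 200  # 2013 citations to items published in 2011-2012
    citable_items = 100      # citable items the journal published in 2011-2012
    jif_2013 = citations_in_2013 / citable_items
    print(jif_2013)  # 2.0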

Eigenfactor A non-commercial alternative to ISI Web of Science's JIF. The Eigenfactor algorithm also takes into account the significance of the citations coming into a journal so that citations from important journals are weighted more highly.
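
The following toy Python sketch illustrates the recursive weighting idea with a simplified eigenvector-centrality (PageRank-style) iteration on an invented three-journal citation matrix. It illustrates the concept only, not the actual Eigenfactor algorithm, which among other things adjusts for article counts and excludes journal self-citations:

    import numpy as np

    # Toy citation matrix (hypothetical numbers): entry [i, j] is the
    # number of citations from journal j to journal i.
    C = np.array([[0.0, 5.0, 2.0],
                  [3.0, 0.0, 1.0],
                  [1.0, 2.0, 0.0]])

    # Column-normalize so each journal distributes one unit of influence.
    M = C / C.sum(axis=0)

    # Power iteration: a journal scores highly when highly scored
    # journals cite it.
    scores = np.full(3, 1.0 / 3.0)
    for _ in range(100):
        scores = M @ scores
    scores /= scores.sum()
    print(scores.round(3))  # higher score = cited by more influential journals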

Acceptance Rate Acceptance rate compares the number of articles selected for publication to the number of articles submitted. For example, the very prestigious journal Nature publishes fewer than 10% of the articles submitted. In general, a low acceptance rate is associated with high prestige. Not all journals publish their acceptance rates, but in many cases it is included in the "instructions to authors" section of the journal.
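
The arithmetic is simple division; a hypothetical example in Python (numbers invented for illustration):

    # Hypothetical journal: 1,200 submissions and 90 acceptances in a year.
    submitted = 1200
    accepted = 90
    print(f"{accepted / submitted:.1%}")  # 7.5%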


Concerns Use of the Journal Impact Factor as "the primary parameter with which to compare the scientific output of individuals and institutions" has been criticized by a number of scholars and organizations, most notably in the San Francisco Declaration on Research Assessment (see Criticism of Impact Factors below).

Most scholarly metric tools include only a small fraction of the citations that appear in books or book chapters, so the work of humanities and social sciences scholars is not accurately reflected.

Alternative Metrics

Download and view counts These article-level metrics count the number of times an article has been downloaded and the number of times it has been viewed. Download counts can demonstrate use of articles for purposes other than citation, such as education, background research, presentations, or use in grey literature such as white papers and reports. Download and view counts are available for many, but certainly not all, journals.

Counting Citations in non-scholarly media Altmetric.com tracks tweets, blog posts, news stories and other content that mention scholarly articles.

Counting Citations to Additional Publication Types For example, impactstory.org includes datasets, software, slide decks, figures and posters.

Authority scores (e.g. Klout) Authority scores attempt to measure how many people you reach through your own contacts, their amplified reach (e.g. how many people follow the people who follow you on Twitter), and how often your content elicits a response (i.e. a comment or reply).
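
As a rough illustration of the "amplified reach" idea, the toy Python sketch below counts direct followers and the distinct followers of those followers in an invented follower graph. It illustrates the concept only and is not Klout's actual scoring method:

    # Hypothetical follower graph: account -> set of accounts that follow it.
    followers = {
        "you": {"ann", "bob"},
        "ann": {"bob", "cal", "dee"},
        "bob": {"eve"},
    }

    direct = followers["you"]
    # Amplified reach: distinct accounts that follow your followers.
    amplified = set().union(*(followers.get(f, set()) for f in direct)) - direct - {"you"}
    print(len(direct), len(amplified))  # 2 direct followers, 3 amplified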

Tools

While many publishers and scholarly databases support searching for cited references, the feature is generally limited to articles within their own database. These academic and altmetric tools are commonly used for more thorough citation analysis.

Academic tools

ISI Web of Science Core Collection

ISI Journal Citation Reports

  • One of the most standard commercial tools for finding journal impact factors, article citation counts, and other author metrics
  • Favours sciences over social sciences and humanities
  • Journal-based (i.e. uses journal articles as a source for citations; books, patents, etc. are included only if they have been cited by a journal article)

Video example of creating citation reports for journals, articles, authors, etc. via the Web of Science Core Collection

Video example of creating citation reports for journals via the separate Journal Citation Reports database.

SciVal

  • Large database of bibliographic citations, plus tools for comparing institutions, departments, research groups and individual scholars.

Google Scholar Citations (sample profile: UBC Math Professor Nassif Ghoussoub)

  • Video Tutorial of Google Scholar Citations
  • Allows authors to curate their own list of publications
  • Calculates citation counts, h-index, and i10-index (the number of papers with at least 10 citations; see the sketch after this list)
  • Tracks conference proceedings, chapters in edited volumes, and books, but not consistently
  • Better for humanities and many social sciences than ISI Web of Science or Scopus/SciVal
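
A minimal Python sketch of the i10-index mentioned in the list above:

    def i10_index(citations):
        """Number of papers with at least 10 citations each."""
        return sum(1 for c in citations if c >= 10)

    print(i10_index([25, 12, 10, 9, 3]))  # 3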

Google Books

  • Tracks references in books to books, book chapters (and journal articles)
  • Finds unique citations, i.e. not included in Google Scholar, Web of Science or Scopus/SciVal
  • Books-to-journal citation ratios, roughly: Philosophy 4:1 | Sociology, Fine Arts 3:1 | Psychology 1.5:1 | Physics 0.001:1

Publish or Perish

  • Free desktop software that retrieves citation data (primarily from Google Scholar) and computes author metrics such as the h-index

Altmetrics

ImpactStory

  • Set up your own researcher profile and share all the "diverse impacts" your research has made, from journal articles to data sets and blog posts. (Free 30-day trial; USD 60/year.)

Altmetric.com

  • "Captures hundreds of thousands of tweets, blog posts, news stories and other content that mention scholarly articles." Add the free Altmetric Bookmarklet to your bookmarks toolbar to see the metrics for any article with a DOI, as well as items in digital repositories.
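
Altmetric's data can also be retrieved programmatically. The sketch below assumes Altmetric's public REST endpoint (api.altmetric.com/v1/doi/<DOI>) and uses illustrative response field names; the DOI shown is a placeholder, and the current API documentation should be checked before relying on specific fields:

    import json
    import urllib.error
    import urllib.request

    doi = "10.1371/journal.pone.0000000"  # placeholder; substitute a real DOI
    url = f"https://api.altmetric.com/v1/doi/{doi}"

    try:
        with urllib.request.urlopen(url) as resp:
            data = json.load(resp)
        # Field names below are illustrative; consult the API documentation.
        print("Altmetric score:", data.get("score"))
        print("Tweets:", data.get("cited_by_tweeters_count"))
    except urllib.error.HTTPError as err:
        # The API answers 404 for articles with no recorded mentions.
        print("No Altmetric data found (HTTP", err.code, ")")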

Other Tools

Top ten indices measuring scholarly impact in the digital age, from The Search Principle, April 2012

Altmetrics

What are Altmetrics?

Altmetrics are an emerging way to track the social impact of scholarly research. As opposed to traditional article-level metrics, which measure only formal journal citations, altmetrics attempt to measure the visibility and impact of research by looking at emerging data sources, such as:

  • Download counts from online journals
  • Page views
  • Mentions in news reports
  • Mentions on social media
  • Mentions in blogs, or on Wikipedia
  • Reader activity on reference managers, such as Mendeley or CiteULike

Altmetrics are not the same thing as article-level metrics: both attempt to measure the impact of research at an article level, but while article-level metrics rely on traditional measures such as journal citations, altmetrics attempt to take new and emerging types of scholarly sharing and communication into account.

Both sides of the debate:

Like other methods for measuring research impact, there are both benefits and limitations to altmetrics.

Benefits:

  • Provide different markers of an article’s reach, beyond just formal citations.
  • May be used to uncover the impact of just-published research. Measures of journal citations may take months to accumulate, but altmetrics data begins to appear immediately upon publication.
  • May be used in parallel with traditional impact factors and citation counts to add a more nuanced, qualitative account of impact.
  • Encourage a focus on public engagement
  • Measure all types of scholarly products, such as openly shared data, software, etc.

Limitations:

  • Current data sources cannot distinguish the quality or intent of the feedback they count. For instance, a research paper that is mentioned many times on Twitter is not necessarily high-quality research.
  • Altmetrics may be susceptible to "gaming", or attempts to artificially inflate the appearance of impact. Note, however, that this issue of manipulation affects many types of metrics.
  • Preferred platforms for scholarly sharing emerge and wane in their importance regularly. The data sources measured for altmetrics must be regularly evaluated for relevance and normalized.

Selected Altmetrics Tools

1. PLOS article-level metrics
PLOS Article-level metrics provide information on page views, citations, downloads, and mentions on various social media sites. See the Metrics information for the article Who Shares? Who Doesn't? Factors Associated with Openly Archiving Raw Research Data (2013) for a sample.

2. ImpactStory
ImpactStory is an open-source, web-based tool that measures the impact of a variety of research products (including journal articles, datasets, and software, among others) by harvesting data from a variety of online sources (see their FAQ “which metrics are measured?” for a comprehensive list). See a Sample Report.

3. Altmetric
Altmetric seeks to "track and analyze the online activity around scholarly literature". The Altmetric Bookmarklet is used by publishers such as Nature and in the PLOS Altmetrics Collection (see a collection of altmetrics demos at the PLoS Impact Explorer).

4. Plum Analytics
Plum Analytics is still in development. Users can sign up for the beta version of the service, and the site provides considerable detail on how it gathers data.

5. figshare
figshare is a repository where users can make their research available in a manner that allows it to be easily cited, shared, and discovered. It tracks views and shares on a few social media platforms, and plans to implement citation tracking soon.

Find Out More

Communities:

  • #altmetrics on Twitter: Use the #altmetrics hashtag on Twitter to follow the conversation.
  • Mendeley Altmetrics Group: A group that aims to share literature and discuss new approaches to the assessment of scholarly impact based on new metrics.

Further Reading:

Criticism of Impact Factors


Criticism of research assessment methods has been aimed at the practice of using the Journal Impact Factor (JIF) as a measure of the impact of individual research publications. This criticism has most notably been voiced in the San Francisco Declaration on Research Assessment (DORA), which was initiated in December 2012 by the American Society for Cell Biology (ASCB) and a group of scholarly journal editors.

The JIF was originally introduced as a metric to aid libraries in collection development: by dividing the total number of citations received by the number of articles published in the previous two years, the JIF gives the average number of citations received per article. However, this metric has also come to be used as a tool for assessing the impact of individual research papers, a use for which it was never intended. Thomson Reuters has also addressed this, releasing a video of David Pendlebury on why the Journal Impact Factor was originally created and how it should be used, as well as a statement in response to the San Francisco Declaration on Research Assessment.

As a variety of studies have pointed out, the journal impact factor judges only how much a journal is cited on average, a number that bears little relation to the individual papers within the journal. According to a 2012 study, the strength of the relationship between citation rates and the JIF has been weakening for the last 40 years [1].

The San Francisco Declaration on Research Assessment states that “There is a pressing need to improve the ways in which the output of scientific research is evaluated by funding agencies, academic institutions, and other parties”.

It also provides recommendations for funding agencies, institutions, publishers, organizations that supply metrics, and researchers, including:

  • To avoid using journal-based metrics to assess an individual scientist in hiring, promotion, and funding decisions.
  • To consider the value and importance of all research outputs, including datasets and software, in addition to scholarly publications.
  • To consider a broad range of impact measures, including qualitative indicators of research impact, such as influence on policy and practice.
  • To challenge research assessment practices that rely inappropriately on Journal Impact Factors and promote and teach best practice that focuses on the value and influence of specific research outputs.

To see the full list of recommendations, find out more, or add an individual or organizational signature to the list of DORA supporters, visit the San Francisco Declaration on Research Assessment page.



  1. Lozano (2012). More information about this study appears on the London School of Economics blog.