
Scholarly Communications for New Faculty: ORCID and Metrics

Things to Know

Some strategies:

First, take some time to reflect on the story you WANT to tell about your impact in several years and on what 'impactful scholarship' means to you. Numbers are useful, but at worst they can mislead, and at best they tell only part of the story. Engaged students, professional and community connections, and deep, intentional growth can and should be part of your story. Numbers and narratives should work together to tell it, and the engagement strategies below should be used thoughtfully to enhance your scholarship and surface new connections in your work.

Your impact and story should be personal to you, your work, and your collaborations. That said, we recognize the pressure that pre-tenure faculty in particular are under, so be sure to discuss your thoughts with your department chair and other mentors.

Some general ways to increase the reach of your work include:

  • Share your metadata and/or accepted manuscript through Digital Commons or another non-profit disciplinary repository, making it Green OA to increase visibility and access.
  • Collaborate with the University Media Relations Team to see if a press release or similar media outreach is appropriate.
  • Consider opportunities to share aspects of your work at conferences, workshops, and similar venues shortly after publication.
  • Consider SEO tactics and how to use them in ways that support the discoverability of your work without compromising its quality. Well-chosen keywords and consistent phrasing in your title and abstract can enhance visibility.

Journal-, article-, and author-level metrics can help tell the story of your work's reach and impact, but they are often misused. The Metrics Toolkit can help you identify when to use different metrics.

Journal Metrics:

Journal Impact Factor: Calculated by dividing the number of citations a journal's articles received in a given year by the number of citable items the journal published in the previous two years. Originally used to inform library purchasing decisions, it is now treated by many researchers as a (controversial) measure of journal prestige.
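
To make the arithmetic concrete, here is a minimal Python sketch; the citation and article counts are entirely made up:

    # Hypothetical numbers only: the 2024 impact factor is the citations
    # received in 2024 by items published in 2022 and 2023, divided by the
    # number of citable items published in those two years.
    citations_2024 = {2022: 290, 2023: 310}   # citations in 2024 to each year
    citable_items = {2022: 130, 2023: 120}    # articles published each year

    jif_2024 = sum(citations_2024.values()) / sum(citable_items.values())
    print(f"2024 JIF: {jif_2024:.2f}")        # (290 + 310) / (130 + 120) = 2.40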

Eigenfactor: Similar to the Journal Impact Factor, but it excludes citations from a journal to itself (to control for self-citation within the journal) and weights each citation by the influence of the citing journal, in the spirit of Google's PageRank.
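
For intuition, here is a toy Python sketch of that PageRank-style idea; it is NOT the published Eigenfactor algorithm, and the citation matrix is hypothetical:

    journals = ["A", "B", "C"]
    cites = [            # cites[i][j] = citations from journal i to journal j
        [50, 30, 20],
        [10, 40, 25],
        [5, 15, 60],
    ]

    n = len(journals)
    for i in range(n):
        cites[i][i] = 0  # drop journal self-citations entirely

    # Turn each journal's outgoing citations into fractional shares.
    shares = [[c / sum(row) for c in row] for row in cites]

    # Power iteration: influence flows from citing journals to cited journals,
    # so journals cited by influential journals score higher.
    scores = [1.0 / n] * n
    for _ in range(100):
        scores = [sum(shares[i][j] * scores[i] for i in range(n))
                  for j in range(n)]

    for name, s in zip(journals, scores):
        print(f"Journal {name}: {s:.3f}")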

Source Normalized Impact per Paper (SNIP): Attempts to provide a more contextualized citation impact by comparing citation counts across disciplines and fields, accounting for variation in disciplinary citation practices and for the bias toward 'general' journals over specialized or clinical ones.

Article Metrics:

Citations: The number of times a particular work has been cited. Usually reliant on indexing and citation tracking.

Downloads: The number of times a work has been downloaded from a particular database or repository.

Mentions and alternative metrics: Calculated by companies, often using proprietary tools, these attempt to estimate a work's impact through a combination of media mentions, shares, saves, and downloads. The Altmetric browser plug-in and the PlumX widgets embedded in many journals and repositories are two widely used examples.

Author Metrics:

h-index: The h-index is the largest number h such that a researcher has h works with at least h citations each. For example, if a researcher has five publications A, B, C, D, and E with 10, 8, 5, 4, and 3 citations, respectively, the h-index is 4 because the 4th most-cited publication has 4 citations while the 5th has only 3. In contrast, if the same publications have 25, 8, 5, 3, and 3 citations, the h-index is 3 because the 4th paper has only 3 citations.
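
A minimal Python sketch that reproduces the worked example above:

    def h_index(citations):
        """Largest h such that h papers have at least h citations each."""
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, count in enumerate(ranked, start=1):
            if count >= rank:
                h = rank
            else:
                break
        return h

    print(h_index([10, 8, 5, 4, 3]))  # 4: the 4th-ranked paper has 4 citations
    print(h_index([25, 8, 5, 3, 3]))  # 3: the 4th-ranked paper has only 3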

g-index: The g-index was created as an alternative to the h-index, specifically to give very highly cited works more weight: it is the largest number g such that a researcher's g most-cited papers have together received at least g² citations.

i10-index: Introduced by Google Scholar, the i10-index is the number of an author's publications that have been cited at least 10 times.
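
Minimal Python sketches of both indexes, reusing the example citation counts from the h-index discussion above:

    def g_index(citations):
        """Largest g such that the top g papers together have >= g**2 citations."""
        ranked = sorted(citations, reverse=True)
        total, g = 0, 0
        for rank, count in enumerate(ranked, start=1):
            total += count
            if total >= rank * rank:
                g = rank
        return g

    def i10_index(citations):
        """Number of papers cited at least 10 times (Google Scholar's measure)."""
        return sum(1 for c in citations if c >= 10)

    print(g_index([10, 8, 5, 4, 3]))    # 5: 10+8+5+4+3 = 30 >= 5**2 = 25
    print(i10_index([10, 8, 5, 4, 3]))  # 1: only one paper has 10+ citations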

How can researchers and institutions use metrics responsibly? One of the guiding documents on this question is the Leiden Manifesto, an article published in Nature in 2015 (cited in full below). Its 10 points are summarized here:

  1. Quantitative evaluation should support qualitative, expert assessment.
  2. Measure performance against the research missions of the institution, group, or researcher.
  3. Protect excellence in locally relevant research.
  4. Keep data collection and analytical processes open, transparent, and simple.
  5. Allow those evaluated to verify data and analysis.
  6. Account for variation by field in publication and citation practices.
  7. Base assessment of individual researchers on a qualitative judgment of their portfolio.
  8. Avoid misplaced concreteness and false precision.
  9. Recognize the systemic effects of assessment and indicators.
  10. Scrutinize indicators regularly and update them.

Hicks, D., Wouters, P., Waltman, L., de Rijcke, S., & Rafols, I. (2015). Bibliometrics: The Leiden Manifesto for research metrics. Nature, 520(7548), 429–431. https://doi.org/10.1038/520429a

Video: "What is ORCID?" from ORCID on Vimeo.

ORCID provides a unique, persistent identifier (an ORCID iD) that helps track, share, and centralize your scholarship in a single profile. You can include your ORCID iD on your webpage, when you submit publications, when you apply for grants, and in any research workflow to ensure you get credit for your work. Used by many grant organizations and publishers, ORCID iDs cut down on confusion between similarly named authors, save you time (instead of managing multiple author profiles, ORCID can link to and be populated from ResearcherID, your Scopus Author Profile, LinkedIn, etc.), and stay with you throughout your career.
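
If you want to pull your public record programmatically, ORCID offers a public read API. Here is a minimal Python sketch against version 3.0, using ORCID's long-standing example iD (swap in your own); the JSON field names are assumed from the v3.0 schema:

    import json
    import urllib.request

    orcid_id = "0000-0002-1825-0097"   # ORCID's documented example record
    url = f"https://pub.orcid.org/v3.0/{orcid_id}/record"
    req = urllib.request.Request(url, headers={"Accept": "application/json"})

    with urllib.request.urlopen(req) as resp:
        record = json.load(resp)

    # Personal details are nested under "person" in the v3.0 record.
    name = record["person"]["name"]
    print(name["given-names"]["value"], name["family-name"]["value"])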

 

Possible action item: Create an ORCID iD or review/update your existing profile.

 

Links and Resources