
Sunday, August 25, 2013

Not Waving but Drowning

A close friend and colleague from one of the great lands of formerly socialist Europe gave me a good-natured (metaphorical) jab in the ribs, having read my previous post on claiming one's unique research identity, suggesting that perhaps I am spending too much energy on the p.r. aspect of my research output. I will be among the first to own up to my share of narcissism, endemic to the academic endeavor (as another colleague once put it: "scholars live in a universe of one"), but I believe strongly that researchers need to become vigilant about the visibility of their research and about how impact measurement is used by those who hold power in the academy. As the great debate about the value of research and higher education to society grows, the call for "accountability" and measurement of impact follows suit.

I am agnostic about the obsession with measurement--in many, if not most, cases the real impact of our work may be best measured when we are long retired or have joined the emeriti lounge in the great beyond. It may be that there is simply no way to measure our impact properly--perhaps our work (published or otherwise) simply inspired others to do something useful, but they never published a word about it. How do you measure that? But as long as the measurement game is being played, researchers must know what it is about and control what they can in the game.
Currently, the prevailing paradigm calls for gauging journal impact factors (a metric that is increasingly called into question) and tracking citations, the latter of which are gathered electronically. For humanists (like me), whose work appears in many types of publications besides electronically available journals and monographs, citations to that work are invisible to the machines that gather the relevant data. And to the extent that the landscape is shifting towards electronic publication, many of our high-quality venues still fly under the radar when compared with the Big Important (i.e., profitable) Journals in the STEM disciplines. Fortunately, some are thinking through this problem and raising awareness about how to measure research impact in a more nuanced and responsible way (PDF), taking into account attention to research products and scholarly communication in ways other than journal citation alone.

Here's my recipe, subject to refinement (salt and pepper to taste), for controlling one's research visibility, which I recommend especially to junior colleagues who must struggle to keep their foothold in the profession:

  • Know your rights with regard to copyright and make sure you keep as many of your rights as you can. See Timothy K. Armstrong, 2009: An Introduction to Publication Agreements for Authors (PDF).
  • Work with your institution's open digital archive (the one at the University of Kansas looks like this), which can curate your work, make it openly available, and track usage of it. If you do not work at an institution that has such a digital archive, consider a service like Figshare.
  • Register with ORCID and claim all of your electronically visible research, differentiating it from publications by others with the same or similar names.
  • Claim an Academia.edu page and link from it to your papers in your institution's digital archive. There are many advantages to having a profile at this site beyond research visibility, including better connection to the global community of scholars in your areas of interest.
  • Claim your Google Scholar page. Edit it to weed out duplicates and works mistakenly attributed to you. Keep track of your h-index (the largest number h such that h of your works have each been cited at least h times; see the sketch after this list).
  • Once you've taken control of your research visibility, help other colleagues along. Consider becoming involved in open access locally, nationally, or internationally, perhaps working towards a campus-wide open access policy, as we have at my university.
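
For the curious, here is a minimal sketch (in Python) of the h-index calculation mentioned in the Google Scholar item above. The citation counts in the example are hypothetical, stand-ins for the numbers you would read off your own Google Scholar profile.

    def h_index(citation_counts):
        """Return the largest h such that h papers have at least h citations each."""
        counts = sorted(citation_counts, reverse=True)
        h = 0
        for rank, cites in enumerate(counts, start=1):
            if cites >= rank:
                h = rank  # this paper has at least as many citations as its rank
            else:
                break
        return h

    # Hypothetical example: five papers cited 10, 8, 5, 4, and 3 times.
    # Four papers have at least 4 citations each, but not five with 5, so h = 4.
    print(h_index([10, 8, 5, 4, 3]))  # -> 4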

