1000 Reads!…?
ResearchGate recently notified me that I had reached a milestone: my research items had reached 1,000 reads. Setting aside for a moment the awkwardness of “research items,” which we can, I think, chalk up to ResearchGate making it possible to publish a variety of materials, I want to think about what “reads” means here, because in the age of citation metrics and their kin these kinds of quantifications may eventually play more of a role than any of us might wish.
As a member of a department personnel committee, I recently enjoyed reviewing the work of three terrific junior faculty members, all of whom, I will note here, deserve to be tenured and promoted without delay. All three offered not only impressive vitae and compelling portfolios of materials for personnel committee members to peruse, but also very polished slide decks, each of which featured various composite scores of semesterly SEIs (Student Evaluations of Instruction). All three are smart enough to know that SEIs are biased in a variety of well-documented ways and, when they consist of seven fairly subjective questions, offer little of statistical significance. They are also smart enough to know that the same institutions that don’t invest in faculty also tend to think things like SEIs are acceptable forms of assessment and even development. (The kind of professional development I have in mind would include not only funding for travel to conferences but also funding a teaching resource center staffed by people with a real focus on educating college-aged students, as well as funding teaching pairs and training for faculty on how to be better assessors of their own, and their colleagues’, approaches to teaching.)
So add up some scores, calculate an average, and include that in a graphic on a slide. Put another way, it doesn’t matter how meaningful, or meaningless, the number is, so long as it is a number.
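To make the arithmetic concrete, here is a minimal sketch of what such a composite amounts to. The response values and the 1-to-5 scale are hypothetical, not drawn from any actual SEI instrument:

```python
# Hypothetical SEI responses: seven Likert-scale questions (1-5),
# one list of scores per student respondent.
responses = [
    [4, 5, 3, 4, 4, 5, 3],
    [2, 3, 3, 2, 4, 3, 3],
    [5, 5, 4, 5, 5, 4, 5],
]

# The "composite score": flatten everything and take the mean,
# treating ordinal ratings as if they were interval data.
all_scores = [score for student in responses for score in student]
composite = sum(all_scores) / len(all_scores)

print(f"Composite SEI: {composite:.2f}")  # 3.86 -- a number, at least
```

The mean of ordinal responses from a small, self-selected sample is easy to compute and easy to chart; whether it measures teaching is another matter.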
And that probably reveals my attitude toward a milestone like 1,000 reads, because … is it? Is it really 1,000 reads? Or is it 1,000 downloads? Or 1,000 views of a page from which you could download the text, which is how Academia.edu seems to work? (More on that later.)
Google Scholar seems to hew to the more conventional approach to counting things by counting only citations. However, it is not entirely clear how it arrives at those counts: is it only from materials also deposited in/with Google Scholar? I ask because, to be honest, the portfolio of my materials with Google Scholar is not complete. Nor is it complete with ResearchGate or Academia.edu. Perhaps worse: the make-up of the portfolio on each is different, with Google Scholar having older materials, Academia having more conventional humanities materials, and ResearchGate getting more computational materials.
In all honesty, there was no principled division of materials, because I have never been quite sure which service was worth the effort of uploading everything to, and, really, my goal was to put everything in a GitHub repository. (I’m still working on this.)
In a better world, researchers could post their open access, or otherwise accessible, materials in a repository of their choosing, and these sites/services would simply index things there. That has not happened, and we seem to be getting farther away from it happening rather than nearer, precisely because metrics have become so central to the management of the modern university, perhaps because universities are less often managed by reasonably successful academics and more often managed by non-academics who feel the only way to be objective is to “run the numbers.”
It is perhaps true that a thoughtful use of the numbers might enable one to determine where to go next, but in most hands numbers can only tell you where you have been. Numbers can’t supply a vision, and they certainly can’t reveal how gaps can actually become places for innovation. And that’s why I worry about the variety of numbers currently available: “reads,” as ResearchGate terms it, doesn’t strike me as any more useful a gauge of engagement than “impressions,” a number entirely dreamed up by the advertising industry to rationalize its existence.
In contrast, the citation numbers for most scholars and scientists will be small: most work steadily, and step-wise, contributing to the greater edifices of scholarship and science. You pick up a citation here, a nice nod in your direction there. You get invited to speak here. You get asked to review a manuscript there. Do you dream of the breakthrough, the moment your work suddenly gets attention and then gets to benefit from the “rich get richer” dynamic of small-world networks? Yes, yes you do. But it isn’t what the day-to-day job looks like, and, for most of us, the satisfaction is in the work itself.
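A quick, back-of-the-envelope simulation of that dynamic shows how fast attention concentrates. This is a generic preferential-attachment sketch, not a model of any actual citation database:

```python
import random

random.seed(42)

# Each paper starts with one citation so it has some chance of being picked.
citations = [1] * 100  # 100 hypothetical papers

# Each new citation goes to a paper with probability proportional
# to the citations it already has: the "rich get richer."
for _ in range(1000):
    winner = random.choices(range(len(citations)), weights=citations)[0]
    citations[winner] += 1

top_ten = sum(sorted(citations, reverse=True)[:10])
print(f"Top 10 papers hold {top_ten / sum(citations):.0%} of all citations")
```

Even starting from perfect equality, a handful of papers ends up with an outsized share of the total, while most accumulate citations slowly, which matches the day-to-day experience described above.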
The draw of Google Scholar, ResearchGate, and Academia is that they make it fairly easy to establish an online portfolio and the latter two offer at least the semblance of interaction and/or community, with users able to follow each other and/or make requests of each other. And they also provide metrics, numbers, but that’s because they want you to be a part of their metrics, their numbers. Academia wants you eventually to invest in a premium membership, like LinkedIn, and Google wants to keep you in the GooglePlex. ResearchGate seems to be able to remain in the “if you get enough users the money question will answer itself” phase. Their “About Us” pages hints at perhaps eventually rolling out a job search service or … something.
Post Script: As soon as I posted this I saw that Costas Gabrielatos had pointed to a useful account of citation metrics: https://harzing.com/resources/publish-or-perish/manual/using/query-results/metrics.
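For readers who follow that link, the h-index is one of the metrics documented there. Here is a minimal sketch of its usual definition, the largest h such that h of your papers have at least h citations each; the citation counts below are hypothetical:

```python
def h_index(citation_counts):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation counts for a modest, steadily working scholar.
print(h_index([12, 9, 7, 5, 5, 2, 1, 0]))  # -> 5
```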