Quantifying Scholarship
A controversy emerged earlier this week regarding the use of the F-word in an academic paper. The controversy over the paper itself actually played out some time back; the new dispute is over whether a provocatively titled paper, "Fuck," by Ohio State law professor Christopher Fairman, 28 Cardozo Law Review 1171 (2007), should be disqualified when counting the number of downloads for which a law school should be credited. Say what?
The Social Science Research Network (SSRN) is a central repository into which scholars in many fields place their written work, where other scholars can then easily search for working papers on particular subjects and download papers that are interesting and/or potentially useful in one's own scholarly work. This provides a nearly ideal internet-era medium through which scholars can interact with each other at the draft stage of their writing, soliciting feedback from both supportive and skeptical critics and basically enhancing the scholarly process in every way. I don't mean to be glib. This is truly a wonderful resource.
The problem comes from our seemingly irresistible desire to count and rank things. (Rankings can be fun, of course. See "High Fidelity" for a wonderful portrait of dysfunctional men who have a "Top 5 All Time" list for everything: breakups with girlfriends, songs about death, etc.) SSRN downloads have become a method by which individual scholars and their employers are evaluated: a large number of downloads of one's papers moves a person up the rankings, and the presence of many heavily downloaded professors on a law school's faculty moves that law school up in such rankings. The theory is evidently that multiple downloads imply "importance" or "influence" or something else that makes a scholar appear to be having a useful professional impact.
One such use of SSRN to rank faculties was Brian Leiter's "Most Downloaded Law Faculties, 2006," posted on March 6 of this year, in which Leiter compiled rankings of the top 15 law faculties based both on total downloads of all papers by a school's faculty and on downloads per paper for that faculty. The immediate controversy has revolved around Leiter's decision to exclude Ohio State and Emory from the rankings because their high download numbers are overwhelmingly the result of Fairman's one famous paper. (Fairman is visiting at Emory.) Paul Caron reports on the controversy at the TaxProf blog. For what it's worth, I agree with Ann Bartow, who criticized Leiter's decision on the Feminist Law Professors blog. She points out, among other things, that Fairman's paper is a very serious piece of scholarship about the impact of language on society, and she notes the irony that Leiter's decision reflects one of the very issues that Fairman discusses. Fairman has also posted a commentary about the controversy, which one can download from SSRN. (I love irony.)
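To see why one blockbuster paper creates the problem Leiter confronted, consider a minimal sketch of his two measures. The schools and download counts below are entirely invented for illustration; SSRN publishes the real numbers:

```python
# Hypothetical faculties with invented per-paper download counts
# (illustration only -- not actual SSRN data).
faculties = {
    "School A": [900, 850, 700, 650, 400],    # broad, even strength
    "School B": [25000, 300, 250, 200, 150],  # one blockbuster paper
    "School C": [1200, 1100, 1000, 950, 900],
}

def total_downloads(papers):
    """First measure: sum of downloads across all of a faculty's papers."""
    return sum(papers)

def downloads_per_paper(papers):
    """Second measure: average downloads per paper."""
    return sum(papers) / len(papers)

for metric in (total_downloads, downloads_per_paper):
    ranking = sorted(faculties, key=lambda school: metric(faculties[school]),
                     reverse=True)
    print(metric.__name__, "->", ranking)

# School B tops both rankings; drop its single outlier paper and it
# falls to last place on both measures.
```

The per-paper measure presumably exists to correct for faculty size, but as the sketch shows, a single outlier swamps that measure just as thoroughly as it swamps the raw total.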
The more interesting issue to me, though, is the notion of using SSRN downloads as a measure of scholarship in the first place. Others have offered varying critiques, including Leiter himself, who has pointed out that SSRN tends to be used more by scholars writing in certain areas of law than in others, and who has suggested various ways in which the results can be gamed or at least skewed. All true, I have no doubt. Even more fundamentally, the problem with download counts as a measure of importance (or quality, or whatever) is essentially identical to the older practice of counting citations to assess scholarly impact. If Mike Dorf, say, writes an article that everyone thereafter cites, that must mean it's important, right? Mike (whose articles are heavily cited, for all the right reasons) will be the first to tell you that that is simply not a valid conclusion. I have seen plenty of papers in economics and in law that are downloaded heavily, apparently because they're so silly or so dangerous that they need to be debunked and exposed to ridicule. That's one type of "importance," I suppose, but it seems odd to give someone credit for saying something that other people find ridiculous or scary. It is odder still because the only people who are likely to be able to say something silly yet still be cited, rather than simply ignored and dismissed, are scholars who already have big reputations. So scholars with big names at big schools can enhance their reputations by being cited (and now downloaded) for writing something absurd. Put more simply, quantity measures are not quality measures.
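A toy illustration of that last point: the counts below are invented, but they show how a metric that records only downloads cannot distinguish admiration from ridicule.

```python
# Hypothetical download logs: each entry records a reader's (invented)
# motive. SSRN sees none of this -- it records only the count.
downloads = {
    "careful_empirical_study": ["build on it"] * 500,
    "absurd_big_name_paper": ["debunk it"] * 350 + ["mock it"] * 150,
}

for paper, motives in downloads.items():
    # Every motive collapses into the same number.
    print(paper, "->", len(motives), "downloads")
# Both papers score 500 downloads; the count carries no information
# about why anyone read them.
```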
There is an important difference, though, between how citation counting distorts behavior and how SSRN download counts distort it. Citation counting has almost surely led scholars to cite their own papers and their friends' papers extensively (even when those papers are not on point), to refuse to cite high-quality articles with which they disagree, and so on. That is a bad result, and it makes citation-based rankings even less well suited to measuring importance or quality than they would otherwise be.
For SSRN, though, the distortion is especially pernicious. When people browse SSRN, they now know that every decision to download a paper is a vote. If I see an abstract for a paper that might or might not make a ridiculous point, for example, I would normally want to download the paper to see what the author is really saying. If the author seems to be writing in good faith but is making a point poorly, or has apparently not thought through some implications of an argument, I might even contact the professor to discuss the paper. I hope that others will do the same for me. If I do download a paper, though, I'm voting blind: I may end up enhancing the measured importance of a paper that turns out not to be a worthy piece of scholarship (or even coherent). Decisions to download thus become strategic, turning on each scholar's subjective judgments about whom and what to support, and discouraging scholars from engaging with one another at all.
And that is especially troubling, because the promise of SSRN was that it would allow scholars to interact at the stage where their work would benefit from the views of others. Counting downloads from SSRN has led to rankings based on those counts, and that undermines -- perhaps fatally -- SSRN's greatest contribution to the academic enterprise.