[Update: Moved to front from 13/3. I've created a wiki to continue the planning and discussion. Click here to sign up and begin editing.]
I want to say a little more about how academics might hope to take advantage of the shift to a 'publish, then filter' world.
The first thing to note is that we already solicit and offer each other informal feedback (i.e. outside of the journal peer review process) in a number of ways -- sending drafts to friends, presenting at conferences, etc. But these opportunities are predictably and unfortunately limited by the constraints of face-to-face social networking (e.g. geography: I never could have spoken with all the wonderful people I'm now surrounded by, back when I lived in New Zealand).
So it would seem desirable to take this informal process, and expand and enhance it by means of appropriate online infrastructure. We could create a website - a global database - to which philosophers could submit their draft papers in exchange for reviewing and rating others' submissions. (Authors could respond to these reviews in turn, either by revising their paper or explaining why they consider the criticisms misguided; the reviewers may subsequently revise their ratings. It's a dynamic process.)
Such feedback could be valuable to the authors. Insightful reviews could be reputation-enhancing for the reviewers. And, emerging from all this public give-and-take, readers end up with a searchable database of cutting-edge philosophical research, complete with a (rough) metric of quality, to help us find especially interesting and important new papers. (One might variously browse by rating, download popularity, number of reviews/comments, etc.)
As you can probably tell, I like this idea a lot. Unfortunately, I don't have the technical know-how to set up such a thing. (Oisin suggests Drupal?) But I'd be willing to look into it further -- and see if I can get institutional support, etc. -- if there's sufficient interest out there to make such a project worthwhile. Any takers?
This has been discussed before. It's worth looking at the comments there to get an idea of some of the issues involved.
Ah, thanks for the link!
I note that some of the objections are specific to the more radical proposal that journals replace their review process with 'open source peer review'. My proposal is more modest -- merely an expansion of the informal draft-sharing process; no revamping of formal journal procedures required.
I'm pretty sure Drupal could handle this if that's what you wanted to do. In fact, setting it up on Drupal probably wouldn't take very long at all: someone who knows what they're doing could *probably* have it set up within a day. (Me, maybe a week.)
The problem, of course, is that you've got to get it started. There's a collective action problem whereby no-one will submit until it's well-known and reputable, and it won't be well-known and reputable until people submit. I think you'd need some well-known backing to get it off the ground; people might check out a small database if, say, Parfit's latest paper was there.
Alex
sounds good.
I guess one downside is a potential flood of less-than-ideal reviewers, and responses to the comments of ideal reviewers. But I have some ideas regarding systems to sort that out.
So we could make one. You'd need to bring the people and a spec regarding what the community needs it to do.
I think I might be able to get a pretty good team here for the technical side, once the idea progresses a little, if you want something reasonably (or very) powerful.
Alex - I'll look into the possibility of high-profile backers. But in any case, I wonder whether it could just start off as a small draft-sharing enterprise between my friends and acquaintances, and then slowly expand as more people hear about it and decide that they, too, would like to get our feedback on a paper. So long as it's sufficiently easy to participate, why wouldn't they leap at any additional opportunity for helpful feedback/comments? There's no cost, so they merely need to be convinced that the expected benefits outweigh the minimal effort involved. That doesn't seem too unlikely, does it? (People already put drafts on their personal website, distribute them to friends, etc. Would this not be seen as a natural extension of this practice?) But maybe I'm missing something...?
Genius - thanks. It may be worth floating some ideas about how best to manage the review/commenting system. Two possibilities spring to mind for quality-control: (1) Allow others to review the reviewers, as per the Slashdot system; give greater visibility to higher-rated comments, etc. (2) Restrict posting privileges to registered users with verified identities. Further: perhaps only allow professors and grad students to register (independent scholars could request an exemption). And perhaps allow users the option to filter out (or weight more lightly) grad student contributions, on the assumption that these will tend to be of lower quality?
I'm open to alternative suggestions.
This seems like a good idea. I was about to suggest something like the digg.com format.
I'm about to do a Drupal install for www.youngphilosophers.org. Right now I run Blogger for it and my weblog, but Drupal is supposed to be remarkably versatile in being able to do some of the stuff you're interested in doing.
I bet it has a mod for social bookmarking (which is what you'd want it to be able to do).
"...I was about to suggest something like a digg.com format." should have been followed by... "but then I saw your slashdot note. That's basically a digg.com format - right?"
I think that, if there is more than one metric of quality used (rating, download popularity, number of reviews, etc.), the system would work well, if you had a sufficiently diverse group of people. (I think a diverse group of good reviewers would be more important than high-profile backers. You mostly just want it to be an obviously useful academic instrument; if it succeeds in that, it will spread.) There are several different ways to gauge quality -- clear presentation of a hot topic, solid scholarship, usefulness to others, etc.; so a system that recognized this (however crude it would have to be in practice) would be a great idea.
A vague idea that occurs to me just now: One possibility for quality control might be to have gradations of reviews and comments -- one category for minor comments (slight quibbles about facts, typos, etc.) and questions of clarification, another for blog-post type commentary, others for somewhat more formal brief discussion-notice and presentation-response kinds of reviews, and so forth. There probably is a better way to divide it up, but
one could set up rule-of-thumb guidelines for each category to guide those who are reviewing the reviewers. That way the system would have a bit more flexibility about the type of reviewing that could go on.
I'm inclined to think only registered users should be able to see full papers. One wants to encourage registering, and it helps to control the site.
I'm imagining known identities registering and getting a publication-based (and field-specific) ranking rather like what they have in the wider academic world (although that might not be dynamic unless we can assess it via a publication database),
then verified by sending an authorization email to their university email address (or whatever).
Then that combines with an internal points system of reward points.
Comments could be filtered, of course, by anything.
The key thing to keep in mind is that we want to keep the process reasonably automatic - at some point we would want thousands of people to be involved, and you can't make the process take hours per person. Similarly, your program can interrelate any variable, but it must be available and create a reasonably fair-looking result.
This probably means that publication-based rankings would be exceptions - but the ranking people gain from that helps you to use them to rank everyone else via the internal points system.
Oh, and beta testing...
Andrew - I'm not super familiar with digg. Do you just mean allowing users to rate entries (with a "thumbs up" vote, or whatever), and then highlighting the top-rated entries on the main page? Something like that sounds good. My Slashdot reference was to their practice of "meta-ratings", i.e. where readers can rate reviews/comments as well as the main entries; low-rated comments are then displayed less prominently (or not at all, by default). Do let me know if you have anything else in mind.
Brandon - thanks, some flexibility in response types sounds like a good idea. The desirability of fine-grained ratings needs to be balanced against the extra hassle of rating a paper across multiple dimensions. So I'm not sure how best to implement that. But it's definitely something to think about.
Genius - I'd want it to be open to read, and only restrict those who wish to post to the site. (Outsiders may benefit from reading the papers even if they're not qualified to comment on them.)
The idea of assigning extra reputation points to externally recognized experts is interesting. May be difficult to implement; I'm not sure what to use as the basis for such a ranking. But perhaps we could do it on an ad hoc basis at first, just to kick-start the internal points system, as you suggest.
If outsiders want to read them, why not register? It will be free.
And we might want to give people the ability at least to restrict who can see their papers, a bit like Facebook? Or have categories of publishing, from open to 'I authorize'.
I tend to find normal simple comment rating systems rather inadequate - they just don't sort the wheat from the chaff until too late, if at all.
Another idea is that one might have to invest points in order to comment (investing to try to earn the reward points) - in order to get people to take their commenting seriously.
I was thinking of using one of those journal rating lists to classify journals from A+ to C- (or something similar - hopefully one of those reference-based ones), give each grade a number weighting on that scale, and then run a program to take any list of publications, strip the journal names out of it, calculate a number, and ascribe it to that person.
Paste the list in column A, get the rating in box B. It'd be easy as pie, as they say. Sure, I'd probably end up weighting first authorship equally with second and things like that - but it would be a not-bad estimate.
Of course I'd need a list of at least their top publications - but surely they have that on their uni web pages.
I think I remember a site out there which recorded a lot of people's publishing records. If I could find it again I could just use that, and I'd have a ready-made rating for a lot of people for when they decide to join up. That would make it to a large degree automatic, which is exactly what I want.
Maybe someone here knows the one I am talking about?
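The "paste list in column A, get rating in box B" idea above could be prototyped in a few lines. This is a minimal sketch only: the journal names and tier weights below are illustrative placeholders (not a real ranking), and the case-insensitive substring matching is deliberately naive.

```python
# Hypothetical tier weights for a handful of journals -- placeholder
# values, not drawn from any actual rating list.
JOURNAL_WEIGHTS = {
    "Philosophical Review": 4.3,         # an imagined "A+" weight
    "Mind": 4.0,                         # "A"
    "Philosophical Studies": 3.3,        # "B+"
    "Southwest Philosophy Review": 2.0,  # "C"
}

def author_score(publication_list: str) -> float:
    """Scan each line of a pasted publication list for a known journal
    name (naive substring match) and average the matched weights.
    Returns 0.0 if nothing matches."""
    matched = []
    for line in publication_list.splitlines():
        for journal, weight in JOURNAL_WEIGHTS.items():
            if journal.lower() in line.lower():
                matched.append(weight)
                break  # count each publication at most once
    return round(sum(matched) / len(matched), 2) if matched else 0.0
```

For example, a two-line list with one "Mind" paper and one "Philosophical Studies" paper would score (4.0 + 3.3) / 2 = 3.65 under these made-up weights. A real version would need to handle journal-name variants and weight first authorship, as noted above.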
Richard,
That is what I was talking about with Digg.com. Perhaps that could be one of many measures.
Each paper could have 3 measures.
1. Simple Social Bookmark.
(How many people bookmark the paper for reading)
2. A 1-10 rating.
3. A 1-10 meta-rating (of the ratings) - Low numbers in these reduce the impact of low numbers in (2).
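The interaction between measures (2) and (3) -- low meta-ratings reducing the impact of the ratings they attach to -- could be implemented as a meta-weighted mean. A minimal sketch, assuming both scales run 1-10; the function name and data shape are my own assumptions, not a settled design:

```python
def weighted_rating(ratings_with_meta):
    """ratings_with_meta: list of (rating, meta_rating) pairs, both on
    a 1-10 scale. Each rating is weighted by its meta-rating, so a
    poorly-regarded rating drags the paper's score down (or up) less."""
    total_weight = sum(meta for _, meta in ratings_with_meta)
    if total_weight == 0:
        return 0.0
    return sum(r * meta for r, meta in ratings_with_meta) / total_weight

# A harsh rating with a low meta-rating barely moves the score:
# weighted_rating([(9, 8), (2, 1)]) is about 8.22, not the raw mean 5.5.
```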
Here's something else that would be cool. If the site were set up like a quasi-social-networking site, then philosophers could have profile pages with a number next to their name (with information about how many total diggs their papers have had, plus some average rating of their papers).
Here is a potential concern...
This is being discussed in the context of trying to determine what to do with the new publish-then-filter model. One reason to favor a simple open-access online-journal model over that is that the selection process could facilitate blind review.
I suspect if a rating-site like the one we're talking about got started, we might see some alliances crop up. People with the right kind of philosopher friends online could easily get their numbers inflated. People without the right kind of philosophy friends would remain unnoticed.
Some disjointed thoughts:
(1) I suspect alliances would pop up, too; but if the participants were sufficiently diverse in interest, I'm not sure that this would really be a problem in the long run, if all the papers, comments, etc. were easily accessible by searches, tags, and whatever else. The real danger from alliances is that a paper that is quite brilliant will take a beating in ratings, or be ignored, because it is very different in its topic or approach (or comes to a conclusion almost everyone disagrees with); but this is an unavoidable danger under any circumstances (even blind review doesn't eliminate it), and the best way to compensate for the danger is to make sure that there are lots of bright, decent people with very different interests involved.
(2) I like Andrew's profile page idea. One could also imagine people putting up their CVs and recommendations (both from the site and from outside) on their profile pages, if it had that functionality.
(3) In addition to the three that Andrew mentions, I think one of the quality metrics that should be used, if possible, is some sort of citation ranking -- the idea being that one measure of philosophical quality is inspiring a lot of philosophical discussion in the community, so papers (and reviews) on the site that get mentioned a lot should be pointed out. (But I think one would want this to be sorted out according to how much people are investing in their reviews and comments: another reason for wanting there to be distinct categories of reviews, with the most involved ones, the ones that required the most extensive investment, being the ones that count.)
Lots of good ideas here!
I think profile pages would be important for enhancing the system's reputational incentives. (We might want it to display the average meta-rating for their comments/reviews, for example, to discourage low-quality contributions. That seems less necessary in case of papers, however. Better just to highlight their best ones, perhaps?)
I wonder if the meta-ratings could also help deter the worst effects of alliance-forming? Imbalanced or blatantly biased reviews, at least, could be expected to inspire negative meta-ratings from outsiders. Though more subtle forms of favouritism -- e.g. in attention -- are probably unavoidable in a social system like this. (In any case, I agree that it's no replacement for blind review!)
I like the metrics discussed so far: (1) bookmark/download popularity, (2) ratings (weighted by meta-rating), and (3) citation counts / level of response (weighted by response type).
The marketing side needs fleshing out.
How many people are we going to get? Any big names we know? Who can do networking for us? Who do we have already? What unis can we canvass? What types of people will we focus on (fields/grads/professors)? Do we have papers already? Are they from respected people?
Is the website viable on a small or medium scale? (because we need to get over that hurdle).
And so forth.
(Haven't read all the comments here yet, so I might be repeating somebody.)
This sounds like a great idea, but I wonder if it'll need a board of "editors" to check overzealous reviewers and distribute the workload evenly (so everybody isn't commenting on one article).
Also, I'd like to propose a three-tier rating system.
(A) A simple rating on a scale of 1-10 of an article, open to all viewers.
(B) A similar rating of article reviews, open to all viewers, but only on a scale of +/-. (E.g. the YouTube comment rating system.)
(C) A reviewer rating of the article in question.
For example, an article might look something like this:
Title -- Reader rating: 8.4 | Reviewer rating: 7.9
Article text.
Review 1 (+4) -- Was this review helpful? Y or N
Review 2 (-1) -- Was this review helpful? Y or N
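A display like that example could be generated mechanically. A rough sketch, assuming ratings arrive as simple lists and reviews as (name, helpful, unhelpful) tuples -- the function name and data shapes are hypothetical:

```python
def render_article(title, reader_ratings, reviewer_ratings, reviews):
    """Render the header line (mean reader and reviewer ratings) plus
    one line per review showing its net helpfulness tally.
    reviews: list of (name, helpful_votes, unhelpful_votes) tuples."""
    lines = [
        f"{title} | Reader rating: {sum(reader_ratings)/len(reader_ratings):.1f}"
        f" | Reviewer rating: {sum(reviewer_ratings)/len(reviewer_ratings):.1f}"
    ]
    for name, up, down in reviews:
        lines.append(f"{name} ({up - down:+d})  Was this review helpful? Y/N")
    return "\n".join(lines)
```

For instance, three reader ratings of 8, 9, 8 and a review with 5 helpful and 1 unhelpful votes would yield "Reader rating: 8.3" and a "(+4)" tally, matching the mock-up above.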
And now I glance up and see that Andrew C has proposed something similar. Inflated numbers would only be a problem if a disproportionate number of voters over-valued an article and its positive reviews.
Also, as an informal system, I'd expect that big names will find themselves drawn into the project if a significant number of serious people (e.g. their graduate students) start using the system. So it doesn't seem necessary to solicit endorsements or tacit approval -- from the comments it already looks like the philosophy blogosphere can get excited about this prospect.
One more problem though: plagiarism. We don't want it to be a mine for desperate paper-writers, although you'd hope that a simple Google search would expose a violator.
Hi Jared, re: "board of editors", I think it would be preferable to automate such guidance into the system itself as much as possible. For example, reviewers might gain more "referee points" (counting towards allowing them to submit an additional paper of their own) for reviewing as-yet-unreviewed articles. Fix the incentives, and spontaneous order will emerge far more efficiently than it would from central planning. (I'm such an ideologue, ha...)
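To make that incentive concrete, here is one toy way the "referee points" scheme might work: a bonus for reviewing an as-yet-unreviewed article, and a points cost to submit a paper of one's own. The specific point values are arbitrary assumptions, purely for illustration.

```python
# Arbitrary illustrative point values -- not a settled design.
BASE_POINTS = 1         # points earned per review
FIRST_REVIEW_BONUS = 2  # extra points for reviewing a neglected article
SUBMISSION_COST = 3     # points spent to submit a paper of one's own

def review_reward(existing_review_count: int) -> int:
    """Points earned for a review; articles with no reviews yet pay
    extra, steering reviewers toward neglected submissions."""
    bonus = FIRST_REVIEW_BONUS if existing_review_count == 0 else 0
    return BASE_POINTS + bonus

def can_submit_paper(points: int) -> bool:
    """Authors must have banked enough referee points to submit."""
    return points >= SUBMISSION_COST
```

Under these numbers, one first-review of a neglected article (3 points) earns a submission, while reviews of already-covered articles take three apiece, so the workload spreads itself without an editorial board.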
re: plagiarism, I don't think this would create any new problems. There's already abundant source material provided by the Ph-Online database, the OPP blog, and drafts posted on personal websites. (Not to mention journals themselves.) The way to fight plagiarism is not to hide content but to find and penalize the idiots who misappropriate it. Google makes that ever easier, fortunately.
Jared,
there is a big difference between being willing to invest a little time commenting and being willing to spend months designing and testing a website or using important contacts to market it. Also, the website needs to start with a bit of a bang - it just won't be viable if we start off with 3 papers and 9 people reviewing them, from diverse fields and mostly students. While it is small it will add very little value - since the community is the key asset. Until hundreds if not thousands of people are using it seriously, you may well be spending more time making and marketing it than people spend using it.
People who are working on it, as well as people who will come later, need to be convinced it has a serious chance, unless you want them to do a half-arsed job because if it fails their work will be for nothing.
Richard,
> Fix the incentives, and spontaneous order will emerge
That's the big question - and I think the answer is yes - if you're good enough.
I'm not so sure about all that. Like Brandon said in his first comment: "You mostly just want it to be an obviously useful academic instrument; if it succeeds in that, it will spread." So I'd be open to starting small -- just a couple dozen grad students, perhaps -- and seeing how that works out. (It'd seem kind of presumptuous to market it aggressively when we're not yet sure how well it'll work.)
(The key question, I think, is how useful authors find the feedback they receive -- i.e. whether it's worth the effort of having to review others in return. If it is, the system will flourish. If not, it won't.)
Genius:
I agree with Richard. If it starts off with 9 grad students who are already reading and reviewing each others' papers -- and a few dozen casual readers (like those who already read the Philosophers' Carnival) -- then I see no harm. In fact, it will make the work being done accessible to everybody; I can only see that as a benefit (minus instances of plagiarism).
Hello, long time reader, first time responder.
I think this is an idea whose time has come. I and some other grads at Wash. U. were considering doing something like this, but we lacked the technical know-how. I've also heard of some other groups trying something like this on existing forums. I think the key is to do a really good job in terms of design, ease of use, etc. so that the site stands out as a useful tool even at first glance.
If you guys get it started, I would love to take part starting right from launch!
Brandon N. Towl
Washington University in St. Louis
I don't think that will work...
1) You're not offering anything that anyone else doesn't have - i.e. if it is viable on a small scale, the question is why we don't already know of anything like it out there. It would emerge naturally from blogs and such.
2) You give the impression that you are not very convinced about your own idea and as a result won't put much into it. That's a pretty shaky foundation to build a project on, particularly one like this.
3) Small communities don't average out things like periods where lots of people want reviews but have no time to review, etc.
4) You probably can't find anyone similar enough to you to be more useful than those you can find in 'the real world'. That means your paper progresses much faster via an alternative medium, making improvements in this medium much less valuable.
Here you have a number of people who are philosophers or amateur philosophers - but is there anyone in your field? Anyone who could seriously compete with your supervisor? Or even any number of your classmates?
5) In a tiny community a website is highly unlikely to be the optimal method of exchanging papers. In the end I'd just download your paper, paste it into Word, turn on track changes, and email it back to you when completed. The website just gets in the way unless I think a serious community has formed.
6) Your tiny website has a totally different dynamic than a big website - even if it works, it proves very little about whether the major site will work.
For example:
7) The value of reward points, etc., in a tiny community is negligible, so your rating system doesn't operate.
8) The ratio of time spent designing the site, managing it, writing the code, etc. to actual time spent using it is much worse the smaller the community. Will you really be willing to spend more time maintaining the site than using it?
Are you sure I can't convince you to take it as a serious project?
If not, is there anyone else here willing to do it on a bigger scale?
I've been entranced with this thread (and some others like it that have cropped up lately) for the past couple of weeks. It's something that I had considered previously during my time at school but discounted for reasons of time.
I want to see this project move forward. I've looked into what some other fields are doing to manage the same general problem and have found some interesting results. There's a lot of interest out there across a variety of disciplines.
Anyhoo ... I'm working on this problem and I would love to hear from other like-minded folks.
peter a t brandwagon d o t ca
OK, I'll email you to see where you're heading with it.
I'm nz.genius@gmail.com
OK, so 21 people say they will contribute - we could test your 'small site' theory.
1) We get the papers (maybe all 21 won't be ready yet - but we need a few at least).
2) We create a really basic blog with the papers and a comments section.
3) We manage the comments and a 'points system' as if we were a computer program - at this point you probably only need one guy; that could be me. I can also comment on the papers within my ability.
4) Sit back and see if it works.
If it gets too hard for me to manage, you have a success that might be worth doing properly. If you can't get papers or you can't get comments, it's a failure.
Like magic: no technical expertise, or anything else, required besides some papers and my time.
Yeah, I'd like to get this up and running for the summer break (around June, say). I wouldn't expect it to be all that much work to program the bare bones of it, though? Easier to automate than run manually, I would've expected, and it's certainly more aesthetically appealing. So, better to 'do it right' right from the start, if we can.
I'm expecting that people will want to keep changing how it is done in order to create the effect desired. It's natural that people's first experience of a new medium will shape what they want (they won't really know beforehand).
And that would make doing too much programming upfront a demoralizing task.
Besides, you indicated it would start out as a small thing, as a sort of test phase - if so, I find it hard to imagine a dozen or so people being much of a burden to manage; it might take minutes a day. Writing a smart website, on the other hand, could be a pain - much more boring, and it might take a couple of weeks' solid work to do properly - plus ongoing management. It also requires all sorts of testing and so forth, and in the end will be significantly inferior to me doing it anyway. (The best formula would probably just be a simulation of what I would do.)
I've amassed a good bit of reading on this topic if anyone is interested in an exchange. This forum is inadequate for the sheer volume of links and articles.
One notable item, however, for those seeking to employ a scheme of open peer review on their blog:
CommentPress, an add-on for the popular blogging tool WordPress, expands upon the built-in comment capabilities towards an open peer-review system. It could be of interest to a small-scale trial. Also, here, a discussion about its application and direction, and a bit about the information theory that underwrites the software.
There's really no shortage of interest and discussion on this topic - over the past few years especially, and across a variety of academic fields. Nearly every conclusion points in the direction of an open access publication model. The sentiment of academe is exceedingly clear, especially in light of Harvard's recent move towards open access. Organizations, institutions, academies, and so forth, have been actively supporting the growth of the open access 'movement' and serious amounts of research are available to inform new initiatives. What remains is for the corpus of best practices to be developed into a sort of 'version 2.0' end-product.
My email address is above should anyone wish to contact me directly.
"This forum is inadequate for the sheer volume of links and articles."
Hi Karen, please feel free to contribute to the wiki I've created for just this purpose. You can easily create a new 'links' page, or whatever you like.
Yeah, to me it would be a proof-of-concept site (although to you it might be an almost-finished site). We could use that CommentPress - it looks like the sort of thing that would be useful.
Hi Richard, love the idea, happy to be involved, submit papers, etc., but I don't really have the time at the moment.
One suggestion. I'm not sure you are thinking big enough. While the addition of a new ranked database of papers would be a great and useful thing, what would be more useful would be a metaframework that hosted papers if need be, but primarily did the job of ranking and sorting papers.
In other words, why not a mashup: a site that allows papers posted anywhere - on personal homepages, the Equality Exchange, and so on - to be reviewed and ranked.
David - are you merely suggesting that submitters be allowed to submit a link to their paper rather than uploading it all over again, or are you making the more radical suggestion that externally hosted papers get included automatically in the ranking database?
The first idea sounds fine to me, but my worry about the latter is that automatically included authors would not bother to review others. A more tightly controlled submissions process can require this (as outlined here).
Definitely the first. I think this is important because people won't necessarily want to upload their papers to more than one site and this means you won't be competing with the existing sites for hosting.
You are right that the second would lose the incentives for people to provide reviews. Although one may hope people do this out of the goodness of their heart, or interest in the paper (largely how the peer review system works, after all), I think the present incentive structure you guys are working on is useful for getting the ball rolling.