Some hypotheses about hypothes.is

My first response on looking at hypothes.is was uncertainty. I had seen quite a bit of publicity and plenty of mentions in discussion forums, but it wasn’t clear from the website just what was being proposed. “Annotate with anyone, anywhere” doesn’t really explain very much. I have visions of researchers sitting in a circle annotating together – not very likely.

Perhaps by “annotation” they meant “marginalia”? I remember all those handwritten annotations scribbled in the margins of library textbooks. They were very occasionally useful, but much more often intensely annoying, revealing the frantic state of mind of a student preparing for exams and struggling to absorb every word uncritically (repeated underlining and highlighting) or, just as unhelpfully, venting fury at the book without explaining why (“nonsense!”, “rubbish!”). There is not much of interest in this kind of annotation.

 

And then, what a name for an annotation service! A curious choice for a service that is not about hypothesizing at all. Admittedly, a typical research article may present a hypothesis, but a reader’s comments on that article, whether they agree or disagree with it, would not normally be described as a hypothesis.

Delving further into the description of hypothes.is when it was first announced (way back in 2011), I found some initiatives that sound interesting, but with no sign yet of how they are to be implemented.

For example, according to a presentation by Dan Whaley at the time of the launch, hypothes.is is an “open-source, community-moderated, distributed platform for sentence-level annotation of the Web”. Now, that might be interesting. There are precedents for providing annotation tools within a scholarly framework, notably PubMed Commons, which was launched in 2013 as a trial and, following the success of that trial, was announced as a continuing service.
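
To make “sentence-level annotation” concrete, here is a minimal sketch of how such anchoring is commonly done: the annotation stores the exact quoted sentence plus a little surrounding context, so it can be re-attached even if the page changes. This is my own illustration, loosely modelled on the text-quote selectors used by web annotation tools, not hypothes.is code.

```python
from dataclasses import dataclass

@dataclass
class TextQuoteAnchor:
    """Anchors an annotation to a sentence by its text, not its byte offset."""
    exact: str    # the sentence being annotated
    prefix: str   # a few characters before it, to disambiguate repeats
    suffix: str   # a few characters after it

    def locate(self, document: str) -> int:
        """Return the offset of the anchored sentence in `document`, or -1."""
        start = 0
        while (pos := document.find(self.exact, start)) != -1:
            before = document[max(0, pos - len(self.prefix)):pos]
            end = pos + len(self.exact)
            after = document[end:end + len(self.suffix)]
            if before.endswith(self.prefix) and after.startswith(self.suffix):
                return pos
            start = pos + 1
        return -1
```

Because the anchor is the quoted text itself rather than a position in one particular page, the same annotation could in principle re-attach to the article wherever it appears – which raises the question of interoperability, discussed below.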

Now, here is a possible reason for getting excited about the hypothes.is initiative. One group of people makes a practice of disagreeing with each other, and that is the scholarly research community. A tool that enables the reader of an article to see comments not just from one, but from all the places where the article appears would be valuable indeed. I still have, however, three main questions, covering interoperability, reputation, and stance.

Interoperability

A service that enables annotations to be shared requires some mechanism for comments to be posted and received from one platform to another. Quite how that might work in practice I don’t know: the same article might appear on the publisher’s website, in an institutional repository, and perhaps also in an aggregator collection from EBSCO or ProQuest – several locations, none of which has any particular interest in sharing content with the others. Annotations might be created on any of those platforms. To provide cross-platform compatibility across all the places where an article might appear, together with a way of collecting the annotations, is an interesting challenge. Certainly PubMed Commons, a similar initiative enabling authors to comment on other authors’ articles, is restricted to PubMed articles, and to comments made on that platform.
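
One plausible mechanism – my own speculation, not anything hypothes.is has described – would be to key annotations to a persistent identifier such as the DOI rather than to any one URL, so that every copy of an article shares a single annotation stream. A toy sketch, with hypothetical URLs and identifiers:

```python
from collections import defaultdict

# Hypothetical mapping from platform URLs to a persistent identifier (DOI).
# In practice, maintaining this resolution step is the hard part.
URL_TO_DOI = {
    "https://publisher.example/article/123": "10.1000/example.123",
    "https://repository.example/handle/456": "10.1000/example.123",
}

class AnnotationStore:
    """Toy store that aggregates annotations across platforms by DOI."""

    def __init__(self):
        self._by_doi = defaultdict(list)

    def add(self, url: str, comment: str) -> None:
        # An annotation made on any copy of the article is filed under its DOI.
        self._by_doi[URL_TO_DOI[url]].append(comment)

    def annotations_for(self, url: str) -> list:
        # A reader on any platform sees the combined annotation stream.
        return self._by_doi[URL_TO_DOI[url]]

store = AnnotationStore()
store.add("https://publisher.example/article/123", "Figure 2 seems mislabelled.")
store.add("https://repository.example/handle/456", "See also the 2009 replication.")
print(store.annotations_for("https://publisher.example/article/123"))
# Both comments appear, regardless of where they were made.
```

Even in this toy form, the mapping table shows where the difficulty lies: someone has to maintain the resolution from every platform’s URLs to the canonical identifier, and none of those platforms has an obvious incentive to do so.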

Reputation

Hypothes.is claims to be both reputation-based and community-moderated. It is not easy to reconcile the two. Wikipedia is community moderated, and definitely not reputation based – remember the notorious example of the American novelist whose contribution to his own Wikipedia entry was questioned because it lacked a secondary source. There are a few crowd-sourced projects that include some degree of authority, for example the Transcribe Bentham project run by University College London (http://www.ucl.ac.uk/Bentham-Project). Volunteers checked the digitisation of Bentham’s correspondence, but their identities were known, and their work was reviewed and corrected by “experts”. Hypothes.is claims it will allow anonymous comments – so how will these be managed and moderated?

Stance

Hypothes.is claims to be aware of “stance” – the attitude of the person creating the annotation. The presentation lists five kinds of stance:

  1. Is this a challenge?
  2. Does it support the issue?
  3. Is it a neutral observation?
  4. Does it ask a question?
  5. Does it request expert advice?

This is an admirable categorisation of types of response, and no doubt some useful user journeys could be constructed once the stances have been separated. But there is no mention of how stance would be captured. Would it be inferred by automatic tools, along the lines of sentiment analysis in text analytics? Or would users be invited to tag their own annotations with one or more of the five categories above? The latter seems unlikely, given that people’s tagging of their own content is often highly unreliable.
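
If self-tagging were chosen, the data model might look something like the sketch below – entirely my own illustration of the five stances listed above, not anything hypothes.is has published:

```python
from dataclasses import dataclass, field
from enum import Enum

class Stance(Enum):
    """The five stances from the presentation."""
    CHALLENGE = "is this a challenge?"
    SUPPORT = "does it support the issue?"
    NEUTRAL = "is it a neutral observation?"
    QUESTION = "does it ask a question?"
    EXPERT_REQUEST = "does it request expert advice?"

@dataclass
class Annotation:
    author: str
    text: str
    stances: set = field(default_factory=set)  # one comment may mix stances

note = Annotation(
    author="anonymous",
    text="Has this result been replicated? The sample size looks small.",
)
note.stances.update({Stance.QUESTION, Stance.CHALLENGE})

# With stances captured, a reader could filter, say, only the challenges:
challenges = [a for a in [note] if Stance.CHALLENGE in a.stances]
```

Allowing a set of stances rather than a single one reflects the fact that a real comment often both asks a question and challenges the article at once – which only makes reliable self-tagging harder.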

To summarise, this looks like a fascinating project, but with lots of questions still unanswered. Are these hypotheses correct? We will have to wait for more information from this intriguing collaboration.