How can experiments in collaborative markup capture uncommon or dissenting readings? The concept of crowdsourcing – and, really, the social internet in general – has proven highly adept at extracting majority opinions, at taking the pulse of a group of people. What is “liked” by the community of participants? Where is there agreement? Always implicitly contained in the data that yields these insights, though, is information about how individuals and dissenting groups diverge from the majority consensus.
Usually, in the context of the consumer web, these oppositions are flattened into monolithic “like” or “dislike” dichotomies. Tools like Prism, though, capture structurally agnostic and highly granular information about how users react to complex artifacts (texts, the most complex of things). I think it would be fascinating to find ways of analyzing the data produced by Prism that illuminate the places where the experimental cohort profoundly disagrees, as in the rough sketch below. These disagreements could be productive irritants for criticism. Why the disagreement? What’s the implicit interpretive split that produced the non-consensus?
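As a rough illustration of what that analysis might look like, here is a minimal sketch, assuming a simplified, hypothetical export of the markup data (not Prism’s actual API or file format): score each word by the entropy of the cohort’s markings, so that words where the group splits evenly float to the top.

```python
# Minimal sketch: rank words by how much the cohort disagrees about them.
# Assumes a hypothetical data shape: word position -> one label per reader.
from collections import Counter
from math import log2

def disagreement_scores(markings):
    """markings: dict mapping word position -> list of labels, one per reader
    (e.g. the facet each reader assigned, or None if left unmarked)."""
    scores = {}
    for position, labels in markings.items():
        counts = Counter(labels)
        total = sum(counts.values())
        # Shannon entropy: 0 when everyone agrees, maximal for an even split.
        entropy = -sum((n / total) * log2(n / total) for n in counts.values())
        scores[position] = entropy
    return scores

# Hypothetical example: at word 7, three of five readers chose one facet and
# two chose another; at word 8, all five readers agreed.
example = {
    7: ["rhetoric", "rhetoric", "rhetoric", "orientalism", "orientalism"],
    8: ["rhetoric"] * 5,
}
# Words with the most divided readings come first.
print(sorted(disagreement_scores(example).items(), key=lambda kv: -kv[1]))
```

Something this simple would at least surface the passages where interpretation splinters, the places worth going back to and asking why.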