As a researcher, you're tasked with systematically looking for explanations of phenomena. The short reason for working systematically is to avoid bias. To that end, our tenets for "good" research include magnitude, articulation, generality, interestingness, and credibility (Abelson, 1995).
Not surprisingly, the recent Berkman publication entitled Measuring Internet Activity: a (Selective) Review of Methods and Metrics (Faris & Heacock, 2013) caught my eye. I am well into my second year as a doctoral student at the TWC, where we focus heavily on all aspects of the Web, from the technical to the social. (I'll spare you the lengthy justification for studying this phenomenon.)
But as I gear up to propose my research, questions about exactly how I plan to carry out my studies are where publications like the Berkman study become must-reads. Drawing from the article, I highlight a few key reasons why using the Internet for studies of human behavior can be problematic:
- Self-selection bias can emerge when administering opt-in online surveys, limiting the reliability of inferences about preferences, beliefs, and actions.
- Representativeness becomes questionable, as data from the Internet is typically limited to a “modest slice of digital life”, making it very difficult to generalize to a population.
- Replicability is damn near impossible.
- Reliability of the data collected continues to be a point of contention; though more and more people are building ways of retaining, exposing, and tracking provenance (like the TWC!).
- Noise can be amplified when using Internet data, especially from social networks and “big data”.
These are just some of the challenges researchers like myself face as we sort out exactly what we can glean from Internet data and what statements we can make about human behavior. The key, as Faris & Heacock state, will be strong theory and robust methods.
Luckily for me, I work in a lab that is pushing the boundaries to unpack and answer the challenge. (More on this in a later post).
Lastly, the authors suggest that the best way to measure the impact and efficacy of networked technologies in effecting social change is through “observation and measure of what people do and accomplish with digital tools” (p. 14).
I add to this (a preview of my research interest) the need to close the loop: technologists, engineers, and policymakers learn from these observations and respond with incremental shifts in the technology itself.