A recent column by David Alan Grier gives a mixed review to citizen science and related activities. He speaks well of the CubeSat, a standardized design for experiment packages built by students and launched into space; he also likes the BOINC framework for harnessing household computers to solve computationally intense problems. But his praise for generalized crowdsourcing of research is tempered:
Citizen science may not be able to make major changes to scientific institutions, but it should be able to occupy some niche in scientific practice. As we have seen in other activities that attempt to coordinate the contributions of the general public with the Internet, these efforts have to find a way of disciplining work and overcoming the gross inefficiencies associated with mass labor.
Mass labor, especially when it is a volunteer effort, can prove to be remarkably resistant to discipline. Volunteers are prone to follow their own inclinations no matter how much guidance a professional scientist might offer. One of the major citizen-science projects devoted to recording biodiversity claims to offer a global perspective on flora and fauna, but its volunteers have shown a remarkable propensity for collecting images from the world’s wealthy shopping districts and resorts. Pictures of Yellowstone can be of great interest, but they are of little use when you had hoped to see images of plants found in Yaoundé.
I’m inclined to agree. Doing science is not the same thing as snapping a photo with a smartphone or checking in on Foursquare. Perhaps the sweet spot for this approach is what we might call semi-skilled knowledge work: trained observation and transcription. As examples, consider eBird and the related projects from the Cornell Lab of Ornithology, the Distributed Proofreaders component of Project Gutenberg, or my own newest volunteer commitment, the North American Bird Phenology Program.