Maribel Recharte et al.
Flagship species are used to promote conservation and tourism. Africa’s famous ‘Big Five’ have become marketing flagships that fundraisers and tourism promoters emulate on other continents, choosing regional groups of species for marketing campaigns. Selections can be based on characteristics identified as appealing, such as colour, size, or behaviour, but this approach may overlook unique flagships or homogenise selections. Polling the public to reveal existing preferences for animals may identify suitable species more directly. We used questionnaires with tourists in the Peruvian Amazon to identify existing biases for species suitable for tourism and conservation marketing. Without a species list, preferences were expressed at inconsistent taxonomic levels. The response ‘monkeys’ (infraorder Simiiformes) was the highest ranked, followed by ‘jaguar’ (Panthera onca), ‘Amazon dolphin’ (Inia geoffrensis), ‘sloths’ (suborder Folivora), ‘caiman’ (subfamily Caimaninae) and ‘birds’ (class Aves). When ranking species from a preselected shortlist, jaguar, Amazon dolphins, and sloths (represented by Bradypus variegatus) remained popular, while vote splitting within higher taxonomic levels, in particular monkeys, made room in the top rankings for green-winged macaw (Ara chloropterus) and anaconda (Eunectes murinus). When asked about their willingness to pay for excursions or donate to conservation, tourists were overwhelmingly more likely to quote larger figures to see or conserve jaguars than any other species, but results for other species were more homogeneous. Important species for tourism in rainforest regions are often from diverse taxonomic groups; monkeys may be represented by 8-14 species at single sites in Amazonia, birds by several hundred species. A Big Five strategy obscures this diversity. Similarly, using physical characteristics as selection criteria underplays diversity and can overlook popular taxa.
Polling the public to identify regional flagships more directly identifies salient species for marketing and is especially useful where budgets are limited, but in megadiverse areas an approach that reflects diversity may trump the Big Five approach.
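The vote-splitting effect described above can be illustrated with a toy tally. The counts below are hypothetical, invented purely for illustration (they are not the study's data): the same pool of preferences is counted once at the free-response taxonomic level and once against a species-level shortlist, and the aggregated ‘monkeys’ category dissolves into several species, none of which reaches the top ranks.

```python
from collections import Counter

# Hypothetical vote counts, for illustration only (not the study's data).
free_responses = Counter({
    "monkeys": 40, "jaguar": 35, "Amazon dolphin": 28,
    "sloths": 22, "caiman": 18, "birds": 15,
})

# With a species-level shortlist, the "monkeys" votes split across
# several monkey species, so no single monkey species ranks highly,
# making room for macaw and anaconda in the top five.
shortlist_votes = Counter({
    "jaguar": 35, "Amazon dolphin": 28, "sloth (B. variegatus)": 22,
    "green-winged macaw": 14, "anaconda": 12,
    "squirrel monkey": 11, "howler monkey": 10, "saddleback tamarin": 9,
})

def top_ranked(votes, n=5):
    """Return the n most-voted labels, highest first."""
    return [name for name, _ in votes.most_common(n)]

print(top_ranked(free_responses))   # "monkeys" leads the free responses
print(top_ranked(shortlist_votes))  # no monkey species makes the top five
```

The point of the sketch is only that rankings depend on the taxonomic level at which votes are counted, which is why the free-response and shortlist rankings in the abstract differ.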

Nicole Egna et al.
Scientists are increasingly using the volunteer efforts of citizen scientists to classify images captured by motion-activated trail-cameras. The rising popularity of citizen science reflects its potential to engage the public in conservation science and to accelerate processing of the large volume of images generated by trail-cameras. While image classification accuracy by citizen scientists can vary across species, the influence of other factors on accuracy is poorly understood. Inaccuracy diminishes the value of citizen-science-derived data and prompts the need for specific best-practice protocols to decrease error. We compared accuracy across three programs that use crowdsourced citizen scientists to process images online: Snapshot Serengeti, Wildwatch Kenya, and AmazonCam Tambopata. We hypothesized that habitat type and camera settings would influence accuracy. To evaluate these factors, each photo was circulated to multiple volunteers. All volunteer classifications were aggregated to a single best answer for each photo using a plurality algorithm. Subsequently, a subset of these images underwent expert review and was compared to the citizen scientist results. Classification errors were categorized by the nature of the error (e.g. false species or false empty) and the reason for the false classification (e.g. misidentification). Our results show that Snapshot Serengeti had the highest accuracy (97.9%), followed by AmazonCam Tambopata (93.5%), then Wildwatch Kenya (83.4%). Error type was influenced by habitat, with false empty images more prevalent in open grassy habitat (27%) than in woodlands (10%). For medium to large animal surveys across all habitat types, our results suggest that to significantly improve accuracy in crowdsourced projects, researchers should use a trail-camera setup protocol with a burst of three consecutive photos, a short field of view, and appropriate camera sensitivity.
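The plurality aggregation and expert-comparison steps described above can be sketched as follows. This is a minimal illustration of the general approach, not the programs' actual pipelines: the tie-breaking rule (first label reached), the blank-photo label, and all example data are simplifying assumptions of ours.

```python
from collections import Counter

def plurality_answer(classifications):
    """Aggregate one photo's volunteer labels to a single best answer
    by plurality vote. Ties break on first-seen label, a simplifying
    assumption; real projects may handle ties differently."""
    if not classifications:
        return None
    return Counter(classifications).most_common(1)[0][0]

def accuracy(photo_votes, expert_labels):
    """Fraction of photos whose plurality answer matches expert review."""
    matches = sum(
        plurality_answer(votes) == expert
        for votes, expert in zip(photo_votes, expert_labels)
    )
    return matches / len(expert_labels)

# Toy example: three photos, each classified by several volunteers.
# "empty" stands for a photo judged to contain no animal.
volunteer_votes = [
    ["wildebeest", "wildebeest", "buffalo"],
    ["empty", "empty", "empty"],
    ["giraffe", "empty", "giraffe", "giraffe"],
]
expert_review = ["wildebeest", "empty", "giraffe"]
print(accuracy(volunteer_votes, expert_review))
```

A photo the plurality vote calls "empty" but the expert labels with a species would count as a false empty, the error type the abstract reports as more prevalent in open grassy habitat.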
Accuracy comparisons such as this study can improve the reliability of future citizen science projects and encourage wider use of such data.