Introduction

The invention of the camera trap by George Shiras in the late 1890s, and its widespread adoption by hunters roughly 100 years later, has armed scientists and managers with a powerful noninvasive tool for collecting data on wildlife (Sanderson and Trolle 2005). Imagery from camera traps supports ecological investigations, inventory and monitoring networks, and the cataloging of biodiversity (e.g., Karanth and Nichols 1998, MacKenzie et al. 2005, Trolle et al. 2007, Stein et al. 2008). Such use of camera traps continues to expand in both the number of cameras deployed and the number of images taken (Kays and Slauson 2008, Kays et al. 2009). Yet this increase creates a paradox: while practitioners seek more data to improve analyses, they buckle under the mounds of imagery piling up before them.

This situation engenders four problems. First, because cataloging imagery is slow, image identification lags behind acquisition, and many images remain unidentified. Second, manual data entry is tedious and error-prone (Maydanchik 2007). Third, inconsistent filing and naming conventions complicate data retrieval and sharing (Chaudhary et al. 2010). Fourth, the struggle to keep pace with acquiring and managing data from existing camera traps slows the deployment of additional cameras (and subsequent data acquisition).

These four problems stem from two general issues: the inability to handle large volumes of imagery, and the lack of systematic organization. With few tools presently available, users have addressed them by either storing raw images or using ad hoc labeling and cataloging. The former means much data sits unanalyzed; the latter complicates data retrieval, analysis, and collaboration. Just as Chaudhary et al. (2010) found, across-site comparisons and meta-analyses are nearly nonexistent.
The few software tools now available offer limited data-analysis capability (Camera Base 2007). Even established global monitoring networks such as the TEAM Network (2008) advocate using a spreadsheet with hand entry to record data gleaned from both digital and film camera traps (Kays et al. 2009). As a result, camera trapping remains an underutilized tool.

To address these issues we offer a three-step, standardized procedure to retrieve, label, store, analyze, and disseminate camera trap data. The methodology relies solely on open-source software and two computer programs we created. Our procedure is fast and simple and requires no hand data entry, thus greatly reducing data-entry errors (Maydanchik 2007). Output from our analysis software can be imported directly into other analysis programs (e.g., PRESENCE; MacKenzie et al. 2005) and into the standard spreadsheets used elsewhere (TEAM Network 2008). The analysis program also calculates 18 parameters commonly examined by ecologists and wildlife managers (Table 1). These 18 parameters serve as examples, as the potential for expansion is self-evident. We illustrate these parameters by summarizing analyses of data from camera-trapping projects in south-central Arizona (~30,000 images) and Suriname, South America (~75,000 images). We are presently organizing camera programs throughout the southwestern USA following this methodology (presently ~300,000 images from Arizona and New Mexico, and growing).

A system for retrieval, storage, analysis, and sharing of camera-trap data

Background
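The authors' analysis software is not reproduced here, but the kind of summary it automates can be illustrated with a short sketch. The code below groups timestamped photo records into independent detection events per camera and species, one of the common parameters ecologists derive from camera-trap data. The record format, the `count_events` function, and the 30-minute independence window are all assumptions made for illustration, not details taken from the paper.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Assumed independence window: photos of the same species at the same
# camera closer together than this are treated as one detection event.
# The 30-minute value is illustrative, not taken from the paper.
INDEPENDENCE = timedelta(minutes=30)

def count_events(records, gap=INDEPENDENCE):
    """Count independent detection events per (camera, species).

    `records` is an iterable of (camera_id, species, timestamp) tuples,
    a hypothetical format standing in for parsed camera-trap metadata.
    """
    by_key = defaultdict(list)
    for camera, species, ts in records:
        by_key[(camera, species)].append(ts)
    events = {}
    for key, times in by_key.items():
        times.sort()
        n = 1  # the first photo always opens an event
        for prev, cur in zip(times, times[1:]):
            if cur - prev >= gap:
                n += 1
        events[key] = n
    return events

records = [
    ("C01", "puma", datetime(2010, 3, 1, 22, 5)),
    ("C01", "puma", datetime(2010, 3, 1, 22, 7)),   # 2 min later: same event
    ("C01", "puma", datetime(2010, 3, 2, 1, 30)),   # hours later: new event
    ("C01", "javelina", datetime(2010, 3, 1, 6, 0)),
]
print(count_events(records))
# {('C01', 'puma'): 2, ('C01', 'javelina'): 1}
```

A tabulation like this is the sort of output that can be exported to a spreadsheet or passed on to occupancy software such as PRESENCE.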
Harris, G., Thompson, R., Childs, J. L., & Sanderson, J. G. (2010). Automatic Storage and Analysis of Camera Trap Data. The Bulletin of the Ecological Society of America, 91(3), 352–360. https://doi.org/10.1890/0012-9623-91.3.352