Have Standards Enhanced Biodiversity Data? Global correction and acquisition patterns

  • Otegui J
  • Ariño A

Abstract

The Global Biodiversity Information Facility (GBIF) was developed with the prime goal of making biodiversity data freely available to everyone, responding to the increasing demand for basic information to address environmental challenges. This vision required that many differently built databases abide by common standards, and the development of the latter was largely entrusted to TDWG. Adhering to a standard should, in principle, contribute to data quality: the danger of misinterpretation by a user querying different databases could be removed, as the standard would guarantee at least the semantic content of each data item. Furthermore, it could be argued that the need to map diverse databases to an agreed-upon standard might lead to discovering errors, or more often gaps, in data availability. On the other hand, if data are made available through a standard, failure to comply (e.g. through incorrect mapping) might inject error: if not into the original databases, at least into the data as they are served to the user. In previous contributions (Ariño & Otegui, 2008; Otegui et al., 2009), we made transversal assessments of the quality of some basic pieces of information (the "what, where, when" that form the primary biodiversity data) at the moment of retrieval. The assessment revealed a vast majority of apparently correct data, along with obvious issues that should be addressed. However, were these data correct from the beginning? Were there wrong data that someone corrected at some point, possibly as a result of applying an exchange standard? Could wrong data have actually been used for research whose results went uncorrected even after the data errors were detected? And could correct data have turned wrong because of some standardization mistake?
Observing the evolution of the quality of standardized data over time should enable us to address a critical, overarching question: how far can we trust the world's available biodiversity data? By extending our previous snapshot-type assessments over time, we set out to unveil patterns in the acquisition of new data, in the trends and rates of error correction, and in the likelihood of new errors resulting from the implementation of the standardization processes. Our results should help draw a portrait of what was, what is, and what can be expected regarding the availability of biodiversity information.

References

Ariño, A. H., & Otegui, J. (2008). Sampling Biodiversity Sampling. Proceedings of TDWG, 2008.

Otegui, J., Robles, E., & Ariño, A. H. (2009). Noise in Biodiversity Data. Contribution to e-biosphere, London, 2009.
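The "what, where, when" assessment described above can be illustrated with a minimal sketch. This is not the authors' actual method; it is a hypothetical check of one occurrence record, assuming records arrive as plain dictionaries keyed by Darwin Core terms (scientificName, decimalLatitude, decimalLongitude, eventDate) and that dates use simple ISO formatting:

```python
from datetime import datetime

def check_record(rec):
    """Flag basic 'what, where, when' issues in one occurrence record.

    A hypothetical quality screen: field names follow Darwin Core terms,
    and records are assumed to be plain dicts of strings.
    """
    issues = []

    # "What": the taxon name must be present and non-empty.
    if not (rec.get("scientificName") or "").strip():
        issues.append("missing scientificName")

    # "Where": coordinates must parse as numbers and fall in valid ranges.
    try:
        lat = float(rec["decimalLatitude"])
        lon = float(rec["decimalLongitude"])
        if not (-90 <= lat <= 90 and -180 <= lon <= 180):
            issues.append("coordinates out of range")
    except (KeyError, TypeError, ValueError):
        issues.append("missing or unparseable coordinates")

    # "When": the collection date must parse as an ISO date, not in the future.
    try:
        when = datetime.strptime(rec.get("eventDate", ""), "%Y-%m-%d")
        if when > datetime.now():
            issues.append("eventDate in the future")
    except ValueError:
        issues.append("missing or unparseable eventDate")

    return issues
```

Run over a snapshot of served records, counting the issues per record gives the kind of transversal quality profile the abstract refers to; repeating it over successive snapshots would expose correction and error-injection trends over time.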

Citation (APA)

Otegui, J., & Ariño, A. H. (2009). Have Standards Enhanced Biodiversity Data? Global correction and acquisition patterns. In A. L. Weitzman (Ed.), Proceedings of TDWG (2009) (p. 92). Montpellier, FR: Biodiversity Information Standards (TDWG). Retrieved from http://www.tdwg.org/proceedings/article/view/494
