The importance of simulation assumptions when evaluating detectability in population models

  • Monroe A
  • Wann G
  • Aldridge C
  • Coates P

Abstract

Population monitoring is important for investigating a variety of ecological questions, and N‐mixture models are increasingly used to model population size (N) and trends (λ) while estimating detectability (p) from repeated counts within primary periods (when populations are closed to changes). Extending these models to dynamic processes with serial dependence across primary periods may relax the closure assumption, but simulations to evaluate models and inform effort (e.g., number of repeated counts) typically assume p is constant or random across sites and years. Thus, it is unknown how these models perform under scenarios where trends in p confound inferences on N and λ, and conclusions regarding effort may be overoptimistic. Here, we used global positioning system data from greater sage‐grouse (Centrocercus urophasianus) to inform simulations of the detection process for lek counts of this species, and we created scenarios with and without linear annual trends in p. We then compared estimates of N and λ from hierarchical population models either fit with single maximum counts or with detectability estimated from repeated counts (dynamic N‐mixture models). We also explored using auxiliary data to correct counts for variation in detectability. Uncorrected count models consistently underestimated N by >50%, whereas N‐mixture models without auxiliary data underestimated N to a lesser degree due to unmodeled heterogeneity in p, such as age. Nevertheless, estimates of λ from both types of models were unbiased and similar for scenarios without trends in p. When p declined systematically across years, uncorrected count models underestimated λ, whereas N‐mixture models estimated λ with little bias when all sites were counted repeatedly. Auxiliary data also reduced bias in parameter estimates. Evaluating population models using scenarios with systematic variation in p may better reveal potential biases and inform effort than simulations that assume p is constant or random. Dynamic N‐mixture models can distinguish between trends in p and N, but also require repeated counts within primary periods for accurate estimates. Auxiliary data may be useful when researchers lack repeated counts, wish to monitor more sites less intensively, or require unbiased estimates of N.
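For orientation, a generic dynamic N‐mixture model of the kind evaluated here can be sketched as follows (a Dail–Madsen‐type formulation with an assumed logit‐linear annual trend in detectability; an illustrative sketch, not necessarily the authors' exact specification):

N_i,1 ~ Poisson(Λ_i)                      initial abundance at site i
N_i,t ~ Poisson(λ · N_i,t−1)              dynamics across primary periods t = 2, …, T
y_i,t,j ~ Binomial(N_i,t, p_t)            repeated counts j = 1, …, J within period t
logit(p_t) = α + β · t                    β ≠ 0 gives a systematic annual trend in p

Under this structure, a model fit only to the single maximum count per primary period confounds a trend in p with a trend in N (a declining p looks like a declining population), whereas the repeated counts y_i,t,1, …, y_i,t,J are what allow p_t, and hence λ, to be estimated separately.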

Citation (APA)

Monroe, A. P., Wann, G. T., Aldridge, C. L., & Coates, P. S. (2019). The importance of simulation assumptions when evaluating detectability in population models. Ecosphere, 10(7). https://doi.org/10.1002/ecs2.2791
