Turbulent Dissipation in the Interstellar Medium: The Coexistence of Forced and Decaying Regimes and Implications for Galaxy Formation and Evolution

  • Avila-Reese, V.
  • Vazquez-Semadeni, E.

Abstract

We discuss the dissipation of turbulent kinetic energy E_k in the global interstellar medium (ISM) by means of two-dimensional, MHD, nonisothermal simulations in the presence of model radiative heating and cooling. We argue that dissipation in two dimensions is representative of that in three dimensions as long as it is dominated by shocks rather than by a turbulent cascade. Contrary to previous treatments of dissipation in the ISM, this work considers realistic, stellar-like forcing: energy is injected at a few isolated sites in space, over relatively small scales, and over short time periods. This leads to the coexistence of forced and decaying regimes in the same flow, to a net propagation of turbulent kinetic energy from the injection sites to the decaying regions, and to different characteristic dissipation rates and times at the forced sites and in the global flow. We find that the ISM-like flow dissipates its turbulent energy rapidly. In simulations with forcing, the input parameters are the radius l_f of the forcing regions, the total kinetic energy e_k each source deposits into the flow, and the rate of formation of those regions, Σ̇_OB. The global dissipation time t_d depends mainly on l_f. For most of our simulations, t_d is well described by a combination of forcing parameters and global parameters of the flow: t_d ≈ u_rms² / (ε̇_k f), where u_rms is the rms velocity dispersion, ε̇_k is the specific power of each forcing region, and f is the filling factor of all these regions. In terms of measurable properties of the ISM, t_d ≳ ⟨Σ_g⟩ u_rms² / (e_k Σ̇_OB), where ⟨Σ_g⟩ is the average gas surface density; for the solar neighborhood, t_d ≳ 1.5 × 10⁷ yr. The global dissipation time is consistently smaller than the crossing time of the largest energy-containing scales, suggesting that the local dissipation time near the sources must be significantly smaller than what would be estimated from large-scale quantities alone.
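As a rough illustration of the measurable-quantities estimate t_d ≳ ⟨Σ_g⟩ u_rms² / (e_k Σ̇_OB), the sketch below evaluates it in CGS units. All parameter values are round, assumed numbers chosen only to illustrate the order of magnitude; they are not the values adopted in the paper.

```python
# Illustrative evaluation of t_d ~ <Sigma_g> * u_rms^2 / (e_k * Sigmadot_OB).
# All input values below are assumptions for illustration, not the paper's.

M_SUN = 1.989e33   # solar mass [g]
PC = 3.086e18      # parsec [cm]
KPC = 1e3 * PC     # kiloparsec [cm]
YR = 3.156e7       # year [s]
MYR = 1e6 * YR     # megayear [s]

def dissipation_time(sigma_g, u_rms, e_k, sigmadot_ob):
    """Global dissipation time t_d = sigma_g * u_rms**2 / (e_k * sigmadot_ob).

    sigma_g     : average gas surface density [g cm^-2]
    u_rms       : rms velocity dispersion [cm s^-1]
    e_k         : kinetic energy deposited per forcing region [erg]
    sigmadot_ob : formation rate of forcing regions per unit area [cm^-2 s^-1]
    """
    return sigma_g * u_rms**2 / (e_k * sigmadot_ob)

# Hypothetical solar-neighborhood-like inputs:
sigma_g = 10.0 * M_SUN / PC**2        # ~10 M_sun per pc^2
u_rms = 10.0e5                        # 10 km/s
e_k = 1e51                            # erg per OB-association-like event
sigmadot_ob = 1.0 / (KPC**2 * MYR)    # one event per kpc^2 per Myr

t_d = dissipation_time(sigma_g, u_rms, e_k, sigmadot_ob)
print(f"t_d ~ {t_d / YR:.1e} yr")     # of order 10^7 yr for these inputs
```

With these assumed inputs the estimate comes out at a few times 10⁷ yr, the same order as the solar-neighborhood bound quoted above; the point of the sketch is only the scaling of t_d with the four measurable quantities.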
In decaying simulations, we find that the kinetic energy decreases with time as E_k(t) ∝ t^(−α), with α ≈ 0.8–0.9. When applied to the mixed forced-plus-decaying case, this result can be translated into a decay with distance ℓ from the sources, giving E_k ∝ ℓ^(−2α/(2−α)) at large distances. Our results, if applicable in the direction perpendicular to galactic disks, support models of galaxy evolution in which stellar energy injection provides significant support for the gas disk thickness, but they do not support models in which this energy injection is supposed to reheat an intrahalo medium at distances of up to 10–20 times the optical galaxy size, since the dissipation occurs over distances comparable to the disk height. However, this conclusion is not definitive until the effects of stratification on our results are tested.
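The temporal-to-spatial conversion of the decay exponent follows if one assumes the turbulent energy propagates outward at the instantaneous rms speed — a sketch of the reasoning under that assumption, not a reproduction of the paper's derivation:

```latex
E_k(t) \propto t^{-\alpha}
  \;\Rightarrow\; u(t) \propto E_k^{1/2} \propto t^{-\alpha/2},
\qquad
\ell(t) \propto \int u \, dt \propto t^{\,1-\alpha/2}
  \;\Rightarrow\; t \propto \ell^{\,2/(2-\alpha)},
\qquad
E_k \propto \bigl(\ell^{\,2/(2-\alpha)}\bigr)^{-\alpha}
  = \ell^{-2\alpha/(2-\alpha)}.
```

For α ≈ 0.8–0.9 this gives a spatial exponent 2α/(2−α) ≈ 1.3–1.6, i.e., the turbulent energy falls off somewhat faster than 1/ℓ away from the injection sites.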

Citation (APA)

Avila‐Reese, V., & Vazquez‐Semadeni, E. (2001). Turbulent Dissipation in the Interstellar Medium: The Coexistence of Forced and Decaying Regimes and Implications for Galaxy Formation and Evolution. The Astrophysical Journal, 553(2), 645–660. https://doi.org/10.1086/320944
