We consider scheduling over a wireless system in which channel state information is not available a priori to the scheduler but can be inferred from past history. Specifically, the wireless system is modeled as a network of parallel queues, where the channel state of each queue evolves stochastically as an independent on/off Markov chain. The scheduler, which is aware of the queue lengths but not of the current channel states, chooses at most one queue at a time for transmission and can estimate the channel states from the acknowledgment history. We first characterize the capacity region of the system using tools from the theory of Markov decision processes (MDPs). Specifically, we prove that the boundary of the capacity region is the uniform limit of a sequence of linear programming (LP) solutions. Next, we combine the LP solution with a queue-length-based scheduling mechanism that operates over long frames to obtain a throughput-optimal policy for the system. By incorporating results from MDP theory within the Lyapunov-stability framework, we show that our frame-based policy stabilizes the system for all arrival rates in the interior of the capacity region.
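To make the model concrete, the following is a minimal Python sketch, not taken from the paper, of how a scheduler can maintain beliefs about on/off Markov channels from the acknowledgment history. The names belief_update and pick_queue, the transition parameters p01 and p11, and the queue-length-weighted index rule are all illustrative assumptions; the index rule only shows how beliefs can enter the scheduling decision and is not the paper's frame-based policy.

```python
import random

def belief_update(pi, scheduled, ack, p01, p11):
    """Advance the belief that a channel is ON by one time slot.

    pi        -- prior probability that the channel was ON in the last slot
    scheduled -- True if this queue was chosen for transmission
    ack       -- True if an ACK was received (meaningful only when scheduled)
    p01, p11  -- P(OFF -> ON) and P(ON -> ON) of the on/off Markov chain
    """
    if scheduled:
        # A transmission attempt reveals the channel state exactly:
        # an ACK means the channel was ON, its absence means it was OFF.
        pi = 1.0 if ack else 0.0
    # Propagate the (possibly revealed) state one step through the chain.
    return pi * p11 + (1.0 - pi) * p01

def pick_queue(queue_lengths, beliefs):
    """Serve the queue with the largest queue-length-weighted belief.

    An illustrative max-weight-style index rule (an assumption of this
    sketch), showing how estimated channel states combine with backlogs.
    """
    weights = [q * b for q, b in zip(queue_lengths, beliefs)]
    best = max(range(len(weights)), key=weights.__getitem__)
    return best if weights[best] > 0 else None

# One slot of a toy simulation with assumed parameters and backlogs.
p01, p11 = 0.2, 0.8
beliefs = [0.5, 0.5]
queues = [3, 5]

chosen = pick_queue(queues, beliefs)
ack = random.random() < beliefs[chosen]  # simulated ACK outcome
beliefs = [belief_update(b, i == chosen, ack, p01, p11)
           for i, b in enumerate(beliefs)]
```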
Jagannathan, K., Mannor, S., Menache, I., & Modiano, E. (2013). A state action frequency approach to throughput maximization over uncertain wireless channels. Internet Mathematics, 9(2–3), 136–160. https://doi.org/10.1080/15427951.2011.601934