Hajek B. ECE 534 class notes (2011), 1-405.


From an applications viewpoint, the main reason to study the subject of these notes is to help deal with the complexity of describing random, time-varying functions. A random variable can be interpreted as the result of a single measurement. The distribution of a single random variable is fairly simple to describe. It is completely specified by the cumulative distribution function F(x), a function of one variable. It is relatively easy to approximately represent a cumulative distribution function on a computer. The joint distribution of several random variables is much more complex, for in general it is described by a joint cumulative probability distribution function, F(x1, x2, . . . , xn), which is much more complicated than n functions of one variable. A random process, for example a model of time-varying fading in a communication channel, involves many, possibly infinitely many (one for each time instant t within an observation interval) random variables. Woe the complexity!

These notes help prepare the reader to understand and use the following methods for dealing with the complexity of random processes:

- Work with moments, such as means and covariances.
- Use extensively processes with special properties. Most notably, Gaussian processes are characterized entirely by means and covariances, Markov processes are characterized by one-step transition probabilities or transition rates together with initial distributions, and independent increment processes are characterized by the distributions of single increments.
- Appeal to models or approximations based on limit theorems for reduced complexity descriptions, especially in connection with averages of independent, identically distributed random variables. The law of large numbers tells us that, in a certain context, a probability distribution can be characterized by its mean alone. The central limit theorem, similarly, tells us that a probability distribution can be characterized by its mean and variance. These limit theorems are analogous to, and in fact examples of, perhaps the most powerful tool ever discovered for dealing with the complexity of functions: Taylor's theorem, in which a function in a small interval can be approximated using its value and a small number of derivatives at a single point.
- Diagonalize. A change of coordinates reduces an arbitrary n-dimensional Gaussian vector into a Gaussian vector with n independent coordinates. In the new coordinates the joint probability distribution is the product of n one-dimensional distributions, representing a great reduction of complexity. Similarly, a random process on an interval of time is diagonalized by the Karhunen-Loève series representation. Stationary random processes are diagonalized by Fourier transforms.
- Sample. A narrowband continuous time random process can be exactly represented by its samples taken at a sampling rate twice the highest frequency of the random process. The samples offer a reduced complexity representation of the original process.
- Work with baseband equivalents. The range of frequencies in a typical radio transmission is much smaller than the center frequency, or carrier frequency, of the transmission. The signal could be represented directly by sampling at twice the largest frequency component. However, the sampling frequency, and hence the complexity, can be dramatically reduced by sampling a baseband equivalent random process.

These notes were written for the first semester graduate course on random processes, offered by the Department of Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign. Students in the class are assumed to have had a previous course in probability, which is briefly reviewed in the first chapter of these notes.
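The diagonalization idea described above can be illustrated numerically. The following sketch (in Python with NumPy; the covariance matrix is an arbitrary one chosen for illustration, not an example from the notes) draws correlated zero-mean Gaussian vectors and checks that an orthogonal change of coordinates renders the coordinates uncorrelated, and hence, being jointly Gaussian, independent:

```python
import numpy as np

rng = np.random.default_rng(0)

# Build an arbitrary symmetric positive definite covariance matrix K
# (hypothetical, for illustration only).
A = rng.standard_normal((3, 3))
K = A @ A.T + 3 * np.eye(3)

# Eigendecomposition of the symmetric matrix K: K = U diag(w) U^T,
# with the columns of U orthonormal.
w, U = np.linalg.eigh(K)

# Draw many zero-mean Gaussian vectors X with covariance K.
n = 200_000
X = rng.multivariate_normal(np.zeros(3), K, size=n)

# Change of coordinates Y = U^T X: the covariance of Y is
# U^T K U = diag(w), so the coordinates of Y are uncorrelated,
# each with variance given by the corresponding eigenvalue.
Y = X @ U

sample_cov = np.cov(Y, rowvar=False)
print(np.round(sample_cov, 2))  # close to diag(w): off-diagonals near 0
print(np.round(w, 2))
```

In the new coordinates the joint density factors into a product of three one-dimensional Gaussian densities, which is the reduction of complexity the bullet refers to.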
Students are also expected to have some familiarity with real analysis and elementary linear algebra, such as the notions of limits, definitions of derivatives, Riemann integration, and diagonalization of symmetric matrices. These topics are reviewed in the appendix. Finally, students are expected to have some familiarity with transform methods and complex analysis, though the concepts used are reviewed in the relevant chapters.

Each chapter represents roughly two weeks of lectures and includes homework problems. Solutions to the even numbered problems without stars can be found at the end of the notes. Students are encouraged to first read a chapter, then try doing the even numbered problems before looking at the solutions. Problems with stars, for the most part, investigate additional theoretical issues, and solutions are not provided.

Hopefully some students reading these notes will find them useful for understanding the diverse technical literature on systems engineering, ranging from control systems and image processing to communication theory and communication network performance analysis. Hopefully some students will go on to design systems, and to define and analyze stochastic models. Hopefully others will be motivated to continue their study of probability theory, going on to learn measure theory and its applications to probability and analysis in general.

A brief comment is in order on the level of rigor and generality at which these notes are written. Engineers and scientists have great intuition and ingenuity, and routinely use methods that are not typically taught in undergraduate mathematics courses. For example, engineers generally have good experience and intuition about transforms, such as Fourier transforms, Fourier series, and z-transforms, and some associated methods of complex analysis. In addition, they routinely use generalized functions; in particular, the delta function is frequently used.
The use of these concepts in these notes leverages this knowledge, and it is consistent with mathematical definitions, but full mathematical justification is not given in every instance. The mathematical background required for a fully rigorous treatment of the material in these notes is roughly at the level of a second year graduate course in measure theoretic probability, pursued after a course on measure theory.

The author gratefully acknowledges the students and faculty (Todd Coleman, Christoforos Hadjicostis, Andrew Singer, R. Srikant, and Venu Veeravalli) of the past five years for their comments and corrections.
