Statistical techniques in cosmology

Alan Heavens
Institute for Astronomy, University of Edinburgh, Blackford Hill, Edinburgh EH9 3HJ, firstname.lastname@example.org

Lectures given at the "Francesco Lucchin" Summer School, Bertinoro, May 2009, and at SISSA, Italy, May 2010

1 Introduction

In these lectures I cover a number of topics in cosmological data analysis. I concentrate on general techniques which are common in cosmology, or techniques which have been developed in a cosmological context. In fact they have very general applicability, for problems in which the data are interpreted in the context of a theoretical model, and thus lend themselves to a Bayesian treatment. We consider the general problem of estimating parameters from data, and consider how one can use Fisher matrices to analyse survey designs before any data are taken, to see whether the survey will actually do what is required. We outline numerical methods for estimating parameters from data, including Monte Carlo Markov Chains and the Hamiltonian Monte Carlo method. We also look at Model Selection, which covers various scenarios such as whether an extra parameter is preferred by the data, or answering wider questions such as which theoretical framework is favoured, using General Relativity and braneworld gravity as an example. These notes are not a literature review, so there are relatively few references. Some of the derivations follow the excellent notes of Licia Verde and Andrew Hamilton. After this introduction, the sections are:

• Parameter Estimation
• Fisher Matrix analysis
• Numerical methods for Parameter Estimation
• Model Selection

1.1 Notations

• Data will be called x, or x, or x_i, and is written as a vector, even if it is a 2D image.
• Model parameters will be called θ, or θ, or θ_α.

1.2 Inverse problems

Most data analysis problems are inverse problems. You have a set of data x, and you wish to interpret the data in some way. Typical classifications are:

• Hypothesis testing
• Parameter estimation
• Model selection
Cosmological examples of the first type include:

• Are CMB data consistent with the hypothesis that the initial fluctuations were gaussian, as predicted (more-or-less) by the simplest inflation theories?
• Are large-scale structure observations consistent with the hypothesis that the Universe is spatially flat?

Cosmological examples of the second type include:

• In the Big Bang model, what is the value of the matter density parameter?
• What is the value of the Hubble constant?

Model selection can include slightly different types of question (mostly concerned with larger, often more qualitative issues):

• Do cosmological data favour the Big Bang theory or the Steady State theory?
• Is the gravity law General Relativity or higher-dimensional?
• Is there evidence for a non-flat Universe?

Note that the notion of a model can be a completely different paradigm (the first two examples), or basically the same model but with a different parameter set. In the third example, we are comparing a flat Big Bang model with a non-flat one. The latter has an additional parameter, and is considered to be a different model. Similar problems exist elsewhere in astrophysics, such as: how many absorption-line systems are required to account for an observed feature? These lectures will principally be concerned with questions of the last two types, parameter estimation and model selection, but will also touch on experimental design and error forecasting. Hypothesis testing can be treated in a similar manner.

2 Parameter estimation

We collect some data, and wish to interpret them in terms of a model. A model is a theoretical framework which we assume is true. It will typically have some parameters θ in it, which you want to determine. The goal of parameter estimation is to provide estimates of the parameters, and their errors, or ideally the whole probability distribution of θ given the data x. This is called the posterior probability distribution, i.e. it is the probability that the parameters take certain values, after doing the experiment (as well as assuming a lot of other things):

    p(θ|x).    (1)

This is an example of RULE 1 (there is no rule n for n > 1): start by thinking about what it is you want to know, and write it down mathematically. From p(θ|x) one can calculate the expectation values of the parameters, and their errors. Note that we are immediately taking a Bayesian view of probability, as a degree of belief, rather than a frequency of occurrence in a set of trials.

2.1 Forward modelling

Often, what may be easily calculable is not this, but rather the opposite, p(x|θ). (If you are confused about p(A|B) and p(B|A), consider A = pregnant and B = female: p(A|B) is a few per cent, whereas p(B|A) is unity.) The opposite is sometimes referred to as forward modelling, i.e. if we know what the parameters are, we can compute the expected distribution of the data.
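As a minimal sketch of what forward modelling means in practice (my own illustration, not from the lectures; the gaussian model, parameter values and sample size are all invented), the following shows the two things a forward model gives you: mock data drawn from p(x|θ), and the value of p(x|θ) for a given dataset.

```python
import numpy as np
from scipy import stats

# Toy forward model: the data are assumed to be independent draws from a gaussian
# with mean mu and standard deviation sigma; parameter values invented for illustration.
mu, sigma = 0.7, 0.1
rng = np.random.default_rng(42)

# (i) Given the parameters, generate a mock realisation of the data.
x_mock = rng.normal(mu, sigma, size=20)

# (ii) Given the parameters, evaluate the probability density p(x|theta) of a dataset.
ln_p = np.sum(stats.norm.logpdf(x_mock, loc=mu, scale=sigma))
print("ln p(x|theta) for the mock data:", ln_p)
```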
Examples of forward modelling distributions include the common ones (binomial, Poisson, gaussian, etc.), or may be more complex, such as the predictions for the CMB power spectrum as a function of cosmological parameters. As a concrete example, consider a model which is a gaussian with mean μ and variance σ². The model has two parameters, θ = (μ, σ), and the probability of a single variable x given the parameters is

    p(x|θ) = 1/(√(2π) σ) exp[ −(x − μ)²/(2σ²) ],    (2)

but this is not what we actually want. However, we can relate this to p(θ|x) using Bayes' Theorem, here written for a more general data vector x:

    p(θ|x) = p(θ, x)/p(x) = p(x|θ) p(θ)/p(x).    (3)

• p(θ|x) is the posterior probability for the parameters.
• p(x|θ) is called the likelihood, and is given its own symbol L(x; θ).
• p(θ) is called the prior, and expresses what we know about the parameters prior to the experiment being done. This may be the result of previous experiments, or theory (e.g. some parameters, such as the age of the Universe, may have to be positive). In the absence of any previous information, the prior is often assumed to be a constant (a 'flat prior').
• p(x) is the evidence. For parameter estimation, the evidence simply acts to normalise the probabilities,

    p(x) = ∫ dθ p(x|θ) p(θ),    (4)

and the relative probabilities of the parameters do not depend on it, so it is often ignored and not even calculated. However, the evidence does play an important role in model selection, when more than one theoretical model is being considered, and one wants to choose which model is most likely, whatever the parameters are. We turn to this later.

Actually all the probabilities above should be conditional probabilities, given any prior information I which we may have. For clarity, I have omitted these for now. I may be the result of previous experiments, or may be a theoretical prior, in the absence of any data. In such cases, it is common to adopt the principle of indifference and assume that all values of the parameter(s) are equally likely, and take p(θ) = constant (perhaps within some finite bounds; if the bounds are infinite, set it to some arbitrary constant and work with an unnormalised prior). This is referred to as a flat prior. Other choices can be justified. Thus for flat priors, we have simply

    p(θ|x) ∝ L(x; θ).    (5)

Although we may have the full probability distribution for the parameters, often one simply uses the peak of the distribution as the estimate of the parameters. This is then a maximum likelihood estimate. Note that if the priors are not flat, the peak in the posterior p(θ|x) is not necessarily the maximum likelihood estimate. A rule of thumb is that if the priors are assigned theoretically, and they influence the result significantly, the data are usually not good enough. (If the priors come from a previous experiment, the situation is different: we can be more certain that we really have some prior knowledge in this case.) Finally, note that this method does not generally give a goodness of fit, only relative probabilities. It is still common to compute χ² at this point to check that the fit is sensible.
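To make equations (2)-(5) concrete, here is a minimal numerical sketch (my own illustration, not from the lectures; the dataset, grid ranges and resolution are all chosen arbitrarily) which evaluates the posterior of the two-parameter gaussian model on a grid with a flat prior, and reads off the maximum-likelihood point.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(3.0, 0.5, size=50)            # invented dataset for the illustration

# Grids over the two parameters theta = (mu, sigma); ranges chosen by hand.
mu_grid = np.linspace(2.5, 3.5, 201)
sigma_grid = np.linspace(0.3, 0.8, 201)
mu, sigma = np.meshgrid(mu_grid, sigma_grid, indexing="ij")

# ln L(x; theta): sum over the data of the log of equation (2).
lnL = np.sum(
    -0.5 * ((x[None, None, :] - mu[..., None]) / sigma[..., None]) ** 2
    - np.log(np.sqrt(2.0 * np.pi) * sigma[..., None]),
    axis=-1,
)

# Flat prior, so the posterior is proportional to the likelihood (equation 5);
# the normalising sum is, up to prior and grid-spacing factors, the evidence (equation 4).
posterior = np.exp(lnL - lnL.max())
posterior /= posterior.sum()

# With a flat prior, the peak of the posterior is the maximum-likelihood estimate.
i, j = np.unravel_index(np.argmax(posterior), posterior.shape)
print("ML estimate: mu =", mu_grid[i], ", sigma =", sigma_grid[j])
```

Evaluating the posterior on a grid like this is only practical for a small number of parameters; the numerical methods discussed later (such as Monte Carlo Markov Chains) are designed for higher-dimensional parameter spaces.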
2.2 Updating the probability distribution for a parameter

One will often see in the literature forecasts for a new survey, where it is assumed that we will know quite a lot about cosmological parameters from another experiment. Typically these days it is Planck, which is predicted to constrain many cosmological parameters very accurately. Often people 'include a Planck prior'. What does this mean, and is it justified? Essentially, what is assumed is that by the time of the survey, Planck will have happened, and we can combine results. We can do this in two ways: regard Planck+survey as new data, or regard the survey as the new data, with our prior information set by what we know from Planck. If Bayesian statistics makes sense, it should not matter which we choose. We show this now.

If we obtain some more information, from a new experiment, then we can use Bayes' theorem to update our estimate of the probabilities associated with each parameter. The problem reduces to showing that adding the results of a new experiment to the probability of the parameters is the same as doing the two experiments first, and then seeing how they both affect the probability of the parameters. In other words, it should not matter how we gain our information; the effect on the probability of the parameters should be the same.

We start with Bayes' expression for the posterior probability of a parameter (or more generally of some hypothesis), where we state explicitly that all probabilities are conditional on some prior information I:

    p(θ|xI) = p(θ|I) p(x|θI) / p(x|I).    (6)

Let us say we do a new experiment with new data, x′. We have two ways to analyse the new data:

• Interpretation 1: we regard x′ as the dataset, and xI (meaning x and I) as the new prior information.
• Interpretation 2: we put all the data together, call it x′x, and interpret it with the old prior information I.

If Bayesian inference is to be consistent, it should not matter which we do. Let us start with Interpretation 1. We rewrite Bayes' theorem, equation (6), by changing datasets x → x′, and letting the old data become part of the prior information, I → I′ = xI. Bayes' theorem is now

    p(θ|x′I′) = p(θ|xI) p(x′|θxI) / p(x′|xI).    (7)

We now notice that the new prior in this expression is just the old posterior probability from equation (6), and that the new likelihood is just

    p(x′|xθI) = p(x′x|θI) / p(x|θI).    (8)

Substituting this expression for the new likelihood:

    p(θ|x′I′) = p(θ|xI) p(x′x|θI) / [p(x′|xI) p(x|θI)].    (9)

Using Bayes' theorem again on the first term on the top and the second on the bottom, we find

    p(θ|x′I′) = p(θ|I) p(x′x|θI) / [p(x′|xI) p(x|I)],    (10)

and simplifying the bottom gives finally

    p(θ|x′I′) = p(θ|I) p(x′x|θI) / p(x′x|I) = p(θ|[xx′]I),    (11)

which is Bayes' theorem in Interpretation 2, i.e. it has the same form as equation (6), the outcome from the initial experiment, but now with the data x replaced by x′x. In other words, we have shown that x → x′ and I → xI is equivalent to x → x′x. This shows us that it does not matter how we add in new information: Bayes' theorem gives us a natural way of improving our statistical inferences as our state of knowledge increases.
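This consistency can be checked numerically. The sketch below is my own illustration, not from the lectures: the data, the (assumed known) noise level σ, and the grid are all invented, and the two datasets are taken to be independent so that the combined likelihood factorises. It computes the posterior for a single parameter μ in both interpretations and confirms they agree.

```python
import numpy as np

rng = np.random.default_rng(7)
sigma = 1.0                                  # assumed known noise level
x1 = rng.normal(0.2, sigma, size=10)         # first experiment (data x)
x2 = rng.normal(0.2, sigma, size=15)         # new experiment (data x')

mu_grid = np.linspace(-1.0, 1.5, 2001)       # grid over the single parameter mu

def likelihood(data, mu):
    """p(data|mu), up to a constant, for independent gaussian measurements of mu."""
    chi2 = np.sum(((data[:, None] - mu[None, :]) / sigma) ** 2, axis=0)
    return np.exp(-0.5 * chi2)

def normalise(p):
    return p / p.sum()

prior = np.ones_like(mu_grid)                # flat prior, given information I

# Interpretation 1: the posterior from x1 becomes the prior for x2.
post_1 = normalise(prior * likelihood(x1, mu_grid))
post_seq = normalise(post_1 * likelihood(x2, mu_grid))

# Interpretation 2: analyse the combined dataset x'x with the original prior I.
post_joint = normalise(prior * likelihood(np.concatenate([x1, x2]), mu_grid))

# The two posteriors should agree to machine precision.
print("max |difference|:", np.max(np.abs(post_seq - post_joint)))
```

The agreement holds by construction: for independent data, both orderings multiply the same flat prior by the same two likelihood factors.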
2.3 Errors

Let us assume we have a posterior probability distribution which is single-peaked. Two common estimators of the parameters (indicated by a hat, θ̂) are the peak (most probable) values, or the mean,

    θ̂ = ∫ dθ θ p(θ|x).    (12)

An estimator is unbiased if its expectation value is the true value θ_0:

    ⟨θ̂⟩ = θ_0.    (13)

Let us assume for now that the prior is flat, so the posterior is proportional to the likelihood. This can be relaxed. Close to the peak, a Taylor expansion of the log likelihood implies that locally it is a multivariate gaussian in parameter space:

    ln L(x; θ) = ln L(x; θ_0) + (1/2)(θ_α − θ_α^0) [∂² ln L / ∂θ_α ∂θ_β] (θ_β − θ_β^0) + ...    (14)

or

    L(x; θ) = L(x; θ_0) exp[ −(1/2)(θ_α − θ_α^0) H_αβ (θ_β − θ_β^0) ].    (15)

The Hessian matrix H_αβ ≡ −∂² ln L / ∂θ_α ∂θ_β controls whether the estimates of θ_α and θ_β are correlated or not. If it is diagonal, the estimates are uncorrelated. Note that this is a statement about estimates of the quantities, not the quantities themselves, which may be entirely independent, but if they have a similar effect on the data, their estimates may be correlated. Note that in cases of practical interest, the likelihood may not be well described by a multivariate gaussian at levels which set the interesting credibility levels (e.g. 68%). We turn later to how to proceed in such cases.

2.4 Conditional and marginal errors

If we fix all the parameters except one, then the error is given by the curvature along a line through the likelihood (the posterior, if the prior is not flat):

    σ_conditional,α = 1/√(H_αα).    (16)

This is called the conditional error, and is the minimum error bar attainable on θ_α if all the other parameters are known. It is rarely relevant and should almost never be quoted.

2.5 Marginalising over a gaussian likelihood

The marginal distribution of θ_1 is obtained by integrating over the other parameters:

    p(θ_1) = ∫ dθ_2 ... dθ_N p(θ),    (17)

a process which is called marginalisation. Often one sees marginal distributions of all parameters in pairs, as a way to present some complex results. In this case two variables are left out of the integration.
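To illustrate the difference between the conditional error (16) and the marginal error obtained from (17), here is a small sketch (my own illustration; the 2x2 Hessian values are arbitrary). It builds the gaussian likelihood of equation (15) on a grid, computes 1/√(H_11) with the other parameter fixed, and compares it with the width of the marginal distribution of θ_1, which for a gaussian is √((H⁻¹)_11).

```python
import numpy as np

# Arbitrary illustrative Hessian H_ab = -d^2 lnL / dtheta_a dtheta_b (positive definite,
# with off-diagonal terms so that the two parameter estimates are correlated).
H = np.array([[4.0, 3.0],
              [3.0, 5.0]])

# Conditional error on theta_1: all other parameters held fixed (equation 16).
sigma_cond = 1.0 / np.sqrt(H[0, 0])

# Build L(theta) of equation (15) on a grid around the peak theta_0 = (0, 0).
t1 = np.linspace(-3.0, 3.0, 601)
t2 = np.linspace(-3.0, 3.0, 601)
T1, T2 = np.meshgrid(t1, t2, indexing="ij")
d = np.stack([T1, T2], axis=-1)                          # displacement theta - theta_0
L = np.exp(-0.5 * np.einsum("ija,ab,ijb->ij", d, H, d))  # equation (15), up to a constant

# Marginal distribution of theta_1: integrate L over theta_2 (equation 17).
dt = t1[1] - t1[0]
p1 = L.sum(axis=1)
p1 /= p1.sum() * dt

# Width of the marginal (its mean is zero here), compared with the analytic result.
sigma_marg = np.sqrt(np.sum(p1 * t1**2) * dt)
print("conditional error 1/sqrt(H_11)   :", sigma_cond)
print("marginal error from the grid     :", sigma_marg)
print("analytic marginal sqrt((H^-1)_11):", np.sqrt(np.linalg.inv(H)[0, 0]))
```

The marginal error is larger than the conditional one whenever the estimates are correlated, which is why the conditional error should almost never be quoted.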