






Advanced Data Analysis: Causal Discovery (lecture notes)
Contents

1 Testing DAGs
2 Causal Discovery with Known Variables
  2.1 Causal Discovery with Hidden Variables
  2.2 Software
  2.3 On Conditional Independence Tests
3 Limitations on Consistency of Causal Discovery
4 Exercises
A Pseudocode for the SGS Algorithm

Last time, we looked at the problem of estimating causal effects within a known graphical causal model, which is essentially the problem of removing confounding. Today, at last, we come to the problem of how to find the right graph in the first place. As always, we presume that there is some directed acyclic graph which adequately represents the systematic interactions among the variables. First, as a warm-up, we look at testing the implications of different DAG models, and so comparing them.

1 Testing DAGs
As seen in the homework, if we have multiple contending DAGs, we would like to focus our inference on telling which one is right (if any of them are). Since the graphs are different, they make different assertions about which variables have causal effects on which other variables. If we can experiment, those claims can be checked directly: if a model says X is a parent of Y, but experimentally manipulating X makes no difference to Y, we can throw that model out. If we cannot experiment, we look for a qualitative, observational difference between the models, that is, some conditional independence relation which one model says is present and the other says is absent. For instance, in homework 10,
we had $\mathrm{cancer} \perp \mathrm{tar} \mid \mathrm{smoking}$ in one model, but $\mathrm{cancer} \not\perp \mathrm{tar} \mid \mathrm{smoking}$ in the other. To discriminate between these models, we just need to be able to test for conditional independence.

Recall from two lectures ago that conditional independence is equivalent to zero conditional mutual information: $X \perp Y \mid Z$ if and only if $I[X; Y \mid Z] = 0$. In principle, this solves the problem. In practice, estimating mutual information is non-trivial, and in particular the sample mutual information often has a very complicated distribution. You could always bootstrap it, but often something more tractable is desirable. Completely general conditional independence testing is actually an active area of research, though unfortunately much of the work is still quite mathematical (Sriperumbudur et al., 2010).

If all the variables are discrete, one just has a big contingency table problem, and can use a $G^2$ or $\chi^2$ test. If everything is linear and multivariate Gaussian, $X \perp Y \mid Z$ is equivalent to zero partial correlation[1]; a sketch of such a test appears below. Nonlinearly, if $X \perp Y \mid Z$, then $E[Y \mid Z] = E[Y \mid X, Z]$, so if smoothing Y on X and Z leads to different predictions than just smoothing on Z, conditional independence fails. To reverse this, and go from $E[Y \mid Z] = E[Y \mid X, Z]$ to $X \perp Y \mid Z$, requires the extra assumption that Y doesn't depend on X through its variance or any other moment. (This is weaker than the linear-and-Gaussian assumption, of course.)

The conditional independence relation $X \perp Y \mid Z$ is fully equivalent to $\Pr(Y \mid X, Z) = \Pr(Y \mid Z)$. We could check this using non-parametric density estimation, though we would have to bootstrap the distribution of the test statistic. A more automatic, if slightly less rigorous, procedure comes from the idea mentioned in Lecture 6: if X is in fact useless for predicting Y given Z, then an adaptive bandwidth selection procedure (like cross-validation) should realize that giving any finite bandwidth to X just leads to over-fitting, and the bandwidth given to X should tend to the maximum allowed, smoothing X away altogether. This argument can be made more formal, and made into the basis of a test (Hall et al., 2004; Li and Racine, 2007).

Notice that this basic idea, of checking the conditional independence relations implied by a model, can be used even when we do not have two rival models. (This is more like a goodness-of-fit test than a comparative hypothesis test.) As usual, it is simple to reject a model whose predictions do not match the data; managing to match the data is only evidence for a model if such a match was very unlikely were the model false. I will not, however, repeat the earlier discussion of the logic of model-checking here.

All of this is in fact fairly conventional hypothesis testing, where models are just handed to us by the Angel, or drawn out of scientific theories. The one wrinkle is that the DAG presents us with a lot of hypotheses which are in a sense small or local, making them easier to test, but which still bear on the global model. (We do not have to check a complete model of the determinants of cancer, just whether tar predicts cancer after controlling for smoking.) This is very suggestive: if we could paste together enough of these qualitative constraints …
[1] As you know, the partial correlation between X and Y given Z is the correlation between them after linearly regressing both on Z; that is, it is the correlation of their residuals.
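Since the notes include no code for this, here is a minimal sketch of the linear-Gaussian case: test $X \perp Y \mid Z$ by computing the partial correlation from residuals (as in footnote 1), then applying Fisher's z-transform to get an approximate p-value. The function name, the significance level, and the toy data are my own illustrative choices, not anything from the lecture.

```python
import numpy as np
from scipy import stats

def partial_corr_indep(x, y, z, alpha=0.05):
    """Test X _||_ Y | Z assuming linearity and Gaussianity (a sketch).

    x, y : 1-D arrays of length n; z : (n, k) array of conditioning
    variables.  Returns True if conditional independence is retained
    at level alpha.
    """
    n = len(x)
    Z = np.column_stack([np.ones(n), z])  # add an intercept column
    # Partial correlation = correlation of the residuals from
    # linearly regressing each of X and Y on Z (footnote 1).
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    r = np.corrcoef(rx, ry)[0, 1]
    # Fisher's z-transform is approximately N(0, 1/(n - k - 3))
    # under the null of zero partial correlation.
    k = z.shape[1]
    z_stat = np.sqrt(n - k - 3) * np.arctanh(r)
    p_value = 2 * stats.norm.sf(abs(z_stat))
    return p_value > alpha

# Toy check, echoing the homework example: in the chain model
# smoking -> tar -> cancer, tar and cancer stay dependent given smoking.
rng = np.random.default_rng(0)
smoking = rng.normal(size=2000)
tar = 2.0 * smoking + rng.normal(size=2000)
cancer = 1.5 * tar + rng.normal(size=2000)
print(partial_corr_indep(cancer, tar, smoking[:, None]))  # False: dependent
```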
but changing weight does not change sex, so there can't be an edge (or even a directed path!) from weight to sex.

Orienting edges is the core of the basic causal discovery procedure, the SGS algorithm (Spirtes et al., 2001, §5.4.1, p. 82). This assumes:

1. The data-generating distribution has the causal Markov property on a graph G.
2. The data-generating distribution is faithful to G.
3. Every member of the population has the same distribution.
4. All the relevant variables are in G (causal sufficiency).
5. There is only one graph G to which the distribution is faithful.
Abstractly, the algorithm works as follows (a code sketch of the edge-removal stage follows the list):

1. Start with a complete undirected graph on all p variables, with edges between every pair of nodes.
2. For each pair of variables X and Y, and each set S of other variables, see whether $X \perp Y \mid S$; if so, remove the edge between X and Y.
3. Find colliders by checking for conditional dependence; orient the edges of colliders.
4. Try to orient undirected edges by consistency with already-oriented edges; do this recursively until no more edges can be oriented.
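Here is a minimal sketch of step 2, the exhaustive edge-removal stage, in Python. The conditional independence test is abstracted as a function `indep(x, y, s)` (in practice a statistical test, such as the partial-correlation test sketched earlier); the collider-orientation steps are omitted, and all names are illustrative.

```python
from itertools import combinations

def sgs_skeleton(variables, indep):
    """Edge-removal stage of the SGS algorithm (a sketch).

    variables : list of variable names.
    indep(x, y, s) : assumed conditional-independence test, returning
        True if x _||_ y given the set s.
    Returns the surviving undirected edges as a set of frozensets.
    """
    edges = {frozenset(pair) for pair in combinations(variables, 2)}
    for x, y in combinations(variables, 2):
        others = [v for v in variables if v not in (x, y)]
        # Condition on every subset of the remaining variables --
        # this exhaustive search is why SGS needs exponentially many
        # tests in the number of variables p.
        for size in range(len(others) + 1):
            if any(indep(x, y, set(s)) for s in combinations(others, size)):
                edges.discard(frozenset((x, y)))
                break
    return edges
```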
Pseudo-code is in the appendix. Call the result of the SGS algorithm $\hat{G}$. If all of the assumptions above hold, and the algorithm is correct in its guesses about when variables are conditionally independent, then $\hat{G} = G$. In practice, of course, conditional independence guesses are really statistical tests based on finite data, so we should write the output as $\hat{G}_n$, to indicate that it is based on only n samples. If the conditional independence test is consistent, then
\[
\lim_{n \to \infty} \Pr\left( \hat{G}_n \neq G \right) = 0
\]
In other words, the SGS algorithm converges in probability on the correct causal structure; it is consistent for all graphs G. Of course, at finite n, the probability of error, that is, of having the wrong structure, is (generally!) not zero, but this just means that, like any statistical procedure, we cannot be absolutely certain that it is not making a mistake.

… Pearl (2009); Janzing (2007) makes a related suggestion.) Arguably, then, using order in time to orient edges in a causal graph begs the question, or commits the fallacy of petitio principii. But of course every syllogism does, so this isn't a distinctively statistical issue. (Take the classic: "All men are mortal; Socrates is a man; therefore Socrates is mortal." How can we know that all men are mortal until we know about the mortality of this particular man, Socrates? Isn't this just like asserting that tomatoes and peppers must be poisonous, because they belong to the nightshade family of plants, all of which are poisonous?) While these philosophical issues are genuinely fascinating, this footnote has gone on long enough, and it is time to return to the main text.

One consequence of the independence tests making errors on finite data can be that we fail to orient some edges; perhaps we missed some colliders. These unoriented edges in $\hat{G}_n$ can be thought of as something like a confidence region: they have some orientation, but multiple orientations are all compatible with the data.[5] As more and more edges get oriented, the confidence region shrinks.

If the fifth assumption above fails to hold, then there are multiple graphs G to which the distribution is faithful. This is just a more complicated version of the difficulty of distinguishing between the graphs X → Y and X ← Y. All the graphs in this equivalence class may have some arrows in common; in that case the SGS algorithm will identify those arrows. If some edges differ in orientation across the equivalence class, SGS will not orient them, even in the limit. In terms of the previous paragraph, the confidence region never shrinks to a single point, simply because the data do not provide the information needed to do so. If there are unmeasured relevant variables, we can get not just unoriented edges but arrows pointing in both directions. This is an excellent sign that some basic assumption is being violated.

The SGS algorithm is statistically consistent, but very computationally inefficient; the number of tests it does grows exponentially in the number of variables p. This is the worst-case complexity for any consistent causal-discovery procedure, but this algorithm just proceeds immediately to the worst case, not taking advantage of any possible short-cuts. A refinement, called the PC algorithm, tries to minimize the number of conditional independence tests performed, essentially by doing easy tests first and using what it can glean from them to cut down on the number of tests which will need to be done later (Spirtes et al., 2001, §5.4.2, pp. 84–88); a sketch of the idea follows. There has been a recent revival of statistical work on the PC algorithm since the paper of Kalisch and Bühlmann (2007), and at the very least it makes a good default procedure.
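To make the contrast with SGS concrete, here is a minimal sketch of the PC-style edge-removal ordering: condition on small sets first, and draw candidate conditioning sets only from the current neighbors of each variable, so that every removed edge shrinks the later search. This is just the ordering idea, not the full PC algorithm (which also records separating sets for the orientation phase); `indep` is the same assumed test as before.

```python
from itertools import combinations

def pc_skeleton(variables, indep):
    """PC-style edge removal (a sketch): small conditioning sets
    first, drawn only from current neighbors, so the easy tests
    prune the graph before the expensive ones are ever attempted."""
    adj = {v: set(variables) - {v} for v in variables}
    size = 0
    # Keep going while some pair still has enough neighbors to
    # supply a conditioning set of the current size.
    while any(len(adj[x] - {y}) >= size for x in variables for y in adj[x]):
        for x in variables:
            for y in list(adj[x]):
                candidates = adj[x] - {y}
                if len(candidates) < size:
                    continue
                if any(indep(x, y, set(s))
                       for s in combinations(candidates, size)):
                    adj[x].discard(y)
                    adj[y].discard(x)
        size += 1
    return {frozenset((x, y)) for x in variables for y in adj[x]}
```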
Suppose that the set of variables we measure is not causally sufficient. Could we at least discover this? Could we possibly get hold of some of the causal relationships? Algorithms which can do this exist (e.g., the CI and FCI algorithms of Spirtes et al. (2001, ch. 6)), but they require considerably more graph-fu. These algorithms can succeed in removing some edges between observable variables and in definitely orienting some of the remaining edges. If there are actually no latent common causes, they end up acting like the SGS or PC algorithms.
[5] I say "multiple orientations" rather than "all orientations" because picking a direction for one edge might induce an orientation for others.
make the required data size as large as he likes by weakening the dependence, without ever setting it to zero.[7] The upshot is that uniform, universal consistency is out of the question: we can be universally consistent, but without a uniform rate of convergence; or we can converge uniformly, but only on some less-than-universal class of distributions. These might be ones where all the dependencies which do exist are not too weak (and so not too hard to learn reliably from data), or where the number of true edges is not too large (so that if we haven't seen edges yet, they probably don't exist; Janzing and Herrmann, 2003; Kalisch and Bühlmann, 2007). It's worth emphasizing that the Robins et al. (2003) no-uniform-consistency result applies to any method of discovering causal structure from data. Invoking human judgment, Bayesian priors over causal structures, etc., etc., won't get you out of it.
[7] Roughly speaking, if X and Y are dependent given Z, the probability of missing this conditional dependence with a sample of size n should go to zero like $O(2^{-n I[X;Y \mid Z]})$, I being mutual information. To make this probability equal to, say, $\alpha$, we thus need $n = O(-\log \alpha / I)$ samples. The Adversary can thus make n extremely large by making I very small, yet positive.
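A quick numerical illustration of the footnote's calculation (the 5% level is just an arbitrary choice for the example):

```python
import math

# Footnote 7: the miss probability behaves like 2^(-n*I), so holding
# it at alpha requires roughly n = -log2(alpha) / I samples.  The
# Adversary shrinks I toward zero and the required n explodes.
alpha = 0.05
for I in [0.1, 0.01, 0.001, 0.0001]:
    n = -math.log2(alpha) / I
    print(f"I = {I:g} bits -> n ≈ {n:,.0f} samples")
```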
4 Exercises

To think through, not to hand in.
References

Chu, Tianjiao and Clark Glymour (2008). "Search for Additive Nonlinear Time Series Causal Models." Journal of Machine Learning Research, 9: 967–991. URL http://jmlr.csail.mit.edu/papers/v9/chu08a.html.

Hall, Peter, Jeff Racine and Qi Li (2004). "Cross-Validation and the Estimation of Conditional Probability Densities." Journal of the American Statistical Association, 99: 1015–1026. URL http://www.ssc.wisc.edu/~bhansen/workshop/QiLi.pdf.

Hoyer, Patrik O., Dominik Janzing, Joris Mooij, Jonas Peters and Bernhard Schölkopf (2009). "Nonlinear causal discovery with additive noise models." In Advances in Neural Information Processing Systems 21 [NIPS 2008] (D. Koller, D. Schuurmans, Y. Bengio and L. Bottou, eds.), pp. 689–

Janzing, Dominik (2007). "On causally asymmetric versions of Occam's Razor and their relation to thermodynamics." E-print, arxiv.org. URL http://arxiv.org/abs/0708.3411.

Janzing, Dominik and Daniel Herrmann (2003). "Reliable and Efficient Inference of Bayesian Networks from Sparse Data by Statistical Learning Theory." Electronic preprint. URL http://arxiv.org/abs/cs.LG/0309015.

Kalisch, Markus and Peter Bühlmann (2007). "Estimating High-Dimensional Directed Acyclic Graphs with the PC-Algorithm." Journal of Machine Learning Research, 8: 616–636. URL http://jmlr.csail.mit.edu/papers/v8/kalisch07a.html.

Kalisch, Markus, Martin Mächler and Diego Colombo (2010). pcalg: Estimation of CPDAG/PAG and causal inference using the IDA algorithm. URL http://CRAN.R-project.org/package=pcalg. R package version 1.1-2.

Kalisch, Markus, Martin Mächler, Diego Colombo, Marloes H. Maathuis and Peter Bühlmann (2011). "Causal Inference using Graphical Models with the R Package pcalg." Journal of Statistical Software, submitted. URL ftp://ftp.stat.math.ethz.ch/Research-Reports/Other-Manuscripts/buhlmann/pcalg-software.pdf.

Li, Qi and Jeffrey Scott Racine (2007). Nonparametric Econometrics: Theory and Practice. Princeton, New Jersey: Princeton University Press.

Pearl, Judea (2009). Causality: Models, Reasoning, and Inference. Cambridge, England: Cambridge University Press, 2nd edn.

Reichenbach, Hans (1956). The Direction of Time. Berkeley: University of California Press. Edited by Maria Reichenbach.

Robins, James M., Richard Scheines, Peter Spirtes and Larry Wasserman (2003). "Uniform Consistency in Causal Inference." Biometrika, 90: 491–515. URL http://www.stat.cmu.edu/tr/tr725/tr725.html.

Russell, Bertrand (1927). The Analysis of Matter. International Library of Philosophy, Psychology and Scientific Method. London: K. Paul Trench, Trubner and Co. Reprinted New York: Dover Books, 1954.

Spirtes, Peter, Clark Glymour and Richard Scheines (2001). Causation, Prediction, and Search. Cambridge, Massachusetts: MIT Press, 2nd edn.

Sriperumbudur, Bharath K., Arthur Gretton, Kenji Fukumizu, Bernhard Schölkopf and Gert R. G. Lanckriet (2010). "Hilbert Space Embeddings and Metrics on Probability Measures." Journal of Machine Learning Research, 11: 1517–1561. URL http://jmlr.csail.mit.edu/papers/v11/sriperumbudur10a.html.

Wiener, Norbert (1961). Cybernetics: Or, Control and Communication in the Animal and the Machine. Cambridge, Massachusetts: MIT Press, 2nd edn. First edition New York: Wiley, 1948.