4 Testing the DAG with Causal Constraints


This chapter covers

  • Using d-separation to reason about how causality constrains conditional independence
  • Using networkx and pgmpy to do d-separation analysis
  • Refuting a causal DAG using conditional independence tests
  • Refuting a causal DAG using Verma constraints

Causality in the data generating process induces constraints, such as conditional independence, on the joint probability distribution of the variables in that process. We saw a flavor of these constraints in the previous chapter in the form of the Markov property: effects become independent of their indirect causes once we condition on their direct causes. These constraints give us the ability to test our model against the data; if the causal DAG we build is correct, we should see evidence of these constraints in the data.
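To make the Markov property concrete, here is a minimal sketch using pgmpy. The DAG and the variable names (rain, wet_ground, slippery) are made-up assumptions for illustration, not an example from this book's data.

# A minimal sketch of the Markov property, assuming pgmpy is installed:
# each variable is independent of its non-descendants given its parents.
from pgmpy.base import DAG

# Made-up chain: rain -> wet_ground -> slippery
dag = DAG([("rain", "wet_ground"), ("wet_ground", "slippery")])

# Read the Markov property straight off the graph: slippery is independent
# of its indirect cause (rain) given its direct cause (wet_ground).
print(dag.local_independencies("slippery"))

# Enumerate every conditional independence the DAG implies.
print(dag.get_independencies())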

In this chapter, we'll use statistical analysis of the data to test our causal DAG. Specifically, we'll try to refute it, meaning we'll look for ways the data suggests the causal DAG is wrong. We'll learn to test the DAG using conditional independence tests, as well as an extension of conditional independence called Verma constraints, which we can test even when some variables in the causal DAG are not observed in the data.
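Before we dive in, here's a hedged sketch of that refutation idea: take one conditional independence the candidate DAG implies and check it against data. The DAG (A -> B -> C), the simulated data, and the simple partial-correlation test below are illustrative assumptions, not the exact testing procedure we develop later in the chapter.

# Sketch: testing an implied conditional independence, A _|_ C | B,
# under the candidate DAG A -> B -> C. Assumptions for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 5_000
a = rng.normal(size=n)
b = a + rng.normal(size=n)
c = b + rng.normal(size=n)  # data actually generated by A -> B -> C

# Partial correlation: regress A and C on B, then correlate the residuals.
resid_a = a - np.polyval(np.polyfit(b, a, 1), b)
resid_c = c - np.polyval(np.polyfit(b, c, 1), b)
r, p = stats.pearsonr(resid_a, resid_c)
print(f"partial corr(A, C | B) = {r:.3f}, p = {p:.3f}")

# A small p-value would be evidence against the implied independence and
# therefore against the candidate DAG; here the DAG is correct, so we
# expect a partial correlation near zero.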

To start, we look at the concept of d-separation. D-separation tells us which conditional independence constraints should hold given our causal DAG, and it is the keystone of graphical causal inference.
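The following sketch previews what a d-separation query looks like in code, using networkx and pgmpy. The graph and variable names are made up for illustration, and function names can differ slightly across library versions (for example, newer networkx releases rename d_separated to is_d_separator).

# Sketch: checking d-separation in a made-up DAG with networkx and pgmpy.
import networkx as nx
from pgmpy.base import DAG

edges = [
    ("genetics", "smoking"),
    ("genetics", "cancer"),
    ("smoking", "tar"),
    ("tar", "cancer"),
]

# networkx: d_separated takes sets of nodes (renamed is_d_separator in
# newer networkx versions).
G = nx.DiGraph(edges)
print(nx.d_separated(G, {"smoking"}, {"cancer"}, {"tar", "genetics"}))  # True

# pgmpy: is_dconnected is the complement of d-separation.
dag = DAG(edges)
print(not dag.is_dconnected("smoking", "cancer", observed=["tar", "genetics"]))  # True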

4.1 Examples of how causality induces conditional independence

4.1.1 Colliders

4.1.2 Domain-free reasoning with a causal graph

4.2 D-separation and conditional independence

4.2.1 Defining d-separation

4.2.2 Examples of d-separation

4.2.3 D-separation in code

4.2.4 Don’t conflate d-separation with conditional independence

4.3 Refuting the causal DAG

4.3.1 Revisiting the causal Markov property

4.3.2 Refutation using conditional independence tests

4.3.3 Some tests are more important than others

4.4 Don’t focus too much on conditional independence tests

4.4.1 Statistical tests always have some chance of error

4.4.2 Testing causal DAGs with traditional CI tests is fundamentally flawed

4.4.3 p-values vary with the size of the data

4.4.4 The problem of multiple comparisons

4.4.5 CI testing doesn’t work well in machine learning settings

4.5 Refuting a causal DAG given latent variables

4.5.1 Evaluating conditional independence via the Verma constraints

4.5.2 Verma constraint intuition

4.5.3 Testing a Verma constraint

4.5.4 Summary

4.6 Summary