About me

[Photo: nice_port_selfie]

3rd year PhD student in Statistics and Public Policy.

One-time semi-professional passista.

These days I study semi-parametric causal inference. I work on developing estimators and techniques that draw on machine learning tools and expert knowledge to carefully answer policy questions.

Email: jmauro@andrew dot cmu dot edu

More info: sites.google.com/view/jacquelinemauro/home


Prediction v Inference v Causal Inference

Maria Cuellar and I were on a long drive back from a conference recently, and to keep ourselves entertained we had a wide-ranging argument about the difference between prediction, inference and causal inference. Yea, this really is how statisticians have fun.

I was confused about where inference fit in the whole story. I figured, prediction is just fitting a model to get the best \hat{Y}, regardless of the “truth” of the model. If I find some coefficients and form \hat{Y} = \hat{\beta}_0 + \hat{\beta}_1 X, I’m only saying that if I plug in some new X, I’ll predict a new \hat{Y} according to this model. Easy.
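To make that concrete, here’s a minimal Python sketch of the prediction mindset (the data-generating process below is made up purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up data; for prediction we don't care whether a line is the "true" model
x = rng.uniform(0, 10, size=100)
y = 3 + 2 * x + rng.normal(scale=2, size=100)

# Fit a line by least squares: polyfit returns (slope, intercept) for deg=1
b1, b0 = np.polyfit(x, y, deg=1)

# The whole claim: plug in a new X, get a predicted Y-hat from this model
x_new = 5.0
y_hat = b0 + b1 * x_new
print(f"Y-hat at X = {x_new}: {y_hat:.2f}")
```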

If I care what the real relationship is between variables, I’m doing inference, right? That is, I claim Y = \beta_0 + \beta_1 X + \epsilon because I think that every one-unit increase in X really implies a \beta_1 increase in Y, with some normal error. In other words, I think that when Y was being generated, it really was generated from a normal distribution with mean \mu = \beta_0 + \beta_1 X and some variance. I’ll get confidence intervals around my coefficients and say that I’m 95% sure about my conclusions.
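The inference version of the same fit asks about the coefficients themselves. A sketch using statsmodels (again with invented data):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=100)
y = 3 + 2 * x + rng.normal(scale=2, size=100)

# Fit Y = beta_0 + beta_1 * X + epsilon and ask about the betas themselves
X = sm.add_constant(x)
fit = sm.OLS(y, X).fit()
print(fit.params)      # point estimates of beta_0 and beta_1
print(fit.conf_int())  # 95% confidence intervals for each coefficient
```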

But I’m playing fast and loose with language here. When I say “implies” do I mean “causes”? Most people will quickly and firmly say no to that can of worms. But! When people talk about regression, they will often say that X affects Y, and “affects” is just a different word for “causes”, so… what’s the deal? How is this not (poor) causal inference?

Well, it’s sort of still my impression that it is. But that doesn’t mean there isn’t such a thing as inference that’s totally separate from causal inference.

Inference asks the question — from this sample, what can I learn about a parameter of the entire population? So if I estimate the median of a sample, I can have some idea of what the median is in the whole population. I can put a confidence interval around it and be totally happy. This isn’t the same as prediction and prediction intervals, because I’m not asking about the median of some future sample and how sure I am that my guess of the median will be in the right range. I’m asking about the real, true, underlying median in the population.
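One concrete way to do that for a median is a percentile bootstrap. Here’s a sketch (the exponential sample is just a stand-in for real data):

```python
import numpy as np

rng = np.random.default_rng(2)
sample = rng.exponential(scale=5, size=200)  # stand-in for a real dataset

# Percentile bootstrap: resample with replacement, recompute the median, repeat
boot_medians = [
    np.median(rng.choice(sample, size=sample.size, replace=True))
    for _ in range(5000)
]
lo, hi = np.percentile(boot_medians, [2.5, 97.5])
print(f"sample median: {np.median(sample):.2f}, 95% CI: ({lo:.2f}, {hi:.2f})")
```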

So what about that regression example? Well, inference will say, there is a true \beta_1 in the population, such that if I computed (X^TX)^{-1}X^TY over the whole population, I would get back \beta_1. Does that mean that \beta_1 has any real meaning? No. It’s some number that exists and I can get a confidence interval around. But if my model is wrong, the coefficients don’t say anything particularly interpretable about the relationship between X and Y.
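You can see this numerically. Below is a made-up example where the truth is quadratic but we insist on fitting a line: (X^TX)^{-1}X^TY still settles down to a perfectly well-defined number as n grows, it just isn’t the slope of any real relationship.

```python
import numpy as np

rng = np.random.default_rng(3)

# The truth is quadratic, but we fit a line anyway
n = 100_000  # large n, so the estimate is close to its population value
x = rng.uniform(0, 10, size=n)
y = x**2 + rng.normal(scale=1, size=n)

# OLS by hand: solve (X'X) beta = X'Y
X = np.column_stack([np.ones(n), x])
beta = np.linalg.solve(X.T @ X, X.T @ y)
print(beta)  # a stable "intercept" and "slope" -- for a model that is wrong
```

(For Uniform(0, 10) X’s, that population slope works out to about 10 — a perfectly estimable number that doesn’t describe how Y actually moves with X.)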

All that to say, Maria was right and I’m sorry.


Not good science

I often read The American Conservative, a conservative outlet which I think is generally careful, smart and honest. I recommend it, especially if you’re a liberal who is looking for another viewpoint.

With that said, this article fails in its interpretation of data. Spectacularly. The author presents this figure:

He then concludes from it: “for communities who wish for their children to remain heterosexual, to form heterosexual marital unions, traditional families, etc., neutrality on the matter of sexuality will result in five to eight times as many people claiming homosexuality or bisexuality as would have otherwise been the case.”

Slow down.

This leap is not warranted. Setting aside any ideological disagreements, the scientific argument being made has a number of statistical issues that anyone who has dealt with data should identify at a glance. They are:

  1. The figure has no confidence intervals — we have no way to know if the trends we are looking at would be wiped out by randomness and/or missingness.
  2. We have no information on missingness, coverage errors or the many other issues that arise with survey taking.
  3. We have no idea how these lines were generated (splines? linear smoothers?).
  4. The figure shows the share identifying as LGB by age, not the number who are LGB. If older people are more likely to call themselves straight regardless of their underlying orientation, we would see the same pattern.
  5. This figure tells us nothing about the cause of the trend. To assert that this figure tells us that “neutrality on the matter of sexuality” is the reason behind any trend shown here is way premature.

The author looked at a figure and jumped to a conclusion he likely already believed, because it seemed to lend some support to his beliefs. I think we are all vulnerable to this kind of thinking. Luckily, downer statisticians are here to remind you that a scatterplot of a survey can only tell you so much. And that so much is really not that much.


Even stats 101 is better with gifs

Everything is better with a gif.

I made some figures for TA’ing last fall, and I like them. Basically, frequentist statistics can seem weird, but computers can sample from the same distribution/population over and over again, so I think that’s a handy way to think about it when people talk about repeated experiments.

In this first one you sample from a distribution that is not Normal (meaning it doesn’t look like a bell curve) a bunch of times, and each time you calculate the sample average. If you keep track of your sample averages in the figure on the right, they start to look Normal. Magic.
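Something like the following Python sketch reproduces the idea as a static plot (this is a rough reconstruction, not the exact code behind the gif; I’m assuming a Poisson(1) population to make the skew obvious):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(4)
n_samples, sample_size = 2000, 30

# Draw many samples from a skewed, decidedly non-Normal distribution...
draws = rng.poisson(lam=1, size=(n_samples, sample_size))

# ...and look at the distribution of the sample averages
means = draws.mean(axis=1)

fig, axes = plt.subplots(1, 2, figsize=(9, 3))
axes[0].hist(draws.ravel(), bins=range(0, 8))
axes[0].set_title("Raw Poisson(1) draws: not a bell curve")
axes[1].hist(means, bins=30)
axes[1].set_title("2000 sample averages: looks Normal")
plt.show()
```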

 

[Animation: CLT_from_pois.gif]

 

In the second one, you sample from a distribution a bunch of times and each time you calculate the sample average and the 95% confidence interval. The line turns red each time the confidence interval doesn’t contain the true mean (which is 10 in this case). The confidence interval misses about 5% of the time. MAGIC.
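Again, a rough static reconstruction in Python (assuming a Poisson(10) population, so the true mean is 10 as in the animation, and a standard t-interval):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
true_mean, n, reps = 10, 30, 1000

misses = 0
for _ in range(reps):
    sample = rng.poisson(lam=true_mean, size=n)
    # 95% t-interval for the mean from this one sample
    half_width = stats.t.ppf(0.975, df=n - 1) * sample.std(ddof=1) / np.sqrt(n)
    lo, hi = sample.mean() - half_width, sample.mean() + half_width
    if not (lo <= true_mean <= hi):
        misses += 1  # this is an interval the animation would paint red

print(f"missed {misses} of {reps} intervals ({misses / reps:.1%})")
```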

 

[Animation: ConfInt.gif]


Pubs

I’ve recently gone from having very few publications to having a couple, so I’m posting them up here.

In the first, we studied stressors on the US ICBM (Inter-Continental Ballistic Missile) force. The second looked at the Los Angeles Fire Department’s hiring practices, which had come under considerable… fire. In the third, I lent a small hand looking at publishing trends in China, and the last few are articles I wrote as a fresh-faced college student.

Enjoy!

  1. Hardison, C. M., Rhodes, C., Mauro, J. A., Daugherty, L., Gerbec, E. N., & Ramsey, C. (2014). Identifying Key Workplace Stressors Affecting Twentieth Air Force: Analyses Conducted from December 2012 Through February 2013. Santa Monica, CA: RAND Corporation, RR-592-AF.
  2. Hardison, C. M., Lim, N., Keller, K. M., Marquis, J. P., Payne, L. A., Bozick, R., Mariano, L. T., Mauro, J. A., Miyashiro, L., Oak, G. S., & Saum-Manning, L. (2015). Recommendations for Improving the Recruiting and Hiring of Los Angeles Firefighters. Santa Monica, CA: RAND Corporation, RR-687-LAFD. (http://www.rand.org/pubs/research_reports/RR687.html)
  3. Xin, S., Mauro, J., Mauro, T., Elias, P., & Man, M. (2013). Ten-year publication trends in dermatology in mainland China. International Journal of Dermatology, 1-5.
  4. Columbia Political Review: “Seeing Through the Fog: San Francisco Provides a Model for Health Care that Works” (http://goo.gl/Fas4t) and “Empowe(red): Ethical Consumerism and the Choices We Make” (http://goo.gl/g7GzF)