
Sylvester’s criterion to diagnose Heywood cases.

So… here’s a trick I use sometimes that I’ve never seen anywhere else and it may be helpful to some people.

If you work with latent variable models (Factor Analysis, Structural Equation Modelling, Item Response Theory, etc.), there's a good chance that you have encountered some version of a warning about a covariance matrix being "not positive definite". This is an important warning because the software is telling you that your covariance matrix is not a valid covariance matrix and, therefore, your analysis is suspect. Within the world of latent variables, we usually call these types of warnings Heywood cases.

Now, most Heywood cases are very easy to spot because they fall into one of two broad classes: negative variances or correlations greater than 1. When you inspect your model matrices and see either of those two cases, you know exactly which variable is giving you trouble. Thing is (as I found out a few years ago), there are other types of Heywood cases that are a lot more difficult to diagnose. Consider the following matrix that I once got while helping a student with his analysis:


          space    lstnng   actvts   prntst   persnl   intrct   prgrmm
space     1.000
lstnng    0.599    1.000
actvts    0.706    0.646    1.000
prntst    0.702    0.459    0.653    1.000
persnl    0.591    0.582    0.844    0.776    1.000
intrct    0.627    0.964    0.501    0.325    0.639    1.000
prgrmm    0.493    0.602    0.981    0.687    0.944    0.642    1.000

This is the model-implied correlation matrix I obtained through the analysis that gave the Heywood case warning. The student was a bit puzzled because, although we had reviewed this type of situation in class, we had only mentioned the cases of negative variances or correlations greater than one. Here he had neither a negative variance nor a correlation greater than one… but he still got a warning about non-positive definiteness.

My first reaction was, obviously, to check the eigenvalues of the matrix and, lo and behold, there it was:
[1] 5.01377877 1.00744933 0.62602056 0.30393170 0.16671742 0.01317704 -0.13107483
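
For the record, this check is a one-liner in R. Assuming the matrix is stored under the name Q (the name I use below), it is just:

eigen(Q, symmetric = TRUE, only.values = TRUE)$values

The negative eigenvalue at the end of the output is what flags the matrix as non-positive definite.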

So… yeah. This was, indeed, an invalid correlation/covariance matrix and we needed to further diagnose where the problem was coming from… but how? Enter our good friend, linear algebra.

If you have ever taken a class in linear algebra beyond what’s required in a traditional methodology/statistics course sequence for social sciences, you may have encountered something called the minor of a matrix. Minors are important because they’re used to calculate the determinant of a matrix. They’re also important for cases like this one because they break down the structure of a matrix into simpler components that can be analyzed. Way in the back of my mind I remembered from my undergraduate years that positive-definite matrices had something special about their minors. So when I came home I went through my old, OLD notes and found this beautiful theorem known as Sylvester’s criterion:

A Hermitian matrix is positive-definite if and only if all of its leading principal minors are positive.

All covariance matrices are Hermitian (the subject for another blogpost), so we're only left to wonder what a leading principal minor is. Well, imagine starting at the [1,1] coordinate of a matrix (so the upper-left entry) and moving down the diagonal, expanding by one row and one column at a time; the determinants of the nested submatrices you get along the way are the leading principal minors. A picture makes this a lot clearer:

[Figure "SylvesterCriterion": a matrix Q with nested red (1×1), blue (2×2) and yellow (3×3) upper-left blocks marking the leading principal submatrices]

So… yeah. The red square (the q11 entry) gives the first leading principal minor. The blue square (a 2 x 2 matrix) gives the 2nd, the yellow square (a 3 x 3 matrix) gives the 3rd, and on it goes until you reach the full n x n matrix. For the matrix Q to be positive-definite, all n leading principal minors need to be positive. So if you want to "diagnose" your matrix for positive definiteness, all you need to do is start from the upper-left corner and check the determinants consecutively until you find one that is not greater than 0. Let's try it on the previous example. Notice that I'm calling the matrix 'Q':

> det(Q[1:2,1:2])
[1] 0.641199

> det(Q[1:3,1:3])
[1] 0.2933427

> det(Q[1:4,1:4])
[1] 0.01973229

> det(Q[1:5,1:5])
[1] 0.003930676

> det(Q[1:6,1:6])
[1] -0.003353769

The first [1,1] corner is a given (it's a correlation matrix, so that entry is 1 and it's positive). Then we move down through the 2×2 matrix, the 3×3 matrix… all the way to the 5×5 matrix. By then the determinant is getting very small, so I suspected that whatever the issue was, it had to do with the variables "persnl", "intrct" or "prgrmm". The final determinant, which is negative, pointed towards the culprit: whatever problem this matrix exhibited had to do with the relationship between "intrct" and "prgrmm".
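
If you would rather not type out every det() call by hand, here is a quick sketch that automates the check (again assuming the correlation matrix is stored as Q):

# determinants of all the leading principal submatrices of Q
minors <- sapply(seq_len(ncol(Q)), function(k) det(Q[1:k, 1:k, drop = FALSE]))
minors
which(minors <= 0)   # the first offending index shows where positive definiteness breaks down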

Once I pointed out to the student that I suspected the problem was coming from one of these two variables, a careful examination revealed the cause: the "intrct" item was a reverse-coded item but, for some reason, several of the participants' responses had not been reverse-coded. So you had a good chunk of the responses to this item pointing in one direction and a smaller (albeit still large) number pointing in the other direction. The moment this item was fully reverse-coded, the non-positive definite issue disappeared.

I guess there are two lessons to this story: (1) Rely on the natural structure of things to diagnose problems and (2) Learn lots of linear algebra 🙂

The Spearman correlation doesn’t need to tell you anything about the Pearson correlation

I… am a weird guy (but then again, that isn't news, hehe). And weird people sometimes have weird pet peeves. Melted cheese on Italian food and hamburgers? Great! I'll take two. Melted cheese anywhere else? Blasphemy! That obviously extends to Statistics. And one of my weird pet peeves (which was actually strong enough to prompt me to write it up as a chapter of my dissertation and a subsequent published article) is when people conflate the Pearson and the Spearman correlation.

The theory of what I am going to talk about is developed in said article but when I was cleaning more of my computer files I found an interesting example that didn’t make it there. Here’s the gist of it:

I really don’t get why people say that the Spearman correlation is the ‘robust’ version or alternative to the Pearson correlation. Heck, even if you simply google the words Spearman correlation the second top hit reads

 Spearman’s rank-order correlation is the nonparametric version of the Pearson product-moment correlation

When I read that, my mathematical mind immediately goes to "if this is the non-parametric version of the Pearson correlation, that means it also estimates the same population parameter". And honest to G-d (don't quote me on that one, though. But I just *know* it's true) I feel like the VAST MAJORITY of people think exactly that about the Spearman correlation. And I wouldn't blame them either… you can't open an intro textbook for social scientists that doesn't have some dubious version of the previous statement. "Well", the reader might think, "if that is not true, then why aren't more people saying it?" The answer is that, for better or worse, this is one of those questions that is very simple to pose but whose answer is mathematically complicated. Still, here's the gist of it (again, for those who like theory as much as I do, read the article).

The Spearman rank correlation is defined, in the population, like this:

\rho_S = 12\int_{0}^{1}\int_{0}^{1} C(u_x, u_y)\, du_x\, du_y - 3

where the u_i are uniformly-distributed random variables and C(\cdot) is the copula function that relates them (more on copulas here). By defining the Spearman rank correlation in terms of the lower-dimensional marginals it co-relates (i.e. the u's) and the copula function, it becomes apparent that the overlap with the Pearson correlation depends entirely on what C(u_x,u_y) is. In fact, it is not hard to show that if C(\cdot) is a Gaussian copula and the marginals are normal, then the following identity can be derived:

\rho_S=\frac{6}{\pi}\sin^{-1}\left ( \frac{\rho}{2} \right )

This identity relates the Pearson correlation \rho to the Spearman correlation. We've known it since the times of Pearson, because he came up with it (albeit not explicitly) and called it the "grade correlation". But from this follows the obvious point: an identity like the one above only holds for that particular copula. There are copula functions for which the Spearman and the Pearson correlation have a crazy, wacky relationship… and that is what I am going to show you today.
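
Incidentally, the identity itself is easy to check by simulation. A quick sketch, assuming bivariate normal data (i.e. a Gaussian copula with normal marginals) and a Pearson correlation of 0.5:

library(MASS)
set.seed(123)

rho <- 0.5
XY  <- mvrnorm(1e6, mu = c(0, 0), Sigma = matrix(c(1, rho, rho, 1), 2))

cor(XY[, 1], XY[, 2], method = "spearman")   # sample Spearman correlation
(6 / pi) * asin(rho / 2)                     # theoretical value, roughly 0.48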

I don't remember why I didn't include this example in the article, but it is a very efficient one. It shows a case where the Spearman correlation is close to 1 but the Pearson correlation is close to 0 (obviously within sampling error).

Define Z \sim N(0, 0.1) (where 0.1 is the standard deviation), X=Z^{201} and Y= e^{Z}. If you use R to simulate a very large sample (say 1 million), you can find the following Spearman and Pearson correlations:


N <- 1000000
Z <- rnorm(N, mean=0, sd=0.1)
X <- Z^201
Y <- exp(Z)

> cor(X,Y, method="pearson")
[1] 0.004009381

> cor(X,Y, method="spearman")
[1] 0.9963492

The trick for this example is to notice that both X and Y are strictly increasing functions of Z, so their ranks agree perfectly and the Spearman correlation is essentially 1. But X = Z^{201} is so violently nonlinear (it is microscopic for almost every value of Z and only non-negligible in the extreme tails) that the linear association with Y, which is what the Pearson correlation measures, is practically 0.

In any case, the point is that, even without being too formal, it's not overly difficult to see that the Spearman correlation is its own statistic and estimates its own population parameter, one that may or may not have anything to do with the Pearson correlation, depending on the copula function describing the bivariate distribution.


Power Analysis for Multilevel Logistic Regression

WARNING (1): This app can take a little while to run. Do not close your web browser unless it gives you an error. If the app appears 'stuck' but you haven't received an error, it means the simulation is still running in the background.

WARNING (2): If you keep getting a 'disconnected from server' error, close your browser and open a new window. If the problem persists, it means too many people have tried to access the app that day and the server has shut down. This app is hosted on a free server and can only accommodate a certain number of users every day.

This app will perform computer simulations to estimate power for multilevel logistic regression models, allowing for continuous or categorical covariates/predictors and their interactions. The continuous predictors come in two types: normally distributed or skewed (i.e. χ2 with 1 degree of freedom). It currently only supports binary categorical covariates/predictors (i.e. Bernoulli-distributed), but with the option to manipulate the probability parameter p to simulate imbalance between the groups.

The app will give you the power for each individual covariate/predictor AND for the variance component of the intercept (if you choose to fit a random-intercept model) or of the slope (if you choose to fit a model with both a random intercept and a random slope). It uses the Wald test statistic for the fixed-effect predictors and a 1-degree-of-freedom likelihood-ratio test for the random effects (← yes, I know this is conservative, but it's the fastest one to implement).

When you open the app, here’s how it looks:

[Screenshot: the app's landing page]

What **you**, as the user, need to provide is the following:

[Screenshot: Level 1 and Level 2 sample size inputs]

The Level 1 and Level 2 sample sizes. If I were to use the ubiquitous example of "children in schools", the Level 1 sample size would be the children (individuals within a cluster) and the Level 2 sample size would be the schools (the number of clusters). For demonstration purposes, I'm asking here for 50 'children' in each of 10 'schools', for a total sample size of 50×10 = 500 children.

[Screenshot: random-effects variance inputs]

The variances of the random effects. You can either choose to fit an intercept-only model (so no variance for the slope) or a model with both a random intercept AND a random slope. You cannot fit a random-slope-only model here, and you cannot set the variances to 0 to fit a single-level logistic regression (there's other software to do power analysis for single-level logistic regression). At least the variance of the intercept needs to be specified. Notice that the app defaults to an intercept-only model, so under 'Select Covariate' it will say 'None'. That changes when you click on the drop-down menu, where you can choose which predictor gets the random slope. Notice that you can only choose one predictor to have a random slope; I will work on the general case in the future.

[Screenshot: number of covariates]

The number of covariates (or predictors) which I believe is pretty self-explanatory. Just notice that the more covariates you add, the longer it will take for the simulation to run. The default in the app is 2 covariates.

[Screenshot: covariate specification panel]

This is the core of the simulation engine, because the user needs to specify:

  • Regression coefficients (‘Beta’).  This space lets the user specify the effect size for the regression coefficients under investigation. The default is 0.5 but that can be changed to any number. In the absence of any outside guidance, Cohen’s small-medium-large effect sizes are recommended. Remember that the regression coefficient for binary predictors is conceptualized as a standardized mean difference so it should be in Cohen’s d metric.
  • Level of the predictor (‘Level’). The app only supports 2-level models, so the options are ‘1’ or ‘2’. This section indicates whether a predictor belongs to the Level 1 sample (e.g. the ‘children’) or the Level 2 sample (e.g. the ‘schools’). Notice that whichever predictor gets assigned a random slope MUST also be selected as Level 1; otherwise the power analysis results will not make sense. The app currently only supports one Level 1 predictor with a random slope. Other predictors can be included at Level 1, but they won't have the option for a random slope component.
  • Distribution of the covariates (‘Distribution’).  Offers 3 options: normally-distributed, skewed  (i.e. χ2  with 1 degree of freedom or a skew of about √8) and binary/Bernoulli-distributed. For the binary predictor the user can change the population parameter and create imbalance between the groups. So, for instance, if p=0.3 then 30% of the sample would belong to the group labelled as ‘1’ and 70% to the group labelled as ‘0’. The default for this option is 0.5 to create an even 50/50 split.
  • Intercept (‘Intercept Beta’).  Lets the user define the intercept for the regression model. The default is 0 and I wouldn’t recommend changing it unless you’re making inferences about the intercept of the regression model.

[Screenshot: 2-way interaction selection]

Once the number of covariates has been selected, the app will offer the user all possible 2-way interaction effects, irrespective of the levels of the predictors and their distributions. The user can select whichever 2-way interaction is of interest and assign it an effect size/regression coefficient (i.e. ‘Beta’). The app will use this effect size to calculate power. Notice that the distribution of the interaction is fully defined by the distributions of its constituent main effects.

[Screenshot: number of simulation replications]

The number of datasets generated using the population parameters previously defined by the researcher. The default is 10 but I would personally recommend a minimum of 100. The larger the number of replications the more accurate the results will be but also the longer the simulation will take.

[Screenshot: progress bar shown while the simulation runs]

The simulated power is calculated as the proportion of statistically significant results out of the number of simulated datasets and will be printed here. Notice the progress bar indicating that the simulation is still running. For a 2-covariate model with both a random intercept and a random slope, the simulation took almost 3 minutes to run. Expect longer waiting times if the model has lots of covariates.
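
For the curious, the general simulate-fit-count logic looks roughly like the sketch below. This is not the app's actual code (that relies on simglm and paramtest); it is a stripped-down illustration using lme4 directly, with a single Level 1 covariate, a random intercept only, and made-up default values:

library(lme4)

sim_power <- function(n_l1 = 50, n_l2 = 10, beta = 0.5, tau0 = 0.5, nsim = 100) {
  pvals <- replicate(nsim, {
    d <- data.frame(
      school = rep(1:n_l2, each = n_l1),   # cluster indicator
      x      = rnorm(n_l1 * n_l2)          # one Level 1 covariate
    )
    u0  <- rnorm(n_l2, sd = sqrt(tau0))    # random intercepts for the clusters
    eta <- beta * d$x + u0[d$school]       # linear predictor (fixed intercept set to 0)
    d$y <- rbinom(nrow(d), 1, plogis(eta))
    fit <- glmer(y ~ x + (1 | school), data = d, family = binomial)
    summary(fit)$coefficients["x", "Pr(>|z|)"]   # Wald p-value for the fixed effect of x
  })
  mean(pvals < .05)                        # proportion of significant results = simulated power
}

sim_power()   # 50 'children' in each of 10 'schools', beta = 0.5, intercept variance 0.5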

[Screenshot: sample of a full power analysis output]

This is what a sample of a full power analysis looks like. The estimated power can be found under the column ‘Power’. The column labelled ‘NA’ shows the proportion of models that did not converge. In this case, all models converged (there are 0s throughout the ‘NA’ column), but the power for the fixed and random effects is relatively low, with the exception of the power for the variance of the random intercept. In this example, one would need to either increase the effect size from 0.5 to something larger or increase the Level 1 and Level 2 sample sizes in order to obtain an acceptable power level of 80%. You can either download your power analysis results as a .csv file or copy-paste them by clicking on the appropriate button.

Finally, here is the link for the shiny web app:

YOU CAN CLICK HERE TO ACCESS THE APP


If you’re an R user and would either like to see the code that runs underneath or would prefer to work directly with it for your simulation, you can check it out on my github account.
If this app is of any use to you and you’d like to cite it, please also cite the lme4, simglm and paramtest R packages. This is really just a shiny wrapper for the 3 packages put together. Those 3 packages are the ones doing most of the “heavy-lifting” when it comes to the simulation and calculations.

Ordinal Alpha and Parallel Analysis

This shiny app will:

– Give you the polychoric (or tetrachoric, in case of binary data) correlation matrix

– Do Parallel Analysis and a scree plot based on the polychoric (or tetrachoric) correlation matrix

– Calculate ordinal alpha as recommended in:

Zumbo, B. D., Gadermann, A. M., & Zeisser, C. (2007). Ordinal Versions of Coefficients Alpha and Theta For Likert Rating Scales. Journal of Modern Applied Statistical Methods, 6, 21-29.

It currently takes in certain SPSS files (.sav extensions from older versions of SPSS, say around 2013 or earlier), Microsoft Excel files (.xls extensions) and comma-delimited files (.csv extensions). If your data is in some other format, please convert it before using the app (it's super easy), or the app will give you an error. Also notice that the app will use ALL of the variables in the uploaded file, so make sure you upload a file that only contains the variables (test items, in most cases) you want to correlate/calculate alpha for. You'll also need to provide a clean dataset for it to work: if you have missing values, you'll need to remove them manually before submitting, and if there are outliers, they need to be dealt with before using the app.

Please notice that, in accordance with research, if you have 8 (or more) Likert response categories the app will give you an error saying you have enough categories to safely treat your variables as continuous, so you don't really need to use this app. You can see why in Rhemtulla, Brosseau-Liard & Savalei (2012).
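
If you would rather do these calculations in R directly, the psych package covers the same ground. A minimal sketch, assuming your items are already in a clean data frame called items:

library(psych)

poly <- polychoric(items)         # polychoric correlations (use tetrachoric() for binary items)
poly$rho                          # the correlation matrix itself

fa.parallel(items, cor = "poly")  # parallel analysis and scree plot on the polychoric matrix
alpha(poly$rho)                   # "ordinal alpha": alpha computed from the polychoric matrix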

YOU CAN CLICK HERE TO ACCESS THE APP


Normality: residuals or dependent variable?

So… something interesting happened the other day. As part of an unrelated set of circumstances, my super awesome BFF Ed and I were discussing one of those interesting, perennial misconceptions within methodology in the social sciences. OK, maybe in other areas as well, but I can only speak about what I know best. The interesting aspect of this conversation is that it reflects the differences in training that he and I have, so that, although we tend to see things from the same perspective, our solutions are sometimes different. You see, Ed is a full-blown mathematician who specializes in harmonic analysis but has a keen interest in urban ornithology as his more "applied" research side. Oh, and some psychometrics as well. I'm a psychometrician who's mostly interested in technical problems but also flirts with the analysis of developmental data. This is going to play an important role in how we each approached the answer to the following question:

In a traditional ANOVA setting (fixed effects, fully-balanced groups, etc.)… Does one test the normality assumption on the residuals or the dependent variable?

Ed's answer (as well as that of my talkstats.com friends): On the residuals. ALWAYS.

My answer: Although the distributional assumptions for these models are on the residuals, for most designs found in education or the social sciences it doesn't really matter whether you use the residuals or the dependent variable.

Who is right, and who is wrong? The good thing about Mathematics (and Statistics as a branch of Mathematics) is that there's only one answer. So either he is right or I am. Here are the two takes on the answer, each with its rationale.

Ed is right.

This is a simplified version of his answer that was also suggested on talkstats. Consider the following independent-groups t-test as shown in this snippet of R code. I’m assuming if you’re reading this you know that a t-test can be run as a linear regression.


dv1 <- rnorm(1000, 10, 1)   # group 1: mean 10, SD 1
dv2 <- rnorm(1000, 0, 1)    # group 0: mean 0, SD 1
dv  <- c(dv1, dv2)
g   <- as.factor(rep(c(1, 0), each = 1000))

dat <- data.frame(dv, g)

# residuals from the regression of the outcome on group membership (i.e. the t-test)
res <- as.data.frame(resid(lm(dv ~ g, data = dat)))
colnames(res) <- c("residual")

If you plot the dependent variable, it looks like this:

[Plot: histogram of the dependent variable, clearly bimodal]

And if you plot the residuals, they look like this:

[Plot: histogram of the residuals, bell-shaped]

Clearly, the dependent variable is not normally distributed. It is bimodal, better described as a 50/50 Gaussian mixture if you wish. However, the residuals are very much bell-shaped and… well, for lack of a better word, normally distributed. If we want to look at it more formally, we can conduct a Shapiro-Wilk test and see that it is not statistically significant.


shapiro.test(res$residual)

Shapiro-Wilk normality test
data: res$residual
W = 0.99901, p-value = 0.3432

So… yeah. Testing the dependent variable would’ve led someone to (erroneously) conclude that the assumption of normality was being violated and maybe this person would’ve ended up going down the rabbit hole of non-parametric regression methods… which are not bad per se, but I realize that for people with little training in statistics, these methods can be quite problematic to interpret. So Ed is right and I am wrong.

I am right.

When this example was put forward, I asked Ed (and the other people involved in the discussion) to look at the assumption being made about the population effect size. That's a Cohen's d of 10! Let's see what happens when you run what's considered a "large" effect size within the social sciences. Actually, let's be very, very, VERY generous and jump straight from a Cohen's d of 0.8 (large effect size) to a Cohen's d of 1 (super large effect size?).
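
The setup is presumably the same as before, with the only change being the mean of the first group, something like:

dv1 <- rnorm(1000, 1, 1)    # group 1: mean 1, SD 1, so Cohen's d = 1
dv2 <- rnorm(1000, 0, 1)    # group 0: mean 0, SD 1
dv  <- c(dv1, dv2)
g   <- as.factor(rep(c(1, 0), each = 1000))
res <- data.frame(residual = resid(lm(dv ~ g)))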

The plot of the dependent variable now looks like this:

[Plot: histogram of the dependent variable, no visible bimodality]

And the residual plot looks like this:

[Plot: histogram of the residuals, bell-shaped]

Uhm… both the dependent variable and the residuals are looking very normal to me. What if we test them using the Shapiro-Wilk test?

shapiro.test(res$residual)
Shapiro-Wilk normality test

data: res$residual
W = 0.99926, p-value = 0.6328

shapiro.test(dv)
Shapiro-Wilk normality test

data: dv
W = 0.99944, p-value = 0.8515

Yup, both are pretty normal-looking. So, in this case, whether you test the dependent variable or the residuals you end up with the same answer.

Just for kicks and giggles, I noticed that you need a Cohen's d of about 2 before the Shapiro-Wilk test of the dependent variable yields a significant result, and even then the W statistics are quite similar between the previous case and this one. And we're talking about sample sizes of 2,000. Heck, even the plot of the dependent variable is looking pretty bell-shaped:

shapiro.test(dv)
Shapiro-Wilk normality test

data: dv
W = 0.99644, p-value = 8.008e-07

[Plot: histogram of the dependent variable at Cohen's d = 2, still roughly bell-shaped]

This is why, in my response, I included the addendum "for most designs found in education or social sciences it doesn't really matter whether you use the residuals or the dependent variable". A Cohen's d of 2 is two-and-a-half times what's considered a large effect size in my field. If I were to see such a large effect size, I'd sooner think something funky was going on with the data than actually believe that such a large difference can be found. Ed comes from a natural science background, and I know that, in the natural sciences, large effect sizes are pretty commonplace (in my opinion it comes down to the problem of measurement that we face in the social sciences).

As you can see now, the degree of agreement between the normality tests of the dependent variable and the residuals is a function of the effect size: the larger the effect size, the larger the difference between the shapes of the distributions of the residuals and the dependent variable (within this context, of course; this is not true in general).
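
If you want to see that pattern for yourself, here is a quick sketch that tracks the Shapiro-Wilk W statistic for both the dependent variable and the residuals across a range of effect sizes (same setup as above, two groups of 1,000 each):

set.seed(123)
d_values <- c(0.2, 0.5, 0.8, 1, 2, 10)   # from 'small' all the way to Ed's example

sapply(d_values, function(d) {
  dv  <- c(rnorm(1000, d, 1), rnorm(1000, 0, 1))
  g   <- rep(c(1, 0), each = 1000)
  res <- resid(lm(dv ~ g))
  c(d = d, W_dv = shapiro.test(dv)$statistic, W_res = shapiro.test(res)$statistic)
})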

Strictly speaking, Ed and the talkstats team are right in the sense that you can never go wrong with testing the residuals, which is what I acknowledged at the beginning as well. My applied research experience, however, has made me more practical, and I realize that, in most cases, it doesn't really matter. And at a certain sample size the normality assumption becomes so irrelevant that even testing it may be unnecessary. But anyway, some food for thought right there 😉

Lawley’s test for equality of correlations

So… while I kept on cleaning computer files, I found something pretty interesting: a "secret" (as in "I-didn't-remember-where-it-was" kind of secret) folder where I kept all my attempts at a Master's thesis. Apparently, I went through 11 different projects, which I left at various unfinished stages. But there was one that just drew me back in because it was… well… kind of pointless?

For some reason that I can’t quite remember, I thought it would be a good idea to investigate the properties of Lawley’s test for the equality of correlations in a correlation matrix.  So the null hypothesis of this test kind of looks like this:

H_0: \mathbf{P}=\begin{pmatrix} 1 & \rho & \cdots & \rho \\ \rho & 1 & \cdots & \rho \\ \vdots & \vdots & \ddots & \vdots \\ \rho & \rho & \cdots & 1 \end{pmatrix}

i.e. all the off-diagonal correlations are equal to a common value \rho.

I have absolutely no idea why I thought this was a good idea. I'm thinking it may have been back in the days when I was not as proficient in Structural Equation Modelling (SEM), so I'd fall back on my mathematical training to accomplish things that can be trivially set up as SEMs. Following my poorly-documented code and meager notes, it seems I thought I could tweak the test to evaluate whether the assumption of Parallel Tests or Tau-Equivalent Tests held. You know, to make sure people would know whether their Cronbach's alpha was a true estimate of reliability or just a lower bound. Little did I know at that point that people just use alpha whether the assumptions behind it hold or not. The truth is nobody really cares.

The one thing I remember is that I couldn't find any R package that would run the test, so I ended up coding it myself. I'm not sure if this will be of any use to anyone, but I'm hoping it may save some time for someone out there who needs it for whatever reason.

So the R function you’d have to declare is:


lawley <- function(data) {

  R <- cor(data)      # sample correlation matrix
  p <- ncol(R)        # number of variables
  n <- nrow(data)     # sample size

  # average off-diagonal correlation for each variable
  mcp <- (colSums(R) - 1) / (p - 1)

  # overall average off-diagonal correlation
  mc <- (2 / (p * (p - 1))) * sum(R[lower.tri(R)])

  # sum of squared deviations of each off-diagonal correlation from the overall average
  A <- sum((R[lower.tri(R)] - mc)^2)

  # sum of squared deviations of the per-variable averages from the overall average
  B <- sum((mcp - mc)^2)

  # Lawley's chi-square statistic, degrees of freedom and p-value
  C  <- ((p - 1)^2 * (1 - (1 - mc)^2)) / (p - (p - 2) * (1 - mc)^2)
  X2 <- ((n - 1) / (1 - mc)^2) * (A - C * B)
  v  <- ((p + 1) * (p - 2)) / 2
  P  <- pchisq(X2, df = v, lower.tail = FALSE)

  result <- rbind(c(X2, v, P))
  dimnames(result) <- list("X2", c("statistic", "df", "p.value"))

  print(result)
}

And the way you use it is very simple. Let’s try it with a mock dataset we’ll generate in the R package lavaan. Here we’re generating data from a One Factor model with equal loadings of 0.3 and equal error variances:


library(lavaan)
set.seed(123)

pop = 'f1 =~ 0.3*x1 + 0.3*x2 + 0.3*x3 + 0.3*x4 + 0.3*x5'

dat = simulateData(model=pop, sample.nobs=500)

lawley(dat)
statistic df p.value
X2 7.840194 9 0.5503277

So… yeah. A non-significant p-value (as per the glorious alpha of .05) means we fail to reject the null hypothesis, which is consistent with the population correlation matrix having equal elements all around (as it should, given that all the loadings and error variances are equal).

Let’s change one loading and see what happens:


library(lavaan)
set.seed(123)

pop = 'f1 =~ 0.4*x1 + 0.3*x2 + 0.3*x3 + 0.3*x4 + 0.3*x5'

dat = simulateData(model=pop, sample.nobs=500)

lawley(dat)
statistic df p.value
X2 18.78748 9 0.02706175

So the null hypothesis is now rejected: because one of the loadings is different, the correlations involving x1 are larger than the others, so the off-diagonal elements of the correlation matrix are no longer all equal.

I literally have no clue why Lawley thought having such a test was a good idea. But then again I was investigating it a few years ago so maybe *I* thought the test was a good idea in the first place.

Anyhoo, I hope this helps someone if they need it.

My love-hate relationship with G*Power

I can't help but have a love-hate relationship with G*Power and power analysis as carried out in the social sciences. I love it because it provides applied researchers who may not have a strong statistical background with a (somewhat) sensible way to plan their sample sizes. I hate it because it reminds me that we still have a long, looong, LOOOOOONG way to go before we can even attempt to claim we are all following "best practices" in data analysis. And the fact of the matter is that we may never get there.

Let me show you why.

Say we have the very simple scenario of calculating power for the easy-cheesy t-test of the Pearson correlation coefficient. We are going to be extra indulgent with ourselves and claim the population effect size is \rho=0.5 (so a LARGE effect size à la Cohen). If you plug in the usual specifications in G*Power (Type I error rate of .05, desired power of 0.8, population effect size of \rho=0.5 against the null of \rho=0.0 ) this is what we get:

[Screenshot: G*Power output for the correlation t-test, suggesting a sample size of 26]

So your sample size should be 26. Just for kicks and giggles, I simulated the power curve for this exact scenario and marked with a line where 80% power would be located.

[Plot: simulated power curve under bivariate normality, crossing 80% a little past n = 25]

Same answer as with G*Power, somewhere a little over n=25. Pretty straightforward, right? Well… sure… if you’re comfortable with the assumption that your data is bivariate normal. Both in the R simulation I made and in G*Power, the software assumes that your data looks like this:

[Plot: scatterplot of bivariate normal data with a correlation of 0.5]

For even more kicking and giggling, let’s assume your data is NOT normal (which, as we know, is the more common case). In this particular instance, both variables are \chi^{2} -distributed with 1 degree of freedom (quite skewed). Each variable looks like this:

[Plot: histogram of a chi-square distribution with 1 degree of freedom]

And their joint density (e.g. if you do a scatterplot) looks like this:

[Plot: scatterplot of the two chi-square(1) variables with a population correlation of 0.5]

But here’s the catch… because of how I simulated them (through a Gaussian copula if you’re wondering), they both have the *same* population effect size of 0.5. What does the power curve look like in this case? It looks like this:

[Plot: simulated power curve for the chi-square(1) case, reaching 80% at roughly twice the sample size]

So, for the same large population effect size, you need a little over TWICE the sample size to obtain the same 80% power.
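
In case you want to play with this yourself, here is a rough sketch of the simulation. One simplification: I feed the Gaussian copula a latent correlation of 0.5 rather than solving for the value that makes the Pearson correlation of the transformed chi-square variables exactly 0.5, which is what the figures above are based on:

library(MASS)
set.seed(123)

power_sim <- function(n, rho = 0.5, nsim = 2000) {
  pvals <- replicate(nsim, {
    # bivariate normal scores with correlation rho (the Gaussian copula part)
    Z <- mvrnorm(n, mu = c(0, 0), Sigma = matrix(c(1, rho, rho, 1), 2))
    # transform each margin into a chi-square with 1 degree of freedom
    X <- qchisq(pnorm(Z[, 1]), df = 1)
    Y <- qchisq(pnorm(Z[, 2]), df = 1)
    cor.test(X, Y)$p.value
  })
  mean(pvals < .05)   # proportion of rejections = simulated power
}

power_sim(26)   # n = 26 is what G*Power suggests under bivariate normality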

You see where I'm going with this? Where's the my-data-is-not-normal option in G*Power? Or my-data-has-missing-values? Or my-data-has-measurement-error? Or my-data-has-all-of-those-at-once? Sure, I realize this is a bit of an extreme case because the sample size is not terribly large and the non-normality is severe, and by the time n=100 the malicious influence of the non-normality has been "washed away", so to speak: the power curves look more and more similar as the sample size grows larger and larger. But it is still a reminder that every time I see people report their power analyses through G*Power my mind immediately goes to… "is this really power, or a lower/upper bound to power?" And, moreover… if you go ahead, do your analyses and find your magic p-value under .05, you're probably going to feel even *more* confident that your results are the real deal, right? I mean, you did your due diligence, you're aware of the issues and you tried to address them the best way you could. And that's exactly what kills me. Sometimes your best is just not good enough.

Solutions? Well… I dunno. Unless someone makes computer simulations mandatory in research methods classes, the only other option I have is usually to close my eyes and hope for the best.