
Power analysis for multiple regression with non-normal data

This app will perform computer simulations to estimate the power of the t-tests within a multiple regression context under the assumption that the predictors and the criterion variable are continuous and either normally or non-normally distributed. If you would like to see why I think this is important to keep in mind, please read this blog post.

When you first click on the app it looks like this:

[Screenshot: mult1 – the app's initial view]

What *you*, the user, need to provide is the following:

The number of predictors. The app can handle anywhere from 3 to 6 predictors; with more than that, the layout simply gets too crowded. The default is 3 predictors.

The regression coefficients (i.e., the standardized effect sizes) that hold in the population. The default is 0.3. The app names them “x1, x2, x3, …, x6”.

The skewness and excess kurtosis of the data for each predictor AND for the dependent variable (the app calls it “y”). Please keep on reading to see how you should choose those. The defaults at this point are skewness of 2 and an excess kurtosis of 7.

The pairwise correlations among the predictors. I think this is quite important because the correlation among the predictors plays a role in calculating the standard error of the regression coefficients. So you can either be VERY optimistic and place those at 0 (predictors are perfectly orthogonal to one another) OR you can be very pessimistic and give them a high correlation (multicollinearity). The default inter-predictor correlation is 0.5.

The sample size. The default is 200.

The number of replications for the simulation. The default is 100.

Now, what’s the deal with the skewness and excess kurtosis? A lot of people do not know this, but you cannot go around choosing values of skewness and excess kurtosis all willy-nilly. There is a quadratic relationship between the possible values of skewness and excess kurtosis, which means they MUST be chosen according to the inequality kurtosis > skewness^2 - 2. If you don’t do this, the app will spit out an error. Now, I am **not** a super fan of the algorithm needed to generate data with those population-specified values of skewness and excess kurtosis, for many reasons. HOWEVER, I needed something that was both sufficiently straightforward to implement and not very computationally intensive, so the 3rd-order polynomial approach will have to do.

Now, the exact boundaries of what this method can handle are actually narrower than the theoretical parabola. However, for practical purposes, as long as you choose a kurtosis value that is sufficiently far above the skewness^2 - 2 boundary, you should be fine. So a combo like skewness = 3, kurtosis = 7 would give it trouble, but something like skewness = 3, kurtosis = 15 works perfectly fine.
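
If you want to check a combination before plugging it into the app, a one-liner like the following does the trick (this is just my own sketch of the rule above, not something the app exposes):

# the constraint: excess kurtosis must exceed skewness^2 - 2
valid_combo <- function(skew, ex_kurt) ex_kurt > skew^2 - 2
valid_combo(skew = 3, ex_kurt = 7)   # FALSE: sits right on the boundary, expect trouble
valid_combo(skew = 3, ex_kurt = 15)  # TRUE: comfortably above the boundary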

A hypothetical run would look like this:

[Screenshot: mult2 – a hypothetical run of the app]

So the output under Results is the empirical, simulated power for each regression coefficient at the sample size selected. In this case, they gravitate around 60%.

Oh! And if for whatever reason you would like to have all your predictors be normal, you can set the values of skewness and kurtosis to 0. In fact, in that situation you would end up working with a multivariate normal distribution.

YOU CAN CLICK HERE TO ACCESS THE APP

The case of the missing correlations and positive-definiteness.

A couple of weeks ago, a student of mine inquired about how the pos_def_limits() function from the faux R package works. This package is being developed by the super awesome Dr. Lisa DeBruine, who you should, like, totally follow, btw, in case you are not doing so already. What makes this post interesting is that I thought pos_def_limits() was doing one thing but it is actually doing something else. I think it helps highlight how different ways of approaching the same problem can give you insight into different aspects of it. But first, some preliminaries:

What is positive definiteness?

This one is not particularly complicated to state. Define \Sigma as an n \times n real-valued matrix and v as an n \times 1 real-valued, non-zero vector. Then \Sigma is a positive-definite matrix if v^{t} \Sigma v > 0 , and it is positive-semi-definite if v^{t} \Sigma v \geq 0 , for all v . I prefer this definition of positive definiteness because it generalizes easily to other types of linear operators (e.g., differentiation), as opposed to the consequences of this definition, which are what we usually operate on. If you come from the social sciences (like I do), the “version” you probably know of a matrix (usually, a covariance matrix) being positive-definite is that all its eigenvalues have to be positive. Which is what pos_def_limits() relies on: it implements a grid search over the plausible correlation range of [-1, +1] and, once it finds the minimum and maximum values for which the resulting correlation matrices are all positive definite, it reports them (or it lets you know that no such matrix exists, which is super useful as well). Relying on the documentation example:

pos_def_limits(.8, .2, NA)
>  min     max
> -0.427 0.747

which means that if you have a correlation matrix that looks like:

\textbf{R}= \begin{bmatrix} 1 & 0.8 & 0.2\\ 0.8 & 1 & r\\ 0.2 & r &1 \end{bmatrix} 

then as long as -0.427 \leq r \leq 0.747 , your resulting matrix is positive definite and, hence, a valid correlation matrix. So far so good.
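
If you want to verify this yourself, here is a quick eigenvalue check (which is the criterion pos_def_limits() relies on), using two hypothetical values of r, one inside and one outside the reported range:

# build R as a function of the missing correlation r
R_of <- function(r) matrix(c(1, .8, .2, .8, 1, r, .2, r, 1), 3, 3)
all(eigen(R_of(0.5))$values > 0)   # TRUE:  0.5 lies inside [-0.427, 0.747]
all(eigen(R_of(0.9))$values > 0)   # FALSE: 0.9 lies outside the range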

How I tackle this problem

This is what I thought pos_def_limits() was doing under the hood before looking at the source code. So… a condition similar to the positive eigenvalues is that the determinant of the matrix has to be non-negative: IF \Sigma is positive-semi-definite, THEN det(\Sigma) \geq 0 . Notice that this DOES NOT work the other way around: just because det(\Sigma) \geq 0 does not mean that \Sigma is positive-semi-definite. Anyway, we can rely on the fact that valid correlation/covariance matrices are positive-(semi-)definite by definition, which means the problem of finding the suitable upper and lower bounds is simply solving for r subject to the constraint that the determinant MUST be greater than or equal to zero. So with the help of a CAS (Computer Algebra System; my favourite one is MAPLE because I’m very Canadian, LOL) I can see that solving det(\textbf{R}) \geq 0 for r results in the following:

det(\textbf{R})= -r^{2} + 0.32r + 0.32 \geq 0 

which is… A QUADRATIC EQUATION! We can graph it and see:

[Figure: parabol1 – the determinant of R plotted as a quadratic function of r]

So any value on the x-axis between the two roots satisfies the inequality and, therefore, yields a positive-definite matrix \textbf{R} . Do you remember how to solve for the roots of quadratic equations? Using our trusted formula from high school we obtain:

x_1= -\frac{2(3\sqrt{6}-2)}{25} = -0.42788... 

x_2= \frac{2(2+3\sqrt{6})}{25} = 0.74788... 

which match the values approximated by pos_def_limits().
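
You can also reproduce the roots numerically in R without a CAS; this little sketch of mine just hands the coefficients of the determinant polynomial to polyroot():

# det(R) as a polynomial in r: 0.32 + 0.32*r - 1*r^2
# polyroot() expects the coefficients in increasing order of the power of r
Re(polyroot(c(0.32, 0.32, -1)))
# gives approximately -0.4278799 and 0.7478799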

Extensions to this problem Pt I: Maximizing the determinant

There are 2 interesting things you can do with this determinantal equation. First and foremost, you can choose *the* value that maximizes the variance encoded in the correlation matrix \textbf{R} . The determinant has a lot of interesting properties, including a very nice geometric representation. The absolute value of the determinant is the volume of the parallelepiped described by the column vectors within the matrix.  And this generalizes to higher dimensions. Because correlations are bounded in the [-1, +1] range, the maximum determinant that ANY correlation matrix can have is 1 and the minimum is 0. So if I want my matrix \textbf{R} to have the largest possible determinant, I only need to choose the value at the vertex of the parabola:

[Figure: parabol2 – the same parabola with its vertex marked]

which has coordinates (0.16, 0.3456). So if r=0.16 then the determinant of \textbf{R} is at its maximum.
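
As a quick numerical double-check (again, my own sketch rather than anything the app or faux does), optimize() lands on the same vertex:

# maximize det(R) = -r^2 + 0.32*r + 0.32 over the valid range for r
optimize(function(r) -r^2 + 0.32 * r + 0.32, interval = c(-0.427, 0.747), maximum = TRUE)
# $maximum is approximately 0.16 and $objective approximately 0.3456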

Extensions to this problem Pt II: What if more than one correlation is missing?

The “classical” version of this problem is to have a 3 x 3 matrix where 2 correlations are known and 1 is missing. But (as my student asked further) what would happen if we had, say, a 4 x 4 matrix and TWO correlations were missing? Well, no biggie. Let’s come up with a new matrix, call it \textbf{S} , with the following form:

\textbf{S}= \begin{bmatrix} 1 & 0.8 & 0.2 & a\\ 0.8 & 1 & r & 0.5\\ 0.2 & r &1 & 0.3\\ a &0.5&0.3 & 1 \end{bmatrix} 

Yeah, I had to make up a couple of extra correlations (0.3 and 0.5) to make sure only TWO correlations were missing, but there is a point to this that you will notice very quickly. Since we still want \textbf{S} to be a valid correlation matrix, the condition det(\textbf{S}) \geq 0 must still hold, irrespective of the dimensions or how many missing elements \textbf{S} has. So, once again, running this new matrix through the CAS yields the condition:

 a^2r^2-a^2-0.68ar+0.92a-r^2+0.62r-0.0004 \geq 0

This generates a system of quadratic inequalities that must be solved simultaneously to yield valid ranges. Rather than showing you the equations (booooooring! :D), let me show you something prettier: a picture of the solution space:

[Figure: weird1 – regions of the (a, r) plane that satisfy the inequality, shown in red]

Yup. Any combination of points (a, r) within the red regions yields a valid solution to the inequality above. HOWEVER, there is only ONE region where the solution is both inside the red regions AND within the valid correlation range. What does your intuition tell you? I think we both know where to look 😉

[Figure: weird2 – the solution region that falls inside the valid correlation range]

That is correct! That little blob-looking thingy is where we want to be. ANY pair of values STRICTLY inside the blob is both within [-1, +1] AND satisfies the determinantal inequality, so it will yield a valid correlation matrix \textbf{S} .

“But Oscar,” you may ask, “how do we choose the pair of values that maximizes the determinant of \textbf{S} ?” Well, that’s a good question! It is not difficult to answer, but it does require a little more mathematics than the case where only one correlation is missing. First, we need to take the partial derivatives with respect to both r and a and set them to 0 (we are looking for maxima):

\frac{\partial}{\partial a}det(\textbf{S})=2ar^2-2a+0.92-0.68r=0

\frac{\partial}{\partial r}det(\textbf{S})=2a^2r-2r+0.62-0.68a=0

So now we have a system of quadratic equations: two equations and two unknowns, so we can reasonably expect a solution. Again, when you throw them into MAPLE you end up with multiple candidate values of a and r. There was only one pair of solutions, though, that was both within the valid range [-1, +1] AND inside the special blob:

a=0.407309 ; r=0.205629 
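
If you would rather not trust the CAS (or my algebra), a quick numerical check with optim() lands on the same point. This is my own sketch, which simply maximizes det(S) directly:

# maximize det(S) over (a, r), starting near the middle of the blob
det_S <- function(p) {
  a <- p[1]; r <- p[2]
  S <- matrix(c( 1, .8, .2,  a,
                .8,  1,  r, .5,
                .2,  r,  1, .3,
                 a, .5, .3,  1), 4, 4)
  det(S)
}
optim(c(0, 0), det_S, control = list(fnscale = -1))
# $par is approximately (0.407, 0.206) and $value approximately 0.244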

The last thing we need to check, however, is whether this point is a minimum, a maximum, or a saddle point, as per the second derivative test.

Which means I need all the second partial derivatives of the determinant:

\frac{\partial^2}{\partial a^2}det(\textbf{S})=2(r^{2}-1)

\frac{\partial^2}{\partial r^2}det(\textbf{S})=2(a^{2}-1)

\frac{\partial^2}{\partial a \partial r}det(\textbf{S})=4ar-0.68

Calculate the discriminant setting a=0.407309 and r=0.205629 :

2(a^{2}-1) 2(r^{2}-1)-(4ar-0.68)^{2} = 3.076311

Which is greater than 0, so (a,r) is either a minimum or a maximum. The final check is to see whether 2(a^{2}-1) < 0 to make sure it’s a local maximum. And since 2(0.407309^{2}-1)=-1.668199 < 0 , it follows that the point (a,r) indeed maximizes the determinant*.


TWO potential future directions 

While working on this I noticed a couple of peculiarities that I think are sufficiently mathematically tractable for me to handle and turn into an actual article. UNLESS you (my dear reader) already know the answer, that is. I am, after all, a lazy !@#$#%, which means that if someone has already worked out a formal proof, I’d much rather read it than have to come up with it on my own. Let us start with the easy one:

(1) (Easy): The range of plausible correlations shrinks as the dimensions of the correlation matrix increase

This one is, I think, easy to see. Let’s start with the basic 2×2 correlation matrix:

\textbf{A}= \begin{bmatrix} 1 & r\\ r & 1\\ \end{bmatrix} 

Then if r \in (-1, +1) (read \in as “element of”), \textbf{A} is a valid correlation matrix, and it only becomes positive-semi-definite if either r=1 or r=-1 . But you get the gist: the whole valid correlation range applies.

Notice how the range of r shrank for the 3×3 matrix \textbf{R} of our example to:

r \in [-\frac{2(3\sqrt{6}-2)}{25}, \frac{2(2+3\sqrt{6})}{25}] 

And this fact is essentially independent of which correlation matrix you have: you cannot get a 3×3 correlation matrix where, given that 2 of the correlations are known, the resulting range of r is the full interval [-1, +1] AND the matrix is still positive definite (unless both known correlations are exactly zero).
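
To see where that parenthetical comes from (a quick derivation of my own, not part of the original argument), write the determinant of a 3×3 correlation matrix with known entries a and b and missing entry r, and evaluate it at the endpoints:

det\begin{bmatrix} 1 & a & b\\ a & 1 & r\\ b & r & 1 \end{bmatrix} = 1 - a^{2} - b^{2} - r^{2} + 2abr

\left. det \right|_{r=1} = -(a-b)^{2}, \qquad \left. det \right|_{r=-1} = -(a+b)^{2}

Both endpoint values are non-negative only when a = b = 0, so for any other pair of known correlations the valid range for r is strictly narrower than [-1, +1].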

So, what happens if we go back to our 4 x 4 example with \textbf{S} ? Just for kicks and giggles, let’s make a=0.3 so that we are only left with one unknown. The new matrix looks like this:

\textbf{S}= \begin{bmatrix} 1 & 0.8 & 0.2 & 0.3\\ 0.8 & 1 & r & 0.5\\ 0.2 & r &1 & 0.3\\ 0.3 &0.5&0.3 & 1 \end{bmatrix} 

If we run this new \textbf{S} through pos_def_limits() (it can handle any number of dimensions) we get:

pos_def_limits(.8, .2, .3, NA, .5, .3)
>  min     max
> -0.277 0.734

Yup, the range has now shrunk. But we don’t quite yet know why. Let’s try it now through the determinantal equation:

det(\textbf{S})= -0.91r^{2} + 0.416r + 0.1856 \geq 0 

And solving for the roots of this equation we get:

\frac{0.416-\sqrt{0.84864}}{1.82} = -0.27759...

\frac{0.416+\sqrt{0.84864}}{1.82} = 0.73473...

Yup, same answer. But notice something interesting: the quadratic coefficient has shrunk in absolute value from 1 to 0.91 and, more importantly, the two roots have moved closer together, i.e., the stretch of the parabola sitting above zero has become narrower. Which prompts me to make the following claim:

Claim: As the dimensions of the correlation matrix increase arbitrarily, the valid range that keeps it positive definite shrinks until it collapses to a *single* point. In other words, for a large enough correlation matrix, only ONE value can make it positive definite.

This one shouldn’t be particularly difficult to prove. All I need to show is that the leading coefficient of the quadratic term shrinks as a function of the dimensions of the correlation matrix until it becomes 0, in which case you’d have a straight line (not a parabola), and that means you’d get only 1 root (and not 2 roots that bracket a range).

(2) (Hard): If I sample values of r uniformly from the valid correlation range, the distribution of the resulting determinants concentrates around the *maximum* determinant, i.e., the one attained at the value of r that maximizes it.

This one is a little bit more difficult to explain, but let me show you an interesting thing I found. Let’s use the classic 3 x 3 case and focus on \textbf{R} . We know from above that if r=0.16 then det(\textbf{R}) is maximized. Now, let’s use R (the programming language, not the matrix) to uniformly sample random values of r, calculate the determinants and plot them:

library(ggplot2)

n <- 10000
# sample r uniformly over the valid range found above
r <- runif(n, min = -.427, max = .747)

# determinant of R for each sampled value of r
pp <- numeric(n)
for (i in 1:n) {
  R <- matrix(c(1, .8, .2, .8, 1, r[i], .2, r[i], 1), 3, 3)
  pp[i] <- det(R)
}

dat <- data.frame(pp)
a <- density(pp)
mmod <- a$x[a$y == max(a$y)]   # mode of the simulated determinants

p <- ggplot(dat, aes(x = pp)) +
  geom_density(fill = "lightgreen", alpha = .4, size = 1)
p + geom_vline(aes(xintercept = mmod), color = "red", linetype = "dashed", size = 1) +
  theme_bw() + xlab("Determinant") + ylab("")

[Figure: det_dist – density of the simulated determinants, with the mode marked]

Compare the mode of the distribution to the theoretical, maximum possible determinant:


> R1 <- matrix(c(1, .8, .2, .8, 1, .16, .2, .16, 1), 3, 3)
> det(R1)
[1] 0.3456
> mmod ## this is the mode of the distribution above
[1] 0.3344818

Close: within 0.011 of each other. Which leads me to make the following claim:

Claim: For the “missing correlation” problem, if the missing correlation is sampled uniformly over its valid range, the distribution of the resulting determinants concentrates around the maximum determinant, i.e., the one attained at the value of r that maximizes it.

I honestly have no clue how either of these two results would be useful once they are formalized. I am sensing that something like this may be able to play a role in error detection or in diagnosing Heywood cases in Factor Analysis? I mean, for the first case (error detection), say you find a correlation matrix in the published literature that is not positive definite. If correlation matrices tend to concentrate around values that maximize their determinants, then you could potentially use this framework to pick and choose ranges of possible sets of correlations to point out where the problem may lie. A similar logic could be used for Heywood cases, ESPECIALLY if the dimensions of the correlation matrix are large. Or maybe all of this is BS and a very elaborate excuse for me to procrastinate. The world will never know  ¯\_(ツ)_/¯


*Technically speaking, I would *still* need to check solutions at the boundary of the blob because the global maximum may be in one of the other red regions. But we’ll leave this as an “exercise for the reader” 😉

Power Analysis for the Pearson correlation with non-normal data

This app will perform computer simulations to estimate the power of the t-test for the Pearson bivariate correlation under the assumption that the data are continuous and either bivariate normally distributed or non-normally distributed. To see why this is important, please check out this blog post.

It is very straightforward to use. When you click on the app, it will look like this:

[Screenshot: corr2 – the app's initial view]

What *you*, the user, need to provide is the following:

The population correlation (i.e., the effect size) for which you would like to obtain power. The default is 0.3

The type of distributions you would like to correlate together. Right now it can handle the chi-square distribution (where skewness is controlled through the degrees of freedom), the uniform distribution (to have a symmetric distribution with negative kurtosis) and the binomial distribution where one can control the number of response categories (size) and the probability parameter. This will soon be replaced by a multinomial distribution so that the probability of every marginal response option can be specified.

The sample size. The default is 20

The number of replications for the simulation. The default is 100.

Now, what is the app actually doing? It runs R underneath and it is going to give you the estimated power of the t-test under both conditions. The first one is calculated under the assumption that your data are bivariate normally distributed. This is just a direct use of the pwr.r.test function from the pwr R package, which should give you answers very close to (or exactly the same as) G*Power. I chose to include it for the sake of comparison.
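
For reference, the analytic number the app reports on top can be reproduced directly with pwr; here is a small sketch using the app's defaults (r = 0.3, n = 20, two-sided alpha = .05):

library(pwr)
pwr.r.test(n = 20, r = 0.3, sig.level = 0.05)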

What comes out is something that looks like this:

[Screenshot: corr – sample output of the app]


So, on top, you’re going to get the power as if you were using G*Power (or the pwr R package), which is exact and needs no simulation because we have closed-form expressions for it. On the bottom you are going to get the approximate power obtained through simulation. Remember, when working with non-normal data you can’t always expect power to be lower, as in this case: sometimes it may be higher and sometimes it won’t change much. Yes, at some point (if your sample size is large enough) the distribution of your data will matter very little. But, in the meantime, at least you can use this one to guide you!

Finally, here’s the link for the shiny web app:

YOU CAN CLICK HERE TO ACCESS THE APP

Sylvester’s criterion to diagnose Heywood cases.

So… here’s a trick I use sometimes that I’ve never seen anywhere else and it may be helpful to some people.

If you work with latent variable models, Factor Analysis, Structural Equation Modelling, Item Response Theory, etc., there’s a good chance that you have either encountered or have seen some version of a warning about a covariance matrix being “non positive definite”. This is an important warning because the software is telling you that your covariance matrix is not a valid covariance matrix and, therefore, your analysis is suspect. Usually, within the world of latent variables, we call these types of warnings Heywood cases.

Now, most Heywood cases are very easy to spot because they pertain to one of two broad classes: negative variances or correlations greater than 1. When you inspect your model matrices and see either of those two cases, you know exactly which variable is giving you trouble. Thing is (as I found out a few years ago), there are other types of Heywood cases that are a lot more difficult to diagnose. Consider the following matrix that I once came across while helping a student with his analysis:


        space   lstnng  actvts  prntst  persnl  intrct  prgrmm
space   1.000
lstnng  0.599   1.000
actvts  0.706   0.646   1.000
prntst  0.702   0.459   0.653   1.000
persnl  0.591   0.582   0.844   0.776   1.000
intrct  0.627   0.964   0.501   0.325   0.639   1.000
prgrmm  0.493   0.602   0.981   0.687   0.944   0.642   1.000

This is the model-implied correlation matrix I obtained through the analysis, which gave a Heywood case warning. The student was a bit puzzled because, although we had reviewed this type of situation in class, we had only mentioned the cases of negative variances or correlations greater than one. Here he had neither a negative variance nor a correlation greater than one… but he still got a warning for non-positive definiteness.

My first reaction was, obviously, to check the eigenvalues of the matrix and, lo and behold, there it was:
[1] 5.01377877 1.00744933 0.62602056 0.30393170 0.16671742 0.01317704 -0.13107483

So… yeah. This was, indeed, an invalid correlation/covariance matrix and we needed to further diagnose where the problem was coming from… but how? Enter our good friend, linear algebra.

If you have ever taken a class in linear algebra beyond what’s required in a traditional methodology/statistics course sequence for social sciences, you may have encountered something called the minor of a matrix. Minors are important because they’re used to calculate the determinant of a matrix. They’re also important for cases like this one because they break down the structure of a matrix into simpler components that can be analyzed. Way in the back of my mind I remembered from my undergraduate years that positive-definite matrices had something special about their minors. So when I came home I went through my old, OLD notes and found this beautiful theorem known as Sylvester’s criterion:

A Hermitian matrix is positive-definite if and only if all of its leading principal minors are positive.

All covariance matrices are Hermitian (the subject for another blog post), so we’re only left to wonder what a leading principal minor is. Well, if you imagine starting at the [1,1] coordinate of a matrix (so really the upper-left entry) and moving down the diagonal, expanding one row and one column at a time, the determinants of those nested submatrices are the leading principal minors. A picture makes a lot more sense for this:

[Figure: SylvesterCriterion – nested leading principal submatrices of a matrix Q, highlighted in red, blue and yellow]

So… yeah. The red square (so the q11 entry) gives the first leading principal minor. The blue square (a 2 x 2 matrix) gives the 2nd, the yellow square (a 3 x 3 matrix) gives the 3rd, and on it goes until you get the full n x n matrix. For the matrix Q to be positive-definite, all n of these leading principal minors need to be positive. So if you want to “diagnose” your matrix for positive definiteness, all you need to do is start from the upper-left corner and check the determinants consecutively until you find one that is not positive. Let’s do that with the previous example. Notice that I’m calling the matrix ‘Q’:
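
In case you want to follow along in your own R session, here is one way to build Q from the lower triangle printed above (this is my own reconstruction, not code taken from the original analysis):

vars <- c("space", "lstnng", "actvts", "prntst", "persnl", "intrct", "prgrmm")
Q <- diag(7)
# fill the lower triangle column by column, then mirror it to the upper triangle
Q[lower.tri(Q)] <- c(.599, .706, .702, .591, .627, .493,
                     .646, .459, .582, .964, .602,
                     .653, .844, .501, .981,
                     .776, .325, .687,
                     .639, .944,
                     .642)
Q[upper.tri(Q)] <- t(Q)[upper.tri(Q)]
dimnames(Q) <- list(vars, vars)
eigen(Q)$values   # the smallest eigenvalue comes out negative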

> det(Q[1:2,1:2])
[1] 0.641199

> det(Q[1:3,1:3])
[1] 0.2933427

> det(Q[1:4,1:4])
[1] 0.01973229

> det(Q[1:5,1:5])
[1] 0.003930676

> det(Q[1:6,1:6])
[1] -0.003353769

The first [1,1] corner is a given (it’s a correlation matrix, so it’s 1 and it’s positive). Then we move down to the 2×2 matrix, the 3×3 matrix… all the way to the 5×5 matrix. By then the determinant is already very small, so I suspected that whatever the issue might be, it had to do with the relationships among the variables “persnl”, “intrct” or “prgrmm”. The final determinant pointed towards the culprit: whatever problem this matrix exhibited, it had to do with the relationship between “intrct” and “prgrmm”.
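
If you want all the leading principal minors in one shot instead of typing the determinants one by one, a small convenience sketch like this works (my own addition):

# determinant of every leading principal submatrix of Q;
# the first non-positive value flags where the trouble starts
sapply(1:nrow(Q), function(k) det(Q[1:k, 1:k, drop = FALSE]))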

Once I pointed out to the student that I suspected the problem was coming from one of these two variables, a careful examination revealed the cause: the “intrct” item was a reverse-coded item but, for some reason, several of the participants’ responses were not reverse-coded. So you had a good chunk of the responses to this item pointing in one direction and a smaller (albeit still large) number pointing in the other direction. The moment this item was fully reverse-coded, the non-positive-definiteness issue disappeared.

I guess there are two lessons to this story: (1) Rely on the natural structure of things to diagnose problems and (2) Learn lots of linear algebra 🙂

The Spearman correlation doesn’t need to tell you anything about the Pearson correlation

I… am a weird guy (but then again, that isn’t news, hehe). And weird people sometimes have weird pet peeves. Melted cheese on Italian food and hamburgers? Great! I’ll take two. Melted cheese anywhere else? Blasphemy! That obviously extends to statistics. And one of my weird pet peeves (which was actually strong enough to prompt me to write it as a chapter of my dissertation and a subsequent published article) is when people conflate the Pearson and the Spearman correlation.

The theory of what I am going to talk about is developed in said article but when I was cleaning more of my computer files I found an interesting example that didn’t make it there. Here’s the gist of it:

I really don’t get why people say that the Spearman correlation is the ‘robust’ version of, or alternative to, the Pearson correlation. Heck, if you simply google the words Spearman correlation, the second top hit reads:

 Spearman’s rank-order correlation is the nonparametric version of the Pearson product-moment correlation

When I read that, my mathematical mind immediately goes to “if this is the non-parametric version of the Pearson correlation, that means it also estimates the same population parameter”. And honest to G-d (don’t quote me on that one, though, but I just *know* it’s true) I feel like the VAST MAJORITY of people think exactly that about the Spearman correlation. And I wouldn’t blame them either… you can’t open an intro textbook for social scientists that doesn’t have some dubious version of the previous statement. “Well,” the reader might think, “if that is not true then why aren’t more people saying it?” The answer is that, for better or worse, this is one of those questions that is very simple to ask but mathematically complicated to answer. But here’s the gist of it (again, for those who like theory like I do, read the article).

The Spearman rank correlation is defined, in the population, like this:

\rho_{S} = 12\int_{0}^{1}\int_{0}^{1} C(u_{x}, u_{y})\, du_{x}\, du_{y} - 3

where the u_i are uniformly-distributed random variables and the C(\cdot) is the copula function that relates them (more on copulas here.) By defining the Spearman rank correlation in terms of the lower-dimensional marginals it co-relates (i.e. the u’s) and the copula function, it becomes apparent that the overlap with the Pearson correlation depends entirely on what C(u_x,u_y) is. Actually, it is not hard to show that if C(\cdot) is a Gaussian copula, and the marginals are normal, then the following identity can be derived:

\rho_S=\frac{6}{\pi}\sin^{-1}\left ( \frac{\rho}{2} \right )

which relates the Pearson correlation \rho to the Spearman correlation \rho_S . We’ve known this since the times of Pearson, because he came up with it (albeit not explicitly) and called it the “grade correlation”. But from this the obvious follows: an identity such as the one described above need not exist in general. There could be copula functions for which the Spearman and Pearson correlations have a crazy, wacky relationship… and that is what I am going to show you today.
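
If you want to convince yourself of that identity numerically, here is a quick bivariate-normal check (my own sketch, using an arbitrary rho of 0.5):

library(MASS)
rho <- 0.5
xy  <- mvrnorm(1e6, mu = c(0, 0), Sigma = matrix(c(1, rho, rho, 1), 2, 2))
cor(xy[, 1], xy[, 2], method = "spearman")  # simulated Spearman correlation
(6 / pi) * asin(rho / 2)                    # theoretical value, approximately 0.4826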

I don’t remember why this example didn’t make it into the article, but it is a very efficient one. It shows a case where the Spearman correlation is close to 1 but the Pearson correlation is close to 0 (within sampling error, obviously).

Define Z \sim N(0, 0.1) , X=Z^{201} and Y= e^{Z} . If you use R to simulate a very large sample (say, 1 million) you will find Spearman and Pearson correlations along these lines:


N <- 1000000
Z <- rnorm(N, mean=0, sd=0.1)
X <- Z^201
Y <- exp(Z)

> cor(X,Y, method="pearson")
[1] 0.004009381

> cor(X,Y, method="spearman")
[1] 0.9963492

The trick for this example is that both X and Y are strictly increasing functions of Z, so the ranks of X and Y line up almost perfectly and the Spearman correlation sits close to 1. At the same time, X is microscopically close to 0 for all but the most extreme values of Z, while Y hovers around 1, so there is essentially no linear association for the Pearson correlation to pick up.

In any case, the point is that, even without being too formal, it’s not overly difficult to see that the Spearman correlation is its own statistic that estimates its own population parameter, one which may or may not have anything to do with the Pearson correlation, depending on the copula function describing the bivariate distribution.


Power Analysis for Multilevel Logistic Regression

::UPDATE::

A published article introducing this app is now online in BMC-Medical Research Methodology. If you plan on using this app, it would be a good idea to cite it 😉

The relationship between statistical power and predictor distribution in multilevel logistic regression: a simulation-based approach


WARNING (1): This app can take a little while to run. Do not close your web browser unless it gives you an error. If it appears ‘stuck’ but you haven’t got an error, it means the simulation is still running in the background.

WARNING (2): If you keep getting a ‘disconnected from server’ error, close down your browser and open a new window. If the problem persists, it means too many people have tried to access the app during the day and the server has shut down. This app is hosted on a free server and it can only accommodate a certain number of users every day.

This app will perform computer simulations to estimate power for multilevel logistic regression models allowing for continuous or categorical covariates/predictors and their interaction. The continuous predictors come in two types: normally distributed or skewed (i.e. χ2  with 1 degree of freedom). It currently only supports binary categorical covariates/predictors (i.e. Bernoulli-distributed) but with the option to manipulate the probability parameter p to simulate imbalance of the groups.

The app will give you the power for each individual covariate/predictor  AND the variance component for the intercept (if you choose to fit a random-intercept model) or the slope (if you choose to fit a model with both a random intercept and a random slope). It uses the Wald test statistic for the fixed effect predictors and a 1-degree-of-freedom likelihood-ratio test for the random effects (← yes, I know this is conservative but it’s the fastest one to implement).
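
To give a rough idea of what one replication of such a simulation involves (this is my own sketch using lme4 directly, not the app's actual code, which relies on simglm and paramtest as noted at the end of this post), it boils down to generating two-level binary data and checking the Wald test of a predictor:

library(lme4)

one_rep <- function(n_l1 = 50, n_l2 = 10, beta = 0.5, tau0 = 0.5) {
  cluster <- rep(1:n_l2, each = n_l1)
  u0      <- rnorm(n_l2, 0, sqrt(tau0))      # random intercepts
  x       <- rnorm(n_l1 * n_l2)              # a Level-1, normally-distributed covariate
  eta     <- beta * x + u0[cluster]          # linear predictor (intercept set to 0)
  y       <- rbinom(n_l1 * n_l2, 1, plogis(eta))
  fit     <- glmer(y ~ x + (1 | cluster), family = binomial)
  coef(summary(fit))["x", "Pr(>|z|)"] < 0.05 # Wald test of the fixed effect
}

# empirical power = proportion of significant replications, e.g.:
# mean(replicate(100, one_rep()))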

When you open the app, here’s how it looks:

[Screenshot: screen1 – the app's initial view]

What **you**, as the user, need to provide is the following:

[Screenshot: SampleSizes]

The Level 1 and Level 2 sample sizes. If I were to use the ubiquitous example of “children in schools” the Level 1 sample would be the children (individuals within a cluster) and the Level 2 sample would be the schools  (number of clusters). For demonstration purposes here I’m asking for groups of 50 ‘children’ in 10 ‘schools’ for a total sample size of 50×10 = 500 children.

[Screenshot: RandomEffects]

The variance for the random effects. You can either choose to fit an intercept-only model (so no variance for the slope) or a model with both a random intercept AND a random slope. You cannot fit a random-slope-only model here, and you cannot set the variances to 0 to fit a single-level logistic regression (there is other software to do power analysis for single-level logistic regression). At least the variance of the intercept needs to be specified. Notice that the app defaults to an intercept-only model, and under ‘Select Covariate’ it will say ‘None’. That changes when you click on the drop-down menu, where it lets you pick which predictor gets the random slope. Notice that you can only choose one predictor to have a random slope; I will work on the general case in the future.

[Screenshot: Covariates1]

The number of covariates (or predictors) which I believe is pretty self-explanatory. Just notice that the more covariates you add, the longer it will take for the simulation to run. The default in the app is 2 covariates.

[Screenshot: Covariates2]

This would be the core of the simulation engine because the user needs to specify:

  • Regression coefficients (‘Beta’).  This space lets the user specify the effect size for the regression coefficients under investigation. The default is 0.5 but that can be changed to any number. In the absence of any outside guidance, Cohen’s small-medium-large effect sizes are recommended. Remember that the regression coefficient for binary predictors is conceptualized as a standardized mean difference so it should be in Cohen’s d metric.
  • Level of the predictor (‘Level’). The app only supports 2-level models, so the options are ‘1’ or ‘2’. This section indicates whether a predictor belongs to the Level 1 sample (e.g. the ‘children’) or the Level 2 sample (e.g. the ‘schools’). Notice that whichever predictor gets assigned a random slope MUST also be selected as Level 1; otherwise the power analysis results will not make sense. It currently only supports one Level 1 predictor with a random slope. Other predictors can be included at Level 1, but they won’t have the option for a random slope component.
  • Distribution of the covariates (‘Distribution’).  Offers 3 options: normally-distributed, skewed  (i.e. χ2  with 1 degree of freedom or a skew of about √8) and binary/Bernoulli-distributed. For the binary predictor the user can change the population parameter and create imbalance between the groups. So, for instance, if p=0.3 then 30% of the sample would belong to the group labelled as ‘1’ and 70% to the group labelled as ‘0’. The default for this option is 0.5 to create an even 50/50 split.
  • Intercept (‘Intercept Beta’).  Lets the user define the intercept for the regression model. The default is 0 and I wouldn’t recommend changing it unless you’re making inferences about the intercept of the regression model.

[Screenshot: Covariates3]

Once the number of covariates has been selected, the app will offer the user all possible 2-way interaction effects irrespective of the level of the predictor and distribution characteristics. The user can select whichever 2-way interaction is of interest and assign an effect size/regression coefficient (i.e. ‘Beta’). The app will use this effect size to calculate power. Notice that the distribution of the interaction is fully defined by the distribution of its constituting main effects.

[Screenshot: simulationRuns]

The number of datasets generated using the population parameters previously defined by the researcher. The default is 10 but I would personally recommend a minimum of 100. The larger the number of replications the more accurate the results will be but also the longer the simulation will take.

[Screenshot: simulationRuns2]

The simulated power is calculated as the proportion of statistically significant results out of the number of simulated datasets and will be printed here. Notice the time progress bar indicating that the simulation is still running. For a 2-covariate model with both a random effect for the intercept and the slope the simulation took almost 3 min to run. Expect longer waiting times if the model has lots of covariates.

[Screenshot: simulationRuns3]

This is what a sample of a full power analysis looks like. The estimated power can be found under the column ‘Power’. The column labelled ‘NA’ shows the proportion of models that did not converge. In this case, all models converged (there are 0s all throughout the NA column) but the power of the fixed and random effects is relatively low with the exception of the power for the variance of the random intercept. In this example one would need to either increase the effect size from 0.5 to something larger or increase the Level 1 and Level 2 sample sizes in order to obtain acceptable power levels of 80%. You can either download your power analysis results as a .csv file or copy-paste them by clicking on the appropriate button.

Finally, here is the link for the shiny web app:

YOU CAN CLICK HERE TO ACCESS THE APP


If you’re an R user and would either like to see the code that runs underneath or would prefer to work directly with it for your simulation, you can check it out on my github account.
If this app is of any use to you and you’d like to cite it, please also cite the lme4, simglm and paramtest R packages. This is really just a shiny wrapper for the 3 packages put together. Those 3 packages are the ones doing most of the “heavy-lifting” when it comes to the simulation and calculations.

Ordinal Alpha and Parallel Analysis

This shiny app will:

– Give you the polychoric (or tetrachoric, in case of binary data) correlation matrix

– Do Parallel Analysis and a scree plot based on the polychoric (or tetrachoric) correlation matrix

– Calculate ordinal alpha as recommended in:

Zumbo, B. D., Gadermann, A. M., & Zeisser, C. (2007). Ordinal versions of coefficients alpha and theta for Likert rating scales. Journal of Modern Applied Statistical Methods, 6, 21-29.
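
If you would rather do these steps directly in R, a rough sketch with the psych package could look like the following (this is my assumption about the general approach, not necessarily the app's exact code; items is a placeholder name for your data frame of Likert items):

library(psych)

poly <- polychoric(items)                    # polychoric correlation matrix
poly$rho

fa.parallel(items, cor = "poly", fa = "pc")  # parallel analysis and scree plot on the polychorics

alpha(poly$rho)                              # "ordinal alpha": alpha computed on the polychoric matrix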

It currently takes in certain SPSS files (.sav extensions from older versions of SPSS, say 2013 or earlier), Microsoft Excel files (.xls extensions) and comma-delimited files (.csv extensions). If your data are in none of those formats, please convert them before using the app (it’s super easy), or it will give you an error. Also notice that the app will use ALL of the variables in the uploaded file, so make sure you upload a file that only has the variables (test items, in most cases) you want to correlate/calculate alpha for. You’ll also need to provide a clean dataset for it to work: if you have missing values, you’ll need to remove them manually before submitting, and if there are outliers, those need to be dealt with before using the app.

Please notice that, in accordance with research, if you have 8 (or more) Likert response categories the app will give you an error saying you have enough categories to safely treat your variables as continuous, so you don’t really need to use this app. You can see why in Rhemtulla, Brosseau-Liard & Savalei (2012).

YOU CAN CLICK HERE TO ACCESS THE APP