I’ve been trying to find a sensible correlation coefficient to assess the reliability of the cylindrical axis on an optometric prescription. Glasses can have *spherical* correction, which has equal refractive power in all axes; but they can also have *cylindrical* correction, which has power in a specific axis to correct for astigmatism. So a prescription might include one or more cylindrical lenses, which need to be set at a specific angle. In our PERGENIC work, we corrected participants’ vision when necessary, but we wanted to be sure that we were doing it reliably. We asked 10% of participants to return for a second session, and when they came back we performed a second refraction without referring to the results of the first one—that way, we could compare the two sets of data to make sure they were consistent.

So far, so good. But you run into a problem if you try to calculate, say, a Pearson product-moment coefficient.^{1} Here’s the problem: an angle is modular. For a cylinder axis, 0° is exactly the same as 180°. So if the first time I prescribe correction at 179°, and the second time I prescribe correction at 1°, there’s only a 2° difference. But to a standard correlation it looks like a 178° difference. Which is not very good.
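The modular difference itself is easy to compute directly (a minimal Python sketch; the 180° period is specific to axial data like cylinder axes):

```python
def axis_difference(a, b, period=180.0):
    """Smallest difference between two cylinder axes, in degrees.

    Cylinder axes are axial data: 0° and 180° are the same
    orientation, so the largest possible difference is 90°.
    """
    d = (a - b) % period        # fold the raw difference into [0, period)
    return min(d, period - d)   # take the shorter way around the circle

print(axis_difference(179, 1))  # → 2.0, not 178
```

That fixes pairwise differences, but it doesn’t yet give you a correlation coefficient.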

So why not, before we correlate, just re-express the data in these sorts of instances to reflect the true difference? We could change the 179° to −1°, or we could change the 1° to 181°. That way, we have −1° versus 1°, or 179° versus 181°, either of which is only a 2° difference. The problem is that you’ll get a different correlation coefficient depending on which decision you make. They’re not equivalent. So maybe you do it by resampling—create a whole lot of datasets, and for each one you make all decisions like this in a random manner. Some datasets will compare −1° with 1°, others 179° with 181°. Then you calculate a standard coefficient for each dataset, and take some average measure across all the datasets. You’ll get a quasi-sensible answer, but it’s not very satisfying. And I knew there was a much better way to conceptualise it.

One very natural way to think about correlation is in Cartesian space—you place points in the space using their value on the first measure (here, the angle on the first visit) as the *x*-coordinate, and their value on the second measure (here, the angle on the second visit) as the *y*-coordinate. In this space, the Pearson coefficient (to simplify somewhat) describes how well a straight line accounts for the relationship between the two measures. So here, (**A**) has a high positive correlation, (**B**) has a moderate one, and (**C**) has basically zero.

When you have a circular variable, like an angle, all that happens is that this space gets folded. You take the top of the square and you wrap it around to meet the bottom, ending up with a cylinder. That way, in the *y*-dimension, 0° is equal to 180°, and −1° is only 2° away from 179°. To get the same for the *x*-dimension, you take the ends of your cylinder and bend them around to make a doughnut, or torus.
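The folding can be made concrete in code: embed each axis as a point on the unit circle, doubling the angle first (the standard trick for axial data, whose period is 180° rather than 360°). A sketch:

```python
import math

def to_circle(axis_deg):
    """Map a cylinder axis (degrees) to a point on the unit circle.

    Doubling the angle means 0° and 180° land on the same point,
    which is exactly the identification the folding performs.
    """
    theta = math.radians(2 * axis_deg)
    return (math.cos(theta), math.sin(theta))

# 179° and 1° end up close together, as they should:
# to_circle(179) ≈ (0.999, -0.035); to_circle(1) ≈ (0.999, 0.035)
```

Doing this for both visits places each participant on the torus: one circle for the *x*-dimension and one for the *y*-dimension.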

So now we’ve turned Cartesian space into a space that properly takes account of how angles relate to one another. And there’s a clear analogue to the Pearson coefficient in this space. How well does a straight line on the surface of the torus describe the relationship between the two measures? You can work it out in the same way. In theory. In practice, I wasn’t really up to the task. My geometry on the surface of a torus only goes so far.

But, of course, someone has had a crack at this before. Actually, there’s a whole field of *circular statistics* that I knew had to be out there, but I couldn’t find the right terms to Google it. I spent a long time searching through literature about wind directions, because I figured there’s a similar problem there—in fact, Fisher and Lee (1983; JSTOR subscription required) use wind direction as an example in a seminal paper on the subject. Anyway, the formula from Fisher and Lee’s paper is here (free). Their coefficient wasn’t the first, but it seems to be the one in most common usage today. There’s also a very good circular statistics toolbox by Philipp Berens on the MATLAB file exchange, with documentation here. I’m working on some good ways to visualise the correlation, so tune in again later for my progress on that.
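For the record, Fisher and Lee’s coefficient is built from sums over all pairs of observations. A minimal Python sketch (my own translation of the published formula; remember to double the angles first for axial data like cylinder axes):

```python
import math
from itertools import combinations

def circ_corr(theta, phi):
    """Fisher & Lee (1983) T-linear circular correlation coefficient.

    theta, phi: paired angles in radians. For axial data such as
    cylinder axes (period 180°), double the angles first, e.g.
    theta = [math.radians(2 * a) for a in axes_deg].
    """
    num = den_t = den_p = 0.0
    for i, j in combinations(range(len(theta)), 2):
        st = math.sin(theta[i] - theta[j])
        sp = math.sin(phi[i] - phi[j])
        num += st * sp          # numerator: paired sine products
        den_t += st * st        # denominator terms for each variable
        den_p += sp * sp
    return num / math.sqrt(den_t * den_p)
```

Perfectly agreeing angles give 1, perfectly opposed ones give −1, just as with Pearson’s *r*.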

^{1} Of course this isn’t necessary—or even necessarily a good idea—when calculating test–retest reliability. You can always use something like a Bland-Altman repeatability coefficient, and in many circumstances it’s much more theoretically sensible to do just that. And in that case, it’s easy to account for the special problem of circular variables. But there are plenty of circumstances where you’d like something more like *r*^{2}, and that’s what I’m really discussing here.