
Sunday, March 17, 2013

Statistical Power in Statsmodels

Last week I merged a branch of mine into statsmodels that contains large parts of basic power calculations and some effect size calculations. The documentation is in this section. Some parts are still missing, but I think I have worked on this enough for a while.

(Adding the power calculation for a new test now takes approximately: 3 lines of real code, 200 lines of wrapping it with mostly boilerplate and docstrings, and 30 to 100 lines of tests.)

The first part contains some information on the implementation. In the second part, I compare the calls to the functions in the R pwr package with the calls to my (statsmodels') versions.

I am comparing it to the pwr package because I ended up writing almost all the unit tests against it. The initial development was based on the SAS manual, I used the explanations on the G-Power website for the F-tests, and some parts were initially written based on articles that I had read. However, during testing I adjusted the options (and fixed bugs) so that I could match the results to pwr. I think pwr has just the right level of abstraction and ease of use, so I ended up with code that is pretty close to it.
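To give a flavor before the detailed comparison, here is a minimal sketch of the kind of call involved: solving for the sample size of a two-sample t-test with statsmodels' TTestIndPower, the analogue of pwr.t.test(d=0.5, sig.level=0.05, power=0.8) in R (the example values are made up):

    # Sketch: solve for the per-group sample size of a two-sample t-test,
    # given effect size, significance level, and desired power.
    from statsmodels.stats.power import TTestIndPower

    solver = TTestIndPower()
    # Leaving nobs1 as None tells solve_power which quantity to solve for.
    nobs1 = solver.solve_power(effect_size=0.5, nobs1=None, alpha=0.05,
                               power=0.8, ratio=1.0, alternative='two-sided')
    print(nobs1)  # roughly 64 observations per group, matching pwr

The root finding behind solve_power is what makes it possible to ask for any one of the quantities given the others, which is also how pwr is organized.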

If you just want to see the examples, skip to the second part.

Friday, March 15, 2013

Different Fields - Different Problems: Effect Size

or What's your scale?

Effect Size

I have been working on and off for a while now on adding statistical power calculations to statsmodels. One of the topics I ran into is the effect size.

At the beginning, I wasn't quite sure what to make of it. While I was working on power calculations, it just seemed to be a convenient way of specifying the distance between the alternative and the null hypothesis. However, there were references that made it sound like something special and important. This was my first message to the mailing list:

A classical alternative to NHST (null-hypothesis significance testing):
report your effect size and confidence intervals

http://en.wikipedia.org/wiki/Effect_size

http://onlinelibrary.wiley.com/doi/10.1111/j.1469-185X.2007.00027.x/abstract

http://onlinelibrary.wiley.com/doi/10.1111/j.1460-9568.2011.07902.x/abstract

I bumped into this while looking for power analysis.

MIA in python, as far as I can tell.

Now what is the fuss all about?

Scaling Issues

Today I finally found some good motivating quotes:

"In the behavioral, educational, and social sciences (BESS), units of measurement are many times arbitrary, in the sense that there is no necessary reason why the measurement instrument is based on a particular scaling. Many, but certainly not all, constructs dealt with in the BESS are not directly observable and the instruments used to measure such constructs do not generally have a natural scaling metric as do many measures, for example, in the physical sciences."

and

"However, effects sizes based on raw scores are not always helpful or generalizable due to the lack of natural scaling metrics and multiple scales existing for the same phenomenon in the BESS. A common methodological suggestion in the BESS is to report standardized effect sizes in order to facilitate the interpretation of results and for the cumulation of scientific knowledge across studies, which is the goal of meta-analysis (<...>). A standardized effect size is an effect size that describes the size of the effect but that does not depend on any particular measurement scale."

The two quotes are from the introduction of "Confidence Intervals for Standardized Effect Sizes: Theory, Application, and Implementation" by Ken Kelley, http://www.jstatsoft.org/v20/a08.

Large parts of the literature that I was browsing or reading on this are in psychology journals. This can also be seen in the list of references on the Wikipedia page on effect size.

One additional part that I found puzzling was the definition of "conventional" effect sizes by Cohen. "For Cohen's d an effect size of 0.2 to 0.3 might be a "small" effect, around 0.5 a "medium" effect and 0.8 to infinity, a "large" effect." (sentence from the Wikipedia page)

"Small" what? small potatoes, small reduction in the number of deaths, low wages? or, "I'm almost indifferent" (+0 on the mailing lists)?

Where I come from

Now it's clearer why I haven't seen this in my traditional area, economics and econometrics.

Although economics falls into BESS, in the SS part, it has a long tradition of working with a common scale: money. Physical units also show up in some questions.

National Income Accounting tries to measure the economy with money as the unit. (And if something doesn't have a price associated with it, then it's ignored by most. That's another problem. Or we make up a price.) There are many measurement problems, but there is also a large industry working to figure out common standards.

Effect sizes have a scale that is "natural":

  • What's the increase in lifetime salary, if you attend business school?
  • What's the increase in sales (in Dollars, or physical units) if you lower the price?
  • What's the increase in sales if you run an advertising campaign?
  • What's your rate of return if you invest in stocks?

Effects might not be easy to estimate, or might not be estimable accurately, but we don't need a long debate about what to report as the effect.

Post Scripts

(i) I just saw the table at the end of this SAS page: http://support.sas.com/documentation/cdl/en/statug/65328/HTML/default/viewer.htm#statug_glm_details22.htm. I love replicating SAS tables, but I will refrain for now, since I am supposed to get back to other things.

(ii) I started my last round of work on this because I was looking at effect size as a distance measure for a chi-square goodness-of-fit test. When the sample size is very large, even small deviations from the null hypothesis will cause a statistical test to reject the null, even if the effect, that is the difference from the null, is for all practical purposes irrelevant. My recent preferred solution to this is to switch to an equivalence test or something similar: not testing the hypothesis that the effect is exactly zero, but testing whether the effect is "small" or not.
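A minimal sketch of that problem, using scipy's chisquare and Cohen's w (the square root of the usual chi-square discrepancy between two sets of probabilities) as the effect size; the probabilities and the sample size are made up:

    import numpy as np
    from scipy.stats import chisquare

    probs0 = np.array([0.25, 0.25, 0.25, 0.25])  # null hypothesis
    probs1 = np.array([0.26, 0.24, 0.25, 0.25])  # tiny, practically irrelevant deviation

    # Cohen's w: sqrt(sum((p1 - p0)**2 / p0)), a standardized distance to the null.
    w = np.sqrt((((probs1 - probs0) ** 2) / probs0).sum())
    print(w)  # about 0.028, far below even a "small" effect

    nobs = 10 ** 6
    stat, pvalue = chisquare(nobs * probs1, nobs * probs0)
    print(pvalue)  # essentially zero: the test rejects despite the negligible effect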

(iii) I have several plans for blog posts (cohens_kappa, power onion) but have never found the quiet time or the urge to actually write them.