“Effect Size” — same data, different interpretations

Just a short R-script note to illustrate the three-page paper by Rosenthal & Rubin (1982).

Table 1 (p. 167) lays out a simple setup with a between-subjects treatment. The control group contains 34 alive and 66 dead cases; the treatment group contains 66 alive and 34 dead cases. The question is: what percentage of the variance is explained by the nominal IV indicating group membership?

The authors pointed out that one reader may interpret the result as "the death rate was reduced by 32 percentage points," while another may describe the very same data as "only 10.24% of the variance was explained" (phi = 0.32, so r^2 = 0.1024). To make it even more dramatic: imagine that an effect explaining just 4% of the variance reduces the death rate by 20 percentage points.
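Since the original script is not reproduced above, here is a minimal sketch of the 2×2 computation (the variable names are mine, not necessarily the post's):

```r
## Rosenthal & Rubin's (1982) Table 1: 100 cases per group
y_1 <- c(rep("Alive", 66), rep("Dead", 34));  ## treatment group
y_2 <- c(rep("Alive", 34), rep("Dead", 66));  ## control group
x <- c(rep(TRUE, length(y_1)), rep(FALSE, length(y_2)));  ## TRUE = treatment
y <- (c(y_1, y_2) == "Alive");  ## TRUE vs FALSE

## Absolute reduction in death rate: 66% - 34% = 32 percentage points
mean(!y[!x]) - mean(!y[x])       ## 0.32
## Variance explained by the group indicator (phi^2 = point-biserial r^2)
round(cor(x, y)^2, digits = 4)   ## 0.1024
```

For the "more dramatic" version, a 2×2 table with death rates of 60% vs 40% gives phi = 0.2, i.e. a 20-percentage-point reduction with only 4% of the variance explained.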


Rosenthal, R. & Rubin, D. B. (1982). A simple, general purpose display of magnitude of experimental effect. Journal of Educational Psychology, 74, 166-169.

6 thoughts on ““Effect Size” — same data, different interpretations”

  1. Does this statistical illusion exist only in some specific studies, or in all of them?

    The data in the studies I've read are all collected by self-report and then fed into hierarchical regression or SEM analyses. Do the results really explain the behavior, or do they just confirm the researchers' own concepts?

  2. Would you kindly post the paper's download link here or in the group? I can't find it either on Google or in the library.

  3. To Amy:

    Surely both are concrete quantitative interpretations rather than illusions.
    As to your HLM or SEM literature, there is always a distinction between causal and predictive interpretations.

  4. An even more dramatic case, cited by Wainer & Robinson (2003) --

    y_1 <- c(rep("No Heart Attack", 10933), rep("Heart Attack", 104));  ## aspirin group
    y_2 <- c(rep("No Heart Attack", 10845), rep("Heart Attack", 189));  ## placebo group
    x <- c(rep(TRUE, length(y_1)), rep(FALSE, length(y_2)));  ## group indicator; assumed here, as the original post's code is not reproduced
    y <- (c(y_1, y_2) == "No Heart Attack");  ## TRUE vs FALSE
    paste("Heart attack cases are reduced RELATIVELY by ", round((189-104)/189*100, digits=2), "%; while r^2=", round(cor(x, y, method="spearman")^2, digits=4), ", or only ", round(cor(x, y, method="spearman")^2*100, digits=2), "% variance has been explained.", sep="");
    ## ...... same code as in the original post

    Nevertheless, I think Wainer & Robinson (2003) inappropriately ignored the role of the confidence interval of the effect size.
    Wainer, H. & Robinson, D. H. (2003). Shaping up the practice of null hypothesis significance testing. Educational Researcher, 32, 22-29.

  5. The odds-ratio CI of Fisher's exact test interprets the 2×2 case better. For Wainer & Robinson's (2003) case, it would report, at the 99% confidence level, that aspirin reduces the heart-attack rate relatively by 24.83% ~ 60.71%.
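A sketch of how such endpoints could be obtained with fisher.test, under the assumption that heart attacks are rare enough for the odds ratio to approximate the relative risk (the exact endpoints depend on the CI method, so this shows the idea rather than guaranteeing the quoted figures):

```r
## Wainer & Robinson's (2003) aspirin data:
## rows = aspirin vs placebo; columns = heart attack vs no heart attack
tab <- matrix(c(104, 10933,
                189, 10845), nrow = 2, byrow = TRUE);
ft <- fisher.test(tab, conf.level = 0.99);
ft$conf.int  ## 99% CI for the odds ratio (both bounds below 1)
## With rare events OR ~= relative risk, so 1 - bounds gives the
## approximate relative reduction in heart-attack rate, in percent:
round((1 - rev(ft$conf.int)) * 100, digits = 2)
```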


    The inappropriateness of r^2 demonstrates why Wilkinson and the APA Task Force on Statistical Inference (1999, p. 599) recommended: "If the units of measurement are meaningful on a practical level (e.g., number of cigarettes smoked per day), then we usually prefer an unstandardized measure (regression coefficient or mean difference) to a standardized measure (r or d)."


    Wilkinson, L., & APA Task Force on Statistical Inference. (1999). Statistical methods in psychology journals: Guidelines and explanations. American Psychologist, 54, 594-604.
