The basic question of the paper is whether the observed effect in the
remote PEAR data is better modeled by the assumption that the mean is
actually shifted in the intentional conditions (influence) or by the
assumption that the operator is choosing the intentions to suit the
outcome of the remote device's operation (selection). The method is to
compare the distributions of run scores in the three intentional conditions
(high, low, and baseline) with their "rank frequencies." The rank
frequencies are the proportions of series displaying each of the six
possible relative rankings of the three intentions, from highest to
lowest outcome.
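As a concrete illustration (a sketch only, not the code used in the
actual analysis), tabulating rank frequencies from series-level mean
run scores might look like the following Python fragment; the
function name and the example numbers are invented:

    from itertools import permutations
    from collections import Counter

    def rank_frequencies(series_means):
        """series_means: list of (high, low, baseline) mean run scores,
        one per series.  Returns the proportion of series showing each
        of the 3! = 6 possible orderings."""
        labels = ("high", "low", "baseline")
        counts = Counter()
        for scores in series_means:
            # Order the three intentions from highest to lowest outcome.
            order = tuple(sorted(labels,
                                 key=lambda c: scores[labels.index(c)],
                                 reverse=True))
            counts[order] += 1
        n = len(series_means)
        return {order: counts[order] / n for order in permutations(labels)}

    # Hypothetical example: three series of (high, low, baseline) means.
    example = [(100.3, 99.8, 100.0), (100.1, 100.2, 99.9),
               (100.4, 99.7, 100.1)]
    for order, freq in rank_frequencies(example).items():
        print(order, round(freq, 3))

Each series thus falls into exactly one of the six rank-order
categories.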
Influence and selection models give different predictions of the
functional relationship between intentional distributions and rank
frequencies. Selection is refuted at about p=.03; influence is
consistent with the observed data. The selection model assumes that
the operator somehow becomes aware of the actual run outcomes
and assigns intentions to suit, but I also present an argument
showing that, given the small overall effect size, a standard DAT
(Decision Augmentation Theory) model would produce the same
statistics in the output data as the
intention-selecting model that I actually analyzed. (The two models
would diverge for substantially larger effect sizes.)
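The details of the statistics are in the JSE paper, but in essence
each model is judged by how well its predicted rank frequencies fit
the observed ones. A generic goodness-of-fit sketch in Python (not
necessarily the test used in the paper; the counts and predicted
proportions below are invented):

    from scipy.stats import chisquare

    # Hypothetical counts of series falling into each of the six orderings.
    observed_counts = [34, 21, 18, 15, 12, 10]
    # Hypothetical rank frequencies predicted by some model (sum to 1).
    predicted_freqs = [0.28, 0.20, 0.17, 0.15, 0.11, 0.09]

    total = sum(observed_counts)
    expected = [f * total for f in predicted_freqs]
    stat, p = chisquare(observed_counts, f_exp=expected)
    # A small p-value counts as evidence against the model that
    # generated predicted_freqs.
    print(f"chi-square = {stat:.2f}, p = {p:.3f}")

The point of the effect-size argument above is that, at the effect
sizes actually observed, the predictions of a standard DAT model and
of the intention-selecting model are essentially the same, so a test
that rejects one rejects the other.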
More generally, I am skeptical of DAT because it seems
inconsistent with the data I have seen, May et al.'s meta-analysis
notwithstanding. However, the restricted version covered in the
JSE paper is the only case I have yet analyzed with any rigor.
York Dobyns
ydobyns@phoenix.princeton.edu