Antidepressant trials and negative results

The New England Journal of Medicine [subscr. req] (WSJ article [free]) publishes today:
Among 74 FDA-registered studies, 31%, accounting for 3449 study participants, were not published. Whether and how the studies were published were associated with the study outcome. A total of 37 studies viewed by the FDA as having positive results were published; 1 study viewed as positive was not published. Studies viewed by the FDA as having negative or questionable results were, with 3 exceptions, either not published (22 studies) or published in a way that, in our opinion, conveyed a positive outcome (11 studies).
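The figures in the quoted abstract are internally consistent; a quick sanity check (the variable names below are my own labels for the categories the abstract describes):

```python
# Tallies taken from the quoted abstract; category names are mine.
positive_published = 37
positive_unpublished = 1
negative_unpublished = 22
negative_spun = 11        # published "in a way that ... conveyed a positive outcome"
negative_published = 3    # the "3 exceptions"

total = (positive_published + positive_unpublished
         + negative_unpublished + negative_spun + negative_published)
unpublished = positive_unpublished + negative_unpublished

print(total)                              # 74 FDA-registered studies
print(unpublished)                        # 23 studies never published
print(round(100 * unpublished / total))   # 31% not published
```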
Since non-scientists occasionally stumble over here, it must be mentioned that "negative" means: we gave the drug and nothing happened (not to be confused with "something bad happened"). 31% strikes me as a high number for antidepressants already in clinical use, higher than I'd have guesstimated. The caveats mentioned in the discussion section:
Our findings have several limitations: they are restricted to antidepressants, to industry-sponsored trials registered with the FDA, and to issues of efficacy (as opposed to "real-world" effectiveness). This study did not account for other factors that may distort the apparent risk–benefit ratio, such as selective publication of safety issues, as has been reported with rofecoxib (Vioxx, Merck) and with the use of selective serotonin-reuptake inhibitors for depression in children. Because we excluded articles covering multiple studies, we probably counted some studies as unpublished that were — technically — published. The practice of bundling negative and positive studies in a single article has been found to be associated with duplicate or multiple publication, which may also influence the apparent risk–benefit ratio.

There can be many reasons why the results of a study are not published, and we do not know the reasons for nonpublication. Thus, we cannot determine whether the bias observed resulted from a failure to submit manuscripts on the part of authors and sponsors, decisions by journal editors and reviewers not to publish submitted manuscripts, or both.

We wish to clarify that nonsignificance in a single trial does not necessarily indicate lack of efficacy. Each drug, when subjected to meta-analysis, was shown to be superior to placebo. On the other hand, the true magnitude of each drug's superiority to placebo was less than a diligent literature review would indicate.

We do not mean to imply that the primary methods agreed on between sponsors and the FDA are necessarily preferable to alternative methods. Nevertheless, when multiple analyses are conducted, the principle of prespecification controls the rate of false positive findings (type I error), and it prevents HARKing, or hypothesizing after the results are known.

It might be argued that some trials did not merit publication because of methodologic flaws, including problems beyond the control of the investigator. However, since the protocols were written according to international guidelines for efficacy studies and were carried out by companies with ample financial and human resources, to be fair to the people who put themselves at risk to participate, a cogent public reason should be given for failure to publish.
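The prespecification point in the quoted discussion is easy to demonstrate. If a trial measures several independent endpoints and whichever one crosses p < 0.05 counts as "positive," the false-positive rate climbs well past the nominal 5%. A toy simulation, with all parameters (ten endpoints, fifty patients per arm) invented for illustration:

```python
import math
import random

random.seed(42)

def z_test_p(xs, ys):
    """Two-sided p-value for a difference in means (known-variance z-test,
    both groups drawn with standard deviation 1)."""
    n = len(xs)
    diff = sum(xs) / n - sum(ys) / n
    se = math.sqrt(2.0 / n)
    z = diff / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def false_positive_rate(n_endpoints, n_trials=2000, n=50):
    """Fraction of null trials (drug identical to placebo) declared 'positive'
    when we are free to pick the best of n_endpoints outcome measures
    after seeing the data."""
    hits = 0
    for _ in range(n_trials):
        ps = [z_test_p([random.gauss(0, 1) for _ in range(n)],
                       [random.gauss(0, 1) for _ in range(n)])
              for _ in range(n_endpoints)]
        if min(ps) < 0.05:
            hits += 1
    return hits / n_trials

print(false_positive_rate(1))   # ~0.05: one prespecified endpoint
print(false_positive_rate(10))  # ~0.40: cherry-picking among ten endpoints
```

With ten independent looks, the chance of at least one spurious hit is 1 − 0.95¹⁰ ≈ 40%, which is exactly what prespecifying a single primary analysis prevents.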
Negative results often go unpublished because they may simply reflect the methodology. It's easy for a reviewer to point to something about the study design that made it not sensitive enough to pick up positive effects, and to suggest further experiments. Pharma companies have the added disadvantage that rounding up hundreds of patients for those further experiments is expensive, and the further disadvantage that it would hurt their bottom line if those additional experiments also turned out negative.

I've long fantasized about the idea of a "Journal of Negative Results," a dumping ground for any unused negative data. Eventually even the most obdurate investigator will give up on a project and move on...at that point, at least one publication could be gotten out of all the effort. And we'd all be better off knowing that someone had already tried that before. Of course, for this journal to work, reviewers would have to be barred from suggesting further experiments.
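Such a journal would also undo the distortion the quoted discussion points at: each drug really beats placebo, but if only the significant trials reach the literature, a meta-analysis of what's published overstates the drug's edge. A toy simulation with an invented true effect size:

```python
import math
import random

random.seed(7)

TRUE_EFFECT = 0.3   # hypothetical standardized drug-vs-placebo difference
N = 50              # patients per arm, chosen for illustration
SE = math.sqrt(2 / N)

def one_trial():
    """Observed effect of one simulated trial (difference in group means)."""
    drug = [random.gauss(TRUE_EFFECT, 1) for _ in range(N)]
    placebo = [random.gauss(0, 1) for _ in range(N)]
    return sum(drug) / N - sum(placebo) / N

effects = [one_trial() for _ in range(5000)]
# Keep only the trials that came out significant in the drug's favor:
published = [e for e in effects if e / SE > 1.96]

print(round(sum(effects) / len(effects), 2))       # ≈ 0.30: all trials
print(round(sum(published) / len(published), 2))   # ≈ 0.5: the "literature"
```

Averaging every trial recovers the true effect; averaging only the published ones inflates it by more than half, which is the gap between a diligent literature review and the FDA's file drawer.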