
Bias in science vs. honest errors

Some comments attached to the previous post discuss cases where scientists made statements or drew conclusions that turned out to be wrong. When should we suspect bias, as opposed to honest errors? Some scientists, of course, may have financial conflicts of interest, such as stock in tobacco or biotech companies. But strong opinions can be a source of bias even without a direct conflict of interest.

Here's an example from my own past research. For ten years, I directed the Long-Term Research on Agricultural Systems (LTRAS) project at UC Davis. This huge field experiment included comparisons of organic and conventional farming methods. (LTRAS also compared irrigated and nonirrigated systems, which you might think would generate more interest, given how much of California's limited water supply is used by agriculture. But those comparisons never generated as much controversy, for some reason.)

The simplest way to compare conventional and organic systems would be to have the organic system exactly like the conventional one, only without the synthetic fertilizers and pesticides. But no serious organic farmer would farm that way.

So, for example, we substituted compost and nitrogen-fixing cover crops for fertilizers in the organic system (and in several alternative systems that were not strictly organic). OK, but which cover crops? A scientist biased against organic methods could tilt the balance in favor of the conventional system just by choosing a bad cover crop. A lazy scientist, or one pressed for time or money, could choose a cover crop based on published data (trying to match local conditions) or by asking a nearby organic farmer for a recommendation. Ideally, one would start with such sources but then test various alternatives before making a final decision.

At LTRAS, Martha Jimenez tested four cover crop species, each at two seeding rates, plus two combinations. Woollypod vetch or a mixture of vetch and peas did best in her one-year experiment, so Dennis Bryant and his crew tested these options over three years before deciding. (Vetch+peas proved to be the least risky, even though vetch-only did slightly better under ideal conditions.) Similarly, we tested Farm Advisor Tom Kearney's suggestion that we should use a different corn cultivar in systems without nitrogen fertilizer. (These tests and other results for the first nine years of this 100-year experiment have been published: see Field Crops Research 86:267; email me if you want a PDF.) Without this "tuning", the organic system would have done worse than it did. More generally, we tried to optimize each of the nine other systems at LTRAS within its own system-specific constraints. For example, irrigating the nonirrigated system was not an option, but we did choose a wheat cultivar suited to nonirrigated conditions.

Here's where concerns about bias come in. Someone who suspected us of bias could claim that we should have done more to optimize whichever system they favored. For example, if timing of cultivation is important in all systems, but especially in organic ones, should we always have given the organic systems priority when scheduling, even if that meant neglecting conventional ones in ways no conventional farmer would? I know that we were committed to finding out which methods are best, rather than trying to prove preconceived ideas. But that doesn't mean we always made perfect decisions. And why should you believe me? After all, my brother Tom Denison is an organic farmer; I could be biased by that, or by a graduate education and postdoctoral work in Crop Science that those unfamiliar with my advisers, Tom Sinclair and Bob Loomis, might assume was "brainwashing." (It would be more accurate to call their efforts "brain-building.")

If individual scientists or groups of scientists have conscious or unconscious biases, that may influence their conclusions and even their results. Fortunately, two solutions to this problem are built right into the fabric of science today. The first is peer review. Before a paper is published in any reputable scientific journal, it is reviewed by at least two experts with no direct connection to the authors of the paper. (We may know each other, however.) These reviewers look for problems such as unreliable methods, inconsistency between results and conclusions, and inconsistency with previously published results. Inconsistency with earlier work should not by itself lead to rejection, but reviewers should insist that the discrepancy be discussed. Note that most books, web sites, pamphlets, popular magazines, television programs, and even certain "junk journals" (low citation impact is a clue) have little or no peer review. As a result, I have usually found reading such sources to be a waste of time. For example, critical details needed to assess the reliability of results are often left out.

Second, and more important, any really important conclusions need to be based on results confirmed by at least two independent groups. This is the best way to detect fraudulent or biased results: do other research groups, who may have different biases, nonetheless get the same results? This is one reason society would benefit from investing more in research. When research money is scarce, studies needed to confirm or refute important results may not get done.

With peer review and independent testing of important results, the biases and errors of individual scientists do not prevent the scientific community from reaching reliable conclusions, sooner or later.

Comments

This is the model of the scientific process I learned in school. My experience of actual science, in many cases, has not matched it. A "sooner or later" that exceeds a human lifetime isn't really good enough.

How can the scientific process be improved? Identifying and neutralizing institutional biases has to be central to the effort. Identifying the psychological biases characteristic of those who self-select to become scientists should take us much of the rest of the way. That ought to be enough of a challenge for the next three generations.
