
May 28, 2010

Sexual imprinting in redheads

We may disagree on who's most attractive, but most of us prefer to mate with a member of our own species. That can be tricky for redheads, though. Redhead mothers often lay their eggs -- I'm talking about ducks, of course -- in canvasback nests, saving themselves the trouble of raising the chicks. But what if chicks raised by canvasbacks end up thinking of themselves as canvasbacks? Or, however they may think of themselves, what if their early experience makes them prefer canvasbacks as mates? They might end up mating with the wrong species.

This apparently doesn't happen very often in nature, but why not? One possibility is that redheads are genetically programmed to prefer redheads, even if they grow up with canvasbacks. After all, brush turkeys grow up to prefer mating with brush turkeys (below), even though they're incubated in compost piles managed mostly by males. Michael Sorenson, Mark Hauber, and Scott Derrickson, the authors of this week's paper, "Sexual imprinting misguides species recognition in a facultative interspecific brood parasite", published in Proceedings of the Royal Society, set out to test this hypothesis for redheads.
Brush turkey and brush turkey mound in Australia. Photo by Ford Denison.

They raised male redhead ducks in nests with either three female redheads or three female canvasbacks as foster sisters. As a control, they also included a male canvasback chick. They reasoned that, because canvasbacks almost never lay their eggs in redhead nests, canvasbacks that mate with the same species they grew up with would be mating with their own species. For redheads, though, this same behavior would often lead to mating with the wrong species. So they expected redheads, but not canvasbacks, to have a genetic preference for their own species, too strong to be overcome by early experience.

That's not what they found, however. Males of each species directed their courtship almost exclusively towards the species they were raised with as chicks. (This must have been based on their foster sisters, because none of them were raised by mother ducks.)

Based on these experimental results, you might expect lots of redhead-canvasback hybrids in the wild, or at least lots of mixed-species pairs (if hybrid chicks have low survival). But such mixed marriages appear to be rare.

The authors suggest some possible explanations. In the wild, most parasitized nests would have female redhead chicks as well as males. It would have been interesting to include a treatment with equal numbers of females of each species in each nest.

The preferences of females for their own species are also likely to be important. In the wild, there are usually several males per female. (I've noticed this in mallards and assumed it was because females on nests are more susceptible to predators, but I don't know if this is the real reason.) In these experiments, however, there were similar numbers of males and females, so females may have been less choosy.

May 21, 2010

It's not all junk -- and it evolves!

We've known for a long time that most human DNA doesn't code for protein, that much of that noncoding DNA is junk (former genes that no longer do anything, multiple copies of selfish "jumping DNA", etc.), but that some noncoding DNA performs useful functions. Click "junk DNA" at right for past posts on this topic. This week's paper, Adaptive Evolution of an sRNA That Controls Myxococcus Development (published in Science by Yuen-Tsu N. Yu, Xi Yuan, and Gregory J. Velicer), is an example of how such functions can evolve.

Myxococcus xanthus is a "social bacterium", whose behavior somewhat resembles that of the "social amoeba", Dictyostelium. When starved, the individual bacterial cells get together in a mound and form spores. Previously, Velicer's group found a mutant that doesn't do this. Then a second mutation arose in that line that restored the original behavior. Now they report the molecular basis for this restored spore-forming ability. The product of the key gene turns out to be a small RNA molecule. Its normal function is apparently to block aggregation and spore formation, except when starved. The new mutation essentially knocks out this function, restoring the ability to make spores, but without the normal link to starvation.

Creation disproved?

If evolutionists want to end the arguments all they have to do is, get their brilliant heads together and assemble a 'simple' living cell. --creationist comment, 2007
Somehow, I doubt that the simple living cell assembled by Venter and colleagues will end the arguments. (Discussion at Pharyngula and elsewhere.) First, we're still a ways from making a working cell "from scratch." Venter's group used an existing cell, minus its DNA. It probably won't be too long before we'll be able to fill that gap (if anyone wants to bother), using membranes, ribosomes, etc. made in the lab without the use of living cells. But I'd be very surprised if we succeeded, within the next 50 years, in designing and making life in the lab without using information from existing life. Venter used the complete DNA sequence from another bacterium. But even if he'd designed the whole genome from scratch, the idea of using DNA as the hereditary material would be "borrowed" from existing life.

Are we smart enough to invent a totally new form of life? A computer virus that mutates in response to selection imposed by spam filters might qualify. So might a self-reproducing robot. In both cases, they would need special conditions (availability of computers or robot parts) to reproduce, but that's true of most living things. Could we design a life-form that could reproduce and evolve without using any materials produced by existing life-forms? Not anytime soon, I bet.

Could we, instead, create the conditions under which such a new life-form would arise and then evolve? For example, suppose we set up a thin metal plate with billions of cell-size holes through it (dimensions chosen so that macromolecules would have more chance to interact with each other than if they were floating in a large volume of water), then manipulate the chemical conditions on the two faces to provide a potential source of chemical energy. Throw in some of the organic molecules we know can arise from nonliving processes, and wait. Or something like that.

Making life that way would be much more of a challenge to creationism than any extension of Venter's approach. After all, suppose we do eventually design and create life from scratch. What would that prove? Our creationist commentator seemed to think it would disprove the hypothesis that life was created by a god. But I would draw the opposite conclusion. If we ever design and create life from scratch, without copying any aspect of the design of existing life, surely someone with superhuman intelligence -- I'm giving this hypothetical god the benefit of the doubt here -- could have done the same thing. In other words, we would have shown that a god could have created life on Earth. Whether one actually did is another question.

Similarly, if we ever produce a new life-form by creating conditions that favor its evolution, but without a detailed design for what we want the new life-form to be like, that will show that life could have arisen without any intervention by an existing intelligence. Again, whether that's what happened the first time would remain an open question.

One corollary to Clarke's Third Law ("any sufficiently advanced technology is indistinguishable from magic") is that any sufficiently advanced technology makes its practitioners indistinguishable from gods.

May 19, 2010

15 minutes of fame

Science writer Carl Zimmer has posted his "Meet the Scientist" podcast interview with me on the Microbe World web page.

A story about our PLoS One paper was 2010's most-viewed research report on the University of Minnesota web page.

Separately, my PhD student, Will Ratcliff, was one of four students featured on the University of Minnesota web page. In the video (upper right, labeled "Multimedia"), he alternates with three social scientists.

Tenure, seniority, and the benefits of incumbency

Retention based on seniority is, in effect, a conspiracy between teachers' unions and people for whom lower taxes are more important than quality education for the kids in their community.

This post is inspired by two recent New York Times stories. One reports the battle between teachers' unions (favoring pay and job retention based on seniority) and educational reformers who want pay and retention to be based on other criteria, such as student test scores (Brill 2010). The other story reports that some incumbent politicians in the US lost primary battles to challengers in their own party (Zeleny and Hulse 2010). This is news because it hardly ever happens.

I want to make two points. First, US teachers and US politicians are in similar situations. Once they've been in the job for a while, they can be hard to get rid of, even if their performance falls well below average. This is also true of university professors, medical doctors, and business executives, although pay in those occupations may depend more on current or past performance than it does for politicians or teachers.

Second, random changes to the current system could make things worse rather than better, for an economic reason I haven't seen discussed. More-thoughtful changes are another story.

How do poorly performing people manage to keep their jobs in these very different occupations? For teachers, basing retention on seniority makes it difficult or impossible to fire a poorly performing senior teacher, even if their contract doesn't explicitly promise the life-long employment that college and university professors typically enjoy. In the US Congress, at least, seniority translates into committee appointments that let incumbents funnel government money ("pork") to their states. So voters are reluctant to oust them, even if they are corrupt or incompetent. Business executives often appoint the boards that determine their salaries and job retention. Bad doctors (Kolata 2005; Leonhardt 2006) are presumably responsible for more than their share of the 120,000 Americans who die each year due to medical errors (Levy 1996), although fully 84% of doctors can't be bothered to change their gloves to keep from transferring pathogens among patients (Yoffe 1999). But, like rapist priests, the worst that bad doctors have to fear is having to move to a community with less oversight.

Before eliminating tenure, seniority, or incumbent-politician advantage, however, consider their positive aspects. For the sake of argument, let's assume that the rights of incumbents to continued employment are trumped by those of thousands of students, citizens, stock-holders, or patients. Let's also recognize that more-experienced people are often better at their jobs. Our concern is with those whose performance deteriorates severely over years, for whatever reason.

If security of employment isn't a right, it is certainly a major perquisite. So, if we reduced the job security of teachers, we would need to increase salaries or other benefits to attract equally qualified applicants. (And don't we want even more-qualified applicants than we have now?) Retention based on seniority is, in effect, a conspiracy between teachers' unions and people for whom lower taxes are more important than quality education for the kids in their community. As an alternative to higher salaries, we could consider perquisites that would particularly appeal to the sort of person we want to attract to teaching. Consider sabbaticals. Someone who is really excited about teaching French or biology might take a job that paid for summers in France or expenses to participate in lab or field research, even if their salary were lower.

Suppose we eliminated seniority as the sole criterion for retention: how would this affect the type of people who apply for teaching positions? It would depend on what the new criteria were.

Letting principals or deans fire teachers or professors at will would select for ass kissers. Furthermore, we would have to raise taxes, to pay more than the many lucrative ass-kissing positions in the private sector. A fellow faculty member once suggested that my project would get more money from the dean if I started going to his church. Someone else suggested that a major reduction in our budget might have had something to do with my public criticism of the dean's biotechnology-only approach to hiring. I don't want to believe either of those claims, but I was glad that I had tenure. If deans or principals could fire at will, students would be exposed to less diversity of opinion.

What about test scores as a criterion? If pay or continued employment for teachers depended on the absolute test scores of their students, then school districts with poorly prepared students would have trouble hiring any teachers at all, because they wouldn't expect to survive more than one year.

But why not base pay and retention on how a teacher's students perform on a test at the end of the year, relative to other students who had similar scores at the beginning of the year? This seems more promising, but it depends on the test. If the test were based only on memorization, then potential teachers who are good at helping students develop critical thinking skills or creativity would look for another profession. If there were tests that measured the full range of students' intellectual progress, however, then "teaching to the test" could be a good thing. This wouldn't be easy, but it might be possible.
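To make "relative to other students who had similar scores at the beginning of the year" concrete, here's a minimal sketch of a value-added calculation in Python. The scores, the rounding-based binning, and the function names are all invented for illustration; real value-added models are considerably more sophisticated.

from collections import defaultdict

def expected_by_fall_score(district):
    """Average spring score for each (rounded) fall score, district-wide."""
    buckets = defaultdict(list)
    for fall, spring in district:
        buckets[round(fall)].append(spring)
    return {k: sum(v) / len(v) for k, v in buckets.items()}

def teacher_value_added(klass, expected):
    """Mean gain of one teacher's students over similar-starting peers."""
    gains = [spring - expected[round(fall)] for fall, spring in klass]
    return sum(gains) / len(gains)

# Invented (fall, spring) scores for a district and for one class.
district = [(50, 55), (50, 57), (70, 74), (70, 72), (90, 91), (90, 93)]
expected = expected_by_fall_score(district)
print(teacher_value_added([(50, 60), (70, 78), (90, 95)], expected))  # 4.0

The point of the comparison to similar-starting peers is that a teacher in a poorly prepared district isn't penalized for low absolute scores, only for gains below what comparable students achieve elsewhere.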

University faculty are hired to produce new knowledge that benefits society, not just to transmit existing knowledge. It's probably easier to set minimum expectations for this function than it is to evaluate the quality of teaching in specialized fields. The quality and quantity of published research is already a major factor in determining pay raises for university faculty. But, if a professor hasn't published for several years, maybe he or she should be fired. Again, this risk would make university positions less attractive, so we would have to increase salaries or improve working conditions, to attract people as qualified as the current pool.

Similar arguments apply to other professions. Term limits are a really stupid idea, given the real benefits of experience. But reducing the ability of more-senior politicians to direct funds to their states would reduce the incentive for voters to re-elect a politician who is not representing their other interests. Pay-for-performance for doctors seems like a good idea, but it should be based on long-term patient outcomes, not numbers of procedures performed. Business executives would take a longer-term view if much of their compensation came in the form of stock that could never be sold, and which reverted to the company on their death. If the company prospered over the long-term, though, they'd get dividends for life.

The key point is that the criteria we use to determine salary and job retention will affect what type of people are attracted to a job, as well as how motivated they are to excel.

LITERATURE CITED

Brill S. 2010. The Teachers' Unions' Last Stand. New York Times, 17 May 2010.
Kolata G. 2005. When the Doctor Is in, but You Wish He Weren't. New York Times, 30 November 2005.
Leonhardt D. 2006. Why Doctors So Often Get It Wrong. New York Times, 22 February 2006.
Levy D. 1996. Medical Groups Act to Curb Errors. USA Today, 14 October 1996.
Yoffe E. 1999. Doctors Are Reminded, "Wash Up!" New York Times, 9 November 1999.
Zeleny J., and C. Hulse. 2010. Specter Defeat Signals a Wave Against Incumbents. New York Times, 18 May 2010.

May 15, 2010

Evolution of DNA methylation in animals, plants, and fungi

This week, I will try to explain what DNA methylation is and some of the reasons why it's important, before discussing this week's paper on how DNA methylation has evolved.

The paper is "Genome-Wide Evolutionary Analysis of Eukaryotic DNA Methylation", published in Science by Assaf Zemach and others from the lab of Daniel Zilberman.

DNA methylation usually refers to the attachment of a methyl (CH3) group to a cytosine, one of four DNA bases (C, in DNA's A,T,C,G alphabet). Here's a link showing one way cytidine can get methylated. And this Wikipedia article shows cytosine in place in double-stranded RNA. (DNA would be similar, but with T instead of U.)

The functions of DNA methylation mostly come from the reduced transcription of RNA from methylated stretches of DNA. Surprisingly, when a new DNA copy is made (e.g., when one of our cells divides), methylation patterns are generally copied, too. Together, these two facts explain many of DNA methylation's functions.
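Before getting to those functions, here's a toy sketch of what "copied" means here (hypothetical Python, invented for this post, not from the paper): after replication, the old strand still carries its methyl marks, and a maintenance enzyme adds matching marks to the new strand.

def replicate(parent_marks):
    """Semiconservative replication: the old strand keeps its methyl
    marks; the newly synthesized strand starts out unmethylated."""
    return set(parent_marks), set()

def maintain(old_strand, new_strand):
    """Maintenance methyltransferase: wherever the old strand is
    methylated, add the matching mark to the new strand."""
    return new_strand | old_strand

marks = {12, 57, 300}                 # methylated positions in the mother cell
old, new = replicate(marks)
assert maintain(old, new) == marks    # the pattern survives cell division

In real cells, this works because the relevant sites are symmetrical on the two strands, so a methylated site on the old strand marks exactly where the new strand needs a methyl group.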

First, DNA methylation is key to imprinting, whereby genes inherited from one parent are often shut down, perhaps for life, by methylation. Imprinting often reflects an unconscious battle between male and female parents over whether to maximize growth of this particular offspring, whatever the consequences for the mother's future survival and reproduction, or to take a more long-term view. Earlier, I discussed the possible role of imprinting in mental illness.

Second, DNA methylation is important in phenotypic plasticity, whereby individuals with the same genotype may develop different phenotypes in different environments. For example, DNA in embryos developing in mothers on low-protein diets gets methylated differently, with life-long consequences for regulation of blood glucose. In effect, individuals born to poorly nourished mothers develop phenotypes appropriate for starvation conditions.

This role for DNA methylation was presumably inherited from mouse-like ancestors with shorter lives than ours, so that the mother's nutritional environment was likely to be fairly similar to that experienced by her offspring. But humans typically reproduce twenty years or more later than their mother did, perhaps in a very different nutritional environment. If food is much more available later in life than it was for our mothers during pregnancy, we may have methylation patterns that make us more prone to become obese or develop diabetes.

Third, DNA methylation is widely used to shut down transposable elements (TEs), sections of unusually selfish "junk DNA" -- not all nonprotein-coding DNA is junk -- which, left unchecked, would make even more copies of themselves, throughout the genome, than they have already.

But how has DNA methylation changed over the course of evolution? That is the topic of this week's paper. The authors measured DNA methylation throughout the genomes of 17 species, including plants, fungi, and animals, as well as the effects of this methylation on transcription of affected regions into RNA.

There were some remarkable differences among species. Consider CG methylation. This refers to methylation of C when it's next to G on the same strand, as opposed to just paired with G in the opposite DNA strand, which is true for almost all C. (Sometimes this type of methylation is referred to as CpG, with the p indicating the phosphate connecting the two bases along the DNA strand.) Selaginella, an "early-diverging" plant, had very low levels of CG methylation throughout the protein-coding region of most genes. Rice, in contrast, had low CG methylation in the promoter region of most genes, but high CG methylation through most of the protein-coding section.
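If the distinction seems confusing, a few lines of Python may help (a made-up example; the sequence and numbers are invented). Every C pairs with a G on the opposite strand, but only some Cs are immediately followed by a G on the same strand, and those are the CG (CpG) sites:

def cpg_sites(seq):
    """Positions where C is immediately followed by G on the same strand."""
    return [i for i in range(len(seq) - 1) if seq[i:i+2] == "CG"]

def region_methylation(methylated, sites, start, end):
    """Fraction of CpG sites in [start, end) that are methylated."""
    in_region = [s for s in sites if start <= s < end]
    return sum(s in methylated for s in in_region) / len(in_region) if in_region else 0.0

seq = "ACGTTCGGCATCCG"       # invented sequence
sites = cpg_sites(seq)
print(sites)                 # [1, 5, 12]: three CpG sites, but five Cs
print(region_methylation({1, 5}, sites, 0, 8))   # 1.0: both sites in [0, 8) methylated

Computing that fraction separately for promoters and protein-coding regions is, in essence, how one gets statements like "low CG methylation in the promoter, high CG methylation in the gene body."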

What about the common ancestor of these plants? In other words, was methylation of protein-coding regions gained at some point along the rice branch, or lost at some point along the Selaginella branch? Plants are descended from algae, so the authors looked at two algae as well. They show data for Chlorella, whose CG methylation is even more extensive than rice's, with significant (but lower) methylation even in the promoter regions. They also found lots of methylation of transposable elements (TEs, transposons) in the algae and concluded that "methylation of both gene bodies and TEs thus appears to be an ancient property of plants."

More generally, they concluded that:

"Our data indicate that gene body methylation is basal, predating the divergence of plants and animals around 1.6 billion years ago (fig. S1), whereas the antitransposon function probably evolved independently in the vertebrate and plant lineages."

I expect we will be hearing much more about the evolution of DNA methylation and its implications.

May 7, 2010

E-word in NYT -- a bigger surprise than Roundup-ready weeds?

There have been isolated reports, for years, of various weeds evolving resistance to glyphosate (sold commercially as Roundup etc.), but now glyphosate resistance is showing up in pigweed, a major problem for farmers in the US and elsewhere. Among alternative ways to kill weeds, other herbicides are mostly more toxic and break down more slowly, whereas mechanical cultivation tends to increase erosion.

On the positive side, maybe more people will buy my book on Darwinian Agriculture, although I'll have to revise it before publication to turn what was a prediction into a fact. Maybe I can get some Neanderthal crackpot with a radio show to accuse me of deliberately spreading Roundup-resistant weeds to increase sales. You can't buy publicity like that! But I'd rather have clean rivers than a best-selling book.

I was impressed that many of the experts discussing the problem in the New York Times referred to "evolution" or "natural selection", although one referred to weeds as "opponents that can adjust" (as if individual plants were trying different ways to survive herbicides) and said that some weeds can "mutate to survive", as if mutation were somehow directed. Plants have evolved so that individuals can adjust to certain changes in their environment (drying soil, for example). But it's populations, not individuals, that evolve. And, in this case, they evolved mainly because herbicide-susceptible individuals did not survive. I assume the author knows this, but some readers could be misled.
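The population-versus-individual point is easy to show with a toy simulation (Python, with made-up survival rates). No individual plant changes; the frequency of a resistance allele rises because susceptible plants rarely survive spraying:

def next_generation_freq(p, w_resistant=0.9, w_susceptible=0.01):
    """One generation of herbicide selection on a (haploid, for simplicity)
    resistance allele at frequency p, given survival rates under spraying."""
    resistant = p * w_resistant
    susceptible = (1 - p) * w_susceptible
    return resistant / (resistant + susceptible)

p = 1e-6                     # resistance starts as a rare mutation
for gen in range(8):
    p = next_generation_freq(p)
    print(f"generation {gen + 1}: resistance frequency = {p:.3g}")

Within a few generations of repeated spraying, the once-rare allele dominates the population, even though no plant "adjusted" to anything.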

Maybe now people will start paying more attention to management practices that slow the evolution of herbicide resistance. Resistance-management programs for insect pests, to slow the evolution of resistance to the Bt toxin, seem to be working reasonably well, but there's nothing similar in place for weeds yet.

One important difference is that the insects plaguing an individual farmer may well come from a distant neighbor, so there's little individual incentive to implement expensive resistance-management programs. An individual farmer's weed problems, on the other hand, are much more dependent on how they were managed on that same farm in the past. So farmers may be more motivated to invent and implement resistance-management strategies for weeds.

One of my favorite weed management strategies is alternating, every few years, between using a field for grazed pasture, where weeds of row crops tend to die out, in rotation with row crops, where pasture weeds tend to die out. That requires farmers with the expertise and willingness to work with both crops and livestock, however. And milk or meat from animals eating mostly grass and clover may be more expensive than the same products produced in a feedlot.

May 6, 2010

Do legume hosts benefit from suppressing rhizobial reproduction?

This week's paper is by my PhD student Ryoko Oono, with major contributions from Imke Schmitt (University of Minnesota faculty) and Janet Sprent, who was an expert on legume-rhizobium evolution long before I started working on the problem.

"Multiple evolutionary origins of legume traits leading to extreme rhizobial differentiation" has been published on-line in New Phytologist.

Rhizobia are soil bacteria, but a lucky few accept invitations from legume plants to infect their roots, multiply a million-fold or more inside a nodule, and then convert ("fix") atmospheric nitrogen into a form that the plant can use. When the plant dies (or sometimes sooner), an unknown fraction of the rhizobia in each nodule escape back into the soil.

Below left is what rhizobia look like in the soil and in the nodules of some legume hosts, including soybean. In other hosts, including pea, they swell up and/or change their shape (below right, same scale) as they differentiate into the nitrogen-fixing bacteroid form. The swollen form is apparently nonreproductive (like worker bees), but copies of their genes can still end up back in the soil. This is because some of their clonemates in the same nodule haven't become bacteroids yet and so retain the ability to reproduce, like queen bees.
[Images: nonswollen rhizobia (left) and swollen bacteroids (right), same scale.]
The extreme differentiation shown above right is imposed by the legume host. But why? Are swollen bacteroids somehow more beneficial to the plant? Or are swelling and the loss of the ability to reproduce side effects of some other process that may or may not benefit the plant?

Ryoko reasoned that, if a plant trait has evolved repeatedly over the course of evolution, then it is probably beneficial to the plant. On the other hand, a trait that has been abandoned repeatedly is probably harmful. But has either of these happened?

To find out, she studied 40 different legume species (wild and domesticated) and determined whether bacteroids were swollen in each. She used light microscopy, as in the above images, and also flow cytometry. With the flow cytometer, she could analyze thousands of rhizobia per nodule, to see whether they came in two distinct size classes: swollen nitrogen-fixing bacteroids and nonswollen reproductives. Janet Sprent contributed electron micrographs for a number of less-widespread species that Ryoko couldn't find.
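For readers curious about what "two distinct size classes" means in practice, here's one simple way to look for them (a hypothetical sketch in Python, not the authors' actual flow-cytometry pipeline). It splits per-cell size measurements into two clusters with a one-dimensional k-means:

def two_means(sizes, iters=50):
    """1-D k-means with k = 2: returns the two cluster means and the
    midpoint threshold between them."""
    lo, hi = min(sizes), max(sizes)
    for _ in range(iters):
        cut = (lo + hi) / 2
        small = [s for s in sizes if s < cut] or [lo]
        large = [s for s in sizes if s >= cut] or [hi]
        lo = sum(small) / len(small)
        hi = sum(large) / len(large)
    return lo, hi, (lo + hi) / 2

# Made-up size measurements: nonswollen cells near 1.0, swollen
# bacteroids near 3.0 (arbitrary units).
sizes = [1.0, 1.1, 0.9, 1.05, 2.9, 3.1, 3.0, 2.8]
print(two_means(sizes))    # means near 1.0 and 3.0, threshold near 2.0

With thousands of cells per nodule and two well-separated clusters, classifying each cell as swollen or nonswollen is straightforward; if the sizes formed a single cluster instead, the two means would sit close together relative to the spread of the data. Real flow-cytometry analysis uses more sophisticated gating, but the idea is the same.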

Then, Ryoko worked with Imke Schmitt on ancestral-state reconstruction. I last mentioned this approach in discussing the evolution of transfer-RNA. The figure below shows a subset of Ryoko's results. The most-recent common ancestor of these legumes didn't cause swollen bacteroids, as indicated by the filled circle at left. This was also true of the more-distant ancestor (not shown) that these legume species share with peanut, soybean, and various wild species. But, some time after Cicer (chickpea) branched off, a mutant that caused bacteroids to swell arose, and passed this trait to its descendants, including Medicago (alfalfa), Pisum (pea), and Vicia (vetch), as indicated by the open circles.
[Figure: legume phylogeny, with filled circles indicating hosts with nonswollen bacteroids and open circles indicating hosts with swollen bacteroids.]
Looking at a more complete family tree (not shown), Ryoko concluded that the ability to make bacteroids swell has evolved at least five times. (The infinity signs indicate species with indeterminate nodule growth, whose correlation with bacteroid swelling is less consistent than I had thought.)
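Ancestral-state reconstruction can be done in several ways; here's a minimal parsimony sketch (Fitch's algorithm, on a toy tree in Python) that reproduces the pattern in the figure. The topology and tip states are simplified from the genera mentioned above, and the paper's actual methods may well differ:

def fitch(node, states):
    """Bottom-up pass: return the parsimony state set for this node
    and the minimum number of trait changes in its subtree."""
    if isinstance(node, str):              # a tip: its state is observed
        return {states[node]}, 0
    left, right = node
    s1, c1 = fitch(left, states)
    s2, c2 = fitch(right, states)
    if s1 & s2:                            # overlap: no change needed here
        return s1 & s2, c1 + c2
    return s1 | s2, c1 + c2 + 1            # no overlap: one change somewhere

# ((Cicer, (Medicago, (Pisum, Vicia))), Glycine), as nested tuples.
tree = (("Cicer", ("Medicago", ("Pisum", "Vicia"))), "Glycine")
states = {"Cicer": "nonswollen", "Glycine": "nonswollen",
          "Medicago": "swollen", "Pisum": "swollen", "Vicia": "swollen"}
root_set, changes = fitch(tree, states)
print(root_set, changes)   # {'nonswollen'} 1: nonswollen root, one gain of swelling

On the full tree, with many more tips, the same logic is what supports "at least five" independent gains of the trait.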

If host-imposed bacteroid swelling has evolved repeatedly, maybe it benefits the host. But how? This question may be more complicated than it seems, because a trait that provides a long-term benefit to the legume species (perhaps by making rhizobia evolve to be more beneficial) won't evolve unless it also benefits individual plants with the trait. We discussed these issues in a review article published last year. One possibility is that swollen rhizobia might somehow be more efficient, fixing more nitrogen relative to their carbon cost to the plant.

So Ryoko has been measuring the efficiency of nitrogen fixation, comparing the same strain of rhizobia in hosts where bacteroids are versus are not swollen, using a method developed by the late great John Witty, with whom I spent a brief but fun sabbatical twenty years ago. She's getting some very interesting results, which you can hear about if you go to her talk at the Evolution meetings in Portland, in June.

This material is based upon work supported by the National Science Foundation under Grant No. NSF/IOS-0918986.

May 1, 2010

Transgenic aphids

"Transgenic" is the NPOV term we use to describe genetically engineered crops that have genes from other species. Bt (Bacillus thuringiensis) crops with a bacterial gene for an insect-killing toxin are a well-known example. If you worry about whether such crops are "natural", then the latest example of natural gene transfer among species -- fungus to aphid, in this case -- might be of interest. The paper was published in Science, by Nancy Moran and Tyler Jarvik. Ed Yong has a clear explanation.

My book will argue that we should look to nature for ideas and information to improve agriculture, but not in a simple-minded, "nature-is-perfect" way. If aphids have been transgenic for many generations, that shows that transgenes don't necessarily have severe negative long-term effects on the recipient species. And what about the hypothesis that humans got a key brain-development gene from Neanderthals, as claimed in a 2008 paper in Trends in Genetics (vol. 24, p. 19)? (I read about this in Microcosm, Carl Zimmer's interesting book on E. coli.) But a few positive examples don't disprove the possibility of negative effects in other cases.