Monday November 13, 2006
So Tony Blair wants to be a science evangelist? In a recent speech in Oxford, he outlined his plan to stand up for science and face down those who distort and undermine it. He singled out animal rights extremists and people who cause confusion over MMR and GM technology.
But encouraging scientific progress is not just about giving good PR to new gadgets or cures. Most important is protecting the principle of free inquiry, something on which he and his government are way behind. His call for politicians to stand up for science belies the fact that his own administration systematically attacks this basic principle.
The biggest threat to science doesn't come from a mother scared of what the MMR jab might do to her child, or the extremist who burns down farms in solidarity with research animals. It comes from those who claim to respect the way science creates knowledge, but then misinterpret, distort or ignore that knowledge.
On the surface, scientists might seem to have little to worry about. Starved of prestige and money by successive Tory governments, they have seen labs rebuilt and reputations renewed under Labour. Blair talked of having trouble with science in his early years until a Damascene conversion left him "fascinated by scientific process, its reasoning, deduction and evidence-based analysis; inspired by scientific progress; and excited by scientific possibility".
But last week the conclusions of the Commons science committee inquiry into the government's use of scientific advice showed that his good intentions were not being mirrored by his own advisers. The report said that the government hid behind a fig leaf of scientific respectability when spinning controversial policies in a bid to make them more acceptable to voters, and it called for a "radical re-engineering" of its use of science.
Furthermore, scientists are becoming concerned at the rise of creationism in the British education system. The geneticist Steve Jones, who has lectured on evolution at schools for 20 years, says that he now regularly meets pupils who claim to believe in creationism. The creationist interpretation of fossil evidence is even encouraged in the new GCSE Gateway to Science curriculum. In August, a survey of British university students found that a third believed in either creationism or intelligent design.
At the end of the last parliamentary session, the government agency charged with licensing drugs took the remarkable decision that it would license homeopathic remedies. These glorified bottles of water can now carry details of the ailments they supposedly treat on their labels. The remedies do not need clinical trial data and peer-reviewed research to make their claims (as every modern pharmaceutical does). Scientists say the new rules are an affront to the principle of basing healthcare advice on scientific evidence.
Science is a tough master. Use this method of uncovering truth and you are not allowed to be selective about your evidence. But innovation, the technological answers to climate change, and all Blair's "glittering prizes" will come, at some point in the chain, from the basic rules of free inquiry grounded in scientific method: think of an idea, test it with experiments, draw conclusions, refine your experiments, and so on.
A forward-thinking nation loses respect for that free inquiry at its peril. Children taught to disregard evidence when trying to work out where the earth came from; a scientific agency deciding to abandon basic principles; and a government twisting research to fit its ideological message - none of that respects free inquiry. And if you don't stand up for that, you don't stand up for science.
Michael Le Page
New Scientist Print Edition
13 November 2006
A 49-year-old man is admitted to hospital in Japan with chest pains and a partially paralysed arm. Doctors diagnose a simultaneous heart attack and stroke and the patient seems to respond well to treatment. The next day, however, he has a cardiac arrest, and later dies. The autopsy reveals that all along he'd had an aortic dissection, a tear in the lining of the major artery from the heart.
In the US, a previously healthy and active 79-year-old man is found confused and incapacitated. He is diagnosed with pneumonia and dehydration, and after treatment seems to be recovering well. After three days he starts breathing rapidly and his condition declines. Six days after admission he dies. The autopsy reveals rampant TB.
A 37-year-old woman who is six months pregnant is admitted to hospital in Italy with severe abdominal pain. The pain is attributed to kidney stones and after treatment the woman goes home. One week later, she vomits and loses consciousness, and despite doctors' best efforts, she and her baby die. The autopsy reveals massive internal bleeding caused by a rare blood disorder.
It is no secret that doctors occasionally kill their patients instead of curing them, whether by failing to wash their hands or prescribing the wrong drug. In many countries, serious efforts are now being made to reduce medical errors. The focus, though, is almost entirely on avoiding mistakes in treatment, rather than in the original diagnosis.
But as the cases above illustrate, major mistakes in diagnosis do happen, and they are surprisingly common. The causes range from medicine's inherent limitations, through flaws in hospital systems, right down to individual doctors seemingly forgetting what they learned in medical school. It is estimated that as many as 1 in 20 patients who die in hospital do so because their illness was misdiagnosed.
Shockingly, our best way of uncovering diagnostic errors - the autopsy - is in steep decline. If no one suspects a wrong diagnosis, the evidence will be buried or cremated with the body, and nobody will be any the wiser, so there is nothing to stop the same mistakes being made over and over again. "Diagnostic errors do not receive the attention they deserve," says Kaveh Shojania of the University of Ottawa in Canada, who studies medical errors. "It is a big part of the problem."
The value of autopsies was gradually established during the 18th and 19th centuries. Today they remain the gold standard as a way for doctors to identify and learn from their mistakes. It is much easier to find out for sure what was wrong with someone after their death, when pathologists can cut open the body, examine any part in detail and take samples for testing (see "Anatomy of an autopsy").
Some autopsies have to be done for legal reasons. These forensic or coroners' autopsies are often required after violent, accidental or suspicious deaths, or where the cause is unclear. In many countries autopsies must also be carried out on patients who die during surgery or within 24 hours of admission to hospital. Sometimes, however, doctors just want to know more about why someone died; for these hospital autopsies doctors usually need permission from the next of kin.
It was in 1912 that a Harvard University doctor called Richard Cabot did one of the first large studies comparing hospital autopsy results with the initial diagnosis. After looking at 3000 cases, he concluded - to the stunned disbelief of his colleagues - that nearly half the diagnoses had been wrong.
At first glance, today's performance seems little better. A recent review of all the studies like Cabot's done since the 1960s concluded that the certified causes of death were wrong in at least a third of cases. Not all the errors would have affected survival (though they still matter, as health policies are often based on death certificate statistics), but some would. At least 10 per cent of autopsies show patients might have lived had their diagnosis been right.
You do need to treat these figures with caution. It is impossible to work out the true misdiagnosis rate for all patients, not least because autopsies are obviously not done on people who survive. Plus the rate of mistakes may appear artificially high if doctors are now requesting a hospital autopsy only if they suspect something went wrong.
To find out the true misdiagnosis rate, Shojania analysed 53 studies published over the past four decades, involving more than 13,000 autopsies in North America, Europe and Australia. Crucially, he took into account the falling autopsy rate and the possibility that autopsies were more likely if misdiagnosis was suspected.
In a paper published in 2003, his team concluded that the accuracy of diagnoses has been improving steadily, with the rate of major discrepancies affecting survival falling by a third each decade (Journal of the American Medical Association, vol 289, p 2849). Even so, the rate remains shockingly high: at least 4 per cent of all US patients who die in hospital might have survived had their diagnosis been right. The figure is higher in other countries. "It's a big deal," says Shojania.
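A back-of-the-envelope sketch may help show what a one-third-per-decade decline implies. This is purely illustrative and not part of Shojania's analysis; the starting rate below is an invented figure, not a number from the study.

# Illustrative only: how a rate shrinks if major discrepancies affecting survival
# fall by a third each decade, as Shojania's 2003 paper reports.
def rate_after(decades: int, start_rate: float, decline_per_decade: float = 1 / 3) -> float:
    """Compound a fixed fractional decline over a number of decades."""
    return start_rate * (1 - decline_per_decade) ** decades

start = 0.12  # hypothetical starting rate in the 1960s, purely for illustration
for d in range(5):
    print(f"after {d} decades: {rate_after(d, start):.1%}")
# A third-per-decade fall leaves roughly (2/3)^4, or about 20 per cent, of the
# original rate after four decades - lower, but still far from negligible.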
So why are mistakes still so common? Even in today's era of high-tech medicine, some errors are inevitable. Doctors have limited knowledge, limited tools and limited time to make a diagnosis. Even well-studied diseases can produce strange symptoms unlike those in the textbooks, and patients can have several diseases at once. For example, the Japanese man with aortic dissection had an extremely rare collection of symptoms for such a case. It is sometimes impossible to work out what is wrong with a patient while they are alive, and not always possible when they are dead. "It's a miracle how often doctors get it right," says Mark Graber of the Veterans Affairs Medical Center in Northport, New York.
The crucial question, then, is not how many deaths are due to a major misdiagnosis, but how many can be avoided. "Half are preventable and half are not," Shojania suggests. He was the only doctor New Scientist asked who was prepared to make an estimate. The few studies to have investigated the causes of misdiagnosis suggest even more than half are preventable.
One such study, carried out by Graber and published last year, analysed 100 cases where diagnostic mistakes had injured a patient or led to their death (Archives of Internal Medicine, vol 165, p 1493). Its findings may not be widely representative, as it identified cases through voluntary reports and other methods as well as autopsies, but it does at least start to give us some idea of why errors are made. What Graber found is that it is typically not just one thing that goes wrong, but five or six.
A recurring theme was system failures at hospitals, such as X-rays getting lost, or a lack of qualified staff around on a holiday evening. An even more important cause of error was mistakes by individual doctors. These ranged from lack of medical knowledge to using flawed reasoning to reach their diagnosis. The commonest error of this sort is "premature closure": a doctor arrives at a diagnosis that seems to fit the facts, then stops considering other possibilities. "When you come up with an answer, you are happy," Graber says. "You stop thinking about the problem."
Some doctors gave every sign of sheer incompetence, such as failing to pass on test results or even skipping parts of a physical examination. One failed to notice that a patient's toes were gangrenous.
Just 7 out of the 100 misdiagnoses were identified as "no-fault" errors that staff had no part in. Some of these cases involved a disease presenting in an unusual way. Some patients missed their hospital appointments or told lies to their doctors. A case of AIDS went undiagnosed because the patient did not tell his doctors he had engaged in high-risk sex. More often, however, patients are just not very good at telling doctors what they need to know to make an accurate diagnosis.
Whatever the cause of misdiagnoses, nothing can be done about them if they are never discovered. And the only sure way to detect more diagnostic errors is to do more autopsies. Of course, they cannot help the patients in question, but they can help correct whatever it was that led to the error, be it bad organisation, flawed reasoning or faulty equipment. "No lesson is as powerful as seeing your own mistakes," says Graber.
But systems for alerting doctors to their errors tend to be patchy and unreliable. Who wants to tell a colleague that they got things horribly wrong?
And the number of hospital autopsies continues to fall in most countries despite repeated calls to reverse the trend. "There are constant pleas, but it's not happening," says Graber. "It's a losing battle." In the 1960s, an autopsy was done on around 60 per cent of patients who died in hospitals in Europe and the US. Today the rate of hospital autopsies is thought to be less than 10 per cent in Europe, and less than 5 per cent in the US.
Why? "Clinicians don't think it's necessary any more," says Shojania. "It's no longer part of training." Cash-strapped public healthcare systems often decide the money is better spent elsewhere, and private hospitals cannot charge relatives for autopsies so they have little incentive either.
Another possible cause is increasing fear of litigation. Some argue that such worries are groundless. A recent study of US appeals court records showed that the crucial factor in law is not whether an autopsy reveals a discrepancy, but whether the misdiagnosis was due to negligence. "It is not necessary to be right," says lead author Kevin Bove of Children's Hospital Medical Center in Cincinnati. "You just have to do the right thing."
But others argue that doctors may not request autopsies in cases where they suspect they could be held liable for negligence. "If you say 'don't worry, you will never get sued', that's just not realistic," says Lee Goldman, now at Columbia University in New York City. "You have so much selection over autopsies that of course no one gets sued."
Then there is the issue of getting consent from relatives. In the UK, there was public revulsion in 1999 at the discovery that a pathologist at Alder Hey Children's Hospital in Liverpool had stored thousands of organs from children's autopsies without their parents' knowledge. There have been similar public outcries about stored organs in Australia and Ireland. "Some doctors are now frightened to ask for consent," says Emyr Benbow, a pathologist at the University of Manchester in the UK.
An audit at University Hospitals of Leicester before and after the Alder Hey scandal revealed the hospital autopsy rate had dropped from 10 per cent to less than 1 per cent. The main cause was not that relatives were refusing consent; it was that doctors were less likely to ask for it.
So what can be done to change matters? "There should be a minimum autopsy rate, a requirement for feedback and doctors should not be subject to malpractice [lawsuits] if they do an autopsy," says Goldman.
Such protection from lawsuits would be unlikely to go down well with an increasingly litigious public. But if doctors keep quiet about misdiagnoses, as may happen now, there is no chance of improving matters. "When you are not happy with what you are getting from people who want to do their best, the system is all messed up," Goldman says.
Ideally, autopsies would be carried out on a random sample of people who die. In the US, hospitals once had to have a minimum autopsy rate of 20 per cent, but this was abandoned in 1970. In the UK, the Royal College of Pathologists once considered trying to push for a minimum 10 per cent random autopsy rate, but the Alder Hey scandal kicked the idea into touch. "There would be a substantial outcry," says Benbow.
Nearly a century after Cabot's 1912 autopsy study, it seems we have forgotten his most valuable lesson. But you can do something about this. If one of your family dies and a doctor suggests an autopsy, give permission. Or consider requesting one yourself.
It will not help your relative, but it might help save someone else's life. And since there's a fair chance of you succumbing to the same illness as your father, mother, or siblings, that someone might even be you.
From issue 2577 of New Scientist magazine, 13 November 2006, page 48-51
NewScientist.com news service
08 November 2006
A FEW blind mice have had their sight restored. The process, which involved transplanting precursor retinal cells into their damaged eyes, raises hopes of treatments for age-related macular degeneration and for blindness caused by diabetes.
The mice were blind because they had been bred to have non-functional photoreceptor cells, the eye's rod and cone cells that convert light into electrical signals to be sent to the brain. Elderly people and people with diabetes can also lose their vision when these cells fail.
In principle, restoring sight to animals that have simply lost photoreceptor cells should be relatively easy, because most of the brain's wiring for vision is still intact. Previous attempts to treat such blindness by transplanting stem cells had been unsuccessful, however. The stem cells had not developed enough to properly integrate with the recipient's retina and existing vision-related regions of the brain.
Now a team led by Robin Ali from University College London (UCL) and Robert MacLaren, an eye surgeon from Moorfields Eye Hospital, London, has overcome this problem by using retinal precursor cells that were at a later stage of development than stem cells. The team took these cells from healthy donor mice only after the cells had started producing rhodopsin, a pigment necessary for sensitivity to light. When transplanted into the eyes of blind mice, the retinal precursor cells differentiated into rod cells and grew to make the short neural connections required to restore sight. The team tested the mice's vision by observing how their pupils responded to different light intensities (Nature, vol 444, p 203).
"This research is the first to show that photoreceptor transplantation is feasible," MacLaren says. "We are now confident that this is the avenue to pursue to uncover ways of restoring vision to thousands who have lost their sight."
While this method may be workable in humans, it is not yet clear where doctors will find donor retinal precursor cells that the recipient will not reject.
One option is to grow human embryonic stem cells to the appropriate stage of development for transplantation. Earlier this year, Thomas Reh of the University of Washington in Seattle managed to do exactly this. "We can derive retina cells, including cells at exactly the stage that Ali's group found were best for transplantation, from human embryonic stem cells," says Reh. "So joining the approaches would seem to be an important next step in treating retinal degeneration and restoring vision. Stay tuned."
The UCL team also suggests the use of stem-cell-like precursor cells that are found on the edge of the retina. These cells could be harvested and transplanted into the retina if the disease is caught at an early stage in humans.
From issue 2577 of New Scientist magazine, 08 November 2006, page 14
New Scientist Print Edition
11 November 2006
VULTURES are not off the hook yet. The painkiller diclofenac was banned in India and Nepal last August because griffon vultures were dying from eating carcasses of cattle that had received the drug. Now Egyptian and red-headed vultures in the region are dying with similar symptoms, and conservationists suspect diclofenac is also to blame.
"Painkillers used for livestock in Europe have killed condors, hawks and owls"
Until now, little has been known about how painkillers affect scavenging birds. To find out, the UK's Royal Society for the Protection of Birds asked vets and zoos worldwide for their experiences. It found that meloxicam, the drug promoted to replace diclofenac in India, seems safe for most species (Biology Letters, DOI: 10.1098/rsbl.2006.0554). However, flunixin and carprofen, used for livestock in Europe, have killed vultures, condors, hawks, owls, rails and a Marabou stork. Ibuprofen and phenylbutazone might also be dangerous. Meanwhile diclofenac remains a risk, especially in South Africa, where ranchers now leave dead cattle out as "vulture restaurants".
From issue 2577 of New Scientist magazine, 11 November 2006, page 7
NewScientist.com news service
08 November 2006
Of the three strains of HIV known to infect humans, we know that two - the one causing the global AIDS epidemic and another that has infected a small number of people in Cameroon - came from a chimpanzee virus called SIV. The source of the third strain, which infects people in western central Africa, was a mystery. Now we know it came from gorillas.
Martine Peeters and colleagues at the University of Montpellier in France have discovered the virus in the droppings of gorillas living in remote forests in Cameroon (Nature, vol 444, p 164). The infected gorillas lived up to 400 kilometres apart, so the researchers think it must be a normal or endemic virus in the animals, as SIV is in chimps.
The next mystery is how the gorillas got it. The gorilla virus is descended from the chimp variety, but gorillas are vegetarian and rarely encounter chimps.
There is little mystery about how humans contracted the virus, though: local people picked it up hunting gorillas for food and traditional medicine. That means the virus could yet cross again and create another HIV strain, say the researchers, especially as growing demand for "bushmeat" leads to more hunting.
From issue 2577 of New Scientist magazine, 08 November 2006, page 17
New Scientist Print Edition
11 November 2006
FOR THOSE trying to reconstruct our evolutionary history, a little fossil often has to go a long way. A fragment of jaw or skull here, part of a thigh bone there, is often all palaeontologists have to go on. Tools and other cultural artefacts help fill in the gaps, but it's like viewing our history through a keyhole. Our hominin predecessors didn't bury time capsules for later species to pick through. Not deliberately, at least. They did, however, leave a huge package of coded information behind. And now we're going to try and read it.
In July a team led by Svante Pääbo, an evolutionary geneticist at the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, announced audacious plans to reconstruct the entire genome of the Neanderthals, our closest relatives in the fossil record. If they pull it off, and they are confident they can, it will be a remarkable technical feat. "This would be the first time we have sequenced the entire genome of an extinct organism," Pääbo says. It could also transform our view not only of Neanderthals but, perhaps more importantly, of ourselves.
Neanderthals have been at the centre of many of the most intense debates in palaeoanthropology ever since the discovery of their bones spawned the field 150 years ago. A popular caricature portrays them as beetle-browed brutes, but this is far from the truth. "Neanderthals were sophisticated stone-tool makers and made razor-sharp knives out of flint," says Richard Klein, an anthropologist at Stanford University, California. "They made fires when and where they wanted, and seem to have made a living by hunting large mammals such as bison and deer." Neanderthals also buried their dead, which, fortunately for researchers, increases the odds of the bones being preserved.
Bones and artefacts leave a whole range of questions wide open, though. How exactly are Neanderthals related to us? Did our ancestors interbreed with them, and if so, do modern Eurasians still carry a little Neanderthal DNA? Just how "human" were they? There's only one way to be sure: "By sequencing their entire genome we can begin to learn more about their biology," says Eddy Rubin, a geneticist at the Lawrence Berkeley National Laboratory in Walnut Creek, California. What's more, if we can answer the genetic questions we might solve the biggest mystery of all: why did Neanderthals die out while modern humans went on to conquer the globe?
It won't be easy. Although ancient DNA has been extracted and sequenced from Egyptian mummies, 5000-year-old maize plants and a menagerie of extinct mammals including mammoths, cave bears and ground sloths, in all these cases only minuscule fragments of badly degraded DNA have been recovered.
Pääbo and colleagues probably know better than anyone how hard it will be. They pioneered the genetic study of Neanderthals by extracting and decoding fragments of mitochondrial DNA (mtDNA) from the bones of the original specimen, discovered in 1856 in the Neander Valley in Germany. The mtDNA that Pääbo sequenced suggested that humans split from Neanderthals roughly 500,000 years ago, which fits neatly with the fossil record. It also indicated that Neanderthals did not interbreed with our ancestors.
Although mtDNA can yield important information, the really significant information is in the cell nucleus, where the vast majority of genes reside. Extracting and sequencing this DNA, however, is much harder. Cells can contain thousands of mitochondria but they have only one nucleus, so nuclear genomes are far scarcer than mitochondrial ones. What is more, there are a number of awkward biological and chemical facts standing in the way of studying ancient DNA. Firstly, enzymes in recently dead organisms chop DNA into small pieces. Then, over time, a steady onslaught of oxidation and background radiation further degrades these fragments, and causes the nucleotide "letters" of the DNA code to change from one to another or into ones that are not naturally found in DNA. To make matters worse, ancient DNA is invariably contaminated with the DNA of hundreds of types of bacteria and fungi that invade a dead organism. Finally, in the case of Neanderthals, any modern human DNA that contaminates a sample causes tremendous problems, as it can so easily be mistaken for Neanderthal DNA.
Despite these formidable obstacles, the task is not hopeless. Dry or cold conditions can help preserve DNA, and in some exceptional circumstances it might be possible to retrieve useful DNA from bones 100,000 years old, Pääbo says. What's more, the changes in DNA sequence that result from nucleotide conversion follow a relatively stable pattern, which means that the original sequence can often be deduced. In fact the very presence of these changes can be a useful sign that you're working with ancient DNA, not more recent contamination with modern DNA.
Pääbo's team has selected two Neanderthal specimens to work on because both have "clean", relatively uncontaminated DNA. One is a 38,000-year-old fossil from Vindija, Croatia. The other is the original specimen, which, despite being extensively handled, has unusually clean DNA in its right upper arm bone (during its lifetime the individual broke its left arm and lost the use of it, so it relied on the right arm and the bone grew thicker and denser than usual; after death this helped shield the DNA from contamination). Pääbo's colleagues are also hunting for new specimens that can be sampled before other people get their hands on them.
There's a further problem with trying to reconstruct the genome of an extinct animal, however. Conventional genome sequencing requires large quantities of DNA, which is fine when you're dealing with a living species, but is a huge problem when all you've got is a few precious bones that have to be ground to dust to extract the DNA.
Draft sequence soon
Enter 454 Life Sciences, a genomics company in Branford, Connecticut, that has invented a new sequencing technique especially suited to the Neanderthal genome. It takes fragments of DNA 100 to 200 base pairs long - coincidentally about the length of DNA fragments extracted from ancient bones - and reads them directly. This cuts out the normal intermediate step of amplifying DNA in bacteria. The method is also extremely powerful. "Conventional sequencing generates 96 sequences in a single run," says Michael Egholm of 454. "We generate 250,000 sequences, each about 100 bases long - that's 25 to 30 million bases in a run."
This is crucial. Up to 95 per cent of the DNA extracted from Neanderthals will be from microorganisms and therefore irrelevant. To have a decent chance of capturing the whole Neanderthal genome - which, like the human genome, is expected to contain about 3 billion bases - from random fragments, 454 will have to generate at least 60 billion bases of sequence. "Only when you generate as much sequence data as we do can you even think about throwing out 95 per cent of the sequences you decode," says Egholm.
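The arithmetic behind that 60-billion-base figure follows directly from the numbers above; here is a rough sketch, in which the target coverage depth and the exact contamination fraction are illustrative assumptions rather than figures from 454 or Pääbo's team.

# Rough coverage arithmetic for the Neanderthal project, as described in the text.
# Assumes ~95% of extracted DNA is microbial and a human-sized target genome.
GENOME_SIZE = 3_000_000_000      # bases in the Neanderthal genome (human-sized)
NEANDERTHAL_FRACTION = 0.05      # only ~5% of the extract is actually Neanderthal DNA
BASES_PER_RUN = 27_500_000       # ~25-30 million bases per 454 run (midpoint)

def total_sequence_needed(coverage: float = 1.0) -> float:
    """Raw bases that must be sequenced to reach the given Neanderthal coverage."""
    return GENOME_SIZE * coverage / NEANDERTHAL_FRACTION

needed = total_sequence_needed(1.0)
print(f"raw bases needed for 1x coverage: {needed:.2e}")    # 6.00e+10, i.e. 60 billion
print(f"454 runs required: {needed / BASES_PER_RUN:.0f}")   # roughly 2,200 runs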
Using this approach, Pääbo and colleagues have so far sequenced roughly a million base pairs of nuclear DNA from the Croatian fossil. They hope to publish a draft of the whole genome in two years.
How plausible is this? "It is definitely possible to sequence the entire genome from such well-preserved specimens," says Eske Willerslev, an expert in ancient DNA at the University of Copenhagen, Denmark. "Perhaps the biggest difficulty will be verifying that the sequences obtained are genuinely from the Neanderthal genome and not a contaminant, as so much of it will be identical to the human genome."
The genome, once in hand, will provide insights into two key questions, Rubin predicts. "The first thing it can tell us is where the human genome is unique - places where the Neanderthal genome looks like the chimp genome. This will help us identify changes in the human genome that are of recent origin and which may contribute to the biology that distinguished us from Neanderthals." In other words, it could help us understand more about what it is to be human.
"The other, more difficult thing is to look for areas where the human genome is similar to the Neanderthal genome, which may help in making inferences about Neanderthal biology," Rubin says, although it's hard to say in advance just what the genome will reveal. He draws an analogy with Egyptian hieroglyphics: "Before understanding hieroglyphics we weren't sure what they would tell us, though we knew they'd tell us something," he says. "I think the Neanderthal genome will do the same thing."
The genome is sure to fuel the particularly intense controversy that has surrounded a much-vaunted aspect of human uniqueness: language. "There's been a debate going for more than 30 years about the speech capabilities of Neanderthals," says Philip Lieberman, a cognitive scientist at Brown University in Providence, Rhode Island.
Computer models of the mouth and vocal tract give us some idea of what sounds Neanderthals could make. "It is clear from the fossil record and comparisons with modern humans that Neanderthals, and probably their common ancestor with humans, could speak," Lieberman says, though perhaps with less sophistication than us. Yet fossils cannot tell the whole story. "The shape of the skull doesn't tell you what's inside the brain," Lieberman says.
Genes, however, might provide clues. In 2001, FOXP2 became the first gene to be tied to a specific language impairment. People with an error in FOXP2 suffer from a severe speech disorder involving difficulty pronouncing words and with some aspects of grammar and cognition. Genetic analyses indicate that FOXP2 reached its modern form in humans within the past 200,000 years - well after we and Neanderthals had parted ways. The Neanderthal genome will help to verify that date. "Neanderthal FOXP2 is likely to be the same as the chimpanzee version," says Simon Fisher of the Wellcome Trust Centre for Human Genetics in Oxford, UK, a member of the team that discovered FOXP2. "But if it turns out that Neanderthal FOXP2 is identical to that found in modern humans, these dates will have to be revised." Another possibility - unlikely, in Fisher's view - is that after splitting from our shared ancestor Neanderthals independently evolved the same version of FOXP2.
It will take more than examining Neanderthals' FOXP2, however, to settle debates about their speech capabilities, as it is extremely unlikely to be the only gene relevant to the evolution of language. Even if Neanderthals didn't have the human version, it is hard to say what this would have meant for their speech capabilities.
FOXP2 won't be the only interesting gene. "We're on the verge of sequencing many [individual] human genomes, and from this we'll begin to see associations between sequences and biology," says Rubin. "At the moment there are a limited number of questions to ask, but very quickly we will crack aspects of the human genome and find associations that we'll want to look at in Neanderthals."
So much for understanding Neanderthals. What about ourselves? "What is really interesting is what makes us specifically human," says Klein. And this is where having the Neanderthal genome could really pay off.
At the moment, geneticists trying to answer questions about human uniqueness often compare the human genome with the chimpanzee's. Even though the species differ in DNA sequence by just 1.2 per cent, lining up the genomes side by side reveals 35 million genetic differences.
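That figure is roughly what simple arithmetic predicts: 1.2 per cent of an alignment of about 3 billion bases comes to around 36 million single-base differences. The quick check below ignores insertions, deletions and rearrangements.

# Rough check on the "35 million differences" figure cited above.
genome_bases = 3_000_000_000
divergence = 0.012
print(f"expected single-base differences: {genome_bases * divergence:,.0f}")  # ~36,000,000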
Many of these differences fall in non-coding areas and have no obvious effects, which makes finding the differences that really matter a formidable challenge. The Neanderthal genome will provide something of a short cut. Humans and Neanderthals split much more recently than humans and chimps (500,000 versus 5 to 7 million years ago), which means there will be fewer genetic differences to sift through. "This comparison is helpful if you are interested in the more recent evolutionary changes that might define distinct biological features of Homo sapiens," says Fisher.
Perhaps the biggest open question about human evolution is why and how we became so globally successful as a species. Palaeoanthropologists generally make a distinction between anatomically modern humans and behaviourally modern humans: the former began to emerge around 200,000 years ago, the latter around 50,000 to 80,000 years ago in a cultural "big bang". Until then, humans and Neanderthals made the same sorts of artefacts and went about business pretty much the same way. Then, suddenly, people with complex culture, elaborate social systems and sophisticated technology started migrating out of Africa into Eurasia. Within a few thousand years the Neanderthals had breathed their last. Why? Solving the puzzle of the cultural big bang bears heavily on answering this long-debated question.
Some palaeoanthropologists have proposed that Neanderthals were wiped out in a genocide by invading Cro-Magnons, the first behaviourally modern humans in Europe who we know briefly coexisted with Neanderthals, or that they were pushed to the margins by the invaders' more sophisticated social systems and culture. Others have suggested that climate was the decisive factor. Whatever the cause, though, a still more fundamental question remains: why were humans more culturally advanced than Neanderthals? If they were biological and cognitive equals, was it just some new cultural trick that humans happened to stumble on first that got them ahead? Maybe, but that just raises another question. "Why didn't the Neanderthals simply copy the successful strategies of the modern humans?" Klein asks. After all, such imitation is common throughout recorded history.
To Klein, the lack of evidence of cultural transfer between humans and Neanderthals suggests that a biological and cognitive abyss separated the two species. Not everyone agrees. "I think it is very unlikely that some biological or cognitive difference caused the replacement of the Neanderthal population," says Terrence Deacon, a neurobiologist at the University of California, Berkeley. The lack of evidence does not prove there was no cultural transfer, he points out.
"We could argue back and forth endlessly," Klein says. "The idea that there was a genetic change related to brain development 50,000 to 80,000 years ago has been problematic when all we've had is the artefacts and the fossils." The Neanderthal genome could help end this game of intellectual tennis.
But could it do more than that? Could the Neanderthal genome be the blueprint for resurrecting a living Neanderthal, Jurassic-Park style? That would raise enormous ethical quandaries: who would act as a surrogate mother, who would care for it and what rights would it have? And if it was capable of understanding its situation, how would it feel to discover that the rest of its kind had long been extinct? Pääbo thinks these ethical issues rule out any attempt. In any case, the technical barriers are also too high, he says: a human egg with Neanderthal DNA would be unlikely to develop. "We would be able to create a physical Neanderthal genome but we will not be able to recreate a Neanderthal," he says. "Even if we wanted to."
Dan Jones is a writer based in Brighton, UK. He blogs at www.psom.blogspot.com
From issue 2577 of New Scientist magazine, 11 November 2006, page 44-47
11 November 2006
A few weeks from now, European Union fisheries ministers will gather for a familiar pre-solstice ritual. They will sit around a table until the wee hours, and share out Europe's fish stocks. Fisheries scientists have already made their contribution to this ceremony by calling for beleaguered North Sea cod to be left alone. They have done this for seven years. For the past six, the ministers have calmly tossed Europe's fishermen a cod quota anyway. This year they probably will again.
Yet the cod keep coming back to be fought over. What is going on here? Are the scientists just plain wrong? Or are the ministers quite sensibly grabbing what they can before all the fish die in 2048, as a widely reported scientific paper predicted this week?
Well, neither. For one thing, all the fish are not going to die in 2048 - or not necessarily. That is the trend if fishing continues as usual (see "Glimmer of hope for 'doomed' fish"), but we now know how to stop the trend. We have strong evidence, as some biologists have known in their guts all along, that the ocean is a complex living machine, and that when we kill off things - any things - it becomes less good at yielding what we want from it. That includes fish. The bottom line is that if we want to keep protein production (and our oxygen source, and our pollution sink) functioning, we need to save the whale and the kelp, the copepods, the capelin and everything else.
The second-from-bottom line in this remarkable study is just as important: setting up protected marine reserves and temporarily banning fishing can reverse the declines in our seas. So long as we have not removed too much biodiversity, simply leaving the sea alone allows ecosystems to recover. Fisheries scientists already know this. They call the great global conflicts of the 20th century the First and Second Great Fishing Experiments. During both world wars, fishing boats were kept off the North Sea. The huge numbers of big fish caught after the fighting stopped showed scientists that fish stocks are affected by fishing, which must be regulated accordingly. It should be said that some fish stocks, such as hake off western Europe and Norwegian herring, are doing nicely because ministers have followed scientific advice.
Which brings us back to cod, the poster-fish for what can go wrong. Early one morning next month, bleary-eyed European ministers will probably allow fishermen to take just enough of the few cod left to allow the depleted fishery to stagger on. If they followed scientific advice for a ban on cod fishing, the number of cod would grow, and after a few years catches would boom. But that would involve short-term sacrifice, and no minister will bite that bullet. We need mechanisms to make them. Europe pays farmers not to farm but to be stewards of the countryside. Why not do the same for fishermen?
If Europe does nothing, it risks a repeat of the biological nightmare that took place in another northern sea: the Grand Banks off Newfoundland. Once thick with cod, it is now bereft of them even though cod fishing was banned there more than a decade ago. Some scientists think the cod will never come back because the ocean ecosystem has been so badly denuded - not just because the cod have gone, but because the boats are now taking shrimp and crab instead. Biodiversity can hardly recover under such pressure.
The result, as any Newfoundlander will tell you, is that people are suffering badly, and when the same thing happens to overfished seas in poorer regions of the world the effects are likely to be even worse. The real message is that we must save the biodiversity that sustains the ocean while we can, because if we go too far it may not come back.
We need to do it now. Climate change is coming and it is already making life hard for North Sea cod by causing their favourite foods to bloom too far north, or too early, when baby cod are not big enough to eat them. This in itself is a good example of how a complex ocean food web needs all its components to be operating at the right place and time.
This week's study shows the sea will need all the biodiversity it can muster for even some of the resources we value to stand a chance of surviving in a warmer world. We have not acted fast enough to prevent climate change. At least we can hold off on our rape of the sea long enough to give it a fighting chance.
From issue 2577 of New Scientist magazine, 11 November 2006, page 5
New Scientist Print Edition
11 November 2006
In 1950, a 19-year-old girl left the elite Smith College in Massachusetts to join her family on an expedition that would change their lives. Prompted by her father's desire to visit unexplored places, the family set off for the Kalahari desert in search of Bushmen living out the "old ways" of hunter-gatherers. The girl, Elizabeth Marshall Thomas, went on to celebrate them in her 1959 book The Harmless People, which became a classic of popular anthropology. Nearly 50 years on, Marshall Thomas's latest book The Old Way revisits the story - and finds that the Bushmen's fate is more complex than it seems.
Elizabeth Marshall Thomas went on three expeditions to visit the Bushmen of what is now Botswana and Namibia. They were the last major population of hunter-gatherers. Marshall Thomas returned to her English degree at Smith College, Massachusetts, and has written seven books, both fiction and non-fiction, including the best-selling The Hidden Life of Dogs. Her latest book, The Old Way, was published in October (Farrar, Straus and Giroux, $25).
Westerners mourn the loss of this hunter-gatherer society, but you take a rather different view...
Yes, for me they are living in somewhat the same way, but with different economics. The idea that you help your own is still present. This is what kept the human race alive for 150,000 years.
The hunter-gatherers told anthropologists they don't define themselves by how they get food but by how they relate to each other. We saw that. They tried to keep jealousy at a minimum, with nobody more important or owning more things than anyone else.
You gave things away rather than keep them. You wanted other people to think of you with a good feeling.
Is that the "old way" of your book title?
There was a time when the playing field was level and all species lived in this way. How people and their domestic animals live now is profoundly different.
Are there still efforts to help the Bushmen regain that idealised notion of the hunter-gatherer life?
There are, and I think it's unfortunate. Tourists want to see it, and WWF and other organisations want to preserve the local ecosystems - which is a good thing. But it's the Bushmen's ecosystem and the reason that it's there today is because of their way of life. So I have a little problem with some foreign group telling them what they must and must not do.
Also, gathering food is not going to be as viable as it was in the past because back then the population density was one person per 10 square miles, but now there are many more people and much less space. And people don't have the skills they need to live in the old way. Foreign groups are asking young African men to go back to stone-age hunting when these men know perfectly well that everyone else has rifles.
So there's no going back?
No, though anybody could become a hunter-gatherer - you'd just have to learn it. But you don't see a lot of volunteers stepping forward to do it now because it's much too difficult. After the old lifestyle collapsed, the Bushmen were encouraged to be farmers like other Namibians, and they tried. Some farms were started around a place set up for them called Tsumkwe. But for a number of reasons the experiment didn't work very well and Tsumkwe is now a hellhole with a huge alcohol problem. Even so, if the farmers received the help they needed the farms might be a way of moving forward. On land that the Bushmen own they could do all sorts of things, such as sports hunting, where foreigners pay to hunt big game. The Bushmen could be paid guides, for example.
Are these the people you lived with?
Some of them are the very same people. We spent most of our time with the Ju/wasi - also spelt Ju/'hoansi in textbooks, but I use the older spelling because it looks closer to how it sounds. The Ju/wasi we knew lived in what is now Namibia. We also visited the /Gwi people who live on the border between Botswana and Namibia.
You wrote that the expedition was like voyaging into the deep past?
Yes. The Bushmen had Palaeolithic technology. They didn't plant crops and had no domestic animals, no fabric or manufactured goods. They sometimes used small bits of metal for arrowheads, but since the arrows were merely a variation of bone arrows, the technology did not change.
What did they eat?
Most food was gathered by the women. When people think of gathering, they think of it as mostly plant food, but it produced proteins such as turtle, snake, caterpillar, honey ants and the like. The most exciting food, however, was large antelope that the men hunted, and that amounted to about 20 per cent of their food. The success rate of hunting was a lot lower than gathering, but they could get large amounts of meat that would feed the whole group - usually about 25 people - for a while.
A big adventure for a 19-year-old girl. Didn't these experiences end up in a famous novel?
Yes. Sylvia Plath also went to Smith College, and we were in the same writing class as part of our literature degrees. Our teacher used to read aloud from our writings, but didn't give names. It was only later that I realised Plath had been in that class, because I recognised the style of her poetry. I wish I had known her. But I believe I appeared briefly in The Bell Jar as a girl who won a prize for writing about her adventures among the pygmies of Africa.
What do you make of the accusations by some academics that your writing is too sentimental?
My mother Lorna also wrote about the Bushman culture and we were both accused of over-emphasising the lack of violence in Bushman culture, but we were only reporting what we had seen. In the Bushmen groups we visited, we observed that there was much emphasis on cooperation and on avoiding jealousy. The reason was that life was pretty marginal and one way to get through was to have others who help you in your hour of need. Everything in their culture was oriented to this.
So it isn't that they have a natural "niceness" - I never said that they did. They're just like everybody else. What they have done is recognise the damage one person can do to another and try to put a limit on it.
What about research that shows if you scale up the violence in Bushman society, it's as bad as Detroit?
There is no question that violence did happen in Bushman societies. I knew of a group of 15 where one man killed two others with an arrow. The men in that group killed the killer. So now three had died, and three in 15 is a pretty high percentage: that's higher than the murder rate of Detroit. But the reason the Bushmen we encountered were focused on not fighting was because they were a society that recognised the human proclivity for fighting and tried to remove its causes.
They had the same difficulties as everyone else but they treated it differently, and they recognised the value of having a low-violence society.
Did you sense that this kind of life couldn't last?
It was obvious that in the outside world there was a desire for land expansion. The pastoralists wanted it for grazing, and the white farmers for farms. People thought: "Why not take the land from the Bushmen, they're not doing anything with it?" The farmers and the pastoralists thought the Bushmen would be put to "better use" if they were made to work on the farms. My father saw it all coming. The first year we were there, a farmer followed our tracks and captured some Bushmen for slaves. My dad found out and went and got them back.
How did it all come to an end?
The /Gwi we knew were displaced by farmers from the lands they had always used. Most of them died of thirst, starvation or disease. Part of the Kalahari was designated "Bushmanland" in 1970. Unfortunately this was meant to be home not only to the original inhabitants but to all Bushmen from all language groups. The density of people meant the end of hunting and gathering. Many of the Ju/wasi now live in Tsumkwe, and depend on the wages of the few who can find work.
Did the ideas about Bushmen becoming hunter-gatherers again stop the farms taking off?
Yes. And that's my brother John's message too. He made a film called The Kalahari Family, and the last section is titled "Death by myth". He believes Bushman farms failed because they didn't get the support they needed, due to the efforts channelled towards getting them back to a hunter-gatherer way of life.
From issue 2577 of New Scientist magazine, 11 November 2006, page 52-53
New Scientist Print Edition
Lee Alan Dugatkin
11 November 2006
ALTRUISM - helping others at a cost to oneself - has been a stubborn thorn in the side of evolutionary biologists. If natural selection favours genes that produce traits which increase the reproductive success of the individuals in which they reside, then altruism is precisely the sort of behaviour that should disappear.
Darwin was acutely aware of the problem that altruism posed for his theory of natural selection. He was particularly worried about the self-sacrificial behaviour that social insects display: how could natural selection explain why a worker bee will defend its hive by stinging an intruder and dying in the process? In On the Origin of Species, he summarised the topic of social insect altruism as "one special difficulty, which at first appeared to me to be insuperable, and actually fatal to the whole theory". But then he came up with an explanation.
Since worker bees were helping blood relatives - especially their queen - Darwin hypothesised that natural selection might favour altruism at the level of blood kin. One hundred and four years later, the biologist Bill Hamilton would formalise Darwin's idea, but the path from Darwin to Hamilton was not smooth. The nature of altruism and its similarities to the human trait of goodness make it susceptible to political, philosophical and religious subjectivity. Studying the structure of an atom isn't personal: studying altruism can be. It certainly was for the next two figures in the history of altruism, Thomas Huxley and Peter Kropotkin.
Huxley, also known as "Darwin's bulldog", outlined his thoughts on this topic in an 1888 essay entitled "The struggle for existence": "From the point of view of the moralist, the animal world is on about the same level as the gladiator's show... Life [for prehistoric people] was a continuous free fight, and beyond the limited and temporary relations of the family, the Hobbesian war of each against all was the normal state of existence." For Huxley, altruism was rare, and when it did occur it would be confined to blood relatives.
Kropotkin, once a page to the tsar of Russia and later a naturalist who spent five years studying natural history in Siberia, thought otherwise. In Siberia he thought that he saw altruism divorced from kinship in every species he came across. "Don't compete!" Kropotkin wrote in his influential book Mutual Aid: A factor of evolution (1902). "That is the watchword which comes to us from the bush, the forest, the river, the ocean. Therefore combine - practice mutual aid!"
How could two respected scientists come to such radically different conclusions? In addition to being a naturalist, Kropotkin was also the world's most famous anarchist. He believed that if animals could partake in altruism in the absence of government, then civilised society needed no government either, and could live in peace, behaving altruistically. Kropotkin was following what he saw as "the course traced by the modern philosophy of evolution... society as an aggregation of organisms trying to find out the best ways of combining the wants of the individuals with those of co-operation". He saw anarchism as the next phase of evolution.
Huxley was no less affected by events around him. Shortly before he published "The struggle for existence", his daughter, Mady, died of complications related to a mental illness. In his despair over Mady's passing he wrote, "You see a meadow rich in flower... and your memory rests upon it as an image of peaceful beauty. It is a delusion... not a bird twitters but is either slayer or slain... murder and sudden death are the order of the day." It was in the light of nature as the embodiment of struggle and destruction - the antithesis of altruism - that Huxley saw the death of his daughter and it was in that mindset that he penned his essay.
A suite of other fascinating characters would follow Huxley and Kropotkin. In the US there was the Quaker ecologist Warder Clyde Allee, who did the first real experiments on altruism in the 1930s and whose religious and scientific writings on the subject were often indistinguishable; in fact, he would often swipe text from one and add it to the other. Around the same time in the UK, J.B.S. Haldane, one of the founders of population genetics, was talking of altruism and kinship, and came close to developing a mathematical theory on the subject. But he stopped short - nobody quite knows why.
A mathematical theory for the evolution of altruism and its relation to blood kinship would come a generation later with Bill Hamilton, who was both a passionate naturalist and a gifted mathematician. While working on his PhD in the early 1960s, he built a complex mathematical model to describe blood kinship and the evolution of altruism. Fortunately, the model boiled down to a simple equation, now known as Hamilton's rule. The equation has only three variables: the cost of altruism to the altruist (c), the benefit that a recipient of altruism receives (b) and their genetic relatedness (r). Hamilton's rule states that natural selection favours altruism when r × b > c.
Hamilton's equation amounts to this: if a gene for altruism is to evolve, then the cost of altruism must be balanced by compensating benefits. In his model, the benefits can be accrued by blood relatives of the altruist because there's a chance (the probability r) that such relatives may also carry that gene for altruism. In other words, a gene for altruism can spread if it helps copies of itself residing in blood kin.
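A minimal worked example makes the rule concrete. The relatedness values below are the standard ones for diploid and haplodiploid relatives; the benefit and cost figures are invented for illustration and do not come from Hamilton's papers.

# Minimal sketch of Hamilton's rule: altruism is favoured when r * b > c.
def favoured(r: float, b: float, c: float) -> bool:
    """Return True if Hamilton's rule says selection favours the altruistic act."""
    return r * b > c

print(favoured(r=0.5, b=3.0, c=1.0))    # full sibling: 1.5 > 1, favoured
print(favoured(r=0.125, b=3.0, c=1.0))  # first cousin: 0.375 > 1 fails, not favoured
print(favoured(r=0.75, b=1.5, c=1.0))   # worker honeybee helping a sister: favoured

For full siblings r is 0.5, so an act that costs the altruist one unit of reproductive success is favoured only if it gains the sibling more than two units; the unusually high relatedness between honeybee sisters is one reason worker self-sacrifice is so readily favoured.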
A generation of biologists were profoundly affected by Hamilton's rule. One of them was the population geneticist George Price, an eclectic genius who became depressed when he came across Hamilton's work. He had hoped that goodness was exempt from scientific analysis, but Hamilton's theory seemed to demonstrate otherwise. Price went through the mathematics in the model and realised that Hamilton had underestimated the power of his own theory.
While working with Hamilton on kinship and altruism, the atheist Price underwent a religious epiphany. In an irony that turns the debate about religion and evolution on its head, Price believed that his findings on altruism were the result of divine inspiration. He became a devout Christian, donating most of his money to helping the poor. At various times he lived as a squatter; at other times he slept on the floor at the Galton Laboratory of University College London, where he was working. Price lived the life of the altruists that he had modelled mathematically.
Since Hamilton published his model, thousands of experiments have directly or indirectly tested predictions emerging from his rule, and the results are encouraging. Hamilton's rule doesn't explain all the altruism we see but it explains a sizeable chunk of it. With time, Hamilton himself began to realise the power of his model, as well as its implications, and was somewhat dismayed that altruism could be boiled down to a simple equation: "I like always to imagine that I and we are above all that, subject to far more mysterious laws," he noted in volume 1 of his book Narrow Roads of Gene Land. "In this prejudice, however, I seem, rather sadly, to have been losing more ground than I gain. The theory I outline... has turned out very successful. It... illuminates not only animal behaviour but, to some extent as yet unknown but actively being researched, human behaviour as well."
From issue 2577 of New Scientist magazine, 11 November 2006, page 56-57
Lee Alan Dugatkin is a biologist at the University of Louisville, Kentucky. His most recent book is The Altruism Equation: Seven scientists search for the origins of goodness (Princeton University Press, 2006).
POSTED: 4:09 p.m. EST, November 9, 2006
CAPE CANAVERAL, Florida (AP) -- Space shuttle Discovery was moved to the launch pad Thursday to await a launch that could be as early as December 6 -- an effort to avoid potential New Year's Eve computer glitches.
The worry is that shuttle computers aren't designed to make the change from the 365th day of the old year to the first day of the new year while in flight. NASA has never had a shuttle in space December 31 or January 1.
"We've just never had the computers up and going when we've transitioned from one year to another," said Discovery astronaut Joan Higginbotham. "We're not really sure how they're going to operate."
Starting December 7, launch opportunities would be available as late as December 17 or 18. With a 12-day mission, that would mean the shuttle is back on Earth before New Year's Eve.
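The timing is simple date arithmetic; the sketch below is a rough check that assumes an on-time launch and a full 12-day flight with no extension.

# Quick check of the launch-window arithmetic: a 12-day mission starting on the
# latest window dates still lands before December 31.
from datetime import date, timedelta

MISSION_LENGTH = timedelta(days=12)
for launch in (date(2006, 12, 6), date(2006, 12, 17), date(2006, 12, 18)):
    landing = launch + MISSION_LENGTH
    print(launch, "->", landing, "| lands before New Year's Eve:", landing < date(2006, 12, 31))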
However, NASA was quick to say that even if the shuttle crew finds itself still in space on January 1, procedures could be devised to make a transition if necessary.
"Under some weird circumstance ... if we have an 'Oh my god,' and we have to be up there, I am sure we would figure out a way to operate the vehicle safely," said Steve Oswald, a vice president for Boeing Co., the parent company of the builders and designers of NASA's shuttles. "It just wouldn't be flying in the normal certified mode that we are used to flying."
If Discovery gets off the ground next month, it will be the third shuttle flight of the year, and only the fourth since the 2003 Columbia disaster.
It also will be the first night launch in four years. NASA required daylight launches after Columbia to make sure engineers had clear photos of the shuttle's external fuel tank; falling foam from Columbia's tank damaged its wing, dooming the shuttle and its seven astronauts.
NASA managers believe illumination from the space shuttle's booster rockets should allow for photos at night during the first two minutes, and radar should be able to detect any falling debris. Astronauts also are able to inspect the shuttle for damage while in flight.
During the 12-day mission, the astronauts will make three spacewalks to install an $11 million addition to the international space station and rewire the space lab's electrical system. The shuttle will also drop off U.S. astronaut Sunita Williams and bring home German astronaut Thomas Reiter, who has been at the space station since July.