A trolley train comes hurtling down the line, out of control. It is heading towards five people who are stuck on the track. If you do nothing they face certain death. But you have a choice: with the flick of a switch, you can divert the trolley down another line - a line on which only one person is stuck. What do you do? Perhaps, like most people, you believe that it is right to minimise the carnage, so you do the rational thing and flick that switch.

But what if the situation was slightly different? This time you are standing on a footbridge overlooking the track. The trolley is coming. The five people are still stuck, but there's no switch, no alternative route. All you've got is a hefty guy standing in front of you. If you push him onto the line, his bulk will be enough to stop the runaway trolley. You could sacrifice his life to save the others - one for five, the same as before. What do you do now? Suddenly the dilemma is transformed. If you respond the way most people do, you won't push the hapless fellow to his fate. The thought of actively killing someone, even for the greater good, just feels all wrong.

Two logically equivalent situations, yet two different outcomes. What is going on? For decades, this thought experiment has confounded philosophers and psychologists. They have long been split into two camps: one arguing that moral judgements arise from rational thought, the other that the roots of morality are emotional. But the trolley-train dilemma just doesn't fit this black-or-white way of thinking. Now, as the subject of morality moves from the philosopher's armchair into the lab, the error of this dichotomy is becoming clear. Researchers looking at the psychological basis of morality are finding that reason and emotion both play a part.

Meanwhile, brain-imaging studies are highlighting the interplay between the two, revealing the circumstances in which each comes to the fore. And these findings have implications for the way we think about many tricky personal and public policy issues, from stem cell research and abortion to capital punishment and war. The science of morality also demonstrates how moral arguments can influence our behaviour. It may even prompt you to question your own moral values and those of the society in which you live.

The American philosopher Jerry Fodor once quipped that you can drop out of the various debates about the nature of the human mind for a few centuries, return and find that people are still discussing the same questions. Doubtless he exaggerated for effect, but in the case of morality he was not far off. In 1751, the Scottish philosopher David Hume wrote: "There has been a controversy started of late...concerning the general foundation of morals; whether they be derived from reason, or from sentiment." Hume favoured the latter, and his "sentimentalist" view battled it out against the more rationalist ideas of others - most notably Immanuel Kant - for two centuries.

Then, in the 1960s the Kantian psychologist Lawrence Kohlberg of Harvard University came up with a theory that dominated for the next 30 years. Building on the work of the cognitive psychologist Jean Piaget, who emphasised the rational component of the child's mind in development, Kohlberg argued that children's capacity to make moral judgements derives from their ability to reason. As their cognitive powers increase, so too does their ability to reason in more abstract ways and hence to make more subtle moral judgements. Until a decade ago, Kohlberg's theory was still the launch point for most psychological discussions of moral decision-making.

"In the end no one found Kohlberg's theory convincing, but it was the only one we had," says Paul Bloom, a developmental psychologist at Yale University. "Things have changed radically in the past five to 10 years."

What has changed? For a start, evolutionary biologists have begun to expose the origins, purpose and biological underpinnings of morality. There is now general agreement that moral practices evolved in some form. Elements of morality have been discovered in non-human species, particularly other primates. Some possess a sense of fairness, and many have certain codes of conduct that underlie their social interactions and almost certainly developed as adaptive strategies to help individuals cooperate and cope with conflict. This scuppers the idea that morality is entirely the product of higher reasoning.

Gut instinct

In addition, psychologists increasingly recognise the importance of fast, unconscious processes in a range of mental domains from visual perception to language comprehension. Particularly significant here is the study of heuristics - the mental rules of thumb that allow us to make effective decisions based on limited information - which emphasises the importance of emotions (New Scientist, 4 September 1999, p 32). Take disgust, for example. If you come across a piece of mouldy food or rotting flesh, you don't think, "Oh, that's probably bacterially contaminated, and therefore dangerous, so I should get away from it." You just think "Ugh!" and quickly throw it away.

These strands of research have been brought together by social psychologist Jonathan Haidt from the University of Virginia in Charlottesville. According to his "social intuitionist model", moral judgements are rarely the result of deliberation. Instead, they are primarily the product of "moral intuitions" which work in much the same way as other emotional responses that guide behaviour. But where do these intuitions come from? Haidt suggests that some, such as "murdering your child is wrong", are evolved faculties of the mind like disgust. Others, such as "capital punishment is wrong", are picked up from our culture through socialisation and may be particular to specific historical and cultural settings (Daedalus, vol 133, p 55).

Studies by Haidt and his colleagues support the idea that reflective thought plays only a limited role in many moral judgements. For example, they presented people with a scenario in which a brother and sister, holidaying in a cabin, decide to experiment by having sex with each other. Both use contraception, so there is no chance of producing a child that would pay the genetic price of inbreeding. They never repeat the experiment and continue their lives normally, with neither suffering any adverse psychological effects.

If you recoil at this, you are not alone. But many people go further - they condemn the act as morally wrong. When pushed to explain why, they often end up saying something like: "I don't know, I can't explain it, I just know it's wrong." According to Haidt, this "moral dumbfounding" suggests that moral reasoning occurs after moral decisions have been made, and is really about publicly justifying moral judgements already reached through intuitive, emotional processes. "The reasoning process is more like a lawyer defending a client than a judge or scientist seeking the truth," he says.

Haidt's theory has been widely applauded for reinstating the role of emotionality in moral judgements. "I agree, and I think many people agree, that some of our moral intuitions are not the product of any reason," says Bloom. "We just have them, either because they're innate or because we scooped them up through socialisation." But, he adds, this is surely not the whole story. Even if reason is not the major element of moral decision-making, it still has the potential to play a vital role. "The amount of time we spend having sex compared with, say, commuting to work, is pretty small, but that doesn't mean that when we write the book of human nature, sex becomes an unimportant curiosity," says Bloom.

Reason can exert its force in at least two crucial ways. Historically, moral deliberation has been a catalyst for moral change. "For example, people thought through the issues that led to the moral notions that slavery is wrong, and that men and women and different ethnic groups should have equal rights," says Bloom. "This has played a huge role in our civilisation and society." Contemplation of moral issues leads to new moral norms, which become part of the moral fabric of the societies that future generations inhabit. Wherever humans have made moral progress, critical thinking has been essential.

In addition, moral reasoning also plays a role in our day-to-day lives. "We all have to decide how to live our lives," says Bloom. "How to divide our time between work and family, what our obligations to friends and colleagues are, whether we donate to charity or make eye contact with the homeless man in the street - and there's no real way around this but to think about the problems."

The growing realisation that human morality springs from both our biology and our culture has inspired Bloom and other developmental psychologists to try to discover what moral knowledge is biologically built into humans. In one study, Bloom's team showed animations to 12-month-old infants in which a ball tries to get up a hill, while being "helped" by a square and "hindered" by a triangle. In a second pair of clips, the ball either "cosied up" to the square, while the triangle remained alone, or it "befriended" the triangle.

From the amount of time the infants spent looking at these films, it was clear that they preferred the movie in which the ball associates with the square. Bloom believes this shows that even very young children have the notion that those who help us are our friends, and those who don't are not, which is a basic building block of morality.

Developmental psychologists are not the only ones investigating what sorts of moral dilemmas evolution has equipped us to deal with. Joshua Greene, a philosopher and cognitive scientist from Princeton University, and his colleagues are using brain-imaging techniques to get a handle on what goes on in the brain when we make moral choices. In particular, they have been looking at the trolley-train dilemma to see how the underlying brain activity differs when we decide to flick the switch rather than push the man. With the tools of modern brain imaging, Greene and colleagues are beginning to provide an answer where philosophers have floundered.

Time to decide

Their functional magnetic resonance imaging studies suggest that the different situations elicit different brain responses. Given the choice to flick a switch, areas towards the front of the brain, associated with "executive" decision-making functions, become active, much as they do in any cost-benefit analysis. By contrast, when deciding whether or not to push a man to his death there appears to be a lot of activity in brain areas associated with rapid emotional responses. Throwing someone to their death is the sort of up-close-and-personal moral violation that the brain could well have evolved tools to deal with, explains Greene. By contrast, novel, abstract problems such as flicking a switch need a more logical analysis.

As well as using different brain areas in the footbridge scenario, people also take longer to make a decision - and longer still if they decide to push the man. There is evidence of an internal conflict as they consider taking a morally unpalatable action to promote the greater good. This shows up as increased activity in the anterior cingulate cortex, an area of the brain known to be activated in cognitive conflict. Following this, areas associated with cognitive control and the suppression of emotional responses also light up - with activity particularly marked in people who choose to push.

Greene believes this activity reflects the cognitive effort required to overcome the emotional aversion to harming others. He is currently working on variations of the trolley-train thought experiment to incorporate other moral issues, such as the role that promising not to harm a given individual might have in influencing decisions, and how this affects the underlying brain activity.

Greene and others believe their work has important implications, particularly for emotive issues that elicit reasoned justifications and condemnations in equal measure. "Understanding our moral intuitions about things such as stem-cell research, abortion and capital punishment can change the way we see the issues themselves," he says.

Take the case of human cloning. Leon Kass, a leading US bioethicist and adviser to President Bush, has famously argued that human cloning should be banned because it is disgusting, the so-called "wisdom of repugnance". "But knowing that disgust can be a profoundly irrational emotion undercuts his moral argument," says Bloom. "You don't want to take disgust seriously as a moral indicator, because it has historically been triggered by such things as homosexuality and interracial marriage, which we now don't think of as immoral."

Haidt's experiments show how easy it is for our feelings of disgust to become cross-wired with our sense of morality. For instance, a disgusting but essentially amoral act, such as having sex with an oven-ready chicken, comes to be seen, inappropriately, as morally reprehensible. And disgust is not the only emotion that can become conflated with moral issues. This fact is often exploited, whether consciously or not, by people such as Kass who are arguing for a particular moral stance. Haidt points out that appealing to someone's emotions often carries far more weight than using rational argument. To paraphrase a popular diplomatic line, it's more about winning hearts than converting minds.

The fact that we can play on people's emotions to influence the moral views of a society is alarming to some. "A number of the problems that beset the world result from yoking together certain beliefs with certain moral norms," says Stephen Stich, a philosopher at Rutgers University in New Jersey. "As we become sophisticated enough to know how to manipulate people's norms we ought to start thinking about how to regulate this."

The solution may lie in the realisation that morality is based on both emotions and rational thought. "I hope that people start thinking more seriously about where our moral intuitions come from, rather than taking them for granted," says Greene. If we can learn to question our personal moral assumptions - to see whether we can objectively defend them or whether they reflect a bias or prejudice in our culture, social group or era - that would be moral progress.