Can You Be Both A Moral Rationalist & A Moral Sentimentalist? | Issue 156
Metaethics
Andrew Kemle says that evolutionary forces give us the answer.
One of the major discoveries in the social sciences over the past few decades has been that people have innate other-regarding preferences. This means that we generally take other people’s interests and well-being into account when making decisions, and that although socialization can affect the strength of these preferences, we have them as part of our genetic make-up. We are born with them. Other-regarding preferences, in fact, appear to have a deep evolutionary history. Not only do other primates display signs of empathy and concern for their peers, but so do animals separated from us by hundreds of millions of years of separate evolutionary development, such as some insects and possibly certain species of octopus (Peter Godfrey-Smith’s 2016 book Other Minds is highly enlightening in this regard).
There are a couple of thought-provoking implications here for anyone interested in metaethics. One is that the fact that we have innate, other-regarding preferences seems to support the idea that moral decision-making is basically emotion-driven, such that ‘moral sentimentalism’ is the correct way to think about the nature of morality. This is because our emotional responses evolved millions of years before our reasoning capacity, which is a comparatively recent development. One might even argue from this innateness that morality is a type of unconscious ‘social heuristic’. A heuristic is a problem-solving technique that relies on rules of thumb or on trial and error. We use them all the time. A social heuristic depends on social rules of thumb reinforced and corrected by emotional cues. To put it differently, perhaps the root of morality in our brains is an instinctive strategy that helps us navigate social dilemmas quickly. If so, it isn’t primarily reason that guides us in how we treat one another; instead our evolved preferences for certain forms of relationships motivate us to act through emotions, rather than through the force of reason alone.
Interpretations of this kind are fairly common in the literature on moral psychology (see for example, Jonathan Haidt’s 2012 book The Righteous Mind, or David G. Rand’s ‘Social Heuristic Hypothesis’). They are frequently taken to imply that morality is basically a non-rational concept. According to some, it follows from the claim that moral sentimentalism is true that moral rationalism – the idea that moral truth is uncovered through reason – is either false, or plays an insignificant part in our moral decision-making. That’s not the same as saying morality is irrational. Rather, it means that our perception of the rightness or wrongness of something comes from emotions, and perhaps the best reason can do is provide justifications or strategies for actions motivated by emotions. Basically, this type of moral sentimentalist would claim that the evolutionary development of moral sentiments invalidates moral rationalism, so that a moral rationalist would then have to say what’s wrong with the experiments revealing innate other-regarding preferences, or try to explain them away.
I think this rejection of moral rationalism is rather hasty. The evidence of innate other-regarding preferences can be connected to reason, even if we assume only a thin, merely instrumental rationality. The distinction between the emotional-heuristic conception of morality and a rationalist one can be blurry in any case. To resolve all this we need to take evolution seriously.
To show you what I mean, I want to ask you to imagine a universe where David Gauthier is God.
Image © Venantius J Pinto 2023. To see more of his art, please visit behance.net/venantiuspinto
The Universe According to Dave (Gauthier)
Professor David Gauthier is a distinguished Canadian ethicist. The reason I’m asking you to imagine a universe where he is God is that he is a prominent modern-day proponent of something called ‘moral contractarianism’, particularly of the Hobbesian flavour.
This approach to understanding morality assumes Thomas Hobbes (1588-1679) was right to assert that human beings are all completely self-interested. Thus rationality in behaviour reduces to maximizing our self-interest, which Hobbes claims is achieved by groups taking up an implicit ‘social contract’ to be law-abiding. Gauthier argues that we can still connect Hobbes’ self-regarding, merely instrumental form of rationality to morality because: (1) There are personal gains to be had from cooperation; (2) Rational agents won’t ignore potential gains; and (3) Rational agents will therefore be incentivised to form contracts with others to ensure that cooperation is possible. Thus do contracts (often implicit/assumed) become the basis for morality, according to Gauthier.
This isn’t a defence of moral contractarianism, not by a long shot. Instead, moral contractarianism is a useful framework for bringing rationality into questions of morality. Importantly, it includes a version of rationality – a thin, self-regarding, merely instrumental version of it – that’s usually seen as being amoral in its character, since the assumption is that you’re acting in a way that maximizes your own utility without paying much care to anyone else. All that matters in this version of rationality is consistency between goals and actions. Whereas Kant’s followers might claim that reason itself implies moral prescriptions, here the imputing of morality to instrumental rationality seems surplus to requirements. Moral contractarianism also doesn’t seem to leave much room for emotions and heuristics. Emotional and heuristic responses are frequently considered irrational. The value of Gauthier’s brand of moral contractarianism for this discussion, then, is that it tries to connect morality even to a very morally thin and emotionless version of rationality.
So let’s pretend that Gauthier the God has decided to create a universe (God works in mysterious ways, so I don’t have to explain why). Gauthier-God has also decided that Darwinian evolution is present in this universe. He then creates Adam and Eve; and since Gauthier-God is above all an ethicist, he also creates Alice and Bob, along with Carol, David, Enoch, and Francine. Then he deposits eight copies of his book Morals by Agreement (1986), and leaves the humans to do their thing. These eight newly created and entirely self-interested humans, left in a Hobbesian ‘state of nature’, read Gauthier’s arguments, and have to decide what to do next.
If Gauthier is correct, all eight individuals will realize that their best chance of surviving and thriving is to cooperate. In order to cooperate, they know they’ll have to agree on a set of principles to structure their behaviour, so that nobody can exploit, kill, or dominate the others. Consequently, all eight – despite thinking first and foremost of the strategies needed to maximize their own self-interest – will agree to whatever principles best guarantee lasting cooperation among them. And thus is a moral society created. In this universe, the foundation of moral society is reason, since the eight individuals have arrived at their social contract through the analysis of costs and benefits and the exercise of deliberate negotiation.
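The logic driving the eight towards cooperation can be sketched as a repeated Prisoner’s Dilemma. The payoff numbers and the strategies below are my own illustrative assumptions, not anything from Gauthier; the point is just that when interactions are repeated and defection is sanctioned, a purely self-interested agent does better by cooperating than by exploiting:

```python
# Toy repeated Prisoner's Dilemma (illustrative payoffs: T=5, R=3, P=1, S=0).

def payoff(a, b):
    """Return player A's payoff when A and B each play 'C' or 'D'."""
    table = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}
    return table[(a, b)]

def repeated_score(strategy_a, strategy_b, rounds=100):
    """Total payoff to player A over repeated play against player B."""
    history_a, history_b = [], []
    total = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)  # each strategy sees the opponent's past moves
        move_b = strategy_b(history_a)
        total += payoff(move_a, move_b)
        history_a.append(move_a)
        history_b.append(move_b)
    return total

# A 'contract-keeping' agent cooperates but sanctions defection (tit-for-tat);
# a myopic egoist always defects.
tit_for_tat = lambda opp: 'C' if not opp or opp[-1] == 'C' else 'D'
always_defect = lambda opp: 'D'

print(repeated_score(tit_for_tat, tit_for_tat))    # mutual cooperation: 300
print(repeated_score(always_defect, tit_for_tat))  # exploitation punished: 104
```

Over a hundred rounds, mutual cooperation (300) dominates exploitation (104): the defector grabs a one-off gain and then forfeits the benefits of cooperation for every round afterwards, which is exactly the incentive structure that makes contract-keeping rational for self-interested agents.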
Evolution exists in this universe, however; and so these eight people won’t be the only human beings to exist in it for very long. A new generation is born into a society with these governing principles; then another; then another; and another, with the original eight having passed on. The population of this universe will expand, and all while this is happening, natural selection will be filtering out types that decrease reproductive fitness while increasing the prevalence of types that increase reproductive fitness.
One thing that biologists have learned about evolution is that culture and genes interact with each other in complex ways. ‘Dual inheritance theory’, also known as ‘gene-culture coevolution’, deals with this feedback loop. Essentially, genes influence the development of culture by influencing the behaviour of individuals; but culture also influences genes by changing the fitness landscape. This is a fancy way of saying that culture changes the environment, say by nudging behaviour, or creating institutions or technology. If the environment changes, then whether a given inherited trait increases or decreases the reproductive fitness of its organism can change. If the environment changes such that a certain trait now confers a reproductive advantage to an organism, then natural selection will over time lead to that trait becoming dominant in the population: more and more organisms will be born with this trait since their parents will have better success at passing on their genes than competitors without that trait. Ergo, cultural changes can affect the biological evolution of a species.
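The selection dynamic described above can be made concrete with a minimal one-trait model. This is my own sketch, not a model from the dual inheritance literature, and the fitness numbers are invented; it shows only that once a cultural environment gives a trait even a modest reproductive advantage, that trait spreads through the population over generations:

```python
# Minimal discrete-generation selection on a single 'cooperative' trait.
# Fitness values are illustrative assumptions standing in for a cultural
# environment that rewards cooperators.

def next_frequency(p, fitness_coop, fitness_other):
    """One generation of selection on the trait's population frequency p."""
    mean_fitness = p * fitness_coop + (1 - p) * fitness_other
    return p * fitness_coop / mean_fitness

def run(p, generations, fitness_coop=1.05, fitness_other=1.0):
    """Trait frequency after the given number of generations of selection."""
    for _ in range(generations):
        p = next_frequency(p, fitness_coop, fitness_other)
    return p

# Starting rare (1% of the population), a 5% cultural advantage makes the
# cooperative trait overwhelmingly the norm within a few hundred generations.
print(round(run(0.01, 200), 2))
```

With these (assumed) numbers the trait climbs from 1% to over 99% of the population in 200 generations, which is the sense in which culture, by reshaping the fitness landscape, can drive the biological evolution of other-regarding traits.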
In the house that Gauthier-God has built, cooperation is in the best interests of everyone, as it helps maximize individual gain. So the culture the original eight individuals created encouraged cooperative behaviour and sanctioned uncooperative behaviour. And as new generations emerge, any traits which create better cooperators – for example, by having a mechanism like empathy to understand other people’s emotions, or developing a desire to help the least fortunate, thus signalling to others that you’re on board with cooperative agreements – will increasingly predominate in the population. At some point along this evolutionary path, people with an innate desire to help others, or to understand and cooperate with a diverse set of individuals, will gain the most, and thus will pass on their genes at a higher rate than those who focus on a myopic form of self-interest. If that point is reached, then the population will have evolved from a group of eight entirely rationally-self-interested individuals who came together to negotiate the principles ensuring a cooperative society, to a population where natural other-regarding preferences are the norm. According to most of the thinking on other-regarding preferences, this is the exact universe that we’re living in.
Rational Renegotiators
That’s a fun story, you might say, but it doesn’t really prove that the exclusive sentimentalist interpretation of the evolution of other-regarding preferences is wrong. All I’ve shown is that natural other-regarding feelings could start from a rational calculation in some mythical past. But even if I somehow proved that this sort of contractual agreement really happened, this doesn’t mean that any rational calculation is involved in activating our other-regarding preferences (that is, our moral instincts) right now.
That’s true. But there are two further things to note. One is that genes influence behaviour, but they don’t determine behaviour. As I said, genes and environmental factors interact in complex ways – so complex that it can oftentimes be difficult to tease them apart. But one thing we can say is that innate other-regarding preferences don’t give morality its content. They just predispose people to favour cooperation. The Gauthier-God example shows how rational agents and the evolutionary process can converge on the same idea: that it’s better for everyone in the long run if we’re nice to each other.
Also, culture itself is evolving; and if culture is evolving, then the environment is constantly changing. If the cultural environment is constantly changing, then any sort of social agreement is going to have to be continually updated to incorporate new information and perspectives. You don’t have to be a complexity theorist to recognize that a changing environment requires you to continually re-evaluate where you stand and what your next course of action’s going to be. This includes the content of any cooperative agreement. In other words, a dynamic environment means that any moral contract will have to be renegotiated whenever relevant new information is brought to light – and that will require the operation of rationality.
Let’s return to the Gauthier-God universe. If we’re now a thousand generations away from the original eight humans, we’re also a thousand generations of cultural change away from where we started. The conditions that Adam and Eve and the others faced are likely to be radically different from the conditions this new generation are facing, and so the strategies for maximizing gains will need to be renegotiated too. That includes the conditions ensuring cooperation. And this involves reasoning.
One thing that distinguishes the thousandth generation from Adam and Eve’s generation is that it is going to be better able to reach a new agreement on morally-required actions. For instance, they’ll be able to draw on knowledge of the influence of natural selection on their biology to better incorporate other people’s demands in the negotiation. And after generations of breeding for cooperation, they’ll also be more forgiving of defections, be better able to bridge otherwise untraversable chasms, and will more quickly re-engage in dialogue if current plans seem to be breaking down. Previous rational renegotiations will also affect the moral sentiments this generation will use; and their own rational renegotiation will have an impact on the moral sentiments of the generations succeeding them, too. So we have a feedback loop between rational renegotiation and innate moral sentiments. One feeds off and impacts the other in an unbreakable cycle.
While God creating rationally self-interested individuals at the beginning of time then letting natural selection take its course doesn’t map very neatly onto our perception of reality, we do know that previous generations have constantly renegotiated their own social agreements, just as we do now. Their rational renegotiations have influenced us in what we believe to be right and wrong, just as our rational renegotiations will influence later generations. We can, then, storm the heavens, politely ask Gauthier-God to resign his Lordship, and map the feedback loop between moral reason and innate sentiment onto a universe that looks more like the one we live in.
Having it Both Ways
So can we be both rationalists and sentimentalists? I think the answer is clearly ‘yes’. The feedback loop described above suggests that, when taking a long-run view, the two aspects are inseparable, until we decide we no longer want to adapt to new environments, at which point we will die out.
Recall that moral rationalism in its general form assumes that we discover moral truths through a process of reasoning. Moral sentimentalists often dispute this by pointing to experimental evidence that people rely on their emotions or other instinctive heuristics to determine the rightness or wrongness of an act. But from an evolutionary standpoint, an emotional or heuristic basis for seeing an act as right or wrong doesn’t preclude reason from having a necessary role in applying the moral feelings generated. Furthermore, our moral sentiments aren’t formed completely independently of reason. Reason will also actively adjust these sentiments as we confront the world and test our convictions.
Moral sentiments, in the form of innate other-regarding preferences, don’t just motivate our actions: they also maintain the necessary dispositions in us to engage in cooperation and renegotiations. An emotionless but still rationally prudent being could still negotiate and form a moral contract, or, indeed, come to similar conclusions about the moral truth-values of actions as a sentimentalist. But there’s an evolutionary advantage to having moral sentiments: they help motivate the process, which is necessary given the complexity of the world we’re trying to navigate.
Any Other Implications?
Is morality a kind of contract between members of a society? I’m not a moral contractarian for several reasons. But if I’ve been successful in establishing that you can be a moral rationalist and a moral sentimentalist at the same time, then there are some benefits here to moral contractarian arguments. Chief among them is that we can answer one of the most pressing challenges to any social contract theory: it seems to exclude from the negotiation anyone who can’t exercise rationality. Consequently it doesn’t provide any reasons for expanding our considerations of others beyond rational agents. People with severe intellectual disabilities are often cited as being excluded from social contracts, as are non-human animals. We might also exclude another intelligent species sufficiently alien that we had difficulty communicating with them. These could be extraterrestrial species, or species of synthetic intelligences; or even octopuses, who after hundreds of millions of years of divergent evolution are about as alien to us as you can get without actually coming from outer space. On the view presented here, whether or not these categories of beings are able to rationally negotiate with us to agree moral rules is inconsequential, since rational negotiation is only one part of the moral loop. If someone’s capable of moral sentiments – capable of feeling certain things to be right or wrong – then that person is part of the moral community. Even if it turns out that synthetic intelligences or extraterrestrial intelligences or uplifted octopuses don’t experience what we’d call moral sentiments, that still isn’t grounds for excluding them from the moral community. As long as they influence the expression of moral sentiments in us, even if indirectly, by being present in the background of our deliberations as stakeholders, then they ought to be included in our moral community.
And my argument would encourage research into what moral values and principles we (sentient creatures) share, despite our other differences.
Outside the realm of moral contractarianism, the dynamic interplay between human culture and biological evolution can itself help shed light on whether reasoned arguments or emotional responses trigger moral behaviour. It’s both. Moral psychologists should consider the evolution of moral behaviour to be like any repeated game strategy. Moral behaviour is reinforced through evolutionary processes so that it’s more readily to hand in complex environments; but the initial formation of these strategies still requires reasoning (even if we assume that to start with, this only means thin, instrumental reasoning). So evidence that suggests we instinctively arrive at conclusions about the rightness and wrongness of an act ought not to be interpreted as meaning that reason plays no role. And any inability of people to explain their reasoning process has more to do with the complexity of the situation and the limits of language – a gap that sentiments and heuristics help to bridge, without going through reason.
There’s an assumption held by some that humanity’s moral values are merely an evolutionary accident. That’s a fairly big assumption in any case. But if co-operation really is selectively beneficial in the long term (and how could it not be?), then the evolutionary process must favour cooperative agents. Thus it must select for underlying instincts that maintain cooperation. So the underlying ethical principles that enhance cooperation aren’t accidents of evolution – they’re tested by evolution.
Beyond all that, hopefully I’ve answered the question of whether you can be a moral rationalist and a moral sentimentalist at the same time. More expansive definitions of moral rationality – ones that don’t reduce to just instrumental rationality – might be combined with moral sentiments for a more complete view of morality, but that’s beyond my present scope. It’s just something we’ll have to continually renegotiate as new information comes to light.
© Andrew Kemle 2023
Andrew Kemle is a graduate student in Political Science at the University of Calgary and an advisor to a student advocacy organization.