Humans, the Believing Animals
Society & Reason
Aristotle says humans are rational animals, but Kevin Currie-Knight argues that our capacity for belief is even more fundamental.
I think Western philosophy makes a mistake whenever it follows Aristotle’s definition of humans as ‘rational animals’. Aristotle argued that unlike lower animals, humans have a rational soul, and that (although humans can surely ignore or overlook it) this rational soul is what separates humans from those lower animals. Just as Plato’s Socrates told us that the examined life – examined by reason – is the life worth living, Aristotle concludes that the highest form of life for humans, who have rational souls, is the contemplative life.
I think Aristotle was wrong about humans being primarily rational animals. This is not because human irrationality proves that we aren’t rational. Aristotle never said that we are always rational, just that we have the unique capacity to be rational. Rather, I think there is something even more fundamental that distinguishes humans from the lower animals. We are believing animals. Aristotle is right in that I differ from my cat, a water buffalo, and the insects in my yard because I can reason in ways they can’t. However, a more fundamental difference between those animals and myself is that I believe in ways they can’t – about how I get to work, how it is appropriate to behave in different spaces, or about the cosmos. My question, then, is which is more fundamental to being human: reason, or belief?
The Believing Animal by Miles Walker 2023
Image © Miles Walker 2023. Please visit mileswalker.com
Belief versus Reason
As Aristotle and various philosophers since have depicted us, reason is the primary player, and belief is what we do along the way before another bout of reasoning. We reason, then we believe; and ideally, we keep reasoning in order to search for better and truer beliefs. This is why Aristotle believed that the contemplative life was the highest life for humans.
Here we can ask two questions. First, is Aristotle’s an accurate depiction of how humans work? Second, is this how humans should work?
We can answer the first question decisively in the negative. The more we learn about the human mind, and especially about its deep-rooted psychological biases, the more our minds seem almost tailor-made for belief-preservation. This makes it difficult and even unpleasant to challenge long-held beliefs through reason. For instance, confirmation bias leads us to weigh more heavily, and seek out more readily, arguments and evidence which support our existing beliefs. Status quo bias finds us favoring the preservation of ways we’ve gotten used to over changes to those ways. Meanwhile, anchoring bias means that how we appraise information is affected by how the issue was initially presented: if X is the first way I hear an issue framed, I am more likely to use X as the standard by which I judge subsequent arguments about that issue. If these three biases point to anything in the human condition, it is our tendency to preserve existing beliefs rather than rise above them and challenge them with reason.
Both observation and introspection support this idea. Kathryn Schulz wrote a marvelously researched book, Being Wrong (2010), about the puzzle of why it feels so good to be right, and so bad to be (or to realize we are) wrong. If humans are primarily rational, rather than believing, creatures, shouldn’t we feel quite good about recognising ourselves as being wrong – at least if we discover our wrongness by rational means? But Schulz writes, correctly, that for most of us most of the time, we don’t. On the contrary, belief preservation feels amazing, and given the cognitive biases I just mentioned (and a host of others), we often struggle mightily to avoid giving up our already-held beliefs. Schulz points out that we love belief so much that when one becomes dislodged we rush to fill the hole with another.
If you need more persuasion, I’d ask that you look at how argument works on social media and elsewhere. Here we can clearly see the following tendencies:
(a) People tend to defend beliefs they hold much more often and enthusiastically than they honestly entertain arguments against their beliefs;
(b) The more a person is challenged on beliefs important to them, the more anger is likely to seep into the discussion; and
(c) People are rarely happy to be presented with insinuations that they are wrong, or to admit flaws in their beliefs. (Note the asymmetry between what happens in arguments, and the elation that often occurs in echo-chamber spaces where people collectively confirm a belief they share.)
Maybe all of this helps Aristotle make his case that the contemplative life is the highest life. If belief is more natural to us than reasonable self-criticism, that is all the more reason to become more contemplative. But look how prone we are to defend our beliefs, even to the point where we make every effort to avoid giving up faulty ones.
Reason does great things: it helps us cope as intelligently as we can with uncertainty; convince others of our beliefs; and, yes, examine our existing beliefs in the face of evidence that they are inadequate. My point is not to trash reason and exalt belief, but to suggest that the latter plays a more important role in our lives than the ‘rational animal’ view suggests.
If anything, belief and reason are complementary. While reason helps us cope with uncertainty by giving us a means to think things through, belief helps us cope with uncertainty by allowing us to feel respite from it. If all we ever did was maintain our existing beliefs, we would never progress or adapt in a changing world. But if all we ever did was reason, we would never be able to rest in any security about anything we believe. Of course, neither Aristotle nor any other ‘rational animal’ human-definer suggests that reasoning should be omnipresent in our lives at the expense of belief: they just suggest that we persistently test our beliefs against reason (our own and others’). I want to suggest by contrast that while we should sometimes do just this, too much of it would be too disorienting and exhausting for most humans to find livable. I can hear Aristotle’s objection now: ‘Then so much the worse for most humans!’ My response is: ‘No, so much the better!’ Reasoning that doesn’t rest until it finds a secure foundation for some belief might be thrilling to philosophers, but it is anguishing, and, frankly, feels unnecessary for the rest of us most of the time. When we realize that we have no ultimate grounding to claim knowledge about any given thing, the philosopher frets, while the rest of us just pause for a second before going about our days as best we can.
Belief versus Reality
Does ‘mere’ belief do anything for us so great that we should feel okay defending it from time to time against reason’s encroachment?
In The Denial of Death (1973), the philosopher and anthropologist Ernest Becker argues that humans throw themselves into belief systems owing to a fear of death. Becker suggests that as we become acquainted with the fact that we all die, we use religious belief systems to console us that death is not the end, or secular belief systems to help us think of projects bigger than ourselves that will continue when we die. Talk yourself out of this web of beliefs, and you talk yourself out of the ability to psychologically deny death.
Is seeing believing?
I suspect that Becker gets some things right, especially about why beliefs have the inertia they do: the world is simply easier to handle with belief than without it. I suspect, however, that he is wrong about the ultimate source of the inertia. I suspect that belief acquires its inertia not from fear of death, but from recognition of the unpleasantness of uncertainty. Beliefs give us a sense, however small, of safety against contingency, and assurance against uncertainty. For instance, I believe that I can trust certain news sources – and no part of this belief seems relevant to a fear of death.
Paul Feyerabend is therefore closer to the mark in a book about the role of belief in the natural sciences, Conquest of Abundance: A Tale of Abstraction versus the Richness of Being (1999). I give the full title because it is appropriate to his argument, which is that theories about the natural world (and I would further say, all beliefs) are psychologically understandable moves to conquer the world’s abundant richness of experience by bringing it into a more abstract simplicity. Thus I think of beliefs as being somewhat like zip files for the mind: ways of allowing us to compress ideas about the world that we would otherwise have to think through continually (which would be a costly endeavor in time, nutrition and energy).
Suppose for example I believe that global warming is a real human-made process. This belief (one of many I hold) does quite a few things for me. First, it fills in a bit of my picture of the world: how it works, and my relationship to it. Second, it gives me something to use in deciding how I will navigate that world. Third, when I receive a variety of viewpoints about climate change and related issues, my belief gives me a frame of reference for deciding what information I will take seriously and what I will dismiss. Relatedly, my belief helps me explain to myself why this diversity of viewpoints exists: it’s not because the world is complex enough for different folks to validly come to different conclusions, but because I see the world correctly, whereas those who disagree with me are either mistaken or wrongly motivated. My belief might even help me organize the world into good and bad people, and help me socially bond with the good. If I lose my belief, then I lose all of that. (Of course, the same benefits and potential losses can accrue to a person with the opposite belief – that man-made climate change isn’t real.)
Our minds want an understanding of the world that marries accuracy, workability, and efficiency. We want beliefs that give us a sense that we understand the world well enough (accuracy) that we can better think about and navigate it (workability) in manageable ways (efficiency). This is true of everything from small beliefs – like which diet is healthiest – to large beliefs, like my political or religious convictions. Understandings that satisfy some acceptable combination of accuracy, workability, and efficiency become beliefs, and the more a belief satisfies me, the more I will come to rely on it. And the more I come to rely on it, the more important upholding it becomes to me.
This seems especially true of beliefs that help me conceive of my relationship to others and the world, whether this is a political ideology, a religion, or a sense of the proper moral relations between people. We hold some beliefs closer and tighter than others – usually the ones we rely on more for understanding ourselves and those around us (cf. W.V.O. Quine’s ‘web of belief’ theory). Just yesterday, a customer service representative convinced me that I was wrong about a detail on my credit card statement. Realizing my error was no big deal; I was even glad to be pointed in the right direction. That detail was not terribly important to my understanding of the world or my relation to it. However, in my early twenties I was obsessed with the philosophy of Ayn Rand, so it was quite disorienting to conclude that this philosophy, which I had come to rely on in understanding the world, was significantly flawed. The more important the belief is to our sense of self and the world, the stronger our understandable inertia to maintain rather than challenge it.
Belief versus Philosophy
To summarise, I am not arguing that we shouldn’t reason. I am arguing that, contra Aristotle and much of the Western philosophical tradition, an honest look at the world cannot support the claim that we are rational animals before we are believing ones. While reason is a useful tool in times of need, it is the decidedly weaker party. Psychological inertia is generally on the side of belief. And if I’m right about human psychology’s need to conquer the abundance of experience, it will never be otherwise. We can strengthen our reasoning ability and change our attitude towards it all we want, but reason will still be subordinate to belief. It will at best be an open question whether the contemplative, rather than the believing, life is the highest type of life.
Let me end with several reasons why I think it’s time for philosophers to stress belief’s role in the human condition more, and reason’s a bit less. First, philosophy can compel people only if it starts with accurate sketches of them. In particular, I think of political philosophers such as Rawls, Habermas, or Dewey, who sketch visions of social order dependent on people having frequent open conversations with others, collectively striving toward the type of truth that doesn’t reduce to each side vindicating its own position. It’s a lovely idea, but if what I’ve said is right, it is a non-starter, because the difference in beliefs will always be present, and strongly felt.
Other philosophers tend to view reason as having some legislative or coercive power over us, saying, ‘Reason compels us to…’ Yet reason compels us only when we want to be compelled by it, and most often, we are compelled by reason only to the extent that it validates beliefs to which we are already committed. There are exceptions, but a philosopher who fails to appreciate this point fails to describe the world that most people inhabit.
Lastly, philosophers should not see their lives of reason as valuable because they are an example of how others should behave. Instead, philosophers should see their value as psychologists might see theirs: as specialists in something that most people have little interest in or ability to do. Some who’ve taken philosophy classes have been grateful for the intellectual enrichment, but I suspect the vast majority of people are fine leaving philosophy’s odd questions and reasoning tactics behind them so that they can return to more pressing concerns. They will go on to believe all sorts of things that will afford them various degrees of success. Reason will surely play a role, but it’s needed only when belief goes wrong. Challenging beliefs that haven’t yet gone wrong, or even shown signs of doing so (even ones that lack adequate justification), is more often than not beside the point.
I fear that I sound glum on reason. I’m not. Nor am I glum on philosophy’s importance as the field that produces and evaluates reasons in order to arrive at positions. The point of this article has been that reason has been oversold and belief undersold by philosophers in their accounts of what makes us human. In my experience, shifting my understanding from a reason-centric to a belief-centric model of human thinking has vastly improved my ability to understand the world around me.
© Kevin Currie-Knight 2023
Kevin Currie-Knight is a Teaching Associate Professor in East Carolina University’s College of Education. He generally focuses on the philosophy and history of education, and is the author of the book Education in the Marketplace (Palgrave Macmillan).