I spent the summer of 2011 as an undergraduate researcher at the Rocky Mountain Biological Laboratory, in Colorado. My job was to collect burying beetles—necrophagous critters with wing cases the colors of Halloween—using traps made out of coffee cans and chicken flesh. Behavioral biologists are fascinated by burying beetles because of their biparental model of care: males and females prepare meaty balls from carcasses and then coöperatively raise larvae on them. I was a matchmaker, charged with setting up pairs of beetles and watching them co-parent.
That summer was a dream. I lived in a community of more than a hundred scientists, students, and staff. The research station, based at the site of a deserted mining town, was a magnet for weirdos and plant lovers, naturalists and marmot chasers, flower people and climate watchers. It consisted of dozens of cabins—some Lincoln Log style and more than a century old, others retrofitted into modern laboratories—encircled by spruce and aspen forests, montane meadows, and monumental peaks. I was more accustomed to sidewalks than to summits, but now I saw elk and black bears and woke up one night to a porcupine gnawing on my cabin. For the first time in my life, I found love, or something close to it. In spare moments, I retired to my room, where I drew and wrote in my journal. On the weekends, we scaled the Rockies.
A part of me wanted that summer to be my forever. I envisioned a career as a collector of coleopterans, sneaking off to the mountains to cavort and observe. Another part of me worried that I was being frivolous. Back in college, my classmates had high-minded ambitions like fighting climate change, becoming human-rights lawyers, and starting microfinance firms to alleviate poverty. To spend time with books and beetles in wildflower country seemed the pinnacle of self-indulgence. Adding to the internal tension was something I’d observed among my beetles: the spectre of evolved selfishness. What looked like coöperation was, I discovered, laced with sexual conflict. The female beetles, when they had a size advantage, ejected their male partners; the males evidently stuck around less to help than to insure future mating opportunities. Where I first saw biparental collaboration was instead a complicated waltz of organisms seeking to perpetuate their own interests. Was I one of them—another gene machine bent on favoring itself?
I had, to that point, considered myself a mostly decent person, moved by empathy and committed to self-expression. Was all this actually vanity and delusion, selfishness masquerading as morality? The prospect was unsettling. So I hid away in a one-room library that smelled faintly of old textbooks and the alcohol used to preserve animal specimens, and there I started to work out a response. We’re evolved organisms, I figured, but we’re also an intelligent, cultural species capable of living by ideals that transcend our egoistic origins. What emerged from my musings was a personal ideology, at the core of which was an appreciation of creation—including artistic and scientific work. Even an awkward scribble, I supposed, expresses an incomprehensibly epic causal history, which includes a maker, the maker’s parents, the quality of the air in the room, and so on, until it expands to encompass the entire universe. Goodness could be reclaimed, I thought. I would draw and write and do science but as acts of memorialization—the duties of an apostle of being. I called the ideology Celebrationism, and, early in 2012, I started to codify it in a manic, sprawling novel of that name.
I had grown up a good Sikh boy: I wore a turban, didn’t cut my hair, didn’t drink or smoke. The idea of a god that acted in the world had long seemed implausible, yet it wasn’t until I started studying evolution in earnest that the strictures of religion and of everyday conventions began to feel brittle. By my junior year of college, I thought of myself as a materialist, open-minded but skeptical of anything that smacked of the supernatural. Celebrationism came soon after. It expanded from an ethical road map into a life philosophy, spanning aesthetics, spirituality, and purpose. By the end of my senior year, I was painting my fingernails, drawing swirling mehndi tattoos on my limbs, and regularly walking without shoes, including during my college graduation. “Why, Manvir?” my mother asked, quietly, and I launched into a riff about the illusory nature of normativity and about how I was merely a fancy organism produced by cosmic mega-forces.
After college, I spent a year in Copenhagen, where I studied social insects by day and worked on “Celebrationism” the rest of the time. Reassured of the virtue of intellectual and artistic work, I soon concluded that fictional wizards provided the best model for a life. As I wrote to my friend Cory, “They’re wise, eccentric, colorful, so knowledgeable about some of the most esoteric subjects, lone wolves in a sense, but all of their life experience constantly comes together in an exalting way every time they do something.” When, the following year, I started a Ph.D. in human evolutionary biology at Harvard, I saw the decision as in service of my Celebrationist creed. I could devote myself to meditating on the opportune swerves that produced us.
I was mistaken. Celebrationism died soon afterward. Just as observation and a dose of evolutionary logic revealed male burying beetles not as attentive fathers but as possessive mate guarders, the natural and behavioral sciences deflated my dreamy credo, exposing my lofty aspirations as performance and self-deception. I struggled, unsuccessfully, to construct a new framework for moral behavior which didn’t look like self-interest in disguise. A profound cynicism took hold.
Skepticism about objective morality is nothing new, of course. Michel de Montaigne, in the sixteenth century, remarked that “nothing in all the world has greater variety than law and custom,” a sign, for him, of the nonexistence of universal moral truths—and he had predecessors among the ancient Greeks. David Hume chimed in, two centuries later, to argue that judgments of right and wrong emanate from emotion and social conditioning, not the dispassionate application of reason. Even the more pious-sounding theorists, including Kant and Hegel, saw morality as something that we derive through our own thinking, our own rational will. The war between science and religion in the nineteenth century brought it all to a head, as a supernatural world view was supplanted by one that was more secular and scientific, in a development that Nietzsche described as the death of God. As the pillars of Christian faith crumbled, Western morality seemed poised to collapse. Nihilism loomed. “But how did we do this?” the madman in Nietzsche’s “The Gay Science” asks. “How could we drink up the sea? Who gave us the sponge to wipe away the entire horizon? What were we doing when we unchained the earth from its sun?”
Nietzsche’s response to a godless world was a moral makeover: individuals were to forge their own precepts and act in accordance with them. More than a century later, such forays have matured into an individualist morality that has become widespread. We behave morally, we often say, not because of doctrine but because of our higher-order principles, such as resisting cruelty or upholding the equality of all humans. Rather than valuing human life because an omnipotent godhead commands it, or because our houses of worship instruct it, we do so because we believe it is right.
At its core, this view of morality assumes a kind of moral integrity. Although some people may embrace principles for self-interested ends, the story goes, genuine altruism is possible through reasoned reflection and an earnest desire to be ethical. I told myself a version of this story in the Rockies: rummage through your soul and you can find personally resonant principles that inspire good behavior. The Harvard psychologist Lawrence Kohlberg turned a model like this into scholarly wisdom in the nineteen-sixties and seventies, positioning it as the apex of the six stages of moral development he described. For the youngest children, he thought, moral goodness hinges on what gets rewarded and punished. For actualized adults, in contrast, “right is defined by the decision of conscience in accord with self-chosen ethical principles appealing to logical comprehensiveness, universality, and consistency.”
All this may sound abstract, but it is routine for most educated Westerners. Consider how moral arguments are made. “Animal Liberation Now” (2023), the Princeton philosopher Peter Singer’s reboot of his 1975 classic, “Animal Liberation,” urges readers to emancipate nonhuman animals from the laboratory and the factory farm. Singer assumes that people are committed to promoting well-being and minimizing suffering, and so he spends most of the book showing, first, that our actions create hellish existences for many of our nonhuman brethren and, second, that there is no principled reason to deny moral standing to fish or fowl. His belief in human goodness is so strong, he admits, that he expected everyone who read the original version of his book “would surely be convinced by it and would tell their friends to read it, and therefore everyone would stop eating meat and demand changes to our treatment of animals.”
From an evolutionary perspective, this could seem an odd expectation. Humans have been fashioned by natural selection to pursue sex, status, and material resources. We are adept at looking out for ourselves. We help people, yes, but the decision to give is influenced by innumerable selfish considerations, including how close we are to a recipient, whether they’ve helped us before, how physically attractive they are, whether they seem responsible for their misfortune, and who might be watching. A Martian observer might, accordingly, have expected Singer’s arguments to focus less on the horrific conditions of overcrowded pig farms and instead to appeal to our hedonic urges—more along the lines of “Veganism makes you sexy” or “People who protest animal experimentation have more friends and nicer houses than their apathetic rivals.”
But Singer has always known his audience. Most people want to be good. Although “Animal Liberation Now” is largely filled with gruesome details, it also recounts changes that growing awareness has spurred. At least nine states have passed legislation limiting the confinement of sows, veal calves, and laying hens. Between 2005 and 2022 in the U.S., the proportion of hens that were uncaged rose from three per cent to thirty-five per cent, while Yum! Brands—the owner of such fast-food franchises as KFC, Taco Bell, and Pizza Hut, with more than fifty thousand locations around the world—has vowed to phase out eggs from caged hens by 2030. These changes are a microcosm of the centuries-long expansion of moral concern that, throughout much of the world, has ended slavery and decriminalized homosexuality. Could there be a clearer instance of genuine virtue?
I wasn’t yet thinking about any of this when I started graduate school. Instead, my mind was on monkeys. I had proposed studying the Zanzibar red colobus, a creature notable for retaining juvenile traits like a short face and a small head into adulthood. Our species underwent a similar juvenilization during our evolution, and the hope was that something might be learned about our past by studying this peculiar primate.
Still, I couldn’t read about monkeys all day. To start a Ph.D. at a major research university is to have proximity to countless intellectual currents, and I began to drift through the scholarly worlds on campus, which is how I found Moshe Hoffman. Moshe is intense. A curly-haired game theorist with a scalpel-like ability to dissect arguments and identify their logical flaws, Moshe was raised in an Orthodox Hasidic community in Los Angeles. He grew up wearing a kippah and spending half of each school day studying the Talmud and other religious texts until, at the age of fifteen, he forsook his faith. He had a chance conversation with an atheist classmate, then picked up Richard Dawkins’s “The Selfish Gene.” The book exposed him to game theory and evolutionary biology, setting him on a lifelong quest to solve the puzzles of human behavior.
When we met, near the end of my first year, Moshe was a postdoctoral researcher fixated on the nature of trust. We all depend on trust, yet it works in tricky ways. On the one hand, we trust people who are guided by consistent ethical precepts. I’d rather go to dinner with someone deeply opposed to stealing than with a jerk who pockets my valuables as soon as I get up to pee. On the other hand, we’re turned off when people’s commitments seem calculated. The ascent of terms like “slacktivism,” “virtue signalling,” and “moral grandstanding” bespeaks a frustration with do-gooders motivated more by acclaim than by an internal moral compass. The idea is that, if you’re in it for the reputational perks, you can’t be relied on when those perks vanish. In “The Social Instinct” (2021), Nichola Raihani, who works on the evolution of coöperation, refers to this issue as the “reputation tightrope”: it’s beneficial to look moral but only as long as you don’t seem motivated by the benefits.
Moshe argued that humans deal with this dilemma by adopting moral principles. Through learning or natural selection, or some combination, we’ve developed a paradoxical strategy for making friends. We devote ourselves to moral ends in order to garner trust. Which morals we espouse depends on whose trust we are courting. He demonstrated this through a series of game-theoretic models, but you don’t need the math to get it. Everything that characterizes a life lived by moral principles—consistently abiding by them, valuing prosocial ends, refusing to consider costs and benefits, and maintaining that these principles exist for a transcendental reason—seems perfectly engineered to make a person look trustworthy.
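For anyone who does want a taste of the math, here is a toy simulation—an illustrative sketch of my own, with invented payoff numbers, not Moshe’s actual models. A “calculating” partner peeks at the cost of helping and ducks the expensive favors; a “principled” partner helps without looking. An observer who values reliability withdraws trust the moment she sees the calculation happen:

```python
# A toy sketch of trust and uncalculating cooperation. All payoff
# numbers are invented for illustration; this is not a published model.
import random

ROUNDS = 10_000
TRUST_BENEFIT = 3.0      # per-round value of being trusted by an observer
LOW_COST, HIGH_COST = 1.0, 5.0
P_HIGH = 0.1             # helping is occasionally very expensive

def average_payoff(strategy: str) -> float:
    """Simulate one partner's average per-round payoff."""
    total, trusted = 0.0, True
    for _ in range(ROUNDS):
        cost = HIGH_COST if random.random() < P_HIGH else LOW_COST
        if strategy == "calculating":
            helps = cost < HIGH_COST   # peeks at the price, ducks big costs
            trusted = False            # the observer sees the peek and bails
        else:                          # "principled": helps without looking
            helps = True
        total += (TRUST_BENEFIT if trusted else 0.0) - (cost if helps else 0.0)
    return total / ROUNDS

for strategy in ("calculating", "principled"):
    print(f"{strategy:11s} earns {average_payoff(strategy):+.2f} per round")
```

In this crude setup, the principled partner comes out ahead: the occasional five-unit sacrifice is cheaper than forfeiting trust, which is the whole trick of a commitment that refuses to count costs.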
His account identifies showmanship, conscious or otherwise, in ostensibly principled acts. We talk about moral principles as if they were inviolate, but we readily consider trade-offs and deviate from those principles when we can get away with it. Philip Tetlock, who works at the intersection of political science and psychology, labels our commitments “pseudo-sacred.” Sure, some people would die for their principles, yet they often abandon them once they gain power and no longer rely on trust. In “Human Rights in Africa” (2017), the historian Bonny Ibhawoh showed that post-colonial African dictators often started their careers as dissidents devoted to civil liberties.
Moshe wasn’t alone in this work. Around the time that he and I began chatting, researchers at Oxford, the École Normale Supérieure, and elsewhere were disrobing morality and finding performance underneath. Jillian Jordan, then a graduate student at Yale and now on the faculty of Harvard Business School, conducted a series of landmark studies demonstrating how people instinctively use moral behavior to cultivate laudable personas. A 2016 paper in the Proceedings of the National Academy of Sciences—which Jordan wrote with Moshe and two other researchers—studied uncalculating coöperation, the tendency to willfully ignore costs and benefits when helping others. It’s a key feature of both romantic love and principled behavior. The authors found not only that “coöperating without looking” (a phrase of Moshe’s) attracts trust but that people engage in it more when trying to win observers’ confidence. The motivations that we find so detestable—moral posturing for social rewards—may, in fact, be the hallmark of moral action.
Invested as I was in my own goodness, whether achieved or aspirational, I found Moshe’s ideas both alarming and mesmeric. To engage with them was to look in a mirror and find a sinister creature staring back. The more I sought Moshe out—first by taking a course he co-taught, then by meeting up for Indian food after class, then by working as his teaching assistant—the more I felt trapped within my self-interest. Celebrationism was exposed as a beautiful lie. The search for personally resonant principles was reinterpreted as a tactic not to overcome self-interest but to advance it. Any dignified motivations that had once held sway—making art for art’s sake, acting to minimize suffering—became smoke screens to distract others from my selfishness. Here were hard truths that I felt compelled to confront. I wanted to escape the performance, to adopt values for reasons other than their social utility, but even that urge, I recognized, reflected the same strategic impulse to appear good and consistent. It was like forcing yourself to wake up from a dream only to realize that you’re still dreaming.
Yes, there were venerable antecedents to all these arguments, but what had once been the province of the provocateur was now something of a scholarly consensus. The new, naturalistic study of morality stemmed from an array of converging disciplines and approaches, spanning sociology, biology, anthropology, and psychology. It was set forth in popular books like Matt Ridley’s “The Origins of Virtue” (1996), Joshua Greene’s “Moral Tribes” (2013), and Richard Wrangham’s “The Goodness Paradox” (2019). Not everyone in this field understands ethical behavior the way Moshe does. Still, they tend to employ a framework grounded in evolutionary theory—one that casts morality as a property of our primate brains and little else. Appeals to pure selflessness have become harder to defend, while a belief in objective moral truths—existing apart from our minds and discoverable through impartial judgment—has grown increasingly untenable.
Darwin himself sensed the implications. In “The Descent of Man” (1871), he suggested that studying the “moral sense” from “the side of natural history” would throw “light on one of the highest psychical faculties of man.” It took another hundred years for scholars of evolution to appreciate the extent to which a Darwinian world view can explain morality. By the beginning of the twenty-first century, philosophers like Sharon Street, at N.Y.U., were taking note. “Before life began, nothing was valuable,” Street wrote in a now classic article. “But then life arose and began to value—not because it was recognizing anything, but because creatures who valued (certain things in particular) tended to survive.” In other words, moral tenets—such as the rightness of loyalty or the wrongness of murder—do not exist unless natural selection produces organisms that value them.
In recent decades, all sorts of philosophers have added to the pool of adaptive theories about morality. Allan Gibbard argues that moral statements (“Killing is bad”) actually express attitudes (“I don’t like killing”), allowing us to coördinate on shared prescriptions (“No one shall kill”). Philip Kitcher sees ethics as an ever-evolving project invented by our remote ancestors and continually refined to help societies flourish. Richard Joyce has proposed that moral judgments help keep us out of trouble. Given normal human hedonism, we may struggle to stop ourselves from, say, stealing a brownie; the feeling that it’s morally wrong provides us an emotional bulwark. Non-moral explanations like these, whatever their differences, obviate talk of moral truths, construing them as dreamlike delusions.
Like the decline of religion, what’s often called the evolutionary debunking of morality can induce existential panic and strenuous efforts at circumvention. The eminent philosopher Derek Parfit, the subject of a recent biography by David Edmonds, spent decades writing “On What Matters,” a book that sought both to build a unified theory of objective morality and to defend it against challengers, including evolution-inspired skeptics. In 2015, at N.Y.U., Parfit and Street taught a course together on meta-ethics. On the last day of class, a student asked them whether they had learned anything from their collaboration. “My memory is that both of us said ‘No!’ ” Street told Edmonds. “He thought my position was nihilistic. He was worried about it being true and felt it needed beating back with arguments.”
What troubled me was less the notion that morality was our own creation than the implication that our motives were suspect—that evolutionarily ingrained egoism permeated our desires, including the desire to overcome that selfishness. Sincerity, I concluded, was dead. Just as the natural sciences had killed the Christian God, I thought, the social and behavioral sciences had made appeals to virtuous motivations preposterous. I became skeptical of all moral opinions, but especially of the most impassioned ones, which was a problem, because I was dating someone who had a lot of them. (It didn’t work out.) A close friend, a punk physicist with whom I often went dancing late at night, found my newfound cynicism hard to relate to, and we drifted apart.
Many theorists are skeptical of such skepticism. When I asked people on X how they have dealt with evolutionary debunking, Oliver Scott Curry, a social scientist at Oxford and the research director at Kindlab, which studies the practice of kindness, warned me not to confuse the selfishness of genes with the nature of our motivations, which apparently are more gallant. He was echoing a distinction often drawn between a behavior’s “ultimate” causes, which concern why it evolved, and its “proximate” causes, which include psychological and physiological mechanisms. The cognition underpinning moral judgment may have evolved to make us look good, these scholars grant, but that doesn’t count against its sincerity. In “Optimally Irrational” (2022), the University of Queensland economist Lionel Page explains, “There is no contradiction between saying that humans have genuine moral feelings and that these feelings have been shaped to guide them when playing games of social interactions.”
Such arguments make sense to some degree. An impulse can exist because of its evolutionary utility but still be heartfelt. The love I feel for my spouse functions to propagate my genes, but that doesn’t lessen the strength of my devotion. Why couldn’t this shift in perspective rescue goodness for me? A major reason is that the proximate-ultimate distinction leaves intact the unsavory aspects of human motivation. As anyone who has spent more than twenty minutes on social media can attest, humans are remarkably attentive to which moral proclamations garner esteem and attention. We weigh the status implications of claiming different principles. It’s true that we often assure ourselves otherwise and even internalize positions once we espouse them enough. Yet this fact didn’t redeem moral sincerity for me; it corrupted it.
I eventually ditched monkeys. Humans, complicated and enculturated, had a stronger appeal than tiny-headed primates. After my first year of graduate school, in 2014, I travelled to the Indonesian island of Siberut and stayed with its Indigenous inhabitants, the Mentawai. I returned for two more months in 2015 and then spent much of 2017 with a Mentawai community, studying traditions of justice, healing, and spirituality. As I learned more of the language, I saw how rarely Mentawai people invoke abstract concepts of right and wrong. Instead, they reason about duties and responsibilities in a way that seems both blatantly self-interested and refreshingly honest, and which I’ve since adopted when speaking to them.
A Mentawai man who had previously worked for me as a research assistant once asked me over WhatsApp to help pay his school fees. I agreed but then struggled to wire the money from the United States, and he was forced to drop out. When I visited again in 2020, I handed him a wad of cash. “Why are you doing this?” he asked. My reply came automatically: “Because, if I don’t pay you, people will think that I don’t keep my promises.” He nodded. The answer made sense.
How does one exist in a post-moral world? What do we do when the desire to be good is exposed as a self-serving performance and moral beliefs are recast as merely brain stuff? I responded by turning to a kind of nihilism, yet this is far from the only reaction. We could follow the Mentawai, favoring the language of transaction over virtue. Or we could carry on as if nothing has changed. Richard Joyce, in his new book, “Morality: From Error to Fiction,” advocates such an approach. His “moral fictionalism” entails maintaining our current way of talking while recognizing that a major benefit of this language is that it makes you likable, despite referring to nothing real. If you behave the way I did in grad school, going on about the theatre of morality, you will, he suggests, only attract censure and wariness. Better to blend in.
Intellectually, I find the proposal hard to swallow. The idea of cosplaying moral commitment for social acceptance would surely magnify whatever dissonance I already feel. Still, a decade after my first meeting with Moshe, experience forces me to acknowledge Joyce’s larger point. It’s easy to inhabit the fiction.
I still accept that I am a selfish organism produced by a cosmic mega-force, drifting around in a bedlam of energy and matter and, in most respects, not so very different from the beetles I scrutinized during that summer in Colorado. I still see the power in Moshe’s game-theory models. Traces of unease linger. But I no longer feel unmoored. A sense of meaning has re-established itself. Tressed, turbanned, and teetotalling, I am, at least by all appearances, still a good Sikh. I have become a teacher, a husband, and a father to a new baby daughter. When she smiles, a single dimple appears in her left cheek. Her existence feels more ecstatic and celebratory than any ideology I could have conceived, and I hope that she’ll one day grow up to be empathetic and aware of others’ suffering. I have moral intuitions, sometimes impassioned ones. I try to do right by people, and, on most days, I think I do an O.K. job. I dream on. ♦