Many people cheat on taxes — no mystery there. But many people don’t, even if they wouldn’t be caught — now, that’s weird. Or is it? Psychologists are deeply perplexed by human moral behavior, because it often doesn’t seem to make any logical sense. You might think that we should just be grateful for it. But if we could understand these seemingly irrational acts, perhaps we could encourage more of them.
It’s not as though people haven’t been trying to fathom our moral instincts; it is one of the oldest concerns of philosophy and theology. But what distinguishes the project today is the sheer variety of academic disciplines it brings together: not just moral philosophy and psychology, but also biology, economics, mathematics, and computer science. These researchers do not merely contemplate the rationale for moral beliefs, but study how morality operates in the real world, or fails to. David Rand of Yale University epitomizes the breadth of this science, ranging from abstract equations to large-scale societal interventions. “I’m a weird person,” he says, “who has a foot in each world, of model-making and of actual experiments and psychological theory building.”
In 2012 he and two similarly broad-minded Harvard professors, Martin Nowak and Joshua Greene, tackled a question that exercised the likes of Thomas Hobbes and Jean-Jacques Rousseau: Which is our default mode, selfishness or selflessness? Do we all have craven instincts we must restrain by force of will? Or are we basically good, even if we slip up sometimes?
They collected data from 10 experiments, most of them using a standard economics scenario called a public-goods game. Groups of four people, either American college students or American adults participating online, were given some money. They were allowed to place some of it into a pool, which was then multiplied and distributed evenly. A participant could maximize his or her income by contributing nothing and just sharing in the gains, but people usually gave something. Despite the temptation to be selfish, most people showed selflessness.
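The incentive structure of such a public-goods game can be sketched in a few lines. The endowment of 10 units, the group of four, and the multiplier of 2 below are illustrative assumptions; the article does not report the experiments’ exact parameters.

```python
def payoff(contributions, endowment=10, multiplier=2.0):
    """Each player keeps whatever they didn't contribute, plus an
    equal share of the multiplied common pool."""
    share = sum(contributions) * multiplier / len(contributions)
    return [endowment - c + share for c in contributions]

# Full cooperation leaves everyone better off than their endowment...
print(payoff([10, 10, 10, 10]))  # [20.0, 20.0, 20.0, 20.0]
# ...but a lone free rider does even better, at the others' expense.
print(payoff([0, 10, 10, 10]))   # [25.0, 15.0, 15.0, 15.0]
```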
This finding was old news, but Rand and his colleagues wanted to know how much deliberation went into such acts of generosity. So in two of the experiments, subjects were prodded to think intuitively or deliberately; in two others, half of the subjects were forced to make their decision under time pressure and half were not; and in the rest, subjects could go at their own pace and some naturally made their decisions faster than others. If your morning commute is any evidence, you might expect people in a hurry to be extra selfish. But the opposite was true: Those who responded quickly gave more. Conversely, when people took their time to deliberate or were encouraged to contemplate their choice, they gave less.
The researchers worked under the assumption that snap judgments reveal our intuitive impulses. Our intuition, apparently, is to cooperate with others. Selfish behavior comes from thinking too much, not too little. Rand recently verified this finding in a meta-analysis of 51 similar studies from different research groups. “Most people think we are intuitively selfish,” Rand says, based on a survey he conducted, but “our lab experiments show that making people rely more on intuition increases cooperation.”
The cooperative impulse isn’t confined to an artificial experimental setting. In another paper, Rand and Ziv Epstein of Pomona College studied interviews with 51 recipients of the Carnegie Hero Medal, who had demonstrated extreme altruism by risking their lives to save others. Study participants read the interviews and rated the medalists on how much their thinking seemed intuitive versus deliberative. And intuition dominated. “I’m thankful I was able to act and not think about it,” explained a college student who rescued a 69-year-old woman from a car during a flash flood.
So Rand made a strong case that people are intuitive cooperators, but he considered these findings just the start. It’s one thing to put forward an idea and some evidence for it — lots of past researchers have done that. It’s quite another to describe and explain that idea in a rigorous, mathematical fashion. Ironically, Rand figured he could make better sense of humans by stepping away from studying real ones.
The overwhelming majority of psychological theories are verbal: explanations of the ways people act using everyday language, with maybe a few terms of art thrown in. But words can be imprecise. It may be true that “cooperation is intuitive,” but when is it intuitive? And what exactly does “intuitive” mean? The fuzziness of psychological ideas makes them hard to test. If an experimental result doesn’t fit your theory of human behavior, you can fiddle with the definitions and claim you were right all along.
Rand has sought to create quantitative models. “Science is about developing theories,” he says, “not about developing a list of observations. And the reason formal models are so important is that if your goal is theory-building, then it’s essential that you have theories that are really clearly articulated and are falsifiable.”
To do that, he has developed computer simulations of society — The Sims, basically. These models represent collections of individual people described by computer “agents,” algorithms that capture a specific package of traits, such as a tendency to cooperate or not. You can do controlled experiments on these computerized citizens that would be impossible or unethical to do with real people. You can endow them with new personalities to see how they’d fare. You can observe social processes in action, on time scales ranging from seconds to generations, instead of just taking a snapshot of a person or group. You can watch the spread of certain behaviors throughout a population and how they influence other behaviors. Over time, the patterns that emerge can tell you things about large-scale social interaction that a lab experiment with a few real people never could.

One of the first such models, in the early 1970s, studied housing segregation. It represented a city as a 16-by-13 grid of squares, populated by two types of people: stars and circles. Each star would move to the nearest location in which at least half its neighbors were also stars — it had a slight bias to be among similar others. Circles did the same. Even these mild biases led quickly to stark segregation, with all-star and all-circle regions of the board — a much more extreme partitioning than any one agent sought. The researcher, the economist Thomas Schelling, used his model to help explain racial segregation in American cities. A neighborhood can splinter into homogeneous patches even when individual residents are hardly prejudiced at all. (Of course, in reality, segregation also reflects outright racism and explicit policies of exclusion.) Schelling’s work became a case study of how a group’s collective behavior can diverge from the desires of any one agent.
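A toy version of Schelling’s dynamic is easy to simulate. The grid size, the density of empty cells, and the 0.5 similarity threshold below are illustrative assumptions, not Schelling’s exact setup; the point is simply that mild individual preferences produce stark collective sorting.

```python
import random

# Stars and circles occupy a grid with some empty cells. An agent that
# finds fewer than half of its neighbors similar relocates to a random
# empty cell (a simplification of "nearest satisfying location").
random.seed(0)
SIZE = 10
grid = [[random.choice(['*', 'o', None]) for _ in range(SIZE)]
        for _ in range(SIZE)]
n_stars = sum(row.count('*') for row in grid)
n_circles = sum(row.count('o') for row in grid)

def neighbors(r, c):
    """Yield the occupants of the up-to-eight adjacent cells."""
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if (dr, dc) != (0, 0) and 0 <= r + dr < SIZE and 0 <= c + dc < SIZE:
                occupant = grid[r + dr][c + dc]
                if occupant is not None:
                    yield occupant

def unhappy(r, c):
    ns = list(neighbors(r, c))
    return bool(ns) and sum(n == grid[r][c] for n in ns) / len(ns) < 0.5

def step():
    """Relocate every unhappy agent; return how many moved."""
    moved = 0
    for r in range(SIZE):
        for c in range(SIZE):
            if grid[r][c] is not None and unhappy(r, c):
                empties = [(i, j) for i in range(SIZE) for j in range(SIZE)
                           if grid[i][j] is None]
                i, j = random.choice(empties)
                grid[i][j], grid[r][c] = grid[r][c], None
                moved += 1
    return moved

def mean_similarity():
    """Average fraction of like-typed neighbors across all agents."""
    fracs = []
    for r in range(SIZE):
        for c in range(SIZE):
            if grid[r][c] is None:
                continue
            ns = list(neighbors(r, c))
            if ns:
                fracs.append(sum(n == grid[r][c] for n in ns) / len(ns))
    return sum(fracs) / len(fracs)

before = mean_similarity()
for _ in range(30):          # fixed cap; the toy model usually settles sooner
    if step() == 0:
        break
print(f"similarity before: {before:.2f}, after: {mean_similarity():.2f}")
```

Running it shows the average neighborhood becoming more homogeneous than any single agent demanded, which is the heart of Schelling’s result.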
Such models have also been used to explore cooperation. In an influential paper in 1981, the political scientist Robert Axelrod programmed agents to play a simple game called the Prisoner’s Dilemma. Two players have to decide whether to cooperate with or betray the other, and they receive points based on their choices. The scoring system is set up to mimic an essential dilemma of social life. Together the players perform best if they both cooperate, yet each can maximize his or her own individual outcome, at the expense of the other, by acting selfishly. The game takes its name from a scenario in which the police interrogate two thieves, offering each a reward for ratting out his or her accomplice. The thieves aren’t able to communicate to reach a joint decision; they have to make their decisions independently. Acting rationally, each should rat out the other. But when they both act “rationally,” they actually end up with the most combined jail time.
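The dilemma’s logic can be made concrete with the conventional illustrative payoffs (temptation 5, mutual cooperation 3, mutual defection 1, lone cooperator 0); the article itself names no numbers.

```python
# My points for (my move, their move); 'C' cooperate, 'D' defect.
PD = {('C', 'C'): 3, ('C', 'D'): 0,
      ('D', 'C'): 5, ('D', 'D'): 1}

# Whatever the partner does, defecting scores more for me individually...
assert PD[('D', 'C')] > PD[('C', 'C')]
assert PD[('D', 'D')] > PD[('C', 'D')]
# ...yet mutual defection leaves both worse off than mutual cooperation.
assert 2 * PD[('C', 'C')] > 2 * PD[('D', 'D')]
```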
The game gets more interesting — and more analogous to real life — when you play multiple rounds with the same partner. Here, repeated cooperation is best not just for both partners as a unit but also for each individually. You can still occasionally double-cross your partner for extra points, however, as long as it doesn’t trigger later betrayal.
What is the best strategy, then? To find out, Axelrod solicited Prisoner’s Dilemma strategies from mathematicians, biologists, economists, political scientists, computer scientists, and physicists from around the world. Axelrod programmed his computerized agents with these strategies and made them play a round-robin tournament. Some strategies were quite sophisticated, but the winner was a simple one called tit-for-tat.
Tit-for-tat resembles human reciprocity. It starts with cooperation and, after that, does whatever the other player did on the previous round. An agent using the strategy extends an olive branch at first. If its opponent reciprocates, it keeps cooperating. But if its opponent double-crosses it, the tit-for-tat agent rescinds its peace offering until its opponent makes amends.
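Here is a minimal sketch of tit-for-tat in repeated play, again with the conventional illustrative payoffs rather than anything from Axelrod’s actual tournament.

```python
# Payoffs as (my points, their points) for (my move, their move).
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opponent_moves):
    # open with cooperation, then mirror the opponent's previous move
    return 'C' if not opponent_moves else opponent_moves[-1]

def always_defect(opponent_moves):
    return 'D'

def play(strategy_a, strategy_b, rounds=10):
    score_a = score_b = 0
    seen_by_a, seen_by_b = [], []   # each side's record of the other's moves
    for _ in range(rounds):
        a = strategy_a(seen_by_a)
        b = strategy_b(seen_by_b)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        seen_by_a.append(b)
        seen_by_b.append(a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # steady cooperation: (30, 30)
print(play(tit_for_tat, always_defect))  # one betrayal, then retaliation: (9, 14)
```

Against a fellow reciprocator, tit-for-tat locks into cooperation; against a pure defector, it loses only the opening round and then refuses to be exploited again.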

By combining the short-term temptation to be selfish with the long-term benefits of collaboration, the Prisoner’s Dilemma is an ideal model for human cooperation, and Rand has built on Axelrod’s work to understand why evolution might have favored intuitive selflessness.
Rand and his grad student Adam Bear considered a variant of the Prisoner’s Dilemma in which matchups were either one-shot or multiple-round, chosen at random. The computerized agents faced a tough choice. In a one-off, they would score more points by betraying their opponent, whereas in repeated play cooperation made more sense. But the uncertainty made it unclear which strategy was best. Rand and Bear then added a twist. An agent could elect to pay some points at the start of an encounter—representing the efforts of deliberation—to suss out what kind of matchup it would face, so that it could tailor its strategy.
The agent had to decide whether the advantage of foreknowledge outweighed its cost. The price of the tip-off varied randomly, and each agent was programmed with a maximum price it would agree to pay; if the price exceeded that amount, the agent did not receive any advance information and instead chose some default behavior, following its “intuition.” In this way, the simulation allowed for different personality types. Some agents intuitively cooperated, others intuitively betrayed. Some occasionally deliberated, others didn’t.
Is deliberation helpful? That’s not immediately obvious. Intuitive thinking is fast but inflexible. Deliberative thinking can achieve better outcomes but takes time and energy. To see which strategy excelled in the long run, Rand and Bear’s model simulated a process of evolution. A large population of agents played the game with one another and either proliferated or died depending on how well they did. This process can model either genetic evolution or cultural evolution, in which the weak players don’t actually die, but merely adopt stronger strategies through imitation.
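A heavily simplified sketch of such an evolutionary simulation follows. Every parameter here is an assumption for illustration, and, unlike Bear and Rand’s model, the toy payoffs depend only on an agent’s own move and the game type, not on the partner’s move.

```python
import random

random.seed(1)

P_REPEATED = 0.8            # chance that a matchup is multi-round (assumed)
PAYOFF = {('C', 'repeated'): 4, ('D', 'repeated'): 1,   # cooperation pays off
          ('C', 'oneshot'): 1, ('D', 'oneshot'): 3}     # in repeated play only

def make_agent():
    # an intuitive default plus the top price the agent will pay to deliberate
    return {'default': random.choice('CD'), 'max_cost': random.random()}

def lifetime_score(agent, games=50):
    total = 0.0
    for _ in range(games):
        game = 'repeated' if random.random() < P_REPEATED else 'oneshot'
        cost = random.random()
        if cost <= agent['max_cost']:
            # deliberate: pay the cost, learn the game type, play the best move
            move = 'C' if game == 'repeated' else 'D'
            total += PAYOFF[(move, game)] - cost
        else:
            total += PAYOFF[(agent['default'], game)]   # fall back on intuition
    return total

population = [make_agent() for _ in range(100)]
for _ in range(50):
    ranked = sorted(population, key=lifetime_score, reverse=True)
    # cultural evolution: the bottom half imitates the top half's strategies
    population = ranked[:50] + [dict(a) for a in ranked[:50]]

defaults = [a['default'] for a in population]
print(defaults.count('C'), "of", len(population), "agents now default to cooperation")
```

With matchups mostly repeated, agents whose intuition is to cooperate tend to spread through the population, echoing (in miniature) the paper’s result that a cooperative default paired with optional deliberation is the winning combination.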
Typically, one strategy swept through the population and replaced the alternatives. This victorious strategy depended on the precise parameters of the game. For example, Rand and Bear varied the probability that matchups would be single- or multiple-round. When most were multi-round, the winning agents defaulted to cooperating but deliberated if the price was right and switched to betrayal if they found they were in a one-shot game. But when most were one-shots, the agents that prevailed were no longer willing to pay to deliberate at all. They simply double-crossed their opponents. In other words, the model produced either wary cooperation or uncompromising betrayal.
This outcome was notable for what was missing. Agents that always cooperated usually died off completely. Likewise, almost no set of game parameters favored agents that defaulted to the double-cross but were sometimes willing to deliberate. Bear and Rand stared at this asymmetry for several weeks, baffled.
Finally, they had a breakthrough. They realized that when your default is to betray, the benefits of deliberating — seeing a chance to cooperate — are uncertain, depending on what your partner does. With each partner questioning the other, and each partner factoring in the partner’s questioning of oneself, the suspicion compounds until there’s zero perceived benefit to deliberating. If your default is to cooperate, however, the benefits of deliberating — occasionally acting selfishly — accrue no matter what your partner does, and therefore deliberation makes more sense.
So, it seems there is a firm evolutionary logic to the human instinct to cooperate but adjust if necessary — to trust but verify. We ordinarily cooperate with other people, because cooperation brings us benefits, and our rational minds let us decipher when we might occasionally gain by acting selfishly instead.
The model also ties up a loose end from Rand’s earlier studies of public-goods games. In that research, time pressure caused some people to cooperate more, but never caused anyone to cooperate less. This asymmetry now makes sense. The only people who would have shown that behavior were those who were willing to deliberate, but defaulted to betrayal; the time pressure would bring out their Machiavellian inclinations. Evidently such people are rare. If someone is deep-down selfish, rational deliberation will only make them more so. And the evolutionary model shows why. Defectors who have qualms are quickly winnowed out by genetic or cultural evolution.
When it comes to getting people to cooperate more, Rand’s work brings good news. Our intuitions are not fixed at birth. We develop social heuristics, or rules of thumb for interpersonal behavior, based on the interactions we have. Change those interactions and you change behavior.
Rand, Nowak, and Greene tested that idea in their 2012 paper. They asked some subjects whether they’d ever played such economics games before. Those with previous experience didn’t become more generous when asked to think intuitively; they’d apparently become accustomed to the anonymous nature of such games and learned a new intuition. Unfortunately, it was a cynical one: They could get away with mooching off others. Similarly, subjects who reported that they couldn’t trust most of the people in their lives also didn’t become more generous when acting on intuition. It’s possible we’re born with a tendency to cooperate, but frequent cooperation (with beneficial results) is required to sustain our benevolence.
Happily, even the Grinch can expand his heart by three sizes, as Rand demonstrates in a recent study. First, he had test subjects play the Prisoner’s Dilemma for about 20 minutes with a variety of opponents. For half of the subjects, the average game lasted eight rounds, meaning cooperation was the best strategy; for half, the average game lasted a single round, which discouraged cooperation. Afterward, everyone played a public-goods game. Those stewed in cooperation gave significantly more money in the second phase of the experiment than did those without it. In less than half an hour, their intuitions had shifted.
How do you encourage cooperation in places where cooperation isn’t the norm? Corporate America comes to mind. “In a lot of situations people are basically rewarded for backstabbing and ladder-climbing,” Rand says. Rand and Bear’s modeling paper, in which intuitive defectors don’t trust each other enough even to consider whether cooperation would pay off, points to an answer. Rand suggests that, at least at first, incentives could come from above, so that the benefits of cooperating don’t depend solely on whether one’s partner cooperates. Companies might offer bonuses and recognition for helpful behavior. Once cooperation becomes a social heuristic, people will begin to cooperate when it benefits them, but also even when it doesn’t. Selflessness will be the new norm.
When selflessness is the norm, encouraging people to make decisions quickly can bring out their better angels. Extensions of this research reveal that we see quick or unthinking acts of generosity as particularly revealing of kindness, and that people may even use this signal strategically. In recent work, Rand and his collaborators have shown that people are faster to make decisions to cooperate when they know someone is watching, as if aware that others will judge them by their alacrity. Among other puzzles, Rand is currently trying to untangle this apparent paradox—the strategic use of intuition.
Rand’s work offers a correction to those misanthropes who peer into the hearts of men and women and see shadows. Most of us are genuinely good. And if we’re not, we can be encouraged to be. The math is there.
Seeing life as a set of economics games, and cooperation as self-interest in disguise, may sound dismal, but it is not so distant from what you might call virtue. “When I’m nice to other people, I’m not doing it because of some kind of calculation. I’m doing it because it feels good,” Rand says. “And the reason it feels good, I argue, is that it is actually payoff maximizing in the long run.”
Rand then adds a crucial clarification. “It feels good to be nice — unless the other person is a jerk,” he says. “And then it feels good to be mean.”
Tit for tat indeed.


If you want to learn something about change, there is no better place to look than evolution. Nothing represents a continuous and unrelenting cycle of order, disorder, and reorder on a grander scale. For long periods of time, Earth is relatively stable. Sweeping changes—warming, cooling, or an asteroid falling from space, for example—occur. These inflection points are followed by periods of disruption and chaos. Eventually, Earth, and everything on it, regains stability, but that stability is somewhere new.
The more you define yourself by any one activity, the more fragile you become. If that activity doesn’t go well or something changes unexpectedly, you lose a sense of who you are. But with self-complexity, you develop multiple components to your identity.
We called them fairy rocks. They were just colorful specks of gravel—the kind you might buy for a fish tank—mixed into my preschool’s playground sand pit. But my classmates and I endowed them with magical properties, hunted them like treasure, and carefully sorted them into piles of sapphire, emerald, and ruby. Sifting the sand for those mystical gems is one of my earliest memories. I was no older than 3 at the time. My memory of kindergarten has likewise been reduced to isolated moments: tracing letters on tan paper with pink dashed lines; watching a movie about ocean creatures; my teacher slicing up a giant roll of parchment so we could all finger-paint self-portraits.
We’re surrounded by negativity everywhere we turn: the news we read, the social media we peruse, and the conversations we have and overhear. We absorb stress from our family, friends, and coworkers. And it’s taking a toll.
Watch what you say out loud. Negative language is particularly insidious and potent. Be mindful of what you’re thinking and saying. Yes, those around you influence you and your mood, but we have more control over our thoughts and feelings than anyone else. And what we say out loud also carries significant weight. According to Trevor Moawad, a mental conditioning coach who works primarily with elite athletes, it’s ten times more damaging to our sense of thriving if we verbalize a negative thought than if we just think it.
Manage your energy. You can also increase your resilience in the face of negativity and encourage thriving by exercising, eating well, and getting enough sleep — all things we know we’re supposed to do but we often fail to when we’re bombarded with negativity. When we exercise, our muscles pump “hope molecules” into our bodily systems that are good for our mental and physical health. You can amplify these effects by exercising outside, with others, or to music.
If Socrates was the wisest person in Ancient Greece, then large language models must be the most foolish systems in the modern world.
And bullshit is dangerous, warned Frankfurt. Bullshit is a greater threat to the truth than lies. The person who lies thinks she knows what the truth is, and is therefore concerned with the truth. She can be challenged and held accountable; her agenda can be inferred. The truth-teller and the liar play on opposite sides of the same game, as Frankfurt puts it. The bullshitter pays no attention to the game. Truth doesn’t even get confronted; it gets ignored; it becomes irrelevant.

I believe this with every fibre of my being and I live it, in myself and in witnessing it in others, in every way I can. So what are some of these superpowers that we’re seeing these days? I’ll try to encapsulate a few of the ones I’m aware of.
But in the second half of the 19th century, composers gradually began to deviate from a strict adherence to the principle of tonality, making it difficult to sense where the music stood in relation to the tonic. Schoenberg, believing that tonality had run its course, was determined to supplant it with the series, or tone row. In a series, each of the 12 notes of the chromatic scale of semitones appears exactly once; a note could be repeated only after the series had been completed. This gave a composer a staggering number of combinations to choose from: 1 x 2 x 3 x … x 12 = 479,001,600, to be exact (not counting shifts by octaves, which Schoenberg allowed). In serial music, complete democracy ruled: no single note held any preferred status over the others. Every note was related only to its immediate predecessor in the series; gone were the roles that different notes had played in relation to the tonic. At its heart it was a mathematical system, and Schoenberg was determined to impose it on music.
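The row count quoted here is just 12 factorial, easy to confirm:

```python
import math

# Twelve tones, each appearing exactly once per row: 12! orderings.
assert math.factorial(12) == 479_001_600
```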
Dr. James McGaugh remembers that day too. At the time, he was director of UC Irvine’s Center for the Neurobiology of Learning and Memory, the research institute that he founded in 1983. In her email, Jill Price said that she had a problem with her memory. McGaugh responded almost immediately, explaining that he worked at a research institute and not a clinic, and that he’d be happy to direct her to somewhere she could find help.
Still, he started from a position of scepticism. “In interrogating her, I started with the scientific assumption that she couldn’t do it,” he told me. And even though Price showed that she could, repeatedly, McGaugh was still unmoved. “Yeah, it got my attention, but I didn’t say, ‘Wow.’ We had to do a lot more. So we did a lot more.” (In Price’s recollection, however, her ability to remember “really freaked Dr McGaugh out.”)
In May 2012, the journal Neurobiology of Learning and Memory published a follow-up study by UCI neuroscience graduate student Aurora LePort and neurobiologist Dr Craig Stark, then the director of the UCI Center for the Neurobiology of Learning and Memory. It was now nearly 12 years since Price first reached out to McGaugh, but researchers were only fractionally closer to finding the answer she was looking for.
For both Price and Petrella, there is a specific point in their lives that they feel triggered their ability to remember things with extraordinary clarity. For Petrella, it was when he was seven years old and playing a deliriously fun game in his backyard with a childhood friend. The next day, Petrella invited his friend over to play it again, but they only played for a few minutes before getting bored. Petrella realised then that nothing ever stays the same and that it was important that he remember things before they changed. For Price, it was her family’s traumatic move to the West Coast. In each case, Price and Petrella say they already had strong memories before this decisive moment, but after it, their ability to remember was transformed.