How To Be Spiritual In A Material World

27 Sep 2024

The best (and worst) ways to spot a liar

 

Thomas Ormerod’s team of security officers faced a seemingly impossible task. At airports across Europe, they were asked to interview passengers on their history and travel plans. Ormerod had planted a handful of people arriving at security with a false history, and a made-up future – and his team had to guess who they were. In fact, just one in 1000 of the people they interviewed would be deceiving them. Identifying the liar should have been about as easy as finding a needle in a haystack.

 


So, what did they do? One option would have been to focus on body language or eye movements. It would have been a bad idea. Study after study has found that attempts – even by trained police officers – to read lies from body language and facial expressions are often little better than chance. According to one study, just 50 out of 20,000 people managed to make a correct judgement with more than 80% accuracy. Most people might as well just flip a coin.

Ormerod’s team tried something different – and managed to identify the fake passengers in the vast majority of cases. Their secret? To throw away many of the accepted cues to deception and start anew with some startlingly straightforward techniques.

Over the last few years, deception research has been plagued by disappointing results. Most previous work had focused on reading a liar’s intentions via their body language or from their face – blushing cheeks, a nervous laugh, darting eyes. The most famous example is Bill Clinton touching his nose when he denied his affair with Monica Lewinsky – taken at the time to be a sure sign he was lying. The idea, says Timothy Levine at the University of Alabama at Birmingham, was that the act of lying provokes some strong emotions – nerves, guilt, perhaps even exhilaration at the challenge – that are difficult to contain. Even if we think we have a poker face, we might still betray tiny flickers of movement known as “micro-expressions” that give the game away, they claimed.

 


Yet the more psychologists looked, the more elusive any reliable cues appeared to be. The problem is the huge variety of human behaviour. With familiarity, you might be able to spot someone’s tics whenever they are telling the truth, but others will probably act very differently; there is no universal dictionary of body language. “There are no consistent signs that always arise alongside deception,” says Ormerod, who is based at the University of Sussex. “I giggle nervously, others become more serious, some make eye contact, some avoid it.” Levine agrees: “The evidence is pretty clear that there aren’t any reliable cues that distinguish truth and lies,” he says. And although you may hear that our subconscious can spot these signs even if they seem to escape our awareness, this too seems to have been disproved.

Despite these damning results, our safety often still hinges on the existence of these mythical cues. Consider the screening some passengers might face before a long-haul flight – a process Ormerod was asked to investigate in the run-up to the 2012 Olympics. Typically, he says, officers will use a “yes/no” questionnaire about the flyer’s intentions, and they are trained to observe “suspicious signs” (such as nervous body language) that might betray deception. “It doesn’t give a chance to listen to what they say, and think about credibility, observe behaviour change – they are the critical aspects of deception detection,” he says. The existing protocols are also prone to bias, he says – officers were more likely to find suspicious signs in certain ethnic groups, for instance. “The current method actually prevents deception detection,” he says.

Clearly, a new method is needed. But given some of the dismal results from the lab, what should it be? Ormerod’s answer was disarmingly simple: shift the focus away from the subtle mannerisms to the words people are actually saying, gently probing the right pressure points to make the liar’s front crumble.

Ormerod and his colleague Coral Dando at the University of Wolverhampton identified a series of conversational principles that should increase your chances of uncovering deceit:

Use open questions. This forces the liar to expand on their tale until they become entrapped in their own web of deceit.

Employ the element of surprise. Investigators should try to increase the liar’s “cognitive load” – such as by asking them unanticipated questions that might be slightly confusing, or asking them to report an event backwards in time – techniques that make it harder for them to maintain their façade.

Watch for small, verifiable details. If a passenger says they are at the University of Oxford, ask them to tell you about their journey to work.

Observe changes in confidence. Watch carefully to see how a potential liar’s style changes when they are challenged: a liar may be just as verbose when they feel in charge of a conversation, but their comfort zone is limited and they may clam up if they feel like they are losing control.

The aim is a casual conversation rather than an intense interrogation. Under this gentle pressure, however, the liar will give themselves away by contradicting their own story, or by becoming obviously evasive or erratic in their responses. “The important thing is that there is no magic silver bullet; we are taking the best things and putting them together for a cognitive approach,” says Ormerod.

Liar vs liar

It takes one to know one

Ironically, liars turn out to be better lie detectors. Geoffrey Bird at University College London and colleagues recently set up a game in which subjects had to reveal true or false statements about themselves. They were also asked to judge each other’s credibility. It turned out that people who were better at telling fibs could also detect others’ tall tales, perhaps because they recognised the tricks.

Ormerod openly admits his strategy might sound like common sense. “A friend said that you are trying to patent the art of conversation,” he says. But the results speak for themselves. The team prepared a handful of fake passengers, with realistic tickets and travel documents. They were given a week to prepare their story, and were then asked to line up with other, genuine passengers at airports across Europe. Officers trained in Ormerod and Dando’s interviewing technique were more than 20 times more likely to detect these fake passengers than officers using the suspicious-signs method, finding them 70% of the time.

“It’s really impressive,” says Levine, who was not involved in this study. He thinks it is particularly important that they conducted the experiment in real airports. “It’s the most realistic study around.”

The art of persuasion

Levine’s own experiments have proven similarly powerful. Like Ormerod, he believes that clever interviews designed to reveal holes in a liar’s story are far better than trying to identify telltale signs in body language. He recently set up a trivia game, in which undergraduates played in pairs for a cash prize of $5 for each correct answer they gave. Unknown to the students, their partners were actors, and when the game master temporarily left the room, the actor would suggest that they quickly peek at the answers to cheat on the game. A handful of the students took him up on the offer.

 


Afterwards, the students were all questioned by real federal agents about whether or not they had cheated. Using tactical questions to probe their stories – without focusing on body language or other cues – they managed to find the cheaters with more than 90% accuracy; one expert was even correct 100% of the time, across 33 interviews – a staggering result that towers above the accuracy of body language analyses. Importantly, a follow-up study found that even novices managed to achieve nearly 80% accuracy, simply by using the right, open-ended questions that asked, for instance, how their partner would tell the story.

 

 

Indeed, often the investigators persuaded the cheaters to openly admit their misdeed. “The experts were fabulously good at this,” says Levine. Their secret was a simple trick known to masters in the art of persuasion: they would open the conversation by asking the students how honest they were. Simply getting them to say they told the truth primed them to be more candid later. “People want to think of being honest, and this ties them into being cooperative,” says Levine. “Even the people who weren’t honest had difficulty pretending to be cooperative [after this], so for the most part you could see who was faking it.”

 


Clearly, such tricks may already be used by some expert detectives – but given the folklore surrounding body language, it’s worth emphasising just how powerful persuasion can be compared to the dubious science of body language. Despite their successes, Ormerod and Levine are both keen that others attempt to replicate and expand on their findings, to make sure that they stand up in different situations. “We should watch out for big sweeping claims,” says Levine.

Although the techniques will primarily help law enforcement, the same principles might just help you hunt out the liars in your own life. “I do it with kids all the time,” Ormerod says. The main thing to remember is to keep an open mind and not to jump to early conclusions: just because someone looks nervous, or struggles to remember a crucial detail, does not mean they are guilty. Instead, you should be looking for more general inconsistencies.

There is no foolproof form of lie detection, but using a little tact, intelligence, and persuasion, you can hope that eventually, the truth will out.

 

 

Original article here


24 Sep 2024

Do You Talk to Yourself? Good.

Everyone has a few comforting quirks that they only indulge in behind closed doors. For some, it’s lying on the floor to relax. For others, it’s talking to themselves out loud. These unhinged habits might seem embarrassing, especially if you get caught in the act, but then you go on social media and realize there are dozens — and sometimes even hundreds of thousands — of other people just like you.

For anyone who yaps to themselves out loud, it’ll come as a relief to know there are nearly 300,000 posts about “talking to yourself” on TikTok. These videos show creators opening up about how much they love to chat with themselves, whether they’re muttering ideas out loud at home or having full-blown, interview-style conversations in their car.

This habit is way more common than you might think. “I’ve had clients come into sessions and admit they talk to themselves and worry it’s abnormal, but it’s really not,” says Lauren Auer, LCPC, a therapist and founder of Steadfast Counseling. “In fact, you might be surprised by how many people have internal — or external! — dialogues running throughout the day.”

Believe it or not, it’s also good for you. According to Auer, talking to yourself is an excellent way to process your thoughts, work through tough emotions, and find comfort when you’re stressed, but there are even more benefits to be had. Read on below for everything you need to know about this quirky little custom.

TikTokers Are Talking To Themselves

On TikTok, creator @good_mess_des joked that she likes to talk to herself because she knows she’ll always answer — and that’s honestly so real. In her comments, one person said, “I’m the funniest person I know. Of course I’m going to talk to myself” while another wrote, “I am my own consultant.”

Creator @y0rubangel also loves a one-sided yap sesh, so much so that she’ll put in headphones and talk to herself while walking. “Sometimes a girl just needs to talk,” she noted in her caption before over 4,000 commenters chimed in to validate her. One person said, “I do this all the time when I need to vent and get advice from a real one. (Myself.)” Another admitted they’ll even start laughing during their conversations.

While many chats are lighthearted and fun, others can feel like a true necessity, especially when you’re stressed. Creator @samherling said he talks to himself as a way to prevent overthinking. For him, it’s a positive coping mechanism. “I get relief by verbalizing what’s on my mind […] even though no one’s listening,” he said in a now-viral TikTok. He went on to compare the habit to journaling or speaking to a therapist.

According to Auer, these are all completely legitimate reasons to talk to yourself. “It’s one way your brain can make sense of things — it’s like thinking out loud,” she says. Instead of keeping it all inside, you’re giving your thoughts and feelings a place to go.

Talking to yourself can be cathartic, which is why so many people do it while they drive home from work. When you’ve had an annoying day, it feels good to vent and complain — without having to explain the details to a listener on the other end of the phone.

As one final perk, Auer says talking to yourself can also help you feel less lonely. This is why you might catch yourself having a one-sided conversation if you live alone, work from home, or are on a long car ride. That said, even people who live with roommates or a partner might slip away to indulge in a quick chat.

Here’s Why You Talk To Yourself Out Loud

According to Auer, this habit is especially common among verbal processors, aka people who need to say their thoughts out loud in order to fully understand them. “Hearing your thoughts spoken can clarify things in a way that just thinking can’t,” she says. “Sometimes hearing a problem spoken aloud also shifts your perspective and helps you figure out what to do next.”

It’s also a common habit amongst chronic over-thinkers and those with anxiety who might need to unleash pent-up thoughts and worries. On TikTok, many people speculate it’s a go-to for creative or introspective types, too. It’s not limited to one type of person, but some folks definitely do it more often than others.

Is It OK To Talk To Yourself Out Loud?

If you’re still wondering whether or not it’s OK to talk to yourself, Auer gives this habit the therapist’s seal of approval. “Talking to yourself out loud can be a very healthy coping mechanism,” she says. “It can also serve as a great alternative to venting to others, especially if you’re trying not to offload your stress onto friends or loved ones.”

If you’ve never tried it, allow yourself to talk to yourself out loud the next time you’re cooking dinner, taking a shower, or going for a walk. It’ll feel good to verbalize your emotions, validate your feelings, and keep yourself company.

 

 

Original article here


21 Sep 2024

A Mental Disease by Any Other Name

 

It starts without warning—or rather, the warnings are there, but your ability to detect them exists only in hindsight. First you’re sitting in the car with your son, then he tells you: “I cannot find my old self again.” You think, well, teenagers say dramatic stuff like this all the time. Then he’s refusing to do his homework, he’s writing suicidal messages on the wall in black magic marker, he’s trying to cut himself with a razor blade. You sit down with him; you two have a long talk. A week later, he runs home from a nighttime gathering at his friend’s apartment, he’s bursting through the front door, shouting about how his friends are trying to kill him. He spends the night crouching in his mother’s old room, clutching a stuffed animal to his chest. He’s 17 years old at this point, and you are his father, Dick Russell, a traveler, a former staff reporter for Sports Illustrated, but a father first and foremost. It is the turn of the 21st century.

Up until this point, your son, Frank, has been a fully functional kid, if somewhat odd. An eccentric genius, socially inept yet insightful—perhaps an artist in the making, you thought. Now you are being told your son’s quirks stem from pathology, his mystic phrases are not indications of creative genius but of neural networks misfiring. You sit with Frank as he receives his diagnosis, schizophrenia, and immediately all sorts of associations flood into your head. In the United States, a diagnosis of schizophrenia often means homelessness, joblessness, inability to maintain close relationships, and increased susceptibility to addiction. Your son is now dangling off this cliff. So you hand him over to the doctors, who prescribe him antipsychotics, and when he balloons up to 300 pounds, and they tell you he’s just being piggish, you believe them.

Had Frank been living someplace else, things might have turned out differently. In some countries, schizophrenics hold down jobs at five times the rate of American schizophrenics. In others, symptoms are interpreted as unusual powers.

Dick and his son tried a variety of treatments over 15 years, some more effective than others. Then, unexpectedly, the pair turned in a very different direction, beginning a journey that Dick now likens to a “torch-lit passageway through a long dark tunnel.” By sharing his story, he hopes to help others find this passageway—but he’s aware some of it sounds crazy. For instance: He now believes Frank might be a shaman.

Certain structures and regions in the brain are thought to be particularly important in constructing our sense of self. One is the meeting place between the two middle lobes of the brain: the temporal lobe, which translates sight and hearing into language, emotion, and memory, and the parietal lobe, which integrates all five senses to locate the body in space. This region, called the temporoparietal junction, or TPJ, assembles information from these and other lobes into a mental representation of one’s physical body, and its place in space and time. It also plays a role in what’s called theory of mind, the ability to recognize your thoughts and desires as your own and to understand that other people have mental states that are separate from yours.

When the TPJ is altered or disturbed, putting oneself together becomes difficult and sometimes painful. Body Dysmorphic Disorder, characterized by extreme preoccupation with imagined physical defects, is thought to arise from faulty TPJ interplay. Researchers see atypical TPJ activity in Alzheimer’s patients, Parkinson’s patients, and amnesiacs.

 


Schizophrenia is intimately related to TPJ messiness. It affects theory of mind; schizophrenic patients often believe that others harbor animosity toward them, and when they perform mental tasks related to theory of mind, their TPJ activity either spikes or crashes. Researchers have induced the kind of ghostly visions and out-of-body experiences that some schizophrenics experience, simply by stimulating the TPJ with electrodes. The psychiatrist Lot Postmes calls this “perceptual incoherence,” noting that the jumbled assortment of sensory information leads to an untethering of the ego, what he calls “a normal sense of self, as a feeling of unitary entity, the ‘I,’ that owns and authors its thoughts, emotions, body, and actions.”

Having a dissolved self can make it immensely hard for a schizophrenic person to present a coherent picture of themselves to the world, and to relate to other, more gelled selves. “Schizophrenia is a disease whose main manifestations are sufferers’ [diminished] abilities to engage in social interactions,” says Matcheri S. Keshavan, a psychiatrist at Harvard Medical School and an expert on schizophrenia. And yet, ironically, people with schizophrenia need others just as much as socially capable people do, if not more. “A problem with schizophrenia is however much they want [social interactions], they often lose the skills needed for navigating them,” Keshavan says.

This craving for social connection puts schizophrenic people in stark contrast to people diagnosed with autism spectrum disorder (ASD). In 2008, Bernard Crespi, a biologist at Simon Fraser University, in Canada, and Christopher Badcock, a sociologist at the London School of Economics, theorized that ASD and schizophrenia were opposite sides of the same coin. “Social cognition,” they wrote, is “underdeveloped in autism, but hyper-developed to dysfunction in psychosis (schizophrenia).” In other words, where an autistic person’s sense of self is cripplingly narrow, schizophrenics’ selves are cripplingly expansive: They believe they are many people at once, and see motive and meaning everywhere.

As difficult as they may be to live with, these perceptual distortions can make schizophrenic people more creative. Schizophrenics tend to see themselves as more imaginative than others, and to embark on more artistic projects. Numerous people with schizophrenia have said that their creative thoughts and delusions come from the same source: Poet Rainer Maria Rilke refused treatment for his visions, saying “don’t take my devils away, because my angels may flee, too.” The author Stephen Mitchell, who translated many of Rilke’s works, put it this way: “He was dealing with an existential problem opposite from the one that most of us need to resolve: Whereas we find a thick, if translucent, barrier between self and other, he was often without even the thinnest differentiating membrane.”

Frank Russell reported feeling something similar. “He told me he feels like a mirror, reflecting what’s inside people,” his father, Dick, writes. “It was hard for him to sort out what’s him and what’s them.” And Frank, Dick reports, is highly creative. He draws, paints, and welds. He invents languages out of made-up “hieroglyphs” and archetypal symbols. He composes long poems about god and racial tension, and has won numerous awards for his poetry at school.

And yet Frank’s strange preoccupation with symbols, his belief he could become Chinese or shift into a bear, made social interaction awkward and difficult. He spent the 10 years following his initial diagnosis mostly isolated, largely incapable of forming long-lasting relationships or joining group activities. Apart from doctors, the only consistent people in Frank’s life were his parents. That was before they met Malidoma Patrice Somé.

According to the World Health Organization, schizophrenia is universal. “So far, no society or culture anywhere in the world has been found free from … this puzzling illness,” states a 1997 report. A diagnosis of schizophrenia considers the combination of five symptoms, as well as their impact and duration: 1) Delusions, 2) Hallucinations, 3) Disorganized speech, 4) Grossly disorganized or catatonic behavior, and 5) Negative symptoms like affective flattening (restricted emotional expressiveness), alogia (diminished capacity for speech), or avolition (lack of initiative). But the WHO cautions that these criteria are to be taken with a grain of salt—“current operationalized diagnostic systems, while undoubtedly very reliable, leave the question of validity unanswered in the absence of external validating criteria,” the report notes. Diagnosis “should therefore be considered a provisional tool,” set to organize treatment plans while “leav[ing] the door open to future developments.”

Terms of diagnosis are in constant flux. “It keeps changing over time,” says Keshavan. “We’re doing research to develop better biomarkers, but it’s still complicated.” Robert Rosenheck, a psychiatrist at Yale University who studies the cost efficiency of various treatment models for schizophrenia, goes even further. “Usually with medicine, the whole idea is that you have illnesses with a medical basis, a physiological basis. We don’t have that for schizophrenia.”

 


Adding to the complexity, schizophrenia looks different across cultures. Several studies by the World Health Organization have compared outcomes of schizophrenia in the U.S. and Western Europe with outcomes in developing nations like Ghana and India. After following patients for five years, researchers found that those in developing countries fared “considerably better” than those in the developed countries. In one study, nearly 37 percent of patients diagnosed with schizophrenia in developing countries were asymptomatic after two years, compared to only 15.5 percent in the U.S. and Europe. In India, about half of people diagnosed with schizophrenia are able to hold down jobs, compared to only 15 percent in the U.S.

Many researchers have theorized that these counterintuitive findings stem from a key cultural difference: developing countries tend to be collectivistic or interdependent, meaning the predominant mindset is community-oriented. Developed countries, on the other hand, are usually individualistic—autonomy and self-motivated achievement are considered the norm. Other variables in developing countries can, at times, complicate this dichotomy—for instance, the relative scarcity of medication, and other cultural factors such as stigma. However, one study of “sociocentric” differences between ethnic minority groups within the U.S. found results suggesting that “certain protective aspects of ethnic minority culture”—namely, the prevalence of two collectivist values, empathy and social competence—“result in a more benign symptomatic expression of schizophrenia.”

“Take a young man with schizophrenia who’s socially unable to engage,” Keshavan says. “In a collectivistic culture, he’s still able to survive in a joint family with a less fortunate brother or cousin … he’ll feel supported and contained. Whereas in a more individualistic society, he’ll feel let go, and not particularly included. For that reason, schizophrenia tends to be highly disabling [in individualistic countries].” Individualist cultures also “[diminish] motivation to acknowledge illness and seek help from others, whether from therapists or in clinics or residential programs,” notes Russell Schutt, a leading expert on the sociology of schizophrenia.

Outcomes across cultures may also be affected by differences in the patients themselves. In 2012, Shihui Han, a neuroscientist at Peking University, asked volunteers from a traditionally interdependent country (China) and a more independent one (Denmark) to think about various people, all while monitoring activity in the TPJ. In both groups, the TPJ lit up when they attempted to infer other people’s thought processes, a theory of mind task. But in Chinese participants, the TPJ also activated when they thought about themselves. In Danish participants, the medial prefrontal cortex, which the researchers used to measure the degree of self-reflection, lit up more than in the Chinese participants. In essence, Chinese subjects’ sense of self was blurrier on average, in a way that directly affected the area of the brain implicated in schizophrenic symptoms.

In Han’s study, the average TPJ activity levels of people from the traditionally interdependent country looked closer to those of schizophrenic patients. Other researchers, including Chiyoko Kobayashi Frank at the School of Psychology at Fielding Graduate University in Santa Barbara, have theorized that diminished activity in the TPJ area in Japanese adults and children during theory of mind tasks “might represent the demoted sense of self-other distinction in the Japanese culture.” This shows up in how both populations perceive the world differently: People from collectivistic countries are more likely to believe in God, and to attend to context in images, while people from individualistic countries are likely to ignore context in favor of the image’s primary focus. This implies schizophrenics are less likely to be doubted or stigmatized for their visions in collectivistic cultures, and thus, they are less likely to feel what Schutt calls “socially-generated stress”—which, he notes, “has biological effects that can exacerbate symptoms of mental illness.”

Malidoma is from a collectivist society. Born into the Dagara tribe in Burkina Faso, he is the grandson of a renowned healer; he travels around the world but is based in the U.S. Malidoma sees himself as a bridge between his culture and the United States, existing to “bring the wisdom of our people to this part of the world.” Malidoma’s “career”—he chuckles at the term—is some combination of cultural ambassador, homeopath, and sage. He travels the country doing rituals and consultations, writing books, and giving speeches. He has three master’s degrees and two doctorates from Brandeis University. Sometimes he calls himself a “shaman,” because people know what that means (sort of) and it’s similar to his title back in Burkina Faso—a titiyulo, one who “constantly inquires with other dimensions.”

Dick first heard about Malidoma through James Hillman, a Jungian psychologist whose biography he was writing, at a time when Frank’s treatment had stagnated. For most of his 20s, Frank lived in group homes. His favorite, called Earth House, was a privately owned home and far more structured than his other group homes. It offered classes, provided leadership opportunities, and fostered a loving, caring environment. Frank made close friends and acted in plays. Dick was elated: For the first time since Frank fell ill, his life was full of friends and purpose.

 


It’s because of reactions like this (and because communities help remind patients to take their meds) that community has emerged as a necessary dimension of schizophrenia treatment in Western medicine. In a review of 66 studies, researchers at the University of Santiago, in Chile, found that “community-based psychosocial interventions significantly reduced negative and psychotic symptoms, days of hospitalization, and substance abuse.” Patients were more likely to take medication regularly, hold jobs, and have friends. They were also less likely to feel ashamed of themselves. Similar results have been found in the U.S.

But for Frank and Dick, there was a problem. A spot in Earth House cost $20,000 per quarter—a price Dick, a lifelong journalist, could not afford for very long. After 16 months of Earth House subsidized by friends and family, he decided to stop “postponing the inevitable.” Dick drove Frank back to Boston and put him in a less structured group home, where, over time, he seemed to deteriorate.

It was around then, in 2012, that Dick decided to seek out Malidoma: first speaking with him over the telephone, then meeting him in Ojai, a small town outside Los Angeles, and then, a year later, in Jamaica, this time bringing Frank along.

When Malidoma first met Frank one evening over dinner in Jamaica, he recognized the man’s likeness to himself instantly. “The connection we had was immediately clear,” he says. As soon as schizophrenic met shaman, the latter shook his head and clasped Frank’s hands as if they’d known each other for years. He told Dick that Frank was “like a colleague!” Malidoma believes that Frank is a U.S. version of a titiyulo; in fact, there is a version of a titiyulo in pretty much every culture, he says. He also believes that one cannot choose to become a titiyulo: It happens to you. “Every shaman started with a crisis similar to those here who are called schizophrenic, psychotic. Shamanism or titiyulo journeys begin with a breakdown of the psyche,” he says. “One day they’re fine, normal, like everyone else. The next day they’re acting really weird and dangerously toward themselves and the village”—seeing and hearing things that aren’t there, acting paranoid, shouting.

When this happens, the Dagara people begin a collective effort to heal the broken-down person; one marked by loud rituals involving dancing and cheering and with an underlying current of celebration. Malidoma remembers watching his sister go through it. “My sister was screaming into the night,” he says, “but people were playing around her.” Usually, the uncontrollable breakdowns last about eight months, after which effectively new people emerge. “You have to go through this radical initiation where you can become the larger than life person the community needs for their own benefit, you know?” If the broken-down person does not have a community around him or her, Malidoma says, he or she may fail to heal. He believes this is what happened to Frank.

Had Frank been born into the Dagara tribe, and experienced the same breakdown at age 17 that led him to run from his friend’s apartment, Malidoma tells me that the community would have immediately rallied around him, performing the same rituals that his sister experienced. Following this intervention, his tribe members would begin the work of healing Frank and re-integrating him back into the community; once he was ready, he’d receive a prominent position. “He’d be known as a man of spirit, who’d be able to provide insight into the deep problems of the people around them,” he says.

 


Malidoma is not the first person to posit a link between shamanism and schizophrenia. The psychiatrist Joseph Polimeni wrote an entire book on the subject, called Shamans Among Us. There, Polimeni noted several connections: Both shamans and schizophrenic people believe they have magical abilities, hear voices, and have out-of-body experiences. Shamans become shamans in their late teens to early 20s, about the same age range at which schizophrenia is typically diagnosed in men (17-25). Both schizophrenics and shamans are more commonly male than female. And the prevalence of shamans (roughly one per 60-150 people, the estimated size of most early human communities) is similar to the global prevalence of schizophrenia (around 1 percent).

 

 

This theory isn’t widely supported. Critics note that shamans appear to be able to enter and exit their shamanic states at will, while schizophrenic people have no control over their visions. But Robert Sapolsky, a neurobiologist at Stanford University, has hypothesized a similar and more widely accepted theory: Many spiritual leaders, like shamans and prophets, may have “schizotypal personality disorder.” People with this diagnosis are often relatives of schizophrenics who possess milder versions of some symptoms, such as peculiar ways of speaking or “metamagical” thinking, which is linked to creativity and high IQ. This profile sounds like it may fit Malidoma, who never experienced a “break” but whose brother and sister both did.

Whether or not Frank’s psychosis would have made him a shaman in another time or place, three central factors are present in the Dagara tribal intervention (early intervention, community, and purpose) that parallel the three factors that Keshavan, Schutt, Rosenheck, and others cite as complements to pharmaceutical drugs: early intervention, community support, and employment. Dick had perhaps missed the boat on the Dagara initiation ceremony, but Malidoma advised Dick to incorporate other aspects of his approach into his son’s life, including rituals and other purposeful activities.

After returning from Jamaica back home to Boston, Frank kept in touch with Malidoma by phone. He and Dick traveled to the homes and clinics of various alternative healers, who met Frank’s delusions with warmth and encouragement. Dick started to encourage his son more, too. When Frank asked Dick to include some of his thoughts in the memoir he was writing, including the idea that “one additive to beer is molten dolphin sweat,” Dick dutifully complied. Rather than provoke more delusional behavior in Frank, Dick says, these experiences have had a “grounding effect.” They show him he has friends and family who respect who he is and all that he’s capable of. “If some of [Franklin’s] dreams exist only in the imaginative realm, so be it,” Dick wrote in his memoir, My Mysterious Son: A Life-Changing Passage Between Schizophrenia and Shamanism. “I’ve learned the importance of this for him.”

The effects have been profound. Years ago, before meeting Malidoma, Frank was less motivated to seek social encounters. At 37 he took trips to New Mexico and Maine, and took classes in mechanical engineering. Today, he is a remarkably inventive jazz pianist. His room is filled with his paintings and metalwork, rife with archetypal imagery and hieroglyphic languages of his own creation.

He’s not cured. He still occasionally hears voices and harbors delusions. And he still lives in a group home. But he has been able to cut his medication in half again. He has lost weight, and his diabetes has become asymptomatic. He’s more courteous, alert, and engaged, his father and doctors say. He still has bad days, but they are fewer and farther between.

Perhaps the biggest factor motivating Frank’s improvement, however, has been the shift in how he views himself. No longer simply a madman, he is a painter and a poet, a traveler and a friend, an African and an American, a welder and a student.

And, most recently, a shaman. In February 2018, Frank, Dick, and Frank’s mother visited Malidoma’s tribe in Burkina Faso, where Frank took part in healing rituals. He spent four weeks living in the village before returning home to Boston in early March. Dick and Malidoma are loath to disclose details of the ceremony, and say only that Frank’s response to the rituals gave them hope.

The experience also shifted Dick’s perception. “I never expected to be conducting spiritual water rituals at the ocean,” he said. But that’s what he did, and, in the course of helping his son recover, he found that his own views of sickness and health had shifted. “To the extent that psychosis involves the creation of an alternate reality, the goal is to enter that world. There’s also a recognition that the world we think of as real is actually infused with aspects of the Other—that there is a mysterious impenetration or even an underlying unity.”

As for traditional medicine’s take? To the best of Dick’s knowledge, scientists haven’t studied a case exactly like Frank’s.

 

 

Original article here


16 Sep 2024

Are Your Morals Too Good to Be True?

I spent the summer of 2011 as an undergraduate researcher at the Rocky Mountain Biological Laboratory, in Colorado. My job was to collect burying beetles—necrophagous critters with wing cases the colors of Halloween—using traps made out of coffee cans and chicken flesh. Behavioral biologists are fascinated by burying beetles because of their biparental model of care: males and females prepare meaty balls from carcasses and then coöperatively raise larvae on them. I was a matchmaker, charged with setting up pairs of beetles and watching them co-parent.

That summer was a dream. I lived in a community of more than a hundred scientists, students, and staff. The research station, based at the site of a deserted mining town, was a magnet for weirdos and plant lovers, naturalists and marmot chasers, flower people and climate watchers. It consisted of dozens of cabins—some Lincoln Log style and more than a century old, others retrofitted into modern laboratories—encircled by spruce and aspen forests, montane meadows, and monumental peaks. I was more accustomed to sidewalks than to summits, but now I saw elk and black bears and woke up one night to a porcupine gnawing on my cabin. For the first time in my life, I found love, or something close to it. In spare moments, I retired to my room, where I drew and wrote in my journal. On the weekends, we scaled the Rockies.

A part of me wanted that summer to be my forever. I envisioned a career as a collector of coleopterans, sneaking off to the mountains to cavort and observe. Another part of me worried that I was being frivolous. Back in college, my classmates had high-minded ambitions like fighting climate change, becoming human-rights lawyers, and starting microfinance firms to alleviate poverty. To spend time with books and beetles in wildflower country seemed the pinnacle of self-indulgence. Adding to the internal tension was something I’d observed among my beetles: the spectre of evolved selfishness. What looked like coöperation was, I discovered, laced with sexual conflict. The female beetles, when they had a size advantage, ejected their male partners; the males evidently stuck around less to help than to insure future mating opportunities. Where I first saw biparental collaboration was instead a complicated waltz of organisms seeking to perpetuate their own interests. Was I one of them—another gene machine bent on favoring itself?

I had, to that point, considered myself a mostly decent person, moved by empathy and committed to self-expression. Was all this actually vanity and delusion, selfishness masquerading as morality? The prospect was unsettling. So I hid away in a one-room library that smelled faintly of old textbooks and the alcohol used to preserve animal specimens, and there I started to work out a response. We’re evolved organisms, I figured, but we’re also an intelligent, cultural species capable of living by ideals that transcend our egoistic origins. What emerged from my musings was a personal ideology, at the core of which was an appreciation of creation—including artistic and scientific work. Even an awkward scribble, I supposed, expresses an incomprehensibly epic causal history, which includes a maker, the maker’s parents, the quality of the air in the room, and so on, until it expands to encompass the entire universe. Goodness could be reclaimed, I thought. I would draw and write and do science but as acts of memorialization—the duties of an apostle of being. I called the ideology Celebrationism, and, early in 2012, I started to codify it in a manic, sprawling novel of that name.

I had grown up a good Sikh boy: I wore a turban, didn’t cut my hair, didn’t drink or smoke. The idea of a god that acted in the world had long seemed implausible, yet it wasn’t until I started studying evolution in earnest that the strictures of religion and of everyday conventions began to feel brittle. By my junior year of college, I thought of myself as a materialist, open-minded but skeptical of anything that smacked of the supernatural. Celebrationism came soon after. It expanded from an ethical road map into a life philosophy, spanning aesthetics, spirituality, and purpose. By the end of my senior year, I was painting my fingernails, drawing swirling mehndi tattoos on my limbs, and regularly walking without shoes, including during my college graduation. “Why, Manvir?” my mother asked, quietly, and I launched into a riff about the illusory nature of normativity and about how I was merely a fancy organism produced by cosmic mega-forces.

After college, I spent a year in Copenhagen, where I studied social insects by day and worked on “Celebrationism” the rest of the time. Reassured of the virtue of intellectual and artistic work, I soon concluded that fictional wizards provided the best model for a life. As I wrote to my friend Cory, “They’re wise, eccentric, colorful, so knowledgeable about some of the most esoteric subjects, lone wolves in a sense, but all of their life experience constantly comes together in an exalting way every time they do something.” When, the following year, I started a Ph.D. in human evolutionary biology at Harvard, I saw the decision as in service of my Celebrationist creed. I could devote myself to meditating on the opportune swerves that produced us.

I was mistaken. Celebrationism died soon afterward. Just as observation and a dose of evolutionary logic revealed male burying beetles not as attentive fathers but as possessive mate guarders, the natural and behavioral sciences deflated my dreamy credo, exposing my lofty aspirations as performance and self-deception. I struggled, unsuccessfully, to construct a new framework for moral behavior which didn’t look like self-interest in disguise. A profound cynicism took hold.

Skepticism about objective morality is nothing new, of course. Michel de Montaigne, in the sixteenth century, remarked that “nothing in all the world has greater variety than law and custom,” a sign, for him, of the nonexistence of universal moral truths—and he had predecessors among the ancient Greeks. David Hume chimed in, two centuries later, to argue that judgments of right and wrong emanate from emotion and social conditioning, not the dispassionate application of reason. Even the more pious-sounding theorists, including Kant and Hegel, saw morality as something that we derive through our own thinking, our own rational will. The war between science and religion in the nineteenth century brought it all to a head, as a supernatural world view became supplanted by one that was more secular and scientific, in a development that Nietzsche described as the death of God. As the pillars of Christian faith crumbled, Western morality seemed poised to collapse. Nihilism loomed. “But how did we do this?” the madman in Nietzsche’s “The Gay Science” asks. “How could we drink up the sea? Who gave us the sponge to wipe away the entire horizon? What were we doing when we unchained the earth from its sun?”

Nietzsche’s response to a godless world was a moral makeover: individuals were to forge their own precepts and act in accordance with them. More than a century later, such forays have matured into an individualist morality that has become widespread. We behave morally, we often say, not because of doctrine but because of our higher-order principles, such as resisting cruelty or upholding the equality of all humans. Rather than valuing human life because an omnipotent godhead commands it, or because our houses of worship instruct it, we do so because we believe it is right.

At its core, this view of morality assumes a kind of moral integrity. Although some people may embrace principles for self-interested ends, the story goes, genuine altruism is possible through reasoned reflection and an earnest desire to be ethical. I told myself a version of this story in the Rockies: rummage through your soul and you can find personally resonant principles that inspire good behavior. The Harvard psychologist Lawrence Kohlberg turned a model like this into scholarly wisdom in the nineteen-sixties and seventies, positioning it as the apex of the six stages of moral development he described. For the youngest children, he thought, moral goodness hinges on what gets rewarded and punished. For actualized adults, in contrast, “right is defined by the decision of conscience in accord with self-chosen ethical principles appealing to logical comprehensiveness, universality, and consistency.”

All this may sound abstract, but it is routine for most educated Westerners. Consider how moral arguments are made. “Animal Liberation Now” (2023), the Princeton philosopher Peter Singer’s reboot of his 1975 classic, “Animal Liberation,” urges readers to emancipate nonhuman animals from the laboratory and the factory farm. Singer assumes that people are committed to promoting well-being and minimizing suffering, and so he spends most of the book showing, first, that our actions create hellish existences for many of our nonhuman brethren and, second, that there is no principled reason to deny moral standing to fish or fowl. His belief in human goodness is so strong, he admits, that he expected everyone who read the original version of his book “would surely be convinced by it and would tell their friends to read it, and therefore everyone would stop eating meat and demand changes to our treatment of animals.”

From an evolutionary perspective, this could seem an odd expectation. Humans have been fashioned by natural selection to pursue sex, status, and material resources. We are adept at looking out for ourselves. We help people, yes, but the decision to give is influenced by innumerable selfish considerations, including how close we are to a recipient, whether they’ve helped us before, how physically attractive they are, whether they seem responsible for their misfortune, and who might be watching. A Martian observer might, accordingly, have expected Singer’s arguments to focus less on the horrific conditions of overcrowded pig farms and instead to appeal to our hedonic urges—more along the lines of “Veganism makes you sexy” or “People who protest animal experimentation have more friends and nicer houses than their apathetic rivals.”

But Singer has always known his audience. Most people want to be good. Although “Animal Liberation Now” is largely filled with gruesome details, it also recounts changes that growing awareness has spurred. At least nine states have passed legislation limiting the confinement of sows, veal calves, and laying hens. Between 2005 and 2022 in the U.S., the proportion of hens that were uncaged rose from three per cent to thirty-five per cent, while Yum! Brands—the owner of such fast-food franchises as KFC, Taco Bell, and Pizza Hut, with more than fifty thousand locations around the world—has vowed to phase out eggs from caged hens by 2030. These changes are a microcosm of the centuries-long expansion of moral concern that, throughout much of the world, has ended slavery and decriminalized homosexuality. Could there be a clearer instance of genuine virtue?

I wasn’t yet thinking about any of this when I started graduate school. Instead, my mind was on monkeys. I had proposed studying the Zanzibar red colobus, a creature notable for retaining juvenile traits like a short face and a small head into adulthood. Our species underwent a similar juvenilization during our evolution, and the hope was that something might be learned about our past by studying this peculiar primate.

Still, I couldn’t read about monkeys all day. To start a Ph.D. at a major research university is to have proximity to countless intellectual currents, and I began to drift through the scholarly worlds on campus, which is how I found Moshe Hoffman. Moshe is intense. A curly-haired game theorist with a scalpel-like ability to dissect arguments and identify their logical flaws, Moshe was raised in an Orthodox Hasidic community in Los Angeles. He grew up wearing a kippah and spending half of each school day studying the Talmud and other religious texts until, at the age of fifteen, he forsook his faith. He had a chance conversation with an atheist classmate, then picked up Richard Dawkins’s “The Selfish Gene.” The book exposed him to game theory and evolutionary biology, setting him on a lifelong quest to solve the puzzles of human behavior.

When we met, near the end of my first year, Moshe was a postdoctoral researcher fixated on the nature of trust. We all depend on trust, yet it works in tricky ways. On the one hand, we trust people who are guided by consistent ethical precepts. I’d rather go to dinner with someone deeply opposed to stealing than a jerk who pockets my valuables as soon as I get up to pee. On the other hand, we’re turned off when people’s commitments seem calculated. The ascent of terms like “slacktivism,” “virtue signalling,” and “moral grandstanding” bespeaks a frustration with do-gooders motivated more by acclaim than by an internal moral compass. The idea is that, if you’re in it for the reputational perks, you can’t be relied on when those perks vanish. In “The Social Instinct” (2021), Nichola Raihani, who works on the evolution of coöperation, refers to this issue as the “reputation tightrope”: it’s beneficial to look moral but only as long as you don’t seem motivated by the benefits.

Moshe argued that humans deal with this dilemma by adopting moral principles. Through learning or natural selection, or some combination, we’ve developed a paradoxical strategy for making friends. We devote ourselves to moral ends in order to garner trust. Which morals we espouse depends on whose trust we are courting. He demonstrated this through a series of game-theoretic models, but you don’t need the math to get it. Everything that characterizes a life lived by moral principles—consistently abiding by them, valuing prosocial ends, refusing to consider costs and benefits, and maintaining that these principles exist for a transcendental reason—seems perfectly engineered to make a person look trustworthy.

His account identifies showmanship, conscious or otherwise, in ostensibly principled acts. We talk about moral principles as if they were inviolate, but we readily consider trade-offs and deviate from those principles when we can get away with it. Philip Tetlock, who works at the intersection of political science and psychology, labels our commitments “pseudo-sacred.” Sure, some people would die for their principles, yet they often abandon them once they gain power and no longer rely on trust. In “Human Rights in Africa” (2017), the historian Bonny Ibhawoh showed that post-colonial African dictators often started their careers as dissidents devoted to civil liberties.

Moshe wasn’t alone in this work. Around the time that he and I began chatting, researchers at Oxford, the École Normale Supérieure, and elsewhere were disrobing morality and finding performance underneath. Jillian Jordan, then a graduate student at Yale and now on the faculty of Harvard Business School, conducted a series of landmark studies demonstrating how people instinctively use moral behavior to cultivate laudable personas. A 2016 paper in the Proceedings of the National Academy of Sciences—which Jordan wrote with Moshe and two other researchers—studied uncalculating coöperation, the tendency to willfully ignore costs and benefits when helping others. It’s a key feature of both romantic love and principled behavior. The authors found not only that “coöperating without looking” (a phrase of Moshe’s) attracts trust but that people engage in it more when trying to win observers’ confidence. The motivations that we find so detestable—moral posturing for social rewards—may, in fact, be the hallmark of moral action.

Invested as I was in my own goodness, whether achieved or aspirational, I found Moshe’s ideas both alarming and mesmeric. To engage with them was to look in a mirror and find a sinister creature staring back. The more I sought Moshe out—first by taking a course he co-taught, then by meeting up for Indian food after class, then by working as his teaching assistant—the more I felt trapped within my self-interest. Celebrationism was exposed as a beautiful lie. The search for personally resonant principles was reinterpreted as a tactic not to overcome self-interest but to advance it. Any dignified motivations that had once held sway—making art for art’s sake, acting to minimize suffering—became smoke screens to distract others from my selfishness. Here were hard truths that I felt compelled to confront. I wanted to escape the performance, to adopt values for reasons other than their social utility, but even that urge, I recognized, reflected the same strategic impulse to appear good and consistent. It was like forcing yourself to wake up from a dream only to realize that you’re still dreaming.

Yes, there were venerable antecedents to all these arguments, but what had once been the province of the provocateur was now something of a scholarly consensus. The new, naturalistic study of morality stemmed from an array of converging disciplines and approaches, spanning sociology, biology, anthropology, and psychology. It was set forth in popular books like Matt Ridley’s “The Origins of Virtue” (1996), Joshua Greene’s “Moral Tribes” (2013), and Richard Wrangham’s “The Goodness Paradox” (2019). Not everyone in this field understands ethical behavior the way Moshe does. Still, they tend to employ a framework grounded in evolutionary theory—one that casts morality as a property of our primate brains and little else. Appeals to pure selflessness have become harder to defend, while a belief in objective moral truths—existing apart from our minds and discoverable through impartial judgment—has grown increasingly untenable.

Darwin himself sensed the implications. In “The Descent of Man” (1871), he suggested that studying the “moral sense” from “the side of natural history” would throw “light on one of the highest psychical faculties of man.” It took another hundred years for scholars of evolution to appreciate the extent to which a Darwinian world view can explain morality. By the beginning of the twenty-first century, philosophers like Sharon Street, at N.Y.U., were taking note. “Before life began, nothing was valuable,” Street wrote in a now classic article. “But then life arose and began to value—not because it was recognizing anything, but because creatures who valued (certain things in particular) tended to survive.” In other words, moral tenets—such as the rightness of loyalty or the wrongness of murder—do not exist unless natural selection produces organisms that value them.

In recent decades, all sorts of philosophers have added to the pool of adaptive theories about morality. Allan Gibbard argues that moral statements (“Killing is bad”) actually express attitudes (“I don’t like killing”), allowing us to coördinate on shared prescriptions (“No one shall kill”). Philip Kitcher sees ethics as an ever-evolving project invented by our remote ancestors and continually refined to help societies flourish. Richard Joyce has proposed that moral judgments help keep us out of trouble. Given normal human hedonism, we may struggle to stop ourselves from, say, stealing a brownie; the feeling that it’s morally wrong provides us with an emotional bulwark. Non-moral explanations like these, whatever their differences, obviate talk of moral truths, construing such truths as dreamlike delusions.

Like the decline of religion, what’s often called the evolutionary debunking of morality can induce existential panic and strenuous efforts at circumvention. The eminent philosopher Derek Parfit, the subject of a recent biography by David Edmonds, spent decades writing “On What Matters,” a book that sought both to build a unified theory of objective morality and to defend it against challengers, including evolution-inspired skeptics. In 2015, at N.Y.U., Parfit and Street taught a course together on meta-ethics. On the last day of class, a student asked them whether they had learned anything from their collaboration. “My memory is that both of us said ‘No!’ ” Street told Edmonds. “He thought my position was nihilistic. He was worried about it being true and felt it needed beating back with arguments.”

What troubled me was less the notion that morality was our own creation than the implication that our motives were suspect—that evolutionarily ingrained egoism permeated our desires, including the desire to overcome that selfishness. Sincerity, I concluded, was dead. Just as the natural sciences had killed the Christian God, I thought, the social and behavioral sciences had made appeals to virtuous motivations preposterous. I became skeptical of all moral opinions, but especially of the most impassioned ones, which was a problem, because I was dating someone who had a lot of them. (It didn’t work out.) A close friend, a punk physicist with whom I often went dancing late at night, found my newfound cynicism hard to relate to, and we drifted apart.

Many theorists are skeptical of such skepticism. When I asked people on X how they had dealt with evolutionary debunking, Oliver Scott Curry, a social scientist at Oxford and the research director at Kindlab, which studies the practice of kindness, warned me not to confuse the selfishness of genes with the nature of our motivations, which apparently are more gallant. He was echoing a distinction often drawn between a behavior’s “ultimate” causes, which concern why it evolved, and its “proximate” causes, which include psychological and physiological mechanisms. The cognition underpinning moral judgment may have evolved to make us look good, these scholars grant, but that doesn’t count against its sincerity. In “Optimally Irrational” (2022), the University of Queensland economist Lionel Page explains, “There is no contradiction between saying that humans have genuine moral feelings and that these feelings have been shaped to guide them when playing games of social interactions.”

Such arguments make sense to some degree. An impulse can exist because of its evolutionary utility but still be heartfelt. The love I feel for my spouse functions to propagate my genes, but that doesn’t lessen the strength of my devotion. Why couldn’t this shift in perspective rescue goodness for me? A major reason is that the proximate-ultimate distinction leaves intact the unsavory aspects of human motivation. As anyone who has spent more than twenty minutes on social media can attest, humans are remarkably attentive to which moral proclamations garner esteem and attention. We weigh the status implications of claiming different principles. It’s true that we often assure ourselves otherwise and even internalize positions once we espouse them enough. Yet this fact didn’t redeem moral sincerity for me; it corrupted it.

I eventually ditched monkeys. Humans, complicated and enculturated, had a stronger appeal than tiny-headed primates. After my first year of graduate school, in 2014, I travelled to the Indonesian island of Siberut and stayed with its Indigenous inhabitants, the Mentawai. I returned for two more months in 2015 and then spent much of 2017 with a Mentawai community, studying traditions of justice, healing, and spirituality. As I learned more of the language, I saw how rarely Mentawai people invoke abstract concepts of right and wrong. Instead, they reason about duties and responsibilities in a way that seems both blatantly self-interested and refreshingly honest, one I’ve since adopted when speaking with them.

A Mentawai man who had previously worked for me as a research assistant once asked me over WhatsApp to help pay his school fees. I agreed but then struggled to wire the money from the United States, and he was forced to drop out. When I visited again in 2020, I handed him a wad of cash. “Why are you doing this?” he asked. My reply came automatically: “Because, if I don’t pay you, people will think that I don’t keep my promises.” He nodded. The answer made sense.

How does one exist in a post-moral world? What do we do when the desire to be good is exposed as a self-serving performance and moral beliefs are recast as merely brain stuff? I responded by turning to a kind of nihilism, yet this is far from the only reaction. We could follow the Mentawai, favoring the language of transaction over virtue. Or we could carry on as if nothing has changed. Richard Joyce, in his new book, “Morality: From Error to Fiction,” advocates such an approach. His “moral fictionalism” entails maintaining our current way of talking while recognizing that it refers to nothing real; a major benefit of this language, after all, is that it makes you likable. If you behave the way I did in grad school, going on about the theatre of morality, you will, he suggests, only attract censure and wariness. Better to blend in.

Intellectually, I find the proposal hard to swallow. The idea of cosplaying moral commitment for social acceptance would surely magnify whatever dissonance I already feel. Still, a decade after my first meeting with Moshe, experience forces me to acknowledge Joyce’s larger point. It’s easy to inhabit the fiction.

I still accept that I am a selfish organism produced by a cosmic mega-force, drifting around in a bedlam of energy and matter and, in most respects, not so very different from the beetles I scrutinized during that summer in Colorado. I still see the power in Moshe’s game-theory models. Traces of unease linger. But I no longer feel unmoored. A sense of meaning has re-established itself. Tressed, turbanned, and teetotalling, I am, at least by all appearances, still a good Sikh. I have become a teacher, a husband, and a father to a new baby daughter. When she smiles, a single dimple appears in her left cheek. Her existence feels more ecstatic and celebratory than any ideology I could have conceived, and I hope that she’ll one day grow up to be empathetic and aware of others’ suffering. I have moral intuitions, sometimes impassioned ones. I try to do right by people, and, on most days, I think I do an O.K. job. I dream on. ♦

Original article here