How To Be Spiritual In A Material World

21 Sep 2024
Comments: 0

A Mental Disease by Any Other Name

 

It starts without warning—or rather, the warnings are there, but your ability to detect them exists only in hindsight. First you’re sitting in the car with your son, then he tells you: “I cannot find my old self again.” You think, well, teenagers say dramatic stuff like this all the time. Then he’s refusing to do his homework, he’s writing suicidal messages on the wall in black magic marker, he’s trying to cut himself with a razor blade. You sit down with him; you two have a long talk. A week later, he runs home from a nighttime gathering at his friend’s apartment, he’s bursting through the front door, shouting about how his friends are trying to kill him. He spends the night crouching in his mother’s old room, clutching a stuffed animal to his chest. He’s 17 years old at this point, and you are his father, Dick Russell, a traveler, a former staff reporter for Sports Illustrated, but a father first and foremost. It is the turn of the 21st century.

Up until this point, your son, Frank, has been a fully functional kid, if somewhat odd. An eccentric genius, socially inept yet insightful—perhaps an artist in the making, you thought. Now you are being told your son’s quirks stem from pathology, his mystic phrases are not indications of creative genius but of neural networks misfiring. You sit with Frank as he receives his diagnosis, schizophrenia, and immediately all sorts of associations flood into your head. In the United States, a diagnosis of schizophrenia often means homelessness, joblessness, inability to maintain close relationships, and increased susceptibility to addiction. Your son is now dangling off this cliff. So you hand him over to the doctors, who prescribe him antipsychotics, and when he balloons up to 300 pounds, and they tell you he’s just being piggish, you believe them.

Had Frank been living someplace else, things might have turned out differently. In some countries, schizophrenics hold down jobs at five times the rate of American schizophrenics. In others, symptoms are interpreted as unusual powers.

Dick and his son tried a variety of treatments over 15 years, some more effective than others. Then, unexpectedly, the pair turned in a very different direction, beginning a journey that Dick now likens to a “torch-lit passageway through a long dark tunnel.” By sharing his story, he hopes to help others find this passageway—but he’s aware some of it sounds crazy. For instance: He now believes Frank might be a shaman.

Certain structures and regions in the brain are thought to be particularly important in constructing our sense of self. One is the meeting place between the two middle lobes of the brain: the temporal lobe, which translates sight and hearing into language, emotion, and memory, and the parietal lobe, which integrates all five senses to locate the body in space. This region, called the temporoparietal junction, or TPJ, assembles information from these and other lobes into a mental representation of one’s physical body, and its place in space and time. It also plays a role in what’s called theory of mind, the ability to recognize your thoughts and desires as your own and to understand that other people have mental states that are separate from yours.

When the TPJ is altered or disturbed, putting oneself together becomes difficult and sometimes painful. Body Dysmorphic Disorder, characterized by extreme preoccupation with imagined physical defects, is thought to arise from faulty TPJ interplay. Researchers see atypical TPJ activity in Alzheimer’s patients, Parkinson’s patients, and amnesiacs.

 

“Don’t take my devils away, because my angels may flee, too.”

 

Schizophrenia is intimately related to TPJ messiness. It affects theory of mind; schizophrenic patients often believe that others harbor animosity toward them, and when they perform mental tasks related to theory of mind, their TPJ activity either spikes or crashes. Researchers have induced the kind of ghostly visions and out-of-body experiences that some schizophrenics experience simply by stimulating the TPJ with electrodes. The psychiatrist Lot Postmes calls this “perceptual incoherence,” noting that the jumbled assortment of sensory information untethers the ego, disrupting “a normal sense of self, as a feeling of unitary entity, the ‘I,’ that owns and authors its thoughts, emotions, body, and actions.”

Having a dissolved self can make it immensely hard for a schizophrenic person to present a coherent picture of themselves to the world, and to relate to other, more gelled selves. “Schizophrenia is a disease whose main manifestations are sufferers’ [diminished] abilities to engage in social interactions,” says Matcheri S. Keshavan, a psychiatrist at Harvard Medical School and an expert on schizophrenia. And yet, ironically, people with schizophrenia need others just as much as socially capable people do, if not more. “A problem with schizophrenia is however much they want [social interactions], they often lose the skills needed for navigating them,” Keshavan says.

This craving for social connection puts schizophrenic people in stark contrast with people diagnosed with Autism Spectrum Disorder (ASD). In 2008, Bernard Crespi, a biologist at Simon Fraser University, in Canada, and Christopher Badcock, a sociologist at the London School of Economics, theorized that ASD and schizophrenia were opposite sides of the same coin. “Social cognition,” they wrote, is “underdeveloped in autism, but hyper-developed to dysfunction in psychosis (schizophrenia).” In other words, where an autistic person’s sense of self is cripplingly narrow, schizophrenics’ selves are cripplingly expansive: They believe they are many people at once, and see motive and meaning everywhere.

As difficult as they may be to live with, these perceptual distortions can make schizophrenic people more creative. Schizophrenics tend to see themselves as more imaginative than others, and to embark on more artistic projects. Numerous people with schizophrenia have said that their creative thoughts and delusions come from the same source: Poet Rainer Maria Rilke refused treatment for his visions, saying, “Don’t take my devils away, because my angels may flee, too.” The author Stephen Mitchell, who translated many of Rilke’s works, put it this way: “He was dealing with an existential problem opposite from the one that most of us need to resolve: Whereas we find a thick, if translucent, barrier between self and other, he was often without even the thinnest differentiating membrane.”

Frank Russell reported feeling something similar. “He told me he feels like a mirror, reflecting what’s inside people,” his father, Dick, writes. “It was hard for him to sort out what’s him and what’s them.” And Frank, Dick reports, is highly creative. He draws, paints, and welds. He invents languages out of made-up “hieroglyphs” and archetypal symbols. He composes long poems about god and racial tension, and has won numerous awards for his poetry at school.

And yet Frank’s strange preoccupation with symbols, his belief he could become Chinese or shift into a bear, made social interaction awkward and difficult. He spent the 10 years following his initial diagnosis mostly isolated, largely incapable of forming long-lasting relationships or joining group activities. Apart from doctors, the only consistent people in Frank’s life were his parents. That was before they met Malidoma Patrice Some.

According to the World Health Organization, schizophrenia is universal. “So far, no society or culture anywhere in the world has been found free from … this puzzling illness,” states a 1997 report. A diagnosis of schizophrenia considers the combination of five symptoms, as well as their impact and duration: 1) Delusions, 2) Hallucinations, 3) Disorganized speech, 4) Grossly disorganized or catatonic behavior, and 5) Negative symptoms like affective flattening (restricted emotional expressiveness), alogia (diminished capacity for speech), or avolition (lack of initiative). But the WHO cautions that these criteria are to be taken with a grain of salt—“current operationalized diagnostic systems, while undoubtedly very reliable, leave the question of validity unanswered in the absence of external validating criteria,” the report notes. Diagnosis “should therefore be considered a provisional tool,” set to organize treatment plans while “leav[ing] the door open to future developments.”

Terms of diagnosis are in constant flux. “It keeps changing over time,” says Keshavan. “We’re doing research to develop better biomarkers, but it’s still complicated.” Robert Rosenheck, a psychiatrist at Yale University who studies the cost efficiency of various treatment models for schizophrenia, goes even further. “Usually with medicine, the whole idea is that you have illnesses with a medical basis, a physiological basis. We don’t have that for schizophrenia.”

 

If the broken-down person does not have a community around him or her, he or she may fail to heal.

 

Adding to the complexity, schizophrenia looks different across cultures. Several studies by the World Health Organization have compared outcomes of schizophrenia in the U.S. and Western Europe with outcomes in developing nations like Ghana and India. After following patients for five years, researchers found that those in developing countries fared “considerably better” than those in the developed countries. In one study, nearly 37 percent of patients diagnosed with schizophrenia in developing countries were asymptomatic after two years, compared to only 15.5 percent in the U.S. and Europe. In India, about half of people diagnosed with schizophrenia are able to hold down jobs, compared to only 15 percent in the U.S.

Many researchers have theorized that these counterintuitive findings stem from a key cultural difference: developing countries tend to be collectivistic or interdependent, meaning the predominant mindset is community-oriented. Developed countries, on the other hand, are usually individualistic—autonomy and self-motivated achievement are considered the norm. Other variables in developing countries can, at times, complicate this dichotomy—for instance, the relative scarcity of medication, and other cultural factors such as stigma. However, one study of “sociocentric” differences between ethnic minority groups within the U.S. found results suggesting that “certain protective aspects of ethnic minority culture”—namely, the prevalence of two collectivist values, empathy and social competence—“result in a more benign symptomatic expression of schizophrenia.”

“Take a young man with schizophrenia who’s socially unable to engage,” Keshavan says. “In a collectivistic culture, he’s still able to survive in a joint family with a less fortunate brother or cousin … he’ll feel supported and contained. Whereas in a more individualistic society, he’ll feel let go, and not particularly included. For that reason, schizophrenia tends to be highly disabling [in individualistic countries].” Individualist cultures also “[diminish] motivation to acknowledge illness and seek help from others, whether from therapists or in clinics or residential programs,” notes Russell Schutt, a leading expert on the sociology of schizophrenia.

Outcomes across cultures may also be affected by differences in the patients themselves. In 2012, Shihui Han, a neuroscientist at Peking University, asked volunteers from a traditionally interdependent country (China) and a more independent one (Denmark) to think about various people, all while monitoring activity in the TPJ. In both groups, the TPJ lit up when they attempted to infer other people’s thought processes, a theory of mind task. But in Chinese participants, the TPJ also activated when they thought about themselves. In Danish participants, the medial prefrontal cortex, which the researchers used to measure the degree of self-reflection, lit up more than it did in the Chinese participants. In essence, Chinese subjects’ sense of self was blurrier on average, in a way that directly affected the area of the brain implicated in schizophrenic symptoms.

In Han’s study, the average TPJ activity levels of people from the traditionally interdependent country looked closer to those of schizophrenic patients. Other researchers, including Chiyoko Kobayashi Frank at the School of Psychology at Fielding Graduate University in Santa Barbara, have theorized that diminished activity in the TPJ area in Japanese adults and children during theory of mind tasks “might represent the demoted sense of self-other distinction in the Japanese culture.” These differences show up in how the two kinds of populations perceive the world: People from collectivistic countries are more likely to believe in God, and to attend to context in images, while people from individualistic countries are likely to ignore context in favor of the image’s primary focus. This implies schizophrenics are less likely to be doubted or stigmatized for their visions in collectivistic cultures, and thus, they are less likely to feel what Schutt calls “socially-generated stress”—which, he notes, “has biological effects that can exacerbate symptoms of mental illness.”

Malidoma is from a collectivist society. Born into the Dagara tribe in Burkina Faso, he is the grandson of a renowned healer; he is now based in the U.S. but travels around the world. Malidoma sees himself as a bridge between his culture and the United States, existing to “bring the wisdom of our people to this part of the world.” Malidoma’s “career”—he chuckles at the term—is some combination of cultural ambassador, homeopath, and sage. He travels the country doing rituals and consultations, writing books, and giving speeches. He has three master’s degrees and two doctorates from Brandeis University. Sometimes he calls himself a “shaman,” because people know what that means (sort of) and it’s similar to his title back in Burkina Faso—a titiyulo, one who “constantly inquires with other dimensions.”

Dick first heard about Malidoma through James Hillman, a Jungian psychologist whose biography he was writing, at a time when Frank’s treatment had stagnated. For most of his 20s, Frank lived in group homes. His favorite, called Earth House, was privately owned and far more structured than his other group homes. It offered classes, provided leadership opportunities, and fostered a loving, caring environment. Frank made close friends and acted in plays. Dick was elated: For the first time since Frank fell ill, his life was full of friends and purpose.

 

Both shamans and schizophrenic people believe they have magical abilities, hear voices, and have out-of-body experiences.

 

It’s because of responses like this (and because communities help remind patients to take their meds) that community has emerged as a necessary dimension of schizophrenia treatment in Western medicine. In a review of 66 studies, researchers at the University of Santiago, in Chile, found that “community-based psychosocial interventions significantly reduced negative and psychotic symptoms, days of hospitalization, and substance abuse.” Patients were more likely to take medication regularly, hold jobs, and have friends. They were also less likely to feel ashamed of themselves. Similar results have been found in the U.S.

But for Frank and Dick, there was a problem. A spot in Earth House cost $20,000 per quarter—a price Dick, a lifelong journalist, could not afford for very long. After 16 months of Earth House subsidized by friends and family, he decided to stop “postponing the inevitable.” Dick drove Frank back to Boston and put him in a less structured group home, where, over time, he seemed to deteriorate.

It was around then, in 2012, that Dick decided to seek out Malidoma: first speaking with him over the telephone, then meeting him in Ojai, a small town outside of Los Angeles, and then, a year later, in Jamaica, this time bringing Frank along.

When Malidoma first met Frank one evening over dinner in Jamaica, he recognized the man’s likeness to himself instantly. “The connection we had was immediately clear,” he says. As soon as schizophrenic met shaman, the latter shook his head and clasped Frank’s hands as if they’d known each other for years. He told Dick that Frank was “like a colleague!” Malidoma believes that Frank is a U.S. version of a titiyulo; in fact, there is a version of a titiyulo in pretty much every culture, he says. He also believes that one cannot choose to become a titiyulo: It happens to you. “Every shaman started with a crisis similar to those here who are called schizophrenic, psychotic. Shamanism or titiyulo journeys begin with a breakdown of the psyche,” he says. “One day they’re fine, normal, like everyone else. The next day they’re acting really weird and dangerously toward themselves and the village”—seeing and hearing things that aren’t there, acting paranoid, shouting.

When this happens, the Dagara people begin a collective effort to heal the broken-down person; one marked by loud rituals involving dancing and cheering and with an underlying current of celebration. Malidoma remembers watching his sister go through it. “My sister was screaming into the night,” he says, “but people were playing around her.” Usually, the uncontrollable breakdowns last about eight months, after which effectively new people emerge. “You have to go through this radical initiation where you can become the larger than life person the community needs for their own benefit, you know?” If the broken-down person does not have a community around him or her, Malidoma says, he or she may fail to heal. He believes this is what happened to Frank.

Had Frank been born into the Dagara tribe, and experienced the same breakdown at age 17 that led him to run from his friend’s apartment, Malidoma tells me that the community would have immediately rallied around him, performing the same rituals that his sister experienced. Following this intervention, his tribe members would begin the work of healing Frank and reintegrating him into the community; once he was ready, he’d receive a prominent position. “He’d be known as a man of spirit, who’d be able to provide insight into the deep problems of the people around them,” he says.

 

No longer simply a madman, he is a painter and a poet, a traveler and a friend.

 

Malidoma is not the first person to posit a link between shamanism and schizophrenia. The psychiatrist Joseph Polimeni wrote an entire book on the subject, called Shamans Among Us. There, Polimeni noted several connections: Both shamans and schizophrenic people believe they have magical abilities, hear voices, and have out-of-body experiences. Shamans become shamans in their late teens to early 20s, about the same age range in which schizophrenia is typically diagnosed in men (17-25). Both schizophrenics and shamans are more commonly male than female. And the prevalence of shamans (roughly one per 60-150 people, the estimated size of most early human communities) is similar to the global prevalence of schizophrenia (around 1 percent).

 

 

This theory isn’t widely supported. Critics note that shamans appear to be able to enter and exit their shamanic states at will, while schizophrenic people have no control over their visions. But Robert Sapolsky, a neurobiologist at Stanford University, has hypothesized a similar and more widely accepted theory: Many spiritual leaders, like shamans and prophets, may have “schizotypal personality disorder.” People with this diagnosis are often relatives of schizophrenics who possess milder versions of some symptoms, such as peculiar ways of speaking or “metamagical” thinking, which is linked to creativity and high IQ. This profile sounds like it may fit Malidoma, who never experienced a “break” but whose brother and sister both did.

Whether or not Frank’s psychosis would have made him a shaman in another time or place, the Dagara tribal intervention rests on three central factors (early intervention, community, and purpose) that parallel the three factors that Keshavan, Schutt, Rosenheck, and others cite as complements to pharmaceutical drugs: early intervention, community support, and employment. Dick had perhaps missed the boat on the Dagara initiation ceremony, but Malidoma advised him to incorporate other aspects of his approach into his son’s life, including rituals and other purposeful activities.

After returning from Jamaica back home to Boston, Frank kept in touch with Malidoma by phone. He and Dick traveled to the homes and clinics of various alternative healers, who met Frank’s delusions with warmth and encouragement. Dick started to encourage his son more, too. When Frank asked Dick to include some of his thoughts in the memoir he was writing, including the idea that “one additive to beer is molten dolphin sweat,” Dick dutifully complied. Rather than provoke more delusional behavior in Frank, Dick says, these experiences have had a “grounding effect.” They show him he has friends and family who respect who he is and all that he’s capable of. “If some of [Franklin’s] dreams exist only in the imaginative realm, so be it,” Dick wrote in his memoir, My Mysterious Son: A Life-Changing Passage Between Schizophrenia and Shamanism. “I’ve learned the importance of this for him.”

The effects have been profound. Years ago, before meeting Malidoma, Frank was less motivated to seek social encounters. At 37 he took trips to New Mexico and Maine, and took classes in mechanical engineering. Today, he is a remarkably inventive jazz pianist. His room is filled with his paintings and metalwork, rife with archetypal imagery and hieroglyphic languages of his own creation.

He’s not cured. He still occasionally hears voices and harbors delusions. And he still lives in a group home. But he has been able to cut his medication in half again. He has lost weight, and his diabetes has become asymptomatic. He’s more courteous, alert, and engaged, his father and doctors say. He still has bad days, but they are fewer and farther between.

Perhaps the biggest factor motivating Frank’s improvement, however, has been the shift in how he views himself. No longer simply a madman, he is a painter and a poet, a traveler and a friend, an African and an American, a welder and a student.

And, most recently, a shaman. In February 2018, Frank, Dick, and Frank’s mother visited Malidoma’s tribe in Burkina Faso, where Frank took part in healing rituals. He spent four weeks living in the village before returning home to Boston in early March. Dick and Malidoma are loath to disclose details of the ceremony, and say only that Frank’s response to the rituals gave them hope.

The experience also shifted Dick’s perception. “I never expected to be conducting spiritual water rituals at the ocean,” he said. But that’s what he did, and, in the course of helping his son recover, he found that his own views of sickness and health had shifted. “To the extent that psychosis involves the creation of an alternate reality, the goal is to enter that world. There’s also a recognition that the world we think of as real is actually infused with aspects of the Other—that there is a mysterious interpenetration or even an underlying unity.”

As for traditional medicine’s take? To the best of Dick’s knowledge, scientists haven’t studied a case exactly like Frank’s.

 

 

Original article here


16 Sep 2024
Comments: 0

Are Your Morals Too Good to Be True?

I spent the summer of 2011 as an undergraduate researcher at the Rocky Mountain Biological Laboratory, in Colorado. My job was to collect burying beetles—necrophagous critters with wing cases the colors of Halloween—using traps made out of coffee cans and chicken flesh. Behavioral biologists are fascinated by burying beetles because of their biparental model of care: males and females prepare meaty balls from carcasses and then coöperatively raise larvae on them. I was a matchmaker, charged with setting up pairs of beetles and watching them co-parent.

That summer was a dream. I lived in a community of more than a hundred scientists, students, and staff. The research station, based at the site of a deserted mining town, was a magnet for weirdos and plant lovers, naturalists and marmot chasers, flower people and climate watchers. It consisted of dozens of cabins—some Lincoln Log style and more than a century old, others retrofitted into modern laboratories—encircled by spruce and aspen forests, montane meadows, and monumental peaks. I was more accustomed to sidewalks than to summits, but now I saw elk and black bears and woke up one night to a porcupine gnawing on my cabin. For the first time in my life, I found love, or something close to it. In spare moments, I retired to my room, where I drew and wrote in my journal. On the weekends, we scaled the Rockies.

A part of me wanted that summer to be my forever. I envisioned a career as a collector of coleopterans, sneaking off to the mountains to cavort and observe. Another part of me worried that I was being frivolous. Back in college, my classmates had high-minded ambitions like fighting climate change, becoming human-rights lawyers, and starting microfinance firms to alleviate poverty. To spend time with books and beetles in wildflower country seemed the pinnacle of self-indulgence. Adding to the internal tension was something I’d observed among my beetles: the spectre of evolved selfishness. What looked like coöperation was, I discovered, laced with sexual conflict. The female beetles, when they had a size advantage, ejected their male partners; the males evidently stuck around less to help than to insure future mating opportunities. Where I first saw biparental collaboration was instead a complicated waltz of organisms seeking to perpetuate their own interests. Was I one of them—another gene machine bent on favoring itself?

I had, to that point, considered myself a mostly decent person, moved by empathy and committed to self-expression. Was all this actually vanity and delusion, selfishness masquerading as morality? The prospect was unsettling. So I hid away in a one-room library that smelled faintly of old textbooks and the alcohol used to preserve animal specimens, and there I started to work out a response. We’re evolved organisms, I figured, but we’re also an intelligent, cultural species capable of living by ideals that transcend our egoistic origins. What emerged from my musings was a personal ideology, at the core of which was an appreciation of creation—including artistic and scientific work. Even an awkward scribble, I supposed, expresses an incomprehensibly epic causal history, which includes a maker, the maker’s parents, the quality of the air in the room, and so on, until it expands to encompass the entire universe. Goodness could be reclaimed, I thought. I would draw and write and do science but as acts of memorialization—the duties of an apostle of being. I called the ideology Celebrationism, and, early in 2012, I started to codify it in a manic, sprawling novel of that name.

I had grown up a good Sikh boy: I wore a turban, didn’t cut my hair, didn’t drink or smoke. The idea of a god that acted in the world had long seemed implausible, yet it wasn’t until I started studying evolution in earnest that the strictures of religion and of everyday conventions began to feel brittle. By my junior year of college, I thought of myself as a materialist, open-minded but skeptical of anything that smacked of the supernatural. Celebrationism came soon after. It expanded from an ethical road map into a life philosophy, spanning aesthetics, spirituality, and purpose. By the end of my senior year, I was painting my fingernails, drawing swirling mehndi tattoos on my limbs, and regularly walking without shoes, including during my college graduation. “Why, Manvir?” my mother asked, quietly, and I launched into a riff about the illusory nature of normativity and about how I was merely a fancy organism produced by cosmic mega-forces.

After college, I spent a year in Copenhagen, where I studied social insects by day and worked on “Celebrationism” the rest of the time. Reassured of the virtue of intellectual and artistic work, I soon concluded that fictional wizards provided the best model for a life. As I wrote to my friend Cory, “They’re wise, eccentric, colorful, so knowledgeable about some of the most esoteric subjects, lone wolves in a sense, but all of their life experience constantly comes together in an exalting way every time they do something.” When, the following year, I started a Ph.D. in human evolutionary biology at Harvard, I saw the decision as in service of my Celebrationist creed. I could devote myself to meditating on the opportune swerves that produced us.

I was mistaken. Celebrationism died soon afterward. Just as observation and a dose of evolutionary logic revealed male burying beetles not as attentive fathers but as possessive mate guarders, the natural and behavioral sciences deflated my dreamy credo, exposing my lofty aspirations as performance and self-deception. I struggled, unsuccessfully, to construct a new framework for moral behavior which didn’t look like self-interest in disguise. A profound cynicism took hold.

Skepticism about objective morality is nothing new, of course. Michel de Montaigne, in the sixteenth century, remarked that “nothing in all the world has greater variety than law and custom,” a sign, for him, of the nonexistence of universal moral truths—and he had predecessors among the ancient Greeks. David Hume chimed in, two centuries later, to argue that judgments of right and wrong emanate from emotion and social conditioning, not the dispassionate application of reason. Even the more pious-sounding theorists, including Kant and Hegel, saw morality as something that we derive through our own thinking, our own rational will. The war between science and religion in the nineteenth century brought it all to a head, as a supernatural world view became supplanted by one that was more secular and scientific, in a development that Nietzsche described as the death of God. As the pillars of Christian faith crumbled, Western morality seemed poised to collapse. Nihilism loomed. “But how did we do this?” the madman in Nietzsche’s “The Gay Science” asks. “How could we drink up the sea? Who gave us the sponge to wipe away the entire horizon? What were we doing when we unchained the earth from its sun?”

Nietzsche’s response to a godless world was a moral makeover: individuals were to forge their own precepts and act in accordance with them. More than a century later, such forays have matured into an individualist morality that has become widespread. We behave morally, we often say, not because of doctrine but because of our higher-order principles, such as resisting cruelty or upholding the equality of all humans. Rather than valuing human life because an omnipotent godhead commands it, or because our houses of worship instruct it, we do so because we believe it is right.

At its core, this view of morality assumes a kind of moral integrity. Although some people may embrace principles for self-interested ends, the story goes, genuine altruism is possible through reasoned reflection and an earnest desire to be ethical. I told myself a version of this story in the Rockies: rummage through your soul and you can find personally resonant principles that inspire good behavior. The Harvard psychologist Lawrence Kohlberg turned a model like this into scholarly wisdom in the nineteen-sixties and seventies, positioning it as the apex of the six stages of moral development he described. For the youngest children, he thought, moral goodness hinges on what gets rewarded and punished. For actualized adults, in contrast, “right is defined by the decision of conscience in accord with self-chosen ethical principles appealing to logical comprehensiveness, universality, and consistency.”

All this may sound abstract, but it is routine for most educated Westerners. Consider how moral arguments are made. “Animal Liberation Now” (2023), the Princeton philosopher Peter Singer’s reboot of his 1975 classic, “Animal Liberation,” urges readers to emancipate nonhuman animals from the laboratory and the factory farm. Singer assumes that people are committed to promoting well-being and minimizing suffering, and so he spends most of the book showing, first, that our actions create hellish existences for many of our nonhuman brethren and, second, that there is no principled reason to deny moral standing to fish or fowl. His belief in human goodness is so strong, he admits, that he expected everyone who read the original version of his book “would surely be convinced by it and would tell their friends to read it, and therefore everyone would stop eating meat and demand changes to our treatment of animals.”

From an evolutionary perspective, this could seem an odd expectation. Humans have been fashioned by natural selection to pursue sex, status, and material resources. We are adept at looking out for ourselves. We help people, yes, but the decision to give is influenced by innumerable selfish considerations, including how close we are to a recipient, whether they’ve helped us before, how physically attractive they are, whether they seem responsible for their misfortune, and who might be watching. A Martian observer might, accordingly, have expected Singer’s arguments to focus less on the horrific conditions of overcrowded pig farms and instead to appeal to our hedonic urges—more along the lines of “Veganism makes you sexy” or “People who protest animal experimentation have more friends and nicer houses than their apathetic rivals.”

But Singer has always known his audience. Most people want to be good. Although “Animal Liberation Now” is largely filled with gruesome details, it also recounts changes that growing awareness has spurred. At least nine states have passed legislation limiting the confinement of sows, veal calves, and laying hens. Between 2005 and 2022 in the U.S., the proportion of hens that were uncaged rose from three per cent to thirty-five per cent, while Yum! Brands—the owner of such fast-food franchises as KFC, Taco Bell, and Pizza Hut, with more than fifty thousand locations around the world—has vowed to phase out eggs from caged hens by 2030. These changes are a microcosm of the centuries-long expansion of moral concern that, throughout much of the world, has ended slavery and decriminalized homosexuality. Could there be a clearer instance of genuine virtue?

I wasn’t yet thinking about any of this when I started graduate school. Instead, my mind was on monkeys. I had proposed studying the Zanzibar red colobus, a creature notable for retaining juvenile traits like a short face and a small head into adulthood. Our species underwent a similar juvenilization during our evolution, and the hope was that something might be learned about our past by studying this peculiar primate.

Still, I couldn’t read about monkeys all day. To start a Ph.D. at a major research university is to have proximity to countless intellectual currents, and I began to drift through the scholarly worlds on campus, which is how I found Moshe Hoffman. Moshe is intense. A curly-haired game theorist with a scalpel-like ability to dissect arguments and identify their logical flaws, Moshe was raised in an Orthodox Hasidic community in Los Angeles. He grew up wearing a kippah and spending half of each school day studying the Talmud and other religious texts until, at the age of fifteen, he forsook his faith. He had a chance conversation with an atheist classmate, then picked up Richard Dawkins’s “The Selfish Gene.” The book exposed him to game theory and evolutionary biology, setting him on a lifelong quest to solve the puzzles of human behavior.

When we met, near the end of my first year, Moshe was a postdoctoral researcher fixated on the nature of trust. We all depend on trust, yet it works in tricky ways. On the one hand, we trust people who are guided by consistent ethical precepts. I’d rather go to dinner with someone deeply opposed to stealing than a jerk who pockets my valuables as soon as I get up to pee. On the other hand, we’re turned off when people’s commitments seem calculated. The ascent of terms like “slacktivism,” “virtue signalling,” and “moral grandstanding” bespeaks a frustration with do-gooders motivated more by acclaim than by an internal moral compass. The idea is that, if you’re in it for the reputational perks, you can’t be relied on when those perks vanish. In “The Social Instinct” (2021), Nichola Raihani, who works on the evolution of coöperation, refers to this issue as the “reputation tightrope”: it’s beneficial to look moral but only as long as you don’t seem motivated by the benefits.

Moshe argued that humans deal with this dilemma by adopting moral principles. Through learning or natural selection, or some combination, we’ve developed a paradoxical strategy for making friends. We devote ourselves to moral ends in order to garner trust. Which morals we espouse depend on whose trust we are courting. He demonstrated this through a series of game-theoretic models, but you don’t need the math to get it. Everything that characterizes a life lived by moral principles—consistently abiding by them, valuing prosocial ends, refusing to consider costs and benefits, and maintaining that these principles exist for a transcendental reason—seems perfectly engineered to make a person look trustworthy.
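For readers who like to see the logic concretely, here is a minimal toy simulation of the idea (not Hoffman’s actual models, which aren’t reproduced here): an agent who helps “without looking” at the cost keeps observers’ trust, while one who checks the price first loses it. Every parameter below (the costs, the trust bonus, the chance of being observed) is an illustrative assumption, not a figure from his work.

```python
# Toy sketch of "cooperating without looking": purely illustrative,
# with made-up parameters; not a reproduction of Hoffman's models.
import random

random.seed(0)

ROUNDS = 10_000
TRUST_BONUS = 2.0                # assumed future payoff from being trusted this round
HIGH_COST, LOW_COST = 2.5, 0.5   # helping is sometimes expensive
P_HIGH_COST = 0.5                # how often helping happens to be expensive
P_OBSERVED = 0.8                 # chance that someone notices how you decided

def average_payoff(strategy):
    """Average per-round payoff for a 'principled' or 'calculating' helper."""
    total = 0.0
    trusted = True
    for _ in range(ROUNDS):
        cost = HIGH_COST if random.random() < P_HIGH_COST else LOW_COST
        if strategy == "principled":
            helps, peeked = True, False      # helps without checking the cost
        else:                                # "calculating"
            helps, peeked = (cost <= LOW_COST), True
        total += (-cost if helps else 0.0) + (TRUST_BONUS if trusted else 0.0)
        # Observers withdraw trust from anyone seen weighing costs or refusing to help.
        if random.random() < P_OBSERVED:
            trusted = helps and not peeked
    return total / ROUNDS

for s in ("principled", "calculating"):
    print(s, round(average_payoff(s), 2))
```

Under these made-up numbers, the principled helper comes out ahead: the steady trust bonus outweighs the occasional expensive favor, while the calculating helper saves on costs but forfeits trust. Change the parameters and the balance can flip, which is roughly the kind of trade-off the game-theoretic analysis maps out.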

His account identifies showmanship, conscious or otherwise, in ostensibly principled acts. We talk about moral principles as if they were inviolate, but we readily consider trade-offs and deviate from those principles when we can get away with it. Philip Tetlock, who works at the intersection of political science and psychology, labels our commitments “pseudo-sacred.” Sure, some people would die for their principles, yet they often abandon them once they gain power and no longer rely on trust. In “Human Rights in Africa” (2017), the historian Bonny Ibhawoh showed that post-colonial African dictators often started their careers as dissidents devoted to civil liberties.

Moshe wasn’t alone in this work. Around the time that he and I began chatting, researchers at Oxford, the École Normale Supérieure, and elsewhere were disrobing morality and finding performance underneath. Jillian Jordan, then a graduate student at Yale and now on the faculty of Harvard Business School, conducted a series of landmark studies demonstrating how people instinctively use moral behavior to cultivate laudable personas. A 2016 paper in the Proceedings of the National Academy of Sciences—which Jordan wrote with Moshe and two other researchers—studied uncalculating coöperation, the tendency to willfully ignore costs and benefits when helping others. It’s a key feature of both romantic love and principled behavior. The authors found not only that “coöperating without looking” (a phrase of Moshe’s) attracts trust but that people engage in it more when trying to win observers’ confidence. The motivations that we find so detestable—moral posturing for social rewards—may, in fact, be the hallmark of moral action.

Invested as I was in my own goodness, whether achieved or aspirational, I found Moshe’s ideas both alarming and mesmeric. To engage with them was to look in a mirror and find a sinister creature staring back. The more I sought Moshe out—first by taking a course he co-taught, then by meeting up for Indian food after class, then by working as his teaching assistant—the more I felt trapped within my self-interest. Celebrationism was exposed as a beautiful lie. The search for personally resonant principles was reinterpreted as a tactic not to overcome self-interest but to advance it. Any dignified motivations that had once held sway—making art for art’s sake, acting to minimize suffering—became smoke screens to distract others from my selfishness. Here were hard truths that I felt compelled to confront. I wanted to escape the performance, to adopt values for reasons other than their social utility, but even that urge, I recognized, reflected the same strategic impulse to appear good and consistent. It was like forcing yourself to wake up from a dream only to realize that you’re still dreaming.

Yes, there were venerable antecedents to all these arguments, but what had once been the province of the provocateur was now something of a scholarly consensus. The new, naturalistic study of morality stemmed from an array of converging disciplines and approaches, spanning sociology, biology, anthropology, and psychology. It was set forth in popular books like Matt Ridley’s “The Origins of Virtue” (1996), Joshua Greene’s “Moral Tribes” (2013), and Richard Wrangham’s “The Goodness Paradox” (2019). Not everyone in this field understands ethical behavior the way Moshe does. Still, they tend to employ a framework grounded in evolutionary theory—one that casts morality as a property of our primate brains and little else. Appeals to pure selflessness have become harder to defend, while a belief in objective moral truths—existing apart from our minds and discoverable through impartial judgment—has grown increasingly untenable.

Darwin himself sensed the implications. In “The Descent of Man” (1871), he suggested that studying the “moral sense” from “the side of natural history” would throw “light on one of the highest psychical faculties of man.” It took another hundred years for scholars of evolution to appreciate the extent to which a Darwinian world view can explain morality. By the beginning of the twenty-first century, philosophers like Sharon Street, at N.Y.U., were taking note. “Before life began, nothing was valuable,” Street wrote in a now classic article. “But then life arose and began to value—not because it was recognizing anything, but because creatures who valued (certain things in particular) tended to survive.” In other words, moral tenets—such as the rightness of loyalty or the wrongness of murder—do not exist unless natural selection produces organisms that value them.

In recent decades, all sorts of philosophers have added to the pool of adaptive theories about morality. Allan Gibbard argues that moral statements (“Killing is bad”) actually express attitudes (“I don’t like killing”), allowing us to coördinate on shared prescriptions (“No one shall kill”). Philip Kitcher sees ethics as an ever-evolving project invented by our remote ancestors and continually refined to help societies flourish. Richard Joyce has proposed that moral judgments help keep us out of trouble. Given normal human hedonism, we may struggle to stop ourselves from, say, stealing a brownie; the feeling that it’s morally wrong provides us an emotional bulwark. Non-moral explanations like these, whatever their differences, obviate talk of moral truths, construing them as dreamlike delusions.

Like the decline of religion, what’s often called the evolutionary debunking of morality can induce existential panic and strenuous efforts at circumvention. The eminent philosopher Derek Parfit, the subject of a recent biography by David Edmonds, spent decades writing “On What Matters,” a book that sought both to build a unified theory of objective morality and to defend it against challengers, including evolution-inspired skeptics. In 2015, at N.Y.U., Parfit and Street taught a course together on meta-ethics. On the last day of class, a student asked them whether they had learned anything from their collaboration. “My memory is that both of us said ‘No!’ ” Street told Edmonds. “He thought my position was nihilistic. He was worried about it being true and felt it needed beating back with arguments.”

What troubled me was less the notion that morality was our own creation than the implication that our motives were suspect—that evolutionarily ingrained egoism permeated our desires, including the desire to overcome that selfishness. Sincerity, I concluded, was dead. Just as the natural sciences had killed the Christian God, I thought, the social and behavioral sciences had made appeals to virtuous motivations preposterous. I became skeptical of all moral opinions, but especially of the most impassioned ones, which was a problem, because I was dating someone who had a lot of them. (It didn’t work out.) A close friend, a punk physicist with whom I often went dancing late at night, found my newfound cynicism hard to relate to, and we drifted apart.

Many theorists are skeptical of such skepticism. When I asked people on X how they have dealt with evolutionary debunking, Oliver Scott Curry, a social scientist at Oxford and the research director at Kindlab, which studies the practice of kindness, warned me not to confuse the selfishness of genes with the nature of our motivations, which apparently are more gallant. He was echoing a distinction often drawn between a behavior’s “ultimate” causes, which concern why it evolved, and its “proximate” causes, which include psychological and physiological mechanisms. The cognition underpinning moral judgment may have evolved to make us look good, these scholars grant, but that doesn’t count against its sincerity. In “Optimally Irrational” (2022), the University of Queensland economist Lionel Page explains, “There is no contradiction between saying that humans have genuine moral feelings and that these feelings have been shaped to guide them when playing games of social interactions.”

Such arguments make sense to some degree. An impulse can exist because of its evolutionary utility but still be heartfelt. The love I feel for my spouse functions to propagate my genes, but that doesn’t lessen the strength of my devotion. Why couldn’t this shift in perspective rescue goodness for me? A major reason is that the proximate-ultimate distinction leaves intact the unsavory aspects of human motivation. As anyone who has spent more than twenty minutes on social media can attest, humans are remarkably attentive to which moral proclamations garner esteem and attention. We weigh the status implications of claiming different principles. It’s true that we often assure ourselves otherwise and even internalize positions once we espouse them enough. Yet this fact didn’t redeem moral sincerity for me; it corrupted it.

I eventually ditched monkeys. Humans, complicated and enculturated, had a stronger appeal than tiny-headed primates. After my first year of graduate school, in 2014, I travelled to the Indonesian island of Siberut and stayed with its Indigenous inhabitants, the Mentawai. I returned for two more months in 2015 and then spent much of 2017 with a Mentawai community, studying traditions of justice, healing, and spirituality. As I learned more of the language, I saw how rarely Mentawai people invoke abstract concepts of right and wrong. Instead, they reason about duties and responsibilities in a way that seems both blatantly self-interested and refreshingly honest, and which I’ve since adopted when speaking to them.

A Mentawai man who had previously worked for me as a research assistant once asked me over WhatsApp to help pay his school fees. I agreed but then struggled to wire the money from the United States, and he was forced to drop out. When I visited again in 2020, I handed him a wad of cash. “Why are you doing this?” he asked. My reply came automatically: “Because, if I don’t pay you, people will think that I don’t keep my promises.” He nodded. The answer made sense.

How does one exist in a post-moral world? What do we do when the desire to be good is exposed as a self-serving performance and moral beliefs are recast as merely brain stuff? I responded by turning to a kind of nihilism, yet this is far from the only reaction. We could follow the Mentawai, favoring the language of transaction over virtue. Or we can carry on as if nothing has changed. Richard Joyce, in his new book, “Morality: From Error to Fiction,” advocates such an approach. His “moral fictionalism” entails maintaining our current way of talking while recognizing that a major benefit of this language is that it makes you likable, despite referring to nothing real. If you behave the way I did in grad school, going on about the theatre of morality, you will, he suggests, only attract censure and wariness. Better to blend in.

Intellectually, I find the proposal hard to swallow. The idea of cosplaying moral commitment for social acceptance would surely magnify whatever dissonance I already feel. Still, a decade after my first meeting with Moshe, experience forces me to acknowledge Joyce’s larger point. It’s easy to inhabit the fiction.

I still accept that I am a selfish organism produced by a cosmic mega-force, drifting around in a bedlam of energy and matter and, in most respects, not so very different from the beetles I scrutinized during that summer in Colorado. I still see the power in Moshe’s game-theory models. Traces of unease linger. But I no longer feel unmoored. A sense of meaning has re-established itself. Tressed, turbanned, and teetotalling, I am, at least by all appearances, still a good Sikh. I have become a teacher, a husband, and a father to a new baby daughter. When she smiles, a single dimple appears in her left cheek. Her existence feels more ecstatic and celebratory than any ideology I could have conceived, and I hope that she’ll one day grow up to be empathetic and aware of others’ suffering. I have moral intuitions, sometimes impassioned ones. I try to do right by people, and, on most days, I think I do an O.K. job. I dream on. ♦

 

Original article here

 


11 Sep 2024
Comments: 0

I’m a Neurologist. Here’s the One Thing I Do Every Day for My Long-Term Brain Health

 

Everything you do—walking to your yoga class, making your favorite latte order, talking to your bestie, and just getting through the workday—happens thanks to your brain. Your brain is the control center for your entire body—it’s how you get shit done. So how can you take care of such a beautifully complex and integral part of your body and keep it in great shape for as long as possible?

Lara V. Marcuse, MD, a board-certified neurologist and codirector of the Mount Sinai Epilepsy Program at the Icahn School of Medicine at Mount Sinai, shares the one thing she does every day (or almost every day, because life gets busy, folks!) to keep her brain healthy. As a bonus? It’s fun.

Pick up a difficult new skill, even if you suck at it.

“I started playing piano in my mid-40s,” Dr. Marcuse tells SELF. It all started by chance when her son began taking lessons: “I took his lesson book on the sly one night before bed, and I was totally enthralled by it,” she says, though she admits she found the songs themselves hard to get into at first. “I’m a 1980s New York City club kid. I grew up on a steady diet of house music, and I never liked classical.” It’s been seven years since she first gave playing a Chopin piece a shot, and she hasn’t looked back since. “[Playing piano] helps me get into [the] nooks and crannies of myself—and into my spirit,” she says.

Taking up a hobby that’s unfamiliar and even difficult forces your brain to exercise new or rarely used neural pathways, and that can help prevent cognitive decline and even protect your brain against Alzheimer’s disease, a type of dementia that leads to memory loss and an inability to complete daily tasks. Keeping your brain active makes neural pathways strong—and the opposite is true if you’re not finding ways to engage your mind.

Playing an instrument, in particular, engages every facet of your brain. If you’ve ever looked at a sheet of music, you know it’s basically like reading a different language; your brain jumps through a bunch of hoops to figure it out. (Anecdotally speaking, as a former cello player, I can attest to the fact that reading music is no joke; I recall spending hours trying to understand a simple string of notes.) When you sit down to play the keys or strum a guitar, your brain is hard at work trying to tell your hands what to do.

Musical activities trigger the auditory cortex (a.k.a. the part of your brain that helps you hear) and areas of your brain that are involved in memory function. According to a 2021 review published in Frontiers in Neuroscience, performing music is rewarding and makes you want to continue your musical training practice. It also improves brain plasticity, which refers to the ways your brain changes in response to external or internal factors, like a stroke or a traumatic brain injury, and how it adapts afterward. Learning how to play might result in structural and functional changes in your brain over time, exactly because it takes a while to learn.

Your brain-bolstering activity of choice doesn’t have to be music-based, Dr. Marcuse says, as long as you’re interested in whatever you’re doing enough to want to commit to it. You can paint, try tai chi, or learn how to interpret tarot cards.

The other key piece of this is making sure that your new hobby involves some amount of challenge. “It has to be something a little new that’s a little hard,” Dr. Marcuse says. Passively watching the latest episode of The Bachelor won’t cut it, because you need your brain to be active, take in new information, digest it, and then put it back out there.

While learning a new skill might feel daunting, that’s the point! According to Dr. Marcuse, you don’t have to be good at the activity to protect your brain: “I never took music lessons as a kid. I’m not really good at it. I never will be,” she says.

And despite not being the next Mozart, she says that playing the piano adds some color and levity to her days, in addition to protecting her brain. “I really need that in my life—I have a very stressful job,” she says. “It makes me feel that the world is sort of full of beauty and hope.”

How to make a new skill a regular part of your life—and why it’s great for your brain

You don’t have to do the activity every single day, or even for a very long time. “Just try to do it frequently, and don’t do it for very long,” Dr. Marcuse says. Sometimes all she has time for is a few bars or a couple of scales—do whatever works for you, as long as you stay somewhat in the swing of a routine.

A 2020 study found that increasing the frequency with which you engage in your hobby (like doing crossword puzzles, playing board games—or an instrument—or reading the newspaper) was associated with less cognitive impairment and fewer depressive symptoms in older adults. In other words, doing your hobby more often tends to be better for your overall well-being. Practice not only increases the speed at which you can perform a task, it also improves your accuracy. Researchers theorize that when you attempt an activity for the first time, specific brain regions are activated to help you complete the task; this creates new neural pathways, which your brain strengthens by storing the new information in memory as you keep practicing over time.

Consistently training your brain will help boost your cognitive processes over time, because the myelin sheath—the fatty layer that insulates your nerves—thickens. A plumper myelin sheath helps your brain transmit and process information more efficiently. (An added bonus of practicing: Even though the word routine sounds dull as all get-out, maintaining one can reduce your stress levels and make you happier in general.)

Whether you decide to take a cooking class or learn Spanish, try a new hobby that really speaks to you. “Everything you do to protect your brain is going to make your life better,” Dr. Marcuse says. Bearing that in mind: I think it’s time to pull out the ol’ cello that’s been collecting dust in my closet.

 

 

Original article here

 


07 Sep 2024
Comments: 0

Should we be eating three meals a day?

 

It’s likely you eat three meals a day – modern life is designed around this way of eating. We’re told breakfast is the most important meal of the day, we’re given lunch breaks at work, and then our social and family lives revolve around evening meals. But is this the healthiest way to eat?

Before considering how frequently we should eat, scientists urge us to consider when we shouldn’t.

Intermittent fasting, where you restrict your food intake to an eight-hour window, is becoming a huge area of research.

Giving our bodies at least 12 hours a day without food allows our digestive system to rest, says Emily Manoogian, clinical researcher at the Salk Institute for Biological Studies in California, and author of a 2019 paper entitled “When to eat”.

Rozalyn Anderson, an associate professor at the University of Wisconsin’s School of Medicine and Public Health, has studied the benefits of calorie restriction, which is associated with lower levels of inflammation in the body.

“Having a fasting period every day could reap some of these benefits,” she says. “It gets into the idea that fasting puts the body in a different state, where it’s more ready to repair and surveil for damage, and clear misfolded proteins.” Misfolded proteins are faulty versions of ordinary proteins, which are molecules that perform a huge range of important jobs in the body. Misfolded proteins have been associated with a number of diseases.

Intermittent fasting is more in line with how our bodies have evolved, Anderson argues. She says it gives the body a break so it’s able to store food and get energy to where it needs to be, and trigger the mechanism to release energy from our body stores.

Fasting could also improve our glycaemic response – how much our blood glucose rises after eating – says Antonio Paoli, professor of exercise and sport sciences at the University of Padova in Italy. Having a smaller blood glucose increase allows you to store less fat in the body, he says.

“Our data suggests that having an early dinner and increasing the time of your fasting window increases some positive effects on body, like better glycaemic control,” Paoli says.

It’s better for all cells to have lower levels of sugar in them because of a process called glycation, Paoli adds. This is where glucose links to proteins and forms compounds called “advanced glycation end products”, which can cause inflammation in the body and increase the risk of developing diabetes and heart disease.

But if intermittent fasting is a healthy way to eat – how many meals does this leave room for?

Some experts argue it’s best to have one meal a day, including David Levitsky, professor at Cornell University’s College of Human Ecology in New York, who does this himself.

“There’s a lot of data showing that, if I show you food or pictures of food, you’re likely to eat, and the more frequently food is in front of you, the more you’re going to eat that day,” he says.

This is because, before we had fridges and supermarkets, we ate when food was available. For much of history, people ate just one meal a day, including the Ancient Romans, who ate theirs around midday, says food historian Seren Charrington-Hollins.

Wouldn’t one meal a day leave us feeling hungry? Not necessarily, Levitsky argues, because hunger is often a psychological sensation.

“When the clock says 12pm, we may get feelings to eat, or you might be conditioned to eat breakfast in the morning, but this is nonsense. Data shows that if you don’t eat breakfast, you’re going to eat fewer calories overall that day.

“Our physiology is built for feasting and fasting,” he says. However, Levitsky doesn’t recommend this approach for people with diabetes.

But Manoogian doesn’t recommend sticking to one meal a day, since this can increase the level of glucose in our blood when we’re not eating – known as fasting glucose. High levels of fasting glucose over a long period of time are a risk factor for type 2 diabetes.

Keeping blood glucose levels down requires eating more regularly than once a day, Manoogian says, as this stops the body from thinking it’s starving and releasing extra glucose in response when you do eventually eat.

Instead, she says, two to three meals a day is best – with most of your calories consumed earlier in the day. This is because eating late at night is associated with cardio-metabolic disease, including diabetes and heart disease.

“If you eat most of your food earlier on, your body can use the energy you feed it throughout the day, rather than it being stored in your system as fat,” Manoogian says.

But eating too early in the morning should be avoided, too, she says, as this wouldn’t give you sufficient time to fast. Eating too soon after waking up also works against our circadian rhythm – known as our body clock – which researchers say dictates how the body processes food differently throughout the day. Our bodies release melatonin overnight to help us sleep, but melatonin also pauses the production of insulin, the hormone that stores glucose in the body. Because melatonin is released while you’re sleeping, the body uses it to make sure we don’t take in too much glucose during the hours when we’re asleep and not eating, Manoogian says.

“If you take in calories when your melatonin is high, you get really high glucose levels. Consuming a lot of calories at night poses a significant challenge to the body, because if insulin is suppressed, your body can’t store glucose properly.”

And, as we know, high levels of glucose over long periods of time can increase the risk of developing type 2 diabetes.

This doesn’t mean we should skip breakfast altogether, but some evidence suggests we should wait an hour or two after waking up before we crack open the eggs. It’s also worth remembering that breakfast as we know and love it today is a relatively new concept.

“The Ancient Greeks were the first to introduce the concept of breakfast, they’d eat bread soaked in wine, then they had a frugal lunch, then a hearty evening meal,” says Charrington-Hollins.

Initially, breakfast was exclusive to aristocratic classes, says Charrington-Hollins. It first caught on in the 17th Century, when it became the luxury of those who could afford the food and the time for a leisurely meal in the morning.

“The concept today of breakfast being the norm [came about] during the Industrial Revolution in the 19th Century and its introduction of working hours,” says Charrington-Hollins. Such a routine lends itself to three meals a day. “The first meal would be something quite simple for the working classes – it might be street food from a vendor or bread.”

But after the war, when food was scarcer, eating a full breakfast wasn’t possible and a lot of people skipped it. “The idea of three meals a day went out the window,” says Charrington-Hollins. “In the 1950s breakfast becomes how we recognise it today: cereal and toast. Prior to that we were happy to eat a piece of bread with jam.”

So, the science seems to say the healthiest approach is to have two or three meals a day, with a long overnight fasting window, to avoid eating too early or too late in the day, and to consume more of your calories earlier on. Is this realistic?

Manoogian says it’s best not to prescribe specific meal times, as this can be difficult for people with responsibilities and irregular time commitments, such as those working night shifts.

“Telling people to stop eating by 7pm isn’t helpful because people have different schedules. If you try to give your body regular fast nights, try to not eat too late or early and try to not have huge final meals, this can usually help. People can at least adopt parts of this,” she says.

“You could see a dramatic change just from a small delay in your first meal and advancing your last meal. Making this regular without changing anything else could have a big impact.”

But whatever changes you make, researchers agree that consistency is crucial.

“The body works in patterns,” says Anderson. “We respond to the anticipation of being fed. One thing intermittent fasting does is it imposes a pattern, and our biological systems do well with a pattern.” She says the body picks up on cues to anticipate our eating behaviours so it can best deal with the food when we eat it.

When it comes to how many meals we deem normal, Charrington-Hollins is seeing change on the horizon.

“Over the centuries, we’ve become conditioned to three meals a day, but this is being challenged now and people’s attitude to food is changing. We have more sedentary lifestyles, we’re not doing the level of work we were doing in the 19th Century, so we need fewer calories.

“I think, long-term, we’ll be reducing back to a light meal then a main meal, depending on what happens work-wise. Our working hours will be the driving force.

“When we came off rations, we embraced three meals a day because there was suddenly an abundance of food. But time goes on – food is everywhere now.”

Original article here

