How To Be Spiritual In A Material World


26 Sep 2023
Comments: 0

Humans Could Live up to 150 Years, New Research Suggests

The chorus of the theme song for the movie Fame, performed by actress Irene Cara, includes the line “I’m gonna live forever.” Cara was, of course, singing about the posthumous longevity that fame can confer. But a literal expression of this hubris resonates in some corners of the world—especially in the technology industry. In Silicon Valley, immortality is sometimes elevated to the status of a corporeal goal. Plenty of big names in big tech have sunk funding into ventures to solve the problem of death as if it were just an upgrade to your smartphone’s operating system.

Yet what if death simply cannot be hacked and longevity will always have a ceiling, no matter what we do? Researchers have now taken on the question of how long we can live if, by some combination of serendipity and genetics, we do not die from cancer, heart disease or getting hit by a bus. They report that even when the things that usually kill us are set aside, our body’s capacity to restore equilibrium to its myriad structural and metabolic systems after disruptions still fades with time. And even if we make it through life with few stressors, this incremental decline sets the maximum life span for humans at somewhere between 120 and 150 years. In the end, if the obvious hazards do not take our lives, this fundamental loss of resilience will do so, the researchers conclude in findings published in May 2021 in Nature Communications.

“They are asking the question of ‘What’s the longest life that could be lived by a human complex system if everything else went really well, and it’s in a stressor-free environment?’” says Heather Whitson, director of the Duke University Center for the Study of Aging and Human Development, who was not involved in the paper. The team’s results point to an underlying “pace of aging” that sets the limits on life span, she says.

For the study, Timothy Pyrkov, a researcher at a Singapore-based company called Gero, and his colleagues looked at this “pace of aging” in three large cohorts in the U.S., the U.K. and Russia. To evaluate deviations from stable health, they assessed changes in blood cell counts and the daily number of steps taken and analyzed them by age groups.

For both blood cell and step counts, the pattern was the same: as age increased, some factor beyond disease drove a predictable and incremental decline in the body’s ability to return blood cells or gait to a stable level after a disruption. When Pyrkov and his colleagues in Moscow and Buffalo, N.Y., used this predictable pace of decline to determine when resilience would disappear entirely, leading to death, they found a range of 120 to 150 years. (In 1997 Jeanne Calment, the oldest person on record to have ever lived, died in France at the age of 122.)
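
The arithmetic behind that extrapolation can be pictured with a toy calculation. The sketch below is not the authors’ model; it simply fits a straight line to an invented, purely illustrative decline in a “resilience” score with age and solves for the age at which the fitted line reaches zero.

```python
import numpy as np

# Invented, purely illustrative numbers (not from the study): a "resilience"
# score that declines steadily with age.
ages = np.array([30, 40, 50, 60, 70, 80, 90], dtype=float)
resilience = np.array([1.00, 0.92, 0.83, 0.75, 0.66, 0.58, 0.49])

# Fit a straight line: resilience ~ slope * age + intercept
slope, intercept = np.polyfit(ages, resilience, deg=1)

# The age at which the fitted resilience reaches zero is the schematic "ceiling".
age_at_zero = -intercept / slope
print(f"Extrapolated age of zero resilience: {age_at_zero:.0f} years")  # ~148 with these numbers
```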

The researchers also found that with age, the body’s response to insults could increasingly range far from a stable normal, requiring more time for recovery. Whitson says that this result makes sense: A healthy young person can produce a rapid physiological response to adjust to fluctuations and restore a personal norm. But in an older person, she says, “everything is just a little bit dampened, a little slower to respond, and you can get overshoots,” such as when an illness brings on big swings in blood pressure.

Measurements such as blood pressure and blood cell counts have a known healthy range, however, Whitson points out, whereas step counts are highly personal. The fact that Pyrkov and his colleagues chose a variable that is so different from blood counts and still discovered the same decline over time may suggest a real pace-of-aging factor in play across different domains.

Study co-author Peter Fedichev, who trained as a physicist and co-founded Gero, says that although most biologists would view blood cell counts and step counts as “pretty different,” the fact that both sources “paint exactly the same future” suggests that this pace-of-aging component is real.

The authors pointed to social factors that reflect the findings. “We observed a steep turn at about the age of 35 to 40 years that was quite surprising,” Pyrkov says. For example, he notes, this period is often a time when an athlete’s sports career ends, “an indication that something in physiology may really be changing at this age.”

The desire to unlock the secrets of immortality has likely been around as long as humans’ awareness of death. But a long life span is not the same as a long health span, says S. Jay Olshansky, a professor of epidemiology and biostatistics at the University of Illinois at Chicago, who was not involved in the work. “The focus shouldn’t be on living longer but on living healthier longer,” he says.

“Death is not the only thing that matters,” Whitson says. “Other things, like quality of life, start mattering more and more as people experience the loss of them.” The death modeled in this study, she says, “is the ultimate lingering death. And the question is: Can we extend life without also extending the proportion of time that people go through a frail state?”

The researchers’ “final conclusion is interesting to see,” Olshansky says. He characterizes it as “Hey, guess what? Treating diseases in the long run is not going to have the effect that you might want it to have. These fundamental biological processes of aging are going to continue.”

The idea of slowing down the aging process has drawn attention, not just from Silicon Valley types who dream about uploading their memories to computers but also from a cadre of researchers who view such interventions as a means to “compress morbidity”—to diminish illness and infirmity at the end of life to extend health span. The question of whether this will have any impact on the fundamental upper limits identified in the Nature Communications paper remains highly speculative. But some studies are being launched—testing the diabetes drug metformin, for example—with the goal of attenuating hallmark indicators of aging.

In this same vein, Fedichev and his team are not discouraged by their estimates of maximum human life span. His view is that their research marks the beginning of a longer journey. “Measuring something is the first step before producing an intervention,” Fedichev says. As he puts it, the next steps, now that the team has measured this independent pace of aging, will be to find ways to “intercept the loss of resilience.”


Original article here



22 Sep 2023
Comments: 0

Selfishness Is Learned

Many people cheat on taxes — no mystery there. But many people don’t, even if they wouldn’t be caught — now, that’s weird. Or is it? Psychologists are deeply perplexed by human moral behavior, because it often doesn’t seem to make any logical sense. You might think that we should just be grateful for it. But if we could understand these seemingly irrational acts, perhaps we could encourage more of them.

It’s not as though people haven’t been trying to fathom our moral instincts; it is one of the oldest concerns of philosophy and theology. But what distinguishes the project today is the sheer variety of academic disciplines it brings together: not just moral philosophy and psychology, but also biology, economics, mathematics, and computer science. They do not merely contemplate the rationale for moral beliefs, but study how morality operates in the real world, or fails to. David Rand of Yale University epitomizes the breadth of this science, ranging from abstract equations to large-scale societal interventions. “I’m a weird person,” he says, “who has a foot in each world, of model-making and of actual experiments and psychological theory building.”

In 2012 he and two similarly broad-minded Harvard professors, Martin Nowak and Joshua Greene, tackled a question that exercised the likes of Thomas Hobbes and Jean-Jacques Rousseau: Which is our default mode, selfishness or selflessness? Do we all have craven instincts we must restrain by force of will? Or are we basically good, even if we slip up sometimes?

They collected data from 10 experiments, most of them using a standard economics scenario called a public-goods game. Groups of four people, either American college students or American adults participating online, were given some money. They were allowed to place some of it into a pool, which was then multiplied and distributed evenly. A participant could maximize his or her income by contributing nothing and just sharing in the gains, but people usually gave something. Despite the temptation to be selfish, most people showed selflessness.
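
The temptation built into that game is easy to see in numbers. Below is a minimal sketch of one round of a four-player public-goods game; the endowment of 10 and the multiplier of 2 are illustrative assumptions, not the study’s exact parameters.

```python
def payoffs(contributions, endowment=10, multiplier=2.0):
    """One round of a public-goods game: each player keeps whatever they did not
    contribute, plus an equal share of the multiplied common pool."""
    pool = sum(contributions) * multiplier
    share = pool / len(contributions)
    return [endowment - c + share for c in contributions]

# Everyone contributes everything: each player walks away with 20.
print(payoffs([10, 10, 10, 10]))
# One free-rider among full contributors: the free-rider does best (25 vs. 15).
print(payoffs([0, 10, 10, 10]))
```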

 

The fuzziness of psychological ideas makes them hard to test. If an experimental result doesn’t fit your theory of human behavior, you can fiddle with the definitions.

 

This finding was old news, but Rand and his colleagues wanted to know how much deliberation went into such acts of generosity. So in two of the experiments, subjects were prodded to think intuitively or deliberately; in two others, half of the subjects were forced to make their decision under time pressure and half were not; and in the rest, subjects could go at their own pace and some naturally made their decisions faster than others. If your morning commute is any evidence, you would expect people in a hurry to be extra selfish. But the opposite was true: Those who responded quickly gave more. Conversely, when people took their time to deliberate or were encouraged to contemplate their choice, they gave less.

The researchers worked under the assumption that snap judgments reveal our intuitive impulses. Our intuition, apparently, is to cooperate with others. Selfish behavior comes from thinking too much, not too little. Rand recently verified this finding in a meta-analysis of 51 similar studies from different research groups. “Most people think we are intuitively selfish,” Rand says — based on a survey he conducted—but “our lab experiments show that making people rely more on intuition increases cooperation.”

The cooperative impulse isn’t confined to an artificial experimental setting. In another paper, Rand and Ziv Epstein of Pomona College studied interviews with 51 recipients of the Carnegie Hero Medal, who had demonstrated extreme altruism by risking their lives to save others. Study participants read the interviews and rated the medalists on how much their thinking seemed intuitive versus deliberative. And intuition dominated. “I’m thankful I was able to act and not think about it,” a college student who rescued a 69-year-old woman from a car during a flash flood explained.

So Rand made a strong case that people are intuitive cooperators, but he considered these findings just the start. It’s one thing to put forward an idea and some evidence for it — lots of past researchers have done that. It’s quite another to describe and explain that idea in a rigorous, mathematical fashion. Ironically, Rand figured he could make better sense of humans by stepping away from studying real ones.

The overwhelming majority of psychological theories are verbal: explanations of the ways people act using everyday language, with maybe a few terms of art thrown in. But words can be imprecise. It may be true that “cooperation is intuitive,” but when is it intuitive? And what exactly does “intuitive” mean? The fuzziness of psychological ideas makes them hard to test. If an experimental result doesn’t fit your theory of human behavior, you can fiddle with the definitions and claim you were right all along.

Rand has sought to create quantitative models. “Science is about developing theories,” he says, “not about developing a list of observations. And the reason formal models are so important is that if your goal is theory-building, then it’s essential that you have theories that are really clearly articulated and are falsifiable.”

To do that, he has developed computer simulations of society — The Sims, basically. These models represent collections of individual people described by computer “agents,” algorithms that capture a specific package of traits, such as a tendency to cooperate or not. You can do controlled experiments on these computerized citizens that would be impossible or unethical to do with real people. You can endow them with new personalities to see how they’d fare. You can observe social processes in action, on time scales ranging from seconds to generations, instead of just taking a snapshot of a person or group. You can watch the spread of certain behaviors throughout a population and how they influence other behaviors. Over time, the patterns that emerge can tell you things about large-scale social interaction that a lab experiment with a few real people never could.


One of the first such models, in the early 1970s, studied housing segregation. It represented a city as a 16-by-13 grid of squares, populated by two types of people: stars and circles. Each star would move to the nearest location in which at least half its neighbors were also stars — it had a slight bias to be among similar others. Circles did the same. Even these mild biases led quickly to stark segregation, with all-star and all-circle regions of the board — a much more extreme partitioning than any one agent sought. The researcher, the economist Thomas Schelling, used his model to help explain racial segregation in American cities. A neighborhood can splinter into homogeneous patches even when individual residents are hardly prejudiced at all. (Of course, in reality, segregation also reflects outright racism and explicit policies of exclusion.) Schelling’s work became a case study of how a group’s collective behavior can diverge from the desires of any one agent.
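
A bare-bones version of Schelling’s model fits in a few lines. The sketch below is a simplified reading rather than his exact setup: agents sit on a grid, and an unhappy one (fewer than half of its neighbors are its own type) moves to a random empty square. The grid size, density, and relocation rule here are illustrative assumptions.

```python
import random

SIZE, THRESHOLD = 16, 0.5  # illustrative grid size and similarity threshold

# A grid of stars ('*'), circles ('o'), and some empty cells (None).
grid = [[random.choice(['*', 'o', None]) for _ in range(SIZE)] for _ in range(SIZE)]

def unhappy(r, c):
    """True if fewer than half of this agent's occupied neighbors match its type."""
    me = grid[r][c]
    neighbors = [grid[r + dr][c + dc]
                 for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                 if (dr or dc) and 0 <= r + dr < SIZE and 0 <= c + dc < SIZE]
    occupied = [n for n in neighbors if n is not None]
    return bool(occupied) and sum(n == me for n in occupied) / len(occupied) < THRESHOLD

# Repeatedly relocate unhappy agents to random empty squares; even this mild
# preference for similar neighbors tends to produce starkly segregated patches.
for _ in range(20000):
    r, c = random.randrange(SIZE), random.randrange(SIZE)
    if grid[r][c] is not None and unhappy(r, c):
        empties = [(i, j) for i in range(SIZE) for j in range(SIZE) if grid[i][j] is None]
        if empties:
            i, j = random.choice(empties)
            grid[i][j], grid[r][c] = grid[r][c], None

print('\n'.join(''.join(cell or '.' for cell in row) for row in grid))
```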

Such models have also been used to explore cooperation. In an influential paper in 1981, the political scientist Robert Axelrod programmed agents to play a simple game called the Prisoner’s Dilemma. Two players have to decide whether to cooperate with or betray the other, and they receive points based on their choices. The scoring system is set up to mimic an essential dilemma of social life. Together the players perform best if they both cooperate, yet each can maximize his or her own individual outcome, at the expense of the other, by acting selfishly. The game takes its name from a scenario in which the police interrogate two thieves, offering each a reward for ratting out his or her accomplice. The thieves aren’t able to communicate to reach a joint decision; they have to make their decisions independently. Acting rationally, each should rat out the other. But when they both act “rationally,” they actually end up with the most combined jail time.
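
The dilemma itself is just a small payoff table. The numbers below are a conventional illustrative scoring (an assumption for the sketch, not figures quoted in the article): betrayal always pays more in a single exchange, yet mutual cooperation beats mutual betrayal.

```python
# Conventional illustrative payoffs: (my points, opponent's points).
# Defecting always earns me more than cooperating against the same opponent move,
# but mutual cooperation (3, 3) beats mutual defection (1, 1).
PAYOFFS = {
    ('cooperate', 'cooperate'): (3, 3),
    ('cooperate', 'defect'):    (0, 5),
    ('defect',    'cooperate'): (5, 0),
    ('defect',    'defect'):    (1, 1),
}

def play(my_move, their_move):
    return PAYOFFS[(my_move, their_move)]

print(play('cooperate', 'cooperate'))  # (3, 3)
print(play('defect', 'cooperate'))     # (5, 0): the temptation to betray
```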

 

It’s possible we’re born with a tendency to cooperate, but frequent cooperation (with beneficial results) is required to sustain our benevolence.

 

The game gets more interesting — and more analogous to real life — when you play multiple rounds with the same partner. Here, repeated cooperation is best not just for both partners as a unit but also for each individually. You can still occasionally double-cross your partner for extra points, however, as long as it doesn’t trigger later betrayal.

What is the best strategy, then? To find out, Axelrod solicited Prisoner’s Dilemma strategies from mathematicians, biologists, economists, political scientists, computer scientists, and physicists from around the world. Axelrod programmed his computerized agents with these strategies and made them play a round-robin tournament. Some strategies were quite sophisticated, but the winner was a simple one called tit-for-tat.

Tit-for-tat resembles human reciprocity. It starts with cooperation and, after that, does whatever the other player did on the previous round. An agent using the strategy extends an olive branch at first. If its opponent reciprocates, it keeps cooperating. But if its opponent double-crosses it, the tit-for-tat agent rescinds its peace offering until its opponent makes amends.
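
Written out, tit-for-tat is strikingly small. Here is a minimal sketch of the strategy playing a short match against an always-defecting opponent, reusing the illustrative payoff values from the sketch above.

```python
PAYOFFS = {('cooperate', 'cooperate'): (3, 3), ('cooperate', 'defect'): (0, 5),
           ('defect', 'cooperate'): (5, 0), ('defect', 'defect'): (1, 1)}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy whatever the opponent did last round."""
    return 'cooperate' if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return 'defect'

def match(strategy_a, strategy_b, rounds=5):
    seen_by_a, seen_by_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(seen_by_a), strategy_b(seen_by_b)
        points_a, points_b = PAYOFFS[(move_a, move_b)]
        score_a, score_b = score_a + points_a, score_b + points_b
        seen_by_a.append(move_b)  # each player only sees the other's moves
        seen_by_b.append(move_a)
    return score_a, score_b

print(match(tit_for_tat, always_defect))  # (4, 9): it is exploited only in the first round
```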


By combining the short-term temptation to be selfish with the long-term benefits of collaboration, the Prisoner’s Dilemma is an ideal model for human cooperation, and Rand has built on Axelrod’s work to understand why evolution might have favored intuitive selflessness. 

Rand and his grad student Adam Bear considered a variant of the Prisoner’s Dilemma in which matchups were either one-shot or multiple-round, chosen at random. The computerized agents faced a tough choice. In a one-off, they would score more points by betraying their opponent, whereas in repeated play cooperation made more sense. But the uncertainty made it unclear which strategy was best. Rand and Bear then added a twist. An agent could elect to pay some points at the start of an encounter—representing the efforts of deliberation—to suss out what kind of matchup it would face, so that it could tailor its strategy.

The agent had to decide whether the advantage of foreknowledge outweighed its cost. The price of the tip-off varied randomly, and each agent was programmed with a maximum price it would agree to pay; if the price exceeded that amount, the agent did not receive any advance information and instead chose some default behavior, following its “intuition.” In this way, the simulation allowed for different personality types. Some agents intuitively cooperated, others intuitively betrayed. Some occasionally deliberated, others didn’t.
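
One way to picture an agent in this setup is as a default move plus a personal ceiling on what it will pay to find out which kind of game it is in. The sketch below is a loose simplification of the design described here, not Rand and Bear’s actual code; the parameter names and the deliberation rule are assumptions.

```python
import random

def agent_move(intuitive_move, max_price, game_is_repeated, deliberation_price):
    """Pay to learn the game type only if the tip-off is cheap enough;
    otherwise fall back on the agent's intuitive default."""
    if deliberation_price <= max_price:
        # Deliberation reveals the matchup: cooperate in repeated games,
        # defect in one-shot games (after paying the deliberation price).
        return ('cooperate' if game_is_repeated else 'defect'), deliberation_price
    return intuitive_move, 0.0

# An intuitive cooperator facing a one-shot game with a randomly priced tip-off:
price = random.uniform(0, 1)
print(agent_move('cooperate', max_price=0.4, game_is_repeated=False, deliberation_price=price))
```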

Is deliberation helpful? That’s not immediately obvious. Intuitive thinking is fast but inflexible. Deliberative thinking can achieve better outcomes but takes time and energy. To see which strategy excelled in the long run, Rand and Bear’s model simulated a process of evolution. A large population of agents played the game with one another and either proliferated or died depending on how well they did. This process can model either genetic evolution or cultural evolution, in which the weak players don’t actually die, but merely adopt stronger strategies through imitation.

 

Most of us are genuinely good. And if we’re not, we can be encouraged to be. The math is there.

 

Typically, one strategy swept through the population and replaced the alternatives. This victorious strategy depended on the precise parameters of the game. For example, Rand and Bear varied the probability that matchups would be single- or multiple-round. When most were multi-round, the winning agents defaulted to cooperating but deliberated if the price was right and switched to betrayal if they found they were in a one-shot game. But when most were one-shots, the agents that prevailed were no longer willing to pay to deliberate at all. They simply double-crossed their opponents. In other words, the model produced either wary cooperation or uncompromising betrayal.

This outcome was notable for what was missing. Agents that always cooperated usually died off completely. Likewise, almost no set of game parameters favored agents that defaulted to the double-cross but were sometimes willing to deliberate. Bear and Rand stared at this asymmetry for several weeks, baffled.

Finally, they had a breakthrough. They realized that when your default is to betray, the benefits of deliberating — seeing a chance to cooperate — are uncertain, depending on what your partner does. With each partner questioning the other, and each partner factoring in the partner’s questioning of oneself, the suspicion compounds until there’s zero perceived benefit to deliberating. If your default is to cooperate, however, the benefits of deliberating — occasionally acting selfishly — accrue no matter what your partner does, and therefore deliberation makes more sense.

So, it seems there is a firm evolutionary logic to the human instinct to cooperate but adjust if necessary — to trust but verify. We ordinarily cooperate with other people, because cooperation brings us benefits, and our rational minds let us decipher when we might occasionally gain by acting selfishly instead.

The model also ties up a loose end from Rand’s earlier studies of public-goods games. In that research, time pressure caused some people to cooperate more, but never caused anyone to cooperate less. This asymmetry now makes sense. The only people who would have shown that behavior were those who were willing to deliberate, but defaulted to betrayal; the time pressure would bring out their Machiavellian inclinations. Evidently such people are rare. If someone is deep-down selfish, rational deliberation will only make them more so. And the evolutionary model shows why. Defectors who have qualms are quickly winnowed out by genetic or cultural evolution.

When it comes to getting people to cooperate more, Rand’s work brings good news. Our intuitions are not fixed at birth. We develop social heuristics, or rules of thumb for interpersonal behavior, based on the interactions we have. Change those interactions and you change behavior.

Rand, Nowak, and Greene tested that idea in their 2012 paper. They asked some subjects whether they’d ever played such economics games before. Those with previous experience didn’t become more generous when asked to think intuitively; they’d apparently become accustomed to the anonymous nature of such games and learned a new intuition. Unfortunately, it was a cynical one: They could get away with mooching off others. Similarly, subjects who reported that they couldn’t trust most of the people in their lives also didn’t become more generous when acting on intuition. It’s possible we’re born with a tendency to cooperate, but frequent cooperation (with beneficial results) is required to sustain our benevolence.

Happily, even the Grinch can expand his heart by three sizes, as Rand demonstrates in a recent study. First, he had test subjects play the Prisoner’s Dilemma for about 20 minutes with a variety of opponents. For half of the subjects, the average game lasted eight rounds, meaning cooperation was the best strategy; for half, the average game lasted a single round, which discouraged cooperation. Afterward, everyone played a public-goods game. Those stewed in cooperation gave significantly more money in the second phase of the experiment than did those without it. In less than half an hour, their intuitions had shifted.

How do you encourage cooperation in places where cooperation isn’t the norm? Corporate America comes to mind. “In a lot of situations people are basically rewarded for backstabbing and ladder-climbing,” Rand says. Rand and Bear’s modeling paper, in which intuitive defectors don’t trust each other enough even to consider whether cooperation would pay off, points to an answer. Rand suggests that, at least at first, incentives could come from above, so that the benefits of cooperating don’t depend solely on whether one’s partner cooperates. Companies might offer bonuses and recognition for helpful behavior. Once cooperation becomes a social heuristic, people will begin to cooperate when it benefits them, but also even when it doesn’t. Selflessness will be the new norm.

When selflessness is the norm, encouraging people to make decisions quickly can bring out their better angels. Extensions of this research reveal that we see quick or unthinking acts of generosity as particularly revealing of kindness, and that people may even use this signal strategically. In recent work, Rand and his collaborators have shown that people are faster to make decisions to cooperate when they know someone is watching, as if aware that others will judge them by their alacrity. Among other puzzles, Rand is currently trying to untangle this apparent paradox—the strategic use of intuition.

Rand’s work offers a correction to those misanthropes who peer into the hearts of men and women and see shadows. Most of us are genuinely good. And if we’re not, we can be encouraged to be. The math is there.

If seeing life as a set of economics games and cooperation as self-interest in disguise sounds dismal, it is actually not so distant from what you might call virtue. “When I’m nice to other people, I’m not doing it because of some kind of calculation. I’m doing it because it feels good,” Rand says. “And the reason it feels good, I argue, is that it is actually payoff maximizing in the long run.”

Rand then adds a crucial clarification. “It feels good to be nice — unless the other person is a jerk,” he says. “And then it feels good to be mean.”

Tit for tat indeed.


Original article here


18 Sep 2023
Comments: 0

Voice Is the Next Big Platform, Unless You Have an Accent

My mother waited two months for her Amazon Echo to arrive. Then, she waited again — leaving it in the box until I came to help her install it. Her forehead crinkled as I downloaded the Alexa app on her phone. Any device that requires vocal instructions makes my mother skeptical. She has bad memories of Siri. “She could not understand me,” my mom told me.

My mother was born in the Philippines, my father in India. Both of them speak English as a third language. In the nearly 50 years they’ve lived in the United States, they’ve spoken English daily — fluently, but with distinct accents and sometimes different phrasings than a native speaker. In their experience, that means Siri, Alexa, or basically any device that uses speech technology will struggle to recognize their commands.

My parents’ experience is hardly exclusive or unknown. (It’s even been chronicled in comedy, with this infamous trapped-in-a-voice-activated elevator sketch.) My sister-in-law told me she gave up on using Siri after it failed to recognize the “ethnic names” of her friends and family. I can vouch for the frustration: The other day, my command of “Text Zahir” morphed into “Text Zara here.”

Right now, it’s not much of a problem — but it’s slated to become more serious, given that we are in the middle of a voice revolution. Voice-based wearables, audio, and video entertainment systems are already here. Driven in part by concerns about distracted driving, voice control systems will soon be the norm in vehicles. Google Home and Amazon’s Alexa are bringing the idea of a “smart home” into millions of households in the US. That’s why it took so long for my mother’s Echo to arrive — the Echo was among Amazon’s bestsellers this holiday season, with a 900 percent increase over 2016 sales. It was backordered for weeks.

Overall, researchers estimate 24.5 million voice-driven devices will make their way into Americans’ daily routines this year — a figure that underscores ComScore’s prediction that by 2020, half of all our searches will be performed by voice.

But as technology shifts to respond to our vocal cords, what happens to the huge swath of people who can’t be understood?

To train a machine to recognize speech, you need a lot of audio samples. First, researchers have to collect thousands of voices, speaking on a range of topics. They then manually transcribe the audio clips. This combination of data — audio clips and written transcriptions — allows machines to make associations between sound and words. The phrases that occur most frequently become a pattern for an algorithm to learn how a human speaks.

But an AI can only recognize what it’s been trained to hear. Its flexibility depends on the diversity of the accents to which it’s been introduced. Governments, academics, and smaller startups rely on collections of audio and transcriptions, called speech corpora, to bypass doing labor-intensive transcriptions themselves. The University of Pennsylvania’s Linguistic Data Consortium (LDC) is a powerhouse of these data sets, making them available under licensed agreements for companies and researchers. One of its most famous corpora is Switchboard.

Texas Instruments launched Switchboard in the early 1990s to build up a repository of voice data, which was then distributed by the LDC for machine learning programs. It’s a collection of roughly 2,400 telephone conversations, amassed from 543 people from around the US — a total of about 250 hours. Researchers lured the callers by offering them long-distance calling cards. A participant would dial in and be connected with another study participant. The two strangers would then chat spontaneously about a given topic — say, childcare or sports.

For years linguists have assumed that because the LDC is located in Philadelphia, the conversations skewed towards a Northeastern accent. But when Marsal Gavaldà, the director of machine intelligence at the messaging app Yik Yak, crunched the numbers in Switchboard’s demographic history, he found that the accent pool skewed more midwestern. South and North Midland accents comprised more than 40 percent of the voice data.

Other corpora exist, but Switchboard remains a benchmark for the models used in voice recognition systems. Case in point: Both IBM and Microsoft use Switchboard to test the word error rates for their voice-based systems. “From this set of just over 500 speakers, pretty much all engines have been trained,” says Gavaldà.
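
The benchmark mentioned here, word error rate, is itself a compact algorithm: the minimum number of word substitutions, insertions, and deletions needed to turn the system’s transcript into the reference transcript, divided by the length of the reference. A minimal sketch (the example sentence is just an assumption echoing the anecdote above):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level edit distance (substitutions + insertions + deletions)
    divided by the number of words in the reference transcript."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits needed to turn the first j hypothesis words
    # into the first i reference words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution or match
    return dp[-1][-1] / len(ref)

print(word_error_rate("text zahir", "text zara here"))  # 1.0: one substitution plus one insertion over two words
```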

But building voice technology on a 26-year-old corpus inevitably lays a foundation for misunderstanding. English is professional currency in the linguistic marketplace, but numerous speakers learn it as a second, third, or fourth language. Gavaldà likens the process to drug trials. “It may have been tried in a hundred patients, [but] for a narrow demographic,” he tells me. “You try to extrapolate that to the general population, the dosage may be incorrect.”

Larger companies, of course, have to think globally to stay competitive — especially because most smartphone sales happen outside the US. Technology companies like Apple, Google, and Amazon have private, in-house methods of collecting this data for the languages and accents they’d like to accommodate. And the more consumers use their products, the more their feedback improves those products, through programs like Voice Training on the Alexa app.

But even if larger tech companies are making headway in collecting more specific data, they’re motivated by the market to not share it with anyone — which is why it takes so long for the technology to trickle down. This secrecy also applied to my reporting of this piece. Amazon never replied to my request for comment, a spokesperson for Google directed me to a blog post outlining its deep learning techniques, and an Apple PR representative noted that Siri is now customized for 36 countries and supports 21 languages, language variants, and accents.

Outside the US, companies are aware of the importance of catering to accents. The Chinese search engine company Baidu, for one, says its deep learning approach to speech recognition achieves accuracy in English and Mandarin better than humans, and it’s developing a “deep speech” algorithm that will recognize a range of dialects and accents. “China has a fairly deep awareness of what’s happening in the English-speaking world, but the opposite is not true,” Baidu chief scientist Andrew Ng told The Atlantic.

Yet smaller companies and individuals who can’t invest in collecting data on their own are beholden to cheaper, more readily available databases that may not be as diverse as their target demographics. “[The data’s] not really becoming more diverse, at least from my perspective,” Arlo Faria, a speech researcher at the conference transcription startup Remeeting, tells me. Remeeting, for example, has used a corpus called Fisher that includes a group of non-native English speakers — but Fisher’s accents are largely left up to chance, depending on who happened to participate in the data collection. There are some Spanish and Indian accents, for instance, but very few British accents, Faria recalls.

That’s why, very often, voice recognition technology reacts to accents differently than humans do, says Anne Wootton, co-founder and CEO of the Oakland-based audio search platform Pop Up Archive. “Oftentimes the software does a better job with, like, Indian accents than deep Southern, like Shenandoah Valley accents,” she says. “I think that’s a reflection of what the training data includes or does not include.”

Rachael Tatman, a PhD candidate at the University of Washington’s Department of Linguistics who focuses on sociolinguistics, noted that the underrepresented groups in these data sets tend to be groups that are marginalized in general. A typical database of American voices, for example, would lack poor, uneducated, rural, non-white, non-native English voices. “The more of those categories you fall into, the worse speech recognition is for you,” she says.

Still, Jeffrey Kofman, the CEO and co-founder of Trint, another automated speech-to-text company, this one based in the UK, is confident that accent recognition is something speech science will eventually be able to solve. We video chatted on the Trint platform itself, where Australian English is now available alongside British and North American English as transcription accents. Trint also offers speech-to-text in a dozen European languages, and plans to add South Asian English sometime this year, he said.

Collecting data is expensive and cumbersome, which is why certain key demographics take priority. For Kofman, that’s South Asian accents, “because there are so many people from India, Pakistan, and those countries here in England, in the US and Canada, who speak very clearly but with a distinct accent,” he says. Next, he suspects, he’ll prioritize South African accents.

Obviously, it’s not just technology that discriminates against people with accents. It’s also other people. Mass media and globalization are having a huge effect on how people sound. Speech experts have documented the decline of certain regional American accents since as early as 1960, for example, in favor of a more homogenous accent fit for populations from mixed geographic areas. This effect is exacerbated when humans deal with digital assistants or operators; they tend to use a voice devoid of colloquialisms and natural cadence.

Or, in other words, a voice devoid of an identity and accent.

As voice recognition technology becomes better, using a robotic accent to communicate with a device stands to be challenged — if people feel less of a need to talk to their devices as if they are machines, they can start talking to them as naturally as they would a friend. And while some accent reduction coaches find their clients use voice assistants to practice neutralizing their thick foreign or regional accents, Lisa Wentz, a public speaking coach in San Francisco who works in accent reduction, says that she doesn’t recommend it.

That’s because, she tells me, most of her clients are aiming for other people to understand them. They don’t want to have to repeat themselves or feel like their accents prevent others from hearing them. Using devices that aren’t ready for different voices, then, only stands to make this feeling echo.

My mother and I set up her Alexa app together. She wasn’t very excited about it. I could already imagine her distrust and fear of a car purported to drive by the command of her voice. My mother would never ride in it; the risk of crashing would be too real. Still, she tried out a couple of questions on the Echo.

“Alexa, play ‘Que sera sera,’” my mother said.

“I can’t find the song ‘Kiss your ass era.’”

My mom laughed, less out of frustration and more out of amusement. She tried again, this time speaking slower, as if she were talking to a child. “Alexa, play ‘Que sera sera.’” She sang out the syllables of sera in a slight melody, so that the device could clearly hear “se-rah.”

Alexa understood, and found what my mom was looking for. “Here’s a sample of ‘Que sera sera,’ by Doris Day,” she said, pronouncing the sera a bit harsher — “se-raw.”

The 1964 hit started to play, and my mother smiled at the pleasure of recognition.


Original article here


14 Sep 2023
Comments: 0

Why You Should Cultivate a Fluid Sense of Self

If you want to learn something about change there is no better place to look than evolution. Nothing represents a continuous and unrelenting cycle of order, disorder, and reorder on a grander scale. For long periods of time, Earth is relatively stable. Sweeping changes—warming, cooling, or an asteroid falling from space, for example—occur. These inflection points are followed by periods of disruption and chaos. Eventually, Earth, and everything on it, regains stability, but that stability is somewhere new.

During this cycle, some species get selected out. Others survive and thrive. Species in the latter group tend to have high degrees of what evolutionary biologists call “complexity.” Complexity comprises two elements: differentiation and integration. Differentiation is the degree to which a species is composed of parts that are distinct in structure or function from one another. Integration is the degree to which those distinct parts communicate and enhance each other’s goals to create a cohesive whole.

Consider Homo sapiens (you and me), by far the most abundant and widespread species of primate. We have large frames, four limbs, opposable thumbs, body temperature that is somewhat resistant to external conditions, good vision and hearing, digestive tracts that can accommodate a variety of nutrients, and the capacity for language and understanding. In other words, we are a highly differentiated species. But we also have enormous brains and advanced nervous systems that integrate all of these parts into a cohesive whole. The combination of these qualities—widespread differentiation and strong integration—makes us a decidedly complex species. Our complexity is how we got here today and why, hopefully, we’ll stick around for at least a bit longer.

Though change at the individual level, the primary concern of my new book Master of Change, is different than change on an evolutionary scale, there is still much we can learn from evolution’s foundational principles, lessons that apply to the horizons of our own lives. If we want to survive and thrive during ongoing cycles of change and disorder, then we, too, can benefit from developing our own versions of complexity.

As a matter of fact, there is a psychological construct called self-complexity. Essentially, it says that the key to a strong and enduring identity—one that is equal parts rugged and flexible, that can navigate the inevitable changes we all face—is to diversify your sense of self.

The more you define yourself by any one activity, the more fragile you become. If that activity doesn’t go well or something changes unexpectedly, you lose a sense of who you are. But with self-complexity, you develop multiple components to your identity.

We all can wear many hats: examples include writer, spouse, artist, parent, employee, neighbor, entrepreneur, baker, and creative, to name just a few. Take an inventory of your own identities. Are there any upon which you are over-reliant for meaning and self-worth? What would it look like to diversify your sense of self? Even if you desire to go “all in” on a certain endeavor, you’ve got to ensure that you don’t leave others completely behind.

I’ve come to use the metaphor of a house for identity. If your house has only one room in it, and that room floods, you are going to be very disoriented. But if your house has multiple rooms, you can seek refuge in the others while you weather the storm. It’s okay to spend a lot of time in one room—so long as you have other rooms available when the one you are currently pouring yourself into changes.

For example: there are times when I lean heavily into each of my main identities—father, husband, writer, coach, friend, athlete, and neighbor. I’ve learned that keeping all of these identities strong ensures that when things don’t go well in one area of my life I can rely on the others to pick me up, which helps me to stay grounded and navigate whatever challenge I am facing.

What you want to do is challenge yourself to integrate the various elements of your identity into a cohesive whole. This allows you to emphasize and de-emphasize certain parts of your identity at different periods of time. The result is a fluid sense of self.

Unlike other types of matter, a fluid has both mass and volume but no fixed shape. This allows it to flow over and around obstacles, changing form while retaining substance, neither getting stuck nor fracturing when unforeseen impediments manifest on its path. Cultivating a fluid sense of self allows you to do the same. By developing and nurturing multiple parts of your identity, you can more easily navigate change.

A large body of research shows that when there is too great a fusion between one’s identity and one’s pursuit, anxiety, depression, and burnout frequently result. This is especially true for athletes during periods of change and transition, when one’s dominant—and all too often, sole—sense of identity feels at risk. Yet while it may be heightened for athletes, it’s a pattern that holds true in all lines of work and all walks of life: if you want to be excellent and experience something fully, then you’ve got to go all in, but only to a point. If your identity becomes too enmeshed in any one concept or endeavor—be it your age, how you look in the mirror, a relationship, or your career—then you are likely to face significant distress when things change, which, for better or worse, they always do.

None of the above is permission to be laissez-faire or go through the motions. Caring deeply about the people, activities, and projects you love is key to a rich and meaningful existence. The problem is not caring deeply; it is when your identity becomes too rigidly attached to any single object or endeavor.


Original article here


