How To Be Spiritual In A Material World


29 May 2023

The big idea: why colour is in the eye of the beholder

In February 2015, a Scottish woman uploaded a photograph of a dress to the internet. Within 48 hours the blurry snapshot had gone viral, provoking spirited debate around the world. The disagreement centred on the dress’s colour: some people were convinced it was blue and black while others were adamant it was white and gold.

Everyone, it seemed, was incredulous. People couldn’t understand how, faced with exactly the same photograph of exactly the same dress, they could reach such different and firmly held conclusions about its appearance. The confusion was grounded in a fundamental misunderstanding about colour – one that, despite mounting evidence to the contrary, shows little sign of disappearing.

For a long time, people believed that colours were objective, physical properties of objects or of the light that bounced off them. Even today, science teachers regale their students with stories about Isaac Newton and his prism experiment, telling them how different wavelengths of light produce the rainbow of hues around us.

But this theory isn’t really true. Different wavelengths of light do exist independently of us but they only become colours inside our bodies. Colour is ultimately a neurological process whereby photons are detected by light-sensitive cells in our eyes, transformed into electrical signals and sent to our brain, where, in a series of complex calculations, our visual cortex converts them into “colour”.
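As a rough aside not drawn from the article: the gap between a wavelength and a colour can be sketched in a few lines of Python. The toy model below stands in for the three cone types with Gaussian sensitivity curves – the peak wavelengths and widths are loose assumptions chosen only for illustration – and shows that what reaches the brain is not “blue light” but a triplet of cell responses, which the visual cortex then interprets as colour.

```python
import math

# Assumed, illustrative peak sensitivities (nm) and widths for the three cone types.
# Real cone sensitivities are measured curves; these Gaussians are a toy stand-in.
CONES = {"S": (445, 30), "M": (535, 45), "L": (565, 50)}

def cone_responses(wavelength_nm: float) -> dict:
    """Return a toy S/M/L response triplet for one wavelength of light."""
    return {
        name: math.exp(-((wavelength_nm - peak) ** 2) / (2 * width ** 2))
        for name, (peak, width) in CONES.items()
    }

# The physical stimulus is just a number (a wavelength); the "colour" the brain
# works with is the pattern of responses that number happens to produce.
print(cone_responses(475))  # light most of us would call "blue"
print(cone_responses(580))  # light most of us would call "yellow"
```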

Most experts now agree that colour, as commonly understood, doesn’t inhabit the physical world at all but exists in the eyes or minds of its beholders. They argue that if a tree fell in a forest and no one was there to see it, its leaves would be colourless – and so would everything else. To put it another way: there is no such thing as colour; there are only the people who perceive it.

This is why no two people will ever see exactly the same colours. Every person’s visual system is unique and so, therefore, are their perceptions. About 8% of men are colour-blind and see fewer colours than everyone else; a small number of lucky women might, thanks to a genetic duplication on the X chromosome, be able to distinguish many more than the rest of us.

Animals inhabit very different chromatic worlds too. Most mammals are red-green colour-blind; bulls might be famous for their hatred of red capes, but the colour itself is invisible to them – they are actually enraged by the fabric’s movements. By contrast, most reptiles, amphibians, insects and birds perceive more colours than us. Bees see ultraviolet light, discerning elaborate patterns in flowers that we cannot perceive, while snakes see infrared radiation, detecting the warm bodies of prey from a distance.

 


 

“Colour,” Umberto Eco once said, “is not an easy matter.” It is indeed elusive and illusive. Pretty much everything we take to be self-evident about it really isn’t self-evident at all. Scientists have shown that the sky isn’t blue, the sun isn’t yellow, snow isn’t white, black isn’t dark and darkness isn’t black.

One cause of the problem – or perhaps its symptom – is language. In English we divide colour space into 11 basic terms – black, white, red, yellow, green, blue, purple, brown, grey, orange and pink – but other languages do things differently. Many don’t have words for pink, brown and yellow, and some use one word for both green and blue. The Tiv people in west Africa use only three basic colour terms (black, white, red), and at least one Indigenous community has no specific words for any colours, only “light” and “dark”.

The vocabulary of these languages isn’t dictated by the prismatic spectrum but, once again, by what is happening inside their speakers’ heads. People generally name only the colours they consider socially or culturally important. The Aztecs, who were enthusiastic farmers, used more than a dozen words for green; the Mursi cattleherders of Ethiopia have 11 colour terms for cows, and none for anything else.

These differences might even influence the colours they see. Debates about linguistic relativity – the extent to which our words shape our thoughts and perceptions – have been rumbling on for decades, and while many scholars have overstated the case for it, some have found persuasive evidence that if you don’t, say, have a word for blue, you will probably find it harder to distinguish from neighbouring colours.

The meanings of colour are no less socially constructed, which is why a single colour can mean completely different things in different places and at different times. In the west white is the colour of light, life and purity, but in parts of Asia it is the colour of death. In America red is conservative and blue progressive, while in Europe it’s the other way around. Many people today think of blue as masculine and pink as feminine, but only a hundred years ago baby boys were dressed in pink and girls in blue.

When all of this is taken together – the subjective nature of visual perception, the complicating influence of language, the role that social life and cultural traditions play in filtering our understandings of colour – it becomes really rather difficult to reach a conclusion different from that of the 18th-century philosopher David Hume: that, in the end, colour is “merely a phantasm of the senses”.

The ancient Egyptian term for “colour” was iwn – a word that also meant “skin”, “nature”, “character” and “being”, and was represented in part by a hieroglyph of human hair. To the Egyptians, colours were like people – full of life, energy, power and personality. We now understand just how completely the two are entangled. That’s because every hue we see around us is actually manufactured within us – in the same grey matter that forms language, stores memories, stokes emotions, shapes thoughts and gives rise to consciousness. Colour is, if you’ll pardon the pun, a pigment of our imaginations.

 

 

 

Original article here


27 May 2023

The Dutch solution to busyness that captivated the world

The Hague, where I live, has 11km of gorgeous coastline with rolling dunes and sandy beaches. In summer, I often see locals in Scheveningen or Kijkduin (the city’s most famous beaches) sunbathing, strolling in nature or riding their bikes, then sitting down on one of the many benches available. Sometimes, they’re reading or chatting with their friends, but just as often, they’re engaging in niksen.

Niksen is a Dutch wellness trend that means “doing nothing”. It first caught the attention of the world in 2019 as a way to manage stress or recover from burnout. At the time, many people were complaining about exhaustion and depression caused by overwork and were looking for solutions – which is why concepts such as Japanese ikigai or Danish hygge also entered the English lexicon. As a linguist myself, I loved the idea that you could express the whole concept of doing nothing in one short and easy-to-pronounce word.

In my book Niksen: Embracing the Dutch Art of Doing Nothing, I define it as “doing nothing without a purpose” – so not scrolling on Facebook or engaging in meditation. Whereas mindfulness is about being present in the moment, niksen is more about carving out time to just be, letting your mind wander wherever it wants to go. And as we’re slowly recovering after the pandemic, it’s important to rethink the way we work and spend our time.

Linguistically, niksen (doing nothing) is a verb created from “niks”, which means “nothing”.

“It fits with the tendency of the Dutch language to create verbs out of nouns: from ‘voetbal’ (football) to ‘voetballen’ (playing football), from ‘internet’ to ‘internetten’, from ‘whatsapp’ to ‘whatsappen’, etc. I think this is something that happens in Dutch in particular,” said Monique Flecken, a psycholinguist at the University of Amsterdam who researches how the languages we speak affect the way we see the world. Essentially, it’s much less work to say “niksen” than “to do nothing”. “The Dutch are a practical, direct people and their language reflects that,” she said.

In the Netherlands, the word can be used in a variety of ways, both positive and negative. Flecken said: “A parent might say to their kid, ‘Zit je weer te niksen?’ (Are you doing nothing again?). And I would also say ‘lekker niksen’, which translates to ‘delicious doing nothing’, when talking about an evening blissfully free of any tasks or work.”

To Thijs Launspach, a psychologist, TEDx speaker and author of the book Crazy Busy: Staying Sane in a Stressful World, niksen means “doing nothing or occupying yourself with something trivial as a way of enjoying your own time. Not doing nothing entirely but doing as little as possible,” he said, pointing out that this mostly applies to elderly people who have more unstructured free time. Younger generations, on the other hand, are more stressed out than ever – even in the Netherlands, a country traditionally applauded for its work-life balance.

 


 

There are plenty of reasons for that. “Our lives and our jobs have become increasingly complex. We tend to spend a lot of time with computers. There is a lot of pressure on being the best version of yourself, be it in our jobs, or the expectations of parents [or] from social media. There is a lot of pressure to perform,” Launspach said.

Of course, some stress can be good, as Leiden University psychology professor Bernet Elzinga points out. “It’s not necessarily bad to be for a moment in a state of stress, where you’re really on and focused. The problem is when this is getting out of hand,” she said. But niksen can help with that. “When you do nothing, you connect to your default mode network. And that network is responsible for mind-wandering and reflection,” Elzinga explained.

Paradoxically, niksen can also make us more productive, simply because breaks allow our brains to rest and come back with better focus and sustained attention. This is probably why, while the Dutch don’t work long hours, they tend to be very efficient at work. Working overtime is not encouraged due to the “just be normal, that’s already crazy enough” attitude prevalent in the Netherlands – a nod towards the country’s honest and egalitarian culture.

And it seems to work: the Dutch are a creative nation. Just think of famous artists like Rembrandt, Vermeer or Escher, as well as the innovative solutions the Dutch have found to battle the recurring threat of floods, such as huge dams and floating houses.

The Dutch also like to enjoy life, as shown by the word lekker. This means “delicious” but can be used to refer to anything nice and pleasant, like lekker warm (deliciously warm), lekker slapen (sleeping deliciously), and, of course, lekker niksen, or “deliciously doing nothing”. Having this ready-made vocabulary of leisure makes it that much easier to give yourself permission to do nothing.

Locals like spending their time in active ways, such as cycling or hiking, allowing time for clearing the mind. And each time the sun comes out, the Dutch flock to cafes and terraces en masse, even in the winter. For me, these are perfect places for doing nothing.

However, Launspach is not a fan of doing nothing as a stress-preventing measure. “I’m a little bit sceptical of the idea that you should create a buffer between you and stress. I don’t know if that’s even possible in the way that we live and work now,” he said.

Elzinga believes that it’s much better to do some sort of physical activity to distract you from your daily worries, preferably in nature. But luckily, in the Netherlands, there is a way to combine all these things – niksen, nature and movement.

While the country is not commonly known for its natural resources, the Dutch appreciate the little natural areas they have. Many dune areas – my favourite thing about the Netherlands – are part of a large network of hiking and cycling routes crisscrossing the country. Even in large cities such as Rotterdam, The Hague or Amsterdam, you’re never too far away from a trail.

In cooperation with the Dutch railway system, Wandelnet – a foundation devoted to creating and maintaining hiking routes – has created NS Wandelingen, a system of hiking routes that are easy to reach by train or other public transport. They range between 7km and 22km in length, making them perfect for a day trip. And given the many benches along the way, it’s even possible to fit in a little niksen break.

This leisure time is possible for the Dutch because the Netherlands is a country with an excellent welfare system, and while people tend to work hard, they also take (and are granted) many days off.

“Having a good social support system and having lower stress levels relate to feeling secure and in balance. So I wouldn’t underestimate the importance of that,” said Elzinga.

And with everything going on in the world – the Covid-19 pandemic, the war in Ukraine – relieving stress is more important than ever.

 

 

Original article here


22 May 2023

How To Worry Less And Enjoy Your Life More

 

Did you know that worrying is like a rocking chair?

It gives you something to do, but it doesn’t get you anywhere.

Excessive worrying is exhausting and can become a great waste of time and energy. It puts your mind in overdrive and distracts you from doing the more important things, as well as preventing you from having a healthy, happy life.

Most people worry on a regular basis, as a natural part of life. However, the worry habit becomes problematic when it grows too intense and chronic. Some people even worry about worrying. When you are plagued by persistent, uncontrollable thoughts, they end up interfering with and paralyzing your daily life.

The bad news is that constant worrying eventually takes a toll on your emotional and physical health, as well as your self-esteem, relationships, and career. When you keep hitting the panic button, the stress response is triggered, which leads to fear, then more stress and anxiety, and then more worry. An endless worry cycle is created.

The good news is that worrying is a mental habit that can be broken. You can train your brain to stay calm and remain positive. You are in charge of your thoughts and imagination. How you choose to use your mind is your responsibility. Do you continue to misuse your imagination and keep focusing on the negative or do you choose to switch to the positive?

The reason most people worry too much is often rooted in fear – ultimately, a fear of something happening that cannot be controlled. When we anticipate something negative in the future, the worry is usually built on irrational thoughts: we fixate on the worst-case scenario because we have allowed our minds to wander too far to the negative side.

9 Practical Tips To Minimize Worrying 

  • Learn to distinguish between what you can control in life and what you can’t. The things you can change need an action plan with a list of concrete steps you can take to solve the problem. And for the things you cannot change, you need to learn acceptance. The Serenity Prayer sums it up perfectly: “God, grant me the serenity to accept the things I cannot change, the courage to change the things I can, and the wisdom to know the difference.”
  • Recognize your trigger points. Find out the source of your worries and gain a clearer picture of what’s driving them. Your worries will start losing power, and you will start regaining control of your mind. Self-awareness is the first step to changing your mindset.
  • Re-frame your thoughts and change your perspective. Instead of saying, “I can’t handle this,” become more self-confident and start believing that you can handle it. Make a list of the negative messages you are feeding your mind and replace them with positive affirmations. Make sure you challenge those thoughts. No amount of worrying will make life any more predictable.
  • Break the worry cycle by limiting your worrying time. Schedule a worry time each day to keep the worry contained and prevent it from contaminating your whole day. Sit down for 15 minutes and make a list of all the things you want to worry about, and then proceed to worry about them. At the end of the exercise, you will come to the realization that worrying is futile. You can become more mindful of your thoughts and where you place your energy. Choose to focus on what really matters in your life and become more productive.
  • Get up and move. Take a 20-minute walk. Stretch out and do some yoga. Exercise is a productive activity that can interrupt your worries and shift your body’s energy. It releases endorphins, which get rid of the tension and break the cycle.
  • Breathing exercises, meditation and relaxation techniques also interrupt your train of thoughts and calm the body’s response to stress. This dynamic trio helps relax your body, guide your thoughts, and keep them from getting caught up in worries.
  • Ground yourself in the present. Become mindful of your thoughts and activities. Make time to unplug and take a look at the bigger picture. Ask yourself if what you are worrying about is really going to matter five years from now. If not, let it go and don’t spend more than five minutes worrying about it. If it does matter, then make a list of one or two things you can do now to solve the problem. Keep in mind, you may not resolve the whole problem right then and there, but at least your thoughts will be clearer and calmer, and you will be able to move forward by focusing on what matters instead of worrying about things that don’t really matter.
  • Stop overthinking and dwelling on your worries by finding a healthy distraction like reading a good book, watching a funny movie, listening to great music, taking up hobbies that you enjoy, or doing something different, such as trying a new recipe.
  • Build a support network. Enlist the help of your trusted friends and family. Sharing your thoughts and tuning into your feelings helps you process them while feeling supported. In some cases, chronic worry can be a symptom of generalized anxiety disorder (GAD), and it may help to see a therapist or join a support group. Don’t keep your worries to yourself; bottled up, they build and become overwhelming.

Next time you catch yourself worrying, instead of pressing the panic button, reach for the pause button and ask yourself, “What’s important now?” Coach Lou Holtz turned this question into the acronym W.I.N. – What’s Important Now – a prompt that helps you prioritize your decisions, choices and actions. The antidote to worrying is calm, clarity, and focus. A grateful mind also helps you look for the positive in every situation.

Strengthen your mind by learning to turn off those worrying thoughts and regain control. Look at life from a less fearful and more balanced perspective.

 

 

Original article here


18 May 2023

ChatGPT is about to revolutionize the economy. We need to decide what that looks like.

 

 

Whether it’s based on hallucinatory beliefs or not, an artificial-intelligence gold rush has started over the last several months to mine the anticipated business opportunities from generative AI models like ChatGPT. App developers, venture-backed startups, and some of the world’s largest corporations are all scrambling to make sense of the sensational text-generating bot released by OpenAI last November.

You can practically hear the shrieks from corner offices around the world: “What is our ChatGPT play? How do we make money off this?”

But while companies and executives see a clear chance to cash in, the likely impact of the technology on workers and the economy as a whole is far less obvious. Despite their limitations—chief among them their propensity for making stuff up—ChatGPT and other recently released generative AI models hold the promise of automating all sorts of tasks that were previously thought to be solely in the realm of human creativity and reasoning, from writing to creating graphics to summarizing and analyzing data. That has left economists unsure how jobs and overall productivity might be affected.

For all the amazing advances in AI and other digital tools over the last decade, their record in improving prosperity and spurring widespread economic growth is discouraging. Although a few investors and entrepreneurs have become very rich, most people haven’t benefited. Some have even been automated out of their jobs.

Productivity growth, which is how countries become richer and more prosperous, has been dismal since around 2005 in the US and in most advanced economies (the UK is a particular basket case). The fact that the economic pie is not growing much has led to stagnant wages for many people.

What productivity growth there has been in that time is largely confined to a few sectors, such as information services, and in the US to a few cities—think San Jose, San Francisco, Seattle, and Boston.

Will ChatGPT make the already troubling income and wealth inequality in the US and many other countries even worse? Or could it help? Could it in fact provide a much-needed boost to productivity?

ChatGPT, with its human-like writing abilities, and OpenAI’s other recent release DALL-E 2, which generates images on demand, use large language models trained on huge amounts of data. The same is true of rivals such as Claude from Anthropic and Bard from Google. These so-called foundational models, such as GPT-3.5 from OpenAI, which ChatGPT is based on, or Google’s competing language model LaMDA, which powers Bard, have evolved rapidly in recent years.

They keep getting more powerful: they’re trained on ever more data, and the number of parameters—the variables in the models that get tweaked—is rising dramatically. Earlier this month, OpenAI released its newest version, GPT-4. While OpenAI won’t say exactly how much bigger it is, one can guess; GPT-3, with some 175 billion parameters, was about 100 times larger than GPT-2.

But it was the release of ChatGPT late last year that changed everything for many users. It’s incredibly easy to use and compelling in its ability to rapidly create human-like text, including recipes, workout plans, and—perhaps most surprising—computer code. For many non-experts, including a growing number of entrepreneurs and businesspeople, the user-friendly chat model—less abstract and more practical than the impressive but often esoteric advances that have been brewing in academia and a handful of high-tech companies over the last few years—is clear evidence that the AI revolution has real potential.

Venture capitalists and other investors are pouring billions into companies based on generative AI, and the list of apps and services driven by large language models is growing longer every day.

 


 

Among the big players, Microsoft has invested a reported $10 billion in OpenAI and its ChatGPT, hoping the technology will bring new life to its long-struggling Bing search engine and fresh capabilities to its Office products. In early March, Salesforce said it will introduce a ChatGPT app in its popular Slack product; at the same time, it announced a $250 million fund to invest in generative AI startups. The list goes on, from Coca-Cola to GM. Everyone has a ChatGPT play.

Meanwhile, Google announced it is going to use its new generative AI tools in Gmail, Docs, and some of its other widely used products.

Still, there are no obvious killer apps yet. And as businesses scramble for ways to use the technology, economists say a rare window has opened for rethinking how to get the most benefits from the new generation of AI.

“We’re talking in such a moment because you can touch this technology. Now you can play with it without needing any coding skills. A lot of people can start imagining how this impacts their workflow, their job prospects,” says Katya Klinova, the head of research on AI, labor, and the economy at the Partnership on AI in San Francisco.

“The question is who is going to benefit? And who will be left behind?” says Klinova, who is working on a report outlining the potential job impacts of generative AI and providing recommendations for using it to increase shared prosperity.

The optimistic view: it will prove to be a powerful tool for many workers, improving their capabilities and expertise, while providing a boost to the overall economy. The pessimistic one: companies will simply use it to destroy what once looked like automation-proof jobs, well-paying ones that require creative skills and logical reasoning; a few high-tech companies and tech elites will get even richer, but it will do little for overall economic growth.

Helping the least skilled

The question of ChatGPT’s impact on the workplace isn’t just a theoretical one.

In the most recent analysis, OpenAI’s Tyna Eloundou, Sam Manning, and Pamela Mishkin, with the University of Pennsylvania’s Daniel Rock, found that large language models such as GPT could have some effect on 80% of the US workforce. They further estimated that the AI models, including GPT-4 and other anticipated software tools, would heavily affect 19% of jobs, with at least 50% of the tasks in those jobs “exposed.” In contrast to what we saw in earlier waves of automation, higher-income jobs would be most affected, they suggest. Some of the people whose jobs are most vulnerable: writers, web and digital designers, financial quantitative analysts, and—just in case you were thinking of a career change—blockchain engineers.

“There is no question that [generative AI] is going to be used—it’s not just a novelty,” says David Autor, an MIT labor economist and a leading expert on the impact of technology on jobs. “Law firms are already using it, and that’s just one example. It opens up a range of tasks that can be automated.”

Autor has spent years documenting how advanced digital technologies have destroyed many manufacturing and routine clerical jobs that once paid well. But he says ChatGPT and other examples of generative AI have changed the calculation.

Previously, AI had automated some office work, but only the rote, step-by-step tasks that could be coded for a machine. Now it can perform tasks that we have viewed as creative, such as writing and producing graphics. “It’s pretty apparent to anyone who’s paying attention that generative AI opens the door to computerization of a lot of kinds of tasks that we think of as not easily automated,” he says.

 


 

The worry is not so much that ChatGPT will lead to large-scale unemployment—as Autor points out, there are plenty of jobs in the US—but that companies will replace relatively well-paying white-collar jobs with this new form of automation, sending those workers off to lower-paying service employment while the few who are best able to exploit the new technology reap all the benefits.

In this scenario, tech-savvy workers and companies could quickly take up the AI tools, becoming so much more productive that they dominate their workplaces and their sectors. Those with fewer skills and little technical acumen to begin with would be left further behind.

But Autor also sees a more positive possible outcome: generative AI could help a wide swath of people gain the skills to compete with those who have more education and expertise.

One of the first rigorous studies done on the productivity impact of ChatGPT suggests that such an outcome might be possible.

Two MIT economics graduate students, Shakked Noy and Whitney Zhang, ran an experiment involving hundreds of college-educated professionals working in areas like marketing and HR; they asked half to use ChatGPT in their daily tasks and the others not to. ChatGPT raised overall productivity (not too surprisingly), but here’s the really interesting result: the AI tool helped the least skilled and accomplished workers the most, decreasing the performance gap between employees. In other words, the poor writers got much better; the good writers simply got a little faster.

The preliminary findings suggest that ChatGPT and other generative AIs could, in the jargon of economists, “upskill” people who are having trouble finding work. There are lots of experienced workers “lying fallow” after being displaced from office and manufacturing jobs over the last few decades, Autor says. If generative AI can be used as a practical tool to broaden their expertise and provide them with the specialized skills required in areas such as health care or teaching, where there are plenty of jobs, it could revitalize our workforce.

Determining which scenario wins out will require a more deliberate effort to think about how we want to exploit the technology.

“I don’t think we should take it as the technology is loose on the world and we must adapt to it. Because it’s in the process of being created, it can be used and developed in a variety of ways,” says Autor. “It’s hard to overstate the importance of designing what it’s there for.”

Simply put, we are at a juncture where either less-skilled workers will increasingly be able to take on what is now thought of as knowledge work, or the most talented knowledge workers will radically scale up their existing advantages over everyone else. Which outcome we get depends largely on how employers implement tools like ChatGPT. But the more hopeful option is well within our reach.

Beyond human-like

There are some reasons to be pessimistic, however. Last spring, in “The Turing Trap: The Promise & Peril of Human-Like Artificial Intelligence,” the Stanford economist Erik Brynjolfsson warned that AI creators were too obsessed with mimicking human intelligence rather than finding ways to use the technology to allow people to do new tasks and extend their capabilities.

The pursuit of human-like capabilities, Brynjolfsson argued, has led to technologies that simply replace people with machines, driving down wages and exacerbating inequality of wealth and income. It is, he wrote, “the single biggest explanation” for the rising concentration of wealth.

A year later, he says ChatGPT, with its human-sounding outputs, “is like the poster child for what I warned about”: it has “turbocharged” the discussion around how the new technologies can be used to give people new abilities rather than simply replacing them.

Despite his worries that AI developers will continue to blindly outdo each other in mimicking human-like capabilities in their creations, Brynjolfsson, the director of the Stanford Digital Economy Lab, is generally a techno-optimist when it comes to artificial intelligence. Two years ago, he predicted a productivity boom from AI and other digital technologies, and these days he’s bullish on the impact of the new AI models.

Much of Brynjolfsson’s optimism comes from the conviction that businesses could greatly benefit from using generative AI such as ChatGPT to expand their offerings and improve the productivity of their workforce. “It’s a great creativity tool. It’s great at helping you to do novel things. It’s not simply doing the same thing cheaper,” says Brynjolfsson. As long as companies and developers can “stay away from the mentality of thinking that humans aren’t needed,” he says, “it’s going to be very important.”

Within a decade, he predicts, generative AI could add trillions of dollars in economic growth in the US. “A majority of our economy is basically knowledge workers and information workers,” he says. “And it’s hard to think of any type of information workers that won’t be at least partly affected.”

When that productivity boost will come—if it does—is an economic guessing game. Maybe we just need to be patient.

In 1987, Robert Solow, the MIT economist who won the Nobel Prize that year for explaining how innovation drives economic growth, famously said, “You can see the computer age everywhere except in the productivity statistics.” It wasn’t until later, in the mid and late 1990s, that the impacts—particularly from advances in semiconductors—began showing up in the productivity data as businesses found ways to take advantage of ever cheaper computational power and related advances in software.

Could the same thing happen with AI? Avi Goldfarb, an economist at the University of Toronto, says it depends on whether we can figure out how to use the latest technology to transform businesses as we did in the earlier computer age.

So far, he says, companies have just been dropping in AI to do tasks a little bit better: “It’ll increase efficiency—it might incrementally increase productivity—but ultimately, the net benefits are going to be small. Because all you’re doing is the same thing a little bit better.” But, he says, “the technology doesn’t just allow us to do what we’ve always done a little bit better or a little bit cheaper. It might allow us to create new processes to create value to customers.”

The verdict on when—or even if—that will happen with generative AI remains uncertain. “Once we figure out what good writing at scale allows industries to do differently, or—in the context of Dall-E—what graphic design at scale allows us to do differently, that’s when we’re going to experience the big productivity boost,” Goldfarb says. “But if that is next week or next year or 10 years from now, I have no idea.”

Power struggle

When Anton Korinek, an economist at the University of Virginia and a fellow at the Brookings Institution, got access to the new generation of large language models such as ChatGPT, he did what a lot of us did: he began playing around with them to see how they might help his work. He carefully documented their performance in a paper in February, noting how well they handled 25 “use cases,” from brainstorming and editing text (very useful) to coding (pretty good with some help) to doing math (not great).

ChatGPT did explain one of the most fundamental principles in economics incorrectly, says Korinek: “It screwed up really badly.” But the mistake, easily spotted, was quickly forgiven in light of the benefits. “I can tell you that it makes me, as a cognitive worker, more productive,” he says. “Hands down, no question for me that I’m more productive when I use a language model.”

When GPT-4 came out, he tested its performance on the same 25 questions that he documented in February, and it performed far better. There were fewer instances of making stuff up; it also did much better on the math assignments, says Korinek.

Since ChatGPT and other AI bots automate cognitive work, as opposed to physical tasks that require investments in equipment and infrastructure, a boost to economic productivity could happen far more quickly than in past technological revolutions, says Korinek. “I think we may see a greater boost to productivity by the end of the year—certainly by 2024,” he says.

 


 

What’s more, he says, in the longer term, the way the AI models can make researchers like himself more productive has the potential to drive technological progress.

That potential of large language models is already turning up in research in the physical sciences. Berend Smit, who runs a chemical engineering lab at EPFL in Lausanne, Switzerland, is an expert on using machine learning to discover new materials. Last year, after one of his graduate students, Kevin Maik Jablonka, showed some interesting results using GPT-3, Smit asked him to demonstrate that GPT-3 is, in fact, useless for the kinds of sophisticated machine-learning studies his group does to predict the properties of compounds.

“He failed completely,” jokes Smit.

It turns out that after being fine-tuned for a few minutes with a few relevant examples, the model performs as well as advanced machine-learning tools specially developed for chemistry in answering basic questions about things like the solubility of a compound or its reactivity. Simply give it the name of a compound, and it can predict various properties based on the structure.
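For readers curious what that kind of quick fine-tuning looks like in practice, here is a minimal, hedged sketch. The compounds, labels and file name are invented placeholders rather than the Lausanne group’s data, and the prompt/completion layout follows the JSONL format used for GPT-3-style fine-tuning; the actual training run and the later queries go through the provider’s fine-tuning API or CLI, which is omitted here.

```python
import json

# Hypothetical toy examples: compound names paired with a property label.
training_examples = [
    {"compound": "ethanol", "soluble_in_water": "yes"},
    {"compound": "hexane", "soluble_in_water": "no"},
    {"compound": "acetic acid", "soluble_in_water": "yes"},
]

# GPT-3-style fine-tuning consumes JSONL records with "prompt" and "completion" fields.
with open("solubility_finetune.jsonl", "w") as f:
    for ex in training_examples:
        record = {
            "prompt": f"Is {ex['compound']} soluble in water?\n\n###\n\n",
            "completion": f" {ex['soluble_in_water']}",
        }
        f.write(json.dumps(record) + "\n")

# After fine-tuning on a file like this, asking the model
# "Is benzene soluble in water?" returns a predicted label.
```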

As in other areas of work, large language models could help expand the expertise and capabilities of non-experts—in this case, chemists with little knowledge of complex machine-learning tools. Because it’s as simple as a literature search, Jablonka says, “it could bring machine learning to the masses of chemists.”

These impressive—and surprising—results are just a tantalizing hint of how powerful the new forms of AI could be across a wide swath of creative work, including scientific discovery, and how shockingly easy they are to use. But this also points to some fundamental questions.

As the potential impact of generative AI on the economy and jobs becomes more imminent, who will define the vision for how these tools should be designed and deployed? Who will control the future of this amazing technology?

Diane Coyle, an economist at Cambridge University in the UK, says one concern is the potential for large language models to be dominated by the same big companies that rule much of the digital world. Google and Meta are offering their own large language models alongside OpenAI, she points out, and the large computational costs required to run the software create a barrier to entry for anyone looking to compete.

The worry is that these companies have similar “advertising-driven business models,” Coyle says. “So obviously you get a certain uniformity of thought, if you don’t have different kinds of people with different kinds of incentives.”

Coyle acknowledges that there are no easy fixes, but she says one possibility is a publicly funded international research organization for generative AI, modeled after CERN, the Geneva-based intergovernmental European nuclear research body where the World Wide Web was created in 1989. It would be equipped with the huge computing power needed to run the models and the scientific expertise to further develop the technology.

Such an effort outside of Big Tech, says Coyle, would “bring some diversity to the incentives that the creators of the models face when they’re producing them.”

While it remains uncertain which public policies would help make sure that large language models best serve the public interest, says Coyle, it’s becoming clear that the choices about how we use the technology can’t be left to a few dominant companies and the market alone.

History provides us with plenty of examples of how important government-funded research can be in developing technologies that bring about widespread prosperity. Long before the invention of the web at CERN, another publicly funded effort in the late 1960s gave rise to the internet, when the US Department of Defense supported ARPANET, which pioneered ways for multiple computers to communicate with each other.

In Power and Progress: Our 1000-Year Struggle Over Technology & Prosperity, the MIT economists Daron Acemoglu and Simon Johnson provide a compelling walk through the history of technological progress and its mixed record in creating widespread prosperity. Their point is that it’s critical to deliberately steer technological advances in ways that provide broad benefits and don’t just make the elite richer.

From the decades after World War II until the early 1970s, the US economy was marked by rapid technological changes; wages for most workers rose while income inequality dropped sharply. The reason, Acemoglu and Johnson say, is that technological advances were used to create new tasks and jobs, while social and political pressures helped ensure that workers shared the benefits more equally with their employers than they do now.

In contrast, they write, the more recent rapid adoption of manufacturing robots in “the industrial heartland of the American economy in the Midwest” over the last few decades simply destroyed jobs and led to a “prolonged regional decline.”

The book, which comes out in May, is particularly relevant for understanding what today’s rapid progress in AI could bring and how decisions about the best way to use the breakthroughs will affect us all going forward. In a recent interview, Acemoglu said they were writing the book when GPT-3 was first released. And, he adds half-jokingly, “we foresaw ChatGPT.”

Acemoglu maintains that the creators of AI “are going in the wrong direction.” The entire architecture behind the AI “is in the automation mode,” he says. “But there is nothing inherent about generative AI or AI in general that should push us in this direction. It’s the business models and the vision of the people in OpenAI and Microsoft and the venture capital community.”

If you believe we can steer a technology’s trajectory, then an obvious question is: Who is “we”? And this is where Acemoglu and Johnson are most provocative. They write: “Society and its powerful gatekeepers need to stop being mesmerized by tech billionaires and their agenda … One does not need to be an AI expert to have a say about the direction of progress and the future of our society forged by these technologies.”

The creators of ChatGPT and the businesspeople involved in bringing it to market, notably OpenAI’s CEO, Sam Altman, deserve much credit for offering the new AI sensation to the public. Its potential is vast. But that doesn’t mean we must accept their vision and aspirations for where we want the technology to go and how it should be used.

According to their narrative, the end goal is artificial general intelligence, which, if all goes well, will lead to great economic wealth and abundance. Altman, for one, has promoted the vision at great length recently, providing further justification for his longtime advocacy of a universal basic income (UBI) to feed the non-technocrats among us. For some, it sounds tempting. No work and free money! Sweet!

It’s the assumptions underlying the narrative that are most troubling—namely, that AI is headed on an inevitable job-destroying path and most of us are just along for the (free?) ride. This view barely acknowledges the possibility that generative AI could lead to a creativity and productivity boom for workers far beyond the tech-savvy elites by helping to unlock their talents and brains. There is little discussion of the idea of using the technology to produce widespread prosperity by expanding human capabilities and expertise throughout the working population.

 


 

As Acemoglu and Johnson write: “We are heading toward greater inequality not inevitably but because of faulty choices about who has power in society and the direction of technology … In fact, UBI fully buys into the vision of the business and tech elite that they are the enlightened, talented people who should generously finance the rest.”

Acemoglu and Johnson write of various tools for achieving “a more balanced technology portfolio,” from tax reforms and other government policies that might encourage the creation of more worker-friendly AI to reforms that might wean academia off Big Tech’s funding for computer science research and business schools.

But, the economists acknowledge, such reforms are “a tall order,” and a social push to redirect technological change is “not just around the corner.”

The good news is that, in fact, we can decide how we choose to use ChatGPT and other large language models. As countless apps based on the technology are rushed to market, businesses and individual users will have a chance to choose how they want to exploit it; companies can decide to use ChatGPT to give workers more abilities—or to simply cut jobs and trim costs.

Another positive development: there is at least some momentum behind open-source projects in generative AI, which could break Big Tech’s grip on the models. Notably, last year more than a thousand international researchers collaborated on a large language model called BLOOM that can create text in languages such as French, Spanish, and Arabic. And if Coyle and others are right, increased public funding for AI research could help change the course of future breakthroughs.

Stanford’s Brynjolfsson refuses to say he’s optimistic about how it will play out. Still, his enthusiasm for the technology these days is clear. “We can have one of the best decades ever if we use the technology in the right direction,” he says. “But it’s not inevitable.”

 

 

Original article here

 

 

