Call us toll free: +1 4062079616
How To Be Spiritual In A Material World


01 May 2023

Who Invented the Measurement of Time?

In modern times, clocks underpin everything people do, from work to school to sleep. Timekeeping is also the invisible structure that makes modern infrastructure work. It forms the foundation of the high-speed computers that conduct financial trading and even the GPS system that pinpoints locations on Earth’s surface with unprecedented accuracy.

But humans have likely lived by some version of the clock for a very long time. The ancient Egyptians invented the first water clocks and sundials more than 3,500 years ago. Before that, people likely tracked time with devices that did not survive in the archaeological record—such as an upright stick in the dirt that acted as a primitive sundial—or no device at all, says Rita Gautschy, an archaeoastronomer at the University of Basel in Switzerland.

“It’s really difficult to get a grip on when people started with timekeeping,” Gautschy says. Simply by observing the location of the sunrise and the sunset each day and by watching how high the sun reaches in the sky, a person can construct a primitive calendar. These early human efforts at understanding the flow of time left no trace at all, she says.

The oldest sundial on record came from Egypt and was made around 1500 B.C.E. It consisted of a simple upright stick and a roughly semicircular base divided into 12 pie-shaped sections. The shadow of the stick gave an approximate hour of the day. Other early sundials measured time by the length of a stick’s shadow as the sun moved across the sky rather than by the movement of the shadow across the base, Gautschy says.

“It’s a first step, and if you accomplish this step, you can do fine-tuning and also adapt to different months,” she says. Sundials must account for both time of year and latitude to be truly accurate.

During the night, ancient people could track time by the apparent movement of the stars from east to west, Gautschy says. And for measuring discrete units of time, they used water clocks. These were vessels that either had holes for water to flow out of at a constant rate or were filled from another vessel and had markings on the inside to indicate increments of time. The oldest surviving water clocks were found in Egypt and Babylon, and the earliest of these date to around 1500 B.C.E.

In China, historical records claim that water clocks were invented by the Yellow Emperor, or Huangdi, a half-historical, half-mythical figure said to have lived between 2717 and 2599 B.C.E., says Zheng-Hui Hwang, a mechanical engineer at National Cheng Kung University in Taiwan, who has written about the history of ancient Chinese timekeeping devices. The earliest Chinese water clocks were probably outflow devices and were known as louke. The unit ke divided the day into 100 equal segments from midnight to midnight. Over time, Hwang says, inventors made these clocks more sophisticated by equipping them with multiple water supply vessels or otherwise adjusting them to ensure that the rate of water flow remained stable.
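
The ke arithmetic is easy to sanity-check. A minimal sketch (the function name is ours, purely for illustration):

```python
# Converting the ancient Chinese ke (1/100 of a day) to modern units.
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400 s from midnight to midnight

def ke_to_minutes(ke: float) -> float:
    """One ke is 1/100 of a day; return that span in modern minutes."""
    return ke * SECONDS_PER_DAY / 100 / 60

print(ke_to_minutes(1))    # 14.4 minutes per ke
print(ke_to_minutes(100))  # 1440.0 minutes, i.e., one full day
```

So a ke is a little under a quarter of a modern hour, which is why the 100-ke day never lined up neatly with the 12 two-hour segments used alongside it.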

Water would eventually lead to some extremely sophisticated timekeeping: by the early 700s C.E., Tang Dynasty monks developed a mechanical clock powered by a water wheel, Hwang says. In 1094 Song Dynasty official Su Song built on this design to complete a 40-foot-tall (12-meter-tall) mechanical clock powered by a water wheel that worked much like the mechanical clocks that would be invented in Europe some 200 years later.

The ancient Chinese timekeeping system also divided each 24-hour day into 12 two-hour segments. This system was also seen in ancient Japan and Korea, according to a 2004 paper in Publications of the Astronomical Society of Japan.

In modern times, an hour is always the same length, but ancient peoples around the world operated with a more complex system, says David Rooney, a historian of technology, former curator of timekeeping at the Royal Observatory in Greenwich, London, and author of About Time: A History of Civilization in Twelve Clocks (W. W. Norton, 2021). Some ancient timekeeping systems divided the light portion of the day into 12 segments and the night into 12 segments, but because days and nights vary in length throughout the year except at the equator, these “seasonal hours” were different lengths from day to night and across the year.

“If your religion demands prayer time being linked to things like sunrise or sunset, or if you’re working in the fields, as most people did then, patterns of daylight and darkness matter more than this idea of a universal hour,” Rooney says.

Seasonal hours coexisted with universal hours until the 15th century in Europe and up until the 19th century in Japan, Rooney says. “We used to live with a much more complex—and rich and diverse—temporal culture,” he adds.
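
The mechanics of seasonal hours are easy to make concrete. Below is a rough sketch using the textbook sunrise equation with a simplified solar-declination formula (an approximation, not an ephemeris); the function names are ours:

```python
import math

# Seasonal hours: daylight length depends on latitude and date, so dividing
# the daylight span into 12 yields hours of different lengths across the year.

def daylight_hours(latitude_deg: float, day_of_year: int) -> float:
    """Approximate daylight span in equal (modern) hours via the sunrise equation."""
    # Cooper's approximation for solar declination:
    decl = math.radians(23.44) * math.sin(2 * math.pi * (284 + day_of_year) / 365)
    lat = math.radians(latitude_deg)
    cos_h0 = -math.tan(lat) * math.tan(decl)
    cos_h0 = max(-1.0, min(1.0, cos_h0))  # clamp handles polar day/night
    h0 = math.acos(cos_h0)                # hour angle at sunrise, radians
    return 2 * h0 * 12 / math.pi

def seasonal_hour_minutes(latitude_deg: float, day_of_year: int) -> float:
    """Length of one daytime 'seasonal hour' (1/12 of daylight) in minutes."""
    return daylight_hours(latitude_deg, day_of_year) * 60 / 12

# At ~30 degrees north (roughly Cairo), a daytime hour near the June solstice
# runs to roughly 70 minutes, while one near the December solstice shrinks to
# roughly 50; at the equator both stay near 60 minutes year-round.
print(round(seasonal_hour_minutes(30, 172), 1))  # near the June solstice
print(round(seasonal_hour_minutes(30, 355), 1))  # near the December solstice
```

This is also why, as noted above, sundials must account for both time of year and latitude: the same shadow geometry yields different hour lengths depending on where and when you stand.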

Religion was a major driver of the standardization of time across cultures, both across the year and day to day, Gautschy says. In ancient Mesopotamia, Anatolia (today approximately Iraq and Turkey) and Greece, people developed lunar calendars to track rituals and feast days, she says, whereas people in Egypt focused more on the solar calendar and also had a calendar based on the star Sirius. People of Islamic cultures, Rooney says, used water clocks to track prayer and fasting, whereas Christians developed the mechanical clock in 14th-century Europe as a way to schedule prayer.

The bottom line, Rooney says, is that humans have been temporal creatures for far longer than the industrial age—although not always happily. After the Romans installed their first public sundial in 263 B.C.E., he says, the Roman playwright Plautus objected to the new fad of timekeeping via a character in one of his plays: “The gods damn that man who first discovered the hours, and—yes—who first set up a sundial here, who’s smashed the day into bits for poor me! You know, when I was a boy, my stomach was the only sundial, by far the best and truest compared to all of these…. But now what there is, isn’t eaten unless the sun says so. In fact, town’s so stuffed with sundials that most people crawl along, shriveled up with hunger.”

It’s a strikingly modern thought for being 2,200 years old, Rooney says. “That could be written in the 21st century and spoken in any office,” he says, “that we’re under the tyranny of the clock.”

 

 

Original article here


28 Apr 2023

Why Six Hours Of Sleep Is As Bad As None At All

Not getting enough sleep is detrimental to both your health and productivity. Yawn. We’ve heard it all before. But the results of one study drive home just how bad a cumulative lack of sleep can be for performance. Subjects in a lab-based sleep study who were allowed only six hours of sleep a night for two weeks straight functioned as poorly as those who were forced to stay awake for two days straight. The kicker: the people who slept six hours per night thought they were doing just fine.

This sleep deprivation study, published in the journal Sleep, took 48 adults and restricted their sleep to a maximum of four, six, or eight hours a night for two weeks; one unlucky subset was deprived of sleep for three days straight.

During their time in the lab, the participants were tested every two hours (unless they were asleep, of course) on their cognitive performance as well as their reaction time. They also answered questions about their mood and any symptoms they were experiencing, basically, “How sleepy do you feel?”

Why Six Hours of Sleep Isn’t Enough

As you can imagine, the subjects who were allowed to sleep eight hours per night had the highest performance on average. Subjects who got only four hours a night did worse each day. The group who got six hours of sleep seemed to be holding their own, until around day 10 of the study.

In the last few days of the experiment, the subjects who were restricted to a maximum of six hours of sleep per night showed cognitive performance that was as bad as the people who weren’t allowed to sleep at all. Getting only six hours of shut-eye was as bad as not sleeping for two days straight. The group who got only four hours of sleep each night performed just as poorly, but they hit their low sooner.

One of the most alarming results from the sleep study is that the six-hour sleep group didn’t rate their sleepiness as being all that bad, even as their cognitive performance was going downhill. The no-sleep group rated their sleepiness progressively higher; by the end of the experiment, it had jumped by two levels. The six-hour group’s rating rose by only one level. Those findings raise the question of how people cope when they get insufficient sleep, perhaps suggesting that they’re in denial (willful or otherwise) about their present state.

We Have No Idea How Much We Sleep

Complicating matters is the fact that people are terrible at knowing how much time they actually spend asleep. According to the Behavioral Risk Factor Surveillance System survey, as reported by the CDC, more than 35% of Americans sleep less than seven hours in a typical day. That’s one out of every three people. However, those who suffer from sleep problems don’t accurately estimate how much they sleep each night.


Research from the University of Chicago, for instance, shows that people are as likely to overestimate how much they sleep as to underestimate it. Another sleep study, published in Epidemiology, indicates that people generally overestimate their nightly sleep by around 0.8 hours. The same study also estimates that for every hour beyond six that people sleep, they overestimate their sleep by about half an hour. If you think you sleep seven hours a night, as one out of every three Americans does, it’s entirely possible you’re only getting six.
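
As a back-of-the-envelope illustration (not the Epidemiology study’s actual model), you could apply that ~0.8-hour average correction to a self-reported figure; the function name is hypothetical:

```python
# Rough sketch: adjusting self-reported sleep by the ~0.8 h average
# overestimate reported in the Epidemiology study. This is a crude average
# correction, not the study's statistical model; the study also found the
# gap grows by ~0.5 h for every hour slept beyond six.
AVERAGE_OVERESTIMATE_HOURS = 0.8

def actual_from_reported(reported_hours: float) -> float:
    """Naive estimate of actual sleep from a self-reported nightly figure."""
    return reported_hours - AVERAGE_OVERESTIMATE_HOURS

print(actual_from_reported(7.0))  # a reported 7 h may really be about 6.2 h
```

Which is exactly the article’s point: a self-reported seven hours can easily mean barely more than six in reality.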

So no one knows exactly how much, or how little, they’re sleeping, and when people don’t sleep enough, they believe they’re doing better than they are.

Even a modest sleep shortfall, in this case six rather than eight hours of sleep a night across two weeks, adds up to jaw-dropping deficits. Cumulative sleep deprivation isn’t a new concept by any means, but it’s rare to find research results that are so clear about the effects.

Fixing Sleep: Easier Said Than Done

Figuring out how to get enough sleep, consistently, is a tough nut to crack. The same advice experts have batted around for decades is probably a good place to start: Have a consistent bedtime; don’t look at electronic screens at least 30 minutes before bed; limit alcohol intake (alcohol makes many people sleepy, but it can also decrease the quality and duration of sleep); and get enough exercise.

Other advice that you’ll hear less often, but which is equally valid, is to lose excess weight. Sleep apnea and obesity are highly correlated, according to the National Sleep Foundation. What’s more, obese workers already lose more productive time than normal-weight and overweight workers do.

Other causes of sleep problems include physical, neurological, and psychological issues. Even stress and worry can negatively affect sleep. The CDC has called lack of sleep a health problem, and for good reason. Diet, exercise, mental health, and physical health all affect our ability to sleep, and in return, our ability to perform to our best.

Fixing bad sleep habits to get enough sleep is easier said than done. But if you’re functioning as if you hadn’t slept for two days straight, isn’t it worthwhile?

 

 

Original article here


26 Apr 2023

Questing Greatness

 

I’ve been watching for how greatness shows up lately. For example, how Ryan Reynolds and Rob McElhenney sourced Wrexham AFC into super winners with such grace, generosity and community spirit, and how Iam Tongi is winning the hearts of so many on American Idol this year.

Here are some insights into what I’ve gleaned from my recent questing into greatness …

* Genuine authenticity and kindness matter. We are looking for super heroes who personify these traits along with greatness.

* People matter. Animals matter. The Earth and Nature matter. There is great Love in greatness for these things.

* Making the world a greater place is an exhilarating, high vibing, Joy-filled adventure. As Ryan Reynolds put it, it’s addictive.

* Once you surrender your smallness, greatness will flood you with everything you need to make it so. Ideas, innovation, great creativity, laughter, sunshine emanating from you and more.

* Saying YES to greatness is one of the best things we might ever do in our lifetime. It opens the door to adventure and super herodom, even if you might be the only one who knows it’s so. Because it’s not about fortune and fame, you see. It’s about contribution and being moved so profoundly that Life is forever changed by the very presence of greatness in our world.

 

 

About the Author:

 

Soleira Green is a visionary author, quantum coach, ALLchemist & future innovator. She has been creating leading edge breakthroughs in consciousness, quantum evolution, transformation, innovation, intelligence and more over the past 25 years, has written and self-published eleven books, and taught courses all over the world on these topics.

 


24 Apr 2023

A Power Law Keeps the Brain’s Perceptions Balanced

 

The human brain is often described in the language of tipping points: It toes a careful line between high and low activity, between dense and sparse networks, between order and disorder. Now, by analyzing firing patterns from a record number of neurons, researchers have uncovered yet another tipping point — this time, in the neural code, the mathematical relationship between incoming sensory information and the brain’s neural representation of that information. Their findings, published in Nature in June 2019, suggest that the brain strikes a balance between encoding as much information as possible and responding flexibly to noise, which allows it to prioritize the most significant features of a stimulus rather than endlessly cataloging smaller details. The way it accomplishes this feat could offer fresh insights into how artificial intelligence systems might work, too.

A balancing act is not what the scientists initially set out to find. Their work began with a simpler question: Does the visual cortex represent various stimuli with many different response patterns, or does it use similar patterns over and over again? Researchers refer to the neural activity in the latter scenario as low dimensional: The neural code associated with it would have a very limited vocabulary, but it would also be resilient to small perturbations in sensory inputs. Imagine a one-dimensional code in which a stimulus is simply represented as either good or bad. The amount of firing by individual neurons might vary with the input, but the neurons as a population would be highly correlated, their firing patterns always either increasing or decreasing together in the same overall arrangement. Even if some neurons misfired, a stimulus would most likely still get correctly labeled.

At the other extreme, high-dimensional neural activity is far less correlated. Since information can be graphed or distributed across many dimensions, not just along a few axes like “good-bad,” the system can encode far more detail about a stimulus. The trade-off is that there’s less redundancy in such a system — you can’t deduce the overall state from any individual value — which makes it easier for the system to get thrown off.
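
The low- versus high-dimensional distinction can be made concrete with a toy simulation (this is an illustration, not the study’s analysis); the participation ratio used below is one standard measure of effective dimensionality:

```python
import numpy as np

# Toy illustration: "dimensionality" of a population code summarized by how
# variance spreads across principal components. The participation ratio,
# (sum of eigenvalues)^2 / sum(eigenvalues^2), is near 1 when one pattern
# dominates and near the neuron count when activity is uncorrelated.

rng = np.random.default_rng(0)
n_neurons, n_stimuli = 100, 2000

# Low-dimensional code: every neuron tracks one shared signal, plus noise.
shared = rng.normal(size=n_stimuli)
loadings = rng.normal(size=n_neurons)
low_dim = np.outer(loadings, shared) + 0.1 * rng.normal(size=(n_neurons, n_stimuli))

# High-dimensional code: neurons respond independently of one another.
high_dim = rng.normal(size=(n_neurons, n_stimuli))

def participation_ratio(responses: np.ndarray) -> float:
    """Effective dimensionality of a (neurons x stimuli) response matrix."""
    eig = np.linalg.eigvalsh(np.cov(responses))
    return eig.sum() ** 2 / (eig ** 2).sum()

print(participation_ratio(low_dim))   # close to 1: one pattern dominates
print(participation_ratio(high_dim))  # close to n_neurons: many patterns
```

The trade-off described above shows up directly: the correlated code is robust (corrupt a few neurons and the shared signal survives) but carries little detail, while the uncorrelated code carries far more detail with no redundancy to fall back on.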

For the past couple of decades, research had indicated that neural systems generally favored low-dimensional representations. Although the natural world contains an absolutely massive amount of information, the brain seemed to be discarding much of that in favor of simpler neural descriptions. But later analyses showed that this conclusion could be chalked up to weaknesses in the experiments themselves: the lab animals were presented with only a few stimuli, or very simple stimuli, and researchers could only record from a limited number of neurons at a time. “Of course those experiments gave those results,” said Kenneth Harris, a neuroscientist at University College London. “They couldn’t do anything different.”

So Harris and his colleagues revisited the problem, after creating a new technique for recording from 10,000 neurons simultaneously. As they showed mice nearly 3,000 images of natural scenes, they monitored the responses in the animals’ visual cortex and found a range of patterns that fit with a higher-dimensional picture of neural activity.

But the researchers also discovered something puzzling about that activity. The neurons didn’t care about all the dimensions equally: a few dimensions, or firing patterns, captured most of the neural responses to the visual stimuli. Adding more dimensions increased that predictive power by smaller and smaller increments. This decay followed what’s known as a power law, a special mathematical relationship “that’s been found almost everywhere people look for it,” said Jakob Macke, a computational neuroscientist at the Technical University of Munich who did not participate in the study.
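
A power-law decay of this kind is straightforward to measure: plot variance against dimension on log-log axes and fit a straight line. A synthetic sketch (the exponent 1.04 here is illustrative, chosen to echo the near-1 decay described for the cortical data):

```python
import numpy as np

# Sketch: recovering a power-law exponent from an eigenspectrum.
# We generate variances lambda_n ~ n^(-alpha) and fit a line in
# log-log coordinates; the slope's magnitude is the exponent.

alpha_true = 1.04                     # illustrative value, not measured data
n = np.arange(1, 1001)                # dimension index
spectrum = n ** (-alpha_true)         # variance captured by dimension n

slope, intercept = np.polyfit(np.log(n), np.log(spectrum), 1)
print(round(-slope, 2))  # 1.04: the fit recovers the decay exponent
```

On real, noisy spectra the fit is usually restricted to an intermediate range of dimensions, but the principle is the same: the slope of the log-log line is the quantity whose persistent, particular value puzzled the researchers.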

Harris and his colleagues were stumped about what it might signify. Although recent studies have called the relevance (and even prevalence) of power laws into question — Harris quipped that even “the distribution of the number of exclamation marks in tweets from Donald Trump follows a power law” — there was something special about this one. It consistently had a particular slope, an exponent that couldn’t be explained by the mathematical structure of the stimuli.

“This sort of thing, this quantitative regularity in the data,” Harris said, “just doesn’t happen in biology. … We had absolutely no idea what it meant” — but it seemed to mean something.

In search of an explanation, they turned to previous mathematical work on the differentiability of functions. They found that if the power law mapping input to output decayed any slower, small changes in input would be able to generate large changes in output. The researchers referred to this as a breakdown in smoothness: The outputs produced by the underlying code were not always continuous.

It’s like being on the border of fractality, according to the co-leaders of the study, Carsen Stringer and Marius Pachitariu, both of whom worked in Harris’s lab and are now researchers at the Howard Hughes Medical Institute’s Janelia Research Campus in Virginia. “If you think of a fractal like the coastline of England,” Stringer said, “if you’re moving just a little bit along that coastline, you’re going to be changing very quickly, because there’s lots of jagged edges.”

In brain terms, that meant two very similar images could be represented by very different neural activity. “And that’s problematic,” she added. “If just one pixel changes, or if the image moves a bit, you don’t want your representation to totally change.”

Conversely, if the power law decayed any faster, the neural representations would become lower dimensional. They would encode less information, emphasizing some key dimensions while ignoring the rest.

Taken together, those principles implied that the representations were as detailed and high dimensional as they could get while still remaining smooth.

According to Harris, one way to interpret the finding is that with a slower decay, too much emphasis would be placed on less important dimensions (because if the curve relating neural activity to dimension were to get flatter, it would indicate that neural populations cared about all the dimensions more equally). Representations of the finer details in a stimulus would swamp the representation of the bigger features: The visual cortex would always be hypersensitive to certain trivial details, which would in turn make it difficult to formulate coherent perceptions and decisions. Meanwhile, with a faster decay, more weight than necessary would be placed on the larger features, overwhelming smaller features that might be relevant, too.

The brain seems to get it just right. “This is in a cool sweet spot in between,” said Eric Shea-Brown, a mathematical neuroscientist at the University of Washington who was not involved in the study. “It’s a balance between being smooth and systematic, in terms of mapping like inputs to like responses, but other than that, expressing as much as possible about the input.”

Harris and his team performed another experiment to test their idea. The particular slope of the power law they found depended on incoming stimuli being high dimensional, as any complex image is bound to be. But they calculated that if incoming visual inputs were simpler and lower dimensional, the slope would have to be steeper to avoid a breakdown in smoothness.

That’s exactly what they saw when they analyzed the neural activity of mice presented with low-dimensional images.
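
The dependence of the required slope on input dimensionality can be written down directly. The bound reported in the Nature study has the form 1 + 2/d, where d is the dimensionality of the inputs: the spectrum must decay faster than this to keep the code smooth. A trivial sketch:

```python
# Sketch of the scaling logic: the minimal power-law exponent compatible
# with a smooth (differentiable) code grows as input dimensionality shrinks.
# The 1 + 2/d form is the bound reported in the Nature study.

def minimal_exponent(d: int) -> float:
    """Smallest decay exponent that keeps a code smooth for d-dimensional inputs."""
    return 1 + 2 / d

for d in (1, 2, 8, 1000):
    print(d, minimal_exponent(d))
# Simple, low-dimensional stimuli (small d) demand a steeper decay, while for
# complex natural images (large d) the bound approaches 1 from above.
```

This is the quantitative version of the prediction the team tested: lower-dimensional images should, and did, produce a steeper spectrum.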

The researchers now want to determine the biological mechanism that makes this power law possible. They also hope to continue probing the role it might play in other brain regions, in other tasks or behaviors, and in models of disease.

One tantalizing context that they’re starting to explore is artificial intelligence. Deep learning systems have their own problem with breakdowns in smoothness: After training, they might be able to accurately label an image as a panda, but changes made to just a handful of pixels — which would be practically invisible to the human eye — might lead them to classify the image as a chimpanzee instead. “It’s a pathological feature of these networks,” Harris said. “There are always going to be some details that they’re oversensitive to.”

Computer scientists have been trying to determine why this happens, and Harris thinks his team’s findings might offer some clues. Preliminary analyses of deep learning networks have revealed that some of their layers typically obey power laws that decay more slowly than those observed in the mouse experiments. Harris, Stringer and their colleagues suspect that these networks might be vulnerable because, unlike networks in the brain, they produce representations that aren’t totally continuous. Perhaps, Harris said, it might be possible to apply the lessons of the power law he’s been studying to deep learning networks to make them more stable. But this research is still in its early days, according to Macke, who is also studying power laws in deep learning networks.

Shea-Brown still thinks it’s a good place to start. “Continuous and smooth relationships,” he said, “seem obviously important for creating the ability to generalize and compare different types of situations in an environment.” Scientists are starting to understand how the brain uses its full network of neurons to encode representations of the world. Now, with “this surprising and beautiful result,” they have both “a new target … and a very useful reference point” in hand for thinking about that code.

Harris noted that the unexpected presence of this power law in the visual cortex “was just something that came out in the data.” Now that other research questions can be pursued using his group’s technique for imaging and analyzing thousands of neurons at once, “this thing with the power law will probably be some very basic first finding,” with many other unanticipated insights on the horizon. “This whole approach is going to completely change the way we think about things.”

 

 

Original article here

