If Socrates was the wisest person in Ancient Greece, then large language models must be the most foolish systems in the modern world.
In his Apology, Plato tells the story of how Socrates’s friend Chaerephon goes to visit the oracle at Delphi. Chaerephon asks the oracle whether there is anyone wiser than Socrates. The priestess responds that there isn’t: Socrates is the wisest of them all.
At first, Socrates seems puzzled. How could he be the wisest, when so many other people were well known for their knowledge and wisdom, while he himself claimed to lack both?
He makes it his mission to solve the mystery. He goes around interrogating a series of politicians, poets, and artisans (as philosophers do). And what does he find? His investigation reveals that those who claim to have knowledge either do not really know what they think they know, or else know far less than they proclaim to.
Socrates is the wisest, then, because he is aware of the limits of his own knowledge. He doesn’t think he knows more than he does, and he doesn’t claim to know more than he does.
How does that compare with large language models like ChatGPT?
In contrast to Socrates, large language models don’t know what they don’t know. These systems are not built to be truth-tracking. They are not based on empirical evidence or logic. They make statistical guesses that are very often wrong.
Large language models don’t inform users that they are making statistical guesses. They present incorrect guesses with the same confidence as they present facts. Whatever you ask, they will come up with a convincing response, and it’s never “I don’t know,” even though it should be. If you ask ChatGPT about current events, it will remind you that it only has access to information up to September 2021 and it can’t browse the internet. For almost any other kind of question, it will venture a response that will often mix facts with confabulations.
The philosopher Harry Frankfurt famously argued that bullshit is speech that is typically persuasive but is detached from a concern with the truth. Large language models are the ultimate bullshitters because they are designed to be plausible (and therefore convincing) with no regard for the truth. Bullshit doesn’t need to be false. Sometimes bullshitters describe things as they are, but if they are not aiming for the truth, what they say is still bullshit.
And bullshit is dangerous, warned Frankfurt. Bullshit is a greater threat to the truth than lies. The person who lies thinks she knows what the truth is, and is therefore concerned with the truth. She can be challenged and held accountable; her agenda can be inferred. The truth-teller and the liar play on opposite sides of the same game, as Frankfurt puts it. The bullshitter pays no attention to the game. Truth doesn’t even get confronted; it gets ignored; it becomes irrelevant.
Bullshit is more dangerous the more persuasive it is, and large language models are persuasive by design on two counts. First, they have analysed enormous amounts of text, which allows them to make a statistical guess as to what is a likely appropriate response to a given prompt. In other words, they mimic the patterns they have picked up in the texts they have gone through. Second, these systems are refined through reinforcement learning from human feedback (RLHF): a reward model is trained directly on human judgements of which kinds of responses people prefer. Through numerous iterations, the system learns to satisfy human preferences, thereby becoming more and more persuasive.
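The first of these two mechanisms can be sketched in miniature. The toy Python example below (the corpus and names are purely illustrative, not drawn from any real system) shows how a bare statistical next-word guesser produces fluent-sounding continuations with no notion of truth and no way to say "I don't know":

```python
from collections import defaultdict, Counter

# A toy "training corpus". A real model sees trillions of tokens;
# the principle is the same: count which words tend to follow which.
corpus = (
    "the oracle said socrates is the wisest man . "
    "the oracle said the poet is the wisest man . "
    "the poet said socrates knows nothing . "
).split()

# Bigram counts: for each word, how often each successor follows it.
successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def guess_next(word):
    """Return the statistically most likely next word.

    There is no notion of truth here, only of frequency: the
    model answers confidently even when the pattern is misleading.
    """
    counts = successors[word]
    return counts.most_common(1)[0][0] if counts else None

# The model "asserts" whatever continuation is most frequent,
# with no mechanism for flagging its own uncertainty.
print(guess_next("oracle"))    # -> "said"
print(guess_next("socrates"))
```

Real systems replace bigram counts with deep networks over billions of parameters, but the objective is still to continue text plausibly, which is why the output is persuasive rather than truth-tracking.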
As the proliferation of fake news has taught us, human beings don’t always prefer truth. Falsity is often much more attractive than bland truths. We like good, exciting stories much more than we like truth. Large language models are analogous to a nightmare student, professor, or journalist: the kind who, instead of acknowledging the limits of their knowledge, tries to wing it by bullshitting you.
Plato’s Apology suggests that we should build AI to be more like Socrates and less like bullshitters. We shouldn’t expect tech companies to design ethically out of goodwill alone. Silicon Valley is well known for its bullshitting abilities, and companies can even feel compelled to bullshit to stay competitive in that environment. That companies working in a corporate bullshitting environment create bullshitting products should hardly be surprising. One of the things the past two decades have taught us is that tech needs as much regulation as any other industry, and no industry can regulate itself. We regulate food, drugs, telecommunications, finance, and transport; why wouldn’t tech be next?
Plato leaves us with a final warning. One of the lessons of his work is to beware the flaws of democracy. Athenian democracy killed Socrates. It condemned its most committed citizen, its most valuable teacher, while it allowed sophists—the bullshitters of that time—to thrive. Our democracies seem likewise vulnerable to bullshitters. In the recent past, we have made them prime ministers and presidents. And now we are fueling the power of large language models, considering using them in all walks of life—even in contexts like journalism, politics, and medicine, in which truth is vital to the health of our institutions. Is that wise?



But in the second half of the 19th century, composers gradually began to deviate from a strict adherence to the principle of tonality, making it difficult to sense where the music stood in relation to the tonic. Schoenberg, believing that tonality had run its course, was determined to supplant it with the series, or tone row. In a series, each of the 12 notes of the chromatic scale of semitones appears exactly once; a note could be repeated only after the series had been completed. This gave a composer a staggering number of combinations to choose from: 1 × 2 × 3 × … × 12 = 479,001,600, to be exact (not counting shifts by octaves, which Schoenberg allowed). In serial music, complete democracy ruled: no single note held any preferred status over the others. Every note was related only to its immediate predecessor in the series; gone were the roles that different notes had played in relation to the tonic. At its heart it was a mathematical system, and Schoenberg was determined to impose it on music.
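The count Schoenberg had at his disposal is easy to check. A minimal Python sketch (an illustration of the arithmetic, not a musicological tool) confirms that the twelve chromatic notes admit exactly 12! orderings and builds one example row:

```python
import math
from itertools import permutations

# The twelve notes of the chromatic scale.
NOTES = ["C", "C#", "D", "D#", "E", "F",
         "F#", "G", "G#", "A", "A#", "B"]

# Number of possible tone rows: each note is used exactly once,
# so the count is 12 factorial.
num_rows = math.factorial(len(NOTES))
print(num_rows)  # 479001600, matching 1 x 2 x ... x 12

# One concrete series: the first permutation in lexicographic order.
first_row = next(iter(permutations(NOTES)))
print(" ".join(first_row))

def is_valid_row(row):
    """A valid series uses each of the 12 notes exactly once."""
    return sorted(row) == sorted(NOTES)

print(is_valid_row(first_row))  # True
```

Counting permutations is all that is needed here: octave shifts, which Schoenberg allowed, do not change which of the 479,001,600 orderings a row belongs to.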
I love my family. I was brought up in a very close family. Of course we fight, but we also laugh together. We share everything — all the joys and the pains. And I know the pain I am feeling, my family will feel the same thing. We confide in each other and find comfort in that. When I want to be happy, my family is always there for me.
I’ve been working in rural areas for the last ten years. And in the women, peasants and farmers I’ve met there, I’ve found another kind of genius. They help me understand that the future is possible.


Time travel makes regular appearances in popular culture, with innumerable time travel storylines in movies, television and literature. But it is a surprisingly old idea: one can argue that the Greek tragedy Oedipus Rex, written by Sophocles over 2,500 years ago, is the first time travel story.
Kurt Vonnegut’s anti-war novel Slaughterhouse-Five, published in 1969, describes how to evade the grandfather paradox. If free will simply does not exist, it is not possible to kill one’s grandfather in the past, since he was not killed in the past. The novel’s protagonist, Billy Pilgrim, can only travel to other points on his world line (the timeline he exists in), but not to any other point in space-time, so he could not even contemplate killing his grandfather.
Atomic clocks, combined with precise astronomical measurements, have revealed that the length of a day is suddenly getting longer, and scientists don’t know why.
A comparison between these estimates and an atomic clock has revealed a seemingly ever-shortening length of day over the past few years.
In February 2015, a Scottish woman uploaded a photograph of a dress to the internet. Within 48 hours the blurry snapshot had gone viral, provoking spirited debate around the world. The disagreement centred on the dress’s colour: some people were convinced it was blue and black while others were adamant it was white and gold.
In modern times, clocks underpin everything people do, from work to school to sleep. Timekeeping is also the invisible structure that makes modern infrastructure work. It forms the foundation of the high-speed computers that conduct financial trading and even the GPS system that pinpoints locations on Earth’s surface with unprecedented accuracy.

We all want to feel inner peace. We look for it throughout our entire lives, as being at peace allows us to dream and to actually follow those dreams. When we are at peace with ourselves, we are more understanding and loving towards others, we are able to embody the concept of being One, and therefore we create deeper and more meaningful connections with family, friends and people in general.
Meditation has been found to have numerous scientifically documented benefits. For example, it reduces stress, improves concentration, boosts creativity, and increases happiness. Meditation affects our mental health in a positive manner, decreasing anxiety and depression, improving sleep, and enhancing self-awareness.