Science Archive

Science is firstly a methodology. It is not firstly a body of proven knowledge, as rather pompous, self-important Covid-19 "ex-spurts" and Little Greta would have us believe. Scientists should proceed from observation to hypothesis, prediction and experiment, and thence to cautious conclusion.

An honest scientist sets out to disprove the hypothesis, not to prove it. But in these days of chasing grants and accolades, this golden rule is too frequently ignored, often with serious harm done.

I suspect that is the case with the highly politicised Covid-19 pronouncements of ‘The Science’. In the process, unscrupulous nonentities are advanced up the science career ladder, where they feed off the work of brighter PhD students and, in many cases, claim the credit.

In the case of Covid, the whole outpouring of advice on how to deal with, and exaggerate, this problem brings such so-called experts to the fore, with guesswork passed off as science and no account taken of the wider harm done to health and mass survival. R.J Cook

‘I see dead people’: why so many of us believe in ghosts

October 30, 2020 11.22am GMT

Author: Anna Stone, Senior Lecturer in Psychology, University of East London

Disclosure statement

Anna Stone does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Halloween seems an appropriate time of year to share the story of the Chaffin family and how a ghost helped decide a dispute over an inheritance. James L Chaffin of Mocksville, North Carolina, died after an accident in 1921, leaving his estate in full to his favourite son Marshall and nothing to his wife and three other children. A year later Marshall died, so the house and 120 acres of land went to Marshall’s widow and son.

But four years later, Chaffin senior’s youngest son, James “Pink” Chaffin, started having extraordinary dreams in which his father visited him and directed him to the location of a second, later will, in which Chaffin senior left the property divided between his widow and the surviving children. The case went to court and, as you’d expect, the newspapers of the time went mad for the story.

The court found in Pink’s favour and, thanks to the publicity, the Society for Psychical Research (SPR) investigated, finally coming to the conclusion that Pink had indeed been visited by his father’s ghost. Pink himself never wavered from this explanation, stating: “I was fully convinced that my father’s spirit had visited me for the purpose of explaining some mistake.”

Unlikely as it might seem in the cold light of day, ghosts and hauntings are a mainstream area of belief. Recent studies by YouGov in the UK and the USA show that between 30% and 50% of the population says they believe in ghosts. Belief in ghosts also appears to be global, with most (if not all) cultures around the world having some widely accepted kind of ghosts.

The existence of a ghost as an incorporeal (bodiless) soul or spirit of a dead person or animal is contrary to the laws of nature as we understand them, so it seems there is something here that calls for explanation. We can look at the worlds of literature, philosophy and anthropology for some of the reasons why people are so keen to believe.

Blithe (and vengeful) spirits

The desire for justice and the belief in some form of supernatural protection (which we see in most major religions) address basic human needs. Ghosts have long been thought of as vehicles for justice. Shakespeare’s Hamlet is visited by the ghost of his murdered father seeking revenge on his murderer. In Macbeth, meanwhile, the murdered Banquo points an accusing finger at the man responsible for his death.

Unwelcome guest: Banquo’s ghost from Shakespeare’s Macbeth. Théodore Chassériau

This idea has its equivalents today in various countries. In Kenya, a murdered person may become an ngoma, a spirit who pursues their murderer, sometimes causing them to give themselves up to the police. Or in Russia, the rusalka is the spirit of a woman who died by drowning and now lures men to their deaths. She may be released when her death is avenged.

Ghosts can also be friends and protectors. In Charles Dickens’s A Christmas Carol, Ebenezer Scrooge is helped by the ghosts of Christmas Past, Present and Future to mend his hard-hearted ways before it’s too late. In The Sixth Sense (spoiler alert), the ghost character played by Bruce Willis helps a young boy to come to terms with his ability to see ghosts and to help them find peace. Many people are comforted by thinking that their deceased loved ones are watching over them and perhaps guiding them.

But many people also like to believe that death is not the end of existence – it’s a comfort when we lose people we love or when we face the idea of our own mortality. Many cultures around the world have had beliefs that the dead can communicate with the living, and the phenomenon of spiritualism supposes that we can communicate with the spirits of the dead, often through the services of specially talented spirit mediums.

And we love to be scared, as long as we know we aren’t actually in danger. Halloween TV schedules are full of films where a group of (usually young) volunteers spends a night in a haunted house (with gory results). We seem to enjoy the illusion of danger and ghost stories can offer this kind of thrill.

Body and soul

Belief in ghosts finds support in the longstanding philosophical idea that humans are naïve dualists, naturally believing that our physical being is separate from our consciousness. This view of ourselves makes it easy for us to entertain the idea that our mind could have an existence separate from our body – opening the door to believing that our mind or consciousness could survive death, and so perhaps become a ghost.

Looking at how the brain works, the experience of hallucinations is a lot more common than many people realise. The SPR, founded in 1882, collected thousands of verified first-hand reports of visual or auditory hallucinations of a recently deceased person. More recent research suggests that a majority of elderly bereaved people may experience visual or auditory hallucinations of their departed loved ones that persist for a few months.

Another source of hallucinations is the phenomenon of sleep paralysis, which may be experienced when falling asleep or waking up. This temporary paralysis is sometimes accompanied by the hallucination of a figure in the room that could be interpreted as a supernatural being. The idea that this could be a supernatural visitation is easier to understand when you think that when we believe in a phenomenon, we are more likely to experience it.

Consider what might happen if you were in a reputedly haunted house at night and you saw something moving in the corner of your eye. If you believe in ghosts, you might interpret what you saw as a ghost. This is an example of top-down perception in which what we see is influenced by what we expect to see. And, in the dark, where it might be difficult to see properly, our brain makes the best inference it can, which will depend on what we think is likely – and that could be a ghost.

According to the Dutch philosopher Baruch Spinoza, belief comes quickly and naturally, whereas scepticism is slow and unnatural. In a study of neural activity, Harris and colleagues discovered that believing a statement requires less effort than disbelieving it.

Given these multiple reasons for us to believe in ghosts, it seems that the belief is likely to be with us for many years to come.

Comment There is a bit of a bourgeois sneer about this article. How is it that she and so many others can ridicule people’s belief in ghosts, yet we have to listen to Islam intruding self-righteously into our culture and must not dismiss its beliefs or offend its followers in the way this writer dismisses believers in ghosts? That’s liberals for you. A true scientist, as opposed to the sort of SAGE technicians about to do even more social and economic damage, keeps an open mind. R.J Cook

REVEALED: The scientific PROOF that shows reincarnation is REAL

WHILE many scientists dismiss the notion of reincarnation as a myth, some credible experts out there believe it is a genuine phenomenon.

Dr Ian Stevenson, former Professor of Psychiatry at the University of Virginia School of Medicine and former chair of the Department of Psychiatry and Neurology, dedicated the majority of his career to finding evidence of reincarnation, until his death in 2007.

Dr Stevenson claimed to have found over 3,000 examples of reincarnation over the course of his career, which he shared with the scientific community.

In a study titled ‘Birthmarks and Birth Defects Corresponding to Wounds on Deceased Persons’, Dr Stevenson used facial recognition to analyse similarities between the claimant and their alleged prior incarnation, while also studying birthmarks.

He wrote in his study: “About 35 per cent of children who claim to remember previous lives have birthmarks and/or birth defects that they (or adult informants) attribute to wounds on a person whose life the child remembers. The cases of 210 such children have been investigated. 

Are people ‘reborn’ once they die? (Image: GETTY)

“The birthmarks were usually areas of hairless, puckered skin; some were areas of little or no pigmentation (hypopigmented macules); others were areas of increased pigmentation (hyperpigmented nevi).

“The birth defects were nearly always of rare types. In cases in which a deceased person was identified the details of whose life unmistakably matched the child’s statements, a close correspondence was nearly always found between the birthmarks and/or birth defects on the child and the wounds on the deceased person. 

Have scientists found proof of reincarnation? (Image: GETTY)

“In 43 of 49 cases in which a medical document (usually a postmortem report) was obtained, it confirmed the correspondence between wounds and birthmarks (or birth defects).”

In a separate study, Dr Stevenson interviewed three children who claimed to remember aspects of their previous lives.

Many believe that the soul leaves the body after death (Image: GETTY)

The children made 30-40 statements each regarding memories that they themselves had not experienced, and through verification, he found that up to 92 per cent of the statements were correct.

In the article, published in the Journal of Scientific Exploration, Dr Stevenson wrote: “It was possible in each case to find a family that had lost a member whose life corresponded to the subject’s statements.”

What happens after you die? That used to be just a religious question, but science is starting to weigh in. Sam Littlefair looks at the evidence that you’ve lived before.

James Huston (left) was the only pilot on the aircraft carrier Natoma Bay to die in the battle of Iwo Jima. More than five decades later, a two-year-old named James Leininger (right) talked about flying off a boat named “Natoma” near Iwo Jima and made drawings of fighter planes getting shot down.

On March 3, 1945, James Huston, a twenty-one-year-old U.S. Navy pilot, flew his final flight. He took off from the USS Natoma Bay, an aircraft carrier engaged in the battle of Iwo Jima. Huston was flying with a squadron of eight pilots, including his friend Jack Larsen, to strike a nearby Japanese transport vessel. Huston’s plane was shot in the nose and crashed in the ocean.

Fifty-three years later, in April of 1998, a couple from Louisiana named Bruce and Andrea Leininger had a baby boy. They named him James.

When he was twenty-two months old, James and his father visited a flight museum, and James discovered a fascination with planes—especially World War II aircraft, which he would stare at in awe. James got a video about a Navy flight squad, which he watched repeatedly for weeks.

One of James Leininger’s drawings of fighter planes.

Within two months, James started saying the phrase, “Airplane crash on fire,” including when he saw his father off on trips at the airport. He would slam his toy planes nose-first into the coffee table, ruining the surface with dozens of scratches.

James started having nightmares, first with screaming, and then with words like, “Airplane crash on fire! Little man can’t get out!”, while thrashing and kicking his legs.

Eventually, James talked to his parents about the crash. James said, “Before I was born, I was a pilot and my airplane got shot in the engine, and it crashed in the water, and that’s how I died.” James said that he flew off of a boat and his plane was shot by the Japanese. When his parents asked the name of the boat, he said “Natoma.”

When his parents asked James who “little man” was, he would say “James” or “me.” When his parents asked if he could remember anyone else, he offered the name “Jack Larsen.” When James was two and a half, he saw a photo of Iwo Jima in a book, and said “My plane got shot down there, Daddy.”

When James Leininger was eleven years old, Jim Tucker came to visit him and his family. Tucker, a psychiatrist from the University of Virginia, is one of the world’s leading researchers on the scientific study of reincarnation, or rebirth. He spent two days interviewing the Leininger family, and says that James represents one of the strongest cases of seeming reincarnation that he has ever investigated.

“You’ve got this child with nightmares focusing on plane crashes, who says he was shot down by the Japanese, flew off a ship called ‘Natoma,’ had a friend there named Jack Larsen, his plane got hit in the engine, crashed in the water, quickly sank, and said he was killed at Iwo Jima. We have documentation for all of this,” says Tucker in an interview.

“It turns out there was one guy from the ship Natoma Bay who was killed during the Iwo Jima operations, and everything we have documented from James’ statements fits for this guy’s life.”

As a toddler, James Leininger was fascinated with airplanes and knew obscure details about WWII aircraft.

Jim Tucker grew up in North Carolina. He was a Southern Baptist, but when he started training in psychiatry, he left behind any religious or spiritual worldview. Years later, he read about the work of a psychiatrist named Ian Stevenson in the local paper.

Stevenson was a well-respected academic who left his position as chair of psychiatry at the University of Virginia in the 1960s to undertake a full-time study of reincarnation. Though his papers never got published in any mainstream scientific journals, he received appreciative reviews in respected publications like The Journal of the American Medical Association, The American Journal of Psychology, and The Lancet. Before his death in 2007, Stevenson handed over much of his work to Tucker at the University of Virginia’s Division of Perceptual Studies.

The first step in researching the possibility of rebirth is the collection of reports of past life memories. Individually, any one report, like James Leninger’s, proves little. But when thousands of the cases are analyzed collectively, they can yield compelling evidence.

After decades of research, the Division of Perceptual Studies now houses 2,500 detailed records of children who have reported memories of past lives. Tucker has written two books summarizing the research, Life Before Life and Return to Life. In Life Before Life, Tucker writes, “The best explanation for the strongest cases is that memories, emotions, and even physical injuries can sometimes carry over from one life to the next.”

Children in rebirth cases generally start making statements about past lives between the ages of two and four and stop by the age of six or seven, the age when most children lose early childhood memories.

A typical case of Tucker’s starts with a communication from a parent whose child has described a past life. Parents often have no prior interest in reincarnation, and they get in touch with Tucker out of distress—their child is describing things there is no logical way they could have experienced. Tucker corresponds with the parents to find out more. If it sounds like a strong case, with the possibility of identifying a previous life, he proceeds.

When Tucker meets with a family, he interviews the parents, the child, and other potential informants. He fills out an eight-page registration form, and collects records, photographs, and evidence. Eventually, he codes more than two hundred variables for each case into a database.

In the best cases, the researcher meets the family before they’ve identified a suspected previous identity. If the researchers can identify the previous personality (PP) first, they have the opportunity to perform controlled tests.

In a recent case, Tucker met a family whose son remembered fighting in the jungle in the Vietnam War and getting killed in action. The boy gave a name for the PP. When the parents looked up the name, they found that it was a real person. Before doing any further research, they contacted Tucker.

Tucker did a controlled test with the boy, who was five. He showed him eight pairs of photos. In each pair, one photo was related to the soldier’s life and one was not—such as a photo of his high school and a photo of a high school he didn’t go to. For two of the pairs, the boy made no choice. In the remaining six, he chose correctly.
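As a rough back-of-envelope check (my own, not from the article): if the boy had been guessing at random, each answered pair would be an independent 50/50 choice, so six correct out of six answered would happen by luck only about 1.6% of the time. A minimal sketch of that arithmetic:

```python
# Back-of-envelope odds for the photo-pair test above, assuming
# (hypothetically) each answered pair is an independent 50/50 guess.
def chance_all_correct(n_answered: int, p_guess: float = 0.5) -> float:
    """Probability of getting all n_answered pairs right by pure guessing."""
    return p_guess ** n_answered

print(chance_all_correct(6))  # 0.015625, i.e. about a 1.6% chance by luck
```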

In another case, a seven-year-old girl named Nicole remembered living in a small town, on “C Street,” in the early 1900s. She remembered much of the town having been destroyed by a fire and often talked of wanting to go home. Through research, Tucker hypothesized Nicole was describing Virginia City, Nevada, a small mining town that was destroyed by fire in 1875, where the main road was “C Street.” Tucker traveled to Virginia City with Nicole and her mother. As they drove down the road into the town, Nicole remarked, “They didn’t have these black roads when I lived here before.”

Nicole had described strange memories of her previous life. She said there were trees floating in the water. She said horses walked down the streets. And she talked about a “hooley dance.” In the town, they discovered that there had once been a massive network of river flumes used to transport logs to the town to construct nearby mineshafts. They discovered that wild horses wandered through the streets of the town. And that a “hooley” is a type of Irish dance that was popular there.

“We weren’t able to identify a specific individual,” says Tucker. “But there are parts of the case that are hard to dismiss.”

As her plane was lifting off from Nevada, Nicole burst into tears. “I don’t want to leave here,” she said.

Her mother asked if she really believed Virginia City was her home. “No,” said Nicole. “I know it was.”

Researcher Jim Tucker, author of “Life Before Life,” who has collected hundreds of cases of children claiming past-life memories.

Tucker is trying to investigate scientifically a question that has traditionally been the province of religion: what happens after we die? Two of the world’s largest religions, Hinduism and Buddhism, argue that we are reborn.

Certain schools of Buddhism don’t particularly concern themselves with the idea of rebirth, and some modern analysts argue that the Buddha taught it simply as a matter of convenience because it was the accepted belief in the India of his time. Most Buddhists, however, see it as central to the teachings on the suffering of samsara—the wheel of cyclic existence—and nirvana, the state of enlightenment in which one is free from the karma that drives rebirth (although one may still choose to be reborn in order to follow the bodhisattva path of compassion).

Buddhists generally prefer the term “rebirth” to “reincarnation” to differentiate between the Hindu and Buddhist views. The concept of reincarnation generally refers to the transmigration of an atman, or soul, from lifetime to lifetime. This is the Hindu view, and it is how reincarnation is generally understood in the West.

Instead, Buddhism teaches the doctrine of anatman, or non-self, which says there is no permanent, unchanging entity such as a soul. In reality, we are an ever-changing collection of consciousnesses, feelings, perceptions, and impulses that we struggle to hold together to maintain the illusion of a self.

See also: The Buddhist Teachings on Rebirth

In the Buddhist view, the momentum, or “karma,” of this illusory self is carried forward from moment to moment—and from lifetime to lifetime. But it’s not really “you” that is reborn. It’s just the illusion of “you.” When asked what gets reborn, Buddhist teacher Chögyam Trungpa Rinpoche reportedly said, “Your bad habits.”

For Jim Tucker, though, the spiritual connections—Buddhist or otherwise—are incidental. “I’m purely investigating what the facts show,” he says, “as opposed to how much they may agree or disagree with particular belief systems.”

Rebirth is just one component of a theory of consciousness that Tucker is working on. “The mainstream materialist position is that consciousness is produced by the brain, this meat,” he says. “So consciousness is what some people would call an epiphenomenon, a byproduct.”

He sees it the other way around: our minds don’t exist in the world; the world exists in our minds. Tucker describes waking reality as like a “shared dream,” and when we die, we don’t go to another place. We go into another dream.

Tucker’s dream model parallels some key Buddhist concepts. In Buddhism, reality is described as illusion, often compared to a sleeping dream. In Siddhartha Gautama’s final realization, he reportedly saw the truth of rebirth and recalled all of his past lives. Later that night, he attained enlightenment, exited the cycle of death and rebirth, and earned the title of “Buddha”—which literally means “one who is awake.”

In fact, according to scripture, the Buddha met most—if not all—of Jim Tucker’s six criteria for a proven case of rebirth. Maybe he would have made an interesting case.

Memory is only one phenomenon associated with past lives, and memories alone are not enough to make a case.

In order to proceed with an investigation, Tucker’s team requires that a case meet at least two out of six criteria:

  1. a specific prediction of rebirth, as in the Tibetan Buddhist tulku system
  2. a family member (usually the mother) dreaming about the previous personality (PP) coming
  3. birthmarks or birth defects that seem to correspond to the previous life
  4. corroborated statements about a previous life
  5. recognitions by the child of persons or objects connected to the PP
  6. unusual behaviors connected to the PP

Tucker’s research suggests that, if rebirth is real, much more than memories pass from one life to the next. Many children have behaviors and emotions that seem closely related to their previous life.

Emotionality is a signal of a strong case. The more emotion a child shows when recalling a past life, the stronger their case tends to be. When children start talking about past-life memories, they’re often impassioned. Sometimes they demand to be taken to their “other” family. When talking about their past life, the child might talk in the first person, confuse past and present, and get upset. Sometimes they try to run away.

In one case, a boy named Joey talked about his “other mother” dying in a car accident. Tucker recounts the following scene in Life Before Life: “One night at dinner when he was almost four years old, he stood up in his chair and appeared pale as he looked intently at his mother and said, ‘You are not my family—my family is dead.’ He cried quietly for a minute as a tear rolled down his cheek, then he sat back down and continued with his meal.”

In another unsettling case, a British boy recalled the life of a German WWII pilot. At age two, he started talking about crashing his plane while on a bombing mission over England. When he learned to draw, he drew swastikas and eagles. He goose-stepped and did the Nazi salute. He wanted to live in Germany, and had an unusual taste for sausages and thick soups.

In some cases, these emotions manifest in symptoms that look like post-traumatic stress disorder, but without any obvious trauma in this life. Some of these children engage in “post-traumatic play” in which they act out their trauma—often the way the PP died—with toys. One boy repeatedly acted out his PP’s suicide, pretending a stick was a rifle and putting it under his chin. In the cases where the PP died unnaturally, more than a third of the children had phobias related to the mode of death. Among the children whose PP died by drowning, a majority were afraid of water.

Positive attributes can also carry over, seemingly. In almost one in ten cases, parents report that their child has an unusual skill related to the previous life. Some of those cases involve “xenoglossy,” the ability to speak or write a language one couldn’t have learned by normal means.

A two-year-old boy named Hunter remembered verifiable details from a past life as a famous pro golfer. Hunter took toy golf clubs everywhere he went. He started golf lessons three years ahead of the minimum age, and his instructors called him a prodigy. By age seven, he had competed in fifty junior tournaments, winning forty-one.

In many cases, the child is born with marks or birth defects that seem connected to wounds on the PP’s body. Ian Stevenson reported two hundred such cases in his monograph Reincarnation and Biology, including several in which a child who remembered having been shot had a small, round birthmark (matching a bullet entrance wound) and, on the other side of their body, a larger, irregularly-shaped birthmark (matching a bullet exit wound).

Ryan, a boy from Oklahoma (left, with his mother and Jim Tucker) remembered many details about a life as a Hollywood movie extra and agent. After investigation, the details matched the life of a man named Marty Martyn (right), who died in 1964.

Patrick, born in the Midwest in 1992, had several notable birthmarks. His older half-brother, Kevin, had died of cancer twelve years before Patrick was born. Kevin was in good health until he was sixteen months old, when he developed a limp, caused by cancer. He was admitted to the hospital, and doctors saw that he had a swelling above his right ear, also caused by the disease. His left eye was protruding and bleeding slightly, and he eventually went blind in that eye. The doctors gave him fluid through an IV inserted in the right side of his neck. He died within a few months.

Soon after Patrick’s birth, his mother noticed a dark line on his neck, exactly where Kevin’s IV had been; an opacity in his left eye, where Kevin’s eye had protruded; and a bump above his right ear, where Kevin had swelling. When he began to walk, Patrick had a limp, like his brother. At age four, he began recalling memories from Kevin’s life and saying he had been Kevin.

See also: A comparison of modern reincarnation research and the traditional Buddhist views

Tucker says there is no perfect case, but, collectively, the research becomes hard to rationalize with normal explanations. Research has demonstrated that children in past-life memory cases are no more prone to fantasies, suggestions, or dissociation than other children. Regarding coincidence, statisticians have declined to do statistical analyses because the cases involve too many complex factors, but one statistician commented, “phrases like ‘highly improbable’ and ‘extremely rare’ come to mind.” For the stronger cases, the most feasible explanation is elaborate fraud, but it’s hard to see any reason why a family would make up such a story. Often, reincarnation actually violates their belief system, and many families remain anonymous, anyway.

“This is not like what you might see on TV, where someone says they were Cleopatra,” says Tucker. “These kids are typically talking about being an ordinary person. The child has numerous memories of a nondescript life.”

Tucker quotes the late astronomer and science writer Carl Sagan, whose last book was The Demon-Haunted World, a repudiation of pseudoscience and a classic work on skepticism. In it, Sagan wrote that there were three paranormal phenomena he believed deserve serious study, the third being “that young children sometimes report details of a previous life, which upon checking turn out to be accurate and which they could not have known about in any other way than reincarnation.”

Sagan didn’t say he believed in reincarnation, but he felt the research had yielded enough evidence that it deserved further study. Until an idea is disproven, Sagan said, it’s critical that we engage with the idea with openness and ruthless scrutiny. “This,” he wrote, “is how deep truths are winnowed from deep nonsense.”


About Sam Littlefair

Sam Littlefair is the former editor of LionsRoar.com. He has also written for The Coast, Mindful, and Atlantic Books Today. Find him on Twitter, @samlfair, and Facebook, @samlfair.

What Is Multi-Dimensional Space? October 26th 2020

By Alan Rankin | Last Modified: October 23, 2020

Humans experience day-to-day reality in four dimensions: the three physical dimensions and time. According to Albert Einstein’s theory of relativity, time is actually the fourth physical dimension, with measurable characteristics similar to the other three. An ongoing field of study in physics is the attempt to explain both relativity and quantum theory, which governs reality at very small scales. Several proposals in this field suggest the existence of multi-dimensional space. In other words, there may be additional physical dimensions that humans cannot perceive.

Tesseracts visually represent the four dimensions, including time.

The science surrounding multi-dimensional space is so mind-boggling that even the physicists who study it do not fully understand it. It may be helpful to start with the three observable dimensions, which correspond to the height, width, and length of a physical object. Einstein, in his work on general relativity in the early 20th century, demonstrated that time is also a physical dimension. This is observable only in extreme conditions; for example, the immense gravity of a planetary body can actually slow down time in its near vicinity. The new model of the universe created by this theory is known as space-time.
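To put a number on that slowing of time: the standard gravitational time-dilation factor is sqrt(1 − 2GM/(rc²)). The short sketch below (my own illustration, not from the article) estimates how much slower a clock ticks at Earth’s surface than one far away.

```python
import math

# Gravitational time dilation: a clock at distance r from a mass M ticks
# slower than a distant clock by the factor sqrt(1 - 2GM / (r c^2)).
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
M_earth = 5.972e24  # mass of Earth, kg
r_earth = 6.371e6   # radius of Earth, m

factor = math.sqrt(1 - 2 * G * M_earth / (r_earth * c**2))
print(f"{factor:.12f}")
# About 1 - 7e-10: roughly 22 milliseconds "lost" per year at the
# surface relative to a clock far from Earth's gravity.
```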

In theory, gravity from a massive object bends space-time around it.

Since Einstein’s era, scientists have discovered many of the universe’s secrets, but not nearly all. A major field of study, quantum mechanics, is devoted to learning about the smallest particles of matter and how they interact. These particles behave in a very different manner than the matter of observable reality. Physicist John Wheeler is reported to have said, “If you are not completely confused by quantum mechanics, you do not understand it.” It has been suggested that multi-dimensional space can explain the strange behavior of these elementary particles.

For much of the 20th and 21st centuries, physicists have tried to reconcile the discoveries of Einstein with those of quantum physics. It is believed that such a theory would explain much that is still unknown about the universe, including poorly understood forces such as gravity. One of the leading contenders for this theory is known variously as superstring theory, supersymmetry, or M-theory. This theory, while explaining many aspects of quantum mechanics, can only be correct if reality has 10, 11, or as many as 26 dimensions. Thus, many physicists believe multi-dimensional space is likely.

The extra dimensions of this multi-dimensional space would exist beyond the ability of humans to observe them. Some scientists suggest they are folded or curled into the observable three dimensions in such a way that they cannot be seen by ordinary methods. Scientists hope their effects can be documented by watching how elementary particles behave when they collide. Many experiments in the world’s particle accelerator laboratories, such as CERN in Europe, are conducted to search for this evidence. Other theories claim to reconcile relativity and quantum mechanics without requiring the existence of multi-dimensional space; which theory is correct remains to be seen.

Dreams Are The REAL World

Arno Pienaar – Dreams are reality just as much as the real world is classified as reality. Dreams are your actual own reality and the real world is the creator’s reality. 

Dreams are by far the most intriguing aspect of existence for a human being. Within them we behold experiences that the conscious mind may recollect but, for the most part, cannot make sense of. The only sense we can gain from them is the way they make us feel intuitively.

Subconscious Guiding Mechanism

The feeling is known to be the message carried over from the guiding mechanism of the sub-conscious mind.

The guidance we receive in our dreams comes, in fact, from our very selves, although the access we have to everything is only tapped into briefly, when the conscious mind is completely shut down in the sleeping state.

The subconscious tends to show us, whenever it has the chance, the things that dominate our consciousness. The onus is then on us to sort out the way we live our lives in the primary waking state, where we embody programming that keeps us out of our own paradise, fully conscious in the now.

Labels such as the astral plane, dream-scape or the fourth dimension, have served to make people believe that this dimension of reality is somehow not as real as the “real” world, or that the dream state is not as valid as the waking state.

This is one of the biggest lies ever, as the dream state is in fact the only reality where you can tap into the unconscious side of yourself, which you otherwise cannot perceive except during transcendental states under psychedelics or during disciplined meditational practice.

Dreams offer a vital glimpse into your dark side, the unconscious embedded programming which corrupts absolutely until light has shone on it.

The dream state shows us what we are unconsciously projecting as a reality and must be used to face the truth of what you have mistaken for reality.

A person with an eating disorder will, for sure, have plenty of dreams involving gluttony; a nympho will have many lustful encounters in the dream state; a narcissist will have audiences worshipping him, or will worship himself; and someone filled with hatred will encounter scenes I wish not to elaborate on.

The patterns of your dreams, and especially recurring themes, are projections of your unconscious mind, which governs the ultimate experience of your “waking state.”

I believe the new heaven and earth is the merging of heaven (dreams) and earth (matrix) into one conclusive experience.

Besides showing us what needs attention, dreams also transcend the rules and laws of matter, time and space.

The successful lucid dreamer gains an entire new heaven and earth, where the seemingly impossible becomes possible.

For the one who gains access to everything through the dream state, the constraints of the so-called real world in the waking state become but a monkey on the back.

When you can fly, see and talk to anybody, and go anywhere you choose, then returning to the world of matter, time and space is arguably a nightmare.

Anybody with a sound mind would choose to exist beyond the limitations of the matrix-construct. There are many that already do.

The Real World vs. the Dream World

The greatest of sages have enlightened us that the REAL WORLD is indeed the illusion, maya, or manyan.

If what we have thought to be real is, in fact, the veil to fool us that we are weak, small and limited, then our dreams must be the real world and this experience, here, is just an aspect of ourselves that is in dire need of deprogramming from the jaws of hypnotic spell-casting.

There is actually no such thing as reality. There is also no such thing as the real world. What makes the “waking state” the real world and the “dream state” the unreal world?

People would argue that the matrix is a world in which our physical bodies are housed, and that we always return after sleep to continue our existence in the real world.

Morpheus exclaimed that the body cannot survive without the mind. What he meant was that the body is but a projection of the mind.

Have you ever had a dream that was interrupted unexpectedly, only to continue from where you had left off when you go back to sleep?

Do you have a sanctuary which you visit regularly in the dream state? A safe haven in your subconscious mind?

When we have the intent to return to any reality, we do so, as fellow lucid dreamers have proven.

What if I told you that this matrix-hive is a dream just like any other dream you have, and that billions of souls share this dream together?

Do you think these souls consciously chose to share this dream together? The answer is no, they were merely incepted by an idea that “this is the real world” from the very beings that summoned them into this plane through the sexual act.

Every night we have to re-energize ourselves by accessing the dream world (i.e. the actual world), the True Source of infinite Potential, the reservoir that refills us, only to return and give that energy to the dreamworld we believe to be the real world. This “real world” only seems like the REAL WORLD because most of its inhabitants believe just that.

Pause and Continue

Just like we can pause a dream when interrupted and return, so can we pause the “real world”. Whether you believe it or not, we only return to the “waking reality” because we have forsaken ourselves for it and we expect to return to it on a daily basis.

We intend to always come back because we have such a large investment in an illusion, and this is our chain to the physical world. We are so attached to this dream that we even reincarnate to continue from where we left off, because this dream is able to trap us in limbo forever.

We have capitulated to it and, in so doing, have given it absolute power over us. We are in fact in a reality of another, not in our own. That is why we cannot manifest what we want in it: it has claimed ownership over us here, and while we are in it, we are subject to its rules, laws and limitations.

When one enters the dimension of another, one falls subject to its construct.

In the case of the Real World, the Real World has been hacked by a virus that affects all the beings that embrace the code of that matrix. It is like a spiderweb that traps souls.

As soon as we wake up in the morning, we start dreaming of the world we share together. As long as the mind machine is in power, it will always kick in again after waking up.

Whatever it is we believe, becomes activated by this dream once more, so to validate our contribution to this illusion, to which we have agreed.

The time is now to turn all of it back to the five elements, so that we can have our own reality again!

Hyperdimensionality is a Reality Identity Crisis

We are only hyper-dimensional beings because we are not in our own reality yet — we are in the middle of two realities fighting over us. It is time to come to terms with this identity crisis.

We cannot be forced to remain in a dimension we choose not to partake in, a dimension that was made to fool us into believing it is the alpha and omega.

It is this very choice (rejecting the digital holographic program) that many are now making, which is breaking the mirror on the wall (destroying the illusion).

Deprogramming Souls of Matrix-Based Constructs is coming in 2016. The spiderweb will be disentangled and the laws of time, matter and space will be transcended in the Real World, once we will regain full consciousness in the NOW.

Original source Dreamcatcherreality.com

SF Source How To Exit The Matrix  May 2016

Physicists Say There’s a 90 Percent Chance Civilization Will Soon Collapse October 9th 2020

Final Countdown

If humanity continues down its current path, civilization as we know it is heading toward “irreversible collapse” in a matter of decades.

That’s according to research published in the journal Scientific Reports, which models out our future based on current rates of deforestation and other resource use. As Motherboard reports, even the rosiest projections in the research show a 90 percent chance of catastrophe.

Last Gasp

The paper, penned by physicists from the Alan Turing Institute and the University of Tarapacá, predicts that deforestation will claim the last forests on Earth within 100 to 200 years. Coupled with global population changes and resource consumption, that’s bad news for humanity.

“Clearly it is unrealistic to imagine that the human society would start to be affected by the deforestation only when the last tree would be cut down,” reads the paper.

Coming Soon

In light of that, the duo predicts that society as we know it could end within 20 to 40 years.

In lighter news, Motherboard reports that the global rate of deforestation has actually decreased in recent years. But there’s still a net loss in forest overall — and newly-planted trees can’t protect the environment nearly as well as old-growth forest.

“Calculations show that, maintaining the actual rate of population growth and resource consumption, in particular forest consumption, we have a few decades left before an irreversible collapse of our civilization,” reads the paper.

READ MORE: Theoretical Physicists Say 90% Chance of Societal Collapse Within Several Decades [Motherboard]

More on societal collapse: Doomsday Report Author: Earth’s Leaders Have Failed

What is Binary, and Why Do Computers Use It?

Anthony Heddings@anthonyheddings
October 1, 2018, 6:40am EDT

Computers don’t understand words or numbers the way humans do. Modern software allows the end user to ignore this, but at the lowest levels of your computer, everything is represented by a binary electrical signal that registers in one of two states: on or off. To make sense of complicated data, your computer has to encode it in binary.

Binary is a base 2 number system. Base 2 means there are only two digits—1 and 0—which correspond to the on and off states your computer can understand. You’re probably familiar with base 10—the decimal system. Decimal makes use of ten digits that range from 0 to 9, and then wraps around to form two-digit numbers, with each digit being worth ten times more than the last (1, 10, 100, etc.). Binary is similar, with each digit being worth two times more than the last.

Counting in Binary

In binary, the first digit is worth 1 in decimal. The second digit is worth 2, the third worth 4, the fourth worth 8, and so on—doubling each time. Adding these all up gives you the number in decimal. So,

1111 (in binary)  =  8 + 4 + 2 + 1  =  15 (in decimal)

Accounting for 0, this gives us 16 possible values for four binary bits. Move to 8 bits, and you have 256 possible values. This takes up a lot more space to represent, as four digits in decimal give us 10,000 possible values. It may seem like we’re going through all this trouble of reinventing our counting system just to make it clunkier, but computers understand binary much better than they understand decimal. Sure, binary takes up more space, but we’re held back by the hardware. And for some things, like logic processing, binary is better than decimal.
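Here is that doubling rule as a minimal Python sketch (mine, for illustration), with Python’s built-in conversion as a cross-check:

```python
def binary_to_decimal(bits: str) -> int:
    """Convert a binary string to decimal using the doubling rule above."""
    value = 0
    for bit in bits:                   # leftmost digit first
        value = value * 2 + int(bit)   # each digit is worth twice the last
    return value

print(binary_to_decimal("1111"))  # 15, matching 8 + 4 + 2 + 1
print(int("1111", 2))             # Python's built-in parser agrees: 15
```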

There’s another base system that’s also used in programming: hexadecimal. Although computers don’t run on hexadecimal, programmers use it to represent binary addresses in a human-readable format when writing code. This is because two digits of hexadecimal can represent a whole byte, eight digits in binary. Hexadecimal uses 0-9 like decimal, and also the letters A through F to represent the additional six digits.
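To see how two hex digits cover exactly one byte, a short illustration (my own example values):

```python
byte = 0b10111010          # eight binary digits: one byte
print(f"{byte:08b}")       # '10111010' -- the byte in binary
print(f"{byte:02X}")       # 'BA'       -- the same byte in two hex digits
print(int("BA", 16))       # 186        -- and back to decimal
```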

So Why Do Computers Use Binary?

The short answer: hardware and the laws of physics. Every number in your computer is an electrical signal, and in the early days of computing, electrical signals were much harder to measure and control very precisely. It made more sense to only distinguish between an “on” state—represented by negative charge—and an “off” state—represented by a positive charge. For those unsure of why the “off” is represented by a positive charge, it’s because electrons have a negative charge—more electrons mean more current with a negative charge.

So, the early room-sized computers used binary to build their systems, and even though they used much older, bulkier hardware, we’ve kept the same fundamental principles. Modern computers use what’s known as a transistor to perform calculations with binary. A field-effect transistor (FET) works like this:

Essentially, it only allows current to flow from the source to the drain if there is a current in the gate. This forms a binary switch. Manufacturers can build these transistors incredibly small—all the way down to 5 nanometers, or about the size of two strands of DNA. This is how modern CPUs operate, and even they can suffer from problems differentiating between on and off states (though that’s mostly due to their unreal molecular size, being subject to the weirdness of quantum mechanics).

But Why Only Base 2?

So you may be thinking, “why only 0 and 1? Couldn’t you just add another digit?” While some of it comes down to tradition in how computers are built, to add another digit would mean we’d have to distinguish between different levels of current—not just “off” and “on,” but also states like “on a little bit” and “on a lot.”

The problem here is if you wanted to use multiple levels of voltage, you’d need a way to easily perform calculations with them, and the hardware for that isn’t viable as a replacement for binary computing. It indeed does exist; it’s called a ternary computer, and it’s been around since the 1950s, but that’s pretty much where development on it stopped. Ternary logic is way more efficient than binary, but as of yet, nobody has an effective replacement for the binary transistor, or at the very least, no work’s been done on developing them at the same tiny scales as binary.

The reason we can’t use ternary logic comes down to the way transistors are stacked in a computer—something called “gates”—and how they’re used to perform math. Gates take two inputs, perform an operation on them, and return one output.

This brings us to the long answer: binary math is way easier for a computer than anything else. Boolean logic maps easily to binary systems, with True and False being represented by on and off. Gates in your computer operate on boolean logic: they take two inputs and perform an operation on them like AND, OR, XOR, and so on. Two inputs are easy to manage. If you were to graph the answers for each possible input, you would have what’s known as a truth table:

A binary truth table operating on boolean logic will have four possible rows for each two-input operation, but a ternary truth table would have nine. While a binary system has 16 possible two-input operators (2^(2^2) = 16), a ternary system would have 19,683 (3^(3^2) = 19,683). Scaling becomes an issue because while ternary is more efficient, it’s also exponentially more complex.
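Those counts follow directly: a two-input gate has base² truth-table rows, and each row’s output can take any of the base’s values. A quick sketch (mine, not from the article):

```python
from itertools import product

def count_two_input_operators(base: int) -> int:
    """Number of distinct two-input gates: base ** (base ** 2)."""
    rows = base ** 2        # input combinations in the truth table
    return base ** rows     # one output value chosen independently per row

print(count_two_input_operators(2))  # 16 possible binary operators
print(count_two_input_operators(3))  # 19683 possible ternary operators

# Truth table for one binary gate, AND:
for a, b in product((0, 1), repeat=2):
    print(a, b, a & b)
```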

Who knows? In the future, we could begin to see ternary computers become a thing, as we push the limits of binary down to a molecular level. For now, though, the world will continue to run on binary.

Image credits: spainter_vfx/Shutterstock, Wikipedia, Wikipedia, Wikipedia, WikipediaREAD NEXT

What is Binary, and Why Do Computers Use It?

Anthony Heddings@anthonyheddings
October 1, 2018, 6:40am EDT

Computers don’t understand words or numbers the way humans do. Modern software allows the end user to ignore this, but at the lowest levels of your computer, everything is represented by a binary electrical signal that registers in one of two states: on or off. To make sense of complicated data, your computer has to encode it in binary.

Binary is a base 2 number system. Base 2 means there are only two digits—1 and 0—which correspond to the on and off states your computer can understand. You’re probably familiar with base 10—the decimal system. Decimal makes use of ten digits that range from 0 to 9, and then wraps around to form two-digit numbers, with each digit being worth ten times more than the last (1, 10, 100, etc.). Binary is similar, with each digit being worth two times more than the last.

Counting in Binary

In binary, the first digit is worth 1 in decimal. The second digit is worth 2, the third worth 4, the fourth worth 8, and so on—doubling each time. Adding up the values of every digit that is set to 1 gives you the number in decimal. So,

1111 (in binary)  =  8 + 4 + 2 + 1  =  15 (in decimal)

Counting 0, four binary bits give us 16 possible values. Move to 8 bits, and you have 256 possible values. Binary takes more digits to write than decimal: four decimal digits alone cover 10,000 values. It may seem like we’re going through all this trouble of reinventing our counting system just to make it clunkier, but computers understand binary much better than they understand decimal. Sure, binary takes up more space, but we’re held back by the hardware. And for some things, like logic processing, binary is better than decimal.
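
If you want to check this arithmetic yourself, most languages have built-in conversions. Here is a minimal sketch in Python (the numbers are just the examples above):

    # Convert between binary strings and decimal integers.
    int("1111", 2)   # -> 15, since 8 + 4 + 2 + 1 = 15
    bin(15)          # -> '0b1111'
    2 ** 4           # -> 16 possible values for four bits
    2 ** 8           # -> 256 possible values for eight bits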

There’s another base system that’s also used in programming: hexadecimal. Although computers don’t run on hexadecimal, programmers use it to represent binary addresses in a human-readable format when writing code. This is because two hexadecimal digits can represent a whole byte, which is eight binary digits. Hexadecimal uses 0-9 like decimal, plus the letters A through F for the additional six digits.
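
Again in Python, a quick sanity check of the byte-to-hex relationship described above:

    hex(255)         # -> '0xff': one byte, eight bits, two hex digits
    int("ff", 16)    # -> 255
    f"{200:02x}"     # -> 'c8', a byte formatted as two hex digits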

So Why Do Computers Use Binary?

The short answer: hardware and the laws of physics. Every number in your computer is an electrical signal, and in the early days of computing, electrical signals were hard to measure and control precisely. It made more sense to distinguish only between an “on” state—represented by negative charge—and an “off” state—represented by positive charge. For those unsure why “off” is represented by a positive charge, it’s because electrons carry a negative charge, so more electrons flowing means a stronger negative signal.

So, the early room-sized computers used binary to build their systems, and even though they used much older, bulkier hardware, we’ve kept the same fundamental principles. Modern computers use what’s known as a transistor to perform calculations with binary. Here’s a diagram of what a field-effect transistor (FET) looks like:

Essentially, it only allows current to flow from the source to the drain if there is a voltage on the gate. This forms a binary switch. Manufacturers can build these transistors incredibly small—all the way down to 5 nanometers, or about the width of two strands of DNA. This is how modern CPUs operate, and even they can suffer from problems differentiating between on and off states (though that’s mostly down to their near-molecular size, which makes them subject to the weirdness of quantum mechanics).

But Why Only Base 2?

So you may be thinking, “why only 0 and 1? Couldn’t you just add another digit?” While some of it comes down to tradition in how computers are built, to add another digit would mean we’d have to distinguish between different levels of current—not just “off” and “on,” but also states like “on a little bit” and “on a lot.”

The problem here is that if you want to use multiple levels of voltage, you need hardware that can easily perform calculations with them, and no such hardware has proved viable as a replacement for binary computing. Ternary computers do exist (the Soviet Setun, built in the late 1950s, is the best-known example), but that’s pretty much where development stopped. Ternary logic can pack more information into each digit than binary, but as of yet nobody has an effective replacement for the binary transistor, or at the very least, no work has been done on developing one at the same tiny scales as binary.
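
To make “adding another digit” concrete, here is a small Python sketch of how a number would be written in base 3. This is purely illustrative positional notation, not a hardware design:

    def to_base3(n):
        # Return the base-3 digits of a non-negative integer,
        # most significant digit first.
        if n == 0:
            return [0]
        digits = []
        while n:
            n, remainder = divmod(n, 3)
            digits.append(remainder)
        return digits[::-1]

    print(to_base3(15))  # [1, 2, 0], because 1*9 + 2*3 + 0*1 = 15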

The reason we can’t easily use ternary logic comes down to the way transistors are combined in a computer, into structures called “gates,” and how those gates are used to perform math. A gate takes two inputs, performs an operation on them, and returns one output.

This brings us to the long answer: binary math is far easier for a computer than anything else. Boolean logic maps directly onto binary systems, with True and False represented by on and off. Gates in your computer operate on boolean logic: they take two inputs and perform an operation on them, like AND, OR, XOR, and so on. Two inputs are easy to manage. If you write out the output for every possible combination of inputs, you have what’s known as a truth table:

A binary truth table for a two-input gate has four rows, one for each possible combination of inputs. A ternary gate still takes two inputs, but each input can hold three values, so its truth table has nine rows. That difference explodes the design space: a binary system has 16 possible two-input operators (2^(2^2)), while a ternary system has 19,683 (3^(3^2), i.e. 3^9). Scaling becomes an issue because, while ternary can be denser, it’s also exponentially more complex.
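
Both claims are easy to verify with a short Python sketch, printing the four-row AND truth table and counting the possible two-input operators in each base:

    from itertools import product

    # Truth table for a two-input binary AND gate: four rows.
    for a, b in product([0, 1], repeat=2):
        print(a, b, a & b)

    # An operator assigns one output value to every input row,
    # so the count is base ** (base ** 2) for two-input gates.
    print(2 ** (2 ** 2))  # 16 binary operators
    print(3 ** (3 ** 2))  # 19,683 ternary operators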

Who knows? In the future, we could begin to see ternary computers become a thing, as we push the limits of binary down to a molecular level. For now, though, the world will continue to run on binary.

Image credits: spainter_vfx/Shutterstock, Wikipedia, Wikipedia, Wikipedia, Wikipedia

Anthony Heddings
Anthony Heddings is the resident cloud engineer for LifeSavvy Media, a technical writer, programmer, and an expert at Amazon’s AWS platform. He’s written hundreds of articles for How-To Geek and CloudSavvy IT that have been read millions of times.

If The Big Bang Wasn’t The Beginning, What Was It? Posted September 30th 2020

Ethan Siegel, Senior Contributor, Starts With A Bang (Science). The Universe is out there, waiting for you to discover it.

Image: The history of our expanding Universe in one illustrated image. Credit: NICOLE RAGER FULLER / NATIONAL SCIENCE FOUNDATION.

For more than 50 years, we’ve had definitive scientific evidence that our Universe, as we know it, began with the hot Big Bang. The Universe is expanding, cooling, and full of clumps (like planets, stars, and galaxies) today because it was smaller, hotter, denser, and more uniform in the past. If you extrapolate all the way back to the earliest moments possible, you can imagine that everything we see today was once concentrated into a single point: a singularity, which marks the birth of space and time itself.

At least, we thought that was the story: the Universe was born a finite amount of time ago, and started off with the Big Bang. Today, however, we know a whole lot more than we did back then, and the picture isn’t quite so clear. The Big Bang can no longer be described as the very beginning of the Universe that we know, and the hot Big Bang almost certainly doesn’t equate to the birth of space and time. So, if the Big Bang wasn’t truly the beginning, what was it? Here’s what the science tells us.

Image: Looking back at the distant Universe with NASA’s Hubble Space Telescope. Nearby, the stars and galaxies we see look very much like our own. Credit: NASA, ESA, AND A. FEILD (STSCI).

Our Universe, as we observe it today, almost certainly emerged from a hot, dense, almost-perfectly uniform state early on. In particular, there are four pieces of evidence that all point to this scenario:

  1. the Hubble expansion of the Universe, which shows that the amount that light from a distant object is redshifted is proportional to the distance to that object,
  2. the existence of a leftover glow — the Cosmic Microwave Background (CMB) — in all directions, with the same temperature everywhere just a few degrees above absolute zero,
  3. light elements — hydrogen, deuterium, helium-3, helium-4, and lithium-7 — that exist in a particular ratio of abundances back before any stars were formed,
  4. and a cosmic web of structure that gets denser and clumpier, with more space between larger and larger clumps, as time goes on.

These four facts: the Hubble expansion of the Universe, the existence and properties of the CMB, the abundance of the light elements from Big Bang nucleosynthesis, and the formation and growth of large-scale structure in the Universe, represent the four cornerstones of the Big Bang.

Image: The cosmic microwave background and large-scale structure are two cosmological cornerstones. Credit: Chris Blake and Sam Moorfield.

Why are these the four cornerstones? In the 1920s, Edwin Hubble, using the largest, most powerful telescope in the world at the time, was able to measure how individual stars varied in brightness over time, even in galaxies beyond our own. That enabled us to know how far away the galaxies that housed those stars were. By combining that information with data about how significantly the atomic spectral lines from those galaxies were shifted, we could determine what the relationship was between distance and a spectral shift.

As it turned out, it was simple, straightforward, and linear: Hubble’s law. The farther away a galaxy was, the more significantly its light was redshifted, or shifted systematically towards longer wavelengths. In the context of General Relativity, that corresponds to a Universe whose very fabric is expanding with time. As time marches on, all points in the Universe that aren’t somehow bound together (either gravitationally or by some other force) will expand away from one another, causing any emitted light to be shifted towards longer wavelengths by the time the observer receives it.
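
As a rough numerical illustration of that linear relationship, here is a sketch in Python. The expansion rate used is an assumed round value of 70 km/s per megaparsec; published measurements cluster roughly between 67 and 74:

    # Hubble's law: recession velocity is proportional to distance, v = H0 * d.
    H0 = 70.0  # km/s per megaparsec (assumed round value)
    for d_mpc in (10, 100, 1000):
        v = H0 * d_mpc
        print(f"A galaxy {d_mpc} Mpc away recedes at roughly {v:,.0f} km/s")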

Image: How light redshifts and distances between unbound objects change over time in the expanding Universe, shown as a simplified animation in the original article. Credit: Rob Knop.

Although there are many possible explanations for the effect we observe as Hubble’s Law, the Big Bang is unique among those possibilities. The idea is breathtaking in both its simplicity and its power. It simply says this:

  • the Universe is expanding and stretching light to longer wavelengths (and lower energies and temperatures) today,
  • and that means, if we extrapolate backwards, the Universe was denser and hotter earlier on.
  • Because it’s been gravitating the whole time, the Universe gets clumpier and forms larger, more massive structures later on.
  • If we go back to early enough times, we’ll see that galaxies were smaller, more numerous, and made of intrinsically younger, bluer stars.
  • If we go back earlier still, we’ll find a time where no stars have had time to form.
  • Even earlier, the Universe was hot enough that light would have split even neutral atoms apart, creating an ionized plasma; the radiation was “released” only when the Universe finally became neutral. (This is the origin of the CMB.)
  • And at even earlier times still, things were hot enough that even atomic nuclei would be blasted apart; transitioning to a cooler phase allows the first stable nuclear reactions, yielding the light elements, to proceed.

Image: As the Universe cools, atomic nuclei form, followed by neutral atoms as it cools further. Credit: E. Siegel.

All of these claims, at some point during the 20th century, were validated and confirmed by observations. We’ve measured the clumpiness of the Universe, and found that it increases exactly as predicted as time goes on. We’ve measured how galaxies evolve with distance (and cosmic time), and found that the earlier, more distant ones are overall younger, bluer, more numerous, and smaller in size. We’ve discovered and measured the CMB, and not only does it spectacularly match the Big Bang’s predictions, but we’ve observed how its temperature changes (increases) at earlier times. And we’ve successfully measured the primordial abundances of the light elements, finding a spectacular agreement with the predictions of Big Bang nucleosynthesis.

We can extrapolate back even further if we like: beyond the limits of what our current technology has the capability to directly observe. We can imagine the Universe getting even denser, hotter, and more compact than it was when protons and neutrons were being blasted apart. If we stepped back even earlier, we’d see neutrinos and antineutrinos, which need about a light-year of solid lead to stop half of them, start to interact with electrons and other particles in the early Universe. Beginning in the mid-2010s, we were able to detect their imprint, first on the photons of the CMB and, a few years later, on the large-scale structure that would later grow in the Universe.

Image: The impact of neutrinos on the large-scale structure features in the Universe. Credit: D. Baumann et al. (2019), Nature Physics.

That’s the earliest signal, thus far, we’ve ever detected from the hot Big Bang. But there’s nothing stopping us from running the clock back farther: all the way to the extremes. At some point:

  • it gets hot and dense enough that particle-antiparticle pairs get created out of pure energy, simply from quantum conservation laws and Einstein’s E = mc²,
  • the Universe gets denser than individual protons and neutrons, causing it to behave as a quark-gluon plasma rather than as individual nucleons,
  • the Universe gets even hotter, causing the electroweak force to unify, the Higgs symmetry to be restored, and fundamental particles to lose their rest mass,

and then we go to energies that lie beyond the limits of known, tested physics, even from particle accelerators and cosmic rays. Some processes must occur under those conditions to reproduce the Universe we see. Something must have created dark matter. Something must have created more matter than antimatter in our Universe. And something must have happened, at some point, for the Universe to exist at all.

Image: An illustration of the Big Bang from an initially hot, dense state to our modern Universe. There is a large suite of scientific evidence that supports this picture of the expanding Universe. Credit: NASA / GSFC.

From the moment this extrapolation was first considered back in the 1920s — and then again in its more modern forms in the 1940s and 1960s — the thinking was that the Big Bang takes you all the way back to a singularity. In many ways, the big idea of the Big Bang was that if you have a Universe filled with matter and radiation, and it’s expanding today, then if you go far enough back in time, you’ll come to a state that’s so hot and so dense that the laws of physics themselves break down.

At some point, you achieve energies, densities, and temperatures that are so large that the quantum uncertainty inherent to nature leads to consequences that make no sense. Quantum fluctuations would routinely create black holes that encompass the entire Universe. Probabilities, if you try to compute them, give answers that are either negative or greater than 1: both physical impossibilities. We know that gravity and quantum physics don’t make sense at these extremes, and that’s what a singularity is: a place where the laws of physics are no longer useful. Under these extreme conditions, it’s possible that space and time themselves can emerge. This, originally, was the idea of the Big Bang: a birth to time and space themselves.

Image: A visual history of the expanding Universe, from the earliest stages and the hot, dense state known as the Big Bang to modern-day galaxies. Credit: NASA / CXC / M. WEISS.

But all of that was based on the notion that we actually could extrapolate the Big Bang scenario as far back as we wanted: to arbitrarily high energies, temperatures, densities, and early times. As it turned out, that created a number of physical puzzles that defied explanation. Puzzles such as:

  • Why did causally disconnected regions of space — regions with insufficient time to exchange information, even at the speed of light — have identical temperatures to one another?
  • Why was the initial expansion rate of the Universe so perfectly balanced against the total amount of energy in the Universe, to more than 50 decimal places, that it delivers a “flat” Universe today?
  • And why, if we achieved these ultra-high temperatures and densities early on, don’t we see any leftover relic remnants from those times in our Universe today?

If you still want to invoke the Big Bang, the only answer you can give is, “well, the Universe must have been born that way, and there is no reason why.” But in physics, that’s akin to throwing up your hands in surrender. Instead, there’s another approach: to concoct a mechanism that could explain those observed properties, while reproducing all the successes of the Big Bang, and still making new predictions about phenomena we could observe that differ from the conventional Big Bang.

Image: The horizon, flatness, and monopole problems, the three big puzzles that inflation solves. In the top panel, our modern Universe has the same properties (including temperature) everywhere. Credit: E. SIEGEL / BEYOND THE GALAXY.

About 40 years ago, that’s exactly the idea that was put forth: cosmic inflation. Instead of extrapolating the Big Bang all the way back to a singularity, inflation basically says that there’s a cutoff: you can go back to a certain high temperature and density, but no further. According to the big idea of cosmic inflation, this hot, dense, uniform state was preceded by a state where:

  • the Universe wasn’t filled with matter and radiation,
  • but instead possessed a large amount of energy intrinsic to the fabric of space itself,
  • which caused the Universe to expand exponentially (and at a constant, unchanging rate),
  • which drives the Universe to be flat, empty, and uniform (up to the scale of quantum fluctuations),
  • and then inflation ends, converting that intrinsic-to-space energy into matter and radiation,

and that’s where the hot Big Bang comes from. Not only did this solve the puzzles the Big Bang couldn’t explain, but it made multiple new predictions that have since been verified. There’s a lot we still don’t know about cosmic inflation, but the data that’s come in over the last three decades overwhelmingly supports the existence of this inflationary state, which preceded and set up the hot Big Bang.

Image: How inflation and quantum fluctuations give rise to the Universe we observe today. Credit: E. SIEGEL, WITH IMAGES DERIVED FROM ESA/PLANCK AND THE DOE/NASA/NSF INTERAGENCY TASK FORCE ON CMB RESEARCH.

All of this, taken together, is enough to tell us what the Big Bang is and what it isn’t. It is the notion that our Universe emerged from a hotter, denser, more uniform state in the distant past. It is not the idea that things got arbitrarily hot and dense until the laws of physics no longer applied.

It is the notion that, as the Universe expanded, cooled, and gravitated, we annihilated away our excess antimatter, formed protons and neutrons and light nuclei, atoms, and eventually, stars, galaxies, and the Universe we recognize today. It is no longer considered inevitable that space and time emerged from a singularity 13.8 billion years ago.

And it is a set of conditions that applies at very early times, but was preceded by a different set of conditions (inflation) that came before it. The Big Bang might not be the very beginning of the Universe itself, but it is the beginning of our Universe as we recognize it. It’s not “the” beginning, but it is “our” beginning. It may not be the entire story on its own, but it’s a vital part of the universal cosmic story that connects us all.

Ethan Siegel

I am a Ph.D. astrophysicist, author, and science communicator, who professes physics and astronomy at various colleges. I have won numerous awards for science writing…

Neuralink: 3 neuroscientists react to Elon Musk’s brain chip reveal September 17th 2020

With a pig-filled demonstration, Neuralink revealed its latest advancements in brain implants this week. But what do scientists think of Elon Musk’s company’s grand claims? Image: Shutterstock. By Mike Brown, 9.4.2020, 8:00 AM.

What does the future look like for humans and machines? Elon Musk would argue that it involves wiring brains directly up to computers – but neuroscientists tell Inverse that’s easier said than done.

On August 28, Musk and his team unveiled the latest updates from secretive firm Neuralink with a demo featuring pigs implanted with their brain chip device. These chips are called Links, and they measure 0.9 inches wide by 0.3 inches tall. They connect to the brain via wires, and provide a battery life of 12 hours per charge, after which the user would need to wirelessly charge again. During the demo, a screen showed the real-time spikes of neurons firing in the brain of one pig, Gertrude, as she snuffled around her pen during the event.

It was an event designed to show how far Neuralink has come in terms of making its science objectives reality. But how much of Musk’s ambitions for Links are still in the realm of science fiction?

Neuralink argues the chips will one day have medical applications, listing all manner of ailments that its chips could feasibly treat. Memory loss, depression, seizures, and brain damage were all suggested as conditions where a generalized brain device like the Link could help.

Ralph Adolphs, Bren Professor of Psychology, Neuroscience, and Biology at California Institute of Technology, tells Inverse Neuralink’s announcement was “tremendously exciting” and “a huge technical achievement.”

Neuralink is “a good example of technology outstripping our current ability to know how to use it,” Adolphs says. “The primary initial application will be for people who are ill and for clinical reasons it is justified to implant such a chip into their brain. It would be unethical to do so right now in a healthy person.”

“But who knows what the future holds?” he adds.

Adolphs says the chip is comparable to the natural processes that emerge through evolution. Currently, to interface between the brain and the world, humans use their hands and mouth. But to imagine just sitting and thinking about these actions is a lot harder, so a lot of the future work will need to focus on making this interface with the world feel more natural, Adolphs says.

Achieving that goal could be further out than the Neuralink demo suggested. John Krakauer, chief medical and scientific officer at MindMaze and professor of neurology at Johns Hopkins University, tells Inverse that his view is humanity is “still a long way away” from consumer-level linkups.

“Let me give a more specific concern: The device we saw was placed over a single sensorimotor area,” Krakauer says. “If we want to read thoughts rather than movements (assuming we knew their neural basis) where do we put it? How many will we need? How does one avoid having one’s scalp studded with them? No mention of any of this of course.”

While a brain linkup may get people “excited” because it “has echoes of Charles Xavier in the X-Men,” Krakauer argues that there are plenty of potential non-invasive solutions to help people with the conditions Neuralink says its technology will treat.

These existing solutions don’t require invasive surgery, but Krakauer fears “the cool factor clouds critical thinking.”

But Elon Musk, Neuralink’s CEO, wants the Link to take humans far beyond new medical treatments.

The ultimate objective, according to Musk, is for Neuralink to help create a symbiotic relationship between humans and computers. Musk argues that Neuralink-like devices could help humanity keep up with super-fast machines. But Krakauer finds such an ambition troubling.

“I would like to see less unsubstantiated hype about a brain ‘Alexa’ and interfacing with A.I.,” Krakauer says. “The argument is if you can’t avoid the singularity, join it. I’m sorry but this angle is just ridiculous.”

Image: Neuralink’s Link implant. Credit: Neuralink.

Even a general-purpose linkup could be much further away from development than it may seem. Musk told WaitButWhy in 2017 that a general-purpose linkup could be eight to 10 years away for people with no disability. That would place the timescale for roll-out somewhere around 2027 at the latest — seven years from now.

Kevin Tracey, a neurosurgery professor and president of the Feinstein Institutes for Medical Research, tells Inverse that he “can’t imagine” that any of the publicly suggested diseases could see a solution “sooner than 10 years.” Considering that Neuralink hopes to offer the device as a medical solution before it moves to more general-purpose implants, these notes of caution cast the company’s timeline into doubt.

But unlike Krakauer, Tracey argues that “we need more hype right now.” Not enough attention has been paid to this area of research, he says.

“In the United States for the last 20 years, the federal government’s investment supporting research hasn’t kept up with inflation,” Tracey says. “There’s been this idea that things are pretty good and we don’t have to spend so much money on research. That’s nonsense. COVID proved we need to raise enthusiasm and investment.”

Neuralink’s device is just one part of the brain linkup puzzle, Tracey explains. There are three fields at play: molecular medicine to make and find the targets, neuroscience to understand how the pathways control the target, and the devices themselves. Advances in each area can help the others. Neuralink may help map new pathways, for example, but it’s just one aspect of what needs to be done to make it work as planned.

Neuralink’s smaller chips may also help avoid issues with brain scarring seen with larger devices, Tracey says. And advancements in robots can also help with surgeries, an area Neuralink has detailed before.

But perhaps the biggest benefit from the announcement is making the field cool again.

“If and to the extent that a new, very cool device elevates the discussion on the neuroscience implications of new devices, and what do we need to get these things to the benefit of humanity through more science, that’s all good,” Tracey says.

How sleep helps us lose weight September 12th 2020

When it comes to weight loss, diet and exercise are usually thought of as the two key factors that will achieve results. However, sleep is an often-neglected lifestyle factor that also plays an important role.

The recommended sleep duration for adults is seven to nine hours a night, but many people often sleep for less than this. Research has shown that sleeping less than the recommended amount is linked to having greater body fat, increased risk of obesity, and can also influence how easily you lose weight on a calorie-controlled diet.

Typically, the goal for weight loss is to decrease body fat while retaining as much muscle mass as possible. Not getting the right amount of sleep can influence how much fat is lost, as well as how much muscle mass you retain, while on a calorie-restricted diet.

One study found that sleeping 5.5 hours each night over a two-week period while on a calorie-restricted diet resulted in less fat loss when compared to sleeping 8.5 hours each night. But it also resulted in a greater loss of fat-free mass (including muscle).

Another study has shown similar results over an eight-week period when sleep was reduced by only one hour each night for five nights of the week. These results showed that even catch-up sleep at the weekend may not be enough to reverse the negative effects of sleep deprivation while on a calorie-controlled diet.

Metabolism, appetite, and sleep

There are several reasons why shorter sleep may be associated with higher body weight and affect weight loss. These include changes in metabolism, appetite and food selection.

Sleep influences two important appetite hormones in our body – leptin and ghrelin. Leptin is a hormone that decreases appetite, so when leptin levels are high we usually feel fuller. On the other hand, ghrelin is a hormone that can stimulate appetite, and is often referred to as the “hunger hormone” because it’s thought to be responsible for the feeling of hunger.

One study found that sleep restriction increases levels of ghrelin and decreases leptin. Another study, which included a sample of 1,024 adults, also found that short sleep was associated with higher levels of ghrelin and lower levels of leptin. This combination could increase a person’s appetite, making calorie-restriction more difficult to adhere to, and may make a person more likely to overeat.

Consequently, increased food intake due to changes in appetite hormones may result in weight gain. This means that, in the long term, sleep deprivation may lead to weight gain due to these changes in appetite. So getting a good night’s sleep should be prioritised.

Along with changes in appetite hormones, reduced sleep has also been shown to impact on food selection and the way the brain perceives food. Researchers have found that the areas of the brain responsible for reward are more active in response to food after sleep loss (six nights of only four hours’ sleep) when compared to people who had good sleep (six nights of nine hours’ sleep).

This could possibly explain why sleep-deprived people snack more often and tend to choose carbohydrate-rich foods and sweet-tasting snacks, compared to those who get enough sleep.

Image: Sleep deprivation may make you eat more unhealthy food during the day. Credit: Flotsam/Shutterstock.

Sleep duration also influences metabolism, particularly glucose (sugar) metabolism. When food is eaten, our bodies release insulin, a hormone that helps to process the glucose in our blood. However, sleep loss can impair our bodies’ response to insulin, reducing its ability to uptake glucose. We may be able to recover from the occasional night of sleep loss, but in the long term this could lead to health conditions such as obesity and type 2 diabetes.

Our own research has shown that a single night of sleep restriction (only four hours’ sleep) is enough to impair the insulin response to glucose intake in healthy young men. Given that sleep-deprived people already tend to choose foods high in glucose due to increased appetite and reward-seeking behaviour, the impaired ability to process glucose can make things worse.

An excess of glucose (both from increased intake and a reduced ability to uptake into the tissues) could be converted to fatty acids and stored as fat. Collectively, this can accumulate over the long term, leading to weight gain.

However, physical activity may show promise as a countermeasure against the detrimental impact of poor sleep. Exercise has a positive impact on appetite, by reducing ghrelin levels and increasing levels of peptide YY, a hormone that is released from the gut, and is associated with the feeling of being satisfied and full.

After exercise, people tend to eat less, particularly when the energy expended by exercise is taken into account. However, it’s unknown if this still remains in the context of sleep restriction.

Research has also shown that exercise training may protect against the metabolic impairments that result from a lack of sleep, by improving the body’s response to insulin, leading to improved glucose control.

We have also shown the potential benefits of just a single session of exercise on glucose metabolism after sleep restriction. While this shows promise, studies are yet to determine the role of long-term physical activity in people with poor sleep.

It’s clear that sleep is important for losing weight. A lack of sleep can increase appetite by changing hormones, makes us more likely to eat unhealthy foods, and influences how body fat is lost while counting our calories. Sleep should therefore be considered essential, alongside diet and physical activity, as part of a healthy lifestyle.

Elon Musk Says Settlers Will Likely Die on Mars. He’s Right.

But is that such a bad thing?

Mars or Milton Keynes: what’s the difference?

By Caroline Delbert 

Sep 2, 2020

Earlier this week, Elon Musk said there’s a “good chance” settlers in the first Mars missions will die. And while that’s easy to imagine, he and others are working hard to plan and minimize the risk of death by hardship or accident. In fact, the goal is to have people comfortably die on Mars after a long life of work and play that, we hope, looks at least a little like life on Earth.

Let’s explore it together.

There are already major structural questions about how humans will settle on Mars. How will we aim Musk’s planned hundreds of Starships at Mars during the right times for the shortest, safest trips? How will a spaceship turn into something that safely lands on the planet’s surface? How will astronauts reasonably survive a yearlong trip in cramped, close quarters where maximum possible volume is allotted to supplies?

And all of that is before anyone even touches the surface.

Then there are logistical reasons to talk about potential Mars settlers in, well, actuarial terms. First, the trip itself will take a year based on current estimates, and applicants to settlement programs are told to expect this trip to be one way.

It follows, statistically, that there’s an almost certain “chance” these settlers will die on Mars, because their lives will continue there until they naturally end. Musk is referring to accidental death in tough conditions, but people are likely to stay on Mars for the rest of their lives.

When Mars One opened applications in 2013, people flocked to audition to die on Mars after a one-way trip and a lifetime of settlement. As chemist and applicant Taylor Rose Nations said in a 2014 podcast episode:

“If I can go to Mars and be a human guinea pig, I’m willing to sort of donate my body to science. I feel like it’s worth it for me personally, and it’s kind of a selfish thing, but just to turn around and look and see Earth. That’s a lifelong total dream.”

Musk said in a conference Monday that building reusable rocket technology and robust, “complex life support” are his major priorities, based on his long-term goals of settling humans on Mars. Musk has successfully transported astronauts to the International Space Station (ISS), where NASA and global space administrations already have long-term life support technology in place. But that’s not the same as, for example, NASA’s advanced life support projects:

“Advanced life support (ALS) technologies required for future human missions include improved physico-chemical technologies for atmosphere revitalization, water recovery, and waste processing/resource recovery; biological processors for food production; and systems modeling, analysis, and controls associated with integrated subsystems operations.”

In other words, while the ISS does many of these different functions like water recovery, people on the moon (for NASA) or Mars (for Musk’s SpaceX) will require long-term life support for the same group of people, not a group that rotates every few months with frequent short trips from Earth.

And if the Mars colony plans to endure and put down roots, that means having food, shelter, medical care, and mental and emotional stimulation for the entire population.

There must be redundancies and ways to repair everything. Researchers favor technologies like 3D printers and chemical processes such as ligand bonding as they plan these hypothetical missions, because it’s more prudent to send raw materials that can be turned into 100 different things or 50 different medicines. The right chemical processes can recycle discarded items into fertilizer molecules.

“Good chance you’ll die, it’s going to be tough going,” Musk said, “but it will be pretty glorious if it works out.”

David Bohm, Quantum Mechanics and Enlightenment

The visionary physicist, whose ideas remain influential, sought spiritual as well as scientific illumination. September 8th 2020

Scientific American

  • John Horgan

Theoretical physicist Dr. David J. Bohm at a 1971 symposium in London. Photo by Keystone.

Some scientists seek to clarify reality, others to mystify it. David Bohm seemed driven by both impulses. He is renowned for promoting a sensible (according to Einstein and other experts) interpretation of quantum mechanics. But Bohm also asserted that science can never fully explain the world, and his 1980 book Wholeness and the Implicate Order delved into spirituality. Bohm’s interpretation of quantum mechanics has attracted increasing attention lately. He is a hero of Adam Becker’s 2018 book What Is Real? The Unfinished Quest for the Meaning of Quantum Mechanics (reviewed by James Gleick, David Albert and Peter Woit). In The End of Science I tried to make sense of this paradoxical truth-seeker, who died in 1992 at the age of 74. Below is an edited version of that profile. See also my post on another quantum visionary, John Wheeler. –John Horgan

In August 1992 I visited David Bohm at his home in a London suburb. His skin was alarmingly pale, especially in contrast to his purplish lips and dark, wiry hair. His frame, sinking into a large armchair, seemed limp, languorous, and at the same time suffused with nervous energy. One hand cupped the top of his head, the other gripped an armrest. His fingers, long and blue-veined, with tapered, yellow nails, were splayed. He was recovering, he said, from a heart attack.

Bohm’s wife brought us tea and biscuits and vanished. Bohm spoke haltingly at first, but gradually the words came faster, in a low, urgent monotone. His mouth was apparently dry, because he kept smacking his lips. Occasionally, after making an observation that amused him, he pulled his lips back from his teeth in a semblance of a smile. He also had the disconcerting habit of pausing every few sentences and saying, “Is that clear?” or simply, “Hmmm?” I was often so hopelessly befuddled that I just smiled and nodded. But Bohm could be bracingly clear, too. Like an exotic subatomic particle, he oscillated in and out of focus.

Born and raised in the U.S., Bohm left in 1951, the height of anti-communist hysteria, after refusing to answer questions from a Congressional committee about whether he or anyone he knew was a communist. After stays in Brazil and Israel, he settled in England. Bohm was a scientific dissident too. He rebelled against the dominant interpretation of quantum mechanics, the so-called Copenhagen interpretation promulgated by Danish physicist Niels Bohr.

Bohm began questioning the Copenhagen interpretation in the late 1940s while writing a book on quantum mechanics. According to the Copenhagen interpretation, a quantum entity such as an electron has no definite existence apart from our observation of it. We cannot say with certainty whether it is either a wave or a particle. The interpretation also rejects the possibility that the seemingly probabilistic behavior of quantum systems stems from underlying, deterministic mechanisms.

Bohm found this view unacceptable. “The whole idea of science so far has been to say that underlying the phenomenon is some reality which explains things,” he explained. “It was not that Bohr denied reality, but he said quantum mechanics implied there was nothing more that could be said about it.” Such a view reduced quantum mechanics to “a system of formulas that we use to make predictions or to control things technologically. I said that’s not enough. I don’t think I would be very interested in science if that were all there was.”

In 1952 Bohm proposed that particles are indeed particles–and at all times, not just when they are observed in a certain way. Their behavior is determined by a force that Bohm called the “pilot wave.” Any effort to observe a particle alters its behavior by disturbing the pilot wave. Bohm thus gave the uncertainty principle a purely physical rather than metaphysical meaning. Niels Bohr had interpreted the uncertainty principle as meaning “not that there is uncertainty, but that there is an inherent ambiguity” in a quantum system, Bohm explained.

Bohm’s interpretation gets rid of one quantum paradox, wave/particle duality, but it preserves and even highlights another, nonlocality, the capacity of one particle to influence another instantaneously across vast distances. Einstein had drawn attention to nonlocality in 1935 in an effort to show that quantum mechanics must be flawed. Together with Boris Podolsky and Nathan Rosen, Einstein proposed a thought experiment involving two particles that spring from a common source and fly in opposite directions.

According to the standard model of quantum mechanics, neither particle has fixed properties, such as momentum, before it is measured. But by measuring one particle’s momentum, the physicist instantaneously forces the other particle, no matter how distant, to assume a fixed momentum. Deriding this effect as “spooky action at a distance,” Einstein argued that quantum mechanics must be flawed or incomplete. But in the early 1980s, French physicists demonstrated spooky action in a laboratory. Bohm never had any doubts about the experiment’s outcome. “It would have been a terrific surprise to find out otherwise,” he said.

But here is the paradox of Bohm: Although he tried to make the world more sensible with his pilot-wave model, he also argued that complete clarity is impossible. He reached this conclusion after seeing an experiment on television, in which a drop of ink was squeezed onto a cylinder of glycerine. When the cylinder was rotated, the ink diffused through the glycerine in an apparently irreversible fashion. Its order seemed to have disintegrated. But when the direction of rotation was reversed, the ink gathered into a drop again.

The experiment inspired Bohm to write Wholeness and the Implicate Order, published in 1980. He proposed that underlying physical appearances, the “explicate order,” there is a deeper, hidden “implicate order.” Applying this concept to the quantum realm, Bohm proposed that the implicate order is a field consisting of an infinite number of fluctuating pilot waves. The overlapping of these waves generates what appears to us as particles, which constitute the explicate order. Even space and time might be manifestations of a deeper, implicate order, according to Bohm.

To plumb the implicate order, Bohm said, physicists might need to jettison basic assumptions about nature. During the Enlightenment, thinkers such as Newton and Descartes replaced the ancients’ organic concept of order with a mechanistic view. Even after the advent of relativity and quantum mechanics, “the basic idea is still the same,” Bohm told me, “a mechanical order described by coordinates.”

Bohm hoped scientists would eventually move beyond mechanistic and even mathematical paradigms. “We have an assumption now that’s getting stronger and stronger that mathematics is the only way to deal with reality,” Bohm said. “Because it’s worked so well for a while, we’ve assumed that it has to be that way.”

Someday, science and art will merge, Bohm predicted. “This division of art and science is temporary,” he observed. “It didn’t exist in the past, and there’s no reason why it should go on in the future.” Just as art consists not simply of works of art but of an “attitude, the artistic spirit,” so does science consist not in the accumulation of knowledge but in the creation of fresh modes of perception. “The ability to perceive or think differently is more important than the knowledge gained,” Bohm explained.

Bohm rejected the claim of physicists such as Hawking and Weinberg that physics can achieve a final “theory of everything” that explains the world. Science is an infinite, “inexhaustible process,” he said. “The form of knowledge is to have at any moment something essential, and the appearance can be explained. But then when we look deeper at these essential things they turn out to have some feature of appearances. We’re not ever going to get a final essence which isn’t also the appearance of something.”

Bohm feared that belief in a final theory might become self-fulfilling. “If you have fish in a tank and you put a glass barrier in there, the fish keep away from it,” he noted. “And then if you take away the glass barrier they never cross the barrier and they think the whole world is that.” He chuckled drily. “So your thought that this is the end could be the barrier to looking further.” Trying to convince me that final knowledge is unattainable, Bohm offered the following argument:

“Anything known has to be determined by its limits. And that’s not just quantitative but qualitative. The theory is this and not that. Now it’s consistent to propose that there is the unlimited. You have to notice that if you say there is the unlimited, it cannot be different, because then the unlimited will limit the limited, by saying that the limited is not the unlimited, right? The unlimited must include the limited. We have to say, from the unlimited the limited arises, in a creative process. That’s consistent. Therefore we say that no matter how far we go there is the unlimited. It seems that no matter how far you go, somebody will come up with another point you have to answer. And I don’t see how you could ever settle that.”

To my relief, Bohm’s wife entered the room and asked if we wanted more tea. As she refilled my cup, I pointed out a book on Buddhism on a shelf and asked Bohm if he was interested in spirituality. He nodded. He had been a friend of Krishnamurti, one of the first modern Indian sages to try to show Westerners how to achieve the state of spiritual serenity and grace called enlightenment. Was Krishnamurti enlightened? “In some ways, yes,” Bohm replied. “His basic thing was to go into thought, to get to the end of it, completely, and thought would become a different kind of consciousness.”

Of course, one could never truly plumb one’s own mind, Bohm said. Any attempt to examine one’s own thought changes it–just as the measurement of an electron alters its course. We cannot achieve final self-knowledge, Bohm seemed to imply, any more than we can achieve a final theory of physics.

Was Krishnamurti a happy person? Bohm seemed puzzled by my question. “That’s hard to say,” he replied. “He was unhappy at times, but I think he was pretty happy overall. The thing is not about happiness, really.” Bohm frowned, as if realizing the import of what he had just said.

I said goodbye to Bohm and his wife and departed. Outside, a light rain was falling. I walked up the path to the street and glanced back at Bohm’s house, a modest whitewashed cottage on a street of modest whitewashed cottages. He died of a heart attack two months later.

In Wholeness and the Implicate Order Bohm insisted on the importance of “playfulness” in science, and in life, but Bohm, in his writings and in person, was anything but playful. For him, truth-seeking was not a game, it was a dreadful, impossible, necessary task. Bohm was desperate to know, to discover the secret of everything, but he knew it wasn’t attainable, not for any mortal being. No one gets out of the fish tank alive.

John Horgan directs the Center for Science Writings at the Stevens Institute of Technology. His books include “The End of Science,” “The End of War” and “Mind-Body Problems,” available for free at mindbodyproblems.com.

The views expressed are those of the author(s) and are not necessarily those of Scientific American.

This post originally appeared on Scientific American and was published July 23, 2018. This article is republished here with permission.

A Frozen Graveyard: The Sad Tales of Antarctica’s Deaths

Beneath layers of snow and ice on the world’s coldest continent, there may be hundreds of people buried forever. Martha Henriques investigates their stories.

BBC Future

  • Martha Henriques

Crevasses can be deadly; this vehicle in the 1950s had a lucky escape. Credit: Getty Images.

In the bleak, almost pristine land at the edge of the world, there are the frozen remains of human bodies – and each one tells a story of humanity’s relationship with this inhospitable continent.

Even with all our technology and knowledge of the dangers of Antarctica, it can remain deadly for anyone who goes there. Inland, temperatures can plummet to nearly -90C (-130F). In some places, winds can reach 200mph (322km/h). And the weather is not the only risk.

Many bodies of scientists and explorers who perished in this harsh place are beyond reach of retrieval. Some are discovered decades or more than a century later. But many that were lost will never be found, buried so deep in ice sheets or crevasses that they will never emerge – or they are headed out towards the sea within creeping glaciers and calving ice.

The stories behind these deaths range from unsolved mysteries to freak accidents. In the second of the series Frozen Continent, BBC Future explored what these events reveal about life on the planet’s most inhospitable landmass.

1800s: Mystery of the Chilean Bones

At Livingston Island, among the South Shetlands off the Antarctic Peninsula, a human skull and femur have been lying near the shore for 175 years. They are the oldest human remains ever found in Antarctica.

The bones were discovered on the beach in the 1980s. Chilean researchers found that they belonged to a woman who died when she was about 21 years old. She was an indigenous person from southern Chile, 1,000km (620 miles) away.

Analysis of the bones suggested that she died between 1819 and 1825. The earlier end of that range would put her among the very first people to have been in Antarctica.

A Russian orthodox church sits on a small rise above Chile’s research base. Credit: Yadvinder Malhi.

The question is, how did she get there? The traditional canoes of the indigenous Chileans couldn’t have supported her on such a long voyage through what can be incredibly rough seas.

“There’s no evidence for an independent Amerindian presence in the South Shetlands,” says Michael Pearson, an Antarctic heritage consultant and independent researcher. “It’s not a journey you’d make in a bark canoe.”

The original interpretation by the Chilean researchers was that she was an indigenous guide to the sealers travelling from the northern hemisphere to the Antarctic islands that had been newly discovered by William Smith in 1819. But women taking part in expeditions to the far south in those early days was virtually unheard of.

Sealers did have a close relationship with the indigenous people of southern Chile, says Melisa Salerno, an archaeologist of the Argentinean Scientific and Technical Research Council (Conicet). Sometimes they would exchange seal skins with each other. It’s not out of the question that they traded expertise and knowledge, too. But the two cultures’ interactions weren’t always friendly.

“Sometimes it was a violent situation,” says Salerno. “The sealers could just take a woman from one beach and later leave her far away on another.”

Any scientist or explorer visiting Antarctica knows that they could be at risk. Credit: Getty Images.

A lack of surviving logs and journals from the early ships sailing south to Antarctica makes it even more difficult to trace this woman’s history.

Her story is unique among the early human presence in Antarctica. A woman who, by all the usual accounts, shouldn’t have been there – but somehow she was. Her bones mark the start of human activity on Antarctica, and the unavoidable loss of life that comes with trying to occupy this inhospitable continent.

29 March 1912: Scott’s South Pole Expedition Crew

Robert Falcon Scott’s team of British explorers reached the South Pole on 17 January 1912, just three weeks after the Norwegian team led by Roald Amundsen had departed from the same spot.

The British group’s morale was crushed when they discovered that they had not arrived first. Soon after, things would get much worse.

Attaining the pole was a feat to test human endurance, and Scott had been under huge pressure. As well as dealing with the immediate challenges of the harsh climate and lack of natural resources like wood for building, he had a crew of more than 60 men to lead. More pressure came from the high hopes of his colleagues back home.

Robert Falcon Scott writing his journal. Credit: Herbert Ponting/Wikipedia.

“They mean to do or die – that is the spirit in which they are going to the Antarctic,” Leonard Darwin, a president of the Royal Geographical Society and son of Charles Darwin, said in a speech at the time.

“Captain Scott is going to prove once again that the manhood of the nation is not dead … the self-respect of the whole nation is certainly increased by such adventures as this,” he said.

Scott was not impervious to the expectations. “He was a very rounded, human character,” says Max Jones, a historian of heroism and polar exploration at the University of Manchester. “In his journals, you find he’s racked with doubts and anxieties about whether he’s up to the task and that makes him more appealing. He had failings and weaknesses too.”

Despite his worries and doubts, the mindset of “do or die” drove the team to take risks that might seem alien to us now.

On the team’s return from the pole, Edgar Evans died first, in February. Then Lawrence Oates. He had considered himself a burden, thinking the team could not return home with him holding them back. “I am just going outside and may be some time,” he said on 17 March.

Members of the ill-fated British expedition to the pole. Credit: Getty Images.

Perhaps he had not realised how close the rest of the group were to death. The bodies of Oates and Evans were never found, but Scott, Edward Wilson and Henry Bowers were discovered by a search party several months after their deaths. They had died on 29 March 1912, according to the date in Scott’s diary entry. The search party covered them with snow and left them where they lay.

“I do not think human beings ever came through such a month as we have come through,” Scott wrote in his diary’s final pages. The team knew they were within 18km (11 miles) of the last food depot, with the supplies that could have saved them. But they were confined to a tent for days, growing weaker, trapped by a fierce blizzard.

“They were prepared to risk their lives and they saw that as legitimate. You can view that as part of a mindset of imperial masculinity, tied up with enduring hardship and hostile environments,” says Jones. “I’m not saying that they had a death wish, but I think that they were willing to die.”

14 October 1965: Jeremy Bailey, David Wild and John Wilson

Four men were riding a Muskeg tractor and its sledges near the Heimefront Mountains, to the east of their base at Halley Research Station in East Antarctica, close to the Weddell Sea. The Muskeg was a heavy-duty vehicle designed to haul people and supplies over long distances on the ice. A team of dogs ran behind.

Three of the men were in the cab. The fourth, John Ross, sat behind on the sledge at the back, close to the huskies. Jeremy (Jerry) Bailey, a scientist measuring the depth of the ice beneath the tractor, was driving. He and David (Dai) Wild, a surveyor, and John Wilson, a doctor, were scanning the ice ahead. Snow obscured much of the small, flat windscreen. The group had been travelling all day, taking turns to warm up in the cab or sit out back on the sledge.

Ross was staring out at the vast ice, snow and Stella Group mountains. At about 8:30, the dogs alongside the sledge stopped running. The sledge had ground to a halt.

Ross, muffled with a balaclava and two anoraks, had heard nothing. He turned to see that the Muskeg was gone. Ahead, the first sledge was leaning down into the ice. Ross ran up to it to find it had wedged in the top of a large crevasse running directly across their course. The Muskeg itself had fallen about 30m (100ft) into the crevasse. Down below, its tracks were wedged vertically against one ice wall, and the cab had been flattened hard against the other.

Ross shouted down. There was no reply from the three men in the cab. After about 20 minutes of shouting, Ross heard a reply. The exchange, as he recorded it from memory soon after the event, was brief:

Ross: Dai?

Bailey: Dai’s dead. It’s me.

Ross: Is that John or Jerry?

Bailey: Jerry.

Ross: How is John?

Bailey: He’s a goner, mate.

Ross: What about yourself?

Bailey: I’m all smashed up.

Ross: Can you move about at all or tie a rope round yourself?

Bailey: I’m all smashed up.

Ross tried to climb down into the crevasse, but the descent was difficult. Bailey told him not to risk it, but Ross tried anyway. After several failed attempts, Ross heard a scream from the crevasse. After that, Bailey no longer responded to his calls.

Crevasses – deep clefts in the ice stretching down hundreds of feet – are serious threats while travelling across the Antarctic. On 14 October 1965, there had been strong winds kicking up drifts and spreading snow far over the landscape, according to reports on the accident held at the British Antarctic Survey archives. This concealed the top of the chasms, and crucially, the thin blue line in the ice ahead of each drop that would have warned the men to stop.

“You can imagine – there’s a bit of drift about, and there’s bits of ice on the windscreen, your fingers are bloody cold, and you think it’s about time to stop anyway,” says Rod Rhys Jones, one of the expedition party who had not gone on that trip with the Muskeg. He points to the crevassed area the Muskeg had been driving over, on a map of the continent spread over his coffee table, littered with books on the Antarctic.

Many bodies are never recovered; others are buried on the continent. Credit: Getty Images.

“You’re driving along over the ice and thumping and bumping and banging. You don’t see the little blue line.”

Jones questions whether the team had been given adequate training for the hazards of travel in Antarctica. They were young men, mostly fresh out of university. Many of them had little experience in harsh physical conditions. Much of their time preparing for life in Antarctica was spent learning to use the scientific equipment they would need, not learning how to avoid accidents on the ice.

Each accident in Antarctica has slowly led to changes in the way people travel and are trained. Reports filed after the incident recommended several ways to make travel through crevassed regions safer, from adapting the vehicles to new ways of hitching them together.

August 1982: Ambrose Morgan, Kevin Ockleton and John Coll

The three men set out over the ice for an expedition to a nearby island in the depths of the Antarctic winter.

The sea ice was firm, and they made it easily to Petermann Island. The southern aurora was visible in the sky, unusually bright and strong enough to wipe out communications. The team reached the island safely and camped out at a hut near the shore.

Soon after they reached the shore, a large storm blew in and, by the next day, had entirely destroyed the sea ice. The group was stranded, but concern among the party was low: there was enough food in the hut to last three people more than a month.

In the next few days, the sea ice failed to reform as storms swept through the channel and broke up any new ice.

Death is never far away in Antarctica. Credit: Richard Fisher.

There were no books or papers in the hut, and contact with the outside world was limited to scheduled radio transmissions to the base. Soon, it had been two weeks. The transmissions were kept brief, as the batteries in their radios were getting weaker and weaker. The team grew restless. Gentoo and Adelie penguins surrounded the hut. They might have looked endearing, but their smell soon began to bother the men.

Things got worse. The team got diarrhoea, as it turned out some of the food in the hut was much older than they had thought. The stench of the penguins didn’t make them feel any better. They killed and ate a few to boost their supplies.

The men waited with increasing frustration, complaining of boredom on their radio transmissions to base. On Friday 13 August 1982, they were seen through a telescope, waving back to the main base. Radio batteries were running low. The sea ice had reformed again, providing a tantalising hope for escape.

Two days later, on Sunday 15 August, the group didn’t check in on the radio at the scheduled time. Then another large storm blew in.

The men at the base climbed up to a high point where they could see the island. All the sea ice was gone again, taken out by the storm.

“These guys had done something which we all did – go out on a little trip to the island,” says Pete Salino, who had been on the main base at the time. The three men were never seen again.

There were very strong currents around the island. Reliable, thick ice formed relatively rarely, Salino recalls. The way they tested whether the ice would hold them was primitive – they would whack it with a wooden stick tipped with metal to see if it would smash.

Even after an extensive search, the bodies were never found. Salino suspects the men went out onto the ice when it reformed and either got stuck or weren’t able to turn back when the storm blew in.

“It does sound mad now, sitting in a cosy room in Surrey,” Salino says. “When we used to go out, there was always a risk of falling through, but you’d always go prepared. We’d always have spare clothing in a sealed bag. We all accepted the risk and felt that it could have been any of us.”

Legacy of Death

For those who experience the loss of colleagues and friends in Antarctica, grieving can be uniquely difficult. When a friend disappears or a body cannot be recovered, the typical human rituals of death – a burial, a last goodbye – elude those left behind.

Clifford Shelley, a British geophysicist based at Argentine Islands off the Antarctic Peninsula in the late 1970s, lost friends who were climbing the nearby peak Mount Peary in 1976. It was thought that those men – Geoffrey Hargreaves, Michael Walker and Graham Whitfield – were trapped in an avalanche. Signs of their camp were found by an air search, but their bodies were never recovered.

The graves of past explorers. Credit: Getty Images.

“You just wait and wait, but there’s nothing. Then you just sort of lose hope,” Shelley says.

Even when the body is recovered, the demanding nature of life and work in Antarctica can make it a hard place to grieve. Ron Pinder, a radio operator in the South Orkneys in the late 1950s and early 1960s, still mourns his friend Roger Filer, who slipped from a cliff on Signy Island while tagging birds in 1961. Filer’s body was found at the foot of a 20ft (6m) cliff, below the nests where he was thought to have been working, and was buried on the island.

“It is 57 years ago now. It is in the distant past. But it affects me more now than it did then. Life was such that you had to get on with it,” Pinder says.

The same rings true for Shelley. “I don’t think we did really process it,” he says. “It remains at the back of your mind. But it’s certainly a mixed feeling, because Antarctica is superbly beautiful, both during the winter and the summer. It’s the best place to be and we were doing the things we wanted to do.”

The monument to those who lost their lives at the Scott Polar Research Institute. Credit: swancharlotte/Wikipedia/CC BY-SA 4.0.

These deaths have led to changes in how people work in Antarctica. As a result, the people there today can live more safely on this hazardous, isolated continent. Although terrible incidents still happen, much has been learned from earlier fatalities.

For the friends and families of the dead, there is an ongoing effort to make sure their lost loved ones are not forgotten. Outside the Scott Polar Research Institute in Cambridge, UK, two high curved oak pillars lean towards one another, gently touching at the top. It is half of a monument to the dead, erected by the British Antarctic Monument Trust, set up by Rod Rhys Jones and Brian Dorsett-Bailey, Jeremy’s brother, to recognise and honour those who died in Antarctica. The other half of the monument is a long sliver of metal leaning slightly towards the sea at Port Stanley in the Falkland Islands, where many of the researchers set off for the last leg of their journey to Antarctica.

Viewed from one end so they align, the oak pillars curve away from each other, leaving a long tapering empty space between them. The shape of that void is perfectly filled by the tall steel shard mounted on a plinth on the other side of the world. It is a physical symbol that spans the hemispheres, connecting home with the vast and wild continent that drew these scientists away for the last time.

More from BBC Future

Water, Water, Every Where — And Now Scientists Know Where It Came From September 3rd 2020

Nell Greenfieldboyce

Water on Earth is omnipresent and essential for life as we know it, and yet scientists remain a bit baffled about where all of this water came from: Was it present when the planet formed, or did the planet form dry and only later get its water from impacts with water-rich objects such as comets?

A new study in the journal Science suggests that the Earth likely got a lot of its precious water from the original materials that built the planet, instead of having water arrive later from afar.

The researchers who did this study went looking for signs of water in a rare kind of meteorite. Only about 2% of the meteorites found on Earth are so-called enstatite chondrite meteorites. Their chemical makeup suggests they’re close to the kind of primordial stuff that glommed together and produced our planet 4.5 billion years ago.

You wouldn’t necessarily know how special these meteorites are at first glance. “It’s a bit like a gray rock,” says Laurette Piani, a researcher in France at the Centre de Recherches Pétrographiques et Géochimiques.

What she wanted to know about these rocks is how much hydrogen was in there — because that’s what could produce water.


Compared with planets such as Jupiter and Saturn, the Earth formed close to the sun. Scientists have long thought that the temperatures must have been hot enough to prevent any water from being in the form of ice. That means there would be no ice to join with the swirling bits of rock and dust that were smashing into each other and slowly building up the young Earth.

If this is all true, our home planet must have been watered later on, perhaps when it got hit by icy comets or meteorites with water-rich minerals coming from farther out in the solar system.


Even though that’s been the prevailing view, some planetary scientists don’t buy it. After all, the story of Earth’s water would be a lot simpler and more straightforward if the water were just present to begin with.

So Piani and her colleagues recently took a close look at 13 of those unusual meteorites, which are also thought to have formed close in to the sun.

“Before the study, there were almost no measurement of the hydrogen or water in this meteorite,” Piani says. Those measurements that did exist were inconsistent, she says, and were done on meteorites that could have undergone changes after falling to the Earth’s surface.

“We do not want to have meteorites that were altered and modified by the Earth processes,” Piani explains, saying that they deliberately selected the most pristine meteorites possible.

The researchers then analyzed the meteorites’ chemical makeup to see how much hydrogen was in there. Since hydrogen can react with oxygen to produce water, knowing how much hydrogen is in the rocks indicates how much water this material could have contributed to a growing Earth.
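
To make that hydrogen-to-water step concrete: in the reaction 2 H2 + O2 -> 2 H2O, each gram of hydrogen ends up in roughly nine grams of water. The Python sketch below shows the conversion; the hydrogen mass fraction is a purely illustrative placeholder, not the study’s measured value.

# Stoichiometry behind the estimate: 2 H2 + O2 -> 2 H2O,
# so one gram of hydrogen can end up in ~9 grams of water.
M_H2 = 2.016    # g/mol, molecular hydrogen
M_H2O = 18.015  # g/mol, water
WATER_PER_GRAM_H = M_H2O / M_H2  # ~8.94

EARTH_MASS_KG = 5.97e24  # mass of the Earth
OCEAN_MASS_KG = 1.4e21   # approximate mass of Earth's oceans

h_mass_fraction = 1e-4   # illustrative placeholder, NOT the paper's measurement

water_kg = EARTH_MASS_KG * h_mass_fraction * WATER_PER_GRAM_H
print(f"{water_kg:.2e} kg of water, about {water_kg / OCEAN_MASS_KG:.1f} oceans")

Even a hydrogen content of 0.01% by mass in the building material would, on these assumptions, supply several oceans’ worth of water, which is the scale of the result described below.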

What they found was much less hydrogen than in more ordinary meteorites.

Still, what was there would be enough to explain plenty of Earth’s water — at least several times the amount of water in the Earth’s present-day oceans. “It’s a very big quantity of water in the initial material,” Piani says. “And this was never really considered before.”

What’s more, the team also measured the deuterium-to-hydrogen ratio in the meteorites and found that it’s similar to what’s known to exist in the interior of the Earth — which also contains a lot of water. This is additional evidence that there’s a link between our planet’s water and the basic building materials that were present when it formed.

The findings pleased Anne Peslier, a planetary scientist at NASA’s Johnson Space Center in Houston, who wasn’t part of the research team but has a special interest in water.

“I was happy because it makes it nice and simple,” Peslier says. “We don’t have to invoke complicated models where we have to bring material, water-rich material from the outer part of the solar system.”

She says the delivery of so much water from way out there would have required something unusual to disturb the orbits of this water-rich material, such as Jupiter having a little trip inside the inner solar system.

“So here, we just don’t need Jupiter. We don’t need to do anything weird. We just grab the material that was there where the Earth formed, and that’s where the water comes from,” Peslier says.

Even if a lot of the water was there at the start, however, she thinks some must have arrived later on. “I think it’s both,” she says.

Despite these convincing results, she says, there are still plenty of watery mysteries to plumb. For example, researchers are still trying to determine exactly how much water is locked deep inside the Earth, but it’s surely substantial — several oceans’ worth.

“There is more water down beneath our feet,” Peslier says, “than there is that you see at the surface.”

A 16-Million-Year-Old Tree Tells a Deep Story of the Passage of Time

The sequoia tree slab is an invitation to begin thinking about a vast timescale that includes everything from fossils of armored amoebas to the great Tyrannosaurus rex.

Smithsonian Magazine

  • Riley Black

How many answers are hidden inside the giants? Photo by Kelly Cheng Travel Photography / Getty Images.

Paleobotanist Scott Wing hopes that he’s wrong. Even though he carefully counted each ring in an immense, ancient slab of sequoia, the scientist notes that there’s always a little bit of uncertainty in the count. Wing came up with about 260, but he suspects a young visitor may one day write to him saying: “You’re off by three.” And that would be a good thing, he says, because it would be another moment in our ongoing conversation about time.

The shining slab, preserved and polished, is the keystone to consideration of time and our place in it in the “Hall of Fossils—Deep Time” exhibition at the Smithsonian’s National Museum of Natural History. The fossil greets visitors at one of the show’s entrances and just like the physical tree, what the sequoia represents has layers.

Each yearly delineation on the sequoia’s surface is a small part of a far grander story that ties together all of life on Earth. Scientists know this as Deep Time. It’s not just on the scale of centuries, millennia, epochs, or periods, but the ongoing flow that goes back to the origins of our universe, the formation of the Earth, and the evolution of all life, up through this present moment. It’s the backdrop for everything we see around us today, and it can be understood through techniques as different as absolute dating of radioactive minerals and counting the rings of a prehistoric tree. Each part informs the whole.

In decades past, the Smithsonian’s fossil halls were known for the ancient celebrities they contained. There was the dinosaur hall, and the fossil mammal hall, surrounded by the remains of other extinct organisms. But now all of those lost species have been brought together into an integrated story of dynamic and dramatic change. The sequoia is an invitation to begin thinking about how we fit into the vast timescale that includes everything from fossils of armored amoebas called forams to the great Tyrannosaurus rex.

Exactly how the sequoia fossil came to be at the Smithsonian is not entirely clear. The piece was gifted to the museum long ago, “before my time,” Wing says. Still, enough of the tree’s backstory is known to identify it as a massive tree that grew in what’s now central Oregon about 16 million years ago. This tree was once a long-lived part of a true forest primeval.

There are fossils both far older and more recent in the recesses of the Deep Time displays. But what makes the sequoia a fitting introduction to the story that unfolds behind it, Wing says, is that the rings offer different ways to think about time. Given that the sequoia grew seasonally, each ring marks the passage of another year, and visitors can look at the approximately 260 delineations and think about what such a time span represents.

People can play the classic game of comparing the tree’s life to a human lifespan, Wing says. If a long human life is about 80 years, then visitors can count 80, 160, and 240 years, meaning the sequoia grew and thrived over the course of approximately three human lifespans—but during a time when our own ancestors resembled gibbon-like apes. Time is not something that life simply passes through. In everything—from the rings of an ancient tree to the very bones in your body—time is part of life.

The record of that life—and even afterlife—lies between the lines. “You can really see that this tree was growing like crazy in its initial one hundred years or so,” Wing says, with the growth slowing as the tree became larger. And despite the slab’s ancient age, some of the original organic material is still locked inside.

“This tree was alive, photosynthesizing, pulling carbon dioxide out of the atmosphere, turning it into sugars and into lignin and cellulose to make cell walls,” Wing says. After the tree perished, water carrying silica and other minerals coated the log to preserve the wood and protect some of those organic components inside. “The carbon atoms that came out of the atmosphere 16 million years ago are locked in this chunk of glass.”

And so visitors are drawn even further back, not only through the life of the tree itself but through a time span so great that it’s difficult to comprehend. A little back-of-the-envelope math indicates that the tree represents about three human lifetimes, but that the time between when the sequoia was alive and the present could contain about 200,000 human lifetimes. The numbers grow so large that they begin to become abstract. The sequoia is a way to touch that history and start to feel the pull of all those ages past, and what they mean to us. “Time is so vast,” Wing says, “that this giant slab of a tree is just scratching the surface.”
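
The arithmetic is easy to check:

# Back-of-the-envelope numbers from the exhibit
rings = 260                  # approximate ring count on the slab
lifespan_years = 80          # a long human life
years_since_alive = 16_000_000

print(rings / lifespan_years)              # ~3.25: about three human lifespans
print(years_since_alive / lifespan_years)  # 200,000 lifespans separate us from the tree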

Riley Black is a freelance science writer specializing in evolution, paleontology and natural history who blogs regularly for Scientific American.

More from Smithsonian Magazine

This post originally appeared on Smithsonian Magazine and was published June 10, 2019. This article is republished here with permission.

“Holy Grail” Metallic Hydrogen Is Going to Change Everything

The substance has the potential to revolutionize everything from space travel to the energy grid. August 26th 2020

Inverse

  • Kastalia Medrano

Photo from Stocktrek Images / Getty Images.

Two Harvard scientists have succeeded in creating an entirely new substance long believed to be the “holy grail” of physics — metallic hydrogen, a material of unparalleled power that could one day propel humans into deep space. The research was published in January 2017 in the journal Science.

Scientists created the metallic hydrogen by pressurizing a hydrogen sample to more pounds per square inch than exists at the center of the Earth. This broke the solid molecular hydrogen down and allowed it to dissociate into atomic hydrogen.

The best rocket fuel we currently have is liquid hydrogen burned with liquid oxygen as propellant. The efficacy of such propellants is characterized by “specific impulse,” a measure of the impulse a fuel can give a rocket to propel it forward.

“People at NASA or the Air Force have told me that if they could get an increase from 450 seconds [of specific impulse] to 500 seconds, that would have a huge impact on rocketry,” Isaac Silvera, the Thomas D. Cabot Professor of the Natural Sciences at Harvard University, told Inverse by phone. “If you can trigger metallic hydrogen to recover to the molecular phase, [the energy release] calculated for that is 1700 seconds.”
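
To see what those numbers buy: specific impulse enters the standard Tsiolkovsky rocket equation (not spelled out in the article) linearly, so jumping from 450 to 1,700 seconds would nearly quadruple the velocity change a rocket with a given mass ratio can achieve. A minimal sketch, with the mass ratio assumed purely for illustration:

import math

G0 = 9.81  # m/s^2, standard gravity

def delta_v(isp_s, mass_ratio):
    # Tsiolkovsky: delta-v = Isp * g0 * ln(initial mass / final mass)
    return isp_s * G0 * math.log(mass_ratio)

MASS_RATIO = 8.0  # assumed fuelled-to-dry mass ratio, illustration only

for isp in (450, 500, 1700):
    print(f"Isp {isp:>4} s -> {delta_v(isp, MASS_RATIO) / 1000:.1f} km/s")

On these assumptions the outputs are roughly 9.2, 10.2 and 34.7 km/s, which is why a propellant in the 1,700-second range gets discussed in terms of reaching orbit in a single stage.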

Metallic hydrogen could potentially enable rockets to get into orbit in a single stage, even allowing humans to explore the outer planets. Metallic hydrogen is predicted to be “metastable” — meaning that if you make it at very high pressure and then release the pressure, it will stay in its metallic state. A diamond, for example, is a metastable form of graphite. If you take graphite, pressurize it, then heat it, it becomes a diamond; if you take the pressure off, it’s still a diamond. But if you heat it again, it will revert back to graphite.

Scientists first theorized atomic metallic hydrogen a century ago. Silvera, who created the substance along with post-doctoral fellow Ranga Dias, has been chasing it since 1982, when he was working as a professor of physics at the University of Amsterdam.

Metallic hydrogen has also been predicted to be a high- or possibly room-temperature superconductor. There are currently no known room-temperature superconductors, meaning the applications are immense — particularly for the electric grid, which suffers from energy lost through heat dissipation. It could also facilitate magnetic levitation for futuristic high-speed trains; substantially improve performance of electric cars; and revolutionize the way energy is produced and stored.

But that’s all still likely a couple of decades off. The next step in terms of practical application is to determine whether metallic hydrogen is indeed metastable. Right now Silvera has a very small quantity. If the substance does turn out to be metastable, it might be used to create a room-temperature crystal and — by spraying atomic hydrogen onto the surface — grow more from it like a seed, the way synthetic diamonds are made.

More from Inverse

This Is How Your Brain Becomes Addicted to Caffeine August 23rd 2020

Regular ingestion of the drug alters your brain’s chemical makeup, leading to fatigue, headaches and nausea if you try to quit.

Smithsonian Magazine


Within 24 hours of quitting the drug, your withdrawal symptoms begin. Initially, they’re subtle: The first thing you notice is that you feel mentally foggy, and lack alertness. Your muscles are fatigued, even when you haven’t done anything strenuous, and you suspect that you’re more irritable than usual.

Over time, an unmistakable throbbing headache sets in, making it difficult to concentrate on anything. Eventually, as your body protests having the drug taken away, you might even feel dull muscle pains, nausea and other flu-like symptoms.

This isn’t heroin, tobacco or even alcohol withdrawal. We’re talking about quitting caffeine, a substance consumed so widely (the FDA reports that more than 80 percent of American adults drink it daily) and in such mundane settings (say, at an office meeting or in your car) that we often forget it’s a drug—and by far the world’s most popular psychoactive one.

Like many drugs, caffeine is chemically addictive, a fact that scientists established back in 1994. In May 2013, with the publication of the 5th edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM), caffeine withdrawal was finally included as a mental disorder for the first time—even though the symptoms that earned it inclusion are ones regular coffee drinkers have long known well from the times they’ve gone off the drug for a day or more.

Why, exactly, is caffeine addictive? The reason stems from the way the drug affects the human brain, producing the alert feeling that caffeine drinkers crave.

Soon after you drink (or eat) something containing caffeine, it’s absorbed through the small intestine and dissolved into the bloodstream. Because the chemical is both water- and fat-soluble (meaning that it can dissolve in water-based solutions—think blood—as well as fat-based substances, such as our cell membranes), it’s able to penetrate the blood-brain barrier and enter the brain.

Structurally, caffeine closely resembles a molecule that’s naturally present in our brain, called adenosine (which is a byproduct of many cellular processes, including cellular respiration)—so much so, in fact, that caffeine can fit neatly into our brain cells’ receptors for adenosine, effectively blocking them off. Normally, the adenosine produced over time locks into these receptors and produces a feeling of tiredness.


Caffeine structurally resembles adenosine enough for it to fit into the brain’s adenosine receptors.

When caffeine molecules are blocking those receptors, they prevent this from occurring, thereby generating a sense of alertness and energy for a few hours.

Additionally, some of the brain’s own natural stimulants (such as dopamine) work more effectively when the adenosine receptors are blocked, and all the surplus adenosine floating around in the brain cues the adrenal glands to secrete adrenaline, another stimulant.

For this reason, caffeine isn’t technically a stimulant on its own, says Stephen R. Braun, the author of Buzzed: The Science and Lore of Caffeine and Alcohol, but a stimulant enabler: a substance that lets our natural stimulants run wild. Ingesting caffeine, he writes, is akin to “putting a block of wood under one of the brain’s primary brake pedals.” This block stays in place for anywhere from four to six hours, depending on the person’s age, size and other factors, until the caffeine is eventually metabolized by the body.
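
That four-to-six-hour window reflects roughly exponential elimination. A minimal sketch, assuming a five-hour half-life (a commonly cited adult average; the half-life and the 95 mg dose are assumptions, not figures from this article):

def caffeine_left(dose_mg, hours, half_life_h=5.0):
    # Exponential elimination: the remaining amount halves every half-life
    return dose_mg * 0.5 ** (hours / half_life_h)

for t in (0, 2, 5, 10, 24):
    print(f"{t:>2} h after a 95 mg cup: {caffeine_left(95, t):5.1f} mg remaining")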

In people who take advantage of this process on a daily basis (i.e. coffee/tea, soda or energy drink addicts), the brain’s chemistry and physical characteristics actually change over time as a result. The most notable change is that brain cells grow more adenosine receptors, which is the brain’s attempt to maintain equilibrium in the face of a constant onslaught of caffeine, with its adenosine receptors so regularly plugged (studies indicate that the brain also responds by decreasing the number of receptors for norepinephrine, a stimulant). This explains why regular coffee drinkers build up a tolerance over time—because you have more adenosine receptors, it takes more caffeine to block a significant proportion of them and achieve the desired effect.

This also explains why suddenly giving up caffeine entirely can trigger a range of withdrawal effects. The underlying chemistry is complex and not fully understood, but the principle is that your brain is used to operating in one set of conditions (with an artificially-inflated number of adenosine receptors, and a decreased number of norepinephrine receptors) that depend upon regular ingestion of caffeine. Suddenly, without the drug, the altered brain chemistry causes all sorts of problems, including the dreaded caffeine withdrawal headache.

The good news is that, compared to many drug addictions, the effects are relatively short-term. To kick the habit, you only need to get through about 7-12 days of symptoms without consuming any caffeine. During that period, your brain will naturally decrease the number of adenosine receptors on each cell in response to the sudden lack of caffeine. If you can make it that long without a cup of joe or a spot of tea, the number of adenosine receptors in your brain resets to baseline, and your addiction will be broken.

Joseph Stromberg, Smithsonian Magazine

Is Consciousness an Illusion?

Philosopher Daniel Dennett holds a distinctive and openly paradoxical position on the question of consciousness. August 20th 2020

The New York Review of Books

  • Thomas Nagel

Daniel Dennett at the Centro Cultural de la Ciencia, Buenos Aires, Argentina, June 2016. Photo by Soledad Aznarez / AP Images.

For fifty years the philosopher Daniel Dennett has been engaged in a grand project of disenchantment of the human world, using science to free us from what he deems illusions—illusions that are difficult to dislodge because they are so natural. In From Bacteria to Bach and Back, his eighteenth book (thirteenth as sole author), Dennett presents a valuable and typically lucid synthesis of his worldview. Though it is supported by reams of scientific data, he acknowledges that much of what he says is conjectural rather than proven, either empirically or philosophically.

Dennett is always good company. He has a gargantuan appetite for scientific knowledge, and is one of the best people I know at transmitting it and explaining its significance, clearly and without superficiality. He writes with wit and elegance; and in this book especially, though it is frankly partisan, he tries hard to grasp and defuse the sources of resistance to his point of view. He recognizes that some of what he asks us to believe is strongly counterintuitive. I shall explain eventually why I think the overall project cannot succeed, but first let me set out the argument, which contains much that is true and insightful.

The book has a historical structure, taking us from the prebiotic world to human minds and human civilization. It relies on different forms of evolution by natural selection, both biological and cultural, as its most important method of explanation. Dennett holds fast to the assumption that we are just physical objects and that any appearance to the contrary must be accounted for in a way that is consistent with this truth. Bach’s or Picasso’s creative genius, and our conscious experience of hearing Bach’s Fourth Brandenburg Concerto or seeing Picasso’s Girl Before a Mirror, all arose by a sequence of physical events beginning with the chemical composition of the earth’s surface before the appearance of unicellular organisms. Dennett identifies two unsolved problems along this path: the origin of life at its beginning and the origin of human culture much more recently. But that is no reason not to speculate.

The task Dennett sets himself is framed by a famous distinction drawn by the philosopher Wilfrid Sellars between the “manifest image” and the “scientific image”—two ways of seeing the world we live in. According to the manifest image, Dennett writes, the world is

full of other people, plants, and animals, furniture and houses and cars…and colors and rainbows and sunsets, and voices and haircuts, and home runs and dollars, and problems and opportunities and mistakes, among many other such things. These are the myriad “things” that are easy for us to recognize, point to, love or hate, and, in many cases, manipulate or even create…. It’s the world according to us.

According to the scientific image, on the other hand, the world

is populated with molecules, atoms, electrons, gravity, quarks, and who knows what else (dark energy, strings? branes?).

This, according to Dennett, is the world as it is in itself, not just for us, and the task is to explain scientifically how the world of molecules has come to include creatures like us, complex physical objects to whom everything, including they themselves, appears so different.

He greatly extends Sellars’s point by observing that the concept of the manifest image can be generalized to apply not only to humans but to all other living beings, all the way down to bacteria. All organisms have biological sensors and physical reactions that allow them to detect and respond appropriately only to certain features of their environment—“affordances,” Dennett calls them—that are nourishing, noxious, safe, dangerous, sources of energy or reproductive possibility, potential predators or prey.

For each type of organism, whether plant or animal, these are the things that define their world, that are salient and important for them; they can ignore the rest. Whatever the underlying physiological mechanisms, the content of the manifest image reveals itself in what the organisms do and how they react to their environment; it need not imply that the organisms are consciously aware of their surroundings. But in its earliest forms, it is the first step on the route to awareness.


The lengthy process of evolution that generates these results is first biological and then, in our case, cultural, and only at the very end is it guided partly by intelligent design, made possible by the unique capacities of the human mind and human civilization. But as Dennett says, the biosphere is saturated with design from the beginning—everything from the genetic code embodied in DNA to the metabolism of unicellular organisms to the operation of the human visual system—design that is not the product of intention and that does not depend on understanding.

One of Dennett’s most important claims is that most of what we and our fellow organisms do to stay alive, cope with the world and one another, and reproduce is not understood by us or them. It is competence without comprehension. This is obviously true of organisms like bacteria and trees that have no comprehension at all, but it is equally true of creatures like us who comprehend a good deal. Most of what we do, and what our bodies do—digest a meal, move certain muscles to grasp a doorknob, or convert the impact of sound waves on our eardrums into meaningful sentences—is done for reasons that are not our reasons. Rather, they are what Dennett calls free-floating reasons, grounded in the pressures of natural selection that caused these behaviors and processes to become part of our repertoire. There are reasons why these patterns have emerged and survived, but we don’t know those reasons, and we don’t have to know them to display the competencies that allow us to function.

Nor do we have to understand the mechanisms that underlie those competencies. In an illuminating metaphor, Dennett asserts that the manifest image that depicts the world in which we live our everyday lives is composed of a set of user-illusions,

like the ingenious user-illusion of click-and-drag icons, little tan folders into which files may be dropped, and the rest of the ever more familiar items on your computer’s desktop. What is actually going on behind the desktop is mind-numbingly complicated, but users don’t need to know about it, so intelligent interface designers have simplified the affordances, making them particularly salient for human eyes, and adding sound effects to help direct attention. Nothing compact and salient inside the computer corresponds to that little tan file-folder on the desktop screen.

He says that the manifest image of each species is “a user-illusion brilliantly designed by evolution to fit the needs of its users.” In spite of the word “illusion” he doesn’t wish simply to deny the reality of the things that compose the manifest image; the things we see and hear and interact with are “not mere fictions but different versions of what actually exists: real patterns.” The underlying reality, however, what exists in itself and not just for us or for other creatures, is accurately represented only by the scientific image—ultimately in the language of physics, chemistry, molecular biology, and neurophysiology.


Our user-illusions were not, like the little icons on the desktop screen, created by an intelligent interface designer. Nearly all of them—such as our images of people, their faces, voices, and actions, the perception of some things as delicious or comfortable and others as disgusting or dangerous—are the products of “bottom-up” design, understandable through the theory of evolution by natural selection, rather than “top-down” design by an intelligent being. Darwin, in what Dennett calls a “strange inversion of reasoning,” showed us how to resist the intuitive tendency always to explain competence and design by intelligence, and how to replace it with explanation by natural selection, a mindless process of accidental variation, replication, and differential survival.

As for the underlying mechanisms, we now have a general idea of how they might work because of another strange inversion of reasoning, due to Alan Turing, the creator of the computer, who saw how a mindless machine could do arithmetic perfectly without knowing what it was doing. This can be applied to all kinds of calculation and procedural control, in natural as well as in artificial systems, so that their competence does not depend on comprehension. Dennett’s claim is that when we put these two insights together, we see that

all the brilliance and comprehension in the world arises ultimately out of uncomprehending competences compounded over time into ever more competent—and hence comprehending—systems. This is indeed a strange inversion, overthrowing the pre-Darwinian mind-first vision of Creation with a mind-last vision of the eventual evolution of us, intelligent designers at long last.

And he adds:

Turing himself is one of the twigs on the Tree of Life, and his artifacts, concrete and abstract, are indirectly products of the blind Darwinian processes in the same way spider webs and beaver dams are….

An essential, culminating stage of this process is cultural evolution, much of which, Dennett believes, is as uncomprehending as biological evolution. He quotes Peter Godfrey-Smith’s definition, from which it is clear that the concept of evolution can apply more widely:

Evolution by natural selection is change in a population due to (i) variation in the characteristics of members of the population, (ii) which causes different rates of reproduction, and (iii) which is heritable.

In the biological case, variation is caused by mutations in DNA, and it is heritable through reproduction, sexual or otherwise. But the same pattern applies to variation in behavior that is not genetically caused, and that is heritable only in the sense that other members of the population can copy it, whether it be a game, a word, a superstition, or a mode of dress.
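
Godfrey-Smith’s three conditions are abstract enough to simulate directly. Here is a toy sketch, entirely illustrative and not anything from Dennett’s book, in which “catchier” behaviours are copied more often and occasional miscopying supplies the variation:

import random

def step(population):
    next_gen = []
    for _ in population:
        # (ii) differential reproduction: of two randomly encountered
        # variants, the catchier one gets copied
        parent = max(random.sample(population, 2))
        # (iii) heritability via copying, (i) with occasional variation
        child = parent + (random.gauss(0, 0.1) if random.random() < 0.2 else 0.0)
        next_gen.append(child)
    return next_gen

pop = [random.random() for _ in range(100)]  # "catchiness" of each variant
for _ in range(50):
    pop = step(pop)
print(f"mean catchiness after 50 generations: {sum(pop) / len(pop):.2f}")

Mean catchiness typically climbs well above its starting value without any variant comprehending anything, which is the sense in which cultural change can be competence without comprehension.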


This is the territory of what Richard Dawkins memorably christened “memes,” and Dennett shows that the concept is genuinely useful in describing the formation and evolution of culture. He defines “memes” thus:

They are a kind of way of behaving (roughly) that can be copied, transmitted, remembered, taught, shunned, denounced, brandished, ridiculed, parodied, censored, hallowed.

They include such things as the meme for wearing your baseball cap backward or for building an arch of a certain shape; but the best examples of memes are words. A word, like a virus, needs a host to reproduce, and it will survive only if it is eventually transmitted to other hosts, people who learn it by imitation:

Like a virus, it is designed (by evolution, mainly) to provoke and enhance its own replication, and every token it generates is one of its offspring. The set of tokens descended from an ancestor token form a type, which is thus like a species.


Alan Turing; drawing by David Levine.

The distinction between type and token comes from the philosophy of language: the word “tomato” is a type, of which any individual utterance or inscription or occurrence in thought is a token. The different tokens may be physically very different—you say “tomayto,” I say “tomahto”—but what unites them is the perceptual capacity of different speakers to recognize them all as instances of the type. That is why people speaking the same language with different accents, or typing with different fonts, can understand each other.

A child picks up its native language without any comprehension of how it works. Dennett believes, plausibly, that language must have originated in an equally unplanned way, perhaps initially by the spontaneous attachment of sounds to prelinguistic thoughts. (And not only sounds but gestures: as Dennett observes, we find it very difficult to talk without moving our hands, an indication that the earliest language may have been partly nonvocal.) Eventually such memes coalesced to form languages as we know them, intricate structures with vast expressive capacity, shared by substantial populations.

Language permits us to transcend space and time by communicating about what is not present, to accumulate shared bodies of knowledge, and with writing to store them outside of individual minds, resulting in the vast body of collective knowledge and practice dispersed among many minds that constitutes civilization. Language also enables us to turn our attention to our own thoughts and develop them deliberately in the kind of top-down creativity characteristic of science, art, technology, and institutional design.

But such top-down research and development is possible only on a deep foundation of competence whose development was largely bottom-up, the result of cultural evolution by natural selection. Without denigrating the contributions of individual genius, Dennett urges us not to forget its indispensable precondition, the arms race over millennia of competing memes—exemplified by the essentially unplanned evolution, survival, and extinction of languages.

Of course the biological evolution of the human brain made all of this possible, together with some coevolution of brain and culture over the past 50,000 years, but at this point we can only speculate about what happened. Dennett cites recent research in support of the view that brain architecture is the product of bottom-up competition and coalition-formation among neurons—partly in response to the invasion of memes. But whatever the details, if Dennett is right that we are physical objects, it follows that all the capacities for understanding, all the values, perceptions, and thoughts that present us with the manifest image and allow us to form the scientific image, have their real existence as systems of representation in the central nervous system.


This brings us to the question of consciousness, on which Dennett holds a distinctive and openly paradoxical position. Our manifest image of the world and ourselves includes as a prominent part not only the physical body and central nervous system but our own consciousness with its elaborate features—sensory, emotional, and cognitive—as well as the consciousness of other humans and many nonhuman species. In keeping with his general view of the manifest image, Dennett holds that consciousness is not part of reality in the way the brain is. Rather, it is a particularly salient and convincing user-illusion, an illusion that is indispensable in our dealings with one another and in monitoring and managing ourselves, but an illusion nonetheless.

You may well ask how consciousness can be an illusion, since every illusion is itself a conscious experience—an appearance that doesn’t correspond to reality. So it cannot appear to me that I am conscious though I am not: as Descartes famously observed, the reality of my own consciousness is the one thing I cannot be deluded about. The way Dennett avoids this apparent contradiction takes us to the heart of his position, which is to deny the authority of the first-person perspective with regard to consciousness and the mind generally.

The view is so unnatural that it is hard to convey, but it has something in common with the behaviorism that was prevalent in psychology at the middle of the last century. Dennett believes that our conception of conscious creatures with subjective inner lives—which are not describable merely in physical terms—is a useful fiction that allows us to predict how those creatures will behave and to interact with them. He has coined the term “heterophenomenology” to describe the (strictly false) attribution each of us makes to others of an inner mental theater—full of sensory experiences of colors, shapes, tastes, sounds, images of furniture, landscapes, and so forth—that contains their representation of the world.

According to Dennett, however, the reality is that the representations that underlie human behavior are found in neural structures of which we know very little. And the same is true of the similar conception we have of our own minds. That conception does not capture an inner reality, but has arisen as a consequence of our need to communicate to others in rough and graspable fashion our various competencies and dispositions (and also, sometimes, to conceal them):

Curiously, then, our first-person point of view of our own minds is not so different from our second-person point of view of others’ minds: we don’t see, or hear, or feel, the complicated neural machinery churning away in our brains but have to settle for an interpreted, digested version, a user-illusion that is so familiar to us that we take it not just for reality but also for the most indubitable and intimately known reality of all.

The trouble is that Dennett concludes not only that there is much more behind our behavioral competencies than is revealed to the first-person point of view—which is certainly true—but that nothing whatever is revealed to the first-person point of view but a “version” of the neural machinery. In other words, when I look at the American flag, it may seem to me that there are red stripes in my subjective visual field, but that is an illusion: the only reality, of which this is “an interpreted, digested version,” is that a physical process I can’t describe is going on in my visual cortex.

I am reminded of the Marx Brothers line: “Who are you going to believe, me or your own eyes?” Dennett asks us to turn our backs on what is glaringly obvious—that in consciousness we are immediately aware of real subjective experiences of color, flavor, sound, touch, etc. that cannot be fully described in neural terms even though they have a neural cause (or perhaps have neural as well as experiential aspects). And he asks us to do this because the reality of such phenomena is incompatible with the scientific materialism that in his view sets the outer bounds of reality. He is, in Aristotle’s words, “maintaining a thesis at all costs.”

If I understand him, this requires us to interpret ourselves behavioristically: when it seems to me that I have a subjective conscious experience, that experience is just a belief, manifested in what I am inclined to say. According to Dennett, the red stripes that appear in my visual field when I look at the flag are just the “intentional object” of such a belief, as Santa Claus is the intentional object of a child’s belief in Santa Claus. Neither of them is real. Recall that even trees and bacteria have a manifest image, which is to be understood through their outward behavior. The same, it turns out, is true of us: the manifest image is not an image after all.


There is no reason to go through such mental contortions in the name of science. The spectacular progress of the physical sciences since the seventeenth century was made possible by the exclusion of the mental from their purview. To say that there is more to reality than physics can account for is not a piece of mysticism: it is an acknowledgment that we are nowhere near a theory of everything, and that science will have to expand to accommodate facts of a kind fundamentally different from those that physics is designed to explain. It should not disturb us that this may have radical consequences, especially for Dennett’s favorite natural science, biology: the theory of evolution, which in its current form is a purely physical theory, may have to incorporate nonphysical factors to account for consciousness, if consciousness is not, as he thinks, an illusion. Materialism remains a widespread view, but science does not progress by tailoring the data to fit a prevailing theory.

There is much in the book that I haven’t discussed, about education, information theory, prebiotic chemistry, the analysis of meaning, the psychological role of probability, the classification of types of minds, and artificial intelligence. Dennett’s reflections on the history and prospects of artificial intelligence and how we should manage its development and our relation to it are informative and wise. He concludes:

The real danger, I think, is not that machines more intelligent than we are will usurp our role as captains of our destinies, but that we will over-estimate the comprehension of our latest thinking tools, prematurely ceding authority to them far beyond their competence….

We should hope that new cognitive prostheses will continue to be designed to be parasitic, to be tools, not collaborators. Their only “innate” goal, set up by their creators, should be to respond, constructively and transparently, to the demands of the user.

About the true nature of the human mind, Dennett is on one side of an old argument that goes back to Descartes. He pays tribute to Descartes, citing the power of what he calls “Cartesian gravity,” the pull of the first-person point of view; and he calls the allegedly illusory realm of consciousness the “Cartesian Theater.” The argument will no doubt go on for a long time, and the only way to advance understanding is for the participants to develop and defend their rival conceptions as fully as possible—as Dennett has done. Even those who find the overall view unbelievable will find much to interest them in this book.

Thomas Nagel is University Professor Emeritus at NYU. He is the author of The View From Nowhere, Mortal Questions, and Mind and Cosmos, among other books.
The New York Review of Books

More from The New York Review of Books

This post originally appeared on The New York Review of Books

Wireless Charging Risk August 16th 2020

Wireless charging is increasingly common in modern smartphones, and there’s even speculation that Apple might ditch charging via a cable entirely in the near future. But the slight convenience of juicing up your phone by plopping it onto a pad rather than plugging it in comes with a surprisingly robust environmental cost. According to new calculations from OneZero and iFixit, wireless charging is drastically less efficient than charging with a cord, so much so that the widespread adoption of this technology could necessitate the construction of dozens of new power plants around the world. (Unless manufacturers find other ways to make up for the energy drain, of course.)

On paper, wireless charging sounds appealing. Just drop a phone down on a charger and it will start charging. There’s no wear and tear on charging ports, and chargers can even be built into furniture. Not all of the energy that comes out of a wall outlet, however, ends up in a phone’s battery. Some of it gets lost in the process as heat.

While this is true of all forms of charging to a certain extent, wireless chargers lose a lot of energy compared to cables. They get even less efficient when the coils in the phone aren’t aligned properly with the coils in the charging pad, a surprisingly common problem.

To get a sense of how much extra power is lost when using wireless charging versus wired charging in the real world, I tested a Pixel 4 using multiple wireless chargers, as well as the standard charging cable that comes with the phone. I used a high-precision power meter that sits between the charging block and the power outlet to measure power consumption.

In my tests, I found that wireless charging used, on average, around 47% more power than a cable.

Charging the phone from completely dead to 100% using a cable took an average of 14.26 watt-hours (Wh). Using a wireless charger took, on average, 21.01 Wh. That comes out to slightly more than 47% more energy for the convenience of not plugging in a cable. In other words, the phone had to work harder, generate more heat, and suck up more energy when wirelessly charging to fill the same size battery.
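
The headline percentage is just the ratio of those two measurements:

wired_wh = 14.26     # average measured energy for a full charge over a cable
wireless_wh = 21.01  # average measured energy for a full charge on a pad

print(f"wireless overhead: {wireless_wh / wired_wh - 1:.1%}")  # ~47.3%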

How the phone was positioned on the charger significantly affected charging efficiency. The flat Yootech charger I tested was difficult to line up properly. Initially I intended to measure power consumption with the coils aligned as well as possible, then intentionally misalign them to detect the difference.

Instead, during one test, I noticed that the phone wasn’t charging at all. It looked as though it was aligned properly, but as I fiddled with it I found that the difference between positions that charged properly and positions that didn’t charge at all could be measured in millimeters. Without the phone’s charging indicator, it would have been impossible to tell. Careless alignment can make the phone take far more energy to charge than necessary or, more annoyingly, not charge at all.

The first test with the Yootech pad — before I figured out how to align the coils properly — took a whopping 25.62 Wh to charge, or 80% more energy than an average cable charge. Hearing about the hypothetical inefficiencies online was one thing, but here I could see how I’d nearly doubled the amount of power it took to charge my phone by setting it down slightly wrong instead of just plugging in a cable.

Google’s official Pixel Stand fared better, likely due to its propped-up design. Since the base of the phone sits flat, the coils can only be misaligned from left to right — circular pads like the Yootech allow for misalignment in any direction. Again, the threshold was a few millimeters at most, but the Pixel Stand continued charging while misaligned, albeit slower and using more power. In general, the propped-up design helped align the coils without much fiddling, but it still used an average of 19.8 Wh, or 39% more power, to charge the phone than cables.

On top of this, both wireless chargers independently consumed a small amount of power when no phone was charging at all — around 0.25 watts, which might not sound like much, but over 24 hours it would consume around six watt-hours. A household with multiple wireless chargers left plugged in — say, a charger by the bed, one in the living room, and another in the office — could waste the same amount of power in a day as it would take to fully charge a phone. By contrast, in my testing the normal cable charger did not draw any measurable amount of power.
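
The standby figures work out the same way:

idle_watts = 0.25             # draw of one idle wireless charger
wh_per_day = idle_watts * 24  # ~6 Wh wasted per charger per day
print(f"three idle chargers: {3 * wh_per_day:.0f} Wh/day")  # roughly one full phone charge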

While wireless charging might use relatively more power than a cable, it’s often written off as negligible. The extra power consumed by charging one phone with wireless charging versus a cable is the equivalent of leaving one extra LED light bulb on for a few hours. It might not even register on your power bill. At scale, however, it can turn into an environmental problem.

“I think in terms of power consumption, for me worrying about how much I’m paying for electricity, I don’t think it’s a factor,” Kyle Wiens, CEO of iFixit, told OneZero. “If all of a sudden, the 3 billion[-plus] smartphones that are in use, if all of them take 50% more power to charge, that adds up to a big amount. So it’s a society-wide issue, not a personal issue.”

To get a frame of reference for scale, iFixit helped me calculate the impact that the kind of excess power drain I experienced could have if every smartphone user on the planet switched to wireless charging — not a likely scenario any time soon, but neither was 3.5 billion people carrying around smartphones, say, 30 years ago.

“We worked out that at 100% efficiency from wall socket to battery, it would take about 73 coal power plants running for a day to charge the 3.5 billion smartphone batteries once fully,” iFixit technical writer Arthur Shi told OneZero. But if people place their phones wrong and reduce the efficiency of their charging, the number grows: “If the wireless charging efficiency was only 50%, you would need to double the [73] power plants in order to charge all the batteries.”
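
Shi’s scaling is linear: for a fixed amount of energy delivered into batteries, the number of plants needed is the baseline divided by the wall-to-battery efficiency. A small sketch; the 68% figure is inferred from the wired-versus-wireless measurements earlier in this article, not a number iFixit gave:

PLANTS_AT_FULL_EFFICIENCY = 73  # 50 MW plants running for a day, per iFixit

def plants_needed(efficiency):
    # Halve the wall-to-battery efficiency and you double the plants
    return PLANTS_AT_FULL_EFFICIENCY / efficiency

for eff in (1.0, 0.68, 0.5):  # 0.68 ~= 14.26 Wh / 21.01 Wh from the tests above
    print(f"{eff:.0%} efficient -> ~{plants_needed(eff):.0f} plants")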


This is rough math, of course. Measuring power consumption by the number of power plants devices require is a bit like measuring how many vehicles it takes to transport a couple of dozen people. It could take a dozen two-seat convertibles, or one bus. Shi’s math assumed relatively small coal power plants outputting 50 MW, as many power plants in the United States are, but those same needs could also be met by a couple of very large power plants outputting more than 2,000 MW (of which the United States has only 29).

However, the broader point remains the same: If everyone in the world switched to wireless charging, it would have a measurable impact on the global power grid. While tech companies like Apple and Google tout how environmentally friendly their phones are, power consumption often goes overlooked. “They want to cover the carbon impact of the product over their entire life cycle?” Wiens said. “The entire life cycle includes all the power that these things ever consumed plugged into the wall.”

There are some things that companies can do to balance out the excess power wireless chargers use. Manufacturers can design phones to disable wireless charging if their coils aren’t aligned — instead of allowing excessively inefficient charging for the sake of user experience — or design chargers to hold phones so they align properly. They can also continue to offer wired charging, which might mean Apple’s rumored future port-less phone would have to wait.

Finally, tech companies can work to offset their excesses in one area with savings in another. Wireless charging is only one small piece of the environmental picture, and environmental reports for major phones from Google and Apple only loosely point to energy efficiency and make no mention of the impact of using wireless chargers. There are many ways tech companies could be more energy-efficient to put less strain on our power grids. Until wireless charging itself gets a more thorough examination, though, the world would probably be better off if we all stuck to good old-fashioned plugs.

Update: A previous version of this article misstated two units of measurement in reference to the Pixel Stand charger. It consumes 0.25 watts when plugged in without a phone attached, which over 24 hours would consume around six watt-hours.

Bill Gates on Covid: Most US Tests Are ‘Completely Garbage’

The techie-turned-philanthropist on vaccines, Trump, and why social media is “a poisoned chalice.”

For 20 years, Bill Gates has been easing out of the roles that made him rich and famous—CEO, chief software architect, and chair of Microsoft—and devoting his brainpower and passion to the Bill and Melinda Gates Foundation, abandoning earnings calls and antitrust hearings for the metrics of disease eradication and carbon reduction. This year, after he left the Microsoft board, one might have thought he would relish shedding the spotlight of the kind directed at the four CEOs of big tech companies called before Congress.

But as with many of us, 2020 had different plans for Gates. An early Cassandra who warned of our lack of preparedness for a global pandemic, he became one of the most credible figures as his foundation made huge investments in vaccines, treatments, and testing. He also became a target of the plague of misinformation afoot in the land, as logorrheic critics accused him of planning to inject microchips in vaccine recipients. (Fact check: false. In case you were wondering.)

My first interview with Gates was in 1983, and I’ve long lost count of how many times I’ve spoken to him since. He’s yelled at me (more in the earlier years) and made me laugh (more in the latter years). But I’ve never looked forward to speaking to him more than in our year of Covid. We connected on Wednesday, remotely of course. In discussing our country’s failed responses, his issues with his friend Mark Zuckerberg’s social networks, and the innovations that might help us out of this mess, Gates did not disappoint. The interview has been edited for length and clarity.

WIRED: You have been warning us about a global pandemic for years. Now that it has happened just as you predicted, are you disappointed with the performance of the United States?

Bill Gates: Yeah. There’s three time periods, all of which have disappointments. There is 2015 until this particular pandemic hit. If we had built up the diagnostic, therapeutic, and vaccine platforms, and if we’d done the simulations to understand what the key steps were, we’d be dramatically better off. Then there’s the time period of the first few months of the pandemic, when the US actually made it harder for the commercial testing companies to get their tests approved, the CDC had this very low volume test that didn’t work at first, and they weren’t letting people test. The travel ban came too late, and it was too narrow to do anything. Then, after the first few months, eventually we figured out about masks, and that leadership is important.

So you’re disappointed, but are you surprised?

I’m surprised at the US situation because the smartest people on epidemiology in the world, by a lot, are at the CDC. I would have expected them to do better. You would expect the CDC to be the most visible, not the White House or even Anthony Fauci. But they haven’t been the face of the epidemic. They are trained to communicate and not try to panic people but get people to take things seriously. They have basically been muzzled since the beginning. We called the CDC, but they told us we had to talk to the White House a bunch of times. Now they say, “Look, we’re doing a great job on testing, we don’t want to talk to you.” Even the simplest things, which would greatly improve this system, they feel would be admitting there is some imperfection and so they are not interested.

Do you think it’s the agencies that fell down or just the leadership at the top, the White House?

We can do the postmortem at some point. We still have a pandemic going on, and we should focus on that. The White House didn’t allow the CDC to do its job after March. There was a window where they were engaged, but then the White House didn’t let them do that. So the variance between the US and other countries isn’t that first period, it’s the subsequent period where the messages—the opening up, the leadership on masks, those things—are not the CDC’s fault. They said not to open back up; they said that leadership has to be a model of face mask usage. I think they have done a good job since April, but we haven’t had the benefit of it.

At this point, are you optimistic?

Yes. You have to admit there’s been trillions of dollars of economic damage done and a lot of debts, but the innovation pipeline on scaling up diagnostics, on new therapeutics, on vaccines is actually quite impressive. And that makes me feel like, for the rich world, we should largely be able to end this thing by the end of 2021, and for the world at large by the end of 2022. That is only because of the scale of the innovation that’s taking place. Now whenever we get this done, we will have lost many years in malaria and polio and HIV and the indebtedness of countries of all sizes and instability. It’ll take you years beyond that before you’d even get back to where you were at the start of 2020. It’s not World War I or World War II, but it is in that order of magnitude as a negative shock to the system.

In March it was unimaginable that you’d be giving us that timeline and saying it’s great.

Well it’s because of innovation that you don’t have to contemplate an even sadder statement, which is this thing will be raging for five years until natural immunity is our only hope.

Let’s talk vaccines, which your foundation is investing in. Is there anything that’s shaping up relatively quickly that could be safe and effective?

Before the epidemic came, we saw huge potential in the RNA vaccines—Moderna, Pfizer/BioNTech, and CureVac. Right now, because of the way you manufacture them, and the difficulty of scaling up, they are more likely—if they are helpful—to help in the rich countries. They won’t be the low-cost, scalable solution for the world at large. There you’d look more at AstraZeneca or Johnson & Johnson. This disease, from both the animal data and the phase 1 data, seems to be very vaccine preventable. There are questions still. It will take us awhile to figure out the duration [of protection], and the efficacy in elderly, although we think that’s going to be quite good. Are there any side effects, which you really have to get out in those large phase 3 groups and even after that through lots of monitoring to see if there are any autoimmune diseases or conditions that the vaccine could interact with in a deleterious fashion.

Are you concerned that in our rush to get a vaccine we are going to approve something that isn’t safe and effective?

Yeah. In China and Russia they are moving full speed ahead. I bet there’ll be some vaccines that will get out to lots of patients without the full regulatory review somewhere in the world. We probably need three or four months, no matter what, of phase 3 data, just to look for side effects. The FDA, to their credit, at least so far, is sticking to requiring proof of efficacy. So far they have behaved very professionally despite the political pressure. There may be pressure, but people are saying no, make sure that that’s not allowed. The irony is that this is a president who is a vaccine skeptic. Every meeting I have with him he is like, “Hey, I don’t know about vaccines, and you have to meet with this guy Robert Kennedy Jr. who hates vaccines and spreads crazy stuff about them.”

Wasn’t Kennedy Jr. talking about you using vaccines to implant chips into people?

Yeah, you’re right. He, Roger Stone, Laura Ingraham. They do it in this kind of way: “I’ve heard lots of people say X, Y, Z.” That’s kind of Trumpish plausible deniability. Anyway, there was a meeting where Francis Collins, Tony Fauci, and I had to [attend], and they had no data about anything. When we would say, “But wait a minute, that’s not real data,” they’d say, “Look, Trump told you you have to sit and listen, so just shut up and listen anyway.” So it’s a bit ironic that the president is now trying to have some benefit from a vaccine.

What goes through your head when you’re in a meeting hearing misinformation, and the President of the United States wants you to keep your mouth shut?

That was a bit strange. I haven’t met directly with the president since March of 2018. I made it clear I’m glad to talk to him about the epidemic anytime. And I have talked to Debbie Birx, I’ve talked to Pence, I’ve talked to Mnuchin, Pompeo, particularly on the issue of, Is the US showing up in terms of providing money to procure the vaccine for the developing countries? There have been lots of meetings, but we haven’t been able to get the US to show up. It’s very important to be able to tell the vaccine companies to build extra factories for the billions of doses, that there is procurement money to buy those for the marginal cost. So in this supplemental bill, I’m calling everyone I can to get 4 billion through GAVI for vaccines and 4 billion through a global fund for therapeutics. That’s less than 1 percent of the bill, but in terms of saving lives and getting us back to normal, that under 1 percent is by far the most important thing if we can get it in there.

Speaking of therapeutics, if you were in the hospital and you have the disease and you’re looking over the doctor’s shoulder, what treatment are you going to ask for?

Remdesivir. Sadly the trials in the US have been so chaotic that the actual proven effect is kind of small. Potentially the effect is much larger than that. It’s insane how confused the trials here in the US have been. The supply of that is going up in the US; it will be quite available for the next few months. Also dexamethasone—it’s actually a fairly cheap drug—that’s for late-stage disease.

I’m assuming you’re not going to have trouble paying for it, Bill, so you could ask for anything.

Well, I don’t want special treatment, so that’s a tricky thing. Other antivirals are two to three months away. Antibodies are two to three months away. We’ve had about a factor-of-two improvement in hospital outcomes already, and that’s with just remdesivir and dexamethasone. These other things will be additive to that.

You helped fund a Covid diagnostic testing program in Seattle that got quicker results, and it wasn’t so intrusive. The FDA put it on pause. What happened?

There’s this thing where the health worker jams the deep turbinate, in the back of your nose, which actually hurts and makes you sneeze on the health worker. We showed that the quality of the results can be equivalent if you just put a self-test in the tip of your nose with a cotton swab. The FDA made us jump through some hoops to prove that you didn’t need to refrigerate the result, that it could go back in a dry plastic bag, and so on. So the delay there was just normal double checking, maybe overly careful but not based on some political angle. Because of what we have done at FDA, you can buy these cheaper swabs that are available by the billions. So anybody who’s using the deep turbinate now is just out of date. It’s a mistake, because it slows things down.

But people aren’t getting their tests back quickly enough.

Well, that’s just stupidity. The majority of all US tests are completely garbage, wasted. If you don’t care how late the date is and you reimburse at the same level, of course they’re going to take every customer. Because they are making ridiculous money, and it’s mostly rich people that are getting access to that. You have to have the reimbursement system pay a little bit extra for 24 hours, pay the normal fee for 48 hours, and pay nothing [if it isn’t done by then]. And they will fix it overnight.
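The scheme is concrete enough to write down as a rule. Here is a minimal sketch in Python, assuming a placeholder base fee (Gates gives the tier boundaries, not the dollar amounts):

# Tiered test reimbursement as Gates describes it. BASE_FEE and the 25%
# premium are hypothetical placeholders; only the 24h/48h tiers are his.
BASE_FEE = 100.0

def reimbursement(turnaround_hours):
    if turnaround_hours <= 24:
        return BASE_FEE * 1.25   # pay "a little bit extra" for 24-hour results
    if turnaround_hours <= 48:
        return BASE_FEE          # the normal fee for 48-hour results
    return 0.0                   # pay nothing for anything slower

for hours in (18, 40, 72):
    print(hours, reimbursement(hours))

Tie payment to turnaround like this and a lab that batches slow tests earns nothing, which is why he claims the labs “will fix it overnight.”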

Why don’t we just do that?

Because the federal government sets that reimbursement system. When we tell them to change it they say, “As far as we can tell, we’re just doing a great job, it’s amazing!” Here we are, this is August. We are the only country in the world where we waste the most money on tests. Fix the reimbursement. Set up the CDC website. But I have been on that kick, and people are tired of listening to me.

As someone who has built your life on science and logic, I’m curious what you think when you see so many people signing onto this anti-science view of the world.

Well, strangely, I’m involved in almost everything that anti-science is fighting. I’m involved with climate change, GMOs, and vaccines. The irony is that it’s digital social media that allows this kind of titillating, oversimplistic explanation of, “OK, there’s just an evil person, and that explains all of this.” And when you have [posts] encrypted, there is no way to know what it is. I personally believe government should not allow those types of lies or fraud or child pornography [to be hidden with encryption like WhatsApp or Facebook Messenger].

Well, you’re friends with Mark Zuckerberg. Have you talked to him about this?

After I said this publicly, he sent me mail. I like Mark, I think he’s got very good values, but he and I do disagree on the trade-offs involved there. The lies are so titillating you have to be able to see them and at least slow them down. Like that video where, what do they call her, the sperm woman? That got over 10 million views! [Note: It was more than 20 million.] Well how good are these guys at blocking things, where once something got the 10 million views and everybody was talking about it, they didn’t delete the link or the searchability? So it was meaningless. They claim, “Oh, now we don’t have it.” What effect did that have? Anybody can go watch that thing! So I am a little bit at odds with the way that these conspiracy theories spread, many of which are anti-vaccine things. We give literally tens of billions for vaccines to save lives, then people turn around saying, “No, we’re trying to make money and we’re trying to end lives.” That’s kind of a wild inversion of what our values are and what our track record is.

As you are the technology adviser to Microsoft, I think you can look forward in a few months to fighting this battle yourself when the company owns TikTok.

Yeah, my critique of dance moves will be fantastically value-added for them.

TikTok is more than just dance moves. There’s political content.

I know, I’m kidding. You’re right. Who knows what’s going to happen with that deal. But yes, it’s a poison chalice. Being big in the social media business is no simple game, like the encryption issue.

So are you wary of Microsoft getting into that game?

I mean, this may sound self-serving, but I think that the game being more competitive is probably a good thing. But having Trump kill off the only competitor, it’s pretty bizarre.

Do you understand what rule or regulation the president is invoking to demand that TikTok sell to an American company and then take a cut of the sales price?

I agree that the principle this is proceeding on is singularly strange. The cut thing, that’s doubly strange. Anyway, Microsoft will have to deal with all of that.

You have been very cautious in staying away from the political arena. But the issues you care most about—public health and climate change—have had huge setbacks because of who leads the country. Are you reconsidering spending on political change?

The foundation needs to be bipartisan. Whoever gets elected in the US, we are going to want to work with them. We do care a lot about competence, and hopefully voters will take into account how this administration has done at picking competent people and should that weigh into their vote. But there’s going to be plenty of money on both sides of this election, and I don’t like diverting money to political things. Even though the pandemic has made it pretty clear we should expect better, there’s other people who will put their time into the campaigning piece.

Did you have deja vu last week when those tech CEOs testified remotely before Congress?

Yeah. I had a whole committee attacking me, and they had four at a time. I mean, Jesus Christ, what’s the Congress coming to? If you want to give a guy a hard time, give him at least a whole day that he has to sit there on the hot seat by himself! And they didn’t even have to get on a plane!

Do you think the antitrust concerns are the same as when Microsoft was under the gun, or has the landscape changed?

Even without antitrust rules, tech does tend to be quite competitive. And even though in the short run you don’t think it’s going to dislodge people, there will be changes that will keep bringing prices down. But there are a lot of valid issues, and if you’re super-successful, the pleasure of going in front of the Congress comes with the territory.

How has your life changed living under the pandemic?

I used to travel a lot. If I wanted to see President Macron and say, “Hey, give money for the coronavirus vaccine,” to really show I’m serious I’d go there. Now, we had a GAVI replenishment summit where I just sat at home and got up a little early. I am able to get a lot done. My kids are home more than I thought they would be, which at least for me is a nice thing. I’m microwaving more food. I’m getting fairly good at it. The pandemic sadly is less painful for those who were better off before the pandemic.

Do you have a go-to mask you use?

No, I use a pretty ugly normal mask. I change it every day. Maybe I should get a designer mask or something creative, but I just use this surgical-looking mask.

Comment: Gates calls social media a poisoned chalice because it was intended to be a disinformation highway. Covid 19 is very useful to Gates’ class. Philanthropist he is not. His money-grabbing organisation has exploited Chinese slave labour for years. Cheap manufactured computers have been crucial to the development of social media, making Gates super rich. He speaks for very powerful and wealthy vested interests. As for the masks, there is no evidence that they or lockdowns work.

The impact of Covid 19 has been on the old, the already sick and, most importantly, BAME people – remember the mantra ‘Black Lives Matter.’ All white men are equally privileged and have no right to an opinion unless they are part of the devious, manipulative controlling elite. As for herd immunity or a vaccine, for that elite these dreams must be beyond the horizon. That is why they immediately rubbish the Russian vaccine. The elite have us right where they want us. Our fears and preoccupations must be BAME, domestic violence, sex crimes, feminist demands and fighting racists – our fears focused on Russia and China. That elite faked the figures for the first wave and are determined to find or fake evidence of a second one. Robert Cook

Forget Everything You Think You Know About Time

Is a linear representation of time accurate? This physicist says no.

Nautilus

  • Brian Gallagher

In April 2018, in the famous Faraday Theatre at the Royal Institution in London, Carlo Rovelli gave an hour-long lecture on the nature of time. A red thread spanned the stage, a metaphor for the Italian theoretical physicist’s subject. “Time is a long line,” he said. To the left lies the past—the dinosaurs, the big bang—and to the right, the future—the unknown. “We’re sort of here,” he said, hanging a carabiner on it, as a marker for the present.

Then he flipped the script. “I’m going to tell you that time is not like that,” he explained.

Rovelli went on to challenge our common-sense notion of time, starting with the idea that it ticks everywhere at a uniform rate. In fact, clocks tick slower when they are in a stronger gravitational field. When you move nearby clocks showing the same time into different fields—one in space, the other on Earth, say—and then bring them back together again, they will show different times. “It’s a fact,” Rovelli said, and it means “your head is older than your feet.” Also a non-starter is any shared sense of “now.” We don’t really share the present moment with anyone. “If I look at you, I see you now—well, but not really, because light takes time to come from you to me,” he said. “So I see you sort of a little bit in the past.” As a result, “now” means nothing beyond the temporal bubble “in which we can disregard the time it takes light to go back and forth.”
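The textbook result behind that claim, standard general relativity rather than anything specific to Rovelli’s lecture, is

\Delta\tau = \Delta t \sqrt{1 - \frac{2GM}{rc^2}}

for the proper time \Delta\tau accumulated by a clock at rest at radius r from a mass M: the smaller r is, the slower the clock ticks. Near Earth’s surface, two clocks separated by height h differ in rate by roughly gh/c^2, about one part in 10^{16} per metre, which is the sense in which your head is older than your feet.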

Rovelli turned next to the idea that time flows in only one direction, from past to future. Unlike general relativity, quantum mechanics, and particle physics, thermodynamics embeds a direction of time. Its second law states that the total entropy, or disorder, in an isolated system never decreases over time. Yet this doesn’t mean that our conventional notion of time is on any firmer grounding, Rovelli said. Entropy, or disorder, is subjective: “Order is in the eye of the person who looks.” In other words the distinction between past and future, the growth of entropy over time, depends on a macroscopic effect—“the way we have described the system, which in turn depends on how we interact with the system,” he said.
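Boltzmann’s entropy formula, quoted here as standard statistical-mechanics context, makes the point concrete:

S = k_B \ln W

where W counts the microstates compatible with a chosen macroscopic description. Redraw which variables count as macroscopic and W changes, and with it both the entropy and the direction in which it grows.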

Getting to the last common notion of time, Rovelli became a little more cautious. His scientific argument that time is discrete—that it is not seamless, but has quanta—is less solid. “Why? Because I’m still doing it! It’s not yet in the textbook.” The equations for quantum gravity he’s written down suggest three things, he said, about what “clocks measure.” First, there’s a minimal amount of time—its units are not infinitely small. Second, since a clock, like every object, is quantum, it can be in a superposition of time readings. “You cannot say between this event and this event is a certain amount of time, because, as always in quantum mechanics, there could be a probability distribution of time passing.” Which means that, third, in quantum gravity, you can have “a local notion of a sequence of events, which is a minimal notion of time, and that’s the only thing that remains,” Rovelli said. Events aren’t ordered in a line “but are confused and connected” to each other without “a preferred time variable—anything can work as a variable.”

Even the notion that the present is fleeting doesn’t hold up to scrutiny. It is certainly true that the present is “horrendously short” in classical, Newtonian physics. “But that’s not the way the world is designed,” Rovelli explained. Light traces a cone, or consecutively larger circles, in four-dimensional spacetime like ripples on a pond that grow larger as they travel. No information can cross the bounds of the light cone because that would require information to travel faster than the speed of light.

“In spacetime, the past is whatever is inside our past light-cone,” Rovelli said, gesturing with his hands the shape of an upside down cone. “So it’s whatever can affect us. The future is this opposite thing,” he went on, now gesturing an upright cone. “So in between the past and the future, there isn’t just a single line—there’s a huge amount of time.” Rovelli asked an audience member to imagine that he lived in Andromeda, which is two and a half million light years away. “A million years of your life would be neither past nor future for me. So the present is not thin; it’s horrendously thick.”
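The arithmetic behind the thick present is simple: an event at distance d is spacelike-separated from us, neither in our past light-cone nor in our future one, for a window of duration 2d/c. For Andromeda,

\frac{2d}{c} = \frac{2 \times (2.5 \times 10^6\ \text{light-years})}{c} = 5 \times 10^6\ \text{years}

so a million-year life there fits comfortably inside the region that is neither past nor future for us.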

Listening to Rovelli’s description, I was reminded of a phrase from his book, The Order of Time: Studying time “is like holding a snowflake in your hands: gradually, as you study it, it melts between your fingers and vanishes.” 

Brian Gallagher is the editor of Facts So Romantic, the Nautilus blog. Follow him on Twitter @BSGallagher.

Big Bounce Simulations Challenge the Big Bang

Detailed computer simulations have found that a cosmic contraction can generate features of the universe that we observe today.

In a cyclic universe, periods of expansion alternate with periods of contraction. The universe has no beginning and no end. (Samuel Velasco/Quanta Magazine)

Charlie Wood

Contributing Writer

August 4, 2020

The standard story of the birth of the cosmos goes something like this: Nearly 14 billion years ago, a tremendous amount of energy materialized as if from nowhere.

In a brief moment of rapid expansion, that burst of energy inflated the cosmos like a balloon. The expansion straightened out any large-scale curvature, leading to a geometry that we now describe as flat. Matter also thoroughly mixed together, so that now the cosmos appears largely (though not perfectly) featureless. Here and there, clumps of particles have created galaxies and stars, but these are just minuscule specks on an otherwise unblemished cosmic canvas.

That theory, which textbooks call inflation, matches all observations to date and is preferred by most cosmologists. But it has conceptual implications that some find disturbing. In most regions of space-time, the rapid expansion would never stop. As a consequence, inflation can’t help but produce a multiverse — a technicolor existence with an infinite variety of pocket universes, one of which we call home. To critics, inflation predicts everything, which means it ultimately predicts nothing. “Inflation doesn’t work as it was intended to work,” said Paul Steinhardt, an architect of inflation who has become one of its most prominent critics.

In recent years, Steinhardt and others have been developing a different story of how our universe came to be. They have revived the idea of a cyclical universe: one that periodically grows and contracts. They hope to replicate the universe that we see — flat and smooth — without the baggage that comes with a bang.

In ‘A Brief History of Time’ Stephen Hawking suggests that all the matter in the universe originated from a pinhead-sized store of infinitely dense matter. That seemed unlikely to me. The idea of an ever expanding universe is based on that concept. Robert Cook.

To that end, Steinhardt and his collaborators recently teamed up with researchers who specialize in computational models of gravity. They analyzed how a collapsing universe would change its own structure, and they ultimately discovered that contraction can beat inflation at its own game. No matter how bizarre and twisted the universe looked before it contracted, the collapse would efficiently erase a wide range of primordial wrinkles.

“It’s very important, what they claim they’ve done,” said Leonardo Senatore, a cosmologist at Stanford University who has analyzed inflation using a similar approach. There are aspects of the work he hasn’t yet had a chance to investigate, he said, but at first glance “it looks like they’ve done it.”

Squeezing the View

Over the last year and a half, a fresh view of the cyclic, or “ekpyrotic,” universe has emerged from a collaboration between Steinhardt, Anna Ijjas, a cosmologist at the Max Planck Institute for Gravitational Physics in Germany, and others — one that achieves renewal without collapse.

When it comes to visualizing expansion and contraction, people often focus on a balloonlike universe whose change in size is described by a “scale factor.” But a second measure — the Hubble radius, which is the greatest distance we can see — gets short shrift. The equations of general relativity let them evolve independently, and, crucially, you can flatten the universe by changing either.

Picture an ant on a balloon. Inflation is like blowing up the balloon. It puts the onus of smoothing and flattening primarily on the swelling cosmos. In the cyclic universe, however, the smoothing happens during a period of contraction. During this epoch, the balloon deflates modestly, but the real work is done by a drastically shrinking horizon. It’s as if the ant views everything through an increasingly powerful magnifying glass. The distance it can see shrinks, and thus its world grows more and more featureless.
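In standard cosmology notation, a textbook gloss rather than the collaboration’s own equations, the two measures are tied together by

H(t) \equiv \frac{\dot{a}(t)}{a(t)}, \qquad R_H(t) \equiv \frac{c}{|H(t)|}

where a(t) is the scale factor. Inflation flattens the universe by making a(t) grow enormously while R_H barely moves; slow contraction instead lets a(t) shrink modestly while |H| grows, so R_H collapses. That collapsing R_H is the ant’s ever more powerful magnifying glass.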

Lucy Reading-Ikkanda/Quanta Magazine

Steinhardt and company imagine a universe that expands for perhaps a trillion years, driven by the energy of an omnipresent (and hypothetical) field, whose behavior we currently attribute to dark energy. When this energy field eventually grows sparse, the cosmos starts to gently deflate. Over billions of years a contracting scale factor brings everything a bit closer, but not all the way down to a point. The dramatic change comes from the Hubble radius, which rushes in and eventually becomes microscopic. The universe’s contraction recharges the energy field, which heats up the cosmos and vaporizes its atoms. A bounce ensues, and the cycle starts anew.

In the bounce model, the microscopic Hubble radius ensures smoothness and flatness. And whereas inflation blows up many initial imperfections into giant plots of multiverse real estate, slow contraction squeezes them essentially out of existence. We are left with a cosmos that has no beginning, no end, no singularity at the Big Bang, and no multiverse.

From Any Cosmos to Ours

One challenge for both inflation and bounce cosmologies is to show that their respective energy fields create the right universe no matter how they get started. “Our philosophy is that there should be no philosophy,” Ijjas said. “You know it works when you don’t have to ask under what condition it works.”

She and Steinhardt criticize inflation for doing its job only in special cases, such as when its energy field forms without notable features and with little motion. Theorists have explored these situations most thoroughly, in part because they are the only examples tractable with chalkboard mathematics. In recent computer simulations, which Ijjas and Steinhardt describe in a pair of preprints posted online in June, the team stress-tested their slow-contraction model with a range of baby universes too wild for pen-and-paper analysis.

Adapting code developed by Frans Pretorius, a theoretical physicist at Princeton University who specializes in computational models of general relativity, the collaboration explored twisted and lumpy fields, fields moving in the wrong direction, even fields born with halves racing in opposing directions. In nearly every case, contraction swiftly produced a universe as boring as ours.

“You let it go and — bam! In a few cosmic moments of slow contraction it looks as smooth as silk,” Steinhardt said.

Katy Clough, a cosmologist at the University of Oxford who also specializes in numerical solutions of general relativity, called the new simulations “very comprehensive.” But she also noted that computational advances have only recently made this kind of analysis possible, so the full range of conditions that inflation can handle remains uncharted.

“It’s been semi-covered, but it needs a lot more work,” she said.

While interest in Ijjas and Steinhardt’s model varies, most cosmologists agree that inflation remains the paradigm to beat. “[Slow contraction] is not an equal contender at this point,” said Gregory Gabadadze, a cosmologist at New York University.

The collaboration will next flesh out the bounce itself — a more complex stage that requires novel interactions to push everything apart again. Ijjas already has one bounce theory that upgrades general relativity with a new interaction between matter and space-time, and she suspects that other mechanisms exist too. She plans to put her model on the computer soon to understand its behavior in detail.


The group hopes that after gluing the contraction and expansion stages together, they’ll identify unique features of a bouncing universe that astronomers might spot.

The collaboration has not worked out every detail of a cyclic cosmos with no bang and no crunch, much less shown that we live in one. But Steinhardt now feels optimistic that the model will soon offer a viable alternative to the multiverse. “The roadblocks I was most worried about have been surpassed,” he said. “I’m not kept up at night anymore.”

Editor’s note: Some of this research was funded in part by the Simons Foundation, which also funds this editorially independent magazine. Simons Foundation funding decisions play no role in our coverage.

This Scientist Believes Ageing Is Optional

August 10th 2020

In his book, “Lifespan,” celebrated scientist David Sinclair lays out exactly why we age—and why he thinks we don’t have to.

Outside

  • Graham Averill

If scientist David Sinclair is correct about aging, we might not have to age as quickly as we do. Photo by tomazl / iStock.

The oldest-known living person is Kane Tanaka, a Japanese woman who is a mind-boggling 116 years old. But if you ask David Sinclair, he’d argue that 116 is just middle age. At least, he thinks it should be. Sinclair is one of the leading scientists in the field of aging, and he believes that growing old isn’t a natural part of life—it’s a disease that needs a cure.

Sounds crazy, right? Sinclair, a Harvard professor who made Time’s list of the 100 most influential people in the world in 2014, will acquiesce that everyone has to die at some point, but he argues that we can double our life expectancy and live healthy, active lives right up until the end.

His 2019 book, Lifespan: Why We Age and Why We Don’t Have To ($28, Atria Books), out this fall, details the cutting-edge science that’s taking place in the field of longevity right now. The quick takeaway from this not-so-quick read: scientists are tossing out previous assumptions about aging, and they’ve discovered several tools that you can employ right now to slow down, and in some cases, reverse the clock.

In the nineties, as a postdoc in an MIT lab, Sinclair caused a stir in the field when he discovered the mechanism that leads to aging in yeast, which offered some insight into why humans age. Using his work with yeast as a launching point, Sinclair and his lab colleagues have focused on identifying the mechanism for aging in humans and published a study in 2013 asserting that the malfunction of a family of proteins called sirtuins is the single cause of aging. Sirtuins are responsible for repairing DNA damage and controlling overall cellular health by keeping cells on task. In other words, sirtuins tell kidney cells to act like kidney cells. If they get overwhelmed, cells start to misbehave, and we see the symptoms of aging, like organ failure or wrinkles. All of the genetic info in our cells is still there as we get older, but our body loses the ability to interpret it. This is because our body starts to run low on NAD, a molecule that activates the sirtuins: we have half as much NAD in our body when we’re 50 as we do at 20. Without it, the sirtuins can’t do their job, and the cells in our body forget what they’re supposed to be doing.

Sinclair splits his time between the U.S. and Australia, running labs at Harvard Medical School and at the University of New South Wales. All of his research seeks to prove that aging is a problem we can solve—and figure out how to stop. He argues that we can slow down the aging process, and in some cases even reverse it, by putting our body through “healthy stressors” that increase NAD levels and promote sirtuin activity. The role of sirtuins in aging is now fairly well accepted, but the idea that we can reactivate them (and how best to do so) is still being worked out.

Getting cold, working out hard, and going hungry every once in a while all engage what Sinclair calls our body’s survival circuit, wherein sirtuins tell cells to boost their defenses in order to keep the organism (you) alive. While Sinclair’s survival-circuit theory has yet to be proven in a trial setting, there’s plenty of research to suggest that exercise, cold exposure, and calorie reduction all help slow down the side effects of aging and stave off diseases associated with getting older. Fasting, in particular, has been well supported by other research: in various studies, both mice and yeast that were fed restricted diets live much longer than their well-fed cohorts. A two-year-long human experiment in the 1990s found that participants who had a restricted diet that left them hungry often had decreased blood pressure, blood-sugar levels, and cholesterol levels. Subsequent human studies found that decreasing calories by 12 percent slowed down biological aging based on changes in blood biomarkers.

Longevity science is a bit like the Wild West: the rules aren’t quite established. The research is exciting, but human clinical trials haven’t found anything definitive just yet. Throughout the field, there’s an uncomfortable relationship between privately owned companies, researchers, and even research institutes like Harvard: Sinclair points to a biomarker test by a company called InsideTracker as proof of his own reduced “biological age,” but he is also an investor in that company. He is listed as an inventor on a patent held by a NAD booster that’s on the market right now, too.

While the dust settles, the best advice for the curious to take from Lifespan is to experiment with habits that are easy, free, and harmless—like taking a brisk, cold walk and eating a lighter diet. With cold exposure, Sinclair explains, moderation is the key. He believes that you can reap benefits by simply taking a walk in the winter without a jacket. He doesn’t prescribe an exact fasting regimen that works best, but he doesn’t recommend anything extreme—simply missing a meal here and there, like skipping breakfast and having a late lunch.

How the Pandemic Defeated America

A virus has brought the world’s most powerful country to its knees.


Editor’s Note: The Atlantic is making vital coverage of the coronavirus available to all readers. Find the collection here.

Updated at 1:12 p.m. ET on August 4, 2020.

How did it come to this? A virus a thousand times smaller than a dust mote has humbled and humiliated the planet’s most powerful nation. America has failed to protect its people, leaving them with illness and financial ruin. It has lost its status as a global leader. It has careened between inaction and ineptitude. The breadth and magnitude of its errors are difficult, in the moment, to truly fathom.

In the first half of 2020, SARS-CoV-2—the new coronavirus behind the disease COVID-19—infected 10 million people around the world and killed about half a million. But few countries have been as severely hit as the United States, which has just 4 percent of the world’s population but a quarter of its confirmed COVID-19 cases and deaths. These numbers are estimates. The actual toll, though undoubtedly higher, is unknown, because the richest country in the world still lacks sufficient testing to accurately count its sick citizens.

Despite ample warning, the U.S. squandered every possible opportunity to control the coronavirus. And despite its considerable advantages—immense resources, biomedical might, scientific expertise—it floundered. While countries as different as South Korea, Thailand, Iceland, Slovakia, and Australia acted decisively to bend the curve of infections downward, the U.S. achieved merely a plateau in the spring, which changed to an appalling upward slope in the summer. “The U.S. fundamentally failed in ways that were worse than I ever could have imagined,” Julia Marcus, an infectious-disease epidemiologist at Harvard Medical School, told me.

Since the pandemic began, I have spoken with more than 100 experts in a variety of fields. I’ve learned that almost everything that went wrong with America’s response to the pandemic was predictable and preventable. A sluggish response by a government denuded of expertise allowed the coronavirus to gain a foothold. Chronic underfunding of public health neutered the nation’s ability to prevent the pathogen’s spread. A bloated, inefficient health-care system left hospitals ill-prepared for the ensuing wave of sickness. Racist policies that have endured since the days of colonization and slavery left Indigenous and Black Americans especially vulnerable to COVID-19. The decades-long process of shredding the nation’s social safety net forced millions of essential workers in low-paying jobs to risk their life for their livelihood. The same social-media platforms that sowed partisanship and misinformation during the 2014 Ebola outbreak in Africa and the 2016 U.S. election became vectors for conspiracy theories during the 2020 pandemic.

The U.S. has little excuse for its inattention. In recent decades, epidemics of SARS, MERS, Ebola, H1N1 flu, Zika, and monkeypox showed the havoc that new and reemergent pathogens could wreak. Health experts, business leaders, and even middle schoolers ran simulated exercises to game out the spread of new diseases. In 2018, I wrote an article for The Atlantic arguing that the U.S. was not ready for a pandemic, and sounded warnings about the fragility of the nation’s health-care system and the slow process of creating a vaccine. But the COVID-19 debacle has also touched—and implicated—nearly every other facet of American society: its shortsighted leadership, its disregard for expertise, its racial inequities, its social-media culture, and its fealty to a dangerous strain of individualism.

SARS-CoV-2 is something of an anti-Goldilocks virus: just bad enough in every way. Its symptoms can be severe enough to kill millions but are often mild enough to allow infections to move undetected through a population. It spreads quickly enough to overload hospitals, but slowly enough that statistics don’t spike until too late. These traits made the virus harder to control, but they also softened the pandemic’s punch. SARS-CoV-2 is neither as lethal as some other coronaviruses, such as SARS and MERS, nor as contagious as measles. Deadlier pathogens almost certainly exist. Wild animals harbor an estimated 40,000 unknown viruses, a quarter of which could potentially jump into humans. How will the U.S. fare when “we can’t even deal with a starter pandemic?,” Zeynep Tufekci, a sociologist at the University of North Carolina and an Atlantic contributing writer, asked me.

Despite its epochal effects, COVID-19 is merely a harbinger of worse plagues to come. The U.S. cannot prepare for these inevitable crises if it returns to normal, as many of its people ache to do. Normal led to this. Normal was a world ever more prone to a pandemic but ever less ready for one. To avert another catastrophe, the U.S. needs to grapple with all the ways normal failed us. It needs a full accounting of every recent misstep and foundational sin, every unattended weakness and unheeded warning, every festering wound and reopened scar.

A pandemic can be prevented in two ways: Stop an infection from ever arising, or stop an infection from becoming thousands more. The first way is likely impossible. There are simply too many viruses and too many animals that harbor them. Bats alone could host thousands of unknown coronaviruses; in some Chinese caves, one out of every 20 bats is infected. Many people live near these caves, shelter in them, or collect guano from them for fertilizer. Thousands of bats also fly over these people’s villages and roost in their homes, creating opportunities for the bats’ viral stowaways to spill over into human hosts. Based on antibody testing in rural parts of China, Peter Daszak of EcoHealth Alliance, a nonprofit that studies emerging diseases, estimates that such viruses infect a substantial number of people every year. “Most infected people don’t know about it, and most of the viruses aren’t transmissible,” Daszak says. But it takes just one transmissible virus to start a pandemic.

Sometime in late 2019, the wrong virus left a bat and ended up, perhaps via an intermediate host, in a human—and another, and another. Eventually it found its way to the Huanan seafood market, and jumped into dozens of new hosts in an explosive super-spreading event. The COVID-19 pandemic had begun.

“There is no way to get spillover of everything to zero,” Colin Carlson, an ecologist at Georgetown University, told me. Many conservationists jump on epidemics as opportunities to ban the wildlife trade or the eating of “bush meat,” an exoticized term for “game,” but few diseases have emerged through either route. Carlson said the biggest factors behind spillovers are land-use change and climate change, both of which are hard to control. Our species has relentlessly expanded into previously wild spaces. Through intensive agriculture, habitat destruction, and rising temperatures, we have uprooted the planet’s animals, forcing them into new and narrower ranges that are on our own doorsteps. Humanity has squeezed the world’s wildlife in a crushing grip—and viruses have come bursting out.

Curtailing those viruses after they spill over is more feasible, but requires knowledge, transparency, and decisiveness that were lacking in 2020. Much about coronaviruses is still unknown. There are no surveillance networks for detecting them as there are for influenza. There are no approved treatments or vaccines. Coronaviruses were formerly a niche family, of mainly veterinary importance. Four decades ago, just 60 or so scientists attended the first international meeting on coronaviruses. Their ranks swelled after SARS swept the world in 2003, but quickly dwindled as a spike in funding vanished. The same thing happened after MERS emerged in 2012. This year, the world’s coronavirus experts—and there still aren’t many—had to postpone their triennial conference in the Netherlands because SARS-CoV-2 made flying too risky.

In the age of cheap air travel, an outbreak that begins on one continent can easily reach the others. SARS already demonstrated that in 2003, and more than twice as many people now travel by plane every year. To avert a pandemic, affected nations must alert their neighbors quickly. In 2003, China covered up the early spread of SARS, allowing the new disease to gain a foothold, and in 2020, history repeated itself. The Chinese government downplayed the possibility that SARS-CoV-2 was spreading among humans, and only confirmed as much on January 20, after millions had traveled around the country for the lunar new year. Doctors who tried to raise the alarm were censured and threatened. One, Li Wenliang, later died of COVID-19. The World Health Organization initially parroted China’s line and did not declare a public-health emergency of international concern until January 30. By then, an estimated 10,000 people in 20 countries had been infected, and the virus was spreading fast.

The United States has correctly castigated China for its duplicity and the WHO for its laxity—but the U.S. has also failed the international community. Under President Donald Trump, the U.S. has withdrawn from several international partnerships and antagonized its allies. It has a seat on the WHO’s executive board, but left that position empty for more than two years, only filling it this May, when the pandemic was in full swing. Since 2017, Trump has pulled more than 30 staffers out of the Centers for Disease Control and Prevention’s office in China, who could have warned about the spreading coronavirus. Last July, he defunded an American epidemiologist embedded within China’s CDC. America First was America oblivious.

Even after warnings reached the U.S., they fell on the wrong ears. Since before his election, Trump has cavalierly dismissed expertise and evidence. He filled his administration with inexperienced newcomers, while depicting career civil servants as part of a “deep state.” In 2018, he dismantled an office that had been assembled specifically to prepare for nascent pandemics. American intelligence agencies warned about the coronavirus threat in January, but Trump habitually disregards intelligence briefings. The secretary of health and human services, Alex Azar, offered similar counsel, and was twice ignored.

Being prepared means being ready to spring into action, “so that when something like this happens, you’re moving quickly,” Ronald Klain, who coordinated the U.S. response to the West African Ebola outbreak in 2014, told me. “By early February, we should have triggered a series of actions, precisely zero of which were taken.” Trump could have spent those crucial early weeks mass-producing tests to detect the virus, asking companies to manufacture protective equipment and ventilators, and otherwise steeling the nation for the worst. Instead, he focused on the border. On January 31, Trump announced that the U.S. would bar entry to foreigners who had recently been in China, and urged Americans to avoid going there.

Travel bans make intuitive sense, because travel obviously enables the spread of a virus. But in practice, travel bans are woefully inefficient at restricting either travel or viruses. They prompt people to seek indirect routes via third-party countries, or to deliberately hide their symptoms. They are often porous: Trump’s included numerous exceptions, and allowed tens of thousands of people to enter from China. Ironically, they create travel: When Trump later announced a ban on flights from continental Europe, a surge of travelers packed America’s airports in a rush to beat the incoming restrictions. Travel bans may sometimes work for remote island nations, but in general they can only delay the spread of an epidemic—not stop it. And they can create a harmful false confidence, so countries “rely on bans to the exclusion of the things they actually need to do—testing, tracing, building up the health system,” says Thomas Bollyky, a global-health expert at the Council on Foreign Relations. “That sounds an awful lot like what happened in the U.S.”

This was predictable. A president who is fixated on an ineffectual border wall, and has portrayed asylum seekers as vectors of disease, was always going to reach for travel bans as a first resort. And Americans who bought into his rhetoric of xenophobia and isolationism were going to be especially susceptible to thinking that simple entry controls were a panacea.

And so the U.S. wasted its best chance of restraining COVID-19. Although the disease first arrived in the U.S. in mid-January, genetic evidence shows that the specific viruses that triggered the first big outbreaks, in Washington State, didn’t land until mid-February. The country could have used that time to prepare. Instead, Trump, who had spent his entire presidency learning that he could say whatever he wanted without consequence, assured Americans that “the coronavirus is very much under control,” and “like a miracle, it will disappear.” With impunity, Trump lied. With impunity, the virus spread.

On February 26, Trump asserted that cases were “going to be down to close to zero.” Over the next two months, at least 1 million Americans were infected.

As the coronavirus established itself in the U.S., it found a nation through which it could spread easily, without being detected. For years, Pardis Sabeti, a virologist at the Broad Institute of Harvard and MIT, has been trying to create a surveillance network that would allow hospitals in every major U.S. city to quickly track new viruses through genetic sequencing. Had that network existed, once Chinese scientists published SARS-CoV-2’s genome on January 11, every American hospital would have been able to develop its own diagnostic test in preparation for the virus’s arrival. “I spent a lot of time trying to convince many funders to fund it,” Sabeti told me. “I never got anywhere.”

The CDC developed and distributed its own diagnostic tests in late January. These proved useless because of a faulty chemical component. Tests were in such short supply, and the criteria for getting them were so laughably stringent, that by the end of February, tens of thousands of Americans had likely been infected but only hundreds had been tested. The official data were so clearly wrong that The Atlantic developed its own volunteer-led initiative—the COVID Tracking Project—to count cases.

Diagnostic tests are easy to make, so the U.S. failing to create one seemed inconceivable. Worse, it had no Plan B. Private labs were strangled by FDA bureaucracy. Meanwhile, Sabeti’s lab developed a diagnostic test in mid-January and sent it to colleagues in Nigeria, Sierra Leone, and Senegal. “We had working diagnostics in those countries well before we did in any U.S. states,” she told me.

It’s hard to overstate how thoroughly the testing debacle incapacitated the U.S. People with debilitating symptoms couldn’t find out what was wrong with them. Health officials couldn’t cut off chains of transmission by identifying people who were sick and asking them to isolate themselves.

Water running along a pavement will readily seep into every crack; so, too, did the unchecked coronavirus seep into every fault line in the modern world. Consider our buildings. In response to the global energy crisis of the 1970s, architects made structures more energy-efficient by sealing them off from outdoor air, reducing ventilation rates. Pollutants and pathogens built up indoors, “ushering in the era of ‘sick buildings,’ ” says Joseph Allen, who studies environmental health at Harvard’s T. H. Chan School of Public Health. Energy efficiency is a pillar of modern climate policy, but there are ways to achieve it without sacrificing well-being. “We lost our way over the years and stopped designing buildings for people,” Allen says.

The indoor spaces in which Americans spend 87 percent of their time became staging grounds for super-spreading events. One study showed that the odds of catching the virus from an infected person are roughly 19 times higher indoors than in open air. Shielded from the elements and among crowds clustered in prolonged proximity, the coronavirus ran rampant in the conference rooms of a Boston hotel, the cabins of the Diamond Princess cruise ship, and a church hall in Washington State where a choir practiced for just a few hours.

The hardest-hit buildings were those that had been jammed with people for decades: prisons. Between harsher punishments doled out in the War on Drugs and a tough-on-crime mindset that prizes retribution over rehabilitation, America’s incarcerated population has swelled sevenfold since the 1970s, to about 2.3 million. The U.S. imprisons five to 18 times more people per capita than other Western democracies. Many American prisons are packed beyond capacity, making social distancing impossible. Soap is often scarce. Inevitably, the coronavirus ran amok. By June, two American prisons each accounted for more cases than all of New Zealand. One, Marion Correctional Institution, in Ohio, had more than 2,000 cases among inmates despite having a capacity of 1,500. 


Other densely packed facilities were also besieged. America’s nursing homes and long-term-care facilities house less than 1 percent of its people, but as of mid-June, they accounted for 40 percent of its coronavirus deaths. More than 50,000 residents and staff have died. At least 250,000 more have been infected. These grim figures are a reflection not just of the greater harms that COVID-19 inflicts upon elderly physiology, but also of the care the elderly receive. Before the pandemic, three in four nursing homes were understaffed, and four in five had recently been cited for failures in infection control. The Trump administration’s policies have exacerbated the problem by reducing the influx of immigrants, who make up a quarter of long-term caregivers.

Read: Another coronavirus nursing-home disaster is coming

Even though a Seattle nursing home was one of the first COVID-19 hot spots in the U.S., similar facilities weren’t provided with tests and protective equipment. Rather than girding these facilities against the pandemic, the Department of Health and Human Services paused nursing-home inspections in March, passing the buck to the states. Some nursing homes avoided the virus because their owners immediately stopped visitations, or paid caregivers to live on-site. But in others, staff stopped working, scared about infecting their charges or becoming infected themselves. In some cases, residents had to be evacuated because no one showed up to care for them.

America’s neglect of nursing homes and prisons, its sick buildings, and its botched deployment of tests are all indicative of its problematic attitude toward health: “Get hospitals ready and wait for sick people to show,” as Sheila Davis, the CEO of the nonprofit Partners in Health, puts it. “Especially in the beginning, we catered our entire [COVID-19] response to the 20 percent of people who required hospitalization, rather than preventing transmission in the community.” The latter is the job of the public-health system, which prevents sickness in populations instead of merely treating it in individuals. That system pairs uneasily with a national temperament that views health as a matter of personal responsibility rather than a collective good.

At the end of the 20th century, public-health improvements meant that Americans were living an average of 30 years longer than they were at the start of it. Maternal mortality had fallen by 99 percent; infant mortality by 90 percent. Fortified foods all but eliminated rickets and goiters. Vaccines eradicated smallpox and polio, and brought measles, diphtheria, and rubella to heel. These measures, coupled with antibiotics and better sanitation, curbed infectious diseases to such a degree that some scientists predicted they would soon pass into history. But instead, these achievements brought complacency. “As public health did its job, it became a target” of budget cuts, says Lori Freeman, the CEO of the National Association of County and City Health Officials.

Today, the U.S. spends just 2.5 percent of its gigantic health-care budget on public health. Underfunded health departments were already struggling to deal with opioid addiction, climbing obesity rates, contaminated water, and easily preventable diseases. Last year saw the most measles cases since 1992. In 2018, the U.S. had 115,000 cases of syphilis and 580,000 cases of gonorrhea—numbers not seen in almost three decades. It has 1.7 million cases of chlamydia, the highest number ever recorded.

Since the last recession, in 2009, chronically strapped local health departments have lost 55,000 jobs—a quarter of their workforce. When COVID-19 arrived, the economic downturn forced overstretched departments to furlough more employees. When states needed battalions of public-health workers to find infected people and trace their contacts, they had to hire and train people from scratch. In May, Maryland Governor Larry Hogan asserted that his state would soon have enough people to trace 10,000 contacts every day. Last year, Ebola tore through the Democratic Republic of Congo—a country with a quarter of Maryland’s wealth and an active war.

Ripping unimpeded through American communities, the coronavirus created thousands of sickly hosts that it then rode into America’s hospitals. It should have found facilities armed with state-of-the-art medical technologies, detailed pandemic plans, and ample supplies of protective equipment and life-saving medicines. Instead, it found a brittle system in danger of collapse.

Compared with the average wealthy nation, America spends nearly twice as much of its national wealth on health care, about a quarter of which is wasted on inefficient care, unnecessary treatments, and administrative chicanery. The U.S. gets little bang for its exorbitant buck. It has the lowest life expectancy among comparable countries, the highest rates of chronic disease, and the fewest doctors per person. This profit-driven system has scant incentive to invest in spare beds, stockpiled supplies, peacetime drills, and layered contingency plans—the essence of pandemic preparedness. America’s hospitals have been pruned and stretched by market forces to run close to full capacity, with little ability to adapt in a crisis.

When hospitals do create pandemic plans, they tend to fight the last war. After 2014, several centers created specialized treatment units designed for Ebola—a highly lethal but not very contagious disease. These units were all but useless against a highly transmissible airborne virus like SARS-CoV-2. Nor were hospitals ready for an outbreak to drag on for months. Emergency plans assumed that staff could endure a few days of exhausting conditions, that supplies would hold, and that hard-hit centers could be supported by unaffected neighbors. “We’re designed for discrete disasters” like mass shootings, traffic pileups, and hurricanes, says Esther Choo, an emergency physician at Oregon Health and Science University. The COVID-19 pandemic is not a discrete disaster. It is a 50-state catastrophe that will likely continue at least until a vaccine is ready.

Wherever the coronavirus arrived, hospitals reeled. Several states asked medical students to graduate early, reenlisted retired doctors, and deployed dermatologists to emergency departments. Doctors and nurses endured grueling shifts, their faces chapped and bloody when they finally doffed their protective equipment. Soon, that equipment—masks, respirators, gowns, gloves—started running out.

Millions of Americans have found themselves impoverished and disconnected from medical care.

American hospitals operate on a just-in-time economy. They acquire the goods they need in the moment through labyrinthine supply chains that wrap around the world in tangled lines, from countries with cheap labor to richer nations like the U.S. The lines are invisible until they snap. About half of the world’s face masks, for example, are made in China, some of them in Hubei province. When that region became the pandemic epicenter, the mask supply shriveled just as global demand spiked. The Trump administration turned to a larder of medical supplies called the Strategic National Stockpile, only to find that the 100 million respirators and masks that had been dispersed during the 2009 flu pandemic were never replaced. Just 13 million respirators were left.

In April, four in five frontline nurses said they didn’t have enough protective equipment. Some solicited donations from the public, or navigated a morass of back-alley deals and internet scams. Others fashioned their own surgical masks from bandannas and gowns from garbage bags. The supply of nasopharyngeal swabs that are used in every diagnostic test also ran low, because one of the largest manufacturers is based in Lombardy, Italy—initially the COVID-19 capital of Europe. About 40 percent of critical-care drugs, including antibiotics and painkillers, became scarce because they depend on manufacturing lines that begin in China and India. Once a vaccine is ready, there might not be enough vials to put it in, because of the long-running global shortage of medical-grade glass—literally, a bottleneck.

The federal government could have mitigated those problems by buying supplies at economies of scale and distributing them according to need. Instead, in March, Trump told America’s governors to “try getting it yourselves.” As usual, health care was a matter of capitalism and connections. In New York, rich hospitals bought their way out of their protective-equipment shortfall, while neighbors in poorer, more diverse parts of the city rationed their supplies.

While the president prevaricated, Americans acted. Businesses sent their employees home. People practiced social distancing, even before Trump finally declared a national emergency on March 13, and before governors and mayors subsequently issued formal stay-at-home orders, or closed schools, shops, and restaurants. A study showed that the U.S. could have averted 36,000 COVID-19 deaths if leaders had enacted social-distancing measures just a week earlier. But better late than never: By collectively reducing the spread of the virus, America flattened the curve. Ventilators didn’t run out, as they had in parts of Italy. Hospitals had time to add extra beds.
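The size of that estimate follows from exponential growth: in an epidemic’s early phase, every week of delay compounds everything downstream. Here is a toy calculation in Python; the doubling time and the death counts are illustrative assumptions chosen for round numbers, not figures taken from the study.

    # Toy illustration of why a one-week head start matters so much
    # during exponential spread. All numbers are assumptions, not
    # figures from the study cited above.
    doubling_time_days = 3.5      # assumed early-epidemic doubling time
    delay_days = 7

    # Infections multiply by this factor over the delay period.
    growth_factor = 2 ** (delay_days / doubling_time_days)   # = 4.0
    print(f"a {delay_days}-day delay multiplies early infections ~{growth_factor:.0f}x")

    # If deaths scale with infections at the moment distancing begins,
    # acting a week earlier shrinks the early toll by the same factor.
    deaths_observed = 48_000                  # hypothetical early-wave toll
    deaths_if_earlier = deaths_observed / growth_factor
    print(f"deaths averted under these assumptions: "
          f"{deaths_observed - deaths_if_earlier:,.0f}")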

Social distancing worked. But the indiscriminate lockdown was necessary only because America’s leaders wasted months of prep time. Deploying this blunt policy instrument came at enormous cost. Unemployment rose to 14.7 percent, the highest level since record-keeping began, in 1948. More than 26 million people lost their jobs, a catastrophe in a country that—uniquely and absurdly—ties health care to employment. Some COVID-19 survivors have been hit with seven-figure medical bills. In the middle of the greatest health and economic crises in generations, millions of Americans have found themselves disconnected from medical care and impoverished. They join the millions who have always lived that way.

The coronavirus found, exploited, and widened every inequity that the U.S. had to offer. Elderly people, already pushed to the fringes of society, were treated as acceptable losses. Women were more likely to lose jobs than men, and also shouldered extra burdens of child care and domestic work, while facing rising rates of domestic violence. In half of the states, people with dementia and intellectual disabilities faced policies that threatened to deny them access to lifesaving ventilators. Thousands of people endured months of COVID-19 symptoms that resembled those of chronic postviral illnesses, only to be told that their devastating symptoms were in their head. Latinos were three times as likely to be infected as white people. Asian Americans faced racist abuse. Far from being a “great equalizer,” the pandemic fell unevenly upon the U.S., taking advantage of injustices that had been brewing throughout the nation’s history.

Read: COVID-19 can last for several months

Of the 3.1 million Americans who still cannot afford health insurance in states where Medicaid has not been expanded, more than half are people of color, and 30 percent are Black.* This is no accident. In the decades after the Civil War, the white leaders of former slave states deliberately withheld health care from Black Americans, apportioning medicine more according to the logic of Jim Crow than Hippocrates. They built hospitals away from Black communities, segregated Black patients into separate wings, and blocked Black students from medical school. In the 20th century, they helped construct America’s system of private, employer-based insurance, which has kept many Black people from receiving adequate medical treatment. They fought every attempt to improve Black people’s access to health care, from the creation of Medicare and Medicaid in the ’60s to the passage of the Affordable Care Act in 2010.

A number of former slave states also have among the lowest investments in public health, the lowest quality of medical care, the highest proportions of Black citizens, and the greatest racial divides in health outcomes. As the COVID-19 pandemic wore on, they were among the quickest to lift social-distancing restrictions and reexpose their citizens to the coronavirus. The harms of these moves were unduly foisted upon the poor and the Black.

As of early July, one in every 1,450 Black Americans had died from COVID-19—a rate more than twice that of white Americans. That figure is both tragic and wholly expected given the mountain of medical disadvantages that Black people face. Compared with white people, they die three years younger. Three times as many Black mothers die during pregnancy. Black people have higher rates of chronic illnesses that predispose them to fatal cases of COVID-19. When they go to hospitals, they’re less likely to be treated. The care they do receive tends to be poorer. Aware of these biases, Black people are hesitant to seek aid for COVID-19 symptoms and then show up at hospitals in sicker states. “One of my patients said, ‘I don’t want to go to the hospital, because they’re not going to treat me well,’ ” says Uché Blackstock, an emergency physician and the founder of Advancing Health Equity, a nonprofit that fights bias and racism in health care. “Another whispered to me, ‘I’m so relieved you’re Black. I just want to make sure I’m listened to.’ ”

Rather than countering misinformation during the pandemic, trusted sources often made things worse.

Black people were both more worried about the pandemic and more likely to be infected by it. The dismantling of America’s social safety net left Black people with less income and higher unemployment. They make up a disproportionate share of the low-paid “essential workers” who were expected to staff grocery stores and warehouses, clean buildings, and deliver mail while the pandemic raged around them. Earning hourly wages without paid sick leave, they couldn’t afford to miss shifts even when symptomatic. They faced risky commutes on crowded public transportation while more privileged people teleworked from the safety of isolation. “There’s nothing about Blackness that makes you more prone to COVID,” says Nicolette Louissaint, the executive director of Healthcare Ready, a nonprofit that works to strengthen medical supply chains. Instead, existing inequities stack the odds in favor of the virus.

Native Americans were similarly vulnerable. A third of the people in the Navajo Nation can’t easily wash their hands, because they’ve been embroiled in long-running negotiations over the rights to the water on their own lands. Those with water must contend with runoff from uranium mines. Most live in cramped multigenerational homes, far from the few hospitals that service a 17-million-acre reservation. As of mid-May, the Navajo Nation had higher rates of COVID-19 infections than any U.S. state.

Americans often misperceive historical inequities as personal failures. Stephen Huffman, a Republican state senator and doctor in Ohio, suggested that Black Americans might be more prone to COVID-19 because they don’t wash their hands enough, a remark for which he later apologized. Republican Senator Bill Cassidy of Louisiana, also a physician, noted that Black people have higher rates of chronic disease, as if this were an answer in itself, and not a pattern that demanded further explanation.

Clear distribution of accurate information is among the most important defenses against an epidemic’s spread. And yet the largely unregulated, social-media-based communications infrastructure of the 21st century almost ensures that misinformation will proliferate fast. “In every outbreak throughout the existence of social media, from Zika to Ebola, conspiratorial communities immediately spread their content about how it’s all caused by some government or pharmaceutical company or Bill Gates,” says Renée DiResta of the Stanford Internet Observatory, who studies the flow of online information. When COVID-19 arrived, “there was no doubt in my mind that it was coming.”

Read: The great 5G conspiracy

Sure enough, existing conspiracy theories—George Soros! 5G! Bioweapons!—were repurposed for the pandemic. An infodemic of falsehoods spread alongside the actual virus. Rumors coursed through online platforms that are designed to keep users engaged, even if that means feeding them content that is polarizing or untrue. In a national crisis, when people need to act in concert, this is calamitous. “The social internet as a system is broken,” DiResta told me, and its faults are readily abused.

Beginning on April 16, DiResta’s team noticed growing online chatter about Judy Mikovits, a discredited researcher turned anti-vaccination champion. Posts and videos cast Mikovits as a whistleblower who claimed that the new coronavirus was made in a lab and described Anthony Fauci of the White House’s coronavirus task force as her nemesis. Ironically, this conspiracy theory was nested inside a larger conspiracy—part of an orchestrated PR campaign by an anti-vaxxer and QAnon fan with the explicit goal to “take down Anthony Fauci.” It culminated in a slickly produced video called Plandemic, which was released on May 4. More than 8 million people watched it in a week.

Doctors and journalists tried to debunk Plandemic’s many misleading claims, but these efforts spread less successfully than the video itself. Like pandemics, infodemics quickly become uncontrollable unless caught early. But while health organizations recognize the need to surveil for emerging diseases, they are woefully unprepared to do the same for emerging conspiracies. In 2016, when DiResta spoke with a CDC team about the threat of misinformation, “their response was: ‘ That’s interesting, but that’s just stuff that happens on the internet.’ ”

From the June 2020 issue: Adrienne LaFrance on how QAnon is more important than you think

Rather than countering misinformation during the pandemic’s early stages, trusted sources often made things worse. Many health experts and government officials downplayed the threat of the virus in January and February, assuring the public that it posed a low risk to the U.S. and drawing comparisons to the ostensibly greater threat of the flu. The WHO, the CDC, and the U.S. surgeon general urged people not to wear masks, hoping to preserve the limited stocks for health-care workers. These messages were offered without nuance or acknowledgement of uncertainty, so when they were reversed—the virus is worse than the flu; wear masks—the changes seemed like befuddling flip-flops.

The media added to the confusion. Drawn to novelty, journalists gave oxygen to fringe anti-lockdown protests while most Americans quietly stayed home. They wrote up every incremental scientific claim, even those that hadn’t been verified or peer-reviewed.

There were many such claims to choose from. By tying career advancement to the publishing of papers, academia already creates incentives for scientists to do attention-grabbing but irreproducible work. The pandemic strengthened those incentives by prompting a rush of panicked research and promising ambitious scientists global attention.

In March, a small and severely flawed French study suggested that the antimalarial drug hydroxychloroquine could treat COVID-19. Published in a minor journal, it likely would have been ignored a decade ago. But in 2020, it wended its way to Donald Trump via a chain of credulity that included Fox News, Elon Musk, and Dr. Oz. Trump spent months touting the drug as a miracle cure despite mounting evidence to the contrary, causing shortages for people who actually needed it to treat lupus and rheumatoid arthritis. The hydroxychloroquine story was muddied even further by a study published in a top medical journal, The Lancet, that claimed the drug was not effective and was potentially harmful. The paper relied on suspect data from a small analytics company called Surgisphere, and was retracted in June.**

Science famously self-corrects. But during the pandemic, the same urgent pace that has produced valuable knowledge at record speed has also sent sloppy claims around the world before anyone could even raise a skeptical eyebrow. The ensuing confusion, and the many genuine unknowns about the virus, have created a vortex of fear and uncertainty, which grifters have sought to exploit. Snake-oil merchants have peddled ineffectual silver bullets (including actual silver). Armchair experts with scant or absent qualifications have found regular slots on the nightly news. And at the center of that confusion is Donald Trump.

During a pandemic, leaders must rally the public, tell the truth, and speak clearly and consistently. Instead, Trump repeatedly contradicted public-health experts, his scientific advisers, and himself. He said that “nobody ever thought a thing like [the pandemic] could happen” and also that he “felt it was a pandemic long before it was called a pandemic.” Both statements cannot be true at the same time, and in fact neither is true.

A month before his inauguration, I wrote that “the question isn’t whether [Trump will] face a deadly outbreak during his presidency, but when.” Based on his actions as a media personality during the 2014 Ebola outbreak and as a candidate in the 2016 election, I suggested that he would fail at diplomacy, close borders, tweet rashly, spread conspiracy theories, ignore experts, and exhibit reckless self-confidence. And so he did.

No one should be shocked that a liar who has made almost 20,000 false or misleading claims during his presidency would lie about whether the U.S. had the pandemic under control; that a racist who gave birth to birtherism would do little to stop a virus that was disproportionately killing Black people; that a xenophobe who presided over the creation of new immigrant-detention centers would order meatpacking plants with a substantial immigrant workforce to remain open; that a cruel man devoid of empathy would fail to calm fearful citizens; that a narcissist who cannot stand to be upstaged would refuse to tap the deep well of experts at his disposal; that a scion of nepotism would hand control of a shadow coronavirus task force to his unqualified son-in-law; that an armchair polymath would claim to have a “natural ability” at medicine and display it by wondering out loud about the curative potential of injecting disinfectant; that an egotist incapable of admitting failure would try to distract from his greatest one by blaming China, defunding the WHO, and promoting miracle drugs; or that a president who has been shielded by his party from any shred of accountability would say, when asked about the lack of testing, “I don’t take any responsibility at all.”

Left: A woman hugs her grandmother through a plastic sheet in Wantagh, New York. Right: An elderly woman has her oxygen levels tested in Yonkers, New York. (Al Bello / Getty; Andrew Renneisen / The New York Times / Redux)

Trump is a comorbidity of the COVID-19 pandemic. He isn’t solely responsible for America’s fiasco, but he is central to it. A pandemic demands the coordinated efforts of dozens of agencies. “In the best circumstances, it’s hard to make the bureaucracy move quickly,” Ron Klain said. “It moves if the president stands on a table and says, ‘Move quickly.’ But it really doesn’t move if he’s sitting at his desk saying it’s not a big deal.”

In the early days of Trump’s presidency, many believed that America’s institutions would check his excesses. They have, in part, but Trump has also corrupted them. The CDC is but his latest victim. On February 25, the agency’s respiratory-disease chief, Nancy Messonnier, shocked people by raising the possibility of school closures and saying that “disruption to everyday life might be severe.” Trump was reportedly enraged. In response, he seems to have benched the entire agency. The CDC led the way in every recent domestic disease outbreak and has been the inspiration and template for public-health agencies around the world. But during the three months when some 2 million Americans contracted COVID-19 and the death toll topped 100,000, the agency didn’t hold a single press conference. Its detailed guidelines on reopening the country were shelved for a month while the White House released its own uselessly vague plan.

Again, everyday Americans did more than the White House. By voluntarily agreeing to months of social distancing, they bought the country time, at substantial cost to their financial and mental well-being. Their sacrifice came with an implicit social contract—that the government would use the valuable time to mobilize an extraordinary, energetic effort to suppress the virus, as did the likes of Germany and Singapore. But the government did not, to the bafflement of health experts. “There are instances in history where humanity has really moved mountains to defeat infectious diseases,” says Caitlin Rivers, an epidemiologist at the Johns Hopkins Center for Health Security. “It’s appalling that we in the U.S. have not summoned that energy around COVID-19.”

Instead, the U.S. sleepwalked into the worst possible scenario: People suffered all the debilitating effects of a lockdown with few of the benefits. Most states felt compelled to reopen without accruing enough tests or contact tracers. In April and May, the nation was stuck on a terrible plateau, averaging 20,000 to 30,000 new cases every day. In June, the plateau again became an upward slope, soaring to record-breaking heights.

Read: Ed Yong on living in a patchwork pandemic

Trump never rallied the country. Despite declaring himself a “wartime president,” he merely presided over a culture war, turning public health into yet another politicized cage match. Abetted by supporters in the conservative media, he framed measures that protect against the virus, from masks to social distancing, as liberal and anti-American. Armed anti-lockdown protesters demonstrated at government buildings while Trump egged them on, urging them to “LIBERATE” Minnesota, Michigan, and Virginia. Several public-health officials left their jobs over harassment and threats.

It is no coincidence that other powerful nations that elected populist leaders—Brazil, Russia, India, and the United Kingdom—also fumbled their response to COVID-19. “When you have people elected based on undermining trust in the government, what happens when trust is what you need the most?” says Sarah Dalglish of the Johns Hopkins Bloomberg School of Public Health, who studies the political determinants of health.

“Trump is president,” she says. “How could it go well?”

The countries that fared better against COVID-19 didn’t follow a universal playbook. Many used masks widely; New Zealand didn’t. Many tested extensively; Japan didn’t. Many had science-minded leaders who acted early; Hong Kong didn’t—instead, a grassroots movement compensated for a lax government. Many were small islands; Germany, large and continental, wasn’t. Each nation succeeded because it did enough things right.

Read: What really doomed America’s coronavirus response

Meanwhile, the United States underperformed across the board, and its errors compounded. The dearth of tests allowed unconfirmed cases to create still more cases, which flooded the hospitals, which ran out of masks, which are necessary to limit the virus’s spread. Twitter amplified Trump’s misleading messages, which raised fear and anxiety among people, which led them to spend more time scouring for information on Twitter. Even seasoned health experts underestimated these compounded risks. Yes, having Trump at the helm during a pandemic was worrying, but it was tempting to think that national wealth and technological superiority would save America. “We are a rich country, and we think we can stop any infectious disease because of that,” says Michael Osterholm, the director of the Center for Infectious Disease Research and Policy at the University of Minnesota. “But dollar bills alone are no match against a virus.”

COVID-19 is an assault on America’s body, and a referendum on the ideas that animate its culture.

Public-health experts talk wearily about the panic-neglect cycle, in which outbreaks trigger waves of attention and funding that quickly dissipate once the diseases recede. This time around, the U.S. is already flirting with neglect, before the panic phase is over. The virus was never beaten in the spring, but many people, including Trump, pretended that it was. Every state reopened to varying degrees, and many subsequently saw record numbers of cases. After Arizona’s cases started climbing sharply at the end of May, Cara Christ, the director of the state’s health-services department, said, “We are not going to be able to stop the spread. And so we can’t stop living as well.” The virus may beg to differ.

At times, Americans have seemed to collectively surrender to COVID-19. The White House’s coronavirus task force wound down. Trump resumed holding rallies, and called for less testing, so that official numbers would be rosier. The country behaved like a horror-movie character who believes the danger is over, even though the monster is still at large. The long wait for a vaccine will likely culminate in a predictable way: Many Americans will refuse to get it, and among those who want it, the most vulnerable will be last in line.

Still, there is some reason for hope. Many of the people I interviewed tentatively suggested that the upheaval wrought by COVID-19 might be so large as to permanently change the nation’s disposition. Experience, after all, sharpens the mind. East Asian states that had lived through the SARS and MERS epidemics reacted quickly when threatened by SARS-CoV-2, spurred by a cultural memory of what a fast-moving coronavirus can do. But the U.S. had barely been touched by the major epidemics of past decades (with the exception of the H1N1 flu). In 2019, more Americans were concerned about terrorists and cyberattacks than about outbreaks of exotic diseases. Perhaps they will emerge from this pandemic with immunity both cellular and cultural.

There are also a few signs that Americans are learning important lessons. A June survey showed that 60 to 75 percent of Americans were still practicing social distancing. A partisan gap exists, but it has narrowed. “In public-opinion polling in the U.S., high-60s agreement on anything is an amazing accomplishment,” says Beth Redbird, a sociologist at Northwestern University, who led the survey. Polls in May also showed that most Democrats and Republicans supported mask wearing, and felt it should be mandatory in at least some indoor spaces. It is almost unheard-of for a public-health measure to go from zero to majority acceptance in less than half a year. But pandemics are rare situations when “people are desperate for guidelines and rules,” says Zoë McLaren, a health-policy professor at the University of Maryland at Baltimore County. The closest analogy is pregnancy, she says, which is “a time when women’s lives are changing, and they can absorb a ton of information. A pandemic is similar: People are actually paying attention, and learning.”

Redbird’s survey suggests that Americans indeed sought out new sources of information—and that consumers of news from conservative outlets, in particular, expanded their media diet. People of all political bents became more dissatisfied with the Trump administration. As the economy nose-dived, the health-care system ailed, and the government fumbled, belief in American exceptionalism declined. “Times of big social disruption call into question things we thought were normal and standard,” Redbird told me. “If our institutions fail us here, in what ways are they failing elsewhere?” And whom are they failing the most?

Americans were in the mood for systemic change. Then, on May 25, George Floyd, who had survived COVID-19’s assault on his airway, asphyxiated under the crushing pressure of a police officer’s knee. The excruciating video of his killing circulated through communities that were still reeling from the deaths of Breonna Taylor and Ahmaud Arbery, and disproportionate casualties from COVID-19. America’s simmering outrage came to a boil and spilled into its streets.

Defiant and largely cloaked in masks, protesters turned out in more than 2,000 cities and towns. Support for Black Lives Matter soared: For the first time since its founding in 2013, the movement had majority approval across racial groups. These protests were not about the pandemic, but individual protesters had been primed by months of shocking governmental missteps. Even people who might once have ignored evidence of police brutality recognized yet another broken institution. They could no longer look away.

It is hard to stare directly at the biggest problems of our age. Pandemics, climate change, the sixth extinction of wildlife, food and water shortages—their scope is planetary, and their stakes are overwhelming. We have no choice, though, but to grapple with them. It is now abundantly clear what happens when global disasters collide with historical negligence.

COVID-19 is an assault on America’s body, and a referendum on the ideas that animate its culture. Recovery is possible, but it demands radical introspection. America would be wise to help reverse the ruination of the natural world, a process that continues to shunt animal diseases into human bodies. It should strive to prevent sickness instead of profiting from it. It should build a health-care system that prizes resilience over brittle efficiency, and an information system that favors light over heat. It should rebuild its international alliances, its social safety net, and its trust in empiricism. It should address the health inequities that flow from its history. Not least, it should elect leaders with sound judgment, high character, and respect for science, logic, and reason.

The pandemic has been both tragedy and teacher. Its very etymology offers a clue about what is at stake in the greatest challenges of the future, and what is needed to address them. Pandemic. Pan and demos. All people.

* This article has been updated to clarify why 3.1 million Americans still cannot afford health insurance.

** This article originally mischaracterized similarities between two studies that were retracted in June, one in The Lancet and one in the New England Journal of Medicine. It has been updated to reflect that the latter study was not specifically about hydroxychloroquine. It appears in the September 2020 print edition with the headline “Anatomy of an American Failure.”

Ed Yong is a staff writer at The Atlantic, where he covers science.


Why a Traffic Flow Suddenly Turns Into a Traffic Jam

Those aggravating slowdowns aren’t one driver’s fault. They’re everybody’s fault. August 3rd 2020

Nautilus

  • Benjamin Seibold

Photo by Raymond Depardon / Magnum Photos.

Few experiences on the road are more perplexing than phantom traffic jams. Most of us have experienced one: The vehicle ahead of you suddenly brakes, forcing you to brake, and making the driver behind you brake. But, soon afterward, you and the cars around you accelerate back to the original speed—and it becomes clear that there were no obstacles on the road, and apparently no cause for the slowdown.

Because traffic quickly resumes its original speed, phantom traffic jams usually don’t cause major delays. But neither are they just minor nuisances. They are hot spots for accidents because they force unexpected braking. And the unsteady driving they cause is not good for your car, causing wear and tear and poor gas mileage.

So what is going on, exactly? To answer this question, mathematicians, physicists, and traffic engineers have devised many types of traffic models. For instance, microscopic models resolve the paths of the individual vehicles, and are good at describing vehicle–vehicle interactions. In contrast, macroscopic models describe traffic as a fluid, in which cars are interpreted as fluid particles. They are effective at capturing large-scale phenomena that involve many vehicles. Finally, cellular models divide the road into segments and prescribe rules by which cars move from cell to cell, providing a framework for capturing the uncertainty that is inherent in real traffic.

It soon becomes clear that there were no obstacles on the road, and apparently no cause for the slowdown.

In setting out to understand how a phantom traffic jam forms, we first have to be aware of the many effects present in real traffic that could conceivably contribute to a jam: different types of vehicles and drivers, unpredictable behavior, on- and off-ramps, and lane switching, to name just a few. We might expect that some combination of these effects is necessary to cause a phantom jam. One of the great advantages of studying mathematical models is that these various effects can be turned off in theoretical analysis or computer simulations. This creates a host of identical, predictable drivers on a single-lane highway without any ramps. In other words, your perfect commute home.

Surprisingly, when all these effects are turned off, phantom traffic jams still occur! This observation tells us that phantom jams are not the fault of individual drivers, but result instead from the collective behavior of all drivers on the road. It works like this. Envision a uniform traffic flow: All vehicles are evenly distributed along the highway, and all drive with the same velocity. Under perfect conditions, this ideal traffic flow could persist forever. However, in reality, the flow is constantly exposed to small perturbations: imperfections on the asphalt, tiny hiccups of the engines, half-seconds of driver inattention, and so on. To predict the evolution of this traffic flow, the big question is to decide whether these small perturbations decay, or are amplified.
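To make the thought experiment concrete, here is a minimal Python sketch of a microscopic (car-following) simulation: identical drivers on a single-lane ring road with no ramps, given one tiny perturbation. It uses the optimal-velocity model of Bando and colleagues; the parameter values are illustrative assumptions, not numbers from this article.

    import math

    # Identical drivers on a ring road, each relaxing toward an
    # "optimal" velocity V(h) set by the headway h to the car ahead.
    N, ROAD = 30, 230.0          # number of cars, ring length in meters
    a = 1.0                      # driver sensitivity, 1/s (assumed)
    dt, steps = 0.1, 6000        # time step and duration (10 minutes)

    def V(h):
        # Smooth optimal-velocity curve: near 0 for tiny headways,
        # saturating at ~15 m/s for large ones (illustrative shape).
        return 15.0 * (math.tanh(0.13 * (h - 5.0)) + math.tanh(0.65)) / 2

    h0 = ROAD / N                            # uniform headway
    x = [i * h0 for i in range(N)]           # evenly spaced cars
    v = [V(h0)] * N                          # all at the same speed
    x[0] += 0.5                              # one small perturbation

    for _ in range(steps):
        acc = []
        for i in range(N):
            h = (x[(i + 1) % N] - x[i]) % ROAD   # headway on the ring
            acc.append(a * (V(h) - v[i]))
        for i in range(N):
            v[i] = max(0.0, v[i] + acc[i] * dt)
            x[i] = (x[i] + v[i] * dt) % ROAD

    # Near-zero spread means the perturbation decayed; a large spread
    # means it grew into a backwards-traveling stop-and-go wave.
    print(f"velocity spread after {steps * dt:.0f} s: {max(v) - min(v):.2f} m/s")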

If they decay, the traffic flow is stable and there are no jams. But if they are amplified, the uniform flow becomes unstable, with small perturbations growing into backwards-traveling waves called “jamitons.” These jamitons can be observed in reality, are visible in various types of models and computer simulations, and have also been reproduced in tightly controlled experiments.

In macroscopic, or “fluid-dynamical,” models, each driver—interpreted as a traffic-fluid particle—observes the local density of traffic around her at any instant in time and accordingly decides on a target velocity: fast, when few cars are nearby, or slow, when the congestion level is high. Then she accelerates or decelerates towards this target velocity. In addition, she anticipates what the traffic will do next. This predictive driving effect is modeled by a “traffic pressure,” which acts in many ways like the pressure in a real fluid.

Phantom jams are not the fault of individual drivers, but result instead from the collective behavior of all drivers on the road.

The mathematical analysis of traffic models reveals that these two are competing effects. The delay before drivers reach their target velocity causes the growth of perturbations, while traffic pressure makes perturbations decay. A uniform flow profile is stable if the anticipation effect dominates, which it does when traffic density is low. The delay effect dominates when traffic densities are high, causing instabilities and, ultimately, phantom jams.
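For readers who want the threshold itself, the balance can be written down in the optimal-velocity model from the sketch above (a standard result for that model, stated in our notation, not a formula from this article). Car n accelerates according to

    \ddot{x}_n = a\,[\,V(h_n) - \dot{x}_n\,], \qquad h_n = x_{n+1} - x_n,

where the sensitivity a is the inverse of the drivers' velocity-adjustment delay and V is the optimal-velocity function. Linearizing around a uniform flow with headway h^* shows that small perturbations decay only when

    a > 2\,V'(h^*).

At low density V is nearly flat, so V'(h^*) is small and the flow is stable; as density rises, V' steepens until the delay effect wins and perturbations grow into jamitons.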

The transition from uniform traffic flow to jamiton-dominated flow is similar to water turning from a liquid state into a gas state. In traffic, this phase transition occurs once traffic density reaches a particular, critical threshold at which the drivers’ anticipation exactly balances the delay effect in their velocity adjustment. The most fascinating aspect of this phase transition is that the character of the traffic changes dramatically while individual drivers do not change their driving behavior at all.

The occurrence of jamiton traffic waves, then, can be explained by phase transition behavior. To think about how to prevent phantom jams, though, we also need to understand the details of the structure of a fully established jamiton. In macroscopic traffic models, jamitons are the mathematical analog of detonation waves, which naturally occur in explosions. All jamitons have a localized region of high traffic density and low vehicle velocity. The transition from high to low speed is extremely abrupt—like a shock wave in a fluid. Vehicles that run into the shock front are forced to brake heavily. After the shock is a “reaction zone,” in which drivers attempt to accelerate back to their original velocity. Finally, at the end of the phantom jam, from the drivers’ perspective, is the “sonic point.”

The name “sonic point” comes from the analogy with detonation waves. In an explosion, it is at this point that the flow turns from supersonic to subsonic. This has crucial implications for the information flow within a detonation wave, as well as in a jamiton. The sonic point provides an information boundary, similar to the event horizon in a black hole: no information from further downstream can affect the jamiton through the sonic point. This makes dispersing jamitons rather difficult—a vehicle can’t affect the jamiton through its driving behavior after passing through.

Instead, the driving behavior of a vehicle must be affected before it runs into a jamiton. Wireless communication between vehicles provides one possibility to achieve this goal, and today’s mathematical models allow us to develop appropriate ways to use tomorrow’s technology. For example, once a vehicle detects a sudden braking event followed by an immediate acceleration, it can broadcast a “jamiton warning” to the vehicles following it within a mile distance. The drivers of those vehicles can then, at the least, prepare for unexpected braking; or, better still, increase their headway so that they can eventually contribute to the dissipation of the traffic wave.
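As a sketch of how such a warning might be generated, consider the following Python fragment. The acceleration thresholds, the ten-second window, the one-mile range, and the broadcast() stub are all hypothetical choices for illustration; none of them come from the article or from any real vehicle-to-vehicle standard.

    from dataclasses import dataclass

    HARD_BRAKE = -3.0     # m/s^2 treated as "sudden braking" (assumed)
    RE_ACCEL = 1.5        # m/s^2 treated as "immediate acceleration" (assumed)
    WINDOW_S = 10.0       # brake and re-acceleration must fall in this window
    RANGE_M = 1609.0      # notify vehicles within roughly one mile behind

    @dataclass
    class Sample:
        t: float          # timestamp in seconds
        accel: float      # longitudinal acceleration in m/s^2

    def detects_jamiton(samples):
        """True if a hard brake is followed quickly by re-acceleration."""
        brake_time = None
        for s in samples:
            if s.accel <= HARD_BRAKE:
                brake_time = s.t
            elif brake_time is not None and s.accel >= RE_ACCEL:
                if s.t - brake_time <= WINDOW_S:
                    return True
        return False

    def broadcast(position_m, message):
        # Stub: a real system would use a V2V radio to reach the
        # vehicles within RANGE_M behind position_m.
        print(f"[V2V @ {position_m:.0f} m] {message}")

    trace = [Sample(0.0, -0.2), Sample(1.0, -4.1), Sample(4.0, 2.0)]
    if detects_jamiton(trace):
        broadcast(12_000, "jamiton ahead: increase headway")

Receiving drivers could then do exactly what the paragraph above describes: prepare for unexpected braking, or open up their headway and help dissipate the wave.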

The character of the traffic changes dramatically while individual drivers don’t change their driving behavior at all.

The insights we glean from fluid-dynamical traffic models can help with many other real-world problems. For example, supply chains exhibit a queuing behavior reminiscent of traffic jams. Jamming, queuing, and wave phenomena can also be observed in gas pipelines, information webs, and flows in biological networks—all of which can be understood as fluid-like flows.

Besides being an important mathematical case study, the phantom traffic jam is, perhaps, also an interesting and instructive social system. Whenever jamitons arise, they are caused by the collective behavior of all drivers—not a few bad apples on the road. Those who drive preventively can dissipate jamitons, and benefit all of the drivers behind them. It is a classic example of the effectiveness of the Golden Rule.

So the next time you are caught in an unwarranted, pointless, and spontaneous traffic jam, remember just how much more it is than it seems.

Benjamin Seibold is an Assistant Professor of Mathematics at Temple University.

Could Air-Conditioning Fix Climate Change?

Researchers proposed a carbon-neutral “synthetic oil well” on every rooftop. August 2nd 2020

Scientific American

  • Richard Conniff

Photo from 4FR / Getty Images.

It is one of the great dilemmas of climate change: We take such comfort from air conditioning that worldwide energy consumption for that purpose has already tripled since 1990. It is on track to grow even faster through mid-century—and assuming fossil-fuel–fired power plants provide the electricity, that could cause enough carbon dioxide emissions to warm the planet by another deadly half-degree Celsius.

A paper published in Nature Communications proposes a partial remedy: Heating, ventilation and air conditioning (or HVAC) systems move a lot of air. They can replace the entire air volume in an office building five or 10 times an hour. Machines that capture carbon dioxide from the atmosphere—a developing fix for climate change—also depend on moving large volumes of air. So why not save energy by tacking the carbon capture machine onto the air conditioner?
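It helps to see the scale of what an HVAC system already moves. The back-of-envelope Python sketch below estimates the CO2 throughput of a single office tower; every input (building volume, air-change rate, capture efficiency, fuel conversion factors) is an assumption for illustration, not a number from the paper.

    # Rough CO2 throughput of one office building's ventilation.
    # All inputs are illustrative assumptions.
    building_volume_m3 = 100_000     # assumed mid-size office tower
    air_changes_per_hour = 7         # within the 5-10 range quoted above
    co2_ppm = 410                    # ambient CO2, parts per million by volume
    co2_density_kg_m3 = 1.9          # CO2 density near room temperature
    capture_efficiency = 0.5         # assumed fraction actually captured

    air_flow_m3_h = building_volume_m3 * air_changes_per_hour
    co2_kg_h = air_flow_m3_h * co2_ppm * 1e-6 * co2_density_kg_m3
    co2_t_yr = co2_kg_h * 8760 * capture_efficiency / 1000

    # ~3.1 kg of CO2 per kg of hydrocarbon fuel (CH2 units), fuel
    # density ~0.76 kg/L, 3.785 L per U.S. gallon.
    fuel_t_yr = co2_t_yr / 3.1
    fuel_gal_yr = fuel_t_yr * 1000 / 0.76 / 3.785
    print(f"~{co2_t_yr:,.0f} t CO2/yr -> ~{fuel_t_yr:,.0f} t "
          f"(~{fuel_gal_yr:,.0f} gal) of fuel")

With these assumed inputs the yield lands in the same order of magnitude as the Frankfurt figures reported below, which is the arithmetic that makes piggybacking capture onto HVAC plausible at all.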

This futuristic proposal, from a team led by chemical engineer Roland Dittmeyer at Germany’s Karlsruhe Institute of Technology, goes even further. The researchers imagine a system of modular components, powered by renewable energy, that would not just extract carbon dioxide and water from the air. It would also convert them into hydrogen, and then use a multistep chemical process to transform that hydrogen into liquid hydrocarbon fuels. The result: “Personalized, localized and distributed, synthetic oil wells” in buildings or neighborhoods, the authors write. “The envisioned model of ‘crowd oil’ from solar refineries, akin to ‘crowd electricity’ from solar panels,” would enable people “to take control and collectively manage global warming and climate change, rather than depending on the fossil power industrial behemoths.”

The research group has already developed an experimental model that can complete several key steps of the process, Dittmeyer says, adding, “The plan in two or three years is to have the first experimental showcase where I can show you a bottle of hydrocarbon fuel from carbon dioxide captured in an air-conditioning unit.”

Neither Dittmeyer nor co-author Geoffrey Ozin, a chemical engineer at the University of Toronto, would predict how long it might take before building owners could purchase and install such units. But Ozin claims much of the necessary technology is already commercially available. He says the carbon capture equipment could come from a Swiss “direct air capture” company called Climeworks, and the electrolyzers to convert carbon dioxide and water into hydrogen are available from Siemens, Hydrogenics or other companies. “And you use Roland’s amazing microstructure catalytic reactors, which convert the hydrogen and carbon dioxide into a synthetic fuel,” he adds. Those reactors are being brought to market by the German company Ineratec, a spinoff from Dittmeyer’s research. Because the system would rely on advanced forms of solar energy, Ozin thinks of the result as “photosynthetic buildings.”

The authors calculate that applying this system to the HVAC in one of Europe’s tallest skyscrapers, the MesseTurm, or Trade Fair Tower, in Frankfurt, would extract and convert enough carbon dioxide to yield at least 2,000 metric tons (660,000 U.S. gallons) of fuel a year. The office space in the entire city of Frankfurt could yield more than 370,000 tons (122 million gallons) annually, they say.

“This is a wonderful concept—it made my day,” says David Keith, a Harvard professor of applied physics and public policy, who was not involved in the new paper. He suggests that the best use for the resulting fuels would be to “help solve two of our biggest energy challenges”: providing a carbon-neutral fuel to fill the gaps left by intermittent renewables such as wind and solar power, and providing fuel for “the hard-to-electrify parts of transportation and industry,” such as airplanes, large trucks and steel- or cement-making. Keith is already targeting some of these markets through Carbon Engineering, a company he founded focused on direct air capture of carbon dioxide for large-scale liquid fuel production. But he says he is “deeply skeptical” about doing it on a distributed building or neighborhood basis. “Economies of scale can’t be wished away. There’s a reason we have huge wind turbines,” he says—and a reason we do not have backyard all-in-one pulp-and-paper mills for disposing of our yard wastes. He believes it is simply “faster and cheaper” to take carbon dioxide from the air and turn it into fuel “by doing it at an appropriate scale.”

Other scientists who were not involved in the new paper note two other potential problems. “The idea that Roland has presented is an interesting one,” says Jennifer Wilcox, a chemical engineer at Worcester Polytechnic Institute, “but more vetting needs to be done in order to determine the true potential of the approach.” While it seems to make sense to take advantage of the air movement already being generated by HVAC systems, Wilcox says, building and operating the necessary fans is not what makes direct air capture systems so expensive. “The dominant capital cost,” she says, “is the solid adsorbent materials”—that is, substances to which the carbon dioxide adheres—and the main energy cost is the heat needed to recover the carbon dioxide from these materials afterward. Moreover, she contends that any available solar or other carbon-free power source would be put to better use in replacing fossil-fuel-fired power plants, to reduce the amount of carbon dioxide getting into the air in the first place.

“The idea of converting captured carbon into liquid fuel is persuasive,” says Matthew J. Realff, a chemical engineer at Georgia Institute of Technology. “We have an enormous investment in our liquid fuel infrastructure, and using that has tremendous value. You wouldn’t have to build a whole new infrastructure. But this concept of doing it at the household level is a little bit fantastical”—partly because the gases involved (carbon monoxide and hydrogen) are toxic and explosive. The process to convert them to a liquid fuel is well understood, Realff says, but it produces a range of products that now typically get separated out in massive refineries—requiring huge amounts of energy. “It’s possible that it could be worked out at the scale that is being proposed,” he adds. “But we haven’t done it at this point, and it may not turn out to be the most effective way from an economic perspective.” There is, however, an unexpected benefit of direct air capture of carbon dioxide, says Realff, and it could help stimulate market acceptance of the technology: One reason office buildings replace their air so frequently is simply to protect workers from elevated levels of carbon dioxide. His research suggests that capturing the carbon dioxide from the air stream may be one way to cut energy costs, by reducing the frequency of air changes.

Dittmeyer disputes the argument that thinking big is always better. He notes that small, modular plants are a trend in some areas of chemical engineering, “because they are more flexible and don’t involve such a financial risk.” He also anticipates that cost will become less of a barrier as governments face up to the urgency of achieving a climate solution, and as jurisdictions increasingly impose carbon taxes or mandate strict energy efficiency standards for buildings.

“Of course, it’s a visionary perspective,” he says, “it relies on this idea of a decentralized product empowering people, not leaving it to industry. Industrial players observe the situation, but as long as there is no profit in the short term, they won’t do anything. If we have the technology that is safe and affordable, though maybe not as cheap, we can generate some momentum” among individuals, much as happened in the early stages of the solar industry. “And then I would expect the industrial parties to act, too.”

Could Air-Conditioning Fix Climate Change?

Researchers proposed a carbon-neutral “synthetic oil well” on every rooftop. August 2nd 2020

Scientific American

  • Richard Conniff
GettyImages-171242693.jpg

Photo from 4FR / Getty Images.

It is one of the great dilemmas of climate change: We take such comfort from air conditioning that worldwide energy consumption for that purpose has already tripled since 1990. It is on track to grow even faster through mid-century—and assuming fossil-fuel–fired power plants provide the electricity, that could cause enough carbon dioxide emissions to warm the planet by another deadly half-degree Celsius.

A paper published in the Nature Communications proposes a partial remedy:  Heating, ventilation and air conditioning (or HVAC) systems move a lot of air. They can replace the entire air volume in an office building five or 10 times an hour.  Machines that capture carbon dioxide from the atmosphere—a developing fix for climate change—also depend on moving large volumes of air.  So why not save energy by tacking the carbon capture machine onto the air conditioner?

This futuristic proposal, from a team led by chemical engineer Roland Dittmeyer at Germany’s Karlsruhe Institute of Technology, goes even further. The researchers imagine a system of modular components, powered by renewable energy, that would not just extract carbon dioxide and water from the air. It would also convert them into hydrogen, and then use a multistep chemical process to transform that hydrogen into liquid hydrocarbon fuels. The result: “Personalized, localized and distributed, synthetic oil wells” in buildings or neighborhoods, the authors write. “The envisioned model of ‘crowd oil’ from solar refineries, akin to ‘crowd electricity’ from solar panels,” would enable people “to take control and collectively manage global warming and climate change, rather than depending on the fossil power industrial behemoths.”

The research group has already developed an experimental model that can complete several key steps of the process, Dittmeyer says, adding, “The plan in two or three years is to have the first experimental showcase where I can show you a bottle of hydrocarbon fuel from carbon dioxide captured in an air-conditioning unit.”

Neither Dittmeyer nor co-author Geoffrey Ozin, a chemical engineer at the University of Toronto, would predict how long it might take before building owners could purchase and install such units. But Ozin claims much of the necessary technology is already commercially available. He says the carbon capture equipment could come from a Swiss “direct air capture” company called Climeworks, and the electrolyzers to convert carbon dioxide and water into hydrogen are available from Siemens, Hydrogenics or other companies. “And you use Roland’s amazing microstructure catalytic reactors, which convert the hydrogen and carbon dioxide into a synthetic fuel,” he adds. Those reactors are being brought to market by the German company Ineratec, a spinoff from Dittmeyer’s research. Because the system would rely on advanced forms of solar energy, Ozin thinks of the result as “photosynthetic buildings.”

The authors calculate that applying this system to the HVAC in one of Europe’s tallest skyscrapers, the MesseTurm, or Trade Fair Tower, in Frankfurt, would extract and convert enough carbon dioxide to yield at least 2,000 metric tons (660,000 U.S. gallons) of fuel a year. The office space in the entire city of Frankfurt could yield more than 370,000 tons (122 million gallons) annually, they say.

“This is a wonderful concept—it made my day,” says David Keith, a Harvard professor of applied physics and public policy, who was not involved in the new paper. He suggests that the best use for the resulting fuels would be to “help solve two of our biggest energy challenges”: providing a carbon-neutral fuel to fill the gaps left by intermittent renewables such as wind and solar power, and providing fuel for “the hard-to-electrify parts of transportation and industry,” such as airplanes, large trucks and steel- or cement-making. Keith is already targeting some of these markets through Carbon Engineering, a company he founded focused on direct air capture of carbon dioxide for large-scale liquid fuel production. But he says he is “deeply skeptical” about doing it on a distributed building or neighborhood basis. “Economies of scale can’t be wished away. There’s a reason we have huge wind turbines,” he says—and a reason we do not have backyard all-in-one pulp-and-paper mills for disposing of our yard wastes. He believes it is simply “faster and cheaper” to take carbon dioxide from the air and turn it into fuel “by doing it an appropriate scale.”

Other scientists who were not involved in the new paper note two other potential problems. “The idea that Roland has presented is an interesting one,” says Jennifer Wilcox, a chemical engineer at Worcester Institute of Technology, “but more vetting needs to be done in order to determine the true potential of the approach.” While it seems to make sense to take advantage of the air movement already being generated by HVAC systems, Wilcox says, building and operating the necessary fans is not what makes direct air capture systems so expensive. “The dominant capital cost,” she says, “is the solid adsorbent materials”—that is, substances to which the carbon dioxide adheres—and the main energy cost is the heat needed to recover the carbon dioxide from these materials afterward. Moreover, she contends that any available solar or other carbon-free power source would be put to better use in replacing fossil-fuel-fired power plants, to reduce the amount of carbon dioxide getting into the air in the first place.

“The idea of converting captured carbon into liquid fuel is persuasive,“ says Matthew J. Realff, a chemical engineer at Georgia Institute of Technology. “We have an enormous investment in our liquid fuel infrastructure, and using that has tremendous value. You wouldn’t have to build a whole new infrastructure. But this concept of doing it at the household level is a little bit fantastical”—partly because the gases involved (carbon monoxide and hydrogen) are toxic and explosive. The process to convert them to a liquid fuel is well understood, Realff says, but it produces a range of products that now typically get separated out in massive refineries—requiring huge amounts of energy. “It’s possible that it could be worked out at the scale that is being proposed,” he adds. “But we haven’t done it at this point, and it may not turn out to be the most effective way from an economic perspective.” There is, however, an unexpected benefit of direct air capture of carbon dioxide, says Realff, and it could help stimulate market acceptance of the technology: One reason office buildings replace their air so frequently is simply to protect workers from elevated levels of carbon dioxide. His research suggests that capturing the carbon dioxide from the air stream may be one way to cut energy costs, by reducing the frequency of air changes.

Dittmeyer disputes the argument that thinking big is always better. He notes that small, modular plants are a trend in some areas of chemical engineering, “because they are more flexible and don’t involve such a financial risk.” He also anticipates that cost will become less of a barrier as governments face up to the urgency of achieving a climate solution, and as jurisdictions increasingly impose carbon taxes or mandate strict energy efficiency standards for buildings.

“Of course, it’s a visionary perspective,” he says. “It relies on this idea of a decentralized product empowering people, not leaving it to industry. Industrial players observe the situation, but as long as there is no profit in the short term, they won’t do anything. If we have the technology that is safe and affordable, though maybe not as cheap, we can generate some momentum” among individuals, much as happened in the early stages of the solar industry. “And then I would expect the industrial parties to act, too.”

Richard Conniff is an award-winning science writer. His books include The Species Seekers: Heroes, Fools, and the Mad Pursuit of Life on Earth (W. W. Norton, 2011).


Could Consciousness All Come Down to the Way Things Vibrate?

A resonance theory of consciousness suggests that the way all matter vibrates, and the tendency for those vibrations to sync up, might be a way to answer the so-called ‘hard problem’ of consciousness.

The Conversation

  • Tam Hunt

What do synchronized vibrations add to the mind/body question? Photo by agsandrew / Shutterstock.com.

Why is my awareness here, while yours is over there? Why is the universe split in two for each of us, into a subject and an infinity of objects? How is each of us our own center of experience, receiving information about the rest of the world out there? Why are some things conscious and others apparently not? Is a rat conscious? A gnat? A bacterium?

These questions are all aspects of the ancient “mind-body problem,” which asks, essentially: What is the relationship between mind and matter? It’s resisted a generally satisfying conclusion for thousands of years.

The mind-body problem enjoyed a major rebranding over the last two decades. Now it’s generally known as the “hard problem” of consciousness, after philosopher David Chalmers coined this term in a now classic paper and further explored it in his 1996 book, “The Conscious Mind: In Search of a Fundamental Theory.”

Chalmers thought the mind-body problem should be called “hard” in comparison to what, with tongue in cheek, he called the “easy” problems of neuroscience: How do neurons and the brain work at the physical level? Of course they’re not actually easy at all. But his point was that they’re relatively easy compared to the truly difficult problem of explaining how consciousness relates to matter.

Over the last decade, my colleague, University of California, Santa Barbara psychology professor Jonathan Schooler and I have developed what we call a “resonance theory of consciousness.” We suggest that resonance – another word for synchronized vibrations – is at the heart of not only human consciousness but also animal consciousness and of physical reality more generally. It sounds like something the hippies might have dreamed up – it’s all vibrations, man! – but stick with me.

How do things in nature – like flashing fireflies – spontaneously synchronize? Photo by Suzanne Tucker /Shutterstock.com.

All About the Vibrations

All things in our universe are constantly in motion, vibrating. Even objects that appear to be stationary are in fact vibrating, oscillating, resonating, at various frequencies. Resonance is a type of motion, characterized by oscillation between two states. And ultimately all matter is just vibrations of various underlying fields. As such, at every scale, all of nature vibrates.

Something interesting happens when different vibrating things come together: They will often start, after a little while, to vibrate together at the same frequency. They “sync up,” sometimes in ways that can seem mysterious. This is described as the phenomenon of spontaneous self-organization.

Mathematician Steven Strogatz provides various examples from physics, biology, chemistry and neuroscience to illustrate “sync” – his term for resonance – in his 2003 book “Sync: How Order Emerges from Chaos in the Universe, Nature, and Daily Life,” including:

  • When fireflies of certain species come together in large gatherings, they start flashing in sync, in ways that can still seem a little mystifying.
  • Lasers are produced when photons of the same power and frequency sync up.
  • The moon’s rotation is exactly synced with its orbit around the Earth such that we always see the same face.
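
Readers who want to watch this kind of sync emerge for themselves can simulate it. The sketch below is a minimal Kuramoto model – the standard mathematical toy for coupled oscillators, closely associated with Strogatz’s own work – with parameter values chosen purely for illustration, not taken from the book.

```python
import numpy as np

# Minimal Kuramoto model: N oscillators with random natural frequencies,
# each nudged toward the phases of the others. Above a critical coupling
# strength K, the population spontaneously synchronizes.
rng = np.random.default_rng(0)
N, K, dt, steps = 100, 2.0, 0.01, 5000
omega = rng.normal(0.0, 1.0, N)       # natural frequencies
theta = rng.uniform(0, 2 * np.pi, N)  # random initial phases

for _ in range(steps):
    # Each oscillator feels the mean pull of every other oscillator's phase.
    coupling = (K / N) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
    theta += (omega + coupling) * dt

# Order parameter r: 0 means total incoherence, 1 means perfect sync.
r = abs(np.exp(1j * theta).mean())
print(f"order parameter r = {r:.2f}")  # typically well above 0.5 for K = 2
```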

Examining resonance leads to potentially deep insights about the nature of consciousness and about the universe more generally.

External electrodes can record a brain’s activity. Photo by vasara / Shutterstock.com.

Sync Inside Your Skull

Neuroscientists have identified sync in their research, too. Large-scale neuron firing occurs in human brains at measurable frequencies, with mammalian consciousness thought to be commonly associated with various kinds of neuronal sync.

For example, German neurophysiologist Pascal Fries has explored the ways in which various electrical patterns sync in the brain to produce different types of human consciousness.

Fries focuses on gamma, beta and theta waves. These labels refer to the speed of electrical oscillations in the brain, measured by electrodes placed on the outside of the skull. Groups of neurons produce these oscillations as they use electrochemical impulses to communicate with each other. It’s the speed and voltage of these signals that, when averaged, produce EEG waves that can be measured at signature cycles per second.

Each type of synchronized activity is associated with certain types of brain function. Image from artellia / Shutterstock.com.

Gamma waves are associated with large-scale coordinated activities like perception, meditation or focused consciousness; beta with maximum brain activity or arousal; and theta with relaxation or daydreaming. These three wave types work together to produce, or at least facilitate, various types of human consciousness, according to Fries. But the exact relationship between electrical brain waves and consciousness is still very much up for debate.
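
As a toy illustration of how such bands are read off a recording, the sketch below synthesizes a signal with theta and gamma components plus noise, then measures the relative power in each conventional band. The sampling rate, amplitudes and band edges are illustrative assumptions, not clinical values.

```python
import numpy as np

# Synthesize 10 seconds of a toy "EEG" with theta (6 Hz) and gamma (40 Hz)
# components plus noise, then measure power per conventional frequency band.
fs = 256                          # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
signal = (2.0 * np.sin(2 * np.pi * 6 * t)     # theta component
          + 0.5 * np.sin(2 * np.pi * 40 * t)  # gamma component
          + np.random.default_rng(0).normal(0, 0.5, t.size))

freqs = np.fft.rfftfreq(t.size, 1 / fs)
power = np.abs(np.fft.rfft(signal)) ** 2

bands = {"theta (4-8 Hz)": (4, 8), "beta (13-30 Hz)": (13, 30),
         "gamma (30-100 Hz)": (30, 100)}
for name, (lo, hi) in bands.items():
    mask = (freqs >= lo) & (freqs < hi)
    print(f"{name}: relative power {power[mask].sum() / power.sum():.2f}")
```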

Fries calls his concept “communication through coherence.” For him, it’s all about neuronal synchronization. Synchronization, in terms of shared electrical oscillation rates, allows for smooth communication between neurons and groups of neurons. Without this kind of synchronized coherence, inputs arrive at random phases of the neuron excitability cycle and are ineffective, or at least much less effective, in communication.

A Resonance Theory of Consciousness

Our resonance theory builds upon the work of Fries and many others, with a broader approach that can help to explain not only human and mammalian consciousness, but also consciousness more broadly.

Based on the observed behavior of the entities that surround us, from electrons to atoms to molecules, to bacteria to mice, bats, rats, and on, we suggest that all things may be viewed as at least a little conscious. This sounds strange at first blush, but “panpsychism” – the view that all matter has some associated consciousness – is an increasingly accepted position with respect to the nature of consciousness.

The panpsychist argues that consciousness did not emerge at some point during evolution. Rather, it’s always associated with matter and vice versa – they’re two sides of the same coin. But the large majority of the mind associated with the various types of matter in our universe is extremely rudimentary. An electron or an atom, for example, enjoys just a tiny amount of consciousness. But as matter becomes more interconnected and rich, so does the mind, and vice versa, according to this way of thinking.

Biological organisms can quickly exchange information through various biophysical pathways, both electrical and electrochemical. Non-biological structures can only exchange information internally using heat/thermal pathways – much slower and far less rich in information in comparison. Living things leverage their speedier information flows into larger-scale consciousness than what would occur in similar-size things like boulders or piles of sand, for example. There’s much greater internal connection and thus far more “going on” in biological structures than in a boulder or a pile of sand.

Under our approach, boulders and piles of sand are “mere aggregates,” just collections of highly rudimentary conscious entities at the atomic or molecular level only. That’s in contrast to what happens in biological life forms where the combinations of these micro-conscious entities together create a higher level macro-conscious entity. For us, this combination process is the hallmark of biological life.

The central thesis of our approach is this: the particular linkages that allow for large-scale consciousness – like those humans and other mammals enjoy – result from a shared resonance among many smaller constituents. The speed of the resonant waves that are present is the limiting factor that determines the size of each conscious entity in each moment.

As a particular shared resonance expands to more and more constituents, the new conscious entity that results from this resonance and combination grows larger and more complex. So the shared resonance in a human brain that achieves gamma synchrony, for example, includes a far larger number of neurons and neuronal connections than is the case for beta or theta rhythms alone.

What about larger inter-organism resonance like the cloud of fireflies with their little lights flashing in sync? Researchers think their bioluminescent resonance arises due to internal biological oscillators that automatically result in each firefly syncing up with its neighbors.

Is this group of fireflies enjoying a higher level of group consciousness? Probably not, since we can explain the phenomenon without recourse to any intelligence or consciousness. But in biological structures with the right kind of information pathways and processing power, these tendencies toward self-organization can and often do produce larger-scale conscious entities.

Our resonance theory of consciousness attempts to provide a unified framework that includes neuroscience, as well as more fundamental questions of neurobiology and biophysics, and also the philosophy of mind. It gets to the heart of the differences that matter when it comes to consciousness and the evolution of physical systems.

It is all about vibrations, but it’s also about the type of vibrations and, most importantly, about shared vibrations.

Tam Hunt is an Affiliate Guest in Psychology at the University of California, Santa Barbara.

More from The Conversation

Science or Compliance?

Getting There Social Theory July 25th 2020

There is a lot going on in the world, much of it quite bad. As a trained social scientist from the early 1970s, I was taught sociological and economic theories no longer popular with the monstrously rich global ruling elite. By the way, in case you don’t know, you do not have to hold political office to be a part of that elite. I was also taught philosophy and the economic history of Britain and the United States.

So, firstly let’s get economics dealt with.  I was taught about its founders, men like Jeremy Bentham, also a philosopher, Malthus, Jevons and Marshall – the latter bringing order, principles and so called ‘rational economic man’ into the discipline.

Society was pretty rigid, wars were the way countries got richer, and class systems became more objectified. All went well until the post-World War One ‘Great Depression’ when, in spite of rapidly falling interest rates, the rich decided to let the poor sink while they retrenched and had a good time – stirring up Nazism.

Economics revolutionary John Maynard Keynes concluded that governments needed to tax the rich and borrow to spend their way out of depression. Britain’s elite would have none of it, but the U.S.A., Italy and Germany took it up. I admit to being a nerd: I was reading Keynes’ ‘General Theory of Employment, Interest and Money’ in my teens, much more interesting to me than following football.

Meanwhile Russia was locked out as a pariah. Britain had done its best to discredit and destroy Russia because the revolutionaries had killed the British Royal family’s treacherous cousins – terrible and corrupt rulers of Russia – and because the prospect of communism rising from the lower orders terrified the British rich.

Only World War Two saved them, offering a wonderful opportunity to slaughter more of the lower orders. In the process, their empire was exposed and fell apart in the post-war age – a ghost of it surviving as the Commonwealth (sic).

So we come to sociology. Along the way, through this Industrial Revolution, empire building, oppression and decline, a so-called ‘Science of Society’ had been developing, with substantial data collected and poured into theories. Marxism was the most famous, with Karl Marx’s forgotten friend, the industrialist Friedrich Engels, well placed to collect the data.

The essence of Marxist theory, based primarily on the Hegelian dialectic and Marx’s historical studies, was that capitalism contained the seeds of its own destruction due to an inherent conflict between those who owned the means of production and the slaves being exploited for profit and greed. Taking the opportunity provided by incompetent Russian elite rule in 1917, Germany helped smuggle Lenin back into Russia to foment the Russian revolution.

That revolution and Russia has terrified the rich western elites ever since, with all manner of methods and episodes used to undermine it.  It is no wonder, leaving the vile Stalin to one side, that Russia developed police state methods, leading to what windbag Churchill called an Iron Curtain descending and dividing Europe.

By 1991, the West, dominated by the British and U.S. elites who had the most to lose from what the U.S. had called ‘The Domino Theory’ – one country after another falling to communism, because the masses might realise how they were being exploited – thought their day had come. Gorbachev got rid of the Berlin Wall; the U.S. undermined him to get their friend Yeltsin into power.

But it didn’t last once Putin stepped up. Oligarchs, allowed by Yeltsin to rip off state assets, rushed to Britain, even donating to Tory Party funds. Ever since, the Western elite have been in overdrive to discredit Putin in spite of the progress he has inspired and directed.

Anglo-U.S. sanctions aren’t working fast enough and Germany wants to buy Russian gas – Nord Stream 2. So now we have a fake socialist, former head of Britain’s corrupt CPS and now Labour’s top man (sic), wanting RT (Russia Today) closed down. There is a lot of worry about people not watching BBC TV despite being forced to pay for an expensive licence, because they do not want its smug upper-middle-class drivel and biased news. This is where sociology comes back into the picture.

The discipline (sic) of actual sociology derived from French thinkers like Auguste Comte, who predicted that sociologists would become the priesthood of modern society – see where our government gets its ‘The Science’ mantra from.

As with economics, sociology was about understanding an increasingly complex way of life in an industrialising world. Early schools of sociological thought drew comparisons with Darwin’s idea of organisms evolving. So society’s head was its government, the transport system its veins and arteries, and so on, with every part working towards functional integration.

Herbert Spencer – whose close friend Mary Ann Evans wrote social-science-orientated novels under the name George Eliot – and Frenchman Emile Durkheim founded this ‘functionalist’ school of sociology. Durkheim, inspired by thinkers from the French Revolutionary era, took the school a stage further. His theory considered dysfunctions, which he called ‘pathological’ factors, like suicide. After 1945, Robert K. Merton went on to write about dysfunctional aspects of society, building on Durkheim’s work. Both men had a concept of ‘anomie’: Durkheim talked of normlessness, Merton of people and societies never satisfied, with ever-receding horizons.

To an old-school person like myself, these ideas are still useful, as is Keynes on economics. One just has to look behind today’s self-interested pseudo-scientific jargon about ‘experts say’, ‘the science’ and ‘studies reveal’. The important thing to remember about any social science – and epidemiology is among them – is that what you get or predict depends on what you put in. As far as Covid 19 is concerned, there are too many vested interests now to take anything they say seriously. It is quite clear that there is no evidence that lockdown works. There is clear evidence that certain groups make themselves vulnerable or are deluded that death does not come with old age.

I am old. I am one of the ‘I’ and ‘Me’ generation whose interests should not come first, and nor should the BAME’s. The same goes for Africa, the Indian subcontinent and the Middle East, where overpopulation, foreign aid, corruption, Oxfam, ignorance, dictators and religious bigotry are not solutions to Covid 19 or anything else. If our pathetic fake-caring politicians carry on like this, grovelling to the likes of the WHO, then we are all doomed.

As for little Greta, she is a rather noisy, poorly educated, opinionated stooge with no idea what she is talking about. As for modern sociology, it is pure narrow-minded feminist dogma, popular on police training courses for morons to use for profiling and fitting up innocent men. They go on toilet-paper degree courses, collecting rather impressive letters – BSc – to make them look and sound like experts.

Robert Cook

New Alien Theory July 18th 2020

After decades of searching, we still haven’t discovered a single sign of extraterrestrial intelligence. Probability tells us life should be out there, so why haven’t we found it yet?

The problem is often referred to as Fermi’s paradox, after the Nobel Prize–winning physicist Enrico Fermi, who once asked his colleagues this question at lunch. Many theories have been proposed over the years. It could be that we are simply alone in the universe or that there is some great filter that prevents intelligent life progressing beyond a certain stage. Maybe alien life is out there, but we are too primitive to communicate with it, or we are placed inside some cosmic zoo, observed but left alone to develop without external interference. Now, three researchers think they may have another potential answer to Fermi’s question: Aliens do exist; they’re just all asleep.

According to a research paper accepted for publication in the Journal of the British Interplanetary Society, extraterrestrials are sleeping while they wait. In the paper, authors from Oxford’s Future of Humanity Institute and the Astronomical Observatory of Belgrade – Anders Sandberg, Stuart Armstrong, and Milan Cirkovic – argue that the universe is too hot right now for advanced, digital civilizations to make the most efficient use of their resources. The solution: Sleep and wait for the universe to cool down, a process known as aestivating (like hibernation but sleeping until it’s colder).

Understanding the new hypothesis first requires wrapping your head around the idea that the universe’s most sophisticated life may elect to leave biology behind and live digitally. Having essentially uploaded their minds onto powerful computers, the civilizations choosing to do this could enhance their intellectual capacities or inhabit some of the harshest environments in the universe with ease.

The idea that life might transition toward a post-biological form of existence is gaining ground among experts. “It’s not something that is necessarily unavoidable, but it is highly likely,” Cirkovic told me in an interview.

Once you’re living digitally, Cirkovic explained, it’s important to process information efficiently. Each computation has a certain cost attached to it, and this cost is tightly coupled with temperature. The colder it gets, the lower the cost, meaning you can do more with the same amount of resources. This is one of the reasons why we cool powerful computers. Though humans may find the universe to be a pretty frigid place (the background radiation hovers about 3 kelvins above absolute zero, the lowest point on the temperature scale), digital minds may find it far too hot.
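
This temperature–cost coupling echoes Landauer’s principle, which puts the minimum energy needed to erase one bit of information at temperature T at kT ln 2, so cooling buys proportionally more computation per joule. The sketch below is a back-of-envelope illustration only; the far-future temperature is an assumed figure chosen to show how a 10^30-fold gain could arise, not a number from the paper.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_limit(temperature_k: float) -> float:
    """Minimum energy (joules) to erase one bit at a given temperature."""
    return K_B * temperature_k * math.log(2)

today = landauer_limit(3.0)     # roughly today's cosmic background
future = landauer_limit(3e-30)  # assumed far-future temperature (illustrative)
print(f"today:  {today:.2e} J per bit")
print(f"future: {future:.2e} J per bit")
print(f"gain:   {today / future:.0e}x more erasures per joule")
```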

But why aestivate? Surely any aliens wanting more efficient processing could cool down their systems manually, just as we do with computers. In the paper, the authors concede this is a possibility. “While it is possible for a civilization to cool down parts of itself to any low temperature,” the authors write, that, too, requires work. So it wouldn’t make sense for a civilization looking to maximize its computational capacity to waste energy on the process. As Sandberg and Cirkovic elaborate in a blog post, it’s more likely that such artificial life would be in a protected sleep mode today, ready to wake up in colder futures.

If such aliens exist, they’re in luck. The universe appears to be cooling down on its own. Over the next trillions of years, as it continues to expand and the formation of new stars slows, the background radiation will reduce to practically zero. Under those conditions, Sandberg and Cirkovic explain, this kind of artificial life would get “tremendously more done.” Tremendous isn’t an understatement, either. The researchers calculate that by employing such a strategy, they could achieve up to 10^30 times more than if they acted today. That’s a 1 with 30 zeroes after it.

But just because the aliens are asleep doesn’t mean we can’t find signs of them. Any aestivating civilization has to preserve resources it intends to use in the future. Processes that waste or threaten these resources, then, should be conspicuously absent, thanks to interference from the aestivators. (If they are sufficiently advanced to upload their minds and aestivate, they should be able to manipulate space.) This includes galaxies colliding, galactic winds venting matter into intergalactic space, and stars converting into black holes, which can push resources beyond the reach of the sleeping civilization or change them into less-useful forms.

Another strategy to find the sleeping aliens, Cirkovic said, might be to try and meddle with the aestivators’ possessions and territory, which we may already reside within. One way of doing this would be to send out self-replicating probes into the universe that would steal the aestivators’ things. Any competent species ought to have measures in place to respond to these kinds of threats. “It could be an exceptionally dangerous test,” he cautioned, “but if there really are very old and very advanced civilizations out there, we can assume there is a potential for danger in anything we do.”

Interestingly, neither Sandberg nor Cirkovic said they have much faith in finding anything. Sandberg, writing on his blog, states that he does not believe the hypothesis to be a likely one: “I personally think the likeliest reason we are not seeing aliens is not that they are aestivating.” He writes that he feels it’s more likely that “they do not exist or are very far away.”

Cirkovic concurred. “I don’t find it very likely, either,” he said in our interview. “I much prefer hypotheses that do not rely on assuming intentional decisions made by extraterrestrial societies. Any assumption is extremely speculative.” There could be forms of energy that we can’t even conceive of using now, he said—producing antimatter in bulk, tapping evaporating black holes, using dark matter. Any of this could change what we might expect to see from an advanced technical civilization.

Yet, he said, the theory has a place. It’s important to cover as much ground as possible. You need to test a wide set of hypotheses one by one—falsifying them, pruning them—to get closer to the truth. “This is how science works. We need to have as many hypotheses and explanations for Fermi’s paradox as possible,” he said.

Plus, there’s a modest likelihood their aestivating aliens idea might be part of the answer, Cirkovic said. We shouldn’t expect a single hypothesis to account for Fermi’s paradox. It will be more of a “patchwork-quilt kind of solution,” he said.

And it’s important to keep exploring solutions. Fermi’s paradox is so much more than an intellectual exercise. It’s about trying to understand what might be out there and how this might explain our past and guide our future.

“I would say that 90-plus percent of hypotheses that were historically proposed to account for Fermi’s paradox have practical consequences,” Cirkovic said. They allow us to think proactively about some of the problems we as a species face, or may one day face, and prompt us to develop strategies to actively shape a more prosperous and secure future for humanity. “We can apply this reasoning to our past, to the emergence of life and complexity. We can also apply similar reasoning to thinking about our future. It can help us avoid catastrophes and help us understand the most likely fate of intelligent species in the universe.”

Stephen Hawking Left Us Bold Predictions on AI, Superhumans, and Aliens

The great physicist’s thoughts on the future of the human race and the fragility of planet Earth.

Quartz

  • Max de Haldevang

The late physicist Stephen Hawking’s last writings predict that a breed of superhumans will take over, having used genetic engineering to surpass their fellow beings.

In Brief Answers to the Big Questions, published in October 2018 and excerpted in the UK’s Sunday Times (paywall), Hawking pulls no punches on subjects like machines taking over, the biggest threat to Earth, and the possibilities of intelligent life in space.

Artificial Intelligence

Hawking delivers a grave warning on the importance of regulating AI, noting that “in the future AI could develop a will of its own, a will that is in conflict with ours.” A possible arms race over autonomous weapons should be stopped before it can start, he writes, asking what would happen if a crash similar to the 2010 stock market Flash Crash happened with weapons. He continues:

In short, the advent of super-intelligent AI would be either the best or the worst thing ever to happen to humanity. The real risk with AI isn’t malice, but competence. A super-intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours we’re in trouble. You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green-energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants.

Earth’s Bleak Future, Gene Editing, and Superhumans

The bad news: At some point in the next 1,000 years, nuclear war or environmental calamity will “cripple Earth.” However, by then, “our ingenious race will have found a way to slip the surly bonds of Earth and will therefore survive the disaster.” The Earth’s other species probably won’t make it, though.

The humans who do escape Earth will probably be new “superhumans” who have used gene-editing technology like CRISPR to outpace others. They’ll do so by defying laws against genetic engineering and improving their memories, disease resistance, and life expectancy, he says.

Hawking seems curiously enthusiastic about this final point, writing, “There is no time to wait for Darwinian evolution to make us more intelligent and better natured.”

Once such superhumans appear, there are going to be significant political problems with the unimproved humans, who won’t be able to compete. Presumably, they will die out, or become unimportant. Instead, there will be a race of self-designing beings who are improving themselves at an ever-increasing rate. If the human race manages to redesign itself, it will probably spread out and colonise other planets and stars.

Intelligent Life in Space

Hawking acknowledges there are various explanations for why intelligent life hasn’t been found or has not visited Earth. His predictions here aren’t so bold, but his preferred explanation is that humans have “overlooked” forms of intelligent life that are out there.

Does God Exist?

No, Hawking says.

The question is, is the way the universe began chosen by God for reasons we can’t understand, or was it determined by a law of science? I believe the second. If you like, you can call the laws of science “God”, but it wouldn’t be a personal God that you would meet and put questions to.

The Biggest Threats to Earth

Threat number one is an asteroid collision, like the one that killed the dinosaurs. However, “we have no defense” against that, Hawking writes. More immediately: climate change. “A rise in ocean temperature would melt the ice caps and cause the release of large amounts of carbon dioxide,” Hawking writes. “Both effects could make our climate like that of Venus with a temperature of 250C.”

The Best Idea Humanity Could Implement

Nuclear fusion power. That would give us clean energy with no pollution or global warming.

More from Quartz



Could Invisible Aliens Really Exist Among Us? An Astrobiologist Explains

The Earth may be crawling with undiscovered creatures with a biochemistry that differs from life as we know it. July 13th 2020

The Conversation

  • Samantha Rolfe

They probably won’t look anything like this. Credit: Martina Badini / Shutterstock.

Life is pretty easy to recognise. It moves, it grows, it eats, it excretes, it reproduces. Simple. In biology, researchers often use the acronym “MRSGREN” to describe it. It stands for movement, respiration, sensitivity, growth, reproduction, excretion and nutrition.

But Helen Sharman, Britain’s first astronaut and a chemist at Imperial College London, recently said that alien lifeforms that are impossible to spot may be living among us. How could that be possible?

While life may be easy to recognise, it’s actually notoriously difficult to define and has had scientists and philosophers in debate for centuries – if not millennia. For example, a 3D printer can reproduce itself, but we wouldn’t call it alive. On the other hand, a mule is famously sterile, but we would never say it doesn’t live.

As nobody can agree, there are more than 100 definitions of what life is. An alternative (but imperfect) approach is describing life as “a self-sustaining chemical system capable of Darwinian evolution”, which works for many cases we want to describe.

The lack of definition is a huge problem when it comes to searching for life in space. Not being able to define life other than “we’ll know it when we see it” means we are truly limiting ourselves to geocentric, possibly even anthropocentric, ideas of what life looks like. When we think about aliens, we often picture a humanoid creature. But the intelligent life we are searching for doesn’t have to be humanoid.

Life, But Not as We Know It

Sharman says she believes aliens exist and “there’s no two ways about it”. Furthermore, she wonders: “Will they be like you and me, made up of carbon and nitrogen? Maybe not. It’s possible they’re here right now and we simply can’t see them.”

Such life would exist in a “shadow biosphere”. By that, I don’t mean a ghost realm, but undiscovered creatures probably with a different biochemistry. This means we can’t study or even notice them because they are outside of our comprehension. Assuming it exists, such a shadow biosphere would probably be microscopic.

So why haven’t we found it? We have limited ways of studying the microscopic world, as only a small percentage of microbes can be cultured in a lab. This may mean that there could indeed be many lifeforms we haven’t yet spotted. We do now have the ability to sequence the DNA of unculturable strains of microbes, but this can only detect life as we know it – life that contains DNA.

If we find such a biosphere, however, it is unclear whether we should call it alien. That depends on whether we mean “of extraterrestrial origin” or simply “unfamiliar”.

Silicon-Based Life

A popular suggestion for an alternative biochemistry is one based on silicon rather than carbon. It makes sense, even from a geocentric point of view. Around 90 percent of the Earth is made up of silicon, iron, magnesium and oxygen, which means there’s lots to go around for building potential life.

Artist’s impression of a silicon-based life form. Credit: Zita.

Silicon is similar to carbon in that it has four electrons available for creating bonds with other atoms. But silicon is heavier, with 14 protons (protons make up the atomic nucleus with neutrons) compared to the six in the carbon nucleus. While carbon can create strong double and triple bonds to form long chains useful for many functions, such as building cell walls, it is much harder for silicon. It struggles to create strong bonds, so long-chain molecules are much less stable.

What’s more, common silicon compounds, such as silicon dioxide (or silica), are generally solid at terrestrial temperatures and insoluble in water. Compare this to highly soluble carbon dioxide, for example, and we see that carbon is more flexible and provides many more molecular possibilities.

Life on Earth is also fundamentally different from the bulk composition of the Earth: the chemical composition of life has an approximate correlation with the chemical composition of the sun, with 98 percent of atoms in biology consisting of hydrogen, oxygen and carbon. Another argument against a silicon-based shadow biosphere is that too much silicon is locked up in rocks. So if there were viable silicon lifeforms here, they may have evolved elsewhere.

That said, there are arguments in favour of silicon-based life on Earth. Nature is adaptable. A few years ago, scientists at Caltech managed to breed a bacterial protein that created bonds with silicon – essentially bringing silicon to life. So even though silicon is inflexible compared with carbon, it could perhaps find ways to assemble into living organisms, potentially including carbon.

And when it comes to other places in space, such as Saturn’s moon Titan or planets orbiting other stars, we certainly can’t rule out the possibility of silicon-based life.

To find it, we have to somehow think outside of the terrestrial biology box and figure out ways of recognising lifeforms that are fundamentally different from the carbon-based form. There are plenty of experiments testing out these alternative biochemistries, such as the one from Caltech.

Regardless of the belief held by many that life exists elsewhere in the universe, we have no evidence for that. So it is important to consider all life as precious, no matter its size, quantity or location. The Earth supports the only known life in the universe. So no matter what form life elsewhere in the solar system or universe may take, we have to make sure we protect it from harmful contamination – whether it is terrestrial life or alien lifeforms.

So could aliens be among us? I don’t believe that we have been visited by a life form with the technology to travel across the vast distances of space. But we do have evidence for life-forming, carbon-based molecules having arrived on Earth on meteorites, so the evidence certainly doesn’t rule out the same possibility for more unfamiliar life forms.

Samantha Rolfe is a Lecturer in Astrobiology and Principal Technical Officer at the University of Hertfordshire’s Bayfordbury Observatory.

Memories Can Be Injected and Survive Amputation and Metamorphosis July 13th 2020

If a headless worm can regrow a memory, then where is the memory stored? And, if a memory can regenerate, could you transfer it?

Nautilus

  • Marco Altamirano

The study of memory has always been one of the stranger outposts of science. In the 1950s, an unknown psychology professor at the University of Michigan named James McConnell made headlines—and eventually became something of a celebrity—with a series of experiments on freshwater flatworms called planaria. These worms fascinated McConnell not only because they had, as he wrote, a “true synaptic type of nervous system” but also because they had “enormous powers of regeneration…under the best conditions one may cut [the worm] into as many as 50 pieces” with each section regenerating “into an intact, fully-functioning organism.” 

In an early experiment, McConnell trained the worms à la Pavlov by pairing an electric shock with flashing lights. Eventually, the worms recoiled to the light alone. Then something interesting happened when he cut the worms in half. The head of one half of the worm grew a tail and, understandably, retained the memory of its training. Surprisingly, however, the tail, which grew a head and a brain, also retained the memory of its training. If a headless worm can regrow a memory, then where is the memory stored, McConnell wondered. And, if a memory can regenerate, could he transfer it? 

Perhaps. Swedish neurobiologist Holger Hydén had suggested, in the 1960s, that memories were stored in neuron cells, specifically in RNA, the messenger molecule that takes instructions from DNA and links up with ribosomes to make proteins, the building blocks of life. McConnell, having become interested in Hydén’s work, scrambled to test for a speculative molecule that he called “memory RNA” by grafting portions of trained planaria onto the bodies of untrained planaria. His aim was to transfer RNA from one worm to another but, encountering difficulty getting the grafts to stick, he turned to a “more spectacular type of tissue transfer, that of ‘cannibalistic ingestion.’” Planaria, accommodatingly, are cannibals, so McConnell merely had to blend trained worms and feed them to their untrained peers. (Planaria lack the acids and enzymes that would completely break down food, so he hoped that some RNA might be integrated into the consuming worms.) 

Shockingly, McConnell reported that cannibalizing trained worms induced learning in untrained planaria. In other experiments, he trained planaria to run through mazes and even developed a technique for extracting RNA from trained worms in order to inject it into untrained worms in an effort to transmit memories from one animal to another. Eventually, after his retirement in 1988, McConnell faded from view, and his work was relegated to the sidebars of textbooks as a curious but cautionary tale. Many scientists simply assumed that invertebrates like planaria couldn’t be trained, making the dismissal of McConnell’s work easy. McConnell also published some of his studies in his own journal, The Worm Runner’s Digest, alongside sci-fi humor and cartoons. As a result, there wasn’t a lot of interest in attempting to replicate his findings.

Nonetheless, McConnell’s work has recently experienced a sort of renaissance, taken up by innovative scientists like Michael Levin, a biologist at Tufts University specializing in limb regeneration, who has reproduced modernized and automated versions of his planarian maze-training experiments. The planarian itself has enjoyed a newfound popularity, too, after Levin cut the tail off a worm and shot a bioelectric current through the incision, provoking the worm to regrow another head in place of its tail (garnering Levin the endearing moniker of “young Frankenstein”). Levin also sent 15 worm pieces into space, with one returning, strangely enough, with two heads (“remarkably,” Levin and his colleagues wrote, “amputating this double-headed worm again, in plain water, resulted again in the double-headed phenotype.”) 

David Glanzman, a neurobiologist at the University of California, Los Angeles, has another promising research program that recently struck a chord reminiscent of McConnell’s memory experiments—although, instead of planaria, Glanzman’s lab works mostly with aplysia, the darling mollusk of neuroscience on account of its relatively simple nervous system. (Also known as “sea hares,” aplysia are giant, inky sea slugs that swim with undulating, ruffled wings.)

In 2015, Glanzman was testing the textbook theory on memory, which holds that memories are stored in synapses, the connective junctions between neurons. His team, attempting to create and erase a memory in aplysia, periodically delivered mild electric shocks to train the mollusk to prolong a reflex, one where it withdraws, upon touch, its siphon, a little breathing tube between the gill and the tail. After training, his lab witnessed new synaptic growth between the sensory neuron that felt touch and the motor neuron that triggered the siphon withdrawal reflex. Developing after the training, the increased connectivity between those neurons seemed to corroborate the theory that memories are stored in synaptic connections. Glanzman’s team tried to erase the memory of the training by dismantling the synaptic connections between the neurons and, sure enough, the snails subsequently behaved as if they’d lost the memory, further corroborating the synaptic memory theory. After Glanzman’s team administered a “reminder” shock to the snails, the researchers were surprised to quickly notice different, newer synaptic connections growing between the neurons. The snails then behaved, once again, as if they remembered the sensitizing training they seemed to have previously forgotten. 

If the memory persisted through such major synaptic change, where the synaptic connections that emerged through training had disappeared and completely different, newer connections had taken their place, then maybe, Glanzman thought, memories are not really stored in synapses after all. The experiment seems like something out of Eternal Sunshine of the Spotless Mind, a movie in which ex-lovers trying to forget each other undergo a questionable procedure that deletes the memory of a person, but evidently not to the point beyond recall. The lovers both hide a plan deep within their minds to meet in Montauk in the end. The movie suggests, in a way, that memories are never completely lost, that it always remains possible to go back, even to people and places that seem long forgotten.

But if memories aren’t stored in synaptic connections, where are they stored instead? Glanzman’s unpopular hypothesis was that they might reside in the nucleus of the neuron cell, where DNA and RNA sequences compose instructions for life processes. DNA sequences are fixed and unchanging, so most of an organism’s adaptability comes from supple epigenetic mechanisms, processes that regulate gene expression in response to environmental cues or pressures, which sometimes involve RNA. If DNA is printed sheet music, RNA-induced epigenetic mechanisms are like improvisational cuts and arrangements that might conduct learning and memory.

Perhaps memories reside in epigenetic changes induced by RNA, that improv molecule that scores protein-based adaptations of life. Glanzman’s team went back to their aplysia and trained them over two days to prolong their siphon-withdrawal reflex. They then dissected their nervous systems, extracting RNA involved in forming the memory of their training, and injected it into untrained aplysia, which were tested for learning a day later. Glanzman’s team found that the RNA from trained donors induced learning, while the RNA from untrained donors had no effect. They had transferred a memory, vaguely but surely, from one animal to another, and they had strong evidence that RNA was the memory-transferring agent.

Glanzman now believes that synapses are necessary for the activation of a memory, but that the memory is encoded in the nucleus of the neuron through epigenetic changes. “It’s like a pianist without hands,” Glanzman says. “He may know how to play Chopin, but he’d need hands to exercise the memory.” 

The work of Douglas Blackiston, an Allen Discovery Center scientist at Tufts University, who has studied memory in insects, paints a similar picture. He wanted to know if a butterfly could remember something about its life as a caterpillar, so he exposed caterpillars to the scent of ethyl acetate followed by a mild electric shock. After acquiring an aversion to ethyl acetate, the caterpillars pupated and, after emerging as adult butterflies several weeks later, were tested for memory of their aversive training. Surprisingly, the adult butterflies remembered—but how? The entire caterpillar becomes a cytoplasmic soup before it metamorphosizes into a butterfly. “The remodeling is catastrophic,” Blackiston says. “After all, we’re moving from a crawling machine to a flying machine. Not only the body but the entire brain has to be rewired.”

It’s hard to study exactly what goes on during pupation in vivo, but there’s a subset of caterpillar neurons that may persist in what are called “mushroom bodies,” a pair of structures involved in olfaction that many insects have located near their antennae. In other words, some structure remains. “It’s not soup,” Blackiston says. “Well, maybe it’s soup, but it’s chunky.” There’s near complete pruning of neurons during pupation, and the few neurons that remain become disconnected from other neurons, dissolving the synaptic connections between them in the process, until they reconnect with other neurons during the remodeling into the butterfly brain. Like Glanzman, Blackiston employs a hand analogy: “It’s like a small group of neurons were holding hands, but then let go and moved around, finally reconnecting with different neurons in the new brain.” If the memory was stored anywhere, Blackiston suspects it was stored in the subset of neurons located in the mushroom bodies, the only known carryover material from the caterpillar to the butterfly. 

In the end, despite its whimsical caricature of the science of memory, Eternal Sunshine may have stumbled on a correct premise. Glanzman and Blackiston believe their experiments harbor hopeful news for Alzheimer’s patients: it might be possible to repair deteriorated neurons that could, at least theoretically, find their way back to lost memories, perhaps with the guidance of appropriate RNA.

Marco Altamirano is a writer based in New Orleans and the author of Time, Technology, and Environment: An Essay on the Philosophy of Nature.

Blindsight: a strange neurological condition that could help explain consciousness

July 2, 2020 11.31am BST

Author

  1. Henry Taylor Birmingham Fellow in Philosophy, University of Birmingham

Disclosure statement

Henry Taylor previously received funding from The Leverhulme Trust and Isaac Newton Trust, but they do not stand to benefit from publication of this article.

Partners

University of Birmingham

University of Birmingham provides funding as a founding partner of The Conversation UK.

The Conversation UK receives funding from these organisations

View the full list

CC BY ND. We believe in the free flow of information
Republish our articles for free, online or in print, under Creative Commons licence.

Imagine being completely blind but still being able to see. Does that sound impossible? Well, it happens. A few years ago, a man (let’s call him Barry) suffered two strokes in quick succession. As a result, Barry was completely blind, and he walked with a stick.

One day, some psychologists placed Barry in a corridor full of obstacles like boxes and chairs. They took away his walking stick and told him to walk down the corridor. The result of this simple experiment would prove dramatic for our understanding of consciousness. Barry was able to navigate around the obstacles without tripping over a single one.

Barry has blindsight, an extremely rare condition that is as paradoxical as it sounds. People with blindsight consistently deny awareness of items in front of them, but they are capable of amazing feats, which demonstrate that, in some sense, they must be able to see them.

In another case, a man with blindsight (let’s call him Rick) was put in front of a screen and told to guess (from several options) what object was on the screen. Rick insisted that he didn’t know what was there and that he was just guessing, yet he was guessing with over 90% accuracy.
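
To see why that accuracy cannot plausibly be luck, a back-of-envelope binomial calculation helps. The trial count and the number of answer options below are illustrative assumptions – the report does not give the study’s exact parameters – but the conclusion is robust to any reasonable choice.

```python
from math import comb

# Chance of scoring at least 90/100 by blind guessing among 4 options.
# Both numbers are illustrative assumptions, not the study's actual design.
n, p = 100, 0.25
p_at_least_90 = sum(comb(n, k) * p**k * (1 - p)**(n - k)
                    for k in range(90, n + 1))
print(f"probability of >=90% correct by pure guessing: {p_at_least_90:.1e}")
```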

Into the brain

Blindsight results from damage to an area of the brain called the primary visual cortex. This is one of the areas, as you might have guessed, responsible for vision. Damage to the primary visual cortex can result in blindness – sometimes total, sometimes partial.

So how does blindsight work? The eyes receive light and convert it into information that is then passed into the brain. This information then travels along a series of pathways through the brain to eventually end up at the primary visual cortex. For people with blindsight, this area is damaged and cannot properly process the information, so the information never makes it to conscious awareness. But the information is still processed by other areas of the visual system that are intact, enabling people with blindsight to carry out the kind of tasks that we see in the cases of Barry and Rick.

Some blind people appear to be able to ‘see’. Akemaster/Shutterstock

Blindsight serves as a particularly striking example of a general phenomenon, which is just how much goes on in the brain below the surface of consciousness. This applies just as much to people without blindsight as people with it. Studies have shown that naked pictures of attractive people can draw our attention, even when we are completely unaware of them. Other studies have demonstrated that we can correctly judge the colour of an object without any conscious awareness of it.

Blindsight debunked?

Blindsight has generated a lot of controversy. Some philosophers and psychologists have argued that people with blindsight might be conscious of what is in front of them after all, albeit in a vague and hard-to-describe way.

This suggestion presents a difficulty, because ascertaining whether someone is conscious of a particular thing is a complicated and highly delicate task. There is no “test” for consciousness. You can’t put a probe or a monitor next to someone’s head to test whether they are conscious of something – it’s a totally private experience.

We can, of course, ask them. But interpreting what people say about their own experiences can be a thorny task. Their reports sometimes seem to indicate that they have no consciousness at all of the objects in front of them (Rick once insisted that he did not believe that there really were any objects there). Other individuals with blindsight report feeling “visual pin-pricks” or “dark shadows” indicating the tantalising possibility that they did have some conscious awareness left over.

The boundaries of consciousness

So, what does blindsight tell us about consciousness? Exactly how you answer this question will heavily depend on which interpretation you accept. Do you think that those who have blindsight are in some sense conscious of what is out there or not?

The visual cortex. Geyer S, Weiss M, Reimann K, Lohmann G and Turner R/wikipedia, CC BY-SA

If they’re not, then blindsight provides an exciting tool that we can use to work out exactly what consciousness is for. By looking at what the brain can do without consciousness, we can try to work out which tasks ultimately require consciousness. From that, we may be able to work out what the evolutionary function of consciousness is, which is something that we are still relatively in the dark about.

On the other hand, if we could prove that people with blindsight are conscious of what is in front of them, this raises no less interesting and exciting questions about the limits of consciousness. What is their consciousness actually like? How does it differ from more familiar kinds of consciousness? And precisely where in the brain does consciousness begin and end? If they are conscious, despite damage to their visual cortex, what does that tell us about the role of this brain area in generating consciousness?

In my research, I am interested in the way that blindsight reveals the fuzzy boundaries at the edges of vision and consciousness. In cases like blindsight, it becomes increasingly unclear whether our normal concepts such as “perception”, “consciousness” and “seeing” are up to the task of adequately describing and explaining what is really going on. My goal is to develop more nuanced views of perception and consciousness that can help us understand their distinctly fuzzy edges.

To ultimately understand these cases, we will need to employ careful philosophical reflection on the concepts we use and the assumptions we make, just as much as we will need a thorough scientific investigation of the mechanics of the mind.


Scientists say most likely number of contactable alien civilisations is 36

New calculations come up with an estimate for worlds capable of communicating with others.

The Guardian

  • Nicola Davis

We’re listening … but is anything out there? Photo by dszc / Getty Images.

They may not be little green men. They may not arrive in a vast spaceship. But according to new calculations there could be more than 30 intelligent civilisations in our galaxy today capable of communicating with others.

Experts say the work not only offers insights into the chances of life beyond Earth but could shed light on our own future and place in the cosmos.

“I think it is extremely important and exciting because for the first time we really have an estimate for this number of active intelligent, communicating civilisations that we potentially could contact and find out there is other life in the universe – something that has been a question for thousands of years and is still not answered,” said Christopher Conselice, a professor of astrophysics at the University of Nottingham and a co-author of the research.

In 1961 the astronomer Frank Drake proposed what became known as the Drake equation, setting out seven factors that would need to be known to come up with an estimate for the number of intelligent civilisations out there. These factors ranged from the average number of stars that form each year in the galaxy through to the timespan over which a civilisation would be expected to be sending out detectable signals.

But few of the factors are measurable. “Drake equation estimates have ranged from zero to a few billion [civilisations] – it is more like a tool for thinking about questions rather than something that has actually been solved,” said Conselice.
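
For reference, Drake’s formulation simply multiplies the seven factors together. The toy evaluation below makes that concrete; every input value is an illustrative guess of mine, which is exactly why published estimates span such a wide range.

```python
# Drake equation: N = R* x fp x ne x fl x fi x fc x L.
# Every value below is an illustrative guess, not a measurement -- the
# point being that plausible guesses span "zero to a few billion".
R_star = 1.5   # average star formation rate in the galaxy (stars/year)
f_p    = 1.0   # fraction of stars with planets
n_e    = 0.2   # habitable planets per system with planets
f_l    = 0.5   # fraction of habitable planets where life arises
f_i    = 0.1   # fraction of those where intelligence evolves
f_c    = 0.1   # fraction releasing detectable signals
L      = 1000  # years a civilisation keeps transmitting

N = R_star * f_p * n_e * f_l * f_i * f_c * L
print(f"N = {N:.1f} communicating civilisations")  # ~1.5 with these guesses
```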

Now Conselice and colleagues report in the Astrophysical Journal how they refined the equation with new data and assumptions to come up with their estimates.

“Basically, we made the assumption that intelligent life would form on other [Earth-like] planets like it has on Earth, so within a few billion years life would automatically form as a natural part of evolution,” said Conselice.

The assumption, known as the Astrobiological Copernican Principle, is fair as everything from chemical reactions to star formation is known to occur if the conditions are right, he said. “[If intelligent life forms] in a scientific way, not just a random way or just a very unique way, then you would expect at least this many civilisations within our galaxy,” he said.

He added that, while it is a speculative theory, he believes alien life would have similarities in appearance to life on Earth. “We wouldn’t be super shocked by seeing them,” he said.

Under the strictest set of assumptions – where, as on Earth, life forms between 4.5bn and 5.5bn years after star formation – there are likely between four and 211 civilisations in the Milky Way today capable of communicating with others, with 36 the most likely figure. But Conselice noted that this figure is conservative, not least as it is based on how long our own civilisation has been sending out signals into space – a period of just 100 years so far.

The team add that our civilisation would need to survive at least another 6,120 years for two-way communication. “They would be quite far away … 17,000 light years is our calculation for the closest one,” said Conselice. “If we do find things closer … then that would be a good indication that the lifespan of [communicating] civilisations is much longer than a hundred or a few hundred years, that an intelligent civilisation can last for thousands or millions of years. The more we find nearby, the better it looks for the long-term survival of our own civilisation.”

Dr Oliver Shorttle, an expert in extrasolar planets at the University of Cambridge who was not involved in the research, said several as yet poorly understood factors needed to be unpicked to make such estimates, including how life on Earth began and how many Earth-like planets considered habitable could truly support life.

Dr Patricia Sanchez-Baracaldo, an expert on how Earth became habitable, from the University of Bristol, was more upbeat, despite emphasising that many developments were needed on Earth for conditions for complex life to exist, including photosynthesis. “But, yes if we evolved in this planet, it is possible that intelligent life evolved in another part of the universe,” she said.

Prof Andrew Coates, of the Mullard Space Science Laboratory at University College London, said the assumptions made by Conselice and colleagues were reasonable, but the quest to find life was likely to take place closer to home for now.

“[The new estimate] is an interesting result, but one which it will be impossible to test using current techniques,” he said. “In the meantime, research on whether we are alone in the universe will include visiting likely objects within our own solar system, for example with our Rosalind Franklin Exomars 2022 rover to Mars, and future missions to Europa, Enceladus and Titan [moons of Jupiter and Saturn]. It’s a fascinating time in the search for life elsewhere.”

Nicola Davis writes about science, health and environment for the Guardian and Observer and was commissioning editor for Observer Tech Monthly. Previously she worked for the Times and other publications. She has an MChem and DPhil in Organic Chemistry from the University of Oxford. Nicola also presents the Science Weekly podcast.

‘Miles-wide anomaly’ over Texas sparks concerns HAARP weather manipulation has BEGUN

BIZARRE footage has emerged that proves the US government is testing weather manipulation technology, according to wild claims online.

The clip, captured in Texas, US, shows the moment radar was completely blotted out by an unknown source.

Another video shows a green blob forming above Sugar Land, quickly growing in size in a circular formation.

According to Travis Herzog, a meteorologist at ABC News, the phenomenon was caused by a flock of birds filling the sky.

But conspiracy theorist Tyler Glockner, who runs YouTube channel secureteam10, disagrees.

He posted a video yesterday speculating it could be something more sinister which was accidentally exposed by the news channel.

He also argued that a flock of birds large enough to cause such an event would have been seen or recorded by someone.

And his video has now racked up more than 350,000 hits in less than 48 hours.

“I am willing to bet there is a power station near the centre of that burst. Some kind of HAARP technology,” one viewer suggested.

Another added: “This is HAARP and some kind of weather modification/manipulation technology.”

And a third simply claimed: “Scary weather manipulation in progress.”

The High-Frequency Active Auroral Research Programme was initiated as a research project between the US Air Force, Navy, University of Alaska Fairbanks and the Defense Advanced Research Projects Agency (DARPA).

Many conspiracists believe the US government is already using the HAARP programme to control weather occurrences through the use of chemtrailing.

Over the years, HAARP has been blamed for generating natural catastrophes such as thunderstorms and power loss as well as strange cloud formations.

But it was actually designed and built by BAE Advanced Technologies to help analyse the ionosphere and investigate the potential for developing enhanced technologies.

Climate change expert Janos Pasztor previously revealed to Daily Star Online how this technology could lead to weaponisation.

The following extract is from U.S. research on weather modification dating back to 1957. Posted July 5th 2020

COMMENT

From: Andrea Psoras – QEDI [apsoras@qedinternational.com]
Sent: Monday, May 05, 2008 3:08 PM
To: secretary
Subject: CFTC Requests Public Input on Possible Regulation of “Event Contracts”

Commodity Futures Trading Commission, Three Lafayette Centre, 1155 21st Street, NW, Washington, DC 20581. Tel: 202-418-5000; fax: 202-418-5521; TTY: 202-418-5514; questions@cftc.gov

Dear Commissioners and Secretary:

Not everything is a commodity, nor should something that is typically covered by some sort of property and casualty insurance suddenly become exchange tradable. Insurance companies have for a number of years provided compensation of some sort for random but periodic events. Where the insurance industry wants to off-load its risk at the expense of other commodities-market participants, it contributes to the sort of moral hazard which I vigorously oppose. Where there is ‘interest’ in developing these sorts of risk-event instruments, it seems to me an admission that the insurance sector is perhaps marginal or, worse, incompetent or too greedy to determine how to offer insurance for events presumably produced by nature. And then there are the weather and earth-shaking technologies – what some circles call weather and electromagnetic weapons – used insidiously, unfortunately, by our military, our intelligence apparatus, and perhaps our military contractors, for purposes contrary to the oath of office our public servants take to the Constitution.

Given that, I suggest prohibiting the use of that technology, rather than leaving someone else holding the bag in the event of destruction produced by military contractor technology in the guise of so-called ‘natural’ events from ‘mother nature’. * Consider that Rep. Dennis Kucinich as well as former Senator John Glenn attempted to have our Congress prohibit the use of space-based weapons. That class of weapons includes the ‘weather weapons’. See http://www.globalresearch.ca/articles/CH0409F.html as well as other articles about this on the Global Research website. Respectfully, Andrea Psoras

“CFTC Requests Public Input on Possible Regulation of ‘Event Contracts’. Washington, DC – The Commodity Futures Trading Commission (CFTC) is asking for public comment on the appropriate regulatory treatment of financial agreements offered by markets commonly referred to as event, prediction, or information markets.

During the past several years, the CFTC has received numerous requests for guidance involving the trading of event contracts. These contracts typically involve financial agreements that are linked to events or measurable outcomes and often serve as information collection vehicles. The contracts are based on a broad spectrum of events, such as the results of presidential elections, world population levels, or economic measures. “Event markets are rapidly evolving, and growing, presenting a host of difficult policy and legal questions, including: what public purpose is served in the oversight of these markets, and what differentiates these markets from pure gambling outside the CFTC’s jurisdiction?” said CFTC Acting Chairman Walt Lukken.

“The CFTC is evaluating how these markets should be regulated with the proper protections in place and I encourage members of the public to provide their views.” In response to requests for guidance, and to promote regulatory certainty, the CFTC has commenced a comprehensive review of the Commodity Exchange Act’s applicability to event contracts and markets.

The CFTC is issuing a Concept Release to solicit the expertise and opinions of all interested parties, including CFTC registrants, legal practitioners, economists, state and federal regulatory authorities, academics, and event market participants. The Concept Release will be published in the Federal Register shortly; comments will be accepted for 60 days after publication in the Federal Register.” Comments may also be submitted electronically to secretary@cftc.gov. All comments received will be posted on the CFTC’s website.

* Weather as a Force Multiplier: Owning the Weather in 2025. A Research Paper Presented to Air Force 2025, August 1996.

Below are highlights contained within the actual report. Please remember that this research report was issued in 1996 – 8 years ago – and that much of what was discussed as being in preliminary stages back then is now a reality.

In the United States, weather-modification will likely become a part of national security policy with both domestic and international applications. Our government will pursue such a policy, depending on its interests, at various levels. In this paper we show that appropriate application of weather-modification can provide battlespace dominance to a degree never before imagined. In the future, such operations will enhance air and space superiority and provide new options for battlespace shaping and battlespace awareness. “The technology is there, waiting for us to pull it all together” [General Gordon R. Sullivan, “Moving into the 21st Century: America’s Army and Modernization,” Military Review (July 1993), quoted in Mary Ann Seagraves and Richard Szymber, “Weather a Force Multiplier,” Military Review, November/December 1995, 75]. A global, precise, real-time, robust, systematic weather-modification capability would provide war-fighting CINCs [an acronym meaning “Commander in Chief” of a unified command] with a powerful force multiplier to achieve military objectives.

Since weather will be common to all possible futures, a weather-modification capability would be universally applicable and have utility across the entire spectrum of conflict. The capability of influencing the weather even on a small scale could change it from a force degrader to a force multiplier.

In 1957, the president’s advisory committee on weather control explicitly recognized the military potential of weather-modification, warning in their report that it could become a more important weapon than the atom bomb [William B. Meyer, “The Life and Times of US Weather: What Can We Do About It?” American Heritage 37, no. 4 (June/July 1986), 48]. Today [since 1969], weather-modification is the alteration of weather phenomena over a limited area for a limited period of time [Herbert S. Appleman, An Introduction to Weather-modification (Scott AFB, Ill.: Air Weather Service/MAC, September 1969), 1]. In the broadest sense, weather-modification can be divided into two major categories: suppression and intensification of weather patterns. In extreme cases, it might involve the creation of completely new weather patterns, attenuation or control of severe storms, or even alteration of global climate on a far-reaching and/or long-lasting scale.

Extreme and controversial examples of weather modification – creation of made-to-order weather, large-scale climate modification, creation and/or control (or “steering”) of severe storms, etc. – were researched as part of this study … the weather-modification applications proposed in this report range from technically proven to potentially feasible.

Applying Weather-modification to Military Operations

How will the military, in general, and the USAF, in particular, manage and employ a weather-modification capability?

We envision this will be done by the weather force support element (WFSE), whose primary mission would be to support the war-fighting CINCs with weather-modification options, in addition to current forecasting support. Although the WFSE could operate anywhere as long as it has access to the GWN and the system components already discussed, it will more than likely be a component within the AOC or its 2025-equivalent. With the CINC’s intent as guidance, the WFSE formulates weather-modification options using information provided by the GWN, local weather data network, and weather-modification forecast model.

The options include range of effect, probability of success, resources to be expended, the enemy’s vulnerability, and risks involved. The CINC chooses an effect based on these inputs, and the WFSE then implements the chosen course, selecting the right modification tools and employing them to achieve the desired effect. Sensors detect the change and feed data on the new weather pattern to the modeling system, which updates its forecast accordingly. The WFSE checks the effectiveness of its efforts by pulling down the updated current conditions and new forecast(s) from the GWN and local weather data network, and plans follow-on missions as needed. This concept is illustrated in figure 3-2.

Two key technologies are necessary to meld an integrated, comprehensive, responsive, precise, and effective weather-modification system. Advances in the science of chaos are critical to this endeavor. Also key to the feasibility of such a system is the ability to model the extremely complex nonlinear system of global weather in ways that can accurately predict the outcome of changes in the influencing variables. Researchers have already successfully controlled single variable nonlinear systems in the lab and hypothesize that current mathematical techniques and computer capacity could handle systems with up to five variables.

Advances in these two areas would make it feasible to affect regional weather patterns by making small, continuous nudges to one or more influencing factors. Conceivably, with enough lead time and the right conditions, you could get “made-to-order” weather [William Brown, “Mathematicians Learn How to Tame Chaos,” New Scientist (30 May 1992): 16]. The total weather-modification process would be a real-time loop of continuous, appropriate, measured interventions, and feedback capable of producing desired weather behavior. The essential ingredient of the weather-modification system is the set of intervention techniques used to modify the weather.
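
The “small, continuous nudges” idea rests on what is now textbook control of chaos: near an unstable state a chaotic system is exquisitely sensitive, so tiny, well-timed parameter corrections can pin it in place. As a purely illustrative sketch, and nothing taken from the report itself, here is an OGY-style controller stabilising the chaotic logistic map, the standard single-variable example, at its unstable fixed point; every numerical value is an arbitrary choice for demonstration.

# OGY-style control of the logistic map x' = r * x * (1 - x), a stand-in for
# the "single variable nonlinear systems" said to have been controlled in labs.
# All numbers here are illustrative.
r0 = 3.9                           # nominal parameter, well inside the chaotic regime
x_star = 1.0 - 1.0 / r0            # unstable fixed point of the uncontrolled map
lam = 2.0 - r0                     # local slope at x_star (|lam| > 1, hence unstable)
g = x_star * (1.0 - x_star)        # sensitivity of the map to a change in r
max_nudge = 0.05                   # cap on the allowed parameter perturbation
window = max_nudge * g / abs(lam)  # act only when a small nudge is enough

x = 0.4
for _ in range(2000):              # let the orbit wander until it drifts near x_star
    dx = x - x_star
    # Choose dr so the linearised next deviation, lam*dx + g*dr, cancels to zero.
    dr = -lam * dx / g if abs(dx) < window else 0.0
    x = (r0 + dr) * x * (1.0 - x)

print(f"target {x_star:.4f}, state after control {x:.4f}")

Once the orbit happens to pass through the small control window, each correction cancels the deviation to first order and the state stays pinned, using nudges far smaller than any brute-force forcing. That, at toy scale, is the logic of the “measured interventions” loop described above.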

The number of specific intervention methodologies is limited only by the imagination, but with few exceptions they involve infusing either energy or chemicals into the meteorological process in the right way, at the right place and time. The intervention could be designed to modify the weather in a number of ways, such as influencing clouds and precipitation, storm intensity, climate, space, or fog.

PRECIPITATION

“… significant beneficial influences can be derived through judicious exploitation of the solar absorption potential of carbon black dust” [William M. Gray et al., “Weather-modification by Carbon Dust Absorption of Solar Energy,” Journal of Applied Meteorology 15 (April 1976): 355]. The study ultimately found that this technology could be used to enhance rainfall on the mesoscale, generate cirrus clouds, and enhance cumulonimbus (thunderstorm) clouds in otherwise dry areas … If we are fortunate enough to have a fairly large body of water available upwind from the targeted battlefield, carbon dust could be placed in the atmosphere over that water. Assuming the dynamics are supportive in the atmosphere, the rising saturated air will eventually form clouds and rainshowers downwind over the land. Numerous dispersal techniques [of carbon dust] have already been studied, but the most convenient, safe, and cost-effective method discussed is the use of afterburner-type jet engines to generate carbon particles while flying through the targeted air.

This method is based on injection of liquid hydrocarbon fuel into the afterburner’s combustion gases [this explains why contrails have now become chemtrails]. To date, much work has been done on UAVs [Unmanned Aerial Vehicles], which can closely (if not completely) match the capabilities of piloted aircraft. If this UAV technology were combined with stealth and carbon dust technologies, the result could be a UAV aircraft invisible to radar while en route to the targeted area, which could spontaneously create carbon dust in any location. If clouds were seeded (using chemical nuclei similar to those used today or perhaps a more effective agent discovered through continued research) before their downwind arrival at a desired location, the result could be a suppression of precipitation. In other words, precipitation could be “forced” to fall before its arrival in the desired territory, thereby making the desired territory “dry.”

FOG

Field experiments with lasers have demonstrated the capability to dissipate warm fog at an airfield with zero visibility. Smart materials based on nanotechnology are currently being developed with gigaops computer capability at their core.

They could adjust their size to optimal dimensions for a given fog seeding situation and even make adjustments throughout the process. They might also enhance their dispersal qualities by adjusting their buoyancy, by communicating with each other, and by steering themselves within the fog. They will be able to provide immediate and continuous effectiveness feedback by integrating with a larger sensor network and can also change their temperature and polarity to improve their seeding effects [J. Storrs Hall, “Overview of Nanotechnology,” adapted from papers by Ralph C. Merkle and K. Eric Drexler, Rutgers University, November 1995]. As mentioned above, UAVs could be used to deliver and distribute these smart materials. Recent army research lab experiments have demonstrated the feasibility of generating fog.

They used commercial equipment to generate thick fog in an area 100 meters long. Further study has shown fogs to be effective at blocking much of the UV/IR/visible spectrum, effectively masking emitters of such radiation from IR weapons [Robert A. Sutherland, “Results of Man-Made Fog Experiment,” Proceedings of the 1991 Battlefield Atmospherics Conference (Fort Bliss, Tex.: Hinman Hall, 3–6 December 1991)].

STORMS

The damage caused by storms is indeed horrendous. For instance, a tropical storm has an energy equal to 10,000 one-megaton hydrogen bombs [Louis J. Battan, Harvesting the Clouds (Garden City, N.Y.: Doubleday & Co., 1960), 120]. At any instant there are approximately 2,000 thunderstorms taking place. In fact 45,000 thunderstorms, which contain heavy rain, hail, microbursts, wind shear, and lightning, form daily [Gene S. Stuart, “Whirlwinds and Thunderbolts,” Nature on the Rampage (Washington, D.C.: National Geographic Society, 1986), 130]. Weather-modification technologies might involve techniques that would increase latent heat release in the atmosphere, provide additional water vapor for cloud cell development, and provide additional surface and lower atmospheric heating to increase atmospheric instability.

The focus of the weather-modification effort would be to provide additional “conditions” that would make the atmosphere unstable enough to generate cloud and eventually storm cell development. One area of storm research that would significantly benefit military operations is lightning modification … but some offensive military benefit could be obtained by doing research on increasing the potential and intensity of lightning. Possible mechanisms to investigate would be ways to modify the electropotential characteristics over certain targets to induce lightning strikes on the desired targets as the storm passes over their location. In summary, the ability to modify battlespace weather through storm cell triggering or enhancement would allow us to exploit the technological “weather” advances.

SPACE WEATHER-MODIFICATION

This section discusses opportunities for control and modification of the ionosphere and near-space environment for force enhancement. A number of methods have been explored or proposed to modify the ionosphere, including injection of chemical vapors and heating or charging via electromagnetic radiation or particle beams (such as ions, neutral particles, x-rays, MeV particles, and energetic electrons) [Peter M. Banks, “Overview of Ionospheric Modification from Space Platforms,” in Ionospheric Modification and Its Potential to Enhance or Degrade the Performance of Military Systems (AGARD Conference Proceedings 485, October 1990), 19-1].

It is important to note that many techniques to modify the upper atmosphere have been successfully demonstrated experimentally. Ground-based modification techniques employed by the FSU include vertical HF heating, oblique HF heating, microwave heating, and magnetospheric modification [Capt Mike Johnson, Upper Atmospheric Research and Modification – Former Soviet Union (U), DST-18205-475-92 (Foreign Aerospace Science and Technology Center, AF Intelligence Command, 24 September 1992)]. Creation of an artificial uniform ionosphere was first proposed by Soviet researcher A. V. Gurevich in the mid-1970s. An artificial ionospheric mirror (AIM) would serve as a precise mirror for electromagnetic [EM] radiation of a selected frequency or a range of frequencies.

[Figure: Artificial Ionospheric Mirrors – ground-based AIM generator and stations; diagram not reproduced.] While most weather-modification efforts rely on the existence of certain preexisting conditions, it may be possible to produce some weather effects artificially, regardless of preexisting conditions. For instance, virtual weather could be created by influencing the weather information received by an end user.

Nanotechnology also offers possibilities for creating simulated weather. A cloud, or several clouds, of microscopic computer particles, all communicating with each other and with a larger control system, could provide tremendous capability. Interconnected, atmospherically buoyant, and having navigation capability in three dimensions, such clouds could be designed to have a wide range of properties … Even if power levels achieved were insufficient to be an effective strike weapon [if power levels WERE sufficient, they would be an effective strike weapon], the potential for psychological operations in many situations could be fantastic. One major advantage of using simulated weather to achieve a desired effect is that, unlike other approaches, it makes what are otherwise the results of deliberate actions appear to be the consequences of natural weather phenomena. In addition, it is potentially relatively inexpensive to do. According to J. Storrs Hall, a …

The Electronic Frontier Foundation, an advocate for freedom of information on the Internet, has condemned Santorum’s bill. “It is a terrible precedent for information policy,” said staff member Ren Bucholz. “If the rule is, data provided by taxpayer money can’t be provided to the public but through a private entity, we won’t have a very useful public agency.”

Andrea Psoras, Senior Vice President, QED International Associates, Inc., US Agent for Rapid Ratings International, 708 Third Avenue, 23rd Fl, New York, NY 10017. (212) 953-4058 (o), apsoras@gmail.com; (646) 709-9629 (c), apsoras@qedinternational.com; http://www.qedinternational.com

07-13-19

Apollo 11 really landed on the Moon—and here’s how you can be sure (sorry, conspiracy nuts)

We went to the Moon. Here’s all the proof you’ll ever need.

By Charles Fishman · 7 minute read

This is the 43rd in an exclusive series of 50 articles, one published each day until July 20, exploring the 50th anniversary of the first-ever Moon landing. You can check out 50 Days to the Moon here every day.

The United States sent astronauts to the Moon, they landed, they walked around, they drove around, they deployed lots of instruments, they packed up nearly half a ton of Moon rocks, and they flew home.

No silly conspiracy was involved.

There were no Hollywood movie sets.

Anybody who writes about Apollo and talks about Apollo is going to be asked how we actually know that we went to the Moon.

Not that the smart person asking the question has any doubts, mind you, but how do we know we went, anyway?

It’s a little like asking how we know there was a Revolutionary War. Where’s the evidence? Maybe it’s just made up by the current government to force us to think about America in a particular way.

How do we know there was a Titanic that sank?

And by the way, when I go to the battlefields at Gettysburg—or at Normandy, for that matter—they don’t look much like battlefields to me. Can you prove we fought a Civil War? World War II?

In the case of Apollo, in the case of the race to the Moon, there is a perfect reply.

The race to the Moon in the 1960s was, in fact, an actual race.

The success of the Soviet space program—from Sputnik to Strelka and Belka to Yuri Gagarin—was the reason for Apollo. John Kennedy launched America to the Moon precisely to beat the Russians to the Moon.

When Kennedy was frustrated with the fact that the Soviets were first to achieve every important milestone in space, he asked Vice President Lyndon Johnson to figure it out—fast. The opening question of JFK’s memo to LBJ:

“Do we have a chance of beating the Soviets by putting a laboratory in space, or by a trip around the Moon, or by a rocket to land on the Moon, or by a rocket to go to the Moon and back with a man. Is there any other space program which promises dramatic results in which we could win?”

Win. Kennedy wanted to know how to beat the Soviets—how to win in space.

That memo was written a month before Kennedy’s dramatic “go to the Moon” speech. The race to the Moon he launched would last right up to the moment, almost 100 months later, when Apollo 11 would land on the Moon.

The race would shape the American and Soviet space programs in subtle and also dramatic ways.

Apollo 8 was the first U.S. mission that went to the Moon: The Apollo capsule and the service module, with Frank Borman, Bill Anders, and Jim Lovell, flew to the Moon at Christmastime in 1968, but without a lunar module. The lunar modules were running behind, and there wasn’t one ready for the flight.

Apollo 8 represented a furious rejuggling of the NASA flight schedule to accommodate the lack of a lunar module. The idea was simple: Let’s get Americans to the Moon quick, even if they weren’t ready to land on the Moon. Let’s “lasso the Moon” before the Soviets do.

At the moment when the mission was conceived and the schedule redone to accommodate a different kind of Apollo 8, in late summer 1968, NASA officials were worried that the Russians might somehow mount exactly the same kind of mission: Put cosmonauts in a capsule and send them to orbit the Moon, without landing. Then the Soviets would have made it to the Moon first.

Apollo 8 was designed to confound that, and it did.

In early December 1968, in fact, the rivalry remained alive enough that Time magazine did a cover story on it. “Race for the Moon” was the headline, and the cover was an illustration of an American astronaut and a Soviet cosmonaut, in spacesuits, leaping for the surface of the Moon.

Seven months later, when Apollo 11, with Michael Collins, Neil Armstrong, and Buzz Aldrin aboard, entered orbit around the Moon on July 19, 1969, there was a Soviet spaceship there to meet them. It was Luna 15, and it had been launched a few days before Apollo 11. Its goal: Land on the Moon, scoop up Moon rocks and dirt, and then dash back to a landing in the Soviet Union before Collins, Aldrin, and Armstrong could return with their own Moon rocks.

If that had happened, the Soviets would at least have been able to claim that they had gotten Moon rocks back to Earth first (and hadn’t needed people to do it).

So put aside for a moment the pure ridiculousness of a Moon landing conspiracy that somehow doesn’t leak out. More than 410,000 Americans worked on Apollo, on behalf of 20,000 companies. Was their work fake? Were they all in on the conspiracy? And then, also, all their family members—more than 1 million people—not one of whom ever whispered a word of the conspiracy?

What of the reporters? Hundreds of reporters covering space, writing stories not just of the dramatic moments, but about all the local companies making space technology, from California to Delaware.

Put aside as well the thousands of hours of audio recordings—between spacecraft and mission control; in mission control, where dozens of controllers talked to each other; in the spacecraft themselves, where there were separate recordings of the astronauts just talking to each other in space. There were 2,502 hours of Apollo spaceflight, more than 100 days. It’s an astonishing undertaking not only to script all that conversation, but then to get people to enact it with authenticity, urgency, and emotion. You can now listen to all of it online, and it would take you many years to do so.

For those who believe the missions were fake, all that can, somehow, be waved off. A puzzling shadow in a picture from the Moon, a quirk in a single moment of audio recording, reveals that the whole thing was a vast fabrication. (With grace and straight-faced reporting, the Associated Press this week reviewed, and rebutted, the most popular sources of the conspiracy theories.)

Forget all that.

If the United States had been faking the Moon landings, one group would not have been in on the conspiracy: The Soviets.

The Soviet Union would have revealed any fraud in the blink of an eye, and not just without hesitation, but with joy and satisfaction.

In fact, the Russians did just the opposite. The Soviet Union was one of the few places on Earth (along with China and North Korea) where ordinary people couldn’t watch the landing of Apollo 11 and the Moon walk in real time. It was real enough for the Russians that they didn’t let their own people see it.

That’s all the proof you need. If the Moon landings had been faked—indeed, if any part of them had been made up, or even exaggerated—the Soviets would have told the world. They were watching. Right to the end, they had their own ambitions to be first to the Moon, in the only way they could muster at that point.

And that’s a kind of proof that the conspiracy-meisters cannot wriggle around.

But another thing is true about the Moon landings: You’ll never convince someone who wants to think they were faked that they weren’t. There is nothing in particular you could ever say, no particular moment or piece of evidence you could produce, that would cause someone like that to light up and say, “Oh! You’re right! We did go to the Moon.”

Anyone who wants to live in a world where we didn’t go to the Moon should be happy there. That’s a pinched and bizarre place, one that defies not just the laws of physics but also the laws of ordinary human relationships.

I prefer to live in the real world, the one in which we did go to the Moon, because the work that was necessary to get American astronauts to the Moon and back was extraordinary. It was done by ordinary people, right here on Earth, people who were called to do something they weren’t sure they could, and who then did it, who rose to the occasion in pursuit of a remarkable goal.

That’s not just the real world, of course. It’s the best of America.

We went to the Moon, and on the 50th anniversary of that first landing, it’s worth banishing forever the nutty idea that we didn’t, and also appreciating what the achievement itself required, and what it says about the people who were able to do it.


A Mysterious Anomaly Under Africa Is Radically Weakening Earth’s Magnetic Field Posted June 29th 2020

PETER DOCKRILL 6 MARCH 2018

Around Earth, an invisible magnetic field traps electrons and other charged particles.
(Image: © NASA’s Goddard Space Flight Center)

Above our heads, something is not right. Earth’s magnetic field is in a state of dramatic weakening – and according to mind-boggling new research, this phenomenal disruption is part of a pattern lasting for over 1,000 years.

The Earth’s magnetic field is weakening between Africa and South America, causing issues for satellites and spacecraft.

Scientists studying the phenomenon observed that an area known as the South Atlantic Anomaly has grown considerably in recent years, though the reason for it is not entirely clear.

Using data gathered by the European Space Agency’s (ESA) Swarm constellation of satellites, researchers noted that the area of the anomaly dropped in strength by more than 8 per cent between 1970 and 2020.

“The new, eastern minimum of the South Atlantic Anomaly has appeared over the last decade and in recent years is developing vigorously,” said Jürgen Matzka, from the German Research Centre for Geosciences.

“We are very lucky to have the Swarm satellites in orbit to investigate the development of the South Atlantic Anomaly. The challenge now is to understand the processes in Earth’s core driving these changes.”

Earth’s magnetic field doesn’t just give us our north and south poles; it’s also what protects us from solar winds and cosmic radiation – but this invisible force field is rapidly weakening, to the point scientists think it could actually flip, with our magnetic poles reversing.

As crazy as that sounds, this actually does happen over vast stretches of time. The last time it occurred was about 780,000 years ago, although it got close again around 40,000 years back.

When it takes place, it’s not quick, with the polarity reversal slowly occurring over thousands of years.

Nobody knows for sure if another such flip is imminent, and one of the reasons for that is a lack of hard data.

The region that concerns scientists the most at the moment is called the South Atlantic Anomaly – a huge expanse of the field stretching from Chile to Zimbabwe. The field is so weak within the anomaly that it’s hazardous for Earth’s satellites to enter it, because the additional radiation it’s letting through could disrupt their electronics.

“We’ve known for quite some time that the magnetic field has been changing, but we didn’t really know if this was unusual for this region on a longer timescale, or whether it was normal,” says physicist Vincent Hare from the University of Rochester in New York.

One of the reasons scientists don’t know much about the magnetic history of this region of Earth is it lacks what’s called archeomagnetic data – physical evidence of magnetism in Earth’s past, preserved in archaeological relics from bygone ages.

One such bygone age belonged to a group of ancient Africans, who lived in the Limpopo River Valley – which borders Zimbabwe, South Africa, and Botswana: regions that fall within the South Atlantic Anomaly of today.

Approximately 1,000 years ago, these Bantu peoples observed an elaborate, superstitious ritual in times of environmental hardship.

During times of drought, they would burn down their clay huts and grain bins, in a sacred cleansing rite to make the rains come again – never knowing they were performing a kind of preparatory scientific fieldwork for researchers centuries later.

“When you burn clay at very high temperatures, you actually stabilise the magnetic minerals, and when they cool from these very high temperatures, they lock in a record of the earth’s magnetic field,” one of the team, geophysicist John Tarduno explains.

As such, an analysis of the ancient artefacts that survived these burnings reveals much more than just the cultural practices of the ancestors of today’s southern Africans.

“We were looking for recurrent behaviour of anomalies because we think that’s what is happening today and causing the South Atlantic Anomaly,” Tarduno says.

“We found evidence that these anomalies have happened in the past, and this helps us contextualise the current changes in the magnetic field.”

Like a “compass frozen in time immediately after [the] burning”, the artefacts revealed that the weakening in the South Atlantic Anomaly isn’t a standalone phenomenon of history.

Similar fluctuations occurred in the years 400-450 CE, 700-750 CE, and 1225-1550 CE – and the fact that there’s a pattern tells us that the position of the South Atlantic Anomaly isn’t a geographic fluke.

“We’re getting stronger evidence that there’s something unusual about the core-mantle boundary under Africa that could be having an important impact on the global magnetic field,” Tarduno says.

The current weakening in Earth’s magnetic field – which has been taking place for the last 160 years or so – is thought to be caused by a vast reservoir of dense rock called the African Large Low Shear Velocity Province, which sits about 2,900 kilometres (1,800 miles) below the African continent.

“It is a profound feature that must be tens of millions of years old,” the researchers explained in The Conversation last year.

“While thousands of kilometres across, its boundaries are sharp.”

This dense region, existing in between the hot liquid iron of Earth’s outer core and the stiffer, cooler mantle, is suggested to somehow be disturbing the iron that helps generate Earth’s magnetic field.

There’s a lot more research to do before we know more about what’s going on here.

As the researchers explain, the conventional idea of pole reversals is that they can start anywhere in the core – but the latest findings suggest what happens in the magnetic field above us is tied to phenomena at special places in the core-mantle boundary.

If they’re right, a big piece of the field weakening puzzle just fell in our lap – thanks to a clay-burning ritual a millennium ago. What this all means for the future, though, no-one is certain.

“We now know this unusual behaviour has occurred at least a couple of times before the past 160 years, and is part of a bigger long-term pattern,” Hare says.

“However, it’s simply too early to say for certain whether this behaviour will lead to a full pole reversal.”

The findings are reported in Geophysical Research Letters.

Extending from Earth like invisible spaghetti is the planet’s magnetic field. Created by the churn of Earth’s core, this field is important for everyday life: It shields the planet from solar particles, it provides a basis for navigation and it might have played an important role in the evolution of life on Earth. 

But what would happen if Earth’s magnetic field disappeared tomorrow? A larger number of charged solar particles would bombard the planet, putting power grids and satellites on the fritz and increasing human exposure to higher levels of cancer-causing ultraviolet radiation. In other words, a missing magnetic field would have consequences that would be problematic but not necessarily apocalyptic, at least in the short term.

And that’s good news, because for more than a century, it’s been weakening. Even now, there are especially flimsy spots, like the South Atlantic Anomaly in the Southern Hemisphere, which create technical problems for low-orbiting satellites. 

One possibility, according to the ESA, is that the weakening field is a sign that the Earth’s magnetic field is about to reverse, whereby the North Pole and South Pole switch places.

The last time a “geomagnetic reversal” took place was 780,000 years ago, with some scientists claiming that the next one is long overdue. Typically, such events take place every 250,000 years.

The repercussions of such an event could be significant, as the Earth’s magnetic field plays an important role in protecting the planet from solar winds and harmful cosmic radiation.

Telecommunication and satellite systems also rely on it to operate, suggesting that computers and mobile phones could experience difficulties.

The South Atlantic Anomaly has been captured by the Swarm satellite constellation (Division of Geomagnetism, DTU Space)

The South Atlantic Anomaly is already causing issues with satellites orbiting Earth, the ESA warned, while spacecraft flying in the area could also experience “technical malfunctions”.

A 2018 study published in the scientific journal Proceedings of the National Academy of Sciences found that despite the weakening field, “Earth’s magnetic field is probably not reversing”.

The study also explained that the process is not an instantaneous one and could take tens of thousands of years to take place.

ESA said it would continue to monitor the weakening magnetic field with its constellation of Swarm satellites.

“The mystery of the origin of the South Atlantic Anomaly has yet to be solved,” the space agency stated. “However, one thing is certain: magnetic field observations from Swarm are providing exciting new insights into the scarcely understood processes of Earth’s interior.”

Alien life is out there, but our theories are probably steering us away from it. Posted May 22nd 2020

If we discovered evidence of alien life, would we even realise it? Life on other planets could be so different from what we’re used to that we might not recognise any biological signatures that it produces.

Recent years have seen changes to our theories about what counts as a biosignature and which planets might be habitable, and further turnarounds are inevitable. But the best we can really do is interpret the data we have with our current best theory, not with some future idea we haven’t had yet.

This is a big issue for those involved in the search for extraterrestrial life. As Scott Gaudi of Nasa’s Advisory Council has said: “One thing I am quite sure of, now having spent more than 20 years in this field of exoplanets … expect the unexpected.”

But is it really possible to “expect the unexpected”? Plenty of breakthroughs happen by accident, from the discovery of penicillin to the discovery of the cosmic microwave background radiation left over from the Big Bang. These often reflect a degree of luck on behalf of the researchers involved. When it comes to alien life, is it enough for scientists to assume “we’ll know it when we see it”?

Many results seem to tell us that expecting the unexpected is extraordinarily difficult. “We often miss what we don’t expect to see,” according to cognitive psychologist Daniel Simons, famous for his work on inattentional blindness. His experiments have shown how people can miss a gorilla banging its chest in front of their eyes. Similar experiments also show how blind we are to non-standard playing cards such as a black four of hearts. In the former case, we miss the gorilla if our attention is sufficiently occupied. In the latter, we miss the anomaly because we have strong prior expectations.

There are also plenty of relevant examples in the history of science. Philosophers describe this sort of phenomenon as “theory-ladenness of observation”. What we notice depends, quite heavily sometimes, on our theories, concepts, background beliefs and prior expectations. Even more commonly, what we take to be significant can be biased in this way.

For example, when scientists first found evidence of low amounts of ozone in the atmosphere above Antarctica, they initially dismissed it as bad data. With no prior theoretical reason to expect a hole, the scientists ruled it out in advance. Thankfully, they were minded to double check, and the discovery was made.

More than 200,000 stars captured in one small section of the sky by Nasa’s TESS mission. Nasa

Could a similar thing happen in the search for extraterrestrial life? Scientists studying planets in other solar systems (exoplanets) are overwhelmed by the abundance of possible observation targets competing for their attention. In the last 10 years scientists have identified more than 3,650 planets – more than one a day. And with missions such as NASA’s TESS exoplanet hunter this trend will continue.

Each and every new exoplanet is rich in physical and chemical complexity. It is all too easy to imagine a case where scientists do not double check a target that is flagged as “lacking significance”, but whose great significance would be recognised on closer analysis or with a non-standard theoretical approach.

The Müller-Lyer optical illusion. Fibonacci/Wikipedia, CC BY-SA

However, we shouldn’t exaggerate the theory-ladenness of observation. In the Müller-Lyer illusion, a line ending in arrowheads pointing outwards appears shorter than an equally long line with arrowheads pointing inwards. Yet even when we know for sure that the two lines are the same length, our perception is unaffected and the illusion remains. Similarly, a sharp-eyed scientist might notice something in her data that her theory tells her she should not be seeing. And if just one scientist sees something important, pretty soon every scientist in the field will know about it.

History also shows that scientists are able to notice surprising phenomena, even biased scientists who have a pet theory that doesn’t fit the phenomena. The 19th-century physicist David Brewster incorrectly believed that light is made up of particles travelling in a straight line. But this didn’t affect his observations of numerous phenomena related to light, such as what’s known as birefringence in bodies under stress. Sometimes observation is definitely not theory-laden, at least not in a way that seriously affects scientific discovery.

We need to be open-minded

Certainly, scientists can’t proceed by just observing. Scientific observation needs to be directed somehow. But at the same time, if we are to “expect the unexpected”, we can’t allow theory to heavily influence what we observe, and what counts as significant. We need to remain open-minded, encouraging exploration of the phenomena in the style of Brewster and similar scholars of the past.

Studying the universe largely unshackled from theory is not only a legitimate scientific endeavour – it’s a crucial one. The tendency to describe exploratory science disparagingly as “fishing expeditions” is likely to harm scientific progress. Under-explored areas need exploring, and we can’t know in advance what we will find.

In the search for extraterrestrial life, scientists must be thoroughly open-minded. And this means a certain amount of encouragement for non-mainstream ideas and techniques. Examples from past science (including very recent ones) show that non-mainstream ideas can sometimes be strongly held back. Space agencies such as NASA must learn from such cases if they truly believe that, in the search for alien life, we should “expect the unexpected”.

Could invisible aliens really exist among us? An astrobiologist explains. Posted May 22nd 2020

Life is pretty easy to recognise. It moves, it grows, it eats, it excretes, it reproduces. Simple. In biology, researchers often use the acronym “MRSGREN” to describe it. It stands for movement, respiration, sensitivity, growth, reproduction, excretion and nutrition.

But Helen Sharman, Britain’s first astronaut and a chemist at Imperial College London, recently said that alien lifeforms that are impossible to spot may be living among us. How could that be possible?

While life may be easy to recognise, it’s actually notoriously difficult to define and has had scientists and philosophers in debate for centuries – if not millennia. For example, a 3D printer can reproduce itself, but we wouldn’t call it alive. On the other hand, a mule is famously sterile, but we would never say it doesn’t live.

As nobody can agree, there are more than 100 definitions of what life is. An alternative (but imperfect) approach is describing life as “a self-sustaining chemical system capable of Darwinian evolution”, which works for many cases we want to describe.

The lack of definition is a huge problem when it comes to searching for life in space. Not being able to define life other than “we’ll know it when we see it” means we are truly limiting ourselves to geocentric, possibly even anthropocentric, ideas of what life looks like. When we think about aliens, we often picture a humanoid creature. But the intelligent life we are searching for doesn’t have to be humanoid.

Life, but not as we know it

Sharman says she believes aliens exist and “there’s no two ways about it”. Furthermore, she wonders: “Will they be like you and me, made up of carbon and nitrogen? Maybe not. It’s possible they’re here right now and we simply can’t see them.”

Such life would exist in a “shadow biosphere”. By that, I don’t mean a ghost realm, but undiscovered creatures probably with a different biochemistry. This means we can’t study or even notice them because they are outside of our comprehension. Assuming it exists, such a shadow biosphere would probably be microscopic.

So why haven’t we found it? We have limited ways of studying the microscopic world, as only a small percentage of microbes can be cultured in a lab. This may mean that there could indeed be many lifeforms we haven’t yet spotted. We do now have the ability to sequence the DNA of unculturable strains of microbes, but this can only detect life as we know it – life that contains DNA.

If we find such a biosphere, however, it is unclear whether we should call it alien. That depends on whether we mean “of extraterrestrial origin” or simply “unfamiliar”.

Silicon-based life

A popular suggestion for an alternative biochemistry is one based on silicon rather than carbon. It makes sense, even from a geocentric point of view. Around 90% of the Earth is made up of silicon, iron, magnesium and oxygen, which means there’s lots to go around for building potential life.

Artist’s impression of a silicon-based life form. Zita

Silicon is similar to carbon: it has four electrons available for creating bonds with other atoms. But silicon is heavier, with 14 protons (protons make up the atomic nucleus with neutrons) compared to the six in the carbon nucleus. While carbon can create strong double and triple bonds to form long chains useful for many functions, such as building cell walls, it is much harder for silicon. It struggles to create strong bonds, so long-chain molecules are much less stable.

What’s more, common silicon compounds, such as silicon dioxide (or silica), are generally solid at terrestrial temperatures and insoluble in water. Compare this to highly soluble carbon dioxide, for example, and we see that carbon is more flexible and provides many more molecular possibilities.

The chemistry of life on Earth is also fundamentally different from the bulk composition of the Earth. Another argument against a silicon-based shadow biosphere is that too much silicon is locked up in rocks. In fact, the chemical composition of life on Earth has an approximate correlation with the chemical composition of the sun, with 98% of atoms in biology consisting of hydrogen, oxygen and carbon. So if there were viable silicon lifeforms here, they may have evolved elsewhere.

That said, there are arguments in favour of silicon-based life on Earth. Nature is adaptable. A few years ago, scientists at Caltech managed to breed a bacterial protein that created bonds with silicon – essentially bringing silicon to life. So even though silicon is inflexible compared with carbon, it could perhaps find ways to assemble into living organisms, potentially including carbon.

And when it comes to other places in space, such as Saturn’s moon Titan or planets orbiting other stars, we certainly can’t rule out the possibility of silicon-based life.

To find it, we have to somehow think outside of the terrestrial biology box and figure out ways of recognising lifeforms that are fundamentally different from the carbon-based form. There are plenty of experiments testing out these alternative biochemistries, such as the one from Caltech.

Regardless of the belief held by many that life exists elsewhere in the universe, we have no evidence for that. So it is important to consider all life as precious, no matter its size, quantity or location. The Earth supports the only known life in the universe. So no matter what form life elsewhere in the solar system or universe may take, we have to make sure we protect it from harmful contamination – whether it is terrestrial life or alien lifeforms.



So could aliens be among us? I don’t believe that we have been visited by a life form with the technology to travel across the vast distances of space. But we do have evidence for life-forming, carbon-based molecules having arrived on Earth on meteorites, so the evidence certainly doesn’t rule out the same possibility for more unfamiliar life forms.

Project HAARP: Is The US Controlling The Weather? – YouTube

www.youtube.com/watch?v=InoHOvYXJ0Q

23/07/2013 · Project HAARP: US Weather Control? A secretive government radio energy experiment in Alaska, with the potential to control the weather or a simple scientific experiment?

The Science of Corona Spread according to Neil Ferguson et al of Imperial College London Posted May 14th 2020

Note: this report is about spread, and guesswork as to the nature and structure of Corona, with particular regard to mutation and effects of the coronavirus. It is about a maths model of predicted spread and rate of spread, with R representing the reinfection rate: R at 1 means each person with Corona can be expected, or predicted, to infect one other person, who will go on to infect one other, and so on.
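
The R logic can be made concrete in a few lines. The sketch below is pure compounding, offered only as an illustration of what the number means; it is emphatically not Ferguson's model, which tracks households, schools, workplaces and much else besides.

# Toy illustration of the reinfection (reproduction) number R: each generation
# of cases produces, on average, R new cases per existing case, so R > 1 grows,
# R = 1 holds steady, and R < 1 dies out. Not Ferguson's model.
def cases_by_generation(r, initial_cases=100, generations=10):
    cases = [float(initial_cases)]
    for _ in range(generations):
        cases.append(cases[-1] * r)  # every case infects r others on average
    return cases

for r in (0.8, 1.0, 1.5):
    final = cases_by_generation(r)[-1]
    print(f"R = {r}: 100 cases become {final:,.0f} after 10 generations")

At R = 1.5, 100 cases become roughly 5,767 in ten generations; at R = 0.8 they dwindle to about 11. The whole lockdown argument is, at root, an argument about forcing that one multiplier below 1.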

What Ferguson does know for certain, as a basis for his modelling, is that the virtually privatised, asset-stripped, debt-loaded, poorly equipped, run-down and management-top-heavy NHS will fail massively, especially in densely populated urban areas of high ethnic diversity, religious bigotry, poverty and squalor.

He also knows that privatised, very expensive, profit-based care homes will fail hideously, so those already close to natural death, especially if they have previous health conditions, will die sooner with Corona – and the squalor of the homes will make sure they get it.

So operation smokescreen needs the Ferguson maths to justify putting key at-risk voters’ peace of mind above the wider national interest – to hell with the young: scare them to death, blind them with science like the following report, which they won’t understand, and upon which there will be further analysis and comment here soon.

On the wider scene, Britain has been a massively malign influence on Europe, the U.S. and beyond, so Ferguson must factor in no limits on borders, air traffic or illegal immigrants. Though he clearly did not believe his own advice, because he broke it at least twice for sexual contact with a married mother.

The maths of his assessment of his affair with a married woman was simple: M + F = S, where M represents male, F represents female and S represents sex. But we do not need algebra to explain the obvious any more than we need what is below, from Ferguson’s 14-page report.

We might also consider that M + F, because of other human factors/variables, could equal D, where D represents divorce, or MB, where MB represents male bankruptcy, or a number of other possibilities.

But for Ferguson and operation smokescreen, blinding people with science has only one possible outcome, LOCKDOWN, because that is what the government wanted, the media wanted and now a lot of workers want, especially teachers who do not want to go back to work. Britain is ridiculing and patronising European countries for doing the sensible thing and easing out of lockdown. People with brains should fear the British elite more than Europe’s.

Public sector workers are paid to stay at home. Furloughed private sector workers are going to be bankrolled by the taxpayer – the Chancellor said so. Lockdown is costing £14 billion a day. Imagine if all that money had been invested in an NHS fit to cope with the mass of illegal and legal third world immigrants and an ageing population. But moron politicians are always economical with the truth, out to feed their own egos and winging it.

As an ex maths teacher, I could convert all of this into algebra and probable outcomes. British people are more likely to believe what they can’t understand, which is why so many still believe in God. So if God made everything, then God made ‘the science’, so it must be true.

It is not necessary to tell us that if someone catches a cold it is an airborne virus which will spread to anyone in its path, with the poorly and old being vulnerable to a cold turning fatal. That is the reality of Corona.

Ferguson made his report on the basis of probability and some limits to the masses, regardless of the damage caused long term, because he got paid, would look good, and would enhance his own and pompous Imperial College’s reputation.

Robert Cook

10 February 2020. Imperial College London COVID-19 Response Team. DOI: https://doi.org/10.25561/77154

Report 4: Severity of 2019-novel coronavirus (nCoV)

Ilaria Dorigatti⁺, Lucy Okell⁺, Anne Cori, Natsuko Imai, Marc Baguelin, Sangeeta Bhatia, Adhiratha Boonyasiri, Zulma Cucunubá, Gina Cuomo-Dannenburg, Rich FitzJohn, Han Fu, Katy Gaythorpe, Arran Hamlet, Wes Hinsley, Nan Hong, Min Kwun, Daniel Laydon, Gemma Nedjati-Gilani, Steven Riley, Sabine van Elsland, Erik Volz, Haowei Wang, Raymond Wang, Caroline Walters, Xiaoyue Xi, Christl Donnelly, Azra Ghani, Neil Ferguson*. With support from other volunteers from the MRC Centre.¹

WHO Collaborating Centre for Infectious Disease Modelling
MRC Centre for Global Infectious Disease Analysis
Abdul Latif Jameel Institute for Disease and Emergency Analytics (J-IDEA)
Imperial College London

*Correspondence: neil.ferguson@imperial.ac.uk
¹ See full list at end of document. ⁺ These two authors contributed equally.

Summary

We present case fatality ratio (CFR) estimates for three strata of 2019-nCoV infections. For cases detected in Hubei, we estimate the CFR to be 18% (95% credible interval: 11%-81%). For cases detected in travellers outside mainland China, we obtain central estimates of the CFR in the range 1.2-5.6% depending on the statistical methods, with substantial uncertainty around these central values. Using estimates of underlying infection prevalence in Wuhan at the end of January derived from testing of passengers on repatriation flights to Japan and Germany, we adjusted the estimates of CFR from either the early epidemic in Hubei Province, or from cases reported outside mainland China, to obtain estimates of the overall CFR in all infections (asymptomatic or symptomatic) of approximately 1% (95% confidence interval 0.5%-4%). It is important to note that the differences in these estimates do not reflect underlying differences in disease severity between countries. CFRs seen in individual countries will vary depending on the sensitivity of different surveillance systems to detect cases of differing levels of severity and the clinical care offered to severely ill cases. All CFR estimates should be viewed cautiously at the current time as the sensitivity of surveillance of both deaths and cases in mainland China is unclear. Furthermore, all estimates rely on limited data on the typical time intervals from symptom onset to death or recovery which influences the CFR estimates.


SUGGESTED CITATION

Ilaria Dorigatti, Lucy Okell, Anne Cori et al. Severity of 2019-novel coronavirus (nCoV). Imperial College London (10-02-2020), doi: https://doi.org/10.25561/77154.

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

1. Introduction: Challenges in assessing the spectrum of severity

There are two main challenges in assessing the severity of clinical outcomes during an epidemic of a newly emerging infection:

1. Surveillance is typically biased towards detecting clinically severe cases, particularly at the start of an epidemic when diagnostic capacity is limited (Figure 1). Estimates of the proportion of fatal cases (the case fatality ratio, CFR) may thus be biased upwards until the extent of clinically milder disease is determined [1].

2. There can be a period of two to three weeks between a case developing symptoms, subsequently being detected and reported, and observing the final clinical outcome. During a growing epidemic the final clinical outcome of the majority of the reported cases is typically unknown. Dividing the cumulative reported deaths by reported cases will underestimate the CFR among these cases early in an epidemic [1-3].

Figure 1 illustrates the first challenge. Published data from China suggest that the majority of detected and reported cases have moderate or severe illness, with atypical pneumonia and/or acute respiratory distress being used to define suspected cases eligible for testing. In these individuals, clinical outcomes are likely to be more severe, and hence any estimates of the CFR are likely to be high.

Outside mainland China, countries alert to the risk of infection being imported via international travel have instituted surveillance for 2019-nCoV infection with a broader set of clinical criteria for defining a suspected case, typically including a combination of symptoms (e.g. cough + fever) combined with recent travel history to the affected region (Wuhan and/or Hubei Province). Such surveillance is therefore likely to pick up clinically milder cases as well as the more severe cases also being detected in mainland China. However, by restricting testing to those with a travel history or link, it is also likely to miss other symptomatic cases (and possibly hospitalised cases with atypical pneumonia) that have occurred through local transmission or through travel to other affected areas of China.

Figure 1: Spectrum of cases for 2019-nCoV, illustrating imputed sensitivity of surveillance in mainland China and in travellers arriving in other countries or territories from mainland China.

Finally, the bottom of the pyramid represents the likely largest population of those infected with either mild, non-specific symptoms or who are asymptomatic. Quantifying the extent of infection overall in the population requires random population surveys of infection prevalence. The only such data at present for 2019-nCoV are the PCR infection prevalence surveys conducted in exposed expatriates who have recently been repatriated to Japan, Germany and the USA from Wuhan city (see below).

To obtain estimates of the severity of 2019-nCoV across the full severity range we examined aggregate data from Hubei Province, China (representing the top two levels – deaths and hospitalised cases – in Figure 1) and individual-level data from reports of cases outside mainland China (the top three levels and perhaps part of the fourth level in Figure 1). We also analysed data on infections in repatriated expatriates returning from Hubei Province (representing all levels in Figure 1).

2. Current estimates of the case fatality ratio

The CFR is defined as the proportion of cases of a disease who will ultimately die from the disease. For a given case definition, once all deaths and cases have been ascertained (for example at the end of an epidemic), this is simply calculated as deaths/cases. However, at the start of the epidemic this ratio underestimates the true CFR due to the time-lag between onset of symptoms and death [1-3]. We adopted several approaches to account for this time-lag and to adjust for the unknown final clinical outcome of the majority of cases reported both inside and outside China (cases reported in mainland China and those reported outside mainland China) (see Methods section below). We present the range of resulting CFR estimates in Table 1 for two parts of the case severity pyramid. Note that all estimates have high uncertainty and therefore point estimates represent a snapshot at the current time and may change as additional information becomes available. Furthermore, all data sources have inherent potential biases due to the limits in testing capacity as outlined earlier.
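To see why dividing today's deaths by today's cases misleads during a growing epidemic, and how a prevalence estimate scales the answer down, here is a minimal sketch with invented numbers. It is not the report's method or data; the detected fraction in particular is an assumed stand-in for the repatriation-flight prevalence estimate the report describes.

```python
# Illustrative sketch of the two CFR corrections discussed in Report 4.
# Every number below is invented for demonstration; see the report itself
# for the real estimates and the full statistical treatment.

cum_cases_today  = 10_000   # cases reported so far
cum_deaths_today = 200      # deaths reported so far
cum_cases_lagged = 1_500    # cases reported 2-3 weeks ago (onset-to-death lag)

# Naive CFR: biased downwards while the epidemic grows, because the most
# recently reported cases have not yet had time to die or recover.
naive_cfr = cum_deaths_today / cum_cases_today

# Lag-adjusted CFR: compare today's deaths with the cohort of cases old
# enough for their outcomes to be known.
lag_adjusted_cfr = cum_deaths_today / cum_cases_lagged

# Prevalence adjustment: if surveillance detects only a fraction of all
# infections (the report estimated this from repatriation-flight testing),
# the fatality ratio across ALL infections is scaled down accordingly.
detected_fraction = 0.25    # assumption: 1 in 4 infections ever becomes a reported case
cfr_all_infections = lag_adjusted_cfr * detected_fraction

print(f"naive CFR:               {naive_cfr:.1%}")
print(f"lag-adjusted CFR:        {lag_adjusted_cfr:.1%}")
print(f"CFR over all infections: {cfr_all_infections:.1%}")
```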

Table 1: Estimates of CFR for two severity ranges: cases reported in mainland China, and those reported outside. All estimates quoted to two significant figures.

| Severity range | Method and data used | Time-to-outcome distributions used | CFR |
| --- | --- | --- | --- |
| China: epidemic currently in Hubei | Parametric model fitted to publicly reported number of cases and deaths in Hubei as of 5th February, assuming exponential growth at rate 0.14/day. | Onset-to-death estimated from 26 deaths in China; assume 5-day period from onset to report and 1-day period from death to report. | 18%¹ (95% credible interval: 11-81%) |
| Outside mainland China: cases in travellers from mainland China to other countries or territories (showing a broader spectrum of symptoms than cases in Hubei, including milder disease) | Parametric model fitted to reported traveller cases up to 8th February using both death and recovery outcomes and inferring latest possible dates of onset in traveller cases². | Onset-to-death estimated from 26 deaths in China; onset-to-recovery estimated from 36 cases detected outside mainland China⁴. | 5.1%³ (95% credible interval: 1.1%-38%) |
| Outside mainland China (as above) | Parametric model fitted to reported traveller cases up to 8th February using only death outcome and inferring latest possible unreported dates of onset in traveller cases². | Onset-to-death estimated from 26 deaths in China. | 5.6%¹ |

¹ Mode quoted for Bayesian estimates, given uncertainty in the tail of the onset-to-death distribution. ² Estimates made without imputing onset dates in traveller cases for whom onset dates are unknown are slightly higher than when onset dates are imputed. ³ Maximum likelihood estimate. ⁴ This estimate relies on information from just 2 deaths reported outside mainland China thus far and therefore has wide uncertainty. Both of these deaths occurred a relatively short time after onset compared with the typical pattern in China.

Use of data on those who have recovered among exported cases gives very similar point estimates to just relying on death data, but a rather narrower uncertainty range. This highlights the value of case follow-up data on both fatal and non-fatal cases.

Given that the estimates of CFR across all infections rely on a single point estimate of infection prevalence, they should be treated cautiously. In particular, the sensitivity of the diagnostics used to test repatriated passengers is not known, and it is unclear when infected people might test positive, or how representative those passengers were of the general population of Wuhan (their infection risk might have been higher or lower than the general population). Additional representative studies to assess the extent of mildly symptomatic or asymptomatic infection are therefore urgently needed.

Figure 2 shows projected expected numbers of deaths detected in cases detected up to 4th February outside mainland China over the next few weeks for different values of the CFR. If no further deaths are reported amongst this group (and indeed if many of those now in hospital recover and are …

[Image: tesseracts visually represent the four dimensions, including time.]

The science surrounding multi-dimensional space is so mind-boggling that even the physicists who study it do not fully understand it. It may be helpful to start with the three observable dimensions, which correspond to the height, width, and length of a physical object. Einstein, in his work on general relativity in the early 20th century, demonstrated that time is also a physical dimension. This is observable only in extreme conditions; for example, the immense gravity of a planetary body can actually slow down time in its near vicinity. The new model of the universe created by this theory is known as space-time.

[Image: in theory, gravity from a massive object bends space-time around it.]

Since Einstein’s era, scientists have discovered many of the universe’s secrets, but not nearly all. A major field of study, quantum mechanics, is devoted to learning about the smallest particles of matter and how they interact. These particles behave in a very different manner than the matter of observable reality. Physicist John Wheeler is reported to have said, “If you are not completely confused by quantum mechanics, you do not understand it.” It has been suggested that multi-dimensional space can explain the strange behavior of these elementary particles.

For much of the 20th and 21st centuries, physicists have tried to reconcile the discoveries of Einstein with those of quantum physics. It is believed that such a theory would explain much that is still unknown about the universe, including poorly understood forces such as gravity. One of the leading contenders for this theory is known variously as superstring theory, supersymmetry, or M-theory. This theory, while explaining many aspects of quantum mechanics, can only be correct if reality has 10, 11, or as many as 26 dimensions. Thus, many physicists believe multi-dimensional space is likely.

The extra dimensions of this multi-dimensional space would exist beyond the ability of humans to observe them. Some scientists suggest they are folded or curled into the observable three dimensions in such a way that they cannot be seen by ordinary methods. Scientists hope their effects can be documented by watching how elementary particles behave when they collide. Many experiments in the world’s particle accelerator laboratories, such as CERN in Europe, are conducted to search for this evidence. Other theories claim to reconcile relativity and quantum mechanics without requiring the existence of multi-dimensional space; which theory is correct remains to be seen.

Dreams Are The REAL World

~ admin

Arno Pienaar – Dreams are reality just as much as the real world is classified as reality. Dreams are your actual own reality and the real world is the creator’s reality. 

Dreams are by far the most intriguing aspect of existence for a human-being. Within them we behold experiences that the conscious mind may recollect, but for the most part, cannot make sense of. The only sense we can gain from them is the way they make us feel intuitively.


Subconscious Guiding Mechanism

The feeling is known to be the message carried over from the guiding mechanism of the sub-conscious mind.

The guidance we receive in our dreams comes, in fact, from our very selves, although the access we have to everything is only tapped into briefly, when the conscious mind is completely shut down in the sleeping state.

The subconscious tends to show us the things that dominate our consciousness whenever it has the chance and the onus is on us to sort out the way we live our lives in the primary waking state, which is where we embody programming that is keeping us out of our own paradise, fully conscious in the now.

Labels such as the astral plane, dream-scape or the fourth dimension, have served to make people believe that this dimension of reality is somehow not as real as the “real” world, or that the dream state is not as valid as the waking state.

This is one of the biggest lies ever as the dream state is in fact the only reality where you can tap into the unconscious side of yourself, which you otherwise cannot perceive, except during transcendental states under psychedelics or during disciplined meditational practices.

Dreams offer a vital glimpse into your dark side, the unconscious embedded programming which corrupts absolutely until light has shone on it.

The dream state shows us what we are unconsciously projecting as a reality and must be used to face the truth of what you have mistaken for reality.

A person with an eating disorder will, for sure, have plenty of dreams involving gluttony; a nymphomaniac will have many lustful encounters in the dream state; a narcissist will have audiences worshipping him, or will worship himself; and someone filled with hatred will encounter scenes I wish not to elaborate on.

The patterns of your dreams, and especially recurring themes, are projections within your unconscious mind that govern the ultimate experience of your “waking state.”

I believe the new heaven and earth is the merging of heaven (dreams) and earth (matrix) into one conclusive experience.

Besides showing us what needs attention, dreams also transcend the rules and laws of matter, time and space.

The successful lucid dreamer gains an entire new heaven and earth, where the absolute impossible is only possible.

For the one who gains access to everything through the dream state, the constraints of the so-called real world in the waking state become but a monkey on the back.

When you can fly, see and talk to anybody, go anywhere you choose, then returning to the world of matter, time and space, is arguably a nightmare.

Anybody with a sound mind would choose to exist beyond the limitations of the matrix-construct. There are many that already do.

The Real World vs. the Dream World

The greatest of sages have enlightened us that the REAL WORLD is indeed the illusion, maya, or manyan.

If what we have thought to be real is, in fact, the veil to fool us that we are weak, small and limited, then our dreams must be the real world and this experience, here, is just an aspect of ourselves that is in dire need of deprogramming from the jaws of hypnotic spell-casting.

There is actually no such thing as reality. There is also no such thing as the real world. What makes the “waking state” the real world and the “dream state” the unreal world?

People would argue that the matrix is a world in which our physical bodies are housed, and that we always return after sleep to continue our existence in the real world.

Morpheus exclaimed that the body cannot survive without the mind. What he meant was that the body is but a projection of the mind.

Have you ever had a dream that was interrupted unexpectedly, only to continue from where you had left off when you go back to sleep?

Do you have a sanctuary which you visit regularly in the dream state? A safe haven in your sub-conscious mind?

When we have the intent to return to any reality we do so, as fellow lucid dreamers have proven.

What if I told you that this matrix-hive is a dream just like any other dream you have, and that billions of souls share this dream together?

Do you think these souls consciously chose to share this dream together? The answer is no, they were merely incepted by an idea that “this is the real world” from the very beings that summoned them into this plane through the sexual act.

Every night we have to re-energize ourselves by accessing the dream world (i.e. the actual world)/True Source of infinite Potential, which is the reservoir that refills us, only to return to give that energy to the dreamworld which we believe to be the real world. This “real world” only seems like the REAL WORLD because most of its inhabitants believe just that.

Pause and Continue

Just like we can pause a dream when interrupted and return to it, so can we pause the “real world”. Whether you believe it or not, we only return to the “waking reality” because we have forsaken ourselves for it and we expect to return to it on a daily basis.

We intend to always come back because we have such a large investment in an illusion, and this is our chain to the physical world. We are so attached to this dream that we even reincarnate to continue from where we left off, because this dream is able to trap us in limbo forever.

We have capitulated to it and, in so doing, gave it absolute power over us. We are in fact in a reality of another, not in our own. That is why we cannot manifest what we want in it, because it has claimed ownership over us here and while we are in it, we are subject to its rules, laws and limitations.

When one enters the dimension of another, one falls subject to its construct.

In the case of the Real World, the Real World has been hacked by a virus that affects all the beings that embrace the code of that matrix. It is like a spiderweb that traps souls.

As soon as we wake up in the morning, we start dreaming of the world we share together. As long as the mind machine is in power, it will always kick in again after waking up.

Whatever it is we believe becomes activated by this dream once more, so as to validate our contribution to this illusion, to which we have agreed.

The time is now to turn all of it back to the five elements, so that we can have our own reality again!

Hyperdimensionality is a Reality Identity Crisis

We are only hyper-dimensional beings because we are not in our own reality yet — we are in the middle of two realities fighting over us. It is time to come to terms with this identity crisis.

We cannot be forced to be in a dimension we choose not to partake in, a dimension that was made to fool us into believing it is the alpha & omega.

It is this very choice (rejecting the digital holographic program) that many are now making, which is breaking the mirror on the wall (destroying the illusion).

Deprogramming Souls of Matrix-Based Constructs is coming in 2016. The spiderweb will be disentangled and the laws of time, matter and space will be transcended in the Real World, once we will regain full consciousness in the NOW.

Original source Dreamcatcherreality.com

SF Source How To Exit The Matrix  May 2016

Physicists Say There’s a 90 Percent Chance Civilization Will Soon Collapse October 9th 2020

Final Countdown

If humanity continues down its current path, civilization as we know it is heading toward “irreversible collapse” in a matter of decades.

That’s according to research published in the journal Scientific Reports, which models out our future based on current rates of deforestation and other resource use. As Motherboard reports, even the rosiest projections in the research show a 90 percent chance of catastrophe.

Last Gasp

The paper, penned by physicists from the Alan Turing Institute and the University of Tarapacá, predicts that deforestation will claim the last forests on Earth in between 100 and 200 years. Coupled with global population changes and resource consumption, that’s bad news for humanity.

“Clearly it is unrealistic to imagine that the human society would start to be affected by the deforestation only when the last tree would be cut down,” reads the paper.

Coming Soon

In light of that, the duo predicts that society as we know it could end within 20 to 40 years.

In lighter news, Motherboard reports that the global rate of deforestation has actually decreased in recent years. But there’s still a net loss in forest overall — and newly-planted trees can’t protect the environment nearly as well as old-growth forest.


“Calculations show that, maintaining the actual rate of population growth and resource consumption, in particular forest consumption, we have a few decades left before an irreversible collapse of our civilization,” reads the paper.

READ MORE: Theoretical Physicists Say 90% Chance of Societal Collapse Within Several Decades [Motherboard]

More on societal collapse: Doomsday Report Author: Earth’s Leaders Have Failed

What is Binary, and Why Do Computers Use It?

Anthony Heddings@anthonyheddings
October 1, 2018, 6:40am EDT

Computers don’t understand words or numbers the way humans do. Modern software allows the end user to ignore this, but at the lowest levels of your computer, everything is represented by a binary electrical signal that registers in one of two states: on or off. To make sense of complicated data, your computer has to encode it in binary.

Binary is a base 2 number system. Base 2 means there are only two digits—1 and 0—which correspond to the on and off states your computer can understand. You’re probably familiar with base 10—the decimal system. Decimal makes use of ten digits that range from 0 to 9, and then wraps around to form two-digit numbers, with each digit being worth ten times more than the last (1, 10, 100, etc.). Binary is similar, with each digit being worth two times more than the last.

Counting in Binary

In binary, the first digit is worth 1 in decimal. The second digit is worth 2, the third worth 4, the fourth worth 8, and so on—doubling each time. Adding these all up gives you the number in decimal. So,

1111 (in binary)  =  8 + 4 + 2 + 1  =  15 (in decimal)

Accounting for 0, this gives us 16 possible values for four binary bits. Move to 8 bits, and you have 256 possible values. This takes up a lot more space to represent, as four digits in decimal give us 10,000 possible values. It may seem like we’re going through all this trouble of reinventing our counting system just to make it clunkier, but computers understand binary much better than they understand decimal. Sure, binary takes up more space, but we’re held back by the hardware. And for some things, like logic processing, binary is better than decimal.
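The place-value rule described above is easy to check in a few lines of Python; the helper below is just an illustration, and Python's built-in conversion gives the same answer:

```python
# Convert a binary string to decimal by accumulating place values,
# exactly the doubling rule described above.
def binary_to_decimal(bits: str) -> int:
    value = 0
    for bit in bits:            # leftmost digit carries the highest place value
        value = value * 2 + int(bit)
    return value

print(binary_to_decimal("1111"))      # 8 + 4 + 2 + 1 = 15
print(binary_to_decimal("11111111"))  # a full byte: 255
print(int("1111", 2))                 # Python's built-in conversion agrees: 15
```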

There’s another base system that’s also used in programming: hexadecimal. Although computers don’t run on hexadecimal, programmers use it to represent binary addresses in a human-readable format when writing code. This is because two digits of hexadecimal can represent a whole byte, eight digits in binary. Hexadecimal uses 0-9 like decimal, and also the letters A through F to represent the additional six digits.
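A quick sketch of that two-hex-digits-per-byte correspondence, using nothing beyond Python's built-in literals:

```python
# One byte = 8 binary digits = exactly 2 hexadecimal digits.
n = 0b11111111          # a full byte, all bits set
print(n)                # 255 in decimal
print(hex(n))           # 0xff: two hex digits cover the whole byte
print(bin(0xA5))        # 0b10100101: one byte written back out in binary
```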

So Why Do Computers Use Binary?

The short answer: hardware and the laws of physics. Every number in your computer is an electrical signal, and in the early days of computing, electrical signals were much harder to measure and control very precisely. It made more sense to only distinguish between an “on” state—represented by negative charge—and an “off” state—represented by a positive charge. For those unsure of why the “off” is represented by a positive charge, it’s because electrons have a negative charge—more electrons mean more current with a negative charge.

So, the early room-sized computers used binary to build their systems, and even though they used much older, bulkier hardware, we’ve kept the same fundamental principles. Modern computers use what’s known as a transistor to perform calculations with binary. Here’s a diagram of what a field-effect transistor (FET) looks like:

Essentially, it only allows current to flow from the source to the drain if there is a current in the gate. This forms a binary switch. Manufacturers can build these transistors incredibly small—all the way down to 5 nanometers, or about the size of two strands of DNA. This is how modern CPUs operate, and even they can suffer from problems differentiating between on and off states (though that’s mostly due to their unreal molecular size, being subject to the weirdness of quantum mechanics).

But Why Only Base 2?

So you may be thinking, “why only 0 and 1? Couldn’t you just add another digit?” While some of it comes down to tradition in how computers are built, to add another digit would mean we’d have to distinguish between different levels of current—not just “off” and “on,” but also states like “on a little bit” and “on a lot.”

The problem here is if you wanted to use multiple levels of voltage, you’d need a way to easily perform calculations with them, and the hardware for that isn’t viable as a replacement for binary computing. It indeed does exist; it’s called a ternary computer, and it’s been around since the 1950s, but that’s pretty much where development on it stopped. Ternary logic is way more efficient than binary, but as of yet, nobody has an effective replacement for the binary transistor, or at the very least, no work’s been done on developing them at the same tiny scales as binary.

The reason we can’t use ternary logic comes down to the way transistors are stacked in a computer—something called “gates”and how they’re used to perform math. Gates take two inputs, perform an operation on them, and return one output.

This brings us to the long answer: binary math is way easier for a computer than anything else. Boolean logic maps easily to binary systems, with True and False being represented by on and off. Gates in your computer operate on boolean logic: they take two inputs and perform an operation on them like AND, OR, XOR, and so on. Two inputs are easy to manage. If you were to graph the answers for each possible input, you would have what’s known as a truth table:

A binary truth table operating on boolean logic will have four possible outputs for each fundamental operation. But because ternary gates take three inputs, a ternary truth table would have 9 or more. While a binary system has 16 possible operators (2^2^2), a ternary system would have 19,683 (3^3^3). Scaling becomes an issue because while ternary is more efficient, it’s also exponentially more complex.
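As a minimal sketch of those tables, the four rows for each two-input gate can be generated directly from the boolean operators; the nine-row requirement for a ternary gate follows from the 3 × 3 input combinations:

```python
# Print the four-row truth table for each fundamental two-input gate.
# A ternary gate would need a 9-row table (3 x 3 input combinations).
gates = {
    "AND": lambda a, b: a & b,
    "OR":  lambda a, b: a | b,
    "XOR": lambda a, b: a ^ b,
}

for name, gate in gates.items():
    print(f"\n A B | {name}")
    print(" ----+----")
    for a in (0, 1):
        for b in (0, 1):
            print(f" {a} {b} |  {gate(a, b)}")
```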

Who knows? In the future, we could begin to see ternary computers become a thing, as we push the limits of binary down to a molecular level. For now, though, the world will continue to run on binary.

Image credits: spainter_vfx/Shutterstock, Wikipedia


Anthony Heddings
Anthony Heddings is the resident cloud engineer for LifeSavvy Media, a technical writer, programmer, and an expert at Amazon’s AWS platform. He’s written hundreds of articles for How-To Geek and CloudSavvy IT that have been read millions of times.

If The Big Bang Wasn’t The Beginning, What Was It? Posted September 30th 2020

Ethan Siegel, Senior Contributor, Starts With A Bang. “The Universe is out there, waiting for you to discover it.”

[Image: the history of our expanding Universe. “Our entire cosmic history is theoretically well-understood, but only because we understand the …” – Nicole Rager Fuller / National Science Foundation]

For more than 50 years, we’ve had definitive scientific evidence that our Universe, as we know it, began with the hot Big Bang. The Universe is expanding, cooling, and full of clumps (like planets, stars, and galaxies) today because it was smaller, hotter, denser, and more uniform in the past. If you extrapolate all the way back to the earliest moments possible, you can imagine that everything we see today was once concentrated into a single point: a singularity, which marks the birth of space and time itself.

At least, we thought that was the story: the Universe was born a finite amount of time ago, and started off with the Big Bang. Today, however, we know a whole lot more than we did back then, and the picture isn’t quite so clear. The Big Bang can no longer be described as the very beginning of the Universe that we know, and the hot Big Bang almost certainly doesn’t equate to the birth of space and time. So, if the Big Bang wasn’t truly the beginning, what was it? Here’s what the science tells us.

[Image: looking back at the distant Universe with NASA’s Hubble space telescope. “Nearby, the stars and galaxies we see look very much like our own. But as we look farther away, we …” – NASA, ESA, and A. Feild (STScI)]

Our Universe, as we observe it today, almost certainly emerged from a hot, dense, almost-perfectly uniform state early on. In particular, there are four pieces of evidence that all point to this scenario:

  1. the Hubble expansion of the Universe, which shows that the amount that light from a distant object is redshifted is proportional to the distance to that object,
  2. the existence of a leftover glow — the Cosmic Microwave Background (CMB) — in all directions, with the same temperature everywhere just a few degrees above absolute zero,
  3. light elements — hydrogen, deuterium, helium-3, helium-4, and lithium-7 — that exist in a particular ratio of abundances back before any stars were formed,
  4. and a cosmic web of structure that gets denser and clumpier, with more space between larger and larger clumps, as time goes on.

These four facts: the Hubble expansion of the Universe, the existence and properties of the CMB, the abundance of the light elements from Big Bang nucleosynthesis, and the formation and growth of large-scale structure in the Universe, represent the four cornerstones of the Big Bang.

[Image: the cosmic microwave background and large-scale structure, two cosmological cornerstones. “The largest-scale observations in the Universe, from the cosmic microwave background to the cosmic …” – Chris Blake and Sam Moorfield]

Why are these the four cornerstones? In the 1920s, Edwin Hubble, using the largest, most powerful telescope in the world at the time, was able to measure how individual stars varied in brightness over time, even in galaxies beyond our own. That enabled us to know how far away the galaxies that housed those stars were. By combining that information with data about how significantly the atomic spectral lines from those galaxies were shifted, we could determine what the relationship was between distance and a spectral shift.

As it turned out, it was simple, straightforward, and linear: Hubble’s law. The farther away a galaxy was, the more significantly its light was redshifted, or shifted systematically towards longer wavelengths. In the context of General Relativity, that corresponds to a Universe whose very fabric is expanding with time. As time marches on, all points in the Universe that aren’t somehow bound together (either gravitationally or by some other force) will expand away from one another, causing any emitted light to be shifted towards longer wavelengths by time the observer receives it.

[Image: how light redshifts and distances change over time in the expanding Universe. “This simplified animation shows how light redshifts and how distances between unbound objects change …” – Rob Knop]
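Hubble's law is a linear relation, v = H0 × d. A minimal sketch, assuming the commonly quoted present-day value of roughly 70 km/s per megaparsec (the precise value is still debated) and the small-redshift approximation z ≈ v/c:

```python
# Hubble's law: recession velocity grows linearly with distance, v = H0 * d.
H0 = 70.0           # km/s per megaparsec (approximate; the exact value is debated)
C = 299_792.458     # speed of light in km/s

for d_mpc in (10, 100, 1000):
    v = H0 * d_mpc          # recession velocity in km/s
    z = v / C               # redshift; good approximation only while z << 1
    print(f"d = {d_mpc:5d} Mpc -> v = {v:8.0f} km/s, z = {z:.4f}")
```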

Although there are many possible explanations for the effect we observe as Hubble’s Law, the Big Bang is a unique idea among those possibilities. The idea is simple and straightforward, but also breathtaking in how powerful it is. It simply says this:

  • the Universe is expanding and stretching light to longer wavelengths (and lower energies and temperatures) today,
  • and that means, if we extrapolate backwards, the Universe was denser and hotter earlier on.
  • Because it’s been gravitating the whole time, the Universe gets clumpier and forms larger, more massive structures later on.
  • If we go back to early enough times, we’ll see that galaxies were smaller, more numerous, and made of intrinsically younger, bluer stars.
  • If we go back earlier still, we’ll find a time where no stars have had time to form.
  • Even earlier, and we’ll find that it’s hot enough that light, at some early time, would have split even neutral atoms apart, creating an ionized plasma which “releases” the radiation at last when the Universe does become neutral. (The origin of the CMB.)
  • And at even earlier times still, things were hot enough that even atomic nuclei would be blasted apart; transitioning to a cooler phase allows the first stable nuclear reactions, yielding the light elements, to proceed.
[Image: as the Universe cools, atomic nuclei form, followed by neutral atoms as it cools further. “All of …” – E. Siegel]

All of these claims, at some point during the 20th century, were validated and confirmed by observations. We’ve measured the clumpiness of the Universe, and found that it increases exactly as predicted as time goes on. We’ve measured how galaxies evolve with distance (and cosmic time), and found that the earlier, more distant ones are overall younger, bluer, more numerous, and smaller in size. We’ve discovered and measured the CMB, and not only does it spectacularly match the Big Bang’s predictions, but we’ve observed how its temperature changes (increases) at earlier times. And we’ve successfully measured the primordial abundances of the light elements, finding a spectacular agreement with the predictions of Big Bang nucleosynthesis.

We can extrapolate back even further if we like: beyond the limits of what our current technology has the capability to directly observe. We can imagine the Universe getting even denser, hotter, and more compact than it was when protons and neutrons were being blasted apart. If we stepped back even earlier, we’d see neutrinos and antineutrinos, which need about a light-year of solid lead to stop half of them, start to interact with electrons and other particles in the early Universe. Beginning in the mid-2010s, we were able to detect their imprint on first the photons of the CMB and, a few years later, on the large-scale structure that would later grow in the Universe.

[Image: the impact of neutrinos on the large-scale structure features in the Universe. “If there were no oscillations due to matter interacting with radiation in the Universe, there would …” – D. Baumann et al. (2019), Nature Physics]

That’s the earliest signal, thus far, we’ve ever detected from the hot Big Bang. But there’s nothing stopping us from running the clock back farther: all the way to the extremes. At some point:

  • it gets hot and dense enough that particle-antiparticle pairs get created out of pure energy, simply from quantum conservation laws and Einstein’s E = mc²,
  • the Universe gets denser than individual protons and neutrons, causing it to behave as a quark-gluon plasma rather than as individual nucleons,
  • the Universe gets even hotter, causing the electroweak force to unify, the Higgs symmetry to be restored, and for fundamental particles to lose their rest mass,

and then we go to energies that lie beyond the limits of known, tested physics, even from particle accelerators and cosmic rays. Some processes must occur under those conditions to reproduce the Universe we see. Something must have created dark matter. Something must have created more matter than antimatter in our Universe. And something must have happened, at some point, for the Universe to exist at all.

[Image: an illustration of the Big Bang from an initially hot, dense state to our modern Universe. “There is a large suite of scientific evidence that supports the picture of the expanding Universe …” – NASA / GSFC]

From the moment this extrapolation was first considered back in the 1920s — and then again in its more modern forms in the 1940s and 1960s — the thinking was that the Big Bang takes you all the way back to a singularity. In many ways, the big idea of the Big Bang was that if you have a Universe filled with matter and radiation, and it’s expanding today, then if you go far enough back in time, you’ll come to a state that’s so hot and so dense that the laws of physics themselves break down.

At some point, you achieve energies, densities, and temperatures that are so large that the quantum uncertainty inherent to nature leads to consequences that make no sense. Quantum fluctuations would routinely create black holes that encompass the entire Universe. Probabilities, if you try to compute them, give answers that are either negative or greater than 1: both physical impossibilities. We know that gravity and quantum physics don’t make sense at these extremes, and that’s what a singularity is: a place where the laws of physics are no longer useful. Under these extreme conditions, it’s possible that space and time themselves can emerge. This, originally, was the idea of the Big Bang: a birth to time and space themselves.

[Image: the Big Bang, from the earliest stages to modern-day galaxies. “A visual history of the expanding Universe includes the hot, dense state known as the Big Bang and …” – NASA / CXC / M. Weiss]

But all of that was based on the notion that we actually could extrapolate the Big Bang scenario as far back as we wanted: to arbitrarily high energies, temperatures, densities, and early times. As it turned out, that created a number of physical puzzles that defied explanation. Puzzles such as:

  • Why did causally disconnected regions of space — regions with insufficient time to exchange information, even at the speed of light — have identical temperatures to one another?
  • Why was the initial expansion rate of the Universe in balance with the total amount of energy in the Universe so perfectly: to more than 50 decimal places, to deliver a “flat” Universe today?
  • And why, if we achieved these ultra-high temperatures and densities early on, don’t we see any leftover relic remnants from those times in our Universe today?

If you still want to invoke the Big Bang, the only answer you can give is, “well, the Universe must have been born that way, and there is no reason why.” But in physics, that’s akin to throwing up your hands in surrender. Instead, there’s another approach: to concoct a mechanism that could explain those observed properties, while reproducing all the successes of the Big Bang, and still making new predictions about phenomena we could observe that differ from the conventional Big Bang.

[Image: the three big puzzles that inflation solves – the horizon, flatness and monopole problems. “In the top panel, our modern Universe has the same properties (including temperature) everywhere …” – E. Siegel / Beyond the Galaxy]

About 40 years ago, that’s exactly the idea that was put forth: cosmic inflation. Instead of extrapolating the Big Bang all the way back to a singularity, inflation basically says that there’s a cutoff: you can go back to a certain high temperature and density, but no further. According to the big idea of cosmic inflation, this hot, dense, uniform state was preceded by a state where:

  • the Universe wasn’t filled with matter and radiation,
  • but instead possessed a large amount of energy intrinsic to the fabric of space itself,
  • which caused the Universe to expand exponentially (and at a constant, unchanging rate),
  • which drives the Universe to be flat, empty, and uniform (up to the scale of quantum fluctuations),
  • and then inflation ends, converting that intrinsic-to-space energy into matter and radiation,

and that’s where the hot Big Bang comes from. Not only did this solve the puzzles the Big Bang couldn’t explain, but it made multiple new predictions that have since been verified. There’s a lot we still don’t know about cosmic inflation, but the data that’s come in over the last three decades overwhelmingly supports the existence of this inflationary state that preceded and set up the hot Big Bang.

[Image: how inflation and quantum fluctuations give rise to the Universe we observe today. “The quantum fluctuations that occur during inflation get stretched across the Universe, and when …” – E. Siegel, with images derived from ESA/Planck and the DOE/NASA/NSF Interagency Task Force on CMB Research]

All of this, taken together, is enough to tell us what the Big Bang is and what it isn’t. It is the notion that our Universe emerged from a hotter, denser, more uniform state in the distant past. It is not the idea that things got arbitrarily hot and dense until the laws of physics no longer applied.

It is the notion that, as the Universe expanded, cooled, and gravitated, we annihilated away our excess antimatter, formed protons and neutrons and light nuclei, atoms, and eventually, stars, galaxies, and the Universe we recognize today. It is no longer considered inevitable that space and time emerged from a singularity 13.8 billion years ago.

And it is a set of conditions that applies at very early times, but was preceded by a different set of conditions (inflation) that came before it. The Big Bang might not be the very beginning of the Universe itself, but it is the beginning of our Universe as we recognize it. It’s not “the” beginning, but it is “our” beginning. It may not be the entire story on its own, but it’s a vital part of the universal cosmic story that connects us all.

Follow me on Twitter. Check out my website or some of my other work here.

Ethan Siegel

I am a Ph.D. astrophysicist, author, and science communicator, who professes physics and astronomy at various colleges. I have won numerous awards for science writing…


Neuralink: 3 neuroscientists react to Elon Musk’s brain chip reveal September 17th 2020

With a pig-filled demonstration, Neuralink revealed its latest advancements in brain implants this week. But what do scientists think of Elon Musk’s company’s grand claims? (Image: Shutterstock.) By Mike Brown, 9.4.2020, 8:00 AM

What does the future look like for humans and machines? Elon Musk would argue that it involves wiring brains directly up to computers – but neuroscientists tell Inverse that’s easier said than done.

On August 28, Musk and his team unveiled the latest updates from secretive firm Neuralink with a demo featuring pigs implanted with their brain chip device. These chips are called Links, and they measure 0.9 inches wide by 0.3 inches tall. They connect to the brain via wires, and provide a battery life of 12 hours per charge, after which the user would need to wirelessly charge again. During the demo, a screen showed the real-time spikes of neurons firing in the brain of one pig, Gertrude, as she snuffled around her pen during the event.


It was an event designed to show how far Neuralink has come in terms of making its science objectives reality. But how much of Musk’s ambitions for Links are still in the realm of science fiction?

Neuralink argues the chips will one day have medical applications, listing a whole manner of ailments that its chips could feasibly solve. Memory loss, depression, seizures, and brain damage were all suggested as conditions where a generalized brain device like the Link could help.

Ralph Adolphs, Bren Professor of Psychology, Neuroscience, and Biology at California Institute of Technology, tells Inverse Neuralink’s announcement was “tremendously exciting” and “a huge technical achievement.”

Neuralink is “a good example of technology outstripping our current ability to know how to use it,” Adolphs says. “The primary initial application will be for people who are ill and for clinical reasons it is justified to implant such a chip into their brain. It would be unethical to do so right now in a healthy person.”

“But who knows what the future holds?” He adds.

Adolphs says the chip is comparable to the natural processes that emerge through evolution. Currently, to interface between the brain and the world, humans use their hands and mouth. But to imagine just sitting and thinking about these actions is a lot harder, so a lot of the future work will need to focus on making this interface with the world feel more natural, Adolphs says.

Achieving that goal could be further out than the Neuralink demo suggested. John Krakauer, chief medical and scientific officer at MindMaze and professor of neurology at Johns Hopkins University, tells Inverse that his view is humanity is “still a long way away” from consumer-level linkups.

“Let me give a more specific concern: The device we saw was placed over a single sensorimotor area,” Krakauer says. “If we want to read thoughts rather than movements (assuming we knew their neural basis) where do we put it? How many will we need? How does one avoid having one’s scalp studded with them? No mention of any of this of course.”

While a brain linkup may get people “excited” because it “has echoes of Charles Xavier in the X-Men,” Krakauer argues that there’s plenty of potential non-invasive solutions to help people with the conditions Neuralink says its technology will treat.

These existing solutions don’t require invasive surgery, but Krakauer fears “the cool factor clouds critical thinking.”

But Elon Musk, Neuralink’s CEO, wants the Link to take humans far beyond new medical treatments.

The ultimate objective, according to Musk, is for Neuralink to help create a symbiotic relationship between humans and computers. Musk argues that Neuralink-like devices could help humanity keep up with super-fast machines. But Krakauer finds such an ambition troubling.

“I would like to see less unsubstantiated hype about a brain ‘Alexa’ and interfacing with A.I.,” Krakauer says. “The argument is if you can’t avoid the singularity, join it. I’m sorry but this angle is just ridiculous.”

Neuralink’s Link implant. Credit: Neuralink.

Even a general-purpose linkup could be much further away from development than it may seem. Musk told WaitButWhy in 2017 that a general-purpose linkup could be eight to 10 years away for people with no disability. That would place the timescale for roll-out somewhere around 2027 at the latest — seven years from now.

Kevin Tracey, a neurosurgery professor and president of the Feinstein Institutes for Medical Research, tells Inverse that he “can’t imagine” that any of the publicly suggested diseases could see a solution “sooner than 10 years.” Considering that Neuralink hopes to offer the device as a medical solution before it moves to more general-purpose implants, these notes of caution cast the company’s timeline into doubt.

But unlike Krakauer, Tracey argues that “we need more hype right now.” Not enough attention has been paid to this area of research, he says.

“In the United States for the last 20 years, the federal government’s investment supporting research hasn’t kept up with inflation,” Tracey says. “There’s been this idea that things are pretty good and we don’t have to spend so much money on research. That’s nonsense. COVID proved we need to raise enthusiasm and investment.”

Neuralink’s device is just one part of the brain linkup puzzle, Tracey explains. There are three fields at play: molecular medicine to make and find the targets, neuroscience to understand how the pathways control the target, and the devices themselves. Advances in each area can help the others. Neuralink may help map new pathways, for example, but it’s just one aspect of what needs to be done to make it work as planned.

Neuralink’s smaller chips may also help avoid issues with brain scarring seen with larger devices, Tracey says. And advancements in robots can also help with surgeries, an area Neuralink has detailed before.

But perhaps the biggest benefit from the announcement is making the field cool again.

“If and to the extent that a new, very cool device elevates the discussion on the neuroscience implications of new devices, and what do we need to get these things to the benefit of humanity through more science, that’s all good,” Tracey says.

How sleep helps us lose weight September 12th 2020

When it comes to weight loss, diet and exercise are usually thought of as the two key factors that will achieve results. However, sleep is an often-neglected lifestyle factor that also plays an important role.

The recommended sleep duration for adults is seven to nine hours a night, but many people often sleep for less than this. Research has shown that sleeping less than the recommended amount is linked to having greater body fat, increased risk of obesity, and can also influence how easily you lose weight on a calorie-controlled diet.

Typically, the goal for weight loss is to decrease body fat while retaining as much muscle mass as possible. Not obtaining the correct amount of sleep can influence how much fat is lost, as well as how much muscle mass you retain, while on a calorie-restricted diet.

One study found that sleeping 5.5 hours each night over a two-week period while on a calorie-restricted diet resulted in less fat loss when compared to sleeping 8.5 hours each night. But it also resulted in a greater loss of fat-free mass (including muscle).

Another study has shown similar results over an eight-week period when sleep was reduced by only one hour each night for five nights of the week. These results showed that even catch-up sleep at the weekend may not be enough to reverse the negative effects of sleep deprivation while on a calorie-controlled diet.

Metabolism, appetite, and sleep

There are several reasons why shorter sleep may be associated with higher body weight and affect weight loss. These include changes in metabolism, appetite and food selection.

Sleep influences two important appetite hormones in our body – leptin and ghrelin. Leptin is a hormone that decreases appetite, so when leptin levels are high we usually feel fuller. On the other hand, ghrelin is a hormone that can stimulate appetite, and is often referred to as the “hunger hormone” because it’s thought to be responsible for the feeling of hunger.

One study found that sleep restriction increases levels of ghrelin and decreases leptin. Another study, which included a sample of 1,024 adults, also found that short sleep was associated with higher levels of ghrelin and lower levels of leptin. This combination could increase a person’s appetite, making calorie-restriction more difficult to adhere to, and may make a person more likely to overeat.

Consequently, increased food intake due to these changes in appetite hormones may, in the long term, result in weight gain. So getting a good night’s sleep should be prioritised.

Along with changes in appetite hormones, reduced sleep has also been shown to impact on food selection and the way the brain perceives food. Researchers have found that the areas of the brain responsible for reward are more active in response to food after sleep loss (six nights of only four hours’ sleep) when compared to people who had good sleep (six nights of nine hours’ sleep).

This could possibly explain why sleep-deprived people snack more often and tend to choose carbohydrate-rich foods and sweet-tasting snacks, compared to those who get enough sleep.

Sleep deprivation may make you eat more unhealthy food during the day. Credit: Flotsam/Shutterstock.

Sleep duration also influences metabolism, particularly glucose (sugar) metabolism. When food is eaten, our bodies release insulin, a hormone that helps to process the glucose in our blood. However, sleep loss can impair our bodies’ response to insulin, reducing its ability to uptake glucose. We may be able to recover from the occasional night of sleep loss, but in the long term this could lead to health conditions such as obesity and type 2 diabetes.

Our own research has shown that a single night of sleep restriction (only four hours’ sleep) is enough to impair the insulin response to glucose intake in healthy young men. Given that sleep-deprived people already tend to choose foods high in glucose due to increased appetite and reward-seeking behaviour, the impaired ability to process glucose can make things worse.

An excess of glucose (both from increased intake and a reduced ability to take it up into the tissues) could be converted to fatty acids and stored as fat. Collectively, this can accumulate over the long term, leading to weight gain.

However, physical activity may show promise as a countermeasure against the detrimental impact of poor sleep. Exercise has a positive impact on appetite, by reducing ghrelin levels and increasing levels of peptide YY, a hormone that is released from the gut, and is associated with the feeling of being satisfied and full.

After exercise, people tend to eat less, particularly when the energy expended by exercise is taken into account. However, it’s unknown whether this still holds in the context of sleep restriction.

Research has also shown that exercise training may protect against the metabolic impairments that result from a lack of sleep, by improving the body’s response to insulin, leading to improved glucose control.

We have also shown the potential benefits of just a single session of exercise on glucose metabolism after sleep restriction. While this shows promise, studies are yet to determine the role of long-term physical activity in people with poor sleep.

It’s clear that sleep is important for losing weight. A lack of sleep can increase appetite by changing hormones, make us more likely to eat unhealthy foods, and influence how body fat is lost while counting our calories. Sleep should therefore be considered an essential part of a healthy lifestyle, alongside diet and physical activity.

Elon Musk Says Settlers Will Likely Die on Mars. He’s Right.

But is that such a bad thing?

Mars or Milton Keynes, what’s the difference?

By Caroline Delbert 

Sep 2, 2020

Earlier this week, Elon Musk said there’s a “good chance” settlers in the first Mars missions will die. And while that’s easy to imagine, he and others are working hard to plan and minimize the risk of death by hardship or accident. In fact, the goal is to have people comfortably die on Mars after a long life of work and play that, we hope, looks at least a little like life on Earth.

Let’s explore it together.

There are already major structural questions about how humans will settle on Mars. How will we aim Musk’s planned hundreds of Starships at Mars during the right times for the shortest, safest trips? How will a spaceship turn into something that safely lands on the planet’s surface? How will astronauts reasonably survive a yearlong trip in cramped, close quarters where maximum possible volume is allotted to supplies?

And all of that is before anyone even touches the surface.

Then there are logistical reasons to talk about potential Mars settlers in, well, actuarial terms. First, the trip itself will take a year based on current estimates, and applicants to settlement programs are told to expect this trip to be one way.

It follows, statistically, that there’s an almost certain “chance” these settlers will die on Mars, because their lives will continue there until they naturally end. Musk is referring to accidental death in tough conditions, but people are likely to stay on Mars regardless.

When Mars One opened applications in 2013, people flocked to audition to die on Mars after a one-way trip and a lifetime of settlement. As chemist and applicant Taylor Rose Nations said in a 2014 podcast episode:

“If I can go to Mars and be a human guinea pig, I’m willing to sort of donate my body to science. I feel like it’s worth it for me personally, and it’s kind of a selfish thing, but just to turn around and look and see Earth. That’s a lifelong total dream.”

Musk said at a conference Monday that building reusable rocket technology and robust, “complex life support” are his major priorities, based on his long-term goals of settling humans on Mars. Musk has successfully transported astronauts to the International Space Station (ISS), where NASA and global space administrations already have long-term life support technology in place. But that’s not the same as, for example, NASA’s advanced life support projects:

“Advanced life support (ALS) technologies required for future human missions include improved physico-chemical technologies for atmosphere revitalization, water recovery, and waste processing/resource recovery; biological processors for food production; and systems modeling, analysis, and controls associated with integrated subsystems operations.”

In other words, while the ISS does many of these different functions like water recovery, people on the moon (for NASA) or Mars (for Musk’s SpaceX) will require long-term life support for the same group of people, not a group that rotates every few months with frequent short trips from Earth.

And if the Mars colony plans to endure and put down roots, that means having food, shelter, medical care, and mental and emotional stimulation for the entire population.

There must be redundancies and ways to repair everything. Researchers favor 3D printers and chemical processes such as ligand bonding as they plan these hypothetical missions, because it’s more prudent to send raw materials that can be turned into 100 different things or 50 different medicines. The right chemical processes can recycle discarded items into fertilizer molecules.

“Good chance you’ll die, it’s going to be tough going,” Musk said, “but it will be pretty glorious if it works out.”

David Bohm, Quantum Mechanics and Enlightenment

The visionary physicist, whose ideas remain influential, sought spiritual as well as scientific illumination. September 8th 2020

Scientific American

  • John Horgan

Theoretical physicist Dr. David J. Bohm at a 1971 symposium in London. Photo by Keystone.

Some scientists seek to clarify reality, others to mystify it. David Bohm seemed driven by both impulses. He is renowned for promoting a sensible (according to Einstein and other experts) interpretation of quantum mechanics. But Bohm also asserted that science can never fully explain the world, and his 1980 book Wholeness and the Implicate Order delved into spirituality. Bohm’s interpretation of quantum mechanics has attracted increasing attention lately. He is a hero of Adam Becker’s 2018 book What Is Real? The Unfinished Quest for the Meaning of Quantum Mechanics (reviewed by James Gleick, David Albert and Peter Woit). In The End of Science I tried to make sense of this paradoxical truth-seeker, who died in 1992 at the age of 74. Below is an edited version of that profile. See also my post on another quantum visionary, John Wheeler. –John Horgan

In August 1992 I visited David Bohm at his home in a London suburb. His skin was alarmingly pale, especially in contrast to his purplish lips and dark, wiry hair. His frame, sinking into a large armchair, seemed limp, languorous, and at the same time suffused with nervous energy. One hand cupped the top of his head, the other gripped an armrest. His fingers, long and blue-veined, with tapered, yellow nails, were splayed. He was recovering, he said, from a heart attack.

Bohm’s wife brought us tea and biscuits and vanished. Bohm spoke haltingly at first, but gradually the words came faster, in a low, urgent monotone. His mouth was apparently dry, because he kept smacking his lips. Occasionally, after making an observation that amused him, he pulled his lips back from his teeth in a semblance of a smile. He also had the disconcerting habit of pausing every few sentences and saying, “Is that clear?” or simply, “Hmmm?” I was often so hopelessly befuddled that I just smiled and nodded. But Bohm could be bracingly clear, too. Like an exotic subatomic particle, he oscillated in and out of focus.

Born and raised in the U.S., Bohm left in 1951, the height of anti-communist hysteria, after refusing to answer questions from a Congressional committee about whether he or anyone he knew was a communist. After stays in Brazil and Israel, he settled in England. Bohm was a scientific dissident too. He rebelled against the dominant interpretation of quantum mechanics, the so-called Copenhagen interpretation promulgated by Danish physicist Niels Bohr.

Bohm began questioning the Copenhagen interpretation in the late 1940s while writing a book on quantum mechanics. According to the Copenhagen interpretation, a quantum entity such as an electron has no definite existence apart from our observation of it. We cannot say with certainty whether it is either a wave or a particle. The interpretation also rejects the possibility that the seemingly probabilistic behavior of quantum systems stems from underlying, deterministic mechanisms.

Bohm found this view unacceptable. “The whole idea of science so far has been to say that underlying the phenomenon is some reality which explains things,” he explained. “It was not that Bohr denied reality, but he said quantum mechanics implied there was nothing more that could be said about it.” Such a view reduced quantum mechanics to “a system of formulas that we use to make predictions or to control things technologically. I said that’s not enough. I don’t think I would be very interested in science if that were all there was.”

In 1952 Bohm proposed that particles are indeed particles–and at all times, not just when they are observed in a certain way. Their behavior is determined by a force that Bohm called the “pilot wave.” Any effort to observe a particle alters its behavior by disturbing the pilot wave. Bohm thus gave the uncertainty principle a purely physical rather than metaphysical meaning. Niels Bohr had interpreted the uncertainty principle as meaning “not that there is uncertainty, but that there is an inherent ambiguity” in a quantum system, Bohm explained.

Bohm’s interpretation gets rid of one quantum paradox, wave/particle duality, but it preserves and even highlights another, nonlocality, the capacity of one particle to influence another instantaneously across vast distances. Einstein had drawn attention to nonlocality in 1935 in an effort to show that quantum mechanics must be flawed. Together with Boris Podolsky and Nathan Rosen, Einstein proposed a thought experiment involving two particles that spring from a common source and fly in opposite directions.

According to the standard model of quantum mechanics, neither particle has fixed properties, such as momentum, before it is measured. But by measuring one particle’s momentum, the physicist instantaneously forces the other particle, no matter how distant, to assume a fixed momentum. Deriding this effect as “spooky action at a distance,” Einstein argued that quantum mechanics must be flawed or incomplete. But in the early 1980s French physicists demonstrated spooky action in a laboratory. Bohm never had any doubts about the experiment’s outcome. “It would have been a terrific surprise to find out otherwise,” he said.
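
The correlations those experiments measured are easy to compute from textbook quantum mechanics. The minimal Python sketch below is a standard calculation, not anything specific to Bohm’s model: for a pair of spin-half particles in a singlet state, the predicted correlation between detectors at angles a and b is E(a, b) = -cos(a - b), and the CHSH combination of four such correlations comes out to 2*sqrt(2), above the limit of 2 that any local hidden-variable theory must obey.

import math

def E(a, b):
    """Quantum correlation for a singlet pair measured at detector angles a, b (radians)."""
    return -math.cos(a - b)

# Standard CHSH detector settings
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4

S = abs(E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2))
print(f"S = {S:.3f}  (any local hidden-variable theory requires S <= 2)")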

But here is the paradox of Bohm: Although he tried to make the world more sensible with his pilot-wave model, he also argued that complete clarity is impossible. He reached this conclusion after seeing an experiment on television, in which a drop of ink was squeezed onto a cylinder of glycerine. When the cylinder was rotated, the ink diffused through the glycerine in an apparently irreversible fashion. Its order seemed to have disintegrated. But when the direction of rotation was reversed, the ink gathered into a drop again.

The experiment inspired Bohm to write Wholeness and the Implicate Order, published in 1980. He proposed that underlying physical appearances, the “explicate order,” there is a deeper, hidden “implicate order.” Applying this concept to the quantum realm, Bohm proposed that the implicate order is a field consisting of an infinite number of fluctuating pilot waves. The overlapping of these waves generates what appears to us as particles, which constitute the explicate order. Even space and time might be manifestations of a deeper, implicate order, according to Bohm.

To plumb the implicate order, Bohm said, physicists might need to jettison basic assumptions about nature. During the Enlightenment, thinkers such as Newton and Descartes replaced the ancients’ organic concept of order with a mechanistic view. Even after the advent of relativity and quantum mechanics, “the basic idea is still the same,” Bohm told me, “a mechanical order described by coordinates.”

Bohm hoped scientists would eventually move beyond mechanistic and even mathematical paradigms. “We have an assumption now that’s getting stronger and stronger that mathematics is the only way to deal with reality,” Bohm said. “Because it’s worked so well for a while, we’ve assumed that it has to be that way.”

Someday, science and art will merge, Bohm predicted. “This division of art and science is temporary,” he observed. “It didn’t exist in the past, and there’s no reason why it should go on in the future.” Just as art consists not simply of works of art but of an “attitude, the artistic spirit,” so does science consist not in the accumulation of knowledge but in the creation of fresh modes of perception. “The ability to perceive or think differently is more important than the knowledge gained,” Bohm explained.

Bohm rejected the claim of physicists such as Hawking and Weinberg that physics can achieve a final “theory of everything” that explains the world. Science is an infinite, “inexhaustible process,” he said. “The form of knowledge is to have at any moment something essential, and the appearance can be explained. But then when we look deeper at these essential things they turn out to have some feature of appearances. We’re not ever going to get a final essence which isn’t also the appearance of something.”

Bohm feared that belief in a final theory might become self-fulfilling. “If you have fish in a tank and you put a glass barrier in there, the fish keep away from it,” he noted. “And then if you take away the glass barrier they never cross the barrier and they think the whole world is that.” He chuckled drily. “So your thought that this is the end could be the barrier to looking further.” Trying to convince me that final knowledge is unattainable, Bohm offered the following argument:

“Anything known has to be determined by its limits. And that’s not just quantitative but qualitative. The theory is this and not that. Now it’s consistent to propose that there is the unlimited. You have to notice that if you say there is the unlimited, it cannot be different, because then the unlimited will limit the limited, by saying that the limited is not the unlimited, right? The unlimited must include the limited. We have to say, from the unlimited the limited arises, in a creative process. That’s consistent. Therefore we say that no matter how far we go there is the unlimited. It seems that no matter how far you go, somebody will come up with another point you have to answer. And I don’t see how you could ever settle that.”

To my relief, Bohm’s wife entered the room and asked if we wanted more tea. As she refilled my cup, I pointed out a book on Buddhism on a shelf and asked Bohm if he was interested in spirituality. He nodded. He had been a friend of Krishnamurti, one of the first modern Indian sages to try to show Westerners how to achieve the state of spiritual serenity and grace called enlightenment. Was Krishnamurti enlightened? “In some ways, yes,” Bohm replied. “His basic thing was to go into thought, to get to the end of it, completely, and thought would become a different kind of consciousness.”

Of course, one could never truly plumb one’s own mind, Bohm said. Any attempt to examine one’s own thought changes it–just as the measurement of an electron alters its course. We cannot achieve final self-knowledge, Bohm seemed to imply, any more than we can achieve a final theory of physics.

Was Krishnamurti a happy person? Bohm seemed puzzled by my question. “That’s hard to say,” he replied. “He was unhappy at times, but I think he was pretty happy overall. The thing is not about happiness, really.” Bohm frowned, as if realizing the import of what he had just said.

I said goodbye to Bohm and his wife and departed. Outside, a light rain was falling. I walked up the path to the street and glanced back at Bohm’s house, a modest whitewashed cottage on a street of modest whitewashed cottages. He died of a heart attack two months later.

In Wholeness and the Implicate Order Bohm insisted on the importance of “playfulness” in science, and in life, but Bohm, in his writings and in person, was anything but playful. For him, truth-seeking was not a game, it was a dreadful, impossible, necessary task. Bohm was desperate to know, to discover the secret of everything, but he knew it wasn’t attainable, not for any mortal being. No one gets out of the fish tank alive.

John Horgan directs the Center for Science Writings at the Stevens Institute of Technology. His books include “The End of Science,” “The End of War” and “Mind-Body Problems,” available for free at mindbodyproblems.com.

The views expressed are those of the author(s) and are not necessarily those of Scientific American.

More from Scientific American

This post originally appeared on Scientific American and was published July 23, 2018. This article is republished here with permission.


A Frozen Graveyard: The Sad Tales of Antarctica’s Deaths

Beneath layers of snow and ice on the world’s coldest continent, there may be hundreds of people buried forever. Martha Henriques investigates their stories.

BBC Future

  • Martha Henriques

Crevasses can be deadly; this vehicle in the 1950s had a lucky escape. Credit: Getty Images.

In the bleak, almost pristine land at the edge of the world, there are the frozen remains of human bodies – and each one tells a story of humanity’s relationship with this inhospitable continent.

Even with all our technology and knowledge of the dangers of Antarctica, it can remain deadly for anyone who goes there. Inland, temperatures can plummet to nearly -90C (-130F). In some places, winds can reach 200mph (322km/h). And the weather is not the only risk.

Many bodies of scientists and explorers who perished in this harsh place are beyond reach of retrieval. Some are discovered decades or more than a century later. But many that were lost will never be found, buried so deep in ice sheets or crevasses that they will never emerge – or they are headed out towards the sea within creeping glaciers and calving ice.

The stories behind these deaths range from unsolved mysteries to freak accidents. In the second of the series Frozen Continent, BBC Future explored what these events reveal about life on the planet’s most inhospitable landmass.

1800s: Mystery of the Chilean Bones

At Livingston Island, among the South Shetlands off the Antarctic Peninsula, a human skull and femur have been lying near the shore for 175 years. They are the oldest human remains ever found in Antarctica.

The bones were discovered on the beach in the 1980s. Chilean researchers found that they belonged to a woman who died when she was about 21 years old. She was an indigenous person from southern Chile, 1,000km (620 miles) away.

Analysis of the bones suggested that she died between 1819 and 1825. The earlier end of that range would put her among the very first people to have been in Antarctica.

A Russian orthodox church sits on a small rise above Chile’s research base. Credit: Yadvinder Malhi.

The question is, how did she get there? The traditional canoes of the indigenous Chileans couldn’t have supported her on such a long voyage through what can be incredibly rough seas.

“There’s no evidence for an independent Amerindian presence in the South Shetlands,” says Michael Pearson, an Antarctic heritage consultant and independent researcher. “It’s not a journey you’d make in a bark canoe.”

The original interpretation by the Chilean researchers was that she was an indigenous guide to the sealers travelling from the northern hemisphere to the Antarctic islands that had been newly discovered by William Smith in 1819. But it was virtually unheard of for women to take part in expeditions to the far south in those early days.

Sealers did have a close relationship with the indigenous people of southern Chile, says Melisa Salerno, an archaeologist of the Argentinean Scientific and Technical Research Council (Conicet). Sometimes they would exchange seal skins with each other. It’s not out of the question that they traded expertise and knowledge, too. But the two cultures’ interactions weren’t always friendly.

“Sometimes it was a violent situation,” says Salerno. “The sealers could just take a woman from one beach and later leave her far away on another.”

Any scientist or explorer visiting Antarctica knows that they could be at risk. Credit: Getty Images.

A lack of surviving logs and journals from the early ships sailing south to Antarctica makes it even more difficult to trace this woman’s history.

Her story is unique among the early human presence in Antarctica. A woman who, by all the usual accounts, shouldn’t have been there – but somehow she was. Her bones mark the start of human activity on Antarctica, and the unavoidable loss of life that comes with trying to occupy this inhospitable continent.

29 March 1912: Scott’s South Pole Expedition Crew

Robert Falcon Scott’s team of British explorers reached the South Pole on 17 January 1912, about a month after the Norwegian team led by Roald Amundsen had departed from the same spot.

The British group’s morale was crushed when they discovered that they had not arrived first. Soon after, things would get much worse.

Attaining the pole was a feat to test human endurance, and Scott had been under huge pressure. As well as dealing with the immediate challenges of the harsh climate and lack of natural resources like wood for building, he had a crew of more than 60 men to lead. More pressure came from the high hopes of his colleagues back home.

Robert Falcon Scott writing his journal. Credit: Herbert Ponting/Wikipedia.

“They mean to do or die – that is the spirit in which they are going to the Antarctic,” Leonard Darwin, a president of the Royal Geographical Society and son of Charles Darwin, said in a speech at the time.

“Captain Scott is going to prove once again that the manhood of the nation is not dead … the self-respect of the whole nation is certainly increased by such adventures as this,” he said.

Scott was not impervious to the expectations. “He was a very rounded, human character,” says Max Jones, a historian of heroism and polar exploration at the University of Manchester. “In his journals, you find he’s racked with doubts and anxieties about whether he’s up to the task and that makes him more appealing. He had failings and weaknesses too.”

Despite his worries and doubts, the mindset of “do or die” drove the team to take risks that might seem alien to us now.

On the team’s return from the pole, Edgar Evans died first, in February. Then Lawrence Oates. He had considered himself a burden, thinking the team could not return home with him holding them back. “I am just going outside and may be some time,” he said on 17 March.

Members of the ill-fated British expedition to the pole. Credit: Getty Images.

Perhaps he had not realised how close the rest of the group were to death. The bodies of Oates and Evans were never found, but Scott, Edward Wilson and Henry Bowers were discovered by a search party several months after their deaths. They had died on 29 March 1912, according to the date in Scott’s diary entry. The search party covered them with snow and left them where they lay.

“I do not think human beings ever came through such a month as we have come through,” Scott wrote in his diary’s final pages. The team knew they were within 18km (11 miles) of the last food depot, with the supplies that could have saved them. But they were confined to a tent for days, growing weaker, trapped by a fierce blizzard.

“They were prepared to risk their lives and they saw that as legitimate. You can view that as part of a mindset of imperial masculinity, tied up with enduring hardship and hostile environments,” says Jones. “I’m not saying that they had a death wish, but I think that they were willing to die.”

14 October 1965: Jeremy Bailey, David Wild and John Wilson

Four men were riding a Muskeg tractor and its sledges near the Heimefront Mountains, to the east of their base at Halley Research Station in East Antarctica, close to the Weddell Sea. The Muskeg was a heavy-duty vehicle designed to haul people and supplies over long distances on the ice. A team of dogs ran behind.

Three of the men were in the cab. The fourth, John Ross, sat behind on the sledge at the back, close to the huskies. Jeremy (Jerry) Bailey, a scientist measuring the depth of the ice beneath the tractor, was driving. He and David (Dai) Wild, a surveyor, and John Wilson, a doctor, were scanning the ice ahead. Snow obscured much of the small, flat windscreen. The group had been travelling all day, taking turns to warm up in the cab or sit out back on the sledge.

Ross was staring out at the vast ice, snow and Stella Group mountains. At about 8:30, the dogs alongside the sledge stopped running. The sledge had ground to a halt.

Ross, muffled with a balaclava and two anoraks, had heard nothing. He turned to see that the Muskeg was gone. Ahead, the first sledge was leaning down into the ice. Ross ran up to it to find it had wedged in the top of a large crevasse running directly across their course. The Muskeg itself had fallen about 30m (100ft) into the crevasse. Down below, its tracks were wedged vertically against one ice wall, and the cab had been flattened hard against the other.

Ross shouted down. There was no reply from the three men in the cab. After about 20 minutes of shouting, Ross heard a reply. The exchange, as he recorded it from memory soon after the event, was brief:

Ross: Dai?

Bailey: Dai’s dead. It’s me.

Ross: Is that John or Jerry?

Bailey: Jerry.

Ross: How is John?

Bailey: He’s a goner, mate.

Ross: What about yourself?

Bailey: I’m all smashed up.

Ross: Can you move about at all or tie a rope round yourself?

Bailey: I’m all smashed up.

Ross tried climbing down into the crevasse, but the descent was difficult. Bailey told him not to risk it, but Ross tried anyway. After several attempts, Ross heard a scream from the crevasse. After that, Bailey stopped responding to his calls.

Crevasses – deep clefts in the ice stretching down hundreds of feet – are serious threats while travelling across the Antarctic. On 14 October 1965, there had been strong winds kicking up drifts and spreading snow far over the landscape, according to reports on the accident held at the British Antarctic Survey archives. This concealed the top of the chasms, and crucially, the thin blue line in the ice ahead of each drop that would have warned the men to stop.

“You can imagine – there’s a bit of drift about, and there’s bits of ice on the windscreen, your fingers are bloody cold, and you think it’s about time to stop anyway,” says Rod Rhys Jones, one of the expedition party who had not gone on that trip with the Muskeg. He points to the crevassed area the Muskeg had been driving over, on a map of the continent spread over his coffee table, littered with books on the Antarctic.

Many bodies are never recovered; others are buried on the continent. Credit: Getty Images.

“You’re driving along over the ice and thumping and bumping and banging. You don’t see the little blue line.”

Jones questions whether the team had been given adequate training for the hazards of travel in Antarctica. They were young men, mostly fresh out of university. Many of them had little experience of harsh physical conditions. Much of their time preparing for life in Antarctica was spent learning to use the scientific equipment they would need, not learning how to avoid accidents on the ice.

Each accident in Antarctica has slowly led to changes in the way people travelled and were trained. Reports filed after the incident recommended several ways to make travel through crevassed regions safer, from adapting the vehicle, to new ways to hitch them together.

August 1982: Ambrose Morgan, Kevin Ockleton and John Coll

The three men set out over the ice for an expedition to a nearby island in the depths of the Antarctic winter.

The sea ice was firm, and they made it easily to Petermann Island. The southern aurora was visible in the sky, unusually bright and strong enough to wipe out communications. The team reached the island safely and camped out at a hut near the shore.

Soon after reaching the shore, a large storm blew in that, by the next day, entirely destroyed the sea ice. The group was stranded, but concern among the party was low. There was enough food in the hut to last three people more than a month.

In the next few days, the sea ice failed to reform as storms swept and disrupted the ice in the channel.

Death is never far away in Antarctica. Credit: Richard Fisher.

There were no books or papers in the hut, and contact with the outside world was limited to scheduled radio transmissions to the base. Soon, it had been two weeks. The transmissions were kept brief, as the batteries in their radios were getting weaker and weaker. The team grew restless. Gentoo and Adelie penguins surrounded the hut. They might have looked endearing, but their smell soon began to bother the men.

Things got worse. The team got diarrhoea, as it turned out some of the food in the hut was much older than they had thought. The stench of the penguins didn’t make them feel any better. They killed and ate a few to boost their supplies.

The men waited with increasing frustration, complaining of boredom on their radio transmissions to base. On Friday 13 August 1982, they were seen through a telescope, waving back to the main base. Radio batteries were running low. The sea ice had reformed again, providing a tantalising hope for escape.

Two days later, on Sunday 15 August, the group didn’t check in on the radio at the scheduled time. Then another large storm blew in.

The men at the base climbed up to a high point where they could see the island. All the sea ice was gone again, taken out by the storm.

“These guys had done something which we all did – go out on a little trip to the island,” says Pete Salino, who had been on the main base at the time. The three men were never seen again.

There were very strong currents around the island. Reliable, thick ice formed relatively rarely, Salino recalls. The way they tested whether the ice would hold them was primitive – they would whack it with a wooden stick tipped with metal to see if it would smash.

Even after an extensive search, the bodies were never found. Salino suspects the men went out onto the ice when it reformed and either got stuck or weren’t able to turn back when the storm blew in.

“It does sound mad now, sitting in a cosy room in Surrey,” Salino says. “When we used to go out, there was always a risk of falling through, but you’d always go prepared. We’d always have spare clothing in a sealed bag. We all accepted the risk and felt that it could have been any of us.”

Legacy of Death

For those who experience the loss of colleagues and friends in Antarctica, grieving can be uniquely difficult. When a friend disappears or a body cannot be recovered, the typical human rituals of death – a burial, a last goodbye – elude those left behind.

Clifford Shelley, a British geophysicist based at Argentine Islands off the Antarctic Peninsula in the late 1970s, lost friends who were climbing the nearby peak Mount Peary in 1976. It was thought that those men – Geoffrey Hargreaves, Michael Walker and Graham Whitfield – were trapped in an avalanche. Signs of their camp were found by an air search, but their bodies were never recovered.

The graves of past explorers. Credit: Getty Images.

“You just wait and wait, but there’s nothing. Then you just sort of lose hope,” Shelley says.

Even when the body is recovered, the demanding nature of life and work on Antarctica can make it a hard place to grieve. Ron Pinder, a radio operator in the South Orkneys in the late 1950s and early 1960s, still mourns someone who slipped from a cliff on Signy Island while tagging birds in 1961. The body of his friend, Roger Filer, was found at the foot of a 20ft (6m) cliff below the nests where he was thought to have been tagging birds. His body was buried on the island.

“It is 57 years ago now. It is in the distant past. But it affects me more now than it did then. Life was such that you had to get on with it,” Pinder says.

The same rings true for Shelley. “I don’t think we did really process it,” he says. “It remains at the back of your mind. But it’s certainly a mixed feeling, because Antarctica is superbly beautiful, both during the winter and the summer. It’s the best place to be and we were doing the things we wanted to do.”

The monument to those who lost their lives at the Scott Polar Research Institute. Credit: swancharlotte/Wikipedia/CC BY-SA 4.0.

These deaths have led to changes in how people work in Antarctica. As a result, the people there today can live more safely on this hazardous, isolated continent. Although terrible incidents still happen, much has been learned from earlier fatalities.

For the friends and families of the dead, there is an ongoing effort to make sure their lost loved ones are not forgotten. Outside the Scott Polar Research Institute in Cambridge, UK, two high curved oak pillars lean towards one another, gently touching at the top. It is half of a monument to the dead, erected by the British Antarctic Monument Trust, set up by Rod Rhys Jones and Brian Dorsett-Bailey, Jeremy’s brother, to recognise and honour those who died in Antarctica. The other half of the monument is a long sliver of metal leaning slightly towards the sea at Port Stanley in the Falkland Islands, where many of the researchers set off for the last leg of their journey to Antarctica.

Viewed from one end so they align, the oak pillars curve away from each other, leaving a long tapering empty space between them. The shape of that void is perfectly filled by the tall steel shard mounted on a plinth on the other side of the world. It is a physical symbol that spans the hemispheres, connecting home with the vast and wild continent that drew these scientists away for the last time.

More from BBC Future

Water, Water, Every Where — And Now Scientists Know Where It Came From September 3rd 2020

  • Nell Greenfieldboyce

Water on Earth is omnipresent and essential for life as we know it, and yet scientists remain a bit baffled about where all of this water came from: Was it present when the planet formed, or did the planet form dry and only later get its water from impacts with water-rich objects such as comets?

A new study in the journal Science suggests that the Earth likely got a lot of its precious water from the original materials that built the planet, instead of having water arrive later from afar.

The researchers who did this study went looking for signs of water in a rare kind of meteorite. Only about 2% of the meteorites found on Earth are so-called enstatite chondrite meteorites. Their chemical makeup suggests they’re close to the kind of primordial stuff that glommed together and produced our planet 4.5 billion years ago.

You wouldn’t necessarily know how special these meteorites are at first glance. “It’s a bit like a gray rock,” says Laurette Piani, a researcher in France at the Centre de Recherches Pétrographiques et Géochimiques.

What she wanted to know about these rocks is how much hydrogen was in there — because that’s what could produce water.


Compared with planets such as Jupiter and Saturn, the Earth formed close to the sun. Scientists have long thought that the temperatures must have been hot enough to prevent any water from being in the form of ice. That means there would be no ice to join with the swirling bits of rock and dust that were smashing into each other and slowly building up the young Earth.

If this is all true, our home planet must have been watered later on, perhaps when it got hit by icy comets or meteorites with water-rich minerals coming from farther out in the solar system.


Even though that’s been the prevailing view, some planetary scientists don’t buy it. After all, the story of Earth’s water would be a lot more simple and straightforward if the water was just present to begin with.

So Piani and her colleagues recently took a close look at 13 of those unusual meteorites, which are also thought to have formed close in to the sun.

“Before the study, there were almost no measurement of the hydrogen or water in this meteorite,” Piani says. Those measurements that did exist were inconsistent, she says, and were done on meteorites that could have undergone changes after falling to the Earth’s surface.

“We do not want to have meteorites that were altered and modified by the Earth processes,” Piani explains, saying that they deliberately selected the most pristine meteorites possible.

The researchers then analyzed the meteorites’ chemical makeup to see how much hydrogen was in there. Since hydrogen can react with oxygen to produce water, knowing how much hydrogen is in the rocks indicates how much water this material could have contributed to a growing Earth.

What they found was much less hydrogen than in more ordinary meteorites.

Still, what was there would be enough to explain plenty of Earth’s water — at least several times the amount of water in the Earth’s present-day oceans. “It’s a very big quantity of water in the initial material,” Piani says. “And this was never really considered before.”
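
The arithmetic behind that claim is straightforward stoichiometry. In the sketch below, the Earth and ocean masses are standard figures, but the hydrogen mass fraction is an illustrative assumption (the article does not quote the paper’s measured abundances); since water is two parts hydrogen to sixteen parts oxygen by mass, each kilogram of hydrogen can yield about nine kilograms of water.

EARTH_MASS_KG = 5.97e24
OCEAN_MASS_KG = 1.4e21       # approximate mass of Earth's present-day oceans
H_MASS_FRACTION = 1e-4       # assumed ~0.01% hydrogen by mass; illustrative only

# Water is H2O: 2 g of hydrogen pairs with 16 g of oxygen to give 18 g of water,
# so each kilogram of hydrogen can yield up to 9 kg of water.
water_kg = EARTH_MASS_KG * H_MASS_FRACTION * (18.0 / 2.0)
print(f"Potential water: {water_kg:.2e} kg, about {water_kg / OCEAN_MASS_KG:.1f} oceans")

On these assumed numbers the building material carries a few oceans’ worth of water, which is the scale of result the study describes.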

What’s more, the team also measured the deuterium-to-hydrogen ratio in the meteorites and found that it’s similar to what’s known to exist in the interior of the Earth — which also contains a lot of water. This is additional evidence that there’s a link between our planet’s water and the basic building materials that were present when it formed.

The findings pleased Anne Peslier, a planetary scientist at NASA’s Johnson Space Center in Houston, who wasn’t part of the research team but has a special interest in water.

“I was happy because it makes it nice and simple,” Peslier says. “We don’t have to invoke complicated models where we have to bring material, water-rich material from the outer part of the solar system.”

She says the delivery of so much water from way out there would have required something unusual to disturb the orbits of this water-rich material, such as Jupiter having a little trip inside the inner solar system.

“So here, we just don’t need Jupiter. We don’t need to do anything weird. We just grab the material that was there where the Earth formed, and that’s where the water comes from,” Peslier says.

Even if a lot of the water was there at the start, however, she thinks some must have arrived later on. “I think it’s both,” she says.

Despite these convincing results, she says, there are still plenty of watery mysteries to plumb. For example, researchers are still trying to determine exactly how much water is locked deep inside the Earth, but it’s surely substantial — several oceans’ worth.

“There is more water down beneath our feet,” Peslier says, “than there is that you see at the surface.”

A 16-Million-Year-Old Tree Tells a Deep Story of the Passage of Time

The sequoia tree slab is an invitation to begin thinking about a vast timescale that includes everything from fossils of armored amoebas to the great Tyrannosaurus rex.

Smithsonian Magazine

  • Riley Black

How many answers are hidden inside the giants? Photo by Kelly Cheng Travel Photography / Getty Images.

Paleobotanist Scott Wing hopes that he’s wrong. Even though he carefully counted each ring in an immense, ancient slab of sequoia, the scientist notes that there’s always a little bit of uncertainty in the count. Wing came up with about 260, but, he says, it’s likely a young visitor may one day write him saying: “You’re off by three.” And that would be a good thing, Wing says, because it’d be another moment in our ongoing conversation about time.

The shining slab, preserved and polished, is the keystone to consideration of time and our place in it in the “Hall of Fossils—Deep Time” exhibition at the Smithsonian’s National Museum of Natural History. The fossil greets visitors at one of the show’s entrances and just like the physical tree, what the sequoia represents has layers.

Each yearly delineation on the sequoia’s surface is a small part of a far grander story that ties together all of life on Earth. Scientists know this as Deep Time. It’s not just on the scale of centuries, millennia, epochs, or periods, but the ongoing flow that goes back to the origins of our universe, the formation of the Earth, and the evolution of all life, up through this present moment. It’s the backdrop for everything we see around us today, and it can be understood through techniques as different as absolute dating of radioactive minerals and counting the rings of a prehistoric tree. Each part informs the whole.

In decades past, the Smithsonian’s fossil halls were known for the ancient celebrities they contained. There was the dinosaur hall, and the fossil mammal hall, surrounded by the remains of other extinct organisms. But now all of those lost species have been brought together into an integrated story of dynamic and dramatic change. The sequoia is an invitation to begin thinking about how we fit into the vast timescale that includes everything from fossils of armored amoebas called forams to the great Tyrannosaurus rex.

Exactly how the sequoia fossil came to be at the Smithsonian is not entirely clear. The piece was gifted to the museum long ago, “before my time,” Wing says. Still, enough of the tree’s backstory is known to identify it as a massive tree that grew in what’s now central Oregon about 16 million years ago. This tree was once a long-lived part of a true forest primeval.

There are fossils both far older and more recent in the recesses of the Deep Time displays. But what makes the sequoia a fitting introduction to the story that unfolds behind it, Wing says, is that the rings offer different ways to think about time. Given that the sequoia grew seasonally, each ring marks the passage of another year, and visitors can look at the approximately 260 delineations and think about what such a time span represents.

People can play the classic game of comparing the tree’s life to a human lifespan, Wing says. If a long human life is about 80 years, then people can count 80, 160, and 240 years, meaning the sequoia grew and thrived over the course of approximately three human lifespans—but during a time when our own ancestors resembled gibbon-like apes. Time is not something that life simply passes through. In everything—from the rings of an ancient tree to the very bones in your body—time is part of life.

The record of that life—and even afterlife—lies between the lines. “You can really see that this tree was growing like crazy in its initial one hundred years or so,” Wing says, with the growth slowing as the tree became larger. And despite the slab’s ancient age, some of the original organic material is still locked inside.

“This tree was alive, photosynthesizing, pulling carbon dioxide out of the atmosphere, turning it into sugars and into lignin and cellulose to make cell walls,” Wing says. After the tree perished, water carrying silica and other minerals coated the log to preserve the wood and protect some of those organic components inside. “The carbon atoms that came out of the atmosphere 16 million years ago are locked in this chunk of glass.”

And so visitors are drawn even further back, not only through the life of the tree itself but through a time span so great that it’s difficult to comprehend. A little back of the envelope math indicates that the tree represents about three human lifetimes, but that the time between when the sequoia was alive and the present could contain about 200,000 human lifetimes. The numbers grow so large that they begin to become abstract. The sequoia is a way to touch that history and start to feel the pull of all those ages past, and what they mean to us. “Time is so vast,” Wing says, “that this giant slab of a tree is just scratching the surface.”
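
That back-of-the-envelope math takes only a few lines to reproduce, using nothing but the figures already quoted in the article:

TREE_RINGS = 260             # Wing's approximate ring count
LIFESPAN_YEARS = 80          # the "long human life" used in the comparison
SLAB_AGE_YEARS = 16_000_000  # age of the fossil sequoia

print(f"The tree lived about {TREE_RINGS / LIFESPAN_YEARS:.1f} human lifetimes")
print(f"Since it died: about {SLAB_AGE_YEARS / LIFESPAN_YEARS:,.0f} human lifetimes")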

Riley Black is a freelance science writer specializing in evolution, paleontology and natural history who blogs regularly for Scientific American. Smithsonian Magazine

More from Smithsonian Magazine

This post originally appeared on Smithsonian Magazine and was published June 10, 2019. This article is republished here with permission.

“Holy Grail” Metallic Hydrogen Is Going to Change Everything

The substance has the potential to revolutionize everything from space travel to the energy grid. August 26th 2020

Inverse

  • Kastalia Medrano

Photo from Stocktrek Images / Getty Images.

Two Harvard scientists have succeeded in creating an entirely new substance long believed to be the “holy grail” of physics — metallic hydrogen, a material of unparalleled power that could one day propel humans into deep space. The research was published in January 2017 in the journal Science.

Scientists created the metallic hydrogen by pressurizing a hydrogen sample to more pounds per square inch than exists at the center of the Earth. This broke the solid molecular hydrogen down and allowed the particles to dissociate into atomic hydrogen.

The best rocket fuel we currently have is liquid hydrogen and liquid oxygen, burned as propellant. The efficacy of such propellants is characterized by “specific impulse,” a measure of how much impulse a given amount of fuel can impart to a rocket to propel it forward.

“People at NASA or the Air Force have told me that if they could get an increase from 450 seconds [of specific impulse] to 500 seconds, that would have a huge impact on rocketry,” Isaac Silvera, the Thomas D. Cabot Professor of the Natural Sciences at Harvard University, told Inverse by phone. “If you can trigger metallic hydrogen to recover to the molecular phase, [the energy release] calculated for that is 1700 seconds.”
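
To see what that jump buys, plug the numbers into the standard Tsiolkovsky rocket equation, delta-v = Isp x g0 x ln(m0/mf). The sketch below is a generic textbook calculation, not taken from the article, and the 10:1 wet-to-dry mass ratio is an illustrative assumption:

import math

G0 = 9.81  # standard gravity, m/s^2

def delta_v(isp_seconds, mass_ratio):
    """Tsiolkovsky rocket equation: ideal velocity change for a given mass ratio."""
    return isp_seconds * G0 * math.log(mass_ratio)

for isp in (450, 500, 1700):
    dv = delta_v(isp, mass_ratio=10)  # 10:1 wet-to-dry mass ratio, assumed
    print(f"Isp {isp:>4} s -> delta-v {dv / 1000:.1f} km/s")

On those assumptions, moving from 450 to 1700 seconds nearly quadruples the achievable velocity change, which is why single-stage trips to orbit start to look plausible.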

Metallic hydrogen could potentially enable rockets to get into orbit in a single stage, even allowing humans to explore the outer planets. Metallic hydrogen is predicted to be “metastable” — meaning if you make it at a very high pressure then release it, it’ll stay at that pressure. A diamond, for example, is a metastable form of graphite. If you take graphite, pressurize it, then heat it, it becomes a diamond; if you take the pressure off, it’s still a diamond. But if you heat it again, it will revert back to graphite.

Scientists first theorized atomic metallic hydrogen a century ago. Silvera, who created the substance along with post-doctoral fellow Ranga Dias, has been chasing it since 1982, when he was working as a professor of physics at the University of Amsterdam.

Metallic hydrogen has also been predicted to be a high- or possibly room-temperature superconductor. There are no other known room-temperature superconductors in existence, meaning the applications are immense — particularly for the electric grid, which suffers from energy lost through heat dissipation. It could also facilitate magnetic levitation for futuristic high-speed trains; substantially improve performance of electric cars; and revolutionize the way energy is produced and stored.

But that’s all still likely a couple of decades off. The next step in terms of practical application is to determine if metallic hydrogen is indeed metastable. Right now Silvera has a very small quantity. If the substance does turn out to be metastable, it might be used to create a room-temperature crystal that — by spraying atomic hydrogen onto the surface — acts like a seed to grow more, the way synthetic diamonds are made. Inverse

More from Inverse

This Is How Your Brain Becomes Addicted to Caffeine August 23rd 2020

Regular ingestion of the drug alters your brain’s chemical makeup, leading to fatigue, headaches and nausea if you try to quit.

Smithsonian Magazine


Within 24 hours of quitting the drug, your withdrawal symptoms begin. Initially, they’re subtle: The first thing you notice is that you feel mentally foggy, and lack alertness. Your muscles are fatigued, even when you haven’t done anything strenuous, and you suspect that you’re more irritable than usual.

Over time, an unmistakable throbbing headache sets in, making it difficult to concentrate on anything. Eventually, as your body protests having the drug taken away, you might even feel dull muscle pains, nausea and other flu-like symptoms.

This isn’t heroin, tobacco or even alcohol withdrawal. We’re talking about quitting caffeine, a substance consumed so widely (the FDA reports that more than 80 percent of American adults drink it daily) and in such mundane settings (say, at an office meeting or in your car) that we often forget it’s a drug—and by far the world’s most popular psychoactive one.

Like many drugs, caffeine is chemically addictive, a fact that scientists established back in 1994. In May 2013, with the publication of the 5th edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM), caffeine withdrawal was finally included as a mental disorder for the first time—even though the symptoms that earned it inclusion are ones regular coffee drinkers have long known from the times they’ve gone off the drug for a day or more.

Why, exactly, is caffeine addictive? The reason stems from the way the drug affects the human brain, producing the alert feeling that caffeine drinkers crave.

Soon after you drink (or eat) something containing caffeine, it’s absorbed through the small intestine and dissolved into the bloodstream. Because the chemical is both water- and fat-soluble (meaning that it can dissolve in water-based solutions—think blood—as well as fat-based substances, such as our cell membranes), it’s able to penetrate the blood-brain barrier and enter the brain.

Structurally, caffeine closely resembles a molecule that’s naturally present in our brain, called adenosine (which is a byproduct of many cellular processes, including cellular respiration)—so much so, in fact, that caffeine can fit neatly into our brain cells’ receptors for adenosine, effectively blocking them off. Normally, the adenosine produced over time locks into these receptors and produces a feeling of tiredness.


Caffeine structurally resembles adenosine enough for it to fit into the brain’s adenosine receptors.

When caffeine molecules are blocking those receptors, they prevent this from occurring, thereby generating a sense of alertness and energy for a few hours.

Additionally, some of the brain’s own natural stimulants (such as dopamine) work more effectively when the adenosine receptors are blocked, and all the surplus adenosine floating around in the brain cues the adrenal glands to secrete adrenaline, another stimulant.

For this reason, caffeine isn’t technically a stimulant on its own, says Stephen R. Braun, the author of Buzzed: The Science and Lore of Caffeine and Alcohol, but a stimulant enabler: a substance that lets our natural stimulants run wild. Ingesting caffeine, he writes, is akin to “putting a block of wood under one of the brain’s primary brake pedals.” This block stays in place for anywhere from four to six hours, depending on the person’s age, size and other factors, until the caffeine is eventually metabolized by the body.

In people who take advantage of this process on a daily basis (i.e. coffee/tea, soda or energy drink addicts), the brain’s chemistry and physical characteristics actually change over time as a result. The most notable change is that brain cells grow more adenosine receptors, which is the brain’s attempt to maintain equilibrium in the face of a constant onslaught of caffeine, with its adenosine receptors so regularly plugged (studies indicate that the brain also responds by decreasing the number of receptors for norepinephrine, a stimulant). This explains why regular coffee drinkers build up a tolerance over time—because you have more adenosine receptors, it takes more caffeine to block a significant proportion of them and achieve the desired effect.

This also explains why suddenly giving up caffeine entirely can trigger a range of withdrawal effects. The underlying chemistry is complex and not fully understood, but the principle is that your brain is used to operating in one set of conditions (with an artificially-inflated number of adenosine receptors, and a decreased number of norepinephrine receptors) that depend upon regular ingestion of caffeine. Suddenly, without the drug, the altered brain chemistry causes all sorts of problems, including the dreaded caffeine withdrawal headache.

The good news is that, compared to many drug addictions, the effects are relatively short-term. To kick the habit, you only need to get through about 7–12 days of symptoms without drinking any caffeine. During that period, your brain will naturally decrease the number of adenosine receptors on each cell in response to the sudden lack of caffeine. If you can make it that long without a cup of joe or a spot of tea, the number of adenosine receptors in your brain resets to baseline, and your addiction is broken.
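
The receptor story above lends itself to a toy simulation. This is a deliberately crude sketch, not neuroscience: every number is an assumption, and the real chemistry (including the norepinephrine side) is far messier. It only makes the feedback loop explicit: caffeine blocks receptors, the brain compensates by growing more, and quitting leaves the surplus unblocked until it decays away.

```python
# Toy model of caffeine tolerance and withdrawal. All values are
# illustrative assumptions; only the feedback loop matters.
BASELINE = 100.0   # arbitrary units of adenosine signalling at equilibrium
RATE = 0.10        # assumed daily homeostatic adjustment rate

def simulate(days_on: int, days_off: int) -> None:
    receptors = BASELINE
    for day in range(days_on + days_off):
        blocked = 0.5 if day < days_on else 0.0   # assume caffeine blocks half
        felt = receptors * (1.0 - blocked)        # unblocked signalling
        receptors += RATE * (BASELINE - felt)     # grow/prune toward baseline
        state = "on caffeine" if blocked else "withdrawal"
        print(f"day {day:2d} ({state}): receptors={receptors:6.1f}, felt={felt:6.1f}")

simulate(days_on=30, days_off=14)
# While "on", the receptor count creeps upward until felt signalling is
# near baseline again (tolerance). When the caffeine stops, felt
# signalling overshoots (fatigue, headache) and then decays back toward
# normal over a week or two, the withdrawal window the article cites.
```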

Joseph Stromberg, Smithsonian

Is Consciousness an Illusion?

Philosopher Daniel Dennett holds a distinctive and openly paradoxical position on the question of consciousness. August 20th 2020

The New York Review of Books

  • Thomas Nagel

Daniel Dennett at the Centro Cultural de la Ciencia, Buenos Aires, Argentina, June 2016. Photo by Soledad Aznarez / AP Images.

For fifty years the philosopher Daniel Dennett has been engaged in a grand project of disenchantment of the human world, using science to free us from what he deems illusions—illusions that are difficult to dislodge because they are so natural. In From Bacteria to Bach and Back, his eighteenth book (thirteenth as sole author), Dennett presents a valuable and typically lucid synthesis of his worldview. Though it is supported by reams of scientific data, he acknowledges that much of what he says is conjectural rather than proven, either empirically or philosophically.

Dennett is always good company. He has a gargantuan appetite for scientific knowledge, and is one of the best people I know at transmitting it and explaining its significance, clearly and without superficiality. He writes with wit and elegance; and in this book especially, though it is frankly partisan, he tries hard to grasp and defuse the sources of resistance to his point of view. He recognizes that some of what he asks us to believe is strongly counterintuitive. I shall explain eventually why I think the overall project cannot succeed, but first let me set out the argument, which contains much that is true and insightful.

The book has a historical structure, taking us from the prebiotic world to human minds and human civilization. It relies on different forms of evolution by natural selection, both biological and cultural, as its most important method of explanation. Dennett holds fast to the assumption that we are just physical objects and that any appearance to the contrary must be accounted for in a way that is consistent with this truth. Bach’s or Picasso’s creative genius, and our conscious experience of hearing Bach’s Fourth Brandenburg Concerto or seeing Picasso’s Girl Before a Mirror, all arose by a sequence of physical events beginning with the chemical composition of the earth’s surface before the appearance of unicellular organisms. Dennett identifies two unsolved problems along this path: the origin of life at its beginning and the origin of human culture much more recently. But that is no reason not to speculate.

The task Dennett sets himself is framed by a famous distinction drawn by the philosopher Wilfrid Sellars between the “manifest image” and the “scientific image”—two ways of seeing the world we live in. According to the manifest image, Dennett writes, the world is

full of other people, plants, and animals, furniture and houses and cars…and colors and rainbows and sunsets, and voices and haircuts, and home runs and dollars, and problems and opportunities and mistakes, among many other such things. These are the myriad “things” that are easy for us to recognize, point to, love or hate, and, in many cases, manipulate or even create…. It’s the world according to us.

According to the scientific image, on the other hand, the world

is populated with molecules, atoms, electrons, gravity, quarks, and who knows what else (dark energy, strings? branes?).

This, according to Dennett, is the world as it is in itself, not just for us, and the task is to explain scientifically how the world of molecules has come to include creatures like us, complex physical objects to whom everything, including they themselves, appears so different.

He greatly extends Sellars’s point by observing that the concept of the manifest image can be generalized to apply not only to humans but to all other living beings, all the way down to bacteria. All organisms have biological sensors and physical reactions that allow them to detect and respond appropriately only to certain features of their environment—“affordances,” Dennett calls them—that are nourishing, noxious, safe, dangerous, sources of energy or reproductive possibility, potential predators or prey.

For each type of organism, whether plant or animal, these are the things that define their world, that are salient and important for them; they can ignore the rest. Whatever the underlying physiological mechanisms, the content of the manifest image reveals itself in what the organisms do and how they react to their environment; it need not imply that the organisms are consciously aware of their surroundings. But in its earliest forms, it is the first step on the route to awareness.


The lengthy process of evolution that generates these results is first biological and then, in our case, cultural, and only at the very end is it guided partly by intelligent design, made possible by the unique capacities of the human mind and human civilization. But as Dennett says, the biosphere is saturated with design from the beginning—everything from the genetic code embodied in DNA to the metabolism of unicellular organisms to the operation of the human visual system—design that is not the product of intention and that does not depend on understanding.

One of Dennett’s most important claims is that most of what we and our fellow organisms do to stay alive, cope with the world and one another, and reproduce is not understood by us or them. It is competence without comprehension. This is obviously true of organisms like bacteria and trees that have no comprehension at all, but it is equally true of creatures like us who comprehend a good deal. Most of what we do, and what our bodies do—digest a meal, move certain muscles to grasp a doorknob, or convert the impact of sound waves on our eardrums into meaningful sentences—is done for reasons that are not our reasons. Rather, they are what Dennett calls free-floating reasons, grounded in the pressures of natural selection that caused these behaviors and processes to become part of our repertoire. There are reasons why these patterns have emerged and survived, but we don’t know those reasons, and we don’t have to know them to display the competencies that allow us to function.

Nor do we have to understand the mechanisms that underlie those competencies. In an illuminating metaphor, Dennett asserts that the manifest image that depicts the world in which we live our everyday lives is composed of a set of user-illusions,

like the ingenious user-illusion of click-and-drag icons, little tan folders into which files may be dropped, and the rest of the ever more familiar items on your computer’s desktop. What is actually going on behind the desktop is mind-numbingly complicated, but users don’t need to know about it, so intelligent interface designers have simplified the affordances, making them particularly salient for human eyes, and adding sound effects to help direct attention. Nothing compact and salient inside the computer corresponds to that little tan file-folder on the desktop screen.

He says that the manifest image of each species is “a user-illusion brilliantly designed by evolution to fit the needs of its users.” In spite of the word “illusion” he doesn’t wish simply to deny the reality of the things that compose the manifest image; the things we see and hear and interact with are “not mere fictions but different versions of what actually exists: real patterns.” The underlying reality, however, what exists in itself and not just for us or for other creatures, is accurately represented only by the scientific image—ultimately in the language of physics, chemistry, molecular biology, and neurophysiology.


Our user-illusions were not, like the little icons on the desktop screen, created by an intelligent interface designer. Nearly all of them—such as our images of people, their faces, voices, and actions, the perception of some things as delicious or comfortable and others as disgusting or dangerous—are the products of “bottom-up” design, understandable through the theory of evolution by natural selection, rather than “top-down” design by an intelligent being. Darwin, in what Dennett calls a “strange inversion of reasoning,” showed us how to resist the intuitive tendency always to explain competence and design by intelligence, and how to replace it with explanation by natural selection, a mindless process of accidental variation, replication, and differential survival.

As for the underlying mechanisms, we now have a general idea of how they might work because of another strange inversion of reasoning, due to Alan Turing, the creator of the computer, who saw how a mindless machine could do arithmetic perfectly without knowing what it was doing. This can be applied to all kinds of calculation and procedural control, in natural as well as in artificial systems, so that their competence does not depend on comprehension. Dennett’s claim is that when we put these two insights together, we see that

all the brilliance and comprehension in the world arises ultimately out of uncomprehending competences compounded over time into ever more competent—and hence comprehending—systems. This is indeed a strange inversion, overthrowing the pre-Darwinian mind-first vision of Creation with a mind-last vision of the eventual evolution of us, intelligent designers at long last.

And he adds:

Turing himself is one of the twigs on the Tree of Life, and his artifacts, concrete and abstract, are indirectly products of the blind Darwinian processes in the same way spider webs and beaver dams are….

An essential, culminating stage of this process is cultural evolution, much of which, Dennett believes, is as uncomprehending as biological evolution. He quotes Peter Godfrey-Smith’s definition, from which it is clear that the concept of evolution can apply more widely:

Evolution by natural selection is change in a population due to (i) variation in the characteristics of members of the population, (ii) which causes different rates of reproduction, and (iii) which is heritable.

In the biological case, variation is caused by mutations in DNA, and it is heritable through reproduction, sexual or otherwise. But the same pattern applies to variation in behavior that is not genetically caused, and that is heritable only in the sense that other members of the population can copy it, whether it be a game, a word, a superstition, or a mode of dress.
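
Godfrey-Smith's three conditions can be made concrete in a few lines of code. In this sketch (my illustration, not Dennett's) the "population" is a set of strings standing in for copied behaviors, words, say: copying supplies heritability, copying errors supply variation, and a fitness function supplies differential reproduction. The alphabet, target, and rates are arbitrary assumptions.

```python
# Minimal natural-selection loop over copied strings.
import random

TARGET = "tomato"   # an arbitrary "fit" form of the word
ALPHABET = "abcdefghijklmnopqrstuvwxyz"
MUTATION_RATE = 0.05

def fitness(word: str) -> int:
    # Condition (i): members of the population vary in their characteristics.
    return sum(a == b for a, b in zip(word, TARGET))

def copy_with_errors(word: str) -> str:
    # Condition (iii): heritable, but imperfectly so.
    return "".join(random.choice(ALPHABET) if random.random() < MUTATION_RATE else c
                   for c in word)

population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(50)]
for generation in range(60):
    # Condition (ii): fitter variants get copied more often.
    population.sort(key=fitness, reverse=True)
    parents = population[:25]
    population = [copy_with_errors(random.choice(parents)) for _ in range(50)]

print(max(population, key=fitness))   # typically at or near "tomato"
```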


This is the territory of what Richard Dawkins memorably christened “memes,” and Dennett shows that the concept is genuinely useful in describing the formation and evolution of culture. He defines “memes” thus:

They are a kind of way of behaving (roughly) that can be copied, transmitted, remembered, taught, shunned, denounced, brandished, ridiculed, parodied, censored, hallowed.

They include such things as the meme for wearing your baseball cap backward or for building an arch of a certain shape; but the best examples of memes are words. A word, like a virus, needs a host to reproduce, and it will survive only if it is eventually transmitted to other hosts, people who learn it by imitation:

Like a virus, it is designed (by evolution, mainly) to provoke and enhance its own replication, and every token it generates is one of its offspring. The set of tokens descended from an ancestor token form a type, which is thus like a species.


Alan Turing; drawing by David Levine.

The distinction between type and token comes from the philosophy of language: the word “tomato” is a type, of which any individual utterance or inscription or occurrence in thought is a token. The different tokens may be physically very different—you say “tomayto,” I say “tomahto”—but what unites them is the perceptual capacity of different speakers to recognize them all as instances of the type. That is why people speaking the same language with different accents, or typing with different fonts, can understand each other.
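
For what it is worth, the type/token distinction has a loose analogue in code, with physically different tokens united by a recognizer that maps them to one canonical type. This is only an analogy of mine, not a claim from the review, and the "recognizer" below is a toy.

```python
# Four physically different tokens, one type.
tokens = ["tomayto", "tomahto", "Tomato", "TOMATO"]

def recognize(token: str) -> str:
    # Toy stand-in for the perceptual capacity that unites tokens:
    # normalize case and collapse the two accent spellings.
    return token.lower().replace("ayto", "ato").replace("ahto", "ato")

print({recognize(t) for t in tokens})   # {'tomato'}: one type, four tokens
```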

A child picks up its native language without any comprehension of how it works. Dennett believes, plausibly, that language must have originated in an equally unplanned way, perhaps initially by the spontaneous attachment of sounds to prelinguistic thoughts. (And not only sounds but gestures: as Dennett observes, we find it very difficult to talk without moving our hands, an indication that the earliest language may have been partly nonvocal.) Eventually such memes coalesced to form languages as we know them, intricate structures with vast expressive capacity, shared by substantial populations.

Language permits us to transcend space and time by communicating about what is not present, to accumulate shared bodies of knowledge, and with writing to store them outside of individual minds, resulting in the vast body of collective knowledge and practice dispersed among many minds that constitutes civilization. Language also enables us to turn our attention to our own thoughts and develop them deliberately in the kind of top-down creativity characteristic of science, art, technology, and institutional design.

But such top-down research and development is possible only on a deep foundation of competence whose development was largely bottom-up, the result of cultural evolution by natural selection. Without denigrating the contributions of individual genius, Dennett urges us not to forget its indispensable precondition, the arms race over millennia of competing memes—exemplified by the essentially unplanned evolution, survival, and extinction of languages.

Of course the biological evolution of the human brain made all of this possible, together with some coevolution of brain and culture over the past 50,000 years, but at this point we can only speculate about what happened. Dennett cites recent research in support of the view that brain architecture is the product of bottom-up competition and coalition-formation among neurons—partly in response to the invasion of memes. But whatever the details, if Dennett is right that we are physical objects, it follows that all the capacities for understanding, all the values, perceptions, and thoughts that present us with the manifest image and allow us to form the scientific image, have their real existence as systems of representation in the central nervous system.


This brings us to the question of consciousness, on which Dennett holds a distinctive and openly paradoxical position. Our manifest image of the world and ourselves includes as a prominent part not only the physical body and central nervous system but our own consciousness with its elaborate features—sensory, emotional, and cognitive—as well as the consciousness of other humans and many nonhuman species. In keeping with his general view of the manifest image, Dennett holds that consciousness is not part of reality in the way the brain is. Rather, it is a particularly salient and convincing user-illusion, an illusion that is indispensable in our dealings with one another and in monitoring and managing ourselves, but an illusion nonetheless.

You may well ask how consciousness can be an illusion, since every illusion is itself a conscious experience—an appearance that doesn’t correspond to reality. So it cannot appear to me that I am conscious though I am not: as Descartes famously observed, the reality of my own consciousness is the one thing I cannot be deluded about. The way Dennett avoids this apparent contradiction takes us to the heart of his position, which is to deny the authority of the first-person perspective with regard to consciousness and the mind generally.

The view is so unnatural that it is hard to convey, but it has something in common with the behaviorism that was prevalent in psychology at the middle of the last century. Dennett believes that our conception of conscious creatures with subjective inner lives—which are not describable merely in physical terms—is a useful fiction that allows us to predict how those creatures will behave and to interact with them. He has coined the term “heterophenomenology” to describe the (strictly false) attribution each of us makes to others of an inner mental theater—full of sensory experiences of colors, shapes, tastes, sounds, images of furniture, landscapes, and so forth—that contains their representation of the world.

According to Dennett, however, the reality is that the representations that underlie human behavior are found in neural structures of which we know very little. And the same is true of the similar conception we have of our own minds. That conception does not capture an inner reality, but has arisen as a consequence of our need to communicate to others in rough and graspable fashion our various competencies and dispositions (and also, sometimes, to conceal them):

Curiously, then, our first-person point of view of our own minds is not so different from our second-person point of view of others’ minds: we don’t see, or hear, or feel, the complicated neural machinery churning away in our brains but have to settle for an interpreted, digested version, a user-illusion that is so familiar to us that we take it not just for reality but also for the most indubitable and intimately known reality of all.

The trouble is that Dennett concludes not only that there is much more behind our behavioral competencies than is revealed to the first-person point of view—which is certainly true—but that nothing whatever is revealed to the first-person point of view but a “version” of the neural machinery. In other words, when I look at the American flag, it may seem to me that there are red stripes in my subjective visual field, but that is an illusion: the only reality, of which this is “an interpreted, digested version,” is that a physical process I can’t describe is going on in my visual cortex.

I am reminded of the Marx Brothers line: “Who are you going to believe, me or your own eyes?” Dennett asks us to turn our backs on what is glaringly obvious—that in consciousness we are immediately aware of real subjective experiences of color, flavor, sound, touch, etc. that cannot be fully described in neural terms even though they have a neural cause (or perhaps have neural as well as experiential aspects). And he asks us to do this because the reality of such phenomena is incompatible with the scientific materialism that in his view sets the outer bounds of reality. He is, in Aristotle’s words, “maintaining a thesis at all costs.”

If I understand him, this requires us to interpret ourselves behavioristically: when it seems to me that I have a subjective conscious experience, that experience is just a belief, manifested in what I am inclined to say. According to Dennett, the red stripes that appear in my visual field when I look at the flag are just the “intentional object” of such a belief, as Santa Claus is the intentional object of a child’s belief in Santa Claus. Neither of them is real. Recall that even trees and bacteria have a manifest image, which is to be understood through their outward behavior. The same, it turns out, is true of us: the manifest image is not an image after all.


There is no reason to go through such mental contortions in the name of science. The spectacular progress of the physical sciences since the seventeenth century was made possible by the exclusion of the mental from their purview. To say that there is more to reality than physics can account for is not a piece of mysticism: it is an acknowledgment that we are nowhere near a theory of everything, and that science will have to expand to accommodate facts of a kind fundamentally different from those that physics is designed to explain. It should not disturb us that this may have radical consequences, especially for Dennett’s favorite natural science, biology: the theory of evolution, which in its current form is a purely physical theory, may have to incorporate nonphysical factors to account for consciousness, if consciousness is not, as he thinks, an illusion. Materialism remains a widespread view, but science does not progress by tailoring the data to fit a prevailing theory.

There is much in the book that I haven’t discussed, about education, information theory, prebiotic chemistry, the analysis of meaning, the psychological role of probability, the classification of types of minds, and artificial intelligence. Dennett’s reflections on the history and prospects of artificial intelligence and how we should manage its development and our relation to it are informative and wise. He concludes:

The real danger, I think, is not that machines more intelligent than we are will usurp our role as captains of our destinies, but that we will over-estimate the comprehension of our latest thinking tools, prematurely ceding authority to them far beyond their competence….

We should hope that new cognitive prostheses will continue to be designed to be parasitic, to be tools, not collaborators. Their only “innate” goal, set up by their creators, should be to respond, constructively and transparently, to the demands of the user.

About the true nature of the human mind, Dennett is on one side of an old argument that goes back to Descartes. He pays tribute to Descartes, citing the power of what he calls “Cartesian gravity,” the pull of the first-person point of view; and he calls the allegedly illusory realm of consciousness the “Cartesian Theater.” The argument will no doubt go on for a long time, and the only way to advance understanding is for the participants to develop and defend their rival conceptions as fully as possible—as Dennett has done. Even those who find the overall view unbelievable will find much to interest them in this book.

Thomas Nagel is University Professor Emeritus at NYU. He is the author of The View From Nowhere, Mortal Questions, and Mind and Cosmos, among other books.
The New York Review of Books

More from The New York Review of Books


Wireless Charging Risk August 16th 2020

Wireless charging is increasingly common in modern smartphones, and there’s even speculation that Apple might ditch charging via a cable entirely in the near future. But the slight convenience of juicing up your phone by plopping it onto a pad rather than plugging it in comes with a surprisingly robust environmental cost. According to new calculations from OneZero and iFixit, wireless charging is drastically less efficient than charging with a cord, so much so that the widespread adoption of this technology could necessitate the construction of dozens of new power plants around the world. (Unless manufacturers find other ways to make up for the energy drain, of course.)

On paper, wireless charging sounds appealing. Just drop a phone down on a charger and it will start charging. There’s no wear and tear on charging ports, and chargers can even be built into furniture. Not all of the energy that comes out of a wall outlet, however, ends up in a phone’s battery. Some of it gets lost in the process as heat.

While this is true of all forms of charging to a certain extent, wireless chargers lose a lot of energy compared to cables. They get even less efficient when the coils in the phone aren’t aligned properly with the coils in the charging pad, a surprisingly common problem.

To get a sense of how much extra power is lost when using wireless charging versus wired charging in the real world, I tested a Pixel 4 using multiple wireless chargers, as well as the standard charging cable that comes with the phone. I used a high-precision power meter that sits between the charging block and the power outlet to measure power consumption.

In my tests, I found that wireless charging used, on average, around 47% more power than a cable.

Charging the phone from completely dead to 100% using a cable took an average of 14.26 watt-hours (Wh). Using a wireless charger took, on average, 21.01 Wh. That comes out to slightly more than 47% more energy for the convenience of not plugging in a cable. In other words, the phone had to work harder, generate more heat, and suck up more energy when wirelessly charging to fill the same size battery.
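
The percentage is easy to reproduce from the two averages reported above:

```python
# The article's own figures, arithmetic only.
wired_wh = 14.26      # average energy for a full charge by cable
wireless_wh = 21.01   # average energy for a full charge by wireless pad

overhead = (wireless_wh - wired_wh) / wired_wh
print(f"Wireless overhead: {overhead:.1%}")   # ~47.3%
```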

How the phone was positioned on the charger significantly affected charging efficiency. The flat Yootech charger I tested was difficult to line up properly. Initially I intended to measure power consumption with the coils aligned as well as possible, then intentionally misalign them to detect the difference.

Instead, during one test, I noticed that the phone wasn’t charging. It looked like it was aligned properly, but as I fiddled with it I found that the difference between positions that charged properly and those that didn’t charge at all could be measured in millimeters. Without a visual indicator, it would be impossible to tell. Careless placement could make the phone take far more energy to charge than necessary or, more annoyingly, not charge at all.

The first test with the Yootech pad — before I figured out how to align the coils properly — took a whopping 25.62 Wh to charge, or 80% more energy than an average cable charge. Hearing about the hypothetical inefficiencies online was one thing, but here I could see how I’d nearly doubled the amount of power it took to charge my phone by setting it down slightly wrong instead of just plugging in a cable.

Google’s official Pixel Stand fared better, likely due to its propped-up design. Since the base of the phone sits flat, the coils can only be misaligned from left to right — circular pads like the Yootech allow for misalignment in any direction. Again, the threshold was a few millimeters at most, but the Pixel Stand continued charging while misaligned, albeit slower and using more power. In general, the propped-up design helped align the coils without much fiddling, but it still used an average of 19.8 Wh, or 39% more power, to charge the phone than cables.

On top of this, both wireless chargers independently consumed a small amount of power when no phone was charging at all — around 0.25 watts, which might not sound like much, but over 24 hours it would consume around six watt-hours. A household with multiple wireless chargers left plugged in — say, a charger by the bed, one in the living room, and another in the office — could waste the same amount of power in a day as it would take to fully charge a phone. By contrast, in my testing the normal cable charger did not draw any measurable amount of power.
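
The idle-draw numbers check out the same way (the three-charger household is the article's own example):

```python
# Standby drain from the article's measurement of ~0.25 W per idle pad.
IDLE_WATTS = 0.25
wh_per_pad_per_day = IDLE_WATTS * 24     # ~6 Wh per idle pad per day
household = 3 * wh_per_pad_per_day       # three pads left plugged in
print(f"Three idle pads: ~{household:.0f} Wh/day, "
      f"vs ~14.3 Wh for one full wired charge")
```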

While wireless charging uses more power than a cable, the difference is often written off as negligible. The extra power consumed by charging one phone with wireless charging versus a cable is the equivalent of leaving one extra LED light bulb on for a few hours. It might not even register on your power bill. At scale, however, it can turn into an environmental problem.

“I think in terms of power consumption, for me worrying about how much I’m paying for electricity, I don’t think it’s a factor,” Kyle Wiens, CEO of iFixit, told OneZero. “If all of a sudden, the 3 billion[-plus] smartphones that are in use, if all of them take 50% more power to charge, that adds up to a big amount. So it’s a society-wide issue, not a personal issue.”

To get a frame of reference for scale, iFixit helped me calculate the impact that the kind of excess power drain I experienced could have if every smartphone user on the planet switched to wireless charging — not a likely scenario any time soon, but neither was 3.5 billion people carrying around smartphones, say, 30 years ago.

“We worked out that at 100% efficiency from wall socket to battery, it would take about 73 coal power plants running for a day to charge the 3.5 billion smartphone batteries once fully,” iFixit technical writer Arthur Shi told OneZero. But if people place their phones wrong and reduce the efficiency of their charging, the number grows: “If the wireless charging efficiency was only 50%, you would need to double the [73] power plants in order to charge all the batteries.”
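
As a sanity check on the method (not on Shi's exact figure, which evidently used different inputs for battery size or plant output), here is the same style of estimate using the numbers measured earlier in the article; the plant size follows Shi's 50 MW assumption.

```python
# Back-of-the-envelope scaling estimate, assumptions labelled.
PHONES = 3.5e9
WH_PER_CHARGE = 14.26    # the wired measurement above; real batteries vary
PLANT_MW = 50            # small coal plant, per Shi's assumption

daily_demand_gwh = PHONES * WH_PER_CHARGE / 1e9         # ~50 GWh/day
plant_daily_gwh = PLANT_MW * 24 / 1000                  # 1.2 GWh/day per plant
plants_at_100pct = daily_demand_gwh / plant_daily_gwh   # ~42 plants

for efficiency in (1.0, 0.5):
    print(f"{efficiency:.0%} efficient: ~{plants_at_100pct / efficiency:.0f} plants")
# Halving the efficiency doubles the plant count, which is the point
# about misaligned wireless charging at scale.
```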

If everyone in the world switched to wireless charging, it would have a measurable impact on the global power grid.

This is rough math, of course. Measuring power consumption by the number of power plants devices require is a bit like measuring how many vehicles it takes to transport a couple dozen people. It could take a dozen two-seat convertibles, or one bus. Shi’s math assumed relatively small coal power plants outputting 50 MW, as many power plants in the United States are, but those same needs could also be met by a couple very large power plants outputting more than 2,000 MW (of which the United States has only 29).

However, the broader point remains the same: If everyone in the world switched to wireless charging, it would have a measurable impact on the global power grid. While tech companies like Apple and Google tout how environmentally friendly their phones are, power consumption often goes overlooked. “They want to cover the carbon impact of the product over their entire life cycle?” Wiens said. “The entire life cycle includes all the power that these things ever consumed plugged into the wall.”

There are some things that companies can do to balance out the excess power wireless chargers use. Manufacturers can design phones to disable wireless charging if their coils aren’t aligned — instead of allowing excessively inefficient charging for the sake of user experience — or design chargers to hold phones so they align properly. They can also continue to offer wired charging, which might mean Apple’s rumored future port-less phone would have to wait.

Finally, tech companies can work to offset their excesses in one area with savings in another. Wireless charging is only one small piece of the environmental picture, and environmental reports for major phones from Google and Apple only loosely point to energy efficiency and make no mention of the impact of using wireless chargers. There are many ways tech companies could be more energy-efficient to put less strain on our power grids. Until wireless charging itself gets a more thorough examination, though, the world would probably be better off if we all stuck to good old-fashioned plugs.

Update: A previous version of this article misstated two units of measurement in reference to the Pixel Stand charger. It consumes 0.25 watts when plugged in without a phone attached, which over 24 hours would consume around six watt-hours.

Bill Gates on Covid: Most US Tests Are ‘Completely Garbage’

The techie-turned-philanthropist on vaccines, Trump, and why social media is “a poisoned chalice.”

For 20 years, Bill Gates has been easing out of the roles that made him rich and famous—CEO, chief software architect, and chair of Microsoft—and devoting his brainpower and passion to the Bill and Melinda Gates Foundation, abandoning earnings calls and antitrust hearings for the metrics of disease eradication and carbon reduction. This year, after he left the Microsoft board, one would have thought he would have relished shedding the spotlight directed at the four CEOs of big tech companies called before Congress.

But as with many of us, 2020 had different plans for Gates. An early Cassandra who warned of our lack of preparedness for a global pandemic, he became one of the most credible figures as his foundation made huge investments in vaccines, treatments, and testing. He also became a target of the plague of misinformation afoot in the land, as logorrheic critics accused him of planning to inject microchips in vaccine recipients. (Fact check: false. In case you were wondering.)

My first interview with Gates was in 1983, and I’ve long lost count of how many times I’ve spoken to him since. He’s yelled at me (more in the earlier years) and made me laugh (more in the latter years). But I’ve never looked forward to speaking to him more than in our year of Covid. We connected on Wednesday, remotely of course. In discussing our country’s failed responses, his issues with his friend Mark Zuckerberg’s social networks, and the innovations that might help us out of this mess, Gates did not disappoint. The interview has been edited for length and clarity.

WIRED: You have been warning us about a global pandemic for years. Now that it has happened just as you predicted, are you disappointed with the performance of the United States?

Bill Gates: Yeah. There’s three time periods, all of which have disappointments. There is 2015 until this particular pandemic hit. If we had built up the diagnostic, therapeutic, and vaccine platforms, and if we’d done the simulations to understand what the key steps were, we’d be dramatically better off. Then there’s the time period of the first few months of the pandemic, when the US actually made it harder for the commercial testing companies to get their tests approved, the CDC had this very low volume test that didn’t work at first, and they weren’t letting people test. The travel ban came too late, and it was too narrow to do anything. Then, after the first few months, eventually we figured out about masks, and that leadership is important.


So you’re disappointed, but are you surprised?

I’m surprised at the US situation because the smartest people on epidemiology in the world, by a lot, are at the CDC. I would have expected them to do better. You would expect the CDC to be the most visible, not the White House or even Anthony Fauci. But they haven’t been the face of the epidemic. They are trained to communicate and not try to panic people but get people to take things seriously. They have basically been muzzled since the beginning. We called the CDC, but they told us we had to talk to the White House a bunch of times. Now they say, “Look, we’re doing a great job on testing, we don’t want to talk to you.” Even the simplest things, which would greatly improve this system, they feel would be admitting there is some imperfection and so they are not interested.

Do you think it’s the agencies that fell down or just the leadership at the top, the White House?

We can do the postmortem at some point. We still have a pandemic going on, and we should focus on that. The White House didn’t allow the CDC to do its job after March. There was a window where they were engaged, but then the White House didn’t let them do that. So the variance between the US and other countries isn’t that first period, it’s the subsequent period where the messages—the opening up, the leadership on masks, those things—are not the CDC’s fault. They said not to open back up; they said that leadership has to be a model of face mask usage. I think they have done a good job since April, but we haven’t had the benefit of it.

At this point, are you optimistic?

Yes. You have to admit there’s been trillions of dollars of economic damage done and a lot of debts, but the innovation pipeline on scaling up diagnostics, on new therapeutics, on vaccines is actually quite impressive. And that makes me feel like, for the rich world, we should largely be able to end this thing by the end of 2021, and for the world at large by the end of 2022. That is only because of the scale of the innovation that’s taking place. Now whenever we get this done, we will have lost many years in malaria and polio and HIV and the indebtedness of countries of all sizes and instability. It’ll take you years beyond that before you’d even get back to where you were at the start of 2020. It’s not World War I or World War II, but it is in that order of magnitude as a negative shock to the system.

In March it was unimaginable that you’d be giving us that timeline and saying it’s great.

Well it’s because of innovation that you don’t have to contemplate an even sadder statement, which is this thing will be raging for five years until natural immunity is our only hope.

Let’s talk vaccines, which your foundation is investing in. Is there anything that’s shaping up relatively quickly that could be safe and effective?

Before the epidemic came, we saw huge potential in the RNA vaccines—Moderna, Pfizer/BioNTech, and CureVac. Right now, because of the way you manufacture them, and the difficulty of scaling up, they are more likely—if they are helpful—to help in the rich countries. They won’t be the low-cost, scalable solution for the world at large. There you’d look more at AstraZeneca or Johnson & Johnson. This disease, from both the animal data and the phase 1 data, seems to be very vaccine preventable. There are questions still. It will take us awhile to figure out the duration [of protection], and the efficacy in elderly, although we think that’s going to be quite good. Are there any side effects, which you really have to get out in those large phase 3 groups and even after that through lots of monitoring to see if there are any autoimmune diseases or conditions that the vaccine could interact with in a deleterious fashion.


Are you concerned that in our rush to get a vaccine we are going to approve something that isn’t safe and effective?

Yeah. In China and Russia they are moving full speed ahead. I bet there’ll be some vaccines that will get out to lots of patients without the full regulatory review somewhere in the world. We probably need three or four months, no matter what, of phase 3 data, just to look for side effects. The FDA, to their credit, at least so far, is sticking to requiring proof of efficacy. So far they have behaved very professionally despite the political pressure. There may be pressure, but people are saying no, make sure that that’s not allowed. The irony is that this is a president who is a vaccine skeptic. Every meeting I have with him he is like, “Hey, I don’t know about vaccines, and you have to meet with this guy Robert Kennedy Jr. who hates vaccines and spreads crazy stuff about them.”

Wasn’t Kennedy Jr. talking about you using vaccines to implant chips into people?

Yeah, you’re right. He, Roger Stone, Laura Ingraham. They do it in this kind of way: “I’ve heard lots of people say X, Y, Z.” That’s kind of Trumpish plausible deniability. Anyway, there was a meeting where Francis Collins, Tony Fauci, and I had to [attend], and they had no data about anything. When we would say, “But wait a minute, that’s not real data,” they’d say, “Look, Trump told you you have to sit and listen, so just shut up and listen anyway.” So it’s a bit ironic that the president is now trying to have some benefit from a vaccine.

What goes through your head when you’re in a meeting hearing misinformation, and the President of the United States wants you to keep your mouth shut?

That was a bit strange. I haven’t met directly with the president since March of 2018. I made it clear I’m glad to talk to him about the epidemic anytime. And I have talked to Debbie Birx, I’ve talked to Pence, I’ve talked to Mnuchin, Pompeo, particularly on the issue of, Is the US showing up in terms of providing money to procure the vaccine for the developing countries? There have been lots of meetings, but we haven’t been able to get the US to show up. It’s very important to be able to tell the vaccine companies to build extra factories for the billions of doses, that there is procurement money to buy those for the marginal cost. So in this supplemental bill, I’m calling everyone I can to get 4 billion through GAVI for vaccines and 4 billion through a global fund for therapeutics. That’s less than 1 percent of the bill, but in terms of saving lives and getting us back to normal, that under 1 percent is by far the most important thing if we can get it in there.

Speaking of therapeutics, if you were in the hospital and you have the disease and you’re looking over the doctor’s shoulder, what treatment are you going to ask for?

Remdesivir. Sadly the trials in the US have been so chaotic that the actual proven effect is kind of small. Potentially the effect is much larger than that. It’s insane how confused the trials here in the US have been. The supply of that is going up in the US; it will be quite available for the next few months. Also dexamethasone—it’s actually a fairly cheap drug—that’s for late-stage disease.


I’m assuming you’re not going to have trouble paying for it, Bill, so you could ask for anything.

Well, I don’t want special treatment, so that’s a tricky thing. Other antivirals are two to three months away. Antibodies are two to three months away. We’ve had about a factor-of-two improvement in hospital outcomes already, and that’s with just remdesivir and dexamethasone. These other things will be additive to that.

You helped fund a Covid diagnostic testing program in Seattle that got quicker results, and it wasn’t so intrusive. The FDA put it on pause. What happened?

There’s this thing where the health worker jams the deep turbinate, in the back of your nose, which actually hurts and makes you sneeze on the health worker. We showed that the quality of the results can be equivalent if you just put a self-test in the tip of your nose with a cotton swab. The FDA made us jump through some hoops to prove that you didn’t need to refrigerate the result, that it could go back in a dry plastic bag, and so on. So the delay there was just normal double checking, maybe overly careful but not based on some political angle. Because of what we have done at the FDA, you can buy these cheaper swabs that are available by the billions. So anybody who’s using the deep turbinate now is just out of date. It’s a mistake, because it slows things down.

But people aren’t getting their tests back quickly enough.

Well, that’s just stupidity. The majority of all US tests are completely garbage, wasted. If you don’t care how late the date is and you reimburse at the same level, of course they’re going to take every customer. Because they are making ridiculous money, and it’s mostly rich people that are getting access to that. You have to have the reimbursement system pay a little bit extra for 24 hours, pay the normal fee for 48 hours, and pay nothing [if it isn’t done by then]. And they will fix it overnight.

Why don’t we just do that?

Because the federal government sets that reimbursement system. When we tell them to change it they say, “As far as we can tell, we’re just doing a great job, it’s amazing!” Here we are, this is August. We are the only country in the world where we waste the most money on tests. Fix the reimbursement. Set up the CDC website. But I have been on that kick, and people are tired of listening to me.

As someone who has built your life on science and logic, I’m curious what you think when you see so many people signing onto this anti-science view of the world.

Well, strangely, I’m involved in almost everything that anti-science is fighting. I’m involved with climate change, GMOs, and vaccines. The irony is that it’s digital social media that allows this kind of titillating, oversimplistic explanation of, “OK, there’s just an evil person, and that explains all of this.” And when you have [posts] encrypted, there is no way to know what it is. I personally believe government should not allow those types of lies or fraud or child pornography [to be hidden with encryption like WhatsApp or Facebook Messenger].

Well, you’re friends with Mark Zuckerberg. Have you talked to him about this?

After I said this publicly, he sent me mail. I like Mark, I think he’s got very good values, but he and I do disagree on the trade-offs involved there. The lies are so titillating you have to be able to see them and at least slow them down. Like that video where, what do they call her, the sperm woman? That got over 10 million views! [Note: It was more than 20 million.] Well how good are these guys at blocking things, where once something got the 10 million views and everybody was talking about it, they didn’t delete the link or the searchability? So it was meaningless. They claim, “Oh, now we don’t have it.” What effect did that have? Anybody can go watch that thing! So I am a little bit at odds with the way that these conspiracy theories spread, many of which are anti-vaccine things. We give literally tens of billions for vaccines to save lives, then people turn around saying, “No, we’re trying to make money and we’re trying to end lives.” That’s kind of a wild inversion of what our values are and what our track record is.


As you are the technology adviser to Microsoft, I think you can look forward in a few months to fighting this battle yourself when the company owns TikTok.

Yeah, my critique of dance moves will be fantastically value-added for them.

TikTok is more than just dance moves. There’s political content.

I know, I’m kidding. You’re right. Who knows what’s going to happen with that deal. But yes, it’s a poison chalice. Being big in the social media business is no simple game, like the encryption issue.

So are you wary of Microsoft getting into that game?

I mean, this may sound self-serving, but I think that the game being more competitive is probably a good thing. But having Trump kill off the only competitor, it’s pretty bizarre.

Do you understand what rule or regulation the president is invoking to demand that TikTok sell to an American company and then take a cut of the sales price?

I agree that the principle this is proceeding on is singularly strange. The cut thing, that’s doubly strange. Anyway, Microsoft will have to deal with all of that.

You have been very cautious in staying away from the political arena. But the issues you care most about—public health and climate change—have had huge setbacks because of who leads the country. Are you reconsidering spending on political change?

The foundation needs to be bipartisan. Whoever gets elected in the US, we are going to want to work with them. We do care a lot about competence, and hopefully voters will take into account how this administration has done at picking competent people and should that weigh into their vote. But there’s going to be plenty of money on both sides of this election, and I don’t like diverting money to political things. Even though the pandemic has made it pretty clear we should expect better, there’s other people who will put their time into the campaigning piece.

Did you have deja vu last week when those tech CEOs testified remotely before Congress?

Yeah. I had a whole committee attacking me, and they had four at a time. I mean, Jesus Christ, what’s the Congress coming to? If you want to give a guy a hard time, give him at least a whole day that he has to sit there on the hot seat by himself! And they didn’t even have to get on a plane!

Do you think the antitrust concerns are the same as when Microsoft was under the gun, or has the landscape changed?

Even without antitrust rules, tech does tend to be quite competitive. And even though in the short run you don’t think it’s going to dislodge people, there will be changes that will keep bringing prices down. But there are a lot of valid issues, and if you’re super-successful, the pleasure of going in front of the Congress comes with the territory.

How has your life changed living under the pandemic?

I used to travel a lot. If I wanted to see President Macron and say, “Hey, give money for the coronavirus vaccine,” to really show I’m serious I’d go there. Now, we had a GAVI replenishment summit where I just sat at home and got up a little early. I am able to get a lot done. My kids are home more than I thought they would be, which at least for me is a nice thing. I’m microwaving more food. I’m getting fairly good at it. The pandemic sadly is less painful for those who were better off before the pandemic.

Do you have a go-to mask you use?

No, I use a pretty ugly normal mask. I change it every day. Maybe I should get a designer mask or something creative, but I just use this surgical-looking mask.

Comment Gates calls social media a poisoned chalice because it was intended to be a disinformation highway. Covid 19 is very useful to Gates’ class. Philanthropist he is not. His money-grabbing organisation has exploited Chinese slave labour for years. Cheap manufactured computers have been crucial to the development of social media, making Gates super rich. He speaks for very profound and wealthy vested interests. As for the masks, there is no evidence that they or lockdowns work.

The impact of Covid 19 has been on the old, the already sick and, most importantly, BAME – remember the mantra ‘Black Lives Matter.’ All white men are equally privileged and have no right to an opinion unless they are part of the devious manipulative controlling elite. As for herd immunity or a vaccine, for that elite these dreams must be beyond the horizon. That is why they immediately rubbish the Russian vaccine. The elite have us right where they want us. Our fears and preoccupations must be BAME, domestic violence, sex crimes, feminist demands and fighting racists – our fears focused on Russia and China. That elite faked the figures for the first wave and are determined to find or fake evidence of a second one. Robert Cook

Forget Everything You Think You Know About Time

Is a linear representation of time accurate? This physicist says no.

Nautilus

  • Brian Gallagher

In April 2018, in the famous Faraday Theatre at the Royal Institution in London, Carlo Rovelli gave an hour-long lecture on the nature of time. A red thread spanned the stage, a metaphor for the Italian theoretical physicist’s subject. “Time is a long line,” he said. To the left lies the past—the dinosaurs, the big bang—and to the right, the future—the unknown. “We’re sort of here,” he said, hanging a carabiner on it, as a marker for the present.

Then he flipped the script. “I’m going to tell you that time is not like that,” he explained.

Rovelli went on to challenge our common-sense notion of time, starting with the idea that it ticks everywhere at a uniform rate. In fact, clocks tick slower when they are in a stronger gravitational field. When you move nearby clocks showing the same time into different fields—one in space, the other on Earth, say—and then bring them back together again, they will show different times. “It’s a fact,” Rovelli said, and it means “your head is older than your feet.” Also a non-starter is any shared sense of “now.” We don’t really share the present moment with anyone. “If I look at you, I see you now—well, but not really, because light takes time to come from you to me,” he said. “So I see you sort of a little bit in the past.” As a result, “now” means nothing beyond the temporal bubble “in which we can disregard the time it takes light to go back and forth.”
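
The "head older than feet" line can be made quantitative with the weak-field time-dilation formula, in which a clock raised by height h in gravity g runs fast by the fraction gh/c². The numbers below are my rough illustration, not Rovelli's.

```python
# Gravitational time dilation, weak-field approximation.
G = 9.81                  # surface gravity, m/s^2
C = 3.0e8                 # speed of light, m/s
SECONDS_PER_YEAR = 3.156e7

h = 1.7                               # assumed head-to-feet height, metres
rate_difference = G * h / C**2        # fractional rate gap, ~1.9e-16
extra = rate_difference * 80 * SECONDS_PER_YEAR
print(f"Over 80 years, the head ages ~{extra:.1e} s more than the feet")
# ~5e-7 s: a few hundred nanoseconds, tiny but real, and measurable with
# modern optical clocks over much smaller height differences.
```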

Rovelli turned next to the idea that time flows in only one direction, from past to future. Unlike general relativity, quantum mechanics, and particle physics, thermodynamics embeds a direction of time. Its second law states that the total entropy, or disorder, in an isolated system never decreases over time. Yet this doesn’t mean that our conventional notion of time is on any firmer grounding, Rovelli said. Entropy, or disorder, is subjective: “Order is in the eye of the person who looks.” In other words the distinction between past and future, the growth of entropy over time, depends on a macroscopic effect—“the way we have described the system, which in turn depends on how we interact with the system,” he said.

“A million years of your life would be neither past nor future for me. So the present is not thin; it’s horrendously thick.”

Getting to the last common notion of time, Rovelli became a little more cautious. His scientific argument that time is discrete—that it is not seamless, but has quanta—is less solid. “Why? Because I’m still doing it! It’s not yet in the textbook.” The equations for quantum gravity he’s written down suggest three things, he said, about what “clocks measure.” First, there’s a minimal amount of time—its units are not infinitely small. Second, since a clock, like every object, is quantum, it can be in a superposition of time readings. “You cannot say between this event and this event is a certain amount of time, because, as always in quantum mechanics, there could be a probability distribution of time passing.” Which means that, third, in quantum gravity, you can have “a local notion of a sequence of events, which is a minimal notion of time, and that’s the only thing that remains,” Rovelli said. Events aren’t ordered in a line “but are confused and connected” to each other without “a preferred time variable—anything can work as a variable.”

Even the notion that the present is fleeting doesn’t hold up to scrutiny. It is certainly true that the present is “horrendously short” in classical, Newtonian physics. “But that’s not the way the world is designed,” Rovelli explained. Light traces a cone, or consecutively larger circles, in four-dimensional spacetime like ripples on a pond that grow larger as they travel. No information can cross the bounds of the light cone because that would require information to travel faster than the speed of light.

“In spacetime, the past is whatever is inside our past light-cone,” Rovelli said, gesturing with his hands the shape of an upside down cone. “So it’s whatever can affect us. The future is this opposite thing,” he went on, now gesturing an upright cone. “So in between the past and the future, there isn’t just a single line—there’s a huge amount of time.” Rovelli asked an audience member to imagine that he lived in Andromeda, which is two and a half million light years away. “A million years of your life would be neither past nor future for me. So the present is not thin; it’s horrendously thick.”
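The arithmetic behind that “horrendously thick” present is short enough to sketch. The only input below is Andromeda’s distance, given in the text as roughly 2.5 million light-years: events there that lie within ±d/c of our “now” sit outside both our past and future light cones, so the undecided span lasts 2d/c.

```python
# Sketch of the "thick present": for an observer at distance d, any event
# within d/c before or after our "now" is in neither our past light cone
# nor our future light cone. That in-between span lasts 2*d/c.
d_light_years = 2.5e6           # distance to Andromeda, from the text
span_years = 2 * d_light_years  # light travels 1 light-year per year
print(f"neither past nor future for us: {span_years:,.0f} years")
```

Five million years of an Andromedan’s life would be neither past nor future for us, comfortably containing the million years Rovelli mentioned.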

Listening to Rovelli’s description, I was reminded of a phrase from his book, The Order of Time: Studying time “is like holding a snowflake in your hands: gradually, as you study it, it melts between your fingers and vanishes.” 

Brian Gallagher is the editor of Facts So Romantic, the Nautilus blog. Follow him on Twitter @BSGallagher.

More from Nautilus

Big Bounce Simulations Challenge the Big Bang

Detailed computer simulations have found that a cosmic contraction can generate features of the universe that we observe today.

In a cyclic universe, periods of expansion alternate with periods of contraction. The universe has no beginning and no end.

Samuel Velasco/Quanta Magazine

Charlie Wood

Contributing Writer

August 4, 2020

The standard story of the birth of the cosmos goes something like this: Nearly 14 billion years ago, a tremendous amount of energy materialized as if from nowhere.

In a brief moment of rapid expansion, that burst of energy inflated the cosmos like a balloon. The expansion straightened out any large-scale curvature, leading to a geometry that we now describe as flat. Matter also thoroughly mixed together, so that now the cosmos appears largely (though not perfectly) featureless. Here and there, clumps of particles have created galaxies and stars, but these are just minuscule specks on an otherwise unblemished cosmic canvas.

That theory, which textbooks call inflation, matches all observations to date and is preferred by most cosmologists. But it has conceptual implications that some find disturbing. In most regions of space-time, the rapid expansion would never stop. As a consequence, inflation can’t help but produce a multiverse — a technicolor existence with an infinite variety of pocket universes, one of which we call home. To critics, inflation predicts everything, which means it ultimately predicts nothing. “Inflation doesn’t work as it was intended to work,” said Paul Steinhardt, an architect of inflation who has become one of its most prominent critics.

In recent years, Steinhardt and others have been developing a different story of how our universe came to be. They have revived the idea of a cyclical universe: one that periodically grows and contracts. They hope to replicate the universe that we see — flat and smooth — without the baggage that comes with a bang.

In ‘A Brief History of Time’, Stephen Hawking suggests that all the matter in the universe originated from a pinhead-sized store of infinitely dense matter. That seemed unlikely to me. The idea of an ever expanding universe is based on that concept. Robert Cook.

To that end, Steinhardt and his collaborators recently teamed up with researchers who specialize in computational models of gravity. They analyzed how a collapsing universe would change its own structure, and they ultimately discovered that contraction can beat inflation at its own game. No matter how bizarre and twisted the universe looked before it contracted, the collapse would efficiently erase a wide range of primordial wrinkles.

“It’s very important, what they claim they’ve done,” said Leonardo Senatore, a cosmologist at Stanford University who has analyzed inflation using a similar approach. There are aspects of the work he hasn’t yet had a chance to investigate, he said, but at first glance “it looks like they’ve done it.”

Squeezing the View

Over the last year and a half, a fresh view of the cyclic, or “ekpyrotic,” universe has emerged from a collaboration between Steinhardt, Anna Ijjas, a cosmologist at the Max Planck Institute for Gravitational Physics in Germany, and others — one that achieves renewal without collapse.

When it comes to visualizing expansion and contraction, people often focus on a balloonlike universe whose change in size is described by a “scale factor.” But a second measure — the Hubble radius, which is the greatest distance we can see — gets short shrift. The equations of general relativity let them evolve independently, and, crucially, you can flatten the universe by changing either.

Picture an ant on a balloon. Inflation is like blowing up the balloon. It puts the onus of smoothing and flattening primarily on the swelling cosmos. In the cyclic universe, however, the smoothing happens during a period of contraction. During this epoch, the balloon deflates modestly, but the real work is done by a drastically shrinking horizon. It’s as if the ant views everything through an increasingly powerful magnifying glass. The distance it can see shrinks, and thus its world grows more and more featureless.
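A toy version of that magnifying-glass picture can be written down with the standard FRW scalings. The equation-of-state value below is my assumption, chosen only to illustrate the slow-contraction regime (a stiff field with w much greater than 1), not a number from the Ijjas-Steinhardt papers.

```python
# Toy FRW scalings: for equation-of-state parameter w, the scale factor
# goes as a(t) ~ |t|**(2 / (3*(1+w))) while the Hubble radius 1/|H|
# shrinks linearly in |t| as the bounce at t=0 approaches. For stiff
# w >> 1, the horizon collapses while the "balloon" barely deflates.
w = 10                      # assumed stiff equation of state
p = 2 / (3 * (1 + w))       # scale-factor exponent (~0.06 here)

for t in [1.0, 0.1, 0.01]:  # time remaining until the bounce, arbitrary units
    a = t ** p                             # scale factor, normalized to 1 at t=1
    hubble_radius = (3 * (1 + w) / 2) * t  # 1/|H| = |a / da_dt|
    print(f"t={t:5.2f}  scale factor={a:.3f}  Hubble radius={hubble_radius:.3f}")
```

In this toy run the scale factor falls by only about a quarter while the Hubble radius shrinks a hundredfold, which is the sense in which contraction puts the smoothing work on the horizon rather than on the balloon.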

Lucy Reading-Ikkanda/Quanta Magazine

Steinhardt and company imagine a universe that expands for perhaps a trillion years, driven by the energy of an omnipresent (and hypothetical) field, whose behavior we currently attribute to dark energy. When this energy field eventually grows sparse, the cosmos starts to gently deflate. Over billions of years a contracting scale factor brings everything a bit closer, but not all the way down to a point. The dramatic change comes from the Hubble radius, which rushes in and eventually becomes microscopic. The universe’s contraction recharges the energy field, which heats up the cosmos and vaporizes its atoms. A bounce ensues, and the cycle starts anew.

In the bounce model, the microscopic Hubble radius ensures smoothness and flatness. And whereas inflation blows up many initial imperfections into giant plots of multiverse real estate, slow contraction squeezes them essentially out of existence. We are left with a cosmos that has no beginning, no end, no singularity at the Big Bang, and no multiverse.

From Any Cosmos to Ours

One challenge for both inflation and bounce cosmologies is to show that their respective energy fields create the right universe no matter how they get started. “Our philosophy is that there should be no philosophy,” Ijjas said. “You know it works when you don’t have to ask under what condition it works.”

She and Steinhardt criticize inflation for doing its job only in special cases, such as when its energy field forms without notable features and with little motion. Theorists have explored these situations most thoroughly, in part because they are the only examples tractable with chalkboard mathematics. In recent computer simulations, which Ijjas and Steinhardt describe in a pair of preprints posted online in June, the team stress-tested their slow-contraction model with a range of baby universes too wild for pen-and-paper analysis.

Adapting code developed by Frans Pretorius, a theoretical physicist at Princeton University who specializes in computational models of general relativity, the collaboration explored twisted and lumpy fields, fields moving in the wrong direction, even fields born with halves racing in opposing directions. In nearly every case, contraction swiftly produced a universe as boring as ours.

“You let it go and — bam! In a few cosmic moments of slow contraction it looks as smooth as silk,” Steinhardt said.

Katy Clough, a cosmologist at the University of Oxford who also specializes in numerical solutions of general relativity, called the new simulations “very comprehensive.” But she also noted that computational advances have only recently made this kind of analysis possible, so the full range of conditions that inflation can handle remains uncharted.

“It’s been semi-covered, but it needs a lot more work,” she said.

While interest in Ijjas and Steinhardt’s model varies, most cosmologists agree that inflation remains the paradigm to beat. “[Slow contraction] is not an equal contender at this point,” said Gregory Gabadadze, a cosmologist at New York University.

The collaboration will next flesh out the bounce itself — a more complex stage that requires novel interactions to push everything apart again. Ijjas already has one bounce theory that upgrades general relativity with a new interaction between matter and space-time, and she suspects that other mechanisms exist too. She plans to put her model on the computer soon to understand its behavior in detail.


The group hopes that after gluing the contraction and expansion stages together, they’ll identify unique features of a bouncing universe that astronomers might spot.

The collaboration has not worked out every detail of a cyclic cosmos with no bang and no crunch, much less shown that we live in one. But Steinhardt now feels optimistic that the model will soon offer a viable alternative to the multiverse. “The roadblocks I was most worried about have been surpassed,” he said. “I’m not kept up at night anymore.”

Editor’s note: Some of this research was funded in part by the Simons Foundation, which also funds this editorially independent magazine. Simons Foundation funding decisions play no role in our coverage.

This Scientist Believes Ageing Is Optional August 10th 2020

In his book, “Lifespan,” celebrated scientist David Sinclair lays out exactly why we age—and why he thinks we don’t have to.

Outside

  • Graham Averill

If scientist David Sinclair is correct about aging, we might not have to age as quickly as we do. Photo by tomazl / iStock.

The oldest-known living person is Kane Tanaka, a Japanese woman who is a mind-boggling 116 years old. But if you ask David Sinclair, he’d argue that 116 is just middle age. At least, he thinks it should be. Sinclair is one of the leading scientists in the field of aging, and he believes that growing old isn’t a natural part of life—it’s a disease that needs a cure.

Sounds crazy, right? Sinclair, a Harvard professor who made Time’s list of the 100 most influential people in the world in 2014, will acquiesce that everyone has to die at some point, but he argues that we can double our life expectancy and live healthy, active lives right up until the end.

His 2019 book, Lifespan: Why We Age and Why We Don’t Have To ($28, Atria Books), out this fall, details the cutting-edge science that’s taking place in the field of longevity right now. The quick takeaway from this not-so-quick read: scientists are tossing out previous assumptions about aging, and they’ve discovered several tools that you can employ right now to slow down, and in some cases, reverse the clock.

In the nineties, as a postdoc in an MIT lab, Sinclair caused a stir in the field when he discovered the mechanism that leads to aging in yeast, which offered some insight into why humans age. Using his work with yeast as a launching point, Sinclair and his lab colleagues have focused on identifying the mechanism for aging in humans and published a study in 2013 asserting that the malfunction of a family of proteins called sirtuins is the single cause of aging. Sirtuins are responsible for repairing DNA damage and controlling overall cellular health by keeping cells on task. In other words, sirtuins tell kidney cells to act like kidney cells. If they get overwhelmed, cells start to misbehave, and we see the symptoms of aging, like organ failure or wrinkles. All of the genetic info in our cells is still there as we get older, but our body loses the ability to interpret it. This is because our body starts to run low on NAD, a molecule that activates the sirtuins: we have half as much NAD in our body when we’re 50 as we do at 20. Without it, the sirtuins can’t do their job, and the cells in our body forget what they’re supposed to be doing.

Sinclair splits his time between the U.S. and Australia, running labs at Harvard Medical School and at the University of New South Wales. All of his research seeks to prove that aging is a problem we can solve—and figure out how to stop. He argues that we can slow down the aging process, and in some cases even reverse it, by putting our body through “healthy stressors” that increase NAD levels and promote sirtuin activity. The role of sirtuins in aging is now fairly well accepted, but the idea that we can reactivate them (and how best to do so) is still being worked out.

Getting cold, working out hard, and going hungry every once in a while all engage what Sinclair calls our body’s survival circuit, wherein sirtuins tell cells to boost their defenses in order to keep the organism (you) alive. While Sinclair’s survival-circuit theory has yet to be proven in a trial setting, there’s plenty of research to suggest that exercise, cold exposure, and calorie reduction all help slow down the side effects of aging and stave off diseases associated with getting older. Fasting, in particular, has been well supported by other research: in various studies, both mice and yeast that were fed restricted diets live much longer than their well-fed cohorts. A two-year-long human experiment in the 1990s found that participants who had a restricted diet that left them hungry often had decreased blood pressure, blood-sugar levels, and cholesterol levels. Subsequent human studies found that decreasing calories by 12 percent slowed down biological aging based on changes in blood biomarkers.

Longevity science is a bit like the Wild West: the rules aren’t quite established. The research is exciting, but human clinical trials haven’t found anything definitive just yet. Throughout the field, there’s an uncomfortable relationship between privately owned companies, researchers, and even research institutes like Harvard: Sinclair points to a biomarker test by a company called InsideTracker as proof of his own reduced “biological age,” but he is also an investor in that company. He is listed as an inventor on a patent for a NAD booster that’s on the market right now, too.

While the dust settles, the best advice for the curious to take from Lifespan is to experiment with habits that are easy, free, and harmless—like taking a brisk, cold walk and eating a lighter diet. With cold exposure, Sinclair explains, moderation is the key. He believes that you can reap benefits by simply taking a walk in the winter without a jacket. He doesn’t prescribe an exact fasting regimen that works best, but he doesn’t recommend anything extreme—simply missing a meal here and there, like skipping breakfast and having a late lunch.

How the Pandemic Defeated America

A virus has brought the world’s most powerful country to its knees.

Updated at 1:12 p.m. ET on August 4, 2020.

How did it come to this? A virus a thousand times smaller than a dust mote has humbled and humiliated the planet’s most powerful nation. America has failed to protect its people, leaving them with illness and financial ruin. It has lost its status as a global leader. It has careened between inaction and ineptitude. The breadth and magnitude of its errors are difficult, in the moment, to truly fathom.

In the first half of 2020, SARSCoV2—the new coronavirus behind the disease COVID19—infected 10 million people around the world and killed about half a million. But few countries have been as severely hit as the United States, which has just 4 percent of the world’s population but a quarter of its confirmed COVID19 cases and deaths. These numbers are estimates. The actual toll, though undoubtedly higher, is unknown, because the richest country in the world still lacks sufficient testing to accurately count its sick citizens.

Despite ample warning, the U.S. squandered every possible opportunity to control the coronavirus. And despite its considerable advantages—immense resources, biomedical might, scientific expertise—it floundered. While countries as different as South Korea, Thailand, Iceland, Slovakia, and Australia acted decisively to bend the curve of infections downward, the U.S. achieved merely a plateau in the spring, which changed to an appalling upward slope in the summer. “The U.S. fundamentally failed in ways that were worse than I ever could have imagined,” Julia Marcus, an infectious-disease epidemiologist at Harvard Medical School, told me.

Since the pandemic began, I have spoken with more than 100 experts in a variety of fields. I’ve learned that almost everything that went wrong with America’s response to the pandemic was predictable and preventable. A sluggish response by a government denuded of expertise allowed the coronavirus to gain a foothold. Chronic underfunding of public health neutered the nation’s ability to prevent the pathogen’s spread. A bloated, inefficient health-care system left hospitals ill-prepared for the ensuing wave of sickness. Racist policies that have endured since the days of colonization and slavery left Indigenous and Black Americans especially vulnerable to COVID19. The decades-long process of shredding the nation’s social safety net forced millions of essential workers in low-paying jobs to risk their life for their livelihood. The same social-media platforms that sowed partisanship and misinformation during the 2014 Ebola outbreak in Africa and the 2016 U.S. election became vectors for conspiracy theories during the 2020 pandemic.

The U.S. has little excuse for its inattention. In recent decades, epidemics of SARS, MERS, Ebola, H1N1 flu, Zika, and monkeypox showed the havoc that new and reemergent pathogens could wreak. Health experts, business leaders, and even middle schoolers ran simulated exercises to game out the spread of new diseases. In 2018, I wrote an article for The Atlantic arguing that the U.S. was not ready for a pandemic, and sounded warnings about the fragility of the nation’s health-care system and the slow process of creating a vaccine. But the COVID19 debacle has also touched—and implicated—nearly every other facet of American society: its shortsighted leadership, its disregard for expertise, its racial inequities, its social-media culture, and its fealty to a dangerous strain of individualism.

SARSCoV2 is something of an anti-Goldilocks virus: just bad enough in every way. Its symptoms can be severe enough to kill millions but are often mild enough to allow infections to move undetected through a population. It spreads quickly enough to overload hospitals, but slowly enough that statistics don’t spike until too late. These traits made the virus harder to control, but they also softened the pandemic’s punch. SARSCoV2 is neither as lethal as some other coronaviruses, such as SARS and MERS, nor as contagious as measles. Deadlier pathogens almost certainly exist. Wild animals harbor an estimated 40,000 unknown viruses, a quarter of which could potentially jump into humans. How will the U.S. fare when “we can’t even deal with a starter pandemic?,” Zeynep Tufekci, a sociologist at the University of North Carolina and an Atlantic contributing writer, asked me.

Despite its epochal effects, COVID19 is merely a harbinger of worse plagues to come. The U.S. cannot prepare for these inevitable crises if it returns to normal, as many of its people ache to do. Normal led to this. Normal was a world ever more prone to a pandemic but ever less ready for one. To avert another catastrophe, the U.S. needs to grapple with all the ways normal failed us. It needs a full accounting of every recent misstep and foundational sin, every unattended weakness and unheeded warning, every festering wound and reopened scar.

A pandemic can be prevented in two ways: Stop an infection from ever arising, or stop an infection from becoming thousands more. The first way is likely impossible. There are simply too many viruses and too many animals that harbor them. Bats alone could host thousands of unknown coronaviruses; in some Chinese caves, one out of every 20 bats is infected. Many people live near these caves, shelter in them, or collect guano from them for fertilizer. Thousands of bats also fly over these people’s villages and roost in their homes, creating opportunities for the bats’ viral stowaways to spill over into human hosts. Based on antibody testing in rural parts of China, Peter Daszak of EcoHealth Alliance, a nonprofit that studies emerging diseases, estimates that such viruses infect a substantial number of people every year. “Most infected people don’t know about it, and most of the viruses aren’t transmissible,” Daszak says. But it takes just one transmissible virus to start a pandemic.

Sometime in late 2019, the wrong virus left a bat and ended up, perhaps via an intermediate host, in a human—and another, and another. Eventually it found its way to the Huanan seafood market, and jumped into dozens of new hosts in an explosive super-spreading event. The COVID19 pandemic had begun.

“There is no way to get spillover of everything to zero,” Colin Carlson, an ecologist at Georgetown University, told me. Many conservationists jump on epidemics as opportunities to ban the wildlife trade or the eating of “bush meat,” an exoticized term for “game,” but few diseases have emerged through either route. Carlson said the biggest factors behind spillovers are land-use change and climate change, both of which are hard to control. Our species has relentlessly expanded into previously wild spaces. Through intensive agriculture, habitat destruction, and rising temperatures, we have uprooted the planet’s animals, forcing them into new and narrower ranges that are on our own doorsteps. Humanity has squeezed the world’s wildlife in a crushing grip—and viruses have come bursting out.

Curtailing those viruses after they spill over is more feasible, but requires knowledge, transparency, and decisiveness that were lacking in 2020. Much about coronaviruses is still unknown. There are no surveillance networks for detecting them as there are for influenza. There are no approved treatments or vaccines. Coronaviruses were formerly a niche family, of mainly veterinary importance. Four decades ago, just 60 or so scientists attended the first international meeting on coronaviruses. Their ranks swelled after SARS swept the world in 2003, but quickly dwindled as a spike in funding vanished. The same thing happened after MERS emerged in 2012. This year, the world’s coronavirus experts—and there still aren’t many—had to postpone their triennial conference in the Netherlands because SARSCoV2 made flying too risky.

In the age of cheap air travel, an outbreak that begins on one continent can easily reach the others. SARS already demonstrated that in 2003, and more than twice as many people now travel by plane every year. To avert a pandemic, affected nations must alert their neighbors quickly. In 2003, China covered up the early spread of SARS, allowing the new disease to gain a foothold, and in 2020, history repeated itself. The Chinese government downplayed the possibility that SARSCoV2 was spreading among humans, and only confirmed as much on January 20, after millions had traveled around the country for the lunar new year. Doctors who tried to raise the alarm were censured and threatened. One, Li Wenliang, later died of COVID19. The World Health Organization initially parroted China’s line and did not declare a public-health emergency of international concern until January 30. By then, an estimated 10,000 people in 20 countries had been infected, and the virus was spreading fast.

The United States has correctly castigated China for its duplicity and the WHO for its laxity—but the U.S. has also failed the international community. Under President Donald Trump, the U.S. has withdrawn from several international partnerships and antagonized its allies. It has a seat on the WHO’s executive board, but left that position empty for more than two years, only filling it this May, when the pandemic was in full swing. Since 2017, Trump has pulled more than 30 staffers out of the Centers for Disease Control and Prevention’s office in China, who could have warned about the spreading coronavirus. Last July, he defunded an American epidemiologist embedded within China’s CDC. America First was America oblivious.

Even after warnings reached the U.S., they fell on the wrong ears. Since before his election, Trump has cavalierly dismissed expertise and evidence. He filled his administration with inexperienced newcomers, while depicting career civil servants as part of a “deep state.” In 2018, he dismantled an office that had been assembled specifically to prepare for nascent pandemics. American intelligence agencies warned about the coronavirus threat in January, but Trump habitually disregards intelligence briefings. The secretary of health and human services, Alex Azar, offered similar counsel, and was twice ignored.

Being prepared means being ready to spring into action, “so that when something like this happens, you’re moving quickly,” Ronald Klain, who coordinated the U.S. response to the West African Ebola outbreak in 2014, told me. “By early February, we should have triggered a series of actions, precisely zero of which were taken.” Trump could have spent those crucial early weeks mass-producing tests to detect the virus, asking companies to manufacture protective equipment and ventilators, and otherwise steeling the nation for the worst. Instead, he focused on the border. On January 31, Trump announced that the U.S. would bar entry to foreigners who had recently been in China, and urged Americans to avoid going there.

Travel bans make intuitive sense, because travel obviously enables the spread of a virus. But in practice, travel bans are woefully inefficient at restricting either travel or viruses. They prompt people to seek indirect routes via third-party countries, or to deliberately hide their symptoms. They are often porous: Trump’s included numerous exceptions, and allowed tens of thousands of people to enter from China. Ironically, they create travel: When Trump later announced a ban on flights from continental Europe, a surge of travelers packed America’s airports in a rush to beat the incoming restrictions. Travel bans may sometimes work for remote island nations, but in general they can only delay the spread of an epidemic—not stop it. And they can create a harmful false confidence, so countries “rely on bans to the exclusion of the things they actually need to do—testing, tracing, building up the health system,” says Thomas Bollyky, a global-health expert at the Council on Foreign Relations. “That sounds an awful lot like what happened in the U.S.”

This was predictable. A president who is fixated on an ineffectual border wall, and has portrayed asylum seekers as vectors of disease, was always going to reach for travel bans as a first resort. And Americans who bought into his rhetoric of xenophobia and isolationism were going to be especially susceptible to thinking that simple entry controls were a panacea.

And so the U.S. wasted its best chance of restraining COVID19. Although the disease first arrived in the U.S. in mid-January, genetic evidence shows that the specific viruses that triggered the first big outbreaks, in Washington State, didn’t land until mid-February. The country could have used that time to prepare. Instead, Trump, who had spent his entire presidency learning that he could say whatever he wanted without consequence, assured Americans that “the coronavirus is very much under control,” and “like a miracle, it will disappear.” With impunity, Trump lied. With impunity, the virus spread.

On February 26, Trump asserted that cases were “going to be down to close to zero.” Over the next two months, at least 1 million Americans were infected.

As the coronavirus established itself in the U.S., it found a nation through which it could spread easily, without being detected. For years, Pardis Sabeti, a virologist at the Broad Institute of Harvard and MIT, has been trying to create a surveillance network that would allow hospitals in every major U.S. city to quickly track new viruses through genetic sequencing. Had that network existed, once Chinese scientists published SARSCoV2’s genome on January 11, every American hospital would have been able to develop its own diagnostic test in preparation for the virus’s arrival. “I spent a lot of time trying to convince many funders to fund it,” Sabeti told me. “I never got anywhere.”

The CDC developed and distributed its own diagnostic tests in late January. These proved useless because of a faulty chemical component. Tests were in such short supply, and the criteria for getting them were so laughably stringent, that by the end of February, tens of thousands of Americans had likely been infected but only hundreds had been tested. The official data were so clearly wrong that The Atlantic developed its own volunteer-led initiative—the COVID Tracking Project—to count cases.

Diagnostic tests are easy to make, so the U.S. failing to create one seemed inconceivable. Worse, it had no Plan B. Private labs were strangled by FDA bureaucracy. Meanwhile, Sabeti’s lab developed a diagnostic test in mid-January and sent it to colleagues in Nigeria, Sierra Leone, and Senegal. “We had working diagnostics in those countries well before we did in any U.S. states,” she told me.

It’s hard to overstate how thoroughly the testing debacle incapacitated the U.S. People with debilitating symptoms couldn’t find out what was wrong with them. Health officials couldn’t cut off chains of transmission by identifying people who were sick and asking them to isolate themselves.

Water running along a pavement will readily seep into every crack; so, too, did the unchecked coronavirus seep into every fault line in the modern world. Consider our buildings. In response to the global energy crisis of the 1970s, architects made structures more energy-efficient by sealing them off from outdoor air, reducing ventilation rates. Pollutants and pathogens built up indoors, “ushering in the era of ‘sick buildings,’ ” says Joseph Allen, who studies environmental health at Harvard’s T. H. Chan School of Public Health. Energy efficiency is a pillar of modern climate policy, but there are ways to achieve it without sacrificing well-being. “We lost our way over the years and stopped designing buildings for people,” Allen says.

The indoor spaces in which Americans spend 87 percent of their time became staging grounds for super-spreading events. One study showed that the odds of catching the virus from an infected person are roughly 19 times higher indoors than in open air. Shielded from the elements and among crowds clustered in prolonged proximity, the coronavirus ran rampant in the conference rooms of a Boston hotel, the cabins of the Diamond Princess cruise ship, and a church hall in Washington State where a choir practiced for just a few hours.

The hardest-hit buildings were those that had been jammed with people for decades: prisons. Between harsher punishments doled out in the War on Drugs and a tough-on-crime mindset that prizes retribution over rehabilitation, America’s incarcerated population has swelled sevenfold since the 1970s, to about 2.3 million. The U.S. imprisons five to 18 times more people per capita than other Western democracies. Many American prisons are packed beyond capacity, making social distancing impossible. Soap is often scarce. Inevitably, the coronavirus ran amok. By June, two American prisons each accounted for more cases than all of New Zealand. One, Marion Correctional Institution, in Ohio, had more than 2,000 cases among inmates despite having a capacity of 1,500. 


Other densely packed facilities were also besieged. America’s nursing homes and long-term-care facilities house less than 1 percent of its people, but as of mid-June, they accounted for 40 percent of its coronavirus deaths. More than 50,000 residents and staff have died. At least 250,000 more have been infected. These grim figures are a reflection not just of the greater harms that COVID19 inflicts upon elderly physiology, but also of the care the elderly receive. Before the pandemic, three in four nursing homes were understaffed, and four in five had recently been cited for failures in infection control. The Trump administration’s policies have exacerbated the problem by reducing the influx of immigrants, who make up a quarter of long-term caregivers.

Even though a Seattle nursing home was one of the first COVID19 hot spots in the U.S., similar facilities weren’t provided with tests and protective equipment. Rather than girding these facilities against the pandemic, the Department of Health and Human Services paused nursing-home inspections in March, passing the buck to the states. Some nursing homes avoided the virus because their owners immediately stopped visitations, or paid caregivers to live on-site. But in others, staff stopped working, scared about infecting their charges or becoming infected themselves. In some cases, residents had to be evacuated because no one showed up to care for them.

America’s neglect of nursing homes and prisons, its sick buildings, and its botched deployment of tests are all indicative of its problematic attitude toward health: “Get hospitals ready and wait for sick people to show,” as Sheila Davis, the CEO of the nonprofit Partners in Health, puts it. “Especially in the beginning, we catered our entire [COVID19] response to the 20 percent of people who required hospitalization, rather than preventing transmission in the community.” The latter is the job of the public-health system, which prevents sickness in populations instead of merely treating it in individuals. That system pairs uneasily with a national temperament that views health as a matter of personal responsibility rather than a collective good.

At the end of the 20th century, public-health improvements meant that Americans were living an average of 30 years longer than they were at the start of it. Maternal mortality had fallen by 99 percent; infant mortality by 90 percent. Fortified foods all but eliminated rickets and goiters. Vaccines eradicated smallpox and polio, and brought measles, diphtheria, and rubella to heel. These measures, coupled with antibiotics and better sanitation, curbed infectious diseases to such a degree that some scientists predicted they would soon pass into history. But instead, these achievements brought complacency. “As public health did its job, it became a target” of budget cuts, says Lori Freeman, the CEO of the National Association of County and City Health Officials.

Today, the U.S. spends just 2.5 percent of its gigantic health-care budget on public health. Underfunded health departments were already struggling to deal with opioid addiction, climbing obesity rates, contaminated water, and easily preventable diseases. Last year saw the most measles cases since 1992. In 2018, the U.S. had 115,000 cases of syphilis and 580,000 cases of gonorrhea—numbers not seen in almost three decades. It has 1.7 million cases of chlamydia, the highest number ever recorded.

Since the last recession, in 2009, chronically strapped local health departments have lost 55,000 jobs—a quarter of their workforce. When COVID19 arrived, the economic downturn forced overstretched departments to furlough more employees. When states needed battalions of public-health workers to find infected people and trace their contacts, they had to hire and train people from scratch. In May, Maryland Governor Larry Hogan asserted that his state would soon have enough people to trace 10,000 contacts every day. Last year, as Ebola tore through the Democratic Republic of Congo—a country with a quarter of Maryland’s wealth and an active war zone—local health workers managed to trace twice as many contacts every day.

Ripping unimpeded through American communities, the coronavirus created thousands of sickly hosts that it then rode into America’s hospitals. It should have found facilities armed with state-of-the-art medical technologies, detailed pandemic plans, and ample supplies of protective equipment and life-saving medicines. Instead, it found a brittle system in danger of collapse.

Compared with the average wealthy nation, America spends nearly twice as much of its national wealth on health care, about a quarter of which is wasted on inefficient care, unnecessary treatments, and administrative chicanery. The U.S. gets little bang for its exorbitant buck. It has the lowest life-expectancy rate of comparable countries, the highest rates of chronic disease, and the fewest doctors per person. This profit-driven system has scant incentive to invest in spare beds, stockpiled supplies, peacetime drills, and layered contingency plans—the essence of pandemic preparedness. America’s hospitals have been pruned and stretched by market forces to run close to full capacity, with little ability to adapt in a crisis.

When hospitals do create pandemic plans, they tend to fight the last war. After 2014, several centers created specialized treatment units designed for Ebola—a highly lethal but not very contagious disease. These units were all but useless against a highly transmissible airborne virus like SARSCoV2. Nor were hospitals ready for an outbreak to drag on for months. Emergency plans assumed that staff could endure a few days of exhausting conditions, that supplies would hold, and that hard-hit centers could be supported by unaffected neighbors. “We’re designed for discrete disasters” like mass shootings, traffic pileups, and hurricanes, says Esther Choo, an emergency physician at Oregon Health and Science University. The COVID19 pandemic is not a discrete disaster. It is a 50-state catastrophe that will likely continue at least until a vaccine is ready.

Wherever the coronavirus arrived, hospitals reeled. Several states asked medical students to graduate early, reenlisted retired doctors, and deployed dermatologists to emergency departments. Doctors and nurses endured grueling shifts, their faces chapped and bloody when they finally doffed their protective equipment. Soon, that equipment—masks, respirators, gowns, gloves—started running out.

In the middle of the greatest health and economic crises in generations, American hospitals operate on a just-in-time economy. They acquire the goods they need in the moment through labyrinthine supply chains that wrap around the world in tangled lines, from countries with cheap labor to richer nations like the U.S. The lines are invisible until they snap. About half of the world’s face masks, for example, are made in China, some of them in Hubei province. When that region became the pandemic epicenter, the mask supply shriveled just as global demand spiked. The Trump administration turned to a larder of medical supplies called the Strategic National Stockpile, only to find that the 100 million respirators and masks that had been dispersed during the 2009 flu pandemic were never replaced. Just 13 million respirators were left.

In April, four in five frontline nurses said they didn’t have enough protective equipment. Some solicited donations from the public, or navigated a morass of back-alley deals and internet scams. Others fashioned their own surgical masks from bandannas and gowns from garbage bags. The supply of nasopharyngeal swabs that are used in every diagnostic test also ran low, because one of the largest manufacturers is based in Lombardy, Italy—initially the COVID19 capital of Europe. About 40 percent of critical-care drugs, including antibiotics and painkillers, became scarce because they depend on manufacturing lines that begin in China and India. Once a vaccine is ready, there might not be enough vials to put it in, because of the long-running global shortage of medical-grade glass—literally, a bottleneck bottleneck.

The federal government could have mitigated those problems by buying supplies at economies of scale and distributing them according to need. Instead, in March, Trump told America’s governors to “try getting it yourselves.” As usual, health care was a matter of capitalism and connections. In New York, rich hospitals bought their way out of their protective-equipment shortfall, while neighbors in poorer, more diverse parts of the city rationed their supplies.

While the president prevaricated, Americans acted. Businesses sent their employees home. People practiced social distancing, even before Trump finally declared a national emergency on March 13, and before governors and mayors subsequently issued formal stay-at-home orders, or closed schools, shops, and restaurants. A study showed that the U.S. could have averted 36,000 COVID19 deaths if leaders had enacted social-distancing measures just a week earlier. But better late than never: By collectively reducing the spread of the virus, America flattened the curve. Ventilators didn’t run out, as they had in parts of Italy. Hospitals had time to add extra beds.
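The outsized cost of that lost week follows from simple exponential arithmetic. Here is a hedged toy calculation; the doubling time is my assumption for illustration, and the 36,000 figure itself comes from the modeling study cited above, not from this sketch.

```python
# Toy model (my assumptions, not the cited study's): with unchecked
# exponential spread, every week of delay multiplies downstream case and
# death counts by the growth accrued over that week.
doubling_time_days = 3.0   # assumed early-epidemic doubling time
delay_days = 7             # acting one week later

growth_factor = 2 ** (delay_days / doubling_time_days)
print(f"a {delay_days}-day delay multiplies the outbreak by ~{growth_factor:.1f}x")
```

Under these assumptions a single week of hesitation lets the epidemic grow roughly fivefold, which is why modest differences in timing translate into tens of thousands of deaths.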

Social distancing worked. But the indiscriminate lockdown was necessary only because America’s leaders wasted months of prep time. Deploying this blunt policy instrument came at enormous cost. Unemployment rose to 14.7 percent, the highest level since record-keeping began, in 1948. More than 26 million people lost their jobs, a catastrophe in a country that—uniquely and absurdly—ties health care to employment. Some COVID19 survivors have been hit with seven-figure medical bills. In the middle of the greatest health and economic crises in generations, millions of Americans have found themselves disconnected from medical care and impoverished. They join the millions who have always lived that way.

The coronavirus found, exploited, and widened every inequity that the U.S. had to offer. Elderly people, already pushed to the fringes of society, were treated as acceptable losses. Women were more likely to lose jobs than men, and also shouldered extra burdens of child care and domestic work, while facing rising rates of domestic violence. In half of the states, people with dementia and intellectual disabilities faced policies that threatened to deny them access to lifesaving ventilators. Thousands of people endured months of COVID19 symptoms that resembled those of chronic postviral illnesses, only to be told that their devastating symptoms were in their head. Latinos were three times as likely to be infected as white people. Asian Americans faced racist abuse. Far from being a “great equalizer,” the pandemic fell unevenly upon the U.S., taking advantage of injustices that had been brewing throughout the nation’s history.

Of the 3.1 million Americans who still cannot afford health insurance in states where Medicaid has not been expanded, more than half are people of color, and 30 percent are Black.* This is no accident. In the decades after the Civil War, the white leaders of former slave states deliberately withheld health care from Black Americans, apportioning medicine more according to the logic of Jim Crow than Hippocrates. They built hospitals away from Black communities, segregated Black patients into separate wings, and blocked Black students from medical school. In the 20th century, they helped construct America’s system of private, employer-based insurance, which has kept many Black people from receiving adequate medical treatment. They fought every attempt to improve Black people’s access to health care, from the creation of Medicare and Medicaid in the ’60s to the passage of the Affordable Care Act in 2010.

A number of former slave states also have among the lowest investments in public health, the lowest quality of medical care, the highest proportions of Black citizens, and the greatest racial divides in health outcomes. As the COVID19 pandemic wore on, they were among the quickest to lift social-distancing restrictions and reexpose their citizens to the coronavirus. The harms of these moves were unduly foisted upon the poor and the Black.

As of early July, one in every 1,450 Black Americans had died from COVID19—a rate more than twice that of white Americans. That figure is both tragic and wholly expected given the mountain of medical disadvantages that Black people face. Compared with white people, they die three years younger. Three times as many Black mothers die during pregnancy. Black people have higher rates of chronic illnesses that predispose them to fatal cases of COVID19. When they go to hospitals, they’re less likely to be treated. The care they do receive tends to be poorer. Aware of these biases, Black people are hesitant to seek aid for COVID19 symptoms and then show up at hospitals in sicker states. “One of my patients said, ‘I don’t want to go to the hospital, because they’re not going to treat me well,’ ” says Uché Blackstock, an emergency physician and the founder of Advancing Health Equity, a nonprofit that fights bias and racism in health care. “Another whispered to me, ‘I’m so relieved you’re Black. I just want to make sure I’m listened to.’ ”

Black people were both more worried about the pandemic and more likely to be infected by it. The dismantling of America’s social safety net left Black people with less income and higher unemployment. They make up a disproportionate share of the low-paid “essential workers” who were expected to staff grocery stores and warehouses, clean buildings, and deliver mail while the pandemic raged around them. Earning hourly wages without paid sick leave, they couldn’t afford to miss shifts even when symptomatic. They faced risky commutes on crowded public transportation while more privileged people teleworked from the safety of isolation. “There’s nothing about Blackness that makes you more prone to COVID,” says Nicolette Louissaint, the executive director of Healthcare Ready, a nonprofit that works to strengthen medical supply chains. Instead, existing inequities stack the odds in favor of the virus.

Native Americans were similarly vulnerable. A third of the people in the Navajo Nation can’t easily wash their hands, because they’ve been embroiled in long-running negotiations over the rights to the water on their own lands. Those with water must contend with runoff from uranium mines. Most live in cramped multigenerational homes, far from the few hospitals that service a 17-million-acre reservation. As of mid-May, the Navajo Nation had higher rates of COVID19 infections than any U.S. state.

Americans often misperceive historical inequities as personal failures. Stephen Huffman, a Republican state senator and doctor in Ohio, suggested that Black Americans might be more prone to COVID19 because they don’t wash their hands enough, a remark for which he later apologized. Republican Senator Bill Cassidy of Louisiana, also a physician, noted that Black people have higher rates of chronic disease, as if this were an answer in itself, and not a pattern that demanded further explanation.

Clear distribution of accurate information is among the most important defenses against an epidemic’s spread. And yet the largely unregulated, social-media-based communications infrastructure of the 21st century almost ensures that misinformation will proliferate fast. “In every outbreak throughout the existence of social media, from Zika to Ebola, conspiratorial communities immediately spread their content about how it’s all caused by some government or pharmaceutical company or Bill Gates,” says Renée DiResta of the Stanford Internet Observatory, who studies the flow of online information. When COVID19 arrived, “there was no doubt in my mind that it was coming.”

Sure enough, existing conspiracy theories—George Soros! 5G! Bioweapons!—were repurposed for the pandemic. An infodemic of falsehoods spread alongside the actual virus. Rumors coursed through online platforms that are designed to keep users engaged, even if that means feeding them content that is polarizing or untrue. In a national crisis, when people need to act in concert, this is calamitous. “The social internet as a system is broken,” DiResta told me, and its faults are readily abused.

Beginning on April 16, DiResta’s team noticed growing online chatter about Judy Mikovits, a discredited researcher turned anti-vaccination champion. Posts and videos cast Mikovits as a whistleblower who claimed that the new coronavirus was made in a lab and described Anthony Fauci of the White House’s coronavirus task force as her nemesis. Ironically, this conspiracy theory was nested inside a larger conspiracy—part of an orchestrated PR campaign by an anti-vaxxer and QAnon fan with the explicit goal to “take down Anthony Fauci.” It culminated in a slickly produced video called Plandemic, which was released on May 4. More than 8 million people watched it in a week.

Doctors and journalists tried to debunk Plandemic’s many misleading claims, but these efforts spread less successfully than the video itself. Like pandemics, infodemics quickly become uncontrollable unless caught early. But while health organizations recognize the need to surveil for emerging diseases, they are woefully unprepared to do the same for emerging conspiracies. In 2016, when DiResta spoke with a CDC team about the threat of misinformation, “their response was: ‘ That’s interesting, but that’s just stuff that happens on the internet.’ ”

Rather than countering misinformation during the pandemic’s early stages, trusted sources often made things worse. Many health experts and government officials downplayed the threat of the virus in January and February, assuring the public that it posed a low risk to the U.S. and drawing comparisons to the ostensibly greater threat of the flu. The WHO, the CDC, and the U.S. surgeon general urged people not to wear masks, hoping to preserve the limited stocks for health-care workers. These messages were offered without nuance or acknowledgement of uncertainty, so when they were reversed—the virus is worse than the flu; wear masks—the changes seemed like befuddling flip-flops.

The media added to the confusion. Drawn to novelty, journalists gave oxygen to fringe anti-lockdown protests while most Americans quietly stayed home. They wrote up every incremental scientific claim, even those that hadn’t been verified or peer-reviewed.

There were many such claims to choose from. By tying career advancement to the publishing of papers, academia already creates incentives for scientists to do attention-grabbing but irreproducible work. The pandemic strengthened those incentives by prompting a rush of panicked research and promising ambitious scientists global attention.

In March, a small and severely flawed French study suggested that the antimalarial drug hydroxychloroquine could treat COVID19. Published in a minor journal, it likely would have been ignored a decade ago. But in 2020, it wended its way to Donald Trump via a chain of credulity that included Fox News, Elon Musk, and Dr. Oz. Trump spent months touting the drug as a miracle cure despite mounting evidence to the contrary, causing shortages for people who actually needed it to treat lupus and rheumatoid arthritis. The hydroxychloroquine story was muddied even further by a study published in a top medical journal, The Lancet, that claimed the drug was not effective and was potentially harmful. The paper relied on suspect data from a small analytics company called Surgisphere, and was retracted in June.**

Science famously self-corrects. But during the pandemic, the same urgent pace that has produced valuable knowledge at record speed has also sent sloppy claims around the world before anyone could even raise a skeptical eyebrow. The ensuing confusion, and the many genuine unknowns about the virus, has created a vortex of fear and uncertainty, which grifters have sought to exploit. Snake-oil merchants have peddled ineffectual silver bullets (including actual silver). Armchair experts with scant or absent qualifications have found regular slots on the nightly news. And at the center of that confusion is Donald Trump.

During a pandemic, leaders must rally the public, tell the truth, and speak clearly and consistently. Instead, Trump repeatedly contradicted public-health experts, his scientific advisers, and himself. He said that “nobody ever thought a thing like [the pandemic] could happen” and also that he “felt it was a pandemic long before it was called a pandemic.” Both statements cannot be true at the same time, and in fact neither is true.

A month before his inauguration, I wrote that “the question isn’t whether [Trump will] face a deadly outbreak during his presidency, but when.” Based on his actions as a media personality during the 2014 Ebola outbreak and as a candidate in the 2016 election, I suggested that he would fail at diplomacy, close borders, tweet rashly, spread conspiracy theories, ignore experts, and exhibit reckless self-confidence. And so he did.

No one should be shocked that a liar who has made almost 20,000 false or misleading claims during his presidency would lie about whether the U.S. had the pandemic under control; that a racist who gave birth to birtherism would do little to stop a virus that was disproportionately killing Black people; that a xenophobe who presided over the creation of new immigrant-detention centers would order meatpacking plants with a substantial immigrant workforce to remain open; that a cruel man devoid of empathy would fail to calm fearful citizens; that a narcissist who cannot stand to be upstaged would refuse to tap the deep well of experts at his disposal; that a scion of nepotism would hand control of a shadow coronavirus task force to his unqualified son-in-law; that an armchair polymath would claim to have a “natural ability” at medicine and display it by wondering out loud about the curative potential of injecting disinfectant; that an egotist incapable of admitting failure would try to distract from his greatest one by blaming China, defunding the WHO, and promoting miracle drugs; or that a president who has been shielded by his party from any shred of accountability would say, when asked about the lack of testing, “I don’t take any responsibility at all.”

Left: A woman hugs her grandmother through a plastic sheet in Wantagh, New York. Right: An elderly woman has her oxygen levels tested in Yonkers, New York. (Al Bello / Getty; Andrew Renneisen / The New York Times / Redux)

Trump is a comorbidity of the COVID19 pandemic. He isn’t solely responsible for America’s fiasco, but he is central to it. A pandemic demands the coordinated efforts of dozens of agencies. “In the best circumstances, it’s hard to make the bureaucracy move quickly,” Ron Klain said. “It moves if the president stands on a table and says, ‘Move quickly.’ But it really doesn’t move if he’s sitting at his desk saying it’s not a big deal.”

In the early days of Trump’s presidency, many believed that America’s institutions would check his excesses. They have, in part, but Trump has also corrupted them. The CDC is but his latest victim. On February 25, the agency’s respiratory-disease chief, Nancy Messonnier, shocked people by raising the possibility of school closures and saying that “disruption to everyday life might be severe.” Trump was reportedly enraged. In response, he seems to have benched the entire agency. The CDC led the way in every recent domestic disease outbreak and has been the inspiration and template for public-health agencies around the world. But during the three months when some 2 million Americans contracted COVID19 and the death toll topped 100,000, the agency didn’t hold a single press conference. Its detailed guidelines on reopening the country were shelved for a month while the White House released its own uselessly vague plan.

Again, everyday Americans did more than the White House. By voluntarily agreeing to months of social distancing, they bought the country time, at substantial cost to their financial and mental well-being. Their sacrifice came with an implicit social contract—that the government would use the valuable time to mobilize an extraordinary, energetic effort to suppress the virus, as did the likes of Germany and Singapore. But the government did not, to the bafflement of health experts. “There are instances in history where humanity has really moved mountains to defeat infectious diseases,” says Caitlin Rivers, an epidemiologist at the Johns Hopkins Center for Health Security. “It’s appalling that we in the U.S. have not summoned that energy around COVID19.”

Instead, the U.S. sleepwalked into the worst possible scenario: People suffered all the debilitating effects of a lockdown with few of the benefits. Most states felt compelled to reopen without accruing enough tests or contact tracers. In April and May, the nation was stuck on a terrible plateau, averaging 20,000 to 30,000 new cases every day. In June, the plateau again became an upward slope, soaring to record-breaking heights.

Trump never rallied the country. Despite declaring himself a “wartime president,” he merely presided over a culture war, turning public health into yet another politicized cage match. Abetted by supporters in the conservative media, he framed measures that protect against the virus, from masks to social distancing, as liberal and anti-American. Armed anti-lockdown protesters demonstrated at government buildings while Trump egged them on, urging them to “LIBERATE” Minnesota, Michigan, and Virginia. Several public-health officials left their jobs over harassment and threats.

It is no coincidence that other powerful nations that elected populist leaders—Brazil, Russia, India, and the United Kingdom—also fumbled their response to COVID19. “When you have people elected based on undermining trust in the government, what happens when trust is what you need the most?” says Sarah Dalglish of the Johns Hopkins Bloomberg School of Public Health, who studies the political determinants of health.

“Trump is president,” she says. “How could it go well?”

The countries that fared better against COVID19 didn’t follow a universal playbook. Many used masks widely; New Zealand didn’t. Many tested extensively; Japan didn’t. Many had science-minded leaders who acted early; Hong Kong didn’t—instead, a grassroots movement compensated for a lax government. Many were small islands; large, continental Germany was not. Each nation succeeded because it did enough things right.

Meanwhile, the United States underperformed across the board, and its errors compounded. The dearth of tests allowed unconfirmed cases to create still more cases, which flooded the hospitals, which ran out of masks, which are necessary to limit the virus’s spread. Twitter amplified Trump’s misleading messages, which raised fear and anxiety among people, which led them to spend more time scouring for information on Twitter. Even seasoned health experts underestimated these compounded risks. Yes, having Trump at the helm during a pandemic was worrying, but it was tempting to think that national wealth and technological superiority would save America. “We are a rich country, and we think we can stop any infectious disease because of that,” says Michael Osterholm, the director of the Center for Infectious Disease Research and Policy at the University of Minnesota. “But dollar bills alone are no match against a virus.”

Public-health experts talk wearily about the panic-neglect cycle, in which outbreaks trigger waves of attention and funding that quickly dissipate once the diseases recede. This time around, the U.S. is already flirting with neglect, before the panic phase is over. The virus was never beaten in the spring, but many people, including Trump, pretended that it was. Every state reopened to varying degrees, and many subsequently saw record numbers of cases. After Arizona’s cases started climbing sharply at the end of May, Cara Christ, the director of the state’s health-services department, said, “We are not going to be able to stop the spread. And so we can’t stop living as well.” The virus may beg to differ.

At times, Americans have seemed to collectively surrender to COVID19. The White House’s coronavirus task force wound down. Trump resumed holding rallies, and called for less testing, so that official numbers would be rosier. The country behaved like a horror-movie character who believes the danger is over, even though the monster is still at large. The long wait for a vaccine will likely culminate in a predictable way: Many Americans will refuse to get it, and among those who want it, the most vulnerable will be last in line.

Still, there is some reason for hope. Many of the people I interviewed tentatively suggested that the upheaval wrought by COVID19 might be so large as to permanently change the nation’s disposition. Experience, after all, sharpens the mind. East Asian states that had lived through the SARS and MERS epidemics reacted quickly when threatened by SARS-CoV-2, spurred by a cultural memory of what a fast-moving coronavirus can do. But the U.S. had barely been touched by the major epidemics of past decades (with the exception of the H1N1 flu). In 2019, more Americans were concerned about terrorists and cyberattacks than about outbreaks of exotic diseases. Perhaps they will emerge from this pandemic with immunity both cellular and cultural.

There are also a few signs that Americans are learning important lessons. A June survey showed that 60 to 75 percent of Americans were still practicing social distancing. A partisan gap exists, but it has narrowed. “In public-opinion polling in the U.S., high-60s agreement on anything is an amazing accomplishment,” says Beth Redbird, a sociologist at Northwestern University, who led the survey. Polls in May also showed that most Democrats and Republicans supported mask wearing, and felt it should be mandatory in at least some indoor spaces. It is almost unheard-of for a public-health measure to go from zero to majority acceptance in less than half a year. But pandemics are rare situations when “people are desperate for guidelines and rules,” says Zoë McLaren, a health-policy professor at the University of Maryland at Baltimore County. The closest analogy is pregnancy, she says, which is “a time when women’s lives are changing, and they can absorb a ton of information. A pandemic is similar: People are actually paying attention, and learning.”

Redbird’s survey suggests that Americans indeed sought out new sources of information—and that consumers of news from conservative outlets, in particular, expanded their media diet. People of all political bents became more dissatisfied with the Trump administration. As the economy nose-dived, the health-care system ailed, and the government fumbled, belief in American exceptionalism declined. “Times of big social disruption call into question things we thought were normal and standard,” Redbird told me. “If our institutions fail us here, in what ways are they failing elsewhere?” And whom are they failing the most?

Americans were in the mood for systemic change. Then, on May 25, George Floyd, who had survived COVID19’s assault on his airway, asphyxiated under the crushing pressure of a police officer’s knee. The excruciating video of his killing circulated through communities that were still reeling from the deaths of Breonna Taylor and Ahmaud Arbery, and disproportionate casualties from COVID19. America’s simmering outrage came to a boil and spilled into its streets.

Defiant and largely cloaked in masks, protesters turned out in more than 2,000 cities and towns. Support for Black Lives Matter soared: For the first time since its founding in 2013, the movement had majority approval across racial groups. These protests were not about the pandemic, but individual protesters had been primed by months of shocking governmental missteps. Even people who might once have ignored evidence of police brutality recognized yet another broken institution. They could no longer look away.

It is hard to stare directly at the biggest problems of our age. Pandemics, climate change, the sixth extinction of wildlife, food and water shortages—their scope is planetary, and their stakes are overwhelming. We have no choice, though, but to grapple with them. It is now abundantly clear what happens when global disasters collide with historical negligence.

COVID19 is an assault on America’s body, and a referendum on the ideas that animate its culture. Recovery is possible, but it demands radical introspection. America would be wise to help reverse the ruination of the natural world, a process that continues to shunt animal diseases into human bodies. It should strive to prevent sickness instead of profiting from it. It should build a health-care system that prizes resilience over brittle efficiency, and an information system that favors light over heat. It should rebuild its international alliances, its social safety net, and its trust in empiricism. It should address the health inequities that flow from its history. Not least, it should elect leaders with sound judgment, high character, and respect for science, logic, and reason.

The pandemic has been both tragedy and teacher. Its very etymology offers a clue about what is at stake in the greatest challenges of the future, and what is needed to address them. Pandemic. Pan and demos. All people.

* This article has been updated to clarify why 3.1 million Americans still cannot afford health insurance.

** This article originally mischaracterized similarities between two studies that were retracted in June, one in The Lancet and one in the New England Journal of Medicine. It has been updated to reflect that the latter study was not specifically about hydroxychloroquine. It appears in the September 2020 print edition with the headline “Anatomy of an American Failure.”

Ed Yong is a staff writer at The Atlantic, where he covers science.

Why a Traffic Flow Suddenly Turns Into a Traffic Jam

Those aggravating slowdowns aren’t one driver’s fault. They’re everybody’s fault. August 3rd 2020

Nautilus

  • Benjamin Seibold

Photo by Raymond Depardon / Magnum Photos.

Few experiences on the road are more perplexing than phantom traffic jams. Most of us have experienced one: The vehicle ahead of you suddenly brakes, forcing you to brake, and making the driver behind you brake. But, soon afterward, you and the cars around you accelerate back to the original speed—and it becomes clear that there were no obstacles on the road, and apparently no cause for the slowdown.

Because traffic quickly resumes its original speed, phantom traffic jams usually don’t cause major delays. But neither are they just minor nuisances. They are hot spots for accidents because they force unexpected braking. And the unsteady driving they cause is not good for your car, causing wear and tear and poor gas mileage.

So what is going on, exactly? To answer this question, mathematicians, physicists, and traffic engineers have devised many types of traffic models. For instance, microscopic models resolve the paths of the individual vehicles, and are good at describing vehicle–vehicle interactions. In contrast, macroscopic models describe traffic as a fluid, in which cars are interpreted as fluid particles. They are effective at capturing large-scale phenomena that involve many vehicles. Finally, cellular models divide the road into segments and prescribe rules by which cars move from cell to cell, providing a framework for capturing the uncertainty that is inherent in real traffic.
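To make the cellular idea concrete, here is a minimal sketch in Python of one well-known cellular model, the Nagel-Schreckenberg model. The sketch is mine, not the article's, and the ring length, speed limit and braking probability are arbitrary illustrative choices.

    import random

    def nasch_step(road, vmax=5, p_brake=0.3):
        # One parallel update of the Nagel-Schreckenberg model.
        # road[i] is a car's integer speed, or None for an empty cell.
        n = len(road)
        new_road = [None] * n
        for i, v in enumerate(road):
            if v is None:
                continue
            gap = 1
            while road[(i + gap) % n] is None:  # cells to the car ahead
                gap += 1
            v = min(v + 1, vmax)                # accelerate toward the limit
            v = min(v, gap - 1)                 # brake to avoid a collision
            if v > 0 and random.random() < p_brake:
                v -= 1                          # random slowdowns seed the jams
            new_road[(i + v) % n] = v           # advance v cells
        return new_road

    road = [0 if i % 3 == 0 else None for i in range(120)]  # 1 car per 3 cells
    for _ in range(200):
        road = nasch_step(road)

Even these few rules reproduce stop-and-go waves at high enough densities, which is exactly the kind of emergent behavior the more elaborate models probe.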

In setting out to understand how a phantom traffic jam forms, we first have to be aware of the many effects present in real traffic that could conceivably contribute to a jam: different types of vehicles and drivers, unpredictable behavior, on- and off-ramps, and lane switching, to name just a few. We might expect that some combination of these effects is necessary to cause a phantom jam. One of the great advantages of studying mathematical models is that these various effects can be turned off in theoretical analysis or computer simulations. This creates a host of identical, predictable drivers on a single-lane highway without any ramps. In other words, your perfect commute home.

Surprisingly, when all these effects are turned off, phantom traffic jams still occur! This observation tells us that phantom jams are not the fault of individual drivers, but result instead from the collective behavior of all drivers on the road. It works like this. Envision a uniform traffic flow: All vehicles are evenly distributed along the highway, and all drive with the same velocity. Under perfect conditions, this ideal traffic flow could persist forever. However, in reality, the flow is constantly exposed to small perturbations: imperfections on the asphalt, tiny hiccups of the engines, half-seconds of driver inattention, and so on. To predict how this traffic flow evolves, the key question is whether these small perturbations decay or are amplified.

If they decay, the traffic flow is stable and there are no jams. But if they are amplified, the uniform flow becomes unstable, with small perturbations growing into backwards-traveling waves called “jamitons.” These jamitons can be observed in reality, are visible in various types of models and computer simulations, and have also been reproduced in tightly controlled experiments.
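Such a simulation is easy to reproduce. Below is a minimal Python sketch of the classic ring-road experiment, using the optimal-velocity car-following model of Bando and colleagues; that is one standard microscopic model, not necessarily the one used by the researchers quoted here, and every parameter value is an illustrative choice.

    import numpy as np

    def simulate_ring(n_cars=50, road_len=100.0, a=1.0, t_end=300.0, dt=0.05):
        # Identical cars on a single-lane ring; one starts slightly out of place.
        def V(h):                                  # target speed for headway h
            return np.tanh(h - 2.0) + np.tanh(2.0)
        x = np.linspace(0.0, road_len, n_cars, endpoint=False)
        x[0] += 0.1                                # the tiny perturbation
        v = np.full(n_cars, V(road_len / n_cars))  # uniform starting speed
        for _ in range(int(t_end / dt)):
            h = (np.roll(x, -1) - x) % road_len    # headway to the car ahead
            v = np.maximum(v + a * (V(h) - v) * dt, 0.0)  # relax toward target
            x = (x + v * dt) % road_len
        return v

    v = simulate_ring()
    print("speed spread after 300 s:", round(float(v.max() - v.min()), 3))

With these numbers the uniform flow is linearly unstable, so the 0.1-unit nudge grows into a backward-travelling stop-and-go wave: some cars end up nearly stationary while others cruise, even though every driver obeys the identical rule.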

In macroscopic, or “fluid-dynamical,” models, each driver—interpreted as a traffic-fluid particle—observes the local density of traffic around her at any instant in time and accordingly decides on a target velocity: fast, when few cars are nearby, or slow, when the congestion level is high. Then she accelerates or decelerates towards this target velocity. In addition, she anticipates what the traffic will do next. This predictive driving effect is modeled by a “traffic pressure,” which acts in many ways like the pressure in a real fluid.

The mathematical analysis of traffic models reveals that these two are competing effects. The delay before drivers reach their target velocity causes the growth of perturbations, while traffic pressure makes perturbations decay. A uniform flow profile is stable if the anticipation effect dominates, which it does when traffic density is low. The delay effect dominates when traffic densities are high, causing instabilities and, ultimately, phantom jams.

The transition from uniform traffic flow to jamiton-dominated flow is similar to water turning from a liquid state into a gas state. In traffic, this phase transition occurs once traffic density reaches a particular, critical threshold at which the drivers’ anticipation exactly balances the delay effect in their velocity adjustment. The most fascinating aspect of this phase transition is that the character of the traffic changes dramatically while individual drivers do not change their driving behavior at all.
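For the car-following sketch above, that threshold can be written down. Linear stability analysis of the optimal-velocity model, a textbook result for that particular model rather than a formula from this article, says uniform flow at headway h is stable only while the drivers' sensitivity a exceeds 2V'(h), twice the slope of the optimal-velocity function. A few lines locate the unstable band numerically:

    import numpy as np

    a = 1.0                                     # drivers' sensitivity, as above
    dV = lambda h: 1.0 / np.cosh(h - 2.0) ** 2  # V'(h) for V = tanh(h-2) + tanh(2)

    h = np.linspace(0.5, 6.0, 2001)
    unstable = h[2 * dV(h) > a]                 # headways where perturbations grow
    print(f"unstable for headways {unstable.min():.2f} to {unstable.max():.2f}")
    # -> roughly 1.12 to 2.88: sparse traffic is stable, a band of denser
    #    traffic is not, and nothing about any individual driver changes.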

The occurrence of jamiton traffic waves, then, can be explained by phase transition behavior. To think about how to prevent phantom jams, though, we also need to understand the details of the structure of a fully established jamiton. In macroscopic traffic models, jamitons are the mathematical analog of detonation waves, which naturally occur in explosions. All jamitons have a localized region of high traffic density and low vehicle velocity. The transition from high to low speed is extremely abrupt—like a shock wave in a fluid. Vehicles that run into the shock front are forced to brake heavily. After the shock is a “reaction zone,” in which drivers attempt to accelerate back to their original velocity. Finally, at the end of the phantom jam, from the drivers’ perspective, is the “sonic point.”

The name “sonic point” comes from the analogy with detonation waves. In an explosion, it is at this point that the flow turns from supersonic to subsonic. This has crucial implications for the information flow within a detonation wave, as well as in a jamiton. The sonic point provides an information boundary, similar to the event horizon in a black hole: no information from further downstream can affect the jamiton through the sonic point. This makes dispersing jamitons rather difficult—a vehicle can’t affect the jamiton through its driving behavior after passing through.

Instead, the driving behavior of a vehicle must be affected before it runs into a jamiton. Wireless communication between vehicles provides one possibility to achieve this goal, and today’s mathematical models allow us to develop appropriate ways to use tomorrow’s technology. For example, once a vehicle detects a sudden braking event followed by an immediate acceleration, it can broadcast a “jamiton warning” to the vehicles following it within a mile distance. The drivers of those vehicles can then, at the least, prepare for unexpected braking; or, better still, increase their headway so that they can eventually contribute to the dissipation of the traffic wave.
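As a hedged sketch of that idea, the detection step might look like the following; the thresholds, the one-mile radius and the broadcast hook are all invented here, since the article proposes no specific protocol.

    def detect_jamiton(speeds, dt=0.5, brake=-3.0, accel=1.5):
        # Look for a sudden-braking-then-reacceleration signature in a log
        # of speeds (m/s) sampled every dt seconds.
        rates = [(b - a) / dt for a, b in zip(speeds, speeds[1:])]
        braked = False
        for r in rates:
            if r <= brake:
                braked = True        # hard braking observed...
            elif braked and r >= accel:
                return True          # ...followed by speeding back up
        return False

    def maybe_warn(position, speeds, broadcast):
        # 'broadcast' stands in for whatever V2V channel exists.
        if detect_jamiton(speeds):
            broadcast({"event": "jamiton", "position": position,
                       "advice": "increase headway"}, radius_m=1609)

Followers that receive the message and lengthen their headways are doing precisely what lets a wave dissipate.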

The insights we glean from fluid-dynamical traffic models can help with many other real-world problems. For example, supply chains exhibit a queuing behavior reminiscent of traffic jams. Jamming, queuing, and wave phenomena can also be observed in gas pipelines, information webs, and flows in biological networks—all of which can be understood as fluid-like flows.

Besides being an important mathematical case study, the phantom traffic jam is, perhaps, also an interesting and instructive social system. Whenever jamitons arise, they are caused by the collective behavior of all drivers—not a few bad apples on the road. Those who drive preventively can dissipate jamitons, and benefit all of the drivers behind them. It is a classic example of the effectiveness of the Golden Rule.

So the next time you are caught in a warrantless, pointless, and spontaneous traffic jam, remember just how much more it is than it seems.

Benjamin Seibold is an Assistant Professor of Mathematics at Temple University.

Could Air-Conditioning Fix Climate Change?

Researchers proposed a carbon-neutral “synthetic oil well” on every rooftop. August 2nd 2020

Scientific American

  • Richard Conniff
GettyImages-171242693.jpg

Photo from 4FR / Getty Images.

It is one of the great dilemmas of climate change: We take such comfort from air conditioning that worldwide energy consumption for that purpose has already tripled since 1990. It is on track to grow even faster through mid-century—and assuming fossil-fuel–fired power plants provide the electricity, that could cause enough carbon dioxide emissions to warm the planet by another deadly half-degree Celsius.

A paper published in Nature Communications proposes a partial remedy: Heating, ventilation and air conditioning (or HVAC) systems move a lot of air. They can replace the entire air volume in an office building five or 10 times an hour. Machines that capture carbon dioxide from the atmosphere—a developing fix for climate change—also depend on moving large volumes of air. So why not save energy by tacking the carbon capture machine onto the air conditioner?

This futuristic proposal, from a team led by chemical engineer Roland Dittmeyer at Germany’s Karlsruhe Institute of Technology, goes even further. The researchers imagine a system of modular components, powered by renewable energy, that would not just extract carbon dioxide and water from the air. It would also convert them into hydrogen, and then use a multistep chemical process to transform that hydrogen into liquid hydrocarbon fuels. The result: “Personalized, localized and distributed, synthetic oil wells” in buildings or neighborhoods, the authors write. “The envisioned model of ‘crowd oil’ from solar refineries, akin to ‘crowd electricity’ from solar panels,” would enable people “to take control and collectively manage global warming and climate change, rather than depending on the fossil power industrial behemoths.”

The research group has already developed an experimental model that can complete several key steps of the process, Dittmeyer says, adding, “The plan in two or three years is to have the first experimental showcase where I can show you a bottle of hydrocarbon fuel from carbon dioxide captured in an air-conditioning unit.”

Neither Dittmeyer nor co-author Geoffrey Ozin, a chemical engineer at the University of Toronto, would predict how long it might take before building owners could purchase and install such units. But Ozin claims much of the necessary technology is already commercially available. He says the carbon capture equipment could come from a Swiss “direct air capture” company called Climeworks, and the electrolyzers to convert carbon dioxide and water into hydrogen are available from Siemens, Hydrogenics or other companies. “And you use Roland’s amazing microstructure catalytic reactors, which convert the hydrogen and carbon dioxide into a synthetic fuel,” he adds. Those reactors are being brought to market by the German company Ineratec, a spinoff from Dittmeyer’s research. Because the system would rely on advanced forms of solar energy, Ozin thinks of the result as “photosynthetic buildings.”

The authors calculate that applying this system to the HVAC in one of Europe’s tallest skyscrapers, the MesseTurm, or Trade Fair Tower, in Frankfurt, would extract and convert enough carbon dioxide to yield at least 2,000 metric tons (660,000 U.S. gallons) of fuel a year. The office space in the entire city of Frankfurt could yield more than 370,000 tons (122 million gallons) annually, they say.
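Those conversions are easy to sanity-check. Assuming a fuel density of about 0.8 kilograms per litre, my assumption since the article states none, the tons and gallons line up:

    KG_PER_METRIC_TON = 1000.0
    LITRES_PER_US_GALLON = 3.785
    FUEL_DENSITY = 0.8  # kg per litre, assumed for a liquid hydrocarbon

    def tons_to_gallons(tons):
        litres = tons * KG_PER_METRIC_TON / FUEL_DENSITY
        return litres / LITRES_PER_US_GALLON

    print(f"{tons_to_gallons(2_000):,.0f} gallons")    # ~660,000 (MesseTurm)
    print(f"{tons_to_gallons(370_000):,.0f} gallons")  # ~122 million (Frankfurt)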

“This is a wonderful concept—it made my day,” says David Keith, a Harvard professor of applied physics and public policy, who was not involved in the new paper. He suggests that the best use for the resulting fuels would be to “help solve two of our biggest energy challenges”: providing a carbon-neutral fuel to fill the gaps left by intermittent renewables such as wind and solar power, and providing fuel for “the hard-to-electrify parts of transportation and industry,” such as airplanes, large trucks and steel- or cement-making. Keith is already targeting some of these markets through Carbon Engineering, a company he founded focused on direct air capture of carbon dioxide for large-scale liquid fuel production. But he says he is “deeply skeptical” about doing it on a distributed building or neighborhood basis. “Economies of scale can’t be wished away. There’s a reason we have huge wind turbines,” he says—and a reason we do not have backyard all-in-one pulp-and-paper mills for disposing of our yard wastes. He believes it is simply “faster and cheaper” to take carbon dioxide from the air and turn it into fuel “by doing it at an appropriate scale.”

Other scientists who were not involved in the new paper note two other potential problems. “The idea that Roland has presented is an interesting one,” says Jennifer Wilcox, a chemical engineer at Worcester Polytechnic Institute, “but more vetting needs to be done in order to determine the true potential of the approach.” While it seems to make sense to take advantage of the air movement already being generated by HVAC systems, Wilcox says, building and operating the necessary fans is not what makes direct air capture systems so expensive. “The dominant capital cost,” she says, “is the solid adsorbent materials”—that is, substances to which the carbon dioxide adheres—and the main energy cost is the heat needed to recover the carbon dioxide from these materials afterward. Moreover, she contends that any available solar or other carbon-free power source would be put to better use in replacing fossil-fuel-fired power plants, to reduce the amount of carbon dioxide getting into the air in the first place.

“The idea of converting captured carbon into liquid fuel is persuasive,” says Matthew J. Realff, a chemical engineer at Georgia Institute of Technology. “We have an enormous investment in our liquid fuel infrastructure, and using that has tremendous value. You wouldn’t have to build a whole new infrastructure. But this concept of doing it at the household level is a little bit fantastical”—partly because the gases involved (carbon monoxide and hydrogen) are toxic and explosive. The process to convert them to a liquid fuel is well understood, Realff says, but it produces a range of products that now typically get separated out in massive refineries—requiring huge amounts of energy. “It’s possible that it could be worked out at the scale that is being proposed,” he adds. “But we haven’t done it at this point, and it may not turn out to be the most effective way from an economic perspective.” There is, however, an unexpected benefit of direct air capture of carbon dioxide, says Realff, and it could help stimulate market acceptance of the technology: One reason office buildings replace their air so frequently is simply to protect workers from elevated levels of carbon dioxide. His research suggests that capturing the carbon dioxide from the air stream may be one way to cut energy costs, by reducing the frequency of air changes.

Dittmeyer disputes the argument that thinking big is always better. He notes that small, modular plants are a trend in some areas of chemical engineering, “because they are more flexible and don’t involve such a financial risk.” He also anticipates that cost will become less of a barrier as governments face up to the urgency of achieving a climate solution, and as jurisdictions increasingly impose carbon taxes or mandate strict energy efficiency standards for buildings.

“Of course, it’s a visionary perspective,” he says, “it relies on this idea of a decentralized product empowering people, not leaving it to industry. Industrial players observe the situation, but as long as there is no profit in the short term, they won’t do anything. If we have the technology that is safe and affordable, though maybe not as cheap, we can generate some momentum” among individuals, much as happened in the early stages of the solar industry. “And then I would expect the industrial parties to act, too.”

Richard Conniff is an award-winning science writer. His books include The Species Seekers: Heroes, Fools, and the Mad Pursuit of Life on Earth (W. W. Norton, 2011).

Could Consciousness All Come Down to the Way Things Vibrate?

A resonance theory of consciousness suggests that the way all matter vibrates, and the tendency for those vibrations to sync up, might be a way to answer the so-called ‘hard problem’ of consciousness.

The Conversation

  • Tam Hunt

What do synchronized vibrations add to the mind/body question? Photo by agsandrew / Shutterstock.com.

Why is my awareness here, while yours is over there? Why is the universe split in two for each of us, into a subject and an infinity of objects? How is each of us our own center of experience, receiving information about the rest of the world out there? Why are some things conscious and others apparently not? Is a rat conscious? A gnat? A bacterium?

These questions are all aspects of the ancient “mind-body problem,” which asks, essentially: What is the relationship between mind and matter? It’s resisted a generally satisfying conclusion for thousands of years.

The mind-body problem enjoyed a major rebranding over the last two decades. Now it’s generally known as the “hard problem” of consciousness, after philosopher David Chalmers coined this term in a now classic paper and further explored it in his 1996 book, “The Conscious Mind: In Search of a Fundamental Theory.”

Chalmers thought the mind-body problem should be called “hard” in comparison to what, with tongue in cheek, he called the “easy” problems of neuroscience: How do neurons and the brain work at the physical level? Of course they’re not actually easy at all. But his point was that they’re relatively easy compared to the truly difficult problem of explaining how consciousness relates to matter.

Over the last decade, my colleague, University of California, Santa Barbara psychology professor Jonathan Schooler and I have developed what we call a “resonance theory of consciousness.” We suggest that resonance – another word for synchronized vibrations – is at the heart of not only human consciousness but also animal consciousness and of physical reality more generally. It sounds like something the hippies might have dreamed up – it’s all vibrations, man! – but stick with me.

How do things in nature – like flashing fireflies – spontaneously synchronize? Photo by Suzanne Tucker /Shutterstock.com.

All About the Vibrations

All things in our universe are constantly in motion, vibrating. Even objects that appear to be stationary are in fact vibrating, oscillating, resonating, at various frequencies. Resonance is a type of motion, characterized by oscillation between two states. And ultimately all matter is just vibrations of various underlying fields. As such, at every scale, all of nature vibrates.

Something interesting happens when different vibrating things come together: They will often start, after a little while, to vibrate together at the same frequency. They “sync up,” sometimes in ways that can seem mysterious. This is described as the phenomenon of spontaneous self-organization.

Mathematician Steven Strogatz provides various examples from physics, biology, chemistry and neuroscience to illustrate “sync” – his term for resonance – in his 2003 book “Sync: How Order Emerges from Chaos in the Universe, Nature, and Daily Life,” including:

  • When fireflies of certain species come together in large gatherings, they start flashing in sync, in ways that can still seem a little mystifying.
  • Lasers are produced when photons of the same power and frequency sync up.
  • The moon’s rotation is exactly synced with its orbit around the Earth such that we always see the same face.
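The standard mathematical picture of this kind of sync, one Strogatz himself helped develop, is the Kuramoto model: coupled oscillators with slightly different natural frequencies pull one another into phase once their coupling is strong enough. A minimal Python sketch, with every parameter an illustrative choice:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100
    omega = rng.normal(1.0, 0.1, n)       # each oscillator's natural frequency
    theta = rng.uniform(0, 2 * np.pi, n)  # random starting phases
    K, dt = 2.0, 0.01                     # coupling strength, time step

    for _ in range(5000):
        z = np.exp(1j * theta).mean()     # complex order parameter
        theta += (omega + K * abs(z) * np.sin(np.angle(z) - theta)) * dt

    print("coherence:", round(abs(np.exp(1j * theta).mean()), 3))  # near 1 = synced

Run it again with K = 0.05 and the coherence stays low: below a critical coupling the "fireflies" never sync, while above it they lock together spontaneously.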

Examining resonance leads to potentially deep insights about the nature of consciousness and about the universe more generally.

External electrodes can record a brain’s activity. Photo by vasara / Shutterstock.com.

Sync Inside Your Skull

Neuroscientists have identified sync in their research, too. Large-scale neuron firing occurs in human brains at measurable frequencies, with mammalian consciousness thought to be commonly associated with various kinds of neuronal sync.

For example, German neurophysiologist Pascal Fries has explored the ways in which various electrical patterns sync in the brain to produce different types of human consciousness.

Fries focuses on gamma, beta and theta waves. These labels refer to the speed of electrical oscillations in the brain, measured by electrodes placed on the outside of the skull. Groups of neurons produce these oscillations as they use electrochemical impulses to communicate with each other. It’s the speed and voltage of these signals that, when averaged, produce EEG waves that can be measured at signature cycles per second.

Each type of synchronized activity is associated with certain types of brain function. Image from artellia / Shutterstock.com.

Gamma waves are associated with large-scale coordinated activities like perception, meditation or focused consciousness; beta with maximum brain activity or arousal; and theta with relaxation or daydreaming. These three wave types work together to produce, or at least facilitate, various types of human consciousness, according to Fries. But the exact relationship between electrical brain waves and consciousness is still very much up for debate.

Fries calls his concept “communication through coherence.” For him, it’s all about neuronal synchronization. Synchronization, in terms of shared electrical oscillation rates, allows for smooth communication between neurons and groups of neurons. Without this kind of synchronized coherence, inputs arrive at random phases of the neuron excitability cycle and are ineffective, or at least much less effective, in communication.

A Resonance Theory of Consciousness

Our resonance theory builds upon the work of Fries and many others, with a broader approach that can help to explain not only human and mammalian consciousness, but also consciousness more broadly.

Based on the observed behavior of the entities that surround us, from electrons to atoms to molecules, to bacteria to mice, bats, rats, and on, we suggest that all things may be viewed as at least a little conscious. This sounds strange at first blush, but “panpsychism” – the view that all matter has some associated consciousness – is an increasingly accepted position with respect to the nature of consciousness.

The panpsychist argues that consciousness did not emerge at some point during evolution. Rather, it’s always associated with matter and vice versa – they’re two sides of the same coin. But the large majority of the mind associated with the various types of matter in our universe is extremely rudimentary. An electron or an atom, for example, enjoys just a tiny amount of consciousness. But as matter becomes more interconnected and rich, so does the mind, and vice versa, according to this way of thinking.

Biological organisms can quickly exchange information through various biophysical pathways, both electrical and electrochemical. Non-biological structures can only exchange information internally using heat/thermal pathways – much slower and far less rich in information in comparison. Living things leverage their speedier information flows into larger-scale consciousness than what would occur in similar-size things like boulders or piles of sand, for example. There’s much greater internal connection and thus far more “going on” in biological structures than in a boulder or a pile of sand.

Under our approach, boulders and piles of sand are “mere aggregates,” just collections of highly rudimentary conscious entities at the atomic or molecular level only. That’s in contrast to what happens in biological life forms where the combinations of these micro-conscious entities together create a higher level macro-conscious entity. For us, this combination process is the hallmark of biological life.

The central thesis of our approach is this: the particular linkages that allow for large-scale consciousness – like those humans and other mammals enjoy – result from a shared resonance among many smaller constituents. The speed of the resonant waves that are present is the limiting factor that determines the size of each conscious entity in each moment.

As a particular shared resonance expands to more and more constituents, the new conscious entity that results from this resonance and combination grows larger and more complex. So the shared resonance in a human brain that achieves gamma synchrony, for example, includes a far larger number of neurons and neuronal connections than is the case for beta or theta rhythms alone.

What about larger inter-organism resonance like the cloud of fireflies with their little lights flashing in sync? Researchers think their bioluminescent resonance arises due to internal biological oscillators that automatically result in each firefly syncing up with its neighbors.

Is this group of fireflies enjoying a higher level of group consciousness? Probably not, since we can explain the phenomenon without recourse to any intelligence or consciousness. But in biological structures with the right kind of information pathways and processing power, these tendencies toward self-organization can and often do produce larger-scale conscious entities.

Our resonance theory of consciousness attempts to provide a unified framework that includes neuroscience, as well as more fundamental questions of neurobiology and biophysics, and also the philosophy of mind. It gets to the heart of the differences that matter when it comes to consciousness and the evolution of physical systems.

It is all about vibrations, but it’s also about the type of vibrations and, most importantly, about shared vibrations.

Tam Hunt is an Affiliate Guest in Psychology at the University of California, Santa Barbara.

Science or Compliance?

Getting There Social Theory July 25th 2020

There is a lot going on in the world, much of it quite bad. As a trained social scientist from the early 1970s, I was taught sociological and economic theories no longer popular with the global monstrously rich ruling elite.  By the way, in case you don’t know, you do not have to hold political office to be a part of that elite.  I was also taught philosophy and economic history of Britain and the United States.

So, firstly let’s get economics dealt with.  I was taught about its founders, men like Jeremy Bentham, also a philosopher, Malthus, Jevons and Marshall – the latter bringing order, principles and so called ‘rational economic man’ into the discipline.

Society was pretty rigid, wars were the way countries got richer, and class systems became more objectified. All went well until the post-World War One ‘Great Depression’ when, in spite of rapidly falling interest rates, the rich decided to let the poor sink while they retrenched and had a good time – stirring up Nazism.

Economics revolutionary John Maynard Keynes concluded that governments needed to tax the rich and borrow to spend their way out of depression. Britain’s elite would have none of it, but the U.S.A., Italy and Germany took it up. I admit to being a nerd: I was reading Keynes’s ‘General Theory of Employment, Interest and Money’ in my teens – much more interesting to me than following football.

Meanwhile Russia was locked out as a pariah. Britain had done its best to discredit and destroy Russia because the revolutionaries had killed the British Royal family’s treacherous cousins – terrible and corrupt rulers of Russia – and because the revolution terrified the British rich with the fear of the lower orders rising up in communist revolt.

Only World War Two saved them, offering a wonderful opportunity to slaughter more of the lower orders. In the process, their empire was exposed and fell apart in the post-war age – a ghost of it surviving as the Commonwealth (sic).

So we come to sociology. Along the way, through this Industrial Revolution, empire building, oppression and decline, a so-called ‘Science of Society’ had been developing, with substantial data collected and poured into theories. Marxism was the most famous, with Karl Marx’s forgotten friend, the industrialist Friedrich Engels, well placed to collect the data.

The essence of Marxist theory, which was primarily based on the Hegelian dialectic and Marx’s historical studies, was that capitalism contained the seeds of its own destruction, due to an inherent conflict between those who owned the means of production and the slaves being exploited for profit and greed. Taking the opportunity provided by incompetent Russian elite rule in 1917, Germany helped smuggle Lenin back into Russia to foment the Russian revolution.

That revolution, and Russia itself, have terrified the rich Western elites ever since, with all manner of methods and episodes used to undermine it. It is no wonder, leaving the vile Stalin to one side, that Russia developed police-state methods, leading to what windbag Churchill called an Iron Curtain descending and dividing Europe.

By 1991 the West – dominated by the British and U.S. elites, who had the most to lose from what the U.S. had called the ‘Domino Theory’ of one country after another falling to communism, because the masses might realise how they were being exploited – thought their day had come. Gorbachev got rid of the Berlin Wall; the U.S. undermined him to get their friend Yeltsin into power.

But it didn’t last once Putin stepped up. Oligarchs, allowed by Yeltsin to rip off state assets, rushed to Britain, even donating to Tory Party funds. Ever since, the Western elite have been in overdrive to discredit Putin in spite of the progress he has inspired and directed.

Anglo-U.S. sanctions aren’t working fast enough, and Germany wants to buy Russian gas – Nord Stream 2. So now we have a fake socialist, former head of Britain’s corrupt CPS and now Labour’s top man (sic), wanting RT (Russia Today) closed down. There is a lot of worry about people not watching BBC TV, in spite of being forced to pay for an expensive licence, when they do not want to watch the BBC’s smug upper-middle-class drivel and biased news. This is where sociology comes back into the picture.

The discipline (sic) of actual sociology derived from French thinkers like Auguste Comte, who predicted that sociologists would become the priesthood of modern society – see where our government gets its ‘the science’ mantra from.

As with economics, sociology was about understanding the increasingly complex way of life in an industrialising world. Early schools of sociological thought drew comparisons with Darwin’s idea of organisms evolving. So society’s head was its government, the transport system was its veins and arteries, etc., with every part working towards functional integration.

Herbert Spencer, whose girlfriend Mary Ann Evans wrote social-science-orientated novels under the name George Eliot, and Frenchman Emile Durkheim founded this ‘functionalist’ school of sociology. Durkheim, inspired by thinkers from the French Revolutionary era, took that school a stage further. His theory considered dysfunctions, which he called ‘pathological’ factors, like suicide. Robert K. Merton went on after 1945 to write about dysfunctional aspects of society, building on Durkheim’s work. Both men had a concept of ‘anomie’: Durkheim talked of normlessness, Merton of people and societies never satisfied, having ever-receding horizons.

To an old-school person like myself, these ideas are still useful, as is Keynes on economics. One just has to look behind today’s self-interested pseudo-scientific jargon about ‘experts say’, ‘the science’ and ‘studies reveal’. The important thing to remember about any social science – and epidemiology is one of them – is that what you get out, or predict, depends on what you put in. As far as Covid 19 is concerned, there are too many vested interests now to take anything they say seriously. It is quite clear that there is no evidence that lockdown works. There is clear evidence that certain groups make themselves vulnerable, or are deluded that death does not come with old age.

I am old. I am one of the ‘I’ and ‘Me’ generation whose interests should not come first, and nor should the BAME. The same goes for Africa, the Indian subcontinent and the Middle East, where overpopulation, foreign aid, corruption, Oxfam, ignorance, dictators and religious bigotry are not solutions to Covid 19 or anything else. If our pathetic fake-caring politicians carry on like this, grovelling to the likes of the WHO, then we are all doomed.

As for little Greta, she is a rather noisy, poorly educated, opinionated stooge. She has no idea what she is talking about. As for modern sociology, it is pure feminist narrow-minded dogma, popular on police training courses for morons to use for profiling and fitting up innocent men. They go on toilet-paper degree courses, getting rather impressive letters – BSc – to make them look and sound like experts.

Robert Cook

New Alien Theory July 18th 2020

After decades of searching, we still haven’t discovered a single sign of extraterrestrial intelligence. Probability tells us life should be out there, so why haven’t we found it yet?

The problem is often referred to as Fermi’s paradox, after the Nobel Prize–winning physicist Enrico Fermi, who once asked his colleagues this question at lunch. Many theories have been proposed over the years. It could be that we are simply alone in the universe or that there is some great filter that prevents intelligent life progressing beyond a certain stage. Maybe alien life is out there, but we are too primitive to communicate with it, or we are placed inside some cosmic zoo, observed but left alone to develop without external interference. Now, three researchers think they may have another potential answer to Fermi’s question: Aliens do exist; they’re just all asleep.

According to a research paper accepted for publication in the Journal of the British Interplanetary Society, extraterrestrials are sleeping while they wait. In the paper, authors from Oxford’s Future of Humanity Institute and the Astronomical Observatory of Belgrade – Anders Sandberg, Stuart Armstrong, and Milan Cirkovic – argue that the universe is too hot right now for advanced, digital civilizations to make the most efficient use of their resources. The solution: Sleep and wait for the universe to cool down, a process known as aestivating (like hibernation, but sleeping until it’s colder).

Understanding the new hypothesis first requires wrapping your head around the idea that the universe’s most sophisticated life may elect to leave biology behind and live digitally. Having essentially uploaded their minds onto powerful computers, the civilizations choosing to do this could enhance their intellectual capacities or inhabit some of the harshest environments in the universe with ease.

The idea that life might transition toward a post-biological form of existence is gaining ground among experts. “It’s not something that is necessarily unavoidable, but it is highly likely,” Cirkovic told me in an interview.

Once you’re living digitally, Cirkovic explained, it’s important to process information efficiently. Each computation has a certain cost attached to it, and this cost is tightly coupled with temperature. The colder it gets, the lower the cost is, meaning you can do more with the same amount of resources. This is one of the reasons why we cool powerful computers. Though humans may find the universe to be a pretty frigid place (the background radiation hovers about 3 kelvins above absolute zero, the very lower limit of the temperature scale), digital minds may find it far too hot.
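The temperature-cost link the authors lean on is usually stated through Landauer's limit: erasing one bit of information costs at least kT ln 2 of energy, so the colder the universe, the more computation a fixed energy budget buys. A quick illustration; the future temperature here is an arbitrary pick of mine, and the paper's own estimate folds in other factors as well.

    import math

    K_B = 1.380649e-23  # Boltzmann constant, J/K

    def landauer_joules_per_bit(temp_k):
        # Minimum energy to erase one bit at temperature temp_k.
        return K_B * temp_k * math.log(2)

    now = landauer_joules_per_bit(3.0)      # today's ~3 K background
    later = landauer_joules_per_bit(1e-10)  # a far-future, far colder cosmos
    print(f"{now:.2e} J per bit now, {later:.2e} J per bit later")
    print(f"same energy buys {now / later:.0e} times more bit erasures")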

But why aestivate? Surely any aliens wanting more efficient processing could cool down their systems manually, just as we do with computers. In the paper, the authors concede this is a possibility. “While it is possible for a civilization to cool down parts of itself to any low temperature,” the authors write, that, too, requires work. So it wouldn’t make sense for a civilization looking to maximize its computational capacity to waste energy on the process. As Sandberg and Cirkovic elaborate in a blog post, it’s more likely that such artificial life would be in a protected sleep mode today, ready to wake up in colder futures.

If such aliens exist, they’re in luck. The universe appears to be cooling down on its own. Over the next trillions of years, as it continues to expand and the formation of new stars slows, the background radiation will reduce to practically zero. Under those conditions, Sandberg and Cirkovic explain, this kind of artificial life would get “tremendously more done.” Tremendous isn’t an understatement, either. The researchers calculate that by employing such a strategy, they could achieve up to 10^30 times more than if done today. That’s a 1 with 30 zeroes after it.

But just because the aliens are asleep doesn’t mean we can’t find signs of them. Any aestivating civilization has to preserve resources it intends to use in the future. Processes that waste or threaten these resources, then, should be conspicuously absent, thanks to interference from the aestivators. (If they are sufficiently advanced to upload their minds and aestivate, they should be able to manipulate space.) This includes galaxies colliding, galactic winds venting matter into intergalactic space, and stars converting into black holes, which can push resources beyond the reach of the sleeping civilization or change them into less-useful forms.

Another strategy to find the sleeping aliens, Cirkovic said, might be to try to meddle with the aestivators’ possessions and territory, which we may already reside within. One way of doing this would be to send self-replicating probes out into the universe to steal the aestivators’ things. Any competent species ought to have measures in place to respond to this kind of threat. “It could be an exceptionally dangerous test,” he cautioned, “but if there really are very old and very advanced civilizations out there, we can assume there is a potential for danger in anything we do.”

Interestingly, neither Sandberg nor Cirkovic said they have much faith in finding anything. Sandberg, writing on his blog, states that he does not believe the hypothesis to be a likely one: “I personally think the likeliest reason we are not seeing aliens is not that they are aestivating.” He writes that he feels it’s more likely that “they do not exist or are very far away.”

Cirkovic concurred. “I don’t find it very likely, either,” he said in our interview. “I much prefer hypotheses that do not rely on assuming intentional decisions made by extraterrestrial societies. Any assumption is extremely speculative.” There could be forms of energy that we can’t even conceive of using now, he said—producing antimatter in bulk, tapping evaporating black holes, using dark matter. Any of this could change what we might expect to see from an advanced technical civilization.

Yet, he said, the theory has a place. It’s important to cover as much ground as possible. You need to test a wide set of hypotheses one by one—falsifying them, pruning them—to get closer to the truth. “This is how science works. We need to have as many hypotheses and explanations for Fermi’s paradox as possible,” he said.

Plus, there’s a modest likelihood their aestivating aliens idea might be part of the answer, Cirkovic said. We shouldn’t expect a single hypothesis to account for Fermi’s paradox. It will be more of a “patchwork-quilt kind of solution,” he said.

And it’s important to keep exploring solutions. Fermi’s paradox is so much more than an intellectual exercise. It’s about trying to understand what might be out there and how this might explain our past and guide our future.

“I would say that 90-plus percent of hypotheses that were historically proposed to account for Fermi’s paradox have practical consequences,” Cirkovic said. They allow us to think proactively about some of the problems we as a species face, or may one day face, and prompt us to develop strategies to actively shape a more prosperous and secure future for humanity. “We can apply this reasoning to our past, to the emergence of life and complexity. We can also apply similar reasoning to thinking about our future. It can help us avoid catastrophes and help us understand the most likely fate of intelligent species in the universe.”

Stephen Hawking Left Us Bold Predictions on AI, Superhumans, and Aliens

The great physicist’s thoughts on the future of the human race and the fragility of planet Earth.

Quartz

  • Max de Haldevang

The late physicist Stephen Hawking’s last writings predict that a breed of superhumans will take over, having used genetic engineering to surpass their fellow beings.

In Brief Answers to the Big Questions, published in October 2018 and excerpted in the UK’s Sunday Times (paywall), Hawking pulls no punches on subjects like machines taking over, the biggest threat to Earth, and the possibilities of intelligent life in space.

Artificial Intelligence

Hawking delivers a grave warning on the importance of regulating AI, noting that “in the future AI could develop a will of its own, a will that is in conflict with ours.” A possible arms race over autonomous weapons should be stopped before it can start, he writes, asking what would happen if a crash similar to the 2010 stock market Flash Crash happened with weapons. He continues:

In short, the advent of super-intelligent AI would be either the best or the worst thing ever to happen to humanity. The real risk with AI isn’t malice, but competence. A super-intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours we’re in trouble. You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green-energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants.

Earth’s Bleak Future, Gene Editing, and Superhumans

The bad news: At some point in the next 1,000 years, nuclear war or environmental calamity will “cripple Earth.” However, by then, “our ingenious race will have found a way to slip the surly bonds of Earth and will therefore survive the disaster.” The Earth’s other species probably won’t make it, though.

The humans who do escape Earth will probably be new “superhumans” who have used gene-editing technology like CRISPR to outpace others. They’ll do so by defying laws against genetic engineering, improving their memories, disease resistance, and life expectancy, he says.

Hawking seems curiously enthusiastic about this final point, writing, “There is no time to wait for Darwinian evolution to make us more intelligent and better natured.”

Once such superhumans appear, there are going to be significant political problems with the unimproved humans, who won’t be able to compete. Presumably, they will die out, or become unimportant. Instead, there will be a race of self-designing beings who are improving themselves at an ever-increasing rate. If the human race manages to redesign itself, it will probably spread out and colonise other planets and stars.

Intelligent Life in Space

Hawking acknowledges there are various explanations for why intelligent life hasn’t been found or has not visited Earth. His predictions here aren’t so bold, but his preferred explanation is that humans have “overlooked” forms of intelligent life that are out there.

Does God Exist?

No, Hawking says.

The question is, is the way the universe began chosen by God for reasons we can’t understand, or was it determined by a law of science? I believe the second. If you like, you can call the laws of science “God”, but it wouldn’t be a personal God that you would meet and put questions to.

The Biggest Threats to Earth

Threat number one is an asteroid collision, like the one that killed the dinosaurs. However, “we have no defense” against that, Hawking writes. More immediately: climate change. “A rise in ocean temperature would melt the ice caps and cause the release of large amounts of carbon dioxide,” Hawking writes. “Both effects could make our climate like that of Venus with a temperature of 250C.”

The Best Idea Humanity Could Implement

Nuclear fusion power. That would give us clean energy with no pollution or global warming.



Could Invisible Aliens Really Exist Among Us? An Astrobiologist Explains

The Earth may be crawling with undiscovered creatures with a biochemistry that differs from life as we know it. July 13th 2020

The Conversation

  • Samantha Rolfe

They probably won’t look anything like this. Credit: Martina Badini / Shutterstock.

Life is pretty easy to recognise. It moves, it grows, it eats, it excretes, it reproduces. Simple. In biology, researchers often use the acronym “MRSGREN” to describe it. It stands for movement, respiration, sensitivity, growth, reproduction, excretion and nutrition.

But Helen Sharman, Britain’s first astronaut and a chemist at Imperial College London, recently said that alien lifeforms that are impossible to spot may be living among us. How could that be possible?

While life may be easy to recognise, it’s actually notoriously difficult to define, and it has kept scientists and philosophers debating for centuries – if not millennia. For example, a 3D printer can reproduce itself, but we wouldn’t call it alive. On the other hand, a mule is famously sterile, but we would never say it doesn’t live.

As nobody can agree, there are more than 100 definitions of what life is. An alternative (but imperfect) approach is describing life as “a self-sustaining chemical system capable of Darwinian evolution”, which works for many cases we want to describe.

The lack of definition is a huge problem when it comes to searching for life in space. Not being able to define life other than “we’ll know it when we see it” means we are truly limiting ourselves to geocentric, possibly even anthropocentric, ideas of what life looks like. When we think about aliens, we often picture a humanoid creature. But the intelligent life we are searching for doesn’t have to be humanoid.

Life, But Not as We Know It

Sharman says she believes aliens exist and “there’s no two ways about it”. Furthermore, she wonders: “Will they be like you and me, made up of carbon and nitrogen? Maybe not. It’s possible they’re here right now and we simply can’t see them.”

Such life would exist in a “shadow biosphere”. By that, I don’t mean a ghost realm, but undiscovered creatures probably with a different biochemistry. This means we can’t study or even notice them because they are outside of our comprehension. Assuming it exists, such a shadow biosphere would probably be microscopic.

So why haven’t we found it? We have limited ways of studying the microscopic world, as only a small percentage of microbes can be cultured in a lab. This may mean that there could indeed be many lifeforms we haven’t yet spotted. We do now have the ability to sequence the DNA of unculturable strains of microbes, but this can only detect life as we know it – life that contains DNA.

If we find such a biosphere, however, it is unclear whether we should call it alien. That depends on whether we mean “of extraterrestrial origin” or simply “unfamiliar”.

Silicon-Based Life

A popular suggestion for an alternative biochemistry is one based on silicon rather than carbon. It makes sense, even from a geocentric point of view. Around 90 percent of the Earth is made up of silicon, iron, magnesium and oxygen, which means there’s lots to go around for building potential life.

Artist’s impression of a silicon-based life form. Credit: Zita.

Silicon is similar to carbon in that it has four electrons available for creating bonds with other atoms. But silicon is heavier, with 14 protons (protons, together with neutrons, make up the atomic nucleus) compared with the six in the carbon nucleus. While carbon can create strong double and triple bonds to form long chains useful for many functions, such as building cell walls, this is much harder for silicon. Silicon struggles to create strong bonds, so long-chain molecules are much less stable.

What’s more, common silicon compounds, such as silicon dioxide (or silica), are generally solid at terrestrial temperatures and insoluble in water. Compare this to highly soluble carbon dioxide, for example, and we see that carbon is more flexible and provides many more molecular possibilities.

Another argument against a silicon-based shadow biosphere is that too much silicon is locked up in rocks: the chemistry of life on Earth is fundamentally different from the bulk composition of the Earth. In fact, the chemical composition of life on Earth correlates approximately with the chemical composition of the sun, with 98 percent of atoms in biology consisting of hydrogen, oxygen and carbon. So if there were viable silicon lifeforms here, they may have evolved elsewhere.

That said, there are arguments in favour of silicon-based life on Earth. Nature is adaptable. A few years ago, scientists at Caltech managed to breed a bacterial protein that created bonds with silicon – essentially bringing silicon to life. So even though silicon is inflexible compared with carbon, it could perhaps find ways to assemble into living organisms, potentially in combination with carbon.

And when it comes to other places in space, such as Saturn’s moon Titan or planets orbiting other stars, we certainly can’t rule out the possibility of silicon-based life.

To find it, we have to somehow think outside of the terrestrial biology box and figure out ways of recognising lifeforms that are fundamentally different from the carbon-based form. There are plenty of experiments testing out these alternative biochemistries, such as the one from Caltech.

Regardless of the belief held by many that life exists elsewhere in the universe, we have no evidence for that. So it is important to consider all life as precious, no matter its size, quantity or location. The Earth supports the only known life in the universe. So no matter what form life elsewhere in the solar system or universe may take, we have to make sure we protect it from harmful contamination – whether it is terrestrial life or alien lifeforms.

So could aliens be among us? I don’t believe that we have been visited by a life form with the technology to travel across the vast distances of space. But we do have evidence for life-forming, carbon-based molecules having arrived on Earth on meteorites, so the evidence certainly doesn’t rule out the same possibility for more unfamiliar life forms.

Samantha Rolfe is a Lecturer in Astrobiology and Principal Technical Officer at the University of Hertfordshire’s Bayfordbury Observatory.

Memories Can Be Injected and Survive Amputation and Metamorphosis July 13th 2020

If a headless worm can regrow a memory, then where is the memory stored? And, if a memory can regenerate, could you transfer it?

Nautilus

  • Marco Altamirano

The study of memory has always been one of the stranger outposts of science. In the 1950s, an unknown psychology professor at the University of Michigan named James McConnell made headlines—and eventually became something of a celebrity—with a series of experiments on freshwater flatworms called planaria. These worms fascinated McConnell not only because they had, as he wrote, a “true synaptic type of nervous system” but also because they had “enormous powers of regeneration…under the best conditions one may cut [the worm] into as many as 50 pieces” with each section regenerating “into an intact, fully-functioning organism.” 

In an early experiment, McConnell trained the worms à la Pavlov by pairing an electric shock with flashing lights. Eventually, the worms recoiled to the light alone. Then something interesting happened when he cut the worms in half. The head of one half of the worm grew a tail and, understandably, retained the memory of its training. Surprisingly, however, the tail, which grew a head and a brain, also retained the memory of its training. If a headless worm can regrow a memory, then where is the memory stored, McConnell wondered. And, if a memory can regenerate, could he transfer it? 

Perhaps. Swedish neurobiologist Holger Hydén had suggested, in the 1960s, that memories were stored in neuron cells, specifically in RNA, the messenger molecule that takes instructions from DNA and links up with ribosomes to make proteins, the building blocks of life. McConnell, having become interested in Hydén’s work, scrambled to test for a speculative molecule that he called “memory RNA” by grafting portions of trained planaria onto the bodies of untrained planaria. His aim was to transfer RNA from one worm to another but, encountering difficulty getting the grafts to stick, he turned to a “more spectacular type of tissue transfer, that of ‘cannibalistic ingestion.’” Planaria, accommodatingly, are cannibals, so McConnell merely had to blend trained worms and feed them to their untrained peers. (Planaria lack the acids and enzymes that would completely break down food, so he hoped that some RNA might be integrated into the consuming worms.) 

Shockingly, McConnell reported that cannibalizing trained worms induced learning in untrained planaria. In other experiments, he trained planaria to run through mazes and even developed a technique for extracting RNA from trained worms in order to inject it into untrained worms in an effort to transmit memories from one animal to another. Eventually, after his retirement in 1988, McConnell faded from view, and his work was relegated to the sidebars of textbooks as a curious but cautionary tale. Many scientists simply assumed that invertebrates like planaria couldn’t be trained, making the dismissal of McConnell’s work easy. McConnell also published some of his studies in his own journal, The Worm Runner’s Digest, alongside sci-fi humor and cartoons. As a result, there wasn’t a lot of interest in attempting to replicate his findings.

Nonetheless, McConnell’s work has recently experienced a sort of renaissance, taken up by innovative scientists like Michael Levin, a biologist at Tufts University specializing in limb regeneration, who has reproduced modernized and automated versions of his planarian maze-training experiments. The planarian itself has enjoyed a newfound popularity, too, after Levin cut the tail off a worm and shot a bioelectric current through the incision, provoking the worm to regrow another head in place of its tail (garnering Levin the endearing moniker of “young Frankenstein”). Levin also sent 15 worm pieces into space, with one returning, strangely enough, with two heads (“remarkably,” Levin and his colleagues wrote, “amputating this double-headed worm again, in plain water, resulted again in the double-headed phenotype.”) 

David Glanzman, a neurobiologist at the University of California, Los Angeles, has another promising research program that recently struck a chord reminiscent of McConnell’s memory experiments—although, instead of planaria, Glanzman’s lab works mostly with aplysia, the darling mollusk of neuroscience on account of its relatively simple nervous system. (Also known as “sea hares,” aplysia are giant, inky sea slugs that swim with undulating, ruffled wings.)

In 2015, Glanzman was testing the textbook theory on memory, which holds that memories are stored in synapses, the connective junctions between neurons. His team, attempting to create and erase a memory in aplysia, periodically delivered mild electric shocks to train the mollusk to prolong a reflex, one where it withdraws, upon touch, its siphon, a little breathing tube between the gill and the tail. After training, his lab witnessed new synaptic growth between the sensory neuron that felt touch and the motor neuron that triggered the siphon withdrawal reflex. Developing after the training, the increased connectivity between those neurons seemed to corroborate the theory that memories are stored in synaptic connections. Glanzman’s team tried to erase the memory of the training by dismantling the synaptic connections between the neurons and, sure enough, the snails subsequently behaved as if they’d lost the memory, further corroborating the synaptic memory theory. After Glanzman’s team administered a “reminder” shock to the snails, the researchers were surprised to quickly notice different, newer synaptic connections growing between the neurons. The snails then behaved, once again, as if they remembered the sensitizing training they seemed to have previously forgotten. 

If the memory persisted through such major synaptic change, where the synaptic connections that emerged through training had disappeared and completely different, newer connections had taken their place, then maybe, Glanzman thought, memories are not really stored in synapses after all. The experiment seems like something out of Eternal Sunshine of the Spotless Mind, a movie in which ex-lovers trying to forget each other undergo a questionable procedure that deletes the memory of a person, but evidently not to the point beyond recall. The lovers both hide a plan deep within their minds to meet in Montauk in the end. The movie suggests, in a way, that memories are never completely lost, that it always remains possible to go back, even to people and places that seem long forgotten.

But if memories aren’t stored in synaptic connections, where are they stored instead? Glanzman’s unpopular hypothesis was that they might reside in the nucleus of the neuron cell, where DNA and RNA sequences compose instructions for life processes. DNA sequences are fixed and unchanging, so most of an organism’s adaptability comes from supple epigenetic mechanisms, processes that regulate gene expression in response to environmental cues or pressures, which sometimes involve RNA. If DNA is printed sheet music, RNA-induced epigenetic mechanisms are like improvisational cuts and arrangements that might conduct learning and memory.

Perhaps memories reside in epigenetic changes induced by RNA, that improv molecule that scores protein-based adaptations of life. Glanzman’s team went back to their aplysia and trained them over two days to prolong their siphon-withdrawal reflex. They then dissected their nervous systems, extracting RNA involved in forming the memory of their training, and injected it into untrained aplysia, which were tested for learning a day later. Glanzman’s team found that the RNA from trained donors induced learning, while the RNA from untrained donors had no effect. They had transferred a memory, vaguely but surely, from one animal to another, and they had strong evidence that RNA was the memory-transferring agent.

Glanzman now believes that synapses are necessary for the activation of a memory, but that the memory is encoded in the nucleus of the neuron through epigenetic changes. “It’s like a pianist without hands,” Glanzman says. “He may know how to play Chopin, but he’d need hands to exercise the memory.” 

The work of Douglas Blackiston, an Allen Discovery Center scientist at Tufts University, who has studied memory in insects, paints a similar picture. He wanted to know if a butterfly could remember something about its life as a caterpillar, so he exposed caterpillars to the scent of ethyl acetate followed by a mild electric shock. After acquiring an aversion to ethyl acetate, the caterpillars pupated and, after emerging as adult butterflies several weeks later, were tested for memory of their aversive training. Surprisingly, the adult butterflies remembered—but how? The entire caterpillar becomes a cytoplasmic soup before it metamorphoses into a butterfly. “The remodeling is catastrophic,” Blackiston says. “After all, we’re moving from a crawling machine to a flying machine. Not only the body but the entire brain has to be rewired.”

It’s hard to study exactly what goes on during pupation in vivo, but there’s a subset of caterpillar neurons that may persist in what are called “mushroom bodies,” a pair of structures involved in olfaction that many insects have located near their antennae. In other words, some structure remains. “It’s not soup,” Blackiston says. “Well, maybe it’s soup, but it’s chunky.” There’s near complete pruning of neurons during pupation, and the few neurons that remain become disconnected from other neurons, dissolving the synaptic connections between them in the process, until they reconnect with other neurons during the remodeling into the butterfly brain. Like Glanzman, Blackiston employs a hand analogy: “It’s like a small group of neurons were holding hands, but then let go and moved around, finally reconnecting with different neurons in the new brain.” If the memory was stored anywhere, Blackiston suspects it was stored in the subset of neurons located in the mushroom bodies, the only known carryover material from the caterpillar to the butterfly. 

In the end, despite its whimsical caricature of the science of memory, Eternal Sunshine may have stumbled on a correct premise. Glanzman and Blackiston believe their experiments harbor hopeful news for Alzheimer’s patients: it might be possible to repair deteriorated neurons that could, at least theoretically, find their way back to lost memories, perhaps with the guidance of appropriate RNA.

Marco Altamirano is a writer based in New Orleans and the author of Time, Technology, and Environment: An Essay on the Philosophy of Nature.

Blindsight: a strange neurological condition that could help explain consciousness

July 2, 2020 11.31am BST

Author

  1. Henry Taylor Birmingham Fellow in Philosophy, University of Birmingham

Disclosure statement

Henry Taylor previously received funding from The Leverhulme Trust and Isaac Newton Trust, but they do not stand to benefit from publication of this article.

Partners

University of Birmingham

University of Birmingham provides funding as a founding partner of The Conversation UK.

The Conversation UK receives funding from these organisations

View the full list

CC BY NDWe believe in the free flow of information
Republish our articles for free, online or in print, under Creative Commons licence.

Imagine being completely blind but still being able to see. Does that sound impossible? Well, it happens. A few years ago, a man (let’s call him Barry) suffered two strokes in quick succession. As a result, Barry was completely blind, and he walked with a stick.

One day, some psychologists placed Barry in a corridor full of obstacles like boxes and chairs. They took away his walking stick and told him to walk down the corridor. The result of this simple experiment would prove dramatic for our understanding of consciousness. Barry was able to navigate around the obstacles without tripping over a single one.

Barry has blindsight, an extremely rare condition that is as paradoxical as it sounds. People with blindsight consistently deny awareness of items in front of them, but they are capable of amazing feats, which demonstrate that, in some sense, they must be able to see them.

In another case, a man with blindsight (let’s call him Rick) was put in front of a screen and told to guess (from several options) what object was on the screen. Rick insisted that he didn’t know what was there and that he was just guessing, yet he was guessing with over 90% accuracy.
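The article doesn’t report how many trials Rick completed or how many options he chose between, but a quick binomial calculation with assumed numbers shows why accuracy like that cannot plausibly be luck. A Python sketch, assuming a hypothetical 100 trials with four options each (both numbers are my placeholders, not from the study):

    import math

    def prob_at_least(k, n, p):
        # Exact P(X >= k) for X ~ Binomial(n, p)
        return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    n_trials = 100   # hypothetical trial count; the article gives none
    p_chance = 0.25  # hypothetical: guessing blindly among four options
    hits = 90        # 90% accuracy

    print(f"P(at least {hits}/{n_trials} correct by chance) = {prob_at_least(hits, n_trials, p_chance):.1e}")

Under those assumptions, the probability of hitting 90% by pure chance is on the order of 10^-42, which is why researchers treat such performance as evidence of genuine, if unconscious, vision.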

Into the brain

Blindsight results from damage to an area of the brain called the primary visual cortex. This is, as you might have guessed, one of the areas responsible for vision. Damage to the primary visual cortex can result in blindness – sometimes total, sometimes partial.

So how does blindsight work? The eyes receive light and convert it into information that is then passed into the brain. This information then travels through a series of pathways through the brain to eventually end up at the primary visual cortex. For people with blindsight, this area is damaged and cannot properly process the information, so the information never makes it to conscious awareness. But the information is still processed by other areas of the visual system that are intact, enabling people with blindsight to carry out the kind of tasks that we see in the case of Barry and Rick.

Some blind people appear to be able to ‘see’. Akemaster/Shutterstock

Blindsight serves as a particularly striking example of a general phenomenon, which is just how much goes on in the brain below the surface of consciousness. This applies just as much to people without blindsight as people with it. Studies have shown that naked pictures of attractive people can draw our attention, even when we are completely unaware of them. Other studies have demonstrated that we can correctly judge the colour of an object without any conscious awareness of it.

Blindsight debunked?

Blindsight has generated a lot of controversy. Some philosophers and psychologists have argued that people with blindsight might be conscious of what is in front of them after all, albeit in a vague and hard-to-describe way.

This suggestion presents a difficulty, because ascertaining whether someone is conscious of a particular thing is a complicated and highly delicate task. There is no “test” for consciousness. You can’t put a probe or a monitor next to someone’s head to test whether they are conscious of something – it’s a totally private experience.

We can, of course, ask them. But interpreting what people say about their own experiences can be a thorny task. Their reports sometimes seem to indicate that they have no consciousness at all of the objects in front of them (Rick once insisted that he did not believe that there really were any objects there). Other individuals with blindsight report feeling “visual pin-pricks” or “dark shadows” indicating the tantalising possibility that they did have some conscious awareness left over.

The boundaries of consciousness

So, what does blindsight tell us about consciousness? Exactly how you answer this question will heavily depend on which interpretation you accept. Do you think that those who have blindsight are in some sense conscious of what is out there or not?

The visual cortex. Geyer S, Weiss M, Reimann K, Lohmann G and Turner R/wikipedia, CC BY-SA

If they’re not, then blindsight provides an exciting tool that we can use to work out exactly what consciousness is for. By looking at what the brain can do without consciousness, we can try to work out which tasks ultimately require consciousness. From that, we may be able to work out what the evolutionary function of consciousness is, which is something that we are still relatively in the dark about.

On the other hand, if we could prove that people with blindsight are conscious of what is in front of them, this raises no less interesting and exciting questions about the limits of consciousness. What is their consciousness actually like? How does it differ from more familiar kinds of consciousness? And precisely where in the brain does consciousness begin and end? If they are conscious, despite damage to their visual cortex, what does that tell us about the role of this brain area in generating consciousness?

In my research, I am interested in the way that blindsight reveals the fuzzy boundaries at the edges of vision and consciousness. In cases like blindsight, it becomes increasingly unclear whether our normal concepts such as “perception”, “consciousness” and “seeing” are up to the task of adequately describing and explaining what is really going on. My goal is to develop more nuanced views of perception and consciousness that can help us understand their distinctly fuzzy edges.

To ultimately understand these cases, we will need to employ careful philosophical reflection on the concepts we use and the assumptions we make, just as much as we will need a thorough scientific investigation of the mechanics of the mind.


Scientists say most likely number of contactable alien civilisations is 36

New calculations come up with an estimate for worlds capable of communicating with others.

The Guardian

  • Nicola Davis

We’re listening … but is anything out there? Photo by dszc / Getty Images.

They may not be little green men. They may not arrive in a vast spaceship. But according to new calculations there could be more than 30 intelligent civilisations in our galaxy today capable of communicating with others.

Experts say the work not only offers insights into the chances of life beyond Earth but could shed light on our own future and place in the cosmos.

“I think it is extremely important and exciting because for the first time we really have an estimate for this number of active intelligent, communicating civilisations that we potentially could contact and find out there is other life in the universe – something that has been a question for thousands of years and is still not answered,” said Christopher Conselice, a professor of astrophysics at the University of Nottingham and a co-author of the research.

In 1961 the astronomer Frank Drake proposed what became known as the Drake equation, setting out seven factors that would need to be known to come up with an estimate for the number of intelligent civilisations out there. These factors ranged from the average number of stars that form each year in the galaxy through to the timespan over which a civilisation would be expected to be sending out detectable signals.

But few of the factors are measurable. “Drake equation estimates have ranged from zero to a few billion [civilisations] – it is more like a tool for thinking about questions rather than something that has actually been solved,” said Conselice.

Now Conselice and colleagues report in the Astrophysical Journal how they refined the equation with new data and assumptions to come up with their estimates.

“Basically, we made the assumption that intelligent life would form on other [Earth-like] planets like it has on Earth, so within a few billion years life would automatically form as a natural part of evolution,” said Conselice.

The assumption, known as the Astrobiological Copernican Principle, is fair as everything from chemical reactions to star formation is known to occur if the conditions are right, he said. “[If intelligent life forms] in a scientific way, not just a random way or just a very unique way, then you would expect at least this many civilisations within our galaxy,” he said.

He added that, while it is a speculative theory, he believes alien life would have similarities in appearance to life on Earth. “We wouldn’t be super shocked by seeing them,” he said.

Under the strictest set of assumptions – where, as on Earth, life forms between 4.5bn and 5.5bn years after star formation – there are likely between four and 211 civilisations in the Milky Way today capable of communicating with others, with 36 the most likely figure. But Conselice noted that this figure is conservative, not least as it is based on how long our own civilisation has been sending out signals into space – a period of just 100 years so far.
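For readers who want to see the shape of the underlying calculation, here is a minimal Drake-style estimate in Python. Every factor value below is an illustrative placeholder of mine, not an input from Conselice and colleagues; the point of their paper is precisely to replace several of these guesses with astrophysical data.

    # Drake equation: N = R* x fp x ne x fl x fi x fc x L
    # All values below are illustrative assumptions, not the paper's inputs.
    r_star = 1.5      # average star formation rate in the Milky Way (stars per year)
    f_p = 1.0         # fraction of stars with planets
    n_e = 0.2         # habitable planets per planetary system
    f_l = 1.0         # fraction of habitable planets where life arises
    f_i = 1.0         # fraction of those that develop intelligence
    f_c = 0.1         # fraction that emit detectable signals
    lifetime = 100.0  # years a civilisation keeps signalling (we have managed ~100)

    n_communicating = r_star * f_p * n_e * f_l * f_i * f_c * lifetime
    print(f"Estimated communicating civilisations: {n_communicating:.0f}")

With these placeholders the product comes out at 3; nudge f_c or the signalling lifetime by a factor of ten and the answer moves by the same factor, which is exactly why historical estimates have ranged from zero to billions.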

The team add that our civilisation would need to survive at least another 6,120 years for two-way communication. “They would be quite far away … 17,000 light years is our calculation for the closest one,” said Conselice. “If we do find things closer … then that would be a good indication that the lifespan of [communicating] civilisations is much longer than a hundred or a few hundred years, that an intelligent civilisation can last for thousands or millions of years. The more we find nearby, the better it looks for the long-term survival of our own civilisation.”

Dr Oliver Shorttle, an expert in extrasolar planets at the University of Cambridge who was not involved in the research, said several as yet poorly understood factors needed to be unpicked to make such estimates, including how life on Earth began and how many Earth-like planets considered habitable could truly support life.

Dr Patricia Sanchez-Baracaldo, an expert on how Earth became habitable, from the University of Bristol, was more upbeat, despite emphasising that many developments were needed on Earth for conditions for complex life to exist, including photosynthesis. “But, yes if we evolved in this planet, it is possible that intelligent life evolved in another part of the universe,” she said.

Prof Andrew Coates, of the Mullard Space Science Laboratory at University College London, said the assumptions made by Conselice and colleagues were reasonable, but the quest to find life was likely to take place closer to home for now.

“[The new estimate] is an interesting result, but one which it will be impossible to test using current techniques,” he said. “In the meantime, research on whether we are alone in the universe will include visiting likely objects within our own solar system, for example with our Rosalind Franklin Exomars 2022 rover to Mars, and future missions to Europa, Enceladus and Titan [moons of Jupiter and Saturn]. It’s a fascinating time in the search for life elsewhere.”

Nicola Davis writes about science, health and environment for the Guardian and Observer and was commissioning editor for Observer Tech Monthly. Previously she worked for the Times and other publications. She has a MChem and DPhil in Organic Chemistry from the University of Oxford. Nicola also presents the Science Weekly podcast.



‘Miles-wide anomaly’ over Texas sparks concerns HAARP weather manipulation has BEGUN

BIZARRE footage has emerged that proves the US government is testing weather manipulation technology, according to wild claims online.

The clip, captured in Texas, US, shows the moment radar was completely blotted out by an unknown source.

Another video shows a green blob forming above Sugar Land, quickly growing in size in a circular formation.

According to Travis Herzog, a meteorologist at ABC News, the phenomenon was caused by a flock of birds filling the sky.

But conspiracy theorist Tyler Glockner, who runs YouTube channel secureteam10, disagrees.

He posted a video yesterday speculating it could be something more sinister which was accidentally exposed by the news channel.

He also pointed out the number of birds needed to cause such an event would have been seen or recorded by someone.

And his video has now racked up more than 350,000 hits in less than 48 hours.

“I am willing to bet there is a power station near the centre of that burst. Some kind of HAARP technology,” one viewer suggested.

Another added: “This is HAARP and some kind of weather modification/manipulation technology.”

And a third simply claimed: “Scary weather manipulation in progress.”

The High-Frequency Active Auroral Research Program (HAARP) was initiated as a research project between the US Air Force, Navy, University of Alaska Fairbanks and the Defense Advanced Research Projects Agency (DARPA).

Many conspiracists believe the US government is already using the HAARP programme to control weather occurrences through the use of chemtrailing.

Over the years, HAARP has been blamed for generating natural catastrophes such as thunderstorms and power loss as well as strange cloud formations.

But it was actually designed and built by BAE Advanced Technologies to help analyse the ionosphere and investigate the potential for developing enhanced technologies.

Climate change expert Janos Pasztor previously revealed to Daily Star Online how this technology could lead to weaponisation.

The following extract is from US research on weather modification dating back to 1957. Posted July 5th 2020

COMMENT

From: Andrea Psoras-QEDI [apsoras@qedinternational.com]
Sent: Monday, May 05, 2008 3:08 PM
To: secretary
Subject: CFTC Requests Public Input on Possible Regulation of “Event Contracts”

Commodity Futures Trading Commission
Three Lafayette Centre, 1155 21st Street, NW, Washington, DC 20581
202-418-5000; 202-418-5521 (fax); 202-418-5514 (TTY); questions@cftc.gov

Dear Commissioners and Secretary:

Not everything is a commodity, nor should something that is typically covered by some sort of property and casualty insurance suddenly become exchange tradable. Insurance companies have for a number of years provided compensation of some sort for random but periodic events. Where the insurance industry wants to off-load its risk at the expense of other commodities-market participants, that contributes to moral hazards – which I vigorously oppose. Where there is ‘interest’ in developing these sorts of risk-event instruments, it seems to me an admission that the insurance sector is perhaps marginal or, worse, incompetent or too greedy to determine how to offer insurance for events presumably produced by nature. Now where there are weather and earth-shaking technologies – or, as some circles call them, weather and electro-magnetic weapons – used insidiously, unfortunately, by our military, our intelligence apparatus, and perhaps our military contractors, for purposes contrary to the oath of office our public servants take to the Constitution,

I suggest prohibiting the use of that technology rather than leaving someone else holding the bag in the event of destruction produced by so-called ‘natural’ events that were in fact produced by military-contractor technology in the guise of ‘mother nature’. * Consider that Rep. Dennis Kucinich as well as former Senator John Glenn attempted to have our Congress prohibit the use of space-based weapons. That class of weapons includes the ‘weather weapons’. See http://www.globalresearch.ca/articles/CH0409F.html as well as other articles about this on the Global Research website. Respectfully, Andrea Psoras

“CFTC Requests Public Input on Possible Regulation of ‘Event Contracts’”

Washington, DC – The Commodity Futures Trading Commission (CFTC) is asking for public comment on the appropriate regulatory treatment of financial agreements offered by markets commonly referred to as event, prediction, or information markets.

During the past several years, the CFTC has received numerous requests for guidance involving the trading of event contracts. These contracts typically involve financial agreements that are linked to events or measurable outcomes and often serve as information collection vehicles. The contracts are based on a broad spectrum of events, such as the results of presidential elections, world population levels, or economic measures. “Event markets are rapidly evolving and growing, presenting a host of difficult policy and legal questions, including: what public purpose is served in the oversight of these markets, and what differentiates these markets from pure gambling outside the CFTC’s jurisdiction?” said CFTC Acting Chairman Walt Lukken.

“The CFTC is evaluating how these markets should be regulated with the proper protections in place and I encourage members of the public to provide their views.” In response to requests for guidance, and to promote regulatory certainty, the CFTC has commenced a comprehensive review of the Commodity Exchange Act’s applicability to event contracts and markets.

The CFTC is issuing a Concept Release to solicit the expertise and opinions of all interested parties, including CFTC registrants, legal practitioners, economists, state and federal regulatory authorities, academics, and event market participants. The Concept Release will be published in the Federal Register shortly; comments will be accepted for 60 days after publication in the Federal Register.” Comments may also be submitted electronically to secretary@cftc.gov. All comments received will be posted on the CFTC’s website.

* Weather as a Force Multiplier: Owning the Weather in 2025. A Research Paper Presented to Air Force 2025, August 1996. Below are highlights contained within the actual report. Please remember that this research report was issued in 1996 – 8 years ago – and that much of what was discussed as being in preliminary stages back then is now a reality.

In the United States, weather-modification will likely become a part of national security policy with both domestic and international applications. Our government will pursue such a policy, depending on its interests, at various levels. In this paper we show that appropriate application of weather-modification can provide battlespace dominance to a degree never before imagined. In the future, such operations will enhance air and space superiority and provide new options for battlespace shaping and battlespace awareness. “The technology is there, waiting for us to pull it all together” [General Gordon R. Sullivan, “Moving into the 21st Century: America’s Army and Modernization,” Military Review (July 1993), quoted in Mary Ann Seagraves and Richard Szymber, “Weather a Force Multiplier,” Military Review, November/December 1995, 75]. A global, precise, real-time, robust, systematic weather-modification capability would provide war-fighting CINCs [an acronym meaning “Commander in Chief” of a unified command] with a powerful force multiplier to achieve military objectives.

Since weather will be common to all possible futures, a weather-modification capability would be universally applicable and have utility across the entire spectrum of conflict. The capability of influencing the weather even on a small scale could change it from a force degrader to a force multiplier.

In 1957, the president’s advisory committee on weather control explicitly recognized the military potential of weather-modification, warning in their report that it could become a more important weapon than the atom bomb [William B. Meyer, “The Life and Times of US Weather: What Can We Do About It?” American Heritage 37, no. 4 (June/July 1986), 48]. Today [since 1969], weather-modification is the alteration of weather phenomena over a limited area for a limited period of time [Herbert S. Appleman, An Introduction to Weather-modification (Scott AFB, Ill.: Air Weather Service/MAC, September 1969), 1]. In the broadest sense, weather-modification can be divided into two major categories: suppression and intensification of weather patterns. In extreme cases, it might involve the creation of completely new weather patterns, attenuation or control of severe storms, or even alteration of global climate on a far-reaching and/or long-lasting scale.

Extreme and controversial examples of weather modification – creation of made-to-order weather, large-scale climate modification, creation and/or control (or “steering”) of severe storms, etc. – were researched as part of this study … the weather-modification applications proposed in this report range from technically proven to potentially feasible.

Applying Weather-modification to Military Operations

How will the military, in general, and the USAF, in particular, manage and employ a weather-modification capability?

We envision this will be done by the weather force support element (WFSE), whose primary mission would be to support the war-fighting CINCs with weather-modification options, in addition to current forecasting support. Although the WFSE could operate anywhere as long as it has access to the GWN and the system components already discussed, it will more than likely be a component within the AOC or its 2025-equivalent. With the CINC’s intent as guidance, the WFSE formulates weather-modification options using information provided by the GWN, local weather data network, and weather-modification forecast model.

The options include range of effect, probability of success, resources to be expended, the enemy’s vulnerability, and risks involved. The CINC chooses an effect based on these inputs, and the WFSE then implements the chosen course, selecting the right modification tools and employing them to achieve the desired effect. Sensors detect the change and feed data on the new weather pattern to the modeling system, which updates its forecast accordingly. The WFSE checks the effectiveness of its efforts by pulling down the updated current conditions and new forecast(s) from the GWN and local weather data network, and plans follow-on missions as needed. This concept is illustrated in figure 3-2.

Two key technologies are necessary to meld an integrated, comprehensive, responsive, precise, and effective weather-modification system. Advances in the science of chaos are critical to this endeavor. Also key to the feasibility of such a system is the ability to model the extremely complex nonlinear system of global weather in ways that can accurately predict the outcome of changes in the influencing variables. Researchers have already successfully controlled single variable nonlinear systems in the lab and hypothesize that current mathematical techniques and computer capacity could handle systems with up to five variables.

Advances in these two areas would make it feasible to affect regional weather patterns by making small, continuous nudges to one or more influencing factors. Conceivably, with enough lead time and the right conditions, you could get “made-to-order” weather [William Brown, “Mathematicians Learn How to Tame Chaos,” New Scientist (30 May 1992): 16]. The total weather-modification process would be a real-time loop of continuous, appropriate, measured interventions and feedback capable of producing desired weather behavior. The essential ingredient of the weather-modification system is the set of intervention techniques used to modify the weather.
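The “small nudges” idea trades on a real property of chaotic systems: tiny perturbations grow exponentially, so a minute, well-placed intervention can change the large-scale outcome. The Python toy below illustrates that sensitivity with the logistic map, a textbook stand-in for chaotic dynamics; nothing in it models actual weather, and the perturbation size is an arbitrary choice of mine.

    def logistic(x, r=3.9):
        # One step of the logistic map, chaotic at r = 3.9
        return r * x * (1 - x)

    # Two trajectories identical except for a one-part-in-a-trillion "nudge"
    a = 0.400000000000
    b = 0.400000000001

    for step in range(60):
        a, b = logistic(a), logistic(b)

    print(f"unperturbed trajectory: {a:.6f}")
    print(f"nudged trajectory:      {b:.6f}")  # after 60 steps the two bear no resemblance

Steering that sensitivity toward a chosen outcome, rather than merely triggering divergence, is the far harder control problem the chaos research cited in the report was beginning to address.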

The number of specific intervention methodologies is limited only by the imagination, but with few exceptions they involve infusing either energy or chemicals into the meteorological process in the right way, at the right place and time. The intervention could be designed to modify the weather in a number of ways, such as influencing clouds and precipitation, storm intensity, climate, space, or fog.

PRECIPITATION

“… significant beneficial influences can be derived through judicious exploitation of the solar absorption potential of carbon black dust” [William M. Gray et al., “Weather-modification by Carbon Dust Absorption of Solar Energy,” Journal of Applied Meteorology 15 (April 1976): 355]. The study ultimately found that this technology could be used to enhance rainfall on the mesoscale, generate cirrus clouds, and enhance cumulonimbus (thunderstorm) clouds in otherwise dry areas. … If we are fortunate enough to have a fairly large body of water available upwind from the targeted battlefield, carbon dust could be placed in the atmosphere over that water. Assuming the dynamics are supportive in the atmosphere, the rising saturated air will eventually form clouds and rainshowers downwind over the land. Numerous dispersal techniques [of carbon dust] have already been studied, but the most convenient, safe, and cost-effective method discussed is the use of afterburner-type jet engines to generate carbon particles while flying through the targeted air.

This method is based on injection of liquid hydrocarbon fuel into the afterburner’s combustion gases [this explains why contrails have now become chemtrails]. To date, much work has been done on UAVs [unmanned aerial vehicles], which can closely (if not completely) match the capabilities of piloted aircraft. If this UAV technology were combined with stealth and carbon dust technologies, the result could be a UAV aircraft invisible to radar while en route to the targeted area, which could spontaneously create carbon dust in any location. If clouds were seeded (using chemical nuclei similar to those used today or perhaps a more effective agent discovered through continued research) before their downwind arrival at a desired location, the result could be a suppression of precipitation. In other words, precipitation could be “forced” to fall before its arrival in the desired territory, thereby making the desired territory “dry.”

FOG

Field experiments with lasers have demonstrated the capability to dissipate warm fog at an airfield with zero visibility. Smart materials based on nanotechnology are currently being developed with gigaops computer capability at their core.

They could adjust their size to optimal dimensions for a given fog-seeding situation and even make adjustments throughout the process. They might also enhance their dispersal qualities by adjusting their buoyancy, by communicating with each other, and by steering themselves within the fog. They will be able to provide immediate and continuous effectiveness feedback by integrating with a larger sensor network and can also change their temperature and polarity to improve their seeding effects [J. Storrs Hall, “Overview of Nanotechnology,” adapted from papers by Ralph C. Merkle and K. Eric Drexler, Rutgers University, November 1995]. As mentioned above, UAVs could be used to deliver and distribute these smart materials. Recent army research lab experiments have demonstrated the feasibility of generating fog.

They used commercial equipment to generate thick fog in an area 100 meters long. Further study has shown fogs to be effective at blocking much of the UV/IR/visible spectrum, effectively masking emitters of such radiation from IR weapons [Robert A. Sutherland, “Results of Man-Made Fog Experiment,” Proceedings of the 1991 Battlefield Atmospherics Conference (Fort Bliss, Tex.: Hinman Hall, 3-6 December 1991)].

STORMS

The damage caused by storms is indeed horrendous. For instance, a tropical storm has an energy equal to 10,000 one-megaton hydrogen bombs [Louis J. Battan, Harvesting the Clouds (Garden City, N.Y.: Doubleday & Co., 1960), 120]. At any instant there are approximately 2,000 thunderstorms taking place. In fact 45,000 thunderstorms, which contain heavy rain, hail, microbursts, wind shear, and lightning, form daily [Gene S. Stuart, “Whirlwinds and Thunderbolts,” Nature on the Rampage (Washington, D.C.: National Geographic Society, 1986), 130]. Weather-modification technologies might involve techniques that would increase latent heat release in the atmosphere, provide additional water vapor for cloud cell development, and provide additional surface and lower atmospheric heating to increase atmospheric instability.

The focus of the weather-modification effort would be to provide additional “conditions” that would make the atmosphere unstable enough to generate cloud and eventually storm cell development. One area of storm research that would significantly benefit military operations is lightning modification … but some offensive military benefit could be obtained by doing research on increasing the potential and intensity of lightning. Possible mechanisms to investigate would be ways to modify the electropotential characteristics over certain targets to induce lightning strikes on the desired targets as the storm passes over their location. In summary, the ability to modify battlespace weather through storm cell triggering or enhancement would allow us to exploit the technological “weather” advances.

SPACE WEATHER-MODIFICATION

This section discusses opportunities for control and modification of the ionosphere and near-space environment for force enhancement. A number of methods have been explored or proposed to modify the ionosphere, including injection of chemical vapors and heating or charging via electromagnetic radiation or particle beams (such as ions, neutral particles, x-rays, MeV particles, and energetic electrons) [Peter M. Banks, “Overview of Ionospheric Modification from Space Platforms,” in Ionospheric Modification and Its Potential to Enhance or Degrade the Performance of Military Systems (AGARD Conference Proceedings 485, October 1990), 19-1].

It is important to note that many techniques to modify the upper atmosphere have been successfully demonstrated experimentally. Ground-based modification techniques employed by the FSU include vertical HF heating, oblique HF heating, microwave heating, and magnetospheric modification [Capt Mike Johnson, Upper Atmospheric Research and Modification – Former Soviet Union (U), DST-18205-475-92 (Foreign Aerospace Science and Technology Center, AF Intelligence Command, 24 September 1992)]. Creation of an artificial uniform ionosphere was first proposed by Soviet researcher A. V. Gurevich in the mid-1970s. An artificial ionospheric mirror (AIM) would serve as a precise mirror for electromagnetic [EM] radiation of a selected frequency or a range of frequencies.

[Figure: Artificial Ionospheric Mirrors – a ground-based AIM generator reflecting EM radiation between stations.]

While most weather-modification efforts rely on the existence of certain preexisting conditions, it may be possible to produce some weather effects artificially, regardless of preexisting conditions. For instance, virtual weather could be created by influencing the weather information received by an end user.

Nanotechnology also offers possibilities for creating simulated weather. A cloud, or several clouds, of microscopic computer particles, all communicating with each other and with a larger control system could provide tremendous capability. Interconnected, atmospherically buoyant, and having navigation capability in three dimensions, such clouds could be designed to have a wide range of properties … Even if power levels achieved were insufficient to be an effective strike weapon [if power levels WERE sufficient, they would be an effective strike weapon], the potential for psychological operations in many situations could be fantastic. One major advantage of using simulated weather to achieve a desired effect is that unlike other approaches, it makes what are otherwise the results of deliberate actions appear to be the consequences of natural weather phenomena. In addition, it is potentially relatively inexpensive to do. According to J. Storrs Hall, a …

The Electronic Frontier Foundation, an advocate for freedom of information on the Internet, has condemned Santorum’s bill. “It is a terrible precedent for information policy,” said staff member Ren Bucholz. “If the rule is, data provided by taxpayer money can’t be provided to the public but through a private entity, we won’t have a very useful public agency.”

  • 07-13-19

Apollo 11 really landed on the Moon—and here’s how you can be sure (sorry, conspiracy nuts)

We went to the Moon. Here’s all the proof you’ll ever need.

By Charles Fishman

This is the 43rd in an exclusive series of 50 articles, one published each day until July 20, exploring the 50th anniversary of the first-ever Moon landing. You can check out 50 Days to the Moon here every day.

The United States sent astronauts to the Moon, they landed, they walked around, they drove around, they deployed lots of instruments, they packed up nearly half a ton of Moon rocks, and they flew home.

No silly conspiracy was involved.

There were no Hollywood movie sets.

Anybody who writes about Apollo and talks about Apollo is going to be asked how we actually know that we went to the Moon.

Not that the smart person asking the question has any doubts, mind you, but how do we know we went, anyway?

It’s a little like asking how we know there was a Revolutionary War. Where’s the evidence? Maybe it’s just made up by the current government to force us to think about America in a particular way.

How do we know there was a Titanic that sank?

And by the way, when I go to the battlefields at Gettysburg—or at Normandy, for that matter—they don’t look much like battlefields to me. Can you prove we fought a Civil War? World War II?

In the case of Apollo, in the case of the race to the Moon, there is a perfect reply.

The race to the Moon in the 1960s was, in fact, an actual race.

The success of the Soviet space program—from Sputnik to Strelka and Belka to Yuri Gagarin—was the reason for Apollo. John Kennedy launched America to the Moon precisely to beat the Russians to the Moon.

When Kennedy was frustrated with the fact that the Soviets were first to achieve every important milestone in space, he asked Vice President Lyndon Johnson to figure it out—fast. The opening question of JFK’s memo to LBJ:

“Do we have a chance of beating the Soviets by putting a laboratory in space, or by a trip around the Moon, or by a rocket to land on the Moon, or by a rocket to go to the Moon and back with a man. Is there any other space program which promises dramatic results in which we could win?”

Win. Kennedy wanted to know how to beat the Soviets—how to win in space.

That memo was written a month before Kennedy’s dramatic “go to the Moon” speech. The race to the Moon he launched would last right up to the moment, almost 100 months later, when Apollo 11 would land on the Moon.

The race would shape the American and Soviet space programs in subtle and also dramatic ways.

Apollo 8 was the first U.S. mission that went to the Moon: The Apollo capsule and the service module, with Frank Borman, Bill Anders, and Jim Lovell, flew to the Moon at Christmastime in 1968, but without a lunar module. The lunar modules were running behind, and there wasn’t one ready for the flight.

Apollo 8 represented a furious rejuggling of the NASA flight schedule to accommodate the lack of a lunar module. The idea was simple: Let’s get Americans to the Moon quick, even if they weren’t ready to land on the Moon. Let’s “lasso the Moon” before the Soviets do.

At the moment when the mission was conceived and the schedule redone to accommodate a different kind of Apollo 8, in late summer 1968, NASA officials were worried that the Russians might somehow mount exactly the same kind of mission: Put cosmonauts in a capsule and send them to orbit the Moon, without landing. Then the Soviets would have made it to the Moon first.

Apollo 8 was designed to confound that, and it did.

In early December 1968, in fact, the rivalry remained alive enough that Time magazine did a cover story on it. “Race for the Moon” was the headline, and the cover was an illustration of an American astronaut and a Soviet cosmonaut, in spacesuits, leaping for the surface of the Moon.

Seven months later, when Apollo 11, with Michael Collins, Neil Armstrong, and Buzz Aldrin aboard, entered orbit around the Moon on July 19, 1969, there was a Soviet spaceship there to meet them. It was Luna 15, and it had been launched a few days before Apollo 11. Its goal: Land on the Moon, scoop up Moon rocks and dirt, and then dash back to a landing in the Soviet Union before Collins, Aldrin, and Armstrong could return with their own Moon rocks.

If that had happened, the Soviets would at least have been able to claim that they had gotten Moon rocks back to Earth first (and hadn’t needed people to do it).

So put aside for a moment the pure ridiculousness of a Moon landing conspiracy that somehow doesn’t leak out. More than 410,000 Americans worked on Apollo, on behalf of 20,000 companies. Was their work fake? Were they all in on the conspiracy? And then, also, all their family members—more than 1 million people—not one of whom ever whispered a word of the conspiracy?

What of the reporters? Hundreds of reporters covering space, writing stories not just of the dramatic moments, but about all the local companies making space technology, from California to Delaware.

Put aside as well the thousands of hours of audio recordings—between spacecraft and mission control; in mission control, where dozens of controllers talked to each other; in the spacecraft themselves, where there were separate recordings of the astronauts just talking to each other in space. There were 2,502 hours of Apollo spaceflight, more than 100 days. It’s an astonishing undertaking not only to script all that conversation, but then to get people to enact it with authenticity, urgency, and emotion. You can now listen to all of it online, and it would take you many years to do so.

For those who believe the missions were fake, all that can, somehow, be waved off. A puzzling shadow in a picture from the Moon, a quirk in a single moment of audio recording, reveals that the whole thing was a vast fabrication. (With grace and straight-faced reporting, the Associated Press this week reviewed, and rebutted, the most popular sources of the conspiracy theories.)

Forget all that.

If the United States had been faking the Moon landings, one group would not have been in on the conspiracy: The Soviets.

The Soviet Union would have revealed any fraud in the blink of an eye, and not just without hesitation, but with joy and satisfaction.

In fact, the Russians did just the opposite. The Soviet Union was one of the few places on Earth (along with China and North Korea) where ordinary people couldn’t watch the landing of Apollo 11 and the Moon walk in real time. It was real enough for the Russians that they didn’t let their own people see it.

That’s all the proof you need. If the Moon landings had been faked—indeed, if any part of them had been made up, or even exaggerated—the Soviets would have told the world. They were watching. Right to the end, they had their own ambitions to be first to the Moon, in the only way they could muster at that point.

And that’s a kind of proof that the conspiracy-meisters cannot wriggle around.

But another thing is true about the Moon landings: You’ll never convince someone who wants to think they were faked that they weren’t. There is nothing in particular you could ever say, no particular moment or piece of evidence you could produce, that would cause someone like that to light up and say, “Oh! You’re right! We did go to the Moon.”

Anyone who wants to live in a world where we didn’t go to the Moon should be happy there. That’s a pinched and bizarre place, one that defies not just the laws of physics but also the laws of ordinary human relationships.

I prefer to live in the real world, the one in which we did go to the Moon, because the work that was necessary to get American astronauts to the Moon and back was extraordinary. It was done by ordinary people, right here on Earth, people who were called to do something they weren’t sure they could, and who then did it, who rose to the occasion in pursuit of a remarkable goal.

That’s not just the real world, of course. It’s the best of America.

We went to the Moon, and on the 50th anniversary of that first landing, it’s worth banishing forever the nutty idea that we didn’t, and also appreciating what the achievement itself required, and what it says about the people who were able to do it.


A Mysterious Anomaly Under Africa Is Radically Weakening Earth’s Magnetic Field Posted June 29th 2020

PETER DOCKRILL 6 MARCH 2018

Around Earth, an invisible magnetic field traps electrons and other charged particles.
(Image: © NASA’s Goddard Space Flight Center)

Above our heads, something is not right. Earth’s magnetic field is in a state of dramatic weakening – and according to mind-boggling new research, this phenomenal disruption is part of a pattern lasting for over 1,000 years.

The Earth’s magnetic field is weakening between Africa and South America, causing issues for satellites and spacecraft.

Scientists studying the phenomenon observed that an area known as the South Atlantic Anomaly has grown considerably in recent years, though the reason for it is not entirely clear.

Using data gathered by the European Space Agency’s (ESA) Swarm constellation of satellites, researchers noted that the field in the area of the anomaly dropped in strength by more than 8 per cent between 1970 and 2020.

“The new, eastern minimum of the South Atlantic Anomaly has appeared over the last decade and in recent years is developing vigorously,” said Jürgen Matzka, from the German Research Centre for Geosciences.

“We are very lucky to have the Swarm satellites in orbit to investigate the development of the South Atlantic Anomaly. The challenge now is to understand the processes in Earth’s core driving these changes.”

Earth’s magnetic field doesn’t just give us our north and south poles; it’s also what protects us from solar winds and cosmic radiation – but this invisible force field is rapidly weakening, to the point scientists think it could actually flip, with our magnetic poles reversing.

As crazy as that sounds, this actually does happen over vast stretches of time. The last time it occurred was about 780,000 years ago, although it got close again around 40,000 years back.

When it takes place, it’s not quick, with the polarity reversal slowly occurring over thousands of years.

Nobody knows for sure if another such flip is imminent, and one of the reasons for that is a lack of hard data.

The region that concerns scientists the most at the moment is called the South Atlantic Anomaly – a huge expanse of the field stretching from Chile to Zimbabwe. The field is so weak within the anomaly that it’s hazardous for Earth’s satellites to enter it, because the additional radiation it’s letting through could disrupt their electronics.

“We’ve known for quite some time that the magnetic field has been changing, but we didn’t really know if this was unusual for this region on a longer timescale, or whether it was normal,” says physicist Vincent Hare from the University of Rochester in New York.

One of the reasons scientists don’t know much about the magnetic history of this region of Earth is that it lacks what’s called archeomagnetic data – physical evidence of magnetism in Earth’s past, preserved in archaeological relics from bygone ages.

One such bygone age belonged to a group of ancient Africans, who lived in the Limpopo River Valley – which borders Zimbabwe, South Africa, and Botswana: regions that fall within the South Atlantic Anomaly of today.

Approximately 1,000 years ago, these Bantu peoples observed an elaborate, superstitious ritual in times of environmental hardship.

During times of drought, they would burn down their clay huts and grain bins, in a sacred cleansing rite to make the rains come again – never knowing they were performing a kind of preparatory scientific fieldwork for researchers centuries later.

“When you burn clay at very high temperatures, you actually stabilise the magnetic minerals, and when they cool from these very high temperatures, they lock in a record of the earth’s magnetic field,” one of the team, geophysicist John Tarduno explains.

As such, an analysis of the ancient artefacts that survived these burnings reveals much more than just the cultural practices of the ancestors of today’s southern Africans.

“We were looking for recurrent behaviour of anomalies because we think that’s what is happening today and causing the South Atlantic Anomaly,” Tarduno says.

“We found evidence that these anomalies have happened in the past, and this helps us contextualise the current changes in the magnetic field.”

Like a “compass frozen in time immediately after [the] burning”, the artefacts revealed that the weakening in the South Atlantic Anomaly isn’t a standalone phenomenon of history.

Similar fluctuations occurred in the years 400-450 CE, 700-750 CE, and 1225-1550 CE – and the fact that there’s a pattern tells us that the position of the South Atlantic Anomaly isn’t a geographic fluke.

“We’re getting stronger evidence that there’s something unusual about the core-mantle boundary under Africa that could be having an important impact on the global magnetic field,” Tarduno says.

The current weakening in Earth’s magnetic field – which has been taking place for the last 160 years or so – is thought to be caused by a vast reservoir of dense rock called the African Large Low Shear Velocity Province, which sits about 2,900 kilometres (1,800 miles) below the African continent.

“It is a profound feature that must be tens of millions of years old,” the researchers explained in The Conversation last year.

“While thousands of kilometres across, its boundaries are sharp.”

This dense region, existing in between the hot liquid iron of Earth’s outer core and the stiffer, cooler mantle, is suggested to somehow be disturbing the iron that helps generate Earth’s magnetic field.

There’s a lot more research to do before we know more about what’s going on here.

As the researchers explain, the conventional idea of pole reversals is that they can start anywhere in the core – but the latest findings suggest what happens in the magnetic field above us is tied to phenomena at special places in the core-mantle boundary.

If they’re right, a big piece of the field-weakening puzzle just fell into our lap – thanks to a clay-burning ritual a millennium ago. What this all means for the future, though, no one is certain.

“We now know this unusual behaviour has occurred at least a couple of times before the past 160 years, and is part of a bigger long-term pattern,” Hare says.

“However, it’s simply too early to say for certain whether this behaviour will lead to a full pole reversal.”

The findings are reported in Geophysical Research Letters.

Extending from Earth like invisible spaghetti is the planet’s magnetic field. Created by the churn of Earth’s core, this field is important for everyday life: It shields the planet from solar particles, it provides a basis for navigation and it might have played an important role in the evolution of life on Earth. 

But what would happen if Earth’s magnetic field disappeared tomorrow? A larger number of charged solar particles would bombard the planet, putting power grids and satellites on the fritz and increasing human exposure to higher levels of cancer-causing ultraviolet radiation. In other words, a missing magnetic field would have consequences that would be problematic but not necessarily apocalyptic, at least in the short term.

And that’s good news, because for more than a century, it’s been weakening. Even now, there are especially flimsy spots, like the South Atlantic Anomaly in the Southern Hemisphere, which create technical problems for low-orbiting satellites. 


One possibility, according to the ESA, is that the weakening field is a sign that the Earth’s magnetic field is about to reverse, whereby the North Pole and South Pole switch places.

The last time a “geomagnetic reversal” took place was 780,000 years ago, with some scientists claiming that the next one is long overdue. Typically, such events take place every 250,000 years.

The repercussions of such an event could be significant, as the Earth’s magnetic field plays an important role in protecting the planet from solar winds and harmful cosmic radiation.

Telecommunication and satellite systems also rely on it to operate, suggesting that computers and mobile phones could experience difficulties.

The South Atlantic Anomaly has been captured by the Swarm satellite constellation (Division of Geomagnetism, DTU Space)

The South Atlantic Anomaly is already causing issues with satellites orbiting Earth, the ESA warned, while spacecraft flying in the area could also experience “technical malfunctions”.

A 2018 study published in the scientific journal Proceedings of the National Academy of Sciences found that despite the weakening field, “Earth’s magnetic field is probably not reversing”.

The study also explained that the process is not an instantaneous one and could take tens of thousands of years to take place.

ESA said it would continue to monitor the weakening magnetic field with its constellation of Swarm satellites.

“The mystery of the origin of the South Atlantic Anomaly has yet to be solved,” the space agency stated. “However, one thing is certain: magnetic field observations from Swarm are providing exciting new insights into the scarcely understood processes of Earth’s interior.”

Alien life is out there, but our theories are probably steering us away from it May 22nd 2020

If we discovered evidence of alien life, would we even realise it? Life on other planets could be so different from what we’re used to that we might not recognise any biological signatures that it produces.

Recent years have seen changes to our theories about what counts as a biosignature and which planets might be habitable, and further turnarounds are inevitable. But the best we can really do is interpret the data we have with our current best theory, not with some future idea we haven’t had yet.

This is a big issue for those involved in the search for extraterrestrial life. As Scott Gaudi of Nasa’s Advisory Council has said: “One thing I am quite sure of, now having spent more than 20 years in this field of exoplanets … expect the unexpected.”

But is it really possible to “expect the unexpected”? Plenty of breakthroughs happen by accident, from the discovery of penicillin to the discovery of the cosmic microwave background radiation left over from the Big Bang. These often reflect a degree of luck on behalf of the researchers involved. When it comes to alien life, is it enough for scientists to assume “we’ll know it when we see it”?

Many results seem to tell us that expecting the unexpected is extraordinarily difficult. “We often miss what we don’t expect to see,” according to cognitive psychologist Daniel Simons, famous for his work on inattentional blindness. His experiments have shown how people can miss a gorilla banging its chest in front of their eyes. Similar experiments also show how blind we are to non-standard playing cards such as a black four of hearts. In the former case, we miss the gorilla if our attention is sufficiently occupied. In the latter, we miss the anomaly because we have strong prior expectations.

There are also plenty of relevant examples in the history of science. Philosophers describe this sort of phenomenon as “theory-ladenness of observation”. What we notice depends, quite heavily sometimes, on our theories, concepts, background beliefs and prior expectations. Even more commonly, what we take to be significant can be biased in this way.

For example, when scientists first found evidence of low amounts of ozone in the atmosphere above Antarctica, they initially dismissed it as bad data. With no prior theoretical reason to expect a hole, the scientists ruled it out in advance. Thankfully, they were minded to double check, and the discovery was made.

More than 200,000 stars captured in one small section of the sky by Nasa’s TESS mission. Nasa

Could a similar thing happen in the search for extraterrestrial life? Scientists studying planets in other solar systems (exoplanets) are overwhelmed by the abundance of possible observation targets competing for their attention. In the last 10 years scientists have identified more than 3,650 planets – more than one a day. And with missions such as NASA’s TESS exoplanet hunter this trend will continue.

Each and every new exoplanet is rich in physical and chemical complexity. It is all too easy to imagine a case where scientists do not double check a target that is flagged as “lacking significance”, but whose great significance would be recognised on closer analysis or with a non-standard theoretical approach.

The Müller-Lyer optical illusion. Fibonacci/Wikipedia, CC BY-SA

However, we shouldn’t exaggerate the theory-ladenness of observation. In the Müller-Lyer illusion, a line ending in arrowheads pointing outwards appears shorter than an equally long line with arrowheads pointing inwards. Yet even when we know for sure that the two lines are the same length, our perception is unaffected and the illusion remains. Similarly, a sharp-eyed scientist might notice something in her data that her theory tells her she should not be seeing. And if just one scientist sees something important, pretty soon every scientist in the field will know about it.

History also shows that scientists are able to notice surprising phenomena, even biased scientists who have a pet theory that doesn’t fit the phenomena. The 19th-century physicist David Brewster incorrectly believed that light is made up of particles travelling in a straight line. But this didn’t affect his observations of numerous phenomena related to light, such as what’s known as birefringence in bodies under stress. Sometimes observation is definitely not theory-laden, at least not in a way that seriously affects scientific discovery.

We need to be open-minded

Certainly, scientists can’t proceed by just observing. Scientific observation needs to be directed somehow. But at the same time, if we are to “expect the unexpected”, we can’t allow theory to heavily influence what we observe, and what counts as significant. We need to remain open-minded, encouraging exploration of the phenomena in the style of Brewster and similar scholars of the past.

Studying the universe largely unshackled from theory is not only a legitimate scientific endeavour – it’s a crucial one. The tendency to describe exploratory science disparagingly as “fishing expeditions” is likely to harm scientific progress. Under-explored areas need exploring, and we can’t know in advance what we will find.

In the search for extraterrestrial life, scientists must be thoroughly open-minded. And this means a certain amount of encouragement for non-mainstream ideas and techniques. Examples from past science (including very recent ones) show that non-mainstream ideas can sometimes be strongly held back. Space agencies such as NASA must learn from such cases if they truly believe that, in the search for alien life, we should “expect the unexpected”.

Could invisible aliens really exist among us? An astrobiologist explains . May 22nd 2020

Life is pretty easy to recognise. It moves, it grows, it eats, it excretes, it reproduces. Simple. In biology, researchers often use the acronym “MRSGREN” to describe it. It stands for movement, respiration, sensitivity, growth, reproduction, excretion and nutrition.

But Helen Sharman, Britain’s first astronaut and a chemist at Imperial College London, recently said that alien lifeforms that are impossible to spot may be living among us. How could that be possible?

While life may be easy to recognise, it’s actually notoriously difficult to define and has had scientists and philosophers in debate for centuries – if not millennia. For example, a 3D printer can reproduce itself, but we wouldn’t call it alive. On the other hand, a mule is famously sterile, but we would never say it doesn’t live.

As nobody can agree, there are more than 100 definitions of what life is. An alternative (but imperfect) approach is describing life as “a self-sustaining chemical system capable of Darwinian evolution”, which works for many cases we want to describe.

The lack of definition is a huge problem when it comes to searching for life in space. Not being able to define life other than “we’ll know it when we see it” means we are truly limiting ourselves to geocentric, possibly even anthropocentric, ideas of what life looks like. When we think about aliens, we often picture a humanoid creature. But the intelligent life we are searching for doesn’t have to be humanoid.

Life, but not as we know it

Sharman says she believes aliens exist and “there’s no two ways about it”. Furthermore, she wonders: “Will they be like you and me, made up of carbon and nitrogen? Maybe not. It’s possible they’re here right now and we simply can’t see them.”

Such life would exist in a “shadow biosphere”. By that, I don’t mean a ghost realm, but undiscovered creatures probably with a different biochemistry. This means we can’t study or even notice them because they are outside of our comprehension. Assuming it exists, such a shadow biosphere would probably be microscopic.

So why haven’t we found it? We have limited ways of studying the microscopic world as only a small percentage of microbes can be cultured in a lab. This may mean that there could indeed be many lifeforms we haven’t yet spotted. We do now have the ability to sequence the DNA of unculturable strains of microbes, but this can only detect life as we know it – life that contains DNA.

If we find such a biosphere, however, it is unclear whether we should call it alien. That depends on whether we mean “of extraterrestrial origin” or simply “unfamiliar”.

Silicon-based life

A popular suggestion for an alternative biochemistry is one based on silicon rather than carbon. It makes sense, even from a geocentric point of view. Around 90% of the Earth is made up of silicon, iron, magnesium and oxygen, which means there’s lots to go around for building potential life.

Artist’s impression of a silicon-based life form. Zita

Silicon is similar to carbon in that it has four electrons available for creating bonds with other atoms. But silicon is heavier, with 14 protons (protons make up the atomic nucleus with neutrons) compared to the six in the carbon nucleus. While carbon can create strong double and triple bonds to form long chains useful for many functions, such as building cell walls, it is much harder for silicon. It struggles to create strong bonds, so long-chain molecules are much less stable.

What’s more, common silicon compounds, such as silicon dioxide (or silica), are generally solid at terrestrial temperatures and insoluble in water. Compare this to highly soluble carbon dioxide, for example, and we see that carbon is more flexible and provides many more molecular possibilities.

Life on Earth is fundamentally different from the bulk composition of the Earth. Another argument against a silicon-based shadow biosphere is that too much silicon is locked up in rocks. In fact, the chemical composition of life on Earth has an approximate correlation with the chemical composition of the sun, with 98% of atoms in biology consisting of hydrogen, oxygen and carbon. So if there were viable silicon lifeforms here, they may have evolved elsewhere.

That said, there are arguments in favour of silicon-based life on Earth. Nature is adaptable. A few years ago, scientists at Caltech managed to breed a bacterial protein that created bonds with silicon – essentially bringing silicon to life. So even though silicon is inflexible compared with carbon, it could perhaps find ways to assemble into living organisms, potentially including carbon.

And when it comes to other places in space, such as Saturn’s moon Titan or planets orbiting other stars, we certainly can’t rule out the possibility of silicon-based life.

To find it, we have to somehow think outside of the terrestrial biology box and figure out ways of recognising lifeforms that are fundamentally different from the carbon-based form. There are plenty of experiments testing out these alternative biochemistries, such as the one from Caltech.

Regardless of the belief held by many that life exists elsewhere in the universe, we have no evidence for that. So it is important to consider all life as precious, no matter its size, quantity or location. The Earth supports the only known life in the universe. So no matter what form life elsewhere in the solar system or universe may take, we have to make sure we protect it from harmful contamination – whether it is terrestrial life or alien lifeforms.




So could aliens be among us? I don’t believe that we have been visited by a life form with the technology to travel across the vast distances of space. But we do have evidence for life-forming, carbon-based molecules having arrived on Earth on meteorites, so the evidence certainly doesn’t rule out the same possibility for more unfamiliar life forms.

Project HAARP: Is The US Controlling The Weather? – YouTube

www.youtube.com/watch?v=InoHOvYXJ0Q

23/07/2013 · Project HAARP: US Weather Control? A secretive government radio energy experiment in Alaska, with the potential to control the weather or a simple scientific experiment?

The Science of Corona Spread according to Neil Ferguson et al of Imperial College London Posted May 14th 2020

Note this report is about spread, and guesswork as to the nature, structure and mutation of the Corona virus and its effects. It is about a maths model of predicted spread, and rate of spread, with R representing the reinfection rate: R at 1 means each person with Corona can be expected, or predicted, to infect one other person, who will go on to infect one other, and so on.
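For illustration, the arithmetic of R can be sketched in a few lines of code (my example, not Ferguson’s; the case numbers are invented):

```python
# Minimal sketch of how the reproduction number R drives epidemic growth.
# Illustrative only: a fixed R per generation, no depletion of susceptibles.

def cases_by_generation(r: float, initial_cases: int, generations: int) -> list[int]:
    """Expected new cases in each generation, starting from initial_cases."""
    counts = [initial_cases]
    for _ in range(generations):
        counts.append(round(counts[-1] * r))
    return counts

# R = 1: each case replaces itself, so incidence stays flat.
print(cases_by_generation(1.0, 100, 5))   # [100, 100, 100, 100, 100, 100]
# R = 2: cases double every generation.
print(cases_by_generation(2.0, 100, 5))   # [100, 200, 400, 800, 1600, 3200]
# R below 1: the epidemic shrinks.
print(cases_by_generation(0.8, 100, 5))   # [100, 80, 64, 51, 41, 33]
```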

What Ferguson does know for certain, as a basis for his modelling, is that the virtually privatised, asset-stripped, debt-loaded, poorly equipped, run-down and management-top-heavy NHS will fail massively, especially in densely populated urban areas of high ethnic diversity, religious bigotry, poverty and squalor.

He also knows that privatised, very expensive, profit-based care homes will fail hideously, so those already close to natural death, especially if they have previous health conditions, will die sooner with Corona, which, given the squalor of the homes, they are all but certain to catch.

So operation smokescreen needs the Ferguson maths to justify putting key at-risk voters’ peace of mind above the wider national interest – to hell with the young, scare them to death, blind them with science like the following report, which they won’t understand, and upon which there will be further analysis and comment here soon.

On the wider scene, Britain has been a massively malign influence on Europe, the U.S. and beyond, so Ferguson must factor in no limit to borders, air traffic or illegal immigrants. Though he clearly did not believe his own advice: he broke it at least twice for sexual contact with a married mother.

The maths of his assessment of his affair with a married woman here was simple: M + F = S, where M represents male, F represents female and S represents sex. But we do not need algebra to explain the obvious any more than we need what is below, from Ferguson’s 14-page report.

We might also consider that M + F, because of other human factors/variables, could equal D, where D represents divorce, or MB, where MB represents Male Bankruptcy, or a number of other possibilities.

But for Ferguson, operation smokescreen, blinding people with science, has only one possibility: LOCKDOWN, because that is what the government wanted, the media wanted, and now a lot of workers want, especially teachers who do not want to go back to work. Britain is ridiculing and patronising European countries for doing the sensible thing and easing out of lockdown. People with brains should fear the British elite more than Europe’s.

Public sector workers are paid to stay at home. Furloughed private sector workers are going to be bankrolled by the taxpayer – the Chancellor said so. Lockdown is costing £14 billion a day. Imagine if all that money had been invested in an NHS fit to cope with the mass of legal and illegal third-world immigrants and an ageing population. But moron politicians are always economical with the truth, out to feed their own egos and winging it.

As an ex maths teacher, I could convert all of this into algebra and probable outcomes. British people are more likely to believe what they can’t understand, which is why so many still believe in God. So if God made everything, then God made ‘the science’, so it must be true.

It is not necessary to tell us that if someone catches a cold it is an airborne virus which will spread to anyone in its path, the poorly and old being vulnerable to a cold turning fatal. That is the reality of Corona.

Ferguson made his report on the basis of probability, recommending limits on the masses regardless of the long-term damage caused, because he got paid, and it would make him look good and enhance his and pompous Imperial College’s reputation.

Robert Cook

10 February 2020 Imperial College London COVID-19 Response Team

DOI: https://doi.org/10.25561/77154 Page 1 of 14

Report 4: Severity of 2019-novel coronavirus (nCoV)

Ilaria Dorigatti+, Lucy Okell+, Anne Cori, Natsuko Imai, Marc Baguelin, Sangeeta Bhatia, Adhiratha Boonyasiri, Zulma Cucunubá, Gina Cuomo-Dannenburg, Rich FitzJohn, Han Fu, Katy Gaythorpe, Arran Hamlet, Wes Hinsley, Nan Hong, Min Kwun, Daniel Laydon, Gemma Nedjati-Gilani, Steven Riley, Sabine van Elsland, Erik Volz, Haowei Wang, Raymond Wang, Caroline Walters, Xiaoyue Xi, Christl Donnelly, Azra Ghani, Neil Ferguson*. With support from other volunteers from the MRC Centre.¹

WHO Collaborating Centre for Infectious Disease Modelling
MRC Centre for Global Infectious Disease Analysis
Abdul Latif Jameel Institute for Disease and Emergency Analytics (J-IDEA)
Imperial College London

*Correspondence: neil.ferguson@imperial.ac.uk
¹See full list at end of document. +These two authors contributed equally.

Summary

We present case fatality ratio (CFR) estimates for three strata of 2019-nCoV infections. For cases detected in Hubei, we estimate the CFR to be 18% (95% credible interval: 11%-81%). For cases detected in travellers outside mainland China, we obtain central estimates of the CFR in the range 1.2-5.6% depending on the statistical methods, with substantial uncertainty around these central values. Using estimates of underlying infection prevalence in Wuhan at the end of January derived from testing of passengers on repatriation flights to Japan and Germany, we adjusted the estimates of CFR from either the early epidemic in Hubei Province, or from cases reported outside mainland China, to obtain estimates of the overall CFR in all infections (asymptomatic or symptomatic) of approximately 1% (95% confidence interval 0.5%-4%). It is important to note that the differences in these estimates do not reflect underlying differences in disease severity between countries. CFRs seen in individual countries will vary depending on the sensitivity of different surveillance systems to detect cases of differing levels of severity and the clinical care offered to severely ill cases. All CFR estimates should be viewed cautiously at the current time as the sensitivity of surveillance of both deaths and cases in mainland China is unclear. Furthermore, all estimates rely on limited data on the typical time intervals from symptom onset to death or recovery which influences the CFR estimates.

SUGGESTED CITATION

Ilaria Dorigatti, Lucy Okell, Anne Cori et al. Severity of 2019-novel coronavirus (nCoV). Imperial College London (10-02-2020), doi: https://doi.org/10.25561/77154.

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

1. Introduction: Challenges in assessing the spectrum of severity

There are two main challenges in assessing the severity of clinical outcomes during an epidemic of a newly emerging infection:

1. Surveillance is typically biased towards detecting clinically severe cases, particularly at the start of an epidemic when diagnostic capacity is limited (Figure 1). Estimates of the proportion of fatal cases (the case fatality ratio, CFR) may thus be biased upwards until the extent of clinically milder disease is determined [1].

2. There can be a period of two to three weeks between a case developing symptoms, subsequently being detected and reported, and observing the final clinical outcome. During a growing epidemic the final clinical outcome of the majority of the reported cases is typically unknown. Dividing the cumulative reported deaths by reported cases will underestimate the CFR among these cases early in an epidemic [1-3].
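The second point above is easy to demonstrate with a toy calculation (my illustration, with assumed parameters, not the report’s code): with cases growing exponentially and deaths lagging onset by a fixed delay, cumulative deaths divided by cumulative cases sits well below the true CFR.

```python
# Toy illustration of challenge 2: during exponential growth, the naive
# estimate (cumulative deaths / cumulative cases) underestimates the true CFR
# because recent cases have not yet had time to die. Parameters are assumed
# for illustration, not taken from the report.

import math

TRUE_CFR = 0.10        # assumed true case fatality ratio
GROWTH_RATE = 0.14     # per day, exponential growth
DELAY = 14             # assumed fixed onset-to-death delay, days

def naive_cfr(day: int) -> float:
    """Naive CFR on a given day of an epidemic that started on day 0."""
    # Cumulative cases with onset up to `day` (continuous approximation).
    cases = math.exp(GROWTH_RATE * day)
    # Deaths observed by `day` arise only from onsets before day - DELAY.
    deaths = TRUE_CFR * math.exp(GROWTH_RATE * (day - DELAY)) if day >= DELAY else 0.0
    return deaths / cases

for day in (20, 40, 60):
    print(f"day {day}: naive CFR = {naive_cfr(day):.3f} (true value {TRUE_CFR})")
# Prints ~0.014 on every day shown: biased low by the factor exp(-r * delay).
```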

Figure 1 illustrates the first challenge. Published data from China suggest that the majority of detected and reported cases have moderate or severe illness, with atypical pneumonia and/or acute respiratory distress being used to define suspected cases eligible for testing. In these individuals, clinical outcomes are likely to be more severe, and hence any estimates of the CFR are likely to be high.

Outside mainland China, countries alert to the risk of infection being imported via international travel have instituted surveillance for 2019-nCoV infection with a broader set of clinical criteria for defining a suspected case, typically including a combination of symptoms (e.g. cough + fever) combined with recent travel history to the affected region (Wuhan and/or Hubei Province). Such surveillance is therefore likely to pick up clinically milder cases as well as the more severe cases also being detected in mainland China. However, by restricting testing to those with a travel history or link, it is also likely to miss other symptomatic cases (and possibly hospitalised cases with atypical pneumonia) that have occurred through local transmission or through travel to other affected areas of China.

Figure 1: Spectrum of cases for 2019-nCoV, illustrating imputed sensitivity of surveillance in mainland China and in travellers arriving in other countries or territories from mainland China.

Finally, the bottom of the pyramid represents the likely largest population of those infected with either mild, non-specific symptoms or who are asymptomatic. Quantifying the extent of infection overall in the population requires random population surveys of infection prevalence. The only such data at present for 2019-nCoV are the PCR infection prevalence surveys conducted in exposed expatriates who have recently been repatriated to Japan, Germany and the USA from Wuhan city (see below).

To obtain estimates of the severity of 2019-nCoV across the full severity range we examined aggregate data from Hubei Province, China (representing the top two levels – deaths and hospitalised cases – in Figure 1) and individual-level data from reports of cases outside mainland China (the top three levels and perhaps part of the fourth level in Figure 1). We also analysed data on infections in repatriated expatriates returning from Hubei Province (representing all levels in Figure 1).

2. Current estimates of the case fatality ratio

The CFR is defined as the proportion of cases of a disease who will ultimately die from the disease. For a given case definition, once all deaths and cases have been ascertained (for example at the end of an epidemic), this is simply calculated as deaths/cases. However, at the start of the epidemic this ratio underestimates the true CFR due to the time-lag between onset of symptoms and death [1-3]. We adopted several approaches to account for this time-lag and to adjust for the unknown final clinical outcome of the majority of cases reported both inside and outside China (cases reported in mainland China and those reported outside mainland China) (see Methods section below). We present the range of resulting CFR estimates in Table 1 for two parts of the case severity pyramid. Note that all estimates have high uncertainty and therefore point estimates represent a snapshot at the current time and may change as additional information becomes available. Furthermore, all data sources have inherent potential biases due to the limits in testing capacity as outlined earlier.

Table 1: Estimates of CFR for two severity ranges: cases reported in mainland China, and those reported outside. All estimates quoted to two significant figures.

Severity range: China – epidemic currently in Hubei.
Method and data used: Parametric model fitted to publicly reported number of cases and deaths in Hubei as of 5th February, assuming exponential growth at rate 0.14/day.
Time-to-outcome distributions used: Onset-to-death estimated from 26 deaths in China; assume 5-day period from onset to report and 1-day period from death to report.
CFR: 18%¹ (95% credible interval: 11-81%)

Severity range: Outside mainland China – cases in travellers from mainland China to other countries or territories (showing a broader spectrum of symptoms than cases in Hubei, including milder disease).
Method and data used: Parametric model fitted to reported traveller cases up to 8th February using both death and recovery outcomes and inferring latest possible dates of onset in traveller cases².
Time-to-outcome distributions used: Onset-to-death estimated from 26 deaths in China; onset-to-recovery estimated from 36 cases detected outside mainland China⁴.
CFR: 5.1%³ (95% credible interval: 1.1%-38%)

Method and data used: Parametric model fitted to reported traveller cases up to 8th February using only death outcome and inferring latest possible unreported dates of onset in traveller cases².
Time-to-outcome distributions used: Onset-to-death estimated from 26 deaths in China.
CFR: 5.6%¹ (95% credible interval: 2.0%-85%)

Method and data used: Kaplan-Meier-like non-parametric model (CASEFAT Stata module [4]) fitted to reported traveller cases up to 8th February using both death and recovery outcomes².
Time-to-outcome distributions used: Hazards of death and recovery estimated as part of method.
CFR: 1.2%³,⁴ (95% confidence interval: 0.9%-26%)

Severity range: All infections.
Method and data used: Scaling CFR estimate for Hubei for the level of infection under-ascertainment estimated from infection prevalence detected in repatriation flights, assuming infected individuals test positive for 14 days.
Time-to-outcome distributions used: As first row.
CFR: 0.9% (95% confidence interval: 0.5%-4.0%)

Method and data used: As previous row, but assuming infected individuals test positive for 7 days.
Time-to-outcome distributions used: As first row.
CFR: 0.8% (95% confidence interval: 0.4%-3.0%)

¹Mode quoted for Bayesian estimates, given uncertainty in the tail of the onset-to-death distribution. ²Estimates made without imputing onset dates in traveller cases for whom onset dates are unknown are slightly higher than when onset dates are imputed. ³Maximum likelihood estimate. ⁴This estimate relies on information from just 2 deaths reported outside mainland China thus far and therefore has wide uncertainty. Both of these deaths occurred a relatively short time after onset compared with the typical pattern in China.
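As a rough sketch of the scaling logic behind the “all infections” rows (my reading of the method; every number below is an invented placeholder, not the report’s input):

```python
# Rough sketch of the "all infections" scaling: the Hubei CFR is divided by
# the factor by which infections are under-ascertained, with the true
# infection level inferred from prevalence on repatriation flights.
# All numbers here are invented placeholders, not the report's inputs.

cfr_hubei = 0.18              # CFR among detected Hubei cases (Table 1, first row)
prevalence_flights = 0.01     # assumed infection prevalence from repatriation testing
prevalence_detected = 0.001   # assumed prevalence implied by officially detected cases

under_ascertainment = prevalence_flights / prevalence_detected  # ~10x more infections
cfr_all_infections = cfr_hubei / under_ascertainment
print(f"CFR across all infections ≈ {cfr_all_infections:.1%}")  # ≈ 1.8% with these inputs
```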

Use of data on those who have recovered among exported cases gives very similar point estimates to just relying on death data, but a rather narrower uncertainty range. This highlights the value of case follow-up data on both fatal and non-fatal cases.

Given that the estimates of CFR across all infections rely on a single point estimate of infection prevalence, they should be treated cautiously. In particular, the sensitivity of the diagnostics used to test repatriated passengers is not known, and it is unclear when infected people might test positive, or how representative those passengers were of the general population of Wuhan (their infection risk might have been higher or lower than the general population). Additional representative studies to assess the extent of mildly symptomatic or asymptomatic infection are therefore urgently needed.

Figure 2 shows projected expected numbers of deaths detected in cases detected up to 4th February outside mainland China over the next few weeks for different values of the CFR. If no further deaths are reported amongst this group (and indeed if many of those now in hospital recover and are discharged) in the next 5 to 10 days, then we expect the upper bound on estimates of the CFR in this population to reduce. We note that the coming one to two weeks should allow CFR estimates to be refined.


[Figure 2: curves of cumulative expected number of deaths (vertical axis) by date of death (horizontal axis, 21/01/2020 to 20/02/2020) for case fatality ratios of 1%, 3%, 5%, 7%, 9% and 11%, with observed deaths overlaid.]

Figure 2: Projected numbers of deaths in cases detected outside China up to 8th February for different values of the CFR in that population.

3. Methods

A) Intervals between onset of symptoms and outcome

During a growing epidemic, the reported cases are generally identified some time before knowing the clinical outcome of each case. For example, during the 2003 SARS epidemic, the average time between onset of symptoms and either death or discharge from hospital was approximately three weeks. To interpret the relationship between reported cases and deaths, we therefore need to account for this interval. Two factors need to be considered: a) that we have not observed the full distribution of outcomes of the reported cases (i.e. censoring) and b) that our sample of cases is from a growing epidemic and hence more reported cases have been infected recently compared to one to two weeks ago. The latter effect is frequently ignored in analyses but leads to a downwards biased central estimate of the CFR.

If $f_{OD}(\cdot)$ denotes the probability density function (PDF) of time from symptom onset to death, then the PDF that we observe a death at time $t_d$ with assumed onset $\tau$ days ago is

$$g_{OD}(\tau \mid t_d) = \frac{f_{OD}(\tau)\, o(t_d - \tau)}{\int_0^{\infty} f_{OD}(\tau')\, o(t_d - \tau')\, d\tau'},$$

where $o(t)$ denotes the observed number of onsets that occurred at time $t$. For an exponentially growing epidemic, we assume that $o(t) = o_0 e^{rt}$ where $o_0$ is the initial number of onsets (at $t = 0$) and $r$ is the epidemic growth rate. Substituting this, we get

$$g_{OD}(\tau \mid t_d) = \frac{f_{OD}(\tau)\, e^{-r\tau}}{\int_0^{\infty} f_{OD}(\tau')\, e^{-r\tau'}\, d\tau'}.$$

We can therefore fit the distribution $g_{OD}(\cdot)$ to the observed data and correct for the epidemic growth rate to estimate parameters for $f_{OD}(\cdot)$, the true distribution, for a given estimate of $r$.

If we additionally assume that onsets were poorly observed prior to time $T_{\min}$ then we can include censoring:

$$g_{OD}(\tau \mid t_d) = \frac{f_{OD}(\tau)\, e^{-r\tau}}{\int_0^{t_d - T_{\min}} f_{OD}(\tau')\, e^{-r\tau'}\, d\tau'}.$$

For the special case that we model $f_{OD}(\tau)$ as a gamma distribution parameterised in terms of its mean $m$ and the ratio of the standard deviation to the mean, $s$, namely $f_{OD}(\tau \mid m, s)$, it can be shown that

$$g_{OD}(\tau \mid t_d, m, s) = \frac{f_{OD}(\tau \mid m/(1 + rms^2),\, s)}{\int_0^{t_d - T_{\min}} f_{OD}(\tau' \mid m/(1 + rms^2),\, s)\, d\tau'},$$

where the transformed mean and standard deviation-to-mean ratios are

$$m' = \frac{m}{1 + rms^2}, \qquad s' = s.$$

Therefore, the Bayesian posterior distribution for $m$ and $s$ (up to a constant factor equal to the total probability) is proportional to the likelihood (over all intervals):

$$P(m, s \mid \{\tau, t_d\}) \propto \prod_i g_{OD}(\tau_i \mid t_{d,i}, m, s)\, P(m, s),$$

where the product is over a dataset of observed intervals and times of death $\{\tau, t_d\}$ and $P(m, s)$ is the prior distribution for $m$ and $s$. This is constant for a uniform prior distribution or can be derived, for instance, by fitting this model to the complete dataset of observed onset-to-death intervals from previous epidemics (e.g. in this case the 2003 SARS epidemic in Hong Kong). Note that for a fully observed epidemic, it is not necessary to account for epidemic growth provided there was no change in clinical management (and thus the interval distribution) over time.

We can infer other interval distributions, such as the onset-to-recovery distribution $f_{OR}(\cdot)$ (but also the serial interval distribution and incubation period distribution), in a similar manner, given relevant data on the timing of events. It should be noted that inferring all such interval distributions needs to take account of epidemic growth.

For the analyses presented here, we fitted $f_{OD}(\cdot)$ to data from 26 deaths from 2019-nCoV reported in mainland China early in the epidemic and we fitted $f_{OR}(\cdot)$ to 29 cases detected outside mainland China. Uninformative uniform prior distributions were used for both.

The estimates of key parameters are shown in Table 2.

Table 2: Estimates of parameters for onset-to-death and onset-to-recovery distributions

Onset-to-recovery – data source: 29 2019-nCoV cases detected outside mainland China; mean (mode, 95% credible interval): 22.2 days (18-83); SD/mean (mode, 95% credible interval): 0.45 (0.35-0.62).
Onset-to-death – data source: 26 2019-nCoV deaths from mainland China; mean (mode, 95% credible interval): 22.3 days (18-82); SD/mean (mode, 95% credible interval): 0.42 (0.33-0.6).
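To make the growth-rate correction concrete, here is a minimal numerical sketch (my illustration, not the report’s code): a gamma density is tilted by exp(−rτ) and renormalised to give the observed distribution, and a grid of (m, s) values is scored against observed onset-to-death intervals. The interval data and grid ranges below are invented.

```python
# Minimal sketch of the growth-corrected fit: the observed onset-to-death
# distribution g_OD is the true gamma f_OD tilted by exp(-r*tau); we score a
# grid of (m, s) pairs against observed intervals. Data and grid are invented.

import numpy as np
from scipy import stats

r = 0.14                                   # assumed epidemic growth rate per day
observed = np.array([10.0, 14.0, 17.0, 21.0, 25.0])  # invented onset-to-death intervals

tau = np.linspace(0.01, 120, 4000)         # integration grid, days

def log_lik(m: float, s: float) -> float:
    """Log-likelihood of the intervals under the growth-corrected gamma g_OD."""
    shape, scale = 1.0 / s**2, m * s**2    # gamma with mean m and sd/mean ratio s
    f = stats.gamma.pdf(tau, a=shape, scale=scale)
    g = f * np.exp(-r * tau)               # tilt by exp(-r*tau) ...
    g /= np.trapz(g, tau)                  # ... and renormalise
    return float(np.sum(np.log(np.interp(observed, tau, g))))

grid = [(m, s) for m in np.arange(10.0, 35.0, 1.0) for s in np.arange(0.2, 0.8, 0.05)]
m_hat, s_hat = max(grid, key=lambda p: log_lik(*p))
print(f"posterior mode under a uniform prior: m ≈ {m_hat:.0f} days, s ≈ {s_hat:.2f}")
```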

B) Estimates of the Case Fatality Ratio from individual case data

Parametric models

We can infer the CFR from individual data on dates of symptom onset, death and recovery. Continuing our notation from above, let $f_{OD}(\cdot)$ denote the distribution of times from symptom onset to death, $f_{OR}(\cdot)$ denote the distribution of times from onset to recovery, and $c$ denote the CFR.

The probability that a patient dies on day $t_d$ given onset at time $t_o$, conditional on survival to that time, is given by:

$$p_d(t_d \mid t_o, c, m_{OD}, s_{OD}) = c \int_{t_d - t_o}^{t_d - t_o + 1} f_{OD}(\tau)\, d\tau.$$

Similarly, the probability that a patient recovers on day $t_r$, given onset at time $t_o$, is given by:

$$p_r(t_r \mid t_o, c, m_{OR}, s_{OR}) = (1 - c) \int_{t_r - t_o}^{t_r - t_o + 1} f_{OR}(\tau)\, d\tau.$$

Here $m_{OD}, s_{OD}$ are the mean and standard deviation-to-mean ratio for the onset-to-death distribution, and $m_{OR}, s_{OR}$ are those for the onset-to-recovery distribution.

Finally, the probability that a patient remains in hospital at the last date for which data are available, $T$, is

$$p_h(T \mid t_o, c, m_{OD}, s_{OD}, m_{OR}, s_{OR}) = (1 - c) \int_{T - t_o}^{\infty} f_{OR}(\tau)\, d\tau + c \int_{T - t_o}^{\infty} f_{OD}(\tau)\, d\tau.$$

The overall likelihood of all observed deaths, recoveries and cases remaining in hospital is

$$P(\{t_d, t_r, t_o\} \mid T, c, m_{OD}, s_{OD}, m_{OR}, s_{OR}) = \prod_{i \in \{\mathrm{dead\ by\ } T\}} p_d(t_{d,i} \mid t_{o,i}, c, m_{OD}, s_{OD}) \prod_{i \in \{\mathrm{recovered\ by\ } T\}} p_r(t_{r,i} \mid t_{o,i}, c, m_{OR}, s_{OR}) \prod_{i \in \{\mathrm{hospitalised\ at\ } T\}} p_h(T \mid t_{o,i}, c, m_{OD}, s_{OD}, m_{OR}, s_{OR}).$$

It is also possible to infer $c$ from data just on deaths and ‘non-deaths’, grouping the currently hospitalised and recoveries together:

$$P(\{t_d, t_o\} \mid T, c, m_{OD}, s_{OD}) = \prod_{i \in \{\mathrm{dead\ by\ } T\}} p_d(t_{d,i} \mid t_{o,i}, c, m_{OD}, s_{OD}) \prod_{i \in \{\mathrm{not\ dead\ at\ } T\}} p_h(T \mid t_{o,i}, c, m_{OD}, s_{OD}, m_{OR}, s_{OR}).$$

In a Bayesian context, the posterior distribution is given by

$$P(c \mid \{t_d, t_r, t_o\}, T) \propto \int P(\{t_d, t_r, t_o\} \mid T, c, m_{OD}, s_{OD}, m_{OR}, s_{OR})\, P(m_{OD}, s_{OD} \mid \{\tau, t_d\}_{\mathrm{old}})\, P(m_{OR}, s_{OR} \mid \{\tau, t_r\}_{\mathrm{old}})\, P(c)\, dm_{OD}\, ds_{OD}\, dm_{OR}\, ds_{OR},$$

where $P(m_{OD}, s_{OD} \mid \{\tau, t_d\}_{\mathrm{old}})$ is the prior distribution for the onset-to-death distribution obtained by fitting to previous epidemics (e.g. in this case the 2003 SARS epidemic in Hong Kong), and $P(m_{OR}, s_{OR} \mid \{\tau, t_r\}_{\mathrm{old}})$ is the comparable prior distribution for time from onset to recovery.

We assumed gamma-distributed onset-to-death and onset-to-recovery distributions (see above).
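To make the structure of the likelihood above concrete, here is a minimal Python sketch under the same gamma assumptions. It is an illustration of the formulas, not the authors' code, and the function names are mine:

```python
# Minimal sketch of the individual-level likelihood (illustrative only).
# Gamma delays are parameterised by mean m and SD-to-mean ratio s:
# shape k = 1/s**2, scale = m * s**2.
import numpy as np
from scipy.stats import gamma

def gamma_dist(m, s):
    return gamma(a=1.0 / s**2, scale=m * s**2)

def log_likelihood(c, m_od, s_od, m_or, s_or, deaths, recoveries, censored, T):
    """deaths/recoveries: lists of (onset_time, event_time) pairs;
    censored: onset times of patients still hospitalised at time T."""
    f_od, f_or = gamma_dist(m_od, s_od), gamma_dist(m_or, s_or)

    def surv(dt):
        # probability of still being in hospital dt days after onset (p_h)
        return c * f_od.sf(dt) + (1.0 - c) * f_or.sf(dt)

    ll = 0.0
    for t_o, t_d in deaths:       # p_d: death density, conditional on survival
        ll += np.log(c * f_od.pdf(t_d - t_o) / surv(t_d - t_o))
    for t_o, t_r in recoveries:   # p_r: recovery density, conditional on survival
        ll += np.log((1.0 - c) * f_or.pdf(t_r - t_o) / surv(t_r - t_o))
    for t_o in censored:          # p_h: still in hospital at the final date T
        ll += np.log(surv(T - t_o))
    return ll
```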

We fitted this model to the observed onset, recovery and death times in 290 international travellers from mainland China reported up to 8th February. For approximately 50% of these travellers, the date

of onset was not reported. To allow us to fit the model to all cases, for those travellers we imputed an

estimate of the onset date as the first known contact with healthcare services – taken as the earliest

of the date of hospitalisation, date of report or date of confirmation. We note this is the latest possible

onset date and may therefore increase our estimates of CFR.

We also fitted a variant of this model where recoveries were ignored, given that they may be

systematically under-ascertained and hence introduce a bias in the estimate.

Posterior distributions were calculated numerically on a hypercube grid of the parameters to be inferred (c, m_OD, s_OD, m_OR, s_OR). Marginal distributions were computed for c.



Kaplan-Meier-like non-parametric model

We used a non-parametric Kaplan-Meier-like method originally developed and applied to the 2003

SARS epidemic in Hong Kong [2]. The analysis was implemented using the CASEFAT Stata Module [4].

C) Estimates of the Case Fatality Ratio from aggregated case data

With posterior estimates of the onset-to-death distribution f_OD(.) derived from case data collected in the early epidemic in Wuhan, it is possible to estimate the CFR from daily reports of confirmed cases and deaths in China, under the assumption that the daily new incidence figures reported represent recent deaths and cases.

Let the incidence of deaths and onsets (newly symptomatic cases) at time t be D(t) and C(t), respectively. Given knowledge of the onset-to-death distribution, f_OD(.), the expected number of deaths at time t is given by

$$D(t) = c \int_0^{\infty} C(t - \tau)\, f_{OD}(\tau)\, d\tau$$

Assuming cases are growing exponentially as C(t) = C_0 exp(rt), we have

$$D(t) = c\, C(t) \int_0^{\infty} f_{OD}(\tau)\, e^{-r\tau}\, d\tau = c\, z\, C(t)$$

where

$$z = \int_0^{\infty} f_{OD}(\tau)\, e^{-r\tau}\, d\tau$$

Assuming a gamma distribution form for f_OD(.), and parameterising as above in terms of the mean and the standard deviation-to-mean ratio, m and s, respectively, one can show that z is:

$$z(r, m, s) = \left(1 + r m s^2\right)^{-1/s^2}$$

Thus we assumed the probability of observing D(t) deaths given C(t) cases at time t is a binomial draw from C(t) with probability cz. The term z is a downscaling of the actual CFR, c, to reflect epidemic growth. Heuristically, if the mean onset-to-death interval is 20 days, and the doubling time of the epidemic is, say, 5 days, then deaths now correspond to onsets occurring when incidence of cases was 2^4 = 16-fold smaller than today, meaning the crudely estimated CFR (cumulative deaths/cumulative cases) needs to be scaled up by the same factor.
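As a quick check of that correction (my own arithmetic, not the report's): a point-mass delay of 20 days with a 5-day doubling time gives the 16-fold heuristic, while the gamma form with the fitted SD/mean ratio of about 0.42 implies a noticeably smaller factor:

```python
# My own check of the growth correction z, not from the report.
import numpy as np

def z_gamma(r, m, s):
    # z = (1 + r*m*s^2) ** (-1/s^2), the Laplace transform of the gamma delay
    return (1.0 + r * m * s**2) ** (-1.0 / s**2)

r = np.log(2) / 5.0                    # 5-day doubling time
print(2 ** (20 / 5))                   # point mass at 20 days: 16-fold
print(1 / z_gamma(r, m=20, s=0.42))    # gamma-smeared delay: ~9.5-fold
```

The gamma's early tail means some of today's deaths trace back to relatively recent onsets, which softens the correction relative to the point-mass heuristic.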

Ignoring constant terms in the binomial probability not involving c or z, the posterior distribution for c is:

$$P(c \mid C(t), D(t), m, s) \propto \left(c\, z(r, m, s)\, C(t)\right)^{D(t)} \exp\left(-c\, z(r, m, s)\, C(t)\right) P(m, s)\, P(c)$$

where P(c) is the prior distribution on c (assumed uniform) and P(m, s) is the prior distribution on the onset-to-death distribution, f_OD(.), which we took to be the posterior distribution obtained


by fitting to the observed onset-to-death distribution for 26 cases in the early epidemic in Wuhan, itself

fitted with a prior distribution based on SARS data (see above).

The official case reports do not give dates of symptom onset or death, so we assumed that deaths

were reported 4 days more promptly than onsets, given the delays in healthcare seeking and testing

involved in confirming new cases, versus the follow-up and recording of deaths of the cases already

in the database. Assuming this difference in reporting delays is longer than 4 days results in lower

estimates of the CFR, while assuming the difference is shorter than 4 days gives higher estimates.

Thus, we compared 45 new deaths reported on 1st February with 3156 new cases reported on 5th February.
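Putting those two reported figures through the growth correction gives a rough feel for the calculation. This is a toy version only – the report's estimate comes from the full Bayesian fit, which also integrates over the uncertainty in m and s:

```python
# Toy version of the aggregated-data estimate using the figures just quoted
# (illustrative only, not the report's Bayesian fit).
import numpy as np

r, m, s = np.log(2) / 5.0, 20.0, 0.42         # growth rate, delay mean, SD/mean
z = (1.0 + r * m * s**2) ** (-1.0 / s**2)     # growth correction, as above
crude = 45 / 3156                             # deaths 1st Feb / cases 5th Feb
print(round(crude, 4), round(crude / z, 3))   # naive ratio vs corrected CFR
```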

In addition, while both cases and deaths were growing approximately exponentially in the 10 days prior to 5th February, the numbers of cases have been growing faster than deaths. We assumed this

reflects improved surveillance of milder cases over time, and thus used an estimate of the growth

rate in deaths of r = 0.14/day, corresponding to a 5-day doubling time. Assuming a higher value of r

gives a higher estimate of the CFR.

Resulting estimates of the CFR showed little variation if calculated for each of the 7 days prior to 5th February.

D) Translating prevalence to incidence and estimating a CFR for all infections

Translating the severity estimates in Table 1 into estimates of CFR for all cases of infection with 2019-nCoV requires knowledge of the

proportion of all infections being detected in either China or overseas. To do so we use a single point

estimate of prevalence of infection from the testing of all passengers returning on four repatriation flights to Japan and Germany in the period 29th January – 1st February. Infection was detected in

passengers from each flight. In total 10 infections were confirmed in approximately 750 passengers

(passenger numbers are known for 3 flights and were estimated to be ~200 for the fourth). This gives

an estimate of detectable infection prevalence of 1.3% (exact 95% binomial confidence interval: 0.7%-

2.4%).

Let us assume infected individuals test positive by PCR for 2019-nCoV infection from l days before onset of clinical symptoms to n − l days after. Then the infection prevalence at time t, y(t), is related to the incidence of new cases, C(t), by:

$$y(t) = \frac{1}{N} \int_{-l}^{n-l} C(t - \tau)\, d\tau$$

Here N is the population of the area sampled (here assumed to be Wuhan). Assuming incidence is growing as C(t) = C_0 exp(rt), with r = 0.14/day (5-day doubling time), this gives

$$y(t) = \frac{C(t + l)\left[1 - \exp(-rn)\right]}{rN}$$

Here we assume l = 1 day and examine n = 7 and 14 days.
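The inversion from prevalence to incidence is then one line of algebra. The sketch below (mine, not the report's code) approximately reproduces the incidence figures quoted in the next paragraph from the ~1.3% flight prevalence:

```python
# Sketch of the prevalence-to-incidence inversion (illustrative only).
import numpy as np

r, l = 0.14, 1.0     # growth rate per day; detectable from l days pre-onset
y = 10 / 750.0       # ~1.3% point prevalence from the repatriation flights

for n in (14.0, 7.0):  # total days an infection is PCR-detectable
    # invert y(t) = C(t + l) * (1 - exp(-r*n)) / (r * N) for per-capita incidence
    per_100k = y * r / (1.0 - np.exp(-r * n)) * 1e5
    print(int(n), round(per_100k), round(per_100k * 110))  # Wuhan ~11 million
```

Dividing the implied city-wide totals (roughly 24,000 and 33,000 onsets per day) by the 1242 confirmed cases reported on 3rd February recovers the roughly 19- and 26-fold under-ascertainment factors quoted below.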

Thus we estimated an incidence of 220 (95% confidence interval: 120-400) case onsets per day per 100,000 of population in Wuhan on 31st January assuming infections are detectable for 14

days, and 300 (95% confidence interval: 160-550) case onsets per day per 100,000 assuming infections

are detectable for 7 days. Taking the 11 million population of Wuhan city, this implied a total of 24,000

(95% confidence interval: 13,000-44,000) case onsets in the city on that date assuming infections are

detectable for 14 days, and 33,000 (95% confidence interval: 18,000-60,000) assuming infections are

detectable for 7 days. It should be noted that a number of the detected infections on the repatriation

flights were asymptomatic (at least at the time of testing), therefore these total estimates of incidence

might include a proportion of very mildly symptomatic or asymptomatic cases.

Assuming an average 4 days¹ between the onset of symptoms and case report in Wuhan City, the above estimates can be compared with the 1242 reported confirmed cases on 3rd February in Wuhan

City [5]. This implies 19-fold (95% confidence interval: 11-35) under-ascertainment of infections in

Wuhan assuming infections are detectable for 14 days (including from 1 day prior to symptoms), and

26-fold (95% confidence interval: 15-48) assuming infections are detectable for 7 days (including 1 day

prior to symptoms).

Under the assumption that all 2019-nCoV deaths are being reported in Wuhan city, we can then divide

our estimates of CFR in China by these under-ascertainment factors. Taking our 18% CFR among cases

in Hubei (first row of Table 1), this implies a CFR among all infections of 0.9% (95% confidence interval:

0.5%-4.3%) assuming infections are detectable via PCR for 14 days, and 0.8% (95% confidence interval:

0.4%-3.1%) assuming infections are detectable for 7 days.

¹ This value is plausible from publicly available case reports. Longer durations between onset of symptoms and report will lead to higher estimates of the degree of under-ascertainment.


Similar estimates are obtained if one uses estimates of CFR in exported cases as the comparator: we

estimate that surveillance outside mainland China is approximately 4 to 5-fold more sensitive at

detecting cases than that in China.

E) Forward Projections of Expected Deaths in Travellers

Using our previous notation, let o_travellers(t) denote the onsets in travellers from mainland China at time t ≤ T_max, where T_max is the most recent time (8th February in this analysis). Using our central estimate of the onset-to-death interval obtained from the 26 deaths in mainland China, f_OD(τ; m, s), we obtain an estimate of the expected number of deaths occurring at time t from:

$$D(t) = c \int_0^{\infty} o_{\text{travellers}}(t - \tau)\, f_{OD}(\tau)\, d\tau$$

where c is the CFR.
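Numerically, this is a discrete convolution of the onset curve with the delay distribution. A self-contained sketch with toy onset counts and an assumed c (not the report's code):

```python
# Discrete-convolution sketch of the forward projection (toy onset counts
# and an assumed CFR c; m and s echo Table 2, but this is illustrative only).
import numpy as np
from scipy.stats import gamma

m, s, c = 22.3, 0.42, 0.05
f_od = gamma(a=1.0 / s**2, scale=m * s**2)

onsets = np.zeros(120)                        # o_travellers(t) up to T_max
onsets[:40] = np.exp(0.14 * np.arange(40))    # toy exponential onset curve

delay = np.diff(f_od.cdf(np.arange(0, 61)))   # discretised f_OD(tau)
expected_deaths = c * np.convolve(onsets, delay)[: len(onsets)]
print(expected_deaths[:10].round(3))
```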

4. Data Sources

A) Data on early deaths from mainland China

Data on the characteristics of 39 cases who died from 2019-nCoV infection in Hubei Province were

collated from several websites. Of these, the date of onset of symptoms was not available for 5 cases.

We restricted our analysis to those who died up to 21st January, leaving 26 deaths for analysis. These

data are available from website as hubei_early_deaths_2020_07_02.csv

B) Data on cases in international travellers

We collated data on 290 cases in international travellers from websites and media reports up to 8th February. These data are available from website as international_cases_2020_08_02.csv

C) Data on infection in repatriated international Wuhan residents

Data on infection prevalence in repatriated expatriates returning to their home countries were

obtained from media reports. These data are summarised in Table 3. Further data from a flight

returning to Malaysia reported two positive cases on 5th February – giving a prevalence at this time point of 2%, which remains consistent with our estimate.

Table 3: Data on confirmed infections in passengers on repatriation flights from Wuhan.

    Country of    Number of                   Number      Confirmed     Confirmed
    Destination   Passengers                  Confirmed   Symptomatic   Asymptomatic
 1  Japan         206                         4           2             2
 2  Japan         210                         2           0             2
 3  Japan         Not reported – assume 200   2           1             1
 4  Germany       124                         2           –             –
 5* Malaysia      207                         2           0             2

*Not used in our analysis but noted here for completeness.

5. Acknowledgements

We are grateful to the following hackathon participants from the MRC Centre for Global Infectious Disease Analysis for their support in extracting data: Kylie Ainslie, Lorenzo Cattarino, Giovanni Charles, Georgina Charnley, Paula Christen, Victoria Cox, Zulma Cucunubá, Joshua D’Aeth, Tamsin Dewé, Amy Dighe, Lorna Dunning, Oliver Eales, Keith Fraser, Katy Gaythorpe, Lily Geidelberg, Will Green, David Jørgensen, Mara Kont, Alice Ledda, Alessandra Lochen, Tara Mangal, Ruth McCabe, Kate Mitchell, Andria Mousa, Rebecca Nash, Daniela Olivera, Saskia Ricks, Nora Schmit, Ellie Sherrard-Smith, Janetta Skarp, Isaac Stopard, Hayley Thompson, Juliette Unwin, Juan Vesga, Caroline Walters.


6. References

1. Garske, T., et al., Assessing the severity of the novel influenza A/H1N1 pandemic. BMJ, 2009.

339: p. b2840.

2. Ghani, A.C., et al., Methods for estimating the case fatality ratio for a novel, emerging

infectious disease. Am J Epidemiol, 2005. 162(5): p. 479-86.

3. Lipsitch, M., et al., Potential Biases in Estimating Absolute and Relative Case-Fatality Risks

during Outbreaks. PLoS Negl Trop Dis, 2015. 9(7): p. e0003846.

4. Griffin, J. and A. Ghani, CASEFAT: Stata module for estimating the case fatality ratio of a new

infectious disease. Statistical Software Components, 2005. S454601.

5. National Health Commission of the People’s Republic of China.

Ancient Code – www.ancient-code.com

NASA accidentally shows proof of Large-Scale Weather Manipulation in satellite images

by Janice Friedman

Is this a massive conspiracy? Or is it possible that NASA really is playing around with our weather on Earth?

Many people would most likely agree we are looking at a massive conspiracy, while others believe the evidence is right in front of us.

This year’s Caribbean hurricane season has turned ‘weather’ into a dominant subject in the world.

Catastrophic damage has been witnessed in the Caribbean, where entire Islands were swept away by the incredible power of mother nature. However, is this just mother nature’s work, or is there something ELSE going on?

For decades, ‘conspiracy theories’ about weather control have circulated on the internet, and rumors of government weather control have become ever more popular.

What was once considered an impossible feat is today possible thanks to decades-long geoengineering efforts that have given us the ability to influence the weather: a two-way street that can harm our planet as much as it can help it.

Climate engineering, commonly referred to as geoengineering or climate intervention, is the deliberate and large-scale intervention in the Earth’s climatic system with the aim of counteracting adverse global warming.

So where is that EVIDENCE? Where can I see with my own eyes that our weather is actually being manipulated?

Well, see for yourself.

Located just off the coast of Africa. Changing the weather has become a reality for humanity, but it seems that we aren’t really able to control it, are we?
Just off the coast of Australia, this image shows how bad it can get. The above images perfectly illustrate what Dane Wigington, writing for Wakeup-World, and David Wolfe describe as “many variances of radio frequency cloud impacts”.
This image shows the coast of California. Maybe it’s time to stop weather modification projects before we mess up Earth’s climate for good.
Off Africa’s west coast. Are we in danger of losing control?
Another image from Africa’s west coast.
Weather control off the coast of Spain. We are changing the weather, and it’s not for the good of the human population.
Here is another image off the African coast.

Africa’s coastal regions are a hot zone for weather geoengineering efforts, even though they are dismissed by mainstream media as nothing more than the result of “dust” in the air, notes Dane Wigington, who quotes an excerpt from a Fox9 News article:

“Right now, much of the Gulf of Mexico and parts of the Caribbean have slightly warmer than normal ocean temperatures which would normally aid in tropical development.

“But there is so much dust and dry air in the atmosphere that storms are getting choked off before they even get started.”


Dane indicates how radio frequency transmissions can alter cloud formations, and that it is the result of the “spraying of toxic electrically conductive heavy metals”. Now take a wild guess at what that means for everything we breathe.
Is HAARP really responsible for weather changes? In this next image, Dane points out that the enigmatic set of clouds formed near a HAARP station, which eventually generated the unique-looking cloud patterns.


‘Spooky’ quantum movements seen happening to large objects, scientists say

‘What’s special about this experiment is we’ve seen quantum effects on something as large as a human’

Scientists have seen “spooky” quantum behaviour happening to objects at the human scale, according to a new paper.

Researchers have seen quantum fluctuations “kick” large objects such as mirrors, moving them by a tiny degree but one big enough to measure.

Such behaviour has previously been predicted by quantum physicists. But it has never before been measured.

The movements are the result of the way the universe is structured, when seen at the level of quantum mechanics: researchers describe it as a “noisy” space, where particles are constantly switching in and out of existence, which creates a low-level fuzz at all times.

Normally, that background of quantum “noise” is too subtle to detect in objects that are visible at the human-scale. But the new research shows that scientists have finally detected those movements, using new technology to watch for those fluctuations.

Researchers at the MIT LIGO Laboratory saw that those fluctuations could move an object as big as a 40-kilogram mirror. The movement pushed the large mirrors a tiny amount, as predicted theoretically, allowing it to be measured by scientists.

The researchers were able to use special equipment called a quantum squeezer that allowed them to “manipulate” the noise so that it could be better observed.

“What’s special about this experiment is we’ve seen quantum effects on something as large as a human,” said Nergis Mavalvala, the Marble Professor and associate head of the physics department at MIT, in a statement.

“We too, every nanosecond of our existence, are being kicked around, buffeted by these quantum fluctuations. It’s just that the jitter of our existence, our thermal energy, is too large for these quantum vacuum fluctuations to affect our motion measurably. With LIGO’s mirrors, we’ve done all this work to isolate them from thermally driven motion and other forces, so that they are now still enough to be kicked around by quantum fluctuations and this spooky popcorn of the universe.”

To see the changes, researchers used the LIGO equipment that was built to detect gravitational waves. LIGO comprises two facilities in different parts of the US that send laser light down long tunnels, where it bounces off a mirror and is reflected back to where it started – the light at the two facilities should arrive back at the same time, unless a gravitational wave disrupts its journey.

In the new experiment, researchers used the very precise measurements of those mirrors and the unusual conditions of the LIGO detector to measure any possible quantum “kick” instead. They did so by watching for quantum fluctuations within the equipment and for corresponding movement in the mirrors.

“This quantum fluctuation in the laser light can cause a radiation pressure that can actually kick an object,” said Lee McCuller, a research scientist at MIT’s Kavli Institute for Astrophysics and Space Research. “The object in our case is a 40-kilogram mirror, which is a billion times heavier than the nanoscale objects that other groups have measured this quantum effect in.”


Herd Immunity July 11th 2020

Most humans are trapped by a need to think they are mini gods – particularly the most stupid, who cling to religion in fear of life and death. We are not. We are animals. We are herd animals who can only deal with Covid19 through herd immunity.

The virus is probably man made, but you will never get confirmation because it is all about China being a threat to the Anglo-US fake left elite. Nancy Pelosi is interchangeable with Thatcher, both wolves in female clothing.

The original pre-PC barrel-scraping James Bond and Felix thing was a metaphor for the elite who fear losing power. They are delusional. That is why they don’t understand and hate Trump. Robert Cook
Image Appledene Photographics/RJC

While much about the COVID-19 pandemic remains uncertain, we know how it will likely end: when the spread of the virus starts to slow (and eventually ceases altogether) because enough people have developed immunity to it. At that point, whether it’s brought on by a vaccine or by people catching the disease, the population has developed “herd immunity.”

“Once the level of immunity passes a certain threshold, then the epidemic will start to die out because there aren’t enough new people to infect,” said Natalie Dean of the University of Florida.

While determining that threshold for COVID-19 is critical, a lot of nuance is involved in calculating exactly how much of the population needs to be immune for herd immunity to take effect and protect the people who aren’t immune.

At first it seems simple enough. The only thing you need to know is how many people, on average, are infected by each infected person. This value is called R0 (pronounced “R naught”). Once you have that, you can plug it into a simple formula for calculating the herd immunity threshold: 1 − 1/R0.

Let’s say the R0 for COVID-19 is 2.5, meaning each infected person infects, on average, two and a half other people (a common estimate). In that case, the herd immunity threshold for COVID-19 is 0.6, or 60%. That means the virus will spread at an accelerating rate until, on average across different places, 60% of the population becomes immune.
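The arithmetic is easy to sanity-check – a two-line illustration, not from the article:

```python
# Quick check of the threshold formula 1 - 1/R0 (illustrative only).
def herd_immunity_threshold(r0: float) -> float:
    return 1.0 - 1.0 / r0

print(herd_immunity_threshold(2.5))  # 0.6, i.e. 60% of the population
```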

Diagram showing how a person infected with a disease of R0 = 2 infects 2 people
Lucy Reading-Ikkanda/Quanta Magazine

At that point, the virus will still spread, but at a decelerating rate until it stops completely. Just as a car doesn’t come to a stop the moment you take your foot off the gas, the virus won’t vanish the moment herd immunity is reached.

“You could imagine that once 60% of the population is infected, the number of infections starts to drop. But it might be another 20% that gets infected while the disease is starting to die out,” said Joel Miller of La Trobe University in Australia.

That 60% is also the threshold past which new introductions of the virus — say, an infected passenger disembarking from a cruise ship into a healthy port with herd immunity — will quickly burn out.

Diagram showing how it becomes harder for a disease to spread if more people are immune
Lucy Reading-Ikkanda/Quanta Magazine

“It doesn’t mean you won’t be able to start a fire at all, but that outbreak is going to die,” said Kate Langwig of Virginia Polytechnic Institute and State University.

However, things quickly get complicated. The herd immunity threshold depends on how many people each infected person actually infects — a number that can vary by location. The average infected person in an apartment building may infect many more people than the average infected person in a rural setting. So while an R0 of 2.5 for COVID-19 may be a reasonable number for the whole world, it will almost certainly vary considerably on a more local level, averaging much higher in some places and lower in others. This means that the herd immunity threshold will also be higher than 60% in some places and lower in others.

“I think the range of R0 consistent with data for COVID-19 is larger than most people give credit to,” said Marc Lipsitch of Harvard University, who has been advising health officials in Massachusetts and abroad. He cited data indicating it could be more than twice as high in some urban settings as the overall U.S. average.

And just as R0 turns out to be a variable, and not a static number, the way people acquire their immunity also varies, with important implications for calculating that herd immunity threshold.

Usually, researchers only think about herd immunity in the context of vaccine campaigns, many of which assume that everyone is equally likely to contract and spread a disease. But in a naturally spreading infection, that’s not necessarily the case. Differences in social behaviors lead some people to have more exposure to a disease than others. Biological differences also play a role in how likely people are to get infected.

Photo of Gabriela Gomes against a blue background
Gabriela Gomes of the University of Strathclyde in Scotland studies how biological and behavioral differences can affect the spread of a virus. She concludes some parts of the world may already be close to reaching herd immunity. Courtesy of Gabriela Gomes

“We are born different, and then these differences accumulate as we live different experiences,” said Gabriela Gomes of the University of Strathclyde in Scotland. “This affects how able people are to fight a virus.”

Epidemiologists refer to these variations as the “heterogeneity of susceptibility,” meaning the differences that cause some people to be more or less likely to get infected.

But this is too much nuance for vaccination campaigns. “Vaccines are generally not distributed in a population with respect to how many contacts people have or how susceptible they are, because we don’t know that,” said Virginia Pitzer of the Yale School of Public Health. Instead, health officials take a maximalist approach and, in essence, vaccinate everyone.

However, in an ongoing pandemic with no guarantee that a vaccine will be available anytime soon, the heterogeneity of susceptibility has real implications for the disease’s herd immunity threshold.

In some cases it will make the threshold higher. This could be true in places like nursing homes, where the average person might be more susceptible to COVID-19 than the average person in the broader population.

But on a larger scale, heterogeneity typically lowers the herd immunity threshold. At first the virus infects people who are more susceptible and spreads quickly. But to keep spreading, the virus has to move on to people who are less susceptible. This makes it harder for the virus to spread, so the epidemic grows more slowly than you might have anticipated based on its initial rate of growth.

“The first person is going to be likely to infect the people who are most susceptible to begin with, leaving the people who are less susceptible toward the latter half of the epidemic, meaning the infection could be eliminated sooner than you’d expect,” Lipsitch said.

Estimating Heterogeneity

So how much lower is the herd immunity threshold when you’re talking about a virus spreading in the wild, like the current pandemic?

According to the standard models, about 60% of the U.S. population would need to be vaccinated against COVID-19 or recover from it to slow and ultimately stop the spread of the disease. But many experts I talked to suspect that the herd immunity threshold for naturally acquired immunity is lower than that.

“My guess would be it’s potentially between 40 and 50%,” Pitzer said.

Lipsitch agrees: “If I had to make a guess, I’d probably put it at about 50%.”


These are mostly just educated estimates, because it’s so hard to quantify what makes one person more susceptible than another. Many of the characteristics you might think to assign someone — like how much social distancing they’re doing — can change from week to week.

“The whole heterogeneity problem only works if the sources of heterogeneity are long-term properties of a person. If it’s being in a bar, that’s not in itself sustained enough to be a source of heterogeneity,” Lipsitch said.

Heterogeneity may be hard to estimate, but it’s also an important factor in determining what the herd immunity threshold really is. Langwig believes that the epidemiological community hasn’t done enough to try and get it right.

“We’ve kind of been a little sloppy in thinking about herd immunity,” she said. “This variability really matters, and we need to be careful to be more accurate about what the herd immunity threshold is.”

Some recent papers have tried. In June the journal Science published a study that incorporated a modest degree of heterogeneity and estimated the herd immunity threshold for COVID-19 at 43% across broad populations. But one of the study’s co-authors, Tom Britton of Stockholm University, thinks there are additional sources of heterogeneity their model doesn’t account for.

“If anything, I’d think the difference is bigger, so that in fact the herd immunity level is probably a bit smaller than 43%,” Britton said.

Another new study takes a different approach to estimating differences in susceptibility to COVID-19 and puts the herd immunity threshold even lower. The paper’s 10 authors, who include Gomes and Langwig, estimate that the threshold for naturally acquired herd immunity to COVID-19 could be as low as 20% of the population. If that’s the case, the hardest-hit places in the world may be nearing it.
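For a sense of how thresholds as low as 20% can come out of such models, one closed form that appears in this line of work – sketched here on my own initiative, since the paper itself fits more detailed models – assumes gamma-distributed susceptibility with coefficient of variation cv, which lowers the threshold to 1 − (1/R0)^(1/(1+cv²)):

```python
# Hedged sketch: threshold under gamma-distributed susceptibility with
# coefficient of variation cv; the homogeneous case cv = 0 recovers 1 - 1/R0.
def het_threshold(r0: float, cv: float) -> float:
    return 1.0 - (1.0 / r0) ** (1.0 / (1.0 + cv**2))

for cv in (0.0, 1.0, 2.0):
    print(cv, round(het_threshold(2.5, cv), 2))  # 0.6, 0.37, 0.17
```

With R0 = 2.5, cv = 1 brings the threshold to roughly 37% and cv = 2 to roughly 17% – a reminder of how strongly the answer depends on a quantity that is very hard to measure, a point the sceptics below press.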

“We’re getting to the conclusion that the most affected regions like Madrid may be close to reaching herd immunity,” said Gomes. An early version of the paper was posted in May, and the authors are currently working on an updated version, which they anticipate posting soon. This version will include herd immunity estimates for Spain, Portugal, Belgium and England.

Many experts, however, consider these new studies — not all of which have been peer-reviewed yet — to be unreliable.


In a Twitter thread in May, Dean emphasized that there’s too much uncertainty around basic aspects of the disease — from the different values of R0 in different settings to the effects of relaxing social distancing — to place much confidence in exact herd immunity thresholds. The threshold could be one number as long as a lot of people are wearing masks and avoiding large gatherings, and another much higher number if and when people let their guard down.

Other epidemiologists are also skeptical of the low numbers. Jeffrey Shaman of Columbia University said that 20% herd immunity “is not consistent with other respiratory viruses. It’s not consistent with the flu. So why would it behave differently for one respiratory virus versus another? I don’t get that.”

Miller added, “I think the herd immunity threshold [for naturally acquired immunity] is less than 60%, but I don’t see clear evidence that any [place] is close to it.”

Ultimately, the only way to truly escape the COVID-19 pandemic is to achieve large-scale herd immunity — everywhere, not just in a small number of places where infections have been highest. And that will likely only happen once a vaccine is in widespread use.

In the meantime, to prevent the spread of the virus and lower that R0 value as much as possible, distancing, masks, testing and contact tracing are the order of the day everywhere, regardless of where you place the herd immunity threshold.


“I can’t think of any decision I’d make differently right now if I knew herd immunity was somewhere else in the range I think it is, which is 40-60%,” said Lipsitch.

Shaman, too, thinks that uncertainty about the naturally acquired herd immunity threshold, combined with the consequences for getting it wrong, leaves only one path forward: Do our best to prevent new cases until we can introduce a vaccine to bring about herd immunity safely.

“The question is: Could New York City support another outbreak?” he said. “I don’t know, but let’s not play with that fire.”


Why a vaccine could take years – and may not be possible at all.

By Paul Nuki, Global Health Security Editor, London 5 May 2020 • 10:19am

An increasing number of scientists are warning that finding an effective jab may take much longer than 18 months

There was good news and bad news on the hunt for a vaccine on Monday.

The good news is that nations are coming together and starting to move forward as one, not just in the search for a vaccine, but across the full gamut of technological innovations – including treatment and diagnostics – that hold promise in the fight against the virus.

At a virtual conference, co-hosted by the UK, representatives from countries across the globe came together to pledge funds and agree in principle that the innovations which result should be shared “equitably” around the world.

A total of €7.5 billion was raised from over 40 countries. And – perhaps most significantly – China made a surprise appearance, pledging funds and saying it too would share its technology. 

“Panic and blame games are not useful at all”, said its ambassador to the EU. “It is our conviction that together we can rise to the challenge and prevail.”

Of the major world players, only America, India and Russia were missing in action at yesterday’s event. Russia probably has little to add. India, as a major manufacturer of vaccines, may be keeping its powder dry for the negotiations ahead. And there were tentative signs America may yet join the global fight.

He is never easy to judge, but during a Fox News broadcast on Sunday evening, President Donald Trump seemed to have twigged that ‘vaccine nationalism’ may not be the most sensible game to play before you know what cards you have been dealt.

Asked if he thought another country could beat the US to finding a vaccine, he said: “I’m now going to say something that is not like me . . . I don’t care, I just want to get a vaccine that works. If it’s another country I’ll take my hat off to them . . . We’re working with other countries, we’re working with Australia, we’re working with UK.”

The bad news on the vaccine front is that an increasing number of scientists are warning that finding an effective jab may take much longer than the year to 18 months people have been talking about. You can call them kill-joys – and many have – but the truth is they have a point.

Similarly optimistic positions were taken in the early days of the HIV pandemic and now – more than 30 years later – there is still no vaccine. Even the jabs we have for influenza are seasonal and only partially effective. 

In fact, the mumps vaccine – considered to be the fastest ever approved in 1967 – took four years to go from collecting viral samples to licensing a working drug. 

David States, chief medical officer of the US health technology company Angstrom Bio, spelt out the challenges in a powerful thread on Twitter last week. 


“If you’re hoping a vaccine is going to be a knight in shining armour saving the day, you may be in for a disappointment. Sars-Cov-2 is a highly contagious virus. A vaccine will need to induce durable high level immunity, but coronaviruses often don’t induce that kind of immunity,” he said.

Pointing to a UK study recently released by scientists in Oxford, he noted that the crucial IgG antibodies detected in the blood of British Covid-19 patients appear to fade noticeably after just two months. 

“This is consistent with the other human coronaviruses. They induce an immune response, but it tends to fade so the same virus can reinfect us a year or two later”, said Mr States.

Coronaviruses have long caused pneumonia in farm animals, including chickens, pigs and cattle, but here too vaccines have proved largely ineffective to date.

“The problem is that Sars-Cov-2 is a highly contagious virus”, said Mr States. “That means a vaccine will need to be quite effective if it’s going to stop the spread.

“The polio, measles and smallpox vaccines are really remarkable medicines inducing high level long-lasting immunity, but not all vaccines work so well.”

On the positive side, Sars-Cov-2 is an RNA virus whose genome is generally more stable than those of influenza or HIV, which mutate rapidly. This, researchers hope, means they will not have to aim at a constantly moving target. If a vaccine can be found, it may stick.

It may also be possible to create a vaccine that reduces the symptoms of Covid-19 even if it does not prevent the disease itself. As the chief medical officer Professor Chris Whitty pointed out in a lecture last week, vaccines can be therapeutic as well as preventative.

The other big positive is the scale of the race. Never in history has so much human, financial and technical resource been dedicated to finding a vaccine for a single virus.

According to the World Health Organization, there are already 82 vaccine candidates being pursued for Sars-Cov-2 across the globe, six of which have already progressed to human trials.

A successful Chinese trial of a vaccine in monkeys was also reported at the weekend. Eight rhesus macaques were given the jab and none developed serious symptoms when the virus was later injected directly into their lungs. In contrast, four animals in a control group all developed pneumonia.

The results “give us a lot of confidence” that the vaccine will work in humans, one of the lead researchers told Science magazine.

Comment

This article is about blinding people with science – the reason why science teaching fails the masses. They are not meant to understand.

DNA, or deoxyribonucleic acid, is like a blueprint of biological guidelines that a living organism must follow to exist and remain functional. RNA, or ribonucleic acid, helps carry out this blueprint’s guidelines. Of the two, RNA is more versatile than DNA, capable of performing numerous, diverse tasks in an organism, but DNA is more stable and holds more complex information for longer periods of time.

Definition –
DNA: A nucleic acid that contains the genetic instructions used in the development and functioning of all modern living organisms. DNA’s genes are expressed, or manifested, through the proteins that its nucleotides produce with the help of RNA.
RNA: The information found in DNA determines which traits are to be created, activated, or deactivated, while the various forms of RNA do the work.

Structure –
DNA: Double-stranded. It has two nucleotide strands which consist of its phosphate group, five-carbon sugar (the stable 2-deoxyribose), and four nitrogen-containing nucleobases: adenine, thymine, cytosine, and guanine.
RNA: Single-stranded. Like DNA, RNA is composed of its phosphate group, five-carbon sugar (the less stable ribose), and four nitrogen-containing nucleobases: adenine, uracil (not thymine), guanine, and cytosine.

There is the science most won’t understand. Basically, the article is saying that the virus has no stable DNA structure, only the RNA operating system. It is like calling it a building without foundations. It always helps to visualise things in simple terms.

To attack a virus based on DNA would be impossible, but with RNA, if they press enough buttons, using computers, they might just find a weakness making it fall apart. How on earth they know so much is another matter. Also, why stop the world economy, killing more people than are saved while you are doing it? Well, that is about politics, rich people and power. Robert Cook

Where Did Corona Come From ? May 3rd 2020

How Covid-19 began has become increasingly contentious, with the US and other allies suggesting China has not been transparent about the origins of the outbreak.
Donald Trump, the US president, has given credence to the idea that intelligence exists suggesting the virus may have escaped from a lab in Wuhan, although the US intelligence community has pointedly declined to back this up. The scientific community says there is no current evidence for this claim.
This follows reports that the White House had been pressuring the US intelligence community on the claim, recalling the Bush administration’s pressure to “stove pipe” intelligence before the war in Iraq.
What’s the problem with the Chinese version?
A specific issue is that the official origin story doesn’t add up in terms of the initial epidemiology of the outbreak, not least the incidence of early cases with no apparent connection to the Wuhan seafood market, where Beijing says the outbreak began. If these people were not infected at the market, or via contacts who were infected at the market, critics ask, how do you explain these cases?
The Wuhan labs
Two laboratories in Wuhan studying bat coronaviruses have come under the spotlight: the Wuhan Institute of Virology (WIV), a biosecurity level 4 facility – the highest for biocontainment – and the level 2 Wuhan Centre for Disease Control, which is located not far from the fish market and had collected bat coronavirus specimens.
Several theories have been promoted. The first, and wildest, is that scientists at WIV were engaged in experiments with bat coronavirus, involving so-called gene splicing, and the virus then escaped and infected humans. A second version is that sloppy biosecurity among lab staff and in procedures, perhaps in the collection or disposal of animal specimens, released a wild virus.
Is there any evidence the virus was engineered?
The scientific consensus rejecting the virus being engineered is almost unanimous. In a letter to Nature in March, a team in California led by microbiology professor Kristian Andersen said “the genetic data irrefutably shows that [Covid-19] is not derived from any previously used virus backbone” – in other words spliced sections of another known virus.
Far more likely, they suggested, was that the virus emerged naturally and became stronger through natural selection. “We propose two scenarios that can plausibly explain the origin of Sars-CoV-2: natural selection in an animal host before zoonotic [animal to human] transfer; and natural selection in humans following zoonotic transfer.”
Peter Ben Embarek, an expert at the World Health Organization in animal to human transmission of diseases, and other specialists also explained to the Guardian that if there had been any manipulation of the virus you would expect to see evidence in both the gene sequences and also distortion in the data of the family tree of mutations – a so-called “reticulation” effect.
In a statement to the Guardian, James Le Duc, the head of the Galveston National Laboratory in the US, the biggest active biocontainment facility on a US academic campus, also poured cold water on the suggestion.
“There is convincing evidence that the new virus was not the result of intentional genetic engineering and that it almost certainly originated from nature, given its high similarity to other known bat-associated coronaviruses,” he said.
What about an accidental escape of a wild sample because of poor lab safety practices?
The accidental release of a wild sample has been the focus of most attention, although the “evidence” offered is at best highly circumstantial.
The Washington Post has reported concerns in 2018 over security and management weakness from US embassy officials who visited the WIV several times, although the paper also conceded there was no conclusive proof the lab was the source of the outbreak.
Le Duc, however, paints a different picture of the WIV. “I have visited and toured the new BSL4 laboratory in Wuhan, prior to it starting operations in 2017 … It is of comparable quality and safety measures as any currently in operation in the US or Europe.”
He also described encounters with Shi Zhengli, the Chinese virologist at the WIV who has led research into bat coronaviruses, and discovered the link between bats and the Sars virus that caused disease worldwide in 2003, describing her as “fully engaged, very open and transparent about her work, and eager to collaborate”.
Maureen Miller, an epidemiologist who worked with Shi as part of a US-funded viral research programme, echoed Le Duc’s assessment. She said she believed the lab escape theory was an “absolute conspiracy theory” and referred to Shi as “brilliant”.
Problems with the timeline and map of the spread of the virus
While the experts who spoke to the Guardian made clear that understanding of the origins of the virus remained provisional, they added that the current state of knowledge of the initial spread also created problems for the lab escape theory.
When Peter Forster, a geneticist at Cambridge, compared sequences of the virus genome collected early in the Chinese outbreak – and later globally – he identified three dominant strains.
Early in the outbreak, two strains appear to have been in circulation at roughly the same time – strain A and strain B – with a C variant later developing from strain B.
But in a surprise finding, the version with the closest genetic similarity to bat coronavirus was not the one most prevalent early on in the central Chinese city of Wuhan but instead associated with a scattering of early cases in the southern Guangdong province.
Between 24 December 2019 and 17 January 2020, Forster explains, just three out of 23 cases in Wuhan were type A, while the rest were type B. In patients in Guangdong province, however, five out of nine were found to have type A of the virus.
“The very small numbers notwithstanding,” said Forster, “the early genome frequencies until 17 January do not favour Wuhan as an origin over other parts of China, for example five of nine Guangdong/Shenzhen patients who had A types.”
In other words, it still remains far from certain that Wuhan was even necessarily where the virus first emerged.
If there is no evidence of engineering and the origin is still so disputed, why are we still talking about the Wuhan labs theory?
The pandemic has exacerbated existing geopolitical struggles, prompting a disinformation war that has drawn in the US, China, Russia and others.
Journalists and scientists have been targeted by people with an apparent interest in pushing circumstantial evidence related to the virus’s origins, perhaps as part of this campaign and to distract from the fact that few governments have had a fault-free response.
What does this mean now?
The current state of knowledge about coronavirus and its origin suggest the most likely explanation remains the most prosaic. Like other coronaviruses before, it simply spread to humans via a natural event, the starting point for many in the scientific community including the World Health Organization.
Further testing in China in the months ahead may eventually establish the source of the outbreak. But for now it is too early.

Comment

The above article does not add up. On the one hand, the virus was not man made; on the other, the west does not believe China’s version. Add to that, they claim to know exactly how it will mutate, how many phases of infection there will be, that the heat won’t kill it and that it will be back. They also know that social distancing and masks work, even though there is no evidence; that we are all at risk, even though we are not; and how many millions it is going to kill without social distancing – while people are forced to lose jobs and sell all they have to rich asset strippers.

I don’t know how anyone could believe official sources on this matter. The whole business is rather a puzzle and very much an outcome of global economics, conflict and free movement – which they never mention as having anything to do with cause and rapid spread.

The cause is one thing, the consequences are many and officialdom will continue the propaganda to reinforce global elite greed at the expense of the ignorant terrified masses. Robert Cook

The “Black Goo” Conspiracy And The Falklands Cover-Up? May 1st 2020

Anyone who is a fan of The X-Files will be aware of a recurring theme about black oil. This is an alien substance and is central to the overall story that is interwoven into the series. Very plausible – for a sci-fi TV show, right?

However, there are conspiracies in reality that suggest a very similar material exists.

The Black Goo is said to be an alien life form of some kind, and the reason the Falklands War raged as it did in 1982. The sinking of the Belgrano – already an affair covered in suspicion – and the apparently imminent threat to British citizens of the islands were, in this telling, part of the deeper cover-up, drawing attention away from the real reason the British military suddenly descended upon the Falkland Isles.

As far as conspiracies go, this is as outlandish a theory as you are ever likely to hear – although that doesn’t mean it might not be true, or at least elements of it. Regardless of “true” motives, the Falklands conflict itself was very real, and for many, very consequential or even fatal. Of that, there is no doubt. We will not get into the specifics of the conflict – official or otherwise – but just to provide a backdrop, check out the video below, which gives a brief outline of the gritty events of 1982.

The Real Reasons For The Falklands War?

It is not unusual at all for strange sightings to be witnessed and reported in times of conflict. Numerous accounts of “foo fighters” were reported in the battles over Europe in the Second World War for example. Similarly, as the Vietnam War raged, sightings of strange crafts exploded during this period in the area. It seems the Falklands conflict was to be no different.

For example, former soldier Jeff Pearson spoke of seeing a bright, intense light hovering over the waters at the furthest point of East Falkland while on an R&R break. You can read about his experience in full here.

Another soldier spoke twenty-four years after the incident in 2006, about an “oval spacecraft with lights” that zipped across the sky at great speed. You can read his account in full here.

Of course, some believe the UFO sightings were happening because of the real reason the British military had been dispatched to the area in the first place. As outlandish as the claims are, they hold that the war was a distraction allowing the mystery black goo to be obtained and brought back to mainland Britain. (Does this picture show a UFO in the Falkland Islands?) According to the conspiracy, this black goo substance made its way to the United Kingdom courtesy of Royal Navy ships. Deep in a hidden-away lab, secret experiments and testing of the substance would take place. In short, things went wrong.

The Black Goo apparently found its way into the water supply and is now permeating every aspect of our lives. In the words of Miles Johnston, “it is in a state of learning!”

The theory arguably seeped into the mainstream with the 2014 Channel 4 documentary by Dan Schreiber, The Great UFO Conspiracy. It was a tongue-in-cheek look at the general UFO community and their theories, but not something that mocked those who researched such things. Schreiber spoke on several occasions to Johnston, including in front of the MI6 building, where he broke the news to him about the aforementioned black goo.

​The short video below is a snippet from that particular conversation between the two.

“Super Soldiers” And Top Secret Projects

One of the people Schreiber met during the aforementioned documentary was the (then) little-known conspiracy theorist and whistle-blower Max Spiers. The two spoke under a bridge in a typical London location.

Spiers claimed he was a secret “super soldier” who had begun to recover his memories and had turned whistle-blower against the elite who oversee such programs as MK Ultra and Project Mannequin – both of which he claimed he had direct links to.

​The video below features Spiers speaking with Schreiber in the aforementioned Channel 4 documentary.

Spiers believed that, due to his time in the Mannequin Project, he was the subject of “programming” – so much so that he believed he could be “triggered” at any moment by unknown handlers. Indeed, many people who associated with Spiers would later claim to have noticed strange, mysterious people appearing to shadow him.

In the months before his unfortunate death, he would speak of being subjected to “astral attacks” from those who didn’t wish him to convey the information he was slowly unlocking in his “honeycombed” mind. Although not everyone subscribed to his claims or theories, he was an interesting person and a charismatic speaker.

​The video below is worth checking out. It is Spiers’ final UK broadcast interview. 

Around two years after filming took place, Spiers would mysteriously die while in Poland. His death made the mainstream media platforms – perhaps not surprising when details emerged of him having texted his mother before his death, telling her to investigate if anything was to happen to him, and of him having vomited “two litres of black liquid” in the hours before he died.

It is interesting that a large voice behind the Black Goo conspiracy is Miles Johnston. Johnston is also a loud voice in the Project Mannequin conspiracy theories and the alleged secret base in Peasemore, Berkshire. He was also a close associate of the late Spiers – in fact, it is perhaps likely that it was Johnston who introduced Schreiber to Spiers during the making of the program.

Is it purely coincidence that “black liquid” was reported to have been vomited by Spiers in the hours before his death?

Check out the video below – the BBC documentary, Fractured, that was made several months following Spiers’ death in February 2017. 

“Death Ray” Technology?

​Dr David Clark even made claims that the British military wanted to test out their new “death ray”, and the conflict in the Falklands provided a perfect opportunity to do so.

A declassified document written by Michael Heseltine to (then) Prime Minister, Margaret Thatcher, appears to back up these claims. Written in 1983, shortly after the conflict, it states that they had conducted tests on “a naval laser weapon, designed to dazzle low flying Argentinian pilots attacking the ships!” He continues, “This weapon was not used in action, and knowledge of it has been kept to a very restricted circle!”

As the black goo conspiracy suggests, the “death ray” was not the only secret activity taking place during that time. And if there is any truth at all to the claims of Max Spiers, not to mention the truth behind his mysterious death, maybe these claims require another look. Regardless of how bizarre or outright crazy some of them are, perhaps we should look at some of them again, just to be sure!

​Check out the videos below for further viewing of this intriguing conspiracy theory. As always, make of them what you will.

[Marcus Lowth March 2017]

A scientist has a disturbing theory for why we haven’t met aliens yet April 29th 2020

By Louis Staples in discover

Right now, you’d understand if aliens didn’t want to pay Earth a visit, what with the potentially deadly Covid-19 pandemic and all.

But after 16 years of secrecy, the Pentagon has released footage of UFOs which has made many ask, for the zillionth time: are we really alone in the universe? Or are aliens already among us? 

For years, these questions have divided scientists, sci-fi fans and conspiracy theorists. Although many explanations have been put forward for why we haven’t met aliens yet, none have been completely convincing or universally accepted. 

In short, in a world where we’re obsessed with discovering every little thing, this is one big question we really can’t answer.

But two years ago, when the world seemed like a very different place, Russian physicist Alexander Berezin, from the National Research University of Electronic Technology (MIET), had a theory. He suggested that once a civilisation reaches the capabilities of spreading across the stars, it will inevitably wipe out all other civilisations.

Just what we need to hear.

The grim solution doesn’t hypothesise a necessarily evil alien race (phew!); it’s more that they might not notice us, and that their exponential expansion across the galaxy might matter more to them than we do. Cheery indeed.

He wrote in 2018:

They simply won’t notice, the same way a construction crew demolishes an anthill to build real estate because they lack incentive to protect it.

So why haven’t we met them yet and been obliterated?

Well, it’s not all bad news, sort of. Berezin suggests that the reason humans are still here is that we are not likely to be the ants. In other words, we are the future destroyers of countless civilisations and we’re just not ready yet.

Assuming the hypothesis above is correct, what does it mean for our future? The only explanation is the invocation of the anthropic principle. We are the first to arrive at the [interstellar] stage. And, most likely, will be the last to leave.

He cites colonialism and capitalism as two historical examples of the forces that will eventually prompt humans to destroy other cultures.

Berezin hopes that he’s wrong and, to be honest, so do we. 

So yeah, it seems like things could always be worse. If this theory is true, let’s hope that aliens steer clear of us for a little (a lot) longer, and that it’s a very, very, very long time before we become the aliens who destroy other places.

HT: IFLScience

US Air Force Admit They Can Control the Weather

June 6, 2015Geopolitics101158 Comments

The US Air Force and DARPA would like us to believe that they have stopped using HAARP in Alaska for research and experimentation. Even then, we all know that there are other HAARP systems out there, in the form of radar communication and surveillance systems rigged on top of mobile platforms that are deployable in any international waters around the world.

All they need to do is twist a knob to shift the frequency of the main wave into the microwave range for a frequency-modulated [FM] broadcast, and increase the transmission power high enough to reach and heat up the atmosphere above the designated target.

The technology is covered under US Patent 4,686,605 on the “Method and Apparatus for Altering a Region in the Earth’s Atmosphere, Ionosphere, and/or Magnetosphere.”

How does it work?

For those who have no technical appetite, just imagine the same frequency signal used in your microwave oven that cooks your breakfast…

[Image: diagram of a microwave oven]

… is directed towards the atmosphere from the tip of the antennas in the array shown below.

[Image: the HAARP antenna array]

Again, the same principle used in TV or cell-site broadcasting is being used on these HAARP and radar platforms. The only difference in cooking up the sky, as opposed to your breakfast, is that the voltage needed to transmit the same microwave frequency signal is in the range of hundreds of millions of volts, owing to the distance the signal must traverse between the antenna and the target.

While

“A typical consumer microwave oven consumes 1,100 W AC and produces 700 W of microwave power, an efficiency of 64%. The other 400 W are dissipated as heat, mostly in the magnetron tube.”

… your government’s HAARP emitter may need 100 billion watts of sheer power, as the patent above indicates, in order to achieve a specific outcome. There’s no theoretical limit, of course, as to how much ego will play into the process.
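
The quoted oven figures are easy to verify, and the scale gap to the claimed 100-billion-watt emitter can be computed the same way. Here is a minimal Python sketch using only the numbers given in the text above; the final comparison is purely illustrative:

```python
# Quick check of the oven figures quoted above, plus the scale gap to the
# 100-billion-watt figure the article attributes to the patent. All numbers
# come from the surrounding text; the comparison is illustrative only.

oven_input_w = 1_100        # AC power drawn by a typical consumer oven
oven_output_w = 700         # microwave power actually produced
efficiency = oven_output_w / oven_input_w
print(f"{efficiency:.0%}")  # ~64%, matching the quoted efficiency

claimed_emitter_w = 100e9   # the claimed transmitter power (100 billion W)
print(f"{claimed_emitter_w / oven_output_w:.1e}")  # ~1.4e+08 ovens' worth
```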

What could possibly happen to the preselected region of the atmosphere when enough power is fed to the transmission array?

Understand that even an increase of just 1°C in atmospheric temperature is more than enough to initiate a significant weather perturbation. Bear in mind that gases flow from high to low pressure, and that gas pressure is directly proportional to temperature.

In short, if one heats up at least three specific locations in the atmosphere, the common central region, having relatively lower pressure than the three heated points, will become the eye of the storm. Three competing forces moving towards a common region can only be resolved through a downward spiral rotation, the cooler gas being denser than the heated air. Depending on the amount of energy used, the whole process could take days to develop.

Chemtrailing and cloud seeding could certainly enhance the process far beyond weather manipulation. Notoriously, these maniacs almost always have multiple goals for a single action.

Those who understand frequency modulation know that any low-frequency signal can be piggybacked onto a high-frequency radio signal (a minimal signal-processing sketch follows the list below). In short, it is possible to alter human behavior by FM-broadcasting brain wave frequencies, to wit:

  • Delta waves (below 4 Hz) occur during sleep
  • Theta waves (4–7 Hz) are associated with sleep, deep relaxation (like hypnotic relaxation), and visualization
  • Alpha waves (8–13 Hz) occur when we are relaxed and calm
  • Beta waves (13–38 Hz) occur when we are actively thinking, problem-solving, etc.
  • The sensory motor rhythm (or SMR; around 14 Hz) was originally discovered to prevent seizure activity in cats. SMR activity seems to link brain and body functions.
  • Gamma brain waves (39–100 Hz) are involved in higher mental activity and consolidation of information. An interesting study has shown that advanced Tibetan meditators produce higher levels of gamma than non-meditators both before and during meditation.

Those are the “good waves”. But there is also a band of waves that could induce hallucinations, blindness, agitation, pain and despair.
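
The piggybacking mechanism described here is, in itself, just textbook frequency modulation. Below is a minimal Python/NumPy sketch of FM in general; the frequencies are scaled down drastically for illustration, and every value is an assumption chosen for the demo, not anything taken from a real transmitter:

```python
import numpy as np

# Minimal sketch of frequency modulation (FM): a low-frequency "message"
# signal rides on a high-frequency carrier by nudging the carrier's
# instantaneous frequency. All values here are illustrative only.

fs = 10_000          # sample rate, Hz
t = np.arange(0, 1.0, 1 / fs)

f_carrier = 1_000    # carrier frequency, Hz (real broadcasts use MHz/GHz)
f_message = 10       # low-frequency message tone, Hz
k = 50               # frequency deviation, Hz per unit of message amplitude

message = np.sin(2 * np.pi * f_message * t)

# Standard FM: the phase is the integral of the instantaneous frequency,
# approximated here with a cumulative sum.
phase = 2 * np.pi * f_carrier * t + 2 * np.pi * k * np.cumsum(message) / fs
fm_signal = np.cos(phase)
```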

In short, the possibilities are endless using this simple broadcast technology, prompting Tesla to release it to a number of countries after SS George Scherff, Jr., aka George W. Bush, smuggled the electrical plans from the wizard’s own laboratory, for the Deep State’s own benefit.

For initiating earthquakes, an extremely low frequency [ELF] of 2.5 Hz, or thereabouts, must be modulated onto the radio carrier frequency.

That is the same frequency used against Japan to initiate a 7.6 earthquake in 2011. However, the quake itself was only implemented to provide cover for the detonation of a suitcase nuke 4 kilometres out to sea from the Fukushima nuclear plants, creating the 30-metre tsunami that the whole world witnessed.

Just like any other geoweapon attack, the whole operation will pay for itself:

Other Effects of Radio Transmission

In order to have an idea of how dangerous this technology really is when under the control of psychopaths controlling the government, we need to look at the World Health Organization article about a few of the possible effects of being exposed to radar [1 megawatt] and radio broadcasts [50 kilowatts]:
Possible health effects

Most studies conducted to date examined health effects other than cancer. They probed into physiological and thermoregulatory responses, behavioural changes and effects such as the induction of lens opacities (cataracts) and adverse reproductive outcome following acute exposure to relatively high levels of RF fields. There are also a number of studies that report non-thermal effects, where no appreciable rise in temperature can be measured.

Cancer-related studies: Many epidemiological studies have addressed possible links between exposure to RF and excess risk of cancer. However, because of differences in the design and execution of these studies, their results are difficult to interpret. A number of national and international peer review groups have concluded that there is no clear evidence of links between RF exposure and excess risk of cancer. WHO has also concluded that there is no convincing scientific evidence that exposure to RF shortens the life span of humans, or that RF is an inducer or promoter of cancer. However, further studies are necessary.

Thermal effects: RF fields have been studied in animals, including primates. The earliest signs of an adverse health consequence, found in animals as the level of RF fields increased, include reduced endurance, aversion of the field and decreased ability to perform mental tasks. These studies also suggest adverse effects may occur in humans subjected to whole body or localized exposure to RF fields sufficient to increase tissue temperatures by greater than 1°C. Possible effects include the induction of eye cataracts, and various physiological and thermoregulatory responses as body temperature increases. These effects are well established and form the scientific basis for restricting occupational and public exposure to RF fields.

Non-thermal effects: Exposure to RF levels too low to involve heating (i.e., very low SARs) has been reported by several groups to alter calcium ion mobility, which is responsible for transmitting information in tissue cells. However, these effects are not sufficiently established to provide a basis for restricting human exposure.

Pulsed RF fields: Exposure to very intense pulsed RF fields, similar to those used by radar systems, has been reported to suppress the startle response and evoke body movements in conscious mice. In addition, people with normal hearing have perceived pulsed RF fields with frequencies between about 200 MHz and 6.5 GHz. This is called the microwave hearing effect. The sound has been variously described as a buzzing, clicking, hissing or popping sound, depending on the RF pulsing characteristics. Prolonged or repeated exposure may be stressful and should be avoided where possible.

RF shocks and burns: At frequencies less than 100 MHz, RF burns or shock may result from charges induced on metallic objects situated near radars. Persons standing in RF fields can also have high local absorption of the fields in areas of their bodies with small cross sectional areas, such as the ankles. In general, because of the higher frequencies at which most modern radar systems operate, combined with their small beam widths, the potential for such effects is very small.

HAARP Can Be A Tool for Progress

One must understand that HAARP itself is not evil — but the people who are using it at the moment are. All technologies are double-edged swords that can be put to work for everyone’s benefit, or destruction.

Here’s why, according to the same patent:

[Image: list of proposed uses from the HAARP patent]

In short, they could end their “climate change” stupidity right now by producing enough ozone to replenish that protective layer, reducing carbon monoxide and similar toxic oxides in the atmosphere, and providing adequate rainfall in arid African deserts, at will.

They could also eliminate toxicity in the oceans to produce as much plankton as possible and increase fishermen’s catches. No more stranded mammals on sandy beaches everywhere.

However, the same people with the capability to control the weather are themselves promoting the “climate change”, or “global warming”, hoaxes. So how can we expect them to be the solution?

In short, those same people occupying the halls of power are themselves creating all of this nonsense, which obviously they too could solve with what is already in their hands, and they are retarding our full potential as a civilization through the misapplication of this scientific knowledge. That’s the crux of the problem.

Air Force Bombshell: Admits They Can Control the Weather – HAARP



While HAARP and weather control has been called a conspiracy theory by the mainstream media and government officials, during a Senate hearing on Wednesday, David Walker, deputy assistant secretary of the Air Force for science, technology and engineering, dropped a bombshell in answer to a question asked by Lisa Murkowski in relation to the dismantling of the $300 million High Frequency Active Auroral Research Program in Gakona this summer.
Walker said this is “not an area that we have any need for in the future” and it would not be a good use of Air Force research funds to keep HAARP going. “We’re moving on to other ways of managing the ionosphere, which the HAARP was really designed to do,” he said. “To inject energy into the ionosphere to be able to actually control it. But that work has been completed.”

Many believe HAARP was created and has been used for weather control, with enough juice to trigger hurricanes, tornadoes and earthquakes, and comments such as this raise the question of whether conspiracy theorists are more on target than anyone has admitted to date.

We need to be the solution. We need to act. Otherwise, just three units of these mobile HAARP platforms, masquerading as radars, can make a hurricane steerable by the common method known as triangulation.

060109-N-3019M-012 Pearl Harbor, Hawaii (Jan. 9, 2006) – The heavy lift vessel MV Blue Marlin enters Pearl Harbor, Hawaii with the Sea Based X-Band Radar (SBX) aboard after completing a 15,000-mile journey from Corpus Christi, Texas. SBX is a combination of the world’s largest phased array X-band radar carried aboard a mobile, ocean-going semi-submersible oil platform. It will provide the nation with highly advanced ballistic missile detection and will be able to discriminate a hostile warhead from decoys and countermeasures. SBX will undergo minor modifications, post-transit maintenance and routine inspections in Pearl Harbor before completing its voyage to its homeport of Adak, Alaska in the Aleutian Islands. U.S. Navy photo by Journalist 2nd Class Ryan C. McGinley (RELEASED)

Have they really dismantled the Alaska-based HAARP?

Unnatural weather patterns suggest HAARP is still here. But granting that they did dismantle it, we suspect that the system has only mutated.

They may not look like it, but Doppler radars provide the same set of useful and treacherous functions, when so desired.

[Image: Doppler Weather Surveillance Radar]

Above is just one of the 159 high-resolution WSR-88D S-band Doppler weather radars scattered across the US mainland that comprise the NEXRAD system, as shown below, and that supposedly monitor the weather.

As stated above, the NEXRAD Doppler Weather Surveillance System operates in the S-band frequency range of 2–4 GHz (2–4 billion cycles per second), which is incidentally the same frequency range in which our microwave ovens and other common wireless devices operate.

The S band is a designation by the Institute of Electrical and Electronics Engineers (IEEE) for a part of the microwave band of the electromagnetic spectrum covering frequencies from 2 to 4 gigahertz (GHz)… The S band also contains the 2.4–2.483 GHz ISM band, widely used for low power unlicensed microwave devices such as cordless phones, wireless headphones (Bluetooth), wireless networking (WiFi), garage door openers, keyless vehicle locks, baby monitors as well as for medical diathermy machines and microwave ovens (typically at 2.45 GHz).

  • https://en.wikipedia.org/wiki/S_band

In short, we are not making this all up. Cancer, euphoria in the midst of economic hardship, bad weather and global warming are, most of the time, artificially induced at this point of our existence.

This is not the first time that a public official has acknowledged that HAARP and weather control is not only possible, but has been, and continues to be, used as a “super weapon,” as evidenced by a statement in 1997 by former U.S. Defense Secretary William Cohen, who said:

“Others [terrorists] are engaging even in an eco-type of terrorism whereby they can alter the climate, set off earthquakes, volcanoes remotely through the use of electromagnetic waves… So there are plenty of ingenious minds out there that are at work finding ways in which they can wreak terror upon other nations… It’s real, and that’s the reason why we have to intensify our efforts.”

Is it still just a conspiracy theory if public officials admit it is true?

The NEXRAD Doppler Weather Surveillance System cannot be installed along major thoroughfares without raising curiosity and concern. Welcome to 5G wireless technology!

You can actually participate in the global efforts to cripple the Deep State organized criminal cabal’s ability for genocide, while enjoying healthcare freedom at the same time, by boycotting Big Pharma for good.

U.S. Attacked the Philippines with HAARP

February 4, 2014Geopolitics10153 Comments

Back in November 2013, we were knocked off the Net after HAARP devastated much of Leyte and the entire Visayas, Philippines. Even now, people are still suffering the effects of that manmade disaster, e.g. homelessness, high food prices, epidemics, persistent live-virus mass injections, etc.

We are also still experiencing intermittent internet connections every now and then.

Just before the attack, we already knew it was not going to be an “Act of God” but a deliberate attempt to pin us down further, for reasons we were already aware of.
Before you continue reading the featured article below, understand that even the US Air Force officially admitted its ability to control the weather during a congressional budget hearing.

“The US Air Force and DARPA would like us to believe that they have stopped using HAARP in Alaska for research and experiment. Even then, we all know that there are other HAARP systems out there in the form of radar communication and surveillance systems that are rigged on top of mobile platforms that are deployable anywhere in the world.
All they need to do is twist a button to change the frequency to microwave range and increase transmission power enough to reach and heat up the atmosphere above the target.”

Here’s the US Air Force admitting the use of HAARP during a US congressional budgetary hearing…

A Weather Weapon is Just One Big Microwave Radio Transmitter

A weather weapon is not some strange technology that we could never have or, to say the least, never understand.

The technology is composed of a wireless broadcast system, and the right microwave frequency to produce heat on any target, in this case, the atmosphere. To put it simply, it’s an FM broadcast system and a microwave oven in one device.

The only difference between your favorite FM radio station and a weather weapon is that the latter needs more power to push its signal to greater distances and reach the sky.

If its signals can reach the sky, then all the more easily can they reach terrestrial targets, agitating long-dormant volcanoes or initiating an earthquake with an extremely low frequency modulated onto the usual radio carrier through frequency modulation [FM] techniques.

Sex change hormonal treatments alter brain chemistry

Date:

October 8, 2015

Source:

Elsevier

Summary:

Hormonal treatments administered as part of the procedures for sex reassignment have well-known and well-documented effects on the secondary sexual characteristics of the adult body, shifting a recipient’s physical appearance to that of the opposite sex. New research indicates that these hormonal treatments also alter brain chemistry.

FULL STORY

Hormonal treatments administered as part of the procedures for sex reassignment have well-known and well-documented effects on the secondary sexual characteristics of the adult body, shifting a recipient’s physical appearance to that of the opposite sex.

New research published in the current issue of Biological Psychiatry indicates that these hormonal treatments also alter brain chemistry.

Researchers at the Medical University of Vienna, led by senior authors Dr. Siegfried Kasper and Dr. Rupert Lanzenberger, show that administration of the male hormone testosterone in female-to-male transsexuals raises brain levels of SERT, the protein that transports the chemical messenger serotonin into nerve cells.

In contrast, male-to-female transsexuals who received a testosterone blocker and the female hormone estrogen showed decreased levels of this protein in the brain.

SERT plays an important role in the treatment of mood and anxiety disorders, as many common antidepressants, such as Prozac, block its activity by inhibiting serotonin reuptake. In addition, some genetics studies have suggested that higher levels of serotonin transporter may increase resilience to stress and reduce risk for stress and mood disorders.

Because women are twice as likely to be diagnosed with depression as men, these changes in the levels of SERT are consistent with the increased risk for mood and anxiety disorders in females relative to males.

Lanzenberger added, “These results may explain why testosterone improves symptoms in some forms of depression. Our study also increases our knowledge on the role of sex hormones in sex differences of mood disorders.”

Overall, these findings suggest that when people switch from female to male, their biology changes in a way that is consistent with a reduced risk for mood and anxiety disorders, whereas the reverse happens when males switch to females.

“This study is the first to show changes in brain chemistry associated with the hormonal treatments administered in the sex change process,” said Dr. John Krystal, Editor of Biological Psychiatry. “It provides new insight into the ways that the hormonal differences between men and women influence mood and the risk for mood disorders.”

Story Source:

The above post is reprinted from materials provided by Elsevier. Note: Materials may be edited for content and length.

Journal Reference:

Georg S. Kranz, Wolfgang Wadsak, Ulrike Kaufmann, Markus Savli, Pia Baldinger, Gregor Gryglewski, Daniela Haeusler, Marie Spies, Markus Mitterhauser, Siegfried Kasper, Rupert Lanzenberger. High-Dose Testosterone Treatment Increases Serotonin Transporter Binding in Transgender People. Biological Psychiatry, 2015; 78 (8): 525 DOI: 10.1016/j.biopsych.2014.09.010 

Elsevier. “Sex change hormonal treatments alter brain chemistry.” ScienceDaily. ScienceDaily, 8 October 2015. <www.sciencedaily.com/releases/2015/10/151008110522.htm>.


Coronavirus mutated into three distinct strains as it spread across the world April 11th 2020

Researchers who mapped some of the original spread of coronavirus in humans have discovered there are variants of the virus throughout the world. They reconstructed the early evolutionary paths of Covid-19 as infection spread from Wuhan, China, out to Europe and North America. By analysing the first 160 complete virus genomes to be sequenced from human patients, scientists found the variant closest to that discovered in bats was largely found in patients from the US and Australia, not Wuhan. Dr Peter Forster, geneticist and lead author from the University of Cambridge, said: ‘There are too many rapid mutations to neatly trace a Covid-19 family tree. We used a mathematical network algorithm to visualise all the plausible trees simultaneously. ‘These techniques are mostly known for mapping the movements of prehistoric human populations through DNA. We think this is one of the first times they have been used to trace the infection routes of a coronavirus like Covid-19.’

The team used data from samples taken from across the world between December 24, 2019 and March 4, 2020. They found three distinct, but closely related, variants of Covid-19, which they called A, B and C. Researchers found that the closest type of coronavirus to the one discovered in bats – type A, the original human virus genome – was present in Wuhan, but was not the city’s predominant virus type. Mutated versions of A were seen in Americans reported to have lived in Wuhan, and a large number of A-type viruses were found in patients from the US and Australia. Wuhan’s major virus type was B and was prevalent in patients from across east Asia; however, it didn’t travel much beyond the region without further mutations.

The researchers say the C variant is the major European type, found in early patients from France, Italy, Sweden and England. The analysis also suggests one of the earliest introductions of the virus into Italy came via the first documented German infection on January 27, and that another early Italian infection route was related to a ‘Singapore cluster’. It is absent from the study’s Chinese mainland sample but seen in Singapore, Hong Kong and South Korea. Scientists argue their methods could be applied to the very latest coronavirus genome sequencing to help predict future global hotspots of disease transmission and surge.

The findings are published in the journal Proceedings of the National Academy of Sciences (PNAS). Variant A, most closely related to the virus found in both bats and pangolins, is described as the root of the outbreak by researchers. Type B is derived from A, separated by two mutations, then C is in turn a ‘daughter’ of B, the study suggests. The phylogenetic network methods used by the researchers – which look at evolutionary relationships among biological entities – allowed the visualisation of hundreds of evolutionary trees simultaneously in one simple graph.
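
The mutation-counting idea behind that A → B → C ordering can be illustrated in a few lines. This is a hypothetical sketch, not the paper’s network algorithm, and the toy sequences below are invented purely for illustration:

```python
# Hypothetical sketch: counting mutations between toy genome fragments to
# reproduce the A -> B -> C ordering described above. The sequences are
# invented; real genomes are ~30,000 bases long.

def hamming(a: str, b: str) -> int:
    """Number of positions at which two equal-length sequences differ."""
    return sum(x != y for x, y in zip(a, b))

variant_a = "ACGTACGTAC"   # stand-in for the bat-like root type A
variant_b = "ACGTACGTTT"   # two mutations away from A
variant_c = "ACGAACGTTT"   # one further mutation away from B

print(hamming(variant_a, variant_b))  # 2 -- "separated by two mutations"
print(hamming(variant_b, variant_c))  # 1 -- C as a "daughter" of B
```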

Read more: https://metro.co.uk/2020/04/10/coronavirus-mutated-three-distinct-strains-spread-across-world-12536852/?ito=cbshare

Twitter: https://twitter.com/MetroUK | Facebook: https://www.facebook.com/MetroUK/


Search for new life forms Posted March 1st 2020

In the coming decades, new rovers will roam the sands of Mars. An orbiter will sample the seas of Jupiter’s moon Europa. A drone will grace the skies of Saturn’s moon Titan. Mission planners dream of equipping these mechanical scouts with instruments capable of scouring the unknown environments for signs of life, but the technology required to do so is deceptively complex.

Explorers seeking alien life must first grapple with questions of fundamental biology. What does it mean to be alive? What traits must all organisms share — even those that might inhabit methane lakes or ice-locked oceans? The burgeoning field of astrobiology seeks answers in the form of “biosignatures”— surefire signs of life that a simple experiment could identify, such as DNA or proteins.

DNA is like a toolkit that stores and transmits vital information passed from a living organism to its offspring. The molecule’s ingredients, called nucleotides, are four components – adenine, cytosine, guanine and thymine – coiled in a double helix.

But as researchers debate which molecules to look for, recent work suggests casting a broader net. In 2019, for instance, a team of synthetic biologists showed that the four-molecule genetic code that describes all known life on Earth isn’t the only group of molecules that could support evolution.
“You set these grand challenges to make a new Darwinian system,” says Steven Benner, founder of the Foundation for Applied Molecular Evolution at the University of Florida and leader of the group. “That drags scientists kicking and screaming across uncharted terrain.”

The recent research, which was published in the journal Science, covered new ground regarding genetic information storage. Netflix represents digital movies as long strings of 0’s and 1’s, and all known Earth organisms follow a similar strategy. They store instructions for producing copies of themselves in their DNA — another long string, but one assembled from four molecules rather than two numbers. This system enables evolution by being reliable enough to safeguard those instructions between generations while maintaining the flexibility for occasional revisions.

But does the alphabet of life have to contain four letters? Some have argued yes – four elements strike the perfect balance between packing in more information and keeping the risk of typos low. Up to 12 letters are possible on paper, though, and Benner has spent three decades (during two of which he received NASA funding totaling nearly $5 million) realizing some of them in the lab. In the new research, his group announced the construction of an eight-molecule system capable of storing, copying and editing information.

They dubbed it hachimoji DNA, meaning “eight letters” in Japanese.
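
The information-density point is easy to make concrete. Here is a minimal Python sketch comparing binary, standard four-letter DNA and the eight-letter hachimoji alphabet; the numbers are straightforward information theory, not figures from the paper:

```python
import math

# Bits carried by a single position in alphabets of different sizes:
# log2(2) = 1 bit, log2(4) = 2 bits, log2(8) = 3 bits.

for name, size in [("binary", 2), ("standard DNA", 4), ("hachimoji", 8)]:
    bits = math.log2(size)
    print(f"{name}: {size} letters -> {bits:.0f} bits per position")

# A strand of length n can therefore distinguish size**n messages:
n = 10
print(4 ** n, 8 ** n)  # 1,048,576 vs 1,073,741,824 possible 10-letter strands
```
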
The minor molecular tweak has major consequences for biotechnology. When it comes to manipulating biomolecules and microbes, modern techniques rely on a suite of tools that work only with the traditional four elements of DNA. For even the simplest tasks with hachimoji molecules, Benner’s group had to reinvent the biological equivalents of the wheel. “Everything that you take for granted in modern biotechnology, you have to do yourself,” he says. “You’re basically back to doing 1960s molecular biology.”

And hachimoji molecules represent just minor riffs on standard DNA, with a few oxygen and nitrogen atoms shuffled around here and there. Biologists would really struggle to get a handle on a truly alien system. Letting his imagination run wild, Benner speculates about exotic DNA molecules forming a flat sheet, as opposed to a linear strand. Good luck trying to fit that square peg into a round detector.

The universal search for life
In a recent proposal under review for funding by NASA, Benner’s team advocates a more universal approach. They hope to build a device that searches for molecules with traits that are theoretically essential for any genetic molecule: traits determined by decades of experimentation with alternatives like hachimoji.

First, the molecule should be long. Short molecules can’t hold enough information to do anything useful. Second, the molecule should be complex enough that it isn’t mirror symmetric, for similar reasons. Third, its frame should feature repeating charges, either positive or negative. DNA keeps its stiff double helix shape because it has negatively charged edges that repel each other. Otherwise it could fold and get tangled. Alien DNA, the thinking goes, should fit this general archetype.

The group has designed a lunchbox-size “universal life detection” instrument that would take in water from Europa’s global ocean or Martian ice, attract long positively charged molecules to one plate and negatively charged ones to another, and use beams of light to gauge the complexity of the molecules.
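
As a purely hypothetical sketch of that screening logic, the three criteria above (length, asymmetry, repeating backbone charge) can be written as a simple filter. The Molecule fields and thresholds here are invented for illustration; the real instrument would measure such properties optically and electrically:

```python
from dataclasses import dataclass

# Hypothetical screening filter based on the three criteria described in
# the article. Field names and the length threshold are invented.

@dataclass
class Molecule:
    length: int              # number of repeating monomer units
    mirror_symmetric: bool   # simple molecules tend to be symmetric
    backbone_charges: list   # charge sign at each unit, e.g. [-1, -1, ...]

def candidate_genetic_polymer(m: Molecule, min_length: int = 20) -> bool:
    long_enough = m.length >= min_length
    complex_enough = not m.mirror_symmetric
    # Uniform, nonzero charge along the backbone, as with DNA's
    # negatively charged edges that keep the helix from tangling.
    uniform_charge = (len(set(m.backbone_charges)) == 1
                      and m.backbone_charges[0] != 0)
    return long_enough and complex_enough and uniform_charge

# Earth DNA would pass: long, asymmetric, uniformly negative backbone.
dna_like = Molecule(length=1000, mirror_symmetric=False,
                    backbone_charges=[-1] * 1000)
print(candidate_genetic_polymer(dna_like))  # True
```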

This test should detect any life in the area, microbial or otherwise, as long as it has some sort of biological genetic code akin to DNA. “If you ran seawater through it, you could find DNA from shrimp,” says Nathan Bramall, CEO of Leiden Measurement Technology, the company that would build the prototype, “but not [computerized life like Apple’s assistant] Siri.”

And the lunchbox is just one of the life-detection projects aiming to catch a ride to another world. In 2018 NASA awarded Pennsylvania-based nanotechnology start-up Goeppert a $125,000 grant to develop a device that can analyze the size and shape of potential biomolecules as they pass through a nanometer-size pore. This type of “nanopore” experiment could complement Benner’s universal life-detection machine, according to Kathryn Bywaters, a SETI scientist working at NASA Ames. “I don’t think any one life-detection instrument will be the end all be all,” she says. “It’s going to be a suite of instruments looking for multiple different smoking guns.”

Development of any single device may stall, but these two projects represent small bets in NASA’s growing astrobiology portfolio, as the agency prepares for more ambitious missions to Mars and distant moons. And both Bywaters and Bramall feel optimistic about their devices, expecting that they’ll be ready to fly before the end of the decade.

Other researchers involved in the astrobiology community, however, wonder if even these out-of-the-box approaches don’t go far enough. Buried in Benner’s criteria for a genetic molecule lies an implicit definition of life: a structure that evolves. But Carol Cleland, a philosopher and the director of the Center for the Study of Origins at the University of Colorado Boulder, worries that any search based on checking rigid boxes is doomed to have blind spots. Must all life be Darwinian life?

She praises Benner’s work targeting long, charged molecules as a great starting point but urges an even more open-minded approach that focuses on more general anomalies — phenomena that resist simple physical or biological characterization. She points to the conflicting Viking experiments on Mars in the 1970s, when a lander found some support for metabolizing microbes but no evidence for organic molecules, as a prime example. Most researchers deemed the result negative because it failed to fit the prevailing definition of life, but Cleland suggests that such perplexing edge cases may be the most fruitful directions for discovering totally new biology.

Asking whether we are alone in the solar system may be simple, but teasing out the answer will take time and effort, she suggests, no matter what instruments NASA ends up sending to far-off worlds. “I don’t think you’re going to discover truly alien life in one mission,” she says. “You’re going to discover stuff that is provocative.”

UFO

Declassified CIA Files Reveal Encounter With ‘Green Circular’ UFO Over Soviet Union During Cold War


The area where the alleged UFO was spotted nearly five decades ago was apparently used by the USSR to test experimental missiles and laser weapon systems.

A recently declassified CIA report sheds light on an alleged UFO encounter that took place at the height of Cold War in Kazakhstan in 1973, back when it was part of the Soviet Union.

The document, whose redacted version was first released in 1978 and which has now been made available on The Black Vault, a website that publishes declassified government files, mentions how the witness, identified in the paper as “Source”, “stepped outside for some air” and spotted “an unidentified sharp (bright) green circular object or mass” hovering “above cloud level”.

“Within 10 to 15 seconds of observation, the green circle widened and within a brief period of time several green concentric circles formed around the mass. Within minutes the coloring disappeared. There was no sound, such as an explosion, associated with the phenomenon”, the document states citing the witness’ observations.

According to the website, the sighting took place in the vicinity of the Sary Shagan Weapons Testing Range that was allegedly used by the USSR back then to secretly launch “experimental missiles” and to test “laser weapon systems utilizing powerful antennas”.

During a telephone interview with Newsweek, The Black Vault’s founder John Greenewald compared the encounter with the so-called USS Nimitz UFO incident, which took place in 2004.

“This is very much similar to the context we see today, with threats on military facilities,” he said. “The US Navy has gone on the record saying whatever this is, it’s a concern. They’re being encroached upon by this unidentified phenomena.”

Source Sputnik

‘Living robot’ developed by scientists using frog embryos

By Jamie Harris, PA Science Technology Reporter

Posted January 14th 2020

[Image: Scientists create ‘living robot’ (Douglas Blackiston/Tufts University/PA)]

Scientists claim to have created the world’s first living robots using stem cells from frog embryos.

The tiny hybrids are “entirely new life-forms” known as xenobots – named after the African frog used in the research – and are able to move about.

It is hoped the millimetre-wide bots could one day be used to swim around human bodies to specific areas requiring medicine, or to gather microplastic in the oceans.

“We here present a method that designs completely biological machines from the ground up,” the team from the University of Vermont writes in Proceedings of the National Academy of Sciences.

Unlike the metal robots we have become accustomed to, biological tissues present the potential advantage of being able to heal too, though the work remains very much in its early stages.

These are the world’s first living robots.

Human Ancestors May Have Evolved the Physical Ability to Speak More Than 25 Million Years Ago Smithsonian.com Posted December 11th 2019 /www.smithsonianmag.com/science-nature/human-ancestors-may-have-evolved-physical-ability-

Though when primates developed the cognitive abilities for language remains a mystery

Skulls
A human skull on display with earlier ancestor skulls and a picture of a Neanderthal man at the Museum of Natural History of Toulouse. (Alain Pitton / NurPhoto via Getty Images)

By Brian Handwerk smithsonian.com
2 hours ago

Speech is part of what makes us uniquely human, but what if our ancestors had the ability to speak millions of years before Homo sapiens even existed?

Some scientists have theorized that it only became physically possible to speak a wide range of essential vowel sounds when our vocal anatomy changed with the rise of Homo sapiens some 300,000 years ago. This theoretical timeline means that language, where the brain associates words with objects or concepts and arranges them in complex sentences, would have been a relatively recent phenomenon, developing with or after our ability to speak a diverse array of sounds.

But a comprehensive study analyzing several decades of research, from primate vocalization to vocal tract acoustic modeling, suggests the idea that only Homo sapiens could physically talk may miss the mark when it comes to our ancestors’ first speech—by a staggering 27 million years or more.

Linguist Thomas Sawallis of the University of Alabama and colleagues stress that functional human speech is rooted in the ability to form contrasting vowel sounds. These critical sounds are all that differentiates entirely unrelated words like “bat,” “bought,” “but” and “bet.” Building a language without the variety of these contrasting vowel sounds would be nearly impossible. The research team’s new study in Science Advances concludes that early human ancestors, long before even the evolution of the genus Homo, actually did have the anatomical ability to make such sounds.

When, over all those millions of years, human ancestors developed the cognitive ability to use speech to converse with each other remains an open question.

“What we’re saying is not that anyone had language any earlier,” Sawallis says. ”We’re saying that the ability to make contrasting vowel qualities dates back at least to our last common ancestor with Old World monkeys like macaques and baboons. That means the speech system had at least 100 times longer to evolve than we thought.”

Baboon Screaming
A screaming guinea baboon. Studies that have found monkeys such as baboons and macaques can make contrasting vowel sounds suggest that the last common ancestor between these primates and modern humans could make the sounds too. ( Andyworks via Getty Images)

The study explores the origins and abilities of speech with an eye toward the physical processes that primates use to produce sounds. “Speech involves the biology of using your vocal tracts and your lips. Messing around with that as a muscular production, and getting a sound out that can get into somebody else’s ear that can identify what was intended as sounds—that’s speech,” Sawallis says.

A long-popular theory of the development of the larynx, first advanced in the 1960s, held that an evolutionary shift in throat structure was what enabled modern humans, and only modern humans, to begin speaking. The human larynx is much lower, relative to cervical vertebrae, than that of our ancestors and other primates. The descent of the larynx, the theory held, was what elongated our vocal tract and enabled modern humans to begin making the contrasting vowel sounds that were the early building blocks of language. “The question is whether that’s the key to allowing a full, usable set of contrasting vowels,” Sawallis says. “That’s what we have, we believe, definitely disproven with the research that’s led up to this article.”

The team reviewed several studies of primate vocalization and communication, and they used data from earlier research to model speech sounds. Several lines of research suggested the same conclusion—humans aren’t alone in their ability to make these sounds, so the idea that our unique anatomy enabled them doesn’t appear to hold water.

Cognitive scientist Tecumseh Fitch and colleagues in 2016 used X-ray videos to study the vocal tracts of living macaques and found that monkey vocal tracts are speech ready. “Our findings imply that the evolution of human speech capabilities required neural changes rather than modifications of vocal anatomy. Macaques have a speech-ready vocal tract but lack a speech-ready brain to control it,” the study authors wrote in Science Advances.

In a 2017 study, a team led by speech and cognition researcher Louis-Jean Boë of Université Grenoble Alpes in France, also lead author of the new study, came to the same conclusion as the macaque study. By analyzing over 1,300 naturally produced vocalizations from a baboon troop, they determined that the primates could make contrasting proto-vowel sounds.

Some animals, including birds and even elephants, can mimic human voice sounds by using an entirely different anatomy. These amazing mimics illustrate how cautious scientists must be in assigning sounds or speech to specific places in the evolutionary journey of human languages.

“Of course, vocalization involves vowel production and of course, vocalization is a vital evolutionary precursor to speech,” says paleoanthropologist Rick Potts of Smithsonian’s Human Origins Program, in an email. “The greatest danger is equating how other primates and mammals produce vowels as part of their vocalizations with the evolutionary basis for speech.”

While anatomy of the larynx and vocal tract help make speech physically possible, they aren’t all that’s required. The brain must also be capable of controlling the production and the hearing of human speech sounds. In fact, recent research suggests that while living primates can have a wide vocal range—at least 38 different calls in the case of the bonobo—they simply don’t have the brainpower to develop language.

“The fact that a monkey vocal tract could produce speech (with a human like brain in control) does not mean that they did. It just shows that the vocal tract is not the bottle-neck,” says University of Vienna biologist and cognitive scientist Tecumseh Fitch in an email.

Snow Monkey
A male Japanese macaque, or snow monkey, making a threatening expression in Jigokudani Yaen-Koen National Park. (Anup Shah)

Where, when, and in which human ancestor species a language-ready brain developed is a complicated and fascinating field for further research. By studying the way our primate relatives like chimpanzees use their hands naturally, and can learn human signs, some scientists suspect that language developed first through gestures and was later made much more efficient through speech.

Other researchers are searching backward in time for evidence of a cognitive leap forward which produced complex thought and, in turn, speech language abilities able to express those thoughts to others—perhaps with speech and language co-evolving at the same time.

Language doesn’t leave fossil evidence, but more enduring examples of how our ancestors used their brains, like tool-making techniques, might be used as proxies to better understand when ancient humans started using complex symbols—visual or vocal—to communicate with one another.

For example, some brain studies show that language uses similar parts of the brain as toolmaking, and suggest that by the time the earliest advanced stone tools emerged 2 million years ago, their makers might have had the ability to talk to each other. Some kind of cognitive advance in human prehistory could have launched both skills.

Sawallis says that the search for such advances in brain power can be greatly expanded, millions of years back in time, now that it’s been shown that the physical ability for speech has existed for so long. “You might think of the brain as a driver and the vocal tract as a vehicle,” he says. “There’s no amount of computing power that can make the Wright Flyer supersonic. The physics of the object define what that object can do in the world. So what we’re talking about is not the neurological component that drives the vocal tract, we’re just talking about the physics of the vocal tract.”

How long did it take for our ancestors to find the voices they were equipped with all along? The question is a fascinating one, but unfortunately their bones and stones remain silent.

From Flat Earth to Flat Universe Posted November 5th 2019

Our understanding of the universe could be fundamentally wrong, scientists have said.

Newly released data from the Planck Telescope, which aimed to take very precise readings of the shape, size and ancient history of our universe, suggests that there could be something wrong in our physics, according to a new paper.

The issue could be an indication of a “crisis in cosmology” that may be as yet unrealised because of problems with our understanding of the shape of the universe, the authors of a new paper write.

At the moment, scientists generally believe that the universe is “flat”. That is in keeping with large amounts of data gathered from telescopes peering deep into space, including readings from the European Space Agency’s Planck Telescope.

But in a newly published paper, researchers note that the latest release of data from the same Planck telescope gave different readings than expected under our standard understanding of the universe. Those could be explained by the fact the universe is “closed”, the authors write – which would help explain issues with the readings.

That could mean that our assumption of a flat universe may actually be “mask[ing] a cosmological crisis where disparate observed properties of the Universe appear to be mutually inconsistent”, the authors write.

To resolve the problem, further research will be required to understand whether we have simply not detected another piece of the puzzle, or whether the anomalous readings are merely a “statistical fluctuation”. But the discrepancy could also mean that we are lacking a “new physics” yet to be discovered, they write.

NHS to set up artificial intelligence lab September 13th 2019 

Stacy Liberatore For Dailymail.com Posted November 4th 2019

© Provided by Associated Newspapers Limited Scientists recreated self-assembling cells in an environment similar to underwater vents and found that the heat, alkalinity and salt did not hinder the protocell formation, but actively favored it

Scientists have long hypothesized that the first cells evolved in warm, shallow pools of water – until now.

A recent study has produced new evidence that life could have originated in deep-sea hydrothermal vents.

Scientists recreated self-assembling cells in an environment similar to underwater vents and found that the heat, alkalinity and salt did not hinder synthetic cell formation, but actively favored it.

Because similar vents are found on other planets, experts believe these findings will help lead us to life on distant worlds.

Charles Darwin was the first to suggest the ‘little warm pond’ theory for how the first primitive cells evolved.

He noted that simple chemicals in small or shallow bodies of water might spontaneously form organic compounds in the presence of energy from heat, light, or electricity from lightning strikes.

However, new evidence has come from University College London (UCL) that may rewrite the science books.

By creating protocells, synthetic chemical particles that possess cell-like structures, in hot, alkaline seawater, a UCL-led research team has added to evidence that the origin of life could have been in deep-sea hydrothermal vents rather than shallow pools.

‘There are multiple competing theories as to where and how life started,’ said the study’s lead author, Professor Nick Lane (UCL Genetics, Evolution & Environment).

‘Underwater hydrothermal vents are among the most promising locations for life’s beginnings — our findings now add weight to that theory with solid experimental evidence.’

What is AI? August 19th 2019

From SIRI to self-driving cars, artificial intelligence (AI) is progressing rapidly. While science fiction often portrays AI as robots with human-like characteristics, AI can encompass anything from Google’s search algorithms to IBM’s Watson to autonomous weapons.

Artificial intelligence today is properly known as narrow AI (or weak AI), in that it is designed to perform a narrow task (e.g. only facial recognition or only internet searches or only driving a car). However, the long-term goal of many researchers is to create general AI (AGI or strong AI). While narrow AI may outperform humans at whatever its specific task is, like playing chess or solving equations, AGI would outperform humans at nearly every cognitive task.

Why research AI safety?

In the near term, the goal of keeping AI’s impact on society beneficial motivates research in many areas, from economics and law to technical topics such as verification, validity, security and control. Whereas it may be little more than a minor nuisance if your laptop crashes or gets hacked, it becomes all the more important that an AI system does what you want it to do if it controls your car, your airplane, your pacemaker, your automated trading system or your power grid. Another short-term challenge is preventing a devastating arms race in lethal autonomous weapons.

In the long term, an important question is what will happen if the quest for strong AI succeeds and an AI system becomes better than humans at all cognitive tasks. As pointed out by I.J. Good in 1965, designing smarter AI systems is itself a cognitive task. Such a system could potentially undergo recursive self-improvement, triggering an intelligence explosion leaving human intellect far behind. By inventing revolutionary new technologies, such a superintelligence might help us eradicate war, disease, and poverty, and so the creation of strong AI might be the biggest event in human history. Some experts have expressed concern, though, that it might also be the last, unless we learn to align the goals of the AI with ours before it becomes superintelligent.

There are some who question whether strong AI will ever be achieved, and others who insist that the creation of superintelligent AI is guaranteed to be beneficial. At FLI we recognize both of these possibilities, but also recognize the potential for an artificial intelligence system to intentionally or unintentionally cause great harm. We believe research today will help us better prepare for and prevent such potentially negative consequences in the future, thus enjoying the benefits of AI while avoiding pitfalls.

How can AI be dangerous?

Most researchers agree that a superintelligent AI is unlikely to exhibit human emotions like love or hate, and that there is no reason to expect AI to become intentionally benevolent or malevolent. Instead, when considering how AI might become a risk, experts think two scenarios most likely:

  1. The AI is programmed to do something devastating: Autonomous weapons are artificial intelligence systems that are programmed to kill. In the hands of the wrong person, these weapons could easily cause mass casualties. Moreover, an AI arms race could inadvertently lead to an AI war that also results in mass casualties. To avoid being thwarted by the enemy, these weapons would be designed to be extremely difficult to simply “turn off,” so humans could plausibly lose control of such a situation. This risk is one that’s present even with narrow AI, but grows as levels of AI intelligence and autonomy increase.
  2. The AI is programmed to do something beneficial, but it develops a destructive method for achieving its goal: This can happen whenever we fail to fully align the AI’s goals with ours, which is strikingly difficult. If you ask an obedient intelligent car to take you to the airport as fast as possible, it might get you there chased by helicopters and covered in vomit, doing not what you wanted but literally what you asked for. If a superintelligent system is tasked with an ambitious geoengineering project, it might wreak havoc with our ecosystem as a side effect, and view human attempts to stop it as a threat to be met.

As these examples illustrate, the concern about advanced AI isn’t malevolence but competence. A super-intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we have a problem. You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded, too bad for the ants. A key goal of AI safety research is to never place humanity in the position of those ants.

Why the recent interest in AI safety

Stephen Hawking, Elon Musk, Steve Wozniak, Bill Gates, and many other big names in science and technology have recently expressed concern in the media and via open letters about the risks posed by AI, joined by many leading AI researchers. Why is the subject suddenly in the headlines?

The idea that the quest for strong AI would ultimately succeed was long thought of as science fiction, centuries or more away. However, thanks to recent breakthroughs, many AI milestones, which experts viewed as decades away merely five years ago, have now been reached, making many experts take seriously the possibility of superintelligence in our lifetime. While some experts still guess that human-level AI is centuries away, most AI researchers at the 2015 Puerto Rico Conference guessed that it would happen before 2060. Since it may take decades to complete the required safety research, it is prudent to start it now.

Because AI has the potential to become more intelligent than any human, we have no surefire way of predicting how it will behave. We can’t use past technological developments as much of a basis because we’ve never created anything that has the ability to, wittingly or unwittingly, outsmart us. The best example of what we could face may be our own evolution. People now control the planet, not because we’re the strongest, fastest or biggest, but because we’re the smartest. If we’re no longer the smartest, are we assured to remain in control?

FLI’s position is that our civilization will flourish as long as we win the race between the growing power of technology and the wisdom with which we manage it. In the case of AI technology, the best way to win that race is not to impede the former, but to accelerate the latter, by supporting AI safety research.

The Top Myths About Advanced AI

A captivating conversation is taking place about the future of artificial intelligence and what it will/should mean for humanity. There are fascinating controversies where the world’s leading experts disagree, such as: AI’s future impact on the job market; if/when human-level AI will be developed; whether this will lead to an intelligence explosion; and whether this is something we should welcome or fear. But there are also many examples of boring pseudo-controversies caused by people misunderstanding and talking past each other. To help ourselves focus on the interesting controversies and open questions — and not on the misunderstandings — let’s clear up some of the most common myths.

AI myths

Timeline Myths

The first myth regards the timeline: how long will it take until machines greatly supersede human-level intelligence? A common misconception is that we know the answer with great certainty.

One popular myth is that we know we’ll get superhuman AI this century. In fact, history is full of technological over-hyping. Where are those fusion power plants and flying cars we were promised we’d have by now? AI has also been repeatedly over-hyped in the past, even by some of the founders of the field. For example, John McCarthy (who coined the term “artificial intelligence”), Marvin Minsky, Nathaniel Rochester and Claude Shannon wrote this overly optimistic forecast about what could be accomplished during two months with stone-age computers: “We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College […] An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.”

On the other hand, a popular counter-myth is that we know we won’t get superhuman AI this century. Researchers have made a wide range of estimates for how far we are from superhuman AI, but we certainly can’t say with great confidence that the probability is zero this century, given the dismal track record of such techno-skeptic predictions. For example, Ernest Rutherford, arguably the greatest nuclear physicist of his time, said in 1933 — less than 24 hours before Szilard’s invention of the nuclear chain reaction — that nuclear energy was “moonshine.” And Astronomer Royal Richard Woolley called interplanetary travel “utter bilge” in 1956. The most extreme form of this myth is that superhuman AI will never arrive because it’s physically impossible. However, physicists know that a brain consists of quarks and electrons arranged to act as a powerful computer, and that there’s no law of physics preventing us from building even more intelligent quark blobs.

There have been a number of surveys asking AI researchers how many years from now they think we’ll have human-level AI with at least 50% probability. All these surveys have the same conclusion: the world’s leading experts disagree, so we simply don’t know. For example, in such a poll of the AI researchers at the 2015 Puerto Rico AI conference, the average (median) answer was by year 2045, but some researchers guessed hundreds of years or more.

There’s also a related myth that people who worry about AI think it’s only a few years away. In fact, most people on record worrying about superhuman AI guess it’s still at least decades away. But they argue that as long as we’re not 100% sure that it won’t happen this century, it’s smart to start safety research now to prepare for the eventuality. Many of the safety problems associated with human-level AI are so hard that they may take decades to solve. So it’s prudent to start researching them now rather than the night before some programmers drinking Red Bull decide to switch one on.

Controversy Myths

Another common misconception is that the only people harboring concerns about AI and advocating AI safety research are luddites who don’t know much about AI. When Stuart Russell, author of the standard AI textbook, mentioned this during his Puerto Rico talk, the audience laughed loudly. A related misconception is that supporting AI safety research is hugely controversial. In fact, to support a modest investment in AI safety research, people don’t need to be convinced that risks are high, merely non-negligible — just as a modest investment in home insurance is justified by a non-negligible probability of the home burning down.
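
The insurance analogy can be made concrete with a little expected-value arithmetic. The following is a minimal sketch in Python; the probability, loss and premium figures are invented for illustration, not taken from the article.

    # Expected-loss reasoning behind buying home insurance.
    # All figures are illustrative assumptions, not data from the article.
    p_fire = 0.003   # assumed annual probability of the home burning down
    loss = 300_000   # assumed cost of losing the home
    premium = 1_200  # assumed annual insurance premium

    expected_loss = p_fire * loss  # 900.0: average annual loss if uninsured
    print(expected_loss, premium)

    # A premium can exceed the expected loss and still be worth paying,
    # because the uninsured worst case is ruinous; by the same logic, a
    # modest spend on AI safety research is justified by a risk that is
    # merely non-negligible rather than high.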

It may be that media have made the AI safety debate seem more controversial than it really is. After all, fear sells, and articles using out-of-context quotes to proclaim imminent doom can generate more clicks than nuanced and balanced ones. As a result, two people who only know about each other’s positions from media quotes are likely to think they disagree more than they really do. For example, a techno-skeptic who only read about Bill Gates’s position in a British tabloid may mistakenly think Gates believes superintelligence to be imminent. Similarly, someone in the beneficial-AI movement who knows nothing about Andrew Ng’s position except his quote about overpopulation on Mars may mistakenly think he doesn’t care about AI safety, whereas in fact, he does. The crux is simply that because Ng’s timeline estimates are longer, he naturally tends to prioritize short-term AI challenges over long-term ones.

Myths About the Risks of Superhuman AI

Many AI researchers roll their eyes when seeing this headline: “Stephen Hawking warns that rise of robots may be disastrous for mankind.” Many have lost count of how many similar articles they’ve seen. Typically, these articles are accompanied by an evil-looking robot carrying a weapon, and they suggest we should worry about robots rising up and killing us because they’ve become conscious and/or evil. On a lighter note, such articles are actually rather impressive, because they succinctly summarize the scenario that AI researchers don’t worry about. That scenario combines as many as three separate misconceptions: concern about consciousness, evil, and robots.

If you drive down the road, you have a subjective experience of colors, sounds, etc. But does a self-driving car have a subjective experience? Does it feel like anything at all to be a self-driving car? Although this mystery of consciousness is interesting in its own right, it’s irrelevant to AI risk. If you get struck by a driverless car, it makes no difference to you whether it subjectively feels conscious. In the same way, what will affect us humans is what superintelligent AI does, not how it subjectively feels.

The fear of machines turning evil is another red herring. The real worry isn’t malevolence, but competence. A superintelligent AI is by definition very good at attaining its goals, whatever they may be, so we need to ensure that its goals are aligned with ours. Humans don’t generally hate ants, but we’re more intelligent than they are – so if we want to build a hydroelectric dam and there’s an anthill there, too bad for the ants. The beneficial-AI movement wants to avoid placing humanity in the position of those ants.

The consciousness misconception is related to the myth that machines can’t have goals. Machines can obviously have goals in the narrow sense of exhibiting goal-oriented behavior: the behavior of a heat-seeking missile is most economically explained as a goal to hit a target. If you feel threatened by a machine whose goals are misaligned with yours, then it is precisely its goals in this narrow sense that trouble you, not whether the machine is conscious and experiences a sense of purpose. If that heat-seeking missile were chasing you, you probably wouldn’t exclaim: “I’m not worried, because machines can’t have goals!”

I sympathize with Rodney Brooks and other robotics pioneers who feel unfairly demonized by scaremongering tabloids, because some journalists seem obsessively fixated on robots and adorn many of their articles with evil-looking metal monsters with red shiny eyes. In fact, the main concern of the beneficial-AI movement isn’t with robots but with intelligence itself: specifically, intelligence whose goals are misaligned with ours. To cause us trouble, such misaligned superhuman intelligence needs no robotic body, merely an internet connection – this may enable outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Even if building robots were physically impossible, a super-intelligent and super-wealthy AI could easily pay or manipulate many humans to unwittingly do its bidding.

The robot misconception is related to the myth that machines can’t control humans. Intelligence enables control: humans control tigers not because we are stronger, but because we are smarter. This means that if we cede our position as smartest on our planet, it’s possible that we might also cede control.

The Interesting Controversies

Not wasting time on the above-mentioned misconceptions lets us focus on true and interesting controversies where even the experts disagree. What sort of future do you want? Should we develop lethal autonomous weapons? What would you like to happen with job automation? What career advice would you give today’s kids? Do you prefer new jobs replacing the old ones, or a jobless society where everyone enjoys a life of leisure and machine-produced wealth? Further down the road, would you like us to create superintelligent life and spread it through our cosmos? Will we control intelligent machines or will they control us? Will intelligent machines replace us, coexist with us, or merge with us? What will it mean to be human in the age of artificial intelligence? What would you like it to mean, and how can we make the future be that way? Please join the conversation!

Avoiding the Issue August 5th 2019

It may be amazing what the armed forces and engineers can do in an emergency like Whaley Bridge, but I doubt that lessons will be learned as far as human environmental damage is concerned.

Interestingly, and understandably, many local people didn’t want to leave their homes, thinking it was a case of the authorities going mad with health and safety over a storm in a teacup rather than a storm in a reservoir.

There is indeed a great deal of talk about the environment which, while not exactly nonsense, is rather distracting, most if not all of it avoiding the main issues: human overpopulation and elite greed.

The truth is hidden by an elite-owned and operated media only interested in finding scapegoats among those of us in the lower-order masses.

Between 25 and 33% of carbon emissions are due to land use, including farming for an unsustainable and growing human population, most of it coming from what we used to call the ‘Third World’. The Holy Grail of economic growth is out of control so as to feed the masses, and the elite media cries for equality down at the bottom, while the elite monopolise the profits of raping the earth, spinning us into more and more disasters and wars.

The following articles should be read with this in mind. The U.S.A., an empire in decline, is fighting to keep its annual consumption of the earth’s natural resources up at 52% of the world’s total, for a nation that comprises only 6% of the world’s population. Britain dominates European policy and is the U.S.’s lackey – hence its confiscation of the Iranian tanker.

Robert Cook August 5th 2019

Whaley Bridge latest: race to save dam and halt flood continues as emergency crews say they still need two more days

Emergency teams need at least two more days to pump water from a reservoir at risk of bursting its banks and flooding the town below, fire crews said today.

Hundreds of families evacuated from their homes in the Peak District town of Whaley Bridge on Thursday were forced to spend a fifth night away from home as the race against time continues.

Emergency crews, who have been working round-the-clock to shore up the Victorian structure which partially collapsed in torrential rain, today said it is now “relatively stable” as the threat of storms appeared to recede.

An RAF Chinook helicopter has been dropping sandbags on the crumbling wall while six rescue boats have been deployed in case the dam bursts.

The reservoir is now just under half of its 300 million gallon capacity, but water levels must drop to at least 25 per cent, roughly 75 million gallons, before the evacuees can return home.

Derbyshire deputy chief fire officer Gavin Tomlinson said the operation would continue “for a few days yet”.

He said: “As soon as we get the water level down to a safe level, which is around 25 per cent of the contents of the dam, then the emergency phase is over and then the contractors can look at the repairing of the dam wall.”

RAF Wing Commander John Coles, who is in charge of the operation to shore up the wall, added: “I think the assessment is now that actually the dam is relatively stable.

“The military will stand by ready to come back up if required but I think the sense of the moment is very much we’ve got through the worst of it. We were fortunate with the weather.”

Derbyshire chief fire officer Terry McDermott said a seven-day estimate for how long people would be out of their homes was a “worst case scenario”.

Labour leader Jeremy Corbyn was expected to visit the flood-struck Derbyshire town this morning after Friday’s visit by Prime Minister Boris Johnson.

Met Office spokesman Grahame Madge said the next 36 hours were looking “largely dry” and unlikely to affect the rescue operation.

© Provided by Independent Digital News & Media Limited Emergency workers are pumping water out of the reservoir to ease the pressure on the dam (PA)

However, he said a heavy band of rain was expected later on Thursday and into Friday: “That may bring a heavier pulse of rain. We will keep an eye on the forecast and issue any warnings that may be relevant.”

© Provided by Independent Digital News & Media Limited The emergency crews say they need about two more days to pump water from the reservoir (PA)

Police allowed one resident from each of the 400 properties to return to the evacuation zone for 15 minutes to gather essentials on Saturday.

However, 31 people, including a “small number” who were initially evacuated but have since returned to their homes, remained in 22 properties last night.

Deputy chief constable Rachel Swann told a residents meeting they were not only putting their own lives at risk, but also those of emergency services staff.

She also said the force was using a drone to patrol the streets after one resident claimed she had been burgled.

Is There Evidence of Reincarnation?

by Stephen Wagner Updated May 22, 2019

Have you lived before? The idea that our souls live through many lifetimes over the centuries is known as reincarnation. It has been part of virtually every culture since ancient times. The Egyptians, Greeks, Romans, and Aztecs all believed in the “transmigration of souls” from one body to another after death. Reincarnation is also a fundamental concept of Hinduism.

Although it is not a part of official Christian doctrine, many Christians believe in reincarnation or at least accept its possibility. Jesus, it is believed, was resurrected three days after his crucifixion. The idea that we can live again after death as another person, as a member of the opposite sex or in a completely different station in life, is intriguing and, for many people, highly appealing.

But is reincarnation just an idea, or is there real evidence to support it? Many researchers have tackled this question—and their results are surprising.

Past Life Regression Hypnosis

The practice of reaching past lives through hypnosis is controversial, primarily because hypnosis is not a reliable tool. It can certainly help researchers access the unconscious mind, but the information found there should not be taken as truth. For example, it has been shown that hypnosis can create false memories. That doesn’t mean, however, that regression hypnosis should be dismissed out of hand. If information from a “past life” can be verified through research, then the case for reincarnation becomes more compelling.

The most famous case of past life regression through hypnosis is that of Ruth Simmons. In 1952, her therapist, Morey Bernstein, encouraged her to travel in her mind back to a time before her birth. Suddenly, Ruth began to speak with an Irish accent and claimed that her name was Bridey Murphy, who lived in 19th-century Belfast, Ireland. Ruth recalled many details of her life as Bridey, but attempts to find out if Ms. Murphy actually existed were unfortunately unsuccessful. There was, however, some indirect evidence for the truth of her story. Under hypnosis, Bridey mentioned the names of two grocers in Belfast from whom she had bought food, Mr. Farr and John Carrigan. A Belfast librarian found a city directory for 1865-1866 that listed both men as grocers. Simmons’ story was told both in a book by Bernstein and in a 1956 movie, “The Search for Bridey Murphy.”

Unusual Illnesses and Physical Ailments

Do you have a lifelong illness or physical pain that you cannot account for? It may be the result of some past life trauma, some researchers suggest.

In “Have We Really Lived Before?,” Dr. Michael C. Pollack describes his lower back pain, which grew steadily worse over the years and limited his activities. He believes he found a possible explanation for the pain during a series of past life therapy sessions: “I discovered that I had lived at least three prior lifetimes in which I had been killed by being knifed or speared in the low back. After processing and healing the past life experiences, my back began to heal.”

Research conducted by Nicola Dexter, a past life therapist, has discovered correlations between some of her patients’ illnesses and their past lives. She found, for example, a case of bulimia caused by swallowing salt water in a previous life; a persistent pain in the shoulder and arm caused by participating, in a past life, in a dangerous game of tug-of-war; and a fear of razors and shaving that was the result of the sufferer having had his hand cut off in a previous life.

Phobias and Nightmares

Where does seemingly irrational fear come from? Fear of heights, fear of water, fear of flying? Many of us have normal reservations about such things, but some people have fears so great that they become debilitating. And some fears are completely baffling—a fear of carpets, for example. Where do such fears come from? The answer, of course, can be psychologically complex, but researchers think that in some cases there might be a connection to experiences from previous lifetimes.

In “Healing Past Lives Through Dreams,” author J.D. writes about his claustrophobia, which includes a tendency to panic whenever his arms or legs are confined or restricted. He believes that a dream of a past life uncovered a trauma that explains his fear. “One night in the dream state I found myself hovering over a disturbing scene,” he writes. “It was a town in 15th-century Spain, and a frightened man was being hog-tied by a small jeering crowd. He had expressed beliefs contrary to the church. Some local ruffians, with the blessing of the church officials, were eager to administer justice. The men bound the heretic hand and foot, then wrapped him very tightly in a blanket. The crowd carried him to an abandoned stone building, shoved him into a dark corner under the floor, and left him to die. I realized with horror the man was me.”

Physical Resemblances

In his book “Someone Else’s Yesterday,” Jeffrey J. Keene theorizes that a person in this life may strongly resemble the person he or she was in a previous life. Keene, an Assistant Fire Chief who lives in Westport, Connecticut, believes he is the reincarnation of John B. Gordon, a Confederate General of the Army of Northern Virginia, who died on January 9, 1904. As evidence, he offers photos of himself and the general. There is a striking resemblance. Beyond sharing physical similarities, Keene says that individuals and their past incarnations often “think alike, look alike and even share facial scars. Their lives are so intertwined that they appear to be one.”

Another case of such resemblance is that of artist Peter Teekamp, who believes he may be the reincarnation of artist Paul Gauguin. Here, too, there is a physical resemblance, along with similarities between the two painters’ work.

Children’s Spontaneous Recall and Special Knowledge

Many small children who claim to recall past lives also express knowledge that could not have come from their own experiences. Such cases are documented in Carol Bowman’s “Children’s Past Lives”:

“Eighteen-month-old Elsbeth had never spoken a complete sentence. But one evening, as her mother was bathing her, Elsbeth spoke up and gave her mother a shock. ‘I’m going to take my vows,’ she told her mother. Taken aback, she questioned the baby girl about her queer statement. ‘I’m not Elsbeth now,’ the child replied. ‘I’m Rose, but I’m going to be Sister Teresa Gregory.’”

Identical Handwriting

Can proof of past lives be demonstrated by comparing the handwriting of a living person to that of the deceased person he or she claims to have been? Indian researcher Vikram Raj Singh Chauhan believes so. Chauhan’s findings have been received favorably at the National Conference of Forensic Scientists at Bundelkhand University, Jhansi.

A six-year-old boy named Taranjit Singh from the village of Alluna Miana, India, claimed since he was two that he had previously been a person named Satnam Singh. This other boy had lived in the village of Chakkchela, Taranjit insisted, and Taranjit even knew Satnam’s father’s name. Satnam had been killed while riding his bike home from school. An investigation verified the many details Taranjit knew about Satnam’s life. But the clincher was that their handwriting, a trait experts know is as distinct as a fingerprint, was virtually identical.

Matching Birthmarks and Birth Defects

Dr. Ian Stevenson, head of the Department of Psychiatric Medicine at the University of Virginia School of Medicine, was one of the foremost researchers on the subject of reincarnation and past lives. In 1993, he wrote a paper entitled “Birthmarks and Birth Defects Corresponding to Wounds on Deceased Persons,” which described possible physical evidence for past lives. “Among 895 cases of children who claimed to remember a previous life (or were thought by adults to have had a previous life),” Stevenson writes, “birthmarks and/or birth defects attributed to the previous life were reported in 309 (35 percent) of the subjects. The birthmark or birth defect of the child was said to correspond to a wound (usually fatal) or other mark on the deceased person whose life the child said it remembered.”

But could any of these cases be verified?

In one fascinating case, an Indian boy claimed to remember the life of a man named Maha Ram, who was killed by a shotgun fired at close range. The boy had an array of birthmarks in the center of his chest that looked like they might correspond to a shotgun blast. So the story was investigated. Indeed, there was a man named Maha Ram who was killed by a shotgun blast to the chest. An autopsy report recorded the man’s chest wounds, which corresponded directly with the boy’s birthmarks.

In another case, a man from Thailand claimed that when he was a child he had distinct memories of a past life as his own paternal uncle. This man had a large scar-like birthmark on the back of his head. His uncle, it turned out, died from a severe knife wound to the same area.

Dr. Stevenson documented a number of cases like these, many of which he could verify through medical records.

Why did the US want to go to the Moon?

A space race developed between the US and the then Soviet Union, after the 1957 launch of the first Soviet Sputnik satellite.

When John F Kennedy became US President in 1961, many Americans believed they were losing the race for technological superiority to their Cold War enemy.

Image caption: Missions by Soviet cosmonauts including Yuri Gagarin and Valentina Tereshkova, the first woman in space, worried the US (Getty Images)

It was in that year that the Soviet Union made the first ever manned spaceflight.

The US was determined to get a manned mission to the Moon first, and in 1962 Kennedy made a now-famous speech announcing: “We choose to go to the Moon!”

The space race continued and in 1965 the Soviets successfully guided an unmanned craft to touch down on the Moon.

How did the US plan for its mission?

The US space agency, Nasa, committed huge amounts of resources to what became known as the Apollo programme.

About 400,000 people worked on the 17 Apollo missions, at a cost of $25bn.

Image caption: The Saturn V rocket lifts off (NASA)

Three astronauts were chosen for the Apollo 11 mission: Buzz Aldrin, Neil Armstrong and Michael Collins.

A powerful rocket – the Saturn V – carried the Apollo command and service module and the attached lunar module that was to touch down on the Moon.

The plan was to use the Earth’s orbit to reach that of the Moon, after which Armstrong and Aldrin would get into the lunar module. They would descend to the Moon’s surface while Collins stayed behind in the command and service module.

Did anything go wrong?

The first crewed flight that was meant to test going into orbit was Apollo 1 in 1967.

Image caption: The lunar module as seen from the command and service module (NASA)

But disaster struck during pre-flight routine checks, when fire swept through the command module and killed three astronauts.

Manned space flights were suspended for months.

During the Apollo 11 mission itself, there were communications issues with ground control, and an alarm message the crew had never heard before sounded on the computer.

The lunar module also ended up touching down away from the original target area.

Walking on the Moon

Despite these problems, on 20 July, nearly 110 hours after leaving Earth, Neil Armstrong became the first person to step on to the surface of the Moon. He was followed 20 minutes later by Buzz Aldrin.

Armstrong’s words, beamed to the world by TV, entered history: “That’s one small step for man, one giant leap for mankind.”

The two men spent more than two hours outside the lunar module, collecting samples from the surface, taking pictures, and setting up a number of scientific experiments.

After completing their Moon exploration, the two men successfully docked with the command and service module.

Image caption: The three astronauts after being picked up in the Pacific (NASA)

The return journey to Earth began and the crew splashed down in the Pacific Ocean on 24 July.

An estimated 650 million people worldwide had watched the first Moon landing. For the US, the achievement helped it demonstrate its power to a world audience.

It was also an important boost to national self-esteem at the end of a tumultuous decade, which had seen Kennedy assassinated, race riots in major cities and unease about US military involvement in Vietnam.

How do we know it really happened?

A total of six US missions had landed men on the lunar surface by the end of 1972, yet to this day there are conspiracy theories saying that the landings were staged.

But Nasa has had a reconnaissance craft orbiting the Moon since 2009. It sends back high-resolution images showing signs of exploration on the surface by the Apollo missions, such as footprints and wheel tracks.

Image caption: The Moon landings became a cause for national celebration (NASA)

There is also geological evidence from rocks brought back from the surface.

What’s the point of going to the Moon?

The US remains the only country to have put people on the Moon’s surface.

Image caption: Sally Ride (far left), the first female US astronaut to go to space, pictured with astronauts Judith Resnik, Anna Fisher, Kathryn Sullivan and Rhea Seddon (NASA)

However, Russia, Japan, China, the European Space Agency and India have either sent probes to orbit the Moon, or landed vehicles on its surface.

For a country to be able to do so is a sign of its technological prowess, giving it membership of an elite club.

There are also more practical reasons such as the desire to exploit its resources.

Ice found at both poles may make it easier for craft to reach deeper into space, as it contains hydrogen and oxygen which can be used to fuel rockets.

There’s also interest in mining the Moon for gold, platinum and rare earth metals, although it’s not yet clear how easy it would be to extract such resources.

All images subject to copyright BBC

Climate Emergency June 5th 2019

The University of East Anglia (UEA) is joining forces with other organisations around the world to declare a climate and biodiversity emergency.

UEA is one of the world’s pre-eminent climate change research institutions, and the work of the Tyndall Centre for Climate Change Research, the Climatic Research Unit (CRU) and researchers in other UEA schools has pioneered understanding of Earth’s changing climate.

The CRU at UEA is responsible for the global temperature record, which first brought global warming to the world’s attention. Researchers at UEA’s Tyndall Centre also help publish the annual Global Carbon Budget, the update for policy-makers on fossil fuel emissions.

UEA’s declaration comes on World Environment Day (Wednesday 5 June), the UN’s flagship awareness day on environmental issues from global warming to marine pollution and wildlife crime.

UEA has made the most substantive and sustained contribution to the Intergovernmental Panel on Climate Change (IPCC) of any university in the world. UEA and the Tyndall Centre are core partners of the new Climate and Social Transitions Centre.

Vice-Chancellor Professor David Richardson said: “Over the decades researchers from UEA have arguably done more to further our knowledge of humankind’s impact on Earth’s climate and eco-systems than those from any other institution.

“As a University we have reduced our carbon emissions by five per cent since 1990 – despite the campus doubling in size. We also fully recognise that we need to move faster to deal with what is a climate and biodiversity emergency and that we all have a part to play in addressing this crisis.”

UEA has also signed up to the SDG Accord designed to inspire, celebrate and advance the critical role that education has in delivering the UN’s Sustainable Development Goals (SDGs) and the value it brings to our global society.

The Accord is a commitment learning institutions are making to do more to deliver the goals, to annually report on progress and to do so in ways which share the learning with each other nationally and internationally.

UEA absolutely recognises the indivisible and interconnected nature of the universal set of Goals – People, Prosperity, Planet, Partnership, Peace – and that as educators we have a responsibility to play a central and transformational role in attaining the Sustainable Development Goals.

Director of the Climatic Research Unit, Professor Tim Osborn, said: “UEA has been monitoring climate change and researching its consequences for almost 50 years. We understand what is causing our climate to change and can assess the significant risks that it brings – for human society as well as for the natural world. 

“Together with other causes, especially continuing habitat loss in many parts of the world and overexploitation of marine species, climate change represents a huge challenge to biodiversity and places many species at risk.”

Professor of Evolutionary Biology and Science Engagement, Ben Garrod, said: “Global climate change is the single greatest threat facing our planet today. If we are to have any chance of success in tackling the problem and reducing its effects then we need to act swiftly, decisively and collaboratively.

“It is estimated that a million species are facing the imminent threat of extinction and predictions as to the effects on the global human population are severe. By joining the growing number of institutions declaring a climate emergency, UEA can help by not only raising awareness but by contributing to solutions through our pioneering studies and leading researchers.

“Now we need every university, every council, government, school, business and every individual citizen to declare an environmental emergency and to work together to ensure we have a future where our food, health and homes are not all at stake.”

Dr Lynn Dicks, Natural Environment Research Council (NERC) Research Fellow at UEA, said: “Nature is under serious pressure all over the world. Roughly a quarter of all species are at risk of extinction in the groups of animals and plants that have been assessed. Nature is essential for human existence and good quality of life, and yet we are trashing it in exchange for economic wealth.

“Ongoing conversion of wilderness to agriculture, and direct exploitation of wild species through hunting, fishing and logging, are the biggest problems. Climate change is already driving species to extinction as well, and this will get much worse in the coming century.

“Researchers in the Centre for Ecology, Evolution and Conservation at UEA are working with partners in Government, industry and the NGO sector to understand our impacts on nature and to develop strategies to protect and restore it.”

UEA operates a Sustainability Board, currently chaired by the Pro-Vice-Chancellor for the Faculty of Medicine and Health, Professor Dylan Edwards, and with representation from staff, estates and the UEA Students’ Union. It meets quarterly and reviews the performance of the implementation teams that are charged with achieving the targets for the campus.

These teams address the university’s sustainability goals across eight areas: Sustainable Food; Transport; Purchasing; Engagement and Communications; Energy and Carbon Reduction; Sustainable Labs; Biodiversity; Water and Waste.

UEA has a 15-year £300m estate strategy to improve and modernise its buildings, which will include improvements to their energy usage. The 10-year programme to refurbish the Lasdun Wall is just one factor precluding reaching net zero by 2025, and the University supports the UK Committee on Climate Change target of carbon neutrality by 2050.

The decision to declare a climate and biodiversity emergency follows a motion tabled at UEA Assembly on 22 May by Dr Hannah Hoechner, on behalf of Extinction Rebellion UEA. The University’s response to the specifics of the Extinction Rebellion UEA request:

a.  “Formally declare a climate emergency”.  From the introductory paragraph it is clear that UEA has detected and recognises that there is a global climate crisis.  We also feel there is a connected biodiversity emergency.  Our preference would be to declare a “Climate and Biodiversity emergency”, which may help people to appreciate their inter-relationships.  We therefore support UEA declaring a climate and biodiversity emergency. Professor Sir Robert Watson, of UEA and the Tyndall Centre, is co-Chair of the Intergovernmental Platform on Biodiversity and Ecosystem Services.

b. “Commit to the target of carbon neutrality by 2025, in accordance with the precautionary principle”.  The University’s view is that this target is unattainable.  It takes the position that UEA should commit to net carbon neutrality by 2050, in alignment with and support of the recent recommendation from the independent UK Committee on Climate Change. It is important to note the impressive improvements that UEA Estates and the Sustainability Board have made; in particular, this year we are on target for a 50% reduction in CO2 emissions compared to the 1990 baseline data, despite the doubling in size of the university across the period.

c.  “to appoint a senior staff member of the Executive Team with their sole responsibility being to achieve this carbon reduction target and to promote sustainability”.  Two members of the Executive Team serve on the Sustainability Board (Professors Dylan Edwards and Mark Searcey) and they will be joined in future by Jenny Baxter, Chief Operating Officer.  The reporting process from the Sustainability Board to ET will be enhanced, with quarterly reports of the Board’s activities. It was not felt that appointing one person was a sustainable way to promote sustainability.

d.  “create a consultative forum to harness the passion and expertise available among UEA staff and students to mount the necessary emergency response”. In essence, this is the function of the Sustainability Board.  At the May meeting there was extensive discussion about the need to enhance the visibility of the Board, ensure it is informed by the depth of research at the Tyndall Centre for Climate Change Research and build better engagement with its work to make a sustainable campus. We need to accelerate the pace of change and also communicate what we are doing.  We need also to cement our goals for sustainability into the next iteration of the UEA plan, with clear, explicit targets and ways to monitor our progress.  We recognise that there is much work to be done. 

For more information about UEA’s research into climate and the environment please visit www.uea.ac.uk/research/research-themes/understanding-human-and-natural-environments

Synchronicity

From Wikipedia, the free encyclopedia

Carl Gustav Jung

Synchronicity (German: Synchronizität) is a concept, first introduced by analytical psychologist Carl Jung, which holds that events are “meaningful coincidences” if they occur with no causal relationship yet seem to be meaningfully related.[1] During his career, Jung furnished several different definitions of it.[2] Jung defined synchronicity as an “acausal connecting (togetherness) principle,” “meaningful coincidence”, and “acausal parallelism.” He introduced the concept as early as the 1920s but gave a full statement of it only in 1951 in an Eranos lecture.[3]

In 1952 Jung published a paper “Synchronizität als ein Prinzip akausaler Zusammenhänge” (Synchronicity – An Acausal Connecting Principle)[4] in a volume which also contained a related study by the physicist and Nobel laureate Wolfgang Pauli,[5] who was sometimes critical of Jung’s ideas.[6] Jung’s belief was that, just as events may be connected by causality, they may also be connected by meaning. Events connected by meaning need not have an explanation in terms of causality, which does not generally contradict the Axiom of Causality but in specific cases can lead to prematurely giving up causal explanation.

Jung used the concept in arguing for the existence of the paranormal.[7] A believer in the paranormal, Arthur Koestler wrote extensively on synchronicity in his 1972 book The Roots of Coincidence.[8]

Mainstream science explains synchronicities as mere coincidences that can be described by the laws of statistics – for instance the law of truly large numbers.[9]

Description

Diagram illustrating Carl Jung’s concept of synchronicity

Jung coined the word “synchronicity” to describe “temporally coincident occurrences of acausal events.” In his book Synchronicity: An Acausal Connecting Principle, Jung wrote:[10]

How are we to recognize acausal combinations of events, since it is obviously impossible to examine all chance happenings for their causality? The answer to this is that acausal events may be expected most readily where, on closer reflection, a causal connection appears to be inconceivable.

In the introduction to his book, Jung on Synchronicity and the Paranormal, Roderick Main wrote:[11]

The culmination of Jung’s lifelong engagement with the paranormal is his theory of synchronicity, the view that the structure of reality includes a principle of acausal connection which manifests itself most conspicuously in the form of meaningful coincidences. Difficult, flawed, prone to misrepresentation, this theory none the less remains one of the most suggestive attempts yet made to bring the paranormal within the bounds of intelligibility. It has been found relevant by psychotherapists, parapsychologists, researchers of spiritual experience and a growing number of non-specialists. Indeed, Jung’s writings in this area form an excellent general introduction to the whole field of the paranormal.

In his book Synchronicity: An Acausal Connecting Principle, Jung wrote:[10]

…it is impossible, with our present resources, to explain ESP, or the fact of meaningful coincidence, as a phenomenon of energy. This makes an end of the causal explanation as well, for “effect” cannot be understood as anything except a phenomenon of energy. Therefore it cannot be a question of cause and effect, but of a falling together in time, a kind of simultaneity. Because of this quality of simultaneity, I have picked on the term “synchronicity” to designate a hypothetical factor equal in rank to causality as a principle of explanation.

Synchronicity was a principle which, Jung felt, gave conclusive evidence for his concepts of archetypes and the collective unconscious.[12] It described a governing dynamic which underlies the whole of human experience and history — social, emotional, psychological, and spiritual. The emergence of the synchronistic paradigm was a significant move away from Cartesian dualism towards an underlying philosophy of double-aspect theory. Some argue this shift was essential to bringing theoretical coherence to Jung’s earlier work.[13][14]

Even at Jung’s presentation of his work on synchronicity in 1951 at an Eranos lecture, his ideas on synchronicity were evolving. On Feb. 25, 1953, in a letter to Carl Seelig, the Swiss author and journalist who wrote a biography of Albert Einstein, Jung wrote, “Professor Einstein was my guest on several occasions at dinner. . . These were very early days when Einstein was developing his first theory of relativity [and] It was he who first started me on thinking about a possible relativity of time as well as space, and their psychic conditionality. More than 30 years later the stimulus led to my relation with the physicist professor W. Pauli and to my thesis of psychic synchronicity.”[4]

Following discussions with both Albert Einstein and Wolfgang Pauli, Jung believed there were parallels between synchronicity and aspects of relativity theory and quantum mechanics.[15]

Jung believed life was not a series of random events but rather an expression of a deeper order, which he and Pauli referred to as Unus mundus. This deeper order led to the insights that a person was both embedded in a universal wholeness and that the realisation of this was more than just an intellectual exercise, but also had elements of a spiritual awakening.[16] From the religious perspective, synchronicity shares similar characteristics of an “intervention of grace”. Jung also believed that in a person’s life, synchronicity served a role similar to that of dreams, with the purpose of shifting a person’s egocentric conscious thinking to greater wholeness.

Examples

Cetonia aurata

In his book Synchronicity Jung tells the following story as an example of a synchronistic event:

My example concerns a young woman patient who, in spite of efforts made on both sides, proved to be psychologically inaccessible. The difficulty lay in the fact that she always knew better about everything. Her excellent education had provided her with a weapon ideally suited to this purpose, namely a highly polished Cartesian rationalism with an impeccably “geometrical” idea of reality. After several fruitless attempts to sweeten her rationalism with a somewhat more human understanding, I had to confine myself to the hope that something unexpected and irrational would turn up, something that would burst the intellectual retort into which she had sealed herself. Well, I was sitting opposite her one day, with my back to the window, listening to her flow of rhetoric. She had an impressive dream the night before, in which someone had given her a golden scarab — a costly piece of jewellery. While she was still telling me this dream, I heard something behind me gently tapping on the window. I turned round and saw that it was a fairly large flying insect that was knocking against the window-pane from outside in the obvious effort to get into the dark room. This seemed to me very strange. I opened the window immediately and caught the insect in the air as it flew in. It was a scarabaeid beetle, or common rose-chafer (Cetonia aurata), whose gold-green colour most nearly resembles that of a golden scarab. I handed the beetle to my patient with the words, “Here is your scarab.” This experience punctured the desired hole in her rationalism and broke the ice of her intellectual resistance. The treatment could now be continued with satisfactory results.[17]

The French writer Émile Deschamps claims in his memoirs that, in 1805, he was treated to some plum pudding by a stranger named Monsieur de Fontgibu. Ten years later, the writer encountered plum pudding on the menu of a Paris restaurant and wanted to order some, but the waiter told him that the last dish had already been served to another customer, who turned out to be de Fontgibu. Many years later, in 1832, Deschamps was at a dinner and once again ordered plum pudding. He recalled the earlier incident and told his friends that only de Fontgibu was missing to make the setting complete – and in the same instant, the now-senile de Fontgibu entered the room, having got the wrong address.[18]

Jung wrote, after describing some examples, “When coincidences pile up in this way, one cannot help being impressed by them – for the greater the number of terms in such a series, or the more unusual its character, the more improbable it becomes.”[19]

Wolfgang Pauli

In his book Thirty Years That Shook Physics – The Story of Quantum Theory (1966), George Gamow writes about Wolfgang Pauli, who was apparently considered a person particularly associated with synchronicity events. Gamow whimsically refers to the “Pauli effect”, a mysterious phenomenon which is not understood on a purely materialistic basis, and probably never will be. The following anecdote is told:

It is well known that theoretical physicists cannot handle experimental equipment; it breaks whenever they touch it. Pauli was such a good theoretical physicist that something usually broke in the lab whenever he merely stepped across the threshold. A mysterious event that did not seem at first to be connected with Pauli’s presence once occurred in Professor J. Franck’s laboratory in Göttingen. Early one afternoon, without apparent cause, a complicated apparatus for the study of atomic phenomena collapsed. Franck wrote humorously about this to Pauli at his Zürich address and, after some delay, received an answer in an envelope with a Danish stamp. Pauli wrote that he had gone to visit Bohr and at the time of the mishap in Franck’s laboratory his train was stopped for a few minutes at the Göttingen railroad station. You may believe this anecdote or not, but there are many other observations concerning the reality of the Pauli Effect! [20]

Relationship with causality

Causality, when defined expansively (as for instance in the “mystic psychology” book The Kybalion, or in the Platonic Kant-style Axiom of Causality), states that “nothing can happen without being caused.” Such an understanding of causality may be incompatible with synchronicity. Other definitions of causality (for example, the neo-Humean definition) are concerned only with the relation of cause to effect. As such, they are compatible with synchronicity. There are also opinions which hold that, where there is no external observable cause, the cause can be internal.[21]

It is also pointed out that, since Jung took into consideration only the narrow definition of causality – only the efficient cause – his notion of “acausality” is also narrow, and so is not applicable to final and formal causes as understood in Aristotelian or Thomist systems.[22] On such readings, final causality is inherent[23] in synchronicity (because synchronicity leads to individuation), or synchronicity can serve as a kind of replacement for final causality; however, such finalism or teleology is considered to lie outside the domain of modern science.

Explanations

Probability theory

Mainstream mathematics argues that statistics and probability theory (exemplified in, e.g., Littlewood’s law or the law of truly large numbers) suffice to explain any purported synchronistic events as mere coincidences.[9][24] The law of truly large numbers, for instance, states that in a large enough population, even a very unlikely event becomes likely to occur somewhere by chance alone. However, some proponents of synchronicity question whether it is even sensible in principle to try to evaluate synchronicity statistically. Jung himself and von Franz argued that statistics work precisely by ignoring what is unique about the individual case, whereas synchronicity tries to investigate that uniqueness.
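To make the arithmetic behind the law of truly large numbers concrete, here is a minimal sketch in Python (the event probability and the trial counts are illustrative assumptions, not figures from any study). It computes the chance that a “one in a million” event happens at least once across many independent opportunities:

# Chance that an event of probability p occurs at least once in n
# independent trials: 1 - (1 - p)**n.
p = 1e-6  # assumed probability of the "rare" event on any one occasion
for n in (1_000, 1_000_000, 10_000_000):
    print(f"n = {n:>10,}: P(at least once) = {1 - (1 - p) ** n:.5f}")

With a million opportunities the “rare” event is more likely than not (about 0.63), and with ten million it is all but certain; this is the sense in which large populations make strange coincidences routine.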

Among some psychologists, Jung’s works, such as The Interpretation of Nature and the Psyche, were received as problematic. Fritz Levi, in his 1952 review in Neue Schweizer Rundschau (New Swiss Observations), critiqued Jung’s theory of synchronicity as vague about how synchronistic events could be identified, saying that Jung never specifically explained his rejection of the “magic causality” to which such an acausal principle as synchronicity would be related. He also questioned the theory’s usefulness.[25]

Psychology

In psychology and cognitive science, confirmation bias is a tendency to search for or interpret new information in a way that confirms one’s preconceptions, while avoiding information and interpretations that contradict prior beliefs. It is a type of cognitive bias and an error of inductive inference, or a form of selection bias toward confirming the hypothesis under study or disconfirming an alternative hypothesis. Confirmation bias is of interest in the teaching of critical thinking, as the skill is misused when rigorous critical scrutiny is applied only to evidence that challenges a preconceived idea but not to evidence that supports it.[26]

Likewise, in psychology and sociology, the term apophenia is used for the mistaken detection of a pattern or meaning in random or meaningless data.[27] Skeptics, such as Robert Todd Carroll of the Skeptic’s Dictionary, argue that the perception of synchronicity is better explained as apophenia. Pattern detection is central to primate intelligence,[28] and it can misfire, identifying patterns that are not there. A famous example is human face recognition: it is so robust, and based on such a basic archetype (essentially two dots and a line contained in a circle), that people readily see faces in random data throughout their environment, as in the “man in the moon” or faces in wood grain, an example of the visual form of apophenia known as pareidolia.[29]
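A small simulation makes the apophenia point concrete. The sketch below, in Python with illustrative parameters of my own choosing, estimates how often a streak of seven identical outcomes appears somewhere in 100 fair coin flips; such streaks feel too orderly to be chance, yet they turn up in roughly half of all sequences:

import random

random.seed(0)  # fixed seed so the illustration is reproducible

def longest_run(flips):
    # Length of the longest streak of identical outcomes.
    best = cur = 0
    prev = None
    for f in flips:
        cur = cur + 1 if f == prev else 1
        prev = f
        best = max(best, cur)
    return best

trials = 10_000
hits = sum(
    longest_run([random.random() < 0.5 for _ in range(100)]) >= 7
    for _ in range(trials)
)
print(f"sequences containing a streak of 7 or more: {hits / trials:.1%}")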

Charles Tart sees danger in synchronistic thinking: ‘This danger is the temptation to mental laziness. […] it would be very tempting to say, “Well, it’s synchronistic, it’s forever beyond my understanding,” and so (prematurely) give up trying to find a causal explanation.’[30]

Mathematics

Jung and his followers (e.g., Marie-Louise von Franz) shared the belief that numbers are the archetypes of order and major participants in the creation of synchronicity.[31] This hypothesis has implications that are relevant to some of the “chaotic” phenomena in nonlinear dynamics. Dynamical systems theory has provided a new context from which to speculate about synchronicity, because it makes predictions about transitions between emergent states of order and nonlocality.[32] This view, however, is not part of mainstream mathematical thought.
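As a concrete picture of the order-to-chaos transitions that nonlinear dynamics studies, here is a minimal sketch of the logistic map, a textbook system (the parameter values are standard illustrative choices; no link to synchronicity is implied beyond the analogy drawn in the paragraph above):

# Logistic map x -> r*x*(1-x): as r grows, the settled behaviour
# moves from periodic order into chaos.
def orbit(r, x=0.2, skip=500, keep=8):
    for _ in range(skip):      # discard the transient
        x = r * x * (1 - x)
    tail = []
    for _ in range(keep):      # record the long-run behaviour
        x = r * x * (1 - x)
        tail.append(round(x, 4))
    return tail

for r in (3.2, 3.5, 3.9):      # period-2, period-4, and chaotic regimes
    print(r, orbit(r))

At r = 3.2 the orbit alternates between two values, at 3.5 between four, and at 3.9 it wanders without settling, which is the kind of transition between emergent states of order the text alludes to.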

Quantum physics

According to a certain view, synchronicity serves as a way of making sense of or describing some aspects of quantum mechanics. Proponents argue that quantum experiments demonstrate that, at least in the microworld of subatomic particles, there is an instantaneous connection between particles no matter how far apart they are. This phenomenon is known as quantum non-locality or entanglement, and proponents argue that it points to a unified field that precedes physical reality.[33] As with archetypal reasoning, they argue that two events can correspond to each other (e.g. particle with particle, or person with person) in a meaningful way.
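For readers curious what “instantaneous connection” means operationally, the sketch below evaluates the textbook CHSH quantity for a spin singlet using the standard quantum prediction E(a, b) = -cos(a - b); the angles are the conventional settings that maximise the violation. This illustrates entanglement statistics only, and is not a demonstration of synchronicity:

from math import cos, pi, sqrt

def E(a, b):
    # Quantum correlation for a singlet measured along angles a and b.
    return -cos(a - b)

# Conventional CHSH settings: a, a2 on one side; b, b2 on the other.
a, a2, b, b2 = 0.0, pi / 2, pi / 4, 3 * pi / 4
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(f"|S| = {abs(S):.3f} (classical bound 2, quantum maximum {2 * sqrt(2):.3f})")

Any theory built on locally shared causes is bounded by |S| <= 2, while the quantum prediction reaches 2√2, which is why these correlations are described as non-local.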

F. David Peat saw parallels between synchronicity and David Bohm’s theory of the implicate order. According to Bohm’s theory,[34] there are three major realms of existence: the explicate (unfolded) order, the implicate (enfolded) order, and a source or ground beyond both. The flowing movement of the whole can thus be understood as a process of continuous enfolding and unfolding of order or structure. From moment to moment there is a rhythmic pulse between implicate and explicate realities. Synchronicity, on this view, would literally take place as a bridge between the implicate and explicate orders, whose complementary natures define the undivided totality.

Religion

Many people believe that the Universe or God causes synchronicities. Among the general public, divine intervention is the most widely accepted explanation for meaningful coincidences.[7]
