Latest Posts

Neurons Unexpectedly Encode Information in the Timing of Their Firing

A temporal pattern of activity observed in human brains for the first time may explain how we can learn so quickly.

By Contributing Writer Elena Renken (See end of article for links.)

An illustration of a network of neurons, each one equipped with its own stopwatch.
Samuel Velasco/Quanta Magazine

For decades, neuroscientists have treated the brain somewhat like a Geiger counter: The rate at which neurons fire is taken as a measure of activity, just as a Geiger counter’s click rate indicates the strength of radiation. But new research suggests the brain may be more like a musical instrument. When you play the piano, how often you hit the keys matters, but the precise timing of the notes is also essential to the melody.

“It’s really important not just how many [neuron activations] occur, but when exactly they occur,” said Joshua Jacobs, a neuroscientist and biomedical engineer at Columbia University who reported new evidence for this claim last month in Cell.

For the first time, Jacobs and two coauthors spied neurons in the human brain encoding spatial information through the timing, rather than rate, of their firing. This temporal firing phenomenon is well documented in certain brain areas of rats, but the new study and others suggest it might be far more widespread in mammalian brains. “The more we look for it, the more we see it,” Jacobs said.

Some researchers think the discovery might help solve a major mystery: how brains can learn so quickly.

The phenomenon is called phase precession. It’s a relationship between the continuous rhythm of a brain wave — the overall ebb and flow of electrical signaling in an area of the brain — and the specific moments that neurons in that brain area activate. A theta brain wave, for instance, rises and falls in a consistent pattern over time, but neurons fire inconsistently, at different points on the wave’s trajectory. In this way, brain waves act like a clock, said one of the study’s coauthors, Salman Qasim, also of Columbia. They let neurons time their firings precisely so that they’ll land in range of other neurons’ firing — thereby forging connections between neurons.

Researchers began noticing phase precession decades ago among the neurons in rat brains that encode information about spatial position. Human brains and rat brains both contain these so-called place cells, each of which is tuned to a specific region or “place field.” Our brains seem to scale these place fields to cover our current surroundings, whether that’s miles of freeway or the rooms of one’s home, said Kamran Diba, a neuroscientist at the University of Michigan. The closer you get to the center of a place field, the faster the corresponding place cell fires. As you leave one place field and enter another, the firing of the first place cell peters out, while that of the second picks up.

But along with rate, there’s also timing: As the rat passes through a place field, the associated place cell fires earlier and earlier with respect to the cycle of the background theta wave. As the rat crosses from one place field into another, the very early firing of the first place cell occurs close in time with the late firing of the next place cell. Their near-coincident firings cause the synapse, or connection, between them to strengthen, and this coupling of the place cells ingrains the rat’s trajectory into the brain. (Information seems to be encoded through the strengthening of synapses only when two neurons fire within tens of milliseconds of each other.)
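
To see how such a timing code works in principle, here is a minimal toy simulation, written in Python, of a single place cell precessing against a theta rhythm. All of the numbers (an 8 Hz theta wave, a one-second crossing of the field, the firing rule) are illustrative assumptions rather than anything taken from the study; the sketch only shows how a cell’s spikes can land at progressively earlier phases of the background wave as the animal moves through the cell’s place field.

    # Toy sketch of phase precession (illustrative assumptions, not the study's analysis).
    # A simulated animal takes one second to cross a place field while an 8 Hz theta
    # rhythm cycles in the background. The cell's preferred firing phase slides backward
    # through the theta cycle as the animal advances, so successive spikes arrive
    # earlier and earlier within each cycle.
    import numpy as np

    theta_freq = 8.0   # Hz, assumed theta frequency
    run_time = 1.0     # seconds to cross the place field
    dt = 0.001         # 1 ms time step

    t = np.arange(0.0, run_time, dt)
    position = t / run_time                                  # 0 -> 1 across the field
    theta_phase = (2 * np.pi * theta_freq * t) % (2 * np.pi)
    preferred_phase = 0.9 * 2 * np.pi * (1.0 - position)     # starts late in the cycle, slides earlier

    # The cell fires when the ongoing theta phase passes its current preferred phase.
    phase_diff = np.angle(np.exp(1j * (theta_phase - preferred_phase)))
    spike_times = t[np.abs(phase_diff) < 0.05]

    for spike in spike_times:
        cycle = int(spike * theta_freq)
        phase_deg = np.degrees((2 * np.pi * theta_freq * spike) % (2 * np.pi))
        print(f"spike at {spike * 1000:6.1f} ms, theta cycle {cycle}, phase {phase_deg:5.1f} deg")

Run it and the printed phase drops from roughly 290 degrees to roughly 40 degrees over successive theta cycles, which is the signature researchers look for; in real recordings, it is the early-phase firing of one cell landing within tens of milliseconds of the late-phase firing of the next that lets the synapse between them strengthen.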

Phase precession is obvious in rats. “It’s so prominent and prevalent in the rodent brain that it makes you want to assume it’s a generalizable mechanism,” Qasim said. Scientists had also identified phase precession in the spatial processing of bats and marmosets, but the pattern was elusive in humans until now.

Monitoring individual neurons is too invasive to do on the average human study participant, but the Columbia team took advantage of data collected years ago from 13 epilepsy patients who had already had electrodes implanted to map the electrical signals of their seizures. The electrodes recorded the firings of individual neurons while patients steered their way through a virtual-reality simulation using a joystick. As the patients maneuvered themselves around, the researchers identified phase precession in 12% of the neurons they were monitoring.

Pulling out these signals required sophisticated statistical analysis, because humans exhibit a more complicated pattern of overlapping brain waves than rodents do — and because less of our neural activity is devoted to navigation. But the researchers could say definitively that phase precession is there.

Other research suggests that phase precession may be crucial beyond navigation. In animals, the phenomenon has been tied to non-spatial perceptions, including the processing of sounds and smells. And in humans, research co-authored by Jacobs last year found phase precession in time-sensitive brain cells. A not-yet-peer-reviewed preprint by cognitive scientists in France and the Netherlands indicated that processing serial images involved phase precession, too. Finally, in Jacobs’ new study, phase precession appeared not just during literal navigation but also as participants progressed toward abstract goals in the simulation.

These studies suggest that phase precession allows the brain to link sequences of times, images and events in the same way as it does spatial positions. “Finding that first evidence really opens the door for it to be some sort of universal coding mechanism in the brain — across mammalian species, possibly,” Qasim said. “You might be missing a whole lot of information coding if you’re not tracking the relative timing of neural activity.”

Neuroscientists are, in fact, on the lookout for a new kind of coding in the brain to answer the longstanding question: How does the brain encode information so quickly? It’s understood that patterns in external data become ingrained in the firing patterns of the network through the strengthening and weakening of synaptic connections. But artificial intelligence researchers typically have to train artificial neural networks on hundreds or thousands of examples of a pattern or concept before the synapse strengths adjust enough for the network to learn the pattern. Mysteriously, humans can typically learn from just one or a handful of examples.

Phase precession could play a role in that disparity. One hint of this comes from a study by Johns Hopkins researchers who found that phase precession showed up in rats learning an unfamiliar track — on their first lap. “As soon as you’re learning something, this pattern for learning sequences is already in place,” Qasim added. “That might facilitate very rapid learning of sequences.”

Phase precession organizes the timing of neural firing so that learning can happen more often than it otherwise could. It arranges for neurons activated by related information to fire in quick-enough succession for the synapse between them to strengthen. “It would point to this notion that the brain is basically computing faster than you would imagine from rate coding alone,” Diba said.

There are other theories about our rapid learning abilities. And researchers stressed that it’s difficult to draw conclusions about any widespread role for phase precession in the brain from the limited studies so far.

Still, a thorough search for the phenomenon may be in order. Bradley Lega, a neurologist at the University of Texas Southwestern Medical Center, said, “There’s a lot of problems that phase precession can solve.”

By Contributing Writer Elena Renken

https://www.quantamagazine.org/authors/elenarenken/

For the original article as it appears in Quanta Magazine, use this link:

New ‘Liquid AI’ Learns Continuously from the World

By Jason Dorrier January 31, 2021

For all its comparisons to the human brain, AI still isn’t much like us. Maybe that’s alright. In the animal kingdom, brains come in all shapes and sizes. So, in a new machine learning approach, engineers did away with the human brain and all its beautiful complexity—turning instead to the brain of a lowly worm for inspiration.

Turns out, simplicity has its benefits. The resulting neural network is efficient, transparent, and here’s the kicker: It’s a lifelong learner.

Whereas most machine learning algorithms can’t hone their skills beyond an initial training period, the researchers say the new approach, called a liquid neural network, has a kind of built-in “neuroplasticity.” That is, as it goes about its work—say, in the future, maybe driving a car or directing a robot—it can learn from experience and adjust its connections on the fly.

In a world that’s noisy and chaotic, such adaptability is essential.

Image Credit: benjamin henon / Unsplash

Worm-Brained Driver

The algorithm’s architecture was inspired by the mere 302 neurons making up the nervous system of C. elegans, a tiny nematode (or worm).

In work published last year, the group, which includes researchers from MIT and Austria’s Institute of Science and Technology, said that despite its simplicity, C. elegans is capable of surprisingly interesting and varied behavior. So, they developed equations to mathematically model the worm’s neurons and then built them into a neural network.

Their worm-brain algorithm was much simpler than other cutting-edge machine learning algorithms, and yet it was still able to accomplish similar tasks, like keeping a car in its lane.

“Today, deep learning models with many millions of parameters are often used for learning complex tasks such as autonomous driving,” Mathias Lechner, a PhD student at Austria’s Institute of Science and Technology and study author, said. “However, our new approach enables us to reduce the size of the networks by two orders of magnitude. Our systems only use 75,000 trainable parameters.”

Now, in a new paper, the group takes their worm-inspired system further by adding a wholly new capability.

Old Worm, New Tricks

The output of a neural network—turn the steering wheel to the right, for instance—depends on a set of weighted connections between the network’s “neurons.”

In our brains, it’s the same. Each brain cell is connected to many other cells. Whether or not a particular cell fires depends on the sum of the signals it’s receiving, each scaled by the strength—or weight—of its connection. When that sum crosses some threshold, the cell fires a signal to its own network of downstream connections.

In a neural network, these weights are called parameters. As the system feeds data through the network, its parameters converge on the configuration yielding the best results.
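
As a concrete (and heavily simplified) illustration, a single unit of this kind can be written in a few lines of Python. The inputs, weights and threshold below are invented for the example, not taken from the paper.

    # Minimal sketch of one artificial "neuron": a weighted sum of its inputs compared
    # against a threshold. All numbers here are invented for illustration.
    import numpy as np

    def neuron_output(inputs, weights, threshold):
        """Fire (return 1.0) if the weighted sum of the inputs reaches the threshold."""
        total = float(np.dot(inputs, weights))
        return 1.0 if total >= threshold else 0.0

    signals = np.array([0.2, 0.9, 0.1])    # activity arriving from upstream units
    weights = np.array([0.5, 1.2, -0.7])   # learned connection strengths (the parameters)

    print(neuron_output(signals, weights, threshold=1.0))   # prints 1.0; the weighted sum is 1.11

Training is the process of nudging those weight values, over many examples, until the network as a whole produces useful outputs.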

Usually, a neural network’s parameters are locked into place after training, and the algorithm’s put to work. But in the real world, this can mean it’s a bit brittle—show an algorithm something that deviates too much from its training, and it’ll break. Not an ideal result.

In contrast, in a liquid neural network, the parameters are allowed to continue changing over time and with experience. The AI learns on the job.
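
What learning on the job can look like at the level of a single unit is sketched below, in the spirit of the group’s liquid time-constant idea: the unit’s state obeys a small differential equation whose effective time constant depends on the incoming signal, so its dynamics keep shifting as inputs change. The equation and constants here are simplified assumptions chosen for illustration, not the authors’ published model.

    # Rough sketch, in the spirit of a "liquid" time-continuous unit: the state follows
    # an ODE whose effective time constant depends on the current input, so the unit's
    # dynamics keep adapting as new data arrives. The equation and constants are
    # simplified assumptions, not the authors' exact model.
    import numpy as np

    def liquid_step(state, inp, dt=0.01, tau=1.0, w=2.0, bias=-1.0, a=1.0):
        gate = 1.0 / (1.0 + np.exp(-(w * inp + bias)))   # input-dependent gate
        dstate = -state / tau + gate * (a - state)        # a larger gate means faster dynamics
        return state + dt * dstate                        # simple Euler integration step

    state = 0.0
    for step_idx, inp in enumerate([0.0] * 50 + [1.0] * 50):   # quiet input, then a step change
        state = liquid_step(state, inp)
        if step_idx % 25 == 0:
            print(f"step {step_idx:3d}  input {inp:.1f}  state {state:.3f}")

Because the dynamics themselves depend on the input stream, the unit’s response is not frozen at the end of training the way a fixed weighted sum is.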

This adaptability means the algorithm is less likely to break as the world throws new or noisy information its way—like, for example, when rain obscures an autonomous car’s camera. Also, in contrast to bigger algorithms, whose inner workings are largely inscrutable, the algorithm’s simple architecture allows researchers to peer inside and audit its decision-making.

Neither its new ability nor its still-diminutive stature seemed to hold the AI back. The algorithm performed as well as or better than other state-of-the-art time-sequence algorithms in predicting next steps in a series of events.

“Everyone talks about scaling up their network,” said Ramin Hasani, the study’s lead author. “We want to scale down, to have fewer but richer nodes.”

An adaptable algorithm that consumes relatively little computing power would make an ideal robot brain. Hasani believes the approach may be useful in other applications that involve real-time analysis of new data like video processing or financial analysis.

He plans to continue dialing in the approach to make it practical.

“We have a provably more expressive neural network that is inspired by nature. But this is just the beginning of the process,” Hasani said. “The obvious question is how do you extend this? We think this kind of network could be a key element of future intelligence systems.”

Is Bigger Better?

At a time when big players like OpenAI and Google are regularly making headlines with gargantuan machine learning algorithms, it’s a fascinating example of an alternative approach headed in the opposite direction.

OpenAI’s GPT-3 algorithm collectively dropped jaws last year, both for its size—at the time, a record-setting 175 billion parameters—and its abilities. A recent Google algorithm topped the charts at over a trillion parameters.

Yet critics worry the drive toward ever-bigger AI is wasteful, expensive, and consolidates research in the hands of a few companies with cash to fund large-scale models. Further, these huge models are “black boxes,” their actions largely impenetrable. This can be especially problematic when unsupervised models are trained on the unfiltered internet. There’s no telling (or perhaps, controlling) what bad habits they’ll pick up.

Increasingly, academic researchers are aiming to address some of these issues. As companies like OpenAI, Google, and Microsoft push to prove the bigger-is-better hypothesis, it’s possible serious AI innovations in efficiency will emerge elsewhere—not despite a lack of resources but because of it. As they say, necessity is the mother of invention.

Jason is managing editor of Singularity Hub. He did research and wrote about finance and economics before moving on to science, technology, and the future. He is curious about pretty much everything, and sad he’ll only ever know a tiny fraction of it all.

Why Computers Will Never Write Good Novels

The power of narrative flows only from the human brain.

BY ANGUS FLETCHER FEBRUARY 10, 2021


You’ve been hoaxed.

The hoax seems harmless enough. A few thousand AI researchers have claimed that computers can read and write literature. They’ve alleged that algorithms can unearth the secret formulas of fiction and film. That Bayesian software can map the plots of memoirs and comic books. That digital brains can pen primitive lyrics1 and short stories—wooden and weird, to be sure, yet evidence that computers are capable of more.

But the hoax is not harmless. If it were possible to build a digital novelist or poetry analyst, then computers would be far more powerful than they are now. They would in fact be the most powerful beings in the history of Earth. Their power would be the power of literature, which although it seems now, in today’s glittering silicon age, to be a rather unimpressive old thing, springs from the same neural root that enables human brains to create, to imagine, to dream up tomorrows. It was the literary fictions of H.G. Wells that sparked Robert Goddard to devise the liquid-fueled rocket, launching the space epoch; and it was poets and playwrights—Homer in The Iliad, Karel Čapek in Rossumovi Univerzální Roboti—who first hatched the notion of a self-propelled metal robot, ushering in the wonder-horror of our modern world of automata.

At the bottom of literature’s strange and branching multiplicity is an engine of causal reasoning.

If computers could do literature, they could invent like Wells and Homer, taking over from sci-fi authors to engineer the next utopia-dystopia. And right now, you probably suspect that computers are on the verge of doing just so: Not too far in the future, maybe in my lifetime even, we’ll have a computer that creates, that imagines, that dreams. You think that because you’ve been duped by the hoax. The hoax, after all, is everywhere: college classrooms, public libraries, quiz games, IBM, Stanford, Oxford, Hollywood. It’s become such a pop-culture truism that Wired enlisted an algorithm, SciFiQ, to craft “the perfect piece of science fiction.”2

Yet despite all this gaudy credentialing, the hoax is a complete cheat, a total scam, a fiction of the grossest kind. Computers can’t grasp the most lucid haiku. Nor can they pen the clumsiest fairytale. Computers cannot read or write literature at all. And they never, never will.

I can prove it to you.

Computers possess brains of unquestionable brilliance, a brilliance that dates to an early spring day in 1937 when a 21-year-old master’s student found himself puzzling over an ungainly contraption that looked like three foosball tables pressed side-to-side in an electrical lab at the Massachusetts Institute of Technology.

The student was Claude Shannon. He’d earned his undergraduate diploma a year earlier from the University of Michigan, where he’d become fascinated with a system of logic devised during the 1850s by George Boole, a self-taught Irish mathematician who’d managed to vault himself, without a university degree, into an Algebra professorship at Queen’s College, Cork. And eight decades after Boole pulled off that improbable leap, Shannon pulled off another. The ungainly foosball contraption that sprawled before him was a “differential analyzer,” a wheel-and-disc analogue computer that solved physics equations with the help of electronic switchboards. Those switchboards were a convoluted mess of ad hoc cables and relays that seemed to defy reason when suddenly Shannon had a world-changing epiphany: Those switchboards and Boole’s logic spoke the same language. Boole’s logic could simplify the switchboards, condensing them into circuits of elegant precision. And the switchboards could then solve all of Boole’s logic puzzles, ushering in history’s first automated logician.

The hoax is everywhere: college classrooms, IBM, Stanford, Oxford, Hollywood.

With this jump of insight, the architecture of the modern computer was born. And as the ensuing years have proved, the architecture is one of enormous potency. It can search a trillion webpages, dominate strategy games, and pick lone faces out of a crowd—and every day, it stretches still further, automating more of our vehicles, dating lives, and daily meals. Yet as dazzling as all these tomorrow-works are, the best way to understand the true power of computer thought isn’t to peer forward into the future fast-approaching. It’s to look backward in time, returning our gaze to the original source of Shannon’s epiphany. Just as that epiphany rested on the earlier insights of Boole, so too did Boole’s insights3 rest on a work more ancient still: a scroll authored by the Athenian polymath Aristotle in the fourth century B.C.

The scroll’s title is arcane: Prior Analytics. But its purpose is simple: to lay down a method for finding the truth. That method is the syllogism. The syllogism distills all logic down to three basic functions: AND, OR, NOT. And with those functions, the syllogism unerringly distinguishes what’s TRUE from what’s FALSE.

So powerful is Aristotle’s syllogism that it became the uncontested foundation of formal logic throughout Byzantine antiquity, the Arabic middle ages, and the European Enlightenment. When Boole laid the mathematical groundwork for modern computing, he could begin by observing:

The subject of Logic stands almost exclusively associated with the great name of Aristotle. As it was presented to ancient Greece … it has continued to the present day.

This great triumph prompted Boole to declare that Aristotle had identified “the fundamental laws of those operations of the mind by which reasoning is performed.” Inspired by the Greek’s achievement, Boole decided to carry it one step further. He would translate Aristotle’s syllogisms into “the symbolical language of a Calculus,” creating a mathematics that thought like the world’s most rational human.

In 1854, Boole published his mathematics as The Laws of Thought. The Laws converted Aristotle’s FALSE and TRUE into two digits—zero and 1—that could be crunched by AND-OR-NOT algebraic equations. And 83 years later, those equations were given life by Claude Shannon. Shannon discerned that the differential analyzer’s electrical off/on switches could be used to animate Boole’s 0/1 bits. And Shannon also experienced a second, even more remarkable, realization: The same switches could automate Boole’s mathematical syllogisms. One arrangement of off/on switches could calculate AND, and a second could calculate OR, and a third could calculate NOT, Frankensteining an electron-powered thinker into existence.

Shannon’s mad-scientist achievement established the blueprint for the computer brain. That brain, in homage to Boole’s arithmetic and Aristotle’s logic, is known now as the Arithmetic Logic Unit or ALU. Since Shannon’s breakthrough in 1937, the ALU has undergone a legion of upgrades: Its clunky off/on switch-arrangements have shrunk to minuscule transistors, been renamed logic gates, multiplied into parallel processors, and used to perform increasingly sophisticated styles of mathematics. But through all these improvements, the ALU’s core design has not changed. It remains as Shannon drew it up, an automated version of the syllogism, so syllogistic reasoning is the only kind of thinking that computers can do. Aristotle’s AND-OR-NOT is hardwired in.
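
To see what it means for AND-OR-NOT to be hardwired in, here is a toy illustration: one-bit binary addition composed of nothing but those three operations. A real ALU is enormously more elaborate, but the primitives are the same; the snippet illustrates the principle only and does not describe any particular chip.

    # Toy illustration: one-bit binary addition built from nothing but AND, OR and NOT.
    # A real ALU is far more elaborate, but these are the primitive operations it composes.
    def AND(a, b): return a & b
    def OR(a, b):  return a | b
    def NOT(a):    return 1 - a

    def XOR(a, b):                       # exclusive-or, itself composed from AND/OR/NOT
        return OR(AND(a, NOT(b)), AND(NOT(a), b))

    def full_adder(a, b, carry_in):
        """Add three one-bit values, returning (sum_bit, carry_out)."""
        sum_bit = XOR(XOR(a, b), carry_in)
        carry_out = OR(AND(a, b), AND(carry_in, XOR(a, b)))
        return sum_bit, carry_out

    print(full_adder(1, 1, 0))   # (0, 1): 1 + 1 is binary 10
    print(full_adder(1, 1, 1))   # (1, 1): 1 + 1 + 1 is binary 11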

This hardwiring has hardly seemed a limitation. In the late 19th century, the American philosopher C.S. Peirce deduced that AND-OR-NOT could be used to compute the essential truth of anything: “mathematics, ethics, metaphysics, psychology, phonetics, optics, chemistry, comparative anatomy, astronomy, gravitation, thermodynamics, economics, the history of science, whist, men and women, wine, meteorology.” And in our own time, Peirce’s deduction has been bolstered by the advent of machine learning. Machine learning marshals the ALU’s logic gates to perform the most astonishing feats of artificial intelligence, enabling Google’s DeepMind, IBM’s Watson, Apple’s Siri, Baidu’s PaddlePaddle, and Amazon’s Web Services to reckon a person’s odds of getting sick, alert companies to possible frauds, winnow out spam, become a whiz at multiplayer video games, and estimate the likelihood that you’d like to purchase something you don’t even know exists.

Although these remarkable displays of computer cleverness all originate in the Aristotelian syllogisms that Boole equated with the human mind, it turns out that the logic of their thought is different from the logic that you and I typically use to think.

Very, very different indeed.

The difference was detected back in the 16th century.

It was then that Peter Ramus, a half-blind, 20-something professor at the University of Paris, pointed out an awkward fact that no reputable academic had previously dared to admit: Aristotle’s syllogisms were extremely hard to understand.4 When students first encountered a syllogism, they were inevitably confused by its truth-generating instructions:

If no β is α, then no α is β, for if some α (let us say δ) were β, then β would be α, for δ is β. But if all β is α, then some α is β, for if no α were β, then no β could be α …

And even after students battled through their initial perplexity, valiantly wrapping their minds around Aristotle’s abstruse mathematical procedures, it still took years to acquire anything like proficiency in Logic.

This, Ramus thundered, was oxymoronic. Logic was, by definition, logical. So, it should be immediately obvious, flashing through our mind like a beam of clearest light. It shouldn’t slow down our thoughts, requiring us to labor, groan, and painstakingly calculate. All that head-strain was proof that Logic was malfunctioning—and needed a fix.

Ramus’ denunciation of Aristotle stunned his fellow professors. And Ramus then startled them further. He announced that the way to make Logic more intuitive was to turn away from the syllogism. And to turn toward literature.

Do we make ourselves more logical by using computers? Or by reading poetry?

Literature exchanged Aristotle’s AND-OR-NOT for a different logic: the logic of nature. That logic explained why rocks dropped, why heavens rotated, why flowers bloomed, why hearts kindled with courage. And by doing so, it equipped us with a handbook of physical power. Teaching us how to master the things of our world, it upgraded our brains into scientists.

Literature’s facility at this practical logic was why, Ramus declared, God Himself had used myths and parables to convey the workings of the cosmos. And it was why literature remained the fastest way to penetrate the nuts and bolts of life’s operation. What better way to grasp the intricacies of reason than by reading Plato’s Socratic dialogues? What better way to understand the follies of emotion than by reading Aesop’s fable of the sour grapes? What better way to fathom war’s empire than by reading Virgil’s Aeneid? What better way to pierce that mystery of mysteries—love—than by reading the lyrics of Joachim du Bellay?

Inspired by literature’s achievement, Ramus tore up Logic’s traditional textbooks. And to communicate life’s logic in all its rich variety, he crafted a new textbook filled with sonnets and stories. These literary creations explained the previously incomprehensible reasons of lovers, philosophers, fools, and gods—and did so with such graceful intelligence that learning felt easy. Where the syllogisms of Aristotle had ached our brains, literature knew just how to talk so that we’d comprehend, quickening our thoughts to keep pace with its own.

Ramus’ new textbook premiered in the 1540s, and it struck thousands of students as a revelation. For the first time in their lives, those students opened a Logic primer—and felt the flow of their innate method of reasoning, only executed faster and more precisely. Carried by a wave of student enthusiasm, Ramus’ textbooks became bestsellers across Western Europe, inspiring educators from Berlin to London to celebrate literature’s intuitive logic: “Read Homer’s Iliad and that most worthy ornament of our English tongue, the Arcadia of Sir Philip Sidney—and see the true effects of Natural Logic, far different from the Logic dreamed up by some curious heads in obscure schools.”5

Four-hundred years before Shannon, here was his dream of a logic-enhancer—and yet the blueprint was radically different. Where Shannon tried to engineer a go-faster human mind with electronics, Ramus did it with literature.

So who was right? Do we make ourselves more logical by using computers? Or by reading poetry? Does our next-gen brain lie in the CPU’s Arithmetic Logic Unit? Or in the fables of our bookshelf?

To our 21st-century eyes, the answer seems obvious: The AND-OR-NOT logic of Aristotle, Boole, and Shannon is the undisputed champion. Computers—and their syllogisms—rule our schools, our offices, our cars, our homes, our everything. Meanwhile, nobody today reads Ramus’ textbook. Nor does anyone see literature as the logic of tomorrow. In fact, quite the opposite: Enrollments in literature classes at universities worldwide are contracting dramatically. Clearly, there is no “natural logic” inside our heads that’s accelerated by the writings of Homer and Maya Angelou.

Except, there is. In a recent plot twist, neuroscience has shown that Ramus got it right.

Our neurons can fire—or not.

This basic on/off function, observed pioneering computer scientist John von Neumann, makes our neurons appear similar—even identical—to computer transistors. Yet transistors and neurons are different in two respects. The first difference was once thought to be very important, but is now viewed as basically irrelevant. The second has been almost entirely overlooked, but is very important indeed.

The first—basically irrelevant—difference is that transistors speak in digital while neurons speak in analogue. Transistors, that is, talk the TRUE/FALSE absolutes of 1 and 0, while neurons can be dialed up to “a tad more than 0” or “exactly ¾.” In computing’s early days, this difference seemed to doom artificial intelligences to cogitate in black-and-white while humans mused in endless shades of gray. But over the past 50 years, the development of Bayesian statistics, fuzzy sets, and other mathematical techniques has allowed computers to mimic the human mental palette, effectively nullifying this first difference between their brains and ours.

The second—and significant—difference is that neurons can control the direction of our ideas. This control is made possible by the fact that our neurons, as modern neuroscientists and electrophysiologists have demonstrated, fire in a single direction: from dendrite to synapse. So when a synapse of neuron A opens a connection to a dendrite of neuron Z, the ending of A becomes the beginning of Z, producing the one-way circuit A → Z.

This one-way circuit is our brain thinking: A causes Z. Or to put it technically, it’s our brain performing causal reasoning.

The best that computers can do is spit out word soups. They leave our neurons unmoved.

Causal reasoning is the neural root of tomorrow-dreaming teased at this article’s beginning. It’s our brain’s ability to think: this-leads-to-that. It can be based on some data or no data—or even go against all data. And it’s such an automatic outcome of our neuronal anatomy that from the moment we’re born, we instinctively think in its story sequences, cataloguing the world into mother-leads-to-pleasure and cloud-leads-to-rain and violence-leads-to-pain. Allowing us, as we grow, to invent afternoon plans, personal biographies, scientific hypotheses, business proposals, military tactics, technological blueprints, assembly lines, political campaigns, and other original chains of cause-and-effect.

But as natural as causal reasoning feels to us, computers can’t do it. That’s because the syllogistic thought of the computer ALU is composed of mathematical equations, which (as the term “equation” implies) take the form of A equals Z. And unlike the connections made by our neurons, A equals Z is not a one-way route. It can be reversed without changing its meaning: A equals Z means exactly the same as Z equals A, just as 2 + 2 = 4 means precisely the same as 4 = 2 + 2.

This feature of A equals Z means that computers can’t think in A causes Z. The closest they can get is “if-then” statements such as: “If Bob bought this toothpaste, then he will buy that toothbrush.” This can look like causation but it’s only correlation. Bob buying toothpaste doesn’t cause him to buy a toothbrush. What causes Bob to buy a toothbrush is a third factor: wanting clean teeth.

Computers, for all their intelligence, cannot grasp this. Judea Pearl, the computer scientist whose groundbreaking work in AI led to the development of Bayesian networks, has chronicled that the if-then brains of computers see no meaningful difference between Bob buying a toothbrush because he bought toothpaste and Bob buying a toothbrush because he wants clean teeth. In the language of the ALU’s transistors, the two equate to the very same thing.
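
A toy simulation makes the point vivid (it is an illustration constructed for this passage, not Pearl’s own code): in one simulated world a hidden desire for clean teeth drives both purchases, while in another the toothpaste purchase itself drives the toothbrush purchase, yet the if-then pattern a correlational learner extracts, the probability of a brush purchase given a paste purchase, comes out the same in both.

    # Toy illustration (not Pearl's code): two simulated worlds with the same
    # toothpaste/toothbrush correlation. In one, a hidden desire for clean teeth
    # drives both purchases; in the other, buying toothpaste directly drives the
    # toothbrush purchase. A purely if-then learner cannot tell them apart.
    import random

    random.seed(0)

    def confounded_world(n=10000):
        rows = []
        for _ in range(n):
            wants_clean_teeth = random.random() < 0.5              # hidden common cause
            toothpaste = wants_clean_teeth and random.random() < 0.9
            toothbrush = wants_clean_teeth and random.random() < 0.9
            rows.append((toothpaste, toothbrush))
        return rows

    def causal_world(n=10000):
        rows = []
        for _ in range(n):
            toothpaste = random.random() < 0.45
            toothbrush = toothpaste and random.random() < 0.9      # paste purchase drives brush purchase
            rows.append((toothpaste, toothbrush))
        return rows

    def p_brush_given_paste(rows):
        brush_when_paste = [brush for paste, brush in rows if paste]
        return sum(brush_when_paste) / len(brush_when_paste)

    print(f"confounded world: P(brush | paste) = {p_brush_given_paste(confounded_world()):.2f}")
    print(f"causal world:     P(brush | paste) = {p_brush_given_paste(causal_world()):.2f}")

Both lines print roughly 0.90. Only an intervention, handing Bob free toothpaste and watching whether brush sales move, would tell the two worlds apart, and that is exactly the kind of question a pattern-matcher working from observed if-then statistics cannot pose.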

This inability to perform causal reasoning means that computers cannot do all sorts of stuff that our human brain can. They cannot escape the mathematical present-tense of 2 + 2 is 4 to cogitate in was or will be. They cannot think historically or hatch future schemes to do anything, including take over the world.

And they cannot write literature.

Literature is a wonderwork of imaginative weird and dynamic variety. But at the bottom of its strange and branching multiplicity is an engine of causal reasoning. The engine we call narrative.

Narrative cranks out chains of this-leads-to-that. Those chains form literature’s story plots and character motives, bringing into being the events of The Iliad and the soliloquies of Hamlet. And those chains also comprise the literary device known as the narrator, which (as narrative theorists from the Chicago School6 onward have shown) generate novelistic style and poetic voice, creating the postmodern flair of “Rashōmon” and the fierce lyricism of I Know Why the Caged Bird Sings.

No matter how nonlogical, irrational, or even madly surreal literature may feel, it hums with narrative logics of cause-and-effect. When Gabriel García Márquez begins One Hundred Years of Solitude with a mind-bending scene of discovering ice, he’s using story to explore the causes of Colombia’s circular history. When William S. Burroughs dishes out delirious syntax in his opioid-memoir Naked Lunch—“his face torn like a broken film of lust and hungers of larval organs stirring”—he’s using style to explore the effects of processing reality through the pistons of a junk-addled mind.

Narrative’s technologies of plot, character, style, and voice are why, as Ramus discerned all those centuries ago, literature can plug into our neurons to accelerate our causal reasonings, empowering Angels in America to propel us into empathy, The Left Hand of Darkness to speed us into imagining alternate worlds, and a single scrap of Nas, “I never sleep, because sleep is the cousin of death,” to catapult us into grasping the anxious mindset of the street.

None of this narrative think-work can be done by computers, because their AND-OR-NOT logic cannot run sequences of cause-and-effect. And that inability is why no computer will ever pen a short story, no matter how many pages of Annie Proulx or O. Henry are fed into its data banks. Nor will a computer ever author an Emmy-winning television series, no matter how many Fleabag scripts its silicon circuits digest.

The best that computers can do is spit out word soups. Those word soups are syllogistically equivalent to literature. But they’re narratively different. As our brains can instantly discern, the verbal emissions of computers have no literary style or poetic voice. They lack coherent plots or psychologically comprehensible characters. They leave our neurons unmoved.

This isn’t to say that AI is dumb; AI’s rigorous circuitry and prodigious data capacity make it far smarter than us at Aristotelian logic. Nor is it to say that we humans possess some metaphysical creative essence—like free will—that computers lack. Our brains are also machines, just ones with a different base mechanism.

But it is to say that there’s a dimension—the narrative dimension of time—that exists beyond the ALU’s mathematical present. And our brains, because of the directional arrow of neuronal transmission, can think in that dimension.

Our thoughts in time aren’t necessarily right, good, or true—in fact, strictly speaking, since time lies outside the syllogism’s timeless purview, none of our this-leads-to-that musings qualify as candidates for rightness, goodness, or truth. They exist forever in the realm of the speculative, the counterfactual, and the fictional. But even so, their temporality allows our mortal brain to do things that the superpowered NOR/NAND gates of computers never will. Things like plan, experiment, and dream.

Things like write the world’s worst novels—and the greatest ones, too.

Angus Fletcher is Professor of Story Science at Ohio State’s Project Narrative and the author of Wonderworks: The 25 Most Powerful Inventions in the History of Literature. His peer-reviewed proof that computers cannot read literature was published in January 2021 in the literary journal Narrative. It was then republished by Nautilus magazine and is now republished again here.

Link to the Nautilus piece: https://nautil.us/issue/95/escape/why-computers-will-never-write-good-novels?mc_cid=2836677c0f&mc_eid=16a8ac698c

Air Force working on sci-fi technology to “reprogram” skin cells and heal wounds

Story at a glance (written by Joseph Guzman)

  • University of Michigan researcher Indika Rajapakse is working with the Air Force on ways to reprogram human cells to speed up the natural human healing process.
  • The technology could eventually lead to the improvement of the long-term health outcomes of soldiers and veterans, as it could be used for treating war wounds, burns, skin grafts or organ transplants.
  • The research involves cellular reprogramming, which is the process of taking one type of human cell, such as a muscle cell, and reprogramming its genome so that it becomes a different type of cell, such as a skin or blood cell.

The U.S. Air Force is funding new cutting-edge technology that it says could heal a person’s wounds more than five times faster than the human body can naturally, officials announced Thursday. 

University of Michigan researcher Indika Rajapakse is working with the Air Force on ways to reprogram human cells to speed up the natural human healing process. Rajapakse secured funding from the military branch to purchase a special live cell imaging microscope and improve an elaborate algorithm to advance the research. 

The technology could eventually lead to the improvement of the long-term health outcomes of soldiers and veterans as it could be used for treating war wounds and burns and for skin grafts or organ transplants, according to the U.S. Air Force.

“We have the resources to do this, and it is our obligation to take full advantage of them,” Rajapakse, associate professor of computational medicine & bioinformatics and associate professor of mathematics at the University of Michigan, said in a statement.

The research involves cellular reprogramming, which is the process of taking one type of human cell, such as a muscle cell, and reprogramming its genome so that it becomes a different type of cell, such as a skin or blood cell. Proteins called transcription factors are used to turn various genes in a cell on and off to regulate cell growth, division, migration and organization. 

Researchers said the application of the right transcription factors to a wound could accelerate healing time fivefold, compared to if the wound healed naturally. The transcription factors could be administered through a “spray-on” bandage where they would be applied directly to wounds, theoretically converting exposed muscle cells into surface skin cells that would heal faster. 

The Air Force notes that “identifying which transcription factors make the required changes to create the right kind of cell requires a long process of trial and error.” Researchers are working on an algorithm to identify the right transcription factors.

Details about when the technology could possibly become a reality were not immediately available. 

Published on Jan 29, 2021

NOTE: The by-line, “Written by David Wolf” found near the top of these pages, is incorrect, but I can’t reach inside the format to remove it.

Link to original article: https://thehill.com/changing-america/well-being/medical-advances/536506-air-force-working-on-sci-fi-technology-to

This AI chip claims to mimic the human brain

Brings us one step closer to a brain-on-a-chip

Written By Mayank Sharma

Illustration of the new brain AI chip
(Image credit: RMIT)

Researchers have developed an electronic chip featuring artificial intelligence (AI) that imitates the way the human brain processes visual information.

AI usually relies heavily on software and off-site data processing. This new prototype, however, fuses the core AI software with image-capturing hardware onto a single device, delivering brain-like functionality.

“It’s getting us closer to an all-in-one AI device inspired by nature’s greatest computing innovation — the human brain,” said Sumeet Walia, who leads the team of Australian, American and Chinese researchers.

Scalable bionics

The research was led by Melbourne’s RMIT University and recently published in the Advanced Materials journal. According to a press release, the design of the prototype was inspired by optogenetics, which is itself an emerging biotechnology tool that allows scientists to use light to manipulate neurons in the human body. 

The AI chip mimics this behaviour since it’s been created using an ultra-thin black phosphorus material, which changes electrical resistance based on different wavelengths of light. The researchers note that by shining different colours of light on the chip, they could achieve different functionalities such as imaging or memory storage.

“Our prototype is a significant advance towards the ultimate in electronics: a brain-on-a-chip that can learn from its environment just like we do,” said Dr Taimur Ahmed, lead author of the study.

The report notes that the new chip builds on an earlier prototype from RMIT, which used light to create and modify memories. 

In addition to capturing and manipulating images, the new chip can now also enhance them, classify numbers, and be trained to recognize patterns and images with an accuracy rate of over 90%.

Dr. Ahmed notes that their chip brings them significantly closer to a “brain-on-a-chip” that learns from its environment just like us.

Walia agrees: “Our aim is to replicate a core feature of how the brain learns, through imprinting vision as memory. The prototype we’ve developed is a major leap forward towards neurorobotics, better technologies for human-machine interaction and scalable bionic systems.”

Link to source: https://www.techradar.com/news/this-ai-chip-claims-to-mimic-the-human-brain

How the brain forms sensory memories: from the inside out.

NOTE: Not “written by;” only collated by David Wolf

Date: November 16, 2020

Source: Max-Planck-Gesellschaft

Summary: A new study identifies a region of the thalamus as a key source of signals encoding past experiences in the neocortex.

Neurons illustration (stock image). Credit: © whitehoune / stock.adobe.com

The brain encodes information collected by our senses. However, to perceive our environment and to constructively interact with it, these sensory signals need to be interpreted in the context of our previous experiences and current aims. In the latest issue of Science, a team of scientists led by Dr. Johannes Letzkus, Research Group Leader at the Max Planck Institute for Brain Research, has identified a key source of this experience-dependent top-down information.

The neocortex is the largest area of the human brain. It has expanded and differentiated enormously during mammalian evolution, and is thought to mediate many of the capacities that distinguish humans from their closest relatives. Moreover, dysfunctions of this area also play a central role in many psychiatric disorders. All higher cognitive functions of the neocortex are enabled by bringing together two distinct streams of information: a ‘bottom-up’ stream carrying signals from the surrounding environment, and a ‘top-down’ stream that transmits internally-generated information encoding our previous experiences and current aims.

“Decades of investigation have elucidated how sensory inputs from the environment are processed. However, our knowledge of internally-generated information is still in its infancy. This is one of the biggest gaps in our understanding of higher brain functions like sensory perception,” says Letzkus. This motivated the team to search for the sources of these top-down signals. “Previous work by us and many other scientists had suggested that the top-most layer of neocortex is likely a key site that receives inputs carrying top-down information. Taking this as a starting point allowed us to identify a region of the thalamus — a brain area embedded deep within the forebrain — as a key candidate source of such internal information.”

Motivated by these observations Dr. M. Belén Pardi, the first author of the study and postdoctoral researcher in the Letzkus lab, devised an innovative approach that enabled her to measure the responses of single thalamic synapses in mouse neocortex before and after a learning paradigm. “The results were very clear,” Pardi remembers. “Whereas neutral stimuli without relevance were encoded by small and transient responses in this pathway, learning strongly boosted their activity and made the signals both faster and more sustained over time.” This suggests that the thalamic synapses in neocortex encode the previous experience of the animal. “We were really convinced that this is the case when we compared the strength of the acquired memory with the change in thalamic activity: This revealed a strong positive correlation, indicating that inputs from the thalamus prominently encode the learned behavioral relevance of stimuli,” says Letzkus.

But is this mechanism selective for these top-down memory-related signals? Sensory stimuli can be relevant because of what we have learned to associate with them, but also merely due to their physical properties. For instance, the louder sounds are the more readily they recruit attention in both humans and animals. However, this is a low-level function that has little to do with previous experience. “Intriguingly, we found very different, indeed opposite, encoding mechanisms for this bottom-up form of relevance” says Pardi.

Given their central importance, the scientists speculated that the way these signals are received in the neocortex must be tightly regulated. Pardi and co-workers addressed this in further experiments, combined with computational modeling in collaboration with the laboratory of Dr. Henning Sprekeler and his team at Technische Universität Berlin. The results indeed identified a previously unknown mechanism that can finely tune the information along this pathway, identifying a specialized type of neuron in the top-most layer of neocortex as a dynamic gatekeeper of these top-down signals.

“These results reveal the thalamic inputs to sensory neocortex as a key source of information about the past experiences that have been associated with sensory stimuli. Such top-down signals are perturbed in a number of brain disorders like autism and schizophrenia, and our hope is that the present findings will also enable a deeper understanding of the maladaptive changes that underlie these severe conditions,” concludes Letzkus.

Story Source:

Materials provided by Max-Planck-Gesellschaft. Note: Content may be edited for style and length.

Max-Planck-Gesellschaft. “From the inside out: How the brain forms sensory memories.” ScienceDaily. ScienceDaily, 16 November 2020. <www.sciencedaily.com/releases/2020/11/201116092246.htm>.

Humans are born with brains ‘prewired’ to see words

Study finds connections to language areas of the brain

Date: October 22, 2020

Source: Ohio State University (NOTE: This article, like all the others in the collection, was NOT written by David Wolf; the articles were curated by this site automatically. Mr. Wolf apologizes for any misunderstandings, but he is unable to remove that claim from the format of this web site.)


Mother reading to child (stock image). Credit: © fizkes / stock.adobe.com

Summary: Humans are born with a part of the brain that is prewired to be receptive to seeing words and letters, setting the stage at birth for people to learn how to read, a new study suggests. Analyzing brain scans of newborns, researchers found that this part of the brain — called the ‘visual word form area’ (VWFA) — is connected to the language network of the brain.

“That makes it fertile ground to develop a sensitivity to visual words — even before any exposure to language,” said Zeynep Saygin, senior author of the study and assistant professor of psychology at The Ohio State University.

The VWFA is specialized for reading only in literate individuals. Some researchers had hypothesized that the pre-reading VWFA starts out being no different than other parts of the visual cortex that are sensitive to seeing faces, scenes or other objects, and only becomes selective to words and letters as children learn to read or at least as they learn language.

“We found that isn’t true. Even at birth, the VWFA is more connected functionally to the language network of the brain than it is to other areas,” Saygin said. “It is an incredibly exciting finding.”

Saygin, who is a core faculty member of Ohio State’s Chronic Brain Injury Program, conducted the study with graduate students Jin Li and Heather Hansen and assistant professor David Osher, all in psychology at Ohio State. Their results were published today in the journal Scientific Reports.

The researchers analyzed fMRI scans of the brains of 40 newborns, all less than a week old, who were part of the Developing Human Connectome Project. They compared these to similar scans from 40 adults who participated in the separate Human Connectome Project.

The VWFA is next to another part of visual cortex that processes faces, and it was reasonable to believe that there wasn’t any difference in these parts of the brain in newborns, Saygin said.

As visual objects, faces have some of the same properties as words do, such as needing high spatial resolution for humans to see them correctly.

But the researchers found that, even in newborns, the VWFA was different from the part of the visual cortex that recognizes faces, primarily because of its functional connection to the language processing part of the brain.

“The VWFA is specialized to see words even before we’re exposed to them,” Saygin said.

“It’s interesting to think about how and why our brains develop functional modules that are sensitive to specific things like faces, objects, and words,” said Li, who is lead author of the study.

“Our study really emphasized the role of already having brain connections at birth to help develop functional specialization, even for an experience-dependent category like reading.”

The study did find some differences in the VWFA in newborns and adults.

“Our findings suggest that there likely needs to be further refinement in the VWFA as babies mature,” Saygin said.

“Experience with spoken and written language will likely strengthen connections with specific aspects of the language circuit and further differentiate this region’s function from its neighbors as a person gains literacy.”

Saygin’s lab at Ohio State is currently scanning the brains of 3- and 4-year-olds to learn more about what the VWFA does before children learn to read and what visual properties the region is responsive to.

The goal is to learn how the brain becomes a reading brain, she said. Learning more about individual variability may help researchers understand differences in reading behavior and could be useful in the study of dyslexia and other developmental disorders.

“Knowing what this region is doing at this early age will tell us a bit more about how the human brain can develop the ability to read and what may go wrong,” Saygin said. “It is important to track how this region of the brain becomes increasingly specialized.”

The research was supported in part by the Alfred P. Sloan Foundation. Analyses were completed using the Ohio Supercomputer Center.

Story Source:

Materials provided by Ohio State University. Original written by Jeff Grabmeier. Note: Content may be edited for style and length.


Journal Reference:

  1. Jin Li, David E. Osher, Heather A. Hansen, Zeynep M. Saygin. Innate connectivity patterns drive the development of the visual word form area. Scientific Reports, 2020; 10 (1). DOI: 10.1038/s41598-020-75015-7

Synapse-Saving Proteins Discovered, Opening Possibilities in Alzheimer’s and Schizophrenia

NOTE: NOT written by, but CURATED by David Wolf

In Alzheimer’s disease, loss of synapses leads to memory problems and other clinical symptoms. In schizophrenia, synapse losses during development predispose an individual to the disorder. Image is credited to UT San Antonio.

Summary: Study identified a new class of proteins that protect synapses from being destroyed. The findings have important implications for both Alzheimer’s disease and schizophrenia.

Source: UT San Antonio

Researchers at The University of Texas Health Science Center at San Antonio (UT Health San Antonio) have discovered a new class of proteins that protect synapses from being destroyed. Synapses are the structures where electrical impulses pass from one neuron to another.

The discovery, published July 13 in the journal Nature Neuroscience, has implications for Alzheimer’s disease and schizophrenia. If proven, increasing the number of these protective proteins could be a novel therapy for the management of those diseases, researchers said.

In Alzheimer’s disease, loss of synapses leads to memory problems and other clinical symptoms. In schizophrenia, synapse losses during development predispose an individual to the disorder.

“We are studying an immune system pathway in the brain that is responsible for eliminating excess synapses; this is called the complement system,” said Gek-Ming Sia, PhD, assistant professor of pharmacology in UT Health San Antonio’s Long School of Medicine and senior author of the research.

“Complement system proteins are deposited onto synapses,” Dr. Sia explained. “They act as signals that invite immune cells called macrophages to come and eat excess synapses during development. We discovered proteins that inhibit this function and essentially act as ‘don’t eat me’ signals to protect synapses from elimination.”

The system sometimes goes awry

During development, synapses are overproduced. Humans have the most synapses at the ages of 12 to 16, and from then to about age 20, there is net synapse elimination that is a normal part of the brain’s maturation. This process requires the complement system.

In adults, synapse numbers are stable, as synapse elimination and formation balance out. But in certain neurological diseases, the brain somehow is injured and begins to overproduce complement proteins, which leads to excessive synapse loss.

“This occurs most notably in Alzheimer’s disease,” Dr. Sia said.

In mouse models of Alzheimer’s disease, researchers have found that the removal of complement proteins from the brain protects it from neurodegeneration, he said.

“We’ve known about the complement proteins, but there was no data to show that there were actually any complement inhibitors in the brain,” Dr. Sia said. “We discovered for the first time that there are, that they affect complement activation in the brain, and that they protect synapses against complement activation.”

Future directions

Dr. Sia and his colleagues will seek to answer interesting questions, including:

  • Whether complement system biology can explain why some people are more resistant and more resilient against certain psychiatric disorders;
  • How the number of complement inhibitors can be changed and whether that could have clinical ramifications;
  • Whether different neurons produce different complement inhibitors, each protecting a certain subset of synapses.

Regarding the last question, Dr. Sia said:

“This could explain why, in certain diseases, there is preferential loss of certain synapses. It could also explain why some people are more susceptible to synapse loss because they have lower levels of certain complement inhibitors.”

The researchers focused on a neuronal complement inhibitor called SRPX2. The studies are being conducted in mice that lack the SRPX2 gene, that demonstrate complement system overactivation and that exhibit excessive synapse loss.

Funding: This project is funded by a NARSAD Young Investigator Award from the Brain and Behavior Research Foundation, a grant from the William and Ella Owens Medical Research Foundation, a Rising STARs Award from The University of Texas System, and grants from two branches of the U.S. National Institutes of Health – the National Institute of Neurological Disorders and Stroke, and the National Institute on Deafness and Other Communication Disorders.

About this neuroscience research article

Source:
UT San Antonio
Media Contacts:
Will Sansom – UT San Antonio
Image Source:
The image is credited to UT San Antonio.

Original Research: Open access
“The endogenous neuronal complement inhibitor SRPX2 protects against complement-mediated synapse elimination during development,” by Qifei Cong, Breeanne M. Soteros, Mackenna Wollet, Jun Hee Kim and Gek-Ming Sia. Nature Neuroscience.


Abstract

The endogenous neuronal complement inhibitor SRPX2 protects against complement-mediated synapse elimination during development

Complement-mediated synapse elimination has emerged as an important process in both brain development and neurological diseases, but whether neurons express complement inhibitors that protect synapses against complement-mediated synapse elimination remains unknown. Here, we show that the sushi domain protein SRPX2 is a neuronally expressed complement inhibitor that regulates complement-dependent synapse elimination. SRPX2 directly binds to C1q and blocks its activity, and SRPX2−/Y mice show increased C3 deposition and microglial synapse engulfment. They also show a transient decrease in synapse numbers and increase in retinogeniculate axon segregation in the lateral geniculate nucleus. In the somatosensory cortex, SRPX2−/Y mice show decreased thalamocortical synapse numbers and increased spine pruning. C3−/−;SRPX2−/Y double-knockout mice exhibit phenotypes associated with C3−/− mice rather than SRPX2−/Y mice, which indicates that C3 is necessary for the effect of SRPX2 on synapse elimination. Together, these results show that SRPX2 protects synapses against complement-mediated elimination in both the thalamus and the cortex.

Link to the source of this article: https://neurosciencenews.com/synapse-proteins-schizophrenia-alzheimers-16661/

New Wearable Whole Brain Scanner Can Help Diagnose and Monitor Neurological Conditions

Posted on: June 11, 2020 in News | Life Science News | Medical Device News
By: Ayesha Rashid, PhD

Researchers have developed a new wearable brain scanning device that measures magnetic signals to construct 3D images of the whole brain in real time; the technology could aid in the diagnosis and monitoring of various neurological conditions.

University of Nottingham researchers have developed a wearable head scanner that can perform whole brain imaging to diagnose brain changes associated with mental illnesses as well as neurodegenerative diseases like Alzheimer’s. The device is worn like a headpiece and can perform scans even when a person is moving, allowing for functional neurological imaging.

The researchers first developed a prototype of the head scanner in 2018, and through further research, have expanded its capacity to include 49 fully-functional channels (an increase from the 13 channels that the initial model had). The device can scan the entire brain and track electrophysiological processes that underlie a number of different mental health problems. The findings are published in the journal Neuroimage.

The development of the wearable head scanner was led by Professor Matt Brookes from the University of Nottingham who, in talking about the device and its potential uses, said, “Understanding mental illness remains one of the greatest challenges facing 21st century science. From childhood illnesses such as autism, to neurodegenerative diseases such as Alzheimer’s, human brain health affects millions of people throughout the lifespan. In many cases, even highly detailed brain images showing what the brain looks like fail to tell us about underlying pathology, and consequently there is an urgent need for new technologies to measure what the brain actually does in health and disease.”

The brain scanner uses magnetoencephalography (MEG) to measure brain activity at a millisecond-by-millisecond level. Owing to their electrochemical properties, brain cells generate electrical signals that they use to function and to communicate with other cells in complex neurological networks. These electrical signals emit very small magnetic fields, which can be measured outside the head using MEG. Unlike the electrical activity measured by electroencephalography (EEG), these magnetic fields are largely unaffected by the variable conductivity profile of the head, which gives MEG better spatial resolution than EEG.
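
For a sense of scale (the figures below are typical order-of-magnitude values from the MEG literature, not measurements from this study): the fields the brain produces at the scalp are on the order of tens to hundreds of femtotesla, roughly a billion times weaker than the Earth's static magnetic field, which is why MEG needs both extremely sensitive detectors and magnetic shielding. A minimal back-of-the-envelope comparison:

```python
# Order-of-magnitude comparison of brain magnetic fields with everyday fields.
# All values are typical textbook figures, not measurements from the Nottingham study.
femto, micro = 1e-15, 1e-6   # unit prefixes, in tesla

evoked_brain_field = 100 * femto   # ~100 fT: a typical evoked MEG response
earth_field = 50 * micro           # ~50 µT: Earth's static field
urban_noise = 0.1 * micro          # ~0.1 µT: rough scale of urban magnetic noise

print(f"Earth's field is ~{earth_field / evoked_brain_field:.0e} times the brain signal")
print(f"Urban noise is ~{urban_noise / evoked_brain_field:.0e} times the brain signal")
# Roughly 5e+08 and 1e+06, respectively -- hence shielded rooms and
# femtotesla-sensitive sensors (SQUIDs, or the OPMs described below).
```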

Conventional MEG systems detect these fields using superconducting quantum interference devices (SQUIDs), sensors that must be cooled in cryogenic dewars (flasks of liquid helium). The need for cryogenic cooling makes the scanners large and fixed in place, and it requires the person being scanned to hold their head still.

To overcome this, the researchers opted to use optically pumped magnetometers (OPMs), which use the quantum mechanical properties of alkali atoms to measure small magnetic fields and do not require a cooling system. They found that the OPMs had sensitivities similar to those of commercial SQUIDs. Although a nascent technology with a limited number of magnetic sensors, the OPM-MEG combination allows for an “adaptable, motion-robust MEG system, with improved data quality, at reduced cost,” according to the study.

In contrast to large, bulky brain scanners where patients must remain motionless, the wearable head scanner permits patients to move freely, allowing for the measurement of functional brain signals under a wider range of activities. The new and improved prototype not only has added functional features, but it can also be used to measure brain activity in children, for whom keeping still is often very difficult.

Measuring brain activity when a person is performing different tasks, such as physical movements or speaking, allows for the identification of specific parts of the brain that are engaged during the given activity. For example, it can show brain areas that control hand movement or vision at millimeter accuracy, demonstrating the device’s high level of precision.

The scanner is based on a novel 3D helmet design that was developed by the researchers in collaboration with Added Scientific in Nottingham. The higher channel count allows for the device to be used to scan the whole brain.

Two designs of the device were developed: a flexible (EEG-like) cap and a rigid helmet. While both designs generated high-quality data, the researchers found the rigid helmet to be the more robust option, as it enabled better reconstruction of field data into 3D images.

“Our group in Nottingham, alongside partners at UCL, are now driving this research forward, not only to develop a new understanding of brain function, but also to commercialize the equipment that we have developed. Components of the scanner have already been sold, via industrial partners, to brain imaging laboratories across the world. It is thought that not only will the new scanner be significantly better than anything that currently exists, but also that it will be significantly cheaper,” said Professor Brookes.

The new whole-brain head scanner opens up tremendous possibilities in functional neuroimaging as it allows for the scanning of patients in real-time as they perform specific tasks, or experience disease-related neurological events. For example, it could be used to scan epileptic patients during seizures to examine abnormal brain activity patterns that may cause them. In this way, the device can help in the diagnosis and real-time monitoring of different neurological conditions and diseases.

To see the original article, use this link: https://xtalks.com/new-wearable-whole-brain-scanner-can-help-diagnose-and-monitor-neurological-conditions-2289/

Frames of consciousness

Can electrical impulses in the brain explain the stuff that dreams are made on? What a new consciousness-detector reveals

In 2014, a month-long bout of dizziness and vomiting brought a 24-year-old woman in China to the hospital. She was no stranger to these symptoms: she’d never been able to walk steadily and suffered from dizziness nearly her whole life. These were serious, debilitating symptoms. And yet, they might have seemed almost mild once CT and MRI scans presented a diagnosis: the woman was missing the majority of her brain – in a manner of speaking. Yes, most of the players on the brain’s ‘stage’ were present: the cerebral cortex, the largest, outermost part of the brain responsible for most of our thinking and cognition, was present and accounted for; the subcortex and the midbrain, with their myriad functions involving movement, memory and body regulation – also present; the brainstem, essential for controlling breathing, sleep and communicating with the rest of the body – present and accounted for.

But none of these arenas hold the majority of the brain’s currency – neurons, the cells that fire impulses to transmit information or relay motor commands. This distinction goes to the cerebellum, a structure situated behind the brainstem and below the cerebral cortex. Latin for ‘little brain’, the highly compact cerebellum occupies only 10 per cent of the brain’s volume, yet contains somewhere between 50 and 80 per cent of the brain’s neurons. And indeed, it was in this sense that the hospitalised Chinese woman was missing the majority of her brain. Incredibly, she had been born without a cerebellum, yet had made it through nearly two and a half decades of life without knowing it was missing. Compare that with strokes and lesions of the cerebral cortex, whose neuron-count is a fraction of the cerebellum’s. These patients can lose the ability to recognise colours or faces and to comprehend language – or they might develop what’s known as a ‘disorder of consciousness’, a condition resulting in loss of responsiveness or any conscious awareness at all.

Understanding consciousness might be the greatest scientific challenge of our time. How can physical stuff, eg electrical impulses, explain mental stuff, eg dreams or the sense of self? Why does a network of neurons in our brain feel like an experience, when a network of computers or a network of people doesn’t feel like anything, as far as we know? Alas, this problem feels impossible. And yet, an unmet need for progress in disorders of consciousness, in which the misdiagnosis rate is between 9 and 40 per cent, demands that we try harder. Without trying harder, we’ll never know if injured patients are truly unconscious – or unresponsive but covertly conscious with a true inner life. Without this knowledge, how can doctors know whether a patient is likely to recover or whether it’s ethical to withdraw care?

Covert consciousness occurs not only in locked-in syndrome, but also in patients with damage to the cerebral cortex. In that instance, covert consciousness is harder to identify because the mental abilities these patients retain are likely impaired. For example, such a patient might be unresponsive not because she is unconscious, but because a lesion to her cerebral cortex has taken away her ability to understand spoken language. And, unlike the brain of a classic locked-in patient such as Jean-Dominique Bauby, the brains of these patients show widespread cerebral damage on MRI, which spells uncertainty to a neurologist trying to determine if anybody’s home. Even after such patients open their eyes and awaken from a coma, a total lack of responsiveness or voluntary movement often follows, garnering a diagnosis of the vegetative state, also known as unresponsive wakefulness syndrome.

To detect covert consciousness in patients diagnosed with disorders of consciousness, an international team of researchers, including my current supervisor, Martin Monti, have used a clever task that exploits the mental imagery that some otherwise unresponsive patients can generate on command. The team wheeled 54 patients – ranging from those who were inconsistently responsive to others who were outright unresponsive – inside a brain scanner. There, the team imaged their brain function using functional MRI (fMRI) in order to deduce what fraction, if any, might be covertly conscious. ‘In a small minority of cases, we can use MRI to detect some awareness in patients who would otherwise seem unconscious,’ Monti said.

Monti and colleagues first asked seemingly unconscious patients to imagine walking through their homes. ‘We saw a flicker of fMRI activity in the parahippocampal gyrus in all but one participant,’ said Adrian Owen, another researcher involved with the project. But asking people to imagine walking through their homes wasn’t enough. To increase their confidence that the person in the scanner was awake and following instructions, the researchers also wanted a second task that would show a different pattern of activation. Finally, one of Monti and Owen’s colleagues, Melanie Boly, mentioned that, according to the research, complex tasks might work better than simple ones. ‘What about tennis?’ Owen responded.

We aren’t necessarily blissfully unconscious when the surgeon puts us under

Much to the researchers’ delight, asking healthy participants to imagine playing tennis yielded a clear and consistent signature of brain activation. Would the same task work in covertly conscious patients? Once inside the MRI machine, researchers asked unresponsive patients to imagine one of two tasks, playing tennis or walking around their house. Exactly how many patients would be responsive was anyone’s guess, Monti told me. But with the first swing of the racket, so to speak, the team found an otherwise unresponsive patient who appeared to understand the tennis task. The patient fulfilled all the criteria for a vegetative state diagnosis but was, in fact, conscious.

The resulting study, eventually published in 2010, was simultaneously hopeful and sobering: five out of 54 patients placed in the scanner were able to generate mental imagery on demand, evidence of minds that could think, feel and understand, but not communicate. Or could they? What if patients could use the two tasks to respond to questions, answering ‘yes’ through internal imagery itself? In short, ‘yes’ could be communicated by imagining tennis and ‘no’ by imagining walking around their house. Once again, the team found success on their very first attempt. Upon asking a patient several questions such as ‘Is your father’s name Thomas?’, the researchers received appropriate responses indicated by the signature image of each task, registered in the MRI. ‘Turns out, even patients who appear [to be] in a coma can have more cognitive function than can be observed with standard clinical methods,’ Monti said.
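
To make the communication scheme concrete, here is a minimal, hypothetical sketch of the kind of decision rule such a protocol implies: compare the signal from a brain region recruited by motor imagery (the tennis task) against one recruited by spatial navigation (the house task), and read out ‘yes’ or ‘no’ from whichever dominates. The function, region names and numbers are invented for illustration; the actual studies rely on full fMRI preprocessing and statistical modelling, not a two-number comparison.

```python
# Toy decoder for the "imagine tennis = yes, imagine your house = no" protocol.
# ROI names, threshold and values are invented; this is not the researchers' pipeline.
def decode_answer(motor_roi_signal, navigation_roi_signal, min_difference=0.5):
    """Return 'yes', 'no' or 'inconclusive' from two ROI activation scores."""
    difference = motor_roi_signal - navigation_roi_signal
    if difference > min_difference:
        return "yes"            # tennis (motor) imagery dominates
    if difference < -min_difference:
        return "no"             # house-navigation imagery dominates
    return "inconclusive"       # difference too small to trust

# Hypothetical activation scores (arbitrary units) for one question:
print(decode_answer(motor_roi_signal=2.1, navigation_roi_signal=0.7))  # -> "yes"
```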

Following in the footsteps of Monti and his colleagues, a study published in 2018 by a different group of researchers from the University of Michigan used a similar fMRI mental-imagery task to show covert consciousness where no one wants to find it: anaesthesia. Alarmingly, one out of the five healthy volunteers who had undergone general anaesthesia for the sake of science using the drug propofol was able to do what shouldn’t be possible: generate mental imagery upon request in the scanner. The implications are clear: we aren’t necessarily blissfully unconscious when the surgeon puts us under.

The fMRI tennis task has shown that under the veils of general anaesthesia and the vegetative state, consciousness occasionally lurks. But the task’s efficacy depends on patients being able to hear questions and understand spoken language, an assumption that is not always true in brain-injured individuals with cerebral lesions.

Consciousness can also occur without language comprehension or hearing. In their absence, a patient might still experience pain, boredom or even silent dreams. Indeed, if only certain regions of the cerebral cortex were lesioned, vivid conscious experiences might persist while the patient remains unable to hear the questions asked by Monti’s team. Because of this, MRI scans might miss many people who are conscious, after all. Rather than depending on mental imagery that must be wilfully generated by brain-injured patients who can hear and understand language, an alternative marker of consciousness – a superior consciousness detector – is needed to shine a piercing light in the dark.

Consciousness is a mystery. A multitude of scientific theories attempt to explain why our brains experience the world, rather than simply receiving input and producing output without feeling. Some of those theories are ‘out there’ – for instance, the framework developed by the British theoretical physicist Sir Roger Penrose and the American anaesthesiologist Stuart Hameroff.

Penrose and Hameroff link consciousness to microtubules, the filament-like structures that help form the skeletons of neurons and other cells. Inside microtubules, electrons can jump between different compartments. In fact, according to the rules that govern our universe at the level smaller than atoms, these electrons can exist in two compartments simultaneously, a state known as quantum superposition. Consciousness enters the picture largely due to an interpretation of quantum physics: the claim that a conscious observer is needed for a particle such as an electron to have a definite location in space, thus ending the superposition. As Hameroff said in an interview for the PBS series Closer to Truth:

You have a superposition of possibilities that collapse to one or the other, and when that happens there’s a moment of subjectivity. This seemed like a stretch and still does to many people, but as Sherlock Holmes said: ‘If you eliminate the impossible, whatever’s left, no matter how seemingly improbable, must be correct.’

This makes sense to Hameroff but not to me. Following that Holmesian adage to ‘eliminate the impossible’, I reject the conflation of quantum spookiness with consciousness. For one, the elaborate theory of consciousness developed by Penrose and Hameroff requires a new physics – quantum gravity – that hasn’t been developed yet. But more than that, Penrose and Hameroff’s framework fails to explain why the cerebellum is not involved in consciousness. Cerebellar neurons have microtubules too. So why can the cerebellum be lost or lesioned without affecting the conscious mind?

An approach that I find more promising comes from the neuroscientist Giulio Tononi at the University of Wisconsin. Rather than asking what brain processes or brain structures are involved in consciousness, Tononi approaches the question from the other direction, asking what essential features underlie conscious experience itself.

For context, compare his approach to another big question: what is life? Living things pass traits to their offspring – so there must be genetic information passed from parent to child (or calf, or seedling). But living things also evolve and adapt to their environment – so this genetic information must be malleable, changing from generation to generation. By approaching the problem from this bottom-up angle, you might have predicted the existence of a complex molecule such as DNA, which stores genetic information but also mutates, allowing for evolution by natural selection. In fact, the physicist Erwin Schrödinger nearly predicted DNA in his book What is Life? (1944) by viewing the problem from this direction. The opposite approach – looking at lots of living things and asking what they have in common – might not show you DNA unless you have an enormously powerful microscope.

Just as life stumped biologists 100 years ago, consciousness stumps neuroscientists today. It’s far from obvious why some brain regions are essential for consciousness and others are not. So Tononi’s approach instead considers the essential features of a conscious experience. When we have an experience, what defines it? First, each conscious experience is specific. Your experience of the colour blue is what it is, in part, because blue is not yellow. If you had never seen any colour other than blue, you would most likely have no concept or experience of colour. Likewise, if all food tasted exactly the same, taste experiences would have no meaning, and vanish. This requirement that each conscious experience must be specific is known as differentiation.

The conscious brain is like a democratic society; the unconscious brain like a totalitarian society

But, at the same time, consciousness is integrated. This means that, although objects in consciousness have different qualities, we never experience each quality separately. When you see a basketball whiz towards you, its colour, shape and motion are bound together into a coherent whole. During a game, you’re never aware of the ball’s orange colour independently of its round shape or its fast motion. By the same token, you don’t have separate experiences of your right and your left visual fields – they are interdependent as a whole visual scene.

Tononi identified differentiation and integration as two essential features of consciousness. And so, just as the essential features of life might lead a scientist to infer the existence of DNA, the essential features of consciousness led Tononi to infer the physical properties of a conscious system.

Future engineers of consciousness-detectors, pay careful attention: these properties are exactly what such a fantastic machine should look for in the brains of unresponsive patients. Because consciousness is specific, a physical system such as the brain must select from a large repertoire of possible states to be conscious. As with the connection between life and DNA, this inference depends crucially on the concept of information. Experiences are informative because they rule out other experiences: the taste of chocolate is not the taste of salt, and the smell of roses is not the smell of garbage. Because these experiences are informative and the brain is identified with consciousness, we infer that, as one’s consciousness increases, so too does information in the brain. In fact, when the brain is packed full of information, its repertoire of possible states increases.

This is like a game of hangman. First, consider playing hangman in English. The English alphabet contains 26 letters, and each letter correctly guessed is moderately informative of the word being revealed. Common letters such as ‘e’ are less informative, and rare letters such as ‘x’ are more informative (after all, how many English words are spelled with ‘x’?). But imagine playing hangman with Chinese script, which contains thousands of characters. Each character is highly informative because it occurs less frequently than almost any letter in English, like a rare beacon signalling a special occasion. And so, because Chinese script has a larger repertoire of possible characters, the entire game of hangman can be won by guessing a single character.
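
The hangman intuition can be made quantitative with Shannon’s notion of surprisal: the information carried by a symbol is -log2 of its probability, so rarer symbols carry more bits. The letter frequencies below are rounded standard estimates for English text, and the Chinese figure is a crude ‘one character in a few thousand’ stand-in, used only to illustrate the contrast, not a linguistic claim.

```python
import math

def surprisal_bits(probability):
    """Shannon information content of an event, in bits: -log2(p)."""
    return -math.log2(probability)

p_e = 0.127          # 'e', the most common English letter (approximate frequency)
p_x = 0.0015         # 'x', one of the rarest English letters
p_hanzi = 1 / 3000   # crude stand-in: one character out of a few thousand in common use

print(f"'e' carries about {surprisal_bits(p_e):.1f} bits")                 # ~3.0 bits
print(f"'x' carries about {surprisal_bits(p_x):.1f} bits")                 # ~9.4 bits
print(f"one character carries about {surprisal_bits(p_hanzi):.1f} bits")   # ~11.6 bits
```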

So it is too with the brain: when the repertoire of possible brain states is larger, its informational content increases, as does its capacity for highly differentiated conscious experiences. But at the same time, consciousness depends on integration: neurons must communicate and share information, otherwise the qualities contained in a conscious experience are no longer bound together. This simultaneous requirement for both differentiation and integration might feel like a paradox. Here, a metaphor borrowed from Tononi offers clarity: the conscious brain is like a democratic society. Everyone is free to cast a different vote (differentiation) and talk to one another freely (integration). The unconscious brain, on the other hand, is more like a totalitarian society. Citizens might be forbidden to talk freely to one another (a lack of integration), or they might be forced to all vote the same way (a lack of differentiation).

Just as both differentiation and integration are necessary for democracy, they’re also necessary for consciousness. This is not merely an armchair musing: Tononi’s ideas are based on clinical observations. Among the most compelling of these data are reports of patients, such as the anonymous Chinese woman, who lack a cerebellum yet retain consciousness. The cerebellum, it turns out, is a totalitarian society. Its neurons, though they are many, don’t freely communicate with one another. Instead, cerebellar neurons are organised in chains: each neuron sends a message to the next neuron down the chain, but there is little communication between chains, nor is there feedback going the other direction down the chain. To visualise this communication style, you might imagine many people standing in a line, each tapping the next person on the shoulder. Thus, while the cerebellum contains the majority of neurons in the brain, its neurons are like citizens in a society with little to no integration. Without integration, there is no consciousness. The cerebral cortex, on the other hand, is a free society. To visualise its communication style, imagine a community of people freely interacting, not only with their neighbours, but also with more distant people across town.

Of course, consciousness is not always present in the cerebral cortex. At night, during a dreamless sleep, differentiation is lost. Large populations of neurons are forced into agreement, firing together in the same pattern. Researchers eavesdropping with EEG (a technology that records electrical brain activity from the scalp) can hear these neurons chanting together like a monolithic crowd in a sports arena. The same loss of differentiation occurs during a generalised epileptic seizure, when large populations of neurons all fire together due to a runaway storm of excitation. As neurons lock into agreement, consciousness vanishes from the mind.

Tononi’s theory that both differentiation and integration are required for consciousness is known as integrated information theory (IIT). Using IIT, one can systematically predict which brain regions are involved in consciousness (the cerebral cortex) and which are not (the cerebellum). In the clinic, ideas derived from IIT are already helping researchers infer which brain-injured patients are conscious and which are not. Based on what IIT says that a conscious system should look like, researchers can infer what kind of response a conscious brain should give following a pulse of energy – a consciousness-detector far more probing than thinking of tennis in an MRI scan.

As Monti is fond of saying, it’s like knocking on wood to infer its density from the resulting sound. In this case, the ‘knocking’ is delivered using a coil of wire to create a magnetic pulse, a technique called transcranial magnetic stimulation (TMS). Researchers then use EEG to listen for the ‘echo’ of this magnetic perturbation, discovering what kind of ‘society’ this particular brain really is.

If the response is highly complex, integrated and differentiated, then we are dealing with a pluralistic society; different brain regions respond in different ways; the patient is probably conscious. But, if the response is uniform, appearing the same everywhere, then we are dealing with a totalitarian society, and the patient is probably unconscious.

The above approach – the best version yet of a consciousness-detector – was introduced by an international team of researchers led by the neuroscientist Marcello Massimini at the University of Milan in 2013. Formally known as the perturbational complexity index, the technique is sometimes referred to as ‘zap-and-zip’ because it involves first zapping the brain with a magnetic pulse and then looking at how difficult the EEG response is to compress, or zip, as a measure of its complexity. Researchers have already used zap-and-zip to determine whether an individual is awake, deeply sleeping, under anaesthesia, or in a disorder of consciousness such as the vegetative state. Soon, the approach could tell us which unresponsive, brain-injured patients (not to mention patients anaesthetised for surgery) are covertly conscious: still feeling and experiencing, despite an inability to communicate. Indeed, this is the closest science has ever come to ‘quantifying the unquantifiable’, as the achievement is described in the journal Science Translational Medicine.
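
The ‘zip’ half of zap-and-zip leans on a standard idea from data compression: a response that is both differentiated and integrated is hard to compress, while a flat or rigidly repetitive one shrinks to almost nothing. The sketch below uses Python’s zlib as a stand-in for the Lempel-Ziv-style complexity measure used in the published index; it illustrates the compressibility intuition only, and is nothing like the full perturbational-complexity-index pipeline, which involves source modelling and careful statistics.

```python
import random
import zlib

def compressed_size(bits):
    """Bytes needed to store the zlib-compressed bit string (a crude complexity proxy)."""
    return len(zlib.compress(bits.encode("ascii"), 9))

n = 4000
flat = "0" * n                                           # no differentiation at all
lockstep = "01" * (n // 2)                               # rigid, repetitive pattern
varied = "".join(random.choice("01") for _ in range(n))  # highly differentiated

for name, signal in [("flat", flat), ("lockstep", lockstep), ("varied", varied)]:
    print(f"{name:9s} response compresses to {compressed_size(signal)} bytes")
# The flat and lockstep "responses" shrink to a few dozen bytes; the varied one
# stays hundreds of bytes long -- the intuition behind the complexity index.
```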

We might be able to turn to artificial intelligences and assess whether they’re conscious

But more mysteries of consciousness remain. In the Monti Lab at the University of California, Los Angeles, I’m currently investigating why children with a rare genetic disorder called Angelman syndrome display electrical brain activity that lacks differentiation even when the kids are awake and experiencing the world around them. There’s no question that these children are conscious, as one clearly sees from watching their rich spectrum of purposeful behaviour. And yet, placing an EEG cap on the head of a child with Angelman syndrome reveals Tononi’s metaphorical totalitarian society – neurons that appear to be locked into agreement.

By showing us what types of brain activity are and aren’t essential for consciousness, patients with Angelman syndrome could offer insights into consciousness similar to those offered by patients lacking part or all of the cerebellum. My recent work in this area shows that, despite the loud chanting revealed by the EEG in Angelman syndrome, neurons nonetheless change their behaviour when these children sleep: they chant still louder, and their chanting is less rich and diverse. I’m optimistic that when someone finally does measure the neural echo through zap-and-zip on children with Angelman syndrome, it will confirm that zap-and-zip is sensitive enough to tell consciousness from dreamless sleep. If not, it will be back to the drawing board for the consciousness-detector.

Consciousness might be the last frontier of science. If IIT continues to guide us in the right direction, we’ll develop better methods of diagnosing disorders of consciousness. One day, we might even be able to turn to artificial intelligences – potential minds unlike our own – and assess whether or not they are conscious. This isn’t science fiction: many serious thinkers – including the late physicist Stephen Hawking, the technology entrepreneur Elon Musk, the computer scientist Stuart Russell at the University of California, Berkeley and the philosopher Nick Bostrom at the Future of Humanity Institute in Oxford – take recent advances in AI seriously, and are deeply concerned about the existential risk that could be posed by human- or superhuman-level AI in the future. When is unplugging an AI ethical? Whoever pulls the plug on the super AI of coming decades will want to know, however urgent their actions, whether there truly is an artificial mind slipping into darkness or just a complicated digital computer making sounds that mimic fear.

While this exact challenge has yet to confront us, scientists and philosophers are already struggling to understand a recent development of growing miniature brain-like organs in culture dishes outside of bodies. Currently, these ‘mini-brains’ are helping biomedical researchers understand diseases affecting the brain, but what if they eventually develop consciousness – and the capacity to suffer – as biological engineering grows more sophisticated in years to come? As the challenges of mini-brains and AI both make clear, the study of consciousness has shed its esoteric roots and is no longer a mere pastime of the ivory tower. Understanding consciousness really matters – after all, the wellbeing of conscious minds depends on it.

Joel Frolich is a postdoctoral researcher studying consciousness in the laboratory of Martin Monti at the University of California, Los Angeles. He is also a content producer at Knowing Neurons.

Published in association with Knowing Neurons, an Aeon Partner.

Edited by Pam Weintraub

This article originally appeared in Aeon Magazine.

Scientists regenerate neurons in mice with spinal cord injury and optic nerve damage

Date: April 30, 2020

Source: Temple University Health System

Summary:  Each year thousands of patients face life-long losses in sensation and motor function from spinal cord injury and related conditions in which axons are badly damaged or severed. New research in mice shows, however, that gains in functional recovery from these injuries may be possible, thanks to a molecule known as Lin28, which regulates cell growth. NOTE: David Wolf is NOT the author of these articles. His program curates them and they are reshown on these pages. Where an original author is mentioned, he ALWAYS gives that person credit. Unfortunately, the existing format makes it impossible for him to remove his name from the postings.

Neuron illustration (stock image). Credit: © peterschreiber.media / Adobe Stock

Like power lines in an electrical grid, long wiry projections that grow outward from neurons — structures known as axons — form interconnected communication networks that run from the brain to all parts of the body. But unlike an outage in a power line, which can be fixed, a break in an axon is permanent. Each year thousands of patients confront this reality, facing life-long losses in sensation and motor function from spinal cord injury and related conditions in which axons are badly damaged or severed.

New research by scientists at the Lewis Katz School of Medicine at Temple University (LKSOM) shows, however, that gains in functional recovery from these injuries may be possible, thanks to a molecule known as Lin28, which regulates cell growth. In a study published online in the journal Molecular Therapy, the Temple researchers describe the ability of Lin28 — when expressed above its usual levels — to fuel axon regrowth in mice with spinal cord injury or optic nerve injury, enabling repair of the body’s communication grid.

“Our findings show that Lin28 is a major regulator of axon regeneration and a promising therapeutic target for central nervous system injuries,” explained Shuxin Li, MD, PhD, Professor of Anatomy and Cell Biology and in the Shriners Hospitals Pediatric Research Center at the Lewis Katz School of Medicine at Temple University and senior investigator on the new study. The research is the first to demonstrate the regenerative ability of Lin28 upregulation in the injured spinal cord of animals.

“We became interested in Lin28 as a target for neuron regeneration because it acts as a gatekeeper of stem cell activity,” said Dr. Li. “It controls the switch that maintains stem cells or allows them to differentiate and potentially contribute to activities such as axon regeneration.”

To explore the effects of Lin28 on axon regrowth, Dr. Li and colleagues developed a mouse model in which animals expressed extra Lin28 in some of their tissues. When full-grown, the animals were divided into groups that sustained spinal cord injury or injury to the optic nerve tracts that connect to the retina in the eye.

Another set of adult mice, with normal Lin28 expression and similar injuries, were given injections of a viral vector (a type of carrier) for Lin28 to examine the molecule’s direct effects on tissue repair.

Extra Lin28 stimulated long-distance axon regeneration in all instances, though the most dramatic effects were observed following post-injury injection of Lin28. In mice with spinal cord injury, Lin28 injection resulted in the growth of axons to more than three millimeters beyond the area of axon damage, while in animals with optic nerve injury, axons regrew the entire length of the optic nerve tract. Evaluation of walking and sensory abilities after Lin28 treatment revealed significant improvements in coordination and sensation.

“We observed a lot of axon regrowth, which could be very significant clinically, since there currently are no regenerative treatments for spinal cord injury or optic nerve injury,” Dr. Li explained.

One of his goals in the near term is to identify a safe and effective means of getting Lin28 to injured tissues in human patients. To do so, his team of researchers will need to develop a vector, or carrier system for Lin28, that can be injected systemically and then home in on injured axons to deliver the therapy directly to multiple populations of damaged neurons.

Dr. Li further wants to decipher the molecular details of the Lin28 signaling pathway. “Lin28 associates closely with other growth signaling molecules, and we suspect it uses multiple pathways to regulate cell growth,” he explained. These other molecules could potentially be packaged along with Lin28 to aid neuron repair.

Other researchers contributing to the work include Fatima M. Nathan, Yosuke Ohtake, Shuo Wang, Xinpei Jiang, Armin Sami, and Hua Guo, Shriners Hospitals Pediatric Research Center and the Department of Anatomy and Cell Biology at the Lewis Katz School of Medicine; and Feng-Quan Zhou, Department of Orthopaedic Surgery and The Solomon H. Snyder Department of Neuroscience at Johns Hopkins University School of Medicine, Baltimore.

The research was supported in part by National Institutes of Health grants R01NS105961, 1R01NS079432, and 1R01EY024575 and by funding from Shriners Research Foundation.

Story Source:

Materials provided by Temple University Health System. Note: Content may be edited for style and length.

Journal Reference:

  1. Fatima M. Nathan, Yosuke Ohtake, Shuo Wang, Xinpei Jiang, Armin Sami, Hua Guo, Feng-Quan Zhou, Shuxin Li. Upregulating Lin28a Promotes Axon Regeneration in Adult Mice with Optic Nerve and Spinal Cord Injury. Molecular Therapy, 2020; DOI: 10.1016/j.ymthe.2020.04.010

Link to the Science Daily page for this article:  https://www.sciencedaily.com/releases/2020/04/200430113041.htm?utm_source=feedburner&utm_medium=email&utm_campaign=Feed%3A+sciencedaily%2Ftop_news%2Ftop_science+%28ScienceDaily%3A+Top+Science+News%29

CITE THIS PAGE: Temple University Health System. “Scientists regenerate neurons in mice with spinal cord injury and optic nerve damage.” ScienceDaily. ScienceDaily, 30 April 2020. <www.sciencedaily.com/releases/2020/04/200430113041.htm>.

Sleeping Brain Waves Draw a Healthy Bath for Neurons

An organized tide of brain waves, blood and spinal fluid pulsing through a sleeping brain may flush away neural toxins that cause Alzheimer’s and other diseases.

NOTE: This article was written by Elena Renken

Illustration of brain formed by bubbles. Lucy Reading-Ikkanda/Quanta Magazine

Scientists are beginning to understand one of the ways in which sleep may benefit the health of the brain: by organizing the flow of fluids that can wash away harmful build-ups of proteins and wastes around neurons.

When you sink into a deep sleep, a cycle of activity starts behind your closed eyelids. First, a slow electrical wave pulses through the brain. A few seconds later, the amount of blood within the brain drops. Then a wave of cerebrospinal fluid reverses its usual direction of flow and moves upward through large cavities in the lower and central portions of the brain. The pattern repeats about three times a minute for the duration of non-REM sleep, the typically dreamless phases when your eyes remain still.

In a recent study, researchers observed the rhythmic sequence uniting these three phenomena in humans for the first time and found the causal links between them. Their finding clarifies how sleep may protect the brain’s well-being by driving elements of an obscure “plumbing system” found in the brain only a few years ago. Someday, the newly discovered mechanism might be the basis for new treatments to prevent cognitive decline in patients with Alzheimer’s disease and other conditions, but it’s already of value for deepening understanding of the physiological dynamics of sleep.

Sleep is a rather surreal phenomenon, says Laura Lewis, an assistant professor of biomedical engineering at Boston University and the senior author on the study. Every day, we enter an altered state of consciousness in which many aspects of our cognition and physiology change dramatically and simultaneously. Yet we can’t live without it. Animals deprived of sleep die, and even insufficient sleep is linked to cognitive decline.

For years, scientists have suspected that the harm caused by disturbed sleep has something to do with an overaccumulation of waste products or toxins in the brain. Studies showed that sleep is important for waste clearance, but the specifics were foggy. In 2012, research in the laboratory of Maiken Nedergaard, a neuroscientist at the University of Rochester Medical Center,  identified what appears to be the brain’s waste clearance pathway, the glymphatic (glial-lymphatic) system. This is a thin set of channels formed by the brain’s glial cells that can conduct fluid within the brain. The problem was that no plausible mechanism seemed to connect the neurological signs of sleep with the glymphatic system or even with movements of the cerebrospinal fluid (CSF) more generally. “We just weren’t sure what was changing, or how,” Lewis said.

Animal experiments did hint, however, that there was some correlation between sleeping brain wave activity and the flow of fluids through the brain. Looking for those patterns in sleeping human brains seemed like a good way to start getting some answers.

The researchers spied on natural sleep cycles late at night, said Nina Fultz, a research assistant in biomedical engineering at Boston University and first author of the study. They wanted to get measurements of brain waves, blood oxygenation and CSF movements simultaneously, so they affixed electrodes to the scalps of their 13 human subjects and put them to bed in MRI machines. Their collaborators at the Martinos Center for Biomedical Imaging helped formulate a novel way to image the brain: An accelerated MRI process took multiple pictures per second and detected the influx of fluid, which showed up as very bright areas on the scans.

That combination of measurements turned out to be powerfully informative and “revealed surprising patterns,” said Helene Benveniste, a professor of anesthesiology at Yale University, who did not work on the project. It showed that the elements of the cycle were aligned in time, one closely following the last. But that was only a correlation, and the researchers wanted proof that each part of the cycle caused the next.

While human subjects slept, scientists used an MRI technique to observe a cyclic pattern of fluid movement in their brains. In this video, red indicates where oxygenated blood flow is abundant, while blue represents influxes of cerebrospinal fluid. The results suggest that as blood moves out of the brain in response to lower neural activity, cerebrospinal fluid rushes in to fill the available space. (DOI: 10.1126/science.aax5440)

To that end, they built a model based on the established dynamics in the brain. It was already known, for example, that electrical activity in the brain changes patterns of blood flow, because highly active neurons need more oxygen and energy. In the scientists’ model, the part of the slow wave in which neurons go quiet drove a reduction of blood flow in parts of the brain. But as the volume of blood in the tissues decreased, CSF flowed in to compensate, presumably through the glymphatic system. The model matched the timing they observed precisely enough to suggest a causal relationship.
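
The causal chain the model formalizes is easy to caricature in a few lines of code: if the skull holds a roughly fixed total volume, then whenever neural activity dips and blood volume falls, something (here, CSF) must flow in to make up the difference. The toy simulation below encodes only that bookkeeping, with invented constants and an invented coupling; it is a sketch of the logic, not the authors’ fitted model.

```python
import math

# Toy version of the slow-wave -> blood -> CSF chain. Every constant is invented.
DT = 0.1        # time step, seconds
CYCLE = 20.0    # one slow oscillation every ~20 s (about three per minute, as in the study)

blood = 1.0            # normalized cerebral blood volume
csf_inflow = []        # instantaneous CSF influx, arbitrary units
for step in range(int(60 / DT)):                               # simulate one minute of sleep
    t = step * DT
    activity = 0.5 * (1 + math.sin(2 * math.pi * t / CYCLE))   # slow wave, ranging 0..1
    target_blood = 0.8 + 0.5 * activity                        # quieter neurons -> less blood
    change = 0.05 * (target_blood - blood)                     # blood volume follows with a lag
    blood += change
    csf_inflow.append(max(0.0, -change / DT))                  # fixed skull volume: blood out ~ CSF in

print(f"peak CSF inflow: {max(csf_inflow):.3f} (arbitrary units)")
print(f"CSF flows inward during {sum(x > 0 for x in csf_inflow) / len(csf_inflow):.0%} of the simulated minute")
```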

It’s not known whether this pattern of fluid movement within large spaces in the brain — the third and fourth ventricles — extends to the rest of the brain. Earlier animal studies did not focus on the ventricles, instead examining how fluid spreads through smaller spaces within brain tissue and clears waste products that accrue there. But what the Boston group saw may reflect very similar fluid dynamics, said Jeffrey Iliff, a professor of psychiatry and neurology at the University of Washington School of Medicine who studies neurodegenerative conditions.

“There are lots of detailed questions that still aren’t settled, but this is a huge step forward,” said Jeffrey Tithof, an assistant professor of mechanical engineering at the University of Rochester. His team studies the glymphatic system, and some of their recent unpublished findings mesh well with the alternation of blood and CSF seen in this human imaging. “We’re all pretty excited about it.”

The study lacks more fine-grained information about how the CSF might clear waste from the brain, but even this limited outline of a mechanism could advance the understanding of cognitive diseases.

Scientists have observed a relationship between sleep and Alzheimer’s: Sleep disruptions often precede the disease, which in turn appears to further disrupt sleep. The biology behind this connection is unknown, but a plausible explanation is that, by changing blood flow in the brain, electrical signals associated with sleep trigger waves of CSF to wash away toxic amyloid plaque around neurons. Anything that interferes with sleep might allow the plaque to accumulate and cause Alzheimer’s. “One of the reasons why I think a study like this is so exciting is that it provides one possible explanation that really might fit,” Iliff said. “But we have to test that.”

Lewis and her team are preparing for studies that will examine these dynamics in subjects with brain disorders. Still, these recent observations in young, healthy brains unveiled a sequence that adds detail to an evolving picture of sleep’s function.

We’ve been asking these questions about sleep for centuries, Iliff said. “It’s something that everyone cares a lot about, and yet it’s something that’s pretty mysterious.” He added, “Our understanding of and experience with sleep is fundamental to who we are and what we do every single day.”

NOTE: This article originally appeared in Quanta Magazine. Here’s a link:

https://tinyurl.com/s7pw4yl

A Physics Magic Trick: Take 2 Sheets of Carbon and Twist

FROM THE NEW YORK TIMES, OCTOBER 30.

The study of graphene was starting to go out of style, but new experiments with sheets of the ultrathin material revealed there was much left to learn.

A device containing an unusual form of carbon: two one-atom-thick sheets pressed together with the lattice of one rotated slightly. Experiments by Dmitri Efetov and his colleagues show that this material can exhibit different electronic properties, including superconductivity.Credit…The Institute of Photonic Sciences

In the universe of office supplies, pencil lead — a mixture of graphite and clay, which does not include any lead — appears unexceptional beyond its ability to draw dark lines.

But 15 years ago, scientists discovered that a single sheet of graphite — a one-atom-thick layer of carbon atoms laid out in a honeycomb pattern — is a wonder. This ultrathin carbon, called graphene, is flexible and lighter than paper yet 200 times stronger than steel. It is also a good conductor of heat and electrical current.

Scientists imagined all of the remarkable things that graphene might be made into: transistors, sensors, novel materials. But after studying and cataloging its properties, scientists moved on to other problems. Practical uses have been slow to come, because part of what makes graphene alluring — its strength — also makes the material difficult to cut into precise shapes.

Last year, graphene burst back on the physics research scene when physicists at the Massachusetts Institute of Technology discovered that stacking two sheets of the material, twisted at a small angle between them, opened up a treasure box of strange phenomena. It started a new field: twistronics.

A paper published Wednesday in the journal Nature takes the most detailed look at this material known as magic-angle twisted bilayer graphene. The international team of scientists carried out a series of experiments and showed that by tweaking graphene’s temperature, magnetic field and the number of electrons able to move freely, the material shifted from behaving like an insulator, where electrical current does not flow, to becoming a superconductor, able to convey electrical current without resistance.

The hope of twistronics is that researchers will be able to take advantage of the superconductivity and other properties to engineer novel electronics for quantum computers and other uses yet to be imagined.

“Our work really sort of shows the richness of the whole system, where we observe all of these effects at once,” said Dmitri K. Efetov, a physicist at the Barcelona Institute of Science and Technology in Spain and the senior author of the paper.

The ability to easily nudge graphene into different types of behavior gives scientists a simple system to explore as they try to understand the underlying physics of its superconducting activity, as well as other behaviors.

“He’s the guy who’s done this the best,” Andrea Young, a physics professor at the University of California, Santa Barbara, who was not involved in the research, said of Dr. Efetov and his collaborators. “Somehow they have the magic touch.”

“There’s a lot of things that could happen, and which one does happen depends on a lot of experimental details,” he said. “We’re just beginning to understand and map out that space. But the hope is that there will be something there that isn’t seen in any other system.”

Stack and Twist

Graphene is an atom-thin sheet of carbon atoms arranged in a hexagonal pattern. Stacking two sheets and twisting one by the “magic angle” of 1.1 degrees yields a superconductive material with other strange properties.

Scientists have long known that graphite is made of stacked sheets of graphene, but they did not know how to look at just a single sheet. In 2004, two physicists at the University of Manchester in England, Andre Geim and Konstantin Novoselov, came up with a decidedly low-tech method to produce it. They used sticky tape — the same kind you buy at an office supply store — to peel apart the layers until only a single sheet of graphene was left.

In laboratories across the world, physicists rushed out to buy their own rolls of tape and pull apart slices of graphene. Dr. Geim and Dr. Novoselov were honored with the 2010 Nobel Prize in Physics. But after a few years, scientists had figured out what they could, and most moved on.

“Until last year, graphene was slowly becoming out of fashion,” said Pablo Jarillo-Herrero, a physicist at the Massachusetts Institute of Technology.

Still, some people like Allan H. MacDonald, a theoretical physicist at the University of Texas, thought that graphene’s mysteries had yet to be fully plumbed.

What if two pieces of graphene were stacked on top of each other? If the layers were aligned perfectly, two graphene layers would behave essentially the same as a single graphene sheet. But when one of the layers was twisted slightly relative to the other, the rotational misalignment of the two lattices produced a repeating “moiré pattern” stretching across many atoms.
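
The geometry behind the moiré pattern is simple to work out: two identical lattices with spacing a, rotated by a small angle θ, produce a superlattice with period approximately a / (2·sin(θ/2)). With graphene’s lattice constant of about 0.246 nanometers and the 1.1-degree magic angle mentioned above, that gives a repeating moiré cell roughly 13 nanometers across, about 50 times the atomic spacing. The snippet below just evaluates that textbook formula; it is not drawn from the Nature paper.

```python
import math

def moire_period_nm(twist_degrees, lattice_constant_nm=0.246):
    """Approximate moiré superlattice period for two identical lattices
    rotated by a small angle: L = a / (2 * sin(theta / 2))."""
    theta = math.radians(twist_degrees)
    return lattice_constant_nm / (2 * math.sin(theta / 2))

for angle in (1.0, 1.1, 1.2):   # the range where the "magic" behavior appears
    print(f"{angle:.1f} degrees -> moiré period ~{moire_period_nm(angle):.1f} nm")
# Roughly 14.1, 12.8 and 11.7 nm, respectively -- dozens of lattice spacings per moiré cell.
```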

“That’s where I started,” Dr. MacDonald said. “What if they were nearly aligned?”

Electrons could easily hop between the two sheets where their lattices lined up. But in places where they were misaligned, the flow would be more difficult. In 2011, Dr. MacDonald and Rafi Bistritzer, a postdoctoral researcher, calculated that at a small angle, the electronic structure would become “flat,” with the electrons jammed like cars trying to make their way across Times Square.

The slowly moving electrons would be more likely to interact with each other — “strongly correlated,” in the language of physics — and from experience, physicists knew that strongly correlated systems are often surprising ones.

“We threw out a few guesses,” Dr. MacDonald said.

The paper was intriguing but largely ignored. The equations, encompassing a multitude of particles at once, are generally far too complex to solve exactly. So Dr. MacDonald and Dr. Bistritzer had made some simplifications to come up with rough answers. Many scientists thought their results were an artifact of their approximations and not a likely description of what would actually be observed.

Philip Kim, a Harvard physicist who did many of the early graphene experiments — Dr. Efetov and Dr. Jarillo-Herrero both worked in his laboratory — thought the glossed-over details in the calculations would be important. “I was skeptical,” he said.

But Dr. Jarillo-Herrero decided to test the prediction. “There was good theoretical motivation to see what would happen,” he said.

Credit…The Institute of Photonic Sciences

The technique still involves sticky tape to pull apart a graphite crystal until just one layer of graphene is left. Then the graphene is torn in two to produce two flakes with perfectly lined-up lattices. Then one of the flakes is rotated by about 1.3 degrees and pressed down on the other.

The layers are only loosely bound and sometimes the scientists observed them snapping back into perfect alignment. Other times, the sheet starts to rotate but stops before lining up entirely, sometimes ending up at the desired 1.1 degrees. The angle does not have to be exact; the behavior seems to occur when the twist angle is between 1.0 and 1.2 degrees.

Last year, Dr. Jarillo-Herrero and his colleagues reported a startling finding. The two layers of graphene, now known as magic-angle twisted bilayer graphene, became a superconductor when cooled to a fraction of a degree above absolute zero. (Dr. MacDonald and Dr. Bistritzer had not predicted that.)

“When we saw superconductivity, all hell broke loose,” Dr. Jarillo-Herrero said. “Then we realized this was a very big thing.”

For all of the amazing tricks of the original work with graphene, scientists were never able to turn it into a superconductor. It was a revelation that its behavior could be transformed simply by putting another sheet on top and twisting it slightly. It was as if the color of two sheets of paper suddenly changed if one were rotated.

Other experimental physicists jumped back into graphene research. “I was completely wrong,” Dr. Kim admitted. “Allan MacDonald’s theory was right.”

In the new Nature paper, Dr. Efetov and his colleagues confirmed the findings of Dr. Jarillo-Herrero, but they found additional permutations of temperature, magnetic field and electron density that also turn the graphene into a superconductor.

They also found that the graphene could exhibit an unusual type of magnetism, arising from the movement of its electrons rather than from the intrinsic magnetism of its atoms, as seen in materials like iron. That behavior has seldom been observed.

Dr. Efetov said his improvement to the recipe of combining the graphene layers was to roll the second layer as it is pressed down, similar to how one puts pressure on a smartphone screen protector to prevent air bubbles from forming while applying it.

He also says the cleaner boundary between the two layers leads to his more detailed results. “What M.I.T. saw, we reproduce,” he said. “But on top of that we observe many more states, which most likely in his case were not seen, because of the dirty devices.”

The new field of twistronics goes beyond graphene. The electronic behavior of the material may depend on the material the graphene is placed on, typically boron nitride. Trying other materials or configurations could yield different results.

Scientists have begun to look at three layers of graphene and a multitude of other two-dimensional materials.

“I think this is just the beginning,” Dr. Kim of Harvard said.

With such a wide variety of materials to work with, he thought scientists might be able to devise novel superconductors that would be suited for quantum computers. “I think that could be really exciting.”

For the NY Times source of this article, please use this link: https://www.nytimes.com/2019/10/30/science/graphene-physics-superconductor.html?emc=rss&partner=rss

Nerve-like ‘optical lace’ gives robots a human touch

LED light illuminating the optical lacework structure when left alone and when deformed. Credit: Xu et al., Sci. Robot. 4, eaaw6304 (2019)

A new synthetic material that creates a linked sensory network similar to a biological nervous system could enable soft robots to sense how they interact with their environment and adjust their actions accordingly.

“We want to have a way to measure stresses and strains for highly deformable objects, and we want to do it using the hardware itself, not vision,” said lab director Rob Shepherd, associate professor of mechanical and aerospace engineering and the paper’s senior author. “A good way to think about it is from a biological perspective. A blind person can still feel because they have sensors in their fingers that deform when their finger deforms. Robots don’t have that right now.”

Shepherd’s lab previously created sensory foams that used optical fibers to detect such deformations. For the optical lace project, the paper’s first author, Patricia Xu, used a flexible, porous lattice structure manufactured from 3-D-printed polyurethane. She threaded its core with stretchable optical fibers containing more than a dozen mechanosensors and then attached an LED light to illuminate the fiber.

When she pressed the lattice structure at various points, the sensors were able to pinpoint changes in the photon flow.

The proprioceptive foam cylinder. Credit: Xu et al., Sci. Robot. 4, eaaw6304 (2019)

“When the structure deforms, you have contact between the input line and the output lines, and the light jumps into these output loops in the structure, so you can tell where the contact is happening,” Xu said. “The intensity of this determines the intensity of the deformation itself.”
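
A hedged sketch of the readout logic Xu describes: each output loop reports how much light it receives, the loop with the largest jump in brightness indicates where the contact happened, and the size of that jump indicates roughly how hard the press was. The channel layout, baseline values and threshold below are invented for illustration; the real device calibrates a dozen or so sensor signals, not five made-up numbers.

```python
# Toy readout for an optical-lace-style sensor: a press shunts light from the input
# fiber into the output loops nearest the contact point. All numbers are invented.

def locate_contact(baseline, reading, noise_floor=0.05):
    """Return (index of the most-affected output loop, deformation estimate),
    or (None, 0.0) if no change rises above the noise floor."""
    changes = [now - before for before, now in zip(baseline, reading)]
    strongest = max(range(len(changes)), key=lambda i: changes[i])
    if changes[strongest] < noise_floor:
        return None, 0.0
    return strongest, changes[strongest]

baseline = [0.10, 0.11, 0.09, 0.10, 0.10]   # resting light levels for five output loops
pressed  = [0.10, 0.12, 0.45, 0.18, 0.10]   # readings during a press near loop 2

loop, magnitude = locate_contact(baseline, pressed)
print(f"contact near output loop {loop}, deformation ~{magnitude:.2f} (arbitrary units)")
```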

[EDITOR’S NOTE: The original article shows several videos. Link to article: https://techxplore.com/news/2019-09-nerve-like-optical-lace-robots-human.html]

The optical lace would not be used as a skin coating for robots, Shepherd said, but would be more like the flesh itself. Robots fitted with the material would be better suited for the health care industry, specifically beginning-of-life and end-of-life care, and manufacturing.

While the optical lace does not have as much sensitivity as a human fingertip, which is jam-packed with nerve receptors, the material is more sensitive to touch than the human back. The material is washable, too, which leads to another application: Shepherd’s lab has launched a startup company to commercialize Xu’s sensors to make garments that can measure a person’s shape and movements for augmented reality training.

The paper, “Optical Lace for Synthetic Afferent Neural Networks,” was published Sept. 11 in Science Robotics.

More information: P.A. Xu et al., “Optical lace for synthetic afferent neural networks,” Science Robotics (2019). robotics.sciencemag.org/lookup … /scirobotics.aaw6304

Journal information: Science Robotics

To see the videos and read the original article, use this link: https://techxplore.com/news/2019-09-nerve-like-optical-lace-robots-human.html