Latest Posts

A Physics Magic Trick: Take 2 Sheets of Carbon and Twist

FROM THE NEW YORK TIMES, OCTOBER 30.

The study of graphene was starting to go out of style, but new experiments with sheets of the ultrathin material revealed there was much left to learn.

A device containing an unusual form of carbon: two one-atom-thick sheets pressed together with the lattice of one rotated slightly. Experiments by Dmitri Efetov and his colleagues show that this material can exhibit different electronic properties, including superconductivity. Credit: The Institute of Photonic Sciences


In the universe of office supplies, pencil lead — a mixture of graphite and clay, which does not include any lead — appears unexceptional beyond its ability to draw dark lines.

But 15 years ago, scientists discovered that a single sheet of graphite — a one-atom-thick layer of carbon atoms laid out in a honeycomb pattern — is a wonder. This ultrathin carbon, called graphene, is flexible and lighter than paper yet 200 times stronger than steel. It is also a good conductor of heat and electrical current.

Scientists imagined all of the remarkable things that graphene might be made into: transistors, sensors, novel materials. But after studying and cataloging its properties, scientists moved on to other problems. Practical uses have been slow to come, because part of what makes graphene alluring — its strength — also makes the material difficult to cut into precise shapes.

Last year, graphene burst back on the physics research scene when physicists at the Massachusetts Institute of Technology discovered that stacking two sheets of the material, twisted at a small angle between them, opened up a treasure box of strange phenomena. It started a new field: twistronics.

A paper published Wednesday in the journal Nature takes the most detailed look yet at this material, known as magic-angle twisted bilayer graphene. The international team of scientists carried out a series of experiments and showed that by tweaking the graphene’s temperature, magnetic field and the number of electrons able to move freely, the material shifted from behaving like an insulator, where electrical current does not flow, to becoming a superconductor, able to convey electrical current without resistance.

The hope of twistronics is that researchers will be able to take advantage of the superconductivity and other properties to engineer novel electronics for quantum computers and other uses yet to be imagined.

“Our work really sort of shows the richness of the whole system, where we observe all of these effects at once,” said Dmitri K. Efetov, a physicist at the Barcelona Institute of Science and Technology in Spain and the senior author of the paper.

The ability to easily nudge graphene into different types of behavior gives scientists a simple system to explore as they try to understand the underlying physics of its superconducting activity, as well as other behaviors.

“He’s the guy who’s done this the best,” Andrea Young, a physics professor at the University of California, Santa Barbara who was not involved in the research, said of Dr. Efetov and his collaborators. “Somehow they have the magic touch.”

“There’s a lot of things that could happen, and which one does happen depends on a lot of experimental details,” he said. “We’re just beginning to understand and map out that space. But the hope is that there will be something there that isn’t seen in any other system.”

Stack and Twist

Graphene is an atom-thin sheet of carbon atoms arranged in a hexagonal pattern. Stacking two sheets and twisting one by the “magic angle” of 1.1 degrees yields a superconductive material with other strange properties.

Scientists have long known that graphite is made of stacked sheets of graphene, but they did not know how to look at just a single sheet. In 2004, two physicists at the University of Manchester in England, Andre Geim and Konstantin Novoselov, came up with a decidedly low-tech method to produce it. They used sticky tape — the same kind you buy at an office supply store — to pull apart layers of graphite until only a single layer of graphene was left.

In laboratories across the world, physicists rushed out to buy their own rolls of tape and pull apart slices of graphene. Dr. Geim and Dr. Novoselov were honored with the 2010 Nobel Prize in Physics. But after a few years, scientists had figured out what they could, and most moved on.

“Until last year, graphene was slowly becoming out of fashion,” said Pablo Jarillo-Herrero, a physicist at the Massachusetts Institute of Technology.

Still, some people like Allan H. MacDonald, a theoretical physicist at the University of Texas, thought that graphene’s mysteries had yet to be fully plumbed.

What if two pieces of graphene were stacked on top of each other? If the layers were aligned perfectly, two graphene layers would behave essentially the same as a single graphene sheet. But when one of the layers was twisted slightly relative to the other, the rotational misalignment of the two lattices produced a repeating “moiré pattern” stretching across many atoms.

“That’s where I started,” Dr. MacDonald said. “What if they were nearly aligned?”

Electrons could easily hop between the two sheets where their lattices lined up. But in places where they were misaligned, the flow would be more difficult. In 2011, Dr. MacDonald and Rafi Bistritzer, a postdoctoral researcher, calculated that at a small angle, the electronic structure would become “flat,” with the electrons jammed like cars trying to make their way across Times Square.

The slowly moving electrons would be more likely to interact with each other — “strongly correlated,” in the language of physics — and from experience, physicists knew that strongly correlated systems are often surprising ones.

“We threw out a few guesses,” Dr. MacDonald said.

The paper was intriguing but largely ignored. The equations, encompassing a multitude of particles at once, are generally far too complex to solve exactly. So Dr. MacDonald and Dr. Bistritzer had made some simplifications to come up with rough answers. Many scientists thought their results were an artifact of their approximations and not a likely description of what would actually be observed.

Philip Kim, a Harvard physicist who did many of the early graphene experiments — Dr. Efetov and Dr. Jarillo-Herrero both worked in his laboratory — thought the glossed-over details in the calculations would be important. “I was skeptical,” he said.

But Dr. Jarillo-Herrero decided to test the prediction. “There was good theoretical motivation to see what would happen,” he said.


The technique still involves using sticky tape to pull apart a graphite crystal until just one layer of graphene is left. The graphene is then torn in two to produce two flakes with perfectly lined-up lattices. One of the flakes is then rotated by about 1.3 degrees and pressed down on the other.

The layers are only loosely bound, and sometimes the scientists observed them snapping back into perfect alignment. Other times, the sheet starts to rotate back but stops before lining up entirely, sometimes ending up at the desired 1.1 degrees. The angle does not have to be exact; the behavior seems to occur when the twist angle is between 1.0 and 1.2 degrees.

Last year, Dr. Jarillo-Herrero and his colleagues reported a startling finding. The two layers of graphene, now known as magic-angle twisted bilayer graphene, became a superconductor when cooled to a fraction of a degree above absolute zero. (Dr. MacDonald and Dr. Bistritzer had not predicted that.)

“When we saw superconductivity, all hell broke loose,” Dr. Jarillo-Herrero said. “Then we realized this was a very big thing.”

For all of the amazing tricks of the original work with graphene, scientists were never able to turn it into a superconductor. It was a revelation that its behavior could be transformed simply by putting another sheet on top and twisting it slightly. It was as if the color of two sheets of paper suddenly changed if one were rotated.

Other experimental physicists jumped back into graphene research. “I was completely wrong,” Dr. Kim admitted. “Allan MacDonald’s theory was right.”

In the new Nature paper, Dr. Efetov and his colleagues confirmed the findings of Dr. Jarillo-Herrero, but they found additional permutations of temperature, magnetic field and electron density that also turn the graphene into a superconductor.

They also found that the graphene could exhibit an unusual type of magnetism, arising from the movement of its electrons rather than from the intrinsic magnetism of its atoms, as in materials like iron. That behavior has seldom been observed.

Dr. Efetov said his improvement to the recipe of combining the graphene layers was to roll the second layer as it is pressed down, similar to how one puts pressure on a smartphone screen protector to prevent air bubbles from forming while applying it.

He also says the cleaner boundary between the two layers leads to his more detailed results. “What M.I.T. saw, we reproduce,” he said. “But on top of that we observe many more states, which most likely in his case were not seen, because of the dirty devices.”

The new field of twistronics goes beyond graphene. The electronic behavior of the material may depend on the material the graphene is placed on, typically boron nitride. Trying other materials or configurations could yield different results.

Scientists have begun to look at three layers of graphene and a multitude of other two-dimensional materials.

“I think this is just the beginning,” Dr. Kim of Harvard said.

With such a wide variety of materials to work with, he thought scientists might be able to devise novel superconductors that would be suited for quantum computers. “I think that could be really exciting.”

 

For the NY Times source of this article, please use this link: https://www.nytimes.com/2019/10/30/science/graphene-physics-superconductor.html?emc=rss&partner=rss

Nerve-like ‘optical lace’ gives robots a human touch


LED light illuminating the optical lacework structure when left alone and when deformed. Credit: Xu et al., Sci. Robot. 4, eaaw6304 (2019)

A new synthetic material that creates a linked sensory network similar to a biological nervous system could enable soft robots to sense how they interact with their environment and adjust their actions accordingly.

“We want to have a way to measure stresses and strains for highly deformable objects, and we want to do it using the hardware itself, not vision,” said lab director Rob Shepherd, associate professor of mechanical and aerospace engineering and the paper’s senior author. “A good way to think about it is from a biological perspective. A blind person can still feel because they have sensors in their fingers that deform when their finger deforms. Robots don’t have that right now.”

Shepherd’s lab previously created sensory foams that used optical fibers to detect such deformations. For the optical lace project, the paper’s lead author, Xu, used a flexible, porous lattice structure manufactured from 3-D-printed polyurethane. She threaded its core with stretchable optical fibers containing more than a dozen mechanosensors and then attached an LED light to illuminate the fiber.

When she pressed the lattice structure at various points, the sensors were able to pinpoint changes in the photon flow.

The proprioceptive foam cylinder. Credit: Xu et al., Sci. Robot. 4, eaaw6304 (2019)

 

“When the structure deforms, you have contact between the input line and the output lines, and the light jumps into these output loops in the structure, so you can tell where the contact is happening,” Xu said. “The intensity of this determines the intensity of the deformation itself.”
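To make the idea concrete, here is a minimal sketch (not the authors’ code) of how a press could be localized from the output lines Xu describes: find the output sensor whose light reading jumps the most, and treat the size of the jump as a proxy for how hard the lattice was pressed. The sensor readings below are hypothetical.

```python
# Illustrative sketch only (not the authors' code): locating a press from
# changes in light intensity on the output fibers. All readings are made up.

def locate_contact(baseline, pressed):
    """Return (sensor_index, intensity_change) for the largest light jump.

    baseline / pressed: lists of photon-flow readings (arbitrary units),
    one value per output-fiber sensor along the lattice core.
    """
    changes = [p - b for b, p in zip(baseline, pressed)]
    idx = max(range(len(changes)), key=lambda i: changes[i])
    return idx, changes[idx]

# Hypothetical readings from a dozen mechanosensors along the core fiber.
baseline = [0.02, 0.03, 0.02, 0.01, 0.02, 0.03, 0.02, 0.02, 0.01, 0.02, 0.03, 0.02]
pressed  = [0.02, 0.03, 0.02, 0.01, 0.45, 0.20, 0.02, 0.02, 0.01, 0.02, 0.03, 0.02]

sensor, jump = locate_contact(baseline, pressed)
print(f"Contact nearest sensor {sensor}; deformation intensity ~ {jump:.2f}")
```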

 

[EDITOR’S NOTE: The original article shows several videos. Link to article: https://techxplore.com/news/2019-09-nerve-like-optical-lace-robots-human.html]

 

The optical lace would not be used as a skin coating for robots, Shepherd said, but would be more like the flesh itself. Robots fitted with the material would be better suited for the health care industry, specifically beginning-of-life and end-of-life care, and manufacturing.

 

While the optical lace does not have as much sensitivity as a human fingertip, which is jam-packed with nerve receptors, the material is more sensitive to touch than the human back. The material is washable, too, which leads to another application: Shepherd’s lab has launched a startup company to commercialize Xu’s sensors to make garments that can measure a person’s shape and movements for augmented reality training.

The paper, “Optical Lace for Synthetic Afferent Neural Networks,” was published Sept. 11 in Science Robotics.

 

More information: P.A. Xu et al., “Optical lace for synthetic afferent neural networks,” Science Robotics (2019). robotics.sciencemag.org/lookup … /scirobotics.aaw6304

Journal information: Science Robotics

 

To see the videos and read the original article, use this link: https://techxplore.com/news/2019-09-nerve-like-optical-lace-robots-human.html

Quantum Supremacy Is Coming: Here’s What You Should Know

NOTE: this article originally appeared in Quanta Magazine, and was NOT written by David Wolf; it was selected by him.

Researchers are getting close to building a quantum computer that can perform tasks a classical computer can’t. Here’s what the milestone will mean.

Photo of a cooling system made out of gold.

IBM’s quantum computer sits inside a device that cools the qubits to a fraction of a degree above absolute zero. The low temperature helps prevent noise from corrupting the qubits.

Quantum computers will never fully replace “classical” ones like the device you’re reading this article on. They won’t run web browsers, help with your taxes, or stream the latest video from Netflix.

What they will do — what’s long been hoped for, at least — will be to offer a fundamentally different way of performing certain calculations. They’ll be able to solve problems that would take a fast classical computer billions of years to perform. They’ll enable the simulation of complex quantum systems such as biological molecules, or offer a way to factor incredibly large numbers, thereby breaking long-standing forms of encryption.

The threshold where quantum computers cross from being interesting research projects to doing things that no classical computer can do is called “quantum supremacy.” Many people believe that Google’s quantum computing project will achieve it later this year. In anticipation of that event, we’ve created this guide for the quantum-computing curious. It provides the information you’ll need to understand what quantum supremacy means, and whether it’s really been achieved.

What is quantum supremacy and why is it important?

To achieve quantum supremacy, a quantum computer would have to perform a calculation that, for all practical purposes, a classical computer can’t.

In one sense, the milestone is artificial. The task that will be used to test quantum supremacy is contrived — more of a parlor trick than a useful advance (more on this shortly). For that reason, not all serious efforts to build a quantum computer specifically target quantum supremacy. “Quantum supremacy, we don’t use [the term] at all,” said Robert Sutor, the executive in charge of IBM’s quantum computing strategy. “We don’t care about it at all.”

There is historical justification for this view. In the 1990s, the first quantum algorithms solved problems nobody really cared about. But the computer scientists who designed them learned things that they could apply to the development of subsequent algorithms (such as Shor’s algorithm for factoring large numbers) that have enormous practical consequences.

“I don’t think those algorithms would have existed if the community hadn’t first worked on the question ‘What in principle are quantum computers good at?’ without worrying about use value right away,” said Bill Fefferman, a quantum information scientist at the University of Chicago.

The quantum computing world hopes that the process will repeat itself now. By building a quantum computer that beats classical computers — even at solving a single useless problem — researchers could learn things that will allow them to build a more broadly useful quantum computer later on.

“Before supremacy, there is simply zero chance that a quantum computer can do anything interesting,” said Fernando Brandão, a theoretical physicist at the California Institute of Technology and a research fellow at Google. “Supremacy is a necessary milestone.”

In addition, quantum supremacy would be an earthquake in the field of theoretical computer science. For decades, the field has operated under an assumption called the “extended Church-Turing thesis,” which says that a classical computer can efficiently perform any calculation that any other kind of computer can perform efficiently. Quantum supremacy would be the first experimental violation of that principle and so would usher computer science into a whole new world. “Quantum supremacy would be a fundamental breakthrough in the way we view computation,” said Adam Bouland, a quantum information scientist at the University of California, Berkeley.

How do you demonstrate quantum supremacy?

By solving a problem on a quantum computer that a classical computer cannot solve efficiently. The problem could be whatever you want, though it’s generally expected that the first demonstration of quantum supremacy will involve a particular problem known as “random circuit sampling.”

A simple example of a random sampling problem is a program that simulates the roll of a fair die. Such a program runs correctly when it properly samples from the possible outcomes, producing each of the six numbers on the die one-sixth of the time as you run the program repeatedly.

In place of a die, this candidate problem for quantum supremacy asks a computer to correctly sample from the possible outputs of a random quantum circuit, which is like a series of actions that can be performed on a set of quantum bits, or qubits. Let’s consider a circuit that acts on 50 qubits. As the qubits go through the circuit, the states of the qubits become intertwined, or entangled, in what’s called a quantum superposition. As a result, at the end of the circuit, the 50 qubits are in a superposition of 2^50 possible states. If you measure the qubits, the sea of 2^50 possibilities collapses into a single string of 50 bits. This is like rolling a die, except instead of six possibilities you have 2^50, or about 1 quadrillion, and not all of the possibilities are equally likely to occur.
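As a rough classical analogue of the sampling task (a toy illustration, not a real quantum simulation), the sketch below draws bitstrings from a randomly chosen, non-uniform distribution over 2^n outcomes. The exponential weights are an arbitrary stand-in for a circuit’s output probabilities; the point is that even writing down the probability table takes 2^n entries, which is exactly what becomes infeasible classically as n grows.

```python
# Toy illustration (not a real quantum simulation): sampling bitstrings from
# a non-uniform distribution over 2**n outcomes, as a classical stand-in for
# the random circuit sampling task described above.
import numpy as np

rng = np.random.default_rng(0)

def sample_random_distribution(n_qubits, n_samples):
    """Sample bitstrings from a randomly chosen distribution over 2**n outcomes."""
    n_outcomes = 2 ** n_qubits                 # a die has 6; 50 qubits have ~1.1 quadrillion
    weights = rng.exponential(size=n_outcomes) # arbitrary stand-in for output probabilities
    probs = weights / weights.sum()
    draws = rng.choice(n_outcomes, size=n_samples, p=probs)
    return [format(int(d), f"0{n_qubits}b") for d in draws]

print(sample_random_distribution(n_qubits=5, n_samples=4))  # e.g. ['01101', '00010', ...]
print(f"50 qubits -> {2**50:,} possible bitstrings")         # 1,125,899,906,842,624
```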

Quantum computers, which can exploit purely quantum features such as superpositions and entanglement, should be able to efficiently produce a series of samples from this random circuit that follow the correct distribution. For classical computers, however, there’s no known fast algorithm for generating these samples — so as the range of possible samples increases, classical computers quickly get overwhelmed by the task.

What’s the holdup?

As long as quantum circuits remain small, classical computers can keep pace. So to demonstrate quantum supremacy via the random circuit sampling problem, engineers need to be able to build quantum circuits of at least a certain minimum size — and so far, they can’t.

Circuit size is determined by the number of qubits you start with, combined with the number of times you manipulate those qubits. Manipulations in a quantum computer are performed using “gates,” just as they are in a classical computer. Different kinds of gates transform qubits in different ways — some flip the value of a single qubit, while others combine two qubits in different ways.  If you run your qubits through 10 gates, you’d say your circuit has “depth” 10.

To achieve quantum supremacy, computer scientists estimate a quantum computer would need to solve the random circuit sampling problem for a circuit in the ballpark of 70 to 100 qubits with a depth of around 10. If the circuit is much smaller than that, a classical computer could probably still manage to simulate it — and classical simulation techniques are improving all the time.

Yet the problem quantum engineers now face is that as the number of qubits and gates increases, so does the error rate. And if the error rate is too high, quantum computers lose their advantage over classical ones.

There are many sources of error in a quantum circuit. The most crucial one is the error that accumulates in a computation each time the circuit performs a gate operation.

At the moment, the best two-qubit quantum gates have an error rate of around 0.5%, meaning that there’s about one error for every 200 operations. This is astronomically higher than the error rate in a standard classical circuit, where there’s about one error every 10^17 operations. To demonstrate quantum supremacy, engineers are going to have to bring the error rate for two-qubit gates down to around 0.1%.
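A back-of-envelope calculation (mine, not from the article) shows why those rates matter: if each two-qubit gate fails independently with probability p, the chance that a circuit containing g gates runs with no error at all is roughly (1 − p)^g. The gate counts below are hypothetical.

```python
# Back-of-envelope sketch: probability a circuit runs error-free if each gate
# fails independently with probability p_gate_error.
def error_free_probability(p_gate_error, n_gates):
    return (1 - p_gate_error) ** n_gates

for p in (0.005, 0.001):            # today's ~0.5% error rate vs. the ~0.1% target
    for g in (200, 500, 1000):      # hypothetical numbers of gate operations
        print(f"p={p:.3f}, gates={g}: P(no error) = {error_free_probability(p, g):.2f}")
```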

How will we know for sure that quantum supremacy has been demonstrated?

Some milestones are unequivocal. Quantum supremacy is not one of them. “It’s not like a rocket launch or a nuclear explosion, where you just watch and immediately know whether it succeeded,” said Scott Aaronson, a computer scientist at the University of Texas, Austin.

To verify quantum supremacy, you have to show two things: that a quantum computer performed a calculation fast, and that a classical computer could not efficiently perform the same calculation.

It’s the second part that’s trickiest. Classical computers often turn out to be better at solving certain kinds of problems than computer scientists expected. Until you’ve proved a classical computer can’t possibly do something efficiently, there’s always the chance that a better, more efficient classical algorithm exists. Proving that such an algorithm doesn’t exist is probably more than most people will need in order to believe a claim of quantum supremacy, but such a claim could still take some time to be accepted.

How close is anyone to achieving it?

By many accounts Google is knocking on the door of quantum supremacy and could demonstrate it before the end of the year. (Of course, the same was said in 2017.) But a number of other groups have the potential to achieve quantum supremacy soon, including those at IBM, IonQ, Rigetti and Harvard University.

These groups are using several distinct approaches to building a quantum computer. Google, IBM and Rigetti perform quantum calculations using superconducting circuits. IonQ uses trapped ions. The Harvard initiative, led by Mikhail Lukin, uses rubidium atoms. Microsoft’s approach, which involves “topological qubits,” seems like more of a long shot.

Each approach has its pros and cons.

Superconducting quantum circuits have the advantage of being made out of a solid-state material. They can be built with existing fabrication techniques, and they perform very fast gate operations. In addition, the qubits don’t move around, which can be a problem with other technologies. But they also have to be cooled to extremely low temperatures, and each qubit in a superconducting chip has to be individually calibrated, which makes it hard to scale the technology to the thousands of qubits (or more) that will be needed in a really useful quantum computer.

Ion traps have a contrasting set of strengths and weaknesses. The individual ions are identical, which helps with fabrication, and ion traps give you more time to perform a calculation before the qubits become overwhelmed with noise from the environment. But the gates used to operate on the ions are very slow (thousands of times slower than superconducting gates) and the individual ions can move around when you don’t want them to.

At the moment, superconducting quantum circuits seem to be advancing fastest. But there are serious engineering barriers facing all of the different approaches. A major new technological advance will be needed before it’s possible to build the kind of quantum computers people dream of. “I’ve heard it said that quantum computing might need an invention analogous to the transistor — a breakthrough technology that performs nearly flawlessly and which is easily scalable,” Bouland said. “While recent experimental progress has been impressive, my inclination is that this hasn’t been found yet.”

Say quantum supremacy has been demonstrated. Now what?

If a quantum computer achieves supremacy for a contrived task like random circuit sampling, the obvious next question is: OK, so when will it do something useful?

The usefulness milestone is sometimes referred to as quantum advantage. “Quantum advantage is this idea of saying: For a real use case — like financial services, AI, chemistry — when will you be able to see, and how will you be able to see, that a quantum computer is doing something significantly better than any known classical benchmark?” said Sutor of IBM, which has a number of corporate clients like JPMorgan Chase and Mercedes-Benz who have started exploring applications of IBM’s quantum chips.

A second milestone would be the creation of fault-tolerant quantum computers. These computers would be able to correct errors within a computation in real time, in principle allowing for error-free quantum calculations. But the leading proposal for creating fault-tolerant quantum computers, known as “surface code,” requires a massive overhead of thousands of error-correcting qubits for each “logical” qubit that the computer uses to actually perform a computation. This puts fault tolerance far beyond the current state of the art in quantum computing. It’s an open question whether quantum computers will need to be fault tolerant before they can really do anything useful. “There are many ideas,” Brandão said, “but nothing is for sure.”

Correction July 18, 2019: Mikhail Lukin’s group at Harvard is making a quantum computer out of rubidium atoms controlled with laser light, not photons as the article originally stated.

NOTE: Original article can be found at this site: https://www.quantamagazine.org/quantum-supremacy-is-coming-heres-what-you-should-know-20190718/

Engineers make injectable tissues a reality

New and inexpensive device encases delicate cells into protective microgels

NOTE: This article was NOT written by David Wolf, but rather curated by him.

Doctoral student Mohamed Gamal uses a newly developed cell encapsulation device.   Credit: Nathan Skolski, UBC Okanagan

A simple injection that can help regrow damaged tissue has long been the dream of physicians and patients alike. A new study from researchers at UBC Okanagan moves that dream closer to reality with a device that makes encapsulating cells much faster, cheaper and more effective.

“The idea of injecting different kinds of tissue cells is not a new one,” says Keekyoung Kim, assistant professor of engineering at UBC Okanagan and study co-author. “It’s an enticing concept because by introducing cells into damaged tissue, we can supercharge the body’s own processes to regrow and repair an injury.”

Kim says everything from broken bones to torn ligaments could benefit from this kind of approach and suggests even whole organs could be repaired as the technology improves.

The problem, he says, is that cells on their own are delicate and tend not to survive when injected directly into the body.

“It turns out that to ensure cell survival, they need to be encased in a coating that protects them from physical damage and from the body’s own immune system,” says Mohamed Gamal, doctoral student in biomedical engineering and study lead author. “But it has been extremely difficult to do that kind of cell encapsulation, which has until now been done in a very costly, time consuming and wasteful process.”

Kim and Gamal have solved that problem by developing an automated encapsulation device that encases many cells in a microgel using a specialized blue laser and purifies them to produce a clean useable sample in just a few minutes. The advantage of their system is that over 85 per cent of the cells survive and the process can be easily scaled up.

“Research in this area has been hampered by the cost and lack of availability of mass-produced cell encapsulated microgels,” says Kim. “We’ve solved that problem and our system could provide thousands or even tens of thousands of cell-encapsulated microgels rapidly, supercharging this field of bioengineering.”

In addition to developing a system that’s quick and efficient, Gamal says the equipment is made up of readily available and inexpensive components.

“Any lab doing this kind of work could set up a similar system anywhere from a few hundred to a couple of thousand dollars, which is pretty affordable for lab equipment,” says Gamal.

The team is already looking at the next step, which will be to embed different kinds of stem cells — cells that haven’t yet differentiated into specific tissue types — into the microgels alongside specialized proteins or hormones called growth factors. The idea would be to help the stem cells transform into the appropriate tissue type once they’re injected.

“I’m really excited to see where this technology goes next and what our encapsulated stem cells are capable of.”

The study was published in the journal Lab on a Chip with funding from the Natural Sciences and Engineering Research Council of Canada and the Canadian Foundation for Innovation.

Story Source:

Materials provided by University of British Columbia Okanagan campus. Note: Content may be edited for style and length.

Use this link to find the Science Digest article: https://www.sciencedaily.com/releases/2019/04/190425104312.htm?utm_source=feedburner&utm_medium=email&utm_campaign=Feed%3A+sciencedaily%2Ftop_news%2Ftop_science+%28ScienceDaily%3A+Top+Science+News%29

Neuroscientists Just Found a Way to Image the Brain 1,000 Times Faster Than Ever Before, with Stunning Resolution

By Shelly Fan, February 4, 2019

You know those stories of scientific breakthroughs, in which the lone genius scientist struggles for years until his “eureka!” moment?

Yeah, that’s a lie.

With the big data revolution well under way, today scientific discoveries are the result of massive collaborations.

Case in point? Last week, 18 institutions teamed up and devised a method to image entire brains 1,000 times faster than anything before. In what the team dubbed an “Avengers, unite!” moment, they combined strengths to physically blow up brain tissue to over 20 times its usual size and scanned its inner circuits and molecular constituents—down to the nano-scale level—using a new type of blazingly fast microscopy.

The results are stunning videos of the fly and mouse brains, in which every nook and cranny is illuminated and reconstructed in neon colors.

(MOVIE 9 SHOWING INTERIOR VIEW OF BRAIN AT HIGH RESOLUTION. GO TO ORIGINAL ARTICLE TO VIEW. SEE LINK AT BOTTOM OF THIS ARTICLE.)

It’s not just eye-candy. Scientists have been eagerly devising new ways to map entire brains with increasing precision and resolution, with the hope of unlocking the brain’s mysteries—which circuits underlie what behaviors? How are memories formed, saved, and retrieved? How do circuits reorganize with learning or age?

But the end game is vastly more ambitious: simulate a whole brain inside a computer, a feat that some say will eventually lead to general AI. Previous attempts were stymied by the years it took to painstakingly piece together a single brain.

But now, “we’ve crossed a threshold in imaging performance,” said Dr. Ed Boyden at MIT, one of the leading investigators in this project. “That’s why we’re so excited. We’re not just scanning incrementally more brain tissue, we’re scanning entire brains.”

The teams published their results in Science.

The Brain Imaging Cheat Sheet

Most of us have heard of MRI as a way to image the brain and suss out potential problems in the clinic.

But things get much weirder in the lab, where scientists are mapping out every twist and turn of a neuron’s branches and their connections—synapses, which mainly sit on little protrusions on a branch called a dendritic spine.

To tease apart structures within the fatty mush of tissue, scientists have long used tiny little protein lights to illuminate parts of the brain. By tinkering with an animal’s genome, scientists can “tag” these lights—in all colors of the rainbow—to specific structures such as the synapses or the branches.

They then painstakingly take out the brain, slice it into wafer-thin sections, and treat those sections in various ways to enhance the light’s brilliance under the microscope. The sections are imaged one by one, and stitched back together into 3D using advanced algorithms.

It’s a painfully tedious process that takes weeks, if not months or years. What’s more, because microscopes are inherently limited in resolution, the reconstructions can end up blurry—not a great place to start if you want to simulate a brain.

Alice in Wonderland

A few years back, Boyden tackled the problem with a seemingly nutty solution: why not physically expand the brain, so that its details are easier to see?

Using a swellable gel found in—not joking—diapers, Boyden’s team encapsulated brains inside the material and, upon adding water, blew the brains up to roughly 20 times their original size.

You’d think the process would tear delicate brain tissue apart. Instead, the neurons and circuits were protected by the gel, and every single molecule within the brain kept its relative position after expansion.

Before and after expanding the fruit fly brain. Image from Gao et al., SCIENCE Vol. 363, Issue 6424, eaau8302 (18 Jan 2019). Republished with permission from American Association for the Advancement of Science (AAAS).

Scientists around the world have since adapted the technique, resulting in stunning findings about our brain’s organizational patterns. Yet the teams all ran into a problem: with greater scale came greater imaging times, which made mapping large circuits a chore and whole brains impossible. What’s more, the longer you zap the light proteins, the easier it is to bleach the protein—resulting in black spots in the image—or photodamage the brain tissue.

Large-Scale Imaging

That’s when Boyden gave Dr. Eric Betzig at HHMI’s Janelia Research Campus a call.

Back in 2014, Betzig and colleagues developed a powerful new tool that rapidly collects high-resolution images and minimizes damage. The technology, “lattice light-sheet microscope,” sweeps a thin sheet of light multiple times across every plane of the brain tissue. This keeps only one focal plane in view, which helps minimize blurriness.

The scope also splits up its light beams in a way that greatly reduces the intensity of the light—the lower the intensity, the less photodamage, which allows researchers to image the brain for longer periods without frying their precious samples.

What’s more, because the light captures a whole plane instead of a set of points, it captures images much faster, without sacrificing resolution.

The microscope allowed Betzig and team to track the subcellular dynamics of healthy, living cells as they went about their business. The results caught Boyden’s eye: expansion microscopy, which turns brains virtually translucent, could be perfect for Betzig’s technology, which shines light from one side and snaps a photo from the other.

Organelles of various shapes and sizes. Image from Gao et al., SCIENCE Vol. 363, Issue 6424, eaau8302 (18 Jan 2019). Republished with permission from American Association for the Advancement of Science (AAAS).

Betzig was more cynical.

“I thought they were full of it…I was going to show them,” he said.

Instead, the combo proved much more powerful than anyone expected. Under Betzig’s microscope, expanded mouse tissue gave up all its structural secrets. Dotted along each neuron’s branches were mushroom-shaped protrusions, dendritic spines, that are normally hard to see clearly—the “neck” of the mushroom often looks like a blurry smudge. Instead, even the smallest necks were in sharp focus.

Dendritic spines. Image from Gao et al., SCIENCE Vol. 363, Issue 6424, eaau8302 (18 Jan 2019). Republished with permission from American Association for the Advancement of Science (AAAS).

The team then compared the density of synapses in various parts of the mouse’s cortex as proof-of-concept, and the new technique allowed them to analyze millions of synapses in just a few days.

(MOVIE 2 SHOWING INTERIOR VIEW OF BRAIN AT HIGH RESOLUTION. GO TO ORIGINAL ARTICLE TO VIEW. SEE LINK AT BOTTOM OF THIS ARTICLE.)

“Using electron microscopy, this would have taken years to complete,” said first author Dr. Ruixuan Gao.

Whole-Brain Revolution

Encouraged, the team turned to whole brains, starting with the fly. A fly’s brain is roughly the size of a poppy seed and contains about 100,000 neurons linked up in complex circuits.

Breaking the brains down into roughly 50,000 3D “cubes,” the team devised new algorithms that stitched the blocks back together like a puzzle. Other members then came in to wrangle over 40 million synapses into stunning, neon-colored visuals to make the data more interpretable.

In one set of experiments, the team traced a smell-related circuit across several brain regions, identified a specific set of neurons within those circuits, and counted all of their synapses. It’s a scale and depth previously unimaginable.

Nevertheless, the whole-brain imaging revolution is just starting, and the team believes there’s more to improve. Certain brain regions don’t like to be tagged with protein light bulbs, and it’s always hard to squeeze enough in to light up tiny areas. Not all tissues like to be stretched—for example, collagen, the stuff that makes up connective tissue. Perhaps most importantly, each processing step could introduce artifacts into the tissue, which means scientists will have to carefully validate their results.

But the future looks brighter than ever before. Scientists could finally look at large-scale maps of the brain to pinpoint changes that drive brain disorders, explain how we make memories and decisions, or track circuit changes throughout an entire lifetime. In time, we could get to maps of entire nervous systems.

“That’s like the holy grail for neuroscience,” said Boyden.

Image Credit: Image from Gao et al., SCIENCE Vol. 363, Issue 6424, eaau8302 (18 Jan 2019). Republished with permission from American Association for the Advancement of Science (AAAS).

FOR ORIGINAL ARTICLE, USE THIS LINK: http://tinyurl.com/y8b5t8hj

How a Trippy 1980s Video Effect Might Help Explain Consciousness

Summary: Researchers argue consciousness may be caused by the way the brain generates energetic feedback loops.

Source: Robert Pepperell – The Conversation
Publisher: Organized by NeuroscienceNews.com.

Explaining consciousness is one of the hardest problems in science and philosophy. Recent neuroscientific discoveries suggest that a solution could be within reach – but grasping it will mean rethinking some familiar ideas. Consciousness, I argue in a new paper, may be caused by the way the brain generates loops of energetic feedback, similar to the video feedback that “blossoms” when a video camera is pointed at its own output.

I first saw video feedback in the late 1980s and was instantly entranced. Someone plugged the signal from a clunky video camera into a TV and pointed the lens at the screen, creating a grainy spiraling tunnel. Then the camera was tilted slightly and the tunnel blossomed into a pulsating organic kaleidoscope.

Video feedback is a classic example of complex dynamical behaviour. It arises from the way energy circulating in the system interacts chaotically with the electronic components of the hardware.
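A toy sketch of that feedback dynamic may help (an analogy for illustration, not the model proposed in the paper): each cycle, the “camera” re-images its own output with a slight rotation and a gain, and a small seed of brightness spreads into a swirling pattern. The frame size, seed position, angle and gain below are arbitrary choices.

```python
# Toy feedback loop: feed a frame back through a slight rotation and gain on
# each cycle, the way a camera pointed at its own monitor re-images its output.
import numpy as np
from scipy.ndimage import rotate

frame = np.zeros((16, 16))
frame[7:9, 10:12] = 1.0              # a small bright seed on the "screen"

for step in range(36):
    # re-image the screen: rotate slightly (camera tilt), apply gain, clip
    frame = rotate(frame, angle=10.0, reshape=False, mode="constant")
    frame = np.clip(1.1 * frame, 0.0, 1.0)

# show which pixels carry appreciable light after repeated feedback
print((frame > 0.2).astype(int))
```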

As an artist and VJ in the 1990s, I would often see this hypnotic effect in galleries and clubs. But it was a memorable if unnerving experience during an LSD-induced trip that got me thinking. I hallucinated almost identical imagery, only intensely saturated with colour. It struck me then there might be a connection between these recurring patterns and the operation of the mind.

Brains, information and energy

Fast forward 25 years and I’m a university professor still trying to understand how the mind works. Our knowledge of the relationship between the mind and brain has advanced hugely since the 1990s when a new wave of scientific research into consciousness took off. But a widely accepted scientific theory of consciousness remains elusive.

The two leading contenders – Stanislas Dehaene’s Global Neuronal Workspace Model and Giulio Tononi’s Integrated Information Theory – both claim that consciousness results from information processing in the brain, from neural computation of ones and zeros, or bits.

I doubt this claim for several reasons. First, there is little agreement among scientists about exactly what information is. Second, when scientists refer to information they are often actually talking about the way energetic activity is organised in physical systems. Third, brain imaging techniques such as fMRI, PET and EEG don’t detect information in the brain, but changes in energy distribution and consumption.

Brains, I argue, are not squishy digital computers – there is no information in a neuron. Brains are delicate organic instruments that turn energy from the world and the body into useful work that enables us to survive. Brains process energy, not information.

Recognising that brains are primarily energy processors is the first step to understanding how they support consciousness. The next is rethinking energy itself.

What is energy?

We are all familiar with energy but few of us worry about what it is. Even physicists tend not to. They treat it as an abstract value in equations describing physical processes, and that suffices. But when Aristotle coined the term energeia he was trying to grasp the actuality of the lived world, why things in nature work in the way they do (the word “energy” is rooted in the Greek for “work”). This actualised concept of energy is different from, though related to, the abstract concept of energy used in contemporary physics.

When we study what energy actually is, it turns out to be surprisingly simple: it’s a kind of difference. Kinetic energy is a difference due to change or motion, and potential energy is a difference due to position or tension. Much of the activity and variety in nature occurs because of these energetic differences and the related actions of forces and work. I call these actualised differences because they do actual work and cause real effects in the world, as distinct from abstract differences (like that between 1 and 0) which feature in mathematics and information theory. This conception of energy as actualised difference, I think, may be key to explaining consciousness.


Video feedback may be the nearest we have to visualising what conscious processing in the brain is like. NeuroscienceNews.com image is credited to Robert Pepperell.

The human brain consumes some 20% of the body’s total energy budget, despite accounting for only 2% of its mass. The brain is expensive to run. Most of the cost is incurred by neurons firing bursts of energetic difference in unthinkably complex patterns of synchrony and diversity across convoluted neural pathways.

What is special about the conscious brain, I propose, is that some of those pathways and energy flows are turned upon themselves, much like the signal from the camera in the case of video feedback. This causes a self-referential cascade of actualised differences to blossom with astronomical complexity, and it is this that we experience as consciousness. Video feedback, then, may be the nearest we have to visualising what conscious processing in the brain is like.

The neuroscientific evidence

The suggestion that consciousness depends on complex neural energy feedback is supported by neuroscientific evidence.

Researchers recently discovered a way to accurately index the amount of consciousness someone has. They fired magnetic pulses through healthy, anaesthetised, and severely injured people’s brains. Then they measured the complexity of an EEG signal that monitored how the brains reacted. The complexity of the EEG signal predicted the level of consciousness in the person: the more complex the signal, the more conscious the person was.

The researchers attributed the level of consciousness to the amount of information processing going on in each brain. But what was actually being measured in this study was the organisation of the neural energy flow (EEG measures differences of electrical energy). Therefore, the complexity of the energy flow in the brain tells us about the level of consciousness a person has.
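For readers curious what “complexity of an EEG signal” can mean in practice, here is a minimal sketch (not the researchers’ actual pipeline) of one widely used family of measures: count the distinct phrases found by a Lempel-Ziv-style incremental parse of the signal after binarizing it around a threshold. Regular, repetitive signals yield few phrases; richer, more differentiated signals yield more. The simulated trace below is a stand-in for real EEG data.

```python
# Minimal sketch (not the researchers' pipeline): score the "complexity" of a
# binarized signal by counting phrases in an LZ78-style incremental parse.
import numpy as np

def lempel_ziv_complexity(bits):
    """Count distinct phrases produced by an incremental dictionary parse."""
    s = "".join(str(b) for b in bits)
    phrases, i = set(), 0
    while i < len(s):
        j = i + 1
        while s[i:j] in phrases and j <= len(s):
            j += 1                   # extend until the phrase is new
        phrases.add(s[i:j])
        i = j
    return len(phrases)

rng = np.random.default_rng(1)
signal = rng.normal(size=500)                              # stand-in for an EEG trace
bits = (signal > np.median(signal)).astype(int).tolist()   # binarize around the median
print("LZ complexity:", lempel_ziv_complexity(bits))
```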

Also relevant is evidence from studies of anaesthesia. No-one knows exactly how anaesthetic agents annihilate consciousness. But recent theories suggest that compounds including propofol interfere with the brain’s ability to sustain complex feedback loops in certain brain areas. Without these feedback loops, the functional integration between different brain regions breaks down, and with it the coherence of conscious awareness.

What this, and other neuroscientific work I cite in the paper, suggests is that consciousness depends on a complex organisation of energy flow in the brain, and in particular on what the biologist Gerald Edelman called “reentrant” signals. These are recursive feedback loops of neural activity that bind distant brain regions into a coherent functioning whole.

Explaining consciousness in scientific terms, or in any terms, is a notoriously hard problem. Some have worried it’s so hard we shouldn’t even try. But while not denying the difficulty, the task is made a bit easier, I suggest, if we begin by recognising what brains actually do.

The primary function of the brain is to manage the complex flows of energy that we rely on to thrive and survive. Instead of looking inside the brain for some undiscovered property, or “magic sauce”, to explain our mental life, we may need to look afresh at what we already know is there.

ABOUT THIS NEUROSCIENCE RESEARCH ARTICLE

Source: Robert Pepperell – The Conversation
Publisher: Organized by NeuroscienceNews.com.
Image Source: NeuroscienceNews.com image is credited to Robert Pepperell.
Video Source: Video credited to Japhy Riddle.

The Conversation. “How a Trippy 1980s Video Effect Might Help Explain Consciousness.” NeuroscienceNews, 21 December 2018.
<http://neurosciencenews.com/consciousness-80s-video-10386/>

Scientists identify a new kind of human brain cell

This is a digital reconstruction of a rosehip neuron in the human brain.
Credit: Tamas Lab, University of Szeged

‘Rosehip’ neurons, not found in rodents, may be involved in fine-level control between regions of the human brain, according to the Allen Institute.

One of the most intriguing questions about the human brain is also one of the most difficult for neuroscientists to answer: What sets our brains apart from those of other animals?

“We really don’t understand what makes the human brain special,” said Ed Lein, Ph.D., Investigator at the Allen Institute for Brain Science. “Studying the differences at the level of cells and circuits is a good place to start, and now we have new tools to do just that.”

In a new study published today in the journal Nature Neuroscience, Lein and his colleagues reveal one possible answer to that difficult question. The research team, co-led by Lein and Gábor Tamás, Ph.D., a neuroscientist at the University of Szeged in Szeged, Hungary, has uncovered a new type of human brain cell that has never been seen in mice and other well-studied laboratory animals.

Tamás and University of Szeged doctoral student Eszter Boldog dubbed these new cells “rosehip neurons” — to them, the dense bundle each brain cell’s axon forms around the cell’s center looks just like a rose after it has shed its petals, he said. The newly discovered cells belong to a class of neurons known as inhibitory neurons, which put the brakes on the activity of other neurons in the brain.

The study hasn’t proven that this special brain cell is unique to humans. But the fact that the special neuron doesn’t exist in rodents is intriguing, adding these cells to a very short list of specialized neurons that may exist only in humans or only in primate brains.

The researchers don’t yet understand what these cells might be doing in the human brain, but their absence in the mouse points to how difficult it is to model human brain diseases in laboratory animals, Tamás said. One of his laboratory team’s immediate next steps is to look for rosehip neurons in postmortem brain samples from people with neuropsychiatric disorders to see if these specialized cells might be altered in human disease.

When different techniques converge

In their study, the researchers used tissue samples from postmortem brains of two men in their 50s who had died and donated their bodies to research. They took sections of the top layer of the cortex, the outermost region of the brain that is responsible for human consciousness and many other functions that we think of as unique to our species. It’s much larger, compared to our body size, than in other animals.

“It’s the most complex part of the brain, and generally accepted to be the most complex structure in nature,” Lein said.

Tamás’ research lab in Hungary studies the human brain using a classical approach to neuroscience, conducting detailed examinations of cells’ shapes and electrical properties. At the Allen Institute, Lein leads a team working to uncover the suite of genes that make human brain cells unique from each other and from the brain cells of mice.

Several years ago, Tamás visited the Allen Institute to present his latest research on specialized human brain cell types, and the two research groups quickly saw that they’d hit on the same cell using very different techniques.

“We realized that we were converging on the same cell type from absolutely different points of view,” Tamás said. So they decided to collaborate.

The Allen Institute group, in collaboration with researchers from the J. Craig Venter Institute, found that the rosehip cells turn on a unique set of genes, a genetic signature not seen in any of the mouse brain cell types they’ve studied. The University of Szeged researchers found that the rosehip neurons form synapses with another type of neuron in a different part of the human cortex, known as pyramidal neurons.

This is one of the first studies of the human cortex to combine these different techniques to study cell types, said Rebecca Hodge, Ph.D., Senior Scientist at the Allen Institute for Brain Science and an author on the study.

“Alone, these techniques are all powerful, but they give you an incomplete picture of what the cell might be doing,” Hodge said. “Together, they tell you complementary things about a cell that can potentially tell you how it functions in the brain.”

How do you study humanity?

What appears to be unique about rosehip neurons is that they only attach to one specific part of their cellular partner, indicating that they might be controlling information flow in a very specialized way.

If you think of all inhibitory neurons like brakes on a car, the rosehip neurons would let your car stop in very particular spots on your drive, Tamás said. They’d be like brakes that only work at the grocery store, for example, and not all cars (or animal brains) have them.

“This particular cell type — or car type — can stop at places other cell types cannot stop,” Tamás said. “The car or cell types participating in the traffic of a rodent brain cannot stop in these places.”

The researchers’ next step is to look for rosehip neurons in other parts of the brain, and to explore their potential role in brain disorders. Although scientists don’t yet know whether rosehip neurons are truly unique to humans, the fact that they don’t appear to exist in rodents is another strike against the laboratory mouse as a perfect model of human disease — especially for neurological diseases, the researchers said.

“Our brains are not just enlarged mouse brains,” said Trygve Bakken, M.D., Ph.D., Senior Scientist at the Allen Institute for Brain Science and an author on the study. “People have commented on this for many years, but this study gets at the issue from several angles.”

“Many of our organs can be reasonably modeled in an animal model,” Tamás said. “But what sets us apart from the rest of the animal kingdom is the capacity and the output of our brain. That makes us human. So it turns out humanity is very difficult to model in an animal system.”

Story Source:

Materials provided by Allen Institute.

Link to the Science Daily article:  https://www.sciencedaily.com/releases/2018/08/180827180809.htm?utm_source=feedburner&utm_medium=email&utm_campaign=Feed%3A+sciencedaily%2Ftop_news%2Ftop_science+%28ScienceDaily%3A+Top+Science+News%29

Amazing New Brain Map of Every Synapse Points to the Roots of Thinking

By Shelly Fan – Aug 14, 2018

Imagine a map of every single star in an entire galaxy. A map so detailed that it lays out what each star looks like, what they’re made of, and how each star is connected to another through the grand physical laws of the cosmos.

While we don’t yet have such an astronomical map of the heavens, thanks to a momentous study published last week in Neuron, there is now one for the brain.

If every neuron were a galaxy, then synapses—small structures dotted along the serpentine extensions of neurons—are its stars. In a technical tour-de-force, a team from the University of Edinburgh in the UK constructed the first detailed map of every single synapse in the mouse brain.

Using genetically modified mice, the team literally made each synapse light up under fluorescent light throughout the brain like the starry night. And similar to the way stars differ, the team found that synapses vastly varied, but in striking patterns that may support memory and thinking.

“There are more synapses in a human brain than there are stars in the galaxy. The brain is the most complex object we know of and understanding its connections at this level is a major step forward in unravelling its mysteries,” said lead author Dr. Seth Grant at the Center for Clinical Brain Sciences.

The detailed maps revealed a fundamental law of brain activity. With the help of machine learning, the team categorized roughly one billion synapses across the brain into 37 sub-types. Here’s the kicker: when sets of neurons process electrical information—for example, while weighing different solutions to a problem—particular sub-types of synapses, spread out among different neurons, spark with activity in unison.

In other words: synapses come in types. And each type may control a thought, a decision, or a memory.

The neuroscience Twittersphere blew up.

“Whoa,” Dr. Ben Saunders of the University of Minnesota commented simply.

It’s an “amazing paper cataloguing the diversity and distribution of synapse sub-types across the entire mouse brain,” wrote neurogeneticist Dr. Kevin Mitchell. It “highlights [the] fact that synapses are the key computational elements in the nervous system.”

The Connectome Connection

The team’s interest in constructing the “synaptome”—the first entire catalog of synapses in the mouse brain—stemmed from a much larger project: the connectome.

In a nutshell, the connectome is all the neuronal connections within you. Evangelized by Dr. Sebastian Seung in a TED Talk, the connectome is the biological basis of who you are—your memories, personality, and how you reason and think. Capture the connectome, and one day scientists may be able to reconstruct you—something known as whole brain emulation.

Yet the connectome only describes how neurons functionally talk to each other. Where in the brain is it physically encoded?

Enter synapses. Neuroscientists have long known that synapses transmit information between neurons using chemicals and electricity. There have also been hints that synapses are widely diverse in terms of what proteins they contain, but traditionally this diversity has been mostly ignored. Until recently, most scientists believed that actual computations occur at the neuronal body—the bulbous part of a neuron from which branches reach out.

So far there’s never been a way to look at the morphology and function of synapses across the entire brain, the authors explained. Rather, we’ve been focused on mapping these crucial connection points in small areas.

“Synaptome mapping could be used to ask if the spatial distribution of synapses [that differ] is related to connectome architecture,” the team reasoned.

And if so, future brain emulators may finally have something solid to grasp onto.

SYNMAP

To construct the mouse synaptome, the authors developed a pipeline that they dubbed SYNMAP. They started with genetically modified mice whose synapses glow in different colors. Each synapse is jam-packed with different proteins, with—stay with me—PSD-95 and SAP102 being two of the most prominent members. The authors added glowing proteins to these, which essentially acted as torches to light up each synapse in the brain.

Synaptome Mapping Pipeline

The team first bioengineered a mouse with glowing synapses under fluorescent light.

Next, they painstakingly chopped up the brain into slices, used a microscope to capture images of synapses in different brain regions, and pieced the photos back together.

An image of synapses looks like a densely-packed star map to an untrained eye. Categorizing each synapse is beyond the ability (and time commitment) of any human researcher, so the team took advantage of new machine learning classification techniques and developed an algorithm that could parse these data—more than 10 terabytes—automatically, without human supervision.
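To give a concrete sense of what that kind of automated classification involves, here is a minimal sketch (in Python, using scikit-learn) of clustering synapse puncta into subtypes from per-punctum measurements such as size and brightness. This is not the authors’ SYNMAP code: the feature values below are simulated and the number of subtypes is chosen arbitrarily, whereas the real pipeline settled on 37 subtypes across roughly a billion puncta.

```python
# Minimal sketch (not the authors' SYNMAP code): cluster synapse puncta
# into subtypes from per-punctum features. The feature values are simulated.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Hypothetical feature table: one row per detected synaptic punctum,
# with columns such as PSD-95 intensity, SAP102 intensity, size, and shape.
n_puncta = 50_000
features = rng.normal(size=(n_puncta, 4))

# Fit a mixture model and assign each punctum to a putative subtype.
# The real study reports 37 subtypes; a small number is used here.
n_subtypes = 5
model = GaussianMixture(n_components=n_subtypes, random_state=0)
subtype = model.fit_predict(features)

# A "synaptome signature" for a brain region could then be summarized
# as the proportion of each subtype found in that region.
signature = np.bincount(subtype, minlength=n_subtypes) / n_puncta
print(signature)
```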

A Physical Connectome

Right off the bat, the team was struck by the “exquisite patterns” the glowing synapses formed. One tagged protein—PSD-95—seemed to hang out on the more exterior portions of the brain where higher cognitive functions occur. Although there is overlap, the other glowing protein preferred more interior regions of the brain.

Whole-Brain-Scale Mapping

Microscope images showing the two glowing synapse proteins, PSD-95 and SAP102, across brain sections.

When they looked closely, they found that the two glowing proteins represented different sets of synapses, the authors explained. Each region of the brain has a characteristic “synaptome signature.” Like fingerprints that differ in shape and size, various brain regions also seemed to contain synapses that differ in their protein composition, size, and number.

Using a machine learning algorithm developed in-house, the team categorized the synapses into 37 subtypes. Remarkably, regions of the brain related to higher reasoning and thinking abilities also contained the most diverse synapse population, whereas “reptile brain regions” such as the brain stem were more uniform in synapse sub-type.

Synaptome dominant subtype maps

A graph of a brain cross-section showing some of the most commonly found synapse subtypes in each area. Each color represents a different synapse subtype. “Box 4” highlights the hippocampus.

Why?

To see whether synapse diversity helps with information processing, the team used computer simulations to see how synapses would respond to common electrical patterns within the hippocampus—the seahorse-shaped region crucial for learning and memory. The hippocampus was one of the regions that showed remarkable diversity in synapse subtypes, with each spread out in striking patterns throughout the brain structure.

Remarkably, each type of electrical information processing translated to a unique synaptome map—change the input, change the synaptome.

It suggests that the brain can process multiple streams of electrical information using the same brain region, because different synaptomes are recruited.

The team found similar results when they used electrical patterns recorded from mice trying to choose between three options for a reward. Different synaptomes lit up when the choice was correct versus wrong. Like a map into internal thoughts, synaptomes drew a vivid picture of what the mouse was thinking when it made its choice.

Synaptome map function behavior and physiology

Each behavior activates a particular synaptome. Each synaptome is like a unique fingerprint of a thought process.

Synaptome Reprogramming

Like computer code, a synaptome seems to underlie a computational output—a decision or thought. So what if the code is screwed up?

Psychiatric diseases often have genetic causes that impact proteins in the synapse. Using mice that show symptoms similar to schizophrenia or autism, the team mapped their synaptome—and found dramatic changes in how the brain’s various synapse sub-types are structured and connected.

For example, in response to certain normal brain electrical patterns, some synaptome maps only weakly emerged, whereas others became abnormally strong in the mutant mice.

Synaptome reprogramming

Mutations can change the synaptome and potentially lead to psychiatric disorders

It seems like certain psychiatric diseases “reprogram” the synaptome, the authors concluded. Stronger or new synaptome maps could, in fact, be why patients with schizophrenia experience delusions and hallucinations.

So are you your synaptome?

Perhaps. The essence of you—memories, thought patterns—seems to be etched into how diverse synapses activate in response to input. Like a fingerprint for memories and decisions, synaptomes can then be “read” to decipher that thought.

But as the authors acknowledge, the study’s only the beginning. Along with the paper, the team launched a Synaptome Explorer tool to help neuroscientists further parse the intricate connections between synapses and you.

“This map opens a wealth of new avenues of research that should transform our understanding of behavior and brain disease,” said Grant.

Images Credit: Derivatives of Fei Zhu et al. / University of Edinburgh / CC BY 4.0

*This article appeared in Singularity Hub on August 15, 2018. Here’s a link back to that site: https://singularityhub.com/2018/08/14/amazing-map-of-every-synapse-in-the-mouse-brain-points-to-the-roots-of-thinking/#sm.000001x99u2bpcpcuzpz6z5ztcyuj


Shelly Xuelai Fan is a neuroscientist at the University of California, San Francisco, where she studies ways to make old brains young again. In addition to research, she’s also an avid science writer with an insatiable obsession with biotech, AI and all things neuro. She spends her spare time kayaking, bike camping and getting lost in the woods.

Brain discovery could block aging’s terrible toll on the mind

Faulty brain plumbing to blame in Alzheimer’s, age-related memory loss — and can be fixed

NOTE: this article not written by David T. Wolf, but selected by him from Science Digest

Date: July 26, 2018

Source: University of Virginia Health System

Summary:  Aging vessels connecting the brain and the immune system play critical roles in both Alzheimer’s disease and the decline in cognitive ability that comes with time, new research reveals. By improving the function of the lymphatic vessels, scientists have dramatically enhanced aged mice’s ability to learn and improved their memories. The work may provide doctors an entirely new path to treat or prevent Alzheimer’s disease, age-related memory loss and other neurodegenerative diseases.

Obstructing lymphatic vessels (in green) in a mouse model of Alzheimer’s disease significantly increased the accumulation of harmful plaques in the brain. “What was really interesting is that with the worsening pathology, it actually looks very similar to what we see in human samples in terms of all this aggregation of amyloid protein,” said researcher Jonathan Kipnis, PhD.   Credit: Courtesy Kipnis lab

 

The research is the latest from the lab of pioneering neuroscientist Jonathan Kipnis, PhD, whose team discovered in 2015 that the brain is surrounded by lymphatic vessels — vessels science textbooks insisted did not exist. That discovery made headlines around the world and was named one of the year’s biggest by Science, yet Kipnis sees his team’s new finding as their most important yet. “When you take naturally aging mice and you make them learn and remember better, that is really exciting,” he said. “If we can make old mice learn better, that tells me there is something that can be done. I’m actually very optimistic that one day we could live to a very, very, very old age and not develop Alzheimer’s.”

How the Brain Cleans Itself

It turns out that the lymphatic vessels long thought not to exist are essential to the brain’s ability to cleanse itself. The researchers’ new work gives us the most complete picture yet of the role of these vessels — and their tremendous importance for brain function and healthy aging.

Kipnis, the chairman of UVA’s Department of Neuroscience and the director of its Center for Brain Immunology and Glia (BIG), and his colleagues were able to use a compound to improve the flow of waste from the brain to the lymph nodes in the neck of aged mice. The vessels became larger and drained better, and that had a direct effect on the mice’s ability to learn and remember. “Here is the first time that we can actually enhance cognitive ability in an old mouse by targeting this lymphatic vasculature around the brain,” Kipnis said. “By itself, it’s super, super exciting, but then we said, ‘Wait a second, if that’s the case, what’s happening in Alzheimer’s?'”

The researchers determined that obstructing the vessels in mice worsens the accumulation of harmful amyloid plaques in the brain that are associated with Alzheimer’s. This may help explain the buildup of such plaques in people, the cause of which is not well understood. “In human Alzheimer’s disease, 98 percent of cases are not familial, so it’s really a matter of what is affected by aging that gives rise to this disease,” said researcher Sandro Da Mesquita, PhD. “As we did in mice, it will be interesting to try and figure out what specific changes are happening in the old [brain] lymphatics in humans so we can develop specific approaches to treat age-related sickness.”

Kipnis noted that impairing the vessels in mice had a fascinating consequence: “What was really interesting is that with the worsening pathology, it actually looks very similar to what we see in human samples in terms of all this aggregation of amyloid protein in the brain and meninges,” he said. “By impairing lymphatic function, we made the mouse model more similar to human pathology.”

Treating — or Preventing — Alzheimer’s

The researchers now will work to develop a drug to improve the performance of the lymphatic vessels in people. (Kipnis just inked a deal with biopharmaceutical company PureTech Health to explore the potential clinical applications of his discoveries.) Da Mesquita also noted that it would be important to develop a method to determine how well the meningeal lymphatic vasculature is working in people.

The researchers believe that the best way to treat Alzheimer’s might be to combine vasculature repair with other approaches. Improving the flow through the meningeal lymphatic vessels might even overcome some of the obstacles that have doomed previously promising treatments, moving them from the trash heap to the clinic, they said.

It may be, though, that the new discovery offers a way to stave off the onset of Alzheimer’s to the point that treatments are unnecessary — to delay it beyond the length of the current human lifespan.

“It may be very difficult to reverse Alzheimer’s, but maybe we would be able to maintain a very high functionality of this lymphatic vasculature to delay its onset to a very old age,” Kipnis said. “I honestly believe, down the road, we can see real results.”

Findings Published

The researchers have published their findings in Nature. Antoine Louveau, who was the first author on the original discovery of the meningeal lymphatics, and Da Mesquita are the first authors of the paper. The team also included Andrea Vaccari, Igor Smirnov, R. Chase Cornelison, Kathryn M. Kingsmore, Christian Contarino, Suna Onengut-Gumuscu, Emily Farber, Daniel Raper, Kenneth E. Viar, Romie D. Powell, Wendy Baker, Nisha Dabhi, Robin Bai, Rui Cao, Song Hu, Stephen S. Rich, Jennifer M. Munson, M. Beatriz Lopes, Christopher C. Overall and Scott T. Acton.

Kipnis emphasized the collaborative nature of the work, noting the importance of many different areas of expertise. For example, the project included big data processing by Christopher Overall from the Department of Neuroscience/BIG center and contributions from Acton and Vaccari from the Virginia Image and Video Analysis Laboratory at UVA. Other important contributions came from UVA’s Center for Public Health Genomics, the Department of Neurosurgery and UVA’s Department of Biomedical Engineering. (The Department of Biomedical Engineering itself is a joint collaboration of UVA’s School of Medicine and School of Engineering.) “It’s another exemplification of how today research cannot be done in one place and one lab,” Kipnis said.

The work was supported by the National Institutes of Health’s National Institute on Aging, grants AG034113 and AG057496; the Cure Alzheimer’s Fund; the Hobby Foundation; the Owens Family Foundation; the Thomas H. Lowder Family Foundation; and the American Cancer Society, grant IRG 81-001-26.

Story Source:

Materials provided by University of Virginia Health System.

Link to source article:

https://www.sciencedaily.com/releases/2018/07/180726085721.htm?utm_source=feedburner&utm_medium=email&utm_campaign=Feed%3A+sciencedaily%2Ftop_news%2Ftop_science+%28ScienceDaily%3A+Top+Science+News%29

Three dramatic new ways to visualize brain tissue and neuron circuits

May lead to breakthroughs in tracking brain disorders such as autism, schizophrenia, and Alzheimer’s
May 7, 2018 (NOT written by David Wolf; collected from Kurzweil AI and reproduced here.)
Visualizing the brain: Here, tissue from a human dentate gyrus (a part of the brain’s hippocampus that is involved in the formation of new memories) was imaged transparently in 3D and color-coded to reveal the distribution and types of nerve cells. (credit: The University of Hong Kong)

Visualizing human brain tissue in vibrant transparent colors

Neuroscientists from The University of Hong Kong (HKU) and Imperial College London have developed a new method called “OPTIClear” for 3D transparent color visualization (at the microscopic level) of complex human brain circuits.

To understand how the brain works, neuroscientists map how neurons (nerve cells) are wired to form circuits in both healthy and disease states. To do that, the scientists typically cut brain tissues into thin slices. Then they trace the entangled fibers across those slices — a complex, laborious process.

Making human tissues transparent. OPTIClear replaces that process by “clearing” (making tissues transparent) and using fluorescent staining to identify different types of neurons. In one study of more than 3,000 large neurons in the human basal forebrain, the researchers were able to reduce the time needed to visualize neurons, glial cells, and blood vessels in exquisite 3D detail from about three weeks to five days. Previous clearing methods (such as CLARITY) have been limited to rodent tissue.

Reference (open access): Nature Communications March 14, 2018. Source: HKU and Imperial College London, May 7, 2018

Watching millions of brain cells in a moving animal for the first time

Neurons in the hippocampus flash on and off as a mouse walks around with tiny camera lenses on its head. (credit: The Rockefeller University)

It’s a neuroscientist’s dream: being able to track the millions of interactions among brain cells in animals that move about freely — allowing researchers to study brain disorders. Now a new invention, developed at The Rockefeller University and reported today, is expected to give researchers a dynamic tool to do just that, eventually in humans.

The new tool can track neurons located at different depths within a volume of brain tissue in a freely moving rodent, or record the interplay among neurons when two animals meet and interact socially.

Microlens array for 3D recording. The technology consists of a tiny microscope attached to a mouse’s head, with a group of lenses called a “microlens array.” These lenses enable the microscope to capture images from multiple angles and depths on a sensor chip, producing a three-dimensional record of neurons blinking on and off as they communicate with each other through electrochemical impulses. (The mouse neurons are genetically modified to light up when they become activated.) A cable attached to the top of the microscope transmits the data for recording.

One challenge: Brain tissue is opaque, making light scatter, which makes it difficult to pinpoint the source of each neuronal light flash. The researchers’ solution: a new computer algorithm (program), known as SID, that extracts additional information from the scattered emission light.

Reference: Nature Methods. Source: The Rockefeller University May 7, 2018

Brain cells interacting in real time

Illustration: An astrocyte (green) interacts with a synapse (red), producing an optical signal (yellow). (credit: UCLA/Khakh lab)

Researchers at the David Geffen School of Medicine at UCLA can now peer deep inside a mouse’s brain to watch how star-shaped astrocytes (support glial cells in the brain) interact with synapses (the junctions between neurons) to signal each other and convey messages.

The method uses different colors of light that pass through a lens to magnify objects that are invisible to the naked eye. The viewable objects are now far smaller than those visible with earlier techniques. That enables researchers to observe how brain damage alters the way astrocytes interact with neurons, and to develop strategies to address these changes, for example.

Astrocytes are believed to play a key role in neurological disorders like Lou Gehrig’s, Alzheimer’s, and Huntington’s disease.

Reference: Neuron. Source: UCLA Khakh lab April 4, 2018.

Here’s the link to the original article: http://www.kurzweilai.net/three-dramatic-new-ways-to-visualize-brain-tissue-and-neuron-circuits?utm_source=KurzweilAI+Weekly+Newsletter&utm_campaign=2f10ce3011-UA-946742-1&utm_medium=email&utm_term=0_147a5a48c1-2f10ce3011-282002409

The brain learns differently than we’ve assumed, new learning theory says

March 28, 2018
A revolutionary new theory contradicts a fundamental assumption in neuroscience about how the brain learns. According to researchers at Bar-Ilan University in Israel led by Prof. Ido Kanter, the theory promises to transform our understanding of brain dysfunction and may lead to advanced, faster, deep-learning algorithms.
New post-Hebb brain-learning model may lead to new brain treatments and breakthroughs in faster deep learning.
A biological schema of an output neuron, comprising a neuron’s soma (body, shown as gray circle, top) with two roots of dendritic trees (light-blue arrows), splitting into many dendritic branches (light-blue lines). The signals arriving from the connecting input neurons (gray circles, bottom) travel via their axons (red lines) and their many branches until terminating with the synapses (green stars). There, the signals connect with dendrites (some synapse branches travel to other neurons), which then connect to the soma. (credit: Shira Sardi et al./Sci. Rep)

The brain is a highly complex network containing billions of neurons. Each of these neurons communicates simultaneously with thousands of others via their synapses. A neuron collects its many synaptic incoming signals through dendritic trees.

In 1949, Donald Hebb suggested that learning occurs in the brain by modifying the strength of synapses. Hebb’s theory has remained a deeply rooted assumption in neuroscience.

Synaptic vs. dendritic learning

In vitro experimental setup. A micro-electrode array comprising 60 extracellular electrodes separated by 200 micrometers, indicating a neuron patched (connected) by an intracellular electrode (orange) and a nearby extracellular electrode (green line). (Inset) Reconstruction of a fluorescence image, showing a patched cortical pyramidal neuron (red) and its dendrites growing in different directions and in proximity to extracellular electrodes. (credit: Shira Sardi et al./Scientific Reports adapted by KurzweilAI)

Hebb was wrong, says Kanter. “A new type of experiment strongly indicates that a faster and enhanced learning process occurs in the neuronal dendrites, similarly to what is currently attributed to the synapse,” Kanter and his team suggest in an open-access paper in Nature’s Scientific Reports, published Mar. 23, 2018.

“In this new [faster] dendritic learning process, there are [only] a few adaptive parameters per neuron, in comparison to thousands of tiny and sensitive ones in the synaptic learning scenario,” says Kanter. “Does it make sense to measure the quality of air we breathe via many tiny, distant satellite sensors at the elevation of a skyscraper, or by using one or several sensors in close proximity to the nose?” he asks. “Similarly, it is more efficient for the neuron to estimate its incoming signals close to its computational unit, the neuron.”

Image representing the current synaptic (pink) vs. the new dendritic (green) learning scenarios of the brain. In the current scenario, a neuron (black) with a small number (two in this example) of dendritic trees (center) collects incoming signals via synapses (represented by red valves), with many thousands of tiny adjustable learning parameters. In the new dendritic learning scenario (green), a few (two in this example) adjustable controls (red valves) are located in close proximity to the computational element, the neuron. The scale is such that if a neuron collecting its incoming signals is represented by a person’s faraway fingers, the length of its hands would be as tall as a skyscraper (left). (credit: Prof. Ido Kanter)
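To make the efficiency argument concrete, here is a back-of-the-envelope comparison of how many adjustable parameters each scheme would have to tune. The counts are rough order-of-magnitude assumptions chosen for illustration, not figures from the paper.

```python
# Toy comparison (illustrative, not the paper's model): how many adjustable
# parameters learning must tune under a synaptic vs. a dendritic scheme.
neurons = 1_000
synapses_per_neuron = 10_000    # rough order-of-magnitude figure for cortex
dendrites_per_neuron = 5        # a few dendritic trees per neuron

synaptic_params = neurons * synapses_per_neuron     # one weight per synapse
dendritic_params = neurons * dendrites_per_neuron   # one gain per dendritic tree

print(f"synaptic learning:  {synaptic_params:,} parameters")
print(f"dendritic learning: {dendritic_params:,} parameters")
# The dendritic scheme adjusts thousands of times fewer parameters,
# which is the efficiency argument Kanter's team makes.
```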

The researchers also found that weak synapses, which comprise the majority of our brain and were previously assumed to be insignificant, actually play an important role in the dynamics of our brain.

According to the researchers, the new learning theory may lead to advanced, faster, deep-learning algorithms and other artificial-intelligence-based applications, and also suggests that we need to reevaluate our current treatments for disordered brain functionality.

This research is supported in part by the TELEM grant of the Israel Council for Higher Education.

Abstract of Adaptive nodes enrich nonlinear cooperative learning beyond traditional adaptation by links

Physical models typically assume time-independent interactions, whereas neural networks and machine learning incorporate interactions that function as adjustable parameters. Here we demonstrate a new type of abundant cooperative nonlinear dynamics where learning is attributed solely to the nodes, instead of the network links which their number is significantly larger. The nodal, neuronal, fast adaptation follows its relative anisotropic (dendritic) input timings, as indicated experimentally, similarly to the slow learning mechanism currently attributed to the links, synapses. It represents a non-local learning rule, where effectively many incoming links to a node concurrently undergo the same adaptation. The network dynamics is now counterintuitively governed by the weak links, which previously were assumed to be insignificant. This cooperative nonlinear dynamic adaptation presents a self-controlled mechanism to prevent divergence or vanishing of the learning parameters, as opposed to learning by links, and also supports self-oscillations of the effective learning parameters. It hints on a hierarchical computational complexity of nodes, following their number of anisotropic inputs and opens new horizons for advanced deep learning algorithms and artificial intelligence based applications, as well as a new mechanism for enhanced and fast learning by neural networks.

References:

To access the original KurzweilAI article, use this link: http://www.kurzweilai.net/the-brain-learns-completely-differently-than-weve-assumed-new-learning-theory-says?utm_source=KurzweilAI+Weekly+Newsletter&utm_campaign=303237c4e7-UA-946742-1&utm_medium=email&utm_term=0_147a5a48c1-303237c4e7-282002409


Physicists Negate Century-Old Assumption Regarding Neurons and Brain Activity

Using new types of experiments on neuronal cultures, a group of scientists led by Prof. Ido Kanter of the Department of Physics at Bar-Ilan University has demonstrated that a century-old assumption regarding brain activity is mistaken. (NeuroscienceNews.com image is in the public domain.)

The new realization about the computational scheme of a neuron calls into question the spike sorting technique, which is central to the work of hundreds of laboratories and thousands of scientific studies in neuroscience. The method was invented mainly to overcome the technological barrier of measuring the activity of many neurons simultaneously, and it rests on the assumption that each neuron tends to fire spikes of a particular waveform that serves as its own electrical signature. This assumption, on which enormous scientific effort and resources have been staked, is now questioned by the work of Kanter’s lab. (See abstract below.)

ABOUT THIS NEUROSCIENCE RESEARCH ARTICLE

Funding: This research is supported in part by the TELEM grant of the Council for Higher Education in Israel.

Source: Elana Oberlander – Bar-Ilan University
Publisher: Organized by NeuroscienceNews.com.
Image Source: NeuroscienceNews.com image is credited to Tommy Leonardi.
Original Research: Full open access research for “New Types of Experiments Reveal that a Neuron Functions as Multiple Independent Threshold Units” by Shira Sardi, Roni Vardi, Anton Sheinin, Amir Goldental & Ido Kanter in Scientific Reports. Published online December 21 2017 doi:10.1038/s41598-017-18363-1

CITE THIS NEUROSCIENCENEWS.COM ARTICLE
Bar-Ilan University “Physicists Negate Century-Old Assumption Regarding Neurons and Brain Activity.” NeuroscienceNews. NeuroscienceNews, 21 December 2017.
<http://neurosciencenews.com/neurons-brain-activity-8227/>.

Abstract

New Types of Experiments Reveal that a Neuron Functions as Multiple Independent Threshold Units

Neurons are the computational elements that compose the brain and their fundamental principles of activity are known for decades. According to the long-lasting computational scheme, each neuron sums the incoming electrical signals via its dendrites and when the membrane potential reaches a certain threshold the neuron typically generates a spike to its axon. Here we present three types of experiments, using neuronal cultures, indicating that each neuron functions as a collection of independent threshold units. The neuron is anisotropically activated following the origin of the arriving signals to the membrane, via its dendritic trees. The first type of experiments demonstrates that a single neuron’s spike waveform typically varies as a function of the stimulation location. The second type reveals that spatial summation is absent for extracellular stimulations from different directions. The third type indicates that spatial summation and subtraction are not achieved when combining intra- and extra- cellular stimulations, as well as for nonlocal time interference, where the precise timings of the stimulations are irrelevant. Results call to re-examine neuronal functionalities beyond the traditional framework, and the advanced computational capabilities and dynamical properties of such complex systems.
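The contrast the abstract describes can be sketched with a toy model: a classical point neuron sums all of its inputs against a single threshold, whereas a neuron made of several independent threshold units fires if any one dendritic route crosses its own threshold, so where a signal arrives matters. This is only an illustration of the idea, with made-up numbers, not the authors’ model.

```python
# Illustrative sketch (not the authors' model) of a point neuron vs.
# a neuron acting as several independent threshold units.
import numpy as np

def point_neuron(inputs, threshold):
    """Classical scheme: sum everything, compare to one threshold."""
    return inputs.sum() >= threshold

def multi_unit_neuron(inputs_by_dendrite, threshold):
    """Proposed scheme: each dendritic route is its own threshold unit."""
    return any(x.sum() >= threshold for x in inputs_by_dendrite)

# One dendritic route receives a strong, focused volley; the others are quiet.
dendrite_inputs = [np.array([1.2, 1.1, 0.9]),   # focused input: sums to 3.2
                   np.array([0.2, 0.1]),
                   np.array([0.3])]

all_inputs = np.concatenate(dendrite_inputs)    # total is only 3.8

print(point_neuron(all_inputs, threshold=6.0))            # False: total too small
print(multi_unit_neuron(dendrite_inputs, threshold=2.5))  # True: one route crosses
```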


Link to the source: http://neurosciencenews.com/neurons-brain-activity-8227/

Single molecules can work as reproducible transistors — at room temperature

Researchers are first to reproducibly achieve the current blockade effect using atomically precise molecules at room temperature, a result that could lead to shrinking electrical components + boosting data storage + computing power


Date: August 14, 2017

Source: Columbia University School of Engineering and Applied Science

Summary:  Researchers have now reproducibly demonstrated current blockade — the ability to switch a device from the insulating to the conducting state where charge is added and removed one electron at a time — using atomically precise molecular clusters at room temperature. The study shows that single molecules can function as reproducible circuit elements such as transistors or diodes that can easily operate at room temperature.

A major goal in the field of molecular electronics, which aims to use single molecules as electronic components, is to make a device where a quantized, controllable flow of charge can be achieved at room temperature. A first step in this field is for researchers to demonstrate that single molecules can function as reproducible circuit elements such as transistors or diodes that can easily operate at room temperature.

A team led by Latha Venkataraman, professor of applied physics and chemistry at Columbia Engineering, and Xavier Roy, assistant professor of chemistry (Arts & Sciences), published a study today in Nature Nanotechnology that is the first to reproducibly demonstrate current blockade — the ability to switch a device from the insulating to the conducting state where charge is added and removed one electron at a time — using atomically precise molecular clusters at room temperature.

Bonnie Choi, a graduate student in the Roy group and co-lead author of the work, created a single cluster of geometrically ordered atoms with an inorganic core made of just 14 atoms — resulting in a diameter of about 0.5 nanometers — and positioned linkers that wired the core to two gold electrodes, much as a resistor is soldered to two metal electrodes to form a macroscopic electrical circuit (e.g. the filament in a light bulb).

The researchers used a scanning tunneling microscope technique that they have pioneered to make junctions comprising a single cluster connected to the two gold electrodes, which enabled them to characterize its electrical response as they varied the applied bias voltage. The technique allows them to fabricate and measure thousands of junctions with reproducible transport characteristics.

“We found that these clusters can perform very well as room-temperature nanoscale diodes whose electrical response we can tailor by changing their chemical composition,” says Venkataraman. “Theoretically, a single atom is the smallest limit, but single-atom devices cannot be fabricated and stabilized at room temperature. With these molecular clusters, we have complete control over their structure with atomic precision and can change the elemental composition and structure in a controllable manner to elicit certain electrical response.”

A number of studies have used quantum dots to produce similar effects, but because the dots are much larger and not uniform in size, due to the nature of their synthesis, the results have not been reproducible — not every device made with quantum dots behaved the same way. The Venkataraman-Roy team worked with smaller inorganic molecular clusters that were identical in shape and size, so they knew exactly — down to the atomic scale — what they were measuring.

“Most of the other studies created single-molecule devices that functioned as single-electron transistors at four degrees Kelvin, but for any real-world application, these devices need to work at room temperature. And ours do,” says Giacomo Lovat, a postdoctoral researcher and co-lead author of the paper. “We’ve built a molecular-scale transistor with multiple states and functionalities, in which we have control over the precise amount of charge that flows through. It’s fascinating to see that simple chemical changes within a molecule can have a profound influence on the electronic structure of molecules, leading to different electrical properties.”

The team evaluated the performance of the diode through the on/off ratio, which is the ratio between the current flowing through the device when it is switched on and the residual current still present in its “off” state. At room temperature, they observed an on/off ratio of about 600 in single-cluster junctions, higher than any other single-molecule devices measured to date. Particularly interesting was the fact that these junctions were characterized by a “sequential” mode of charge flow; each electron transiting through a cluster junction stopped on the cluster for a while. Usually, in small-molecule junctions, electrons “pushed” through the junction by the applied bias make the leap continuously, from one electrode into the other, so that the number of electrons on the molecule at each instant of time is not well-defined.

“We say the cluster becomes ‘charged’ since, for a short time interval before the transiting electron jumps off into the other metal electrode, it stores one extra charge,” says Roy. “Such sequential, or discrete, conduction mode is due to the cluster’s peculiar electronic structure that confines electrons in strongly localized orbitals. These orbitals also account for the observed ‘current blockade’ regime when a low bias voltage is applied to a cluster junction. The current drops to a very small value at low voltage as electrons in the metal contact don’t have enough energy to occupy one of the cluster orbitals. As the voltage is increased, the first cluster orbital that becomes energetically accessible opens up a viable route for electrons that can now jump on and off the cluster, resulting in consecutive ‘charging’ and ‘discharging’ events. The blockade is lifted, and current starts flowing across the junction.”
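As a rough illustration of the blockade picture Roy describes, here is a toy current-voltage model: the current stays near a small leakage value until the bias voltage reaches the first accessible cluster orbital, then rises as electrons begin hopping on and off the cluster. The threshold voltage, conductance, and leakage values are invented for illustration and are not the paper’s numbers; only the on/off ratio of roughly 600 comes from the article.

```python
# Toy illustration (not the paper's transport model) of current blockade:
# current is blocked until the bias reaches the first accessible orbital.
import numpy as np

def junction_current(v_bias, v_orbital=0.5, g_on=1.0, leak=0.002):
    """Piecewise toy I-V: tiny leakage below threshold, ohmic-like above."""
    v_bias = np.asarray(v_bias, dtype=float)
    return np.where(np.abs(v_bias) < v_orbital,
                    leak * v_bias,                                   # "off" (blockade) regime
                    g_on * (v_bias - np.sign(v_bias) * v_orbital))   # "on" regime

v = np.linspace(-1.0, 1.0, 9)
print(np.round(junction_current(v), 3))   # current stays tiny until |V| exceeds 0.5

on_off_ratio = float(junction_current(1.0)) / abs(float(junction_current(0.4)))
print(f"toy on/off ratio: {on_off_ratio:.0f}")   # the real devices show about 600
```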

The researchers tailored the clusters to explore the impact of compositional change on the clusters’ electrical response and plan to build upon their initial study. They will design improved cluster systems with better electrical performances (e.g. higher on/off current ratio, different accessible states), and increase the number of atoms in the cluster core while maintaining the atomic precision and uniformity of the compound. This would increase the number of energy levels, each corresponding to a certain electron orbit that they can access with their voltage window. Increasing the energy levels would impact the on/off ratio of the device, perhaps also decreasing the power needed for switching on the device if more energy levels become accessible for transiting electrons at low bias voltages.

“Most single-molecule transport investigations have been performed on simple organic molecules because they are easier to work with,” Venkataraman notes. “Our collaborative effort here through the Columbia Nano Initiative bridges chemistry and physics, enabling us to experiment with new compounds, such as these molecular clusters, that may not only be more synthetically challenging, but also more interesting as electrical components.”

Story Source:

Materials provided by Columbia University School of Engineering and Applied Science. Note: Content may be edited for style and length.

Link to this article as it appeared in Science Daily: 

https://www.sciencedaily.com/releases/2017/08/170814120959.htm?utm_source=feedburner&utm_medium=email&utm_campaign=Feed%3A+sciencedaily%2Ftop_news%2Ftop_science+%28ScienceDaily%3A+Top+Science+News%29

The Dark Secret at the Heart of AI

No one really knows how the most advanced algorithms do what they do. That could be a problem.

by Will Knight April 11, 2017

Last year, a strange self-driving car was released onto the quiet roads of Monmouth County, New Jersey. The experimental vehicle, developed by researchers at the chip maker Nvidia, didn’t look different from other autonomous cars, but it was unlike anything demonstrated by Google, Tesla, or General Motors, and it showed the rising power of artificial intelligence. The car didn’t follow a single instruction provided by an engineer or programmer. Instead, it relied entirely on an algorithm that had taught itself to drive by watching a human do it.

Getting a car to drive this way was an impressive feat. But it’s also a bit unsettling, since it isn’t completely clear how the car makes its decisions. Information from the vehicle’s sensors goes straight into a huge network of artificial neurons that process the data and then deliver the commands required to operate the steering wheel, the brakes, and other systems. The result seems to match the responses you’d expect from a human driver. But what if one day it did something unexpected—crashed into a tree, or sat at a green light? As things stand now, it might be difficult to find out why. The system is so complicated that even the engineers who designed it may struggle to isolate the reason for any single action. And you can’t ask it: there is no obvious way to design such a system so that it could always explain why it did what it did.

The mysterious mind of this vehicle points to a looming issue with artificial intelligence. The car’s underlying AI technology, known as deep learning, has proved very powerful at solving problems in recent years, and it has been widely deployed for tasks like image captioning, voice recognition, and language translation. There is now hope that the same techniques will be able to diagnose deadly diseases, make million-dollar trading decisions, and do countless other things to transform whole industries.

Already, mathematical models are being used to help determine who makes parole, who’s approved for a loan, and who gets hired for a job. If you could get access to these mathematical models, it would be possible to understand their reasoning. But banks, the military, employers, and others are now turning their attention to more complex machine-learning approaches that could make automated decision-making altogether inscrutable. Deep learning, the most common of these approaches, represents a fundamentally different way to program computers. “It is a problem that is already relevant, and it’s going to be much more relevant in the future,” says Tommi Jaakkola, a professor at MIT who works on applications of machine learning. “Whether it’s an investment decision, a medical decision, or maybe a military decision, you don’t want to just rely on a ‘black box’ method.”

There’s already an argument that being able to interrogate an AI system about how it reached its conclusions is a fundamental legal right. Starting in the summer of 2018, the European Union may require that companies be able to give users an explanation for decisions that automated systems reach. This might be impossible, even for systems that seem relatively simple on the surface, such as the apps and websites that use deep learning to serve ads or recommend songs. The computers that run those services have programmed themselves, and they have done it in ways we cannot understand. Even the engineers who build these apps cannot fully explain their behavior.

This raises mind-boggling questions. As the technology advances, we might soon cross some threshold beyond which using AI requires a leap of faith. Sure, we humans can’t always truly explain our thought processes either—but we find ways to intuitively trust and gauge people. Will that also be possible with machines that think and make decisions differently from the way a human would? We’ve never before built machines that operate in ways their creators don’t understand. How well can we expect to communicate—and get along with—intelligent machines that could be unpredictable and inscrutable? These questions took me on a journey to the bleeding edge of research on AI algorithms, from Google to Apple and many places in between, including a meeting with one of the great philosophers of our time.

The artist Adam Ferriss created this image, and the one below, using Google Deep Dream, a program that adjusts an image to stimulate the pattern recognition capabilities of a deep neural network. The pictures were produced using a mid-level layer of the neural network. ADAM FERRISS

In 2015, a research group at Mount Sinai Hospital in New York was inspired to apply deep learning to the hospital’s vast database of patient records. This data set features hundreds of variables on patients, drawn from their test results, doctor visits, and so on. The resulting program, which the researchers named Deep Patient, was trained using data from about 700,000 individuals, and when tested on new records, it proved incredibly good at predicting disease. Without any expert instruction, Deep Patient had discovered patterns hidden in the hospital data that seemed to indicate when people were on the way to a wide range of ailments, including cancer of the liver. There are a lot of methods that are “pretty good” at predicting disease from a patient’s records, says Joel Dudley, who leads the Mount Sinai team. But, he adds, “this was just way better.”

At the same time, Deep Patient is a bit puzzling. It appears to anticipate the onset of psychiatric disorders like schizophrenia surprisingly well. But since schizophrenia is notoriously difficult for physicians to predict, Dudley wondered how this was possible. He still doesn’t know. The new tool offers no clue as to how it does this. If something like Deep Patient is actually going to help doctors, it will ideally give them the rationale for its prediction, to reassure them that it is accurate and to justify, say, a change in the drugs someone is being prescribed. “We can build these models,” Dudley says ruefully, “but we don’t know how they work.”

Artificial intelligence hasn’t always been this way. From the outset, there were two schools of thought regarding how understandable, or explainable, AI ought to be. Many thought it made the most sense to build machines that reasoned according to rules and logic, making their inner workings transparent to anyone who cared to examine some code. Others felt that intelligence would more easily emerge if machines took inspiration from biology, and learned by observing and experiencing. This meant turning computer programming on its head. Instead of a programmer writing the commands to solve a problem, the program generates its own algorithm based on example data and a desired output. The machine-learning techniques that would later evolve into today’s most powerful AI systems followed the latter path: the machine essentially programs itself.

At first this approach was of limited practical use, and in the 1960s and ’70s it remained largely confined to the fringes of the field. Then the computerization of many industries and the emergence of large data sets renewed interest. That inspired the development of more powerful machine-learning techniques, especially new versions of one known as the artificial neural network. By the 1990s, neural networks could automatically digitize handwritten characters.

But it was not until the start of this decade, after several clever tweaks and refinements, that very large—or “deep”—neural networks demonstrated dramatic improvements in automated perception. Deep learning is responsible for today’s explosion of AI. It has given computers extraordinary powers, like the ability to recognize spoken words almost as well as a person could, a skill too complex to code into the machine by hand. Deep learning has transformed computer vision and dramatically improved machine translation. It is now being used to guide all sorts of key decisions in medicine, finance, manufacturing—and beyond.


The workings of any machine-learning technology are inherently more opaque, even to computer scientists, than a hand-coded system. This is not to say that all future AI techniques will be equally unknowable. But by its nature, deep learning is a particularly dark black box.

You can’t just look inside a deep neural network to see how it works. A network’s reasoning is embedded in the behavior of thousands of simulated neurons, arranged into dozens or even hundreds of intricately interconnected layers. The neurons in the first layer each receive an input, like the intensity of a pixel in an image, and then perform a calculation before outputting a new signal. These outputs are fed, in a complex web, to the neurons in the next layer, and so on, until an overall output is produced. Plus, there is a process known as back-propagation that tweaks the calculations of individual neurons in a way that lets the network learn to produce a desired output.
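For readers who want to see that mechanism in miniature, here is a toy two-layer network in Python/NumPy: signals flow forward through layers of simulated neurons, and back-propagation nudges every weight toward a desired output. It is a sketch of the general technique described above, with made-up data, not any production system.

```python
# Minimal sketch of the mechanism described above: a tiny two-layer network
# passes signals forward through layers of simulated "neurons", and
# back-propagation tweaks the weights so the output moves toward a target.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))              # 4 example inputs, 8 features each
y = np.array([[0.], [1.], [1.], [0.]])   # desired outputs

W1 = rng.normal(scale=0.5, size=(8, 16))   # first layer of weights
W2 = rng.normal(scale=0.5, size=(16, 1))   # second layer of weights
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for step in range(5000):
    # Forward pass: each layer computes on the previous layer's outputs.
    h = sigmoid(x @ W1)
    out = sigmoid(h @ W2)

    # Backward pass: propagate the output error back through the layers
    # and nudge every weight slightly in the direction that reduces it.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    W1 -= 0.5 * x.T @ d_h

print(out.round(2))   # after training, close to the desired outputs
```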

The many layers in a deep network enable it to recognize things at different levels of abstraction. In a system designed to recognize dogs, for instance, the lower layers recognize simple things like outlines or color; higher layers recognize more complex stuff like fur or eyes; and the topmost layer identifies it all as a dog. The same approach can be applied, roughly speaking, to other inputs that lead a machine to teach itself: the sounds that make up words in speech, the letters and words that create sentences in text, or the steering-wheel movements required for driving.

Ingenious strategies have been used to try to capture and thus explain in more detail what’s happening in such systems. In 2015, researchers at Google modified a deep-learning-based image recognition algorithm so that instead of spotting objects in photos, it would generate or modify them. By effectively running the algorithm in reverse, they could discover the features the program uses to recognize, say, a bird or building. The resulting images, produced by a project known as Deep Dream, showed grotesque, alien-like animals emerging from clouds and plants, and hallucinatory pagodas blooming across forests and mountain ranges. The images proved that deep learning need not be entirely inscrutable; they revealed that the algorithms home in on familiar visual features like a bird’s beak or feathers. But the images also hinted at how different deep learning is from human perception, in that it might make something out of an artifact that we would know to ignore. Google researchers noted that when its algorithm generated images of a dumbbell, it also generated a human arm holding it. The machine had concluded that an arm was part of the thing.
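The “running the algorithm in reverse” idea can be sketched in a few lines of PyTorch: freeze a trained network, then use gradients to modify the input image itself so that a chosen layer’s activations grow stronger. The `model` and `some_layer` names below are placeholders, and this is an illustration of the general approach rather than Google’s Deep Dream code.

```python
# Sketch of the "run it in reverse" idea behind Deep Dream (illustrative,
# not Google's code): freeze the trained network and use gradients to
# modify the *image* so that a chosen layer's activations grow stronger.
import torch

def dream(model, layer, image, steps=20, lr=0.05):
    activations = {}
    layer.register_forward_hook(lambda module, inp, out: activations.update(out=out))

    image = image.clone().requires_grad_(True)
    for _ in range(steps):
        model(image)                           # forward pass records the layer
        loss = activations["out"].norm()       # "how strongly does it fire?"
        loss.backward()                        # gradient with respect to the pixels
        with torch.no_grad():
            image += lr * image.grad / (image.grad.abs().mean() + 1e-8)
            image.grad.zero_()
    return image.detach()

# Usage (assumes some pretrained `model` and a chosen `model.some_layer`):
# dreamed = dream(model, model.some_layer, torch.rand(1, 3, 224, 224))
```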

Further progress has been made using ideas borrowed from neuroscience and cognitive science. A team led by Jeff Clune, an assistant professor at the University of Wyoming, has employed the AI equivalent of optical illusions to test deep neural networks. In 2015, Clune’s group showed how certain images could fool such a network into perceiving things that aren’t there, because the images exploit the low-level patterns the system searches for. One of Clune’s collaborators, Jason Yosinski, also built a tool that acts like a probe stuck into a brain. His tool targets any neuron in the middle of the network and searches for the image that activates it the most. The images that turn up are abstract (imagine an impressionistic take on a flamingo or a school bus), highlighting the mysterious nature of the machine’s perceptual abilities.

We need more than a glimpse of AI’s thinking, however, and there is no easy solution. It is the interplay of calculations inside a deep neural network that is crucial to higher-level pattern recognition and complex decision-making, but those calculations are a quagmire of mathematical functions and variables. “If you had a very small neural network, you might be able to understand it,” Jaakkola says. “But once it becomes very large, and it has thousands of units per layer and maybe hundreds of layers, then it becomes quite un-understandable.”

In the office next to Jaakkola is Regina Barzilay, an MIT professor who is determined to apply machine learning to medicine. She was diagnosed with breast cancer a couple of years ago, at age 43. The diagnosis was shocking in itself, but Barzilay was also dismayed that cutting-edge statistical and machine-learning methods were not being used to help with oncological research or to guide patient treatment. She says AI has huge potential to revolutionize medicine, but realizing that potential will mean going beyond just medical records. She envisions using more of the raw data that she says is currently underutilized: “imaging data, pathology data, all this information.”

After she finished cancer treatment last year, Barzilay and her students began working with doctors at Massachusetts General Hospital to develop a system capable of mining pathology reports to identify patients with specific clinical characteristics that researchers might want to study. However, Barzilay understood that the system would need to explain its reasoning. So, together with Jaakkola and a student, she added a step: the system extracts and highlights snippets of text that are representative of a pattern it has discovered. Barzilay and her students are also developing a deep-learning algorithm capable of finding early signs of breast cancer in mammogram images, and they aim to give this system some ability to explain its reasoning, too. “You really need to have a loop where the machine and the human collaborate,” Barzilay says.

The U.S. military is pouring billions into projects that will use machine learning to pilot vehicles and aircraft, identify targets, and help analysts sift through huge piles of intelligence data. Here more than anywhere else, even more than in medicine, there is little room for algorithmic mystery, and the Department of Defense has identified explainability as a key stumbling block.

David Gunning, a program manager at the Defense Advanced Research Projects Agency, is overseeing the aptly named Explainable Artificial Intelligence program. A silver-haired veteran of the agency who previously oversaw the DARPA project that eventually led to the creation of Siri, Gunning says automation is creeping into countless areas of the military. Intelligence analysts are testing machine learning as a way of identifying patterns in vast amounts of surveillance data. Many autonomous ground vehicles and aircraft are being developed and tested. But soldiers probably won’t feel comfortable in a robotic tank that doesn’t explain itself to them, and analysts will be reluctant to act on information without some reasoning. “It’s often the nature of these machine-learning systems that they produce a lot of false alarms, so an intel analyst really needs extra help to understand why a recommendation was made,” Gunning says.

This March, DARPA chose 13 projects from academia and industry for funding under Gunning’s program. Some of them could build on work led by Carlos Guestrin, a professor at the University of Washington. He and his colleagues have developed a way for machine-learning systems to provide a rationale for their outputs. Essentially, under this method a computer automatically finds a few examples from a data set and serves them up in a short explanation. A system designed to classify an e-mail message as coming from a terrorist, for example, might use many millions of messages in its training and decision-making. But using the Washington team’s approach, it could highlight certain keywords found in a message. Guestrin’s group has also devised ways for image recognition systems to hint at their reasoning by highlighting the parts of an image that were most significant.
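The general idea behind such keyword-highlighting explanations can be sketched as follows, in the spirit of local surrogate methods like LIME from Guestrin’s group, though this is not their library code: hide random subsets of words, watch how the black-box score changes, and fit a small linear model whose largest weights mark the most influential words. The toy classifier below is a stand-in for whatever opaque model is being explained.

```python
# Sketch of keyword-highlighting explanations (LIME-style local surrogate):
# perturb a message by hiding words, watch how the black-box score moves,
# and fit a small linear model whose weights rank the words' influence.
import numpy as np

def explain(message, black_box_score, n_samples=500):
    rng = np.random.default_rng(0)
    words = message.split()
    masks = rng.integers(0, 2, size=(n_samples, len(words)))   # 1 = keep word
    scores = np.array([
        black_box_score(" ".join(w for w, keep in zip(words, m) if keep))
        for m in masks
    ])
    # Least-squares fit: which words' presence best predicts the score?
    X = np.column_stack([masks, np.ones(n_samples)])
    weights, *_ = np.linalg.lstsq(X, scores, rcond=None)
    return sorted(zip(words, weights[:-1]), key=lambda p: -abs(p[1]))

# Usage with a stand-in classifier that flags a few alarming keywords:
suspicious = {"attack", "detonate"}
score = lambda text: sum(w in suspicious for w in text.split()) / 2
print(explain("we will attack the bridge and detonate at dawn", score)[:3])
```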

One drawback to this approach and others like it, such as Barzilay’s, is that the explanations provided will always be simplified, meaning some vital information may be lost along the way. “We haven’t achieved the whole dream, which is where AI has a conversation with you, and it is able to explain,” says Guestrin. “We’re a long way from having truly interpretable AI.”

It doesn’t have to be a high-stakes situation like cancer diagnosis or military maneuvers for this to become an issue. Knowing AI’s reasoning is also going to be crucial if the technology is to become a common and useful part of our daily lives. Tom Gruber, who leads the Siri team at Apple, says explainability is a key consideration for his team as it tries to make Siri a smarter and more capable virtual assistant. Gruber wouldn’t discuss specific plans for Siri’s future, but it’s easy to imagine that if you receive a restaurant recommendation from Siri, you’ll want to know what the reasoning was. Ruslan Salakhutdinov, director of AI research at Apple and an associate professor at Carnegie Mellon University, sees explainability as the core of the evolving relationship between humans and intelligent machines. “It’s going to introduce trust,” he says.

Just as many aspects of human behavior are impossible to explain in detail, perhaps it won’t be possible for AI to explain everything it does. “Even if somebody can give you a reasonable-sounding explanation [for his or her actions], it probably is incomplete, and the same could very well be true for AI,” says Clune, of the University of Wyoming. “It might just be part of the nature of intelligence that only part of it is exposed to rational explanation. Some of it is just instinctual, or subconscious, or inscrutable.”

If that’s so, then at some stage we may have to simply trust AI’s judgment or do without using it. Likewise, that judgment will have to incorporate social intelligence. Just as society is built upon a contract of expected behavior, we will need to design AI systems to respect and fit with our social norms. If we are to create robot tanks and other killing machines, it is important that their decision-making be consistent with our ethical judgments.

To probe these metaphysical concepts, I went to Tufts University to meet with Daniel Dennett, a renowned philosopher and cognitive scientist who studies consciousness and the mind. A chapter of Dennett’s latest book, From Bacteria to Bach and Back, an encyclopedic treatise on consciousness, suggests that a natural part of the evolution of intelligence itself is the creation of systems capable of performing tasks their creators do not know how to do. “The question is, what accommodations do we have to make to do this wisely—what standards do we demand of them, and of ourselves?” he tells me in his cluttered office on the university’s idyllic campus.

He also has a word of warning about the quest for explainability. “I think by all means if we’re going to use these things and rely on them, then let’s get as firm a grip on how and why they’re giving us the answers as possible,” he says. But since there may be no perfect answer, we should be as cautious of AI explanations as we are of each other’s—no matter how clever a machine seems. “If it can’t do better than us at explaining what it’s doing,” he says, “then don’t trust it.”

This story is part of our May/June 2017 Issue

 

To see the story at its source, including more AI-generated images, use this link: https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/
