Latest Posts

Three dramatic new ways to visualize brain tissue and neuron circuits

May lead to breakthroughs in tracking brain disorders such as autism, schizophrenia, and Alzheimer’s
May 7, 2018 (NOT written by David Wolf; collected from Kurzweil AI and reproduced here.)
Visualizing the brain: Here, tissue from a human dentate gyrus (a part of the brain’s hippocampus that is involved in the formation of new memories) was imaged transparently in 3D and color-coded to reveal the distribution and types of nerve cells. (credit: The University of Hong Kong)

Visualizing human brain tissue in vibrant transparent colors

Neuroscientists from The University of Hong Kong (HKU) and Imperial College London have developed a new method called “OPTIClear” for 3D transparent color visualization (at the microscopic level) of complex human brain circuits.

To understand how the brain works, neuroscientists map how neurons (nerve cells) are wired to form circuits in both healthy and disease states. To do that, the scientists typically cut brain tissues into thin slices. Then they trace the entangled fibers across those slices — a complex, laborious process.

Making human tissues transparent. OPTIClear replaces that process by “clearing” (making tissues transparent) and using fluorescent staining to identify different types of neurons. In one study of more than 3,000 large neurons in the human basal forebrain, the researchers were able to cut the time needed to visualize neurons, glial cells, and blood vessels in exquisite 3D detail from about three weeks to five days. Previous clearing methods (such as CLARITY) have been limited to rodent tissue.

Reference (open access): Nature Communications March 14, 2018. Source: HKU and Imperial College London, May 7, 2018

Watching millions of brain cells in a moving animal for the first time

Neurons in the hippocampus flash on and off as a mouse walks around with tiny camera lenses on its head. (credit: The Rockefeller University)

It’s a neuroscientist’s dream: being able to track the millions of interactions among brain cells in animals that move about freely — allowing for studying brain disorders. Now a new invention, developed at The Rockefeller University and reported today, is expected to give researchers a dynamic tool to do just that, eventually in humans.

The new tool can track neurons located at different depths within a volume of brain tissue in a freely moving rodent, or record the interplay among neurons when two animals meet and interact socially.

Microlens array for 3D recording. The technology consists of a tiny microscope attached to a mouse’s head, with a group of lenses called a “microlens array.” These lenses enable the microscope to capture images from multiple angles and depths on a sensor chip, producing a three-dimensional record of neurons blinking on and off as they communicate with each other through electrochemical impulses. (The mouse neurons are genetically modified to light up when they become activated.) A cable attached to the top of the microscope transmits the data for recording.

One challenge: Brain tissue is opaque, making light scatter, which makes it difficult to pinpoint the source of each neuronal light flash. The researchers’ solution: a new computer algorithm (program), known as SID, that extracts additional information from the scattered emission light.

Reference: Nature Methods. Source: The Rockefeller University May 7, 2018

Brain cells interacting in real time

Illustration: An astrocyte (green) interacts with a synapse (red), producing an optical signal (yellow). (credit: UCLA/Khakh lab)

Researchers at the David Geffen School of Medicine at UCLA can now peer deep inside a mouse’s brain to watch how star-shaped astrocytes (support glial cells in the brain) interact with synapses (the junctions between neurons) to signal each other and convey messages.

The method uses different colors of light that pass through a lens to magnify objects that are invisible to the naked eye. The viewable objects are now far smaller than those viewable by earlier techniques. That enables researchers to observe how brain damage alters the way astrocytes interact with neurons, and develop strategies to address these changes, for example.

Astrocytes are believed to play a key role in neurological disorders like Lou Gehrig’s, Alzheimer’s, and Huntington’s disease.

Reference: Neuron. Source: UCLA Khakh lab April 4, 2018.

Here’s the link to the original article: http://www.kurzweilai.net/three-dramatic-new-ways-to-visualize-brain-tissue-and-neuron-circuits?utm_source=KurzweilAI+Weekly+Newsletter&utm_campaign=2f10ce3011-UA-946742-1&utm_medium=email&utm_term=0_147a5a48c1-2f10ce3011-282002409


The brain learns differently than we’ve assumed, new learning theory says

March 28, 2018
A revolutionary new theory contradicts a fundamental assumption in neuroscience about how the brain learns. According to researchers at Bar-Ilan University in Israel led by Prof. Ido Kanter, the theory promises to transform our understanding of brain dysfunction and may lead to advanced, faster deep-learning algorithms.
New post-Hebb brain-learning model may lead to new brain treatments and breakthroughs in faster deep learning.
A biological schema of an output neuron, comprising a neuron’s soma (body, shown as gray circle, top) with two roots of dendritic trees (light-blue arrows), splitting into many dendritic branches (light-blue lines). The signals arriving from the connecting input neurons (gray circles, bottom) travel via their axons (red lines) and their many branches until terminating with the synapses (green stars). There, the signals connect with dendrites (some synapse branches travel to other neurons), which then connect to the soma. (credit: Shira Sardi et al./Sci. Rep)

The brain is a highly complex network containing billions of neurons. Each of these neurons communicates simultaneously with thousands of others via their synapses. A neuron collects its many synaptic incoming signals through dendritic trees.

In 1949, Donald Hebb suggested that learning occurs in the brain by modifying the strength of synapses. Hebb’s theory has remained a deeply rooted assumption in neuroscience.

Synaptic vs. dendritic learning

In vitro experimental setup. A micro-electrode array comprising 60 extracellular electrodes separated by 200 micrometers, indicating a neuron patched (connected) by an intracellular electrode (orange) and a nearby extracellular electrode (green line). (Inset) Reconstruction of a fluorescence image, showing a patched cortical pyramidal neuron (red) and its dendrites growing in different directions and in proximity to extracellular electrodes. (credit: Shira Sardi et al./Scientific Reports adapted by KurzweilAI)

Hebb was wrong, says Kanter. “A new type of experiment strongly indicates that a faster and enhanced learning process occurs in the neuronal dendrites, similarly to what is currently attributed to the synapse,” Kanter and his team suggest in an open-access paper in Nature’s Scientific Reports, published Mar. 23, 2018.

“In this new [faster] dendritic learning process, there are [only] a few adaptive parameters per neuron, in comparison to thousands of tiny and sensitive ones in the synaptic learning scenario,” says Kanter. “Does it make sense to measure the quality of air we breathe via many tiny, distant satellite sensors at the elevation of a skyscraper, or by using one or several sensors in close proximity to the nose?” he asks. “Similarly, it is more efficient for the neuron to estimate its incoming signals close to its computational unit, the neuron.”

Image representing the current synaptic (pink) vs. the new dendritic (green) learning scenarios of the brain. In the current scenario, a neuron (black) with a small number of dendritic trees (two in this example, center) collects incoming signals via synapses (represented by red valves), with many thousands of tiny adjustable learning parameters. In the new dendritic learning scenario (green), a few adjustable controls (two in this example, shown as red valves) are located in close proximity to the computational element, the neuron. The scale is such that if a neuron collecting its incoming signals is represented by a person’s faraway fingers, the length of its hands would be as tall as a skyscraper (left). (credit: Prof. Ido Kanter)
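To make the contrast concrete, here is a minimal NumPy sketch (an illustrative toy, not the authors’ model): in the conventional synaptic scenario every one of the ~1,000 input weights is adjusted individually, whereas in the dendritic scenario only a couple of per-branch gains, sitting close to the soma, are adapted. The learning rates, target value, and network sizes are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_branches = 1000, 2                      # many synapses, only two dendritic branches
branch_of = rng.integers(0, n_branches, n_inputs)   # each input terminates on one branch
x = rng.normal(size=n_inputs)                       # incoming signals from input neurons
w = 0.01 * rng.normal(size=n_inputs)                # per-synapse coupling strengths
target = 1.0                                        # desired somatic output (arbitrary)

def branch_sums(weights):
    """Weighted input arriving on each dendritic branch."""
    return np.array([weights[branch_of == b] @ x[branch_of == b] for b in range(n_branches)])

# Synaptic scenario: ~1,000 tiny adjustable parameters, one per synapse.
w_syn = w.copy()
for _ in range(100):
    err = target - branch_sums(w_syn).sum()
    w_syn += 0.1 * err * x / n_inputs               # nudge every synapse individually

# Dendritic scenario: only 2 adjustable gains, one per branch; synapses left untouched.
gains = np.ones(n_branches)
for _ in range(100):
    err = target - gains @ branch_sums(w)
    gains += 1.0 * err * branch_sums(w)             # nudge a handful of per-branch parameters

print("synaptic scenario output: ", branch_sums(w_syn).sum())
print("dendritic scenario output:", gains @ branch_sums(w))
```

Both toy rules reach the target output, but the dendritic version does so with two parameters instead of a thousand, which is the efficiency argument Kanter makes above.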

The researchers also found that weak synapses, which comprise the majority of our brain and were previously assumed to be insignificant, actually play an important role in the dynamics of our brain.

According to the researchers, the new learning theory may lead to advanced, faster, deep-learning algorithms and other artificial-intelligence-based applications, and also suggests that we need to reevaluate our current treatments for disordered brain functionality.

This research is supported in part by the TELEM grant of the Israel Council for Higher Education.

Abstract of “Adaptive nodes enrich nonlinear cooperative learning beyond traditional adaptation by links”

Physical models typically assume time-independent interactions, whereas neural networks and machine learning incorporate interactions that function as adjustable parameters. Here we demonstrate a new type of abundant cooperative nonlinear dynamics where learning is attributed solely to the nodes, instead of the network links which their number is significantly larger. The nodal, neuronal, fast adaptation follows its relative anisotropic (dendritic) input timings, as indicated experimentally, similarly to the slow learning mechanism currently attributed to the links, synapses. It represents a non-local learning rule, where effectively many incoming links to a node concurrently undergo the same adaptation. The network dynamics is now counterintuitively governed by the weak links, which previously were assumed to be insignificant. This cooperative nonlinear dynamic adaptation presents a self-controlled mechanism to prevent divergence or vanishing of the learning parameters, as opposed to learning by links, and also supports self-oscillations of the effective learning parameters. It hints on a hierarchical computational complexity of nodes, following their number of anisotropic inputs and opens new horizons for advanced deep learning algorithms and artificial intelligence based applications, as well as a new mechanism for enhanced and fast learning by neural networks.

References:

To access the original article, use this link: http://www.kurzweilai.net/the-brain-learns-completely-differently-than-weve-assumed-new-learning-theory-says?utm_source=KurzweilAI+Weekly+Newsletter&utm_campaign=303237c4e7-UA-946742-1&utm_medium=email&utm_term=0_147a5a48c1-303237c4e7-282002409


Physicists Negate Century-Old Assumption Regarding Neurons and Brain Activity

Using new types of experiments on neuronal cultures, a group of scientists, led by Prof. Ido Kanter of the Department of Physics at Bar-Ilan University, has demonstrated that this century-old assumption regarding brain activity is mistaken. (NeuroscienceNews.com image is in the public domain.)

The new understanding of a neuron’s computational scheme calls into question the spike-sorting technique that is at the center of activity in hundreds of laboratories and thousands of scientific studies in neuroscience. The method was invented mainly to overcome the technological barrier to measuring the activity of many neurons simultaneously, and it relies on the assumption that each neuron tends to fire spikes of a particular waveform that serves as its own electrical signature. That assumption, on which enormous scientific effort and resources have rested, is now called into question by the work of Kanter’s lab. (See abstract below.)

ABOUT THIS NEUROSCIENCE RESEARCH ARTICLE

Funding: This research is supported in part by the TELEM grant of the Council for Higher Education in Israel.

Source: Elana Oberlander – Bar-Ilan University
Publisher: Organized by NeuroscienceNews.com.
Image Source: NeuroscienceNews.com image is credited to Tommy Leonardi.
Original Research: Full open access research for “New Types of Experiments Reveal that a Neuron Functions as Multiple Independent Threshold Units” by Shira Sardi, Roni Vardi, Anton Sheinin, Amir Goldental & Ido Kanter in Scientific Reports. Published online December 21 2017 doi:10.1038/s41598-017-18363-1

CITE THIS NEUROSCIENCENEWS.COM ARTICLE
Bar-Ilan University “Physicists Negate Century-Old Assumption Regarding Neurons and Brain Activity.” NeuroscienceNews. NeuroscienceNews, 21 December 2017.
<http://neurosciencenews.com/neurons-brain-activity-8227/>.

Abstract

New Types of Experiments Reveal that a Neuron Functions as Multiple Independent Threshold Units

Neurons are the computational elements that compose the brain and their fundamental principles of activity are known for decades. According to the long-lasting computational scheme, each neuron sums the incoming electrical signals via its dendrites and when the membrane potential reaches a certain threshold the neuron typically generates a spike to its axon. Here we present three types of experiments, using neuronal cultures, indicating that each neuron functions as a collection of independent threshold units. The neuron is anisotropically activated following the origin of the arriving signals to the membrane, via its dendritic trees. The first type of experiments demonstrates that a single neuron’s spike waveform typically varies as a function of the stimulation location. The second type reveals that spatial summation is absent for extracellular stimulations from different directions. The third type indicates that spatial summation and subtraction are not achieved when combining intra- and extra- cellular stimulations, as well as for nonlocal time interference, where the precise timings of the stimulations are irrelevant. Results call to re-examine neuronal functionalities beyond the traditional framework, and the advanced computational capabilities and dynamical properties of such complex systems.


Link to the source: http://neurosciencenews.com/neurons-brain-activity-8227/

Single molecules can work as reproducible transistors — at room temperature

Researchers are the first to reproducibly achieve the current-blockade effect using atomically precise molecules at room temperature, a result that could lead to smaller electrical components and greater data storage and computing power


Date: August 14, 2017

Source: Columbia University School of Engineering and Applied Science

Summary:  Researchers have now reproducibly demonstrated current blockade — the ability to switch a device from the insulating to the conducting state where charge is added and removed one electron at a time — using atomically precise molecular clusters at room temperature. The study shows that single molecules can function as reproducible circuit elements such as transistors or diodes that can easily operate at room temperature.

A major goal in the field of molecular electronics, which aims to use single molecules as electronic components, is to make a device where a quantized, controllable flow of charge can be achieved at room temperature. A first step in this field is for researchers to demonstrate that single molecules can function as reproducible circuit elements such as transistors or diodes that can easily operate at room temperature.

A team led by Latha Venkataraman, professor of applied physics and chemistry at Columbia Engineering and Xavier Roy, assistant professor of chemistry (Arts & Sciences), published a study today in Nature Nanotechnology that is the first to reproducibly demonstrate current blockade — the ability to switch a device from the insulating to the conducting state where charge is added and removed one electron at a time — using atomically precise molecular clusters at room temperature.

Bonnie Choi, a graduate student in the Roy group and co-lead author of the work, created a single cluster of geometrically ordered atoms with an inorganic core made of just 14 atoms — resulting in a diameter of about 0.5 nanometers — and positioned linkers that wired the core to two gold electrodes, much as a resistor is soldered to two metal electrodes to form a macroscopic electrical circuit (e.g. the filament in a light bulb).

The researchers used a scanning tunneling microscope technique that they have pioneered to make junctions comprising a single cluster connected to the two gold electrodes, which enabled them to characterize its electrical response as they varied the applied bias voltage. The technique allows them to fabricate and measure thousands of junctions with reproducible transport characteristics.

“We found that these clusters can perform very well as room-temperature nanoscale diodes whose electrical response we can tailor by changing their chemical composition,” says Venkataraman. “Theoretically, a single atom is the smallest limit, but single-atom devices cannot be fabricated and stabilized at room temperature. With these molecular clusters, we have complete control over their structure with atomic precision and can change the elemental composition and structure in a controllable manner to elicit certain electrical response.”

A number of studies have used quantum dots to produce similar effects, but because the dots are much larger and not uniform in size, due to the nature of their synthesis, the results have not been reproducible — not every device made with quantum dots behaved the same way. The Venkataraman-Roy team worked with smaller inorganic molecular clusters that were identical in shape and size, so they knew exactly — down to the atomic scale — what they were measuring.

“Most of the other studies created single-molecule devices that functioned as single-electron transistors at four degrees Kelvin, but for any real-world application, these devices need to work at room temperature. And ours do,” says Giacomo Lovat, a postdoctoral researcher and co-lead author of the paper. “We’ve built a molecular-scale transistor with multiple states and functionalities, in which we have control over the precise amount of charge that flows through. It’s fascinating to see that simple chemical changes within a molecule can have a profound influence on the electronic structure of molecules, leading to different electrical properties.”

The team evaluated the performance of the diode through the on/off ratio, which is the ratio between the current flowing through the device when it is switched on and the residual current still present in its “off” state. At room temperature, they observed an on/off ratio of about 600 in single-cluster junctions, higher than any other single-molecule devices measured to date. Particularly interesting was the fact that these junctions were characterized by a “sequential” mode of charge flow; each electron transiting through a cluster junction stopped on the cluster for a while. Usually, in small-molecule junctions, electrons “pushed” through the junction by the applied bias make the leap continuously, from one electrode into the other, so that the number of electrons on the molecule at each instant of time is not well-defined.

“We say the cluster becomes ‘charged’ since, for a short time interval before the transiting electron jumps off into the other metal electrode, it stores one extra charge,” says Roy. “Such sequential, or discrete, conduction mode is due to the cluster’s peculiar electronic structure that confines electrons in strongly localized orbitals. These orbitals also account for the observed ‘current blockade’ regime when a low bias voltage is applied to a cluster junction. The current drops to a very small value at low voltage as electrons in the metal contact don’t have enough energy to occupy one of the cluster orbitals. As the voltage is increased, the first cluster orbital that becomes energetically accessible opens up a viable route for electrons that can now jump on and off the cluster, resulting in consecutive ‘charging’ and ‘discharging’ events. The blockade is lifted, and current starts flowing across the junction.”
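As a rough numerical caricature of the blockade just described (not the paper’s transport model), the current can be pictured as nearly zero until the bias voltage reaches the energy of the first accessible cluster orbital, then rising once that orbital opens a conduction channel. The threshold, conductance, and leakage values below are arbitrary assumptions.

```python
import numpy as np

def blockade_current(v_bias, v_threshold=0.5, g_on=1e-9, g_leak=1e-12):
    """Toy current-voltage curve with a blockade region.

    Below the threshold only a tiny leakage current flows; above it,
    electrons can hop on and off the cluster one at a time and the
    current rises roughly linearly. All values are illustrative, in amperes.
    """
    v = np.asarray(v_bias, dtype=float)
    above = np.abs(v) > v_threshold
    return np.where(above, g_on * (np.abs(v) - v_threshold) * np.sign(v), g_leak * v)

voltages = np.linspace(-1.0, 1.0, 9)
for v, i in zip(voltages, blockade_current(voltages)):
    print(f"V = {v:+.2f} V  ->  I = {i:+.3e} A")
```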

The researchers tailored the clusters to explore the impact of compositional change on the clusters’ electrical response and plan to build upon their initial study. They will design improved cluster systems with better electrical performances (e.g. higher on/off current ratio, different accessible states), and increase the number of atoms in the cluster core while maintaining the atomic precision and uniformity of the compound. This would increase the number of energy levels, each corresponding to a certain electron orbit that they can access with their voltage window. Increasing the energy levels would impact the on/off ratio of the device, perhaps also decreasing the power needed for switching on the device if more energy levels become accessible for transiting electrons at low bias voltages.

“Most single-molecule transport investigations have been performed on simple organic molecules because they are easier to work with,” Venkataraman notes. “Our collaborative effort here through the Columbia Nano Initiative bridges chemistry and physics, enabling us to experiment with new compounds, such as these molecular clusters, that may not only be more synthetically challenging, but also more interesting as electrical components.”

Story Source:

Materials provided by Columbia University School of Engineering and Applied Science. Note: Content may be edited for style and length.

Link to this article as it appeared in Science Daily: 

https://www.sciencedaily.com/releases/2017/08/170814120959.htm?utm_source=feedburner&utm_medium=email&utm_campaign=Feed%3A+sciencedaily%2Ftop_news%2Ftop_science+%28ScienceDaily%3A+Top+Science+News%29

The Dark Secret at the Heart of AI

No one really knows how the most advanced algorithms do what they do. That could be a problem.

by Will Knight April 11, 2017

Last year, a strange self-driving car was released onto the quiet roads of Monmouth County, New Jersey. The experimental vehicle, developed by researchers at the chip maker Nvidia, didn’t look different from other autonomous cars, but it was unlike anything demonstrated by Google, Tesla, or General Motors, and it showed the rising power of artificial intelligence. The car didn’t follow a single instruction provided by an engineer or programmer. Instead, it relied entirely on an algorithm that had taught itself to drive by watching a human do it.

Getting a car to drive this way was an impressive feat. But it’s also a bit unsettling, since it isn’t completely clear how the car makes its decisions. Information from the vehicle’s sensors goes straight into a huge network of artificial neurons that process the data and then deliver the commands required to operate the steering wheel, the brakes, and other systems. The result seems to match the responses you’d expect from a human driver. But what if one day it did something unexpected—crashed into a tree, or sat at a green light? As things stand now, it might be difficult to find out why. The system is so complicated that even the engineers who designed it may struggle to isolate the reason for any single action. And you can’t ask it: there is no obvious way to design such a system so that it could always explain why it did what it did.

The mysterious mind of this vehicle points to a looming issue with artificial intelligence. The car’s underlying AI technology, known as deep learning, has proved very powerful at solving problems in recent years, and it has been widely deployed for tasks like image captioning, voice recognition, and language translation. There is now hope that the same techniques will be able to diagnose deadly diseases, make million-dollar trading decisions, and do countless other things to transform whole industries.

Already, mathematical models are being used to help determine who makes parole, who’s approved for a loan, and who gets hired for a job. If you could get access to these mathematical models, it would be possible to understand their reasoning. But banks, the military, employers, and others are now turning their attention to more complex machine-learning approaches that could make automated decision-making altogether inscrutable. Deep learning, the most common of these approaches, represents a fundamentally different way to program computers. “It is a problem that is already relevant, and it’s going to be much more relevant in the future,” says Tommi Jaakkola, a professor at MIT who works on applications of machine learning. “Whether it’s an investment decision, a medical decision, or maybe a military decision, you don’t want to just rely on a ‘black box’ method.”

There’s already an argument that being able to interrogate an AI system about how it reached its conclusions is a fundamental legal right. Starting in the summer of 2018, the European Union may require that companies be able to give users an explanation for decisions that automated systems reach. This might be impossible, even for systems that seem relatively simple on the surface, such as the apps and websites that use deep learning to serve ads or recommend songs. The computers that run those services have programmed themselves, and they have done it in ways we cannot understand. Even the engineers who build these apps cannot fully explain their behavior.

This raises mind-boggling questions. As the technology advances, we might soon cross some threshold beyond which using AI requires a leap of faith. Sure, we humans can’t always truly explain our thought processes either—but we find ways to intuitively trust and gauge people. Will that also be possible with machines that think and make decisions differently from the way a human would? We’ve never before built machines that operate in ways their creators don’t understand. How well can we expect to communicate—and get along with—intelligent machines that could be unpredictable and inscrutable? These questions took me on a journey to the bleeding edge of research on AI algorithms, from Google to Apple and many places in between, including a meeting with one of the great philosophers of our time.

The artist Adam Ferriss created this image, and the one below, using Google Deep Dream, a program that adjusts an image to stimulate the pattern-recognition capabilities of a deep neural network. The pictures were produced using a mid-level layer of the neural network. ADAM FERRISS

In 2015, a research group at Mount Sinai Hospital in New York was inspired to apply deep learning to the hospital’s vast database of patient records. This data set features hundreds of variables on patients, drawn from their test results, doctor visits, and so on. The resulting program, which the researchers named Deep Patient, was trained using data from about 700,000 individuals, and when tested on new records, it proved incredibly good at predicting disease. Without any expert instruction, Deep Patient had discovered patterns hidden in the hospital data that seemed to indicate when people were on the way to a wide range of ailments, including cancer of the liver. There are a lot of methods that are “pretty good” at predicting disease from a patient’s records, says Joel Dudley, who leads the Mount Sinai team. But, he adds, “this was just way better.”

At the same time, Deep Patient is a bit puzzling. It appears to anticipate the onset of psychiatric disorders like schizophrenia surprisingly well. But since schizophrenia is notoriously difficult for physicians to predict, Dudley wondered how this was possible. He still doesn’t know. The new tool offers no clue as to how it does this. If something like Deep Patient is actually going to help doctors, it will ideally give them the rationale for its prediction, to reassure them that it is accurate and to justify, say, a change in the drugs someone is being prescribed. “We can build these models,” Dudley says ruefully, “but we don’t know how they work.”

Artificial intelligence hasn’t always been this way. From the outset, there were two schools of thought regarding how understandable, or explainable, AI ought to be. Many thought it made the most sense to build machines that reasoned according to rules and logic, making their inner workings transparent to anyone who cared to examine some code. Others felt that intelligence would more easily emerge if machines took inspiration from biology, and learned by observing and experiencing. This meant turning computer programming on its head. Instead of a programmer writing the commands to solve a problem, the program generates its own algorithm based on example data and a desired output. The machine-learning techniques that would later evolve into today’s most powerful AI systems followed the latter path: the machine essentially programs itself.

At first this approach was of limited practical use, and in the 1960s and ’70s it remained largely confined to the fringes of the field. Then the computerization of many industries and the emergence of large data sets renewed interest. That inspired the development of more powerful machine-learning techniques, especially new versions of one known as the artificial neural network. By the 1990s, neural networks could automatically digitize handwritten characters.

But it was not until the start of this decade, after several clever tweaks and refinements, that very large—or “deep”—neural networks demonstrated dramatic improvements in automated perception. Deep learning is responsible for today’s explosion of AI. It has given computers extraordinary powers, like the ability to recognize spoken words almost as well as a person could, a skill too complex to code into the machine by hand. Deep learning has transformed computer vision and dramatically improved machine translation. It is now being used to guide all sorts of key decisions in medicine, finance, manufacturing—and beyond.


The workings of any machine-learning technology are inherently more opaque, even to computer scientists, than a hand-coded system. This is not to say that all future AI techniques will be equally unknowable. But by its nature, deep learning is a particularly dark black box.

You can’t just look inside a deep neural network to see how it works. A network’s reasoning is embedded in the behavior of thousands of simulated neurons, arranged into dozens or even hundreds of intricately interconnected layers. The neurons in the first layer each receive an input, like the intensity of a pixel in an image, and then perform a calculation before outputting a new signal. These outputs are fed, in a complex web, to the neurons in the next layer, and so on, until an overall output is produced. Plus, there is a process known as back-propagation that tweaks the calculations of individual neurons in a way that lets the network learn to produce a desired output.
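The structure just described can be sketched in a few lines of NumPy: pixel-like inputs pass through weighted layers and nonlinearities, and back-propagation nudges every weight toward a desired output. This is a generic textbook toy, not any production system; the sizes, learning rate, and target are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Tiny network: 4 input "pixels" -> 3 hidden neurons -> 1 output.
W1, b1 = rng.normal(size=(3, 4)), np.zeros(3)
W2, b2 = rng.normal(size=(1, 3)), np.zeros(1)

x = np.array([0.2, 0.9, 0.1, 0.7])   # input intensities
y = np.array([1.0])                  # desired output
lr = 0.5

for step in range(1000):
    # Forward pass: each layer outputs a new signal fed to the next layer.
    h = sigmoid(W1 @ x + b1)
    out = sigmoid(W2 @ h + b2)

    # Backward pass (back-propagation): tweak each weight to reduce the error.
    d_out = (out - y) * out * (1 - out)
    d_h = (W2.T @ d_out) * h * (1 - h)
    W2 -= lr * np.outer(d_out, h); b2 -= lr * d_out
    W1 -= lr * np.outer(d_h, x);   b1 -= lr * d_h

print(f"final output {out[0]:.3f}, target {y[0]}")
```

Even in this four-pixel toy, the "reasoning" lives in the numeric values of the weights rather than in any human-readable rule, which is exactly the opacity problem the article describes at scale.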

The many layers in a deep network enable it to recognize things at different levels of abstraction. In a system designed to recognize dogs, for instance, the lower layers recognize simple things like outlines or color; higher layers recognize more complex stuff like fur or eyes; and the topmost layer identifies it all as a dog. The same approach can be applied, roughly speaking, to other inputs that lead a machine to teach itself: the sounds that make up words in speech, the letters and words that create sentences in text, or the steering-wheel movements required for driving.

Ingenious strategies have been used to try to capture and thus explain in more detail what’s happening in such systems. In 2015, researchers at Google modified a deep-learning-based image recognition algorithm so that instead of spotting objects in photos, it would generate or modify them. By effectively running the algorithm in reverse, they could discover the features the program uses to recognize, say, a bird or building. The resulting images, produced by a project known as Deep Dream, showed grotesque, alien-like animals emerging from clouds and plants, and hallucinatory pagodas blooming across forests and mountain ranges. The images proved that deep learning need not be entirely inscrutable; they revealed that the algorithms home in on familiar visual features like a bird’s beak or feathers. But the images also hinted at how different deep learning is from human perception, in that it might make something out of an artifact that we would know to ignore. Google researchers noted that when its algorithm generated images of a dumbbell, it also generated a human arm holding it. The machine had concluded that an arm was part of the thing.

Further progress has been made using ideas borrowed from neuroscience and cognitive science. A team led by Jeff Clune, an assistant professor at the University of Wyoming, has employed the AI equivalent of optical illusions to test deep neural networks. In 2015, Clune’s group showed how certain images could fool such a network into perceiving things that aren’t there, because the images exploit the low-level patterns the system searches for. One of Clune’s collaborators, Jason Yosinski, also built a tool that acts like a probe stuck into a brain. His tool targets any neuron in the middle of the network and searches for the image that activates it the most. The images that turn up are abstract (imagine an impressionistic take on a flamingo or a school bus), highlighting the mysterious nature of the machine’s perceptual abilities.
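A Yosinski-style probe can be sketched with gradient ascent on the input (an illustrative PyTorch toy, not the actual published tool): pick one unit inside a small convnet, then repeatedly adjust a random starting image so that unit’s activation grows, revealing the pattern that excites it most. The network here is untrained and stands in for a real model.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# A toy convolutional network standing in for a trained model.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
)
model.eval()

# Start from a random image and maximize the mean activation of one channel.
image = torch.rand(1, 3, 64, 64, requires_grad=True)
target_channel = 5
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    activation = model(image)[0, target_channel]
    loss = -activation.mean()          # ascend on the chosen unit's activation
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        image.clamp_(0.0, 1.0)         # keep pixel values in a valid range

print("final mean activation:", float(model(image)[0, target_channel].mean()))
```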

We need more than a glimpse of AI’s thinking, however, and there is no easy solution. It is the interplay of calculations inside a deep neural network that is crucial to higher-level pattern recognition and complex decision-making, but those calculations are a quagmire of mathematical functions and variables. “If you had a very small neural network, you might be able to understand it,” Jaakkola says. “But once it becomes very large, and it has thousands of units per layer and maybe hundreds of layers, then it becomes quite un-understandable.”

In the office next to Jaakkola is Regina Barzilay, an MIT professor who is determined to apply machine learning to medicine. She was diagnosed with breast cancer a couple of years ago, at age 43. The diagnosis was shocking in itself, but Barzilay was also dismayed that cutting-edge statistical and machine-learning methods were not being used to help with oncological research or to guide patient treatment. She says AI has huge potential to revolutionize medicine, but realizing that potential will mean going beyond just medical records. She envisions using more of the raw data that she says is currently underutilized: “imaging data, pathology data, all this information.”

After she finished cancer treatment last year, Barzilay and her students began working with doctors at Massachusetts General Hospital to develop a system capable of mining pathology reports to identify patients with specific clinical characteristics that researchers might want to study. However, Barzilay understood that the system would need to explain its reasoning. So, together with Jaakkola and a student, she added a step: the system extracts and highlights snippets of text that are representative of a pattern it has discovered. Barzilay and her students are also developing a deep-learning algorithm capable of finding early signs of breast cancer in mammogram images, and they aim to give this system some ability to explain its reasoning, too. “You really need to have a loop where the machine and the human collaborate,” Barzilay says.

The U.S. military is pouring billions into projects that will use machine learning to pilot vehicles and aircraft, identify targets, and help analysts sift through huge piles of intelligence data. Here more than anywhere else, even more than in medicine, there is little room for algorithmic mystery, and the Department of Defense has identified explainability as a key stumbling block.

David Gunning, a program manager at the Defense Advanced Research Projects Agency, is overseeing the aptly named Explainable Artificial Intelligence program. A silver-haired veteran of the agency who previously oversaw the DARPA project that eventually led to the creation of Siri, Gunning says automation is creeping into countless areas of the military. Intelligence analysts are testing machine learning as a way of identifying patterns in vast amounts of surveillance data. Many autonomous ground vehicles and aircraft are being developed and tested. But soldiers probably won’t feel comfortable in a robotic tank that doesn’t explain itself to them, and analysts will be reluctant to act on information without some reasoning. “It’s often the nature of these machine-learning systems that they produce a lot of false alarms, so an intel analyst really needs extra help to understand why a recommendation was made,” Gunning says.

This March, DARPA chose 13 projects from academia and industry for funding under Gunning’s program. Some of them could build on work led by Carlos Guestrin, a professor at the University of Washington. He and his colleagues have developed a way for machine-learning systems to provide a rationale for their outputs. Essentially, under this method a computer automatically finds a few examples from a data set and serves them up in a short explanation. A system designed to classify an e-mail message as coming from a terrorist, for example, might use many millions of messages in its training and decision-making. But using the Washington team’s approach, it could highlight certain keywords found in a message. Guestrin’s group has also devised ways for image recognition systems to hint at their reasoning by highlighting the parts of an image that were most significant.
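A bare-bones version of this kind of rationale (a sketch in the spirit of the Washington group’s perturbation-based approach, not their actual code) deletes each word of a message in turn, re-scores the message with the classifier, and highlights the words whose removal changes the score the most. The toy keyword scorer below is a stand-in for a real classifier.

```python
def explain_by_deletion(message, score_fn, top_k=3):
    """Rank words by how much removing each one changes the classifier score."""
    words = message.split()
    base = score_fn(message)
    impact = []
    for i, word in enumerate(words):
        reduced = " ".join(words[:i] + words[i + 1:])
        impact.append((abs(base - score_fn(reduced)), word))
    return sorted(impact, reverse=True)[:top_k]

# Stand-in classifier: a made-up keyword score, not a real detector.
SUSPICIOUS = {"attack": 0.6, "target": 0.3, "meeting": 0.1}

def toy_score(text):
    return sum(SUSPICIOUS.get(w.lower().strip(".,"), 0.0) for w in text.split())

msg = "The meeting about the target is moved, plan the attack review for Friday."
for delta, word in explain_by_deletion(msg, toy_score):
    print(f"{word!r}: removing it changes the score by {delta:.2f}")
```

The explanation is deliberately simplified, which is also the drawback the next paragraph raises: the highlighted words are a summary of the model’s behavior, not the behavior itself.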

One drawback to this approach and others like it, such as Barzilay’s, is that the explanations provided will always be simplified, meaning some vital information may be lost along the way. “We haven’t achieved the whole dream, which is where AI has a conversation with you, and it is able to explain,” says Guestrin. “We’re a long way from having truly interpretable AI.”

It doesn’t have to be a high-stakes situation like cancer diagnosis or military maneuvers for this to become an issue. Knowing AI’s reasoning is also going to be crucial if the technology is to become a common and useful part of our daily lives. Tom Gruber, who leads the Siri team at Apple, says explainability is a key consideration for his team as it tries to make Siri a smarter and more capable virtual assistant. Gruber wouldn’t discuss specific plans for Siri’s future, but it’s easy to imagine that if you receive a restaurant recommendation from Siri, you’ll want to know what the reasoning was. Ruslan Salakhutdinov, director of AI research at Apple and an associate professor at Carnegie Mellon University, sees explainability as the core of the evolving relationship between humans and intelligent machines. “It’s going to introduce trust,” he says.

Just as many aspects of human behavior are impossible to explain in detail, perhaps it won’t be possible for AI to explain everything it does. “Even if somebody can give you a reasonable-sounding explanation [for his or her actions], it probably is incomplete, and the same could very well be true for AI,” says Clune, of the University of Wyoming. “It might just be part of the nature of intelligence that only part of it is exposed to rational explanation. Some of it is just instinctual, or subconscious, or inscrutable.”

If that’s so, then at some stage we may have to simply trust AI’s judgment or do without using it. Likewise, that judgment will have to incorporate social intelligence. Just as society is built upon a contract of expected behavior, we will need to design AI systems to respect and fit with our social norms. If we are to create robot tanks and other killing machines, it is important that their decision-making be consistent with our ethical judgments.

To probe these metaphysical concepts, I went to Tufts University to meet with Daniel Dennett, a renowned philosopher and cognitive scientist who studies consciousness and the mind. A chapter of Dennett’s latest book, From Bacteria to Bach and Back, an encyclopedic treatise on consciousness, suggests that a natural part of the evolution of intelligence itself is the creation of systems capable of performing tasks their creators do not know how to do. “The question is, what accommodations do we have to make to do this wisely—what standards do we demand of them, and of ourselves?” he tells me in his cluttered office on the university’s idyllic campus.

He also has a word of warning about the quest for explainability. “I think by all means if we’re going to use these things and rely on them, then let’s get as firm a grip on how and why they’re giving us the answers as possible,” he says. But since there may be no perfect answer, we should be as cautious of AI explanations as we are of each other’s—no matter how clever a machine seems. “If it can’t do better than us at explaining what it’s doing,” he says, “then don’t trust it.”

This story is part of our May/June 2017 Issue

 

To see the story at its source, including more AI-generated images, use this link: https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/


Why our brains may be 100 times more powerful than believed

The dendrites in our brain have been underestimated for 60 years, says a new study (Credit: vitstudio/Depositphotos)

A new study out of the University of California Los Angeles (UCLA) has found that one part of the neurons in our brains is more active than previously revealed. The finding implies that our brains are both analog and digital computers and could lead to better ways to treat neurological disorders.

The focus of the study was the dendrites, long branch-like structures that attach to a roundish body called the soma to form neurons. It was previously believed that dendrites were nothing more than conduits that sent spikes of electrical activity generated in the soma to other neurons. But the study has shown that the dendrites themselves are highly active, sending spikes of their own at a rate 10 times that previously believed.

The finding runs counter to the long-held belief that somatic spikes were the main way we learn and form memories and perceptions.

“Dendrites make up more than 90 percent of neural tissue,” said UCLA neurophysicist Mayank Mehta, the study’s senior author. “Knowing they are much more active than the soma fundamentally changes the nature of our understanding of how the brain computes information. It may pave the way for understanding and treating neurological disorders, and for developing brain-like computers.”

The researchers also found that unlike the spikes of electrical activity generated by the somas, the dendrites could put out longer-lasting voltages that in their sum total were actually more powerful than the somatic spikes. They say the spikes are like digital computing in that they are all-or-nothing events, while the dendritic flows are akin to analog computing.

“We found that dendrites are hybrids that do both analog and digital computations, which are therefore fundamentally different from purely digital computers, but somewhat similar to quantum computers that are analog,” said Mehta. “A fundamental belief in neuroscience has been that neurons are digital devices. They either generate a spike or not. These results show that the dendrites do not behave purely like a digital device. Dendrites do generate digital, all-or-none spikes, but they also show large analog fluctuations that are not all or none. This is a major departure from what neuroscientists have believed for about 60 years.”
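The hybrid picture can be illustrated with a toy simulation (conceptual only, not the UCLA model): a compartment whose voltage fluctuates continuously under noisy input (the analog part) but which also emits an all-or-none spike and resets whenever a threshold is crossed (the digital part). Time constants, thresholds, and drive levels are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
dt, steps = 0.001, 2000            # 1 ms steps, 2 s of simulated time
tau, threshold, v_reset = 0.02, 1.0, 0.0

v = 0.0
analog_trace, spikes = [], []
for t in range(steps):
    drive = 1.2 + 0.5 * rng.normal()          # noisy synaptic input
    v += dt * (-v / tau + drive / tau)        # graded (analog) voltage fluctuation
    if v >= threshold:                        # all-or-none (digital) event
        spikes.append(t * dt)
        v = v_reset
    analog_trace.append(v)

print(f"{len(spikes)} spikes; mean subthreshold voltage {np.mean(analog_trace):.2f}")
```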

Mehta adds that because dendrites are about 100 times larger in volume than somas, it’s possible that our brains have 100 times more capacity to compute information than previously believed.


A neuron with the dendrites shown in green (Credit: Shelly Halpain/UC San Diego)

In making their discovery, the UCLA team implanted electrodes in the brains of rats, placing them next to dendrites rather than inside them. This was a departure from previous work, in which the sensors went straight into the dendrites, killing them and making their activity impossible to measure. They found that the dendrites were five times more active than the somas when the rats were sleeping, and 10 times more active when they were awake and moving about.

The discovery shows that learning likely takes place with more flexibility than previously believed.

“Many prior models assume that learning occurs when the cell bodies of two neurons are active at the same time,” said Jason Moore, a UCLA postdoctoral researcher and the study’s first author. “Our findings indicate that learning may take place when the input neuron is active at the same time that a dendrite is active — and it could be that different parts of dendrites will be active at different times, which would suggest a lot more flexibility in how learning can occur within a single neuron.”

“Due to technological difficulties, research in brain function has largely focused on the cell body,” added Mehta. “But we have discovered the secret lives of neurons, especially in the extensive neuronal branches. Our results substantially change our understanding of how neurons compute.”

The research has been published in the journal Science.

Source: UCLA

Link to article: http://newatlas.com/brains-more-powerful/48357/


Electronic synapses that can learn: First step towards an artificial brain?

Artist’s impression of the electronic synapse: the particles represent electrons circulating through oxide, by analogy with neurotransmitters in biological synapses. The flow of electrons depends on the oxide’s ferroelectric domain structure, which is controlled by electric voltage pulses. Credit: © Sören Boyn / CNRS/Thales physics joint research unit.

Researchers from the CNRS, Thales, and the Universities of Bordeaux, Paris-Sud, and Evry have created an artificial synapse capable of learning autonomously. They were also able to model the device, which is essential for developing more complex circuits. The research was published in Nature Communications on 3 April 2017.

One of the goals of biomimetics is to take inspiration from the functioning of the brain in order to design increasingly intelligent machines. This principle is already at work in information technology, in the form of the algorithms used for completing certain tasks, such as image recognition; this, for instance, is what Facebook uses to identify photos. However, the procedure consumes a lot of energy. Vincent Garcia (Unité mixte de physique CNRS/Thales) and his colleagues have just taken a step forward in this area by creating directly on a chip an artificial synapse that is capable of learning. They have also developed a physical model that explains this learning capacity. This discovery opens the way to creating a network of synapses and hence intelligent systems requiring less time and energy.

Our brain’s learning process is linked to our synapses, which serve as connections between our neurons. The more the synapse is stimulated, the more the connection is reinforced and learning improved. Researchers took inspiration from this mechanism to design an artificial synapse, called a memristor. This electronic nanocomponent consists of a thin ferroelectric layer sandwiched between two electrodes, and whose resistance can be tuned using voltage pulses similar to those in neurons. If the resistance is low the synaptic connection will be strong, and if the resistance is high the connection will be weak. This capacity to adapt its resistance enables the synapse to learn.
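A minimal software sketch of the memristor behaviour described above (illustrative, not the CNRS/Thales device physics): a conductance that is nudged up by positive voltage pulses and down by negative ones, saturating between a weak and a strong state, with low resistance corresponding to a strong synaptic connection. The pulse rule and numbers are assumptions.

```python
class ToyMemristor:
    """Illustrative two-terminal synapse: conductance tuned by voltage pulses."""

    def __init__(self, g_min=1e-6, g_max=1e-3, g_init=1e-5):
        self.g_min, self.g_max = g_min, g_max   # conductance bounds, siemens
        self.g = g_init

    def apply_pulse(self, volts, step=0.1):
        # Positive pulses strengthen the connection (lower resistance),
        # negative pulses weaken it; changes saturate at the bounds.
        if volts > 0:
            self.g += step * (self.g_max - self.g)
        elif volts < 0:
            self.g -= step * (self.g - self.g_min)

    def current(self, read_volts):
        return self.g * read_volts   # Ohmic read-out at small voltages

syn = ToyMemristor()
for _ in range(10):
    syn.apply_pulse(+2.0)            # repeated stimulation reinforces the synapse
print(f"conductance after training: {syn.g:.2e} S")
print(f"read current at 0.1 V: {syn.current(0.1):.2e} A")
```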

Although research focusing on these artificial synapses is central to the concerns of many laboratories, the functioning of these devices remained largely unknown. The researchers have succeeded, for the first time, in developing a physical model able to predict how they function. This understanding of the process will make it possible to create more complex systems, such as a series of artificial neurons interconnected by these memristors.

As part of the ULPEC H2020 European project, this discovery will be used for real-time shape recognition using an innovative camera: the pixels remain inactive, except when they see a change in the angle of vision. The data processing procedure will require less energy, and will take less time to detect the selected objects. The research involved teams from the CNRS/Thales physics joint research unit, the Laboratoire de l’intégration du matériau au système (CNRS/Université de Bordeaux/Bordeaux INP), the University of Arkansas (US), the Centre de nanosciences et nanotechnologies (CNRS/Université Paris-Sud), the Université d’Evry, and Thales.

 

Link to the article: https://www.sciencedaily.com/releases/2017/04/170403140249.htm?utm_source=feedburner&utm_medium=email&utm_campaign=Feed%3A+sciencedaily%2Ftop_news%2Ftop_science+%28ScienceDaily%3A+Top+Science+News%29

The future of AI is neuromorphic. Meet the scientists building digital ‘brains’ for your phone

Neuromorphic chips are being designed to specifically mimic the human brain – and they could soon replace CPUs

AI services like Apple’s Siri and others operate by sending your queries to faraway data centers, which send back responses. The reason they rely on cloud-based computing is that today’s electronics don’t come with enough computing power to run the processing-heavy algorithms needed for machine learning. The typical CPUs most smartphones use could never handle a system like Siri on the device. But Dr. Chris Eliasmith, a theoretical neuroscientist and co-CEO of Canadian AI startup Applied Brain Research, is confident that a new type of chip is about to change that.

“Many have suggested Moore’s law is ending and that means we won’t get ‘more compute’ cheaper using the same methods,” Eliasmith says. He’s betting on the proliferation of ‘neuromorphics’ — a type of computer chip that is not yet widely known but already being developed by several major chip makers.

Traditional CPUs process instructions based on “clocked time” – information is transmitted at regular intervals, as if managed by a metronome. By packing in digital equivalents of neurons, neuromorphics communicate in parallel (and without the rigidity of clocked time) using “spikes” – bursts of electric current that can be sent whenever needed. Just like our own brains, the chip’s neurons communicate by processing incoming flows of electricity – each neuron able to determine from the incoming spike whether to send current out to the next neuron.
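The standard software caricature of this event-driven behaviour is the leaky integrate-and-fire neuron (a generic textbook model, not any vendor’s chip): incoming current charges a “membrane” that slowly leaks, and a spike is emitted only when a threshold is crossed, rather than on every clock tick.

```python
def simulate_lif(input_current, dt=1e-3, tau=0.02, threshold=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron; returns the times (s) at which spikes occur."""
    v, spike_times = 0.0, []
    for i, current in enumerate(input_current):
        v += dt / tau * (current - v)      # leaky integration of incoming current
        if v >= threshold:                 # event-driven: a spike only when needed
            spike_times.append(i * dt)
            v = v_reset
    return spike_times

# 500 ms of constant drive followed by 500 ms of silence.
drive = [1.5] * 500 + [0.0] * 500
spikes = simulate_lif(drive)
print(f"{len(spikes)} spikes, first at {spikes[0]*1000:.0f} ms" if spikes else "no spikes")
```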

What makes this a big deal is that these chips require far less power to process AI algorithms. For example, one neuromorphic chip made by IBM contains five times as many transistors as a standard Intel processor, yet consumes only 70 milliwatts of power. An Intel processor would use anywhere from 35 to 140 watts, or up to 2000 times more power.

Eliasmith points out that neuromorphics aren’t new and that their designs have been around since the 80s. Back then, however, the designs required specific algorithms be baked directly into the chip. That meant you’d need one chip for detecting motion, and a different one for detecting sound. None of the chips acted as a general processor in the way that our own cortex does.

This was partly because there wasn’t any way for programmers to design algorithms that could do much with a general-purpose chip. So even as these brain-like chips were being developed, building algorithms for them remained a challenge.

Eliasmith and his team are keenly focused on building tools that would allow a community of programmers to deploy AI algorithms on these new cortical chips.

Central to these efforts is Nengo, a compiler that developers can use to build their own algorithms for AI applications that will operate on general-purpose neuromorphic hardware. Compilers are software tools that translate the code programmers write into the complex instructions that get hardware to actually do something. What makes Nengo useful is its use of the familiar Python programming language – known for its intuitive syntax – and its ability to deploy algorithms on many different hardware platforms, including neuromorphic chips. Pretty soon, anyone with an understanding of Python could be building sophisticated neural nets made for neuromorphic hardware.
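For a sense of what Nengo code looks like, the minimal model below builds one ensemble of spiking neurons that represents a sine-wave input and decodes it back out. It follows the library’s basic tutorial pattern, though exact API details may vary between Nengo versions.

```python
import numpy as np
import nengo

with nengo.Network() as model:
    stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))   # time-varying input signal
    ens = nengo.Ensemble(n_neurons=100, dimensions=1)    # population of spiking neurons
    nengo.Connection(stim, ens)                          # wire the input to the population
    probe = nengo.Probe(ens, synapse=0.01)               # record the decoded output

with nengo.Simulator(model) as sim:
    sim.run(1.0)                                         # simulate one second

print("decoded value at t = 1 s:", sim.data[probe][-1])
```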

“Things like vision systems, speech systems, motion control, and adaptive robotic controllers have already been built with Nengo,” Peter Suma, a trained computer scientist and the other CEO of Applied Brain Research, tells me.

Perhaps the most impressive system built using the compiler is Spaun, a project that in 2012 earned international praise for being the most complex brain model ever simulated on a computer. Spaun demonstrated that computers could be made to interact fluidly with the environment and perform human-like cognitive tasks like recognizing images and controlling a robot arm that writes down what it sees. The machine wasn’t perfect, but it was a stunning demonstration that computers could one day blur the line between human and machine cognition. Recently, by using neuromorphics, most of Spaun has been run 9,000 times faster, using less energy than it would on conventional CPUs – and by the end of 2017, all of Spaun will be running on neuromorphic hardware.

Eliasmith won NSERC’s John C. Polanyi Award for that project — Canada’s highest recognition for a breakthrough scientific achievement — and once Suma came across the research, the pair joined forces to commercialize these tools.

“While Spaun shows us a way towards one day building fluidly intelligent reasoning systems, in the nearer term neuromorphics will enable many types of context aware AIs,” says Suma. Suma points out that while today’s AIs like Siri remain offline until explicitly called into action, we’ll soon have artificial agents that are ‘always on’ and ever-present in our lives.

“Imagine a SIRI that listens to and sees all of your conversations and interactions. You’ll be able to ask it for things like, ‘Who did I have that conversation with about doing the launch for our new product in Tokyo?’ or ‘What was that idea for my wife’s birthday gift that Melissa suggested?’” he says.

When I raised concerns that some company might then have an uninterrupted window into even the most intimate parts of my life, I’m reminded that because the AI would be processed locally on the device, there’s no need for that information to touch a server owned by a big company. And for Eliasmith, this ‘always on’ component is a necessary step towards true machine cognition. “The most fundamental difference between most available AI systems of today and the biological intelligent systems we are used to, is the fact that the latter always operate in real-time. Bodies and brains are built to work with the physics of the world,” he says.

Already, major efforts across the IT industry are heating up to get their AI services into the hands of users. Companies like Apple, Facebook, Amazon, and even Samsung, are developing conversational assistants they hope will one day become digital helpers.

With the rise of neuromorphics, and tools like Nengo, we could soon have AIs capable of exhibiting a stunning level of natural intelligence – right on our phones.

This article first appeared in Wired Magazine. Here’s a link to the original: http://www.wired.co.uk/article/ai-neuromorphic-chips-brains

Artificial Synapse Developed for Neural Networks


For all the improvements in computer technology over the years, we still struggle to recreate the low-energy, elegant processing of the human brain. Now, researchers at Stanford University and Sandia National Laboratories have made an advance that could help computers mimic one piece of the brain’s efficient design — an artificial version of the space over which neurons communicate, called a synapse.

“It works like a real synapse but it’s an organic electronic device that can be engineered,” said Alberto Salleo, associate professor of materials science and engineering at Stanford and senior author of the paper. “It’s an entirely new family of devices because this type of architecture has not been shown before. For many key metrics, it also performs better than anything that’s been done before with inorganics.”

The new artificial synapse, reported in the Feb. 20 issue of Nature Materials, mimics the way synapses in the brain learn through the signals that cross them. This offers significant energy savings over traditional computing, which involves separately processing information and then storing it in memory. Here, the processing creates the memory.

This synapse may one day be part of a more brain-like computer, which could be especially beneficial for computing that works with visual and auditory signals. Examples of this are seen in voice-controlled interfaces and driverless cars. Past efforts in this field have produced high-performance neural networks supported by artificially intelligent algorithms but these are still distant imitators of the brain that depend on energy-consuming traditional computer hardware.

Building a brain

When we learn, electrical signals are sent between neurons in our brain. The most energy is needed the first time a synapse is traversed. Every time afterward, the connection requires less energy. This is how synapses efficiently facilitate both learning something new and remembering what we’ve learned. The artificial synapse, unlike most other versions of brain-like computing, also fulfills these two tasks simultaneously, and does so with substantial energy savings.

Alberto Salleo, associate professor of materials science and engineering, with graduate student Scott Keene characterizing the electrochemical properties of an artificial synapse for neural network computing. They are part of a team that has created the new device.  Credit: L.A. Cicero

“Deep learning algorithms are very powerful but they rely on processors to calculate and simulate the electrical states and store them somewhere else, which is inefficient in terms of energy and time,” said Yoeri van de Burgt, former postdoctoral scholar in the Salleo lab and lead author of the paper. “Instead of simulating a neural network, our work is trying to make a neural network.”

The artificial synapse is based on a battery design. It consists of two thin, flexible films with three terminals, connected by an electrolyte of salty water. The device works as a transistor, with one of the terminals controlling the flow of electricity between the other two.

Much as a neural pathway in the brain is reinforced through learning, the researchers program the artificial synapse by discharging and recharging it repeatedly. Through this training, they have been able to predict, to within 1 percent uncertainty, what voltage will be required to get the synapse to a specific electrical state; once there, it remains at that state. In other words, unlike a common computer, where you save your work to the hard drive before you turn it off, the artificial synapse can recall its programming without any additional actions or parts.
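
As a purely illustrative toy, not the device physics from the Nature Materials paper, the behaviour described above (repeated charge/discharge pulses nudging the device toward a target state that then persists without refresh) can be sketched in a few lines of Python:

```python
# Toy analog-synapse model (hypothetical illustration only): each programming
# pulse nudges an internal state toward the applied voltage, and the state is
# retained once pulsing stops, i.e. processing and memory happen in one place.
class ToySynapse:
    def __init__(self, state=0.0):
        self.state = state              # stands in for the device's conductance level

    def pulse(self, voltage, step=0.01):
        # One charge/discharge pulse: move the state a small step toward `voltage`.
        self.state += step * (voltage - self.state)

    def read(self):
        # Non-destructive read: no refresh or separate memory transfer needed.
        return self.state

syn = ToySynapse()
target = 0.5
for _ in range(500):                    # repeated "training" pulses
    syn.pulse(target)
print(f"programmed state: {syn.read():.4f} (target {target})")
```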

Testing a network of artificial synapses

Only one artificial synapse has been produced so far, but researchers at Sandia used 15,000 measurements from experiments on that synapse to simulate how an array of them would work in a neural network. They tested the simulated network’s ability to recognize handwritten digits 0 through 9. Tested on three datasets, the simulated array identified the digits with an accuracy between 93 and 97 percent.
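
For readers who want to see the task itself, the sketch below runs a conventional software classifier on scikit-learn’s built-in 8x8 handwritten-digit set. It is only an analogue of the benchmark described, not the Sandia array simulation:

```python
# Conventional software analogue of the digit-recognition benchmark (0-9),
# using scikit-learn. Illustrative only; not the Sandia crossbar simulation.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()                                   # 8x8 images of digits 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data / 16.0, digits.target, test_size=0.25, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))       # typically around 0.95
```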

Although this task would be relatively simple for a person, traditional computers have a difficult time interpreting visual and auditory signals.

“More and more, the kinds of tasks that we expect our computing devices to do require computing that mimics the brain because using traditional computing to perform these tasks is becoming really power hungry,” said A. Alec Talin, distinguished member of technical staff at Sandia National Laboratories in Livermore, California, and senior author of the paper. “We’ve demonstrated a device that’s ideal for running these types of algorithms and that consumes a lot less power.”

This device is extremely well suited for the kind of signal identification and classification that traditional computers struggle to perform. Whereas digital transistors can be in only two states, such as 0 and 1, the researchers successfully programmed 500 states in the artificial synapse, which is useful for neuron-type computation models. In switching from one state to another they used about one-tenth as much energy as a state-of-the-art computing system needs in order to move data from the processing unit to the memory.

This, however, means they are still using about 10,000 times as much energy as the minimum a biological synapse needs in order to fire. The researchers are hopeful that they can attain neuron-level energy efficiency once they test the artificial synapse in smaller devices.
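
Taking the quoted figures at face value, a little order-of-magnitude bookkeeping (my own arithmetic, not a claim from the paper) puts them in perspective:

```latex
% 500 distinguishable states correspond to roughly log2(500) ~ 9 bits per device,
% versus 1 bit for a binary transistor:
\log_2 500 \approx 8.97 \ \text{bits}

% Combining the two energy ratios quoted above:
E_{\text{synapse}} \approx \tfrac{1}{10}\, E_{\text{data move}}
\quad\text{and}\quad
E_{\text{synapse}} \approx 10^{4}\, E_{\text{bio}}
\;\Rightarrow\;
E_{\text{data move}} \approx 10^{5}\, E_{\text{bio}}
```

In other words, by these numbers, moving data between processor and memory in a conventional system costs roughly five orders of magnitude more energy than the biological minimum for firing a synapse.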

Organic potential

Every part of the device is made of inexpensive organic materials. These aren’t found in nature but they are largely composed of hydrogen and carbon and are compatible with the brain’s chemistry. Cells have been grown on these materials and they have even been used to make artificial pumps for neural transmitters. The voltages applied to train the artificial synapse are also the same as those that move through human neurons.

All this means it’s possible that the artificial synapse could communicate with live neurons, leading to improved brain-machine interfaces. The softness and flexibility of the device also lends itself to being used in biological environments. Before any applications to biology, however, the team plans to build an actual array of artificial synapses for further research and testing.

Additional Stanford co-authors of this work include co-lead author Ewout Lubberman, also of the University of Groningen in the Netherlands, Scott T. Keene and Grégorio C. Faria, also of Universidade de São Paulo, in Brazil. Sandia National Laboratories co-authors include Elliot J. Fuller and Sapan Agarwal in Livermore and Matthew J. Marinella in Albuquerque, New Mexico. Salleo is an affiliate of the Stanford Precourt Institute for Energy and the Stanford Neurosciences Institute. Van de Burgt is now an assistant professor in microsystems and an affiliate of the Institute for Complex Molecular Studies (ICMS) at Eindhoven University of Technology in the Netherlands.

This research was funded by the National Science Foundation, the Keck Faculty Scholar Funds, the Neurofab at Stanford, the Stanford Graduate Fellowship, Sandia’s Laboratory-Directed Research and Development Program, the U.S. Department of Energy, the Holland Scholarship, the University of Groningen Scholarship for Excellent Students, the Hendrik Muller National Fund, the Schuurman Schimmel-van Outeren Foundation, the Foundation of Renswoude (The Hague and Delft), the Marco Polo Fund, the Instituto Nacional de Ciência e Tecnologia/Instituto Nacional de Eletrônica Orgânica in Brazil, the Fundação de Amparo à Pesquisa do Estado de São Paulo and the Brazilian National Council.


Story Source:

Materials provided by Stanford University. Original written by Taylor Kubota. Note: Content may be edited for style and length.

Here is a link to the source article:  https://www.sciencedaily.com/releases/2017/02/170221142046.htm?utm_source=feedburner&utm_medium=email&utm_campaign=Feed%3A+sciencedaily%2Ftop_news%2Ftop_science+%28ScienceDaily%3A+Top+Science+News%29

Brain Plasticity: How Adult Born Neurons Get Wired

Image shows a brain. The study opens the door to look at how this redistribution of synapses between the old and new neurons helps the dentate gyrus function. [NeuroscienceNews.com image is for illustrative purposes only.]

Summary: Researchers report adult neurogenesis not only helps increase the number of cells in a neural network, it also promotes plasticity in the existing network. Additionally, they have identified the role the Bax gene plays in synaptic pruning. Source: University of Alabama at Birmingham.

One goal in neurobiology is to understand how the flow of electrical signals through brain circuits gives rise to perception, action, thought, learning and memories.

Linda Overstreet-Wadiche, Ph.D., and Jacques Wadiche, Ph.D., both associate professors in the University of Alabama at Birmingham Department of Neurobiology, have published their latest contribution in this effort, focused on a part of the brain that helps form memories — the dentate gyrus of the hippocampus.

The dentate gyrus is one of just two areas in the brain where new neurons are continuously formed in adults. When a new granule cell neuron is made in the dentate gyrus, it needs to get ‘wired in’ by forming synapses, or connections, in order to contribute to circuit function. Dentate granule cells are part of a circuit that receives electrical signals from the entorhinal cortex, a cortical brain region that processes sensory and spatial input from other areas of the brain. By combining this sensory and spatial information, the dentate gyrus can generate a unique memory of an experience.

Overstreet-Wadiche and UAB colleagues posed a basic question: Since the number of neurons in the dentate gyrus increases by neurogenesis while the number of neurons in the cortex remains the same, does the brain create additional synapses from the cortical neurons to the new granule cells, or do some cortical neurons transfer their connections from mature granule cells to the new granule cells?

Their answer, garnered through a series of electrophysiology, dendritic spine density and immunohistochemistry experiments with mice genetically altered either to produce more new neurons or to kill off newborn neurons, supports the second model: some of the cortical neurons transfer their connections from mature granule cells to the new granule cells.

This opens the door to look at how this redistribution of synapses between the old and new neurons helps the dentate gyrus function. And it opens up tantalizing questions. Does this redistribution disrupt existing memories? How does this redistribution relate to the beneficial effects of exercise, which is a natural way to increase neurogenesis?

“Over the last 10 years there has been evidence supporting a redistribution of synapses between old and new neurons, possibly by a competitive process that the new cells tend to ‘win,’” Overstreet-Wadiche said. “Our findings are important because they directly demonstrate that, in order for new cells to win connections, the old cells lose connections. So, the process of adult neurogenesis not only adds new cells to the network, it promotes plasticity of the existing network.”

“It will be interesting to explore how neurogenesis-induced plasticity contributes to the function of this brain region,” she continued. “Neurogenesis is typically associated with improved acquisition of new information, but some studies have also suggested that neurogenesis promotes ‘forgetting’ of existing memories.”

The researchers also unexpectedly found that the Bax gene, known for its role in apoptosis, appears to also play a role in synaptic pruning in the dentate gyrus.

“There is mounting evidence that the cellular machinery that controls cell death also controls the strength and number of synaptic connections,” Overstreet-Wadiche said. “The appropriate balance of synapse strengthening and weakening, collectively termed synaptic plasticity, is critical for appropriate brain function. Hence, understanding how synaptic pruning occurs may shed light on neurodevelopmental disorders and on neurodegenerative diseases in which synaptic pruning gone awry may contribute to pathological synapse loss.”

ABOUT THIS NEUROSCIENCE RESEARCH ARTICLE

All of the work was performed in the Department of Neurobiology at UAB. In addition to Overstreet-Wadiche and Wadiche, co-authors of the paper, “Adult born neurons modify excitatory synaptic transmission to existing neurons,” published in eLife, are Elena W. Adlaf, Ryan J. Vaden, Anastasia J. Niver, Allison F. Manuel, Vincent C. Onyilo, Matheus T. Araujo, Cristina V. Dieni, Hai T. Vo and Gwendalyn D. King.

Much of the data came from the doctoral thesis research of Adlaf, a former UAB Neuroscience graduate student who is now a postdoctoral fellow at Duke University.

Funding: Funding for this research came from Civitan International Emerging Scholars awards, and National Institutes of Health awards or grants NS098553, NS064025, NS065920 and NS047466.

Source: Jeff Hansen – University of Alabama at Birmingham
Image Source: NeuroscienceNews.com image is in the public domain.
Original Research: Full open access research for “Adult-born neurons modify excitatory synaptic transmission to existing neurons” by Elena W Adlaf, Ryan J Vaden, Anastasia J Niver, Allison F Manuel, Vincent C Onyilo, Matheus T Araujo, Cristina V Dieni, Hai T Vo, Gwendalyn D King, Jacques I Wadiche, and Linda Overstreet-Wadiche in eLife. Published online January 30 2017 doi:10.7554/eLife.19886

A Brain-Wide Chemical Signal that Enhances Memory

Summary: Researchers report that release of the chemical acetylcholine is coordinated across the brain and peaks when the brain is engaged in demanding mental tasks, acting as a brain-wide signal that enhances memory encoding.

Source: University of Bristol.

Image shows the acetylcholine pathway in the brain. NeuroscienceNews.com image is for illustrative purposes only and is credited to BruceBlaus.

The new discoveries indicate how current drugs used in the treatment of Alzheimer’s, designed to boost this chemical signal, counter the symptoms of dementia. The results could also lead to new ways of enhancing cognitive function to counteract the effects of diseases such as Alzheimer’s and schizophrenia, as well as enhancing memory in healthy people.

The team of medical researchers at the Universities of Bristol and Maynooth, in collaboration with the pharmaceutical company Eli Lilly & Company, studied how the release of the chemical acetylcholine fluctuates during the day and found that release is at its highest when the brain is engaged with more challenging mental tasks. The fluctuations are coordinated across the brain, indicating a brain-wide signal to increase mental capacity, with specific spikes in acetylcholine release occurring at particularly arousing times, such as gaining a reward.

Professor Jack Mellor, lead researcher from Bristol’s Centre for Synaptic Plasticity, said: “These findings are about how brain state is regulated and updated on a rapid basis to optimise the encoding of memory and cognitive performance. Many current and future drug therapies for a wide range of brain disorders including Alzheimer’s and schizophrenia are designed to target chemical systems such as acetylcholine so understanding when they are active and therefore how they function will be crucial for their future development and clinical use.”

Professor Lowry, who led the team at Maynooth University, added: “This work highlights the importance of cross-disciplinary basic research between universities and industry. Using real-time biosensor technology to improve our understanding of the role of important neurochemicals associated with memory is very exciting and timely, particularly given the increasing multifaceted societal burden caused by memory affecting neurological disorders such as dementia.”

Primary author Dr Leonor Ruivo added: “This collaboration gave us access to a new generation of tools which, in combination with other powerful techniques, will allow researchers to build on our findings and provide a much more detailed map of the action of brain chemicals in health, disease and therapeutic intervention.”

ABOUT THIS NEUROSCIENCE RESEARCH ARTICLE

The research team involved the University of Bristol’s Centre for Synaptic Plasticity within the School of Physiology, Pharmacology & Neuroscience and the Department of Chemistry at Maynooth University, in collaboration with researchers at Lilly.

Funding: The work was supported by the Wellcome Trust, BBSRC and Lilly.

Source: Caroline Clancy – University of Bristol
Image Source: NeuroscienceNews.com image is credited to BruceBlaus and is licensed CC BY-SA 4.0.
Original Research: The study will appear in Cell Reports.

CITE THIS NEUROSCIENCENEWS.COM ARTICLE
University of Bristol “A Brain Wide Chemical Signal That Enhances Memory.” NeuroscienceNews. NeuroscienceNews, 24 January 2017.
<http://neurosciencenews.com/acetylcholine-memory-alzheimers-5999/>.

Categorizing Brain Cells

Researchers at the Society for Neuroscience meeting in San Diego discuss new efforts to perform single-cell analyses on the brain’s billions of cells.

By Jef Akst | November 16, 2016

WIKIMEDIA COMMONS, GERRYSHAW

The deeper scientists probe into the complexity of the human brain, the more questions seem to arise. One of the most fundamental questions is how many different types of brain cells there are, and how to categorize individual cell types. That dilemma was discussed during a session yesterday (November 11) at the ongoing Society for Neuroscience (SfN) conference in San Diego, California.

As Evan Macosko of the Broad Institute said, the human brain comprises billions of brain cells—about 170 billion, according to one recent estimate—and there is a “tremendous amount of diversity in their function.” Now, new tools are supporting the study of single-cell transcriptomes, and the number of brain cell subtypes is skyrocketing. “We saw even greater degrees of heterogeneity in these cell populations than had been appreciated before,” Macosko said of his own single-cell interrogations of the mouse brain. He and others continue to characterize more brain regions, clustering cell types based on differences in gene expression, and then creating subclusters to look for diversity within each cell population.
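
The cluster-then-subcluster workflow Macosko describes can be sketched with standard tools. The snippet below uses a synthetic cells-by-genes matrix and scikit-learn; it illustrates the general approach, not the Macosko lab’s actual analysis pipeline:

```python
# Illustrative cluster-then-subcluster sketch on a synthetic single-cell
# expression matrix (cells x genes). Not the actual Drop-seq analysis pipeline.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
expression = rng.poisson(lam=2.0, size=(5000, 2000)).astype(float)   # 5,000 cells x 2,000 genes

# Log-transform, reduce dimensionality, then cluster cells into broad types.
pcs = PCA(n_components=30).fit_transform(np.log1p(expression))
broad = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(pcs)

# Subcluster within each broad type to look for finer heterogeneity.
for label in np.unique(broad):
    members = pcs[broad == label]
    k = min(4, len(members))                      # guard against very small clusters
    sub = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(members)
    print(f"type {label}: {len(members)} cells, {len(np.unique(sub))} subclusters")
```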

Following Macosko’s talk, Bosiljka Tasic of the Allen Institute for Brain Science emphasized that categorizing cell types into subgroups based on gene expression is not enough. Researchers will need to combine such data with traditional metrics, such as morphology and electrophysiology, to “ultimately come up with an integrative taxonomy of cell types,” Tasic said. “Multimodal data acquisition—it’s a big deal and I think it’s going to be a big focus of our future endeavors.”

Link to this article:  http://www.the-scientist.com/?articles.view/articleNo/47527/title/Categorizing-Brain-Cells/&hootPostID=d75df926134333e229fcead4839f5e1f/

Speaking of Neuroscience…

A selection of notable quotes from the annual Society for Neuroscience meeting

It’s slow but it’s steady and eventually it hits big pay dirt.

Walter Koroshetz, National Institute of Neurological Disorders and Stroke (NINDS) director, on the process of science

We’ve come to recognize that we know actually very little [about] the neuroscience of pain.

Nora Volkow, National Institute on Drug Abuse (NIDA) director

We all like discreteness but sometimes things are not simple.

Bosiljka Tasic, Allen Institute for Brain Science

It’s often said that we don’t really know how anesthesia works. Nothing could be further from the truth. We just haven’t paid attention.

Emery Brown, MIT

This pie in the sky idea [antisense oligonucleotides] from animal models has now moved forward into the clinic.

Timothy Miller, Washington University in St. Louis

Exercise is not like this big green glow around the whole brain. It seems to be circuit specific, and that’s interesting.

Giselle Petzinger, University of Southern California Keck School of Medicine

We became very excited that tau antisense oligonucleotides could be a rapid way to get into the clinic.

Angela Cacace, Bristol-Myers Squibb

See “Probing the Role of Tau Protein in Disease”

It is incumbent to maintain the free exchange of people and ideas across borders.

David Julius, University of California, San Francisco, before delivering his invited lecture “Natural Products as Probes of the Pain Pathway”

We know nothing about the mechanism of cortical neuron sensitization in chronic pain.

Cheryl Stucky, Medical College of Wisconsin

There have been only a handful of posters on this subject at SfN in recent years, but the number of attendees today attests to the growing popularity of this research topic.

Barbara Sorg, Washington State University Vancouver, introducing a minisymposium on perineuronal nets and neural plasticity

The neuronal DNA methylome has many surprising properties.

Hongjun Song, Johns Hopkins University

Sound is absolutely central to a tremendous amount of human communication.

Nina Kraus, Northwestern University

Researchers uncover algorithm which may solve human intelligence


Between the Lines | November 29, 2016

If we have the algorithm, we also have the key to true artificial intelligence.

Can Quantum Physics Explain Consciousness?

Written by Jennifer Ouellette | Nov 7, 2016 | The Atlantic

A new approach to a once-farfetched theory is making it plausible that the brain functions like a quantum computer.

The mere mention of “quantum consciousness” makes most physicists cringe, as the phrase seems to evoke the vague, insipid musings of a New Age guru. But if a new hypothesis proves to be correct, quantum effects might indeed play some role in human cognition. Matthew Fisher, a physicist at the University of California, Santa Barbara, raised eyebrows late last year when he published a paper in Annals of Physics proposing that the nuclear spins of phosphorus atoms could serve as rudimentary “qubits” in the brain—which would essentially enable the brain to function like a quantum computer.

As recently as 10 years ago, Fisher’s hypothesis would have been dismissed by many as nonsense. Physicists have been burned by this sort of thing before, most notably in 1989, when Roger Penrose proposed that mysterious protein structures called “microtubules” played a role in human consciousness by exploiting quantum effects. Few researchers believe such a hypothesis plausible. Patricia Churchland, a neurophilosopher at the University of California, San Diego, memorably opined that one might as well invoke “pixie dust in the synapses” to explain human cognition.

Fisher’s hypothesis faces the same daunting obstacle that has plagued microtubules: a phenomenon called quantum decoherence. To build an operating quantum computer, you need to connect qubits—quantum bits of information—in a process called entanglement. But entangled qubits exist in a fragile state. They must be carefully shielded from any noise in the surrounding environment. Just one photon bumping into your qubit would be enough to make the entire system “decohere,” destroying the entanglement and wiping out the quantum properties of the system. It’s challenging enough to do quantum processing in a carefully controlled laboratory environment, never mind the warm, wet, complicated mess that is human biology, where maintaining coherence for sufficiently long periods of time is well nigh impossible.

Over the past decade, however, growing evidence suggests that certain biological systems might employ quantum mechanics. In photosynthesis, for example, quantum effects help plants turn sunlight into fuel. Scientists have also proposed that migratory birds have a “quantum compass” enabling them to exploit Earth’s magnetic fields for navigation, or that the human sense of smell could be rooted in quantum mechanics.

Fisher’s notion of quantum processing in the brain broadly fits into this emerging field of quantum biology. Call it quantum neuroscience. He has developed a complicated hypothesis, incorporating nuclear and quantum physics, organic chemistry, neuroscience and biology. While his ideas have met with plenty of justifiable skepticism, some researchers are starting to pay attention. “Those who read his paper (as I hope many will) are bound to conclude: This old guy’s not so crazy,” wrote John Preskill, a physicist at the California Institute of Technology, after Fisher gave a talk there. “He may be on to something. At least he’s raising some very interesting questions.”

Matthew Fisher has proposed a way for quantum effects to influence the workings of the brain. (Courtesy of Matthew Fisher)

Senthil Todadri, a physicist at the Massachusetts Institute of Technology and Fisher’s longtime friend and colleague, is skeptical, but he thinks that Fisher has rephrased the central question—is quantum processing happening in the brain?—in such a way that it lays out a road map to test the hypothesis rigorously. “The general assumption has been that of course there is no quantum information processing that’s possible in the brain,” Todadri said. “He makes the case that there’s precisely one loophole. So the next step is to see if that loophole can be closed.” Indeed, Fisher has begun to bring together a team to do laboratory tests to answer this question once and for all.

* * *

Fisher belongs to something of a physics dynasty: His father, Michael E. Fisher, is a prominent physicist at the University of Maryland, College Park, whose work in statistical physics has garnered numerous honors and awards over the course of his career. His brother, Daniel Fisher, is an applied physicist at Stanford University who specializes in evolutionary dynamics. Matthew Fisher has followed in their footsteps, carving out a highly successful physics career. He shared the prestigious Oliver E. Buckley Prize in 2015 for his research on quantum phase transitions.

So what drove him to move away from mainstream physics and toward the controversial and notoriously messy interface of biology, chemistry, neuroscience and quantum physics? His own struggles with clinical depression.

Fisher vividly remembers that February 1986 day when he woke up feeling numb and jet-lagged, as if he hadn’t slept in a week. “I felt like I had been drugged,” he said. Extra sleep didn’t help. Adjusting his diet and exercise regime proved futile, and blood tests showed nothing amiss. But his condition persisted for two full years. “It felt like a migraine headache over my entire body every waking minute,” he said. It got so bad he contemplated suicide, although the birth of his first daughter gave him a reason to keep fighting through the fog of depression.

Eventually he found a psychiatrist who prescribed a tricyclic antidepressant, and within three weeks his mental state started to lift. “The metaphorical fog that had so enshrouded me that I couldn’t even see the sun—that cloud was a little less dense, and I saw there was a light behind it,” Fisher said. Within nine months he felt reborn, despite some significant side effects from the medication, including soaring blood pressure. He later switched to Prozac and has continuously monitored and tweaked his specific drug regimen ever since.

His experience convinced him that the drugs worked. But Fisher was surprised to discover that neuroscientists understand little about the precise mechanisms behind how they work. That aroused his curiosity, and given his expertise in quantum mechanics, he found himself pondering the possibility of quantum processing in the brain. Five years ago he threw himself into learning more about the subject, drawing on his own experience with antidepressants as a starting point.

Since nearly all psychiatric medications are complicated molecules, he focused on one of the simplest, lithium, which is just one atom—a spherical cow, so to speak, that would be an easier model to study than Prozac, for instance. The analogy is particularly appropriate because a lithium atom is a sphere of electrons surrounding the nucleus, Fisher said. He zeroed in on the fact that the lithium available by prescription from your local pharmacy is mostly a common isotope called lithium-7. Would a different isotope, like the much more rare lithium-6, produce the same results? In theory it should, since the two isotopes are chemically identical. They differ only in the number of neutrons in the nucleus.

When Fisher searched the literature, he found that an experiment comparing the effects of lithium-6 and lithium-7 had been done. In 1986, scientists at Cornell University examined the effects of the two isotopes on the behavior of rats. Pregnant rats were separated into three groups: One group was given lithium-7, one group was given the isotope lithium-6, and the third served as the control group. Once the pups were born, the mother rats that received lithium-6 showed much stronger maternal behaviors, such as grooming, nursing and nest-building, than the rats in either the lithium-7 or control groups.

This floored Fisher. Not only should the chemistry of the two isotopes be the same, the slight difference in atomic mass largely washes out in the watery environment of the body. So what could account for the differences in behavior those researchers observed?

Fisher believes the secret might lie in the nuclear spin, which is a quantum property that affects how long each atom can remain coherent—that is, isolated from its environment. The lower the spin, the less the nucleus interacts with electric and magnetic fields, and the less quickly it decoheres.

Because lithium-7 and lithium-6 have different numbers of neutrons, they also have different spins. As a result, lithium-7 decoheres too quickly for the purposes of quantum cognition, while lithium-6 can remain entangled longer.

Fisher had found two substances, alike in all important respects save for quantum spin, that could have very different effects on behavior. For Fisher, this was a tantalizing hint that quantum processes might indeed play a functional role in cognitive processing.

* * *

That said, going from an intriguing hypothesis to actually demonstrating that quantum processing plays a role in the brain is a daunting challenge. The brain would need some mechanism for storing quantum information in qubits for sufficiently long times. There must be a mechanism for entangling multiple qubits, and that entanglement must then have some chemically feasible means of influencing how neurons fire. There must also be some means of transporting quantum information stored in the qubits throughout the brain.

This is a tall order. Over the course of his five-year quest, Fisher has identified just one credible candidate for storing quantum information in the brain: phosphorus atoms, the only common biological element other than hydrogen with a spin of one-half, a low number that makes possible longer coherence times. Phosphorus can’t make a stable qubit on its own, but its coherence time can be extended further, according to Fisher, if you bind phosphorus with calcium ions to form clusters.

In 1975, Aaron Posner, a Cornell University scientist, noticed an odd clustering of calcium and phosphorus atoms in his X-rays of bone. He made drawings of the structure of those clusters: nine calcium atoms and six phosphorus atoms, later called “Posner molecules” in his honor. The clusters popped up again in the 2000s, when scientists simulating bone growth in artificial fluid noticed them floating in the fluid. Subsequent experiments found evidence of the clusters in the body. Fisher thinks that Posner molecules could serve as a natural qubit in the brain as well.

That’s the big-picture scenario, but the devil is in the details that Fisher has spent the past few years hammering out. The process starts in the cell with a chemical compound called pyrophosphate. It is made of two phosphates bonded together, each composed of a phosphorus atom surrounded by multiple oxygen atoms with zero spin. The interaction between the spins of the phosphates causes them to become entangled. They can pair up in four different ways: three of the configurations add up to a total spin of one (a “triplet” state that is only weakly entangled), but the fourth possibility produces a zero spin, or “singlet” state of maximum entanglement, which is crucial for quantum computing.

Next, enzymes break apart the entangled phosphates into two free phosphate ions. Crucially, these remain entangled even as they move apart. This process happens much more quickly, Fisher argues, with the singlet state. These ions can then combine in turn with calcium ions and oxygen atoms to become Posner molecules. Neither the calcium nor the oxygen atoms have a nuclear spin, preserving the one-half total spin crucial for lengthening coherence times. So those clusters protect the entangled pairs from outside interference so that they can maintain coherence for much longer periods of time; Fisher roughly estimates it might last for hours, days or even weeks.

In this way, the entanglement can be distributed over fairly long distances in the brain, influencing the release of neurotransmitters and the firing of synapses between neurons: spooky action at work in the brain.
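
For readers who want the textbook version of the “three triplet, one singlet” statement above, the four spin states of two spin-1/2 nuclei are (standard quantum mechanics, not anything specific to Fisher’s paper):

```latex
% Two spin-1/2 nuclei can combine into three triplet states (total spin 1)
% and one maximally entangled singlet state (total spin 0):
\text{Triplet:}\quad |\!\uparrow\uparrow\rangle,\qquad
\tfrac{1}{\sqrt{2}}\bigl(|\!\uparrow\downarrow\rangle + |\!\downarrow\uparrow\rangle\bigr),\qquad
|\!\downarrow\downarrow\rangle
\qquad\qquad
\text{Singlet:}\quad
\tfrac{1}{\sqrt{2}}\bigl(|\!\uparrow\downarrow\rangle - |\!\downarrow\uparrow\rangle\bigr)
```

It is the singlet combination whose long-lived entanglement Fisher’s proposal relies on.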

* * *

Researchers who work in quantum biology are cautiously intrigued by Fisher’s proposal. Alexandra Olaya-Castro, a physicist at University College London who has worked on quantum photosynthesis, calls it “a well-thought hypothesis. It doesn’t give answers, it opens questions that might then lead to how we could test particular steps in the hypothesis.”

The University of Oxford chemist Peter Hore, who investigates whether migratory birds’ navigational systems make use of quantum effects, concurs. “Here’s a theoretical physicist who is proposing specific molecules, specific mechanics, all the way through to how this could affect brain activity,” he said. “That opens up the possibility of experimental testing.”

Experimental testing is precisely what Fisher is now trying to do. He just spent a sabbatical at Stanford University working with researchers there to replicate the 1986 study with pregnant rats. He acknowledged the preliminary results were disappointing, in that the data didn’t provide much information, but thinks if it’s repeated with a protocol closer to the original 1986 experiment, the results might be more conclusive.

Fisher has applied for funding to conduct further in-depth quantum chemistry experiments. He has cobbled together a small group of scientists from various disciplines at UCSB and the University of California, San Francisco, as collaborators. First and foremost, he would like to investigate whether calcium phosphate really does form stable Posner molecules, and whether the phosphorus nuclear spins of these molecules can be entangled for sufficiently long periods of time.

Even Hore and Olaya-Castro are skeptical of the latter, particularly Fisher’s rough estimate that the coherence could last a day or more. “I think it’s very unlikely, to be honest,” Olaya-Castro said. “The longest time scale relevant for the biochemical activity that’s happening here is the scale of seconds, and that’s too long.” (Neurons can store information for microseconds.) Hore calls the prospect “remote,” pegging the limit at one second at best. “That doesn’t invalidate the whole idea, but I think he would need a different molecule to get long coherence times,” he said. “I don’t think the Posner molecule is it. But I’m looking forward to hearing how it goes.”

Others see no need to invoke quantum processing to explain brain function. “The evidence is building up that we can explain everything interesting about the mind in terms of interactions of neurons,” said Paul Thagard, a neurophilosopher at the University of Waterloo in Ontario, Canada, to New Scientist. (Thagard declined our request to comment further.)

Plenty of other aspects of Fisher’s hypothesis also require deeper examination, and he hopes to be able to conduct the experiments to do so. Is the Posner molecule’s structure symmetrical? And how isolated are the nuclear spins?

Most important, what if all those experiments ultimately prove his hypothesis wrong? It might be time to give up on the notion of quantum cognition altogether. “I believe that if phosphorus nuclear spin is not being used for quantum processing, then quantum mechanics is not operative in longtime scales in cognition,” Fisher said. “Ruling that out is important scientifically. It would be good for science to know.”

This post (in The Atlantic) appears courtesy of Quanta Magazine.

Here is the link back to The Atlantic article: http://www.theatlantic.com/science/archive/2016/11/quantum-brain/506768/?utm_source=nl-politics-daily-110716