Neuromorphic chips are being designed specifically to mimic the human brain – and they could soon replace CPUs
AI services like Apple’s Siri and others operate by sending your queries to faraway data centers, which send back responses. The reason they rely on cloud-based computing is that today’s electronics don’t come with enough computing power to run the processing-heavy algorithms needed for machine learning. The typical CPUs most smartphones use could never handle a system like Siri on the device. But Dr. Chris Eliasmith, a theoretical neuroscientist and co-CEO of Canadian AI startup Applied Brain Research, is confident that a new type of chip is about to change that.
“Many have suggested Moore’s law is ending and that means we won’t get ‘more compute’ cheaper using the same methods,” Eliasmith says. He’s betting on the proliferation of ‘neuromorphics’ — a type of computer chip that is not yet widely known but already being developed by several major chip makers.
Traditional CPUs process instructions based on “clocked time” – information is transmitted at regular intervals, as if managed by a metronome. By packing in digital equivalents of neurons, neuromorphics communicate in parallel (and without the rigidity of clocked time) using “spikes” – bursts of electric current that can be sent whenever needed. Just like our own brains, the chip’s neurons communicate by processing incoming flows of electricity – each neuron able to determine from the incoming spike whether to send current out to the next neuron.
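Spiking neurons like these are often modeled in software as "leaky integrate-and-fire" units. The sketch below is a generic textbook model, not a description of any particular chip's circuitry; all constants are illustrative choices:

```python
import numpy as np

# A minimal leaky integrate-and-fire neuron -- a common simplification of the
# spiking behavior described above. The neuron integrates incoming current,
# leaks charge over time, and emits a spike whenever it crosses a threshold.
dt, tau, v_thresh, v_reset = 1e-3, 20e-3, 1.0, 0.0  # illustrative constants
v, spikes = 0.0, []

for step in range(100):                   # simulate 100 ms
    current_in = 1.2                      # constant input current (arbitrary units)
    v += dt / tau * (-v + current_in)     # leaky integration of incoming current
    if v >= v_thresh:                     # threshold crossed: emit a spike...
        spikes.append(step * dt)
        v = v_reset                       # ...and reset, like a biological neuron

print(f"{len(spikes)} spikes in 100 ms, first at {spikes[0] * 1000:.0f} ms")
```

Unlike a clocked pipeline, nothing happens downstream until a spike actually occurs, which is where the power savings of event-driven hardware come from.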
What makes this a big deal is that these chips require far less power to process AI algorithms. For example, one neuromorphic chip made by IBM contains five times as many transistors as a standard Intel processor, yet consumes only 70 milliwatts of power. An Intel processor would use anywhere from 35 to 140 watts, or up to 2000 times more power.
Eliasmith points out that neuromorphics aren’t new and that their designs have been around since the 1980s. Back then, however, the designs required that specific algorithms be baked directly into the chip. That meant you’d need one chip for detecting motion, and a different one for detecting sound. None of the chips acted as a general processor in the way that our own cortex does.
This was partly because there wasn’t any way for programmers to design algorithms that could do much with a general-purpose chip. So even as these brain-like chips were being developed, building algorithms for them remained a challenge.
Eliasmith and his team are keenly focused on building tools that would allow a community of programmers to deploy AI algorithms on these new cortical chips.
Central to these efforts is Nengo, a compiler that developers can use to build their own algorithms for AI applications that will operate on general-purpose neuromorphic hardware. A compiler is a software tool that translates the code programmers write into the low-level instructions that make hardware actually do something. What makes Nengo useful is its use of the familiar Python programming language – known for its intuitive syntax – and its ability to deploy those algorithms on many different hardware platforms, including neuromorphic chips. Pretty soon, anyone with an understanding of Python could be building sophisticated neural nets made for neuromorphic hardware.
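To give a flavor of what that looks like in practice, here is a minimal sketch using the open-source `nengo` Python package. The network, neuron counts, and the squaring function are illustrative choices, not anything from Applied Brain Research’s products:

```python
import numpy as np
import nengo

# A tiny Nengo model: a population of spiking neurons represents sin(t),
# and a second population computes its square -- all in ordinary Python.
model = nengo.Network(label="minimal example")
with model:
    stimulus = nengo.Node(lambda t: np.sin(2 * np.pi * t))  # time-varying input
    a = nengo.Ensemble(n_neurons=100, dimensions=1)         # spiking population
    b = nengo.Ensemble(n_neurons=100, dimensions=1)
    nengo.Connection(stimulus, a)
    nengo.Connection(a, b, function=lambda x: x ** 2)       # decode x^2 from spikes
    probe = nengo.Probe(b, synapse=0.01)                    # filtered output

with nengo.Simulator(model) as sim:   # reference simulator; other backends exist
    sim.run(1.0)

print(sim.data[probe][-5:])  # last few decoded values, should approach sin^2(t)
```

The same model description can, in principle, be retargeted at different simulator backends, which is how the compiler separates the algorithm from the hardware it runs on.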
“Things like vision systems, speech systems, motion control, and adaptive robotic controllers have already been built with Nengo,” Peter Suma, a trained computer scientist and the other CEO of Applied Brain Research, tells me.
Perhaps the most impressive system built using the compiler is Spaun, a project that in 2012 earned international praise for being the most complex brain model ever simulated on a computer. Spaun demonstrated that computers could be made to interact fluidly with the environment, and perform human-like cognitive tasks like recognizing images and controlling a robot arm that writes down what it sees. The machine wasn’t perfect, but it was a stunning demonstration that computers could one day blur the line between human and machine cognition. Recently, by using neuromorphics, most of Spaun has been run 9,000 times faster, using less energy than it would on conventional CPUs – and by the end of 2017, all of Spaun will be running on neuromorphic hardware.
Eliasmith won NSERC’s John C. Polanyi Award for that project – Canada’s highest recognition for a breakthrough scientific achievement – and once Suma came across the research, the pair joined forces to commercialize these tools.
“While Spaun shows us a way towards one day building fluidly intelligent reasoning systems, in the nearer term neuromorphics will enable many types of context-aware AIs,” says Suma. He points out that while today’s AIs like Siri remain offline until explicitly called into action, we’ll soon have artificial agents that are ‘always on’ and ever-present in our lives.
“Imagine a SIRI that listens and sees all of your conversations and interactions. You’ll be able to ask it for things like – ‘Who did I have that conversation with about doing the launch for our new product in Tokyo?’ or ‘What was that idea for my wife’s birthday gift that Melissa suggested?’” he says.
When I raised concerns that some company might then have an uninterrupted window into even the most intimate parts of my life, I was reminded that because the AI would be processed locally on the device, there’s no need for that information to touch a server owned by a big company. And for Eliasmith, this ‘always on’ component is a necessary step towards true machine cognition. “The most fundamental difference between most available AI systems of today and the biological intelligent systems we are used to, is the fact that the latter always operate in real-time. Bodies and brains are built to work with the physics of the world,” he says.
Already, major players across the IT industry are racing to get their AI services into the hands of users. Companies like Apple, Facebook, Amazon, and even Samsung are developing conversational assistants they hope will one day become digital helpers.
With the rise of neuromorphics, and tools like Nengo, we could soon have AIs capable of exhibiting a stunning level of natural intelligence – right on our phones.
This article first appeared in Wired Magazine. Here’s a link to the original: http://www.wired.co.uk/article/ai-neuromorphic-chips-brains
For all the improvements in computer technology over the years, we still struggle to recreate the low-energy, elegant processing of the human brain. Now, researchers at Stanford University and Sandia National Laboratories have made an advance that could help computers mimic one piece of the brain’s efficient design — an artificial version of the space over which neurons communicate, called a synapse.
“It works like a real synapse but it’s an organic electronic device that can be engineered,” said Alberto Salleo, associate professor of materials science and engineering at Stanford and senior author of the paper. “It’s an entirely new family of devices because this type of architecture has not been shown before. For many key metrics, it also performs better than anything that’s been done before with inorganics.”
The new artificial synapse, reported in the Feb. 20 issue of Nature Materials, mimics the way synapses in the brain learn through the signals that cross them. This offers significant energy savings over traditional computing, which processes information and then stores it in memory as two separate steps. Here, the processing creates the memory.
This synapse may one day be part of a more brain-like computer, which could be especially beneficial for computing that works with visual and auditory signals. Examples of this are seen in voice-controlled interfaces and driverless cars. Past efforts in this field have produced high-performance neural networks supported by artificially intelligent algorithms but these are still distant imitators of the brain that depend on energy-consuming traditional computer hardware.
Building a brain
When we learn, electrical signals are sent between neurons in our brain. The most energy is needed the first time a synapse is traversed. Every time afterward, the connection requires less energy. This is how synapses efficiently facilitate both learning something new and remembering what we’ve learned. The artificial synapse, unlike most other versions of brain-like computing, also fulfills these two tasks simultaneously, and does so with substantial energy savings.
[Image: Alberto Salleo, associate professor of materials science and engineering, with graduate student Scott Keene, characterizing the electrochemical properties of an artificial synapse for neural network computing. They are part of the team that created the new device. Credit: L.A. Cicero]
“Deep learning algorithms are very powerful but they rely on processors to calculate and simulate the electrical states and store them somewhere else, which is inefficient in terms of energy and time,” said Yoeri van de Burgt, former postdoctoral scholar in the Salleo lab and lead author of the paper. “Instead of simulating a neural network, our work is trying to make a neural network.”
The artificial synapse is based on a battery design. It consists of two thin, flexible films with three terminals, connected by an electrolyte of salty water. The device works as a transistor, with one of the terminals controlling the flow of electricity between the other two.
Like a neural path in a brain being reinforced through learning, the researchers program the artificial synapse by discharging and recharging it repeatedly. Through this training, they have been able to predict, to within 1 percent uncertainty, what voltage will be required to get the synapse to a specific electrical state and, once there, it remains at that state. In other words, unlike a common computer, where you save your work to the hard drive before you turn it off, the artificial synapse can recall its programming without any additional actions or parts.
Testing a network of artificial synapses
Only one artificial synapse has been produced, but researchers at Sandia used 15,000 measurements from experiments on that synapse to simulate how an array of them would work in a neural network. They tested the simulated network’s ability to recognize handwritten digits 0 through 9. Tested on three datasets, the simulated array was able to identify the digits with an accuracy between 93 and 97 percent.
Although this task would be relatively simple for a person, traditional computers have a difficult time interpreting visual and auditory signals.
“More and more, the kinds of tasks that we expect our computing devices to do require computing that mimics the brain because using traditional computing to perform these tasks is becoming really power hungry,” said A. Alec Talin, distinguished member of technical staff at Sandia National Laboratories in Livermore, California, and senior author of the paper. “We’ve demonstrated a device that’s ideal for running these type of algorithms and that consumes a lot less power.”
This device is extremely well suited for the kind of signal identification and classification that traditional computers struggle to perform. Whereas digital transistors can be in only two states, such as 0 and 1, the researchers successfully programmed 500 states in the artificial synapse, which is useful for neuron-type computation models. In switching from one state to another, they used about one-tenth as much energy as a state-of-the-art computing system needs in order to move data from the processing unit to the memory.
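To make the idea of discrete synaptic states concrete, here is a rough software analogue (not the Sandia simulation itself): train a small network on a standard handwritten-digit dataset, then snap every weight to one of 500 allowed levels and see how much accuracy survives. The dataset, network size, and quantization scheme are all illustrative assumptions:

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Train a small digit classifier with ordinary continuous weights.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
net.fit(X_train, y_train)
print("continuous weights:", net.score(X_test, y_test))

# Now mimic a synapse that supports only 500 discrete states: snap each
# weight to the nearest of 500 evenly spaced levels, in place.
for W in net.coefs_:
    levels = np.linspace(W.min(), W.max(), 500)
    W[:] = levels[np.abs(W[..., None] - levels).argmin(axis=-1)]
print("500-state weights: ", net.score(X_test, y_test))
```

With 500 levels the quantization is fine-grained enough that accuracy barely moves, which is the intuition behind why such a device can run these networks.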
This, however, means they are still using about 10,000 times as much energy as the minimum a biological synapse needs in order to fire. The researchers are hopeful that they can attain neuron-level energy efficiency once they test the artificial synapse in smaller devices.
Every part of the device is made of inexpensive organic materials. These aren’t found in nature but they are largely composed of hydrogen and carbon and are compatible with the brain’s chemistry. Cells have been grown on these materials and they have even been used to make artificial pumps for neural transmitters. The voltages applied to train the artificial synapse are also the same as those that move through human neurons.
All this means it’s possible that the artificial synapse could communicate with live neurons, leading to improved brain-machine interfaces. The softness and flexibility of the device also lends itself to being used in biological environments. Before any applications to biology, however, the team plans to build an actual array of artificial synapses for further research and testing.
Additional Stanford co-authors of this work include co-lead author Ewout Lubberman, also of the University of Groningen in the Netherlands, Scott T. Keene and Grégorio C. Faria, also of Universidade de São Paulo, in Brazil. Sandia National Laboratories co-authors include Elliot J. Fuller and Sapan Agarwal in Livermore and Matthew J. Marinella in Albuquerque, New Mexico. Salleo is an affiliate of the Stanford Precourt Institute for Energy and the Stanford Neurosciences Institute. Van de Burgt is now an assistant professor in microsystems and an affiliate of the Institute for Complex Molecular Studies (ICMS) at Eindhoven University of Technology in the Netherlands.
This research was funded by the National Science Foundation, the Keck Faculty Scholar Funds, the Neurofab at Stanford, the Stanford Graduate Fellowship, Sandia’s Laboratory-Directed Research and Development Program, the U.S. Department of Energy, the Holland Scholarship, the University of Groningen Scholarship for Excellent Students, the Hendrik Muller National Fund, the Schuurman Schimmel-van Outeren Foundation, the Foundation of Renswoude (The Hague and Delft), the Marco Polo Fund, the Instituto Nacional de Ciência e Tecnologia/Instituto Nacional de Eletrônica Orgânica in Brazil, the Fundação de Amparo à Pesquisa do Estado de São Paulo and the Brazilian National Council.
Here is a link to the source article: https://www.sciencedaily.com/releases/2017/02/170221142046.htm
Summary: Researchers report adult neurogenesis not only helps increase the number of cells in a neural network, it also promotes plasticity in the existing network. Additionally, they have identified the role the Bax gene plays in synaptic pruning. Source: University of Alabama at Birmingham.
One goal in neurobiology is to understand how the flow of electrical signals through brain circuits gives rise to perception, action, thought, learning and memories.
Linda Overstreet-Wadiche, Ph.D., and Jacques Wadiche, Ph.D., both associate professors in the University of Alabama at Birmingham Department of Neurobiology, have published their latest contribution in this effort, focused on a part of the brain that helps form memories — the dentate gyrus of the hippocampus.
The dentate gyrus is one of just two areas in the brain where new neurons are continuously formed in adults. When a new granule cell neuron is made in the dentate gyrus, it needs to get ‘wired in’ by forming synapses, or connections, in order to contribute to circuit function. Dentate granule cells are part of a circuit that receives electrical signals from the entorhinal cortex, a cortical brain region that processes sensory and spatial input from other areas of the brain. By combining this sensory and spatial information, the dentate gyrus can generate a unique memory of an experience.
Overstreet-Wadiche and UAB colleagues posed a basic question: Since the number of neurons in the dentate gyrus increases by neurogenesis while the number of neurons in the cortex remains the same, does the brain create additional synapses from the cortical neurons to the new granule cells, or do some cortical neurons transfer their connections from mature granule cells to the new granule cells?
Their answer – garnered through a series of electrophysiology, dendritic spine density and immunohistochemistry experiments with mice genetically altered either to produce more new neurons or to kill off newborn neurons – supports the second model: some of the cortical neurons transfer their connections from mature granule cells to the new granule cells.
This opens the door to look at how this redistribution of synapses between the old and new neurons helps the dentate gyrus function. And it opens up tantalizing questions. Does this redistribution disrupt existing memories? How does this redistribution relate to the beneficial effects of exercise, which is a natural way to increase neurogenesis?
“Over the last 10 years there has been evidence supporting a redistribution of synapses between old and new neurons, possibly by a competitive process that the new cells tend to ‘win,’” Overstreet-Wadiche said. “Our findings are important because they directly demonstrate that, in order for new cells to win connections, the old cells lose connections. So, the process of adult neurogenesis not only adds new cells to the network, it promotes plasticity of the existing network.”
“It will be interesting to explore how neurogenesis-induced plasticity contributes to the function of this brain region,” she continued. “Neurogenesis is typically associated with improved acquisition of new information, but some studies have also suggested that neurogenesis promotes ‘forgetting’ of existing memories.”
The researchers also unexpectedly found that the Bax gene, known for its role in apoptosis, appears to also play a role in synaptic pruning in the dentate gyrus.
“There is mounting evidence that the cellular machinery that controls cell death also controls the strength and number of synaptic connections,” Overstreet-Wadiche said. “The appropriate balance of synapses strengthening and weakening, collectively termed synaptic plasticity, is critical for appropriate brain function. Hence, understanding how synaptic pruning occurs may shed light on neurodevelopmental disorders and on neurodegenerative diseases in which a synaptic pruning gone awry may contribute to pathological synapse loss.”
ABOUT THIS NEUROSCIENCE RESEARCH ARTICLE
All of the work was performed in the Department of Neurobiology at UAB. In addition to Overstreet-Wadiche and Wadiche, co-authors of the paper, “Adult-born neurons modify excitatory synaptic transmission to existing neurons,” published in eLife, are Elena W. Adlaf, Ryan J. Vaden, Anastasia J. Niver, Allison F. Manuel, Vincent C. Onyilo, Matheus T. Araujo, Cristina V. Dieni, Hai T. Vo and Gwendalyn D. King.
Much of the data came from the doctoral thesis research of Adlaf, a former UAB Neuroscience graduate student who is now a postdoctoral fellow at Duke University.
Funding: Funding for this research came from Civitan International Emerging Scholars awards, and National Institutes of Health awards or grants NS098553, NS064025, NS065920 and NS047466.
Source: Jeff Hansen – University of Alabama at Birmingham. Image Source: NeuroscienceNews.com image is in the public domain. Original Research: Full open access research for “Adult-born neurons modify excitatory synaptic transmission to existing neurons” by Elena W. Adlaf, Ryan J. Vaden, Anastasia J. Niver, Allison F. Manuel, Vincent C. Onyilo, Matheus T. Araujo, Cristina V. Dieni, Hai T. Vo, Gwendalyn D. King, Jacques I. Wadiche, and Linda Overstreet-Wadiche in eLife. Published online January 30, 2017. doi:10.7554/eLife.19886
How does heightened attention improve our mental capacity? This is the question tackled by new research published today in the journal Cell Reports, which reveals a chemical signal released across the brain in response to attention demanding or arousing situations.
The new discoveries indicate how current drugs used in the treatment of Alzheimer’s, designed to boost this chemical signal, counter the symptoms of dementia. The results could also lead to new ways of enhancing cognitive function to counteract the effects of diseases such as Alzheimer’s and schizophrenia, as well as enhancing memory in healthy people.
The team of medical researchers at the Universities of Bristol and Maynooth, in collaboration with the pharmaceutical company Eli Lilly & Company, studied how the release of the chemical acetylcholine fluctuates during the day, and found that release is at its highest when the brain is engaged with more challenging mental tasks. The fluctuations are coordinated across the brain, indicating a brain-wide signal to increase mental capacity, with specific spikes in acetylcholine release occurring at particularly arousing times, such as gaining a reward.
Professor Jack Mellor, lead researcher from Bristol’s Centre for Synaptic Plasticity, said: “These findings are about how brain state is regulated and updated on a rapid basis to optimise the encoding of memory and cognitive performance. Many current and future drug therapies for a wide range of brain disorders including Alzheimer’s and schizophrenia are designed to target chemical systems such as acetylcholine so understanding when they are active and therefore how they function will be crucial for their future development and clinical use.”
Professor Lowry, who led the team at Maynooth University, added: “This work highlights the importance of cross-disciplinary basic research between universities and industry. Using real-time biosensor technology to improve our understanding of the role of important neurochemicals associated with memory is very exciting and timely, particularly given the increasing multifaceted societal burden caused by memory affecting neurological disorders such as dementia.”
Primary author Dr Leonor Ruivo added: “This collaboration gave us access to a new generation of tools which, in combination with other powerful techniques, will allow researchers to build on our findings and provide a much more detailed map of the action of brain chemicals in health, disease and therapeutic intervention.”
ABOUT THIS NEUROSCIENCE RESEARCH ARTICLE
The research team involved the University of Bristol’s Centre for Synaptic Plasticity within the School of Physiology, Pharmacology & Neuroscience and Maynooth University’s Department of Chemistry, in collaboration with researchers at Lilly.
Funding: The work was supported by the Wellcome Trust, BBSRC and Lilly.
The deeper scientists probe into the complexity of the human brain, the more questions seem to arise. One of the most fundamental questions is how many different types of brain cells there are, and how to categorize individual cell types. That dilemma was discussed during a session yesterday (November 11) at the ongoing Society for Neuroscience (SfN) conference in San Diego, California.
As Evan Macosko of the Broad Institute said, the human brain comprises billions of brain cells—about 170 billion, according to one recent estimate—and there is a “tremendous amount of diversity in their function.” Now, new tools are supporting the study of single-cell transcriptomes, and the number of brain cell subtypes is skyrocketing. “We saw even greater degrees of heterogeneity in these cell populations than had been appreciated before,” Macosko said of his own single-cell interrogations of the mouse brain. He and others continue to characterize more brain regions, clustering cell types based on differences in gene expression, and then creating subclusters to look for diversity within each cell population.
Following Macosko’s talk, Bosiljka Tasic of the Allen Institute for Brain Science emphasized that categorizing cell types into subgroups based on gene expression is not enough. Researchers will need to combine such data with traditional metrics, such as morphology and electrophysiology, to “ultimately come up with an integrative taxonomy of cell types,” Tasic said. “Multimodal data acquisition—it’s a big deal and I think it’s going to be a big focus of our future endeavors.”
If we have the algorithm, we also have the key to true artificial intelligence.
The key element which separates today’s artificial intelligence (AI) systems and what we consider to be human thought and learning processes could be boiled down to no more than an algorithm.
That’s according to a recent paper published in the journal Frontiers in Systems Neuroscience, which suggests that despite the complexity of the human brain, an algorithm may be all it takes for our technological creations to mimic our way of thinking.
As reported by Business Insider, the idea that human thought can be whittled down to an algorithm lies in the “Theory of Connectivity,” which proposes that human intelligence is rooted in “a power-of-two-based permutation logic (N = 2^i – 1)” algorithm, capable of producing perceptions, memories, generalized knowledge and flexible actions, according to the paper.
First proposed in 2015, the theory suggests that how we acquire and process knowledge can be explained by how different neurons interact and align in separate areas of the brain.
It may also be that our brain power is based on “a relatively simple mathematical logic,” according to Dr. Joe Tsien, neuroscientist at the Medical College of Georgia at Augusta University and author of the paper.
The logic proposed, N = 2^i – 1 (where i is the number of distinct inputs and N the number of neural cliques needed to handle them), relates to how groups of similar neurons come together to handle tasks such as recognizing food, shelter, and threats. These cliques then cluster together to form functional connectivity motifs (FCMs), which handle additional ideas and conclusions.
The more complex the task, the larger the group of FCMs.
In order to test the theory and how many cliques are necessary to create an FCM, the researchers analyzed how the algorithm performed in seven different regions of the brain, all of which handled primal, basic responses such as food, shelter, and fear in lab mice and hamsters.
By offering different food combinations and monitoring brain responses, the team was able to document 15 unique combinations of neuron clusters.
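The arithmetic checks out: with i = 4 food types, N = 2^4 – 1 = 15, one clique per nonempty combination of inputs. A short sketch makes this explicit (the food labels are invented for illustration):

```python
from itertools import combinations

# The Theory of Connectivity predicts N = 2**i - 1 cliques for i distinct
# inputs: one clique per nonempty subset of inputs. With i = 4 foods this
# gives the 15 combinations reported in the experiments.
foods = ["food A", "food B", "food C", "food D"]  # hypothetical labels

cliques = [subset for r in range(1, len(foods) + 1)
           for subset in combinations(foods, r)]

print(len(cliques))        # 15 == 2**4 - 1
for subset in cliques:
    print(subset)          # every nonempty combination of the four inputs
```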
Furthermore, these cliques “appear prewired,” according to the researchers, as they appeared immediately when the food choices did.
“The fundamental mathematical rule even remained largely intact when the NMDA receptor, a master switch for learning and memory, was disabled after the brain matured,” the scientists say.
Such research is an important step in improving our understanding of how the brain, and mind, works — and therefore how this scientific understanding could hypothetically be applied to future AI projects. It may not give us the key to improving our own intelligence, but if the basic components of how the brain is wired could be applied to artificial intelligence models, then who knows how far future AI will advance.
NOTE: Link to original article here: http://www.zdnet.com/article/researchers-uncover-algorithm-which-may-solve-human-intelligence/
A new approach to a once-farfetched theory is making it plausible that the brain functions like a quantum computer.
The mere mention of “quantum consciousness” makes most physicists cringe, as the phrase seems to evoke the vague, insipid musings of a New Age guru. But if a new hypothesis proves to be correct, quantum effects might indeed play some role in human cognition. Matthew Fisher, a physicist at the University of California, Santa Barbara, raised eyebrows late last year when he published a paper in Annals of Physics proposing that the nuclear spins of phosphorus atoms could serve as rudimentary “qubits” in the brain—which would essentially enable the brain to function like a quantum computer.
As recently as 10 years ago, Fisher’s hypothesis would have been dismissed by many as nonsense. Physicists have been burned by this sort of thing before, most notably in 1989, when Roger Penrose proposed that mysterious protein structures called “microtubules” played a role in human consciousness by exploiting quantum effects. Few researchers believe such a hypothesis plausible. Patricia Churchland, a neurophilosopher at the University of California, San Diego, memorably opined that one might as well invoke “pixie dust in the synapses” to explain human cognition.
Fisher’s hypothesis faces the same daunting obstacle that has plagued microtubules: a phenomenon called quantum decoherence. To build an operating quantum computer, you need to connect qubits—quantum bits of information—in a process called entanglement. But entangled qubits exist in a fragile state. They must be carefully shielded from any noise in the surrounding environment. Just one photon bumping into your qubit would be enough to make the entire system “decohere,” destroying the entanglement and wiping out the quantum properties of the system. It’s challenging enough to do quantum processing in a carefully controlled laboratory environment, never mind the warm, wet, complicated mess that is human biology, where maintaining coherence for sufficiently long periods of time is well nigh impossible.
Over the past decade, however, growing evidence suggests that certain biological systems might employ quantum mechanics. In photosynthesis, for example, quantum effects help plants turn sunlight into fuel. Scientists have also proposed that migratory birds have a “quantum compass” enabling them to exploit Earth’s magnetic fields for navigation, and that the human sense of smell could be rooted in quantum mechanics.
Fisher’s notion of quantum processing in the brain broadly fits into this emerging field of quantum biology. Call it quantum neuroscience. He has developed a complicated hypothesis, incorporating nuclear and quantum physics, organic chemistry, neuroscience and biology. While his ideas have met with plenty of justifiable skepticism, some researchers are starting to pay attention. “Those who read his paper (as I hope many will) are bound to conclude: This old guy’s not so crazy,” wrote John Preskill, a physicist at the California Institute of Technology, after Fisher gave a talk there. “He may be on to something. At least he’s raising some very interesting questions.”
Senthil Todadri, a physicist at the Massachusetts Institute of Technology and Fisher’s longtime friend and colleague, is skeptical, but he thinks that Fisher has rephrased the central question—is quantum processing happening in the brain?—in such a way that it lays out a road map to test the hypothesis rigorously. “The general assumption has been that of course there is no quantum information processing that’s possible in the brain,” Todadri said. “He makes the case that there’s precisely one loophole. So the next step is to see if that loophole can be closed.” Indeed, Fisher has begun to bring together a team to do laboratory tests to answer this question once and for all.
* * *
Fisher belongs to something of a physics dynasty: His father, Michael E. Fisher, is a prominent physicist at the University of Maryland, College Park, whose work in statistical physics has garnered numerous honors and awards over the course of his career. His brother, Daniel Fisher, is an applied physicist at Stanford University who specializes in evolutionary dynamics. Matthew Fisher has followed in their footsteps, carving out a highly successful physics career. He shared the prestigious Oliver E. Buckley Prize in 2015 for his research on quantum phase transitions.
So what drove him to move away from mainstream physics and toward the controversial and notoriously messy interface of biology, chemistry, neuroscience and quantum physics? His own struggles with clinical depression.
Fisher vividly remembers that February 1986 day when he woke up feeling numb and jet-lagged, as if he hadn’t slept in a week. “I felt like I had been drugged,” he said. Extra sleep didn’t help. Adjusting his diet and exercise regime proved futile, and blood tests showed nothing amiss. But his condition persisted for two full years. “It felt like a migraine headache over my entire body every waking minute,” he said. It got so bad he contemplated suicide, although the birth of his first daughter gave him a reason to keep fighting through the fog of depression.
Eventually he found a psychiatrist who prescribed a tricyclic antidepressant, and within three weeks his mental state started to lift. “The metaphorical fog that had so enshrouded me that I couldn’t even see the sun—that cloud was a little less dense, and I saw there was a light behind it,” Fisher said. Within nine months he felt reborn, despite some significant side effects from the medication, including soaring blood pressure. He later switched to Prozac and has continuously monitored and tweaked his specific drug regimen ever since.
His experience convinced him that the drugs worked. But Fisher was surprised to discover that neuroscientists understand little about the precise mechanisms behind how they work. That aroused his curiosity, and given his expertise in quantum mechanics, he found himself pondering the possibility of quantum processing in the brain. Five years ago he threw himself into learning more about the subject, drawing on his own experience with antidepressants as a starting point.
Since nearly all psychiatric medications are complicated molecules, he focused on one of the simplest: lithium, which is just one atom — a spherical cow, so to speak, that would be an easier model to study than Prozac, for instance. The analogy is particularly appropriate because a lithium atom is a sphere of electrons surrounding the nucleus, Fisher said. He zeroed in on the fact that the lithium available by prescription from your local pharmacy is mostly a common isotope called lithium-7. Would a different isotope, like the much rarer lithium-6, produce the same results? In theory it should, since the two isotopes are chemically identical. They differ only in the number of neutrons in the nucleus.
When Fisher searched the literature, he found that an experiment comparing the effects of lithium-6 and lithium-7 had been done. In 1986, scientists at Cornell University examined the effects of the two isotopes on the behavior of rats. Pregnant rats were separated into three groups: One group was given lithium-7, one group was given the isotope lithium-6, and the third served as the control group. Once the pups were born, the mother rats that received lithium-6 showed much stronger maternal behaviors, such as grooming, nursing and nest-building, than the rats in either the lithium-7 or control groups.
This floored Fisher. Not only should the chemistry of the two isotopes be the same, the slight difference in atomic mass largely washes out in the watery environment of the body. So what could account for the differences in behavior those researchers observed?
Fisher believes the secret might lie in the nuclear spin, which is a quantum property that affects how long each atom can remain coherent—that is, isolated from its environment. The lower the spin, the less the nucleus interacts with electric and magnetic fields, and the less quickly it decoheres.
Because lithium-7 and lithium-6 have different numbers of neutrons, they also have different spins. As a result, lithium-7 decoheres too quickly for the purposes of quantum cognition, while lithium-6 can remain entangled longer.
Fisher had found two substances, alike in all important respects save for quantum spin, and found that they could have very different effects on behavior. For Fisher, this was a tantalizing hint that quantum processes might indeed play a functional role in cognitive processing.
* * *
That said, going from an intriguing hypothesis to actually demonstrating that quantum processing plays a role in the brain is a daunting challenge. The brain would need some mechanism for storing quantum information in qubits for sufficiently long times. There must be a mechanism for entangling multiple qubits, and that entanglement must then have some chemically feasible means of influencing how neurons fire in some way. There must also be some means of transporting quantum information stored in the qubits throughout the brain.
This is a tall order. Over the course of his five-year quest, Fisher has identified just one credible candidate for storing quantum information in the brain: phosphorus atoms, which are the only common biological element other than hydrogen with a spin of one-half, a low number that makes possible longer coherence times. Phosphorus can’t make a stable qubit on its own, but its coherence time can be extended further, according to Fisher, if you bind phosphorus with calcium ions to form clusters.

In 1975, Aaron Posner, a Cornell University scientist, noticed an odd clustering of calcium and phosphorus atoms in his X-rays of bone. He made drawings of the structure of those clusters: nine calcium atoms and six phosphorus atoms, later called “Posner molecules” in his honor. The clusters popped up again in the 2000s, when scientists simulating bone growth in artificial fluid noticed them floating in the fluid. Subsequent experiments found evidence of the clusters in the body. Fisher thinks that Posner molecules could serve as a natural qubit in the brain as well.

That’s the big-picture scenario, but the devil is in the details that Fisher has spent the past few years hammering out. The process starts in the cell with a chemical compound called pyrophosphate. It is made of two phosphates bonded together – each composed of a phosphorus atom surrounded by multiple oxygen atoms with zero spin. The interaction between the spins of the phosphates causes them to become entangled. They can pair up in four different ways: Three of the configurations add up to a total spin of one (a “triplet” state that is only weakly entangled), but the fourth possibility produces a zero spin, or “singlet” state of maximum entanglement, which is crucial for quantum computing.
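For readers who want the notation, the four pairing configurations described above are the standard triplet and singlet states of two spin-½ nuclei. This is textbook quantum mechanics rather than anything specific to Fisher’s paper:

$$
|T_{+1}\rangle = |{\uparrow\uparrow}\rangle, \qquad
|T_{0}\rangle = \tfrac{1}{\sqrt{2}}\bigl(|{\uparrow\downarrow}\rangle + |{\downarrow\uparrow}\rangle\bigr), \qquad
|T_{-1}\rangle = |{\downarrow\downarrow}\rangle, \qquad
|S\rangle = \tfrac{1}{\sqrt{2}}\bigl(|{\uparrow\downarrow}\rangle - |{\downarrow\uparrow}\rangle\bigr)
$$

The three triplet states carry total spin one; the singlet state carries total spin zero and is maximally entangled.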
Next, enzymes break apart the entangled phosphates into two free phosphate ions. Crucially, these remain entangled even as they move apart. This process happens much more quickly, Fisher argues, with the singlet state. These ions can then combine in turn with calcium ions and oxygen atoms to become Posner molecules. Neither the calcium nor the oxygen atoms have a nuclear spin, preserving the one-half total spin crucial for lengthening coherence times. So those clusters protect the entangled pairs from outside interference so that they can maintain coherence for much longer periods of time – Fisher roughly estimates it might last for hours, days or even weeks.

In this way, the entanglement can be distributed over fairly long distances in the brain, influencing the release of neurotransmitters and the firing of synapses between neurons – spooky action at work in the brain.
* * *
Researchers who work in quantum biology are cautiously intrigued by Fisher’s proposal. Alexandra Olaya-Castro, a physicist at University College London who has worked on quantum photosynthesis, calls it “a well-thought hypothesis. It doesn’t give answers, it opens questions that might then lead to how we could test particular steps in the hypothesis.”
The University of Oxford chemist Peter Hore, who investigates whether migratory birds’ navigational systems make use of quantum effects, concurs. “Here’s a theoretical physicist who is proposing specific molecules, specific mechanics, all the way through to how this could affect brain activity,” he said. “That opens up the possibility of experimental testing.”
Experimental testing is precisely what Fisher is now trying to do. He just spent a sabbatical at Stanford University working with researchers there to replicate the 1986 study with pregnant rats. He acknowledged the preliminary results were disappointing, in that the data didn’t provide much information, but thinks if it’s repeated with a protocol closer to the original 1986 experiment, the results might be more conclusive.
Fisher has applied for funding to conduct further in-depth quantum chemistry experiments. He has cobbled together a small group of scientists from various disciplines at UCSB and the University of California, San Francisco, as collaborators. First and foremost, he would like to investigate whether calcium phosphate really does form stable Posner molecules, and whether the phosphorus nuclear spins of these molecules can be entangled for sufficiently long periods of time.
Even Hore and Olaya-Castro are skeptical of the latter, particularly Fisher’s rough estimate that the coherence could last a day or more. “I think it’s very unlikely, to be honest,” Olaya-Castro said. “The longest time scale relevant for the biochemical activity that’s happening here is the scale of seconds, and that’s too long.” (Neurons can store information for microseconds.) Hore calls the prospect “remote,” pegging the limit at one second at best. “That doesn’t invalidate the whole idea, but I think he would need a different molecule to get long coherence times,” he said. “I don’t think the Posner molecule is it. But I’m looking forward to hearing how it goes.”
Others see no need to invoke quantum processing to explain brain function. “The evidence is building up that we can explain everything interesting about the mind in terms of interactions of neurons,” said Paul Thagard, a neurophilosopher at the University of Waterloo in Ontario, Canada, to New Scientist. (Thagard declined our request to comment further.)
Plenty of other aspects of Fisher’s hypothesis also require deeper examination, and he hopes to be able to conduct the experiments to do so. Is the Posner molecule’s structure symmetrical? And how isolated are the nuclear spins?
Most important, what if all those experiments ultimately prove his hypothesis wrong? It might be time to give up on the notion of quantum cognition altogether. “I believe that if phosphorus nuclear spin is not being used for quantum processing, then quantum mechanics is not operative in longtime scales in cognition,” Fisher said. “Ruling that out is important scientifically. It would be good for science to know.”
OCT 17, 2016 – An excerpt from Sebastian Seung’s book, Connectome
No road, no trail can penetrate this forest. The long and delicate branches of its trees lie everywhere, choking space with their exuberant growth. No sunbeam can fly a path tortuous enough to navigate the narrow spaces between these entangled branches. All the trees of this dark forest grew from 100 billion seeds planted together. And, all in one day, every tree is destined to die.
This forest is majestic, but also comic and even tragic. It is all of these things. Indeed, sometimes I think it is everything. Every novel and every symphony, every cruel murder and every act of mercy, every love affair and every quarrel, every joke and every sorrow — all these things come from the forest.
You may be surprised to hear that it fits in a container less than one foot in diameter. And that there are seven billion on this earth. You happen to be the caretaker of one, the forest that lives inside your skull. The trees of which I speak are those special cells called neurons. The mission of neuroscience is to explore their enchanted branches — to tame the jungle of the mind.
Neuroscientists have eavesdropped on its sounds, the electrical signals inside the brain. They have revealed its fantastic shapes with meticulous drawings and photos of neurons. But from just a few scattered trees, can we hope to comprehend the totality of the forest?
In the seventeenth century, the French philosopher and mathematician Blaise Pascal wrote about the vastness of the universe:
Let man contemplate Nature entire in her full and lofty majesty; let him put far from his sight the lowly objects that surround him; let him regard that blazing light, placed like an eternal lamp to illuminate the world; let the earth appear to him but a point within the vast circuit which that star describes; and let him marvel that this immense circumference is itself but a speck from the viewpoint of the stars that move in the firmament.
Shocked and humbled by these thoughts, he confessed that he was terrified by “the eternal silence of these infinite spaces.” Pascal meditated upon outer space, but we need only turn our thoughts inward to feel his dread. Inside every one of our skulls lies an organ so vast in its complexity that it might as well be infinite.
As a neuroscientist myself, I have come to know firsthand Pascal’s feeling of dread. I have also experienced embarrassment. Sometimes I speak to the public about the state of our field. After one such talk, I was pummeled with questions. What causes depression and schizophrenia? What is special about the brain of an Einstein or a Beethoven? How can my child learn to read better? As I failed to give satisfying answers, I could see faces fall. In my shame I finally apologized to the audience. “I’m sorry,” I said. “You thought I’m a professor because I know the answers. Actually I’m a professor because I know how much I don’t know.”
Studying an object as complex as the brain may seem almost futile. The brain’s billions of neurons resemble trees of many species and come in many fantastic shapes. Only the most determined explorers can hope to capture a glimpse of this forest’s interior, and even they see little, and see it poorly. It’s no wonder that the brain remains an enigma. My audience was curious about brains that malfunction or excel, but even the humdrum lacks explanation. Every day we recall the past, perceive the present, and imagine the future. How do our brains accomplish these feats? It’s safe to say that nobody really knows.
Daunted by the brain’s complexity, many neuroscientists have chosen to study animals with drastically fewer neurons than humans. The worm shown in Figure 2 lacks what we’d call a brain. Its neurons are scattered throughout its body rather than centralized in a single organ. Together they form a nervous system containing a mere 300 neurons. That sounds manageable. I’ll wager that even Pascal, with his depressive tendencies, would not have dreaded the forest of C. elegans. (That’s the scientific name for the one-millimeter-long worm.)
Every neuron in this worm has been given a unique name and has a characteristic location and shape. Worms are like precision machines mass-produced in a factory: Each one has a nervous system built from the same set of parts, and the parts are always arranged in the same way.
What’s more, this standardized nervous system has been mapped completely. The result is something like the flight maps we see in the back pages of airline magazines. The four-letter name of each neuron is like the three-letter code for each of the world’s airports. The lines represent connections between neurons, just as lines on a flight map represent routes between cities. We say that two neurons are “connected” if there is a small junction, called a synapse, at a point where the neurons touch. Through the synapse one neuron sends messages to the other.
Engineers know that a radio is constructed by wiring together electronic components like resistors, capacitors, and transistors. A nervous system is likewise an assembly of neurons, “wired” together by their slender branches. That’s why the map shown in Figure 3 was originally called a wiring diagram. More recently, a new term has been introduced — connectome. This word invokes not electrical engineering but the field of genomics. You have probably heard that DNA is a long molecule resembling a chain. The individual links of the chain are small molecules called nucleotides, which come in four types denoted by the letters A, C, G, and T. Your genome is the entire sequence of nucleotides in your DNA, or equivalently a long string of letters drawn from this four-letter alphabet.
In the same way, a connectome is the totality of connections between the neurons in a nervous system. The term, like genome, implies completeness. A connectome is not one connection, or even many. It is all of them. In principle, your brain could also be summarized by a diagram that is like the worm’s, though much more complex. Would your connectome reveal anything interesting about you?
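To make the definition concrete, a wiring diagram like the worm’s can be represented in software as a directed graph. The sketch below invents a four-neuron connectome; the names follow the worm’s four-letter convention, but the connections are made up for illustration:

```python
# A toy connectome: neurons as named nodes, synapses as directed edges
# with counts. The connections here are invented, not C. elegans data.
connectome = {
    "AVAL": {"AVBL": 3, "VA01": 7},   # AVAL synapses onto AVBL (3x) and VA01 (7x)
    "AVBL": {"VB02": 5},
    "VA01": {},
    "VB02": {"AVAL": 1},              # a feedback connection
}

total_synapses = sum(sum(targets.values()) for targets in connectome.values())
print(f"{len(connectome)} neurons, {total_synapses} synapses")

# "Connected" in the text's sense: is there a synapse from one neuron to another?
def connected(pre, post):
    return post in connectome.get(pre, {})

print(connected("AVAL", "VA01"))  # True
```

The completeness the word implies is just this: the dictionary holds every connection, not a sample of them.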
The first thing it would reveal is that you are unique. You know this, of course, but it has been surprisingly difficult to pinpoint where, precisely, your uniqueness resides. Your connectome and mine are very different. They are not standardized like those of worms. That’s consistent with the idea that every human is unique in a way that a worm is not (no offense intended to worms!).
Differences fascinate us. When we ask how the brain works, what mostly interests us is why the brains of people work so differently. Why can’t I be more outgoing, like my extroverted friend? Why does my son find reading more difficult than his classmates do? Why is my teenage cousin starting to hear imaginary voices? Why is my mother losing her memory? Why can’t my spouse (or I) be more compassionate and understanding?
This book proposes a simple theory: Minds differ because connectomes differ. The theory is implicit in newspaper headlines like “Autistic Brains Are Wired Differently.” Personality and IQ might also be explained by connectomes. Perhaps even your memories, the most idiosyncratic aspect of your personal identity, could be encoded in your connectome.
Although this theory has been around a long time, neuroscientists still don’t know whether it’s true. But clearly the implications are enormous. If it’s true, then curing mental disorders is ultimately about repairing connectomes. In fact, any kind of personal change — educating yourself, drinking less, saving your marriage — is about changing your connectome.
— Sebastian Seung is Professor of Computational Neuroscience and Physics at the Massachusetts Institute of Technology, where he is currently inventing technologies for mapping connections between the brain’s neurons, and investigating the hypothesis that we are all unique because we are “wired differently.” This article is an excerpt from his book Connectome: How the Brain’s Wiring Makes Us Who We Are.
This article first appeared here: http://sharpbrains.com/blog/2016/10/17/understand-your-connectome-understand-yourself/
Date: October 3, 2016. Source: Columbia’s Mortimer B. Zuckerman Mind Brain Behavior Institute
Columbia scientists have developed a new mathematical model that helps to explain how the human brain’s biological complexity allows it to lay down new memories without wiping out old ones — illustrating how the brain maintains the fidelity of memories for years, decades or even a lifetime. This model could help neuroscientists design more targeted studies of memory, and also spur advances in neuromorphic hardware — powerful computing systems inspired by the human brain.
This work is published online in Nature Neuroscience.
“The brain is continually receiving, organizing and storing memories. These processes, which have been studied in countless experiments, are so complex that scientists have been developing mathematical models in order to fully understand them,” said Stefano Fusi, PhD, a principal investigator at Columbia’s Mortimer B. Zuckerman Mind Brain Behavior Institute, associate professor of neuroscience at Columbia University Medical Center and the paper’s senior author. “The model that we have developed finally explains why the biology and chemistry underlying memory are so complex — and how this complexity drives the brain’s ability to remember.”
Memories are widely believed to be stored in synapses, tiny structures on the surface of neurons. These synapses act as conduits, transmitting the information housed inside electrical pulses that normally pass from neuron to neuron. In the earliest memory models, the strength of electrical signals that passed through synapses was compared to a volume knob on a stereo; it dialed up to boost (or down to lower) the connection strength between neurons. This allowed for the formation of memories.
These models worked extremely well, as they accounted for enormous memory capacity. But they also posed an intriguing dilemma.
“The problem with a simple, dial-like model of how synapses function was that it was assumed their strength could be dialed up or down indefinitely,” said Dr. Fusi, who is also a member of Columbia’s Center for Theoretical Neuroscience. “But in the real world this can’t happen. Whether it’s the volume knob on a stereo, or any biological system, there has to be a physical limit to how much it could turn.”
When these limits were imposed, the memory capacity of these models collapsed. So Dr. Fusi, in collaboration with fellow Zuckerman Institute investigator Larry Abbott, PhD, an expert in mathematical modeling of the brain, offered an alternative: each synapse is more complex than just one dial, and instead should be described as a system with multiple dials.
In 2005, Drs. Fusi and Abbott published research explaining this idea. They described how different dials (perhaps representing clusters of molecules) within a synapse could operate in tandem to form new memories while protecting old ones. But even that model, the authors later realized, fell short of what they believed the brain — particularly the human brain — could hold.
“We came to realize that the various synaptic components, or dials, not only functioned at different timescales, but were also likely communicating with each other,” said Marcus Benna, PhD, an associate research scientist at Columbia’s Center for Theoretical Neuroscience and the first author of today’s Nature Neuroscience paper. “Once we added the communication between components to our model, the storage capacity increased by an enormous factor, becoming far more representative of what is achieved inside the living brain.”
Dr. Benna likened the components of this new model to a system of beakers connected to each other through a series of tubes.
“In a set of interconnected beakers, each filled with different amounts of water, the liquid will tend to flow between them such that the water levels become equalized. In our model, the beakers represent the various components within a synapse,” explained Dr. Benna. “Adding liquid to one of the beakers — or removing some of it — represents the encoding of new memories. Over time, the resulting flow of liquid will diffuse across the other beakers, corresponding to the long-term storage of memories.”
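A toy numerical sketch can make the beaker picture concrete. The following is not the authors’ model from the paper, just a simple diffusion chain under assumed couplings, showing how a trace added to the first “beaker” spreads slowly into deeper ones:

```python
import numpy as np

# Toy "connected beakers": a chain of variables u[0..K-1] coupled to their
# neighbors, with u[0] receiving new "memories" as brief pulses. Early
# beakers react quickly; later beakers change slowly and retain the past.
K, steps, dt = 8, 2000, 0.1
g = 0.5 ** np.arange(K - 1)   # assumed: couplings shrink along the chain
u = np.zeros(K)

for t in range(steps):
    pulse = 1.0 if t % 500 == 0 else 0.0   # a new memory arrives
    flow = g * (u[:-1] - u[1:])            # flow between adjacent beakers
    u[:-1] -= dt * flow
    u[1:] += dt * flow
    u[0] += pulse

print(np.round(u, 3))  # shallow beakers forget quickly; deep ones persist
```

The shrinking couplings are what give the chain a spread of timescales, which is the intuition behind the enormous jump in storage capacity the model reports.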
Drs. Benna and Fusi are hopeful that this work can help neuroscientists in the lab, by acting as a theoretical framework to guide future experiments — ultimately leading to a more complete and more detailed characterization of the brain.
“While the synaptic basis of memory is well accepted, in no small part due to the work of Nobel laureate and Zuckerman Institute codirector Dr. Eric Kandel, clarifying how synapses support memories over many years without degradation has been extremely difficult,” said Dr. Abbott. “The work of Drs. Benna and Fusi should serve as a guide for researchers exploring the molecular complexity of the synapse.”
The technological implications of this model are also promising. Dr. Fusi has long been intrigued by neuromorphic hardware, computers that are designed to imitate a biological brain.
“Today, neuromorphic hardware is limited by memory capacity, which can be catastrophically low when these systems are designed to learn autonomously,” said Dr. Fusi. “Creating a better model of synaptic memory could help to solve this problem, speeding up the development of electronic devices that are both compact and energy efficient — and just as powerful as the human brain.”
This paper is titled: “Computational principles of synaptic memory consolidation.”
This research was supported by the Gatsby Charitable Foundation, the Simons Foundation, the Swartz Foundation, the Kavli Foundation, the Grossman Foundation and Columbia’s Research Initiatives for Science and Engineering (RISE).
The authors report no financial or other conflicts of interest.
The Allen Institute for Brain Science has published and released a comprehensive, high-resolution map of the brain that anyone can access online. The researchers mapped 862 brain structures from a single donor brain.
MAPPING THE BRAIN
Even with our growing knowledge of the cosmos, we are relatively clueless about how our own brains function. That’s why making neurological maps is such an important exercise — it allows us to see the structural basis of how our brains work.
The Allen Institute for Brain Science has just created one of the best maps ever. The Seattle-based organization published a comprehensive, high-resolution atlas of the entire human brain.
“This is the most structurally complete atlas to date and we hope it will serve as a new reference standard for the human brain across different disciplines,” said Ed Lein, investigator at the Allen Institute, in a press release.
The researchers put a donor brain through MRI and diffusion tensor imaging and then sliced it into specific regions. The end result is a map of 862 annotated structures that comprise the human brain.
Studying the brain is so complex that the researchers also had to create an entirely new scanner. The machine can image tissue sections the size of a complete human brain hemisphere at a resolution of roughly one-hundredth the width of a human hair.
And in a bid to make the map a gold standard for brain research, the atlas was published both as a peer-reviewed paper in The Journal of Comparative Neurology and as an online collection that anyone can access.
The new brain atlas fills a niche in a surprising vacuum of reliable brain maps. “Human brain atlases have long lagged behind atlases of the brain of worms, flies or mice, both in terms of spatial resolution and in terms of completeness,” Lein said.
By Dan Elton, September 2, 2016; published on the Singularity Weblog
Recently we have seen a slew of popular films that deal with artificial intelligence – most notably The Imitation Game, Chappie, Ex Machina, and Her. However, despite over five decades of research into artificial intelligence, there remain many tasks that are simple for humans yet beyond the reach of computers. Given the slow progress of AI, for many the prospect of computers with human-level intelligence seems further away today than it did when Isaac Asimov's classic I, Robot was published in 1950. The fact is, however, that the development of neuromorphic chips now offers a plausible path to realizing human-level artificial intelligence within the next few decades.
Starting in the early 2000s there was a realization that neural network models – based on how the human brain works – could solve many tasks that could not be solved by other methods. The buzzphrase 'deep learning' has become a catch-all term for neural network models and related techniques…
Most deep learning practitioners acknowledge that the recent popularity of 'deep learning' is driven by hardware, in particular GPUs. The core algorithms of neural networks, such as the backpropagation algorithm for calculating gradients, were developed in the 1970s and 80s, and convolutional neural networks were developed in the late 90s.
Neuromorphic chips are the logical next step beyond GPUs. While GPU architecture is still designed for computer graphics, neuromorphic chips implement neural networks directly in hardware. Neuromorphic chips are currently being developed by a variety of public and private entities, including DARPA, the EU, IBM, and Qualcomm.
The representation problem
A key difficulty solved by neural networks is the problem of programming conceptual categories into a computer, also called the "representation problem". Programming a conceptual category requires constructing a representation in the computer's memory to which phenomena in the world can be mapped. For example, "Clifford" would be mapped to the categories "dog", "animal", and "pet", while a VW Beetle would be mapped to "car". Constructing a robust mapping is very difficult, since the members of a category can vary greatly in appearance: a "human" may be male or female, old or young, tall or short. Even a simple object, like a cube, will appear different depending on the angle it is viewed from and how it is lit. Since such conceptual categories are constructs of the human mind, it makes sense to look at how the brain itself stores representations. Neural networks store representations in the connections between neurons (called synapses), each of which holds a value called a "weight". Instead of being programmed, neural networks learn what weights to use through a process of training. After observing enough examples, a neural network can categorize new objects it has never seen before, or at least offer a best guess. Today neural networks have become a dominant methodology for solving classification tasks such as handwriting recognition, speech-to-text, and object recognition.
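To make the idea of learned weights concrete, here is a toy example in Python (hypothetical data and a single artificial neuron rather than a full network): the weight values are not programmed in, but emerge from repeatedly nudging them to reduce errors on labeled examples.

```python
import numpy as np

# Toy illustration of representations being learned rather than programmed:
# a single logistic neuron learns to separate two categories of 2-D points.
rng = np.random.default_rng(0)

# Hypothetical training data: two fuzzy clusters standing in for categories.
X = np.vstack([rng.normal(-1, 0.5, (50, 2)), rng.normal(1, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

w = np.zeros(2)  # the synaptic "weights" -- the learned representation
b = 0.0

for _ in range(200):
    p = 1 / (1 + np.exp(-(X @ w + b)))   # predicted probability of class 1
    w -= 0.5 * (X.T @ (p - y)) / len(y)  # gradient step (backpropagation
    b -= 0.5 * np.mean(p - y)            # in its simplest, one-layer form)

# The trained weights generalize to a point never seen during training.
x_new = np.array([0.8, 1.2])
print(1 / (1 + np.exp(-(x_new @ w + b))))  # near 1: classified as class 1
```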
Neural networks are based on simplified mathematical models of how the brain's neurons operate. Today's hardware, however, is very inefficient at simulating neural network models. This inefficiency can be traced to fundamental differences between how the brain operates and how digital computers operate. While computers store information as strings of 0s and 1s, the synaptic "weights" the brain uses to store information can fall anywhere in a range of values, i.e. the brain is analog rather than digital. More importantly, in a computer the number of signals that can be processed at the same time is limited by the number of CPU cores – perhaps 8 to 12 on a typical desktop, or 1,000 to 10,000 on a supercomputer. While 10,000 sounds like a lot, it is tiny compared to the brain, which simultaneously processes up to a trillion (1,000,000,000,000) signals in a massively parallel fashion.
Low power consumption
The two main differences between brains and today's computers (parallelism and analog storage) contribute to another difference: the brain's energy efficiency. Natural selection made the brain remarkably energy efficient, since hunting for food is difficult. The human brain consumes only 20 watts of power, while a supercomputing complex capable of simulating a tiny fraction of the brain can consume millions of watts. The main reason for this is that computers operate at much higher frequencies than the brain, and power consumption typically grows with the cube of frequency. Additionally, as a general rule digital circuitry consumes more power than analog; for this reason, some parts of today's cellphones are built with analog circuits to improve battery life. A final reason for the high power consumption of today's chips is that they require all signals to be perfectly synchronized by a central clock, requiring a timing distribution system that complicates circuit design and increases power consumption by up to 30%. Copying the brain's energy-efficient features (low frequencies, massive parallelism, analog signals, and asynchronicity) makes a lot of economic sense and is currently one of the main driving forces behind the development of neuromorphic chips.
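As a back-of-the-envelope illustration of the frequency argument (taking the rough cubic rule above at face value; all numbers are illustrative, not measurements):

```python
# Rough sketch of the cubic frequency/power rule cited above: if dynamic
# power scales approximately with f^3, then running slower is dramatically
# cheaper per unit of circuitry.

def relative_power(f_new, f_old):
    """Relative dynamic power under the rough P proportional-to-f^3 assumption."""
    return (f_new / f_old) ** 3

# Dropping a 3 GHz core to 1 GHz would cut its dynamic power roughly 27x...
print(relative_power(1.0, 3.0))   # ~0.037
# ...while neurons operate at well under 1 kHz, i.e. about 1e-6 GHz.
print(relative_power(1e-6, 3.0))  # vanishingly small
```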
Another difference between neuromorphic chips and conventional computer hardware is that, like the brain, they are fault-tolerant: if a few components fail, the chip continues functioning normally. Some neuromorphic chip designs can sustain defect rates as high as 25%. This is very different from today's computer hardware, where the failure of a single component usually renders the entire chip unusable. The need for precise fabrication has driven up the cost of chip production exponentially as component sizes have shrunk. Neuromorphic chips tolerate looser fabrication tolerances and are thus cheaper to make.
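A toy software experiment (illustrative only; real neuromorphic fault tolerance is a hardware property) shows why distributed representations degrade gracefully: knocking out a sizeable fraction of the 'synapses' in a random network perturbs its output gradually rather than breaking it outright.

```python
import numpy as np

# Zero out a fraction of the "synapses" in a toy network and measure how
# much the output changes -- it degrades gradually instead of failing hard.
rng = np.random.default_rng(2)
W = rng.normal(size=(256, 256)) / 16  # toy weight matrix
x = rng.normal(size=256)              # a fixed input pattern

def output(W, x):
    return np.tanh(W @ x)

baseline = output(W, x)
for defect_rate in (0.05, 0.25):
    mask = rng.random(W.shape) > defect_rate  # True where components survive
    err = np.linalg.norm(output(W * mask, x) - baseline) / np.linalg.norm(baseline)
    print(f"{defect_rate:.0%} defects -> relative output change {err:.2f}")
```

A conventional CPU offers no such slack: a single broken transistor in the wrong place corrupts every computation that passes through it.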
The Crossnet approach
Many different designs are being pursued, with varying degrees of brain-like architecture. Some chips, like Google's tensor processing unit – which powered DeepMind's much-lauded victory in Go – are proprietary. Plenty of designs for neuromorphic hardware can be found in the academic literature, though. Many designs use a pattern called a crossbar latch, which is a grid of nanowires connected by 'latching switches'. At Stony Brook University, Professor Konstantin K. Likharev has designed a neuromorphic network called the "Crossnet".
[Figure above depicts a layout, showing two ‘somas’, or circuits that simulate the basic functions of a neuron. The green circles play the role of synapses. From presentation of K.K. Likharev, used with permission.]
One possible layout is shown above. Electronic devices called 'somas' play the role of the neuron's cell body: adding up the inputs and firing an output. In neuromorphic hardware, somas may mimic neurons with several different levels of sophistication, depending on what is required for the task at hand. For instance, somas may generate spikes (sequences of pulses), just like neurons in the brain. There is growing evidence that sequences of spikes in the brain carry more information than the average firing rate alone, which had previously been considered the most important quantity. Spikes are carried through the two types of neural wiring, axons and dendrites, represented by the red and blue lines in figure 2. The green circles are connections between these wires that play the role of synapses. Each of these 'latching switches' must be able to hold a 'weight', encoded in either a variable capacitance or a variable resistance. In principle, memristors would be an ideal component here, if a mass-producible version could be developed. Crucially, the entire crossnet architecture can be implemented in traditional silicon-based ("CMOS"-like) technology. Each crossnet (as shown in the figure) is designed so that it can be stacked, with additional wires connecting somas on different layers. In this way, neuromorphic crossnet technology can achieve component densities that rival the human brain.
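For a flavor of what a soma computes, here is a sketch in Python of a leaky integrate-and-fire neuron, one of the simpler spiking models a soma might implement (the Crossnet somas are analog circuits; this just expresses similar dynamics in software):

```python
import numpy as np

# Leaky integrate-and-fire neuron: integrate input with a leak, and emit a
# spike whenever the accumulated "voltage" crosses a threshold.

def lif_neuron(input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Return the time steps at which the neuron spikes."""
    v, spikes = 0.0, []
    for t, i_in in enumerate(input_current):
        v += dt * (-v / tau + i_in)  # leaky integration of incoming current
        if v >= v_thresh:            # threshold crossing -> output spike
            spikes.append(t)
            v = v_reset              # reset after firing
    return spikes

# Constant drive produces a regular spike train; stronger drive fires faster,
# and precise spike times can carry information beyond the mean rate.
print(lif_neuron(np.full(100, 0.08)))
print(lif_neuron(np.full(100, 0.15)))
```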
Likharev's design is still theoretical, but there are already several neuromorphic chips in production, such as IBM's TrueNorth chip, which features spiking neurons, and Qualcomm's "Zeroth" project. NVIDIA is currently making major investments in deep learning hardware, and the next generation of NVIDIA devices dedicated to deep learning will likely look closer to neuromorphic chips than traditional GPUs. Another important player is the startup Nervana Systems, which was recently acquired by Intel for $400 million. Many governments are investing large amounts of money into academic research on neuromorphic chips as well. Prominent examples include the EU's BrainScaleS project, the UK's SpiNNaker project, and DARPA's SyNAPSE program.
Neuromorphic hardware will make deep learning orders of magnitude faster and more cost-effective, and thus will be a key driver behind enhanced AI in areas such as big data mining, character recognition, surveillance, robotic control, and driverless-car technology. Because neuromorphic chips have low power consumption, it is conceivable that some day in the near future every cell phone will contain a neuromorphic chip performing tasks such as speech-to-text or translating road signs from foreign languages. Currently, apps that perform deep learning tasks must connect to the cloud for the necessary computations. Low power consumption also makes neuromorphic chips attractive for military field robots, which are currently limited by high power consumption that quickly drains their batteries.
According to Prof. Likharev, neuromorphic chips are the only current technology that can conceivably "mimic the mammalian cortex with practical power consumption". Prof. Likharev estimates that his own 'crossnet' technology could in principle implement the same number of neurons and connections as the brain on approximately 10 × 10 cm of silicon. Conceivably, production of such a chip could be practical in only a few years, as most of the requisite technologies are already in place. However, implementing human-level AI or artificial general intelligence (AGI) with a neuromorphic chip will require much more than just creating the requisite number of neurons and connections. The human brain consists of thousands of interacting components or subnetworks. A collection of components and their pattern of connection is known as a 'cognitive architecture'. The cognitive architecture of the brain is largely unknown, but there are serious efforts underway to map it, most notably Obama's BRAIN Initiative and the EU's Human Brain Project, which has the ambitious (some say overambitious) goal of simulating the entire human brain within the next decade. Neuromorphic chips are perfectly suited to testing different hypothetical cognitive architectures and to simulating how cognitive architectures may change with aging or disease. In principle, AGI could also be developed using an entirely different cognitive architecture that bears little resemblance to the human brain.
Considering how much money is being invested in neuromorphic chips, one can already see a path that leads to AGI. The major unknown is how long it will take for a suitable cognitive architecture to be developed. The fundamental physics of neuromorphic hardware is solid: such chips can match the brain in component density and power consumption while running thousands of times faster. Even if some governments seek to ban the development of AGI, it will be realized by someone, somewhere. What happens then is a matter of speculation. If the AGI is capable of recursive self-improvement and has access to the internet, the results could be disastrous for humanity. As discussed by the philosopher Nick Bostrom and others, developing containment and 'constrainment' methods for AI is not as easy as merely 'installing a kill switch' or putting the hardware in a Faraday cage. Therefore, we had best start thinking hard about these issues now, before it is too late.
About the Author:
Dan Elton is a physics PhD candidate at the Institute for Advanced Computational Science at Stony Brook University. He is currently looking for employment in the areas of machine learning and data science. In his spare time he enjoys writing about the effects of new technologies on society. He blogs at www.moreisdifferent.com and tweets at @moreisdifferent.
To see the original article as it appeared on the Singularity Weblog, use this link: https://www.singularityweblog.com/neuromorphic-chips-and-human-level-ai/
New non-invasive technique may lead to low-cost therapy for patients with severe brain injury — possibly for those in a vegetative or minimally conscious state
August 26, 2016
NOTE: this article appeared in the KurzweilAI blog. It was NOT written by me, David Wolf, as indicated above. (That label is an artifact of this format and I can't remove it!)
The non-invasive technique uses ultrasound to target the brain’s thalamus (credit: Martin Monti/UCLA)
UCLA neurosurgeons used ultrasound to “jump-start” the brain of a 25-year-old man from a coma, and he has made remarkable progress following the treatment.
The technique, called “low-intensity focused ultrasound pulsation” (LIFUP), works non-invasively and without affecting intervening tissues. It excites neurons in the thalamus, an egg-shaped structure that serves as the brain’s central hub for processing information.
“It’s almost as if we were jump-starting the neurons back into function,” said Martin Monti, the study’s lead author and a UCLA associate professor of psychology and neurosurgery. “Until now, the only way to achieve this was a risky surgical procedure known as deep brain stimulation, in which electrodes are implanted directly inside the thalamus,” he said. “Our approach directly targets the thalamus but is noninvasive.”
What about using it on vegetative or minimally conscious patients?
Monti cautioned that the procedure requires further study on additional patients before the scientists can determine whether it could be used consistently to help other people recovering from comas.
“It is possible that we were just very lucky and happened to have stimulated the patient just as he was spontaneously recovering,” Monti said.
If the technology helps other people recovering from coma, Monti said, it could eventually be used to build a portable device — perhaps incorporated into a helmet — as a low-cost way to help “wake up” patients, perhaps even those who are in a vegetative or minimally conscious state (MCS). Currently, there is almost no effective treatment for such patients, he said.
Safer than DBS and tDCS
A report on the treatment is published in the journal Brain Stimulation. This is the first time the approach has been used to treat severe brain injury.
Alexander Bystritsky, a UCLA professor of psychiatry, is also a founder of Brainsonix, a Sherman Oaks, California-based company that provided the device (BXPulsar 1001) the researchers used in the study.
That device, about the size of a coffee cup saucer, creates a small sphere of acoustic energy that can be aimed at different regions of the brain to excite brain tissue.
For the new study, researchers placed it by the side of the man’s head and activated it 10 times for 30 seconds each, in a 10-minute period.
Monti said the device is safe because it emits only a small amount of energy — less than a conventional Doppler ultrasound.
“First-in-man” clinical trial
The patient was brought to the Ronald Reagan Medical Center (RRMC) at UCLA after suffering a road-traffic-related severe brain injury, with a field Glasgow Coma Scale (GCS) score of 3 ("severe") and prolonged loss of consciousness (more than 24 hours) post-injury.
Before the procedure began, the man showed only minimal signs of being conscious and of understanding speech. For example, he could perform small, limited movements when asked. By the day after the treatment, his responses had improved measurably.
Three days later, the patient had regained full consciousness and full language comprehension, and he could reliably communicate by nodding his head “yes” or shaking his head “no,” consistent with emergence from MCS (eMCS). He even made a fist-bump gesture to say goodbye to one of his doctors.
“The changes were remarkable,” Monti said.
The technique targets the thalamus because, in people whose mental function is deeply impaired after a coma, thalamus performance is typically diminished. Medications that are commonly prescribed to people who are coming out of a coma only indirectly target the thalamus.
Under the direction of Paul Vespa, a UCLA professor of neurology and neurosurgery at the David Geffen School of Medicine at UCLA, the researchers plan to test the procedure on several more people beginning this fall at the Ronald Reagan UCLA Medical Center. Those tests will be conducted in partnership with the UCLA Brain Injury Research Center and funded in part by the Dana Foundation and the Tiny Blue Dot Foundation.
To read this article in its original posting, including many interesting comments, please use this link: http://www.kurzweilai.net/ultrasound-jump-starts-brain-of-man-in-coma
Human Connectome Project neuroscientists have created a program to make individualized brain maps.
The human brain is a little bit less of a mystery today, thanks to new maps from neuroscientists at Washington University Medical School. Not only did they identify more brain regions than previous maps; they also built a machine-learning program that can re-create the map in any individual brain, which will help scientists and doctors study individual differences in brain structure and disease, and may lead to new ways of diagnosing brain disorders.
The new map of the brain’s outermost crinkled layer, called the cerebral cortex, was published in Nature today. David Van Essen, the lead mapmaker, calls it a landmark study for the Human Connectome Project, which he heads.
Researcher Matthew Glasser says that unlike many previous studies, this map considers several features of the brain simultaneously to mark its boundaries. Some neuroscientists still define brain regions based on a historical map called Brodmann’s areas that was published in 1909. That map divided each half of the brain into 52 regions. Each hemisphere on the new map has 180 regions.
Glasser defined these regions by looking for places where multiple traits—such as the thickness of the cortex, its function, or its connectivity to other regions—were changing together. After drawing the map onto one set of brains, the researchers developed an algorithm to recognize the regions in a new set of brains where the size and boundaries vary from person to person. “It’s not just a map that people can make reference to,” Glasser says. “You can actually find the areas in the individuals that somebody is studying.”
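As a purely illustrative sketch of that idea (random stand-in features and labels, not the actual Human Connectome Project pipeline or its trained classifier), recognizing regions from several co-varying traits at once might look like this:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Illustrative only: classify cortical locations into regions using several
# features at once -- say, cortical thickness, a task-activation value, and
# a connectivity measure. Real pipelines use many more MRI-derived features.

rng = np.random.default_rng(1)
n_locations, n_features, n_regions = 5000, 3, 180

# Hypothetical training data: feature vectors from brains where the
# hand-annotated group map supplies a region label for each location.
X_train = rng.normal(size=(n_locations, n_features))
y_train = rng.integers(0, n_regions, size=n_locations)

clf = RandomForestClassifier(n_estimators=50).fit(X_train, y_train)

# For a new brain, measure the same features at each location and let the
# classifier assign each one to a region, even though region sizes and
# boundaries shift from person to person.
X_new = rng.normal(size=(10, n_features))
print(clf.predict(X_new))
```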
Dani Bassett, a neuroscientist at the University of Pennsylvania who was not involved in the study, says that using this map to better understand individual differences is the most exciting part for her. She also noted the importance of defining brain regions based on both anatomy and function, not one or the other. "It addressed a longstanding question that has been very contentious and they do it in a beautiful, data-driven approach," she says.
Researchers used an MRI scanner to watch brain activity while participants listened to stories. Red and yellow indicate regions of activation.
Duan Xu, a researcher at the University of California, San Francisco, who was not involved in the study, says “it is great to see that the Human Connectome Project is delivering on some of these high-quality investigations of the cortex.” The $40 million National Institutes of Health endeavor to create the most detailed circuit diagram, or connectome, of the human brain began in 2010.
This map could help scientists and doctors create better ways for diagnosing brain disorders, although exactly how it will be used remains to be seen. “I am an optimist,” Xu says. “I think in a few years we should be able to do this in the clinical setting.”
For original article, use this link: https://www.technologyreview.com/s/601940/the-map-of-the-human-brain-is-finally-getting-more-useful/
Coronal section through the neocortex and cerebellum of an adult rhesus monkey brain, labeled with Nissl stain, which marks all neuronal and glial cell bodies. Image credit: Allen Institute for Brain Science (via NeuroscienceNews.com).
Summary: Researchers have released a new, in-depth molecular atlas of brain development in non-human primates. Source: Allen Institute for Brain Science.
Transcriptional atlas sheds crucial light on what makes human brain development distinct.
Researchers at the Allen Institute for Brain Science have published an in-depth analysis of a comprehensive molecular atlas of brain development in the non-human primate. This analysis uncovers features of the genetic code underlying brain development in our close evolutionary relative, while revealing distinct features of human brain development by comparison. The study is based on the NIH Blueprint Non-Human Primate (NHP) Atlas, a publicly available resource created by the Allen Institute and colleagues at the University of California, Davis and the California National Primate Research Center. This resource enables researchers to understand the underpinnings of both healthy brain development and many neuropsychiatric diseases. Analysis of the atlas is featured this week in the journal Nature.
“This is the most complete spatiotemporal map we have for any mammal’s development, and we have it in a model system that provides directly meaningful insight into human brain development, structure, and function,” says Ed Lein, Ph.D., Investigator at the Allen Institute for Brain Science. “This exceptional dataset is useful for exploring precisely where and when genes are active in relation to the events of brain development and the onset of brain disorders.”
“Collaborating with the NIH on this project allowed us to make use of the Allen Institute’s unique capabilities to generate high-quality, large scale data resources that enable the scientific community around the world to make valuable discoveries,” says Allan Jones, Ph.D., CEO of the Allen Institute.
“While we know many of the details of gene expression in the adult brain, mapping gene expression across development has been one of the missing links for understanding the genetics of disorders like autism and schizophrenia,” says Thomas R. Insel, Ph.D., former Director of the National Institute of Mental Health. “This new atlas will be the foundation for the next generation of studies linking the genetics of neurodevelopmental disorders to the development of specific brain pathways.”
The goal of the NHP atlas was to marry the techniques of modern transcriptomics with the rich history of anatomical developmental studies by measuring gene activity at a series of ten important stages in prenatal and postnatal brain development. At each stage a technique called laser microdissection was used to precisely isolate fine layers and nuclei of cortical and subcortical brain regions associated with human psychiatric disease, thereby creating a high resolution time series of the generation and maturation of these brain regions and their underlying cell types. The gene expression data are complemented by neuroimaging and histological and cellular resolution gene expression reference data.
“This time series reveals how genes code for the enormous complexity of the human brain,” says Trygve Bakken, M.D., Ph.D., Scientist II at the Allen Institute for Brain Science. “Prenatal development is a time of exceptionally rapid change reflected in gene usage, yet many of the molecular characteristics of the mature brain are not achieved until surprisingly late in postnatal development when brain development can be affected by physical activity and social interaction.”
Because the atlas targeted areas of the brain associated with human disease, the authors collaborated with colleagues at the Baylor College of Medicine to use this molecular map to pinpoint when and where candidate genes for diseases like autism and schizophrenia become active. Genes associated with autism are particularly active in the prenatal neocortex in newly generated neurons, consistent with other studies and the early onset of autistic pathology. In contrast, genes for schizophrenia become active much later in development, also in neurons in the neocortex, which correlates with the disease’s later onset.
"This tremendous resource is freely available to the research community and will guide important research into the etiology of many developmental disorders for years to come," says Michelle Freund, Ph.D., program officer at the National Institute of Mental Health.
Finally, by comparing these data to similar human and rat gene expression data, the researchers demonstrate that many genes show different developmental trajectories in primates compared to rodents, with far fewer differences between monkey and human. Human brain development is uniquely characterized by an unusually protracted period of developmental plasticity, referred to as neoteny. "We found evidence for genes showing regulation consistent with neoteny, but with a twist," says Lein. A set of human genes showed two patterns: a sharp change in expression earlier than in other species, followed by a prolonged increase lasting longer than in monkeys. "These findings show the value of closely related non-human primates to study shared characteristics of close evolutionary relatives and to identify unique features of the human brain related to our cognitive abilities and susceptibility to certain diseases."
NOTE: The data for the NIH Blueprint Non-Human Primate Atlas are publicly accessible through blueprintnhpatlas.org and with the suite of Allen Institute resources at brain-map.org.
Funding: The project described was supported by contract HHSN-271-2008-0047 from the National Institute of Mental Health. Its contents are solely the responsibility of the authors and do not necessarily represent the official views of the National Institutes of Health or the National Institute of Mental Health.