Tuesday, August 28, 2007

Making Neurons Remember

Recorded activity from the defendant’s brain is fed into a cybrid system. She has pleaded insanity, but the court requires hard evidence. Inside the cybrid, a living network of neurons communicates with a programmable chip and a computer. The system has been trained to recognize ordered and disordered brain activity. While a human psychiatrist might be fooled by an exceptional act, the difference between a calculated dissimulation and mental illness is obvious to the computer. Its assessment appears on the screen: she’s perfectly lucid.
Eshel Ben-Jacob.

This is just one of the applications that Eshel Ben-Jacob foresees for integrated systems. People on the web often mentioned cyborgs, he said of the buzz that followed the first press releases about his work. “The vision that I have is to interface the neural network and the computer in a different way. And to do it with something which is called evolvable hardware.”

Cyborgs are human beings with added technological components. A cybrid is sort of the reverse, starting with a computer and adding living elements. Some technology for such a system has yet to be developed, but cross-disciplinary researchers in physics, biology, and computer technology are making strides toward evolvable hardware and living neuro-hybrids.

Itay Baruchi.

Eshel Ben-Jacob is one of two physicists at Tel Aviv University whose recent research is paving the way for neuro-memory chips. He and his doctoral student Itay Baruchi imprinted rudimentary memories onto a network of neurons interfaced with a computer. Despite the constraints of a tight budget, they are the first to accomplish this feat. And they aren’t even formally neurobiologists.

Neural networks lend themselves to cross-over scientists. Electricity, traditionally the domain of physicists, is the signaling method used by neurons. Although he knew as early as junior high that he was interested in questions of consciousness and intelligence, Ben-Jacob chose to study physics and math to learn a clearer-cut set of rules that govern the universe before moving into the more labyrinthine realm of biology. Baruchi, also approaching the field through physics, encountered Ben-Jacob in his undergraduate years and has worked with him through most of his studies.

The physicist’s approach employed by Ben-Jacob offers a simplified model of neural networks and seeks to decipher the hidden principles that govern their behavior. He views neural networks as electrical circuits. While still complex, electrical circuits are more predictable than live neurons. And neural networks themselves are just a basic component of Ben-Jacob’s real target: the brain. Physicists, as Ben-Jacob pointed out, like to start with the simplest, most fundamental version of a problem.

The problem is memory. How are memories recorded on neural networks? The researchers’ original motivation was pure curiosity about how a brain learns and remembers. Only after they had actually imprinted a memory onto a network did they realize that the experiment had taken a step toward living memory chips.


Inhibiting the Inhibitors
Inna Brainis.

Before researchers can imprint new memories onto a neural network, they must first create a network of neurons interfaced with a computer. Inna Brainis, research assistant in the laboratory of Baruchi and Ben-Jacob, grew theirs on an array of electrodes, following a method that was developed in the last two decades.

Brainis began with cortical material from the brains of day-old rats, which contains both neurons and glial cells. The glial cells are responsible for feeding, protecting, repairing, and cleaning up after the neurons. Research in the last decade has demonstrated that glia do more than just support the neurons; they also serve regulatory purposes in neural communication through chemical transmitters.

Neurons have two ends: axons and dendrites. Axons send out signals, and dendrites receive them.

A neural network, synapses in green.


Across the cell membrane of a neuron, charges separate. The difference in charge between the inside of the neuron and the outside becomes large, storing energy. The same principle stores energy in a capacitor, a common element in electrical circuits: voltage, driven by the difference in charge, builds up between two plates that conduct electricity.

When a capacitor discharges, it quickly releases the energy stored by the charge difference. This is analogous to the firing of a neuron. The voltage across the neuron membrane builds until a threshold difference between inside and outside the neuron is achieved. Then, the neuron fires an action potential.

An action potential is a voltage pulse. The voltage runs along the axons and into the dendrites of connected cells, perhaps as many as 10,000 other neurons. As neurons receive pulses, voltage builds across their membranes. Once they reach the threshold, they also begin to fire, sending the signal on.
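The capacitor analogy above can be sketched as a toy “leaky integrate-and-fire” model. All the parameters here are invented for illustration, not taken from the experiment: membrane voltage builds with incoming current, some charge leaks away, and when the voltage crosses a threshold the neuron fires and resets.

```python
def simulate_lif(input_current, threshold=1.0, leak=0.1, dt=1.0):
    """Toy leaky integrate-and-fire neuron (hypothetical units).

    Returns the time steps at which the neuron fires."""
    v = 0.0
    spikes = []
    for t, i_in in enumerate(input_current):
        v += dt * (i_in - leak * v)   # charge builds up, some leaks away
        if v >= threshold:            # threshold reached: action potential
            spikes.append(t)
            v = 0.0                   # membrane discharges (reset)
    return spikes

# A constant small input makes the neuron fire at regular intervals,
# like the voltage buildup and discharge described above.
spikes = simulate_lif([0.15] * 50)
```

The steady input plays the role of pulses arriving from connected neurons; in a network, each spike here would in turn feed current into other cells.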

The places where axons meet dendrites, the junctions where signals are transmitted between cells, are called synapses. If a network of capacitors is connected by wires, the current can flow in either direction. Astrocytes, glia cells that engulf synapses, can serve as gatekeepers and regulate whether current is allowed to pass. This makes the cell junctions like transistors, another element in electrical circuits.

Although neural networks are far more complex than electrical circuits, it can be useful, in the mind of a physicist, to simplify the system into a circuit of capacitors and transistors.

To create a neural network, Brainis mixed cortical cells in a fluid and poured them over the multi-electrode surface, allowing them to settle into an even layer. They had ten minutes to attach themselves to the plate or else they were rinsed away. Over the next ten days, the neurons that had survived the rinsing sent out axons and dendrites, connecting themselves into a network that exhibits rich spontaneous activity.

Red neurons, green glia, blue cell nuclei.


Ben-Jacob finds the activity resulting from their interconnection surprising. “If you spread the neurons homogeneously, you don’t expect that they would show something that has some order to it. You’d think that it’s a big mess and there would be no sense in the mess.”

However, the connecting neurons adopted a particular firing pattern. In essence, they created their own simple memory. Such firing patterns have been dubbed synchronized bursting events.

“You can think about it like a Christmas tree,” Ben-Jacob suggested. Each flickering light is a neuron. “It’s quiet, quiet, quiet, and then all of a sudden it goes bzzzzz!” The lights flicker in a pattern – red, green, blue, red, yellow and so on. “The neurons fire quite rapidly for, altogether, 200 milliseconds or so, and then it’s quiet again.” A few seconds later, the lights flicker in the same order and with the same timing. The pattern continues at irregular intervals.

The pattern of signals from electrodes. Each row represents an electrode, and each black dot represents an action potential fired near the electrode. The x-axis is time in milliseconds. The burst takes place in about 100 milliseconds.
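A synchronized bursting event could be picked out of raster data like this with a simple sliding-window count. The data and thresholds below are invented for illustration, not from the actual recordings: a window is flagged as a burst when most of the electrodes fire inside it.

```python
def find_bursts(spike_times, n_electrodes, window_ms=100, min_fraction=0.5):
    """spike_times: list of (electrode_id, time_ms) pairs.

    Returns start times of windows in which at least min_fraction
    of the electrodes fired -- a crude synchronized-burst detector."""
    times = sorted(t for _, t in spike_times)
    bursts = []
    for start in times:
        # which electrodes fired inside this window?
        active = {e for e, t in spike_times if start <= t < start + window_ms}
        if len(active) >= min_fraction * n_electrodes:
            # avoid reporting the same burst once per spike
            if not bursts or start - bursts[-1] >= window_ms:
                bursts.append(start)
    return bursts

# Ten electrodes all firing near t = 500 ms form one synchronized burst;
# a lone spike at t = 2000 ms does not.
raster = [(e, 500 + 5 * e) for e in range(10)] + [(3, 2000)]
bursts = find_bursts(raster, n_electrodes=10)
```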


Inspiring firing patterns

Because synchronized bursting events repeat the same firing pattern, they are a form of stored information. Recently, scientists have interpreted them as rudimentary memories, and many researchers have conducted experiments seeking to imprint new memories onto the networks.

Neurons signal among themselves by voltage pulses, so electrical stimulation seemed a natural choice. While electrical pulses successfully changed the course of existing firing patterns, these patterns returned to normal after the stimulation session ended. Researchers turned to chemical stimulants.

Within neural networks, there are two kinds of neurons: excitatory and inhibitory. Stimulating excitatory neurons will increase network activity while stimulating inhibitory neurons reduces it.

A New Way to Teach

Researchers attempted to teach neural networks by reward, increasing network activity, through the stimulation of excitatory neurons. They also attempted to teach by punishment, reducing network activity, through the stimulation of inhibitory neurons or the inhibition of the excitatory neurons. None of these approaches successfully imprinted a new memory.

Baruchi and Ben-Jacob employed a third training method, “teaching by liberation,” as they describe it. Instead of exciting either type of neuron, Baruchi chose to inhibit the effects of the inhibitory neurons. If inhibitory signals are restrained, then the excitatory neurons are essentially free to do as they please. With their creativity unleashed, the neurons form a new synchronized bursting pattern.

Note that inhibitory neurons are not repressed directly. Rather, the inhibitory synapses are dampened, reducing the signals originating from inhibitory neurons. As a result, the inhibitory neuron may still send the command, “Stop that!” but it comes out so muffled that the other neurons don’t obey.
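The idea can be illustrated with a toy calculation (the synaptic weights and threshold below are invented): dampening the inhibitory synapses, rather than driving any neuron directly, lets the excitatory inputs push a neuron past its firing threshold.

```python
def net_input(excitatory, inhibitory, inhibitory_gain=1.0):
    """Summed drive onto a neuron: excitation minus (possibly dampened)
    inhibition. inhibitory_gain < 1 models muffled inhibitory synapses."""
    return sum(excitatory) - inhibitory_gain * sum(inhibitory)

exc = [0.4, 0.5, 0.3]   # excitatory synaptic inputs (invented values)
inh = [0.6, 0.7]        # inhibitory synaptic inputs (invented values)
threshold = 0.5

# With inhibition at full strength, the neuron stays below threshold.
fires_normally = net_input(exc, inh) >= threshold
# With inhibitory synapses dampened to 20%, the same excitation now fires it.
fires_liberated = net_input(exc, inh, inhibitory_gain=0.2) >= threshold
```

The inhibitory inputs are still present in both cases; only their gain changes, mirroring how the muffled “Stop that!” command goes unheeded.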

Stimulated Activity

Further setting their work apart from previous studies, Baruchi and Ben-Jacob placed the stimulant at a specially selected location rather than distributing it across the network. The injections were so small and the concentration dropped so quickly that the stimulant was localized to a minute region around a single electrode. Baruchi injected a tiny droplet every twenty seconds for a total dose of twenty droplets.

Array of electrodes, zoom to neuron next to electrode with syringe in position.


The neurons near this electrode became the starting point for a new synchronized bursting pattern. The firing pattern imprinted through exposure to the chemical stimulant coexisted with the original firing pattern. Now, the network stored two memories.

Twenty-four hours after the first round of stimulant, Baruchi and Ben-Jacob applied a second dose. The second round of stimulant was introduced at a location where neither of the two existing firing patterns began. Because the third firing pattern would begin wherever they placed the stimulant, they would risk overwriting one of the previous firing patterns if they tried to start two near the same point.

Twenty little droplets of stimulant later, a third firing pattern periodically expressed itself in the neural network. The three patterns coexisted in the network for more than forty hours. They are the first scientist-imprinted memories to persist in a neural network.

Synchronized bursting event, repeated over time.


Limited resources, boundless resourcefulness

The method used to deliver these minuscule amounts of stimulant to such a specific location deserves some attention. The electrodes have a diameter slightly smaller than the width of an average person’s hair. Our clumsy human hands cannot manipulate objects on this scale, so positioning a syringe over a particular electrode required some cunning.

A micrometer is a device ordinarily used to measure lengths about a thousand times smaller than a millimeter. By rotating a knob, researchers can move an arm forward and backward on this tiny scale. Baruchi attached the syringe to this arm, allowing researchers to place the stimulant over a single electrode that had been selected according to network activity.

The syringe itself requires a bit of cleverness. How does one make a needle that small? It’s not actually that tough; simply heat a small glass tube until it is soft in the middle and then stretch it thin. Such a tube was mounted on a syringe. The researchers controlled the injections of stimulant very precisely by connecting a second micrometer to the piston of the syringe.

Micromanipulator.


Beyond the ingenuity of the setup, it is highly cost-effective. Most of it is made from standard laboratory materials plus around thirty US dollars in additional supplies.

“When you do pioneering and conceptually daring research, you have a hard time getting financed, since referees question the likelihood of success,” Ben-Jacob explained. “Most of the research was not supported by a grant. It’s lucky that I have students like Itay who are highly motivated, who have the intellectual courage and self-confidence to try something that has never been done. They research for the sake of doing the research.”

After seven years working together, he feels that Baruchi is more of a collaborator and a colleague. This experiment is among Baruchi’s last projects as a student. His thesis is currently under review for his Ph.D.

Student Baruchi has earned the profound respect of Professor Ben-Jacob for his diligence and scientific curiosity. Baruchi was employed part-time at hi-tech companies, doing work in algorithms and optics as he completed his master’s and doctoral degrees. “In some sense he funded the research,” said Ben-Jacob. “That’s something that’s very special. It’s very rare.”

While it is important that Baruchi and Ben-Jacob could manipulate the syringe on a microscopic level, they also needed to see what they were doing. Baruchi improved a special chamber that was originally built by Ronen Segev, a former student of Ben-Jacob. The chamber supports the neural network, records the data from the electrodes, and incorporates a microscope so that researchers can position the syringe.

The chamber is like an incubator, the environment in which neural networks are usually maintained. The temperature was held at 37˚C, or 98.6˚F. Likewise, the humidity and carbon dioxide levels were controlled to keep the environment suitable for growing neural networks, mimicking the conditions inside a mammal’s cranium. The neural network can live inside it for days or weeks during an experiment.

The "Stimulator" supports a neural network while simultaneously allowing researchers to view and manipulate the sample.


Building this chamber, Baruchi incorporated materials available at the laboratory, such as aquarium pumps, in addition to some relatively inexpensive parts that had to be purchased. “It might look like we’re really really poor,” Baruchi said with a laugh. “We’re okay. It’s not that we don’t have money to eat.”

Neuro-memory Chips and Beyond

Right now, Baruchi and Ben-Jacob can only control the starting point of the neuron firing patterns. Ben-Jacob believes that with strategically placed electrical stimulation, as well as the use of other chemicals, scientists may be able to control the order of the firing pattern as well.

Finding a way to structure the neurons will mark another step toward a neuro-memory chip. For this experiment, the neurons were spread homogeneously on the plate. The firing pattern of the neurons depends on their connections, so researchers will need to control their structure before they can make the behavior of the neural networks reproducible.

Perhaps Baruchi will return to neuroscience later in life, but for now, he is starting a company in the field of renewable resources. Teaming up with Yael Hanein from the Faculty of Engineering at Tel Aviv University, Ben-Jacob is already working on controlling the geometry of the neural network using nanotechnology. More specifically, they developed special electrodes made from islands of carbon nanotubes.

Neurons and glia on a carbon nanotube electrode.


These biocompatible electrodes are employed to control the arrangement of the network since the neurons and glia cells prefer to attach to the electrodes. At the same time, thanks to their conductance properties, those electrodes record neural activity and apply electrical stimulations.

In the next stage, they will mount the carbon nanotube electrodes on microfluidic chips. These chips contain tiny channels and gates that could be controlled by a computer, removing the need for the labor-intensive delivery of chemical stimulants via the micromanipulator device.

A computer connected to this integrated system would record and analyze network activity. It could operate the microfluidic chip, delivering stimulant to the neural network and changing its activity. This cybrid, a hybrid of biological material and silicon, could be tuned to accomplish a specific task.

Ben-Jacob sees neuro-memory chips opening the way for evolvable systems. A neural network would be connected to a computer through a programmable chip that uses genetic algorithms.

A programmable chip running a genetic algorithm can change its connections at random, as biological organisms mutate. Then, the chip is tested for fitness. When given a particular input, does it return an appropriate response? If not, the chip mutates until it does.
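The mutate-and-test loop described above can be sketched as a toy genetic algorithm. This is a generic hill-climbing sketch with invented parameters, not the chip’s actual algorithm: a candidate “wiring” is a bit string, fitness is how closely its output matches a target response, and random mutations are kept only when they don’t hurt.

```python
import random

def evolve(target, mutation_rate=0.1, max_generations=10000, seed=0):
    """Evolve a bit string toward a target response by random mutation.

    Returns the best genome found and the generation it was reached."""
    rng = random.Random(seed)
    n_bits = len(target)
    genome = [rng.randint(0, 1) for _ in range(n_bits)]

    def fitness(g):
        # how many output bits match the desired response
        return sum(a == b for a, b in zip(g, target))

    for generation in range(max_generations):
        if fitness(genome) == n_bits:
            return genome, generation
        # mutate: flip connections at random, keep the change if it helps
        mutant = [b ^ (rng.random() < mutation_rate) for b in genome]
        if fitness(mutant) >= fitness(genome):
            genome = mutant
    return genome, max_generations

# Evolve toward an arbitrary 8-bit target response.
target = [1, 0, 1, 0, 1, 0, 1, 0]
best, gens = evolve(target)
```

A real evolvable chip would test fitness against live network activity rather than a fixed bit pattern, but the mutate, test, and select cycle is the same.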

This evolvable system constitutes a “toddler” computer, capable of learning. The neural networks are made from the cortical neurons of day-old rats, “infant networks,” as Ben-Jacob calls them. “If you give them a task, both the network and the evolvable, programmable chip will grow up and develop together during the few days that the network becomes mature.”

For example, neural networks may be used to detect toxins, much like the canaries once used in coal mines. The desired system would report a “one” if toxin was present and a “zero” if the area was toxin-free.

In order to get such a result, scientists would teach the system. A neural network would be exposed to a toxin. It would communicate with the programmable chip, which would change its connections until it reported a one, signifying toxin. In the absence of toxin, the programmable chip would be trained to interpret the neural network’s activity and report zero. A more sophisticated system could even differentiate among different toxins.
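Reduced to its bare bones, that training loop resembles a simple learned readout. Below is a minimal perceptron sketch with invented activity values; the real system would train on recorded network activity, not two-number samples. It learns to report 1 for “toxin” patterns and 0 for clean ones.

```python
def train(samples, labels, epochs=20, lr=0.1):
    """Train a perceptron readout. samples: lists of activity levels;
    labels: 1 = toxin present, 0 = toxin-free."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            # nudge the readout weights toward the correct report
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def report(w, b, x):
    """Report 1 (toxin) or 0 (clean) for an activity pattern x."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Invented training data: toxin raises overall firing activity.
toxin_samples = [[0.9, 0.8], [0.8, 0.9]]
clean_samples = [[0.1, 0.2], [0.2, 0.1]]
w, b = train(toxin_samples + clean_samples, [1, 1, 0, 0])
```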

As in the courtroom scenario, an integrated system could be trained to recognize ordered and disordered minds. Researchers could give it examples of neural activity from healthy and mentally ill brains until the network could tell them apart. Again, a more sophisticated system might identify specific mental illnesses.

In current computer systems, human specialists know exactly how each program runs. Someone had to make each piece of hardware and software; someone designed every magnet, circuit board, and string of code. Evolvable hardware, capable of learning by example, would have unknown methods of deciding whether brain function is ordered or toxins are present. The next generation of computer technology may almost be able to think for itself.

Credits
Baruchi and Ben-Jacob's research: Physical Review E 75, 050901(R) (2007)

Images
All images from Ben-Jacob's group except
4. Network -- Pablo Blinder and Danny Baranes
5. Neurons and glia -- Pablo Blinder and Danny Baranes
7. Delivering stimulant -- Ben-Jacob's group in collaboration with Yael Hanein
10. Carbon nanotube electrode -- Ben-Jacob's group in collaboration with Yael Hanein

3 comments:

  1. What's amazing is: yes, we can simply try to model the neuron cell as a storage node, but what intelligence system drove them to interconnect on their own? That indicates some motive and motor behavior, almost as an amoeba might seek food.

  2. Chips schmips, this could be used someday to heal damaged brains, especially those in comas!

  3. Wow. I'm an undergrad who has been working with brain-computer interfaces, and I wish I saw this sooner. Very brilliant work.