Thursday, November 29, 2007

Nearer Thy Brain to Sim

IBM has validated some of the results from their simulation of a 10,000-neuron cortical column. The human cortex is a handkerchief-sized, six-layer organ that covers the older parts of the brain. (The handkerchief has been wadded up, creating the familiar convoluted appearance.) One of the most interesting things about the cortex is that it appears to work the same way everywhere. So when you hear about a part of the brain having a particular function, you're often hearing about a region of the cortex that, connected in a unique way to some underlying set of more primitive brain structures, is producing a particular behavior. But the cortex doesn't produce that behavior by itself; since it works the same way everywhere, the specialization comes from those unique connections.

A good analogy is that the cortex is like a massively parallel, general-purpose computer and the older portions of the brain are its input-output system. Each of the cortical columns can be thought of as a single processor in that computer. So understanding the architecture and behavior of a column--how it learns and responds to stimuli--is essential to understanding the architecture of human consciousness.

The problem with computer simulation of the cortex is that processors and cortices have a fundamental difference. A computer processor executes hundreds of millions of instructions per second, but the pathways through which the instructions are fetched and the data on which they operate are restricted to a set of no more than hundreds of individual wires. Each neuron in the cortex, on the other hand, can change state (the neural version of an "instruction") fewer than 20 times a second, but the number of connections it makes to other neurons is huge. Some neurons have upwards of 20,000 synapses.

Assuming that a processor running 100 million instructions per second can simulate a neuron state-change in about 100 instructions, one processor could simulate 100,000 neurons performing 10 state-changes per second. There are roughly 20 billion neurons in the human cortex, so it would "only" take 200,000 processors to perform a full simulation if you could somehow distill all the synaptic inputs and outputs down to only a few instructions.
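
Here's that arithmetic as a quick Python sketch. The constants are just the round figures assumed above, not measured values:

    # Back-of-the-envelope: processors needed to simulate cortical neurons,
    # ignoring synaptic traffic entirely.
    MIPS = 100e6            # instructions per second per processor (assumed)
    INSTR_PER_UPDATE = 100  # assumed cost of one neuron state-change
    UPDATES_PER_SEC = 10    # state-changes per neuron per second
    NEURONS = 20e9          # rough neuron count for the human cortex

    neurons_per_processor = MIPS / (INSTR_PER_UPDATE * UPDATES_PER_SEC)
    processors = NEURONS / neurons_per_processor
    print(f"{neurons_per_processor:,.0f} neurons per processor")  # 100,000
    print(f"{processors:,.0f} processors for the whole cortex")   # 200,000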

But the cortex contains about 150 trillion synapses. You can think of those synapses as representing about 500 trillion "wires." (Cognoscenti will forgive my glossing over the topology of axons and dendrites so egregiously.) Assume that each simulated neuron has to do one input and one output per synapse during each state change, and that each such I/O pair costs only 2 instructions. To simulate ten state-changes per second, you've suddenly added another 3 quadrillion instructions per second to your simulation. At 100 MIPS, that's another 30 million processors. Which is not so good.
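
And here's the synaptic traffic added to the same sketch, again using the assumed 2-instruction cost per synapse:

    # Add the synaptic I/O load to the previous estimate.
    SYNAPSES = 150e12       # rough synapse count for the human cortex
    INSTR_PER_SYNAPSE = 2   # assumed cost of one synaptic input/output pair
    UPDATES_PER_SEC = 10    # state-changes per neuron per second
    MIPS = 100e6            # instructions per second per processor

    synapse_ips = SYNAPSES * INSTR_PER_SYNAPSE * UPDATES_PER_SEC
    extra_processors = synapse_ips / MIPS
    print(f"{synapse_ips:.1e} instructions/second for synapses")  # 3.0e+15
    print(f"{extra_processors:,.0f} additional processors")       # 30,000,000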

So computer simulations are all well and good, but it's hard to scale one up to the size of even a rat's brain. However fast the individual processors are, they need to be far more interconnected so they can simulate synapses without expending a ludicrous amount of processor power. This is something we simply don't know how to do today. Wiring is always the limiting factor in computer architecture. Until we have processors that can grow the equivalent of axons and dendrites, we have a problem.

But the ability to produce accurate, large-scale cortical analogues in silicon could be one of the biggest technological advances in human history. There's a lot of evidence that, if you wire up a good-sized hunk of cortex to a massively parallel set of sensors and actuators, the cortex will automatically discover patterns and organize itself to act on those patterns. The nature of the action obviously depends largely on what the actuators--or other output devices--are intended to do. A silicon cortex isn't a panacea--there's still lots of science and engineering to do--but the power of this kind of pattern recognition is such that a broad class of problems, from machine vision and robotics to natural language understanding and production, and literally thousands of others, could be engineered with radically new methodologies. Such a technology even holds the possibility of producing genuinely intelligent, maybe even conscious, machines.

The IBM simulation is a small but key step on the road to this technology. Many others are in the works. The next ten to twenty years are going to be exciting.

I will close by shilling for Jeff Hawkins's book, On Intelligence. Hawkins is not a neuroscientist; he's the computer geek who architected the Palm PDA. His book describes an admittedly broad theory of the architecture of the cortex and how it can be organized to produce the time-variant, predictive, and adaptive behavior that we call intelligence. It's far from the definitive work on modern neuroscience, but it provides a beautifully elegant framework for thinking about the mind. It's completely accessible to non-techies, too.