Cortical simulations on the feline scale and the complexity of models

Billions and trillions

Step by slow, super-computed step, we approach the singularity.

This step: two massively parallel cortical simulations, run at Lawrence Livermore National Laboratory by Rajagopal Ananthanarayanan, Steven Esser, and Dharmendra Modha of the IBM Almaden Research Center, and Horst Simon of the aforementioned laboratory--the same group that previously ran simulations at the scale of mouse and rat cortices. They used a Blue Gene supercomputer (with a whopping 147,456 CPUs and 144 TB of main memory--just wait, ten years from now I'll look back on this sentence and laugh at how little computing power and memory that is). The first, and larger, simulation included 1.6 billion neurons and 8.87 trillion synapses. Human brains still dwarf these numbers: roughly 20 billion neurons and 200 trillion synapses. But it's a cat-sized step: one with the complexity and scale of a feline brain.
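For a rough sense of what those numbers mean, here's a bit of back-of-envelope arithmetic (my own, not from the paper): divide the machine's main memory by the synapse count and you get an upper bound of roughly 16 bytes of storage per synapse.

```python
# Back-of-envelope arithmetic (my own, not from the paper):
# how much memory per synapse does 144 TB allow?
synapses = 8.87e12       # synapses in the larger simulation
main_memory_tb = 144     # Blue Gene main memory, in terabytes
bytes_per_tb = 1e12      # decimal terabytes, for simplicity

bytes_per_synapse = main_memory_tb * bytes_per_tb / synapses
print(f"~{bytes_per_synapse:.0f} bytes available per synapse")  # ~16 bytes
```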

The first simulation used experimentally measured gray-matter thalamocortical connectivity from a cat's visual cortex--that is, the simulation's neurons were connected in a biologically plausible fashion. The software also included phenomenological spiking neurons, individual learning synapses, axonal delays, and dynamic synaptic channels. The second simulation, with 900 million neurons and 9 trillion synapses, used probabilistic connectivity instead.
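To give a flavor of what a "phenomenological spiking neuron" is, here's a minimal sketch using the Izhikevich model as a stand-in: a cheap-to-compute neuron that still produces realistic spiking. The paper's actual neuron and synapse equations may differ, so treat this as an illustration rather than their implementation.

```python
# A minimal phenomenological spiking neuron, using the Izhikevich
# model as a stand-in -- an illustration, not the paper's equations.
def izhikevich_step(v, u, current, dt=1.0, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Advance membrane potential v (mV) and recovery variable u by
    dt ms using coarse Euler integration. Returns (v, u, spiked)."""
    v += dt * (0.04 * v**2 + 5 * v + 140 - u + current)
    u += dt * a * (b * v - u)
    if v >= 30.0:              # spike threshold: emit a spike and reset
        return c, u + d, True
    return v, u, False

# Drive one neuron with a constant input current and count its spikes.
v, u, spikes = -65.0, -13.0, []
for t in range(1000):          # 1000 steps of 1 ms
    v, u, spiked = izhikevich_step(v, u, current=10.0)
    if spiked:
        spikes.append(t)
print(f"{len(spikes)} spikes in 1 s of simulated time")
```

The appeal of phenomenological models is exactly this trade-off: a handful of arithmetic operations per neuron per time step, instead of the detailed biophysics of a full compartmental model--which is what makes billions of neurons tractable at all.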

Speed-wise, the researchers report that their simulations run two to three orders of magnitude slower than real time. With near-perfect weak scaling (doubling the memory resources doubles the size of the model that can be simulated), human-scale models may be just around the corner... well, relatively speaking; the researchers predict it'll happen in less than ten years. Just as soon as there's a supercomputer super enough.
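To make that prediction concrete, here's a rough extrapolation (again my own arithmetic, not the researchers'): if memory requirements grow linearly with synapse count, as ideal weak scaling implies, a human-scale run would need about 23 times the memory of this one.

```python
# Rough weak-scaling extrapolation (my own arithmetic, not the paper's):
# if memory grows linearly with synapse count, how much main memory
# would a human-scale simulation need?
cat_scale_synapses = 8.87e12   # this simulation
human_synapses = 200e12        # rough human cortex estimate
memory_now_tb = 144            # main memory used here

scale_factor = human_synapses / cat_scale_synapses
memory_needed_tb = memory_now_tb * scale_factor
print(f"scale factor: ~{scale_factor:.0f}x")              # ~23x
print(f"memory needed: ~{memory_needed_tb/1000:.1f} PB")  # ~3.2 PB
```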

The research paper is also available on researcher Dharmendra Modha's blog [PDF].

But bigger isn't necessarily better

We may have to wait ten years for human-scale simulations, but we may not need a human-scale platform to build intelligent AI. Researchers at Queen Mary, University of London suggest that bigger isn't necessarily better when it comes to brains: a lot of complexity can be found even in tiny insect brains. Maybe it'll be a swarm of honeybee robots that takes over the world!

The complexity of models

For a time, I was convinced that no model could adequately capture what a human brain does, because every model has to simplify--and thus that no model or computer program would ever be truly intelligent until we had the computing power to make an electronic human. I knew there was value in models, but deep down I retained the conviction that no model, no simulation, no AI would ever manage the same level of complexity or intelligence as a human without being, simply put, a human.

Fortunately, I was relieved of this notion around the time I started taking cognitive science classes: humans aren't the only intelligent creatures; the point of a model is not to recreate the thing you are modeling; and all models simplify some aspect (it's just a matter of choosing which aspects are most important to get exactly right). The world may be its own best representation, as Rodney Brooks so aptly said, but that shouldn't preclude us from simplifying the world to better understand how it works, nor should it, in turn, prevent us from trying to simulate ourselves in software.

I, for one, am looking forward to watching the intelligent honeybee robots and the supercomputer human brains band together to overthrow the government.


Sunday, November 22, 2009 - tags: cognitive-science
