Saturday, September 15, 2012

A Modest Post on the Nature of Consciousness, Self-Awareness, and What Makes Humans Human

The political campaigns are so dismal that I can’t bring myself to think much about them. So, now for something completely different: Let’s discuss some speculative neurology.

Let me see if I can get enough of this into a small enough space that I don’t get bored with it and it’s still semi-intelligible. How brains compute is a moderate obsession of mine, and I’ve come to hold an idea of what I think consciousness and self-awareness are. I’ll also take a crack at what makes humans different from other animals.

To start with, remember that evolution hardly ever throws anything away. If something works and provides a survival advantage, it gets conserved. Other stuff comes along later and may leverage the older stuff, or it may simply evolve to accommodate the older stuff.

All life needs to sense its environment, so that it knows how to react to consume resources, and how to avoid being consumed as resources—food. The first step toward this involved chemical receptors arrayed on the outside of cells. Some interesting molecule would float by, the receptor would glom onto it, and a cascade of chemical reactions would cause the cell to do something. The “something” could be as simple as opening a channel in the cell membrane to consume the molecule, or as complex as using a flagellum to turn toward or run away from the molecule.

Once you get to multi-cellular creatures, more information can be processed if cells specialize to detect certain types of information and relay that information to nerve cells. Nerve cells can then detect patterns of different kinds of information and respond to them with certain actions. Organisms are now capable of learning, but the learned behavior certainly isn’t conscious. It’s merely, “Detect stimulus A, respond with B. Detect stimulus C, respond with D,” and so on.
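
As a toy illustration of that pre-conscious wiring (the stimuli and responses here are made-up placeholders, not claims about any real organism), the whole scheme is nothing more than a lookup table:

```python
# A pre-conscious nervous system as a bare stimulus-response table.
# The stimuli and responses are hypothetical placeholders.
REFLEXES = {
    "food_molecule": "open_membrane_channel",
    "predator_odor": "swim_away",
    "light_gradient": "move_toward",
}

def react(stimulus):
    """Look up a hard-wired response; no memory of context, no deliberation."""
    return REFLEXES.get(stimulus, "do_nothing")

print(react("predator_odor"))  # swim_away
```

Learning, at this stage, just means adding or re-weighting entries in the table.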

Animals can go a long way with this type of system, but it becomes energetically inefficient to strew neural processing nets all over the organism. The next step in complexity is to route all stimuli to a central spot, then have that central spot respond with a set of actions that get routed back out to the portions of the animal that can take action. Welcome to the age of brains. A slight step up from this further simplifies the transport of neural signals via a bundle of neurons that feed into the brain, and we’re now into the phylum Chordata.

Note that, evolutionarily, everything makes sense so far. Differentiating from a single-celled organism with receptors to a multi-celled animal with sets of cells doing the receiving and acting to other cells coordinating the receiving and acting starts out as separate organisms in symbiosis, then slowly evolves into true multi-cellularity. Everything is gradual. Similarly, once you get neural nets distributed around the animal’s body, evolution can gradually make things more energy-efficient by having the processing cells migrate together and more densely interconnect.

But brains at this point are still pretty much just bags of loosely-coupled stimulus-response loops. You get lots of variations of neurons that do very specialized tasks, depending on what kinds of stimuli they’re processing, or what kinds of actions—including motor actions—they’re controlling. If you look at the human brainstem, and those of more primitive chordates, you’ll find lots of neural nets like this. There are lots of slightly different tissues performing slightly different actions.

We’re now getting to the point where the array of stimuli from the environment is so large, so varied, that the brain is unlikely to be able to respond to everything that’s going on all at once. As brains have evolved, the environments into which they’ve evolved have become more and more complex. A single-celled animal in a tide-pool has a very simple environment: It “smells” food and tries to move toward it, and maybe it knows how to secrete something nasty or move away to avoid being eaten. An insect or a chordate on land has a bewildering array of features to its environment. Vision is almost essential, and the arms race between predators and prey floods the brain with conflicting stimuli.

Animals at this point have “memory”; they’ve learned to respond to patterns of stimuli with other patterns of response. But they now need a way to prioritize responses. To do this, the brain links lots of disparate neural nets together with another kind of memory, which can weigh conflicting stimuli, hold them together for long enough to select the important ones, and then coordinate a set of behaviors in response. We call this behavior “attention”, and the anatomical structures that contain the working memory for directing attention are the limbic system and the cortex.
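
One way to picture this prioritization is as a weighted selection over a working-memory buffer. This is a minimal sketch; the salience numbers and the fixed capacity are invented for illustration:

```python
import heapq

def attend(stimuli, capacity=3):
    """Hold conflicting stimuli in a working-memory buffer and pass
    only the strongest few on for a coordinated response.
    `stimuli` maps a stimulus name to a made-up salience strength."""
    return heapq.nlargest(capacity, stimuli, key=stimuli.get)

# Conflicting stimuli competing for a limited response budget.
buffer = {"rustling_grass": 0.9, "berry_smell": 0.4,
          "itch": 0.1, "distant_call": 0.6}
print(attend(buffer, capacity=2))  # ['rustling_grass', 'distant_call']
```

The point of the sketch is only that attention is a selection mechanism: most stimuli are held briefly and then dropped, and a small, high-priority subset drives behavior.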

Even with specialized tissues, it’s unlikely that all of the neurons are set with response patterns solely from genetic instructions. Things at least need to be fine-tuned, and the animal needs to be plastic enough to recover from damage if it’s to live long enough to reproduce. Brainstem elements therefore “learn”—they modify neural connections to get optimal response from the animal, even though the animal’s sensory and motor systems may change (mostly for the worse) over time.

I don’t know a lot about how those systems learn in things without a cortex. I suspect that learning strategies for various brainstem tissues co-evolved with the tissues themselves. But when an animal has a cortex, it can employ a much more powerful learning strategy.

Part of the attentional system works by detecting a moderate stimulus and, if attention is granted, greatly strengthening it by having the attentional system add further stimulus. This mechanism has another purpose. The combination of attention and stimulus can greatly enhance learning. The attention system can now simulate (no ‘t’) chunks of stimulus so that the motor system can “practice” its response—without actually having to experience the stimulus from the environment or execute the response. We’re pretty sure that this is what dreaming is for. If this is true, then most animals with a cortex should dream.

Being a computer geek, I’ll now make a computer analogy that is only slightly accurate, but will give us a framework for moving forward: The brainstem is the brain’s I/O system, the limbic system and cortex are the CPU, and attention is the active task. The learning mechanism doesn’t have a direct analogy here, but you might think of it as the memory refresh logic, constantly reading in memory and rewriting it. Alternatively, learning can be viewed as subtasks that are launched from an executive in the attentional system.

The more cortex you have, the more sophisticated the relationships are that can be held between the heavily-processed I/O from the peripheral sensory systems and the behaviors to be sent out to the motor (and endocrine) systems. Attention is merely the act of progressively abstracting these relationships and deciding to activate a small number of them instead of activating thousands of them.

I think you can have an organism with attention that isn’t what we would consider self-aware, but I’m willing to bet that the next step is where an organism becomes self-aware. That step is easy to evolve: We already have an attentional mechanism that gets stimulated by the outside world. Those stimuli get more and more hierarchically abstracted, so that a small stimulus pattern to the attentional mechanism can represent an incredibly complex stimulus. We have a word for these stimuli: we call them “concepts”.

We also have the ability for the cortical learning mechanism, which must be closely related to the attentional system, to activate concepts during dreaming. It’s only a small step from dreaming to allowing the attentional system to stimulate concepts while it’s awake.

This sounds an awful lot like “thinking” to me, and my guess is that the act of thinking is the hallmark of self-awareness. When the attentional system isn’t being consumed with environmental stimuli, it simply activates concepts based on recent experience. The activation of those concepts causes other concepts to be activated, and the attentional system then chooses which concepts to attend to, in the process stimulating those concepts more strongly, which activates still other concepts, and so on.
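
A crude sketch of that loop, assuming a toy concept network with invented association strengths, where attention always follows the most strongly activated concept:

```python
# "Thinking" as attention repeatedly stimulating concepts: each attended
# concept activates its associates, and the strongest one wins the next
# round of attention. The graph and weights are invented for illustration.
ASSOCIATIONS = {
    "dog":   {"leash": 0.8, "bark": 0.6},
    "leash": {"walk": 0.9},
    "bark":  {"mailman": 0.5},
    "walk":  {"park": 0.7},
}

def think(seed, steps=3):
    """Follow a chain of concept activations starting from `seed`."""
    chain = [seed]
    current = seed
    for _ in range(steps):
        neighbors = ASSOCIATIONS.get(current, {})
        if not neighbors:
            break
        current = max(neighbors, key=neighbors.get)  # attention selects
        chain.append(current)
    return chain

print(think("dog"))  # ['dog', 'leash', 'walk', 'park']
```

A real cortex presumably activates many concepts in parallel and with feedback, but even this greedy chain captures the flavor: no external stimulus is required to keep the process running.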

Note that all of these mechanisms are merely an outgrowth of the more primitive attentional system and the cortical learning system. Again, evolution doesn’t have to add a lot to get greatly enhanced behavior.

If this is right, then almost everything with a cortex is to some extent “self-aware”. So what distinguishes humans from other chordates?

One answer might be “Nothing”. Perhaps human self-awareness and processing is merely better as a matter of degree rather than kind. But there is one last piece of the puzzle that few animals appear to have: language.

At one level, language is easy to extrapolate from what we already have. Since the attentional system can internally stimulate concepts, it may be natural to tag concepts with a motor activity corresponding to a sound. We call these sounds “words”.

A lot of animals are limited in the number of different sounds they can make. It would be natural for them to associate words only with the most important concepts, so that their inventory of sounds is used productively. However, the human vocal tract is capable of producing an astonishing number of sounds. Hence, a human can associate a unique word with almost any concept. Note that the key evolution here isn’t really neural; the evolution of the vocal tract naturally leads to more and more words.

Eventually, there are enough words that ever more complex concepts can be created by stringing words together. Patterns of words become new concepts, eventually leading to syntax, which is just an attentional set of tricks to allow the sequencing of words.
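
As a minimal sketch of that composition step (the lexicon and the "composite concept" representation are entirely hypothetical):

```python
# Once concepts carry word tags, stringing words together yields a new,
# transient composite concept. The lexicon entries are invented examples.
LEXICON = {"big": "BIG_THING", "dog": "DOG", "runs": "RUNNING"}

def compose(words):
    """A crude stand-in for syntax: a word sequence becomes a composite
    concept built from the individual word-concepts, in order."""
    return tuple(LEXICON[w] for w in words)

print(compose(["big", "dog", "runs"]))  # ('BIG_THING', 'DOG', 'RUNNING')
```

The interesting part isn't the lookup, of course; it's that the resulting tuple is itself a new concept that can be attended to, remembered, or transmitted.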

At the end of this process, we have a whole new emergent property, in which transient concepts can be sent as stimuli from one brain to another, via words and syntax. Some of those concepts will become new, permanent concepts. Others may persist just long enough to allow groups to coordinate actions, socialize, or merely have a nice chat about last night’s basketball game. But the end result is a whole new level of emergent behavior that only humans (and perhaps a couple of other species) possess.

Those last tiny steps of evolution are interesting. If I’m right, being human is possible only because we can make a lot of distinct noises, which has led to an explosion in the number of concepts we can handle, which has further driven the expansion of the cortex. If you could give other animals the ability to make more noises (say, by grafting vocoders into their brains at or near birth), you might discover that the differences between humans and animals really were more one of degree than of kind. Perhaps words and syntax are merely an outgrowth of the need to handle more and more concepts. That might make for a future in which the definition of humanity has to be broadened considerably.

UPDATE 9/19/12:  Just fixed some formatting.


Karl Hallowell said...

Your comments on language are thought-provoking.

The idea of installing vocoders in animals is a fascinating one. It's worth noting that some animals, parrots for example, can already speak as humans do, and some of them apparently can carry on conversations, though at a simple level.

A more intelligent animal such as a chimpanzee or dolphin might be able to do a lot with a vocoder.

The point about how language helps us think (or perhaps, directs our thinking), got me thinking in turn about whether language on its own could be improved to help us think better.

One trait of a language is its degree of descriptiveness. For example, a study was done on children from different cultures: the children apparently sorted a pack of crayons and named each color via several different matching games.

There apparently was some degree of correlation between the number of words that the child had for describing colors and their ability to distinguish colors.

So it seems that one way language helps is by increasing our ability to discern between various similar phenomena, such as colors or clouds. This is particularly useful for specialists who need to work with subtle variations such as architects or meteorologists.

I'd say another way is to create abstract models which we can use as a framework for describing specialized situations. For example, there was a peculiar jargon surrounding the dotcom businesses around 2000. One might use words such as disruptive, burn rate, maturity, value add, killer app, IP, startup, etc. These could be used to formally describe businesses (particularly startups) in this sector.

They could also to a limited extent be used outside of the sector, such as describing an expensive vacation ("Ugh, my beach trip had a high burn rate. Let me tell you what happened to my car."), but that would only be useful with people who understood the nomenclature.

I think one of the most powerful uses of language has been in mathematics and the fields of science which use mathematics extensively. Sometimes this is beneficial (such as describing a complex and perhaps unintuitive algorithm) and sometimes it is counterproductive (such as the common academic bias on exams towards math problems over harder to test knowledge).

But the language of mathematics hasn't substantially changed those who use it a lot. They are great at thinking about mathematical concepts, but that doesn't seem to have much effect outside of that realm.

Finally, there's the benefit of easier speaking. For example, counting from one to ten is pretty short, with only one two-syllable word ("seven") in there.

I happen to work in accounting, and a speed trick for counting money faster is to count with single-syllable words in your head. For example, I usually count from one to three or five repeatedly when I count bills. It's somewhat faster than counting from one to twenty-five (for counting standard "clips" of US $1 bills) in your head, and that actually leads to a physical speed boost.

More sophisticated attempts along these lines would be something like the constructed language Esperanto. I gather the primary goal there is to get rid of some linguistic complexity: removing grammatical gender, spelling things the way they sound, and removing most irregular verbs.

But Esperanto speakers aren't, to my knowledge, much better thinkers than non-speakers. Yet again, our attempts to improve language seem to be stumbling over some fundamental obstacle.

For whatever reason, we seem to be on a plateau of what language can, by itself, do for our thinking.

TheRadicalModerate said...


I'm more inclined to think that language is probably as optimal a way of thinking as humans are going to achieve, at least without cybernetic augmentation.

Human cognition has two fundamental axes that are associated with abstract thought. The first has to do with the richness of concepts. When the attention system lights a concept up, how many associated concepts get activated with it? How deep can the attention system see into that network of concepts while still focusing on the initial concept?

In linguistic terms, this represents the richness of a particular word. In my experience, though, the more connotations a word has, the more likely you are to apply stereotypes to it. This would indicate that the attention system has trouble perceiving rich concept networks holistically--it has to flatten things to make the word useful.

The second attribute has to do with how many sequential steps can be held in short-term memory without losing the concepts along the way. If I think of the word "dog", I get an instant idea of "dogness", but I have to sequentially think of attributes associated with a dog before I can place a dog in any kind of context: it has four legs, it's shaggy, it wags its tail, it pants, it's just smart enough to draw the wrong conclusion, and so on. How many of those attributes I can load into short-term memory will determine the quality of my cognition surrounding dogs. This is closely related to coding span and is a key determinant of human intelligence.

Both of these attributes, the richness of the concept and the coding span associated with it, seem to have hard-wired limitations on them. The first limitation puts a limit on the semantic content of any word, and hence any concept. The second limitation puts a limit on how rich spoken syntax can be, because you've got to be able to hold a certain amount of stuff in memory simultaneously to construct a syntactically valid statement.

This may explain why writing something produces such a different quality of thought than saying something or thinking it. When you write, you can go back over what you've written recently, which somewhat extends your coding span, allowing you a richer expression of related concepts.

Languages differ in their organization of concepts, which profoundly influences how native speakers see the world. Language profoundly influences culture. But I'm betting that almost all languages are oriented around words with approximately the same amount of content and syntax that requires the same coding span.

With cybernetic enhancement of attention, this may all go out the window. If an attentional system can handle words with richer semantics and coding span can be increased by one or two orders of magnitude, language is going to be... different.