Saturday, August 10, 2013

meta- + pherein = ana- + logos ?

I was asked recently if I knew what the difference between metaphor and analogy is. My first thought: "Of course!" My second: "Wait. No."

Thankfully, the great Google knows all. It took me to my old friends dictionary.com and etymonline.com and had them explain things to me in ways I can understand. (The great Google is too powerful to spend time explaining things to me.)

The first thing I learned is a fun fact. Did you know [met-a-fawr] can be pronounced [met-a-fer]? What is that? British? Who cares, I like it!

Anyway, according to dry definition, the difference between metaphor and analogy is the difference between
"a figure of speech in which a term or phrase is applied to something to which it is not literally applicable in order to suggest a resemblance; something used [...] to represent something else"
and
"agreement or similarity, especially in a certain limited number of features or details; a comparison made to show such similarity."
It's easy to see how these concepts might get conflated. Both involve a comparison between two entities that aren't obviously related. Hopefully etymology will shed some light on this question, which seems, in a certain limited number of details, like a shadow...

Metaphor began in its Greek infancy as meta- "over, across" (or "after," "along with," "beyond," "among," "behind" -- it's a broad prefix) plus pherein "to carry, bear." (The suffix -phore indicates "a person or thing that bears or produces.") Metapherein was a verb synonymous with transfer, carry over, change, and alter, and from its inception it was applied to words used in strange ways. Around 1500 it appeared in the English language with its connotations bizarrely intact.

Analogy, on the other hand, began as a strictly mathematical term, which Plato picked up and began using more broadly. Today it is still used as a formal term in logic and mathematics to mean "a form of reasoning in which a similarity between two or more things is inferred from a known similarity between them in other respects." Analogy came from the Greek word analogia, "proportion," itself born of the marriage between the roots ana- "upon, according to" and logos "ratio" or "word, speech, reckoning."
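(A quick illustration of that older, mathematical sense -- my own example, not the dictionary's or Plato's: the classical analogia is the four-term proportion a : b :: c : d, as in 2 : 4 :: 3 : 6, which is just 2/4 = 3/6. Reasoning "by analogy" in this strict sense means inferring the unknown term from the three you know: if 2 : 4 :: 3 : x, then x must be 6.)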

Analogy appeared in English within several decades of its cousin metaphor. The fine distinction between them has been preserved through Greek, Latin, French, and English cultures for millennia, suggesting we might be the first people to question whether there is a useful difference between them! And we may decide that there is none, but it's worth first finding out what we are leaving behind when we confuse them.


To all my practical readers who want to know, "So can you just tell me when to use which?" -- If you analyze objects with mathematical precision in one of the many realms of science ("The sky reminds me in a particular way of the hues of a Chiquita banana sitting beside a deep orange mango."), you're probably smart and probably using an analogy. If you do the same thing without thinking too hard about it, you're likely creating a metaphor ("Check that tropical smoothie sunset!" "Dude, you're weird.").

Monday, July 22, 2013

"An Assembly of Daemons"

Despite my firm predilection for the symbolic and abstract, I wonder about the neurophysiological mechanics of language processing as much as the next guy.

So I picked up French cognitive neuroscientist Stanislas Dehaene's Reading in the Brain from the library last week. (Published in 2009, it is probably both the local library's youngest book and science's most outdated information.) It turns out that scientists love metaphors too! Go figure. I'd never thought much about how integral analogy must be to an understanding of most natural phenomena and structures. I could go on about this for quite some time, but let me get to the point of today's post: PANDEMONIUM.

In his first chapter, Dehaene is trying to answer the question "How do we read?" from a cognitive neuro perspective, i.e., very literally. He describes our tiny fovea darting spastically back and forth across a sentence to take in the elaborate detail of written characters, etc. He talks about the brain's structure of multiple parallel pathways. Then he starts speaking my language: after a stimulus is recognized as a word, "mental dictionaries open up, one after the other [...] our mind houses a reference library in several volumes." Sweet! I know what dictionaries are! Maybe I'm not totally lost!

Continuing, he writes, "The number of entries in our mental dictionaries is gigantic. The extent of human lexical knowledge is often grossly underestimated. [...] Any reader easily retrieves a single meaning out of at least 50,000 candidate words, in the space of a few tenths of a second, based on nothing more than a few strokes of light on the retina" (42).

Well, that's just fascinating, I thought as I read. But it turns out accessing lexical information is even MORE exciting than the stacks-of-dictionaries metaphor suggests (if that's even possible). If our brains are full of dictionaries, the dictionaries themselves are full of fiery little congressional spirits. This is apparently the most accurate description the last 50 years of scientific thinking could come up with -- which I think is pretty cool. What it means, basically, is that every word we know has a representative crouched in our cortical folds, constantly at the ready to call out its name. Our brains are not like factories pumping out one product at a time or like a librarian leafing through dead pages; they contain assemblies of competitive neurons fighting each other for their chance to fire. They argue about who has the most legitimate claim, who has the best reason for doing his thing. The whole system is chaotic, aggressive, and ultimately very efficient. Every. Word. You. Read. Is. Causing. A. Little. Political. War. In. Your. Brain.

In case my brief description of Dehaene's explanation of the entire nervous system failed to satisfy you, here's some of the direct text. It feels really strange to be reading about reading. Nothing pulls you out of the magic of the moment like a detailed description of the intricate mechanics of that very moment.

"Several models of lexical access manage to imitate the performance of the human reading system under conditions close to those imposed by our nervous system. Almost all of them derive from a set of ideas first defined by Oliver Selfridge in 1959. Selfridge suggested that our lexicon works like a huge assembly of 'daemons,' or a 'pandemonium.' This lively metaphor holds that the mental lexicon can be pictured as an immense semicircle where tens of thousands of daemons compete with each other. Each daemon responds to only one word, and makes this known by yelling whenever the word is called and must be defended. When a letter string appears on the retina, all the daemons examine it simultaneously. Those that think that their word is likely to be present yell loudly. Thus when the word 'scream' appears, the daemon in charge of the response to it begins to shout, but so does its neighbor who codes for the word 'cream.' 'Scream' or 'cream'? After a brief competition, the champion of 'cream' has to yield - it is clear that his adversary has had stronger support from the stimulus string 's-c-r-e-a-m.' At this point the word is recognized and its identity can be passed on to the rest of the system. 
Behind the apparent simplicity of this metaphor lie several key ideas on how the nervous system works during reading: 
- Massive Parallel Processing: All the daemons work at the same time. There is thus no need to serially examine each of the 50,000 words one by one, a procedure whose duration would be proportional to the size of our mental dictionary. The massive parallelism of the pandemonium thus results in a substantial gain in time. 
- Simplicity: Each daemon accomplishes an elementary task by checking to what extent the stimulus letters match its target word. Thus the pandemonium model does not succumb to the pitfall of postulating a homunculus, or the little man who according to folk psychology holds the reins of our brain. (Who controls his brain? Another even tinier homunculus?) In this respect, the pandemonium model can be compared to the philosopher Dan Dennett's motto: 'One discharges fancy homunculi from one's scheme by organizing armies of such idiots to do the work.' 
- Competition and Robustness: Daemons fight for the right to represent the correct word. This competition process yields both flexibility and robustness. The pandemonium automatically adapts to the complexity of the task at hand. When there are no other competitors around, even a rare and misspelled word like 'astrqlabe' can be recognized very quickly - the daemon that represents it, even if it initially shouts softly, always ends up beating all the others by a comfortable margin. If, however, the stimulus is a word such as 'lead,' many daemons will activate (those for 'bead,' 'head,' 'read,' 'lean,' 'leaf,' 'lend'...) and there will be a fierce argument before the 'lead' daemon manages to take over. 
All of these properties, in simplified form, fit with the main characteristics of our nervous system. Composed of close to one hundred billion (10^11) cells, the human brain is the archetype of a massively parallel system where all neurons compete simultaneously. The connections that link them, called synapses, bring them evidence from the external sensory stimulus. Furthermore, some of these synapses are inhibitory, which means that when the source neuron fires, the firing of other neurons is suppressed. The result has been likened by the Canadian neurophysiologist Donald Hebb to a network of 'cell assemblies,' coalitions of neurons that constantly compete. It is therefore no surprise that Selfridge's pandemonium has been a source of inspiration for many theoretical models of the nervous system, including the first neural network models of reading"(42-43).
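Just to make the metaphor concrete for myself, here's a tiny toy sketch in Python of what a pandemonium-style competition could look like. To be clear, this is my own illustration, not Selfridge's or Dehaene's actual model: the word list, the scoring rule, and the function names are all made up for the sake of the example.

# A toy "pandemonium": each daemon scores its word against the stimulus,
# and the loudest daemon wins. Purely illustrative.

LEXICON = ["scream", "cream", "dream", "lead", "bead", "head", "read", "astrolabe"]

def shout(word, stimulus):
    # How loudly the daemon for `word` yells at `stimulus`: count letters that
    # match when the strings are aligned from the right, minus a penalty for
    # any length mismatch. (Real models use far richer evidence than this.)
    matches = sum(1 for a, b in zip(reversed(word), reversed(stimulus)) if a == b)
    return matches - abs(len(word) - len(stimulus))

def recognize(stimulus):
    # All daemons "yell" at once (here, just a loop); the loudest daemon wins.
    volumes = {word: shout(word, stimulus) for word in LEXICON}
    winner = max(volumes, key=volumes.get)
    return winner, volumes

for stimulus in ["scream", "astrqlabe"]:
    winner, volumes = recognize(stimulus)
    print(stimulus, "->", winner)

Run it and the "scream" daemon outshouts its neighbor "cream," and even the misspelled "astrqlabe" still wakes up the "astrolabe" daemon -- just as in the quoted passage.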
Moral of the story: metaphor is everywhere, even underpinning scientific models! I wonder how often this happens? Does anyone know of other examples?

Monday, July 8, 2013

X = Y

If you haven't heard of TED talks, I'm here to bring you good news. The news is, there's an awesome site called ted.com. It has over 1,000 short videos of fascinating talks and performances. They're called TED talks.

You're welcome. 

James Geary, former Europe editor of Time and aphorism writer (definitely did not know that was a thing), gave a great talk on metaphors a couple of years back. He summarized the way metaphors work in some simple and thought-provoking ways (i.e., better than I do) that are worth quoting.

He also made a lot of references to Elvis, going so far as to call him the King of Metaphor or some similar nonsense. Elvis is fine, I guess, but I'll leave those quotes out.

Metaphor lives a secret life all around us.

The personification of a concept: Brilliant. Makes you feel infiltrated, doesn't it? Like your brain is just a plaything for this shadowy Metaphor. Does it have intentions for good? I hope so. 

We utter about 6 metaphors a minute.

What! That's awesome! A good fact for small talk. I'll tell skeptics to take it up with Geary.

Metaphor is a way of thought before it is a way with words.

This is actually a great point. I think it's incorrect to think of language as a tool to communicate independent thoughts, though I know there are a lot of theories out there and a lot of ways to think about this. From what I see and understand, though, language is essential to the structure and content of thought; I suppose you can have one without the other in very particular circumstances, if you give both language and thought very loose definitions... Thoughts on this would be appreciated.

He then quotes Aristotle, whom we'll return to sometime hence:

Metaphor consists in giving the thing a name that belongs to something else.

and elaborates:

When we give something a name that belongs to something else, we give it a whole network of analogies too.
The idea of "giving a name" (especially one that "belongs" to another) is deceptively simple. Again, theories abound. What I can't help but marvel at is the power of our mouths! We can ruin lives by giving names; we can bring down a movement or a nation. That's terrifying. It also reminds me of an English proverb I read recently (probably it is too old to be said anymore, or I am too American to hear it): He that has an ill name is half-hanged. Reputation is just another word for name, right? "Doctor," "alcoholic," "goody-two-shoes." Of course we can restore rightful names, like Robin Hood pries the name "King" from John to give to Richard. Justice and injustice are very much alive and active in the conceptual/linguistic realm. But I digress...



After his overly Elvisy overview, Geary continues in a more scientific mode:

This is the mathematics of metaphor: X = Y.
I love the absurdity of this. That's exactly what metaphor is! And we can't live without it; its illogic undergirds almost all reasoning, at least in the sense that we use language when we reason, and language is so heavily metaphorical. Does this make metaphor inherently misleading, or productive but dangerous? Mathematicians out there, please weigh in.

In answer to the question "How do we make and understand metaphor?" Geary described three steps:


1. Pattern recognition -- not just the recognition of obvious patterns but the creation of them too, such as when we see two triangles in the following image:


2. Conceptual synesthesia, demonstrated in the famous bouba/kiki effect. Which of these shapes below would you name Bouba and which Kiki?


If you called the spiky shape Kiki and the blob Bouba, you're among the 95-98% of American students and Tamil speakers who answered the same way. The researchers responsible for the study "suggest that the kiki/bouba effect has implications for the evolution of language, because it suggests that the naming of objects is not completely arbitrary." That's kind of a big deal! Am I the only one who automatically thinks of Adam in the Garden? But we'll get back to Adam and the Kiki/Bouba guys later.

3. Cognitive dissonance. This was the hardest point for me to get. (Watch the thing and tell me what he's saying, please?) From what I understand, Geary was comparing the difficulty of separating literal and metaphorical truth to the difficulty of naming the ink color of a color word printed in a different color -- the classic Stroop task. (Which is surprisingly difficult, no matter how many times I try.)

Geary then gave an interesting example of the effect of metaphor on perceptions of the NASDAQ that I cannot repeat, and gave some quotes I will repeat:
Combinatory play seems to be the essential feature in productive thought. - Albert Einstein

Language is fossil poetry. - Ralph Waldo Emerson
Geary closes with a fascinating mini-lesson in the etymology of cogito ergo sum. Apparently cogito literally means to shake (agitare) together (co-), making the most accurate translation of this oft-quoted Cartesian proposition (though the original phrase was in French) something like "I shake things up, therefore I am." Intentional/accurate or not, it's a pretty cool phrase, which Geary makes the most of in his closing metaphor:

The mind is a plastic snowdome: most interesting, most beautiful, and most itself when [...] it’s all shook up. And metaphor keeps the mind shaking, rattling, and rolling [...]
As you can imagine, Elvis allusions abound in the uncut version.