
Metaphors for Meaning (b1)


These are some of my favorite metaphors. But are any of them meaningful? This post formalizes, on a basic level, a mathematical framework for interpreting Meaning through metaphors. Specifically, it does this with the hope of indirectly entangling with the more technical context of Neural Nets and Deep Learning. Its end goal is therefore to make an aesthetic argument that may motivate people who are interested in symbolic systems to study this stuff on a more technical level. It is Part 1 of a 5-part series. To start off, I'll talk a little about my sources for this post.


Several of the concepts come from the first half of Gödel, Escher, Bach (GEB). This book is an 800-page Pulitzer-winning tome from the 1970s that laid the foundations for Symbolic Systems, an attempt to create a multidisciplinary theory of the mind by pulling from computer science, linguistics, philosophy, and psychology.

This post also pulls from the very beginning of Steven Pinker's How the Mind Works, a more recent Pulitzer finalist. This book, along with GEB, will help us build the terminology we need to define how to meaningfully interpret symbols.

I also use the premise of a book called Metaphors We Live By, which talks about metaphors as the structure of thought. It claims that everything we interpret is a metaphor, which we will dig into in the first two posts.


I hope this beginning part doesn't turn into a dry glossary, but the words are useful, so bear with me. The goal is to construct a precise understanding of a Meaningful Metaphor. Let's start with an example:

We interpret this metaphor as a correspondence between a symbol (the One Ring) and some concept (the desire for power, and the influence of the small).

So again, an interpretation is a correspondence between a material object (a symbol) and some concept. Through interpretations, a symbol comes to represent abstractions. At the same time, the symbol is also a piece of matter, able to do whatever that kind of matter in that kind of state can do according to the laws of physics and chemistry.

“Tree rings carry information about age, but they also reflect light and absorb staining material. Footprints carry information about animal motions, but they also trap water and cause eddies in the wind.” (Steven Pinker)

Tree rings and footprints can take on a bazillion different interpretations, such as being able to summon the Flying Spaghetti Monster by making a footprint on the fourth concentric ring of The Chosen Tree.

But this is hardly meaningful. Why? In order to formalize a system for answering this, we need to go down a quick rabbit hole (oscillating among a vertical stack of definitions seems necessary for precise definitions that eliminate implicit assumptions). First, let's look at the different characteristics of mappings from some Set A to some other Set B:

Moving from left to right: (1) is a general function. It contains ambiguity (more than one element of Set A maps onto the same element of Set B) and is not exhaustive (there is at least one element of Set B that no element of Set A maps to). The second and third examples decouple these characteristics. The last example (far right) is what is important right now, because it defines an ideal for a metaphor: being isomorphic:

No information is lost in the above transformation [1]. It's unambiguous and exhaustive, like a collision-free hash function for engineers, or a lossless FLAC for audiophiles. It preserves all information in the transformation. It's almost as if Sets A and B are identifying two objects as the same: Set A totally respects the structure of Set B, but just renames the elements. That's the ideal! If you define Math as a language with very precise words, an isomorphism is like an optimal mathematical metaphor.

Put differently: “An isomorphism applies when two complex structures can be mapped onto each other, in such a way that to each part of one structure there is a corresponding part in the other structure, where ‘corresponding’ means that the two parts play similar roles in their respective structures.” (Douglas Hofstadter)
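To make these distinctions concrete, here is a minimal sketch of my own (the sets, names, and mappings are invented for illustration, not taken from GEB or Pinker) that checks a finite mapping for the two properties above: unambiguity (injectivity) and exhaustiveness (surjectivity).

```python
# A minimal sketch: testing a finite mapping from Set A to Set B for the
# two properties discussed above. The sets and mappings are invented examples.

def is_injective(mapping, domain):
    """Unambiguous: no two elements of the domain share the same image."""
    images = [mapping[a] for a in domain]
    return len(images) == len(set(images))

def is_surjective(mapping, domain, codomain):
    """Exhaustive: every element of the codomain is hit by some element of the domain."""
    return {mapping[a] for a in domain} == set(codomain)

def is_isomorphism(mapping, domain, codomain):
    """Unambiguous AND exhaustive: a bijection, the 'ideal metaphor'."""
    return is_injective(mapping, domain) and is_surjective(mapping, domain, codomain)

set_a = {"ring", "footprint", "chalk mark"}   # symbols
set_b = {"age", "animal motion", "message"}   # concepts

# A pure renaming of elements: no information is lost.
renaming = {"ring": "age", "footprint": "animal motion", "chalk mark": "message"}
print(is_isomorphism(renaming, set_a, set_b))  # True

# A lossy "metaphor": two symbols collapse onto one concept (ambiguous),
# and one concept is never reached (non-exhaustive).
lossy = {"ring": "age", "footprint": "age", "chalk mark": "message"}
print(is_isomorphism(lossy, set_a, set_b))     # False
```

The first mapping just renames elements, so nothing is lost; the second loses structure in exactly the two ways the figure above decouples.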

Isomorphisms formalize the ideal of a metaphor into a quantifiable mathematical system. Providing additional mathematical rigor to support this statement requires similarity measures between vectors in high-dimensional spaces, which I'm saving for the next post. For now let's just take what we've got: a metaphor is a less strict version of an isomorphism. That is, some structure is preserved, while some is lost to ambiguity and non-exhaustiveness. Kinda like this cat:

This image (courtesy of Google's Deep Dream) transformed certain pieces of the curtain until it resembled a cat to some significant degree. The cat is not a curtain, nor is the curtain a cat. Therefore they are not isomorphic. But the fuzzy connection is a metaphor: some structure of the curtain (Set A) is preserved in the image of the cat (Set B). In general, the metaphorical commonalities between fuzzy concepts create a conceptual “skeleton” of sorts, like lining up two concepts side by side and mapping all their similarities in a high-dimensional space.

Defining Objective Meaning
Let's take this in two pieces: First, consider platonic meaning as an injection into Reality. An injection is a non-ambiguous but non-exhaustive mapping (see figure below). For example, we all have an idea of what a spoon is in our head, which is created at least by general material characteristics, as well as by the common functions of a spoon. There also exist physical spoons, examples of spoons, that we can hold in our hand and actually eat with.

Reification is the process of making an abstract concept real. And when perceiving a physical object in relation to its abstract generalization, you are inherently going to find additional information: the edge cases of what a spoon can be, its unique traits, quirks, wabi-sabi. Information is lost in the reification, but the benefit is a usable piece of technology from a ‘scientific’ idea. Technology is the great vehicle of reification. So don't be fooled by the reflection of the monk-child below. There is, in fact, a spoon.

This helps us define General Objective Meaning as a subset of interpretations which correspond with reality to some significant degree. The degree of meaning within a concept is synonymous with the degree to which that concept exists within reality.

So if we have a sequence of associations between neurons, or a configuration of bits on a computer chip, which represents a piece of reality in its entirety, then we have an isomorphism with that system. It's the ideal because it captures all the meaning.

In practice, this is hardly ever the case: “All models are wrong, but some are useful.” (George Box) This aphorism is interesting because it frames an isomorphism as a quixotic asymptote, approached by being less wrong.

Less wrong, more Meaning, via Information
Information is preserved wherever causes leave effects. So if there exists some general function that transforms one pile of matter in reference to another, then information is preserved to some degree. And if information is preserved, then the resulting Pile-O-Matter is our Symbol: it “stands for” the state of affairs of the causal Pile-O-Matter.

“Information is a correlation between two things that is produced by a lawful process (as opposed to coming about by sheer chance). We say that the rings in a stump carry information about the age of the tree because their number correlates with the tree’s age (the older the tree, the more rings it has), and the correlation is not a coincidence but is caused by the way trees grow.” (Steven Pinker) [2]
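To make “lawful correlation” concrete, here is a toy sketch of my own (not from Pinker), assuming a deliberately simplified growth rule of exactly one ring per year:

```python
# A toy illustration of information as lawful correlation.
# Assumption (simplification): a tree grows exactly one ring per year.
import random

ages = [random.randint(5, 200) for _ in range(1000)]    # true ages of 1000 trees
rings = [age for age in ages]                            # lawful process: one ring per year
noise = [random.randint(5, 200) for _ in range(1000)]    # sheer chance: an unrelated pile of matter

def correlation(xs, ys):
    """Pearson correlation, written out by hand to keep the sketch dependency-free."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

print(correlation(ages, rings))   # 1.0: the rings carry information about age
print(correlation(ages, noise))   # ~0.0: no lawful process, no information
```

The ring counts correlate with age because the correlation is produced by the growth rule; the random pile does not, so it carries no information about the tree.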

Information Processing & the Symbolic Definition of a Computer
Information is nothing special. What is special is when symbols become active; when they start processing information. This allows a symbol to both stand for some concept and mechanically cause things to happen. Like chips in a computer, or neurons in the brain.

“Now here is an idea. Suppose one were to build a machine with parts that are affected by the physical properties of some symbol. Some lever or electric eye or tripwire or magnet is set in motion by the pigment absorbed by a tree ring, or the water trapped by a footprint, or the light reflected by a chalk mark, or the magnetic charge of a bit of oxide. And suppose that the machine then causes something to happen in some other pile of matter. It burns new marks onto a piece of wood, or stamps impressions into nearby dirt, or charges some other bit of oxide. Nothing special has happened so far; all I have described is a chain of physical events accomplished by a pointless contraption.

“Here is the special step. Imagine that we now try to interpret the newly arranged piece of matter using the scheme according to which the original piece carried information. Say we count the newly burned wood rings and interpret them as the age of some tree at some time, even though they were not caused by the growth of any tree. And let’s say that the machine was carefully designed so that they carried information about something in the world. For example, imagine a machine that scans the rings in a stump, burns one mark on a nearby plank for each ring, moves over to a smaller stump from a tree that was cut down at the same time, scans its rings, and sands off one mark in the plank for each ring. When we count the marks on the plank, we have the age of the first tree at the time that the second one was planted. We would have a kind of rational machine, a machine that produces true conclusions from true premises–not because of any special kind of matter or energy, or because of any part that was itself intelligent or rational. All we have is a carefully contrived chain of ordinary physical events, whose first link was a configuration of matter that carries information. Our rational machine owes its rationality to two properties glued together in the entity we call a symbol: a symbol carries information, and it causes things to happen. (Tree rings correlate with the age of the tree, and they can absorb the light beam of a scanner.) When the caused things themselves carry information, we call the whole thing an information processor, or a computer.” (Steven Pinker)
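The ring-and-plank machine Pinker describes is simple enough to sketch in code. The function below is my own translation of the quoted passage (the names are made up), with integers standing in for physical marks on the plank:

```python
# A sketch of the "rational machine" from the quoted passage: the only
# operations are burning marks and sanding them off, yet the result is a
# true conclusion (an age) drawn from true premises (ring counts).

def plank_machine(rings_in_first_stump, rings_in_second_stump):
    """Burn one mark per ring of the first stump, then sand one off per ring
    of the second. What remains is the age of the first tree at the time the
    second tree was planted (both were cut down at the same time)."""
    marks_on_plank = 0
    for _ in range(rings_in_first_stump):
        marks_on_plank += 1      # burn a mark onto the plank
    for _ in range(rings_in_second_stump):
        marks_on_plank -= 1      # sand a mark off the plank
    return marks_on_plank

# Example: the first tree is 80 years old, the second 30,
# so the first tree was 50 when the second was planted.
print(plank_machine(80, 30))     # 50
```

Nothing in the loops “knows” about trees; the rationality lives entirely in the interpretation scheme glued to the marks.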


[1] The formal definition of an isomorphism: Two vector spaces V and W are said to be isomorphic if there exists an invertible linear transformation (aka an isomorphism) T from V to W.
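As a concrete instance of that definition (my own example, not from the post's sources), the following map on the plane is linear and invertible, hence an isomorphism:

```latex
T(x, y) = (x + y,\; x - y), \qquad
T^{-1}(u, v) = \left( \tfrac{u + v}{2},\; \tfrac{u - v}{2} \right)
```

Because T can be undone exactly, no information is lost in the transformation, which is the sense in which this footnote and the “ideal metaphor” above coincide.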

[2] Human brains were selected because of our ability to use our sense organs to distill useful meaning from the world.
