The Overall Philosophical Consequences
We began with a cognitive semantic analysis of the concepts of events and causation. If one accepts that analysis, a great deal follows. Given that causation is a multivalent radial concept with inherently metaphorical senses, the theory of the one true causation becomes not merely false, but silly. Once we know that it is multivalent, not monolithic, and that it is largely metaphorical, it turns out not to be the kind of thing that could have a single logic or could be an objective feature of the world. Since the concept of causation has ineliminably metaphorical subcases, those forms of causation, as conceptualized metaphorically, cannot literally be objective features of the world. There can be no one true causation.
That does not mean that causation does not exist, that there are no determining factors in the world. If one gives up the correspondence theory of truth and adopts the experientialist account of truth as based on embodied understanding, then there is a perfectly sensible view of causation to be given. We do not claim to know whether the world, in itself, contains “determining factors.” But the world as we normally conceptualize it certainly does. Those determining factors consist in all the very different kinds of situations we call causal.
When we see or hypothesize a determining factor of some kind, we conceptualize it using one of our forms of causation, either literal or metaphorical. If metaphorical, we choose a metaphor with which to conceptualize the situation, preferably a metaphor whose logic is appropriate to the kind of determining factor noticed. Using that metaphor we can make claims about that determining factor. The claims can be “true” relative to our understanding, which itself may be literal or metaphorical.
This does not eliminate all problems of truth with respect to metaphor. It moves many of them to another place, but a more appropriate place. It leads us to ask, “When is a metaphorical conceptualization of a situation apt?” Is it an apt use of metaphor to apply the metaphor of Causal Paths to democracy in the arena of foreign policy? Only relative to a decision concerning the aptness of the metaphor can we draw conclusions on the basis of the Causal Paths metaphor.
Brains tend to optimize on the basis of what they already have, to add only what is necessary. Over the course of evolution, newer parts of the brain have built on, taken input from, and used older parts of the brain. Is it really plausible that, if the sensorimotor system can be put to work in the service of reason, the brain would build a whole new system to duplicate what it could do already?
From a biological perspective, it is eminently plausible that reason has grown out of the sensory and motor systems and that it still uses those systems or structures developed from them. This explains why we have the kinds of concepts we have and why our concepts have the properties they have. It explains why our spatial-relations concepts should be topological and orientational. And it explains why our system for structuring and reasoning about events of all kinds should have the structure of a motor-control system.
It is only from a conservative philosophical position that one would want to believe in the old faculty psychology: the idea that the human mind has nothing about it that animals share, that reason has nothing about it that smells of the body.
Philosophically, the embodiment of reason via the sensorimotor system is of great importance. It is a crucial part of the explanation of why it is possible for our concepts to fit so well with the way we function in the world. They fit so well because they have evolved from our sensorimotor systems, which have in turn evolved to allow us to function well in our physical environment. The embodiment of mind thus leads us to a philosophy of embodied realism. Our concepts cannot be a direct reflection of external, objective, mind-free reality because our sensorimotor system plays a crucial role in shaping them. On the other hand, it is the involvement of the sensorimotor system in the conceptual system that keeps the conceptual system very much in touch with the world.
Our subjective mental life is enormous in scope and richness. We make subjective judgments about such abstract things as importance, similarity, difficulty, and morality, and we have subjective experiences of desire, affection, intimacy, and achievement. Yet, as rich as these experiences are, much of the way we conceptualize them, reason about them, and visualize them comes from other domains of experience. These other domains are mostly sensorimotor domains, as when we conceptualize understanding an idea (subjective experience) in terms of grasping an object (sensorimotor experience) and failing to understand an idea as having it go right by us or over our heads. The cognitive mechanism for such conceptualizations is conceptual metaphor, which allows us to use the physical logic of grasping to reason about understanding.
Metaphor allows conventional mental imagery from sensorimotor domains to be used for domains of subjective experience. For example, we may form an image of something going by us or over our heads (sensorimotor experience) when we fail to understand (subjective experience). A gesture tracing the path of something going past us or over our heads can indicate vividly a failure to understand.
Conceptual metaphor is pervasive in both thought and language. It is hard to think of a common subjective experience that is not conventionally conceptualized in terms of metaphor. But why does such a huge range of conventional conceptual metaphor exist? How is it learned and what are the precise details? What is the mechanism by which we reason metaphorically? And which metaphors are universal (or at least widespread) and why?
George Lakoff and Mark Johnson, Philosophy in the Flesh: The Embodied Mind and Its Challenge to Western Thought.
Patterns in language yield patterns in thought. Extensive research has now demonstrated that differences between languages can yield differences, often subtle ones, in the cognitive habits of their speakers. This finding, commonly referred to as linguistic relativity, has now been supported by dozens of studies on topics like spatial awareness, the perception of time, and the categorisation of colours. For instance, “where” the future and past “are” depends on the language you speak. Similarly, the manner in which you recall and discriminate colours is affected in subtle ways by the basic colour term inventory of your native language. Our tour of the numberless worlds ultimately led to the conclusion that numeric language also yields differences in how people think. Number words, present in the vast majority of the world’s languages (though not all of them), certainly influence quantitative cognition. Only those people who are familiar with number words and counting can exactly differentiate most quantities. The presence of numbers in a language does not just subtly influence how we think about certain quantities, then; it also opens up a door to the world of arithmetic and mathematics. The first step through that door is the realisation that quantities, regardless of size, can be precisely differentiated. But how exactly do numbers first open this door? And what happens after we walk through it?
The findings from numberless worlds suggest plainly that we need numbers to really “get” quantities in ways that are uniquely human. But this raises a paradox. If we need numbers to appreciate most quantities precisely, how did we get numbers in the first place? How could we ever name the amounts in particular sets of items, if we could not recognise the amount?
Given the apparent intractability of this paradox, some have concluded that humans must be innately predisposed to acquire number concepts. But, if we are predisposed to recognise different set sizes as separate abstract entities, then what is the limit to this predisposition? Are we naturally predisposed, for example, to eventually realise that 1,023 is not 1,024? This seems fairly implausible. Framed differently, nativist views on numbers just delay the point at which we reach the paradox.
James Hurford noted that number words are names for the “non-linguistic entities denoted by numbers.” That is, the number words label conceptual entities. In a related vein, Karenleigh Overmann recently suggested that “quantity concepts must surely precede their lexical labels, or there would be nothing to name… A method of invention cannot presuppose that which it invents.” This latter stance is understandable, but it arguably trivialises the extensive evidence according to which words for quantities beyond three do not simply label pre-existing concepts, because these concepts do not exist for most people until they actually learn numbers.
In my view, this is the key to resolving the paradox: words for quantities beyond three make concrete the precise numerical abstractions that are only occasionally and inconsistently made by some people. Some of these people may eventually invent numbers, but if they do not, their fleeting abstractions are not transferred to others. The naming of such ephemeral realisations is what eventually enables people to consistently show the ability to make a simple but powerful realisation, the realisation that sets of quantities greater than three can be identified precisely. This simple realisation has led, in all likelihood more times than could be documented, to the invention of symbols for such larger quantities. These symbols are chiefly verbal in nature, judging from the fact that the overwhelming majority of the world’s cultures have words for such quantities though most cultures traditionally lack written numerals or elaborate tally systems. Some people invented number words to concretise the potentially transient recognition of the existence of exact higher quantities.
Does this mean that number words simply serve as labels for the concepts? Not really. The truth seems a bit more nuanced than the forced dichotomous choice assumed by the paradox. Number words are not simply labels, yet they do describe conceptual realisations that some people sometimes make. The term ‘label’ implies that the words simply denote concepts that we all think about: concepts all humans are born ready to appreciate (at least eventually), regardless of their cultural environment. But clearly not all humans have such concepts at the ready even as adults, and likely most people would never make the relevant realisations that can be described via numbers. Just as clearly, though, some people have made those realisations, even if inconsistently. In those real historical cases in which people managed to describe that realisation with a word, they invented numbers. The concept they named was subsequently recognised by other members of their culture through the adoption of the relevant word(s). Number words are conceptual tools that get passed around with ease, tools most people want to borrow.
Caleb Everett, Numbers and the Making of Us: Counting and the Course of Human Cultures.
In the field of religion there are dogmatists of no-faith as there are of faith, and both seem to me closer to one another than those who try to keep the door open to the possibility of something beyond the customary ways in which we think, but which we would have to find, painstakingly, for ourselves. Similarly as regards science, there are those who are certain, God knows how, of what it is that patient attention to the world reveals, and those who really do not care, because their minds are already made up that science cannot tell them anything profound. Both seem to me profoundly mistaken. Though we cannot be certain what it is our knowledge reveals, this is in fact a much more fruitful position – in fact the only one that permits the possibility of belief. And what has limited the power of both art and science in our time has been the absence of belief in anything except the most diminished version of the world and our selves. Certainty is the greatest of all illusions: whatever kind of fundamentalism it may underwrite, that of religion or of science, it is what the ancients meant by hubris. The only certainty, it seems to me, is that those who believe they are certainly right are certainly wrong. The difference between scientific materialists and the rest is only this: the intuition of the one is that mechanistic application of reason will reveal everything about the world we inhabit, where the intuition of the others leads them to be less sure. Virtually every great physicist of the last century – Einstein, Bohr, Planck, Heisenberg, Bohm, amongst many others – has made the same point. A leap of faith is involved, for scientists as much as anyone. According to Max Planck, ‘Anybody who has been seriously engaged in scientific work of any kind realizes that over the entrance to the gates of the temple of science are written the words: Ye must have faith. 
It is a quality which the scientist cannot dispense with.’ And he continued: ‘Science cannot solve the ultimate mystery of nature. And that is because, in the last analysis, we ourselves are part of nature and therefore part of the mystery that we are trying to solve.’
In this book certainty has certainly not been my aim. I am not so much worried by the aspects that remain unclear, as by those which appear to be clarified, since that almost certainly means a failure to see clearly. I share Wittgenstein’s mistrust of deceptively clear models: and, as Waismann said, ‘any psychological explanation is ambiguous, cryptic and open-ended, for we ourselves are many-layered, contradictory and incomplete beings, and this complicated structure, which fades away into indeterminacy, is passed on to all our actions.’ I am also sympathetic to those who think that sounds like a cop-out. But I do think that things as they exist in practice in the real world, rather than as they exist in theory in our re-presentations, are likely to be intrinsically resistant to precision and clarification. That is not our failure, but an indication of the nature of what we are dealing with. That does not mean we should give up the attempt. It is the striving that enables us to achieve a better understanding, but only as long as it is imbued with a tactful recognition of the limits to human understanding. The rest is hubris.
If it could eventually be shown definitively that the two major ways, not just of thinking, but of being in the world, are not related to the two cerebral hemispheres, I would be surprised, but not unhappy. Ultimately what I have tried to point to is that the apparently separate ‘functions’ in each hemisphere fit together intelligently to form in each case a single coherent entity; that there are, not just currents here and there in the history of ideas, but consistent ways of being that persist across the history of the Western world, that are fundamentally opposed, though complementary, in what they reveal to us; and that the hemispheres of the brain can be seen as, at the very least, a metaphor for these. One consequence of such a model, I admit, is that we might have to revise the superior assumption that we understand the world better than our ancestors, and adopt a more realistic view that we just see it differently – and may indeed be seeing less than they did.
Iain McGilchrist, The Master and His Emissary: The Divided Brain and the Making of the Western World.
Let’s [address] the question of how humans acquired music and language, since it helps us to understand the revolutionary power of imitation. Music and language are skills, and skills are not like physical attributes – bigger wings, longer legs: not only can they be imitated, which obviously physical characteristics on the whole can’t, but in the case of music and language they are reciprocal skills, of no use to individuals on their own, though of more than a little use to a group. An account of the development of skills such as language purely by the competitive force of classical natural selection has to contend not only with the fact that the skills could easily be mimicked by those not genetically related, thus seriously eroding the selective power in favour of the gene, but also with the fact that unless they were mimicked they wouldn’t be much use. Imitation would itself have a selective advantage: it would enable those who were skilled imitators to strengthen the bonds that tied them to others within the group, and make social groups stable and enduring. Those groups that were most cohesive would survive best, and the whole group’s genes would do better, or not, depending on the acquisition of shared skills that promote bonding – such as music, or ultimately language. Those individuals less able to imitate would be less well bound into the group, and would not prosper to the same degree.
The other big selective factor in acquiring skills and fitting in with the group would be flexibility, which comes with expansion of the frontal lobes – particularly the right frontal lobe, which is also the seat of social intelligence. Skills are intuitive, ‘inhabited’ ways of being and behaving, not analytically structured, rule-based techniques. So it may be that we were selected – not for specific abilities, with specific genes for each, such as the ‘language gene(s)’ or the ‘music gene(s)’ – not even ‘group selected’ for such genes – but individually for the dual skills of flexibility and the power to mimic, which are what is required to develop skills in general.
From a philosophical perspective, the discovery of mirror neurons is exciting because it gave us an idea of how motor primitives could have been used as semantic primitives: that is, how meaning could be communicated between agents. Thanks to our mirror neurons, we can consciously experience another human being’s movements as meaningful. Perhaps the evolutionary precursor of language was not animal calls but gestural communication. The transmission of meaning may initially have grown out of the unconscious bodily self-model and out of motor agency, based, in our primate ancestors, on elementary gesturing. Sounds may only later have been associated with gestures, perhaps with facial gestures—such as scowling, wincing, or grinning—that already carried meaning. Still today, the silent observation of another human being grasping an object is immediately understood, because, without symbols or thought in between, it evokes the same motor representation in the parieto-frontal mirror system of our own brain. As Professor Rizzolatti and Dr. Maddalena Fabbri Destro from the Department of Neuroscience at the University of Parma put it: “[T]he mirror mechanism solved, at an initial stage of language evolution, two fundamental communication problems: parity and direct comprehension. Thanks to the mirror neurons, what counted for the sender of the message also counted for the receiver. No arbitrary symbols were required. The comprehension was inherent in the neural organization of the two individuals.”
Such ideas give a new and rich meaning not only to the concepts of “grasping” and “mentally grasping the intention of another human being,” but, more important, also to the concept of grasping a concept—the essence of human thought itself. It may have to do with simulating hand movements in your mind but in a much more abstract manner. Humankind has apparently known this for centuries, intuitively: “Concept” comes from the Latin conceptum, meaning “a thing conceived,” which, like our modern “to conceive of something,” is rooted in the Latin verb concipere, “to take in and hold.” As early as 1340, a second meaning of the term had appeared: “taking into your mind.” Surprisingly, there is a representation of the human hand in Broca’s area, a section of the human brain involved in language processing, speech or sign production, and comprehension. A number of studies have shown that hand/arm gestures and movements of the mouth are linked through a common neural substrate. For example, grasping movements influence pronunciation—and not only when they are executed but also when they are observed. It has also been demonstrated that hand gestures and mouth gestures are directly linked in humans, and the oro-laryngeal movement patterns we create in order to produce speech are a part of this link.
Broca’s area is also a marker for the development of language in human evolution, so it is intriguing to see that it also contains a motor representation of hand movements; here may be a part of the bridge that led from the “body semantics” of gestures and the bodily self-model to linguistic semantics, associated with sounds, speech production, and abstract meaning expressed in our cognitive self-model, the thinking self. Broca’s area is present in fossils of Homo habilis, whereas the presumed precursors of these early hominids lacked it. Thus the mirror mechanism is conceivably the basic mechanism from which language evolved. By providing motor copies of observed actions, it allowed us to extract the action goals from the minds of other human beings—and later to send abstract meaning from one Ego Tunnel to the next.
The mirror-neuron story is attractive not only because it bridges neuroscience and the humanities but also because it illuminates a host of simpler social phenomena. Have you ever observed how infectious a yawn is? Have you ever caught yourself starting to laugh out loud with others, even though you didn’t really understand the joke? The mirror-neuron story gives us an idea of how groups of animals—fish schools, flocks of birds—can coordinate their behavior with great speed and accuracy; they are linked through something one might call a low-level resonance mechanism. Mirror neurons can help us understand why parents spontaneously open their mouths while feeding their babies, what happens during a mass panic, and why it is sometimes hard to break away from the herd and be a hero. Neuroscience contributes to the image of humankind: We are all connected in an intersubjective space of meaning—what Vittorio Gallese calls a “shared manifold.”
Thomas Metzinger, The Ego Tunnel: The Science of the Mind and the Myth of the Self.