What is Mental Representation?

On one influential view, human cognition consists in the manipulation of mental symbols according to “rules of thought”: cognition requires a “language of thought” operating according to computational principles.

Posted on Sunday, August 15, 2021


All representations have content; but do they typically possess other features as well? One proposal is that, for an entity to be a representation, it must also be capable of standing for its object in the absence of that object. On this proposal, for instance, the level of mercury in a thermometer would not represent the temperature of a room, because the mercury cannot stand for that temperature in its absence: if the temperature changed, the level would simply change with it.


Even when this constraint is satisfied, different types of mental representation are distinguishable. For example, throughout the day, the sunflower rotates to face the sun. Moreover, this rotation continues even when the sun is occluded. Consequently, it stands to reason that somewhere there is a physical process that represents the location of the sun and that guides the flower’s rotation while the sun is hidden. However, there is very little a sunflower can do with this representation besides guide its rotation. Humans, in contrast, are not subject to this limitation. For instance, when seeing a cat, a person may represent its presence at that moment. Furthermore, the person may think about the cat in its absence. But, unlike the sunflower, that person can also think arbitrary thoughts about the cat: “That cat was fat”; “That cat belonged to Napoléon,” etc. It seems as if the “cat” representation can be used in the formation of larger—and completely novel—aggregate representations. So, while possessing content is a necessary feature of mental representation, there may be additional features as well.


Some representations play certain roles better than others. For example, a French sentence conveys information to a native French speaker more effectively than does the same sentence in Swahili, despite the two sentences having the same meaning. In this case, the two sentences have the same content yet differ in the way in which they represent it; that is, they utilize different representational formats. The problem of determining the correct representational format (or formats) for mental representation is a topic of ongoing interdisciplinary research in the cognitive sciences.


According to one influential theory, human cognition consists in the manipulation of mental symbols according to “rules of thought”: cognition requires a “language of thought” operating according to computational principles. Individual concepts are the “words” of the language, and rules govern how concepts are assembled into complex thoughts—“sentences” in the language. For example, to think that the cat is on the mat is to take the required concepts (i.e., the “cat” concept, the “mat” concept, the relational concept of one thing being on top of another, etc.) and assemble them into a mental sentence expressing that thought. The theory enjoys a number of benefits, not least that it can explain the human capacity to think arbitrary thoughts.
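To make that compositional picture concrete, here is a minimal sketch in Python (the class names, concept names, and composition rule are illustrative assumptions for the example, not part of the theory itself): a finite stock of primitive concepts plus a single rule of combination already yields many structured, and in some cases entirely novel, “mental sentences.”

```python
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class Concept:
    name: str  # a primitive "word" in the hypothetical language of thought

@dataclass(frozen=True)
class Proposition:
    relation: Concept  # a relational concept, e.g. "on"
    subject: Concept
    obj: Concept

    def __str__(self) -> str:
        return f"{self.relation.name}({self.subject.name}, {self.obj.name})"

# A small stock of primitive concepts (purely illustrative)...
CAT, MAT, NAPOLEON = Concept("cat"), Concept("mat"), Concept("Napoleon")
ON, BELONGED_TO = Concept("on"), Concept("belonged-to")

# ...and one composition rule yields many complex thoughts, including
# combinations never entertained before ("the mat belonged to Napoleon").
for rel, subj, obj in product([ON, BELONGED_TO], [CAT, MAT], [CAT, MAT, NAPOLEON]):
    if subj != obj:
        print(Proposition(rel, subj, obj))
```

Nothing in the sketch is stored ahead of time; the novel combinations fall out of the composition rule, which is the kind of productivity the theory appeals to.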


Just as the grammar for English allows the construction of an infinite number of sentences from a finite set of words, so too, given the right set of rules and a sufficient number of basic concepts, any number of complex thoughts can be assembled. However, artificial neural networks may provide an alternative. Inspired by the structure and functioning of biological neural networks, artificial neural networks consist of networks of interconnected nodes, where each node in a network receives inputs from and sends outputs to other nodes in the network. Networks process information by propagating activation from one set of nodes (the input nodes) through intervening nodes (the hidden nodes) to a set of output nodes.


In the mid-1980s, important theoretical advances in neural network research heralded their emergence as an alternative to the language of thought. Where the latter theory holds that thinking a thought involves assembling some mental sentence from constituent concepts, the neural network account conceives of mental representations as patterns of activity across nodes in a network. Since a set of nodes can be considered as an ordered n-tuple, activity patterns can be understood as points in n-dimensional space. For example, if a network contained two nodes, then at any given moment their activations could be plotted on a two-dimensional plane. Thinking, then, consists in transitions between points in this space.
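As a rough illustration of that picture (a sketch only: the network size, random weights, and sigmoid activation below are assumptions made for the example), activation can be propagated through a tiny network, and each resulting pattern of hidden-node activity can be read as a point in activation space:

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny feedforward network: 3 input nodes -> 2 hidden nodes -> 1 output node.
# The connection weights are random here, purely for illustration.
W_in_hidden = rng.normal(size=(3, 2))
W_hidden_out = rng.normal(size=(2, 1))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def propagate(stimulus):
    """Propagate activation from input nodes through hidden nodes to output nodes."""
    hidden = sigmoid(stimulus @ W_in_hidden)   # pattern of activity across hidden nodes
    output = sigmoid(hidden @ W_hidden_out)
    return hidden, output

# Two different stimuli produce two different hidden activation patterns.
# With 2 hidden nodes, each pattern is a point on a two-dimensional plane;
# "thinking", on this account, is movement between such points.
for stimulus in (np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 1.0])):
    hidden, output = propagate(stimulus)
    print("hidden-layer point:", np.round(hidden, 3), "output:", np.round(output, 3))
```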


Artificial neural networks exhibit a number of features that agree with aspects of human cognition. For example, they are architecturally similar to biological networks, are capable of learning, can generalize to novel inputs, and are resistant to noise and damage. Neural network accounts of mental representation have been defended by thinkers in a variety of disciplines. However, proponents of the language of thought continue to wield a powerful set of arguments against the viability of neural network accounts of cognition. One of these has already been encountered above: humans can think arbitrary thoughts. Detractors charge that networks are unable to account for this phenomenon—unless, of course, they realize a representational system that facilitates the construction of complex representations from primitive components, that is, unless they implement a language of thought.


Regardless, research on artificial neural networks continues, and it is possible that these objections will be met. Moreover, there exist other candidates. One such hypothesis—extensively investigated by the psychologist Stephen Kosslyn—is that mental representations are imagistic, a kind of “mental picture.” For example, when asked how many windows are in their homes, people typically report that they answer by imagining walking through their home. Likewise, in one experiment, subjects are shown a map with objects scattered across it. The map is removed, and they are asked to decide, given two objects, which is closest to a third. The time it takes to decide varies with the distance between the objects.


A natural explanation is that people make use of mental imagery. In the first case, they form an image of their home and mentally explore it; in the second, a mental image of the map is inspected, and the subject “travels” at a fixed speed from one object to another.
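The fixed-speed scanning idea can be made concrete with a small, hedged simulation (the object names, coordinates, scanning speed, and overhead term are invented for illustration and are not Kosslyn’s actual stimuli): if a mental image is inspected by “travelling” across it at constant speed, predicted response time grows linearly with the distance between objects.

```python
import math

# Hypothetical map coordinates (arbitrary units), invented for illustration.
objects = {"hut": (0.0, 0.0), "well": (3.0, 4.0), "tree": (6.0, 0.0), "rock": (1.0, 7.0)}

SCAN_SPEED = 2.0   # distance units "travelled" per second across the mental image
BASE_TIME = 0.4    # seconds of fixed overhead (encoding the question, responding)

def scan_time(a, b):
    """Predicted response time for mentally scanning from object a to object b."""
    (x1, y1), (x2, y2) = objects[a], objects[b]
    distance = math.hypot(x2 - x1, y2 - y1)
    return BASE_TIME + distance / SCAN_SPEED

# Which of the other objects is closest to "hut"? The imagery account predicts
# that the time to reach each candidate is proportional to its distance from "hut".
for target in ("well", "tree", "rock"):
    print(f"hut -> {target}: {scan_time('hut', target):.2f} s")
```

The linear growth of predicted times with distance is the signature pattern the imagery account appeals to.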


The results of the map experiment seem difficult to explain if, for example, the map were represented mentally as a set of sentences in a language of thought, for why would there then be differences in response times? That is, the differences in response times seem to be obtained “for free” from the imagistic format of the mental representations. A recent elaboration on the theory of mental imagery proposes that cognition involves elaborate “scale models” that not only encode spatial relationships between objects but also implement a simulated physics, thereby providing predictions of causal interactions as well. Despite the potential benefits, opponents argue that a nonimagistic account is available for every phenomenon in which mental imagery is invoked and, furthermore, that the purported neuroscientific evidence for the existence of images is inconclusive.


Perhaps the most radical proposal is that there is no such thing as mental representation; that is, cognition cannot be successfully analyzed by positing discrete mental representations, such as those described above. Instead, mathematical equations should be used to describe the behavior of cognitive systems, analogous to the way they are used to describe the behavior of liquids, for example. Such descriptions do not posit contentful representations; instead, they track features of a system relevant to explaining and predicting its behavior. In favor of such a theory, some philosophers have argued that traditional analyses of cognition are insufficiently robust to account for the subtleties of cognitive behavior, while dynamical equations are. That is, certain sorts of dynamical systems are computationally more powerful than traditional computational systems: they can do things (compute functions) that traditional systems cannot. Consequently, the question arises whether an adequate analysis of mental representations will require this additional power. At present the issue remains unresolved.
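For a flavor of what such an equation-based description looks like (a sketch only: the particular equations and parameters are arbitrary choices and are not drawn from any specific dynamical model of cognition), one can track a system’s state variables as they evolve continuously in time, without positing any discrete, contentful symbols:

```python
import numpy as np

# An arbitrary two-variable dynamical system, dx/dt = f(x), integrated with
# Euler's method. The equations track state variables; nothing in them is a
# discrete symbol that gets assembled or manipulated.
def f(state):
    x, y = state
    return np.array([y, -0.5 * y - np.sin(x)])  # a damped, pendulum-like system

state = np.array([1.0, 0.0])  # initial condition
dt = 0.01
trajectory = [state.copy()]
for _ in range(1000):
    state = state + dt * f(state)  # one Euler step
    trajectory.append(state.copy())

print("state after 10 simulated seconds:", np.round(trajectory[-1], 3))
```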


(Excerpt from: Lary Gillman “Nature of Mental Representation”)
