pre-final draft of an article appearing in:
Annual Review of Cognitive Linguistics 4, 253-268. 2006.

INTERVIEW
LEONARD TALMY. A WINDOWING ONTO CONCEPTUAL STRUCTURE AND LANGUAGE.
PART 2: LANGUAGE AND COGNITION: PAST AND FUTURE.
[Second written interview on my work conducted by Iraide Ibarretxe-Antuñano]

Question 1: In a research career of more than thirty years, you have studied a wide range of linguistic phenomena. However, if you had to briefly characterize your research trajectory in just a few lines, what would you say has been your main concern? In other words, are there any topics or issues that you have been pursuing in your work all throughout these years?

My interest has always been how the mind works, especially at what is now often called higher levels of cognition. As I see it, such cognition includes major "substantive" cognitive systems like perception (in its various modalities), motor organization, affect, thought (including inferencing, planning, and imagining), culture (I proposed a cognitive culture system in Talmy 2000b, ch. 7), and language. Applying to these in turn are such major "operational" cognitive systems as memory, perspective, and attention and consciousness. Further, these can all function with respect to different organized "domains" -- also implemented as systems in cognition -- such as spatial structure, temporal structure, and causal structure. Critical to understanding how all these systems work is determining their principles of organization -- that is, their structure and patterns of operating. Some further major issues that concern these cognitive systems are: how genuinely separate (modular) they are as against grading into each other; how extensively they function with distinct principles of organization vs. with common ones; how they interact and do or do not integrate; and the extent to which they or related forms of them exist in other animals. When I was in college, I think I might have concentrated on any branch of psychology that addressed cognitive structure, if one had existed at the time, or on cultural anthropology as easily as on linguistics. But a combination of factors did lead me to focus on language. Still, I’ve always seen language as one system of mental functioning through which the mind could be studied more generally. So I’m glad that in more recent years, while keeping language as the base of my expertise, I’ve been progressively examining its relations to other cognitive systems under an aegis I call the "overlapping systems model of cognitive organization".

Question 2: Why do you think the analysis of conceptual structure in language is so central and fundamental for the study of language and cognition?

Understanding how the mind works entails understanding the principles of organization that characterize it overall and that characterize its various systems. And understanding the organizing principles of any single cognitive system is not only valuable in its own right, but can also serve as an entree to further understanding those of other systems or of the whole, whether by generalizing the similarities or by contrasting the differences. This certainly holds for language. More specifically, language consists of components with relatively distinct principles of organization, perhaps even distinct principles for different subcomponents within phonology, morphosyntax, and semantics. Each of these sets of organizational principles -- besides their necessity in understanding language as a cognitive system -- needs to be compared for similarities and differences against the principles found in other cognitive systems and in cognition overall so that we can map out how cognition is organized.

Semantics -- that is, how conceptual content is organized in language -- may well have several different subcomponents each with its own set of principles. One such subcomponent may be the system of tropes such as hyperbole, sarcasm, and metaphor. As part of this system’s principles of organization, the speaker produces an expression with certain kinds of cues (depending on the type of trope) that it is not to be interpreted literally and, from this cued input, the hearer uses a specific procedure (one for each type of trope) to construct the actually intended conception. My forthcoming book on attention goes into some detail on the cognitive operations at work here.

But another subcomponent of semantics that I’ve already written much on is the semantics of the closed-class system. This system consists of those classes of forms that have relatively few members and have difficulty adding more. Closed-class forms can be not only morphemes, but also word order patterns, lexical categories, grammatical relations or, importantly, grammatical complexes such as case frames and constructions. While the closed-class system is largely the same as what is generally meant by "grammar", my work has focused on its meaning -- that is, on its representation of conceptual material -- hence, it has focused on the semantics of grammar. The principles of organization for the closed-class system and its conceptual representations include the following features.

There may well be an approximately closed, universally available inventory of concepts that can ever be expressed by closed-class forms. This inventory consists not only of the basic concepts, but also of the immediate conceptual categories that these concepts belong to, and of the large-scale schematic systems that these conceptual categories in turn belong to. The closed-class forms in any one language express a selection of the concepts and categories from the universally available inventory -- no language expresses them all -- though possibly all the schematic systems have at least some closed-class expression in every language. Any individual closed-class form could represent a single basic concept, but more often it represents a schema consisting of a selection of basic concepts in a particular arrangement (as described most fully in Talmy 2006). The crucial finding is that there is a difference in the functions performed by the open-class system and the closed-class system of a language. In the conceptual complex expressed by any portion of discourse, the open-class system determines most of the conceptual content, while the closed-class system determines most of the conceptual structure. As a whole, the universally available inventory -- its basic concepts, conceptual categories, and schematic systems -- constitutes a pervasive and important -- perhaps the most fundamental -- conceptual structuring system of language. And the selection from this inventory present in any single language plays the same role for that language.

It is valuable to have this understanding about language in its own right -- that it has a system dedicated to representing structure, in particular, a system of formal structure for representing conceptual structure. Even more striking, though, is the possibility that this organizational feature might be unique among cognitive systems. To use the colloquial expression, language hands you structure on a silver platter. It has an explicit structured formal system readily distinguished by its peculiar properties that represents conceptual structure as distinguished from conceptual content. For a contrast, consider visual perception. Much of the work of perception psychologists seems to consist of tracing out what they often implicitly take to be structural about vision. Yet there may be no overtly distinct subsystem in perception dedicated to establishing visual structure as distinguished from visual content. Put another way, no readily identifiable system responsible for a "grammar of vision" seems to mark itself out. As one consequence, there is no definitive way to conclude that any particular visual feature is a feature of structure or of content, nor that such a distinction can be made. For example, to which of these two, if either, does color belong, as with the colors present in a scene? Further, there seems to be no definitive way to settle on some point along the visual processing stream, from the retina through the visual cortex and beyond, as being responsible for the main system of structuring in the perception of a visual scene or activity (whatever such structure might be thought to be). Accordingly, insofar as comparisons can be made across different cognitive systems, since language has an explicit indicator of its structural properties, it may offer the best entree to a cross-systems study of cognitive organization.

Question 3: How exactly do you propose to study how language shapes concepts?

The main methodology is first to look through successive languages for their closed-class forms and the concepts that these represent. There will of course be procedural issues to address, such as what to do about mid-sized classes like Mandarin numeral classifiers and Atsugewi Cause prefixes; or whether the two attributes of a closed class -- small size and resistance to increase -- should be decoupled for cases like the class of Mayan position verbs, which is large but rather stable. It is also important to work with the most fine-grained semantic characterizations of the closed-class forms that are available.

With this approach, semanticists can derive three main kinds of information directly from languages. One is the conceptual complexes -- or schemas -- that closed-class forms across the world’s languages represent. Another is the subclasses into which individual languages seem to group certain sets of closed-class forms and the schemas they represent through various formal patterns -- for example, on the basis of mutual exclusivity in their occurrence. The third is the patterns in individual languages in which particular closed-class forms and their represented schemas occur together in a phrase or longer portion of discourse, and the conceptual structuring that results from such cooccurrence.

Semanticists can then analyze this material in several ways that are not given directly by languages themselves -- and, accordingly, are more open to reinterpretation. First, they can compare the schemas they find against each other to abstract from them the basic conceptual elements that make them up. These conceptual elements should be no finer than is justified by the articulations along which the schemas actually differ from each other. Semanticists can then group these basic conceptual elements into larger conceptual categories -- noting whether, in doing this, they are relying on the subclasses that languages have already been observed to form, or on their own judgments. Next, semanticists can group these conceptual categories into still larger schematic systems but, in this case, it should be clear that such a move is mostly based on their own judgments, since languages do not seem to exhibit any formal patterns distinguishing one putative schematic system from another.

This whole procedure might in principle have led to finding an open-ended set of conceptual components, categories, and schematic systems. But what we in fact find is that the inventory of these elements and groupings is approximately closed. It might at first seem to anyone starting out on a cross-linguistic examination of closed-class meanings that these multiply indefinitely but, in time, successive languages yield progressively more conceptual entities already found, until only a slowing trickle of novelties remains. Actually, I suspect that there is no water-tight compartment for closed-class meanings in language -- nor for much of anything else in cognition -- so that periodic closed-class semantic novelties should be expected. It is for this reason that I’ve called the inventory of closed-class meanings "approximately closed" -- and any linguistic theory characterizing this body of phenomena should have this structural provision built in at its foundation.

Finally within the present methodology, semanticists can look for any well-formedness principles that determine why it is that the basic conceptual elements combine in certain selections and arrangements, but not in others, to form the full schemas actually found to be represented by closed-class forms across languages. (This aspect of the investigation is the one I’ve had the least success with).

With these fundamentals of conceptual organization in language, semanticists can then further examine portions of discourse to determine the principles that govern how different schemas -- with the conceptual elements, categories, and schematic systems that they imply -- combine to form a larger conceptual complex running through the discourse.

Question 4: What exactly are these schematic systems?

Question 5: Which are the main schematic systems? Could you give us a couple of more specific and detailed examples?

Question 6: How do you apply these schematic systems to the analysis of specific languages?

Well, much of the answer that could be given here has already been laid out more systematically in the very first chapter of my two volumes (Talmy 2000), which sets the tone and much of the organization of those volumes. But to give a taste of the matter here, three of the schematic systems that I propose might be considered together as the "architectonic" systems.

The first of these schematic systems I call that of "configurational structure". This system comprehends all the respects in which closed-class schemas represent structure for space or time or other conceptual domains, often in virtually geometric patterns. It thus includes much that is within the schemas represented by spatial prepositions, by temporal conjunctions, and by aspect and tense markers. It also includes the uniplex or multiplex instantiation of a type of object at various points of space -- what is often represented by number markers on nominals -- parallel to similar distinctions for events in time already understood as part of aspect. Two instances of configurational schemas can be seen in a sentence like Poles stood across the road. The spatial schema represented by the English preposition across includes -- to select only some of its elements and portray them broadly -- a line extending perpendicularly between two parallel lines. The plural marker -s in poles represents the multiple instantiation of a ‘pole’ -- understood as located at different points of space (not superimposed). By principles that govern how the different schemas occurring through a discourse accommodate to each other, the resulting conception includes the understanding that the poles are located at points on the transverse line and that these points have a representative distribution over this line (and are not, for example, all adjacent to each other in one spot).
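To make this concrete, the two schemas and their accommodation can be caricatured in a minimal Python sketch; the data structures, field names, and the accommodate function below are purely hypothetical illustrations, not a formalism proposed in this work.

# A minimal illustrative sketch (not a formalism from this work): the two
# closed-class schemas of "Poles stood across the road" as plain Python data,
# plus a toy "accommodation" step that distributes the multiplex instances
# over the transverse line of the 'across' schema.

# Schema contributed by the preposition 'across' (selected elements only):
# a line extending perpendicularly between two parallel lines.
across_schema = {
    "bounding_lines": 2,            # the two parallel edges (the road's sides)
    "transverse_line": True,        # the line extending between them
    "orientation": "perpendicular",
}

# Schema contributed by the plural marker -s on 'poles':
# multiplex instantiation of one object type at distinct points of space.
plural_schema = {
    "object_type": "pole",
    "instantiation": "multiplex",
    "superimposed": False,          # instances occupy different points
}

def accommodate(across, plural, n=5):
    """Toy accommodation: place the multiplex instances at representative,
    evenly spread positions (0..1) along the transverse line."""
    assert across["transverse_line"] and plural["instantiation"] == "multiplex"
    positions = [i / (n - 1) for i in range(n)]
    return {"object_type": plural["object_type"], "positions": positions}

print(accommodate(across_schema, plural_schema))
# {'object_type': 'pole', 'positions': [0.0, 0.25, 0.5, 0.75, 1.0]}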

While the first schematic system, configuration, establishes the basic delineations by which a scene or event being referred to is structured, the second schematic system, perspective, directs one as to where to place one’s "mental eyes" to look out at the structured scene or event. This perspective system includes at least these conceptual categories: a perspective point’s spatial or temporal positioning within a larger frame; its distance away from the referent entity; its change or lack of change of location in the course of time, as well as the path it follows in the case of change; and the viewing direction from the perspective point to the regarded entity.

And the third schematic system, attention, establishes how one is to distribute one’s attention over the structured scene or event from the selected perspective point. Different strengths of attention in this distribution can form a pattern. And patterns of different types underlie various conceptual categories within this schematic system, such as scope of attention, focus of attention (in a center-surround pattern), level of attention, and the windowing of attention.

An example that involves both the second and the third of these schematic systems, as with the previous example, rests on the distinct schemas of more than one closed-class form appearing throughout a sentence and on the interaction and integration of these schemas. Sentences (1) and (2) can both refer to the same scene. But in sentence (1), the marker for plurality (multiple instantiation) on house along with its plural verb agreement, the collectivity of the determiner some, and the stationariness represented by the spatial preposition in together call for a conceptual structuring in which one in effect regards the referent scene from a stationary distal perspective point with a global scope of attention. But in sentence (2), the singularity (unitary instantiation) of house along with its singular verb agreement, the distributedness of the temporal phrase, and the motion represented by the spatial preposition through together call for a moving proximal perspective point with local scope of attention -- as if one were in succession regarding a series of houses from up close.

(1) There are some houses in the valley.
(2) There is a house every now and then through the valley.

These first three schematic systems might together be thought to comprise a group of architectonic systems because they can all operate together, designating static or changing geometric-like patterns in a single spatiotemporal matrix (as just seen in the last example). If so, then a fourth schematic system, force dynamics, might be thought to complement the architectonic systems. While the first three systems deal with geometric-type delineations, the fourth system deals with the forces exerted by and the causal interactions among the entities marked out by the delineations. The schematic system of force dynamics includes an organized set of basic patterns -- both steady-state and changing -- that involve the exertion of force by one entity on another. It covers concepts like an entity’s natural tendency toward motion or rest, an outside entity’s opposition to such a tendency, resistance to this opposition, and overcoming of such resistance, as well as concepts of helping and hindering, causing and letting. To illustrate, a sentence like John doesn’t leave the house is force-dynamically neutral and simply reports on a state of affairs. If a camera were set up outside the house, it would not record John’s presence. Now consider the sentence John can’t leave the house. The same camera would still show John’s absence. But here this absence is conceptualized as the resultant of two opposing forces, John’s tendency (here, desire) to leave and some obstacle that opposes that tendency, where the latter is stronger and so prevails.
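As a minimal illustrative sketch (the class, field names, and numeric strengths below are hypothetical choices, not notation from this work), the opposing-forces pattern just described can be caricatured in Python: each entity has an intrinsic tendency and a relative strength, and the stronger entity's tendency determines the resultant.

# Illustrative sketch of the force-dynamic pattern described above: an
# entity's intrinsic tendency, an opposing entity's force, and a resultant
# determined by which of the two is stronger. All names are hypothetical.

from dataclasses import dataclass

@dataclass
class ForceEntity:
    name: str
    tendency: str   # "motion" (e.g., leaving) or "rest" (staying put)
    strength: int   # relative strength of the force exerted

def resultant(focal: ForceEntity, opposer: ForceEntity) -> str:
    """The stronger entity's tendency prevails in the resultant state."""
    winner = focal if focal.strength > opposer.strength else opposer
    return winner.tendency

# "John can't leave the house": John tends toward motion (he wants to leave),
# but some obstacle opposing that tendency is stronger, so rest prevails.
john = ForceEntity("John", tendency="motion", strength=1)
obstacle = ForceEntity("obstacle", tendency="rest", strength=2)
print(resultant(john, obstacle))   # -> "rest" (John does not leave)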

A fifth schematic system of "cognitive state" could also be readily posited. Although I have written much about different aspects of such a system, I have not yet tried to work out how it might as a whole gather together and organize the extensive array of relevant closed-class representations. Actually, the schematic systems of perspective and attention are properly divisions within the system of cognitive state, but they are themselves so extensive that I have spun them off as separate schematic systems in their own right. Another extensive division within cognitive state, one that also might easily be spun off as a separate schematic system, comprises volition and intention, the criterial attributes of a sentient agent. Let me call this the "agency" division. Much as perspective and attention interact closely with configurational structure to comprise the architectonic systems, so agency interacts closely with force dynamics in what might together be dubbed the "ergal" systems. My analysis of agency appears mainly in Talmy (2000a, chs. 4 and 8, and 2000b, ch. 3), and it distinguishes between volition and intention. Volition is a cognitive event in a sentient agent that causes some motion of the agent’s body or body parts, where this in turn can initiate a causal chain of events in the physical realm that culminates in a certain so-conceived final event. As a separate cognitive state, the agent’s scope of intention is the amount of such a causal chain that the agent intends to happen, necessarily starting with the volitional act, but able to terminate at various events before the final event, or to extend so as to include it, or even to extend beyond the final event. The three sentences in (3) illustrate these relationships. In the throwing situation of (3a), the length of the reported causal chain, involving a path of motion for a branch, is coextensive with the scope of John’s intention for that chain. In the hunting situation of (3b), the actual actions carried out, such as moving about to inspect tracks, fall short of John’s full scope of intention, which extends beyond those actions with the intent that they lead to finding and capturing the rat. And in the misplacing situation of (3c), whether or not it accords with a causal-chain account, at least it can be said that the length of the reported succession of events exceeds John’s scope of intention. The latter only extends through John’s placing the trowel down at some spot, not to his subsequent inability to remember that spot and find the trowel again (I term the subject of such a sentence the "Author" rather than the "Agent" of the final event). Such agency relationships can be lexicalized in open-class verbs -- as they are, for example, in hide / hunt / spill, respectively. But they can also be represented by closed-class forms, as they are in (3a) by down behind, in (3b) by for, and in (3c) by mis-.

(3) a. John threw the dead branch down behind the bush in his back yard.
b. John hunted for the rat that had been bothering him in his back yard.
c. John misplaced his trowel in his back yard.

Another major division within the schematic system of cognitive state can be termed "epistemics", which covers characterizations of a sentient entity’s states of knowledge. This division certainly includes the evidential systems found in many languages, but it also includes many indicative-subjunctive type distinctions, factivity, forms for probability and possibility, and the like. Perhaps the most general characterization of this division is that it addresses the gradient from certainty to uncertainty. The forms within an evidential system in a language might well fall into two main classes with respect to this gradient, one class located at the certainty end and the other near it at the ‘considered probable’ location -- in particular, where the speaker either knows or infers the stated proposition. More specifically, for one class of forms, the speaker considers the proposition to be a fact, while for the other class of forms, the speaker infers that the proposition is likely to have occurred, to be occurring, or to be going to occur. Within these classes, the forms differ as to the basis for concluding this factuality or likelihood. Thus, some languages distinguish three factual forms, one for where the speaker has witnessed the reported event ("John was chopping wood -- I saw him"), another for where the speaker performed the action herself ("The beads are on the string"), and the third for situations considered common knowledge ("Horses eat grass"). And distinctions among the types of inference that some languages make include one for where the speaker has non-visually perceived the reported event ("John must have been chopping wood -- I heard the whacks"), one for where the speaker observes telltale evidence ("John must be chopping wood -- the ax is gone from the house"), one based on observed periodicity ("John must be chopping wood -- it’s 3 PM and he usually chops wood now"), and one based on hearsay ("John is out chopping wood, I hear").

Cognitive state includes still further divisions, such as "expectation", which covers both the expected and the surprising, as represented by the closed-class mirative systems found in some languages, as well as by such ‘surprise’ forms as the how / so forms in English, as in How big your eyes are! / Your eyes are so big! And there is certainly the affective division, which includes hypocoristic (diminutive) and pejorative closed-class forms, although, as discussed in Talmy (2000a, ch. 1), affect never seems to become organized in a language as a replete closed-class system. But an overall analysis of the schematic system of cognitive state is still pending. Beyond the five schematic systems just proposed, others no doubt await positing. One candidate is a schematic system of "quantity". But more research is needed.

Question 7: In the last part of this interview, I’d like to focus on your most recent work. Let’s start with your investigation into signed language in relation to spoken language and space structuring. Why do you think it is necessary to extend your analysis to signed language? In other words, what can signed language reveal about conceptual structure and, more concretely, about space structure that spoken language cannot?

Question 8: What are the main similarities between signed and spoken language with respect to conceptual structure?

Question 9: What are the main differences?

Again, much of the answer to your questions about spoken language and signed language has already been laid out more systematically in a published paper, Talmy (2003). But again I can give some sense of the issues.

From accumulating evidence, it appears that spoken language and signed language differ in many respects in their structure and organizational principles. This finding challenges the Fodor-Chomsky model of a discrete language module that has prevailed for some time. Early signed language researchers may have felt obliged to adopt that model in part to establish that signed language was a full genuine language, contrary to much of the view at that time. But the modern response to observations of differences -- far from once again calling into question whether signed language is a genuine language -- should be to rethink what the general nature of language is. My proposal is that instead of some discrete whole-language module, spoken language and signed language are both based on some more limited core linguistic system -- responsible for their similarities -- that then connects with different further cognitive subsystems for the full functioning of the two different language modalities. Thus, we can see that investigating any differences between spoken and signed language structure has great import for cognitive theory -- as well as for theories of the evolution of language.

An account of differences between the spoken and signed language systems should begin by noting that the two modalities comprise subsystems that to a large extent don’t correspond to each other. Without here listing all the candidate subsystems I propose in Talmy (2003), maybe the most important observation is that all signed languages have a formally distinct subsystem, one misleadingly named in the literature as the "classifier" subsystem, that has no counterpart in spoken language. This classifier subsystem is dedicated solely to the schematic structural representation of objects moving or located with respect to each other in space. Perhaps the closest aspect of spoken language to compare this system with is that set of closed-class forms in a language that pertains to spatial structure, for example, the set in English that includes spatial prepositions.

To illustrate a classifier expression in American Sign Language (ASL), a signer can move his dominant hand with respect to his nondominant hand in a certain complex pattern that simultaneously represents all the following parameters of a Motion event: the category identity of the Figure object -- e.g., a car if the dominant classifier handshape is that for a ‘ground vehicle’; the category identity of the Ground -- e.g., a tree, if the nondominant classifier handshape is that for a ‘tree-like object’; the motive state of the Figure -- e.g., moving, if the Figure hand moves; the path of the Figure relative to the Ground -- e.g., past it if the Figure hand moves past the Ground hand; the elevation angle of the path -- e.g., upward along a steep hill, if the hand moves up at a steep angle; the curvature of the path -- e.g., curved, as along a curved road, if the Figure hand moves upward in an arc; the distance between Figure and Ground -- e.g., proximal if the Figure hand passes close to the Ground hand; the relative length of the path before and after encounter with the Ground -- e.g., a short pre-path and long post-path if the Figure hand moves that way relative to the Ground hand; the manner of the motion -- e.g., bumpily if the Figure hand exhibits a certain jumpy movement; and the speed of the motion -- e.g., fast if the Figure hand moves fast. The closest English sentence I can come up with that includes most of the same information is: The car sped bumpily upward in an arc close past the tree, starting near it and ending further away. However the closed-class vs. open-class distribution of forms in the English sentence comes out, though, all the parameters and their values in the classifier subsystem are essentially structural in character with respect to a conceptual structure vs. content distinction. It is another subsystem within signed language, the lexical subsystem, whose forms -- lexical signs -- represent conceptual content.
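To make the degree of concurrent, independently settable structure vivid, the parameters just listed can be caricatured as fields in a minimal Python sketch; the field names and value labels are hypothetical stand-ins, not an attested inventory from the ASL literature.

# Illustrative sketch only: the structural parameters just listed for the ASL
# classifier expression, encoded as independently settable fields. The field
# names and value sets are hypothetical stand-ins, not an attested inventory.

classifier_expression = {
    "figure_category": "ground_vehicle",    # dominant handshape: e.g., a car
    "ground_category": "tree_like_object",  # nondominant handshape: e.g., a tree
    "motive_state": "moving",               # Figure hand moves
    "path": "past_ground",                  # Figure hand moves past the Ground hand
    "path_elevation": "steeply_upward",     # hand moves up at a steep angle
    "path_curvature": "arced",              # hand moves in an arc
    "figure_ground_distance": "proximal",   # hand passes close to the Ground hand
    "pre_post_path_ratio": "short_before_long_after",
    "manner": "bumpy",                      # jumpy hand movement
    "speed": "fast",
}

# Because each parameter is independent, any one can be reset without
# disturbing the others -- e.g., the same event but slow and along a
# straight path:
variant = dict(classifier_expression, speed="slow", path_curvature="straight")
print(variant["speed"], variant["path_curvature"])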

In comparing these two means for representing spatial structure, further differences appear in two venues: in the inventory of basic spatial distinctions, and in what can be represented concurrently within a single expression.

In its inventory, the classifier subsystem of signed language has more basic spatial elements and more categories to which these belong -- some of each have no spoken-language counterparts -- as well as a generally greater number of elements per category than is the case for the spatial portion of the spoken-language closed-class inventory. Its finer spatial distinctions seem more like those in visual parsing.

Within a single expression of motion or location, the classifier subsystem allows many more distinct structural aspects of space to be represented concurrently than does spoken language. By my count, some 30 aspects of spatial structure might in principle appear together in a classifier expression -- although in practice a smaller though still high number might occur -- whereas 6 distinct aspects of spatial structure is the highest number I’ve found represented by closed-class forms in a spoken-language clause. Further, each of the 30 spatial parameters of the classifier subsystem can be varied through its range of values independently of the others, whereas the spatial schemas represented by spoken-language closed-class forms are largely a fixed selection from the basic elements in a pre-packaged arrangement. Again, the numerous structural spatial distinctions made concurrently and independently in the classifier subsystem seem more akin to the properties of visual scene parsing.

Finally, the hand movements that express the 30 structural parameters of the classifier subsystem -- both by themselves and in combination within an expression -- are mostly iconic with the aspects of space that they represent. Moreover, where those aspects of space are gradient in character, rather than discrete, the hand movements are also gradient in iconicity with them. By contrast, spatial closed-class forms in spoken language are minimally iconic and generally discrete with respect to the aspects of spatial structure they represent. One exception is the manner in which the vowel of the closed-class form way, as in It’s way over there, can be lengthened in gradient iconicity with the additional magnitude of separation that it represents. But the rarity of such cases simply highlights the general lack of iconicity and gradience in spoken language representations. Thus, once again, the signed representations are closer to the character of visual structure.

The preceding sketch addresses your question about how spoken and signed language differ. But you also asked how these two language modalities are alike. For this, it will be easiest simply to quote the section from my 2003 paper that summarizes similarities between the spatial portion of the closed-class subsystem of spoken language and the classifier subsystem of signed language. Both subsystems can represent multifarious and subtly distinct spatial situations -- that is, situations of objects moving or located with respect to each other in space. Both represent such spatial situations schematically and structurally. Both have basic elements that in combination make up the structural schematizations. Both group their basic elements within certain categories that themselves represent particular categories of spatial structure. Both have certain conditions on the combination of basic elements and categories into a full structural schematization. Both have conditions on the cooccurrence and sequencing of such schematizations within a larger spatial expression. Both permit semantic amplification of certain elements or parts of a schematization by open-class or lexical forms outside the schema. And in both subsystems, a spatial situation can often be conceptualized in more than one way, so that it is amenable to alternative schematizations.

I reason that some single cognitive system is responsible for this set of similarities across the two language modalities and so attribute them to a proposed core language system. But since each language modality has further properties not shared by the other, I reason that the core language system interacts with different outside cognitive systems for the full functioning of each language modality. The distinctive properties of signed language, or at least of its classifier subsystem, resemble the properties of visual parsing so much more closely than do those of spoken language that it seems logical to consider that here the core language system interacts with the visual processing system. What the core language system might interact with for spoken language is less clear, but I propose an outside cognitive system for what might be termed "modulated packaging". Talmy (2006) lays this out in detail, but the basic notion is that -- unlike the independently variable parameters of the classifier subsystem -- the spatial closed-class forms of spoken language occur in any given language in a format already packaged as a full schema -- one consisting of a selection of basic elements in a particular arrangement -- and that a further subsystem then extends and modifies these extant schemas so that they can cover a much broader range of spatial structure. Why such a cognitive system of modulated packaging might have evolved is the next question. It seems to me that something like it might already have been important for motor control. Such a system might arrange some set of basic movement elements into a schematic pattern -- say, one for sitting -- and then modify the pattern for particular exigencies -- as for sitting on the ground or astraddle a log.

This line of thought raises the whole evolutionary issue for language. I propose a critical feature of language evolution in Talmy (2007) (which also appears on my website: http://linguistics.buffalo.edu/people/faculty/talmy/talmyweb/index.html). An outline of the argument goes as follows.

In pre-language hominids, the vocal auditory channel, as it was then constituted, may have been inadequate as a means of transmission for communication involving certain levels of thought and interaction. This circumstance, if regarded metaphorically in terms of conflicting evolutionary pressures or forces, could be seen as a bottleneck.

On the one hand, there would have been a selective advantage to an increased capacity for the communicative transfer of thought, that is, conceptual content, between individuals. This would be especially the case if individual thought had the near potential to increase, or was already increasing, in the range of qualitatively different kinds of concepts dealt with; in the granularity of concepts, from broad to fine; in the abstractness of concepts, from concrete to abstract; in the complexity of concepts and conceptual interrelations, from simple to intricate; as well as in speed. And it would be further the case if communicative interactions among individuals had the near potential to increase in the encoding and decoding of advanced individual thought, as well as in speed.

On the other hand, the vocal-auditory channel had certain serious limitations as a means for transmitting such content for at least four reasons. It occupied a relatively low-fidelity medium. It had relatively limited distinctional capacity. It had relatively few independently variable parameters -- what I term "degree of parallelness" -- some 8 by my count. And it had little relevant iconicity, including that for gradients. The reason that greater parallelness would be an advantage is that, with it, more conceptual content can be transmitted in the same amount of time. And the reason that greater iconicity would be an advantage is that, with it, fewer arbitrary symbols are needed to represent conceptual content and, if extensive, an entire system of symbols is not needed, thus presumably lessening the cognitive load otherwise involved in establishing stable symbols, encoding concepts into them, and decoding them into concepts.

If the manual-visual channel -- or, more generally, the bodily-visual channel -- had instead formed the basis for language evolution, serving as its means of transmission, it might well have been adequate as it was then constituted, for it lacked the limitations of the vocal-auditory channel. It occupies a higher-fidelity medium; it has a greater distinctional capacity; it has high parallelness -- with some 30 independent parameters, as described earlier for signed language; and it has extensive iconicity, including that for gradience.

Due to whatever circumstances, though, language evolution did involve the vocal-auditory channel. And this channel underwent one major evolutionary shift that enabled it to surpass its limitations. In a word, it went digital. Whereas the vocal-auditory channel had been largely analog, it now became a mainly digital system. My term "digital" is not intended to suggest binary representation or a computational model of the brain. Rather, digitalness, which can be exhibited to a greater or lesser extent, is cumulatively built up from 4 successive factors. These are A) discreteness: distinctly chunked elements, rather than gradients, form the basis of the domain; B) categoriality: the chunked elements function as qualitatively distinct categories rather than, say, merely as steps along a single dimension; C) recombination: these categorial chunks systematically combine with each other in alternative arrangements, rather than occurring only at their home sites; and D) emergentness: these arrangements each have their own new higher-level identities rather than remaining simply as patterns. I apply the term "recombinance" to any cognitive domain that includes both recombination and emergentness. The increase of digitalness -- and especially of recombinance -- in the vocal-auditory channel compensated for its low parallelness, iconicity, and distinctional capacity and afforded greater fidelity and speed.

Human language is extensively recombinant. By one analysis, it has six distinct forms of recombination, of which three or possibly four also exhibit emergentness. In particular, there are four formal types of recombination: phonetic features combining into phonemes, phonemes combining into morphemes, morphemes combining into idioms, and morphemes and idioms combining into expressions -- with the first three of these producing new emergent identities. And there are two semantic types of recombination: semantic components combining into morphemic meanings, and morphemic meanings combining into expression meanings -- with the first of these perhaps yielding a new emergent identity.
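A minimal sketch can illustrate recombinance at the phoneme-to-morpheme level; the mini-lexicon below is a hypothetical toy in Python, chosen only to show the same discrete, categorial chunks yielding different emergent identities under different arrangements.

# Illustrative sketch only (hypothetical toy lexicon): the same discrete,
# categorial chunks (phonemes) recombine in different arrangements, and each
# arrangement has its own emergent identity -- a morpheme meaning not
# predictable from the phonemes themselves.
phonemes = {"p", "a", "t"}                  # discreteness + categoriality
lexicon = {                                 # recombination + emergentness
    ("p", "a", "t"): "a light touch",       # 'pat'
    ("t", "a", "p"): "a light knock",       # 'tap'
    ("a", "p", "t"): "suitable",            # 'apt'
}
for arrangement, emergent_meaning in lexicon.items():
    assert set(arrangement) == phonemes     # same elements, different orders
    print("".join(arrangement), "->", emergent_meaning)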

A heuristic survey of various cognitive systems such as visual perception and motor control suggests that discreteness and categoriality appear in many of them. But candidates for recombination and emergentness in these systems seem rarer and more problematic. Language evolved recently and may have borrowed or tapped into organizational features of the extant cognitive systems. The cognitive system of language thus could have readily acquired its discrete and categorial characteristics from other systems. But language seems to be the cognitive system with the most types and the most extensive use of recombinance. The question thus arises whether language, as it evolved, adopted a full level of recombinance already present in another cognitive system, increasing it somewhat; adopted a minor level of recombinance from another cognitive system, elaborating it greatly; or developed full recombinance newly as an innovation. In the 2003 paper, I survey a range of cognitive systems for evidence bearing on this issue. And in more recent work, not yet published, I’ve been considering the respects in which thought in humans may have coevolved with language, perhaps also incorporating increased digitalness in general, and recombination in particular.

Question 10: I know that at the moment you are busy working on your new book: The attention system of language (MIT Press). Could you briefly tell us what the main topic is?

One way to look at this book is as an expansion of my previous work on the schematic system of attention, as described earlier. But the full system in language for affecting a speaker’s attention and directing a hearer’s attention turns out to be so vast and so intricately organized that the updated description of it would bear little resemblance to the prior analyses. Language has over 100 factors that raise or lower attention on one or another aspect of language and its use. These factors combine and interact in larger attentional patterns. There seem to be some dozen fundamental parameters whose combinations yield the numerous factors and which, accordingly, constitute the underlying attentional system. Trying to work this out is currently what is occupying my, uh, attention.

Question 11: This research is really compelling and I’m sure ARCL readers will be really looking forward to reading it. When do you think it will be finished?

It’s hard to say since, as with much research, each new discovery opens further areas to investigate. But I’d like to think in terms of a couple of years before it’s ready to be sent off to the publisher.

References

Talmy, Leonard. 2000a. Toward a Cognitive Semantics. Volume I: Concept Structuring Systems. i-viii, 1-565. Cambridge, MA: MIT Press.

Talmy, Leonard. 2000b. Toward a Cognitive Semantics. Volume II: Typology and Process in Concept Structuring. i-viii, 1-495. Cambridge, MA: MIT Press.

Talmy, Leonard. 2003. The representation of spatial structure in spoken and signed language. In Perspectives on Classifier Constructions in Sign Language, ed. by Karen Emmorey. Mahwah, NJ: Lawrence Erlbaum.

Talmy, Leonard. 2006. The fundamental system of spatial schemas in language. In From Perception to Meaning: Image Schemas in Cognitive Linguistics, ed. by Beate Hampe. Berlin: Mouton de Gruyter.

Talmy, Leonard. 2007. Recombinance in the evolution of language. In Proceedings of the 39th Annual Meeting of the Chicago Linguistic Society: The Panels, ed. by Jonathon E. Cihlar, David Kaiser, Irene Kimbara, and Amy Franklin. Chicago: Chicago Linguistic Society.