The grammar of engagement

This blog post is written by Nicholas Evans, inspired by the Language and Cognition article “The grammar of engagement I: framework and initial exemplification” by Nicholas Evans, Henrik Bergqvist, and Lila San Roque. Read it online now.

‘Philosophy must plough over the whole of language’, as Wittgenstein famously stated. But which language? Singularising the noun allows a deceptive slippage between some language whose premises we take for granted (‘The limits of my language are the limits of my world’ was another great, and corrective, line of his) and ‘language’ in some dangerously, presumptively general sense. One of the great what-if questions for linguistics, philosophy and cognitive science is how different the last two millennia of western thought would be if we had built our disciplines on the foundations of languages radically different in what their grammars prioritise from Greek, Latin, Hebrew – or, more recently, German, French or English.

If we trace the development of language studies in the west, we find an early bisection between logic – which came to be associated with the study of meaning and reasoning, which should be shorn of all context – and rhetoric, where the relations between speaker and hearer are all-important but which does not primarily focus on grammar. The echoes of this continue today as, for example, formal semanticists continue to wrestle with how to define definiteness (the noisy neighbour) in ways that are based on set theory and mathematics rather than embodying representations of the speech setting or of ‘theory of mind’. But the recent ‘subjective’ turn among cognitive linguists, and the even more recent ‘intersubjective turn’, continue to throw up evidence about how deeply woven into language we find the search for common ground and its constant renegotiation, rooted in the speaker’s attention to their addressee’s attentional, emotional and belief states.

I’ve long been interested in ‘multiple perspective’ (Evans 2006) – originally sparked by special ‘triangular’ kin terms in Australian languages with meanings like ‘the one who is your mother and my daughter, me being your grandmother’. The conversation that was the genesis of this article took place with Jon Landaburu in 2004, on a bus trip from Mexico to Teotihuacan. In discussing the general issue of multiple perspective, I asked him whether he thought that similar biperspectival constructions ever get grammaticalised with modality. He then told me about his work on ‘engagement’ in the Colombian language Andoke, which later came out in the seminal article we refer to in ours. I was particularly struck by his suggestion that ‘it is the traditional influence on grammar of logic, whether scholastic or formal, which has led to the separation of these two dimensions (i.e. speaker knowledge and intersubjective assessment) which are necessarily present and interwoven in any act of communication’ (Landaburu 2007:30-1, my translation). The crucial insight of his article is that it allows for a bidimensional distribution of attention/knowledge or its absence: both speaker and hearer (assertion based on common ground/attention), neither (question for which no informed answer is expected), speaker-only (authoritative statement by speaker, who assumes the hearer doesn’t know or attend), addressee-only (inquiry of presumably better-informed addressee).
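Landaburu’s bidimensional distribution can be pictured as a small lookup table. The Python sketch below is purely illustrative – the labels are paraphrases of his four categories, not glosses of actual Andoke morphemes:

```python
# Landaburu's bidimensional engagement space: each cell pairs an assumption
# about the speaker's knowledge/attention with one about the addressee's.
ENGAGEMENT = {
    (True, True):   "assertion grounded in shared knowledge/attention",
    (True, False):  "authoritative statement (speaker knows, hearer does not)",
    (False, True):  "inquiry of a presumably better-informed addressee",
    (False, False): "open question (no informed answer expected)",
}

def classify(speaker_knows: bool, addressee_knows: bool) -> str:
    """Map a knowledge configuration onto its prototypical speech-act type."""
    return ENGAGEMENT[(speaker_knows, addressee_knows)]
```

The point of the table is precisely the one Landaburu makes: speaker knowledge and intersubjective assessment are two independent dimensions, crossed in the grammar rather than collapsed into one.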

Although I loved the article, I was frustrated by the paucity of examples, and the lack of an interrogable corpus. I also thought that the full significance of his discovery wouldn’t be clear until comparable phenomena were found in other languages, and a proper analytic framework developed based on cross-linguistic comparison. In the succeeding years I gradually accumulated case studies that could form the basis of a typology. Perhaps more importantly, I was joined in this research by my two co-authors in this article, Henrik Bergqvist and Lila San Roque, each of whom, unlike me, has deep, firsthand experience working on languages in which engagement is a central part of the grammar – respectively Kogi in Colombia (close to, but unrelated to, Andoke) and Duna in Papua New Guinea. Their detailed fieldwork-based insights allowed a deeper probing of how this phenomenon really works in those languages. Even so, however, our understanding of how the grammar of engagement works is at an early stage and a major priority in the next stage of research is to gain more detailed field data, through a combination of recording naturalistic conversation and developing protocols for what Nikolaus Himmelmann calls ‘staged communicative events’ which allow us to vary, with more confidence, the factors at work.

In talks I’ve given on this phenomenon, I’m often asked: ‘isn’t this just the same as saying “so” or “hey!” or “actually” in English?’ Well, there are clear overlaps in communicative function, and these words are the closest we can get to translating sentences from languages like Andoke in a reasonably natural way. But they are not systematically integrated into the grammar in the same way, forming tightly bound paradigms that require constant scanning of the addressee’s attention and knowledge states by the speaker.
Linguists have long believed that it makes a difference whether something is part of the grammar or simply an optional add-on – just as it makes a difference whether we need to obligatorily specify tense and definiteness, as in English, or do so in more roundabout ways, as in Chinese. Viewed another way, from the perspective of real-time processing, psycholinguist Paula Rubio-Fernandez asks: is attributing mental states to others too costly to be used as a basis for online interaction? As an inflectional category, engagement definitely needs to be processed online, and it definitely involves attributing mental states to others – so our emerging typology is directly relevant to this central question for the psychology of social cognition.

Had the intellectual tradition of linguistics begun with Foe, Duna, or Kogi rather than Greek and Latin, our notions of what is basic might have looked very different – and intersubjective categories would have demanded treatment from the very beginning. Ironically, what I have presented as a third, ‘intersubjective’ stage in linguistic approaches to the grammar of meaning – collaborative communication between speakers attending to each other, altering their common ground through time, and representing what each other knows – is arguably the precondition for the evolution of language in the first place.

One language’s grammar is another language’s discourse. The fact that we have had to travel to far-flung lands to find the ‘pure’ examples of engagement discussed in our article does not imply that their functional equivalents won’t turn up in interesting ways in all languages. At the same time, the interesting question arises of what difference it makes for a category like engagement to be elevated to a central grammatical position. For this we need sensitive cross-linguistic studies, preferably involving parallel or semi-parallel corpora, as well as psycholinguistic studies comparing the learning and processing of intersubjectively relevant categories, and taking as the independent variable the question of whether the language has clearly grammaticalised engagement categories.



Evans, N. (2006). View with a view: Towards a typology of multiple perspective. Berkeley Linguistics Society (BLS) 32, 93–120.
Landaburu, J. (2007). La modalisation du savoir en langue andoke (Amazonie colombienne). In Z. Guentchéva & J. Landaburu (eds.), L’énonciation médiatisée II: Le traitement épistémologique de l’information: Illustrations amérindiennes et caucasiennes (pp. 23–47). Leuven: Peeters.

Learning Construction Grammars Computationally

Blog post by Jonathan Dunn, Ph.D.

Construction Grammar, or CxG, takes a usage-based approach to describing grammar. In practice, the term ‘usage-based’ means two different things:

First, it means that idiomatic constructions belong in the grammar. For example, the ditransitive construction “John sent Mary a letter” has item-specific cases like “John gave Mary a hand” and “John gave Mary a hard time.” These idiomatic versions of the ditransitive have distinct meanings. While other grammatical paradigms consider these different meanings to be outside the scope of grammar, CxG argues that idiomatic constructions are actually an important part of grammar.

Second, CxG is usage-based because it argues that we learn grammar by observing actual idiomatic usage: language is more nurture than nature. The role of innate structure is limited to general cognitive constraints such as limits on working memory and the ability to recognize and categorize differences. CxG views language learning as a bottom-up process of systematicity spreading from idiomatic constructions to generalized constructions.

The problem is that the usage-based approach to grammar has struggled to live up to its own expectations. First, a very large number of idiomatic constructions could be posited to resolve any descriptive challenge. As a result, CxG has struggled to show that its grammars are falsifiable. Second, there are potentially large numbers of overlapping idiomatic constructions each with its own distinct meaning; thus, without relying on innate constraints, CxG has struggled to show that its grammars are learnable.

This paper takes a computational approach to learning CxGs in order to resolve these difficulties. Can stable, generalized grammars be learned from actual usage? Without innate structure to limit the space of possible constructions, this approach faces four challenges that make it difficult to learn the best grammar:

First, we do not know how many items or slots a construction contains, so the algorithm must be able to perform segmentation in order to find construction boundaries. Second, CxG allows multiple types of representation (lexical, semantic, syntactic), so the algorithm must be able to find the best way to describe each slot in a construction. Third, CxG allows unfilled slots, so the algorithm must be able to find constructions that do not appear to be continuous. Fourth, slots can have recursive internal structure, so the algorithm must be able to find complex fillers.

The difficulty is that these challenges must be solved with as few language-specific assumptions as possible in order to qualify as usage-based in the senses described above. This paper shows that a learnable and falsifiable usage-based CxG is possible, the first step in reconciling the claims and the actuality of the Construction Grammar paradigm.

Jonathan Dunn, Ph.D., is Research Assistant Professor of Computer Science at Illinois Institute of Technology. His recent article, “Computational learning of construction grammars,” can be accessed without charge until March 15th. Explore all of Language and Cognition by clicking here.

Figurative and non-figurative motion in the expression of result in English

Blog post written by Francisco Ruiz de Mendoza Ibáñez and Alba Luzondo Oyón based on an article in the journal Language and Cognition

There is a variety of ways in which English can express resulting events. Some take the form of non-figurative changes of state, as in Cold temperatures froze the river solid, an example of the resultative construction. Others, like the intransitive motion syntactic frame (e.g. The horse jumped over the fence) and the caused-motion configuration (e.g. Tom kicked the ball into the net) depict literal changes of location. Interestingly enough, many outcome events require a figurative interpretation. Some cases in point are the following: changes of state expressed in terms of figurative motion (e.g. Miners drank themselves into oblivion); self-instigated change of location figuratively expressed as the result of caused motion (e.g. They laughed me out of the studio); self-instigated changes of location re-construed as externally caused events (e.g. Sheena walked me to the library), etc. In the context of this varied array of realizations codifying change, this paper provides readers with a qualitative cognitive-constructionist account of the role played by motion in the conceptualization of result in English.

To this end, our analysis discusses three related aspects of the motional component of result events. First, it explores the nature of some of the constructions exemplified above. These are labeled Adjectival Phrase (AP) resultatives (e.g. The joggers ran their Nikes threadbare) and Prepositional Phrase (PP) resultatives (e.g. Steven worked himself to exhaustion), both of which express changes of state. The crucial feature setting apart these two types of resultative constructions is that only the latter builds on the high-level metaphor CHANGES OF STATE ARE CHANGES OF LOCATION. For example, in Steven worked himself to exhaustion, the change of state (i.e. become exhausted) is understood in terms of the destination of metaphorical motion. But what lies behind the choice of one structure over the other? What are the differences between pairs employing an AP and those adding motion to the state of affairs, as in He hammered the metal flat/He hammered hot iron into knives? These are some of the questions that this paper explores. A second important aspect that we address is the relation between the prototypical AP resultative (e.g. He hammered the metal flat) and the literal caused-motion construction (e.g. Pat threw the book off the table). The connection between these two constructions has been the object of some debate. Thus, given the relevance of this issue in the context of a paper which revolves around the connection between motion and result, an entire section is devoted to revisiting the hypothesis that AP resultatives are a metaphorical extension of caused-motion configurations. This claim is based on the idea that the resultative element in AP resultatives codes a metaphorical type of goal (i.e. metaphorical change of location) by virtue of the ubiquitous metaphor CHANGES OF STATE ARE CHANGES OF LOCATION.

Third, because our study is additionally concerned with specifying the underlying mechanisms that motivate lexical-constructional integration in expressions involving change with some kind of motion ingredient, the remainder of our paper examines the role of high-level metaphors and metonymies such as AN ACTIVITY IS AN EFFECTUAL ACTION and A CAUSED EVENT FOR AN ACTIVITY, which, like CHANGES OF STATE ARE CHANGES OF LOCATION, are vital licensing cognitive mechanisms in the conceptualization of result events.

Read the full article ‘Figurative and non-figurative motion in the expression of result in English’ here.




Dynamic conceptualizations of threat in obsessive-compulsive disorder (OCD)

Blog post written by Olivia Knapton based on an article in the journal Language and Cognition

Obsessive-compulsive disorder (OCD) is a severe mental health problem of a heterogeneous nature.  While OCD is characterised by distressing obsessions and repetitive compulsions, the nature of the obsessions and compulsions can vary greatly between individuals.  Recent clinical work has thus sought to define coherent subtypes of OCD in order to improve diagnosis, treatment and hopefully recovery rates. The overwhelming majority of this clinical work adopts quantitative approaches that ask participants to respond to questionnaires and inventories.  In contrast, this research article published in Language and Cognition adds to discussions on OCD subtypes through a qualitative, cognitive linguistic analysis.

In cognitive linguistics, it is argued that linguistic patterns can provide evidence for stable conceptualisations in the mind that structure our experiences and information.  The aims of this study were to provide linguistic evidence for underlying conceptualisations of threat within OCD and to show how subtypes of OCD can be differentiated based on threat conceptualisation.

Data were collected from participants with OCD, who were interviewed about their experiences of the disorder.  Narratives of OCD episodes were then identified in the transcripts and were analysed using image schema theory (Johnson, 1987) and cognitive approaches to deixis in discourse (Chilton, 2004).

Through an exploration of the participants’ subjective experiences of time, space and uncertainty in the recounted OCD episodes, the findings demonstrate that perceptions of threats fluctuate as OCD episodes unfold, and that it is the perceived movement (or not) of the threat that induces distress. The paper thus argues that the blanket notion of threat as often investigated in clinical models of OCD is not sensitive enough to capture these shifting perspectives.

Moreover, the dynamism of the threat was also found to be conceptualised differently for different subtypes of OCD.  For example, in some subtypes of OCD, the threat is conceptualised as moving rapidly away from the self, whereas in other subtypes, the threat is conceptualised largely as close to the self and as highly static.  This variation is in part attributed to the role of two image schemas in structuring OCD episodes: the SOURCE-PATH-GOAL image schema and the CONTAINER image schema. The paper therefore recommends that threat perception in OCD be researched as a highly subjective experience that shows distinct variation between subtypes.

Read the full article ‘Dynamic conceptualizations of threat in obsessive-compulsive disorder (OCD)’



Chilton, P. (2004). Analysing Political Discourse. London: Routledge.

Johnson, M. (1987). The Body in the Mind: the Bodily Basis of Meaning, Imagination and Reason. Chicago: Chicago University Press.


Introducing a new special issue of Language & Cognition on “Cognitive Linguistics and interactional discourse”

Blog post written by Elisabeth Zima based on a new issue of the journal Language and Cognition

Usage-based theories hold that the sole resource for language users’ linguistic systems is language use. It is a well-established fact that the primary setting for language use is interaction, with spontaneous face-to-face interaction playing a primordial role. Although researchers working in the usage-based paradigm, which is often equated with cognitive-functional linguistics, seem to widely agree on this, the overwhelming majority of the literature in Cognitive Linguistics does not deal with the analysis of dialogic data or with issues of interactional conceptualization. One may find that this is at odds with the interactional foundation of the usage-based postulate.

The papers in this special issue of Language & Cognition argue that models of language which subscribe to the usage-based view should not only be fully compatible with evidence from communication research but they should be intrinsically grounded in authentic, multi-party language use in all its diversity and complexities. Therefore, they all involve the analysis of interactional discourse phenomena by drawing on tools and methods from the broad field of Cognitive Linguistics. They show that perspectives on interactional language use that are inspired by Cognitive Linguistics may provide insights that other, non-cognitive approaches to discourse and interaction are bound to overlook. Furthermore, the papers illustrate why an ‘interactional turn’ in Cognitive Linguistics is essential to its credibility and further development as a theory of language and cognition. Contributions come from Alan Cienki, Andreas Langlotz, Kerstin Fischer, Bert Oben, Geert Brône and Elisabeth Zima.

Until the end of the year, you can explore the entire special issue without charge.


The truth about transitions: What psycholinguistics can teach us about writing

Blog post written by Yellowlees Douglas author of The Reader’s Brain: How Neuroscience Can Make You A Better Writer

Journalists, particularly those writing for American audiences, practically have transitions drilled into their heads from their first forays into writing for the public. Where’s your transition? their editors persist, as they linger over each sentence. However, those editors and newsroom sages handed on advice with well-established roots in psycholinguistics—and with particularly striking benefits for the reading public. I explore what linguistics, psychology, and neuroscience can teach us about writing in my forthcoming The Reader’s Brain: How Neuroscience Can Make You a Better Writer. And using an abundance of transitions is perhaps the simplest advice you can follow to make your writing easy to read, in addition to bolstering your readers’ speed and comprehension of even complex, academic prose.

As a species, we evolved to learn from observing cause and effect—and from making predictions based on those observations. For example, your everyday survival relies on your ability to predict how the driver to your right will behave on entering a roundabout, just as we predict hundreds of events that unfold in our daily lives, all of which dictate our behavior. But we feel relatively minimal cognitive strain from all these predictions, mostly made without any conscious awareness, because we can make predictions based on prior experience. We expect the familiar.

Similarly, in reading, we expect sequential sentences to relate to one another. Most writers assume that their readers see the ideas represented in one sentence as inherently connected to the preceding sentence. But sentences can become islands of meaning, especially when writers fail to provide explicit linguistic cues that inform readers how one sentence follows another.

Take, for example, your typical university mission statement, the kind invariably featured in American university catalogues and websites:

Teaching—undergraduate and graduate through the doctorate—is the fundamental purpose of the university. Research and scholarship are integral to the education process and to expanding humankind’s understanding of the natural world, the mind and the senses. Service is the university’s obligation to share the benefits of its knowledge for the public good.

Chances are, even if someone offered you the lottery jackpot for recalling this content in a mere half-hour, you’d fail—at least not without some serious sweat put into rote memorization. Why? Despite the mission statement containing a mere three sentences, nothing connects any sentence to the others—aside from the writer’s implicit belief that everyone knows that universities focus on teaching, research, and service. Unfortunately, only an academic would understand that research, teaching, and service form the bedrock of any research university. As a result, we can safely guess that the writer was an academic. Sadly, the actual audience for the mission statement—the family members tendering up their retirement savings or mortgaging the house for tuition—fails to see any connections at all. As studies dating back to the 1970s have documented, readers read these apparently disconnected sentences more slowly and with greater activity in the parts of the brain dedicated to reading. In addition, readers also show poorer recall of sentences lacking any apparent logical or referential continuity.

Because prediction is the engine that enables readers’ comprehension, transitions play a vital role in enabling us to understand how sentences refer to one another. In fact, certain types of transitions—particularly those flagging causation, time, space, protagonist, and motivation—bind sentences more tightly together. When you use as a result, thus, then, because, or therefore, your reader sees the sentence she’s about to read as causally related to the sentence she’s just read. Moreover, when writers place transitions early in sentences, prior to the verb, readers grasp the relationship before they finish making predictions about how the sentence will play out. These predictions stem from our encounters with the tens of thousands of sentences we’ve previously read. But put the transition after the verb, and your readers have already completed the heavy lifting of prediction. Or, worse, they’ve made the wrong predictions and need to reread your sentences.

You might think that a snippet like too or also or even flies beneath your readers’ radar. Think again. Transitions are your readers’ linguistic lifelines that link sentences and ideas smoothly together, making your writing easy to understand and recall. You can discover more not only about transitions but also about how your readers’ brains work through every facet of your writing—from the words you choose to the cadence of your sentences—in The Reader’s Brain: How Neuroscience Can Make You a Better Writer.

Cognitive Discourse Analysis: What language use can reveal about mental representations and concepts

Post written by Thora Tenbrink based on an article in Language and Cognition

What do we actually ‘see’ when we observe a picture or a scene, or watch an event unfold? How do we solve complex problems, and what are the steps of thought that we go through? How can we learn about such thoughts, given that we cannot access people’s minds directly? Questions such as these have a lot to do with our everyday life, and they are quite relevant to many fields in cognitive science as well as applied research, for example design cognition or pedagogy. Cognitive Discourse Analysis (CODA) is a methodology that helps identify people’s thoughts in a systematic way. People are asked to say out loud what they’re thinking; their language is transcribed and analysed in depth. Besides the (often quite revealing) content of what people are saying, the features of their language (how they say it) point to underlying concepts and aspects that the speakers themselves are not necessarily aware of: their focus of attention, things taken for granted or perceived as new, levels of granularity, conceptual perspective, and so on.

CODA uses linguistic insights to analyse verbal data collected in relation to cognitively challenging tasks. When formulating their thoughts, speakers draw in systematic ways on their general repertory of language to express their current concepts. Their choices in relation to a cognitively demanding situation or scenario can reveal crucial aspects of their underlying conceptualisations, shedding light on how people tackle complex problem-solving tasks, as well as how they describe complex problems or situations.

As a simple example, consider a route description. The utterance ‘Turn right at the shopping mall’ shows that the speaker has a concept of a unique shopping mall, distinct from other buildings in the environment, which can therefore be referred to with a definite article and used as a landmark to anchor a direction change. The formulation ‘turn right’ also reveals the underlying perspective (egocentric, as perceived by the traveller, rather than compass-based). In these and other ways, linguistic choices can reflect crucial aspects of speakers’ conceptualisations. This provides a useful pathway for accessing cognition, drawing on knowledge about relevant features of language supported by grammatical theory, cognitive linguistic semantics, and other linguistic findings. In situations of communication (for example, completing a joint task or discussing a rationale for action), the different perspectives and conceptualisations of the speakers are flexibly negotiated in dialogue.
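As a toy illustration of the kind of linguistic cues a CODA analysis attends to, one might flag definite references and perspective-revealing direction terms in an utterance. The cue lists and function below are invented for this sketch; a real CODA analysis rests on a principled, linguistically grounded coding scheme, not keyword matching:

```python
import re

# Illustrative cue lists only: terms suggesting an egocentric (traveller-based)
# vs. an allocentric (compass-based) spatial perspective.
EGOCENTRIC = {"left", "right", "ahead", "behind"}
ALLOCENTRIC = {"north", "south", "east", "west"}

def perspective_cues(utterance: str) -> dict:
    """Count perspective cues and note any definite ('the X') landmark reference."""
    tokens = re.findall(r"[a-z]+", utterance.lower())
    return {
        "egocentric": sum(t in EGOCENTRIC for t in tokens),
        "allocentric": sum(t in ALLOCENTRIC for t in tokens),
        "definite_reference": "the" in tokens,
    }
```

Applied to ‘Turn right at the shopping mall’, such a tally would register one egocentric cue and a definite landmark reference – the same observations made informally in the paragraph above.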

Read the full article ‘Cognitive Discourse Analysis: accessing cognitive representations and processes through language data’ here

Are Idioms Metaphorical?

Blog post written by Daniel Sanford, based on an article in Language and Cognition

Idiom is so interesting to linguists because it exists at the intersection of the study of figurative language and of syntax effects, and has proven a singularly problematic issue in both areas of inquiry. For syntacticians who have challenged the Chomskyan model of language that’s been dominant since the 1960s, idiom has demonstrated the impossibility of drawing a clear distinction between lexical items and rules which operate upon them. Cognitive linguists and students of figurative language, meanwhile, have asked about the relationship between idiom and metaphor: Are idioms processed, on the fly, as metaphors? Or is the role of metaphor purely historical, with idiomatic meaning accessed simply as a lexical entry?

These questions are related, and considered together, they point to a resolution shy of Lakoff’s claim that conceptual metaphors are as active in idioms as in novel metaphors, but well beyond the traditional view that idioms as a class are non-metaphorical, their meanings retrieved as an irreducible whole from the lexicon: idioms can be a little bit metaphorical. The extent to which an idiom is metaphorical is a function of the extent of its autonomy from a sanctioning metaphorical schema. Idioms are ready-made metaphors: their meaning can, in many cases, be analyzed out on the basis of reference to existing forms, but the idiom itself, with a set metaphorical interpretation, is entrenched discretely from an overall metaphorical mapping. Metaphorical idioms cannot be wholly understood as highly entrenched instances of metaphorical mappings, nor can they be analyzed entirely as syntactic constructions: it is out of the interaction of these two types of schemas that the rich properties of idioms emerge, and a complete understanding of figurative idioms is possible only when this dual nature is embraced.

We invite you to read the full article, Idiom as the Intersection of Conceptual and Syntactic Schemas, here.

I’m glad that you like it or I’m glad you like it?

Blog post written by Stefanie Wulff based on an article in the latest issue of Language and Cognition

When do native speakers say I’m glad that you like it, and when do they drop the complementizer and simply say I’m glad you like it? The factors influencing complementizer realization have intrigued many researchers over the last few decades. The emerging consensus is that various factors jointly determine whether the complementizer is realized, such as how long or complex the parts are that the complementizer connects or how frequent the verb is in actual language use. This study elaborated on previous studies by asking: what about non-native speakers of English? The “rules” for complementizer variation are never taught in the English language classroom, and yet second language learners, at least at an advanced level of proficiency, also drop the complementizer in specific contexts. We wanted to know: are these contexts the same as for native speakers? That is, do the same factors that govern native speakers’ choices also impact learners’ choices?

To that end, we retrieved 3,622 instances of complement constructions from native English corpora and German as well as Spanish learner English corpora. We coded all instances for the factors proposed in previous research and ran a logistic regression model. Our final model suggests that advanced-level learners of English are in fact closely aligned with native speakers in that their choices are, generally speaking, influenced by the same factors. However, a closer look reveals that the relative importance of these factors differs: learners rely more on processing-related factors such as clause complexity, and they are comparatively less sensitive to the statistical associations between a given verb and the complementizer. Overall, learners realize the complementizer more often in contexts in which a native speaker may well omit it. Also, comparing the German and the Spanish learners, we observed that the German speakers make more native-like choices than their Spanish peers. We interpret these findings to reflect (i) the comparatively higher cognitive cost associated with deploying your second language vs. your first, (ii) that second language learning will vary as a function of the first language the learner speaks, and (iii) the crucial role that the input learners receive plays in learning to make native-like choices.
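The modelling step can be pictured with a minimal logistic regression fitted by gradient descent. The data and the two predictors below (a clause-complexity score and a verb–complementizer association score) are invented stand-ins for illustration, not the study’s actual corpus coding or model:

```python
import math

# Invented toy data: each row is (clause_complexity, verb_association);
# y = 1 means the complementizer 'that' was realized.
X = [(0.9, 0.1), (0.8, 0.2), (0.7, 0.3), (0.2, 0.9), (0.1, 0.8), (0.3, 0.7)]
y = [1, 1, 1, 0, 0, 0]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit(X, y, lr=0.5, epochs=2000):
    """Fit logistic-regression weights (bias first) by stochastic gradient descent."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        for xs, target in zip(X, y):
            p = sigmoid(w[0] + sum(wi * xi for wi, xi in zip(w[1:], xs)))
            err = p - target
            w[0] -= lr * err
            for i, xi in enumerate(xs):
                w[i + 1] -= lr * err * xi
    return w

w = fit(X, y)

def predict(xs):
    """Predicted probability that the complementizer is realized."""
    return sigmoid(w[0] + sum(wi * xi for wi, xi in zip(w[1:], xs)))
```

In a real study the fitted coefficients, not just the predictions, carry the interest: comparing their relative magnitudes across native and learner data is what reveals the differing weight of processing-related versus association-related factors.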

Read the full article ‘That-variation in German and Spanish L2 English’ here

The evolution of language – A special issue from Language and Cognition

Post written by David Kemmerer, General Editor of Language and Cognition 

As part of the continuing growth and diversification of Language and Cognition, a special double issue in 2013 focuses on the evolution of language.

Although this controversial topic has been discussed for centuries from different perspectives, it is probably safe to say that genuine progress has only begun to take place during the past 25 years or so, as increasing numbers of researchers have started pooling a broad array of relevant ideas and discoveries from a tremendous range of disciplines, including, in alphabetical order, anthropology, archeology, artificial life, biology, cognitive science, genetics, linguistics, modeling, neuroscience, paleontology, primatology, and psychology.

The aim of the special double issue is to give readers a unique window onto some recent advances in this exciting multidisciplinary field. Inspired by the format of Current Anthropology and Behavioral and Brain Sciences, the lead paper is a précis of Michael Arbib’s 2012 book entitled How the brain got language: The Mirror System Hypothesis. Although the framework that Arbib has constructed is only one of several accounts that are currently being debated, it stands out from most of the others in the breadth of the phenomena that it attempts to explain, in the amount of theoretical and empirical work that it draws upon, and in the coherence of the overall, multi-step story. Following the précis of the book, there are 12 commentaries that have been specially commissioned by experts in the wide spectrum of disciplines that are relevant to Arbib’s framework. And following those commentaries, there is a detailed response from Arbib.

Given that the evolution of language is an inherently fascinating topic that has been attracting the attention of a growing number of scientists, and given that this topic is treated here from many different vantage points, there should be something of interest for everyone!

Access the double special issue of Language and Cognition here