Where is Applied Linguistics headed? Cambridge Journal editors weigh in

In advance of the upcoming AAAL Annual Meeting in Chicago, we asked editors of Cambridge applied linguistics journals for their thoughts on the state of the field.

Where is applied linguistics headed? Are there new approaches, methods or priorities that you think will have real impact on research and related practice in coming years?

Martha Crago, Editor of Applied Psycholinguistics: “In the next few years, two major developments, one technological and one social, will have a striking impact on applied linguistics: 1) The disruptive technology of machine learning (artificial intelligence) is based on the early work on neural networks in neuropsychology as well as on reinforcement learning that was once considered a learning mechanism for language acquisition. These new technological developments are likely to circle back and inform or intersect with work in applied psycholinguistics and its underlying theories. In addition, “big data” (computational linguistics) and its growing ability to look at large data sets in increasingly sophisticated ways will become a future direction for the field. 2) Human migration has reached vast proportions in the last few years. It is leading to very large numbers of refugees who are either in transit, often for years, or who are arriving to become residents, both legal and illegal, in a new country. These migratory patterns have striking implications for multilingualism and multiliteracy in people of all ages. This in turn has consequences for social integration and education. As a result, refugee populations will become a major preoccupation for applied psycholinguistic researchers.”

Alex Boulton, Editor of ReCALL: “Applied linguistics is itself a controversial term which means different things to different people, and covers different domains in different languages. In French, for example, “linguistique appliquée” fell largely out of favour in the 1990s as it suggested simply applying linguistics to real-world problems. What is probably the largest domain is now referred to as “didactique” – i.e. language teaching and learning. Various initiatives have been undertaken to explore this at national and international levels, notably through AILA – the International Association of Applied Linguistics, founded in France in the 1960s.

Published by CUP and owned by EUROCALL, ReCALL is a leading journal focusing specifically on computer-assisted language learning. In the 30 years of its existence, we have seen increasing democratisation of technology and access to it, especially via the internet. This is evident in everyday practices (learners no longer have to be in a classroom or a computer room) as well as in the research being conducted into informal online learning. While early papers tended to place the software itself at the centre of the paper, today the emphasis is more on what actually happens in the learning process when using various types of technologies in different situations for different purposes.
In terms of methodologies, various surveys have found the majority of studies in applied linguistics to be quantitative in nature; while these were traditionally considered the most prestigious by many researchers, the situation is certainly evolving. There is no question of abandoning quantitative work, especially for learning outcomes or large-scale surveys, but there seems to be increasing room for more qualitative approaches, which allow greater emic understanding of the complexity of the learning process and the individuals involved. Of particular interest are mixed methods studies which, appropriately conducted, can draw on the strengths of both quantitative and qualitative work. Another evolution is the rise of rigorous research syntheses of various types, from the quantitative meta-analysis to the more qualitative narrative synthesis, each with its advantages and disadvantages.

Julia R. Herschensohn, Coordinating Editor, Journal of French Language Studies: “As we approach the third decade of the 21st century, the most important opportunity that I see in applied linguistics research is the accessibility of big data—large corpora of empirical evidence that are available online to all researchers. Cloud storage, open access and increased computational power open a range of options for obtaining and analyzing evidence of language use and acquisition. Open access databases allow scholars to use statistically significant quantities to form generalizations, test hypotheses, replicate earlier studies and reanalyze previous research using different methodologies. The combination of language data—including controlled experiments, monitored production, informal speech and spontaneous dialogue—and sophisticated statistical software has already impacted research and related practices and will continue to expand in the following decades. As Editor of JFLS, I have seen a shift in the submissions we receive to a much larger number of articles including evidence from public access databases. For example, our next special themed issue comprises articles drawing from a few corpora of carefully transcribed and annotated examples of contemporary French speech that are analyzed by several authors in terms of lexical, morphosyntactic and phonological characteristics. The contributors bring to bear different methodologies and sub-discipline perspectives while mining the same source of data. The availability of big data allows scholars to test theoretical hypotheses with solid statistical tools to further our knowledge of how language is acquired and used under various circumstances.”

Graeme Porte, Editor of Language Teaching

Recurrence, revitalization, and replication in Applied Linguistics

“Like any dynamic field of science, Applied Linguistics (AL) is both in constant change and ever eager to be of practical use to those who benefit from its research discoveries. As researchers we are urged to “apply” our discoveries – ideally to some kind of language learning context. Since those contexts will almost certainly involve a practitioner, the nexus between the FL teacher and the AL researcher should be a close and mutually beneficial one.

We have been lucky in that both AL researchers and practitioners have traditionally embraced new methodologies and promising trends – together with the occasional fad and damp squib – with anticipation. A cursory historical overview of these apparently novel approaches will, however, reveal timely re-emergences of elements which are key to many of these movements.

There has been a tendency actually to re-discover what we often think we are discovering and then mould it through more modern hands into something more acceptably novel, consistent with current attitudes and/or linguistic fashion (Cook, 2003[1]). Such “discoveries” can be seen as heralding a new age for practitioners or even paradigm shifts for researchers. Whole new careers can be forged, exciting new angles on L2 learning revealed – and novel textbook series sold by the thousands! Some teaching methods – such as TPR or Suggestopaedia – can be short-lived; others, such as the “communicative approach”, can become thoroughly regenerated into other methods. Yet others, as Michael Swan reminds us in his latest position piece for us (Language Teaching, 51.2 April), are regularly dismissed in their entirety as deficient approaches, only for latter-day AL pioneers to uncover seemingly redeeming kernels of wisdom in their theoretical and practical bases. In the case of “Grammar-Translation”, for example, there are still many L2 learners who feel that knowledge of grammar and L1–L2 equivalences improves their understanding of the target language and continues to satisfy a perceived need for going about “serious” language learning.

A similar picture might be painted of our research paradigms. In our embracing of AL as an essentially social science endeavour, we might be accused of being over-keen to dismiss methodological approaches which smack too much of a “pure science” rather than a “social science” approach. Once again, however, we are witnessing a recent re-visiting of these previously out-of-favour research approaches.

Language Teaching is now at the forefront of a push for a renewed effort to recognise the contribution of replication studies to our literature. Replicating previous studies as a serious research methodology has only emerged onto the applied linguistics scene relatively recently; it has been a subject of interest elsewhere for much longer and has appeared as a fleeting subject of debate in the general social sciences literature for decades. Its feted re-appearance owes much to the concern expressed by many who depend on our research for its possible pedagogical implications and applications and who are rightly concerned about the presence of undetected error or the lack of confirmatory evidence provided across many of our empirical endeavors.

We may go back empirically to a study for several reasons, but that revisiting is predicated on the idea that no one piece of research (or researcher!) can include, or control for, all the many variables that might affect an outcome. It follows that a particularly important study only stands to benefit from such renewed attention if it can have its findings more precisely validated, its reliability focused on, its generalization tested, or even delimited, and its eventual application in learning contexts more finely tuned.

[1] Cook G. (2003). Applied Linguistics. Oxford: Oxford University Press

Andrew Moody, Editor of English Today: “The question of where Applied Linguistics is headed is a very difficult one to address because the field is already quite diverse. As a new editor (for English Today), I don’t feel highly qualified to be making predictions about the future of the disciplines that work within Applied Linguistics, but there are two developments that I have noticed as a reader and researcher in sociolinguistics, and I think that these two are likely to become more prominent.

First, sociolinguists (and this is especially relevant to sociolinguists who are working with the English language) have become increasingly comfortable working with data that would traditionally have been discarded as ‘non-spontaneous’ or ‘not naturally occurring’. Data sources might include English-language media, literary texts or texts from popular culture. These texts show a rich interplay between local voices (ones that might be thought of as ‘authentic’ languages) and global voices, and the sociolinguistic analyses of these kinds of interplays and tensions (between, for example, ‘global English’ and ‘local English’) have grown in sophistication and cogency. Consequently, the relationship between language and identity — a relationship that all too often had been conceptualised as a simple and static one-to-one exchange between identity and language use — is a relationship that is increasingly being explored as more pluralistic, situated, complex and performative. I imagine that this trend will continue within the disciplines of Applied Linguistics for some time.

Secondly, I have also noticed within the space of my career in English sociolinguistics an increasing degree of comfort that teachers and researchers have when discussing ‘Englishes’, and the linguistic variation that is represented by such a term. When I was writing my PhD dissertation on Hong Kong English, the consensus opinion among scholars working in Hong Kong (with only a few very prominent exceptions) was that ‘there was no such thing as Hong Kong English’. The justification for that point of view was that the variety of English used in Hong Kong was a ‘learner variety’ and that this somehow negated or diminished any status that the language might have as a variety of English that deserved to be studied sociolinguistically. Increasingly there is a willingness to accept the existence and the status of varieties like Hong Kong English, Japanese English, Chinese English, etc. and to allow these varieties to be studied more fully as English varieties. I expect that this trend will also continue for some time within English sociolinguistics, and within applied linguistics more generally.”

 

Going to AAAL? Visit the Cambridge booth to browse our journals, pick up new books, and grab a few freebies! Even if you are not attending, visit our website for 20% off all books on display.

Applied Psycholinguistics Call For Editor Proposals

Professor Martha Crago will conclude her tenure as Editor of Applied Psycholinguistics (AP) in December 2018. Cambridge University Press is now inviting applications for the position of Editor. A team of Co-Editors will also be considered. Final appointment decisions will be made by the Syndicate of Cambridge University Press.

The deadline for applications is January 15, 2018.

AP is a refereed journal of international scope publishing original research papers on the psychological and linguistic processes involved in language. Each volume contains six issues with articles examining language processing, language development, language use and language disorders in adults and children with a particular emphasis on cross-language and second language/bilingual studies. The journal gathers together the best work from a variety of disciplines including linguistics, psychology, reading, education, language acquisition, communication disorders and neurosciences. In addition to research reports, special theme- based issues are considered for publication as are invited keynote articles and commentaries.

AP published volume 38 in 2017. Its 2016 Impact Factor was 1.970, placing it 16th out of 182 journals in the Linguistics JCR (ranked by Impact Factor).

Full details and instructions for proposal submission can be found on AP’s website (click here).

Applied Psycholinguistics Readership Survey

Applied Psycholinguistics publishes original research papers on the psychological processes involved in language. It examines language development, language use and language disorders in adults and children with a particular emphasis on cross-language studies. The journal gathers together the best work from a variety of disciplines including linguistics, psychology, reading, education, language learning, speech and hearing, and neurology.

The journal is currently conducting a readership survey and the editor invites you to share your thoughts. The survey is completely anonymous. However, we are offering a prize draw as thanks for your input. Participants who complete the survey and submit contact information will be entered into a prize draw to win one of two Amazon.com gift cards for $125 / £100.

The readership survey will take approximately five to ten minutes to complete and your feedback is greatly appreciated.

If you are not familiar with Applied Psycholinguistics, the survey will provide the option of temporary free access, after which you may complete the full survey and enter the prize draw.

The survey is open until May 31 – click here to take it now.

The merits of a case study approach in communication disorders

Blog post by Louise Cummings, Nottingham Trent University.

The case study has had something of a bad press in recent years. How often do we hear that case studies provide low-quality evidence of the effectiveness of an intervention in speech and language therapy? The emphasis on evidence-based practice in healthcare has seen the case study relegated to the bottom of the hierarchy of evidence. From this lowly position, the case study is seen to fall short of the scientific objectivity and rigour which are the hallmarks of other types of investigation, most notably systematic reviews and randomized controlled trials. The result is that researchers, teachers and practitioners in a wide range of disciplines feel almost duty-bound to preface their use of case studies with a health warning – these studies are of limited scientific value and should be treated as such. I have no intention of issuing health warnings or adopting an apologetic approach to the use of case studies. Indeed, I believe they offer immeasurable benefits in instructional and research contexts in communication disorders and elsewhere. These benefits are threefold.

First, case studies are the most effective way of introducing students of communication disorders to the key skill which all clinicians must possess, namely, clinical decision-making. Speech and language therapists must make decisions on a daily basis about how best to assess and treat their clients, when to terminate a course of therapy and refer clients to other medical and health professionals, and how to measure the outcomes of intervention. Of course, it is true that clinicians acquire and refine most of their skills of clinical decision-making ‘on the job’. But it is also possible to get a head start on this process by interrogating the basis of decisions that are taken in the management of actual clients. This is where the case study comes into its own. By exploring the basis of the full gamut of decisions which clinicians must make in relation to a client, students can begin to assimilate the very essence of this most elusive of clinical skills. The case study is not just the most effective, but the only, method by means of which this can be achieved.

Second, case studies provide an invaluable opportunity for students of communication disorders to put their skills of linguistic analysis into practice. The narrative produced by an adult with a traumatic brain injury or the conversational exchange between a client with aphasia and his or her spouse is the richest possible data on which to fine tune these skills. I will not be alone in lamenting the lack of such data in modern research articles in communication disorders, the emphasis of which is on the reporting of largely quantitative results in the shortest space possible. It is something of an irony that as electronic publications have surpassed print publications, in journals at least, the extended extracts of language often seen in older research papers have all but disappeared in more modern articles. If anything, an electronic format should make the inclusion of client narratives and conversational exchanges more, not less, likely to be published. There is simply nowhere for the student of communication disorders to get this practice other than through case studies.

Third, all medical and health professionals are encouraged to see the client first, and their medical condition or other disorder second. This is no less the case for speech and language therapists who must learn that aphasia, dysarthria and other communication disorders sit alongside an array of factors which can influence a client’s adjustment to communication disability. Case studies are the best context in which to appreciate the complex interplay that exists between communication disorders and these factors.

For all these reasons, I have championed a case study approach to communication disorders in my recent book Case Studies in Communication Disorders (Cambridge University Press, 2016). I urge other researchers, teachers and practitioners in speech and language therapy to do likewise.

Click here for a free extract

The child’s journey into language: Some frequently asked questions…

Blog post written by Eve V. Clark (Stanford University), author of the recently published First Language Acquisition (3rd Edition)

 How early do infants start in on language?

Even before birth, babies recognize intonation contours they hear in utero, and after birth, they prefer listening to a familiar language over an unfamiliar one.  And in their first few months, they can already discriminate between speech sounds that are the same or different.

 How early do infants understand their first words, word-endings, phrases, utterances? 

Children learn meanings in context, both from hearing repeated uses of words in relation to their referents, and from feedback from adults when they use a word correctly or incorrectly.  When a child is holding a ball, the mother might say “Ball.  That’s a ball”, and the child could decide that “ball” picks out round objects of that type. Still, it may take many examples to establish the link between a word-form  (“ball”) and a word-meaning (round objects of a particular type) and to relate the word “ball” to neighbouring words (throw, catch, pick up, hold).  It takes even longer for the child’s meaning of a word to fully match the adult’s.

When do infants produce their first words and truly begin to talk?  

Infants babble from 5-10 months on, giving them practice on simple syllables, but most try their first true words at some time between age 1 and age 2 (a broad range).  They find certain sounds harder to pronounce than others, and certain combinations (e.g., clusters of consonants) even harder.  It therefore takes practice to arrive at the adult pronunciations of words –– to go from “ba” to “bottle”, or from “ga” to “squirrel”.   Like adults, though, children understand much more than they can say.

 What’s the relation between what children are able to understand and what they are able to say?  

Representing the sound and meaning of a word in memory is essential for recognizing it in the speech of others. Because children are able to understand words before they produce them, they can make use of the representations of words they already understand as models to aim for when they try to pronounce those same words.

 How early do children begin to communicate with others?   

A few months after birth, infants follow adult gaze, and they respond to adult gaze and to adult speech face-to-face, with cooing and arm-waving.  As they get a little older, they attend to the motion in adult hand-gestures.  By 8 months or so, they recognize a small number of words, and by 10 months, they can also attend to the target of an adult’s pointing gestures.  They themselves point to elicit speech from caregivers, and they use gestures to make requests – e.g., pointing at a cup as a request for juice.  They seem eager to communicate very early.

 How do young children learn their first language?  

Parents check up on what their children mean, and offer standard ways to say what the children seem to be aiming for.  Children use this adult feedback to check on whether or not they have been understood as they intended.

Do all children follow the same path in acquisition? 

No, and the reason for this depends in part on the language being learnt.  English, for example, tends to have fixed word order and relatively few word endings, while Turkish has much freer word order and a large number of different word-endings.  Languages differ in their sound systems, their grammar, and their vocabulary, all of which has an impact on early acquisition.

These and many other questions about first language acquisition are explored in the new edition of First Language Acquisition.  In essence, children learn language in interaction with others: adults talk with them about their daily activities – eating, sleeping, bathing, dressing, playing; they expose them to language and to how it’s used, offer feedback when they make mistakes, and provide myriad opportunities for practice.  This book reviews findings from many languages as it follows the trajectories children trace during their acquisition of a first language and of the many skills language use depends on.

First Language Acquisition (third edition), Cambridge University Press 2016

 

 

Why We Gesture: The surprising role of hand movements in communication

Blog post by David McNeill, author of Why We Gesture: The Surprising Role of the Hands in Communication

Why do we gesture? Many would say it brings emphasis, energy and ornamentation to speech (which is assumed to be the core of what is taking place); in short, gesture is an “add-on”, as Adam Kendon, who also rejects the idea, phrases it. However, the evidence is against this. The lay view of gesture is that one “talks with one’s hands”: you can’t find a word, so you resort to gesture. Marianne Gullberg debunks this ancient idea. As she succinctly puts it, rather than gesture starting when words stop, gesture stops as well. So if, contrary to lay belief, we don’t “talk with our hands”, why do we gesture? This book offers an answer.

The reasons we gesture are more profound. Language itself is inseparable from it. While gestures enhance the material carriers of meaning, the core is gesture and speech together. They are bound more tightly than saying the gesture is an “add-on” or “ornament” implies. They are united as a matter of thought itself. Thought with language is actually thought with language and gesture indissolubly tied. Even if the hands are restrained for some reason and a gesture is not externalized, the imagery it embodies can still be present, hidden but integrated with speech (and may surface in some other part of the body, the feet for example).

The book’s answer to the question of why we gesture is not that speech triggers gesture but that gesture orchestrates speech; we speak because we gesture, not we gesture because we speak. In bald terms, to orchestrate speech is why we gesture. This is the “surprise” of the subtitle—“The surprising role of the hands in communication.”

To present this hypothesis is the purpose of the current book. The book is the capstone of three previous books—an inadvertent trilogy over 20 years—“How Language Began: Gesture and Speech in Human Evolution,” “Gesture and Thought,” and “Hand and Mind: What Gestures Reveal about Thought.” It merges them into one multifaceted hypothesis. The integration itself—that it is possible—is part of the hypothesis. Integration is possible because of its central idea—implicit in the trilogy, explicit here—that gestures orchestrate speech.

A gesture automatically orchestrates speech when it and speech co-express the same meaning; then the gesture dominates the speech; syntax is subordinate and breaks apart or interrupts to preserve the integrity of the gesture–speech unit. Orchestration is the action of the vocal tract organized around a manual gesture. The gesture sets its parameters, the order of events within it, and the content of the speech with which it works. The amount of time speakers take to utter sentences is remarkably constant, between 1 and 2 seconds regardless of the number of embedded sentences. It is also the duration of a gesture. All of this is experienced by the speaker as the two awarenesses of the sentence that Wundt in the 19th century distinguished. The “simultaneous” is awareness of the whole gesture–speech unit. It begins with the first stirrings of gesture preparation and ends with the last motion of gesture retraction. The “successive” is awareness of “…individual constituents moving into the focus of attention and out again,” and includes the gesture–speech unit as it and its gesture come to the surface and then sink again beneath it.

The gesture in the first illustration, synchronized with “it down”, is a gesture–speech unit, and using the Wundt concepts we have:

“and Tweety Bird runs and gets a bowling ba[ll and ∅tw drops it down the drainpipe]”

“bowling ba” – simultaneous awareness of gesture–speech unity starts
“[ll and ∅tw drops” – gesture–speech unity enters successive awareness
“it down” – gesture–speech unity leaves successive awareness
“the drainpipe]” – simultaneous awareness of gesture–speech unity ends

The transcript [1] shows the speech the gesture orchestrated and when: the entire stretch, from “ball” to “drainpipe”, is the core meaning of “it down” plus the image of thrusting the bowling ball into the drainpipe in simultaneous awareness. The same meaning appeared in successive awareness, the gesture stroke in the position the construction provided, there orchestrating “it” and “down” together.


The “drops” construction provides the unpacking template and adds linguistic values. Its job is to present the gesture–speech unit, including Tweety’s agent-power in the unit. Gesture–speech unity is alive and not effaced by constructions. To the contrary, the construction presents the Sylvester-up/Tweety-down conflict in socially accessible form. This unit must be kept intact in the speech flow. What is striking, and why the example is illustrative, is that “it down” was divided by the construction into different syntactic constituents (“it” the direct object, “down” a locative complement), yet the word pair remained a unit orchestrated by the gesture. In other examples, speech stops when continuing would break up a gesture–speech unit; constructions do not control gesture–speech units – the unit controls them. A gesture–speech unity dominates.

How did it all come about? It occurred because “it down,” plus the co-expressive thrusting gesture, was the source (the “growth point”) of the sentence. The growth point came about as the differentiation of a field of equivalents having to do with HOW TO THWART SYLVESTER: THE BOWLING BALL DOWN. It unpacked itself into shareable form by “summoning” the causative construction (possible because a causative meaning was in the gesture–speech unit from the start of the preparation – the speaker’s hands already in the shape of Tweety’s “hands” as the agent of thrusting). Thus “it down” and its stroke were inviolate from the start: the stroke orchestrated the two words as a unit, and the gesture phrase the construction as a whole. I believe the situation illustrated with “it down” permeates the production of speech in all conditions and different languages.

—————————————————————————————————————————————————-

1 Participants retell an 8-minute Tweety and Sylvester classic they have just watched from memory to a listener (a friend, not the experimenter). Using Kendon’s terminology and our notation, the gesture phrase is marked by “[” and “]”. The stroke, the image-bearing phase and only obligatory phase of the gesture, is marked in boldface (“it down”). Preparation is the hand getting into position to make the stroke and is indicated by the span from the left bracket to the start of boldface (“ba[ll and ∅tw drops”). Preparation shows that the gesture, with all its significance, is coming into being – there is no reason the hands move into position and take on form other than to perform the stroke. Holds are cessations of movement, either prestroke (“drops”), the hand frozen awaiting co-expressive speech, or poststroke (“down”), the hand frozen in the stroke’s ending position and hand shape after movement has ceased until co-expressive speech ends. Holds of either kind are indicated with underlining. They provide a precise synchrony of gesture-orchestrated speech in successive awareness. Retraction is also an active phase, the gesture not simply abandoned but closing down (“the drainpipe,” movement ending as the last syllable ended – in some gestures, though not here, the fingers creep along the chair arm rest until this point is reached). In writing growth points – a field of equivalents being differentiated and the psychological predicate differentiating it – we use FIELD OF EQUIVALENTS: PSYCHOLOGICAL PREDICATE (“HOW TO THWART SYLVESTER: THE BOWLING BALL DOWN”).

—————————————————————————————————————————————————-

A “strong prediction.” Our arguments predict that GPs in successive awareness remain intact no matter the constructions that unpack them. This follows from the expectation that unpacking will not disrupt a field of equivalents or its differentiation. Belonging to different syntactic constituents – the “it” with “drops” and the “down” with “the drainpipe” – did not break apart the “it down” GP. Instead, syntactic form adapted to gesture. The example shows that gesture is a force shaping speech, not speech shaping gesture. Gesture–speech unity means that speech and gesture are equals, and in gesture-orchestrated speech the dynamic dimension enters from the growth point. In a second version of the “strong prediction,” speech stops if continuing would break the GP apart. The absolute need to preserve the GP in successive awareness then puts a brake on speech flow, even when it means restarting with a less cohesive gesture–speech match-up that doesn’t break apart the GP.

Gestures of course do not always occur. This is itself an aspect of gesture. There is natural variation in gesture occurrence. Apart from forced suppressions (as in formal contexts), gestures fall on an elaboration continuum, their position an aspect of the gesture itself. The reality is imagery with speech ranging over the entire continuum. It is visuoactional imagery, not a photo. Gesture imagery linked to speech is what natural selection chose, acting on gesture–speech units free to vary in elaboration. As what Jan Firbas called communicative dynamism varies, the gesture–speech unit moves from elaborate movement to no movement at all. To speak of gesture–speech unity we include gestures at all levels of elaboration, including micro-level steps.

An example of the difference it makes is a word-finding study by Sahin et al. of conscious patients about to undergo open-skull surgery, from which the authors conclude that lexical, grammatical and phonological steps occur with distinctive delays of about 200 ms, 320 ms and 450 ms, respectively. We hypothesize that gesture should affect this timing for the 1–2 seconds the orchestration lasts (no gestures were recorded in the Sahin study). If the idea unit differentiating a past time in a field of meaningful equivalents begins with an inflected verb plus imagery, does the GP’s onflashing wait 320 or 450 ms? Delay seems unlikely (although it would be fascinating to find). It may be no faster (and perhaps slower) to say “bounced” in an experiment where a subject is told to make the root word into a past tense than to differentiate a field of equivalents with past time gesturally spatialized and the gesture in this space.

To see gesture as orchestrating speech opens many windows – how language is a dynamic process; a glimpse of how language possibly began; that children do not acquire one language but two or three in succession; that gestures are unique forms of human action; that a specific memory evolved just for gesture–speech unity; and how speech works so swiftly, everything (word-finding, unpacking, gesture–speech unity, gesture placement, and context absorption) done in a couple of seconds with workable (not necessarily complete) accuracy.

The Acquisition of Syntactic Structure: Animacy and Thematic Alignment

Post written by author Misha Becker discussing her recently published book ‘The Acquisition of Syntactic Structure’.

Young children are fascinated by animals and captivated when inanimate things are made to come alive. Is there some way their understanding of the difference between “alive” and “not alive” can help them learn language?

In this book I explain a well-known puzzle in linguistic theory by arguing just that. Children expect the sentence subject (often the “do-er” of an action) to be animate, alive. So when they encounter a sentence where the subject is the rock or the house they are led to revise their understanding of the sentence to create a more complex underlying structure. This is what helps them understand the difference between a sentence like The house is easy to see, where the house is the thing being seen, and The girl is eager to see, where the girl is (or will be) doing the seeing. If you didn’t know the meaning of easy or eager, as very young children will not, how would you interpret these sentences? Imagine you hear a sentence like The girl/house is daxy to see. Does it matter whether the subject is girl or house in your guess about what daxy means, and in your interpretation of the seeing event?

I came to the idea for this book when I noticed how strongly adult speakers were influenced by animacy when I tried to make them think of certain abstract structures. When presented with “The girl ____ to be tall,” people were more likely to write a verb like want or claim in the blank, but presented with “The mountain ____ to be tall,” they were more likely to write seem or appear. Yet the underlying structure of the sentence differs depending on whether it contains want/claim or seem/appear. In linguistic parlance, the subject of seem/appear is “derived” – it doesn’t really belong, thematically, to the verb, and in this sense the structure is more abstract and complex. It occurred to me that if adults were so strongly influenced by animate vs. inanimate subjects, then children might be as well.

This book describes numerous studies with children showing how the fundamental distinction between alive and not-alive interacts with their understanding of language and the world around them. But it also examines other facets of the animacy distinction with regard to language: how languages around the world place restrictions on animate and inanimate sentence subjects, how adults use animacy in their understanding of sentence structure, how and when babies first begin to represent the concept of animacy, and how computational models can be developed to simulate the use of a distinction like animacy in language learning. The final chapter of the book addresses the timeless question of where this understanding comes from – is the concept of animacy innate or learned, or both?

Find out more about Misha Becker’s new book ‘The Acquisition of Syntactic Structure’, published by Cambridge University Press.

Bilingual Cognitive Advantage: Where Do We Stand?

Linguistic experience and its effect on cognition.

The following post by Dr. Aneta Pavlenko appeared on the Psychology Today blog, “Life as a bilingual”.

Like all other walks of life, academia is not immune to fashions. In the study of bilingualism, one such trend has been the study of “the bilingual cognitive advantage”, the theory that the experience of using two languages – and selecting one while inhibiting the other – affects brain structure and strengthens ‘executive control’, akin to other experiences such as musical training, navigation, and even juggling. This strengthening has been linked to a variety of findings: the superiority of bilingual children and adults in performance on tasks requiring cognitive control, the resistance of bilingual brains to cognitive decline, and the delayed onset of dementia (see here).

Touted in the popular media, these findings captured our hearts and minds, and for good reason: for those of us who are bi- and multilingual, this is good news, and the focus itself is a pleasant change from the concerns about bilingual disadvantage that permeated many early debates on bilingualism. But has the pendulum swung too far in the other direction? Has bilingualism become a commodity we are trying to sell, instead of an experience we are trying to understand? And is there, in fact, a consensus that the knowledge of more than one language offers us something more than the joys of reading and conversing in two languages and a leg up in learning a third, among other things?

For the remainder of the post, please click here

References:
Baum, S. & Titone D. (2014). Moving towards a neuroplasticity view of bilingualism, executive control, and aging. Applied Psycholinguistics, 35, 857-894.
Valian, V. (2014, in press). Bilingualism and cognition. Bilingualism: Language and Cognition.

Predicting risk for oral and written language learning difficulties in students educated in a second language

Post written by Dr. Caroline Erdos based on an article from Applied Psycholinguistics

Students who struggle with oral language and literacy are at increased risk for dropping out of school. The gap between struggling students and their typically developing peers is smallest early on, and therefore the chances of bridging that gap are greatest in the early grades. However, more and more students have had little or no exposure to the language of schooling until their first day of school, and this makes it difficult for school personnel to disentangle true risk for learning disability from incomplete second language acquisition. The result is that identification and intervention are often delayed in the case of second language learners, even those in immersion classes (e.g., native speakers of English attending French immersion school), thus placing them at a significant disadvantage compared to native speakers of the language of schooling (e.g., native speakers of English attending English school), who often begin to receive help with oral language or (pre)literacy as early as kindergarten.

A promising avenue is to use students’ skills in oral language and literacy in their first language to predict how they will eventually perform in these areas in their second language. It is crucial, however, to fully understand the possibilities and limitations of this method.

A second, related issue is the importance of providing help that is most likely to have the greatest impact on students’ academic success. Numerous studies and clinical experience have shown that the more targeted the help, the more likely students are to make gains. Therefore, once a child has been identified as presenting with oral language or literacy difficulties, it is imperative to identify the specific area of difficulty within each domain – in the area of oral language: vocabulary, grammar, phonology, discourse, or pragmatics; and in the area of literacy: phonological processing, letter-sound knowledge, decoding accuracy, decoding speed, lexical knowledge, or reading comprehension. For example, a child who struggles to understand what he reads is not likely to benefit from intervention targeting letter-sound knowledge, unless poor letter-sound knowledge was the primary cause of his inability to understand what he reads. Exactly how to provide targeted intervention is better understood for some areas, for example decoding accuracy or decoding speed, than for others, for example oral language or reading comprehension. However, even in these less understood domains there is a general consensus that intervention focusing on vocabulary (breadth and depth) and complex language skills would be useful.

Read the full article until July 31, 2014:

“Predicting risk for oral and written language learning difficulties in students educated in a second language” by Caroline Erdos, Fred Genesee, Robert Savage and Corinne Haigh