Applied Psycholinguistics Readership Survey

Applied Psycholinguistics publishes original research papers on the psychological processes involved in language. It examines language development, language use and language disorders in adults and children with a particular emphasis on cross-language studies. The journal gathers together the best work from a variety of disciplines including linguistics, psychology, reading, education, language learning, speech and hearing, and neurology.

The journal is currently conducting a readership survey and the editor invites you to share your thoughts. The survey is completely anonymous. However, we are offering a prize draw as thanks for your input. Participants who complete the survey and submit contact information will be entered into a prize draw to win one of two Amazon.com gift cards for $125 / £100.

The readership survey will take approximately five to ten minutes to complete and your feedback is greatly appreciated.

If you are not familiar with Applied Psycholinguistics, the survey will provide the option of temporary free access, after which you may complete the full survey and enter the prize draw.

The survey is open until May 31 – click here to take it now.

The merits of a case study approach in communication disorders

Blog post by Louise Cummings, Nottingham Trent University.

The case study has had something of a bad press in recent years. How often do we hear that case studies provide low-quality evidence of the effectiveness of an intervention in speech and language therapy? The emphasis on evidence-based practice in healthcare has seen the case study relegated to the bottom of the hierarchy of evidence. From this lowly position, the case study is seen to fall short of the scientific objectivity and rigour that are the hallmarks of other types of investigation, most notably systematic reviews and randomized controlled trials. The result is that researchers, teachers and practitioners in a wide range of disciplines feel almost duty-bound to preface their use of case studies with a health warning – these studies are of limited scientific value and should be treated as such. I have no intention of issuing health warnings or adopting an apologetic approach to the use of case studies. Indeed, I believe they offer immeasurable benefits in instructional and research contexts in communication disorders and elsewhere. These benefits are threefold.

First, case studies are the most effective way of introducing students of communication disorders to the key skill which all clinicians must possess, namely, clinical decision-making. Speech and language therapists must make decisions on a daily basis about how best to assess and treat their clients, when to terminate a course of therapy and refer clients to other medical and health professionals, and how to measure the outcomes of intervention. Of course, it is true that clinicians acquire and refine most of their skills of clinical decision-making ‘on the job’. But it is also possible to get a head start on this process by interrogating the basis of decisions that are taken in the management of actual clients. This is where the case study comes into its own. By exploring the basis of the full gamut of decisions which clinicians must make in relation to a client, students can begin to assimilate the very essence of this most elusive of clinical skills. The case study is not just the most effective, but the only, method by means of which this can be achieved.

Second, case studies provide an invaluable opportunity for students of communication disorders to put their skills of linguistic analysis into practice. The narrative produced by an adult with a traumatic brain injury, or the conversational exchange between a client with aphasia and his or her spouse, is the richest possible data on which to fine-tune these skills. I will not be alone in lamenting the lack of such data in modern research articles in communication disorders, the emphasis of which is on the reporting of largely quantitative results in the shortest space possible. It is something of an irony that, as electronic publications have surpassed print publications in journals at least, the extended extracts of language often seen in older research papers have all but disappeared from more modern articles. If anything, an electronic format should make the inclusion of client narratives and conversational exchanges more, not less, likely. There is simply nowhere for the student of communication disorders to get this practice other than through case studies.

Third, all medical and health professionals are encouraged to see the client first, and their medical condition or other disorder second. This is no less the case for speech and language therapists who must learn that aphasia, dysarthria and other communication disorders sit alongside an array of factors which can influence a client’s adjustment to communication disability. Case studies are the best context in which to appreciate the complex interplay that exists between communication disorders and these factors.

For all these reasons, I have championed a case study approach to communication disorders in my recent book Case Studies in Communication Disorders (Cambridge University Press, 2016). I urge other researchers, teachers and practitioners in speech and language therapy to do likewise.

Click here for a free extract

The child’s journey into language: Some frequently asked questions…

Blog post written by Eve V. Clark (Stanford University), author of the recently published First Language Acquisition (3rd Edition).

 How early do infants start in on language?

Even before birth, babies recognize intonation contours they hear in utero, and after birth, they prefer listening to a familiar language over an unfamiliar one.  And in their first few months, they can already discriminate between speech sounds that are the same or different.

 How early do infants understand their first words, word-endings, phrases, utterances? 

Children learn meanings in context, both from hearing repeated uses of words in relation to their referents, and from feedback from adults when they use a word correctly or incorrectly.  When a child is holding a ball, the mother might say “Ball.  That’s a ball”, and the child could decide that “ball” picks out round objects of that type. Still, it may take many examples to establish the link between a word-form  (“ball”) and a word-meaning (round objects of a particular type) and to relate the word “ball” to neighbouring words (throw, catch, pick up, hold).  It takes even longer for the child’s meaning of a word to fully match the adult’s.

When do infants produce their first words and truly begin to talk?  

Infants babble from 5 to 10 months on, which gives them practice with simple syllables, but most try their first true words at some time between age 1 and age 2 (a broad range).  They find certain sounds harder to pronounce than others, and certain combinations (e.g., clusters of consonants) even harder.  It therefore takes practice to arrive at the adult pronunciations of words – to go from “ba” to “bottle”, or from “ga” to “squirrel”.  Like adults, though, children understand much more than they can say.

 What’s the relation between what children are able to understand and what they are able to say?  

Representing the sound and meaning of a word in memory is essential for recognizing it when other speakers use it.  Because children are able to understand words before they produce them, they can use the representations of words they already understand as models to aim for when they try to pronounce those same words.

 How early do children begin to communicate with others?   

A few months after birth, infants follow adult gaze, and they respond to adult gaze and to adult speech face-to-face, with cooing and arm-waving.  As they get a little older, they attend to the motion in adult hand-gestures.  By 8 months or so, they recognize a small number of words, and by 10 months, they can also attend to the target of an adult’s pointing gestures.  They themselves point to elicit speech from caregivers, and they use gestures to make requests – e.g., pointing at a cup as a request for juice.  They seem eager to communicate very early.

 How do young children learn their first language?  

Parents check up on what their children mean, and offer standard ways to say what the children seem to be aiming for.  Children use this adult feedback to check on whether or not they have been understood as they intended.

Do all children follow the same path in acquisition? 

No, and this is due in part to the language being learnt.  English, for example, tends to have fixed word order and relatively few word endings, while Turkish has much freer word order and a large number of different word-endings.  Languages differ in their sound systems, their grammar, and their vocabulary, all of which have an impact on early acquisition.

These and many other questions about first language acquisition are explored in the new edition of First Language Acquisition.  In essence, children learn language in interaction with others: adults talk with them about their daily activities – eating, sleeping, bathing, dressing, playing; they expose them to language and to how it’s used, offer feedback when they make mistakes, and provide myriad opportunities for practice.  This book reviews findings from many languages as it follows the trajectories children trace during their acquisition of a first language and of the many skills language use depends on.

First Language Acquisition (third edition), Cambridge University Press 2016


Why We Gesture: The surprising role of hand movements in communication

Blog post by David McNeill, author of Why We Gesture: The Surprising Role of the Hands in Communication.

Why do we gesture? Many would say that it brings emphasis, energy and ornamentation to speech (which is assumed to be the core of what is taking place); in short, gesture is an “add-on,” as Adam Kendon (who also rejects the idea) phrases it. However, the evidence is against this. The lay view of gesture is that one “talks with one’s hands”: you can’t find a word, so you resort to gesture. Marianne Gullberg debunks this ancient idea. As she succinctly puts it, rather than gesture starting when words stop, gesture stops as well.  So if, contrary to lay belief, we don’t “talk with our hands”, why do we gesture? This book offers an answer.

The reasons we gesture are more profound. Language itself is inseparable from it. While gestures enhance the material carriers of meaning, the core is gesture and speech together. They are bound more tightly than saying the gesture is an “add-on” or “ornament” implies. They are united as a matter of thought itself. Thought with language is actually thought with language and gesture indissolubly tied. Even if the hands are restrained for some reason and a gesture is not externalized, the imagery it embodies can still be present, hidden but integrated with speech (and may surface in some other part of the body, the feet for example).

The book’s answer to the question of why we gesture is not that speech triggers gesture but that gesture orchestrates speech; we speak because we gesture, not gesture because we speak. In bald terms, to orchestrate speech is why we gesture. This is the “surprise” of the subtitle: “The surprising role of the hands in communication.”

To present this hypothesis is the purpose of the current book. The book is the capstone of three previous books, an inadvertent trilogy spanning 20 years: “How Language Began: Gesture and Speech in Human Evolution,” “Gesture and Thought,” and “Hand and Mind: What Gestures Reveal about Thought.” It merges them into one multifaceted hypothesis. The integration itself, the fact that it is possible, is part of the hypothesis. Integration is possible because of its central idea, implicit in the trilogy and explicit here, that gestures orchestrate speech.

A gesture automatically orchestrates speech when it and speech co-express the same meaning; then the gesture dominates the speech; syntax is subordinate and breaks apart or interrupts to preserve the integrity of the gesture–speech unit. Orchestration is the action of the vocal tract organized around a manual gesture. The gesture sets its parameters, the order of events within it, and the content of the speech with which it works. The amount of time speakers take to utter sentences is remarkably constant, between 1 and 2 seconds regardless of the number of embedded sentences. It is also the duration of a gesture. All of this is experienced by the speaker as the two awarenesses of the sentence that Wundt distinguished in the 19th century. The “simultaneous” is awareness of the whole gesture–speech unit. It begins with the first stirrings of gesture preparation and ends with the last motion of gesture retraction. The “successive” is awareness of “…individual constituents moving into the focus of attention and out again,” and includes the gesture–speech unit as it and its gesture come to the surface and then sink again beneath it.

The gesture in the first illustration, synchronized with “it down”, is a gesture–speech unit, and using the Wundt concepts we have:

“and Tweety Bird runs and gets a bowling ba[ll and ∅tw drops it down the drainpipe]”

Simultaneous awareness of gesture–speech unity starts at “[”, mid-way through “bowling ball”; the gesture–speech unity enters successive awareness at the stroke, “it down”; it leaves successive awareness before “the drainpipe”; and simultaneous awareness of gesture–speech unity ends at “]”.

The transcript [1] shows the speech the gesture orchestrated and when – the entire stretch, from “ball” to “drainpipe”. The core meaning of “it down”, plus the image of thrusting the bowling ball into the drainpipe, is in simultaneous awareness. The same meaning appeared in successive awareness, the gesture stroke in the position the construction provided, there orchestrating “it” and “down” together.


The “drops” construction provides the unpacking template and adds linguistic values. Its job is to present the gesture–speech unit, including Tweety’s agent-power in the unit. Gesture–speech unity is alive and not effaced by constructions; to the contrary, the construction presents the Sylvester-up/Tweety-down conflict in socially accessible form. This unit must be kept intact in the speech flow. What is striking, and why the example is illustrative, is that “it down” was divided by the construction into different syntactic constituents (“it” the direct object, “down” a locative complement), yet the word pair remained a unit orchestrated by the gesture. In other examples, speech stops when continuing would break up a gesture–speech unit; rather than constructions controlling the unit, it controls them. A gesture–speech unity dominates.

How did it all come about? It occurred because “it down,” plus the co-expressive thrusting gesture, was the source (the “growth point”) of the sentence. The growth point came about as the differentiation of a field of equivalents having to do with HOW TO THWART SYLVESTER: THE BOWLING BALL DOWN. It unpacked itself into shareable form by “summoning” the causative construction (possible because a causative meaning was in the gesture–speech unit from the start of the preparation – the speaker’s hands already in the shape of Tweety’s “hands” as the agent of thrusting). Thus “it down” and its stroke were inviolate from the start: the stroke orchestrated the two words as a unit, and the gesture phrase orchestrated the construction as a whole. I believe the situation illustrated with “it down” permeates the production of speech in all conditions and in different languages.

—————————————————————————————————————————————————-

1. Participants retell an 8-minute Tweety and Sylvester classic they have just watched, from memory, to a listener (a friend, not the experimenter). Using Kendon’s terminology and our notation, the gesture phrase is marked by “[” and “]”. The stroke, the image-bearing phase and the only obligatory phase of the gesture, is marked in boldface (“it down”). Preparation is the hand getting into position to make the stroke and is indicated by the span from the left bracket to the start of boldface (“ba[ll and ∅tw drops”). Preparation shows that the gesture, with all its significance, is coming into being – there is no reason the hands move into position and take on form other than to perform the stroke. Holds are cessations of movement, either prestroke (“drops”), the hand frozen awaiting co-expressive speech, or poststroke (“down”), the hand frozen in the stroke’s ending position and hand shape after movement has ceased, until co-expressive speech ends. Holds of either kind are indicated with underlining. They provide a precise synchrony of gesture-orchestrated speech in successive awareness. Retraction is also an active phase, the gesture not simply abandoned but closing down (“the drainpipe,” movement ending as the last syllable ended – in some gestures, though not here, the fingers creep along the chair armrest until this point is reached). In writing growth points – a field of equivalents being differentiated and the psychological predicate differentiating it – we use FIELD OF EQUIVALENTS: PSYCHOLOGICAL PREDICATE (“HOW TO THWART SYLVESTER: THE BOWLING BALL DOWN”).

—————————————————————————————————————————————————-

A “strong prediction.” Our arguments predict that GPs in successive awareness remain intact no matter the constructions that unpack them. This follows from the expectation that unpacking will not disrupt a field of equivalents or its differentiation. Belonging to different syntactic constituents – the “it” with “drops” and the “down” with “the drainpipe” – did not break apart the “it down” GP. Instead, syntactic form adapted to gesture. The example shows that gesture is a force shaping speech, not speech shaping gesture. Gesture–speech unity means that speech and gesture are equals, and in gesture-orchestrated speech the dynamic dimension enters from the growth point. In a second version of the “strong prediction,” speech stops if continuing would break the GP apart. The absolute need to preserve the GP in successive awareness then puts a brake on speech flow, even when it means restarting with a less cohesive gesture–speech match-up that does not break apart the GP.

Gestures of course do not always occur. This is itself an aspect of gesture. There is natural variation in gesture occurrence. Apart from forced suppressions (as in formal contexts), gestures fall on an elaboration continuum, their position an aspect of the gesture itself. The reality is imagery with speech ranging over the entire continuum. It is visuoactional imagery, not a photo. Gesture imagery linked to speech is what natural selection chose, acting on gesture–speech units free to vary in elaboration. As what Jan Firbas called communicative dynamism varies, the gesture–speech unit moves from elaborate movement to no movement at all. To speak of gesture–speech unity we include gestures at all levels of elaboration, including micro-level steps.

An example of the difference it makes is a word-finding study by Sahin et al. of conscious patients about to undergo open-skull surgery, from which the authors conclude that lexical, grammatical and phonological steps occur with distinctive delays of about 200 ms, 320 ms and 450 ms, respectively. We hypothesize that gesture should affect this timing for the 1–2 seconds the orchestration lasts (no gestures were recorded in the Sahin study). If the idea unit differentiating a past time in a field of meaningful equivalents begins with an inflected verb plus imagery, does the GP’s onflashing wait 320 or 450 ms? Delay seems unlikely (although it would be fascinating to find). It may be no faster (and perhaps slower) to say “bounced” in an experiment where a subject is told to make the root word into a past tense than to differentiate a field of equivalents with past time gesturally spatialized and the gesture in this space.

To see gesture as orchestrating speech opens many windows: how language is a dynamic process; a glimpse of how language possibly began; that children do not acquire one language but two or three in succession; that gestures are unique forms of human action; that a specific memory evolved just for gesture–speech unity; and how speech works so swiftly, everything (word-finding, unpacking, gesture–speech unity, gesture-placement, and context-absorption) done in a couple of seconds with workable (not necessarily complete) accuracy.

The Acquisition of Syntactic Structure: Animacy and Thematic Alignment

Post written by author Misha Becker discussing her recently published book ‘The Acquisition of Syntactic Structure‘.

Young children are fascinated by animals and captivated when inanimate things are made to come alive. Is there some way their understanding of the difference between “alive” and “not alive” can help them learn language?

In this book I explain a well-known puzzle in linguistic theory by arguing just that. Children expect the sentence subject (often the “do-er” of an action) to be animate, alive. So when they encounter a sentence where the subject is the rock or the house, they are led to revise their understanding of the sentence to create a more complex underlying structure. This is what helps them understand the difference between a sentence like The house is easy to see, where the house is the thing being seen, and The girl is eager to see, where the girl is (or will be) doing the seeing. If you didn’t know the meaning of easy or eager, as very young children will not, how would you interpret these sentences? Imagine you hear a sentence like The girl/house is daxy to see. Does it matter whether the subject is girl or house in your guess about what daxy means, and in your interpretation of the seeing event?

I came to the idea for this book when I noticed how strongly adult speakers were influenced by animacy when I tried to make them think of certain abstract structures. When presented with “The girl ____ to be tall”, people were more likely to write a verb like want or claim in the blank, but presented with “The mountain ____ to be tall”, they were more likely to write seem or appear. Yet the underlying structure of the sentence differs depending on whether the sentence contains want/claim or seem/appear. In linguistic parlance, the subject of seem/appear is “derived” – it doesn’t really belong, thematically, to the verb, and in this sense the structure is more abstract and complex. It occurred to me that if adults were so strongly influenced by animate vs. inanimate subjects, then children might be as well.

This book describes numerous studies with children showing how the fundamental distinction between alive and not-alive interacts with their understanding of language and the world around them. But it also examines other facets of the animacy distinction with regard to language: how languages around the world place restrictions on animate and inanimate sentence subjects, how adults use animacy in their understanding of sentence structure, how and when babies first begin to represent the concept of animacy, and how computational models can be developed to simulate the use of a distinction like animacy in language learning. The final chapter of the book addresses the timeless question of where this understanding comes from – is the concept of animacy innate or learned, or both?

Find out more about Misha Becker’s new book ‘The Acquisition of Syntactic Structure‘, published by Cambridge University Press.

Bilingual Cognitive Advantage: Where Do We Stand?

Linguistic experience and its effect on cognition.

The following post by Dr. Aneta Pavlenko appeared on the Psychology Today blog, “Life as a bilingual”

Like all other walks of life, academia is not immune to fashions. In the study of bilingualism, one such trend has been the study of “the bilingual cognitive advantage”, the theory that the experience of using two languages – selecting one while inhibiting the other – affects brain structure and strengthens ‘executive control’, much as other experiences do, such as musical training, navigation, and even juggling. This strengthening has been linked to a variety of findings: the superiority of bilingual children and adults in performance on tasks requiring cognitive control, the resistance of bilingual brains to cognitive decline, and the delayed onset of dementia (see here).

Touted in the popular media, these findings captured our hearts and minds, and for good reason: for those of us who are bi- and multilingual, this is good news, and the focus itself is a pleasant change from the concerns about bilingual disadvantage that permeated many early debates on bilingualism. But has the pendulum swung too far in the other direction? Has bilingualism become a commodity we are trying to sell, instead of an experience we are trying to understand? And is there, in fact, a consensus that the knowledge of more than one language offers us something more than the joys of reading and conversing in two languages and a leg up in learning a third, among other things?

For the remainder of the post, please click here

References:
Baum, S., & Titone, D. (2014). Moving towards a neuroplasticity view of bilingualism, executive control, and aging. Applied Psycholinguistics, 35, 857-894.
Valian, V. (in press). Bilingualism and cognition. Bilingualism: Language and Cognition.


Predicting risk for oral and written language learning difficulties in students educated in a second language

Post written by Dr. Caroline Erdos based on an article from Applied Psycholinguistics

Students who struggle with oral language and literacy are at increased risk for dropping out of school. The gap between struggling students and their typically developing peers is smallest early on, and therefore the chances of bridging that gap are greatest in the early grades. However, more and more students have had little or no exposure to the language of schooling until their first day of school, and this makes it difficult for school personnel to disentangle true risk for learning disability from incomplete second language acquisition. The result is that identification and intervention are often delayed in the case of second language learners, even those in immersion classes (e.g., native speakers of English attending French immersion schools), thus placing them at a significant disadvantage as compared to native speakers of the language of schooling (e.g., native speakers of English attending English schools), who often begin to receive help with oral language or (pre)literacy as early as kindergarten.

A promising avenue is to use students’ skills in oral language and literacy in their first language to predict how they will eventually perform in these areas in their second language. It is crucial, however, to fully understand the possibilities and limitations of this method.

A second, related issue is the importance of providing help that is most likely to have the greatest impact on students’ academic success. Numerous studies and clinical experience have shown that the more targeted the help, the more likely students are to make gains. Therefore, once a child has been identified as presenting with oral language or literacy difficulties, it is imperative to identify the specific area of difficulty within each domain – in the area of oral language: vocabulary, grammar, phonology, discourse, or pragmatics; and in the area of literacy: phonological processing, letter-sound knowledge, decoding accuracy, decoding speed, lexical knowledge, or reading comprehension. Targeted intervention is key to making gains. For example, a child who struggles to understand what he reads is not likely to benefit from intervention targeting letter-sound knowledge, unless poor letter-sound knowledge was the primary cause of his inability to understand what he reads. Exactly how to provide targeted intervention is better understood for some areas, for example decoding accuracy or decoding speed, than for others, for example oral language or reading comprehension. However, even in these less well understood domains there is a general consensus that intervention focusing on vocabulary (breadth and depth) and complex language skills would be useful.

Read the full article until July 31, 2014:

“Predicting risk for oral and written language learning difficulties in students educated in a second language” by Caroline Erdos, Fred Genesee, Robert Savage and Corinne Haigh