The child’s journey into language: Some frequently asked questions…

Blog post written by Eve V. Clark (Stanford University), author of the recently published First Language Acquisition (3rd Edition)

 How early do infants start in on language?

Even before birth, babies recognize intonation contours they hear in utero, and after birth, they prefer listening to a familiar language over an unfamiliar one.  And in their first few months, they can already discriminate between speech sounds that are the same or different.

 How early do infants understand their first words, word-endings, phrases, utterances? 

Children learn meanings in context, both from hearing repeated uses of words in relation to their referents, and from feedback from adults when they use a word correctly or incorrectly.  When a child is holding a ball, the mother might say “Ball.  That’s a ball”, and the child could decide that “ball” picks out round objects of that type. Still, it may take many examples to establish the link between a word-form  (“ball”) and a word-meaning (round objects of a particular type) and to relate the word “ball” to neighbouring words (throw, catch, pick up, hold).  It takes even longer for the child’s meaning of a word to fully match the adult’s.

When do infants produce their first words and truly begin to talk?  

Infants babble from 5-10 months on, giving them practice on simple syllables, but most try their first true words at some time between age 1 and age 2 (a broad range).  They find certain sounds harder to pronounce than others, and certain combinations (e.g., clusters of consonants) even harder.  It therefore takes practice to arrive at the adult pronunciations of words –– to go from “ba” to “bottle”, or from “ga” to “squirrel”.   Like adults, though, children understand much more than they can say.

 What’s the relation between what children are able to understand and what they are able to say?  

Representing the sound and meaning of a word in memory is essential for recognizing that word when other speakers use it.  Because children are able to understand words before they produce them, they can make use of the representations of words they already understand as models to aim for when they try to pronounce those same words.

 How early do children begin to communicate with others?   

A few months after birth, infants follow adult gaze, and they respond to adult gaze and to adult speech face-to-face, with cooing and arm-waving.  As they get a little older, they attend to the motion in adult hand-gestures.  By 8 months or so, they recognize a small number of words, and by 10 months, they can also attend to the target of an adult’s pointing gestures.  They themselves point to elicit speech from caregivers, and they use gestures to make requests – e.g., pointing at a cup as a request for juice.  They seem eager to communicate very early.

 How do young children learn their first language?  

Parents check up on what their children mean, and offer standard ways to say what the children seem to be aiming for.  Children use this adult feedback to check on whether or not they have been understood as they intended.

Do all children follow the same path in acquisition? 

No, and the reason for this depends in part on the language being learnt.  English, for example, tends to have fixed word order and relatively few word endings, while Turkish has much freer word order and a large number of different word-endings.  Languages differ in their sound systems, their grammar, and their vocabulary, all of which have an impact on early acquisition.

These and many other questions about first language acquisition are explored in the new edition of First Language Acquisition.  In essence, children learn language in interaction with others: adults talk with them about their daily activities – eating, sleeping, bathing, dressing, playing; they expose them to language and to how it’s used, offer feedback when they make mistakes, and provide myriad opportunities for practice.  This book reviews findings from many languages as it follows the trajectories children trace during their acquisition of a first language and of the many skills language use depends on.

First Language Acquisition (third edition), Cambridge University Press 2016

 

 

Entrainment of prosody in the interaction of mothers with their young children

Blog post based on an article in the Journal of Child Language

Written by Melanie Soderstrom in consultation with article co-authors Eon-Suk Ko, Amanda Seidl, and Alejandrina Cristia

It has long been known that adults’ speech patterns unconsciously become more similar over the course of a conversation, but do children converge in this way with their caregivers? Across many areas of child development, children’s imitation of caregivers has long been understood to be an important component of the developmental process. These concepts are similar, but we tend to think of imitation as one-sided and static, while convergence is more dynamic and involves both interlocutors influencing each other. In our study, we set out to examine how duration and pitch characteristics of vocalizations by 1- and 2-year-olds and their caregivers dynamically influence each other in real-world conversational interactions.

We recorded 13 mothers and their children using LENA, a system for gathering full-day recordings, which also provides automated tagging of the audio stream by speaker. We analyzed pitch and duration characteristics of these segments both within and across conversational exchanges between the mother and child to see whether mothers and children modulated the characteristics of their speech based on each other’s speech. Instead of examining mother-child correlations across mother-child dyads, as previous studies have done, we examined correlations within a given dyad, across conversations. We found small but significant correlations, particularly in pitch measures, suggesting that mothers and children are dynamically influencing each other’s speech characteristics.
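To make the within-dyad logic concrete, here is a minimal sketch of how one might correlate a mother’s and a child’s mean pitch across the conversations of a single dyad. This is an illustration only, not the authors’ actual analysis pipeline: the pitch values are invented and a plain Pearson correlation stands in for the study’s statistics.

```python
# A toy within-dyad analysis: for a single mother-child pair, correlate mean
# pitch across that dyad's conversations. All numbers are invented examples.
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Mean pitch (Hz) per conversation for one hypothetical dyad, in conversation order.
mother_pitch = [210, 225, 218, 240, 232, 220]
child_pitch = [310, 330, 318, 355, 340, 325]

print(f"Within-dyad pitch correlation: {pearson_r(mother_pitch, child_pitch):.2f}")
```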

We also looked at who started the conversation, and measured mother and child utterance durations and response latencies (i.e., how quickly mothers responded to their child’s utterance and vice versa). Overall, unsurprisingly, mothers produced longer utterances and shorter response latencies (faster responding) than their children. However, both the mothers and the children produced longer utterances and shorter response latencies in conversations that they themselves initiated. This finding is exploratory, but suggests that providing children with the conversational “space” to initiate conversations may lead to more mature vocalization, and may therefore be beneficial for the language-learning process.

Read the full article ‘Entrainment of prosody in the interaction of mothers with their young children’ here 

 

 

(Un)separated by a common language?

Blog post supplementary to an article in English Today, written by M. Lynne Murphy

Last night, I wondered ‘aloud’ on Twitter if British-American English dictionaries are the worst lexicographical products out there. This was after flipping through The Anglo-American Interpreter: a word and phrase book by H. W. Horwill (1939). At first, when I read Horwill’s claims that Americans ask for the time with What time have you?, I thought ‘Wow, American English has changed a lot since 1939’. But as I kept reading the unexpected items in the American column on each page, the British column sounded more and more like contemporary American English. I started to suspect something was amiss. And in the preface I found it: ‘The present book is an original compilation based on more than thirty years’ reading of American books and newspapers, supplemented by what the author has heard with his own ears during two periods of residence in the United States’. The author is bragging that he didn’t reproduce information from earlier works ‘without independent verification’. But did he get independent verification about the things he experienced with his own eyes and ears?

You and I have a great advantage over Mr Horwill, in that we live in the computer age. So we can do things like look in the Corpus of Historical American English (Davies 2010–) and see that the corpus has four examples of What time have you? between 1800 and 1940, but 219 examples of What time is it? We would not conclude that What time have you? is what Americans routinely said in 1939, but we might wonder if it was used in certain circumstances or regions.
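As a rough illustration of that kind of look-it-up query, the sketch below counts two phrases in a plain-text corpus. It is a generic example, not the COHA interface: the sample text is invented and simply stands in for a real corpus file.

```python
# A generic phrase-frequency check over a plain-text corpus (illustrative only;
# COHA itself is searched through its own interface at corpus.byu.edu/coha).
import re

def phrase_count(text, phrase):
    """Case-insensitive count of a phrase at word boundaries."""
    return len(re.findall(r"\b" + re.escape(phrase) + r"\b", text, flags=re.IGNORECASE))

# Invented sample text standing in for a real corpus file read from disk.
corpus = (
    "What time is it, my dear? What time have you? "
    "He asked again: what time is it?"
)

for phrase in ("What time have you", "What time is it"):
    print(f"{phrase!r}: {phrase_count(corpus, phrase)} occurrence(s)")
```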

I enjoyed finding this book and its oddities because it is the British mirror of an American book that I mention in my recent article ‘(Un)separated by a common language?’ (Murphy 2016). This is the first of a series of four pieces I’m writing for English Today about American and British Englishes: what can be studied about them and how we might think about them. The essay argues that American and British differences should not be dismissed as ‘minor and uninteresting’. Whether they’re minor or not depends on one’s standards for ‘minority’, but they’re certainly not uninteresting. What they are is misunderstood.

Like Horwill, the author of Understanding British English (Moore, 1989) was an enthusiast for the other country. She watched British television, read British and Australian books, and took two vacations in the UK where she acquired some British pen-pals. The book’s listing of British English vocabulary thus contains Australianisms, some misapprehensions of meaning, quite a few questionable part-of-speech judgements, and some words that are perfectly good American English (but apparently not used by Moore).

The problem for Horwill, Moore and many other interested observers of language, is that our experience of English is deeply personal (no one else has heard/read/said all the same words and phrases as you have) and we have a deep need to generalize and stereotype. If you phrase something in a way that I’ve not heard before and we have similar accents, I might think ‘There’s an expression I didn’t know’ or ‘Wow, isn’t she poetic?’ or ‘Hey, he’s saying that wrong’. But if someone with a different accent says it, we are apt to conclude ‘Oh, that must be how those people say it’. The fact is: it still could have been an expression I didn’t know. Or poetic. Or a speech error. And another fact is: I probably didn’t notice the dozens of earlier times when they expressed a similar notion using words I would have used.

We’re so confident that we know our own dialects that we are more than willing to make conclusions about others’. It’s not just enthusiastic-but-amateur dictionary-writers who do this. Articles in the news about Britishisms or Americanisms routinely misidentify the sources of words and phrases (for examples, see Murphy 2006–). Now that we’re in the information age, we have the tools to avoid these mistakes: well-researched dictionaries, accessible linguistic corpora, and the ability to ask people on the other side of the world whether they’d say X or Y—and to get an almost immediate response. It concerns me when those tools aren’t used.

So, before you conclude that that thing you heard on Downton Abbey is ‘how the British say it’ or that Americans ‘don’t use adverbs’ (see Pullum 2014), remind yourself that:

(a) you heard an individual speak, not a nation,

(b) your mind biases you to notice differences rather than similarities, and

(c) you could look it up!

 

References

Davies, Mark. 2010–. The Corpus of Historical American English: 400 million words, 1810–2009. Available at <http://corpus.byu.edu/coha/>.

Horwill, H. W. 1939. An Anglo-American Interpreter. Oxford: Oxford University Press.

Moore, Margaret E. 1989. Understanding British English. New York: Citadel Press.

Murphy, M. Lynne. 2006–. Separated by a Common Language (blog). Available at <http://separatedbyacommonlanguage.blogspot.com>.

Murphy, M. Lynne. 2016. ‘(Un)separated by a common language?’ English Today, 32, 56–59.

Pullum, Geoffrey K. 2014. ‘Undivided by a Common Language’. Lingua Franca (blog), Chronicle of Higher Education, 17 March. Available at <http://chronicle.com/blogs/linguafranca/2014/03/17/undivided-by-a-common-language/> (accessed September 30, 2015).

 

 

Checking in on grammar checking

‘Checking in on Grammar Checking’ by Robert Dale is the latest Industry Watch column to be published in the journal Natural Language Engineering.

Looking back to 2004, industry expert Robert Dale reminds us of a time when Microsoft Word was the dominant software used for grammar checking. Bringing us up to date in 2016, Dale discusses the evolution, capabilities and current marketplace for grammar checking and its diverse range of users: from academics and men on dating websites to the fifty top celebrities on Twitter.

Below is an extract from the article, which is available to read in full here.

An appropriate time to reflect
I am writing this piece on a very special day. It’s National Grammar Day, ‘observed’ (to use Wikipedia’s crowdsourced choice of words) in the US on March 4th. The word ‘observed’ makes me think of citizens across the land going about their business throughout the day quietly and with a certain reverence; determined, on this day of all days, to ensure that their subjects agree with their verbs, to not their infinitives split, and to avoid using prepositions to end their sentences with. I can’t see it, really. I suspect that, for most people, National Grammar Day ranks some distance behind National Hug Day (January 21st) and National Cat Day (October 29th). And, at least in Poland and Lithuania, it has to compete with St Casimir’s Day, also celebrated on March 4th. I suppose we could do a study to see whether Polish and Lithuanian speakers have poorer grammar than Americans on that day, but I doubt we’d find a significant difference. So National Grammar Day might not mean all that much to most people, but it does feel like an appropriate time to take stock of where the grammar checking industry has got to. I last wrote a piece on commercial grammar checkers for the Industry Watch column over 10 years ago (Dale 2004). At the time, there really was no alternative to the grammar checker in Microsoft Word. What’s changed in the interim? And does anyone really need a grammar checker when so much content these days consists of generated-on-a-whim tweets and SMS messages?

The evolution of grammar checking
Grammar checking software has evolved through three distinct paradigms. First-generation tools were based on simple pattern matching and string replacement, using tables of suspect strings and their corresponding corrections. For example, we might search a text for any occurrences of the string isnt and suggest replacing them by isn’t. The basic technology here was pioneered by Bell Labs in the UNIX Writer’s Workbench tools (Macdonald 1983) in the late 1970s and early 1980s, and was widely used in a range of more or less derivative commercial software products that appeared on the market in the early ’80s. Anyone who can remember that far back might dimly recall using programs like RightWriter on the PC and Grammatik on the Mac. Second-generation tools embodied real syntactic processing. IBM’s Epistle (Heidorn et al. 1982) was the first really visible foray into this space, and key members of the team that built that application went on to develop the grammar checker that, to this day, resides inside Microsoft Word (Heidorn 2000). These systems rely on large rule-based descriptions of permissible syntax, in combination with a variety of techniques for detecting ungrammatical elements and posing potential corrections for those errors. Perhaps not surprisingly, the third generation of grammar-checking software is represented by solutions that make use of statistical language models in one way or another. The most impressive of these is Google’s context-aware spell checker (Whitelaw et al. 2009)—when you start taking context into account, the boundary between spell checking and grammar checking gets a bit fuzzy. Google’s entrance into a marketplace is enough to make anyone go weak at the knees, but there are other third-party developers brave enough to explore what’s possible in this space. A recent attempt that looks interesting is Deep Grammar (www.deepgrammar.com). We might expect to find that modern grammar checkers draw on techniques from each of these three paradigms. You can get a long way using simple table lookup for common errors, so it would be daft to ignore that fact, but each generation adds the potential for further coverage and capability.
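As a concrete, deliberately toy illustration of the first-generation approach described above, pattern matching over a table of suspect strings and their corrections might look like the sketch below. The table entries and function names are invented for illustration and are not drawn from any particular product.

```python
# A deliberately minimal first-generation checker: scan the text for tabled
# suspect strings and suggest the corresponding corrections. The table entries
# are invented for illustration, not taken from any particular product.
import re

CORRECTIONS = {
    r"\bisnt\b": "isn't",
    r"\bdont\b": "don't",
    r"\balot\b": "a lot",
    r"\bcould of\b": "could have",
}

def check(text):
    """Return (offset, matched string, suggested replacement) tuples."""
    suggestions = []
    for pattern, replacement in CORRECTIONS.items():
        for match in re.finditer(pattern, text, flags=re.IGNORECASE):
            suggestions.append((match.start(), match.group(0), replacement))
    return sorted(suggestions)

if __name__ == "__main__":
    for offset, found, fix in check("She said she could of helped, but she dont care."):
        print(f"offset {offset}: {found!r} -> {fix!r}")
```

Second- and third-generation systems go well beyond this kind of table lookup, but as the column notes, the lookup step remains a cheap first pass for common errors.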

The remainder of the article discusses the following:

  • Today’s grammar-checking marketplace
  • Capabilities
  • Who needs a grammar checker?

‘Checking in on grammar checking’ is an Open Access article. You may also be interested in complimentary access to a collection of related articles about grammar published in Natural Language Engineering. These papers are fully available until 30th June 2016.


Text Messaging and the Downfall of Civilization

By Abby Kaplan, author of Women Talk More Than Men and Other Myths about Language Explained

For years now, observers have been alert to a growing social menace. Like Harold Hill, they warn that there’s trouble in River City — with a capital T, and that rhymes with P, and that stands for Phone.

Mobile phones are a multifaceted scourge; they’ve been blamed for everything from poor social skills to short attention spans. As a linguist, I’m intrigued by one particular claim: that texting makes people illiterate. Not only are text messages short (and thus unsuited for complex ideas), they’re riddled with near-uninterpretable abbreviations: idk, pls, gr8. Young people are especially vulnerable to these altered forms; critics frequently raise the specter of future students studying a Hamlet who texts 2B or not 2B.

The puzzling thing is that none of these abominable abbreviations are unique to text messaging, or even to electronic communication more generally. There’s nothing inherently wrong with acronyms and initialisms like idk; similar abbreviations like RSVP are perfectly acceptable, even in formal writing. The only difference is that idk, lol, and other ‘textisms’ don’t happen to be on the list of abbreviations that are widely accepted in formal contexts. Non-acronym shortenings like pls for please are similarly unremarkable; they’re no different in kind from appt for appointment.

Less obvious is the status of abbreviations like gr8, which use the rebus principle: 8 is supposed to be read, not as the number between 7 and 9, but as the sound of the English word that it stands for. The conventions for formal written English don’t have anything similar. But just because a technique isn’t used in formal English writing doesn’t mean that technique is linguistically suspect; in fact, there are other written traditions that use exactly this principle. In Ancient Egyptian, for example, the following hieroglyph was used to represent the word ḥr ‘face’: [hieroglyph image]

It’s not a coincidence, of course, that the symbol for the word meaning ‘face’ looks like a face. But the same symbol could also be used to represent the sound of that word embedded inside a larger word. For example, the word ḥryt ‘terror’ could be written as follows: [hieroglyph images]

Here, the symbol has nothing to do with faces, just as the 8 in gr8 has nothing to do with numbers. The rebus principle was an important part of hieroglyphic writing, and I’ve never heard anyone argue that this practice led to the downfall of ancient Egyptian civilization. So why do we think textisms are so dangerous?

Even if there’s nothing wrong with these abbreviations in principle, it could still be that using them interferes with your ability to read and write the standard language. If you see idk and pls on a daily basis, maybe you’ll have a hard time remembering that they’re informal (as opposed to RSVP and appt). But on the other hand, all these abbreviations require considerable linguistic sophistication — maybe texting actually improves your literacy by encouraging you to play with language. We all command a range of styles in spoken language, from formal to informal, and we’re very good at adjusting our speech to the situation; why couldn’t we do the same thing in writing?

At the end of the day, the only way to find out what texting really does is to go out and study it in the real world. And that’s exactly what research teams in the UK, the US, and Australia have done. The research in this area has found no consistent negative effect of texting; in fact, a few studies have even suggested that texting might have a modest benefit. It seems that all the weeping and gnashing of teeth about the end of literacy as we know it was premature: the apocalypse is not nigh.

Of course, this doesn’t mean that we should all spend every spare minute texting. (I’m a reluctant texter myself, and I have zero interest in related services like Twitter.) There are plenty of reasons to be thoughtful about how we use any technology, mobile phones included. What we’ve seen here is just that the linguistic argument against texting doesn’t hold water.

View the Women Talk More Than Men and Other Myths about Language Explained book trailer.


SSLA Announces the 2016 Albert Valdman Award Winner

Cambridge University Press and Studies in Second Language Acquisition are pleased to announce that the recipients of the 2016 Albert Valdman Award for outstanding publication in 2015 are Gregory D. Keating and Jill Jegerski for their March 2015 article, “Experimental designs in sentence processing research: A methodological review and user’s guide”, Volume 37, Issue 1.  Please join us in congratulating these authors on their contribution to the journal and to the field.


Post written by Gregory D. Keating and Jill Jegerski

We wish to express our utmost thanks and gratitude to the editorial and review boards at SSLA for selecting our article, ‘Experimental designs in sentence processing research: A methodological review and user’s guide’ (March 2015), for the Albert Valdman Award for outstanding publication. The two of us first became research collaborators several years ago as a result of our mutual interests in sentence processing, research methods, research design, and statistics. With each project that we have undertaken, we’ve had many fruitful and engaging conversations about best practices in experimental design and data analysis for sentence processing research. This article is the product of many of our own questions, which led us to conduct extensive reviews of existing processing studies. Our recommendations are culled from and informed by the body of work we reviewed, as well as our own experiences conducting sentence processing research. Stimulus development and data analysis can pose great challenges. It is our hope that the information provided in our paper will be a useful resource to researchers and students who wish to incorporate psycholinguistic methods into their research agenda and that the study of second language processing will continue to flourish in the future.

Why We Gesture: The surprising role of hand movements in communication

Blog post by David McNeill, author of Why We Gesture: The Surprising Role of Hand Movements in Communication

Why do we gesture? Many would say it brings emphasis, energy and ornamentation to speech (which is assumed to be the core of what is taking place); in short, gesture is an “add-on”, as Adam Kendon, who also rejects the idea, phrases it. However, the evidence is against this. The lay view of gesture is that one “talks with one’s hands”: you can’t find a word, so you resort to gesture. Marianne Gullberg debunks this ancient idea. As she succinctly puts it, rather than gesture starting when words stop, gesture stops as well. So if, contrary to lay belief, we don’t “talk with our hands”, why do we gesture? This book offers an answer.

The reasons we gesture are more profound: language itself is inseparable from gesture. While gestures enhance the material carriers of meaning, the core is gesture and speech together. They are bound more tightly than saying the gesture is an “add-on” or “ornament” implies. They are united as a matter of thought itself. Thought with language is actually thought with language and gesture indissolubly tied. Even if the hands are restrained for some reason and a gesture is not externalized, the imagery it embodies can still be present, hidden but integrated with speech (and may surface in some other part of the body, the feet for example).

The book’s answer to the question of why we gesture is not that speech triggers gesture but that gesture orchestrates speech; we speak because we gesture, not we gesture because we speak. In bald terms, to orchestrate speech is why we gesture. This is the “surprise” of the subtitle: “The surprising role of the hands in communication.”

To present this hypothesis is the purpose of the current book. The book is the capstone of three previous books – an inadvertent trilogy over 20 years – “How Language Began: Gesture and Speech in Human Evolution,” “Gesture and Thought,” and “Hand and Mind: What Gestures Reveal about Thought.” It merges them into one multifaceted hypothesis. The integration itself – that it is possible – is part of the hypothesis. Integration is possible because of its central idea – implicit in the trilogy, explicit here – that gestures orchestrate speech.

A gesture automatically orchestrates speech when it and speech co-express the same meaning; then the gesture dominates the speech; syntax is subordinate and breaks apart or interrupts to preserve the integrity of the gesture–speech unit. Orchestration is the action of the vocal tract organized around a manual gesture. The gesture sets its parameters, the order of events within it, and the content of the speech with which it works. The amount of time speakers take to utter sentences is remarkably constant, between 1 and 2 seconds regardless of the number of embedded sentences. It is also the duration of a gesture. All of this is experienced by the speaker as the two awarenesses of the sentence that Wundt in the 19th century distinguished. The “simultaneous” is awareness of the whole gesture–speech unit. It begins with the first stirrings of gesture preparation and ends with the last motion of gesture retraction. The “successive” is awareness of “…individual constituents moving into the focus of attention and out again,” and includes the gesture–speech unit as it and its gesture come to the surface and then sink again beneath it.

The gesture in the first illustration, synchronized with “it down”, is a gesture–speech unit, and using the Wundt concepts we have:

“and Tweety Bird runs and gets a bowling ba (simultaneous awareness of gesture–speech unity starts) [ll and ∅tw drops (gesture–speech unity enters successive awareness) it down (gesture–speech unity leaves successive awareness) the drainpipe] (simultaneous awareness of gesture–speech unity ends).”

The transcript [1] shows the speech the gesture orchestrated and when – the entire stretch, from “ball” to “drainpipe”, is the core meaning of “it down” plus the image of thrusting the bowling ball into the drainpipe in simultaneous awareness. The same meaning appeared in successive awareness, the gesture stroke in the position the construction provided, there orchestrating “it” and “down” together.


The “drops” construction provides the unpacking template and adds linguistic values. Its job is to present the gesture–speech unit, including Tweety’s agent-power in the unit. Gesture–speech unity is alive and not effaced by constructions. To the contrary, the construction presents the Sylvester-up/Tweety-down conflict in socially accessible form. This unit must be kept intact in the speech flow. What is striking, and why the example is illustrative, is that “it down” was divided by the construction into different syntactic constituents (“it” the direct object, “down” a locative complement), yet the word pair remained a unit orchestrated by the gesture. In other examples, speech stops when continuing would break up a gesture–speech unit; rather than constructions controlling the unit, it controls them. Gesture–speech unity dominates.

How did it all come about? It occurred because “it down,” plus the co-expressive thrusting gesture, was the source (the “growth point”) of the sentence. The growth point came about as the differentiation of a field of equivalents having to do with HOW TO THWART SYLVESTER: THE BOWLING BALL DOWN. It unpacked itself into shareable form by “summoning” the causative construction (possible because a causative meaning was in the gesture–speech unit from the start of the preparation – the speaker’s hands already in the shape of Tweety’s “hands” as the agent of thrusting). Thus “it down” and its stroke were inviolate from the start: the stroke orchestrated the two words as a unit, and the gesture phrase the construction as a whole. I believe the situation illustrated with “it down” permeates the production of speech in all conditions and different languages.

—————————————————————————————————————————————————-

1. Participants retell an 8-minute Tweety and Sylvester classic they have just watched from memory to a listener (a friend, not the experimenter). Using Kendon’s terminology and our notation, the gesture phrase is marked by “[” and “]”. The stroke, the image-bearing phase and the only obligatory phase of the gesture, is marked in boldface (“it down”). Preparation is the hand getting into position to make the stroke and is indicated by the span from the left bracket to the start of boldface (“ba[ll and ∅tw drops”). Preparation shows that the gesture, with all its significance, is coming into being – there is no reason the hands move into position and take on form other than to perform the stroke. Holds are cessations of movement, either prestroke (“drops”), the hand frozen awaiting co-expressive speech, or poststroke (“down”), the hand frozen in the stroke’s ending position and hand shape after movement has ceased until co-expressive speech ends. Holds of either kind are indicated with underlining. They provide a precise synchrony of gesture-orchestrated speech in successive awareness. Retraction is also an active phase, the gesture not simply abandoned but closing down (“the drainpipe,” movement ending as the last syllable ended – in some gestures, though not here, the fingers creep along the chair arm rest until this point is reached). In writing growth points – a field of equivalents being differentiated and the psychological predicate differentiating it – we use FIELD OF EQUIVALENTS: PSYCHOLOGICAL PREDICATE (“HOW TO THWART SYLVESTER: THE BOWLING BALL DOWN”).

—————————————————————————————————————————————————-

A “strong prediction.” Our arguments predict that GPs in successive awareness remain intact no matter the constructions that unpack them. This follows from the expectation that unpacking will not disrupt a field of equivalents or its differentiation. Belonging to different syntactic constituents – the “it” with “drops” and the “down” with “the drainpipe” – did not break apart the “it down” GP. Instead, syntactic form adapted to gesture. The example shows that gesture is a force shaping speech, not speech shaping gesture. Gesture–speech unity means that speech and gesture are equals, and in gesture-orchestrated speech the dynamic dimension enters from the growth point. In a second version of the “strong prediction,” speech stops if continuing would break the GP apart. The absolute need to preserve the GP in successive awareness then puts a brake on speech flow, even when it means restarting with a less cohesive gesture–speech match-up that doesn’t break apart the GP.

Gestures of course do not always occur. This is itself an aspect of gesture. There is natural variation in gesture occurrence. Apart from forced suppressions (as in formal contexts), gestures fall on an elaboration continuum, their position an aspect of the gesture itself. The reality is imagery with speech ranging over the entire continuum. It is visuoactional imagery, not a photo. Gesture imagery linked to speech is what natural selection chose, acting on gesture–speech units free to vary in elaboration. As what Jan Firbas called communicative dynamism varies, the gesture–speech unit moves from elaborate movement to no movement at all. To speak of gesture–speech unity we include gestures at all levels of elaboration, including micro-level steps.

An example of the difference it makes is a word-finding study by Sahin et al. of conscious patients about to undergo open-skull surgery, from which the authors conclude that lexical, grammatical and phonological steps occur with distinctive delays of about 200 ms, 320 ms and 450 ms, respectively. We hypothesize that gesture should affect this timing for the 1–2 seconds the orchestration lasts (no gestures were recorded in the Sahin study). If the idea unit differentiating a past time in a field of meaningful equivalents begins with an inflected verb plus imagery, does the GP’s flashing-on wait 320 or 450 ms? Delay seems unlikely (although it would be fascinating to find). It may be no faster (and perhaps slower) to say “bounced” in an experiment where a subject is told to make the root word into a past tense than to differentiate a field of equivalents with past time gesturally spatialized and the gesture in this space.

To see gesture as orchestrating speech opens many windows: how language is a dynamic process; a glimpse of how language possibly began; that children do not acquire one language but two or three in succession; that gestures are unique forms of human action; that a specific memory evolved just for gesture–speech unity; and how speech works so swiftly, everything (word-finding, unpacking, gesture–speech unity, gesture-placement, and context-absorption) done in a couple of seconds with workable (not necessarily complete) accuracy.

Mouse tracking reveals that bilinguals behave like experts

Blog post written by Sara Incera and Conor T. McLennan, based on an article in the journal Bilingualism: Language and Cognition

We analyzed how participants moved a computer mouse in order to compare the performance of bilinguals and monolinguals in a Stroop task. Participants were instructed to respond to the color of the words by clicking on response options on the screen.  For example, if the word blue appeared in the center of the screen and was presented in the color yellow, the participant was supposed to click on the response option containing yellow, which appeared in one of the top corners of the screen, and not on the response option containing blue, which appeared in the opposite corner. The ability to inhibit the blue response in this example is one measure of executive control. The bilingual advantage hypothesis states that lifelong bilingualism enhances executive control (e.g., Bialystok, 1999). Nevertheless, there is a debate in the literature regarding these effects. A number of studies have reported null effects of bilingualism across different executive control tasks (e.g., De Bruin, Treccani, & Della Sala, 2014).

We recorded when participants started moving the mouse (initiation times), and how fast they moved toward the correct response (x-coordinates over time). We compared two bilingual groups and one monolingual group. There were two bilingual groups to measure how different levels of conflict monitoring (having both or one language active) influence performance. Initiation times were longer for bilinguals than monolinguals; however, bilinguals moved faster toward the correct response. Taken together, these results indicate that bilinguals behave qualitatively differently from monolinguals; bilinguals are “experts” at managing conflicting information. Experts across many different domains take longer to initiate a response, but then outperform novices. These qualitative differences in performance could be at the root of apparently contradictory findings in the bilingual literature. The bilingual expertise hypothesis may be one way to account for these conflicting results.
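As a rough sketch of the two measures described here (not the study’s actual processing code), initiation time and the x-coordinate trajectory toward the correct response might be computed from raw (time, x, y) cursor samples as below; the samples and the movement threshold are invented for illustration.

```python
# A toy sketch of the two mouse-tracking measures: initiation time (first
# timestamp at which the cursor has moved) and x-position over time toward
# the correct response. Samples and threshold are invented for illustration.

def initiation_time(samples, threshold=3):
    """First timestamp (ms) at which the cursor is more than `threshold`
    pixels from its starting position, or None if it never moves."""
    t0, x0, y0 = samples[0]
    for t, x, y in samples[1:]:
        if ((x - x0) ** 2 + (y - y0) ** 2) ** 0.5 > threshold:
            return t
    return None

def x_trajectory(samples, correct_side):
    """x-coordinates over time, signed so that positive values mean movement
    toward the correct response option (correct_side: +1 right, -1 left)."""
    x0 = samples[0][1]
    return [(t, correct_side * (x - x0)) for t, x, _ in samples]

# (time in ms, x, y) cursor samples for one invented trial; the correct
# response option is in the top-left corner, so correct_side is -1.
trial = [(0, 400, 600), (50, 400, 600), (100, 398, 598), (150, 380, 560),
         (200, 340, 500), (250, 280, 430), (300, 210, 360)]

print("initiation time:", initiation_time(trial), "ms")
print("signed x over time:", x_trajectory(trial, correct_side=-1))
```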

In conclusion, bilinguals performed differently (started later but then moved faster toward the correct response) than monolinguals. These effects were maximized in the incongruent condition and in the bilingual group that had both languages active. One possible explanation for the conflicting findings in the literature related to the bilingual advantage is that bilinguals have a qualitatively different processing style that can elude detection by traditional reaction time measures. Bilinguals wait longer to initiate a response and then respond faster; therefore, an advantage would only be detected using reaction time measures when the benefits of faster responding outweigh the delay in initiating a response.

Read the full article ‘Mouse tracking reveals that bilinguals behave like experts’ here

 

English and international students in China today

Blog post written by Werner Botha, based on an article in English Today

Between 2009 and 2010, and again between 2012 and 2014, I visited a number of higher education institutes in China in order to research the role of English in the Chinese higher education system. One interesting finding from this research was that China has evidently started promoting itself as a hub for international education. Although the largest proportion of foreign students in China today are attracted by Chinese language programmes, an increasing number of such students are signing up for full degree courses in subjects such as medicine and engineering. An interesting phenomenon is that some university degree programmes in the country are being offered as English-medium degrees to foreign students, from undergraduate to postgraduate levels. So far, very little research has been carried out on how these programmes are being conducted, the reception of these programmes by foreign students in China, and the impact this is having on the use of languages on China’s university campuses. Attracting international students to China’s higher education institutions would no doubt alter the dynamics of language use on these university campuses. In order to investigate this, I set out to study the reception and use of English by foreign university students in an international degree programme: the Bachelor of Medicine and Bachelor of Surgery (MBBS) in the School of Medicine of one of China’s leading universities.

My case study provides an example of how English-medium instruction programmes are currently being used to attract foreign students to China’s universities, partly in order for these universities to promote themselves as ‘international’ institutions. This case study also shows that most of the international students were recruited from the Asian region and that almost all of these students speak English only as a second or additional language. Although many of these students indicated that they value the opportunity to study medicine in China in the English language, some felt that there was still room for improvement in how these courses were being delivered, especially in terms of using English as a medium of teaching. Furthermore, it is my impression from this research that the language ecologies on Chinese university campuses are in fact often quite diverse, with students (both foreign and local) using a number of languages and language varieties in their extra-curricular lives, while using English and Putonghua (or Mandarin) in their formal education. One other interesting finding from this study is that the international students I surveyed were required to graduate from their medical degree programme with a certain level of proficiency in Putonghua. This requirement appears to provide additional opportunities for these international students to expand their already multilingual repertoires even further, thus adding to the linguistic diversity in their lives. I believe that much more sociolinguistic fieldwork is required in order to further understand and explain the dynamics of language use and the role of English (and other languages and language varieties) on China’s university campuses today.

Read the full article ‘English and international students in China today’ here 

 

Figurative and non-figurative motion in the expression of result in English

Blog post written by Francisco Ruiz de Mendoza Ibáñez and Alba Luzondo Oyón, based on an article in the journal Language and Cognition

There is a variety of ways in which English can express resulting events. Some take the form of non-figurative changes of state, as in Cold temperatures froze the river solid, which is an example of the intransitive resultative construction. Others, like the intransitive motion syntactic frame (e.g. The horse jumped over the fence) and the caused-motion configuration (e.g. Tom kicked the ball into the net), depict literal changes of location. Interestingly enough, many outcome events require a figurative interpretation. Some cases in point are the following: changes of state expressed in terms of figurative motion (e.g. Miners drank themselves into oblivion); self-instigated change of location figuratively expressed as the result of caused motion (e.g. They laughed me out of the studio); self-instigated changes of location re-construed as externally caused events (e.g. Sheena walked me to the library), etc. In the context of this varied array of realizations codifying change, this paper provides readers with a qualitative cognitive-constructionist account of the role played by motion in the conceptualization of result in English.

To this end, our analysis discusses three related aspects of the motional component of result events. First, it explores the nature of some of the constructions exemplified above. These are labeled Adjectival Phrase (AP) resultatives (e.g. The joggers ran their Nikes threadbare) and Prepositional Phrase (PP) resultatives (e.g. Steven worked himself to exhaustion), both of which express changes of state. The crucial feature setting apart these two types of resultative constructions is that only the latter builds on the high-level metaphor CHANGES OF STATE ARE CHANGES OF LOCATION. For example, in Steven worked himself to exhaustion, the change of state (i.e. become exhausted) is understood in terms of the destination of metaphorical motion. But, what lies behind the choice of one structure over the other? What are the differences between pairs employing an AP and those adding motion to the state of affairs, as in He hammered the metal flat/He hammered hot iron into knives? These are some of the questions that this paper explores. A second important aspect that we address is the relation between the prototypical AP resultative (e.g. He hammered the metal flat) and the literal caused-motion construction (e.g. Pat threw the book off the table). The connection between these two constructions has been the object of some debate. Thus, given the relevance of this issue in the context of a paper which revolves around the connection between motion and result, an entire section is devoted to revisiting the hypothesis that AP resultatives are a metaphorical extension of caused-motion configurations. This claim is based on the idea that the resultative element in AP resultatives codes a metaphorical type of goal (i.e. metaphorical change of location) by virtue of the ubiquitous metaphor CHANGES OF STATE ARE CHANGES OF LOCATION.

Third, because our study is additionally concerned with specifying the underlying mechanisms that motivate lexical-constructional integration in expressions involving change with some kind of motion ingredient, the remainder of our paper examines the role of high-level metaphors and metonymies such as AN ACTIVITY IS AN EFFECTUAL ACTION and A CAUSED EVENT FOR AN ACTIVITY, which, like CHANGES OF STATE ARE CHANGES OF LOCATION, are vital licensing cognitive mechanisms in the conceptualization of result events.

Read the full article ‘Figurative and non-figurative motion in the expression of result in English’ here.