Parents Mixing Languages Has No Impact on Children’s Vocabulary Development

Post from the University of Maryland College of Behavioral and Social Sciences blog, The Solution

Many adults speak more than one language, and often “mix” those languages when speaking to their children, a practice called “code-switching.” An eye-opening study by researchers in the Department of Hearing and Speech Sciences has found that this “code-switching” has no impact on children’s vocabulary development. The study, “Look at the gato! Code-switching in speech to toddlers” appears in the Journal of Child Language.

Professor Rochelle S. Newman, chair of the department, and then-graduate students Amelie Bail and Giovanna Morini studied 24 parents and 24 children aged 18 to 24 months during a 15-minute play session.

Key Findings:

• Every parent in the study switched languages at least once during a play session with their child; more than 80 percent of parents did so in the middle of a sentence.
• On average, 4 percent of parents’ individual sentences included more than one language.
• Children whose parents switched languages more often than average, or who used more mixed-language sentences, did not have poorer vocabulary skills.
•The researchers found no indication that the mixing of languages by the parents resulted in poorer vocabulary learning by the children.

“Parents tend to use very short sentences when talking to children this young—yet despite this, they often switched languages in the middle of sentences, saying things like, ‘el otro fishy’ or ‘can I have the beso?’ We were surprised that so many parents would use two languages in the same sentence when speaking to such young children,” Newman said.

The study was conducted in part to address parental concerns.

“A lot of parents worry that using more than one language in the same sentence might cause confusion for a young child. So it is reassuring to know that children whose parents mixed their languages more often didn’t show any poorer vocabulary skills,” Newman said.

Read the full article ‘Look at the gato! Code-switching in speech to toddlers’ here

How can a parent, a dedicated teacher or a speech-language pathologist improve memory performance of a child?

Blog post written by Michal Icht and Yaniv Mama based on an article in Journal of Child Language

How can a parent, a dedicated teacher or a speech-language pathologist improve memory performance of a child? How can we help a kid better remember study material, such as new vocabulary?

A promising and straightforward technique may be simply saying the relevant material aloud. This simple method is based on the ‘Production Effect’ in memory: a memory advantage (of about 20%) for words that were read aloud over words that were read silently. Reading aloud has also been found to enhance memory for other types of material, such as sentences and longer text, and has proven useful for students and older adults.

Although many other mnemonics have been suggested in the literature (e.g., using acronyms), the Production Effect seems especially appropriate for young children. Saying words aloud is simple (it does not involve literacy skills) and can easily be applied in many educational settings and contexts (it requires no special equipment). The present study is a first investigation of the Production Effect in pre-school children (five-year-olds). Because children this age cannot yet read, we used pictures of objects as stimuli for the first time. Would saying the object names aloud improve their memory?

In the first experiment, we used pictures of familiar objects (e.g., ball, teddy-bear, t-shirt). The children were presented with the pictures and were asked to memorize them. A third of the words were studied by looking at the picture alone, another third by looking and listening to the experimenter say the word (the object name), and the remaining third by looking and saying the word aloud. As expected, words that were vocally produced were recalled better (29%) than heard words (21%) and “silent” words (14%): a significant Production Effect!

In order to suggest the Production Effect as an effective and valid learning method, we wanted to demonstrate its occurrence with the acquisition of new vocabulary. Hence, in the second experiment we used pictures of rare, unfamiliar objects whose names would be novel words (such as an anchor, a manger, a cuff, and a pestle). Our five-year-old participants learned these rare words either by looking at the pictures and listening to the experimenter say each word twice, or by looking, listening to the experimenter say the word once, and vocally repeating it once. The results showed better memory (recognition rates) for words said aloud (54%) than for heard words (40%). These results support the Production Effect as a prominent memory and learning tool, even for pre-school children. Vocalizing may serve as a mnemonic that can be used to assist learners in improving their memory for new concepts.

We invite you to read the full article ‘The production effect in memory: a prominent mnemonic in children’ here



Grab that mat, Bat Rat, said Fat Cat! How rhyming words help children with phonological disorders

Blog post written by Judith A. Gierut based on an article in the latest issue of Journal of Child Language

It has long been thought that children’s acquisition of the sound system of a language follows directly from lexical learning. Indeed, some words are better than others in promoting mastery of new sounds and generalized productive use of those sounds across the lexicon. In particular, rhyming words (dubbed lexical neighbors) provide distinct advantages to phonological learning, but the learning mechanism responsible for the effect is not well understood. Some suppose that rhyming words afford a naturalistic case of long-term auditory priming, such that repeated exposure to similar sounding words of the input enhances phonemic distinctiveness. Others suggest that rhyming words benefit phonological working memory, such that exposure to similar sounding words helps retention of sounds and sound sequences.

These hypotheses take on added intrigue when considered relative to the population of children with phonological disorders, which happens to be the most prevalent language learning disability of childhood. We wondered whether it might be possible to take advantage of rhyming words to jumpstart phonological learning for these children, and in the process, to disambiguate hypotheses about relevant learning mechanisms.

Two intervention studies were conducted enrolling preschool children with phonological disorders. We crafted two sets of illustrated stories: one composed of rhyming words, akin to Dr. Seuss books, and a second using the same illustrations but composed of phonologically unrelated, non-rhyming words. Children were exposed in treatment to either the stories with rhyming words or those with non-rhyming words. In Study 1, stories were presented before teaching production of sounds in error, as a test of the priming hypothesis. In Study 2, stories were presented after teaching production of sounds in error, as a test of the phonological working memory hypothesis. Results showed that rhyming words promoted greater phonological learning than non-rhyming words, but only when stories preceded production training. The magnitude of phonological gain was on the order of 2:1. By comparison, there was little phonological learning and no differential effect when rhyming or non-rhyming words were presented after production training. The findings are consistent with priming as a mechanism of learning with benefits to phonological acquisition. There is also new promise for treatment of children with phonological disorders, in that rhyming words may be employed before production training to advance broader phonological gains.

We invite you to read the full article ‘Dense neighborhoods and mechanisms of learning: evidence from children with phonological delay’ here


Talker familiarity and spoken word recognition in school-age children

Blog post written by Susannah Levi based on an article in Journal of Child Language

When people listen to speech, they hear two types of information: what is being said (such as “That’s a ball”) and who said it (such as MOMMY).

Prior studies have shown that adults understand speech better when it is spoken by a familiar voice. In this study, we tested whether school-age children also understand speech better when listening to a familiar voice. First, children learned the voices of three previously unknown speakers over five days. Following this voice familiarization, children listened to words mixed with background noise and were asked to tell us what they heard. These words were spoken both by the now-familiar speakers and by another set of unfamiliar speakers.

Our results showed that, like adults, children understand speech better when it is produced by a familiar voice. Interestingly, this benefit of voice familiarity only occurred when listening to highly familiar words (such as “book” or “cat”) and not to words that are less familiar to school-age children (such as “fate” or “void”). We also found that the benefit of familiarity with a voice was most noticeable in children with the poorest performance, suggesting that familiarity with a voice may be especially useful for children who have difficulty understanding spoken language.

We invite you to read the full article ‘Talker familiarity and spoken word recognition in school-age children’ here

Measuring a very young child’s language and communication skills in developing countries

Blog post written by Katie Alcock based on an article in Journal of Child Language

The best way to find out about a very young child’s language and communication is to ask their parents, but in developing countries parents can’t always fill in a written questionnaire, so we have created a very successful interview technique to do this.

To start with, we wanted to visit homes close to the Wellcome Trust Unit in Kilifi Town, coastal Kenya, but even in town there are few paved roads. We went out from the Unit in a four-wheel-drive to a Kiswahili-speaking family home; homes here range from concrete to mud walls, from tin to thatch roofs.

Families in Kilifi are used to nosy questions from researchers, but we weren’t sure how easy they’d find it to talk about their children’s language. Parents worldwide find it very difficult to answer open-ended questions about words their young child knows. When asked simple yes/no questions about individual words, though, we think the information they give is accurate.

The family we saw that day had a 15-month-old boy. For children of this age we ask parents about words for animals and animal noises, foods, household objects, toys, and verbs. Children often know few words at this age, so we also ask about gestures such as waving and (very important in this culture) shaking hands.

“Can your child understand or say the following words… mee mee [what a goat says]; boo boo [what a cow says]… maji [water]… ndizi  [banana]… taa [lamp]” “Yes! He says “taa” and he thinks the moon is a lamp too, he says “taa” when he sees the moon!”.

Bingo! A classic example of overextension – a child using a word for one thing to refer to something similar. We were unsure when we started what kind of answers parents would give and how patient they would be with quite a lengthy questionnaire (over 300 words, even for 8 month old babies).  Our researchers didn’t know either whether local parents would be aware of children making animal noises – like “baa” and “moo”. It turned out this was very much a “thing” that parents noticed – the baby word for “cat” is “nyau”.

We do this research through the MRC unit because we need good tools to assess how a child’s development is affected by factors such as HIV, cerebral malaria, and malnutrition.

We found that parents were very accurate in telling us how well their child communicates, and very patient! They told us about the same kinds of mistakes children make when learning other languages. We also went on to use our questionnaire to look at whether children exposed to HIV had delayed language compared to their peers.

Read the full article ‘Developmental inventories using illiterate parents as informants: Communicative Development Inventory (CDI) adaptation for two Kenyan languages’ here

Caregivers provide more labeling responses to infants’ pointing than to infants’ object-directed vocalizations

Blog post written by Julie Gros-Louis based on an article in a recent issue of Journal of Child Language

One main context for language learning is in social interactions with parents and caregivers. Infants produce vocal and gestural behaviors and caregivers respond to these behaviors, which supports language development. Prior studies have shown a strong relationship between infants’ pointing gestures and language outcomes. One reason for this association is that parents translate the apparent meaning of infants’ points, thus providing infants with language input associated with their pointing behavior. In contrast to the relationship between pointing and language development, infants’ overall vocal production is not related to language outcomes. One possible explanation for the different association between pointing and language outcomes, compared to vocalizations and language outcomes, is that pointing may elicit more verbal responses from social partners that are facilitative for language learning.

To examine this possibility, we observed twelve-month-olds during free play interactions with their mothers and fathers. At this age, infants do not have many words in their vocabulary and thus communicate primarily with gestures and vocalizations. We compared parents’ verbal responses to infants’ pointing gestures and object-directed vocalizations. Results showed that infants’ pointing elicited more verbal responses from parents compared to object-directed vocalizations. Also, these verbal responses were mainly object labels. These results may help explain why pointing is associated with indices of language acquisition, but the production of vocalizations is not. Furthermore, the study highlights the importance of examining moment-to-moment interactions to uncover social mechanisms that support language development.

We invite you to read the full article ‘Caregivers provide more labeling responses to infants’ pointing than to infants’ object-directed vocalizations’ here


Losing a language in childhood

Blog post written by Cristina Flores based on an article in the latest issue of Journal of Child Language

What happens if a bilingual child from an immigrant family moves (back) to the family’s country of origin and loses contact with the language that until that moment was his or her dominant language? Does the unused language disappear from the child’s mind?

This new paper analyses such a situation of remigration and its consequences for a bilingual child’s language development. The analysis is based on a longitudinal study of language attrition in a bilingual child, Ana, who grew up in Germany and moved to her parents’ country of origin, Portugal, at the age of nine. Since she has few opportunities for contact with German after remigration, Ana experiences a DOMINANCE SHIFT from German, her (until then) dominant language, to Portuguese, her heritage language.

Data collection, based on oral interviews and story retelling, started three weeks after the child’s immersion in the Portuguese setting and ended eighteen months later. Results show first effects of language attrition after five months of reduced exposure to German, namely lexical difficulties and deviant syntactic omissions. By the end of the study the informant showed severe word-retrieval difficulties and was unable to produce complete sentences in German. The findings thus confirm the conclusions of other studies on child language attrition, which attest to strong effects of attrition when loss of contact with the target language occurs in childhood, i.e. before the age of eleven or twelve. It remains an open question whether the language ‘comes back’ easily after re-immersion in a German environment.

Read the full article ‘Losing a language in childhood: a longitudinal case study on language attrition’ here.

Can Audio Storybooks Improve Children’s Second-Language Accent?

Blog post written by Terry Kit-Fong Au based on an article in the latest issue of Journal of Child Language

With globalization, speaking more than one language is useful. No wonder many children are learning a second or even a third language. The younger children are when they start getting input from native speakers, the better their accent will be. Yet because of resource constraints, interaction with native speakers is not always possible – especially for children learning a foreign language that is not the societal language (e.g., children learning English in much of Asia and Latin America). Audio recordings are commonly used as an affordable substitute. But do they work?

Research recently published in the Journal of Child Language has revealed the usefulness of audio storybooks. First- and second-grade children in Hong Kong – whose native language was Cantonese Chinese – listened to audio storybooks either in English or in Mandarin Chinese. To give children more diverse input, each audio storybook contained six recordings of the same very short story read by different native speakers.

These Cantonese Chinese children listened to a few dozen such audio storybooks at home in only one of their non-native languages: either English or Mandarin Chinese. Those who had listened to Mandarin stories improved significantly more in their Mandarin accent than those who had listened to English stories. Those who had listened to English stories improved somewhat more in their English accent than those who had listened to Mandarin stories.

Audio storybooks may well prove to be a cost-effective strategy to enrich the language environment of young second-language learners.

We invite you to read the full article ‘Can non-interactive language input benefit young second-language learners?’ by Terry Kit-Fong Au, Winnie Wailan Chan, Liao Cheng, Linda S. Siegel and Ricky Van Yip Tso (2015)



The ubiquity of frequency effects in first language acquisition

Blog post written by Ben Ambridge based on an article in the latest issue of Journal of Child Language

Pretty much every kind of human (and, for that matter, animal) learning shows frequency effects: the more we hear or see something, the better we learn it, remember it, and even like it. But in the domain of children’s language acquisition, both the existence and meaningfulness of frequency effects have proved controversial, particularly because they have implications for the (in)famous nature-nurture debate. In this target article, we argue that frequency effects can be found absolutely everywhere in language acquisition, from the level of abstract strings to the level of abstract syntactic cues. In fact, high frequency items are not only early-acquired and resistant to errors (when children are attempting to produce them), but also cause errors, when children use them in place of lower-frequency targets.

What does all this mean in terms of theory? Well, we argue that while frequency effects are often taken as evidence for constructivist/usage-based accounts, they are not necessarily incompatible with nativist/UG accounts in principle. However, because these accounts draw a sharp distinction between the lexicon and grammar, for instance assuming that even infants’ grammatical rules are formulated in terms of syntactic categories and phrases rather than individual lexical items, they do not straightforwardly explain frequency effects that cut across these levels of representation.

There are commentaries too; nine of them, in fact. While most of them are generally supportive, many point out that the real trick is going to be disentangling the effects of frequency from those of other factors (e.g., serial position, communicative intent) with which frequency frequently interacts. In our response, we acknowledge that this disentangling work has only just begun, but conclude that – nevertheless – frequency effects are real, and are therefore something that any serious theory of language acquisition – of whatever theoretical stripe – must explain.

We invite you to read the entire article ‘The ubiquity of frequency effects in first language acquisition’ and the commentaries that follow here.

She refers therefore she is: Morphosyntax and pragmatics in referential communication


Post written by Aylin C. Küntay, Koç University, Istanbul & Utrecht University, Utrecht

Based on an upcoming keynote talk to be given at IASCL 2014 this week (14th – 18th July, Amsterdam)

Referential communication is talking about things and people, an essential ability upon which many human communicative interactions build. To be able to communicate effectively, speakers and addressees should concur on what they are talking about. Although this sounds trivial, even adults sometimes have trouble in pinpointing exactly what their interlocutor has in mind, or might fail to express their referential intentions in the clearest way.

The evidence we have about children’s referential abilities is mixed. An 18-month-old can be quite effective in making us pick the right diaper with the desired picture out from a heap of clean diapers. A 5-year-old, on the other hand, might lose us among the many characters he introduces in his retelling of a movie. Many factors distinguish the situation of the diaper-picker from the film-narrator. Yet in our methodological and analytical frameworks, we forget that the act of referential behavior is embedded in certain contexts and geared towards a particular type of interactive experience. My talk will focus on the contextual conditions that render toddlers and preschoolers referentially (in)effective.

In my keynote talk, I will focus on the contributions that interactive contexts, interactive goals, and interactive partners make to the development of referential communication. I will present data from children’s narrative interactions and conversational discourse, in addition to experimental studies.

These studies show how naturalistic interactions with others and their feedback impact (monolingual) children’s development of referential communication. Infants are presented by their caregivers with richly textured patterns of referential sets, where the referent remains constant across extended stretches of discourse. This constancy is accompanied by integration of nonverbal cues such as gestures, gazes, and touches in addition to linguistic expansions and reductions regarding the referent.

Preschool children display morphosyntactically more sophisticated and referentially clearer structures when they build their discourse structures conversationally rather than via being prompted by picturebooks, when they assume more audience-oriented interactive goals, and when they are trained on referential effectiveness. In brief, children need to learn to form a variety of (often language-specific) expressive devices in addition to learning how to use these devices for particular interactive contexts and discourse functions.

Discover more about the IASCL 2014 here