Blog post based on an article in Journal of Child Language, written by Carla Hudson Kam
It is obvious that language learning in children is affected by input: children exposed to English learn English and children exposed to Mandarin learn Mandarin. But the relationship between the input children receive and what they learn is more specific than that.
For instance, we know that children whose caregivers produce lots of complex sentences produce complex sentences earlier than children who hear fewer of them. Children’s input comes from sources beyond their caregivers’ natural speech, however, and researchers have become increasingly interested in input from these other sources. My co-author and I were interested in the prevalence of a specific type of information in children’s books but needed to know which books we should analyze (if you’re interested in books as input, presumably you should analyze books that children are actually being exposed to).
We conducted a survey of parents and caregivers asking about the English-language books they were reading to their child, to help us select our books for analysis. Although we initially created this dataset to conduct our own research, we quickly realized the potential the dataset had, and so decided to share it with the wider research community.
The resulting database – which we call the Infant Bookreading Database (or IBDb) – includes responses from 1,107 caregivers of children aged 0-36 months, who answered questions about the five books they were reading to their child most often at the time. The dataset also includes demographic and language development information, so the data can be analyzed separately for children of different ages, genders, or language skill levels, or for the caregiver’s age, gender or education level.
There were 2,227 unique titles listed by caregivers, and 1,617 are identifiable (meaning we could figure out exactly which book the respondent was referring to). One of the most striking things about the identifiable titles is how much variation there is in which books children are hearing. Only one book was listed by more than 200 respondents (Goodnight Moon by M. Wise Brown), two were listed more than 100 but less than 200 times (The Very Hungry Caterpillar by E. Carle and Brown Bear, Brown Bear, What do you See? by B. Martin and E. Carle), and only four books were listed 50-99 times (so by at least 4.5% of our sample). The overwhelming majority of titles were only listed by one or two respondents. So while there are a small number of fairly popular books that show up again and again in our data, most books are being read to only one or two children in our sample.
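The frequency breakdown above (a handful of very popular titles and a long tail of books named only once or twice) is the kind of distribution one could tally directly from the survey responses. A minimal sketch of that tally, using an invented list of responses rather than the actual IBDb data (the real download has its own column layout):

```python
from collections import Counter

# Hypothetical responses: one entry per (respondent, title) pair.
# These are illustrative only, not drawn from the IBDb itself.
responses = [
    "Goodnight Moon", "The Very Hungry Caterpillar", "Goodnight Moon",
    "Brown Bear, Brown Bear, What Do You See?", "Some Rarely Listed Title",
]

# Count how many respondents listed each title.
title_counts = Counter(responses)

# Bucket titles by respondent count, mirroring the groupings in the post
# (more than 200, 100-199, 50-99, and the long tail below 50).
def bucket(n):
    if n >= 200:
        return ">=200"
    if n >= 100:
        return "100-199"
    if n >= 50:
        return "50-99"
    return "<50"

distribution = Counter(bucket(n) for n in title_counts.values())
print(distribution)
```

With a full dataset of 1,107 respondents, the `"<50"` bucket would dominate, reproducing the long-tail pattern described above.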
Read ‘Introducing the Infant Bookreading Database (IBDb)’ for free.
The IBDb is available for download for use by researchers (and anyone else who is interested) at linguistics.ubc.ca/ubc-ibdb/.
Blog post based on an article in Journal of Child Language, written by Rana Abu-Zhaya
Studies have shown that both caregiver touch and speech play an important role in the early development of infants. Research examining early caregiver-infant interactions showed that touch is prominently present and is a key component of those interactions.
Among other significant effects, touch plays a role in directing infants’ attention and regulating their arousal. Similarly, studies examining speech directed to infants have shown that adults modify their speech when interacting with young infants. These modifications result in what researchers call “infant-directed speech”, which has been shown to aid language learning. However, despite the common use of speech and touch in early interactions, very little is known about how the two are naturally combined during interactions with infants.
In a study designed to specifically examine how touch and speech are combined and used in early interactions, mothers were asked to read to their 5-month-old infants a book about body-parts and one about animals. In order to keep the interactions as naturalistic as possible, mothers were asked to read the books to their infants the way they would normally do at home. The interactions were videotaped and audio-recorded; analyses were performed on the video stream separately from the audio stream.
The results of the study revealed some interesting features of mother-infant interactions. First, the study confirmed the finding from previous research that touch is a common component of early interactions and is produced naturally by caregivers without specific elicitation. More importantly, the study showed that touch+speech events are different from touch alone or speech alone events. Specifically, touches that were produced with speech were longer in their duration than touch alone events. Further, an examination of a specific set of words, i.e. body-part words and animal names, revealed that when words were accompanied by touch, they were produced with a higher average pitch than words that were spoken without any touches. Hence, when touch and speech are produced together creating multimodal events, they have more exaggerated features than when each is produced separately.
Further, the results suggest that mothers’ touches tend to be well aligned with their speech: mothers tend to touch their infants in locations congruent with the body-part names they are producing at that moment.
The significance of these findings lies in the fact that infants are presented with language in a rich multimodal context; understanding how different cues are naturally combined with speech can help researchers better characterize the early input that infants receive. The better researchers understand how language is presented to infants, and how various cues (such as touch) are used and weighted differently throughout development, the better they can help infants and children who are struggling to learn language.
Access ‘Multimodal infant-directed communication: how caregivers combine tactile and linguistic cues’ for free through 31st October.
Blog Post written by Eve V. Clark (Stanford University), author of the recently published First Language Acquisition (3rd Edition)
How early do infants start in on language?
Even before birth, babies recognize intonation contours they hear in utero, and after birth, they prefer listening to a familiar language over an unfamiliar one. And in their first few months, they can already discriminate between speech sounds that are the same or different.
How early do infants understand their first words, word-endings, phrases, utterances?
Children learn meanings in context, both from hearing repeated uses of words in relation to their referents, and from feedback from adults when they use a word correctly or incorrectly. When a child is holding a ball, the mother might say “Ball. That’s a ball”, and the child could decide that “ball” picks out round objects of that type. Still, it may take many examples to establish the link between a word-form (“ball”) and a word-meaning (round objects of a particular type) and to relate the word “ball” to neighbouring words (throw, catch, pick up, hold). It takes even longer for the child’s meaning of a word to fully match the adult’s.
When do infants produce their first words and truly begin to talk?
Infants babble from 5-10 months on, giving them practice on simple syllables, but most try their first true words at some time between age 1 and age 2 (a broad range). They find certain sounds harder to pronounce than others, and certain combinations (e.g., clusters of consonants) even harder. It therefore takes practice to arrive at the adult pronunciations of words –– to go from “ba” to “bottle”, or from “ga” to “squirrel”. Like adults, though, children understand much more than they can say.
What’s the relation between what children are able to understand and what they are able to say?
Representing the sound and meaning of a word in memory is essential for recognizing it in the speech of others. Because children are able to understand words before they produce them, they can use the stored representations of words they already understand as models to aim for when they try to pronounce those same words.
How early do children begin to communicate with others?
A few months after birth, infants follow adult gaze, and they respond to adult gaze and to adult speech face-to-face, with cooing and arm-waving. As they get a little older, they attend to the motion in adult hand-gestures. By 8 months or so, they recognize a small number of words, and by 10 months, they can also attend to the target of an adult’s pointing gestures. They themselves point to elicit speech from caregivers, and they use gestures to make requests – e.g., pointing at a cup as a request for juice. They seem eager to communicate very early.
How do young children learn their first language?
Parents check up on what their children mean, and offer standard ways to say what the children seem to be aiming for. Children use this adult feedback to check on whether or not they have been understood as they intended.
Do all children follow the same path in acquisition?
No, and the reason for this depends in part on the language being learnt. English, for example, tends to have fixed word order and relatively few word endings, while Turkish has much freer word order and a large number of different word-endings. Languages differ in their sound systems, their grammar, and their vocabulary, all of which have an impact on early acquisition.
These and many other questions about first language acquisition are explored in the new edition of First Language Acquisition. In essence, children learn language in interaction with others: adults talk with them about their daily activities – eating, sleeping, bathing, dressing, playing; they expose them to language and to how it’s used, offer feedback when they make mistakes, and provide myriad opportunities for practice. This book reviews findings from many languages as it follows the trajectories children trace during their acquisition of a first language and of the many skills language use depends on.
First Language Acquisition (third edition), Cambridge University Press 2016
Blog post based on an article in Journal of Child Language
Written by Melanie Soderstrom in consultation with article co-authors Eon-Suk Ko, Amanda Seidl, and Alejandrina Cristia
It has long been known that adults’ speech patterns unconsciously become more similar over the course of a conversation, but do children converge in this way with their caregivers? Across many areas of child development, children’s imitation of caregivers has long been understood to be an important component of the developmental process. These concepts are similar, but we tend to think of imitation as one-sided and static, while convergence is more dynamic and involves both interlocutors influencing each other. In our study, we set out to examine how duration and pitch characteristics of vocalizations by 1- and 2-year-olds and their caregivers dynamically influence each other in real-world conversational interactions.
We recorded 13 mothers and their children using LENA, a system for gathering full-day recordings, which also provides an automated tagging of the audio stream into speakers. We analyzed pitch and duration characteristics of these segments both within and across conversational exchanges between the mother and child to see whether mothers and children modulated the characteristics of their speech based on each other’s speech. Instead of examining mother-child correlations across mother-child dyads, as previous studies have done, we examined correlations within a given dyad, across conversations. We found small, but significant correlations, particularly in pitch measures, suggesting that mothers and children are dynamically influencing each other’s speech characteristics.
We also looked at who started the conversation, and measured mother and child utterance durations and response latencies (i.e., how quickly mothers responded to their child’s utterance and vice versa). Overall, unsurprisingly, mothers produced longer utterances and shorter response latencies (faster responding) than their children. However, both the mothers and the children produced longer utterances and shorter response latencies in conversations that they themselves initiated. This finding is exploratory, but suggests that providing children with the conversational “space” to initiate conversations may lead to more mature vocalizations, and may therefore be beneficial for the language-learning process.
Read the full article ‘Entrainment of prosody in the interaction of mothers with their young children’ here
Blog post written by J. Douglas Mastin and Paul Vogt based on an article in Journal of Child Language
This study analyzes how individuals in rural and urban Mozambique engage with infants during naturalistic observations. We assess how the proportion of time spent at 13 months in different types of engagement (i.e., being alone, observing others, interacting with and without goals) relates to infants’ language development over the second year of life. We created an extended version of Bakeman and Adamson’s (1984) categorization of infant engagement, and investigated how a more detailed analysis of infant engagement can contribute to our understanding of vocabulary development in natural settings.
In addition, we explored how different engagements relate to vocabulary size, and how these differ between the rural and urban communities. Results show that rural infants spend significantly more time in forms of solitary engagement, whereas urban infants spend more time in forms of triadic joint engagement (e.g., involving another person and a shared object or event). In regard to correlations with reported productive vocabulary, we find that dyadic PERSONS engagement (i.e., interactions not about concrete objects) correlates positively with vocabulary measures in both the rural and urban communities. In addition, we find that triadic COORDINATED JOINT ATTENTION has a positive relationship with vocabulary in the urban community, but a contrasting negative correlation with vocabulary in the rural community.
These similarities and differences are explained based upon the parenting beliefs and socialization practices of different prototypical learning environments. Specifically, we assess how views on child-centered activities differ between rural and urban populations in traditional cultures. Overall, this study concludes that the extended categorization of engagement provides a valuable contribution to the analysis of infant engagement and its relation to language acquisition, especially for analyzing naturalistic observations as compared to semi-structured studies. Moreover, with respect to vocabulary development, Mozambican infants appear to benefit most strongly from dyadic engagements without an object, while they do not necessarily benefit from joint attention, as tends more often to be the case for children from industrialized, developed communities.
Read the full article ‘Infant engagement and early vocabulary development: a naturalistic observation study of Mozambican infants from 1;1 to 2;1’ here
Blog post written by Ashley de Marchena based on an article in Journal of Child Language
People with autism spectrum disorder (ASD) often struggle with imagining or understanding another person’s perspective or state of mind, so-called “theory of mind abilities.” Such individuals also have difficulties with social and conversational language (termed “pragmatic” skills). Research on ASD has been guided by the assumption that pragmatic difficulties are a simple reflection of problems with theory of mind. Thus, we might imagine that someone with ASD may not tailor his language based on what another person already knows (such as when conversational partners share background knowledge).
Our recent study published in the Journal of Child Language unveils a more complicated and perhaps surprising picture of conversational interactions and pragmatic language in ASD. We studied storytelling in order to answer the central question, “how do adolescents with ASD respond to shared knowledge with a conversational partner?”
Our results demonstrated that adolescents with typical development subtly altered their language in response to shared knowledge; specifically, their stories were shorter in the context of shared knowledge. In contrast, adolescents with ASD did not make these subtle adjustments – their stories were no shorter – demonstrating that, in this sense, they did not communicate differently based on the shared social context. On the other hand, additional study measures revealed that teens with ASD were sensitive to the social context and attempted to modify their stories accordingly. Specifically, we asked college students to rate participant stories for overall communicative quality. We found that college students were sensitive to differences in story quality based on the participants’ social context (that is: shared knowledge or no shared knowledge). These ratings revealed that adolescents with ASD did change how they communicated based on what their conversational partner knew; however, their strategy for incorporating shared knowledge was unsuccessful, resulting in less effective communication.
Next we probed why teens with ASD were essentially telling worse stories when they shared background knowledge with their conversation partner. We discovered that, in the context of shared knowledge, those with ASD were less likely to clarify or correct themselves when they stumbled during speech. One way to interpret this is that, since they were aware that their partner already knew what they were talking about, they exerted less effort in explaining some parts of their stories, resulting in stories that were harder for others to follow.
This method of combining the big picture (for example, ratings of communicative quality) with a detailed analysis of discourse revealed that adolescents with ASD were indeed aware of the social context and its relevance, highlighting a critical, yet under-recognized strength. Unfortunately, their strategy for incorporating the shared experience was unsuccessful, perhaps because storytelling itself is highly effortful. Pinpointing exactly where and how communication breakdowns occur will help inform targets for pragmatic language interventions.
Read the full article ‘The art of common ground: emergence of a complex pragmatic language skill in adolescents with autism spectrum disorders’ here.
A note from the Editor of Journal of Child Language Johanne Paradis
I consider it an honour to have been asked to serve as editor of JCL, one of the long-standing and core journals in our field. JCL has a solid and growing Impact Factor and an impressive volume size with 6 issues each year.
With early online publication (FirstView), a green open access policy for all articles, an option for authors to choose full open access at a competitive fee, and the continued production of print copies, JCL offers a healthy mix of both traditional and innovative publishing practices.
The breadth of papers published in JCL is one of its greatest strengths. Among the top cited JCL articles for the Impact Factor, there are papers on bilingual and monolingual children, typically-developing children and children with developmental disorders, and children learning European and non-European languages. I intend for JCL to continue to be a venue where there is diversity in the populations of children studied, because a comprehensive understanding of language development in all children depends on such diversity.
Special issue in 2016
I am delighted to announce that in the 2016 volume of JCL, we will include a special issue on Age of Acquisition Effects in Child Language, with Elma Blom and myself as co-editors. While age of acquisition effects have been researched extensively in adult second language acquisition, there is less research focussed on examining age of acquisition effects in child language acquisition. This issue will consist of papers examining rate, patterns and mechanisms of development in a language children were not exposed to at birth, for example children who are early second language learners and children with cochlear implants. We are confident this set of papers will generate a lively debate about the relative contribution of age of acquisition versus input factors in child language development.
Changes in the editorial and review process
Even in an established and well-run journal, there is always room for improvement in the process. In response to feedback from authors about turnaround times, in the fall of 2015 we put in place a series of minor changes at every step of the process from submission to final decision. These changes are designed to streamline the review process and reduce the time to final decision. Also in the fall of 2015, we started reviewing and revising the JCL style sheet in order to bring it up to date and closer to APA style, which should facilitate manuscript preparation.
Changes in the editorial team
I am taking over the editorship from Heike Behrens (University of Basel), who has steered the ship since 2011 and whose sage advice has smoothed my transition to editor and provided me with an excellent model of cooperative leadership. Three associate editors will be finishing their terms by the end of 2015: Misha Becker (University of North Carolina), Aylin Küntay (Koç University) and Carol Stoel-Gammon (University of Washington). New associate editors in 2016 will be Elma Blom (Utrecht University), Cecile DeCat (University of Leeds), Melanie Soderstrom (University of Manitoba) and Laura Wagner (Ohio State University), joining Caroline Rowland (University of Liverpool), Holly Storkel (University of Kansas) and Elizabeth Wonnacott (University of Warwick).
On behalf of Heike Behrens and myself, I would like to express our immense appreciation to the outgoing members for their dedication and hard work and give a warm welcome to the new members of the team.
Please join us in welcoming Johanne as Editor of Journal of Child Language
Post from the University of Maryland College of Behavioral and Social Sciences Blog The Solution
Many adults speak more than one language, and often “mix” those languages when speaking to their children, a practice called “code-switching.” An eye-opening study by researchers in the Department of Hearing and Speech Sciences has found that this “code-switching” has no impact on children’s vocabulary development. The study, “Look at the gato! Code-switching in speech to toddlers” appears in the Journal of Child Language.
Professor Rochelle S. Newman, chair of the department, and then-graduate students Amelie Bail and Giovanna Morini studied 24 parents and 24 children aged 18 to 24 months during a 15-minute play session.
•Every parent in the study switched languages at least once during a play session with their child; more than 80 percent of parents did so in the middle of a sentence.
•An average of 4 percent of parents’ individual sentences included more than one language.
•The children of parents who switched languages more often than average, or who used more mixed-language sentences, did not have poorer vocabulary skills.
•The researchers found no indication that the mixing of languages by the parents resulted in poorer vocabulary learning by the children.
“Parents tend to use very short sentences when talking to children this young—yet despite this, they often switched languages in the middle of sentences, saying things like, ‘el otro fishy’ or ‘can I have the beso?’ We were surprised that so many parents would use two languages in the same sentence when speaking to such young children,” Newman said.
The study was conducted in part to address parental concerns.
“A lot of parents worry that using more than one language in the same sentence might cause confusion for a young child. So it is reassuring to know that children whose parents mixed their languages more often didn’t show any poorer vocabulary skills,” Newman said.
Read the full article ‘Look at the gato! Code-switching in speech to toddlers’ here
Post written by Michal Icht and Yaniv Mama based on an article in Journal of Child Language
How can a parent, a dedicated teacher, or a speech-language pathologist improve a child’s memory performance? How can we help a child better remember study material, such as new vocabulary?
A promising and straightforward technique may be simply saying the relevant material aloud. This simple method is based on the ‘Production Effect’ in memory: a memory advantage (of about 20%) for words that were read aloud over words that were read silently. Reading aloud has also been found to enhance memory for other types of material, such as sentences and text, and has proven useful for students and older adults.
Although many other mnemonics have been suggested in the literature (e.g., acronyms), the Production Effect seems especially appropriate for young children. Saying words aloud is simple (it does not require literacy skills) and can easily be applied in many educational settings and contexts (it requires no special equipment). The present study is the first investigation of the Production Effect in pre-school children (five-year-olds). Because children at this age cannot yet read, we used pictures of objects as stimuli for the first time. Would saying the object names aloud improve their memory?
In the first experiment, we used pictures of familiar objects (e.g., ball, teddy-bear, t-shirt). The children were presented with the pictures and were asked to memorize them. A third of the words were studied by looking at the picture alone, another third by looking and listening to the experimenter say the word (the object’s name), and the remaining third by looking and saying the word aloud. As expected, words that were vocally produced were recalled better (29%) than words that were heard (21%) or studied silently (14%) – a significant Production Effect!
In order to establish the Production Effect as an effective and valid learning method, we wanted to demonstrate that it also occurs in the acquisition of new vocabulary. Hence, in the second experiment we used pictures of rare, unfamiliar objects whose names (e.g., anchor, manger, cuff, pestle) were new words for the children. Our five-year-old participants learned these rare words either by looking at the pictures and listening to the experimenter say each word twice, or by looking, listening to the experimenter say the word once, and vocally repeating it once. The results showed better memory (recognition rates) for words said aloud (54%) relative to heard words (40%). These results support the Production Effect as a prominent memory and learning tool, even for pre-school children. Vocalizing may serve as a mnemonic that can be used to assist learners in improving their memory for new concepts.
We invite you to read the full article ‘The production effect in memory: a prominent mnemonic in children’ here
Blog post written by Judith A. Gierut based on an article in the latest issue of Journal of Child Language
It has long been thought that children’s acquisition of the sound system of a language follows directly from lexical learning. Indeed, some words are better than others in promoting mastery of new sounds and generalized productive use of those sounds across the lexicon. In particular, rhyming words (dubbed lexical neighbors) provide distinct advantages to phonological learning, but the learning mechanism responsible for the effect is not well understood. Some suppose that rhyming words afford a naturalistic case of long-term auditory priming, such that repeated exposure to similar sounding words of the input enhances phonemic distinctiveness. Others suggest that rhyming words benefit phonological working memory, such that exposure to similar sounding words helps retention of sounds and sound sequences.
These hypotheses take on added intrigue when considered relative to the population of children with phonological disorders, which happens to be the most prevalent language learning disability of childhood. We wondered whether it might be possible to take advantage of rhyming words to jumpstart phonological learning for these children, and in the process, to disambiguate hypotheses about relevant learning mechanisms.
Two intervention studies were conducted enrolling preschool children with phonological disorders. We crafted two sets of illustrated stories: one composed of rhyming words, akin to Dr. Seuss books, and a second using the same illustrations but composed of phonologically unrelated, non-rhyming words. Children were exposed to either the rhyming or the non-rhyming stories in treatment. In Study 1, stories were presented before teaching production of sounds in error, as a test of the priming hypothesis. In Study 2, stories were presented after teaching production of sounds in error, as a test of the phonological working memory hypothesis. Results showed that rhyming words promoted greater phonological learning than non-rhyming words, but only when stories preceded production training. The magnitude of phonological gain was on the order of 2:1. By comparison, there was little phonological learning and no differential effect when rhyming or non-rhyming words were presented after production training. The findings are consistent with priming as a mechanism of learning with benefits to phonological acquisition. There is also new promise for the treatment of children with phonological disorders, in that rhyming words may be employed before production training to advance broader phonological gains.
We invite you to read the full article ‘Dense neighborhoods and mechanisms of learning: evidence from children with phonological delay’ here