Talker familiarity and spoken word recognition in school-age children

Blog post written by Susannah Levi based on an article in Journal of Child Language

When people listen to speech, they hear two types of information:  what is being said (such as “That’s a ball”) and who said it (such as MOMMY).

Prior studies have shown that adults understand speech better when it is spoken by a familiar voice. In this study, we tested whether school-age children also understand speech better when listening to a familiar voice. First, children learned the voices of three previously unknown speakers over five days. Following this voice familiarization, children listened to words mixed with background noise and were asked to tell us what they heard. These words were spoken both by the now familiar speakers and by another set of unfamiliar speakers.

Our results showed that, like adults, children understand speech better when it is produced by a familiar voice. Interestingly, this benefit of voice familiarity only occurred when listening to highly familiar words (such as “book” or “cat”) and not to words that are less familiar to school-age children (such as “fate” or “void”). We also found that the benefit of familiarity with a voice was most noticeable in children with the poorest performance, suggesting that familiarity with a voice may be especially useful for children who have difficulty understanding spoken language.

We invite you to read the full article ‘Talker familiarity and spoken word recognition in school-age children’ here

Measuring a very young child’s language and communication skills in developing countries

Blog post written by Katie Alcock based on an article in Journal of Child Language

The best way to find out about a very young child’s language and communication is to ask their parents – but in developing countries parents can’t always fill in a written questionnaire – so we have created a very successful interview technique to do this.

To start with, we wanted to visit homes close to the Wellcome Trust Unit in Kilifi Town, Coastal Kenya, but even in town there are few paved roads. We went out in a four-wheel-drive from the Unit into a Kiswahili-speaking family home; homes here range from concrete to mud walls, from tin to thatch roofs.

Families in Kilifi are used to nosy questions from researchers but we weren’t sure how easy they’d find it to talk about their children’s language. Parents worldwide find it very difficult to answer open-ended questions about words their young child knows. When asked simple yes/no questions about individual words though, we think the information is accurate.

The family we saw that day had a 15-month-old boy. For children of this age we ask parents about words for animals and noises, foods, household objects, toys, and verbs. Children often know few words at this age, so we also ask about gestures such as waving and (very important in this culture) shaking hands.

“Can your child understand or say the following words… mee mee [what a goat says]; boo boo [what a cow says]… maji [water]… ndizi  [banana]… taa [lamp]” “Yes! He says “taa” and he thinks the moon is a lamp too, he says “taa” when he sees the moon!”.

Bingo! A classic example of overextension – a child using a word for one thing to refer to something similar. We were unsure when we started what kind of answers parents would give and how patient they would be with quite a lengthy questionnaire (over 300 words, even for 8-month-old babies). Our researchers also didn’t know whether local parents would be aware of children making animal noises – like “baa” and “moo”. It turned out this was very much a “thing” that parents noticed – the baby word for “cat” is “nyau”.

We do this research through the MRC unit because we need good tools to assess how a child’s development is affected by factors such as HIV, cerebral malaria, and malnutrition.

We found that parents were very accurate in telling us how well their child communicates, and very patient! They told us about the same kinds of mistakes children make learning other languages. We also went on to use our questionnaire to look at whether children exposed to HIV had delayed language compared to their peers.

Read the full article ‘Developmental inventories using illiterate parents as informants: Communicative Development Inventory (CDI) adaptation for two Kenyan languages’ here

Machine learning helps computers predict near-synonyms

The article is published in Natural Language Engineering, a journal that meets the needs of professionals and researchers working in all areas of computerised language processing

Choosing the best word or phrase for a given context from among candidate near-synonyms, such as “slim” and “skinny”, is something that human writers, given some experience, do naturally; but for choices with this level of granularity, it can be a difficult selection problem for computers.

Researchers from Macquarie University in Australia have published an article in the journal Natural Language Engineering, investigating whether they could use machine learning to re-predict a particular choice among near-synonyms made by a human author – a task known as the lexical gap problem.

They used a supervised machine learning approach to this problem, in which the weights of different features of a document are learned computationally. Using this approach, the computers were able to predict synonyms more accurately and with fewer errors.
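To make the general recipe concrete, here is a minimal sketch of what such a supervised setup can look like. It is illustrative only and is not the authors’ system: the toy sentences, the window size, and the choice of scikit-learn’s DictVectorizer and LogisticRegression are all assumptions made for the example.

```python
# Sketch: near-synonym choice as supervised classification over context-window features.
# Illustrative only; data, window size, and model choice are assumptions.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

def window_features(tokens, gap_index, window=3):
    """Bag-of-words features from a +/- `window` token context around the gap."""
    lo, hi = max(0, gap_index - window), min(len(tokens), gap_index + window + 1)
    return {tok: 1 for i, tok in enumerate(tokens[lo:hi]) if lo + i != gap_index}

# Toy training data: sentences in which a human author chose "slim" or "skinny";
# the index marks the position of that word (the "gap" to be re-predicted).
sentences = [
    ("she kept a slim figure through exercise".split(), 3),
    ("the model looked elegant and slim".split(), 5),
    ("that skinny stray cat needs feeding".split(), 1),
    ("he was a skinny kid with scraped knees".split(), 3),
]
X = [window_features(tokens, i) for tokens, i in sentences]
y = [tokens[i] for tokens, i in sentences]  # the word the author actually used

vectorizer = DictVectorizer()
classifier = LogisticRegression(max_iter=1000).fit(vectorizer.fit_transform(X), y)

# Re-predict the word choice for a new context with the near-synonym left blank.
test = "an elegant ___ evening dress".split()
print(classifier.predict(vectorizer.transform([window_features(test, 2)])))
```

Every sentence in which an author used one of the near-synonyms becomes a training example, and the classifier learns which context words favour which choice.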

The initial approach solidly outperformed some standard baselines, and predictions of synonyms made using a small window around the word outperformed those made using a wider context (such as the whole document).

However, they found that this was not the case uniformly across all types of near-synonyms. Those that embodied connotational or affective differences, such as “slim” versus “skinny”, which differ in how positively the meaning is presented, behaved quite differently. This suggested that broader features related to the ‘tone’ of the document could be useful, including document sentiment, document author, and a distance metric for weighting the wider lexical context of the gap itself. For instance, if the chosen near-synonym was negative in sentiment, this might be linked to other expressions of negative sentiment in the document.

The distance weighting was particularly effective, resulting in a 38% decrease in errors, and these models turned out to improve accuracy not just on affective word choice, but on non-affective word choice also.
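One simple way to picture the distance-weighting idea is to let every word in the wider context contribute a weight that decays with its distance from the gap, rather than a flat count. The exponential decay below is an assumed weighting scheme chosen for illustration; the paper’s exact metric is not reproduced here.

```python
# Sketch: distance-weighted context features (assumed exponential decay, for illustration).
import math

def distance_weighted_features(tokens, gap_index, decay=0.5):
    """Each context word contributes exp(-decay * distance) instead of a flat count."""
    features = {}
    for i, token in enumerate(tokens):
        if i == gap_index:
            continue  # skip the gap itself
        features[token] = features.get(token, 0.0) + math.exp(-decay * abs(i - gap_index))
    return features

# Words right next to the gap get weights near 1; distant words contribute much less.
print(distance_weighted_features("the dress looked ___ on her in every photo".split(), 3))
```

Features built this way can be fed to the same kind of classifier as in the earlier sketch, letting the whole document contribute while still favouring words close to the gap.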

Read the full article ‘Predicting word choice in affective text’ online in the journal Natural Language Engineering

How do children who have recently begun to learn English map new L2 words into their existing mental lexicon?

Blog post written by Greg Poarch based on an article in Bilingualism: Language and Cognition

How do children who have recently begun to learn English map new L2 words into their existing mental lexicon? We tested the predictions of the Revised Hierarchical Model (Kroll & Stewart, 1994), originally introduced to explain language production processes and the relative strengths of the underlying connections between L1 and L2 word forms and the corresponding concepts. To examine how children map novel words to concepts during early stages of L2 learning, we tested fifth grade Dutch L2 learners with eight months of English instruction.

In Study 1, the children performed a translation recognition task in which an English word (bike) was shown, followed by a Dutch word, and the children had to indicate whether the Dutch word was the correct translation. The Dutch word could be one of three: the correct translation equivalent (fiets), a semantically related incorrect translation (wiel [wheel]), or an unrelated incorrect translation (melk [milk]). The critical stimuli here were the semantically related incorrect translations: the RHM predicts that beginning learners should not yet be sensitive to L2 semantics, and hence should perform equally on both kinds of incorrect translations. The children, however, were already sensitive to L2 word meaning and took longer to decide that a word was an incorrect translation when it was semantically related than when it was unrelated.

In Study 2 the children performed backward and forward translation production tasks, and were faster in the backward direction, indicating direct translation from the L2 word to the L1 word without the detour via the concept, as predicted by the RHM. Our results indicate that depending on the task, Dutch beginning L2 learners do exploit conceptual information during L2 processing and map L2 word-forms to concepts, but evidently more so in recognition tasks than in production tasks. Critically, the children in our study had learned L2 words in contexts enriched by pictures and listening/speaking exercises.

This is further evidence that the manner of L2 instruction may strongly influence the activation of lexical and conceptual information during translation.

Read the full article ‘Accessing word meaning in beginning second language learners: Lexical or conceptual mediation?’ here

Can your phone make you laugh?


Examples of humorous and sometimes awkward autocorrect substitutions happen all the time. Typing ‘funny autocorrect’ into Google brings up page upon page of examples where phones seem to have a mind of their own.

A group of researchers at the University of Helsinki, led by Professor Hannu Toivonen, have been examining word substitution and sentence formation to see the extent to which they can implement a completely automatic form of humour generation. The results have been published online in the journal Natural Language Engineering.

Building on the ideas and methods of computational humour explored by Alessandro Valitutti over several years, the researchers worked with short text messages, changing one word to another to turn the text into a pun, possibly using a taboo word. By isolating and manipulating the main components of such pun-based texts, they were able to generate humorous texts in a more controllable way.

For example, they showed that replacing a word at the end of the sentence surprised recipients, contributing to the humorous effect. They also showed that a word replacement is funnier if the new word is phonetically similar to the original word and if it is a “humorously inappropriate” taboo word.
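As a rough illustration of that replacement step (and nothing more), the sketch below swaps the final word of a message for the most similar word from a small candidate list. String similarity from Python’s difflib stands in for genuine phonetic similarity, and the candidate list is invented for the example; the researchers’ actual method is considerably more sophisticated.

```python
# Sketch: end-of-message word replacement as a crude pun generator.
# difflib string similarity is a rough stand-in for phonetic similarity.
import difflib

def replace_last_word(message, candidates):
    """Swap the final word for the candidate most similar to it."""
    words = message.rstrip("?!.").split()
    original = words[-1]
    best = max(candidates, key=lambda c: difflib.SequenceMatcher(None, original, c).ratio())
    words[-1] = best
    return " ".join(words), original, best

text, original, replacement = replace_last_word(
    "How come u r back so fast?", ["fart", "farm", "fish"]  # invented candidate list
)
print(f"{text} ({original} -> {replacement})")
```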

The experiment involved over 70,000 assessments in total and used crowdsourcing to test the funniness of the texts. This is the largest experiment of its kind that Professor Toivonen is aware of in this field of research.

How funny?

People were asked to assess individual messages for their funniness on a scale of 0 to 4, with 0 indicating the text wasn’t funny. And comedians can sigh with relief – the initial median score from the research was just 0.55, indicating that on average the texts could hardly be called funny. But by following a combination of rules, this median increased by 67% (to roughly 0.92), showing that applying certain criteria could affect how funny the text message was.

Does this mean that in the future people will ‘rofl’ (roll on the floor laughing) in response to a funny quip or witty banter made by a phone?

Professor Toivonen sees a future where programs will be able to generate humorous automated responses and sentences:

“Some of the first applications of this type of research are likely to be seen in the automated production of funny marketing messages and help with creative writing. But who knows, maybe phones will one day be intelligent enough to make you laugh.”

Read the article ‘Computational generation and dissection of lexical replacement humor’ online in the journal Natural Language Engineering – please note that the article contains language that some may find offensive.

5 of the funniest texts*

“Okie, pee ya later” (original word: see; replacement: pee)
“How come u r back so fart?” (original word: fast; replacement: fart)
“Now u makin me more curious…smell me pls…” (original word: tell; replacement: smell)
“Dunno…My mum is kill bathing.” (original word: still; replacement: kill)
“No choice have to eat her” (original word: treat; replacement: eat)

*There were funnier texts but due to offensive language we were not able to publish them on this blog

Caregivers provide more labeling responses to infants’ pointing than to infants’ object-directed vocalizations

Blog post written by Julie Gros-Louis based on an article in a recent issue of Journal of Child Language

One main context for language learning is in social interactions with parents and caregivers. Infants produce vocal and gestural behaviors and caregivers respond to these behaviors, which supports language development. Prior studies have shown a strong relationship between infants’ pointing gestures and language outcomes. One reason for this association is that parents translate the apparent meaning of infants’ points, thus providing infants with language input associated with their pointing behavior. In contrast to the relationship between pointing and language development, infants’ overall vocal production is not related to language outcomes. One possible explanation for the different association between pointing and language outcomes, compared to vocalizations and language outcomes, is that pointing may elicit more verbal responses from social partners that are facilitative for language learning.

To examine this possibility, we observed twelve-month-olds during free play interactions with their mothers and fathers. At this age, infants do not have many words in their vocabulary and thus communicate primarily with gestures and vocalizations. We compared parents’ verbal responses to infants’ pointing gestures and object-directed vocalizations. Results showed that infants’ pointing elicited more verbal responses from parents compared to object-directed vocalizations. Also, these verbal responses were mainly object labels. These results may help explain why pointing is associated with indices of language acquisition, but the production of vocalizations is not. Furthermore, the study highlights the importance of examining moment-to-moment interactions to uncover social mechanisms that support language development.

We invite you to read the full article ‘Caregivers provide more labeling responses to infants’ pointing than to infants’ object-directed vocalizations’ here


Research trends in mobile assisted language learning from 2000 to 2012

Blog post written by Guler Duman based on an article in the latest issue of ReCALL

The widespread ownership of sophisticated but affordable mobile technologies has extended opportunities for making language teaching and learning available beyond the traditional classroom. Researchers have therefore begun to investigate new uses for various mobile technologies to facilitate language learning. It is not surprising, then, that a growing body of research into using these technologies for language learning has been documented over the past several decades, making mobile assisted language learning (MALL) an emerging research field. We believe that a comprehensive analysis of MALL-related literature is necessary for those interested in MALL research to understand current practices and to direct future research in the field.

In order to trace how MALL has evolved in recent years and to show to what extent mobile devices are being used to support language learning, in this article we analysed the MALL studies published from 2000 to 2012 with regard to the distribution of research topics and theoretical bases, the variety of mobile devices and the platforms and functions they support, and the diversity of methodological approaches.

A systematic and extensive review of MALL-related literature revealed that research in the field increased at a fast pace from 2008 and reached a peak in 2012. Teaching vocabulary with the use of cell phones and PDAs remained popular over this period. The writing process and grammar acquisition tended to be neglected in MALL studies. Furthermore, the need for solid theoretical bases that help establish a link between theory and practice emerged, since a significant number of studies did not base their research on any theoretical framework. MALL research also remained limited in its methodological approaches: applied and design-based research dominated the field, and these studies generally adopted quantitative research methods.

Ultimately, this study provides an important reference base for future research in the field of MALL with the identification of the most widely examined areas and issues.

Read the full article ‘Research trends in mobile assisted language learning from 2000 to 2012’ here.

Losing a language in childhood

Blog post written by Cristina Flores based on an article in the latest issue of Journal of Child Language

What happens if a bilingual child from an immigrant family moves (back) to the country of origin of his/her family and loses contact with the language that until that moment was his/her dominant language? Does the unused language disappear from the child’s mind?

This new paper analyses such a situation of remigration of a bilingual child and its consequences for language development. The analysis is based on a longitudinal study of language attrition in a bilingual child, Ana, who grew up in Germany and moved to her parents’ country of origin, Portugal, at the age of nine. Since she has had few opportunities for contact with German since remigration, Ana experiences a DOMINANCE SHIFT from German, the (until then) dominant language, to Portuguese, her heritage language.

Data collection, based on oral interviews and story retelling, started three weeks after the child’s immersion in the Portuguese setting and ended eighteen months later. Results show first effects of language attrition after five months of reduced exposure to German, namely lexical difficulties and deviant syntactic omissions. At the end of the study the informant showed severe word retrieval difficulties and was unable to produce complete sentences in German. The findings thus confirm the conclusions of other studies on child language attrition, which attest to strong effects of attrition when the loss of contact with the target language occurs in childhood, i.e. before the age of eleven/twelve. It remains an open question if the language «comes back» easily after re-immersion in a German environment.

Read the full article ‘Losing a language in childhood: a longitudinal case study on language attrition’ here.

Is the Portuguese version of the passage ‘The North Wind and the Sun’ phonetically balanced?

Blog post written by Luís Jesus based on an article in Journal of the International Phonetic Association

Those who have worked over the years on Portuguese have often wondered if there was a standard phonetically balanced text that could be used for research. There is no easy answer to this, and one is not able to find any consensus.

We had been wondering for a while if (perhaps with some luck) the Portuguese version of “The North Wind and the Sun” passage would be phonetically balanced (sometimes the solution is just around the corner…).

The same concerns about other languages, not least English, have also been raised and discussed for a long time. This discussion has recently been revitalised by Martin Ball at wordpress.com (2012/2013).

A phonetically balanced text covers all the phones of a target language (with frequencies as close as possible to those of the natural language) and presents examples of all its phonotactic rules, using a small number of words in current use. Portuguese is spoken by over 244 million people in nine countries, but there is no standard phonetically balanced passage for research and clinical practice.

This paper aims to determine if the short text “The North Wind and the Sun” (NWS passage – a text used as part of the illustrations of the International Phonetic Alphabet, over the last 100 years) is phonetically balanced for European Portuguese (EP) and Brazilian Portuguese (BP).

Two phonetic transcriptions of the text, based on specific grapheme-phone transcription algorithms for EP and BP, are presented. The frequencies of phonemes in the NWS phonetic transcriptions are compared with the phoneme frequency information contained in the most recent and largest EP and BP databases. Statistical analysis revealed significant differences between the frequency of phonemes in the BP NWS transcription and the frequency of phonemes in the BP databases. Additionally, the BP version of the NWS does not cover all BP phonemes, so the NWS cannot be considered a phonetically balanced text for BP. Only in terms of manner of articulation could the BP NWS transcription be viewed as phonetically balanced.

For EP, statistical analysis revealed no difference between the frequency of phonemes in the EP NWS transcription and the frequency of phonemes in the databases. Moreover, the NWS passage covers all the phonemes and all the phonotactic rules of EP. The NWS passage can therefore be used for EP speech and hearing research, and for the clinical assessment of patients with fluency or voice disorders.
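For readers curious what such a frequency comparison can look like in practice, here is a minimal sketch: the phoneme counts observed in a passage are tested against the relative frequencies reported for a reference database. The chi-square goodness-of-fit test and the toy counts below are assumptions made for illustration; the article’s own statistical procedure may differ.

```python
# Sketch: compare a passage's phoneme counts with reference corpus frequencies.
# Toy data and choice of test are assumptions for illustration.
from scipy.stats import chisquare

passage_counts = {"a": 40, "s": 25, "t": 20, "u": 15}          # counts observed in the passage
database_freqs = {"a": 0.35, "s": 0.30, "t": 0.20, "u": 0.15}  # relative frequencies in the corpus

phones = sorted(passage_counts)
observed = [passage_counts[p] for p in phones]
expected = [database_freqs[p] * sum(observed) for p in phones]  # scaled to the same total

stat, p_value = chisquare(f_obs=observed, f_exp=expected)
print(f"chi2 = {stat:.2f}, p = {p_value:.3f}")  # a small p-value suggests the passage is not balanced
```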

Read the full article ‘Is the Portuguese version of the passage ‘The North Wind and the Sun’ phonetically balanced?’ here.

Bilingualism in the Spanish-Speaking World

Post written by Jennifer Austin, María Blume & Liliana Sánchez authors of Bilingualism in the Spanish-Speaking World.

Bilingualism, and how it affects language and cognitive development, is a topic of increasing relevance in an interconnected world. In Bilingualism in the Spanish-Speaking World, we examine how the outcomes of bilingualism are shaped by factors at the individual level, such as age of acquisition and the amount and type of input, as well as societal support for the minority language in the form of dual-language education and similar initiatives. By analyzing previous research on the effects of these variables on bilingual speakers’ linguistic representations, as well as their minds and brains, we have attempted to provide a better understanding of some emerging conceptual views of the bilingual speaker. We also discuss how societal maintenance of bilingualism differs within the three multilingual communities which are the focus of this book: Peru, Spain and the United States. The status of Spanish varies between these regions; in Peru and the Spanish Basque Country, Spanish is a high-status, majority language, and in the United States, it is a minority language with varying degrees of prestige. While these three communities are linked by the common thread of bilingualism in Spanish, they provide diverse perspectives on the experience of being bilingual in distinct cultural, political, and socioeconomic contexts.

In the first chapter of the book, we examine how the concept of bilingualism has evolved from early definitions which included the expectation that bilinguals should behave like monolinguals, as in Bloomfield’s definition of bilingualism as the “native-like control of two languages” (Bloomfield 1933: 55-56). Increasingly, contemporary theories of bilingualism view differences between bilinguals and monolinguals as expected and normal, rather than deficiencies on the part of the bilingual. In addition, we discuss how heritage speakers challenge previous expectations regarding bilingualism, namely that the first language acquired is always the dominant one (the “mother tongue”), as well as the language that is acquired in a “native-like” fashion.

In the second chapter, we discuss recent research showing that the two languages of a bilingual are highly interconnected at the lexical, syntactic and phonological levels. We also review evidence that the continual interaction between the languages of a bilingual has important repercussions for cognitive development in bilingual children beginning early in infancy. These include enhanced executive function skills stemming from bilinguals’ need to monitor and inhibit one of their languages, as well as enhanced literacy abilities for bilingual children acquiring same-script languages. Bilingualism also produces neuroanatomical changes in multilingual speakers, including enhanced subcortical auditory processing and increased grey matter density in the inferior parietal cortex, an effect that is modulated by language proficiency and age of acquisition. Finally, we present evidence regarding the factors that affect L1 and L2 attrition in bilinguals, including age of second language immersion, availability and type of input, and proficiency levels in each language.

The third chapter examines several theories which have been proposed to account for lexical and syntactic development in bilingual children and adults. While early theoretical accounts assumed that lexical and syntactic development occurred separately, more recent approaches have proposed that their acquisition is interconnected, a theoretical linguistic advance which finds empirical support in the studies of the bilingual lexicon by cognitive psychologists. In this chapter we also present research findings that have allowed the field of bilingualism to move from initial debates on unitary versus binary systems of representation to a more nuanced view of the development of the bilingual lexicon and syntax that involves the interplay of different language subcomponents.

The overall picture that emerges from this book is that the cognitive and linguistic effects of bilingualism illustrate just how complex the representation and processing of language are in the human mind, in ways that go beyond accounts based solely on the study of monolinguals.

To find out more about this new book published by Cambridge University Press please click here