Post written by Chun-Yu Lin, Chung-Kai Huang, and Chang-Hua Chen based on an article published in the latest issue of ReCALL
With the trend of globalization, the study of the Chinese language opens the way to important fields such as Chinese economy, history, politics, and archaeology. In US higher education, information and communication technology (ICT) is seen as a valuable addition to the learning experience, and many universities have therefore developed Web-supported teaching and learning systems and technology-driven curricula. Emergent themes that serve as the driving force for integrating ICT into the Chinese language classroom are: increasing pedagogical flexibility and efficacy, improving learners’ core content knowledge and language skills, and preparing learners to use the target language in future academic or workplace domains. Nonetheless, the integration of ICT is not always an available or accepted part of course design. To bridge the gap between ICT integration and curriculum and instruction, and to enhance the exchange on technology-based Chinese language teaching, this study investigated barriers to the adoption of ICT in teaching Chinese as a foreign language in US universities via a mixed-methods approach. More specifically, given the complexity of the research questions, both quantitative and qualitative data were collected to provide a better understanding of integration barriers than either research approach would have revealed alone.
Many Chinese language instructors are enthusiastic about applying ICT to improve the effectiveness of teaching and learning, but many still feel unprepared to take advantage of ICT in their classrooms. This study reflected on the issues of ICT integration from a range of perspectives. The persistent barriers identified include availability of and access to technology hardware and software, structured design of the enacted curriculum, teachers’ technological and content knowledge, technical, administrative, and peer support, inadequate professional development, teacher beliefs, and demographic characteristics of teachers. These barriers inhibit successful technology integration efforts and prevent the requirements of many technology initiative opportunities from being met. To take immediate action while effecting change over the long term, recommendations include: improving classroom access to ICT, bolstering technical support, strengthening professional development around the instructional uses of technology, and enlisting in-service teachers to advocate for technical support and funding. Ensuring that ICT is an integral part of teaching practice will help Chinese language teacher communities benefit from the capabilities of technology and, at the same time, create an environment conducive to learning because it corresponds to actual Chinese language teaching contexts.
Read the entire article ‘Barriers to the adoption of ICT in teaching Chinese as a foreign language in US universities’ without charge until 30th June 2014
Post written by Pia Sundqvist, Karlstad University and Liss Kerstin Sylvén (University of Gothenburg) based on an article in the latest issue of ReCALL
Our research addresses young learners of English as a second language (L2) in Sweden and their spare time use of computers for various language-related activities in English, Swedish, and other languages. For instance, they socialize with friends online via Facebook, play various types of digital games, listen to music, watch clips on YouTube, and so on. We use the term “extramural English” in reference to all sorts of spare time activities in English.
The main purpose of our study was to examine language-related use of computers in general, and engagement in playing digital games in particular. We collected data with the help of a questionnaire and a one-week language diary from 76 children in 4th grade (ages 10–11), and then we compared their computer use in English, Swedish, and other languages. Another purpose was to see whether there was a relation between playing digital games in English and (a) gender, (b) first language, (c) motivation for learning English, (d) self-assessed English ability, and (e) self-reported strategies for speaking English. In order to do so, the participants were divided into three “digital game groups”: (1) non-gamers, (2) moderate gamers, and (3) frequent gamers (≥ 4 hours/week). It was possible to divide the participants into these groups since we had access to diary data consisting of their self-reported times for “playing digital games in English”.
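The three-way grouping described above can be sketched as follows. Note that the article specifies only the ≥ 4 hours/week threshold for frequent gamers; treating 0 hours as "non-gamer" and anything below 4 hours as "moderate" is an assumption for illustration:

```python
def classify_gamer(hours_per_week: float) -> str:
    """Assign a child to a digital game group based on self-reported
    weekly hours of "playing digital games in English" (diary data).

    Only the >= 4 h/week cut-off for frequent gamers is stated in the
    study; the other two cut-offs are illustrative assumptions.
    """
    if hours_per_week == 0:
        return "non-gamer"
    elif hours_per_week < 4:
        return "moderate gamer"
    else:
        return "frequent gamer"

# Hypothetical diary-reported weekly hours for three children
for hours in (0, 2.5, 6):
    print(hours, "->", classify_gamer(hours))
```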
The results showed, among other things, that the 4th-graders in this study spent 7.2 hours per week on extramural English activities. In other words, compared with the time devoted to formal instruction of English in school, the time spent on English outside school is much greater. There was also a statistically significant difference between the boys and the girls: the boys played more digital games and watched more films. On the other hand, the girls spent significantly more time on out-of-school language-related activities in Swedish than the boys did, the reason being that the girls spent more time on Facebook. The examination of the three digital game groups revealed that there were mostly girls in the non-gamers group, a mix of girls and boys in the moderate group, and mostly boys in the frequent gamers group, which is in line with previous studies. Interestingly, participants with a first language other than Swedish were overrepresented among the frequent gamers – a finding which calls for more research. As for motivation and self-assessed English ability, the values were high across all groups. Finally, regarding the self-reported strategies, code-switching to one’s first language was more common among the non- and moderate gamers than among the frequent gamers.
Access the entire article ‘Language-related computer use: Focus on young L2 English learners in Sweden’ without charge until 30th June 2014
Post written by Hilary Nesi based on a recent article in Language Teaching
Almost everyone uses dictionaries, and in order for them to function most effectively we need to learn how best to consult them, and dictionary-makers need to learn about our consultation needs.
These two topics are the foci of research into dictionary use, but they are complicated by the fact that there are many different types of dictionary user, consulting dictionaries in different contexts, for different purposes, and with differing levels of knowledge and expertise. Moreover, although the research area is still relatively young (very few empirical studies were conducted before the 1980s), it spans a period of great technological change and has experimented with a range of methodologies. For these reasons, studies purporting to address similar research questions have sometimes arrived at rather different conclusions.
The Research Timeline ‘Dictionary use by English language learners’ is my attempt to trace the developments in the study of dictionary use that are of greatest relevance to ELT, and to identify broad areas of agreement amongst the research findings. Many of the earliest studies were questionnaire-based, and sought information directly from users regarding the dictionaries they owned, their preferences and their consultation strategies. The reliability of some of the survey data has been called into question, however, because although questionnaire respondents usually find it easy to answer factual questions about dictionary ownership, it is hard for them to recall the precise details of their previous dictionary consultations, and tempting for learners to report consultation strategies that their teachers might approve of, rather than the more messy reality of dictionary use. Thus questionnaire-based studies tend to have been replaced by studies that examine dictionary use during some kind of language activity, using as data test scores, task outcomes, the written or oral protocols of participants and/or, in the most recent studies, log files.
The rapid rise of the online dictionary has made dictionary ‘ownership’ a thing of the past for many users, and recent dictionary user research has tended to be less concerned with the dictionary as a commercial product, and more with the processes of dictionary consultation. A recurring theme in the research findings has been the problem of mis-selection and misinterpretation of dictionary information, and one strand of research has examined the extent to which additional annotation to the dictionary entry (in the form of ‘menus’ and ‘signposts’) can help learners select the most appropriate subentries for the tasks they have in hand. Another very recent experimental approach has appropriated eye-tracking technology to investigate how users visually navigate dictionary entry information. Experimental designs are becoming more rigorous, and there are a growing number of replication studies seeking to resolve apparent differences in research results, and explore their causes more deeply.
Ideally, successful dictionary consultation should barely interrupt whatever language activity we are engaged in. Research into dictionary use aims to help lexicographers, learners and teachers achieve this ideal.
Read the entire article ‘Dictionary use by English language learners’ without charge until 30th June 2014.
Post written by Alan Waters based on a recent article in Language Teaching
In recent decades, language teaching has experienced an apparently unending stream of major innovations, such as (to name but a very few), the birth of the communicative approach in the 1980s, the promulgation of the ‘learner-centred approach’ in the 1990s, and, in the current age, the promotion of ‘task-based learning’, ‘e-learning’, ‘English as an international language’, and so on. The tide shows no signs of abating: it is as if something of a ‘pro-innovation bias’ has taken hold, i.e., a widespread consensus that new ideas should and can be adopted as widely as possible, that the changes they entail are inevitably beneficial, and that putting them into practice is a relatively straightforward matter.
However, a small but steadily growing body of research literature has shown that many language teaching innovations have frequently fallen short of the mark, both in terms of impact and the desirability of their consequences. The same body of work has also shown that a major cause of these problems has been a widespread failure to understand and utilize the lessons of innovation theory.
My paper – ‘Managing innovation in English language education: A research agenda’ – therefore sets out to show how this body of research might be profitably built upon. It does so by first of all focusing in turn on each of the main stages in the innovation process – initiation, implementation, and institutionalization (sustainability) – and explaining the nature of areas of innovation theory of relevance to each and how such ideas have already been used in research. I then go on to outline what a typical practical research project involving the further application of each of these concepts might constitute. Next, I look at a number of further areas of innovation theory which have so far not been applied to ELT-based innovation research. I once again describe the ideas and then also outline how they might be used in a series of straightforward research studies.
Finally, I also identify a number of areas of ELT innovation activity where research has been under-represented or not undertaken at all, such as those involving certain geographical locations, private-sector projects, ‘successful’ innovations, and so on. Once again, discussion of each of the areas is accompanied by suggestions for how (further) research might be conducted into them.
It is hoped that, through a greater amount of research activity of these kinds, the knowledge-base needed for a sounder and more successful approach to innovation in language teaching will be strengthened and expanded.
Read the entire paper without charge here until 30th June 2014.
Blog post written by Lara Pierce based on an article published in Journal of Child Language
Internationally adopted (IA) children face a unique language learning situation in that they are exposed to one language from birth, but this language is discontinued at the point of adoption in favour of the language spoken by their adoptive family. IA children’s language environment resembles that of monolingual first language (L1) learners in that they receive the majority of their input in only one language. They quickly lose any functional abilities in their birth language (within the first year or less) and typically become monolingual speakers of their adoptive language. However, their language experience also shares similarities with that of child second language (L2) learners, as they experience both a delay in the onset of acquisition of their adoptive language (typically ranging from about 6 months to 2 years) and exposure to another language. IA children are thus interesting from a linguistic perspective in that they provide a unique natural experiment for addressing issues relating to early language delay as well as early exposure to two languages. Much of the previous work examining IA children’s language acquisition comes from a clinical perspective, using standardized tests and general measures of language ability to show that the majority of IA children “catch up” to their age-mates relatively quickly following adoption (although in some respects they appear to show delays, even into the school years). The aim of the present study was instead to examine the acquisition of specific linguistic elements in IA children’s language, comparing them to typical L1 and child L2 acquisition patterns. In this way it was possible to address some interesting theoretical questions about the early period of language acquisition.
Thus, we longitudinally examined the acquisition of grammatical morphology by IA children (adopted from China at 10–13 months of age) in a way that allowed them to be compared to the typical acquisition patterns of L1 and child L2 learners. While these two groups share some similarities in the way they acquire this morphology, they also display notable differences. Specifically, child L2 learners: 1) acquire the morpheme “BE” early, along with non-tense rather than tense-marking morphemes, and 2) show elevated rates of commission errors (i.e., replacing one grammatical morpheme with another) as opposed to omission errors. We could thus examine the patterns observed in the IA children’s acquisition over time to determine whether their development mapped onto either pattern. Our data showed that, during 5 sessions ranging from 9 to 34 months post adoption, the IA children acquired grammatical morphemes in a manner similar to L1 learners, and this was evident in both spontaneous and elicited speech. Specifically, they 1) acquired “BE” within the same timeframe and along the same trajectory as other tense-marking morphemes, which was slower and less accurate than for non-tense-marking morphemes, and 2) showed a high percentage of omission and a low percentage of commission errors, consistent with the pattern observed for L1 learners. Thus, despite the delayed onset of exposure to English, they appeared to acquire their “second first language” in a manner similar to typical monolingual language learners.
Read the entire article ‘Acquisition of English grammatical morphology by internationally adopted children from China’ written by Lara J. Pierce, Fred Genesee and Johanne Paradis here
Bilingualism: Language and Cognition (BLC) is now in its seventeenth year and has become the leading journal in its field, enjoying a steady increase in readership and submissions. The 2012 Impact Factor mirrors this upsurge of interest: at 2.229, it ranks BLC 5th out of 160 journals in linguistics and 27th out of 83 experimental psychology journals.
Starting from 2014, a new editorial team will officially be in charge of managing BLC. The new team consists of the two new editors-in-chief, Jubin Abutalebi and Harald Clahsen. The two new editors-in-chief have different academic backgrounds that reflect the breadth of research to be covered by BLC: Dr. Abutalebi mainly in (cognitive) neuroscience and Dr. Clahsen mainly in (psycho)linguistics. The editors-in-chief are assisted by four associate editors, Debra Jared, Robert de Keyser, Ludovica Serratrice and Natasha Tokowicz, and two editorial assistants, Clare Patterson and Lucia Guidi.
Authors will notice changes to the submission and reviewing procedures. To make more efficient use of the limited space in BLC and to reduce the workload for our reviewers, the new editorial team has introduced strict length limitations for new submissions. The editors would also like to highlight that Research Notes are particularly appropriate for the rapid dissemination of new findings and ideas, as final decisions on Research Notes will be taken no later than six weeks after submission, normally after only one round of reviewing.
The new editorial team has also introduced a two-stage reviewing process. The first stage consists of an in-house review aimed at triaging and returning any unsuitable manuscripts within two weeks of submission. Papers that are deemed suitable in terms of content and quality enter the second stage and go out for external review.
Last but not least, readers will notice an immediately visible change to BLC: the new cover! BLC gets a fresh look, and the editors underline that the new cover reflects the true essence of BLC: the representation and processing of bilingualism and multilingualism in the individual.
You can view the full Editorial Board and Instructions for Contributors on the journal’s homepage.
Post written by Jan H. Hulstijn, based on an article in Language Teaching
The second language acquisition (SLA) field is characterized by a wide variety of issues and theoretical perspectives. Is this a bad thing? Are there signs of disintegration?
In applied linguistics in general, and in particular in the field of SLA, it is not uncommon to distinguish between quantitative and qualitative approaches or between cognitive and socio-cultural approaches. In my view, what is potentially more threatening to the field than a split between quantitative and qualitative subfields is the proportion of nonempirical theories. If an academic discipline is characterized by too many nonempirical ideas and too few empirical ideas, it runs the risk of losing credit in the scientific community at large (and in society).
In this paper, I propose to distinguish, instead, between theories formulated in a way that allows empirical testing and theories that are not, or not yet, empirical in this sense. I am not advocating banishing all nonempirical ideas from the SLA field, but what would really make the field more transparent for both SLA-ers and outsiders is if scholars who propose theories were to indicate to what extent their theory is ready for empirical scrutiny. It does not matter whether the field of SLA is inhabited by many theories. However, it would be a good thing if we viewed the field not only in terms of the ‘issues’, as do most of the textbooks, but also in terms of their empirical or nonempirical status. This would also help us gain a better view of the agenda of our discipline.
For this purpose, I provide a list of theory-classification criteria. Sticking my neck out, I categorize a number of theories as having a more or less falsifiable status. While welcoming theories not yet ready for empirical falsification, I also express my concern about the possibility that nonempirical theories may outnumber the theories that lend themselves to falsification.
Access the full article without charge until January 31st 2014 here.
Post written by Dr. Lei Xuan and Dr. Christine Dollaghan based on an article in Journal of Child Language
Our research addressed questions about the kinds of words that appear in the early vocabularies of bilingual children. Evidence from some languages, including English, has shown that young children acquire words for people and things before words that label actions and attributes or words that have grammatical functions. However, the hypothesis of a universal preference for nouns (i.e., a “noun bias”) in early lexical development has been challenged by studies suggesting that children acquiring languages such as Korean and Mandarin Chinese may show a weaker preference for nouns.
We used a unique research design to examine the extent of noun bias in 50 bilingual toddlers who were simultaneously acquiring English and Mandarin, two strikingly different languages that are believed to fall near the extremes of the noun bias continuum. By studying noun bias within each child’s English and Mandarin vocabularies we hoped to minimize the threat of confounding due to individual differences in cognitive and sociodemographic factors that could affect the noun preference. By focusing on children whose parent-reported vocabularies in both English and Mandarin fell between 50 and 300 words we hoped to control for variations in noun bias at different vocabulary sizes. By recruiting 50 children, we ensured that statistical power was adequate for our analyses. Our objective was to provide the clearest test to date of the hypothesis that the degree of noun bias differs in these two languages. Specifically, we hypothesized that the mean percentage of nouns in English would exceed the mean percentage of nouns in Mandarin by at least 15%, a value selected based on a synthesis of evidence from monolingual children in five languages.
Our results showed a mean difference of 16% between the percentages of English and Mandarin nouns, providing evidence that the preference for nouns was greater in these children’s English than in their Mandarin vocabularies. Although nouns predominated in both the total vocabulary and the 50 most frequently produced words in both languages, the most frequent 50 words in these children’s English vocabularies included substantially more nouns and substantially fewer verbs than did the most frequent words in their Mandarin vocabularies.
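The within-child comparison at the heart of this design can be sketched as follows. The vocabularies below are invented for illustration only (they are not the study’s data); each hypothetical child contributes one paired difference, which is what controls for child-level cognitive and sociodemographic factors:

```python
from statistics import mean

# Hypothetical per-child vocabularies: (nouns, total words) in each language.
# As in the study's inclusion criterion, each reported vocabulary falls
# between 50 and 300 words.
children = [
    {"en": (60, 100),  "zh": (44, 100)},
    {"en": (130, 250), "zh": (96, 240)},
    {"en": (45, 80),   "zh": (30, 75)},
]

def noun_pct(nouns: int, total: int) -> float:
    """Percentage of nouns in one language's reported vocabulary."""
    return 100 * nouns / total

# Within-subject design: one English-minus-Mandarin noun-% difference per child.
diffs = [noun_pct(*c["en"]) - noun_pct(*c["zh"]) for c in children]
print(f"mean English-Mandarin noun-% difference: {mean(diffs):.1f}")
```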
The findings converge with previous findings from monolingual children and suggest that not only universal cognitive and perceptual factors but also cross-linguistic variations in language input should be considered in understanding the composition of early vocabularies. The within-subject bilingual design is likely to be a fruitful approach to understanding the influences on children’s lexical development.
Access the full article without charge until January 31st 2014 here.
by Louise Cummings
Nottingham Trent University, UK
As academic researchers, linguists are increasingly being asked to demonstrate the impact of their work on the lives of individuals and on the growth of national economies. There is one field within linguistics where impact is more readily demonstrated than in any other. This is the study of the many ways in which language and communication can break down or fail to develop normally in children and adults with communication disorders. These disorders are the focus of a recently published handbook, the Cambridge Handbook of Communication Disorders, which brings together 30 chapters on all aspects of the classification, assessment and treatment of communication disorders. The chapters in this volume will speak for themselves. My purpose in this short extract is to demonstrate how, in an age of impact, the case for the academic study and clinical management of communication disorders could not be more persuasive.
I begin by revisiting a quotation which I included in the preface to the handbook. It is a comment which was made in 2006 by Lord Ramsbotham, the then Chief Inspector of Prisons in the UK. He remarked: ‘When I went to the young offender establishment at Polmont, I was walking with the governor, who told me that if, by some mischance, he had to get rid of all his staff, the last one out of the gate would be his speech and language therapist’. This statement focuses attention quite forcefully on an issue which clinicians and educationalists have known for years: the remediation of impoverished language and communication skills can have a significant, positive impact on one’s life chances and experiences in a range of areas. These areas include social integration, psychological well-being and occupational and educational success. Conversely, the neglect of language and communication impairments presents a significant barrier to academic achievement, vocational functioning and social participation. The area of professional practice which aims to mitigate these harmful consequences of communication disorders – speech and language therapy (UK) or speech-language pathology (US) – has played an increasingly important role in recent years in raising awareness of these disorders. That increased awareness has been felt not just among members of the public in the form of greater tolerance and understanding of communication disorders, but also in policy areas which have the power to transform the provision and delivery of speech and language therapy services.
“It is clear that a society which neglects communication disorders among its citizens can expect to sustain significant economic harm”.
If the human impact of communication disorders does not persuade the reader of the merits of this area of academic and clinical work, then perhaps the economic implications of these disorders will make the case even more convincingly. A report [1] commissioned by the Royal College of Speech and Language Therapists in the UK and published in 2010 found that speech and language therapy across aphasia, specific language impairment and autism delivers an estimated net benefit of £765 million to the British economy each year. In 2000, the economic cost of communication disorders in the US was estimated to be between $154 billion and $186 billion per year, equal to 2.5% to 3% of the Gross National Product [2]. It is clear that a society which neglects communication disorders among its citizens can expect to sustain significant economic harm. This is in addition to the abdication of any social responsibility for the welfare of its people.
[1] Marsh, K., Bertranou, E., Suominen, H. and Venkatachalam, M. (2010) An Economic Evaluation of Speech and Language Therapy. Matrix Evidence.
[2] Ruben, R.J. (2000) ‘Redefining the survival of the fittest: Communication disorders in the 21st century’, Laryngoscope, 110 (2 Pt 1): 241–245.
The Cambridge Handbook of Communication Disorders is now available from Cambridge University Press.
Posted on behalf of Xavier Gutiérrez
Xavier Gutiérrez is an assistant professor of Applied Linguistics and Spanish at the University of Alberta in Canada. His latest article, published in Studies in Second Language Acquisition (Cambridge University Press), can be accessed free of charge until October 21, 2013.
Researchers in the field of second language acquisition (SLA) have long been interested in finding out what type of mental representations of linguistic knowledge second language learners develop and how. In short, knowledge of language may be represented in two ways: as implicit, unconscious knowledge—the type usually involved in spontaneous language use such as casual conversations—or as explicit, conscious knowledge, which is involved in more controlled uses of language such as writing. Progress in this area has arguably been slow mainly due to challenges in obtaining valid and reliable measures of implicit and explicit knowledge.
One of the most popular instruments used in SLA to measure linguistic knowledge is the grammaticality judgment test (GJT). GJTs typically consist of a number of grammatical and ungrammatical sentences, and learners are asked to indicate which ones are correct and which ones are not. Additionally, learners are sometimes asked to identify the error, correct it, and/or describe the grammatical rule violated in the sentence. In GJTs in which learners are only asked to determine the grammaticality of the sentences, there are still questions as to which type of knowledge the tests actually measure.
In recent years, several studies have used factor analysis to determine the validity of measures of implicit and explicit knowledge. Regarding GJTs, these studies have found that tests in which learners have time constraints to judge the sentences (i.e., timed GJTs) constitute measures of implicit knowledge, whereas tests without time limits (i.e., untimed GJTs) are measures of explicit knowledge. Additionally, some studies have noted that, in untimed GJTs, only ungrammatical sentences actually measure explicit knowledge. The study reported in this article takes this issue a step further and examines differences between both types of task stimuli (i.e., grammatical and ungrammatical sentences) in timed and untimed GJTs. The results of the study show that there are statistically significant differences between the learners’ responses to grammatical and ungrammatical sentences in both types of tests and that such differences can be interpreted as learners resorting to their implicit knowledge when judging grammatical sentences and to their explicit knowledge when judging ungrammatical ones. Furthermore, it was found that both time pressure and task stimulus have a significant effect on the learners’ performance on the GJTs. Given the popularity of GJTs in SLA, this study makes a potentially meaningful contribution to the debate on measures of implicit and explicit knowledge.
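Since analyses of this kind score grammatical and ungrammatical stimuli separately, the scoring step can be sketched roughly as follows; the response records and field names are invented for illustration and are not taken from the study:

```python
# Each response records whether the sentence was actually grammatical and
# whether the learner judged it to be grammatical (i.e., "correct").
responses = [
    {"grammatical": True,  "judged_grammatical": True},   # correctly accepted
    {"grammatical": True,  "judged_grammatical": False},  # grammatical sentence rejected
    {"grammatical": False, "judged_grammatical": False},  # error correctly detected
    {"grammatical": False, "judged_grammatical": True},   # error missed
]

def accuracy(items, grammatical: bool) -> float:
    """Proportion of accurate judgments on one stimulus type
    (grammatical=True or ungrammatical=False sentences)."""
    subset = [r for r in items if r["grammatical"] == grammatical]
    correct = sum(r["judged_grammatical"] == r["grammatical"] for r in subset)
    return correct / len(subset)

print("grammatical items:  ", accuracy(responses, True))
print("ungrammatical items:", accuracy(responses, False))
```

Computing the two accuracies separately (and doing so for both timed and untimed administrations) is what allows stimulus type and time pressure to be treated as distinct factors in the analysis.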