Blog post written by Michal Icht and Yaniv Mama based on an article in the Journal of Child Language
How can a parent, a dedicated teacher or a speech-language pathologist improve a child’s memory performance? How can we help a child better remember study material, such as new vocabulary?
A promising and straightforward technique may be simply saying the relevant material aloud. This simple method is based on the ‘Production Effect’ in memory: a memory advantage (of about 20%) for words that are read aloud over words that are read silently. Reading aloud has also been found to enhance memory for other types of material, such as sentences (text), and has proved useful for students and older adults.
Although many other types of mnemonic have been suggested in the literature (e.g., using acronyms), the Production Effect seems especially appropriate for young children. Saying words aloud is simple (it does not involve literacy skills) and can be easily applied in many educational settings and contexts (it does not require special equipment). The present study is the first investigation of the Production Effect in pre-school children (five-year-olds). As this population cannot read, we used pictures of objects as stimuli for the first time. Will saying the object names aloud improve their memory?
In the first experiment, we used pictures of familiar objects (e.g., ball, teddy-bear, t-shirt). The children were presented with the pictures and were asked to memorize them. A third of the words were studied by looking at the picture, another third by looking and listening to the experimenter saying the word (the object name), and the remaining third by looking and saying the word aloud. As expected, words that were vocally produced were better recalled (29%) than the heard words (21%) and the “silent” words (14%) – a significant Production Effect!
In order to suggest the Production Effect as an effective and valid learning method, we wanted to demonstrate that it occurs during the acquisition of new vocabulary. Hence, in the second experiment we used pictures representing novel, unfamiliar words (rare objects such as anchor, manger, cuff, pestle). Our five-year-old participants learned these rare words by looking at the pictures and listening to the experimenter saying each word twice, or by looking, listening to the experimenter saying the word once, and vocally repeating it once. The results showed better memory (recognition rates) for words said aloud (54%) relative to heard words (40%). These results support the Production Effect as a prominent memory and learning tool, even for pre-school children. Vocalizing may serve as a mnemonic that helps learners improve their memory for new concepts.
We invite you to read the full article ‘The production effect in memory: a prominent mnemonic in children’ here
Blog post written by Judith A. Gierut based on an article in the latest issue of the Journal of Child Language
It has long been thought that children’s acquisition of the sound system of a language follows directly from lexical learning. Indeed, some words are better than others in promoting mastery of new sounds and generalized productive use of those sounds across the lexicon. In particular, rhyming words (dubbed lexical neighbors) provide distinct advantages to phonological learning, but the learning mechanism responsible for the effect is not well understood. Some suppose that rhyming words afford a naturalistic case of long-term auditory priming, such that repeated exposure to similar sounding words of the input enhances phonemic distinctiveness. Others suggest that rhyming words benefit phonological working memory, such that exposure to similar sounding words helps retention of sounds and sound sequences.
These hypotheses take on added intrigue when considered relative to the population of children with phonological disorders, which happens to be the most prevalent language learning disability of childhood. We wondered whether it might be possible to take advantage of rhyming words to jumpstart phonological learning for these children, and in the process, to disambiguate hypotheses about relevant learning mechanisms.
Two intervention studies were conducted, enrolling preschool children with phonological disorders. We crafted two sets of illustrated stories: one composed of rhyming words, akin to Dr. Seuss books, and a second using the same illustrations but composed of non-rhyming words that were phonologically unrelated. In treatment, children were exposed either to the stories with rhyming words or to those with non-rhyming words. In Study 1, stories were presented before teaching production of sounds in error, as a test of the priming hypothesis. In Study 2, stories were presented after teaching production of sounds in error, as a test of the phonological working memory hypothesis. Results showed that rhyming words promoted greater phonological learning than non-rhyming words, but only when stories preceded production training. The magnitude of phonological gain was on the order of 2:1. By comparison, there was little phonological learning and no differential effect when rhyming or non-rhyming words were presented after production training. The findings are consistent with priming as a mechanism of learning, with benefits to phonological acquisition. There is also new promise for the treatment of children with phonological disorders, in that rhyming words may be employed before production training to advance broader phonological gains.
We invite you to read the full article ‘Dense neighborhoods and mechanisms of learning: evidence from children with phonological delay’ here
Blog post written by Yellowlees Douglas, author of The Reader’s Brain: How Neuroscience Can Make You a Better Writer
Journalists, particularly those writing for American audiences, practically have transitions drilled into their heads from their first forays into writing for the public. Where’s your transition? their editors persist, as they linger over each sentence. However, those editors and newsroom sages handed on advice with well-established roots in psycholinguistics—and with particularly striking benefits for the reading public. I explore what linguistics, psychology, and neuroscience can teach us about writing in my forthcoming The Reader’s Brain: How Neuroscience Can Make You a Better Writer. And using an abundance of transitions is perhaps the simplest advice you can follow to make your writing easy to read, in addition to bolstering your readers’ speed and comprehension of even complex, academic prose.
As a species, we evolved to learn from observing cause and effect—and from making predictions based on those observations. For example, your everyday survival relies on your ability to predict how the driver to your right will behave on entering a roundabout, just as we predict hundreds of events that unfold in our daily lives, all of which dictate our behavior. But we feel relatively minimal cognitive strain from all these predictions, mostly made without any conscious awareness, because we can make predictions based on prior experience. We expect the familiar.
Similarly, in reading, we expect sequential sentences to relate to one another. Most writers assume that their readers see the ideas represented in one sentence as inherently connected to those in the preceding sentence. But sentences can become islands of meaning, especially when writers fail to provide explicit linguistic cues that tell readers how one sentence follows from another.
Take, for example, your typical university mission statement, the kind invariably featured in American university catalogues and websites:
Teaching—undergraduate and graduate through the doctorate—is the fundamental purpose of the university. Research and scholarship are integral to the education process and to expanding humankind’s understanding of the natural world, the mind and the senses. Service is the university’s obligation to share the benefits of its knowledge for the public good.
Chances are, even if someone offered you the lottery jackpot for recalling this content in a mere half-hour, you’d fail—at least, not without some serious sweat put into rote memorization. Why? Despite the mission statement containing a mere three sentences, nothing connects any sentence to the others—aside from the writer’s implicit belief that everyone knows that universities focus on teaching, research, and service. Unfortunately, only an academic would understand that research, teaching, and service form the bedrock of any research university. As a result, we can safely guess that the writer was an academic. Sadly, the actual audience for the mission statement—the family members tendering their retirement savings or mortgaging the house for tuition—fails to see any connections at all. As studies documented as early as the 1970s, readers read these apparently disconnected sentences more slowly and with greater activity in the parts of the brain dedicated to reading. They also show poorer recall of sentences lacking any apparent logical or referential continuity.
Because prediction is the engine that enables readers’ comprehension, transitions play a vital role in enabling us to understand how sentences refer to one another. In fact, certain types of transitions—particularly those flagging causation, time, space, protagonist, and motivation—bind sentences more tightly together. When you use as a result, thus, then, because, or therefore, your reader sees the sentence she’s about to read as causally related to the sentence she’s just read. Moreover, when writers place transitions early in sentences, prior to the verb, readers grasp the relationship before they finish making predictions about how the sentence will play out. These predictions stem from our encounters with tens of thousands of sentences we’ve previously read. But put the transition after the verb, and your readers have already completed the heavy lifting of prediction. Or, worse, they’ve made the wrong predictions and need to reread your sentences.
You might think that a snippet like too or also or even flies beneath your readers’ radar. Think again. Transitions are your readers’ linguistic lifelines, linking sentences and ideas smoothly together and making your writing easy to understand and recall. You can discover more about not only transitions but also how your readers’ brains work through every facet of your writing—from the words you choose to the cadence of your sentences—in The Reader’s Brain: How Neuroscience Can Make You a Better Writer.
Blog post written by Terttu Nevalainen based on an article in the latest issue of English Language and Linguistics
Social media and mobile technology have increased and accelerated human interconnectedness and social networking on a global scale. It is a common observation that new words and expressions travel fast in these networks. But our primary medium of communication is still spoken interaction, and this is how language is transmitted to the next generation. Linguists have long been interested in the influence of social networks on language learning, use and, ultimately, on language change. They have shown how people either tend to maintain the language they once acquired or become more apt to change it depending on the kinds of social network relationships they have contracted. Now, two questions intrigue the language historian. First, is it possible to gain access to the social networks of people who lived centuries ago and, second, to the extent that it is, could such networks perhaps be used to trace language change?
Present-day sociolinguistic studies approach these issues by using various methods first to chart network structures and their content, and then record and analyse the network members’ language use in different contexts. Historical studies present particular challenges in both respects. My article highlights the role of place in historical social network research. I begin by discussing the kinds of data – official documents, personal letters and diaries – that social historians have used in reconstructing past communities and social networks. I suggest that these analyses can be enriched by adding linguistic data, and that language historians’ findings on linguistic change may often be interpreted in terms of social networks.
Focusing on Early Modern London, I present two case studies. The first one investigates a 16th-century merchant family exchange network, active in London, Calais and Antwerp, in an analysis based on their extensive family correspondence. Personal letters are supplemented with the wealth of information provided by a private diary in the second case study of the 17th-century naval administrator Samuel Pepys. Pepys’s role as a community broker between the commercial City of London and the Royal Court at Westminster is assessed in linguistic terms. My results show how identifying the leaders and laggers of linguistic change can add to our understanding of the varied ways in which linguistic innovations spread to and from Tudor and Stuart London both within and across social networks.
We invite you to read the full article ‘Social networks and language change in Tudor and Stuart London – only connect?’ here
Blog post written by Liz Morrish, co-author of Exploring Language and Linguistics
When we contemplated producing a new introductory textbook in Linguistics, we wanted to offer students something different. Engagement and learning gain are hot topics in higher education circles at the moment, and we feel this book is ahead of the curve. Introductory textbooks can sometimes leave the curious student unsatisfied. They can open up a subject, and then leave the reader wondering where to go next. We decided that students should begin their experience of linguistics with high-quality chapters written by internationally-recognized experts in each of the different fields. The authors have been selected for their experience in writing for an introductory undergraduate audience, to present each sub-discipline of linguistics in an accessible manner. Universities should offer research-led teaching right from day one, and we wanted to capture that aspiration in this textbook.
We also wanted to make sure that students were as engaged by theoretical chapters as by chapters in applied linguistics. To ensure this, we asked authors to structure their chapters around text-box summaries and frequent exercises (yes, the answers are in the back of the book). There is also an interactive website to support the book, with even more exercises for students to confirm their understanding and get feedback. In response to an excellent suggestion by a reviewer, we have also included a group exercise for each chapter.
We were aware that linguistics courses in the US tend to emphasize more structural approaches (phonology, syntax etc.), while those in the UK feature more applied and discourse analytical approaches. In the introductory module which we as editors have co-taught for many years, we have always treated these two approaches equally. We know that students need a thorough grounding in the levels of linguistic description and the tools of linguistic analysis before they are fully prepared to progress to more advanced courses and apply their learning to real-world settings.
To give some examples of how we offer students engaging and challenging exercises:
The phonetics chapter explains the articulation of consonants and vowels, and leads students to a group exercise in making sociophonetic observations. Students can then confirm their understanding in the sociolinguistics chapter, where the group exercise asks them to make judgements drawing on concepts in phonology, grammar, lexis and discourse while investigating data from the archive of the British Library’s website Sounds Familiar? The language and ideology chapter introduces students to analytical techniques which uncover ideologies in texts and their relationship to power structures. In the web exercise on language and the media, groups of students are invited to bring these concepts to an examination of a news organisation’s website and to evaluate critically the meanings inherent in choices of language, attribution and even pictures as they affect the reading of stories.
It could be argued that the authors of the structural chapters have had a tougher challenge in engaging students, but this has been fully met with some excellent resources and exercises:
The syntax chapter invites students to solve problems by playing with word order in noun phrases; the pragmatics chapter presents data of children with pragmatic disorders so that students can use concepts such as presupposition to diagnose clinical problems; the semantics chapter requires students to question the basis of antonymy and contrast in the lexicon.
This book is fascinating and accessible. It will structure the learning of all students, and extend the conceptual abilities of the most able. We are definitely expecting to see great results in our own modules.
Find out more about this textbook written by Natalie Braber, Liz Morrish & Louise Cummings here
Blog post written by Hilde van Zeeland, winner of the 2014 Christopher Brumfit Award
Most L2 vocabulary research has focused on learners’ knowledge of written, rather than spoken, words. In my thesis, I identified and addressed two gaps in the field: 1) how many spoken (versus written) words L2 learners know, i.e. their vocabulary knowledge in listening, and 2) how successful learners are at learning new words from spoken input, i.e. their vocabulary knowledge from listening.
The first two studies from my thesis (one published, one under review) focused on vocabulary knowledge in listening. Little is known about how many words learners know when they hear them in their spoken form, and in particular, whether knowledge found on written tests (e.g. the VLT, VST, and the Yes/No test) is also available to learners when they listen to continuous speech. I compared learners’ knowledge of isolated written words with their knowledge of spoken words in isolation as well as in sentence contexts. When learners saw/heard words in isolation, they showed slightly better knowledge of written than spoken vocabulary. Interestingly, regarding spoken vocabulary, learners often failed to recognise words in continuous speech that they did demonstrate knowledge of when they heard them in isolation. This indicates that results from tests with isolated word forms (whether written or spoken) might overestimate the knowledge learners actually have at their disposal while listening. For pedagogical purposes, this means we should be careful when selecting listening materials based on results from such vocabulary tests (e.g. by means of lexical coverage calculations).
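To make the idea of a lexical coverage calculation concrete, here is a minimal sketch in Python. Coverage is simply the proportion of running words in a text or transcript that fall within a learner’s assumed known vocabulary; very high coverage (figures of 95–98% are often cited) is usually wanted before a listening text is judged suitable. The word list, passage and function name below are purely illustrative and are not taken from the thesis or from any particular vocabulary test.

```python
# Minimal sketch of a lexical coverage calculation (illustrative only).
# "Coverage" = share of running words (tokens) in a transcript that appear
# in a list of words the learner is assumed to know.
import re

def lexical_coverage(transcript, known_words):
    """Return the proportion of tokens in `transcript` covered by `known_words`."""
    tokens = re.findall(r"[a-z']+", transcript.lower())
    if not tokens:
        return 0.0
    covered = sum(1 for token in tokens if token in known_words)
    return covered / len(tokens)

# Hypothetical example: a tiny "known vocabulary" and a short passage.
known = {"the", "cat", "sat", "on", "a", "mat", "and", "looked", "at"}
passage = "The cat sat on the mat and looked at a magpie."
print(f"Coverage: {lexical_coverage(passage, known):.0%}")  # 'magpie' is the only unknown token
```

In practice such calculations are run against large frequency-based word lists rather than a handful of items, but the arithmetic is the same.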
The third and fourth studies (both published) focused on vocabulary knowledge from listening. The third study assessed L1 and L2 listeners’ success in inferring word meanings from context, and explored the effect of three variables that have been found to affect inferencing success in reading: background knowledge, clue type, and vocabulary knowledge. Results showed that these variables had the same effect in listening. This suggests that, regardless of the input modality, it is advisable to control for these variables when carrying out lexical inferencing tasks, especially where the aim is to learn new vocabulary. The fourth study measured L2 listeners’ incidental vocabulary acquisition, exploring their learning of words’ meaning, form and grammatical function. Although learners acquired some knowledge types more quickly than others, they did not build durable knowledge of any of them, even after having heard the target words 15 times. This indicates that spoken input alone is not very effective for vocabulary learning, and that some sort of input enhancement might be appropriate.
Together, these studies emphasise the importance of further examining the construct of spoken vocabulary knowledge, as well as its acquisition. Although the vocabulary-listening domain is growing, it remains an under-researched area. I hope these studies will further encourage researchers to explore spoken vocabulary knowledge – both in and from listening.
Congratulations to Hilde on winning this prestigious award.
You can discover more about the Christopher Brumfit PhD/Ed.D. Thesis Award 2015 here.
30th November 2015 – Deadline for receipt of summary and abstract, and official proof of thesis acceptance
In his latest Industry Watch column, Robert Dale, Chief Technology Officer for Arria NLG, takes a look at what’s on offer in the NLP microservices space, reviewing five SaaS offerings as of June 2015.
Below is an extract from the column:
With NLP services now widely available via cloud APIs, tasks like named entity recognition and sentiment analysis are virtually commodities. We look at what’s on offer, and make some suggestions for how to get rich.
Software as a service, or SaaS – the mode of software delivery where you pay a monthly or annual subscription to use a cloud-based service, rather than having a piece of software installed on your desktop – just gets more and more popular. If you’re a user of Evernote or CrashPlan, or in fact even Gmail or Google Docs, you’ve used SaaS. The biggest impact of the model is in the world of enterprise software, with applications like Salesforce, NetSuite and Concur now part of the furniture for many organisations. SaaS is big business: depending on which industry analyst you trust, the SaaS market will be worth somewhere between US$70 billion and US$120 billion by 2018. The benefits from the software vendor’s point of view are well known: you only have one instance of your software to maintain and upgrade, provisioning can be handled elastically, the revenue model is very attractive, and you get better control of your intellectual property. And customers like the hassle-free access from any web-enabled device without setup or maintenance, the ability to turn subscriptions on and off with no up-front licence fees, and not having to talk to the IT department to get what they want.
The SaaS model meets the NLP world in the area of cloud-based microservices: a specific form of SaaS where you deliver a small, well-defined, modular set of services through some lightweight mechanism. By combining NLP microservices in novel ways with other functionalities, you can easily build a sophisticated mashup that might just net you an early retirement. The economics of commercial NLP microservices offerings make these an appealing way to get your app up and running without having to build all the bits yourself, with your costs scaling comfortably with the success of your innovation. So what is out there in the NLP microservices space? That early retirement thing sounded good to me, so I decided to take a look. But here’s the thing: I’m lazy.
I want to know with minimal effort whether someone’s toolset is going to do the job for me; I don’t want to spend hours digging through a website to understand what’s on offer. So, I decided to evaluate SaaS offerings in the NLP space using, appropriately, the SAS (Short Attention Span) methodology: I would see how many functioning NLP service vendors I could track down in an afternoon on the web, and I would give each website a maximum of five minutes of exploration time to see what it offered up. If after five minutes on a site I couldn’t really form a clear picture of what was on offer, how to use it, or what it would cost me, I would move on. Expecting me to read more than a paragraph of text is so Gen X.
Before we get into specifics, some general comments about the nature of these services are in order, because what’s striking is the similarities that hold across the different providers. Taken together, these almost constitute a playbook for rolling out a SaaS offering in this space.
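As a rough illustration of what consuming one of these microservices typically involves on the developer’s side, here is a minimal sketch of a single REST call that requests named entities and sentiment for a piece of text. The endpoint URL, API key, request fields and response structure are hypothetical placeholders rather than any particular vendor’s API; real providers differ in their paths, authentication and payload formats.

```python
# Minimal sketch of calling a cloud NLP microservice over HTTPS.
# The endpoint, key, and JSON fields are hypothetical placeholders,
# not any specific provider's API.
import requests

API_URL = "https://api.example-nlp.com/v1/analyze"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"                            # issued by the vendor on sign-up

def analyze(text):
    """Send text to the (hypothetical) service and return its JSON analysis."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"text": text, "features": ["entities", "sentiment"]},
        timeout=10,
    )
    response.raise_for_status()  # surface authentication or quota errors
    return response.json()

if __name__ == "__main__":
    result = analyze("Cambridge University Press is based in Cambridge, UK.")
    print(result.get("entities"))   # e.g. organisations and locations found in the text
    print(result.get("sentiment"))  # e.g. a polarity score for the text
```

Even in a toy like this, the appeal is visible: the NLP heavy lifting sits behind one HTTP call, billing scales with the number of such calls, and the rest of a mashup can be assembled around it.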
Read the rest of the article, including reviews of AlchemyAPI, TextRazor and more, in the journal Natural Language Engineering
Blog written by Anne Seaton based on an article in the journal English Today
It was when I was working on Chambers Universal Learners’ Dictionary in the late ’70s that I suddenly focused on the weirdness of the expression ‘what is it like?’ Why ask for a comparison when you want a description? I managed to squeeze it into the dictionary at W (for what), since it had missed the boat at L (for like). Desk dictionaries seemed not to bother with it. But the 1933 OED pinpointed its function with notable precision: ‘The question what is he (or it) like? means ‘What sort of man is he?’, ‘What sort of thing is it?’, the expected answer being a description, and not at all the mention of a resembling person or thing.’ However, it gave only two citations, the earlier dated 1878, whereas citations from my databases, when I began searching on ‘what … like’, showed that it was in use in the early 19th century. Earlier than that there was evidence that the question was indeed used literally to ask for a comparison.
I’m very aware that ‘what is it like?’ should be studied in conjunction with ‘like that’ (as in ‘He’s like that’, ‘It’s like that’), which can be understood as its counterpart in statement form. Citations for ‘people/things like that’ can be found as early as the 17th century, but the use of ‘like that’ as a complement after a linking verb seems to arrive in the mid-19th century. Trollope, who quibbles over ‘What is he like?’, seems OK with ‘like that’. In The Small House at Allington he puts it into the mouth of Johnny Eames:
‘My belief is, that a girl thinks nothing of a man till she has refused him half-a-dozen times.’
‘I don’t think Lily is at all like that.’
Read the full article ‘A literary history of the strange expression ‘what is it like?’ A straightforward question that changed its function and took universal hold’
English language teaching in the Siberian city of Irkutsk
Blog post written by Valerie Sartor based on a recent article in the journal English Today
The Russian Federation, established after the breakup of the USSR in the early 1990s, is the largest country in the world, and until recently, a nation that did not encourage foreigners to enter in order to teach English to the native population. Moscow and St Petersburg remain the two main intellectual and cultural capitals. During the Soviet era (1917-1990), however, cities in the western provinces, such as Kiev and Riga, were also held in high regard for education, with specialized universities dedicated to making contributions to science and technology, as well as the arts and sciences. Very little, however, was known about Siberian educational institutions, and little has been written recently about English in universities in the more remote areas of Siberia.
I served as a Fulbright Global TEFL Exchange Scholar for the 2014-2015 academic year in southeastern Siberia. My post was in English language teaching at the Eurasian Linguistic Institute (ELI), a new affiliate branch of the Moscow State Linguistic University (MGLU), located in the city of Irkutsk, Irkutsk Province. Formerly known as the Irkutsk State Pedagogical Institute of Foreign Languages, this facility was founded in 1948. Irkutsk has long hosted a diverse population. Historically it is known as one of the prosperous tea-route cities, and it also welcomed the Decembrist exiles, along with other political and religious exiles from European Russia and Eastern Europe. Because of this, despite being provincial, Irkutsk has many universities, art galleries, theaters, and beautiful architecture modeled after the buildings in St Petersburg. The ELI building itself is striking.
Presently, at the Eurasian Linguistic Institute, Russian students continue to specialize in learning English in order to become English teachers and translators. Traditionally, females have held these jobs, and the trend continues. Globalization has, however, impacted teaching methods as well as the ways in which students acquire fluency. ELI teachers now employ textbooks from the UK and the USA. Many teachers and students travel to English-speaking countries for study and work exchanges. Finally, the Internet has opened up a vast window onto English language resources.
With these positive opportunities have also come some negative outcomes. Faculty at ELI complain that their students no longer read as extensively as students did during Soviet times; moreover, with the fluctuating economic situation since the early 1990s, enrolments have dropped. Currently, funding for state universities and institutes is also problematic. Recently, ELI merged with Moscow State Linguistic University as part of Mr. Putin’s plans to streamline educational institutions and make them sustainable. Funding problems and globalization have also affected teacher perceptions. Some ELI teachers feel that they have lost “educational capital” as mentors and models in the eyes of students, who focus more on adjusting to the post-Soviet economic situation than on establishing themselves in the academy.
Yet at the same time, faculty at ELI reported that they were under the same pressures as in Soviet times. They were expected to better themselves academically: to write articles or to conduct extra-curricular activities involving the creation of textbooks or curricula. English teachers carry out tedious administrative functions and teach many classes. Nevertheless, the teachers I worked with were dedicated educators, spending many hours at the institute. Many also moonlighted as private tutors in order to improve their economic situation.
Read the full article ‘Evolving and adapting to global changes regarding English: English language teaching in the Siberian city of Irkutsk, Contemporary English language teaching in a remote Siberian university’ by Valerie Sartor and Svetlana Bogdanova.
Blog post written by Susannah Levi based on an article in Journal of Child Language
When people listen to speech, they hear two types of information: what is being said (such as “That’s a ball”) and who is saying it (such as MOMMY).
Prior studies have shown that adults understand speech better when it is spoken by a familiar voice. In this study, we tested whether school-age children also understand speech better when listening to a familiar voice. First, children learned the voices of three previously unknown speakers over five days. Following this voice familiarization, children listened to words mixed with background noise and were asked to tell us what they heard. These words were spoken both by the now-familiar speakers and by another set of unfamiliar speakers.
Our results showed that, like adults, children understand speech better when it is produced by a familiar voice. Interestingly, this benefit of voice familiarity only occurred when listening to highly familiar words (such as “book” or “cat”) and not to words that are less familiar to school-age children (such as “fate” or “void”). We also found that the benefit of familiarity with a voice was most noticeable in children with the poorest performance, suggesting that familiarity with a voice may be especially useful for children who have difficulty understanding spoken language.
We invite you to read the full article ‘Talker familiarity and spoken word recognition in school-age children’ here