How do new words reveal the intricacies of our world?
Blends are combinations of two – or, more rarely, three – source words into one through concatenation of clipped morphological material and/or phonological overlap, as in smog (< smoke + fog). Even though lexical blending is by no means a recent word-formation mechanism, in the article ‘“Blended” Cyber-Neologisms’ Amanda Roig-Marín argues that the coinage of blends in the semantic field of technology uniquely responds to speakers’ need to convey the blended realities that have begun to characterise present-day technological devices and related phenomena (e.g. Dronestagram (< drone + Instagram) ‘posts of aerial pictures’ or twimmolation (< Twitter + immolation) ‘the ruin of a person’s reputation because of insensitive Twitter posts’).
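As a toy illustration (my own sketch, not the author's analysis), the clipping-and-overlap mechanics of blending can be mimicked in a few lines of Python; the clip points are chosen by hand and are purely hypothetical:

```python
def blend(w1, w2, keep1, keep2):
    """Form a blend by clipping: keep the first `keep1` letters of w1
    and the last `keep2` letters of w2, merging any shared edge
    (a crude stand-in for phonological overlap)."""
    head, tail = w1[:keep1], w2[-keep2:]
    # Merge the longest overlap between the end of head and start of tail.
    for k in range(min(len(head), len(tail)), 0, -1):
        if head.endswith(tail[:k]):
            return head + tail[k:]
    return head + tail

print(blend("smoke", "fog", 2, 2))    # sm + og -> "smog"
print(blend("motor", "hotel", 3, 4))  # mot + otel -> "motel" (overlap "ot")
```

Real blends, of course, are constrained by syllable structure and stress, not by letter counts; the sketch only shows the concatenation-plus-overlap idea.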
This study examines data collected over a fifteen-year period (2000–2015). Since dictionaries cannot keep up with the constantly increasing number of lexical items coined, the author made use of two online neologism databases, Word Spy and The Rice University Neologisms Database, to retrieve truly novel blends. She first contextualises these cyber-blended words and explains why lexical blending is preferred over simple clipping or compounding. She then offers a taxonomy of cyber blends according to the morpho-semantic patterns of these new words.
Likewise, she forecasts the formation of a paradigm akin to what Frath (2005) calls the “hamburger type”: word components such as those based on blog and twitter/tweet (as in vlog (< video + blog) ‘a blog in which the posting takes the form of videos’ or twitchfork (< Twitter + pitchfork) ‘an organised campaign on Twitter to express discontent or attack targets’) can achieve autonomy and thus start to be used productively, as also happened with the sequel series (e.g. interquel and prequel) or the literati series (digerati, glitterati, etc.).
Access the full article for free through 31st August.
Blog post based on an article in Nordic Journal of Linguistics written by Mikael Roll
Did you know that Swedish and Norwegian have word melodies similar to Chinese? The article A neurolinguistic study of South Swedish word accents: Electrical brain potentials in nouns and verbs reports on previously unexplored brain responses to word tones in South Swedish.
The study adds strong support to the hypothesis that listeners use Swedish word stem tones to preactivate upcoming suffixes. Previous research had consistently found an increase in electrical brain response for one of the Swedish stem tones – accent 1 – as compared to the other tone – accent 2.
This increase in electrical brain response is thought to index preactivation of upcoming language, such as a suffix. Accent 1 stems are associated with fewer possible outcomes, and are therefore thought to increase the certainty of how a word might end. However, previously only the Central Swedish dialect had been investigated, and therefore it was uncertain whether the effect found was really due to the difference in possibilities associated with accent 1 and 2, or rather the acoustic difference between the tones.
In South Swedish, accent tones 1 and 2 are acoustically the mirror image of those in Central Swedish. Still, accent 1 produced a reaction indicating that the electrical brain response seems to reflect preactivation of upcoming suffixes, rather than a difference in acoustic processing.
Access the full article for free until 31st July.
Blog post written by Peter Trudgill author of Dialect Matters – Respecting Vernacular Language
Academic linguists are often asked questions like: Is it really bad form to sometimes split your infinitives? What exactly is wrong with saying “I done it”? Why is the pronunciation of younger people these days so irritating? Why is it OK to drop the k in know but not the h in house? Why do railway companies prefer to have customers alighting from trains rather than passengers getting off them? And what is so important about sentences not starting with a conjunction?
This book argues in favour of the language of ordinary people. It champions everyday vocabulary, such as passenger, as opposed to business-school jargon like customer. It supports nonstandard dialects, including forms such as I done it, in the face of the tyranny of the view that the standard dialect is the only “correct” and “grammatical” version of the language. It cherishes the English used by native speakers in their everyday lives, not least where they appear to defy the views of pedants who attempt to impose “rules” on us – for example about split infinitives – which have been invented for no good reason. It makes the case for vernacular usage as opposed to politically correct language. It demands respect for local ways of pronouncing local place-names. It asserts the primacy of spoken language and explains the importance of discourse markers like “like”. And it defends minority languages like Welsh and Navajo, where these are threatened by majority languages like English.
The book is a collection of my weekly columns on accent and dialect from the Eastern Daily Press newspaper, revised and annotated for a wider audience. Many of these essays deal with the history of the English language. Others explain the origins of place-names. Some discuss the ways in which languages change while dismissing the loaded notions of deterioration and progress. Several of the columns look at political problems brought about by language issues; and stress the tragedy of language death. The coverage ranges from England to New England and Moldova; from the languages of indigenous Australians and Americans to the Old Norse tongue of the Vikings; and from vocabulary to phonetics and grammar. One of the pieces even boasts what is quite possibly the first ever usage in a regional British newspaper of the word phonotactics.
One of the main purposes of these columns is to broadcast a message of anti-prescriptivism, anti-linguicism, and respect for demotic linguistic practices. Prescriptivism is a form of prejudice which is so widely accepted in the English-speaking world that it is taken by many people to be axiomatic. Prescriptivists believe that there is only one way in which English “ought” to be spoken and written, and that any deviation from this is “ignorant” or “wrong”. If you ask them their justification for claiming that the sentence I done it is wrong, they may well answer that “everybody knows” it is. In this book, I try to show that this is not so. And I oppose negative attitudes like this – which are sadly held even by many highly educated and otherwise thoughtful people – by proposing that we should cultivate a positive stance towards all the different ways in which English is spoken around the world.
By the term “linguicism” I refer to a phenomenon which is, in its way, every bit as pernicious as racism and sexism, and which these days is more publicly and shamelessly displayed than those other evil phenomena. Linguicism involves being negative towards and discriminating against people because of their accent, dialect or native language. The totally false idea that some dialects of English are – in some mysterious and never specified way –“better” than others has many unfortunate consequences, not least the denigration of whole groups of our fellow human beings.
But I also attempt to convey the message that language is a mysterious, fascinating and enjoyable phenomenon which not enough people know enough about. I have attempted to use my columns as an opportunity to show that language is an extraordinarily interesting phenomenon, especially when we do our best to think about it analytically and positively, without preconceptions and prejudice. Nothing is more important to human beings than language; and I hope that in this book I have succeeded in illustrating the degree to which all languages and dialects are not only worthy of respect and preservation but, as complex creations of human societies and of the human mind, are also highly rewarding and pleasing to discover more about.
All the 150 or so columns in the book are about language in some shape or form, and contain linguistic information and insights which will be of interest to university students and teachers of linguistics, as well as to high-school English Language teachers and their classes: indeed they have already been used to stimulate discussion in classrooms from New Zealand and the USA to the British Isles. For the benefit of this type of reader, most of the pieces in this book are accompanied by brief Linguistic Notes of a technical nature which general readers need not bother with unless they want to achieve a more academic understanding of the issues involved. Local background notes are also provided where necessary for readers not familiar with the East-of-England background of a number of the columns.
1. Can you define uptalk very briefly for those who don’t know?
Uptalk is the use of rising intonation (voice pitch) at the ends of statements or parts of statements. It is sometimes referred to as the use of question intonation on statements, but this is misleading, because not all questions have rising intonation (indeed there are many question types that tend to have falling intonation, such as those which have a wh-word at the beginning, like who, what, where), and there are rises on statements that are different from uptalk rises (such as on non-final items in a list like apples, oranges, bananas and pears, or the ‘continuation rise’ that you are likely to hear at the comma in Although this has a rise, it is not a question). Typically uptalk, which is also known as upspeak and high rising terminal (amongst other terms), is used to keep an interaction going, inviting the listener into the conversation. This is a specific instance of a more general property of high pitch to show openness, while lower pitch tends to mark finality or closure. However, because rising intonation is frequently associated with questions, many lay observers criticise ‘uptalkers’ for being uncertain about what they are saying. Interestingly, though, studies have shown that uptalk is highly likely in narrative contexts, such as when people are recounting something they have witnessed or experienced firsthand. These are unlikely to be situations where the speaker is uncertain.
2. What inspired you to write Uptalk?
As a psycholinguist, I devote a lot of my research time to looking at how we produce and understand language, especially spoken language. I have for a long time had a particular interest in how listeners interpret the intonation in utterances that they hear, and when I moved to New Zealand, a country where uptalk has a longer history than in most of the world, I was intrigued by how this particular form of intonation was interpreted. It was clear to me that non-uptalkers frequently arrived at a different interpretation from that intended by the speaker. This interest resulted in a series of research studies, during which I learned more about uptalk in different varieties of English and in other languages too. It seemed a natural next step to put what I had learned into a book where others – whether or not they are linguistics researchers – could have ready access to the wealth of information that is out there concerning the history, spread, and use of uptalk around the world.
3. How much does it vary according to the speaker’s age, gender and regional dialect?
There are certain parts of the world where uptalk has been a feature of spoken English for quite a long time: New Zealand, Australia and parts of Canada and the United States (particularly California). But it has been reported in many other English-speaking countries, as well as in other languages, particularly where there is either contact with English-speaking communities or a clear influence of the English language on youth culture. Typically, it is associated with young women, but it is by no means exclusively used by females, nor just by the young. Indeed, a number of studies have shown that people of the generation who were young uptalkers in the 1980s have continued to use uptalk as they have grown older. There may be some historical basis for saying that uptalk is a feature of young female speech, since linguists have shown that it is often young women who initiate a change in patterns of language use. Now, however, the claim that young women are the main users of uptalk is probably more a stereotype than a reality. In fact, uptalk is so common in some parts of the English-speaking world that subtle distinctions are developing in what uptalk rises and true question rises sound like, as part of making the difference clearer.
4. What are the key features and benefits that readers will take away from Uptalk?
What I have tried to do in this book is provide a comprehensive overview of what uptalk is like, including how it differs from other forms of rising intonation; what its many functions and meanings are; how it is distributed and used across the many varieties of English (and other languages) in which it is found; which speaker groups are more likely to use it; and how it is perceived and interpreted by listeners. For those interested in how researchers have investigated uptalk, there is also a chapter on methodology. Because there has been so much discussion of uptalk in newspapers and self-help books, as well as on the radio and television, I also wanted to provide an exploration of the media response to uptalk, including some discussion of the types of statements often used in support of the largely negative claims made by journalists and others. So Uptalk covers a lot of ground, and should be of interest to both linguists and non-linguists alike.
Find out more about Uptalk: The Phenomenon of Rising Intonation
Blog Post written by Eve V. Clark (Stanford University), author of the recently published First Language Acquisition (3rd Edition)
How early do infants start in on language?
Even before birth, babies recognize intonation contours they hear in utero, and after birth, they prefer listening to a familiar language over an unfamiliar one. And in their first few months, they can already discriminate between speech sounds that are the same or different.
How early do infants understand their first words, word-endings, phrases, utterances?
Children learn meanings in context, both from hearing repeated uses of words in relation to their referents, and from feedback from adults when they use a word correctly or incorrectly. When a child is holding a ball, the mother might say “Ball. That’s a ball”, and the child could decide that “ball” picks out round objects of that type. Still, it may take many examples to establish the link between a word-form (“ball”) and a word-meaning (round objects of a particular type) and to relate the word “ball” to neighbouring words (throw, catch, pick up, hold). It takes even longer for the child’s meaning of a word to fully match the adult’s.
When do infants produce their first words and truly begin to talk?
Infants babble from 5–10 months on, giving them practice on simple syllables, but most try their first true words at some time between age 1 and age 2 (a broad range). They find certain sounds harder to pronounce than others, and certain combinations (e.g., clusters of consonants) even harder. It therefore takes practice to arrive at the adult pronunciations of words – to go from “ba” to “bottle”, or from “ga” to “squirrel”. Like adults, though, children understand much more than they can say.
What’s the relation between what children are able to understand and what they are able to say?
Representing the sound and meaning of a word in memory is essential for recognizing it when other speakers use it. Because children are able to understand words before they produce them, they can use the representations of words they already understand as models to aim for when they try to pronounce those same words.
How early do children begin to communicate with others?
A few months after birth, infants follow adult gaze, and they respond to adult gaze and to adult speech face-to-face, with cooing and arm-waving. As they get a little older, they attend to the motion in adult hand-gestures. By 8 months or so, they recognize a small number of words, and by 10 months, they can also attend to the target of an adult’s pointing gestures. They themselves point to elicit speech from caregivers, and they use gestures to make requests – e.g., pointing at a cup as a request for juice. They seem eager to communicate very early.
How do young children learn their first language?
Parents check up on what their children mean, and offer standard ways to say what the children seem to be aiming for. Children use this adult feedback to check on whether or not they have been understood as they intended.
Do all children follow the same path in acquisition?
No, and the reasons for this depend in part on the language being learnt. English, for example, tends to have fixed word order and relatively few word endings, while Turkish has much freer word order and a large number of different word-endings. Languages differ in their sound systems, their grammar, and their vocabulary, all of which have an impact on early acquisition.
These and many other questions about first language acquisition are explored in the new edition of First Language Acquisition. In essence, children learn language in interaction with others: adults talk with them about their daily activities – eating, sleeping, bathing, dressing, playing; they expose them to language and to how it’s used, offer feedback when they make mistakes, and provide myriad opportunities for practice. This book reviews findings from many languages as it follows the trajectories children trace during their acquisition of a first language and of the many skills language use depends on.
First Language Acquisition (third edition), Cambridge University Press 2016
Blog post based on an article in Journal of Child Language
Written by Melanie Soderstrom in consultation with article co-authors Eon-Suk Ko, Amanda Seidl, and Alejandrina Crista
It has long been known that adults’ speech patterns unconsciously become more similar over the course of a conversation, but do children converge in this way with their caregivers? Across many areas of child development, children’s imitation of caregivers has long been understood to be an important component of the developmental process. These concepts are similar, but we tend to think of imitation as one-sided and static, while convergence is more dynamic and involves both interlocutors influencing each other. In our study, we set out to examine how duration and pitch characteristics of vocalizations by 1- and 2-year-olds and their caregivers dynamically influence each other in real-world conversational interactions.
We recorded 13 mothers and their children using LENA, a system for gathering full-day recordings, which also provides an automated tagging of the audio stream into speakers. We analyzed pitch and duration characteristics of these segments both within and across conversational exchanges between the mother and child to see whether mothers and children modulated the characteristics of their speech based on each other’s speech. Instead of examining mother-child correlations across mother-child dyads, as previous studies have done, we examined correlations within a given dyad, across conversations. We found small, but significant correlations, particularly in pitch measures, suggesting that mothers and children are dynamically influencing each other’s speech characteristics.
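The within-dyad analysis boils down to correlating paired per-conversation measures. As a hedged sketch (the numbers below are invented, and the study's actual pipeline is considerably more involved), a Pearson correlation over hypothetical per-conversation mean pitch values for one dyad might look like:

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical per-conversation mean pitch (Hz) for one mother-child dyad.
mother = [210, 225, 218, 240, 230]
child = [300, 320, 310, 335, 325]
print(pearson(mother, child))  # close to 1: pitch rises and falls together
```

A value near +1 across conversations within a dyad would be the kind of signal interpreted here as the two speakers dynamically tracking each other.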
We also looked at who started the conversation, and measured mother and child utterance durations and response latencies (i.e., how quickly mothers responded to their child’s utterance and vice versa). Overall, unsurprisingly, mothers produced longer utterances and shorter response latencies (faster responding) than their children. However, both the mothers and the children produced longer utterances and shorter response latencies in conversations that they themselves initiated. This finding is exploratory, but suggests that providing children with the conversational “space” to initiate conversations may lead to more mature vocalization, and may therefore be beneficial for the language-learning process.
Read the full article ‘Entrainment of prosody in the interaction of mothers with their young children’ here
Blog post supplementary to an article in English Today written by M. Lynne Murphy
Last night, I wondered ‘aloud’ on Twitter if British-American English dictionaries are the worst lexicographical products out there. This was after flipping through The Anglo-American Interpreter: a word and phrase book by H. W. Horwill (1939). At first, when I read Horwill’s claims that Americans ask for the time with What time have you?, I thought ‘Wow, American English has changed a lot since 1939’. But as I kept reading the unexpected items in the American column on each page, the British column sounded more and more like contemporary American English. I started to suspect something was amiss. And in the preface I found it: ‘The present book is an original compilation based on more than thirty years’ reading of American books and newspapers, supplemented by what the author has heard with his own ears during two periods of residence in the United States’. The author is bragging that he didn’t reproduce information from earlier works ‘without independent verification’. But did he get independent verification about the things he experienced with his own eyes and ears?
You and I have a great advantage over Mr Horwill, in that we live in the computer age. So we can do things like look in the Corpus of Historical American English (Davies 2010–) and see that the corpus has four examples of What time have you? between 1800 and 1940, but 219 examples of What time is it? We would not conclude that What time have you? is what Americans routinely said in 1939, but we might wonder if it was used in certain circumstances or regions.
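A crude version of this kind of corpus check is easy to run on any plain text; the snippet below uses an invented sample rather than COHA itself, so the counts are purely illustrative:

```python
from collections import Counter

def phrase_counts(text, phrases):
    """Count case-insensitive occurrences of each phrase in a text."""
    lowered = text.lower()
    return Counter({p: lowered.count(p.lower()) for p in phrases})

sample = ("What time is it? he asked. What time is it? "
          "What time have you? said the older man.")
counts = phrase_counts(sample, ["What time is it?", "What time have you?"])
print(counts["What time is it?"])     # 2
print(counts["What time have you?"])  # 1
```

Real corpus interfaces like COHA's add tokenisation, part-of-speech tagging, and date filtering on top of this basic idea of counting attested strings.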
I enjoyed finding this book and its oddities because it is the British mirror of an American book that I mention in my recent article ‘(Un)separated by a common language?’ (Murphy 2016). This is the first of a series of four pieces I’m writing for English Today about American and British Englishes: what can be studied about them and how we might think about them. The essay argues that American and British differences should not be dismissed as ‘minor and uninteresting’. Whether they’re minor or not depends on one’s criteria for ‘minor’, but they’re certainly not uninteresting. What they are is misunderstood.
Like Horwill, the author of Understanding British English (Moore, 1989) was an enthusiast for the other country. She watched British television, read British and Australian books, and took two vacations in the UK where she acquired some British pen-pals. The book’s listing of British English vocabulary thus contains Australianisms, some misapprehensions of meaning, quite a few questionable part-of-speech judgements, and some words that are perfectly good American English (but apparently not used by Moore).
The problem for Horwill, Moore and many other interested observers of language, is that our experience of English is deeply personal (no one else has heard/read/said all the same words and phrases as you have) and we have a deep need to generalize and stereotype. If you phrase something in a way that I’ve not heard before and we have similar accents, I might think ‘There’s an expression I didn’t know’ or ‘Wow, isn’t she poetic?’ or ‘Hey, he’s saying that wrong’. But if someone with a different accent says it, we are apt to conclude ‘Oh, that must be how those people say it’. The fact is: it still could have been an expression I didn’t know. Or poetic. Or a speech error. And another fact is: I probably didn’t notice the dozens of earlier times when they expressed a similar notion using words I would have used.
We’re so confident that we know our own dialects that we are more than willing to draw conclusions about others’. It’s not just enthusiastic-but-amateur dictionary-writers who do this. Articles in the news about Britishisms or Americanisms routinely misidentify the sources of words and phrases (for examples, see Murphy 2006–). Now that we’re in the information age, we have the tools to avoid these mistakes: well-researched dictionaries, accessible linguistic corpora, and the ability to ask people on the other side of the world whether they’d say X or Y – and to get an almost immediate response. It concerns me when those tools aren’t used.
So, before you conclude that that thing you heard on Downton Abbey is ‘how the British say it’ or that Americans ‘don’t use adverbs’ (see Pullum 2014), remind yourself that:
(a) you heard an individual speak, not a nation,
(b) your mind biases you to notice differences rather than similarities, and
(c) you could look it up!
Davies, Mark. 2010–. The Corpus of Historical American English: 400 million words, 1810–2009. Available at http://corpus.byu.edu/coha/.
Horwill, H. W. 1939. An Anglo-American interpreter. Oxford University Press.
Moore, Margaret E. 1989. Understanding British English. New York: Citadel Press.
Murphy, M. Lynne. 2006–. Separated by a Common Language (blog). http://separatedbyacommonlanguage.blogspot.com
Murphy, M. Lynne. 2016. (Un)separated by a common language? English Today, 32, 56-59.
Pullum, Geoffrey K. 2014. ‘Undivided by a Common Language’. Lingua Franca (blog), Chronicle of Higher Education, 17 March. Available at <http://chronicle.com/blogs/linguafranca/2014/03/17/undivided-by-a-common-language/> (Accessed September 30, 2015).
‘Checking in on Grammar Checking’ by Robert Dale is the latest Industry Watch column to be published in the journal Natural Language Engineering.
Looking back to 2004, industry expert Robert Dale reminds us of a time when Microsoft Word was the dominant software for grammar checking. Bringing us up to date in 2016, Dale discusses the evolution, capabilities and current marketplace of grammar checking and its diverse range of users: from academics and men on dating websites to the fifty top celebrities on Twitter.
Below is an extract from the article, which is available to read in full here.
An appropriate time to reflect
I am writing this piece on a very special day. It’s National Grammar Day, ‘observed’ (to use Wikipedia’s crowdsourced choice of words) in the US on March 4th. The word ‘observed’ makes me think of citizens across the land going about their business throughout the day quietly and with a certain reverence; determined, on this day of all days, to ensure that their subjects agree with their verbs, to not their infinitives split, and to avoid using prepositions to end their sentences with. I can’t see it, really. I suspect that, for most people, National Grammar Day ranks some distance behind National Hug Day (January 21st) and National Cat Day (October 29th). And, at least in Poland and Lithuania, it has to compete with St Casimir’s Day, also celebrated on March 4th. I suppose we could do a study to see whether Polish and Lithuanian speakers have poorer grammar than Americans on that day, but I doubt we’d find a significant difference. So National Grammar Day might not mean all that much to most people, but it does feel like an appropriate time to take stock of where the grammar checking industry has got to. I last wrote a piece on commercial grammar checkers for the Industry Watch column over 10 years ago (Dale 2004). At the time, there really was no alternative to the grammar checker in Microsoft Word. What’s changed in the interim? And does anyone really need a grammar checker when so much content these days consists of generated-on-a-whim tweets and SMS messages?
The evolution of grammar checking
Grammar checking software has evolved through three distinct paradigms. First-generation tools were based on simple pattern matching and string replacement, using tables of suspect strings and their corresponding corrections. For example, we might search a text for any occurrences of the string isnt and suggest replacing them by isn’t. The basic technology here was pioneered by Bell Labs in the UNIX Writer’s Workbench tools (Macdonald 1983) in the late 1970s and early 1980s, and was widely used in a range of more or less derivative commercial software products that appeared on the market in the early ’80s. Anyone who can remember that far back might dimly recall using programs like RightWriter on the PC and Grammatik on the Mac.

Second-generation tools embodied real syntactic processing. IBM’s Epistle (Heidorn et al. 1982) was the first really visible foray into this space, and key members of the team that built that application went on to develop the grammar checker that, to this day, resides inside Microsoft Word (Heidorn 2000). These systems rely on large rule-based descriptions of permissible syntax, in combination with a variety of techniques for detecting ungrammatical elements and posing potential corrections for those errors.

Perhaps not surprisingly, the third generation of grammar-checking software is represented by solutions that make use of statistical language models in one way or another. The most impressive of these is Google’s context-aware spell checker (Whitelaw et al. 2009) – when you start taking context into account, the boundary between spell checking and grammar checking gets a bit fuzzy. Google’s entrance into a marketplace is enough to make anyone go weak at the knees, but there are other third-party developers brave enough to explore what’s possible in this space. A recent attempt that looks interesting is Deep Grammar (www.deepgrammar.com).
We might expect to find that modern grammar checkers draw on techniques from each of these three paradigms. You can get a long way using simple table lookup for common errors, so it would be daft to ignore that fact, but each generation adds the potential for further coverage and capability.
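As a rough illustration of the first-generation approach Dale describes – table lookup over suspect strings – here is a minimal Python sketch; the table entries are illustrative, not drawn from any shipped product:

```python
import re

# A tiny table of suspect strings and their corrections.
CORRECTIONS = {
    "isnt": "isn't",
    "dont": "don't",
    "alot": "a lot",
}

def check(text):
    """Replace whole-word occurrences of suspect strings with corrections."""
    pattern = r"\b(" + "|".join(map(re.escape, CORRECTIONS)) + r")\b"
    return re.sub(pattern, lambda m: CORRECTIONS[m.group(0)], text)

print(check("This isnt hard, but alot of errors slip by."))
# -> "This isn't hard, but a lot of errors slip by."
```

The word-boundary anchors (`\b`) are what keep such a checker from mangling longer words; everything beyond that – syntax, context, statistics – is what the second and third generations add.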
The remainder of the article discusses the following:
- Today’s grammar-checking marketplace
- Who needs a grammar checker?
‘Checking in on grammar checking’ is an Open Access article. You may also be interested in complimentary access to a collection of related articles about grammar published in Natural Language Engineering. These papers are fully available until 30th June 2016.
Other recent Industry Watch articles by Robert Dale:
By Abby Kaplan author of Women Talk More Than Men and Other Myths about Language Explained
For years now, observers have been alert to a growing social menace. Like Harold Hill, they warn that there’s trouble in River City — with a capital T, and that rhymes with P, and that stands for Phone.
Mobile phones are a multifaceted scourge; they’ve been blamed for everything from poor social skills to short attention spans. As a linguist, I’m intrigued by one particular claim: that texting makes people illiterate. Not only are text messages short (and thus unsuited for complex ideas), they’re riddled with near-uninterpretable abbreviations: idk, pls, gr8. Young people are especially vulnerable to these altered forms; critics frequently raise the specter of future students studying a Hamlet who texts 2B or not 2B.
The puzzling thing is that none of these abominable abbreviations are unique to text messaging, or even to electronic communication more generally. There’s nothing inherently wrong with acronyms and initialisms like idk; similar abbreviations like RSVP are perfectly acceptable, even in formal writing. The only difference is that idk, lol, and other ‘textisms’ don’t happen to be on the list of abbreviations that are widely accepted in formal contexts. Non-acronym shortenings like pls for please are similarly unremarkable; they’re no different in kind from appt for appointment.
Less obvious is the status of abbreviations like gr8, which use the rebus principle: 8 is supposed to be read, not as the number between 7 and 9, but as the sound of the English word that it stands for. The conventions for formal written English don’t have anything similar. But just because a technique isn’t used in formal English writing doesn’t mean that technique is linguistically suspect; in fact, there are other written traditions that use exactly this principle. In Ancient Egyptian, for example, the following hieroglyph was used to represent the word ḥr ‘face’:
It’s not a coincidence, of course, that the symbol for the word meaning ‘face’ looks like a face. But the same symbol could also be used to represent the sound of that word embedded inside a larger word. For example, the word ḥryt ‘terror’ could be written as follows:
Here, the symbol has nothing to do with faces, just as the 8 in gr8 has nothing to do with numbers. The rebus principle was an important part of hieroglyphic writing, and I’ve never heard anyone argue that this practice led to the downfall of ancient Egyptian civilization. So why do we think textisms are so dangerous?
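The rebus reading of a textism can even be sketched mechanically: read each digit for the sound of the word it names rather than for its numeric value. The digit-to-sound table below is my own illustration, not a standard.

```python
# Toy illustration of the rebus principle in textisms: a digit stands for
# the sound of the English word that names it, not for a number.
# This mapping is a hypothetical example for demonstration only.
DIGIT_SOUNDS = {"8": "ate", "2": "to", "4": "for"}

def sound_out(textism):
    """Replace each digit with the word it sounds like."""
    return "".join(DIGIT_SOUNDS.get(ch, ch) for ch in textism)

print(sound_out("l8r"))  # 'later'
print(sound_out("m8"))   # 'mate'
```

Of course, real readers resolve these forms by sound rather than spelling (gr8 comes out as ‘grate’, a homophone of ‘great’), which is precisely the point: the principle is phonological, just as it was for the Egyptian scribes.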
Even if there’s nothing wrong with these abbreviations in principle, it could still be that using them interferes with your ability to read and write the standard language. If you see idk and pls on a daily basis, maybe you’ll have a hard time remembering that they’re informal (as opposed to RSVP and appt). But on the other hand, all these abbreviations require considerable linguistic sophistication — maybe texting actually improves your literacy by encouraging you to play with language. We all command a range of styles in spoken language, from formal to informal, and we’re very good at adjusting our speech to the situation; why couldn’t we do the same thing in writing?
At the end of the day, the only way to find out what texting really does is to go out and study it in the real world. And that’s exactly what research teams in the UK, the US, and Australia have done. The research in this area has found no consistent negative effect of texting; in fact, a few studies have even suggested that texting might have a modest benefit. It seems that all the weeping and gnashing of teeth about the end of literacy as we know it was premature: the apocalypse is not nigh.
Of course, this doesn’t mean that we should all spend every spare minute texting. (I’m a reluctant texter myself, and I have zero interest in related services like Twitter.) There are plenty of reasons to be thoughtful about how we use any technology, mobile phones included. What we’ve seen here is just that the linguistic argument against texting doesn’t hold water.
View the Women Talk More Than Men and Other Myths about Language Explained book trailer by clicking on the image below.
Cambridge University Press and Studies in Second Language Acquisition are pleased to announce that the recipients of the 2016 Albert Valdman Award for outstanding publication in 2015 are Gregory D. Keating and Jill Jegerski for their March 2015 article, “Experimental designs in sentence processing research: A methodological review and user’s guide”, Volume 37, Issue 1. Please join us in congratulating these authors on their contribution to the journal and to the field.
Post written by Gregory D. Keating and Jill Jegerski
We wish to express our utmost thanks and gratitude to the editorial and review boards at SSLA for selecting our article, ‘Experimental designs in sentence processing research: A methodological review and user’s guide’ (March 2015), for the Albert Valdman Award for outstanding publication. The two of us first became research collaborators several years ago as a result of our mutual interests in sentence processing, research methods, research design, and statistics. With each project that we have undertaken, we’ve had many fruitful and engaging conversations about best practices in experimental design and data analysis for sentence processing research. This article is the product of many of our own questions, which led us to conduct extensive reviews of existing processing studies. Our recommendations are culled from and informed by the body of work we reviewed, as well as our own experiences conducting sentence processing research. Stimulus development and data analysis can pose great challenges. It is our hope that the information provided in our paper will be a useful resource to researchers and students who wish to incorporate psycholinguistic methods into their research agenda, and that the study of second language processing will continue to flourish in the future.