Blog post written by Ian Roberts, University of Cambridge
I’d like to begin by talking about my cat, Clover. He really is very intelligent: he knows exactly how to wake me up in the morning, exactly which shelf in which cupboard his food is kept on, where his bowl is, how to get let out, and lots of other things. You won’t catch the average ant, starfish or parsnip doing any of that. By the standards of nearly everything in the known universe, he really is smart.
But of course we’re much smarter. There are plenty of things in the world, especially in our mental world, that poor Clover has absolutely no inkling of: notably such things as nouns, quantifiers and syllables, i.e. language. These things are every bit as much beyond Clover as waking me up to get me to feed him would be for a parsnip or a starfish. Obviously the fact that we have language has a lot to do with this cognitive gulf between us and our pets, but that may not be the whole story.
But a natural question to ask is: is there a similar cognitive gulf between us and other forms of intelligence? We seem to be the smartest creatures on our planet, but this is where the extra-terrestrials come in. Here I’m not interested in various forms of slime that might be around on Mars or elsewhere, but intelligent extra-terrestrials, the sort that might build spaceships. Could there be extra-terrestrials so much smarter than us that they would keep us as pets? Or (cue the creepy sci-fi music), are we already pets but we just don’t know it? After all, Clover doesn’t know he’s my pet. Are there, in other words, concepts as impossible for us as the concepts three, verb or phoneme are for Clover?
If the answer is yes, then we’d better keep out of the way of the smarter extra-terrestrials. Nothing good for us can come of contact with such creatures; the best we can hope is to be treated as pets. You don’t want to think about the worst.
But the answer doesn’t have to be yes. It is also quite possible that we have crossed a cognitive threshold. Our capacity to express anything, through the recursive syntax and compositional semantics of natural language, might have taken us into a cognitive realm where anything, everything, is possible. Effectively, having language has made us the equal of any extra-terrestrial (who would have to have something like language in order to build their spaceships).
In the movie 2001: A Space Odyssey, Stanley Kubrick made one of the most brilliant associative cuts in movie history. The film starts in prehistory, and shows a bunch of ape-men fighting over a water-hole. Then one day one of them comes across a monolith which makes a weird noise. This is an alien artefact which somehow transmits intelligence. Next time he squares up to the enemy ape-men at the water-hole, he picks up a bone and smashes the enemy’s head in. In jubilation at this discovery of a weapon, he throws it up into the air and as it spins around Kubrick cuts to an image of a spaceship orbiting the earth.
Kubrick’s message is clear: once you’ve figured out how to use tools, it’s a short step to spaceships. That movie was made in the 1960s at a time when many people thought that Man the Tool-Maker was the key to the differences between us and other species, and hence that inventing tools was a crucial step in human evolution. We now know that’s not true, as quite a few other species use tools of various kinds. But Kubrick’s basic idea that there might have been a crucial mutation in human evolution which led, in almost no time from an evolutionary perspective, to space travel might have been right. And it’s a plausible speculation that the mutation in question was whatever it is that makes our brains capable of computing recursive syntax. It’s a short step, not a great leap, from syntax to spaceships.
Anyway, something (God, natural selection, a random mutation, an alien monolith) has given us our extraordinary minds with our extraordinary capacity for generating, storing and transmitting knowledge. Language really must be central to these abilities. My new book The Wonders of Language, or How to Make Noises and Influence People, is an introduction to what linguists have discovered about this truly remarkable phenomenon. Understanding language means understanding a very big part of what it is to be human, what it is to be you.
Cambridge University Press is proud to sponsor the 49th Annual Meeting of the British Association for Applied Linguistics hosted by Anglia Ruskin University in Cambridge on 1–3 September 2016.
Visit the Cambridge stand at the conference to receive 20% off applied linguistics titles on display and take away complimentary copies of our popular linguistics journals.
We would also like to invite you to join us for the Cambridge University Press Colloquium featuring a distinguished panel of speakers: Professor Li Wei, Dr Napoleon Katsos, Dr Jenny Gibson, Dr Martin Dewey and Dr Ardeshir Geranpayeh. They will be addressing the theme of ‘what does it mean to know a second language?’ from a range of perspectives: a sociolinguistic perspective, a clinical-applied research project, a pedagogical perspective and a view based on automated language assessment, and will include an interactive Q&A session.
Cambridge University Press and Cambridge English Language Assessment are co-sponsoring the wine reception to be held on Friday the 2nd of September in the Academy at Anglia Ruskin. We look forward to seeing you there for complimentary drinks and nibbles at 5.30pm.
Cambridge University Press
Blog post written by Robert Kennedy, University of California, Santa Barbara
I am excited to share Phonology: A Coursebook with instructors everywhere. This textbook represents the culmination of many years of thinking about how to make the content of phonology courses more accessible and engaging to students, and I can share a few examples of what is new about it here.
I have always believed phonological analysis to be an important skill for linguists of any stripe, so I think it’s crucial that students establish a solid understanding of its central concepts. But linguistics is growing as an academic field, with its traditions of structural analysis and documentation joined by the study of language through the lens of identity, technology, and many other angles. The growth in size and range of our undergraduate population (at my home institution, and surely many others) reflects this. My personal motivation for writing a phonology textbook thus comes from my classroom observation of the varying interests and learning styles among students, not just in phonology courses but in other linguistics courses as well. Even if a student is not planning to specialize in phonology, they can still experience the course as a practicum in the procedures of the scientific method.
With this in mind, I have structured this book around a mindset of the primacy of data: its chapters are organized around types of phonological processes and patterns, with assimilation, deletion, insertion, harmony, syllabification, stress, and tonal phenomena all highlighted as objects of phonological analysis. While I have included familiar classic problem sets, including data from languages such as Yokuts, Turkish, Hungarian, Japanese, Kongo, and Polish, I have enriched them with many others that are either less canonical or newly developed, with notable exercises on syllabification, tone, and prosodic morpho-phonology. Moreover, I have used the data to guide the use of formalisms such as features, rules, and tiered representations.
Meanwhile, I have observed in the past that some students have difficulty seeing the input-output relationship at the heart of phonology when following the standard teaching practice of introducing it with distributional facts and phonemic analysis. To address this, I introduce the concepts of underlying representations and the processes that operate on them with more concretely observable examples of morphophonemic alternation, before exploring phonemic analysis and complementary distribution.
This gives students something more tangible to grasp early on – the idea that a single underlying phoneme could have multiple surface allophones is more plainly obvious when the forms of specific morphemes alternate by their phonological context. In practice, teaching about phonemes by using complementary distribution and mutual exclusivity, whose evidence is more circumstantial, risks a level of abstractness that is perhaps best left until later in the term. There is a parallel to be drawn with calculus, where the instructor may teach either integrals or derivatives first. Teaching derivatives first is more intuitive to many learners, but in phonology it is as if we have been teaching integrals first.
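The underlying-representation-plus-process view lends itself to a concrete sketch. The toy Python function below applies a German-style final obstruent devoicing rule to underlying forms; the rule table and forms are simplified for illustration only:

```python
# Toy sketch: an underlying representation plus a process yields surface forms.
# Here the process is German-style final obstruent devoicing.
DEVOICE = {"b": "p", "d": "t", "g": "k", "z": "s", "v": "f"}

def surface(underlying: str) -> str:
    """Apply final devoicing to an underlying form."""
    if underlying and underlying[-1] in DEVOICE:
        return underlying[:-1] + DEVOICE[underlying[-1]]
    return underlying

# The morpheme /bund/ 'federation' alternates: [bunt] word-finally,
# but the underlying voiced /d/ resurfaces before a vowel-initial suffix.
print(surface("bund"))         # bunt
print(surface("bund" + "es"))  # bundes
```

The alternation [bunt] ~ [bundes] is exactly the kind of morphophonemic evidence described above: the single underlying /d/ shows up in two surface guises depending on its phonological context.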
I believe this approach dovetails well with the spirit of Cambridge’s Coursebook series, in which the reader is presented with datasets and exercises, but the analytical steps are narrated procedurally to illustrate the links between detecting patterns and accounting for their nuances and complexity.
The second novel component is a deeper integration of typological generalizations as an element of phonological argumentation. In class, when leading students on how to decide among competing analyses, I often find myself turning to typological evidence, yet this information is not readily available to undergraduates. The organization of the book by processes clarifies that certain types of phenomena are typologically prevalent, and I use this to show the student that the formal tools should reflect these trends.
Another deliberate aspect of this textbook is how it treats the role of features and representations. Feature charts and derivational conventions are so rich with detail and precision that students can get lost trying to remember them all, especially if they think of the best analysis as one that uses the correct features. I often see students struggling to memorize feature charts for IPA symbols rather than thinking of natural classes in more concrete terms. Thus I emphasize in the text that features are valuable analytical tools, but that the features a student employs in a given analysis must primarily distinguish the groups of sounds that behave differently.
This textbook is aimed at introductory phonology classes, particularly for students who have completed an introductory course in linguistics and/or phonetics and have working knowledge of IPA transcription and some basics of morphological analysis. Nevertheless, the datasets are numerous and rich enough to be useful for more advanced students of phonology as well.
I look forward to using this textbook in the classroom and sincerely hope other phonology instructors will find it both useful and engaging as a resource for their students.
Teaching a course on this topic?
EMEA lecturers may request a copy of this title for inspection here
US instructors may request a copy of this title for examination here
Visit the book’s page for more information here
Blog post written by Cambridge author Vyvyan Evans.
An emoji is a glyph encoded in fonts, like other characters, for use in electronic communication. It’s especially prevalent in digital messaging and social media. An emoji, or ‘picture character’, is a visual representation of a feeling, idea, entity, status or event. From a historical perspective, the first emojis were developed in the late 1990s in Japan for use in the world’s first mobile phone internet system. There were originally 176, very crude by today’s standards.
Early emoji faces
In 2009, the California-based Unicode Consortium, which specifies the international standard for the representation of text across modern digital computing and communication platforms, sanctioned 722 emojis. The Unicode approved emojis became available to software developers by 2010, and a global phenomenon was born. Today, there are a little over 1,200 emojis available.
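The point that an emoji is “a glyph encoded in fonts, like other characters” can be made concrete in a few lines of Python. The code point below (GRINNING FACE, U+1F600) is taken from the published Unicode charts:

```python
# An emoji is an ordinary Unicode character: a code point that fonts
# render as a picture rather than a letter.
grin = "\U0001F600"               # GRINNING FACE

print(grin)                       # 😀
print(hex(ord(grin)))             # 0x1f600 - its Unicode code point
print(len(grin.encode("utf-8")))  # 4 - bytes needed in UTF-8
```

Because emojis live in Unicode’s supplementary planes, they take four bytes in UTF-8, but otherwise they travel through text-handling software exactly like any other character.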
The new universal ‘language’?
While emoji is not, strictly speaking, a language, in the way that, say, English, French or Japanese are languages, it is certainly a powerful system of communication. English is often said to be the world’s global language, so a comparison is instructive.
English has 335 million native speakers, with a further 505 million speakers who use it as a second language. It’s the primary or official language in 101 countries, from Canada to Cameroon, and from Malta to Malawi – far outstripping any other language. It has been transplanted far from its point of origin – a small country, on a small island – spreading far beyond English shores. But more than the range, English has steadily gained ground in almost all areas of international communication: from commerce to diplomacy, from aviation to academic publishing, serving as a global lingua franca.
But in comparison, emoji dwarfs even the reach of English. The driver for the staggering adoption of emoji has been the advent of mobile computing, especially the smartphone. Emoji was introduced as an international keyboard in Apple’s operating system (iOS) in October 2011. And by July 2013 it had been introduced across most Android operating platforms.
There are different measures for assessing the stratospheric rise of emoji. One factor has been the rapid adoption of smartphones. Today one quarter of the world’s population owns a smartphone; based on a survey of mobile computing habits in 41 countries, it is estimated that there are now over 2 billion smartphone users, with 31% of the global population accessing the internet by smartphone. In terms of specific countries, China exceeded 500 million smartphones during 2014; India is estimated to pass 200 million smartphone users this year; and the USA will reach the same figure by 2017, when 65% of its population will own a smartphone.[i] From smartphones alone, some 41.5 billion text messages are sent globally every day, using around 6 billion emojis – figures that are mind-boggling.[ii]
Emoji all around us
Today emoji is seemingly everywhere, having spread far beyond the messaging systems it was developed for. The New York Subway has now introduced a system, using emoji, to advise passengers of the status of particular subway lines: whether trains are running normally or not. As the NY City website explains: “We’re trying to estimate agony on the NYC subway by monitoring time between trains and adding unhappy points for stations typically crowded at rush hour.” [iii] Here’s an example:
Reprinted from the WNYC website
Even an institution as august as the BBC is not immune. Each Friday, the Newsbeat page on the BBC website—associated with BBC Radio 1 and aimed at younger listeners—publishes the news in emoji. Radio listeners are invited to guess what the headline means. See whether you can figure out which headline this emoji ‘sentence’ relates to:
- Four climbers find what they think is a Dodo chick egg. But it’s not. The bird has been extinct for 450 years.
- One in four people don’t know the Dodo is extinct, a poll finds.
- Four children win a science competition to genetically recreate the Dodo.
(The correct answer is 2).
Moreover, the literary canon is not excluded: a visual designer with a passion for emoji has translated Lewis Carroll’s Alice in Wonderland, a book of 27,500 or so words, into a pictorial narrative, consisting of around 25,000 emoji.[iv] Some example emoji ‘sentences’ are below:
Frivolous or the future?
A common question that people ask is whether anyone—you or I—can simply create their own emojis? The short answer is yes. For instance, Finland, on behalf of the Finnish people, has created its own set of national emojis that express Finnish identity. These include emojis of people in saunas, of a Nokia phone and of a headbanger.
These are computer-generated emojis made available by Finland’s Foreign Ministry on Wednesday, Nov. 4, 2015. Finland is launching a series of national emojis that include people sweating in saunas, classic Nokia phones and heavy metal head-bangers. Petra Theman from the Finnish Foreign Ministry says the emojis will be released as a way to promote the country’s image abroad and are based on themes associated with Finland. (Finnish Foreign Ministry via AP)
Finnish national emojis
But while Finland was the first country in the world to embrace its national identity through emojis, you or I won’t be able to text one another the headbanger emoji anytime soon. And that’s because the Finnish emojis have not been officially sanctioned by the Unicode Consortium—and Finland has no plans to submit them for consideration.
A new emoji has to meet various criteria to become a candidate emoji. And only after a lengthy vetting process, taking around 18 months, does a successful candidate emoji pass muster. Even then, it can take still longer for a newly sanctioned emoji to make it onto our digital keyboards – once approved, emojis can take several operating-system updates, and sometimes several years, to make it onto a smartphone or tablet computer near you. So, for now at least, Finland’s bespoke emojis are classed as ‘stickers’: bespoke images that have to be downloaded as part of an app in order to be inserted into text messages.
On January 25th 2016, a Chinese-American businesswoman from San Francisco, YiYing Lu, succeeded where Finland had declined to tread. Supported by a publicly funded Kickstarter campaign, Lu succeeded in having a dumpling achieve official candidate emoji status. If successful, the proposed dumpling is set to become a bona fide emoji by the end of 2017. In so doing, it would join a growing catalogue of food emojis, including pizza, hamburger, doughnuts and even a taco glyph.
The proposed dumpling emoji. From The Dumpling Project.
The entire emoji vetting process is controlled by a handful of American multinational corporations, based in California. And there are strict qualifying criteria for new emojis: they may not depict persons living or dead, nor deities, for instance. This is why there is no Buddha or Elvis emoji. Moreover, a candidate emoji must be deemed to have widespread appeal. On this score, the proposal for a dumpling emoji looks to be a strong candidate. A dumpling – a dough-filled food parcel – is popular around the world, with exemplars ranging from Italian ravioli to Russian pelmeni to Japanese gyoza. In Argentina there are empanadas, Jewish cuisine has kreplach, in Korea there is mandu and China has potstickers. But when Lu, an aficionado of Chinese dumplings, attempted to text a friend about the dish, she noticed there wasn’t an emoji she could use.
In early 2016, the fact that the dumpling had officially achieved candidate emoji status in California hit the headlines around the world, from New York, to London, to Beijing; even the broadcast media got in on the act. I was invited onto BBC Radio to discuss the success of the Dumpling Kickstarter project, headlining with Lu herself. The Kickstarter campaign – to raise the necessary funds to prepare the proposal – had been a self-evident success, achieving over $12,000 and reaching its target within a few hours of going live. But the headlines raise the very question: why all the fuss about dumplings? Isn’t this simply frivolity gone mad, an expensive bit of silliness?
On the contrary: emoji matters. The Dumpling Project stands for far more than a simplistic bid to have the favourite food of a Bay area business woman become sanctioned as an emoji. It is an instance of internet democracy at work: indeed, the slogan of the project was ‘emoji for the people, by the people’.
One reason why emoji matters is this: love it or loathe it, emoji is today the world’s global form of communication. A quarter of the world’s population owns a smartphone, and over 80% of adult smartphone users regularly use emoji, with figures likely to be far higher for under-18s. In short, most of the world’s mobile computing users use emoji much of the time. And yet, the catalogue of emojis that show up on our smartphones and tablet computers – the vocabulary that connects 2 billion people – is controlled by a handful of American multinationals: eight of the eleven full members of the Unicode Consortium are American: Oracle, IBM, Microsoft, Adobe, Apple, Google, Facebook and Yahoo. Moreover, the committee reps of these tech companies are overwhelmingly white, male, and computer engineers – hardly representative of the diversity exhibited by the global users of emojis. Indeed, as of 2015, the majority of food emojis were associated with North American culture, with some throwbacks to the Japanese origins of emoji (such as a sushi emoji).
Hence, one motivation for the Dumpling Project was to ensure better representation. Of course, on its own, a campaign and proposal for a new food emoji cannot do much. But as an appeal to global cultural and culinary diversity, and as a call for better representation of this diversity, the dumpling is a powerful emblem. Emoji began as a bizarre, little-known North Asian phenomenon; since then, control has come to rest in the hands of American corporate giants. Dumplings, on the other hand, in their various shapes and guises, are truly international, and get at the global nature of emoji.
Perhaps more than anything, the Dumpling Project is fun; and in terms of emoji, a sense of fun is the watchword. While these colourful glyphs add a dollop of personality to our digital messaging, the Dumpling Project makes a powerful point without resorting to burning either bras or effigies. It avoids gender, religion or politics in conveying a simple message about inclusiveness in the world’s most widely used form of communication. And in the process, it provides us with an object lesson in the unifying and non-threatening nature of emoji. Perhaps the world can, indeed, be united for the better by this new, quasi-universal form of communication.
Communication and emotional intelligence
Setting aside dumplings, one of the serious questions surrounding the rise and rise of emoji is this: why has the uptake of emoji grown exponentially – why is it a truly global system of communication? Some see emoji as little more than an adolescent grunt, taking us back to the dark ages of illiteracy. But this prejudice fundamentally misunderstands the nature of communication. And in so doing it radically underestimates the potentially powerful and beneficial role of emoji in the digital age as a communication and educational tool.
All too often we think of language as the mover and the shaker in our everyday world of meaning. But, in actual fact, most of the meaning we convey and glean in our everyday social encounters comes from nonverbal cues. In the spoken medium, gesture, facial expression, body language and speech intonation provide a means of qualifying and adjusting the message conveyed by the words. A facial wink or smile nuances the language, providing a crucial contextualisation cue, aiding our understanding of the spoken word. And intonation not only ‘punctuates’ our spoken language – there are no white spaces and full stops in speech that help us identify where words begin and sentences end – intonation even provides ‘missing’ information not otherwise conveyed by the words.
Much of our communication is nonverbal. Take gesture: our gestures are minutely choreographed to co-occur with our spoken words. And we seem unable to suppress them. Watch someone on the telephone; they’ll be gesticulating away, despite their gestures being unseen by the person on the other end of the line. Indeed, if gestures are suppressed, in lab settings say, then our speech actually becomes less fluent. We need to gesture to be able to speak properly. And, by some accounts, gesture may have even been the route that language took in its evolutionary emergence.
Eye contact is another powerful signal we use in our everyday encounters. We use it to manage our spoken interactions with others. Speakers avert their gaze from an addressee when talking, but establish eye contact to signal the end of their utterance. We gaze at our addressee to solicit feedback, but avert our gaze when we disapprove of what they are saying. We also glance at our addressee to emphasise a point we’re making.
Eye gaze, gesture, facial expression, and speech prosody are powerful nonverbal cues that convey meaning; they enable us to express our emotional selves, as well as providing an effective and dynamic means of managing our interactions on a moment-by-moment timescale. Face-to-face interaction is multimodal, with meaning conveyed in multiple, overlapping and complementary ways. This provides a rich communicative environment, with multiple cues for coordinating and managing our spoken interactions.
Digital communication increasingly provides us with an important channel of communication in our increasingly connected 21st century social and professional lives. But the rich, communicative context available in face-to-face encounters is largely absent. Digital text alone is impoverished and emotionally arid. Digital communication, seemingly, possesses the power to strip all forms of nuanced expression even from the best of us. But here emoji can help: it fulfils a similar function in digital communication to gesture, body language and intonation, in spoken communication. Emoji, in text messaging and other forms of digital communication, enables us to better express tone and provide emotional cues to better manage the ongoing flow of information, and to interpret what the words are meant to convey.
It is no fluke, therefore, that I have found, in my research on emoji usage in the UK, commissioned by TalkTalk Mobile, that 72% of British 18-25 year olds believe that emoji make them better at expressing their feelings. Far from leading to a drop in standards, emoji are making people – especially the young – better communicators in their digital lives.
[ii] SwiftKey, April 2015
[iii] http://www.wnyc.org/story/your-subway-agony/ (accessed 8th July 2015 7.30pm BST).
Blog post written by Peter Trudgill, author of Dialect Matters – Respecting Vernacular Language
Academic linguists are often asked questions like: Is it really bad form to sometimes split your infinitives? What exactly is wrong with saying “I done it”? Why is the pronunciation of younger people these days so irritating? Why is it OK to drop the k in know but not the h in house? Why do railway companies prefer to have customers alighting from trains rather than passengers getting off them? And what is so important about sentences not starting with a conjunction?
This book argues in favour of the language of ordinary people. It champions everyday vocabulary, such as passenger, as opposed to business-school jargon like customer. It supports nonstandard dialects, including forms such as I done it, in the face of the tyranny of the view that the standard dialect is the only “correct” and “grammatical” version of the language. It cherishes the English used by native speakers in their everyday lives, not least where they appear to defy the views of pedants who attempt to impose “rules” on us – for example about split infinitives – which have been invented for no good reason. It makes the case for vernacular usage as opposed to politically correct language. It demands respect for local ways of pronouncing local place-names. It asserts the primacy of spoken language and explains the importance of discourse markers like “like”. And it defends minority languages like Welsh and Navajo, where these are threatened by majority languages like English.
The book is a collection of my weekly columns on accent and dialect from the Eastern Daily Press newspaper, revised and annotated for a wider audience. Many of these essays deal with the history of the English language. Others explain the origins of place-names. Some discuss the ways in which languages change while dismissing the loaded notions of deterioration and progress. Several of the columns look at political problems brought about by language issues; and stress the tragedy of language death. The coverage ranges from England to New England and Moldova; from the languages of indigenous Australians and Americans to the Old Norse tongue of the Vikings; and from vocabulary to phonetics and grammar. One of the pieces even boasts what is quite possibly the first ever usage in a regional British newspaper of the word phonotactics.
One of the main purposes of these columns is to broadcast a message of anti-prescriptivism, anti-linguicism, and respect for demotic linguistic practices. Prescriptivism is a form of prejudice which is so widely accepted in the English-speaking world that it is taken by many people to be axiomatic. Prescriptivists believe that there is only one way in which English “ought” to be spoken and written, and that any deviation from this is “ignorant” or “wrong”. If you ask them their justification for claiming that the sentence I done it is wrong, they may well answer that “everybody knows” it is. In this book, I try to show that this is not so. And I oppose negative attitudes like this – which are sadly held even by many highly educated and otherwise thoughtful people – by proposing that we should cultivate a positive stance towards all the different ways in which English is spoken around the world.
By the term “linguicism” I refer to a phenomenon which is, in its way, every bit as pernicious as racism and sexism, and which these days is more publicly and shamelessly displayed than those other evil phenomena. Linguicism involves being negative towards and discriminating against people because of their accent, dialect or native language. The totally false idea that some dialects of English are – in some mysterious and never specified way –“better” than others has many unfortunate consequences, not least the denigration of whole groups of our fellow human beings.
But I also attempt to convey the message that language is a mysterious, fascinating and enjoyable phenomenon which not enough people know enough about. I have attempted to use my columns as an opportunity to show that language is an extraordinarily interesting phenomenon, especially when we do our best to think about it analytically and positively, without preconceptions and prejudice. Nothing is more important to human beings than language; and I hope that in this book I have succeeded in illustrating the degree to which all languages and dialects are not only worthy of respect and preservation but, as complex creations of human societies and of the human mind, are also highly rewarding and pleasing to discover more about.
All the 150 or so columns in the book are about language in some shape or form, and contain linguistic information and insights which will be of interest to university students and teachers of linguistics, as well as to high-school English Language teachers and their classes: indeed they have already been used to stimulate discussion in classrooms from New Zealand and the USA to the British Isles. For the benefit of this type of reader, most of the pieces in this book are accompanied by brief Linguistic Notes of a technical nature, which general readers need not bother with unless they want to achieve a more academic understanding of the issues involved. Local background notes are also provided where necessary for readers not familiar with the East-of-England background of a number of the columns.
1. Can you define uptalk very briefly for those who don’t know?
Uptalk is the use of rising intonation (voice pitch) at the ends of statements or parts of statements. It is sometimes referred to as the use of question intonation on statements, but this is misleading, because not all questions have rising intonation (indeed there are many question types that tend to have falling intonation, such as those which have a wh-word at the beginning, like who, what, where), and there are rises on statements that are different from uptalk rises (such as on non-final items in a list like apples, oranges, bananas and pears, or the ‘continuation rise’ that you are likely to hear at the comma in Although this has a rise, it is not a question). Typically uptalk, which is also known as upspeak and high rising terminal (amongst other terms), is used to keep an interaction going, inviting the listener into the conversation. This is a specific instance of a more general property of high pitch to show openness, while lower pitch tends to mark finality or closure. However, because rising intonation is frequently associated with questions, many lay observers criticise ‘uptalkers’ for being uncertain about what they are saying. Interestingly, though, studies have shown that uptalk is highly likely in narrative contexts, such as when people are recounting something they have witnessed or experienced firsthand. These are unlikely to be situations where the speaker is uncertain.
2. What inspired you to write Uptalk?
As a psycholinguist, I devote a lot of my research time to looking at how we produce and understand language, especially spoken language. I have for a long time had a particular interest in how listeners interpret the intonation in utterances that they hear, and when I moved to New Zealand, a country where uptalk has a longer history than in most of the world, I was intrigued by how this particular form of intonation was interpreted. It was clear to me that non-uptalkers frequently arrived at a different interpretation from that intended by the speaker. This interest resulted in a series of research studies, during which I learned more about uptalk in different varieties of English and in other languages too. It seemed a natural next step to put what I had learned into a book where others – whether or not they are linguistics researchers – could have ready access to the wealth of information that is out there concerning the history, spread, and use of uptalk around the world.
3. How much does it vary according to the speaker’s age, gender and regional dialect?
There are certain parts of the world where uptalk has been a feature of spoken English for quite a long time: New Zealand, Australia and parts of Canada and the United States (particularly California). But it has been reported in many other English-speaking countries, as well as in other languages, particularly where there is either contact with English-speaking communities or a clear influence of the English language on youth culture. Typically, it is associated with young women, but it is by no means exclusively used by females, nor just by the young. Indeed, a number of studies have shown that people of the generation who were young uptalkers in the 1980s have continued to use uptalk as they have grown older. There may be some historical basis for saying that uptalk is a feature of young female speech, since linguists have shown that it is often young women who initiate a change in patterns of language use. Now, however, the claim that young women are the main users of uptalk is probably more a stereotype than a reality. In fact, uptalk is so common in some parts of the English-speaking world that subtle distinctions are developing between what uptalk rises and true question rises sound like, as part of making the difference clearer.
4. What are the key features and benefits that readers will take away from Uptalk?
What I have tried to do in this book is provide a comprehensive overview of what uptalk is like, including how it differs from other forms of rising intonation; what its many functions and meanings are; how it is distributed and used across the many varieties of English (and other languages) in which it is found; which speaker groups are more likely to use it; and how it is perceived and interpreted by listeners. For those interested in how researchers have investigated uptalk, there is also a chapter on methodology. Because there has been so much discussion of uptalk in newspapers and self-help books, as well as on the radio and television, I also wanted to provide an exploration of the media response to uptalk, including some discussion of the types of statements often used in support of the largely negative claims made by journalists and others. So Uptalk covers a lot of ground, and should be of interest to both linguists and non-linguists alike.
Find out more about Uptalk: The Phenomenon of Rising Intonation
Blog Post written by Eve V. Clark (Stanford University), author of the recently published First Language Acquisition (3rd Edition)
How early do infants start in on language?
Even before birth, babies recognize intonation contours they hear in utero, and after birth, they prefer listening to a familiar language over an unfamiliar one. And in their first few months, they can already discriminate between speech sounds that are the same or different.
How early do infants understand their first words, word-endings, phrases, utterances?
Children learn meanings in context, both from hearing repeated uses of words in relation to their referents, and from feedback from adults when they use a word correctly or incorrectly. When a child is holding a ball, the mother might say “Ball. That’s a ball”, and the child could decide that “ball” picks out round objects of that type. Still, it may take many examples to establish the link between a word-form (“ball”) and a word-meaning (round objects of a particular type) and to relate the word “ball” to neighbouring words (throw, catch, pick up, hold). It takes even longer for the child’s meaning of a word to fully match the adult’s.
When do infants produce their first words and truly begin to talk?
Infants babble from 5–10 months on, giving them practice with simple syllables, but most try their first true words at some time between age 1 and age 2 (a broad range). They find certain sounds harder to pronounce than others, and certain combinations (e.g., clusters of consonants) even harder. It therefore takes practice to arrive at the adult pronunciations of words – to go from “ba” to “bottle”, or from “ga” to “squirrel”. Like adults, though, children understand much more than they can say.
What’s the relation between what children are able to understand and what they are able to say?
Representing the sound and meaning of a word in memory is essential for recognizing it in the speech of others. Because children are able to understand words before they produce them, they can use the representations of words they already understand as models to aim for when they try to pronounce those same words.
How early do children begin to communicate with others?
A few months after birth, infants follow adult gaze, and they respond to adult gaze and to adult speech face-to-face, with cooing and arm-waving. As they get a little older, they attend to the motion in adult hand-gestures. By 8 months or so, they recognize a small number of words, and by 10 months, they can also attend to the target of an adult’s pointing gestures. They themselves point to elicit speech from caregivers, and they use gestures to make requests – e.g., pointing at a cup as a request for juice. They seem eager to communicate very early.
How do young children learn their first language?
Parents check up on what their children mean, and offer standard ways to say what the children seem to be aiming for. Children use this adult feedback to check on whether or not they have been understood as they intended.
Do all children follow the same path in acquisition?
No, and the reason for this depends in part on the language being learnt. English, for example, tends to have fixed word order and relatively few word endings, while Turkish has much freer word order and a large number of different word-endings. Languages differ in their sound systems, their grammar, and their vocabulary, all of which has an impact on early acquisition.
These and many other questions about first language acquisition are explored in the new edition of First Language Acquisition. In essence, children learn language in interaction with others: adults talk with them about their daily activities – eating, sleeping, bathing, dressing, playing; they expose them to language and to how it’s used, offer feedback when they make mistakes, and provide myriad opportunities for practice. This book reviews findings from many languages as it follows the trajectories children trace during their acquisition of a first language and of the many skills language use depends on.
First Language Acquisition (third edition), Cambridge University Press 2016
By Abby Kaplan author of Women Talk More Than Men and Other Myths about Language Explained
For years now, observers have been alert to a growing social menace. Like Harold Hill, they warn that there’s trouble in River City — with a capital T, and that rhymes with P, and that stands for Phone.
Mobile phones are a multifaceted scourge; they’ve been blamed for everything from poor social skills to short attention spans. As a linguist, I’m intrigued by one particular claim: that texting makes people illiterate. Not only are text messages short (and thus unsuited for complex ideas), they’re riddled with near-uninterpretable abbreviations: idk, pls, gr8. Young people are especially vulnerable to these altered forms; critics frequently raise the specter of future students studying a Hamlet who texts 2B or not 2B.
The puzzling thing is that none of these abominable abbreviations are unique to text messaging, or even to electronic communication more generally. There’s nothing inherently wrong with acronyms and initialisms like idk; similar abbreviations like RSVP are perfectly acceptable, even in formal writing. The only difference is that idk, lol, and other ‘textisms’ don’t happen to be on the list of abbreviations that are widely accepted in formal contexts. Non-acronym shortenings like pls for please are similarly unremarkable; they’re no different in kind from appt for appointment.
Less obvious is the status of abbreviations like gr8, which use the rebus principle: 8 is supposed to be read, not as the number between 7 and 9, but as the sound of the English word that it stands for. The conventions for formal written English don’t have anything similar. But just because a technique isn’t used in formal English writing doesn’t mean that technique is linguistically suspect; in fact, there are other written traditions that use exactly this principle. In Ancient Egyptian, for example, the following hieroglyph was used to represent the word ḥr ‘face’:
It’s not a coincidence, of course, that the symbol for the word meaning ‘face’ looks like a face. But the same symbol could also be used to represent the sound of that word embedded inside a larger word. For example, the word ḥryt ‘terror’ could be written as follows:
Here, the symbol has nothing to do with faces, just as the 8 in gr8 has nothing to do with numbers. The rebus principle was an important part of hieroglyphic writing, and I’ve never heard anyone argue that this practice led to the downfall of ancient Egyptian civilization. So why do we think textisms are so dangerous?
Even if there’s nothing wrong with these abbreviations in principle, it could still be that using them interferes with your ability to read and write the standard language. If you see idk and pls on a daily basis, maybe you’ll have a hard time remembering that they’re informal (as opposed to RSVP and appt). But on the other hand, all these abbreviations require considerable linguistic sophistication — maybe texting actually improves your literacy by encouraging you to play with language. We all command a range of styles in spoken language, from formal to informal, and we’re very good at adjusting our speech to the situation; why couldn’t we do the same thing in writing?
At the end of the day, the only way to find out what texting really does is to go out and study it in the real world. And that’s exactly what research teams in the UK, the US, and Australia have done. The research in this area has found no consistent negative effect of texting; in fact, a few studies have even suggested that texting might have a modest benefit. It seems that all the weeping and gnashing of teeth about the end of literacy as we know it was premature: the apocalypse is not nigh.
Of course, this doesn’t mean that we should all spend every spare minute texting. (I’m a reluctant texter myself, and I have zero interest in related services like Twitter.) There are plenty of reasons to be thoughtful about how we use any technology, mobile phones included. What we’ve seen here is just that the linguistic argument against texting doesn’t hold water.
View the Women Talk More Than Men…and Other Language Myths Explained Book Trailer by clicking on the image below…
Cambridge University Press and Studies in Second Language Acquisition are pleased to announce that the recipients of the 2016 Albert Valdman Award for outstanding publication in 2015 are Gregory D. Keating and Jill Jegerski for their March 2015 article, “Experimental designs in sentence processing research: A methodological review and user’s guide”, Volume 37, Issue 1. Please join us in congratulating these authors on their contribution to the journal and to the field.
Post written by Gregory D. Keating and Jill Jegerski
We wish to express our utmost thanks and gratitude to the editorial and review boards at SSLA for selecting our article, ‘Experimental designs in sentence processing research: A methodological review and user’s guide’ (March 2015), for the Albert Valdman Award for outstanding publication. The two of us first became research collaborators several years ago as a result of our mutual interests in sentence processing, research methods, research design, and statistics. With each project that we have undertaken, we’ve had many fruitful and engaging conversations about best practices in experimental design and data analysis for sentence processing research. This article is the product of many of our own questions, which led us to conduct extensive reviews of existing processing studies. Our recommendations are culled from and informed by the body of work we reviewed, as well as our own experiences conducting sentence processing research. Stimulus development and data analysis can pose great challenges. It is our hope that the information provided in our paper will be a useful resource to researchers and students who wish to incorporate psycholinguistic methods into their research agenda and that the study of second language processing will continue to flourish in the future.
Blog post by David McNeill, author of Why We Gesture: The Surprising Role of the Hands in Communication
Why do we gesture? Many would say it brings emphasis, energy and ornamentation to speech (which is assumed to be the core of what is taking place); in short, gesture is an “add-on,” as Adam Kendon (who also rejects the idea) phrases it. However, the evidence is against this. The lay view of gesture is that one “talks with one’s hands”: you can’t find a word, so you resort to gesture. Marianne Gullberg debunks this ancient idea. As she succinctly puts it, rather than gesture starting when words stop, gesture stops as well. So if, contrary to lay belief, we don’t “talk with our hands,” why do we gesture? This book offers an answer.
The reasons we gesture are more profound. Language itself is inseparable from it. While gestures enhance the material carriers of meaning, the core is gesture and speech together. They are bound more tightly than saying the gesture is an “add-on” or “ornament” implies. They are united as a matter of thought itself. Thought with language is actually thought with language and gesture indissolubly tied. Even if the hands are restrained for some reason and a gesture is not externalized, the imagery it embodies can still be present, hidden but integrated with speech (and may surface in some other part of the body, the feet for example).
The book’s answer to the question of why we gesture is not that speech triggers gesture but that gesture orchestrates speech; we speak because we gesture, not we gesture because we speak. In bald terms, to orchestrate speech is why we gesture. This is the “surprise” of the subtitle—“The surprising role of the hands in communication.”
To present this hypothesis is the purpose of the current book. The book is the capstone of three previous books—an inadvertent trilogy spanning 20 years—“How Language Began: Gesture and Speech in Human Evolution,” “Gesture and Thought,” and “Hand and Mind: What Gestures Reveal about Thought.” It merges them into one multifaceted hypothesis. The integration itself—that it is possible—is part of the hypothesis. Integration is possible because of its central idea—implicit in the trilogy, explicit here—that gestures orchestrate speech.
A gesture automatically orchestrates speech when it and speech co-express the same meaning; then the gesture dominates the speech; syntax is subordinate and breaks apart or interrupts to preserve the integrity of the gesture–speech unit. Orchestration is the action of the vocal tract organized around a manual gesture. The gesture sets its parameters, the order of events within it, and the content of the speech with which it works. The amount of time speakers take to utter sentences is remarkably constant, between 1 and 2 seconds regardless of the number of embedded sentences. It is also the duration of a gesture. All of this is experienced by the speaker as the two awarenesses of the sentence that Wundt distinguished in the 19th century. The “simultaneous” is awareness of the whole gesture–speech unit. It begins with the first stirrings of gesture preparation and ends with the last motion of gesture retraction. The “successive” is awareness of “…individual constituents moving into the focus of attention and out again,” and includes the gesture–speech unit as it and its gesture come to the surface and then sink again beneath it.
The gesture in the first illustration, synchronized with “it down”, is a gesture–speech unit, and using the Wundt concepts we have:
“and Tweety Bird runs and gets a bowling ba[ll and ∅tw drops it down the drainpipe]”
Simultaneous awareness of gesture–speech unity starts at the left bracket (“ba[ll”); the gesture–speech unity enters successive awareness at “it down” and leaves successive awareness immediately after it; simultaneous awareness of gesture–speech unity ends at the right bracket (“drainpipe]”).
The transcript shows the speech the gesture orchestrated, and when: the entire stretch, from “ball” to “drainpipe”, is the core meaning of “it down” plus the image of thrusting the bowling ball into the drainpipe in simultaneous awareness. The same meaning appeared in successive awareness, the gesture stroke in the position the construction provided, there orchestrating “it” and “down” together.
The “drops” construction provides the unpacking template and adds linguistic values. Its job is to present the gesture–speech unit, including Tweety’s agent-power in the unit. Gesture–speech unity is alive and not effaced by constructions. To the contrary, the construction presents the Sylvester-up/Tweety-down conflict in socially accessible form. This unit must be kept intact in the speech flow. What is striking, and why the example is illustrative, is that “it down” was divided by the construction into different syntactic constituents (“it” the direct object, “down” a locative complement), yet the word pair remained a unit orchestrated by the gesture. In other examples, speech stops when continuing would break up a gesture–speech unit; the unit controls them. A gesture–speech unity dominates.
How did it all come about? It occurred because “it down,” plus the co-expressive thrusting gesture, was the source (the “growth point”) of the sentence. The growth point came about as the differentiation of a field of equivalents having to do with HOW TO THWART SYLVESTER: THE BOWLING BALL DOWN. It unpacked itself into shareable form by “summoning” the causative construction (possible because a causative meaning was in the gesture–speech unit from the start of the preparation – the speaker’s hands already in the shape of Tweety’s “hands” as the agent of thrusting). Thus “it down” and its stroke were inviolate from the start: the stroke orchestrated the two words as a unit, and the gesture phrase the construction as a whole. I believe the situation illustrated with “it down” permeates the production of speech in all conditions and different languages.
1 Participants retell an 8-minute Tweety and Sylvester classic they have just watched, from memory, to a listener (a friend, not the experimenter). Using Kendon’s terminology and our notation, the gesture phrase is marked by “[” and “]”. The stroke, the image-bearing phase and only obligatory phase of the gesture, is marked in boldface (“it down”). Preparation is the hand getting into position to make the stroke and is indicated by the span from the left bracket to the start of boldface (“ba[ll and ∅tw drops”). Preparation shows that the gesture, with all its significance, is coming into being – there is no reason for the hands to move into position and take on form other than to perform the stroke. Holds are cessations of movement, either prestroke (“drops”), the hand frozen awaiting co-expressive speech, or poststroke (“down”), the hand frozen in the stroke’s ending position and hand shape after movement has ceased until co-expressive speech ends. Holds of either kind are indicated with underlining. They provide a precise synchrony of gesture-orchestrated speech in successive awareness. Retraction is also an active phase, the gesture not simply abandoned but closing down (“the drainpipe,” movement ending as the last syllable ended – in some gestures, though not here, the fingers creep along the chair arm rest until this point is reached). In writing growth points – a field of equivalents being differentiated and the psychological predicate differentiating it – we use FIELD OF EQUIVALENTS: PSYCHOLOGICAL PREDICATE (“HOW TO THWART SYLVESTER: THE BOWLING BALL DOWN”).
A “strong prediction.” Our arguments predict that GPs in successive awareness remain intact no matter the constructions that unpack them. This follows from the expectation that unpacking will not disrupt a field of equivalents or its differentiation. Belonging to different syntactic constituents – the “it” with “drops” and the “down” with “the drainpipe” – did not break apart the “it down” GP. Instead, syntactic form adapted to gesture. The example shows that gesture is a force shaping speech, not speech shaping gesture. Gesture–speech unity means that speech and gesture are equals, and in gesture-orchestrated speech the dynamic dimension enters from the growth point. In a second version of the “strong prediction,” speech stops if continuing would break the GP apart. The absolute need to preserve the GP in successive awareness then puts a brake on speech flow, even when it means restarting with a less cohesive gesture–speech match-up that doesn’t break apart the GP.
Gestures of course do not always occur. This is itself an aspect of gesture: there is natural variation in gesture occurrence. Apart from forced suppressions (as in formal contexts), gestures fall on an elaboration continuum, their position an aspect of the gesture itself. The reality is imagery with speech ranging over the entire continuum. It is visuoactional imagery, not a photo. Gesture imagery linked to speech is what natural selection chose, acting on gesture–speech units free to vary in elaboration. As what Jan Firbas called communicative dynamism varies, the gesture–speech unit moves from elaborate movement to no movement at all. To speak of gesture–speech unity we include gestures at all levels of elaboration, including micro-level steps.
An example of the difference it makes is a word-finding study by Sahin et al. of conscious patients about to undergo open-skull surgery, from which the authors conclude that lexical, grammatical and phonological steps occur with distinctive delays of about 200 ms, 320 ms and 450 ms, respectively. We hypothesize that gesture should affect this timing for the 1–2 seconds the orchestration lasts (no gestures were recorded in the Sahin study). If the idea unit differentiating a past time in a field of meaningful equivalents begins with an inflected verb plus imagery, does the GP’s onset wait 320 or 450 ms? Delay seems unlikely (although it would be fascinating to find). It may be no faster (and perhaps slower) to say “bounced” in an experiment where a subject is told to make the root word into a past tense than to differentiate a field of equivalents with past time gesturally spatialized and the gesture in this space.
To see gesture as orchestrating speech opens many windows—how language is a dynamic process; a glimpse of how language possibly began; that children do not acquire one language but two or three in succession; that gestures are unique forms of human action; that a specific memory evolved just for gesture–speech unity; and how speech works so swiftly, everything (word-finding, unpacking, gesture–speech unity, gesture-placement, and context-absorption) done in a couple of seconds with workable (not necessarily complete) accuracy.