Post written by author Misha Becker discussing her recently published book ‘The Acquisition of Syntactic Structure’.
Young children are fascinated by animals and captivated when inanimate things are made to come alive. Is there some way their understanding of the difference between “alive” and “not alive” can help them learn language?
In this book I explain a well-known puzzle in linguistic theory by arguing just that. Children expect the sentence subject (often the “do-er” of an action) to be animate, alive. So when they encounter a sentence where the subject is the rock or the house, they are led to revise their understanding of the sentence to create a more complex underlying structure. This is what helps them understand the difference between a sentence like The house is easy to see, where the house is the thing being seen, and The girl is eager to see, where the girl is (or will be) doing the seeing. If you didn’t know the meaning of easy or eager, as very young children will not, how would you interpret these sentences? Imagine you hear a sentence like The girl/house is daxy to see. Does it matter whether the subject is girl or house in your guess about what daxy means, and in your interpretation of the seeing event?
I came to the idea for this book when I noticed how strongly adult speakers were influenced by animacy when I tried to make them think of certain abstract structures. When presented with “The girl ____ to be tall”, people were more likely to write a verb like want or claim in the blank, but presented with “The mountain ____ to be tall”, they were more likely to write seem or appear. Yet the underlying structure of the sentence differs, depending on whether the sentence contains want/claim or seem/appear. In linguistic parlance, the subject of seem/appear is “derived” – it doesn’t really belong, thematically, to the verb, and in this sense the structure is more abstract and complex. It occurred to me that if adults were so strongly influenced by animate vs. inanimate subjects, then children might be as well.
This book describes numerous studies with children showing how the fundamental distinction between alive and not-alive interacts with their understanding of language and the world around them. But it also examines other facets of the animacy distinction with regard to language: how languages around the world place restrictions on animate and inanimate sentence subjects, how adults use animacy in their understanding of sentence structure, how and when babies first begin to represent the concept of animacy, and how computational models can be developed to simulate the use of a distinction like animacy in language learning. The final chapter of the book addresses the timeless question of where this understanding comes from: is the concept of animacy innate, learned, or both?
Find out more about Misha Becker’s new book ‘The Acquisition of Syntactic Structure’, published by Cambridge University Press.
Post written by author Lionel Wee discussing his recently published book The Language of Organizational Styling
Organizations are interesting because of the promise and problems they represent. They have promise because they allow individuals to pool their resources and scale up their activities, thus making it possible to achieve things at a supra-individual level. In fact, one might say that this is the very reason why organizations exist at all. At the same time, there is great irony in the fact that, having been created, many organizations then go on to acquire an existence and independence beyond the goals and wishes of their founders. Especially when constituted as virtual persons, organizations can make claims and exert rights that sometimes come into conflict with those of individuals.
One might say, with perhaps only slight exaggeration, that organizations are a form of artificial intelligence – created by us but then coming to have priorities and values that are not always within our control. And just like their better-known computational counterparts, organizations, too, are often portrayed in dystopian terms. Especially in popular media, big businesses are ideologically characterized as faceless, anonymous and profit-seeking entities that undermine the authentic nature of life in small towns and neighborhoods by eroding their individuality and rendering them homogeneous. Scholarly analyses are of course more nuanced, but even here, while organizations have figured prominently as direct objects of study in sociology and business studies, they have been somewhat neglected in sociolinguistics. Organizations usually come into play as part of the backdrop against which the activities of individuals or communities are constrained or enabled; they are rarely the actual focus.
From a sociolinguistic perspective, however, organizations are fascinating because – just like individual speakers – they are entities that employ various semiotic resources, in particular linguistic resources, in order to project specific kinds of identities, cultivate certain kinds of relationships with other organizations, and foster ties with various communities. But precisely because organizations are entities sui generis, their communiqués and other linguistic activities cannot be reduced to those of the individuals who populate them without at the same time raising a number of conceptual problems. This is because the organization in principle exists above and beyond the intentions and activities of any single individual, however powerful or senior that person might be. And this raises the rather interesting question of how organizations might best be studied.
This is where the sociolinguistic notion of style proves useful, in my view. The analytical beauty of a style-theoretic framework is that it raises issues of strategy, agency and choice as being in need of more careful attention. Speakers make stylistic choices, though not always freely, which means that they have to be mindful of the social and political consequences of these choices. But curiously, the stylistic practices of organizations have not been subjected to any in-depth sociolinguistic analysis and theorizing, even though the extrapolation of style from speaker activity to organizational activity seems a natural one to make. And once this extension is seriously contemplated, we can start asking questions such as the following: Do organizations engage in styling the other? What might prompt an organization to attempt to re-style itself, and what kinds of linguistic maneuvers are involved? Given that big businesses are often seen as anathema to the preservation of a community’s identity, how do big businesses then attempt to overcome this ideological bias? How does talking about organizational styling differ from talking about branding or corporate communications? And perhaps most fundamental of all, does the application of the notion of style to organizational activity require us to revisit and re-evaluate any of our current assumptions about the nature of style (since the predominant tendency is to think of style in connection with people rather than organizations)?
The sociolinguistic study of organizations is relatively new but important, given how ubiquitous organizations are in our lives. Many of us work in organizations; we have our lives regulated by organizations; and more than a few of us join (religious, political, grassroots) organizations because we feel that the goals they pursue can give meaning to our lives.
Find out more about Lionel Wee’s new book ‘The Language of Organizational Styling’, published by Cambridge University Press.
Post written by author Deborah Brandt discussing her recently published book The Rise of Writing
The belief that writing ability is a subsidiary of reading ability runs deep in society and schooling. You can only write as well as you can read. The best way to learn how to write is to read, read, and read some more. Commonplaces like these are easy to find in the advice of teachers and often well-known authors as well. Reading is considered the fundamental skill, the prior skill, the formative skill, the gateway to writing. At minimum, reading is thought to teach the techniques of textuality, the vocabulary, diction, spelling, punctuation, and syntax that any aspiring writer must master. Even more profound, reading is thought to shape character and intellect and provide the wisdom and worldliness that make one worthy to write. In every way reading is treated as the well from which writing springs. We need only try to reverse the commonplace advice to appreciate the superior position that reading holds. How many would readily agree that you can only read as well as you can write? Or that the best way to learn how to read is to write, write, and write some more? Writing has never attained the same formative and morally wholesome status as reading. Indeed, writing unmoored from the instructiveness of reading is often considered solipsistic and socially dangerous.
But in the wider society and over the last fifty years or more, writing has ascended as the main basis of many people’s daily literacy experiences and the main platform for their literacy development. Millions of working adults now spend four hours or more each day (sometimes, a lot more) with their hands on keyboards and their minds on audiences, writing so much, in fact, that they have little time or appetite for reading. In the so-called information economy writing has become a dominant form of labor and production. As a result, writing is eclipsing reading as the literate experience of consequence. Spurred on especially by digital technologies, writing is crowding out reading and subordinating reading to its needs. The rise of writing over reading represents a new chapter, and a new challenge, in the history of mass literacy, a challenge especially for the school, which from its founding has been much more organized around a reading literacy, around a presumption that readers would be many and writers would be few.
But now writers are becoming many. What are some of the changes that we need to pay attention to? Increasingly, people read from inside acts of writing, as they respond to others; research, edit or review other people’s writing; or search for styles or approaches to use in their own writing. “Reading to write” in school has usually meant using reading to stimulate ideas or generate content, but in the wider world reading to write actually stands for a broader, more diverse, more diffused, more sustained and more comprehensive set of practices. Increasingly, how and why we write conditions how and why we read. Relatedly, we write among other people who also write. Learning to write along with other people who write (rather than from authors who address us abstractly) is a new aspect of mass literacy development. Audiences are made up not merely (or mostly) of receptive readers but also responsive writers; increasingly people write to catalyze or anticipate other people’s writing and people read with the aim of writing back.
Further, in an information society, writing is consequential. The kind of writing done by everyday people turns the wheels of finance, law, health care, government, commerce. As the power and consequence of writing courses through the consciousness of everyday people, their acts of writing are often sites of intellectual, moral, and civic reflection, but not necessarily in the same ways as acts of reading. Reading is an internalizing process. That is why the effects of literacy have been sought mostly on the inside: in the formation of character or the quality of inner life or intellectual growth. But writing is a relentlessly externalizing process. Because writing unleashes language into the world, it engages people’s sense of power and responsibility. It can be expected to bring more wear and tear, potentially more trouble. Writing risks social exposure, blame, even, in some cases, retaliation. It requires a level of courage and ethical conviction rarely cultivated in school-based literacy and rarely measured in standard assessments of writing ability.
We are at a critical crossroads in the history of mass literacy in which relationships between writing and reading are undergoing profound change. Writing is overtaking reading as the skill of critical consequence. Until only recently writing was a minor strain in the history of mass literacy, playing second fiddle to reading. But it is surging into prominence, bringing with it a cultural history, a set of cognitive dispositions, and a developmental arc that stand in contrast to reading. As an educational community, we have been slow to incorporate these shifting relationships into the questions we ask and the perspectives that we take. That writing remains so under-studied and under-articulated in comparison to reading is perhaps our greatest challenge.
To find out more about Deborah Brandt’s new book published by Cambridge University Press please click here
In this insightful talk John C Wells, Emeritus Professor of Phonetics at University College London, discusses his latest book with Cambridge University Press, ‘Sounds Interesting: Observations on English and General Phonetics’, along with his research interests and, of course, his acclaimed phonetics blog (the content of which has helped to populate this new book).
Figurative Language, written by Barbara Dancygier and Eve Sweetser, is a lively, comprehensive and practical book which offers a new, integrated and linguistically sound understanding of what figurative language is. The following extract is taken from the Introduction.
Thinking about figurative language requires first of all that we identify some such entity – that we distinguish figurative language from non-figurative or literal language. And this is a more complex task than one might think. To begin with, there appears to be a circular reasoning loop involved in many speakers’ assessments: on the one hand they feel that figurative language is special or artistic, and on the other hand they feel that the fact of something’s being an everyday usage is in itself evidence that the usage is not figurative. Metaphor, rather than other areas of figurative language, has been the primary subject of this debate. Lakoff and Johnson (1980) recount the story of a class taught by Lakoff at Berkeley in the 1970s in which he gave the class a description of an argument and asked them to find the metaphors. He expected that they would recognize phrases such as shoot down someone else’s argument, bring out the heavy artillery, or blow below the belt as evidence of metaphoric treatment of argument as War or Combat. Some class members, however, protested, saying, But this is the normal, ordinary way to talk about arguing. That is, because these usages are conventional rather than novel, and everyday rather than artistic, they cannot be metaphoric. However, there are many reasons to question this view, and to separate the parameters of conventionality and everyday usage from the distinction between literal and figurative. One of these is historical change in meaning: historical linguists have long recognized that some meaning change is metaphoric or metonymic. For example, around the world, words meaning ‘see’ have come to mean ‘know’ or ‘understand.’ Indeed, in some cases that past meaning is lost: English wit comes from the Indo-European root for vision, but has only the meaning of intellectual ability in modern English. 
But in other cases, such as the see in I see what you mean, metaphoric meanings in the domain of Cognition exist alongside the original literal Vision uses. This knowing is seeing metaphor is extremely productive: transparent, opaque, illuminate, and shed light on are among the many English locutions which are ambiguous between literal visual senses and metaphoric intellectual ones. Do we want to say that because these are conventional usages, they are not metaphoric? In that case, we would have to separate them completely from less entrenched uses which show the same metaphoric meaning relationship: if someone says they have examined a candidate’s record with a magnifying glass, we probably don’t want to say that there should be a dictionary entry for magnifying glass listing this usage. Still less would we want to make a new dictionary entry if someone said they had gone over the data with an electron microscope. As has been widely argued, starting with Lakoff and Johnson, the most plausible hypothesis here is that while wit is no longer metaphoric, transparent and shed light on are metaphoric – and that it is precisely the habitual use of conventional instances of the knowing is seeing metaphor which helps motivate innovative uses.
It is thus possible for metaphor or metonymy to motivate conventional extensions of word meanings – and figurative links which are pervasively used in this way shape the vocabularies of the relevant languages. At a first approximation, then, we might say that figurative means that a usage is motivated by a metaphoric or metonymic relationship to some other usage, a usage which might be labelled literal. And literal does not mean ‘everyday, normal usage’ but ‘a meaning which is not dependent on a figurative extension from another meaning.’ We will be talking about the nature of those relationships in more detail soon, but of course metaphor and metonymy are not the only motivations for figurative usage. In this context, we might say that polysemy – the relationship between multiple related conventional meanings of a single word – is often figurative in nature. English see continues to manifest simultaneously meanings related to physical vision and ones related to cognition or knowledge: Can you see the street signs? coexists with Do you see what I mean?.
Read the full excerpt here or find out more about Figurative Language here.
written by Professor Bernard Spolsky
It’s great to be relevant! A few weeks after my sociolinguistic history of the Jewish people was published, a Reuters story highlighted a dispute between the visiting Pope Francis and the Israeli Prime Minister over the language spoken by Jesus (Reuters, 28 May 2014). “Jesus spoke Hebrew”, Netanyahu stated. “Aramaic”, responded the Pope. He almost certainly knew both Hebrew and Aramaic, and also Greek (and maybe a little Latin), I would have answered, as I did in one of the earliest studies I published, which marked my growing interest in the language of the Jews.
But this disagreement turns out to be only one of the many examples of disputes that I found in my research. There are, I learned, scholars who argue that Jews stopped speaking Hebrew soon after they returned from the first exile in Babylonia (say about 700 BCE), and others who find evidence that it was still spoken after the destruction of the Second Temple by the Romans, as late as the second century of the Common Era. A nine hundred year spread seems a lot; however, seeing we have very little direct evidence of who spoke what, but must depend on much later written sources, we can understand the uncertainties of historical sociolinguistics.
As I carried out my studies, a number of similar major disagreements and doubts emerged. One concerned the origin of the Jewish variety that developed the strongest claims to status as a language, Yiddish. There are continuing arguments (some almost violent) about the location where Jews (still reading and writing Hebrew but speaking another variety derived from a non-Jewish “co-territorial” language) started to speak Medieval German and made it their own by adding many words and phrases from Hebrew (or actually from the Hebrew and Aramaic that had become the regular language of religious expression and literacy). The classic theory by the major scholar, Max Weinreich, holds that Yiddish started when Jews speaking a French-based language moved into the Rhineland and, before the Crusades set up barriers between them and Christians that drove them into ghettos, picked up the local spoken German dialect. Another theory (and one that Weinreich recognizes in the footnotes which add a second volume to his monumental history of Yiddish) argues that Yiddish developed further east, in Regensburg in Swabia. Others suggest it developed even further east: one theory holds that it was Jews living in Prague speaking a Slavic-based variety who adopted it from the German-speaking Swabian farmers who moved in and populated the region in the 13th century. There are more extreme theories: one Israeli scholar has put forward the notion that it derives from a relexified version of the language of the Sorbians, whom he believed converted to Judaism, and others relate it to the mythical accounts of the conversion of the Khazars (but recent research has challenged any genetic evidence for the Khazarian hypothesis that Koestler proposed, and has cast serious doubt on the stories of the conversion itself, just as unlikely as the 13th century belief that the invading Mongols were Jews or one of the missing Ten Tribes).
“…the fact that Jewish children mainly attended schools in the local national languages suggests that even without the subsequent Soviet banning of Yiddish culture and the Nazi extermination of millions of its speakers, Yiddish too would soon have become an endangered language”.
There is no question that East European Jews developed Yiddish into their main spoken language (although there were many variants that are traced in the major Yiddish dialect atlas that is now appearing), although they continued to pray and write Hebrew. Only in the late 19th century did Yiddish literature start to appear, reaching a high point in the 20th century between the two wars. Here again, there is a quarrel, for in spite of the double standardization (one by YIVO in Warsaw and Vilna, and the second under Soviet imprimatur in Moscow) and the associated burgeoning of secular Yiddish writing, the fact that Jewish children mainly attended schools in the local national languages suggests that even without the subsequent Soviet banning of Yiddish culture and the Nazi extermination of millions of its speakers, Yiddish too would soon have become an endangered language.
Jewish varieties developed elsewhere in the extensive Diaspora. Jews expelled from Spain in 1492 took with them a language variety which developed in North Africa into Haketia and in the Balkans and Turkey into Ladino, which itself developed in time a strong literature. Ladino was replaced in Turkey first by French (when the Alliance Israélite Universelle set up schools for Jews) and then by Turkish (reformed and established by Kemal Atatürk). Jews in the Arabic-speaking world developed varieties of Judeo-Arabic, used in the Middle Ages to write philosophical and religious works (as second-class citizens, Jews and Christians under Islam were forbidden to learn Classical Arabic); in North Africa, they switched to French after colonization, and by the time they were expelled from Arab countries after the UN decision in 1947, they had little loyalty to Arabic and were easily persuaded to adopt local hegemonic languages, whether Hebrew in Israel or French in France.
And in the West, emancipation, and even more the introduction of compulsory state education in the national language, worked against the continuation of Jewish varieties, most of which by now are spoken, if at all, by the elderly. But there remain some signs of life – Yiddish has been adopted as a spoken variety for boys in some Hasidic sects. And there have grown up postvernacular activities for many of the languages: local groups that learn and read Yiddish or Ladino, theatres that present plays in these two languages and in Judeo-Arabic, websites that teach and preserve a number of Jewish varieties; for supporters, the varieties have symbolic and not communicative relevance. And there are signs of the creation of new Jewish varieties, such as the Jewish English learned by newly-observant young Jews, incorporating the Yiddish and Hebrew words and grammar of the Haredim.
The study of Jewish language varieties is quite new, and it is made especially difficult because the historical evidence we have of spoken language is limited, and dependent often on much later written developments. But tracing their history, we can learn how the wandering Jew fared in different times and places, and how Hebrew remained and still remains the main force for identity.
Find out more about The Languages of the Jews and download an excerpt here.
Arabic linguistics is a vast field combining study of the Arabic language with the analytical disciplines that constitute the field of linguistics. Linguistic theories, methods, and concepts are used to analyze the structure and processes of Arabic; but at the same time, Arabic with its millennium-long intellectual traditions, its complex morphology, and its current broad diversity of registers, informs linguistic theory. Many linguistic approaches to Arabic language analysis have been applied over the past fifty years both within the Arab world and from the point of view of western scholars. These approaches and their disciplinary procedures are both varied and convergent, covering a wealth of data but also coming to terms with central issues of concern to Arabic linguistics that had been neglected in the past, such as validating the prominent role of vernacular Arabic and variation theory in Arabic society and culture. Arabic linguistics is now an active subfield in sociolinguistics, corpus linguistics, and computational linguistics as well as theoretical and applied linguistics. Both traditional and new genres of Arabic writing are now being examined within postmodern frameworks of literary theory and linguistic analysis. Media Arabic studies is a new and rapidly growing field; medieval texts are being re-examined in the light of new philology and discourse analysis; previously ignored forms of popular culture such as songs, advertisements, oral poetry, vernacular writing, letters, email, and blogs are now legitimate grist for the linguistics mill.
The discipline of linguistics has a growing number of subfields. The traditional four core divisions usually include theoretical linguistics, applied linguistics, sociolinguistics, and computational linguistics. Each of these has developed new applications, perspectives, hypotheses, and discoveries that extend their analytical power in novel ways, such as cognitive linguistics in theoretical linguistics, second language acquisition in applied linguistics, corpus linguistics in the computational field, and discourse analysis in sociolinguistics. When these perspectives and theories are applied to Arabic, the findings can be revealing, satisfying, or puzzling, but generally lead toward greater understanding of how languages work, how they resemble each other, and how they differ. The field of computational linguistics has provided ways to develop extensive corpora of spoken and written Arabic that can be used for pioneering research and analysis of language in use. An active subfield of linguistics – history of linguistics – examines linguistic historiography, the development of language analysis over time, and the evolution of grammatical theory in different cultures.
The phonological, morphological, and syntactic structures of Arabic reflect its Semitic origins and its essential differences from Indo-European languages. These differences and their cultural embeddedness are what make Arabic of interest to research in many fields of linguistics. For example, the particularly well-defined and elaborated verb system with its derivations reflects an aspect of classical Arabic that is both fascinating and rigorous in its structure and linguistic logic. As another example, the contrasts between vernacular and written language, their different roles within Arab society, and the tensions between local and regional linguistic identities, form areas of sociolinguistics that pose particular challenges to data collection, empirical study, and objective analysis. Many research challenges and opportunities still lie ahead in this regard.
Read the full excerpt here.
Middle Egyptian, written by Professor James Allen, introduces the reader to the writing system of ancient Egypt and the language of hieroglyphic texts. It explores the most important aspects of ancient Egyptian history, society, religion, literature, and language. Grammar lessons and cultural essays allow users not only to read hieroglyphic texts but also to understand them, providing the foundation for understanding texts on monuments and reading great works of ancient Egyptian literature. This third edition is revised and reorganized, particularly in its approach to the verbal system, based on recent advances in understanding the language. (The following excerpt is taken from Chapter 1.)
1. Language and Writing
Egyptian is the ancient and original language of Egypt. It belongs to the language family known as Afro-Asiatic or Hamito-Semitic and is related to both of that family’s branches: North African languages such as Berber and Beja, and Asiatic languages such as Arabic, Ethiopic, and Hebrew. Within Afro-Asiatic, Egyptian is unique. It has features that are common to both branches, although it is closer to the African side of the family.
Egyptian first appeared in writing shortly before 3200 BC and remained a living language until the eleventh century AD.¹ Beginning with the Muslim conquest of Egypt in AD 641, Arabic gradually replaced Egyptian as the dominant language in Egypt. Today, the language of Egypt is Arabic. Egyptian is a dead language, like Latin, which can only be studied in writing, though it is still spoken in the rituals of the Coptic (Egyptian Christian) Church. Throughout its long lifetime, Egyptian underwent tremendous changes. Scholars classify its history into two phases and five major stages:
1) Old Egyptian is the first stage of the language. Although Egyptian writing is first attested before 3200 BC, these early inscriptions (called Archaic Egyptian) consist only of names and labels. Old Egyptian proper is dated from approximately 2700 BC, when the first extensive texts appeared, until about 2100 BC.
2) Middle Egyptian (or Classical Egyptian) is closely related to Old Egyptian. First attested around 2100 BC, it survived as a spoken language for some five hundred years but remained the standard hieroglyphic language for the rest of ancient Egyptian history. Middle Egyptian is the phase of the language discussed in this book.
3) Late Egyptian began to replace Middle Egyptian as the spoken language after 1600 BC, and it remained in use until about 600 BC. Though descended from Old and Middle Egyptian, Late Egyptian differed substantially from the earlier phases, particularly in grammar. Traces of Late Egyptian can be found in texts earlier than 1600 BC, but it did not appear as a full written language until after 1300 BC.
4) Demotic developed out of Late Egyptian. It first appeared around 650 BC and survived until the fifth century AD.
5) Coptic is the name given to the final stage of ancient Egyptian, which is closely related to Demotic. It appeared at the end of the first century AD and was spoken for nearly a thousand years thereafter. The last known texts written by native speakers of Coptic date to the eleventh century AD.
Egyptian also had several dialects. These regional differences in speech and writing are best attested in Coptic, which had five major dialects. They can only be partly detected in the writing of earlier phases of Egyptian, but they undoubtedly existed then as well: a letter from about 1200 BC complains that a correspondent’s language is as incomprehensible as that of a northern Egyptian speaking with an Egyptian from the south. The southern dialect of Coptic, known as Saidic, was the classical form; the northern one, called Bohairic, is the dialect used in Coptic Church services today.
The basic writing system of ancient Egyptian consisted of about five hundred common signs, known as hieroglyphs. The term “hieroglyph” comes from two Greek words meaning “sacred carvings,” which are a translation, in turn, of the Egyptians’ own name for their writing system, “the god’s speech.” Each sign in this system is a hieroglyph, and the system as a whole is called hieroglyphic (not “hieroglyphics”).
Unlike Mesopotamian cuneiform or Chinese, whose beginnings can be traced over several hundred years, hieroglyphic writing seems to appear in Egypt suddenly, around 3250 BC, as a complete system. Scholars are divided in their opinions about its origins. Some suggest that the earlier, developmental stages of hieroglyphic were written on perishable materials, such as wood, and simply have not survived. Others argue that the system could have been invented all at once by an unknown genius. Although it was once thought that the idea of writing came to Egypt from Mesopotamia, recent discoveries indicate that writing arose independently in Egypt.
People since the ancient Greeks have tried to understand this system as a mystical encoding of secret wisdom, but hieroglyphic is no more mysterious than any other system that has been used to record language. Basically, hieroglyphic is nothing more than the way the ancient Egyptians wrote their language. To read hieroglyphic, therefore, you have to learn the Egyptian language.
 Some scholars prefer BCE and CE rather than BC and AD. Because both conventions use the same benchmark (see Essay 9), however, this book retains the older system.
Read the full excerpt from Middle Egyptian, An Introduction to the Language and Culture of Hieroglyphs, here.
Blog post by Remi van Trijp based on a recent article in Language and Cognition
One of the most notorious problems in linguistics is how to handle “long-distance dependencies”: utterances in which some elements seem to have been taken away from their original position and moved to a different place. Typical examples are WH-questions such as “What did you see?”, in which the direct object (“what”) takes sentence-initial position instead of following the verb, as it would in a declarative utterance (e.g. “I saw the game”).
But what makes long-distance dependencies so difficult? Most linguists assume a tree structure (or “phrase structure”) for analyzing utterances. As a data structure, a tree consists of nodes that have at most one parent node, which means that information in a tree can only trickle down from a parent to its immediate children, or percolate upwards in the other direction. A tree structure is thus hopelessly inadequate for representing dependencies between nodes at the top of the hierarchy and nodes situated somewhere below. The most common solution to this problem is to say that there is a “gap” where we would normally expect a part of the utterance. Information about the gapped element then has to be communicated node-by-node upwards in the tree until the “filler” of the gap is found.
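The node-by-node percolation described above can be made concrete with a small sketch. This is not the formalism of any particular theory, just a toy illustration (in the spirit of “slash” feature passing) of why gap information must be threaded upward through every intermediate node; the `Node` class and all labels are invented for the example.

```python
# Toy illustration of gap percolation in a phrase-structure tree.
# A node has at most one parent, so the only way the top of the tree
# can "know" about a gap deep inside it is for every intermediate
# node to pass that information up, step by step.

class Node:
    def __init__(self, label, children=(), gap=None):
        self.label = label
        self.children = list(children)
        # A node's gap is either its own, or whichever gap (if any)
        # one of its children has passed up to it.
        self.gap = gap or next((c.gap for c in self.children if c.gap), None)

# "What did you see __?" -- the direct object of "see" is a gap.
gap_np = Node("NP", gap="what")            # the missing direct object
vp = Node("VP", [Node("V:see"), gap_np])   # gap percolates to VP...
s = Node("S", [Node("NP:you"), vp])        # ...and on up to S
# Only at the top of the tree can the filler "what" meet its gap:
assert s.gap == "what"
```

Every layer added between the gap and the filler means one more hand-off, which is exactly the bookkeeping the cognitive-functional approach described below dispenses with.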
In recent years, however, a cognitive-functional alternative has started to crystallize in which long-distance dependencies spontaneously emerge as a side effect of how grammatical constructions interact with each other in order to cater for the different communicative needs of language users. For example, the difference between “I like ice cream” and “Ice cream I like” can be explained simply as the tendency for speakers to put the most topical information at the front of the sentence – suggesting that word order should be decoupled from an utterance’s hierarchical structure.
While this view was long dismissed as ad hoc and as not lending itself to proper scientific formalization, there now exists a formally explicit computational implementation of the cognitive-functional alternative in Fluid Construction Grammar, which works for both parsing and production. The implementation eliminates all of the formal machinery needed for filler-gap analyses by chopping down the syntax tree: rather than taking a tree structure as the sole device for representing all of the information in an utterance, different linguistic perspectives are represented on an equal footing (including an utterance’s information structure, functional structure, illocutionary force, and so on).
The implementation shows that a cognitive-functional approach to long-distance dependencies outperforms the filler-gap analysis in several domains: it is more parsimonious, more complete (i.e. it includes a processing model) and it offers a better fit to empirical data on language evolution.
Access the entire article without charge until 31st July 2014
Written by Dr. Aneta Pavlenko, Professor of Applied Linguistics, Temple University
We are often asked about the relevance of linguistics for the ‘real world’. On June 2, 2014, I got an opportunity to explain this relevance to the judge, the media, and the general public when I testified as an expert witness in the pre-trial hearing of a Kazakh national, Dias Kadyrbayev, friend of the accused Boston Marathon bomber Dzhokhar Tsarnayev. The hearing was not about guilt or innocence. Its purpose was to determine whether Dias understood his Miranda rights – to remain silent, to request a lawyer, and to have a lawyer provided to him for free – and the consequences of waiving them. There were two complications: the FBI interrogation was not recorded, nor could I test Dias’ proficiency directly because by the time his lawyers contacted me, he had spent more than eight months in jail, interacting and reading in English.
Based on my previous experience with a similar case, I requested Dias’ test scores, academic records, and written texts produced by him prior to the interrogation. I also asked him to write a language learning history based on my prompts. Then I used his test scores from 2011 to establish his baseline proficiency (“no lower than”), linguistic patterns in his learning history to establish a ceiling proficiency (“no higher than”), and linguistic patterns in his writings from 2012-2013 to infer his proficiency at the time in terms of the ACTFL proficiency guidelines. My analysis suggested that at the time of the interrogation he had an Intermediate level of English proficiency and was highly unlikely to understand his Miranda rights without linguistic accommodations, such as clarification, translation, or interpretation.
In court, I tried to explain why a Russian speaker who relied on simple sentences, such as “I am feel bad” or “I did them very bad”, may be unable to automatically process sentences, such as “If you cannot afford a lawyer, one will be appointed for you before any questioning if you wish”. The reasons are many: syntactic complexity, low-frequency words, polysemy, differences between Russian and English in sentence structure and temporal marking, unfamiliarity with the privilege against self-incrimination, and the fact that the rights were presented under stress, in a short time span, without linguistic assistance. My task should have been fairly easy, right?
Sorry to say, fellow linguists, it was not a triumphant experience – eyes glazed when I uttered the terms ‘language proficiency’ and ‘predictive validity’ and mouths opened in extended yawns when I listed ‘deep embedding’, ‘double conditionals’, ‘ellipsis’ and ‘polysemy’ as features that make understanding the Miranda rights challenging for non-native speakers of English. Next-day media reports showed that I failed to communicate my points effectively. The failure lies squarely on my shoulders – I should have found better terms and examples – yet it also stems from different assumptions in academia and the ‘real world’ about language and evidence.
Here is the ‘real world’ version. Kadyrbayev studied English for 6 years in Kazakhstan and spent 4 weeks in the UK and 8 weeks in the US, prior to his arrival in the US in 2011. The prosecution argued that this was a record of ‘extensive’ study, sufficient to establish his English-language competence. They also stated that being a student at the University of Massachusetts at Dartmouth was in and of itself sufficient evidence of Kadyrbayev’s English proficiency.
In academia, years of language study are not a valid predictor of proficiency, due to highly variable instruction quality, and the only evidence that counts is test scores. Kadyrbayev’s 2011 score of 5.5 on IELTS, an international test of English proficiency, indicates, according to the IELTS guide, a low level of proficiency that requires further English study prior to taking any academic courses or even linguistically demanding ESL courses. So how did he get to be a student at UMass?
In fact, Kadyrbayev was not a UMass student – he was enrolled in a program run at UMass by a for-profit corporation, Navitas, that recruits foreign students who can pay for their courses and promises them that after two years they can transfer into the regular program. In Kadyrbayev’s case, this was not to be, because Navitas did not heed his low IELTS scores and, instead of offering the ESL instruction he badly needed, enrolled him in academic courses, such as math and chemistry, where he struggled to understand what was going on. The mismatch between his level of proficiency and the linguistic demands of his courses led to plagiarism, absenteeism, failed courses, academic probation, and, in February 2013, dismissal from the program. But this does not mean he could not speak English, right?
To make their case, the prosecution emphasized his ability to interact in everyday situations and use colloquial English. These arguments, however, present language as binary, where you either have it or you don’t, and evidence of ‘some’ English suffices as evidence of ‘all’. Researchers, on the other hand, see proficiency in terms of levels and emphasize that speaking skills, or Basic Interpersonal Communicative Skills (BICS) in Jim Cummins’ terms, are acquired earlier than Cognitive Academic Language Proficiency (CALP), necessary to process the Miranda rights.
Consider consent forms. In academia, regulations for the protection of human subjects require us to write research consent forms in plain English and to translate them for speakers with lower levels of English proficiency. In the criminal justice system, there is no requirement to present the Miranda warnings in any language other than English, and a signature on the Miranda form is sufficient evidence of understanding. For linguists, on the other hand, the signature is evidence only of understanding that the form had to be signed; the sole valid evidence of understanding of Miranda rights is their restatement in one’s own words.
I left the courtroom that day asking myself: would an American detained in Kazakhstan consent to go through the proceedings in his non-fluent Russian or Kazakh? And if not, how can our own criminal justice system address its monolingual bias and could it “afford” to do so “if it wished”? In my own view, it can and it should – the policies and best practices suggested by research are neither expensive nor time-consuming. The adoption of ‘plain English’ forms and standardized translations of the Miranda warnings, in combination with the requirement to restate the rights in one’s own words, would go a long way towards addressing the disparity in the system. This also may be the only way to ensure that boring linguists like me do not reappear in court.
If you enjoyed this post, find out more about The Bilingual Mind, here.