Tasks, methodological transparency and the IRIS database of research materials

Commentary by Emma Marsden, University of York and Margaret Borowczyk, Georgetown University

IRIS is a repository of instruments used in second language research. It was created to increase access to the variety of materials used to elicit data for empirical studies (e.g., pictures, participant instructions, language tests, response options, working memory tests, videos, software scripts). These materials are often left out of research reports, mainly because of publishers’ space constraints. IRIS allows readers to evaluate the validity of research more directly and improves the speed and accuracy of replication research. It is a free, theory-agnostic database that is searchable across more than one hundred criteria (such as ‘type of instrument’, ‘research area’, or ‘language’). IRIS currently holds more than two and a half thousand files, bundled into almost a thousand complete sets of data collection tools. Most instruments are downloaded by Ph.D. students (4,600 downloads to date), followed by Master’s students (4,400) and language teachers (2,370). This suggests that new generations of second language researchers are making productive use of the resource and building their studies on pre-trialed, peer-reviewed instruments, which should help to develop more tightly connected research agendas and increase our understanding of the validity and reliability of the tools we use. Critically, materials downloaded from IRIS can be adapted by others to suit the particular context under investigation.

The Annual Review of Applied Linguistics (ARAL) published its first empirical articles in the 2016 issue, which focused on tasks. (Prior to this, ARAL had published reviews exclusively.) All the instruments used for the studies in the 2016 issue are part of the IRIS repository. ARAL will continue to publish empirical studies (as well as review and position papers), and all instruments used for ARAL articles will be shared via the IRIS database, to the benefit of the second language research community. Indeed, ARAL is an official journal of AAAL, and AAAL, in line with the methodological reform movement in applied linguistics and beyond, now highlights IRIS in its publication guidelines.

The 2016 ARAL issue on tasks contains several articles that used valuable instruments, each with wide appeal. For example, Plonsky and Kim (2016) provided a meta-analysis of 85 studies that analyzed task-based learner language and shared their coding scheme (an Excel file) on IRIS. Their instrument makes explicit the target features (e.g., grammar, vocabulary, pronunciation, pragmatics), methodological features (e.g., study designs, sampling, analyses, reporting practices), and contextual and demographic variables that entered into their analysis. IRIS contains six other meta-analysis instruments, including coding schemes for meta-analyses of L2 strategy instruction (Plonsky, 2011), learner corpus research (Paquot & Plonsky, in press), test format effects on reading and listening test performance (In’nami & Koizumi, 2009), and task and rater effects in L2 speaking and writing (In’nami & Koizumi, in press). Overall, these instruments have proven popular, with the meta-analysis coding schemes garnering over 100 downloads as of February 2017.

Révész and Gurzynski-Weiss (2016) contributed an article that combined introspective and behavioral data from teachers to examine what made tasks easy or difficult from teachers’ perspectives. The researchers asked 16 ESL teachers to look at slides that detailed four tasks and to (1) assess the linguistic ability students would need to carry out the tasks and (2) consider how they would adapt the tasks to suit the needs of learners at lower and higher proficiency levels. As the teachers contemplated these questions, they were asked to vocalize what they were thinking, and their eye movements were tracked to provide information about the extent to which they interacted with the task instructions and pictorial input. The slides that Révész and Gurzynski-Weiss used to elicit these data are available on IRIS. The repository contains many other think-aloud protocols, which have been used in studies of semantic implicit learning (Paciorek & Williams, 2015), the reactivity of verbal reports (Bowles, 2008), and strategy instruction for reading comprehension (Karimi, 2015), among numerous others. As the second language research communities working with think-aloud and psycholinguistic data expand, we expect IRIS to be an invaluable resource.

Finally, in the 2016 task issue of ARAL, Li, Ellis, and Zhu (2016) conducted a study comparing the effectiveness of task-based and task-supported instruction for the acquisition of the English passive construction. The effects of four treatments (no instruction, pre-task explicit instruction, within-task feedback with no instruction, and within-task feedback with explicit instruction) were measured using a grammaticality judgment test (JT) and an elicited imitation test (EIT). Both instruments are available on IRIS. Searching for ‘elicited imitation’ shows that another 45 similar instruments are available in a wide range of languages, including Arabic, Japanese, Russian, and Vietnamese. This is an indication of the growing interest in this method, not only as a measure of sensitivity to specific language features but also as a potentially reliable proxy for general language proficiency. JTs (the other instrument used by Li, Ellis, and Zhu) are, in fact, the second most downloaded instruments on IRIS (following questionnaires, which elicit data on, for example, language awareness, language background, and learning strategies). JTs from 425 studies across a wide range of subfields are available, providing a varied and comprehensive assortment from which to draw. Of these, 315 have been gathered into an IRIS ‘Special Collection’ (see the button on the Search and Download page), as they are linked to a methodological synthesis of this hugely popular technique (Plonsky, Marsden, Gass, Crowther, & Spinner, in preparation). Another Special Collection on IRIS holds approximately 60 self-paced reading tests, also linked to a methodological synthesis (Marsden, Thompson, & Plonsky, under review). Other researchers are welcome to develop such collections linked to syntheses or meta-analyses they are undertaking.

Materials, including data and analysis protocols, are eligible for upload to IRIS if they have been used for any peer-reviewed publication, including Ph.D. theses. In tandem with methodological reform movements in other fields, ARAL, along with thirty other journals in the field, encourages its authors to make their materials available on IRIS. For further information, see the IRIS FAQs or contact iris@iris-database.org.
