Project Descriptions

Project Title:  Effectiveness of L2 written corrective feedback in developing L2 accuracy: A Bayesian meta-analysis

Status:  Under Review  

Collaborators: Qiandi Liu (University of South Carolina) & Reza Norouzian (Texas A&M University)

Under consideration for presentation at 2023 AAAL, Portland, OR

Corrective feedback on L2 writing remains one of the most heavily studied areas in applied linguistics, yet clear guidance for practitioners about how to provide it most effectively remains limited. A few previous studies have meta-analyzed written corrective feedback (WCF) research, such as Kang and Han (2015), which analyzed 21 studies to reveal an overall moderate effect of WCF in developing L2 writers’ accuracy in new pieces of writing. Research on WCF has since increased dramatically, signaling a domain ripe for more meaningful meta-analysis, particularly for further insight from moderator analysis, which can help advance understanding of critical questions regarding how, when, and for whom WCF is most effective. This study provides a needed update and features several methodological advances, introducing a Bayesian approach to meta-analysis (Norouzian et al., 2018, 2019) that can provide a more valid picture of the generalizable effects of WCF. The analysis also distinguishes, for the first time at the meta-analytic level, between short-, medium-, and long-term WCF effects. Aggregate data from an initial 52 primary studies that utilized control groups reveal robust evidence of the durability of WCF effects on delayed posttests and offer deeper insight into the relative effectiveness of various types of WCF across research contexts and task, feedback, and instructional characteristics. Different types of WCF (e.g., direct, indirect, metalinguistic) yielded similar effect sizes, although direct combined with metalinguistic WCF resulted in the greatest gains in accuracy. An advantage was also found for rule-governed linguistic features as targets (e.g., simple verb tense) over idiomatic ones (e.g., prepositions), and results suggest diminishing returns for WCF, as additional treatments (after the first) did not lead to incremental gains.
We conclude with recommendations for practitioners and guidance for researchers to help continue methodological advances in this domain.
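The inverse-variance logic that underlies pooling study-level effect sizes can be sketched with a toy conjugate normal model. This is an illustration only, not the model, software, or data used in the study; the effect sizes and variances below are invented.

```python
# Toy Bayesian pooling of study effect sizes (normal-normal conjugate model).
# Assumes within-study variances are known; all numbers are hypothetical.

def bayesian_pooled_effect(effects, variances, prior_mean=0.0, prior_var=1.0):
    """Posterior mean and variance of a common effect under a normal prior."""
    precision = 1.0 / prior_var          # prior contributes its own precision
    weighted_sum = prior_mean / prior_var
    for d, v in zip(effects, variances):
        precision += 1.0 / v             # each study adds precision (1/variance)
        weighted_sum += d / v            # inverse-variance weighting of effects
    post_var = 1.0 / precision
    post_mean = weighted_sum * post_var
    return post_mean, post_var

# Hypothetical effect sizes (Cohen's d) and variances from five studies
effects = [0.45, 0.60, 0.30, 0.75, 0.50]
variances = [0.04, 0.09, 0.06, 0.12, 0.05]
mean, var = bayesian_pooled_effect(effects, variances)
```

Note how the posterior variance is necessarily smaller than any single study's variance: pooling always sharpens the estimate under this (admittedly simplistic) common-effect assumption.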

Project Title:  "Significance sells": Researcher views on the ethics of quantitative data handling and reporting

Status:  Invited, in preparation for Research Methods in Applied Linguistics

Collaborators: Luke Plonsky (Northern Arizona University), Meishan Chen (Kennesaw State University), Romy Ghanem (Northern Arizona University), María Nelly Gutiérrez Arvizu (Universidad de Sonora), Meixiu Zhang (Texas Tech University)

Researcher judgment is inevitable. Some decisions are clear cut and fall in line with what is termed ‘responsible conduct of research’ (RCR). Other choices, however, fall into a gray area, such as selective reporting of findings (e.g., when p < .05) or strategic omission of methodological details. John et al. (2012) and others have referred to such acts as “questionable research practices” (QRPs). Building on the increased concern over research and reporting practices in applied linguistics and elsewhere in the social sciences (e.g., Gass et al., 2021; Open Science Collaboration, 2015), Isbell et al. (2021) sought to examine the presence of QRPs as well as more clearly egregious practices such as data fabrication and falsification (i.e., fraud) in applied linguistics. The findings, based on a survey of 351 scholars, paint a somewhat disappointing picture of the ethics of quantitative researchers in the field. Approximately 17% of the sample admitted to one or more forms of fraud, and nearly all (94%) reported engaging in one or more QRPs. In addition to numeric survey data, Isbell et al. also collected responses from participants regarding their views of the different QRPs in the survey. The present study focuses on those responses in an attempt to shed light on the choices researchers make while handling, analyzing, and reporting quantitative data. Participants’ comments will be considered vis-à-vis the aggregate-level survey data reported in Isbell et al. A number of themes will also be highlighted and discussed, such as researcher training, the changing landscape of methodological standards, and the role of top-down (e.g., editorial) guidelines and reform. Recommendations will also be provided to guide researcher practice as well as graduate training and field-wide standards.

Project Title:  Misconduct and questionable research practices: The ethics of quantitative data handling and reporting in applied linguistics

Status:  Published in The Modern Language Journal (2021)

Collaborators:  Daniel Isbell (University of Hawai'i), Meishan Chen (Kennesaw State University), Deirdre Derrick, Romy Ghanem (Northern Arizona University), María Nelly Gutiérrez Arvizu (Universidad de Sonora), Erin Schnur, Meixiu Zhang (Texas Tech University), Luke Plonsky (Northern Arizona University)

Scientific progress depends on the integrity of data and research findings. Intentionally distorting research data and findings constitutes scientific misconduct and introduces falsehoods into the scientific record (Fanelli, 2009). Unintentional distortions arising from questionable research practices (QRPs), such as unsystematically deleting outliers, pose similar obstacles to knowledge advancement. To investigate the extent of misconduct and QRPs in quantitative applied linguistics research, we surveyed 351 applied linguists who conduct quantitative research about their practices related to data handling and reporting. We found that 17% of respondents (approximately 1 in 6) admitted to one or more forms of scientific misconduct and that 94% admitted to one or more QRPs relevant to quantitative research. We also examined these practices in relation to participant background and training. Researchers admitting to misconduct tended to be earlier in their careers and had experienced publication rejection due to lack of statistically significant results. Quantitative training had generally desirable associations with QRPs. Publication rate and experience with publication rejection were associated with admission of several QRPs related to omitting statistical results. We discuss these findings in the context of quantitative research reform efforts and academic research culture, culminating in five recommendations for the field of applied linguistics to improve ethical quantitative data handling and reporting in research.

Project Title:   The role of pragmatic markers in perceptions of L2 interactive fluency

Status:  Under Review 

Collaborators:  Dr. Julieta Fernandez (University of Arizona), Dr. Amanda Huensch (University of Pittsburgh)

Presented at:  AAAL 2018 (Chicago, IL)

“You have problems with fluency. You use lots of pauses and too many fillers.” Language learners are often confronted with this kind of assessment of their oral performance. Meanwhile, researchers and practitioners have not reached agreement about what it means to be orally fluent in L2 interaction. This study helps illuminate the processes and associated pragmatic features that contribute to fluent interaction. Specifically, we focus on the perceived value of pragmatic markers (PMs) that occur frequently in L1 interaction but relatively less often between L2 learners (Fung & Carter, 2007), such as interpersonal (e.g., I see, okay) and cognitive PMs (e.g., you know, I think). These ‘fillers’ have been associated with disfluency in L2 assessment, which often relies on measures designed for monologic performance (Tavakoli, 2016). Despite evidence of the functional value of these PMs in interaction (McCarthy, 2009), they are rarely introduced in L2 classrooms. Motivated by these discrepancies, we present the results of a matched-guise experiment that investigated the perceived value of PMs by manipulating audio samples to control for them. Several speech samples of L1 and highly proficient L2 interaction were digitally edited: in some, all interpersonal and cognitive PMs were removed; in others, these types of PMs were inserted to simulate expert speaker-like PM use, controlling for frequency, placement, and variety. English L1 and L2 raters (N = 261) judged each interlocutor’s fluency, with distractors between samples, in a counterbalanced design. A two-way ANOVA revealed that speakers were perceived as significantly more fluent when they made use of PMs than when they did not. Raters’ qualitative comments supported these findings and uncovered a bias favoring native-speaker PM use. The results support the value of explicitly addressing PMs in teaching L2 pragmatics and help researchers understand dialogic fluency to guide its measurement and assessment.

Project Title:   A typology of data collection instrumentation for formative L2 classroom observation   

Status:  In preparation  

Collaborators:  Helena Kore

Reflection plays a central role in teacher education, and the collection of data about one’s teaching serves as a vital tool for reflective teacher education activities. This work brings together the spectrum of options that L2 teachers and teacher educators can consider in collecting formative classroom observation data for reflective purposes. We set out to collect existing published formative L2 teaching observation instruments and to present the types of data that have been collected and the various tools that have been used in this process. We present prototypical examples and describe key elements and variations of instruments identified in over 60 sources. The result will assist practitioners in linking potential questions they may have about their teaching, their classrooms, or their students’ behavior to ideas for and examples of data collection instruments. Practitioners will find this synthesis valuable in identifying, adapting, adopting, or designing new observation tools that are tailored to local professional development goals.

Project Title:   The interaction between grammatical knowledge and feedback type in L2 written corrective feedback

Presented at:  2016 Symposium of Second Language Writing (Phoenix, AZ)

Second language (L2) writing researchers and practitioners continue to seek theoretical and empirical evidence to support expert provision of written corrective feedback (WCF) for L2 writers.  Research on the effects of WCF in developing accuracy over time has begun to generate convincing support for the practice (e.g., Kang & Han, 2015; van Beuningen, de Jong, & Kuiken, 2012), with many of these findings pointing to advantages for direct methods of WCF (i.e., supplying learners with correct forms) over indirect approaches (i.e., indicating the location and/or type of error to prompt students to self-correct, often through use of a code).  These findings, however, conflict with the opinions of many researchers, writing teachers, and students who support the use of indirect approaches, which are thought to encourage students’ analytic reflection, engagement, and processing of the feedback they receive (Ferris, 2011). To explore the contextual factors that could moderate the effectiveness of different feedback types, several researchers have hypothesized variables such as affect, learner developmental readiness, and the nature of the errors (e.g., the ‘treatability’ of error types; Ferris, 2011; Russell & Spada, 2006), although scant empirical research supports these claims.  This study is the first to investigate whether learners’ preexisting grammatical knowledge can predict the effectiveness of more or less explicit feedback types.

The methods used to investigate this relationship follow a recent trend in SLA studies that focus on Aptitude-Treatment Interaction (ATI).  ATI research (see Li, 2013; Yilmaz, 2013) attempts to match specific types of instruction with the learners who will benefit most from them, based on individual difference characteristics. Using a quasi-experimental design, the study is being conducted in a Thai university EFL setting (n = 100) with three groups of students: one receiving direct feedback, one receiving indirect feedback, and one serving as a control. The six target error types were determined by analyzing essays written by students in previous semesters. A grammatical knowledge test was designed and piloted to measure the sub-constructs of error correction ability, metalinguistic knowledge, and error type recognition ability (Cronbach’s α = .89).  Students received feedback on seven unreferenced academic essays written over a semester in timed settings.
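For readers unfamiliar with the reliability figure reported above, Cronbach's α can be computed directly from item-level scores. The sketch below uses invented responses purely for illustration, not the study's instrument or data.

```python
from statistics import variance  # sample variance (n - 1 denominator)

def cronbach_alpha(item_scores):
    """Cronbach's alpha from a list of per-item score lists (one list per item)."""
    k = len(item_scores)
    item_vars = sum(variance(item) for item in item_scores)
    totals = [sum(person) for person in zip(*item_scores)]  # per-person total score
    return (k / (k - 1)) * (1 - item_vars / variance(totals))

# Hypothetical responses: 4 items x 6 test-takers (scored 0-5)
items = [
    [4, 3, 5, 2, 4, 3],
    [5, 3, 4, 2, 4, 2],
    [4, 2, 5, 1, 3, 3],
    [5, 4, 5, 2, 4, 3],
]
alpha = cronbach_alpha(items)  # high because the invented items covary strongly
```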

Multiple regression was used to analyze gain scores in accuracy against grammatical knowledge scores across individual error categories.  Preliminary findings suggest an interaction effect between grammatical knowledge and the effectiveness of feedback type for particular error types.  Thai EFL learners appear to benefit more from direct feedback on error types for which they lack robust grammatical understanding. These findings can inform theory in light of the interaction between learner-external and learner-internal variables. Rather than simply labeling particular error types as more or less treatable, as previous research suggests (Ferris, 2011; van Beuningen et al., 2012), practitioners could adapt the assessment tool presented in this study as a diagnostic to guide more tailored feedback strategies for heterogeneous classes or for individual students.  A more nuanced understanding of how learners’ variable grammatical knowledge influences their response to different feedback strategies can maximize the potential benefit of the feedback they receive.
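The core of the analysis, regressing accuracy gains on grammatical knowledge, can be illustrated with a minimal single-predictor least-squares sketch. The data below are invented (the hypothetical negative slope mirrors the reported pattern that learners with weaker grammatical knowledge gained more from direct feedback); the study itself uses multiple regression across error categories.

```python
def ols_slope_intercept(x, y):
    """Least-squares slope and intercept for a single predictor."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))  # covariance numerator
    sxx = sum((xi - mx) ** 2 for xi in x)                     # predictor variance numerator
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical data: grammatical knowledge scores vs. accuracy gains
# under direct feedback (invented for illustration)
knowledge = [20, 35, 50, 65, 80, 95]
gains = [0.30, 0.25, 0.22, 0.15, 0.12, 0.08]
slope, intercept = ols_slope_intercept(knowledge, gains)
```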

Project Title:  Reframing an EAP speaking curriculum: Targeting academic discussion sub-skills

Collaborators:  Dr. Alexander Nanni (Mahidol University, Thailand)

Presented at:  2016 Foreign Language Learning and Teaching conference (Bangkok, Thailand)

This presentation outlines an innovative approach to academic speaking curriculum design. We introduce an integrated-skills EAP course in an intensive English program in Thailand that focuses on preparing students to participate successfully in academic classroom discussion.  We begin by making the case for the instruction of academic discussion skills (Hsu, Van Dyke, & Chen, 2015; Kim, 2006), since instructional objectives for academic speaking typically emphasize oral presentations in formal assessment. Academic discussion skills include interactive skills for developing discourse topics organically (e.g., responding, clarifying, supporting, progressing, questioning, redirecting) and the ability to integrate relevant support from academic references using citations.  We then map the progression of a unit that integrates academic listening, reading, and writing skills and culminates in an informed academic discussion on a topic relevant to university students across majors. We share useful scaffolding materials, including structured note-taking and outlining activities, multimedia research tasks, peer feedback activities that develop confidence, and reflection activities using video recordings and YouTube annotation. This scaffolding facilitates students’ preparation to make meaningful contributions and focuses attention on language forms that support meaningful interaction.  Students are largely successful in engaging in lively academic discussion, as evidenced by substantial gains in their academic discussion skills over the course of a semester. The presentation concludes with practical suggestions for implementing this approach and tips for overcoming challenges, such as divergent cultural expectations of classroom participation (Jones, 1999).
While this approach to teaching and assessing academic speaking skills has been designed and implemented in an intensive EAP program, it could be applied in a variety of academic contexts and should be of interest to a wide cross-section of educators who are interested in enhancing their students’ ability to participate effectively in university classrooms.

Project Title:   A methodological synthesis of research on the effectiveness of corrective feedback in L2 writing

Status:  Published in the Journal of Second Language Writing (2015)

Collaborator:   Dr. Qiandi Liu (University of South Carolina)

Presented at:  AAAL 2015 (Toronto, Ontario)

Despite an abundance of research on written corrective feedback (WCF), the fundamental questions of whether and to what extent different types of WCF are effective in assisting learners to develop their accuracy remain controversial. A number of review papers have attributed this stagnation to the methodological limitations and inconsistencies in this body of research (e.g., Bruton, 2010; Ferris, 2004; Guénette, 2007; van Beuningen, 2010). Nonetheless, such arguments rely on limited and perhaps selectively chosen samples of empirical studies. Further, although this domain has been meta-analyzed multiple times (e.g., Biber, Nekrasova, & Horn, 2011; Russell & Spada, 2006; Truscott, 2007), none of these studies focuses specifically on methodological features.

The first goal of this study is, therefore, a comprehensive review of methodological practices in WCF research. Continuing in a growing tradition of methodological review in L2 research (e.g., Plonsky & Gass, 2011), we conducted an exhaustive search yielding 51 published and unpublished studies and coded them for design features following meta-analytic procedures. The methodological synthesis identified a number of important limitations, such as inconsistent reporting of research design features, common use of ‘one-shot’ treatments, mixing of CF types as the treatment for single groups, and varied outcome measures. These limitations restrict comparisons of results across studies. Our second goal was to meta-analyze quantitative results in this domain and to examine design and methodological features as potential moderators of the effectiveness of WCF. Preliminary results provide evidence for overall gains in accuracy in new pieces of writing, which appear to be moderated by (a) the operationalization of a control group and (b) the length of treatment, among other features. Finally, we provide a number of suggestions, both methodological and substantive in nature, in the hopes of advancing and improving future research practices in this domain.

Project Title:   The type and linguistic foci of oral corrective feedback in the L2 classroom: A meta-analysis

Status:  Published in Language Teaching Research (2016)

Presented at:  AAAL 2014 (Portland, OR)

The role of corrective feedback (CF) remains a central focus of L2 research, with increasing attention to how teachers make use of CF in their classrooms. Empirical support for CF is growing, including recent meta-analytic efforts that reveal variables that may influence its effectiveness, including CF type, target linguistic foci, and context of use (Li, 2010; Lyster & Saito, 2010). To complement these findings, however, little can yet be generalized about teachers’ tendencies in feedback choices. Lyster and Ranta’s (1997) taxonomy of CF types (comprising recasts, elicitation, clarification requests, repetition, metalinguistic feedback, and explicit correction) remains dominant in observational studies conducted in a growing range of teaching contexts. Furthermore, several researchers have hypothesized factors that may influence teachers’ use of CF, studied in isolated contexts (Moore, 2002; Sheen, 2004). The present study brings together research in this area in the first synthesis of classroom CF research to aggregate the proportions of CF types L2 teachers provide, as well as their target linguistic foci. The study further investigates contextual and methodological factors (i.e., moderators) that influence CF choices across teaching contexts and teacher/learner characteristics. A comprehensive search in the ERIC, LLBA, PsycINFO, and ProQuest Dissertations and Theses databases, Google, and Google Scholar yielded 27 published and unpublished descriptive classroom studies comprising data sets from 51 classrooms. Results reveal that the overall proportions of CF types support Lyster and Ranta’s initial findings, with recasts the most common. Grammatical errors received the largest proportion of feedback, followed by lexical and phonological errors. Several moderating variables were found to affect teachers’ CF choices, including years of experience, level of training, native vs. nonnative status, and student proficiency.
A clearer picture of the patterns of CF that teachers provide and the variables that moderate their choices can guide future research design and highlight gaps between empirical findings and classroom practice.

Project Title:   Domain definition and search techniques in meta-analyses of L2 research (Or why 18 meta-analyses of feedback have different results)

Collaborator:  Luke Plonsky (Northern Arizona University)

Status:  Published in Second Language Research (2015)

Applied linguists have turned increasingly in recent years to meta-analysis as the preferred means of synthesizing quantitative research. The first step in the meta-analytic process involves defining a domain of interest. Despite its apparent simplicity, this step involves a great deal of subjectivity on the part of the meta-analyst. This article examines the importance of clearly defining and operationalizing meta-analytic domains. Toward that end, we present a critical review of one particular domain, corrective feedback, which has been subject to 18 unique meta-analyses. Specifically, we examine the unique approach each study has taken in defining its domain of interest. In order to demonstrate the critical role of this stage in the meta-analytic process, we also examine variability in summary effects as a function of the unique subdomains in the sample. Because different techniques used to identify candidate studies carry assumptions about the type of research that falls within the domain of interest (e.g., published vs. unpublished), we also include a brief review of search techniques employed in a set of 81 meta-analyses of second language research. Building on the work of In’nami and Koizumi (2010) and Oswald and Plonsky (2010), the results of this phase of the analysis show that L2 meta-analysts generally rely on a stable but very limited set of search strategies, none of which is likely to yield unpublished studies. Based on our findings related both to domain definition and to search techniques employed by L2 researchers, we make specific recommendations for future meta-analytic practice in the field.
