Paper Summary
Using a Trait Model to Characterize Growth Effects in a Writing Intervention

Mon, April 25, 2:30 to 4:00pm PDT, Marriott Marquis San Diego Marina, Floor: South Building, Level 1, Leucadia

Abstract

This paper presents empirical results from a pilot study of an argument-writing intervention. Participants were 146 sixth-grade students at a suburban, racially mixed middle school. About two-thirds were eligible for free or reduced-price lunch, and about one-sixth had IEPs.

The intervention consisted of three lessons in which students learned to engage in critical discussion, identify argument components, and write an essay that integrates arguments from both sides.

Participants were assessed before and after the intervention, about three weeks apart. The pretest and posttest each consisted of eight multiple-choice argument comprehension items, an argument summarization task, a personal essay, and a source-based essay. Each task had two topics; students were randomly assigned one of the two topics at pretest and one at posttest. The summary and the essays were all double scored. Exact interrater agreement ranged from 55% to 80%, and adjacent agreement was above 90% for all tasks. Writing tasks (both essays and the summary) were automatically scored for writing traits.
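The agreement statistics above can be computed straightforwardly: exact agreement is the proportion of responses on which the two raters assigned identical scores, and adjacent agreement is the proportion on which their scores differed by at most one point. A minimal sketch (the function name and the sample scores are invented for illustration, not taken from the study):

```python
def agreement_rates(rater1, rater2):
    """Return (exact, adjacent) agreement proportions for two lists of scores."""
    assert len(rater1) == len(rater2) and len(rater1) > 0
    n = len(rater1)
    exact = sum(a == b for a, b in zip(rater1, rater2)) / n
    adjacent = sum(abs(a - b) <= 1 for a, b in zip(rater1, rater2)) / n
    return exact, adjacent

# Example with made-up scores on a 1-6 rubric:
exact, adjacent = agreement_rates([3, 4, 2, 5, 3], [3, 3, 2, 5, 4])
print(exact, adjacent)  # 0.6 1.0
```

Adjacent agreement is the more forgiving criterion, which is why it can exceed 90% even when exact agreement is as low as 55%.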

Analysis of the results indicated a significant increase in overall score and on each of the component tasks. There were also significant changes in several trait scores. Specifically, at posttest, students scored significantly higher on the Organization, Text Cohesion, Conventionality, and Formality (academic language) traits, and significantly lower on the Sentence Length trait. In a follow-up analysis, we constructed a graphical model that allowed us to estimate the effects of demographic variables on pretest trait scores and trait change scores, and the effects of pretest trait scores and change scores on total test score. Trait change scores were as strong predictors of posttest score as pretest trait scores. Some demographic variables had significant effects. Females had significantly higher Organization, Text Cohesion, Conventionality, and Formality scores, and significantly lower Sentence Length scores. Low-SES students had significantly lower Organization, Text Cohesion, Conventionality, and Formality scores, and significantly higher Sentence Length scores. Special education students not only had lower overall performance (with corresponding patterns of trait scores) but also showed less growth than general education students.
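The kind of relationship estimated in the follow-up analysis can be illustrated with a simple regression sketch: predicting posttest total score from a pretest trait score, its change score, and a demographic indicator. This is not the authors' model (the paper describes a fuller graphical model over all traits), and the data below are simulated; all variable names and coefficient values are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 146  # sample size matching the study

# Simulated data (purely illustrative, not the study's data):
pretest_org = rng.normal(3.0, 0.8, n)   # pretest Organization trait score
change_org = rng.normal(0.4, 0.5, n)    # posttest minus pretest change score
female = rng.integers(0, 2, n)          # demographic indicator (0/1)
posttest_total = (10 + 2.0 * pretest_org + 2.0 * change_org
                  + 1.0 * female + rng.normal(0, 1, n))

# Ordinary least squares fit: posttest ~ pretest trait + change + demographic
X = np.column_stack([np.ones(n), pretest_org, change_org, female])
coef, *_ = np.linalg.lstsq(X, posttest_total, rcond=None)
print(np.round(coef, 2))  # estimates near the generating values [10, 2, 2, 1]
```

In this toy setup the pretest and change slopes are generated to be equal, mirroring the finding that change scores predicted posttest performance about as strongly as pretest scores did.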

These results indicate that the trait model was sensitive not only to differences in student performance on a single writing assessment, but also to patterns of change in writing behavior induced by instruction. It thus seems feasible to use trait scores to track changes in writing performance over the course of the school year. The availability of the Criterion corpus would be critical for this purpose, since it would make it possible to determine appropriate expectations by grade level and genre. Note that in this study, all the essays were from a single genre, and students almost universally displayed the characteristic genre profile for argumentation (very high in stance taking; very low in concreteness, interactivity, and contextualization). As a result, these traits did not have a significant relationship with score, though they do in larger samples that include essays from multiple genres.
