
Thursday, October 18, 2018

Beware of snakes in the grass

Most people have a healthy fear of snakes. I don’t. I have an incredibly unhealthy fear of snakes. I see them where they are. I see them where they aren’t. I’ve been known to toss a bowl of popcorn into the air when they suddenly appear on the television screen. So, when I learned about the Cobra Effect, the circumstance in which a plan to reduce the venomous snake’s threat to humans actually increased the threat, I was horrified. But that’s nothing compared to the alarm I felt in reading The Testing Charade: Pretending to Make Schools Better in which Daniel Koretz uses the Cobra Effect as a mechanism to organize the junk drawer of problems with standardized testing in public education.

Specifically, Koretz outlines an example of the Cobra Effect known as Campbell’s Law in which social scientist Donald T. Campbell asserts “the more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor." Insufficient measures generate distortions, and Koretz argues that in education, standardized testing is the insufficient measure.

The distortions?

There are too many for a short review, but Koretz does a good job organizing the basic misunderstandings, unintended consequences, and defeat devices associated with standardized testing. From the widening gaps among the intended, taught, and learned curricula… to the people-sorting nature of standardized tests… to the incentivizing of instructional behaviors that run counter to how people learn, Koretz reviews the specifics of how standardized testing is an incomplete instrument for evaluating teaching and learning.

More importantly, in using Campbell’s Law, Koretz includes education in a broader discussion about social change processes, cleverly applying to his own thesis what he recommends for education evaluation: broaden the discussion, increase balance in the evaluation process, recognize that there are tradeoffs in managing complex systems, and stop trying to quantify everything.

As much as I would like to make the comparison, standardized tests are not the cobras of this story. Unfortunately, the cobras are the distortions and unintended consequences of their misuse: score inflation, test prep, the needless damage to public education’s reputation, the struggles of the poorest students in a system not designed to account for them, and the limitations of an evaluation instrument built around regression to the mediocre.

I won’t throw a bowl of popcorn into the air the next time I see a standardized test, but I do fear the real snakes in the grass, the unintended consequences of misusing standardized tests. The charade, as Koretz calls it, will eventually come back to bite us.

Sunday, February 14, 2016

Reconcilable differences

Sir Ken Robinson told a group of Texas educators recently that the problem with standardization in education is that people “don’t come in standard versions.” Simple. Common sense. Square peg, round hole. People are unique. Standard education doesn’t account for that fact.

So what to do?

Since it’s not likely that people will come in standard versions anytime soon, we need to change the educational environment so that it takes “proper account of the different talents of students.” Educational systems should be designed to preserve the intrinsic motivation for learning with which children are born. They should encourage the self-determination that leads to a life of purpose, productivity, and personal responsibility.

There is an opportunity at hand this year for Texas to influence system design for the better in education.

The Texas Commission on Next Generation Assessments and Accountability held its first meeting in January. Created by legislation passed by the 84th Texas Legislature and charged with studying and making recommendations for changes in the state’s testing and accountability system, the commission will meet six times this year before filing a final report in September.

What direction should the commission take?

In his latest book, Creative Schools: The Grassroots Revolution That's Transforming Education 1, Robinson makes the point that a problem with most current assessment systems is that they are too light on description to “convey the complexities of the process” they are meant to summarize. To enhance assessment, we need to expand the elements of description and the evidence upon which learning is evaluated.

Research is providing direction for this kind of work if only we can muster the will to think and act in different ways. A study published in the Journal of Educational Psychology last month, The Narrative Waltz: The Role of Flexibility in Writing Proficiency 2, could inform a discussion about next generation assessments.

Long story, short… the researchers found that context in the evaluation of learning matters.

Short story, longer… having longitudinal data to make decisions about student learning is critical; specifically, in this case, for evaluating writing proficiency. The researchers were trying to understand the extent to which high school students’ linguistic flexibility across a series of essays was associated with writing proficiency. Flexibility refers to students’ ability to adapt their writing styles and use various linguistic properties according to the specific context of the writing task: properties like cohesion, rhetoric, complex syntax, explicit argument, reported speech, contrasted ideas, and narrativity.

To analyze flexibility, the researchers utilized both natural language processing techniques and dynamic methodologies to capture variability in students’ use of narrative text properties across 16 argumentative, prompt-based essays.

Why narrativity as opposed to another linguistic property?

One, they needed a property to quantify and analyze, and narrativity, as an easier writing style for high school students to use, was likely to be more prevalent across multiple writing assignments. Two, they wanted to address what they believe is a common misunderstanding about the use of narrativity in essay writing. Research has shown that texts with more narrative elements tend to be easier to read and comprehend. Narrativity, therefore, is often assumed to be a sign of essay quality. But this assumption is not supported in the literature. The majority of research on essay quality suggests the opposite: that higher-quality writing is associated with decreased levels of text narrativity.

In the current study, the degree of narrativity was assessed using a narrativity component score provided by Coh-Metrix, a computational text analysis tool that analyzes text at the word, sentence, and discourse levels. Coh-Metrix calculates the coherence of texts on many different measures. The Coh-Metrix Narrativity Easability Component score, based on the results of a previous, large-scale corpus analysis, served as a measure of text readability based on the degree of story-like elements present within an individual text. To quantify students’ narrative writing patterns, the researchers applied random walks and Euclidean distances to create visualizations and classifications of students’ use of narrative properties across the 16 assigned essays.
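To make the method a bit more concrete, here is a minimal Python sketch of what a walk-based flexibility metric can look like. Everything in it is an assumption for illustration: the scores are invented, and the step rule (above or below the student’s median) and the rigid-walk baseline are my simplifications, not the procedure the researchers actually used.

```python
import numpy as np

def narrativity_walk(scores):
    """Map per-essay narrativity scores onto a 1-D walk: step +1 when an
    essay is more narrative than the student's median, -1 otherwise.
    (An illustrative rule only, not the authors' actual procedure.)"""
    scores = np.asarray(scores, dtype=float)
    steps = np.where(scores > np.median(scores), 1, -1)
    return np.cumsum(steps)

def flexibility(scores):
    """Euclidean distance between the observed walk and a perfectly rigid
    walk that steps the same direction every time; larger values mean the
    student varied narrativity more across essays."""
    walk = narrativity_walk(scores)
    rigid = np.arange(1, len(walk) + 1)  # a student who never varies
    return float(np.linalg.norm(walk - rigid))

# Invented narrativity component scores for one student's 16 essays.
scores = [0.61, 0.42, 0.55, 0.38, 0.70, 0.33, 0.58, 0.47,
          0.64, 0.29, 0.52, 0.44, 0.68, 0.36, 0.59, 0.41]
print(f"flexibility = {flexibility(scores):.2f}")
```

A student who alternates between high- and low-narrativity essays produces a walk that hugs zero and sits far from the rigid baseline; a student who writes the same way every time tracks the baseline closely.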

The researchers found that it was students’ flexible use of text properties rather than the consistent use of a particular set of text properties (in this case, narrativity) that correlated with established measures of essay quality. The situational influence of narrative text elements on writing quality varies. It’s not just about frequency or consistency of use, but the writer’s ability to use appropriate techniques given the context. Flexibility informs writing proficiency.
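As a toy illustration of the shape of that finding (the numbers below are invented, and the study’s actual analysis was far more sophisticated than a single correlation), the basic check looks roughly like this:

```python
import numpy as np

# Invented per-student values: a flexibility metric like the one sketched
# above, paired with holistic essay-quality ratings on a 1-6 scale.
flexibility = np.array([12.4, 3.1, 8.7, 15.2, 5.9, 10.3, 14.0, 4.6])
quality = np.array([4.5, 2.0, 3.5, 5.0, 2.5, 4.0, 5.5, 3.0])

r = np.corrcoef(flexibility, quality)[0, 1]
print(f"Pearson r between flexibility and quality: {r:.2f}")
```

The pairing is the point: it is the flexibility measure, not the raw narrativity level, that gets tested against quality.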

Bottom line? Writing proficiency measures should account for a student’s sensitivity to multiple linguistic profiles, which can only be determined by evaluating multiple writing samples over a period of time.

The authors argue their findings suggest standardized test developers should “aim to develop more sophisticated assessments that can capture students' writing skills across a number of different contexts." In other words, when designing assessment systems, we need to think in terms of expanding the elements of description and the evidence upon which learning is evaluated.

The square peg and round hole are not irreconcilable. It’s a question of commitment to a process of developing “original ideas that have value.” Or what Robinson calls... creativity.

1 Robinson, K., & Aronica, L. (2015). Creative Schools: The Grassroots Revolution That's Transforming Education. New York: Penguin Publishing Group.

2 Allen, L. K., Snow, E. L., & McNamara, D. S. (2016). The narrative waltz: The role of flexibility in writing proficiency. Journal of Educational Psychology.

Sunday, May 12, 2013

Putting the pieces together

Understanding student engagement is about understanding the dimensions and depth of the relationship between students and the school community, according to the authors of an analysis of measuring student engagement published last year in the Handbook of Research on Student Engagement. The authors frame engagement as a three-dimensional construct that includes a cognitive component (engagement of the mind), a behavioral component (engagement in the life of the school), and an emotional component (engagement of the heart). And they assert that measuring the depth of each of these elements of engagement requires understanding students' perspectives.

"As in any social system, an understanding of the complexities of the system does not necessarily reside in those at the top of the system, who only have a narrow understanding and perspective on the ways in which the system operates; those at the bottom of the social hierarchy within a system often have the greatest insights into the whole system." 

The authors assert that educators must learn to conceptualize engagement as a cultural issue rather than a structural one. They describe cultures as interrelated, overlapping, nonlinear sets of relationships that, when viewed from a distance, appear as webs or sets of webs. In other words, a single student's engagement in the school begins from a perspective that is unique to that student but not independent from the perspectives of others nor from the structures upon which the school environment is based. This is why, the authors assert, attempting to understand engagement only as a structural issue and implementing top-down policies as a means to address it "does not have a direct and uniform impact on student outcomes." Cultures are way too complicated for that.

So is engagement.

For example, emotional engagement has been found to be fluid on any given day across learning environments for individual students, suggesting that engagement is context dependent (Park, Holloway, Arendtsz, Bempechat, & Li, 2012). While this may seem to complicate things from an educator's perspective, it is actually great news for schools because many elements of the learning context are within the control of educators: how we interact with students, the work that is designed for students, and how organizational systems are designed to support that work.

It also means we need students' perspectives. But how do we get them?

Again, complicated... and the best way is the most personal and hardest to measure: getting to know individual students and what motivates them. But schools also need an aggregate understanding of students' perspectives, something that is communicable and practicable. There are student self-report surveys that have proven to be statistically reliable and valid measures of engagement as it relates to student achievement. Experience sampling has helped researchers understand engagement as it relates to context. Teacher ratings and student interviews also provide insight into students' interaction with learning environments. And observation is potentially useful for both research and practical purposes. The problem is that currently the most common, and too often the only, means of understanding the student experience is performance on standardized assessments. While performance data is important for understanding the full scope of a student's educational experience, it is not enough, and it does little to inform understanding of the educational processes linked to the outcomes the assessments measure.
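To make the three-dimensional construct concrete, here is a toy sketch of how self-report items might be rolled up into per-dimension scores. The items, groupings, and 1-5 scale are invented for illustration; validated instruments like the HSSSE use their own items and psychometrics.

```python
from statistics import mean

# Invented Likert items (rated 1-5), grouped by the three engagement
# dimensions described above. Real surveys use validated item sets.
DIMENSIONS = {
    "cognitive": ["I connect classwork to my own goals",
                  "I think hard about what I am learning"],
    "behavioral": ["I take part in school activities",
                   "I come to class prepared"],
    "emotional": ["I feel like I belong at this school",
                  "I care about my school community"],
}

def engagement_profile(responses):
    """Average a student's 1-5 ratings within each dimension.
    responses: dict mapping item text to a rating."""
    return {dim: mean(responses[item] for item in items)
            for dim, items in DIMENSIONS.items()}

student = {
    "I connect classwork to my own goals": 4,
    "I think hard about what I am learning": 3,
    "I take part in school activities": 5,
    "I come to class prepared": 4,
    "I feel like I belong at this school": 2,
    "I care about my school community": 3,
}
print(engagement_profile(student))
# {'cognitive': 3.5, 'behavioral': 4.5, 'emotional': 2.5}
```

Even a simple profile like this hints at the kind of context-dependence the research describes: the same student can be behaviorally engaged and emotionally disengaged at once.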

The authors of the current analysis review the findings of the latest High School Survey of Student Engagement (HSSSE), a student survey designed to assess the extent to which high school students engage in educational practices associated with high levels of learning and development. A statistically valid and reliable construct for data collection and analysis, the HSSSE is designed to collect information regarding all three aspects of engagement (cognitive, behavioral, and emotional). The latest administration of the HSSSE revealed three major themes: students feel they have little voice in the school community, teachers are powerful figures in the lives of students, and students crave activity and interaction in the learning environment.

These three understandings represent actionable data for schools in designing their learning environments. When combined with other data, including personal knowledge about what motivates students, interview and observation data, and a variety of student performance data, we can begin to put together the engagement puzzle. A warning, however: it's a really big puzzle. And it will take attention, commitment, persistence, and a sense of purpose to complete it.

But if engagement is the key to learning... puzzle anyone?

  • Yazzie-Mintz, E., & McCormick, K. (2012). Finding the humanity in the data: Understanding, measuring, and strengthening student engagement. In S. L. Christenson, A. L. Reschly, & C. Wylie (Eds.), Handbook of Research on Student Engagement (pp. 743-761).
  • Park, S., Holloway, S. D., Arendtsz, A., Bempechat, J., & Li, J. (2012). What makes students engaged in learning? A time-use study of within- and between-individual predictors of emotional engagement in low-performing high schools. Journal of Youth and Adolescence, 41(3), 390–401. doi:10.1007/s10964-011-9738-3