Sunday, 12 December 2004

Piracy vs. Stealing: Teacher Fails "A" Student for Topic Choice

I am going to address two issues in this post.

The first part is about the issues of evaluation and assessment. The second part deals with the lessons I learnt from an instructional design point of view.

First, the background (quoting from Boing Boing):

"Sixteen year-old Steve Geluso was failed by his English teacher for choosing to distinguish piracy from stealing in an essay.

[sic]

"His teacher failed him, saying there was no difference between the two and that he was "splitting hairs". Other teachers who read his essay said that he did well from an organizational and technical standpoint, but because his teacher felt that there was no difference between piracy and stealing, she gave him an 'F' because she disapproved of the content of his essay."


Steve's papers (scanned) can be found on his blog.




On assessment



One of the first comments on assessment can be found here (THE NEW ACCOUNTABILITY). While that post set out to discuss the new way of using the net to hold teachers accountable, it reflects the mainstream view of assessment: evaluation of a PRODUCT as a way of evaluating the ability of the learner who produced it, without looking into the PROCESS by which the "product" was created.

The first scorer failed to assess the product (in this case the essay) based on the agreed* rubric.

*"agreed" may be the wrong word here. The publication of the assessment criteria, together with the fact that the assessee produced a piece of work according to those published criteria, should legally form a contractual agreement which, in this case, has no exit clause. Assessing the product in any way other than against the published criteria violates the contractual agreement under which the student took the examination.

[An aside: isn't splitting hairs one of the ways our understanding of the world advances? Most academic papers are hair-splitting definitional arguments! Take the definition of "learning object" as a vivid example, relevant to e-learning, or irrelevant to e-learning!]

For important assessments like this, if there is disagreement, a second assessor could be called in. It is important that the second assessor be independent and have no prejudice towards either party. There seems to be a clear case of "information cascading" happening here.

The fundamental problem with assessment based on a single end product is that it rests on an industrial-age paradigm.

There is nothing wrong with such a paradigm if you are still living in a developing or under-developed country. But the education system in any developed country should start thinking seriously about what kind of citizens will be required to sustain the current standard of living. "America cannot remain rich by producing pillow cases" is true for all developed countries!

One of the important corollaries of industrial-age quality control (assessment) is that any product must be evaluated against an agreed set of measures. Any deviation outside the agreed tolerance is considered faulty - even if the deviation actually *improves* the product in some way.

I will leave my dear readers to ponder a better solution, because I don't have one! We know that learning is a process. What can we do to evaluate the effectiveness of the learning process? What kinds of evidence, collected under what conditions, and in what quantity, should we gather to give us the confidence to draw the necessary conclusions? Do we evaluate the learning process (as in how it is delivered, etc.) or do we evaluate the effect on the learner (as in how well a learner has mastered a certain skill or concept)? I raise the second part of that question because I don't believe education (learning) should be a way of "sorting" people!

On lessons learnt



So far, you have been reading a story - a story told by me with "coloured" glasses on. You may find it interesting (I assume so, since you are still reading this line). The great question is "SO WHAT?" Have you learnt anything? Have you found anything useful? Is there any ROI on the time you have spent reading up to now?

If you have found my argument full of crap and are laughing all the way at how silly I am, you have your reward: entertainment!

OK, let's put on our instructional designer hard hats and assume that we want to use this story for some instructional purpose. By selecting such a story, have we done our job? My answer is a simple NO. We have not even started yet.

Our job is to induce learning - to cause changes in the learner's mind. Throwing information (in this case, a story) at the learner does NOT induce change. We should design ways to activate the change process: prompt the learner to reflect on the information, build links to his/her prior knowledge, and arrive at a socially agreed view of the issue.

How can we do that? Let's say this is the e-learning design challenge for this festive season.

ps BECAUSE WISDOM CAN'T BE TOLD (OR READ ONLINE) provides some fuel for thinking about this. I may write a post on case-based learning later.

2 comments:

jocalo said...

Albert--

You raise important, critical questions about effective assessment of writing. My comments were analytical, not ones that advocate this particular form of high-stakes testing.

Finding ways of assessing what students have learned is a major project. If you check the link on my left navigational bar reading "What Students Say", you'll see a beginning effort at qualitative assessment that involves both what students say they have learned and evidence in their own writing that they have learned it. This approach is more a way of assessing the outcomes of a particular course, but it could be adapted to portfolio assessment to see what learning has occurred for individual students.

Albert Ip said...

Thanks, Jocalo.

I enjoyed reading your piece on formative evaluation (http://faculty.deanza.fhda.edu/jocalo/2004/01/28).

I enjoyed your analytical review of the rubric. (In fact, I based my conclusion about the first scorer's breach of the contractual agreement on your review. He scored outside the bounds of the rubric! He would have been wrong even if he had given the student a distinction because he liked the argument!)

None of us likes to see summative evaluation continue to be a means of sorting students - at least for those living in developed countries. It is a fact of life at the moment - we need to have a voice to initiate some change...