|Defining "feedback" isn't very much fun.|
One of the things that I want to work on next year is improving the feedback that I give my kids on their work. I know that I tend to give kids too little feedback over the course of the year. I also know why this happens: because I think that most teacher feedback is pretty lame, coming either in the form of right/wrong or as running comments about the correctness of the procedure.*
* You know what I'm talking about, right? "Nice job subtracting the 3 from both sides of this equation. But careful! Can you really divide both sides of the equation when they look like that? Try it again." Maybe you love this sort of thing but I generally feel this is just a translation of our red-ink corrections into English.
My path forward needs to be finding ways of giving feedback that are genuinely productive for my kids. Earlier this year I found something that I really loved: giving questions as feedback. I want to find more moves like that, similar to my post-it note routine.
The problem is that when I started looking around for feedback resources, it seemed that the things that I liked using weren't always considered to be proper feedback. Grant Wiggins defines feedback so that only information -- not questions -- could be considered proper feedback.
I find disagreements about definitions incredibly frustrating. I learn very little from debates about what "conceptual learning" means or what exactly the difference between a "problem" and an "exercise" is. So, what do we mean by "feedback"? Oy. Untangling that question doesn't seem like it would be very much fun.
I think we can mostly avoid taking much of a stand on it, though, as long as we reframe the question. Instead of organizing the question around feedback, let's organize it around assessments. Rather than asking "What's the best way to give feedback in math class?", I'm finding it helpful to take some quiz, test, problem or whatever and start imagining the different ways that one could possibly respond to the student work.
Take this quiz, for example:
What are the ways that we could imagine following up on this completed quiz?
- We could send kids to stations and have them check their work against an answer key.
- We could mark it right/wrong and give it back to kids.
- We could mark the questions right/wrong and then tell kids where we think their skills are along a 1-5 (or novice to master) rubric.
- We could go over the quiz as a whole group.
- We could ask kids questions without marking right/wrong and send them to groups to help everyone work on questions.
- We could run a follow-up class that asks kids to analyze and improve some common responses on the quiz.
I imagine that as my list grows, I'll start being able to group the possible responses into categories. Here are some groups that seem like they might be productive ways to bundle some of these reactions to a completed quiz:
- Individual vs. Whole-Class Follow-Ups - Some of our ways of responding to student work involve interacting with individuals (e.g. sending kids to stations). Other ways of responding involve interacting with the entire class, like running a lesson that addresses a common problem on the quiz.
- For Learning vs. For Reporting - Some of these responses are useless for reporting what a kid knows to stakeholders. There's no way that a record of the questions you ask a kid could be of much use to a parent or an administrator. Asking questions in response to completed student work is very clearly for learning and for learning only.
- Explicit Evaluation vs. Implicit Evaluation - Some of these responses to the quiz would involve explicitly telling kids which of their answers was correct. Other responses avoid that. Maybe the explicit/implicit distinction doesn't matter at all? I'm not sure yet.
I'll build my distinctions, concepts and theory from the ground-up, rather than burying my theory in a definition.
A lot of people in education try to use a technique we might as well call "proof by definition." We'll try to tell people that they're wrong because they've got their terms mixed up. We'll limit the scope of what we're studying by defining it out of existence in the first slide. I think we often end up doing this with feedback. I'm hoping that a bottom-up approach can help me build a deeper understanding of the choices I face.