Challenge Accepted: Improving Our Feedback Part 1
The Role of Knowledge
Last week’s pieces (Part 1 & Part 2) can be boiled down to the following three points:
We’re not very reliable or accurate in our judgement of instruction;
This could be because we don’t have enough knowledge to recognize and deconstruct good instruction; and
Even with more accurate ratings from more knowledgeable judges, our evaluation framework might encourage us to make poor instructional recommendations.
Before diving into how we might fix this situation, I want to acknowledge that what follows is my best guess at how to fix it. If I had a tried and tested method, I'd be sending a $50 million invoice to the Gates Foundation for finishing the work they started with the MET project. This entry differs from previous entries of Challenge Accepted because instead of having research to directly support what's written, I'm relying on the information that makes the most sense to me, so that disclaimer needs to be acknowledged right up front.

I also want to say that I feel for those who are tasked with evaluating teachers because they've been put in an impossible situation. Since the science of reading "movement" started, we've heard countless stories from educators who have had to reconcile that they worked incredibly hard to help students while using practices that likely had a negative impact. Our hearts go out to these people, who feel enormous guilt for using ineffective practices–practices that they were trained to use. I believe that teacher evaluators deserve some of this same sympathy. As Chris O'Brien stated, "Principal preparation programs — formal and informal — teach policy. Law. Finance. Organizational theory. The history of education reform." Do you notice what's missing there? Many (Most? All?) of these programs do not teach how people learn or the instructional moves that aim to maximize learning.

While in their teacher preparation programs, current leaders were trained to use the same poor practices as our teachers, and their administrative programs likely did very little to improve their instructional knowledge. For example, I had just one course that discussed instruction in my admin program, and it was very explicit in teaching us that we needed to teach and assess standards in isolation. Not only did my program place little emphasis on instruction, but what we were taught contradicted the research.
How can we expect evaluations to consistently produce better results when the preparation programs do not provide people with the knowledge needed to evaluate?
Start with Building Knowledge
If teachers do not come out of their preparation programs with a solid understanding of teaching and learning, then the onus is on the district to provide professional development. The same is true for administrators. One of the previous pieces mentioned my son's budding interest in football and how he's confused about penalties because he doesn't know the rules. If he's ever going to be able to recognize a false start, then he needs to learn that most offensive players can't move once they're set. He will never be able to see that as an issue if he doesn't have the knowledge. The same is true for teaching, but it's a lot more complex: understanding how learning happens is more complicated than understanding the rules of a game. I appreciate the two sides that Caiti Wade brought to this discussion, first stating that evidence-informed practice is "what's most likely to be effective for most students most of the time," before following that up by stating that "evidence-informed practice works for all students." In a very humorous juxtaposition, she put thoughts from those two pieces side-by-side:
The issue is that this is only humorous if you've done the learning and have a nuanced understanding. The fact that she knew she should probably get out in front of any potential criticism is evidence that we, as a community of professionals, do not have a nuanced understanding of learning science. I am two full years into developing my own understanding, and I learn something new every single day. This new learning isn't necessarily a new idea each day. It might be a new way to discuss a concept that I already knew or a deeper understanding of previous learning, but refining my own comprehension of learning science has helped me to better articulate this knowledge to teachers, plan for instruction, see intentional teacher moves, and diagnose where learning may be breaking down in the classroom. So what did Caiti Wade mean by that contradiction? I would encourage you to read both pieces because I can't possibly quote all of the wonderful information here, but this paragraph stands out as a succinct summary:
At the level of principle, evidence-informed practice works for all students. That’s because it is grounded in how humans learn, and that’s not something that magically changes because you’ve entered a different postcode. At the same time though, at the level of practice, no single strategy will work perfectly for every student, in every moment and in every context. Even with sound underlying principles, what happens in the moment is impacted by a whole range of variables, such as prior knowledge, attention, timing, delivery and how windy it is outside (the latter of which remains a largely unaccounted-for variable in education research). This does not mean that evidence-informed practice doesn’t work.
Knowing that we lack a nuanced understanding, we need to ensure that we have a rigorous professional development calendar that helps evaluators understand the intentional teacher moves that have the biggest impact on learning. We cannot ask staff to think critically about how well a particular move was executed in the classroom and provide suggestions to improve it if they don't have a strong mental model of what it should look like. For example, I thought cold calling was a terrible practice for most of my career because I didn't understand its purpose. My experience of it when I was in school was that it was used to call out or embarrass a student. I simply didn't know that there was a better way to use it, so it was not something I would ever recommend. But knowing what practices are is just one step of many. Another is knowing the advantages and drawbacks of certain practices. Choral responses are great for driving students' attention to particular content, but they don't allow teachers to hear individual responses. Cold calling allows teachers to hear individual responses, but it doesn't allow them to check the understanding of the whole class. Mini whiteboards allow teachers to check everyone's understanding, but they're more time-consuming. Dylan Wiliam has stated that "opportunity cost is the single most important concept in educational improvement." Each of these practices has a place, and they can be used in conjunction with one another, but we need to understand the cost of selecting one strategy over another. If evaluators are simply looking to make sure that teachers are using cold calling, they'll be satisfied if a teacher uses only this strategy to gather a response to a hinge question, not knowing that we probably want to survey the whole class at this point in the lesson.
We can't simply say, however, that we need to build people's knowledge. We've all had the experience of wrapping up a meeting with someone saying something to the effect of, "We need to come back together to discuss X, Y, and Z." Everyone acknowledges the need to meet again, but no one ever puts a date on the calendar. If we're going to say that it's our responsibility to help build knowledge, we need to be able to point to the dates when this is going to occur. This means that we need to discuss the importance of a timeline.

If the evaluators in your district are anything like me, building new knowledge will mean that they first have to unlearn deeply rooted beliefs. In each of the challenge pieces I've posted this year, I've tried to be incredibly vulnerable in sharing my (poor) past practice. This hasn't been easy. As a teacher, my students' futures were impacted by the instruction that I did and did not provide. As a coach and teacher evaluator, my judgements and feedback impacted people's livelihoods. Because of this, I certainly went through each stage of grief when I started discovering how little accurate information I knew (despite having two master's degrees and a doctorate). The staff in your district may experience the same, so the robust PD plan you implement will need to be created well in advance of executing it so that you're readily available to support people as they experience the cognitive dissonance they will likely feel. This preparation is not unlike the planning that teachers need to do in order to provide appropriate scaffolding for their students. You need to have a strong mental model of where you're going so that you're better able to close the gap between the understanding evaluators have and the understanding you want them to have.
Creating the plan in advance also frees you up to ensure that the new learning isn't mutated between when the professional development is delivered and when it's actually implemented in buildings. It wouldn't be acceptable for teachers not to have a clue on Tuesday about what they're teaching on Wednesday. We need to hold ourselves to that same standard.
Climbing down from my soapbox, we need to shift our attention to a framework. Administrators certainly need knowledge if they're going to evaluate the intentional moves that teachers make, but using that knowledge most effectively in their work requires a framework. We've already discussed some of the drawbacks of the Danielson Framework, but another limitation is that it doesn't help anyone understand where to start. Regardless of whether it's being used to evaluate a teacher who needs a lot of support and is receiving several "needs improvement" ratings or a teacher who is fairly proficient, one is left wondering what the next best step is. Domain 3 focuses on instruction, so one might assume that's the starting place. But the quality of learning is almost always based on classroom conditions, which is Domain 2. Drilling down further, it doesn't matter how good the classroom environment is if the lesson plan was poorly developed, so we might consider going back to Domain 1. Finally, the quality of the lesson plan is impacted by a teacher's pedagogical content knowledge, and improving one's knowledge falls under Domain 4. When it comes to improving classroom instruction, I might start with Domain 1 because I believe that everything from routines to instruction starts with planning, but another administrator might start with Domain 2 because they feel that culture is the bedrock of instruction. To improve consistency, it would be helpful to have a tool that looks at instruction as a sequence of steps. This type of tool is exactly what we'll discuss on Thursday.


