Sunday, December 29, 2019

Implementing Multi-Tiered Support Systems in Reading in the Messy World of Real Schools


Hello Friends!

In preparing for my annual spring-term Literacy Difficulties course, I came across two excellent, recent pieces on implementing MTSS in reading that I have not used in the past (refs at bottom). Both pieces issued from a successful collaboration between a State Department of Education and researchers at a state university, a project focused on supporting four very high-needs schools in developing, implementing, and refining school-wide MTSS systems in reading. Significantly, the project produced the kinds of outcomes that most schools hunger for: two years of implementation of the MTSS systems yielded a +0.5 effect size in students’ reading achievement, meaning that students who received two years of supports accelerated their reading performance by roughly 19 percentile points.
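
(For those who like to see where a figure like that comes from, here is a quick back-of-the-envelope check of the effect-size-to-percentile conversion. It assumes normally distributed scores, the usual basis for this kind of conversion, and is purely illustrative; it is not the authors' own calculation.)

```python
# Quick check: converting a +0.5 effect size into a percentile-point gain,
# assuming normally distributed scores (illustrative only).
from scipy.stats import norm

effect_size = 0.5  # reported effect size, in standard deviation units

# A student starting at the 50th percentile who gains 0.5 standard deviations
# lands at the percentile given by the normal CDF evaluated at 0.5.
new_percentile = norm.cdf(effect_size) * 100
gain = new_percentile - 50

print(f"New percentile standing: {new_percentile:.0f}")  # ~69th percentile
print(f"Gain in percentile points: {gain:.0f}")          # ~19 points
```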

The pieces, which overlap quite a bit, begin with the premise that most schools know, in general, what needs to be done in terms of implementing MTSS in reading but “underestimate the supports that schools need to build systems and infrastructure to implement and sustain these practices.” The authors then go into detail about how the project schools “worked to overcome the complexities inherent in implementing multitiered reading supports in high-priority schools,” providing “examples of how schools involved with the K–3 reading initiative delved into the details to move past barriers and build the systems and infrastructure to implement a comprehensive MTSS framework fully, with fidelity and consistency.” I found the pieces to be highly engaging, filled with practical insights, and extremely challenging, as they describe many specific practices that enabled the schools to be successful but that, quite honestly, most schools typically just don’t have in place.

It would take quite a bit of verbiage to adequately summarize the content of the pieces, and I was unable to upload a chart from the Coyne chapter that would have done it for me. So, here is the briefest of summaries. Both pieces identify a set of four common stumbling blocks, presented in the form of practical statements, that often trip up schools’ attempts to implement MTSS in reading in robust ways: 
  • "We have an MTSS plan, but it doesn't guide our day-to-day reading practice."
  • "We have identified a common approach to Tier 1, but it doesn't seem like there is consistency in reading instruction across teachers and classrooms."
  • "We have useful reading data from our students, but it feels like we are not able to use it to make meaningful instructional decisions for all our students."
  •  "We have students who need intensive small-group instruction, but now what?" 
The pieces then go on to present examples of exactly how the four schools overcame these obstacles, detailing the establishment of literacy leadership teams and the teams' long- and short-term objectives; decisions regarding planning, prioritizing, and meeting; and tools for utilizing data. Again, it is beyond the scope of this post to lay out these "solutions" in detail. But, if these stumbling blocks sound familiar and the idea of actual practical solutions is thought-provoking, email me and I can send the chapter/articles to you in PDF. They provide a much more specific description of the practices and tools that the project schools employed to address these issues.

Overall, I would leave you with the following critical points:

1)    Most schools vastly underestimate the challenges involved in implementing a robust MTSS system in reading and greatly overestimate the degree to which MTSS practices are implemented fully and with fidelity.
2)    Partial implementation of MTSS systems in reading likely will not improve student outcomes, particularly for students with, or at risk for, learning disabilities (Balu et al., 2015). "Halfway" just isn't good enough, and it ends up being a waste of vitally important resources. It really takes “everything and the kitchen sink” to produce the outcomes we desire.
3)    There are schools out there – in extremely challenging settings – that are “getting it right.” If they can do it, you can do it! And, if you can do it, you should do it!
4)    These two pieces will give you plenty of real, practical guidance in constructing a robust MTSS system in reading. I would also be happy to “join the discussion” if your school is genuinely interested in undertaking this process.

Coyne, M. D., Oldham, A., Leonard, K., Burns, D., & Gage, N. (2016). Delving into the details: Implementing multitiered K–3 reading supports in high-priority schools. In B. Foorman (Ed.), Challenges to implementing effective reading intervention in schools. New Directions for Child and Adolescent Development, 154, 67–85.

Leonard, K., Coyne, M. D., Oldham, A., Burns, D., & Gillis, M. B. (2019). Implementing MTSS in beginning reading: Tools and systems to support schools and teachers. Learning Disabilities Research & Practice, 34(2), 110–117.

Monday, December 16, 2019

Long-term, Multifaceted Vocabulary Instruction Produces Eye-Popping Outcomes

Hello Friends!

Time to turn to my greatest passion in terms of literacy research: vocabulary instruction. Decades of research have demonstrated that vocabulary knowledge is critical to students’ long-term academic achievement, particularly with regard to reading comprehension (August, Carlo, Dressler & Snow, 2005; Chall, Jacobs, & Baldwin, 1990; Cromley & Azevedo, 2007; Davis, 1968; Lesaux, Crosson, Kieffer, & Pierce, 2010; Senechal et al., 2006). Over the past decade, I have conducted three long-term vocabulary instruction research studies, collaborating with teachers in each of grades 2-5 to develop, implement, refine, and test multifaceted vocabulary instruction that is feasible for teachers and produces "more than expected growth" in general vocabulary knowledge for students at all levels and language backgrounds. My teacher colleagues and I have published a number of practical descriptions of our instruction in The Reading Teacher over the last few years, and just recently I have finished research articles on the 4th-5th grade study and the 3rd grade study that are winding their way through the review and publication process. I thought that I would share the most salient outcomes from the Vocabulary and Language Enhancement (VALE) studies here.

First, here is a little overview of VALE instruction (at 3rd Grade - the framework stays the same across the grades, but emphases shift a bit...). VALE was designed to address what I consider to be the "3 Domains" of vocabulary instruction: Quality (teaching high-value words intensively), Quantity (providing efficient instruction in larger sets of related content/curriculum words), and Strategy (teaching word-learning strategies). Here is a look at how this played out in 3rd Grade:

Word-learning-strategy instruction
  • Focus of instruction: Explicit instruction in common affixes, the word-part strategy, and use of context cues to infer word meanings
  • Schedule of instruction: 15 weeks x 2 days a week, 20-30 minutes a day

Word flooding
  • Focus of instruction: Efficient introduction and review of large sets of related content words or words taken from literary read-aloud texts
  • Schedule of instruction: Approximately 10 content units and 10 read-aloud texts during the year; brief routines (5-15 minutes) for introducing and reviewing vocabulary interspersed on multiple days per week

Target word instruction
  • Focus of instruction: Robust instruction on 12 target words per week
  • Schedule of instruction: 3-4 days a week, 10-20 minutes of activities a day

Teaching vocabulary for application
  • Focus of instruction: Sentence imitating to support application of connectives in writing
  • Schedule of instruction: 20-25 weeks x 2-4 days a week, 10-15 minutes a day
  • Focus of instruction: Character trait vocabulary to support application of terms in analytic discussion and writing
  • Schedule of instruction: 15-20 weeks; routines for introducing vocabulary, text analysis, and essay writing interspersed on multiple days per week

So, how did the kids do? Vocabulary researchers typically give researcher-designed tests of specifically-taught words/strategies and a standardized test of general vocabulary. Commonly, these studies show that students outperform control students on specifically-taught words but not on normed tests of general vocabulary. Our goal in VALE was to teach vocabulary in such a way that the students would, indeed, show accelerated growth in general vocabulary. I use the Gates-MacGinitie test of general vocabulary as a standardized measure and "Normal Curve Equivalents" (NCEs) as the unit to compare our classes vs. the norming sample. In general, the expectation with NCEs is a "0 effect size," meaning that you would expect a class that starts the year at an NCE of 50 to end the year at an NCE of 50.

Here are the effect sizes for the 3 years of the 3rd Grade study: 1.45, 0.93, 1.17. These effect sizes demonstrate that the VALE students, in each year of the project, made greatly accelerated growth in general vocabulary knowledge in comparison to the norming population. A clear sense of the practical importance of the VALE students’ growth comes from their grade-equivalent scores: this conversion revealed that the VALE students made 3.6, 2.3, and 3.2 years of growth in general vocabulary knowledge, respectively, in the course of one academic year! Importantly, there was no significant difference in growth between the lower- and higher-scoring students from fall testing, meaning that all students, on average, made comparable growth regardless of beginning scores.
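
(For readers who want a concrete sense of how an NCE-based effect size can be computed, here is a minimal sketch. It assumes the effect size is simply the mean fall-to-spring NCE gain divided by the NCE standard deviation of 21.06; the published VALE articles may use a different formula, and the fall/spring values below are hypothetical.)

```python
# A minimal sketch of one way to express growth against a norming sample using
# Normal Curve Equivalents (NCEs), which are scaled to a mean of 50 and a
# standard deviation of 21.06. NOTE: this is an illustrative assumption; the
# VALE articles may compute their effect sizes differently.
NCE_SD = 21.06

def nce_effect_size(fall_mean_nce: float, spring_mean_nce: float) -> float:
    """Mean fall-to-spring NCE gain expressed in standard deviation units.

    Because the norming population is expected to hold its NCE standing
    (i.e., a gain of 0), any positive value reflects growth beyond
    normative expectations.
    """
    return (spring_mean_nce - fall_mean_nce) / NCE_SD

# Hypothetical example: a class moving from a mean NCE of 50 in the fall
# to a mean NCE of 71 in the spring.
print(round(nce_effect_size(50.0, 71.0), 2))  # ~1.0
```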

The Grade 2 study, conducted in a dual immersion school, involved 6 classes, each taught by a team of two teachers, one teaching in English and one teaching in Spanish. The vocabulary instruction occurred in both English and Spanish, although it looked somewhat different in each language. In the second year of the project, the effect size for growth in English vocabulary knowledge across the 6 classes was 0.42. This effect size is on the high end of “small” (range for small = 0.2-0.5). These outcomes were achieved in a dual immersion setting in which the students spent only half as much time in English instruction as the norming group. In simple terms, these students showed greater gains in general English vocabulary than the norming group despite spending half as much time in English instruction! There was no significant difference in growth rate between English Learners and native English speakers - both groups gained equally.

Overall, I am thrilled with these outcomes! As I mentioned at the beginning, it is common for vocabulary instruction research to produce gains in specifically taught items but not in general vocabulary knowledge. And, while the former is not unimportant, it is the latter that really holds the potential - particularly if it were to occur over several years - to impact students' more general academic performance.

Thanks for taking the time to read about the VALE research. Be on the lookout for more info on these projects here on the CLC, including descriptions of the instructional practices. What kind of questions, thoughts, or connections to your setting does the work raise for you?

Powerful New Research on the Effects of Systematic PD in Phonics Instruction

Hello Friends!

This is a research study that I am sharing far and wide. It is also highly relevant to the previous blog post on the journalistic article focused on the "cueing system" approach to beginning reading. So, I wanted to share the key points here. The study was authored by Linnea Ehri, the "grande dame" of word recognition research. Here is a bit of the abstract that introduces the details of the study:

Teaching systematic phonics effectively to beginning readers requires specialized knowledge and training which many primary grade teachers lack. The current study examined effects of a year-long mentoring program to improve teachers’ knowledge and effectiveness in teaching phonics and the extent that it improved students’ achievement in reading and spelling. Teachers in urban, lower SES schools completed a 45 h course followed by 90 h of in-school training. Mentors (N = 29) worked with kindergarten, 1st, 2nd, and 3rd grade teachers (N = 69) twice a week for 30 weeks during the year. Each visit included a 45 min prep period plus 45 min of modeling and feedback in the classroom. Mentors taught teachers how to provide systematic phonics instruction to their students (N = 1,336). Monthly ratings by mentors revealed that teachers improved their phonics teaching skills with many reaching the highest ratings by May. Teachers’ agreement with principles of phonics instruction remained strong or increased from fall to spring. Students’ reading and spelling skills showed large gains during the year and far exceeded effect sizes from comparable data sources. Students met grade-level expectations at the end of kindergarten and first grade.

Just how large were the student gains vs. the norming sample of the Gates reading tests that were used? In first grade, the effect size for decoding was 1.67 (2.22 for ELs)! This is an enormous effect! In second grade, the effect size was 1.05 for decoding and 1.01 for reading comprehension; not as large as first grade, but still extremely large (even on the test of comprehension!).
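
(As a rough way to picture what effect sizes of this magnitude mean, the small sketch below translates them into percentile terms, assuming normally distributed scores. It is an approximation for illustration only, not a calculation taken from the study.)

```python
# Rough translation of the first-grade decoding effect sizes into percentile
# terms, assuming normally distributed scores (approximation for illustration).
from scipy.stats import norm

for label, d in [("All students", 1.67), ("English learners", 2.22)]:
    standing = norm.cdf(d) * 100
    print(f"{label}: average student at roughly the {standing:.0f}th "
          f"percentile of the norming sample")
# All students: ~95th percentile; English learners: ~99th percentile
```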
 
Although it never seems particularly "sexy" to talk about phonics instruction in this day and age, these kinds of gains in reading in a setting with significant lower-SES and EL student populations strike me as what so many schools chase but rarely achieve. What produced these gains? Providing teachers with extensive professional development to enhance their expertise in implementing systematic phonics instruction across the primary grades. Seems like a "doable" approach to me! What do you think?

Cueing Systems and the New Reading Wars

Hello All,
A former student recently sent this link to me, asking about my thoughts on the article:
http://www.apmreports.org/story/2019/08/22/whats-wrong-how-schools-teach-reading. It is a lengthy, award-winning journalistic piece reporting on the teaching of beginning reading, where schools and certain curriculum materials might go wrong, etc... I found the reporting to be quite in-depth and the piece to be an excellent read. It calls into question that most durable of school hallway "folk theories," the 3 Cueing Systems. I particularly appreciate the way that the author fastidiously clarifies terms and holds views accountable for the "necessary consequences" that issue from them. This is not easy to do! I will be fascinated to see if this argument gains further traction in the public domain in the coming years.

We discussed this article in my Elementary Literacy Research and Instruction grad class this term. One student asked a very pertinent question: "I do not understand how teaching (both) cueing systems and phonics does not work. How does one negate the other?" To me, this really gets at the heart of things, as it would be shocking for a teacher today to utterly ignore phonics, even if they were committed to the cueing system model. Here was my response:

I think that there are "degrees of danger" associated with the cueing system. The highest degree would be teaching contextual cueing to the total exclusion of phonics. I don't think that anyone these days would do that. The next degree would be teaching context strategies in a way that displaces a fair amount of phonics/decoding instruction at a critical time when many students need all of the latter they can get. I often say that many students need not only the right stuff, but they also need it at the right intensity. To me, this kind of "displacement" is a real danger, as I have heard teachers talk about their instruction in ways that indicate it is a reality. The next degree would be that instruction in the cueing system doesn't take away much at all from phonics instruction but that it does, to some degree, push kids away from engaging in at least some of the hard decoding work that we know is critical to becoming competent early readers. This might not hinder many students, but it may indeed be an obstacle for some. These days, I would say that this (lower) level of "danger" is probably the most common.

Now, overall, I think that the question of "Does one negate the other?" is right at the heart of the matter. In the end, I always want students to have an "experience" with a given word that enables them to read that word "better" or more easily the next time they see it. To my mind - and I think that research overwhelmingly supports this - the key ingredient in such an experience is thorough analysis of the grapheme-phoneme relationships in that word. And, the best, easiest pathway for kids to engage in this analysis is to "decode," to use knowledge of phonics patterns - whether sequential (e.g., CVC) or hierarchical (e.g., VCe, etc...) - to carefully read a word. When they do this, it sets them on the road to creating lasting bonds in memory between the word's spelling and pronunciation. Reading a word through use of context might undermine this experience in two ways. First, they might get in the habit of using something like a picture cue to "read" the word and then not go back and carefully analyze the actual printed word. This habit would slow down their "automatizing" of whatever words they do this with. Second, they may not form the habit of doing the hard work of trying to analyze/decode unfamiliar words and instead develop the habit of looking for other pathways to guess and move on.

Significantly, there was a research study that demonstrated that a group of first graders who made more "non-word errors" - i.e., tried like the dickens to sound out an unfamiliar word and ended up with a non-word approximation - ended up better readers late in the year than kids who substituted meaningful words for such unfamiliar words. So, in other words - mindset and habits matter. Kids who develop the mindset and habit that they are going to try very hard to use their phonics knowledge and thoroughly analyze unfamiliar words are better off in the long run than those who slip off into the practice of making context-facilitated guesses. To me, this does suggest that an emphasis on the cueing system can, at times, "negate" the benefits of phonics instruction by pushing kids away from the mindset and habit of doing the hard work of word analysis.

If you have the time, check out the article and share your comments or questions here!

Personally-Valued and Research-Based Practice in Literacy Instruction

If you have been in one of my graduate classes in literacy education, it is quite likely that I began the class by discussing the concepts of “personally-valued practice” and “research-based practice.” I believe that these are valuable concepts that explain a lot of what goes on with regard to literacy instruction at the classroom level and that can be used by teachers and schools to reflect on and evaluate their current literacy instruction and how it measures up to relevant research. So, I thought that it would be valuable to introduce them here, in the inaugural post for this restart of the CLC blog.

I believe that literacy instruction in most classrooms represents a mixture of personally-valued practices and research-based practices, and I think that it is critical that teachers grow increasingly able to distinguish between the two.

Personally-valued practices are instructional approaches, activities, or routines that are based on our philosophies of childhood, learning, or literacy (preexisting personal beliefs or values about how kids learn, what they “need,” etc…), on personal history and anecdotal evidence (“I have always done it this way, and it seems to really work…”), or simply on what we like to do with kids and think that they find engaging, enjoyable, etc... Note that what is absent from this description is clear evidence that an instructional approach has empirical research support, i.e., that there are research studies that demonstrate that it worked better than other approaches in one or more settings. Personally-valued practices simply don’t have this kind of clear research to support them; we simply can’t say, with any surety, that they are good for the students that we teach.

In contrast, research-based practices are those instructional approaches, activities, or routines that do have clear research support, meaning that they have been shown to produce greater outcomes than other approaches in carefully conducted research in one or more settings. Now, research findings have varying degrees of limitations. A practice may have produced greater learning in one study in a suburban 5th grade, and, thus, we may not know if it will produce the same benefits in a setting with a high percentage of English learners. Another practice may have been the focus of numerous intervention studies in many different settings, and, in each case, it may have produced benefits. Thus, we could turn to the instruction in the second case with great confidence that it is good for students. In the first case, we could say that we know that the practice has worked in one setting and that there is a rationale for trying it in our setting, but that we better watch closely to see if it is producing the outcomes that we are seeking.

At this point, I should also make a point that I have heard Tim Shanahan make. “Research support” vs. “no research support” doesn’t mean that the former produces wildly positive gains for every student and that no one learns anything in the latter. It is not all-or-nothing. Generally, a research-based practice shows small-to-medium benefits in learning over another practice. Said another way, kids in nearly all forms of instruction do inevitably “develop” as readers or writers, but research-based instruction offers the best chance to maximize their learning.

Now, back to the distinction between personally-valued and research-based practice. To my mind, we owe it to “other people’s kids” to do what we know gives those kids the best chance to maximize their learning. Said another way, implementing personally-valued practices because “We believe that it is good for kids,” “We really think that they need this,” “We have always done it this way,” or “The kids really like it” basically puts a teacher’s needs or wants before the students’ needs. But, as teachers, we have already “made it:” we have college degrees, a career, a steady income, etc… Our students, on the other hand, haven’t obtained these things. Their futures hang in the balance, dependent, to some degree, on their teachers’ knowledge, decisions, quality of instruction, etc… In this situation, I believe that to displace research-based instruction with personally-valued practice is irresponsible, akin to “instructional malpractice.”

In many cases, I think that problems arise because we believe that personally-valued practices are indeed research-based practices. Trust me, I have been there! And, in these days, when literacy instruction has been highly commercialized, with private companies literally making hundreds of millions of dollars marketing and selling literacy instruction curricula and materials, it can be very difficult to wade through the glitzy marketing and various claims of “research-based.” The renewed “Reading Wars” further confuse the issue, as the competing sides and their respective "experts" claim that their favored methods are “research-based.” Consequently, I believe that teachers and schools need a high degree of expert knowledge on literacy development and literacy instructional research.

How do we get there? It is always helpful to begin with what are considered “consensus conclusions” in the field. These are conclusions that are typically supported by numerous studies and the vast majority of leading researchers. For example, the National Reading Panel report provides what can be considered “consensus conclusions.” Another important consideration is to look for “external” research. It is never a great idea to rely on research on an approach that was conducted by the very people who are selling the approach! It is also absolutely critical to keep learning, keep questioning, keep holding a “tentative” view on your instructional practices (i.e., “I think that these are research-based activities, so I am going with them. But, if and when I hear otherwise, if and when I find compelling evidence to the contrary, then it will be time to change.”). Finally, perhaps the most important step is to honestly evaluate your “local data.” Is what you or your school is doing producing robust outcomes for all kids (and not just those who would thrive in nearly any situation)? If not, do you know enough about the research in the areas that seem most relevant to your outcomes? How can you get highly pertinent research in your hands? Do you know someone who could help you think through the findings and their applications in your setting? 

Wow, that was a lot more long-winded than I had in mind! So, if you are still with me, thanks! I believe that this is a vital issue that each school and teacher faces today. If we are going to do the very best by our students, which I know is the intention of every teacher out there, then we need to rigorously reflect on where/whether we are providing research-based instruction or have fallen into personally-valued instruction. Hopefully, over time, this blog will serve as a trustworthy resource to you, spurring and informing you as you engage in this reflection and strive to fill your days with research-based reading and writing instruction.