As a result of the 2004 reauthorization of the federal Individuals with Disabilities Education Act (IDEA), public »ÆÉ«apps may now use new approaches to evaluate a child for specific learning disabilities (SLD). To help parents better understand both the traditional “aptitude-achievement discrepancy” approach and the newer “responsiveness-to-intervention” (RTI) approach, and how each might affect their child, Daryl Mellard, Ph.D., a principal investigator at the National Research Center on Learning Disabilities at the University of Kansas, Lawrence, answers some questions about the two approaches. The second article in this series discusses what parents can expect if RTI is implemented at their child’s »ÆÉ«app.
What was the original rationale for using the aptitude-achievement discrepancy method to identify specific learning disabilities (SLD)? Why has it been criticized in recent years?
It’s interesting to note that reading clinicians used the concept of “less-than-expected” reading achievement before the aptitude-achievement discrepancy approach was linked with SLD (e.g., Franzen, 1920; W.S. Monroe, 1921). Researchers, clinicians, and parents had noted a group of students who were not achieving in a particular academic area at a level one would expect, in comparison to their achievement in other areas. These students had particular deficits, for example, a severe reading deficit, and at the same time showed remarkable strengths or high achievement in other areas. So, the central concepts were underachievement in a specific area of deficit, and strong abilities and skills in other areas. In 1977, when regulations were first adopted for implementing the Individuals with Disabilities Education Act, the method by which SLD would be identified was a controversial issue. Consensus was eventually reached that we would assess students using the aptitude-achievement discrepancy approach, which became the “test” for underachievement in a particular academic area.
The discrepancy approach had advantages from an administrative point of view: Its simplicity made it efficient. One could assess the level of a student’s underachievement by administering one or a few achievement tests and comparing his scores on those tests to his aptitude score. Such calculations offered an assumed level of precision that had appeal. Both the assessments themselves and the formulas for calculating the discrepancy could look pretty sophisticated, and in many ways they were. In our desire to simplify the complex, labor-intensive, and costly assessment process, the aptitude-achievement discrepancy offered a solution! The model looked good on the surface.
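To make the arithmetic concrete, here is a simple, purely hypothetical sketch of how a standard-score discrepancy might be computed; the scores and the 1.5-standard-deviation cutoff below are illustrative assumptions, not the criteria any particular state actually used.

```latex
% Hypothetical example: both tests report standard scores (mean 100, SD 15).
% Assume an aptitude (IQ) score of 110 and a reading achievement score of 85.
\[
\text{discrepancy} = \text{aptitude} - \text{achievement} = 110 - 85 = 25
\]
% If the criterion were a gap of 1.5 standard deviations (1.5 x 15 = 22.5 points),
% this 25-point discrepancy would be flagged as "severe."
```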
However, in practice, researchers and educators made a terrible mistake in that they failed to realize the limitations of the aptitude-achievement discrepancy. From an SLD perspective, the model was clearly insufficient. A significant discrepancy between a student’s aptitude and achievement only indicates the severity of underachievement; it is not a test of SLD. The research literature suggests that students with SLD are underachieving (Swanson, 2000), but not all underachieving students have SLD. A medical analogy might be helpful here. Elevated temperature is a common, measurable symptom of illness. We use a thermometer to check for the discrepancy between a child’s temperature and what we consider a normal temperature, 98.6 degrees. All you can say about a child with a high temperature is that, first, he’s “hotter” than expected, and further tests are needed to understand why his temperature is high; and, second, an intervention is likely needed.
From a parent’s and teacher’s perspective, this issue is significant because the scores in a discrepancy calculation tell us nothing about the underlying basis for the child’s underachievement. The discrepancy is the product of a large number of influences. Some are intrinsic to the student, such as limited aptitudes for reading acquisition, short attention span, difficulties with pattern recognition, poor working memory, or low self-regulating or self-monitoring performance. Others are part of the home, instructional, and curricular opportunities, including lack of exposure and practice with pre-academic skills such as rhyming words, inconsistent or insufficient practice with academic skills, lack of a sufficiently organized instructional environment, or changing »ÆÉ«apps and curricula due to family relocations. From a teacher’s perspective, understanding the basis of the discrepancy is not so important, because the major concern is getting the student help beyond what is available in the general education classroom.
Too often in SLD assessments, one finds a discrepancy between a student’s aptitude and achievement and jumps to the conclusion that an intervention is necessary. Insufficient time is spent trying to understand the basis for the discrepancy, so significant errors are likely made in people’s good-faith efforts to help. Unless you have a good understanding of what’s causing the discrepancy, you really don’t know how best to help a child learn.
Could you describe how the “responsiveness-to-intervention” (RTI) approach to identifying a specific learning disability is supposed to work?
RTI has two applications. The first is that of a prevention model to limit the amount of academic failure all students experience, not just those who have an SLD. Or, stated in a more positive way, RTI helps to ensure that, at the first sign of problems, a student receives the academic supports he needs to be successful. The second application of RTI is determining whether a student has an SLD. Both applications are very important, but clearly the second requires a higher degree of integrity and precision because the outcome – judging whether or not a student has a disability – has important lifelong implications for that student and his family.
The fundamental RTI concept is that students receive the high-quality instruction and intervention that enables them to be successful. RTI involves frequent, ongoing classroom-based assessment of a student’s progress in specific academic areas (e.g., basic reading skills, reading comprehension, math calculation, and written expression) and behavioral areas (e.g., attending to tasks, completing tasks on time, and appropriate interpersonal interactions). As soon as a student starts to lag behind his peers in any academic or behavioral area, he receives more intense instruction in that area. After a specified period of time, if he is still underachieving relative to his classmates, in spite of more intense instruction, he is provided with an even more intense instructional intervention. So, RTI is designed to catch any individual child’s underachievement early, and to address the problem in a very individualized way.
One of the wonderful advantages of RTI is the broad application and benefit that is potentially available for all students. To illustrate with an analogy, we can think of RTI as similar to a public health model in which we have tiers of increasingly intense interventions for disease, which we direct at smaller and smaller segments of the population. In public health, the large population gets wellness information on how to stay healthy and receives basic, broad vaccinations. That’s the first or “primary” tier of intervention. In spite of this primary tier of intervention, however, some members of the population might get ill. Or we might discover, as the result of large-scale screening of the population, that some people need more specialized treatment. This level of specialized treatment is considered the secondary level of intervention, which is not for the general population, but for a smaller segment of maybe 10 to 15% of the total population. Even within this second-tier group, though, some persons, 5% or so, are going to need further, very specialized interventions. This highest level is referred to as the tertiary level of intervention and is, by its design, the most intense and likely the most costly level of intervention.
RTI can work as the public health model applied to students’ »ÆÉ«app performance. School staff provides a high-quality education for all students and conducts screenings to ensure that everyone is benefiting from that education. For students whose academic screening results suggest that a closer look – including a more refined, specific assessment – and a more intense intervention are needed, the »ÆÉ«apps will have procedures to ensure that the appropriate services are provided, and that the student’s progress (or lack of progress) in response to that intervention is monitored.
Is there research to support RTI’s effectiveness in identifying and providing academic interventions for kids with SLD?
All students benefit from having instructional and curricular approaches that are closely matched to their current individual level of functioning and need. That’s an essential feature of RTI, which makes it a wonderful model of instruction. The research has demonstrated through a number of studies (Mellard, Byrd, Johnson, Tollefson, & Boesche, 2004) that an RTI framework can benefit youngsters by addressing academic difficulties in an individualized and timely way.
On the other hand, we don’t have good data yet on broad application of RTI as a model for identification of SLD. We lack data on how effectively and reliably RTI works for SLD determination. When the National Research Center on Learning Disabilities at the University of Kansas solicited nominations from »ÆÉ«app districts of »ÆÉ«apps implementing exemplary RTI practices, most of the »ÆÉ«apps about which we received information were using RTI in a prevention model and not as an approach specifically aimed at determining which children had SLDs. And we have almost no data yet on how RTI models might work in middle »ÆÉ«apps or high »ÆÉ«apps. Further, the available research is largely limited to RTI models that focus on reading interventions with primary-»ÆÉ«app-age youngsters. Some research is emerging on students whose difficulties are in math, but those findings come from research settings, not »ÆÉ«app-based adoptions. So »ÆÉ«apps that adopt the RTI approach will be developing those models on their own.
Some advocates of the RTI approach suggest that the information obtained from RTI assessments is sufficient for SLD determination. As mentioned earlier, when a youngster has completed multiple RTI intervention tiers and the interventions have had minimal success, the instructional and diagnostic staff (e.g., »ÆÉ«app psychologists, reading teachers, or language therapists) still does not know why the implemented interventions were unsuccessful, or which interventions might work. To garner that important information, other assessment approaches will be needed, including extensive histories on health, development, and education; family education data; information-processing abilities (e.g., working memory, attention, sensation level, and self-monitoring); and overall intellectual capacity.
On the surface, a well-designed, rigorous RTI implementation should have great benefits. Almost any assessment model that looks at youngsters across time is clearly superior to a snapshot of a child’s performance at a single moment in time, which is what a »ÆÉ«app’s multi-disciplinary team gets when it administers a series of standardized tests. How well we can scale up the research-based RTI models into general education classrooms remains to be seen. The challenges and potential benefits are significant.
What are the shortcomings of the responsiveness-to-intervention approach?
As I mentioned earlier, no controlled studies have yet been conducted on how RTI works for SLD determination, on the implementation of RTI in any setting above elementary »ÆÉ«apps, or on a comparison of cost effectiveness or broad-scale application in multiple districts or across time. This latter question will likely be particularly difficult to answer because »ÆÉ«apps are very dynamic settings. The dynamics of »ÆÉ«apps change as staff turns over, new curricula are adopted, instructional grouping is reorganized, or other federal (e.g., No Child Left Behind), state (e.g., high-stakes testing), or local initiatives compete for attention and resources.
Another concern is that our view of the essential nature of disability within the context of public education will change. We already have variation across »ÆÉ«apps and districts in which student characteristics are considered to constitute a disability. Many of the desirable features of RTI, such as classroom-level screening and progress monitoring, could also undermine the current assumption that a learning disability is a unique condition with a particular constellation of student characteristics, which requires a specific kind of intervention. This could allow general low achievement due to low socio-economic status or other environmental influences to become a more dominant factor in disability determination. If we change our view of the essential nature of SLD from the historical focus on students with unexpected underachievement to a focus on students with generalized low achievement in spite of high-quality instruction, we should be clear that such a shift is acceptable.
What aspects of the discrepancy approach to identifying learning disabilities might be useful to retain?
That’s a tough question because conventional wisdom, numerous researchers, and even federal political appointees have criticized almost everything that looks like any form of the discrepancy approach. On the other hand, anecdotally, the students I know who were referred for evaluation because of academic problems, and who evidenced a significant discrepancy between aptitude and achievement in a specific academic skill (e.g., word recognition or reading comprehension), needed intense interventions. So the evidence of the discrepancy was helpful in confirming the presence of a significant academic problem, and, in comparison to the RTI models, was quick to obtain.
Another consideration is that discrepancy formulas that look at differences among achievement scores or particular abilities can help pinpoint particular areas of concern. For example, we can calculate whether a student’s reading recognition scores are reliably different from his reading comprehension scores. We use a variety of assessments in our comprehensive evaluations, and looking at a profile of scores can help us determine if one set of skills is significantly stronger or more deficient than another set of skills. We make those profile comparisons using some form of a discrepancy formula. In this application of discrepancy we have a consistent and objective standard for judging differences among a student’s scores. As a parent, you may find such information helpful, but you must also recall that the formula does not provide a good explanation for why one score might be significantly lower than another score, or which intervention might help to improve the learning or performance. If parents understand the basics of how the discrepancy approach and RTI work, they’ll have a clearer idea about what the »ÆÉ«app’s assessment can actually tell them about their child’s strengths and needs.
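To make the idea of a “reliable difference” between two scores more concrete, here is a rough sketch of one common psychometric calculation; the subtest scores and reliability values are hypothetical, and actual evaluators may use different formulas or cutoffs.

```latex
% Hypothetical subtest standard scores (mean 100, SD 15):
%   reading recognition = 102, reading comprehension = 84, difference = 18.
% Assume subtest reliabilities r_1 = 0.90 and r_2 = 0.88.
\[
SE_{\text{diff}} = SD\sqrt{2 - r_1 - r_2} = 15\sqrt{2 - 0.90 - 0.88} \approx 7.0
\]
% A gap larger than about 1.96 x SE_diff (roughly 14 points) is unlikely to be
% due to measurement error alone, so this 18-point difference would be judged
% reliable; as noted above, that still does not explain why it exists.
```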
References
- Mellard, D.F., Byrd, S.E., Johnson, E., Tollefson, J.M., & Boesche, L. (2004). Foundations and research on identifying model responsiveness to intervention sites. Learning Disabilities Quarterly, 27(4), 243–256.
- Swanson, H.L. (2000). Issues facing the field of learning disabilities. Learning Disabilities Quarterly, 23, 37–50.
Reviewed, 2010