Fulcher-Rood et al. interviewed school-based SLPs across the United States about how we choose assessment tools and diagnose/qualify our students. They wanted to understand not just which tools we use, but why we choose them, what “rules” we follow when we make diagnostic decisions, and what external factors affect those decisions. We’ve reviewed other surveys of SLPs’ current assessment practices in the past—on the use of LSA, and on methods we’re using to assess bilingual clients—and these findings are pretty similar. There’s a lot of detail in the survey, but we’ll just focus on a couple of things here:
- We give a LOT of standardized tests, and qualify most of our students for services on the basis of those scores, with reference to some established cut-off (e.g., 1.5 SD below the mean)
- We don’t do a ton of language sample analysis (at least the good ol’ record-transcribe-analyze variety)
- We use informal measures to fill in the gaps and show academic impacts, but those results are less important when deciding who qualifies for service
None of this is likely to surprise you, but given what we know about the weaknesses of standardized tests (especially given diversity in home languages, dialects, and SES), the arbitrary nature of most cut-off scores, and the many advantages of LSA and other non-standard measures… it’s a problem.
So, what barriers are we up against when it comes to implementing evidence-based assessment practices? First—let’s say it all together—TIME. Always time. Standardized tests are easy to pull, fairly quick to administer and score, and you often have a handy dandy report template to follow. Beyond that, we’re often subject to institutional guidelines or policies that require (or *seem* to require) standard scores to qualify students for services.
None of the SLPs in the survey mentioned that research was informing their selection of assessment tools or diagnostic decisions. That doesn’t necessarily mean none of them consider the research—they just didn’t bring it up. But guys! We need to be bringing it up! And by “we,” I mean YOU! The person taking your all-too-limited time to read these reviews. The authors of the study pointed out (emphasis mine) that “there are differences between policies (what must be done) and guidelines (how can it be done)... potentially, school-based SLPs interpret some of the guidelines as mandatory, instead of as suggested.” Maybe there’s some wiggle room that we aren’t taking advantage of. We can speak up, evaluation by evaluation, sharing our knowledge of research and best practices.
It all boils down to this: “While it is important for SLPs to adhere to the policies set forth by their employment agency, it is equally important for SLPs to conduct evaluations guided by best practice in the field. SLPs may need to advocate for policy changes to ensure that evidence-based practice is followed.”
Fulcher-Rood, K., Castilla-Earls, A. P., & Higginbotham, J. (2018). School-based speech-language pathologists’ perspectives on diagnostic decision making. American Journal of Speech-Language Pathology. Advance online publication. https://doi.org/10.1044/2018_AJSLP-16-0121