What’s driving our clinical decision-making?

We know a lot about what types of assessment tools SLPs tend to use (see here, here, and here, for example), but we don't know much about how we synthesize and prioritize the information we gather in those assessments to come up with a diagnosis (or lack thereof). How do we reconcile inconsistent results? What factors tend to carry the most weight? How much do outside influences (e.g. policies and caseload issues) affect our decisions? Two different studies this month dive into the minds of SLPs to begin answering these questions.

Fulcher-Rood et al. begin by pointing out that school-based SLPs receive conflicting information on how to assess and diagnose language disorders from our textbooks, our federal/state/local guidelines and policies, and the research. So how do we actually approach this problem in real life? To learn more, they used a pretty cool case study method: a full set of assessment results was available for each of five real 4- to 6-year-olds (cognitive and hearing screenings, parent/teacher questionnaires, three different standardized tests, and two different language samples, transcribed and analyzed against SALT norms), but the 14 experienced SLPs who participated only saw the results they specifically asked for to help them make their diagnoses. This better reflects actual practice than just giving the SLPs everything upfront, because in school settings you're for sure not going to have SPELT-3 scores or LSA stats to consider unless you're purposefully making that happen. The case studies were chosen so that some showed a match between formal and informal results (all within or all below normal limits), whereas others showed a mismatch between formal and informal testing, or overall borderline results. Importantly, SLPs were instructed not to consider the "rules" of where they work when making a diagnosis.

Here were some major findings:

  • Unsurprisingly, when all data pointed in the same direction, SLPs were unanimous in determining that a disorder was or wasn’t present.

  • When there was conflicting information (standard scores pointed one direction, informal measures the other), almost all the SLPs made decisions aligning with the standardized test results.

  • Across cases, almost all the SLPs looked at CELF-P2 and/or PLS-5 scores to help them make a diagnosis, and in most cases they asked for parent/teacher concerns and language sample transcripts as well. A third of the SLPs didn’t ask for LSA at all.

  • Only a few SLPs used SPELT-3 scores, and no one asked for language sample analyses that compared performance to developmental norms.

These results reinforce what we learned in the survey studies linked above: SLPs use a lot of standardized tests, combined with informal measures like parent/teacher reports, and not so much language sampling. What's troubling here is the under-utilization of tools with a really good track record of diagnosing language disorders accurately (like the SPELT-3 and LSA measures), as well as over-reliance on standardized test scores that we know can be problematic—even when there's tons of other information available and time/workplace policies aren't a factor.

The second study, from Selin et al., tapped into a much bigger group of SLPs (over 500!) to ask a slightly different question:


Under ideal conditions, where logistical/workplace barriers are removed, how are SLPs approaching clinical decision-making? And what about the children, or the SLPs themselves, influences those decisions? 

Their method was a little different from the first study. SLPs read a paragraph about each case, including standard scores (TOLD-P:4 or CELF-4, PPVT-4, GFTA-2, and nonverbal IQ) and information about symptoms and functional impairments (use of finiteness, MLU, pragmatic issues, etc.). Rather than giving a diagnosis, the SLPs made eligibility decisions—should the child continue to receive services, and if so, in what area(s) and what type of service (direct, consultation, monitoring, etc.)?

The survey method this team used yielded a TON of information, but we’ll share a few highlights:

  • Freed from the constraints of caseloads and time, SLPs recommended continued service more often than we do in real life. We know that workplace policies and huge caseloads can prevent us from using best practices, but it’s helpful to see that play out in the research. It’s not just you!

  • Six cases were specifically set up to reflect the clinical profile of Specific Language Impairment*, but when determining services and goal areas, SLPs' choices didn't consistently align with that profile. Even when a case was consistent with SLI, services weren't always recommended, and when they were, the goals didn't necessarily correspond to the underlying deficits of the disorder. As a group, then, our operational knowledge of EBP for language disorders has a lot of room for improvement. Unlike with speech sound disorders, SLPs were not sensitive to clinical symptoms of SLI (tense/agreement errors, decreased MLU) when making eligibility decisions.

  • Yet again, SLPs relied heavily on standardized scores, even when other evidence of impairments was present.  

So what can you do with all this information? First of all, think about what YOU do in your language assessments. What tools do you lean on to guide your decisions, and why? Are you confident that those choices are evidence-based? Second, keep doing what you're doing right now—learning the research! There's a ton of work being done on assessment and diagnosis of language disorders, use of standardized tests, and LSA (hit the links to take a wander through our archives!). Taking a little time here and there to read up can add up to a whole new mindset before you know it.

*SLI, or developmental language disorder (DLD) with average nonverbal intelligence.

 

Fulcher-Rood, K., Castilla-Earls, A., & Higginbotham, J. (2019). Diagnostic Decisions in Child Language Assessment: Findings From a Case Review Assessment Task. Language, Speech, and Hearing Services in Schools. https://doi.org/10.1044/2019_LSHSS-18-0044

Selin, C. M., Rice, M. L., Girolamo, T., & Wang, C. J. (2019). Speech-Language Pathologists' Clinical Decision Making for Children With Specific Language Impairment. Language, Speech, and Hearing Services in Schools. https://doi.org/10.1044/2018_LSHSS-18-0017

A one–two punch for assessing young Spanish–English learners

Do you serve pre-K or kindergarten-aged kids? Are some/lots/all of them from Hispanic backgrounds and learning Spanish AND English? Mandatory reading right here, friends!

So—a major issue for young, dual-language learners? Appropriate language assessments. We talk about it a lot (plus here, here, here, and here, to name a few). In this new study, the authors compared a handful of assessments to see which could most accurately classify 4- and 5-year-olds (all Mexican–American and dual-language learners) as having typical vs. disordered language.


The single measure with the best diagnostic accuracy was two subtests of the Bilingual English-Spanish Assessment (BESA)—Morphosyntax and Semantics (the third subtest is phonology, which they didn’t use here). But to get even more accurate? Like, sensitivity of 100% and specificity of about 93%? Add in a story retell task (they used Frog, Where Are You?). Sample both Spanish and English, and take the better MLUw of the two. This BESA + MLU assessment battery outperformed other options in the mix (English and Spanish CELF-P2, plus a composite of the two, a parent interview, and a dynamic vocab assessment).

Not familiar with the BESA? It’s a newer test, designed—as the name implies—specifically for children who are bilingual, with different versions (not translated) of subtests in each language. If you give a subtest in both languages, you use the one with the highest score. And before you ask—yes, the test authors believe that monolingual SLPs can administer the BESA, given preparation and a trained assistant.

Now, the researchers here don’t include specific cut scores to work with on these assessments, but you can look at Table 2 in the paper and see the score ranges for the typical vs. disordered language groups. They also note that an MLUw of 4 or less can be a red flag for this group.
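If you're curious what the MLUw piece looks like in practice, here's a minimal sketch (ours, not the authors') of taking the better MLUw across a child's two story retells and checking it against that red-flag value of 4. The function name, the simplified word-counting rules, and the sample utterances are all hypothetical; real language sample analysis (SALT conventions, for example) involves much more careful transcription and segmentation.

```python
# Minimal sketch: compute MLU in words (MLUw) for a story-retell sample in each
# language, keep the better of the two, and flag MLUw <= 4 as a possible red flag.
# Word counting here is simplified (whitespace tokens); real LSA rules are stricter.

def mluw(utterances):
    """Mean length of utterance in words: total words / total utterances."""
    word_counts = [len(u.split()) for u in utterances if u.strip()]
    return sum(word_counts) / len(word_counts) if word_counts else 0.0

# Hypothetical transcripts (not from the study), one list of utterances per language.
english_retell = ["the frog jumped out", "the boy looked in the jar", "he was sad"]
spanish_retell = ["la rana saltó", "el niño buscó en el frasco", "estaba triste"]

best_mluw = max(mluw(english_retell), mluw(spanish_retell))
print(f"Better MLUw across languages: {best_mluw:.2f}")

if best_mluw <= 4:
    print("MLUw of 4 or less: possible red flag for this group, per the authors.")
```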

The major issue with this study, affecting our ability to generalize what it tells us, is that the sample size was really small—just 30 kids total. So, take these new results on board, but don’t override all that other smart stuff you know about assessing dual-language learners (see our links above for some refreshers if needed). And keep an eye out for more diagnostic studies down the road—you know we’ll point them out when they come!

 

Lazewnik, R., Creaghead, N. A., Smith, A. B., Prendeville, J.-A., Raisor-Becker, L., & Silbert, N. (2018). Identifiers of Language Impairment for Spanish-English Dual Language Learners. Language, Speech, and Hearing Services in Schools. Advance online publication. https://doi.org/10.1044/2018_LSHSS-17-0046

Just say "yes" to narrative assessment for ASD

We all have those high-functioning kids with ASD who score in the average range on the CELF but so clearly have language issues. It can be hard to justify services for students like this, especially in school districts where test scores are the main criteria for eligibility. King & Palikara sought a solution to this frequent dilemma by using a variety of different assessment tools.


The researchers tested adolescents with and without high-functioning ASD using the CELF-4, a standardized vocabulary test, a variety of narrative analysis tasks, and the Children's Communication Checklist (CCC-2), which was completed by parents and teachers.

Not surprisingly, the adolescents with ASD scored similarly to typically developing peers on the CELF-4 and vocabulary measure. However, students with ASD scored significantly lower on a variety of narrative tasks.

Compared to peers, adolescents with ASD produced narratives that:

  • Were shorter and less grammatically complex
  • Used more limited vocabulary
  • Included less reasoning and fewer explanations
  • Made fewer references to emotion and thoughts
  • Made use of fewer linguistic enrichment devices
  • Contained less conflict resolution and reduced character development
  • Were overall less coherent

Did you get all that?

Basically, when assessing high-functioning students with ASD, especially those on the verge of qualifying, do yourself a favor and include some kind of narrative measure. I know, I know—narrative analysis can be complex and time-consuming, and the authors note this as well. But using narratives in assessment can give us great information about specific areas of difficulty that the CELF just doesn't address. Besides, narrative assessment results translate easily into IEP goals, so it will be worth your while. Check out the original article for more details on how they used and analyzed narrative assessment!

 

King, D., & Palikara, O. (2018). Assessing language skills in adolescents with autism spectrum disorder. Child Language Teaching and Therapy, 34(2), 101–113.

School-based assessments: Why do we do what we do?


Fulcher-Rood et al. interviewed school-based SLPs across the United States about how we choose assessment tools and diagnose/qualify our students. They wanted to understand not just which tools we use, but why we choose them, what “rules” we follow when we make diagnostic decisions, and what external factors affect those decisions. We’ve reviewed some other surveys of SLPs’ current assessment practices in the past—on the use of LSA, and on methods we’re using to assess bilingual clients—and these findings are kinda similar. There’s a lot of detail in the survey, but we’ll just focus on a couple things here.

  • We give a LOT of standardized tests, and qualify most of our students for service on the basis of those scores, with reference to some established cut-off (e.g. 1.5 SD below the mean; see the quick arithmetic sketch after this list)
  • We don’t do a ton of language sample analysis (at least the good ol’ record-transcribe-analyze variety)
  • We use informal measures to fill in the gaps and show academic impacts, but those results are less important when deciding who qualifies for service
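For anyone who wants that cut-off arithmetic spelled out: on the standard-score scale most of these tests use (mean 100, SD 15), here's what common SD-based cut-offs translate to. This is illustrative only; actual eligibility criteria vary by state, district, and test.

```python
# What SD-based cut-offs mean on a typical standard-score scale (mean 100, SD 15).
# Illustrative only; actual eligibility criteria vary by state, district, and test.
mean, sd = 100, 15

for sds_below in (1.0, 1.5, 2.0):
    cutoff = mean - sds_below * sd
    print(f"{sds_below} SD below the mean -> standard score of {cutoff:g}")

# Prints 85, 77.5, and 70, respectively.
```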

None of this is likely to surprise you, but given what we know about the weaknesses of standardized tests (especially given diversity in home languages, dialects, and SES), the arbitrary nature of most cut-off scores, and the many advantages of LSA and other non-standard measures… it’s a problem.

So, what barriers are we up against when it comes to implementation of evidence-based assessment practices? First—let’s say it all together—TIME. Always time. Standardized tests are easy to pull, fairly quick to administer and score, and you often have a handy dandy report template to follow. Besides that, we’re often subject to institutional guidelines or policies that require (or *seem* to require) standard scores to qualify students for services.

None of the SLPs in the survey mentioned that research was informing their selection of assessment tools or diagnostic decisions. That doesn't necessarily mean none of them consider the research—they just didn't bring it up. But guys! We need to be bringing it up! And by "we," I mean YOU! The person taking your all-too-limited time to read these reviews. The authors of the study pointed out (emphasis mine) that "there are differences between policies (what must be done) and guidelines (how can it be done)... potentially, school-based SLPs interpret some of the guidelines as mandatory, instead of as suggested." Maybe there's some wiggle room that we aren't taking advantage of. We can speak up, evaluation by evaluation, sharing our knowledge of research and best practices.

It all boils down to this: “While it is important for SLPs to adhere to the policies set forth by their employment agency, it is equally important for SLPs to conduct evaluations guided by best practice in the field. SLPs may need to advocate for policy changes to ensure that evidence-based practice is followed.”

Fulcher-Rood, K., Castilla-Earls, A. P., & Higginbotham, J. (2018). School-Based Speech-Language Pathologists' Perspectives on Diagnostic Decision Making. American Journal of Speech-Language Pathology. Advance online publication. https://doi.org/10.1044/2018_AJSLP-16-0121

Language of school and SES matter in standardized testing of bilinguals

Assessing children from diverse language backgrounds can be a challenge, but at least for Spanish speakers, SLPs have a decent array of resources available—including a growing number of standardized tests. The CELF–4S is one of these, designed to diagnose language disorders in Spanish speakers (mono- or bilingual) from 5–21 years old. It’s not just a Spanish translation of the English CELF, but is written specifically for speakers of Spanish. Great, right?


The problem is that the norming sample for this test was somewhat smaller than what’s recommended, and so the norms in the test manual may not be valid for all groups. Previously, there have been disagreements between the test creators and other researchers about whether you need separate norms for monolingual and bilingual speakers (in the test manual, they’re together).

This study focused on children from 5–7 years old with multiple risk factors for underperformance on standardized language tests. These included low SES (low-income family and parents with lower levels of education) and attending an English-only school, which favors English to the detriment of the home language. The researchers gave the CELF–4S to a huge group (656) of these kids, a lot more per age bracket than the test was originally normed on. The average Core Language Score was 83.57, which falls below 85 (one standard deviation below the mean), the cut-off the manual gives for identifying a language disorder. In Table 3, you can see how the results break down by subtest and age group. And, yes. You read that right. Given the published test norms, over half of these kids would appear to have DLD.

Wow. This is clearly not okay. So what do we do?

It looks like we need separate test norms for low-SES children in English schools. The authors used a subset of the original sample (still large at 299, 28 of whom had been found to have a language disorder via multiple methods of assessment) to look into the test’s diagnostic accuracy. That cut-off score of 85? Yeah, it resulted in so many false positives (specificity of only 65%) that it wasn’t clinically useful. The researchers computed an adjusted cut-off score of 78 for this group, which has acceptable diagnostic sensitivity and specificity (85% and 80%, respectively).
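A quick aside on how those sensitivity and specificity numbers work, since they drive the whole argument: here's a minimal sketch of classifying children at a given cut score and computing both metrics. The scores below are made up for illustration (they are not the study's data); the point is just to show how lowering the cut-off trades false positives against missed cases.

```python
# Sketch: sensitivity and specificity at a given cut score. Children with a confirmed
# disorder who score at or below the cut-off are true positives; typically developing
# children who score above it are true negatives. All scores below are hypothetical.

def sens_spec(scores_disordered, scores_typical, cutoff):
    true_pos = sum(s <= cutoff for s in scores_disordered)
    true_neg = sum(s > cutoff for s in scores_typical)
    sensitivity = true_pos / len(scores_disordered)
    specificity = true_neg / len(scores_typical)
    return sensitivity, specificity

# Hypothetical Core Language Scores, for illustration only.
disordered = [70, 74, 76, 79, 83]
typical = [77, 81, 84, 88, 90, 95, 101]

for cutoff in (85, 78):
    sens, spec = sens_spec(disordered, typical, cutoff)
    print(f"Cut-off {cutoff}: sensitivity {sens:.0%}, specificity {spec:.0%}")
```

In this toy example, the higher cut-off catches every child with a disorder but mislabels several typical kids, while the lower cut-off does the reverse; the study's adjusted cut-off of 78 is where that trade-off landed in acceptable territory for their sample.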

The big takeaway is this: Use the CELF–4S very cautiously. Understand the limitations of the normative sample used to standardize the test. If you are working with kids matching the profile of this paper’s sample (5-7 years old, low-SES/maternal education, and in English-only schools), keep that adjusted cut-off score of 78 in mind. And above all, remember that standardized testing alone is not a good way to assess young English learners.

 

Barragan, B., Castilla-Earls, A., Martinez-Nieto, L., Restrepo, M. A., & Gray, S. (2018). Performance of Low-Income Dual Language Learners Attending English-Only Schools on the Clinical Evaluation of Language Fundamentals–Fourth Edition, Spanish. Language, Speech, and Hearing Services in Schools. Advance online publication. https://doi.org/10.1044/2017_LSHSS-17-0013