What’s driving our clinical decision-making?

We know a lot about what types of assessment tools SLPs tend to use (see here, here, and here, for example), but we don’t know much about how we synthesize and prioritize the information we gather in those assessments to come up with a diagnosis (or lack thereof). How do we reconcile inconsistent results? What factors tend to carry the most weight? How much do outside influences (e.g., policies and caseload issues) affect our decisions? Two different studies this month dive into the minds of SLPs to begin answering these questions.

Fulcher-Rood et al. begin by pointing out that school-based SLPs receive conflicting information on how to assess and diagnose language disorders from our textbooks, our federal/state/local guidelines and policies, and the research. So how do we actually approach this problem in real life? To learn more, they used a pretty cool case study method, where lots of assessment results were available for each of five real 4- to 6-year-olds (cognitive and hearing screenings, parent/teacher questionnaires, three different standardized tests, and two different language samples, transcribed and analyzed against SALT norms), but the 14 experienced SLPs who participated only saw the results they specifically asked for to help them make their diagnoses. This better reflects actual practice than just giving the SLPs everything upfront, because in school settings you’re for sure not going to have SPELT-3 scores or LSA stats to consider unless you’re purposefully making that happen. The case studies were chosen so that some showed a match between formal and informal results (all within or all below normal limits), whereas others showed a mismatch between formal and informal testing, or overall borderline results. Importantly, SLPs were instructed not to consider the “rules” of where they work when making a diagnosis.

Here were some major findings:

  • Unsurprisingly, when all data pointed in the same direction, SLPs were unanimous in determining that a disorder was or wasn’t present.

  • When there was conflicting information (standard scores pointed one direction, informal measures the other), almost all the SLPs made decisions aligning with the standardized test results.

  • Across cases, almost all the SLPs looked at CELF-P2 and/or PLS-5 scores to help them make a diagnosis, and in most cases they asked for parent/teacher concerns and language sample transcripts as well. A third of the SLPs didn’t ask for LSA at all.

  • Only a few SLPs used SPELT-3 scores, and no one asked for language sample analyses that compared performance to developmental norms.

These results reinforce what we learned in the survey studies linked above: SLPs use a lot of standardized tests, combined with informal measures like parent/teacher reports, and not so much language sampling. What’s troubling here is the under-utilization of tools with a really good track record of diagnosing language disorders accurately (like the SPELT-3 and LSA measures), as well as the over-reliance on standardized test scores that we know can be problematic—even when there’s tons of other information available and time/workplace policies aren’t a factor.

The second study, from Selin et al., tapped into a much bigger group of SLPs (over 500!) to ask a slightly different question:


Under ideal conditions, where logistical/workplace barriers are removed, how are SLPs approaching clinical decision-making? And what about the children, or the SLPs themselves, influences those decisions? 

Their method was a little different from the first study’s. SLPs read a paragraph about each case, including standard scores (TOLD-P:4 or CELF-4, PPVT-4, GFTA-2, and nonverbal IQ) and information about symptoms and functional impairments (use of finiteness, MLU, pragmatic issues, etc.). Rather than giving a diagnosis, the SLPs made eligibility decisions—should the child continue to receive services, and if so, in what area(s) and with what type of service (direct, consultation, monitoring, etc.)?

The survey method this team used yielded a TON of information, but we’ll share a few highlights:

  • Freed from the constraints of caseloads and time, SLPs recommended continued service more often than we do in real life. We know that workplace policies and huge caseloads can prevent us from using best practices, but it’s helpful to see that play out in the research. It’s not just you!

  • Six cases were specifically set up to reflect the clinical profile of Specific Language Impairment*, but when determining services and goal areas, SLPs’ choices didn’t consistently align with that profile. Even when a case was consistent with SLI, services weren’t always recommended, and when they were, the goals didn’t necessarily correspond with the underlying deficits of that disorder. So as a group, our operational knowledge of EBP for language disorders has a lot of room for improvement. Unlike with speech sound disorders, SLPs were not sensitive to clinical symptoms of SLI (tense/agreement errors, decreased MLU) when making eligibility decisions.

  • Yet again, SLPs relied heavily on standardized scores, even when other evidence of impairments was present.  

So what can you do with all this information? First of all, think about what YOU do in your language assessments. What tools do you lean on to guide your decisions, and why? Are you confident that those choices are evidence-based? Second, keep doing what you’re doing right now—learning the research! There is tons of work being done on assessment and diagnosis of language disorders, use of standardized tests, and LSA (hit the links to take a wander through our archives!). Taking a little time here and there to read up can add up to a whole new mindset before you know it.  

*SLI, or developmental language disorder (DLD) with average nonverbal intelligence.

 

Fulcher-Rood, K., Castilla-Earls, A., & Higginbotham, J. (2019). Diagnostic Decisions in Child Language Assessment: Findings From a Case Review Assessment Task. Language, Speech, and Hearing Services in Schools. doi:10.1044/2019_LSHSS-18-0044

Selin, C. M., Rice, M. L., Girolamo, T., & Wang, C. J. (2019). Speech-Language Pathologists’ Clinical Decision Making for Children With Specific Language Impairment. Language, Speech, and Hearing Services in Schools. doi:10.1044/2018_LSHSS-18-0017

It’s 10 AM: Do you know where your gym teacher is?

When you hear “co-treatment,” what other professionals spring to mind? OTs? PTs? How about your friendly neighborhood adapted phys ed teacher? In this study, an SLP and an adapted PE teacher (I’m guessing they don’t like to be called APEs?) teamed up to teach concept vocabulary to 10 pre-kindergarteners with Down syndrome.

Why target vocabulary in gym class? A couple of reasons. One, having physical experiences related to a new word increases the semantic richness of the learning—something that we know helps kids. Two, a branch of developmental theory (dynamic systems theory, if you’re interested!) holds that language and motor skills develop in a coordinated, interconnected way. Plus? Getting up and moving during your vocab lesson is fun!


Each week, five different concept words were targeted by the SLP only, the adapted PE teacher only, or both in a co-treatment condition. Teaching occurred in 30-minute large group lessons, four days per week for nine weeks total. Check out the article for specifics about what the lessons looked like in each condition—the key thing is that with co-treatment, the kids got to demonstrate receptive understanding of the concepts through a variety of gross motor actions.

Overall, the intervention had a weak effect with the PE teacher alone (makes sense, since teaching words isn’t the point of gym), and a medium effect when the SLP was involved. Out of the ten children, four learned more concepts in co-treatment weeks than in weeks when the SLP or PE teacher worked alone. The other six did about the same either way. The authors noticed that the kids who learned better in co-treatment were the children with the highest nonverbal intelligence scores and better ability to use effortful control (so, for example, stopping when a grownup says to stop), but more research is needed to draw strong conclusions from those results. Big picture here? This type of co-treatment, when done thoughtfully and collaboratively, doesn’t hurt and may help some kids. Also, when many of us are trying to get out of the therapy room and treat kids where they are, bringing intervention to gym class makes a lot of sense from a “least restrictive” point of view. And once again… it’s fun!

 

Lund, E., Young, A., & Yarbrough, R. (2019). The Effects of Co-Treatment on Concept Development in Children With Down Syndrome. Communication Disorders Quarterly. Advance online publication. doi:10.1177/1525740119827264

Traveling SLP Magic: How to be in two places at once

Where are my itinerant clinician friends—those SLPs who pack up their therapy room in a weird rolling suitcase thing, make nice with administrative assistants all over town, eat in their cars, and find themselves constantly thwarted by conflicting building schedules? Yes, hello there! Let’s talk about how things could be different.

In a word… telepractice. As much as we value being physically present for our students and colleagues, we’re living in the age of FaceTime, video conferencing, and working remotely. The whole realm of using technology to be in a place that you’re not is now mainstream, and easier for people to accept and accommodate than even a few years ago. And after all, a 15-minute drive can easily mean 30 or 40 minutes of lost productivity, once you factor in packing/unpacking, parking, check-in, and everything else involved in a transition between buildings. This article takes the perspective that the question isn’t whether SLPs should be using telepractice, but how. There’s been plenty of research showing that telepractice can work (see our reviews on the topic); we just need to be smart about:

  • What job tasks we target for telepractice, and

  • How we go about it

The article lays out two case studies of SLPs using telepractice for (1) direct service to high school students, (2) remote supervision of an SLPA, and (3) remote observations and consultations by a district AAC specialist. They include a lot of really helpful details about how they set these systems up, so definitely check out the article if you’re thinking about trying something similar. The authors studied the effectiveness of telepractice in these cases through a survey. The participants reported that:

  • Telepractice was effective and generally easy to implement for both direct and indirect services/supervision

  • The dreaded technical issues could be dealt with

  • It could be motivating to students, and

  • The SLPs had increased flexibility and decreased travel time

The downsides? Tech troubles did happen, and there were also some issues communicating and coordinating with sites. Choosing the right partners and laying the groundwork are critical to making it work!


The last part of the article lays out some very practical pro tips for other SLPs. For example, they recommend holding a team meeting upfront to demonstrate the systems you’ll use, answer questions, and secure buy-in from everyone involved. Also consider small but impactful steps like scheduling email reminders (with backup contact information and links to video sessions), or using two separate computers on the clinician end of things: one for the audio/video, and one for all your other therapy “stuff.” And if your admin needs any convincing? Remind them that you’ll be saving them time (from travel) and potentially money (from mileage reimbursements)!

Note: Not all states allow Medicaid billing of telesessions quite yet. So if you’re in the schools, that is an important thing to check first.

 

Boisvert, M. K., & Hall, N. (2019). Telepractice for School-Based Speech and Language Services: A Workload Management Strategy. Perspectives of the ASHA Special Interest Groups. doi:10.1044/2018_PERS-SIG18-2018-0004

How am I supposed to know if an app is evidence-based?

Aha! This is a fun one. The simple answer is that the iPad has been around for less than a decade (shocking, huh?), and there is very little research on apps in our field (the little we do have is on AAC and aphasia). Nooooo! So you see where this is going: it’s not easy. The best you can do is, perhaps, (1) know the research on the effective ingredients of speech–language treatment in the first place, and see if you can identify those within the apps, and (2) know the research on multimedia learning (not from our field; see the article for an overview) and use that to guide your thinking as well. Then, of course, EBP also requires considering clinical practice and client data…

Challenging as this is, Heyman (2018) has started to pick at the question with a survey and interview study of hundreds of SLPs, asking how SLPs are selecting apps for therapy. The results:

How do SLPs know which apps to consider?

They’re mostly relying on word of mouth and social networks.

Then how do SLPs make purchase decisions?

“The main finding reported was that participants used apps because they were engaging and motivating to children…”

The top two features SLPs reported as important were:

  1. different developmental/difficulty levels and

  2. child-friendly theme

(See article for a ranking of 25 other features SLPs prioritize!)

Finally, “Participants emphasized that apps were a tool and used them in the same way as any other tool or toy...”

What do we think of this?

Well, it seems that SLPs’ biggest concern is just getting kids excited about the therapy process. And that makes sense. But, ideally, we need to find a way to identify which apps will actually give us the features and flexibility to make good progress on speech–language goals. Heyman provides a checklist of features that could be considered, including things like: Can targets be repeated? Can items be skipped? How much control do you have over the screen (e.g., the ability to remove elements)? … But we need a lot more research in this area to know which of these features matter, and when.

In the meantime, a little more digging by SLPs could certainly help! Heyman states, “Interestingly, only 22% of respondents looked at the developer sites in order to obtain information about apps; yet, information regarding the background and research evidence are often provided on the developer site.”

 

Heyman, N. (2018). Identifying features of apps to support using evidence-based language intervention with children. Assistive Technology. Advance online publication. doi:10.1080/10400435.2018.1553078

How do you interpret “educational performance”?

We don’t have to remind you of all the challenges facing children with speech sound disorders (SSDs), especially since roughly 90% of school-based SLPs serve students with SSDs. Although we have that in common, we’re pretty different in how we (and our districts/states) interpret “educational performance,” a key phrase from IDEA. These differences have a huge impact on which students ultimately get services—and which students don’t.

By surveying SLPs nationwide, the authors of this article found a lot of variability. The guidelines we use come from different agencies (states, districts, state speech–language–hearing associations, etc.), but at least some of the differences are due to our individual decision making, because the survey found that “SLPs are familiar with their state guidelines but do not consistently use them as evidenced by considerable variability within and between states.”


Essentially, we are taking different factors into account when looking for the impact (or lack thereof) of SSDs on kids’ school success. Are you looking only at grades? Do you weigh access to the curriculum, oral participation in class, or spelling? Do you add social-emotional adjustment to the mix? Consider how you determine educational impact now, and how either a narrower or broader view of the concept would change your practice. Would you have more artic/phono students? Fewer? Would they get services earlier, or keep them longer? Would you do your evals differently? Having the most possible students in therapy isn’t really the goal (think least restrictive environment), but under-serving these students is definitely a problem.

Big takeaway here: other SLPs out there are likely making decisions very differently from how you are—and it’s time we talked more about it. As you reflect on the questions above, talk with your SLP coworkers and friends—even consider the conversations you might have with administrators, policy makers, and your local and state agencies. Small changes in policy (or how you and your coworkers apply the policy) could help ensure kids with SSDs get the services they need in the schools.

Farquharson, K., & Boldini, L. (2018). Variability in interpreting “educational performance” for children with speech sound disorders. Language, Speech, and Hearing Services in Schools. Advance online publication. doi:10.1044/2018_LSHSS-17-0159