SUGAR update: can it diagnose DLD?

Remember SUGAR? It’s the new, alternative language sample analysis protocol meant to work within the realities of a busy SLP’s workload. It’s been a while, so here’s a quick recap: SUGAR involves calculating four metrics on a 50-utterance sample where you only transcribe child utterances:  

  1. Mean length of utterance–SUGAR (MLUS)*

  2. Total number of words (TNW)

  3. Clauses per sentence (CPS)

  4. Words per sentence (WPS) 

For specifics and examples, check out the complete procedures (including videos) on their website.
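To make the arithmetic concrete, here’s a minimal sketch (not the authors’ software) of how the four metrics could be computed once you’ve transcribed the 50 utterances and hand-tallied morphemes, words, and clauses. The per-utterance tallies below are hypothetical, and we’re assuming CPS and WPS average over the utterances that qualify as sentences; check the SUGAR procedures for the official rules.

```python
# Minimal sketch (not the authors' software): the four SUGAR metrics from
# per-utterance tallies you've already made by hand. The example tallies are
# hypothetical; morphemes are counted per SUGAR conventions (see the footnote
# below), and we assume CPS and WPS average over utterances that are sentences.

utterances = [
    {"morphemes": 6, "words": 5, "is_sentence": True,  "clauses": 1},
    {"morphemes": 9, "words": 7, "is_sentence": True,  "clauses": 2},
    {"morphemes": 2, "words": 2, "is_sentence": False, "clauses": 0},
    # ... continue through the full 50-utterance sample
]

sentences = [u for u in utterances if u["is_sentence"]]

mlus = sum(u["morphemes"] for u in utterances) / len(utterances)    # 1. MLU-SUGAR
tnw  = sum(u["words"] for u in utterances)                          # 2. Total number of words
cps  = sum(u["clauses"] for u in sentences) / len(sentences)        # 3. Clauses per sentence
wps  = sum(u["words"] for u in sentences) / len(sentences)          # 4. Words per sentence

print(f"MLUS={mlus:.2f}  TNW={tnw}  CPS={cps:.2f}  WPS={wps:.2f}")
```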

While the creators of SUGAR have provided some support for its validity, the diagnostic accuracy of the four measures hasn’t been tested—until now! In this new study, the authors recruited 36 3- to 7-year-old children with DLD (currently receiving or referred to services) and 206 with typical language, and used the SUGAR protocol to sample their language. All four measures showed acceptable sensitivity and specificity (above 80%), using research-based cutoff scores (see the paper for specifics on cutoffs for each measure). The most accurate classification, according to the authors, was achieved with a combination of MLUS and CPS.


One of SUGAR’s big selling points is that it’s quick (like, 20 minutes quick), at least for kids with typical language. Did that still hold for the children with DLD? Actually, in this study they took less time to provide a 50-utterance sample than their typical peers. Bonus!

Language sampling can be daunting for the full-caseload SLP, but we love that research like this is identifying promising LSA measures that have high diagnostic accuracy (higher, we might add, than many commercially-available tests), while addressing our time and resource barriers.

An important note: there are many methodological differences between SUGAR and other LSA procedures, and SUGAR has not been uncontroversial. We’ll be on the lookout for more research on SUGAR’s diagnostic potential or comparing SUGAR to more traditional protocols to help us really understand the pros and cons of the different LSA methods.

*When calculating MLUS, derivational morphemes (e.g., -tion) are counted as separate morphemes, and catenatives (hafta, wanna) count as two morphemes.

 

Pavelko, S. L., & Owens, R. E., Jr. (2019). Diagnostic Accuracy of the Sampling Utterances and Grammatical Analysis Revised (SUGAR) Measures for Identifying Children With Language Impairment. Language, Speech, and Hearing Services in Schools. https://doi.org/10.1044/2018_LSHSS-18-0050

A one–two punch for assessing young Spanish–English learners

Do you serve pre-K or kindergarten-aged kids? Are some/lots/all of them from Hispanic backgrounds and learning Spanish AND English? Mandatory reading right here, friends!

So—a major issue for young, dual-language learners? Appropriate language assessments. We talk about it a lot (plus here, here, here, and here, to name a few). In this new study, the authors compared a handful of assessments to see which could most accurately classify 4- and 5-year-olds (all Mexican–American and dual-language learners) as having typical vs. disordered language.


The single assessment with the best diagnostic accuracy was the Bilingual English-Spanish Assessment (BESA), specifically its Morphosyntax and Semantics subtests (the third subtest, Phonology, wasn’t used here). But to get even more accurate? Like, sensitivity of 100% and specificity of about 93%? Add in a story retell task (they used Frog, Where Are You?). Sample both Spanish and English, and take the better MLUw of the two. This BESA + MLU assessment battery outperformed the other options in the mix (English and Spanish CELF-P2, plus a composite of the two, a parent interview, and a dynamic vocab assessment).
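Curious how the “better MLUw of the two” piece works out in practice? Here’s a rough sketch with hypothetical utterances (not the study’s scoring procedure): compute MLU in words for the Spanish and English retells separately, then keep whichever is higher. Splitting on whitespace is a simplification of real word-counting conventions.

```python
# Hypothetical sketch: MLU in words (MLUw) for a story retell in each language,
# keeping the better of the two, per the battery described above.

def mlu_words(utterances):
    """Mean length of utterance in words for a list of transcribed utterances."""
    return sum(len(u.split()) for u in utterances) / len(utterances)

spanish_retell = ["el niño busca la rana", "el perro se cayó de la ventana"]  # example utterances
english_retell = ["the boy looked", "the dog fell out the window"]

best_mluw = max(mlu_words(spanish_retell), mlu_words(english_retell))
print(f"Better MLUw across the two languages: {best_mluw:.2f}")
```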

Not familiar with the BESA? It’s a newer test, designed—as the name implies—specifically for children who are bilingual, with different versions (not translated) of subtests in each language. If you give a subtest in both languages, you use the one with the highest score. And before you ask—yes, the test authors believe that monolingual SLPs can administer the BESA, given preparation and a trained assistant.

Now, the researchers here don’t provide specific cut scores for these assessments, but you can look at Table 2 in the paper and see the score ranges for the typical vs. disordered language groups. They also note that an MLUw of 4 or less can be a red flag for this group.

The major issue with this study, affecting our ability to generalize what it tells us, is that the sample size was really small—just 30 kids total. So, take these new results on board, but don’t override all that other smart stuff you know about assessing dual-language learners (see our links above for some refreshers if needed). And keep an eye out for more diagnostic studies down the road—you know we’ll point them out when they come!

 

Lazewnik, R., Creaghead, N. A., Smith, A. B., Prendeville, J.-A., Raisor-Becker, L., & Silbert, N. (2018). Identifiers of Language Impairment for Spanish-English Dual Language Learners. Language, Speech, and Hearing Services in Schools. Advance online publication.  https://doi.org/10.1044/2018_LSHSS-17-0046

Early verbs and inflections in children who use AAC

When developing therapy plans for kids who use AAC, it’s common to look at kids with typically developing language to decide what to work on next. But should we? Do kids who use speech-generating devices (SGDs) to communicate develop early verbs and inflectional morphemes similarly to typically developing children?

In this study, conversations between an adult and four 8- to 9-year-old children who used AAC were analyzed across a 10-month period to see which verbs the kids used (ACTION verbs, like John is playing, versus STATE verbs, like John is being silly), in what order, and whether they added inflection. Since the participants were just beginning to use verbs, their patterns were compared to those of children in a similar developmental period (1;6–3;0).

Like kids without disabilities, the participants:

  • used more action verbs than state verbs

  • used go, want, and like frequently

  • produced third-person singular -s less often and later than -ing and -ed

While the participants seemed to mirror typical kids, they did differ in one way—by NOT producing action verbs before state verbs, but rather producing both at the same time.

How does this help us? It gives us some idea of which verbs to target and in what order. For school-age kids with no cognitive impairment, we should target both action verbs and state verbs. As the authors point out, these kids are likely to already have the mental representations of these categories. So why aren’t they producing them? That likely falls on us (verbs aren’t on their systems, low expectations, lack of appropriate instruction, etc.). For young kids, we should follow typical development and focus on action verbs before state verbs. With action verbs, we can then follow typical verbal inflection development by targeting -ing (swimming) and -ed (opened), followed by state verbs and third-person singular -s (knows).

Although this study only included four participants, it can boost our confidence in following typical language patterns for children who use AAC, and it offers some guidance in an area that many SLPs find challenging—making the jump to verb usage and morphology.

 

Savaldi-Harussi, G., & Soto, G. (2018). Early verbal categories and inflections in children who use speech-generating devices. Augmentative and Alternative Communication, 34(3), 194–205.

You should collect persuasive language samples—we’ll convince you.

We’ve talked before about language sampling with older students, and how using narrative or expository (informational) tasks is better than conversation at aligning with academic expectations and eliciting complex syntax. But what about persuasive language? It’s important for school, sure, but also for students’ personal interactions. For every time they need to lay out an argument in an essay or debate, they’ll have dozens of opportunities to convince a friend, parent, teacher, or someone else to see things their way. Talking a classmate out of risky behavior, explaining a situation to a cop… it doesn’t take too much imagination to see the potential importance of this skill. And when you’re speaking (or writing) persuasively, you have to convey complex ideas in a concise and clear way, requiring especially deft use of complex syntax.


The authors of this paper found that ninth-graders responding to a persuasive prompt (giving reasons why teens should or should not have jobs) used more complex syntax than in response to an expository one (explaining how a teacher can be a role model to teens). They also compared different modalities—written responses to expository prompts were more complex than spoken ones, but the results were mixed with persuasive samples. The researchers measured complexity (and you can too!) by the percentage of complex sentences and the average number of clauses per sentence. There were some specific differences in microstructure—types of verbs and clauses, for example—between the two genres as well, which the paper lays out in more detail.
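Both measures are simple enough to compute by hand; here’s a minimal sketch with hypothetical clause counts, assuming for illustration that a sentence counts as complex when it contains more than one clause (see the paper for the authors’ actual coding rules).

```python
# Hypothetical sketch: percent complex sentences and mean clauses per sentence
# from hand-coded clause counts. "Complex" here = more than one clause, an
# illustrative simplification; see the paper for the authors' coding rules.

clauses_per_sentence = [1, 2, 1, 3, 2, 1, 1, 2]   # one hypothetical count per sentence

percent_complex = 100 * sum(c > 1 for c in clauses_per_sentence) / len(clauses_per_sentence)
mean_cps = sum(clauses_per_sentence) / len(clauses_per_sentence)

print(f"{percent_complex:.0f}% complex sentences, {mean_cps:.2f} clauses per sentence")
```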

So keep this in mind when you’re next assessing an older student: allowing a written response for an expository language sample will elicit more complex language, but with a persuasive prompt, you can go either way and maximize that complex syntax.

 

Brimo, D., & Hall-Mills, S. (2018). Adolescents’ production of complex syntax in spoken and written expository and persuasive genres. Clinical Linguistics & Phonetics. Advance online publication. https://doi.org/10.1080/02699206.2018.1504987

Identifying “disorder within diversity”

This month, ASHA’s Language, Speech, and Hearing Services in Schools journal put out a (free!) clinical forum on the concept of “disorder within diversity.” The forum includes an introduction, where you can read about the usefulness of moving away from “difference vs. disorder,” and five related research articles. Here we review one of the articles, and others can be found here, here, and here.

Do you like your grammatical morphemes accurate, diverse, or productive?

So, you’re an awesome clinician who is eliciting and analyzing a language sample from a bilingual preschooler. High five for you! You want to capture some data about their grammar skills. What exactly do you measure?

The authors of this study suggest that, rather than counting up how accurate the child’s use of tense and agreement markers is (so, finding the percentage of accurate uses out of total obligatory contexts), you instead focus on the diversity and productivity of tense/agreement markers. The morphemes we’re concerned about are:

  • third person singular –s
  • past tense –ed
  • copula BE (am, are, is, was, were)
  • auxiliary DO (do, does, did)
  • auxiliary BE (am, are, is, was, were).* 

Notice there are five morphemes and 15 total forms here; that’ll be important in a second. These morphemes are clinical markers for language disorders in English.  

So what are diversity and productivity, and how do you measure them? Enter tense marker total and TAP score. We’ll give the basic gist of both; the specifics for calculating them come from Hadley & Short (2005).

  • Diversity (tense marker total): How many of those 15 forms from earlier did the child use in the language sample?
  • Productivity (tense/agreement productivity, or TAP score): How many different ways did the child use those five morphemes? Up to five points allowed per morpheme, for a max of 25. (See the sketch just after this list for the bookkeeping.)
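Here’s a rough sketch of that bookkeeping with hypothetical, hand-coded data. We’re assuming “different ways” means distinct exemplars (different verb-plus-marker combinations) for each morpheme; the exact scoring rules are in Hadley & Short (2005).

```python
# Hypothetical sketch (exact scoring rules are in Hadley & Short, 2005):
# tense marker total = how many of the 15 surface forms appeared at least once;
# TAP score = distinct exemplars per morpheme (e.g., different verbs carrying -ed),
# capped at 5 points per morpheme, for a max of 25.

# Hand-coded, made-up data: surface forms observed and distinct exemplars
# (verb + marker combinations) the child produced for each morpheme.
sample = {
    "third_person_s": {"forms": {"-s"},        "exemplars": {"wants", "goes"}},
    "past_ed":        {"forms": set(),          "exemplars": set()},
    "copula_be":      {"forms": {"is", "was"},  "exemplars": {"is happy", "was tired"}},
    "aux_do":         {"forms": {"do", "did"},  "exemplars": {"do like", "did see"}},
    "aux_be":         {"forms": {"is"},         "exemplars": {"is running"}},
}

tense_marker_total = sum(len(m["forms"]) for m in sample.values())     # diversity, out of 15
tap_score = sum(min(len(m["exemplars"]), 5) for m in sample.values())  # productivity, out of 25

print(f"Tense marker total: {tense_marker_total}/15   TAP score: {tap_score}/25")
```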

The authors found that, for a group of 4-year-old Spanish–English bilingual children, tense marker total and TAP scores:

  • Were correlated with MLU(words) and NDW (number of different words), valid LSA measures for this population
  • Changed over the course of a school year
  • Looked different for children with and without parent-reported language concerns

The article provides group means (typical language vs. language concerns) for both measures, but not true normative or diagnostic data, so you can’t use tense marker totals or TAP scores to directly diagnose a language disorder at this point. However, consider using them as criterion-based measures to describe tense and agreement skills, identify morphemes to focus on in therapy, and monitor growth.

*If you’re having one of those days and can’t remember the difference between a copula and an auxiliary—no sweat. A copula is the only verb in a clause (like the “is” earlier in this sentence), while auxiliaries are those “helping verbs” that are linked up with another verb (like the “are” that was just linked with “linked,” there).

Potapova, I., Kelly, S., Combiths, P. N., & Pruitt-Lord, S. L. (2018). Evaluating English Morpheme Accuracy, Diversity, and Productivity Measures in Language Samples of Developing Bilinguals. Language, Speech, and Hearing Services in Schools, 49(2), 260–276. https://doi.org/10.1044/2017_LSHSS-17-0026.

School-based assessments: Why do we do what we do?


Fulcher-Rood et al. interviewed school-based SLPs across the United States about how we choose assessment tools and diagnose/qualify our students. They wanted to understand not just which tools we use, but why we choose them, what “rules” we follow when we make diagnostic decisions, and what external factors affect those decisions. We’ve reviewed some other surveys of SLPs’ current assessment practices in the past—on the use of LSA, and on methods we’re using to assess bilingual clients—and these findings are kinda similar. There’s a lot of detail in the survey, but we’ll just focus on a couple things here.

  • We give a LOT of standardized tests, and qualify most of our students for service on the basis of those scores, with reference to some established cut-off (e.g. 1.5 SD below the mean)
  • We don’t do a ton of language sample analysis (at least the good ol’ record-transcribe-analyze variety)
  • We use informal measures to fill in the gaps and show academic impacts, but those results are less important when deciding who qualifies for service

None of this is likely to surprise you, but given what we know about the weaknesses of standardized tests (especially given diversity in home languages, dialects, and SES), the arbitrary nature of most cut-off scores, and the many advantages of LSA and other non-standard measures… it’s a problem.

So, what barriers are we up against when it comes to implementation of evidence-based assessment practices? First—let’s say it all together—TIME. Always time. Standardized tests are easy to pull, fairly quick to administer and score, and you often have a handy dandy report template to follow. Besides that, we’re often subject to institutional guidelines or policies that require (or *seem* to require) standard scores to qualify students for services.

None of the SLPs in the survey mentioned that research was informing their selection of assessment tools or diagnostic decisions. That doesn’t necessarily mean none of them consider the research—they just didn’t bring it up. But guys! We need to be bringing it up! And by “we,” I mean YOU! The person taking your all-too-limited time to read these reviews. The authors of the study pointed out (emphasis mine) that “there are differences between policies (what must be done) and guidelines (how can it be done)... potentially, school-based SLPs interpret some of the guidelines as mandatory, instead of as suggested.” Maybe there’s some wiggle room that we aren’t taking advantage of. We can speak up, evaluation by evaluation, sharing our knowledge of research and best practices.

It all boils down to this: “While it is important for SLPs to adhere to the policies set forth by their employment agency, it is equally important for SLPs to conduct evaluations guided by best practice in the field. SLPs may need to advocate for policy changes to ensure that evidence-based practice is followed.”

Fulcher-Rood, K., Castilla-Earls, A. P., & Higginbotham, J. (2018). School-Based Speech-Language Pathologists’ Perspectives on Diagnostic Decision Making. American Journal of Speech-Language Pathology. Advance online publication. https://doi.org/10.1044/2018_AJSLP-16-0121.

A faster way to measure grammar skills


We’ve previously pointed you to research supporting “Percent Grammatical Utterances” (that’s PGU for the acronym-inclined) as a good language sample analysis measure to help diagnose developmental language disorder (DLD). Great practice, but the procedure for computing PGU can be, in reality, pretty time-consuming.

In this study, the researchers who brought us PGU have given us a faster way to accomplish pretty much the same thing. Yay, science! They want to find a good method to measure and monitor growth in grammar skills — there really isn’t anything like that right now — so the process needs to be efficient enough to do multiple times a year for each kiddo. Enter Percent Grammatical Responses (you guessed it… PGR for short).

So how does PGR work? Kids between 3;0 and 3;11 saw a series of 15 pictures (described here). For each, the adult asked “What is happening in the picture?” The whole response was scored as either grammatical or ungrammatical. Take the number of grammatical responses, divide by 15, and, voila! PGR. No dividing responses up into C-Units… woo!
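The arithmetic really is just a percentage; here’s a minimal sketch with hypothetical scoring (each of the 15 responses hand-coded as grammatical or not).

```python
# Minimal sketch: PGR from the 15 picture-description responses, each hand-scored
# as grammatical (True) or ungrammatical (False). Responses here are hypothetical.

responses = [True, True, False, True, True, True, False, True,
             True, True, True, False, True, True, True]   # 15 scored responses

pgr = 100 * sum(responses) / len(responses)
print(f"PGR = {pgr:.0f}% grammatical responses")
```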

That’s too easy. It can’t be valid! It can! PGR appears to actually measure grammatical ability, since it correlates closely with SPELT-P 2 scores, while not being correlated with a measure of vocabulary. It also correlates with PGU, which has proven validity. As diagnostic tools for DLD, PGR and PGU agreed 92% of the time on “passes” and 94% of the time on “fails,” with given cutoff scores for each.

Awesome, right? But remember: this initial validation study only established a cutoff score for 3-year-olds, so we don’t have enough information to substitute PGR for PGU with older kids. Also, in addition to that diagnostic caution, hold off for now on using PGR to monitor progress. More study is needed to determine whether PGR is sensitive enough to reflect skill growth over time.

Cultural/Linguistic Diversity Note: The sample “ungrammatical” responses in the paper are constructions that are perfectly good in Non-Mainstream American English (or African American English). The kids in this study spoke “mainstream English,” but as always, be mindful of dialect differences in assessment.

Eisenberg, S. L., & Guo, L. (2017). Percent Grammatical Responses as a General Outcome Measure: Initial Validity. Language, Speech, and Hearing Services in Schools. Advance online publication. https://doi.org/10.1044/2017_LSHSS-16-0070

And more...

  • Boyle et al. propose a way to program digital books for students that could have benefits for their language and literacy skills, using visual scene display apps that allow for dynamic presentation of text (e.g. Tobii-Dynavox’s Snap Scene). Their pilot study showed that this might be an effective strategy to help young children with language disorders learn new sight words.
  • Coufal et al. show “comparable treatment outcomes between traditional service delivery and telepractice” for children ages 6–9 ½ with speech sound disorder only.
  • Lundine et al. share some preliminary evidence suggesting that junior- and high-school aged students who have suffered TBIs and are struggling academically might have particular challenges with expository discourse (understanding and producing informational, rather than narrative, passages) that don’t show up on typical language assessments. As always with older students, consider throwing an expository language sample into your testing routine!
  • Szumski et al. compare the outcomes of two social skills programs (“Play Time/Social Time” and “I Can Problem Solve”) in preschool-aged children with ASD in Poland.  “Play Time/Social Time” was more effective in improving interaction skills, while “I Can Problem Solve” was more effective in improving children’s ability to take others’ perspective. Both curricula were developed for implementation in the preschool classroom with children with and without special needs.
  • Wittke & Spaulding found that teachers perceived preschool children with developmental language disorder (DLD) who were receiving services as having poorer executive functioning (e.g. inhibition, working memory, and task shifting) as compared to preschoolers with DLD who were not receiving services. Because we know the challenges of differentially diagnosing DLD, SLPs should be aware that children who have poorer executive functioning skills are more likely to be referred for services than peers with higher executive function skills who also meet criteria for DLD.

 

Boyle, S., McCoy, A., McNaughton, D., & Light, J. (2017). Using digital texts in interactive reading activities for children with language delays and disorders: a review of the research literature and pilot study. Seminars in Speech and Language, 38(4), 263–275.

Coufal, K., Parham, D., Jakubowitz, M., Howell, C., & Reyes, J. (2017). Comparing traditional service delivery and telepractice for speech sound production using a functional outcome measure. American Journal of Speech-Language Pathology. Advance online publication. https://doi.org/10.1044/2017_AJSLP-16-0070

Lundine, J. P., Harnish, S. M., McCauley, R. J., Zezinka, A. B., Blackett, D. S., & Fox, R. A. (2017). Exploring summarization differences for two types of expository discourse in adolescents with traumatic brain injury. American Journal of Speech-Language Pathology. Advance online publication. https://doi.org/10.1044/2017_AJSLP-16-0131

Szumski, G., Smogorzewska, J., Grygiel, P., & Orlando, A.-M. (2017). Examining the effectiveness of naturalistic social skills training in developing social skills and theory of mind in preschoolers with ASD. Journal of Autism and Developmental Disorders. Advance online publication. https://doi.org/10.1007/s10803-017-3377-9

Wittke, K. & Spaulding, T. J. (2017). Which preschool children with specific language impairment receive language intervention? Language, Speech, and Hearing Services in Schools.