Shifting and switching from Spanish to English


In the US, children who speak Spanish at home often begin learning English when they start school, and their dominant language shifts from Spanish to English over time. To get a better idea of how this happens, the authors of this study looked at the change in grammatical accuracy (percent grammatical utterances or PGU*) in Spanish and English narrative retells from kindergarten to second grade.  
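
If PGU is new to you, it's simply the share of utterances in the sample that are coded as grammatically well-formed, expressed as a percentage. Here's a minimal sketch of that arithmetic (our illustration, not the authors' scoring procedure):

```python
# Hypothetical sketch: PGU = (grammatical utterances / total utterances) x 100.
# Judging whether each utterance is grammatical is the hard (human) part;
# this just does the arithmetic once those judgments are coded.

def percent_grammatical_utterances(judgments):
    """judgments: one boolean per utterance, True if coded as grammatical."""
    if not judgments:
        raise ValueError("need at least one coded utterance")
    return 100 * sum(judgments) / len(judgments)

# e.g., 17 grammatical utterances out of 20 coded
sample = [True] * 17 + [False] * 3
print(percent_grammatical_utterances(sample))  # 85.0
```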

As expected, children’s PGU in English went up over time, while PGU in Spanish went down. The researchers compared children in bilingual (English–Spanish) vs. English-only classrooms. For children in bilingual classrooms, the decrease in Spanish PGU was slower, but the increase in English PGU was also slightly slower.

The researchers also looked at a subgroup of the children who had lower PGU in Spanish at the outset. They called this group “low grammaticality” because they didn’t have enough measures to confidently diagnose developmental language disorder (DLD). Children in this group showed a different pattern, with Spanish PGU holding steady for those in bilingual classrooms, suggesting that they benefited from bilingual teaching.

For a brief time (around age 8), English and Spanish PGU scores for the low grammaticality group looked similar to those of the other children, meaning that if we assessed them at that point, we might not be able to tell who does and doesn’t have DLD. The authors encourage us to assess children in their home language early on, before this shift happens.

So as if assessing English language learners wasn’t hard enough, we also need to consider the type of instruction children are getting and their skills in each language over time.  Ideally, we’d assess children in their home language right when they start school. When that’s not possible, dynamic assessment might help us to differentiate language disorders from normal language dominance shifting during the early school years. For other resources on diagnosing DLD in English language learners, see reviews here, here, and here.

 

*Remember that higher PGU means more accurate use of grammar.

Castilla-Earls, A., Francis, D., Iglesias, A., & Davidson, K. (2019). The impact of the Spanish-to-English proficiency shift on the grammaticality of English learners. Journal of Speech, Language, and Hearing Research. https://doi.org/10.1044/2018_JSLHR-L-18-0324

Language deficits in preschoolers born premature: How should we assess?

By now, it’s fairly well known that prematurity is a major risk factor for language delays in toddlerhood and beyond. But what do those language deficits look like and how can we assess them adequately?

This study examined these questions by comparing preschoolers born preterm* with their typically developing, full-term counterparts. The researchers measured both groups’ expressive language skills, nonverbal IQ, and attention skills, and gathered parent reports of hyperactivity and attention problems.

Expressive language was assessed with both a standardized test (the CELF Preschool-2) and language sample analysis, with some interesting results. The only significant group difference on the CELF-P2 was on the Recalling Sentences subtest, but every semantic and grammatical measure from the language samples was significantly lower in the preterm group. Attentional difficulties partially explained these skill differences; hyperactivity and nonverbal IQ did not. Keep in mind that these results don’t necessarily match those of previous studies of children born preterm, but the authors do a thorough job of laying out possible reasons for this in the discussion section.

What are the takeaways for evaluating preschoolers born preterm?

  1. Don’t forget the value of standardized sentence recall tasks as an indicator of language disorder.

  2. Language sample analysis is worth taking the time to complete. Structured, standardized language assessments don’t always adequately measure deficits in conversational language skills.


Check out our previous reviews (there are so many of them!) if you’re feeling stuck on where to begin with language sample analysis. But if you’re involved in research or just curious about the details, be sure to click over to the article for an interesting discussion of which measures the authors chose to use and why.

*before 36 weeks gestation; also, the researchers excluded children with diagnoses that further increased their risk of delays (issues such as chromosomal abnormalities, meningitis, or grade III/IV intraventricular hemorrhage)

 

Imgrund, C. M., Loeb, D. F., & Barlow, S. M. (2019). Expressive language in preschoolers born preterm: Results of language sample analysis and standardized assessment. Journal of Speech, Language, and Hearing Research. https://doi.org/10.1044/2018_JSLHR-L-18-0224

What’s driving our clinical decision-making?

We know a lot about what types of assessment tools SLPs tend to use (see here, here, and here, for example), but we don’t know much about how we synthesize and prioritize the information we gather in those assessments to come up with a diagnosis (or lack thereof). How do we reconcile inconsistent results? What factors tend to carry the most weight? How much do outside influences (e.g., policies and caseload issues) affect our decisions? Two different studies this month dive into the minds of SLPs to begin answering these questions.

Fulcher-Rood et al. begin by pointing out that school-based SLPs receive conflicting information on how to assess and diagnose language disorders from our textbooks, our federal/state/local guidelines and policies, and the research. So how do we actually approach this problem in real life? To learn more, they used a pretty cool case study method: lots of assessment results were available for each of five real 4- to 6-year-olds (cognitive and hearing screenings, parent/teacher questionnaires, three different standardized tests, and two different language samples transcribed and analyzed against SALT norms), but the 14 experienced SLPs who participated saw only the results they specifically asked for to help them make their diagnoses. This better reflects actual practice than just giving the SLPs everything upfront, because in school settings you’re for sure not going to have SPELT-3 scores or LSA stats to consider unless you’re purposefully making that happen. The case studies were chosen so that some showed a match between formal and informal results (all within or all below normal limits), while others showed a mismatch between formal and informal testing, or overall borderline results. Importantly, SLPs were instructed not to consider the “rules” of where they work when making a diagnosis.

Here were some major findings:

  • Unsurprisingly, when all data pointed in the same direction, SLPs were unanimous in determining that a disorder was or wasn’t present.

  • When there was conflicting information (standard scores pointed one direction, informal measures the other), almost all the SLPs made decisions aligning with the standardized test results.

  • Across cases, almost all the SLPs looked at CELF-P2 and/or PLS-5 scores to help them make a diagnosis, and in most cases they asked for parent/teacher concerns and language sample transcripts as well. A third of the SLPs didn’t ask for LSA at all.

  • Only a few SLPs used SPELT-3 scores, and no one asked for language sample analyses that compared performance to developmental norms.

These results reinforce what we learned in the survey studies linked above: SLPs use a lot of standardized tests, combined with informal measures like parent/teacher reports, and not so much language sampling. What’s troubling here is the under-use of tools with a really good track record of diagnosing language disorders accurately (like the SPELT-3 and LSA measures), alongside over-reliance on standardized test scores that we know can be problematic, even when there’s tons of other information available and time/workplace policies aren’t a factor.

The second study, from Selin et al., tapped into a much bigger group of SLPs (over 500!) to ask a slightly different question:


Under ideal conditions, where logistical/workplace barriers are removed, how are SLPs approaching clinical decision-making? And what about the children, or the SLPs themselves, influences those decisions? 

Their method was a little different from the first study. SLPs read a paragraph about each case, including standard scores (TOLD-P:4 or CELF-4, PPVT-4, GFTA-2, and nonverbal IQ) and information about symptoms and functional impairments (use of finiteness, MLU, pragmatic issues, etc.). Rather than giving a diagnosis, the SLPs made eligibility decisions—should the child continue to receive services, and if so, in what area(s) and what type of service (direct, consultation, monitoring, etc.)?

The survey method this team used yielded a TON of information, but we’ll share a few highlights:

  • Freed from the constraints of caseloads and time, SLPs recommended continued service more often than we do in real life. We know that workplace policies and huge caseloads can prevent us from using best practices, but it’s helpful to see that play out in the research. It’s not just you!

  • Six cases were specifically set up to reflect the clinical profile of Specific Language Impairment*, but when determining services and goal areas, SLPs’ choices didn’t consistently align with that profile. Even when a case was consistent with SLI, services weren’t always recommended, and when they were, the goals didn’t necessarily correspond to the underlying deficits of the disorder. Unlike with speech sound disorders, SLPs weren’t sensitive to the clinical symptoms of SLI (tense/agreement errors, decreased MLU) when making eligibility decisions. So as a group, our operational knowledge of EBP for language disorders has a lot of room for improvement.

  • Yet again, SLPs relied heavily on standardized scores, even when other evidence of impairments was present.  

So what can you do with all this information? First of all, think about what YOU do in your language assessments. What tools do you lean on to guide your decisions, and why? Are you confident that those choices are evidence-based? Second, keep doing what you’re doing right now—learning the research! There is tons of work being done on assessment and diagnosis of language disorders, use of standardized tests, and LSA (hit the links to take a wander through our archives!). Taking a little time here and there to read up can add up to a whole new mindset before you know it.  

*SLI, or developmental language disorder (DLD) with average nonverbal intelligence.

 

Fulcher-Rood, K., Castilla-Earls, A., & Higginbotham, J. (2019). Diagnostic decisions in child language assessment: Findings from a case review assessment task. Language, Speech, and Hearing Services in Schools. https://doi.org/10.1044/2019_LSHSS-18-0044

Selin, C. M., Rice, M. L., Girolamo, T., & Wang, C. J. (2019). Speech-language pathologists’ clinical decision making for children with specific language impairment. Language, Speech, and Hearing Services in Schools. https://doi.org/10.1044/2018_LSHSS-18-0017

SUGAR update: Can it diagnose DLD?

Remember SUGAR? It’s the new, alternative language sample analysis protocol meant to work within the realities of a busy SLP’s workload. It’s been a while, so here’s a quick recap: SUGAR involves calculating four metrics on a 50-utterance sample where you only transcribe child utterances:  

  1. Mean length of utterance–SUGAR (MLUS)*

  2. Total number of words (TNW)

  3. Clauses per sentence (CPS)

  4. Words per sentence (WPS) 

For specifics and examples, check out the complete procedures (including videos) on their website.
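
If you like seeing the moving parts, here's a rough computational sketch of the four metrics. It assumes you've already transcribed the 50 child utterances and hand-coded the things software can't do reliably: morphemes per utterance (per the SUGAR conventions in the footnote below) and word/clause counts for the complete sentences in the sample. This is our illustration of the arithmetic, not the official SUGAR procedure:

```python
# Toy sketch of the four SUGAR metrics (not the official protocol).
# Morpheme counts and sentence-level word/clause counts are hand-coded.

def sugar_metrics(utterances, morphemes, sentence_words, sentence_clauses):
    """utterances: transcribed child utterances (strings)
    morphemes: morpheme count for each utterance (SUGAR conventions)
    sentence_words: word count for each complete sentence
    sentence_clauses: clause count for each complete sentence"""
    tnw = sum(len(u.split()) for u in utterances)        # Total Number of Words
    mlus = sum(morphemes) / len(utterances)              # MLU-SUGAR
    wps = sum(sentence_words) / len(sentence_words)      # Words Per Sentence
    cps = sum(sentence_clauses) / len(sentence_clauses)  # Clauses Per Sentence
    return {"MLUS": round(mlus, 2), "TNW": tnw,
            "WPS": round(wps, 2), "CPS": round(cps, 2)}

# Toy 3-utterance example (a real SUGAR sample has 50 utterances);
# "ball" isn't a complete sentence, so it's excluded from WPS/CPS.
utts = ["the dog wanna go out", "he jumped", "ball"]
print(sugar_metrics(utts, morphemes=[6, 3, 1],  # "wanna" = 2, "jumped" = 2
                    sentence_words=[5, 2], sentence_clauses=[1, 1]))
```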

While the creators of SUGAR have provided some support for its validity, the diagnostic accuracy of the four measures hasn’t been tested—until now! In this new study, the authors recruited 36 children with DLD, ages 3 to 7 (currently receiving services or referred for them), and 206 children with typical language, and used the SUGAR protocol to sample their language. All four measures showed acceptable sensitivity and specificity (above 80%) using research-based cutoff scores (see the paper for the specific cutoffs for each measure). The most accurate classification, according to the authors, came from a combination of MLUS and CPS.
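
As a refresher on what those numbers mean: sensitivity is the proportion of children with DLD that a measure correctly flags, and specificity is the proportion of typically developing children it correctly passes. A quick toy illustration (the cutoff below is made up; the paper reports the actual research-based cutoffs):

```python
# Toy sensitivity/specificity calculation for a cutoff-based measure.
# The cutoff is invented for illustration; see the paper for the real ones.

def sens_spec(scores, has_dld, cutoff):
    """scores: one value per child (lower = more concern)
    has_dld: True if the child has DLD
    cutoff: flag any child scoring at or below this value"""
    flagged = [s <= cutoff for s in scores]
    tp = sum(f and d for f, d in zip(flagged, has_dld))          # DLD, flagged
    fn = sum(not f and d for f, d in zip(flagged, has_dld))      # DLD, missed
    tn = sum(not f and not d for f, d in zip(flagged, has_dld))  # typical, passed
    fp = sum(f and not d for f, d in zip(flagged, has_dld))      # typical, flagged
    return tp / (tp + fn), tn / (tn + fp)  # (sensitivity, specificity)

scores  = [3.1, 4.8, 2.9, 5.6, 6.0, 3.4]
has_dld = [True, False, True, False, False, True]
print(sens_spec(scores, has_dld, cutoff=3.5))  # (1.0, 1.0) on this toy data
```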


One of SUGAR’s big selling points is that it’s quick (like, 20 minutes quick), at least for kids with typical language. Did that still hold for the children with DLD? Actually, in this study they took less time to provide a 50-utterance sample than their typical peers. Bonus!

Language sampling can be daunting for the full-caseload SLP, but we love that research like this is identifying promising LSA measures with high diagnostic accuracy (higher, we might add, than many commercially available tests) while addressing our time and resource barriers.

An important note: there are many methodological differences between SUGAR and other LSA procedures, and SUGAR is not without controversy. We’ll be on the lookout for more research testing SUGAR’s diagnostic potential and comparing it with more traditional protocols, to help us really understand the pros and cons of the different LSA methods.

*When calculating MLUS, derivational morphemes (-tion) are counted separately and catenatives (hafta, wanna) count as two morphemes.

 

Pavelko, S. L., & Owens, R. E., Jr. (2019). Diagnostic accuracy of the Sampling Utterances and Grammatical Analysis Revised (SUGAR) measures for identifying children with language impairment. Language, Speech, and Hearing Services in Schools. https://doi.org/10.1044/2018_LSHSS-18-0050

A one–two punch for assessing young Spanish–English learners

Do you serve pre-K or kindergarten-aged kids? Are some/lots/all of them from Hispanic backgrounds and learning Spanish AND English? Mandatory reading right here, friends!

So—a major issue for young, dual-language learners? Appropriate language assessments. We talk about it a lot (plus here, here, here, and here, to name a few). In this new study, the authors compared a handful of assessments to see which could most accurately classify 4- and 5-year-olds (all Mexican–American and dual-language learners) as having typical vs. disordered language.


The single best-performing measure was a pair of subtests from the Bilingual English-Spanish Assessment (BESA): Morphosyntax and Semantics (the third subtest, Phonology, wasn’t used here). But to get even more accurate? Like, sensitivity of 100% and specificity of about 93%? Add in a story retell task (they used Frog, Where Are You?). Sample both Spanish and English, and take the better MLUw of the two. This BESA + MLU battery outperformed the other options in the mix (the English and Spanish CELF-P2, a composite of the two, a parent interview, and a dynamic vocabulary assessment).

Not familiar with the BESA? It’s a newer test, designed—as the name implies—specifically for children who are bilingual, with different versions (not translated) of subtests in each language. If you give a subtest in both languages, you use the one with the highest score. And before you ask—yes, the test authors believe that monolingual SLPs can administer the BESA, given preparation and a trained assistant.

Now, the researchers here don’t include specific cut scores to work with on these assessments, but you can look at Table 2 in the paper and see the score ranges for the typical vs. disordered language groups. They also note that an MLUw of 4 or less can be a red flag for this group.
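
Curious what the "better MLUw" piece looks like in practice? Here's a minimal sketch, assuming you've transcribed a retell in each language; the MLUw ≤ 4 red flag is from the authors, but the rest is our illustration:

```python
# Sketch of the "better MLUw" rule: compute MLU in words for the retell
# in each language, keep the higher value, and check it against the
# authors' red flag of 4 words or fewer per utterance.

def mluw(utterances):
    """Mean length of utterance in words for one language sample."""
    return sum(len(u.split()) for u in utterances) / len(utterances)

def better_mluw(spanish_utterances, english_utterances, red_flag=4.0):
    best = max(mluw(spanish_utterances), mluw(english_utterances))
    return best, best <= red_flag  # (better MLUw, red-flagged?)

# Toy example: the stronger Spanish retell "wins"
spanish = ["el perro se escapó de la casa", "el niño lo busca"]
english = ["dog run away", "boy look him"]
print(better_mluw(spanish, english))  # (5.5, False)
```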

The major issue with this study, affecting our ability to generalize what it tells us, is that the sample size was really small—just 30 kids total. So, take these new results on board, but don’t override all that other smart stuff you know about assessing dual-language learners (see our links above for some refreshers if needed). And keep an eye out for more diagnostic studies down the road—you know we’ll point them out when they come!

 

Lazewnik, R., Creaghead, N. A., Smith, A. B., Prendeville, J.-A., Raisor-Becker, L., & Silbert, N. (2018). Identifiers of language impairment for Spanish-English dual language learners. Language, Speech, and Hearing Services in Schools. Advance online publication. https://doi.org/10.1044/2018_LSHSS-17-0046