Identifying “disorder within diversity”

This month, ASHA’s Language, Speech, and Hearing Services in Schools journal put out a (free!) clinical forum on the concept of “disorder within diversity.” The forum includes an introduction, where you can read about the usefulness of moving away from “difference vs. disorder,” and five related research articles. Here we review one of the articles, and others can be found here, here, and here.

Do you like your grammatical morphemes accurate, diverse, or productive?

So, you’re an awesome clinician who is eliciting and analyzing a language sample from a bilingual preschooler. High five for you! You want to capture some data about their grammar skills. What exactly do you measure?

The authors of this study suggest that, rather than counting up how accurately the child uses tense and agreement markers (so, finding the percentage of accurate uses out of total obligatory contexts), you instead focus on the diversity and productivity of those markers. The morphemes we’re concerned about are:

  • third person singular –s
  • past tense –ed
  • copula BE (am, are, is, was, were)
  • auxiliary DO (do, does, did)
  • auxiliary BE (am, are, is, was, were).* 

Notice there are five morphemes and 15 total forms here; that’ll be important in a second. These morphemes are clinical markers for language disorders in English.  

So what are diversity and productivity, and how do you measure them? Enter tense marker total and TAP score. We’ll give the basic gist of both (plus a quick scoring sketch after the list); the specifics for calculating them come from Hadley & Short (2005).

  • Diversity (tense marker total): How many of those 15 forms from earlier did the child use in the language sample?
  • Productivity (tense/agreement productivity, or TAP score): How many different ways did the child use those five morphemes? Up to five points allowed per morpheme, for a max of 25.
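
To make that concrete, here’s a minimal sketch of how you might tally both measures. This is our illustration, not the authors’ procedure—the real coding rules are in Hadley & Short (2005)—and the sample coding and labels below are made up.

```python
# A hypothetical coding of one language sample: for each morpheme, the
# (form, utterance) pairs the child produced. Labels are ours; see
# Hadley & Short (2005) for the actual coding rules.
sample = {
    "third_person_s": [("-s", "she walks"), ("-s", "he eats")],
    "past_ed":        [("-ed", "he jumped")],
    "copula_be":      [("is", "it is big"), ("are", "they are mad")],
    "aux_do":         [("does", "does he go?")],
    "aux_be":         [("is", "he is running")],
}

def tense_marker_total(sample):
    """Diversity: how many of the 15 forms appeared at least once (max 15)?
    Copula BE and auxiliary BE forms count separately."""
    return sum(len({form for form, _ in uses}) for uses in sample.values())

def tap_score(sample):
    """Productivity: distinct uses per morpheme, capped at 5 points each (max 25)."""
    return sum(min(len(set(uses)), 5) for uses in sample.values())

print(tense_marker_total(sample), tap_score(sample))  # -> 6 7
```
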

The authors found that, for a group of 4-year-old Spanish–English bilingual children, tense marker total and TAP scores:

  • Were correlated with MLU(words) and NDW (number of different words), valid LSA measures for this population (refresher sketch after this list)
  • Changed over the course of a school year
  • Looked different for children with and without parent-reported language concerns
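
Quick refresher on those two LSA measures, as a simplified sketch (real LSA conventions for counting words, utterances, and mazes are more detailed than this):

```python
# Simplified sketch of MLU(words) and NDW from a transcribed sample.
# Real LSA conventions (mazes, fillers, word-counting rules) are more detailed.
utterances = ["the dog is running", "he jumped", "they are mad"]

words = [w.lower() for utt in utterances for w in utt.split()]

mlu_words = len(words) / len(utterances)  # mean length of utterance in words
ndw = len(set(words))                     # number of different word types

print(mlu_words, ndw)  # -> 3.0 9
```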

The article provides group means (typical language vs. language concerns) for both measures, but not true normative or diagnostic data, so you can’t use tense marker totals or TAP scores to directly diagnose a language disorder at this point. However, consider using them as criterion-based measures to describe tense and agreement skills, identify morphemes to focus on in therapy, and monitor growth.

*If you’re having one of those days and can’t remember the difference between a copula and an auxiliary—no sweat. A copula is the only verb in a clause (like “is” was, there), but auxiliaries are those “helping verbs” that are linked up with another verb (like “are” was with “linked,” there).

Potapova, I., Kelly, S., Combiths, P. N., & Pruitt-Lord, S. L. (2018). Evaluating English Morpheme Accuracy, Diversity, and Productivity Measures in Language Samples of Developing Bilinguals. Language, Speech, and Hearing Services in Schools, 49(2), 260–276. https://doi.org/10.1044/2017_LSHSS-17-0026.

What test do you want 30% of kindergarteners to fail? A language screener

Did you ever add a child to your caseload and think, “Why haven’t I seen this kid sooner?!” You’re not alone. Underidentification of developmental language disorder in young children is a major issue. So, how can we deal with this? One way is to identify good screening tools. Previous research suggests that an effective language screener should have a failure rate close to 30%—meaning about 30% of the children don’t pass—so that you capture the children most likely to have a language disorder.

The authors of this study found that probing for past-tense grammar was an effective way to screen for language disorder in kindergarten students. Specifically, they gave a large group of kindergarten students a screener of grammatical tense marking—the Rice Wexler Test of Early Grammatical Impairment (TEGI) Screening Test—which included past tense and third-person singular probes. Only the past-tense probes resulted in a failure rate close to 30%, showing their potential as an effective screening tool. If children* fail past-tense probes, this is a red flag and tells us that close monitoring or a formal evaluation may be the next appropriate step.

The students were also screened for nonverbal intelligence, articulation, and emergent literacy skills. Interestingly, the children who failed the past-tense probe often had age-appropriate skills in these areas. What does this tell us? We can’t rely on screeners of related skills to identify children at risk for language disorder—we have to screen oral language directly. If we don’t, we may miss kids who fly under the radar due to their relatively stronger articulation or literacy abilities.

Want to know the best part? The TEGI Screening Test is FREE and available here!

*One very important note: the TEGI is only valid for children who speak Standard (Mainstream) American English. Students who speak African American English or Spanish-influenced English should not be screened with this tool. Check out this review for an alternative.

Weiler, B., Schuele, C. M., Feldman, J. I., & Krimm, H. (2018). A multiyear population-based study of kindergarten language screening failure rates using the Rice Wexler Test of Early Grammatical Impairment. Language, Speech, and Hearing Services in Schools, 49(2), 248–259. https://doi.org/10.1044/2017_LSHSS-17-0071.

Teacher ratings as a language screening for dialect speakers

In the last review, we shared research on a potentially valid tool to screen Mainstream English-speaking kindergarteners for language disorders. But what about our kiddos who speak other dialects of English, like African American English (AAE) or Southern White English (SWE)? In this study, researchers gave a group of AAE- and SWE-speaking kindergarteners a handful of language and literacy screeners, to see which one(s) could best identify possible language disorders, while avoiding “dialect effects.”

Their most successful screener (and TISLP’s winner for best acronym of the month) was the TROLL, or Teacher Rating of Oral Language and Literacy—available here for free. And yes, that’s a teacher questionnaire, rather than another individually-administered assessment for our students who spend so much time testing already. Importantly, the teachers completed the ratings at the end of the kindergarten year, not the beginning, so they had time to really get to know the students and their abilities.

The researchers calculated a new cut score of 89 for this population, since the TROLL itself only suggests cut scores through age 5. This resulted in a sensitivity of 77% for identifying language disorders. Now, 77% isn’t really high enough—we want a minimum of 80% for a good screener. But it may be a starting place until better tools come our way.
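
As a refresher on what that 77% means: sensitivity is the share of children who truly have a language disorder that the screener correctly flags. Here’s a toy calculation with invented scores (not the study’s data), assuming “fail” means a TROLL score below the cut score:

```python
# Toy illustration of screener sensitivity at a cut score. The scores and
# diagnoses below are invented, and we assume "fail" = score below 89.
CUT_SCORE = 89

# (TROLL score, truly has a language disorder?) for five hypothetical kids
kids = [(82, True), (91, True), (85, True), (95, False), (88, False)]

true_positives = sum(has_dld for score, has_dld in kids if score < CUT_SCORE)
all_with_disorder = sum(has_dld for _, has_dld in kids)

sensitivity = true_positives / all_with_disorder
print(f"{sensitivity:.0%}")  # 2 of the 3 kids with a disorder failed -> 67%
```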

Gregory, K. D., & Oetting, J. B. (2018). Classification Accuracy of Teacher Ratings When Screening Nonmainstream English-Speaking Kindergartners for Language Impairment in the Rural South. Language, Speech, and Hearing Services in Schools, 49(2), 218–231. https://doi.org/10.1044/2017_LSHSS-17-0045.

School-based assessments: Why do we do what we do?

Fulcher-Rood et al. interviewed school-based SLPs across the United States about how we choose assessment tools and diagnose/qualify our students. They wanted to understand not just which tools we use, but why we choose them, what “rules” we follow when we make diagnostic decisions, and what external factors affect those decisions. We’ve reviewed some other surveys of SLPs’ current assessment practices in the past—on the use of LSA, and on methods we’re using to assess bilingual clients—and these findings are kinda similar. There’s a lot of detail in the survey, but we’ll just focus on a couple things here.

  • We give a LOT of standardized tests, and qualify most of our students for service on the basis of those scores, with reference to some established cut-off (e.g., 1.5 SD below the mean—see the quick math after this list)
  • We don’t do a ton of language sample analysis (at least the good ol’ record-transcribe-analyze variety)
  • We use informal measures to fill in the gaps and show academic impacts, but those results are less important when deciding who qualifies for service
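
The quick math on that cut-off, assuming the typical standard-score scale of mean 100 and SD 15 (an assumption—your tests and district policies may differ):

```python
# "1.5 SD below the mean" on a typical standard-score scale (mean 100,
# SD 15). These values are assumptions; cut-offs vary by test and district.
mean, sd = 100, 15

cutoff = mean - 1.5 * sd
print(cutoff)  # 77.5 -> standard scores of 77 or below fall past the cut-off
```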

None of this is likely to surprise you, but given what we know about the weaknesses of standardized tests (especially with diversity in home languages, dialects, and SES), the arbitrary nature of most cut-off scores, and the many advantages of LSA and other non-standardized measures… it’s a problem.

So, what barriers are we up against when it comes to implementation of evidence-based assessment practices? First—let’s say it all together—TIME. Always time. Standardized tests are easy to pull, fairly quick to administer and score, and you often have a handy dandy report template to follow. Besides that, we’re often subject to institutional guidelines or policies that require (or *seem* to require) standard scores to qualify students for services.

None of the SLPs in the survey mentioned that research was informing their selection of assessment tools or diagnostic decisions. That doesn’t necessarily mean none of them consider the research—they just didn’t bring it up. But guys! We need to be bringing it up! And by “we,” I mean YOU! The person taking your all-too-limited time to read these reviews. The authors of the study pointed out (emphasis mine) that “there are differences between policies (what must be done) and guidelines (how can it be done)... potentially, school-based SLPs interpret some of the guidelines as mandatory, instead of as suggested.” Maybe there’s some wiggle room that we aren’t taking advantage of. We can speak up, evaluation by evaluation, sharing our knowledge of research and best practices.

It all boils down to this: “While it is important for SLPs to adhere to the policies set forth by their employment agency, it is equally important for SLPs to conduct evaluations guided by best practice in the field. SLPs may need to advocate for policy changes to ensure that evidence-based practice is followed.”

Fulcher-Rood, K., Castilla-Earls, A. P., & Higginbotham, J. (2018). School-Based Speech-Language Pathologists’ Perspectives on Diagnostic Decision Making. American Journal of Speech-Language Pathology. Advance online publication. https://doi.org/10.1044/2018_AJSLP-16-0121.

Bilingual or English only? How to teach vocabulary to dual language learners

We often hear about the benefits of bilingualism, but we can’t overlook the challenges it can bring. Some children begin learning a second language (L2) while their first (L1) is still developing. As early as preschool, these children—called dual language learners (DLLs)—can fall behind their monolingual peers in important areas, including vocabulary development. To ensure that this lag doesn’t continue, it’s crucial to provide effective vocabulary instruction as early as possible.

In this study, Méndez et al. tested whether a bilingual vocabulary instructional approach or an English-only approach would better improve the English vocabularies of preschool-aged Spanish-English DLLs. The only difference between the two approaches was the language(s) of vocabulary instruction. For 5 weeks, the preschoolers participated in small-group shared readings targeting 30 English words. Some key features of the intervention included repeated readings of culturally-relevant stories, many exposures to target words, multi-modal presentations, and child-friendly definitions.*

After 5 weeks, they found that the preschoolers learned more English and Spanish vocabulary from the bilingual approach than from the English-only approach. Because information was presented in both Spanish and English, the preschoolers seem to have been able to leverage their L1 knowledge to support learning in the L2. The authors also found that the bilingual instruction was effective regardless of the preschoolers’ gender or initial vocabulary skills.

So, what’s the takeaway here? Even if you don’t have DLLs on your caseload at the moment, chances are you will in the future, so this matters for all of us. In order to be most effective, vocabulary instruction for preschool DLLs should include input in both languages. English-only instruction will not lead to better vocabulary outcomes for preschool DLLs. For the monolingual SLPs out there, don’t let this scare you! Think of it as an opportunity to get creative and collaborate with other professionals, family members, or L1 speakers in the community in order to support L1 development and L2 learning.  

*If you want to learn more about the specific instructional approach, the same authors provide more details in an earlier publication with a different group of preschool DLLs. Or, if you’re interested in SLPs’ roles regarding dual language learners, ASHA provides some great resources.

Méndez, L. I., Crais, E. R., & Kainz, K. (2018). The impact of individual differences on a bilingual vocabulary approach for Latino preschoolers. Journal of Speech, Language, and Hearing Research, 61, 897–909. https://doi.org/10.1044/2018_JSLHR-L-17-0186.

Building bilingual children's vocabularies: How much teaching do we really need to be doing here?

We know that bilingual children’s vocabulary predicts long-term literacy outcomes. In this study, teachers taught higher-level English words (e.g., illness, clung, fierce) through storybook reading activities to low-income second graders* who spoke Spanish at home. The complete word lists, books used, and an example lesson are in the article’s supplemental materials. Each word was taught by one of three methods:

  1. Extended instruction: Teachers pre-taught words in English and Spanish and provided additional examples and practice during and after reading
  2. Embedded instruction: Teachers defined words in English during story reading and provided songs and writing practice after reading
  3. Control: Teachers read words in the story but provided no additional explanation

Children learned the most words through extended instruction (the one with the most examples). But, the authors pointed out what we all know—there are soooo many words for children to learn, and only so much time to teach them. Luckily, the less intensive embedded instruction method also led to good learning compared with the control condition. In the real world, this level of teaching may be good enough.

*Note that the authors didn’t assess children’s L1 (Spanish) language ability, so we don’t know how many children (if any) had developmental language disorder (DLD). See here for guidance on diagnosing DLD in bilingual children, here for tips for giving vocab tests to children with DLD, and here for more resources on bilingual intervention. 

August, D., Artzi, L., Barr, C., & Francis, D. (2018). The moderating influence of instructional intensity and word type on the acquisition of academic vocabulary in young English language learners. Reading and Writing, 31, 965–989. https://doi.org/10.1007/s11145-018-9821-1.

Improving the narrative skills of children with language disorder

It’s hard to imagine going through a day without either telling or hearing a story. “What’d you do this weekend? What was the movie about? What happened?!” We don’t think twice when answering these questions. For some kids, though, this can be really difficult. As the authors of this study point out, “…elementary school–age children with language disorders who demonstrate poor narrative skills are disadvantaged during a large portion of the school day because a great deal of classroom instruction incorporates some degree of narrative discourse into the lessons.”

To address this need, the authors developed a program called Supporting Knowledge in Language and Literacy (SKILL), with the goal of specifically targeting narrative skills directly related to the elementary curriculum (that’s right, think Common Core). The intervention was designed to teach children basic story elements such as characters, setting, actions, consequences, and the relationships among these elements. It then uses story modeling, story retelling, story generation, and story evaluation to develop the child’s narration and literacy skills. Check out the study for a detailed description of the program.

After roughly 8 weeks, the four children who received the intervention told stories that were longer, contained more diverse vocabulary, and were more complex than their stories at baseline.

Beyond the results, what’s good about SKILL? The lessons include evidence-based procedures that are actually scripted out for you. You buy the manual (see here) and materials and you’re off to the races! No planning needed.

But the authors admit that the study was small and did not meet the highest design standards, and the SKILL curriculum has only been studied one other time as a whole-classroom intervention.

Gillam, S. L., Olszewski, A., Squires, K., Wolfe, K., Slocum, T., & Gillam, R. B. (2018). Improving narrative production in children with language disorders: An early-stage efficacy study of a narrative intervention program. Language, Speech, and Hearing Services in Schools, 49(2), 197–212. https://doi.org/10.1044/2017_LSHSS-17-0047.

Thinking outside the box(es) for older beginning communicators

Unfortunate but true: Despite the advances our field has seen in AAC awareness, knowledge, and technologies, too many children with complex communication needs remain “emergent” or “pre-symbolic” communicators into adolescence and beyond. Older beginning communicators encounter huge restrictions to their participation across environments. There are lots of reasons for this, and many individual factors at play, but it’s definitely a problem.

The authors of this study argue that some part of this skill gap—and one reason that gains from AAC interventions with this population have been modest—is that the available high-tech AAC options have just been too cumbersome: difficult and slow to program, with high cognitive, linguistic, and motoric demands for the user. They suggest a different approach, now possible thanks to evolving technology: visual scene displays (VSDs), based on photographs snapped by the device’s onboard camera, programmed “just-in-time” with voice-output hotspots. And yes, “just-in-time” means “you’re programming hotspots right then and there during the interaction.” Remember that the next level up from “emergent” communicator is “context-dependent.” These technologies are intended to help learners make that leap, by giving them quick and easy access to that context, right when it’s relevant.

The researchers used a tablet and mobile app* with these features during high-interest leisure activities with 9- to 18-year-old beginning communicators. During the activity, a communication partner snapped a picture and programmed in a couple of relevant hotspots. (By the way, they say they needed only 25 SECONDS to program a VSD with two hotspots.) The article has some great descriptions of how the interactions were structured and how the partners chose what to program. Compared to a baseline condition (using the participants’ current AAC systems), the beginning communicators averaged over 20 additional conversational turns within 15 minutes using the just-in-time approach.

What was the magic ingredient here? There are a number of possibilities, but the authors highlight a few:

  • Access to the immediate context of the activity
  • Potential advantages of using photographs vs. symbols
  • Contextualized vocabulary, for a reduced cognitive demand
  • Use of mainstream technology (tablets)

*The specific mobile app they used, EasyVSD, is not commercially available, but Snap Scene is based on the same technology.

Holyfield, C., Caron, J. G., Drager, K., & Light, J. (2018). Effect of mobile technology featuring visual scene displays and just-in-time programming on communication turns by preadolescent and adolescent beginning communicators. International Journal of Speech-Language Pathology. Advance online publication. https://doi.org/10.1080/17549507.2018.1441440.