Assessing language with diverse preschoolers? Go for dynamic assessment


Making the right call when assessing the language skills of children whose cultural or language backgrounds don’t match our own is hard. With our go-to assessment methods, we risk labeling normal language variation as a sign of a disorder: standardized test norms may over-identify children from non-mainstream language backgrounds as having language impairment.

Enter dynamic assessment, which involves testing a child, providing teaching and support, and then retesting to see what the child can do with help. In a new study, Henderson et al. used dynamic assessment to measure the language skills of Navajo preschoolers, using narrative retell tasks from the Predictive Early Assessment of Reading and Language (PEARL, from the same acronym aficionados that brought us the DYMOND).

Dynamic assessment takes longer than static (one-time) assessment. The PEARL accounts for this—you give the pretest, look at the score, and then administer the teaching and retest only if the score is below a cutoff. Henderson et al. found that the published cutoff score for the PEARL pretest didn’t work well for Navajo children; sensitivity and specificity were better with a cutoff of 7 rather than 9. Looking at the whole test, scores on the retest (following teaching) were even better at diagnosing children, and examiners’ “modifiability” ratings (how the child responded to teaching) classified children with 100% accuracy. These findings suggest that the PEARL, with an adjusted pretest cutoff, is a valid test for assessing language in children from non-mainstream language or cultural backgrounds.
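To make the two-stage logic concrete, here’s a minimal sketch in Python, under the assumptions above: a pretest cutoff of 7 (the value that worked best in this study, not the test’s published cutoff of 9), and our own guess about how the boundary is handled. The function and variable names are ours, not part of the PEARL.

```python
# Two-stage dynamic assessment flow, sketched from the description above.
# The cutoff of 7 is the value Henderson et al. (2018) found worked best
# for Navajo preschoolers; whether a score equal to the cutoff "passes"
# (>= vs. >) is our assumption.

PRETEST_CUTOFF = 7

def pearl_flow(pretest_score: int) -> str:
    """Decide whether to continue to the teaching and retest phases."""
    if pretest_score >= PRETEST_CUTOFF:
        return "pass: no teaching or retest needed"
    # Below the cutoff: administer teaching, retest, and rate
    # "modifiability" (how the child responded to teaching).
    return "continue: teach, retest, and rate modifiability"

print(pearl_flow(8))  # pass: no teaching or retest needed
print(pearl_flow(5))  # continue: teach, retest, and rate modifiability
```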

 

Henderson, D. E., Restrepo, M. A., & Aiken, L. S. (2018). Dynamic assessment of narratives among Navajo preschoolers. Journal of Speech, Language, and Hearing Research, 61(10), 2547–2560.

Perspectives & Tutorials

Beyond Social Skills: Supporting Peer Relationships and Friendships for School-Aged Children with Autism Spectrum Disorder

Clinical Implications for Working with Nonmainstream Dialect Speakers: A Focus on Two Filipino Kindergarteners

This clinical focus piece is a useful resource on the features of Philippine English, a variety of English spoken by many Filipinos and influenced by Tagalog (which, along with English, is an official language of the Philippines). The authors discuss two case studies and give some general recommendations for working with nonmainstream dialect speakers, so it’s worth a look even if you don’t have any current clients from this background.

Ending the Reading Wars: Reading Acquisition From Novice to Expert

This is a really good paper. No extra description needed… just go check it out! And it’s open-access (party, party!)

Illustrating a Supports-Based Approach Toward Friendship with Autistic Students

Individual differences in children’s pragmatic ability: A review of associations with formal language, social cognition, and executive functions

A Multilinguistic Approach to Evaluating Student Spelling in Writing Samples

A lot of SLPs don’t feel confident addressing our kiddos’ writing and spelling needs. But… yeah—we need to go there. This clinical focus piece describes a system for assessing spelling that capitalizes on our skills in analyzing spoken language samples to classify spelling errors as phonological, orthographic, or morphological in nature. The informal method they describe would be useful for describing skills, setting goals, and tracking progress.

The Place of Morphology in Learning to Read in English

Promoting Conditional Use of Communication Skills for Learners with Complex Communication Needs: A Tutorial

Stepping Stones to Switch Access

Feel clueless when working with switch users? Get stuck in “cause-and-effect” land? In this piece from ASHA’s SIG 12, switch-access expert Linda Burkhart lays out a progression of switch skills and possible target activities from the very earliest stages of learning (I hit a button and a thing happens!) all the way to building automaticity with two-switch scanning. This is one to keep handy if you have clients with complex communication needs.

Teacher ratings as a language screening for dialect speakers


In the last review, we shared research on a potentially valid tool to screen Mainstream English-speaking kindergarteners for language disorders. But what about our kiddos who speak other dialects of English, like African American English (AAE) or Southern White English (SWE)? In this study, researchers gave a group of AAE- and SWE-speaking kindergarteners a handful of language and literacy screeners, to see which one(s) could best identify possible language disorders, while avoiding “dialect effects.”

Their most successful screener (and TISLP’s winner for best acronym of the month) was the TROLL, or Teacher Rating of Oral Language and Literacy—available here for free. And yes, that’s a teacher questionnaire, rather than another individually administered assessment for our students who spend so much time testing already. Importantly, the teachers completed the ratings at the end of the kindergarten year, not the beginning, so they had time to really get to know the students and their abilities.

The researchers calculated a new cut score of 89 for this population, since the TROLL itself only suggests cut scores through age 5. This resulted in sensitivity of 77% for identification of language disorders. Now, 77% isn’t really high enough—we want a minimum of 80% for a good screener. But it may be a starting place until better tools come our way.

Gregory, K. D., & Oetting, J. B. (2018). Classification Accuracy of Teacher Ratings When Screening Nonmainstream English-Speaking Kindergartners for Language Impairment in the Rural South. Language, Speech, and Hearing Services in Schools, 49(2), 218–231. doi:10.1044/2017_LSHSS-17-0045

A faster way to measure grammar skills


We’ve previously pointed you to research supporting “Percent Grammatical Utterances” (that’s PGU for the acronym-inclined) as a good language sample analysis to help diagnose developmental language disorder (DLD). It’s great practice, but in reality, the procedure for computing PGU can be pretty time-consuming.

In this study, the researchers who brought us PGU have given us a faster way to accomplish pretty much the same thing. Yay, science! They wanted a good method to measure and monitor growth in grammar skills — there really isn’t anything like that right now — so the process needed to be efficient enough to do multiple times a year for each kiddo. Enter Percent Grammatical Responses (you guessed it… PGR for short).

So how does PGR work? Kids between 3;0 and 3;11 saw a series of 15 pictures (described here). For each, the adult asked “What is happening in the picture?” The whole response was scored as either grammatical or ungrammatical. Take the number of grammatical responses, divide by 15, and, voila! PGR. No dividing responses up into C-Units… woo!
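For the arithmetic-inclined, here’s a minimal sketch of that computation in Python. The function name and the true/false scoring representation are ours for illustration; the grammaticality judgment itself is, of course, the clinician’s.

```python
# PGR = (number of grammatical responses / 15 pictures) x 100,
# following the procedure described above. Names are illustrative.

def percent_grammatical_responses(scores):
    """scores: one True (grammatical) / False (ungrammatical) per picture."""
    assert len(scores) == 15, "PGR uses one whole-response score per picture"
    return 100 * sum(scores) / len(scores)

# Example: 12 of 15 responses judged grammatical
scores = [True] * 12 + [False] * 3
print(percent_grammatical_responses(scores))  # 80.0
```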

That’s too easy. It can’t be valid! It can! PGR appears to actually measure grammatical ability, since it correlates closely with SPELT-P 2 scores, while not being correlated with a measure of vocabulary. It also correlates with PGU, which has proven validity. As diagnostic tools for DLD, PGR and PGU agreed 92% of the time on “passes” and 94% of the time on “fails,” with given cutoff scores for each.

Awesome, right? But remember: this initial validation study only established a cutoff score for 3-year-olds, so we don’t have enough information to substitute PGR for PGU with older kids. Also, while you can add PGR to your diagnostic toolkit, hold off for now on using it to monitor progress. More study is needed to determine whether PGR is sensitive enough to reflect skill growth over time.

Cultural/Linguistic Diversity Note: The sample “ungrammatical” responses in the paper are constructions that are perfectly good in Non-Mainstream American English (or African American English). The kids in this study spoke “mainstream English,” but as always, be mindful of dialect differences in assessment.

Eisenberg, S. L., & Guo, L. (2017). Percent Grammatical Responses as a General Outcome Measure: Initial Validity. Language, Speech, and Hearing Services in Schools. Advance online publication. doi:10.1044/2017_LSHSS-16-0070

And more...

Flipsen conducted a literature search to answer the question—what might be the outcome for a hearing-impaired child whose access to amplification is delayed (with no sign language in the interim)? This study was written as a guide for clinicians on searching the literature and making predictions for their own clients. Four studies were identified, showing that delayed access to a cochlear implant (meaning, waiting until age 2 or 3) is predictive of speech–language delays into late childhood/early adulthood.

Hansen & Scott reviewed the autism intervention literature to find evidence-based practices for children with both autism and hearing loss. In short… there really aren’t any (yet).

Shollenbarger et al. showed that 1st graders who speak nonmainstream American English (NMAE) may perform worse on phonological awareness tasks involving final consonant clusters (CVCC words) than children who speak mainstream American English. The authors urge caution when using CVCC words to assess rhyming and phoneme segmentation in children who speak NMAE.

Flipsen, P. (2017). Predicting the Future: A Case Study in Prognostication. American Journal of Speech-Language Pathology. Advance online publication. doi:10.1044/2017_AJSLP-17-0022.

Hansen, S., & Scott, J. (2017). A Systematic Review of the Autism Research with Children Who Are Deaf or Hard of Hearing. Communication Disorders Quarterly. Advance online publication. doi:10.1177/1525740117728475

Shollenbarger, A. J., Robinson, G. C., Taran, V., & Choi, S. (2017). How African American English-speaking first graders segment and rhyme words and nonwords with final consonant clusters. Language, Speech, and Hearing Services in Schools, 48(4), 273–285.

And more...

• Berry & Oetting describe dialect differences observed in children with Gullah/Geechee heritage (Southeastern United States).

• Douglas et al. took communication partner training strategies that already have a good evidence base (see Kent-Walsh & McNaughton, 2005 and Douglas et al., 2014) and hosted them online for parents of young children with autism. Results indicate “increased communication by the child”; however, note that this study is small, and “…further replication is necessary before generalizing results.”

• Hughes et al. provide evidence that having a positive, meaningful relationship with a person who stutters (friend, family member, role model) is “…associated with high ratings of an average person who stutters as being trustworthy and reliable.” The authors suggest that “…simply knowing a person who stutters may not improve attitudes toward stuttering,” but that having a more meaningful relationship with one may. This prediction requires further investigation, as the study is correlational, not experimental.

• McConachie et al. reviewed the literature to identify outcomes that parents value when monitoring the progress of their young children with autism, and then asked parents to rate those outcomes in order of importance. The top four outcomes were: happiness, anxiety or unusual fears, hypersensitivity (“discomfort with being touched, too much noise, bright lights, certain tastes, etc.”), and self-esteem (“positive views of self”). Check out Table 2 in the study for the additional six outcomes that round out the top 10!

• Montgomery et al. tested sentence comprehension for different complex sentence types in typically developing (TD) children and children with developmental language disorder (DLD). Meaning cues in the sentences were removed so that children had to rely on word order alone. They found that children with DLD had poorer comprehension than TD children, especially in sentences that violate the usual subject-verb-object word order. They recommend that SLPs consider teaching complex sentences containing familiar words and tons of meaning cues, to compensate for this difficulty with word order.

• Owen Van Horne et al. taught past tense –ed to children with developmental language disorder by starting with either “easy” or “hard” lists of verbs. Difficulty was based on the likelihood of each verb being used in the past tense (e.g., “jump” is commonly used in the past tense; “rest” less so) plus its phonological features (e.g., words ending in –t or –d are harder to inflect). The children who started with the lists of “hard” verbs showed greater gains in accuracy for past tense –ed on treated verbs and more generalization of –ed to untreated verbs, but did not finish treatment more quickly. The full list of verbs, plus an example treatment script (so you can see exactly what their instruction looked like), is included in the article; a toy sketch of the selection idea follows below.
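Purely for illustration, here’s a toy Python sketch of ranking verbs on the two difficulty dimensions described above. The example verbs, scores, and weights are our assumptions, not the study’s actual stimuli or criteria (those are in the article).

```python
# Toy "hard things first" exemplar selection. All numbers below are
# made up for illustration; see Owen Van Horne et al. for the real lists.

def difficulty(past_tense_likelihood, ends_in_t_or_d):
    """Higher = harder: rarely used in past tense and/or final -t/-d."""
    score = 1 - past_tense_likelihood   # rarer in the past tense -> harder
    if ends_in_t_or_d:
        score += 0.5                    # final -t/-d is harder to inflect
    return score

verbs = {
    "jump": difficulty(0.8, False),  # common in past tense -> easier
    "rest": difficulty(0.3, True),   # rarer, ends in -t -> harder
}
hard_first = sorted(verbs, key=verbs.get, reverse=True)
print(hard_first)  # ['rest', 'jump']
```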

 

Berry, J. R., & Oetting, J. B. (2017). Dialect Variation of Copula and Auxiliary Verb BE: African American English–Speaking Children With and Without Gullah/Geechee Heritage. Journal of Speech, Language, and Hearing Research. Advance online publication. doi:10.1044/2017_JSLHR-L-16-0120

Douglas, S. N., Kammes, R., & Nordquist, E. (2017). Online Communication Training for Parents of Children With Autism Spectrum Disorder. Communication Disorders Quarterly. Advance online publication. doi:10.1177/1525740117727491

Hughes, C. D., Gabel, R. M., & Palasik, S. T. (2017). Examining the relationship between perceptions of a known person who stutters and attitudes toward stuttering. Canadian Journal of Speech-Language Pathology and Audiology, 41(3), 237–252.

McConachie, H., Livingstone, N., Morris, C., Beresford, B., Le Couteur, A., Gringras, P., … Parr, J. R. (2017). Parents suggest which indicators of progress and outcomes should be measured in young children with Autism Spectrum Disorder. Journal of Autism and Developmental Disorders. doi:10.1007/s10803-017-3282-2

Montgomery, J. W., Gillam, R. B., Evans, J. L., & Sergeev, A. V. (2017). “Whatdunit?” Sentence Comprehension Abilities of Children With SLI: Sensitivity to Word Order in Canonical and Noncanonical Structures. Journal of Speech, Language, and Hearing Research. Advance online publication. doi:10.1044/2017_JSLHR-L-17-0025

Owen Van Horne, A. J., Fey, M., & Curran, M. (2017). Do the Hard Things First: A Randomized Controlled Trial Testing the Effects of Exemplar Selection on Generalization Following Therapy for Grammatical Morphology. Journal of Speech, Language, and Hearing Research. Advance online publication. doi:10.1044/2017_JSLHR-L-17-0001

Modification to standardized tests for speakers of nonmainstream dialects

The authors of this paper discuss how, when an SLP evaluates a young speaker of a nonmainstream American English dialect (NMAE), s/he faces two tasks: first, determining whether the child is a speaker of a nonmainstream dialect, and then determining whether that child has a language disorder.

Though the task may seem straightforward at first glance, it can be incredibly challenging. One major barrier is that children use NMAE variably. Conversational contexts are more likely to elicit NMAE use, and use can also change with the communication partner. Dialect use also changes with age; the authors state, “… the general trend is that use of NMAE features drops during the first few years of elementary school as students master code-switching strategies, and then increases during adolescence as students begin using NMAE dialect for more social reasons (N.P. Terry et al., 2010; Van Hofwegen & Wolfram, 2010).” On top of this variability, the overlap between forms that are ungrammatical in mainstream American English but grammatical in NMAE makes the task all the more challenging.

As part of the evaluation process, SLPs may choose to use a combination of language sample analysis (LSA) and standardized testing. A common adjustment to the standardized test to account for the child’s dialect is to apply scoring modifications—that is, to count an item on a test as “accurate” if it’s accurate in the child’s dialect. This is in line with what testing manuals themselves recommend, e.g., the CELF-4 and CELF-5.
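To make the idea concrete, here’s a minimal, hypothetical sketch of dialect-sensitive scoring in Python. The data structures and the zero-copula example are our illustration, not the CELF’s actual items or scoring rules.

```python
# Hypothetical dialect-sensitive scoring: an item counts as accurate if
# the response is acceptable in mainstream American English (MAE) OR in
# the child's dialect. The forms below are illustrative, not test items.

def score_item(response, mae_forms, dialect_forms):
    """Return 1 (accurate) if the response is acceptable in MAE or the dialect."""
    return 1 if response in mae_forms or response in dialect_forms else 0

# Example: zero-marked copula ("he happy") is grammatical in AAE.
mae = {"he is happy"}
aae = {"he is happy", "he happy"}
print(score_item("he happy", mae, aae))  # -> 1 (would be 0 under MAE-only scoring)
```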

In this study, the researchers examined what happens when you apply scoring modifications to the CELF-4 with a sample of 299 2nd-grade students. They found that:

  • without scoring modifications, NMAE speakers were over-identified as having a language disorder
  • with scoring modifications, over-identification decreased, but under-identification of NMAE speakers who truly do have a language disorder increased

Yikes. It’s well-known that using a standardized language assessment for a speaker of a nonmainstream dialect, when the test wasn’t designed with speakers of that dialect in mind, can provide inaccurate diagnostic results (see article for review). However, this study also provides clear data that the scoring modifications don’t exactly work well, either.

Currently, there isn’t a perfect solution. For now, it’s important for SLPs to simply understand the potential pitfalls they may encounter during assessment. The authors suggest that good options to add to the assessment protocol include: detailed case histories of the child’s abilities at both home and school, peer comparisons, LSA, and dynamic assessment. The authors acknowledge the huge need for more research on how to streamline this process, because even with some of the strategies that look promising (like dynamic assessment), we still don’t have adequate research to fully guide diagnostic decision-making.

Hendricks, A.E., & Adlof, S.M. (2017). Language Assessment With Children Who Speak Nonmainstream Dialects: Examining the Effects of Scoring Modifications in Norm-Referenced Assessment. Language, Speech, and Hearing Services in Schools. Advance online publication. doi:10.1044/2017_LSHSS-16-0060