Assessing language with diverse preschoolers? Go for dynamic assessment


Making the right call when assessing the language skills of children whose cultural or language backgrounds don’t match our own is hard. With our go-to assessment methods, we risk labeling normal language variation as a sign of disorder. Standardized test norms may over-identify children from non-mainstream language backgrounds as having language impairment.

Enter dynamic assessment, which involves testing a child, providing teaching and support, and then retesting to see what the child can do with help. In a new study, Henderson et al. used dynamic assessment to evaluate the language skills of Navajo preschoolers, using narrative retell tasks from the Predictive Early Assessment of Reading and Language (PEARL, from the same acronym aficionados who brought us the DYMOND).

Dynamic assessment takes longer than static (one-time) assessment. The PEARL accounts for this—you give the pretest, look at the score, and then administer the teaching and retest only if it’s below a cutoff. Henderson et al. found that the reported cutoff score for the PEARL pretest didn’t work well for Navajo children; sensitivity and specificity were better with a cutoff score of 7 rather than 9. Looking at the whole test, scores on the retest (following teaching) were even better at diagnosing children, and examiners’ “modifiability” ratings (how the child responded to teaching) diagnosed children with 100% accuracy. These findings suggest that the PEARL is a valid test for assessing language in children from non-mainstream language or cultural backgrounds.   
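If it helps to see that two-stage flow spelled out, here’s a rough sketch in Python. It’s purely illustrative: the function name and wording are ours, and only the below-the-cutoff rule and the 7-versus-9 comparison come from the study.

```python
# Illustrative sketch of the two-stage screening flow described above.
# Only the cutoff logic (teach and retest only when the pretest falls
# below the cutoff) and the 7-vs-9 comparison come from the study.

def pearl_screen(pretest_score: int, cutoff: int = 9) -> str:
    """Decide whether to continue to the teach-and-retest phase."""
    if pretest_score >= cutoff:
        # At or above the cutoff: screening stops after the pretest.
        return "stop: no further dynamic assessment indicated"
    # Below the cutoff: administer teaching, retest, and rate modifiability.
    return "continue: administer teaching, retest, and rate modifiability"

# A child scoring 8 continues to teaching under the published cutoff of 9,
# but stops after the pretest under the cutoff of 7 reported for this sample.
print(pearl_screen(8, cutoff=9))  # continue
print(pearl_screen(8, cutoff=7))  # stop
```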

 

Henderson, D. E., Restrepo, M. A., & Aiken, L. S. (2018). Dynamic assessment of narratives among Navajo preschoolers. Journal of Speech, Language, and Hearing Research, 61(10), 2547–2560.

Dynamic assessment = Crystal ball for reading skills?

Helping kids become proficient readers is a big deal. Schools often screen children’s decoding skills (the ability to sound out words) to figure out who needs help. But what do screening results mean for children’s future reading ability? Petersen et al. followed a diverse group of children from kindergarten to fifth grade to find out.

The authors administered a quick dynamic assessment task at the beginning of kindergarten. Children were asked to decode four nonsense words, taught how to decode them, and then asked to decode them again. Examiners scored children’s accuracy and how easily they responded to teaching. The task took only three minutes to administer, on average. (The task is described in more detail in this article, and it’s similar to the decoding tasks on the PEARL.) The children’s schools also screened their ability to name letters and sounds at the beginning of kindergarten and their oral reading fluency at the end of each year.


Performance on the dynamic task in kindergarten classified children into average vs. struggling reader categories in fifth grade with 75–80% accuracy. The 3-minute dynamic task was better at predicting reading skill than the traditional static (one-time) screening, especially for the Hispanic students in the sample, many of whom were English language learners.

The task wasn’t perfect at predicting fifth grade reading skill, but it was pretty good, especially considering how fast it was to administer. These findings suggest that, compared to the static measures, dynamic assessment of decoding could save a ton of intervention time. Dynamic tasks are less likely to pick up children who just lack reading exposure, saving us time for working with the kids who will continue to need help with reading (AKA, making RTI less of a massive undertaking).

 

Petersen, D. B., Gragg, S. L., & Spencer, T. D. (2018). Predicting reading problems 6 years into the future: Dynamic assessment reduces bias and increases classification accuracy. Language, Speech, and Hearing Services in Schools, 49(4), 875–888.

GUEST POST: On the DYMOND (Dynamic Measure of Oral Narrative Discourse)

Have you been avoiding dynamic assessment because it seems too complicated and time-consuming? A new study by Petersen et al. (2017) outlines an efficient, accurate, and standardized way to identify language impairment in school-aged children using a narrative dynamic assessment.

What is dynamic assessment?

Dynamic assessment is a method used to measure a student’s learning potential rather than their current knowledge. A test-teach-retest approach is often used. The child is given an initial test to determine their current individual performance. They are then given a brief period of instruction to determine their learning potential (modifiability). Lastly, they are retested using an alternate form of the pre-test. Overall modifiability is based on the student’s change in score from pretest to posttest, what learning behaviors the student exhibited, and how much effort from the examiner was needed to teach the child. This focus on modifiability makes dynamic assessment especially useful with culturally and linguistically diverse populations, where differences in prior knowledge have historically confounded the accurate identification of language impairment.
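To make those ingredients concrete, here’s a small, purely illustrative sketch of the information that feeds an overall modifiability judgment. The field names and the 0–4 rating scales are assumptions for the example, not the scoring rubric of any published dynamic assessment.

```python
# Illustrative only: a minimal structure for the three pieces of information
# that feed an overall modifiability judgment. The field names and 0-4 scales
# are assumptions for this sketch, not a published scoring rubric.

from dataclasses import dataclass

@dataclass
class ModifiabilityRecord:
    pretest_score: int       # performance before teaching
    posttest_score: int      # performance on an alternate form, after teaching
    learning_behaviors: int  # e.g., attention/flexibility, rated 0 (poor) to 4 (strong)
    examiner_effort: int     # support needed to teach, rated 0 (minimal) to 4 (maximal)

    @property
    def score_change(self) -> int:
        return self.posttest_score - self.pretest_score

child = ModifiabilityRecord(pretest_score=5, posttest_score=11,
                            learning_behaviors=3, examiner_effort=1)
# Large gain, strong learning behaviors, little examiner effort: the profile
# of high modifiability described in the paragraph above.
print(child.score_change)  # 6
```

The point of keeping all three pieces is the one made above: a child who gains a lot with strong learning behaviors and little examiner effort looks very different from one whose score barely moves despite heavy support.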

Why use dynamic assessment of oral narratives?

  • Higher classification accuracy than most traditional standardized, norm-referenced assessments
  • Measures a child’s ability to learn rather than prior knowledge
  • Overcomes test biases against culturally and linguistically diverse populations
  • Takes less than 30 minutes to administer
  • Assesses multiple skills including story grammar, vocabulary, cohesion, and grammar in a functional context
  • Provides direction for intervention

The recent study by Petersen et al. (2017) investigated the classification accuracy of a dynamic assessment in identifying culturally and linguistically diverse children with and without language disorders. Forty-two Spanish–English bilingual children were given two 25-minute test-teach-retest narrative dynamic assessments. Missing story grammar elements and subordinating conjunctions (e.g., because, after) were taught during the teaching phase. Results showed that modifiability ratings (remember: that's learning potential) were able to identify children with and without language disorders with almost perfect accuracy after only 25 minutes.

The DYMOND (Dynamic Measure of Oral Narrative Discourse), a standardized dynamic assessment of oral narratives for school-age children based on this study, is currently being piloted. You can download the DYMOND for free here. By participating in this pilot initiative, you can help gather national norms for the assessment and have a free tool that will help accurately identify children with language impairment.

This review is written by Guest Authors: Whitney A. Mount, Ashlynn J. Stevens, Mikal A. Forseth, & Douglas B. Petersen. Thank you all for taking the time to share your research with us!

Petersen, D. B., Chanthongthip, H., Ukrainetz, T. A., Spencer, T. D., & Steeve, R. W. (2017). Dynamic assessment of narratives: Efficient, accurate identification of language impairment in bilingual students. Journal of Speech, Language, and Hearing Research, 60, 983–998.

AAC assessment and intervention for preschoolers with severe speech impairment

This review covers two research papers from the same research group, following the same students: the first paper on dynamic assessment with AAC users, and the second on intervention for AAC users.

In both studies, the participants were 10 three- and four-year-old children with receptive language within normal limits but severe speech impairment (< 50% intelligible). The children were provided an iPad with Proloquo2Go to use for AAC.

Study #1 (Dynamic Assessment):

Dynamic assessment “uses a teach-test approach”, as opposed to static assessment, which simply tests the child’s current skill set. The researchers state, “… using DA may enable clinicians to improve their ability to predict when children are ready to focus on early syntax when using AAC.”

For the DA procedure, the researchers assessed as much as they could of the following four targets:

  • agent-action-object (e.g. “Pig chase cow.”)
  • possessor-entity (e.g. “Pig plate.”)
  • entity-locative (e.g. “Pig under trash.”)
  • entity-attribute (e.g. “Pig is happy.”)

First, using graduated prompting, they provided the student with increasing support as needed, moving from “Tell me about this one.” ...to... “Look… Lion in car… now tell me about this one” (target = Pig under trash) ...to... “See, pig is under the trash. Now you tell me.” ...to... “Tell me pig is under the trash. Pig under trash.” Note that the only grammatical marker the children were required to produce during DA was “is,” in the entity-attribute sentences. The full set of markers (“IS, THE, possessive –‘s, and third person singular –s… were included as independent symbols”) was available on the device, but the children weren’t required to produce them within DA (that came later, in intervention). All targeted vocabulary was within the children’s receptive vocabulary; a full list of the vocabulary, plus pictures of how it was arranged and labeled within Proloquo2Go, is in the article appendices. Toys, puppets, and figurines were used to demonstrate the target sentences. Ten trials per target (e.g. 10 possessor–entity sentences) were administered.
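For readers who like seeing the prompting logic laid out, here’s a rough sketch of a least-to-most graduated prompting loop for a single trial. The prompt wording is paraphrased from the examples above, the child_attempts callback is a hypothetical stand-in for the child’s live response, and the scoring idea in the final comment is our assumption, not the authors’ procedure.

```python
# Illustrative sketch of a least-to-most graduated prompting loop for one trial,
# assuming the four support levels described above. `child_attempts` is a
# hypothetical placeholder for the child's actual response; in a real session
# the examiner judges this live.

PROMPT_LEVELS = [
    "General prompt: 'Tell me about this one.'",
    "Model with a different exemplar, then re-prompt.",
    "Direct model of the target, then 'Now you tell me.'",
    "Imitation prompt: 'Tell me: Pig under trash.'",
]

def run_trial(child_attempts, target: str) -> int:
    """Return the prompt level (1-4) at which the target was produced, or 0 if never."""
    for level, prompt in enumerate(PROMPT_LEVELS, start=1):
        response = child_attempts(prompt)  # hypothetical callback
        if response == target:
            return level                   # lower level = less support needed
    return 0                               # target not produced at any level

# Hypothetical demo: a child who produces the target only after a direct model.
def demo_child(prompt: str) -> str:
    return "Pig under trash" if "Now you tell me" in prompt else "..."

print(run_trial(demo_child, "Pig under trash"))  # 3
# Summing or averaging levels across the ten trials per target is one way to
# summarize performance (our assumption, not the authors' scoring).
```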

The researchers found that not only were the young children able to participate in the DA, but they even learned some expressive syntax types within the DA itself. There was some variability in which sentence structure types were difficult for individual children, however, emphasizing that “… a broad range of targets must be investigated before concluding that a child is not capable of creating rule-based utterances when using graphic symbols to communicate.” In other words, it’s not adequate to test just one or two short sentence types when trying to decide whether a child is ready to work on multi-word sentences.

Study #2 (Intervention):

The same 10 children (above) participated in intervention as well, working on the same four targets, with each intervention session focused on one of the four. Activities included:

concentrated modeling

  • Ten sentence pairs of one sentence type served as targets, and were “designed to highlight key features of the target”. For example, one pair was “Pig in car” vs. “Pig under car”. The clinician would teach this by first saying “Pig is under the car” while acting it out with toys, then providing augmented input on the child’s device. Next, the clinician would repeat the process with the contrasting sentence (Pig in car).

play (20 minutes)

  • After concentrated modeling, they switched to play-based instruction, which was more child-led, but still included adult instruction—“For example, for entity–locative, the examiner could make Cow hide her eyes, place Penguin under the trash can, and then ask the child to tell Cow where Penguin was (Penguin under trash).”
  • Features of the play session included “setting up opportunities for communication… providing spoken and aided models of the target using a range of exemplars… providing indirect and direct spoken prompts… assisting with message productions…”

Results showed that “the majority of the participants mastered the majority of the targets and did so quickly.” Possessor-entity sentences were the easiest; agent-action-object sentences were the most difficult. The researchers also found that students generalized the new syntactic structures to novel vocabulary.

A really interesting part of the study was that, “nine participants spontaneously used the possessive marker accurately at least once with no aided models provided…”. Only four of the ten students were explicitly taught the grammatical markers (IS, THE, possessive –‘s, and third person singular –s) and these students, “…required only one or two intervention sessions to demonstrate consistent use of the markers.”

Binger, C., Kent-Walsh, J., & King, M. (2017). Dynamic assessment for 3- and 4-year-old children who use augmentative and alternative communication: evaluating expressive syntax. Journal of Speech, Language, and Hearing Research. Advance online publication. doi: 10.1044/2017_JSLHR-L-15-0269

Binger, C., Kent-Walsh, J., King, M., & Mansfield, L. (2017). Early sentence productions of 3- and 4-year-old children who use augmentative and alternative communication. Journal of Speech, Language, and Hearing Research. Advance online publication. doi: 10.1044/2017_JSLHR-L-15-0408

And more...

  • Barton-Hulsey et al. present three case studies of dynamic assessment for an AAC device. As case studies, their results can’t be generalized to a broader population. However, the article presents clear and explicit methods for AAC evaluation and data collection, which may be worth clinical consideration.
  • Iarocci et al., in a study of 174 children with and without autism, found that, “… exposure to a second language is not associated with an adverse impact on the communication and cognitive skills of children with ASD.” The authors acknowledge some of the common concerns about bilingualism in low-language children, and review research on the benefits of bilingualism for these children.
  • Morgan et al. show that cleft palate is a risk factor for delayed language development, and that internationally adopted children with cleft palate are at an even greater risk of low language skills (presumably because of the interruption in language as they switch from L1 to a new primary L2). We’ve talked about the impact of international adoption on language before.
  • Tenenbaum et al. examined visual conditions that support word learning in typically developing children and those with autism spectrum disorder. They found that children with autism (ages pre-K through early elementary, with Preschool Language Scales (PLS) age-equivalent scores of at least 12 months, and producing at least single words) learned new object words best when the target object was held close to the speaker’s face while the word was produced (but without covering the mouth); this supported word learning better than drawing attention to the mouth alone or to the object alone.
  • Thurman et al. examined differences between the language skills of male children with fragile X syndrome and those with autism, and found that boys with fragile X have a relative strength in lexical skills compared to boys with autism.

 
Barton-Hulsey, A., Wegner, J., Brady, N.C., Bunce, B.H., & Sevcik, R.A. (2017). Comparing the effects of speech-generating device display organization on symbol comprehension and use by three children with developmental delays. American Journal of Speech-Language Pathology. Advance online publication. doi:10.1044/2016_AJSLP-15-0166.
 
Iarocci, G., Hutchison, S.M. & O’Toole, G.J. (2017). Second language exposure, functional communication, and executive function in children with and without autism spectrum disorder (ASD). Journal of Autism and Developmental Disorders. Advance online publication. doi: 10.1007/s10803-017-3103-7
 
Morgan, A.R., Bellucci, C.C., Coppersmith, J., Linde, S.B., Curtis, A., Albert, M., O'Gara, M.M., & Kapp-Simon, K. (2017). Language development in children with cleft palate with or without cleft lip adopted from non–English-speaking countries. American Journal of Speech–Language Pathology. Advance online publication. doi:10.1044/2016_AJSLP-16-0030.

Tenenbaum, E.J., Amso, D., Righi, G., & Sheinkopf, S.S. (2017). Attempting to “Increase intake from the input”: attention and word learning in children with autism. Journal of Autism and Developmental Disorders. Advance online publication. doi:10.1007/s10803-017-3098-0.

Thurman, A.J., McDuffie, A., Hagerman, R.J., Josol, C.K., & Abbeduto, L. (2017). Language skills of males with fragile X syndrome or nonsyndromic autism spectrum disorder. Journal of Autism and Developmental Disorders, 47(3), 728–743.

What we want from therapy—measuring outcomes

This literature review examines how we’re measuring speech–language outcomes in preschoolers. The authors looked at 214 studies of children from birth to age five, published between 2008 and 2015, and considered how the outcomes measured align with the ICF-CY (International Classification of Functioning, Disability, and Health for Children and Youth) framework.
Wait—what’s the ICF-CY? The ICF is a “framework for measuring health and disability,” and the ICF-CY is the pediatric version. It has been around since 2007 and is part of the ASHA Scope of Practice (tip: here are some examples of how to apply this framework to speech sound disorders and developmental language disorders). The ICF-CY takes into account Functioning and Disability (including Body Functions and Structures, Activities, and Participation) and Contextual Factors (including Environmental Factors and Personal Factors). Here’s an example of how Functioning and Disability could be taken into account for a child with cleft palate:

  • Structures: Is the cleft repaired?
  • Functions: Is the child able to differentiate oral from nasal airflow?
  • Activities: Is the child intelligible within everyday conversations?
  • Participation: Does the child initiate conversations with peers?

OK—back to the study. What the authors found is that our field measures outcomes with a heavy bias toward activities, followed by functions, with participation measured only minimally. We also tend to measure certain skills in certain ways; for example, “participation” tends to be captured with pragmatic measures. However, as the example above shows, a child doesn’t need a pragmatic disorder for their communication disorder to significantly impact participation. Aren’t we worried about how speech affects participation? And how language affects participation? That’s the point here: we’re in the habit of measuring things in certain ways, but this doesn’t align well with the ICF-CY, and may also not align with what the child and parent really want out of therapy.
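If you’re curious where your own goals cluster, one low-tech option is to tag each outcome measure you track with its ICF-CY component and tally them up. Here’s a toy sketch; the measures and their assignments are invented examples, not data from the review.

```python
# Toy example: tag each outcome measure by ICF-CY component and tally.
# The measures and their assignments below are invented for illustration,
# not taken from the scoping review.

from collections import Counter

outcome_measures = {
    "percent consonants correct": "Body Functions",
    "standardized language test score": "Activities",
    "intelligibility in everyday conversation": "Activities",
    "mean length of utterance in a language sample": "Activities",
    "initiates conversations with peers": "Participation",
}

tally = Counter(outcome_measures.values())
print(tally)
# e.g. Counter({'Activities': 3, 'Body Functions': 1, 'Participation': 1})
# A tally that piles up under Activities, with Participation barely
# represented, mirrors the imbalance the review describes.
```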
So, what should we do now? First, simply becoming familiar with the ICF-CY gets the ball rolling; you may quickly recognize opportunities to change how you’re measuring some clients’ outcomes. Second, the authors include entire tables of outcome measures already available to us; you can look through these to brainstorm options for your caseload.
 
Cunningham, B. J., Washington, K. N., Binns, A., Rolfe, K., Robertson, B., & Rosenbaum, P. (2017). Current methods of evaluating speech-language outcomes for preschoolers with communication disorders: A scoping review using the ICF-CY. Journal of Speech, Language, and Hearing Research, 60, 447–464.

Language skills of internationally adopted children

Children adopted internationally (CAI) go through a period of interrupted language acquisition, in which they must switch from a former language environment to a new one. Many of these children also spend time in institutions, with a low adult-to-child ratio and few older children to provide social language examples. The authors review literature showing that many of these children, though within normal limits, have delayed or slightly lower-than-average language skills. The current study’s findings were consistent with this: the internationally adopted four-year-olds “…had lower CELF-P:2 (Clinical Evaluation of Language Fundamentals–Preschool 2) core language scores for the expressive subtests than the U.S. nonadopted group,” but nonetheless “… performed largely within 1 SD on the core language scores…” On average, these children had “…lived with their adopted families for three years.”
 
The researchers, however, did more than just give the children a language test. They also examined the children’s ability to perform tasks that depend on both cognitive and linguistic skills, and found that the CAI had difficulty with false belief tasks. These are tasks in which you must recognize that another person’s perspective is different from yours, predict what the other person is thinking, and explain it (e.g. “What will your friend think is in the box?”). The children had the most difficulty with the more linguistically challenging tasks (e.g. answering “why” questions). The variables predictive of performance on these tasks were the CELF core language score and living with older siblings. This suggests that both the linguistic adjustment phase and the environment a child is adopted into may have an impact on sociolinguistic outcomes.


Hwa-Froelich, D. A., Matsuo, H., & Jacobs, K. (2016). False belief performance of children adopted internationally. American Journal of Speech-Language Pathology. Advance online publication. doi:10.1044/2016_AJSLP-15-0152.

New insight on the assessment of bilingual children

For Spanish–English bilingual children, how do you integrate and analyze data from both languages to determine whether a child has a language impairment? You likely have parent reports, teacher reports, standardized tests, language samples, and observations, but may not know what to prioritize from each.
This article presents two case studies pulled from a larger longitudinal study, and really does a good job of guiding clinicians through the research on this topic. Issues discussed include:

  • translating standardized tests
  • using standardized tests in which the student doesn’t match the test population
  • combining data from multiple tests
  • language dominance
  • current and cumulative language exposure
  • what to measure in Spanish vs. English language samples
  • ... and other topics with some surprising answers


See: Anaya, J. B., Peña, E. D., & Bedore, L. M. (2016). Where Spanish and English come together: A two-dimensional bilingual approach to clinical decision making. Perspectives of the ASHA Special Interest Groups, 1(1), 3–16.