Assessing language with diverse preschoolers? Go for dynamic assessment


Making the right call when assessing the language skills of children whose cultural or language backgrounds don’t match our own is hard. With our go-to assessment methods, we risk labeling normal language variation as a sign of disorder: standardized test norms may over-identify children from non-mainstream language backgrounds as having language impairment.

Enter dynamic assessment, which involves testing a child, providing teaching and support, and then retesting to see what the child can do with help. In a new study, Henderson et al. used dynamic assessment to assess the language skills of Navajo preschoolers with narrative retell tasks from the Predictive Early Assessment of Reading and Language (PEARL, from the same acronym aficionados who brought us the DYMOND).

Dynamic assessment takes longer than static (one-time) assessment. The PEARL accounts for this—you give the pretest, look at the score, and administer the teaching and retest only if the score falls below a cutoff. Henderson et al. found that the published cutoff score for the PEARL pretest didn’t work well for Navajo children; sensitivity and specificity were better with a cutoff of 7 rather than 9. Looking at the whole test, scores on the retest (following teaching) were even better at diagnosing children, and examiners’ “modifiability” ratings (how the child responded to teaching) diagnosed children with 100% accuracy. These findings suggest that the PEARL is a valid test for assessing language in children from non-mainstream language or cultural backgrounds.
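The two-stage flow above can be sketched in a few lines. To be clear, the function name and scoring here are illustrative only—this is not the actual PEARL protocol, just the screening logic it describes.

```python
# Hypothetical sketch of a two-stage dynamic assessment decision flow:
# every child gets the pretest; only children scoring below the cutoff
# go on to the teaching + retest phase. Not the actual PEARL scoring.

def needs_dynamic_phase(pretest_score, cutoff=7):
    """Return True if the child should receive the teaching and retest.

    Henderson et al. found a cutoff of 7 (rather than the published 9)
    gave better sensitivity and specificity for Navajo preschoolers.
    """
    return pretest_score < cutoff
```

For example, a child scoring 8 passes the screen under the cutoff of 7 but would be flagged for the full teach-and-retest phase under the original cutoff of 9—exactly the kind of shift that changes who gets identified.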


Henderson, D. E., Restrepo, M. A., & Aiken, L. S. (2018). Dynamic assessment of narratives among Navajo preschoolers. Journal of Speech, Language, and Hearing Research, 61(10), 2547–2560.

Dynamic assessment = Crystal ball for reading skills?

Helping kids become proficient readers is a big deal. Schools often screen children’s decoding skills (the ability to sound out words) to figure out who needs help. But what do screening results mean for children’s future reading ability? Petersen et al. followed a diverse group of children from kindergarten to fifth grade to find out.

The authors administered a quick dynamic assessment task at the beginning of kindergarten. Children were asked to decode four nonsense words, taught how to decode them, and asked to decode them again. Examiners scored children’s accuracy and how easily they responded to teaching. The task took only three minutes to administer on average. (The task is described more in this article, and it’s similar to the decoding tasks on the PEARL.) The children’s schools also screened their ability to name letters and sounds at the beginning of kindergarten and their oral reading fluency at the end of each year.


Performance on the dynamic task in kindergarten classified children into average vs. struggling reader categories in fifth grade with 75–80% accuracy. The 3-minute dynamic task was better at predicting reading skill than the traditional static (one-time) screening, especially for the Hispanic students in the sample, many of whom were English language learners.

The task wasn’t perfect at predicting fifth grade reading skill, but it was pretty good, especially considering how fast it was to administer. These findings suggest that, compared to the static measures, dynamic assessment of decoding could save a ton of intervention time. Dynamic tasks are less likely to pick up children who just lack reading exposure, saving us time for working with the kids who will continue to need help with reading (AKA, making RTI less of a massive undertaking).


Petersen, D. B., Gragg, S. L., & Spencer, T. D. (2018). Predicting reading problems 6 years into the future: Dynamic assessment reduces bias and increases classification accuracy. Language, Speech, and Hearing Services in the Schools, 49(4), 875–888.

GUEST POST: On the DYMOND (Dynamic Measure of Oral Narrative Discourse)

Have you been avoiding dynamic assessment because it is too complicated and time-consuming? A new study by Petersen et al. (2017) outlines an efficient, accurate, and standardized way to identify language impairment in school-aged children using a narrative dynamic assessment.

What is dynamic assessment?

Dynamic assessment is a method used to measure a student’s learning potential rather than their current knowledge. A test-teach-retest approach is often used. The child is given an initial test to determine their current individual performance. They are then given a brief period of instruction to determine their learning potential (modifiability). Lastly, they are retested using an alternate form of the pre-test. Overall modifiability is based on the student’s change in score from pretest to posttest, what learning behaviors the student exhibited, and how much effort from the examiner was needed to teach the child. This focus on modifiability makes dynamic assessment especially useful with culturally and linguistically diverse populations, where differences in prior knowledge have historically confounded the accurate identification of language impairment.
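As a rough illustration of how those three ingredients—change score, learning behaviors, and examiner effort—combine, consider the sketch below. The rating scales and threshold are invented for the example, not drawn from any published scoring scheme.

```python
# Illustrative sketch of test-teach-retest modifiability scoring.
# The scales and the "high modifiability" rule are hypothetical.

def modifiability_summary(pretest, posttest, examiner_effort, learning_behaviors):
    """Summarize a test-teach-retest session.

    examiner_effort: 1 (minimal support needed) to 5 (maximal support)
    learning_behaviors: 1 (poor attention/transfer) to 5 (strong)
    """
    change = posttest - pretest  # gain from the brief teaching phase
    # A highly "modifiable" child shows a real gain, strong learning
    # behaviors, and needs little effort from the examiner.
    return {
        "change_score": change,
        "high_modifiability": (
            change > 0 and learning_behaviors >= 4 and examiner_effort <= 2
        ),
    }
```

The point of the sketch is that modifiability is a judgment about *how* the child learned, not just the size of the score change—a child with a modest gain but strong learning behaviors and minimal examiner support may still be rated highly modifiable.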

Why use dynamic assessment of oral narratives?

  • Higher classification accuracy than most traditional standardized, norm-referenced assessments
  • Measures a child’s ability to learn rather than prior knowledge
  • Overcomes test biases against culturally and linguistically diverse populations
  • Takes less than 30 minutes to administer
  • Assesses multiple skills including story grammar, vocabulary, cohesion, and grammar in a functional context
  • Provides direction for intervention

The recent study by Petersen et al. (2017) investigated the classification accuracy of a dynamic assessment in identifying culturally and linguistically diverse children with and without language disorders. Forty-two Spanish–English bilingual children were given two 25-minute test-teach-retest narrative dynamic assessments. Missing story grammar elements and subordinating conjunctions (e.g., because, after) were taught during the teaching phase. Results showed that modifiability ratings (remember—that's learning potential) identified children with and without language disorders with almost perfect accuracy after only 25 minutes.

The DYMOND (Dynamic Measure of Oral Narrative Discourse), a standardized dynamic assessment of oral narratives for school-age children that is based on this most recent study is currently being piloted. You can download the DYMOND for free here. By participating in this pilot initiative, you can help gather national norms for this assessment and have a free tool that will help accurately identify children with language impairment.

This review is written by Guest Authors: Whitney A. Mount, Ashlynn J. Stevens, Mikal A. Forseth, & Douglas B. Petersen. Thank you all for taking the time to share your research with us!

Petersen, D. B., Chanthongthip, H., Ukrainetz, T. A., Spencer, T. D., & Steeve, R. W. (2017). Dynamic assessment of narratives: Efficient, accurate identification of language impairment in bilingual students. Journal of Speech, Language, and Hearing Research, 60, 983–998.

AAC assessment and intervention for preschoolers with severe speech impairment

This review covers two research papers in one, from the same research group and measuring the same students: the first paper on dynamic assessment of AAC users, and the second on intervention for AAC users.

In both studies, the participants were 10 three- and four-year-old children with receptive language within normal limits but severe speech impairment (< 50% intelligible). The children were provided an iPad with Proloquo2Go to use for AAC.

Study #1 (Dynamic Assessment):

Dynamic assessment “uses a teach-test approach”, as opposed to static assessment, which simply tests the child’s current skill set. The researchers state, “… using DA may enable clinicians to improve their ability to predict when children are ready to focus on early syntax when using AAC.”

For the DA procedure, the researchers assessed as much as they could of the following four targets:

  • agent-action-object (e.g. “Pig chase cow.”)
  • possessor-entity (e.g. “Pig plate.”)
  • entity-locative (e.g. “Pig under trash.”)
  • entity-attribute (e.g. “Pig is happy.”)

First, using graduated prompting, they provided the student with increasing support as needed. For a target like Pig under trash, the sequence moved from “Tell me about this one,” to “Look… Lion in car… now tell me about this one,” to “See, pig is under the trash. Now you tell me,” to “Tell me pig is under the trash. Pig under trash.” Also, note that the only grammatical marker the children were required to use during DA was “is” in the entity-attribute sentences. All the others—“IS, THE, possessive –’s, and third person singular –s… were included as independent symbols,” but weren’t required to be produced by the children within DA (that came later, in intervention). All targeted vocabulary was within the children’s receptive vocabulary; a full list, plus pictures of how vocabulary was arranged and labeled within Proloquo2Go, is in the article appendices. Toys, puppets, and figurines were used to demonstrate the target sentences. Ten trials per target (e.g. 10 possessor–entity sentences) were administered.
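The graduated prompting procedure amounts to a loop over increasing levels of support until the child produces the target. The sketch below is a hypothetical illustration: the prompts are paraphrased from the example above, and the response check is left abstract rather than taken from the study’s protocol.

```python
# Hypothetical sketch of graduated prompting: support increases level by
# level until the child produces the target. Prompts paraphrased from
# the "Pig under trash" example; the response check is illustrative.

PROMPT_LEVELS = [
    "Tell me about this one.",                              # least support
    "Look... Lion in car... now tell me about this one.",   # model with a different item
    "See, pig is under the trash. Now you tell me.",        # model of the target
    "Tell me pig is under the trash. Pig under trash.",     # most support (direct imitation)
]

def administer_trial(child_responds_at_level):
    """Return the prompt level (0 = least support) at which the child
    produced the target, or None if no prompt elicited it."""
    for level, prompt in enumerate(PROMPT_LEVELS):
        if child_responds_at_level(level):
            return level
    return None
```

Recording the level at which each trial succeeds is what lets the examiner quantify how much support a child needed across the ten trials per target.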

The researchers found that the young children were not only able to participate in the DA, but also learned some expressive syntax types within it. There was variability, however, in which sentence structure types were difficult for individual children, emphasizing that “… a broad range of targets must be investigated before concluding that a child is not capable of creating rule-based utterances when using graphic symbols to communicate.” Thus, it’s not adequate to test just one or two short sentence types when deciding whether a child is ready to work on multi-word sentences.

Study #2 (Intervention):

The same 10 children (above) participated in intervention as well, using the same four targets, with each intervention session focused on one of the four. Activities included:

concentrated modeling

  • Ten sentence pairs of one sentence type served as targets, and were “designed to highlight key features of the target”. For example, one pair was “Pig in car” vs. “Pig under car”. The clinician would teach this by first saying Pig is under the car, while acting it out with toys, then providing augmented input on the child’s device. Next, the clinician would repeat the process with the contrasted sentence (Pig under car).

play (20 minutes)

  • After concentrated modeling, they switched to play-based instruction, which was more child-led, but still included adult instruction—“For example, for entity–locative, the examiner could make Cow hide her eyes, place Penguin under the trash can, and then ask the child to tell Cow where Penguin was (Penguin under trash).”
  • Features of the play session included “setting up opportunities for communication… providing spoken and aided models of the target using a range of exemplars… providing indirect and direct spoken prompts… assisting with message productions…”

Results showed that “the majority of the participants mastered the majority of the targets and did so quickly.” Possessor-entity sentences were the easiest; agent-action-object were the most difficult. The researchers also found that students generalized the new syntactic structures to novel vocabulary as well.

A really interesting part of the study was that, “nine participants spontaneously used the possessive marker accurately at least once with no aided models provided…”. Only four of the ten students were explicitly taught the grammatical markers (IS, THE, possessive –‘s, and third person singular –s) and these students, “…required only one or two intervention sessions to demonstrate consistent use of the markers.”

Binger, C., Kent-Walsh, J., & King, M. (2017). Dynamic assessment for 3- and 4-year-old children who use augmentative and alternative communication: evaluating expressive syntax. Journal of Speech, Language, and Hearing Research. Advance online publication. doi: 10.1044/2017_JSLHR-L-15-0269

Binger, C., Kent-Walsh, J., King, M., & Mansfield, L. (2017). Early sentence productions of 3- and 4-year-old children who use augmentative and alternative communication. Journal of Speech, Language, and Hearing Research. Advance online publication. doi: 10.1044/2017_JSLHR-L-15-0408

What we want from therapy—measuring outcomes

This literature review examines how we’re measuring speech–language outcomes in preschoolers. The authors looked at 214 studies of children ages birth to five, published between 2008 and 2015, and considered how the outcomes measured align with the ICF-CY (International Classification of Functioning, Disability, and Health for Children and Youth) framework.
Wait—what’s the ICF-CY? The ICF is a “framework for measuring health and disability,” and the ICF-CY is the pediatric version. It has been around since 2007 and is part of the ASHA Scope of Practice (tip: here are some examples of how to consider this framework for speech sound disorders and developmental language disorders). The ICF-CY takes into account Functioning and Disability (including Body Functions and Structures, Activities, and Participation) and Contextual Factors (including Environmental Factors and Personal Factors). Here’s an example of how Functioning and Disability could be taken into account for a child with cleft palate:

  • Structures: Is the cleft repaired?
  • Functions: Is the child able to differentiate oral from nasal airflow?
  • Activities: Is the child intelligible within everyday conversations?
  • Participation: Does the child initiate conversations with peers?

OK—back to the study. What they found is that our field measures outcomes with a pretty heavy bias toward activities, followed by functions, and only very minimally participation. We also tend to measure certain skills in certain ways. For example, “participation” tends to be captured with pragmatic measures. However, as the cleft palate example above shows, you don’t need to have a pragmatic disorder for your communication disorder to significantly impact participation. Aren’t we worried about how speech affects participation? And how language affects participation? And that’s the point here: we’re in the habit of measuring things certain ways, but this doesn’t exactly align with the ICF-CY, and it may also not align with what the child and parent really want out of therapy.
So, what should we do now? First, simply becoming familiar with ICF-CY gets the ball rolling. You may quickly recognize some opportunities to change how you're measuring some clients' outcomes. Then, the authors also include entire tables of outcomes measures already available to us. You may simply look through these to brainstorm options for your caseload.
Cunningham, B. J., Washington, K. N., Binns, A., Rolfe, K., Robertson, B., & Rosenbaum, P. (2017). Current methods of evaluating speech-language outcomes for preschoolers with communication disorders: A scoping review using the ICF-CY. Journal of Speech, Language, and Hearing Research, 60, 447–464.