The kids missing from our caseloads


Kids with developmental language disorder (DLD) can fly under the radar for years without anyone—including their parents—flagging their language weaknesses. Hendricks et al. looked into whether parents of first and second graders (6–9-year-olds) with DLD were concerned about their children’s language skills and whether a quick, whole-class screening could accurately distinguish children with and without DLD.

For the language screening, children heard 16 sentences from the Test for Reception of Grammar (TROG-2) and circled picture responses in a booklet. This method meant that it only took 15–20 minutes to screen each class of kids. Children also completed additional language and reading testing, and their parents filled out a questionnaire.

The researchers found that parents of children with DLD rarely reported concerns about their language skills—although parents of children with DLD were twice as likely to have concerns if their children struggled with reading, too. Also, the quick, whole-class screener showed promise for identifying DLD. At the best cutoff score, 76% of children with DLD were correctly flagged, while 25% of children without DLD were incorrectly flagged. While these values aren’t quite at an acceptable level, the trade-off of spending 20 minutes or less to screen an entire class of children means that the screener warrants more research.
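If you like to see the arithmetic behind numbers like these, here is a minimal sketch of how sensitivity and specificity fall out of a 2x2 screening table. The counts below are hypothetical (chosen only to reproduce the percentages above), not the study’s raw data.

```python
def screening_accuracy(true_pos, false_neg, false_pos, true_neg):
    """Return (sensitivity, specificity) from a 2x2 screening table."""
    sensitivity = true_pos / (true_pos + false_neg)  # children with DLD correctly flagged
    specificity = true_neg / (true_neg + false_pos)  # children without DLD correctly passed
    return sensitivity, specificity

# Hypothetical counts: 19 of 25 children with DLD flagged (76% sensitivity);
# 25 of 100 children without DLD incorrectly flagged, so 75 correctly
# passed (75% specificity).
sens, spec = screening_accuracy(true_pos=19, false_neg=6, false_pos=25, true_neg=75)
```

Note that the 25% false-positive rate reported above is just one minus specificity; diagnostic studies can report either.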

In summary, if we wait for parents of children with DLD to raise concerns about their language, we might be waiting too long, and parents of children with DLD and average reading skills are especially unlikely to notice that anything is wrong. Screening all children’s language could help identify them sooner; fortunately, efficient screeners show promise!


Hendricks, A. E., Adlof, S. M., Alonzo, C. N., Fox, A. B., & Hogan, T. P. (2019). Identifying children at risk for developmental language disorder using a brief, whole-classroom screen. Journal of Speech, Language, and Hearing Research. doi:10.1044/2018_JSLHR-L-18-0093

Girls vs. Boys: Communication differences in autism


If you work with students with autism, chances are you’ve noticed some communication differences between the boys and girls on your caseload. But how do you quantify these differences? Do they impact treatment? Are they even real?

We’ve touched on this topic before, but there isn’t much research on it at the moment. This preliminary study by Sturrock et al. takes a deeper dive into examining the language and communication profiles of females and males with autism.

The study explored the language and communication skills of 9–10-year-old children with ASD and IQ scores in the average range*, compared to age- and gender-matched peers with typical development (TD). Within both groups, female and male performance was examined separately. Note that each of the four groups was relatively small (13 children per group). Overall, though, the researchers found some surprising (and not so surprising) differences among the groups.

The ASD group as a whole scored about the same as the TD group on measures of expressive and receptive language. However, the authors did see a subtle deficit in the ASD group when it came to narrative language tasks (an issue we’ve discussed before).

But what about those gender-related differences? Well, it turns out that within the ASD group, females outperformed males in pragmatic language and semantic language tasks. However, females with ASD still lagged behind matched females with TD. Another interesting difference? Girls in general consistently scored better than boys on “language of emotion” tasks (like listing as many feeling/emotion words as possible in one minute).

So what do we do with these preliminary findings? Primarily, this study can help you consider potential areas of strength and weakness to look out for during evaluation and treatment of children with ASD. Additionally, the authors make the case that by increasing our awareness of the female ASD profile (historically under-diagnosed and misdiagnosed), we may be able to help these girls get identified and gain access to services sooner rather than later.

*The authors refer to this as High-Functioning Autism.


Sturrock, A., Yau, N., Freed, J., & Adams, C. (2019). Speaking the same language? A preliminary investigation comparing the language and communication skills of females and males with High-Functioning Autism. Journal of Autism and Developmental Disorders. doi:10.1007/s10803-019-03920-6

And more...

  • Accardo and colleagues provide an overview of effective writing interventions for school-age children with ASD. Most interventions took place in the classroom and used mixed approaches, combining “ingredients” like graphic organizers, video modeling, and constant time delay—a prompting strategy borrowed from ABA. Within the review, Tables 1 and 2 give an idea of what each one looked like, so check those out.

  • Baker & Blacher assessed behavior and social skills in 187 13-year-olds with ASD, intellectual disabilities (ID), or both. They found that having ID along with ASD was not associated with more behavior problems or less developed social skills as compared with ASD only.

  • Cerdán et al. found that eighth graders who had poor comprehension skills correctly answered reading comprehension questions more often when the question was followed by a rephrased, simplified statement telling them exactly what they needed to do.

  • Curran et al. found that preschool-aged children who are DHH and receive remote microphone systems in their homes have significantly better discourse skills (but no better vocabulary or syntax skills) than otherwise-matched children who don’t get those systems.

  • Facon & Magis found that language development, particularly vocabulary and syntax comprehension, does not plateau prematurely in people with Down syndrome relative to people with other forms of intellectual disability. Language skills continue to show growth in both populations into early adulthood. (We’ve previously reviewed specific interventions that have resulted in language gains among older children and teens with Down syndrome.)

  • Hu et al. suggest that computer-assisted instruction (CAI) can improve matching skills in school-age children with autism and other developmental disabilities. Although techy and exciting, CAI on its own isn’t enough—evidence-based instructional strategies like prompting and reinforcement have to be programmed in, too. This CAI used discrete trial training, and was more efficient (fewer prompts and less therapy time were needed for mastery!) than a traditional, teacher-implemented approach with flashcards.

  • Lim et al. found that the literacy instruction program MULTILIT was effective with school-age children with Down syndrome. MULTILIT combines phonics and sight word recognition instruction, geared toward students who are “Making Up Lost Time in Literacy” (MULTILIT; get it?). The program was implemented 1:1 for 12 weeks, and the students made gains in phonological awareness, word reading, and spelling. MULTILIT has been investigated by the developers, but this is the first time it’s been studied by other researchers—and with kids with Down syndrome in particular. Note: This article wasn’t fully reviewed because the training (provided only in Australia) is not available to the majority of our readers.

  • Muncy et al. surveyed SLPs and school psychologists and found that, in general, these professionals are underprepared to assess and treat children with hearing loss and other, co-occurring disabilities, and that they lack confidence in this area. Participants reported many barriers to valuable collaboration with other professionals, like audiologists (hint: there aren’t enough of them!), and that they want more training in this area.

  • Schlosser et al. found that 3- to 7-year-old children with ASD accurately identified more animated symbols than static symbols. The animated symbols represented verbs; for example, depicting a person turning around versus a still line drawing of “turn around.” It makes sense to see action verbs—well—in action; however, the researchers acknowledge we can’t make grid displays full of animated symbols, since that could be overstimulating. The next step is to test the effects of animation on symbol identification with other, more well-known symbol sets like PCS.

  • Scott et al. used science books and a signed dialogic reading program with an 11-year-old Deaf student, and found increases in the student’s ability to answer comprehension questions.

  • St John et al. found that 92% of their sample of children and adolescents with Klinefelter syndrome also had a communication impairment. Pragmatic, language, and literacy impairments were common, and the researchers described some speech impairments as well. Establishing a comprehensive communication profile for this group is important because we’re still learning about Klinefelter syndrome, which is caused by one or more extra X chromosomes.

  • Updates on PEERS, a structured social skills program for adolescents and young adults we’ve discussed before! Wyman & Claro used the school-based version of PEERS both with adolescents with ASD (the target audience) and those with intellectual disabilities (ID; an overlooked group in social skills research who may benefit nonetheless). Both groups of students improved their social knowledge, and the ID group (but not the ASD group) increased social interactions with friends outside of school. Meanwhile, Matthews et al. found that speeding up the traditional, clinic-based PEERS program, by offering it in 7 weeks (twice weekly sessions) instead of 14, didn’t reduce its effectiveness.

Accardo, A. L., Finnegan, E. G., Kuder, S. J., & Bomgardner, E. M. (2019). Writing Interventions for Individuals with Autism Spectrum Disorder: A Research Synthesis. Journal of autism and developmental disorders, 1-19. doi:10.1007/s10803-019-03955-9

Baker, B. L., & Blacher, J. (2019). Brief Report: Behavior Disorders and Social Skills in Adolescents with Autism Spectrum Disorder: Does IQ Matter? Journal of Autism and Developmental Disorders. doi:10.1007/s10803-019-03954-w

Cerdán, R., Pérez, A., Vidal-Abarca, E., & Rouet, J. F. (2019). To answer questions from text, one has to understand what the question is asking: Differential effects of question aids as a function of comprehension skill. Reading and Writing. doi:10.1007/s11145-019-09943-w

Curran, M., Walker, E. A., Roush, P., & Spratford, M. (2019). Using Propensity Score Matching to Address Clinical Questions: The Impact of Remote Microphone Systems on Language Outcomes in Children Who Are Hard of Hearing. Journal of Speech, Language, and Hearing Research. doi:10.1044/2018_JSLHR-L-ASTM-18-0238

Facon, B., & Magis, D. (2019). Does the development of syntax comprehension show a premature asymptote among persons with Down Syndrome? A cross-sectional analysis. American Journal on Intellectual and Developmental Disabilities. doi: 10.1352/1944-7558-124.2.131

Hu, X., Lee, G. T., Tsai, Y., Yang, Y., & Cai, S. (2019). Comparing computer-assisted and teacher-implemented visual matching instruction for children with ASD and/or other DD. Journal of Autism and Developmental Disorders. doi:10.1007/s10803-019-03978-2

Lim, L., Arciuli, J., Munro, N., & Cupples, L. (2019). Using the MULTILIT literacy instruction program with children who have Down syndrome. Reading and Writing. doi:10.1007/s11145-019-09945-8

Matthews, N. L., Laflin, J., Orr, B. C., Warriner, K., DeCarlo, M., & Smith, C. J. (2019). Brief Report: Effectiveness of an Accelerated Version of the PEERS® Social Skills Intervention for Adolescents. Journal of Autism and Developmental Disorders. doi:10.1007/s10803-019-03939-9

Muncy, M. P., Yoho, S. E., & McClain, M. B. (2019). Confidence of School-Based Speech-Language Pathologists and School Psychologists in Assessing Students With Hearing Loss and Other Co-Occurring Disabilities. Language, Speech, and Hearing Services in Schools. doi:10.1044/2018_LSHSS-18-0091

Schlosser, R. W., Brock, K. L., Koul, R., Shane, H., & Flynn, S. (2019). Does animation facilitate understanding of graphic symbols representing verbs in children with autism spectrum disorder? Journal of Speech, Language, and Hearing Research. doi:10.1044/2018_JSLHR-L-18-0243

Scott, J. A., & Hansen, S. G. (2019). Comprehending science writing: The promise of dialogic reading for supporting upper elementary deaf students. Communication Disorders Quarterly. doi:10.1177/1525740119838253

St John, M., Ponchard, C., van Reyk, O., Mei, C., Pigdon, L., Amor, D. J., & Morgan, A. T. (2019). Speech and language in children with Klinefelter syndrome. Journal of Communication Disorders. doi:10.1016/j.jcomdis.2019.02.003 

Wyman, J., & Claro, A. (2019). The UCLA PEERS School-Based Program: Treatment Outcomes for Improving Social Functioning in Adolescents and Young Adults with Autism Spectrum Disorder and Those with Cognitive Deficits. Journal of Autism and Developmental Disorders. doi:10.1007/s10803-019-03943-z

SUGAR update: can it diagnose DLD?

Remember SUGAR? It’s the new, alternative language sample analysis protocol meant to work within the realities of a busy SLP’s workload. It’s been a while, so here’s a quick recap: SUGAR involves calculating four metrics on a 50-utterance sample in which you transcribe only the child’s utterances:

  1. Mean length of utterance-SUGAR (MLUS)*

  2. Total number of words (TNW)

  3. Clauses per sentence (CPS)

  4. Words per sentence (WPS) 

For specifics and examples, check out the complete procedures (including videos) on their website.
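To make the arithmetic behind the four metrics concrete, here’s a minimal sketch in Python. It assumes you’ve already segmented the sample and counted words, morphemes, and clauses per the SUGAR conventions; the function name and data shape are ours for illustration, not part of the protocol.

```python
def sugar_metrics(utterances):
    """Compute TNW, MLUS, WPS, and CPS from a transcribed sample.

    utterances: list of dicts with 'words' and 'morphemes' counts;
    sentences additionally carry a 'clauses' count (fragments omit it).
    """
    total_words = sum(u["words"] for u in utterances)
    mlus = sum(u["morphemes"] for u in utterances) / len(utterances)
    sentences = [u for u in utterances if "clauses" in u]
    wps = sum(u["words"] for u in sentences) / len(sentences)
    cps = sum(u["clauses"] for u in sentences) / len(sentences)
    return {"TNW": total_words, "MLUS": mlus, "WPS": wps, "CPS": cps}

# Toy 3-utterance sample (a real SUGAR sample is 50 utterances):
sample = [
    {"words": 3, "morphemes": 4, "clauses": 1},  # e.g., "doggie is running"
    {"words": 5, "morphemes": 6, "clauses": 2},  # sentence with 2 clauses
    {"words": 2, "morphemes": 2},                # fragment, not a sentence
]
metrics = sugar_metrics(sample)
```

With a real 50-utterance sample, you’d compare the resulting values against the cutoffs reported in the paper.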

While the creators of SUGAR have provided some support for its validity, the diagnostic accuracy of the four measures hasn’t been tested—until now! In this new study, the authors recruited 36 3- to 7-year-old children with DLD (currently receiving or referred to services) and 206 with typical language, and used the SUGAR protocol to sample their language. All four measures showed acceptable sensitivity and specificity (above 80%), using research-based cutoff scores (see the paper for specifics on cutoffs for each measure). The most accurate classification, according to the authors, was achieved with a combination of MLUS and CPS.


One of SUGAR’s big selling points is that it’s quick (like, 20 minutes quick), at least for kids with typical language. Did that still hold for the children with DLD? Actually, in this study they took less time to provide a 50-utterance sample than their typical peers. Bonus!

Language sampling can be daunting for the full-caseload SLP, but we love that research like this is identifying promising LSA measures that have high diagnostic accuracy (higher, we might add, than many commercially available tests), while addressing our time and resource barriers.

An important note: there are many methodological differences between SUGAR and other LSA procedures, and SUGAR has not been without controversy. We’ll be on the lookout for more research on SUGAR’s diagnostic potential, or comparing SUGAR to more traditional protocols, to help us really understand the pros and cons of the different LSA methods.

*When calculating MLUS, derivational morphemes (-tion) are counted separately and catenatives (hafta, wanna) count as two morphemes.


Pavelko, S. L., & Owens Jr, R. E. (2019). Diagnostic Accuracy of the Sampling Utterances and Grammatical Analysis Revised (SUGAR) Measures for Identifying Children With Language Impairment. Language, Speech, and Hearing Services in Schools. doi:10.1044/2018_LSHSS-18-0050

On intelligibility: why use it, and options for measurement

It’s been suggested that we use intelligibility as part of comprehensive speech assessment and as a measure of treatment outcomes. Why? Well, because intelligibility is kind of the point of speech therapy in the first place, right? Also, intelligibility can pick up on phonological changes that other measures (like percent consonants correct, PCC) can’t.

So which intelligibility measures are we supposed to be using, exactly? Or, more appropriately—what are our options?

First, there are many ways to measure intelligibility. We can use rating scales, single-word measures, or connected speech; and raters may include the clinician, family, peers, or unfamiliar listeners. Each of these has its own pros and cons in terms of reliability, validity, and compatibility with clinical practice. But the gold standard has been to calculate the percent of words understood, by unfamiliar listeners, in a connected speech sample (Gordon-Brannan & Hodson, 2010).
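The arithmetic behind that gold standard is simple enough to sketch; the listener glosses below are invented for illustration.

```python
def percent_intelligible(words_understood, total_words):
    """Percent of words an unfamiliar listener correctly glossed."""
    return 100 * words_understood / total_words

# Three unfamiliar listeners each orthographically transcribe the same
# 100-word connected speech sample; average across listeners.
scores = [percent_intelligible(w, 100) for w in (72, 68, 75)]
average_intelligibility = sum(scores) / len(scores)
```

The time cost is in collecting the sample and the listener glosses, not the math, which is exactly the burden discussed next.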

And while speech samples rated by a few unfamiliar listeners may be ideal, that carries a time burden for clinicians. Further, we also really want data on how the child is functioning in his or her everyday life. These considerations are what make the Intelligibility in Context Scale (ICS) particularly enticing. The ICS has been developed and refined over the past several years through (ongoing) research by McLeod and colleagues.

What is the ICS? It’s a brief, 7-item rating scale, completed by the parents of preschool and school-aged children. It can supplement other clinical measures for a nice look at functional speech. The scale can be found here (also on the last page of this article). Additional things to know about it:


Multilingual populations: It’s been translated into 60 languages (free, online!), and being multilingual doesn’t affect the scores (McLeod et al., 2015). It’s recommended that you use a separate sheet for each language the child speaks.

Screening: Use for preschool screening would be appropriate, especially as additional normative data are collected by future research. For now, this article can help you identify appropriate scores for your environment. You’ll extrapolate to your clinical population by looking at the scores they found in their sample of 4- and 5-year-olds (see Table 3 of the study). Do keep in mind the limitations of their study (read the Limitations section). But basically, the scores in this study are relatively conservative, so children whose scores fall below those reported in the 2015 study are generally likely to require further speech evaluation.

Psychometric properties: You can find this data throughout several of their articles; in particular, this one, which provides support for the ICS as a valid and reliable measurement of preschool children’s intelligibility.

Though we started by looking at the McLeod et al., 2015 paper, research for this Throwback review ended up sending us toward several papers on intelligibility, linked out above. Enjoy!

Iconicity of AAC symbols—Does it matter for learning?

If you work in AAC, you’ve encountered the AAC symbol hierarchy. You know—the idea that some symbols, like photographs, may be easier for kids to learn because they are more iconic. There’s a lot of chatter out there about this concept. Does a hierarchy exist? Is it just a myth? Guess what—the answer’s not so straightforward.

In this study, 13 school-aged students with both developmental and language delays participated in an observational symbol-learning task on the computer. They were shown 6 “iconic” Blissymbols and 6 “arbitrary” lexigrams. The Blissymbols looked like their referents (the one for clock looked like a clock), while the lexigrams had no relationship to their referents.

The task was simple: the students touched the symbols on the screen and a color photograph of the corresponding vocabulary popped up. The students did this repeatedly for 30 minutes, for a maximum of 12 sessions, and were then tested for their symbol-learning.

Turns out there was a very small advantage for the iconic symbols (they learned one more symbol), but only when the students knew the vocabulary beforehand. So if a student knew the concept DOG, they were a bit more likely to learn the iconic symbol for DOG, rather than the arbitrary symbol. 


But what if students didn’t know the vocabulary (an oh-so-common occurrence)? There was no difference in the students’ ability to learn an iconic symbol versus an arbitrary symbol when the vocabulary was previously unknown. So if a student didn’t know the concept GORILLA, they were just as likely to learn the iconic symbol as the arbitrary symbol.

This is not a black-and-white situation! Yes, iconic symbols may have a slight advantage in some situations. But—if you’re teaching new vocabulary, it’s probably not worth getting hung up on iconicity, since how closely a symbol looks like its referent doesn’t seem to make or break the learning process.


Sevcik, R. A., Barton-Hulsey, A., Romski, M., & Hyatt Fonseca, A. (2018). Visual-graphic symbol acquisition in school age children with developmental and language delays. Augmentative and Alternative Communication, 34(4), 265–275.

And more...

  • Chester et al. enrolled school-aged children with ASD in group social skills training that included play (unstructured or semi-structured) for 8 weeks. They found that participants gained social skills (as rated by parents, teachers, and the children themselves) compared to waiting controls.

  • Conlon et al. looked at narratives (via the ERNNI) produced by 8-year-old boys and girls with ASD and average nonverbal intelligence. While we know that children with ASD often struggle with narratives in general, there may be important gender-related differences. This study found that girls’ stories were more complete, included more information about characters’ intentions, and were easier to follow (i.e. they had better referencing).

  • Joseph used word boxes (a low-tech method using drawn rectangles and letter tiles) to teach sound segmentation, word identification, and spelling skills to three third graders with autism, and found that all children improved on sound segmentation and word ID and two children improved on spelling.

  • Montallana et al. studied inter-rater reliability of the VB-MAPP Milestones and Barriers assessments. The VB-MAPP is commonly used to assess and plan intervention for children with ASD, but we haven’t known much about its psychometrics. While the milestones section had largely moderate to good reliability, agreement between raters on barriers was poor to moderate.

  • Thirumanickam et al. found that a video-based modeling intervention was effective in increasing conversational turn-taking in a small number of adolescents with ASD who used AAC—BUT, only when provided with additional instruction (least-to-most prompting). They stated that for students with ASD, some level of prompting is likely required to engage in video-based interventions.


Chester, M., Richdale, A. L., & McGillivray, J. (2019). Group-Based Social Skills Training with Play for Children on the Autism Spectrum. Journal of Autism and Developmental Disorders. Advance online publication. doi:10.1007/s10803-019-03892-7

Conlon, O., Volden, J., Smith, I. M., Duku, E., Zwaigenbaum, L., Waddell, C., … Pathways in ASD Study Team. (2019). Gender Differences in Pragmatic Communication in School-Aged Children with Autism Spectrum Disorder (ASD). Journal of Autism and Developmental Disorders. Advance online publication. doi:10.1007/s10803-018-03873-2

Joseph, L. M. (2018). Effects of word boxes on phoneme segmentation, word identification, and spelling for a sample of children with autism. Child Language Teaching and Therapy, 34(3), 303–317.

Montallana, K. L., Gard, B. M., Lotfizadeh, A. D., & Poling, A. (2019). Inter-Rater Agreement for the Milestones and Barriers Assessments of the Verbal Behavior Milestones Assessment and Placement Program (VB-MAPP). Journal of Autism and Developmental Disorders. Advance online publication. doi:10.1007/s10803-019-03879-4

Thirumanickam, A., Raghavendra, P., McMillan, J. M., & van Steenbrugge, W. (2018). Effectiveness of video-based modelling to facilitate conversational turn taking of adolescents with autism spectrum disorder who use AAC. Augmentative and Alternative Communication, 34(4), 311–322.

Measuring the earliest forms of communication

As you may have realized (with frustration!) by now, we have limited options for evaluating the expressive communication skills of children who are minimally verbal. Enter: the Communication Complexity Scale (CCS), designed to measure just that. Prior papers have described the development of the CCS and determined its validity and reliability, but in this study, we get to see it in action with a peer-mediated intervention.

First, a little bit about the tool. It’s a coding scale—not a standardized assessment—that can be used during observations. Because prelinguistic communication skills often take time to develop with this population, this tool helps us think about all the incremental steps along the way and accounts for the variety of communicative modes the children might use. It’s a 12-point scale spanning pre-intentional behaviors up through symbolic communication.


The researchers found that the CCS could measure improvement in overall communication complexity and behavior regulation for preschoolers with autism after a peer-mediated intervention (the same one we reviewed here!).

So far in the research, the CCS has only been used during structured tasks meant to elicit communicative responses (see the supplemental material), such as holding a clear bag with toys where the child can see it, but can’t access it independently. We know it's crucial to observe our students in natural communication opportunities, though, so we'd have to be a little flexible in using the CCS during unstructured observations. The scale could definitely be useful when describing communication behaviors during evaluations or when monitoring progress. Wouldn’t it be much more helpful to say “The child consistently stopped moving (i.e. changed her behavior) in response to the wind-up toy stopping” instead of “The child was not observed to demonstrate joint attention”? Using the CCS, we have new ways of describing those “small” behaviors that really aren’t small at all!

NOTE: This study crosses over our Early Intervention vs. Preschool cut-offs, with kids from 2 to 5 years old. So for those of you who also read the Early Intervention section, we’ll publish this there next month! Just giving you the heads-up so you don’t feel like it’s Groundhog Day :)

Find links to the scale and score sheets here.

Thiemann-Bourque, K. S., Brady, N., & Hoffman, L. (2018). Application of the communication complexity scale in peer and adult assessment contexts for preschoolers with autism spectrum disorders. American Journal of Speech-Language Pathology. doi:10.1044/2018_AJSLP-18-0054

A one–two punch for assessing young Spanish–English learners

Do you serve pre-K or kindergarten-aged kids? Are some/lots/all of them from Hispanic backgrounds and learning Spanish AND English? Mandatory reading right here, friends!

So—a major issue for young, dual-language learners? Appropriate language assessments. We talk about it a lot (plus here, here, here, and here, to name a few). In this new study, the authors compared a handful of assessments to see which could most accurately classify 4- and 5-year-olds (all Mexican–American and dual-language learners) as having typical vs. disordered language.


The single measure with the best diagnostic accuracy was a pair of subtests from the Bilingual English-Spanish Assessment (BESA): Morphosyntax and Semantics (the third subtest, Phonology, wasn’t used here). But to get even more accurate? Like, sensitivity of 100% and specificity of about 93%? Add in a story retell task (they used Frog, Where Are You?). Sample both Spanish and English, and take the better MLUw of the two. This BESA + MLU assessment battery outperformed the other options in the mix (English and Spanish CELF-P2, plus a composite of the two, a parent interview, and a dynamic vocabulary assessment).

Not familiar with the BESA? It’s a newer test, designed—as the name implies—specifically for children who are bilingual, with different versions (not translated) of subtests in each language. If you give a subtest in both languages, you use the one with the highest score. And before you ask—yes, the test authors believe that monolingual SLPs can administer the BESA, given preparation and a trained assistant.

Now, the researchers here don’t include specific cut scores to work with on these assessments, but you can look at Table 2 in the paper and see the score ranges for the typical vs. disordered language groups. They also note that an MLUw of 4 or less can be a red flag for this group.
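As a quick sketch of that MLUw red-flag check (the cutoff of 4 comes from the authors’ note; the function names and toy utterances are ours, for illustration only):

```python
def mlu_words(utterances):
    """MLUw: mean number of words per utterance across a sample."""
    return sum(len(u.split()) for u in utterances) / len(utterances)

def mluw_red_flag(english_sample, spanish_sample, cutoff=4.0):
    """Sample both languages, keep the better MLUw, flag if at/below cutoff."""
    best = max(mlu_words(english_sample), mlu_words(spanish_sample))
    return best <= cutoff

# Toy example: both samples hover around 3 words per utterance, so even
# the better MLUw sits at or below 4 and the check raises a flag.
flag = mluw_red_flag(
    ["I go home", "he eat cookie"],
    ["yo quiero agua", "mira el perro grande"],
)
```

A flag here means "look closer," not "diagnose"; it would feed into the fuller BESA + story-retell battery described above.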

The major issue with this study, affecting our ability to generalize what it tells us, is that the sample size was really small—just 30 kids total. So, take these new results on board, but don’t override all that other smart stuff you know about assessing dual-language learners (see our links above for some refreshers if needed). And keep an eye out for more diagnostic studies down the road—you know we’ll point them out when they come!


Lazewnik, R., Creaghead, N. A., Smith, A. B., Prendeville, J.-A., Raisor-Becker, L., & Silbert, N. (2018). Identifiers of Language Impairment for Spanish-English Dual Language Learners. Language, Speech, and Hearing Services in Schools. Advance online publication.