General Questions
Should I use the Shaywitz DyslexiaScreen more than one time for a student?
Yes. Many state laws require annual screening in the early grades. In addition, screening annually in the early years strengthens the screener's results over time, because repeated ratings provide converging evidence of a student's risk status.
When using the Shaywitz DyslexiaScreen in Kindergarten, at what point during the school year should I screen?
In the manual, we recommend waiting at least 6-8 weeks from the start of the school year for any form of the screener, so that the teacher-rater has time to get to know the student academically and have enough classroom performance examples to rate confidently. More broadly, given the widely variable preschool experience of Kindergarten students, it may be advisable to wait longer into the school year, even after the mid-year break, to allow all students to have access to quality instruction prior to screening. Organizational factors and external constraints may also play a role in determining the best timing for this grade. Conversely, waiting too long in the Kindergarten year may miss the opportunity for more explicit and structured instruction in Kindergarten for students at risk for dyslexia. Balancing all these factors is key.
Can a teacher purchase the Shaywitz DyslexiaScreen directly?
|
Schools and school districts can purchase this tool through our typical process. For individual purchases, users must be Level B qualified; some teachers are Level B qualified, and some are not.
Dr. Shaywitz and Pearson agree that classroom teachers are the professionals closest to the student academically, and are therefore best positioned to rate the reading-related academic performance this screener asks about. When Dr. Shaywitz developed the screener based on her long-standing data set, she specifically crafted the items for teachers. She wanted the best evidence to come from teachers, who can respond to questions about student performance in the classroom environment.
Administering a screening program includes identifying appropriate students to screen, whether a universal student population or a subset of students, and ensuring that the screening program is administered with fidelity. Those who administer screening programs also must train examiners appropriately, interpret results correctly within the limits of the scope of the screening tool, and disseminate results appropriately to various stakeholder groups. Those professionals who are trained in appropriate and ethical management of assessment tools are called, in Pearson terms, Level B qualified users. There are multiple scenarios in which a professional may be Level B qualified. Please refer to our user qualifications page on our website. Both Dr. Shaywitz and Pearson are committed to our ethical responsibility that tools we publish are used appropriately, on behalf of students and their families.
Recommendations about what to do next following screening require a collaborative approach between professionals from different disciplines and often across general and special education perspectives.
Why are the Shaywitz DyslexiaScreen forms named "Form 0, Form 1, etc." and not by grade (e.g., The Kindergarten Form)?
Great question. Around the world, each grade and the expectations of student performance vary. As we publish the Shaywitz DyslexiaScreen in other countries, we needed to give each country the flexibility to assign the appropriate grade level to each form, based on the local education system and the local research on the screener.
Content Questions
Are the Shaywitz DyslexiaScreen results valid for children who are young or old for grade?
The results for children who are considered "out of level" are not necessarily invalid and may be interpreted by qualified practitioners. If a child's age is outside the typical age range associated with his or her grade level, Q-global alerts the user by providing a message window entitled, "Invalid Assessment Record(s)." The error message indicates the age levels typically associated with the form and advises the use of caution when interpreting results.
Can you comment on the perceived age of the original norm sample and the sample size of the national validation studies?
The Shaywitz DyslexiaScreen utilizes a cut score norm, which provides a reference point for each form to divide the data set into two groups: At Risk for Dyslexia and Not At Risk for Dyslexia. The teacher ratings must exceed the cut score in order to classify a student as At Risk for Dyslexia.
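As an illustration only, the classification rule above can be sketched as a simple threshold check. The cut score and function names below are hypothetical placeholders; the actual cut scores are form-specific and published in the manual.

```python
# Illustrative sketch of a cut-score classification rule.
# CUT_SCORE is a hypothetical placeholder; actual Shaywitz DyslexiaScreen
# cut scores are form-specific and published in the manual.

CUT_SCORE = 10  # hypothetical threshold for one form

def classify_risk(teacher_rating_total: int, cut_score: int = CUT_SCORE) -> str:
    """Ratings that exceed the cut score classify a student as At Risk."""
    if teacher_rating_total > cut_score:
        return "At Risk for Dyslexia"
    return "Not At Risk for Dyslexia"

print(classify_risk(12))  # exceeds the cut score
print(classify_risk(8))   # does not exceed the cut score
```

Note that a rating must strictly exceed the cut score; a rating equal to the cut score does not produce an At Risk classification.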
Unlike traditional normative samples that must be sufficiently large and representative of the population to establish a normal distribution of scores, a cut score norm only requires a representative sample of the two target groups: in this case, students who are At Risk for Dyslexia and Not At Risk for Dyslexia. The Shaywitz DyslexiaScreen cut scores are based on a sample of 414 schoolchildren who participated in the Connecticut Longitudinal Study that began in 1983 and continues to the present day. In contrast to traditional normative samples that are collected within one year or less, the Shaywitz normative sample has been followed over three decades from school entry into early adulthood. The longitudinal research design has allowed Dr. Shaywitz to detect measurable changes in reading performance and risk classification over the course of each student’s academic career, strengthening the reliability and validity of the Shaywitz DyslexiaScreen risk classification in the early grades.
To confirm that the cut scores remain valid and reliable across samples, 115 children between the ages of 5 and 7 participated in a national validity study from April through July 2016. For the kindergarten and grade 1 forms, these 115 students (50 or more at each grade) provided the statistical power needed for the psychometric analysis of the data set. The results indicated that the reliability coefficients and the clinical sensitivity and specificity results were similar to those derived from the Connecticut Longitudinal Study. These findings support the use of the Shaywitz DyslexiaScreen for identifying children with dyslexia and suggest that the teacher ratings indicating risk for dyslexia remain consistent over time. Additional Grade 2 and Grade 3 validation studies, conducted in the same way in 2017 and 2018 respectively, produced similar results.
Does the Shaywitz DyslexiaScreen include a measure of rapid automatic naming (RAN) or family history?
Certainly, individuals with dyslexia often (but not always) have family members with similar struggles in reading, writing, and/or spelling related to dyslexia. Further, many individuals show lower than average performance on one or more RAN tasks during an assessment process.
That said, the data set behind the screener did not provide strong enough evidence to include items in these two areas in the screener itself. In addition, family history information is not always available, which makes it difficult to require as part of a screening process.
Should students who are English Language Learners (ELLs) also be screened for dyslexia?
Yes. Dyslexia is not language-specific; it crosses alphabetic and logographic writing systems. Further, a screener does not diagnose. The goal of any screener is to reliably sort individuals who are "at risk" from those who are "not at risk." Any positive classification of "At Risk for Dyslexia" on the Shaywitz DyslexiaScreen, whether or not the student is also classified as an ELL, should be followed by a secondary measure or measures to establish whether the student actually has dyslexia. In the case of a student who is an ELL, if dyslexia is present, it will generally manifest both in English and in the student's primary language(s), depending on the language(s) and the age of the student. View Dyslexia & ELLs
In the Connecticut Longitudinal Study, did teachers have training in dyslexia?
The Connecticut Longitudinal Study focused on a cohort of hundreds of children, not teachers. The study followed, and continues to follow, this cohort of children into adulthood. The study, as part of its protocol, gathered data from teachers who may or may not have had specific training on dyslexia--it simply asked questions of teachers of these children in the cohort about the nature of each child's academic and classroom performance, which any teacher can do once they have time in the classroom to observe a student's work. In addition, each year a full assessment battery was also given to each child in the cohort.
The Shaywitz DyslexiaScreen items were derived from an empirical analysis of teacher response data and assessment scores over time. The screener's items were then field tested by a range of classroom teachers with a wide spectrum of knowledge about dyslexia--no formal dyslexia training was provided by the study over the years or during the field testing of the items. In fact, the items were designed to be completed by teachers who had little or no understanding of dyslexia. While training in dyslexia is not required for a teacher to complete the screener, Dr. Shaywitz and Pearson clearly advocate for training all teachers in the science of dyslexia.
Experts I’ve read define dyslexia as word level reading difficulties but average or better than average language skills. Word reading skills, reading fluency and reading comprehension are substantially below the student's language comprehension skills. If language is strong, how can dyslexia be a language-based disorder?
The issue of strengths and weaknesses in a profile of dyslexia is such an important topic! The evidence is clear that reading is primarily a language-based effort, as opposed to a purely cognitive/metacognitive effort, for example. The tasks of word-level decoding and reading comprehension (i.e., tasks of written language) require a number of linguistic skills to accomplish: knowledge of sounds, letters, and words/word parts, which are all part of the linguistic code and together carry the meaning needed for comprehension. A recent article by Hugh Catts and colleagues describes students with dyslexia who have strong *oral* (i.e., semantic or phonological) language skills (seen as a "protective factor") and students with dyslexia who have weak *oral* language skills (seen as a "risk factor"). An example of low oral language skills might be students who receive preschool speech-language services for oral language development and then end up being diagnosed with dyslexia once in elementary school. They may not have average or better oral language skills at the time of the dyslexia diagnosis. In fact, their risk for dyslexia could have been predicted much sooner--not just from oral language performance, but from other factors in a hybrid model as well--than a typical 3rd grade dyslexia assessment allows. So while dyslexia is indeed a language-based disorder, the profile of language skills among those with dyslexia is not homogeneous.
Here's a link to the Catts, et al abstract, if you are interested: http://link.springer.com/article/10.1007/s11145-016-9692-2
Can't teachers skew the results by selecting the frequency that will automatically yield an at-risk rating, making this not a valid indicator of risk for dyslexia?
Unfortunately, raters deliberately skewing their responses to influence the outcome of an assessment is nothing new in the world of testing. Doing so would be an ethical violation in the use of tests and test content under a professional code of ethics.
Beyond the psychometric and research evidence, the Shaywitz DyslexiaScreen items were developed with direct teacher feedback to create the clearest and easiest possible way to rate student performance. Supporting Teacher Guides further illustrate the use of the items' content. The validity and reliability data are accurate and defensible. All assessments, including rating scales like this screener, factor a margin of error into the responses of those who complete the forms.
Further, administrators of a screening program can provide a check and balance. Prevalence data from the literature and our own research confirm that up to 20% of a typical school classroom could be flagged as "at risk" for dyslexia. If a class's percentage of at-risk students is notably higher than that, follow-up may be warranted to understand why. This is a powerful use of the group report, when it is needed.
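To illustrate the kind of check an administrator might run against the group report, here is a minimal sketch assuming the ~20% prevalence figure above; the function name and threshold are hypothetical, not part of any Pearson tool.

```python
# Illustrative administrator sanity check: flag classrooms whose at-risk
# rate exceeds the ~20% prevalence reported in the literature.
# The 20% threshold and function name are illustrative assumptions.

EXPECTED_MAX_RATE = 0.20  # up to ~20% of a typical classroom

def needs_follow_up(flagged: int, class_size: int,
                    max_rate: float = EXPECTED_MAX_RATE) -> bool:
    """Return True when a class's at-risk percentage warrants review."""
    return (flagged / class_size) > max_rate

print(needs_follow_up(3, 25))   # 12% flagged: within the expected range
print(needs_follow_up(9, 25))   # 36% flagged: unusually high, follow up
```

A check like this does not invalidate any individual rating; it simply surfaces classrooms where rating patterns deviate enough from prevalence expectations to merit a conversation.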
Finally, the power of this tool is in its ability to function as a mass triage system--you cast a wide net and gather students who may be at risk. The value is in the combination of speed and breadth for finding students at younger grades faster than is typical in the world of education to date. Certainly, a few students who are flagged as "at risk" will end up not having dyslexia upon further assessment. They may need something else, or nothing beyond excellent Tier 1/general education instruction. This is just step 1 of the overall "find and serve those with dyslexia" effort.
Technical Questions
Which Administration Delivery Option should I choose: Q-global, Review360, or aimswebPlus?
If you are already a Q-global user and want to screen a single student or a small group of students, Q-global is your best choice. If, however, you plan to screen a percentage of students across a grade, or every student in the grade, then choose Review360 or aimswebPlus. These options provide mass screening capabilities--including administrator dashboards, teacher progress indicators, and rostering support--as well as robust, flexible reporting tools for aggregated and disaggregated reporting across the district.
How can I implement the Shaywitz DyslexiaScreen on Review360?
Use the PDFs below to guide you through getting started and the screening process. Select the documents best suited to your needs: K-12 education organizations or non-education organizations.
K-12 Education Organizations
Non-K-12 Organizations