Totally automatic - the sticking point between phonics and fluency.

Phonics is certainly fundamental to reading fluency but there seems to be a missing link.

Children learn how to read fluently at different rates and with differing levels of difficulty (Snowling and Hulme, 2005), and while some attain fluency relatively easily, others encounter complications and struggle to attain the ability to decode swiftly and automatically. Although the research into the importance of phonological processing skills in reading acquisition is emphatic and consistent (Stanovich, 2000), there is increasing evidence that orthographic processing is relevant in explaining variance in word recognition skills (Cunningham, Perry and Stanovich, 2001). ‘Automaticity with word recognition,’ state Cunningham, Nathan and Raher (2011), ‘plays a fundamental role in facilitating comprehension of text and, thus, is a primary determinant of reading achievement through schooling…’ (2011, p. 259).

LaBerge and Samuels (1974) developed the Automatic Information Processing Model, which added, after letter recognition by the iconic memory, a reference to the phonological memory: sounds are associated with the visual image before receiving the attention of the episodic and semantic memory, resulting in correct word identification. They also developed the concepts of external and internal attention. External attention is the observable evidence of reading; internal attention comprises the unobservable elements of cognitive alertness (how cognitively vigilant the reader is and how much effort is being applied), selectivity (what experiences the reader is drawing on to comprehend the text) and capacity (how much cognitive attention the reader actually has available). LaBerge and Samuels (1974) also introduced the concept of automaticity: the ability to perform a task whilst devoting little attention to its execution. In the case of reading, this relates specifically to the swift and accurate decoding of text with almost no imposition on working memory, with the resultant benefit that almost all attention is available for comprehension. The implication that there is a crucial stage between decoding mastery and reading fluency is central to understanding development in readers who have mastered phonics but have not yet attained fluency.

All interactive theoretical models of skilled reading emphasise the need for fast, automatic word recognition (Ehri, 2005). Although these models differ in their explanations of the cognitive processes involved, they all assume that word recognition develops from a slow, arduous, intentional process requiring constant sound-symbol deciphering into a process that enables the immediate identification of words through their lexical quality (Perfetti, 2007).

Thus, the ability to recognise words quickly, accurately and without effort allows cognitive resources to attend almost entirely to reading comprehension. Conversely, without the achievement of reading automaticity, the cognitive load required to decode words leaves insufficient space in working memory for reading comprehension (Perfetti, 2007). In order to reduce this cognitive load in processing alphabetic orthographies, readers must attain effortless use of the alphabetic code (Chen, 2007). This cognitive theory is supported by substantial empirical research evidence that fast, effortless word recognition is the strongest predictor of reading comprehension and accounts for high degrees of variance in comprehension throughout schooling (NICHD, 2000; Stanovich, 2000). Deficiencies in swift, accurate word recognition in early schooling are the clearest predictor of deficiencies in reading comprehension in later schooling (Cunningham and Stanovich, 1997). It is this orthographic processing deficit that has been identified as a crucial ‘sticking point’ between phonological processing and reading fluency (Stanovich and West, 1989) and explains significant variances in reading and spelling ability (Badian, 2001).

Orthographic processing is the ability to form, store and access orthographic representations which abide by the allowable order of letters within a language and are then linked to the phonological, semantic and morphological coding of that language (Cunningham et al., 2011). In other words, readers can recognise and decode a group of letters as being a plausible pattern within a language (this could include a pseudo-word) and then access semantic memory to recognise that it is a recognisable word within that language, even though they may not be able to define the word (this would exclude a pseudo-word).

Clearly this requires a dependence on phonological processing ability (Barron, 1986). Nonetheless, no cognitive process is ever completely isolated, and thus orthographic processing can be seen as a construct separate from phonological processing, with the weighting on each process dependent on the reading task and the proficiency of the reader (Berninger, 1994). This is crucial for the development of reading in English, which contains so many homophones, and for the swift recognition that the phonic decoding of a word is not the sole indication of its meaning.

If orthographic processing (or reading automaticity) is thus recognised as a separable construct on the path to the further separable construct of reading fluency, an understanding of how it develops is crucial.

Reading fluency, the ability to recognise words quickly, accurately and with an inherent understanding of wider meaning (prosody), is linked with the instant processing of phonological, semantic, morphological and syntactic information (Perfetti, 2007). Much research has rightly focused on developing our understanding of phonological awareness, and many millions have been allocated to delivering and testing phonological awareness in young readers to ensure mastery. Although a vital and fundamental foundation of reading proficiency, it is not, of itself, sufficient for reading fluency. The significance of orthographic processing as a buffer between phonological mastery and reading fluency, and as a source of variance in word recognition, highlights it as a crucial developmental stage on the road to fluency.

When a developing reader with well-developed phonic awareness encounters an unfamiliar word, they utilise their phonological processing capacity in an exhaustive grapheme-by-grapheme decoding operation that ends in successful decoding. This results in the formation of cognitive orthographic representations. In typical readers, after a small number of encounters with a word, it will be added to the child’s orthographic lexicon (Share, 2004), and with the amalgamation of phonological and orthographic representations, fluent future word identification will be enabled (Ehri, 2005).

The phonological component is the primary means for acquiring orthographic representations (Share, 1995). Nonetheless, it is the frequency of exposure to a word and the successful identification of it that develops the word recognition process. If the word is familiar it will be read automatically; if not, the reader will phonologically decode it. What is being built by the reader is not a bank of memorised shapes that they identify and associate with meanings, but the capacity to recognise logical and acceptable letter patterns and link them to semantic and morphological knowledge: Reicher’s (1969) word superiority effect.

But how is this developed and how should it be taught? Share (1995) developed a theoretical framework that postulated that the detailed orthographic representations necessary for fast, efficient word recognition are primarily self-taught during independent reading.

This self-teaching model (Share, 1995) has been tested and refined through cross-linguistic investigations, spanning languages with shallow orthographies such as Hebrew to those with deeper orthographies such as Dutch and English, and there is significant evidence that it is a robust theoretical model (Cunningham et al., 2011). Cross-linguistic evidence supports the model for both oral reading and silent reading (Bowey and Muller, 2005). In languages with shallow orthographies, only two exposures to unfamiliar words are required for automaticity to be achieved (Share, 2004), whereas English, with its deeper orthography, requires at least four exposures (Nation et al., 2007), with further exposures making little difference to results. Younger children, however, do benefit from greater numbers of exposures (Bowey and Muller, 2005).

The greatest implication of this model for the teaching of reading is that children require multiple and varied opportunities to self-teach (Cunningham et al., 2002). Furthermore, children require time to phonologically recode words on their own during instruction if automaticity is to be developed (Share, 1995). When a child hesitates while attempting to decode, it is essential that they are afforded sufficient time to attempt a phonological recoding. Even failed attempts facilitate some level of orthographic learning (Cunningham et al., 2011), especially when the teacher is able to refer to the alphabetic coding structure rather than merely read the word. And this perhaps is the crucial factor. Without teacher input and monitoring, sustained silent reading showed almost no positive effects in developing orthographic processing (NICHD, 2000) because there was no way of ensuring a child’s investment in the reading. Reading has to be taught, monitored and invested in by the reader, and that investment needs to be constantly assessed.

Spelling appears to have a positive effect on the development of orthographic representations and reading automaticity when children are taught the spelling of words through their graphemic structure associated with the phonic code (Shahar-Yames and Share, 2008). Spelling should help reading more than reading helps spelling (Perfetti, 1997). Spelling instruction also develops vocabulary and writing, with evidence that the attentional demands of composition are not diluted by attention to accurate spelling (Torrance and Galbraith, 2006).

The texts to which emergent readers are exposed are crucial and an area for potential confusion. Clearly, to develop reading automaticity, children must have numerous exposures to high-frequency words. The more regularly a word is correctly identified, the more quickly it becomes embedded as an orthographic representation. However, for the development of reading comprehension, the use of rhetorical devices, advanced language techniques and in-depth analysis of a text, a more complex text with greater numbers of unfamiliar and complex words is necessary (Elder and Paul, 2006). There is no need to conflate the two, only to understand the purpose of each text. Analysis of a complex text above instructional level develops comprehension ability and widens a child’s lexicon (Booth et al., 1999), whereas reading a text at a child’s instructional level helps develop reading automaticity through exposure to familiar words.

Whereas allowing children to read unsupervised may enable the self-teaching of orthographic processing for automaticity, within a school setting the monitoring of investment in the process is vital (NICHD, 2000). Therefore, a move from guided reading groups (where many participants’ reading cannot be monitored) to a whole-class approach that demands investment through leverage devices (Lemov, Driggs and Woolway, 2016) will ensure greater and more regular exposure to text. This technique may be utilised for texts above instructional level but, where automaticity is not yet developed, should be used regularly with texts at instructional level.

Where a small number of children have yet to attain automaticity, individual intervention can be utilised. Reading and rereading a text (of about 100 words) at or just above instructional level until fluency is achieved, with teacher intervention highlighting miscues (LaBerge and Samuels, 1976), has proved effective (Stahl, 2003). More effective is assisted reading (Chomsky, 1978), where the teacher and the student repeatedly read the text simultaneously whilst pointing at the words. With each repeated reading, the teacher softens their voice until the student is the dominant reader. Young, Rasinski and Mohr (2018) have developed a successful variation of this technique whereby the student reads marginally behind the teacher until able to become the dominant reader.

The effective teaching of phonics is unarguably the foundation of efficient reading, and the monitoring of phonic mastery is a crucial role for educators (the Bryant test is a reliable, quick and effective assessment). Mastery of the English alphabetic code seldom occurs by the second year of formal education (when the Phonics Screening Check takes place), and most children are not comfortable decoding at a polysyllabic level until their fourth year of schooling (Dehaene, 2011). This, however, is not the end of the process. Children need to practise decoding at speed to gain automaticity and engage the word superiority effect (Reicher, 1969). Although this appears to be a self-teaching process, it can be enhanced by regular exposure to relevant texts and by monitoring of the self-teaching. Furthermore, it can be assessed using simple tests such as the Appalachian State University Word Recognition Inventory. This self-teaching need not be left to chance.

‘Automatic word recognition is necessary for successful reading comprehension…although much of what a reader ultimately has to do is read, there are significant advantages in encouraging a student to develop a proclivity toward phonological recoding on the path to automatic word recognition. Furthermore, the assistance of trained teachers who understand the intricacies of language and reading development, and instruct with attention to the complexities of the languages their students hear, speak and write, is priceless.’ (Cunningham et al., 2011, p. 277)

This post is number 18 in a series of posts.

Follow the reading ape on twitter @thereadingape