
Controlling the text – the dilemma of decodable texts.


Code-based instruction explicitly and purposely teaches letter-sound correspondences systematically. Chall’s (1967) review of reading studies indicated that the time pupils spent reading texts built around the letter-sound patterns they had been taught was key to their progress in reading. Controlled texts, written to maximise the use of words with the taught phonic patterns (or decodable texts, as they are known in the UK), allow children to practise decoding in an environment where they will be successful, rather than experiencing the frustration of encountering the oddly spelled high-frequency words that are common in early readers (Rayner et al., 2012). The use of decodable texts is an expectation of the new Ofsted inspection framework.

By the 1980s, Beck (1981) found that in a number of specific reading programmes, between 79% and 100% of the words in code-based readers were decodable on the basis of taught correspondences. Using these controlled texts in the early stages of code instruction gives children practice at sounding out words while immersed in the context of reading sentences, allied to the greater variability of the words encountered that this entails. As the quality of lexical representations progresses, words become more familiar and are read more quickly (Rayner et al., 2001, 2002), resulting in the ability to read text more rapidly and with appropriate intonation – reading fluency. Pupils initially work harder to decode texts word by word than if they were reading text composed of the memorised words associated with the word method of reading instruction, but the letter-by-letter processing involved builds the high-quality lexical representations needed to support quick and accurate reading (Perfetti, 1992).

Controlled texts inevitably have contrived storylines with little narrative depth. However, criticising them for this misses their point. They are not to be read for narrative value but for the practice of decoding in the context of meaningful text, thereby developing the word superiority effect (Reicher, 1969). Children taught to read by code-based instruction therefore require exposure to a language-rich environment that is broad, deep and enticing.

The alternative is to use texts that are not controlled for the letter-sound correspondences children have studied. In the context of code-based instruction, this encourages early readers to practise decoding (which is what they are doing; they are not practising reading) whilst encountering words that they are unable to decode. This potentially undermines their faith in the code and encourages them to identify words in the manner of poor readers, relying more on context cues to facilitate word identification (Stanovich et al., 1981). In other words, they guess the word – a very inefficient method of word recognition (Ehrlich, 1981), with a one-in-a-hundred success rate (Stanovich, 1980).

However, all readers, whatever their language, need to move from the imperfect, yet nevertheless functional, approach of decoding to a more refined heuristic of orthographic processing – word recognition (Cunningham, 2011). The independent (not scaffolded) identification of words via decoding facilitates the establishment of autonomous orthographic representations. This occurs in all orthographies. This shift to automatic word recognition seems, however, to be largely self-taught through regular exposure to text (Share, 2008). The more text an emergent reader is exposed to, the more presentations of words they receive, the more efficient their orthographic processing becomes, and the quicker the goal of fluency is reached (Cunningham and Stanovich, 1990). The effect of print exposure on orthographic processing is supported by a significant body of research (Cunningham et al., 2001).

This results in the killer question for decodable texts: bearing in mind their restricted availability and the imperative that children be exposed to high levels of print, at what point do we introduce texts that are not controlled for learned letter-sound correspondences?

The answer is, inevitably, nuanced.

It helps greatly to look at the research in other orthographies, particularly those with far greater transparency than English, like Hebrew, which has a very shallow orthography. In Hebrew there is no need to control the texts, as all words follow a close to one-to-one sound-to-letter correspondence – all texts are essentially controlled. Share (2004) found that although first-grade (Year 2) pupils learning to read Hebrew exhibited high levels of decoding accuracy (as would be expected), they still struggled to read words and to amalgamate the orthographic and phonological representations needed to generate self-teaching. He concluded that decoding accuracy alone may not be sufficient to promote orthographic learning; it was the phonological recoding, he surmised, that was more critical for self-teaching. By the second grade, however, children learning Hebrew had been exposed to massively increased amounts of print (a result of essentially all text being controlled, given Hebrew’s shallow orthography), which enabled ‘the more experienced reader to develop a sensitivity to orthographic detail that is beyond the grasp of the novice’ (Share, 2004:29).

And here is the dilemma: exposure to significant amounts of print helps to develop orthographic processing, but children learning to read an opaque orthography (English) are restricted in the availability of that print. Can we practically restrict emergent readers of English to controlled texts until they have mastered the entire phonic code, including polysyllabic decoding? The resources are just not available. Children’s writers write narratives to engage children and entice parents to part with their money, not to help children develop orthographic processing – and I can’t imagine a celebrity children’s author writing without an eye on the merchandising and film deal, although they seem more than happy to pontificate on reading instruction. It is interesting that Cunningham (2011) alerted policymakers to this very problem nearly ten years ago and suggested that the publishers of textbooks for reading instruction pay much more attention to the issue and produce books far more supportive of orthographic processing. Perhaps it was the lack of marketing potential, or an absence of celebrity authors willing to develop this more mundane type of publication, that has resulted in the tumbleweed response.

Interestingly, although the English alphabetic code is an absolute brute (see here for how we ended up with it), it does seem to afford some cognitive benefits. Not having the luxury of one-to-one correspondence obliges readers to look beyond the ‘low level phonology and consider higher order regularities that are word specific’ (Share, 2004:292). As a result, children learning English seem to develop orthographic sensitivity and a reliance on larger, multi-letter orthographic units sooner – as early as Year 2. An effective orthography must to some extent be governed by rules, and English is a compromise between the needs of the novice (decipherability) and the expert (automaticity and morphology – the implication of meaning). This compromise has resulted in a developmental balance: code knowledge drives the automatising of legitimate letter patterns, so that sufficient decoding knowledge and orthographic processing capacity (word recognition), allied to lexical capacity, afford the emergent reader of English the possibility of correctly reading unknown words that do not conform to previously encountered and embedded patterns (Booth et al., 1999). The million-dollar question, of course, is: what is sufficient?

The heuristic would appear to be:

· Where the simple code is being taught, controlled texts are probably essential. Attempting to decode ANY text that does not conform to the elements of the code to which an early reader has been exposed will undermine a child’s faith in the systematic nature of decoding. Children at this phase of decoding should be exposed to multiple narratives and texts as ‘read alouds’.

· Once children are exhibiting significant understanding of the complex code, have embedded the concepts that letters can represent more than one sound and that a single sound can be represented by more than one letter, and have instant word recognition of words conforming to their code knowledge (in other words, their orthographic processing is developing), they may have sufficient code knowledge to apply it to unique or rarer words that exist in their lexicon. This will probably not happen before the end of Year 1 or the beginning of Year 2 (Grade 1).

· If exposure to texts that are not controlled degrades a child’s faith in the code (particularly for children who develop code knowledge more slowly), revert to controlled texts.

· Children learning to read require significant exposure to print, particularly once orthographic processing becomes evident. The presentation of words in multiple contexts is essential for orthographic processing.

· This is only a heuristic suggested by the research. It is not a law. Keep an eye on whether code knowledge is sufficient and use your judgement. If a child starts guessing words… stop.
