In April 1967, Kenneth Goodman presented his landmark paper at the American Educational Research Association. ‘Reading: A Psycholinguistic Guessing Game’ (Goodman, 1967) was the culmination of five years of research and took the world of reading instruction by storm. It has been reprinted in eight anthologies, is his most widely cited work and has stimulated a myriad of research into similar models of reading. The aftershocks for reading instruction are still felt to this day.
The central tenet of the paper was a refutation of reading as a precise process involving ‘exact, detailed, sequential perception and identification of letters, words, spelling patterns and large language units…’ (1982, p33), arguing instead that it is a selective process involving the partial use of available language cues based on ‘readers’ expectations’ (1982, p33). The reader guesses words based on semantic and contextual expectations and then confirms, rejects and refines these guesses in ‘an interaction between thought and language…’ (1982, p34). Inaccuracies, or ‘miscues’ as Goodman (1982) calls these errors, are inherent (indeed, vital) to this process of psycholinguistic guesswork.
Goodman (1982) justifies his theory by linking it to Chomsky's (1965) model of oral sentence production, in which the precise encoding of speech is sampled and approximated when the message is decoded. Thus, he maintains, the oral output of the reader may not be directly related to the graphic stimulus of the text and may involve ‘transformation in vocabulary and syntax’ (1982, p38) even if meaning is retained. The implication is profound: the reader is reading for meaning, not for accuracy, and it is semantics and context that drive the reading process, not alphabetic decoding. The tether to the work of Chomsky (1965) is intuitively attractive but evolutionarily awkward. Chomsky’s (1965) research theorised that language and language structures for humans are an inherent, intuitive, natural attribute developed over millions of years of genetic selection and are thus biologically primary (Geary, 2002). To suggest that reading and writing aligned with Chomsky’s (1965) oral model of grammatical intuition, and that a reader was an ‘intuitive grammarian’ (Goodman, 1982, p161), ignored the inconvenient truth that the five thousand years it has taken to develop writing is not an evolutionary period (Dehaene, 2014).
Goodman’s (1982) ‘whole language’ approach was highly critical of Chall’s (1967) research, which separated code-breaking from reading for meaning. Although he never addressed the weight of research from which she drew her conclusions, and which contradicted his hypothesis, he accused her of misunderstanding how the linguistic code operated and was used in reading. ‘A language is not only a set of symbols…’ he opined, ‘it is also a system of communication…’ (1982, p127). He accused her of overlooking the ‘fact’ that phonemes do not really exist and that oral language is no less a code than written language, maintaining that even in an alphabetic system it is the interrelationship between graphophonic, syntactic and semantic information, and the switching between these cueing systems, that enables the reader to extract meaning from text. He states with certainty that:
‘Reading is a process in which the reader picks and chooses from the available information only enough to select and predict a language structure that is decodable. It is not in any sense a precise perceptual process.’ (1982, p128)
Goodman’s consistent conflation of oral language with written language, and his failure to acknowledge the vastly differing processes required to decode and recode the two systems of communication, lie at the heart of the failure of the whole language approach to reading. With resounding echoes of Gestalt theory (Ellis, 2013), and supported by Clay’s (1991) assertion that readers rely on sentences for rapid word perception, he took his flawed model of reading to its illogical conclusion, deducing that whole texts contained more meaning and were thus easier to read than pages, paragraphs and sentences. The grapheme and letter, containing the lowest level of meaning, were therefore the most difficult to read and, for reading instruction, the most irrelevant (Goodman and Goodman, 1981).
There was little room for phonics and phonemic awareness within the theory. Smith (1975) believed it did have a place, but only after children had learned to read. Nonetheless, in line with all of Chall’s (1967) research, even he couldn’t fail to notice that all of his best readers had excellent phonemic awareness. His conclusion was a triumph of confirmation bias (Wason, 1960): a good reader was intuitively able to make sense of phonics. In other words, phonics mastery did not make a good reader; good reading enabled phonics mastery. Smith’s (2012) book for teachers, ‘Understanding Reading’, supports and encourages Goodman’s (1972) guesswork technique for poor readers, recommending skipping unknown words, guessing them or, as a last resort, sounding the word out. It concludes with the emphatic statement that, ‘Guessing…is the most efficient manner in which to learn to read…’ (2012, p232). Goodman and Smith had developed a model of reading that followed exactly the paradigm of the poor reader (Adams and Bruck, 1993).
Whole language reading as a pedagogical technique spread like wildfire, fanned by two powerful gusts of intuition. Firstly, it appeared to work, at least initially. The more difficulty a child has with reading, the more reliant they become on memorisation of texts and the utilisation of word shape and visual and contextual cues, and the more fluent they appear, although often paraphrasing and skipping words (Juel et al., 1985). By being taught non-phonological compensatory strategies, poor readers seem to make progress; progress that eventually stalls once they reach seven years old, texts become more demanding with fewer visual cues, and the child’s logographic memory capacity has been reached. By this stage, confident readers have cracked the phonetic code for themselves (Adoniou, 2017) and so appear to have mastered reading through ‘whole word’ methods.
Secondly, the method appealed to teachers. Both Smith and Goodman appealed directly to teachers to ignore the gurus and experts, trust their intuition and carry out their own research (Kim, 2008). The theory that reading was, like oral language, intuitive, absolved teachers from having to teach it. This aligned perfectly with the constructivist teaching theories of Dewey (1916) that abounded in the 1960s, 1970s and 1980s (Peal, 2014), with the belief that knowledge, including knowing how to read, could be discovered and constructed. This despite Perfetti’s warning that, ‘learning to read is not like acquiring one’s native language, no matter how much someone wishes it were so’ (1991, p75). Teaching phonics, on the other hand, was highly technical and complex, requiring training, practice and repetition, which gained it a reputation for ‘drill and kill’. Whole language methods, with their emphasis on guessing and intuitive learning, enabled teachers to abdicate responsibility for the teaching of reading and concentrate on the far more enticing elements of literacy.
It is this professional abdication of reading instruction that has been the most damaging legacy of Smith and Goodman. In 2012 the National Union of Teachers (NUT), the second largest teaching union, representing in excess of three hundred thousand teachers, denounced the introduction of systematic synthetic phonics as the promotion of a single fashionable technique, with one NUT executive stating, ‘Most adults do not read phonically. They read by visual memory or they use context cueing to predict what the sentence might be…’ (Mulholland, 2014). The union was emphatic that phonics alone would not produce fluent readers and that ‘mixed methods’ were essential. The largest teaching union, the NAS/UWT, asserted that children ‘…need to use a combination of cues such as initial letter sounds and illustrations to make meaning from text…’ (politics.co.uk, 2013). This resistance from educational institutional leadership clearly reflected the attitudes of their memberships. According to an NFER (2012) survey, the majority of teachers specifically mentioned the use of picture cues as a reading technique, along with the visual memorisation of word shapes and the sight learning of words. Further research by the NFER (Walker and Bartlett, 2013) found that 67% of teachers believed that a ‘mixed methods’ approach to the teaching of reading was the most effective. A survey by the NAS/UWT in 2013 (politics.co.uk, 2013) showed that 89% of teachers believed that children needed to use a variety of cues to extract meaning from text, confirming the results of Sheffield Hallam University’s research two years earlier, which revealed that 74% of primary school teachers encouraged pupils to use a range of cueing systems that included picture clues (Gov.uk, 2011).
The whole language approach to reading instruction has maintained traction under cover of balanced literacy. This compromise arrangement attempted to end the reading wars by empowering teachers to decide which methods best suited individual children and to use a cocktail of approaches to address reading failure (Seidenberg, 2017). In practice, this meant that the vast majority of older teachers continued with their whole language approach and thus ignored phonics instruction except where statutory assessments enforced it (Seidenberg, 2017).
Marie Clay’s Reading Recovery (1995) programme is perhaps the most remarkable evidence of the indestructibility of the whole language approach to reading. Clay (1991) popularised the whole language approach in New Zealand along with Smith and Elley, who maintained that ‘children learn to read themselves; direct teaching plays only a minor role…’ (1995, p87), as learning to read was akin to learning to speak. This resulted in 20% of all six-year-olds in New Zealand making little or no progress toward independence in reading in their first year of schooling (Chapman, Tunmer and Prochnow, 2001). The solution was Clay’s Reading Recovery programme: the same approach that had failed the same children in their first year of schooling. It seemed the ultimate insult to these struggling readers and should have been the final nail in the whole language coffin. But this reading instruction Rasputin refused to die and clung to life with a remarkable feat of resurrection.
Reading Recovery worked.
Studies showed that not only is it beneficial, it is cost-effective too (May et al., 2015), and it is recognised as good practice by the Early Intervention Foundation, the European Literacy Policy Network, the Institute for Effective Education and the What Works Clearinghouse, as well as being advocated by University College London (UCL). A recent US study (Sirinides et al., 2018) reaffirmed these assertions, which were backed on social media by education heavyweight Dylan Wiliam (2018).
How can a whole language model of reading instruction defy the avalanche of research that undermines the efficacy of the approach? How can it work when the eyes and brain cannot process the contextual adjustment involved in psycholinguistic guessing fast enough for it to facilitate efficient reading? (https://www.thereadingape.com/single-post/2019/01/03/Why-the-reading-brain-cant-cope-with-triple-cueing)
The answer lies partly in the model and partly in the research. The programme constitutes twenty weeks of daily, thirty-minute one-to-one sessions with a trained practitioner. Fifty hours of one-to-one reading, however poor the instruction, will result in some improvement for most readers. This may be the result of improved guessing strategies, greater numbers of words recognised by shape and far greater opportunities for the child to start to crack the alphabetic code by themselves, as well as any phonics instruction the child is receiving outside of the programme. Furthermore, the research is not nearly as positive as it at first appears. Tunmer and Chapman (2016) questioned the research design of May et al. (2015) because the lowest performing students had been excluded from the study. They concluded that the successful completion rate of students was modest and that there was no evidence that Reading Recovery leads to sustained literacy gains. More damning, however, is their highlighting of the range of experiences and interventions that the control group were exposed to. This cuts to the kernel of Reading Recovery’s continued traction: until it is tested directly against an efficient systematic phonics programme, it will continue to indicate modest improvements in reading for its participants. Reading Recovery versus fifty hours of extra one-to-one linguistic phonics instruction: no contest.
When the study is finally commissioned and completed, expect some very red faces in the world of education academia – not least at UCL.
This post is number 16 in a series of posts.
Follow the reading ape on twitter @thereadingape