
Not all phonics instruction is the same.

‘The question for teachers is no longer “look and say” or phonics. Instead, the question is which phonics programmes are most effective…’ (Gibb, 2018). The English Schools’ Minister’s assertion that the reading wars have been won by the relentless barrage of research undermining ‘look and say’ approaches may not be strictly true. The rear-guard anti-phonics battles being fought in Australia by academics, the pervasiveness of ‘Balanced Literacy’, the cockroach-like immortality of ‘Reading Recovery’ (Clay, 1985) and the myopic knee-jerk reactions of the English teaching and leadership unions against any sniff of checking whether children can actually read, before we discover at eleven years old that 40% of them can’t (DfE, 2017), are all testament to the fact that the retreating hordes are ready to fight to the death. He is, nevertheless, right to affirm that the blanket moniker of ‘phonics’ is unhelpful.

Phonics instruction has existed in many forms for thousands of years. Quintilian’s (1913) insistence that any syllable, whether it existed in Latin or not, be read and blended before a student was permitted to move on to the grammaticus stage of their education, along with his resolution that sounds be taught before letters, was in essence a phonics programme. Its efficacy depended on the transparency of the Latin alphabetic code, with its regularity of sound-to-letter correspondence. The adoption of Quintilian’s (1913) teaching methods, and the expectation that English grammar school boys learn to read and write Latin and Greek before English, was the only phonics instruction available in formal public education in the Middle Ages. It was, nonetheless, considerably more phonics instruction than was made available to the poor, who were relegated to learning to read at Sunday school without the help of Latin instruction and its transparent phonic construction, assisted only by letter names, a single word associated with each letter, some syllables and an array of highly complex religious texts to be learnt by heart. Pascal’s (Rodgers, 2002) revolutionary early phonics programme was the first to recognise that reading could be expressly taught through the recognition of the phonemes associated with graphemes and their blending together to decode words. It was, however, designed for French, and it was not until Webster’s primer that a universal phonics programme emerged for English. Kay’s (1801) ‘The New Preceptor’ and Mortimer’s (1890) ‘Reading Without Tears’ advanced phonics programmes further, systematising an approach founded on an understanding of the English alphabetic code and culminating in Nellie Dale’s (1902) iconic ‘On the Teaching of English Reading’, which sold well on both sides of the Atlantic.

With the rise of ‘look and say’ reading instruction, promoted by Huey’s (1904) adoption of Gestalt theory (Ellis, 2013), and with the growing power of the U.S. university-based teacher training institutions and the pre-eminence of Gates (1927), it was analytic phonics that became the dominant phonic approach. As basal readers and reading schemes that relied on word repetition came to dominate early reading teaching, the idea that reading could be taught from an atomised understanding of phonemes and graphemes, and the blending of these sounds through the identification of their representations, was driven to education’s hinterland.

Ironically, analytic phonics developed out of the inherent flaws of the whole-word and sentence approach. When a word could not be recognised or remembered, the reader was taught to resort to the secondary identification method of guesswork using picture cues, syntactic cues and contextual cues. Only if these methods proved ineffectual did the reader resort to analysing the letters and attempting to decode the word by identifying its sounds. Without having been taught a systematic word-attack strategy of letter-to-sound correspondence, however, this usually resulted in teacher intervention. The teacher would then teach the phonic code of the associated word and encourage the reader to decode it. So inefficient was the system, and so inexpert at identifying the relevant element of the alphabetic code were the teachers, that the most common outcome was the identification of the word by the teacher or, where no teacher was present, the avoidance of the word by the reader. Analytic phonics is often cited as phonics instruction when, as described above, it is nothing of the sort and possesses no element of systemisation.

A large bank of memorised words is also a prerequisite for phonics programmes and systems that compel the reader to decode unknown words through associative letter patterns in known words. When an unknown word is encountered, the reader is required to reference known words with similar letter patterns and apply those patterns to the unknown word. Onset and rime phonics is predicated upon the reader setting aside the opening grapheme’s phoneme, identifying the remaining letter pattern through association with a known word, replacing the opening phoneme of the known word with the actual phoneme, and blending that phoneme with the sound of the identified letter pattern. If that seems complicated for a single-syllable word, then imagine the cognitive gymnastics required for a polysyllabic word. For a struggling reader with a poor word memory bank the demands may be debilitating and possibly catastrophic.
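To make the contrast concrete, here is a minimal sketch (in Python) of the two word-attack routes described above: decoding by analogy with a known rime versus synthetic decoding from grapheme-phoneme correspondences alone. The tiny word bank, the letter-sound mappings and the function names are illustrative assumptions, not drawn from any published programme.

```python
# Illustrative sketch only: a toy word bank and grapheme-phoneme map,
# not a real phonics programme.

KNOWN_WORDS = {"light": ("l", "ight")}                          # memorised word, split into onset and rime
GRAPHEME_TO_PHONEME = {"f": "/f/", "igh": "/ie/", "t": "/t/"}   # taught correspondences


def decode_by_analogy(word):
    """Onset-and-rime route: set aside the onset, find a memorised word
    sharing the rime, then swap in the new onset. Fails without an analogue."""
    onset, rime = word[0], word[1:]
    for known, (_, known_rime) in KNOWN_WORDS.items():
        if known_rime == rime:
            return f"/{onset}/ blended with the rime of '{known}' -> {word}"
    return None  # no matching known word: the reader is stuck


def decode_synthetically(word):
    """Synthetic route: map each grapheme to its phoneme, left to right, and blend."""
    phonemes, i = [], 0
    while i < len(word):
        for size in (3, 2, 1):                 # try the longest taught grapheme first
            chunk = word[i:i + size]
            if chunk in GRAPHEME_TO_PHONEME:
                phonemes.append(GRAPHEME_TO_PHONEME[chunk])
                i += size
                break
        else:
            raise ValueError(f"untaught grapheme at position {i} in {word!r}")
    return " ".join(phonemes) + f" -> {word}"


print(decode_by_analogy("fight"))      # works only because 'light' has been memorised
print(decode_synthetically("fight"))   # works from code knowledge alone
```

The point of the sketch is the dependency: the analogy route returns nothing for a reader whose word bank lacks a matching rime, whereas the synthetic route needs only the taught correspondences.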

Although referenced as phonics programmes, these systems all require large banks of learned words and ignore the fundamental assumption of an alphabetic code: that letters and combinations of letters represent sounds, and that by systematically learning the sounds represented by the letters and synthesising those sounds together a word can be decoded and recoded. Furthermore, by regularly practising that decoding process, expertise in decoding develops, automaticity is achieved, the word superiority effect is activated, and words can be read accurately and quickly enough for the working memory to focus on comprehension of the text.

NB: for a detailed analysis of the variety of phonics approaches, see Stephen Parker's excellent post.

This is, in essence, the principle behind systematic synthetic phonics. It is an approach that explicitly teaches the connection between graphemes and phonemes and is fundamentally bottom-up. By mastering the sound-to-letter correspondences of the English alphabetic code, emergent readers can apply that code knowledge to decipher any word, enacting a letter-to-sound-to-word process in tandem with a lexical route (Dehaene, 2015) to achieve meaning. It eschews the top-down whole-word recognition reading pedagogy that only applies phonic knowledge when the logographic and contextual recognition and implied guessing systems fail.

In order to be effective, however, the letter representations of the sounds required to decode English must be atomised, codified and then stratified into a hierarchy that enables this most complex of alphabetic codes to be taught and practised by young learners. Thus, systematic synthetic phonics teaching begins with the initial or simple code, whereby children are first taught simple grapheme-phoneme correspondences that enable them to read a large number of words successfully, grasp the concept of the alphabetic code and start blending and segmenting, whilst understanding the reversibility of reading and writing. Once this is mastered, the complex code is introduced, with its increasingly multifarious variations in the representation of vowel and consonant sounds and the crucial concepts that one letter can represent more than one sound and that the same sound can be represented using different letter combinations. The clarity of the codification is crucial, with the taught understanding that all sounds are encoded and can thus be decoded, however obscure, discrete and singular that codification may be. With regular practice, allied to the reading of texts constructed from words with the taught grapheme-phoneme correspondences (decodable texts), readers develop effective strategies for decoding unknown words. The final element of the process is the teaching of polysyllabic words, where the procedural knowledge acquired to decode single syllables is extended to blending syllables together. Mastery, a process that may take between two and three years (Dehaene, 2015), unlocks the alphabetic code and leaves readers with an effective attack strategy for decoding any unknown word.
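The stratification described above can be pictured as two layers of mapping: a transparent simple code taught first, then a complex code that is many-to-many in both directions. The sketch below assumes a handful of invented entries purely for illustration; it is not any programme's actual teaching sequence.

```python
# Illustrative sketch: a few invented code entries showing why the complex
# code is many-to-many in both directions. Not a published teaching sequence.
from itertools import product

# One spelling, more than one possible sound (complex code)
COMPLEX_CODE = {
    "c":  ["/k/", "/s/"],      # cat, city
    "ow": ["/ou/", "/oa/"],    # cow, snow
    "o":  ["/o/"],
    "l":  ["/l/"],
    "b":  ["/b/"],
}

# One sound, more than one spelling (the reverse direction, used for encoding)
SPELLINGS = {"/ae/": ["ai (rain)", "ay (play)", "a-e (make)", "eigh (eight)"]}


def candidate_blends(graphemes):
    """Every possible blend of a grapheme sequence; the reader (or context)
    must then select the pronunciation that yields a real word."""
    options = [COMPLEX_CODE[g] for g in graphemes]
    return [" ".join(choice) for choice in product(*options)]


print(candidate_blends(["b", "ow", "l"]))   # two candidates; only one is the word 'bowl'
```

The two candidate blends for ‘bowl’ illustrate why the complex code has to be explicitly taught: the reader must know that ‘ow’ can represent more than one sound and then select the pronunciation that yields a real word.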

The English language has been encoded using an alphabet. To ignore that alphabet when teaching the decoding of the English alphabetic code is inexplicable (Daniels and Diack, 1953). That code has to be unlocked in order for fluent reading to be achieved. There is no alternative (Dehaene, 2015). Even when not explicitly taught, many children will learn to crack the code by themselves; perhaps up to 75% are able to do this (Adoniou, 2017). These children will be able to read. As for the rest, they must either continue attempting to crack the code well into secondary education (all the time wondering why everyone else seems to be able to read and they cannot) or rely on their memory of word shapes, which will likely give them a reading vocabulary of about two thousand words (Rasinski, 2010), condemning them to a life of semi-literacy. Ironically, these learners who struggle to crack the code do so because of a phonological deficit identified as one of the key specific cognitive barriers to reading (Paulesu et al., 2000). There is, nonetheless, no alternative. Without a comprehensive mastery of the decoding of the alphabetic code and the phoneme-grapheme correspondences there will be no reading (Dehaene, 2015). Without being specifically taught to master these correspondences, however slow and laborious the learning may be to develop phonological processing, there will be no reading. There are no short-cuts and no alternative paths.

That synthetic phonics trumped analytic phonics was brought into stark focus in 2004 with the publication of a seven-year study in Scotland comparing the two approaches. Watson and Johnston’s (2004) research into 304 primary-school-aged children taught reading through synthetic phonics and analytic phonics, across thirteen classes for sixteen weeks, found that those taught by synthetic phonics were seven months ahead of their chronological reading age, seven months ahead of the other children in the study and eight months ahead in terms of their spelling. What was perhaps more remarkable was that the classes being taught by synthetic phonics were from the most socially deprived backgrounds of all the study participants. Furthermore, these children were followed to the end of their primary school careers, by which time they were three and a half years ahead of their chronological reading age and significantly ahead of age expectations in their reading comprehension and spelling (Johnston and Watson, 2005).

Although criticised for a research design that conflated the phonic elements with other potential contributing factors (Ellis, 2009), the dramatic contrast in outcomes could not be ignored, and it was not. In England, the influence of the study on the Rose Review (2006) was substantial, and Nick Gibb (Ellis, 2009), then in opposition, used the study to question government education policy, with the resulting Phonics Screening Check being implemented once he became Schools’ Minister. Ironically, the far less centralised nature of political agency in Scotland ensured a more measured reaction (Ellis, 2009) and a widening literacy gap (Sosu and Ellis, 2014).

Synthetic phonics begins with the letters, assigns sounds to those letters and develops the ability to blend those sounds to read the word formed by the combination of those letters. From the mid-nineteenth century, however, a number of pedagogues questioned the rationale of this starting point. Pitman and Ellis (1845), conscious of the complexity of the English alphabetic code, developed the phonotypic alphabet, which more easily represented the atomised sounds of the English language; although it failed to be widely adopted, it did form the basis of the Pitman shorthand programme. At the heart of Pitman and Ellis’s (1845) approach was the understanding that sounds are represented by letters, and this enabled them to alter the letter configuration to simplify the representation of sound. All previous approaches assumed that letters existed to create the sounds in order to read. However, argued Pitman and Ellis (1845), speech comes before writing, so sounds must come first and letters are thus the representation of sounds. Although seemingly a semantic difference, it was, nonetheless, fundamental to the birth of a new approach to synthetic phonics: linguistic phonics.

The most successful early proponent of this sound-first approach was Dale (1902), who insisted that her charges learn and identify sounds before being introduced to the letters that represented them, and who followed Quintilian’s (1892) approach of avoiding explicit letter names, which added unnecessary sounds, until the phonemes were embedded. Dale (1902) also grouped phonemes irrespective of the graphemes that signified them, rather than being led by the alphabet in the manner in which previous phonics programmes such as Webster’s (1832) were organised. Furthermore, Dale (1902) taught spelling at the same time as reading, with children writing letters rather than merely observing them.

Although successful and popular, Dale’s (1902) programme was a victim of the rise of Gestalt theory, flash cards and basal readers, and it was not until the 1960s that new linguistic programmes surfaced again. It was James Pitman’s development of the Initial Teaching Alphabet that reintroduced a linguistic approach (Downing and Nathan, 1967). Like his grandfather Isaac, he attempted to simplify the alphabetic code for English and, although effective, his system was predicated on children having to unlearn his alphabet in order to read words written in the actual English alphabetic code. Similarly, the Lippincott programme (McCracken et al., 1963) used an artificial alphabet but followed Dale’s approach more consistently by focusing on sounds and blending, teaching spelling alongside reading, utilising decodable texts and insisting that children write letters rather than merely observe them. Strikingly, the Lippincott linguistic programme proved to be the most successful reading programme in the substantial Bond and Dykstra (1965) study of early reading instruction in the United States.

Evans and Carr’s (1985) major observational studies in Canada reinforced the efficacy of a sound and spelling focus for early reading instruction. They found that most literacy activities within a classroom had either a neutral or a negative effect on reading, with the only positive impacts being: learning the phonemes and how they are represented in letters, blending and segmenting sounds in words, and the amount of time spent writing the representations of sounds. Time spent memorising sight words was a strong negative predictor. Independent learning tasks were particularly damaging to early reading mastery, encouraging study to degenerate into ‘random learning which may detract from…reading skills’ (Evans and Carr, 1985, p. 344). The importance of writing letters for reinforcing sound-symbol correspondence was further emphasised by Hulme et al. (1987), who found that the motor activity of writing graphemes significantly improved reading outcomes over the use of letter tiles. Cunningham and Stanovich (1990) also concluded that spelling was appreciably better when letters were written by hand rather than typed or manipulated using letter tiles.

The elements of this research were drawn together by McGuinness in 1997 with the identification of the essentials of an effective ‘linguistic’ reading programme. This ‘Prototype’ (McGuinness, 1997) was founded on the cornerstone of sound-to-print orientation: that phonemes, not letters, are the basis of the code. Because the phonemes are finite in number, they provide the pivot point for penetrating the opacity of the English alphabetic code and making its reversibility transparent; a transparency which the attribution of sounds to spellings obscures. Thus, what appears at first to be semantic pedantry (that sounds are represented by graphemes, as opposed to graphemes creating sounds) reduces the code to a logical, if still complex, manageability for teachers and learners: there are forty-four sounds represented using the twenty-six letters, rather than the overwhelming prospect of facing many hundreds of thousands of words and letter combinations whose sounds need to be identified. Only the phonemes are taught, and no other sound units, but instruction begins with a reassuringly transparent artificial alphabet that establishes one-to-one letter-to-sound correspondence before the more complex variations of phoneme encoding are introduced. Sounds are identified and sequenced through blending and segmenting, with the writing of letters integrated into lessons to link writing to reading and embed spelling as a function of the code. There is explicit teaching that a single letter can represent more than one sound and that a sound can be represented by more than one letter, hence the imbalance between the number of phonemes and the number of available letters.

In keeping with Quintilian and Dale (1902), McGuinness (1997) also included, as part of the ‘Prototype’, the avoidance of letter names in early instruction, citing Treiman and Tincoff’s (1997) assertion that letter names focus attention on syllables instead of phonemes, thus blocking conceptual understanding of the alphabetic principle and undermining spelling. Also included was the elimination of the teaching of sight words (words with such irregular spelling that they require memorisation by sight), which undermines decoding by encouraging ineffective strategies, and which are relatively rare once effective code analysis has been applied (McGuinness, 2004): of the one hundred recognised high-frequency sight words (Dolch, 1936), McGuinness (2004) argues, only twenty-eight do not conform to the regular code.

Programmes that align most closely with this ‘Prototype’ (McGuinness, 2004) provide growing evidence of its value. Stuart’s (1999) ‘Docklands’ study, in which fifty-three per cent of the children knew no English words at the outset, found that participants made substantial progress and read well above national norms by the end of a programme that aligned well with the model. In Canada, Sumbler and Willows (1999) found significant gains in word recognition testing, word-attack strategies and spelling when children were taught using a similar programme, and perhaps most compelling was the revelation that the model used by Watson and Johnston (2004) in their Clackmannanshire research exhibited extensive correspondence with the ‘Prototype’ (McGuinness, 2004).

More recent, and more explicit, research into a linguistic approach to phonics mastery was carried out by Case, Philpot and Walker (2009), following 1,607 pupils across 50 schools over six years. The programme aligned directly with McGuinness’s (2004) ‘Prototype’, and the children taught by the model achieved decoding levels substantially above the national data (Case, Philpot and Walker, 2009), with 91% attaining the nationally expected level in KS1 statutory assessments. This longitudinal study, carried out by the programme designers using data from the schools using the package, found little or no variation across gender, socio-economic or geographical groupings. The Queen’s University Belfast study (Gray et al., 2007), which derived data from 916 pupils across 22 schools utilising linguistic phonics approaches, concluded that children exposed to this teaching approach gained a substantial advantage in both reading and writing and that this advantage was sustained throughout the primary phase.

With the rise in England’s world ranking in reading (PISA, 2017) attributed to the assessed cohort being the first to be subject to a statutory phonics check (Weale, 2019), it would seem apposite for extensive and explicit research to be carried out to establish the most effective programmes for delivering phonic mastery. Perhaps more pertinent, however, would be research into the subject knowledge required by primary school teachers in all phases to understand the anatomy of the English alphabetic code, to teach the deciphering of that code and to analyse and remedy gaps in code knowledge, particularly in older children, and into how any deficiencies in this subject knowledge can be filled. It would also be an appropriate occasion to investigate the quality of the training new primary school teachers receive in understanding the English alphabetic code and their aptitude in teaching its decoding.

You may be interested in reading:

1. https://www.thereadingape.com/single-post/2018/01/07/Phonics---the-lever-of-civilisation

2. https://www.thereadingape.com/single-post/2018/03/30/Why-English-is-just-so-darned-difficult-to-decode---a-short-history

3. https://www.thereadingape.com/single-post/2018/04/09/Phonics-2000-years-ago---Quintilianthe-accidental-phonics-genius-with-a-toxic-legacy

4. https://www.thereadingape.com/single-post/2018/04/29/Martin-Luther---the-unshackling-of-literacy---but-a-fatal-flaw

5. https://www.thereadingape.com/single-post/2018/06/01/Universal-literacynot-in-England---the-historical-roots-of-a-divided-education-system

6. https://www.thereadingape.com/single-post/2018/06/10/How-America-helped-ruin-English-reading-instructionwith-some-help-from-the-Germans

7. https://www.thereadingape.com/single-post/2018/07/08/Short-term-gains-long-term-pain-The-Word-Method-starts-to-take-hold

8. https://www.thereadingape.com/single-post/2018/08/06/How-assessment-and-inspections-ruined-reading---England-1870

9. https://www.thereadingape.com/single-post/2018/09/01/Efficacy-vs-intuition---phonics-goes-for-a-knockout-blow-in-the-late-19th-century-but-is-floored-by-bad-science

10. https://www.thereadingape.com/single-post/2018/10/14/Huey-and-Dewey---no-Disney-double-act

11. https://www.thereadingape.com/single-post/2018/10/23/Virtual-insanity-in-reading-instruction---teaching-reading-without-words


12. https://www.thereadingape.com/single-post/2018/11/25/Bad-science-the-end-of-phonics

13. https://www.thereadingape.com/single-post/2018/12/09/Rudolph-Flesch-points-out-the-elephant-in-the-room

14. https://www.thereadingape.com/single-post/2019/01/03/Why-the-reading-brain-cant-cope-with-triple-cueing

15. https://www.thereadingape.com/single-post/2019/02/03/The-final-battle-of-the-reading-wars-Why-1965-should-have-marked-the-end-of-the-war

16. https://www.thereadingape.com/single-post/2019/02/20/The-very-peculiar-case-of-Goodman-Smith-and-Clay-or-why-the-whole-language-approach-just-wont-die

17. https://www.thereadingape.com/single-post/2019/02/24/The-whole-language-reading-Rasputin-gets-a-blow-to-the-head---simple

18. https://www.thereadingape.com/single-post/2019/03/17/Totally-automatic---the-sticking-point-between-phonics-and-fluency

Follow the reading ape on twitter - @thereadingape
