So hands up, who hates phonics? Some very influential people...

April 17, 2019

Systematic phonics instruction has been described as the nearest thing to a silver bullet for the development of literacy.  With all the economic and social ills associated with illiteracy (detailed here), it would be a brave person who would deny the weight of research in its favour.  Nonetheless, there remains considerable resistance to this form of instruction, not only from within the teaching profession (detailed here) but also from a number of high-profile international academics.  The prospect of a Phonics Screening Check in Australia has enraged them.  Their views are analysed in this post.

 

‘There is clear evidence that a systematic approach to phonics results in gains in word reading and spelling.  However, there is inconclusive evidence to suggest that no one method of teaching children to read is superior to any other method,’ states Glazzard (2017, p44), then Professor at the Leeds Trinity University teacher training faculty.  He suggests that analytic phonics could be as effective as systematic synthetic phonics (the differences are detailed here) and encourages the use of onset and rime as a strategy for decoding where systematic synthetic phonics has proved ineffective.  He argues that many younger children are not able to deal with the smallest unit of sound, the phoneme, but must begin with larger units, and recommends onset and rime.  There is no reference to Castro-Caldas et al’s (1998) study, which established that the inability to identify the smallest unit of sound was apparent in all illiterates.  The basis of his statement is an analysis of the Clackmannanshire research (Johnston and Watson, 2005) and the deduction that it was inconclusive as a result of an incoherent research design (Wyse and Goswami, 2008).  Despite the substantial research (please see numerous other posts) indicating the effectiveness of systematic synthetic phonics, and the paucity of research supporting analytic or incidental phonics for word decoding, going all the way back to Gates’ (1927) flawed study (detailed here), Glazzard (2017) insists that reading instruction is not a ‘one size fits all’ (2017, p53) model.  He criticises the ‘Simple View of Reading’ (Gough and Tunmer, 1986) for failing to atomise the steps to decoding proficiency, but he conflates decoding with reading comprehension whilst ignoring the fundamental premise of Gough and Tunmer’s (1986) work: that decoding and reading comprehension are separate but correlated (detailed here).
Furthermore, Glazzard (2017) considers the Phonics Screening Check too crude an assessment of literacy development to be valuable, despite it being the only statutory reading assessment that does not evaluate comprehension alone and that has any element of phonic forensic capability: at the very least, it can suggest a problem with decoding.  Ironically, in his 2013 book ‘Teaching Systematic Synthetic Phonics and Early English’ (Glazzard and Stokoe, 2013), he emphasises the importance of systematic synthetic phonics instruction in enabling children to learn to decode, and, for children who fall behind in reading, he recommends further systematic synthetic phonics instruction in KS2.  He further states that the atomisation of words into phonemes and graphemes supports effective spelling.

 

Margaret Clark OBE (2017), Emeritus Professor at Birmingham University, is similarly unconvinced of the effectiveness of a systematic synthetic phonics approach, stating that there is no significant research to suggest that the method is more effective than analytic phonics or whole language instruction.  Although she references Chall (1966) (detailed here), she fails to cite Chall’s conclusion that approaches focusing on the decoding of print through sound awareness are the most successful, and instead quotes from the Bullock Report (DES, 1975), which emphasises mixed methods of reading instruction and implies that Clay’s (1991) psycholinguistic guessing approach (detailed here) and multiple cueing can be effective (see the Pressley, 2006 quote below).  Like Glazzard (2017), she too suggests that the Clackmannanshire study was flawed and thus inconclusive, and concludes that there is ‘no [italics in the original source] evidence to support phonics in isolation as the one best method…’ (2017, p.97).  Clark (2006) also questions the wisdom of introducing children to reading long before this takes place in other countries and recommends delaying the teaching of reading.  Yet she makes no reference to the phonetic transparency of those other languages, or to the fact that English has developed the most complex alphabetic code, one that takes years longer to master (Dehaene, 2015).  She also makes no reference to the established view that children should start to learn to read English by the age of six (Holdaway, 1979; Teale, 1984; Stanovich & West, 1989) - more details here.

 

The recommendation that a Phonics Screening Check, similar to that which is statutory in England, be introduced into Australian primary education (Buckingham, 2016) has enraged a number of high-profile academics.  Gardner (2017) - senior lecturer at Curtin University and UK Literacy Ambassador to Australia - likens the screening to a ‘virus’ (2017, p113) undermining the art of pedagogy.  He sees the insistence on the adoption of systematic synthetic phonics as a reductionist model of teaching by direct instruction, one which views literacy as a systematic process leading to standardised accountability, and he sees a statutory check as a right-wing political policing imperative.  Gardner (2017) cites the mandatory inclusion of systematic synthetic phonics teaching within the English Teachers’ Standards (DfE, 2011) as evidence of this ‘policing’ (2017, p114).  Wrigley (2017), a professor at Northumbria University, concurs with Gardner’s (2017) polemic that systematic synthetic phonics teaching and screening are the result of ministerial power being ‘increasingly exercised and abused’ (2017, p213) and of policing by ‘the privatised Ofsted system of England’ (2017, p214).  He suggests that the teaching of synthetic phonics fits the right-wing political preference for explicit instruction.  In his examples of ‘skillful’ (2017, p210) teaching, he suggests an incidental approach to phonics in line with Gates’ (1927) flawed recommendations: one that analyses words and sounds with no reference to the logic of the alphabetic code, and whose repetition of words implies a preference for a whole-word approach.  A positive reference to the National Literacy Strategy (DfE, 1998) ‘searchlight’ model of teaching reading suggests he considers the multi-cueing approach at least as effective as synthetic phonics, despite the ‘whole language’ (Goodman, 1972) implications of the model, and ignores Pressley’s (2006) caution:

 

‘…teaching children to decode by giving primacy to semantic-contextual and syntactic-contextual cues over graphemic-phonemic cues is equivalent to teaching them to read the way weak readers read!’ (2006, p.164)

 

Cox (2017) - an Associate Professor in Sydney - also questions the political imperatives and urges restraint over the speed of implementation of a Phonics Screening Check in Australia, questioning whose expertise and whose knowledge is taking precedence.  Cox (2017) does not reference the rapid decline in literacy standards in Australia as a reason for urgent reform, nor the substantial research supporting systematic phonics instruction (although, like all of these academics, he states how important phonics is).  He, like Gardner (2017), cites Ken Robinson’s (2015) claim that the commercialisation and politicisation of education is damaging the prospects of young people.  Robinson’s (2015) promotion of creativity over knowledge, and his attacks on direct instruction models of teaching, are by implication attacks on systematic synthetic phonics instruction.  His constructivist (Tracey and Morrow, 2012) approach to education is not compatible with effective reading instruction (Geary, 2007) - details here.

 

Further support for this resistance to a statutory phonics check in Australia comes from Adoniou (2017) - Associate Professor, Faculty of Education, Canberra University - who notes that its introduction in England has seen a 23% increase in children attaining the threshold without a corresponding improvement in reading assessment at the ages of six and seven.  The improvement in English children’s reading scores at ten and eleven years old is not cited.  She also notes that Australia ranked higher than England in the PISA international reading assessments (OECD, 2015), but fails to reference England’s rise in the rankings, Australia’s decline, and the fact that none of the tested cohort in England would have been subject to mandatory early phonics instruction.  There is no mention of England’s rise to within the top ten of internationally ranked countries (including those with languages of far greater phonic transparency) in the PIRLS international reading assessment (IEA, 2016), or that England is ranked significantly higher than Australia in this measurement of ten-year-olds, or that the assessed English cohort would have been subject to phonics teaching and assessment.  Adoniou (2017) is insistent that the specific teaching of phonics is not yielding returns for England because ‘English is not a phonetic language…and synthetic phonics programmes make phonological promises to students that English cannot keep…’ (2017, p.177).  She makes these claims despite English being a language encoded using an alphabet to represent its forty-four phonemes, despite 90% of the language following regular encoding precepts (McGuinness, 2004), and despite no word in English being completely phonologically opaque (Gough and Hillinger, 1980; Share, 1995; Tunmer and Chapman, 1998).
She also fails to note that where phonemes with different spellings are taught in conjunction, there is no ‘promise’ (Adoniou, 2017, p.177) of phonic regularity but an explicit articulation of this complexity of the encoding.  Adoniou (2017) suggests that no additional solution to Australia’s declining literacy standards is necessary beyond further unspecified teacher education, maintenance of the status quo, and specific (though again unspecified) interventions - mainly focused on vocabulary and comprehension instruction rather than decoding - for those students who fail to learn to read.

 

Dombey (2017) - Emeritus Professor of Literacy, Brighton University - proposes that reading is more about making sense of text than about privileging the identification of words, and cites Taylor and Pearson’s (2002) study, which she suggests indicates that an approach combining enjoyment, syntactic analysis and phonetic examination in equal measure is more efficacious than phonics instruction alone.  The study was not, as implied by Dombey (2017), a comparison of the two approaches but an analysis of the variety of perceived behaviours present for effective reading outcomes in schools across a number of studies.  In the final study, six generalised behaviour criteria were identified for schools with high-attaining reading outcomes, none of which cited either phonics or a balanced approach to literacy.  Where a balanced approach did feature as an indicator, it featured in studies from the 1990s, when the multi-cueing approach was more prevalent; where prior studies were named, direct instruction was cited as an important factor, with no mention of either phonics instruction or a balanced approach.  It is apposite to mention that an important factor cited by the study (Taylor and Pearson, 2002) as an indicator of reading success was the use of small phonics-focused intervention groups for children who were struggling with reading.  Dombey (2017) cites Clay (1972) and Goodman (1995) as useful informants on the teaching of reading despite the weight of research discrediting their approaches (Pressley, 2006) - more details here - and she goes on to suggest that Cattell’s (1886) research is evidence of whole-word decoding as a relevant strategy for emergent readers, despite the word superiority effect developing from phoneme recoding and not logographic recognition (Reicher, 1969) - more details here.

 

All of these academics acknowledge the importance of phonetic approaches to word decoding for emergent readers, and the majority recognise synthetic phonics as the most effective strategy for decoding unfamiliar words. What they suggest, however, is that synthetic phonics instruction is not empirically superior to analytic phonics for the teaching of reading. 

 

There are two key elements that need addressing when assessing the substance of this claim.  The first is the evidence upon which the conclusions are based.  All of the academics question the results of the Clackmannanshire research of Johnston and Watson (2005), which directly compared the two approaches, by criticising a research design that did not account for the myriad variables that could confound the results, such as teacher efficacy, home instruction and the amount of reading occurring in other parts of the curriculum (Wyse and Goswami, 2008; Wyse and Styles, 2007; Ellis and Moss, 2014).  This is disingenuous, as it fails to recognise the complexity of designing research studies within working schools where different curricula, foci and emphases exist.  Schools are not laboratories, and isolating effects and controlling variables is almost impossible.  Educational researchers are aware of this: data are necessarily compromised but not necessarily invalidated.  Further criticism targeted the greater amount of time spent on synthetic phonics in the settings that utilised the approach, as opposed to the settings where children were taught analytic phonics.  This exposes a crucial flaw in analytic phonics: the amount of time spent on phonics analysis is dependent on pupils’ word recognition failures, on teacher monitoring and analysis of those failures, and on teacher decisions about when and what to analyse.  Analytic phonics is de facto not systematic.  It also highlights an advantage of the synthetic phonics programme utilised by the study and developed by Lloyd (McGuinness, 2004), who observed that young children had the capacity to focus for longer periods than assumed: pupils were taught as a whole class and received at least twenty minutes of daily instruction, and often more.  That systematic synthetic phonics teaches phonics for longer and more consistently cannot be seen as a factor undermining the study: it is an inherent quality of the approach.
Whatever the limitations of the research design, the improvements for the children exposed to synthetic phonics instruction over those exposed to analytic phonics are emphatic, especially when effect sizes are examined.  When systematic synthetic phonics was compared directly to analytic phonics alone, the effect sizes were unequivocal: close to 1.0 for reading and greater than 1.0 for spelling (McGuinness, 2004).  These gains continued to hold well past the life of the study.

 

The second element referenced by many of the academics is the ambivalent conclusion of the Torgerson et al. (2006) research: that although there was evidence to suggest that systematic phonics should be integral to early reading, there was insufficient evidence to suggest that it was superior to analytic phonics.  This is a very selective interpretation.  What the conclusion actually said was:

 

‘The current review has confirmed that systematic phonics instruction is associated with an increased improvement in reading accuracy…However, there was little RCT evidence on which to compare analytic and synthetic phonics, or on the effect on reading comprehension or spelling, so that it was not possible to reach firm conclusions on these.’ (2006, p47)

 

The conclusion on reading accuracy was firm: systematic phonics was superior.

 

The academics have conflated reading instruction with decoding instruction.  Systematic phonics enables effective word decoding.  It is not reading fluency instruction, nor is it reading comprehension instruction, text immersion or vocabulary instruction.  It is, nonetheless, a prerequisite for all of these.  Analytic phonics, along with semantic and syntactic cueing, can be an essential element of reading, comprehension and spelling instruction once the majority of the English alphabetic code has been comprehended and absorbed (Perfetti, 2007), the word superiority effect (Reicher, 1969) is starting to be established and automaticity is developing (more details here).  It is not, however, as Torgerson et al (2006) found, superior to systematic phonics for word reading accuracy.

 

It seems worth repeating the words of Daniels and Diack (1953):

 

‘An alphabet is a system of symbols for sounds and these symbols are written down in the order in which the sounds are made. A printed word is a time chart of sound. The act of reading is the act of translating those time-charts into the appropriate sounds, the sounds being associated with things, events and emotions.’

 

The translation of the time-chart comes first. The 'things, events and emotions' come afterwards.

 

You may be interested in reading:

 

1. https://www.thereadingape.com/single-post/2018/01/07/Phonics---the-lever-of-civilisation

 

2. https://www.thereadingape.com/single-post/2018/03/30/Why-English-is-just-so-darned-difficult-to-decode---a-short-history

 

3. https://www.thereadingape.com/single-post/2018/04/09/Phonics-2000-years-ago---Quintilianthe-accidental-phonics-genius-with-a-toxic-legacy

 

4. https://www.thereadingape.com/single-post/2018/04/29/Martin-Luther---the-unshackling-of-literacy---but-a-fatal-flaw

 

5. https://www.thereadingape.com/single-post/2018/06/01/Universal-literacynot-in-England---the-historical-roots-of-a-divided-education-system

 

6. https://www.thereadingape.com/single-post/2018/06/10/How-America-helped-ruin-English-reading-instructionwith-some-help-from-the-Germans

 

7. https://www.thereadingape.com/single-post/2018/07/08/Short-term-gains-long-term-pain-The-Word-Method-starts-to-take-hold

 

8. https://www.thereadingape.com/single-post/2018/08/06/How-assessment-and-inspections-ruined-reading---England-1870

 

9. https://www.thereadingape.com/single-post/2018/09/01/Efficacy-vs-intuition---phonics-goes-for-a-knockout-blow-in-the-late-19th-century-but-is-floored-by-bad-science

 

10. https://www.thereadingape.com/single-post/2018/10/14/Huey-and-Dewey---no-Disney-double-act

 

11. https://www.thereadingape.com/single-post/2018/10/23/Virtual-insanity-in-reading-instruction---teaching-reading-without-words


 

12. https://www.thereadingape.com/single-post/2018/11/25/Bad-science-the-end-of-phonics

 

13. https://www.thereadingape.com/single-post/2018/12/09/Rudolph-Flesch-points-out-the-elephant-in-the-room

 

14. https://www.thereadingape.com/single-post/2019/01/03/Why-the-reading-brain-cant-cope-with-triple-cueing

 

15. https://www.thereadingape.com/single-post/2019/02/03/The-final-battle-of-the-reading-wars-Why-1965-should-have-marked-the-end-of-the-war

 

16. https://www.thereadingape.com/single-post/2019/02/20/The-very-peculiar-case-of-Goodman-Smith-and-Clay-or-why-the-whole-language-approach-just-wont-die

 

17. https://www.thereadingape.com/single-post/2019/02/24/The-whole-language-reading-Rasputin-gets-a-blow-to-the-head---simple

 

18. https://www.thereadingape.com/single-post/2019/03/17/Totally-automatic---the-sticking-point-between-phonics-and-fluency

 

19. https://www.thereadingape.com/single-post/2019/04/07/All-phonics-instruction-is-not-the-same

 

Follow the reading ape on twitter - @thereadingape

 

 

 
