The whole language reading Rasputin gets a blow to the head - simple.

February 24, 2019

Whilst Smith and Goodman were condemning millions of children to illiteracy - 60% of Californian nine- and ten-year-olds taught by their whole language methods over seven years were unable to gain even a superficial understanding of their books (Turner and Burkard, 1996) - two US academics were researching the use of an analytical algorithm to assess which reading interventions struggling readers required.  Gough and Tunmer (1986) studied 250 children over seven years, from kindergarten through to fourth grade.  Observing that some children had highly advanced language comprehension skills (oral comprehension) but poor reading comprehension skills, they found they could predict reading comprehension by measuring a third dimension.  By testing children’s decoding skills with a pseudo-word assessment and multiplying the resulting percentage by (not adding it to) a language comprehension score, they were able to predict the child’s reading comprehension level directly.  A deficit in either decoding or language skills always resulted in a lower reading comprehension score.  Where language skills were high, an improvement in decoding led to an equal (not partial) improvement in comprehension scores.
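The arithmetic of the model can be sketched in a few lines. This is only an illustration of the multiplicative relationship described above; the function name and the scores are hypothetical, not values taken from Gough and Tunmer's study.

```python
# Illustrative sketch of the multiplicative model: reading comprehension
# (RC) is the product of decoding (D) and language comprehension (LC),
# each expressed as a proportion between 0 and 1.
# The scores below are hypothetical, chosen only to show the effect.

def reading_comprehension(decoding: float, language: float) -> float:
    """RC = D x LC: a deficit in either factor lowers the product."""
    return decoding * language

# A reader with strong language but weak decoding is capped by decoding:
print(round(reading_comprehension(0.4, 0.9), 2))  # 0.36

# With language held high, improving decoding lifts RC in proportion:
print(round(reading_comprehension(0.8, 0.9), 2))  # 0.72
```

Note that multiplying, rather than adding, is what makes the model so strict: doubling the decoding score doubles reading comprehension, and a zero in either factor yields a zero product.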

 

As an analytical tool for focusing reading intervention it proved invaluable.  Reading comprehension (the reason for reading) could be improved by interventions addressing a reader’s deficit in either decoding or language comprehension.  If two of the three variables were known, the third could be accurately predicted. Hyperlexic readers (good decoding but poor language comprehension) would receive specific instruction in language, whereas dyslexic readers (poor decoding but at least average language comprehension - this is Gough and Tunmer’s (1986) definition of dyslexia) would receive decoding interventions. Poor decoding allied to poor language comprehension identified the ‘garden variety’ poor reader.
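The triage logic described above can be sketched as a simple classifier. The 0.5 cut-off and the function name are illustrative assumptions, not thresholds specified by Gough and Tunmer; the categories are theirs.

```python
# Hypothetical sketch of the intervention-focusing logic, using Gough
# and Tunmer's (1986) reader categories. The 0.5 cut-off is an
# illustrative threshold, not one specified by the study.

def reader_profile(decoding: float, language: float,
                   cutoff: float = 0.5) -> str:
    """Suggest where intervention should focus, given two scores (0-1)."""
    if decoding >= cutoff and language < cutoff:
        return "hyperlexic: focus intervention on language comprehension"
    if decoding < cutoff and language >= cutoff:
        return "dyslexic: focus intervention on decoding"
    if decoding < cutoff and language < cutoff:
        return "garden-variety poor reader: address both components"
    return "both components adequate: no intervention indicated"

print(reader_profile(0.3, 0.8))  # dyslexic: focus intervention on decoding
```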

 

Although valuable, the analytical element of the study was not its most significant contribution to reading instruction.  By identifying decoding as an essential predictor of reading comprehension, Gough and Tunmer (1986) exploded the fallacy of Gestalt theory: that the whole word was mightier than the letter.  Without the ability to decipher the sounds represented by letters and blend them to identify words, reading was not possible.  The multiplier effect was crucial: a deficit in either decoding or language skills always resulted in a lower reading comprehension score, since the product of two fractions is necessarily smaller than either the multiplicand or the multiplier. The conclusion was emphatic: an enhanced ability in one area could not compensate for a deficit in the other.

 

Whole language theory, with its emphasis on comprehension and its eschewal of decoding, could therefore not work as a framework for reading instruction: it was simply not mathematically possible.

 

A watershed in reading instruction had been reached. Phonics was key; not the only key, but key nonetheless.

 

Gough and Tunmer’s (1986) research developed into the influential ‘Simple View of Reading’, further modified by Hoover and Gough (1990), which drew three clear conclusions from the study. Firstly, the highly complex manifestation of reading fluency can be broken down into two identifiable components: the ability to decode and the ability to comprehend language.  Decoding is the ability to decipher text quickly and accurately; language comprehension, although not specific to reading, relates to domain knowledge, reasoning, imagining and interpretation (Kamhi, 2007).  Secondly, any difficulty in reading is the result of poor language comprehension, poor decoding skills, or a deficit in both.  Thirdly, for reading to be confident and competent, facility in both areas must be strong; strength in one cannot compensate for weakness in the other. Thus a student with excellent language comprehension will achieve a reading comprehension level no higher than their decoding level, and any improvement in their decoding will produce an improvement in their reading comprehension.
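Because the framework reduces to the identity RC = D × LC, knowing any two of the measures fixes the third. The sketch below assumes all three are expressed as proportions between 0 and 1; the function name and the example scores are illustrative, not drawn from the studies cited.

```python
# A sketch of the framework's identity RC = D x LC: given any two of
# the three measures (as proportions 0-1), the third is determined.
# Names and example values are illustrative assumptions.

def solve_simple_view(rc=None, decoding=None, language=None):
    """Return whichever of RC, D and LC was left as None."""
    provided = [v for v in (rc, decoding, language) if v is not None]
    if len(provided) != 2:
        raise ValueError("provide exactly two of the three measures")
    if rc is None:
        return decoding * language
    if decoding is None:
        return rc / language
    return rc / decoding

# e.g. a reader scoring 0.45 in reading comprehension with decoding of
# 0.9 must, on this model, have language comprehension of 0.5:
print(solve_simple_view(rc=0.45, decoding=0.9))
```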

 

The implication for teaching and teachers was clear: if both parts of the framework were not attended to, children would not become competent readers.  A systematic phonics programme, delivered early in a child’s schooling and continued until mastery, was therefore essential, allied with a concentration on language development, vocabulary building, widening content knowledge and exposure to diverse narratives and stories (Lloyd-Jones, 2013).

 

The robustness of the framework is supported by numerous studies.  Kendeou et al. (2009) employed factor analysis of the diverse measures of assessment undertaken independently by two separate research studies, in the USA and Canada, analysing both decoding and comprehension; their conclusions supported the generality and validity of the ‘Simple View of Reading’ framework and its algorithm. In 2008 the Institute of Education Sciences in the USA commissioned a series of large-scale studies which provided persuasive evidence of the role of the ‘Simple View of Reading’ (Hoover and Gough, 1990) and the efficacy of the framework (Vaughn, 2018).  Further advocacy of specific phonics instruction was implicit in Lonigan’s (2018) examination of 720 children in grades 3 to 5 in the USA, which supported the predictive validity of the components of the ‘Simple View of Reading’ (Hoover and Gough, 1990).  This predictive validity was further supported by Chiu’s (2018) analysis of prekindergarten children’s outcomes at grade 3, which found that the listening comprehension and word recognition constructs were strongly related.

 

So robust are the constructs of the ‘Simple View of Reading’ that it featured in the Rose Review (2005) of the primary curriculum of England, and the language of its framework appears in the Primary National Curriculum of England (2014).

 

It is, nonetheless, not without its problems, and many of these relate to the construct of ‘language comprehension’ and the complexities of acquiring the domain knowledge, reasoning, imagining and interpretation (Kamhi, 2007) associated with it, as opposed to the clearer progression of decoding competency, which can be acquired through a systematic approach to phonics instruction.  This is exacerbated by the difficulty of assessing language comprehension and reading comprehension, and the lack of a testing and scoring format suitably reliable and valid for large-scale studies (Snow, 2018). Furthermore, as Francis et al. (2018) maintain, the framework is not designed to contrast the text and cognitive skills that may be required to address the selected reading.  The most evident issue, however, lies with the false impression, created by displaying reading comprehension alongside decoding, that reading comprehension is, like decoding, a single construct: unidimensional, and not as complex as it really is (Catts, 2018). To be fair to Gough, Hoover and Peterson (1996), they acknowledge that differing content knowledge can confound the results of the model.  Thus the great strides made in decoding provision in recent years have not been mirrored by improvements in reading comprehension: strategic reading instruction (Swanson et al., 2014) will only be effective when combined with adequate content knowledge (Willingham, 2006).

 

Perhaps the greatest flaw in the framework, however, lies in its inherent assumption that decoding efficiency is sufficient for fluency.  This ignores what is possibly the missing link in reading acquisition: automaticity. Decoding competently is not the same as decoding automatically at speed. The word superiority effect (Reicher, 1969) is only evident once a reader attains automaticity; before this, an emergent reader can only decode using phonological processing (Stanovich, 1990).  Decoding can be practised to automaticity and, more crucially for the model, automaticity can be tested.  It can thus form part of the algorithm.

 

The Reading Ape will share the results of a study into automaticity in Autumn 2019.

 

This post is number 17 in a series of posts.  

 

You may be interested in reading:

 

1. https://www.thereadingape.com/single-post/2018/01/07/Phonics---the-lever-of-civilisation

 

2. https://www.thereadingape.com/single-post/2018/03/30/Why-English-is-just-so-darned-difficult-to-decode---a-short-history

 

3. https://www.thereadingape.com/single-post/2018/04/09/Phonics-2000-years-ago---Quintilianthe-accidental-phonics-genius-with-a-toxic-legacy

 

4. https://www.thereadingape.com/single-post/2018/04/29/Martin-Luther---the-unshackling-of-literacy---but-a-fatal-flaw

 

5. https://www.thereadingape.com/single-post/2018/06/01/Universal-literacynot-in-England---the-historical-roots-of-a-divided-education-system

 

6. https://www.thereadingape.com/single-post/2018/06/10/How-America-helped-ruin-English-reading-instructionwith-some-help-from-the-Germans

 

7. https://www.thereadingape.com/single-post/2018/07/08/Short-term-gains-long-term-pain-The-Word-Method-starts-to-take-hold

 

8. https://www.thereadingape.com/single-post/2018/08/06/How-assessment-and-inspections-ruined-reading---England-1870

 

9. https://www.thereadingape.com/single-post/2018/09/01/Efficacy-vs-intuition---phonics-goes-for-a-knockout-blow-in-the-late-19th-century-but-is-floored-by-bad-science

 

10. https://www.thereadingape.com/single-post/2018/10/14/Huey-and-Dewey---no-Disney-double-act

 

11. https://www.thereadingape.com/single-post/2018/10/23/Virtual-insanity-in-reading-instruction---teaching-reading-without-words


 

12. https://www.thereadingape.com/single-post/2018/11/25/Bad-science-the-end-of-phonics

 

13. https://www.thereadingape.com/single-post/2018/12/09/Rudolph-Flesch-points-out-the-elephant-in-the-room

 

14. https://www.thereadingape.com/single-post/2019/01/03/Why-the-reading-brain-cant-cope-with-triple-cueing

 

15. https://www.thereadingape.com/single-post/2019/02/03/The-final-battle-of-the-reading-wars-Why-1965-should-have-marked-the-end-of-the-war

 

16. https://www.thereadingape.com/single-post/2019/02/20/The-very-peculiar-case-of-Goodman-Smith-and-Clay-or-why-the-whole-language-approach-just-wont-die

 

 

 

Follow the reading ape on twitter - @thereadingape
