
A (not very short) history of reading...


LANGUAGE

The evolution of opposable thumbs and an erect posture was fundamental to mankind’s development, but when the ancients suggested that “speech is the difference of man” they were highlighting the one dominant factor that separates Homo sapiens from the rest of the animal kingdom, the quantum leap that truly divorces man from beast: language (McLuhan and Logan, 1977).


This ability to communicate specific ideas and detailed, diverse information almost certainly developed first through gestural communication, the remnants of which humans still employ today (McNeil, 1992). However, the need to communicate at greater distances, and through a means that did not distract busy, tool-utilising hands, necessitated the involvement and improvisation of primates’ oral capacity (McNeil, 2000). This required a three-fold adaptation. Firstly, within the structure of the throat: the hyoid bone on the underside of the chin in humans is uniquely ridged and supported by a series of finely-tuned muscles connecting to the tongue, the mouth floor, the pharynx and the epiglottis, indicating adaptation for the unique purpose of speech (Rutherford, 2016). Secondly, the human brain has a refined zone, Broca’s area (Broca, 1864), adapted for the cognitive complexity of language. Thirdly, humans have the communication gene FOXP2, present in all primates but adapted in humans with a unique protein sequence enabling speech (Rutherford, 2016). As the development of speech is archaeologically invisible, it is almost impossible to date its introduction into the primate communication paradigm except through this anatomical and genetic analysis. Archaeological study of the advancement of the structures of the throat and lips suggests that diverse language was only possible from one hundred thousand years ago (Jablonski and Aiello, 1998).


This had a profound impact on the ability of humans to survive in a hostile environment for which they had few, if any, physical advantages, as it enabled a socio-centric, cooperative group dynamic to develop (Haidt, 2013). Even more profound was the cognitive impact on humans. Spoken language structures the way that man engages with the world cognitively, perceptually and communicatively (Whorf and Carroll, 1956). Thus, with the spoken word, humans were able to cultivate the concept of abstraction. Althusser (2017, p. 51) saw language as ‘the first abstraction’: a fundamental hinge for humanity’s philosophical, and by implication cognitive, development. A word, he argued, possesses the existence of abstraction because its non-figurative sound is an intellection of its association with the actual entity it designates. Thus, through this abstraction from object or thought to sound, and the oral communication of that entity, the ability to speak developed into the foremost means of interaction, enabling a profound advance in cognitive development for humankind.


Nonetheless, oral communication fades instantly (Schmandt-Besserat, 1996), is limited to the capacity and retrieval accuracy of the brain’s long-term memory, and is hampered by an inability to record ideas, concepts and information and transmit these over temporal and geographical distances. The next great leap in human development occurred with the development of writing which, as Breasted (1926) noted, ‘had a greater influence on uplifting the human race than any other intellectual achievement in the career of man’. When the poet Francisco de Quevedo (2017) wrote that books enabled him to have ‘a conversation with the deceased, and listen to the dead with my eyes’, he articulated the power of the written word to record, preserve and share human intellectual ideas and endeavours. Indeed, the boundary between history and prehistory is defined by the existence, or otherwise, of written artefacts (Rutherford, 2006). This power to ‘transmit language at a distance in space or time’ (Istrin, 1953, p. 109) enhanced man’s capacity for abstract thinking, as it involved what Althusser (2017, p. 52) referred to as a ‘double abstraction’: encoding an entity to sound and sound to symbol, and the reverse process of abstraction when decoding. Writing, as Schmandt-Besserat (1996, p. 1) pronounced, ‘changed the human condition.’


Oral communication developed over an evolutionary period of millions of years and is biologically primary (Geary, 2007) in that it requires no didactic instruction. Writing, on the other hand, according to Geary (2007), is biologically secondary: it cannot be learned through cognitive osmosis, must be taught and studied, and was cultivated and refined over a relatively short period of five thousand years. The implication is therefore that the human brain did not evolve to write but that writing has evolved to fit the constraints of the evolved human brain (Dehaene, 2014).


THE DEVELOPMENT OF WRITING AND READING

Writing systems, and the corresponding ability to decode those systems and thus decipher meaning, were developed by different civilisations at different times, in different places and for different reasons. The manner of the recorded communication was crucial to societal, administrative and military expansion. Innis (1953) argued that writing upon stone and clay facilitated the control of communication over time, thus enabling bureaucratic systems to develop, whereas the invention of parchment and paper added geographical control to this temporal command with the ease of transportation of records and communicated thoughts resulting in profound military advantages (McLuhan and Logan, 1977).


Nonetheless, the specific individual writing systems that developed had a profound effect upon those individual civilisations, their development, their societies and their influence, both in terms of the contemporaneous power they were able to exert and in terms of their intellectual, philosophical, economic and social legacy. Furthermore, the complexities and sophistications within those coded systems, and the associated and specific resultant cognitive demands, afforded significant cultural leverage within and across civilisations in terms of both time and space (Havelock, 1976). The extent of this leverage was further dependent on the complexity and universality of the coding and decoding paradigm exclusive to specific systems. One system in particular enabled such economic and political advantage that it bequeathed to the descendants of those who developed and adapted it a dominant world position for three millennia. The effects of this advantage still resonate today.


Before writing systems developed, humans cultivated the ability to characterise objects in their literal configuration through representational graphic forms (Dehaene, 2014). The first evidence we have of these is the cave paintings at Chauvet in southern France, which date from 33,000 years ago. These images reveal an already sophisticated ability in our ancestors to represent objects and animals with a few significant marks. They exhibit an advance from the need for laborious three-dimensional representation of animals and objects by employing an economy of marks to reproduce the holistic experience (Dehaene, 2014). Although not a fully formed abstraction (Althusser, 2017), it was this shift in communication from descriptive art to an ideographic representation of an object that was seen by Leroi-Gourhan (1993) as one of the great advances towards a written code. The first evidence of Althusser’s (2017) abstraction appears in sites in Mesopotamia dating back to 8000 BC, not as symbols of language but as mathematical notation (Schmandt-Besserat, 1996) in the form of abstract calculi representing numerical units and arithmetic bases. This development from literal representation through the use of ‘tokens’ (Schmandt-Besserat, 1996) to mathematical abstractions is seemingly a vital stage in the evolution of a written code, as the birth of the Mayan writing system in Mesoamerica, completely independent of any other writing system, also had its conception in calculation and calendric systems (Leroi-Gourhan, 1993).


All writing systems began as ideographic systems in which the idea of an entity, whether tangible or intangible, is characterised by a symbol (McLuhan and Logan, 1977), with these symbols originating as highly representational resemblances. Nonetheless, whilst symbols held an inherent representation of the entity, there existed the conundrum of interpretation, with the resulting mutation of the symbol and the proliferation of different versions of it. This led to stylisation, a move away from pictography, and the acceptance of the symbol into convention and a single developed orthography (Dehaene, 2016). The problem of drawing pictures of abstract ideas and intangible concepts further undermined the efficacy and sustainability of pictograms. In fact, as a result of these terminal shortcomings, pictograms and logographs never developed beyond accounting systems and never fully evolved into complex writing systems.


The first evidence of a recordable code for language suggests it was developed by the Sumerian civilisation in the fourth millennium BC in Mesopotamia (Daniels and Bright, 1996). Initially this was an entirely logographic encoding system focusing exclusively on meaning. However, as the number of pictograms and logographs increased, the Sumerians discovered a fatal flaw in the representational code: the capacity of the human memory. Humans have an evolved genetic disposition for language (Rutherford, 2013) but an upper limit for the memorisation of representations of about two thousand individual symbols (Rasinski, 2010). With about fifty thousand words in common usage, a logographic code with one-to-one word to symbol correspondence was clearly not serviceable. Undaunted, the Sumerians economised the code (Kramer, 1963), with logographs representing groups of words and related words along with symbols for categories of meanings, but no matter how efficiently they employed their logographs they kept straying beyond the limits of human memory.


It was at this juncture that the ancient Sumerian scholars underwent a dramatic damascene moment: what if symbols represented sounds? This profound epiphany could have led to the Sumerian civilisation and its descendants dominating world culture for millennia. Sadly for them, they took a calamitous wrong turn, for instead of applying symbols to the smallest individual sounds, the phonemes, they applied them to a larger unit of sound: the syllable (Kramer, 1963). Nonetheless, this enabled them to create a far more efficient coding system, for they discovered that their language had far fewer syllables than words, which thus required far fewer symbols and kept them within the limits of the human memory. This could have enabled the first efficient syllabic coding system to evolve, but the Sumerians were cursed by their logographic past, for they now had a mixed, and contradictory, code that included syllabic symbols for sound as well as historic logographic symbols and was so complex that only a select academic elite could possibly master it (Crystal, 1995). Universal literacy was thus never a possibility.


Almost a thousand years later and thousands of miles removed, the Chinese developed a coding system based on almost identical concepts and adaptations to those of the Sumerians. Starting with logographic representations, their code developed a syllabic structure resulting in the categorisation of around two thousand three hundred tonal syllables and their assigned symbols (Stevenson et al., 1982). However, the frequency of homophones required further refinement for differentiation, with the resulting use of around two hundred classifiers that are combined with the syllable sign for the writing of words. Chinese is thus learnable (with a good memory) with considerable schooling and practice, but it is highly complex, which is why Chinese students are still learning to read and write well into their teenage years (Dehaene, 2014). It is perhaps this inconsistent complexity that, despite huge historical technological advantages, prevented the eastern civilisations from dominating science and culture on a worldwide scale (Breasted, 1926).


At the same time that the Sumerians were developing their writing system, Egyptian writing was evolving along similar lines but with even more erratic complexity (Bridge, 2003). Their hieroglyphic script employed pictograms, logographs, category determiners and sound-based symbols, which they integrated with such random abandon as to ensure that only the very few were able to access it. The privileging of the aesthetic over the practical guaranteed these ‘few’ high status, regular employment and economic stability: the archetypal hegemony and habitus, four thousand years in advance of Gramsci (2014) and Bourdieu (Silva and Warde, 2012).


These sentries of entitlement complicated Egyptian writing even further. With Egypt’s extensive trading links, the writing code was forced to expand to encode numerous foreign names and places that could not be accommodated by their system either logographically or syllabically. A further code was devised to handle these irregularities. The only way the scholars could absorb the irregular foreign languages was to assign a symbol to an individual sound (Diringer, 1968). This new code required twenty-two symbols representing the twenty-two individual sounds that could be combined to codify the foreign names. Unbeknown to them, the scholars were within touching distance of the Copernican moment in the encoding of language.


They had invented the first rudimentary alphabet. Unfortunately for their civilisation and its legacy, they were completely unaware of the potential of the orthographic revolution upon which they had stumbled.


Others were not so wasteful. The Seirites, a Semitic tribe that mined copper in the Sinai desert for the Egyptians, adopted the twenty-two consonant symbols and used them to codify their own language (Coulmas, 1993). As a trading people, their code was exposed to numerous partners in the Levant, and its efficacy and economy meant it soon spread through the Middle East via the Hebrews and Phoenicians and as far as the Indian sub-continent. It was eventually adopted by the foremost economic, military and intellectual powerhouse in Europe: the Greeks. They added the final piece to the alphabetic jigsaw.


The twenty-two symbol Semitic code included only consonants, which meant that not all sounds could be codified. In 700 BC the Greeks added vowels to the code and created the fully phonetic alphabet (Dehaene, 2013). This was the paradigm shift for encoding sound and, thus enriched, the alphabet spread to other cultures and became the basis of all western alphabets, including the one we use today. The Greeks had invented the most sophisticated writing programme ever devised (McLuhan and Logan, 1977). They were now able to transcribe unambiguously and accurately the spoken words of any language using fewer than thirty symbols. What was even more profound was that this system operated well within the confines of the human memory and was thus learnable by anyone with access to a proficient teacher. Universal literacy was now a possibility. The alphabet was invented only once in human history and has not been improved upon (Havelock, 1976).


For the first time in the history of mankind, humans were able to codify graphically the sounds of their language. Writing had been stripped of any pictographic and syllabic elements by the invention of the notation of the atoms of speech – phonemes. As a result of the ensuing cultural evolution of this process, the alphabet was honed and refined into a minimal set of symbols that were compatible with the constraints of the human brain and could enable the encoding and transcription of any word in any language (Dehaene, 2016).


Reading and writing were now available to any person with access to instruction of this coding system and the tenacity, resolve and persistence to master it. Thus, recorded communication and the sharing of ideas at a distance of space and time were no longer the exclusive preserve of a tiny educational elite. This ensured a rapid adoption of the system by any culture that came in contact with it, and with Greek power and culture in decline and subsumed by Roman expansion and authority, followed by the supremacy of Latin, the alphabetic principle dominated the Old World.


WHY ENGLISH IS SO DIFFICULT TO DECODE


Rome’s time in the sun was, nonetheless, short-lived. The retreat of a dominant, longstanding and stable military and administrative superpower created power vacuums across northern Europe. These vacuums were filled by new, aggressive, expansionist military powers, and with their military victories came their languages and a further diluting influence on regenerating local tongues.


Nowhere was this more evident than in Great Britain (Bragg, 2003).


The development of English, its vast lexicon, the difficulty of encoding its diverse cocktail of sounds derived from many languages, and hundreds of years of multifarious, random and unaccountable spellings, all shoehorned into an alphabetic code of just twenty-six symbols, have resulted in a multi-layered phonetic complexity that can appear chaotic and bewildering. Modern English has the most complex phonetic code of any alphabetic language (McGuinness, 1999). It takes a child, on average, three and a half years to learn to decode English compared with the four months it takes to learn to decode the almost entirely phonetic languages of Italian and Spanish (Dehaene, 2016). Italy reports considerably fewer cases of dyslexia, and Italians who struggle with reading are far advanced in decoding their language when compared with their English-speaking counterparts (Paulesu, 2001). This complexity is almost entirely a result of indiscriminate twists of historical fate.


So where did it all go wrong? If the Romans had remained in Britain, today we might well be speaking something close to the phonetically logical modern Italian. However, in the fifth century, with the northern tribes of the Scots and the Picts rampaging southwards, the Britons appealed to the Romans for protection. The Romans, who had deserted Britain to protect Rome from their own barbarian invasions, had no capacity or resources for assistance. The Celtic tribes then appealed to the nations of north-eastern Europe and received support from the Saxons, the Angles and the Jutes. These nations fulfilled their military brief, and then some. Liking what they found in fertile Britain, they decided to remain, and in the words of the Venerable Bede (2013, p. 46) they ‘…began to increase so much that they became terrible to the natives themselves that had invited them.’ The Celtic tribes once again found themselves dominated by a militarily superior and administratively adept interloper. The Britons fought back, but the imposition of Anglo-Saxon power was never in doubt as wave after wave of immigrants crossed the North Sea to join the fight and enjoy the spoils. Over a period of a hundred years the Germanic tribes had settled as far north as the highlands and had either destroyed the Celtic tribes or pushed them back to the very edges of the island. The invaders now referred to the Celts as ‘foreigners’ (Crystal, 1995); the Celts referred to all of the invaders with the moniker ‘Saxons’. These Saxons brought with them a language that would subsume and dominate the reviving Celtic tongue. By the end of the fifth century the foundation was established for the emergence of the English language (Crystal, 1995).


The encoding of Anglo-Saxon had been achieved through a runic alphabet that the invaders brought with them from Europe (Crystal, 2012). This was perfectly serviceable until the arrival of Augustine and the Roman Christian missionaries, who required the recording of letters, administrative documents, prayers and homilies in this new language they encountered (Crystal, 2012). The runic alphabet, with its associations with mysticism, charms and pagan practices, was anathema to the purposes of these early Christian monks. They decided instead to use the Roman alphabet, with its twenty-three letters, which had served the Old World more than adequately for many hundreds of years (Wolman, 2014). And here is where the problems begin.


Anglo-Saxon had at least thirty-seven phonemes. Encoding and decoding these sounds required either more letters, the use of combinations of letters to represent phonemes, or some letters representing more than one sound (Crystal, 2014). The early Christian monks were not linguists and they decided to utilise all of these strategies, adding four new letters to the Roman alphabet to accommodate the mismatch of sound to symbol correspondence. Thus, two of the fundamental weaknesses and complexities of decoding modern English were embedded into Old English: some letters represented more than one sound and some sounds could be represented by more than one letter (McGuinness, 1999).
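This structural point, that the mapping between letters and sounds runs many-to-many in both directions, can be made concrete with a small illustration. The sketch below is a toy example in Python, using an invented and deliberately tiny letter-to-sound table rather than a real phonemic inventory, contrasting a transparent code, where decoding is a simple lookup-and-blend, with an opaque code such as English, where a single letter offers several candidate sounds that the reader must resolve from position, context and learned convention.

```python
# A minimal sketch (toy data, not a real phonemic inventory) contrasting a
# transparent alphabetic code, where each letter maps to exactly one sound,
# with an opaque code like English, where one letter can map to several
# sounds and one sound can be spelled in several ways.

# Transparent code: decoding is an unambiguous lookup-and-blend.
TRANSPARENT = {"m": "/m/", "a": "/a/", "p": "/p/"}

def decode_transparent(word: str) -> list[str]:
    """Each letter yields exactly one phoneme, so there is a single reading."""
    return [TRANSPARENT[letter] for letter in word]

# Opaque code: each letter offers a set of candidate phonemes, so a word
# yields many possible readings that must be narrowed by context and
# convention rather than by the lookup alone.
OPAQUE = {
    "a": ["/æ/", "/eɪ/", "/ɑː/"],   # as in cat, cake, father
    "c": ["/k/", "/s/"],            # as in cat, city
    "t": ["/t/"],
}

def decode_opaque(word: str) -> list[list[str]]:
    """Each letter yields a list of candidates; blending alone cannot
    decide between them."""
    return [OPAQUE[letter] for letter in word]

if __name__ == "__main__":
    print(decode_transparent("map"))  # one unambiguous reading
    print(decode_opaque("cat"))       # several candidate readings per letter
```

The point of the sketch is simply that, once the table stops being one-to-one, decoding ceases to be a mechanical lookup; this is the burden the monks’ compromise placed on every subsequent learner of English.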


Nonetheless, although complex, the system was functional and, by the ninth century, through the rise of the Kingdom of Wessex in southern England under Alfred the Great, had started to gain traction as a standard (Grant, 1986). Following Alfred’s defeat of the Danes, his power was assured. But Alfred was more than a great warrior. Stung by the loss of a linguistic heritage through the Vikings’ attempts to destroy the developing English language by book-burning, Alfred endeavoured to have the great written works translated into English (Smyth, 1995). Furthermore, he made a commitment to education in the language and of the language, writing that young freemen in England should be taught ‘how to read written English well’ (Bragg, 2011). As a result, the monks at Alfred’s power base in Winchester translated a multitude of religious and legal writings into the evolving language, simultaneously standardising much of the orthography and logography of Old English (Crystal, 2014).


By the time Harold Godwinson was crowned King in 1066, Anglo-Saxon English was well established across much of what is England today. It was a largely phonetic language that followed a generalised rule of sound to spelling correspondence. As a result, it was relatively simple to decode and encode. Had modern English evolved directly from it, children today would find learning to read English a far simpler and faster proposition. However, another twist of historical linguistic fate awaited the language.


The Normans invaded.


Perhaps if Harold had not had to defeat the Vikings at Stamford Bridge and march two hundred and fifty miles south in two weeks, his army would have had the energy to repel the Normans (Wolman, 2014), and we would today teach our children to decode a phonetic language in the four months it takes the Finns to teach their children (Dehaene, 2006). However, Harold was defeated, William I was crowned king, French became the official language of England and the seeds of linguistic complexity were sown, ensuring that it would take English-speaking children in the twenty-first century three and a half years to learn to decode the most complicated alphabetic language on earth (Dehaene, 2006).


Had French subsumed and subjugated Old English in the manner that the Normans overwhelmed the Saxons, then French would have become the universally spoken tongue and much phonetic complexity would have been avoided. But the Norman conquerors were happy to keep State, Church and people separate and, in the centuries following the Norman conquest, England became a trilingual country (Crystal, 2012). French was the language of law and administration; Latin the language of the Church; and English the language of the street and the tavern (Bragg, 2003). As a result, Old English, the language of the oppressed, started to evolve and shift as it absorbed and mutated thousands of French words and their spellings. The result was a subverted but enriched language that became almost unrecognisable from Old English (Wolman, 2014); a language that, nonetheless, remained the language of the oppressed. With such low social status, the language stood little chance of survival.


Upward social mobility for English came in the form of a flea.


In 1350 almost a third of the population of England were wiped out by the Black Death (Gottfried, 2010). Areas of high population density, towns and church communities suffered particularly with the result that the upper echelons of society and church were decimated. The majority of survivors were those living in isolation in rural settings: the dregs of society (Bragg, 2003); and they spoke English. This had a two-fold influence on the integration of English into the ruling classes. Firstly, the Norman masters began taking English-speaking wives whose children learned both English and French. Secondly, the English-speaking peripheral class started to buy and farm the cheap land that flooded the market as a result of the mass mortality (Bragg, 2003). As a consequence, English first leaked into the ruling class and then flooded into the upper strata of society. By the late fourteenth century Richard II was conducting state affairs in English, Chaucer had written the bestselling Canterbury Tales in English and in 1385 John de Wycliffe translated the Bible into the language. English was no longer the vulgar language of the underclass. It was here to stay.


The language gained further adhesion through its adoption and use by the Chancery court at Westminster (Wolman, 2014), which needed to communicate the formal orders of the King’s court in the language of those for whom they were intended: English. This more formal utilisation required a greater standardisation of the language and a more consistent orthography, both to ensure the documentation was imbued with sufficient regal authority and for ease of replication. That meant that a far greater regularity in the spelling of words was required; ‘receive’, for example, had forty alternative iterations (Scragg, 2011). This was the start of the formalisation of the encoding of this mongrel of a language and the essence of its modern complex code. These decisions were, nonetheless, not being made by linguists with an understanding of the etymology of the words they were standardising, but by overworked scribes seeking quick solutions often based on personal whim and anecdotal evidence (Crystal, 2014). Nonetheless, the amount of documentation available for reading was minimal, as it was all hand-written, and, as a result, very few members of society had any cause to read. Whilst this was the case, standardisation of the encoding of Middle English remained within the control of a few influential scribes in Westminster. They were not to hold the power for long.


Had the standardisation of English spelling remained in the hands of these few scribes, then a regularity may have evolved rapidly, but when William Caxton imported Gutenberg’s printing technology to London in the late fifteenth century the Middle English alphabetic code was thrown into chaos (Barber, Beal and Shaw, 2013).


Writing could now be reproduced on an industrial scale through the setting of type and its printing on to paper. Thus, decisions as to spelling and orthography fell to the typesetters. Their focus was not on accuracy but on speed, economy and the aesthetic of the page (Wolman, 2014). Readers preferred justified type, which meant typesetters often omitted letters at the end of a line to ensure visual appeal (Wolman, 2014). Furthermore, the vast majority of the early typesetters were Flemish immigrants who had English as a second or third language. With pronunciation of words varying across counties and districts, and setters working at speed with tiny metal type, the same word could be spelled differently on the same page (Barber, Beal and Shaw, 2013). Caxton, as the forerunner, became the arbiter of standardised spelling, but he was not a linguist and took a ‘best fit’ approach to the crystallisation of the orthography. He did, nonetheless, contribute greatly to the initial calibration of English spelling (Bragg, 2003).


As books became more widely available and read among a growing literate class, printing presses sprang up in all major towns and English spelling became more and more erratic, further aggravated by the multiplicity of regional accents and thus pronunciations. This was exacerbated by the Renaissance and the view that Latin and French etymology, with their richer literary traditions and history, were superior to Middle English. Thus, the Latin roots of some words were preferred over their Saxon derivations. Old English differentiated long and short vowel sounds through double consonants preceding the vowel, but Latin roots (with single letter to sound correspondence) did not require the differentiation (Crystal, 2014). Numerous scribes and printers thus eschewed the parvenu double consonant in favour of the ‘superior’ single consonant. The need to untangle the English alphabetic code became ever more urgent.


Standardisation of spelling and pronunciation was attempted, but with little success. In 1569 the London lawyer John Hart published his orthography in an attempt to establish a consistency in spelling. Privileging pronunciation, he attempted to classify ‘proper speech’, but his attempts gained little traction, as ‘proper’ English was not the English spoken by many in the country, and were hence ignored. Without authority, standardisation could never tame the shape-shifting beast that was the English language. The power was in the hands of the printers, and they were beholden to their owners and their readers. As such, the sixteenth- and seventeenth-century language shapers had no institutional authority, unlike the French and Italians, who had academies to authorise and solidify spelling and pronunciation. John Dryden’s attempts to establish consistency through the creation of an English academy failed, and by the eighteenth century the English language was, in the words of Samuel Johnson (2016, p. 134), ‘in a state of anarchy’. It was the work of this remarkable scholar that set the English language on the road to alphabetic consistency, tethered the language with a hawser of standardisation and established the regularity required for effective instruction. In terms of today’s pedagogy relating to the teaching of the reading and writing of English, it was a Copernican moment.


Johnson’s stratagem had three strands: to standardise spelling, to standardise pronunciation and to provide categorical definitions of words. Untangling the complexities of the phonic coding that had erupted over the centuries, and simplifying it to ensure the language could be easily deciphered and taught, was not a priority. It was not even a consideration. When Johnson was uncertain of a standardised spelling, he always reverted to the original derivation and etymological root of the word rather than to its simplest phonic form. Nonetheless, this was at least a consistent approach. In eight years, Johnson produced a dictionary containing 43,000 words: an undertaking that took a cadre of scholars at the French academy eighty years to achieve with their language. English today, spoken, read and written across the world, has drifted remarkably little since then. The English phonic code, although excruciatingly complex, convoluted and complicated, had at last been anchored. It would take many more years before it was identified, acknowledged, unravelled, stratified and presented in a format that ensured it could be effectively taught.


THE DEVELOPMENT OF THE TEACHING OF READING USING THE ALPHABET


The teaching and learning of the ancient logographic writing systems relied on constant repetition and memory to enable reading and writing. A Sumerian schoolboy would repeatedly copy lists of symbols that represented syllables until they were memorised, and then he would write out and memorise the nine hundred logographs along with the syllable signs. He would also have to learn the correct pronunciation of all the logographs as well as the syllables (Kramer, 1963). He was not learning to decode, because there was no code to learn, and his only method of studying was memorisation. The only teaching technique was repetition of the task until accuracy was achieved, and the only pedagogical approach was the threat of corporal punishment and the actuality of corporal punishment. So difficult was the language to master that a Sumerian schoolboy would be learning this syllabic and logographic code well into his late teens (Kramer, 1963).


Contrast this with the economy and efficiency of teaching reading with the ancient Greek alphabet, which was ‘ingenious in its simplicity and monumental in its impact’ (Enos, 2012, p. 5). By evolving a system of only twenty-four letters, each of which captured a discrete but essential sound of the utterances of their language and which, when blended together, reconstructed the vocal patterns of everyday speech, the alphabet removed the need for cumbersome feats of memorisation. It was a code which required teaching, learning and practice, and it was constructed with such simplicity that it could be easily absorbed by a child. The grammatists of ancient Athens taught this exclusively until a child was seven years old. It was this one-to-one sound to symbol correspondence of the Greek alphabet that transformed the ease of teaching reading and writing. With an almost entirely phonically transparent language, once the twenty-four letter and sound representations had been learned and blending had been practised to automaticity, literacy was merely a function of vocabulary. This prised literacy from the elite and handed it to whole societies by changing reading instruction ‘from a craft skill…into an art of social power’ (Gelb, 1989, p. 184).


Virtually every individual element of Roman education was inherited from the Greeks. However, the Romans cohered the Greeks’ pedagogical principles into a system that could be replicated across their empire as a tool to service public policy (Murphy, 2012). The greatest evolution was the reversion from tuition of the individual to the Greek ‘school’ method: the education of large groups of pupils in forms. In regard to the teaching of reading, this required a systemised approach and method. Roman education was dominated and sculpted by the pedagogue Quintilian, whose Institutio Oratoria (1913) documented both approach and method. With its focus on producing worthy orators for public life, Quintilian separated a boy’s education (and it was only boys) into three distinct elements. The first required the acquisition of the basic language skills of reading and writing, followed by a period of rationalisation and practice along with the study of grammar which, when mastered, ensured that the student was ready to study the art of rhetoric. Although foreshadowing Piaget’s (1977) theories on stages of cognitive development, it differed in one crucial (and very modern) aspect: movement from stage to stage was dependent on mastery and not age. Quintilian was, perhaps, the forerunner of the mastery movement (Bruner, 1966).


It was in this first phase that the fundamentals of reading were learned. Because Classical Latin, like Greek (which was often learned first), was an almost wholly phonetically transparent language, with one-to-one sound to symbol correspondence and every letter pronounced (Allen, 1989), decoding was relatively easy to acquire; macrons indicated long and short vowels, which in Latin referred solely to the length of time the vowel was held and not to a different sound. Quintilian was insistent that sound to symbol correspondence be privileged over the learning of alphabet letter names (Van Nieuwenhuizen, Brand and Classen, 2014). His insistence that the learning of letter names in the initial stages ‘hinders their recognition…’ (1977, p. 46) was a remarkable assertion, which can only have been discovered empirically and was entirely at odds with the development of reading instruction following the demise of the Roman empire. For many hundreds of years, until the development of linguistic phonics programmes (McGuinness, 1999), letter names were always learned initially and, in some cases, spellings were taught before sounds. Indeed, the teaching of English reading became dominated by the learning and recitation of the alphabet, a pedagogy that almost certainly retarded decoding proficiency and created a toxic legacy for the emergent state education systems of the nineteenth century. Nonetheless, Quintilian’s discovery of this phonic pedagogical veracity was crucial to the speed at which decoding was achieved by his students.


Quintilian separated reading into four pedagogical stages (Lanham, 2012): the learning of sound to symbol correspondence was followed by an almost obsessive fixation on syllables; once syllables could be decoded at speed, words were introduced, followed by short sentences. This four-stage pedagogical approach remained in place for centuries and was referenced by Remigius of Auxerre in the ninth century (Lanham, 2012).


It is worth analysing Quintilian’s staged approach to the teaching of decoding because of its long-lasting resonance on the teaching of reading and its implications for the development of the teaching of the decoding of English. What is revealing, bearing in mind Quintilian had no recourse to cognitive science, is his attitude to the syllable and his insistence that syllables were required to be memorised. ‘There is no short way with the syllable,’ he wrote (1977, p. 165). As a result of this relentless pedantry, pupils practised reciting the letters of the alphabet in every possible combination before repeating the procedure with syllables and then with whole words, to the point where Quintilian has been accused by Capella (1977, p. 105) of developing a ‘fatigue-march’ progression towards reading that would have ‘bored the gods’. Nevertheless, he had stumbled across a cognitive keystone for reading: that the brain privileges the syllable when reading (Dehaene, 2014).


But did Quintilian’s pupils really memorise all of the syllables? The answer is an emphatic ‘no’. Although he placed a huge emphasis on memory and memorisation, there is no way they could have. As an alphabetic language, Latin’s consonant and vowel combinations were myriad and formed tens of thousands of syllables, far exceeding the two thousand symbols that the human mind is capable of holding in long-term memory (Rasinski, 2010). So what was happening? Quintilian’s pupils certainly learned to read; they had to. His insistence that each stage required mastery ensured that no child, irrespective of age, could move onwards unless he had fully grasped the entirety of the previous stage. This fundamental tenet ensured that no pupil entered the second ‘grammaticus’ phase without being able to decode the language (Murphy, 2012). Thus, if they had really needed to ‘remember’ the thousands of syllables required to read, no pupil would ever have progressed beyond the elementary stage of learning.


Although pupils were being taught to memorise syllables, and were certainly trying to memorise the syllables, they were doing no such thing. They were decoding the letter sounds that they had spent so long learning (and, yes, memorising, because there were only twenty-four of them and they had sound to symbol correspondence) and then blending them together at speed to sound out the syllables. Because they were encouraged to practise this relentlessly, they became adept at it: to the point of automaticity. They were then compelled to practise the reading of longer polysyllabic words and then phrases, clauses and sentences. This is early evidence of the word superiority effect (Reicher, 1969) that has enlightened modern understanding of reading.


During all of this relentless practice the assumption was that pupils were memorising. They were in fact decoding: initially at individual letter level, then at syllabic level and finally at polysyllabic level. For all the wrong cognitive developmental reasons, Quintilian had stumbled across the perfect way to teach the decoding of Latin: sound to symbol correspondence, followed by blending, followed by polysyllabic blending and word reading. Quintilian added a pedagogical pièce de résistance: an insistence on the avoidance of haste when reading. By demanding that reading be ‘at first sure, then continuous and for a long time slow…’, Quintilian (1977, p. 14) gave his charges the space to practise decoding to automaticity. He seemed to have understood the importance of this much neglected and misunderstood stage on the road to reading fluency.


Furthermore, it should be remembered that Quintilian was teaching reading in classes of up to twenty boys and that all of these boys learned to read. Contrast this with the chaos that ensued when the education systems of England and Scotland moved from one-to-one tutoring of the rich to universal, compulsory, mass schooling and found the collective teaching of reading to mastery almost impossible, and Quintilian’s reputation as a master pedagogue is well-earned.


There is, however, a caveat.


Quintilian’s genius bequeathed the teaching of reading English a lethal hangover. Latin syllables were decodable once letter sounds had been mastered; the memorisation that Quintilian demanded was in fact merely practice to mastery. But what if, as with English, single letters represented more than one sound and the same sound could be represented by a number of letter combinations? Without knowledge of how to decode those complexities, memorisation of syllables would be no more than that: a memorisation feat, and with the brain’s capacity for memorising symbols at about two thousand (Rasinski, 2010) and over fifty thousand syllable combinations in English (McGuinness, 1999), a pedagogical storm was brewing.


The perfect storm made landfall with the English language.


THE UNIVERSAL TEACHING OF READING

For the ancient Greeks, the reason to learn to read was driven by the pedagogical keystone of oratory. Orating well was the raison d’être; in fact, Plato had a deep suspicion of writing, believing that it undermined the dynamic interactivity of oral exchanges (Enos, 2012). Likewise, Quintilian’s pupils learned to read in order to access the texts of the great orators, thus enabling imitation, and they learned to write to prepare texts for memorisation prior to oration (Murphy, 2012). For the vast majority of citizens, however, reading and writing afforded little utility beyond simple transactional notation and correspondence. As for the shared enjoyment of narrative, this was exchanged almost entirely through the interface of story-telling and drama.


The spread of Christianity, with its sacred texts, prayers and homilies, ensured a growing canon of writing to access; however, these texts were almost exclusively in Latin and Greek and thus the preserve of the elite. Even with their translation into national languages there was little reason to read them, as the church in Rome insisted that the interpretation (and by extension the reading) of these sacred texts was the preserve of the clergy (Smith, 2002).


Reading was a cosy closed shop, exclusively the preserve of the powerful, the rich and the clergy. The idea of universal literacy would have been seen as laughable until an obscure German professor of moral theology decided that he didn’t get the joke and challenged the status quo with his Ninety-Five Theses (Luther, 2005).


Martin Luther (2005) and the Protestant Reformation transformed the paradigm with the establishment of the fundamental doctrine (the first of the ninety-five) that each individual was directly responsible to God for his or her own salvation. It was a cornerstone of Protestantism that the believer must not be dependent upon any priest or pastor for the interpretation of any mass or prayer but must read the Word of God directly and draw his or her own conclusions. Illiteracy exposed the individual to reliance on others for interpretation, thus thwarting the fundamental concept of Protestantism. Learning to read was now a religious necessity. In fact, Luther even attached a timescale, recommending that children be able to read the gospels by their tenth year (Luther and Tappert, 1967).


With Henry VIII’s break from Rome and the English Reformation, the irresistible rise of Protestantism in England began – with a short interlude during the reign of Mary I. By the early seventeenth century the English (and American) Protestants considered the enabling of children to read the Word of God for themselves a religious duty (Smith, 2002). For the first time since the invention of the alphabet, the tripartite algorithm of availability, value and utility was satisfied for all citizens, and satisfied with a trump card: universal literacy was a religious duty. Failing to read was failing God.


The control of universal reading instruction was thus appropriated by the Protestant church. The desire for universal literacy was now driven by a powerful, influential and motivated promoter with a just, moral and sacred investment in the project. This altruistic, religious passion for reading instruction should have been the catalyst for universal literacy and perhaps even the universal suffrage craved by the Enlightenment thinkers.


The result was a disaster for English reading instruction: a disaster that has ensured universal literacy for the English-speaking world remains a mirage to this day. Allied to the Quintilian (1977) obsession with syllables, it saw two perfect schematic storms collide into a pedagogical conflation that is the foundational reason the teaching of the skills to decode English has been so ineffective for so long.


Since the time of Quintilian, the alphabet had been the starting point for reading tuition and the learning of the letters a crucial foundation of that instruction (Graham and Kelly, 2015). This alphabetic method was formalised through the Catholic Church by the development of hornbooks (Tuer, 1896). So called because of the thin, transparent cover of horn protecting the small pieces of paper backed by a wooden frame, they usually comprised the alphabet, the Lord’s Prayer and several columns of syllables. This format of alphabet, syllables and sacred text in these forerunners of instructional textbooks developed into the most ubiquitous objects of reading education: the ABC and the primer. It was these that became the bedrock of the Protestant revolution of reading tuition and cemented the fatally flawed format for the teaching of the decoding of English.


Firstly, all of these forms encouraged the initial learning of the letters of the alphabet, in contrast to and in contradiction of Quintilian’s (1913) insistence that it should be the sounds that were learned at the outset. This was often exacerbated by the consumption of gingerbread letters to embed the memorisation of the letter names (Prior, 2012). Letter names are an abstraction of the aural representation (Althusser, 2017) and often give no clue to the sound they symbolise. Thus, the most prevalent of all primers, ‘The New England Primer’ (Watts, 1820), indicated the sound represented by a letter with a rhyme (usually religious) and a single word starting with the letter. There are two fundamental flaws in this strategy. Firstly, it ignores the inherent complexity of the English alphabetic code: that one letter may represent more than one sound and that the same sound may be represented by more than one letter. So, suggesting that the letter ‘A’ was a representation of the initial sound required to pronounce ‘Adam’ excludes all other sounds represented by the letter and precludes the letter’s use in combination with other letters when representing other phonemes. Secondly, because the letter was embedded in a rhyme that was not decodable at the learner’s instructional level, the only method of learning was memorisation; and memorisation is not reading – the entire raison d’être of the primer.


It is the issue of memorisation that critically undermined the primer, and it is here that the Protestant church must take the blame. By combining the desire to teach reading with the desire for religious instruction, the ABCs and primers produced and approved by the Protestant church conflated their intentions. These books consisted of the fundamentals of the Protestant religion, from the Lord’s Prayer, the Ten Commandments and the Creed through to catechisms in the form of questions and answers. These were highly intricate texts which required a full understanding of the complex code and polysyllabic automaticity to decode fluently; and all this with only the single sound to symbol correspondence indicated by the primer. Only the very best infant code-breakers would have been able to read them.


So why didn’t the teachers see the error of their ways, admit defeat and seek a more efficacious instructional model? Because it appeared that children were reading. Their apparent reading fluency was merely a feat of memorisation. The Lord’s Prayer, the Creed, the Ten Commandments and the catechisms were already familiar and were easily learned – it was certainly easier to memorise them than to decode them. There was no further assessment of children’s reading ability because there was nothing else to read; and there was nothing else that the church wanted them to read. Books were a luxury far beyond the means of any labouring family, and a family that could afford a bible could afford an education.


Nevertheless, little harm was done at the time. Some children managed to crack the code, and those that didn’t were certainly no worse off and at least had a greater knowledge of the religion to which they belonged and the moral guidance it dictated. It was the legacy of the approach that was toxic for universal literacy. The memorisation of words and passages created the illusion of reading and, along with the conflation of reading tuition with religious moral instruction, would affect and infect the teaching of decoding when universal education was established in the nineteenth century.


It is pertinent at this juncture to separate the teaching of reading to the upper and middle classes in England from the instruction of the working classes and the poor, for it is this latter system that has evolved into what we perceive today as state-sponsored education. It is this evolution of the teaching of reading to the masses that has resulted in a legacy of continued illiteracy in English-speaking countries.


The merchant and middle classes had an advantage: not only could they afford full-time education, they were taught to read and write Latin and Greek before English, and Oxford and Cambridge universities were, until early in the nineteenth century, taught exclusively in Latin and Greek. From the sixteenth century the sons of merchants were taught in English Grammar schools which were influenced by the Humanist movement and by Erasmus in particular (Baldwin, 1944). The method of study advanced by Erasmus was fundamentally and profoundly influenced by Quintilian’s approach to learning, with its emphasis on the development of rhetoric for the purpose of great oratory. The learning of Latin enabled the student to access the rich classical canon from which they could develop their oratorical expertise, so they were not merely restricted to religious texts; and Latin, as related earlier, has a far simpler phonic code. The Grammar schools also insisted on Quintilian’s rigid expectation of mastery of reading and writing before advancing to the ‘grammaticus’ stage, along with the teaching of sounds before letter names, and by so doing ensured the decoding of Latin to automaticity. So, although the Grammar schools expected students to read English eventually, it was with a comprehensive knowledge and mastery of the reading and writing of Latin. With considerable decoding experience of Latin, cracking the English phonic code, although not simple, was far easier. These students had the added advantage of being schooled daily for several hours for many years, ensuring ample opportunity for deliberate practice, as well as having access to a plethora of texts, not just the highly restricted biblical scripts and catechisms associated with hornbooks and primers.


Compare this to the lot of a peasant child, taught to read the most complex alphabetic code ever devised for an hour every Sunday, using a primer full of intricate religious scripts, and encouraged to learn letter names and spell before decoding. Add to this an incoherent approach to deciphering the phonic design of English, no decodable texts, an encouragement to use memory as the primary reading technique and almost no opportunity to practise, and it is astonishing that anyone learned to read. In fact, there is little evidence that anyone from the working poor, educated in this manner, did learn to read, with the possible exception of Thomas Cromwell, who ended up running the country – if that is what reading did for the masses, perhaps it was no surprise that universal literacy was viewed with such suspicion by the upper classes.


In America, the situation was no different. Driven by the Lutheran revolution, which demanded access to sacred texts, and further intensified by feelings of persecution after the exodus from the intolerance of the Old World, the conflation of reading instruction with religious ideology was even more pronounced. The manner of instruction was dependent almost entirely on the primer, and the New England Primer in particular (Smith, 2002). Other primers, like the Franklin Primer, did attempt to break with the exclusivity of religious works through the introduction of fables, but these were manifest in the form of rhymes which encouraged memorisation rather than decoding and were once more underpinned by a moral narrative. Although an American primer, The New England Primer almost certainly had its origins in England as The Protestant Tutor authored by Benjamin Harris (1680), and it is perhaps for this reason that the English and American models of reading instruction developed almost in parallel.


THE START OF A PHONICS PROGRAMME

Benjamin Franklin, the great Enlightenment thinker and politician and Founding Father of the American democratic experiment of people ruling themselves, was a passionate, proselytising advocate of phonics instruction. It was sound, he believed, that gave words their true power, and a writing system, he opined, should be based on a code for sounding out words (Wolman, 2014). Indeed, so filled with contempt was he for his defeated imperial masters and their complicated language that he sought to simplify their complex alphabetic code into a phonetic code that could be universally taught and learned by the newly emancipated members of his democracy. His ideas for a revision of the code gained little traction, but he was determined to ensure that his fellow Americans could read.


Franklin found a staunch ally, advocate and agent in Noah Webster, a lawyer turned school teacher so frustrated with the difficulty he encountered teaching children to read using available resources that he wrote his own. The Elementary Spelling Book (Webster, 1832) was the first ‘speller’ of its kind in the United States and was revolutionary in a number of ways. Firstly, Webster privileged sound over letters, insisting that ‘Letters are the markers of sounds’ (1785, p. 15). This was a radical deviation from the received wisdom that letter names and spellings were mastered before reading. Secondly, Webster represented sound in terms of the differing spellings associated with that sound and followed this up with documentation of the different sounds that could be associated with each letter, thereby articulating, in terms of pedagogy, for the very first time, the fundamental complexity of the English alphabetic code: that one sound could be represented by more than one letter and that one letter could represent more than one sound. His third innovation was to eschew the reliance upon religious texts by adding stories, fables and morals.


Had he organised the instructional lists of words in terms of their phonemes, he would have developed the first coherent, if rudimentary, linguistic phonics programme. However, he organised the lists by letters and set these out alphabetically. It was a crucial error and one that has infected phonics programmes to this day. As a result, he undermined his expressed intent to build foundations on sounds, introducing digraphs, consonant clusters and vowel digraphs to learners who had yet to encounter these complexities, with the implication that sounds were derived from letters (McGuinness, 1999). This was further confused by an incomplete phonemic analysis, ensuring that the speller was not phonemically comprehensive and resulting in the need to invent spelling rules, with rhymes and adages to recall those rules; this despite Webster insisting that rules should not be taught. As a linguist he was perhaps subconsciously aware that a code always follows its inherent logic, however complex, and that the inclusion of ‘rules’ immediately undermined its rationality and, by definition, negated its codification.


The speller was further flawed by its use of increasingly complex lists of syllables (a child would encounter 197 on the second page). The use of lists of syllables for reading instruction seems to have derived from Quintilian’s preoccupation with them when teaching reading. However, in Quintilian’s case, as discussed earlier, it resulted in the practice of decoding to automaticity: given the greater phonemic consistency of Latin, decoding syllables (even those not occurring in the language) would not undermine reading and could only enhance automatic decoding. Furthermore, it introduced students to polysyllabic decoding. With English, however, syllable reading, when separated from the word, gives the reader no indication of which particular element of the code is associated with the syllable. Dehaene’s (2011) phonological neurological route is thus satisfied but not his lexical route. Add to this the inclusion of texts that could not be decoded with the knowledge chronologically acquired, and it would seem that Webster had produced a turkey with its neck primed for instant wringing.


Since its inception, in excess of one hundred million of these spellers have been distributed (Smith, 2002). It still sells well today. Some turkey; some neck! The influence of the ‘blue-backed’ speller on the teaching of reading English, both in North America and in England, cannot be overestimated.

WHOLE WORD READING INSTRUCTION


Despite the flaws of Webster’s speller, reading instruction in the United States continued in this vein until the 1840s. True to the enlightened, democratic, emancipatory ambition of the founding fathers, more and more Americans were attending school, driven by the desire for an ‘intelligent citizenship’ (Smith, 2002) sufficiently educated to partake in democracy. The quality of the instruction was debatable, however, as evidenced by the poverty of writing in letters home from captured Yankee soldiers (Chestnut, 2011), even though the northern states had established free, compulsory elementary education in the early nineteenth century.


This altruistic beneficence born of enlightened democratic motivation drove the desire for universal free education in America, and reading instruction lay at the heart of elementary schooling. The paucity of the pedagogy was becoming apparent, and crucially so to one of America’s most powerful leaders. Horace Mann was one of the most influential educationalists of the first half of the nineteenth century and did more than any other individual to shape America’s innovative redirection of education as a driver of civic virtue rather than sectarian advancement (Cubberly, 1962). He had a deep influence on reading pedagogy. As a result of Britain’s economic envy and military insecurity, he also had a profound influence on the teaching of English on the other side of the Atlantic.


Mann almost stumbled across the essence of an effective phonics programme that could have transformed English reading instruction both for his nation and for the English-speaking world. Sadly, for universal literacy, he took a ruinous wrong turn.


Mann, along with many contemporary American educationalists, sought new sources for principles that would broaden the intellectuality of their potential electorate and make reading instruction more effective. They were drawn to the rapidly expanding military and economic powerhouse in Europe, Prussia, which was developing innovative approaches to education driven by the great Swiss pedagogue, Johann Pestalozzi. What Mann witnessed in Prussian and Saxon schools left him contemptuous of the alphabetic approach to the teaching of reading so ingrained in American schools. His description of a reading lesson in a Prussian school (Mann, 1983) has uncanny parallels with a modern synthetic phonics lesson: letter names were not used, words were built from sounds rather than letters, digraphs were used to represent single sounds, sounds were blended together to build words, children responded in unison and individually, and words containing the same sounds spelled with the same digraphs were identified.


Furthermore, the Prussian text books were secular and included a diverse range of content from stories to material on science, nature, history, geography and general information in line with Pestalozzi’s theories on whole-child engagement. Although not entirely decodable phonetically, the texts were at least graduated from the simple to the more complex, allowing emergent readers the chance of encountering less complicated writing rather than being plunged into non-vernacular, traditional ecclesiastical language.


With the wholesale adoption of the German Pestalozzean system of reading instruction and text book composition, the free and compulsory American education system seemed set fair under Mann’s dynamic and dogmatic direction. Unfortunately, the system was germinated from a bad seed. Pestalozzi’s hamartia was his famous method of object teaching.


In Pestalozzi’s schools a series of engravings were prepared representing objects and experiences which were then characterised by words. Where possible Pestalozzi privileged the actual object over an engraved image and within his primers were images of objects along with their representation in written form. ‘The teacher first drew a house upon the blackboard,’ reported Mann (1983, p116) when describing a Prussian reading lesson. Although the lesson progressed in a manner not dissimilar to modern synthetic phonics instruction, it had been fatally undermined by the Pestalozzean construct of the drawn house.


With the picture of the house represented before the letters were decoded, there was no need for the child to decode the word. The picture negated the code and returned writing to the hieroglyphs that led Egyptian script into chaos. Indeed, this was worse than hieroglyphs because now the child had to recognise the image and also remember the word. Without the need to decode the word there was only one strategy remaining: ignore the letters and remember the whole word as a shape. By borrowing from the German model of reading instruction and utilising it for the teaching of reading English, Mann was entranced by the impact of Pestalozzean pedagogy without acknowledging a fundamental difference between the two languages. As Goswami, Ziegler and Richardson pointed out, ‘English and German are very similar in their phonological and orthographic structure but not in their consistency’ (2005, p345). It was the greater transparency of the Germanic phonemic structure that militated against readers relying on whole word recognition.


The word method of teaching reading was born and the alphabetic code that had so enriched the world for two thousand years was replaced by symbol memorisation for English reading instruction. With the brain’s capacity for symbol memorisation limited (Rasinski, 2010) and the English lexicon in the tens of thousands, a future of illiteracy for the English-speaking poor was assured.
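The arithmetic behind that claim is stark. The figures in the sketch below are illustrative assumptions only (a reading vocabulary in the tens of thousands, a few dozen sounds and a few hundred common spellings); they are not taken from Rasinski (2010) or any other source cited here.

```python
# Back-of-the-envelope comparison of the memorisation load implied by each approach.
# Every figure is an assumption chosen for illustration, not a measured value.
reading_vocabulary = 50_000   # assumed size of a mature reading vocabulary (words)
phonemes = 44                 # commonly cited count of English phonemes (an assumption here)
common_spellings = 175        # assumed rough count of frequently used spellings

whole_word_items = reading_vocabulary       # one visual template per word
code_items = phonemes + common_spellings    # a fixed code, reused for every word

print(f"Whole-word memorisation: ~{whole_word_items:,} distinct items")
print(f"Alphabetic code: ~{code_items} correspondences, "
      f"roughly {whole_word_items // code_items} times fewer")
```

Whatever the exact figures, the code is a fixed, reusable set, while the whole-word load grows with every new word a reader meets.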


‘It is not perhaps, very important that a child know the letters before he begins to read. It may learn first to read words by seeing them, hearing them pronounced, and having their meanings illustrated, and afterwards it may learn to analyse them or name the letters of which they are composed.’ Thus wrote Worcester in 1828. These words, so prescient in terms of the direction that English reading instruction would take in America, were the death knell for the effective teaching of reading English. It was not until 1840 that Josiah Bumstead’s ‘My Little Primer’ was first published. This was the first reader based specifically on the word method and, in the introduction, claimed to be a method ‘more philosophical, intelligent, pleasant and rapid. It is that of beginning with familiar and easy words, instead of letters’ (1840, p1).


The method was furthered by John Russel Webb’s (1846) primer entitled ‘The New Word Method’ which advocated printing words onto cards along with a pictorial representation. Children were then encouraged to read the word on the card when it was flashed in front of them. Without any alphabetic or phonemic training, there was no possibility that children were decoding the words; they were either remembering the word through its shape or recognising the symbol. The method appeared to have astonishingly rapid results for early readers. Russel Webb’s nephew noted with delight that there was no stumbling over words but that ‘children read in pleasant natural tones’ (Smith, 2005, p83). What he was observing, however, was that there was no decoding, with its necessary lack of automaticity, only whole word recognition masquerading as reading fluency, as evidenced in the condemning words, ‘the children could not spell the words – they did not even know the names of the letters’ (Smith, 2005, p83). John Russel Webb’s New Word Method (1846) may have had little impact had it not been for a Teachers’ Institute meeting in New York that year which investigated the new method and resolved not only to publish the method but to introduce it into all of its schools. The New York School Journal was effusive in its praise, claiming that ‘Millions of children have been saved years of drudgery by the use of the method…’


Webb’s (1846) primer disregarded any phonetic training for new readers and divided its contents into three parts. Part one focused on the teaching of initial high frequency words, part two on the teaching of new words and part three on the teaching of spelling. The only mention of sound to letter correspondence comes in part two and is revealing in its order of study: ‘Reading, Spelling, The Alphabet, Sounds of Letters’ (1846). Words, spelling and the alphabet were taught before sounds and rather than letters being the representation of sound, the implication was that they created sound. The method was cognitively flawed, confusing for early readers and contradictory to neuroscience (Dehaene, 2011). Nevertheless, Webb’s word reading methods spread like wildfire fanned by a wind of evangelical belief that it was the holy grail of initial reading instruction.


What the word method was in fact achieving was the appearance of fluency in early readers by treating whole words as symbols to be recognised rather than as sounds represented by letters and letter combinations to be decoded according to the alphabetic principle. The pedagogical value of this rapid early fluency is undermined by McCandliss, Curran and Posner’s (1993) experiments with artificial alphabets, which show clear evidence of whole word reading plateauing and regressing over time against the accelerated learning that results from phonemic strategies.


THE DEVELOPMENT OF PHONICS PROGRAMMES

However confused and ineffective the teaching of reading had become in the United States, at least there was national debate and research and a desire to find the pedagogical El Dorado. No such national debate occurred in Britain at this time. The enlightenment experiment of the United States, which saw a well-educated and literate populace as a fundamental driver of democracy (Smith, 2002), had placed research and development of early reading instruction at the forefront of education, however anecdotal, heuristic and misinformed that research often was.


Despite this, there was, in England, an emergent movement away from the alphabetic method of teaching reading and some early experimentation with the utilisation of phonic methods. These were all heavily influenced by Blaise Pascal’s work on synthetic phonics in 1655 (Rodgers, 2002), which privileged sounds over letter identification and encouraged the synthesising of those sounds to decode syllables and then words. Pascal’s work built upon Quintilian’s pedagogy and, although radical, was fairly rudimentary in its analysis and application of sounds. It was, furthermore, developed for the French language: although still complex, a far more phonetically consistent language than English. Nonetheless, the idea of teaching sounds and synthesising them together to decode words crossed the Channel and gained some traction with the teaching of the complex English alphabetic code.


‘A thorough knowledge of the pronunciation of the characters of the alphabet’ was the chief foundation of reading, wrote R. Kay (2016, p2). Kay had been experimenting with phonics in a school in the late eighteenth century. Not only did Kay’s method privilege sound over letter identification, it took the simple but revolutionary step of privileging sounds in digraphs. ‘The New Preceptor’ was published in 1801 and formed the basis of all phonic teaching in England in the nineteenth century. Kay remains an unrecognised hero of phonic reading instruction; even Kay’s first name and gender are uncertain.


Kay’s methods were adapted by James Kay-Shuttleworth at his teacher training college in Battersea (the precursor to the College of Saint Mark and Saint John) and blended with the phonic method introduced from Germany (Diack, 1965). Here Kay-Shuttleworth added a key component of effective phonics teaching: the linking of sounds and writing. This was through necessity rather than design. The German phonic method, devised for orphaned children, eschewed the expense of reading books: in initial instruction words were built using marks on slate and the sounds then synthesised into words. Reading books were not introduced until children were confident encoding and decoding the words on slate. As Kay-Shuttleworth’s teachers worked exclusively with the poor, the economics of a method that required only reusable slate was conveniently efficacious. Although this was the only significant difference from Kay’s (1801) work, Kay-Shuttleworth had stumbled across a substantial innovation: the link between sounds and spelling during phonics instruction. Such was the success of this method that Her Majesty’s Inspectorate of Schools produced a handbook on the system for trainee teachers.


The privileging of sounds in early reading instruction started to gain momentum in England, culminating in the publication in 1857 of ‘Reading Without Tears, a Pleasant Mode of Learning to Read’ (Mortimer, 1890). Mortimer’s substantial teaching tome started with one-to-one sound and letter correspondence with short vowel sounds and extensive practice pages, firstly of three letter words and then short sentences, followed by simple decodable stories. Although combined with pictures and undermined by following Webster’s (1785) misstep of prejudicing letters rather than phonemes, the power of the book lay in its organisation and progression. A child never encountered a word that they had not been instructed how to decode (however confused and complicated that instruction) and for which they had not had ample practice decoding. This was then followed by the opportunity of reading a story (however simple and contrived) containing words that were entirely decipherable according to the instruction. The book then worked through progressively more complex decoding but always with substantial opportunity for practice. The book sold in excess of one million copies in England alone and was translated into French, German, Russian and Samoan (Diack, 1965). Although almost exclusively utilised by the upper and middle classes, the method proved extremely successful, with the notable exception of one Master Winston Spencer Churchill (Churchill, 2013).


THE UNDERMINING OF PHONICS INSTRUCTION

England had stumbled to the forefront of initial reading instruction and was primed for universal literacy. However, universal education was historically viewed with deep suspicion in England, with many concurring with MP David Giddy’s (Standpointmag.co.uk, 2013) assertion in the early nineteenth century that the ability to read and write would be prejudicial to the morals and happiness of the poor. It was certainly not intended to interfere with the need to work, with the Committee of the Council on Education (Lloyd-Jones, 2013) insisting in 1850 that children of the labouring classes leave school at the earliest age at which they could earn a living. The extraordinary military and economic dominance gained by Britain as a result of the first industrial revolution was driven by ruling class scientific learning, middle class entrepreneurism and working-class cheap labour (Thomas, 2013). As long as the ruling classes continued to be well-educated there was no reason to undermine the rich pool of cheap labour by educating the poor.



It was the hubristic pride of the Great Exhibition that undermined this meme. Devised as a showcase for British industrial and inventive dominance, it nonetheless became clear that in the United States, a less populous nation only sixty-eight years old, Britain had a rival of such pragmatic inventiveness and innovative clarity and perspicacity, allied to seemingly unbounded natural resources, that the balance of world economic power was shifting (Thomas, 2013). This, combined with the industrial and military advances made by Germany, forced Britain to take a cold, hard look at the reasons for these cracks in its dominant position. The Duke of Newcastle’s royal commission in the 1860s was the first indication to the world’s foremost superpower that the universal education systems of the United States and Germany might be the engine for their growing industrial and military eminence. Prime Minister Gladstone, who had previously exhibited little interest in education, attributed Germany’s triumph in the Franco-Prussian war to ‘the cause of systematic, popular education’ (Matthew, 1997, p203).


Concerns of economic and military competition, allied to the growing tide of demands for workers’ rights, the fledgling union lobby and the rise of the Christian Socialist movement, meant that universal education became an emergent political imperative. Further momentum was generated by the convenient truth that child labour was no longer a driver of industrialisation, made redundant by developments in mechanisation and technology. Indeed, child labourers were often seen as ‘more trouble than they were worth’ (Simon, 1973, p359). These forces were drawn together under the umbrella of the National Education League, whose expanding political power, influence and tactical manoeuvring resulted in the Elementary Education Act of 1870.


The Act, although blemished with stains of compromise and legislating for neither free nor compulsory education, signalled the commencement of the British government taking responsibility for the education of the nation’s children. Unlike the United States, however, whose desire for universal education was fuelled by an enlightenment aspiration to ensure all could partake in democracy, the British compromise was firmly founded in its class-riven society. The result was a three-tiered system. The upper-middle classes had their children educated in the public-school system of private education, middle class children attended the endowed schools and the lower classes were dependent on the church schools or the Board Schools and were only educated until the age of thirteen.


In terms of reading instruction, this meant that the upper and middle classes learnt Latin and Greek before English, thus empowering them with a rich phonemic awareness of the classical languages before attempting to decipher the complexities of the English alphabetic code. For the lower classes attending church schools, reading instruction was founded on the alphabet, with little sound-spelling correspondence, the early introduction of syllables and the deciphering of polysyllabic and complex sacred texts.


For those attending the newly founded Board Schools, exclusively the children of the poor and working-class, the adoption of a phonic approach to reading instruction along the lines of Mortimer’s pedagogy would have been transformational both educationally and socially by enabling widespread literacy throughout the working poor. It would perhaps have been a delicious irony had the class-riven British educational paradigm, so culpable in ensuring centuries of illiteracy, trumped the enlightenment project of American democracy by delivering universal literacy through effective elementary decoding instruction.


The irony curdled and putrefied under the heat of a stultifying statutory assessment framework.


The Newcastle Commission (1861) had recommended that school funding be related to outcomes, and this manifested itself in the form of the Revised Code (Lowe, 1862) which linked teacher pay to pupil performance in arithmetic, writing and reading. Children were tested in all three subjects (and no others) by Her Majesty’s Inspectors, with reading assessed by inspectors listening to children read and judging competence. The result was not the search for and delivery of the most efficacious instructional model for reading, but the model that ensured that children passed the test. With very large classes and wide variances in abilities, the model utilised was rote learning and drilling such that ‘children learned their books off by heart’ (Simon, 1965, p116). The improvements in reading instruction methods that had developed through the 1850s ‘were cut short’ (Lawson and Silver, 1973, p291). Even the inspectorate acknowledged the failings, noting that by the age of fourteen a child could have attended three thousand reading lessons but would as a result have neither ‘the power or the desire to read’ (Holmes, 1911, p128).


By the 1890s the recently formed National Union of Teachers was exerting its burgeoning influence, lobbying vigorously for an end to payment by results and inspection without notice and by the turn of the century had, to all intents and purposes, achieved the complete abandonment of the framework (Lawson and Silver, 1973). Teachers were now free and empowered to design, develop and promote their own methods, curriculum and pedagogy. ‘For thirty years they had been treated as machines, and they were suddenly asked to act as intelligent beings’ (Holmes, 1911, p111).


As a result of the new freedoms, curricula began to expand beyond reading, writing and arithmetic. This reduced the time allocated to reading instruction within the timetable, particularly in the initial school years. The growing recognition that younger children required teaching according to a more clearly defined pedagogy had resulted in the introduction of infant schools from 1870. The pedagogy adopted was that of Friedrich Froebel, the creator of the kindergarten and a disciple of Pestalozzi, who privileged learning through play and believed that adult responsibility for the education of a child should be concerned with the child’s natural unfolding (Morrow, 2001). Systematic, formalised reading instruction was thus not a priority in infant schools.


Nevertheless, literacy levels appeared to be rising. The nineteenth-century measure of literacy, the ability to sign a marriage register (a proportion rising from 80% to 94% between 1871 and 1891), lacks validity and credibility against today’s analysis (Stephens, 1998), but the rise in the number of publications directed at the working class (Penny journals and Penny Dreadfuls), along with a tripling of the number of lending libraries (Lawson and Silver, 2013), indicates a widespread rise in the ability to read among the working class, credited to the numbers of children attending schools. Despite the lack of universal phonic teaching, having some reading instruction and practice was improving literacy. This is borne out by recent research which suggests that without explicit phonic instruction, but with some form of teaching, an estimated 75% of children eventually break the complex English alphabetic code (Adoniou, 2017).


Meanwhile in the United States the vice-like grip of the whole-word method on initial reading instruction, and the reliance on the McGuffy Reader, was starting to diminish as the long-term effects of the method became evident. Pupils made apparently huge initial strides in reading fluency in the early grades, but as Smith notes from a contemporary journal, ‘…pupils in the higher grades are not able to read with ease and expression; they have so little mastery over words that an exercise in reading become a laborious effort at word-calling…’ (2002, p124). There is no doubt as to where the blame is placed: ‘…the inability of pupils in the higher grades to call words is the legitimate outgrowth of the teaching of the word method…’ (2002, p124).


There were further problems. By ignoring the alphabetic code and by teaching reading through symbol recognition the word method, by definition, ignored the letters that formed words. The method had raised concerns as to the efficacy of promoting accurate spelling from its first institution. In the introduction to his primer, ‘the New Word Method’, John Russel Webb includes a glowing account of its effectiveness from a teacher that contains the flippant yet portentous warning, ‘It was soon discovered that the children could not spell…’ (1846, p4). By the 1890s the concerns had grown to such an extent that Joseph Rice (an early proponent of evidence-based practice in education) conducted a survey of pupils in the public schools of the United States. Thirty-three thousand pupils were given reading and spelling tests with the conclusion that ‘phonics led to better results in reading than word methods…’ (1893, p 144). Rice (1912) also concluded that the best spelling results were obtained where the phonic method was used.


As a result, there was a significant and explicit shift towards phonic teaching methods, particularly in early reading instruction. Phonics had never actually gone away. It was an integral part of the word method; however, it was always employed after word learning and in an analytical form, often many months after reading instruction had begun. This analytic approach was predicated on word recognition and was therefore taught not as a decoding strategy but as a spelling tactic. Synthetic phonics programmes started to develop apace as the word method proved ineffective, and although they still prejudiced Webster’s (1785) alphabetic order rather than codifying sound to letter relationships, the failure of the word method led to some clear pedagogical conclusions. Rebecca Pollard’s ‘Synthetic Method’ was adamant that: ‘Instead of teaching the word as a whole and afterwards subjecting it to phonic analysis, it is infinitely better to take the sounds of the letters for our starting point,’ (1889, p3). She then states with resounding prescience, ‘There must be no guess work;’ (1889, p4).


There was, nonetheless, a shadow being cast on phonetic methods from the considerable influence of George Farnham and his tome, ‘The Sentence Method of Teaching Reading’ (1881). Farnham’s logic articulated that reading instruction material should be neither letter-based nor word-based but focus on the smallest unit of sense: the sentence (Diack, 1965). This was the word method extrapolated to the extreme. Children learned whole sentences and repeated them until they mastered the meaning and sense with all relevant stresses and emphasis. The sentence was then analysed and fractured into words, then letters and then sounds. Farnham’s method was opportune and played well with the general goal of the period of developing an interest in literature, as the learning of whole stories became integrated into the method (Smith, 2002). Farnham’s method had spectacular results when children’s reading, writing and spelling were tested against those being taught by the word method and phonics instruction, such that he maintained in the foreword of his book that the problem of how to teach reading, writing and spelling successfully ‘had been solved’ (1881, p2).


The results of the experiments were chronically invalid. Children who had been taught by his method were doing no more than reciting a learned script where all the stresses and syntax had been drilled to perfection. They were being compared to children who were decoding unseen text using phonic strategies and who had not reached automaticity, along with children who had been taught the word method and were encountering some words that they had not yet memorised. In terms of the spelling and writing, Farnham’s charges were merely encoding a script they had learned to spell and write to perfection. Nevertheless, the results were taken at face value and the sentence method became an established part of elementary teacher instruction in the eastern United States training colleges.


Although there is no concrete evidence of Farnham’s (1881) book crossing the Atlantic (Diack, 1965), England was not immune to the sentence method but sourced this pedagogy from Belgium and the work of Ovide Decroly with disabled pupils. Decroly used whole sentences to teach reading but favoured commands, which were learned verbatim, then broken down into words, then syllables and subsequently into individual sounds as phonetic analysis (Hamaide, 1977). This differed from Farnham’s methods, which explicitly eschewed phonic analysis. Nevertheless, the effect was the same: the prejudicing of word learning with later phonic analysis is not decoding instruction (McGuinness, 1999).


The sentence method was a further projection of the word method’s attempt to solve a perceived, intuitive yet non-existent problem: that early readers were not reading with fluency, often sounded stilted and wooden, read slowly, without prosody and with numerous stutters and corrections. In England, Holmes, of Her Majesty’s Inspectorate, noted in his criticism of phonic methods of instruction, ‘the dismal and unnatural monotony of sound which pervades every classroom…’ (quoted in Diack, 1965, p77) and Russell Webb’s ‘The New Word Method’ delights in the absence of ‘unpleasant tones and drawling,’ (1846, p3) as a result of his whole word teaching. To this day, the sounds of a novice reader decoding can be painful and were dismissed by children’s poet Michael Rosen as ‘barking at print’ (2012, michaelrosenblogspot.com).


What was being ignored by these critics was the necessary phase of decoding through which a novice reader must pass as they associate the letters and combinations of letters with the sounds they represent, articulate those segmented sounds and then synthesise them to form a word. This phonological neurological route will be further complicated as the reader’s brain attempts to access meaning (which may or may not exist for a young child) through a lexical neurological route (Dehaene, 2014). The oral expression of this process, and of the developing reader practising within this phase, will necessarily be slow, stuttering and often monotone. By attempting to leapfrog this essential phase of reading and reading practice by favouring prosody over decoding and automaticity, the word method and sentence method advocates privileged the appearance of reading fluency over the potential actuality of it, which is why, as discussed above, reading in older students plateaued at such a low altitude of expertise. This pedagogical misconception infected reading instruction for almost a century, ultimately mutating into the disastrous whole language approach to reading (Goodman, 1973) and condemning generations of English speakers to semi-literacy.


THE END OF PHONICS

Instructional models based on phonics were proving increasingly popular, especially among teachers (Diack, 1965), although all continued to organise their teaching chronology alphabetically rather than by sound until a school mistress at Wimbledon High School in England designed the first truly ‘linguistic’ approach to reading at the turn of the twentieth century (Diack, 1965). Dale’s (1902) ‘On Teaching of English Reading’ based the teaching of reading on the forty-two sounds in the English language, and once the sounds were mastered children were taught to match each sound to its letter or digraph (McGuinness, 1999). Dale insisted that no letter names were taught initially, indeed, she used no letters whilst sounds were being learned, and she made the vatic link between decoding and spelling to avoid ‘unnecessary and fruitless labour’ (1902, p10) in the future.


Her masterstroke was almost accidental. By organising sounds by the location of their formation in the mouth when speaking – children felt the movement of their mouths, tongues and jaws when creating sounds – sounds were naturally arranged together and the letters that represented these sounds ordered accordingly. The result was the first recorded move away from Webster’s (1832) alphabetic structure of phonics to an approach that codified phoneme to grapheme correspondence systematically by sound. Children recorded these on a frame on the inside of their school desks. When learning to decode words, children came to the front of the class and held up cards with the letters that represented the sounds in the word. The sounds were individually spoken by the whole class and then the cards drawn together, and the sounds blended to create the word. The first linguistic phonics programme had been created.
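In modern terms, Dale's codification amounts to a table of sound-to-spelling correspondences and a routine of segmenting and blending. The sketch below is a simplified model of such a routine, not a reconstruction of Dale's published scheme: the tiny correspondence table, the example word and the longest-match rule are illustrative assumptions.

```python
# A simplified model of synthetic phonics decoding: segment a written word into
# known graphemes (longest match first), map each to its sound, then blend the
# sounds. The tiny correspondence table is illustrative only.
GRAPHEME_TO_PHONEME = {
    "sh": "/sh/", "ch": "/ch/", "ai": "/ai/", "ee": "/ee/",
    "s": "/s/", "p": "/p/", "r": "/r/", "n": "/n/", "t": "/t/",
}

def decode(word: str) -> list[str]:
    """Segment a word into graphemes and return the corresponding sounds."""
    sounds, i = [], 0
    while i < len(word):
        # Try two-letter graphemes (digraphs) before single letters.
        for size in (2, 1):
            chunk = word[i:i + size]
            if chunk in GRAPHEME_TO_PHONEME:
                sounds.append(GRAPHEME_TO_PHONEME[chunk])
                i += size
                break
        else:
            raise ValueError(f"No taught correspondence for '{word[i]}'")
    return sounds

segments = decode("rain")          # -> ['/r/', '/ai/', '/n/']
print(" ".join(segments), "-> blend the sounds to say the word")
```

Organising the table by sound means a digraph such as ‘ai’ is met as a single correspondence, much as the class held up a single card for it.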

Although phonics instruction techniques were in the ascendancy at the turn of the century, the word and sentence methods were too well rooted to wither. All they required was a chance watering for them to flourish once more. They hadn’t long to wait, and ironically it was scientific research that rescued them.


In 1885, American PhD student James Cattell carried out a series of laboratory studies in Wilhelm Wundt’s laboratory in Leipzig, Germany, utilising tachistoscopic techniques which measured eye fixation times on letters and words (Rayner, 1989). What Cattell discovered was that in ten milliseconds a reader could apprehend equally well three or four unrelated letters, two unrelated words, or a short sentence of four words - approximately twenty-four letters (Anderson and Dearborn, 1952). What is remarkable is that Cattell’s experiments were extremely accurate and, when retested by Reicher (1969), were found to be statistically reliable.


The generalisation posited from this outcome was that words are more memorable than letters, which resulted in the deduction that we do not read words by the serial decoding of individual letters and must, therefore, read whole words rather than individual letters. If that were true, then that left phonics, with its teaching of serial phoneme decoding and blending to identify words, as the uninvited interloper; the side-salad that nobody had ordered.


The assumptions derived from Cattell’s investigations seem perfectly rational and the teaching of reading through word recognition a logical conclusion. There were, nonetheless, two critical flaws.


Firstly, by focusing on the whole word, the pedagogy ignored the fundamental construct of words: the alphabet. Alphabetic language encoding associates letters and combinations of letters with sounds which are then decoded as sounds either orally or cognitively. Words are not visual templates and they cannot be recognised by shape or logographic quality (Dehaene, 2014). If they could, argues Rayner (1989), then changes in typeface, font, handwriting or case would catastrophically undermine the recognition of a word, and text written in alternating case would be undecipherable, which is not the case (Smith, Lott and Cronnell, 1969). Rayner’s (1989) postulation is further validated by McCandliss, Curran and Posner’s (1993) work on artificial alphabets and their long-term efficacy in comparison with word shape deciphering. In addition, the complexity of reading languages heavily dependent on symbol recognition indicates the inefficiency of word-shape recognition: Chinese children are expected to recognise only 3500 characters by the age of eleven (Leong, 1973), and it takes twelve years of study to learn 2000 logographs in Japanese kanji (Rasinski, 2010).


But if words are not read by the serial processing of individual letters in order, and the brain has a limited capacity to remember sufficient words by shape, how can words be read faster than letters? There is clearly something more complex that is creating this word superiority effect (Reicher, 1969).


Automaticity of word recognition is achieved by efficient orthographic processing (Wagner and Barker, 1994). This processing is defined as the ability to form, store and access orthographic representations (Stanovich and West, 1989) allied to a reader’s knowledge about permissible letter patterns and word-specific knowledge of real word spellings (Ehri, 1980).


Thus, automatic word recognition is not reliant on word shape but on the recoding of letter patterns which results in the conclusion that orthographic processing is contingent on phonological processing (Ehri, 1997). Reading fluency is thus dependent on the acquisition of word-specific orthographic representations (Perfetti, 2007), linked to phonological, syntactic, semantic and morphological information. Once orthographic processing is viable for a reader (as a result of numerous fixations – practice in other words), the word superiority effect (Reicher, 1969) becomes evident and the counter-intuitive phenomenon of the word being perceived before its component letters is manifest (Ehrlich and Rayner, 1989). This may seem absurd but consider the mis-spelled word ‘diffrence’ that may be read as ‘difference’ before the mis-spelling is identified (Rayner, 1989).


But if the word is processed in advance of the individual letters, and if syntax, semantic information and context play a role in that processing, then reading instruction should surely privilege those elements, and the whole language, balanced literacy and multi-cuing methods of teaching reading should be more efficacious.


This was the second critical flaw in Cattell’s (1886) assumptions. The word superiority effect is only evident once automaticity is attained by a reader. Prior to this, an emergent reader can only decode using phonological processing (Stanovich, 1990). Any initial reading instruction that does not enable a reader to decode sounds in sequence and encourage continued practice of decoding will require the pupil either to decipher the code by themselves or to achieve the pseudo-fluency of logographic (word shape) processing (McGuinness, 1999).


Although the word is the smallest unit of meaning, it is only the deep and full knowledge of letter patterns that make up the alphabetic code that enables the decoding of the word.


EARLY READING INSTRUCTION IS NOT COMPATIBLE WITH A CONSTRUCTIVIST PARADIGM

The word method now had ‘scientific’ proof that it was correct, and a powerful advocate proselytising its gospel, when Edmund Burke Huey, an American educator, stumbled upon Cattell’s findings and wrote the bestselling ‘The Psychology and Pedagogy of Reading’ (1908).


Huey’s book (1908) is not clear as to how words are perceived; he maintains that this is not only different for differing readers, but that individual readers will use different strategies to perceive different words. He maintains that initial recognition is not always of the whole word form but of ‘dominant’ parts, with the word outline becoming increasingly prevalent. This contradicts the findings of Cattell (1886) that he references, which indicate that the word is read faster than individual letters for competent readers.


Heavily influenced by Ward’s ‘Rational Method’ (1908), Huey (1908) maintained that children learned to read as they learned to talk, that the desire to learn to read could be enhanced and motivated by environmental curiosity and interest, and that vocabulary would grow with experience. This ‘natural’ method of reading follows verbatim the process of speech adoption, from the reading equivalent of ‘babbling’ to spoken sentences and fluency in reading, and contradicts Geary’s (2007) assertion that reading is a biologically secondary ability that requires instruction. Smith avers that Huey ‘does not advocate any particular method’ (2002, p116), in direct contrast to Diack, who accuses him of ‘inconsistencies’ and ‘self-contradiction’ (1965, p53) when maintaining the importance of letters shortly after suggesting that reading be free from their domination. Diack again takes issue with Huey’s attitude to phonics and his accusation that Pollard’s (1889) phonics programme was ‘purely phonic, almost arrogantly so’ (1965, p61), whilst advocating analytical phonics as an integral part of school reading instruction. The longevity and traction of Huey’s book, and the adherence it commanded in reading instruction, particularly in the United States, are clear from Smith’s testimony in 1934 that, ‘This book was the first scientific contribution to reading instruction and is still considered a standard reference…’ (p116).


The influence of Dewey’s (1916) ‘Inquiry Learning’ model on Huey is evident and noted by both Smith (2002) and Diack (1965). Dewey’s development of Rousseau’s (2012) unfoldment theory of education, further adapted by Froebel and Pestalozzi, involved the creation of an environment that encouraged curiosity and which promoted investigation and problem solving. This chimed well with Huey’s immersive, whole word pedagogy. Dewey’s growing pre-eminence both as a philosopher and an educationalist (Russell described him as ‘the leading living philosopher of America’ (2014, p730)) promoted and expanded his influence over schooling and reading instruction. His theory of knowledge and his substitution of ‘inquiry’ for ‘truth’ was, however, not accepted by Russell (2014), who characterised Dewey’s definition of truth as biological rather than mathematical and considered that severing the link between belief and the fact that verifies the belief, and replacing it with ‘inquiry’, invalidated the theory. Inquiry, Russell (2014) argued, may prove to be false and therefore not be factual.


Nonetheless, Dewey’s influence on reading instruction was substantial and remains influential to this day. His establishment of an experimental ‘laboratory school’ based on his inquiry learning work at the University of Chicago and the creation of the ‘Activity Curriculum’ was influential in the United States. This was designed to stimulate curiosity in students through stimulating experiences (Smith, 2002) which encouraged the identification of problems that required solving. Once solved, the subsequent problem was to be identified, investigated and solved. The process emphasised the importance of the environment in a child’s education and was stitched together through social collaboration with the central core being the motivation of the student (Tracey and Morrow, 2012). Dewey’s Inquiry Learning was in essence constructivist, emphasising the requirement of students to create and construct their own learning.


The repercussions for reading instruction were profound as the implication was clear: the ability to read could be constructed through problem solving, investigation and motivation. If curiosity were sparked, then learners could construct learning. But how can a child construct the unravelling of the most complex alphabetic code known to mankind?


Huey (1908) was adamant that learning to read is just the same as learning to talk, with children grasping what is within their cognitive perception and ignoring the obscure. It is interest that drives development and an interest in print that drives a child’s desire to imitate; a clear parallel with Dewey’s ‘curiosity’ (Tracey and Morrow, 2012). Formal reading exercises are eschewed by Huey, who advanced this imperative with the recommendation that if a child were unable to read a text then it should not be read; ‘Its very difficulty is the child’s protection against what is as yet unfitted for,’ (Huey, 1908, p57).


This is in fundamental breach of Geary’s (2002) model of biologically secondary development, which involves the co-opting of primary folk-psychological systems (in the case of reading, language communication). Geary (2002) defines this co-optation as the adaptation, through instruction, of evolved cognitive systems for culturally specific uses and, specifically in the case of reading acquisition, the co-option of primary language systems. The cognitive systems and neurological routes employed when reading are different to those employed for oral communication, and reading has adapted to the constraints of the human brain rather than the brain having evolved for reading (Dehaene, 2014). The cognitive systems engaged for processing phonemes are the same systems engaged when reading (Pugh et al., 1997).


READING READINESS

The polemic against teaching reading before the age of seven began in England in the late nineteenth century but became enshrined in American pedagogy, such that the acceptance of ‘delay as a teaching technique’ developed into common educational parlance (Anderson and Dearborn, 1952). Dewey (2017), indeed, espoused that a child need not be exposed to text before the age of eight and in some cases ten years old. Huey concurred with the eight-year-old threshold and recommended no formal reading instruction until the habits of spoken language had been well formed; the curriculum would focus on promoting the desire to read (Diack, 1965).


The concept of reading readiness was so inculcated into the pedagogy of reading instruction that it became a body of doctrine in itself (Terman and Walcutt, 1958) with a battery of tests to assess when a child was ready to start reading instruction. Terman and Walcutt (1958) observed that the real reason for the readiness programme was to concoct a justification for deferring reading instruction as a result of unsuccessful methods of teaching. The programme, they noted, ‘had the further advantage of giving official sanction to the notion that many children will have made no progress in reading in the first year, or even two…’ (1958, p104). Thus, a child who was struggling to read was not poorly taught, they were merely not ready.


When Wilson and Burke (1937) researched the efficacy of the reading readiness programme over four years, analysing correlations between reading achievement and assessment scores, they found that phonics combinations and recognition of letter sounds correlated highly with reading achievement, whereas those assessments indicating reading readiness had insignificant correlations. The school in which the research took place was, ironically, the Horace Mann School in New York.


This delay in the teaching of reading is contrary to later research, which concluded that children should be reading by the age of six (Holdaway, 1979; Teale, 1984; Stanovich and West, 1989).


THE EFFECTS OF GESTALT THEORY ON READING INSTRUCTION

Huey’s (1908) ‘scientific’ proof that phonic instruction was a superfluous, torpid stage of reading development resulted in a revival of whole-word reading methods. Phonics was not entirely expunged from reading education but was exiled to the peripheries of instruction or used as a post facto analytical intervention. Along with Dewey’s (2017) influential environment-driven approach to child learning, the course away from early phonics instruction was set fair with the pedagogical sails filled with these warm, intuitive, constructivist educational trade winds.


A third gust of theoretical impetus would further drive reading education towards the doldrums of illiteracy. With the publication in 1927 of ‘The Mentality of Apes’, Koehler (2013) brought Gestalt psychological theory to the world of education. The Gestalt school of psychology was founded on Wertheimer’s (Ellis, 2013) assertion that behaviour was not determined by its individual elements but that, ‘the part processes are themselves determined by the intrinsic nature of the whole…’ (cited in Ellis, 2013, p11). It was not long before educationalists extrapolated the Gestalt proponents’ theory, that immediate wholes were psychologically innate, to the teaching of reading. The whole, in terms of meaning, was not the word but the sentence. It manifested itself in the disinterring of the sentence method of teaching reading as promulgated by Farnham in the 1880s, with his assertion that, ‘The first principle to be observed in teaching reading is that things are recognised as wholes…’ (1905, p17). It is ironic that ‘The Sentence Method of Teaching Reading’ by Jagger (1929) had exactly the same title as Farnham’s manual, despite the temporal difference of forty-five years and the geographical difference of four thousand miles.


Jagger (1929) embedded Gestalt theory into the foundations of reading instruction and generalised his postulations to the very extremities of reason, maintaining that, ‘…our system of written words…is mainly indicative of sense; it is indicative of sound in a secondary degree. The written form of each word is associated directly with its meaning and indirectly with its sound…’, adding with a flourish, ‘…to teach reading ideographically, without the interpolation of sound between written sign and meaning, is therefore in accord with the present character of English spelling as well as in accord with the historical development of writing….’ (1929, p11)


Written English, Jagger (1929) emphatically concluded, is not alphabetic but ideographic.


Because the sentence was ‘the indivisible unit of thought and language’ (1927, p114), children taught by Jagger’s methods learned sentences they had designed and framed themselves from their own speech, which were then spoken repeatedly. Each sentence was accompanied by an image, and the more vivid the image the more efficacious would be the reading. Once the sentence had been repeated to mastery the image was withdrawn. The individual words within the sentence were then broken down and taught as wholes, but only those words that communicated substantive meaning; pronouns, articles and prepositions were to be absorbed inherently. Individual letters were never taught as they, as such, did not exist.


This made the teaching of writing somewhat arbitrary under Jagger’s system, for without letters how was a child to form words? Jagger was rather circumspect about this and happy for children to ‘scrawl’ initially. Nonetheless, this scrawl was only illegible to the eyes of others; to the child it made perfect sense. Gradually, Jagger insisted, these ‘scrawls’ would become increasingly accurate and represent the actual word, and only when they were fully exact would children be able to discern the individual parts of the ideographs, namely the letters, and need to know their individual labels. This left spelling in an indiscriminate vacuum, but even this contradiction did not daunt the confidence of this English pedagogue, who neatly sidestepped the issue by declaring that spelling need not be taught in infant schools and that no time should be wasted attempting to eradicate spelling errors (Diack, 1965).


Gestalt theory and the sentence method of Jagger was an explicit refutation of the phonic method and greatly influenced reading instruction. It crossed the Atlantic to further support Huey’s (1904) misinterpretation of Cattell’s (1886) science with a review in the Elementary School Journal (Smith, 1929) praising Jagger’s book as, ‘…a valuable contribution to the field of reading…’ (1929, p791). Where it had its greatest influence, however, was in the sphere of initial reading books and primers.


It transpired that, after the eschewal of letters, any evident success of the method was reliant upon the frequent repetition of words. This led to a dilemma for the compilers of reading books: frequent repetition of words in their books resulted in the inevitable logic that the number of different words needed to be reduced and those words that remained needed to be repeated as frequently as possible. This led to the paradox that the books that were devised to teach a child to read were specifically designed to withhold as many words as possible from the child learning to read. This was clearly not an attractive selling point, so the process was renamed ‘scientific vocabulary control’. Once again, science rode into the battle against illiteracy on the wrong side.


The success of ‘scientific control’ in diminishing the vocabulary load reduced some reading books’ vocabulary to a mere twenty words (Diack, 1965). Not satisfied with this meagre fare, the method further reduced cognitive demands on emergent readers by including pictures for all of the repeated words. Gestalt theory and the sentence and word methods had reached their logical but absurd conclusion: reading instruction without the need for words. The method of reducing word variety was repeated in Denmark in the 1950s. The skewed logic fitted the language, with Danish having less phonic consistency than other Scandinavian languages. All new primers were designed with a limited number of words. Research by Ellehammer (1955) found that, ‘In spite of this work on more systematic and easy primers we find an increasing number of poor readers in schools…’ (1955, p295).


READING RESEARCH IN THE TWENTIETH CENTURY

Although educational research was in its infancy in the early decades of the twentieth century, there was sufficient concern and curiosity from both teachers and universities as to the efficacy of whole word methods versus phonics that a number of rudimentary studies were conducted. The majority of these studies compared phonic and non-phonic approaches to the teaching of reading and had little concern with the actual programmes followed. Many had small sample sizes and were conducted over short periods. Nonetheless, the results are enlightening. Currier and Duguid (1916), Buswell (1922) and Mosher and Newhall (1930) all found that children taught by phonic approaches were more accurate in their reading (especially when attending to unknown words), had superior comprehension and made fewer guesses at newly encountered words than those taught using look-and-say methods. They were, crucially, slower at reading and read with less apparent prosody. Speed and fluency were by now the well-established touchstones of ‘good reading’, no matter the age and stage of the emergent reader.


In 1924 Sexton and Herron conducted the ‘Newark Phonics Experiment’ in eight New Jersey schools, specifically investigating the value of phonics in the teaching of beginning reading. Uniquely, the sample sizes were relatively large, at 220 and 244 pupils, and the study took place over three years. To control for the quality of teaching, the research design had the same teacher alternate phonic and non-phonic instruction of the groups over the period of the study. The results in the first two years indicated that children taught by non-phonic methods had made the more progress in reading. However, by the end of the third year the group that had received phonics instruction outscored the other group on every test, with particularly dramatic divergence in spelling scores. The indication from these studies was that children taught using phonics methods learned to read more accurately and comprehended better than those taught using a look-and-say method: eventually. Having a longitudinal element built into the study was crucial.


The most influential study of the era was carried out by Gates at Columbia University (1928), who compared two samples of seven-year-olds; one taught by conventional phonics approaches and the other using a whole word approach with a focus on reading comprehension. The results were statistically inconclusive; however, Gates’ interpretation of the results did not reflect this.


In the test of phonemic awareness, the phonics group performed slightly better. Nevertheless, this was interpreted as a ‘moral victory for non-phonics methods’ (1927, p223) as the group had been taught no specific phonics knowledge. In the word recognition test the results were very similar but reported as the non-phonics group showing ‘superiority’ (1927, p223) when they encountered words they had previously learned. There were no differences in the word pronunciation tests or in assessments designed to ascertain a child’s ability to see a word as a unit of its known parts, yet Gates’ (1927) analysis implies an inherent and prejudiced difference, evidenced by his use of positive comparative vocabulary. He states that the non-phonic training seemed to ‘sharpen perception’ and enable ‘rapid appraisal’ and that, when the pupils made errors, they made a ‘more detailed study’ (1927, p224). When describing the phonics group’s attempts at reading newly encountered words, he describes their efforts pejoratively as they ‘labor (sic) longer’ (1927, p224) before attempting the word. He fails to mention whether or not this added labour resulted in success.


Gates (1927) is most emphatic in his analysis where he assesses and compares the two groups’ ability in silent reading and comprehension, which he considers ‘the main objective of reading instruction’ (1927, p225). The non-phonics trained group showed ‘markedly superior attainments’ (1927, p225) and ‘a clear advantage…the non-phonics pupils were superior in silent reading by 35%...’ (1927, p225).


Despite the ambiguity of the research outcomes (except in silent reading) Gates is remarkably unequivocal when analysing the results of the studies:

‘That it will be the part of wisdom to curtail the use of phonics instruction in the first grade very greatly, is strongly implied; indeed, it is not improbable that it should be eliminated entirely.’ (1927, p226)

For one of the leading educational researchers, from one of the most influential universities in the United States, writing in one of the most highly regarded journals, to draw such an emphatic conclusion, one would expect the study to be extremely robust. It was not. Firstly, the study had a sample size of twenty-five children in each group when the recommended minimum is thirty (Cohen and Manion, 2016). Secondly, the research only lasted for six months, with results after three months very similar to those at the end of the study. This would have given the children in the phonics group sufficient knowledge to decode only words in the initial code and not be close to automaticity for the vast majority of words. In contrast, children in the whole word group would have learned a number of whole words and could thus give the impression of fluency, although they guessed words far more regularly. Thirdly, and most crucially, the reading test was timed, with the total number of words read correctly being the arbiter of reading efficacy and not the percentage of words read accurately. Children decoding utilising phonics strategies would be far slower as they were still at a letter by letter decoding stage, hence Gates’ observation of their stuttering word attack strategies as opposed to the immediate and confident guessing of the non-phonics group.


More damning is the unscientific pejorative language used in the analysis of the phonics group’s word attack strategies, which suggests researcher partiality. The evidence for confirmation bias is strong. Gates (1928) was a staunch advocate of ‘word mastery’ (the word-method of learning words by shape and sight) in reading instruction, recommending the use of flash cards, picture cards, picture dictionaries and word books as part of his programme of teaching and warning of the dangers of traditional phonics (Gray, 1929). He developed the concept of ‘intrinsic’ phonics, which was phonic analysis applied only after a word had been learned and only when the word could not be read from memory. This was far less formal than analytic phonics as there was an assumption that the phonetic structure of a word would be absorbed ‘intrinsically’ by the learner. Learners only received specific help where needed. He is unspecific as to when this approach would be necessary or where this help would come from, as the teacher had been absolved of any responsibility to teach phonics in any form and was to concentrate entirely on word learning; the phonics would come naturally. Gray (1929) states that Gates’ pedagogical emphasis was on ‘fluency, fullness of comprehension and enjoyment of reading’ (1929, p468), with an avoidance of the stuttering development of reading associated with traditional phonics approaches.


Gates’ research, article and subsequent book had a considerable influence on reading instruction in the United States. Gates’ position at Columbia University endowed him with significant authority, both in terms of research profile and the instruction of trainee teachers, and his concepts gained traction over the century – he was inducted into The Reading Hall of Fame in 1978.


According to Terman and Walcutt (1958), the claim that the use of phonic strategies had never been denied, and that the ‘reasonable…open-minded’ (1958, p93) response was to apply a variety of methods of early reading instruction according to the needs of the specific child, was a ‘legend in the mythology’ (1958, p93) created by Gates: a myth that the facts contradicted.


Nonetheless, the relegation of phonics to the periphery of initial reading instruction continued throughout the first part of the twentieth century, with more and more reading experts proselytising whole word recognition, and phonics was dealt another blow by Dolch and Bloomster’s research in 1937 into phonic-readiness. The authors administered a phonics test to first and second grade children whose mental ages they had assessed and who had been instructed in phonics. The findings appeared conclusive: children with a mental age below seven scored almost nothing on the assessment despite the fact that they were able to recognise some words by sight. The authors concluded that children below the third grade were not mentally developed enough to apply the principles of phonics and should be taught words by sight only.


Dolch and Bloomster’s (1937) study contained a design flaw. There was no analysis of the phonics instruction that the participants had received, either in terms of its amount or its efficacy. They defined phonics instruction only in general terms: whether participants had received any instruction in how letters were sounded out. Thus, children could have been taught very little phonics, and only in the approach approved at the time: incidental phonics applied after word-guessing had failed. The researchers did not attempt to instruct the participants in phonic decoding strategies relevant to their mental ages and therefore had no means of knowing whether these children had actually retained any relevant instruction or whether the instruction itself had been ineffective. By ensuring that phonics instruction was delayed by three years, the effect of the study was ultimately to ensure that phonics was only taught as a remedial technique (Terman and Walcutt, 1958).


It was not long before almost every reading expert and every book on reading instruction took one of two attitudes towards phonics instruction for early readers: it was either useless or it was detrimental (Terman and Walcutt, 1958).


The influential ‘Teaching Children to Read’ by Adams and Gray (1949) was emphatic that the general ‘contour’ (1949, p48) and familiarity of a word was crucial to reading it and that any understanding of phonics should be avoided. The book admonished parents who were college graduates for sending their children to school with an understanding of the alphabet and associated sounds. The authors note that many poor readers in the third grade are unable to sound out words and, rather than concluding that phonics instruction might alleviate this deficit, they offer this as proof that phonics instruction is inefficient for poor readers. Gray (1960), in ‘On Their Own in Reading’, states categorically that the syllabic unit is always determined intuitively before the application of phonics and recommends the teaching of the syllable as the ‘pronunciation unit’ (1960, p33).


‘Problems in the Improvement of Reading’ (McCullough, 1955) became the foremost text in remedial work with poor readers and asserted that there were six stages of learning to read. Stage one was the understanding that reading was a left to right phenomenon; stage two involved the recognition of words by shape or configuration; stage three involved initial letter sounds followed by a contextual guess; stage four required the noticing of regular endings; and stage five necessitated the recognition of the middles of words. The final stage was the sounding out of individual letter sounds within a word: in other words, phonics. Although included as a stage, and the final stage at that, the authors were not convinced of its efficacy, writing, ‘the analysis of the sound of a word may leave the amateur word sleuth with a lot of little parts, which he finds himself incapable of reassembling into meaningful wholes…’ (1946, p79).


By 1955, the most eminent publication on reading instruction, ‘The Reading Teacher’, in its editorial penned by the ‘distinguished’ (Terman and Walcutt, 1958, p110) Emmett Betts, concluded that children taught by a phonics approach ‘can call words but cannot read…’ (1955, p2). Phonics instruction was almost completely discarded and, where it was utilised, this manifested itself in remedial teaching after the second grade as part of a mixed-methods paradigm with a variety of alternative word attack strategies (Terman and Walcutt, 1958).


‘WHY JOHNNY CAN’T READ’

In 1955 Rudolf Flesch published ‘Why Johnny Can’t Read’: an excoriating, polemical attack on the prevailing ‘look and say’ methods that had been incubated and propagated by the powerful Chicago and Columbia Universities that dominated reading instruction in American schools. The book was a sensation and remained in the best-seller lists for thirty-two weeks. Flesch (1955) was emphatic: the only way to develop early reading was through systematic phonics instruction delivered at the earliest opportunity. He based his assertions on a clear and coherent analysis of eleven research papers which, he attested, emphatically supported the efficacy of phonics instruction over other methods, and he reserved particular vitriol for Gates’ (1928) study and recommendation of incidental phonics.


At the heart of Flesch’s (1955) argument was his analysis of the Hay and Wingo (1968) systematic phonics programme ‘Reading with Phonics’ (1968), which ensured that children taught by the method were consistently above the national reading norms (Terman and Walcutt, 1958). In the Bedford Park school visited by Flesch (1955) that utilised the Hay and Wingo (1968) method there were no non-readers and all grade one children were able to read a newspaper report fluently. This aligned with the extensive study carried out in Champaign, Illinois by Hastings and Harris (1953), in which the experimental group outperformed the control group in reading assessments by twenty-five percent, with most children fluent by the second grade.


‘Why Johnny Can’t Read’ (1955) is often seen as a watershed in reading education and the tipping point for the return of phonics as a reading instruction approach. It was no such thing, as evidenced by Flesch’s need to write the follow-up book ‘Why Johnny Still Can’t Read’ (1981). It was Flesch’s abrasive style and evident outrage that almost certainly undermined his influence, along with an implication that the publishers of the ‘look and say’ primers and the academics who wrote them were engaged in an elitist, economic conspiracy. Although he had completed a PhD at Columbia University, he was treated as an academic parvenu by an establishment that closed ranks against him. His book, although it sold well to a general public bemused as to why their children were unable to read, was largely ignored by the teaching community.


The academic community circled its wagons and fought back with a plethora of ripostes. The sternest and most influential defence against Flesch (1955) came from Harvard University’s Carroll (1956), who suggested that Flesch had distorted the research. Carroll (1956) accused Flesch of failing to discriminate between the different kinds of phonics instruction adopted in the studies, thus misleading readers over whether or not phonics had actually been used. In fact, Flesch (1955) was rigorous in distinguishing whether the technique utilised in each study was systematic phonics or incidental phonics. Carroll (1956) was also critical that Flesch (1955) failed to demonstrate any tested phonics techniques that could aid reading instruction; this despite Flesch’s (1955) inclusion of a detailed description of the Hay and Wingo (1968) programme and the Bloomfield (1933) method.


It was Carroll’s (1956) reliance on, and careful selection of, the researchers’ conclusions in the analysed studies that was the least scientific criticism of Flesch (1955), and yet it gained the most purchase in the academic world. By cherry-picking any and every statement of doubt and ambivalence over a phonics approach, Carroll (1956) created the impression that, although each study might indicate an advantage for phonics instruction, the researchers harboured serious doubts as to its efficacy. There was particular amplification of the divergence between phonics instruction’s positive effects on word reading and the higher scores on paragraph reading and fluency achieved by those taught by the word-method and incidental phonics, measures which are tested through comprehension and not decoding accuracy. Carroll (1956) is considered to be pivotal in persuading the academic world that Flesch (1955) was wrong and that the acknowledged authorities could be trusted (Terman and Walcutt, 1958).


Flesch’s (1955) polemical, outraged and aggressive literary style was his downfall. It is ironic that, had he not been so forceful, he might have had greater influence. He made one further error: he cited England as the home of good sense: a phonics utopia where all were taught by his espoused method and where all could read fluently. It was no such place.


Perhaps Flesch (1955) had the last laugh, however. His book was read by England’s Schools’ Minister Nick Gibb (2018) who has championed the teaching of systematic synthetic phonics and had the political influence to establish a policy of early decoding instruction across the country and an accompanying assessment and monitoring protocol that has gone some way to raising reading standards.


Flesch’s book was also read by Theodore Geisel who, as a direct result, penned ‘The Cat in The Hat’ (1957).


Where Flesch’s (1955) ‘Why Johnny Can’t Read’ stole the headlines, it was Terman and Walcutt’s ‘Reading: Chaos and Cure’ (1958) that rationalised Flesch’s (1955) raw polemic with scholarly analysis. Their systematic deconstruction of Gestalt theory’s assumption that a child learns from whole to parts as related to reading concluded with the devastating resolution that an illogical union had been forged between the theory and the study of eye movements. This, they stated, had resulted in ‘the ultimate theoretical basis of the reading method which today is undermining our educational system’ (1958, p48) as it ignored the basic and fundamental fact that printed words contain letters which are symbols of sounds; and, as if further proof were needed of how deeply the whole-word assumption had taken hold, they quoted Cronbach’s assertion in the influential Educational Psychology that, ‘The good reader takes in a whole word or phrase at a single glance, recognising it by its outline…’ (1954, p46).


Flesch’s (1955) contention that English children were all being taught phonics was indeed a fallacy. When Daniels and Diack (1953) warned against the confusion being caused by mixed methods of teaching reading, they received such an outraged backlash from the teaching profession (Lloyd-Jones, 2013) that Morris (1953) coined the term ‘phonicsaphobia’ to describe the ‘pathological’, irrational reaction to any mention of phonics as a teaching method.


The clearest evidence of how entrenched the mixed-methods teaching of early reading had become in the fabric of the English education system comes from the Plowden Report (Blackstone, 1967). A comprehensive inquiry into the English primary education system, it concluded that, ‘Children are helped to read by memorising the look of words, often with the help of pictures, by guessing from a context…and by phonics, beginning with the initial sounds. They are encouraged to try all the methods available to them and not depend on only one method…’ (Blackstone, 1967, p212). What is revealing is not merely the clear reliance on, and recommendation of, mixed methods for teaching reading, but the implication that phonics is a last resort – and only for initial sounds.


In 1959 in the United States, with the debate at its most bitter, the National Conference on Research in English established a special committee on research in reading. Jeanne Chall (1967), a committee member, proposed that a critical analysis of all research in existence should guide any experimentation proposed by the committee. Her proposal was accepted and funded by the Carnegie Corporation; the study began in 1962 and was completed in 1965.


Chall (1967) analysed all available historical research comparing the ‘look and say’ method with phonics approaches to early reading instruction, examining their efficacy in outcomes for comprehension, vocabulary and rate. In all cases except one (Gates’ 1927 study), the results indicated that in the initial grades the ‘look and say’ method produced higher scores, but that this advantage was first nullified and then surpassed by those taught by a phonics approach as they progressed to higher grades. Chall (1967) concluded that ‘phonics is advantageous not only for word recognition but also for comprehension…’ (1967, p. 108) and that phonics instruction had a ‘cumulative effect that is crucial in producing the later advantage…’ (1967, p. 108).


Chall (1967) also analysed the studies to compare systematic phonics instruction with the intrinsic phonics method so beloved of Gates (1927) and so vilified by Flesch (1955). The results were emphatic. In terms of word recognition, spelling, vocabulary and comprehension, children taught using systematic phonics outperformed those taught using intrinsic phonics. Only in reading rate did those utilising an intrinsic phonics approach gain an advantage, and this advantage declined and had been reversed by grade 4.


The conclusions of the Chall study were unequivocal:

‘Most children in the United States are taught to read by…a meaning emphasis method. Yet the research from 1912 to 1965 indicates that a code-emphasis method – i.e. one that views beginning reading as essentially different from mature reading and emphasizes learning of the printed code for the spoken language – produces better results…’ (1967, p. 307).




PSYCHOLINGUISTIC APPROACHES TO READING

In April 1967, Goodman presented his landmark paper at the American Educational Research Association. ‘Reading: A Psycholinguistic Guessing Game’ (Goodman, 1967) was the culmination of five years of research and took the world of reading instruction by storm. It has been reprinted in eight anthologies, is his most widely cited work and stimulated a wealth of research into similar models of reading. The aftershocks for reading instruction are still felt to this day.


The central tenet of the paper was the rejection of reading as a precise process that involves ‘exact, detailed, sequential perception and identification of letters, words, spelling patterns and large language units…’ (1982, p33), in favour of a selective process that involves the partial use of available language cues based on ‘readers’ expectations’ (1982, p33). The reader guesses words based on semantic and contextual expectations and then confirms, rejects and refines these guesses in ‘an interaction between thought and language…’ (1982, p34). Inaccuracies, or ‘miscues’, as Goodman (1982) calls these errors, are inherent (indeed, vital) to this process of psycholinguistic guesswork.


Goodman (1982) justifies his theory by linking it to Chomsky’s (1965) model of oral sentence production, in which the precise encoding of speech is sampled and approximated when the message is decoded. Thus, he maintains, the oral output of the reader may not be directly related to the graphic stimulus of the text and may involve ‘transformation in vocabulary and syntax’ (1982, p38) even if meaning is retained. The implication is profound: the reader is reading for meaning not for accuracy, and it is semantics and context that drive the reading process, not alphabetic decoding. The tether to the work of Chomsky (1965) is intuitively attractive but evolutionarily awkward. Chomsky’s (1965) research theorised that language and language structures for humans are an inherent, intuitive, natural attribute developed over millions of years of genetic selection and are thus biologically primary (Geary, 2002). To suggest that reading and writing aligned to Chomsky’s (1965) oral model of grammatical intuition and that a reader was an ‘intuitive grammarian’ (Goodman, 1982, p161) ignored the inconvenient truth that the five thousand years it has taken to develop writing is not an evolutionary timescale (Dehaene, 2014).


Goodman’s (1982) ‘whole language’ approach was highly critical of Chall’s (1967) research, which separated code-breaking from reading for meaning. Although never addressing the weight of research from which she drew her conclusions, and which contradicted his hypothesis, he accused her of misunderstanding how the linguistic code operated and was used in reading. ‘A language is not only a set of symbols…’ he opined, ‘it is also a system of communication…’ (1982, p127). He accused her of overlooking the ‘fact’ that phonemes do not really exist and that oral language is no less a code than written language, maintaining that, even in an alphabetic system, it is the interrelationship between graphophonic, syntactic and semantic information, and the switching between these cueing systems, that enables the reader to extract meaning from text. He states with certainty that:


‘Reading is a process in which the reader picks and chooses from the available information only enough to select and predict a language structure that is decodable. It is not in any sense a precise perceptual process.’ (1982, p128).


Goodman’s consistent conflation of oral language with written language, and his failure to acknowledge the vastly differing processes required to decode and recode the two systems of communication, lies at the heart of the failure of the whole language approach to reading. With resounding echoes of Gestalt theory (Ellis, 2013), and supported by Clay’s (1991) assertion that readers rely on sentences for rapid word perception, he took his flawed model of reading to its illogical conclusion, deducing that whole texts contained more meaning and were thus easier to read than pages, paragraphs and sentences. The grapheme and letter, containing the lowest level of meaning, were, therefore, the most difficult to read and were, for reading instruction, the most irrelevant (Goodman and Goodman, 1981).


There was little room for phonics and phonemic awareness within the theory. Smith (1975) believed it did have a place, but only after children had learned to read. Nonetheless, in line with all of Chall’s (1967) research, he conceded that all of his best readers had excellent phonemic awareness. His conclusion was infused with confirmation bias (Wason, 1960): a good reader was intuitively able to make sense of phonics. In other words, phonics mastery did not make a good reader; good reading enabled phonics mastery. Smith’s (2012) book for teachers, ‘Understanding Reading’, supports and encourages Goodman’s (1972) predicting technique for poor readers, recommending skipping unknown words, guessing them or, as a last resort, sounding the word out. It concludes with, ‘Guessing…is the most efficient manner in which to learn to read…’ (Smith, 2012, p232). Goodman and Smith had developed a model of reading that followed exactly the paradigm of the poor reader (Adams and Bruck, 1993).

In light of this, allied to Chall’s (1967) research, which emphatically supported the use of phonics instruction for early readers, the psycholinguistic guessing technique for reading should have struggled to be adopted. It was nonetheless assisted by a large-scale research study across the United States. Bond and Dykstra (1967) sought to establish finally which was the most effective instructional technique for early reading and analysed the decoding of nine thousand children. Their conclusion that there was no clear method that worked better than any other, and that it was the quality of the teacher not the programme that mattered, undermined the advantages of a phonics approach identified by Chall. Bond and Dykstra’s (1967) research was flawed. Their research question sought to identify the effectiveness of reading instruction methods, but their data were organised and analysed according to the class in which children were taught and not the method by which they were taught. Thus, the study compared classroom teaching and not methods of teaching and concluded, inevitably, because this is how the data were codified, that the teacher was more important than the method. If this were the case, then method was irrelevant and whole language approaches were as effective as phonic approaches if the teaching was good.


Buoyed by this conclusion, whole language reading as a pedagogical technique spread quickly, driven by two engines. Firstly, it appeared to work; at least initially. The more difficulty a child has with reading, the more reliant they become on memorisation of texts and the utilisation of word shape and visual and contextual cues, and the more fluent they appear, although often paraphrasing and skipping words (Juel et al., 1985). By being taught non-phonological compensatory strategies, poor readers seem to make progress; progress that eventually stalls once they reach seven years old and texts become more demanding, have fewer visual cues and the child’s logographic memory capacity has been reached (Rasinski, 2010). By this stage, confident readers have cracked the phonic code for themselves (Adoniou, 2017) so appear to have mastered reading through psycholinguistic methods.


Secondly, the method garnered the approval of teachers. Both Smith and Goodman appealed directly to teachers to ignore the gurus and experts, trust their intuition and carry out their own research (Kim, 2008). The theory that reading was, like oral language, intuitive, absolved teachers from having to teach it. This aligned with the constructivist teaching theories of Dewey (1916) that were predominant in the 1960s, 1970s and 1980s (Peal, 2014), with the belief that knowledge, including knowing how to read, could be discovered and constructed by the learner. This despite Perfetti’s (1991) warning that, ‘learning to read is not like acquiring one’s native language, no matter how much someone wishes it were so’ (1991, p.75). Teaching phonics, on the other hand, was highly technical and complex, and required training, practice and repetition, earning it a reputation for ‘drill and kill’. Whole language methods, with their emphasis on guessing and intuitive learning, enabled teachers to abdicate responsibility for the teaching of reading and concentrate on the far more enticing elements of literacy. With Bond and Dykstra’s (1967) research indicating that method was irrelevant, teachers had free rein to adopt whichever approach was the most alluring.


It is this professional abdication of reading instruction that has been the most damaging legacy of Smith and Goodman. In 2012 the National Union of Teachers (NUT), the second largest teaching union, representing in excess of three hundred thousand teachers, denounced the introduction of systematic synthetic phonics as the promotion of a single fashionable technique, with one NUT executive stating, ‘Most adults do not read phonically. They read by visual memory or they use context cueing to predict what the sentence might be…’ (Mulholland, 2014). The union was emphatic that phonics alone would not produce fluent readers and that ‘mixed methods’ were essential. The largest teaching union, the NASUWT, asserted that children ‘…need to use a combination of cues such as initial letter sounds and illustrations to make meaning from text…’ (politics.co.uk, 2013). This resistance from the leadership of educational institutions clearly reflected the attitudes of their memberships. According to an NFER (2012) survey, the majority of teachers specifically mentioned the use of picture cues as a reading technique, along with the visual memorisation of word shapes and the sight learning of words. Further research by the NFER (Walker and Bartlett, 2013) found that 67% of teachers believed that a ‘mixed methods’ approach to the teaching of reading was the most effective. A survey by the NASUWT in 2013 (politics.co.uk, 2013) showed that 89% of teachers believed that children needed to use a variety of cues to extract meaning from text, confirming the results of Sheffield Hallam University’s research two years earlier that revealed 74% of primary school teachers encouraged pupils to use a range of cueing systems that included picture clues (Gov.uk, 2011).


The whole language approach to reading instruction has maintained traction under cover of the concept of mixed methods or balanced literacy. This compromise attempted to end the reading wars by empowering teachers to decide which methods best suited individual children and use a cocktail of approaches to address reading failure (Seidenberg, 2017). In practice, this meant that the vast majority of older teachers continued with their whole language approach and thus ignored phonics instruction except where statutory assessments enforced it (Seidenberg, 2017).


Clay’s (1995) Reading Recovery programme is perhaps the strongest evidence of the continuing traction gained by the whole language approach to reading. Clay (1991) popularised the whole language approach in New Zealand along with Smith and Elley, who maintained that, ‘children learn to read themselves; direct teaching plays only a minor role…’ (1995, p87), as learning to read was akin to learning to speak. This resulted in 20% of all six-year-olds in New Zealand making little or no progress toward gaining independence in reading in their first year of schooling (Chapman, Tunmer and Prochnow, 2001). The solution was Clay’s Reading Recovery programme: the same approach that had failed the same children in their first year of schooling.


However, research indicates that Reading Recovery is an effective strategy. Studies showed that not only is it beneficial, it is cost effective too (May et al., 2015), and it is recognised as good practice by the Early Intervention Foundation, the European Literacy Policy Network, the Institute for Effective Education and the What Works Clearinghouse, as well as being advocated by University College London (UCL). A recent US study (Sirinides et al., 2018) reaffirmed these assertions, which were backed on social media by education heavyweight Dylan Wiliam (2018).


How can a whole language model of reading instruction defy the substantial research that undermines the efficacy of the approach?


The answer lies partly in the model and partly in the research. The programme constitutes twenty weeks of daily, thirty-minute one-to-one sessions with a trained practitioner. Fifty hours of one-to-one reading, however poor the instruction, will result in some improvement for most readers. This may be the result of improved guessing strategies, greater numbers of words recognised by shape and far greater opportunities for the child to start to crack the alphabetic code by themselves, as well as any phonics instruction the child is receiving outside of the programme. Furthermore, the research is not nearly as positive as it at first appears. Tunmer and Chapman (2016) questioned the research design of May et al. (2015) as a result of the lowest performing students being excluded from the study. They concluded that the successful completion rate of students was modest and that there was no evidence that Reading Recovery leads to sustained literacy gains. More damning, however, is their highlighting of the range of experiences and interventions to which the control group were exposed. This cuts to the heart of the traction maintained by Reading Recovery: until it is tested directly against an efficient systematic phonics programme it will continue to indicate modest improvements in reading for its participants.


Smith and Goodman’s whole language approach was adopted by the state of California for seven years, which resulted in 60% of Californian nine and ten-year-olds being unable to gain even a superficial understanding of their books and in California slumping to the bottom of the United States reading league tables (Turner and Burkard, 1996).


THE SIMPLE VIEW OF READING

Gough and Tunmer (1986) studied 250 children over seven years from kindergarten through to fourth grade. Observing that some children had highly advanced language comprehension skills (oral comprehension) but poor reading comprehension skills, they found they were able to predict reading comprehension efficacy by measuring a third dimension. By testing children’s decoding skills using a pseudo-word assessment and multiplying the percentage by (not adding it to) a language comprehension score, they were able to directly predict the child’s reading comprehension level. A deficit in either decoding or language skills always resulted in a lower reading comprehension score. Where language skills were high, an improvement in decoding led to an equal (not partial) improvement in comprehension scores.
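Expressed formally, and using purely hypothetical scores for illustration (the notation RC, D and LC is adopted here for convenience and is not drawn verbatim from the original paper), the model can be sketched as:

\[ RC = D \times LC, \qquad 0 \le D \le 1, \quad 0 \le LC \le 1 \]
\[ \text{e.g. } D = 0.9,\ LC = 0.5 \;\Rightarrow\; RC = 0.9 \times 0.5 = 0.45, \qquad D = 0.9,\ LC = 0 \;\Rightarrow\; RC = 0 \]

On such a model, a strong decoder with weak language comprehension is still predicted to comprehend little of what they read, which is precisely the pattern described in the hyperlexic readers below.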


As an analytical tool for the focusing of reading intervention it proved invaluable. Reading comprehension (the reason for reading) could be improved by interventions addressing readers’ deficits in either decoding or language comprehension. If two of the three variables were known, then the third could be accurately predicted. Hyperlexic readers (good decoding but poor language comprehension) would receive specific instruction in language whereas dyslexic readers (poor decoding but at least average language comprehension) would receive decoding interventions – NB. This is Gough and Tunmer’s (1986) definition of dyslexia. Poor decoding allied to poor language comprehension identified the ‘garden variety’ poor reader.


Although valuable, the analytical element of the study was not its most significant contribution to reading instruction. By identifying decoding as an essential predictor of reading comprehension, Gough and Tunmer (1986) exposed the fallacy of Gestalt Theory: that the whole word was the smallest point of focus. Without the ability to decipher the sounds represented by letters and the ability to blend them to identify words, reading was not possible. The multiplier effect was crucial: a deficit in either decoding or language skills always resulted in a lower reading comprehension score, the product of two fractions being de facto less than both the multiplicand and the multiplier. The conclusion was emphatic: an enhanced ability in one area could not compensate for a deficit in another area.
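The arithmetic behind that claim can be made explicit. On the assumption, as above, that both components are expressed as proportions between zero and one, the product can never exceed the weaker of the two:

\[ 0 < D < 1,\ 0 < LC < 1 \;\Rightarrow\; RC = D \times LC < D \quad \text{and} \quad RC = D \times LC < LC \]
\[ \text{so, for example, raising } LC \text{ from } 0.6 \text{ to } 1.0 \text{ while } D \text{ remains at } 0.2 \text{ still leaves } RC \le 0.2 \]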


Whole language theory, with its emphasis on comprehension and its eschewal of decoding, could, therefore, not work as a framework for reading instruction: it was not mathematically possible. Phonics was key; not the only key, but key nonetheless.


Gough and Tunmer’s (1986) research developed into the influential ‘Simple View of Reading’, further modified by Hoover and Gough (1990), which drew three clear conclusions from the study. Firstly, that the highly complex manifestation of reading fluency can be broken down into two identifiable categories: the ability to decode and the ability to comprehend language. Decoding relates to an ability to decipher text quickly and accurately. Language comprehension, although not specific to reading, relates to domain knowledge, reasoning, imagining and interpretation (Kamhi, 2007). Secondly, that any difficulties in reading are a result of either poor language comprehension, poor decoding skills or a deficit in both. Thirdly, that for reading to be confident and competent, facilities in both areas must be strong; strength in one area cannot compensate for weakness in the other. Thus, a student with excellent language comprehension will achieve a reading comprehension level not exceeding their decoding level, and any improvement in their decoding will result in an improvement in their reading comprehension.


The implication for teaching and teachers was clear: if both parts of the framework were not attended to, children would not become competent readers. Thus, a systematic phonics programme, delivered early in a child’s schooling and continued until mastery, was essential, allied with a concentration on language development, vocabulary building, widening content knowledge and exposure to diverse narratives and stories (Lloyd-Jones, 2013).


The robustness of the framework is supported by numerous studies. Kendeou et al. (2009) employed factor analysis of the diverse measures of assessment undertaken independently by two separate research studies in the USA and Canada analysing both decoding and comprehension. Their conclusions supported the generality and validity of the ‘Simple View of Reading’ framework and its algorithms. In 2008 the Institute of Education Sciences in the USA commissioned a series of large-scale studies which provided persuasive evidence of the role of ‘The Simple View of Reading’ (Hoover and Gough, 1990) and the efficacy of the framework (Vaughn, 2018). Further advocacy of specific phonics instruction was implicit in Lonigan et al.’s (2018) examination of 720 children in grades 3 to 5 in the USA, which supported the predictive validity of the components of the ‘Simple View of Reading’ (Hoover and Gough, 1990). This predictive validity was further supported by Chiu’s (2018) analysis of pre-kindergarten children’s outcomes at grade 3, which found that listening comprehension and word recognition concepts were strongly related.


So robust are the constructs of ‘The Simple View of Reading’ that it featured in the Rose review (2005) of the primary curriculum of England and the language of its framework appears in the Primary National Curriculum of England (DfE, 2014).


It is, nonetheless, not without its problems, and many of these relate to the construct of ‘language comprehension’ and the complexities of acquiring the domain knowledge, reasoning, imagining and interpretation (Kamhi, 2007) associated with it, as opposed to the clearer progression of decoding competency, which can be acquired with a systematic approach to phonics instruction. This is exacerbated by the assessment of language comprehension and reading comprehension and the difficulty of finding a testing and scoring format that is suitably reliable and valid for large-scale studies (Snow, 2018). Furthermore, as Francis et al. (2005) maintain, the framework is not designed to contrast the text and cognitive skills that may be required to address the selected reading. Perhaps the greatest issue, however, lies with the false impression, suggested by displaying reading comprehension alongside decoding, that reading comprehension, like decoding, is a single, unidimensional concept, rather than the multifaceted and complex construct it really is (Catts, 2018). Gough, Hoover and Peterson (1996) acknowledge the issue of differing content knowledge and that it can confound the results of the model. As a result, the great strides in decoding provision made in the last few years have not been mirrored by improvements in reading comprehension. Strategic reading instruction (Swanson et al., 2015) will only be effective when combined with adequate content knowledge (Rasinski, 2010).


READING AUTOMATICITY

Children learn how to read fluently at different rates and with differing levels of difficulty (Snowling and Hulme, 2005) and, while some attain fluency relatively easily, others encounter complications and struggle to attain the ability to decode swiftly and automatically. Although the research into the importance of phonological processing skills in reading acquisition is emphatic and consistent (Stanovich, 2010), there is increasing evidence that orthographic processing is relevant in explaining variances in word recognition skills (Cunningham, Perry and Stanovich, 2001). ‘Automaticity with word recognition,’ state Cunningham, Nathan and Raher (2011), ‘plays a fundamental role in facilitating comprehension of text and, thus, is a primary determinant of reading achievement through schooling…’ (2011, p. 259).


LaBerge and Samuels (1974) developed the Automatic Information Processing Model, which added, after letter recognition by the iconic memory, a reference to the phonological memory, which enabled sounds to be associated with the visual image before receiving the attention of the episodic and semantic memory, resulting in correct word identification. They developed the concepts of external and internal attention: external attention being observable evidence of reading, and internal attention being the unobservable elements of cognitive alertness (how cognitively vigilant the reader is and how much effort is being applied), selectivity (what experiences the reader is drawing on to comprehend the text) and capacity (how cognitively attentive the reader actually is). LaBerge and Samuels (1974) also introduced the concept of automaticity: the ability to perform a task whilst devoting little attention to its execution. In the case of reading, this relates specifically to the swift and accurate decoding of text with almost no imposition on working memory, with the resultant benefit that almost all attention is available for comprehension. The implication that there is a crucial buffer stage between decoding mastery and reading fluency is central to understanding development in readers for whom phonics mastery has not been achieved.


All interactive theoretical models of skilled reading emphasise the need for fast, automatic word recognition (Ehri, 2005). Although these models differ in their explanations of the cognitive processes involved, they all assume that word recognition develops from a slow, arduous, intentional process requiring constant sound symbol deciphering, into a process that enables the immediate identification of words through their lexical quality (Perfetti, 2007).


Thus, the ability to recognise words quickly, accurately and without effort allows cognitive resources to attend almost entirely to reading comprehension. Conversely, without the achievement of reading automaticity, the cognitive load required to decode words leaves insufficient space in working memory for reading comprehension (Perfetti, 2007), so, in order to reduce this cognitive load in processing alphabetic orthographies, readers must attain effortless use of the alphabetic code (Chen, 2007). This cognitive theory is supported by substantial empirical research evidence that fast, effortless word recognition is the strongest predictor of reading comprehension and accounts for high degrees of variance in comprehension throughout schooling (NICHD, 2000; Stanovich, 2000). Deficiencies in swift, accurate word recognition in early schooling are the clearest predictor of deficiencies in reading comprehension in later schooling (Cunningham and Stanovich, 1997). It is this orthographic processing deficit that has been identified as a crucial ‘sticking point’ between phonological processing and reading fluency (Stanovich and West, 1989) and explains significant variances in reading and spelling ability (Badian, 2001).


Orthographic processing is the ability to form, store and access orthographic representations which abide by the allowable order of letters within a language and are then linked to the phonological, semantic and morphological coding of that language (Cunningham et al., 2011). In other words, readers can recognise and decode a group of letters as being a plausible pattern within a language (this could include a pseudo-word) and then access semantic memory to recognise that it is a recognisable word within that language, even though they may not be able to define the word (this would exclude a pseudo-word).


Clearly this requires a dependence on phonological processing ability (Barron, 1986). Nonetheless, although no cognitive process is ever completely isolated, orthographic processing can be seen as a construct separate from phonological processing, with the weighting on each process dependent on the reading task and the proficiency of the reader (Berninger, 1994). This is crucial for the development of reading in English, which contains so many homophones, and for the swift recognition that the phonic decoding of a word is not the sole indicator of its meaning.


If orthographic processing (or reading automaticity) has thus been recognised as a separable construct on the path to the further separable construct of reading fluency, an understanding of how it advances is crucial.


Reading fluency, the ability to recognise words quickly, accurately and with an inherent understanding of wider meaning (prosody), is linked with the instant processing of phonological, semantic, morphological and syntactic information (Perfetti, 2007). Much research has rightly focused on developing our understanding of phonological awareness, and many millions have been allocated to delivering and testing phonological awareness in young readers to ensure mastery. Although a vital and fundamental foundation of reading proficiency, it is, of itself, not sufficient for reading fluency. The significance of orthographic processing as a buffer between phonological mastery and reading fluency, and as a source of variance in word recognition, highlights it as a crucial developmental stage on the road to fluency.


When a developing reader with well-developed phonic awareness encounters an unfamiliar word, they utilise their phonological processing capacity in an exhaustive grapheme by grapheme decoding operation that ends in a successful decoding. This results in the formation of cognitive orthographic representations. In typical readers, with a small number of encounters with the word, it will be added to the child’s orthographic lexicon (Share, 2004) and, with the amalgamation of phonological and orthographic representations, fluent future word identification will be enabled (Ehri, 2005).


The phonological component is the primary means for acquiring orthographic representations (Share, 1995). Nonetheless, it is the frequency of exposure to a word and the successful identification of it that develops the word recognition process. If the word is familiar it will be read automatically and if not, the reader will phonologically decode it. What is being built by the reader is not a bank of memorised shapes that they identify and associate with meanings but the capacity to recognise logical and acceptable letter patterns and link them to semantic and morphological knowledge: Reicher’s (1969) word superiority effect.


Share (1995) developed a theoretical framework that postulated that the detailed orthographic representations necessary for fast, efficient word recognition are primarily self-taught during independent reading.


This self-teaching model (Share, 1995) has been tested and refined through cross-linguistic investigations spanning languages with shallow orthographies, such as Hebrew, to deeper orthographies, such as Dutch and English, and there is significant evidence to suggest it is a robust theoretical model (Cunningham et al., 2011). Cross-linguistic evidence supports the model for both oral reading and silent reading (Bowey and Muller, 2005). Languages with shallow orthographies require only two exposures to unfamiliar words for automaticity to be achieved (Share, 2004), whereas English, with its deeper orthography, requires at least four exposures (Nation et al., 2007), with further exposures making little difference to results. Younger children, however, do benefit from greater numbers of exposures (Bowey and Muller, 2005).


The greatest implication for the teaching of reading from this model is that children require multiple and varied opportunities to self-teach (Cunningham et al., 2011). Furthermore, children require time to phonologically recode words on their own during instruction if automaticity is to be developed (Share, 1995). When a child hesitates while attempting to decode, it is essential that they are afforded sufficient time to attempt a phonological recoding. Even failed attempts facilitate some level of orthographic learning (Cunningham et al., 2011), especially when the teacher is able to refer to the alphabetic coding structure rather than merely read the word. And this perhaps is the crucial factor. Without teacher input and monitoring, sustained silent reading showed almost no positive effects in developing orthographic processing (NICHD, 2000) because there was no way of ensuring a child’s investment in the reading. Reading has to be taught, monitored and invested in by the reader, and that investment needs to be constantly assessed.


Spelling appears to have a positive effect on the development of orthographic representations and reading automaticity when children are taught the spelling of words through their graphemic structure associated with the phonic code (Shahar-Yames and Share, 2008). Spelling should help reading more than reading helps spelling (Perfetti, 1997). It also supports vocabulary growth and writing, with evidence that the attentional demands of composition are not diluted by attention to accurate spelling (Torrance and Galbraith, 2006).


The texts to which emergent readers are exposed are crucial and an area for potential confusion. Clearly, to develop reading automaticity children must have numerous exposures to high frequency words. The more regularly a word is correctly identified, the more quickly it becomes embedded as an orthographic representation. However, for the development of reading comprehension, the use of rhetorical devices, advanced language techniques and in-depth analysis of a text, a more complex text, with greater numbers of unfamiliar and complex words, is necessary (Elder and Paul, 2006). There is no need to conflate the two, only to understand the purpose of each text. Analysis of a complex text above instructional level for deep literary exploration develops comprehension ability and widens a child’s lexicon (Booth et al., 1999), whereas reading a text at a child’s instructional level helps develop reading automaticity through repeated exposure to familiar words.


The effective application of phonics is the foundation of efficient reading and the monitoring of phonic mastery a crucial role for educators, but mastery of the English alphabetic code seldom occurs by the second year of formal education (when the Phonics Screening Check takes place), and most children are not comfortable decoding at a polysyllabic level until their fourth year of schooling (Dehaene, 2011). This, however, is not the end of the process. Children need to practise decoding at speed to gain automaticity and achieve the word superiority effect (Reicher, 1969).


‘Automatic word recognition is necessary for successful reading comprehension…although much of what a reader ultimately has to do is read, there are significant advantages in encouraging a student to develop a proclivity toward phonological recoding on the path to automatic word recognition. Furthermore, the assistance of trained teachers who understand the intricacies of language and reading development and instruct with attention to the complexities of the languages their students hear, speak and write, is priceless.’ (Cunningham et al., 2011, p. 277)


NOT ALL PHONICS PROGRAMMES ARE THE SAME

Phonics instruction has existed in many forms for thousands of years. Quintilian’s (1913) insistence that any syllable, whether existent in Latin or not, be read and blended before a student was permitted to move on to the grammaticus stage of their education, along with his resolution that sounds be taught before letters, was in essence a phonics programme. Its efficacy was dependent on the transparency of the Latin alphabetic code with its regularity in sound to letter correspondence. The adoption of Quintilian’s (1913) teaching methods and the expectation that English grammar school boys learn to read and write Latin and Greek before English was the only phonics instruction available in formal public education in the Middle Ages. It was, nonetheless, considerably more phonics instruction than was made available to the poor, who were relegated to learning to read at Sunday school without the help of Latin instruction and its transparent phonic construction, assisted only by the letter names and a single word associated with each letter, along with some syllables and an array of highly complex religious texts to be learnt by heart. Pascal’s (Rodgers, 2002) revolutionary early phonics programme was the first that recognised that reading could be expressly taught through the recognition of phonemes associated with graphemes and their blending together to decode words. This was, nonetheless, designed for French, and it was not until Webster’s primer that a universal phonic programme emerged for English. The development of Kay’s (1801) ‘The New Preceptor’ and Mortimer’s (1890) ‘Reading Without Tears’ advanced phonics programmes and the systemisation of an approach founded on an understanding of the English alphabetic code, culminating in Nellie Dale’s (1902) iconic ‘On the Teaching of English Reading’, which sold well on both sides of the Atlantic.


With the rise of ‘look and say’ reading instruction, promoted by Huey’s (1904) adoption of Gestalt theory (Ellis, 2013), and with the growing power of the U.S. university-based teacher training institutions and the pre-eminence of Gates (1927), it was analytic phonics that became the dominant phonic approach. As basal readers and reading schemes that relied on word repetition came to dominate early reading teaching, the idea that reading could be taught from the atomised understanding of phonemes and graphemes and the blending of these sounds through the identification of their representations was driven to education’s hinterland.


Ironically, analytic phonics developed out of the inherent flaws of the whole-word and sentence approach. When a word could not be recognised or remembered, the reader was taught to resort to the secondary identification method of guesswork using picture cues, syntactic cues and contextual cues. If the first two methods were ineffectual, only then did the reader resort to analysing the letters and attempting to decode the word by identifying its sounds. However, without having been taught a systematic word attack strategy of letter-to-sound correspondence, this usually resulted in teacher intervention. The teacher would then teach the phonic code of the associated word and encourage the reader to decode it. So inefficient was the system, and so inexpert at identifying the relevant element of the alphabetic code were the teachers, that the most common outcome was the identification of the word by the teacher or, where no teacher was present, the avoidance of the word by the reader. Analytic phonics is often cited as phonics instruction when, as described above, it possesses no element of systemisation.


A large bank of memorised words is also a prerequisite for phonics programmes and systems that require the reader to decode unknown words by using associative letter patterns in known words. When an unknown word is encountered, the reader is required to reference known words with similar letter patterns and utilise and apply these patterns to the unknown word. The concept of onset and rime phonics is predicated upon the reader ignoring the opening grapheme’s phoneme, the identification of the subsequent letter pattern through association with a known word, the replacement of the opening phoneme of the known word with the actual phoneme and the blending of the replaced phoneme with the identified letter pattern’s sound. If that seems complicated for a single syllable word, then the cognitive gymnastics required to decode a polysyllabic word may, for a struggling reader with a poor word memory bank, be debilitating and possibly catastrophic (Parker, 2018).
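A toy sketch may help to make the sequence just described concrete. The following lines are purely illustrative: the word lists, phoneme notation and function are invented for this example and are drawn from no published programme. They mimic the analogy strategy of borrowing the rime of a stored word and show why it collapses the moment the reader’s memorised word bank lacks a matching rime.

# Illustrative only: a toy model of decoding-by-analogy (onset and rime).
# The reader must already hold words sharing the target rime in memory.
KNOWN_WORDS = {
    "light": ("l", "ight", "/ie//t/"),   # word: (onset, rime, sound of the rime)
    "cat":   ("c", "at",   "/a//t/"),
}
ONSET_SOUNDS = {"f": "/f/", "br": "/b//r/", "s": "/s/"}

def decode_by_analogy(word):
    """Strip a recognised onset, then search memory for a word with the same rime."""
    for onset in sorted(ONSET_SOUNDS, key=len, reverse=True):
        if word.startswith(onset):
            rime = word[len(onset):]
            for known_onset, known_rime, rime_sound in KNOWN_WORDS.values():
                if rime == known_rime:
                    # blend the new onset's sound with the borrowed rime's sound
                    return ONSET_SOUNDS[onset] + rime_sound
            return None   # no stored word shares this rime: the strategy fails
    return None           # the onset itself is not recognised

print(decode_by_analogy("fight"))   # '/f//ie//t/' -- borrowed from 'light'
print(decode_by_analogy("sat"))     # '/s//a//t/'  -- borrowed from 'cat'
print(decode_by_analogy("brick"))   # None -- no stored word with the rime 'ick'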


Although referenced as phonics programmes, these systems all require large banks of learned words and ignore the fundamental assumption of an alphabetic code: that letters and combinations of letters represent sounds and that, by systematically learning the sounds represented by the letters and synthesising those sounds together, a word can be decoded and recoded. Furthermore, through regular practice of that decoding process, expertise in decoding will develop, automaticity will be achieved, the word superiority effect (Reicher, 1969) activated, and words can be read accurately and quickly enough for the working memory to focus on comprehension of the text.


This is, in essence, the principle behind systematic synthetic phonics. It is an approach that explicitly teaches the connection between graphemes and phonemes and is fundamentally bottom-up. By mastering the sound-to-letter correspondences of the English alphabetic code, emergent readers can apply that code knowledge to decipher any word, enacting a letter-to-sound-to-word process in tandem with a lexical route (Dehaene, 2015) to achieve meaning. It eschews the top-down whole-word recognition pedagogy that only applies phonetic knowledge when logographic recognition, contextual recognition and implied guessing fail.


In order to be effective, however, the letter representations of the sounds required to decode English must be atomised, codified and then stratified into a hierarchy that enables this most complex of alphabetic codes to be taught and practised by young learners. Thus, systematic synthetic phonics teaching begins with the initial or simple code, whereby children are first taught simple grapheme-phoneme correspondences that enable them to read a large number of words successfully, grasp the concept of the alphabetic code and start blending and segmenting, whilst understanding the reversibility of reading and writing (McGuinness, 1999). Once this is mastered, the complex code is introduced, with its increasingly multifarious variations in the representation of vowel and consonant sounds and the crucial concepts that one letter can represent more than one sound and that the same sound can be represented by different letter combinations. The clarity of the codification is crucial, with the taught understanding that all sounds are encoded and can thus be decoded, however obscure, infrequent or singular that codification may be. With regular practice, allied to the reading of texts constructed from words with the taught grapheme-phoneme correspondences (decodable texts), readers have effective strategies for decoding unknown words; a minimal sketch of this decoding routine follows below. The final element of the process is the teaching of the decoding of polysyllabic words, where the procedural knowledge acquired to decode syllables is extended to blend several syllables together. Once mastered, a process that may take between two and three years (Dehaene, 2015), the alphabetic code is unlocked, and readers have an effective attack strategy for decoding any unknown word.
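The sketch below is intended only to make the simple/complex code distinction concrete. The grapheme-phoneme table, the words and the function name are a tiny invented subset, not the teaching sequence of any actual programme.

```python
# Illustrative sketch only: a toy 'simple code' of grapheme-phoneme
# correspondences plus a blending routine. The correspondences and words are a
# tiny invented subset, not any published programme's teaching sequence.

# Simple code: one grapheme, one phoneme (taught first).
SIMPLE_CODE = {"s": "/s/", "a": "/a/", "t": "/t/", "p": "/p/", "i": "/i/", "n": "/n/"}

# A slice of the complex code, layered on later: digraphs, and several
# spellings for the same sound (e.g. 'ai' and 'ay' both representing /ai/).
COMPLEX_CODE = {"ai": "/ai/", "ay": "/ai/", "sh": "/sh/", "th": "/th/"}

def decode(word):
    """Left-to-right decoding: match the longest taught grapheme, output its phoneme."""
    code = {**SIMPLE_CODE, **COMPLEX_CODE}
    phonemes, i = [], 0
    while i < len(word):
        # Prefer two-letter graphemes ('ai', 'sh') over single letters.
        for size in (2, 1):
            grapheme = word[i:i + size]
            if grapheme in code:
                phonemes.append(code[grapheme])
                i += size
                break
        else:
            raise ValueError(f"grapheme at position {i} has not yet been taught")
    return phonemes

# The segmented phonemes are then blended (synthesised) into the spoken word.
print(decode("sat"))    # ['/s/', '/a/', '/t/']
print(decode("paint"))  # ['/p/', '/ai/', '/n/', '/t/']
```

The essential point the code mirrors is the left-to-right, longest-grapheme-first matching followed by blending: nothing in the routine depends on the word having been seen before.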


The English language has been encoded using an alphabet; to ignore that alphabet when teaching the decoding of the English alphabetic code is, according to Daniels and Diack (1953), inexplicable. That code has to be unlocked in order for fluent reading to be achieved (Dehaene, 2015). Even when it is not explicitly taught, many children, perhaps up to 75%, will learn to crack the code by themselves (Adoniou, 2017). These children will be able to read. The rest must either continue attempting to crack the code well into secondary education or rely on their memory of word shapes, which will likely give them a reading vocabulary of about two thousand words (Rasinski, 2010). Ironically, these learners who struggle to crack the code do so because of a phonological deficit, identified as one of the key specific cognitive barriers to reading (Paulesu, 2001).


Evidence that synthetic phonics was substantially more efficacious for early instruction than analytic phonics was brought into focus in 2004 with the publication of a seven-year study in Scotland comparing the two approaches. Watson and Johnston’s (2004) research into 304 primary-school-aged children, taught reading through synthetic phonics or analytic phonics across thirteen classes for sixteen weeks, found that those taught by synthetic phonics were seven months ahead of their chronological reading age, seven months ahead of the other children in the study and eight months ahead in terms of their spelling. What was perhaps more remarkable was that the classes being taught by synthetic phonics were drawn from the most socially deprived backgrounds of all the study participants. Furthermore, these children were followed to the end of their primary school careers, by which time they were three and a half years ahead of their chronological reading age and significantly ahead of age expectations in their reading comprehension and spelling (Johnston, McGeown and Watson, 2011).


Although criticised for a research design that conflated the phonic elements with other potential contributing factors (Ellis and Moss, 2013; Wyse and Goswami, 2008) and for the differing amounts of teaching (Wyse and Styles, 2007), the dramatic contrast in outcomes could not be ignored, and it was not. In England, the influence of the study on the Rose Review (2006) was substantial, and Nick Gibb (Ellis, 2009), then in opposition, used the study to question government education policy, with the resulting Phonics Screening Check implemented once he became Schools Minister. Ironically, because political agency in Scotland was far less centralised, the reaction there was more measured (Ellis, 2009) and the literacy gap widened (Sosu and Ellis, 2014).


Synthetic phonics begins with the letters, assigns sounds to those letters and develops the ability to blend those sounds to read the word formed by the combination of those letters. From the mid-nineteenth century, however, a number of pedagogues questioned the rationale of this starting point. Pitman and Ellis (1845), conscious of the complexity of the English alphabetic code, developed the phonotypic alphabet, which more easily represented the atomised sounds of the English language; although it failed to be widely adopted, it did form the basis of the Pitman shorthand programme. At the heart of Pitman and Ellis’s (1845) approach was the understanding that sounds are represented by letters, and this enabled them to alter the letter configuration to simplify the representation of sound. All previous approaches had assumed that letters existed to create the sounds in order to read. However, argued Pitman and Ellis (1845), speech comes before writing, so sounds must come first, and letters are thus representations of sounds. Although seemingly a semantic difference, it was, nonetheless, fundamental to the birth of a new approach to synthetic phonics: linguistic phonics.


The most successful early proponent of this sound-first approach was Dale (1902), who insisted that her charges learn and identify sounds before being introduced to the letters that represented them, and who followed Quintilian’s (1892) approach of avoiding explicit letter names, which add unnecessary sounds, until the phonemes were embedded. Dale (1902) also grouped phonemes irrespective of the graphemes that signified them, rather than being led by the alphabet in the manner in which previous phonic programmes, such as Webster’s (1832), were organised. Furthermore, Dale (1902) taught spelling at the same time as reading, with children writing letters rather than merely observing them.


Although successful and popular, Dale’s (1902) programme was a victim of the rise of Gestalt theory, flash cards and basal readers, and it was not until the 1960s that new linguistic programmes surfaced again. It was James Pitman’s development of the initial teaching alphabet that reintroduced a linguistic approach (Downing and Nathan, 1967). Like his grandfather Isaac (Pitman, 1845), he attempted to simplify the alphabetic code for English and, although effective, his approach was predicated on children having to unlearn his alphabet in order to read words written in the actual English alphabetic code. Similarly, the Lippincott programme (Hayes and Wuest, 1969) used an artificial alphabet but followed Dale’s (1902) approach more consistently: focusing on sounds and blending, teaching spelling alongside reading, utilising decodable text and insisting that children write letters rather than merely observe them. Strikingly, the Lippincott linguistic programme proved to be the most successful reading programme in the substantial Bond and Dykstra (1967) study of early reading instruction in the United States.


Evans and Carr’s (1985) major observational studies in Canada reinforced the efficacy of a sound and spelling focus for early reading instruction. They found that most literacy activities within a classroom had either a neutral or a negative effect on reading, with the only positive impacts being learning the phonemes and how they are represented in letters, blending and segmenting sounds in words, and the amount of time spent writing the representations of sounds. Time spent memorising sight words was a strong negative predictor. Independent learning tasks were particularly damaging to early reading mastery, encouraging study to degenerate into ‘random learning which may detract from…reading skills’ (Evans and Carr, 1985, p344). The importance of writing letters for reinforcing sound-symbol correspondence was further emphasised by Hulme et al. (1987), who found that the motor activity of writing graphemes significantly improved reading outcomes over the use of letter tiles. Cunningham and Stanovich (1990) also concluded that spelling was appreciably better when letters were written by hand rather than typed or manipulated using letter tiles.


The elements of this research were drawn together by McGuinness in 1997 with the identification of the essentials of an effective ‘linguistic’ reading programme. This ‘Prototype’ (McGuinness, 1997) was founded on the cornerstone of sound-to-print orientation: phonemes, not letters, are the basis of the code, and because the phonemes are finite in number they provide the pivot point for clarifying the opacity of the English alphabetic code and making its reversibility transparent; a transparency which the attribution of sounds to spellings obscures. Thus, what appears at first to be semantic pedantry – that sounds are represented by graphemes as opposed to graphemes creating sounds – reduces the code to a logical (if still complex) manageability for teachers and learners: there are forty-four sounds represented using the twenty-six letters, rather than the overwhelming prospect of facing many hundreds of thousands of words and letter combinations whose sounds need to be identified. Only the phonemes are taught, and no other sound units, but instruction begins with a reassuringly transparent artificial alphabet that establishes one-to-one letter-to-sound correspondence before the more complex variations of phoneme encoding are introduced. Sounds are identified and sequenced through blending and segmenting, with the writing of letters integrated into lessons to link writing to reading and embed spelling as a function of the code. There is explicit teaching that a single letter can represent more than one sound and that a sound can be represented by more than one letter, hence the imbalance between the number of phonemes and the number of available letters; the short sketch below shows that sound-first orientation in miniature.
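To show what the sound-to-print orientation looks like in practice, here is a minimal sketch in which the code is keyed by sound rather than by spelling. The phoneme symbols and spelling lists are an invented handful, not McGuinness’s actual inventory of the forty-four sounds.

```python
# Illustrative sketch only: a toy code keyed by sound (sound-to-print), with the
# print-to-sound table derived from it. The phonemes and spellings shown are an
# invented subset, not a full inventory of the forty-four sounds.

SOUND_TO_SPELLINGS = {
    "/ai/": ["ai", "ay", "a-e", "a"],   # one sound, several spellings
    "/k/":  ["c", "k", "ck", "ch"],
    "/s/":  ["s", "ss", "c", "ce"],
}

# Reversibility: the reading (print-to-sound) table is the spelling table run
# backwards, which also exposes the overlaps where one spelling serves more
# than one sound (here, 'c').
SPELLING_TO_SOUNDS = {}
for phoneme, spellings in SOUND_TO_SPELLINGS.items():
    for spelling in spellings:
        SPELLING_TO_SOUNDS.setdefault(spelling, []).append(phoneme)

print(len(SOUND_TO_SPELLINGS))   # a finite, teachable number of sounds...
print(SPELLING_TO_SOUNDS["c"])   # ...['/k/', '/s/']: one spelling, two sounds
```

The point the Prototype makes is visible in the data structure itself: keyed by sound, the code remains finite, and spelling and reading become the same table read in opposite directions.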


In keeping with Quintilian and Dale (1902), McGuinness (1997) also included, as part of the ‘Prototype’, the avoidance of letter names in early instruction, citing Treiman and Tincoff’s (1997) assertion that letter names focus attention on syllables instead of phonemes, thus blocking conceptual understanding of the alphabetic principle and undermining spelling. Also included was the elimination of the teaching of sight words (words with such irregular spelling that they require memorisation by sight), which undermines decoding by encouraging ineffective strategies; such words are in any case relatively rare once effective code analysis has been applied – McGuinness (2004) argues that of the recognised one hundred high-frequency sight words (Dolch, 1936), only twenty-eight do not conform to the regular code.


Programmes that most closely align to this ‘Prototype’ (McGuinness, 2004) reveal growing evidence to substantiate its value. Stuart’s (1999) ‘Docklands’ study, in which fifty-three per cent of the children knew no English words at the outset, found that participants made substantial progress and were reading well above national norms by the end of a programme that aligned well with the model. In Canada, Sumbler and Willows (1999) found significant gains in word recognition, word-attack strategies and spelling when children were taught using a similar programme, and perhaps most compelling was the revelation that the model used by Watson and Johnston (2004) in their Clackmannanshire research exhibited extensive correspondence with the ‘Prototype’ (McGuinness, 2004).


More recent, and more explicit, research into a linguistic approach to phonics mastery was carried out by Case, Philpot and Walker (2009), following 1607 pupils across 50 schools over 6 years. The programme aligned directly with McGuinness’s (2004) ‘Prototype’ and revealed that children taught by the model achieved decoding levels substantially above the national data (Case, Philpot and Walker, 2009), with 91% attaining the nationally expected level at the KS1 statutory assessments. This longitudinal study, carried out by the programme designers using data from the schools using the package, found little or no variation across gender, socio-economic or geographical groupings. The Queen’s University Belfast study (Gray et al., 2007), which derived data from 916 pupils across 22 schools utilising linguistic phonics approaches, concluded that children exposed to this teaching approach gained a substantial advantage in both reading and writing and that this advantage was sustained throughout the primary phase.


IS A MASTERY OF PHONICS ESSENTIAL FOR READING?

‘There is clear evidence that a systematic approach to phonics results in gains in word reading and spelling. However, there is inconclusive evidence to suggest that no one method of teaching children to read is superior to any other method,’ states Glazzard (2017, p44), then Professor at the Leeds Trinity University teacher-training faculty. He suggests that analytic phonics could be as effective as systematic synthetic phonics and encourages the use of onset and rime as a strategy for decoding where systematic synthetic phonics has proved ineffective. He argues that many younger children are not able to deal with the smallest unit of sound, the phoneme, but must begin with larger units, and recommends onset and rime. There is no reference to Castro-Caldas et al.’s (1998) study, which established that the inability to identify the smallest unit of sound was apparent in all illiterates. The basis of his statement is the analysis of the Clackmannanshire research (Johnston and Watson, 2005) and the deduction that it was inconclusive as a result of an incoherent research design (Wyse and Goswami, 2008). Despite the substantial research (see above) indicating the effectiveness of systematic synthetic phonics, and the paucity of research supporting analytic or incidental phonics for word decoding going all the way back to Gates’ (1927) flawed study, Glazzard (2017) is insistent that reading instruction is not a ‘one size fits all’ (2017, p53) model. His argument criticises the ‘Simple View of Reading’ (Gough and Tunmer, 1986) for failing to atomise the steps to decoding proficiency, but he conflates decoding with reading comprehension whilst ignoring the fundamental premise of Gough and Tunmer’s (1986) work: that decoding and reading comprehension are separate but correlated. Furthermore, Glazzard (2017) considers the Phonics Screening Check too crude an assessment of literacy development to be valuable, despite it being the only statutory reading assessment that does not evaluate comprehension alone and that has any element of phonic forensic capability: it can, at the very least, suggest a problem with decoding. Ironically, in his 2013 book ‘Teaching Systematic Synthetic Phonics and Early English’ (Glazzard and Stokoe, 2013) he emphasises the importance of systematic synthetic phonics instruction in enabling children to learn to decode and, for children who fall behind in reading, he recommends further systematic synthetic phonics instruction in KS2. He further states that the atomisation of words into phonemes and graphemes supports effective spelling.


Clark (2017) is similarly unconvinced about the effectiveness of a systematic synthetic phonics approach, stating that there is no significant research to suggest that the method is more effective than analytic phonics or whole-language instruction. Although she references Chall (1966), she fails to cite Chall’s conclusion that approaches focusing on the decoding of print through sound awareness are the most successful, instead quoting from the Bullock Report (Bullock, 1975), which emphasised mixed methods of reading instruction, and implying that Clay’s (1991) psycholinguistic guessing approach can be effective. Like Glazzard (2017), she too suggests that the Clackmannanshire study was flawed and therefore inconclusive, and concludes that there is ‘no (italics in the original source) evidence to support phonics in isolation as the one best method…’ (2017, p.97). Clark (2017) also questions the wisdom of introducing children to reading long before this takes place in other countries and recommends delaying the teaching of reading. Nonetheless, she makes no reference to the phonetic transparency of other languages, or to the fact that English has developed the most complex alphabetic code, one that takes years longer to master (Dehaene, 2015). She also makes no reference to the established view that children should start to learn to read English by the age of six (Holdaway, 1979; Teale, 1984; Stanovich and West, 1989).


The recommendation that a Phonics Screening Check, similar to that which is statutory in England, be introduced into Australian primary education (Buckingham, 2016) has enraged a number of high-profile academics. Gardner (2017) likens the screening to a ‘virus’ (2017, p113) undermining the art of pedagogy. He sees the insistence on the adoption of systematic synthetic phonics as a reductionist model of teaching by direct instruction, one which views literacy as a systematic process leading to standardised accountability, and he regards a statutory check as a right-wing political policing imperative. Gardner (2017) cites the mandatory inclusion of systematic synthetic phonics teaching within the English Teachers’ Standards (DfE, 2011) as evidence of this ‘policing’ (2017, p114). Wrigley (2017) concurs with Gardner’s (2017) polemic that systematic synthetic phonics teaching and screening have been the result of ministerial power being ‘increasingly exercised and abused’ (2017, p213) and of policing by ‘the privatized Ofsted system of England’ (2017, p214). He suggests that the teaching of synthetic phonics fits the right-wing political preference for explicit instruction. In his examples of ‘skillful’ (2017, p210) teaching he suggests an incidental approach to phonics, in line with Gates’ (1927) flawed recommendations, that analyses words and sounds with no reference to the logic of the alphabetic code, and his repetition of words implies a preference for a whole-word approach. A positive reference to the National Literacy Strategy (DfEE, 1998) ‘searchlight’ model of teaching reading suggests that he considers it at least as effective as synthetic phonics, despite the ‘whole language’ (Goodman, 1972) implications of that model, and he ignores Pressley’s (2006) caution:

‘…teaching children to decode by giving primacy to semantic-contextual and syntactic-contextual cues over graphemic-phonemic cues is equivalent to teaching them to read the way weak readers read!’ (2006, p.164)


Cox (2017) also questions the political imperatives and urges restraint over the speed of implementation of a Phonics Screening Check in Australia, questioning whose expertise and whose knowledge take precedence. Cox (2017) does not reference the rapid decline in literacy standards in Australia as a reason for urgent reform, nor the substantial research supporting systematic phonics instruction (although, like all of these academics, he states how important phonics is). He, like Gardner (2017), cites Robinson’s (2015) claim that the commercialisation and politicisation of education is damaging the prospects of young people. Robinson’s (2015) promotion of creativity over knowledge and his attacks on direct instruction models of teaching are, by implication, attacks on systematic synthetic phonics instruction. His constructivist (Tracey and Morrow, 2012) approach to education is not compatible with effective reading instruction (Geary, 2007).


Further support for this resistance to a statutory phonics check in Australia comes from Adoniou (2017), who claims that its introduction in England has seen a 23% increase in children attaining the threshold without a corresponding improvement in reading assessments at the ages of six and seven. The improvement in English children’s reading scores at ten and eleven years old is not cited. She also notes that Australia ranked higher than England in the PISA international reading assessments (OECD.org, 2015) but fails to reference England’s rise in those rankings, Australia’s decline, and the fact that none of the tested cohort in England would have been subject to mandatory early phonics instruction. There is no mention of England’s rise to within the top ten of internationally ranked countries (including those with languages of far greater phonic transparency) in the PIRLS international reading assessment (timmsandpirls.bc.edu, 2016), or of the fact that England is ranked significantly higher than Australia in this measurement of ten-year-olds, or that the assessed English cohort would have been subject to phonics teaching and assessment. Adoniou (2017) is insistent that the specific teaching of phonics is not yielding returns for England because ‘English is not a phonetic language…and synthetic phonics programmes make phonological promises to students that English cannot keep…’ (2017, p.177). She makes these claims despite English being a language encoded using an alphabet to represent the forty-four phonemes, despite 90% of the language following regular encoding precepts (McGuinness, 2004) and despite no word in English being completely phonologically opaque (Gough and Hillinger, 1980; Share, 1995; Tunmer and Chapman, 1998). She also fails to note that where phonemes with different spellings are taught in conjunction, there is no ‘promise’ (Adoniou, 2017, p.177) of phonic regularity but an explicit articulation of this complexity of the encoding. Adoniou (2017) suggests that no additional solution to Australia’s declining literacy standards is necessary other than further, unspecified, teacher education, a maintaining of the status quo and specific (although again unspecified) interventions, mainly focusing on vocabulary and comprehension instruction, for those students who fail to learn to read.


Dombey (2017) proposes that reading is more about making sense of text than about privileging the identification of words and cites Taylor and Pearson’s (2002) study, which she suggests indicates that an approach combining enjoyment, syntactic analysis and phonetic examination in equal measure is more efficacious than phonics instruction alone. The study was not, as implied by Dombey (2017), a comparison of the two approaches but an analysis of the variety of perceived behaviours present for effective reading outcomes in schools across a number of studies. In the final study, six generalised behaviour criteria were identified for high-attaining reading outcomes, none of which cited either phonics or a balanced approach to literacy. Where a balanced approach did feature as an indicator, it featured in studies from the 1990s, when the approach was more prevalent, and where prior studies were named, direct instruction was cited as an important factor, with no mention of either phonics instruction or a balanced approach. It is apposite to mention that an important factor cited by the study as an indicator of reading success was the use of small phonics-focused intervention groups for children who were struggling with reading. Dombey (2017) cites Clay (1972) and Goodman (1995) as useful informants on the teaching of reading despite the weight of research discrediting their approaches (Pressley, 2006), and she goes on to suggest that Cattell’s (1886) research is evidence of whole-word decoding as a relevant strategy for emergent readers, despite the word superiority effect developing from phoneme recoding and not logographic recognition (Reicher, 1969).


All of these academics acknowledge the importance of phonetic approaches to word decoding for emergent readers, and the majority recognise synthetic phonics as the most effective strategy for decoding unfamiliar words. What they suggest, however, is that synthetic phonics instruction is not empirically superior to analytic phonics for the teaching of reading.


There are two key elements that need addressing when assessing the substance of this claim. The first is the evidence upon which the conclusions are based. All of these academics question the results of the Clackmannanshire research of Johnston and Watson (2005), which directly compared the two approaches, by criticising a research design that did not account for the myriad variables that could have influenced the gains, such as teacher efficacy, home instruction and the amount of reading occurring in other parts of the curriculum (Wyse and Goswami, 2008; Wyse and Styles, 2007; Ellis and Moss, 2014). This is disingenuous, as it fails to recognise the complexity of designing research studies within working schools where different curricula, foci and emphases exist. Schools are not laboratories, and isolating effects and controlling variables is almost impossible. Educational researchers are aware of this; data are necessarily compromised but not necessarily invalidated. Further criticism targeted the greater amount of time spent on synthetic phonics in the settings that utilised the approach, as opposed to the children taught analytic phonics. This exposes a crucial flaw in analytic phonics: the amount of time spent on phonic analysis is dependent on pupils’ word-recognition failures, on teacher monitoring and analysis of those failures, and on teacher decisions about when and what to analyse. Analytic phonics is de facto not systematic. It also highlights an advantage of the synthetic phonics programme utilised by the study and developed by Lloyd (McGuinness, 2004), who observed that young children had the capacity to focus for longer periods than assumed: pupils were taught as a whole class and received at least twenty minutes of daily instruction, and often more. That systematic synthetic phonics teaches phonics for longer and more consistently cannot be seen as a factor undermining the study: it is an inherent quality of the approach. Whatever the limitations of the research design, the improvements for the children exposed to synthetic phonics instruction over those exposed to analytic phonics are emphatic. When systematic synthetic phonics was compared directly with analytic phonics alone, the effect sizes were unequivocal: close to 1.0 for reading and greater than 1.0 for spelling (McGuinness, 2004).
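For readers unfamiliar with the metric, an effect size of this kind is a standardised difference between group means, commonly Cohen’s d. The sketch below shows how such a figure is computed; the scores in it are invented for illustration and are not the Clackmannanshire data.

```python
# Illustrative sketch only: computing a standardised effect size (Cohen's d).
# The group means, standard deviations and sample sizes are invented numbers,
# NOT data from Johnston and Watson's study.
from math import sqrt

def cohens_d(mean_a, sd_a, n_a, mean_b, sd_b, n_b):
    """Difference in group means divided by the pooled standard deviation."""
    pooled_sd = sqrt(((n_a - 1) * sd_a**2 + (n_b - 1) * sd_b**2) / (n_a + n_b - 2))
    return (mean_a - mean_b) / pooled_sd

# Hypothetical reading scores: synthetic phonics group vs analytic phonics group.
print(round(cohens_d(30.0, 6.0, 100, 24.0, 6.0, 100), 2))  # 1.0: groups one SD apart
```

An effect size close to 1.0 therefore means the average child in the synthetic phonics group scored roughly one whole standard deviation above the average child in the comparison group, a very large difference by the conventions of educational research.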


The second element referenced by many of these academics is the ambivalent conclusion of the Torgerson et al. (2006) research: that although there was evidence to suggest that systematic phonics should be integral to early reading, there was insufficient evidence to suggest that it was superior to analytic phonics. This is a very selective interpretation. What the conclusion actually said was:


‘The current review has confirmed that systematic phonics instruction is associated with an increased improvement in reading accuracy…However, there was little RCT evidence on which to compare analytic and synthetic phonics, or on the effect of on reading comprehension or spelling, so that it was not possible to reach firm conclusions on these.’ (2006, p47).


The conclusion on reading accuracy was firm: systematic phonics was superior.


These academics have conflated reading instruction with decoding instruction. Systematic phonics enables effective word decoding. It is not reading fluency instruction, nor is it reading comprehension instruction, text immersion or vocabulary instruction. It is, nonetheless, a prerequisite for all of these. Analytic phonics, along with semantic and syntactic cueing, can be an essential element of reading, comprehension and spelling instruction once the majority of the English alphabetic code has been comprehended and absorbed (Perfetti, 2007), the word superiority effect (Reicher, 1969) is starting to be established and automaticity is developing. It is not, however, as Torgerson et al. (2006) state, superior to systematic phonics for word-reading accuracy.



IF PHONICS DIDN’T WORK IN EARLY SCHOOLING, CAN IT WORK LATER ON?

The impact of instruction in phonics on reading is significantly greater in the first two years of schooling (Ehri, 2004). Effect sizes are considerably smaller when this instruction is introduced after the age of seven, with fluency instruction and comprehension instruction producing far greater effect sizes (NICHD, 2000). This led Ehri (2004) to suggest that beyond the age of seven, phonics instruction must be combined with other forms of reading instruction if maximum impact is to be attained. This is counter-intuitive to the Simple View of Reading (Gough and Tunmer, 1986), which indicates that decoding mastery is directly correlated with reading comprehension. Ehri (2004) makes this supposition directly from effect sizes and concedes that there is a paucity of research in this field. She surmises that the diminishing effect sizes for phonics instruction in older pupils may be a result of the difficulty of altering students’ habits and compensatory strategies when attacking unknown words: doing so may require the suppression of guessing words from initial phonemes and contextual cues, and a closer examination of spellings when reading words (Ehri, 2004).


Although specific research into the teaching of phonics beyond the age of seven is elusive, inferences can be drawn from wider studies. ‘The Reading First Program’, a reading intervention for struggling readers established as part of the No Child Left Behind (2001) legislation in the United States, included substantial decoding instruction for older pupils (seven- and eight-year-olds). Although the impact on reading fluency and comprehension was poor, the impact on decoding was significant (Kucan and Palincsar, 2011), indicating that phonics instruction for older children can be effective. The poor outcomes for comprehension have been blamed on a poorly conceived instructional strategy that lacked a systematic and consistent method (McKeown et al., 2009).


Further indication that phonics instruction in later years may be effective can be drawn from Vaughn et al.’s research into a thirty-week intervention for forty-five eight-year-old children identified with reading problems. Although the intervention included both fluency and comprehension instruction, its early weeks were weighted heavily towards phonemic awareness and letter-sound relationships. Seventy-six per cent of the sample met the success criteria at the end of the intervention, and further monitoring indicated that, of those, seventy per cent continued to be successful readers.


McCandliss et al.’s (2003) study of seven-year-olds whose word-attack strategy relied on decoding the initial consonant also indicated that an intervention focused on phonemic manipulation, altering one letter at a time, resulted in participants significantly outperforming the control group in a decoding assessment. These results were supported by Harm et al.’s study of word-building interventions in which participants wrote letters rather than focusing on speech activities. The pairing of orthography and phonology, the study concluded, was crucial to enhancing knowledge of phonemic structure (Harm et al., 2003).


The growing recognition that many reading difficulties only become apparent beyond the early years of schooling (Chall, 1967) has not been matched by research studies in this area. However, Leach et al. (2003) studied older pupils with late-emerging reading difficulties (eight- and nine-year-olds) and concluded that word recognition, decoding and spelling are significant impediments to progress in reading achievement beyond the early school grades. They suggested that late-emerging reading difficulties are being overlooked by educators and that more forensic assessment protocols are required in schools.


Summarising research into struggling readers, particularly in later years, Kucan and Palincsar (2011, p.354) conclude that, ‘We need to focus our efforts on minimizing the bottle-neck effects of the decoding problems experienced by some struggling readers…’


Given the poverty of research into the value of phonics instruction in later years, it is apposite to review studies of adult literacy improvement for any clues as to the efficacy of an approach which seeks to advance decoding strategies where non-phonic compensatory schemas may already be established. Despite there being little academic investigation into this area of reading prior to the 1970s (Brooks, 2011), Kruidenier’s (2002) analysis of randomised controlled trials in adult illiteracy indicated that phonemic awareness and word analysis instruction led to an increase in achievement for poor adult readers. Burton’s (2007) small-scale study of adult illiterates supported this with its conclusion that phonics instruction enhanced students’ progress, and a follow-up study by Burton (2008) found a positive correlation between students’ progress and the amount of phonics training that their teachers had received. Brooks (2011) nonetheless notes the lack of research into the efficacy of systematic synthetic phonics instruction for older learners, despite the positive indications for very young readers, and suggests that this approach ‘awaits convincing demonstration…’ (2011, p.192).





