
Why the 'Searchlights' won't go out.

The Plowden Report (Central Advisory Council for Education, 1967) concluded that, ‘Children are helped to read by memorising the look of words, often with the help of pictures, by guessing from a context…and by phonics, beginning with the initial sounds. They are encouraged to try all the methods available to them and not depend on only one method…’ (1967:212). This model of teaching reading had been embedded in England’s National Literacy Strategy (NLS) (DfEE, 1998) and the introduction of the ‘Literacy Hour’. The strategy stemmed from Clay and Cazden’s (1990) conception of reading as an elaborate activity requiring multiple sources of information to extract meaning from text (Stuart et al., 2008). It was explicit in its expectation that the teaching of reading should employ a variety of foci through its articulation of the ‘Searchlights’ model (DfEE, 1998), whereby unknown words were identified using a combination of four possible cues: phonic knowledge, contextual knowledge, word recognition and grammatical knowledge. A child encountering an unknown word could identify it by using phonic cues, recall it as a logograph or remembered word, or guess it from the context, the illustrations or the syntax.
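
The four ‘searchlights’ can be pictured, purely illustratively, as alternative cue sources brought to bear on the same unknown word. The sketch below is a hypothetical toy: the model itself prescribes no mechanism or ordering, and every word set and cue function here is invented for illustration only.

```python
# A toy illustration of the four 'Searchlights' cue sources. The DfEE
# model prescribes no mechanism or ordering; the word sets and the
# try-each-cue loop below are invented purely for illustration.

from typing import Callable, Optional

DECODABLE   = {"cat", "dog", "ship"}   # words the child can sound out
SIGHT_WORDS = {"the", "said", "was"}   # words remembered as wholes

def phonic_cue(word: str, context: str) -> Optional[str]:
    return word if word in DECODABLE else None        # phonic knowledge

def word_recognition_cue(word: str, context: str) -> Optional[str]:
    return word if word in SIGHT_WORDS else None      # word recognition

def contextual_cue(word: str, context: str) -> Optional[str]:
    # guessing from pictures or story knowledge (a crude stand-in)
    return "dog" if "pet" in context else None

def grammatical_cue(word: str, context: str) -> Optional[str]:
    # predicting from syntax, e.g. a noun is expected after 'the'
    return "dog" if context.rstrip().endswith("the") else None

CUES: list = [phonic_cue, word_recognition_cue, contextual_cue, grammatical_cue]

def identify(word: str, context: str) -> Optional[str]:
    """Try each searchlight in turn until one yields a candidate."""
    for cue in CUES:
        candidate = cue(word, context)
        if candidate:
            return candidate
    return None

print(identify("dog", "I stroked my pet"))  # 'dog', via phonic knowledge
```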


Phonic strategies were implied as part of the strategy, but we teachers had no motivation to apply them systematically (and were never taught to do so) and became more reliant on the onset and rime approaches in which we had received training. This approach required us to direct the reader to decode unknown words by using associative letter patterns in known words (Parker, 2018). When an unknown word was encountered, we would encourage the reader to reference known words with similar letter patterns and apply these patterns to the unknown word. The approach is predicated upon the reader ignoring the opening grapheme’s phoneme, identifying the subsequent letter pattern through association with a known word, replacing the opening phoneme of the known word with the actual phoneme, and blending the replaced phoneme with the identified letter pattern’s sound. We were not aware that this is a cognitively complex activity that may stretch the threshold of an emergent reader’s working memory (Parker, 2018).
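
To make the cognitive demands of this routine concrete, here is a minimal sketch of the analogy strategy as described above. The split-at-first-vowel rule and the tiny known-word list are illustrative assumptions, not part of any published programme.

```python
# A minimal sketch of the onset-and-rime analogy routine described above.
# The split-at-first-vowel rule and the known-word list are illustrative
# assumptions, not part of any published programme.

from typing import Optional

VOWELS = set("aeiou")

def split_onset_rime(word: str):
    """Split a word at its first vowel: 'light' -> ('l', 'ight')."""
    for i, letter in enumerate(word):
        if letter in VOWELS:
            return word[:i], word[i:]
    return word, ""  # no vowel found: treat the whole word as onset

def decode_by_analogy(unknown: str, known_words: list) -> Optional[str]:
    """Find a known word sharing the unknown word's rime, then swap onsets."""
    onset, rime = split_onset_rime(unknown)
    for known in known_words:
        _, known_rime = split_onset_rime(known)
        if known != unknown and known_rime == rime:
            # The reader must (1) recall the analogy word, (2) strip its
            # onset, (3) substitute the new onset and (4) blend -- all
            # held in working memory at once.
            return f"{onset}-{rime}, by analogy with '{known}'"
    return None  # no analogous word known: the strategy fails

print(decode_by_analogy("fight", ["cat", "light", "ship"]))
# f-ight, by analogy with 'light'
```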


We had received some training during teacher education in ‘Progression in Phonics’ (DfES, 1999), which had been added to the NLS (DfEE, 1998); it specifically referenced the ‘Searchlights’ model of instruction and emphasised the analysis of spoken words into phonemes. This analytic phonics approach did not attempt to teach the whole alphabetic code systematically but was utilised as a teaching and learning strategy when word identification failed during reading. However, as we had never learned to teach the alphabetic code systematically, we had no ability to deploy it even occasionally. The approach did not demand effortful decoding from pupils but encouraged learning through the analysis of letter patterns and associated sounds after the unidentified target word had been read to the reader by the teacher. As such, children were not required to decode unknown words but received scaffolding at the point of failure. Teachers in upper KS2 considered the programme relevant only to the lower years of primary school and did not at any time teach it or explicitly employ any of its principles. Our sole strategy was to identify the initial sound, sometimes link the next syllable to an associated rime, read the word to the child and perhaps repeat the word.


In 2012 the National Union of Teachers (NUT), the second largest teaching union, echoed our understanding that ‘Most adults do not read phonically. They read by visual memory, or they use context cueing to predict what the sentence might be…’ (Mulholland, 2014: 13). ‘Mixed methods’ (Mulholland, 2014: 13), they stated, were essential. The largest teaching union, the NASUWT, concurred that children ‘…need to use a combination of cues such as initial letter sounds and illustrations to make meaning from text…’ (politics.co.uk, 2013:3). Their survey (politics.co.uk, 2013) suggested that the vast majority of respondent teachers believed that children needed to use a variety of cues to extract meaning from text, in line with the fundamentals of the ‘Searchlights’ model.


This confusion did not suggest malevolence or professional indolence among teachers. The Rose Review (DfES, 2006) of the teaching of early reading noted that there was an imperative to improve the professional knowledge and skills of teachers, with a focus on Initial Teacher Training. This training was understandably concentrated on those entering the profession and those teaching in the lower primary years, but, as for many others (Chew, 2018), there was no additional training for teachers in upper KS2, and we continued teaching according to the ‘Searchlights’ model. The Rose Review (DfES, 2006) also charged Headteachers with ensuring sufficient training was available in schools, but this, according to Chew (2018), focused on Early Years and Key Stage 1 (KS1) and ignored KS2 in an attempt to mitigate the costs of training. Teachers only experienced systematic synthetic phonics (SSP) teaching when they were transferred to teach classes in the lower years. Pearson (2004) highlights the difficulty of changing practice linked to policy and research, suggesting that widespread implementation follows only from the profession’s generalised acceptance and acknowledgement of the research. This may have been the case with the ‘Searchlights’ model and why we persisted with its teaching practices beyond its demise and the developing national focus on SSP instruction.


The lack of training, particularly in the rationale of code-based instruction and its application for older readers, may have been aggravated by the continuing bellicose atmosphere within the academic world that became known as ‘The Reading Wars’ (Connor et al., 2004). With no consensus reached as to the most efficacious method of reading instruction, and with conflicting research suggesting systematic phonic approaches were not the only method of effective instruction (Wyse and Bradbury, 2022; Goswami, 2021; Bowers, 2020; Clark, 2017; Glazzard, 2017; Pearson, 2006), it was understandable that we continued with our learned and accepted models of instruction (Chew, 2018). This divergence was further intensified by the inevitable politicising of policy decisions (Pearson, 2004) and accusations of de-professionalising teachers (Gardiner, 2017). It is perhaps unsurprising that many of us chose to ignore the polemic and continued to employ strategies from the ‘Searchlights’ model with which we felt comfortable and which we had accepted as effective (Pearson, 2004). We therefore had little understanding of the place of phonics within the teaching of early reading, or of its application in word attack strategies for older readers, and were applying onset and rime strategies alongside contextual, logographic and semantic cues to promote word recognition. Thus, we were employing three of the four searchlights in the model whilst ignoring any reference to grapheme-phoneme correspondences. Kim (2008) has suggested that the implication of the ‘Searchlights’ model (DfEE, 1998) for teachers was that reading was a multi-sourced and elaborate activity; this enabled us to posit that children learned to master it intuitively, absolving pedagogues from having to teach sound-to-letter correspondence specifically and thereby liberating us to concentrate on the less routine and, Kim (2008) goes on to suggest, perhaps more enticing aspects of literacy through literature-dominant, meaning-based activities.


Thirty-five years before our training in the ‘Searchlights’ model, Chall’s (1967) analysis of all available historical research into early reading methods in the USA had concluded that ‘phonics is advantageous not only for word recognition but also for comprehension’ (1967:108) and that phonics instruction had a ‘cumulative effect that is crucial in producing the later advantage…’ (1967:108). Chall (1967) also analysed the studies to compare systematic phonics instruction with analytic phonics. In terms of word recognition, spelling, vocabulary and comprehension, children taught using systematic phonics instruction outperformed those taught using analytic phonics. Only in reading rate did those utilising an analytic phonics approach gain an advantage, and this advantage had been overcome by grade four (9- and 10-year-olds). Chall (1967) concluded that the results suggested most children in the United States were taught to read using a meaning-emphasis method, despite the research from 1912 to 1965 indicating that a code-emphasis method, one that emphasised learning the printed code for the spoken language, produced superior results.


Goodman (1982), however, was critical of Chall’s (1967) research, which separated codebreaking from reading for meaning, and he suggested her conclusions indicated a misunderstanding of how the linguistic code operated and was used in reading. ‘A language is not only a set of symbols; it is also a system of communication…’ (1982:127), he stated, suggesting that she had overlooked the ‘fact’ (1982:127) that phonemes did not really exist, and that written language is no less a code than oral language. He argued that even in an alphabetic system it is the interrelationship between grapho-phonic, syntactic and semantic information, and the switching between these cueing systems, that enables the reader to extract meaning from text. He maintained that reading was a process in which the reader picks and chooses from the available information only enough to approximate meaning. It was not, he stated, a precise perceptual process.


Goodman’s approach had developed from his landmark paper, ‘Reading: A Psycholinguistic Guessing Game’ (1970). Its central tenet was the rejection of reading as a precise process involving ‘exact, detailed, sequential perception and identification of letters, words, spelling patterns and large language units…’ (Goodman, 1982:33); rather, it was a selective process involving the partial use of available language cues based on ‘readers’ expectations’ (1982:33). The reader, he maintained, guesses words based on semantic and contextual anticipations and then confirms, rejects and refines these guesses in ‘an interaction between thought and language…’ (1982:34). Inaccuracies, or miscues, are inherent and vital to this process of psycholinguistic guesswork. Goodman (1982) justified his theory by linking it to Chomsky’s (1965a) model of oral sentence production, in which the precise encoding of speech is sampled and approximated when the message is decoded. Thus, Goodman (1982) maintained, the oral output of the reader may not be directly related to the graphic stimulus of the text and may involve ‘transformation in vocabulary and syntax’ (1982:38) as long as meaning is retained. The implication was that the reader reads for meaning, not for accuracy, and that semantics and context, not alphabetic decoding, therefore drive the reading process.


The parallels between Goodman’s (1970) theories and Clay and Cazden’s (1990) conception of reading as a multi-sourced communication model suggested that the ‘Searchlights’ model had been influenced by the whole language approach. However, despite the growing evidence that all words were fixated (even if only by the parafovea), and that cognisance of letter combinations played an important role in reading, Clay (1991) supported Goodman’s (1970) theory, maintaining that readers relied on meaning and sentences for rapid word perception, and that whole texts contained more meaning and were thus easier to read than pages, paragraphs and sentences. Clay (1991) further asserted that letters, containing the lowest level of meaning, were therefore the most difficult to read and, for reading instruction, the most irrelevant, despite the eye-movement research suggesting that letters, and their patterns within a language, were being fixated, thus indicating their relevance.


Although Goodman’s (1970) approaches were not explicitly adopted in England, his ideas gained traction and authority through Meek’s (1982) influence over teachers and parents. Meek (1982) believed that phonics instruction was ‘highly inefficient’ (1982/2012: 42) because English contained so many exceptions that it had become detached from its phonic and alphabetic constructs. She posited that the use of phonic rules constricted a child’s curiosity about words and therefore their ability to develop orthographic processing. She echoed Goodman’s (1970) principle of reading being the pursuit of meaning, arguing that children should therefore be encouraged to guess words and utilise contextual cues, including pictures and illustrations, to expose this meaning. The teacher, she stated, should not adopt a devised system but must respond to and promote the individual child’s efforts and approaches to comprehend a text. This chimed with the approach many of us had adopted as class teachers. Like Goodman (1970), Meek (1982) stated that children learn the alphabetic code through reading and not vice versa. Teachers were encouraged to concentrate on creating literate readers (Meek et al., 1983) and to develop ‘readerly behaviours’ (1983:111) in students, thereby promoting an inherent desire to read through an emphasis on narrative texts.


With whole language approaches gaining greater traction, Clay and Cazden’s (1990) influence on the multi-source ‘Searchlights’ model (DfEE, 1998) ensured that it was in essence a meaning-based approach (Stuart et al., 2008). Phonics did have a place in the whole language approach (Smith, 1975), but only after children had learned to read: a good reader, Smith (1975) asserted, was intuitively able to make sense of phonics. In other words, phonics mastery did not make a good reader; good reading enabled phonics mastery. This confounding of word reading and text comprehension (Stuart et al., 2008) suggested that the multiple reading skills identified by the ‘Searchlights’ model (DfEE, 1998) were acquired in parallel. It implied a rejection of the assertion of code-based approaches that the understanding of text was dependent on effective word identification. This fusing of approaches implied a compromise in which the correct approach was the one that was most appropriate at the time, with children encouraged to remember some words by shape, to use pictorial, contextual and semantic cues, and to engage phonic signals. With the articulation of the ‘Searchlights’ model in England’s National Literacy Strategy (DfEE, 1998), this mixed methods approach became embedded in teacher training and informed our understanding of reading instruction and thereby the way we taught pupils to read.


After the introduction of the National Literacy Strategy (DfEE, 1998) and the ‘Searchlights’ model, reading outcomes in England did rise initially, but after three years they flattened and then plateaued. Evidence that teaching phonics systematically was substantially more efficacious for early instruction than analytic phonics was further highlighted in the USA by the publication of the National Reading Panel (NRP) report (NICHD, 2000), which concluded that a code-based instructional approach helped children to read better than all forms of comparison group instruction. A meta-analysis of the findings, evaluating the effect of systematic phonics against non-systematic phonics (including no phonics) across a range of reading measures, concluded that systematic phonics instruction ‘should be implemented as part of literacy programmes to teach beginning reading…’ (Ehri et al., 2001:446). Garan (2001, 2002) criticised the NRP’s approach to meta-analysis, suggesting that it was fundamentally flawed and that the conclusions in the executive summary were inconsistent with the elaborated findings of sub-groups. The NRP itself, however, found phonics to be a useful instructional approach within a specific time frame (children aged five to seven) but not effective for older children, and concluded that systematic phonics instruction should be integrated with other reading instruction to create a balanced reading programme, with phonics instruction never being a total reading programme (NICHD, 2000).


In England, Brooks (2003) criticised the phonics element of the National Literacy Strategy (DfEE, 1998), recognising that the format of instruction within it differed from that highlighted in the National Reading Panel report (NICHD, 2000) on reading in the USA. He was further provoked to question the ‘Searchlights’ model and NLS (DfEE, 1998) framework after the publication of the Clackmannanshire study of Johnston and Watson (2004). Johnston and Watson’s (2004) research into 304 primary-school-aged children, taught reading through SSP and analytic phonics across thirteen classes for sixteen weeks, found that those taught by SSP were seven months ahead of their chronological reading age, seven months ahead of the other children in the study in their reading, and eight months ahead in their spelling. The classes taught by SSP were drawn from the most socially deprived backgrounds of all the study participants. These children were followed to the end of their primary school careers (aged 11), by which time they were three and a half years ahead of their chronological reading age and significantly ahead of age expectations in their reading comprehension and spelling (Johnston et al., 2011). Although the study was criticised for an absence of peer review, for a research design that conflated the phonic elements with other potential contributing factors (Ellis and Moss, 2013; Wyse and Goswami, 2008) and for the differing amounts of teaching time (Wyse and Styles, 2007), the dramatic contrast in outcomes gave the research significant influence. Johnston and Watson’s (2004) findings were supported by a meta-analysis (Torgerson et al., 2006) that included only randomised controlled trials; the analysis found a significant effect size for word reading accuracy where systematic phonics instruction was utilised.


Brooks (2003) identified that the major distinction between the SSP used by Johnston and Watson (2004) and the phonics in the NLS (DfEE, 1998) hinged on whether the target word was identified for the child in advance by articulation from the teacher or whether, as was the case in the Clackmannanshire study (Johnston and Watson, 2004), children decoded the word for themselves: in other words, the difference between analytic phonics and SSP. He recommended that a resolution to the differences between the two positions be reached through discussion, but concluded that the phonics teaching within the strategy was synthetic. Brooks (2017) later recognised the strategy’s approach as one that lacked coherence, since the majority of words encountered by emergent readers are unfamiliar, and it was, therefore, contrary to the instruction utilised by Johnston and Watson (2004).
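
The practical difference is easiest to see in the decoding routine itself. Below is a minimal sketch of synthetic decoding; the grapheme-phoneme table is an invented fragment of the English code, not a complete or authoritative mapping. In the analytic alternative, the teacher would simply have said the word first.

```python
# A minimal sketch of synthetic decoding: the child converts graphemes to
# phonemes left to right and blends them, with no prior articulation by
# the teacher. The grapheme-phoneme table is an invented fragment of the
# English code, not a complete or authoritative mapping.

GPC = {"sh": "sh", "ch": "ch", "th": "th",
       "s": "s", "h": "h", "i": "i", "p": "p", "o": "o", "t": "t"}

def decode_synthetically(word: str) -> list:
    phonemes, i = [], 0
    while i < len(word):
        if word[i:i + 2] in GPC:          # prefer the digraph ('sh')
            phonemes.append(GPC[word[i:i + 2]])
            i += 2
        elif word[i] in GPC:              # fall back to the single letter
            phonemes.append(GPC[word[i]])
            i += 1
        else:
            raise ValueError(f"grapheme not yet taught: {word[i]!r}")
    return phonemes

# The child, unaided, produces /sh/-/i/-/p/ and blends it into 'ship'.
print(decode_synthetically("ship"))   # ['sh', 'i', 'p']
```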


With the publication of the Clackmannanshire study (Johnston and Watson, 2004), and following the parliamentary House of Commons Education and Skills Select Committee report on the teaching of reading, the UK government commissioned a review into the teaching of early reading in England. The Rose Review of the teaching of early reading (DfES, 2006) acknowledged the conceptual rationality of children utilising letter-sound knowledge to decode unknown words and recommended SSP as the future of early reading instruction, whilst exposing the weaknesses of the multi-cueing ‘Searchlights’ model of the NLS (DfEE, 1998) and recommending its reconstruction. The review highlighted that the ‘Searchlights’ model did not best reflect the way children learned to read, chiming with Pressley’s (2006) suggestion that ‘the scientific evidence is simply overwhelming that letter-sound cues are more important in recognizing words…than either semantic or syntactic cues’ (2006:21) and that the approach may have been instructing children to read in the way that weak readers read. The review suggested reconstruction along the lines of ‘The Simple View of Reading’ (Gough and Tunmer, 1986; Hoover and Gough, 1990). As Stuart et al. (2008) noted, this may have been the theoretical paradigm shift away from Clay and Cazden’s (1990) model of reading as a complex, multi-source activity to a more linear, binary theory.


By identifying decoding as an essential predictor of reading comprehension, Gough and Tunmer (1986) indicated that without the ability to decipher the sounds represented by letters, and without the related development of orthographic processing, reading comprehension was restricted. A deficit in either decoding or language skills always resulted in a lower reading comprehension score. The Rose Review of the teaching of early reading (DfES, 2006) concurred and recommended the adoption of ‘The Simple View of Reading’ as the new conceptual framework in which the teaching of reading should be located (Stuart et al., 2008). The clarity of ‘The Simple View of Reading’, along with the momentum created by the results of the Clackmannanshire study (Johnston and Watson, 2004), and despite a change in government leading to the abandonment of Rose’s (2006) suggested new curriculum, led to SSP forming the bedrock of early reading instruction.
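
The model is conventionally written as a product, which is what makes a deficit in either component decisive; the notation below follows the common convention rather than quoting the review.

```latex
% The Simple View of Reading (Gough and Tunmer, 1986):
%   R = reading comprehension, D = decoding,
%   C = linguistic (language) comprehension,
% each conventionally treated as ranging over [0, 1].
\[
  R = D \times C
\]
% Because the relation is multiplicative rather than additive, if either
% D = 0 or C = 0 then R = 0: strong language comprehension cannot
% compensate for an inability to decode, and vice versa.
```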


‘Playing with Sounds’ (DfES, 2004) was replaced by a government-developed SSP programme, ‘Letters and Sounds’ (DfES, 2007), which explicitly warned against the utilisation of the alternative cueing strategies inherent in the ‘Searchlights’ model (DfEE, 1998). In 2011, a pilot of the phonics screening check (PSC) (DfE, 2012) revealed that 31.8% of the year one children screened achieved the threshold score. In 2012, all year one children in England were assessed by the check, with 58% achieving the threshold mark; by 2019, 82% of children in year one achieved it (DfE, 2019c).


SSP was further embedded in the Teachers’ Standards (DfE, 2011), which specifically stated that when teaching early reading, teachers should be able to ‘demonstrate a clear understanding of systematic synthetic phonics’. Furthermore, the National Curriculum for England (DfE, 2014b:14) included the expectation that ‘phonics should be emphasised in early teaching of reading to beginners (i.e., unskilled readers)’. SSP was now being driven through statutory assessment and curriculum expectations. To this was added a third driver: the school inspection framework. The Ofsted inspection handbook (DfE, 2019b) included expectations that younger children gain phonics knowledge, that they read books closely connected to that knowledge, and that inspectors assess how well staff teach children to read systematically using synthetic phonics and how well they assess children’s progress in gaining phonic knowledge.


Despite this, many teachers continued with the multi-cueing strategies of the ‘Searchlights’ model. Our reluctance to change may have arisen, as Chew (2018) argues, because the prevalence of mixed methods, driven by the National Literacy Strategy (DfEE, 1998), resulted in many teachers in upper KS2 having substantial experience of applying the ‘Searchlights’ model (DfEE, 1998) and little experience of phonics instruction. Chew (2018) suggests that the traction of the ‘Searchlights’ model resulted in the associated problem of KS2 teachers using multi-sourced approaches when assisting pupils with word attack strategies for unknown words. Furthermore, despite Pressley’s (2006) claim of overwhelming scientific evidence that the systematic teaching of letter-sound cues was essential, not all researchers and academics concurred, and so there was no definitive agreement that multi-cueing strategies should be abandoned.


Those ‘searchlights’ just won’t go out…


