Jelly and Bean

What is meant by decoding – still unresolved – March 2017

The term ‘decoding’ is often used to describe the transcription of written text to spoken language. I suggest it is this particular term, ‘decoding’, used in different contexts, that is responsible for much of the muddled thinking related to reading, writing and, in particular, synthetic phonics.

In order to try to clarify the concept, I am going to suggest we think of two separate codes, CODE1 and CODE2. These may not be codes in reality, but the names will suffice as labels as I try to distinguish between them.

When we speak to other people we are communicating a message. In return people communicate a message back to us. We have conversations. Messages mean something to us. If they did not, they would not be messages.

When we are not in a position to say a message to another person, we can write our message on paper (or iPad, computer, or text on a mobile phone) and get it delivered.

To do this we have to be able to ‘encode’ our message into symbols for the words we write down. The person receiving our message can then ‘decode’ the symbols into words and our message is delivered and understood.

I have used the words ‘encode’ and ‘decode’ here because they are terms in general use to describe this process.

So, logically, if we ‘encode’ spoken language to written language, whilst preserving the meaning of the message, or ‘decode’ written language to spoken language and we understand the message, there must be a ‘code’. This code, I am calling CODE1.

CODE1 is used in the transposition of speech to writing. It is also used when reading written text with understanding.

CODE1 transcribes a spoken message into a written message, and vice versa, so the meaning of the message is clear to the receiver. The meaning of the message is preserved.

Historically, the first messages sent and received by people were pictures. These were followed by the mark making of tradesmen. We can follow the history of writing until we get to the following stage.

The development of CODE2

Before alphabetic writing could develop, the ‘phoneme’ had to be discovered, or defined, or agreed upon by the people of a community. The speech of a community had to be listened to very carefully, so that the continuous stream of sound could be chopped into the smallest units that account for the change in meaning of the words in the message. These ‘smallest units’ are called ‘phonemes’.

These smallest units of sound, the ‘phonemes’, are no longer continuous entities. They are separate or discrete. They are called vowels and consonants and they are not pure single sounds. They vary according to the vocal tract of each speaker, local dialect and regional accent.

They also vary due to the articulation of other phonemes surrounding them in a word. The different sounds of the same phoneme are called ‘allophones’. Examples of allophones are the sounds of ‘p’ in ‘pat’ and ‘spit’, and ‘k’ in ‘kite’ and ‘skin’.

The existence of allophones led linguists to classify phonemes as ‘categories of sound’. The defining principle of a phoneme is that it changes the meaning of a word, e.g. ‘i’ and ‘a’ are different phonemes in the words ‘big’ and ‘bag’; ‘g’ and ‘t’ are different phonemes in the words ‘big’ and ‘bit’; ‘b’ and ‘d’ are different phonemes in the words ‘big’ and ‘dig’.
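The minimal-pair principle above can be sketched as a toy check: two words distinguish a pair of phonemes if their phoneme sequences are the same length and differ in exactly one position. The tuple spellings below are an illustrative, simplified transcription, not a real phonemic analysis.

```python
# Toy minimal-pair check: a phoneme is established when swapping it
# for another changes the word's meaning. Words are written here as
# tuples of phoneme labels (illustrative only).
def is_minimal_pair(word_a, word_b):
    """True if the two phoneme sequences differ in exactly one position."""
    if len(word_a) != len(word_b):
        return False
    differences = sum(1 for a, b in zip(word_a, word_b) if a != b)
    return differences == 1

big = ("b", "i", "g")
bag = ("b", "a", "g")
bit = ("b", "i", "t")
dig = ("d", "i", "g")

print(is_minimal_pair(big, bag))  # 'i' vs 'a' -> True
print(is_minimal_pair(big, bit))  # 'g' vs 't' -> True
print(is_minimal_pair(big, dig))  # 'b' vs 'd' -> True
print(is_minimal_pair(bag, bit))  # two positions differ -> False
```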

However, historically, once agreement had been reached on the phonemes of a language, the symbols used to represent them, the graphemes, could also be agreed for the language’s written form, or vice versa.

Hence, people now say that there are 44 phonemes (in UK English Received Pronunciation) represented by various combinations of the 26 letters of the alphabet. They call this the English Alphabetic Code. I shall call it CODE2.

There are obviously relationships between phonemes and their written counterparts, graphemes. These relationships are seen in everything we read and write. They are systematically shown in alphabetic code charts. They are taught to children in phonics lessons in schools. They form the basis of synthetic phonic and linguistic phonic teaching programmes. The graphemes appear in all written words. The phonemes are abstracted from all speech.

But CODE2 does not ‘encode’ spoken language into written language or ‘decode’ written language into spoken language because the phonemes and graphemes have no meaning per se. They simply code sounds for symbols and symbols for sounds. CODE2 is simply a relationship between sounds and symbols. No meaning of language, as a communication medium, is involved.

The basis for this conclusion comes from the Simple View of Reading. This is the model that synthetic phonic programmes and linguistic phonic programmes are based on.

According to the Simple View, reading comprehension is the product of ‘decoding’ and ‘linguistic comprehension’. In other words, ‘decoding’ is not part of ‘linguistic comprehension’. It is separate from it.
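The Simple View's claim is often written as R = D × C, with each component scored between 0 and 1. A minimal sketch of the arithmetic (the scores below are invented for illustration) shows why a product, rather than a sum, makes the two components jointly necessary:

```python
# Simple View of Reading: reading comprehension (R) is the PRODUCT of
# decoding (D) and linguistic comprehension (C), each scored 0..1.
# Because it is a product, a zero on either component gives zero
# reading comprehension, however strong the other component is.
def reading_comprehension(decoding, linguistic_comprehension):
    return decoding * linguistic_comprehension

print(round(reading_comprehension(0.9, 0.9), 2))  # strong on both -> 0.81
print(reading_comprehension(0.0, 1.0))            # cannot decode -> 0.0
print(reading_comprehension(1.0, 0.0))            # no comprehension -> 0.0
```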

So what does this ‘decoding’ mean (CODE2) that is different from the ‘decoding’ of CODE1?

The authors of synthetic phonic programmes use the term ‘decoding’, as in CODE2, to describe the sounding out of phonemes and blending them together to arrive at the pronunciation of a word. The children are pronouncing the symbols and blending them, synthesizing them, into the overall sound of the word. The understanding or comprehension of the word is not involved.

CODE2 is used when symbols are used for sounds and the sounds are blended together. The 44 phonemes and 176 combinations of graphemes and phonemes are the ‘code’ (CODE2) that synthetic phonic programme authors teach in their programmes.
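The CODE2 operation described here, matching symbols to sounds and blending them, can be sketched as a mechanical lookup-and-join. The grapheme-phoneme table below is a tiny illustrative fragment, not any programme's actual code chart; the point is that no dictionary of meanings is consulted at any step.

```python
# CODE2 as a mechanical procedure: map each grapheme to a phoneme
# label and blend (concatenate) the results into a pronunciation.
# Meaning plays no part in the procedure.
GPC_TABLE = {  # illustrative grapheme-phoneme correspondences only
    "sh": "/sh/", "ch": "/ch/", "i": "/i/", "p": "/p/", "o": "/o/",
}

def blend(word):
    """Sound out a written word left to right, longest grapheme first."""
    phonemes = []
    i = 0
    while i < len(word):
        # try the two-letter grapheme before falling back to one letter
        if word[i:i + 2] in GPC_TABLE:
            phonemes.append(GPC_TABLE[word[i:i + 2]])
            i += 2
        else:
            phonemes.append(GPC_TABLE[word[i]])
            i += 1
    return "".join(phonemes)

print(blend("ship"))  # /sh//i//p/
print(blend("chip"))  # /ch//i//p/
print(blend("shop"))  # /sh//o//p/
```

The procedure works identically on any decodable string, including non-words, which is precisely why it carries no linguistic understanding.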

When using CODE2 and speaking of ‘decoding’ written symbols, this DOES NOT include any linguistic understanding. It includes the pronunciation and blending of the symbols only.

When using CODE1 and speaking of ‘decoding’ messages, this DOES include linguistic understanding.

The Simple View of Reading changed the meaning of ‘decoding’ written text into ‘pronouncing’ written text. Prior to this use, ‘decoding’ meant that written messages were transcribed back to the spoken language of the community. After the introduction of the Simple View, as a framework to teach reading in the UK, ‘meaning’ was removed from ‘decoding’.

Educational psychologists have used the term ‘decoding’ for a long time. When they assess children’s reading ability they ask children to ‘decode’ non-words or pseudo-words. They do this so that they can assess the children’s phonic knowledge using materials which the children do not recognise. No meaning is involved in the use of non-words, and so it is only the children’s use of CODE2 that is assessed.


Since February 2015, I have come across the following paper.

Tunmer, W. E. and Chapman, J. W. (2012) The Simple View of Reading Redux: vocabulary knowledge and the independent components hypothesis.

There are two changes to the original Simple View of Reading in this paper.

First, the wording has changed and ‘decoding’ is now called ‘skilled word recognition’.

Tunmer and Chapman explain on page 9

“Gough and Tunmer (1986) originally defined skilled decoding (i.e., D) as the ability to “read isolated words quickly, accurately, and silently” but then added that they were “reluctant to equate decoding with word recognition” because of their firm belief that “word recognition skill (in an alphabetic orthography) is fundamentally dependent upon knowledge of letter-sound correspondence rules, or what we have called the orthographic cipher”.”

They go on to say, on page 10, “In subsequent articles the original authors of the SVR attempted to avoid potential confusion about how D is conceptualized in the model by explicitly equating decoding with “skilled word recognition”, which they defined as “the ability to rapidly derive a representation from printed input that allows access to the appropriate entry in the mental lexicon” (Hoover & Gough, 1990, p.130; Hoover & Tunmer, 1993, p. 6).”

The phrase ‘allows access to the appropriate entry in the mental lexicon’ shows that they still consider ‘skilled word recognition’ to be a necessary skill prior to accessing the meaning of the word. They consider that the blending of the sound/symbol correspondences (CODE2) should enable children to pronounce the word so that they can identify it, if it is in their spoken or heard vocabulary.

There are two issues here. The first is that the term ‘skilled word recognition’ is another ambiguous term. I suggest many people think that ‘skilled word recognition’ includes the meaning of the word, when it clearly does not, according to Tunmer and Chapman’s definition. They are using the term ‘skilled word recognition’ to mean the conversion of symbols into sounds in what they have called the ‘orthographic cipher’, i.e. CODE2.

The second issue concerns the independence of the two components, ‘decoding/skilled word recognition’ and ‘linguistic comprehension’. In this article Tunmer and Chapman concede that these components are not independent of each other. The vocabulary knowledge children already possess is a factor in both strands of the model.

In 2012, Tunmer and Chapman also published another paper.

Tunmer, W. E. and Chapman, J. W. (2012) Does Set for Variability Mediate the Influence of Vocabulary Knowledge on the Development of Word Recognition Skills?

(‘Set for variability’ (Venezky, 1999) is the ability to determine the correct pronunciation of approximations to spoken English words.)

The research in this paper concerns how children arrive at the pronunciation of a string of letters they see written down.

There are many-to-many correspondences between the symbols in the English language and the sounds associated with them. Children have to choose from their knowledge of these correspondences which sound to say when they see a particular symbol.
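The many-to-many problem can be made concrete: one grapheme maps to several phonemes, so sounding out a spelling yields a set of candidate pronunciations rather than one, and the reader must pick the candidate that matches a word in their spoken vocabulary. The correspondences below are a small illustrative fragment (‘ea’ is one sound in ‘bead’ and another in ‘bread’):

```python
from itertools import product

# One grapheme, several candidate phonemes (illustrative fragment):
# 'ea' is /ee/ in 'bead' but /e/ in 'bread'.
CANDIDATES = {
    "b": ["/b/"],
    "r": ["/r/"],
    "ea": ["/ee/", "/e/"],
    "d": ["/d/"],
}

def candidate_pronunciations(graphemes):
    """All pronunciations a reader could produce from known correspondences."""
    options = [CANDIDATES[g] for g in graphemes]
    return ["".join(choice) for choice in product(*options)]

# 'bead' split into graphemes b-ea-d gives two candidates; choosing
# between them requires the reader's spoken vocabulary, not CODE2.
print(candidate_pronunciations(["b", "ea", "d"]))
# ['/b//ee//d/', '/b//e//d/']
```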

However, ‘set for variability’ and the ‘orthographic cipher’ are topics for future discussion.
