- The London and Geneva Schools of thought about intelligence
- Spearman, factor analysis and g differences
- Piaget's constructivist account of development
- Mental intake speed (especially Inspection Time)
- Explaining 'differentiation' of abilities at higher g-levels
If there is something that lies behind many of the differences between people in comprehension and knowledge, what is its nature? There have been two main traditions of theorizing about the nature of intelligence. The 'essentialist' tradition of Charles Spearman and the London School holds general intelligence (g) to be a true mental power that is a key resource for most cognitive activity. This power differs quantitatively between individuals; and its level changes within an individual as a result of biological maturation and decline - or even because of shorter-term changes that drugs may one day mimic. By contrast, the 'constructivist' tradition of Jean Piaget and the Genevan School is that intellectual capacities develop as children change their ideas ('schemata') to accommodate their increasing experience of reality - rather as scientists alter theories to enhance their range, economy and predictive power. Individual differences in childhood are interpreted by Piagetians as maturational delays that will be remedied as children continue to interact with the environment.
After examining these traditions and the problems of demonstrating their adequacy, modern work on 'mental speed' is considered. Though cognitive psychologists initially disdained them as unpromising, measures of speed-of-intake of elementary information now have behind them a twenty-year record of research into their correlation with IQ differences. The technique of 'inspection time' (IT) testing is particularly discussed, along with other tasks (such as 'paced serial addition' and letter-reading speed) that mainly reflect differences in speed of extraction of information. If intake speed actually underlies intelligence, some of the developmental problems left by Spearman and Piaget can be resolved; alternatively, if a fast intake speed results from intelligence, this shows at least that g is of wider significance and is more closely linked to perception than if g were only 'academic intelligence.'
Watson and Binet differed radically in the use they had for the concept of intelligence. Watson had proposed how to condition and extinguish habits without regard to intelligence at all; while Binet had shown how to assess the level of a child's mental development - to which an educator would need to adapt. Yet these pioneers of applied, improvement-oriented psychology shared an important theoretical agreement - on a negative. Though for different reasons, neither thought of intelligence as a definable mental entity. Watson, the empiricist, shunned abstraction; and Binet, more alert to how 'science' can sanction mere ideas and words, doubted that IQ numbers had any 'real' basis. By the end of his work, Binet's concept of intelligence, far from pinning it down, emphasized its breadth. In 1911, Binet wrote "Comprehension, invention, direction and criticism: intelligence is contained in these four words" (Fancher, 1985). No more than Watson did Binet possess or want a theory of what intelligence was.
Early attempts to define intelligence as "judgment", "adaptability to new situations", "the eduction of relations", "the capacity to acquire capacity" produced no agreement at the first big American conference on intelligence testing in 1922 (Spearman, 1923, Chapter 1; Siegler & Richards, 1982, p.90). Yet psychology could not long remain content with intelligence being simply "whatever the tests test" - which E.G.Boring (soon to be America's leading professor of psychology) had articulated as being the fall-back position. The achievement of reliable, unbiassed and predictive measurement necessarily invites theorizing about what is being measured by an instrument. This may be relatively easy to specify - though even weight and temperature are not without complications for scientists as to what they really are. Or it may be intrinsically complex and involve much more, even on the surface, than is captured in numbers - as when levels of female sexual attractiveness are quite readily agreed by males while leaving researchers little the wiser about 'what attractiveness really is'. (Only lately has it become clear that beauty can be created and exist independently of ever having been perceived: for males will rate as most attractive composite photos of women's faces that involve novel exaggerations of characteristics which males generally favour - such as wide eyes, fuller lips and gracile chins (Perrett et al., 1994).) Within a few years of the development in the USA of the Stanford-Binet Test, the two main theories that were to dominate the twentieth-century psychology of intelligence were being put on show. One was championed in University College London by Charles Spearman (1863-1945); and the other was conceived by the Swiss psychologist, Jean Piaget (1896-1980) - working first at Binet's former laboratory in Paris and then, from 1929, in the Jean Jacques Rousseau Institute of Geneva.
The London and Geneva schools diverged in their subjects (adults, young children), methods (group testing, individual testing) and focus (differences amongst age peers, development across age ranges). Yet they agreed about Binet's discoveries, about the unity and generality of intelligence (whether or not they used Spearman's symbol 'g ') and about the unlikelihood of intelligence being 'learned'; and there would be no set-piece battles between them. It was the ideas of Spearman and Piaget that differed profoundly. Interestingly, in view of the London School's subscription to the hereditarian ideas of Sir Francis Galton, it was Piaget who was the more 'biological' and evolutionary in his approach; by contrast, Spearman inclined to view general intelligence as a specifically human feature - as symbolic intelligence largely is. At the same time, both men were markedly idealistic, given to rumination and complexity of thought (quite unlike Watson), and concerned to acknowledge human intelligence as an active causal force in the world and to establish a psychology that was relevant to man's spiritual nature, agentic status and high moral quest.
Charles Spearman was a well-born, serious and high-minded British Army officer who resigned his commission in mid-career to pursue his interest in the nature of human consciousness. Opposed to any idea that human learning occurred by mere association and retention, Spearman wanted to show the role of "the mind or 'soul' as the agent in conduct" (see Evans & Waites, p.56). After seven years of study for a PhD with Wundt, in Leipzig, he came across Galton's ideas and the technique of correlation. Spearman soon made the first of his own methodological breakthroughs in statistics by developing a method of finding the 'true correlation' between two variables; and this paved the way for his development of the technique of factor analysis.
Spearman made allowance for the unreliability of variables (often considerable in psychology, especially where single items are concerned) by dividing the correlation between two variables by the square root of the product of their reliability coefficients: thus, the poorer the reliability of the variables, the higher was the 'true' correlation between them after correction. (Spearman's point was that if tests X and Y correlated at .50 while the reliability of Test X was only .25, the X/Y r was as high as the reliability of X could possibly allow. In this case (assuming Y's reliability was an unproblematic 1.00) it could be said that the 'true' correlation between whatever X 'truly' measured and Y was .50 / √(.25 x 1.00) = 1.00.) Likewise, Spearman noticed a way of correcting correlations (r's) for any restriction of range in the variables involved. When only some narrow subsection of a population is used, as when psychologists study students for convenience, r 's between mental tests will be 'attenuated'. This is because test unreliability will be responsible for a larger percentage of the individual differences in test scores than it would in a study involving a normal (and thus wider) range of IQ's. Correlations between variables are higher when the full range of the variables is used because data points then involve greater relative reliability: by way of illustration, an IQ of 160 will be reliably different from the IQ's of many more people in the population than will an IQ of 106. The effect of attenuation in research is substantial: for example, r 's of .70 in the normal population will be attenuated to .45 if a study involves subjects in only one half of the IQ range (e.g. over or under IQ 100) (see e.g. Detterman, 1993). Researchers will miss a lot when they cannot study collections of people who range normally along the dimensions with which they are concerned.
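Both corrections can be sketched in a few lines. This is a minimal illustration of the standard formulas (disattenuation, and the usual correction for restriction of range); all the numbers are hypothetical and the function names are the present writer's, not Spearman's.

```python
import math

def disattenuate(r_xy, rel_x, rel_y):
    # Spearman's correction for attenuation: divide the observed r by
    # the square root of the product of the two reliabilities.
    return r_xy / math.sqrt(rel_x * rel_y)

def correct_for_restriction(r, sd_full, sd_restricted):
    # Standard correction for restriction of range: estimate what a
    # correlation observed in a narrow subsample (e.g. students only)
    # would have been across the population's full range of scores.
    u = sd_full / sd_restricted
    return (r * u) / math.sqrt(1.0 - r**2 + (r**2) * u**2)

# An observed r of .50, where X's reliability is .25 and Y's is 1.00,
# corrects to a 'true' correlation of 1.00.
true_r = disattenuate(0.50, 0.25, 1.00)

# An r of .45 found in a sample with half the population's spread of
# scores corresponds to a considerably higher full-range correlation.
full_r = correct_for_restriction(0.45, 2.0, 1.0)
```

With these illustrative figures the restricted r of .45 corrects to roughly .71 - of the same order as the .70/.45 contrast mentioned above, though the exact value depends on how sharply the range is curtailed.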
Spearman's development of the technique of factor analysis is another extension of the basic idea of estimating what correlations between variables would have been if other statistical influences - particularly, those detectable via other correlations - had not been at work.(1) It is rather as in the process of factoring in algebra, where complex expressions are simplified by extracting the common multipliers of all terms. In outline, factor-analytic procedure is as follows.
- Working from a matrix showing all the r 's(2) between tests, factor analysis first sums each variable's r 's with all the other variables. These sums are added together to yield the sum of sums - which is the total covariance in the study. (Covariance is the technical term for 'the going-together, or overall intercorrelation, of variables with each other'.) Factor analysis then ascertains each variable's proportional contribution to this covariance: each variable's sum (of its own r 's) is divided by the square root of the sum of sums. The resulting factor (the 'first factor') is simply the list of these contributions - one 'loading' for each variable. Figure II,1 provides an example.
Figure II,1: Extracting the first factor from a correlation matrix.
Note: The r 's in brackets, in the 'leading diagonal' of the r matrix, duplicate the highest correlation of each variable with any of the others so as to provide estimates of how well each variable correlates with itself. Such 'communality estimates' allow inclusion of each variable's own unique variance when estimating its contribution to overall covariation.
- Some variables will have had greater intercorrelation with all the other variables and will thus have contributed more to the covariance. These variables are said to be especially loaded on (i.e. correlated with) the first factor and they are the most important in any attempt to interpret the nature of the factor. (Usually these high-loading variables will have correlated especially strongly among themselves - as did Tests A and B in Figure II,1.)
- Using these proportional contributions (the loadings) of the variables, this source of variance (the factor) is deducted (extracted) from the original correlations. (Each r loses the product of its two constituent variables' loadings on the first factor: in Figure II,1, the revised r between A and B would drop to .72 - (.84 x .81) = .04.) If any statistically significant correlations remain in the matrix, the factor analytic process is repeated to extract new, independent factors.
- Resulting factors are then evaluated. In the analysis of mental abilities, which invariably correlate positively and substantially, the first factor - usually assumed to be the g factor - normally turns out to account for at least twice as much of the variance in the original matrix as do all subsequent factors put together. However, by multiplying variables' factor loadings, it is possible to calculate the r 's that would have occurred between variables if only two factors had been at work - e.g. perhaps the g factor and one other; and then to find a new single first factor that would account best for such hypothetical r 's. In this way, factors can be hypothesized that redistribute variance from g and a specific - perhaps from g and a specific 'vocabulary' factor - to a blending factor that might itself provide a good indicator of 'verbal ability'. (A preference for identifying such blended factors guided the work of Thurstone and Guilford; and to this day Gardner continues the search - see Chapter 1. But it is hard to keep blended factors both well-defined by particular tests and independent of each other: this is because mental tests involve g to such a great extent, as compared to specific factors.)
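The procedure outlined in the bullet points above can be sketched numerically. The three-test matrix below is hypothetical (it is not the matrix of Figure II,1); its diagonal holds communality estimates of the kind described in the note to that figure.

```python
import numpy as np

# Hypothetical 3-test correlation matrix; the 'leading diagonal' holds
# communality estimates (each test's highest r with any other test).
R = np.array([
    [0.72, 0.72, 0.30],
    [0.72, 0.72, 0.35],
    [0.30, 0.35, 0.35],
])

def first_centroid_factor(R):
    # 1. Sum each variable's r's (its row), then add those sums to get
    #    the 'sum of sums' - the total covariance in the study.
    row_sums = R.sum(axis=1)
    total = row_sums.sum()
    # 2. Each variable's loading on the first factor is its own sum
    #    divided by the square root of the sum of sums.
    loadings = row_sums / np.sqrt(total)
    # 3. Deduct (extract) the factor: each r loses the product of its
    #    two constituent variables' loadings.
    residual = R - np.outer(loadings, loadings)
    return loadings, residual

loadings, residual = first_centroid_factor(R)
```

A property of this 'centroid' extraction is that the residual matrix sums to zero overall; any sizeable individual r 's that remained would prompt the extraction of a second, independent factor, just as the text describes.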
One way of understanding what factor analysis achieves derives from the fact that a correlation between any two variables can be represented as the angle made by two straight, intersecting lines. A correlation will usually be represented as the cosine of the angle between two vectors: thus two lines at ninety degrees will stand for zero correlation; and two lines at 45° will stand for a correlation of +.71. Further variables may be represented by further lines that make stipulated angles with the previous two lines - though it may be necessary after a while to move into three or more spatial dimensions. Figure II,2A shows six variables that have various degrees of positive correlation with each other. Shorter lines are used to represent divergence into a third dimension.
Figure II,2A Geometrical representation of correlations between variables. (E.g. Variable a correlates very highly with e, about .70 with b, less with c; and least with d)
The resulting picture is as of a cross section of the spokes of an umbrella - but in a drawing from which the handle of the umbrella has been omitted. In terms of this analogy, finding the first factor would be equivalent to estimating where, in the drawing, the handle of the umbrella should have been drawn - see Figure II,2B.
Figure II,2B Geometrical representation of a first common factor (which would itself correlate as highly as possible with as many of the original variables as possible).
By successive extraction of factors, the analysis 'accounts' as economically as possible for the individual differences that have yielded correlations (usually including the correlation of each test with itself). In particular, it accounts for the differing degrees of correlation that are found amongst test items (or packages of items).
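The cosine mapping behind these geometrical pictures can be checked with a line or two of arithmetic; the angles used here are simply the ones mentioned above.

```python
import math

def angle_to_r(degrees):
    # A correlation is depicted as the cosine of the angle between two
    # vectors: 0 deg -> r = 1.0; 45 deg -> r = +.71; 90 deg -> r = 0.
    return math.cos(math.radians(degrees))

def r_to_angle(r):
    # Inverse mapping: recover the angle that would depict a given r.
    return math.degrees(math.acos(r))
```

So a pair of tests correlating at .50 would be drawn sixty degrees apart, and a perfect correlation collapses the two vectors onto a single line.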
As his new career developed, Spearman became increasingly involved with those human mental abilities that could be measured and studied, and thus, in due course, with Binet's tests. These he judged a "hotchpot" - though still a practical one that he presumed measured intelligence because the specific, non-intellectual elements in Binet's many items cancelled each other out. Spearman (1916) would resist the view (to be championed by the Edinburgh psychologist, Sir Godfrey Thomson (1916)) that tests X, Y and Z might all inter-correlate for quite different reasons: Thomson's theory (of 'multiple bonds') required distributions of correlations that were hugely improbable and would have to predict the eventual discovery of uncorrelated mental tests - some tapping only the abilities required for X and Y, and others tapping only the abilities required for Y and Z, and for X and Z.(3) Spearman's own analyses began with simpler tests that he hoped would realize Galton's dream by revealing underlying abilities that provided (at least in part) the psychological basis of all forms of intelligence. In 1904, Spearman had published data from village school children showing that sensory discrimination (for pitch and hue) and attentional readiness were 'truly' well correlated - once his correction for measurement error was made. Spearman's data had suggested to him that the ability to take in even the simplest information about physical objects might be responsible for people's differences in intellect. However, Spearman could not prove it. In particular, the Columbia psychologist, E.L.Thorndike (1874-1949), argued against him that, on the contrary, general intelligence might assist even sensory acuity (on standard tests). Moreover, although Binet's tests were 'complex' and of less immediate theoretical interest, they had the merit of yielding strong correlations with teachers' judgments without any correction for unreliability at all.
By 1909, therefore, Spearman compromised with Thorndike and supposed that children's differences in both sensory discrimination and teacher-assessed intelligence would be "based on some deeper fundamental cause" (see Deary, 1994a) - and thus need not themselves correlate strongly.
Once Binet's practical achievement was clear, Spearman became especially concerned to identify what there was in common among Binet's "gallimaufry" of "multitudinous tests". Spearman's concern was with the variables that typically loaded substantially on the first and biggest factor found in mental ability correlations.(4) It was Spearman who christened this the g factor: he was mindful of physicists' use of g for the constant of gravitational acceleration, and he thus expressed his hope of delivering a 'physics of the soul' (physicae animae, Spearman, 1923). Across his factor analyses, it turned out that the truest measures of intelligence - correlating as highly as possible with all the others and thus with the g factor - were those in which the testee had to handle the most abstract relationships. The relationship of X 'being essential to, involving, or being defined by' Y is of this kind: e.g.
BIRDS are to WINGS as CASTLES are to: GUNS / FLAGS / BATTLES / WALLS ?
PIGS are to BOARS as DOGS are to: LIONS / SHEEP / CATS / WOLVES ?
However, reflecting his earlier theoretical proclivities, Spearman was inclined to think that the ability to handle abstract relationships was determined primarily by some kind of 'mental power': this 'energy' would be in particular demand for working out (i.e. inferring) abstract relations, but was also necessary in varying degrees to drive other 'mental engines' as well. Thus Spearman came to play down the involvement in intelligence of "the apprehension of experience" and to emphasize "the eduction of relations and correlates."
Spearman anticipated the idea that there might be general laws about human information processing and he could be called the first cognitive psychologist. (It was only in the 1960's that academic psychologists would interest themselves in 'information processing capacities', and only in the 1970's that they would claim their chief interest as being, like Spearman's, in cognition.) In particular, Spearman's idea that mental energy might be more important for novel than for practised tasks anticipated Cattell's distinction between fluid and crystallized intelligence (see Chapter 1). Spearman also observed the greater 'differentiation' of intelligence (i.e. the lower correlations between different types of test) at higher levels of g (see Chapter 1): he referred to differentiation as a 'law of diminishing returns' whereby "the more energy a person has available already, the less advantage accrues to his ability from further increments of it" - rather as a ship's speed is not doubled by doubling the coal in its boiler (Deary & Pagliari, 1991). Yet Spearman was frustrated by events of his day. Following the Leipzig tradition of concern with reaction times, explorations were occasionally made of the relation between reaction speeds and IQ; but no promising correlations were discovered. Eventually, after big promises from James McKeen Cattell, Wissler's (1901) analysis of McKeen Cattell's data received much attention: the correlations between academic knowledge and laboratory abilities turned out to be slight - though chiefly because of restriction of range around what would probably have been very high average intelligence in McKeen Cattell's undergraduate testees.(5) Reflecting what were becoming lowered expectations of such 'simplistic' approaches, even a study by Spearman's young admirer (and eventual successor at University College London) was not followed up.
Cyril Burt (see Chapter III) (1909) reported superior performance at recognizing briefly illuminated 'spot patterns' by those Oxford children having higher teacher- and peer-rated intelligence (several of them the sons of dons and bishops); but his paper was to be overshadowed by Binet and Simon's work and would sit unremarked in the psychological literature for seventy years.
Spearman's concern was with the full grandeur of intelligence and, though he wished to consider it as deriving from some kind of 'energy', he had to be impressed by the decisive results of what was, after all, the equally important search for good, practical measures of intelligence. Usually it appeared that the more complex items were best at measuring intelligence - and studies of brain damage in rats would eventually confirm the greater impact of such damage on the learning of those mazes that were more complex (Lashley, 1929). Spearman was thus to remain a central theorist and methodologist in the intelligence test movement; and his enduring memorial was the classic multiple-choice test of g developed by his Scottish student, John Raven, from Spearman's illustrations for teaching purposes of how abstract reasoning can be used to complete spatial designs (as in Chapter 1, Figure I,2) by 'the eduction of relations and correlates'.
Yet Spearman's clarification of the centrality of reasoning to measured intelligence did not fulfil Galton's dream of finding the most basic manifestations and the developmental origins of intelligence differences. In appreciating the role of g in detecting and making use of abstract relations, Spearman had shifted the emphasis from the simpler processes of apprehension with which his work had begun. While Spearman and his London School followers were emphatic that intelligence 'really exists', and even that children's differences should be nationally registered on an "intellective index" which could help determine the right to vote (Hart & Spearman, 1912; Spearman, 1927), their failure to discover more about its 'essence' would prove an enduring problem. By the end of Spearman's life, American psychologists were following the lead of the Chicago psychometrician, Louis Thurstone (1887-1955) in trying to break g up into separate components - even though Thurstone (e.g. 1946, p.110) himself admitted that his separate components were invariably correlated and that "there seems to exist some central energizing factor which promotes the activity of all these special abilities." (In the above Figures II,2A & B, Thurstonian procedures might involve driving one factor through variables e, a and c and another through b, d and f. This is perfectly legitimate mathematically as a way of describing correlations amongst variables; but what is usually forgotten by psychologists who settle for such multiple 'oblique' (correlated) abilities is that the r 's between the oblique factors remain to be explained.(6)) Spearman's g factor will usually account for some fifty to sixty per cent of the covariance between abilities - as even critics admit (Gould, 1981); but its 'reality' was Platonic rather than Aristotelian - it lacked substantial underpinning from more basic psychological (or physiological) processes.
The case for talking of g could easily survive attempts to interpret it as resulting from biases (Chapter 1) and to break it up into many different components: despite the efforts of Thurstone (and, later, of J.P.Guilford (1959), with his 150 proposed abilities), positive correlations persisted between all mental abilities that were at all reliable.(7) Nevertheless, the dream of Galton and Spearman remained unrealized: any elementary bases of g differences had still to be found.
Like Spearman, Jean Piaget was exercised by the largest problems about human nature. His interest in the role of 'the dynamic flux of consciousness in evolution' had led him, as a gifted adolescent, to an interest in animals that he was able to indulge when appointed to a zoo curatorship before going up to university. Piaget's adult career followed a path almost as stony as Spearman's; but eventually, as behaviourism declined, he enjoyed some two decades of popularity with educators and developmental psychologists in the English-speaking world.
Piaget's central idea was that human intelligence was not some elusive form of energy, but rather a developmental construction. Through childhood, according to Piaget, we go through stages and styles of operation - as the whole human race may have done in evolution - and gradually resolve the problems that we encounter as a result of our earlier, immature approaches. For example, we come to reject our early, simple assumptions that bigger objects will be heavier, or that taller containers will tend to hold a greater volume of liquid. Piaget's notion (following the mighty Königsberg philosopher, Immanuel Kant (1724-1804)) was that developed human intelligence involves a set of 'constructions' that are virtually bound to arise as we move through childhood encountering problems for our theories about the real world and having to come up with better answers. Eventually, by mid-adolescence, most children have abandoned the risky mental short cuts; so they arrive at the stage of being able to understand 'formal', logical operations that involve symbolic reasoning. For Piaget, the growth of intelligence was a developmental journey on which humans are all equally embarked; so a veil could be drawn over children's markedly different individual rates of progress, and indeed over the fact that many adolescents never reach the stage of 'formal operations' at all. Just as agreeably, Piaget claimed that human intelligence - i.e. the intelligence that we almost all have as adults (a Binet Mental Age of at least eleven years) - develops interactively ('in interaction with the environment').
No one but the most hard-bitten behaviourist would ever have doubted that some kind of curiosity-driven exploration of the environment would be one important part of the developmental process; but Piaget's followers were especially attracted to the notion because it seemed an alternative both to the behaviourist's idea that the environment 'shapes' and 'conditions' us and to the crudely hereditarian idea that we are quite directly the products of our genes. (Piagetians did not always understand that genes can be expected to have their own causal influences partly by yielding people's selection of and response to particular environments - see Chapter III.)
After behaviourism began to wear thin in academic psychology, around 1965, the first of these attractions, the 'egalitarian' stress on how all children develop rather similarly, found a welcome in America. Contrary, in fact, to Piaget's own expectations, American psychologists believed that Piagetian ideas would lead to the hoped-for educational accelerations that had eluded behaviourists. However, the price was that Piagetian ideas would no longer be spared exposure to the large-scale empirical approach; so American and Canadian psychologists were soon producing the first reports indicating that 'Piagetian intelligence', far from being the non-g intelligence so often sought by psychologists, correlated perfectly well with traditional IQ, and especially with measures of fluid, untaught, general intelligence (gf) (see Tuddenham, 1970; Steinberg & Schubert, 1974; Kuhn, 1976; Humphreys & Parsons, 1979; Willerman, 1979, pp.98-99; Carroll et al., 1984). For example, Raven's Matrices and the Wechsler Intelligence Scale for Children correlated with Piagetian measures of conservation, seriation and class inclusion as highly as the reliabilities of the latter would allow - and as high as .80 when Spearman's correction was applied.(8) The history of the other favourite Piagetian view, as to the importance of 'interaction', is of another bumpy grounding of a big idea. At first, interaction had an apparently unfalsifiable status: for what reasonably intelligent child could be found that had not 'interacted with the environment'? Yet the facts gradually broke in: normal intelligence is found in many children whose cerebral palsy or spina bifida has drastically limited their ability to 'explore' or 'interact with' their environments.
The most striking case of 'interactionless intelligence' is the 99% palsied young Irish poet, Davoren Hanna (1990). Hanna had no capacity for voluntary movement at all - until age six when his mother noticed that he could sometimes squirm and fall off her lap in one direction or another. Soon he mastered the skill of falling forward with a finger pointing towards, say, 'an apple' on the floor; once shown letters, he quickly learned to fall in the direction of keys on an alphabet board. On an 'interactionist' account of intelligence, he should have been profoundly intellectually deficient. Yet by age eleven Hanna was writing affecting poems which soon won him international recognition: understandably, since he had often been recommended for lifetime institutionalization, one poem, 'The How the Earth Was Formed Quiz', concerned being 'tested' by psychologists who showed little recognition of his abilities or emotions. At thirteen he answered a journalist who asked if he knew anything about Moscow by saying Moscow had "the best red cabbage you'll find outside Chicago, long queues and poncey ballet dancers". As with many motorically disabled children, the most severe restrictions on 'interaction with the environment' had not in fact impaired his intelligence.
The grandest ideas of both Spearman and Piaget were thus hard to vindicate. Spearman and his followers could not pin down and quantify the capacity for experiential 'apprehension', let alone the 'energy' that Spearman claimed to 'fuel' all intelligent performance. For their part, Piaget and the Piagetians could not hide, circumvent or explain lasting individual differences in g ; and they could not demonstrate that ceaseless, 'constructive' developmental interaction was in fact necessary to normal intelligence - though none would doubt that interaction with the environment is often a result of intelligence. Nor could any particular differences between children in Piagetian 'interaction with the environment' be shown to yield the lasting individual differences in IQ that required explanation; and even Piaget's claims as to what were the main 'stages' of development came to be so qualified by the researches of his English-speaking followers as to leave little but Binet's premise that children's intelligence increases with age. Certainly, Spearman and Piaget provided psychologists with escape from the straitjacket of Watsonian environmentalism and from Binet's unwillingness to theorize at all. Followers of Spearman were free to recognize general human individual differences that did not seem to result mainly from differences in opportunity to learn; and Piagetians were free to say that child development owed more to maturation (and indeed to consequent interaction with the environment) than to being conditioned. Yet what was it that differed as between age peers - yielding countless effects on educability? What was it that matured? What explained difference in development?
It might be thought that to answer such questions would be the job of the experimental psychologist. However, in the behaviourist tradition of laboratory psychology, experimental psychologists were trapped into examining learnable 'reactions' and 'skills'. Because they could only hope to account for what looked as if it could be learned, that was all they studied. Nevertheless, because laboratory reactions are contrived to suit experimenters and have little intrinsic motivation or meaning, the behaviourist interest was chiefly in their speed; and this could itself have been promising if any attempt had been made to examine a range of subjects who differed in intelligence. The promise of such an approach had first been pointed out by the well-known behaviourist and personality theorist, Hans Eysenck, in a classic paper (1967). Eysenck had escaped to Britain from Hitler's Germany and had become, by the 1960's, a leading exponent of empiricism - sceptical, like Binet, of the dogmatism of medical men. Yet although, at London's Maudsley Hospital, he advocated and developed behaviourist techniques to alleviate phobias and unwanted obsessions, he did not follow B.F.Skinner, who scorned talk of traits, dimensions and allied mentalistic abstractions. After Piaget's death in 1980, Eysenck would be the world's best-known living psychologist, though his steady support for the reality of g and other deep-seated human differences cost him many honours.(9)
By 1980, reaction time (RT) had been studied by differential psychologists (especially by Eysenck's admirer, Arthur Jensen (1987) - see Chapter 1). After subtracting the 'motor time' (MT) component (i.e. the time taken to respond to the onset of a stimulus - like a single light - when no choice about it is required) from total RT (when choosing which of two lights came on), the remainder, 'decision time' (DT), has a correlation of around -.25 with IQ. RT tasks can be made substantially more complicated by requiring subjects to respond to relatively abstract and complex questions about richer displays: e.g.
'Which of three illuminated lights is, by its spatial separation from the other two, the 'odd man out'?'
'Is it true or false of the following display that the letter B is shown above the number 4?'
The IQ/DT correlation may then reach -.50. However, overall the DT correlations with IQ were either modest or seemed just 'common-sense' (when the DT task involved more complex instructions). Thus they could hardly shift mainstream experimental psychologists from their own conviction that RT depends on testees' levels of practice and strategy deployment - which themselves have intrinsically little to do with IQ. So human experimental psychologists, despite their long-standing interest in RT's, managed first to miss the non-zero correlation between RT and IQ; and, when it was forced upon them, they dismissed it as yet another modest product of the omnipresent operations of learning. (In fairness, they had plenty of psychometricians for company. For it had long been conventional wisdom that, on IQ tests themselves, the speed with which testees respond to IQ items bears no strong relation to the level of their intelligence (see Carroll, 1993 and Chapter IV). The important speed-advantage of the high-IQ person would prove to be of a different nature.)
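The RT decomposition described above (DT = RT minus MT) can be sketched numerically. All testee data below are invented for illustration, and the strong negative correlation in this toy sample merely shows the direction, not the size, of the reported r of about -.25.

```python
# Sketch of the RT decomposition described in the text: decision time (DT)
# is what remains of total choice RT once simple motor time (MT) is
# subtracted. All testee data here are invented for illustration.
def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

rt = [420, 390, 450, 370, 410]   # total choice reaction times (ms)
mt = [180, 175, 190, 170, 185]   # simple motor times (ms)
iq = [105, 115, 95, 120, 108]    # hypothetical IQ scores

dt = [r - m for r, m in zip(rt, mt)]  # decision time = RT - MT
r_dt_iq = pearson_r(dt, iq)           # negative: shorter DT goes with higher IQ
```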
In fact, it was studies of the speed of perceptual intake, not of behavioural output, that would provide the crucial breakthrough needed by the followers of both Spearman and Piaget. Perception had come to be neglected by behaviourists because it seemed so recalcitrantly innate and so uniform as between different people. Yet gradually it emerged that there were subtle yet reliable differences between people in how quickly they could take in, pick up, extract or apprehend consciously the most simple features of the world.
The classic device in the study of perceptual intake speed is the tachistoscope (T-scope), a box in which stimuli can be illuminated for mere fractions of a second to ascertain whether the testee is able to identify them. Importantly, the testee need not be asked to react with speed: the testee's 'perceptual speed' or 'inspection time' (IT) (once called 'sensory RT') - established over a series of trials lasting some twenty minutes - is simply the lowest duration of illumination that the testee requires so as to make largely correct judgments of the target stimuli. Ever since 1908, there had been occasional reports that T-scope abilities correlated with intelligence; but it was only around 1980 that several replicable effects were claimed from work in Adelaide and Edinburgh.
In these IT studies, testees had to indicate whether the longer of two parallel vertical lines (of markedly different lengths, 2½" or 3") was on the left or right of a central fixation point (see Figure II,3). If they could not verbally distinguish their left from their right, testees simply raised a hand according to the side on which the longer line had appeared. The target lines were illumined for various durations, around one tenth of a second, and then followed immediately by an illumined 'mask' of two overlapping lines (each 3½"): this prevented any image, 'icon' or after-image of the target lines persisting in immediate visual memory. Across a range of young adult testees, including a few who had a history of mild learning difficulties, correlations between IT and IQ were around -.70 (Nettelbeck & Lally, 1976; Brand, 1979; Brand & Deary, 1982). Detterman (1984) was technically correct to complain that these early studies suffered from "small numbers and extended IQ ranges." However, the effects were very strong, fully significant and involved an IQ range that was only 20% greater than normal: applying Spearman's correction, the true r was still .65. No experimental psychologists of this period would have expected these correlations to be other than the modest -.25 found for measures of DT with IQ. The long-sought correlate of intelligence in elementary information processing had possibly been found.
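Spearman's correction for a restricted (or, here, extended) range of talent can be sketched with the standard formula; the figures follow the example just given, though the exact variant of the correction used in the original studies is an assumption.

```python
# Sketch of the standard correction for range restriction/extension.
# u is the ratio of the target (normal-population) s.d. to the sample s.d.;
# here the sample's IQ spread was 20% wider than normal, so u = 1/1.2.
def correct_for_range(r, u):
    """Return the correlation expected in a population whose s.d. is u times the sample's."""
    return r * u / (1 + r * r * (u * u - 1)) ** 0.5

r_observed = -0.70        # observed IT/IQ r in the extended-range sample
u = 1.0 / 1.2             # shrink the 20%-extended range back to normal
r_true = correct_for_range(r_observed, u)   # roughly -.63, near the text's corrected .65
```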
Figure II,3 illustrates the three successive presentations that have been used most commonly in Inspection Time studies. The three fields of a tachistoscope are illumined in turn. They contain respectively: (Time 1) the fixation cue - often together with 'masking lines'; (Time 2) the target lines (varied randomly from trial to trial as to whether the longer line is on the left or the right); and (Time 3) the backward mask. First, while the testee has been instructed to look at the central fixation point (o), vertical target lines are briefly illumined so as to appear at either side of the fixation point. The testee has been asked in advance to watch for where the longer target line appears - to the left or to the right. The target lines are succeeded immediately by the masking stimulus: this prevents the testee experiencing any after-image of the target lines. With no pressure for speed of response, the testee then makes the required judgment. The experience for the testee is rather as for a batsman who is trying to detect the way in which a ball is leaving the hand of a fast bowler.
Soon, other measures of IT for similarly brief auditory tones and vibrations of the fingers turned out to show strong r 's with IQ so long as testees were not mainly university-educated (see Deary, 1992, 1995 for a review). [Other perceptual processes enabling remarkable feats operate quickly, automatically and without awareness (Jaynes, 1974/1992; Velmans, 1991) and doubtless involve such widespread brain activity (both in animals and man) as to be considered anything but 'simple'. However, such operations of 'parallel processing' involve mechanisms adapted by evolution to allow all of us to respond sensibly to the complex but repeated patterns of the real world. By contrast, the perceptual, 'inspection time' tasks described here involve the ability to use not patterned real-world information but highly particular, elementary information that is available only for durations measured in milliseconds. In these perceptual tasks, which expressly require the focussing of attentional resources on answering one elementary question, it turns out that there are important individual differences in what people can grasp.]
What was the explanation of these strong IT/IQ correlations? Could they be explained as causal effects of IQ on IT - as Thorndike had interpreted Spearman's correlation of IQ with attention? Modern cognitive psychology has many ways of disputing the reality of even the most basic and robust phenomena. Perhaps lower-IQ testees were over-anxious at such a challenging task (Irwin, 1984), under-motivated at such a boring task (Mackintosh, 1986), lacking some necessary "elaborated cognitive structure" (Ceci, 1990), unfamiliar with the psychological laboratory, unable to develop the right 'strategies' to assist them, or unable to pay attention and be ready for the onset of the illumination of the target material? Such ingenious attempts at explanation have encountered ten objections, as follows.
- Motivation. Low-IQ subjects enjoy IT-testing. This is because most IT trials use durations of illumination that are set on any one trial to be fairly close to what the testee has managed previously. All testees thus feel they are doing quite well at the task - for they have no idea of what durations (harder or easier) the experimenter is using with other testees. The experience of IT testing thus resembles that of being tested for IQ on an individually administered IQ test such as the Stanford-Binet or the Wechsler. In such testing too, testees are mainly being asked to solve problems that are not too easy and not too difficult for them. Thus the items are not found babyish or boring on the one hand, or too daunting and depressing on the other. At the same time, subjects do not know what items are used to test other testees, so they do not become either over-confident or downcast.
- Attention. Even learning-handicapped subjects cope perfectly well (with 97.5% accuracy) so long as the lines in the T-scope are visible for a fifth of a second. If such testees had any commonplace problem with attention, this would make such levels of performance quite impossible for them - as Langsford et al. (1994) spell out.
- Strategy acquisition. With only one significant exception [to follow, see (iv)], special tricks or strategies have not been found responsible for testees' achieving high or low IT's. In Edinburgh, Vincent Egan (e.g. 1994a, 1994b) found that giving subjects correct or incorrect feedback on their IT performance made no difference to their IT/IQ correlations: so having the opportunity to learn by results is not necessary to showing the fast intake speed that goes with a higher IQ. Nor was the IT/IQ correlation weakened significantly if testees had to make do without early practice at relatively long exposure-durations: subjects 'thrown in at the deep end' presumably had greatly restricted opportunities for learning or strategy-formation, yet they showed virtually the same IT/IQ r. This result has been confirmed in Edinburgh by Deirdre Quinn (1995). Quinn used 28 subjects of mean age 29.2 (s.d. 10.5) and slightly above-average intelligence (Standard Raven's Matrices mean 47.5 (s.d. 10.2) - though including some drinking men and women recruited from local bars). When tested in the usual way, with IT exposures gradually becoming shorter (i.e. harder), the IT/IQ correlation was -.52; and when testing began at the hardest durations and gradually became easier the r was -.43. It made no significant difference how testing proceeded: higher-IQ testees did not depend on practice effects for their shorter IT's. (For the 14 subjects who experienced the normal, 'slow-to-fast' testing procedure first, the r was -.65. This r was found under the most conventional and sensible testing arrangements, and not when maximum opportunity for practice had been given.) Again, the IT's of Egan's normal-IQ testees were unaffected by their having to solve a steady stream of mental arithmetic problems at the same time. This showed that IT requires no special ability to pay 'attention' in any everyday sense of that word.
Whereas RT tasks involve sensory and motor processes that may be highly specialized or open to practice, IT is more 'perceptual' and is for this reason able to show higher correlations with IQ (as Jensen (1994) now allows).(10)
- Movement after-effect. Some people are able to use an 'apparent movement' cue which they detect as the IT backward mask appears immediately after the target lines. For some testees, the offset (termination) of the target stimulus, followed immediately by the onset of the masking lines, makes the shorter of the two target lines seem to 'jump' downwards for a longer distance than does the longer line. This happens especially if subjects are highly practised or when, for ease of administration, the lines are presented on a computer-driven TV screen rather than in a proper T-scope.(11) However, whether a subject 'sees' such apparent movement usually bears no relation to IQ; and there is no known way of training people to watch out for the movement cue (Mackenzie & Cumming, 1986). So the IT/IQ r does not reflect differential use of this particular strategy by testees of higher and lower IQ's. Accordingly, IT/IQ r's are markedly higher if testees are selected to exclude any users of apparent-motion cues. Alternatively, when IT presentation is computerized, different chequered backward masks can be used on each trial (so that the tips of the target lines are sometimes masked and sometimes not): apparent-movement cues are thus rendered virtually unusable and IT/IQ r's return to the same high levels first obtained using T-scope presentation. Thus Stough et al. (1994), having recruited via newspaper advertising in Auckland 35 adults having a mean IQ of 109 and a range that was only 16% restricted (s.d. = 12.6), report a correlation of -.55 between IT duration required and Full Scale Wechsler IQ. The use of a 'flash' mask that provides visual 'noise' around the ends of the target lines after their exposure has similarly countered motion-cue use and yielded IT/IQ correlations of -.76 (among testees not using other conscious strategies) (Evans & Nettelbeck, 1993).
More generally, omitting the five per cent of subjects who show unreliable performance on IT tasks (for whatever reason) markedly strengthens the IT/IQ correlations: in 63 volunteer testees from unemployment bureaus, having a median IQ of 115, with a range from 80 to 130, Bates & Eysenck (1993a) found that dropping unreliable IT performers improved the IT/IQ r from -.45 to -.62.
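The effect of unreliable IT performance on the observable correlation can be formalised with the classical correction for attenuation. The reliability figures below are invented for illustration; the point is only that measurement error caps the observable r.

```python
# Sketch of the classical correction for attenuation: an observed correlation
# is limited by the reliabilities of the two measures involved.
def disattenuate(r_observed, rel_x, rel_y):
    """Estimate the true correlation given the two measures' reliabilities."""
    return r_observed / (rel_x * rel_y) ** 0.5

# Hypothetical reliabilities: if IT scores have reliability .60 and IQ .90,
# an observed r of -.45 corresponds to a true correlation of about -.61.
r_true = disattenuate(-0.45, 0.60, 0.90)
```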
- Individual strategies. Any one speed-of-intake technique will be of limited interest to conventional cognitive psychologists until they can spot the 'strategy' differences that account for people's varying scores. (Just as behaviourists once attributed all behavioural differences to 'conditioning', so cognitivists invoke 'strategies' - see Brand, 1987a.) Such psychologists thus profess indifference to IT phenomena despite some sixty studies of IT and IQ in non-retarded young adults finding on average a strong r even without using g's full population range. Moreover, since the major reviews by Nettelbeck (1987) and Kranzler & Jensen (1989), a further thirty studies have appeared. Though most recent studies use computerized presentation of lines made up of lights - with their attendant visual after-effects - and an over-representation of undergraduate subjects, correlations seldom dip below .40 (e.g. Deary, 1995); and notions that IT differences might be traced to background features such as exposure to video games or personality type (as mooted by Brebner & Cooper, 1986) have proved unfounded (Mackenzie & Cumming, 1986; Nicolson, 1995). Despite late-middle-aged testees having had so many more years in which to develop the stylistic and strategic idiosyncrasies that would introduce complexity and militate against simple linear correlations between two variables, the IT/IQ r is around -.55 (see Nettelbeck & Rabbitt, 1992). Overall, results are compatible with an estimate that the true IT/IQ r in the full population (including representative proportions of the young, the elderly and the retarded) would be -.75. 
Moreover, since correlations around -.50 are regularly achieved across many procedural variations, it must now be reckoned very hard to explain the IT/IQ r without referring to general mental speed of intake: after twenty years of research on IT, it is unlikely that any study will now discover key, naturally occurring strategies or short-cuts to success on IT tasks that explain away the IT/IQ correlations.
- Intelligent strategies? Even to suggest that intelligent 'strategies' are required for spotting differences in ultra-briefly presented line-lengths seems bizarre: for how can a person be said to 'do' anything 'intelligently' within one twentieth of a second - or even within the one fifth of a second within which the brain's distinctive processing of the stimulus has taken place (see Objection x (c) below)? To import the mentalistic language of plans and strategies to 'explain' individual differences in such automatic processing is strange. (Of course, a person may genuinely 'be' intelligent ('sharp', 'observant') in noticing some briefly occurring phenomenon - but that is precisely the claim of the speed theorist, not the strategy theorist!)
- IQ develops IT? If, over some developmental span, it was IQ that made for subtle psychological changes that eventually yielded better IT performance, then IQ should predict later IT. However, in 104 privately educated 12-14-year-old school children tested over two years, it was earlier IT (auditory) that predicted later IQ rather better (.44) than earlier IQ predicted later IT (.28) (Deary, 1995).
- IQ itself the basis for IT? If IQ just happened to convey some accidental superiority in IT, it would seem unlikely that this effect would be robust across the numerous variations in IT studies over twenty years: virtually no two studies have even attempted to use precisely similar procedures. IQ correlations with auditory IT (for tones that are so briefly presented as to be merely faint clicks) have certainly been lower, around an uncorrected r of -.40 (Raz et al., 1983; Brand, 1984; Nettelbeck et al., 1986; Deary, 1994b; Nicolson, 1995); but this is because many testees have pitch discrimination problems (i.e. are somewhat tone deaf) even for tones of normal durations - problems that are unrelated to intelligence. Although only a few estimates are available, visual and auditory IT themselves correlate at around .45 (Nettelbeck et al., 1986; Nicolson, 1995) - as well as can be expected in view of their own imperfections as pure speed measures (e.g. Barrett & Kranzler, 1994): rather than concoct ways in which IQ might convey unlearned advantages on such different tasks, it is more economical to envisage that one underlying variable, mental speed of intake, conveys advantages on IT and gf tasks - advantages which crystallize developmentally into differences in knowledge and understanding (gc).
- Low correlations? Variations in the strengths of the IT/IQ correlation are not hard to understand. Computerized versions of IT have problems because the TV screen cannot display stimuli reliably for very brief durations and because lines made up of lights generate strong after-images. Just as importantly, many studies have used undergraduates, who have a markedly restricted range of g. Even without testees below IQ 85, the original tachistoscopic method (using a mask composed of multiple lines, and beginning testing with many longer, easier exposure-durations) still delivers an IT/IQ r of -.65 (Quinn, op.cit.).
- Other tests of simple information processing functions also correlate strongly with IQ. They, too, seem to involve information-intake, or apprehension, rather than the conventionally intelligent operations of reflection, reasoning or problem solving that are the immediate requirements for success at tests of gf.
- Information Processing Speed. One such task requires spotting the lowest number in groups like:
29 24 30 23 28 26
This task is trivially easy for even minimally numerate children once the numbers have all been 'taken in' - yet it is this very process of apprehension that takes time and yields marked individual differences between testees: this test (the Information Processing sub-scale of the British Ability Scales) is one of the best measures of g all the way through childhood and adolescence (Elliott et al., 1978).
- PASAT. Another speed-of-intake task is 'paced serial addition' (PASAT). Testees listen to the tester reading out a succession of numbers, at a rate of around one every two seconds. Throughout, after each number is heard, in the gap before the next target number is read out, testees calculate and supply what they think is the sum of the latest two numbers which the experimenter has spoken - as is illustrated in Figure II,4.
The task can be made harder by decreasing the inter-stimulus gap, and the correlation of PASAT performance with IQ is an impressive .62 (Egan, 1988).
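The scoring rule of the PASAT task described above can be sketched directly: the correct response after each number is the sum of the two most recently presented numbers. The digit stream and responses here are invented.

```python
# Sketch of PASAT scoring: after each presented number, the testee should
# report the sum of the two most recent numbers.
def pasat_answers(numbers):
    """Correct responses: the sum of each consecutive pair of presented numbers."""
    return [a + b for a, b in zip(numbers, numbers[1:])]

def pasat_score(responses, numbers):
    """Count how many of the testee's responses match the correct sums."""
    key = pasat_answers(numbers)
    return sum(r == k for r, k in zip(responses, key))

presented = [3, 7, 2, 8, 5]            # invented digit stream
correct = pasat_answers(presented)     # -> [10, 9, 10, 13]
```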
- AEP's. Recordings of the brain's electrical response to the onset of a single tone have indicated a connection between perceptual intake and intelligence. IQ has often been reported to relate to the waveform patterning of the brain's electrical reaction to stimuli even when subjects are just lying still while tones are played and are not engaged in reporting the tones (or in any other problem-solving work). A hundred trials are usually given so that the part of the 'evoked potential' reaction that is due to the signal is, as it were, magnified in comparison with the part that is due to random noise (which itself, being random, is necessarily changing from trial to trial). The resulting, more reliable 'averaged evoked potentials' (AEP's) are the measures that are finally examined for their correlations with IQ (for a review see Matarazzo, 1992). For example, Gilbert et al. (1991), studying twenty 13-14-year-old children, found the Hendricksons' (e.g. 1982) 'string length' measure of AEP (indexing relatively great variability in the post-stimulus waveform of the potential) to be correlated at .41 with IQ. In large samples from Eysenck's base at the Maudsley Hospital, brain indices yield quite a variety of correlations - up to .45 (Bates & Eysenck, 1993b; Barratt & Eysenck, 1994); and relatively anterior brain locations yield stronger correlations. In Edinburgh, Peter Caryl and Yuxin Zhang have especially remarked the role of the earlier parts of the brain's 'average evoked potential' (AEP) reactions (occurring up to one fifth of a second after the onset of each tone, especially during the rising phase of the P200 component of brain reaction). Despite their thirty undergraduate subjects' restriction of IQ range, P200 records showed r 's as high as .60 with both IT performance and IQ (Caryl, 1994). 
The London findings indicate that the AEP/IQ relations are to do with post-sensory processing; and the Edinburgh findings locate the IQ-related AEP and IT phenomena at the very earliest stages of perceptual intake of information - prior to brain processes normally associated with cognition, recall or conscious thought.
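The trial-averaging logic behind AEP's - the stimulus-locked signal repeats across trials while random noise cancels - can be sketched as follows. The waveform and noise level are invented; the point is that averaging a hundred trials cuts residual noise by roughly a factor of ten (1/sqrt(100)).

```python
import random

# Sketch of evoked-potential averaging: each trial is the same post-stimulus
# signal plus independent random noise, so the average across trials is far
# cleaner than any single trial. All values are invented for illustration.
def simulate_trial(signal, noise_sd, rng):
    return [s + rng.gauss(0, noise_sd) for s in signal]

def average_trials(all_trials):
    n = len(all_trials)
    return [sum(vals) / n for vals in zip(*all_trials)]

rng = random.Random(42)
signal = [0.0, 1.0, 3.0, 1.0, 0.0]   # idealised post-stimulus waveform
trials = [simulate_trial(signal, noise_sd=2.0, rng=rng) for _ in range(100)]

aep = average_trials(trials)
# residual noise in the average shrinks to about noise_sd / 10
max_error = max(abs(a - s) for a, s in zip(aep, signal))
```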
- Letter-reading speed. A long-running programme of work in Germany has repeatedly yielded clear correlations between IQ and how quickly testees can read (sotto voce) through randomized strings of letters of the alphabet (Lehrl & Fischer, 1990). (This is primarily a test of individual differences in intake speed, since the alphabet in its normal, overlearned order can be spoken in half the time and with much less variation between people.)
- Infants' responses to novelty. Tests of how quickly infants get bored with stimuli and stop looking at them (presumably because intake and assimilation are complete) are presently the only substantial individual predictors of IQ in childhood (Bornstein & Sigman, 1986; McCall & Carriger, 1993; Colombo, 1993; Rose & Feldman, 1995(12)). Despite the unreliability invariably associated with the psychological testing of infants, fixation-duration while habituating predicts 3-year IQ better (r = -.45) than does the rate or pattern of habituation itself. It is distinguishable from usual indices of attention span and exploration; and, though the jury is still out, it "appears to be a measure of speed of processing" (Fagen, 1995). In token of this recognition that such measures are indeed precursors of IQ, newer scales for clinical testing of infant mental development include 'visual habituation', 'discrimination' and 'novelty preference' (Bayley, 1993). Tests of speed of identity recognition have also appeared to have substantial correlations with IQ (Eysenck, 1995). Such developments are entirely in line with the ideas of IT researchers, and equally with IT researchers' predictions that speed-of-intake testing would come to supplement and sometimes replace traditional estimation of gf (Brand & Deary, 1982).
There have now been twenty years in which psychological researchers could have found some special explanation for the IT/IQ r . Today, to persist with strategy-theorizing in the absence of such serious evidence must be wishful-thinking. Quite the most likely hypothesis at present is that IT tasks manage to tap basic speed-of-apprehension differences; and that these speed differences are causal - both directly, in themselves, and indirectly, over the course of development - to setting up the differences that are finally measured conventionally as the highly correlated variables gf and gc. All the above lines of research with IT and similar techniques suggest that g is essentially connected with 'perceptual intake speed' for elementary information and need no longer be considered merely as 'what the intelligence tests test'. Higher-IQ people are not especially characterized by the speed with which they respond to stimuli, make decisions or execute responses in real life; but they are clearly quicker at extracting the most elementary information from the world. Their intake speed will presumably mean that they can take in more information per unit time and that their final decisions and responses, when they are made, will be of higher quality for being 'better informed'. Although IT tasks themselves are usually less reliable than IQ (especially when computerized) and are correlated better with IQ than with each other, the only obvious ability that they require, in common with gf, has to be intake speed. Intake speed need not be at the level of neuronal transmission - though Reed & Jensen (e.g. 1991) have reported evidence linking visual pathway transmission speed very slightly to IQ. It may equally be that superior immediate retention of the earliest traces of a stimulus has the same effect - by allowing good decisions about a stimulus despite a minimal duration of exposure. 
(In a similar way, Just & Carpenter (1992) outline a theory of individual differences in working memory in which lower g is associated with loss of processing that has not been completed sufficiently quickly: e.g. embedded subclauses of sentences may be abandoned at lower g levels.) The main point is that g is associated with rapid extraction of information - much more than with rapid execution of responses. Yet it is not just Spearman's problem about the fundamental nature of gf to which 'intake speed' provides an answer. The biggest headache for Piagetian theorists, too, may be over. The Piagetian 'constructivist' view of intelligence likens g to a toddler's tower of bricks - with later, higher developments depending on earlier ones. This is plausible enough if the growth of gc is seen as one feat of childish 'accommodation' and knowledge-acquisition succeeding another. But this notion provides no coverage of three well-established features of g .
- Throughout childhood there are steady improvements even at simple mental tasks - e.g. at short-term memory for telephone-type number strings. Development takes children from an average Wechsler Digit Span of 3.25 (average length successfully recited both forwards and backwards) at age 6½ to a span of 5.5 by adulthood. The average adult performs this simple task of information processing and temporary storage at a level found only for the top one per cent of 6½-year-olds (see Carroll, 1993). These marked developmental improvements plainly require no special Piagetian 'accommodatory' or other breakthroughs to any realm of 'higher operations': and, indeed, children improve not suddenly but quite steadily across the age range.
- In apparent reversal of the 'constructive' Piagetian developments of childhood, old age witnesses a 'deconstruction' that Piagetian theory cannot begin to explain. Though many developmental and lifetime achievements of knowledge and apparent understanding remain unaffected in old age, basic gf and capacity for active reasoning (as measured on Piaget's own tasks) declines, especially from about 55. Not only should there be no such deconstruction of intelligence with age, but Piagetian 'interactionism' should actually predict that adults will improve their intellectual functioning right throughout the lifespan. Now, however, help is at hand. For IT's show big improvements through childhood - especially till age 12½ (Anderson, 1992; Deary et al., 1989); allied measures of recognition time for simple stimuli improve from 44ms to 23ms between 10 years and adulthood (Dempster, 1981); and, out of the entire range of tasks used by psychologists to monitor functioning with every gadget and computer programme of modern cognitive science, it is T-scope performance that shows the biggest deterioration with advancing years (even bigger than the decline of gf as conventionally tested). (According to the world's chief authority on the psychology of ageing, Timothy Salthouse (1992, 1993a, 1993b), almost 80% of the age-related variance in some measures of fluid cognition is associated with variations in perceptual speed.
Salthouse has written that "statistically controlling perceptual comparison speed greatly attenuated the age-related variance in measures of working memory"; and that "the results of [my own] and other studies indicate that the reductionistic analysis of age differences in cognition can, and should, be extended at least to focus on speed of information processing as an explanatory variable.") Thus the idea of g deriving essentially from underlying factors of perceptual and neural efficiency can provide constructivist theorizing about development with the concept transplant that it needs. The child's constructions of intelligence, or at least of knowledge, require, through childhood, an increasing speed-of-apprehension that is essential to raising gf and Mental Age; and those Piagetian abilities that are not crystallized into gc will be adversely affected by gf's decline.
- Beyond improving on the formulations of Spearman and Piaget, a third advantage of an 'extraction speed' account of g differences is to make some room for the latest fashions and findings in experimental psychology. Lately, a key notion for experimentalists has been that of 'working memory', alias short-term memory, or 'desk-top memory', i.e. how well people can take in and hold on to information over a few minutes (normally meaningless information, to maintain scientific purity). By the 1970's, Piagetian tests of 'conservation' (e.g. of the volume of a liquid as it is poured into a differently shaped container) and other candidates for the status of 'new IQ tests' had turned out to correlate quite simply with the old IQ tests. Just so today, 'working memory' has turned out, to the astonishment of experimental and cognitive psychologists, to correlate as highly with g as the limited reliabilities and validities of experimentalists' tests of it will allow. The relation is so striking that Kyllonen & Christal (1990) and Salthouse (1993a) have even proposed working memory itself as the source of intelligence differences; however, this cannot explain g's strong relations with IT tasks (which require no working memory in any conventional usage of that term). Indeed, it has actually been known for some while that doing well at Digit Span is best predicted by how quickly testees can take in the target letters or numbers in the experiment (Dempster, 1981). (How easily people recognize numbers presented for a few milliseconds was found to be quite the most important determinant of whether they could recall numbers over an interval of a minute.) That working memory correlates substantially with most other cognitive tests of the experimental laboratory (e.g. Kyllonen, 1994, p.314) attests to nothing so much as the familiar correlational potency of g itself.
It can now be appreciated that experimental psychologists have been indirectly concerned with the problem of the nature of intelligence all along, even if they abjured the political incorrectness of relating their work overtly to IQ and psychometric g . Piaget's ideas give no reason to link intelligence to experimentalists' working memory any more than to Digit Span or biological ageing; yet these links that have been discovered suggest a fundamental source of those intellectual developments of childhood that Binet and Piaget had noticed.
As intelligence yields key secrets of its nature, one very interesting problem remains. Just as Binet had insisted, and as Spearman himself had actually found, sizeable non-g mental differences are especially seen in people of higher g, MA and IQ (e.g. in the Verbal-Performance distinction and other bipolar contrasts - see Chapter I). In line with Spearman's idea, researchers have sometimes remarked that it is easier to distinguish independent and sizeable differences in literary sensibilities, scientific interests, sporting knowledge, historical curiosity and personality features among older and brighter children (Anastasi, 1970; Brand et al., 1994). Does it help in understanding such phenomena of 'differentiation' if perceptual speed differences are thought to provide the main basis of g differences?
Apparently the answer is 'yes'. For the relation between IT and g is itself stronger among lower-g testees. This tendency had been observed from the earliest IT/IQ studies in Adelaide and Edinburgh (Brand, 1979); and it is easy to confirm so long as testees range reasonably widely (Knibb, 1992). IT/IQ correlations can easily be as high as .80 for testees around IQ 60 (with s.d. = 15), but they are the usual .50 for young adult subjects of around IQ 110. Furthermore, Levy (1992) has observed that the high IT/IQ r's for lower-IQ testees may even be artificially depressed, because more unreliability of performance is found in the records of longer-IT subjects. Some psychologists have proposed that mentally handicapped people, young children and elderly people should not be included in IT/IQ studies because they "spuriously" inflate the IT/IQ r's. But such methodological concern reflects nothing but the egalitarian inclination of many psychologists to ignore g-differences in the population as the major feature of the human condition and to concentrate psychology on university psychology students, who are easier to motivate and less disturbing of beliefs in natural equality. Psychology should be about everyone - not about higher-IQ, middle-class aspirants who produce pleasing results for cognitivists, disunitarians and closet egalitarians. The proper thing to do is to look at both sides of the coin: g and intake speed have a true correlation of around .75 in the full population; but, even with efficient methods of IT-testing, the IT/IQ r drops to about .55(13) when only young adults of normal intelligence are tested, and to around .30 in students having IQ's above about 115 (assuming s.d.'s are similar).
Catherine Nicolson's (1995) study of 35 Edinburgh adults (mean age 23, "most ... not undergraduates") on a light-emitting-diode IT task (developed by Deary et al., 1989) provides an example (see Figure II.5): Nicolson's overall IT/IQ correlation was -.64, but there was no correlation at all for the subjects in the top half of her IQ distribution, and a correlation of -.80 across the bottom half of the IQ range.
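The pattern Nicolson observed - a strong IT/IQ correlation in the lower half of the IQ range and little or none in the upper half - can be illustrated with a toy simulation. The functional form, sample size and noise level below are illustrative assumptions of mine, not estimates from any study; the point is only that when mean IT falls steeply with IQ at low IQ levels but flattens at high levels, splitting a sample at the median IQ reproduces the asymmetry in the subgroup correlations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumptions: IQ ~ N(100, 15); mean IT (in ms) declines
# steeply with IQ at low levels and flattens out at high levels.
n = 2000
iq = rng.normal(100, 15, n)
it = 200 * np.exp(-iq / 30) + rng.normal(0, 5, n)  # ms, plus noise

median_iq = np.median(iq)
low, high = iq < median_iq, iq >= median_iq

r_all = np.corrcoef(iq, it)[0, 1]
r_low = np.corrcoef(iq[low], it[low])[0, 1]    # strongly negative
r_high = np.corrcoef(iq[high], it[high])[0, 1]  # much weaker

print(f"overall r = {r_all:.2f}")
print(f"lower-IQ half r = {r_low:.2f}")
print(f"upper-IQ half r = {r_high:.2f}")
```

The same flattening curve also means that, as in the studies cited above, restricting testing to high-IQ students will understate the IT/IQ relation for the full population.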
It has been a remarkable feature of twenty years' research on IT that so many investigators have used undergraduate subjects and thus missed the clear-cut effects that are obtainable in relation to g. In 1946 the distinguished U.S. psychometrician Quinn McNemar observed: "the existing science of human behaviour is largely the science of the behaviour of sophomores" (Newstead, 1979, p.384). Sadly, despite today's staggering public outlays on psychology, this remains true - presumably because it suits most psychologists to keep their heads well and truly in the sand.
Yet even if IT-testing agrees with psychometric testing in finding g to be more important to differences among the lower-IQ, and less unitary (i.e. less important in accounting for mental ability variance) amongst higher-IQ testees, how can this be explained? One possibility, first advanced by Ian Deary in Edinburgh (see Brand, 1984), is to point to how intellectual 'investment opportunities' change with development. The idea is that, once a person has reached a certain level of intelligence, options present themselves that were not previously available, yet between which choice is necessary (in terms of how time and energy are to be spent). Ingeniously, however, Michael Anderson (1992; and see Brand, 1988) has suggested an alternative focus on detectability: this idea is that the relation between intake speed and specific measures of verbal, spatial, logical, creative and memory abilities might be likened to the relation between a tape-recorder and its tapes. Thus, a user's tapes may be genuinely varied in their quality, in uncorrelated ways; but these quality differences between them will hardly be noticed unless the tapes are played on a machine (Anderson's 'Basic Processing Mechanism') that does not itself introduce random noise that makes all the tapes seem of low quality. Anderson's idea is that a good level of mental speed (or 'basic processing efficiency') does not cause differentiation of abilities in the higher-g range, but rather allows differences that were always present to be observed. At lower levels of speed and g, a testee will not be able to perform well on any mental tasks; whereas, if g is high, it can be detected that the subject is better at some types of task than at others.
Deciding between the development and detectability hypotheses will depend largely on whether differentiation occurs at higher levels of CA as well as of IQ: for the development hypothesis requires time over which investment and crystallization of gf can occur. The largest-ever study of differentiation (drawing data from 10,000 13-16-year-old schoolchildren in Éire) reports that mental abilities themselves differentiate according to g more than to age, and thus favours the detectability hypothesis (Deary et al., in press). On the other hand, evidence from past studies is that the development hypothesis is required to account for educational attainments and personality features. It seems likely that differentiation of all kinds increases with both g and IT; and evaluating whether it increases as a function of time x IT or of IT alone will depend on the age at which IT's own developmental improvement is eventually agreed to stop. Whatever the final story, mental intake speed will join psychometric g as a variable that will require close consideration, not neglect, by genuine researchers of personality and individual differences.
Instead of uniting their forces against the vaunted 'mindlessness' and anti-realism of thought-outlawing empiricists and language-worshipping idealists, twentieth-century researchers of intelligence have tended to divide in their pursuit of the different approaches of Spearman or Piaget. Meanwhile, many experimental psychologists and modern cognitive scientists have preferred to try to neglect general intelligence altogether. Today it can be appreciated that the followers of Spearman and Piaget were pursuing largely complementary approaches; and the emergence of gf as linked to elementary information-intake solves historical problems that long beset both camps.
General intelligence is no longer just 'what the tests test' - whether the tests be those favoured by Spearman or Piaget. Rather, g is what develops, enables differentiation and perhaps itself differentiates in the first twenty years of life and beyond. Its fundamental nature as speed-of-intake may itself one day be broken down into sub-components - but these sub-components will be systematically interdependent, not the entirely independent processes into which cognitive psychologists hope to break up g. For g itself is a substantially unitary variable that is now known to have strong connections with a wide range of procedures that can index in common only something like the capacity for taking in simple information and registering elementary perceptual features of the world. The arch-critic of all 'reification of factors', Stephen Jay Gould (1981/82, p.268), has declared his agreement that "under certain circumstances, factors may be regarded as hypothetical causal influences." Today, it is surely time for Gould and his supporters to admit that the relevant circumstances have now arisen - or to spell out what further circumstances they have in mind. Nathan Brody (1992, p.349) has summarized the matter thus:
"The first systematic theory of intelligence presented by Spearman in 1904 is alive and well. At the center of Spearman's paper of 1904 is a belief that links exist between abstract reasoning ability, basic information-processing abilities and academic performance. Contemporary knowledge is congruent with this belief."
Perhaps even Binet himself would not have been too displeased: for at the very outset of his work on intelligence, in 1890 (p.582, transl. J.B.Carroll), he had observed: "What we call intelligence in the narrow sense of the term consists of two chief processes. First, to perceive the external world, and then to reinstate the perceptions in memory, to rework them, and to think about them." Anyhow, with today's advance to 'mental intake speed' in mind, renewed interest attaches to the question of how g-differences arise in terms of the venerable influences of nature, nurture and their interaction with each other. For, whatever biological influences may be at work, some types of ability will surely reflect people's environmental familiarities - especially those familiarities that they have actively cultivated - and thus yield the differentiation of intelligence that Binet had wished to recognize.
- Spearman's view of the generality and importance of general intelligence meant that he was relieved by Binet's practical psychometric achievements and by the clear and strong g factors yielded by such mental tests. Spearman was thereafter distracted from his early search for simple, information-processing functions that might underlie intelligence - not least because intelligence itself could have been a partial cause of superior performance on measures of sensory discrimination and attention.
- Piaget's account of the development of intelligence through childhood allows for children's constructive 'interaction with the environment' in improving their 'schemas' of the world. However, it does not explain abiding individual differences in general intelligence or the decline in fluid intelligence that often occurs in old age. Like Spearman, Piaget left big gaps in his programme and did not attempt to vindicate his belief in the developmental role of nature and maturation by using twin studies.
- Many measures of speed of intake of information correlate substantially with IQ - notably Inspection Time (IT). IT is the length of exposure needed by a subject to see target stimuli presented very briefly in a tachistoscope (or, less satisfactorily, via a TV screen or miniature lights controlled by computer). IT probably correlates at around -.75 with g if reliable measures are used and if subjects have the same range of g levels as does the normal population (i.e. including the lower levels of g that are found in children, the elderly and the mentally handicapped).
- Such individual differences in intake speed probably play major roles both in psychological development and in differences in development. Intake speed differences (or processes close and causal to them) either cause g differences or are just as affected by g as are dimensions of knowledge and reasoning ability. In relation to general intelligence, mental intake speed is either basic to it, integral to it, or both. The g factor emerges in a new light from research on IT: it can no longer be identified superficially with reasoning ability or knowledge; and it is no longer just 'what the tests test'.
ENDNOTES to Chapter II
Prior to the development of factor analysis proper, Spearman had developed the 'tetrad' method for seeing whether a matrix of correlations contained more than the g factor. He would calculate the product of any two correlations, r_ab and r_cd; from it he would subtract the product of the other available correlations, r_ac and r_bd. If, over repeated (and indeed laborious) calculations, any such product differences were non-zero, he would conclude that more factors than just g would be needed to account for the whole pattern of intercorrelations in the matrix. For example, if tests a and b are especially highly correlated because they both reflect (say) clerical ability as well as g, then the above exercise will yield a non-zero outcome.
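In symbols (the notation here is a restatement of the endnote, not Spearman's own): under a model in which only g is at work, each test's correlation with every other is the product of the two tests' g-loadings, so the tetrad difference necessarily vanishes.

```latex
% One-factor (g-only) model: r_{ij} = \lambda_i \lambda_j,
% where \lambda_i is test i's loading on g.
F \;=\; r_{ab}\,r_{cd} \;-\; r_{ac}\,r_{bd}
  \;=\; (\lambda_a\lambda_b)(\lambda_c\lambda_d)
  \;-\; (\lambda_a\lambda_c)(\lambda_b\lambda_d) \;=\; 0
```

A consistently non-zero F therefore signals a group factor - such as the clerical ability of the endnote's example - over and above g.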
To handle negative correlations, all correlations in the matrix may be squared; or one of the variables may be 'reflected' (e.g. extraversion can be renamed as introversion).
Evans and Waites (1981, p.129) hope that Thomson's (1916) interpretation of mental tests' intercorrelations can stand even while admitting Louis Thurstone's idea of several distinct abilities to have fallen. However, they make the same prediction from Thomson's theory: "If this picture is correct then it should, in principle, be possible to devise a series of cognitive tests each of which taps distinct cognitive systems, and which yield scores which are not mutually intercorrelated." Evans & Waites further cite the work of Stevenson et al. (1976) as having the promise that they seek; but still, today, no battery of the requisite uncorrelated mental tests has resulted from this work - any more than from J.P.Guilford's or Howard Gardner's programmes (see Chapter I). The noted behaviourist and psychometrician, Lloyd Humphreys (1971, 1994), maintains a similar view, that intelligence is 'the acquired repertoire of information, knowledge, and intellectual skills available to a person at a particular point in time': but from a 'repertoire' it really should be possible to sample plenty of routines without sampling others.
Like Binet, Spearman took the view that Binet's programme of multiple tests had succeeded because errors in one form of assessment were roughly cancelled out by the errors made by the others. Spearman held that IQ probably correlated as highly as .90 with g. However, Spearman was concerned to identify the nature of g more closely if possible.
Prior to the big expansion of university education in the 1960's, IQ levels of students were probably around 140 - as would be expected from the students being highly selected (as within the top 1% of the population in terms of academic achievement) and IQ correlating at .50 with educational attainment. Herrnstein & Murray (1994) consider that American universities may have become progressively more selective by IQ-type criteria throughout the twentieth century, but there is no direct evidence for this proposition.
It also tends to be forgotten that Thurstone himself had no objection at all to using factor analysis to seek the structure of biological reality: he believed his own preferred 7 factors (at first hypothesized to be independent) were indeed more real than g and just as heritable (e.g. Brand, 1984; Gould, 1981).
The declared opponent of unconditional reification, S.J.Gould (1981/1992, p.309) correctly observes: "The very fact that estimates for the number of primary abilities have ranged from Thurstone's 7 or so to Guilford's 120 or more indicates that vectors of the mind may be figments of the mind."
Kuhn's .80 correlation involved middle-class 7-year-olds of around IQ 109. Her r's were lower (.33 after unreliability correction) for middle-class 11-year-olds of around IQ 114 - a result indicative of differentiation of abilities at higher g levels (see the last Sections of Chapters I and II).
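The 'unreliability correction' mentioned here is presumably the standard correction for attenuation; in its usual form, with the two measures' reliabilities taken as given:

```latex
% r_{xx} and r_{yy} are the reliabilities of the two measures.
r_{\text{true}} \;=\; \frac{r_{\text{observed}}}{\sqrt{r_{xx}\,r_{yy}}}
```

Since reliabilities are below 1, dividing by their square-rooted product raises the estimate, which is how a more modest observed correlation becomes .33 after correction.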
Although Eysenck became Britain's best-known psychologist by the 1970's - and stood behind only Freud and Piaget in international indices of how often psychologists' work is cited in learned journals - he was not knighted. Eysenck had especially upset the British establishment in the 1950's by being one of the first to suggest that Fascists and Communists might have something psychologically in common (Eysenck, 1954). His autobiography, Rebel with a Cause (Eysenck, 1990), documents his many anti-establishment involvements.
IT may show changes over such exposures as the 38,400 trials given to two subjects over 60 days by Deary et al., 1993; but this is of little relevance to explaining IT differences as ordinarily tested.
Since a TV screen is 'refreshed' only at 25 millisecond intervals, scheduled exposure durations are seldom achieved. Precise computer-controlled durations of illumination are enabled by light-emitting diodes; but these produce much stronger after-images and apparent-movement effects than do T-scope presentations. Modern technology is much better at mimicking processes of decision-making and output than at mimicking real-world input - this is probably the secret of why 'artificial intelligence' remains a pipe dream.
Rose & Feldman (1995) found correlations by age 11 of around .30 for 167 children tested as infants with measures of visual recognition memory. They note that "most of the infancy measures were related to perceptual speed." Such a degree of forward prediction of child IQ, across ten formative years, is usually exceeded only by using the IQ's of children's parents.
Correlations of -.35 typically found in student subjects correct to estimated correlations of -.55 for the full, normal range of IQ's in young adults - e.g. Deary et al., 1989.
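A correction of this kind is standardly made with the formula for direct restriction of range (Thorndike's Case II); the particular s.d. ratio below is an illustrative assumption of mine, not a figure taken from Deary et al.:

```latex
% r = correlation in the restricted (student) sample;
% U  = ratio of the population s.d. to the sample s.d.
R \;=\; \frac{r\,U}{\sqrt{1 + r^{2}\,(U^{2}-1)}},
\qquad U \;=\; \frac{S_{\text{population}}}{s_{\text{sample}}}
```

With r = -.35 and U around 1.8 (students' IQ spread being little more than half that of the general population), R comes out at about -.55, matching the estimate given here.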