- From Aristotle & Aquinas to empiricism & idealism
- Empiricism & idealism: attractions and problems.
- The business of the book: can intelligence be 'real'?
- Vox populi....
- Definitions of intelligence?
Many hopeful concepts in modern psychology have met lasting customer resistance: infantile sexuality, the death wish, one-trial learning, subliminal perception, self-actualization and authoritarianism-of-the-left have all made quite as many enemies as friends. Intelligence is similar - or worse. It has been at once crucial to psychology, interesting to the public and controversial to experts. Elitism, racism, ignorance and 'ignoracism' are among the accusations that protagonists of the different views registered in this book have hurled at each other.
To some, the study of intelligence might seem a simple success story. Friends of IQ would claim that intelligence was the first human characteristic that psychology could measure; that it proved possible to trace some of the causes of differences in intelligence to genes and brain functions; that, despite much research into other personality factors by occupational psychologists, intelligence remained the only widely useful predictor of career success; and that mainstream academic psychology has lately come to focus chiefly on intelligence - although re-naming it 'cognition'. On the other hand, there is protest. Despite IQ tests being in wide use since 1910, many working psychologists would be loath to concede that such tests tap the most important, general way in which people differ psychologically. Many psychologists can be relied upon to deny that there is any such (one) 'thing' as intelligence. Intelligence has been thought a 'label' that is both scientifically and politically unhelpful; harmful effects of racial and other prejudices on a growing child's intelligence are thought to have been played down by IQ enthusiasts out of a dogmatic belief in heredity; and modern 'cognitive psychology', it may be explained, has really been trying (since its invention, around 1970) to break intelligence and cognition down into numerous separate components - thus making IQ-testing a thing of the past (if ever the analytic cognitive breakdown were completed). Still worse, though some of these divisions may be bridged by psychologists' factual inquiries, others are philosophical.
The concept of intelligence first appears in the writings of Aristotle (384-322 BC). Not content with the stress of Plato (428-348 BC) on the supremacy of theoretical reasoning (i.e. of formal, logical or mathematical reasoning) in his account of the mind, Aristotle made room equally for practical reasoning: for him, reasoning included how people worked out the best education, the best husband, and the best route to Thessalonika. For Aristotle, thinking was concerned as much with the achievement of everyday objectives as it was with the acquisition of abstract knowledge. Whereas active, analytical understanding arrived at abstractions, insights and restructuring of knowledge (e.g. changes in the categories and grouping principles used), passive understanding involved the intake and synthesis of the offerings of the senses and the imagination. To use the terms of the twentieth-century psychologist, Jean Piaget (1896-1980), Aristotelian intelligence involved not only 'accommodatory' re-arrangements of learning and adjustive re-structuring, but also the 'assimilatory' processes of intake that inform them. In the language of today's cognitive psychology, intelligence involved not only high-level, overarching strategies, heuristics (formulae) and 'meta-processes' (principles about processes) but also information-gathering from the environment and very 'basic' processes of transmission of simple information. Though Aristotle's Logic (showing the correct forms of syllogistic reasoning) was eventually to be one of his best-known legacies to the Christian world, his unusually broad conception of human understanding was to be another. 
Translated into Latin by Cicero (106-43 BC), what Aristotle called 'understanding' became intelligentia; and, once details of Aristotle's system of thought were brought back into mediaeval western Europe by Muslim scholars and by the Latin conquest of Constantinople (1204), the genius of Aquinas (1225-1274) made intelligence a key concept of mediaeval Western psychology. Man had an intelligence that was deemed competent to deliver truth - even in matters of theology.
Aristotle's scientific ideas could not expect to go unchallenged. After all, his database was so slight that he had concluded (from the corrugations of the cerebral cortex) that the brain was chiefly a radiator for cooling the blood. (By the twentieth century A.D. it proved possible to modify this notion - though Aristotle may still have been right about the most remote evolutionary origins of the brain in control of body temperature (Falk, 1990).) Even the more central mentalistic concepts of Aristotle and Aquinas eventually faced two escalating challenges that would not go away. On the one hand, the rising tide of (British) empiricism (and its descendant, positivism) queried the use by scientists and scholars of all propositions (other than those of sheer logic) that could not be supported by the provision of the hard, physical and objective evidence that had to come ultimately from the senses. On the other hand, the philosophical counter-surge of (mainly German) idealism insisted that there were no realities of any kind except the constructions of human consciousness and language. To talk of the existence of intelligence (or of intelligence differences between people) was unacceptable to both of these main philosophical schools, whose wide influence on other disciplines was to reach through to the end of the twentieth century. The tendency was for Anglo-Saxon intellectuals to scorn the abstract and for German intellectuals to scorn the concrete possibilities of knowledge. They were all unhappy with mental as distinct from material realities and with the efficacy of human rational agency; and both schools maintained, in their different ways, that humanity was a creation of society. (Empiricists claimed experience was all; and idealists claimed that language was all. - But society was the chief source of both experience and language.)
Empiricism's chief offshoot in Anglo-Saxon psychology, behaviourism, presumed that all such abstractions as 'intelligence' (and likewise 'the will', 'consciousness', and 'emotion') would need to be broken down into sets of statements having tangible reference to particular, observable 'behaviours' and performances of testees. Such unpackaging would allow them to be scrutinized. If a fully satisfying decomposition could not after all be accomplished, these terms would be revealed as relics of Christian (or Greek) mysticism and fancifulness and as unsuitable for use in science - hence the preference for dropping most of them. With even 'consciousness' overboard, behaviourists were certainly not inclined to suppose that, just because two intelligence tests correlated with each other, they would therefore measure the same single quality of mind - let alone 'thing'.
On the other hand, idealism and its chief psychological offshoot of constructivism held all mental concepts to be inventions of the human mind. Not content with denying the fundamental force of human reason, idealists would go beyond the empiricists in denying even that any of our ideas might be adequately traced to bases in veridical perception. However seemingly 'scientific', mental concepts were just like others to the idealist. Chihuahuas, cyclones, concubinage and conscience - all concepts come into being only among language users. Concepts would thus reflect the influence of culture and politics: in particular, their deployment would express the hegemony, interests and value-judgments of particular social groupings. Constructivists (in this like behaviourists) had no interest in studies examining the possible heritability of human differences in intelligence - no matter how straightforward or 'powerful' the available methods. The physicist and philosopher-of-science, Thomas Kuhn (1962), and kindred sociologists of science had characterized science and its concepts as being, at least at its "pre-mature" outset, partly the invention of scientists as they simply felt a preference for one paradigm rather than another.(1) Happily following this largely relativistic lead, constructivists assumed that all 'mental qualities' - and not just those eliminated by the empiricists - would one day be revealed as little but expressions of the Zeitgeist. While behaviourists rejected the concept of 'intelligence' because it was too mental and subjective, idealists would reject it because, in involving quantification and objectivity, it was not subjective enough.
Followers of both these main schools of twentieth-century philosophy thus agreed in abjuring Aristotelian realism (with its dualistic acknowledgment of both mind and matter). According to the popular derivatives of both empiricism and idealism, it was unhelpful to think that there is any real world of mentality - of mental phenomena, feelings and faculties - to which scientists try to make their theories correspond. Already in 1896, the British theoretical psychologist, G. F. Stout (1896, Vol. 1, p. 18) had set the tone for the twentieth century by declaring "such words as potentiality, faculty [and] susceptibility" to be "mere marks of our ignorance." Whether, between them, blunt empiricism and unanchored idealism could avoid the challenge of a true science of the mind would remain the big question for psychology in the twentieth century.
Distinctive social changes can be traced to the twentieth-century pressures on realism by empiricists and idealists. The modern worlds of business, governmental bureaucracy and the universities are ostensible converts to the behaviourist idea that every task can be broken down into elements that allow it to be taught piecemeal to just about anyone, regardless of personal characteristics (including intelligence). Employees today are commonly 'assessed' - at least when assessment is public - in terms of how they perform in very particular situations (file management skills, summarizing the week's achievements, contributing to discussion, etc.) that can be objectively specified. Indeed, the performances that are officially required will at least look as if employees could prepare for them by rote learning and rehearsal. Such apparently fair-minded practices of situation-specific assessment follow naturally from acceptance of behaviourist approaches. For example, behaviourists suggest handling problems like phobias, impotence and school failure by 'controlled behavioural desensitization' (i.e. gentle exposure) - or equally by 'flooding' (i.e. violent exposure); and they claim to be able to teach by rote or by 'modelling' (imitation) the particular skills apparently required by clients (e.g. 'social skills' for use in queues, domestic arguments or interviews). The Zeitgeist makes it churlish to challenge the rhetoric that, without bothering with the underpinning features of personality envisaged by realists, behaviour and ideas can readily be changed. Utopianism is a comfort: no-one is lastingly superior and everyone can be retrained in new skills or - as idealists would prefer - 'discourse'. 
Idealism additionally contributes to the modern world via its relativistic acceptance that one set of ideas is as good as another - since there is nothing to which ideas have to be made to correspond: this encourages a tolerance of other people's ideas which has helped to create the modern world of international trade, travel and political co-operation.
So attractive were the doctrines of empiricism and idealism to psychology through the twentieth century that it was possible to draw a veil for a while over their problems. Behaviourists could 'extinguish' specific phobias by repeated exposure so long as the sufferer did not have wider neurotic problems; and, just as any idealist could wish, most sentences that people use in speech are novel (and readily altered - e.g. into the negative). Thus it was possible for psychologists to forget the countless features of human behaviour, personality, psychopathology, thought content and language-use that could not be thus altered. However, empiricism and idealism were to experience grave set-backs. Notably, they proved unconvincing in what became the twentieth century's core quest in philosophy. The problem was to provide, if possible, a plausible account of the century's intellectual and practical success story, science - to provide an adequate 'philosophy of science', as the quest came to be called. Empiricism insists that truths (other than those of logic) require tangible confirmation in direct experience and observation; so it has difficulty countenancing scientists' use of unobservable hypothetical entities such as electrons and gravity - not to mention quarks and black holes. Even the existence of life itself remains problematic to empiricists while this biochemical phenomenon cannot be specified in terms of measurable phenomena of chemistry. The empiricist (qua 'nominalist') is a mistruster of mere names; and, as the philosopher, J. S. Mill (1806-1873), observed, of the human tendency to imagine that any as-yet-undiscovered entities must be "peculiarly abstruse and mysterious." On the other hand, idealism maintains that virtually all 'truths' - whether scientific, moral or theological - have nothing but the same relativistic status: all are merely aspects of consciousness, culture and language and can change quickly once fashion gives the word.
Thus idealism and empiricism both have difficulty with scientific progress. Today much more is known about the structures of the universe, the brain, human ethnic groups and cultures and other species than was known a century ago. Yet both empiricism and idealism view science as just a set of 'rules' for moving from one description to another. The empiricist allows that rules are based on discovered regularities, while the idealist holds them to be a social convenience (at least for the language-making classes); but, when predicting from X to Z, both are unhappy to allow Y to come in between - let alone the rich, deep structure of the universe from which alone a scientist will typically make predictions.
Perhaps most spectacularly, popular empiricism and idealism were both challenged by the arrival, from 1960, of psychopharmacological amelioration (via chlorpromazine, lithium and beta-blockers) for serious psychiatric disorders (schizophrenia, manic-depressive illnesses and chronic anxiety). Again, there was increasing research evidence (from twin and adoption studies) that psychotic illnesses ran rather noticeably along genetic lines and had many associated neurological complications. Contrary to popular theories of the role of lack of love, lack of stimulation or inconsistent parenting, serious psychopathology could not in fact be traced to such environmental causes operating either alone or in specifiable conjunction with patients' personalities.
A spectacular case was that of childhood autism (the Rain Man syndrome): at first blamed on bookish, 'refrigerator mothers', this condition gradually turned out to involve exposure to physical ailments (like German measles) and unusual constellations of genetic factors (i.e. epistasis - see Chapter III): the children's early symptoms were not caused by but merely noticed sooner by middle-class parents. The conversion of Britain's top child psychiatrist, Sir Michael Rutter, from environmentalism about this sad and still very mysterious condition marked the turning of the tide (Burne, 1994).
For empiricists, the surprise was that complicated disorders that defied simple behavioural description were in fact quite 'real' enough to be significantly triggered by physical problems and controlled by drugs - even though the routes by which psychiatric medications took their effects were far from being understood. The 'rule' was that chlorpromazine allowed sufferers from schizophrenia to live outside hospital; but the gap between medication and outcome was not one that the empiricist could hope to fill. For idealists, it was just as surprising that profound disorders of thinking (or 'labels', supposedly reflecting the culture-serving biases of psychiatric experts, as envisaged by Laing (1964) or Foucault (1970)) could be so well controlled (or indeed rescinded, by the very same experts) as to half-empty the mental hospitals of the West. This advance by drug companies, together with increasing evidence of genetic involvement in schizophrenia, was a breakthrough for realism and to this day upsets the liberal and utopian consciousness that would frankly prefer human beings to have proved more changeable by means that were more 'social' than those actually disclosed to mainstream medicine and scholarship. (Indeed, drug use of all kinds - including recreational use - testifies to the reality of many of the complex mental states that should strictly be dismissed as 'fanciful' by systematic empiricists, and as 'rhetoric' by serious idealists.) Still, despite the successes of realists, the remaining (and growing) social problems of crime, drug addiction, child abuse, unemployment and ethnic strife continue to encourage behaviourists and constructivists to believe that their own favoured approaches might one day prove as relevant as realism to the improvement of the human condition. Especially if the reality and importance of intelligence differences can be denied, there is plenty of hope for empiricism and idealism.
Intelligence itself might at any time have slipped decisively into or out of the purview of realism. It could at any stage have turned out to be affected by some crucial protein, mineral or vitamin; or twin research might have broken it up into quite distinct components - some genetic, perhaps, and others determined by environment. However, systematic experimentation on human intelligence was ethically impossible; there was no lucky breakthrough to drug control (as there had been for schizophrenia); and twin studies were rare since few thought it necessary to check and quantify the importance of environmental factors. So the issue of whether intelligence was a 'real', important variable of general significance had to be addressed in other ways. Matters were further complicated by political and educational arguments in which the notion of intelligence was easily embroiled; and by some of the peculiarities of IQ researchers (and, as Chapter III will discuss, of equally determined non-researchers).
The business of this book is thus to tell an unconcluded tale that is important to psychology, to philosophy, and probably to politics. The concern is to introduce IQ and the controversies that properly surround it - especially those that bear on whether intelligence can be at last established or finally eradicated as a central concept in describing human nature. Today, the case for realism in psychology and social science probably stands or falls by IQ. If IQ differences are not real then there can be little else in psychology that is: even an academic defence of 'sex' as more than a social creation would be hard going if IQ had been shunted off as an elitist fiction.
That intelligence is real enough to be at least a metaphorical 'possession' seems widely allowed. Despite a century of protest by empiricists and idealists, the concept that descended from Aristotle via Aquinas is still a part of present-day vocabulary. The following quotations may serve to illustrate such everyday usage - supplemented as this might be by references to people's 'intellect', 'sense', 'creativity', 'understanding' and just 'ability'.
'INTELLIGENCE' IN MODERN USAGE
"Gerard Depardieu plays Danton; he is a bovine and sometimes slack-jawed actor who looks here as if he has spent the night crawling through a cornfield. But his physical presence tends to conceal the intelligence which he invests in each part, and so he is perfectly suited to play a character whose animal cunning is only matched by his sensual greed."
Peter ACKROYD, 1983, The Spectator, 24 ix.
"A human disposal chute for uppers, downers, hash, grass, LSD, cocaine, heroin and common-or-garden booze, [Keith Richards, of the 'Rolling Stones'] has not been kind to his system. Yet somehow he has survived a ten-year heroin odyssey and is now 'clean'. This reflects a certain constitutional toughness, but also an intelligence and good sense he is not often credited with."
ANON., 1985, The Spectator, 18 v.
"In public, Princess Michael [of Kent] is a dazzling figure. In private, she is warm, funny, frank and possessed of high intelligence and formidable energy. Her force of character has been given added edge by [her unpropitious beginnings - born as the monarchies of Eastern Europe were collapsing]."
Anne De COURCY, 1985, Sunday Telegraph, 6 i.
"Like pearls from oysters, [a great chef's dishes] result from lonely struggling effort, and also from intelligence and quite exceptional intuition."
Egon RONAY, 1988; quoted in Private Eye, 1 iv.
"She has energy, intelligence, articulation, the best voice in the world, the best body, the best face, and the best appreciation of a joke."
Warren BEATTY, speaking of his newlywed wife, Annette Bening, to Vanity Fair; quoted in The Independent on Sunday (Sunday Review), 24 v 1992.
"Given my wish to seduce Chloe, it was essential that I find out more about her. How could I abandon my true self unless I knew what false self to adopt? But this was no easy task, a reminder that understanding another requires hours of careful attention and interpretation, teasing a coherent character from a thousand words and actions. Unfortunately, the patience and intelligence required went far beyond the capacities of my anxious, infatuated mind."
Alain DE BOTTON, 1993, Essays in Love. London : Macmillan.
"Joan Littlewood is the greatest theatre director of the present century, knocking possible rivals....into a cocked hat when it comes to intelligence, originality and the incalculable influence for good she has had on theatre all over the world.... "
John WELLS, 1994, The Spectator, 2 iv.
"MANCUNIAN MAN, 33, handsome, intelligent, warm, witty, emotionally open seeks tall, sensual blue-stocking for lively committed relationship."
Private Eye, 10 ii 1995.
"Intelligence, without which beauty was just beauty, leavened everything Garbo did."
Barry PARIS, 1995, Garbo: a Biography. London : Sidgwick & Jackson.
Today we are still living in the long shadow of Aristotelian realism. Yet is Aristotle's a kindly or a gloomy shade? Should we be reminding ourselves of our luck; or be trying to escape from over-simplification to sunlit uplands of relativism where man and his conventions are acknowledged the measure of all things? This is the large question that lies behind the four particular questions to be addressed directly in the four chapters of this book: these concern the psychometric measurement, the psychological bases, the psychogenetic origins and the psychotelic importance of intelligence.
On encountering such questions it is tempting to begin to address them by adopting some 'definition'. How, after all, can scientists measure intelligence if 'no-one knows what it really is'? Thus a search for scholarly and at least half-adequate definitions begins. Attempted definitions of intelligence might include such classics as 'reasoning ability', 'learning ability', 'the eduction of relations and correlates', 'general cognitive resources' or even - risking leaving the realm of psychology altogether - 'the application of information to situation' or 'organism-environment correspondence' (the idea favoured by Herbert Spencer (1820-1903) when he re-introduced the term in his Principles of Psychology (1855)). (For many further attempts, see Baker, 1974, pp. 495-6.) However, the problem is that any definition that manages to be more engaging than what can be found in a dictionary turns out to be unacceptable to someone or other. For example, intelligence cannot be 'the ability to learn' since much conditioning (whether by association of stimuli or by reward and punishment) does not require intelligence. To define intelligence in terms of school learning would be far too narrow; and, anyhow, research finds that many intelligent children improve less than others over a school year of teaching (mainly because they know more of the curriculum at the year's outset).
Thus Chapter I begins not with a definition of intelligence but with the failure of late-nineteenth-century British and German laboratory psychologists to come up with a test looking as if it measured intelligence by any criterion at all; and with the crucial work in this field by a French psychologist who was previously best known for his work on hypnosis and sexual fetishism. This work was to provide what seemed to other psychologists the first plausible measurement of intelligence. Yet, like other scientific concepts (such as electricity, gravity and heat), intelligence would long prove easier to 'measure' than to understand; and easier to understand than to discuss without animosity. Disputes would persist - suggesting at least that psychology had found itself a topic that was central to a proper understanding of human nature and political society.
(1) Kuhn's own position is sometimes distinguished from that of his sociological followers along the following lines (Hull, 1996).
"....within science studies, those who view science as relative to time and place have adopted Kuhn as one of their patron saints. Because the transition [of scientists] from one paradigm to another cannot be explained entirely in terms of reasons, argument and evidence, such factors play no role whatever in such transitions. ....Kuhn himself was dismayed by his relativist disciples. This was not what he intended at all!"