In recent years, the field of behavioral genetics has made important strides in unearthing some of the thousands of genes that are likely associated with IQ. While the association of any particular gene with IQ is minuscule, the more we discover about their combined influence on the development of the brain networks that support cognition (such as the fronto-parietal network), the more we will learn about the nature of human intelligence.
However, in online discussions of this topic, I’ve noticed that some people, especially those who see policy implications in these findings, assume that the extent to which a particular IQ test is heritable is the extent to which performance on that test is determined by genes and is therefore immutable. This is a faulty assumption.
Consider a fascinating study conducted by Kees-Jan Kan and colleagues, in which they analyzed the results of 23 independent twin studies conducted with representative samples, yielding a total sample of 7,852 people. They investigated how heritability coefficients vary across specific cognitive abilities. Importantly, they assessed the “cultural load” of various cognitive abilities by taking the average percentage of test items that were adjusted when the test was adapted for use in 13 different countries.
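To make that measure concrete, here is a minimal sketch of how such a cultural-load index could be computed. The subtest names are real Wechsler subtests, but the proportions and the number of adaptations are made up for illustration; in the actual study, the percentage of adjusted items was averaged over 13 countries.

```python
# Toy illustration (hypothetical numbers, not the study's data): the "cultural
# load" of a subtest is the average proportion of its items that had to be
# adjusted when the test was adapted for use in other countries.

adjusted_item_proportions = {
    # subtest: proportion of items adjusted in each national adaptation
    "Vocabulary":       [0.90, 0.85, 0.95, 0.80],
    "Information":      [0.80, 0.75, 0.85, 0.70],
    "Block Design":     [0.05, 0.00, 0.10, 0.05],
    "Matrix Reasoning": [0.00, 0.05, 0.00, 0.00],
}

cultural_load = {
    subtest: sum(props) / len(props)
    for subtest, props in adjusted_item_proportions.items()
}

for subtest, load in sorted(cultural_load.items(), key=lambda kv: -kv[1]):
    print(f"{subtest:18s} cultural load ~ {load:.2f}")
```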
For instance, here is the cultural load of the Wechsler Intelligence Test subtests:
They reported two main findings. First, in samples of both adults and children, the greater a subtest’s cultural load, the more strongly that subtest was associated with IQ:*
This finding suggests that the extent to which a test of cognitive ability correlates with IQ is the extent to which it reflects societal demands, not cognitive demands.
Second, in adults, the researchers found that the higher the heritability of a cognitive test, the more that test depended on culture. The effects were medium to large and statistically significant.
As you can see above, highly culturally loaded tests such as Vocabulary, Spelling, and Information had relatively high heritability coefficients, and were also highly related to IQ. As the researchers note, this finding “demands explanation”.
Why did the most culturally-loaded tests have the highest heritability coefficients?
These findings are intriguing, and indeed demand explanation. One possibility is that Western society is a homogeneous learning environment: school systems are essentially the same everywhere, everyone has the same educational experiences, and the only thing left to vary is cognitive ability. This explanation seems unlikely given the vast variation in educational opportunities and enrichment across the Western world.
Another possibility is that tests of vocabulary and general knowledge are actually more cognitively demanding than even the most complex abstract reasoning tests. While it is true that vocabulary often emerges as one of the best predictors of general cognitive ability in large test batteries, this explanation also seems unlikely. It’s not clear why a test such as vocabulary would place a higher cognitive demand on the test taker than tests that are less culturally loaded but more cognitively complex (e.g., Raven’s Progressive Matrices). Nor does this theory explain why the heritability of IQ increases linearly from childhood to young adulthood.
My preferred explanation for these findings requires thinking in terms of genotype-environment covariance, in which cognitive abilities and knowledge dynamically feed off each other. Those with a proclivity to engage in cognitive complexity will tend to seek out intellectually demanding environments. As they develop higher levels of cognitive ability, they will also tend to acquire relatively higher levels of knowledge. More knowledge makes it more likely that they will eventually end up in even more cognitively demanding environments, which in turn facilitate the development of a still wider range of knowledge and skills. As Kees-Jan Kan and his colleagues point out, societal demands influence the development and interaction of multiple cognitive abilities and bodies of knowledge, causing them to correlate positively with one another and giving rise to the general intelligence factor.
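To see why genotype-environment covariance complicates the interpretation of heritability, here is a small simulation sketch of my own (not an analysis from the paper, and the parameter values are arbitrary). Each simulated person’s environment partly tracks their genotype, and heritability is then estimated from MZ and DZ twin correlations using Falconer’s formula; the estimate ends up absorbing the gene-driven environmental boost.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pairs = 100_000

def simulate_twins(shared_g):
    """Simulate twin pairs whose environments partly track their genotypes."""
    g_common = rng.normal(size=n_pairs)  # genetic influence shared by the pair
    g1 = np.sqrt(shared_g) * g_common + np.sqrt(1 - shared_g) * rng.normal(size=n_pairs)
    g2 = np.sqrt(shared_g) * g_common + np.sqrt(1 - shared_g) * rng.normal(size=n_pairs)
    # Genotype-environment covariance: enriched environments are sought out
    # in proportion to genotype (slope b), plus purely environmental luck.
    b = 1.0
    e1 = b * g1 + rng.normal(size=n_pairs)
    e2 = b * g2 + rng.normal(size=n_pairs)
    p1, p2 = g1 + e1, g2 + e2  # phenotype = genes + environment
    return np.corrcoef(p1, p2)[0, 1]

r_mz = simulate_twins(shared_g=1.0)   # identical twins share all genes
r_dz = simulate_twins(shared_g=0.5)   # fraternal twins share half, on average

h2_falconer = 2 * (r_mz - r_dz)       # classical heritability estimate
print(f"Falconer h^2 estimate: {h2_falconer:.2f}")
# The direct genetic share of phenotypic variance here is only
# Var(G) / Var(P) = 1 / ((1 + b)**2 + 1) = 0.2, yet the twin-based estimate
# also attributes the gene-driven environmental boost to "genes".
```

In this toy setup the direct genetic share of the variance is only 20 percent, yet the twin-based estimate comes out around 80 percent, because the enriched environments that genotypes recruit for themselves get counted as “genetic.”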
To be clear: these findings do not mean that differences in IQ are entirely determined by culture. As already noted, it is not in dispute that cognitive ability is influenced by perhaps thousands of interacting genes. What these findings do suggest is that culture, education, and experience play a very important role in the expression of those genes.
Behavioral genetics researchers, who parse out genetic and environmental sources of variation, have often operated on the assumption that genotype and environment are independent and do not covary. These findings suggest that they very much do, and that such covariation should figure more prominently in these debates, especially in more contentious ones such as that over black-white differences in IQ test scores.
In his analysis of US Army data, the British psychometrician Charles Spearman noticed that the more a test correlated with IQ, the larger the black-white difference on that test. Years later, Arthur Jensen developed this observation into a full-fledged theory he called “Spearman’s hypothesis”: the magnitude of the black-white difference on a test of cognitive ability is directly proportional to that test’s correlation with IQ. In a controversial 2005 paper, Jensen teamed up with J. Philippe Rushton to argue that this pattern implies that black-white differences in IQ test scores must have a significant genetic contribution.
But the research by Kees-Jan Kan and colleagues suggests just the opposite: the bigger the difference in cognitive ability between blacks and whites on a given test, the more that difference is determined by cultural opportunities for cognitive enrichment.**
Conclusion
As Kees-Jan Kan and colleagues note, their findings “shed new light on the long-standing nature-versus-nurture debate.” Of course, this study is not the last word on this topic. There certainly needs to be much more research looking at the crucial role of genotype-environment covariance in the development of cognitive ability.
But at the very least, these findings should make one think twice about the meaning of the phrase “heritability of intelligence.” Rather than being a simple index of how “genetic” an IQ test is, heritability, in a Western society where learning opportunities still differ so drastically from person to person, may be telling you just how much the test is influenced by cultural opportunities to reach one’s maximal cognitive potential.
—
* Throughout this post, whenever I use the phrase “IQ,” I am referring to the general intelligence factor: technically defined as the first factor derived from a factor analysis of a diverse battery of cognitive tests, administered to a diverse sample of the general population, that explains the largest source of variance in the dataset (typically around 50 percent).
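As a rough illustration of that definition, here is a sketch using a made-up correlation matrix rather than any real test battery; it uses the leading eigenvalue (i.e., the first principal component) as a stand-in for the first factor from a formal factor analysis.

```python
import numpy as np

# Hypothetical correlation matrix for a small battery of cognitive tests
# (a positive manifold: every test correlates positively with every other).
R = np.array([
    [1.00, 0.40, 0.35, 0.30],
    [0.40, 1.00, 0.35, 0.30],
    [0.35, 0.35, 1.00, 0.30],
    [0.30, 0.30, 0.30, 1.00],
])

eigenvalues = np.linalg.eigvalsh(R)[::-1]            # sorted largest first
first_factor_share = eigenvalues[0] / eigenvalues.sum()
print(f"First factor explains ~{first_factor_share:.0%} of the variance in this toy battery")
```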
** For data showing that Black-White differences in cognitive ability are largest on the highly culture-dependent tests, I highly recommend reading Chapter 4 of Kees-Jan Kan’s doctoral dissertation, “The Nature of Nurture: The Role of Gene-Environment Interplay in the Development of Intelligence.”
Acknowledgement: thanks to Rogier Kievit for bringing the article to my attention, and to Kees-Jan Kan for his kind assistance reviewing an earlier draft of this post.