Our group has previously engaged with Regina Fabry’s work (2017a), which skilfully combines Richard Menary’s work on enculturated cognition (see week 24) with the predictive processing framework. Fabry (2015, 2017a) terms her position “enculturated predictive processing” (see week 13). In this paper (2017b), Fabry focuses on the notion of cognitive innovation and argues that, contrary to a methodological individualist position, this phenomenon is distributed both horizontally across a generation and vertically across multiple generations. In this brief review I will begin by outlining Fabry’s central claim and then turn to some queries.
Fabry’s central claim is that cognitive innovation is better explained in the context of cumulative cultural evolution and enculturation (the ontogenetic acquisition of cognitive practices). She contrasts this position with methodological solipsism (Fodor 1980) – the contention that we can understand a particular cognitive attribute by focusing only on a single isolated individual, without taking any ecological factors into account. In contrast to this position, Fabry argues that cognitive innovation should be understood in the light of two major points:
Enculturation: Humans are hypersocial organisms. We depend heavily on other members of our species for much of our daily cognitive lives. Not only are we highly dependent, we are also highly sociable, with experiments demonstrating a propensity for both sharing and teaching (Dean et al. 2012). But this cannot be thought of as a mere add-on. Instead, the enculturated cognition position holds that human cognition is fundamentally shaped and transformed by the acquisition of cultural practices – and, furthermore, that without enculturation, human cognition would not be what it is.
In this paper, Fabry’s core contention is that cognitive innovation must always be seen in the context of the cultural niche in which it takes place. It is not a matter of isolated individual achievements; innovations are distributed across populations and groups in both space and time. And this relates to another key feature: cumulative cultural evolution. Tomasello and others have argued that a qualitatively distinctive feature of human culture is the high fidelity of transmission of information from one generation to the next (although see Vale et al. 2017 for an interesting possible case of cumulative culture in chimpanzees). This enables what Tomasello (1999) has referred to as the ratchet effect: the retention of small, incremental improvements to cultural products. Fabry’s contention is that cognitive innovation must be seen in the light of this phenomenon, because the vast majority of innovations are elaborations of previous cultural products – and, as such, innovation cannot be treated as an isolated, individualistic phenomenon. She uses the history of computation as an exemplar of her position (as briefly sketched in the image below from Infogrades).
I am almost entirely in agreement with Fabry’s position. Indeed, I have a recent paper (Gillett 2018) that discusses this topic in scientific communities and is consilient in many respects with Fabry’s position, although, following Michael Tomasello (1999), I term the intergenerational collaborative effort “virtual collaboration”. Furthermore, my concern, following Joseph Henrich and colleagues (Henrich 2016; Muthukrishna et al. 2014 [see week 17]), is to point to the possibility of incremental innovations that build without any deliberate conscious inventiveness: cognitive innovations can also arise across generations through transmission errors, luck, and so on. I speculated that, in scientific communities, an area in which this might occur more frequently is interdisciplinary research, owing to the complexity of the communication networks and the way information is distributed between the members of these research teams.
So, in the spirit of general agreement, here are some questions raised by Fabry’s paper. I should note that none of these are arguments against Fabry’s position. They are only intended to test the limits of the project and see where it can take us in understanding cognitive innovation as a distributed cognitive phenomenon:
1. What about Kuhnian paradigm shifts? The incremental account sees the gradual accumulation of innovations amounting to large breakthroughs, but we can also consider more dramatic and quicker transitions, as well as largely fallow patches where no changes occur: how does a distributed account of cognitive innovation handle these phase transitions? This could be seen as analogous to the debate between punctuated equilibrium and gradualism (which, interestingly, was partly inspired by Kuhn’s work!).
2. What about experts? Are they the drivers of innovation? My query here is whether it is experts or novices who drive cognitive innovations. Some evidence indicates that an expert’s knowledge can sometimes get in the way of solving a problem. A newcomer to the task, who is not as heavily embedded in the system of thought or community, might have more scope for flexible thinking or new ideas. That is, the newcomer has not become completely habituated into the use of certain cognitive norms, and so either unknowingly violates them in interesting ways, makes fruitful mistakes in their deployment, or tests out ideas that the expert’s knowledge precludes them from even considering, because expert representations carry saliences and constraints that focus attention (work by Zhang 1997, Vorms 2012, Charbonneau 2013, and others shows how this works with regard to external representations).
Conversely, one could contend that it is experts who drive cognitive innovation through the gradual extension and increasing sophistication of certain cognitive norms. The idea here is that innovation cannot be pinned down to any one individual but is instead a state of increasing communal complexity contributed to by multiple highly skilled individuals (I take this to be Fabry’s core insight). Examples of this latter circumstance can be seen in scientific communities with regard to what Rebecca Kukla (2012) calls “radically collaborative” projects, in which it is not possible to pinpoint who is truly deserving of epistemic credit. For example, one paper in the discovery of the Higgs boson had over 1,500 authors (ATLAS Collaboration 2012). [Indeed, CERN has interesting regulations regarding who should be deserving of epistemic credit in the publishing of findings (see Galison 2003 for a discussion).] But this can also create complicated issues surrounding how to assign epistemic credit (Kukla 2012) and blame for misconduct (Helgesson & Eriksson 2018). As Kukla notes, this issue is particularly vexed in communities of inquiry like biomedical research, where there are non-human actors – e.g. companies – with financial interests.
Of course, it is almost certainly some combination of expertise and novice-hood – hence, this is in some ways a recapitulation of Thomas Kuhn’s (1977) “essential tension” between innovators and conservatives in a community of enquiry. Kuhn was concerned to understand the balance between two demands. On the one hand, the community must maintain an adequate level of communication between all of its members through a common theoretical vocabulary, accepted theoretical assumptions, and so on; this becomes very difficult if every member is constantly innovating and ‘ploughing their own furrow’ (as is shown in mathematical population models by Zollman (2010) and others). On the other hand, if the community is too conservative, the exploration of the relevant problem spaces becomes stunted, and the community ultimately cannot tackle its problems sufficiently effectively. Working out how to handle this tension between conservative and innovating forces is a complex theoretical and empirical matter (see week 23 for a discussion of Fred D’Agostino’s (2008) naturalisation of this division).
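To give a flavour of the kind of population model at issue, here is a minimal toy sketch in the spirit of Zollman’s bandit models – not Zollman’s actual model, and all names and parameter values (`simulate`, `p_new`, `trials`, the Beta priors) are my own illustrative choices. Agents on a communication network can stick with a well-understood old method (known payoff) or experiment with a new one (uncertain payoff), and each agent updates its belief from its own results and those of its neighbours:

```python
import random

def simulate(n_agents, edges, p_new=0.6, p_old=0.5,
             trials=10, rounds=200, seed=0):
    """Toy Zollman-style simulation: agents choose between a known old
    method (payoff p_old) and an uncertain new one (true payoff p_new),
    holding Beta(alpha, beta) beliefs about the new method's success rate."""
    rng = random.Random(seed)
    # Weak, randomly biased priors: some agents start optimistic, some not.
    alpha = [rng.uniform(0.1, 4.0) for _ in range(n_agents)]
    beta = [rng.uniform(0.1, 4.0) for _ in range(n_agents)]
    # Each agent always sees its own results, plus those of its neighbours.
    neighbours = {i: {i} for i in range(n_agents)}
    for a, b in edges:
        neighbours[a].add(b)
        neighbours[b].add(a)
    for _ in range(rounds):
        results = []
        for i in range(n_agents):
            # Only agents who currently favour the new method experiment.
            if alpha[i] / (alpha[i] + beta[i]) > p_old:
                successes = sum(rng.random() < p_new for _ in range(trials))
                results.append((i, successes))
        # Broadcast each result to the experimenter's neighbourhood.
        for i, successes in results:
            for j in range(n_agents):
                if i in neighbours[j]:
                    alpha[j] += successes
                    beta[j] += trials - successes
    # Return each agent's final expected success rate for the new method.
    return [alpha[i] / (alpha[i] + beta[i]) for i in range(n_agents)]
```

The mechanism the sketch illustrates: on a densely connected network, one early unlucky run of experiments can drag the whole community’s beliefs below threshold at once, after which nobody experiments and inquiry freezes; sparser networks preserve pockets of disagreement (“transient diversity”) for longer, which is why too much communication among constant innovators can be epistemically harmful.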
3. How strongly should we take the embodied aspect of doing mathematics? Fabry argues strongly that both neural and bodily factors are important constraints on how mathematics is actually performed. That is, mathematics cannot take on just any logical shape; it is constrained by the kinds of computational power a human brain can produce, and by the regions involved in producing it. Fabry’s point is that the limits of computational power have been shifted by the creation of increasingly powerful computers (see also Bailey & Borwein 2005 for a detailed discussion of how computers have altered what kinds of mathematical problems can be computed, and of the interesting disputes this caused within the mathematics community about whether to accept mathematical proofs that had been devised by machines but could not feasibly be checked by humans). But there is also the notion here that the brain structures involved shape how humans tackle mathematics. For instance, convergent evidence from neuropathological studies, behavioural experiments, and neuroimaging studies indicates that there is a significant cognitive and cortical overlap in the processing of mathematical and spatial phenomena: e.g. Gerstmann’s syndrome, the SNARC effect, etc. (Dehaene 1997).
But Fabry also argues that the body is a strong constraint on human mathematical reasoning. This seems like a radical claim that might be too strong to hold. In the case of neurological constraints, although there is some degeneracy, neuropathological studies show that damage to the intraparietal sulcus severely limits the extent to which an agent can engage in mathematical reasoning. In contrast, severe damage to the human body does not seem to impose the same kind of constraint: there are famous cases of blind and paralysed mathematicians. As such, bodily constraints do not seem to be as important for mathematical reasoning as neurological constraints.
4. The case study of computation is quite wide: how do we pin down vertical collaboration and intergenerational cognitive innovation in a more concrete fashion? Lastly, any study of culture and cognition struggles to handle the complexity of the multiple webs of relations in ecologically salient case studies. Fabry’s case study of computation is good in some regards because it picks out a discrete cognitive phenomenon. On the other hand, the case is very broad in terms of the various kinds of patterned practices (Roepstorff et al. 2010) that could be at play. I wonder whether it might be worth first selecting a particular cognitive tool, or a specific innovation in computation, and tracing its amendments in fine detail (to be fair, Fabry’s discussion of the relationship between Ada Lovelace and Charles Babbage is exemplary).
And perhaps this has already been done by historians of mathematics and science. Indeed, an excellent example that is not too tangential is Nancy Nersessian’s (1984) work on how James Clerk Maxwell’s field equations of electromagnetic radiation were shaped and brought together by his peculiar socio-cultural milieu of people and epistemic tools: his training in the Scottish geometrical approach to mathematics; his encounter with Faraday and his diagrams of force fields; the mechanistic philosophy pervasive in Victorian Britain at the time; as well as other important teachers and figures who heavily shaped how he tackled the problem. As Nersessian puts it:
“These sociocultural factors, taken together with cognitive factors, help to explain the nature of the theoretical, experimental, and mathematical knowledge and the methodological practices” (2005, p. 21)
Nersessian, like Fabry, argues that it is only by taking into account all of these factors that one can properly understand how cognitive innovations like scientific discoveries are actually made. And that doing so demonstrates that they are thoroughly distributed cognitive phenomena.
ATLAS Collaboration (2012) Observation of a new particle in the search for the Standard Model Higgs boson with the ATLAS detector at the LHC. Physics Letters B, 716, 1-29.
Bailey, D. H. & Borwein, J. M. (2005) Future Prospects for Computer-Assisted Mathematics. Notes of the Canadian Mathematical Society 37, 2–6.
Charbonneau, M. (2013) The cognitive life of mechanical molecular models. Studies in History and Philosophy of Biological and Biomedical Sciences, 44(4), 585-594.
D’Agostino, F. (2008) Naturalizing the essential tension. Synthese 162, 275-308.
Dean, L. G., Kendal, R. L., Schapiro, S. J., Thierry, B., & Laland, K. N. (2012) Identification of the Social and Cognitive Processes Underlying Human Cumulative Culture. Science 335, 1114-1118.
Dehaene, S. (1997) The Number Sense: How the Mind Creates Mathematics. London: Penguin.
Fabry, R. E. (2015) Enriching the Notion of Enculturation: Cognitive Integration, Predictive Processing, and the Case of Reading Acquisition. A Commentary on Richard Menary. In T. Metzinger & J. M. Windt (Eds). Open MIND: 25(C). Frankfurt am Main: MIND Group. doi: 10.15502/9783958571143
Fabry, R. E. (2017a) Betwixt and between: the enculturated predictive processing approach to cognition. Synthese [published online], 1-36.
Fabry, R. E. (2017b) Cognitive Innovation, Cumulative Cultural Evolution, and Enculturation. Journal of Cognition and Culture 17 (7), 375-395.
Fodor, J. (1980) Methodological solipsism considered as a research strategy in cognitive psychology. Behavioral and Brain Sciences 3, 63-109.
Galison, P. (2003) The Collective Author. (pp. 325-355) in M. Biagoli & P. Galison (eds.) Scientific Authorship: Credit and Intellectual Property in Science. London: Routledge.
Gillett, A. J. (2018) Invention through Bricolage: Epistemic Engineering in Scientific Communities. RT. A Journal on Research Policy & Evaluation 1, 1-17.
Helgesson, G. & Eriksson, S. (2018) Responsibility for scientific misconduct in collaborative papers. Medicine, Health Care and Philosophy 21, 423-430.
Henrich, J. (2016) The Secrets of Our Success: How culture is driving human evolution, domesticating our species, and making us smarter. Princeton: Princeton University Press.
Kuhn, T. (1977) The Essential Tension: Selected Studies in Scientific Tradition and Change. Chicago: The University of Chicago Press.
Kukla, R. (2012) “Author TBD”: Radical Collaboration in Contemporary Biomedical Research. Philosophy of Science 79, 845-858.
Muthukrishna, M., Shulman, B. W., Vasilescu, V. & Henrich, J. (2014) Sociality influences cultural complexity. Proceedings of the Royal Society: B 281 (1774), 2511. DOI: 10.1098/rspb.2013.2511
Nersessian, N. (1984) Faraday to Einstein: Constructing Meaning in Scientific Theories. Dordrecht: Martinus Nijhoff Publishers.
Nersessian, N. J. (2005) Interpreting Scientific and Engineering Practices: Integrating the Cognitive, Social, and Cultural Dimensions. (pp. 17-56) in M. E. Gorman, R. D. Tweney, D. C. Gooding & A. P. Kincannon (Eds.) Scientific and Technological Thinking. London: Lawrence Erlbaum Associates.
Roepstorff, A., Niewöhner, J. & Beck, S. (2010) Enculturing brains through patterned practices. Neural Networks 23, 1051–1059.
Tomasello, M. (1999) The Cultural Origins of Human Cognition. London: Harvard University Press.
Vale, G. L., Davis, S. J., Lambeth, S. P., Schapiro, S. J. & Whiten, A. (2017) Acquisition of a socially learned tool use sequence in chimpanzees: Implications for cumulative culture. Evolution and Human Behavior [online]. DOI: 10.1016/j.evolhumbehav.2017.04.007
Vorms, M. (2012) Formats of Representation in Scientific Theorizing. (pp. 250-272) in P. Humphreys & C. Imbert (eds.) Models, Simulations, and Representations. London: Routledge.
Zhang, J. (1997) The Nature of External Representations. Cognitive Science 21(2), 179-217.
Zollman, K. J. S. (2010) The epistemic benefit of transient diversity. Erkenntnis 72 (1), 17-35.