This paper explores an alternative approach to debates about extended cognition and distributed cognition by focusing on the centre of the unit of analysis. For the purposes of this review, I will mostly overlook the debates about the differences between these positions. Instead, the key point is what we take the centre of the unit of analysis for the study of cognition to be. Traditional approaches have primarily taken the human agent – or more narrowly, processes within the confines of the skull of an individual – to be the centre (and sometimes even the limits) of the unit of analysis. Externalist philosophical positions (sometimes referred to as 4E cognition) have challenged the claim that cognitive processes are confined to the skull (e.g. Bateson 1972; Clark & Chalmers 1998).
In this article Jim Davies and Kourken Michaelian go a step further and note that what traditional and externalist positions have in common is that they are “agent-centric” – something which they argue creates problems (2016, p. 307). For instance, Andy Clark, perhaps the most famous proponent of the extended mind, proposed that although cognition is not bounded by the organism, it is centred on the organism (2008). Although Davies and Michaelian are not explicit, I take the notion of what is “central” to a unit of analysis to refer to the organisational processes that form the cognitive system. So for Clark, the human agent is the primary organiser of these composite cognitive systems insofar as they “couple” (according to some set of criteria) with other features of the environment (e.g. other agents, cognitive artefacts) [there is an extensive debate about what these coupling criteria are – for a critique see Rupert 2004; for a more detailed and nuanced account see Heersmink’s (2015) dimensional approach].
Ed Hutchins (2011) has previously critiqued Clark’s view on the basis that it overlooks the important roles that cultural practices can play in organising a composite cognitive system. Davies and Michaelian tacitly build on this critique to offer a “task-based” approach to centring the cognitive system – what they refer to as “zeroing in” (2016, p. 313). The key advantage of this move is its flexibility. I think this is a really positive step towards a multilevel analysis – which Hutchins (1995, pp. 176-178) has claimed is necessary for the distributed cognition approach. But I have a couple of concerns about how this task-based position is cashed out:
1. Davies and Michaelian define what they mean by a ‘cognitive task’ only vaguely and openly (2016, pp. 312-313). Although this has some advantages (i.e. not dismissing fringe cases), it is ultimately unsatisfying: if one is going to remove the intuitive or de facto centrepiece of the mainstream view, then one should provide good grounds for what replaces it. Arguably, one could take a naturalistic standpoint here (something I think they would be amenable to, given their statements about supporting work in cognitive neuroscience) and propose that a cognitive task is whatever activity is investigated by experimental work in the relevant sciences. There are obvious limitations to this, but it is better than no account at all and is a good starting point.
2. Related to this is another problem. The task-based system is the set of components involved in the completion of a task, and a component counts as involved as long as there is one-way information transfer – where information is defined as the propagation of a representation (2016, p. 313). This is the standard position in cognitive science, but another central issue for the externalist positions that Davies and Michaelian touch on in this paper is the problem of “cognitive bloat” (Adams & Aizawa 2001). Cognitive bloat is a slippery slope problem: if cognition is not limited to processes inside the skull, then where does one draw the boundaries of the system? Furthermore, if mere one-way information transfer is sufficient for a putative component to be part of the cognitive system, then pretty much any causal interaction with the system could be counted as part of it. Hence, the system balloons outwards in a dangerous fashion. In its most threatening version this leads to a sort of Gaia-esque panpsychism whereby the entire universe becomes one cognitive system.
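The ballooning worry can be made vivid with a toy sketch (my own, not from the paper): if mere one-way information transfer into the task system suffices for membership, then closing the component set under that relation sweeps in almost any causal antecedent. The nodes and edges below are entirely hypothetical examples.

```python
# Hypothetical one-way information-flow links: source -> targets.
info_flow = {
    "retina": ["visual cortex"],
    "visual cortex": ["working memory"],
    "notebook": ["retina"],
    "streetlight": ["notebook"],   # light falling on the page
    "power grid": ["streetlight"],
    "sun": ["power grid"],         # via solar panels, say
}

def system_members(task_components, flow):
    """Close the component set under incoming one-way information transfer."""
    members = set(task_components)
    changed = True
    while changed:
        changed = False
        for source, targets in flow.items():
            if source not in members and any(t in members for t in targets):
                members.add(source)
                changed = True
    return members

# Starting from a narrow task system, the criterion sweeps in all seven
# nodes, right out to the sun.
print(sorted(system_members({"working memory"}, info_flow)))
```

The point is structural rather than empirical: without a further principle, the closure operation has no natural stopping point.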
The issue here is not an attack on information flow as the metaphysical underpinning of cognitive science. The point is merely that information flow, by itself, is too permissive to prevent cognitive bloat; a further principle is required, and there are several options. Firstly, one could take a dimensional approach that undermines the problem of cognitive bloat by arguing that boundaries in nature are generally fuzzy. Rather than a clear-cut distinction, it is better to see a wide range of grey areas, which can be marshalled and understood by articulating a set of dimensions along which cognitive systems vary (see Heersmink 2015; Sutton 2006; Sutton et al. 2010). Here the question of cognitive bloat is mitigated by providing a matrix or set of criteria to tame the slippery slope. Another option has recently been proposed by David Kaplan (2012): the mutual manipulability criterion. This principle has been taken from work on mechanisms in neuroscience and biology (see Craver 2007). The key idea is that one can assess putative components of a system by making two idealised interventions: a top-down intervention, in which the behaviour of the system is altered or manipulated to see whether this engenders a response in the component; and a bottom-up intervention, in which the component is altered or manipulated to see whether this has a reciprocal impact on the behaviour of the overall system. Kaplan uses this strategy to analyse classical thought experiments like Otto and his notebook, and it has also recently been used to analyse invertebrate cognition.
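The two-way structure of the mutual manipulability test can be rendered schematically (this is my own toy rendering of the Otto case, not Kaplan's formalism): a putative component passes only if interventions in both directions make a difference.

```python
def recall(notebook, cue):
    """Toy 'Otto' system: overall recall behaviour draws on a notebook component."""
    return notebook.get(cue, "no recall")

def top_down(notebook):
    # Top-down intervention: manipulate system-level behaviour by posing a
    # recall task, and check whether the putative component is engaged
    # (here, whether the notebook entry is actually consulted).
    return recall(notebook, "museum") != "no recall"

def bottom_up(notebook):
    # Bottom-up intervention: manipulate the component (alter the entry)
    # and check whether system-level behaviour changes as a result.
    before = recall(notebook, "museum")
    notebook["museum"] = "54th Street"
    after = recall(notebook, "museum")
    return before != after

otto = {"museum": "53rd Street"}
# The notebook passes both tests, so it counts as a component; a merely
# causally connected item (the streetlight above the page) would fail.
print(top_down(otto) and bottom_up(otto))  # prints True
```

The design choice worth noting is that the criterion is conjunctive: one-way influence alone, in either direction, is not enough, which is exactly what blocks the bloat of the previous sketch.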
3. On a separate issue: Davies and Michaelian’s paper finishes by considering how their task-based approach works in reference to two case studies. One of these draws on work that Davies has done with Nersessian and others on biomedical engineering labs – in particular, on a lab that is trying to make synthetic blood vessels. Nancy Nersessian (2005) provides an excellent overview of how a distributed cognition framework can provide insights into this scientific practice, as well as considering some related philosophical puzzles. She also places the approach within its historical context in the literature, showing how it can bridge a long-running dispute between philosophers and sociologists about how we should understand scientific practices. And here is the crux of the issue: although Davies and Michaelian’s account of this real-life example is understandably short because of the word limits imposed on philosophical papers, much of the richness that a distributed cognition analysis brings is lost when these ethnographic details are skipped over. Bryce Huebner (2014) has recently argued that ethnographic details are not superfluous – as often seems to be tacitly assumed – but are in fact philosophically important when we come to consider questions about distributed cognition. We can see this by returning to the example of the biomedical engineering lab. Nersessian (2005) emphasises that a key aspect of the work done in the lab is its interdisciplinary nature and complexity – the task-space has evolved over multiple generations of staff turnover. If one wants to properly understand the cognitive behaviour of the members of the lab and how they interact, then one needs to understand this “cognitive history” – or what Sterelny (2003) refers to as “epistemic engineering”: the way the contemporary problem space has been shaped by the actions of previous generations.
Furthermore, one can see here that the tools have changed, and that the cultural practices by which agents interact with each other and their environment have been both transmitted and altered over time (this is another example of Tomasello’s ratchet, which I have discussed in previous weeks – see weeks 10 and 15). These practices must be learnt and mastered by new members of the lab so that they can collaborate with others. Hence, Nersessian points out that the physical models and other lab tools become sites of pedagogy and focal points for binding the lab team together socially. All of these points – which are emphasised by Hutchins’ (2011) alternative non-agent-centric approach – are overlooked when the ethnographic details are not taken into account.
A particularly interesting aspect of Davies and Michaelian’s paper is their use of diagrams to explain their task-based approach (2016, p. 317). Below is their diagram portraying how a task-based approach can both “scale up” and “scale down” across multiple levels of analysis of a cognitive system (or multiple systems).
Because the task-based approach does not treat the agent as the necessary centre of the unit of analysis, the researcher can move fluently across the multiple levels of analysis present in studying any cognitive system. For instance, in the above diagram we could take A and B, at one level of analysis, to be regions of the brain (forming the system C). At another level of analysis, these regions in conjunction with a cognitive artefact (D) form another, wider cognitive system E. And so on. [However, this “and so on” again shows the problem of cognitive bloat, because it is unclear where one can stop this process.] Hutchins (1995) uses diagrams extensively in his discussions of distributed cognition, and I think this diagrammatic approach is really useful for getting clear about what we are talking about when we discuss a distributed cognitive system and what it is composed of, as well as other pertinent questions. This is particularly crucial because, as John Sutton has aptly noted, although a strength of the distributed cognition approach is its ability to enumerate a wide range of interesting and bizarre cases, we must also offer some means of organising this motley array (2006, pp. 235-236).
Other diagrammatic approaches to distributed cognition also show how diagrams can help clarify the many relationships between components, thus mitigating the problem of cognitive bloat while also emphasising the importance of ethnographic details. For instance, the diagram below outlines Furniss and colleagues’ (2015, p. 334) investigation into how a specific medical apparatus used by an agent sits within a wider socio-cultural setting, which can be analysed across numerous dimensions that designate its level of integration or proximity.
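One way to see how a dimensional analysis tames the slippery slope is to sketch it as a scoring matrix (loosely in the spirit of Heersmink's dimensional approach; the dimension names and values below are purely illustrative, not taken from Furniss et al. or Heersmink):

```python
# Hypothetical dimensions scored 0 (low) to 2 (high) for a glucometer
# in use; the numbers are illustrative placeholders only.
glucometer = {
    "information flow": 2,
    "reliability": 2,
    "durability": 1,
    "trust": 2,
    "transparency-in-use": 1,
}

def integration(scores, max_score=2):
    """Crude aggregate of dimension scores, normalised to [0, 1]."""
    return sum(scores.values()) / (max_score * len(scores))

print(round(integration(glucometer), 2))  # prints 0.8
```

The payoff is that membership becomes graded rather than binary: components vary in their degree of integration, so there is no single boundary for the bloat argument to push past.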
Adams & Aizawa (2001) The bounds of cognition. Philosophical Psychology, 14, 43-64.
Bateson (1972) Steps to an Ecology of Mind. New York: Ballantine Books.
Clark (2008) Supersizing the Mind: Embodiment, Action, and Cognitive Extension. Oxford: Oxford University Press.
Clark & Chalmers (1998) The extended mind. Analysis, 58, 7-19.
Craver (2007) Explaining the Brain: Mechanisms and the Mosaic Unity of Neuroscience. Oxford: Clarendon Press.
Davies & Michaelian (2016) Identifying and individuating cognitive systems: a task-based distributed cognition alternative to agent-based extended cognition. Cognitive Processing, 17(3), 307-319.
Furniss et al. (2015) Exploring medical device design and use through layers of Distributed Cognition: How a glucometer is coupled with its context. Journal of Biomedical Informatics, 53, 330-341.
Huebner (2014) Macrocognition: A Theory of Distributed Minds and Collective Intentionality. Oxford: Oxford University Press.
Hutchins (1995) Cognition in the Wild. Cambridge, MA: MIT Press.
Hutchins (2011) Enculturating the Supersized Mind. Philosophical Studies, 152, 437-446.
Kaplan (2012) How to demarcate the boundaries of cognition. Biology & Philosophy, 27(4), 545-570.
Rupert (2004) Challenges to the Hypothesis of Extended Cognition. The Journal of Philosophy, 101(8), 389-428.
Sterelny (2003) Thought in a Hostile World: The Evolution of Human Cognition. Oxford: Blackwell Publishing.
Sutton (2006) Distributed cognition: Domains and dimensions. Pragmatics & Cognition, 14(2), 235-247.
Sutton et al. (2010) The psychology of memory, extended cognition, and socially distributed remembering. Phenomenology and the Cognitive Sciences, 9(4), 521-560.
Here are some links to Ed Hutchins’ work on cognitive anthropology and distributed cognition with regard to: Micronesian navigational practices, the US Navy, aviation, and scientific teams. And here is a video outlining his work on dolphin cognition.
Here is an interview in which Nersessian outlines her approach to philosophy of science, STEM in general, as well as other topics.