This week we focused on a very recent paper that is making waves in neuroscience and cognitive science more generally (as shown by discussions taking place here and here). Krakauer and colleagues make two distinct criticisms of modern neuroscientific methodology:
- An argument based on Marr’s (1982/2010) three levels of analysis (computational, algorithmic, and implementational) about the insufficiency of trying to make explanatory inferences from the neural wiring of the brain to how behaviour is enacted.
- An argument about the need for ecological validity – i.e. the need for a robust characterisation of the specific behaviour that matters to how the organism acts in the real world, which can then be tested and correlated with the neural activations being monitored.
This second argument is part of a more general critique/call for methodologies in cognitive science to be more relevant to how cognition takes place “in the wild”. This is something that we have discussed in previous weeks (for instance see week 3 and week 8), so I will not focus on it here, except to note that several members of the group felt that Krakauer and colleagues failed to adequately distinguish between these two arguments, which are not necessarily related.
The interesting, though not entirely novel (as they acknowledge), argument is the claim that standard methodology in modern neuroscience – driven by advances in technology and techniques for examining a range of processes within the skull – rests on a flawed implicit assumption: viz., that by focusing solely on the neural level we will eventually be able to reverse engineer an explanation of the behaviour of an organism.
Part of this argument relies on Marr’s three levels so I will briefly summarise these before continuing:
- Computational level: the abstract task that the system is attempting to perform (e.g. flight)
- Algorithmic level: the representations and algorithms that are enacted in order to achieve the task (e.g. a bird flapping its wings)
- Implementational level: the physical medium that instantiates or realises the algorithms (e.g. feathers and flight musculature of the bird)
Krakauer and colleagues claim that the standard methodological practice in modern cognitive neuroscience is to study and intervene on the neural or implementational level while ignoring the other two levels. However, it has recently been demonstrated on a far simpler processor than a brain – a microprocessor – that this simply won’t work. Jonas & Kording (2017) took the standard strategy of focusing solely on the implementational level and tried to make inferences back to the algorithms being enacted towards a specific computational goal (in this case the classic old-school computer games Donkey Kong, Space Invaders, and Pitfall), and they showed that the algorithms could not be reverse engineered. A crucial point here is that in the case of the microprocessor we have a very good understanding of what is going on at the algorithmic and computational levels. The fact that these could not be adequately reverse engineered from the implementational level in this exceedingly simple case makes it highly dubious that this is a viable research strategy for the brain – which is orders of magnitude more complex, and in which there are further issues such as degeneracy and multiple realizability.
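The worry about degeneracy and multiple realizability can be made concrete with a toy sketch (my own illustration, not an example from the paper): two routines that perform the identical computational-level task via different algorithms, so that the mapping between Marr’s levels is many-to-many and observing one level underdetermines the others.

```python
def max_by_scan(xs):
    # Algorithm 1: a single pass that tracks the running maximum.
    best = xs[0]
    for x in xs[1:]:
        if x > best:
            best = x
    return best

def max_by_sort(xs):
    # Algorithm 2: sort a copy and take the last element.
    return sorted(xs)[-1]

# Same computational-level task ("find the maximum"), different
# algorithmic-level realisations: the input-output behaviour is
# identical, and each algorithm could in turn be physically
# implemented in indefinitely many ways.
data = [3, 1, 4, 1, 5, 9, 2, 6]
assert max_by_scan(data) == max_by_sort(data) == 9
```

The same point read bottom-up is Jonas & Kording’s: inspecting the low-level substrate alone leaves open which of many candidate algorithms is actually in play.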
I think this is a really strong argument; but, as was noted in the group, a disappointing feature of the paper is that Krakauer and colleagues don’t adequately leverage the point by supplementing their critique with a viable alternative. One could respond in their defence that this is where the call for more work on ecologically salient and valid behavioural studies becomes important. They provide a couple of nice case studies in animal cognition that demonstrate how work at the algorithmic level can greatly enhance a neuroscientific investigation: e.g. sound localisation in barn owls and gerbils, and prey-capture behaviour in weakly electric fish (pp. 486-487). It is notable, however, that in both these cases inferences from the algorithmic level to the implementational level took several decades of work – this considerable extra workload might militate against the uptake of their ideas, especially in the current academic climate, in which many scientists feel torn between the pressure to publish at a high rate and the difficulty of obtaining grants for larger, longer-term projects (Stanford 2015). On a more positive note related to the demand for ecological validity, one could even use Krakauer and colleagues’ point here as an argument for the importance of projects such as neuroanthropology, cognitive anthropology, and ethnography within the wider project of cognitive science as an interdisciplinary field. They capture this point with the following diagram (p. 481):
[On a side note I’d just like to say that I really enjoyed the use of diagrams in this paper]
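The barn owl case can be given a quick algorithmic-level gloss. The classic Jeffress-style story is that sound-source direction is recovered from the interaural time difference (ITD) via delay lines and coincidence detectors – computationally, finding the lag that maximises the correlation between the two ear signals. Here is a minimal sketch of that idea (my own toy illustration, not code from the paper; the sampling rate, signal lengths, and the 0.5 ms delay are all invented for the demo):

```python
import random

def estimate_itd(left, right, fs, max_lag=50):
    # Jeffress-style idea: scan candidate delays (the "delay lines")
    # and keep the one at which the two ear signals coincide best,
    # i.e. the lag maximising their correlation.
    # A positive result means the right ear lags the left.
    best_lag, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        score = sum(
            left[n - lag] * right[n]
            for n in range(len(right))
            if 0 <= n - lag < len(left)
        )
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag / fs  # time difference in seconds

# Toy demo: a noise burst that reaches the right ear 0.5 ms late.
fs = 44_100
random.seed(0)
burst = [random.gauss(0, 1) for _ in range(2000)]
delay = int(0.0005 * fs)              # ~22 samples
left = burst + [0.0] * delay
right = [0.0] * delay + burst
itd = estimate_itd(left, right, fs)   # recovers roughly 0.0005 s
```

The point of having the algorithmic-level description is exactly this: the same lag-maximisation computation could be realised by coincidence-detector neurons or by silicon, which is why the circuitry alone underdetermines it.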
There is much else to discuss in this paper and it will be really interesting to see how both the wider cognitive science community and the community of neuroscientists respond to it. My hope is that it is not simply ignored and that it generates plenty of debate – just as I witnessed in the Macquarie University cognitive science department last year during an excellent discussion of a paper by de Witt and colleagues (2016), which raises the question of what it is that neuroimaging methods are measuring in the brain.
I conclude with a final critical comment: one subsection of the paper discusses the philosophical literature on mechanistic explanations (p. 485). Krakauer and colleagues take their lead from Bill Bechtel in defining a mechanism as “…a structure performing a function in virtue of its component parts, component operations, and their organization. The orchestrated functioning of the mechanism is responsible for one or more phenomena” (2008, p. 13). They also go on to discuss Carl Craver’s (2007) work in this section. Both Bechtel and Craver have done much to explore, explicate, and defend the underlying philosophy of neuroscience in detail. Indeed, Craver’s book begins with a really interesting discussion that is particularly relevant to the interdisciplinary topics we have been discussing in the group – so I would like to quote it at length:
“My neuroscience adviser once said of philosophy that he could not see how anyone could think without data. This view of philosophy is widespread among neuroscientists. I conjecture that this is in part because neuroscientists have mostly encountered philosophers of mind and metaphysicians. In many cases, these philosophers come to neuroscience with a set of concerns and a technical vocabulary that is out of touch with the way that neuroscientists think about their own work. Many metaphysical projects are fascinating, but the most interesting metaphysical disputes are often irrelevant to building explanations in neuroscience. One goal of this book is to convince neuroscientists and neurophilosophers that the philosophy of science can contribute meaningfully to how they think about the goals of their work and about the strategies for reaching those goals. A philosophy of neuroscience constructed by reference to the goals and strategies of contemporary neuroscience can create a bridge between the way that neuroscientists think about science and the way that philosophers think about causation, explanation, and levels. This point of agreement can then be the starting place for evaluating how, and if, neuroscientists and neurophilosophers can explain what they hope to explain with the tools that the explanatory framework of contemporary neuroscience affords” (2007, p. xi)
This is an example of the bridge-building work that Peter Galison (1998, ch. 9) referred to as the “trading zone”, which often crops up in our discussions due to the interdisciplinary nature of both the group and the topic of culture and cognition.
But returning to the matter at hand: it is rather odd that Bechtel and Craver’s work is discussed here in a seemingly positive fashion, whereas elsewhere in the paper Krakauer and colleagues question the causal-interventionist strategy that both Bechtel and Craver defend at great length. I don’t think this is a simple small oversight either, since Bechtel’s definition given above, and endorsed by Krakauer and colleagues, is followed in the same paragraph by the claim that emergence entails “downward causation”. Anyone familiar with Bechtel’s work will know that he has argued extensively against downward causation and follows Bill Wimsatt’s (1986, 2007) deflationary approach to emergent properties, which renders them entirely compatible with the standard reductionist paradigms of mechanistic neuroscience (emergent properties being defined in a tractable manner in terms of how they fail to meet a set of criteria for aggregativity). Indeed, Craver and Bechtel even have a joint paper (2007) arguing against downward causation, in which they contend that interlevel relations in a mechanism are constitutive and that top-down and bottom-up causes should instead be understood as “mechanistically mediated effects” (also see Craver 2007).
Craver & Bechtel 2007 Top-down causation without top-down causes. Biology & Philosophy 22, 547-563.
Jonas & Kording 2017 Could a Neuroscientist Understand a Microprocessor? PLOS Computational Biology 13(1).
Krakauer et al 2017 Neuroscience Needs Behavior: Correcting a Reductionist Bias. Neuron 93, 480-490.
Marr 1982/2010 Vision: A Computational Investigation into the Human Representation and Processing of Visual Information. Cambridge, MA: MIT Press.
Wimsatt 1986 Forms of Aggregativity. (pp. 259-291) in Donagan et al (eds.) Human Nature and Natural Knowledge: Essays Presented to Marjorie Grene on the Occasion of her Seventy-Fifth Birthday. Boston: D. Reidel Publishing Limited.
Additional interesting links:
One of the topics in the paper concerned the use of VR in neuroscience. For anyone interested in this topic I recommend checking out the following two articles here and here. And this short video is also quite interesting, if one can get past the Matrix allusions.
Also raised was the topic of cognitive archaeology (here is a nice short video explaining the topic) – in particular the highly interesting Tucson Garbage Project, which applies archaeological methods to modern middens in order to test their validity.