Friday, 27 January 2017

Can computational cognitivism please just go to the scientific graveyard already?

*groan* I thought we were past this. Can't people tell when they've lost the dominant paradigm in a scientific field? Sadly not, and the old guard can be very resilient. Case in point: this paper titled, The Demise of the Synapse As the Locus of Memory: A Looming Paradigm Shift? In essence, this is more a political paper than a scientific one. Taken at face value it is a rather strange paper, because you could summarize the argument as "memory is not in the synapse, it's somewhere in the cell", and that is very strange indeed, because a synapse is part of the cell, so what is the author actually trying to say?

Basically, this goes back to the "cognitivist versus connectionist" debates of the 1980s and 1990s (in this post I use "cognitivism" rather loosely to refer to the computationalist and representationalist tendencies in cognitive science). Back then the dominant metaphor of the mind was the digital computer: a symbol-processing machine that operates on representations of the things it needs to process (sensory images, thoughts, motor programs, etc.). A lot of today's psychological language comes from this: knowledge schemas, working memory (as a short-term memory buffer), the idea of buffers, even the idea of memory itself. This is very much the thinking of a programmer, with the mind as "software" running on the brain "hardware". A key tenet of this theory was that the brain itself is "not relevant" for understanding the mind; it merely implements the programs that the mind actually runs. So they surmised it was enough to study the mind itself, at most through psychological experiments, and to ignore the brain entirely, since that was not the relevant level of analysis for "understanding". Of course this idea seems bizarre nowadays, since we understand that the very nature of the neural substrate matters a great deal for coding and thus for the mind.

Then in the 1980s the artificial neural network paradigm arrived on the scene, spurred by the success of the backpropagation algorithm. Back then this field was known as "connectionism": the idea that the mind could be modelled more like how the brain seems to work, that is, using many parallel, distributed, nonlinear processing units ("neurons"). Suddenly these researchers were saying the complete opposite of the cognitivists: that the mind is not separate from the physical hardware, the brain. This led to a heated debate. The paper under discussion (Trettenbrein 2016) cites some articles covering this debate (Fodor & Pylyshyn 1988 were fierce advocates of the cognitivist perspective).
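To make the connectionist picture concrete, here is a minimal sketch of the kind of thing that excited people in the 1980s: a tiny network of parallel distributed nonlinear units trained with backpropagation. The task (XOR), architecture, and hyperparameters are my own illustrative choices, not anything from the papers discussed here.

```python
import numpy as np

# A minimal "parallel distributed processing" network: one hidden layer
# of nonlinear units trained with backpropagation on XOR, a task that a
# single-layer perceptron famously cannot solve.

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Weights: the "connection strengths" that learning modifies.
W1 = rng.normal(size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))
b2 = np.zeros(1)

lr = 1.0
losses = []
for step in range(5000):
    # Forward pass: "transfer of activation levels among units".
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(((out - y) ** 2).mean())

    # Backward pass: propagate the error gradient to every weight.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

preds = (sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
print("predictions:", preds.ravel())
```

Note that nothing in the trained network looks like a symbolic rule for XOR; the "knowledge" lives entirely in the connection strengths, which is exactly the point of contention in the debates described above.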

Another famous paper in this debate was Pinker & Prince 1988, who argued that the neural network systems of the time could not encode language, and concluded that language cannot be implemented as a neural network; it has to be a symbol-processing program. This is again a strange perspective today, especially in light of all the neuroscience work done since, but it is informative to glance at the paper to get a feel for the tone of the debate. Even this part of the abstract is illuminating for how the issue was framed:

"Does knowledge of language consist of mentally-represented rules? Rumelhart and McClelland have described a connectionist (parallel distributed processing) model of the acquisition of the past tense in English which successfully maps many stems onto their past tense forms, both regular (walk/walked) and irregular (go/went), and which mimics some of the errors and sequences of development of children. Yet the model contains no explicit rules, only a set of neuronstyle units which stand for trigrams of phonetic features of the stem, a set of units which stand for trigrams of phonetic features of the past form, and an array of connections between the two sets of units whose strengths are modified during learning. Rumelhart and McClelland conclude that linguistic rules may be merely convenient approximate fictions and that the real causal processes in language use and acquisition must be characterized as the transfer of activation levels among units and the modification of the weights of their connections."

As you can see, from today's perspective this debate seems very strange, because how could you reasonably argue for ignoring the brain these days? Of course language is implemented as "transfer of activation levels among units", in other words, action potentials across cells in a network! But back then it was hotly contested, largely because, in the face of the stunning successes of artificial neural networks, the "old guard" of cognitivists did not want to lose their status as the people who could be said to understand the mind. This is how science works, as Thomas Kuhn showed: when there is a revolution and paradigm shift in a field, the old guard resists the most, and unfortunately they can have a lot of power, since they head departments and so on. This is a bit off topic, but it's important to keep in mind that scientific debates always have political and human elements (one of the biggest findings of the social studies of scientific practice over the last half century).

So back to the Trettenbrein paper. It makes more sense now, because Trettenbrein is arguing once again from the cognitivist perspective. In essence, I think he (and probably others) saw the papers on how memory persists even after retrograde amnesia and reversal of LTP (from the Tonegawa lab, plus the paper in Aplysia), and used them to attack the very idea that memory is in the synapses, which is really an attack on the "connectionist" (nowadays we could just say neuroscientific, as opposed to cognitivist) idea that the right level for understanding the brain is neurons and networks.

You can see this at every step, since each step makes little sense on its own. For example, the claim that the mind is Turing-complete is a very antiquated way of thinking and not at all established... this goes back to the old days of seeing the mind as a computer program, when people wanted to "prove" that the mind could be Turing-complete, i.e., a universal computing machine. But that term has a very specific set of criteria, useful for the machines we build, not so much for the brain. A trivial counter is that Turing-completeness, in the strict definition, requires unbounded memory, which a brain obviously cannot have. So as much as Trettenbrein says that Gallistel and King 2009 "convincingly" argue for the mind being a Turing machine, there are many others who argue that the mind is not Turing-complete. This is just a sneaky way of undercutting neuroscience and saying that the mind is best studied as a computer program, not as a neural network.

You can also see it in the distinction drawn between learning and memory. This again seems a strange move, but it makes sense once you realize that the connectionist paradigm treats learning and memory as part and parcel of the same process; by attacking that, Trettenbrein is trying to undermine connectionism. And yet his argument is deeply flawed, because we do know that learning and memory are separable, and that connectionist models can account for this. Back in the 1990s, James McClelland and colleagues implemented a neural network demonstrating "two-stage" learning, in which one module (the hippocampus) rapidly encoded memory patterns that were then consolidated into long-term storage in cortical regions. So learning (hippocampal representations) and memory (long-term cortical storage) were already successfully "separated" within a connectionist paradigm decades ago. And today we have plenty of confirmatory neurobiological evidence for this distinction. So Trettenbrein's criticism, as a way of boosting cognitivism, is misguided and flawed.
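The two-stage idea can itself be sketched in a few lines of connectionist code. What follows is my own toy illustration of the principle (fast one-shot storage, then interleaved replay into a slow long-term store); the sizes, learning rates, and the linear auto-associator are arbitrary choices, not McClelland's actual model.

```python
import numpy as np

# Toy two-stage learning: a fast "hippocampal" store memorizes a new
# pattern in one shot, then replays it, interleaved with old knowledge,
# into a slow "cortical" network for long-term storage.

rng = np.random.default_rng(2)
dim = 16

# "Cortex": a slow linear auto-associator holding consolidated patterns.
W_slow = np.zeros((dim, dim))

def consolidate(patterns, lr=0.05, epochs=300):
    """Replay a mixed batch of patterns into the slow weights (delta rule)."""
    global W_slow
    for _ in range(epochs):
        for p in patterns:
            W_slow += lr * np.outer(p - W_slow @ p, p)

old = [rng.choice([-1.0, 1.0], dim) for _ in range(3)]
consolidate(old)  # pre-existing long-term knowledge

# "Hippocampus": one-shot episodic storage of a new experience.
fast_store = [rng.choice([-1.0, 1.0], dim)]

# Offline consolidation: replay new memories interleaved with old ones,
# so the slow network learns the new pattern without overwriting the old.
consolidate(old + fast_store)

def recall(p):
    return np.sign(W_slow @ p)

ok_old = all((recall(p) == p).all() for p in old)
ok_new = (recall(fast_store[0]) == fast_store[0]).all()
print("old memories intact:", ok_old, "| new memory stored:", ok_new)
```

The point of the interleaved replay is to avoid catastrophic interference: training the slow network on the new pattern alone would degrade the old ones, which is precisely why a separate fast-learning module is useful, all within a connectionist framework.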

But the most important shortcoming of this attack on the idea that memory is in the synapses is that it fails to offer an alternative that would not itself end up vindicating connectionism anyway. I'll explain what I mean in detail. First, he says (p. 5),

"Lastly, all of this is not to say that synaptic plasticity and networks are of no importance for learning and memory. Fodor and Pylyshyn (1988) already reviewed the implications of connectionist models, concluding that connectionism might best be understood as a neutral “theory of implementation” of the actual cognitive architecture, provided that one gives up anti-representational tendencies inherent to the approach. As a consequence, the question no longer is whether symbolic representations are “real,” but how (i.e., on what level) they are actually implemented in the brain. The challenge for critics of the synaptic plasticity hypothesis will therefore be to come up with concrete suggestions for how memory might be implemented on the sub-cellular level and how cells then relate to the networks in which they are embedded."

This paragraph is inherently a bad argument. Trettenbrein starts by conceding that synaptic plasticity might still matter for learning and memory, but only as an unimportant implementational detail, while the real level of understanding is symbolic representations. Translation: "we cognitivists are right, connectionists are wrong". And what is the alternative? As he says, the challenge is to look for memory at the sub-cellular level. Which is trivially absurd, since synapses are parts of cells, so they are already at the sub-cellular level! What he is really trying to say is: "How can we cognitivists find representational implementations in the brain?" But here is the failure: even if we abandon synapses and turn to something else, whatever that "something else" is, it will still sit within individual cells, and so the brain's processing will still, and always, be a matter of distributed parallel processing units, which is the core of the connectionist approach.

In other words, there is no way the cognitivists can win here. The best they can do is muddy the debate by writing papers like this: short on hard scientific reasoning, built on a bad argument, and topped off with an audacious, outrageous title like "The Demise of the Synapse...".

It's important to emphasize that the issue here is not single-neuron versus network computation, nor whether oscillations matter for coding. He is coming from a perspective that is hostile to all such considerations, one that simply wants to sweep them all under the rug as "unimportant implementational details", i.e., to say that neuroscience is wasting its time and that we should go back to the "good old days" of cognitivist thinking. This is personally very objectionable to me. I will never forget, back when I was studying cognitive science and had started learning about neurophysiology, the times when, in a debate about some aspect of the mind, I brought up ideas from neuroscience as evidence, for example about the nature of memory processes. On one such occasion a senior person responded flatly that "we don't know anything about how the brain works", as a way to shut down any appeal to neuroscience. In other words, I know from firsthand experience how cognitivist-type people can be inherently hostile to the idea that knowledge of the brain matters at all for understanding the mind, because then they lose their throne as the ones who "truly understand" it. The egos here are big enough that the very suggestion that someone doing electrophysiology or computational modelling of the brain could also be said to be helping to understand the mind is deeply offensive to them, since they think the best way is their way, i.e., mainly a philosophy-of-mind point of view.

It's unfortunate that this paper was published in Frontiers in Synaptic Neuroscience but, on the other hand, it's heartening to know that the neuroscientific approach of understanding the mind by teasing apart mechanisms in the brain will march on inexorably, and that these sad cognitivist voices will only grow more absurd.
