Sunday, 3 December 2017

Brains and Car Engines, revisited

[A paper that I had already written about has been published in PLoS Computational Biology, about half a year after its preprint went up on bioRxiv, and it has been making the rounds again on social media and in scientific circles. The paper I'm referring to is "Could a neuroscientist understand a microprocessor?" I had written a longer treatment of it in an email that was more elaborate than my previous blog post, so I am converting it to blog format and posting it here as a "continuation" of sorts of my previous post.]

The paper is quite the shocker, to put it bluntly. I think most neuroscientists quaked in their boots when they read it, because it directly and effectively criticizes most of the approaches we use that we think are helping us "understand" the brain. This is a paper with primarily philosophical and methodological implications. It cleverly (and deviously!) applies "standard" neuroscience tools, or, more precisely, signal processing and statistical tools (spectral analysis, Granger causality, dimensionality reduction, etc.), to a microprocessor and generates data strikingly similar to what we see when we apply these techniques to neural tissue (power laws, oscillations, transistor tuning curves, etc.). And yet the brain is obviously not a microprocessor, not even close. So what kind of understanding can these tools alone give us if they basically can't distinguish between two utterly different structures of high complexity? The authors say "not much", and it's really, really hard to avoid agreeing with them after reading the paper. :)
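To make concrete what a "naive" application of such a tool looks like, here is a minimal sketch (in Python, with entirely made-up data; this is not the paper's actual analysis code) of dimensionality reduction applied to a binary activity matrix. The same few lines run identically whether the rows are neurons or transistors, which is precisely the problem the authors are pointing at.

```python
import numpy as np

rng = np.random.default_rng(0)
n_units, n_timepoints, n_latent = 100, 2000, 3

# Hypothetical data: binary on/off states of many "units" driven by a few shared
# latent signals, standing in equally well for spiking neurons or switching transistors.
latents = rng.standard_normal((n_latent, n_timepoints))
mixing = rng.standard_normal((n_units, n_latent))
activity = (mixing @ latents + 0.5 * rng.standard_normal((n_units, n_timepoints))) > 0

# "Standard tool": PCA via SVD of the mean-centred activity matrix.
X = activity - activity.mean(axis=1, keepdims=True)
_, s, _ = np.linalg.svd(X, full_matrices=False)
explained = (s ** 2) / np.sum(s ** 2)
print("variance explained by the first 5 components:", np.round(explained[:5], 3))
```

A handful of components will explain most of the variance either way; low-dimensional structure by itself tells us nothing about what the system is for.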

The blow is softened for those researchers who, from the very beginning, emphasize that it's not good enough to build a model just for the sake of building a model, but rather that we must ask why we're building it, i.e., to answer a specific biological question. Answering a specific question means that we end up thinking about functional relevance a lot. So it's not enough to just measure power spectra and so on; we have to link them to biological function. I think the authors of the paper tacitly acknowledge this kind of more "sophisticated" approach by contrasting it with what they call "naive" uses of standard neuroscience tools. This relates to the Aristotelian "four causes" (especially the final cause), but I think the authors of the paper would probably argue that even finding a formal cause is not good enough, and moreover that a final cause has important overlooked dependencies that make it particularly difficult to obtain.

The most important of these dependencies, in my mind, is knowing the inputs and outputs of the system, because the evolutionarily relevant inputs and outputs are really what drive the development of an organism and its nervous system in the first place. And this is where we are sorely lacking in neuroscience. The section in the paper titled "What does it mean to understand a system" goes into this a little bit, as does the discussion towards the end, where they say, "In the case of the processor, we really understand how it works. ... for each of these modules we know how its outputs depend on its inputs." I think the authors could have made a much stronger case for neuroscientists to focus on getting more information about inputs (e.g., the pattern of synaptic inputs in a specific in vivo behavioural context) and outputs (the specific efferent fiber activity, on a per-axon basis, leaving a system in a specific in vivo behavioural context). Because once you have that, the way the inputs are transformed into outputs (at the level of a given brain system) might almost "fall out", since you'll be able to observe the transformation directly, after which point you can generate a theory to account for the general case.

This is incidentally why comparative neuroscience is so appealing, because there, more than anywhere else, we have some hope of actually mapping out inputs and outputs. If you have "only" a few thousand cells in an invertebrate ganglion, you can more or less start characterizing the inputs and outputs. I'll never forget how a former colleague once said that when he felt depressed about the state of neuroscience he would go take a look at the invertebrate posters at SfN to be reinvigorated, and I've since then always taken this advice myself.

But even here the task becomes monumentally complex. Here's a particular favourite comparative neuro paper of mine, titled "In Search of the Engram in the Honeybee Brain" (published as a book chapter).

One of the key arguments of the paper is that the memory engram is not just something found in the "memory region" of the brain, but rather a product of the entire nervous system of the organism, from sensory to motor representations. You can demonstrate this quite directly in the bee, since these pathways are much more completely characterized there than in mammals. But even in mammals we are starting to learn about (or be reminded of), for example, the mass of extrinsic connections between the hippocampus and all sorts of brain regions that we wouldn't typically think of when we consider "cognitive" spatial and semantic maps - these include the amygdala, nucleus accumbens, even the olfactory bulb for god's sake (see almost any of James McClelland's works for perspectives on this, as well as, more recently, Strange et al 2014, "Functional organization of the hippocampal longitudinal axis", Nature).

Yet if the results in the bee hold true as a general principle of nervous system function, it means that we will have to understand how each and every one of these regions generates information flow into, and out of, the hippocampus before we can truly and fully "understand" how the hippocampus works. Slowly we're moving in that direction, with the hippocampal field becoming more aware of longitudinal differences and of how the dorsal hippocampus is more involved in spatial memory whereas the ventral is more implicated in emotional and fear processing (after many years of neglecting some of the early studies that hinted at this way back). This even brings up the question of whether the hippocampus is one unified structure or rather a system of somewhat loosely coupled modules pivoting around a dorsal and a ventral pole, as the gene expression studies might suggest (Bloss et al 2016 from Spruston's lab), that is nevertheless bound together, as the theta travelling-wave studies suggest (e.g., Patel et al 2012 from Buzsaki's lab). In which case, even talking about studying "the hippocampus" is not correct anymore, since someone could then ask, "Which hippocampus?" or "Which hippocampal module?"

To get back on point, perhaps the most important missing piece of information, then, is the pattern of inputs and outputs to any brain region, crucially as measured in an actual behavioural state. The authors of the Jonas and Kording "microprocessor paper" make two brief mentions of efforts where this is taking place, and interestingly enough both are in neuroengineering. One of them is the Berger/Marmarelis hippocampal prosthetic chip (which I'm very familiar with, having followed their work for several years). To me this is an odd work to cite in their section "What does it mean to understand a system", because although they do look at inputs and outputs (which is why it was cited), the work is importantly not set up in any way to actually understand the input/output transformations. In a nutshell, they implant a chip in CA1 with electrodes both near the DG border and near CA3 (this has been done in rodent and monkey so far). They then place the animal in a specific learning paradigm, such as a delayed non-match-to-sample task. During the delay period (i.e., while the rodent needs to remember which lever was signalled in the sampling phase so that it can push the other one when queried by a light turning on after the delay is over), they record the pattern of activity seen on both the input and output electrodes. They then take these multiunit data and fit the coefficients of the kernels of a set of Volterra series (think of them as Taylor series with built-in historicity or "memory"). Then what they can do is repeat the experiment but inject NMDA blockers locally into CA1. At that point, of course, the rodent performs terribly. But if they then turn on the chip - i.e., take the DG inputs (at a very coarse level, of course, since they only have a few sampling electrodes), feed them through the fitted Volterra series, and output the corresponding spike trains on the output electrodes - the rodent recovers and can perform the task. Amazing! What's scary is that with no NMDA blocker and the chip turned on, the rodent performs even better than in control! (So if you've seen any sci-fi movies with augmented humans with superhuman memory etc... it might actually be coming one day!)
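For readers unfamiliar with the modelling machinery, here is a toy sketch of what a discrete second-order Volterra prediction looks like (hypothetical kernels, nothing like the actual multi-input, multi-output model they fit): the output at each time step depends on the recent history of the input through first- and second-order kernels, which is what gives the series its built-in "memory".

```python
import numpy as np

def volterra_predict(x, k0, k1, k2):
    """Second-order Volterra prediction:
    y[n] = k0 + sum_i k1[i]*x[n-i] + sum_{i,j} k2[i,j]*x[n-i]*x[n-j]."""
    M = len(k1)                           # memory length in samples
    y = np.zeros(len(x))
    for n in range(M - 1, len(x)):
        h = x[n - M + 1:n + 1][::-1]      # input history, most recent sample first
        y[n] = k0 + k1 @ h + h @ k2 @ h
    return y

# Hypothetical kernels with a short memory of 5 samples.
M = 5
k1 = np.exp(-np.arange(M))                # decaying first-order (linear) kernel
k2 = 0.05 * np.outer(k1, k1)              # simple separable second-order kernel

rng = np.random.default_rng(1)
x = (rng.random(200) < 0.2).astype(float) # toy "input spike train"
y = volterra_predict(x, 0.1, k1, k2)      # predicted output drive
print(np.round(y[:10], 2))
```

In the prosthesis, these kernel coefficients are what get fitted, so that recorded input-side activity predicts recorded output-side activity.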

Now the catch with this approach is that it treats the CA1 region as a black box! There is no real understanding of the input/output transformations, because they have simply captured the correlation of input to output activity in one behavioural paradigm, i.e., the activity produced by the functioning of the actual CA1. They have not captured the general mechanism by which CA1 produces these transformations. The proof of this is that the fitted kernel coefficients only work for that specific behavioural task! To this point, one of the PIs, Ted Berger, has said (personal communication, but perhaps also mentioned in some interviews) that as a general tool for human memory enhancement (e.g., during old age and dementia) they would want to create a set of coefficients for particular useful contexts. So there would be a "kitchen program", a "bathroom program", and so on (kind of like in The Matrix, where programs for specific skills - piloting a helicopter, various martial arts - get uploaded directly). This is fine for clinical use, but not enough for understanding the brain of course, since we still have to work out how CA1 (and all the other regions involved) perform the input/output mappings in a general way, under any context.

In conclusion, my argument is that the Jonas and Kording "microprocessor" paper ends with too broad a set of lessons. Everything they conclude with is true taken individually - we need better models in neuroscience, better data analysis methods, etc. - but the common thread these share (I assert) is that knowledge of input/output patterns is crucial. I actually think we'll be fine, since new technologies are making it easier and easier to record single-unit activity from very large numbers of neurons at once. The CaMPARI work, for instance, is one of these new techniques that seem incredibly promising; see, e.g., Zolnik et al 2016.

To use the microprocessor analogy again, we already have the tools to analyze large-scale activity and the response properties of individual units/neurons (the spectral analyses, dimensionality reduction, and all that). But what network-wide neuronal interrogation in behavioural contexts gives us is the exact instructions being fed into the microprocessor, as well as the outputs, and this is what will tell us about the computation itself. Then how the computation arises will fall out almost inevitably by observing how the input gets transformed into the output. Then we might be able to develop some theories about the overall function of the component in question; e.g., in the microprocessor case we could then produce a theory of the ALU (arithmetic logic unit) and describe its function in detail, which is to perform arithmetic operations at the bitwise level, i.e., on the individual bits of the input numbers encoded in base-2. We would then even be able to immediately make inferences about why certain design decisions were made (by evolution, in the case of the brain). For instance, the ALU and microprocessors in general use base-2 (rather than base-10) because it's an extremely convenient way of representing numbers that allows a wide range of arithmetic operations (addition, subtraction, multiplication) to be performed using minimal operations that entail simply comparing or shifting bits in one direction or another, so it's easy and efficient to implement in hardware. This is also the kind of inference we'll be able to make once we get a much more complete map of the inputs and outputs of any brain region. The ultimate test of whether we've understood the brain might then be, as Jonas and Kording state for the microprocessor case, when "many students of electrical engineering would know multiple ways of implementing the same function."
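As a concrete aside on that last point, here is a tiny sketch (plain Python, just for illustration) of why "comparing or shifting bits" is all an ALU needs: addition reduces to XOR plus carries, and multiplication reduces to shifts and adds.

```python
def add(a: int, b: int) -> int:
    """Add two non-negative integers using only bitwise operations."""
    while b:
        carry = a & b          # positions where both bits are 1 generate a carry
        a = a ^ b              # XOR adds the bits without carrying
        b = carry << 1         # shift the carries left and repeat
    return a

def multiply(a: int, b: int) -> int:
    """Shift-and-add multiplication, the way simple ALU hardware does it."""
    result = 0
    while b:
        if b & 1:                     # if the lowest bit of the multiplier is set...
            result = add(result, a)   # ...accumulate the (shifted) multiplicand
        a <<= 1                       # shift multiplicand left (i.e., double it)
        b >>= 1                       # move on to the next bit of the multiplier
    return result

print(add(13, 29), multiply(6, 7))    # 42 42
```

The same few primitive moves (AND, XOR, shift) cover a whole family of arithmetic operations, which is exactly the design rationale the base-2 choice buys you.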

My own take-home from this paper was to reassess how I view my own (very limited) work in the overall sphere of neuroscience, and to reaffirm a commitment to linking neuronal activity with function, rather than being satisfied with examining how some measure of neuronal output changes in response to some manipulation without also having an idea of what it means for the overall system the model is embedded in. To repeat the end of my previous post on this subject, which contains a useful analogy (I quite like using analogies to help with thinking things through - what the cognitive scientist Daniel Dennett would call "intuition pumps"):

"This results in a state of affairs in neuroscience (at least systems level) where we are content with scratching the surface, e.g., measure changes in frequencies under different conditions. It's like measuring the spectral signature that the sound of a car's engine makes, and thinking that by comparing the peak power under idle vs driving conditions we are any closer to understanding how the engine works. However, we are actually in a very primitive state of "understanding". We don't know anything about pistons and drivetrains, let alone the principles of internal combustion. Yet it's only by understanding these that we would truly know how a car engine works. The same applies to the brain."

Friday, 27 January 2017

Can computational cognitivism please just go to the scientific graveyard already?

*groan* I thought we were past this. Can't people tell when they've lost the dominant paradigm in a scientific field? Sadly not, and the old guard can be very resilient. Case in point: this paper titled "The Demise of the Synapse As the Locus of Memory: A Looming Paradigm Shift?" In essence, this is a political paper, not so much a scientific one. It's also a bit of a strange paper if you think about it, because you could summarize the argument as "Memory is not in the synapse, it's somewhere else in the cell", which is very strange indeed because a synapse is part of the cell, so what is the author actually trying to say?

Basically, this goes back to the "cognitivist versus connectionist" debates of the 1980s and 1990s (in this post I use "cognitivism" rather loosely to refer to the computationalist and representationalist tendencies in cognitive science). Back then, the dominant metaphor for the mind was the digital computer, i.e., a symbol-processing machine that makes use of representations of the things it needs to process (representations of sensory images, thoughts, motor programs, etc.). A lot of today's psychological language comes from this: knowledge schemas, working memory (as a short-term memory buffer), the idea of buffers, the idea of memory even, etc. This is all very much the thinking of a programmer working on a computer, and of the mind as "software" running on the brain "hardware". A key tenet of this theory was that the brain itself is "not relevant" for understanding the mind; it simply implements the programs that the mind actually uses. So they surmised it was enough to study the mind itself, at most to do psychological experiments, and to completely ignore the brain, since that was not the most relevant level of analysis for "understanding". Of course, nowadays this idea seems somewhat bizarre, since we understand that the very nature of the neural substrate is actually very important for coding and thus for the mind.

Then in the 1980s the artificial neural network paradigm arrived on the scene, spurred by the success of the backpropagation algorithm. Back then this field was known as "connectionism": the idea that the mind could be modelled more like how the brain seems to work - that is, using many parallel, distributed, nonlinear processing units ("neurons"). All of a sudden, these researchers were saying the complete opposite of the cognitivists - that, actually, the mind is not separate from the physical hardware, i.e., the brain. This led to a hot debate. The paper under discussion (Trettenbrein 2016) cites some articles that discuss this debate (Fodor & Pylyshyn 1988 were fierce advocates of the cognitivist perspective).

Another famous paper in this debate was Pinker & Prince 1988, who argued that the neural network systems of the time could not encode language, and concluded that language therefore cannot be implemented as a neural network - it has to be a symbol-processing program. This is again a strange perspective today, especially in light of all the neuroscience work done to date, but it is informative to glance at the paper to see the tone of the debate. Even just this part of the abstract is illuminating for how the issue was framed:

"Does knowledge of language consist of mentally-represented rules? Rumelhart and McClelland have described a connectionist (parallel distributed processing) model of the acquisition of the past tense in English which successfully maps many stems onto their past tense forms, both regular (walk/walked) and irregular (go/went), and which mimics some of the errors and sequences of development of children. Yet the model contains no explicit rules, only a set of neuronstyle units which stand for trigrams of phonetic features of the stem, a set of units which stand for trigrams of phonetic features of the past form, and an array of connections between the two sets of units whose strengths are modified during learning. Rumelhart and McClelland conclude that linguistic rules may be merely convenient approximate fictions and that the real causal processes in language use and acquisition must be characterized as the transfer of activation levels among units and the modification of the weights of their connections."

As you can see, from today's perspective this debate seems very strange - how could you reasonably argue for ignoring the brain these days? Of course language is implemented as "transfer of activation levels among units", in other words, action potentials across cells in a network! But back then it was hotly contested, largely because, in the face of the stunning successes of artificial neural networks, the "old guard" of cognitivists did not want to lose their status as the people who could be said to understand the mind. This is how science works, as shown by Thomas Kuhn: when there is a revolution and paradigm shift in any field, the old guard resists the most, and they unfortunately can have a lot of power since they are heads of departments and so on. Anyway, this is a bit off topic, but it's important to keep in mind that scientific debates always have political/human elements in them (this is one of the biggest findings of the social studies of scientific practice over the last half century).

So back to the Trettenbrein paper. It makes more sense now when you look at it, because Trettenbrein is arguing once again from the cognitivist perspective. In essence, I think he (and probably others) saw the papers he cites on how memory persists even after retrograde amnesia and reversal of LTP and so on (from the Tonegawa lab, and the other paper in Aplysia), and used this to attack the very idea that memory is in the synapses - which is basically to attack the "connectionist" (or, nowadays, we could just call it the neuroscientific, as opposed to cognitivist) idea that the right level for understanding the brain is the analysis of neurons and networks.

You can see this at every step of the way, since each step makes little sense on its own in the paper. For example, the claim that the mind is Turing-complete is a very antiquated way of thinking and not at all established... this goes back to the old days of seeing the mind as a computer program, when people wanted to "prove" that the mind could be Turing-complete, i.e., a universal computing machine. But this term has a very specific set of criteria that is useful for applying to machines we create, not so much to the brain. For example, a trivial way of countering it is to point out that Turing-completeness requires unbounded memory (in the strict definition), which we obviously cannot have. So as much as Trettenbrein says that Gallistel and King 2009 "convincingly" argue for the mind being a Turing machine, there are many others who argue that the mind is not Turing-complete. This is just a sneaky way of undercutting neuroscience and saying that the mind is best studied as a computer program, not as a neural network.

You can also see it in the distinction between learning and memory. This is again a strange thing to do, but it makes sense if you realize that the connectionist paradigm considers learning and memory to be part and parcel of the same process. So by attacking this, Trettenbrein is trying to undermine connectionism. And yet his argument is deeply flawed, because we do now know that learning and memory are dissociable, and that connectionist ideas can still account for this. James McClelland, way back, implemented a neural network that demonstrated "two-stage" learning, where a module (the hippocampus) rapidly encoded memory patterns that were then consolidated into long-term storage in cortical regions. So learning (hippocampal representations) and memory (long-term cortical storage) were already successfully "separated" using a connectionist paradigm more than 10 years ago. And of course today we have a lot of confirmatory neurobiological evidence for this distinction between learning and memory. So Trettenbrein's criticism, as a way of boosting cognitivism, is misguided and flawed.
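For what it's worth, here is a cartoon of that two-stage idea (in the spirit of McClelland-style complementary learning systems, not a reimplementation of any published model): a fast hippocampus-like store memorizes patterns in one shot, and a slow cortex-like network absorbs them gradually through interleaved replay.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 32

class FastStore:
    """'Hippocampus': one-shot storage of patterns, replayed at random later."""
    def __init__(self):
        self.patterns = []
    def store(self, p):
        self.patterns.append(p)
    def replay(self):
        return self.patterns[rng.integers(len(self.patterns))]

class SlowNet:
    """'Cortex': a linear autoassociator trained with a small learning rate."""
    def __init__(self, dim, lr=0.01):
        self.W = np.zeros((dim, dim))
        self.lr = lr
    def train(self, p):
        self.W += self.lr * np.outer(p - self.W @ p, p)   # slow error-driven update
    def recall_error(self, p):
        return float(np.mean((self.W @ p - p) ** 2))

hippo, cortex = FastStore(), SlowNet(dim)
patterns = [rng.choice([-1.0, 1.0], size=dim) for _ in range(5)]

for p in patterns:
    hippo.store(p)                  # rapid "learning": immediate hippocampal encoding
for _ in range(2000):
    cortex.train(hippo.replay())    # slow "memory": consolidation via interleaved replay

print([round(cortex.recall_error(p), 3) for p in patterns])
```

The point is simply that nothing about separating a fast encoding stage from a slow storage stage requires abandoning networks of weighted connections.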

But the most important shortcoming in this approach of attacking the idea that memory is in the synapses is that it fails to bring up an alternative idea that would not also fail to discredit connectionism after all. I'll explain what I mean in detail. First, he says (p. 5),

"Lastly, all of this is not to say that synaptic plasticity and networks are of no importance for learning and memory. Fodor and Pylyshyn (1988) already reviewed the implications of connectionist models, concluding that connectionism might best be understood as a neutral “theory of implementation” of the actual cognitive architecture, provided that one gives up anti-representational tendencies inherent to the approach. As a consequence, the question no longer is whether symbolic representations are “real,” but how (i.e., on what level) they are actually implemented in the brain. The challenge for critics of the synaptic plasticity hypothesis will therefore be to come up with concrete suggestions for how memory might be implemented on the sub-cellular level and how cells then relate to the networks in which they are embedded."

This paragraph is an inherently bad argument. Trettenbrein starts by saying that synaptic plasticity might still be important for learning and memory, but that it's an unimportant implementational detail, and that the real level for understanding the brain is that of symbolic representations. Translation: "we cognitivists are right, the connectionists are wrong". And what is the alternative? Well, as he says, the challenge is to look for memory at the sub-cellular level. Which is trivially absurd because, of course, synapses are parts of cells, so they are already at the sub-cellular level! What he is really trying to say is, "How can we cognitivists find representational implementations in the brain?" But the failure is that even if we set synapses aside and turn to something else, whatever that "something else" is, it will inevitably still sit within individual cells - and thus the brain's processing will still, and always, be a matter of distributed parallel processing units, which is the core of the connectionist approach.

In other words, there is no way the cognitivists can win here. The best they can do is confuse and muddy the debate by writing papers like this that are short on hard scientific reasoning, bring up a bad argument, and then cap it off with an audacious and outrageous title like "The Demise of the Synapse...".

It's important to emphasize that the issue here is not that he is arguing for single-neuron computation versus network computation, nor for oscillations being important or unimportant for coding. He is coming from a perspective that is inherently hostile to all such considerations, that simply wants to sweep all of that under the rug and say that they are "unimportant implementational details" - i.e., that neuroscience is wasting its time and that we should go back to the "good old days" of cognitivist thinking. This is personally very objectionable to me. I will never forget, back when I was studying cognitive science and had started learning about neurophysiology, how in the course of debates about some aspect of the mind I would bring up ideas from neuroscience as evidence, for example about the nature of memory processes. One such time, a senior person, in response to this, said flatly that "we don't know anything about how the brain works" as a way to shut down any attempt at using neuroscience. In other words, I know from firsthand experience how cognitivist-type people can be inherently hostile to the idea that how the brain works matters in any way to understanding the mind, because then they lose their throne as the ones who "truly understand" the mind. The egos here are big enough that the very suggestion that someone doing electrophysiology or computational modelling of the brain can also be said to be helping to understand the mind is deeply offensive to them, since they think the best way is their way, i.e., mainly from a philosophy-of-mind point of view.

It's unfortunate that this paper was published in Frontiers in Synaptic Neuroscience but, on the other hand, it's heartening to know that the approach of neuroscience in understanding the mind by teasing apart mechanisms in the brain will inexorably march on, and that these sad cognitivist voices will concomitantly continue to grow even more absurd.

Sunday, 29 May 2016

Brains and car engines

This paper makes a solid point. The argument about whether the brain is like a microprocessor or not (obviously not) is a distraction from the main point, which is that without understanding the flow of information we are simply sorting through patterns in the data coming out of neuroscience without arriving at any real understanding.

For instance, it's quite surprising how much effort is spent on measuring and tracking different oscillations in the brain. These are correlated with the actual information flows, but they are not the same thing. We rather need to tease apart the mechanisms by which the constituent neurons communicate with each other, in various behavioural and functional contexts. From there, we may then find out how the oscillations arise, e.g., that they are a byproduct of particular forms of processing. Instead, we put the cart before the horse and focus excessively on the oscillations first (or any large-scale activity), before examining cell-specific electrophysiology and neurochemical interactions. Indeed, many neuroscientists are loath to go that "low level"!

This results in a state of affairs in neuroscience (at least at the systems level) where we are content with scratching the surface, e.g., measuring changes in frequencies under different conditions. It's like measuring the spectral signature of the sound a car's engine makes, and thinking that by comparing the peak power under idle vs driving conditions we are any closer to understanding how the engine works. In reality, we are in a very primitive state of "understanding". We don't know anything about pistons and drivetrains, let alone the principles of internal combustion. Yet it's only by understanding these that we would truly know how a car engine works. The same applies to the brain.
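To push the analogy one step further, here is a toy version of that comparison (made-up signals, not real engine recordings): the spectral peak of the "engine sound" does shift between idle and driving, and yet the analysis tells us nothing about pistons, drivetrains, or combustion.

```python
import numpy as np
from scipy.signal import welch

fs = 8000.0                                    # audio sampling rate in Hz
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(0)

def engine_tone(rpm):
    """Crude engine-like sound: a firing-frequency tone, a harmonic, and noise."""
    f0 = rpm / 60 * 2                          # rough firing frequency for a 4-cylinder
    tone = np.sin(2 * np.pi * f0 * t) + 0.3 * np.sin(2 * np.pi * 2 * f0 * t)
    return tone + 0.5 * rng.standard_normal(t.size)

for label, rpm in [("idle", 800), ("driving", 3000)]:
    f, pxx = welch(engine_tone(rpm), fs=fs, nperseg=2048)   # Welch power spectrum
    print(f"{label}: spectral peak near {f[np.argmax(pxx)]:.0f} Hz")
```

Swap "engine sound" for "LFP" and "rpm" for "behavioural condition" and the exercise looks uncomfortably familiar.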

Sunday, 22 May 2016

Murakami'd

I've been Murakami'd. Which is to say, I finished reading The Wind-Up Bird Chronicle written by Haruki Murakami. At least, anyone looking at me from the outside would conclude that that's what it means since, according to most external signifiers, I was sitting quietly for long periods of time reading this one particular book. This process repeated itself over the course of many days, with nary a sign that universes were colliding and separating, except perhaps for the occasional burst of laughter, furrowing of the brow, or quick exhalation of the breath.

There really is not much that I can say about the book itself, due to its very nature. Some of that is because it is fundamentally a surrealist book and, from very brief ventures into literary criticism, I see that there are many different ways a book can be said to be "surrealist", so I won't get into this at all, not least because I am not a literary critic. However, this is one of those rare books that shook me to my very core, and produced an experience that is difficult to otherwise come by. Some books have a tendency to do that to me, and each one in a different way. Murakami's work is different from anything I've ever read before, and elicited such a strange constellation of emotions and realizations, that I had to label it as its own thing, using its own verb - to be Murakami'd. And so, I felt compelled to sit down and mark down my experience in words immediately after the last page was turned - at least, as best as I could - as a record for myself, to be returned to in the future and accessed once more, talisman-like, sort of like how the primary protagonist of the novel would (literally) descend into his well and clutch his baseball bat tightly when venturing on his dream-reality journeys.

One of the things I've realized is that reality is not real, nor is unreality not un-real. Anyone who studies the history or philosophy of science knows this (at least the former), but to know it like this, as a ton of bricks hitting you, is something else. These are not meant to be cute transpositions of each other but are both equally and fully valid truths, juxtaposed together merely as a form of convenience of expression. The way Cinnamon retold and reshaped his mother's tales in the novel, holding to "the assumption that fact may not be truth, and truth may not be factual", may seem to be saying the same thing, but the emphasis is slightly different. What I'm referring to has more to do with the real/surreal distinction, or more accurately the real/constructed distinction. This is touched upon directly in an excellent interview with Murakami, where he says, "I don’t want to persuade the reader that it’s a real thing; I want to show it as it is. In a sense, I’m telling those readers that it’s just a story—it’s fake. But when you experience the fake as real, it can be real. It’s not easy to explain." This does not mean that you merely become "engrossed" in the story. We are talking about realities. What is real? How do you define 'real'? (I hear these sentences in Laurence Fishburne's voice.) It's what arises when our minds or mental faculties meet with the (putatively) "objective" world through the mediums of our very idiosyncratically tuned (by evolution) faculties of sensation. (I cheated there, by parroting back from Buddhist metaphysics, but I think it's a very good summary of the human reality-constructing process.)

Therefore, what any one of us experiences as "real" is not objective nor necessarily shared by any other person, and I utterly blank on wondering what the Universe is like, as witnessed by an entirely different species, with differently tuned faculties of sensation and mind. Now, it's easy to ponder this when we talk about the "subjective stuff" - feelings, thoughts, emotions. But when we talk about the "external, objective stuff" - cars, wind, the Internet - we think there is only one possibility, and we all have the same access to it (except perhaps, the narrative goes, at the quantum level, but then that doesn't really affect us at the macroscopic scale; it all "cancels out", right, and anyway, scientists will eventually figure out what's really going on there too). Part of what it means to be Murakami'd, for me, is to have this certainty deeply shaken, to see that even the "external stuff" is highly contingent, elusive, ephemeral. When you get stabbed by a knife in a dream reality, and have a subsequent wound in the non-dream reality, is that strange? Why should it be? Is it just because it hasn't yet happened to us? What's to say that it couldn't? And which reality would then be the real one, which the dream reality? Or are they both dreams? Hume absolutely hit the nail on the head with his problem of induction - we really can never be sure that the sun will rise tomorrow, at all. The Popperian response (or Bayesian, for that matter) is merely instrumental or "practical" but does not address the deeply metaphysical conundrum. Sure, we can say that such-and-such "laws" of physics preclude it*, and the sun will only not rise when in 4 or 5 billion years, it swells to such a size that it engulfs the Earth - sure, then you can say that the sun "does not rise", but this is very problematic.

* remember that there are no such laws "hanging" out there in space. These "laws" are merely tentative descriptions that humans have imposed on the worlds-in-their-heads (i.e., their conceptual models). The motions of the planets are absolutely not "governed" by the laws of gravity. They are governed by something else entirely, that we do not yet understand and probably have no hope of truly understanding. We merely label it "gravity" and have a neat set of equations to describe it, but made a terrible mistake by mixing up the semantics and making it seem that a description is an actual statement of causal fact. Frankly, it is a miracle that the New Horizons probe made it to Pluto at all. What a fluke, that the planetary mechanics that were concocted in a puny ape-mind actually worked at such a grand scale! (The universe may yet have its last laugh, however, especially if the MOND theories are correct, and gravity ends up working quite differently at the truly mega-scale.)


What is to say that there is not some other process, as yet unobserved, that could also lead to the cessation of the sun rising? Perhaps some "cascading quantum effect" where the sun dissipates or dismantles itself without causing damage to our planet? Suppose you could formulate a scientifically palatable theory that could account for such a phenomenon (using proper language, of course: "cascading quantum effect" is a step in the right direction, I think). Can we calculate the chances of that happening? How could we? But then, when and if it does happen, all we can do is say "Oh, well, so I guess that happened." Soo desu ka. We really have no say in the matter, one way or another, despite what our physics textbooks and Nobel laureates say. In other words, we are quite full of hubris when we claim privileged knowledge of how reality works - by our very definitions of the scientific process, even. The very fact that theories can change, is testament to our temporary understanding. Before we go further, no, evolution is "not just a theory" - the vast body of evidence for it can't just disappear, but it can be expanded, changed, and, ultimately, take on an entirely unrecognizable form. (Lamarck says hi.) The "theory of evolution" can be relegated to the graveyard of the pessimistic induction. And so, we really have a very shaky grasp on what "reality" actually is, how it is constructed, and how it self-perpetuates.

(As you can imagine, to be Murakami'd is an excellent antidote to the pernicious religious sentiments of scientific materialism, something to which I am prone. But that is not to say that I will cease to be a practising scientist, nor that I will suddenly side with the anti-vaxxers. Those people are still bat-shit crazy, Murakami or no.)

So then, when the scientific materialist narrative is set aside and recognized as the flashy new smartphone model in a series of world-models, what tools can we use to understand reality? I really don't know, but Murakami does have a way of showing you that things are not as they seem, that what appears unreal can actually be very real, in a concrete sense. In essence, he brings up the utter indeterminacy of epistemology and ontology, in other words, the inseparability of what/how we know, with what actually "is". And then - crucially - he demonstrates how you can set a blender to it, and mix up the epistemology, remarkably also mixing up the ontology! It's amazing to see, though, how deeply ingrained and overly confident our world-models are, when it takes something like The Wind-Up Bird Chronicle to smack you in the head in order to realize that, actually, maybe that's not the truth after all.

I'm afraid that in the above paragraphs I've strayed too far into the familiar sphere of the philosophy of science, owing to my having studied it, but please do not take being Murakami'd to mean that I started thinking about these subjects in exactly this way. It was far more visceral and non-verbal. I merely used the conceptual tools I am familiar with to try to describe the experience, but I see now that I was only elaborating on the intellectual implications of the experience, not the experience itself.

[This post was written in the summer of 2015 but only published in May 2016.]

The brain may not be a digital computer, but it sure ain't "empty"

The following article ("The empty brain" by Robert Epstein) came up in my circle of Cognitive Science friends from university on Facebook. Normally I am sympathetic to viewpoints that try to show that the brain is not a digital computer, or doesn't have a von Neumann architecture. All of these ideas are quite outdated and we probably don't have to keep beating the dead horse at this point. Although the article started off that way, it quickly took a turn into some very strange territory - a quite reductionistic take on what the brain is "doing". The author seems hell-bent on saying that nothing ever happens in the brain at all, and that the only intelligent behaviour arises vis-a-vis interactions with the world. The question "How and where, after all, is the memory stored in the cell?" is a shocker and a dead giveaway. For starters, there are such things as synaptic plasticity, dendritic remodelling, and ion channel and receptor expression/recycling, all of which serve to change a neuron's input/output function as a function of experience. How can these things be so blatantly ignored when discussing the question of memory formation in the brain?

Regarding the whole "there are no algorithms, encoders, decoders, ..." bit (paraphrasing), this is also eyebrow-raising. When light enters the eye and hits the retina, the pattern and intensity of the light get converted into a series of action potentials. Why would you not call that an analog-to-digital converter? Also, most neurons exhibit idiosyncrasies in how they respond to different types of synaptic input. Some neurons suppress low-frequency inputs, only responding to high-frequency ones, or vice versa. These are, quite literally, high-pass and low-pass filters, respectively - all components of information processing systems. And we're not even getting into what we know so far about neural circuits, all of which strongly indicates that information processing is taking place, yes, even representation, storage, retrieval, etc.
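To make those two observations concrete, here is a minimal sketch (toy models, no biophysics): first an "analog-to-digital" step that turns a continuously varying light intensity into a spike train, then a synapse-like exponential filter that low-passes the result.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, T = 1e-3, 2.0
t = np.arange(0, T, dt)

# (1) Analog-to-digital conversion: a slowly varying "light level" sets an
# instantaneous firing rate, which is sampled into a train of all-or-none spikes.
intensity = 0.5 + 0.5 * np.sin(2 * np.pi * 1.0 * t)   # arbitrary units
rate = 100.0 * intensity                               # spikes per second
spikes = rng.random(t.size) < rate * dt                # Bernoulli approximation

# (2) Low-pass filtering: an exponentially decaying synaptic variable smooths the
# spike train, passing the slow envelope and attenuating the fast detail.
tau = 0.05                                             # synaptic time constant (s)
g = np.zeros(t.size)
for i in range(1, t.size):
    g[i] = g[i - 1] * np.exp(-dt / tau) + spikes[i]

print(f"{int(spikes.sum())} spikes emitted; filtered trace peaks at {g.max():.1f}")
```

Call these what you like, but "converter" and "filter" are not metaphors smuggled in from engineering; they are literal descriptions of what the components do.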

Insofar as some patterns of electrical activity are manifested in response to particular configurations of sensory input, we are allowed, or even obligated, to say that information is being processed and transformed, so that certain patterns of (sensory) inputs can then lead to patterns of (motor) outputs that facilitate the survival of the organism in an uncertain environment. It's kind of the point of having a central nervous system in the first place. Or is the author assuming that the brain does absolutely, literally nothing? Maybe he takes a page out of Aristotle's book and believes it's a giant radiator. This is the only way he can get away with his audacious and, frankly, ignorant statements. By "ignorant" I simply mean that his arguments could not have been formulated had he read even the most rudimentary "Neuro 101 for dummies" type textbook from the past 25 years.

And this isn't even getting into deeper philosophical questions of different kinds of information processing and the nature of representation, or anything like that.

Tuesday, 8 July 2014

The increasing folly of the Human Brain Project debate

So, this happened recently. In a nutshell, a 200-scientist-strong open letter has been revealed ahead of a review of the Human Brain Project (HBP), criticizing it as having gone "off-course". The BBC article linked to has a nice summary of the issues involved, and a pointed defence by the leader-visionary-"guru" of the HBP, Henry Markram. Here are some of my thoughts on the issue (a version of which was first posted in /r/neuroscience).

It is only natural for researchers with vested interests in different levels of analysis - in this case, more abstract computational models that ignore the molecular and subcellular levels of detail, even the cellular level entirely (with point process neuronal models, for example) - to be opposed to so much funding going into the HBP, which inherently is geared towards simulating even the smallest functionally relevant level of analysis (viz., the molecular). This open letter is a window into the general phenomenon of competing visions and paradigms, only amplified because the stakes are so much higher (1.2 Bn Euro higher, to be exact).

On the one hand, I agree that more independent review would be helpful in order to stop some of the more un-scientific moves that the HBP has been taking in terms of letting go of people who do not "toe the line", as outlined in the above BBC article. On the other hand, there would be a downside to independent review as well, in that ideological differences from the reviewers may unnecessarily stifle the project. This is a problem with the reviewing process in most journals, in fact, so in that sense, nothing new there.

From my point of view, I believe that the framing of this debate in terms of the amount of money being "only invested in one person's vision" is misleading and avoids the bigger picture. The fact remains that we do have too much neuroscientific data, and the research & funding structures are geared so as to encourage little bite-sized bits of research that demonstrate some effect of one molecule, or modulation of a synapse, or any similar isolated aspect of the nervous system - i.e., towards "quick returns". True, newer tools like optogenetics are allowing for larger-scale investigations into the nuances of function of entire circuits, but even then, the brain is complex enough that the story of any individual opto paper is inherently narrow and limited. We do need to integrate all of this data, and what better way than to throw it all into one big computational simulation that doubles up as a data repository?

The HBP aims to be a "service provider", as discussed in the BBC article linked to above. Even in computational neuroscience - where there is fierce debate as to the appropriate levels of analysis for studying, and therefore understanding, brain function - there is no debate about the fact that neurons do operate at a molecular level. This huge diversity of neurotransmitters, ion channels, cell types, even glial cells (*groan*, cries almost every neuroscientist who realizes that we can't continue to ignore them) has evolved for a reason, and each one has been shown to have some kind of functionally relevant role for a neuron, a circuit, and therefore behaviour. So whatever abstract models we use in our pet studies must necessarily bottom out at the lowest level of detail in order to be relevant to understanding the actual brain. Otherwise, we are no better than armchair philosophers trying to understand how the brain works. You need to examine the actual product of evolution, the actual tissue itself - the very nuts and bolts - and understand it at that level.

No, the HBP will never be complete, and no, it will probably be grossly incorrect in many, many ways - if only because important facts about the brain are not yet known and remain to be discovered. That shouldn't stop us from starting somewhere. As Markram says, sure, we can invest all this money into the usual ecosystem of research. But that will ultimately generate another few hundred isolated and entirely independent papers with more data, but no more integrated understanding of the brain.

The bottom line is that what is at stake is the question of how best to continue doing neuroscience. Henry Markram believes (as do many others - let's not forget that it's not just a "single quirky guy's vision", as critics would have you believe) that some kind of integrated approach that starts to put it all together is needed. It won't be perfect, but we have enough data as it is to warrant such an approach now - in fact, it was needed yesterday. Thomas Trappenberg of Dalhousie University presented an amusing yet powerful slide in a talk at the recent 2014 Canadian Association for Neuroscience conference, plotting the page count of the venerable Principles of Neural Science textbook against the publication date of each of its editions - showing a strong positive trend. He pointedly argued for the role of computational modelling research in pushing the page count down. Experimental neuroscience drives the "neuroscience page count" up by providing more and more data, and computational neuroscience does - or rather should - push it down by providing integrated theories of brain function. My argument here is that the HBP is ideally poised to do the latter. Certainly, it won't provide all the answers, and it's not meant to. For instance, the criticism that the HBP could replicate the entire brain and still not provide any answer about its function is correct, in a way. It is indeed silly to think that when the "switch is turned on", the simulation will exhibit (rat) cognition. We need input from the environment, not just to provide data but also to entrain the brain and calibrate its endogenously generated rhythms - just think of the unravelling of the mind that occurs when humans are subjected to sensory deprivation. (For a fuller treatment of this issue of the environment serving to entrain or calibrate the brain, see Buzsáki's excellent treatise, Rhythms of the Brain - of which I have an autographed hardcover copy!!)

What the HBP will provide, however, is a repository for integrating the swathes of data we already have, and a framework for testing any ideas about the brain. No, it will never be complete, but it is badly overdue, and the thought of continuing to live without an integrating framework that can be tested, prodded, and drawn upon - instead continuing each researcher's narrow pet projects in isolation from one another - is as much a folly as it would be to pretend to be studying and understanding genetics without having the entire genome sequenced.

In that sense, the HBP can only help in any and all endeavours to understand the brain, by providing a baseline model with as much cellular and molecular detail incorporated as possible, because any higher level of analysis will ultimately have to interface with it (or at least with the level of detail the HBP is aiming to capture) in order to show ultimate relevance to the brain. The brain, as a biological system, is inherently different in nature from the phenomena that many computational neuroscientists (coming as they do mostly from physics and engineering backgrounds) are comfortable dealing with - that is, physical systems that can be explained with a handful of equations. The brain, sadly, is not such a system and is not amenable to "spherical cow" levels of analysis. As a biological system, it rather follows the rule of being a horrible mess of interacting factors rather than the product of a few physical laws (that can be elegantly summarized in a few equations). That's not to say that no simplifying analysis can be done, or that no fruitful results will emerge from such studies. On the contrary, we can learn many useful facts about the brain by building and analyzing simplified models. It's just that, inherently, any such endeavour will miss the mark in important ways. The "answer", then, is to stop thinking in terms of a zero-sum game (which is the attitude that the signatories of this open letter seem to have adopted) and instead consider it as a joint project or venture. Indeed, the more abstract levels of analysis have been too much in the limelight for many years, without paying any real dividends. For example, the connectionist paradigm, started in the 1980s, hasn't given us any concrete and large-scale understanding of the brain, and has rather unfortunately (for our knowledge of the brain, though not for commercial ventures, obviously) and quietly devolved into machine learning tricks for learning Netflix user preferences, etc. (That Netflix Tech Blog link even refers to Deep Belief Nets as being "trendy"!)

In fact, the approach the HBP is embarking on is badly overdue, and vastly underrepresented. It's not a popular approach precisely because it accepts the messiness of the brain and doesn't shirk from it by abstracting it away. Sure, it's a double-edged sword, in that by opening the Pandora's box of the molecular level you risk missing out on what we do not yet know, but that is part and parcel of any scientific approach. Thus, kudos to the HBP and Henry Markram for managing to get this kind of project off the ground. And because it is such a large project, it necessarily requires a lot of funding. This is a Manhattan- or Human Genome-flavour of Project (with a capital "P").

I believe it will only help further our understanding of the brain in an integrated way that can evolve over time and with contributions from other levels of analysis. Those who oppose it, in my opinion, do so primarily on personal and ideological grounds -- i.e., on ultimately selfish and jealous grounds -- rather than on valid scientific rebuttals.

Sadly, I lack Markram's eloquence and diplomacy in addressing the critics, but sometimes you have to grab the bull by the horns and address the real issue rather than skirt around it for fear of stepping on toes (meaning other people's egos).

-- PhD candidate in computational neuroscience, whose own biases have been amply revealed, he hopes.

Friday, 9 December 2011

first post!!1

Here we go! This is space for my personal musings. There are a few goals for this blog:
  • Opinionate freely* on subjects to my liking.
  • Practice writing.
  • Organize my thoughts.
* Perhaps not entirely. Because this is publicly accessible, with my name attached, I must be cognizant of the fact that colleagues, friends, family members, and others that play integral roles in, and Have Grave Effects(tm) on, my life may read this at some point. Therefore, I must address the blog as if to them. Zen Master Ta Hui said (paraphrasing) "When alone, be as if with others. When with others, be as if alone. Then you won't go astray." There is more context to that, but the point as it applies here is that I am not just blurting out information to the internet in an anonymous fashion.

And I would prefer it that way. When writing things anonymously, there is a tendency, I find, to somewhat "regress" into prejudiced views and be uninhibited in airing them. Think how easy it is to get road rage. Or how online comments can descend into nasty name-calling and the putting down of others, even though one wouldn't dare do it in real life. Not just that, but we wouldn't even feel the need to, or think of doing so, when with a real person, because all the social parts of the brain would fire up and we would realize, "Oh yeah, this is an autonomous person who has their own desires, wishes, and beliefs, and is worthy of happiness and respect." (Ideally.) I'm not saying that I would descend into savage name-calling by posting anonymously, just that I'm trying to buck the trend of spreading negativity, however subtle, by hiding behind anonymity.

Moving onward. I always found it a little underwhelming that most blogs have their first post just be along the lines of, "first post!!1" (I know, it's the title of this post.) Therefore, to buck another trend, I will actually provide some content here. Well, I suppose I already have with the discussion on anonymity above. So how about discussing the name of the blog now?

Cortex Dump. What's with that? Well, "cortex" - bark in Latin (of the arboreal, not the canine kind) - is used in neuroscience to refer to brain structures, or layered groups of neurons, that typically exhibit a folded, bark-like appearance. The most well-known example of cortex is the one we see when we peer at an intact brain. This much-folded outer layer of the brain is the so-called neocortex or cerebral cortex, which is putatively responsible for, or plays a dominant role in, all of the higher-level cognitive functions ascribed to humans (and other animals, primates & non-primates alike), such as language, thought, and memory**.

** OK, so there is a lot more to it than that - there is of course the hippocampus, which undeniably plays a huge role in learning & memory, but cortex is just as important (and this coming from a budding hippocampologist!), especially, it seems, for the consolidation of long-term memories. For the record, cortex isn't the be-all and end-all for other cognitive and behavioural functions, either (as some would like to believe) - subcortical regions play a huge role in our mental lives. Take the "lowly" brainstem. It's not just responsible for autonomic functions such as breathing, which are in themselves vital, of course. But did you know that the integration of whole-body movements involves the reticular formation, a region of the brainstem? Not only that, but the brainstem is the source of the major modulatory neurotransmitters such as dopamine and serotonin - integral to the setting of moods, for example - and is vital for general arousal, sleep, etc.

Since the aforementioned "personal musings" are arguably composed nearly entirely of higher-order thinking and other processes that are cortex-dependent, I use the word "cortex" in the blog title.

Now how about "dump"? What is that supposed to mean? I certainly don't intend it to be of the fecal kind (though I do not claim that this blog is not "shit"), but more along the lines of the Unix command dump(1). Again, caveat emptor - it is not entirely an unfiltered dump, as already discussed. But it's close enough. To again use Unix terminology, this blog is most definitely not a device-level dump (à la dd(1)). It's not even a filesystem dump. It's more of a tar with a bunch of --exclude directives. (I realize I may have alienated a lot of readers - if there are any, that is - but I thought the analogy witty enough to include, even if for self amusement purposes only.)

In other words, this blog is an outpouring of thought straight from my cortex!