Saturday, 27 September 2025

Free Will

Saint John Chrysostom, "the Golden Mouth" (Sveti Jovan Zlatousti), said in one of his homilies, "There is no destiny; everything depends on our free will." Set this against the "illusion of free will" (cf. The Illusion of Conscious Will by Daniel Wegner, a phenomenal and thought-provoking book) and Benjamin Libet's experiment (precis: subjects made spontaneous hand movements while their brain activity was measured; Libet found measurable brain activity, termed the "readiness potential" or RP, hundreds of milliseconds prior to the subjects' "feeling" of having decided to move their hand).

It's not that the preceding brain activity and subsequent feeling of action/decision are separate; they are indeed part and parcel of the same mechanism. "Well then, because we are not aware of that preceding activity, we don't have free will." You fool, that activity is exactly encompassed inside "free will", as part of the mind-body-universe coupling (see On the Origin of Objects by Brian Cantwell Smith). Or part and parcel of the Unified Whole, the God/Buddha-Nature system we are all part of. Your rationale against it is itself situated within this free system. Don't look to the underlying mechanism for proof for or against free will, so long as you don't fully understand what's going on under the hood.*

* As a simple (though not perfect) example of how we might not understand what's going on under the hood, suppose an astronaut is sitting in a rocket, about to launch. The astronaut is our free will, the rocket is our hand/body. The astronaut pushes the button to turn on the engines, and the rocket surges upward! (Or, if you wish to nitpick, replace "astronaut" with "engineer in control room", the analogy is the same.) "But," an outsider interjects, "We could actually measure that several seconds prior to the astronaut pushing the button, there was some water being ejected on the pad to absorb the blast! Since it happened before the button push, we therefore can't say the astronaut caused the rocket launch, thus there is no astronaut!!" *Picard facepalm* That person just doesn't know that the astronaut-rocket-launchpad-control center is part of one big system and that, indeed, the astronaut pushing the button is a crucial component of this entire process. 

Rather, look to the behavioural outcomes. If you stand at an intersection and don't know where to go, don't you ultimately pick a side, or do you stand there until you starve to death? You're not Buridan's ass (the animal), are you? And you don't have some dice-rolling homunculus ("little man") in your head, do you? If you did, who is telling the homunculus to roll the dice? That is also an intentional act. You can't dispel the need for a chooser. And indeed, taking the example St. John provided: when you get sick, don't you go seek medicine? Your outcome is not preordained. You can seek help, or not. And some people don't, like my poor late uncle, or do so only when it's too late.

But to go back to Libet. Even though we can measure some preceding brain activity, we still nevertheless choose, and if one has any qualms about it, the onus is on them to reconcile it within themselves, not to deny their very own experience and, worse, impose that view on others! That is just extreme scientific materialism-as-religion, no better than any other extreme religious zealotry.

Then, who is choosing? How far up the astronaut-rocket-launchpad-control center system do we have to go before we find the "chooser"? Ah, that is the question, isn't it? It can only be resolved by closely paying attention to one's thoughts, actions, and experiences in every moment, whether walking or standing, sitting or lying down. No amount of thinking about it, no amount of conceptualizing, will ever help, and will indeed simply muddy the waters more and more.

Sunday, 3 December 2017

Brains and Car Engines, revisited

[A paper that I had already written about has been published in PLoS Computational Biology about half a year after its preprint went up on bioRxiv, and it has been making the rounds again on social media and in scientist circles. The paper I'm referring to is "Could a neuroscientist understand a microprocessor?" I had written a longer treatment of it in an email that was more elaborate than my previous blog post, so I am converting it to blog format and posting it here as a "continuation" of sorts of my previous post.]

The paper is quite the shocker, to put it bluntly. I think most neuroscientists quaked in their boots when they read this, because it directly and effectively criticizes most of the approaches we think are helping us "understand" the brain. This is a paper with primarily philosophical and methodological implications. It cleverly (and deviously!) uses "standard" neuroscience tools or, more precisely, signal processing and statistical tools (spectral analysis, Granger causality, dimensionality reduction, etc.) on a microprocessor to generate data that is strikingly similar to what we see when we apply these techniques to neural tissue (power laws, oscillations, transistor tuning curves, etc.). And yet the brain is obviously not a microprocessor, not even close. So what kind of understanding can these tools alone give us if they basically can't distinguish between two strikingly different structures of high complexity? The authors say "not much", and it's really, really hard to avoid agreeing with them after reading the paper. :)
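As a toy illustration of how little a spectral signature alone constrains the underlying mechanism (my own sketch, not anything from the paper): a completely structureless random walk already produces a textbook power-law spectrum when you run a standard periodogram over it.

```python
import numpy as np

# A structureless random walk: just integrated white noise.
rng = np.random.default_rng(0)
signal = np.cumsum(rng.standard_normal(2**14))

# "Standard tool": estimate the power spectrum via the periodogram.
spectrum = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(signal.size, d=1.0)

# Fit a line in log-log space over the mid-band (skip DC and the noisy tail).
band = (freqs > 1e-3) & (freqs < 1e-1)
slope, _ = np.polyfit(np.log10(freqs[band]), np.log10(spectrum[band]), 1)
print(f"log-log spectral slope: {slope:.2f}")  # close to -2: a textbook "power law"
```

The spectrum falls off as roughly 1/f², a "power law" of the kind routinely reported in neural data, yet the generating process has no structure at all. The measurement is real; the mechanistic inference is the problem.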

The blow is softened for those researchers who, from the very beginning, emphasize that it's not good enough to build a model just for the sake of building a model; rather, we should ask why we're building it, i.e., what specific biological question it answers. Answering a specific question means that we end up thinking about functional relevance a lot. So it's not enough to just measure power spectra and so on; we have to link them to biological function. I think the authors of the paper tacitly acknowledge this kind of more "sophisticated" approach by contrasting it with, as they put it, "naive" uses of standard neuroscience tools. This relates to the Aristotelian "four causes" (especially the final cause), but I think the authors would probably argue that even finding a formal cause is not good enough, and moreover that a final cause has important, overlooked dependencies that make it particularly difficult to obtain.

The most important of these dependencies, in my mind, is knowing the inputs and outputs to the system, because the evolutionary inputs and outputs are really what drive the development of an organism and its nervous system in the first place. And this is where we are sorely lacking in neuroscience. The section in the paper titled "What does it mean to understand a system" goes into this a little, as does the discussion towards the end, where they say, "In the case of the processor, we really understand how it works. ... for each of these modules we know how its outputs depend on its inputs." Now, I think the authors could have made a much stronger case for neuroscientists to focus on getting more information on inputs (e.g., the pattern of synaptic inputs in a specific in vivo behavioural context) and outputs (the specific efferent fiber activity, on a per-axon basis, from a system in a specific in vivo behavioural context). Because once you have that, the way the inputs are transformed into outputs (at the level of a given brain system) might almost "fall out", since you'll be able to observe the transformation directly, after which point you can generate a theory to account for the general case.

This is incidentally why comparative neuroscience is so appealing, because there, more than anywhere else, we have some hope of actually mapping out inputs and outputs. If you have "only" a few thousand cells in an invertebrate ganglion, you can more or less start characterizing the inputs and outputs. I'll never forget how a former colleague once said that when he felt depressed about the state of neuroscience he would go take a look at the invertebrate posters at SfN to be reinvigorated, and I've since then always taken this advice myself.

But even here the task becomes monumentally complex. Here's a particular favourite comparative neuro paper of mine, titled "In Search of the Engram in the Honeybee Brain" (published as a book chapter).

One of the key arguments of the paper is that the memory engram is not just something found in the "memory region" of the brain, but rather a product of the entire nervous system of the organism, from sensory to motor representations. You can demonstrate this quite directly in the bee since these pathways are much more completely characterized than in mammals. But even with mammals we are starting to learn (or be reminded of), for example, the mass of extrinsic connections between hippocampus and all sorts of brain regions that we wouldn't typically think of when we consider "cognitive" spatial and semantic maps - these include the amygdala, nucleus accumbens, even olfactory bulb for god's sake (see almost any of James McClelland's works for perspectives on this, as well as more recently Strange et al 2014, "Functional organization of the hippocampal longitudinal axis", Nature).

Yet if the results in the bee hold true as a general principle of nervous system function, it means that we will have to understand how each and every one of these regions generates information flow into, and out of, the hippocampus before we can truly and fully "understand" how the hippocampus works. Slowly we're moving in that direction, with the hippocampal field becoming more aware of longitudinal differences: the dorsal hippocampus is more involved in spatial memory, whereas the ventral is more implicated in emotional and fear processing (after many years of neglecting some of the early studies that hinted at this way back). This even raises the question of whether the hippocampus is one unified structure at all, or rather a system of somewhat loosely coupled modules pivoting around dorsal and ventral poles, as the gene expression studies might suggest (Bloss et al. 2016 from Spruston's lab), that is nevertheless bound together, as the theta travelling wave studies suggest (e.g., Patel et al. 2012 from Buzsaki's lab). In which case, even talking about studying "the hippocampus" is no longer correct, since someone could then ask, "Which hippocampus?" or "Which hippocampal module?"

To get back on point: perhaps the most important missing piece of information, then, is the pattern of inputs and outputs to any brain region, crucially as measured in an actual behavioural state. The authors of the Jonas and Kording "microprocessor paper" make two brief mentions of efforts where this is taking place, and interestingly enough both are in neuroengineering. One of them is the Berger/Marmarelis hippocampal prosthetic chip (which I'm very familiar with, having followed their work for several years). To me this is an odd work to cite in their section "What does it mean to understand a system", because although they do look at inputs and outputs (which is why they cited it), the work is importantly not set up in any way to actually understand the input/output transformations. In a nutshell, they implant a chip in CA1 with electrodes both near the DG border and near CA3 (this has been done in rodent and monkey so far). They then place the animal in a specific learning paradigm, such as a delayed non-match-to-sample task. During the delay period (i.e., when the rodent needs to remember which lever was signalled in the sampling phase so that it can push the other one when queried by a light turning on after the delay), they record the pattern of activity seen on both the input and output electrodes. They then take these multiunit data and fit the coefficients of the kernels of a set of Volterra series (think of them as Taylor series with built-in historicity, or "memory"). Then what they can do is repeat the experiment but inject NMDA blockers locally into CA1. At that point, of course, the rodent performs terribly. But if they then turn on the chip - i.e., take the DG inputs (at a very coarse level, of course, since they only have a few sampling electrodes), feed them through the fitted Volterra series, and output the corresponding spike trains on the output electrodes - the rodent recovers and performs the task. Amazing!
What's scary is that with no NMDA blocker and the chip turned on the rodent performs even better than in control! (So if you've seen any sci-fi movies with augmented humans with superhuman memory etc... it might actually be coming one day!)

Now, the catch with this approach is that it treats the CA1 region as a black box! There is no real understanding of the input/output transformations, because they have simply captured the correlation of input to output activity in one behavioural paradigm, i.e., the activity produced by the functioning of the actual CA1. They have not captured the general mechanism by which CA1 produces these transformations. The proof of this is that the fitted kernel coefficients only work for that specific behavioural task! To this point, one of the PIs, Ted Berger, said (personal communication, but perhaps also mentioned in some interviews) that as a general tool for human memory enhancement (e.g., during old age and dementia) they would want to create a set of coefficients for particular useful contexts. So there would be a "kitchen program", a "bathroom program", and so on (kind of like in The Matrix, where they would give Neo programs for specific skills like piloting a helicopter and different martial arts). This is fine for clinical use, but not enough for understanding the brain, of course, since we have yet to discover how CA1 (and all the other regions involved) perform the input/output mappings in a general way, under any context.
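To make the black-box point concrete, here is a toy sketch (my own, with made-up signals and kernels - nothing here is the actual Berger/Marmarelis MIMO model): a second-order discrete Volterra expansion, fitted by least squares, reproduces a hidden nonlinear system's input/output record essentially perfectly while telling us nothing about what is inside.

```python
import numpy as np

rng = np.random.default_rng(1)
M = 5                                   # memory depth (number of input taps)
x = rng.standard_normal(2000)           # stand-in for coarse "DG input" activity

# A hidden nonlinear system with memory; we pretend not to know its internals.
def hidden_system(x):
    y = np.zeros_like(x)
    for t in range(M, len(x)):
        w = x[t - M:t]
        y[t] = 0.8 * w[-1] - 0.3 * w[-3] + 0.5 * w[-1] * w[-2]
    return y

y = hidden_system(x)

# Second-order Volterra model: bias + linear taps + pairwise products of taps.
rows = []
for t in range(M, len(x)):
    w = x[t - M:t]
    quad = np.outer(w, w)[np.triu_indices(M)]
    rows.append(np.concatenate(([1.0], w, quad)))
A = np.asarray(rows)

# Least-squares fit of the kernel coefficients to the observed output.
coeffs, *_ = np.linalg.lstsq(A, y[M:], rcond=None)
pred = A @ coeffs
r2 = 1 - np.sum((y[M:] - pred) ** 2) / np.sum((y[M:] - y[M:].mean()) ** 2)
print(f"R^2 = {r2:.3f}")
```

The R² is essentially 1, yet nothing about the hidden mechanism has been learned: the coefficients are just a compact summary of the observed transformation in this one regime.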

In conclusion, my argument is that the Jonas and Kording "microprocessor" paper ends with too broad a set of lessons. Everything they conclude with is true taken individually: we need better models in neuroscience, better data analysis methods, etc. But, all in all, the common point these share (I assert) is that knowledge of input/output patterns is crucial. I actually think we'll be fine, since new technologies are making it easier and easier to record single-unit activity from very large numbers of neurons at once. The CaMPARI work, for instance, is one of these new techniques that seems incredibly promising; see, e.g., Zolnik et al. 2016.

To use the microprocessor analogy again: we already have the tools to analyze large-scale activity and the response properties of individual units/neurons (the spectral analyses, dimensionality reduction, and all that). But what network-wide neuronal interrogation in behavioural contexts gives us is the exact instructions being fed into the microprocessor, as well as the outputs, and this is what will tell us the computation itself. Then, how the computation arises will fall out almost inevitably by observing how the input gets transformed into the output. Then we might be able to develop theories about the overall function of the component in question; e.g., in the microprocessor case we could produce a theory of the ALU (arithmetic logic unit) and describe its function in detail, which is to perform arithmetic operations at the bitwise level, i.e., on the individual bits of the input numbers encoded in base 2. We would then even be able to immediately make inferences about why certain design decisions were made (by evolution, in the case of the brain). For instance, the ALU and microprocessors in general use base 2 (rather than base 10) because it's an extremely convenient way of representing numbers that allows a wide range of arithmetic operations (addition, subtraction, multiplication) to be performed using minimal operations that entail simply comparing or shifting bits in one direction or another, so it's easy and efficient to implement in hardware. This is also the kind of inference we'll be able to make once we get a much more complete map of inputs/outputs to any brain region. The ultimate test for whether we've understood the brain might then be, as Jonas and Kording state for the microprocessor case, when "many students of electrical engineering would know multiple ways of implementing the same function."
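To illustrate the "comparing or shifting bits" point (a toy sketch of my own, not anything from the paper): addition in base 2 reduces entirely to AND, XOR, and a left shift, exactly the operations that are cheap to build in hardware.

```python
def add_binary(a: int, b: int) -> int:
    """Add two non-negative integers using only AND, XOR, and a left shift,
    mirroring the logic of a hardware ripple-carry adder."""
    while b:
        carry = (a & b) << 1  # bit positions where both inputs are 1 carry left
        a = a ^ b             # XOR sums each bit position, ignoring carries
        b = carry             # repeat until no carries remain
    return a

print(add_binary(0b1011, 0b0110))  # 11 + 6 = 17
```

No lookup tables, no per-digit cases as in base 10: the entire operation is a handful of gates repeated per bit, which is exactly the kind of design rationale we would like to be able to state for a brain region.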

My own take-home from this paper was to reassess how I view my own (very limited) work in the overall sphere of neuroscience, and to reaffirm a commitment to linking neuronal activity with function rather than being satisfied with examining how some measure of neuronal output changes in response to some manipulation, without also having an idea of what it means for the overall system the model is embedded in. To repeat the end of my previous post on this subject which contains a useful analogy (I quite like using analogies to help with thinking things through, what the cognitive scientist Daniel Dennett would call "intuition pumps"):

"This results in a state of affairs in neuroscience (at least systems level) where we are content with scratching the surface, e.g., measure changes in frequencies under different conditions. It's like measuring the spectral signature that the sound of a car's engine makes, and thinking that by comparing the peak power under idle vs driving conditions we are any closer to understanding how the engine works. However, we are actually in a very primitive state of "understanding". We don't know anything about pistons and drivetrains, let alone the principles of internal combustion. Yet it's only by understanding these that we would truly know how a car engine works. The same applies to the brain."

Friday, 27 January 2017

Can computational cognitivism please just go to the scientific graveyard already?

*groan* I thought we were past this. Can't people tell when they've lost the dominant paradigm in a scientific field? Sadly not, and the old guard can be very resilient. Case in point: this paper titled "The Demise of the Synapse As the Locus of Memory: A Looming Paradigm Shift?" In essence, this is a political paper, not so much a scientific one. It is also a bit of a strange paper if you think about it, because you could summarize its argument as "memory is not in the synapse, it's somewhere in the cell" - which is very strange indeed, because a synapse is in the cell. So what is the author actually trying to say?

Basically, this goes back to the "cognitivist versus connectionist" debates of the 1980s and 1990s (in this post I use "cognitivism" rather loosely to refer to the computationalist and representationalist tendencies in cognitive science). Back then the dominant metaphor of the mind was the digital computer, i.e., a symbol-processing machine that makes use of representations of the things it needs to process (representations of sensory images, thoughts, motor programs, etc.). A lot of today's psychological language comes from this: knowledge schemas, working memory (as a short-term memory buffer), the idea of buffers, even the idea of memory itself, etc. This is very much the thinking of a programmer working on a computer: the mind as "software" running on the brain "hardware". A key tenet of this theory was that the brain itself is "not relevant" for understanding the mind; it simply implements the programs that the mind actually uses. So they surmised that it was enough to study the mind itself - at most to do psychological experiments - and to completely ignore the brain, since that was not the relevant level of analysis for "understanding". Of course, nowadays this idea seems somewhat bizarre, since we understand that the very nature of the neural substrate is actually very important for coding and thus for the mind.

Then in the 1980s the artificial neural network paradigm arrived on the scene, spurred by the success of the backpropagation algorithm. Back then this field was known as "connectionism", the idea that the mind could be represented more like how the brain seems to work - that is, using many parallel, distributed nonlinear processing units ("neurons"). All of a sudden now, these researchers were saying the complete opposite of the cognitivists - that actually, the mind is not separate from the physical hardware, or the brain. This led to a hot debate. The paper under discussion (Trettenbrein 2016) cites some articles that discuss this debate (Fodor & Pylyshyn 1988 were fierce advocates of the cognitivist perspective).
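For a concrete flavour of what "many parallel, distributed nonlinear processing units" means in practice (an entirely hypothetical toy of my own, not anything from these papers): a tiny two-layer network trained with backpropagation learns XOR, a mapping no single linear unit can compute, purely by adjusting connection weights - no explicit rules anywhere.

```python
import numpy as np

# XOR: the classic mapping that no single linear unit can compute.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(42)
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)   # input -> 8 hidden units
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)   # hidden -> output unit
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    h = sigmoid(X @ W1 + b1)                      # hidden unit activations
    out = sigmoid(h @ W2 + b2)                    # network output
    # Backpropagate the squared error through both layers of weights.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(0)

print(np.round(out.ravel(), 2))  # trained outputs approach [0, 1, 1, 0]
```

The "knowledge" here lives entirely in the connection strengths, which is exactly the point the cognitivists found so threatening: there is no symbolic rule to be found anywhere in the trained network.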

Another famous paper in this debate was Pinker & Prince 1988, who argued that the neural network systems of the time could not encode language, and concluded that language cannot be implemented as a neural network - it has to be a symbol-processing program. This is again a strange perspective today, especially in light of all the neuroscience work done to date, but it is informative to glance at the paper to see the tone of the debate. Even this part of the abstract is illuminating for how the issue was framed:

"Does knowledge of language consist of mentally-represented rules? Rumelhart and McClelland have described a connectionist (parallel distributed processing) model of the acquisition of the past tense in English which successfully maps many stems onto their past tense forms, both regular (walk/walked) and irregular (go/went), and which mimics some of the errors and sequences of development of children. Yet the model contains no explicit rules, only a set of neuron-style units which stand for trigrams of phonetic features of the stem, a set of units which stand for trigrams of phonetic features of the past form, and an array of connections between the two sets of units whose strengths are modified during learning. Rumelhart and McClelland conclude that linguistic rules may be merely convenient approximate fictions and that the real causal processes in language use and acquisition must be characterized as the transfer of activation levels among units and the modification of the weights of their connections."

As you can see, from today's perspective this debate seems very strange - how could you reasonably argue for ignoring the brain these days? Of course language is implemented as "transfer of activation levels among units" - in other words, action potentials across cells in a network! But back then it was hotly contested, largely because, in the face of the stunning successes of artificial neural networks, the "old guard" of cognitivists did not want to lose their status as the people who could be said to understand the mind. This is how science works, as shown by Thomas Kuhn: when there is a revolution and paradigm shift in a field, the old guard resists the most, and they unfortunately can have a lot of power, since they are heads of departments and so on. Anyway, this is a bit off-topic, but it's important to keep in mind that scientific debates always have political/human elements in them (this is one of the biggest findings of the social studies of scientific practice over the last half-century).

So back to the Trettenbrein paper. It makes more sense now when you look at it, because Trettenbrein is arguing again from the cognitivist perspective. In essence, I think he (and probably others) saw the papers cited on how memory persists even after retrograde amnesia and reversal of LTP and so on (from the Tonegawa lab and the other paper in Aplysia), and used this to attack the very idea that memory is in the synapses, which is basically to attack the "connectionist" (or, nowadays we could just call this the neuroscientific, as opposed to cognitive) idea that the right level of understanding the brain is to analyze neurons and networks.

You can see this at every step, since each step makes little sense in the paper. For example, the mind being Turing-complete is a very antiquated way of thinking and not at all established... this goes back to the old days of seeing the mind as a computer program, when people wanted to "prove" that the mind could be Turing-complete, i.e., a universal computing machine. But this term has a very specific set of criteria that is useful for the machines we create, not so much for the brain. For example, a trivial way of countering this is to note that Turing-completeness requires unbounded memory (in the strict definition), which we obviously cannot have. So as much as Trettenbrein says that Gallistel and King 2009 "convincingly" argue for the mind being a Turing machine, there are many others who argue that the mind is not Turing-complete. This is just a sneaky way of undercutting neuroscience and saying that the mind is best studied as a computer program, not as a neural network.

You can also see it in the distinction between learning and memory. This is again a strange thing to do, but it makes sense if you realize that the connectionist paradigm considers learning and memory to be part and parcel of the same process. So by attacking this, Trettenbrein is trying to undermine connectionism. And yet his argument is deeply flawed, because we do now know that learning and memory are separable, and that connectionist ideas can still account for this. James McClelland long ago implemented a neural network that demonstrated "two-stage" learning, where one module (the hippocampus) encoded memory patterns that were then stored long-term in cortical regions. So learning (hippocampal representations) and memory (long-term cortical storage) were already successfully "separated" using a connectionist paradigm more than 10 years ago. And of course today we have a lot of confirmatory neurobiological evidence for this distinction between learning and memory. So Trettenbrein's criticism, as a way of boosting cognitivism, is misguided and flawed.

But the most important shortcoming of this approach of attacking the idea that memory is in the synapses is that it fails to offer an alternative that would actually discredit connectionism. I'll explain what I mean in detail. First, he says (p. 5),

"Lastly, all of this is not to say that synaptic plasticity and networks are of no importance for learning and memory. Fodor and Pylyshyn (1988) already reviewed the implications of connectionist models, concluding that connectionism might best be understood as a neutral “theory of implementation” of the actual cognitive architecture, provided that one gives up anti-representational tendencies inherent to the approach. As a consequence, the question no longer is whether symbolic representations are “real,” but how (i.e., on what level) they are actually implemented in the brain. The challenge for critics of the synaptic plasticity hypothesis will therefore be to come up with concrete suggestions for how memory might be implemented on the sub-cellular level and how cells then relate to the networks in which they are embedded."

This paragraph is inherently a bad argument. Trettenbrein starts by saying that synaptic plasticity might still be important for learning and memory, but that it's an unimportant implementational detail, and that the real level for understanding the brain is in terms of symbolic representations. Translation: "we cognitivists are right, the connectionists are wrong." And what is the alternative? Well, as he says, the challenge is to look at memory at the sub-cellular level. Which is trivially absurd, since of course the synapses are parts of cells, so they are already at the sub-cellular level! What he is really trying to say is, "How can we cognitivists find representational implementations in the brain?" But the failure is that even if we set synapses aside and turn to something else, whatever that "something else" is, it will inevitably still be within an individual cell - and thus the brain's processing is still, and will always be, a matter of distributed parallel processing units, which is the core of the connectionist approach.

In other words, there is no way the cognitivists can win here. The best they can do is confuse and muddy the debate by writing papers like this that are short on hard scientific reasoning, bring up a bad argument, and then end it off with an audacious and outrageous title like "the Demise of the Synapse...".

It's important to emphasize that the issue here is not that he is arguing for single-neuron computation versus network computation, nor for oscillations being important or not for coding. He is coming from a perspective that is inherently hostile to all such considerations, that simply wants to sweep all of that under the rug as "unimportant implementational details", i.e., to say that neuroscience is wasting its time and that we should go back to the "good old days" of cognitivist thinking. This is personally very objectionable to me. I will never forget that, back when I was studying cognitive science and had started learning about neurophysiology, I would bring up ideas from neuroscience as evidence in debates about some aspect of the mind, for example the nature of memory processes. One such time, a senior person responded flatly that "we don't know anything about how the brain works", as a way to shut down any attempt to use neuroscience. In other words, I know from firsthand experience how cognitivist-type people can be inherently hostile to the very idea that how the brain works matters in any way to understanding the mind, because then they lose their throne as the ones who "truly understand" the mind. The egos here are big enough that the very suggestion that someone doing electrophysiology or computational modelling of the brain can also be said to be helping to understand the mind is deeply offensive to them, since they think the best way is their way, i.e., mainly the philosophy-of-mind point of view.

It's unfortunate that this paper was published in Frontiers in Synaptic Neuroscience but, on the other hand, it's heartening to know that the approach of neuroscience in understanding the mind by teasing apart mechanisms in the brain will inexorably march on, and that these sad cognitivist voices will concomitantly continue to grow even more absurd.

Wednesday, 18 January 2017

Don't politicize

"Politicizing an issue" - when did this become a bad thing? Sometimes, indeed quite often, certain scientific, health, or economic statements are political, and we should expect them to be political.

When is a statement that implicitly advocates a particular socio-political view unjustified in doing so? On what basis? Scientific? But we are already lumping scientific statements in as political, so science is disqualified from being the grand arbiter. Religious? Ah, but then which religion? Or "the voters' will"? Well, today we know that the latter especially is worth little, and is easily manipulated in as little as a 140-character message.

What, then? Simply this: a politician who wields the "you are politicizing the issue" stick is, in fact, browbeating the opposition while providing no independent justification for their own actions or viewpoints. They use the "don't politicize" attack as a way of negating the opponent's political standpoint in favour of their own, i.e., they are precisely using it to politicize.

Why would you even do this? Well, if you lack any scientific, economic, sociological, or other rational basis for your viewpoint, you are perhaps best served by wielding the "don't politicize" stick. Which also explains why it is wielded especially frequently by right-wing types, whose views are increasingly on the wrong side of history (cf. slavery), on the wrong side of science (cf. climate change denial), and on the wrong side of common human decency (cf. views on abortion). Indeed, for such a one, there is almost no other tool than the "don't politicize" stick.

It's a particularly juicy form of doublethink.

It's blunt, it's unsophisticated, and it works. Why does it work? Because we don't stand up to it. We just report it in the news and don't say anything back. Politicize freely, yes, make every statement political, since otherwise, we capitulate to those who simply want to impose naked power over everyone else.

Scientists should politicize. Everyone should politicize. Not just the ones in power, the ones who are certainly not without their hidden agendas and who are in fact the least likely to care for the general welfare.

Politicizing is the basis of a democracy. When did it become a dirty word? Well, now we know why democracies have gone down the toilet. Their very fabric of existence has been sullied in a clever way: by forbidding their engine of expression.

[This was written in January 2017 but I seemingly didn't click Publish until September 2025]

Sunday, 29 May 2016

Brains and car engines

This paper makes a solid point. The argument about whether the brain is like a microprocessor or not (obviously not) is a distraction from the main point, which is that without understanding the flow of information we are simply sorting through patterns in the data coming out of neuroscience without arriving at any real understanding.

For instance, it's quite surprising how much effort is spent on measuring and tracking different oscillations in the brain. These correlate with the actual information flows, but are not the same thing. We instead need to tease apart the mechanisms by which the constituent neurons communicate with each other, in various behavioural and functional contexts. From there, we may then find how the oscillations arise, e.g., as a byproduct of particular forms of processing. Instead, we put the cart before the horse and focus excessively on the oscillations (or any large-scale activity) first, before examining cell-specific electrophysiology and neurochemical interactions. Indeed, many neuroscientists are loath to go that "low level"!

This results in a state of affairs in neuroscience (at least at the systems level) where we are content with scratching the surface, e.g., measuring changes in frequencies under different conditions. It's like measuring the spectral signature of the sound a car's engine makes, and thinking that by comparing the peak power under idle vs. driving conditions we are somehow closer to understanding how the engine works. In fact, we are in a very primitive state of "understanding". We don't know anything about pistons and drivetrains, let alone the principles of internal combustion. Yet it's only by understanding these that we would truly know how a car engine works. The same applies to the brain.
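To make the engine analogy concrete, here is a toy sketch in pure Python (the engine "hums", frequencies, and sampling rate are all invented for illustration). It finds the dominant spectral peak of two simulated signals and shows that the peak shifts between "idle" and "driving", a real, measurable difference that nevertheless tells us nothing about pistons or combustion:

```python
import math

def power_at(signal, freq, sample_rate):
    """Power of the signal at one frequency (a single DFT bin)."""
    re = sum(s * math.cos(2 * math.pi * freq * i / sample_rate)
             for i, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq * i / sample_rate)
             for i, s in enumerate(signal))
    return (re * re + im * im) / len(signal)

def peak_frequency(signal, sample_rate, candidates):
    """The candidate frequency with the most spectral power."""
    return max(candidates, key=lambda f: power_at(signal, f, sample_rate))

sample_rate = 1000  # samples per second; 1 second of "sound"
t = [i / sample_rate for i in range(sample_rate)]

# Hypothetical engine hum: 30 Hz at idle, 90 Hz while driving.
idle = [math.sin(2 * math.pi * 30 * x) for x in t]
driving = [math.sin(2 * math.pi * 90 * x) for x in t]

candidates = range(10, 200, 10)  # frequencies (Hz) to scan
print(peak_frequency(idle, sample_rate, candidates))     # 30
print(peak_frequency(driving, sample_rate, candidates))  # 90
```

The peaks differ, reproducibly, between the two "conditions", and yet nothing in this analysis constrains what mechanism actually generates the signals.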

Sunday, 22 May 2016

Murakami'd

I've been Murakami'd. Which is to say, I finished reading The Wind-Up Bird Chronicle written by Haruki Murakami. At least, anyone looking at me from the outside would conclude that that's what it means since, according to most external signifiers, I was sitting quietly for long periods of time reading this one particular book. This process repeated itself over the course of many days, with nary a sign that universes were colliding and separating, except perhaps for the occasional burst of laughter, furrowing of the brow, or quick exhalation of the breath.

There really is not much that I can say about the book itself, due to its very nature. Some of that is because it is fundamentally a surrealist book and, from very brief ventures into literary criticism, I see that there are many different ways a book can be said to be "surrealist", so I won't get into this at all, not least because I am not a literary critic. However, this is one of those rare books that shook me to my very core, and produced an experience that is difficult to otherwise come by. Some books have a tendency to do that to me, and each one in a different way. Murakami's work is different from anything I've ever read before, and elicited such a strange constellation of emotions and realizations, that I had to label it as its own thing, using its own verb - to be Murakami'd. And so, I felt compelled to sit down and mark down my experience in words immediately after the last page was turned - at least, as best as I could - as a record for myself, to be returned to in the future and accessed once more, talisman-like, sort of like how the primary protagonist of the novel would (literally) descend into his well and clutch his baseball bat tightly when venturing on his dream-reality journeys.

One of the things I've realized is that reality is not real, nor is unreality un-real. Anyone who studies the history or philosophy of science knows this (at least the former), but to know it like this, as a ton of bricks hitting you, is something else. These are not meant to be cute transpositions of each other but are both equally and fully valid truths, juxtaposed together merely as a form of convenience of expression. The way Cinnamon retold and reshaped his mother's tales in the novel, holding to "the assumption that fact may not be truth, and truth may not be factual", may seem to be saying the same thing, but the emphasis is slightly different. What I'm referring to has more to do with the real/surreal distinction, or more accurately the real/constructed distinction. This is touched upon directly in an excellent interview with Murakami, where he says, "I don’t want to persuade the reader that it’s a real thing; I want to show it as it is. In a sense, I’m telling those readers that it’s just a story—it’s fake. But when you experience the fake as real, it can be real. It’s not easy to explain." This does not mean that you merely become "engrossed" in the story. We are talking about realities. What is real? How do you define 'real'? (I hear these sentences in Laurence Fishburne's voice.) It's what arises when our minds or mental faculties meet with the (putatively) "objective" world through the mediums of our very idiosyncratically tuned (by evolution) faculties of sensation. (I cheated there, by parroting back from Buddhist metaphysics, but I think it's a very good summary of the human reality-constructing process.)

Therefore, what any one of us experiences as "real" is neither objective nor necessarily shared by any other person, and I utterly blank on wondering what the Universe is like, as witnessed by an entirely different species, with differently tuned faculties of sensation and mind. Now, it's easy to ponder this when we talk about the "subjective stuff" - feelings, thoughts, emotions. But when we talk about the "external, objective stuff" - cars, wind, the Internet - we think there is only one possibility, and we all have the same access to it (except perhaps, the narrative goes, at the quantum level, but then that doesn't really affect us at the macroscopic scale; it all "cancels out", right, and anyway, scientists will eventually figure out what's really going on there too). Part of what it means to be Murakami'd, for me, is to have this certainty deeply shaken, to see that even the "external stuff" is highly contingent, elusive, ephemeral. When you get stabbed by a knife in a dream reality, and have a subsequent wound in the non-dream reality, is that strange? Why should it be? Is it just because it hasn't yet happened to us? What's to say that it couldn't? And which reality would then be the real one, which the dream reality? Or are they both dreams? Hume absolutely hit the nail on the head with his problem of induction - we really can never be sure that the sun will rise tomorrow, at all. The Popperian response (or Bayesian, for that matter) is merely instrumental or "practical" but does not address the deeply metaphysical conundrum. Sure, we can say that such-and-such "laws" of physics preclude it*, and that the sun will only fail to rise when, in 4 or 5 billion years, it swells to such a size that it engulfs the Earth - sure, then you can say that the sun "does not rise", but this is very problematic.

* remember that there are no such laws "hanging" out there in space. These "laws" are merely tentative descriptions that humans have imposed on the worlds-in-their-heads (i.e., their conceptual models). The motions of the planets are absolutely not "governed" by the laws of gravity. They are governed by something else entirely, that we do not yet understand and probably have no hope of truly understanding. We merely label it "gravity" and have a neat set of equations to describe it, but made a terrible mistake by mixing up the semantics and making it seem that a description is an actual statement of causal fact. Frankly, it is a miracle that the New Horizons probe made it to Pluto at all. What a fluke, that the planetary mechanics that were concocted in a puny ape-mind actually worked at such a grand scale! (The universe may yet have its last laugh, however, especially if the MOND theories are correct, and gravity ends up working quite differently at the truly mega-scale.)
What is to say that there is not some other process, as yet unobserved, that could also lead to the cessation of the sun rising? Perhaps some "cascading quantum effect" where the sun dissipates or dismantles itself without causing damage to our planet? Suppose you could formulate a scientifically palatable theory that could account for such a phenomenon (using proper language, of course: "cascading quantum effect" is a step in the right direction, I think). Can we calculate the chances of that happening? How could we? But then, when and if it does happen, all we can do is say "Oh, well, so I guess that happened." Soo desu ka. We really have no say in the matter, one way or another, despite what our physics textbooks and Nobel laureates say. In other words, we are quite full of hubris when we claim privileged knowledge of how reality works - by our very definitions of the scientific process, even. The very fact that theories can change is testament to our temporary understanding. Before we go further, no, evolution is "not just a theory" - the vast body of evidence for it can't just disappear, but it can be expanded, changed, and, ultimately, take on an entirely unrecognizable form. (Lamarck says hi.) The "theory of evolution" can be relegated to the graveyard of the pessimistic induction. And so, we really have a very shaky grasp on what "reality" actually is, how it is constructed, and how it self-perpetuates.

(As you can imagine, to be Murakami'd is an excellent antidote to the pernicious religious sentiments of scientific materialism, something to which I am prone. But that is not to say that I will cease to be a practising scientist, nor that I will suddenly side with the anti-vaxxers. Those people are still bat-shit crazy, Murakami or no.)

So then, when the scientific materialist narrative is set aside and recognized as the flashy new smartphone model in a series of world-models, what tools can we use to understand reality? I really don't know, but Murakami does have a way of showing you that things are not as they seem, that what appears unreal can actually be very real, in a concrete sense. In essence, he brings up the utter indeterminacy of epistemology and ontology, in other words, the inseparability of what/how we know, with what actually "is". And then - crucially - he demonstrates how you can set a blender to it, and mix up the epistemology, remarkably also mixing up the ontology! It's amazing to see, though, how deeply ingrained and overly confident our world-models are, when it takes something like The Wind-Up Bird Chronicle to smack you in the head in order to realize that, actually, maybe that's not the truth after all.

I'm afraid that in the above paragraphs I've strayed too far into the familiar sphere of the philosophy of science, owing to my having studied it, but please do not take being Murakami'd to mean that I started thinking about these subjects in exactly this way. It was far more visceral and non-verbal. I merely used the conceptual tools I am familiar with to try to describe the experience, but I see now that I was only elaborating on the intellectual implications of the experience, not the experience itself.

[This post was written in the summer of 2015 but only published in May 2016.]

The brain may not be a digital computer, but it sure ain't "empty"

The following article ("The empty brain" by Robert Epstein) came up on Facebook in my circle of Cognitive Science friends from university. Normally I am sympathetic to viewpoints that try to show that the brain is not a digital computer, or doesn't have a von Neumann architecture. All of these ideas are quite outdated and we probably don't need to keep beating the dead horse at this point. Although the article started off that way, it quickly took a turn into some very strange territory, a quite reductionistic take on what the brain is "doing". The author is apparently hell-bent on saying that nothing ever happens in the brain at all, and that the only intelligent behaviour arises vis-a-vis interactions with the world. His question "How and where, after all, is the memory stored in the cell?" is shocking and a dead giveaway. For starters, there are such things as synaptic plasticity, dendritic remodelling, and ion channel and receptor expression/recycling, all of which serve to change the neuron's input/output function with experience. How can these be so blatantly ignored when discussing the question of memory formation in the brain?

Regarding the whole "there are no algorithms, encoders, decoders, ..." bit (paraphrasing), this is also eyebrow-raising. When light enters the eye and hits the retina, the pattern and intensity of the light get converted into series of action potentials. Why would you not call that an analog-to-digital converter? Also, most neurons exhibit idiosyncrasies in how they respond to different types of synaptic input. Some neurons suppress low-frequency inputs, responding only to high-frequency ones, or vice versa. These are, quite literally, high-pass and low-pass filters, respectively. All components of information processing systems. And we're not even getting into what we know so far about neural circuits, all of which strongly indicates that information processing is taking place, yes, even representation, storage, retrieval, etc.
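As a toy illustration of the low-pass case (a minimal sketch with invented parameters, not a biophysical model): a passive leaky membrane integrates its input like an RC circuit, so slow input builds up voltage while fast input is attenuated:

```python
import math

def membrane_response(input_current, tau_ms=20.0, dt_ms=1.0):
    """Passive leaky membrane: dV/dt = (-V + I) / tau.
    Mathematically an RC low-pass filter: inputs much slower than
    the cutoff ~1/(2*pi*tau) pass through, faster ones are damped."""
    v, trace = 0.0, []
    for i in input_current:
        v += dt_ms * (-v + i) / tau_ms  # forward-Euler step
        trace.append(v)
    return trace

# Equal-amplitude sinusoidal drive at 2 Hz vs 100 Hz, 2 seconds at 1 kHz.
slow = [math.sin(2 * math.pi * 2 * n / 1000.0) for n in range(2000)]
fast = [math.sin(2 * math.pi * 100 * n / 1000.0) for n in range(2000)]

peak_slow = max(membrane_response(slow))
peak_fast = max(membrane_response(fast))
print(peak_slow > peak_fast)  # True: slow drive passes, fast drive is filtered
```

A neuron with, say, facilitating synapses would show the opposite, high-pass behaviour; the point is simply that these response properties are filters in the literal signal-processing sense.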

Insofar as certain patterns of electrical activity are manifested in response to particular configurations of sensory input, we are allowed, or even obligated, to say that information is being processed and transformed, so that certain patterns of (sensory) input can then lead to patterns of (motor) output that facilitate the survival of the organism in an uncertain environment. That's kind of the point of having a central nervous system in the first place. Or is the author assuming that the brain does absolutely, literally nothing? Maybe he takes a page out of Aristotle's book and believes it's a giant radiator. This is the only way he can get away with his audacious and, frankly, ignorant statements. By "ignorant" I simply mean that his arguments could not have been formulated had he read even the most rudimentary "Neuro 101 for dummies"-type textbook from the past 25 years.

And this isn't even getting into deeper philosophical questions of different kinds of information processing and the nature of representation, or anything like that.