Tuesday 8 July 2014

The increasing folly of the Human Brain Project debate

So, this happened recently. In a nutshell, a 200-scientist-strong open letter was published ahead of a review of the Human Brain Project (HBP), criticizing the project as having gone "off-course". The BBC article linked to has a nice summary of the issues involved, and a pointed defence by the leader-visionary-"guru" of the HBP, Henry Markram. Here are some of my thoughts on the issue (a version of which was first posted in /r/neuroscience).

It is only natural for researchers with vested interests in other levels of analysis - in this case, more abstract computational models that ignore molecular and subcellular detail, or even the cellular level entirely (point-process neuron models, for example; see the sketch below) - to be opposed to so much funding going into the HBP, which is inherently geared towards simulating even the smallest functionally relevant level of analysis (viz., the molecular). This open letter is a window into the general phenomenon of competing visions and paradigms, only amplified because the stakes are so much higher (1.2 billion euros higher, to be exact).
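As an aside, to make "abstract" concrete, here is a minimal sketch of one of the simplest single-point abstractions, a leaky integrate-and-fire neuron, in Python. All parameter values are illustrative, not taken from any particular study - the point is how much biology one linear equation and a reset rule throw away.

import numpy as np

# Minimal leaky integrate-and-fire (LIF) neuron. One linear ODE plus a
# threshold-and-reset rule stands in for the entire molecular machinery
# of a real cell. Parameter values are illustrative only.
def simulate_lif(i_ext, dt=0.1, tau_m=20.0, v_rest=-65.0,
                 v_reset=-70.0, v_thresh=-50.0, r_m=10.0):
    """Integrate membrane voltage (mV) for an input current trace (nA).

    Returns the voltage trace and the time-step indices of spikes.
    """
    v = np.full(len(i_ext), v_rest)
    spikes = []
    for t in range(1, len(i_ext)):
        # tau_m * dV/dt = -(V - V_rest) + R_m * I(t)
        dv = (-(v[t - 1] - v_rest) + r_m * i_ext[t - 1]) / tau_m
        v[t] = v[t - 1] + dt * dv
        if v[t] >= v_thresh:      # a threshold crossing is the "spike"
            spikes.append(t)
            v[t] = v_reset        # instantaneous reset; no ion channels here
    return v, spikes

# Example: a constant 2 nA current step for 200 ms (dt = 0.1 ms).
voltage, spike_idx = simulate_lif(np.full(2000, 2.0))
print(f"{len(spike_idx)} spikes in 200 ms")

No sodium channels, no neurotransmitters, no glia - which is precisely the level-of-analysis choice that the letter's signatories and the HBP disagree about.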

On the one hand, I agree that more independent review would be helpful in order to stop some of the more unscientific moves the HBP has been making, such as letting go of people who do not "toe the line", as outlined in the BBC article above. On the other hand, independent review has a downside too: ideological differences on the reviewers' part may unnecessarily stifle the project. This is a problem with the reviewing process at most journals, in fact, so in that sense, nothing new there.

From my point of view, the framing of this debate in terms of so much money being "invested in only one person's vision" is misleading and misses the bigger picture. The fact remains that we have a glut of neuroscientific data, and the research and funding structures are geared to encourage little bite-sized bits of research that demonstrate some effect of one molecule, or modulation of one synapse, or some similarly isolated aspect of the nervous system - i.e., towards "quick returns". True, newer tools like optogenetics are allowing larger-scale investigations into the nuances of entire circuits, but even then, the brain is complex enough that the story of any individual opto paper is inherently narrow and limited. We need to integrate all of this data, and what better way than to throw it into one big computational simulation that doubles as a data repository?

The HBP aims to be a "service provider", as discussed in the BBC article linked to above. Even in computational neuroscience - where there is fierce debate about the appropriate level of analysis for studying, and therefore understanding, brain function - there is no debate about the fact that neurons operate at the molecular level. The huge diversity of neurotransmitters, ion channels, cell types, even glial cells (*groan*, cries almost every neuroscientist who realizes we can't continue to ignore them) has evolved for a reason, and each one has been shown to have some functionally relevant role for a neuron, a circuit, and therefore behaviour. So whatever abstract models we use in our pet studies must ultimately bottom out at the lowest level of detail in order to be relevant to understanding the actual brain. Otherwise, we are no better than armchair philosophers speculating about how the brain works. You need to examine the actual product of evolution, the tissue itself - the very nuts and bolts - and understand it at that level.

No, the HBP will never be complete, and yes, it will probably be grossly incorrect in many, many ways - if only because important facts about the brain remain to be discovered. That shouldn't stop us from starting somewhere. As Markram says, sure, we could invest all this money into the usual ecosystem of research. But that would ultimately generate another few hundred isolated, entirely independent papers with more data - and no more integrated an understanding of the brain.

The bottom line is that what is at stake is the question of how best to continue doing neuroscience. Henry Markram believes - as do many others, let's not forget; it's not just a "single quirky guy's vision", as critics may want you to believe - that some kind of integrated approach that starts to put it all together is needed. It won't be perfect, but we already have enough data to warrant such an approach now - in fact, it was needed yesterday.

Thomas Trappenberg of Dalhousie University presented an amusing yet powerful slide in a talk at the recent 2014 Canadian Association for Neuroscience conference, plotting the page count of the venerable Principles of Neural Science textbook against the publication year of each of its editions - a strong positive trend. He pointedly argued that computational modelling research should be what pushes the page count back down. Experimental neuroscience drives the "neuroscience page count" up by providing more and more data; computational neuroscience does - or rather should - push it down by providing integrated theories of brain function. My argument here is that the HBP is ideally poised to do the latter.

Certainly, it won't provide all the answers, and it's not meant to. For instance, the criticism that the HBP could replicate the entire brain and still tell us nothing about its function is correct, in a way. It is indeed silly to think that when the "switch is turned on", the simulation will exhibit (rat) cognition. We need input from the environment, not just to provide data but also to entrain the brain and calibrate its endogenously generated rhythms - just think of the unravelling of the mind that occurs when humans are subjected to sensory deprivation. (For a fuller treatment of the environment's role in entraining and calibrating the brain, see Buzsáki's excellent treatise, Rhythms of the Brain - of which I have an autographed hardcover copy!)

What the HBP will provide, however, is a repository for integrating the swathes of data we already have, and a framework for testing any ideas about the brain. No, it will never be complete, but it is badly overdue, and the thought of continuing without an integrating framework that can be tested, prodded, and drawn upon - each researcher instead pursuing a narrow pet project in isolation from the others - is as much a folly as pretending to study and understand genetics without having the entire genome sequenced.

In that sense, the HBP can only help any and all endeavours in understanding the brain, by providing a baseline model with as much cellular and molecular detail incorporated as possible, because any higher level of analysis will ultimately have to interface with it (or at least with the level of detail the HBP is aiming to capture) in order to show its ultimate relevance to the brain.

The brain, as a biological system, is inherently different in nature from the phenomena that many computational neuroscientists (coming, as they mostly do, from physics and engineering backgrounds) are comfortable dealing with - physical systems that can be explained with a handful of equations. The brain, sadly, is not such a system and is not amenable to "spherical cow" levels of analysis; the equations pile up quickly even for a single neuron, as the sketch below shows. As a biological system, it follows the rule of being a horrible mess of interacting factors rather than a product of a few physical laws that can be elegantly summarized in a few equations. That's not to say that no simplifying analysis can be done, or that no fruitful results will emerge from such studies. On the contrary, we can learn many useful facts about the brain by building and analyzing simplified models. It's just that any such endeavour will inherently miss the mark in important ways.

The "answer", then, is to stop thinking in terms of a zero-sum game (which seems to be the attitude of this open letter's signatories) and instead consider it a joint venture. Indeed, the more abstract levels of analysis have held the limelight for many years without paying any real dividends. The connectionist paradigm, for example, started in the 1980s, hasn't given us any concrete, large-scale understanding of the brain, and has instead quietly devolved - unfortunately for our knowledge of the brain, though obviously not for commercial ventures - into machine learning tricks for learning Netflix user preferences, etc. (That Netflix Tech Blog link even refers to Deep Belief Nets as being "trendy"!)
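To put the "equations pile up" remark in concrete terms: the classic Hodgkin-Huxley description of a single isopotential patch of membrane with just three currents (a textbook result, though the exact notation below is my own) already requires four coupled nonlinear differential equations, with every rate function empirically fitted:

C_m \frac{dV}{dt} = -\bar{g}_{\text{Na}}\, m^3 h\,(V - E_{\text{Na}}) - \bar{g}_{\text{K}}\, n^4\,(V - E_{\text{K}}) - g_L\,(V - E_L) + I_{\text{ext}}

\frac{dx}{dt} = \alpha_x(V)\,(1 - x) - \beta_x(V)\,x, \qquad x \in \{m, h, n\}

Now multiply by the dozens of channel types, receptors, and signalling cascades in a real neuron, and by the tens of billions of neurons in a brain, and the "handful of equations" ideal evaporates.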

In fact, the approach the HBP is embarking on is badly overdue, and vastly underrepresented. It's not a popular approach precisely because it accepts the messiness of the brain rather than abstracting it away. Sure, it's a double-edged sword: by opening the Pandora's box of the molecular level, you risk building in whatever we do not yet know - but that is part and parcel of any scientific approach. Thus, kudos to the HBP and Henry Markram for managing to get this kind of project off the ground. And because it is such a large project, it necessarily requires a lot of funding. This is a Manhattan Project or Human Genome Project flavour of endeavour - a Project with a capital "P".

I believe it will only further our understanding of the brain in an integrated way that can evolve over time, with contributions from other levels of analysis. Those who oppose it are, in my opinion, unfortunately doing so primarily on personal and ideological grounds - ultimately, on selfish and jealous ones - rather than on the basis of valid scientific rebuttals.

Sadly, I lack Markram's eloquence and diplomacy in addressing the critics, but sometimes you have to grab the bull by the horns and address the real issue, rather than skirt around it, walking on eggshells for fear of bruising other people's egos.

-- PhD candidate in computational neuroscience, whose own biases have been amply revealed, he hopes.
