I have previously written about how I think that cognitive neuroscience as a scientific discipline (and I know that this is not a universally held view) has largely moved on from publishing studies demonstrating the neural correlates of “x”, where x might be behaviours as diverse as maternal love, urinating, or thinking about god. There are still a few of these sorts of studies published each year, and because the public are, it seems, fascinated by stories about blobs on brains, the media portrayal of cognitive neuroscience tends to focus on such findings.
[Image: some blobs on a brain]
This is all very entertaining if you like your science presented to you in a breakfast TV sofa sort of way. However, the downside is that people who are not regular readers of the fMRI research literature think that the media portrayal of cognitive neuroscience is an accurate representation of the field. In fact, I would argue, this is far from the case. In my experience of working in cognitive neuroscience for the last decade or more, most researchers I have encountered are not interested in so-called “blobology”. Instead, they work very hard each day carefully designing theoretically motivated experiments using cognitive neuroscience techniques to produce empirical data that can be used to differentiate between cognitive theories about how functions like memory, language, vision, attention, and so on, might operate.
However, the field of cognitive neuroscience is still relatively young. As such, its accepted methodological and analytic conventions are still being worked out. Some statistical methods that have been used quite widely in the field are now being identified as insufficiently rigorous for the kinds of interpretations that have been made. These practices became widespread mainly because new researchers have tended to learn fMRI methods informally, through knowledge handed down by other researchers in the lab, who themselves will have learned from previous researchers, and so on, as there has been no standard textbook with a validated and generally accepted set of approved methods. Recent articles highlighting such issues, for example that it is usually inappropriate to use the same dataset for both selection and subsequent selective analysis, and that interaction analyses are often conducted incorrectly, have served the very useful purpose of alerting neuroscience researchers to ways in which they might improve the rigour of their analytical methods.
As far as I’m concerned, these articles have been a thoroughly excellent contribution to the field, and a sign of a healthy, thriving scientific discipline that is willing to examine its core methods for possible weaknesses and, if they are found, to highlight them prominently. While it might seem odd that a field would allow a paper that does little more than count statistical errors in other papers to be published in its flagship journal, I think it is splendid. I only wish other fields scrutinised their own time-honoured, seemingly adamantine practices with the same care.
It is a shame that some commentators see these articles as a sign that cognitive neuroscience is weak or inherently flawed or, as one prominent figure has described it, “the soft end of science... really just at the stamp-collecting stage. There aren't any real hypotheses, more just post hoc rationalisations.” These commentators have a tendency to dismiss the field of cognitive neuroscience with the disdain they usually reserve for areas like homeopathy, chiropractic and other such mumbo jumbo. I feel such views are narrow-minded, and reflect the personal prejudices of people who, if they really value science and wish to encourage those who seek to practise it with the most rigour they can, might like to reconsider their preconceptions.
Just today I came across an article that, to me, is a prime example of the way in which cognitive neuroscience is constantly seeking to improve as an empirical discipline. Russ Poldrack, widely regarded as one of the most sensible methodologists in the field, has a paper in press in the journal NeuroImage entitled “The future of fMRI in cognitive neuroscience”. In the article, he outlines how, over the next 20 years, the field needs to increase its methodological rigour, consistently use more robust methods for statistical inference, concentrate to a greater degree on identifying connectivity patterns across the brain rather than focusing on single regions, and make other improvements to the way in which theoretical inferences are drawn from neuroimaging data. This is an important paper, and all cognitive neuroscientists should read it. But I believe all commentators who are sceptical about cognitive neuroscience should read it too. It may change their view.
As Poldrack concludes:
fMRI has advanced cognitive neuroscience research in a way that has been nothing short of revolutionary, though at the same time there are fundamental limits to the standard imaging approach that have not been widely appreciated. I am hopeful that 20 years from now, the history of fMRI in cognitive neuroscience will show that the field attacked this problem head on and developed new, robust methods for better understanding the relation between mental processes and brain function.
I very much agree, and think that there is a good chance that Poldrack’s hope will be fulfilled.
Poldrack RA (2011). The future of fMRI in cognitive neuroscience. NeuroImage. PMID: 21856431
(edited on 15/9/11 to include ResearchBlogging citation - thanks @deevybee!)