Wednesday, February 20, 2013

Model-Based FMRI Analysis: Thoughts

Model-based FMRI analysis is so hot right now. It's so hot, it could take a crap, wrap it in tin foil, put hooks on it, and sell it as earrings to the Queen of England.* It seems as though every week, I see another model-based study appear in Journal of Neuroscience, Nature Neuroscience, and Humungo Garbanzo BOLD Responses. Obviously, in order to effect our entry into such an elite club, we should understand some of the basics of what it's all about.

When people ask me what I do, I usually reply "Oh, this and that." When pressed for details, I panic and tell them that I do model-based FMRI analysis. In truth, I sit in a room across from the guy who actually does the modeling work, and then simply apply it to my data; very little of what I do requires more than the mental acumen needed to operate a stapler. However, I do have some foggy notions about how it works, so pay heed, lest you stumble and fall when pressed for details about why you do what you do, and are thereupon laughed at for a fool.

Using a model-based analysis is conceptually very similar to a basic univariate analysis with the canonical Blood Oxygenation Level Dependent (BOLD) response. In the canonical approach, we already have a model of what we think the signal should look like in response to an event, whether instantaneous or spread out over a longer period of time: each event is convolved with a mathematically constructed shape, built from gamma functions, called the hemodynamic response function (HRF). Stringing these together gives an ideal model of what we think the signal at each voxel should look like, and the general linear model then scales the height of that ideal timecourse up or down (this is the beta weight) to best fit the signal actually observed at each voxel.


HRF convolved with a punctate event. If your model uses the canonical HRF, this approximate shape can be plotted by loading the SPM.mat file into memory and typing "plot(SPM.xBF.bf)".
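
If the convolution and fitting steps sound abstract, here is a minimal sketch in MATLAB of the whole procedure for a single voxel. Everything in it is made up for illustration (the TR, the onsets, the fake voxel data), and the HRF is a generic double-gamma shape rather than SPM's exact canonical function:

% A minimal sketch of the ideal model for one voxel. All numbers here
% (TR, number of scans, onsets) are made up, and the HRF below is a
% generic double-gamma shape, not SPM's exact canonical HRF.

TR     = 2;                 % repetition time in seconds (assumed)
nScans = 200;               % number of volumes (assumed)
t      = 0:TR:32;           % HRF support, roughly 32 seconds

% Double-gamma HRF: an early peak minus a smaller, later undershoot
hrf = (t.^5 .* exp(-t)) ./ gamma(6) - (t.^15 .* exp(-t)) ./ (6 * gamma(16));
hrf = hrf / max(hrf);       % normalize to a peak of 1
hrf = hrf(:);               % make it a column for the convolution below

% Stick function: a 1 at each event onset, 0 everywhere else
onsets         = [10 35 60 90 120 150];   % onsets in scans (assumed)
sticks         = zeros(nScans, 1);
sticks(onsets) = 1;

% Convolve the sticks with the HRF to get the ideal regressor
X = conv(sticks, hrf);
X = X(1:nScans);

% Ordinary least squares scales the regressor (plus an intercept) to the
% observed timecourse; beta(2) is the fitted height of the response
y    = 0.8 * X + 0.2 * randn(nScans, 1);  % fake voxel data for the demo
beta = [ones(nScans, 1), X] \ y;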

Model-based analyses add another layer to this by estimating how much the height of this HRF should fluctuate (or "modulate") in response to additional continuous (or "parametric") data for each trial, such as reaction time. The model provides an estimate of how much the BOLD signal should vary from trial to trial, and these trial-by-trial values are inserted into the general linear model (GLM) as parametric modulators; the BOLD response can then correlate either positively or negatively with the modulator, signaling whether higher values of that modulator lead to a larger or smaller BOLD response.
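
Continuing the sketch above (and reusing hrf, onsets, nScans, and y from it), a parametric modulator amounts to a second regressor in which each event's stick is scaled by that trial's mean-centered value before convolution. The reaction times below are made up for the demo:

% A parametric modulator built from hypothetical trial-by-trial reaction
% times; reuses hrf, onsets, nScans, and y from the previous snippet

rt = [0.45 0.61 0.52 0.70 0.48 0.66];   % made-up RTs, one per trial (s)
pm = rt - mean(rt);                     % mean-center the modulator

unmod             = zeros(nScans, 1);
modSticks         = zeros(nScans, 1);
unmod(onsets)     = 1;                  % ordinary event regressor
modSticks(onsets) = pm;                 % stick heights scaled by the RTs

Xmain = conv(unmod, hrf);     Xmain = Xmain(1:nScans);
Xmod  = conv(modSticks, hrf); Xmod  = Xmod(1:nScans);

% Both columns enter the GLM; the sign of betas(3) says whether the BOLD
% response grows or shrinks as the modulator increases from trial to trial
betas = [ones(nScans, 1), Xmain, Xmod] \ y;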

To illustrate this, a recent paper by Ide et al (2013) applied a Bayesian model to a simple stop-signal task, in which participants made a response on Go trials and had to inhibit that response on Stop trials. The stop signal appeared on only a fraction of trials, and only after a variable delay, which made it difficult to predict when it would occur. The researchers used a Bayesian model to update, trial by trial, an estimated prior probability that a stop signal would occur, as well as the probability of committing an error. Think of the model as representing what an ideal subject would do, and try to place yourself in their shoes: after a long string of Go trials, you suspect more and more that the next trial will contain a Stop signal. When you are highly certain that a Stop signal will occur but it doesn't, then according to the model that should lead to greater activity, as captured by the parametric modulator generated for that trial. These trial-by-trial values are then entered into each subject's GLM, and we look for voxels where they provide a good fit to the observed timecourse.
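
To make the "ideal subject" concrete, here is a toy Bayesian update of P(stop) over a made-up trial sequence. The dynamic Bayesian model in Ide et al has its own specific form and parameters; this beta-Bernoulli update with a forgetting factor is only meant to show how a trial-by-trial modulator gets generated:

% A toy Bayesian update of P(stop), only loosely in the spirit of the
% paper's model; the trial sequence and forgetting factor are made up

isStop = [0 0 0 1 0 0 1 0 0 0 0 1];   % 1 = Stop trial, 0 = Go trial
lambda = 0.9;                         % forgetting factor (assumed)
a = 1; b = 1;                         % flat Beta(1,1) prior
pStop  = zeros(size(isStop));

for k = 1:numel(isStop)
    pStop(k) = a / (a + b);           % belief that trial k will be a Stop
    a = lambda * a + isStop(k);       % discount old evidence, add the new
    b = lambda * b + (1 - isStop(k));
end

% pStop holds one value per trial; entered as a parametric modulator, it
% lets the GLM find voxels where BOLD tracks the expectation of a stop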

Model-based regressors applied to FMRI data (Ide et al, Figure 3). The magenta region in panel (A) shows the contrast of parametric modulators for the probability of a stop trial, P(stop), on Go trials as opposed to Stop trials. In panel (C), note the close correspondence between the model predictions and the observed FMRI activity for each combination of trials.


In addition to the neuroimaging data, it is also useful to compare the model's predictions to behavioral data. Reaction time, to take one example, should go up as the expectancy for a stop signal increases, since a subject with a higher subjective probability of a stop signal will take more time to respond in order to avoid committing an error. Overlaying the model predictions on the behavioral data collected from the subjects provides a useful validation check:

A) Relationship of RT to the probability of a stop trial, P(stop). As P(stop) increases, so does RT, presumably in order to prevent errors of commission on these trials. B) Relationship of P(stop) to error rates on stop trials. Looking at the left side of the graph, if there is a subjectively low probability of receiving a stop trial, the actual occurrence of a stop trial will catch the subject relatively unprepared, leading to an increased error rate on those trials. Taken from Figure 2 of Ide et al, 2013.
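
As a quick sketch of that validation check, suppose you have one reaction time and one model-derived P(stop) value per Go trial (both made up below); a simple regression of RT on P(stop) should give a positive slope if subjects really do slow down as their expectation of a stop signal grows:

% Behavioral sanity check with made-up numbers: regress Go-trial RT on
% the model's P(stop) and look at the sign of the slope

goRT    = [0.42 0.45 0.47 0.55 0.44 0.49 0.58];   % hypothetical RTs (s)
goPstop = [0.10 0.15 0.20 0.35 0.12 0.25 0.40];   % hypothetical P(stop)

coef = [ones(numel(goRT), 1), goPstop(:)] \ goRT(:);
fprintf('Slope of RT on P(stop): %.3f s per unit probability\n', coef(2));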

Note, however, that this is a Bayesian model as applied to the mind; it is an estimate of what the experimenters think the subject believes during the task, given the trial history and what happens on the current trial. In this study, the significance and size of the resulting parameter estimates are still tested with standard null hypothesis significance testing.
 


*cf. Zoolander, 2001
