Tuesday, August 19, 2014

Gone for a Few Days

Going to be out of the state for a few days; a few of you have written questions to me about technical issues or methods, and I'll try to get back to you as soon as I can!

In the meantime, as is the custom whenever I go on vacation, here is another late masterwork by Brahms. Brahms was always a mysterious figure, and one of the greatest enigmas of his life was his relationship with Clara Schumann, piano virtuoso and widow of the famous composer Robert Schumann. Scholars have speculated for decades about what happened between them - because, obviously, knowing whether they hooked up or not is essential to how we listen to and appreciate the music of Brahms, and such a line of inquiry is a perfectly valid use of government grant money.

Fortunately, however, I recently came across one of Brahms's lost journals in a second-hand bookstore; and the secrets disclosed within will finally give us an answer. Let's take a look:


November 4th, 1876: Cloudy today. Composed first symphony.

August 9th, 1879: Went clubbing with Liszt and Wagner. Hooked up with Clara Schumann. No doubt about it.

October 24th, 1884: Note to self: Find out why I'm writing my journal in English.

April 3rd, 1897: Died from cancer.





Monday, August 18, 2014

Bayesian Inference Web App That Explains (Nearly) Everything



For those still struggling to understand the concepts of Bayesian inference - in other words, all of you - there is a web app developed by Rasmus Baath which allows one to see the process unfolding in real time. As with an independent-samples t-test, we are trying to estimate the mean difference between the populations that the samples were drawn from; however, the Bayesian approach offers much richer information: an entire distribution of parameter values, and, more importantly, which of those values are more credible than others, given the data that has been collected.

The web app is simple to use: You input data values for your two groups, specify the number of samples and burn-in samples (the defaults are fine), and hit the Start button. The MCMC chain then samples from the posterior, building up a distribution of credible parameter values; the 95% of that distribution containing the most credible values is labeled the highest density interval, or HDI. The same procedure is applied to every other parameter informed by the data - normality, effect size, and the individual group means and standard deviations - which provides a much richer set of information than null hypothesis significance testing (NHST).

Because of this - and because I believe Bayesian inference will overtake, replace, and destroy NHST as the dominant statistical method within the next ten years, burning its crops, sowing salt in its fields, looting its stores, stampeding its women, and ravishing its cattle - I plan to start incorporating more Bayesian statistics into my posts. All of this, coming from someone slated to teach the traditional NHST approach next semester; which, understandably, has me conflicted. On the one hand, I feel pressured to do whatever is most popular at the moment; but on the other, I am an opportunistic coward.

In any case, Bayesian inference is becoming a more tractable technique, thanks to programs that interface with the statistics package R, such as JAGS. Using this to estimate parameters for region-of-interest data, I think, will be a good first step for introducing Bayesian methods to neuroimagers.


Friday, August 8, 2014

Parametric Modulation with SPM: Why Order Matters

Those who are the youngest children in the family are keenly aware of the disadvantages of being last in the pecking order; the quality of toys, clothes, and nutrition is best for the eldest children and gradually tapers off by the time you reach the last kids, who are usually left with what are called, in parental euphemism, "hand-me-downs."

In reality, these are things that should have been discarded several years ago for being broken, unsafe, laced with asbestos, or clearly overused, such as pairs of underwear with holes so large that you're unsure of where exactly your legs go.

In SPM land, a similar thing happens to parametric modulators, with later modulators getting the dregs of the variance not wanted by the first modulators. Think of variance as toys and cake, and parametric modulators as children; everybody wants the most variance, but only the first modulators get the lion's share of it; the last modulators get whatever wasn't given to the first modulators.

The reason this happens is that when the GLM is set up, SPM uses a function, spm_orth, to orthogonalize the modulators. The first modulator is left essentially untouched, but every modulator after it is orthogonalized with respect to the modulators preceding it; this proceeds in stepwise fashion, with the second modulator getting whatever variance was not assigned to the first, the third getting whatever was not assigned to the first two, and so on.

(You can see this for yourself by doing a couple of quick simulations in Matlab. Type x1=rand(10,1), and x2=rand(10,1), and then combine the two into a matrix with xMat = [x1 x2]. Type spm_orth(xMat), and notice that the first column is unchanged, while the second is greatly altered. You can then flip it around by creating another matrix, xMatRev = [x2 x1], and using spm_orth again to see what effect this change has.)

Be aware of this, as it can dramatically alter your results depending on how your modulators are ordered. (If they were completely orthogonal by design, then the spm_orth command should have virtually no impact; but this scenario is rare.) This came to my attention when looking at a contrast of a parametric modulator, one of three assigned to a main regressor. In one GLM the modulator was last, and correspondingly explained almost no variance in the data; in another GLM the modulator was first, and loaded on much more of the brain. The following two figures show the difference, using exactly the same contrast and correction threshold, but simply reversing the order of the modulators:


Results window when parametric modulator was the last one in order.


Results window when parametric modulator was the first one in order.

This can be dealt with in a couple of ways:
  1. Comment out the spm_orth call in the two scripts used to set up the GLM: line 229 in spm_get_ons.m, and lines 285-287 in spm_fMRI_design.m. However, this might affect other people who actually do want the orthogonalization, and it increases the probability that you will screw something up. Also note that doing this means the modulators will not be de-meaned when entered into the GLM.
  2. Create your own modulators by de-meaning the modulator values, inserting each value at the TR where it occurred, and then convolving the result with an HRF (usually the one in SPM.xBF.bf). If needed, downsample the output to match the original TR.

It is well to be aware of all this, not only for your own contrasts, but also for seeing how it is handled in other papers. I recall reading one a long time ago - I can't remember the name - in which several different models were compared, differing mainly in the order in which the modulators were entered. Not surprisingly, the results were vastly different, although why this was the case wasn't clearly explained. Just something to keep in mind, as both a reader and a reviewer.

I should also point out that this orthogonalization does not occur in AFNI; the order of modulators has no influence on how they are estimated. A couple of simulations should help illustrate this:

Design matrices from AFNI with simulated data (one regressor, two parametric modulators). Left: PM1 entered first, PM2 second. Right: Order reversed. The values are the same in both windows, just swapped in position to illustrate which one came first in the model.
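(For anyone wanting to try this themselves, parametric modulation in AFNI is specified with amplitude-modulated regressors in 3dDeconvolve; below is a rough sketch with made-up file names and scan parameters, not the exact commands behind the figures above. The timing file attaches the modulator values to each onset using AFNI's "married" timing format - see the -stim_times_AM2 section of the 3dDeconvolve help for the exact syntax.)

# Build a design matrix only (no data) containing one event regressor
# plus its two amplitude modulators; AFNI does not serially orthogonalize
# these, so listing the modulators in a different order does not change
# how they are estimated
3dDeconvolve -nodata 200 2.0 \
  -num_stimts 1 \
  -stim_times_AM2 1 stim_times.1D 'BLOCK(2,1)' \
  -stim_label 1 task_with_mods \
  -x1D X.xmat.1D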

More information on the topic can be found here, and detailed instructions on how to construct your own modulators can be found on Tor Wager's lab website.

Wednesday, August 6, 2014

DTI Analysis: Soup to Nuts Playlist




Instead of going through each DTI analysis step individually, I've collated everything into a YouTube playlist below. Just remember that we are using data from the FSL practical course here, and also remember that suing somebody for giving out bad advice, although it is admittedly an easy way to become fantastically wealthy, won't necessarily make you happier.

In any case, just to briefly go over the rest of the steps: After correcting for magnetic field distortions and eddy currents, tensors are fitted using the dtifit command (or simply by going through the FDT menu in the FSL interface). Once this has been done for each subject, a series of TBSS tools are used, each one prefixed by "tbss"; for example, tbss_1_preproc, tbss_2_reg, and so on. (You can find all of these in the $FSLDIR/bin directory, and if you have a good handle on Unix programming, you can inspect the code yourself.) After you have run all of those for your dataset, you set up the appropriate experimental design and contrasts, and use the randomise tool to perform permutation-based statistics on the skeletonised data.
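To make that sequence concrete, here is a minimal sketch in shell commands; the file names (data, nodif_brain_mask, bvecs, bvals, design.mat, design.con) and the thresholds are assumptions based on common FSL conventions, not something lifted from the videos, so adapt them to your own data.

# Fit the tensor model for one subject (outputs dti_FA, dti_MD, etc.)
dtifit -k data -o dti -m nodif_brain_mask -r bvecs -b bvals

# Copy every subject's FA map into one directory, then run the TBSS steps there
tbss_1_preproc *_FA.nii.gz
tbss_2_reg -T                # nonlinear registration to the FMRIB58_FA template
tbss_3_postreg -S            # create the mean FA image and the skeleton
tbss_4_prestats 0.2          # project FA onto the skeleton, thresholded at 0.2

# Voxelwise permutation statistics on the skeletonised data
randomise -i stats/all_FA_skeletonised -o stats/tbss \
  -m stats/mean_FA_skeleton_mask -d design.mat -t design.con -n 500 --T2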

Keep in mind that this is just beginner's material, and that if you were to compare your DTI competence to dating, it would be like you were still in that awkward teenager phase, unable to talk to anybody or make eye contact.

However, much of this material has already been covered in other blogs and online resources, provided by several other highly talented scientists and researchers, and - as much as I constantly fantasize about establishing a monopoly over neuroimaging information - there is no practical way to destroy them all.

Therefore, instead of posting redundant information, I highly recommend checking out an ongoing step-by-step series on TORTOISE by Peter Molfese, which you can compare against your results with FSL, and another blog dedicated to diffusion imaging, diffusion-imaging.com. The latter site covers virtually every piece of diffusion imaging software and pipeline out there, and is a good place to start.


Monday, August 4, 2014

DTI Analysis, Steps 1 & 2: Distortion Correction and Eddy Correction

DTI data comes into this world just as we do - raw, unprocessed, unthresholded, and disgusting. Today's parents are much too easy on their children; back in my day, boys and girls were expected to correct their own erratic motion artifacts, to strip their own skulls when too much cranium started sprouting around their cortex, and reach significance through their own diligence and hard work. "No dinner for you until you've gotten rid of those filthy distortion currents on your hands, young man!" was a common saying around my household.

Unfortunately, these days we live in a society that increasingly avoids taking responsibility for its own actions, filled with people more and more unwilling to strip their own skulls. (Even though that would save neuroimaging researchers countless hours of preprocessing.) This trend is the result of several complex cultural and sociological factors, by which I mean: Lawyers. Things have gotten so bad that you could, to take a random example, potentially sue someone posting advice on an unlicensed website if following those instructions somehow messed up your data.

This actually happened to a friend of mine who publishes a nutrition blog tailored for Caucasian males, White Man's Secret. After one of his readers formed kidney stones as a result of following his advice to drink a two-liter of Mello Yello every day, the author was sued for - and I quote from the court petition - "Twelve Bazillion Dollars."

In any case, DTI data takes some time to clean up before you can do any statistics on it. However, the preprocessing steps are relatively few, and are analogous to what is done in a typical FMRI preprocessing pipeline. The steps are to identify and correct any distortions during the acquisition, and then to correct for eddy currents.

The first step, distortion correction, can be done with FSL's topup command. Note that this requires two diffusion-weighted images, one acquired with the phase-encoding direction anterior-to-posterior (AP) and one acquired posterior-to-anterior (PA). You will also need a text file containing information about the encoding directions and the total readout time (roughly the amount of time between the first echo and the last echo); for example, let's say we have a text file, acqparams.txt, containing the following rows:

0 -1 0 0.0665
0 1 0 0.0665

Here a -1 means AP, and 1 means PA. The last column is the readout time. If you have any questions about this, ask your scanner technician. "What if he's kind of creepy?" I don't care, girl; you gotta find out somehow.

While you're at it, also ask which image from the scan was the AP scan, and which was the PA scan. Once you have identified each, you should combine the two using fslmerge:

fslmerge -t AP_PA_scan AP_scan PA_scan 

Once you have that, you have all you need for topup:

topup --imain=AP_PA_scan --datain=acqparams.txt --out=topup_AP_PA_scan

This will produce output (with the prefix topup_AP_PA_scan) containing an estimate of the field distortions in your DTI scan. This will then be passed to the eddy tool, which undoes those distortions at the same time that it performs eddy-current correction.

The only other preparation you need for the eddy correction is an index file specifying which row of the acquisition parameters file applies to each volume (usually just a 1 for every volume), and a mask excluding any non-brain content, which you can make with bet:

bet dwi_image betted_dwi_image -m -f 0.2
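As for the index file, each volume in your DWI dataset gets a number pointing to the row of acqparams.txt that applies to it; if every volume was acquired the same way, that is just a row of 1's. A minimal way to generate it (this sketch assumes 64 volumes - substitute however many your scan actually has):

nvols=64
indx=""
for ((i=1; i<=nvols; i++)); do indx="$indx 1"; done
echo $indx > index.txt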

Then feed all of this into the eddy command (note that bvecs and bvals files should have been automatically generated if you imported your data using a tool such as dcm2niigui):

eddy --imain=dwi_image --mask=betted_dwi_image --index=index.txt --acqp=acqparams.txt --bvecs=bvecs --bvals=bvals --fwhm=0 --topup=topup_AP_PA_scan --flm=quadratic --out=eddy_unwarped_images

Note that the --fwhm and --flm arguments I chose here are the defaults that FSL recommends.

The differences before and after can be seen in fslview:




Of course, the distortion correction step isn't strictly necessary, and you can still do a much simpler version of eddy correction by selecting FDT Diffusion from the FSL menu and then clicking on the Eddy Correction tab, which only requires input and output arguments. All I'm saying is that, if you're feeling particularly litigious, you should probably be safe and do both steps.
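(If you prefer the command line to the GUI, the equivalent of that Eddy Correction tab is, as far as I know, the older eddy_correct tool - a minimal sketch, where the trailing 0 just means the first volume is used as the registration reference:)

eddy_correct dwi_image eddy_corrected_data 0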



Friday, August 1, 2014

Blistering Fast Functional Connectivity Analysis with 3dTproject

Because speed is associated with freedom, happiness, and the American way - we want results, we want visible results, and we want them now, darn it! - it is no surprise that neuroimaging analysis is becoming exponentially faster and more efficient. Gone are the days when you could run a slice-timing step, go to bed, and wake up eight hours later when it finally finished. Most likely it crashed, meaning you had to run the entire thing all over again, but that didn't matter - the point was that you at least felt as though you were doing something during that time.

Regardless, we are now approaching a radically different era, and one of the harbingers of that era is AFNI's 3dTproject. Released a few months ago, this tool is now the default in both uber_subject.py and afni_proc.py when you do resting-state analyses. It's quicker, more efficient, and leaves fewer chances to mess things up, which is a good thing.

To include 3dTproject in your analysis pipeline, simply select "rest" on the analysis initialization screen when running uber_subject.py, or copy example #9 from the help documentation of afni_proc.py. Under no circumstances should you try running 3dTproject manually, since you, possessing the hand-eye coordination of a cheese burrito, will inevitably make a typo and screw something up. Nevertheless, if you insist on doing it yourself, I recommend using the following code, which is automatically generated by the .py scripts:

3dTproject -polort 0 -input pb04.$subj.r*.blur+tlrc.HEAD -censor motion_${subj}_censor.1D -cenmode ZERO -ort X.nocensor.xmat.1D -prefix errts.${subj}.tproject

Needless to say, you will need to have already run your preprocessing steps and to have run 3dDeconvolve with the -x1D_stop option in order to generate the necessary matrices.
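For reference, that 3dDeconvolve call looks something like the sketch below; the file names, polynomial order, and motion regressor file are assumptions modeled on the usual afni_proc.py naming scheme, so check the proc script generated for your own data rather than copying this verbatim.

# Build the design matrices only (no GLM is actually fit, thanks to -x1D_stop);
# X.nocensor.xmat.1D is the matrix passed to 3dTproject above
3dDeconvolve -input pb04.$subj.r*.blur+tlrc.HEAD \
    -censor motion_${subj}_censor.1D \
    -polort 3 \
    -ortvec mot_demean.r01.1D mot_demean_r01 \
    -num_stimts 0 \
    -x1D X.xmat.1D \
    -x1D_uncensored X.nocensor.xmat.1D \
    -x1D_stop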