Pheno Wiki, user contributions: Katie

'''PAM''' (last edited by Katie, 2012-06-14)
<hr />
<div>Back to [[LA5C]]<br />
<br />
=Background=<br />
This task was run as part of the Consortium for Neuropsychiatric Phenomics ([[CNP]]) project. It was designed in collaboration with Russ Poldrack, Theo van Erp, and Becca Schwarzlose. It is an associative memory task in which subjects view pairs of objects and must remember not only whether they have seen them before, but also how they were originally paired.<br />
<br />
=Task Design=<br />
==PAM Encoding (PAMenc)==<br />
For all encoding trials, one figure is in black and white, and one is in color (orange). The subject must indicate by button press which side the colored object is on (this is the same as the RK encoding paradigm, but different from the NAPLS PAM encoding). Subjects are instructed to remember the objects and the relationship between the objects. The ITI is jittered.<br />
<br />
The encoding task consists of 64 trials:<br/><br />
*24 control trials- pairs of scrambled stimuli<br/><br />
*40 memory trials- pairs of line drawings of objects<br/><br />
<br />
*Control trials last 2 seconds<br/><br />
*Encoding trials last 4 seconds (1 second with just words, and then 3 seconds for words + pictures)<br/><br />
*All time between trials that is not otherwise accounted for is “null”<br/><br />
<br />
PAMenc is 242 TRs long, with a TR of 2000ms.<br />
<br />
==PAM Retrieval (PAMret)==<br />
The retrieval task requires the subjects to rate their confidence in their memory of the pairing. There are 4 possible response options ranging from "Sure correct" to "Sure incorrect". These can be analyzed later as a spectrum, or binarized into yes/no type responses.<br />
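For instance, the binarization might be done as in the following sketch. Note that the 1-4 button-to-label mapping here is an assumption for illustration, not taken from the task script:

```python
# Hypothetical sketch of collapsing the 4-point PAMret confidence scale.
# The button-to-label mapping below is an assumption; check the task
# script for the actual key codes.
RESPONSES = {1: "sure correct", 2: "maybe correct",
             3: "maybe incorrect", 4: "sure incorrect"}

def binarize(button):
    """Collapse a 1-4 confidence rating into a yes/no pairing judgment."""
    if button not in RESPONSES:
        raise ValueError("unexpected button code: %r" % button)
    return "yes" if button <= 2 else "no"
```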
<br />
The retrieval task consists of 104 trials:<br/><br />
*24 control trials- on one side of the screen is one of the 4 retrieval confidence response options "Sure correct", "maybe correct", "maybe incorrect", or "sure incorrect". On the other side of the screen is "xxxx". Subjects are asked to press the button (1-4) that corresponds to the response option displayed<br/><br />
*40 correct trials- items are shown paired as they were at encoding<br/><br />
*40 incorrect trials- items are shown paired differently than they were at encoding (some objects are the same, just paired incorrectly)<br/><br />
<br />
PAMret is 268 TRs long, with a TR of 2000ms.<br />
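As a sanity check, the run durations follow directly from the TR counts stated above:

```python
# Run durations implied by the TR counts (TR = 2000 ms, from the text).
TR_S = 2.0
N_TRS = {"PAMenc": 242, "PAMret": 268}
DURATION_S = {run: n * TR_S for run, n in N_TRS.items()}
# PAMenc -> 484.0 s, PAMret -> 536.0 s
```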
<br />
==Design Documentation==<br />
These files were created by Theo to show the trial-by-trial information, including onsets and delays. The layout is meant to be compatible with E-Prime. They open in Excel.<br/><br />
[[:Media:PAMenc_trialinfo.xlsx]]<br/><br />
[[:Media:PAMret_trialinfo.xlsx]]<br/><br />
<br />
These files were created by Theo to show the stimuli pairing and presentation order. They open in Excel. <br/><br />
[[:Media:PAMenc_stims.xlsx]]<br/><br />
[[:Media:PAMret_stims.xlsx]]<br/><br />
<br />
These files were created by Eric Miller, the CNP RA, and are designed so that the trial information is easier to interpret in relation to the MATLAB output.<br/><br />
[[:Media:Pamenc_actualtrialinfo.xlsx]]<br/><br />
[[:Media:Pamrec_actualtrialinfo.xlsx]]<br/><br />
<br />
=Analysis=<br />
The PAM is divided into two sections, PAMENC and PAMRET, for the encoding and retrieval phases. These are separate in the subjects directory, but not in the scripts directory. The tasks are linked together for scoring by two files, rectrialcodes_duringenc and enctrialcodes_duringrec, which keep track of how the trials in each section correspond to each other.<br />
<br />
'''IMPORTANT''': Partway into the study, the PAM underwent a substantial change. We are only concerned with data after this change. The first subject on the new version is 10506. All files that relate to the new version are labeled with _fixed.<br />
<br />
==Scoring Behavioral Data==<br />
/space/raid2/data/poldrack/CNP/scripts/behav_analyze/PAM<br />
<br />
All scripts read the files 'sublist_pamenc' and 'sublist_pamrec' to determine which subjects to run. Before you run a new batch, edit the relevant file using emacs (emacs sublist_pamenc). IDs are in the format CNP_12345A. It is just a transient file, so you can delete what is in there. If you'd like to save the old version, save it as sublist with the date appended; the scripts will only recognize the plain 'sublist' files.<br />
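Equivalently, the transient sublist file can be regenerated from a script rather than edited by hand; a minimal sketch, with made-up IDs:

```python
# Sketch: write a fresh transient sublist file, one subject ID per line.
# The IDs below are placeholders, not real subjects.
ids = ["CNP_10523A", "CNP_10501A", "CNP_10159A"]
with open("sublist_pamenc", "w") as f:
    f.write("\n".join(ids) + "\n")
```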
<br />
'''PAMENC'''<br/><br />
*in matlab, run score_pamenc_behavior.m<br/><br />
*this should create a file called summaryscore_PAMENC_fixed.txt in each subject's behav/PAMENC folder. You can check who has these files (and therefore who still needs to be run) by typing <br/><br />
ls /space/raid2/data/poldrack/CNP/CONTROLS/*A/behav/PAMENC/*fixed*<br />
<br />
To summarize the data, edit make_big_pamenc_fixed_score_log.sh with the appropriate group and date, then run it; it will create a summaryscore_output and a summaryscore_subjlist file.<br />
<br />
'''PAMRET'''<br/><br />
*in matlab, run score_pamrec_behavior.m<br/><br />
*this should create a file called summaryscore_PAMRET_fixed.txt in each subject's behav/PAMRET folder. You can check who has these files (and therefore who still needs to be run) by typing <br/><br />
ls /space/raid2/data/poldrack/CNP/CONTROLS/*A/behav/PAMRET/*fixed*<br />
<br />
To summarize the data, edit make_big_pamret_fixed_score_log.sh with the appropriate group and date, then run it; it will create a summaryscore_output and a summaryscore_subjlist file.<br />
<br />
You can now copy these into Excel, although you might need to use the 'text to columns' tool to get each number to go into its own cell.<br />
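The same whitespace splitting that 'text to columns' performs can also be done in a script; a minimal sketch (the example row is made up):

```python
# Sketch: split a whitespace-delimited summaryscore row into columns,
# the same operation as Excel's "text to columns". The row is made up.
row = "CNP_10159A 0.85 0.12 34"
fields = row.split()
subject, values = fields[0], [float(v) for v in fields[1:]]
```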
<br />
==Creating Onset Files (EVs)==<br />
/space/raid2/data/poldrack/CNP/scripts/behav_analyze/PAM<br/><br />
<br />
These scripts also use the sublist files, so you can easily run the behavioral scoring and these scripts on the same list of new people. Update sublist_pamenc and sublist_pamrec as described above.<br/><br />
<br />
'''PAMENC'''<br/><br />
in matlab, run make_pamenc_model1_onsets_function.m<br/><br />
<br />
Running this will create a series of files in each subject's behav/PAMENC folder. After both scripts have been run, the folder should look like this:<br/><br />
<br />
PAMenc_fixed_10638.mat <br/> <br />
pamenc_onsets_model1_hiconf_miss.txt <br/> <br />
pamenc_onsets_model1_loconf_miss.txt<br/><br />
pamenc_onsets_model1_control.txt <br/> <br />
pamenc_onsets_model1_junk.txt <br/> <br />
summaryscore_PAMENC_fixed.txt<br/><br />
pamenc_onsets_model1_hiconf_hit.txt <br/><br />
pamenc_onsets_model1_loconf_hit.txt<br/><br />
<br />
The onset files will have contents that look something like this:<br/><br />
22.0397 4 1<br/><br />
92.5125 4 1<br/><br />
135.0052 4 1<br/><br />
155.5230 4 1<br/><br />
220.5196 4 1<br/><br />
226.0131 4 1<br/><br />
226.0132 4 1<br/><br />
240.0118 4 1<br/><br />
261.5227 4 1<br/><br />
292.0186 4 1<br/><br />
324.0038 4 1<br/><br />
347.0201 4 1<br/><br />
347.0205 4 1<br/><br />
347.0205 4 1<br/><br />
376.5071 4 1<br/><br />
412.0166 4 1<br/><br />
412.0167 4 1<br/><br />
443.5219 4 1<br/><br />
443.5219 4 1<br/><br />
448.0226 4 1<br/><br />
448.0227 4 1<br/><br />
448.0227 4 1<br/><br />
473.5214 4 1<br/><br />
473.5215 4 1<br/><br />
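These appear to be three-column custom EV files in the format FSL uses (onset in seconds, duration in seconds, weight). A minimal reader, assuming that layout:

```python
# Sketch: read a 3-column onset (EV) file: onset (s), duration (s), weight.
def read_ev(path):
    rows = []
    with open(path) as f:
        for line in f:
            if not line.strip():
                continue  # skip blank lines
            onset, dur, weight = line.split()
            rows.append((float(onset), float(dur), float(weight)))
    return rows
```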
<br />
<br />
'''PAMRET'''<br/><br />
in matlab, run make_pamrec_model1_onsets_function.m<br/><br />
<br />
Running this will create a series of files in each subject's behav/PAMRET folder. After both scripts have been run, the folder should look like this:<br/><br />
<br />
pamrec_onsets_model1_all_incorr.txt<br/><br />
pamrec_onsets_model1_controltrial.txt<br/><br />
pamrec_onsets_model1_hiconfno_corr.txt<br/><br />
pamrec_onsets_model1_hiconfno_incorr.txt<br/><br />
pamrec_onsets_model1_hiconfyes_corr.txt<br/><br />
pamrec_onsets_model1_hiconfyes_incorr.txt<br/><br />
pamrec_onsets_model1_lowconfno_corr.txt<br/><br />
pamrec_onsets_model1_lowconfno_incorr.txt<br/><br />
pamrec_onsets_model1_lowconfyes_corr.txt<br/><br />
pamrec_onsets_model1_lowconfyes_incorr.txt<br/><br />
pamrec_onsets_model3_controltrial.txt<br/><br />
pamrec_onsets_model3_falseneg.txt<br/><br />
pamrec_onsets_model3_falsepos.txt<br/><br />
pamrec_onsets_model3_trueneg.txt<br/><br />
pamrec_onsets_model3_truepos.txt<br/><br />
PAMret_fixed_11062.mat<br/><br />
summaryscore_PAMRET_fixed.txt<br/><br />
trialcount_PAMRET_model3.txt<br/><br />
trialcount_PAMRET.txt<br/><br />
<br />
==PAM Models==<br />
'''PAMENC'''<br />
*'''Model1''' - PamEnc only has one model, which includes each of these conditions:<br />
:control -- scrambled trials<br/><br />
:hiconf_hit -- responded to correctly at retrieval with high confidence<br/><br />
:hiconf_miss -- responded to incorrectly at retrieval with high confidence<br/><br />
:loconf_hit -- responded to correctly at retrieval with low confidence<br/><br />
:loconf_miss -- responded to incorrectly at retrieval with low confidence<br/><br />
:junk -- missed trials, motion, etc.<br/><br />
<br />
<br />
'''PAMRET'''<br />
*'''Model1'''- Model 1 is the most basic version, and includes all of the possible conditions. The problem with model 1 is that often people have missing conditions (most frequently those that are something like "hiconf yes- incorr"). Because of this, doing group analyses, in which there cannot be missing conditions, is challenging.<br />
<br />
'''Retrieval Model1 onsets are:'''<br/><br />
:hiconfno_corr -- items which got a "sure incorrect" response that were indeed incorrectly paired<br/><br />
:hiconfno_incorr -- items which got a "sure incorrect" response that were actually correctly paired<br/><br />
:hiconfyes_corr -- items which got a "sure correct" response that were indeed correctly paired <br/><br />
:hiconfyes_incorr -- items which got a "sure correct" response that were actually incorrectly paired<br/><br />
:lowconfno_corr -- items which got a "maybe incorrect" response that were indeed incorrectly paired<br/><br />
:lowconfno_incorr -- items which got a "maybe incorrect" response that were actually correctly paired<br/><br />
:lowconfyes_corr -- items which got a "maybe correct" response that were indeed correctly paired<br/><br />
:lowconfyes_incorr -- items which got a "maybe correct" response that were actually incorrectly paired<br/><br />
:control trials<br />
<br />
*'''Model2'''- Model 2 is similar to Model1, except that the incorrect conditions, which were frequently empty, have been combined into a single "incorrect" condition.<br />
<br />
'''Retrieval Model2 onsets are:'''<br/><br />
:hiconfno_corr -- items which got a "sure incorrect" response that were indeed incorrectly paired<br/><br />
:hiconfyes_corr -- items which got a "sure correct" response that were indeed correctly paired <br/><br />
:lowconfno_corr -- items which got a "maybe incorrect" response that were indeed incorrectly paired<br/><br />
:lowconfyes_corr -- items which got a "maybe correct" response that were indeed correctly paired<br/><br />
:all_incorr -- all incorrect-response trials, combined into a single condition<br/><br />
:control trials<br />
<br />
*'''Model3'''- Model 3 was designed to have the fewest missing conditions. It models only the signal-detection-type conditions for the task.<br />
<br />
'''Retrieval Model3 onsets are:'''<br/><br />
:falseneg<br />
:falsepos<br />
:trueneg<br />
:truepos<br />
:control trials<br />
<br />
<br />
For each model, since missing conditions are such an issue, you can look at the trialcount.txt files (trialcount_PAMRET_model3.txt is for model 3, and trialcount_PAMRET.txt is for models 1 and 2). You may want to request these, along with the summaryscore_PAMRET_fixed.txt file, when you request your imaging files.<br />
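A quick way to screen a trialcount row for empty conditions is sketched below; the column layout assumed here (subject ID followed by one integer count per condition) is a guess, so check it against the actual files:

```python
# Sketch: flag empty (zero-count) conditions in a trialcount row.
# Assumed layout: subject ID, then one integer count per condition.
def empty_conditions(row):
    parts = row.split()
    subject, counts = parts[0], [int(c) for c in parts[1:]]
    return subject, [i for i, c in enumerate(counts) if c == 0]
```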
<br />
==Running First Levels==<br />
/space/raid2/data/poldrack/CNP/scripts/run_level1_scripts<br/><br />
<br />
The primary scripts for running first levels are:<br/><br />
/space/raid2/data/poldrack/CNP/scripts/run_level1_scripts/PAMENC/PAMENC_firstlevel_model1.sh<br/><br />
/space/raid2/data/poldrack/CNP/scripts/run_level1_scripts/PAMRET/PAMRET_firstlevel_model1.sh<br/><br />
<br />
These do the first phase of PAMENC and PAMRET fMRI processing. They check for the relevant files, create an individualized .fsf file for each subject, and run pre- and post-stats.<br/><br />
<br />
Each script takes 4 arguments:<br/><br />
1. group vs subject analysis <br/><br />
2. population (CONTROL, SCHZ, etc.) <br/><br />
3. which subject to run <br/><br />
4. whether to run FSL or just create the fsf file (run or norun)<br/><br />
<br />
There are a few ways you can run it:<br/><br />
a. to run on one person (here, CNP_10159A) and run FSL, go to the appropriate directory,<br/><br />
./PAMENC_firstlevel_model1.sh subject CONTROLS 10159 run<br/><br />
<br />
b. to run on an entire group (all controls, all patients, etc)<br/><br />
./PAMRET_firstlevel_model1.sh group CONTROLS all run<br/><br />
<br />
c. to run a specific group of people, you can use a second script that calls this one, run_multiple_scap.sh. You have to edit it first using emacs, and fill in the people you want to run in the for-loop at the top, for instance<br/><br />
for id in 10523 10501 10159; do<br/><br />
<br />
you also need to edit the other relevant options, such as population and whether to run all the way through. It’ll automatically run in single-subject mode, and just loop through these people.<br/><br />
<br />
This can also be submitted to the grid, after it is edited, by typing<br/><br />
sge qsub run_multiple_pamenc.sh<br/><br />
<br />
==Checking Data==<br />
Data for this paradigm are logged in the HTAC database. To get access, contact Stone Shih or Fred Sabb.<br />
<br />
=Publications=<br />
<br />
----<br />
Link back to [[LA5C]] page.</div>

'''DTI Quality Control''' (last edited by Katie, 2012-06-14)
<hr />
<div>Link back to [[LA5C]] page.<br />
<br />
'''These are the usable subjects for each group, on each scanner.'''<br />
<br />
[[File:DTI_Rankings.png]]<br />
<br />
<br />
<br />
----<br />
=Quality Control=<br />
The checking procedures for CNP follow the Cannon Lab DTI QA Protocol, which was adapted from procedures in Paul Thompson's lab.<br/><br />
The information used in QA is based on a PDF generated by the dti_proc script, a text file generated by the DTI_QA script, and visual inspection of the data.<br />
<br />
==DTI_QA script==<br />
The additional QA script is located at /space/raid2/data/poldrack/CNP/scripts/DTI_QA.sh<br />
<br />
===Usage===<br />
The usage is DTI_QA.sh <subjectID> <group>, or DTI_QA.sh all <group>.<br />
<br />
For example, to run the script on one subject in the CNP schizophrenia group,<br/><br />
:> DTI_QA.sh CNP_50006B SCHZ &<br />
<br />
Alternatively, to run the script on all subjects in the CNP schizophrenia group,<br/><br />
:> DTI_QA.sh all SCHZ &<br />
<br />
===Script Actions===<br />
This script creates a small text file to be used in QA. If the script runs properly it:<br/><br />
1. matches bvals and bvecs<br/><br />
2. calculates mean in-mask FA and MD<br/><br />
3. calculates motion in each direction<br/><br />
4. creates a standard deviation file for regular and mcf images<br/><br />
5. uses regional masks to calculate the percentage of cropped voxels in the occipital lobe, frontal lobe, superior region, temporal lobes and cerebellum.<br/><br />
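Step 5 amounts to counting zero-signal voxels inside each regional mask. A self-contained sketch, with plain Python lists standing in for the NIfTI data arrays:

```python
# Sketch: percentage of zero-signal ("cropped") voxels within a mask.
# Plain lists stand in for the flattened image and mask volumes.
def percent_cropped(data, mask):
    in_mask = [d for d, m in zip(data, mask) if m]
    if not in_mask:
        return 0.0
    return 100.0 * sum(1 for d in in_mask if d == 0) / len(in_mask)
```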
<br />
===Output===<br />
The script will create a <b>dti_report.txt</b> file in the raw DTI directory of the subject.<br />
<br />
==How to Do QA==<br />
===Check Diagnostic Log===<br />
After running the dti_proc.sh script, check the diagnostic log for quality assurance of the DTI data:<br/><br />
<br />
1. Open the dti_diag.pdf, using the command <b>[[Basic UNIX Commands#evince | evince]]</b>:<br />
:> evince dti_diag.pdf &<br />
<br />
2. Check whether the bvals and bvecs match the CNP standards, and log this in the [https://spreadsheets.google.com/ccc?key=0AhLKRRgAIOCVdG9rY0szNFppcGZ0QmF3ejBYLUY5S0E&hl=en#gid=0 CNP DTI QA Google Document].<br />
<br />
3. Go to the raw DTI directory of the subject, and open the dti_report.txt file, using the command <b>[[Basic UNIX Commands#emacs | emacs]]</b>:<br />
:> cd /space/raid2/data/poldrack/CNP/SCHZ/CNP_50006B/DTI_64DIR_3<br />
:> ls<br />
:> emacs dti_report.txt &<br />
<br />
4. Check whether the values in dti_report.txt meet the CNP standards, and log them in the [https://spreadsheets.google.com/ccc?key=0AhLKRRgAIOCVdG9rY0szNFppcGZ0QmF3ejBYLUY5S0E&hl=en#gid=0 CNP DTI QA Google Document].<br />
<br />
===Check for Artifacts===<br />
Artifacts observed in this data set include:<br />
:*missing slices -- these occur on only one volume, and consist of an entire isolated missing horizontal slice<br />
:*vibration artifact (only on BMC subjects) -- this usually shows as a red patch directly on the midline, primarily in the parietal region<br />
:*striping<br />
:*cropping<br />
<br />
===Check Raw Data===<br />
====Check FA Map====<br />
After running the dti_proc_regressor.sh script, check the fractional anisotropy (FA) map for quality assurance:<br/><br />
1. Log on to func and go to the directory /space/raid2/data/poldrack/CNP/${group}/CNP_${subjectID}/raw/DTI_64DIR_*:<br />
:> ssh $username@funcserv1<br />
:> cd /space/raid2/data/poldrack/CNP/${group}/CNP_${subjectID}/raw/DTI_64DIR_*<br />
:> ls<br />
<br />
2. Open the FA map in FSLView:<br />
:> fslview dtifit_FA.nii.gz &<br />
<br />
3. Check whether the FA map includes the entire brain and whether it looks unusual<br />
<br />
====Check Color Map====<br />
After running the dti_proc.sh script, check the color map for quality assurance:<br/><br />
1. Log on to func and go to the directory /space/raid2/data/poldrack/CNP/${group}/CNP_${subjectID}/raw/DTI_64DIR_*:<br />
:> ssh $username@funcserv1<br />
:> cd /space/raid2/data/poldrack/CNP/${group}/CNP_${subjectID}/raw/DTI_64DIR_*<br />
:> ls<br />
<br />
2. Open both the FA and color maps in FSLView (or add the color map to FSLView, if you are already viewing the FA map):<br />
:> fslview dtifit_FA.nii.gz dtifit_V1.nii.gz &<br />
<br />
3. To view the color map, select the dtifit_V1 file, and press the "i" button. An "Overlay Information Dialog" window will appear. For "Display as:", select "RGB" and for "Modulation:", select "dtifit_FA". Close this window.<br />
<br />
4. The color map should now display with a dark background and red/blue/green tracts.<br />
<br />
5. Check if the directions of the major fiber tracts are colored appropriately by scrolling through the slices. In the coronal view, the corticospinal tract (superior-inferior) should be blue. In the sagittal view, the corpus callosum (right-left) should be red. In the axial view, the anterior-posterior tracts should be green.<br />
<br />
Also, check whether the color map includes the entire brain and whether it looks unusual. Log this in the appropriate Google Doc.<br />
<br />
====Check for Cropping====<br />
In the output from the DTI_QA script is a number for each of a set of regions (cerebellum, superior, temporal, frontal). This number represents the percentage of voxels missing in that region. If the number is greater than about 10, you should go back and look at the FA map to make sure that actual tract data is not missing (some small percent, which would usually represent grey matter, can be missing off the edges without much effect). There will always be a large percentage of cerebellum voxels missing, but this can be ignored.<br />
<br />
====Watch raw data as movie====<br />
Load the raw data file (like DTI_64dir_7.nii.gz) into fslview, and watch through each volume as a movie. It is normal for the first volume to be much brighter; that is the B0 image.<br />
<br />
==QA Rating System==<br />
After logging the intermediate steps in the google doc, a final rating can be calculated. This is based on:<br />
:1. Coverage flag (based on cropping measures rated 0=no cropping, 1=minor cropping, 2=severe unusable cropping)<br />
:2. Motion flags (based on watching raw data as movie, and on pdfs).<br />
:3. Tensor direction flags (based on bvals and bvecs and color map)<br />
:4. Artifact flags<br />
<br />
The overall quality score is generated from these measures and ranges from 1-4. <br />
:1=excellent<br />
:2=good (usable, but depending on the analysis you might want to look at the reason for the score)<br />
:3=fair (usable, but depending on the analysis you might want to look at the reason for the score)<br />
:4=unusable (all individuals with vibration artifacts are in this category, along with any others with irreconcilable problems)<br />
:-1=not evaluated<br />
<br />
The scores and reasons for them are available on the HTAC database.<br />
<br />
<br />
[[File:DTI_Rankings.png]]<br />
<br />
<br />
<br />
----<br />
Link back to [[LA5C]] page. <br/><br />
Return to [[CNP]]</div>

'''DTI''' (last edited by Katie, 2012-06-14)
<hr />
<div>__TOC__<br />
<br />
<b> Back to [[HTAC]]</b><br/><br />
<b> Back to [[LA5C]]</b><br />
<br />
=Scan Parameters=<br />
The DTI was acquired using a 64-direction sequence. Parameters were: 2mm slices, TR/TE=9000/93, 1 average, 96x96 matrix, 90 degree flip angle, axial slices, b=1000.<br />
<br />
=Processing=<br />
==dti_proc script==<br />
dti_proc.sh is a script written by Russ Poldrack to preprocess the raw CNP DTI data. The path for the script is /space/raid2/data/poldrack/CNP/scripts/dti_proc.sh. <br />
<br />
To run the script on an entire subject group (CONTROLS, SCHZ, BIPOLAR, ADHD) use the wrapper script run_dti_proc.sh<br />
<br />
NOTE: An alternate version exists, called dti_proc_regressor. The difference between the two versions is that the regressor script contains a nuisance regressor, based on the Gallichan (2010) paper, that can to some extent correct for vibration artifacts. This artifact exists in BMC data from before Fall 2010; after Fall 2010 the table was bolted down, correcting the artifact. The CCN table was bolted at installation, thus avoiding the artifact entirely. Unfortunately the regressor does not entirely correct the issue, so the shareable data uses the original version of the script, with subjects who show the artifact marked for elimination.<br />
<br />
===Usage===<br />
For example, to run the script on one CNP subject,<br/><br />
1. Log on to func and go to the directory /space/raid2/data/poldrack/CNP/scripts:<br />
:> ssh $username@funcserv1<br />
:> cd /space/raid2/data/poldrack/CNP/scripts<br />
<br />
2. Find the path of the raw DTI directory for one subject, and run the script, e.g.:<br />
:> dti_proc /space/raid2/data/poldrack/CNP/SCHZ/CNP_50006B/raw/DTI_64DIR_3 &<br />
<br />
===Script Actions===<br />
The dti_proc script processes the data using FSL Diffusion Toolbox (FDT). <br />
<br />
Steps:<br/><br />
1. the B0 image is skull stripped, creating B0_image_brain<br/><br />
2. the raw data are registered to the first (B0) image using mcflirt; this both corrects for eddy currents and helps with motion. The resulting file is dti_mcf<br/><br />
3. dtifit is run, creating FA, L1, L2, MD and other processed images in subject space<br/><br />
4. the FA and color maps are registered to MNI space (FA2std and V12std)<br/><br />
5. generic ROIs from the JHU atlas are applied to the FA image in standard space <br/><br />
:WARNING- the resulting values are NOT to be used as data, rather just as first-pass markers for scan integrity. They are not well enough registered to be used for anything else, as evidenced by the low FA values.<br/><br />
6. motion is calculated<br/><br />
7. bvecs and bvals for each subject are compared to the CNP standards <br/><br />
:NOTE: there are different bvecs and bvals for subjects on the CCN and BMC scanners, as well as for a subset of subjects scanned during the transition and initial set-up of the CCN<br/><br />
8. a pdf is generated (dti_diag.pdf)<br/><br />
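For the motion calculation, the numbers come from the MCFLIRT .par file (six columns per volume: three rotations in radians, then three translations in mm). A minimal sketch summarizing mean absolute translation per axis:

```python
# Sketch: mean absolute translation (mm) per axis from MCFLIRT .par lines.
# Each line has six values: rx ry rz (radians), then tx ty tz (mm).
def mean_abs_translation(par_lines):
    trans = [[abs(float(v)) for v in ln.split()[3:6]]
             for ln in par_lines if ln.strip()]
    n = len(trans)
    return [sum(axis) / n for axis in zip(*trans)]
```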
<br />
===Output===<br />
For the subject indicated, the script will output the following files in the raw DTI directory:<br />
:B0_image_brain_mask.nii.gz<br />
:B0_image_brain.nii.gz<br />
:B0_image.nii.gz<br />
:dti_diag.pdf<br />
:dtifit_FA2std.nii.gz<br />
:dtifit_FA.nii.gz<br />
:dtifit_L1.nii.gz<br />
:dtifit_L2.nii.gz<br />
:dtifit_L3.nii.gz<br />
:dtifit.log<br />
:dtifit_MD.nii.gz<br />
:dtifit_MO.nii.gz<br />
:dtifit_SO.nii.gz<br />
:dtifit_V12std.nii.gz<br />
:dtifit_V1.nii.gz<br />
:dtifit_V2.nii.gz<br />
:dtifit_V3.nii.gz<br />
:dti_mcf.par<br />
:dti_proc.log<br />
:FA2std.mat<br />
<br />
=Quality Control=<br />
A complete description of the QA can be found at [[DTI_Quality_Control]].</div>
'''HTAC''' (last edited by Katie, 2012-06-14)
<hr />
<div>==CNP HTAC Organizational Wiki==<br />
<br />
This Wiki is a collaborative site for interaction with the CNP HTAC dataset, primarily the LA2k data. We hope that many people will contribute, and that this site can make the HTAC database a well-organized and well-documented data resource. Questions about this site can be directed to me.<br />
<br />
thanks<br />
<br />
Fred<br />
<br />
===Bootcamp Slides===<br />
<br />
* [[bootcamp_April14| From April 14th ]]<br />
<br />
===CNP Who to bug and FAQ===<br />
Decision tree for open questions about LA2k<br />
<br />
* Check the [[HTAC_FAQ]] for an answer<br />
* Post a question to the [[HTAC_FAQ]]<br />
* Check the HTAC database/codebook for questions about variables/tests at http://npistat.org/htac/<br />
* If there is a question about statistics or analysis, direct your questions to Bob and Catherine Sugar<br />
* If your question is about the database, direct your questions to Catherine Sugar<br />
* If your question is about details about the execution of the HTAC, direct your questions to Bob<br />
* If your question is about the screening, inclusion/exclusion criteria, or participants, direct your questions to Bob<br />
* If your question is about any [[MM measures]], direct your question to Bob and Ty<br />
* If your question is about any [[RI measures]], direct your questions to Bob and Eydie<br />
* If you've made it this far and still haven't emailed someone, you can ask Fred who to email. <br />
<br />
===CNP Publication and Query Guidelines===<br />
<br />
* [[LA2k manual]]<br />
* [[CNP publication policy]]<br />
* [[CNP data query policy]]<br />
<br />
===LA2K Standardized Task Methods and Parameters===<br />
<br />
* [[CNP_Stop_Signal]] <br />
* [[CNP_BART]] <br />
* [[CNP_DRL]] <br />
* [[CNP_RL]] <br />
* [[CNP_DDT]] <br />
* [[CNP_CPT]] <br />
* [[CNP_TS]] <br />
* [[CNP_SCWT]] <br />
* [[CNP_ANT]] <br />
* [[CNP_SCAP]] <br />
* [[CNP_VCAP]] <br />
* [[CNP_SMNM]] <br />
* [[CNP_VMNM]]<br />
* [[CNP_RK]] <br />
* [[CNP_SR]]<br />
<br />
===[[LA5C]]===<br />
* [[LA5C#Participants | Participants]]<br />
* [[LA5C#Contact_People | Contact People]]<br />
* [[LA5C#General_Methods | General Methods]]<br />
* [[LA5C#Facilities | Facilities]]<br />
* [[LA5C#Measures | Measures]]<br />
:'''Functional Data:'''<br/><br />
:*[[BART]]<br/><br />
:*[[BREATH HOLDING]]<br/><br />
:*[[PAM]]<br/><br />
:*[[RESTING STATE]]<br/><br />
:*[[SCAP]]<br/><br />
:*[[STOPSIGNAL]]<br/><br />
:*[[TASKSWITCHING]]<br/><br />
<br />
:'''Structural Data:'''<br/><br />
:*[[DTI]]<br/><br />
:*[[MATCHED BANDWIDTH HIRES]]<br/><br />
:*[[MPRAGE]]<br/><br />
* [[LA5C#Requesting_and_Analyzing_Data | Requesting and Analyzing LA5C Data]]</div>Katiehttp://lcni-3.uoregon.edu/phenowiki/index.php?title=LA5C&diff=8301LA5C2012-06-14T18:37:01Z<p>Katie: </p>
<hr />
<div>====LA5C====<br />
<br />
The LA5C was a study nested within LA2K, extending that work to include patients. The assessed population included 200 healthy controls recruited from LA2K, 100 schizophrenia patients, 100 bipolar patients, and 100 ADHD patients. <br />
<br />
See here for the LA2K manual: [[LA2k_manual]]. <br/><br />
<br />
===Participants===<br />
The participants, ages 21-50, were recruited by community advertisements from the Los Angeles area and completed extensive neuropsychological testing in addition to fMRI scanning. To be included, individuals had to be either “White, Not of Hispanic or Latino Origin” or “Hispanic or Latino, of Any Race,” following NIH designations of racial and ethnic minority groups, and to have completed at least 8 years of education (other racial and ethnic minority groups were excluded because their inclusion was thought to increase the risk of confounding planned genetic studies). For participants who spoke both English and Spanish, the language used for testing was determined by a verbal fluency test. Participants were screened for neurological disease, history of head injury with loss of consciousness or cognitive sequelae, use of psychoactive medications, substance dependence within the past 6 months, history of major mental illness or ADHD, and current mood or anxiety disorder. Self-reported history of psychopathology was verified with the SCID-IV (First, Spitzer, Gibbon, & Williams, 1995). Urinalysis was used to screen for drugs of abuse (cannabis, amphetamine, opioids, cocaine, benzodiazepines) on the day of testing, and participants with positive results were excluded. <br />
<br />
A portion of this large sample took part in two separate fMRI sessions, which each included one-hour of behavioral testing and a one-hour scan on the same day. Participants were recruited from the parent study to participate in the fMRI portion if they successfully completed all previous testing sessions, and did not meet the following additional exclusion criteria: history of significant medical illness, contraindications for MRI (including pregnancy), any mood-altering medication on scan day (based on self-report), vision that was insufficient to see task stimuli, and left-handedness. <br />
<br />
After receiving a thorough explanation, all participants gave written informed consent according to the procedures approved by the University of California Los Angeles Institutional Review Board.<br />
<br />
===Contact People===<br />
For more information on the LA5C please contact:<br/><br />
Eliza Congdon, Ph.D. [mailto:econgdon@ucla.edu email econgdon@ucla.edu]<br/><br />
Katie Karlsgodt, Ph.D. [mailto:kkarlsgo@ucla.edu email kkarlsgo@ucla.edu]<br/><br />
Russell Poldrack, Ph.D. [mailto:poldrack@mail.utexas.edu email poldrack@mail.utexas.edu]<br/><br />
Fred Sabb, Ph.D. [mailto:fwsabb@gmail.com email fwsabb@gmail.com]<br/><br />
<br />
===General Methods===<br />
====Documentation====<br />
Full documentation of study procedures can be found in the LA5C Manual. For parameters of individual scans or further information on specific tasks, please see pages below.<br />
<br />
====Overall Study Design====<br />
Subjects participated in two scans ("A" and "B") in a counterbalanced fashion. <br/><br />
Scan A included: localizer, MPRAGE, DTI, Matched Bandwidth Hires, BART, PAM encoding and PAM retrieval.<br/><br />
Scan B included: localizer, Matched Bandwidth Hires, Resting State, Breath Hold Task, StopSignal, SCAP, and Taskswitching.<br/><br />
<br />
====ID Numbers====<br />
Identification numbers were assigned in order of recruitment. Healthy controls have ID numbers starting with 1 (e.g., CNP_11156). Patients with schizophrenia have IDs starting with 5 (e.g., CNP_50004), patients with bipolar disorder start with 6, and patients with ADHD start with 7. A and B scans were designated by appending an A or B to the ID: CNP_11156A and CNP_11156B are the two scans for the same individual.<br />
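The ID scheme above lends itself to mechanical decoding. A minimal sketch, using the group labels from the CNP directory structure; the function itself is hypothetical, not part of any CNP tooling:

```python
# Decode a CNP scan ID into group, subject number, and A/B session.
# Digit-to-group mapping follows the scheme described above.
GROUPS = {"1": "CONTROLS", "5": "SCHZ", "6": "BIPOLAR", "7": "ADHD"}

def parse_cnp_id(scan_id):
    core = scan_id.replace("CNP_", "")
    session = core[-1] if core[-1] in "AB" else None  # A/B scan suffix, if any
    number = core.rstrip("AB")
    return {"group": GROUPS.get(number[0], "UNKNOWN"),
            "subject": number,
            "session": session}
```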
<br />
===Facilities===<br />
Subjects were scanned at two different facilities, each with a Siemens Trio, using the same sequences. Quality control measures were in place before the decision to switch to the CCN scanner; however, as always, there are small differences between the scanners. Therefore, which scanner was used must be accounted for in all analyses.<br/><br />
<br />
[[BMC| Ahmanson Lovelace Brain Mapping Center]]<br/><br />
[[CCN| Staglin Center for Cognitive Neuroscience]]<br/><br />
<br />
===Measures===<br />
'''Image Acquisition:'''<br/><br />
[[Parameters]]<br/><br />
[[Procedure]]<br/><br />
<br />
'''Functional:'''<br/><br />
[[BART]]<br/><br />
[[BREATH HOLDING]]<br/><br />
[[PAM]]<br/><br />
[[RESTING STATE]]<br/><br />
[[SCAP]]<br/><br />
[[STOPSIGNAL]]<br/><br />
[[TASKSWITCHING]]<br/><br />
<br />
'''Structural:'''<br/><br />
[[DTI]]<br/><br />
[[MATCHED BANDWIDTH HIRES]]<br/><br />
[[MPRAGE]]<br/><br />
<br />
'''Physio Acquisition'''<br/><br />
[[Physio Data Setup]]<br/><br />
<br />
===Preprocessing and Quality Control===<br />
<br />
<br />
'''Preprocessing:'''<br/><br />
[[Initial Processing]]<br/><br />
[[Quality Control]]<br/><br />
[[LA5C Exclusions: Withdrawn, Dropped, Unusable]]<br/><br />
[[LA3C ID Switches]]<br/><br />
[[LA5C CCN Re-Scans]]<br/><br />
[[BMC/CCN Scanner Switch & Marker History]]<br/><br />
[[MPRAGE Motion]]<br/><br />
[[BOLD Motion]]<br />
<br />
[[DTI Quality Control]]<br><br />
[[Freesurfer Quality Control]]<br><br />
<br />
[[Final Tables of Usables Ns]]<br><br />
<br />
[[Notes on changes made to HTAC Database or Final Log Files post-QC Completion (6/13/12)]]<br><br />
<br />
===Requesting and Analyzing Data===<br />
[[CNP_data_query_policy]]<br />
<br />
[[Notes on Downloading Imaging Workflow Data]] <br/><br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
Link back to [[HTAC]] page.</div>Katiehttp://lcni-3.uoregon.edu/phenowiki/index.php?title=DTI&diff=8300DTI2012-06-14T18:34:14Z<p>Katie: /* Check FA Map */</p>
<hr />
<div>__TOC__<br />
<br />
<b> Back to [[HTAC]]</b><br />
<br />
=Scan Parameters=<br />
The DTI was acquired using a 64-direction sequence. Parameters were: 2 mm slices, TR/TE = 9000/93 ms, 1 average, 96x96 matrix, 90 degree flip angle, axial slices, b = 1000 s/mm^2.<br />
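As a rough sanity check, these parameters imply a nominal acquisition time of about ten minutes, assuming one b=0 volume plus 64 diffusion-weighted volumes and no acceleration; the protocol's actual duration may differ slightly:

```python
# Nominal scan time implied by the stated parameters (assumption: 65 volumes total).
tr_s = 9.0            # TR = 9000 ms
n_volumes = 64 + 1    # 64 diffusion directions + 1 B0 volume
scan_time_min = tr_s * n_volumes / 60.0
print(scan_time_min)  # 9.75 minutes
```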
<br />
=Processing=<br />
==dti_proc script==<br />
dti_proc.sh is a script written by Russ Poldrack to preprocess the raw CNP DTI data. The path for the script is /space/raid2/data/poldrack/CNP/scripts/dti_proc.sh. <br />
<br />
To run the script on an entire subject group (CONTROLS, SCHZ, BIPOLAR, ADHD) use the wrapper script run_dti_proc.sh<br />
<br />
NOTE: An alternate version of the script, dti_proc_regressor, also exists. The difference between the two versions is that the regressor script adds a nuisance regressor, based on the Gallichan, 2010 paper, that can partially correct for vibration artifacts. This artifact is present in BMC data acquired before Fall 2010; after Fall 2010 the table was bolted down, correcting the artifact. The CCN table was bolted at installation, thus avoiding the artifact entirely. Because the regressor does not entirely correct the issue, the shareable data use the original version of the script, with subjects who show the artifact marked for elimination.<br />
<br />
===Usage===<br />
For example, to run the script on one CNP subject,<br/><br />
1. Log on to func and go to the directory /space/raid2/data/poldrack/CNP/scripts:<br />
:> ssh $username@funcserv1<br />
:> cd /space/raid2/data/poldrack/CNP/scripts<br />
<br />
2. Find the path of the raw DTI directory for one subject, and run the script, e.g.:<br />
:> dti_proc /space/raid2/data/poldrack/CNP/SCHZ/CNP_50006B/raw/DTI_64DIR_3 &<br />
<br />
===Script Actions===<br />
The dti_proc script processes the data using FSL Diffusion Toolbox (FDT). <br />
<br />
Steps:<br/><br />
1. The B0 image is skull stripped, creating B0_image_brain<br/><br />
2. The raw data are registered to the first (B0) volume using mcflirt; this both corrects for eddy currents and reduces the effects of head motion. The resulting file is dti_mcf<br/><br />
3. dtifit is run, creating FA, L1, L2, MD, and other processed images in subject space<br/><br />
4. The FA and color maps are registered to MNI space (FA2std and V12std).<br/><br />
5. Generic ROIs from the JHU atlas are applied to the FA image in standard space. <br/><br />
:WARNING: the resulting values are NOT to be used as data, only as first-pass markers of scan integrity. The registration is not accurate enough for them to be used for anything else, as evidenced by the low FA values.<br/><br />
6. Motion is calculated<br/><br />
7. bvecs and bvals for each subject are compared to the CNP standards. <br/><br />
:NOTE: there are different bvecs and bvals for subjects on the CCN and BMC scanners, as well as for a subset of subjects scanned during the transition and initial setup of the CCN<br/><br />
8. A pdf is generated (dti_diag.pdf)<br/><br />
<br />
===Output===<br />
For the subject indicated, the script will output the following files in the raw DTI directory:<br />
:B0_image_brain_mask.nii.gz<br />
:B0_image_brain.nii.gz<br />
:B0_image.nii.gz<br />
:dti_diag.pdf<br />
:dtifit_FA2std.nii.gz<br />
:dtifit_FA.nii.gz<br />
:dtifit_L1.nii.gz<br />
:dtifit_L2.nii.gz<br />
:dtifit_L3.nii.gz<br />
:dtifit.log<br />
:dtifit_MD.nii.gz<br />
:dtifit_MO.nii.gz<br />
:dtifit_SO.nii.gz<br />
:dtifit_V12std.nii.gz<br />
:dtifit_V1.nii.gz<br />
:dtifit_V2.nii.gz<br />
:dtifit_V3.nii.gz<br />
:dti_mcf.par<br />
:dti_proc.log<br />
:FA2std.mat<br />
<br />
=Quality Control=<br />
The checking procedures for CNP follow the Cannon Lab DTI QA Protocol, which was adapted from procedures in Paul Thompson's lab.<br />
<br />
==DTI_QA script==<br />
The additional QA script is located at /space/raid2/data/poldrack/CNP/scripts/DTI_QA.sh<br />
<br />
===Usage===<br />
The usage is DTI_QA <subjectID> <group>, or DTI_QA all <group> to run all subjects in a group.<br />
<br />
For example, to run the script on one subject in the CNP schizophrenia group,<br/><br />
1. Log on to func and go to the directory /space/raid2/data/poldrack/CNP/scripts:<br />
:> ssh $username@funcserv1<br />
:> cd /space/raid2/data/poldrack/CNP/scripts<br />
<br />
2. Run the script, e.g.:<br />
:> dti_proc_temp.sh CNP_50006B SCHZ &<br />
<br />
3. Alternatively, to run the script on all subjects in the CNP schizophrenia group,<br/><br />
:> dti_proc_temp.sh all SCHZ &<br />
<br />
===Script Actions===<br />
This script creates a small text file to be used in QA. If the script runs properly it:<br/><br />
1. matches bvals and bvecs<br/><br />
2. calculates mean in-mask FA and MD<br/><br />
3. calculates motion in each direction<br/><br />
4. creates a standard deviation file for regular and mcf images<br/><br />
5. uses regional masks to calculate the percentage of cropped voxels in the occipital lobe, frontal lobe, superior region, temporal lobes and cerebellum.<br/><br />
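The cropping measure in step 5 can be sketched as the percentage of in-mask voxels where the registered FA map contains no data. The array-based interface and the zero-means-missing convention are assumptions for illustration, not the script's actual code:

```python
# Percentage of voxels in a regional mask where the FA map has no data.
import numpy as np

def percent_cropped(fa_std, region_mask):
    region = region_mask > 0
    n_region = region.sum()
    if n_region == 0:
        return 0.0
    missing = (fa_std[region] == 0).sum()  # zero FA treated as missing data
    return 100.0 * missing / n_region
```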
<br />
===Output===<br />
The script will create a <b>dti_report.txt</b> file in the raw DTI directory of the subject.<br />
<br />
==How to Do QA==<br />
===Check Diagnostic Log===<br />
After running the dti_proc.sh script, check the diagnostic log for quality assurance of the DTI data:<br/><br />
1. Log on to func and go to the directory, for example, /space/raid2/data/poldrack/CNP/${group}/CNP_{subjectID}/raw/DTI_64DIR_*:<br />
:> ssh $username@funcserv1<br />
:> cd /space/raid2/data/poldrack/CNP/${group}/CNP_{subjectID}/raw/DTI_64DIR_*<br />
:> ls<br />
<br />
2. Open the dti_diag.pdf, using the command <b>[[Basic UNIX Commands#evince | evince]]</b>:<br />
:> evince dti_diag.pdf &<br />
<br />
3. Check whether the bvals and bvecs match the CNP standards, and log this in the [https://spreadsheets.google.com/ccc?key=0AhLKRRgAIOCVdG9rY0szNFppcGZ0QmF3ejBYLUY5S0E&hl=en#gid=0 CNP DTI QA Google Document].<br />
<br />
4. Go to the raw DTI directory of the subject, and open the dti_report.txt file, using the command <b>[[Basic UNIX Commands#emacs | emacs]]</b>:<br />
:> cd /space/raid2/data/poldrack/CNP/SCHZ/CNP_50006B/DTI_64DIR_3<br />
:> ls<br />
:> emacs dti_report.txt &<br />
<br />
5. Check whether the bvals and bvecs match the CNP standards, and log this in the [https://spreadsheets.google.com/ccc?key=0AhLKRRgAIOCVdG9rY0szNFppcGZ0QmF3ejBYLUY5S0E&hl=en#gid=0 CNP DTI QA Google Document].<br />
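The bvec/bval comparison against the CNP standards could look like the following; the tolerance and array layout are assumptions for illustration:

```python
# Compare a subject's gradient table to the standard one within a tolerance.
import numpy as np

def gradients_match(bvals, bvecs, std_bvals, std_bvecs, tol=1e-3):
    bvals, std_bvals = np.atleast_1d(bvals), np.atleast_1d(std_bvals)
    bvecs, std_bvecs = np.atleast_2d(bvecs), np.atleast_2d(std_bvecs)
    if bvals.shape != std_bvals.shape or bvecs.shape != std_bvecs.shape:
        return False  # e.g., different number of directions
    return (np.allclose(bvals, std_bvals, atol=tol)
            and np.allclose(bvecs, std_bvecs, atol=tol))
```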
<br />
===Check for Artifacts===<br />
Artifacts observed in this data set include:<br />
:-missing slices: found on only one volume, consisting of an entire isolated missing horizontal slice<br />
:-vibration artifact (only in BMC subjects): usually shows as a red patch directly on the midline, primarily in the parietal region<br />
:-striping<br />
:-cropping<br />
<br />
===Check Raw Data===<br />
====Check FA Map====<br />
After running the dti_proc_regressor.sh script, check the fractional anisotropy (FA) map for quality assurance:<br/><br />
1. Log on to func and go to the directory /space/raid2/data/poldrack/CNP/${group}/CNP_{subjectID}/raw/DTI_64DIR_*:<br />
:> ssh $username@funcserv1<br />
:> cd /space/raid2/data/poldrack/CNP/${group}/CNP_{subjectID}/raw/DTI_64DIR_*<br />
:> ls<br />
<br />
2. Open the FA map in FSLView:<br />
:> fslview dtifit_FA.nii.gz &<br />
<br />
3. Check whether the FA map includes the entire brain and whether it looks unusual<br />
<br />
====Check Color Map====<br />
After running the dti_proc.sh script, check the color map for quality assurance:<br/><br />
1. Log on to func and go to the directory /space/raid2/data/poldrack/CNP/${group}/CNP_{subjectID}/raw/DTI_64DIR_*:<br />
:> ssh $username@funcserv1<br />
:> cd /space/raid2/data/poldrack/CNP/${group}/CNP_{subjectID}/raw/DTI_64DIR_*<br />
:> ls<br />
<br />
2. Open both the FA and color maps in FSLView (or add the color map to FSLView, if you are already viewing the FA map):<br />
:> fslview dtifit_FA.nii.gz dtifit_V1.nii.gz &<br />
<br />
3. To view the color map, select the dtifit_V1 file, and press the "i" button. An "Overlay Information Dialog" window will appear. For "Display as:", select "RGB" and for "Modulation:", select "dtifit_FA". Close this window.<br />
<br />
4. The color map should now display with a dark background and red/blue/green tracts.<br />
<br />
5. Check if the directions of the major fiber tracts are colored appropriately by scrolling through the slices. In the coronal view, the corticospinal tract (superior-inferior) should be blue. In the sagittal view, the corpus callosum (right-left) should be red. In the axial view, the anterior-posterior tracts should be green.<br />
<br />
Also, check whether the color map includes the entire brain and whether it looks unusual. Log this in the appropriate Google doc.<br />
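The red/green/blue assignment described above follows the standard directionally encoded color (DEC) convention: the RGB channels are the absolute components of the principal eigenvector (V1), modulated by FA. A minimal illustration of the convention, not FSLView's actual implementation:

```python
# DEC coloring: R=|x| (left-right), G=|y| (anterior-posterior), B=|z| (superior-inferior).
import numpy as np

def dec_color(v1, fa):
    v1 = np.asarray(v1, dtype=float)
    v = v1 / np.linalg.norm(v1)   # unit principal eigenvector
    return np.abs(v) * fa         # brightness scaled by anisotropy
```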
<br />
====Check for Cropping====<br />
In the output from the DTI_QA script is a number for each of a set of regions (cerebellum, superior, temporal, frontal). This number represents the percentage of voxels missing in that region. If the number is greater than about 10, go back and look at the FA map to make sure that actual tract data are not missing (a small percentage, usually representing grey matter at the edges, can be missing without much effect). There will always be a large percentage of cerebellum voxels missing, but this can be ignored.<br />
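The ~10% rule of thumb above can be expressed as a small helper (hypothetical, for illustration) that flags regions needing a second look, skipping the cerebellum since it is always partly cut off:

```python
# Flag regions whose percentage of missing voxels exceeds the threshold.
def regions_to_review(cropping, threshold=10.0):
    return sorted(region for region, pct in cropping.items()
                  if region != "cerebellum" and pct > threshold)
```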
<br />
====Watch raw data as movie====<br />
Load the raw data file (e.g., DTI_64dir_7.nii.gz) into fslview and watch through each volume as a movie. It is normal for the first volume to be much brighter; that is the B0 image.<br />
<br />
==QA Rating System==<br />
After logging the intermediate steps in the google doc, a final rating can be calculated. This is based on:<br />
:1. Coverage flag (based on cropping measures rated 0=no cropping, 1=minor cropping, 2=severe unusable cropping)<br />
:2. Motion flags (based on watching raw data as movie, and on pdfs).<br />
:3. Tensor direction flags (based on bvals and bvecs and color map)<br />
:4. Artifact flags<br />
<br />
The overall quality score is generated from these measures and ranges from 1-4: <br />
:1=excellent<br />
:2=good (usable, but depending on the analysis you may want to look at the reason for the score)<br />
:3=fair (usable, but depending on the analysis you may want to look at the reason for the score)<br />
:4=unusable (all individuals with vibration artifacts are in this category, along with any others with irreconcilable problems)<br />
:-1=not evaluated<br />
<br />
The scores and reasons for them are available on the HTAC database.<br />
<br />
<br />
[[File:DTI_Rankings.png]]<br />
<br />
<br />
<br />
----<br />
Link back to [[LA5C]] page. <br/><br />
Return to [[CNP]]</div>Katiehttp://lcni-3.uoregon.edu/phenowiki/index.php?title=LA5C&diff=8297LA5C2012-06-14T18:23:32Z<p>Katie: /* General Methods */</p>
<hr />
<div>====LA5C====<br />
<br />
The LA5C was a study nested within LA2K, extending that work to include patients. The assessed population included 200 healthy controls recruited from LA2K, 100 schizophrenia patients, 100 bipolar patients, and 100 ADHD patients. <br />
<br />
<br />
<br />
See here for the LA2K manual: [[LA2k_manual]]. <br/><br />
<br />
===Participants===<br />
The participants, ages 21-50, were recruited by community advertisements from the Los Angeles area and completed extensive neuropsychological testing in addition to fMRI scanning. To be included, individuals had to be either “White, Not of Hispanic or Latino Origin” or “Hispanic or Latino, of Any Race,” following NIH designations of racial and ethnic minority groups, and had to have completed at least 8 years of education (other racial and ethnic groups were excluded because their inclusion was thought to increase the risk of confounding planned genetic studies). For participants who spoke both English and Spanish, the language used for testing was determined by a verbal fluency test. Participants were screened for neurological disease, history of head injury with loss of consciousness or cognitive sequelae, use of psychoactive medications, substance dependence within the past 6 months, history of major mental illness or ADHD, and current mood or anxiety disorder. Self-reported history of psychopathology was verified with the SCID-IV (First, Spitzer, Gibbon, & Williams, 1995). Urinalysis was used to screen for drugs of abuse (cannabis, amphetamine, opioids, cocaine, benzodiazepines) on the day of testing, and participants with positive results were excluded. <br />
<br />
A portion of this large sample took part in two separate fMRI sessions, each of which included one hour of behavioral testing and a one-hour scan on the same day. Participants were recruited from the parent study for the fMRI portion if they had successfully completed all previous testing sessions and did not meet any of the following additional exclusion criteria: history of significant medical illness, contraindications for MRI (including pregnancy), use of any mood-altering medication on scan day (based on self-report), vision insufficient to see the task stimuli, and left-handedness. <br />
<br />
After receiving a thorough explanation, all participants gave written informed consent according to the procedures approved by the University of California Los Angeles Institutional Review Board.<br />
<br />
===Contact People===<br />
For more information on the LA5C please contact:<br/><br />
Eliza Congdon, Ph.D. [mailto:econgdon@ucla.edu email econgdon@ucla.edu]<br/><br />
Katie Karlsgodt, Ph.D. [mailto:kkarlsgo@ucla.edu email kkarlsgo@ucla.edu]<br/><br />
Russell Poldrack, Ph.D. [mailto:poldrack@mail.utexas.edu email poldrack@mail.utexas.edu]<br/><br />
Fred Sabb, Ph.D. [mailto:fwsabb@gmail.com email fwsabb@gmail.com]<br/><br />
<br />
===General Methods===<br />
====Documentation====<br />
Full documentation of study procedures can be found in the LA5C Manual. For parameters of individual scans or further information on specific tasks, please see pages below.<br />
<br />
====Overall Study Design====<br />
Subjects participated in two scans ("A" and "B") in a counterbalanced fashion. <br/><br />
Scan A included: localizer, MPRAGE, DTI, Matched Bandwidth Hires, BART, PAM encoding and PAM retrieval.<br/><br />
Scan B included: localizer, Matched Bandwidth Hires, Resting State, Breath Hold Task, StopSignal, SCAP, and Taskswitching.<br/><br />
<br />
====ID Numbers====<br />
Identification numbers were assigned in order of recruitment. Healthy controls have ID numbers starting with 1 (e.g., CNP_11156). Patients with schizophrenia have IDs starting with 5 (e.g., CNP_50004), patients with bipolar disorder start with 6, and patients with ADHD start with 7. A and B scans were designated by appending an A or B to the ID: CNP_11156A and CNP_11156B are the two scans for the same individual.<br />
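The prefix convention above lends itself to simple scripting. A minimal sketch follows; the helper name `cnp_group` and the group labels other than SCHZ (which appears in the directory paths elsewhere on this wiki) are illustrative, not part of the study's actual codebase:

```shell
#!/bin/sh
# Map a CNP scan ID (e.g., CNP_11156A) to its diagnostic group and session,
# following the ID prefix convention described above.
cnp_group() {
    id=$1
    digits=${id#CNP_}          # strip the CNP_ prefix
    case $digits in
        1*) group=CONTROL ;;
        5*) group=SCHZ ;;
        6*) group=BIPOLAR ;;
        7*) group=ADHD ;;
        *)  group=UNKNOWN ;;
    esac
    case $digits in
        *A) session=A ;;
        *B) session=B ;;
        *)  session=NONE ;;
    esac
    echo "$group $session"
}
```

For example, `cnp_group CNP_50006B` prints `SCHZ B`.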
<br />
===Facilities===<br />
[[BMC]]<br/><br />
[[CCN]]<br/><br />
<br />
===Measures===<br />
'''Image Acquisition:'''<br/><br />
[[Parameters]]<br/><br />
[[Procedure]]<br/><br />
<br />
'''Functional:'''<br/><br />
[[BART]]<br/><br />
[[BREATH HOLDING]]<br/><br />
[[PAM]]<br/><br />
[[RESTING STATE]]<br/><br />
[[SCAP]]<br/><br />
[[STOPSIGNAL]]<br/><br />
[[TASKSWITCHING]]<br/><br />
<br />
'''Structural:'''<br/><br />
[[DTI]]<br/><br />
[[MATCHED BANDWIDTH HIRES]]<br/><br />
[[MPRAGE]]<br/><br />
<br />
'''Physio Acquisition'''<br/><br />
[[Physio Data Setup]]<br/><br />
<br />
===Preprocessing and Quality Control===<br />
<br />
<br />
'''Preprocessing:'''<br/><br />
[[Initial Processing]]<br/><br />
[[Quality Control]]<br/><br />
[[LA5C Exclusions: Withdrawn, Dropped, Unusable]]<br/><br />
[[LA3C ID Switches]]<br/><br />
[[LA5C CCN Re-Scans]]<br/><br />
[[BMC/CCN Scanner Switch & Marker History]]<br/><br />
[[MPRAGE Motion]]<br/><br />
[[BOLD Motion]]<br />
<br />
[[DTI Quality Control]]<br><br />
[[Freesurfer Quality Control]]<br><br />
<br />
[[Final Tables of Usables Ns]]<br><br />
<br />
[[Notes on changes made to HTAC Database or Final Log Files post-QC Completion (6/13/12)]]<br><br />
<br />
===Requesting and Analyzing Data===<br />
[[CNP_data_query_policy]]<br />
<br />
[[Notes on Downloading Imaging Workflow Data]] <br/><br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
Link back to [[HTAC]] page.</div>Katiehttp://lcni-3.uoregon.edu/phenowiki/index.php?title=Parameters&diff=8296Parameters2012-06-14T18:16:39Z<p>Katie: </p>
<hr />
<div>''Image acquisition'' <br/><br />
Data were collected using a 3T Siemens Trio MRI scanner. For each task run, functional T2*-weighted echoplanar images (EPIs) were collected with the following parameters: slice thickness = 4 mm, 34 slices, TR = 2 s, TE = 30 ms, flip angle = 90°, matrix = 64 x 64, FOV = 192 mm. Additionally, a T2-weighted matched-bandwidth high-resolution anatomical scan (same slice prescription as the EPI) and an MPRAGE were collected. The parameters for the T2 were: 4 mm slices, TR/TE = 5000/34 ms, 4 averages, matrix = 128 x 128, 90 degree flip angle. The parameters for the MPRAGE were: TR = 1.9 s, TE = 2.26 ms, FOV = 250 mm, matrix = 256 x 256, sagittal plane, slice thickness = 1 mm, 176 slices. The DTI parameters were: 64 directions, 2 mm slices, TR/TE = 9000/93 ms, 1 average, 96 x 96 matrix, 90 degree flip angle, axial slices, b = 1000 s/mm²<br/><br />
<br/><br />
The number of EPI volumes collected for each task is as follows: <br/><br />
BART: 267 <br/><br />
BHT: 79 <br/><br />
RESTING: 152 <br/><br />
TS: 208 <br/><br />
SST: 184 <br/><br />
SCAP: 291 <br/><br />
PAMENC: 242 <br/><br />
PAMRET: 268 <br/><br />
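The counts above can be encoded in a lookup table so that acquisitions with the wrong number of volumes are flagged automatically. A minimal sketch, assuming the observed count would in practice come from FSL's fslnvols (the function names here are illustrative):

```shell
#!/bin/sh
# Expected number of EPI volumes per task, from the list above.
expected_vols() {
    case $1 in
        BART)    echo 267 ;;
        BHT)     echo 79  ;;
        RESTING) echo 152 ;;
        TS)      echo 208 ;;
        SST)     echo 184 ;;
        SCAP)    echo 291 ;;
        PAMENC)  echo 242 ;;
        PAMRET)  echo 268 ;;
        *)       echo 0   ;;
    esac
}

# Compare an observed count (e.g., from `fslnvols run.nii.gz`) to the expectation.
check_vols() {
    task=$1; observed=$2
    if [ "$observed" -eq "$(expected_vols "$task")" ]; then
        echo "$task OK"
    else
        echo "$task MISMATCH (expected $(expected_vols "$task"), got $observed)"
    fi
}
```

For example, `check_vols BART "$(fslnvols bart.nii.gz)"` would print `BART OK` only when all 267 volumes are present.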
<br />
<br />
<br />
<br />
<br />
----<br />
Link back to [[LA5C]] page.</div>Katiehttp://lcni-3.uoregon.edu/phenowiki/index.php?title=MATCHED_BANDWIDTH_HIRES&diff=8295MATCHED BANDWIDTH HIRES2012-06-14T18:15:22Z<p>Katie: </p>
<hr />
<div>For use in registration of the functional tasks, a T2-weighted matched-bandwidth high-resolution anatomical scan (same slice prescription as the EPI) was acquired. The parameters were as follows: 4 mm slices, TR/TE = 5000/34 ms, 4 averages, matrix = 128 x 128, 90 degree flip angle.<br />
<br />
MBH files were skull-stripped with BET and checked during the QA process.<br />
<br />
Back to [[LA5C]]</div>Katiehttp://lcni-3.uoregon.edu/phenowiki/index.php?title=DTI_Quality_Control&diff=8294DTI Quality Control2012-06-14T18:12:18Z<p>Katie: /* Check FA Map */</p>
<hr />
<div>Link back to [[LA5C]] page.<br />
<br />
'''These are the usable subjects for each group, on each scanner.'''<br />
<br />
[[File:DTI_Rankings.png]]<br />
<br />
<br />
<br />
----<br />
=Quality Control=<br />
The checking procedures for CNP follow the Cannon Lab DTI QA Protocol, which was adapted from procedures in Paul Thompson's lab.<br />
<br />
==DTI_QA script==<br />
The additional QA script is located at /space/raid2/data/poldrack/CNP/scripts/DTI_QA.sh<br />
<br />
===Usage===<br />
The usage is DTI_QA <subjectID> <group>, or DTI_QA all <group> to run every subject in a group.<br />
<br />
For example, to run the script on one subject in the CNP schizophrenia group,<br/><br />
1. Log on to func and go to the directory /space/raid2/data/poldrack/CNP/scripts:<br />
:> ssh $username@funcserv1<br />
:> cd /space/raid2/data/poldrack/CNP/scripts<br />
<br />
2. Run the script, e.g.:<br />
:> dti_proc_temp.sh CNP_50006B SCHZ &<br />
<br />
3. Alternatively, to run the script on all subjects in the CNP schizophrenia group,<br/><br />
:> dti_proc_temp.sh all SCHZ &<br />
<br />
===Script Actions===<br />
This script creates a small text file to be used in QA. If the script runs properly it:<br/><br />
1. matches bvals and bvecs<br/><br />
2. calculates mean in-mask FA and MD<br/><br />
3. calculates motion in each direction<br/><br />
4. creates a standard deviation file for regular and mcf images<br/><br />
5. uses regional masks to calculate the percentage of cropped voxels in the occipital lobe, frontal lobe, superior region, temporal lobes and cerebellum.<br/><br />
<br />
===Output===<br />
The script will create a <b>dti_report.txt</b> file in the raw DTI directory of the subject.<br />
<br />
==How to Do QA==<br />
===Check Diagnostic Log===<br />
After running the dti_proc.sh script, check the diagnostic log for quality assurance of the DTI data:<br/><br />
1. Log on to func and go to the directory, for example, /space/raid2/data/poldrack/CNP/${group}/CNP_{subjectID}/raw/DTI_64DIR_*:<br />
:> ssh $username@funcserv1<br />
:> cd /space/raid2/data/poldrack/CNP/${group}/CNP_{subjectID}/raw/DTI_64DIR_*<br />
:> ls<br />
<br />
2. Open the dti_diag.pdf, using the command <b>[[Basic UNIX Commands#evince | evince]]</b>:<br />
:> evince dti_diag.pdf &<br />
<br />
3. Check whether the bvals and bvecs match the CNP standards, and log this in the [https://spreadsheets.google.com/ccc?key=0AhLKRRgAIOCVdG9rY0szNFppcGZ0QmF3ejBYLUY5S0E&hl=en#gid=0 CNP DTI QA Google Document].<br />
<br />
4. Go to the raw DTI directory of the subject, and open the dti_report.txt file, using the command <b>[[Basic UNIX Commands#emacs | emacs]]</b>:<br />
:> cd /space/raid2/data/poldrack/CNP/SCHZ/CNP_50006B/DTI_64DIR_3<br />
:> ls<br />
:> emacs dti_report.txt &<br />
<br />
5. Check whether the bvals and bvecs match the CNP standards, and log this in the [https://spreadsheets.google.com/ccc?key=0AhLKRRgAIOCVdG9rY0szNFppcGZ0QmF3ejBYLUY5S0E&hl=en#gid=0 CNP DTI QA Google Document].<br />
<br />
===Check for Artifacts===<br />
Artifacts observed in this data set include:<br />
:-missing slices: found on only one volume, consisting of an entire isolated missing horizontal slice<br />
:-vibration artifact (only in BMC subjects): usually shows as a red patch directly on the midline, primarily in the parietal region<br />
:-striping<br />
:-cropping<br />
<br />
===Check Raw Data===<br />
====Check FA Map====<br />
After running the dti_proc_regressor.sh script, check the fractional anisotropy (FA) map for quality assurance:<br/><br />
1. Log on to func and go to the directory /space/raid2/data/poldrack/CNP/${group}/CNP_{subjectID}/raw/DTI_64DIR_*:<br />
:> ssh $username@funcserv1<br />
:> cd /space/raid2/data/poldrack/CNP/${group}/CNP_{subjectID}/raw/DTI_64DIR_*<br />
:> ls<br />
<br />
2. Open the FA map in FSLView:<br />
:> fslview dtifit_FA.nii.gz &<br />
<br />
3. Check if the FA map includes the entire brain and if the FA map looks unusual or not<br />
<br />
====Check Color Map====<br />
After running the dti_proc.sh script, check the color map for quality assurance:<br/><br />
1. Log on to func and go to the directory /space/raid2/data/poldrack/CNP/${group}/CNP_{subjectID}/raw/DTI_64DIR_*:<br />
:> ssh $username@funcserv1<br />
:> cd /space/raid2/data/poldrack/CNP/${group}/CNP_{subjectID}/raw/DTI_64DIR_*<br />
:> ls<br />
<br />
2. Open both the FA and color maps in FSLView (or add the color map to FSLView, if you are already viewing the FA map):<br />
:> fslview dtifit_FA.nii.gz dtifit_V1.nii.gz &<br />
<br />
3. To view the color map, select the dtifit_V1 file, and press the "i" button. An "Overlay Information Dialog" window will appear. For "Display as:", select "RGB" and for "Modulation:", select "dtifit_FA". Close this window.<br />
<br />
4. The color map should now display with a dark background and red/blue/green tracts.<br />
<br />
5. Check if the directions of the major fiber tracts are colored appropriately by scrolling through the slices. In the coronal view, the corticospinal tract (superior-inferior) should be blue. In the sagittal view, the corpus callosum (right-left) should be red. In the axial view, the anterior-posterior tracts should be green.<br />
<br />
Also, check if the color map includes the entire brain and if the color map looks unusual or not. Log this in the appropriate google doc.<br />
<br />
====Check for Cropping====<br />
In the output from the DTI_QA script there is a number for each of a set of regions (cerebellum, superior, temporal, frontal). This number represents the percentage of voxels missing in that region. If the number is greater than about 10, go back and look at the FA map to make sure that actual tract data are not missing (a small percentage, usually representing grey matter, can be missing from the edges without much effect). There will always be a large percentage of cerebellum voxels missing, but this can be ignored.<br />
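This check can be automated by scanning the regional percentages. A sketch, assuming the region/percentage pairs have been pulled out of dti_report.txt into lines of the form `region percent` (the actual report format may differ), and skipping the cerebellum as noted above:

```shell
#!/bin/sh
# Flag regions with more than ~10% of voxels cropped, ignoring the cerebellum.
# Input lines on stdin: "<region> <percent_missing>"
flag_cropping() {
    awk '$1 != "cerebellum" && $2 > 10 { print $1, "CHECK_FA_MAP" }'
}
```

For example, `printf 'cerebellum 40\nfrontal 3\ntemporal 12\n' | flag_cropping` flags only the temporal region.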
<br />
====Watch raw data as movie====<br />
Load the raw data file (e.g., DTI_64dir_7.nii.gz) into fslview and watch through each volume as a movie. It is normal for the first volume to be much brighter; that is the b0 image.<br />
<br />
==QA Rating System==<br />
After logging the intermediate steps in the google doc, a final rating can be calculated. This is based on:<br />
:1. Coverage flag (based on cropping measures rated 0=no cropping, 1=minor cropping, 2=severe unusable cropping)<br />
:2. Motion flags (based on watching raw data as movie, and on pdfs).<br />
:3. Tensor direction flags (based on bvals and bvecs and color map)<br />
:4. Artifact flags<br />
<br />
The overall quality score is generated from these measures and ranges from 1 to 4: <br />
:1=excellent<br />
:2=good (usable, but depending on the analysis you may want to look at the reason for the score)<br />
:3=fair (usable, but depending on the analysis you may want to look at the reason for the score)<br />
:4=unusable (all individuals with vibration artifacts are in this category, along with any others with irreconcilable problems)<br />
:-1= not evaluated<br />
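The page does not spell out the exact rule for combining the flags into the overall score; a worst-flag-wins scheme is one plausible reading. Purely as an illustration (the mapping below is an assumption, not the lab's actual formula):

```shell
#!/bin/sh
# Illustrative worst-flag-wins rating: takes the four flags (coverage, motion,
# tensor direction, artifact), each coded 0=none, 1=minor, 2=severe, and
# returns a 1-4 quality score. NOT the lab's actual formula: a reviewer
# would decide between 2 (good) and 3 (fair) for minor flags.
qa_score() {
    worst=0
    for flag in "$@"; do
        [ "$flag" -gt "$worst" ] && worst=$flag
    done
    case $worst in
        0) echo 1 ;;   # excellent
        1) echo 2 ;;   # good/fair: minor issue somewhere
        2) echo 4 ;;   # unusable: a severe flag anywhere disqualifies the scan
    esac
}
```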
<br />
The scores and reasons for them are available on the HTAC database.<br />
<br />
<br />
[[File:DTI_Rankings.png]]<br />
<br />
<br />
<br />
----<br />
Link back to [[LA5C]] page. <br/><br />
Return to [[CNP]]</div>Katiehttp://lcni-3.uoregon.edu/phenowiki/index.php?title=Quality_Control&diff=8293Quality Control2012-06-14T18:11:18Z<p>Katie: /* Alternate QA Methods */</p>
<hr />
<div>Back to [[LA5C]]<br />
<br />
=Check setup_subject=<br />
To make sure everything ran correctly check:<br/><br />
*directory structure<br />
*files<br />
*upload behavior files<br />
<br />
'''For details on the setup_subject script, go here: [[Initial_Processing]]'''<br />
<br />
=Inspect Diagnostic Reports=<br />
* Flag excessive motion<br />
<br />
* Diagnostic Reports<br />
** Stored in the raw/TASK folder for each task<br />
** Use display or kpdf to open it<br />
[[Image:DxR.jpg]]<br />
<br />
[[Image:Setupdx1.jpg]]<br />
<br />
[[Image:Setupdx2.jpg]]<br />
<br />
==Diagnostic Examples: Histograms==<br />
<br />
These histograms show the range of signal in the raw image. It is normal to have a large peak centered around 0 that represents the background, or non-brain voxels. If this is missing, or low, then there may be an issue with background noise. It is then normal to have another, much lower, peak farther out centered around 800-1200, that represents the data in the brain.<br />
<br />
In these examples, you can see that the obviously strange-looking one in the lower right is from a scan that was entirely noise; this is a very abnormal example.<br />
<br />
[[Image:Histogram.jpg]]<br />
<br />
==Diagnostic Examples: SNR==<br />
<br />
This measure is calculated based on the binarized mask created when BET is run.<br />
<br />
For SNR, there are a few things to check. First, you want to be sure it is at least above 20-30; lower than that indicates very low signal. Second, it is important to check for large fluctuations (for instance, substantial instability at the start or end of the scan). Often, something that looks like a large fluctuation is actually not relevant once you look at the y-axis and see that it represents a very small percent change. This axis re-scales itself based on the data, so you must check it in order to interpret the findings. For instance, example 2 below looks similar to the others, but its scale is very large, so the fluctuation is likely significant, whereas in similar-looking graphs with smaller ranges it would not be.<br />
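The first check (the absolute SNR level) can also be screened numerically to complement the visual inspection. A sketch, assuming the SNR timecourse has been dumped to one value per line (the function name is illustrative):

```shell
#!/bin/sh
# Flag any volume whose SNR falls below a floor (default 20),
# printing the volume index and offending value.
snr_floor_check() {
    floor=${1:-20}
    awk -v floor="$floor" '$1 < floor { print "vol", NR, "SNR", $1 }'
}
```

For example, `printf '35\n19\n40\n' | snr_floor_check` flags only the second volume.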
<br />
Sometimes there will be a spike in the exact center of the graph. This likely represents an artifact based on fat suppression. The motion correction is based on the center image, so all images other than the center image have been interpolated to some extent. If fat wasn't sufficiently suppressed, the middle image may have much lower SNR because in all the other images the higher intensity fat has been blurred around as the correction occurred. The decrease in SNR seems to be totally based on an increase in background noise, not on any decrease within the brain and the within brain data will be fine. This is not usually relevant, and will be common across subjects within a protocol (see example 3, below). You can detect it by checking other subjects on the same protocol, and by checking to see if it is happening precisely on the middle scan.<br />
<br />
[[Image:SNR.jpg]]<br />
<br />
==Diagnostic Examples: Slice Intensity==<br />
<br />
This image represents the signal intensity across each time point on each slice. There is normally a thick red band that represents the area with the most data, which for us is the head. There are sometimes white dots in this- if there are only a few, it should be noted, but doesn't usually correlate with significant problems with the data. If the pattern is messy, or blurry, or the red band is not particularly wide or dark, then this can often correlate with motion, or otherwise bad data. This image in and of itself is not sufficient evidence for eliminating subjects, but can help you understand the impact other parts of the diagnostics (like motion, or SNR) are having on your data.<br />
<br />
The top image is an example of a normal scan.<br />
<br />
The middle images are in descending order of messiness.<br />
<br />
The bottom image is entirely noise (it is the same as the example of the bad histogram, above). You can see that there is no specificity in terms of head location, since the entire thing is red.<br />
<br />
[[Image:SliceInt.jpg]]<br />
<br />
==Diagnostic Examples: Motion==<br />
<br />
Motion is probably the most important component of data checking, as it can ruin the data, especially with interleaved scans. Usually, anything over 2-3 mm of translation is noteworthy and potentially warrants exclusion.<br />
<br />
When checking motion, be sure to look at the y-axis. This graph also re-scales itself, so it can be misleading if you don't pay attention to scale. It is often useful to look at how the motion spikes here line up with artifacts seen in the previous graphs as it will often line up with disruptions in things like the color map.<br />
<br />
Any unusual motion should be logged. Generally, spikes in motion are much more disruptive than a slow drift, which can be more easily modelled out by FSL.<br />
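The 2-3 mm rule of thumb can also be applied numerically to the motion-correction output. A sketch, assuming an MCFLIRT-style .par file (three rotation columns in radians followed by three translation columns in mm; check your own pipeline's column order before relying on this):

```shell
#!/bin/sh
# Flag volumes whose absolute translation in any axis exceeds a threshold (mm).
# Input lines on stdin: 6 motion parameters per volume (rotations then translations).
motion_spike_check() {
    thresh=${1:-2}
    awk -v t="$thresh" '
        function abs(x) { return x < 0 ? -x : x }
        abs($4) > t || abs($5) > t || abs($6) > t {
            print "vol", NR, "translation exceeds", t "mm"
        }'
}
```

Note that this flags absolute displacement per volume; a slow drift that accumulates past the threshold is treated the same as a spike, so flagged scans still need the visual check described above.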
<br />
[[Image:MotionDx.jpg]]<br />
<br />
==Diagnostic Examples: MPRAGE==<br />
[[Image:Good.jpg]]<br />
<br />
Good<br />
<br />
[[Image:Bad.jpg]]<br />
<br />
Bad<br />
<br />
==Diagnostic Criteria==<br />
{| class="wikitable" border="1"<br />
|-<br />
! Parameter<br />
! Mean range<br />
! Features<br />
|-<br />
| Global in-mask signal: Mean<br />
| 4-12<br />
| gradual decline/overall negative slope<br />
|-<br />
| Log power spectrum of global signal timecourse<br />
| 2-9<br />
| gradual decline/negative slope, long wave<br />
|-<br />
| Signal-to-noise ratio (SNR) (based on BET mask)<br />
| 2-8<br />
| no isolated spikes<br />
|-<br />
| Mean slice intensity by time<br />
|<br />
| range of colors, no white spots, no white lines, no large bands of color<br />
|-<br />
| Motion parameters: Translation<br />
| 0.2-1.2mm<br />
| no large spikes that extend the overall range by >1mm, flag any spike >2mm or >3mm<br />
|}<br />
=Inspect raw data with fslview=<br />
<br />
* Number of scans<br />
** In fslview<br />
** fslnvols <.nii file><br />
<br />
* Motion<br />
<br />
* Orientation<br />
<br />
* Brain extraction<br />
<br />
[[Image:fslview.jpg]]<br />
<br />
* The arrow at the top emphasizes the film strip, clicking plays a video of slices (check for motion).<br />
* The arrow at the bottom emphasizes volume, allowing you to select specific slices.<br />
<br />
=Inspect BET=<br />
* Usage: bet <input fileroot> <output fileroot> [options] example: bet spt100hr.hdr spt100hr_brain.hdr<br />
<br />
* How it works: Searches out from center, looking for area of decreased signal, and cuts off there (knowing this will help a lot with understanding where artifacts come from).<br />
<br />
* Run, then check using slices. If needs fixing: <br />
** -f, fractional intensity threshold, ranges from 0 to 1. The default is 0.5; making this number smaller makes the resulting brain estimate bigger, while making it larger (>0.5) makes the resulting brain smaller.<br />
** -g, vertical gradient in fractional intensity threshold, ranges from -1 to 1. The default is 0. Closer to 1 gives the brain a smaller top and bigger bottom; closer to -1 gives a smaller bottom and bigger top (so select based on which part of the brain is not being correctly estimated).<br />
** example: bet spt100hr.hdr spt100hr_brain.hdr -f 0.4 -g 0.2<br />
<br />
<br />
==BET examples==<br />
* MPRAGE after setup_subject default bet<br />
** Usage : > bet mprage.nii.gz mprage_brain -B -f 0.20<br />
<br />
<br />
[[Image:BET.jpg]]<br />
<br />
<br />
'''Details on Skull Stripping with BET can be found here: [[Skull_Strip_with_BET]]'''<br />
<br />
<br />
----<br />
Link back to [[LA5C]] page.</div>Katiehttp://lcni-3.uoregon.edu/phenowiki/index.php?title=Quality_Control&diff=8291Quality Control2012-06-14T18:10:22Z<p>Katie: </p>
<hr />
<div>Back to [[LA5C]]<br />
<br />
=Check setup_subject=<br />
To make sure everything ran correctly check:<br/><br />
*directory structure<br />
*files<br />
*upload behavior files<br />
<br />
'''For details on the setup_subject script, go here: [[Setup_Subjects_Script]]'''<br />
<br />
=Inspect Diagnostic Reports=<br />
* Flag excessive motion<br />
<br />
* Diagnostic Reports<br />
** Stored in the raw/TASK folder for each task<br />
** Use display or kpdf to open it<br />
[[Image:DxR.jpg]]<br />
<br />
[[Image:Setupdx1.jpg]]<br />
<br />
[[Image:Setupdx2.jpg]]<br />
<br />
==Diagnostic Examples: Histograms==<br />
<br />
These histograms show the range of signal in the raw image. It is normal to have a large peak centered around 0 that represents the background, or non-brain voxels. If this is missing, or low, then there may be an issue with background noise. It is then normal to have another, much lower, peak farther out centered around 800-1200, that represents the data in the brain.<br />
<br />
In these examples, you can see that the obviously strange-looking one on the lower right-hand side is from a scan that was entirely noise; this is a very abnormal example.<br />
<br />
[[Image:Histogram.jpg]]<br />
<br />
==Diagnostic Examples: SNR==<br />
<br />
This measure is calculated based on the binarized mask created when BET is run.<br />
<br />
For SNR, there are a few things to check. First, make sure it is at least above 20-30; anything lower indicates very low signal. Second, check for large fluctuations (for instance, substantial instability at the start or end of the scan). Often, something that looks like a large fluctuation is actually not relevant once you look at the y-axis and see that it represents a very small percent change. This axis re-scales itself based on the data, so you must check it in order to interpret the findings. For instance, example 2 below looks similar to the others, but its scale is very large, so the fluctuation is likely significant; in similar-looking graphs with smaller ranges, it would not be.<br />
<br />
Sometimes there will be a spike in the exact center of the graph. This likely represents an artifact of incomplete fat suppression. Motion correction is based on the center image, so all images other than the center image have been interpolated to some extent. If fat wasn't sufficiently suppressed, the middle image may have much lower SNR, because in all the other images the higher-intensity fat has been blurred around as the correction occurred. The decrease in SNR appears to be driven entirely by an increase in background noise, not by any decrease within the brain, so the within-brain data will be fine. This is not usually relevant, and will be common across subjects within a protocol (see example 3, below). You can detect it by checking other subjects on the same protocol, and by checking whether it happens precisely on the middle scan.<br />
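As a rough cross-check of the plotted values, an SNR of this general form can be recomputed by hand from the 4D image and its BET mask. This is a hedged sketch: it assumes FSL's fslmaths and fslstats are on the PATH, and the exact formula used by the diagnostics script may differ (here: mean signal inside the mask divided by the standard deviation of the background outside it).<br />

```shell
# rough_snr <4D image> <BET mask>
# Mean in-mask signal divided by the standard deviation of the background.
# Sketch only: assumes FSL's fslmaths/fslstats; the diagnostics script's
# exact SNR formula may differ.
rough_snr() {
    img="$1"; mask="$2"
    inv="${mask%%.*}_inv"
    fslmaths "$mask" -binv "$inv"          # invert the mask -> background
    mean=$(fslstats "$img" -k "$mask" -m)  # mean signal inside the brain
    noise=$(fslstats "$img" -k "$inv" -s)  # SD of the background voxels
    # awk handles the floating-point division that plain sh cannot
    awk -v m="$mean" -v n="$noise" 'BEGIN { printf "SNR: %.1f\n", m / n }'
}
```

For example: rough_snr run1_mcf.nii.gz run1_mcf_brain_mask.nii.gz (filenames illustrative).<br />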
<br />
[[Image:SNR.jpg]]<br />
<br />
==Diagnostic Examples: Slice Intensity==<br />
<br />
This image represents the signal intensity at each time point on each slice. There is normally a thick red band that represents the area with the most data, which for us is the head. There are sometimes white dots in this band; if there are only a few, they should be noted, but they don't usually correlate with significant problems with the data. If the pattern is messy or blurry, or the red band is not particularly wide or dark, this often correlates with motion or otherwise bad data. This image in and of itself is not sufficient evidence for eliminating subjects, but it can help you understand the impact other parts of the diagnostics (like motion, or SNR) are having on your data.<br />
<br />
The top image is an example of a normal scan.<br />
<br />
The middle images are in descending order of messiness.<br />
<br />
The bottom image is entirely noise (it is the same as the example of the bad histogram, above). You can see that there is no specificity in terms of head location, since the entire thing is red.<br />
<br />
[[Image:SliceInt.jpg]]<br />
<br />
==Diagnostic Examples: Motion==<br />
<br />
Motion is probably the most important component of data checking, as it can really ruin the data, especially with interleaved scans. Usually, anything over 2-3 mm of translation is noteworthy and potentially warrants exclusion.<br />
<br />
When checking motion, be sure to look at the y-axis. This graph also re-scales itself, so it can be misleading if you don't pay attention to scale. It is often useful to look at how the motion spikes here line up with artifacts seen in the previous graphs as it will often line up with disruptions in things like the color map.<br />
<br />
Any unusual motion should be logged. Generally, spikes in motion are much more disruptive than a slow drift, which can be more easily modelled out by FSL.<br />
<br />
[[Image:MotionDx.jpg]]<br />
<br />
==Diagnostic Examples: MPRAGE==<br />
[[Image:Good.jpg]]<br />
<br />
Good<br />
<br />
[[Image:Bad.jpg]]<br />
<br />
Bad<br />
<br />
==Diagnostic Criteria==<br />
{| class="wikitable" border="1"<br />
|-<br />
! Parameter<br />
! Mean range<br />
! Features<br />
|-<br />
| Global in-mask signal: Mean<br />
| 4-12<br />
| gradual decline/overall negative slope<br />
|-<br />
| Log power spectrum of global signal timecourse<br />
| 2-9<br />
| gradual decline/negative slope, long wave<br />
|-<br />
| Signal to noise (SNR) ratio (based on BET mask)<br />
| 2-8<br />
| no isolated spikes<br />
|-<br />
| Mean slice intensity by time<br />
|<br />
| range of colors, no white spots, no white lines, no large bands of color<br />
|-<br />
| Motion parameters: Translation<br />
| 0.2-1.2mm<br />
| no large spikes that extend the overall range by >1mm; flag any spike >2-3mm<br />
|}<br />
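The translation criteria above can be checked automatically against the .par file that mcflirt writes alongside the motion plots. A sketch, assuming the standard mcflirt column layout (columns 1-3 are rotations in radians, columns 4-6 are translations in mm):<br />

```shell
# flag_motion <mcflirt .par file>
# Hedged sketch: assumes columns 4-6 are translations in mm, and applies
# the thresholds from the criteria table above.
flag_motion() {
    awk '
    {
        for (i = 4; i <= 6; i++) {
            v = $i
            if (NR == 1 || v < min[i]) min[i] = v
            if (NR == 1 || v > max[i]) max[i] = v
            d = v - prev[i]; if (d < 0) d = -d
            if (NR > 1 && d > jump) jump = d
            prev[i] = v
        }
    }
    END {
        range = 0
        for (i = 4; i <= 6; i++)
            if (max[i] - min[i] > range) range = max[i] - min[i]
        printf "max translation range: %.2f mm\n", range
        printf "largest volume-to-volume jump: %.2f mm\n", jump + 0
        if (range > 2 || jump > 2)
            print "FLAG: translation spike >2mm, review for exclusion"
    }' "$1"
}
```

Spikes that this flags should still be inspected by eye against the diagnostic plots before excluding anyone.<br />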
=Inspect raw data with fslview=<br />
<br />
* Number of scans<br />
** In fslview<br />
** fslnvols <.nii file><br />
<br />
* Motion<br />
<br />
* Orientation<br />
<br />
* Brain extraction<br />
<br />
[[Image:fslview.jpg]]<br />
<br />
* The arrow at the top points to the film strip; clicking it plays a video of the slices (check for motion).<br />
* The arrow at the bottom points to the volume control, allowing you to step through specific volumes.<br />
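The volume count can also be checked from a script rather than in fslview. A minimal sketch, assuming FSL's fslnvols is on the PATH and that the expected count for each protocol is known:<br />

```shell
# check_nvols <4D image> <expected volume count>
# Sketch: compares fslnvols output against the protocol's expected number
# of TRs and flags any mismatch (truncated or over-long runs).
check_nvols() {
    actual=$(fslnvols "$1")
    if [ "$actual" -eq "$2" ]; then
        echo "OK: $1 ($actual volumes)"
    else
        echo "FLAG: $1 has $actual volumes, expected $2"
    fi
}
```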
<br />
=Inspect BET=<br />
* Usage: bet <input fileroot> <output fileroot> [options]; example: bet spt100hr.hdr spt100hr_brain.hdr<br />
<br />
* How it works: BET searches outward from the center, looking for an area of decreased signal, and cuts off there (knowing this will help a lot with understanding where artifacts come from).<br />
<br />
* Run, then check using slices. If it needs fixing: <br />
** -f, fractional intensity threshold, ranges from 0 to 1. The default is 0.5; making this number smaller causes the resulting brain to be bigger, while making it larger (>0.5) makes the resulting brain smaller.<br />
** -g, vertical gradient in fractional intensity threshold, ranges from -1 to 1. The default is 0. Closer to 1 gives the brain a smaller top and bigger bottom; closer to -1 gives a smaller bottom and bigger top (so select based on which part of the brain is not being correctly estimated).<br />
** example: bet spt100hr.hdr spt100hr_brain.hdr -f 0.4 -g 0.2<br />
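When the default extraction fails, it can save time to sweep several -f values at once and compare the results side by side. A hedged sketch, assuming bet is on the PATH; the output naming is illustrative:<br />

```shell
# bet_sweep <4D or structural image>
# Runs BET at several fractional intensity thresholds so the extractions
# can be compared visually. Smaller -f = larger brain (see above).
bet_sweep() {
    input="$1"
    for f in 0.3 0.4 0.5 0.6; do
        bet "$input" "${input%%.*}_brain_f${f}" -f "$f"
    done
}
```

Inspect each *_brain_f* output with slices or fslview and keep the threshold that best matches the brain edge.<br />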
<br />
<br />
==BET examples==<br />
* MPRAGE after setup_subject default bet<br />
** Usage: > bet mprage.nii.gz mprage_brain -B -f 0.20<br />
<br />
<br />
[[Image:BET.jpg]]<br />
<br />
<br />
'''Details on Skull Stripping with BET can be found here: [[Skull_Strip_with_BET]]'''<br />
<br />
==Alternate QA Methods==<br />
Some additional, older methods can be found here: [[Artifact_Detection]]<br />
<br />
----<br />
<i>Improve our wiki by [http://cnswww.psych.ucla.edu/wiki/index.php?title=Raw_Data_QA&action=edit editing this page]!</i><br />
<br />
----<br />
Link back to [[LA5C]] page.</div>Katiehttp://lcni-3.uoregon.edu/phenowiki/index.php?title=File:SNR.jpg&diff=8290File:SNR.jpg2012-06-14T18:09:40Z<p>Katie: </p>
<hr />
<div></div>Katiehttp://lcni-3.uoregon.edu/phenowiki/index.php?title=File:SliceInt.jpg&diff=8289File:SliceInt.jpg2012-06-14T18:09:30Z<p>Katie: </p>
<hr />
<div></div>Katiehttp://lcni-3.uoregon.edu/phenowiki/index.php?title=File:Setupdx2.jpg&diff=8288File:Setupdx2.jpg2012-06-14T18:08:52Z<p>Katie: </p>
<hr />
<div></div>Katiehttp://lcni-3.uoregon.edu/phenowiki/index.php?title=File:Setupdx1.jpg&diff=8287File:Setupdx1.jpg2012-06-14T18:08:36Z<p>Katie: </p>
<hr />
<div></div>Katiehttp://lcni-3.uoregon.edu/phenowiki/index.php?title=File:MotionDx.jpg&diff=8286File:MotionDx.jpg2012-06-14T18:08:26Z<p>Katie: </p>
<hr />
<div></div>Katiehttp://lcni-3.uoregon.edu/phenowiki/index.php?title=File:Histogram.jpg&diff=8285File:Histogram.jpg2012-06-14T18:08:17Z<p>Katie: </p>
<hr />
<div></div>Katiehttp://lcni-3.uoregon.edu/phenowiki/index.php?title=File:Fslview.jpg&diff=8284File:Fslview.jpg2012-06-14T18:08:06Z<p>Katie: </p>
<hr />
<div></div>Katiehttp://lcni-3.uoregon.edu/phenowiki/index.php?title=File:DxR.jpg&diff=8283File:DxR.jpg2012-06-14T18:07:56Z<p>Katie: </p>
<hr />
<div></div>Katiehttp://lcni-3.uoregon.edu/phenowiki/index.php?title=File:Directory.jpg&diff=8282File:Directory.jpg2012-06-14T18:07:41Z<p>Katie: </p>
<hr />
<div></div>Katiehttp://lcni-3.uoregon.edu/phenowiki/index.php?title=File:BET.jpg&diff=8281File:BET.jpg2012-06-14T18:07:34Z<p>Katie: </p>
<hr />
<div></div>Katiehttp://lcni-3.uoregon.edu/phenowiki/index.php?title=File:Bad.jpg&diff=8280File:Bad.jpg2012-06-14T18:07:17Z<p>Katie: </p>
<hr />
<div></div>Katiehttp://lcni-3.uoregon.edu/phenowiki/index.php?title=Initial_Processing&diff=8278Initial Processing2012-06-14T18:02:17Z<p>Katie: </p>
<hr />
<div>Back to [[LA5C]]<br />
<br />
Data were initially processed using the setup_subject script (created by Russ Poldrack).<br />
<br />
<br />
=About Setup Subjects=<br />
Data are transferred in DICOM format to a server after a scan is completed. setup_subject.sh is a script for the automatic transfer and conversion of data from BMC or CCN to func. It transfers the raw DICOM files to func and converts them into a usable (NIfTI) format for use with FSL or other tools. This script runs on the Sun Grid Engine.<br/><br />
<br />
=Usage=<br />
USAGE: setup_subject -b <base_dir> -g <group> -s <subject> -options<br />
<br />
To check the progress of the script, look at the log files in the notes/ directory using the command '''more notefile'''.<br/><br />
<br />
==Required Arguments==<br />
Required arguments:<br />
: -b base_dir: the project directory in which the subject's dir will be created (this MUST be an absolute path, i.e., starting with /)<br/><br />
: -g group: the name of the group under which the scan was performed (this MUST be in lowercase, and must also match the account name under which the data are stored on DNS0)<br/><br />
: -s subject: the subject code that was entered when scanning<br />
<br />
==Optional Arguments==<br />
Optional arguments:<br/><br />
[general flags]:<br/><br />
: -t: run in test mode - don't execute commands<br/> <br />
[processing operations]:<br/><br />
: -c: run MCFLIRT motion correction on 4D image<br/><br />
: -m: run MELODIC ICA on motion-corrected 4D image<br/><br />
: -x: run BET on the motion-corrected 4D data<br/><br />
: -w: Preprocess MP-RAGE (align, BET, and bias correction)<br/> <br />
: -D: Run raw data diagnostics<br/><br />
: -z: name of file specifying definitions for MCFLIRT_ARGS and MELODIC_ARGS<br/><br />
: -h: unwarp allegra MP-RAGE using MGH code (for Allegra 3T only)<br/><br />
[debabeler options]:<br/><br />
: -d: skip debabeling<br/><br />
: -j: location of debabeler .jar file<br/><br />
[data input options]:<br/><br />
: -u username: specify a username for the dns0 account (defaults to group name)<br/><br />
: -y: read data from dicom backup (for data collected prior to 9/17/05)<br/><br />
[file output options]:<br/><br />
: -k: don't automatically delete DICOM data after conversion<br/><br />
: -3: Keep 3D data after 4D conversion<br/><br />
: -f: don't fix raw directory names (leave extra numbers)<br/><br />
: -e: skip downloading of data from dns0 (assumes existing data)<br/><br />
: -a: tag to denote BOLD directories (defaults to BOLD)<br/><br />
: -q: skip entry of DICOM info into database<br/><br />
: -I: specify a dicom server directory<br/><br />
[grid options]:<br/><br />
: -G: submit compute tasks to the Grid (default)<br/><br />
: -L: run locally (do not submit compute tasks to the Grid)<br />
<br />
==Example Command Lines==<br />
Here are example command lines to run the script:<br/><br />
:e.g. setup_subject_nifti -b /space/raid3/data/cannon/hbm_prodromal_vcap -g cannon -s 80023_00 -x -w -D -k <br />
:This command line would convert the data, keep the DICOM data (-k), skull-strip the data (-x), process the MP-RAGE (-w), and run diagnostics (-D).<br />
<br />
:e.g. setup_subject_nifti_swe -b /space/raid9/data/cannon/swe_twin_fmri -g cannonlab -s 2006600001 -c -x -D<br />
:This command line would convert the data, delete the converted DICOM data by default, motion-correct the data (-c), skull-strip the data (-x), and run diagnostics (-D).<br />
<br />
You should always run setup_subject on the grid, so that it runs faster and doesn't hog the individual processors. For more detail on the grid, go here: [[Sun_Grid_Engine]]<br />
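For many subjects it is convenient to wrap setup_subject in a small loop. A sketch only: the flag set shown (-c -x -D -G) is an example, so substitute whatever options your project needs:<br />

```shell
# batch_setup <base_dir> <group> <subject codes...>
# Runs setup_subject once per subject code. The option set here is
# illustrative (motion correct, BET, diagnostics, submit to the Grid).
batch_setup() {
    base="$1"; group="$2"; shift 2
    for subj in "$@"; do
        setup_subject -b "$base" -g "$group" -s "$subj" -c -x -D -G
    done
}
```

For example: batch_setup /space/raid3/data/cannon/hbm_prodromal_vcap cannon 80023_00 80024_00.<br />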
<br />
=Detail=<br />
==Directories==<br />
Creates directory structure in the specified -b folder, including the subject directory and the subdirectories analysis/, behav/, dicom/, notes/, raw/:<br />
:/space/raid9/data/cannon/swe_twin_fmri/${subjectid}<br />
:/space/raid9/data/cannon/swe_twin_fmri/${subjectid}/dicom<br />
:/space/raid9/data/cannon/swe_twin_fmri/${subjectid}/raw<br />
:/space/raid9/data/cannon/swe_twin_fmri/${subjectid}/behav<br />
:/space/raid9/data/cannon/swe_twin_fmri/${subjectid}/analysis<br />
:/space/raid9/data/cannon/swe_twin_fmri/${subjectid}/notes<br />
:/space/raid9/data/cannon/swe_twin_fmri/${subjectid}/raw/EPI_aflab_run1<br />
:/space/raid9/data/cannon/swe_twin_fmri/${subjectid}/raw/EPI_aflab_run2<br />
:/space/raid9/data/cannon/swe_twin_fmri/${subjectid}/raw/EPI_stscap<br />
:/space/raid9/data/cannon/swe_twin_fmri/${subjectid}/raw/struct<br />
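If you ever need to rebuild this skeleton by hand (for instance when re-running a single step outside setup_subject), the directories above can be created with a short loop; the run names are this example project's:<br />

```shell
# make_subject_dirs <subject directory>
# Recreates the directory skeleton listed above. The raw/ run names are
# the swe_twin_fmri example's; other projects will differ.
make_subject_dirs() {
    subjdir="$1"
    for d in dicom raw behav analysis notes \
             raw/EPI_aflab_run1 raw/EPI_aflab_run2 raw/EPI_stscap raw/struct; do
        mkdir -p "$subjdir/$d"
    done
}
```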
<br />
==Preprocessing==<br />
This script completes the following preprocessing steps:<br />
* Transfers raw data from BMC ('''cp -r''') into the dicom/ directory. You need a BMC account and must supply your username with -u.<br />
* Creates log file in '''subjectdir/notes/setup_log.date'''<br />
* Converts from DICOM to NIFTI ('''dcm2nii'''). Afterward, the script either deletes or keeps the DICOM data, depending on the -k option.<br />
* Motion correction ('''mcflirt'''): registers each volume across time to the middle volume and makes motion parameter files and plots for each task to put into later analyses.<br />
* Brain extraction ('''bet''' and '''betfunc2''')<br />
* Optional - run '''melodic'''<br />
* Create diagnostic .pdfs for each run of each task<br />
* Basic DTI analysis (calls proc_dti script, creates FA, MD, etc.)<br />
<br />
==Command Lines==<br />
To manually run the command lines in the script:<br />
<br />
1. First copy the raw data recursively to the appropriate subject directory, using the command <b>[[Basic UNIX Commands#cp | cp]]</b>:<br />
:e.g. <b>cp -r</b> 2006600001 2006600001/dicom<br />
<br />
2. Convert all [[Basic fMRI Definitions#DICOM | DICOM]] files to [[Basic fMRI Definitions#NIFTI | NIFTI]] format, using the command <b>[[Basic UNIX Commands#dcm2nii|dcm2nii]]</b>:<br />
:<b>dcm2nii</b> ${DICOMDIRECTORY}<br />
:e.g. dcm2nii 001/<br />
<br />
Copy the output files into the functional directories located in $SUBDIR/raw/.<br />
<br />
3. Run motion correction and registration on all raw functional files, using the command <b>[[Basic UNIX Commands#mcflirt |mcflirt]]</b>:<br />
:<b>mcflirt -in</b> ${SUBCODE}_${BOLDTAG}*${IMG_SUFFIX} <b>-plots -report -mats</b><br />
:e.g. mcflirt -in 2006600001_EPI_stscap.nii.gz<br />
<br />
4. Skull-strip all raw functional files and all motion-corrected, registered functional files, using the command <b>[[Basic UNIX Commands#betfunc2 | betfunc2]]</b>:<br />
:<b>betfunc2</b> $FOUR_D_FILENAME_SHORT $BET_4D_FILENAME<br />
:e.g. betfunc2 2006600001_EPI_stscap.nii.gz 2006600001_EPI_stscap_brain.nii.gz<br />
:e.g. betfunc2 2006600001_EPI_stscap_mcf.nii.gz 2006600001_EPI_stscap_mcf_brain.nii.gz<br />
<br />
5. Skull-strip the structural files, e.g. T2 or SPGR, using the command <b>[[Basic UNIX Commands#bet | bet]]</b>:<br />
:<b>bet</b> $SUBDIR/raw/struct/${SUBCODE}_t2.$IMG_SUFFIX $SUBDIR/raw/struct/${SUBCODE}_t2_brain.$IMG_SUFFIX<br />
:e.g. bet /space/raid9/data/cannon/swe_twin_fmri/2006600001/raw/struct/2006600001_t2.nii.gz /space/raid9/data/cannon/swe_twin_fmri/2006600001/raw/struct/2006600001_t2_brain.nii.gz</div>Katiehttp://lcni-3.uoregon.edu/phenowiki/index.php?title=LA5C&diff=8276LA5C2012-06-14T17:59:05Z<p>Katie: /* Preprocessing and Quality Control */</p>
<hr />
<div>====LA5C====<br />
<br />
The LA5C was a nested study within LA2K, extending the work to include patient populations. The assessed population included 200 healthy controls recruited from LA2K, 100 schizophrenia patients, 100 bipolar disorder patients, and 100 ADHD patients. <br />
<br />
<br />
<br />
See here for the LA2K manual: [[LA2k_manual]]. <br/><br />
<br />
===Participants===<br />
The participants, ages 21-50, were recruited by community advertisements from the Los Angeles area and completed extensive neuropsychological testing in addition to fMRI scanning. To be included, individuals had to be either “White, Not of Hispanic or Latino Origin” or “Hispanic or Latino, of Any Race,” following NIH designations of racial and ethnic minority groups, and had to have completed at least 8 years of education (other racial and ethnic groups were excluded because their inclusion was thought to increase the risk of confounding planned genetic studies). For participants who spoke both English and Spanish, the language used for testing was determined by a verbal fluency test. Participants were screened for neurological disease, history of head injury with loss of consciousness or cognitive sequelae, use of psychoactive medications, substance dependence within the past 6 months, history of major mental illness or ADHD, and current mood or anxiety disorder. Self-reported history of psychopathology was verified with the SCID-IV (First, Spitzer, Gibbon, & Williams, 1995). Urinalysis was used to screen for drugs of abuse (cannabis, amphetamine, opioids, cocaine, benzodiazepines) on the day of testing; participants who tested positive were excluded. <br />
<br />
A portion of this large sample took part in two separate fMRI sessions, each of which included one hour of behavioral testing and a one-hour scan on the same day. Participants were recruited from the parent study to participate in the fMRI portion if they successfully completed all previous testing sessions and did not meet any of the following additional exclusion criteria: history of significant medical illness, contraindications for MRI (including pregnancy), use of any mood-altering medication on scan day (based on self-report), vision insufficient to see the task stimuli, and left-handedness. <br />
<br />
After receiving a thorough explanation, all participants gave written informed consent according to the procedures approved by the University of California Los Angeles Institutional Review Board.<br />
<br />
===Contact People===<br />
For more information on the LA5C please contact:<br/><br />
Eliza Congdon, Ph.D. [mailto:econgdon@ucla.edu email econgdon@ucla.edu]<br/><br />
Katie Karlsgodt, Ph.D. [mailto:kkarlsgo@ucla.edu email kkarlsgo@ucla.edu]<br/><br />
Russell Poldrack, Ph.D. [mailto:poldrack@mail.utexas.edu email poldrack@mail.utexas.edu]<br/><br />
Fred Sabb, Ph.D. [mailto:fwsabb@gmail.com email fwsabb@gmail.com]<br/><br />
<br />
===General Methods===<br />
link to LA5C manual<br />
<br />
overall study design (scan A, scan B, etc)<br />
<br />
id numbers<br />
<br />
counterbalancing<br />
<br />
===Facilities===<br />
[[BMC]]<br/><br />
[[CCN]]<br/><br />
<br />
===Measures===<br />
'''Image Acquisition:'''<br/><br />
[[Parameters]]<br/><br />
[[Procedure]]<br/><br />
<br />
'''Functional:'''<br/><br />
[[BART]]<br/><br />
[[BREATH HOLDING]]<br/><br />
[[PAM]]<br/><br />
[[RESTING STATE]]<br/><br />
[[SCAP]]<br/><br />
[[STOPSIGNAL]]<br/><br />
[[TASKSWITCHING]]<br/><br />
<br />
'''Structural:'''<br/><br />
[[DTI]]<br/><br />
[[MATCHED BANDWIDTH HIRES]]<br/><br />
[[MPRAGE]]<br/><br />
<br />
'''Physio Acquisition'''<br/><br />
[[Physio Data Setup]]<br/><br />
<br />
===Preprocessing and Quality Control===<br />
<br />
<br />
'''Preprocessing:'''<br/><br />
[[Initial Processing]]<br/><br />
[[Quality Control]]<br/><br />
[[LA5C Exclusions: Withdrawn, Dropped, Unusable]]<br/><br />
[[LA3C ID Switches]]<br/><br />
[[LA5C CCN Re-Scans]]<br/><br />
[[BMC/CCN Scanner Switch & Marker History]]<br/><br />
[[MPRAGE Motion]]<br/><br />
[[BOLD Motion]]<br />
<br />
[[DTI Quality Control]]<br><br />
[[Freesurfer Quality Control]]<br><br />
<br />
[[Final Tables of Usables Ns]]<br><br />
<br />
[[Notes on changes made to HTAC Database or Final Log Files post-QC Completion (6/13/12)]]<br><br />
<br />
===Requesting and Analyzing Data===<br />
[[CNP_data_query_policy]]<br />
<br />
[[Notes on Downloading Imaging Workflow Data]] <br/><br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
Link back to [[HTAC]] page.</div>Katiehttp://lcni-3.uoregon.edu/phenowiki/index.php?title=DTI&diff=8275DTI2012-06-14T17:56:22Z<p>Katie: </p>
<hr />
<div>__TOC__<br />
<br />
<b> Back to [[HTAC]]</b><br />
<br />
=Scan Parameters=<br />
The DTI was acquired using a 64 direction sequence. Parameters were: 2mm slices, TR/TE=9000/93, 1 average, 96x96 matrix, 90 degree flip angle, axial slices, b=1000.<br />
<br />
=Processing=<br />
==dti_proc script==<br />
dti_proc.sh is a script written by Russ Poldrack to preprocess the raw CNP DTI data. The path for the script is /space/raid2/data/poldrack/CNP/scripts/dti_proc.sh. <br />
<br />
To run the script on an entire subject group (CONTROLS, SCHZ, BIPOLAR, ADHD) use the wrapper script run_dti_proc.sh<br />
<br />
NOTE: An alternate version exists, called dti_proc_regressor. The difference between the two versions is that the regressor script adds a nuisance regressor, based on Gallichan et al. (2010), that can partially correct for vibration artifacts. This artifact affects BMC data acquired before Fall 2010; in Fall 2010 the table was bolted down, correcting the artifact. The CCN table was bolted at installation, so CCN data avoid the artifact entirely. Because the regressor does not entirely correct the issue, the shareable data use the original version of the script, with subjects who show the artifact marked for elimination.<br />
<br />
===Usage===<br />
For example, to run the script on one CNP subject,<br/><br />
1. Log on to func and go to the directory /space/raid2/data/poldrack/CNP/scripts:<br />
:> ssh $username@funcserv1<br />
:> cd /space/raid2/data/poldrack/CNP/scripts<br />
<br />
2. Find the path of the raw DTI directory for one subject, and run the script, e.g.:<br />
:> dti_proc /space/raid2/data/poldrack/CNP/SCHZ/CNP_50006B/raw/DTI_64DIR_3 &<br />
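The wrapper run_dti_proc.sh presumably just loops dti_proc over every subject in a group. A hedged sketch of that pattern is below; the directory glob is an assumption based on the example path, and the commands are printed rather than launched.

```shell
#!/bin/bash
# Print the dti_proc command for every raw DTI directory in one group.
# ASSUMPTION: the CNP_*/raw/DTI_64DIR_* layout follows the example path
# above; this sketch echoes rather than launches the jobs.
list_dti_jobs() {
    local base=$1 group=$2
    for dtidir in "$base/$group"/CNP_*/raw/DTI_64DIR_*; do
        [ -d "$dtidir" ] || continue   # skip unmatched glob patterns
        echo "dti_proc $dtidir &"
    done
}

list_dti_jobs /space/raid2/data/poldrack/CNP SCHZ
```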
<br />
===Script Actions===<br />
The dti_proc script processes the data using FSL Diffusion Toolbox (FDT). <br />
<br />
Steps:<br/><br />
1. B0 image is skull stripped, creating B0_image_brain<br/><br />
2. the raw data are registered to the first (B0) image using mcflirt; this both corrects for eddy currents and reduces the effects of motion. The resulting file is dti_mcf<br/><br />
3. dtifit is run, creating FA, L1, L2, MD and other processed images in subject space<br/><br />
4. the FA and color maps are registered to MNI space (FA2std and V12std).<br/><br />
5. Generic ROIs from the JHU atlas are applied to the FA image in standard space. <br/><br />
:WARNING- the resulting values are NOT to be used as data, only as first-pass markers of scan integrity. The registration is not accurate enough for them to be used for anything more, as evidenced by the low FA values.<br/><br />
6. Motion is calculated<br/><br />
7. bvecs and bvals for each subject are compared to the CNP standards. <br/><br />
:NOTE: there are different bvecs and bvals for subjects on the CCN and BMC scanners, as well as for a subset of subjects scanned during the transition and initial set-up of the CCN<br/><br />
8. a pdf is generated (dti_diag.pdf)<br/><br />
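In FSL terms, the first four steps above correspond roughly to the commands below. This is a hedged reconstruction from the step list (the exact options in dti_proc.sh may differ) and is printed as a dry run rather than executed.

```shell
#!/bin/bash
# Rough FSL equivalents of dti_proc steps 1-4 (dry run: commands are
# echoed, not executed). The specific option choices are assumptions,
# not the script's verified flags.
dti_proc_sketch() {
    local mni=${FSLDIR:-/usr/local/fsl}/data/standard/MNI152_T1_2mm
    echo "bet B0_image B0_image_brain -m"                                        # 1. skull-strip B0, make mask
    echo "mcflirt -in dti -refvol 0 -out dti_mcf"                                # 2. register to B0 (eddy + motion)
    echo "dtifit -k dti_mcf -m B0_image_brain_mask -r bvecs -b bvals -o dtifit"  # 3. fit the tensor model
    echo "flirt -in dtifit_FA -ref $mni -out dtifit_FA2std -omat FA2std.mat"     # 4. FA to MNI space
}

dti_proc_sketch
```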
<br />
===Output===<br />
For the subject indicated, the script will output the following files in the raw DTI directory:<br />
:B0_image_brain_mask.nii.gz<br />
:B0_image_brain.nii.gz<br />
:B0_image.nii.gz<br />
:dti_diag.pdf<br />
:dtifit_FA2std.nii.gz<br />
:dtifit_FA.nii.gz<br />
:dtifit_L1.nii.gz<br />
:dtifit_L2.nii.gz<br />
:dtifit_L3.nii.gz<br />
:dtifit.log<br />
:dtifit_MD.nii.gz<br />
:dtifit_MO.nii.gz<br />
:dtifit_SO.nii.gz<br />
:dtifit_V12std.nii.gz<br />
:dtifit_V1.nii.gz<br />
:dtifit_V2.nii.gz<br />
:dtifit_V3.nii.gz<br />
:dti_mcf.par<br />
:dti_proc.log<br />
:FA2std.mat<br />
<br />
=Quality Control=<br />
The checking procedures for CNP follow the Cannon Lab DTI QA Protocol, which was adapted from procedures in Paul Thompson's lab.<br />
<br />
==DTI_QA script==<br />
The additional QA script is located at /space/raid2/data/poldrack/CNP/scripts/DTI_QA.sh<br />
<br />
===Usage===<br />
The usage is DTI_QA <subjectID> <group>, or DTI_QA all <group> to run every subject in a group.<br />
<br />
For example, to run the script on one subject in the CNP schizophrenia group,<br/><br />
1. Log on to func and go to the directory /space/raid2/data/poldrack/CNP/scripts:<br />
:> ssh $username@funcserv1<br />
:> cd /space/raid2/data/poldrack/CNP/scripts<br />
<br />
2. Run the script, e.g.:<br />
:> DTI_QA.sh CNP_50006B SCHZ &<br />
<br />
3. Alternatively, to run the script on all subjects in the CNP schizophrenia group,<br/><br />
:> DTI_QA.sh all SCHZ &<br />
<br />
===Script Actions===<br />
This script creates a small text file to be used in QA. If the script runs properly it:<br/><br />
1. matches bvals and bvecs<br/><br />
2. calculates mean in-mask FA and MD<br/><br />
3. calculates motion in each direction<br/><br />
4. creates a standard deviation file for regular and mcf images<br/><br />
5. uses regional masks to calculate the percentage of cropped voxels in the occipital lobe, frontal lobe, superior region, temporal lobes and cerebellum.<br/><br />
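The cropped-voxel percentage in step 5 can be computed from two fslstats voxel counts. The fslstats calls shown in the comments are standard FSL usage, but the specific regional masks the QA script uses are not documented here, so treat this as a sketch:

```shell
#!/bin/bash
# Percent of a region's voxels with no FA data (i.e. cropped), given
#   mask_vox = voxels in the region mask     (fslstats <mask> -V)
#   fa_vox   = nonzero FA voxels in the mask (fslstats dtifit_FA -k <mask> -V)
cropped_pct() {
    local mask_vox=$1 fa_vox=$2
    awk -v m="$mask_vox" -v f="$fa_vox" 'BEGIN { printf "%.1f", 100 * (m - f) / m }'
}

# With FSL available, the counts would come from (hypothetical mask name):
#   mask_vox=$(fslstats frontal_mask -V | awk '{print $1}')
#   fa_vox=$(fslstats dtifit_FA -k frontal_mask -V | awk '{print $1}')
cropped_pct 200 180   # prints 10.0
```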
<br />
===Output===<br />
The script will create a <b>dti_report.txt</b> file in the raw DTI directory of the subject.<br />
<br />
==How to Do QA==<br />
===Check Diagnostic Log===<br />
After running the dti_proc.sh script, check the diagnostic log for quality assurance of the DTI data:<br/><br />
1. Log on to func and go to the directory, for example, /space/raid2/data/poldrack/CNP/${group}/CNP_{subjectID}/raw/DTI_64DIR_*:<br />
:> ssh $username@funcserv1<br />
:> cd /space/raid2/data/poldrack/CNP/${group}/CNP_{subjectID}/raw/DTI_64DIR_*<br />
:> ls<br />
<br />
2. Open the dti_diag.pdf, using the command <b>[[Basic UNIX Commands#evince | evince]]</b>:<br />
:> evince dti_diag.pdf &<br />
<br />
3. Check whether the bvals and bvecs match the CNP standards, and log this in the [https://spreadsheets.google.com/ccc?key=0AhLKRRgAIOCVdG9rY0szNFppcGZ0QmF3ejBYLUY5S0E&hl=en#gid=0 CNP DTI QA Google Document].<br />
<br />
4. Go to the raw DTI directory of the subject, and open the dti_report.txt file, using the command <b>[[Basic UNIX Commands#emacs | emacs]]</b>:<br />
:> cd /space/raid2/data/poldrack/CNP/SCHZ/CNP_50006B/raw/DTI_64DIR_3<br />
:> ls<br />
:> emacs dti_report.txt &<br />
<br />
5. Check whether the bvals and bvecs match the CNP standards, and log this in the [https://spreadsheets.google.com/ccc?key=0AhLKRRgAIOCVdG9rY0szNFppcGZ0QmF3ejBYLUY5S0E&hl=en#gid=0 CNP DTI QA Google Document].<br />
<br />
===Check for Artifacts===<br />
Artifacts observed in this data set include:<br />
:-missing slices: appears on only one volume, as an entire isolated missing horizontal slice<br />
:-vibration artifact (only on BMC subjects): usually shows as a red patch directly on the midline, primarily in the parietal region<br />
:-striping<br />
:-cropping<br />
<br />
===Check Raw Data===<br />
====Check FA Map====<br />
After running the dti_proc_regressor.sh script, check the fractional anisotropy (FA) map for quality assurance:<br/><br />
1. Log on to func and go to the directory /space/raid2/data/poldrack/CNP/${group}/CNP_{subjectID}/raw/DTI_64DIR_*:<br />
:> ssh $username@funcserv1<br />
:> cd /space/raid2/data/poldrack/CNP/${group}/CNP_{subjectID}/raw/DTI_64DIR_*<br />
:> ls<br />
<br />
2. Open the FA map in FSLView:<br />
:> fslview dtifit_FA.nii.gz &<br />
<br />
3. Check if the FA map includes the entire brain and if the FA map looks unusual or not, and log this in the [https://spreadsheets.google.com/ccc?key=0AhLKRRgAIOCVdG9rY0szNFppcGZ0QmF3ejBYLUY5S0E&hl=en#gid=0 CNP DTI QA Google Document].<br />
<br />
====Check Color Map====<br />
After running the dti_proc.sh script, check the color map for quality assurance:<br/><br />
1. Log on to func and go to the directory /space/raid2/data/poldrack/CNP/${group}/CNP_{subjectID}/raw/DTI_64DIR_*:<br />
:> ssh $username@funcserv1<br />
:> cd /space/raid2/data/poldrack/CNP/${group}/CNP_{subjectID}/raw/DTI_64DIR_*<br />
:> ls<br />
<br />
2. Open both the FA and color maps in FSLView (or add the color map to FSLView, if you are already viewing the FA map):<br />
:> fslview dtifit_FA.nii.gz dtifit_V1.nii.gz &<br />
<br />
3. To view the color map, select the dtifit_V1 file, and press the "i" button. An "Overlay Information Dialog" window will appear. For "Display as:", select "RGB" and for "Modulation:", select "dtifit_FA". Close this window.<br />
<br />
4. The color map should now display with a dark background and red/blue/green tracts.<br />
<br />
5. Check if the directions of the major fiber tracts are colored appropriately by scrolling through the slices. In the coronal view, the corticospinal tract (superior-inferior) should be blue. In the sagittal view, the corpus callosum (right-left) should be red. In the axial view, the anterior-posterior tracts should be green.<br />
<br />
Also, check if the color map includes the entire brain and if the color map looks unusual or not. Log this in the appropriate google doc.<br />
<br />
====Check for Cropping====<br />
In the output from the DTI_QA script there is a number for each of a set of regions (cerebellum, superior, temporal, frontal, occipital). This number represents the percentage of voxels missing in that region. If the number is greater than about 10, go back and look at the FA map to make sure that actual tract data are not missing (a small percentage, usually representing grey matter at the edges, can be missing without much effect). There will always be a large percentage of cerebellum voxels missing, but this can be ignored.<br />
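A quick way to apply the ~10% rule is to scan the report for regions over threshold. The exact layout of dti_report.txt is not documented here, so the two-column "region percent" format assumed below would need to be adapted to the real file:

```shell
#!/bin/bash
# Flag regions whose cropped-voxel percentage exceeds a threshold.
# ASSUMPTION: input lines look like "frontal 12.3" (region name, percent);
# adjust the awk fields to match the actual dti_report.txt layout.
# Cerebellum is skipped because a large missing percentage there is normal.
flag_cropping() {
    local report=$1 thresh=${2:-10}
    awk -v t="$thresh" '$1 != "cerebellum" && ($2 + 0) > (t + 0) { print $1, $2, "-> check FA map" }' "$report"
}
```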
<br />
====Watch raw data as movie====<br />
Load the raw data file (e.g. DTI_64dir_7.nii.gz) into fslview and watch through each volume as a movie. It is normal for the first volume to be much brighter; that is the B0 image.<br />
<br />
==QA Rating System==<br />
After logging the intermediate steps in the google doc, a final rating can be calculated. This is based on:<br />
:1. Coverage flag (based on cropping measures rated 0=no cropping, 1=minor cropping, 2=severe unusable cropping)<br />
:2. Motion flags (based on watching raw data as movie, and on pdfs).<br />
:3. Tensor direction flags (based on bvals and bvecs and color map)<br />
:4. Artifact flags<br />
<br />
The overall Quality score is generated from these measures and varies from 1-4. <br />
:1=excellent<br />
:2=good (useable, but depending on analysis might want to take a look at reason for score)<br />
:3=fair (useable, but depending on analysis might want to take a look at reason for score)<br />
:4=unusable (all individuals with vibration artifacts are in this category, along with any others with irreconcilable problems)<br />
:-1= not evaluated<br />
<br />
The scores and reasons for them are available on the HTAC database.<br />
<br />
<br />
[[File:DTI_Rankings.png]]<br />
<br />
<br />
<br />
----<br />
Link back to [[LA5C]] page. <br/><br />
Return to [[CNP]]</div>Katiehttp://lcni-3.uoregon.edu/phenowiki/index.php?title=MPRAGE&diff=8274MPRAGE2012-06-14T17:52:06Z<p>Katie: </p>
<hr />
<div>For registration purposes and also for structural imaging analyses, an MPRAGE was acquired during scan A.<br/><br />
The parameters for MPRAGE were the following: TR = 1.9 s, TE = 2.26 ms, FOV = 250, matrix = 256 x 256, sagittal plane, slice thickness = 1 mm, 176 slices.<br />
<br />
The structural data were processed in Freesurfer; details can be found at: [[Freesurfer_Quality_Control]]<br />
<br />
A list of individuals with excessive motion resulting in unusable data can be found at: [[MPRAGE_Motion]]<br />
<br />
<br />
<br />
<br />
<br />
Back to [[LA5C]]</div>Katiehttp://lcni-3.uoregon.edu/phenowiki/index.php?title=MPRAGE&diff=8273MPRAGE2012-06-14T17:49:23Z<p>Katie: Created page with 'For registration purposes and also for structural imaging analyses, an MPRAGE was acquired during scan A.<br/> The parameters for MPRAGE were the following: TR = 1.9 s, TE = 2.26…'</p>
<hr />
<div>For registration purposes and also for structural imaging analyses, an MPRAGE was acquired during scan A.<br/><br />
The parameters for MPRAGE were the following: TR = 1.9 s, TE = 2.26 ms, FOV = 250, matrix = 256 x 256, sagittal plane, slice thickness = 1 mm, 176 slices.</div>Katiehttp://lcni-3.uoregon.edu/phenowiki/index.php?title=MATCHED_BANDWIDTH_HIRES&diff=8272MATCHED BANDWIDTH HIRES2012-06-14T17:48:39Z<p>Katie: </p>
<hr />
<div>For use in registration of functional tasks a T2-weighted matched-bandwidth high-resolution anatomical scan (same slice prescription as EPI) was taken. The parameters were as follows: 4mm slices, TR/TE=5000/34, 4 averages, 128x128, 90 degree flip angle�.<br />
<br />
<br />
<br />
Back to [[LA5C]]</div>Katiehttp://lcni-3.uoregon.edu/phenowiki/index.php?title=MATCHED_BANDWIDTH_HIRES&diff=8271MATCHED BANDWIDTH HIRES2012-06-14T17:48:21Z<p>Katie: Created page with 'For use in registration of functional tasks a T2-weighted matched-bandwidth high-resolution anatomical scan (same slice prescription as EPI) was taken. The parameters were as fo…'</p>
<hr />
<div>For use in registration of functional tasks a T2-weighted matched-bandwidth high-resolution anatomical scan (same slice prescription as EPI) was taken. The parameters were as follows: 4mm slices, TR/TE=5000/34, 4 averages, 128x128, 90 degree flip angle�.</div>Katiehttp://lcni-3.uoregon.edu/phenowiki/index.php?title=SCAP&diff=8270SCAP2012-06-14T17:44:47Z<p>Katie: </p>
<hr />
<div>===SCAP===<br />
<br />
==Task Background Info==<br />
<br />
<br />
During the SDRT (or, SCAP), subjects were shown a target array of 1, 3, 5 or 7 yellow circles positioned pseudorandomly around a central fixation cross. After a delay, subjects were shown a single green circle and were required to indicate whether that circle was in the same position as one of the target circles had been. A relatively long stimulus presentation was used to allow subjects to fully encode the target array, minimizing a potential encoding bias on the basis of set size interaction. Likewise, decision or selection requirements were kept constant across set sizes to reduce possible effects of set size on response processes. In addition to load, delay period was manipulated, with delays of 1.5, 3 or 4.5 seconds. Trial events included a 2-sec target-array presentation, a 1.5, 3 or 4.5 sec delay period, and a 3-sec fixed response interval. A central fixation was visible throughout each of the 48 trials (12 per memory set size, with 4 at each delay length for each memory set). Half the trials were true-positive, and half were true-negative.<br />
(Glahn, 2003; Cannon, 2005)<br />
<br />
==Scoring Behavioral Data==<br />
/space/raid2/data/poldrack/CNP/scripts/behav_analyze/SCAP<br/><br />
<br />
1. all scripts pull up the file ‘sublist’ to determine which subjects to run. Before you run a new batch, edit that file using emacs (emacs sublist). IDs are in the format CNP_12345B. It is just a transient file, so you can delete what is in there. If you’d like to save the old version, save it as sublist with the date appended. The script will only recognize the plain ‘sublist’ file.<br/><br />
<br />
2. in matlab, run score_scap_behavioral_sublist.m<br/><br />
<br />
3. running this should create a file called summaryscore_all.txt in each person’s behav/SCAP folder. You can check who has these files (and therefore who still needs to be run) by typing <br/><br />
ls /space/raid2/data/poldrack/CNP/CONTROLS/*B/behav/SCAP/*<br/><br />
<br />
4. To create a text file summarizing all the data (which can be put into Excel), run the script make_big_scap_score_log.sh, which will pull from all the subjects who have ‘B’ directories and create a file called scap_summaryscore_all.txt. Since it pulls all the subjects, it is OK to write over this file. To run, just go into /space/raid2/data/poldrack/CNP/scripts/behav_analyze/SCAP and type<br/><br />
./make_big_scap_score_log.sh<br/><br />
<br />
You can now copy this into Excel, although you might need to use the ‘text to columns’ tool to get each number to go into its own cell.<br/><br />
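The gathering step that make_big_scap_score_log.sh performs can be sketched as below. The filenames match the section above, but the per-line layout (one line per summary file, with the subject ID prepended) is an assumption; the real script's output may differ.

```shell
#!/bin/bash
# Append each subject's summaryscore_all.txt to one combined log.
# ASSUMPTIONS: one line per summary file, subject ID prepended; the real
# make_big_scap_score_log.sh may format its output differently.
make_big_log() {
    local base=$1 out=$2
    : > "$out"                        # start a fresh log (safe to overwrite)
    for f in "$base"/*B/behav/SCAP/summaryscore_all.txt; do
        [ -f "$f" ] || continue
        local subj
        subj=$(basename "${f%/behav/SCAP/summaryscore_all.txt}")   # e.g. CNP_10159B
        echo "$subj $(cat "$f")" >> "$out"
    done
}
```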
<br />
==Creating Onset files (EVs)==<br />
/space/raid2/data/poldrack/CNP/scripts/behav_analyze/SCAP<br/><br />
<br />
1. this script also uses the sublist file, so you can easily run the behavioral scoring and this script on the same list of new people. Update sublist as described above.<br/><br />
<br />
2. in matlab, run make_scap_onsets_function_sublist.m<br/><br />
<br />
3. running this will create a series of files in each person’s own behav/SCAP folder. After both scripts have been run, the folder should look like this:<br/><br />
<br />
cond10_onsets.txt cond2_onsets.txt cond6_onsets.txt junk_onsets.txt<br/><br />
cond11_onsets.txt cond3_onsets.txt cond7_onsets.txt SCAP_10575.mat<br/><br />
cond12_onsets.txt cond4_onsets.txt cond8_onsets.txt summaryscore_all.txt<br/><br />
cond1_onsets.txt cond5_onsets.txt cond9_onsets.txt<br/><br />
<br />
4. The onset files will have contents that look something like this:<br/><br />
110.0196 8 1<br/><br />
289.0072 8 1<br/><br />
344.0090 8 1<br/><br />
373.0157 8 1<br/><br />
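These appear to be FSL three-column EV files (onset in seconds, duration, weight), consistent with their use in the FEAT first-levels below; that interpretation is an assumption, not stated in the scripts. A quick sanity check that every line has exactly three columns:

```shell
#!/bin/bash
# Exit 0 if every non-empty line of an onset file has exactly three
# whitespace-separated columns (FSL three-column EV format), else exit 1.
check_ev() {
    awk 'NF > 0 && NF != 3 { bad = 1 } END { exit bad }' "$1"
}
```

Running `check_ev cond1_onsets.txt` for each condition before first levels catches truncated or malformed onset files early.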
<br />
==Running First Level Analyses==<br />
/space/raid2/data/poldrack/CNP/scripts/run_level1_scripts/SCAP<br/><br />
<br />
1. The primary script for running first levels is SCAP_firstlevel_model1.sh. This script does the first phase of SCAP fMRI processing: it checks for the relevant files, creates an individualized .fsf file for each subject, and runs pre- and post-stats.<br />
<br />
It takes 4 arguments:<br/><br />
1. group vs. subject analysis, <br/><br />
2. population (CONTROL, SCHZ, etc) <br/><br />
3. which subject to run <br/><br />
4. whether to run FSL or just create the fsf file (run or norun)<br/><br />
<br />
There are a few ways you can run it:<br/><br />
a. to run on one person (here, CNP_10159B) and run FSL, go to the directory and type:<br/><br />
./SCAP_firstlevel_model1.sh subject CONTROLS 10159 run<br/><br />
<br />
b. to run on an entire group (all controls, all patients, etc)<br/><br />
./SCAP_firstlevel_model1.sh group CONTROLS all run<br/><br />
<br />
c. to run a specific group of people, you can use a second script that calls this one, run_multiple_scap.sh. For this script, you have to edit it first using emacs and fill in the people you want to run in the for-loop at the top, for instance<br/><br />
for id in 10523 10501 10159; do<br/><br />
<br />
you also need to edit the other relevant options, such as population and whether to run all the way through. It’ll automatically run in single-subject mode, and just loop through these people.<br/><br />
<br />
This can also be submitted to the grid, after it is edited, by typing<br/><br />
sge qsub run_multiple_scap.sh<br/><br />
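Putting options (a)-(c) together, the loop inside run_multiple_scap.sh presumably reduces to the pattern below. Here it is wrapped as a function that only prints the commands, so the argument order (analysis level, population, subject, run/norun) can be checked before anything is launched; this is a sketch, not the script's verified contents.

```shell
#!/bin/bash
# Dry-run sketch of the run_multiple_scap.sh pattern: loop a list of IDs
# through SCAP_firstlevel_model1.sh in single-subject mode. The echo makes
# this reviewable; drop it to actually run.
run_multiple_scap() {
    local pop=$1; shift
    local id
    for id in "$@"; do
        echo "./SCAP_firstlevel_model1.sh subject $pop $id run"
    done
}

run_multiple_scap CONTROLS 10523 10501 10159
```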
<br />
==Data Checking==<br />
After first levels were run, the data were checked for artifacts, motion effects, and unusual activation.<br/><br />
If a condition was missing, that was noted in the log, but the subject is still available for download. <br/><br />
<br />
==List of Models==<br />
SCAP_model1<br />
<br />
==Model description and contrasts==<br />
[[SCAP model1 detail]]<br />
<br />
==Completed analyses==<br />
===Papers===<br />
===Abstracts===<br />
[[Karlsgodt_2011_ACNP | Karlsgodt et al, 2011 American College of Neuropsychopharmacology]]</div>Katiehttp://lcni-3.uoregon.edu/phenowiki/index.php?title=SCAP&diff=8269SCAP2012-06-14T17:44:32Z<p>Katie: </p>
<hr />
<div>===SCAP===<br />
<br />
==Task Background Info==<br />
<br />
<br />
During the SDRT (or, SCAP), subjects were shown a target array of 1, 3, 5 or 7 yellow circles positioned pseudorandomly around a central fixation cross. After a delay, subjects were shown a single green circle and were required to indicate whether that circle was in the same position as one of the target circles had been. A relatively long stimulus presentation was used to allow subjects to fully encode the target array, minimizing a potential encoding bias on the basis of set size interaction. Likewise, decision or selection requirements were kept constant across set sizes to reduce possible effects of set size on response processes. In addition to load, delay period was manipulated, with delays of 1.5, 3 or 4.5 seconds. Trial events included a 2-sec target-array presentation, a 1.5, 3 or 4.5 sec delay period, and a 3-sec fixed response interval. A central fixation was visible throughout each of the 48 trials (12 per memory set size, with 4 at each delay length for each memory set). Half the trials were true-positive, and half were true-negative.<br />
(Glahn, 2003; Cannon, 2005)<br />
<br />
==Scoring Behavioral Data==<br />
/space/raid2/data/poldrack/CNP/scripts/behav_analyze/SCAP<br/><br />
<br />
1. all scripts pull up the file ‘sublist’ to determine which subjects to run. Before you run a new batch, edit that file using emacs (emacs sublist). IDs are in the format CNP_12345B. It is just a transient file, so you can delete what is in there. If you’d like to save the old version, save it as sublist with the date appended. The script will only recognize the plain ‘sublist’ file.<br/><br />
<br />
2. in matlab, run score_scap_behavioral_sublist.m<br/><br />
<br />
3. running this should create a file called summaryscore_all.txt in each person’s behav/SCAP folder. You can check who has these files (and therefore who still needs to be run) by typing <br/><br />
ls /space/raid2/data/poldrack/CNP/CONTROLS/*B/behav/SCAP/*<br/><br />
<br />
4. To create a text file summarizing all the data (which can be put into Excel), run the script make_big_scap_score_log.sh, which will pull from all the subjects who have ‘B’ directories and create a file called scap_summaryscore_all.txt. Since it pulls all the subjects, it is OK to write over this file. To run, just go into /space/raid2/data/poldrack/CNP/scripts/behav_analyze/SCAP and type<br/><br />
./make_big_scap_score_log.sh<br/><br />
<br />
You can now copy this into excel, although you might need to use the ‘text to columns’ tool to get each number to go into its own cell.<br/><br />
<br />
==Creating Onset files (EVs)==<br />
/space/raid2/data/poldrack/CNP/scripts/behav_analyze/SCAP<br/><br />
<br />
1. this script also uses the sublist file, so you can easily run the behavioral scoring and this script on the same list of new people. Update sublist as described above.<br/><br />
<br />
2. in matlab, run make_scap_onsets_function_sublist.m<br/><br />
<br />
3. running this will create a series of files in each person’s own behav/SCAP folder. After both scripts have been run, the folder should look like this:<br/><br />
<br />
cond10_onsets.txt cond2_onsets.txt cond6_onsets.txt junk_onsets.txt<br/><br />
cond11_onsets.txt cond3_onsets.txt cond7_onsets.txt SCAP_10575.mat<br/><br />
cond12_onsets.txt cond4_onsets.txt cond8_onsets.txt summaryscore_all.txt<br/><br />
cond1_onsets.txt cond5_onsets.txt cond9_onsets.txt<br/><br />
<br />
4. The onset files will have contents that look something like this:<br/><br />
110.0196 8 1<br/><br />
289.0072 8 1<br/><br />
344.0090 8 1<br/><br />
373.0157 8 1<br/><br />
<br />
==Running First Level Analyses==<br />
/space/raid2/data/poldrack/CNP/scripts/run_level1_scripts/SCAP<br/><br />
<br />
1. The primary script for running first levels is SCAP_firstlevel_model1.sh. This script does the first phase of SCAP fMRI processing: it checks for the relevant files, creates an individualized .fsf file for each subject, and runs pre- and post-stats.<br />
<br />
It takes 4 arguments:<br/><br />
1. group vs. subject analysis, <br/><br />
2. population (CONTROL, SCHZ, etc) <br/><br />
3. which subject to run <br/><br />
4. whether to run FSL or just create the fsf file (run or norun)<br/><br />
<br />
There are a few ways you can run it:<br/><br />
a. to run on one person (here, CNP_10159B) and run FSL, go to the directory and type:<br/><br />
./SCAP_firstlevel_model1.sh subject CONTROLS 10159 run<br/><br />
<br />
b. to run on an entire group (all controls, all patients, etc)<br/><br />
./SCAP_firstlevel_model1.sh group CONTROLS all run<br/><br />
<br />
c. to run a specific group of people, you can use a second script that calls this one, run_multiple_scap.sh. For this script, you have to edit it first using emacs and fill in the people you want to run in the for-loop at the top, for instance<br/><br />
for id in 10523 10501 10159; do<br/><br />
<br />
you also need to edit the other relevant options, such as population and whether to run all the way through. It’ll automatically run in single-subject mode, and just loop through these people.<br/><br />
<br />
This can also be submitted to the grid, after it is edited, by typing<br/><br />
sge qsub run_multiple_scap.sh<br/><br />
<br />
==Data Checking==<br />
After first levels were run, the data were checked for artifacts, motion effects, and unusual activation.<br/><br />
If a condition was missing, that was noted in the log, but the subject is still available for download. <br/><br />
<br />
==List of Models==<br />
SCAP_model1<br />
<br />
==Model description and contrasts==<br />
[[SCAP model1 detail]]<br />
<br />
==Completed analyses==<br />
===Papers===<br />
===Abstracts/results===<br />
[[Karlsgodt_2011_ACNP | Karlsgodt et al, 2011 American College of Neuropsychopharmacology]]</div>Katiehttp://lcni-3.uoregon.edu/phenowiki/index.php?title=Karlsgodt_2011_ACNP&diff=8268Karlsgodt 2011 ACNP2012-06-14T17:43:13Z<p>Katie: Created page with 'Presented at ACNP, December 2011: Capacity-Based Differences in Structural Connectivity and Functional Network Activation Associated With Spatial Working Memory Katherine H. Kar…'</p>
<hr />
<div>Presented at ACNP, December 2011: <br />
Capacity-Based Differences in Structural Connectivity and Functional Network Activation Associated With Spatial Working Memory<br />
Katherine H. Karlsgodt, Eliza Congdon, Russell A. Poldrack, Angelica A. Bato, Fred W. Sabb, Edythe London, Robert Bilder, Tyrone D. Cannon<br />
<br />
Background: Working memory is a core cognitive function that is thought to play a role in a number of more complex, higher-level processes. However, working memory capacity varies substantially even across healthy individuals. While there are indications that white matter structure, grey-matter integrity, neural signaling changes, and other factors may contribute to this variation, the roots of these individual differences are still under investigation. It is of particular interest to probe what neural signatures differentiate high-performing individuals, as this information may help us understand how to improve functioning in individuals who have lower performance either due to natural variation or to effects of neurocognitive disorders. Here we sought to assess differences in functional activation in a large sample of healthy individuals with a wide range of behavioral performance using functional magnetic resonance imaging (fMRI) during a spatial working memory task.<br />
<br />
Methods: As a part of the Consortium for Neuropsychiatric Phenomics project at UCLA, we assessed 117 healthy community participants aged 21-50 years. We administered a Sternberg-style spatial working memory task with 4 levels of difficulty during fMRI. To quantify performance differences, we calculated each subject’s working memory capacity using Cowan’s formula. We then performed a voxel-wise analysis, corrected for age and sex, to determine which activation patterns were correlated and anti-correlated with individual working memory capacity.<br />
<br />
Results: Across the entire group, the task elicited activation in regions previously associated with working memory, namely the superior frontal lobes, superior parietal lobes, anterior cingulate, and striatum. In addition, there was significantly decreased activation in regions associated with the default mode network, including medial prefrontal cortex (mPFC), posterior cingulate cortex (PCC), and superior temporal lobes. Notably, voxel-wise regression of working memory capacity predicting functional activation across the whole task revealed that the primary difference in activation associated with higher capacity was a more pronounced decrease in mPFC activation during task performance.<br />
<br />
Discussion: Individuals with higher working memory capacity were characterized by more successful disengagement of areas associated with the default mode network during task performance. This effect suggests that the hallmark of high performance is dexterous coordination of interactive neural networks rather than simply increased or decreased activation in isolated task-related nodes. The finding has implications for our understanding of why certain healthy individuals have higher and lower working memory abilities. It also can inform our conceptualization of working memory deficits in patient populations, particularly those associated with neural connectivity deficits, such as schizophrenia.</div>Katiehttp://lcni-3.uoregon.edu/phenowiki/index.php?title=SCAP&diff=8267SCAP2012-06-14T17:43:08Z<p>Katie: /* Abstracts/results */</p>
<hr />
<div>===SCAP===<br />
<br />
==Task Background Info==<br />
<br />
Sample Text<br />
<br />
During the SDRT (or, SCAP), subjects were shown a target array of 1, 3, 5 or 7 yellow circles positioned pseudorandomly around a central fixation cross. After a delay, subjects were shown a single green circle and were required to indicate whether that circle was in the same position as one of the target circles had been. A relatively long stimulus presentation was used to allow subjects to fully encode the target array, minimizing a potential encoding bias on the basis of set size interaction. Likewise, decision or selection requirements were kept constant across set sizes to reduce possible effects of set size on response processes. In addition to load, delay period was manipulated, with delays of 1.5, 3 or 4.5 seconds. Trial events included a 2-sec target-array presentation, a 1.5, 3 or 4.5 sec delay period, and a 3-sec fixed response interval. A central fixation was visible throughout each of the 48 trials (12 per memory set size, with 4 at each delay length for each memory set). Half the trials were true-positive, and half were true-negative.<br />
(Glahn, 2003; Cannon, 2005)<br />
<br />
==Scoring Behavioral Data==<br />
/space/raid2/data/poldrack/CNP/scripts/behav_analyze/SCAP<br/><br />
<br />
1. All scripts read the file ‘sublist’ to determine which subjects to run. Before you run a new batch, edit that file using emacs (emacs sublist). IDs are in the format CNP_12345B. It is just a transient file, so you can delete what is in there. If you’d like to save the old version, save a copy of sublist with the date appended. The script will only recognize the plain ‘sublist’ file.<br/><br />
<br />
2. In MATLAB, run score_scap_behavioral_sublist.m<br/><br />
<br />
3. Running this should create a file called summaryscore_all.txt in each person’s behav/SCAP folder. You can check who has these files (and therefore who still needs to be run) by typing <br/><br />
ls /space/raid2/data/poldrack/CNP/CONTROLS/*B/behav/SCAP/*<br/><br />
<br />
4. To create a text file summarizing all the data (which can be put into Excel), run the script make_big_scap_score_log.sh, which will pull from all the subjects who have ‘B’ directories and create a file called scap_summaryscore_all.txt. Since it pulls all the subjects, it’s OK to write over this file. To run, just go into /space/raid2/data/poldrack/CNP/scripts/behav_analyze/SCAP and type<br/><br />
./make_big_scap_score_log.sh<br/><br />
<br />
You can now copy this into Excel, although you might need to use the ‘text to columns’ tool to get each number into its own cell.<br/><br />
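The batch check in step 3 can also be scripted; a minimal sketch in Python, assuming the directory layout shown on this page (subjects_missing_scores is a hypothetical helper, not part of the pipeline):<br />

```python
from pathlib import Path

# Base path taken from this page; the <ID>B/behav/SCAP layout is assumed.
BASE = Path("/space/raid2/data/poldrack/CNP/CONTROLS")

def subjects_missing_scores(base):
    """Return IDs of subjects whose behav/SCAP folder lacks summaryscore_all.txt."""
    missing = []
    for subj in sorted(base.glob("*B")):
        if not (subj / "behav" / "SCAP" / "summaryscore_all.txt").exists():
            missing.append(subj.name)
    return missing

if BASE.exists():
    print("\n".join(subjects_missing_scores(BASE)))
```

The printed IDs are the subjects to add to ‘sublist’ for the next scoring batch.<br />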
<br />
==Creating Onset files (EVs)==<br />
/space/raid2/data/poldrack/CNP/scripts/behav_analyze/SCAP<br/><br />
<br />
1. This script also uses the sublist file, so you can easily run the behavioral scoring and this script on the same list of new people. Update sublist as described above.<br/><br />
<br />
2. In MATLAB, run make_scap_onsets_function_sublist.m<br/><br />
<br />
3. Running this will create a series of files in each person’s own behav/SCAP folder. After both scripts have been run, the folder should look like this:<br/><br />
<br />
cond10_onsets.txt cond2_onsets.txt cond6_onsets.txt junk_onsets.txt<br/><br />
cond11_onsets.txt cond3_onsets.txt cond7_onsets.txt SCAP_10575.mat<br/><br />
cond12_onsets.txt cond4_onsets.txt cond8_onsets.txt summaryscore_all.txt<br/><br />
cond1_onsets.txt cond5_onsets.txt cond9_onsets.txt<br/><br />
<br />
4. The onset files will have contents that look something like this:<br/><br />
110.0196 8 1<br/><br />
289.0072 8 1<br/><br />
344.0090 8 1<br/><br />
373.0157 8 1<br/><br />
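These appear to follow FSL's three-column EV convention: onset time (s), duration (s), and weight per line. A minimal parsing sketch (read_onsets is a hypothetical helper, not one of the pipeline scripts):<br />

```python
def read_onsets(path):
    """Parse a 3-column EV file into (onset, duration, weight) tuples,
    skipping any line that does not have exactly three fields."""
    events = []
    with open(path) as f:
        for line in f:
            fields = line.split()
            if len(fields) == 3:
                onset, duration, weight = (float(v) for v in fields)
                events.append((onset, duration, weight))
    return events
```

This is handy for sanity-checking onsets (e.g. that they are increasing and fall within the run length) before first-level analysis.<br />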
<br />
==Running First Level Analyses==<br />
/space/raid2/data/poldrack/CNP/scripts/run_level1_scripts/SCAP<br/><br />
<br />
1. The primary script for running first levels is SCAP_firstlevel_model1.sh. This script does the first phase of SCAP fMRI processing: it checks for the relevant files, creates an individualized .fsf file for each subject, and runs pre- and post-stats.<br />
<br />
It takes 4 arguments:<br/><br />
1. group vs subject analysis<br/><br />
2. population (CONTROL, SCHZ, etc.)<br/><br />
3. which subject to run<br/><br />
4. whether to run FSL or just create the .fsf file (run or norun)<br/><br />
<br />
There are a few ways you can run it:<br/><br />
a. To run on one person (here, CNP_10159B) and run FSL, go to the directory and type<br/><br />
./SCAP_firstlevel_model1.sh subject CONTROLS 10159 run<br/><br />
<br />
b. To run on an entire group (all controls, all patients, etc.)<br/><br />
./SCAP_firstlevel_model1.sh group CONTROLS all run<br/><br />
<br />
c. To run a specific group of people, you can use a second script that calls this one, run_multiple_scap.sh. For this script, you have to edit it first using emacs and fill in the people you want to run in the for-loop at the top, for instance<br/><br />
for id in 10523 10501 10159; do<br/><br />
<br />
You also need to edit the other relevant options, such as population and whether to run all the way through. It’ll automatically run in single-subject mode and just loop through these people.<br/><br />
<br />
This can also be submitted to the grid, after it is edited, by typing<br/><br />
sge qsub run_multiple_scap.sh<br/><br />
<br />
==Data Checking==<br />
After first levels were run, data were checked for artifacts, motion effects, and unusual activation.<br/><br />
If a condition was missing, that was noted in the log, but the subject is still available for download.<br/><br />
<br />
==List of Models==<br />
SCAP_model1<br />
<br />
==Model description and contrasts==<br />
[[SCAP model1 detail]]<br />
<br />
==Behavioral variables==<br />
[[SCAP model1 behavioral variables]]<br />
<br />
==Completed analyses==<br />
<br />
==Abstracts/results==<br />
<br />
[[Karlsgodt_2011_ACNP | Karlsgodt et al, 2011 American College of Neuropsychopharmacology]]<br />
<br />
==Group level fsfs==</div>Katiehttp://lcni-3.uoregon.edu/phenowiki/index.php?title=SCAP&diff=8266SCAP2012-06-14T17:41:45Z<p>Katie: /* Data Checking */</p>
<hr />
<div>===SCAP===<br />
<br />
==Task Background Info==<br />
<br />
Sample Text<br />
<br />
During the SDRT (or, SCAP), subjects were shown a target array of 1, 3, 5 or 7 yellow circles positioned pseudorandomly around a central fixation cross. After a delay, subjects were shown a single green circle and were required to indicate whether that circle was in the same position as one of the target circles had been. A relatively long stimulus presentation was used to allow subjects to fully encode the target array, minimizing a potential encoding bias on the basis of set size interaction. Likewise, decision or selection requirements were kept constant across set sizes to reduce possible effects of set size on response processes. In addition to load, delay period was manipulated, with delays of 1.5, 3 or 4.5 seconds. Trial events included a 2-sec target-array presentation, a 1.5, 3 or 4.5 sec delay period, and a 3-sec fixed response interval. A central fixation was visible throughout each of the 48 trials (12 per memory set size, with 4 at each delay length for each memory set). Half the trials were true-positive, and half were true-negative.<br />
(Glahn, 2003; Cannon, 2005)<br />
<br />
==Scoring Behavioral Data==<br />
/space/raid2/data/poldrack/CNP/scripts/behav_analyze/SCAP<br/><br />
<br />
1. All scripts read the file ‘sublist’ to determine which subjects to run. Before you run a new batch, edit that file using emacs (emacs sublist). IDs are in the format CNP_12345B. It is just a transient file, so you can delete what is in there. If you’d like to save the old version, save a copy of sublist with the date appended. The script will only recognize the plain ‘sublist’ file.<br/><br />
<br />
2. In MATLAB, run score_scap_behavioral_sublist.m<br/><br />
<br />
3. Running this should create a file called summaryscore_all.txt in each person’s behav/SCAP folder. You can check who has these files (and therefore who still needs to be run) by typing <br/><br />
ls /space/raid2/data/poldrack/CNP/CONTROLS/*B/behav/SCAP/*<br/><br />
<br />
4. To create a text file summarizing all the data (which can be put into Excel), run the script make_big_scap_score_log.sh, which will pull from all the subjects who have ‘B’ directories and create a file called scap_summaryscore_all.txt. Since it pulls all the subjects, it’s OK to write over this file. To run, just go into /space/raid2/data/poldrack/CNP/scripts/behav_analyze/SCAP and type<br/><br />
./make_big_scap_score_log.sh<br/><br />
<br />
You can now copy this into Excel, although you might need to use the ‘text to columns’ tool to get each number into its own cell.<br/><br />
<br />
==Creating Onset files (EVs)==<br />
/space/raid2/data/poldrack/CNP/scripts/behav_analyze/SCAP<br/><br />
<br />
1. This script also uses the sublist file, so you can easily run the behavioral scoring and this script on the same list of new people. Update sublist as described above.<br/><br />
<br />
2. In MATLAB, run make_scap_onsets_function_sublist.m<br/><br />
<br />
3. Running this will create a series of files in each person’s own behav/SCAP folder. After both scripts have been run, the folder should look like this:<br/><br />
<br />
cond10_onsets.txt cond2_onsets.txt cond6_onsets.txt junk_onsets.txt<br/><br />
cond11_onsets.txt cond3_onsets.txt cond7_onsets.txt SCAP_10575.mat<br/><br />
cond12_onsets.txt cond4_onsets.txt cond8_onsets.txt summaryscore_all.txt<br/><br />
cond1_onsets.txt cond5_onsets.txt cond9_onsets.txt<br/><br />
<br />
4. The onset files will have contents that look something like this:<br/><br />
110.0196 8 1<br/><br />
289.0072 8 1<br/><br />
344.0090 8 1<br/><br />
373.0157 8 1<br/><br />
<br />
==Running First Level Analyses==<br />
/space/raid2/data/poldrack/CNP/scripts/run_level1_scripts/SCAP<br/><br />
<br />
1. The primary script for running first levels is SCAP_firstlevel_model1.sh. This script does the first phase of SCAP fMRI processing: it checks for the relevant files, creates an individualized .fsf file for each subject, and runs pre- and post-stats.<br />
<br />
It takes 4 arguments:<br/><br />
1. group vs subject analysis<br/><br />
2. population (CONTROL, SCHZ, etc.)<br/><br />
3. which subject to run<br/><br />
4. whether to run FSL or just create the .fsf file (run or norun)<br/><br />
<br />
There are a few ways you can run it:<br/><br />
a. To run on one person (here, CNP_10159B) and run FSL, go to the directory and type<br/><br />
./SCAP_firstlevel_model1.sh subject CONTROLS 10159 run<br/><br />
<br />
b. To run on an entire group (all controls, all patients, etc.)<br/><br />
./SCAP_firstlevel_model1.sh group CONTROLS all run<br/><br />
<br />
c. To run a specific group of people, you can use a second script that calls this one, run_multiple_scap.sh. For this script, you have to edit it first using emacs and fill in the people you want to run in the for-loop at the top, for instance<br/><br />
for id in 10523 10501 10159; do<br/><br />
<br />
You also need to edit the other relevant options, such as population and whether to run all the way through. It’ll automatically run in single-subject mode and just loop through these people.<br/><br />
<br />
This can also be submitted to the grid, after it is edited, by typing<br/><br />
sge qsub run_multiple_scap.sh<br/><br />
<br />
==Data Checking==<br />
After first levels were run, data were checked for artifacts, motion effects, and unusual activation.<br/><br />
If a condition was missing, that was noted in the log, but the subject is still available for download.<br/><br />
<br />
==List of Models==<br />
SCAP_model1<br />
<br />
==Model description and contrasts==<br />
[[SCAP model1 detail]]<br />
<br />
==Behavioral variables==<br />
[[SCAP model1 behavioral variables]]<br />
<br />
==Completed analyses==<br />
<br />
==Abstracts/results==<br />
<br />
Presented at ACNP, December 2011: <br />
Capacity-Based Differences in Structural Connectivity and Functional Network Activation Associated With Spatial Working Memory<br />
Katherine H. Karlsgodt, Eliza Congdon, Russell A. Poldrack, Angelica A. Bato, Fred W. Sabb, Edythe London, Robert Bilder, Tyrone D. Cannon<br />
<br />
Background: Working memory is a core cognitive function that is thought to play a role in a number of more complex, higher-level processes. However, working memory capacity varies substantially even across healthy individuals. While there are indications that white matter structure, grey-matter integrity, neural signaling changes, and other factors may contribute to this variation, the roots of these individual differences are still under investigation. It is of particular interest to probe what neural signatures differentiate high-performing individuals, as this information may help us understand how to improve functioning in individuals who have lower performance either due to natural variation or to effects of neurocognitive disorders. Here we sought to assess differences in functional activation in a large sample of healthy individuals with a wide range of behavioral performance using functional magnetic resonance imaging (fMRI) during a spatial working memory task.<br />
<br />
Methods: As a part of the Consortium for Neuropsychiatric Phenomics project at UCLA, we assessed 117 healthy community participants aged 21-50 years. We administered a Sternberg-style spatial working memory task with 4 levels of difficulty during fMRI. To quantify performance differences, we calculated each subject’s working memory capacity using Cowan’s formula. We then performed a voxel-wise analysis, corrected for age and sex, to determine which activation patterns were correlated and anti-correlated with individual working memory capacity.<br />
<br />
Results: Across the entire group, the task elicited activation in regions previously associated with working memory, namely the superior frontal lobes, superior parietal lobes, anterior cingulate, and striatum. In addition, there was significantly decreased activation in regions associated with the default mode network, including medial prefrontal cortex (mPFC), posterior cingulate cortex (PCC), and superior temporal lobes. Notably, voxel-wise regression of working memory capacity predicting functional activation across the whole task revealed that the primary difference in activation associated with higher capacity was a more pronounced decrease in mPFC activation during task performance.<br />
<br />
Discussion: Individuals with higher working memory capacity were characterized by more successful disengagement of areas associated with the default mode network during task performance. This effect suggests that the hallmark of high performance is dexterous coordination of interactive neural networks rather than simply increased or decreased activation in isolated task-related nodes. The finding has implications for our understanding of why certain healthy individuals have higher and lower working memory abilities. It also can inform our conceptualization of working memory deficits in patient populations, particularly those associated with neural connectivity deficits, such as schizophrenia.<br />
<br />
==Group level fsfs==</div>Katiehttp://lcni-3.uoregon.edu/phenowiki/index.php?title=DTI_Quality_Control&diff=8265DTI Quality Control2012-06-14T00:05:08Z<p>Katie: </p>
<hr />
<div>Link back to [[LA5C]] page.<br />
<br />
'''These are the usable subjects for each group, on each scanner.'''<br />
<br />
[[File:DTI_Rankings.png]]<br />
<br />
<br />
<br />
----<br />
=Quality Control=<br />
The checking procedures for CNP follow the Cannon Lab DTI QA Protocol, which was adapted from procedures in Paul Thompson's lab.<br />
<br />
==DTI_QA script==<br />
The additional QA script is located at /space/raid2/data/poldrack/CNP/scripts/DTI_QA.sh<br />
<br />
===Usage===<br />
The usage is DTI_QA <subjectID> <group>, or DTI_QA all <group>.<br />
<br />
For example, to run the script on one subject in the CNP schizophrenia group,<br/><br />
1. Log on to func and go to the directory /space/raid2/data/poldrack/CNP/scripts:<br />
:> ssh $username@funcserv1<br />
:> cd /space/raid2/data/poldrack/CNP/scripts<br />
<br />
2. Run the script, e.g.:<br />
:> dti_proc_temp.sh CNP_50006B SCHZ &<br />
<br />
3. Alternatively, to run the script on all subjects in the CNP schizophrenia group,<br/><br />
:> dti_proc_temp.sh all SCHZ &<br />
<br />
===Script Actions===<br />
This script creates a small text file to be used in QA. If the script runs properly it:<br/><br />
1. matches bvals and bvecs<br/><br />
2. calculates mean in-mask FA and MD<br/><br />
3. calculates motion in each direction<br/><br />
4. creates a standard deviation file for regular and mcf images<br/><br />
5. uses regional masks to calculate the percentage of cropped voxels in the occipital lobe, frontal lobe, superior region, temporal lobes and cerebellum.<br/><br />
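Step 3's per-direction motion numbers can be sketched from the mcflirt motion-parameter (.par) output that the preprocessing produces. This is a hypothetical helper, not the script's actual code, and the assumed .par layout (three rotations in radians followed by three translations in mm, one row per volume) is the standard mcflirt convention:<br />

```python
def motion_summary(par_path):
    """Per-column mean and max absolute motion from a 6-column .par file
    (assumed mcflirt layout: 3 rotations in radians, then 3 translations in mm)."""
    cols = None
    with open(par_path) as f:
        for line in f:
            vals = [abs(float(v)) for v in line.split()]
            if not vals:
                continue
            if cols is None:
                cols = [[] for _ in vals]
            for col, v in zip(cols, vals):
                col.append(v)
    means = [sum(c) / len(c) for c in cols]
    maxes = [max(c) for c in cols]
    return means, maxes
```

Large maximum translations (relative to voxel size) are the values that typically warrant a motion flag in QA.<br />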
<br />
===Output===<br />
The script will create a <b>dti_report.txt</b> file in the raw DTI directory of the subject.<br />
<br />
==How to Do QA==<br />
===Check Diagnostic Log===<br />
After running the dti_proc.sh script, check the diagnostic log for quality assurance of the DTI data:<br/><br />
1. Log on to func and go to the directory, for example, /space/raid2/data/poldrack/CNP/${group}/CNP_{subjectID}/raw/DTI_64DIR_*:<br />
:> ssh $username@funcserv1<br />
:> cd /space/raid2/data/poldrack/CNP/${group}/CNP_{subjectID}/raw/DTI_64DIR_*<br />
:> ls<br />
<br />
2. Open the dti_diag.pdf, using the command <b>[[Basic UNIX Commands#evince | evince]]</b>:<br />
:> evince dti_diag.pdf &<br />
<br />
3. Check whether the bvals and bvecs match the CNP standards, and log this in the [https://spreadsheets.google.com/ccc?key=0AhLKRRgAIOCVdG9rY0szNFppcGZ0QmF3ejBYLUY5S0E&hl=en#gid=0 CNP DTI QA Google Document].<br />
<br />
4. Go to the raw DTI directory of the subject, and open the dti_report.txt file, using the command <b>[[Basic UNIX Commands#emacs | emacs]]</b>:<br />
:> cd /space/raid2/data/poldrack/CNP/SCHZ/CNP_50006B/DTI_64DIR_3<br />
:> ls<br />
:> emacs dti_report.txt &<br />
<br />
5. Check whether the bvals and bvecs match the CNP standards, and log this in the [https://spreadsheets.google.com/ccc?key=0AhLKRRgAIOCVdG9rY0szNFppcGZ0QmF3ejBYLUY5S0E&hl=en#gid=0 CNP DTI QA Google Document].<br />
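The bvals comparison in steps 3 and 5 amounts to an element-wise numeric match against the site standard. A minimal sketch (bvals_match is a hypothetical helper, and the tolerance value is an assumption):<br />

```python
def bvals_match(subject_bvals, standard_bvals, tol=5.0):
    """True if two bval lists agree element-wise within tol (s/mm^2).
    A length mismatch (wrong number of directions) also fails the check."""
    if len(subject_bvals) != len(standard_bvals):
        return False
    return all(abs(a - b) <= tol for a, b in zip(subject_bvals, standard_bvals))
```

A failed match is what gets flagged in the CNP DTI QA Google Document.<br />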
<br />
===Check for Artifacts===<br />
Artifacts observed in this data set include:<br />
:-missing slices- these appear on only one volume and consist of an entire isolated missing horizontal slice<br />
:-vibration artifact (only on BMC subjects)- this usually shows as a red patch directly on the midline, primarily in the parietal region<br />
:-striping<br />
:-cropping<br />
<br />
===Check Raw Data===<br />
====Check FA Map====<br />
After running the dti_proc_regressor.sh script, check the fractional anisotropy (FA) map for quality assurance:<br/><br />
1. Log on to func and go to the directory /space/raid2/data/poldrack/CNP/${group}/CNP_{subjectID}/raw/DTI_64DIR_*:<br />
:> ssh $username@funcserv1<br />
:> cd /space/raid2/data/poldrack/CNP/${group}/CNP_{subjectID}/raw/DTI_64DIR_*<br />
:> ls<br />
<br />
2. Open the FA map in FSLView:<br />
:> fslview dtifit_FA.nii.gz &<br />
<br />
3. Check if the FA map includes the entire brain and if the FA map looks unusual or not, and log this in the [https://spreadsheets.google.com/ccc?key=0AhLKRRgAIOCVdG9rY0szNFppcGZ0QmF3ejBYLUY5S0E&hl=en#gid=0 CNP DTI QA Google Document].<br />
<br />
====Check Color Map====<br />
After running the dti_proc.sh script, check the color map for quality assurance:<br/><br />
1. Log on to func and go to the directory /space/raid2/data/poldrack/CNP/${group}/CNP_{subjectID}/raw/DTI_64DIR_*:<br />
:> ssh $username@funcserv1<br />
:> cd /space/raid2/data/poldrack/CNP/${group}/CNP_{subjectID}/raw/DTI_64DIR_*<br />
:> ls<br />
<br />
2. Open both the FA and color maps in FSLView (or add the color map to FSLView, if you are already viewing the FA map):<br />
:> fslview dtifit_FA.nii.gz dtifit_V1.nii.gz &<br />
<br />
3. To view the color map, select the dtifit_V1 file, and press the "i" button. An "Overlay Information Dialog" window will appear. For "Display as:", select "RGB" and for "Modulation:", select "dtifit_FA". Close this window.<br />
<br />
4. The color map should now display with a dark background and red/blue/green tracts.<br />
<br />
5. Check if the directions of the major fiber tracts are colored appropriately by scrolling through the slices. In the coronal view, the corticospinal tract (superior-inferior) should be blue. In the sagittal view, the corpus callosum (right-left) should be red. In the axial view, the anterior-posterior tracts should be green.<br />
<br />
Also, check if the color map includes the entire brain and if the color map looks unusual or not. Log this in the appropriate google doc.<br />
<br />
====Check for Cropping====<br />
In the output from the DTI_QA script is a number for each of a set of regions (cerebellum, superior, temporal, frontal). This number represents the percentage of voxels missing in that region. If the number is greater than about 10, you should go back and look at the FA map to make sure that actual tract data is not missing (some small percent, which would usually represent grey matter, can be missing off the edges without much effect). There will always be a large percentage of cerebellum voxels missing, but this can be ignored.<br />
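The regional numbers can be thought of as the percentage of in-mask voxels whose FA value is zero. A minimal sketch over flat voxel lists (percent_cropped is a hypothetical helper; the real script works from FSL regional masks):<br />

```python
def percent_cropped(fa_values, mask):
    """Percent of in-mask voxels with FA == 0 (i.e. cropped / missing data).
    fa_values and mask are flat, equal-length sequences; mask entries are 0/1."""
    in_region = [v for v, m in zip(fa_values, mask) if m]
    if not in_region:
        return 0.0
    missing = sum(1 for v in in_region if v == 0)
    return 100.0 * missing / len(in_region)
```

Per the rule above, a value over about 10 for any region other than cerebellum means the FA map should be inspected by eye.<br />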
<br />
====Watch raw data as movie====<br />
Load up the raw data file (like DTI_64dir_7.nii.gz) into fslview, and watch through each volume as a movie. It is normal for the first volume to be much brighter, that is the B0 image.<br />
<br />
==QA Rating System==<br />
After logging the intermediate steps in the google doc, a final rating can be calculated. This is based on:<br />
:1. Coverage flag (based on cropping measures rated 0=no cropping, 1=minor cropping, 2=severe unusable cropping)<br />
:2. Motion flags (based on watching raw data as movie, and on pdfs).<br />
:3. Tensor direction flags (based on bvals and bvecs and color map)<br />
:4. Artifact flags<br />
<br />
The overall Quality score is generated from these measures and varies from 1-4. <br />
:1=excellent<br />
:2=good (useable, but depending on analysis might want to take a look at reason for score)<br />
:3=fair (useable, but depending on analysis might want to take a look at reason for score)<br />
:4=unusable (all individuals with vibration artifacts are in this category, along with any others with irreconcilable problems)<br />
:-1= not evaluated<br />
<br />
The scores and reasons for them are available on the HTAC database.<br />
<br />
<br />
[[File:DTI_Rankings.png]]<br />
<br />
<br />
<br />
----<br />
Link back to [[LA5C]] page. <br/><br />
Return to [[CNP]]</div>Katiehttp://lcni-3.uoregon.edu/phenowiki/index.php?title=DTI_Quality_Control&diff=8264DTI Quality Control2012-06-14T00:01:54Z<p>Katie: </p>
<hr />
<div>Link back to [[LA5C]] page.<br />
<br />
'''These are the usable subjects for each group, on each scanner.'''<br />
<br />
[[File:DTI_Rankings.png]]<br />
<br />
<br />
<br />
----<br />
=QA Procedures=<br />
[[DTI Analysis Tools and Scripts List]]<br />
<br />
#Check Raw DTI Data Exists<br />
Check that the raw DTI data exists for each subject<br />
<br />
#Run dti_proc.sh Script<br />
<b>dti_proc.sh</b> is a script written by Russ Poldrack, originally for CNP to preprocess raw DTI data. The path for the script is /space/raid2/data/poldrack/CNP/scripts/DTI/dti_proc.sh. The directory structure will need to be adapted for individual projects. The usage is dti_proc <input directory path>.<br/><br />
<br />
Note: an alternate version called "dti_proc_regressor" exists. The difference between the two versions is that the regressor script contains a nuisance regressor based on the Gallichan, 2010 paper, which we hoped could to some extent correct for vibration artifacts. This artifact exists on BMC data before Fall 2010. After Fall 2010 the table was bolted down, correcting the artifact. The CCN table was bolted at installation, thus avoiding the artifact entirely. However, this effort to correct the artifact was unsuccessful and the individuals with vibration artifacts were eliminated. <br/><br />
<br />
For example, to run the script on one CNP subject,<br/><br />
Find the path of the raw DTI directory for one subject, and run the script, e.g.:<br />
:> dti_proc /space/raid2/data/poldrack/CNP/SCHZ/CNP_50006B/raw/DTI_64DIR_3 &<br />
<br />
For the subject indicated, the script will output the following files in the raw DTI directory:<br />
:B0_image_brain_mask.nii.gz<br />
:B0_image_brain.nii.gz<br />
:B0_image.nii.gz<br />
:dti_diag.pdf<br />
:dtifit_FA2std.nii.gz<br />
:dtifit_FA.nii.gz<br />
:dtifit_L1.nii.gz<br />
:dtifit_L2.nii.gz<br />
:dtifit_L3.nii.gz<br />
:dtifit.log<br />
:dtifit_MD.nii.gz<br />
:dtifit_MO.nii.gz<br />
:dtifit_SO.nii.gz<br />
:dtifit_V12std.nii.gz<br />
:dtifit_V1.nii.gz<br />
:dtifit_V2.nii.gz<br />
:dtifit_V3.nii.gz<br />
:dti_mcf.par<br />
:dti_proc.log<br />
:FA2std.mat<br />
<br />
#Run DTI_QA.sh Script<br />
To obtain additional QA measures that supplement the dti_diag.pdf file, run the <b>DTI_QA.sh</b> script. This script was written by Katie Karlsgodt to produce a text file that checks that the bvals and bvecs match the CNP standards and calculates the mean MD, mean FA, mean motion, maximum motion, and other QA measures.<br />
<br />
The path for the script is /space/raid2/data/poldrack/CNP/scripts/DTI/DTI_QA.<br/><br />
The usage is DTI_QA <subjectID> <group>, or DTI_QA all <group>.<br />
<br />
For example, to run the script on one subject in the CNP schizophrenia group,<br/><br />
<br />
:> dti_proc_temp.sh CNP_50006B SCHZ &<br />
Alternatively, to run the script on all subjects in the CNP schizophrenia group,<br/><br />
:> dti_proc_temp.sh all SCHZ &<br />
<br />
The script will create a <b>dti_report.txt</b> file in the raw DTI directory of the subject.<br />
<br />
#Check Preprocessed DTI Data<br />
##Check Diagnostic PDF<br />
Open the dti_diag.pdf, using the command <b>[[Basic UNIX Commands#evince | evince]]</b>:<br />
:> evince dti_diag.pdf &<br />
<br />
###Check whether the bvals and bvecs match the CNP standards.<br/><br />
Note: across scanners (BMC vs CCN) and within the CCN scanner there were multiple permutations of the 64 direction sequence. All subjects were run using their own directions so the analysis is valid within subject, but they have been flagged so that users are aware if someone does not match the scanner standard.<br/><br />
<br />
##Check Diagnostic text file<br />
Go to the raw DTI directory of the subject, and view the dti_report.txt file, using more or emacs<br/><br />
### Check whether the bvals and bvecs match the CNP standards<br />
### Check degree of cropping. The cerebellum will almost always have the most. If any other regions have more than 10% cropping, check the raw data carefully and rate degree of cropping on the log.<br />
<br />
##Check for Artifacts<br />
Artifacts in the CNP data include the Siemens vibration artifact (indicated by a red blob on the midline in the dorsal posterior part of the brain) and blank slices (usually associated with motion) on individual volumes.<br/><br />
The first step to check for artifacts is to watch the raw data like a movie, going through each direction. Open in fslview, click the "film" icon, and watch each volume, inspecting for noise, dropped slices, etc.<br />
<br />
###Check FA Map<br />
After running the dti_proc_regressor.sh script, check the fractional anisotropy (FA) map for quality assurance:<br/><br />
Open the FA map in FSLView:<br />
:> fslview dtifit_FA.nii.gz &<br />
Check if the FA map includes the entire brain and if the FA map looks unusual or not<br />
<br />
###Check Color Map<br />
After running the dti_proc.sh script, check the color map for quality assurance:<br/><br />
Open both the FA and color maps in FSLView (or add the color map to FSLView, if you are already viewing the FA map):<br />
:> fslview dtifit_FA.nii.gz dtifit_V1.nii.gz &<br />
<br />
To view the color map, select the dtifit_V1 file, and press the "i" button. An "Overlay Information Dialog" window will appear. For "Display as:", select "RGB" and for "Modulation:", select "dtifit_FA". Close this window.<br />
<br />
Check if the directions of the major fiber tracts are colored appropriately by scrolling through the slices. In the coronal view, the corticospinal tract (superior-inferior) should be blue. In the sagittal view, the corpus callosum (right-left) should be red. In the axial view, the anterior-posterior tracts should be green.<br />
<br />
Also, check if the color map includes the entire brain and if the color map looks unusual or not. <br />
<br />
###Check Line Image<br />
While viewing the V1 image, click the 'i' and change the view from RGB to Lines. The lines should be aligned and orderly, particularly in the largest tracts.<br />
<br />
##Rate Data<br />
Based on the QA findings, all data were rated on a scale from 1 to 4, with 1 being excellent and 4 being unusable.<br />
<br />
<br />
Return to [[DTI_Resources]]</div>Katiehttp://lcni-3.uoregon.edu/phenowiki/index.php?title=DTI_Quality_Control&diff=8263DTI Quality Control2012-06-13T23:53:59Z<p>Katie: </p>
<hr />
<div>Link back to [[LA5C]] page.<br />
<br />
'''These are the usable subjects for each group, on each scanner.'''<br />
<br />
[[File:DTI_Rankings.png]]<br />
<br />
<br />
<br />
----<br />
=QA Procedures=<br />
[[DTI Analysis Tools and Scripts List]]<br />
<br />
==Check Raw DTI Data==<br />
Check that the raw DTI data exists for each subject<br />
<br />
==Run dti_proc.sh Script==<br />
<b>dti_proc.sh</b> is a script written by Russ Poldrack, originally for CNP to preprocess raw DTI data. The path for the script is /space/raid2/data/poldrack/CNP/scripts/DTI/dti_proc.sh. The directory structure will need to be adapted for individual projects. The usage is dti_proc <input directory path>.<br/><br />
<br />
Note: an alternate version called "dti_proc_regressor" exists. The difference between the two versions is that the regressor script contains a nuisance regressor based on the Gallichan, 2010 paper, which we hoped could to some extent correct for vibration artifacts. This artifact exists on BMC data before Fall 2010. After Fall 2010 the table was bolted down, correcting the artifact. The CCN table was bolted at installation, thus avoiding the artifact entirely. However, this effort to correct the artifact was unsuccessful and the individuals with vibration artifacts were eliminated. <br/><br />
<br />
For example, to run the script on one CNP subject,<br/><br />
1. Log on to func and go to the directory /space/raid2/data/poldrack/CNP/scripts:<br />
:> ssh $username@funcserv1<br />
:> cd /space/raid2/data/poldrack/CNP/scripts<br />
<br />
2. Find the path of the raw DTI directory for one subject, and run the script, e.g.:<br />
:> dti_proc /space/raid2/data/poldrack/CNP/SCHZ/CNP_50006B/raw/DTI_64DIR_3 &<br />
<br />
For the subject indicated, the script will output the following files in the raw DTI directory:<br />
:B0_image_brain_mask.nii.gz<br />
:B0_image_brain.nii.gz<br />
:B0_image.nii.gz<br />
:dti_diag.pdf<br />
:dtifit_FA2std.nii.gz<br />
:dtifit_FA.nii.gz<br />
:dtifit_L1.nii.gz<br />
:dtifit_L2.nii.gz<br />
:dtifit_L3.nii.gz<br />
:dtifit.log<br />
:dtifit_MD.nii.gz<br />
:dtifit_MO.nii.gz<br />
:dtifit_SO.nii.gz<br />
:dtifit_V12std.nii.gz<br />
:dtifit_V1.nii.gz<br />
:dtifit_V2.nii.gz<br />
:dtifit_V3.nii.gz<br />
:dti_mcf.par<br />
:dti_proc.log<br />
:FA2std.mat<br />
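To confirm in bulk that dti_proc.sh completed, a short helper can check for a representative subset of the outputs listed above. The function name is hypothetical; the file names are taken from the list.

```shell
# check_dtifit_outputs DTI_DIR: report any of the key dti_proc outputs
# missing from DTI_DIR. Only a representative subset of the list is checked.
check_dtifit_outputs() {
    dir=$1
    status=0
    for f in B0_image.nii.gz B0_image_brain.nii.gz B0_image_brain_mask.nii.gz \
             dtifit_FA.nii.gz dtifit_MD.nii.gz dtifit_V1.nii.gz \
             dti_diag.pdf dti_proc.log; do
        if [ ! -e "$dir/$f" ]; then
            echo "missing: $f"
            status=1
        fi
    done
    return $status
}
```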
<br />
==Run DTI_QA.sh Script==<br />
To obtain additional QA measures that supplement the dti_diag.pdf file, run the <b>DTI_QA.sh</b> script. This script was written by Katie Karlsgodt to produce a text file that checks that the bvals and bvecs match the CNP standards and calculates the mean MD, mean FA, mean motion, maximum motion, and other QA measures.<br />
<br />
The path for the script is /space/raid2/data/poldrack/CNP/scripts/DTI/DTI_QA.<br/><br />
The usage is DTI_QA <subjectID> <group>, or DTI_QA all <group>.<br />
<br />
For example, to run the script on one subject in the CNP schizophrenia group,<br/><br />
<br />
1. Run the script, e.g.:<br />
:> dti_proc_temp.sh CNP_50006B SCHZ &<br />
<br />
The script will create a <b>dti_report.txt</b> file in the raw DTI directory of the subject.<br />
<br />
==Check Preprocessed DTI Data==<br />
===Check Diagnostic Log===<br />
After running the dti_proc.sh script, check the diagnostic log for quality assurance of the DTI data:<br/><br />
<br />
1. Open the dti_diag.pdf, using the command <b>[[Basic UNIX Commands#evince | evince]]</b>:<br />
:> evince dti_diag.pdf &<br />
<br />
2. Check whether the bvals and bvecs match the CNP standards.<br/><br />
Note: across scanners (BMC vs. CCN), and within the CCN scanner, there were multiple permutations of the 64-direction sequence. All subjects were run using their own directions, so the analysis is valid within subject, but subjects have been flagged so that users are aware when someone does not match the scanner standard.<br/><br />
<br />
To run the DTI_QA script on all subjects in the CNP schizophrenia group instead of one at a time:<br/><br />
:> dti_proc_temp.sh all SCHZ &<br />
<br />
3. Go to the raw DTI directory of the subject and view the dti_report.txt file, using more or emacs:<br />
* Check whether the bvals and bvecs match the CNP standards<br />
* Check the degree of cropping. The cerebellum will almost always have the most. If any other region has more than 10% cropping, check the raw data carefully and rate the degree of cropping in the log.<br />
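Since bvals and bvecs are plain-text files, the comparison against the CNP standards can be scripted with a simple diff. The helper below is a sketch: its name is made up, and the location of the scanner-standard reference files is left as a parameter because it is not documented here.

```shell
# check_dti_gradients SUBJ_FILE STD_FILE: compare a subject's bvals or bvecs
# file against a scanner-standard copy and flag any mismatch.
# (Hypothetical helper; reference-file location must be supplied by the user.)
check_dti_gradients() {
    if diff -q "$1" "$2" >/dev/null 2>&1; then
        echo "match: $(basename "$1")"
    else
        echo "MISMATCH: $1 differs from scanner standard"
    fi
}
```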
<br />
====Check for Artifacts====<br />
<br />
Artifacts found in the CNP data include the Siemens vibration artifact (indicated by a red blob on the midline in the dorsal posterior part of the brain) and blank slices (usually associated with motion) on individual volumes.<br />
<br />
===Check FA Map===<br />
After running the dti_proc.sh script, check the fractional anisotropy (FA) map for quality assurance:<br/><br />
1. Log on to func and go to the directory /space/raid2/data/poldrack/CNP/${group}/CNP_{subjectID}/raw/DTI_64DIR_*:<br />
:> ssh $username@funcserv1<br />
:> cd /space/raid2/data/poldrack/CNP/${group}/CNP_{subjectID}/raw/DTI_64DIR_*<br />
:> ls<br />
<br />
2. Open the FA map in FSLView:<br />
:> fslview dtifit_FA.nii.gz &<br />
<br />
3. Check whether the FA map includes the entire brain and whether it looks unusual, and log this in the [https://spreadsheets.google.com/ccc?key=0AhLKRRgAIOCVdG9rY0szNFppcGZ0QmF3ejBYLUY5S0E&hl=en#gid=0 CNP DTI QA Google Document].<br />
<br />
===Check Color Map===<br />
After running the dti_proc.sh script, check the color map for quality assurance:<br/><br />
1. Log on to func and go to the directory /space/raid2/data/poldrack/CNP/${group}/CNP_{subjectID}/raw/DTI_64DIR_*:<br />
:> ssh $username@funcserv1<br />
:> cd /space/raid2/data/poldrack/CNP/${group}/CNP_{subjectID}/raw/DTI_64DIR_*<br />
:> ls<br />
<br />
2. Open both the FA and color maps in FSLView (or add the color map to FSLView, if you are already viewing the FA map):<br />
:> fslview dtifit_FA.nii.gz dtifit_V1.nii.gz &<br />
<br />
The FSLView GUI will display these files as:<br />
<br />
[[Image: dti fslview.jpg ]]<br/><br/><br />
<br />
3. To view the color map, select the dtifit_V1 file, and press the "i" button. An "Overlay Information Dialog" window will appear. For "Display as:", select "RGB" and for "Modulation:", select "dtifit_FA". Close this window.<br />
<br />
[[Image: dti display options.jpg ]]<br/><br/><br />
<br />
4. The color map should now display as:<br />
<br />
[[Image: dti color map.jpg]]<br/><br/><br />
<br />
5. Check if the directions of the major fiber tracts are colored appropriately by scrolling through the slices. In the coronal view, the corticospinal tract (superior-inferior) should be blue. In the sagittal view, the corpus callosum (right-left) should be red. In the axial view, the anterior-posterior tracts should be green.<br />
<br />
Also, check whether the color map includes the entire brain and whether it looks unusual. Log this in the appropriate Google Doc.<br />
<br />
<br />
<br />
<br />
Return to [[DTI_Resources]]</div>Katiehttp://lcni-3.uoregon.edu/phenowiki/index.php?title=Notes_on_Downloading_Imaging_Workflow_Data&diff=8262Notes on Downloading Imaging Workflow Data2012-06-13T23:39:44Z<p>Katie: </p>
<hr />
<div>An extensive amount of QC went into the imaging data, and we attempted to document it as much as possible, both in the HTAC database and here on the wiki. However, there are multiple considerations to take into account when determining whether a subject's data is usable, which makes it difficult to query the imaging data by downloading directly from the HTAC database. <br/><br />
<br />
We have prepared lists of usable subjects, for each type of scan, following QC (as documented in this wiki). Rather than having users attempt to decipher the Imaging Workflow forms in the HTAC database codebook (which we used primarily for logging), we propose the following system: <br/><br />
<br />
'''1. User submits query, stating which task, level of analysis, associated data, and group are requested (e.g., Stop-signal first-level models for all Controls).''' <br/><br />
* Task: see [[LA5C]] page on wiki for full list of tasks <br/><br />
* Level of analysis: Raw data, or completed first-level models (which have undergone complete QC) <br/><br />
* Associated data: behavioral data corresponding to the task of interest; mprage; mbw <br/><br />
* Group: Controls, specific Patient group, or All <br/><br />
<br />
'''2. Once the query is approved, the requested data will be copied into an Approved Analysis directory.''' <br/><br />
Approved analysis directories are located at /space/raid2/data/poldrack/CNP/approved_analyses<br/><br />
<br />
'''3. We will provide the user with a file which includes the following information, for each task/scan requested:''' <br/><br />
* PTID (LA2K ID; primary ID) <br/><br />
* FUNC_ID (in most cases, agrees with LA2K ID; in a handful of cases, represents the original ID under which the scan data were collected; see the [[LA3C ID Switches]] page) <br/><br />
* Completed Status (Primary LA2K Status variable) <br/><br />
* 5C_Status (Primary LA5C Status variable) <br/><br />
* Scanner (1 = BMC; 2 = CCN) <br/><br />
* BEHAV_NOTE (overall QC note) <br/><br />
* A set of FLAGS which indicate whether the subject should be distributed, or whether the subject can be shared but may be flagged for moderate motion, suspicious performance, etc. <br/><br />
<br />
* NOTE: Distributing the same set of subjects for a given task (following complete QC) ensures that absolutely unusable subjects are never distributed, and that the maximum number of potentially usable subjects is consistent across analyses (or at least queries). We want to ensure that subjects with excessive motion, incomplete data, or otherwise unusable data are not analyzed. However, the Notes field for each task/scan reflects issues that were flagged during QC but are really up to the user (e.g., moderate motion, suspicious performance). These issues may matter more for some types of analyses than others, or these flags might help to explain odd results that we were not able to detect initially. As a result, there is the potential for slight variability in the final Ns across analyses, given specific methods and goals, but this system at least ensures that as many subjects with potentially usable data as possible are made available for analysis. <br/><br />
<br />
<br />
<br />
If you choose to select variables directly from the Imaging Workflow form for download, we suggest the following: <br/><br />
* Completed <br/><br />
* ImgB_5CStatus or ImgA_5CStatus (depending on whether task of interest is in A or B scan) <br/><br />
* Overall: Scanner <br/><br />
* Overall: Note <br/><br />
* Flag for Elimination (all fields) for your task of interest (includes Flag_Share and Flag_S (notes)) <br/><br />
<br />
After downloading: <br/><br />
* Filter on the Completed variable: <br/><br />
** Remove all subjects with Completed Status OTHER than 1 <br/><br />
* Filter on 5C_Status: <br/><br />
** Remove all subjects with 5C_Status OTHER than 2 <br/><br />
** Remove all subjects with Flag_Share (for your task) = 0 <br/><br />
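As a sketch of these filtering steps, the awk command below applies all three rules to a tab-separated export. The column names (Completed, 5C_Status, Flag_Share) are assumed from the text above and may not match the actual codebook field names; the sample rows are invented for illustration.

```shell
# Build a tiny example export (real exports come from the HTAC database).
tsv=$(mktemp)
printf 'PTID\tCompleted\t5C_Status\tFlag_Share\n' >  "$tsv"
printf '10001\t1\t2\t1\n10002\t2\t2\t1\n'         >> "$tsv"
printf '10003\t1\t1\t1\n10004\t1\t2\t0\n'         >> "$tsv"

# Keep the header plus rows passing all three filters:
# Completed = 1, 5C_Status = 2, Flag_Share != 0.
# Column names are assumed; positions are looked up from the header row.
awk -F'\t' '
    NR == 1 { for (i = 1; i <= NF; i++) col[$i] = i; print; next }
    $(col["Completed"]) == 1 && $(col["5C_Status"]) == 2 && $(col["Flag_Share"]) != 0
' "$tsv"
```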
<br />
--- REVIEW NOTES UNDER [[Quality_Control | DATA QUALITY CONTROL]] on the LA5C wiki page --- <br/><br />
<br />
<br />
<br />
<br />
Link back to [[LA5C]] page.</div>