STOPSIGNAL

From Pheno Wiki
Latest revision as of 21:20, 17 August 2012

STOP-SIGNAL TASK (SST)

==Task Background Info==

'''Sample Text'''

In the Stop-signal task, participants were presented with a series of Go stimuli (left- and rightwards pointing arrows) to which they were instructed to respond quickly. This speeded reaction time task established a prepotency to respond. On a subset of trials (25%), the Go stimulus was followed, after a variable delay, by a Stop-signal (an auditory signal), to which the participants were instructed to inhibit their response. The onset of the Stop-signal, or stop-signal delay (SSD), was varied and depended on the participant’s performance, such that it was decreased after a previous failure to inhibit and increased after a previous inhibition. The SSD for each stop trial was selected from one of two interleaved staircases of SSD values, with each SSD increasing or decreasing by 50 ms according to whether or not the participant successfully inhibited on the previous stop trial. This one-up/one-down tracking procedure ensures that participants inhibit on approximately half of all trials and controls for difficulty level across participants. Participants were told that correctly responding and inhibiting were equally important.

All trials were preceded by a 500 ms fixation cross in the center of the screen, then each trial began with the appearance of an arrow and ended after 1000 ms, followed by the null period. Jittered null events separated every trial (with a blank screen), with the duration of null events sampled from an exponential distribution (null events ranged from 0.5 to 4 s, with a mean of 1 s).

Participants performed the task outside of the scanner; in this behavioral version, the SSD for each Stop trial was selected from one of two interleaved staircases, starting with SSD values of 250 and 350 ms. The last SSD values of the two ladders were used as starting values for the subsequent task run performed in the scanner. In each run, participants completed a total of 128 trials (75% Go trials); the task duration was 6 min and 4 s.
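The staircase logic described above is simple to express. Here is an illustrative Python sketch (the task itself was not implemented in Python; the 50 ms step and 250/350 ms starting values come from the text, while the clamping range is an assumption):

```python
def update_ssd(ssd, inhibited, step=50, lo=0, hi=1000):
    """One-up/one-down tracking: a successful inhibition makes the next
    stop trial harder (longer SSD); a failure makes it easier (shorter SSD).
    The lo/hi clamp is an assumed plausible range, not from the task spec."""
    ssd = ssd + step if inhibited else ssd - step
    return max(lo, min(hi, ssd))

# The behavioral run starts its two interleaved ladders at 250 and 350 ms.
ladders = [250, 350]

# Example: one ladder, subject fails to stop twice, then stops once.
ssd = ladders[0]
for inhibited in (False, False, True):
    ssd = update_ssd(ssd, inhibited)
# 250 -> 200 -> 150 -> 200
```

Because each failure shortens the delay and each success lengthens it, the procedure converges on the SSD at which the participant inhibits about half the time.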

Participants first saw a demo version of the Stop-signal task, before completing a full behavioral run (128 trials). The demo consisted of 8 trials (3 Stop trials), and lasted approximately 25 s.

==Task Instructions==

'''Demo Instructions''' We’re going to review the task with the arrows and tones. During this task you will see an arrow in the middle of the screen pointing either to the left or right. As soon as you see the arrow, respond as quickly and accurately as possible, by pressing the LEFT (blue) button if it’s pointing left and the RIGHT (yellow) button if it’s pointing right.

When you hear a beep, do not respond to that particular arrow. Both going fast and stopping are equally important. This task is designed to be difficult and for subjects to make mistakes, so don’t get frustrated if it’s hard. Just make sure NOT to slow down your responses to wait for the beep so that you are no longer going when you are supposed to. You won’t always be able to stop when you hear a beep, so just try your best.

'''Behavioral Instructions''' Now we’re going to run through the whole task with the arrows and tones. You want to respond as quickly and accurately as possible to the arrows, but to not respond when you hear a beep.

Remember that both going fast and stopping are equally important. Make sure NOT to slow down your responses to wait for the beep so that you are no longer going when you are supposed to. You won’t always be able to stop when you hear a beep, so just try your best.

'''Scan Instructions''' The next task is the one where you have to press the button that corresponds to the direction the arrow is pointing. So press the first button if it’s pointing left, and the second button if it’s pointing right. You want to push the button fast, but sometimes, you’ll hear a beep, and when you hear the beep you should try really hard to not push any button at all. It’s equally important to try to go fast and to stop when you hear the beep, so if you’re trying to stop every time it means you’re going too slow. As long as you’re stopping some of the time, though, it means you’re doing a good job.

Participants saw on the screen:

Press the left button if you see the arrow pointing left. <br/>
Press the right button if you see the arrow pointing right.

Press the button as FAST as you can when you see the arrow.

But if you hear a beep, try very hard to STOP yourself from pressing the button.

Stopping and going fast are equally important.


==Scoring Behavioral Data==

/space/raid2/data/poldrack/CNP/scripts/behav_analyze/STOPSIGNAL

1. All scripts can either be run so that you do all subjects of a given group at once, or do one subject at a time. There are comments in the code that instruct you to indicate which group (e.g., CONTROLS) you will be processing. By default, the script will run all subjects in the selected group unless you comment in a couple of lines and enter a particular subject's ID (e.g., CNP_10150). This applies to both SST scripts.

2. In Matlab, run 'STOPSIGNAL_full_run_ssrtquantile.m'. <br/>
This will either create behavioral output (in SUBJ/behav/STOPSIGNAL) for each subject in your group, or just for the subject you specified. This script creates 2 files: <br/>
a. It adds a line to the 'GROUP_stopsig_group_output.txt' file with each subject's summary scores. <br/>
b. It creates a 'SUBJ_STOPSIGNAL_behav_output.txt' file in the subject's behav/STOPSIGNAL directory. <br/>

The file created under 2a will be used for analysis of behavioral data collected during the Stop-signal scan. In most cases, this data will be provided after query. For a full description of the task and scoring of variables, see Stop-signal under LA2K ([[CNP_Stop_Signal]]).
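The script's name suggests that SSRT is estimated with the quantile (integration) method. As a rough sketch of that method (illustrative Python, not the Matlab script's actual code; RTs and SSD must share units):

```python
def ssrt_quantile(go_rts, mean_ssd, p_respond_given_stop):
    """Quantile (integration) method for stop-signal reaction time:
    find the go RT at the quantile equal to the probability of
    responding on stop trials, then subtract the mean stop-signal delay.
    go_rts and mean_ssd must be in the same units (e.g., ms)."""
    rts = sorted(go_rts)
    k = int(round(p_respond_given_stop * len(rts))) - 1
    k = max(0, min(len(rts) - 1, k))  # clamp to a valid index
    return rts[k] - mean_ssd

# With go RTs of 100..1000 ms and responses on half the stop trials,
# the 50% quantile of the go RT distribution is 500 ms here.
go = [100, 200, 300, 400, 500, 600, 700, 800, 900, 1000]
ssrt = ssrt_quantile(go, mean_ssd=200, p_respond_given_stop=0.5)
# ssrt == 300
```

Note that the one-up/one-down tracking procedure aims for a response rate of about 0.5 on stop trials, which is exactly the regime where this estimator is most stable.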

==Creating Onset files (EVs)==

/space/raid2/data/poldrack/CNP/scripts/behav_analyze/STOPSIGNAL

1. All scripts can either be run so that you do all subjects of a given group at once, or do one subject at a time. There are comments in the code that instruct you to indicate which group (e.g., CONTROLS) you will be processing. By default, the script will run all subjects in the selected group unless you comment in a couple of lines and enter a particular subject's ID (e.g., CNP_10150). This applies to both SST scripts.

2. In Matlab, run 'make_stopsig_onsets.m'. <br/>
This will either create onset files (in SUBJ/behav/STOPSIGNAL) for each subject in your group, or just for the subject you specified. The main issue to consider for Stop-signal onset files is whether the "junk" category is empty: if there were incorrect Go trials, these are classified as "junk" and the default model is appropriate; if there were no incorrect Go trials, then the "junk" category is empty, and we need to run a special (cleverly titled "no_junk") model for this subject. There are 2 ways you can check this: <br/>
a. Open 'GROUP_stopsig_log.txt': it will list each subject that has an empty "junk" onset file. <br/>
b. Open the subject's 'run1_junk_onsets.txt' file in their behav/STOPSIGNAL directory. <br/>
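The same check can be scripted. A hypothetical sketch (the directory layout follows the conventions described above; this is not part of the actual pipeline):

```python
import os

def subjects_with_empty_junk(group_dir):
    """Return subjects under group_dir (e.g., .../CNP/CONTROLS) whose
    behav/STOPSIGNAL/run1_junk_onsets.txt exists but is empty --
    these are the subjects that need the "no_junk" model."""
    empty = []
    for subj in sorted(os.listdir(group_dir)):
        junk = os.path.join(group_dir, subj, "behav", "STOPSIGNAL",
                            "run1_junk_onsets.txt")
        if os.path.isfile(junk) and os.path.getsize(junk) == 0:
            empty.append(subj)
    return empty
```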

3. Running this script will create 4 onset files in each subject's behav/STOPSIGNAL directory: <br/>
'run1_go_onsets.txt' <br/>
'run1_junk_onsets.txt' <br/>
'run1_succ_stop_onsets.txt' <br/>
'run1_unsucc_stop_onsets.txt' <br/>

4. The onset files will have contents that look something like this: <br/>
8.8882 1.5 1 <br/>
10.8944 1.5 1 <br/>
13.2653 1.5 1 <br/>
16.5149 1.5 1 <br/>
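These rows appear to follow FSL's three-column EV format: onset (seconds), duration (seconds), and weight. A minimal Python reader (illustrative only; the pipeline itself consumes these files through FEAT):

```python
def read_onsets(lines):
    """Parse three-column EV rows into (onset, duration, weight) tuples,
    skipping blank lines."""
    events = []
    for line in lines:
        if line.strip():
            onset, duration, weight = map(float, line.split())
            events.append((onset, duration, weight))
    return events

sample = ["8.8882 1.5 1", "10.8944 1.5 1", "13.2653 1.5 1", "16.5149 1.5 1"]
events = read_onsets(sample)
# four events, each 1.5 s long with weight 1
```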

==Running First Level Analyses==

/space/raid2/data/poldrack/CNP/scripts/run_level1_scripts/STOPSIGNAL

1. As hinted at above, we have 2 Stop-signal scripts, depending on whether the subject has an empty junk file or not, and then we have separate scripts for each group. For CONTROLS, the two scripts are: <br/>
'run_level1_STOPSIGNAL_CONTROLS.sh' <br/>
'run_level1_nojunk_STOPSIGNAL_CONTROLS.sh' <br/>

Each of these scripts creates an individualized .fsf file for each subject (which is then stored in /space/raid2/data/poldrack/CNP/scripts/designs/STOPSIGNAL), then runs it, and the output is stored in the subject's analysis/STOPSIGNAL directory.
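The per-subject .fsf generation amounts to filling a design template with subject-specific paths. A minimal hypothetical sketch (the placeholder names, two-line template fragment, and functional filename are invented for illustration; the real templates live with the scripts):

```python
from string import Template

# Invented two-line fragment of a FEAT design template.
FSF_TEMPLATE = Template(
    'set feat_files(1) "$func_data"\n'
    'set fmri(outputdir) "$output_dir"\n'
)

def make_fsf(subj_id, group="CONTROLS",
             root="/space/raid2/data/poldrack/CNP"):
    """Substitute one subject's paths into the template (sketch only)."""
    base = "%s/%s/CNP_%s" % (root, group, subj_id)
    return FSF_TEMPLATE.substitute(
        func_data=base + "/func/STOPSIGNAL.nii.gz",  # assumed filename
        output_dir=base + "/analysis/STOPSIGNAL",    # per the text above
    )
```

For example, make_fsf("10159") yields a fragment whose output directory is that subject's analysis/STOPSIGNAL folder.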

2. To run each script, open it in emacs. <br/>
a. As commented in the script, you need to comment in certain sections depending on whether the subject was scanned at BMC vs. CCN. This is done in two places, at the top and bottom of the script. <br/>
b. Enter the subject ID(s) on the for-loop line (e.g., 'for id in 11131; do'). <br/>
c. Save, then submit the script to the grid to run. <br/>


==Data Checking==

==List of Models==

level1_STOPSIGNAL_model1

==Model description and contrasts==

[[STOPSIGNAL model1 detail]]

==Behavioral variables==

[[STOPSIGNAL model1 behavioral variables]]



----
Link back to [[LA5C]] page. <br/>
Link back to [[HTAC]] page.