TrainModel
Introduction and Acknowledgements
Extension: LesionSegmentation
Module Description
This module is used to train new segmentation models for white matter lesion segmentation. To use this tool, your data must include a T1, T2, FLAIR, brain mask, and expert lesion segmentation for each subject. All data must be preprocessed, including intra-subject co-registration, AC-PC alignment, bias correction, consistent spacing between sequences, and brain mask creation.
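As one illustration of the required preprocessing, the sketch below shows bias correction of a T1 image with SimpleITK's N4 filter. This is not part of the TrainModel module itself, and the file names are placeholders; any pipeline that produces co-registered, bias-corrected, consistently spaced images with brain masks is acceptable.

 # Minimal sketch of one required preprocessing step (bias correction) using
 # SimpleITK's N4 filter. File names are placeholders; this is only an
 # illustration, not part of the TrainModel module.
 import SimpleITK as sitk

 t1 = sitk.ReadImage("subject01_T1.nii.gz", sitk.sitkFloat32)

 # Rough foreground mask so N4 estimates the bias field inside the head only.
 head_mask = sitk.OtsuThreshold(t1, 0, 1, 200)

 # Estimate and remove the low-frequency intensity inhomogeneity.
 t1_corrected = sitk.N4BiasFieldCorrection(t1, head_mask)

 sitk.WriteImage(t1_corrected, "subject01_T1_biascorrected.nii.gz")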
Use Cases
- Training a new model.
In order to train a new model you must first have preprocessed data for a number of subjects. The required data includes T1, T2, and FLAIR images, brain masks, and lesion masks. All data must be preprocessed, including intra-subject co-registration, AC-PC alignment, bias correction, consistent spacing between sequences, and brain mask creation. Subjects do not need to be registered to each other. A model can be created from a single subject, but more than 6 subjects is recommended and between 10 and 15 is best. The more subjects included in the model, the slower both model creation and segmentation using that model will be; however, models built from more subjects will almost always be more accurate.
Navigate to Modules->Segmentation->LesionSegmentation->TrainModel. The TrainModel panel looks like: (Image of TrainModel panel to go here.)
The required inputs are the T1, T2, and FLAIR image lists, the brain mask and lesion mask lists, and an output model filename; the individual parameters are described under Panels and their use below.
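For scripted use, the module can also be run from the Slicer Python console. The sketch below is a minimal, unverified example: the index, percent, and output model parameter names are taken from the flags documented under Panels and their use (leading dashes removed), while the attribute name slicer.modules.trainmodel, the omitted image-list parameter names, and the output path are assumptions.

 # Minimal sketch of running TrainModel from the Slicer Python console.
 # The parameter names for the image lists are not documented in this section
 # and are omitted; the output path is only an example.
 import slicer

 parameters = {
     "inputIndexOfBestImages": 1,        # 1-indexed position of the best T1/T2/FLAIR set
     "inputPercentNonLesion": 5,         # percent of non-lesion voxels used for training
     "outputModel": "/tmp/lesion_model", # required output filename (example path)
     # ...add the T1/T2/FLAIR, brain mask, and lesion mask list parameters here.
 }

 # slicer.modules.trainmodel is an assumption about how the extension registers
 # the CLI; slicer.cli.run queues the CLI module and waits for it to finish.
 cli_node = slicer.cli.run(slicer.modules.trainmodel, None, parameters,
                           wait_for_completion=True)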
Tutorials
Coming soon!
Panels and their use
A list of panels in the interface, their features, what they mean, and how to use them.
- Highest Quality Images Index [----inputIndexOfBestImages]: The index in the list of images that represents the best T1, T2, and FLAIR images. These images are used as the standard that the other images are intensity standardized to. It defaults to the first image in the list. This number is 1-indexed. Default value: 1
- Advanced Options: Advanced input parameters
  - Percent NonLesion [----inputPercentNonLesion]: The percentage of non-lesion voxels to use for training. Higher values result in larger model files and potentially slower runtimes. Default value: 5
- Output Options: Output options
  - Output Model Filename [----outputModel]: Required. Filename to save the generated model to.
Similar Modules
References
- Scully M, Anderson B, Lane T, Gasparovic C, Magnotta V, Sibbitt W, Roldan C, Kikinis R and Bockholt HJ (2010) An automated method for segmenting white matter lesions through multi-level morphometric feature classification with application to lupus. Front. Hum. Neurosci. doi:10.3389/fnhum.2010.00027. http://frontiersin.org/neuroscience/humanneuroscience/paper/10.3389/fnhum.2010.00027/
Information for Developers
Section under construction.