# Multi-Echo FLASH sequence analysis

*Author: Xiao Han (see also: XiaoNotes)*

MEF refers to the multi-echo FLASH sequence.

MEF provides several advantages over the simple FLASH sequence and even the MPRAGE sequence: it has less distortion, a higher contrast-to-noise ratio for subcortical structures, and it allows direct estimation of the underlying tissue parameters. One disadvantage is that the large dimensionality of the data (multiple volumes for a single subject) complicates the analysis.

FreeSurfer is not optimized for the processing of MEF data yet, and this page describes some possible processing schemes.

## Scheme #1 (synthesize a single image and then apply the standard FreeSurfer pipeline)

As in the typical FreeSurfer processing stream, where multiple scans (with the same contrast properties) are averaged after motion correction to generate a single volume for later processing, the multiple volumes from the MEF sequence can be combined into one volume of good quality.

Because images acquired at different flip-angle or echo-time settings differ in contrast, a simple average would fail; a weighted average or a nonlinear combination is necessary. FreeSurfer has tools for both approaches.

The weighted linear averaging approach is described in our [thickness validation paper](http://www.nmr.mgh.harvard.edu/~fischl/reprints/Han_NeuroImage_2006_thickness_reliability.pdf). The idea is to apply Linear Discriminant Analysis (LDA) to find an optimal set of weights (a projection vector) such that the weighted-average volume has the best contrast-to-noise ratio between a pair of tissue classes (usually WM and GM).

The program "mri_ms_LDA" can be used to compute this weighting, and the usage is:

```
mri_ms_LDA -lda 2 3 \
    -label manual_label.mgz \
    -weight weights.txt \
    input_vol1.mgz input_vol2.mgz ... input_volN.mgz
```

where "-lda 2 3" indicates optimizing for the two tissue classes with labels 2 and 3. "manual_label.mgz" is the manual labeling volume (it may be only partially labeled), which should contain voxels with values 2 or 3. "weights.txt" is the output text file containing the computed weights. The input_vol# arguments are the MEF volumes (assumed to be already registered and to have the same size as the label volume). The number of weights equals the number of input volumes.

Usually, the optimal weighting can be computed from a set of training subjects, and then the optimal weights can be used to process new data.

Suppose the weights are computed, mri_ms_LDA can be used to apply them to new MEF data as follows:

```
mri_ms_LDA -lda 2 3 \
    -weight optimal_weights.txt \
    -W -synth synthesized_vol.mgz \
    input_vol1 input_vol2 ... input_volN
```

where "synthesized_vol.mgz" is the output volume generated by the weighted average.
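Conceptually, the weight computation is two-class Fisher LDA on the intensity vectors of the labeled voxels. The following is a minimal numpy sketch of that idea, not the actual mri_ms_LDA implementation; the tissue intensities below are made-up illustrative values:

```python
import numpy as np

def lda_weights(class_a, class_b):
    """Fisher LDA projection vector for two tissue classes.

    class_a, class_b: (n_voxels, n_channels) arrays of intensity
    vectors sampled from the labeled voxels of each class."""
    mean_diff = class_a.mean(axis=0) - class_b.mean(axis=0)
    # pooled within-class scatter matrix
    sw = np.cov(class_a, rowvar=False) + np.cov(class_b, rowvar=False)
    w = np.linalg.solve(sw, mean_diff)
    return w / np.linalg.norm(w)

def synthesize(volumes, w):
    """Weighted average of the input volumes using the LDA weights."""
    return sum(wi * vol for wi, vol in zip(w, volumes))

# two hypothetical channels with opposite WM/GM contrast
rng = np.random.default_rng(0)
wm = rng.normal([110.0, 40.0], 5.0, size=(500, 2))  # "WM" voxel samples
gm = rng.normal([80.0, 60.0], 5.0, size=(500, 2))   # "GM" voxel samples
w = lda_weights(wm, gm)
```

The projected intensities `wm @ w` and `gm @ w` are maximally separated relative to the pooled within-class variance, which is what maximizes the contrast-to-noise ratio of the synthesized volume.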

The nonlinear synthesis scheme uses the program "mri_ms_fitparms" to first estimate the tissue parameter maps (T1, PD, and/or T2*), and then applies "mri_synthesize" to generate a new image volume with good gray/white contrast.

Of the two approaches, the nonlinear one works better so far. Note that the dimension reduction of the nonlinear approach is actually performed in two steps: first reducing the input data dimension to 3 (tissue parameter maps), and then to the final single volume. Optimal weighting computation with the LDA method is sensitive to manual ROI selection when the input data dimension is high.
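The contrast behavior the nonlinear route exploits can be seen from the steady-state FLASH (spoiled gradient echo) signal equation; given fitted T1 and PD maps, a volume can be synthesized at any flip angle and TR. The sketch below uses rough, illustrative tissue parameters (approximately 1.5T values), not FreeSurfer defaults:

```python
import math

def flash_signal(pd, t1_ms, tr_ms, flip_deg):
    """Steady-state FLASH signal S = PD*sin(a)*(1-E1)/(1-cos(a)*E1),
    with E1 = exp(-TR/T1); T2* decay is ignored (TE ~ 0)."""
    e1 = math.exp(-tr_ms / t1_ms)
    a = math.radians(flip_deg)
    return pd * math.sin(a) * (1.0 - e1) / (1.0 - math.cos(a) * e1)

# illustrative parameters: WM T1 ~ 800 ms, GM T1 ~ 1300 ms
wm30 = flash_signal(pd=0.7, t1_ms=800.0, tr_ms=20.0, flip_deg=30.0)
gm30 = flash_signal(pd=0.8, t1_ms=1300.0, tr_ms=20.0, flip_deg=30.0)
wm5 = flash_signal(pd=0.7, t1_ms=800.0, tr_ms=20.0, flip_deg=5.0)
gm5 = flash_signal(pd=0.8, t1_ms=1300.0, tr_ms=20.0, flip_deg=5.0)
```

At the high flip angle the image is T1-weighted (WM brighter than GM), while at the low flip angle the contrast reverses; this is why the flash30 and flash5 channels typically receive weights of opposite sign when combined linearly.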

## Scheme #2 (processing in the multi-spectral space)

This scheme is still under active investigation.

One major disadvantage of the first scheme is that one cannot easily get a single volume that is optimal for both the gray/white segmentation and the gray/CSF segmentation. Ideally, segmentation and surface reconstruction would therefore be performed directly using the multi-spectral information of the MEF data. However, a very high-dimensional intensity space can still cause trouble for existing multi-spectral segmentation methods. In the approach we currently take, we first generate a single average volume for each flip angle, instead of using the individual echo volumes. In the following description we assume that the MEF sequence uses two flip angles, flash30 and flash5. The whole procedure for surface reconstruction from such a MEF data set can be summarized as follows (note for NMR users: example scripts can be found under /autofs/space/dijon_035/users/xhan/pfizer/mef_avg/scripts/):

### Step 1:

Generate average volume for each flip angle (flash30 and flash5 respectively)

```
mri_average -noconform flash30_echo?.mgz flash30_avg_tmp.mgz
mri_average -noconform flash5_echo?.mgz flash5_avg_tmp.mgz
```

### Step 2:

Convert the average volumes above to isotropic resolution and size (similar to the COR format), but without converting to BYTE type:

```
mri_transform_to_COR -out_type 3 flash30_avg_tmp.mgz flash30_avg.mgz
```

The option "-out_type 3" keeps the output volume intensities in float type. flash5_avg.mgz is obtained in the same way.

### Step 3:

Apply INU correction using nu_correct (N3) on the two average volumes. Usually, two passes of nu_correct are enough at this stage. The outputs are saved as flash30avg_inu.mgz and flash5avg_inu.mgz. Note that a program called "mri_apply_inu_correction" can be used to apply the bias-field correction to each individual echo volume if needed; type the program name for usage.

### Step 4:

At this stage, skull stripping is required. One option is to generate a synthesized volume as 0.9527*flash30avg_inu.mgz - 0.3039*flash5avg_inu.mgz, where the weights (0.9527, -0.3039) are precomputed to achieve good gray/white contrast. The "mri_ms_LDA" program mentioned above can be used to compute this weighted average. The usual FreeSurfer skull stripping can then be applied to get the brain mask. We will call this mask brain_mask.mgz; it is needed in the subcortical segmentation below.

### Step 5:

Subcortical segmentation can be performed in two ways. One is to use the synthesized volume (converted to true COR format, i.e., BYTE intensity type) and apply the standard FreeSurfer subcortical segmentation procedure with the RB atlas. We also built a specific atlas for MEF data, which has exactly two channels (one for flash30 and one for flash5). The atlas is stored in FLASH_atlas.gca. Applying this atlas is similar to the standard aseg and involves the following steps:

```
mri_em_register -novar -mask brain.mgz \
    flash30avg_inu.mgz flash5avg_inu.mgz \
    $GCA transforms/talairach.lta

mri_ca_normalize -mask brain.mgz \
    flash30avg_inu.mgz flash5avg_inu.mgz \
    $GCA transforms/talairach.lta \
    norm30.mgz norm5.mgz

mri_ca_register -mask brain.mgz \
    -T transforms/talairach.lta \
    norm30.mgz norm5.mgz \
    $GCA transforms/talairach.m3z

mri_ca_label -novar -E 0 \
    norm30.mgz norm5.mgz \
    transforms/talairach.m3z \
    $GCA aseg_mef.mgz
```

Further optimization of the program parameters is still necessary, since the default parameters are mostly optimized for the subcortical segmentation using the single-channel RB atlas.

### Step 6:

Tissue segmentation using an EM algorithm. EM segmentation is based on a Gaussian-mixture model that has been widely used in brain image segmentation. Such a model and the iterative EM algorithm are very sensitive to the initialization. One way to address this weakness is to incorporate prior models of the tissue classes: the atlas registration result from Step 5 can provide prior information about the spatial distribution of each tissue class. Note that this tissue segmentation step aims to classify the brain tissue into one of three classes: GM, WM, and CSF. The subcortical segmentation uses a much more detailed set of labels, so those labels need to be properly merged into the three tissue classes. The EM segmentation method we currently implement also accounts for intensity nonuniformity (INU) in each input channel and includes an MRF regularization model that controls the spatial regularity of the segmentation. The command line looks as follows:

```
mri_ms_EM_withatlas -hard_seg \
    -remove_cerebellum \
    -M 3 -synthonly \
    -gca FLASH_atlas.gca \
    -noconform \
    -xform transforms/talairach.m3z \
    flash30avg_inu.mgz flash5avg_inu.mgz ./
```

The last argument "./" denotes the output directory. Several output volumes are generated by this program and will be used in later steps. "WM_seg.mgz" is the hard WM segmentation. "EM_combined.mgz" is a synthesized volume that encodes the probabilistic EM segmentation: the intensity at each voxel is a weighted sum of the three tissue-class means, where the weights are the posterior probabilities of the tissue classes. The class means used in the weighted sum are 220 for WM, 110 for GM, and 30 for CSF. "corrected_input_0.mgz" and "corrected_input_1.mgz" correspond to the two input volumes with the intensity bias field removed.
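The construction of EM_combined.mgz can be sketched per voxel as follows (a minimal illustration of the weighted sum described above, not the program's actual code):

```python
# fixed class means used by the EM synthesis, per the text
CLASS_MEANS = {"WM": 220.0, "GM": 110.0, "CSF": 30.0}

def em_combined_intensity(posteriors):
    """Intensity of a voxel in EM_combined.mgz: the class means
    weighted by the voxel's posterior tissue probabilities."""
    return sum(CLASS_MEANS[c] * p for c, p in posteriors.items())

# a voxel judged 70% WM, 25% GM, 5% CSF
v = em_combined_intensity({"WM": 0.7, "GM": 0.25, "CSF": 0.05})
# -> 220*0.7 + 110*0.25 + 30*0.05 = 183.0
```

A voxel that the EM algorithm is certain about gets exactly its class mean; ambiguous voxels land between the class means, so the synthesized volume degrades gracefully at tissue boundaries.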

### Step 7:

Generate a topologically correct initial WM surface for each hemisphere. The process of generating the final lh.orig and rh.orig surfaces from the initial WM segmentation (WM_seg.mgz from Step 6) is exactly the same as in the standard FreeSurfer pipeline and is omitted here. One problem we have encountered is that WM is often "under-segmented" by the EM algorithm compared with typical FreeSurfer results, which can cause difficulty for the topology correction step (topology correction sometimes breaks the thin WM strand in the temporal lobe). One way around this problem is to compute the WM segmentation using standard FreeSurfer, with the synthesized volume from Step 4 used as the orig volume.

### Step 8:

Multi-spectral surface deformation. This step follows the same strategy as the mris_make_surfaces step in the standard FreeSurfer recon-all, but uses more than one image as input (flash30 and flash5). In the current implementation, the method relies on the full EM segmentation result to estimate the mean and standard deviation of WM and GM in the input volumes. An alternative approach is to use the WM segmentation only and compute the WM and GM statistics from the border voxels along the boundary of the WM segmentation. The mean and std are both vectors of length 2, corresponding to the two input channels (30 and 5). The tissue statistics are used to constrain the surface deformation, similar to the original FreeSurfer method.

Intensity gradient information is also used to guide the surface deformation. Although gradient computation for vector-valued images is a well-studied problem (as in color image processing), we currently compute the image gradient separately for the 30 and 5 channels and simply take their weighted sum to get a final scalar gradient measure (needed when searching for the local gradient maximum). The weights are chosen empirically. For the gray/white surface, the weights are the same as in Step 4 (0.9 for 30 and -0.3 for 5; the sign accounts for the fact that the two channels have opposite contrast at the gray/white interface). For the pial surface, the flash30 weight is set to one and the flash5 weight to zero, because the pial surface cannot be reliably located in the flash5 channel. The command line for the MEF surface deformation is as follows:

```
mris_mef_surfaces -mgz -em EM_combined \
    -flash30 corrected_input_0 \
    -flash5 corrected_input_1 \
    $s $hemi
```

For each subject and hemisphere, the program generates the white and pial surfaces and the thickness map as outputs, the same as mris_make_surfaces.
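The per-channel gradient combination described above can be sketched as follows. This is a numpy illustration with a hypothetical helper `combined_gradient_magnitude`, not the mris_mef_surfaces code; the 0.9 / -0.3 defaults are the gray/white weights quoted in the text:

```python
import numpy as np

def combined_gradient_magnitude(flash30, flash5, w30=0.9, w5=-0.3):
    """Scalar gradient measure from two channels: compute each
    channel's gradient separately, take the weighted sum of the
    gradient components, then the magnitude of the result."""
    g30 = np.gradient(flash30.astype(float))  # list of per-axis gradients
    g5 = np.gradient(flash5.astype(float))
    combined = [w30 * a + w5 * b for a, b in zip(g30, g5)]
    return np.sqrt(sum(c * c for c in combined))

# toy 3-D volumes with opposite contrast across a boundary along axis 0
flash30 = np.zeros((4, 4, 4)); flash30[2:] = 100.0
flash5 = np.zeros((4, 4, 4)); flash5[2:] = -30.0
edge = combined_gradient_magnitude(flash30, flash5)
```

Because the two channels have opposite contrast at the gray/white boundary, their gradients point in opposite directions there, and the negative flash5 weight makes the two terms reinforce rather than cancel. For the pial surface, the same helper would be called with w30=1.0 and w5=0.0.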

## Future directions:

- EM segmentation in the tissue parameter space. Converting the original MEF data into the three intrinsic tissue parameter maps is the most efficient way of reducing the dimensionality of MEF data. The nonlinear process of tissue parameter estimation, however, colors the image noise; as a result, the Gaussian-mixture model of the standard EM method is no longer valid in the (T1, PD, T2*) parameter space. Non-parametric statistical models, such as the Parzen-window method, may address this problem. Unfortunately, an EM method based on such a non-parametric model may be computationally very expensive. Still, it is a direction worth further exploration.
- Multi-spectral surface deformation. Again, surface deformation in the intrinsic tissue parameter space, with a partial-volume model and a proper combination of intensity and gradient information, would be an ideal goal.
- Intra-subject registration of multi-spectral data. Registration error cannot be ignored for accurate segmentation of MEF data. Registration may be needed across different flip angles, or even across odd and even echo volumes.