Compulsory merger of multiple anat? #31

Open
andersonwinkler opened this issue Sep 25, 2018 · 5 comments
Labels
  • effort:high: Estimated high effort task
  • freesurfer: FreeSurfer related improvements and issues
  • impact:medium: Estimated medium impact task
  • multiple images: Multiple T1w or T2w images (e.g., longitudinal, multisession, etc.)

Comments

@andersonwinkler

Hi all,

Thanks again for this great tool.

It seems that when a subject has more than one anat (e.g., across multiple sessions), all of them are merged before running recon-all. This is problematic if not all subjects have the same number of T1w scans: those with more scans will have a lower variance than those with just one, which violates homoscedasticity (homogeneity of variances), a key assumption of both parametric and non-parametric tests.

Would it be possible to have an option to disable this behaviour, that is, to allow running recon-all separately for each T1w, so that repetitions can be treated properly at the time of the statistical analysis?

Thanks!

All the best,

Anderson

@effigies
Member

I do think this is a reasonable request, but how it should be done is an open question. The biggest hurdle in moving away from the single T1w template is deciding on the appropriate anatomical targets for each BOLD run.

If the concern is how to get each individual T1w image in the same space, we do preserve the rigid transform from each anatomical image to the T1w template, so the originals may be realigned.

Finally, if the concern is simply FreeSurfer, you may run recon-all manually, using the correct inputs. If you run recon-all -autorecon1 -noskullstrip and then run fMRIPrep, we will still inject our ANTs skull-stripped brain and finish the job. We register T1.mgz to our T1w template in order to keep things in the correct spaces.

@andersonwinkler
Author

Hi Chris,

Thanks for the feedback. I see the difficulty. Here we use the scan closest in time, but that could be an issue, as the acquisition date/time may not be present in the JSON files. I guess we can always run them separately, as you suggest.

Thanks!

All the best,

Anderson

@effigies
Member

I am still interested in how people would like to do this, in general. If we can develop a consensus on a couple of best-practice approaches to multi-session data, these could easily become options. It's just very hard to pick something right now that will work for most cases. The one thing our approach has going for it is that all of the preprocessed BOLD files should be in alignment with each other.

In your approach, do you register your BOLD series to the temporally closest T1w image, and then use the T1w-template warp to do your BOLD analysis in the template space? Or do you mutually align the T1w images (either to the first, or some per-subject template), so that all BOLD files registered to each T1w image are still in alignment? Or something else?

@andersonwinkler
Author

Hi Chris,

It seems to me that a compulsory merger doesn't give much flexibility. Consider, for example, a longitudinal analysis with a long span between sessions, child participants, etc.

How about letting the user control this with a flag (merge/don't merge)? In the no-merge case, at least one anat would have to be present for each session; it would be up to the user to ensure this, either by acquiring one or by linking the anat from a different session.

Another option could be to use the scan closest in time, but the AcquisitionDateTime parameter doesn't always seem to be present, and besides, the closest is not necessarily the best (e.g., due to quality issues).
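As a minimal sketch of the closest-in-time idea (file names in the usage below are hypothetical, and the snippet assumes AcquisitionDateTime is present in every sidecar, which, as noted, is often not the case):

```python
import json
from datetime import datetime
from pathlib import Path


def acq_time(json_path):
    """Read AcquisitionDateTime from a BIDS JSON sidecar.

    BIDS stores it as ISO 8601, e.g. "2018-09-25T14:05:00".
    Raises KeyError if the field is absent (the caveat raised above).
    """
    meta = json.loads(Path(json_path).read_text())
    return datetime.fromisoformat(meta["AcquisitionDateTime"])


def closest_t1w(bold_json, t1w_jsons):
    """Return the T1w sidecar temporally closest to the BOLD run."""
    bold_time = acq_time(bold_json)
    return min(t1w_jsons, key=lambda j: abs(acq_time(j) - bold_time))


# Hypothetical usage:
# best = closest_t1w("sub-01_task-rest_bold.json",
#                    ["ses-01/sub-01_T1w.json", "ses-02/sub-01_T1w.json"])
```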

Thanks again!

Cheers,

Anderson

@oesteban oesteban transferred this issue from nipreps/fmriprep Jan 8, 2019
@oesteban oesteban added the freesurfer and multiple images labels Jan 8, 2019
@franklin-feingold franklin-feingold added the effort:high and impact:medium labels Mar 12, 2019
@oesteban
Member

oesteban commented Mar 10, 2021

I've been thinking of this and perhaps the solution requires a divide and conquer approach:

  • sMRIPrep should be able to run in two modes: "template" (current) and "longitudinal" (each T1w is processed independently). The "longitudinal" mode could potentially also generate an extra "template" if necessary.
  • Then the problem arises when applying this in fMRIPrep. For adults, I can only see extreme cases where having split timepoints is useful. Unless you are doing some morphological analysis, getting much better image registration overall at the cost of breaking homoscedasticity seems like a good deal. And yet, for your functional processing you can use the template for registration and still use independent timepoints in your final analysis. For babies, a single template/reference doesn't work anymore (same for rodents or monkeys, as changes in their brains over time are typically more obvious than those from the experimental manipulation).

I guess sMRIPrep could easily cater to both modes. Then we can think this through in the context of downstream applications.
