Brain extraction workflow does not reuse cache #120
Comments
I believe this is a result of not using the content hash, since nipype was patched too recently (nipy/nipype#3072)
But I'm running with the same version of nipype...
What do you find below the last line?
I don't think it was. But I showed the first outdated cache on the first line. Anyway, not at that computer now. Can try rerunning on Monday.
Oh I see, yes - you'll need to increase the verbosity to see the inputs that changed.
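For reference, increasing the verbosity is a matter of nipype's logging configuration. A sketch of a `nipype.cfg` fragment — assuming the standard `[logging]` section keys — that surfaces per-node hash details during workflow execution:

```ini
; nipype.cfg — raise log levels so the workflow engine reports
; which node inputs changed and why a cache was considered outdated
[logging]
workflow_level = DEBUG
utils_level = DEBUG
```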
Ah, the issue is that the brain extraction mask isn't pre-downloaded into the Docker image, so the timestamp hash is new every time.
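The failure mode described here can be sketched in a few lines — this is an illustration of the general timestamp-vs-content hashing trade-off, not nipype's actual implementation, and the filename is hypothetical:

```python
import hashlib
import os
import tempfile

def timestamp_hash(path: str) -> str:
    """Hash derived from file metadata (size + modification time).
    A freshly downloaded copy of an identical file gets a new mtime,
    so this hash changes on every run and the cache looks outdated."""
    st = os.stat(path)
    key = f"{path}:{st.st_size}:{st.st_mtime}"
    return hashlib.md5(key.encode()).hexdigest()

def content_hash(path: str) -> str:
    """Hash derived from the file's bytes: stable across re-downloads."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Demo: identical bytes, but a fresh timestamp (simulating a re-download).
path = os.path.join(tempfile.gettempdir(), "mask.nii")  # hypothetical mask file
with open(path, "wb") as f:
    f.write(b"\x00" * 1024)
t1, c1 = timestamp_hash(path), content_hash(path)

os.utime(path, (0, 0))  # only the timestamp changes, not the contents
t2, c2 = timestamp_hash(path), content_hash(path)

print(t1 != t2)  # timestamp hash invalidated
print(c1 == c2)  # content hash still matches
```

Pre-downloading the mask into the Docker image fixes the timestamp case too, since the file's mtime is then frozen at image build time.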
Sorry, I wasn't clear before - we set the hash to content and write it to the config file, but then the config file is ignored because of that bug in nipype, I guess
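For anyone following along, the setting in question — written to `nipype.cfg` but, per the bug referenced above, ignored — would look like this (section and key names per nipype's execution configuration):

```ini
; nipype.cfg — request content-based instead of timestamp-based hashing
[execution]
hash_method = content
```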
Wait, you want to do content hashes in general? Isn't that going to take a very long time?
We've been doing it for a while in fMRIPrep and it doesn't seem to be a lot slower. Regardless, I agree that mask should be cached.
I'm adding it to #117. |
While testing FreeSurfer, two immediately succeeding runs with the same working directory rerun large chunks of the brain extraction workflow, due to an "outdated cache" detected for `init_aff`. This task had previously run successfully.