Troubleshooting and Help

C-PAC is beta software, which means that it is still under active development and has not yet undergone large-scale testing. As such, although we have done our best to ensure a stable pipeline, there may still be a few bugs that we did not catch.

If you find a bug or would like to request a new feature, please visit the GitHub Issues page for C-PAC.

If you have a question that is not answered in the User Guide, or encounter an issue not covered in the ‘Common Issues’ section below, you should submit it to Neurostars.

View Crash Files

C-PAC ≥1.8.0

As of C-PAC v1.8.0, crash files are plain text files and can be read with any text editor.
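For example, a crash file can be viewed directly with a pager or any text editor (the path below is a placeholder for the crash file reported in your run's log output):

less /outputs/log/crash-example-node.txt   # placeholder path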

C-PAC ≤1.7.2

If you have the cpac Python package installed, you can simply run

cpac crash /path/to/crash-file.pklz

where /path/to/crash-file.pklz is the path to the crash file you wish to view (with any necessary options, e.g., --image to specify a C-PAC image to use).

Otherwise, you need to enter the container by typing:

docker run -i -t --entrypoint='/bin/bash' --rm -v /Users/You/some_folder:/outputs fcpindi/c-pac:latest

Crash files are generated by Nipype, and can be viewed from the terminal by typing:

nipypecli crash crash-file.pklz

where crash-file.pklz is the name of the crash file you wish to view.
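Note that the crash file must be accessible from inside the container, for example through the /outputs folder mounted in the docker run command above (the filename below is a placeholder):

nipypecli crash /outputs/crash-file.pklz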

Common Issues

bids-validator: command not found

bids-validator is missing from C-PAC image

New in version 1.8.6.

See issue #2110 for the latest developments on this issue.

When running C-PAC with an input BIDS directory but no data configuration file, and without the --skip_bids_validator flag, C-PAC will crash with a bids-validator: command not found message.

Workarounds

Either of these workarounds should allow you to run versions of C-PAC affected by this issue:

  • Add the --skip_bids_validator flag to your run command (see the example after this list). If desired, run the BIDS validator on your input data prior to using C-PAC.

  • Use a data configuration file to specify your input data.
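For example, a participant-level run with BIDS validation skipped might look like the following (the bind-mounted paths are placeholders for your own BIDS and output directories):

docker run -i --rm -v /Users/You/bids_dir:/bids_dir -v /Users/You/some_folder:/outputs fcpindi/c-pac:latest /bids_dir /outputs participant --skip_bids_validator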

Planned resolution
  • Restoring or replacing bids-validator in an upcoming version of C-PAC.

My end-to-end surface pipeline with ABCD post-processing is hanging/stalling.

FreeSurfer-based pipeline hangs if recon-all and ABCD surface post-processing are run in the same pipeline

See issue #2104 for the latest developments on this issue.

When running both recon-all within C-PAC via Nipype and the ABCD surface post-processing workflow,

# PREPROCESSING
# -------------
surface_analysis:

  # Will run Freesurfer for surface-based analysis. Will output traditional Freesurfer derivatives.
  # If you wish to employ Freesurfer outputs for brain masking or tissue segmentation in the voxel-based pipeline,
  # select those 'Freesurfer-' labeled options further below in anatomical_preproc.
  freesurfer:
    run_reconall: On

  # Run ABCD-HCP post FreeSurfer and fMRISurface pipeline
  post_freesurfer:
    run: On

the pipeline tends to hang around the timeseries warp-to-template stage, although that stage itself appears unrelated to the stall.

(When the stall occurs, the last node listed as complete in the pypeline.log is always one of the nodes in the ABCD timeseries warp-to-template nodeblock, and searching the working directory often shows that the ANTs registration never even started.)

However, the post-processing workflow runs well on its own; the problem only occurs when recon-all and post-processing are combined in a single pipeline.

Workarounds
Preferred

Run FreeSurfer recon-all ahead of time and ingress its outputs into C-PAC. This is now our recommended usage and does not encounter this problem.

Alternative

Cancelling the stalled pipeline and then re-starting the run as-is (a warm restart, which requires that the Nipype working directory from the stalled run still exists) also gets past the problem; a sketch follows.
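Assuming the working directory was kept (for example with the --save_working_dir option) and using placeholder paths, the warm restart is simply the same command run again after cancelling the stalled run:

docker run -i --rm -v /Users/You/bids_dir:/bids_dir -v /Users/You/some_folder:/outputs fcpindi/c-pac:latest /bids_dir /outputs participant --save_working_dir

Because completed Nipype nodes are not re-run, the restart picks up close to where the stall occurred.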

Planned resolution
  • Removing FreeSurfer recon-all as a workflow within C-PAC, instead requiring its outputs as input data for configurations that involve surface analysis.

I have a pipeline configuration that used to work, but now I’m getting errors that start with OSError: File /ndmg_atlases/label/Human/. Why?

On July 20, 2020, Neuroparc released v1.0. In moving from v0 to v1, the paths to several Neuroparc atlases changed. These atlases are used by C-PAC in its default and preconfigured pipelines. C-PAC v1.7.0 includes the Neuroparc v1.0 paths, but if you are using a pipeline based on C-PAC 1.6.2a or older, you will need to update any Neuroparc v0 paths in your configuration file.

All paths begin with /ndmg_atlases/label/Human/

Neuroparc v0 → Neuroparc v1

aal_space-MNI152NLin6_res-1x1x1.nii.gz → AAL_space-MNI152NLin6_res-1x1x1.nii.gz
aal_space-MNI152NLin6_res-2x2x2.nii.gz → AAL_space-MNI152NLin6_res-2x2x2.nii.gz
AAL2zourioMazoyer2002.nii.gz → AAL_space-MNI152NLin6_res-1x1x1.nii.gz
brodmann_space-MNI152NLin6_res-1x1x1.nii.gz → Brodmann_space-MNI152NLin6_res-1x1x1.nii.gz
brodmann_space-MNI152NLin6_res-2x2x2.nii.gz → Brodmann_space-MNI152NLin6_res-2x2x2.nii.gz
CorticalAreaParcellationfromRestingStateCorrelationsGordon2014.nii.gz → CAPRSC_space-MNI152NLin6_res-1x1x1.nii.gz
desikan_space-MNI152NLin6_res-1x1x1.nii.gz → Desikan_space-MNI152NLin6_res-1x1x1.nii.gz
desikan_space-MNI152NLin6_res-2x2x2.nii.gz → Desikan_space-MNI152NLin6_res-2x2x2.nii.gz
DesikanKlein2012.nii.gz → DesikanKlein_space-MNI152NLin6_res-1x1x1.nii.gz
glasser_space-MNI152NLin6_res-1x1x1.nii.gz → Glasser_space-MNI152NLin6_res-1x1x1.nii.gz
Juelichgmthr252mmEickhoff2005.nii.gz → Juelich_space-MNI152NLin6_res-1x1x1.nii.gz
MICCAI2012MultiAtlasLabelingWorkshopandChallengeNeuromorphometrics.nii.gz → MICCAI_space-MNI152NLin6_res-1x1x1.nii.gz
princetonvisual-top_space-MNI152NLin6_res-2x2x2.nii.gz → Princetonvisual-top_space-MNI152NLin6_res-2x2x2.nii.gz
Schaefer2018-200-node_space-MNI152NLin6_res-1x1x1.nii.gz → Schaefer200_space-MNI152NLin6_res-1x1x1.nii.gz
Schaefer2018-300-node_space-MNI152NLin6_res-1x1x1.nii.gz → Schaefer300_space-MNI152NLin6_res-1x1x1.nii.gz
Schaefer2018-400-node_space-MNI152NLin6_res-1x1x1.nii.gz → Schaefer400_space-MNI152NLin6_res-1x1x1.nii.gz
Schaefer2018-1000-node_space-MNI152NLin6_res-1x1x1.nii.gz → Schaefer1000_space-MNI152NLin6_res-1x1x1.nii.gz
slab907_space-MNI152NLin6_res-1x1x1.nii.gz → Slab907_space-MNI152NLin6_res-1x1x1.nii.gz
yeo-7_space-MNI152NLin6_res-1x1x1.nii.gz → Yeo-7_space-MNI152NLin6_res-1x1x1.nii.gz
yeo-7-liberal_space-MNI152NLin6_res-1x1x1.nii.gz → Yeo-7-liberal_space-MNI152NLin6_res-1x1x1.nii.gz
yeo-17_space-MNI152NLin6_res-1x1x1.nii.gz → Yeo-17_space-MNI152NLin6_res-1x1x1.nii.gz
yeo-17-liberal_space-MNI152NLin6_res-1x1x1.nii.gz → Yeo-17-liberal_space-MNI152NLin6_res-1x1x1.nii.gz

My run is crashing with the crash files indicating a memory error. What is going on?

This issue often occurs when scans are resampled to a higher resolution. This is because functional images, the template that you are resampling to, and the resulting generated image are all loaded into memory at various points in the C-PAC pipeline. Typically these images are compressed, but they will be uncompressed when loaded. If the amount of memory required to load the images exceeds the amount of memory available, C-PAC will crash.

For instance, suppose you are resampling a 50 MB (100 MB uncompressed) resting-state scan with 3 mm isotropic voxels to a 3 MB (15 MB uncompressed) 1 mm template. Dividing the voxel volume of the original resting-state scan by the voxel volume of the template shows that the resampled resting-state scan will contain 27 times as many voxels. Therefore, multiplying the uncompressed file size by 27 gives an estimate of 2.7 GB for the resampled scan alone, and you will need at least this much RAM for C-PAC to load it. If you are running multiple subjects simultaneously, multiply this estimate by the number of subjects. Note that the template, the original image, and any other open applications have their own memory requirements as well, so the estimate is closer to 2.8 GB per subject plus however much RAM is in use before C-PAC starts (assuming no new applications are started mid-run).
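The arithmetic from this example can be checked in a shell (the numbers are illustrative):

# (3 mm / 1 mm)^3 = 27 times as many voxels after resampling
# 100 MB uncompressed x 27 = 2700 MB, i.e. roughly 2.7 GB for the resampled scan alone
echo $(( 100 * 3 * 3 * 3 ))   # prints 2700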

To avoid this error, you will need to either get more RAM, run fewer subjects at once, or consider downsampling your template to a lower resolution.

I’m re-running a pipeline, but I am receiving many crashes. Most of these crashes tell me that a file that has been moved or no longer exists is being used as an input for a step in the C-PAC pipeline. What is going on and how can I tell C-PAC to use the correct inputs?

One of the features of Nipype (which C-PAC is built upon) is that steps that have been run before are not re-run when you re-start a pipeline. Nipype accomplishes this by associating a value with a step based on the properties of that step (i.e., hashing). Nipype has two potential values that it can associate with a step: a value based on the size and date of the files created by the step, and a value based upon the data present within the files themselves. The first value is what C-PAC uses with its pipelines, since it is much more computationally practical. Since this value only looks at the size and date of files to determine whether or not a step has been run, it will not see that the file’s path has changed, and it will assume that all paths are consistent with the path structure from when the pipeline was run before.

To work around this error, you will need to delete the working directory associated with the previous run and create a new directory to replace it for the new run.
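For example (the path is a placeholder for the working directory configured for the previous run):

rm -rf /Users/You/some_folder/working   # remove the stale working directory
mkdir /Users/You/some_folder/working    # recreate it empty for the new run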

How should I cite C-PAC in my paper?

Please cite the abstract located here.