Run on Docker
A C-PAC Docker image is available so that you can easily get an analysis running without needing to install C-PAC.
The Docker image is designed following the specification established by the BIDS-Apps project, an initiative to create a collection of reproducible neuroimaging workflows that can be executed as self-contained environments using Docker containers. These workflows take as input any dataset organized according to the Brain Imaging Data Structure (BIDS) standard and generate first-level outputs for that dataset. However, you can also provide the C-PAC Docker image with a custom, non-BIDS dataset by supplying your own data configuration file. More details below.
In addition, as part of this initiative we have created a default pipeline configuration for the Docker image that allows you to run the C-PAC pipeline on your data in an environment fully provisioned with all of C-PAC's dependencies; more details about the default pipeline are available further below. If you wish to run your own pipeline configuration, you can also provide it to the Docker image at run-time.
To start, first pull the image from Docker Hub:
docker pull fcpindi/c-pac:latest
Once this is complete, you can use the fcpindi/c-pac:latest image tag to invoke runs. The full C-PAC Docker image usage options are shown below, along with some specific use cases.
As a quick example, to run the C-PAC Docker container in participant mode, for one participant, using a BIDS dataset stored on your machine or server, and using the Docker image's default pipeline configuration (the command is broken across multiple lines for visual clarity):
docker run -i --rm \
-v /Users/You/local_bids_data:/bids_dataset \
-v /Users/You/some_folder:/outputs \
-v /tmp:/tmp \
fcpindi/c-pac:latest /bids_dataset /outputs participant
Note that the -v flags map your local filesystem locations to a "location" within the Docker image (for example, the /bids_dataset and /outputs directories in the command above are arbitrary names). If you provided /Users/You/local_bids_data directly to the bids_dir input parameter, Docker would not be able to access or see that directory, so it needs to be mapped first. In this example, the local machine's /tmp directory has been mapped to the /tmp name because the C-PAC Docker image's default pipeline sets the working directory to /tmp. If you wish to keep your working directory somewhere more permanent, you can simply map it like so: -v /Users/You/working_dir:/tmp.
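For example, a sketch of the same run with a persistent working directory on the host (where /Users/You/working_dir is a placeholder for any directory you choose):
docker run -i --rm \
-v /Users/You/local_bids_data:/bids_dataset \
-v /Users/You/some_folder:/outputs \
-v /Users/You/working_dir:/tmp \
fcpindi/c-pac:latest /bids_dataset /outputs participant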
You can also provide a link to an AWS S3 bucket containing a BIDS directory as the data source:
docker run -i --rm \
-v /Users/You/some_folder:/outputs \
-v /tmp:/tmp \
fcpindi/c-pac:latest s3://fcp-indi/data/Projects/ADHD200/RawDataBIDS /outputs participant
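If the bucket is not publicly readable, you can map a directory containing your credentials into the container and point the --aws-input-creds flag at it. The bucket name and credentials file below are placeholders, and this assumes your credentials are stored in a file C-PAC can parse (alternatively, pass the string "env" to read credentials from the environment, as described in the usage below):
docker run -i --rm \
-v /Users/You/some_folder:/outputs \
-v /tmp:/tmp \
-v /Users/You/credentials:/credentials \
fcpindi/c-pac:latest s3://your-bucket/your_bids_dir /outputs participant \
--aws-input-creds /credentials/aws_creds.csv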
In addition to the default pipeline, C-PAC comes packaged with a growing library of pre-configured pipelines that are ready to use. To run the C-PAC Docker container with one of these pre-configured pipelines, simply invoke the --preconfig flag, as shown below. See the full selection of pre-configured pipelines here.
docker run -i --rm \
-v /Users/You/local_bids_data:/bids_dataset \
-v /Users/You/some_folder:/outputs \
-v /tmp:/tmp \
fcpindi/c-pac:latest /bids_dataset /outputs --preconfig anat-only
To run the C-PAC Docker container with a pipeline configuration file other than one of the pre-configured pipelines, assuming the configuration file is in the /Users/You/Documents directory:
docker run -i --rm \
-v /Users/You/local_bids_data:/bids_dataset \
-v /Users/You/some_folder:/outputs \
-v /tmp:/tmp \
-v /Users/You/Documents:/configs \
-v /Users/You/resources:/resources \
fcpindi/c-pac:latest /bids_dataset /outputs participant --pipeline-file /configs/pipeline_config.yml
In this case, we need to map the directory containing the pipeline configuration file, /Users/You/Documents, to a Docker image virtual directory, /configs. Note that we are using this /configs directory in the --pipeline-file input flag. In addition, if there are any ROIs, masks, or input files listed in your pipeline configuration file, the directory containing them must be mapped as well. Assuming /Users/You/resources is your directory of ROI and/or mask files, we map it with -v /Users/You/resources:/resources. In the pipeline configuration file you are providing, these ROI and mask files must be listed as /resources/ROI.nii.gz (etc.) because we have mapped /Users/You/resources to /resources.
Finally, to run the Docker container with a specific data configuration file (instead of providing a BIDS data directory):
docker run -i --rm \
-v /Users/You/any_directory:/bids_dataset \
-v /Users/You/some_folder:/outputs \
-v /tmp:/tmp \
-v /Users/You/Documents:/configs \
fcpindi/c-pac:latest /bids_dataset /outputs participant --data-config-file /configs/data_config.yml
Note: we are still providing /bids_dataset to the bids_dir input parameter. However, we have mapped this to any directory on your machine, as C-PAC will not look for data in this directory when you provide a data configuration YAML with the --data-config-file flag. In addition, if the dataset in your data configuration file is not in BIDS format, just make sure to add the --skip-bids-validator flag at the end of your command to bypass the BIDS validation process.
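Putting this together, a sketch of the same run for a non-BIDS dataset simply appends the flag:
docker run -i --rm \
-v /Users/You/any_directory:/bids_dataset \
-v /Users/You/some_folder:/outputs \
-v /tmp:/tmp \
-v /Users/You/Documents:/configs \
fcpindi/c-pac:latest /bids_dataset /outputs participant \
--data-config-file /configs/data_config.yml --skip-bids-validator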
The full list of parameters and options that can be passed to the Docker container is shown below:
Usage: cpac run
$ cpac run --help
Loading 🐳 Docker
Loading 🐳 fcpindi/c-pac:latest as "root (0)" with these directory bindings:
local Docker mode
---------------------------- ---------------------------- ------
/etc/passwd /etc/passwd ro
/root/.cpac /root/.cpac rw
/home/circleci/build /home/circleci/build rw
/home/circleci/build/log /home/circleci/build/log rw
/home/circleci/build/working /home/circleci/build/working rw
/home/circleci/build/outputs /home/circleci/build/outputs rw
Logging messages will refer to the Docker paths.
usage: run.py [-h] [--pipeline-file PIPELINE_FILE] [--group-file GROUP_FILE]
[--data-config-file DATA_CONFIG_FILE] [--preconfig PRECONFIG]
[--aws-input-creds AWS_INPUT_CREDS]
[--aws-output-creds AWS_OUTPUT_CREDS] [--n-cpus N_CPUS]
[--mem-mb MEM_MB] [--mem-gb MEM_GB]
[--runtime-usage RUNTIME_USAGE]
[--runtime-buffer RUNTIME_BUFFER]
[--num-ants-threads NUM_ANTS_THREADS]
[--random-seed RANDOM_SEED]
[--save-working-dir [SAVE_WORKING_DIR]] [--fail-fast FAIL_FAST]
[--participant-label PARTICIPANT_LABEL [PARTICIPANT_LABEL ...]]
[--participant-ndx PARTICIPANT_NDX] [--T1w-label T1W_LABEL]
[--bold-label BOLD_LABEL [BOLD_LABEL ...]] [-v]
[--bids-validator-config BIDS_VALIDATOR_CONFIG]
[--skip-bids-validator] [--anat-only]
[--user_defined USER_DEFINED] [--tracking-opt-out]
[--monitoring]
bids_dir output_dir {participant,group,test_config,cli}
C-PAC Pipeline Runner. Copyright (C) 2022-2024 C-PAC Developers. This program
comes with ABSOLUTELY NO WARRANTY. This is free software, and you are welcome
to redistribute it under certain conditions. For details, see https://fcp-
indi.github.io/docs/nightly/license or the COPYING and COPYING.LESSER files
included in the source code.
positional arguments:
bids_dir The directory with the input dataset formatted
according to the BIDS standard. Use the format
s3://bucket/path/to/bidsdir to read data directly from
an S3 bucket. This may require AWS S3 credentials
specified via the --aws_input_creds option.
output_dir The directory where the output files should be stored.
If you are running group level analysis this folder
should be prepopulated with the results of the
participant level analysis. Use the format
s3://bucket/path/to/bidsdir to write data directly to
an S3 bucket. This may require AWS S3 credentials
specified via the --aws_output_creds option.
{participant,group,test_config,cli}
Level of the analysis that will be performed. Multiple
participant level analyses can be run independently
(in parallel) using the same output_dir. test_config
will run through the entire configuration process but
will not execute the pipeline.
options:
-h, --help show this help message and exit
--pipeline-file PIPELINE_FILE, --pipeline_file PIPELINE_FILE
Path for the pipeline configuration file to use. Use
the format s3://bucket/path/to/pipeline_file to read
data directly from an S3 bucket. This may require AWS
S3 credentials specified via the --aws_input_creds
option.
--group-file GROUP_FILE, --group_file GROUP_FILE
Path for the group analysis configuration file to use.
Use the format s3://bucket/path/to/pipeline_file to
read data directly from an S3 bucket. This may require
AWS S3 credentials specified via the --aws_input_creds
option. The output directory needs to refer to the
output of a preprocessing individual pipeline.
--data-config-file DATA_CONFIG_FILE, --data_config_file DATA_CONFIG_FILE
Yaml file containing the location of the data that is
to be processed. This file is not necessary if the
data in bids_dir is organized according to the BIDS
format. This enables support for legacy data
organization and cloud based storage. A bids_dir must
still be specified when using this option, but its
value will be ignored. Use the format
s3://bucket/path/to/data_config_file to read data
directly from an S3 bucket. This may require AWS S3
credentials specified via the --aws_input_creds
option.
--preconfig PRECONFIG
Name of the preconfigured pipeline to run. Available
preconfigured pipelines: ['abcd-options', 'abcd-prep',
'anat-only', 'benchmark-FNIRT', 'blank', 'ccs-
options', 'default', 'default-deprecated', 'fmriprep-
ingress', 'fmriprep-options', 'fx-options', 'monkey',
'ndmg', 'nhp-macaque', 'preproc', 'rbc-options',
'rodent']. See https://fcp-
indi.github.io/docs/nightly/user/pipelines/preconfig
for more information about the preconfigured
pipelines.
--aws-input-creds AWS_INPUT_CREDS, --aws_input_creds AWS_INPUT_CREDS
Credentials for reading from S3. If not provided and
s3 paths are specified in the data config, we will try
to access the bucket anonymously. Use the string "env"
to indicate that input credentials should be read from
the environment (e.g., when using AWS IAM roles).
--aws-output-creds AWS_OUTPUT_CREDS, --aws_output_creds AWS_OUTPUT_CREDS
Credentials for writing to S3. If not provided and s3
paths are specified in the output directory, we will
try to access the bucket anonymously. Use the string
"env" to indicate that output credentials should be
read from the environment (e.g., when using AWS IAM
roles).
--n-cpus N_CPUS, --n_cpus N_CPUS
Number of execution resources per participant
available for the pipeline. This flag takes precedence
over max_cores_per_participant in the pipeline
configuration file.
--mem-mb MEM_MB, --mem_mb MEM_MB
Amount of RAM available per participant in megabytes.
Included for compatibility with BIDS-Apps standard,
but mem_gb is preferred. This flag takes precedence
over maximum_memory_per_participant in the pipeline
configuration file.
--mem-gb MEM_GB, --mem_gb MEM_GB
Amount of RAM available per participant in gigabytes.
If this is specified along with mem_mb, this flag will
take precedence. This flag also takes precedence over
maximum_memory_per_participant in the pipeline
configuration file.
--runtime-usage RUNTIME_USAGE, --runtime_usage RUNTIME_USAGE
Path to a callback.log from a prior run of the same
pipeline configuration (including any resource-
management parameters that will be applied in this
run, like 'n_cpus' and 'num_ants_threads'). This log
will be used to override per-node memory estimates
with observed values plus a buffer.
--runtime-buffer RUNTIME_BUFFER, --runtime_buffer RUNTIME_BUFFER
Buffer to add to per-node memory estimates if
--runtime_usage is specified. This number is a
percentage of the observed memory usage.
--num-ants-threads NUM_ANTS_THREADS, --num_ants_threads NUM_ANTS_THREADS
The number of cores to allocate to ANTS-based
anatomical registration per participant. Multiple
cores can greatly speed up this preprocessing step.
This number cannot be greater than the number of cores
per participant.
--random-seed RANDOM_SEED, --random_seed RANDOM_SEED
Random seed used to fix the state of execution. If
unset, each process uses its own default. If set, a
`random.log` file will be generated logging the random
state used by each process. If set to a positive
integer (up to 2147483647), that integer will be used
to seed each process. If set to 'random', a random
seed will be generated and recorded for each process.
--save-working-dir [SAVE_WORKING_DIR], --save_working_dir [SAVE_WORKING_DIR]
Save the contents of the working directory.
--fail-fast FAIL_FAST, --fail_fast FAIL_FAST
Stop workflow execution on first crash?
--participant-label PARTICIPANT_LABEL [PARTICIPANT_LABEL ...], --participant_label PARTICIPANT_LABEL [PARTICIPANT_LABEL ...]
The label of the participant that should be analyzed.
The label corresponds to sub-<participant_label> from
the BIDS spec (so it does not include "sub-"). If this
parameter is not provided all participants should be
analyzed. Multiple participants can be specified with
a space separated list.
--participant-ndx PARTICIPANT_NDX, --participant_ndx PARTICIPANT_NDX
The index of the participant that should be analyzed.
This corresponds to the index of the participant in
the data config file. This was added to make it easier
to accommodate SGE array jobs. Only a single
participant will be analyzed. Can be used with
participant label, in which case it is the index into
the list that follows the participant_label flag. Use
the value "-1" to indicate that the participant index
should be read from the AWS_BATCH_JOB_ARRAY_INDEX
environment variable.
--T1w-label T1W_LABEL, --T1w_label T1W_LABEL
C-PAC currently runs only one T1w per participant-
session at a time. Use this flag to specify any BIDS
entity (e.g., "acq-VNavNorm") or sequence of BIDS
entities (e.g., "acq-VNavNorm_run-1") to specify which
of multiple T1w files to use. Specify "--T1w_label
T1w" to choose the T1w file with the fewest BIDS
entities (i.e., the final option of [*_acq-
VNavNorm_T1w.nii.gz, *_acq-HCP_T1w.nii.gz,
*_T1w.nii.gz"]). C-PAC will choose the first T1w it
finds if the user does not provide this flag, or if
multiple T1w files match the --T1w_label provided. If
multiple T2w files are present and a comparable filter
is possible, T2w files will be filtered as well. If no
T2w files match this --T1w_label, T2w files will be
processed as if no --T1w_label were provided.
--bold-label BOLD_LABEL [BOLD_LABEL ...], --bold_label BOLD_LABEL [BOLD_LABEL ...]
To include a specified subset of available BOLD files,
use this flag to specify any BIDS entity (e.g., "task-
rest") or sequence of BIDS entities (e.g. "task-
rest_run-1"). To specify the bold file with the fewest
BIDS entities in the file name, specify "--bold_label
bold". Multiple `--bold_label`s can be specified with
a space-separated list. If multiple `--bold_label`s
are provided (e.g., "--bold_label task-rest_run-1
task-rest_run-2", each scan that includes all BIDS
entities specified in any of the provided
`--bold_label`s will be analyzed. If this parameter is
not provided all BOLD scans should be analyzed.
-v, --version show program's version number and exit
--bids-validator-config BIDS_VALIDATOR_CONFIG, --bids_validator_config BIDS_VALIDATOR_CONFIG
JSON file specifying configuration of bids-validator:
See https://github.com/bids-standard/bids-validator
for more info.
--skip-bids-validator, --skip_bids_validator
Skips bids validation.
--anat-only, --anat_only
Run only the anatomical preprocessing.
--user_defined USER_DEFINED
Arbitrary user defined string that will be included in
every output sidecar file.
--tracking-opt-out, --tracking_opt-out
Disable usage tracking. Only the number of
participants in the analysis is tracked.
--monitoring Enable monitoring server on port 8080. You need to
bind the port using the Docker flag "-p".
Note that any of the optional arguments above will override any pipeline settings in the default pipeline or in the pipeline configuration file you provide via the --pipeline-file parameter.
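For example, assuming you want to cap each participant's run at 4 cores and 16 GB of RAM regardless of what the pipeline configuration file specifies, a sketch of such a run would be:
docker run -i --rm \
-v /Users/You/local_bids_data:/bids_dataset \
-v /Users/You/some_folder:/outputs \
-v /tmp:/tmp \
fcpindi/c-pac:latest /bids_dataset /outputs participant \
--n-cpus 4 --mem-gb 16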
Further usage notes:
- You can run only anatomical preprocessing easily, without modifying your data or pipeline configuration files, by providing the --anat-only flag.
- As stated, the default behavior is to read data that is organized in the BIDS format. This includes data that is in Amazon AWS S3, by using the format s3://<bucket_name>/<bids_dir> for the bids_dir command line argument. Outputs can be written to S3 using the same format for the output_dir. Credentials for accessing these buckets can be specified on the command line (using --aws-input-creds or --aws-output-creds).
- When the app is run, a data configuration file is written to the working directory. This file can be passed into subsequent runs, which avoids the overhead of re-parsing the BIDS input directory on each run (i.e., for cluster or cloud runs). These files can be generated without executing the C-PAC pipeline by using test_config as the analysis level.
- The participant_label and participant_ndx arguments allow the user to specify which of the many datasets should be processed, which is useful when parallelizing the run of multiple participants (see the example below).