Network Centrality

CPAC.network_centrality.create_resting_state_graphs(allocated_memory=None, wf_name='resting_state_graph')[source]

Workflow to calculate degree and eigenvector centrality, as well as local functional connectivity density (lFCD), for resting-state data.

allocated_memory : string
amount of memory (in GB) allocated for the centrality calculation; None by default
generate_graph : boolean
when True, the workflow plots the adjacency matrix graph, converts the adjacency matrix into a compressed sparse matrix, and stores it in a .mat file. False by default
wf_name : string
name of the workflow
wf : workflow object
resting state graph workflow object

Source

Workflow Inputs:

inputspec.subject: string (nifti file)
    path to resting state input data for which centrality measure is to be calculated
    
inputspec.template : string (existing nifti file)
    path to mask/parcellation unit 

inputspec.method_option: string (int)
    0 for degree centrality, 1 for eigenvector centrality, 2 for lFCD

inputspec.threshold: string (float)
    pvalue/sparsity_threshold/threshold value

inputspec.threshold_option: string (int)
    threshold options: 0 for probability p_value, 1 for sparsity threshold, any other value for a direct correlation threshold
   
centrality_options.weight_options : string (list of boolean)
    list of two booleans for binarize and weighted options respectively

centrality_options.method_options : string (list of boolean)
    list of two booleans for Degree and Eigenvector centrality method options respectively

Workflow Outputs:

outputspec.centrality_outputs : string (list of nifti files)
    paths to the centrality outputs for the binarized and/or weighted, degree and/or eigenvector options

outputspec.threshold_matrix : string (numpy file)
    path to file containing thresholded correlation matrix

outputspec.correlation_matrix : string (numpy file)
    path to file containing correlation matrix

outputspec.graph_outputs : string (mat and png files)
    paths to MATLAB-compatible sparse adjacency matrix files and adjacency graph images

Order of commands:

  • load the data and template; based on the template type (parcellation unit or mask), extract the timeseries
  • Calculate the correlation matrix for the image data for each voxel in the mask or node in the parcellation unit
  • Based on threshold option (p_value or sparsity_threshold), calculate the threshold value
  • Threshold the correlation matrix
  • Based on the weight options for edges in the network (binarized or weighted), calculate degree or eigenvector centrality measures
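
Sketched in plain numpy, with illustrative shapes and threshold values (none of these names come from the workflow itself), these steps amount to:

>>> import numpy as np
>>> ntpts, nvoxs = 100, 500                       # illustrative dimensions
>>> ts = np.random.random((ntpts, nvoxs))         # stand-in for masked timeseries
>>> ts = ts - ts.mean(axis=0)                     # demean each voxel
>>> ts = ts / np.linalg.norm(ts, axis=0)          # L2-normalize each voxel
>>> corr = ts.T.dot(ts)                           # voxel x voxel correlation matrix
>>> r_thresh = 0.25                               # from p-value/sparsity/raw threshold
>>> adj = corr > r_thresh                         # thresholded, binarized network
>>> degree = adj.sum(axis=1)                      # binarized degree centrality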

High Level Workflow Graph:

workflows/../images/resting_state_centrality.dot.png

Detailed Workflow Graph:

workflows/../images/resting_state_centrality_detailed.dot.png
>>> import resting_state_centrality as graph
>>> wflow = graph.create_resting_state_graphs()
>>> wflow.inputs.centrality_options.method_options=[True, True]
>>> wflow.inputs.centrality_options.weight_options=[True, True]
>>> wflow.inputs.inputspec.subject = '/home/work/data/rest_mc_MNI_TR_3mm.nii.gz'
>>> wflow.inputs.inputspec.template = '/home/work/data/mask_3mm.nii.gz'
>>> wflow.inputs.inputspec.threshold_option = 1
>>> wflow.inputs.inputspec.threshold = 0.0744
>>> wflow.base_dir = 'graph_working_directory'
>>> wflow.run()
CPAC.network_centrality.load(datafile, template=None)[source]

Method to read data from datafile and mask/parcellation unit and store the mask data, timeseries, affine matrix, mask type and scans. The output of this method is used by all other nodes.

Note that this function will also internally compute its own brain mask, taking all voxels with non-zero variance in the timeseries.

datafile : string (nifti file)
path to subject data file
template : string (nifti file) or None (default: None)
path to mask/parcellation unit; if None, a mask of all 1s is used
timeseries_data : ndarray
Masked timeseries of the input data.
affine : ndarray
Affine matrix of the input data
final_mask : ndarray
Mask/parcellation unit matrix
template_type : string
0 for mask, 1 for parcellation unit
scans : string (int)
total number of scans in the input data

Raises: Exception
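
As a rough sketch of the masking and timeseries extraction described above (nibabel is assumed for I/O; the file name is reused from the earlier example):

>>> import numpy as np
>>> import nibabel as nib
>>> img = nib.load('rest_mc_MNI_TR_3mm.nii.gz')   # subject data file
>>> data = img.get_fdata()                        # 4D array: (x, y, z, t)
>>> brain_mask = data.var(axis=-1) != 0           # voxels with non-zero variance
>>> timeseries_data = data[brain_mask].T          # (ntpts x nvoxs) masked timeseries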

CPAC.network_centrality.get_centrality_by_rvalue(ts_normd, template, method_option, weight_options, r_value, block_size)[source]

Method to calculate degree/eigenvector centrality and lFCD via correlation (r-value) threshold

ts_normd : ndarray (float)
timeseries of shape (ntpts x nvoxs) that is normalized; i.e. the data is demeaned and divided by its L2-norm
template : ndarray
three dimensional array with non-zero elements corresponding to the indices at which the lFCD metric is analyzed
method_option : integer
0 - degree centrality calculation, 1 - eigenvector centrality calculation, 2 - lFCD calculation
weight_options : list (boolean)
weight_options[0] - True/False to perform binary counting; weight_options[1] - True/False to perform weighted counting
r_value : a float
correlation (r-value) threshold
block_size : an integer
the number of rows (voxels) to compute timeseries correlation over at any one time
out_list : list (string, ndarray)
list of (string, ndarray) tuples, where the string is the name of the metric and the ndarray holds the values to be mapped for that metric
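
A minimal illustration of the binary and weighted counting selected by weight_options (synthetic data, not the blocked implementation used here):

>>> import numpy as np
>>> corr = np.corrcoef(np.random.random((300, 100)))   # (nvoxs x nvoxs) correlations
>>> r_value = 0.2
>>> binary = (corr > r_value).sum(axis=1)              # weight_options[0]: edge counts
>>> weighted = (corr * (corr > r_value)).sum(axis=1)   # weight_options[1]: summed r-values
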
CPAC.network_centrality.get_centrality_by_sparsity(ts_normd, method_option, weight_options, threshold, block_size)[source]

Method to calculate degree/eigenvector centrality via sparsity threshold

ts_normd : ndarray
timeseries of shape (ntpts x nvoxs) that is normalized; i.e. the data is demeaned and divided by its L2-norm
method_option : integer
0 - degree centrality calculation, 1 - eigenvector centrality calculation, 2 - lFCD calculation
weight_options : list (boolean)
weight_options[0] - True/False to perform binary counting; weight_options[1] - True/False to perform weighted counting
threshold : a float
sparsity_threshold value
block_size : an integer
the number of rows (voxels) to compute timeseries correlation over at any one time
out_list : list (string, ndarray)
list of (string, ndarray) tuples, where the string is the name of the metric and the ndarray holds the values to be mapped for that metric
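
How a sparsity threshold translates into an r-value cutoff, as a rough sketch (details such as excluding the diagonal and double-counted edges are glossed over):

>>> import numpy as np
>>> nvoxs = 300
>>> corr = np.corrcoef(np.random.random((nvoxs, 100)))
>>> sparsity = 0.01                                    # keep the top 1% of connections
>>> n_keep = int(sparsity * nvoxs * nvoxs)
>>> r_cutoff = np.sort(corr, axis=None)[-n_keep]       # smallest r among kept edges
>>> degree = (corr >= r_cutoff).sum(axis=1)
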
CPAC.network_centrality.get_centrality_fast(timeseries, method_options)[source]

Method to calculate degree and eigenvector centrality. Relative to get_centrality_opt, it runs fast by not directly computing the correlation matrix. As a consequence, there are several differences/limitations from the standard approach:

  1. Cannot specify a correlation threshold
  2. As a consequence, the weighted dense matrix centrality is computed
  3. No memory limit is enforced, since memory usage is assumed to be acceptable

Note that the approach doesn’t directly calculate the complete correlation matrix.

timeseries : numpy array
timeseries of the input subject
method_options : string (list of boolean)
list of two booleans for binarize and weighted options respectively
out_list : string (list of tuples)
list of tuples containing the output name used to store the nifti image for centrality and the centrality matrix. The output will only be weighted, since the fast approach is limited to this type of output.

Raises: Exception
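
The matrix-free idea can be sketched as a power iteration that applies the implicit correlation matrix one factor at a time, never materializing the nvoxs x nvoxs matrix (a conceptual sketch, not the function's actual code):

>>> import numpy as np
>>> ntpts, nvoxs = 100, 1000
>>> ts = np.random.random((ntpts, nvoxs))
>>> ts = ts - ts.mean(axis=0)
>>> ts = ts / np.linalg.norm(ts, axis=0)          # columns now unit-norm
>>> v = np.ones(nvoxs) / np.sqrt(nvoxs)           # initial eigenvector guess
>>> for _ in range(100):                          # power iteration
...     w = ts.T.dot(ts.dot(v))                   # (ts.T @ ts) @ v, two matvecs
...     v = w / np.linalg.norm(w)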

CPAC.network_centrality.map_centrality_matrix(centrality_matrix, aff, mask, template_type)[source]

Method to map centrality matrix to a nifti image

centrality_matrix : tuple (string, array_like)
tuple containing matrix name and degree/eigenvector centrality matrix
aff : ndarray
Affine matrix of the input data
mask : ndarray
Mask or ROI data matrix
template_type : int
type of template: 0 for mask, 1 for ROI
out_file : string (nifti image)
nifti image mapped from the centrality matrix

Raises: Exception
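
Conceptually, the mapping fills a volume at the mask's non-zero voxels and wraps it with the affine; a minimal sketch (nibabel assumed; toy mask and identity affine for brevity):

>>> import numpy as np
>>> import nibabel as nib
>>> mask = np.zeros((10, 10, 10), dtype=bool)
>>> mask[3:7, 3:7, 3:7] = True                    # toy brain mask
>>> values = np.random.random(mask.sum())         # one centrality value per voxel
>>> volume = np.zeros(mask.shape)
>>> volume[mask] = values                         # place values back into 3D space
>>> nib.save(nib.Nifti1Image(volume, np.eye(4)), 'degree_centrality_binarize.nii.gz')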

CPAC.network_centrality.get_cent_zscore(wf_name='z_score')[source]

Workflow to calculate z-scores

wf_name : string
name of the workflow

wf : workflow object

Source

Workflow Inputs:

inputspec.input_file : string
    path to the input functional derivative file for which the z-score is to be calculated
inputspec.mask_file : string
    path to the whole-brain functional mask file required to calculate the z-score

Workflow Outputs:

outputspec.z_score_img : string
     path to the image containing the normalized input z-scores across the full brain

High Level Workflow Graph:

workflows/../images/zscore.dot.png

Detailed Workflow Graph:

workflows/../images/zscore_detailed.dot.png
>>> from CPAC.network_centrality import get_cent_zscore
>>> wf = get_cent_zscore()
>>> wf.inputs.inputspec.input_file = '/home/data/graph_working_dir/calculate_centrality/degree_centrality_binarize.nii.gz'
>>> wf.inputs.inputspec.mask_file = '/home/data/graphs/GraphGeneration/new_mask_3m.nii.gz'
>>> wf.run()
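
The z-scoring itself reduces to the following, sketched here on synthetic arrays (not the workflow's internals):

>>> import numpy as np
>>> img = np.random.random((10, 10, 10))          # stand-in for a centrality map
>>> mask = np.ones(img.shape, dtype=bool)         # whole-brain mask
>>> vals = img[mask]
>>> z_img = np.zeros_like(img)
>>> z_img[mask] = (vals - vals.mean()) / vals.std()   # normalize across the brain
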
CPAC.network_centrality.calc_corrcoef(X, Y=None)[source]

Method to calculate correlation. Each of the columns in X will be correlated with each of the columns in Y. Each column represents a variable; the rows contain the observations.

X : numpy array
array of shape (x1, x2)
Y : numpy array
array of shape (y1, y2)
r : numpy array
array of correlation values, of shape (x2, y2)
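
One standard way to obtain this column-wise correlation is via z-scored columns; an illustrative equivalent (not necessarily the function's internals):

>>> import numpy as np
>>> X = np.random.random((50, 3)); Y = np.random.random((50, 4))
>>> Xz = (X - X.mean(axis=0)) / X.std(axis=0)
>>> Yz = (Y - Y.mean(axis=0)) / Y.std(axis=0)
>>> r = Xz.T.dot(Yz) / X.shape[0]                 # correlation matrix, shape (3, 4)
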
CPAC.network_centrality.calc_centrality(datafile, template, method_option, threshold_option, threshold, weight_options, allocated_memory)[source]

Method to calculate centrality measures and map them to a nifti file

datafile : string (nifti file)
path to subject data file
template : string (nifti file)
path to mask/parcellation unit
method_option : integer
0 - degree centrality calculation, 1 - eigenvector centrality calculation, 2 - lFCD calculation
threshold_option : an integer
0 for probability p_value, 1 for sparsity threshold, 2 for actual threshold value, and 3 for no threshold and fast approach
threshold : a float
pvalue/sparsity_threshold/threshold value
weight_options : list (boolean)
list of booleans, where, weight_options[0] corresponds to binary counting and weight_options[1] corresponds to weighted counting (e.g. [True,False])
allocated_memory : string
amount of memory (in GB) allocated to the degree centrality calculation
out_list : list
list containing the mapped centrality output images
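
A direct call might look like the following, using the documented signature (the paths are reused from the workflow example above):

>>> from CPAC.network_centrality import calc_centrality
>>> out_list = calc_centrality('/home/work/data/rest_mc_MNI_TR_3mm.nii.gz',
...                            '/home/work/data/mask_3mm.nii.gz',
...                            method_option=0,             # degree centrality
...                            threshold_option=0,          # p-value thresholding
...                            threshold=0.001,
...                            weight_options=[True, True], # binarized and weighted
...                            allocated_memory='2')
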
CPAC.network_centrality.convert_pvalue_to_r(scans, threshold)[source]

Method to calculate correlation threshold from p_value

scans : int
Total number of scans in the data
threshold : float
input p_value
rvalue : float
correlation threshold value
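
The usual p-to-r conversion goes through the t-distribution with scans - 2 degrees of freedom; a sketch with scipy (the tail convention here is an assumption):

>>> import numpy as np
>>> from scipy.stats import t
>>> scans, p_value = 180, 0.001
>>> dof = scans - 2
>>> t_crit = t.ppf(1 - p_value, dof)              # critical t for the given p
>>> r_value = t_crit / np.sqrt(dof + t_crit**2)   # corresponding r threshold
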
CPAC.network_centrality.calc_blocksize(timeseries, memory_allocated=None, include_full_matrix=False, sparsity_thresh=0.0)[source]

Method to calculate the block size used when computing the correlation matrix, based on the memory allocated by the user. By default, the block size is 1000 when no memory limit is specified.

If allocated memory is specified, the block size is calculated as the allocated memory minus the memory of the timeseries and centrality output, divided by the size of one correlation map; in other words, how many correlation maps can be held in memory simultaneously.

timeseries : numpy array
timeseries data: nvoxs x ntpts
memory_allocated : float
memory allocated in GB for degree centrality
include_full_matrix : boolean
Boolean indicating if we’re using the entire correlation matrix in RAM (needed during eigenvector centrality). Default is False
sparsity_thresh : float
a number between 0 and 1 representing the fraction of connections to keep during sparsity thresholding. Default is 0.0.
block_size : an integer
size of block for matrix calculation
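
The accounting described above reduces to simple arithmetic; an illustrative version assuming 8-byte doubles (the real function also handles the sparsity and full-matrix cases):

>>> nvoxs, ntpts = 60000, 180
>>> memory_allocated = 2.0                        # GB
>>> bytes_total = memory_allocated * 1024**3
>>> bytes_used = nvoxs * ntpts * 8 + nvoxs * 8    # timeseries + centrality output
>>> map_size = nvoxs * 8                          # one row of the correlation matrix
>>> block_size = int((bytes_total - bytes_used) // map_size)
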
CPAC.network_centrality.cluster_data(img, thr, xyz_a, k=26)[source]

Cluster the voxels in img that exceed the threshold thr, using the voxel coordinates xyz_a and a connectivity criterion of k neighbors (k=26 corresponds to full 3D connectivity).

CPAC.network_centrality.degree_centrality(corr_matrix, r_value, method, out=None)[source]

Calculate centrality for the rows in the corr_matrix using a specified correlation threshold. The centrality output can be binarized or weighted.

corr_matrix : numpy.ndarray
r_value : float
method : str
Can be 'binarize' or 'weighted'
out : numpy.ndarray (optional)
If specified, should have shape corr_matrix.shape[0]

out : numpy.ndarray
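
A usage sketch mirroring the eigenvector examples below, assuming degree_centrality is exposed from CPAC.network_centrality.core as the eigenvector functions are:

>>> import numpy as np
>>> from CPAC.network_centrality.core import degree_centrality
>>> corr = np.corrcoef(np.random.random((500, 100)))   # toy correlation matrix
>>> binarized = degree_centrality(corr, r_value=0.3, method='binarize')
>>> weighted = degree_centrality(corr, r_value=0.3, method='weighted')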

CPAC.network_centrality.eigenvector_centrality(corr_matrix, r_value=None, method=None, to_transform=True, ret_eigenvalue=False)[source]
>>> # Simulate Data
>>> import numpy as np
>>> ntpts = 100; nvoxs = 1000
>>> m = np.random.random((ntpts,nvoxs))
>>> mm = m.T.dot(m) # note that we need to generate the connectivity matrix here
>>> # Execute
>>> from CPAC.network_centrality.core import eigenvector_centrality
>>> eigenvector = eigenvector_centrality(mm)
CPAC.network_centrality.fast_eigenvector_centrality(m, maxiter=99, verbose=True)[source]

The output here is based on a transformed correlation matrix of m, where each element equals (1+r)/2 (mapping r from [-1, 1] into [0, 1]).

[1] Wink, A.M., de Munck, J.C., van der Werf, Y.D., van den Heuvel, O.A., Barkhof, F., 2012. Fast Eigenvector Centrality Mapping of Voxel-Wise Connectivity in Functional Magnetic Resonance Imaging: Implementation, Validation, and Interpretation. Brain Connectivity 2, 265–274.
>>> # Simulate Data
>>> import numpy as np
>>> ntpts = 100; nvoxs = 1000
>>> m = np.random.random((ntpts,nvoxs)) # note that we don't need to generate the connectivity matrix
>>> # Execute
>>> from CPAC.network_centrality.core import fast_eigenvector_centrality
>>> eigenvector = fast_eigenvector_centrality(m)