Pybraincompare

Semantic and computational comparison methods for brain imaging data, and visualization of outputs. Modules include:

compare

An example scatterplot image comparison, dynamically rendered with Python and d3 from two raw statistical brain maps and an atlas image; a scatterplot comparison demo is available. A new addition (beta) is a canvas-based scatterplot that can render more than 150K points.

ontology

This module lets you convert a triples data structure, defining relationships in an ontology, into an interactive d3 visualization; an ontology tree demo is available. A reverse inference tree is also in development.

network

This module supports visualization of functional connectivity data; a connectivity demo is available.

QA for Statistical Maps

This module will generate a web report for a list of statistical maps; a demo is available. Much work remains to be done! Please submit an issue if you have feedback.

This package is maintained by Vanessa Sochat at Stanford University, proud member of Poldracklab. Pybraincompare drives image comparison in NeuroVault, and published methods have also been included. Please submit bugs and feature requests on the GitHub repo.

Contents:

Installation

Manual Installation

git clone https://github.com/vsoch/pybraincompare
cd pybraincompare
sudo python setup.py install

pip Installation

pip install pybraincompare

Spatial Comparison and Visualization

Spatial image comparison means using a metric to derive a score that represents the similarity of two brain maps based on voxel values. Pybraincompare provides visualization methods (e.g., Similarity Search, Atlas SVG) as well as computational ones (Spatial Computation) to perform and visualize this task.
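As a minimal sketch of the underlying idea (plain numpy on made-up voxel values, not the pybraincompare API), a Pearson correlation over voxels inside a shared mask reduces to:

```python
import numpy as np

# Hypothetical voxel values for two maps, already restricted to a shared brain mask
map1 = np.array([0.5, 1.2, -0.3, 2.1, 0.0])
map2 = np.array([0.4, 1.0, -0.1, 1.8, 0.2])

# The spatial similarity score: Pearson correlation of the two voxel vectors
score = np.corrcoef(map1, map2)[0, 1]
print(round(score, 3))
```

In practice the mask guarantees both vectors cover the same voxels; pybraincompare's `calculate_correlation` (shown below) handles the masking for you.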

Atlas SVG

You can generate an orthogonal view svg image, colored by region, for a brain atlas. As an example, we will use the MNI atlas provided by the package. These are also the atlases that can be integrated into a `scatterplot comparison <http://vbmis.com/bmi/share/neurovault/scatter_atlas.html>`_, with points colored by the atlas label. The full example is shown below.

from pybraincompare.compare import atlas as Atlas
from pybraincompare.mr.datasets import get_mni_atlas

# Color based on atlas labels
atlases = get_mni_atlas(voxdims=["2"])
atlas = atlases["2"]

# We return both paths and entire svg file
atlas.svg["coronal"]
atlas.paths["coronal"]

# Save svg views to file
output_directory = "/home/vanessa/Desktop"
atlas.save_svg(output_directory) # default will save all views generated
atlas.save_svg(output_directory,views=["coronal"]) # specify a custom set

Scatterplot Comparison

You can generate an interactive scatterplot (with an optional atlas to color regions by); a demo is provided, along with this example as a `script <https://github.com/vsoch/pybraincompare/blob/master/example/make_scatter.py>`_. First import functions to get images and standard brains, generate atlas SVGs, view a static html file from your local machine, and resample images:

# Create a scatterplot from two brain images
from pybraincompare.mr.datasets import get_pair_images, get_mni_atlas
from pybraincompare.compare.mrutils import resample_images_ref, get_standard_brain, get_standard_mask, do_mask
from pybraincompare.compare.maths import calculate_correlation
from pybraincompare.compare import scatterplot, atlas as Atlas
from pybraincompare.template.visual import view
from nilearn.image import resample_img
import numpy
import nibabel

The first example shows comparison with nifti image input:

# SCATTERPLOT COMPARE ---- (with nifti input) ---------------------------------------------------

# Images that we want to compare - they must be in MNI space
image_names = ["image 1","image 2"]
images = get_pair_images(voxdims=["2","8"])

html_snippet,data_table = scatterplot.scatterplot_compare(images=images,
                                                 image_names=image_names,
                                                 corr_type="pearson")
view(html_snippet)

You may also need to use pybraincompare to resample images first; an example is provided below:

# RESAMPLING IMAGES -----------------------------------------------------------------------------

# If you use your own standard brain (arg reference) we recommend resampling to 8mm voxel
# Here you will get the resampled images and mask returned
reference = nibabel.load(get_standard_brain("FSL"))
images_resamp,ref_resamp = resample_images_ref(images,
                                           reference=reference,
                                           interpolation="continuous",
                                           resample_dim=[8,8,8])

Here is an example of an equivalent comparison, but using image vectors as input. This is ideal when you are saving reduced representations of your data. See pybraincompare.mr.transformation to generate such vectors.
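Conceptually, a reduced representation is just the within-mask voxel values flattened to a 1-D vector. The sketch below illustrates that idea with plain numpy and tiny hypothetical arrays; pybraincompare.mr.transformation provides the package's own implementation for real nifti images:

```python
import numpy as np

# Hypothetical 3D statistical map and binary brain mask (tiny arrays for illustration)
image_data = np.arange(27, dtype=float).reshape(3, 3, 3)
mask = np.zeros((3, 3, 3), dtype=bool)
mask[1, :, :] = True  # pretend the middle slab is "in brain"

# The reduced representation: within-mask voxels flattened to a 1-D vector
vector = image_data[mask]
print(vector.shape)  # one value per in-mask voxel
```

Two images masked with the same brain mask yield vectors of equal length, which is what makes the vector-input comparison below possible.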

# SCATTERPLOT COMPARE ---- (with vector input) ---------------------------------------------------

# We are also required to provide atlas labels, colors, and correlations, so we calculate those in advance
# pybraincompare comes with mni atlas at 2 resolutions, 2mm and 8mm
# 8mm is best resolution for rendering data in browser, 2mm is ideal for svg renderings
atlases = get_mni_atlas(voxdims=["8","2"])
atlas = atlases["8"]
atlas_rendering = atlases["2"]

This function will return a data frame with image vectors, colors, and labels. The images must already be registered and in the same space as the reference:

corrs_df = calculate_correlation(images=images_resamp,mask=ref_resamp,atlas=atlas,corr_type="pearson")

Option 1: canvas-based, no mouseover of points, renders hundreds of thousands of points. An example is provided. Note that we are specifying an output_directory, so the result files will be saved there; you can drop them into a web folder, or serve one locally to view (python -m SimpleHTTPServer 9999).

output_directory = "/home/vanessa/Desktop/test"
scatterplot.scatterplot_canvas(image_vector1=corrs_df.INPUT_DATA_ONE,
                          image_vector2=corrs_df.INPUT_DATA_TWO,
                          image_names=image_names,
                          atlas_vector = corrs_df.ATLAS_DATA,
                          atlas_labels=corrs_df.ATLAS_LABELS,
                          atlas_colors=corrs_df.ATLAS_COLORS,
                          output_directory=output_directory)

Option 2: d3-based, with mouseover of points, using a limited sampling of the images.

html_snippet,data_table = scatterplot.scatterplot_compare_vector(image_vector1=corrs_df.INPUT_DATA_ONE,
                                                             image_vector2=corrs_df.INPUT_DATA_TWO,
                                                             image_names=image_names,
                                                             atlas=atlas_rendering,
                                                             atlas_vector = corrs_df.ATLAS_DATA,
                                                             atlas_labels=corrs_df.ATLAS_LABELS,
                                                             atlas_colors=corrs_df.ATLAS_COLORS,
                                                             corr_type="pearson")
view(html_snippet)

You can also use your own atlas; here is how to generate it.

# CUSTOM ATLAS ---------------------------------------------------------------------------------

# You can specify a custom atlas, including a nifti file and xml file
atlas = Atlas.atlas(atlas_xml="MNI.xml", atlas_file="MNI.nii")

# Default slice views are "coronal", "axial", "sagittal"

Plot Histogram

These functions have not been properly developed; however, here is how to plot a histogram for an input image. This example is also provided as a script:

#!/usr/bin/python

from pybraincompare.report.histogram import plot_histogram
from pybraincompare.mr.datasets import get_pair_images

image = get_pair_images()[0]
plot_histogram(image)

Semantic Image Comparison

Semantic image comparison means comparing images by way of text data associated with them, for example metadata or, in the case of functional neuroimaging, an ontology like the Cognitive Atlas that associates images with contrasts and cognitive concepts. Pybraincompare has several methods for reverse inference that will be better documented with a publication; for now, here are examples to generate trees of ontology structures.

Ontology Tree

This function will generate an interactive d3 representation of a graph data structure; a demo and script are available.

#!/usr/bin/env python2

from pybraincompare.ontology.tree import named_ontology_tree_from_tsv, make_ontology_tree_d3
from pybraincompare.template.visual import view

# This defines a "mock" ontology for the openfmri proof of concept set - names should be unique
relationship_table = "data/cogatlas_annotate_triples.tsv"

# Create a data structure for d3 visualization and data analysis (no output json, specify with output_json=)
data_structure = named_ontology_tree_from_tsv(relationship_table)
html_snippet = make_ontology_tree_d3(data_structure)

# View in browser!
view(html_snippet)

Reverse Inference Tree

This function will generate an interactive d3 representation of a graph data structure with mouseover tags that show a reverse inference score for a node, demo and a script are available.

#!/usr/bin/env python2

import pandas as pd
from pybraincompare.ontology.tree import named_ontology_tree_from_tsv, make_reverse_inference_tree_d3
from pybraincompare.template.visual import view

# This defines a "mock" ontology for the openfmri proof of concept set - names should be unique
relationship_table = "data/cogatlas_annotate_triples.tsv"

# We want to prepare a dictionary of meta data to correspond to some nodes
reverse_inference_df = pd.read_csv("data/reverse_inference_scores.tsv",sep="\t")
reverse_inferences = dict()
for row in reverse_inference_df.iterrows():
    reverse_inferences[row[1].NODE_GROUP] = row[1].REVERSE_INFERENCE_SCORE

# Create a data structure for d3 visualization and data analysis (no output json, specify with output_json=)
data_structure = named_ontology_tree_from_tsv(relationship_table, meta_data = reverse_inferences)
html_snippet = make_reverse_inference_tree_d3(data_structure)

# View in browser!
view(html_snippet)

Additional Tools

QA Report

Pybraincompare can also generate a QA report for a set of images. This is useful to quickly run QA, then serve the result or send it to someone to easily see and understand the data. An example is provided, along with this script:

#!/usr/bin/env python2

from glob import glob
from pybraincompare.report.webreport import run_qa

# Here is a set of 144 images, these are 9 openfmri studies in Neurovault, resampled to MNI 2mm
mrs = glob("/home/vanessa/Documents/Work/BRAINMETA/IMAGE_COMPARISON/mr/resampled/*.nii.gz")

# Run quality analysis
outdir = "/home/vanessa/Desktop/test"
run_qa(mrs,software="FSL",html_dir=outdir,voxdim=[2,2,2],outlier_sds=6,investigator="Vanessa",nonzero_thresh=0.25)

# For large datasets (where memory is an issue) you can set calculate_mean_histogram=False

Connectogram

Generate a connectogram d3 visualization from a square connectivity matrix; a demo and script are available.

from pybraincompare.compare.network import connectogram
from pybraincompare.template.visual import view
import pandas

# A square matrix, tab separated file, with row and column names corresponding to node names
connectivity_matrix = "data/parcel_matrix.tsv"

parcel_info = pandas.read_csv("data/parcels.csv")
networks = list(parcel_info["Community"])
# should be in format "L-1" for hemisphere (L or R) and group number (1..N)
groups = ["%s-%s" %(parcel_info["Hem"][x],parcel_info["ID"][x]) for x in range(0,parcel_info.shape[0])]

# A threshold value for the connectivity matrix determines neighbors; e.g., a value of .95 means we only keep the top 5% of positive and negative connections, and the user can explore this top percent
threshold = 0.99
html_snippet = connectogram(matrix_file=connectivity_matrix,groups=groups,threshold=threshold,network_names=networks)

view(html_snippet)
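The thresholding described in the comment above (keeping only the strongest fraction of connections) can be sketched with plain numpy. This illustrates the idea on a random matrix and is not the connectogram function's internal code:

```python
import numpy as np

np.random.seed(0)
matrix = np.random.randn(10, 10)  # hypothetical connectivity matrix

# threshold=0.99 means keep only the top 1% strongest connections (by absolute value)
threshold = 0.99
cutoff = np.quantile(np.abs(matrix), threshold)
kept = np.abs(matrix) >= cutoff  # boolean mask of surviving edges
print(int(kept.sum()))
```

A higher threshold prunes more edges, which keeps the rendered connectogram legible.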

pybraincompare

pybraincompare package

Subpackages
pybraincompare.compare package
Submodules
pybraincompare.compare.atlas module
pybraincompare.compare.maths module
pybraincompare.compare.mrutils module
pybraincompare.compare.network module
pybraincompare.compare.scatterplot module
pybraincompare.compare.search module
Module contents
pybraincompare.mr package
Submodules
pybraincompare.mr.datasets module
pybraincompare.mr.transformation module
Module contents
pybraincompare.ontology package
Submodules
pybraincompare.ontology.export module

export.py: part of pybraincompare package. Functions to export images and data.

pybraincompare.ontology.export.priors_to_brain_image(priors_df, output_name, mask_image)[source]

Function to save brain images for each level of priors

pybraincompare.ontology.graph module
pybraincompare.ontology.inference module
pybraincompare.ontology.tree module
Module contents
pybraincompare.report package
Submodules
pybraincompare.report.animate module
pybraincompare.report.animate.animate_figure()[source]

Animate subplots from current matplotlib figure

pybraincompare.report.colors module

colors.py: part of pybraincompare package. Color utilities.

pybraincompare.report.colors.get_colors(N, color_format='decimal')[source]

Get colors that I like

pybraincompare.report.colors.peterson_roi_labels(colors=True)[source]

Colors for Peterson ROI labels

pybraincompare.report.colors.random_colors(N)[source]

Generate N random colors

pybraincompare.report.histogram module
pybraincompare.report.image module

image.py: part of pybraincompare package. Functions for static images.

pybraincompare.report.image.make_anat_image(nifti_file, png_img_file=None)[source]

Make anat image

pybraincompare.report.image.make_glassbrain_image(nifti_file, png_img_file=None)[source]

Make glassbrain image, optional save image to png file (not vector)

pybraincompare.report.image.make_roi_image(nifti_file, png_img_file=None)[source]

Make roi (mask) image

pybraincompare.report.image.make_stat_image(nifti_file, png_img_file=None)[source]

Make statmap image

pybraincompare.report.image.plot_vline(cur_val, label, ax)[source]

from chrisfilo https://github.com/chrisfilo/mriqc

pybraincompare.report.qa module

qa.py: part of pybraincompare package. Functions to check quality of statistical maps.

pybraincompare.report.qa.central_tendency(data)[source]

Central tendency: standard measures of central tendency and variance

pybraincompare.report.qa.count_voxels(masked_in, masked_out)[source]

Count voxels in and outside the mask

pybraincompare.report.qa.get_percent_nonzero(masked_in)[source]

Estimate whether a map is thresholded: we check whether the number of zero voxels exceeds some percentage.

pybraincompare.report.qa.get_voxel_range(nii_obj)[source]

Get the maximum and minimum value in the image

pybraincompare.report.qa.header_metrics(image)[source]

Metrics: Extract metrics from the header

pybraincompare.report.qa.is_only_positive(nii_obj)[source]

Are there only positive values?

pybraincompare.report.qa.is_thresholded(nii_obj, brain_mask, threshold=0.95)[source]

Is a nifti image thresholded? Adopted from chrisfilo for neurovault Threshold should be the percentage of voxels we want “good” Returns True/False and ratio of good voxels

pybraincompare.report.qa.outliers(masked_data, n_std=6)[source]

Find outliers (e.g., more than ~6 SD from the mean, maybe less depending on the action).

pybraincompare.report.qa.t_to_z(mr, dof)[source]
pybraincompare.report.webreport module
Module contents
pybraincompare.template package
Submodules
pybraincompare.template.futils module
pybraincompare.template.templates module
pybraincompare.template.visual module
Module contents
pybraincompare.testing package
Submodules
pybraincompare.testing.test_connectogram module
pybraincompare.testing.test_correlation module
pybraincompare.testing.test_histogram module
pybraincompare.testing.test_masking module
pybraincompare.testing.test_scatterplot_compare module
pybraincompare.testing.test_transformation module
Module contents
Module contents
