Colormap
ROIs
Data Opacity
Datasets

Welcome to the pycortex WebGL MRI viewer!

This viewer shows how information about thousands of object and action categories is represented across human neocortex. The data come from brain activity measurements made using fMRI while a participant watched hours of movie trailers. Computational modeling procedures were used to determine how 1705 distinct object and action categories are represented in the brain. Further details on this work can be found in this video or in Huth, Nishimoto, Vu and Gallant (2012), "A continuous semantic space describes the representation of thousands of object and action categories across the human brain", Neuron.
If you have problems, email us.

Warning: This page uses WebGL, an experimental web technology. It will not work in all browsers or on all platforms. For the best experience we recommend using Google Chrome, maximizing the size of your browser window, and closing other running applications (this viewer takes quite a bit of RAM).

Instructions: On the left is a 3D model of one participant's cortical surface. Use the left mouse button to rotate, right mouse button to zoom, and middle mouse button to pan. The buttons at the bottom change the way the surface is shown. When viewing the 2D flat surface, use the left mouse button to pan.
The colors on the surface indicate which object and action categories are represented in each part of the brain. These colors match the colors of the categories on the right.
On the right is a tree showing 1705 object and action categories. Mouse over a category to see its name and definition. Click on a category to see how it is represented across the cortical surface. To return to the original colorful view, mouse over the bar at the very top of the brain viewer window, and then click "Semantic Space" (under "Datasets").
Back on the brain viewer, click on a location in the brain to show which categories are represented there. This view also shows how well our model can predict the responses at that location (the prediction performance, or correlation r). To return the tree to the original colorful view, click on "Show Semantic Space" under the category tree.

Caveat: All of the category-specific responses shown here are based on blood-oxygen-level-dependent (BOLD) responses recorded while participants watched 2 hours of natural movies. BOLD was measured using fMRI, which is very noisy and which only reveals a small fraction of the information represented in the brain. Please do not try to read too much into the category responses for a single brain area.

Hints: The 3D viewer will run much faster if the ROI labels are disabled. To do this, mouse over the bar at the very top of the brain viewer window and uncheck the box next to "Labels".
While the viewer is showing the inflated or flattened cortex, a double left click and hold will show the same location on the fiducial (folded) 3D surface. Releasing the mouse button will return to the inflated or flattened view.

Credits: The pycortex WebGL MRI viewer was written by James Gao. All data are copyright UC Regents and Gallant lab. WordNet is copyright the trustees of Princeton University. This software is copyright UC Regents (see source of this page for details). The software makes use of jQuery, Three.js, and OpenCTM.
The work was supported by grants from the National Eye Institute (EY019684) and from the Center for Science of Information (CSoI), an NSF Science and Technology Center, under grant agreement CCF-0939370.

Loading data...
Loading brain...
Loading...
h: Toggle this box
i: Toggle intro msg
+: Next Dataset
-: Prev Dataset
Left mouse: Rotate
Middle mouse: Pan
Right mouse: Zoom