<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Publications on Gallant Lab</title><link>https://gallantlab.org/publications/</link><description>Recent content in Publications on Gallant Lab</description><generator>Hugo -- gohugo.io</generator><language>en-us</language><copyright>© 2026</copyright><atom:link href="https://gallantlab.org/publications/index.xml" rel="self" type="application/rss+xml"/><item><title>Bilingual language processing relies on shared semantic representations that are modulated by each language (Chen et al., PNAS, 2026)</title><link>https://gallantlab.org/publications/2026-bilingual-language-processing-relies/</link><pubDate>Tue, 24 Feb 2026 00:00:00 -0800</pubDate><guid>https://gallantlab.org/publications/2026-bilingual-language-processing-relies/</guid><description>We performed fMRI scans of English-Chinese bilinguals while they read natural narratives in each language. Semantic representations are largely shared between the two languages, but each language systematically modulates how the same meaning is represented in the brain.</description></item><item><title>Representations of semantic relations in the human cerebral cortex (Chen et al., bioRxiv preprint, 2026)</title><link>https://gallantlab.org/publications/2026-representations-semantic-relations-human/</link><pubDate>Thu, 19 Feb 2026 00:00:00 -0800</pubDate><guid>https://gallantlab.org/publications/2026-representations-semantic-relations-human/</guid><description>Little is known about how semantic relations between concepts are encoded in the human brain. We collected fMRI data while participants answered relation-verification questions about over 1000 concept pairs (e.g., bicycle has-part wheel), then fit voxelwise encoding models to identify where and how each relation is represented. 
Semantic relations and concepts are encoded in the same bilateral temporal, parietal, and prefrontal regions; most voxels are preferentially selective for a single relation, and the cortical organization of preferred relations is consistent across participants.</description></item><item><title>Visual semantic tuning across the cortex shifts between tasks (Zhang and Gallant, bioRxiv preprint, 2026)</title><link>https://gallantlab.org/publications/2026-visual-semantic-tuning-cortex/</link><pubDate>Thu, 19 Feb 2026 00:00:00 -0800</pubDate><guid>https://gallantlab.org/publications/2026-visual-semantic-tuning-cortex/</guid><description>Attention modulates brain representations to prioritize task-relevant information, but how visual semantic tuning shifts between naturalistic tasks is not well understood. We used voxelwise encoding models to compare visual semantic representations across the cortex in participants who watched movies versus participants who navigated a virtual city. Visual semantic tuning differs substantially between tasks—during navigation, tuning shifts increase the representation of task-relevant objects, with the strongest shifts in place-selective and visual attention regions and the weakest in human-selective regions.</description></item><item><title>A map of the cortical functional network mediating naturalistic navigation (Zhang, Meschke, Gallant, bioRxiv preprint, 2025)</title><link>https://gallantlab.org/publications/2025-map-cortical-functional-network/</link><pubDate>Wed, 17 Dec 2025 00:00:00 -0800</pubDate><guid>https://gallantlab.org/publications/2025-map-cortical-functional-network/</guid><description>Natural navigation requires close coordination of perception, planning, and motor actions. We used fMRI to record brain activity while participants performed a taxi driver task in VR, then fit high-dimensional voxelwise encoding models to the data. 
Navigation is supported by a network of 11 functionally distinct cortical regions that transform perceptual inputs through decision-making processes to produce action outputs.</description></item><item><title>Disentangling Superpositions: Interpretable Brain Encoding Model with Sparse Concept Atoms (Zeng and Gallant, NeurIPS, 2025)</title><link>https://gallantlab.org/publications/2025-disentangling-superpositions-interpretable-brain/</link><pubDate>Thu, 13 Nov 2025 00:00:00 -0800</pubDate><guid>https://gallantlab.org/publications/2025-disentangling-superpositions-interpretable-brain/</guid><description>Dense ANN word embeddings entangle multiple concepts in each feature, making it difficult to interpret encoding model maps. We use a Sparse Concept Encoding Model to produce a feature space where each dimension corresponds to an interpretable concept. The resulting model matches the prediction performance of dense models while substantially enhancing interpretability.</description></item><item><title>Encoding models in functional magnetic resonance imaging: the Voxelwise Encoding Model framework (Visconti di Oleggio Castello, Deniz, et al., PsyArXiv preprint, 2025)</title><link>https://gallantlab.org/publications/2025-encoding-models-functional-magnetic/</link><pubDate>Wed, 17 Sep 2025 00:00:00 -0700</pubDate><guid>https://gallantlab.org/publications/2025-encoding-models-functional-magnetic/</guid><description>This review paper provides the first comprehensive guide to the Voxelwise Encoding Model (VEM) framework, an approach for fitting encoding models to fMRI data. VEM is currently the most sensitive and powerful approach available for modeling fMRI data. It can be used to fit dozens of distinct models simultaneously, each model having up to several thousand distinct features. 
The Voxelwise Encoding Model framework also conforms to best practices in data science, maximizing the sensitivity, reliability, and generalizability of the resulting models.</description></item><item><title>Individual differences shape conceptual representation in the brain (Visconti di Oleggio Castello et al., bioRxiv preprint, 2025)</title><link>https://gallantlab.org/publications/2025-individual-differences-shape-conceptual/</link><pubDate>Fri, 22 Aug 2025 00:00:00 -0700</pubDate><guid>https://gallantlab.org/publications/2025-individual-differences-shape-conceptual/</guid><description>We developed a new computational framework to measure and interpret individual differences in functional brain maps. We found robust individual differences in conceptual representation that reflect cognitive traits unique to each person. This framework enables new precision neuroscience approaches to the study of complex functional representations.</description></item><item><title>The Voxelwise Encoding Model framework: A tutorial introduction to fitting encoding models to fMRI data (Dupré la Tour et al., Imaging Neuroscience, 2025)</title><link>https://gallantlab.org/publications/2025-voxelwise-encoding-model-framework/</link><pubDate>Fri, 09 May 2025 00:00:00 -0700</pubDate><guid>https://gallantlab.org/publications/2025-voxelwise-encoding-model-framework/</guid><description>This tutorial provides practical guidance on using the Voxelwise Encoding Model (VEM) framework for functional brain mapping. 
It includes hands-on examples with public datasets, code repositories, and interactive notebooks to make this powerful methodology accessible to researchers at all levels.</description></item><item><title>The cortical representation of language timescales is shared between reading and listening (Chen et al., Communications Biology, 2024)</title><link>https://gallantlab.org/publications/2024-cortical-representation-language-timescales/</link><pubDate>Mon, 01 Jul 2024 00:00:00 -0700</pubDate><guid>https://gallantlab.org/publications/2024-cortical-representation-language-timescales/</guid><description>Language comprehension involves integrating low-level sensory inputs into a hierarchy of increasingly high-level features. To recover this hierarchy we mapped the intrinsic timescale of language representation across the cerebral cortex during listening and reading. We find that the timescale of representation is organized similarly for the two modalities.</description></item><item><title>Phonemic segmentation of narrative speech in human cerebral cortex (Gong et al., Nature Communications, 2023)</title><link>https://gallantlab.org/publications/2023-phonemic-segmentation-narrative-speech/</link><pubDate>Thu, 29 Jun 2023 00:00:00 -0700</pubDate><guid>https://gallantlab.org/publications/2023-phonemic-segmentation-narrative-speech/</guid><description>This fMRI study identifies the brain representation of single phonemes, diphones, and triphones during natural speech. Many regions in and around auditory cortex represent phonemes, and we identify regions where phonemic processing and lexical retrieval are intertwined. 
(Collaboration with the &lt;a href='http://theunissen.berkeley.edu/'&gt;Theunissen lab&lt;/a&gt; at UCB.)</description></item><item><title>Semantic representations during language comprehension are affected by context (Deniz et al., Journal of Neuroscience, 2023)</title><link>https://gallantlab.org/publications/2023-semantic-representations-language-comprehension/</link><pubDate>Wed, 26 Apr 2023 00:00:00 -0700</pubDate><guid>https://gallantlab.org/publications/2023-semantic-representations-language-comprehension/</guid><description>Most neuroimaging studies of meaning use isolated words and sentences with little context. We find that increasing context improves the quality of neuroimaging data and changes where and how semantic information is represented in the brain. Findings from studies using out-of-context stimuli may not generalize to natural language used in daily life.</description></item><item><title>Feature-space selection with banded ridge regression (Dupré la Tour et al., Neuroimage, 2022)</title><link>https://gallantlab.org/publications/2022-feature-space-selection-banded/</link><pubDate>Thu, 01 Dec 2022 00:00:00 -0800</pubDate><guid>https://gallantlab.org/publications/2022-feature-space-selection-banded/</guid><description>Encoding models identify the information represented in brain recordings, but fitting multiple models simultaneously presents several challenges. This paper describes how banded ridge regression can be used to solve these problems. Furthermore, several methods are proposed to address the computational challenge of fitting banded ridge regressions on large numbers of voxels and feature spaces. 
All implementations are released in an open-source Python package called Himalaya.</description></item><item><title>Visual and linguistic semantic representations are aligned at the border of human visual cortex (Popham et al., Nature Neuroscience, 2021)</title><link>https://gallantlab.org/publications/2021-visual-linguistic-semantic-representations/</link><pubDate>Thu, 28 Oct 2021 00:00:00 -0700</pubDate><guid>https://gallantlab.org/publications/2021-visual-linguistic-semantic-representations/</guid><description>We examined the spatial organization of visual and amodal semantic functional maps. The pattern of semantic selectivity in these two networks corresponds along the boundary of visual cortex: for categories represented posterior to the boundary, the same categories are represented linguistically on the anterior side. These two networks are smoothly joined to form one contiguous map.</description></item><item><title>Design of complex neuroscience experiments using mixed-integer linear programming (Slivkoff and Gallant, Neuron, 2021)</title><link>https://gallantlab.org/publications/2021-design-complex-neuroscience-experiments/</link><pubDate>Wed, 05 May 2021 00:00:00 -0700</pubDate><guid>https://gallantlab.org/publications/2021-design-complex-neuroscience-experiments/</guid><description>This tutorial and primer reviews how mixed integer linear programming can be used to optimize the design of complex experiments using many different variables. The approach is particularly useful when designing complex fMRI experiments&amp;ndash;such as question answering studies&amp;ndash;that aim to manipulate and probe many dimensions simultaneously.</description></item><item><title>Voxel-based state space modeling recovers task-related cognitive states in naturalistic fMRI experiments (Zhang et al., Front. 
Neuro., 2021)</title><link>https://gallantlab.org/publications/2021-voxel-based-state-space/</link><pubDate>Sat, 01 May 2021 00:00:00 -0700</pubDate><guid>https://gallantlab.org/publications/2021-voxel-based-state-space/</guid><description>We present a voxel-based state space modeling method for recovering task-related state spaces from fMRI data. Applied to a visual attention task and a video game task, each task induces distinct brain states that can be embedded in a low-dimensional state space that reflects task parameters.</description></item><item><title>The representation of semantic information across human cerebral cortex during listening versus reading is invariant to stimulus modality (Deniz et al., J. Neurosci., 2019)</title><link>https://gallantlab.org/publications/2019-representation-semantic-information-human/</link><pubDate>Thu, 05 Sep 2019 00:00:00 -0700</pubDate><guid>https://gallantlab.org/publications/2019-representation-semantic-information-human/</guid><description>We show that although the representation of semantic information in the human brain is quite complex, the semantic representations evoked by listening versus reading are almost identical. The representation of language semantics is independent of the sensory modality through which the semantic information is received.</description></item><item><title>Voxelwise encoding models with non-spherical multivariate normal priors (Nunez-Elizalde, Huth &amp; Gallant, NeuroImage, 2019)</title><link>https://gallantlab.org/publications/2019-voxelwise-encoding-models-non/</link><pubDate>Thu, 15 Aug 2019 00:00:00 -0700</pubDate><guid>https://gallantlab.org/publications/2019-voxelwise-encoding-models-non/</guid><description>Ridge regression assumes a spherical Gaussian prior with equal variance for all model parameters, but this is not always appropriate. This paper shows how non-spherical priors via Tikhonov regression can improve encoding models. 
A key application is banded ridge regression, which assigns a separate regularization parameter to each feature space and provides substantially better prediction accuracy when combining multiple feature spaces.</description></item><item><title>Human scene-selective areas represent 3D configurations of surfaces (Lescroart et al., Neuron, 2019)</title><link>https://gallantlab.org/publications/2019-human-scene-selective-areas/</link><pubDate>Wed, 02 Jan 2019 00:00:00 -0800</pubDate><guid>https://gallantlab.org/publications/2019-human-scene-selective-areas/</guid><description>It has been argued that scene-selective areas in the human brain represent both the 3D structure of the local visual environment and low-level 2D features that provide cues for 3D structure. To evaluate these hypotheses we developed an encoding model of 3D scene structure and tested it against a model of low-level 2D features. We fit the models to fMRI data recorded while subjects viewed visual scenes. Scene-selective areas represent the distance to and orientation of large surfaces. The most important dimensions of 3D structure are distance and openness.</description></item><item><title>The hierarchical cortical organization of human speech processing (de Heer, Huth, Griffiths, Gallant &amp; Theunissen, J. Neurosci., 2017)</title><link>https://gallantlab.org/publications/2017-hierarchical-cortical-organization-human/</link><pubDate>Wed, 05 Jul 2017 00:00:00 -0700</pubDate><guid>https://gallantlab.org/publications/2017-hierarchical-cortical-organization-human/</guid><description>We used voxelwise encoding models and variance partitioning to investigate how the brain transforms speech sounds into meaning. Speech processing involves a cortical hierarchy: spectral features in A1, articulatory features in STG, and semantic features in STS and beyond. 
Both hemispheres are equally involved, and semantic representations appear surprisingly early in the hierarchy.</description></item><item><title>Eye movement-invariant representations in the human visual system (Nishimoto, Huth, Bilenko &amp; Gallant, Journal of Vision, 2017)</title><link>https://gallantlab.org/publications/2017-eye-movement-invariant-representations/</link><pubDate>Sun, 01 Jan 2017 00:00:00 -0800</pubDate><guid>https://gallantlab.org/publications/2017-eye-movement-invariant-representations/</guid><description>Visual representations must be robust to eye movements, but the degree of eye movement invariance across the visual hierarchy is not well understood. We used fMRI to compare brain activity while subjects watched natural movies during fixation and free viewing. Responses in ventral temporal areas are largely invariant to eye movements, while early visual areas are strongly affected. These results suggest that the ventral temporal areas maintain a stable representation of the visual world during natural vision.</description></item><item><title>Natural speech reveals the semantic maps that tile human cerebral cortex (Huth et al., Nature, 2016)</title><link>https://gallantlab.org/publications/2016-natural-speech-reveals-semantic/</link><pubDate>Wed, 27 Apr 2016 00:00:00 -0700</pubDate><guid>https://gallantlab.org/publications/2016-natural-speech-reveals-semantic/</guid><description>We collected fMRI while subjects listened to narrative stories and recovered detailed lexical-semantic maps by voxelwise modeling. The semantic system is organized into intricate patterns that are consistent across individuals. Most areas represent information about groups of related concepts, and our semantic atlas shows which concepts are represented in each area.</description></item></channel></rss>