The Human Brain Tracks Speech More Closely in Time Than Other Sounds
Sep 24, 2020
The material below summarizes the article Dynamic Time-Locking Mechanism in the Cortical Representation of Spoken Words, published on June 8, 2020, in eNeuro and authored by Ali Faisal, Anni Nora, Hanna Renvall, Jaeho Seol, Elia Formisano, and Riitta Salmelin.
Highlights
- Computational modeling of cortical responses highlights the importance of accurate temporal tracking of speech in the auditory cortices.
- This time-locked encoding mechanism is likely pivotal for transforming acoustic features into linguistic representations.
- No similar relevance of time-locked encoding was observed for nonspeech sounds, including temporally variable human-made sounds such as laughter.
Access to the full article is available to SfN members.