Desire and Prediction: How Different Dopamine Signals Help Us Get What We Want
- Featured in: SfN Journals: Research Article Summaries
Material below summarizes the article Differential Dopamine Release Dynamics in the Nucleus Accumbens Core and Shell Reveal Complementary Signals for Error Prediction and Incentive Motivation, published on August 19, 2015, in JNeurosci and authored by Michael P. Saddoris, Fabio Cacciapaglia, R. Mark Wightman, and Regina M. Carelli.
Despite dopamine being one of the most actively studied neurotransmitters in the brain, there is surprisingly little consensus on what specifically this molecule contributes to behavior in normal animals.
One of the most dominant theories of dopamine signaling in the last 20 years has been based on the seminal work of Wolfram Schultz, whose experiments with dopamine neurons demonstrated a critical link between the firing patterns of dopamine neurons and learning.
In those studies, Schultz recorded the activity of dopamine neurons in the midbrain while animals learned that a cue predicted a reward, such as a sip of juice. At first the dopamine cells fired only at reward delivery, but with learning, the firing shifted to the cue. That is, the dopamine cells were making a prediction about the future, so when those expectations were violated (for example, when no juice was delivered), the cells briefly paused their firing to signal the omitted event. However, when the outcome was exactly as predicted, there was no "error," so the cells simply continued firing at their baseline rate. This Prediction Error (PE) model has been highly influential and is considered a cellular-level mechanism by which dopamine supports learning.
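The PE account described above can be illustrated with a minimal temporal-difference (TD) learning simulation. This is a generic textbook-style sketch, not the authors' model: the five-state cue-to-reward chain, the learning rate, and the trial count are all illustrative assumptions. It shows the three signatures from Schultz's work: early in training the error occurs at reward delivery, after training it shifts to the cue, and omitting an expected reward produces a negative error.

```python
# Toy TD(0) simulation of a reward prediction error (PE) signal.
# All parameters are illustrative assumptions, not values from the article.

ALPHA = 0.1  # learning rate (hypothetical)
N = 5        # states in the chain: cue at index 0, reward after index 4

def run_trial(V, reward=1.0):
    """Run one trial over the state chain; return (cue PE, reward PE)."""
    # The cue arrives unpredictably, so the pre-cue value is fixed at 0 and
    # the TD error at cue onset equals the learned value of the cue state.
    cue_pe = V[0]
    # Propagate value backward through the chain of intermediate states.
    for t in range(N - 1):
        delta = V[t + 1] - V[t]  # prediction error at each transition
        V[t] += ALPHA * delta
    # PE at the time of (possible) reward delivery.
    reward_pe = reward - V[-1]
    V[-1] += ALPHA * reward_pe
    return cue_pe, reward_pe

V = [0.0] * N
early_cue, early_rew = run_trial(V)   # first trial: (0.0, 1.0), PE at reward
for _ in range(1000):                 # training
    run_trial(V)
late_cue, late_rew = run_trial(V)     # PE has shifted to the cue
omit_cue, omit_rew = run_trial(V, reward=0.0)  # omission: negative PE
```

In this sketch, the "firing shift" falls out of the update rule alone: as the value estimates converge, the surprise (and hence the PE) migrates from the reward to the earliest reliable predictor.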
In parallel, another group of researchers including Kent Berridge and Terry Robinson demonstrated that a lot of learning could be accomplished in the absence of dopamine.
For example, rats completely lacking dopamine due to either lesions or genetic alterations could nonetheless learn about the value of different foods, alter their preferences when the value of those foods changed, and learn preferences for contexts where drugs were available. If dopamine were essential for learning, they argued, why were animals so capable of learning when no dopamine was present?
Instead, they argued that dopamine played an important role in motivation in a model they named Incentive Salience (IS). According to IS, the role of dopamine is to endow stimuli with motivational properties. That is, the dopamine signal reflects the salience of stimuli in the environment, and is directly related to the motivational and conditioned approach behavior generated by these stimuli.
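By way of contrast with the PE sketch, the IS account can be caricatured with a toy signal in which the response to each stimulus scales with its proximity to reward and with the animal's current motivational state. The function, its parameters, and the salience rule below are purely hypothetical illustrations, not anything measured or proposed in the article.

```python
# Hypothetical incentive-salience (IS) style signal: dopamine release to each
# stimulus tracks salience and motivation rather than prediction error.

def is_signal(stimulus_index, n_stimuli, motivation):
    """Toy IS-style response to the i-th stimulus in a cue chain.

    Stimuli closer in time to reward (higher index) are assumed more salient,
    and the whole signal scales with motivational state (0 = sated, 1 = hungry).
    """
    proximity = (stimulus_index + 1) / n_stimuli  # later cues more salient
    return proximity * motivation

# A hungry animal responds to every cue in a 3-cue chain, most to the last...
hungry = [is_signal(i, 3, motivation=1.0) for i in range(3)]
# ...while a sated animal shows uniformly reduced signals across the chain.
sated = [is_signal(i, 3, motivation=0.4) for i in range(3)]
```

Unlike the PE sketch, this signal never goes silent for fully predicted events, and it shrinks as motivation wanes, which is the pattern the article links to the accumbens shell.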
In an attempt to resolve whether dopamine was more critical for PE or IS, we used a multiple-cue chain of stimuli to see how dopamine encoded these events in rats. PE models suggest that dopamine should only signal at the beginning of the chain, as all the fully-predicted stimuli — subsequent events and rewards — should elicit minimal changes in dopamine release.
In contrast, we predicted that dopamine related to IS signaling should encode all the events in the chain, especially the highly-salient stimuli that are closest in time to the reward delivery. To test this, we recorded real-time dopamine release using fast scan cyclic voltammetry in the nucleus accumbens, one of the richest targets of dopamine in the brain, which is thought to be involved with both learning and drug addiction. Further, we recorded from different parts of the accumbens — the shell and core — that receive inputs from different circuits in the brain.
We found that dopamine release in the core was highly consistent with PE models, while dopamine in the shell corresponded with IS-type encoding. When we analyzed the data based on motivation, we saw that dopamine in the shell, but not the core, decreased as the animal consumed more food and thus became less motivated to engage in the task. In contrast, when we intentionally omitted the expected food reward at the end of the chain, dopamine signals in the core, but not the shell, showed a brief pause at the time the pellet delivery was expected (i.e., a negative prediction error), followed by a decrease in the amount of dopamine released during the first part of the chain.
Notably, we would likely not have seen these effects had we used a single cue to signal reward delivery. The more complex behavioral design revealed that the dopamine signal is itself complex, varying the type of information it encodes based on where in the nucleus accumbens the dopamine is released. In other words, there is no single "dopamine signal" in the brain, but multiple parallel signals that collaboratively help animals encode various aspects of valuable information from their environment.
These findings suggest that the debate between PE and IS may hinge largely on where in the brain one looks and which circuits a task engages. Researchers in both camps have strong and compelling data suggesting that dopamine signaling is necessary and sufficient for either PE- or IS-linked behaviors.
If that is the case, then the debate may not be about whether dopamine encodes PE or IS information, but rather about the conditions under which PE or IS information controls the animal's behavioral response. This subtle but important distinction may help guide future studies of the neural circuits of learning and motivation, and will likely provide critical insights into future treatments for addiction.
Visit JNeurosci to read the original article and explore other content. Read other summaries of JNeurosci and eNeuro papers in the Neuronline collection SfN Journals: Research Article Summaries.
Differential Dopamine Release Dynamics in the Nucleus Accumbens Core and Shell Reveal Complementary Signals for Error Prediction and Incentive Motivation. Michael P. Saddoris, Fabio Cacciapaglia, R. Mark Wightman, Regina M. Carelli. The Journal of Neuroscience August 2015, 35(33): 11572-11582; DOI: 10.1523/JNEUROSCI.2344-15.2015