13 ml) without any delay after it fixated the correct target. The temporally discounted value of the reward from target x is denoted as DV(Ax, Dx), where

Ax and Dx indicate the magnitude and delay of the reward from target x. In the model used to analyze the animal's choices, the probability that the animal would choose TS was given by the logistic function of the difference in the temporally discounted values for the two targets, as follows:

p(TS) = σ[β{DV(ATS, DTS) − DV(ATL, DTL)}],

where the function σ[z] = [1 + exp(−z)]^−1 corresponds to the logistic transformation, and β is the inverse temperature parameter. The temporally discounted value was determined using a hyperbolic discount function,

DV(Ax, Dx) = Ax / (1 + kDx),

or an exponential discount function,

DV(Ax, Dx) = Ax exp(−kDx),

where the parameter k determines the steepness of the discount function. The model parameters (k and β) were estimated using a maximum likelihood procedure as in the previous studies
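As a sketch of this procedure, the discount functions and the logistic choice model can be combined into a negative log-likelihood and minimized numerically. All variable names, starting values, and the simulated task parameters below are illustrative assumptions, not values from the study:

```python
import numpy as np
from scipy.optimize import minimize

def discounted_value(A, D, k, kind="hyperbolic"):
    """Temporally discounted value of a reward of magnitude A delayed by D."""
    if kind == "hyperbolic":
        return A / (1.0 + k * D)
    return A * np.exp(-k * D)  # exponential discounting

def p_ts(A_ts, D_ts, A_tl, D_tl, k, beta, kind="hyperbolic"):
    """Logistic probability of choosing the small-reward target TS."""
    z = beta * (discounted_value(A_ts, D_ts, k, kind)
                - discounted_value(A_tl, D_tl, k, kind))
    return 1.0 / (1.0 + np.exp(-z))

def neg_log_likelihood(params, A_ts, D_ts, A_tl, D_tl, chose_ts):
    k, beta = params
    if k <= 0 or beta <= 0:
        return np.inf  # keep the search in the valid parameter region
    p = np.clip(p_ts(A_ts, D_ts, A_tl, D_tl, k, beta), 1e-10, 1 - 1e-10)
    return -np.sum(chose_ts * np.log(p) + (1 - chose_ts) * np.log(1 - p))

# Hypothetical example: recover k and beta from simulated choices
rng = np.random.default_rng(0)
n = 2000
A_ts, A_tl = np.full(n, 1.0), np.full(n, 3.0)     # small vs. large magnitude
D_ts, D_tl = np.zeros(n), rng.integers(1, 10, n)  # TS immediate, TL delayed
true_k, true_beta = 0.25, 2.0
chose_ts = (rng.random(n)
            < p_ts(A_ts, D_ts, A_tl, D_tl, true_k, true_beta)).astype(float)

fit = minimize(neg_log_likelihood, x0=[0.1, 1.0],
               args=(A_ts, D_ts, A_tl, D_tl, chose_ts),
               method="Nelder-Mead")
k_hat, beta_hat = fit.x
```

With enough trials the fitted parameters land close to the generating values; the same fit can be repeated with `kind="exponential"` to compare the two discount functions.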

(Kim et al., 2008 and Kim et al., 2009a). We analyzed all the neurons recorded in the caudate nucleus (CD) and ventral striatum (VS), as long as they were recorded for more than two blocks (80 trials) during the intertemporal choice task. Except for two neurons, all neurons were tested for at least three blocks (120 trials). The average number of intertemporal choice trials tested for each neuron was 167.4 ± 3.7 and 162.4 ± 4.1 for the CD and VS, respectively. The spike rate during the 1 s cue period was analyzed by applying a series of regression models. For each trial, we first estimated the temporally discounted values by multiplying the magnitude of reward from each target by the discount function (hyperbolic or exponential) of its delay that provided the best fit to the behavioral data in the same session. Next, we used a regression model to test whether the activity was influenced by the difference between the temporally discounted values of the left and right targets (DVL − DVR),

because this is equivalent to the decision variable used by the behavioral model described above. This regression model also included the sum of the temporally discounted values (DVsum = DVL + DVR) and the difference in the temporally discounted values for the chosen and unchosen targets (DVchosen − DVunchosen), in addition to the animal's choice (C = 0 and 1 for the leftward and rightward choices, respectively). In other words,

S = a0 + a1 DVsum + a2 (DVL − DVR) + a3 (DVchosen − DVunchosen) + a4 C, (model 1)

where S denotes the spike rate during the cue period. The same model was also applied to the control trials, with the temporally discounted values replaced by fictitious values calculated as if the reward magnitudes and delays were indicated by the target color and the number of yellow dots, as in the intertemporal choice task.
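Model 1 is a linear regression of the cue-period spike rate on the value terms. A minimal least-squares sketch (with hypothetical variable names and simulated data, not the study's actual fitting code) might look like:

```python
import numpy as np

def fit_model1(spikes, dv_left, dv_right, chose_right):
    """OLS fit of model 1:
    S = a0 + a1*DVsum + a2*(DVL - DVR) + a3*(DVchosen - DVunchosen) + a4*C."""
    dv_chosen = np.where(chose_right, dv_right, dv_left)
    dv_unchosen = np.where(chose_right, dv_left, dv_right)
    X = np.column_stack([
        np.ones(len(spikes)),        # a0: intercept
        dv_left + dv_right,          # a1: DVsum
        dv_left - dv_right,          # a2: left-right value difference
        dv_chosen - dv_unchosen,     # a3: chosen-unchosen value difference
        chose_right.astype(float),   # a4: choice (C = 0 left, 1 right)
    ])
    coefs, *_ = np.linalg.lstsq(X, spikes, rcond=None)
    return coefs  # [a0, a1, a2, a3, a4]

# Synthetic check: recover known coefficients from simulated spike rates
rng = np.random.default_rng(1)
n = 500
dv_left = rng.uniform(0.0, 3.0, n)
dv_right = rng.uniform(0.0, 3.0, n)
chose_right = rng.random(n) < 0.5
dv_chosen = np.where(chose_right, dv_right, dv_left)
dv_unchosen = np.where(chose_right, dv_left, dv_right)
true_a = np.array([2.0, 0.5, -1.0, 0.8, 1.5])
X = np.column_stack([np.ones(n), dv_left + dv_right, dv_left - dv_right,
                     dv_chosen - dv_unchosen, chose_right.astype(float)])
spikes = X @ true_a
coefs = fit_model1(spikes, dv_left, dv_right, chose_right)
```

Note that (DVchosen − DVunchosen) equals ±(DVL − DVR) depending on the choice, so the design matrix stays full rank only when choices vary across trials.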
