Modeling the violation of reward maximization and invariance in reinforcement schedules.

Bibliographic Details
Main Authors: Giancarlo La Camera, Barry J Richmond
Format: Article
Language: English
Published: Public Library of Science (PLoS) 2008-08-01
Series: PLoS Computational Biology
Online Access: https://doi.org/10.1371/journal.pcbi.1000131
collection DOAJ
description It is often assumed that animals and people adjust their behavior to maximize reward acquisition. In visually cued reinforcement schedules, monkeys make errors in trials that are not immediately rewarded, despite having to repeat error trials. Here we show that error rates are typically smaller in trials equally distant from reward but belonging to longer schedules (referred to as "schedule length effect"). This violates the principles of reward maximization and invariance and cannot be predicted by the standard methods of Reinforcement Learning, such as the method of temporal differences. We develop a heuristic model that accounts for all of the properties of the behavior in the reinforcement schedule task but whose predictions are not different from those of the standard temporal difference model in choice tasks. In the modification of temporal difference learning introduced here, the effect of schedule length emerges spontaneously from the sensitivity to the immediately preceding trial. We also introduce a policy for general Markov Decision Processes, where the decision made at each node is conditioned on the motivation to perform an instrumental action, and show that the application of our model to the reinforcement schedule task and the choice task are special cases of this general theoretical framework. Within this framework, Reinforcement Learning can approach contextual learning with the mixture of empirical findings and principled assumptions that seem to coexist in the best descriptions of animal behavior. As examples, we discuss two phenomena observed in humans that often derive from the violation of the principle of invariance: "framing," wherein equivalent options are treated differently depending on the context in which they are presented, and the "sunk cost" effect, the greater tendency to continue an endeavor once an investment in money, effort, or time has been made. The schedule length effect might be a manifestation of these phenomena in monkeys.
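For orientation, the "method of temporal differences" named in the abstract can be sketched with a minimal TD(0) state-value update. The states, reward placement, and parameter values below are illustrative simplifications of a cued multi-trial schedule, not the authors' actual model (which modifies standard TD learning to be sensitive to the immediately preceding trial):

```python
# Minimal TD(0) sketch: a toy 3-trial reinforcement schedule in which
# states 0, 1, 2 are successive trials and reward arrives only on the
# last trial before the terminal state 3. All names and parameters here
# are hypothetical illustrations, not taken from the paper.

def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.9):
    """One temporal-difference update: V(s) += alpha * (r + gamma*V(s') - V(s))."""
    delta = r + gamma * V[s_next] - V[s]   # TD error
    V[s] = V[s] + alpha * delta
    return V

V = [0.0, 0.0, 0.0, 0.0]                   # value estimates; V[3] is terminal (stays 0)
for episode in range(500):
    for s in range(3):
        r = 1.0 if s == 2 else 0.0         # reward only at the end of the schedule
        V = td0_update(V, s, r, s + 1)
```

Under this scheme the learned values increase monotonically with proximity to reward (roughly gamma-discounted: V[0] < V[1] < V[2]), which is exactly why standard TD cannot produce the schedule length effect: trials equally distant from reward get equal value regardless of schedule length.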
id doaj-art-1e9787227f524e8fb2c0dc048b7efad3
institution OA Journals
issn 1553-734X, 1553-7358