Intrinsically motivated action-outcome learning and goal-based action recall: A system-level bio-constrained computational model.

Reinforcement (trial-and-error) learning in animals is driven by a multitude of processes. Most animals have evolved several sophisticated systems of 'extrinsic motivations' (EMs) that guide them to acquire behaviours allowing them to maintain their bodies, defend against threat, and reproduce. Animals have also evolved various systems of 'intrinsic motivations' (IMs) that allow them to acquire actions in the absence of extrinsic rewards. These actions are used later to pursue such rewards when they become available. Intrinsic motivations have been studied in Psychology for many decades and their biological substrates are now being elucidated by neuroscientists. In the last two decades, investigators in computational modelling, robotics and machine learning have proposed various mechanisms that capture certain aspects of IMs. However, we still lack models of IMs that attempt to integrate all key aspects of intrinsically motivated learning and behaviour while taking into account the relevant neurobiological constraints. This paper proposes a bio-constrained system-level model that contributes a major step towards this integration. The model focusses on three processes related to IMs and on the neural mechanisms underlying them: (a) the acquisition of action-outcome associations (internal models of the agent-environment interaction) driven by phasic dopamine signals caused by sudden, unexpected changes in the environment; (b) the transient focussing of visual gaze and actions on salient portions of the environment; (c) the subsequent recall of actions to pursue extrinsic rewards based on goal-directed reactivation of the representations of their outcomes. The tests of the model, including a series of selective lesions, show how the focussing processes lead to a faster learning of action-outcome associations, and how these associations can be recruited for accomplishing goal-directed behaviours. The model, together with the background knowledge reviewed in the paper, represents a framework that can be used to guide the design and interpretation of empirical experiments on IMs, and to computationally validate and further develop theories on them.
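The abstract above describes three computational processes at a high level. The following minimal sketch is purely illustrative and is not the paper's implementation: it assumes a toy tabular setting in which a surprise signal (a proxy for the phasic dopamine burst triggered by unexpected outcomes) gates the learning of action-outcome associations, which are later reused for goal-based action recall. All names, parameters, and the deterministic toy environment are assumptions introduced here for illustration.

```python
# Minimal illustrative sketch (not the model described in the paper):
# surprise-gated learning of action-outcome associations, then goal-based recall.
import numpy as np

rng = np.random.default_rng(0)

n_actions, n_outcomes = 4, 4
# W[a, o] ~ strength of the learned association "action a produces outcome o".
W = np.zeros((n_actions, n_outcomes))
# Predicted probability of each outcome given each action (used to compute surprise).
P = np.full((n_actions, n_outcomes), 1.0 / n_outcomes)

def environment(action: int) -> int:
    """Toy world: each action deterministically causes the outcome with the same index."""
    return action

alpha = 0.1  # learning rate

# --- Intrinsically motivated phase: no extrinsic reward is available. ---
for _ in range(500):
    a = int(rng.integers(n_actions))     # exploratory action
    o = environment(a)
    surprise = 1.0 - P[a, o]             # proxy for the phasic dopamine signal:
                                         # large when the outcome is unexpected
    W[a, o] += alpha * surprise          # association learning gated by surprise
    P[a, o] += alpha * (1.0 - P[a, o])   # outcome becomes predicted -> signal fades
    P[a] /= P[a].sum()                   # keep predictions normalised

# --- Extrinsically motivated phase: a desired outcome (goal) is reactivated. ---
def recall_action(goal_outcome: int) -> int:
    """Goal-based recall: select the action whose learned outcome matches the goal."""
    return int(np.argmax(W[:, goal_outcome]))

print(recall_action(goal_outcome=2))     # prints 2 in this toy world
```

Note that in this sketch the surprise signal fades as outcomes become predicted, so learning is self-limiting, echoing (in a very simplified way) the idea that intrinsic motivation drives acquisition only while outcomes are still unexpected.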

Publication type: 
Article
Author or Creator: 
Baldassarre, Gianluca
Mannella, Francesco
Fiore, Vincenzo Guido
Redgrave, Peter
Gurney, Kevin
Mirolli, Marco
Publisher: 
Pergamon, New York, United States of America
Source: 
Neural Networks 41 (2013): 168–187. doi:10.1016/j.neunet.2012.09.015
Date: 
2013
Resource Identifier: 
http://www.cnr.it/prodotto/i/221702
https://dx.doi.org/10.1016/j.neunet.2012.09.015
http://www.journals.elsevier.com/neural-networks/
Language: 
English
ISTC Author: 
Marco Mirolli