Animal learning is driven not only by biological needs but also by intrinsic motivations (IMs) serving the acquisition of knowledge. Computational modeling involving IMs indicates that the learning of motor skills requires autonomous agents to self-generate tasks/goals and to use them to acquire the skills that accomplish them. We propose a neural architecture driven by IMs that is able to self-generate goals on the basis of the environmental changes caused by the agent's actions. The main novelties of the model are that it focuses on the acquisition of attention (looking) skills and that its architecture and functioning are broadly inspired by relevant primate brain areas (superior colliculus, basal ganglia, and frontal cortex). These areas, involved in IM-based behavior learning, play important roles in reflexive and voluntary attention. The model is tested with a simple simulated pan-tilt camera robot engaged in learning to switch on different lights by looking at them, and it is able to self-generate visual goals and learn attention skills under IM guidance. The model represents a novel hypothesis on how primates and robots might autonomously learn attention skills and has the potential to account for developmental psychology experiments and the underlying brain mechanisms.
Bio-Inspired Model Learning Visual Goals and Attention Skills Through Contingencies and Intrinsic Motivations
IEEE, United States of America
IEEE Transactions on Cognitive and Developmental Systems 10 (2018): 326–344. doi:10.1109/TCDS.2017.2772908