Active Bayesian perception and reinforcement learning

In a series of papers, we have formalized an active Bayesian perception approach for robotics based on recent progress in understanding animal perception. However, an issue for applied robot perception is how to tune this method to a task, using: (i) a belief threshold that adjusts the speed-accuracy tradeoff; and (ii) an active control strategy for relocating the sensor, e.g., to a preset fixation point. Here we propose that these two variables should be learnt by reinforcement from a reward signal evaluating the decision outcome. We test this claim with a biomimetic fingertip that senses surface curvature under uncertainty about contact location. Appropriate formulation of the problem allows use of multi-armed bandit methods to optimize the threshold and fixation point of the active perception. In consequence, the system learns to balance speed versus accuracy and sets the fixation point to optimize both quantities. Although we consider one example in robot touch, we expect that the underlying principles have general applicability.
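To illustrate the kind of tuning the abstract describes, the sketch below uses a simple epsilon-greedy multi-armed bandit over discretised (belief threshold, fixation point) pairs, with a reward that trades decision accuracy against decision time. This is not the authors' implementation: the grids, the run_trial simulator, and the reward weighting are all illustrative assumptions standing in for the actual active-perception trials.

import random

# Hypothetical discretisation of the two tuned quantities (assumed values).
thresholds = [0.6, 0.7, 0.8, 0.9, 0.99]   # belief threshold on the decision
fixations  = [-2, -1, 0, 1, 2]             # fixation point, arbitrary units
arms = [(t, f) for t in thresholds for f in fixations]

def run_trial(threshold, fixation):
    """Toy stand-in for one active-perception trial.

    Higher thresholds take longer but decide more accurately; accuracy is
    assumed to be best near a fixation point of 0.
    """
    steps = int(1 + 20 * threshold + random.random() * 5)
    p_correct = threshold * (1.0 - 0.1 * abs(fixation))
    correct = random.random() < p_correct
    return correct, steps

def reward(correct, steps, time_cost=0.01):
    # Reward the decision outcome, penalising the time taken to reach it.
    return (1.0 if correct else 0.0) - time_cost * steps

# Epsilon-greedy bandit: estimate the mean reward of each arm online.
counts = [0] * len(arms)
values = [0.0] * len(arms)
epsilon = 0.1

for trial in range(5000):
    if random.random() < epsilon:
        a = random.randrange(len(arms))                      # explore
    else:
        a = max(range(len(arms)), key=lambda i: values[i])   # exploit
    correct, steps = run_trial(*arms[a])
    r = reward(correct, steps)
    counts[a] += 1
    values[a] += (r - values[a]) / counts[a]                 # incremental mean

best = max(range(len(arms)), key=lambda i: values[i])
print("learned (threshold, fixation):", arms[best])

With this toy reward, the bandit settles on a threshold high enough to be accurate but not so high that the time penalty dominates, and on the fixation point with the best accuracy; the same trade-off is what the paper's reward signal is meant to resolve on the real tactile system.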

Publication type: 
Article
Author or Creator: 
Lepora, Nathan F.
Martinez-Hernandez, Uriel
Pezzulo, Giovanni
Prescott, Tony J.
Publisher: 
Institute of Electrical and Electronics Engineers, New York, NY, United States of America
Source: 
Proceedings of the ... IEEE/RSJ International Conference on Intelligent Robots and Systems (Print) (2013): 4735–4740.
Date: 
2013
Resource Identifier: 
http://www.cnr.it/prodotto/i/343327
Language: 
English
ISTC Author: 
Giovanni Pezzulo