Developing robots able to autonomously discover, select, and solve multiple new tasks in a cumulative, open-ended fashion is an important challenge for autonomous robotics. This becomes even more crucial if we want robots to interact with real environments, where they have to face many problems that are unpredictable at design time. Moreover, the development of autonomous computational embodied models can help to shed light on the motivational and learning mechanisms underlying the versatility and adaptiveness of humans and other mammals (e.g., monkeys and rats).
Intrinsic motivations (IMs) refer to the ability of humans and other mammals to modify their behaviour and learn new skills in the absence of direct biological pressure. First studied in animal psychology (e.g. Harlow, 1950; White, 1959) and human psychology (e.g. Berlyne, 1960; Ryan & Deci, 2000), IMs have recently also been investigated with respect to their neural basis, through both experiments (e.g. Wittmann et al., 2008; Duzel et al., 2010) and computational models (e.g. Kakade & Dayan, 2002; Mirolli et al., 2013). IM learning signals can serve as a useful tool for implementing more autonomous and versatile robots, driving the formation of ample repertoires of skills without the need for the user to externally assign rewards or tasks. In recent decades much computational research based on IMs has been carried out (e.g. Schmidhuber, 1991; Barto, 2004; Oudeyer et al., 2007; Santucci et al., 2013), and nowadays IMs are an important field of research within robotics as well (Baldassarre & Mirolli, 2013). In particular, IMs can play an important role in guiding an artificial system to form and select its own goals: when many different skills can be acquired, it is crucial for the system to select only those that can actually be learnt, and to focus on each only for the time necessary to learn it. Goals are thus crucial in driving the learning of skills, so as to form an ample repertoire of actions that the system can reuse in the future. Another key 'ingredient' of robots capable of autonomous open-ended learning of multiple skills is the architecture of the system: the multiple skills acquired under the drive of intrinsic motivations need to be stored in suitable, coordinated, and synergistic ways, without negative interference.
Moreover, the management of multiple skill learning and the intrinsic motivations guiding it, the formation of the related goals, and the linking of goals to the related skills require sophisticated, well-designed architectures rather than single algorithms working in isolation. This is an important open problem, as most machine-learning and robotic systems focus on single tasks solved with specific algorithms.
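As a toy illustration of the goal-selection idea described above (a simplified sketch, not one of the group's actual models), a competence-based intrinsic motivation signal can be implemented as competence progress: the system tracks its recent success rate on each goal and preferentially selects goals where competence is changing, so it neither dwells on already-mastered skills nor wastes time on unlearnable ones. All class and variable names below are hypothetical.

```python
import math
import random

class ProgressBasedGoalSelector:
    """Selects goals using competence progress, a common
    competence-based intrinsic motivation signal."""

    def __init__(self, n_goals, window=10):
        self.window = window
        # Recent outcomes per goal: 1 = success, 0 = failure.
        self.history = [[] for _ in range(n_goals)]

    def update(self, goal, outcome):
        h = self.history[goal]
        h.append(outcome)
        if len(h) > 2 * self.window:
            h.pop(0)

    def progress(self, goal):
        """Competence progress: recent success rate minus older success rate."""
        h = self.history[goal]
        if len(h) < 2:
            return 0.0
        half = len(h) // 2
        older, recent = h[:half], h[half:]
        return sum(recent) / len(recent) - sum(older) / len(older)

    def select(self, temperature=0.1):
        """Softmax over absolute progress: focus on goals whose
        competence is changing, not on mastered or unlearnable ones."""
        prefs = [abs(self.progress(g)) for g in range(len(self.history))]
        exps = [math.exp(p / temperature) for p in prefs]
        r = random.random() * sum(exps)
        for g, e in enumerate(exps):
            r -= e
            if r <= 0:
                return g
        return len(exps) - 1
```

In this sketch a goal that is being mastered (success rate rising) yields high progress and is selected often, while a goal that is impossible (flat at zero) or already mastered (flat at one) yields progress near zero and is abandoned, mirroring the "select only learnable skills, only for the time necessary" principle in the text.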
Specific research problems
- What learning and motivational signals best guarantee the autonomous discovery and acquisition of different skills?
- What is the role of goals in learning processes?
- How can we improve the autonomy and versatility of artificial agents?
- Experiments with computational models implemented in embodied architectures (simulated or real robots)
Examples of research of this type carried out by the group
(see full references below; the PDF files of the papers can be retrieved from here)
- Santucci et al. (2014).
- Santucci et al. (2013).
- Mirolli et al. (2013).
- Schembri et al. (2007).
Requested motivations of the candidate
- Strong interest in the topic and motivation to carry out research on it (very important)
- Desire to acquire the knowledge and methods of the group
Requested knowledge of the candidate
- University-level knowledge of Cognitive Science (including different possible fields: robotics, psychology, epistemology, philosophy of mind, biomedical engineering, etc.).
Requested skills of the candidate
- Capacity to read and understand scientific papers in English.
- Capacity to contribute to writing reports in English.
- Knowing (or desire to learn) a programming language, such as Python, C++ or MATLAB.
References
- G. Baldassarre and M. Mirolli, Eds., Intrinsically Motivated Learning in Natural and Artificial Systems. Berlin: Springer-Verlag, 2013.
- A. Barto, S. Singh, and N. Chentanez, “Intrinsically motivated learning of hierarchical collections of skills,” in Proceedings of the Third International Conference on Developmental Learning (ICDL), 2004, pp. 112–119.
- D. Berlyne, Conflict, Arousal and Curiosity. New York: McGraw-Hill, 1960.
- E. Duzel, N. Bunzeck, M. Guitart-Masip, and S. Duzel, “Novelty-related motivation of anticipation and exploration by dopamine (NOMAD): implications for healthy aging,” Neuroscience & Biobehavioral Reviews, vol. 34, no. 5, pp. 660–669, 2010.
- H. F. Harlow, “Learning and satiation of response in intrinsically motivated complex puzzle performance by monkeys,” Journal of Comparative and Physiological Psychology, vol. 43, pp. 289–294, 1950.
- S. Kakade and P. Dayan, “Dopamine: generalization and bonuses.” Neural Networks, vol. 15, no. 4-6, pp. 549–559, 2002.
- M. Mirolli, V. G. Santucci, and G. Baldassarre, “Phasic dopamine as a prediction error of intrinsic and extrinsic reinforcements driving both action acquisition and reward maximization: A simulated robotic study,” Neural Networks, vol. 39, pp. 40–51, 2013.
- P. Oudeyer, F. Kaplan, and V. Hafner, “Intrinsic motivation systems for autonomous mental development,” IEEE Transactions on Evolutionary Computation, vol. 11, no. 2, pp. 265–286, 2007.
- R. M. Ryan and E. L. Deci, “Intrinsic and extrinsic motivations: Classic definitions and new directions.” Contemporary Educational Psychology, vol. 25, no. 1, pp. 54–67, 2000.
- V. G. Santucci, G. Baldassarre, and M. Mirolli, “Which is the best intrinsic motivation signal for learning multiple skills?” Frontiers in Neurorobotics, vol. 7, no. 22, 2013.
- V. G. Santucci, G. Baldassarre, and M. Mirolli, “Autonomous selection of the ‘what’ and the ‘how’ of learning: an intrinsically motivated system tested with a two-armed robot,” in Proceedings of ICDL-EpiRob 2014, Genova.
- M. Schembri, M. Mirolli, and G. Baldassarre, “Evolving internal reinforcers for an intrinsically motivated reinforcement-learning robot,” in Proceedings of the 6th International Conference on Development and Learning, Y. Demiris, D. Mareschal, B. Scassellati, and J. Weng, Eds. Imperial College, London, 2007, pp. E1–6.
- J. Schmidhuber, “Curious model-building control systems,” in Proceedings of the International Joint Conference on Neural Networks, vol. 2. IEEE, Singapore, 1991, pp. 1458–1463.
- R. White, “Motivation reconsidered: the concept of competence.” Psychological Review, vol. 66, pp. 297–333, 1959.
- B. Wittmann, N. Daw, B. Seymour, and R. Dolan, “Striatal activity underlies novelty-based choice in humans,” Neuron, vol. 58, no. 6, pp. 967–973, 2008.