Empirical evidence from cognitive science and neuroscience suggests that the human brain maintains an internal representation of the body, or a model of the motor system, and that this internal model is involved in the ability to distinguish self-generated actions from motor actions generated by other individuals.
The assumption is that the sensorimotor signals generated by our own motor actions provide cues to agency (Weiss et al., 2011). One proposal (Blakemore, Wolpert, & Frith, 2000) holds that when we perform a motor action, an efference copy of the motor commands that the brain sends to the muscles is used by a forward model to predict the sensory consequences of the movement. These predictions are then compared with the actual sensory consequences; if the two match, the perceived sensory consequences are attenuated. This enables a differentiation between self-generated sensory events and externally generated ones, which are not mapped to any internally generated efference copy of the motor commands. Sensory attenuation would therefore be related to self-agency and to "the privileged access to internally generated efferent information during one's own action" (Weiss et al., 2011). The sense of agency, i.e. the pre-reflective experience that "I" am the author of the action I am observing, is thus proposed to depend on the degree of congruence between the predicted and the actual sensory consequences of our bodily actions.
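The comparator mechanism described above can be sketched in a few lines of Python. This is an illustrative toy only: the forward model, the attenuation factor, and the error threshold are hypothetical stand-ins, not values or mechanisms taken from the cited studies.

```python
# A minimal sketch of the comparator account of agency. The forward
# model, attenuation factor, and threshold below are hypothetical.

def forward_model(efference_copy):
    # Toy forward model: maps a motor command to its predicted sensory
    # consequence (here a fixed linear mapping).
    return 2.0 * efference_copy

def perceive(efference_copy, actual_sensation, threshold=0.1):
    """Compare prediction and sensation; attenuate matching events.

    Returns the perceived intensity and whether the event is
    classified as self-generated.
    """
    predicted = forward_model(efference_copy)
    error = abs(actual_sensation - predicted)
    self_generated = error < threshold
    perceived = actual_sensation * (0.3 if self_generated else 1.0)
    return perceived, self_generated

# A sensation that matches the prediction is attenuated...
p_own, own = perceive(efference_copy=1.0, actual_sensation=2.0)
# ...while a mismatching, externally caused one passes at full strength.
p_ext, ext = perceive(efference_copy=1.0, actual_sensation=3.5)
```

The key design point is that classification falls out of the comparison itself: no separate "agency detector" is needed beyond the forward model and an error threshold.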
Can robots develop a sense of agency? This Q-team aims to form an interdisciplinary research group to investigate basic mechanisms that could implement similar processes in the humanoid robot Aldebaran Nao. Students will design and implement a computational internal model, an internal body representation, that encodes the sensorimotor experience generated through self-exploration behaviours in the Nao robot, and will investigate how the predictive capabilities of this model can be used to classify whether a perceptual event was self-generated or produced by an external agent.
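As a rough sketch of the workflow just described, the snippet below fits a forward model to data from random "motor babbling" and then labels sensory events whose prediction error exceeds a calibrated tolerance as externally caused. Everything here (the linear sensorimotor mapping, the noise level, the tolerance margin) is an illustrative assumption; the Q-team would work with real Nao sensor data and, most likely, an artificial neural network rather than a least-squares line.

```python
import numpy as np

rng = np.random.default_rng(0)

# Self-exploration ("motor babbling"): issue random motor commands and
# record the sensory feedback they produce. The linear mapping plus
# noise below is an arbitrary stand-in for the physical robot.
def robot_feedback(commands):
    return 1.5 * commands + 0.2 + rng.normal(scale=0.01, size=commands.shape)

commands = rng.uniform(-1.0, 1.0, size=200)
feedback = robot_feedback(commands)

# Fit a forward model to the babbling data. A 1-D least-squares line
# keeps the example short; a neural network would replace this step.
A = np.column_stack([commands, np.ones_like(commands)])
weights, *_ = np.linalg.lstsq(A, feedback, rcond=None)

def predict(command):
    return weights[0] * command + weights[1]

# Calibrate a tolerance from the largest prediction error seen during
# babbling (the 1.5 safety margin is an arbitrary choice).
tolerance = 1.5 * np.max(np.abs(feedback - predict(commands)))

def is_self_generated(command, observed):
    """Classify a sensory event by its forward-model prediction error."""
    return abs(observed - predict(command)) < tolerance

own = is_self_generated(commands[0], feedback[0])   # matches own command
external = is_self_generated(commands[0], 2.5)      # e.g. an external touch
```

In practice the tolerance would be tuned on held-out self-generated data, since a threshold fit to the training errors alone can be overconfident.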
Tuesday 19.04, 15:00 to 17:00 (2 hours)
Tuesday 26.04, 15:00 to 17:00 (2 hours)
Tuesday 03.05, 15:00 to 17:00 (2 hours)
Tuesday 10.05, 15:00 to 17:00 (2 hours)
Tuesday 17.05, 15:00 to 17:00 (2 hours)
Tuesday 24.05, 15:00 to 17:00 (2 hours)
Tuesday 31.05, 15:00 to 17:00 (2 hours)
Tuesday 07.06, 15:00 to 17:00 (2 hours)
Tuesday 14.06, 15:00 to 17:00 (2 hours)
Tuesday 21.06, 15:00 to 17:00 (2 hours)
Tuesday 28.06, 15:00 to 18:00 (3 hours)
Thursday 14.07, 15:00 to 18:00 (3 hours)
Tuesday 19.07, 15:00 to 16:00 (2 hours)
The Q-team is open to Master's students in Computer Science, Computational Neuroscience, and Developmental Psychology. Good programming skills (C++ and/or Python) and basic knowledge of machine-learning tools and artificial neural networks (in particular for students from Computer Science and Computational Neuroscience) are required. Participants will gain research experience with the basic practices of scientific work, including working in a team, programming both a simulated and a real humanoid robot, and documenting the outcomes of the work. The course will be held entirely in English. The size of the Q-team is limited to 6 students.
Registration for the Q-team is handled by the lecturer (email@example.com). In your registration request, please describe your academic background and your programming skills (C++ and/or Python), and state whether you have experience with Linux/Ubuntu and with machine-learning tools, in particular artificial neural networks.