By Prof. Mitsuo Kawato, ATR Computational Neuroscience Laboratories
Our working definition of computational neuroscience is to "understand the brain to the extent that we can build a machine or computer program that can solve the same problems solved by the brain, using essentially the same principles". One of the most difficult computational problems in solving a variety of sensorimotor coordination tasks is to find a concise and scalable representation that can be used for supervised learning, statistical inference, or reinforcement learning. For example, many motor control tasks have been characterized as optimal control problems, and reinforcement learning has been used as a tool to solve them. However, if a standard reinforcement-learning algorithm is applied to realistic problems involving systems with many degrees of freedom, such as a humanoid robot, it does not scale, because the learning time becomes astronomically long. Only a hierarchical and modular architecture, with an efficient and concise representation at the higher level, can overcome this difficulty.
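The scaling problem can be made concrete with a back-of-the-envelope count of the table a tabular reinforcement-learning algorithm (e.g. Q-learning) would need. The sketch below is illustrative and not from the article: the discretization into a fixed number of bins per joint and the per-joint discrete action set are assumptions chosen only to show the exponential growth.

```python
# Illustrative sketch: why tabular reinforcement learning does not scale
# with a robot's degrees of freedom (DOF). We assume each joint's angle
# and velocity are discretized into `bins` levels, and each joint has a
# small discrete action set; both choices are hypothetical.

def q_table_size(dof: int, bins: int = 10, actions_per_joint: int = 3) -> int:
    """Number of Q-table entries for a robot with `dof` joints."""
    states = bins ** (2 * dof)           # angle + velocity per joint
    actions = actions_per_joint ** dof   # joint-wise discrete actions
    return states * actions

# Planar arm, a 7-DOF manipulator, and a humanoid-scale system:
for dof in (2, 7, 30):
    print(f"{dof:2d} DOF -> {q_table_size(dof):.3e} Q-table entries")
```

Even with this coarse 10-bin discretization, a 7-DOF arm already needs on the order of 10^17 entries, and a 30-DOF humanoid far exceeds any feasible memory or sample budget, which is why a flat algorithm's learning time blows up and a hierarchical, compressed higher-level representation is required.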
In previous studies, control variables in hierarchical reinforcement learning, order parameters, and/or efficient representations were invented or explored by researchers. Here, I propose an automatic scheme to find them by utilizing both neural and external-world data. This is based partly on our observation that the forward and inverse mappings between neural and physical variables give rise to very different representations.
1. M. Kawato (2008). Philosophical Transactions of the Royal Society B, 363, 2201-2214.
2. M. Kawato (2008). HFSP Journal, 2, 136-142.
3. G. Ganesh, E. Burdet, M. Haruno and M. Kawato (2008). NeuroImage, 42, 1463-1472.
This article was published on Jun 30, 2010