My research mainly focuses on reinforcement learning with neural networks (also called deep reinforcement learning).
We aim to design agents that make decisions in an unknown environment and learn, through their own interaction with this environment, to maximize a given criterion.
More precisely, I am working on model-free actor-critic algorithms for environments that are continuous in both states and actions, trying to make them more data-efficient without losing the scalability of neural networks.
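To make the actor-critic principle concrete, here is a minimal sketch on a made-up one-step task with a continuous action, using linear function approximators in place of neural networks; the task, step sizes, and noise scale are all assumptions chosen only for illustration, not the algorithms from my own work:

```python
import numpy as np

# Minimal actor-critic sketch on a made-up one-step task:
# state s ~ U(0, 1), continuous action a, reward r = -(a - 2s)^2,
# so the optimal policy is a = 2s. All constants are arbitrary choices.
rng = np.random.default_rng(0)

w = 0.0            # actor parameter: Gaussian policy mean(s) = w * s
v = 0.0            # critic parameter: baseline V(s) = v * s
sigma = 0.5        # fixed exploration noise of the Gaussian policy
alpha_actor, alpha_critic = 0.01, 0.1

for _ in range(20000):
    s = rng.uniform(0.0, 1.0)
    a = rng.normal(w * s, sigma)          # sample an action from the policy
    r = -(a - 2.0 * s) ** 2               # environment feedback
    td_error = r - v * s                  # return minus the critic's baseline
    v += alpha_critic * td_error * s      # critic: semi-gradient update
    # actor: policy-gradient step, grad log pi(a|s) = (a - w*s) / sigma^2 * s
    w += alpha_actor * td_error * (a - w * s) / sigma ** 2 * s

print(f"learned slope w ~ {w:.2f} (optimal is 2.0)")
```

The critic's baseline reduces the variance of the policy-gradient estimate; in deep reinforcement learning, the linear functions w * s and v * s are replaced by neural networks.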
Through interactions with several researchers, I have come to the personal conviction that building a general artificial intelligence involves the following points:
– a situated agent acting in a rich environment with complex interactions is a key element for intelligence,
– robotics is not a necessary condition: the rich environment can be simulated,
– giving less a priori (human) knowledge is important so as not to impede the agent's capacity to find surprising solutions. Neural networks let agents build their own representations, specific to the task they have to solve given their sensors and effectors; representations that humans might not have designed themselves. Moreover, neural networks and gradient descent make learning scalable, which is essential for decision-making in rich environments,
– a discrete set of states or actions is also pre-defined knowledge that the agent should discover by itself,
– morphology scaffolding is an interesting way to control the complexity of learning: the agent's sensors and effectors should grow over time.