R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction, 2nd ed., Adaptive Computation and Machine Learning series, MIT Press, 2018.

C. J. Hanna, R. J. Hickey, D. K. Charles, and M. M. Black, Modular Reinforcement Learning architectures for artificially intelligent agents in complex game environments, Computational Intelligence and Games, pp.380-387, 2010.

J. Elman, Finding structure in time, Cognitive Science, vol.14, issue.2, pp.179-211, 1990.

J. Schmidhuber, Deep learning in neural networks: An overview, Neural Networks, vol.61, pp.85-117, 2015.

M. E. Taylor and P. Stone, Transfer Learning for Reinforcement Learning Domains: A Survey, Journal of Machine Learning Research, vol.10, issue.7, pp.1633-1685, 2009.

A. Lazaric, Transfer in Reinforcement Learning: A Framework and a Survey, Reinforcement Learning, vol.12, pp.143-173, 2012.
URL : https://hal.archives-ouvertes.fr/hal-00772626

F. Tanaka and M. Yamamura, Multitask reinforcement learning on the distribution of MDPs, IEEE International Symposium on Computational Intelligence in Robotics and Automation, vol.3, pp.1108-1113, 2003.

C. Devin, A. Gupta, T. Darrell, P. Abbeel, and S. Levine, Learning Modular Neural Network Policies for Multi-Task and Multi-Robot Transfer, 2016.

K. Frans, J. Ho, X. Chen, P. Abbeel, and J. Schulman, Meta Learning Shared Hierarchies. CoRR, 2017.

Y. W. Teh, V. Bapst, W. M. Czarnecki, J. Quan, J. Kirkpatrick et al., Distral: Robust Multitask Reinforcement Learning, 2017.

A. Tsymbal, The Problem of Concept Drift: Definitions and Related Work, 2004.

H. Wang and Z. Abraham, Concept drift detection for streaming data, International Joint Conference on Neural Networks, pp.1-9, 2015.

M. B. Ring, Continual Learning in Reinforcement Environments, University of Texas at Austin, 1994.

J. Xu and Z. Zhu, Reinforced Continual Learning, 2018.

P. Sweetser and P. Wyeth, GameFlow: A model for evaluating player enjoyment in games, Computers in Entertainment, vol.3, issue.3, 2005.

R. Holt and J. Mitterer, Examining video game immersion as a flow state, 108th Annual Convention of the American Psychological Association, 2000.

J. Chen, 2019.

J. Chen, Flow in games (and everything else), Communications of the ACM, vol.50, issue.4, p.31, 2007.

I. Bonnici and A. Gouaïch, Formalisation of metamorph Reinforcement Learning, LIRMM, 2018.
URL : https://hal.archives-ouvertes.fr/hal-01924642

K. Cho, B. van Merriënboer, C. Gulcehre, F. Bougares, H. Schwenk et al., Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation, 2014.
URL : https://hal.archives-ouvertes.fr/hal-01433235

D. P. Kingma and J. Ba, Adam: A Method for Stochastic Optimization, 2014.

A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang et al., Automatic differentiation in PyTorch, 2017.

R Core Team, R: A Language and Environment for Statistical Computing, R Foundation for Statistical Computing, 2018.

R. Le Hy, A. Arrigoni, P. Bessière, and O. Lebeltel, Teaching Bayesian Behaviours to Video Game Characters, Robotics and Autonomous Systems, vol.47, pp.177-185, 2004.
URL : https://hal.archives-ouvertes.fr/inria-00182073

F. Tencé and C. Buche, Automatable Evaluation Method Oriented toward Behaviour Believability for Video Games, 2010.

M. Polceanu, A. Mora, J. Jimenez, C. Buche, and A. Fernández-Leiva, The Believability Gene in Virtual Bots, 29th International FLAIRS Conference, p.4, 2016.
URL : https://hal.archives-ouvertes.fr/hal-01315402