Unsupervised State Representation Learning in Partially Observable Atari Games
Journal article, Peer reviewed
Accepted version
Date: 2023
Original version
Meng, L., Goodwin, M., Yazidi, A. & Engelstad, P. (2023). Unsupervised State Representation Learning in Partially Observable Atari Games. Lecture Notes in Computer Science, 14185, 212-222. https://doi.org/10.1007/978-3-031-44240-7_21

Abstract
State representation learning aims to capture the latent factors of an environment. Although some researchers have recognized the connection between masked image modeling and contrastive representation learning, their efforts focus on using masks as an augmentation technique to better represent the latent generative factors. Partially observable environments in reinforcement learning have not yet been carefully studied with unsupervised state representation learning methods. In this article, we propose an unsupervised state representation learning scheme for partially observable states. We conduct our experiments on an existing Atari 2600 framework designed to evaluate representation learning models. A contrastive method called Spatiotemporal DeepInfomax (ST-DIM) has shown state-of-the-art performance on this benchmark but remains inferior to its supervised counterpart. Our approach improves on ST-DIM when the environment is not fully observable and achieves higher F1 and accuracy scores than the supervised learning counterpart. The mean accuracy averaged over categories is 66% for our approach, compared to 38% for supervised learning; the mean F1 scores are 64% and 33%, respectively. The code is available at https://github.com/mengli11235/MST_DIM.
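To make the contrastive setup described in the abstract more concrete, the sketch below shows a generic InfoNCE-style objective over temporally adjacent Atari frames, with random patch masking used as an augmentation. This is only a minimal illustration under assumed choices (encoder architecture, patch size, temperature), not the authors' ST-DIM implementation; see the linked repository for the actual code.

```python
# Minimal sketch (assumed, illustrative): spatiotemporal contrastive learning
# with masking as augmentation, in PyTorch. Not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Encoder(nn.Module):
    """Small CNN encoder mapping an 84x84 grayscale frame to a latent vector."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, 3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        self.fc = nn.Linear(64 * 7 * 7, latent_dim)

    def forward(self, x):
        return self.fc(self.conv(x))


def random_mask(frames, patch=12, p=0.3):
    """Zero out a random square patch in each frame (masking as augmentation)."""
    masked = frames.clone()
    _, _, H, W = frames.shape
    for b in range(frames.size(0)):
        if torch.rand(1).item() < p:
            y = torch.randint(0, H - patch, (1,)).item()
            x = torch.randint(0, W - patch, (1,)).item()
            masked[b, :, y:y + patch, x:x + patch] = 0.0
    return masked


def infonce_loss(anchors, positives, temperature=0.1):
    """InfoNCE: each anchor's positive is the same-index row of `positives`;
    all other rows in the batch act as negatives."""
    logits = anchors @ positives.t() / temperature  # (B, B) similarity matrix
    targets = torch.arange(anchors.size(0))
    return F.cross_entropy(logits, targets)


if __name__ == "__main__":
    enc = Encoder()
    # Consecutive frames (x_t, x_{t+1}) form temporal positive pairs.
    x_t = torch.rand(8, 1, 84, 84)
    x_tp1 = torch.rand(8, 1, 84, 84)
    z_t = enc(random_mask(x_t))   # masked view of the current frame
    z_tp1 = enc(x_tp1)            # next frame as the temporal positive
    loss = infonce_loss(z_t, z_tp1)
    loss.backward()
    print(f"contrastive loss: {loss.item():.4f}")
```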