Show simple item record

dc.contributor.author: Engelsvoll, Ruben Nygård
dc.contributor.author: Gammelsrød, Anders
dc.contributor.author: Thoresen, Bjørn-Inge Støtvig
dc.date.accessioned: 2020-10-15T10:50:09Z
dc.date.available: 2020-10-15T10:50:09Z
dc.date.issued: 2020
dc.identifier.citation: Engelsvoll, R. N., Gammelsrød, A. & Thoresen, B. I. S. (2020) Generating Levels and Playing Super Mario Bros. with Deep Reinforcement Learning Using various techniques for level generation and Deep Q-Networks for playing (Master's thesis). University of Agder, Grimstad
dc.identifier.uri: https://hdl.handle.net/11250/2683046
dc.description: Master's thesis in Information and Communication Technology (IKT590)
dc.description.abstract: This thesis explores the behavior of two competing reinforcement learning agents in Super Mario Bros. In video games, procedural content generation (PCG) can assist human game designers by generating a particular aspect of the game; a designer can use generated content as inspiration to build upon, saving time and resources. Much research has been conducted on AI in video games, including AI for playing Super Mario Bros., and a related research field focuses on PCG for video games, including the generation of Super Mario Bros. levels. This thesis combines the two fields into a system of two competing AI agents inspired by generative adversarial networks (GANs): one agent controls Mario and represents the discriminator, while the other generates the level Mario plays and represents the generator. In an ordinary GAN, the generator attempts to mimic a database of real data, while the discriminator attempts to distinguish real data samples from generated ones. The Mario agent uses a Deep Q-Network (DQN) algorithm to learn to navigate levels, while the level generator uses a DQN-based algorithm with different types of neural networks. The DQN algorithm uses neural networks to predict the expected future reward, denoted the Q-value, for each possible action. The results show that the generator produces content better than random when its model takes a sequence of tiles as input and outputs a sequence of Q-value predictions.
dc.language.iso: eng
dc.publisher: University of Agder
dc.rights: Attribution-NonCommercial-NoDerivatives 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/4.0/deed.no
dc.subject: IKT590
dc.title: Generating Levels and Playing Super Mario Bros. with Deep Reinforcement Learning Using various techniques for level generation and Deep Q-Networks for playing
dc.type: Master thesis
dc.rights.holder: © 2020 Ruben Nygård Engelsvoll, Anders Gammelsrød, Bjørn-Inge Støtvig Thoresen
dc.subject.nsi: VDP::Teknologi: 500::Informasjons- og kommunikasjonsteknologi: 550
dc.subject.nsi: VDP::Matematikk og Naturvitenskap: 400::Informasjons- og kommunikasjonsvitenskap: 420::Kunnskapsbaserte systemer: 425
dc.source.pagenumber: 102
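The abstract describes the core DQN idea: a neural network maps a state to one predicted Q-value (expected future reward) per action, and the agent acts greedily on those predictions. The following is a minimal illustrative sketch of that idea, not the thesis code; the action count, state size, and the single linear layer standing in for the network are all assumptions for demonstration, and training is omitted.

```python
# Illustrative sketch (not the thesis implementation): a Q-network maps a
# state to one Q-value per action, and the agent acts greedily on them.
import random

random.seed(0)

N_ACTIONS = 7   # hypothetical Super Mario Bros. action count
STATE_DIM = 16  # hypothetical flattened state size

# One linear layer stands in for the neural network (weights untrained).
weights = [[random.gauss(0, 0.1) for _ in range(STATE_DIM)]
           for _ in range(N_ACTIONS)]

def q_values(state):
    """Predicted expected future reward (Q-value) for each action."""
    return [sum(w * s for w, s in zip(row, state)) for row in weights]

def greedy_action(state):
    """Pick the action with the highest predicted Q-value."""
    qs = q_values(state)
    return qs.index(max(qs))

state = [random.random() for _ in range(STATE_DIM)]
print(len(q_values(state)), greedy_action(state))
```

In the thesis, both the Mario agent and the level generator select actions this way; they differ in what a "state" and an "action" mean (game frames and moves versus level tiles and tile placements).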


Associated file(s)


This item appears in the following collection(s)


Except where otherwise noted, this item's license is described as Attribution-NonCommercial-NoDerivatives 4.0 International