Solving sparse-reward problems in partially observable 3D environments using distributed reinforcement learning

dc.contributor.advisor: Engelbrecht, Herman (en_ZA)
dc.contributor.advisor: Schoeman, J. C. (en_ZA)
dc.contributor.author: Louw, Jacobus Martin (en_ZA)
dc.contributor.other: Stellenbosch University. Faculty of Engineering. Dept. of Electrical and Electronic Engineering. (en_ZA)
dc.date.accessioned: 2021-11-08T04:29:10Z
dc.date.accessioned: 2021-12-22T14:20:44Z
dc.date.available: 2021-11-08T04:29:10Z
dc.date.available: 2021-12-22T14:20:44Z
dc.date.issued: 2021-12
dc.description: Thesis (MEng)--Stellenbosch University, 2021. (en_ZA)
dc.description.abstract: ENGLISH ABSTRACT: In this study, we address sparse-reward problems in partially observable 3D environments. The example task is set in a simulation environment in which a reinforcement learning (RL) agent must deliver a first-aid kit to an immobilised miner, acting on image observations. We apply a deep Q-learning algorithm with several modifications to solve this problem. We first show that augmenting the agent's observation with a history of previous observations and performed actions helps it solve problems in the partially observable environment. We then consider three main modifications to the deep Q-learning algorithm. The first is to dramatically increase the rate at which new data is generated by using a distributed system. Secondly, we use prioritised experience replay (PER) [39] to replay significant transitions to the agent more frequently. Lastly, we add the n-step return to the algorithm (illustrated in the sketch following this record). The work by Hessel et al. [14] and Horgan et al. [16] shows that these modifications significantly improve the performance of the deep Q-learning algorithm on the Atari platform. The Atari platform consists mainly of simple 2D environments; in contrast, we consider performance in a partially observable 3D environment with sparse rewards. We confirm the results of Fedus et al. [10] and show that better-performing policies are trained when the replay buffer contains more recently generated data. We show that prioritising transitions and the n-step return are essential to solving the example sparse-reward problem. In addition to these modifications, we investigate strategies to improve exploration. We then demonstrate that curriculum learning (CL) or domain randomisation (DR) can help the agent solve more challenging problems in which the reward signal is initially difficult to obtain. Lastly, we establish that using CL in combination with DR greatly benefits the deep Q-learning agent's performance on larger, more complex problems. (en_ZA)
dc.description.abstract: AFRIKAANSE OPSOMMING: In hierdie studie spreek ons skaars-beloningsprobleme in gedeeltelik sigbare 3D-omgewings aan. In die probleem wat ons as voorbeeld gebruik, moet ’n versterkingsleeragent ’n noodhulpkissie aan ’n gestrande mynwerker in ’n simulasie-omgewing aflewer. Die agent moet aksies, gebaseer op ’n kamerabeeld, uitvoer om die taak te verrig. Ons pas ’n diep-Q-leer algoritme met ’n paar wysigings toe om die probleem op te los. Ons toon eerstens aan dat dit die agent help om probleme in die gedeeltelik sigbare omgewing op te los, indien sy waarneming aangevul word deur vorige waarnemings en uitgevoerde aksies. Daarna oorweeg ons drie hoofsaaklike wysigings aan die diep-Q-leer algoritme om hierdie probleem op te los. Eerstens word die spoed waarteen nuwe data gegenereer word drasties verhoog deur van ’n verspreide stelsel gebruik te maak. Tweedens gebruik ons ’n geprioritiseerde ervaringsbuffer [39] om belangrike ervarings meer gereeld aan die agent terug te speel. Laastens voeg ons n-stap opdaterings by die algoritme. Die navorsing deur Hessel et al. [14] en Horgan et al. [16] toon aan dat hierdie wysigings die werksverrigting van die diep-Q-leer algoritme op die Atari-platform aansienlik verbeter. Die Atari-speletjies bestaan hoofsaaklik uit 2D-omgewings, terwyl ons die algoritme op ’n 3D-omgewing met skaars-belonings toepas. Ons bevestig die resultate van Fedus et al. [10] en toon aan dat beter gedragspatrone aangeleer word indien die ervaringsbuffer meer onlangs gegenereerde data bevat. Ons toon ook dat die prioritisering van ervaring en n-stap opdaterings baie belangrik is om die skaars-beloningsprobleem in die voorbeeld op te los. Aanvullend tot hierdie wysigings ondersoek ons ook strategieë om die verkenning van die omgewing te verbeter. Ons toon aan dat kurrikulumleer of domein-lukraakheid die agent kan help om meer uitdagende probleme op te los, waar dit aanvanklik moeilik is om ’n beloning te ontvang. Laastens wys ons dat dit die diep-Q-leer agent verder bevoordeel indien kurrikulumleer in kombinasie met domein-lukraakheid gebruik word om groter en moeiliker probleme op te los. (af_ZA)
dc.description.version: Masters (en_ZA)
dc.format.extent: 144 pages (en_ZA)
dc.identifier.uri: http://hdl.handle.net/10019.1/123775
dc.language.iso: en_ZA (en_ZA)
dc.publisher: Stellenbosch : Stellenbosch University (en_ZA)
dc.rights.holder: Stellenbosch University (en_ZA)
dc.subject: Sparse-reward problems (en_ZA)
dc.subject: Reinforcement learning (en_ZA)
dc.subject: 3D Environments (en_ZA)
dc.subject: Deep learning (Machine learning) (en_ZA)
dc.subject: UCTD (en_ZA)
dc.title: Solving sparse-reward problems in partially observable 3D environments using distributed reinforcement learning (en_ZA)
dc.type: Thesis (en_ZA)
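The n-step return modification described in the abstract replaces the standard one-step bootstrap target of Q-learning with a target that accumulates n discounted rewards before bootstrapping from the target network's value estimate. The sketch below is a minimal illustration of that idea under assumptions; the function name, argument names, and the plain-list transition format are not taken from the thesis.

```python
# Minimal illustrative sketch of an n-step return target for deep Q-learning.
# Not the thesis implementation; names and data layout are assumptions.
def n_step_target(rewards, dones, bootstrap_value, gamma=0.99):
    """Compute G_t = r_t + gamma*r_{t+1} + ... + gamma^{n-1}*r_{t+n-1}
    + gamma^n * max_a Q(s_{t+n}, a), truncated at episode termination.

    rewards, dones:   the n rewards and done flags following state s_t
    bootstrap_value:  max_a Q(s_{t+n}, a) estimated by the target network
    """
    target, discount = 0.0, 1.0
    for reward, done in zip(rewards, dones):
        target += discount * reward
        if done:
            return target  # episode ended: do not bootstrap past the end
        discount *= gamma
    return target + discount * bootstrap_value


# Example: a 3-step return with a sparse terminal-step reward of 1.0 and a
# bootstrap value of 0.5 for the state reached after the third step.
print(n_step_target([0.0, 0.0, 1.0], [False, False, False], 0.5))
```

With a sparse reward, a larger n lets a single received reward reach the value estimate of states several steps earlier in one update, which is why the abstract reports the n-step return as essential for the example problem.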
Files
Original bundle: louw_sparse_2021.pdf (4.39 MB, Adobe Portable Document Format)