dc.contributor.author | Borgersen, Karl Audun | |
dc.contributor.author | Grundetjern, Morten | |
dc.date.accessioned | 2020-10-15T08:04:39Z | |
dc.date.available | 2020-10-15T08:04:39Z | |
dc.date.issued | 2020 | |
dc.identifier.citation | Borgersen, K. A. & Grundetjern, M. (2020) A Generative Adversarial Network approach using Latent Space Manipulation for Clothing Display on Synthetic Images: Using the latent space of a non-primed StyleGAN2 network to manipulate arbitrary clothing onto generated humans (Master's thesis). University of Agder, Grimstad | en_US |
dc.identifier.uri | https://hdl.handle.net/11250/2682947 | |
dc.description | Master's thesis in Information- and communication technology (IKT590) | en_US |
dc.description.abstract | The fashion industry is continuously inventing new clothing. A significant bottleneck when introducing newly designed clothing to the market is advertising. This thesis investigates the possibility of streamlining this bottleneck through the automated manipulation of clothing onto generated humans. A common problem with Generative Adversarial Networks (GANs) is the black-box nature of the generated results: a user has no direct control over the traits of the images produced. GANs are capable of exceptional levels of "creativity" when their generators are used to create novel results; for instance, it is possible to project a car into a StyleGAN network trained to generate human faces. To generate a display human with certain user-specified traits, most contemporary solutions use an embedding network, yet it remains an open question whether this approach stifles a network's ability to create unique kinds of results. This project aims to harness the "creativity" of GANs to generate unique results not present in the original dataset by forgoing these embedding networks. More specifically, this is done by investigating a custom-trained StyleGAN2 network and furthering the understanding of the effects of vector manipulation in its latent space. In our approach, the StyleGAN2 networks and their corresponding latent spaces are trained from scratch; the datasets for this training process were compiled from several other fashion datasets, namely the "Try-on" and "DeepFashion" datasets. The project also examines the StyleGAN2 training process and the impact of differing datasets on the width and usability of the latent space. These experiments culminated in a network with an FID score of 4.3. The images produced are modified with vectors associated with different traits, such as common traits of human photos.
This approach opens up a further understanding of the allocation of latent space features in general. When used in conjunction with an optimization algorithm, these networks enable the autonomous selection of ideal magnitudes for attributes in image generation, causing generated humans to take on some of the traits of an input clothing image, such as color, texture, and general shape. | en_US |
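The latent-space vector manipulation the abstract describes can be sketched as adding a scaled trait direction to a latent code produced by StyleGAN2's mapping network. The function name, the 512-dimensional latent size, and the fixed magnitude below are illustrative assumptions, not details taken from the thesis (which selects magnitudes with an optimization algorithm):

```python
import numpy as np

def apply_trait_vector(w, trait_direction, magnitude):
    """Shift a latent code along a trait direction.

    w               -- a latent code, e.g. a (512,) vector from the mapping network
    trait_direction -- a latent-space direction associated with a trait
                       (e.g. a clothing color or texture)
    magnitude       -- how strongly to apply the trait; the thesis chooses
                       this value autonomously via an optimization algorithm
    """
    # Normalize so `magnitude` directly controls the size of the shift.
    direction = trait_direction / np.linalg.norm(trait_direction)
    return w + magnitude * direction

# Toy example with a 512-dimensional latent code.
rng = np.random.default_rng(0)
w = rng.standard_normal(512)
v = rng.standard_normal(512)
w_edited = apply_trait_vector(w, v, magnitude=2.0)
```

Feeding `w_edited` back through the synthesis network would then yield an image whose trait has been strengthened or weakened according to the chosen magnitude.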
dc.language.iso | eng | en_US |
dc.publisher | University of Agder | en_US |
dc.rights | Attribution-NonCommercial-NoDerivatives 4.0 International | * |
dc.rights.uri | http://creativecommons.org/licenses/by-nc-nd/4.0/deed.no | * |
dc.subject | IKT590 | en_US |
dc.title | A Generative Adversarial Network approach using Latent Space Manipulation for Clothing Display on Synthetic Images: Using the latent space of a non-primed StyleGAN2 network to manipulate arbitrary clothing onto generated humans | en_US |
dc.type | Master thesis | en_US |
dc.rights.holder | © 2020 Karl Audun Borgersen, Morten Grundetjern | en_US |
dc.subject.nsi | VDP::Teknologi: 500::Informasjons- og kommunikasjonsteknologi: 550 | en_US |
dc.subject.nsi | VDP::Matematikk og Naturvitenskap: 400::Informasjons- og kommunikasjonsvitenskap: 420::Simulering, visualisering, signalbehandling, bildeanalyse: 429 | en_US |
dc.source.pagenumber | 68 | en_US |