dc.contributor.advisor: Andersen, Per-Arne
dc.contributor.advisor: Lei, Jiao
dc.contributor.author: Hindersland, Jonatan Hertzberg
dc.date.accessioned: 2023-07-07T16:23:46Z
dc.date.available: 2023-07-07T16:23:46Z
dc.date.issued: 2023
dc.identifier: no.uia:inspera:145679742:37244802
dc.identifier.uri: https://hdl.handle.net/11250/3077207
dc.description.abstract: Within the field of hydrology, there is a vital need to predict streamflow values from hydrological basins. This has traditionally been done through physics- and mathematics-based models, in which measured data are combined with physics-based formulas to estimate output values. More recently, machine learning has been introduced as a potential way to improve the performance of these predictions. One of the classical methods for time-series prediction has been the Long Short-Term Memory (LSTM) model, but the transformer model has also shown itself proficient at solving these kinds of problems. The purpose of this paper is to implement a transformer model within an existing model library for streamflow prediction and analyze its performance, both in terms of how it is affected by various hyperparameters and how it compares to other models. The findings from these tests indicate that the transformer encoder achieves results comparable to the LSTM, while the full transformer model performs noticeably worse. In addition, the paper finds that the cumulative distribution of the full transformer's scores differs significantly from that of the other models: it performs worse on most basins but significantly better on the top 20%. This indicates that the model specializes in certain basin groupings at the cost of generalization. Lack of generalization is a detriment; however, if the model or the data processing could be adapted to exploit the model's ability to specialize, it may achieve better results. This could be done by, for instance, adding an explicit categorization dimension to the model. To summarize, the transformer model does not currently outperform the state-of-the-art LSTM models, but it exhibits interesting behavior that should be studied further.
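The abstract credits the transformer encoder with LSTM-comparable results on streamflow sequences. Below is a minimal NumPy sketch of the scaled dot-product self-attention that forms the core of such an encoder; it is not code from the thesis, and the sequence length and feature dimension (30 days, 8 basin features) are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention.

    X: (seq_len, d_model) input sequence, e.g. daily basin features.
    Wq, Wk, Wv: (d_model, d_model) learned projection matrices.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # (seq_len, seq_len) pairwise scores
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ V                   # context-mixed representation

rng = np.random.default_rng(0)
seq_len, d_model = 30, 8                 # hypothetical: 30 days x 8 features
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (30, 8)
```

A full encoder would stack several such attention layers with feed-forward blocks, layer normalization, and a positional encoding, then regress the streamflow value from the final representation; this sketch shows only the attention step that lets each day attend to every other day in the input window.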
dc.publisher: University of Agder
dc.title: Analyzing the performance of transformers for streamflow prediction
dc.type: Master thesis

