dc.contributor.author: Yazidi, Anis
dc.contributor.author: Hassan, Ismail
dc.contributor.author: Hammer, Hugo Lewi
dc.contributor.author: Oommen, B. John
dc.date.accessioned: 2021-02-11T21:26:06Z
dc.date.available: 2021-02-11T21:26:06Z
dc.date.created: 2020-08-07T06:32:23Z
dc.date.issued: 2020
dc.identifier.citation: Yazidi, A., Hassan, I., Hammer, H. L. & Oommen, B. J. (2020). Achieving Fair Load Balancing by Invoking a Learning Automata-based Two Time Scale Separation Paradigm. IEEE Transactions on Neural Networks and Learning Systems. doi: 10.1109/TNNLS.2020.3010888 [en_US]
dc.identifier.issn: 2162-237X
dc.identifier.uri: https://hdl.handle.net/11250/2727534
dc.description: Author's accepted manuscript. [en_US]
dc.description: © 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
dc.description.abstract: In this article, we consider the problem of load balancing (LB), but, unlike the approaches that have been proposed earlier, we attempt to resolve the problem in a fair manner (or rather, it would probably be more appropriate to describe it as an ε-fair manner because, although the LB can, probably, never be totally fair, we achieve this by being "as close to fair as possible"). The solution that we propose invokes a novel stochastic learning automaton (LA) scheme, so as to attain a distribution of the load to a number of nodes, where the performance level at the different nodes is approximately equal and each user experiences approximately the same Quality of the Service (QoS) irrespective of which node he/she is connected to. Since the load is dynamically varying, static resource allocation schemes are doomed to underperform. This is further relevant in cloud environments, where we need dynamic approaches because the available resources are unpredictable (or rather, uncertain) by virtue of the shared nature of the resource pool. Furthermore, we prove here that there is a coupling involving the LA's probabilities and the dynamics of the rewards themselves, which renders the environments to be nonstationary. This leads to the emergence of the so-called property of "stochastic diminishing rewards." Our newly proposed novel LA algorithm ε-optimally solves the problem, and this is done by resorting to a two-time-scale-based stochastic learning paradigm. As far as we know, the results presented here are of a pioneering sort, and we are unaware of any comparable results. [en_US]
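The abstract refers to a stochastic learning automaton that distributes load across nodes by updating action probabilities from reward feedback. As a rough illustration of the kind of LA update the abstract alludes to, the sketch below implements a generic Linear Reward-Inaction (L_RI) automaton choosing among three nodes against a toy reward environment. This is an assumption-laden illustration only: the node count, the reward rates, the learning rate `lam`, and the L_RI rule itself are illustrative choices, not the paper's actual two-time-scale algorithm.

```python
import random

def lri_step(probs, chosen, rewarded, lam=0.1):
    """One generic L_RI update (illustrative, not the paper's scheme):
    on reward, shift probability mass toward the chosen action;
    on penalty, leave the vector unchanged (the 'inaction' part)."""
    if not rewarded:
        return probs
    return [p + lam * (1 - p) if i == chosen else p * (1 - lam)
            for i, p in enumerate(probs)]

random.seed(0)
probs = [1 / 3, 1 / 3, 1 / 3]  # action probabilities over 3 hypothetical nodes

for _ in range(200):
    node = random.choices(range(3), weights=probs)[0]
    # Toy stationary environment: node 0 rewards 80% of the time, others 30%.
    rewarded = random.random() < (0.8 if node == 0 else 0.3)
    probs = lri_step(probs, node, rewarded)

# The probability vector remains a valid distribution throughout,
# and mass typically concentrates on the better-rewarding node.
print(probs)
```

Note that the update preserves the simplex: the chosen action gains λ(1 − p) while every other action shrinks by the factor (1 − λ), so the components still sum to one. The paper's setting is harder than this sketch because, as the abstract states, the rewards themselves depend on the LA's probabilities (nonstationary, "stochastic diminishing rewards"), which is what motivates the two-time-scale separation.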
dc.language.iso: eng [en_US]
dc.publisher: IEEE [en_US]
dc.title: Achieving Fair Load Balancing by Invoking a Learning Automata-based Two Time Scale Separation Paradigm [en_US]
dc.type: Journal article [en_US]
dc.type: Peer reviewed [en_US]
dc.description.version: acceptedVersion [en_US]
dc.rights.holder: © 2020 IEEE [en_US]
dc.subject.nsi: VDP::Teknologi: 500::Informasjons- og kommunikasjonsteknologi: 550 [en_US]
dc.source.pagenumber: 14 [en_US]
dc.source.journal: IEEE Transactions on Neural Networks and Learning Systems [en_US]
dc.identifier.doi: 10.1109/TNNLS.2020.3010888
dc.identifier.cristin: 1822128
cristin.qualitycode: 2

