
dc.contributor.author: Zhang, Xuan
dc.contributor.author: Granmo, Ole-Christoffer
dc.contributor.author: Oommen, B. John
dc.date.accessioned: 2013-04-26T08:43:15Z
dc.date.available: 2013-04-26T08:43:15Z
dc.date.issued: 2013
dc.identifier.citation: Zhang, X., Granmo, O.-C., & Oommen, B. J. (2013). On incorporating the paradigms of discretization and Bayesian estimation to create a new family of pursuit learning automata. Applied Intelligence, 39(4), 782-792. doi: 10.1007/s10489-013-0424-x
dc.identifier.issn: 0924-669X
dc.identifier.uri: http://hdl.handle.net/11250/138002
dc.description: Published version of an article in the journal: Applied Intelligence. Also available from the publisher at: http://dx.doi.org/10.1007/s10489-013-0424-x
dc.description.abstract: There are currently two fundamental paradigms that have been used to enhance the convergence speed of Learning Automata (LA). The first involves the concept of utilizing the estimates of the reward probabilities, while the second involves discretizing the probability space in which the LA operates. This paper demonstrates how both of these can be simultaneously utilized, in particular by using the family of Bayesian estimates that have been proven to have distinct advantages over their maximum likelihood counterparts. The success of LA-based estimator algorithms over the classical, Linear Reward-Inaction (LRI)-like schemes can be explained by their ability to pursue the actions with the highest reward probability estimates. Without access to reward probability estimates, it makes sense for schemes like the LRI to first make large exploring steps, and then to gradually turn exploration into exploitation by making progressively smaller learning steps. However, this behavior becomes counter-intuitive when pursuing actions based on their estimated reward probabilities. Learning should then ideally proceed in progressively larger steps, as the reward probability estimates become more accurate. This paper introduces a new estimator algorithm, the Discretized Bayesian Pursuit Algorithm (DBPA), that achieves this by incorporating both of the above paradigms. The DBPA is implemented by linearly discretizing the action probability space of the Bayesian Pursuit Algorithm (BPA) (Zhang et al. in IEA-AIE 2011, Springer, New York, pp. 608-620, 2011). The key innovation of this paper is that the linear discrete updating rules mitigate the counter-intuitive behavior of the corresponding linear continuous updating rules by augmenting them with the reward probability estimates. Extensive experimental results show the superiority of the DBPA over previous estimator algorithms; indeed, the DBPA is probably the fastest reported LA to date. Apart from the rigorous experimental demonstration of the strength of the DBPA, the paper also briefly records the proofs of why the BPA and the DBPA are ε-optimal in stationary environments. (A schematic sketch of the update loop described here follows the record below.)
dc.language.iso: eng
dc.publisher: Springer
dc.subject: ε-optimality
dc.subject: Bayesian reasoning
dc.subject: discretized learning
dc.subject: estimator algorithms
dc.subject: learning automata
dc.subject: pursuit schemes
dc.title: On incorporating the paradigms of discretization and Bayesian estimation to create a new family of pursuit learning automata
dc.type: Journal article
dc.type: Peer reviewed
dc.subject.nsi: VDP::Mathematics and natural science: 400::Information and communication science: 420::Knowledge based systems: 425
dc.source.pagenumber: 782-792
dc.source.volume: 39
dc.source.journal: Applied Intelligence
dc.source.issue: 4
dc.identifier.doi: 10.1007/s10489-013-0424-x
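The abstract above outlines how the DBPA couples Bayesian reward probability estimates with discretized pursuit updates. The following Python code is a minimal illustrative sketch of that loop, not the authors' implementation: the function name run_dbpa, its parameter names, the Beta(1, 1) priors, and the use of an upper posterior percentile as the reward estimate are assumptions made here for concreteness.

# Hypothetical sketch of a discretized Bayesian pursuit loop.
# Assumed (not taken from the record above): Beta(1, 1) priors, the
# 95th percentile of each Beta posterior as the reward estimate, and
# a linear step size delta = 1 / (r * resolution).
import random
from scipy.stats import beta

def run_dbpa(reward_probs, resolution=1000, horizon=100_000, q=0.95):
    r = len(reward_probs)            # number of actions
    delta = 1.0 / (r * resolution)   # smallest allowed probability step
    p = [1.0 / r] * r                # action selection probabilities
    wins = [1] * r                   # Beta posterior alpha (rewards + 1)
    losses = [1] * r                 # Beta posterior beta (penalties + 1)
    for _ in range(horizon):
        i = random.choices(range(r), weights=p)[0]   # pick an action
        if random.random() < reward_probs[i]:        # simulated environment
            wins[i] += 1
        else:
            losses[i] += 1
        # Bayesian estimate: upper q-percentile of each Beta posterior.
        est = [beta.ppf(q, wins[j], losses[j]) for j in range(r)]
        best = max(range(r), key=est.__getitem__)
        # Discretized pursuit: shift probability mass toward the action
        # with the highest estimate, in fixed steps of size delta.
        for j in range(r):
            if j != best:
                p[j] = max(p[j] - delta, 0.0)
        p[best] = 1.0 - sum(p[j] for j in range(r) if j != best)
    return p

# Example usage in a three-action stationary environment:
# p = run_dbpa([0.8, 0.6, 0.5])    # p should concentrate on action 0

In this sketch the fixed step size delta is what the linear discretization contributes; exploitation comes from always moving probability mass toward the currently best-estimated action rather than from shrinking the learning steps over time.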

