A Conclusive Analysis of the Finite-Time Behavior of the Discretized Pursuit Learning Automaton

Zhang, Xuan; Jiao, Lei; Oommen, John; Granmo, Ole-Christoffer
Journal article, Peer reviewed
Accepted version
View/Open
Jiao.pdf (977.1Kb)
URI
https://hdl.handle.net/11250/2647927
Date
2019
Collections
  • Publikasjoner fra CRIStin [4768]
  • Scientific Publications in Information and Communication Technology [784]
Original version
Zhang, X., Jiao, L., Oommen, J. & Granmo, O.-C. (2019). A Conclusive Analysis of the Finite-Time Behavior of the Discretized Pursuit Learning Automaton. IEEE Transactions on Neural Networks and Learning Systems, 31(1), 284-294. doi: 10.1109/TNNLS.2019.2900639
Abstract
This paper deals with the finite-time analysis (FTA) of learning automata (LA), a topic on which very little work has been reported in the literature, as opposed to the asymptotic steady-state analysis, for which there are, probably, scores of papers. As clarified later, the FTA of Markov chains in general, and of LA in particular, is unarguably far more complex than the asymptotic steady-state analysis. Such an FTA provides rigorous bounds on the time required for the LA to attain a given convergence accuracy. We concentrate on the FTA of the Discretized Pursuit Automaton (DPA), which is probably one of the fastest and most accurate LA reported. Although such an analysis was carried out many years ago, we record that the previous work is flawed. More specifically, in all brevity, the flaw lies in the wrongly "derived" monotonic behavior of the LA after a certain number of iterations. Rather, we claim that the property that should be invoked is the submartingale property, which renders the proof much more involved and deep. In this paper, we rectify the flaw and reestablish the FTA based on this submartingale phenomenon. More importantly, from the derived analysis, we are able to discover and clarify, for the first time, the underlying dilemma between the DPA's exploitation and exploration properties. We also nontrivially confirm the existence of an optimal learning rate, which yields a better comprehension of the DPA itself.
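The record itself does not spell out the DPA's update rule. As a rough illustration of the scheme under analysis, the following is a minimal Python sketch of a standard discretized pursuit (reward-inaction) iteration; the function and variable names, and the simulated Bernoulli environment, are illustrative assumptions, not taken from the paper.

```python
import random

def dpa_step(p, d_hat, counts, env_probs, delta):
    """One iteration of a discretized pursuit automaton (reward-inaction
    variant). `env_probs` simulates the environment's per-action reward
    probabilities; all names here are illustrative, not from the paper."""
    r = len(p)
    # 1. Choose an action according to the current action-probability vector.
    a = random.choices(range(r), weights=p)[0]
    # 2. Query the (stationary) environment: 1 = reward, 0 = penalty.
    reward = 1 if random.random() < env_probs[a] else 0
    # 3. Update the running reward estimate for the chosen action.
    counts[a] += 1
    d_hat[a] += (reward - d_hat[a]) / counts[a]
    # 4. "Pursue" the currently best-estimated action in discrete steps
    #    of size delta (the resolution-controlled learning rate).
    m = max(range(r), key=lambda i: d_hat[i])
    if reward:  # reward-inaction: the vector moves only on a reward
        for i in range(r):
            if i != m:
                p[i] = max(p[i] - delta, 0.0)
        p[m] = 1.0 - sum(p[i] for i in range(r) if i != m)
    return a, reward
```

The step size `delta` is typically 1/(rN) for resolution parameter N; the exploitation/exploration dilemma the abstract refers to shows up here as the choice of `delta` (large steps converge quickly but risk prematurely zeroing out the true best action's probability).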
Description
 
Author's accepted version (post-print).
 
© 20XX IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
 
Available from 20/03/2021.
 
Publisher
IEEE
Journal
IEEE Transactions on Neural Networks and Learning Systems
Copyright
© 20XX IEEE

Contact Us | Send Feedback

Privacy policy
DSpace software copyright © 2002-2019 DuraSpace

Service from Unit