Show simple item record

dc.contributor.author: Haugland, Vegard
dc.contributor.author: Kjølleberg, Marius
dc.contributor.author: Larsen, Svein-Erik
dc.contributor.author: Granmo, Ole-Christoffer
dc.date.accessioned: 2011-11-15T13:18:30Z
dc.date.available: 2011-11-15T13:18:30Z
dc.date.issued: 2011
dc.identifier.citation: Haugland, V., Kjølleberg, M., Larsen, S.-E., & Granmo, O.-C. (2011). A two-armed bandit collective for examplar based mining of frequent itemsets with applications to intrusion detection. In P. Jedrzejowicz, N. Nguyen & K. Hoang (Eds.), Computational Collective Intelligence. Technologies and Applications (Vol. 6922, pp. 72-81). Springer Berlin / Heidelberg.
dc.identifier.isbn: 978-3-642-23934-2
dc.identifier.uri: http://hdl.handle.net/11250/137869
dc.description: Chapter from the book: Computational Collective Intelligence. Technologies and Applications. Also available from the publisher at SpringerLink: http://dx.doi.org/10.1007/978-3-642-23935-9_7
dc.description.abstract: Over the last decades, frequent itemset mining has become a major area of research, with applications including indexing and similarity search, as well as mining of data streams, web, and software bugs. Although several efficient techniques for generating frequent itemsets with a minimum support (frequency) have been proposed, the number of itemsets produced is in many cases too large for effective usage in real-life applications. Indeed, the problem of deriving frequent itemsets that are both compact and of high quality remains to a large degree open. In this paper we address the above problem by posing frequent itemset mining as a collection of interrelated two-armed bandit problems. In brief, we seek to find itemsets that frequently appear as subsets in a stream of itemsets, with the frequency being constrained to support granularity requirements. Starting from a randomly or manually selected examplar itemset, a collective of Tsetlin automata based two-armed bandit players aims to learn which items should be included in the frequent itemset. A novel reinforcement scheme allows the bandit players to learn this in a decentralized and on-line manner by observing one itemset at a time. Since each bandit player learns simply by updating the state of a finite automaton, and since the reinforcement feedback is calculated purely from the present itemset and the corresponding decisions of the bandit players, the resulting memory footprint is minimal. Furthermore, computational complexity grows merely linearly with the cardinality of the examplar itemset. The proposed scheme is extensively evaluated using both artificial data as well as data from a real-world network intrusion detection application. The results are conclusive, demonstrating an excellent ability to find frequent itemsets at various levels of support. Furthermore, the sets of frequent itemsets produced for network intrusion detection are compact, yet accurately describe the different types of network traffic present. [A minimal illustrative sketch of the bandit collective follows the metadata record below.]
dc.language.iso: eng
dc.publisher: Springer Berlin/Heidelberg
dc.relation.ispartofseries: Lecture Notes in Computer Science;6922
dc.title: A two-armed bandit collective for examplar based mining of frequent itemsets with applications to intrusion detection
dc.type: Chapter
dc.type: Peer reviewed
dc.subject.nsi: VDP::Technology: 500::Information and communication technology: 550::Computer technology: 551
dc.subject.nsi: VDP::Mathematics and natural science: 400::Information and communication science: 420::Algorithms and computability theory: 422
dc.source.pagenumber: 72-81
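
Illustrative sketch: The abstract describes the mechanism only at a high level, so the following Python sketch shows, under stated assumptions, how a collective of two-action Tsetlin automata could decide item by item which members of an exemplar itemset to keep while observing a stream of itemsets. The names TsetlinAutomatonPlayer and mine_frequent_itemset, the state-count parameter, and the simplified feedback rule are assumptions made for this sketch; they are not the reinforcement scheme defined in the chapter.

import random

class TsetlinAutomatonPlayer:
    """Two-armed bandit player for a single item, realised as a
    two-action Tsetlin automaton with n_states states per action."""

    def __init__(self, n_states=6):
        self.n_states = n_states
        # States 0..n_states-1 favour "exclude";
        # states n_states..2*n_states-1 favour "include".
        # Start in one of the two boundary (least confident) states.
        self.state = random.choice([n_states - 1, n_states])

    def decide(self):
        # True means "include this item in the candidate frequent itemset".
        return self.state >= self.n_states

    def reward(self):
        # Move deeper into the current action's half (more confident).
        if self.decide():
            self.state = min(self.state + 1, 2 * self.n_states - 1)
        else:
            self.state = max(self.state - 1, 0)

    def penalize(self):
        # Move one step towards the other action (less confident / switch).
        if self.decide():
            self.state -= 1
        else:
            self.state += 1


def mine_frequent_itemset(stream, exemplar, support_target=0.5, n_states=6):
    """On-line mining of one frequent itemset around an exemplar.

    `stream` is an iterable of itemsets (Python sets); `exemplar` is the
    starting itemset.  The feedback rule below is a simplified stand-in
    for the reinforcement scheme in the chapter, not a reproduction of it.
    """
    players = {item: TsetlinAutomatonPlayer(n_states) for item in exemplar}
    for observed in stream:
        candidate = {item for item, p in players.items() if p.decide()}
        contained = candidate <= observed
        for item, p in players.items():
            if p.decide():
                # Included items: reward when the whole candidate appears
                # in the observed itemset, otherwise penalise with a
                # probability tied to the requested support.
                if contained:
                    p.reward()
                elif random.random() < support_target:
                    p.penalize()
            else:
                # Excluded items: occasionally nudge towards re-inclusion
                # when the item is present in the observed itemset.
                if item in observed and random.random() < 1.0 - support_target:
                    p.penalize()
    return {item for item, p in players.items() if p.decide()}

In this sketch each player only stores a single automaton state and reacts to the current itemset and the other players' decisions, which mirrors the minimal-memory, decentralized, on-line character described in the abstract; the chapter itself defines the exact reward and penalty probabilities and the support granularity constraints.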

