Show simple item record

dc.contributor.author  Tunheim, Svein Anders
dc.contributor.author  Jiao, Lei
dc.contributor.author  Shafik, Rishad Ahmed
dc.contributor.author  Yakovlev, Alexandre
dc.contributor.author  Granmo, Ole-Christoffer
dc.date.accessioned  2023-11-21T12:37:55Z
dc.date.available  2023-11-21T12:37:55Z
dc.date.created  2023-10-31T12:54:20Z
dc.date.issued  2023
dc.identifier.citation  Tunheim, S. A., Jiao, L., Shafik, R. A., Yakovlev, A. & Granmo, O.-C. (2023). Convolutional Tsetlin Machine-based Training and Inference Accelerator for 2-D Pattern Classification. Microprocessors and Microsystems: Embedded Hardware Design (MICPRO), 103, Article 104949.  en_US
dc.identifier.issn  1872-9436
dc.identifier.uri  https://hdl.handle.net/11250/3103846
dc.description.abstract  The Tsetlin Machine (TM) is a machine learning algorithm based on an ensemble of Tsetlin Automata (TAs) that learns propositional logic expressions from Boolean input features. This paper presents the design and implementation of a Field Programmable Gate Array (FPGA) accelerator based on the Convolutional Tsetlin Machine (CTM). The accelerator classifies two pattern classes in 4 × 4 Boolean images using a 2 × 2 convolution window. Specifically, there are two separate TMs, one per class. Each TM comprises 40 propositional logic formulas, denoted clauses, which are conjunctions of literals. Include/exclude actions from the TAs determine which literals are included in each clause. The accelerator supports full training, including random patch selection during convolution based on parallel reservoir sampling across all clauses. The design is implemented on a Xilinx Zynq XC7Z020 FPGA platform. At a 40 MHz operating clock, the accelerator achieves a classification rate of 4.4 million images per second with an energy per classification of 0.6 μJ. The mean test accuracy is 99.9% when trained on the 2-dimensional Noisy XOR dataset with 40% noise in the training labels. To achieve this performance, which is on par with the original software implementation, Linear Feedback Shift Register (LFSR) random number generators of at least 16 bits are required. The solution demonstrates the core principles of a CTM and can be scaled to multi-class systems operating on larger images.  en_US
dc.language.iso  eng  en_US
dc.publisher  Elsevier  en_US
dc.rights  Navngivelse 4.0 Internasjonal (Attribution 4.0 International)  *
dc.rights.uri  http://creativecommons.org/licenses/by/4.0/deed.no  *
dc.title  Convolutional Tsetlin Machine-based Training and Inference Accelerator for 2-D Pattern Classification  en_US
dc.title.alternative  Convolutional Tsetlin Machine-based Training and Inference Accelerator for 2-D Pattern Classification  en_US
dc.type  Peer reviewed  en_US
dc.type  Journal article  en_US
dc.description.version  publishedVersion  en_US
dc.rights.holder  © 2023 The Author(s)  en_US
dc.subject.nsi  VDP::Teknologi: 500::Informasjons- og kommunikasjonsteknologi: 550 (Technology: 500::Information and communication technology: 550)  en_US
dc.source.volume  103  en_US
dc.source.journal  Microprocessors and Microsystems: Embedded Hardware Design (MICPRO)  en_US
dc.identifier.doi  https://doi.org/10.1016/j.micpro.2023.104949
dc.identifier.cristin  2190512
dc.source.articlenumber  104949  en_US
cristin.qualitycode  1
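The abstract describes two mechanisms worth making concrete: a clause as a conjunction of included literals (chosen by the TAs' include/exclude actions), and LFSR random number generators of at least 16 bits. Below is a minimal illustrative sketch, not the authors' hardware design; the function names, the flattened-patch representation, and the Galois tap constant are assumptions made for illustration.

```python
def clause_output(patch_bits, include_pos, include_neg):
    """Evaluate one CTM clause on a flattened Boolean patch.

    A clause is the conjunction of its included literals: plain inputs x_k
    (masked by include_pos) and negated inputs NOT x_k (masked by include_neg).
    """
    for x, inc_p, inc_n in zip(patch_bits, include_pos, include_neg):
        if inc_p and not x:   # an included positive literal is 0 -> clause is 0
            return 0
        if inc_n and x:       # an included negated literal is 0 -> clause is 0
            return 0
    return 1                  # every included literal evaluated to 1


def lfsr16_step(state):
    """One step of a 16-bit Galois LFSR (tap mask 0xB400 gives a
    maximal-length sequence; the paper does not specify the polynomial)."""
    lsb = state & 1
    state >>= 1
    if lsb:
        state ^= 0xB400
    return state
```

For example, with `include_pos = [1, 0, 0, 0]` and `include_neg = [0, 0, 0, 1]`, the clause encodes `x0 AND NOT x3`, so the 2 × 2 patch `[1, 1, 1, 0]` satisfies it while `[0, 1, 1, 0]` does not. In hardware, all 40 clauses per TM evaluate such conjunctions in parallel, and per-clause LFSRs supply the randomness for reservoir sampling of patches and for stochastic feedback during training.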


