Convolutional Tsetlin Machine-based Training and Inference Accelerator for 2-D Pattern Classification
Peer-reviewed journal article
Original version: Tunheim, S. A., Lei, J., Shafik, R. A., Yakovlev, A. & Granmo, O.-C. (2023). Convolutional Tsetlin Machine-based Training and Inference Accelerator for 2-D Pattern Classification. Microprocessors and Microsystems: Embedded Hardware Design (MICPRO), 103, Article 104949. https://doi.org/10.1016/j.micpro.2023.104949
The Tsetlin Machine (TM) is a machine learning algorithm based on an ensemble of Tsetlin Automata (TAs) that learns propositional logic expressions from Boolean input features. This paper presents the design and implementation of a Field Programmable Gate Array (FPGA) accelerator based on the Convolutional Tsetlin Machine (CTM). The accelerator classifies two pattern classes in 4 × 4 Boolean images using a 2 × 2 convolution window. Specifically, there are two separate TMs, one per class. Each TM comprises 40 propositional logic formulas, denoted clauses, which are conjunctions of literals. Include/exclude actions from the TAs determine which literals are included in each clause. The accelerator supports full training, including random patch selection during convolution based on parallel reservoir sampling across all clauses. The design is implemented on a Xilinx Zynq XC7Z020 FPGA platform. At an operating clock speed of 40 MHz, the accelerator achieves a classification rate of 4.4 million images per second with an energy consumption of 0.6 µJ per classification. The mean test accuracy is 99.9% when trained on the 2-dimensional Noisy XOR dataset with 40% noise in the training labels. To achieve this performance, which is on par with the original software implementation, Linear Feedback Shift Register (LFSR) random number generators of at least 16 bits are required. The solution demonstrates the core principles of a CTM and can be scaled to multi-class systems operating on larger images.
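The abstract names three mechanisms: clauses as conjunctions of included literals, single-slot reservoir sampling for random patch selection, and a 16-bit LFSR as the random-number source. The sketch below illustrates each in plain Python under stated assumptions; function names are illustrative, not taken from the accelerator's RTL, and the LFSR tap positions (16, 14, 13, 11) are a standard maximal-length choice, not necessarily the ones used in the design.

```python
import random

def clause_output(include_mask, literals):
    """A clause is the conjunction (AND) of its included literals.
    include_mask[k] is True when the TA for literal k chose 'include'.
    For a 2x2 Boolean window, `literals` would hold the 4 pixels plus
    their 4 negations (8 literals in total)."""
    return all(lit for inc, lit in zip(include_mask, literals) if inc)

def reservoir_pick_patch(patch_outputs):
    """Uniformly pick one patch index among those where the clause fired,
    using single-slot reservoir sampling: constant memory, one streaming
    pass, so every clause can sample its own patch in parallel."""
    chosen, seen = None, 0
    for idx, fired in enumerate(patch_outputs):
        if fired:
            seen += 1
            if random.randrange(seen) == 0:  # keep this patch with prob 1/seen
                chosen = idx
    return chosen

def lfsr16_step(state):
    """One step of a 16-bit Fibonacci LFSR with taps 16, 14, 13, 11
    (polynomial x^16 + x^14 + x^13 + x^11 + 1), giving the maximal
    period of 2^16 - 1 = 65535 states. Tap choice is an assumption."""
    bit = ((state >> 0) ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
    return (state >> 1) | (bit << 15)
```

As a usage note, `reservoir_pick_patch([False, True, False, True])` returns either 1 or 3 with equal probability, which is exactly the behavior needed when several convolution patches satisfy the same clause and one must be chosen at random for the TA feedback.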