An Efficient Selection-Based kNN Architecture for Smart Embedded Hardware Accelerators

Abstract

K-Nearest Neighbor (kNN) is an effective algorithm used in many applications, e.g., text categorization, data mining, and predictive analysis. Despite its high computational complexity, kNN is a candidate for hardware acceleration since it is a parallelizable algorithm. This paper presents an efficient novel architecture and implementation of a kNN hardware accelerator targeting modern System-on-Chips (SoCs). The architecture adopts a selection-based sorter dedicated to kNN that outperforms traditional sorters in terms of hardware resources, time latency, and energy efficiency. The kNN architecture has been designed using High-Level Synthesis (HLS) and implemented on the Xilinx Zynqberry platform. Compared to similar state-of-the-art implementations, the proposed kNN provides speedups between 1.4× and 875× with 41% to 94% reductions in energy consumption. To further enhance the proposed architecture, algorithmic-level Approximate Computing Techniques (ACTs) have been applied. The proposed approximate kNN implementation accelerates the classification process by 2.3× with an average reduction in area of 56% for a real-time tactile data processing case study. The approximate kNN consumes 69% less energy with an accuracy loss of less than 3% when compared to the proposed exact kNN.
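To illustrate the core idea behind a selection-based sorter, here is a minimal software sketch (my illustration, not the authors' HLS design): rather than fully sorting all distances, a k-slot buffer retains the current k best candidates and evicts its worst entry whenever a closer sample appears, which maps naturally to a small bank of parallel registers in hardware.

```python
# Sketch of selection-based kNN classification (illustrative only; the
# function name, data layout, and parameters are assumptions, not the
# paper's actual interface). Instead of an O(n log n) full sort, a
# k-slot buffer keeps the k nearest candidates seen so far.

def knn_classify(train, labels, query, k=3):
    """Classify `query` by majority vote of its k nearest training samples."""

    def dist2(a, b):
        # Squared Euclidean distance: monotonic in the true distance,
        # so the ranking (and thus the selection) is unchanged.
        return sum((x - y) ** 2 for x, y in zip(a, b))

    # Selection buffer of (distance, label) pairs, at most k entries.
    buffer = []
    for sample, label in zip(train, labels):
        d = dist2(sample, query)
        if len(buffer) < k:
            buffer.append((d, label))
        else:
            # Evict the farthest buffered candidate if this one is closer.
            worst = max(range(k), key=lambda i: buffer[i][0])
            if d < buffer[worst][0]:
                buffer[worst] = (d, label)

    # Majority vote over the k retained labels.
    votes = {}
    for _, label in buffer:
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)
```

Each training sample is compared against the buffer exactly once, so the selection costs O(n·k) comparisons and never materializes a sorted distance list, which is the property that saves hardware resources relative to a traditional full sorter.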

Citation (APA)

Younes, H., Ibrahim, A., Rizk, M., & Valle, M. (2021). An Efficient Selection-Based kNN Architecture for Smart Embedded Hardware Accelerators. IEEE Open Journal of Circuits and Systems, 2, 534–545. https://doi.org/10.1109/OJCAS.2021.3108835
