Performance Evaluation of SoC-FPGA Based Floating-Point Implementation of GMM for Real-Time Background Subtraction

Abstract

Embedded systems continue to emerge as solutions for smart surveillance. Background subtraction using a Gaussian Mixture Model (GMM) is a common step in video processing. This work discusses the implementation of an embedded vision system on a system-on-a-chip (SoC) device that integrates a processor and an FPGA (Field Programmable Gate Array). Conventional Register Transfer Level (RTL) design, typically used for FPGA programming, is slow to develop, and implementing floating-point arithmetic at that level is complex. High-Level Synthesis (HLS) tools, by contrast, allow algorithms to be described in high-level programming languages. Three background subtraction algorithms with floating-point arithmetic were developed using a hardware-software co-design methodology. The paper presents the details of the implementation on a ZedBoard Zynq Evaluation and Development Kit, considering requirements such as hardware resources and power consumption. Performance comparisons among PC-based, ARM, FPGA, and SoC-FPGA implementations are also presented. The results show that the frame rates needed for real-time video processing were reached.
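As a rough illustration of the per-pixel update that a floating-point GMM background subtractor performs, the sketch below shows a single-channel, Stauffer-Grimson-style step in plain C++ with single-precision floats. The function name, the number of Gaussians, the learning rate, and the thresholds are illustrative assumptions for this sketch, not the paper's actual HLS code.

```cpp
// Minimal single-channel GMM background-subtraction step (Stauffer-Grimson style).
// All constants below are assumed values for illustration only.
constexpr int   K           = 3;      // Gaussians per pixel (assumed)
constexpr float ALPHA       = 0.01f;  // learning rate (assumed)
constexpr float VAR_INIT    = 36.0f;  // variance given to a new mode
constexpr float MATCH_SIGMA = 2.5f;   // match threshold in standard deviations
constexpr float W_MIN       = 0.2f;   // matched modes lighter than this stay foreground

struct Gaussian {
    float weight;
    float mean;
    float var;
};

// Updates one pixel's mixture in place and returns true if the pixel is foreground.
bool gmm_update_pixel(Gaussian model[K], float pixel)
{
    // 1. Find the first mode within MATCH_SIGMA standard deviations of the pixel.
    int matched = -1;
    for (int k = 0; k < K; ++k) {
        float d = pixel - model[k].mean;
        if (d * d < MATCH_SIGMA * MATCH_SIGMA * model[k].var) { matched = k; break; }
    }

    if (matched >= 0) {
        // 2a. Adapt the matched mode and re-weight the mixture.
        Gaussian &g = model[matched];
        float rho   = ALPHA / (g.weight > ALPHA ? g.weight : ALPHA);
        float d     = pixel - g.mean;
        g.mean     += rho * d;
        g.var      += rho * (d * d - g.var);
        for (int k = 0; k < K; ++k)
            model[k].weight = (1.0f - ALPHA) * model[k].weight
                            + (k == matched ? ALPHA : 0.0f);
    } else {
        // 2b. No match: replace the weakest mode with a new Gaussian on the pixel.
        int weakest = 0;
        for (int k = 1; k < K; ++k)
            if (model[k].weight < model[weakest].weight) weakest = k;
        model[weakest] = {ALPHA, pixel, VAR_INIT};
    }

    // 3. Renormalise the weights so they sum to one.
    float total = 0.0f;
    for (int k = 0; k < K; ++k) total += model[k].weight;
    for (int k = 0; k < K; ++k) model[k].weight /= total;

    // 4. Foreground if nothing matched or the matched mode carries little weight.
    return matched < 0 || model[matched].weight < W_MIN;
}
```

In an HLS flow, a loop of this kind over the pixel stream is what gets pipelined into hardware, with the per-pixel mixture state kept in external or on-chip memory; the exact partitioning between the ARM processor and the FPGA fabric is a design choice the paper evaluates.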

Citation (APA)
Morantes-Guzmán, L. J., Alzate, C., Castano-Londono, L., Marquez-Viloria, D., & Vargas-Bonilla, J. F. (2019). Performance Evaluation of SoC-FPGA Based Floating-Point Implementation of GMM for Real-Time Background Subtraction. In Communications in Computer and Information Science (Vol. 1052, pp. 126–134). Springer. https://doi.org/10.1007/978-3-030-31019-6_11
