A machine learning algorithm for jitter reduction and video quality enhancement in IoT environment

ISSN: 2249-8958

Abstract

Multimedia traffic has been increasing sharply due to its growing usage and necessity. CCTV (closed-circuit television) cameras are widely used these days as a matter of security. From shopping malls to homes, CCTV cameras play a vital role. The challenge arises when the media data captured by the cameras must be transmitted to the owner's display monitors, and there are scenarios where more than one CCTV camera covers a particular region. CCTV surveillance accounts for 70% of network traffic, so it is important to reduce traffic and packet delay in order to deliver the data to the user on time. We use a wireless SDN (software-defined network) to transfer the multimedia data to the display monitor. The SDN control switch is integrated with an AI module in which a machine learning algorithm (the BAT algorithm) prioritises the data packets. For more efficient storage, the data is uploaded to the IoT cloud.
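The record does not include source code, so the following Python sketch only illustrates one plausible way the bat-inspired metaheuristic (Yang, 2010) could tune packet-priority weights to minimise a jitter proxy at the SDN controller. All names (FLOWS, jitter_proxy, bat_optimise), the flow features, and the objective are hypothetical illustrations under stated assumptions, not the authors' implementation.

import numpy as np

rng = np.random.default_rng(42)

# Hypothetical per-flow features: [delay_sensitivity, bitrate, loss_rate].
FLOWS = rng.random((20, 3))

def jitter_proxy(weights):
    """Toy objective: variance of inter-delivery gaps implied by the
    priority ordering the weight vector induces (lower is better)."""
    scores = FLOWS @ weights
    order = np.argsort(-scores)                     # high score -> served first
    service = 1.0 / (1.0 + FLOWS[order, 1])         # higher bitrate -> longer slot
    departures = np.cumsum(service)
    return np.var(np.diff(departures))

def bat_optimise(obj, dim=3, n_bats=15, iters=100,
                 f_min=0.0, f_max=2.0, alpha=0.9, gamma=0.9):
    """Standard BAT search: frequency-tuned velocities, local random walks
    gated by pulse rate, and acceptance gated by loudness."""
    x = rng.random((n_bats, dim))                   # candidate weight vectors
    v = np.zeros((n_bats, dim))                     # velocities
    loud = np.ones(n_bats)                          # loudness A_i
    pulse = rng.random(n_bats)                      # pulse emission rate r_i
    r0 = pulse.copy()
    fitness = np.array([obj(b) for b in x])
    best = x[fitness.argmin()].copy()

    for t in range(1, iters + 1):
        for i in range(n_bats):
            f = f_min + (f_max - f_min) * rng.random()
            v[i] += (x[i] - best) * f
            cand = x[i] + v[i]
            if rng.random() > pulse[i]:             # local walk around the best bat
                cand = best + 0.01 * rng.standard_normal(dim) * loud.mean()
            cand = np.clip(cand, 0.0, 1.0)
            f_cand = obj(cand)
            if f_cand <= fitness[i] and rng.random() < loud[i]:
                x[i], fitness[i] = cand, f_cand
                loud[i] *= alpha                    # quieter as it converges
                pulse[i] = r0[i] * (1 - np.exp(-gamma * t))
            if f_cand <= fitness.min():
                best = cand.copy()
    return best

weights = bat_optimise(jitter_proxy)
print("priority weights:", np.round(weights, 3))
print("jitter proxy:", jitter_proxy(weights))

In a deployment of this kind, the optimised weight vector would be pushed to the controller's scheduling logic so that delay-sensitive video flows are dequeued first; the toy objective here stands in for whatever jitter measurement the controller actually collects.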

Citation (APA)

Jagadessan, J., Nikita, B., Preta, G. D., & Priya, H. H. (2019). A machine learning algorithm for jitter reduction and video quality enhancement in IoT environment. International Journal of Engineering and Advanced Technology, 8(4), 667–672.
