Fast and Accurate YOLO Framework for Live Object Detection

Abstract

You Only Look Once (YOLO) is a popular real-time object detection framework that uses a single neural network to detect the objects captured in an image. The key idea behind YOLO is to perform object detection in one forward pass of the network, rather than using a two-stage pipeline as in many other object detection frameworks. The framework works by dividing an image into a grid of cells and making each cell responsible for detecting objects. The network then predicts bounding boxes and class probabilities for the objects within each cell. YOLO uses a convolutional neural network (CNN) architecture: the network takes an image as input and outputs a set of bounding boxes and class probabilities for the objects in the image. YOLO has proven effective for real-time object detection and is widely used across many domains. However, it has some limitations, such as lower accuracy compared to other frameworks and difficulty detecting small objects. Despite these limitations, YOLO remains a popular choice for real-time object detection due to its efficiency and speed.
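The grid-based prediction described in the abstract can be illustrated with a short decoding sketch. The snippet below is a minimal, illustrative example rather than code from the paper; it assumes a YOLOv1-style output tensor of shape S × S × (B·5 + C), with the values S = 7 grid cells per side, B = 2 boxes per cell, and C = 20 classes chosen only for demonstration. It converts the raw per-cell predictions into image-relative boxes with class-specific confidence scores.

```python
import numpy as np

# Assumed layout (illustrative only): each grid cell stores B boxes of
# (x, y, w, h, confidence) followed by C shared class probabilities.
S, B, C = 7, 2, 20

def decode_predictions(pred, conf_threshold=0.25):
    """Convert an (S, S, B*5 + C) grid of predictions into detections.

    Returns a list of (cx, cy, w, h, score, class_id) tuples, where cx/cy
    are image-relative box centers and score = box confidence * best class
    probability.
    """
    boxes = []
    for row in range(S):
        for col in range(S):
            cell = pred[row, col]
            class_probs = cell[B * 5:]            # class probabilities shared by the cell
            for b in range(B):
                x, y, w, h, conf = cell[b * 5:(b + 1) * 5]
                # x, y are offsets within the cell; convert to image-relative coordinates
                cx = (col + x) / S
                cy = (row + y) / S
                score = conf * class_probs.max()  # class-specific confidence
                if score >= conf_threshold:
                    boxes.append((cx, cy, w, h, score, int(class_probs.argmax())))
    return boxes

# Random tensor standing in for a real forward pass of the network.
dummy_output = np.random.rand(S, S, B * 5 + C)
print(len(decode_predictions(dummy_output)), "candidate boxes above threshold")
```

In practice, the decoded boxes would additionally be filtered with non-maximum suppression to discard overlapping detections of the same object.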

Citation (APA)

Ajith Babu, R. R., Dhushyanth, H. M., Hemanth, R., Naveen Kumar, M., Sushma, B. A., & Loganayagi, B. (2023). Fast and Accurate YOLO Framework for Live Object Detection. In Lecture Notes in Networks and Systems (Vol. 757 LNNS, pp. 555–567). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-981-99-5166-6_38
