Scene labeling using H-LSTM by predicting the pixels using various functions

Abstract

Scene labeling plays an important role in scene understanding: pixels are classified and grouped together to assign labels to the regions of an image. Many neural networks have been applied to this task and produce good results, and such systems work well even without preprocessing steps or graphical models. In this work, the network used to extract features is a Hierarchical LSTM (H-LSTM), which has already given strong results in scene parsing in existing methods. To reduce computation time and increase pixel accuracy, the H-LSTM is combined with the Makecform and Softmax functions. The color transformation is applied using the Makecform function, and the color-enhanced image is given as input to the H-LSTM to identify objects based on their reference shape and color. The H-LSTM constructs the network by taking a reference pattern and its corresponding label as input, and pixels in the neighbourhood are identified with the help of the network. In this method, the color image is also converted to greyscale before the Hierarchical LSTM is applied. When implemented in MATLAB, the method gives better results than other methods in terms of pixel accuracy and computation time.
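
As a rough illustration of the pipeline described above, the MATLAB sketch below applies the Makecform color transformation, produces the greyscale image fed to the H-LSTM stage, and applies a softmax over per-pixel class scores. The H-LSTM network itself is not reproduced; the helper name hlstmScores, the input file name, and the score layout are hypothetical placeholders, not details taken from the paper.

% Minimal MATLAB sketch of the preprocessing and classification steps
% described in the abstract. Requires the Image Processing Toolbox for
% makecform/applycform. "hlstmScores" stands in for the trained H-LSTM
% network, which is not reproduced here.

rgb = imread('scene.jpg');                  % input scene image (illustrative path)

% Color transformation with makecform: sRGB -> CIELAB
cform = makecform('srgb2lab');
lab   = applycform(im2double(rgb), cform);

% Greyscale conversion before the Hierarchical LSTM stage
grey = rgb2gray(rgb);

% Per-pixel class scores from the (assumed) trained H-LSTM network:
% scores is H x W x K for K candidate labels.
scores = hlstmScores(grey, lab);            % hypothetical helper

% Softmax over the class dimension to obtain per-pixel label probabilities
expS  = exp(scores - max(scores, [], 3));   % subtract the max for numerical stability
probs = expS ./ sum(expS, 3);

% Final scene labels: the most probable class at each pixel
[~, labels] = max(probs, [], 3);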

CITATION STYLE

APA

Shanmugapriya, N., & Chitra, D. (2019). Scene labeling using H-LSTM by predicting the pixels using various functions. International Journal of Recent Technology and Engineering, 8(3), 1179–1185. https://doi.org/10.35940/ijrte.C4285.098319
