We presented a learning model that generates natural language descriptions of images. The model exploits the connection between natural language and visual data by producing text-line-based content from a given image. Our hybrid recurrent neural network model combines Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), and Bi-directional Recurrent Neural Network (BRNN) components. We conducted experiments on three benchmark datasets: Flickr8K, Flickr30K, and MS COCO. The hybrid model uses an LSTM to encode text lines or sentences independently of object location and a BRNN for word representation, which reduces computational complexity without compromising the accuracy of the descriptor. The model achieved better accuracy in retrieving natural language descriptions of images on these datasets.
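To make the described pipeline concrete, the following is a minimal sketch of a CNN + BRNN/LSTM hybrid captioning architecture, assuming PyTorch. All module names, dimensions, and wiring choices (e.g., the ResNet-18 backbone and the way image features are concatenated with word representations) are illustrative assumptions, not the authors' exact implementation.

import torch
import torch.nn as nn
import torchvision.models as models


class CNNEncoder(nn.Module):
    # Extracts a fixed-size image feature vector with a CNN backbone (assumed: ResNet-18).
    def __init__(self, embed_dim=256):
        super().__init__()
        backbone = models.resnet18(weights=None)
        self.features = nn.Sequential(*list(backbone.children())[:-1])
        self.project = nn.Linear(backbone.fc.in_features, embed_dim)

    def forward(self, images):                       # images: (B, 3, H, W)
        feats = self.features(images).flatten(1)     # (B, 512)
        return self.project(feats)                   # (B, embed_dim)


class HybridCaptioner(nn.Module):
    # BRNN builds context-aware word representations; an LSTM encodes the
    # sentence conditioned on the image features, independent of object location.
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.brnn = nn.RNN(embed_dim, hidden_dim // 2,
                           bidirectional=True, batch_first=True)
        self.lstm = nn.LSTM(hidden_dim + embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, image_feats, captions):        # captions: (B, T) token ids
        words = self.embed(captions)                 # (B, T, embed_dim)
        word_repr, _ = self.brnn(words)              # (B, T, hidden_dim)
        img = image_feats.unsqueeze(1).expand(-1, captions.size(1), -1)
        hidden, _ = self.lstm(torch.cat([word_repr, img], dim=-1))
        return self.out(hidden)                      # (B, T, vocab_size) logits

In use, the CNN encoder would be run once per image and the captioner trained with a cross-entropy loss over next-word predictions; the hyperparameters above are placeholders.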
CITATION STYLE
Asifuzzaman Jishan, M., Mahmud, K. R., & Al Azad, A. K. (2019). Natural language description of images using hybrid recurrent neural network. International Journal of Electrical and Computer Engineering, 9(4), 2932–2940. https://doi.org/10.11591/ijece.v9i4.pp2932-2940