Abstract
The growth of video-based communication and information management has motivated applications that deal with video content to move to cloud-based data storage hosted in data centres. Applications ranging from social media to distance education, surveillance, business communication, and e-governance now treat video as one of the most preferred modes of communication. Video data combines audio and visual content, which makes analysing and conveying information highly convenient. Digital video is an electronic representation of moving visual images in the form of encoded digital data, in contrast to analog video, which represents moving images with an analog signal; it consists of a sequence of digital images displayed in rapid succession. Because the number of such applications keeps increasing, a large amount of video content is generated every day, and the complexity of retrieving information from that content grows with it. The central trade-off in video retrieval is that a smaller segment of the video can be retrieved with low time complexity, whereas preserving more of the information drives the retrieval time complexity up. Many parallel research efforts have therefore introduced methods for video summarization and summarization-based retrieval with efficient search, but most of these outcomes are criticized for either high time complexity or high information loss. This problem can be addressed by identifying an accurate ratio of key information frames within the total video content. Accordingly, this work presents a novel machine learning method for identifying key frames that relies not only on the information available in each frame but also validates candidate key frames against thresholds on the objects or changes in the frame. The method is further enhanced with an adaptive thresholding scheme for distributed and collaborative video information. The proposed algorithm achieves 98% accuracy in representing the video information and a nearly 99% reduction in the number of frames, which translates into a nearly 99% reduction in retrieval time complexity.
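The abstract describes key-frame selection driven by adaptive thresholds on the changes observed in each frame, but does not spell out the procedure. The Python sketch below (using OpenCV and NumPy) illustrates one plausible form of that idea: a frame is kept when its difference from the last kept frame exceeds a running mean-plus-standard-deviation threshold. The function name select_key_frames, the sensitivity parameter, and the mean-absolute-difference change measure are illustrative assumptions, not the authors' published algorithm.

```python
import cv2
import numpy as np

def select_key_frames(video_path, sensitivity=1.0):
    """Pick key-frame indices by adaptively thresholding inter-frame change.

    Illustrative sketch only: a frame is kept when its mean absolute
    difference from the previously kept frame exceeds the running
    mean + sensitivity * std of all differences observed so far.
    """
    cap = cv2.VideoCapture(video_path)
    key_frames = []   # indices of frames judged to carry key information
    diffs = []        # history of change scores feeding the adaptive threshold
    prev_gray = None
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev_gray is None:
            key_frames.append(idx)      # always keep the first frame
            prev_gray = gray
        else:
            diff = float(np.mean(cv2.absdiff(gray, prev_gray)))
            diffs.append(diff)
            threshold = np.mean(diffs) + sensitivity * np.std(diffs)
            if diff > threshold:        # change large enough: treat as key frame
                key_frames.append(idx)
                prev_gray = gray        # compare later frames to this new key frame
        idx += 1
    cap.release()
    return key_frames

# Example call with a hypothetical file name; downstream storage would keep
# only the returned frame indices, mirroring the frame reduction reported above.
# keys = select_key_frames("meeting_recording.mp4", sensitivity=1.5)
```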
Citation
Naga Raja, T., Venkata Ramana, V. V., & Damodharam, A. (2019). Video summarization using adaptive thresholding by machine learning for distributed cloud storage. International Journal of Engineering and Advanced Technology, 8(5), 741–748.