A novel feature fusion model to mimic photographers’ active observation for scenery recomposition toward physical education


This article is free to access.

Abstract

We develop a novel computational model that mimics photographers’ observation techniques for scenery recomposition. Central to the model is a hierarchical structure that captures human gaze dynamics, using the Binarized Normed Gradients (BING) objectness measure to identify meaningful scene patches. We introduce Locality-preserved and Observer-like Active Learning (LOAL), a strategy that constructs gaze shift paths (GSPs) incrementally and allows user interaction during feature selection. The GSPs are processed by a multi-layer aggregation algorithm to produce deep feature representations, which are encoded into a Gaussian mixture model (GMM) that underpins our image retargeting approach. Empirical analyses, supported by a user study, show that our method significantly outperforms comparable techniques, achieving a precision rate 3.2% higher than the second-best performer while halving testing time. This streamlined approach blends aesthetic judgment with algorithmic efficiency, enhancing AI-driven scene analysis.
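The final encoding step described above can be sketched in a few lines. The following is an illustrative sketch only, not the authors' implementation: the feature dimensionality, component count, and variable names are all assumptions, and random vectors stand in for the deep GSP features the paper actually extracts.

```python
# Sketch: encode feature vectors gathered along gaze shift paths (GSPs)
# into a Gaussian mixture model, mirroring the abstract's final
# representation step. All dimensions and names are hypothetical.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Stand-in for deep GSP features: 200 scene patches x 64-dim descriptors.
gsp_features = rng.normal(size=(200, 64))

# Fit a GMM over the aggregated features; each component loosely models
# one cluster of visually similar scene patches.
gmm = GaussianMixture(n_components=5, covariance_type="diag", random_state=0)
gmm.fit(gsp_features)

# The per-patch posterior over components gives a compact soft encoding
# that a retargeting stage could consume.
encoding = gmm.predict_proba(gsp_features)
print(encoding.shape)
```

Here `predict_proba` returns, for every patch, a probability distribution over the mixture components, so each row sums to one.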

Citation (APA)

Tang, D., & Wang, S. (2025). A novel feature fusion model to mimic photographers’ active observation for scenery recomposition toward physical education. Scientific Reports, 15(1). https://doi.org/10.1038/s41598-025-02678-5
