Tensor Decomposition Through Neural Architectures

Abstract

Machine learning (ML) technologies are currently widely used in many domains of science and technology to discover models that transform input data into output data. The main advantages of such a procedure are the generality and simplicity of the learning process, while its main weaknesses remain the amount of data required for training and the recurrent difficulty of explaining the rationale involved. At present, a panoply of ML techniques exists, and the choice of one method over another depends, in general, on the type and amount of data considered. This paper proposes a procedure whose output is not a field or an image but its singular value decomposition (SVD), or an SVD-like decomposition, while taking as input either scalar data or the SVD of an input field. The result is a tensor-to-tensor decomposition, without the need for the full fields, or an input-to-output SVD-like decomposition. The proposed method works for non-hyper-parallelepipedic domains and for any space dimensionality. The results show the ability of the proposed architecture to link the input field and the output field without requiring access to the full-space reconstruction.
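
The abstract stops short of implementation detail, so the following is a minimal, hypothetical Python sketch of the general idea it describes: a small network trained to emit the coefficients of a truncated SVD basis rather than the full field. Everything here (the synthetic data, the two scalar inputs, the rank, the MLP shape) is an assumption for illustration, not the authors' architecture.

    # Hypothetical sketch: learn a map from scalar input parameters to the
    # coefficients of a truncated SVD basis of the output field, so the
    # network predicts a decomposition rather than the full field.
    import numpy as np
    import torch
    import torch.nn as nn

    rng = np.random.default_rng(0)

    # Synthetic training data: n_s parameter samples, each with an output
    # field of n_x spatial points (placeholder for a solver or measurements).
    n_s, n_x, r = 200, 500, 5                       # samples, grid size, rank
    params = rng.uniform(-1.0, 1.0, size=(n_s, 2))  # two scalar inputs/sample
    x = np.linspace(0.0, 1.0, n_x)
    fields = np.sin(np.outer(params[:, 0], 5 * x)) + np.outer(params[:, 1], x**2)

    # Truncated SVD of the snapshot matrix: fields ~= coeffs @ modes
    U, S, Vt = np.linalg.svd(fields, full_matrices=False)
    modes = Vt[:r]                                  # (r, n_x) fixed spatial basis
    coeffs = U[:, :r] * S[:r]                       # (n_s, r) per-sample targets

    # Small MLP: scalar parameters -> r SVD coefficients (never the full field).
    model = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, r))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    inp = torch.tensor(params, dtype=torch.float32)
    tgt = torch.tensor(coeffs, dtype=torch.float32)

    for epoch in range(2000):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(inp), tgt)
        loss.backward()
        opt.step()

    # Predicted decomposition for a new parameter pair; rebuilding the full
    # field from the fixed modes is optional.
    new_p = torch.tensor([[0.3, -0.7]])
    pred_coeffs = model(new_p).detach().numpy()     # the SVD-like output
    approx_field = pred_coeffs @ modes              # optional reconstruction
    print(approx_field.shape)                       # (1, n_x)

Note that training and prediction operate entirely in the r-dimensional coefficient space; the full field is only reconstructed, optionally, from the fixed spatial modes, which is the economy the abstract points to.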

Cite

APA

Ghnatios, C., & Chinesta, F. (2025). Tensor Decomposition Through Neural Architectures. Applied Sciences (Switzerland), 15(4). https://doi.org/10.3390/app15041949
