Deep-Learning-Driven Techniques for Real-Time Multimodal Health and Physical Data Synthesis

Abstract

With the advent of Artificial Intelligence for healthcare, data synthesis methods offer crucial benefits: they accelerate the development of AI models while protecting data subjects and bypassing the complexity of data sharing and processing agreements. Existing technologies focus on synthesising real-time physiological and physical records sampled at regular time intervals. Real health data, however, are characterised by irregular sampling and multimodal variables that remain hard to reproduce while preserving correlations across time and across dimensions. This paper presents two novel techniques for generating synthetic real-time multimodal electronic health and physical records: (a) the Temporally Correlated Multimodal Generative Adversarial Network and (b) the Document Sequence Generator. The paper illustrates the need for and use of these techniques through a real use case, the H2020 GATEKEEPER project on AI for healthcare. Furthermore, it presents an evaluation of each technique individually, alongside a discussion of their comparability and of the potential applications of synthetic data at different stages of the software development life cycle.
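The paper's implementations are not reproduced here. As a hedged illustration of the general idea behind technique (a), the following minimal PyTorch sketch pairs a recurrent generator, which jointly emits irregular inter-sample time gaps and multimodal measurements, with a sequence critic, so that correlations across time and modalities could be learned adversarially. All class names (e.g., IrregularSeqGenerator), layer sizes, and the smoke test are illustrative assumptions, not the authors' TC-MultiGAN architecture.

    # Minimal sketch (assumption): a recurrent GAN for irregularly sampled,
    # multimodal sequences. Names and hyperparameters are illustrative only.
    import torch
    import torch.nn as nn

    class IrregularSeqGenerator(nn.Module):
        def __init__(self, noise_dim=32, hidden_dim=64, n_modalities=4):
            super().__init__()
            self.rnn = nn.LSTM(noise_dim, hidden_dim, batch_first=True)
            # Head for strictly positive time gaps between consecutive samples.
            self.gap_head = nn.Sequential(nn.Linear(hidden_dim, 1), nn.Softplus())
            # Head for the multimodal measurement vector at each step.
            self.value_head = nn.Linear(hidden_dim, n_modalities)

        def forward(self, z):
            h, _ = self.rnn(z)                  # (batch, seq_len, hidden_dim)
            gaps = self.gap_head(h)             # irregular sampling intervals
            values = self.value_head(h)         # correlated multimodal signals
            timestamps = torch.cumsum(gaps, dim=1)
            return timestamps, values

    class SeqCritic(nn.Module):
        def __init__(self, n_modalities=4, hidden_dim=64):
            super().__init__()
            # Input per step: one time gap plus the modal values.
            self.rnn = nn.LSTM(1 + n_modalities, hidden_dim, batch_first=True)
            self.score = nn.Linear(hidden_dim, 1)

        def forward(self, gaps, values):
            h, _ = self.rnn(torch.cat([gaps, values], dim=-1))
            return self.score(h[:, -1])         # realism score per sequence

    # Smoke test: generate a batch of 8 synthetic sequences of 50 samples each.
    gen = IrregularSeqGenerator()
    z = torch.randn(8, 50, 32)
    t, x = gen(z)
    print(t.shape, x.shape)  # torch.Size([8, 50, 1]) torch.Size([8, 50, 4])

In a full adversarial loop, the critic would be trained to separate real (timestamp, measurement) sequences from generated ones while the generator learns to fool it. Technique (b), the Document Sequence Generator, targets record-level documents rather than continuous signals and is not sketched here.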

Citation (APA)

Haleem, M. S., Ekuban, A., Antonini, A., Pagliara, S., Pecchia, L., & Allocca, C. (2023). Deep-Learning-Driven Techniques for Real-Time Multimodal Health and Physical Data Synthesis. Electronics (Switzerland), 12(9). https://doi.org/10.3390/electronics12091989
