The way we communicate through speech goes far beyond the words we use: nonverbal cues play a pivotal role in, e.g., conveying emphasis or expressing emotion. Speech can also serve as a biomarker for a range of health conditions, e.g., Alzheimer's disease. With the strong evolution of speech technologies in recent years, speech has been increasingly adopted for interaction with machines and environments, e.g., our homes. While notable advances are being made in capturing the different verbal and nonverbal aspects of speech, the resulting features are often made available only in standalone applications and/or for very specific scenarios. Given their potential to inform adaptability and support eHealth, it is desirable to make these features an integral part of interactive ecosystems, taking advantage of the rising role of speech as a ubiquitous form of interaction. In this regard, our aim is to propose how this integration can be performed in a modular and expandable manner. To this end, this work presents a first reflection on how these different dimensions may be considered in the scope of a smart environment, through a seamless and expandable integration around speech as an interaction modality, proposing a first iteration of an architecture to support this vision and a first implementation to demonstrate its feasibility and potential.
Barros, F., Valente, A. R., Teixeira, A., & Silva, S. (2023). Harnessing the Role of Speech Interaction in Smart Environments Towards Improved Adaptability and Health Monitoring. In Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering, LNICST (Vol. 484 LNICST, pp. 271–286). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-32029-3_24