Increasingly ubiquitous collaborative intelligence between humans and machines requires human–machine communication (HMC) that is more human-like and less machine-like in accomplishing given tasks. Although speech signals are considered the best mode of communication in HMC, background noise often interferes with them. Research on integrating lip-reading technology into HMC has therefore gained significant attention. However, lip reading functions effectively only in well-lit environments, whereas HMC may routinely occur in the dark owing to potential energy shortages, increased exploration in darkness, nighttime emergencies, etc. Herein, a method for dark-mode HMC is presented, realized by deep learning of the motion patterns of persistent luminescence (PL) on the skin surrounding the lips. An ultrasoft PL–polymer composite patch is used to record the motion pattern of the skin during speech in the dark. It is found that a visual geometry group network (VGGNET-5) and a residual neural network (ResNet-34) can predict spoken words in darkness with test accuracies of 98.5% and 98.75%, respectively. Furthermore, these models can effectively distinguish similar-sounding words such as "around" and "ground." Dark-mode communication can allow a wide range of people, including people with disabilities such as limited dexterity and voice tremors, to communicate with artificial intelligence machines.
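The abstract does not detail the training pipeline, but the classification step can be illustrated with a minimal PyTorch sketch: a ResNet-34 backbone fine-tuned on per-word motion-pattern images captured from the PL patch. The folder layout ("data/train" and "data/test" with one sub-folder per spoken word), the grayscale-to-RGB replication, and all hyperparameters are assumptions for illustration, not the authors' exact method.

```python
# Minimal sketch (assumptions noted above): fine-tune ResNet-34 to classify
# word-level motion-pattern images and report held-out test accuracy.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Single-channel luminescence frames are replicated to 3 channels so an
# ImageNet-pretrained backbone can be reused (an assumption, not from the paper).
tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.Grayscale(num_output_channels=3),
    transforms.ToTensor(),
])

train_set = datasets.ImageFolder("data/train", transform=tf)  # hypothetical path
test_set = datasets.ImageFolder("data/test", transform=tf)    # hypothetical path
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
test_loader = DataLoader(test_set, batch_size=32)

# Replace the final fully connected layer with one output per word class.
model = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for epoch in range(10):  # epoch count is illustrative
    model.train()
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

# Evaluate word-level test accuracy on held-out samples.
model.eval()
correct = total = 0
with torch.no_grad():
    for images, labels in test_loader:
        preds = model(images.to(device)).argmax(dim=1)
        correct += (preds == labels.to(device)).sum().item()
        total += labels.size(0)
print(f"test accuracy: {correct / total:.4f}")
```

A VGG-style classifier could be swapped in by replacing the backbone (e.g., torchvision's vgg16) and its final classifier layer in the same way.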
Timilsina, S., Shin, H. G., Sohn, K.-S., & Kim, J. S. (2022). Dark‐Mode Human–Machine Communication Realized by Persistent Luminescence and Deep Learning. Advanced Intelligent Systems, 4(7). https://doi.org/10.1002/aisy.202200036