This paper presents an approach to enhancing the querying capability of ChatGPT, a conversational artificial intelligence (AI) model, by incorporating voice-based interaction and a convolutional neural network (CNN)-based impaired-vision detection model. The proposed system aims to improve user experience and accessibility by allowing users to interact with ChatGPT through voice commands, while the CNN-based model detects visual impairment in users, enabling the system to adapt its responses and provide appropriate assistance. This research addresses the challenges of user experience and inclusivity in AI, with the goal of making ChatGPT more accessible and valuable to a broader audience. The integration of voice-based interaction and impaired-vision detection is a novel combination for conversational AI, and one with the potential to meaningfully improve the experience of users, particularly those with visual impairments. A modular system design keeps the approach adaptable and scalable, which is essential for practical deployment. Crucially, the solution places the user at its core: tailoring responses for users with visual impairments demonstrates that AI can not only understand individual needs and preferences but also accommodate them.
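The abstract outlines a modular pipeline: capture a spoken query, transcribe it, forward it to ChatGPT, run a CNN over an image of the user to detect visual impairment, and adapt the output channel accordingly. The paper's implementation is not reproduced here; the following Python sketch is only a hypothetical illustration of such a pipeline. The library choices (speech_recognition, tensorflow, openai, pyttsx3), the CNN layer configuration, the 0.5 decision threshold, and every function name are assumptions made for this example, not the authors' code.

    # Hypothetical sketch of the modular system described in the abstract.
    # All library choices, names, and thresholds are illustrative assumptions.
    import numpy as np
    import pyttsx3
    import speech_recognition as sr
    import tensorflow as tf
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def listen() -> str:
        """Module 1: capture a spoken query and transcribe it to text."""
        recognizer = sr.Recognizer()
        with sr.Microphone() as source:
            audio = recognizer.listen(source)
        return recognizer.recognize_google(audio)

    def build_cnn(input_shape=(64, 64, 3)) -> tf.keras.Model:
        """Module 2a: placeholder binary CNN; the paper's actual layer
        configuration is not specified in the abstract."""
        return tf.keras.Sequential([
            tf.keras.layers.Conv2D(16, 3, activation="relu",
                                   input_shape=input_shape),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Conv2D(32, 3, activation="relu"),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Flatten(),
            tf.keras.layers.Dense(1, activation="sigmoid"),  # P(impaired)
        ])

    def detect_impairment(eye_image: np.ndarray, model: tf.keras.Model) -> bool:
        """Module 2b: classify a single user image as impaired or not."""
        batch = eye_image[np.newaxis, ...] / 255.0  # normalize, add batch axis
        prob_impaired = float(model.predict(batch, verbose=0)[0, 0])
        return prob_impaired > 0.5  # threshold is an assumption

    def query_chatgpt(prompt: str) -> str:
        """Module 3: forward the transcribed query to ChatGPT."""
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    def respond(text: str, impaired: bool) -> None:
        """Module 4: adapt the output channel to the detected impairment."""
        if impaired:
            engine = pyttsx3.init()  # spoken output for visually impaired users
            engine.say(text)
            engine.runAndWait()
        else:
            print(text)

A driver would wire the modules together, e.g. respond(query_chatgpt(listen()), detect_impairment(frame, model)) with a trained model and a captured frame. Keeping each stage behind its own function is one way to realize the adaptability and scalability the abstract emphasizes: the speech recognizer, the CNN, or the output channel can each be swapped without touching the rest.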
Ahmad, A., Jabbar, S., Akram, S., Paul, A., Raza, U., & Alshuqayran, N. M. (2024). Enhancing ChatGPT's Querying Capability with Voice-Based Interaction and CNN-Based Impair Vision Detection Model. Computers, Materials & Continua, 78(3), 3129–3150. https://doi.org/10.32604/cmc.2024.045385