Renson Gerald*
This paper presents a conceptual framework for an AI-powered multimodal assistive system for individuals with visual impairments. The proposed system integrates wearable smart glasses, equipped with embedded cameras and haptic feedback, with a centralized AI-driven mobile application that synchronizes environmental data from various smart devices and service animals. Leveraging computer vision, natural language processing, and reinforcement learning, the system aims to enhance spatial awareness, object interaction, and indoor navigation. The system is also intended to interpret sign language gestures captured by the smart glasses' camera and convert them into real-time audio feedback for the user, enabling communication with non-verbal individuals. The research outlines the system's architecture, key components, interaction modes, and an experimental design intended for future empirical validation. The framework seeks to improve user autonomy and confidence in navigating complex environments. Design of the mobile user interface is currently underway, with a focus on intuitive and accessible interaction.
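Since the framework is still conceptual, the sign-language-to-audio flow can only be outlined, not implemented. The following is a minimal Python sketch under stated assumptions: every name here (Frame, GestureRecognizer, sign_to_audio_loop, and the stub classes) is a hypothetical illustration, and the stub recognizer and synthesizer stand in for the computer-vision model and text-to-speech engine the proposed system would use.

```python
from dataclasses import dataclass
from typing import Iterable, Optional, Protocol


@dataclass
class Frame:
    """A single camera frame from the smart glasses (stub payload)."""
    pixels: bytes


class GestureRecognizer(Protocol):
    """Interface for a sign-language gesture classifier."""
    def recognize(self, frame: Frame) -> Optional[str]: ...


class SpeechSynthesizer(Protocol):
    """Interface for the audio-feedback channel."""
    def speak(self, text: str) -> None: ...


class StubRecognizer:
    """Placeholder: a real system would run a trained vision model here."""
    def recognize(self, frame: Frame) -> Optional[str]:
        return "HELLO" if frame.pixels else None


class ConsoleSynthesizer:
    """Placeholder TTS: prints the recognized gesture instead of speaking it."""
    def speak(self, text: str) -> None:
        print(f"[audio] {text}")


def sign_to_audio_loop(frames: Iterable[Frame],
                       recognizer: GestureRecognizer,
                       synthesizer: SpeechSynthesizer) -> None:
    """Core pipeline: camera frame -> gesture label -> spoken feedback."""
    for frame in frames:
        label = recognizer.recognize(frame)
        if label is not None:
            synthesizer.speak(label)


if __name__ == "__main__":
    # Simulated frame stream: one frame with content, one empty frame.
    frames = [Frame(pixels=b"\x01"), Frame(pixels=b"")]
    sign_to_audio_loop(frames, StubRecognizer(), ConsoleSynthesizer())
```

Separating the recognizer and synthesizer behind interfaces mirrors the modular architecture the paper describes: the mobile application can swap in different vision models or audio backends without changing the coordination loop.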