
AI-Based Assistive Model for Enhancing Mobility and Interaction of Visually Challenged Individuals

Abstract: The present invention discloses an artificial intelligence-based assistive model for enhancing the mobility, interaction, and independence of visually challenged individuals. The model comprises a data acquisition module that captures environmental information through cameras and sensors, a processing unit that employs machine learning algorithms to recognize objects, detect obstacles, interpret spatial layouts, identify faces, and read textual content, and a contextual interpretation framework that prioritizes outputs based on relevance to the user. The system communicates information through auditory or haptic feedback, providing real-time guidance and interaction support in indoor and outdoor environments. The invention incorporates adaptive learning to improve recognition accuracy and personalization over time, thereby offering a unified, portable, and scalable solution capable of operating on smartphones, smart glasses, and other wearable devices. The model enhances independence, safety, and quality of life for visually challenged individuals by integrating navigation, perception, and interaction into a single assistive platform.


Patent Information

Application #: 202511082480
Filing Date: 30 August 2025
Publication Number: 38/2025
Publication Type: INA
Invention Field: COMPUTER SCIENCE
Status
Parent Application

Applicants

Jaipuria Institute of Management
Indirapuram, Ghaziabad, Block A, Gate No-2, Shakti Khand IV, Indirapuram Ghaziabad Uttar Pradesh India 201014

Inventors

1. Himanshu Kumar Singh
Jaipuria Institute of Management, Indirapuram, Ghaziabad, Block A, Gate No-2, Shakti Khand IV, Indirapuram Ghaziabad Uttar Pradesh India 201014
2. Dr. Kratika Singh
Jaipuria Institute of Management, Indirapuram, Ghaziabad, Block A, Gate No-2, Shakti Khand IV, Indirapuram Ghaziabad Uttar Pradesh India 201014
3. Dr. Rashmi Bhatia
Jaipuria Institute of Management, Indirapuram, Ghaziabad, Block A, Gate No-2, Shakti Khand IV, Indirapuram Ghaziabad Uttar Pradesh India 201014
4. Dr. Ajay Tripathi
Jaipuria Institute of Management, Indirapuram, Ghaziabad, Block A, Gate No-2, Shakti Khand IV, Indirapuram Ghaziabad Uttar Pradesh India 201014
5. Dr. Smita Agarwal
Jaipuria Institute of Management, Indirapuram, Ghaziabad, Block A, Gate No-2, Shakti Khand IV, Indirapuram Ghaziabad Uttar Pradesh India 201014

Specification

Description:
TECHNICAL FIELD
[0001] The present invention relates to the field of artificial intelligence and assistive technologies. More particularly, the invention pertains to an AI-driven model designed to assist visually challenged individuals in navigating their surroundings, recognizing objects, interpreting contextual information, and enhancing independent living. The invention integrates advanced machine learning algorithms, computer vision, and speech-based interaction frameworks to provide real-time assistance, thereby improving safety, accessibility, and quality of life for persons with visual impairments.
BACKGROUND OF THE INVENTION
[0002] Background description includes information that may be useful in understanding the present invention. It is not an admission that any of the information provided herein is prior art or relevant to the presently claimed invention, or that any publication specifically or implicitly referenced is prior art.
[0003] Visual impairment is among the most significant disabilities impacting millions of individuals globally, with far-reaching consequences on mobility, independence, and quality of life. According to the World Health Organization, more than 285 million people worldwide live with some form of visual impairment, out of which nearly 39 million are completely blind and approximately 246 million experience low vision. For many of these individuals, the lack of sight makes it extremely difficult to interact with their surroundings, perform daily tasks, and navigate independently. While support from caregivers and specialized training such as orientation and mobility techniques can partially alleviate these challenges, reliance on external assistance reduces autonomy and can adversely affect self-confidence, productivity, and participation in social and economic activities.
[0004] Traditional assistive devices for visually challenged individuals have been in use for decades, with the white cane being one of the most ubiquitous tools. Although cost-effective and simple, canes have limitations, as they primarily provide tactile feedback and are unable to detect obstacles at a distance or recognize contextual information. Guide dogs are another traditional solution; however, their availability is restricted due to high training costs and limited numbers, making them inaccessible to a majority of visually impaired persons. Similarly, specialized tools such as braille, tactile maps, or audio-based navigational aids address certain aspects of accessibility, but they fail to provide comprehensive situational awareness or adapt dynamically to complex real-world environments.
[0005] In recent years, advancements in technology have expanded the scope of assistive tools, particularly with the introduction of digital solutions such as screen readers, voice assistants, and smartphone-based navigation applications. These tools leverage audio and textual feedback to enhance accessibility in digital and limited physical contexts. However, they remain inadequate in environments requiring real-time interpretation of objects, obstacles, and dynamic changes, such as crossing busy roads, identifying objects in a room, or recognizing people. Many solutions also rely heavily on internet connectivity, limiting their utility in areas with poor network infrastructure. Furthermore, existing tools tend to focus on one-dimensional support, such as navigation or reading, without offering an integrated, multifunctional solution that covers both mobility and interaction needs.
[0006] Artificial Intelligence (AI) presents an unprecedented opportunity to bridge this gap by enabling machines to interpret and convey environmental information in a way that empowers visually challenged individuals to interact more effectively with their surroundings. AI technologies such as computer vision, deep learning, and natural language processing have reached a level of maturity where they can recognize objects, detect movement, and even understand context in real time. When integrated into portable devices such as smartphones, smart glasses, or wearable systems, AI models can transform raw visual and spatial data into actionable insights delivered through audio or haptic feedback. This not only assists in obstacle detection but also enables identification of objects, reading of textual content, recognition of faces, and context-aware navigation.
[0007] Despite the potential of AI, most available systems are either experimental prototypes or fragmented solutions lacking the scalability, adaptability, and affordability required to support widespread adoption. Some research prototypes attempt to provide object recognition, but their accuracy drops in complex environments or in cases of poor lighting. Others focus solely on navigation without integrating interaction capabilities such as identifying household items, reading labels, or detecting gestures. Moreover, limited personalization features mean that many existing tools do not adapt to the unique requirements, preferences, or learning curves of individual users, thereby reducing usability and long-term effectiveness.
[0008] The background of this invention lies in addressing these limitations by creating a comprehensive AI-based model that integrates navigation, obstacle detection, and contextual awareness into a unified system. Such a system would operate in real time, learn continuously from the user’s environment and feedback, and adapt its outputs to the user’s preferred mode of communication. For example, one user may prefer auditory descriptions, while another may favor haptic vibrations for specific cues. The system would also be able to store frequently accessed information, such as routes or object categories, thereby improving efficiency and personalization over time.
[0009] The emergence of affordable computational hardware, portable cameras, and advanced AI algorithms has made it feasible to design an assistive solution that is not only technologically advanced but also practical, accessible, and affordable. When combined with wearable technology, the model can seamlessly integrate into daily life, reducing stigma and offering a discreet way for visually challenged individuals to interact with their surroundings. This marks a significant shift from the traditional paradigm of assistive tools, moving toward systems that provide dynamic, holistic support rather than isolated functionalities.
[0010] In summary, the invention builds on the limitations of existing assistive technologies and leverages the latest advancements in artificial intelligence to offer a solution that enhances independence, safety, and overall quality of life for visually challenged persons. The background of this invention emphasizes the need for a unified system capable of integrating perception, interpretation, and communication into a single, adaptive model that evolves with user feedback and environmental complexity.
[0011] All publications herein are incorporated by reference to the same extent as if each individual publication or patent application were specifically and individually indicated to be incorporated by reference. Where a definition or use of a term in an incorporated reference is inconsistent or contrary to the definition of that term provided herein, the definition of that term provided herein applies and the definition of that term in the reference does not apply.
OBJECTS OF THE INVENTION
[0012] The principal object of the present invention is to overcome the disadvantages of the prior art.
[0013] Another object of the present invention is to provide an AI-based assistive model that enables visually challenged individuals to perceive, interpret, and interact with their environment in real time, thereby enhancing independence and reducing reliance on external assistance.
[0014] Another object of the present invention is to develop a system that integrates computer vision, deep learning, and natural language processing to identify objects, obstacles, and contextual cues, and communicate them to the user through auditory or haptic feedback.
[0015] Another object of the present invention is to design a portable and user-friendly solution that can be embedded into wearable devices such as smart glasses, smartphones, or other assistive gadgets, ensuring accessibility and ease of adoption.
[0016] Another object of the present invention is to enhance the mobility and safety of visually challenged persons by providing real-time navigation support, obstacle detection, and route guidance in both indoor and outdoor environments.
[0017] Yet another object of the present invention is to create an adaptive learning framework that continuously improves the accuracy and personalization of assistance through user feedback and contextual awareness, thereby delivering a more intuitive and reliable experience.
SUMMARY
[0018] The present invention introduces an AI-based assistive model specifically designed to enhance the mobility, interaction, and independence of visually challenged individuals by integrating multiple artificial intelligence techniques into a unified, real-time solution. Unlike conventional assistive tools that focus on limited functionalities such as tactile guidance, audio navigation, or object detection in isolation, the proposed system provides a comprehensive framework that combines computer vision, deep learning, and natural language processing to deliver contextual awareness and actionable guidance tailored to the unique needs of each user.
[0019] The invention operates by capturing visual and spatial data from the user’s environment using sensors, cameras, or portable devices such as smartphones or smart glasses. This data is processed through advanced AI algorithms trained to recognize objects, detect obstacles, interpret spatial layouts, and extract meaningful information such as text from labels or signboards. Once processed, the system translates this information into user-friendly feedback through auditory or haptic modes, enabling visually challenged individuals to perceive their surroundings in real time and take appropriate action. The communication framework of the system is highly customizable, allowing users to choose the level of detail, type of output, and preferred mode of interaction based on their comfort and requirements.
[0020] One of the central features of the invention is its adaptability. Unlike static tools, the system employs machine learning to continuously refine its performance by incorporating user feedback and environmental variations. For instance, if the system misidentifies an object or provides information at an undesired level of detail, the user can correct it, enabling the model to improve accuracy in subsequent interactions. This self-learning capability ensures that the system becomes increasingly personalized and reliable over time, making it an indispensable companion in daily life.
[0021] The invention also provides enhanced mobility support through real-time navigation assistance. By combining obstacle detection, path planning, and contextual recognition, the system allows users to move safely and independently in both indoor and outdoor environments. For example, while walking in a crowded street, the model can alert the user to approaching obstacles, indicate pedestrian crossings, and even recognize traffic signals or signboards. Indoors, it can assist in locating household items, identifying doorways, or reading text on packaging, thereby reducing dependence on external help.
[0022] Beyond navigation, the invention is designed to facilitate richer interaction with the environment. It can identify people through facial recognition, read printed or digital text aloud, and describe objects in sufficient detail to enable informed decision-making. For example, when presented with multiple food items on a shelf, the system can read out labels, compare product details, and assist the user in making a choice. Similarly, in social contexts, it can help recognize familiar faces, detect gestures, and provide cues that enable more confident participation in conversations and group activities.
[0023] The model is envisioned to be implemented in a compact and portable form factor, ensuring convenience and accessibility. Potential embodiments include integration into wearable devices like smart glasses, standalone smartphone applications, or hybrid solutions that combine hardware and software components. The design prioritizes affordability and ease of use, ensuring that the system can be adopted by a wide range of users, including those in resource-constrained regions where access to advanced assistive tools has traditionally been limited.
[0024] The system’s architecture is built for scalability, allowing the addition of new functionalities as technology evolves. For example, future versions could incorporate advanced haptic interfaces, language translation for multilingual environments, or integration with smart home systems to enable control of appliances through voice or gesture recognition. This flexibility ensures that the invention remains relevant and adaptable to emerging needs and technologies without requiring complete redesign or replacement.
[0025] In summary, the invention represents a significant advancement in assistive technology for visually challenged individuals by offering a holistic, AI-driven model that integrates perception, interpretation, and communication into a single, adaptive solution. It moves beyond the limitations of existing tools by providing comprehensive environmental awareness, personalized interaction, and real-time navigation support, all delivered in a portable and accessible format. Through continuous learning and adaptability, the system not only addresses the immediate challenges of mobility and independence but also evolves with the user, ensuring long-term effectiveness and usability. The proposed invention thus holds the potential to transform the way visually challenged individuals navigate and interact with their world, empowering them with greater confidence, autonomy, and inclusion in society.
[0026] These and other features will become apparent from the following detailed description of illustrative embodiments thereof, which is to be read in connection with the accompanying drawings. While the invention has been described and shown with reference to the preferred embodiment, it will be apparent that variations might be possible that would fall within the scope of the present invention.
BRIEF DESCRIPTION OF DRAWINGS
[0027] So that the manner in which the above-recited features of the present invention can be understood in detail, a more particular description of the invention, briefly summarized above, may be had by reference to embodiments, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only typical embodiments of this invention and are therefore not to be considered limiting of its scope, for the invention may admit to other equally effective embodiments.
[0028] These and other features, benefits, and advantages of the present invention will become apparent by reference to the following text and figures, with like reference numbers referring to like structures across the views, wherein: Figures attached: N.A.
DETAILED DESCRIPTION OF THE INVENTION
[0029] While the present invention is described herein by way of example using embodiments and illustrative drawings, those skilled in the art will recognize that the invention is not limited to the embodiments or drawings described, and that the drawings are not intended to represent the scale of the various components. Further, some components that may form a part of the invention may not be illustrated in certain figures, for ease of illustration, and such omissions do not limit the embodiments outlined in any way. It should be understood that the drawings and the detailed description thereto are not intended to limit the invention to the particular form disclosed, but on the contrary, the invention is to cover all modifications, equivalents, and alternatives falling within the scope of the present invention as defined by the appended claims.
[0030] As used throughout this description, the word "may" is used in a permissive sense (i.e., meaning having the potential to) rather than the mandatory sense (i.e., meaning must). Further, the words "a" or "an" mean "at least one" and the word "plurality" means "one or more" unless otherwise mentioned. Furthermore, the terminology and phraseology used herein are solely used for descriptive purposes and should not be construed as limiting in scope. Language such as "including," "comprising," "having," "containing," or "involving," and variations thereof, is intended to be broad and to encompass the subject matter listed thereafter, equivalents, and additional subject matter not recited, and is not intended to exclude other additives, components, integers, or steps. Likewise, the term "comprising" is considered synonymous with the terms "including" or "containing" for applicable legal purposes. Any discussion of documents, acts, materials, devices, articles, and the like is included in the specification solely for the purpose of providing a context for the present invention. It is not suggested or represented that any or all of these matters formed part of the prior art base or were common general knowledge in the field relevant to the present invention.
[0031] In this disclosure, whenever a composition or an element or a group of elements is preceded by the transitional phrase "comprising", it is understood that we also contemplate the same composition, element, or group of elements with the transitional phrases "consisting of", "consisting", "selected from the group consisting of", "including", or "is" preceding the recitation of the composition, element, or group of elements, and vice versa.
[0032] The present invention is described hereinafter by various embodiments with reference to the accompanying drawing, wherein reference numerals used in the accompanying drawing correspond to the like elements throughout the description. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiment set forth herein. Rather, the embodiment is provided so that this disclosure will be thorough and complete and will fully convey the scope of the invention to those skilled in the art. In the following detailed description, numeric values and ranges are provided for various aspects of the implementations described. These values and ranges are to be treated as examples only and are not intended to limit the scope of the claims. In addition, several materials are identified as suitable for various facets of the implementations.
[0033] The present invention relates to an artificial intelligence-based assistive model that has been designed to provide visually challenged individuals with enhanced capabilities for mobility, environmental awareness, and interaction with their surroundings. The system is conceived as a comprehensive technological framework that integrates advanced computer vision, deep learning, natural language processing, and sensor fusion to deliver real-time assistance in a manner that is both intuitive and adaptive. By combining these diverse elements, the invention moves beyond the limitations of conventional assistive tools and provides a single, unified solution capable of addressing multiple dimensions of visual impairment.
[0034] The operation of the invention begins with the acquisition of raw data from the environment. This data may be captured through a variety of sensors, including cameras, depth sensors, LiDAR modules, or even the camera units embedded within smartphones and wearable devices such as smart glasses. The choice of data acquisition hardware depends on the embodiment of the system and can be adapted for cost-effectiveness or high precision depending on user needs. Once the raw data is collected, it is transmitted to a processing unit where the AI algorithms analyze it in real time. The algorithms are trained using large-scale datasets to recognize objects, detect obstacles, identify faces, interpret spatial layouts, and read textual information. The training incorporates deep convolutional neural networks for visual recognition, recurrent neural networks for sequential data interpretation, and hybrid models that allow for contextual understanding of scenes.
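By way of a non-limiting illustration, the acquisition and recognition stages described above may be sketched as follows. The Python example below uses an off-the-shelf pretrained detector (torchvision's Faster R-CNN trained on COCO) purely as a stand-in for the trained models of the invention, and a local webcam capture as a stand-in for the smartphone or smart-glasses camera; neither is prescribed by the specification.

```python
# Hypothetical perception loop: grab frames from a camera and run a pretrained
# object detector, as one possible embodiment of the acquisition/processing stages.
import cv2
import torch
import torchvision

# A pretrained COCO detector stands in for the invention's trained recognition models.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect_objects(frame_bgr, score_threshold=0.6):
    """Return (label_id, score, box) tuples for objects found in one frame."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        output = model([tensor])[0]
    detections = []
    for box, label, score in zip(output["boxes"], output["labels"], output["scores"]):
        if score >= score_threshold:
            detections.append((int(label), float(score), box.tolist()))
    return detections

cap = cv2.VideoCapture(0)            # stand-in for a smartphone or smart-glasses camera
ok, frame = cap.read()
if ok:
    print(detect_objects(frame))
cap.release()
```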
[0035] The recognition process is not limited to simple detection of objects but extends to understanding their context and relevance. For example, the system can identify that an object is a moving vehicle and infer that it presents a potential hazard if the user is attempting to cross the street. Similarly, it can differentiate between permanent structures such as walls and temporary obstacles such as chairs or boxes placed in a pathway. This contextual understanding is crucial in ensuring that the user receives information that is not only accurate but also meaningful for immediate decision-making. The invention incorporates an interpretative layer that prioritizes information delivery based on relevance, ensuring that the user is not overwhelmed with unnecessary details.
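One simplified way to realise such an interpretative layer is a scoring function that ranks detections by proximity, motion, and hazard class so that only the most relevant items are passed on to the feedback stage. The hazard set and weighting factors below are illustrative assumptions made for the example only, not values prescribed by the invention.

```python
# Illustrative prioritization layer: rank detections so that moving hazards and
# nearby obstacles are announced first, and only the top few are reported.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    distance_m: float      # e.g. estimated from a depth sensor or stereo camera
    is_moving: bool        # e.g. inferred from frame-to-frame displacement

HAZARD_LABELS = {"car", "bicycle", "bus", "motorcycle"}   # assumed hazard set

def priority(d: Detection) -> float:
    score = 1.0 / max(d.distance_m, 0.1)      # closer objects score higher
    if d.is_moving:
        score *= 2.0                           # dynamic obstacles outrank static ones
    if d.label in HAZARD_LABELS:
        score *= 3.0                           # traffic hazards outrank furniture
    return score

def rank(detections, top_k=3):
    """Return the few most relevant detections so the user is not overwhelmed."""
    return sorted(detections, key=priority, reverse=True)[:top_k]

print(rank([Detection("chair", 4.0, False), Detection("car", 6.0, True)]))
```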
[0036] The communication of information to the user is achieved through multimodal feedback systems. Depending on user preference, information can be delivered via auditory cues through earphones or bone conduction speakers, or through haptic signals such as vibrations on wearable devices. The auditory output leverages natural language processing to generate human-like speech that describes objects, spatial orientation, and navigational guidance. For users who prefer less intrusive feedback, haptic signals can be used to indicate proximity to obstacles, changes in direction, or alerts requiring immediate attention. The model allows customization of these feedback channels so that users can select their preferred mode of interaction and adjust levels of detail or frequency of updates according to their needs.
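A minimal sketch of such a feedback dispatcher is shown below, assuming a local text-to-speech engine (pyttsx3 is used here only as an example) and a placeholder haptic channel; the modality and verbosity switches mirror the user preferences described above, and a real embodiment would drive an actual vibration motor rather than printing a pattern.

```python
# One way the multimodal feedback stage could be realised: spoken output via a
# local TTS engine and a stubbed haptic channel, selected by user preference.
import pyttsx3

class FeedbackDispatcher:
    def __init__(self, prefer_haptic=False, verbosity="normal"):
        self.prefer_haptic = prefer_haptic      # user-selected modality
        self.verbosity = verbosity              # user-selected level of detail
        self.tts = pyttsx3.init()

    def speak(self, text: str):
        self.tts.say(text)
        self.tts.runAndWait()

    def vibrate(self, pattern_ms):
        # Placeholder: a real embodiment would drive a wearable's vibration motor.
        print(f"[haptic] pattern: {pattern_ms}")

    def notify(self, message: str, urgent: bool = False):
        if urgent or self.prefer_haptic:
            self.vibrate([100, 50, 100] if urgent else [200])
        if not self.prefer_haptic or self.verbosity == "detailed":
            self.speak(message)

FeedbackDispatcher().notify("Vehicle approaching from the left", urgent=True)
```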
[0037] An important feature of the invention is its navigation support capability. The system integrates environmental mapping and path planning algorithms to assist users in moving independently across indoor and outdoor environments. In outdoor scenarios, the system can detect crosswalks, interpret traffic lights, and provide step-by-step navigational cues. In indoor settings such as homes, offices, or public spaces, the system assists in identifying rooms, doorways, furniture, and everyday objects. For instance, a user entering a kitchen could be guided to the location of a refrigerator, informed of items placed on a counter, or helped in reading labels on packaged goods. This real-time navigation and guidance significantly reduce the risks associated with mobility for visually impaired individuals and provide them with a higher degree of independence.
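For the path-planning component, a conventional grid-based search such as A* may be used once detected obstacles have been rasterised into an occupancy grid. The following minimal sketch assumes a two-dimensional grid in which 1 marks a blocked cell and 0 a free cell; it illustrates the idea only and is not the specific planning algorithm of the invention.

```python
# Minimal occupancy-grid A* sketch: find an obstacle-free route between two cells.
import heapq

def astar(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    def h(p):                                    # Manhattan-distance heuristic
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, [start])]
    seen = set()
    while open_set:
        _, cost, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(open_set, (cost + 1 + h((nr, nc)), cost + 1,
                                          (nr, nc), path + [(nr, nc)]))
    return None                                   # no obstacle-free route found

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))    # routes around the blocked middle row
```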
[0038] The invention also supports interaction beyond navigation by enabling recognition of people, gestures, and facial expressions. Through facial recognition technology, the system can identify known individuals, announce their presence to the user, and provide cues to facilitate social interaction. Gesture detection allows the user to recognize when someone is signaling to them, thereby bridging communication gaps in real-world contexts. Text-to-speech functionality is incorporated to allow the system to read aloud printed or digital text from books, documents, or signage. This capability extends the system’s utility into educational and professional settings where access to written material is essential.
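The text-reading capability may, for example, be realised by chaining optical character recognition with speech synthesis. The sketch below uses pytesseract and pyttsx3 solely as readily available stand-ins for the OCR and text-to-speech components referred to above; the image path is hypothetical.

```python
# Illustrative text-reading path: OCR on a captured image followed by speech output.
import pytesseract
import pyttsx3
from PIL import Image

def read_aloud(image_path: str) -> str:
    text = pytesseract.image_to_string(Image.open(image_path))
    text = " ".join(text.split())                 # collapse OCR line breaks
    if text:
        engine = pyttsx3.init()
        engine.say(text)
        engine.runAndWait()
    return text

# read_aloud("label.jpg")   # hypothetical snapshot of a product label or signboard
```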
[0039] A key aspect of the invention is its adaptability and personalization. The system is designed with machine learning algorithms that evolve over time based on user feedback and contextual variations. For instance, if the system incorrectly identifies an object or misinterprets an environment, the user can provide corrective feedback, allowing the model to update its knowledge and improve its performance in subsequent encounters. This self-learning feature ensures that the system becomes more accurate and user-specific as it continues to be used, thereby improving trust and long-term reliability. Furthermore, the model is capable of storing frequently accessed routes, commonly used objects, or familiar individuals in its memory, enabling faster recognition and more efficient interaction in repetitive scenarios.
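A simplified illustration of this corrective-feedback loop is given below: user corrections are stored, applied to future predictions of the same kind, and retained as labelled samples that could drive periodic on-device fine-tuning. The storage format and majority-vote rule are assumptions made for the example only.

```python
# Simplified user-feedback loop: remember corrections, apply them to later
# predictions, and keep them as labelled samples for future fine-tuning.
import json
from collections import defaultdict

class FeedbackMemory:
    def __init__(self, path="corrections.json"):
        self.path = path
        self.corrections = defaultdict(list)      # predicted label -> corrected labels

    def record(self, predicted: str, corrected: str):
        self.corrections[predicted].append(corrected)

    def adjust(self, predicted: str) -> str:
        """Prefer the user's most frequent correction for a given prediction."""
        votes = self.corrections.get(predicted)
        if not votes:
            return predicted
        return max(set(votes), key=votes.count)

    def export_training_samples(self):
        # Pairs that could drive periodic on-device fine-tuning of the recogniser.
        with open(self.path, "w") as f:
            json.dump(self.corrections, f, indent=2)

mem = FeedbackMemory()
mem.record("box", "stool")
print(mem.adjust("box"))    # -> "stool" after the user's correction
```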
[0040] The invention also emphasizes portability and accessibility. It can be implemented as a standalone mobile application leveraging the computational capabilities of modern smartphones, or as an integrated hardware-software solution embedded in wearable devices such as smart glasses, head-mounted systems, or wristbands. The lightweight and compact design make it practical for daily use without imposing burdens on the user. Energy-efficient algorithms and optimized processing ensure longer battery life in portable embodiments, addressing one of the common limitations of existing assistive technologies.
[0041] The architecture of the invention is modular and scalable, allowing for integration with emerging technologies and additional functionalities. For example, future implementations may incorporate advanced haptic interfaces capable of transmitting detailed spatial information, or integration with smart home ecosystems that enable visually challenged users to interact seamlessly with appliances and connected devices. Similarly, the system may evolve to incorporate language translation, providing real-time assistance in multilingual environments, or extend into tele-assistance frameworks where caregivers can remotely monitor or assist the user through cloud connectivity.
[0042] The design philosophy of the invention prioritizes affordability and accessibility to ensure that it can benefit individuals across socio-economic strata, including those in regions with limited access to high-cost assistive tools. By leveraging widely available hardware platforms such as smartphones and pairing them with optimized AI models that can run efficiently on edge devices, the system achieves a balance between technological sophistication and cost-effectiveness.
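As one example of the kind of optimisation that allows recognition models to run efficiently on low-cost edge hardware, post-training dynamic quantization can shrink a trained network for CPU-only inference. PyTorch is used below only to illustrate the idea, with a toy network standing in for a trained recognition head; the invention is not limited to this technique.

```python
# Example of edge-oriented model optimisation: dynamic quantization to 8-bit weights.
import torch
import torch.nn as nn

model = nn.Sequential(                  # stand-in for a trained recognition head
    nn.Linear(512, 256), nn.ReLU(),
    nn.Linear(256, 10),
)

quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8   # 8-bit weights cut memory use on CPU
)

x = torch.randn(1, 512)
print(quantized(x).shape)               # same interface, smaller footprint
```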
[0043] In operation, the user experiences the invention as an intuitive guide and companion. Whether navigating a crowded urban street, identifying objects in a home, participating in a classroom, or engaging in social interactions, the system delivers timely, accurate, and personalized assistance. It reduces dependence on caregivers, enhances self-confidence, and provides a sense of security and independence that is often lacking in the lives of visually challenged individuals. By creating a holistic model that integrates mobility, perception, and interaction into a unified system, the invention represents a transformative step in assistive technology for the visually impaired community.
[0044] While the foregoing describes various embodiments of the invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof. The scope of the invention is determined by the claims that follow. The invention is not limited to the described embodiments, versions or examples, which are included to enable a person having ordinary skill in the art to make and use the invention when combined with information and knowledge available to the person having ordinary skill in the art.
[0045] Thus, the scope of the present disclosure is defined by the appended claims and includes both combinations and sub-combinations of the various features described hereinabove as well as variations and modifications thereof, which would occur to persons skilled in the art upon reading the foregoing description.
Claims:
I/We Claim:
1. An artificial intelligence-based assistive model for aiding visually challenged individuals, comprising:
a data acquisition module configured to capture environmental data through one or more sensors, including cameras or depth sensors;
a processing unit operatively connected to the data acquisition module, wherein the processing unit executes machine learning algorithms trained to recognize objects, detect obstacles, interpret spatial layouts, read textual information, and identify human faces and gestures in real time;
a contextual interpretation framework configured to prioritize recognized information based on environmental relevance, wherein said framework generates adaptive outputs; and
a multimodal feedback system configured to communicate the outputs to the user through auditory feedback, haptic feedback, or a combination thereof, wherein the model is further adapted to update and refine its recognition accuracy and personalization parameters over time based on user feedback and contextual variations.
2. The model of claim 1, wherein the machine learning algorithms comprise convolutional neural networks for visual recognition and recurrent neural networks for sequential data interpretation.
3. The model of claim 1, wherein the contextual interpretation framework differentiates between static obstacles and dynamic obstacles and prioritizes hazard-related information for immediate communication to the user.
4. The model of claim 1, wherein the multimodal feedback system employs natural language processing to generate human-like speech outputs for auditory communication.
5. The model of claim 1, wherein the haptic feedback comprises vibration patterns delivered through wearable devices to indicate proximity of obstacles, changes in navigation direction, or alerts requiring immediate attention.
6. The model of claim 1, wherein the system further comprises a navigation module configured to provide indoor and outdoor guidance through path planning and obstacle avoidance.
7. The model of claim 1, wherein the system is implemented in a portable device selected from the group consisting of smartphones, smart glasses, wristbands, or head-mounted devices.
8. The model of claim 1, wherein the system is configured to read aloud printed or digital text using optical character recognition integrated with text-to-speech processing.
9. The model of claim 1, wherein the system is further configured to identify individuals through facial recognition and notify the user of their presence.
10. The model of claim 1, wherein the processing unit operates on edge devices to enable offline operation and reduce dependence on continuous internet connectivity.

Documents

Application Documents

# Name Date
1 202511082480-STATEMENT OF UNDERTAKING (FORM 3) [30-08-2025(online)].pdf 2025-08-30
2 202511082480-REQUEST FOR EARLY PUBLICATION(FORM-9) [30-08-2025(online)].pdf 2025-08-30
3 202511082480-POWER OF AUTHORITY [30-08-2025(online)].pdf 2025-08-30
4 202511082480-FORM-9 [30-08-2025(online)].pdf 2025-08-30
5 202511082480-FORM FOR SMALL ENTITY(FORM-28) [30-08-2025(online)].pdf 2025-08-30
6 202511082480-FORM 1 [30-08-2025(online)].pdf 2025-08-30
7 202511082480-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [30-08-2025(online)].pdf 2025-08-30
8 202511082480-EVIDENCE FOR REGISTRATION UNDER SSI [30-08-2025(online)].pdf 2025-08-30
9 202511082480-EDUCATIONAL INSTITUTION(S) [30-08-2025(online)].pdf 2025-08-30
10 202511082480-DECLARATION OF INVENTORSHIP (FORM 5) [30-08-2025(online)].pdf 2025-08-30
11 202511082480-COMPLETE SPECIFICATION [30-08-2025(online)].pdf 2025-08-30