
Intelligent Medical Report Analysis And Health Recommendation System

Abstract: The invention relates to an intelligent medical report analysis and health recommendation system (100) comprising an image acquisition module (101) configured to receive medical reports in image or scanned format, an image preprocessing module (102) for enhancing image quality, an OCR module (103) to extract machine-readable text and a text processing module (104) that uses natural language processing to clean and structure the extracted text. A user input module (105) collects contextual data such as age, sex, symptoms and lifestyle habits, and an AI module (106) comprising a language interaction framework and a large language model processes these inputs to generate personalized health recommendations. A chat interaction module (107) enables multi-turn, context-aware responses and a user interface module (108) provides user interaction and output display. The system is configured to operate locally on a computing device (D), providing real-time interpretation and personalized health recommendations without reliance on medical professionals.


Patent Information

Application #
Filing Date
21 June 2025
Publication Number
27/2025
Publication Type
INA
Invention Field
BIO-MEDICAL ENGINEERING
Status
Email
Parent Application

Applicants

AMRITA VISHWA VIDYAPEETHAM
Amrita Vishwa Vidyapeetham, Coimbatore Campus, Coimbatore - 641112, Tamil Nadu, India

Inventors

1. GOLLA, Ram
1-101 Main Road, Mosalapalli, Ambajipeta mandal, Dr. B. R. Ambedkar Konaseema Andhra Pradesh – 533239, India
2. POLA, Srinitha
5-11-47/1, Padmavathi Nilayam, Shanthi Nagar, Nalgonda, Telangana – 508001, India
3. KALLA, Vaishnavi
8-101, Pathapatnam, Srikakulam Andhra Pradesh – 532213, India
4. METTU, Sathvika
Pavan Paras, Railpet, 3rd Lane, Ongole, Andhra Pradesh – 523001, India
5. GUPTA, Kritesh Kumar
Teachers Colony, Pratap Nagar, Jalalabad, Shahajahanpur, Uttar Pradesh – 242221, India

Specification

Description:
FIELD OF THE INVENTION
The present invention relates to an intelligent medical report analysis and health recommendation system. More particularly, the present invention relates to an intelligent system for automated interpretation of medical reports and generation of personalized health recommendations including dietary and lifestyle modifications.

BACKGROUND OF THE INVENTION
In the last few years, there has been a growing awareness among people regarding the importance of timely and accurate interpretation of medical reports, particularly blood test reports, as part of preventive healthcare. Patients routinely undergo blood and other lab tests whose reports are often delivered in printed or scanned format. These documents contain clinical data, medical abbreviations and numerical values that are difficult for most people to understand without professional help. While those in urban areas may consult doctors or lab technicians for explanations, individuals in rural areas often do not have immediate access to such support. In underserved regions, there is a widespread gap in access to timely medical counsel, leading to delayed intervention, incorrect self-diagnosis or complete inaction. Even in cities, getting a report reviewed can involve delays, consultation fees or logistical hurdles. As a result, many people either misinterpret their health reports or ignore them altogether, leading to delayed treatment or unnecessary anxiety.
Although a few tools, such as symptom checkers and AI-based medical chatbots, exist in the market, these tools typically respond to general questions and rely on structured inputs rather than real medical reports and documents.
Reference is made to the research paper titled “Leveraging LLM: Implementing an Advanced AI Chatbot for Healthcare”, which discloses the application of Large Language Models (LLMs) in healthcare settings, mainly focusing on addressing general illness inquiries through chatbot interfaces. Leveraging the capabilities of LLMs, the authors explore their potential to provide responses to users seeking information about common health concerns. The paper states that LLMs have the capacity to continuously learn and improve from user interaction, and reports benchmarking experiments evaluating the accuracy (61%) of LLM-based chatbots in understanding and responding to user queries related to general illnesses. The findings demonstrate the performance of LLMs against established benchmarks, shedding light on their efficacy in healthcare applications. The paper provides insight into the development of chatbot systems capable of providing reliable and informative support to individuals seeking medical guidance for general health issues.
The inventions in the existing state of the art lack the ability to process scanned report images and extract meaningful medical data through OCR or language processing. They lack the ability to personalize advice based on age, symptoms, habits or medical history and to offer conversational health feedback linked directly to the uploaded report. These tools fail to provide an integrated system that allows users to upload a medical report, understand it and receive personalized health, lifestyle and dietary recommendations in a simple manner without the need for any clinical supervision. The existing tools fail to incorporate personalized data such as age, weight, gender, dietary habits or symptom history while generating recommendations, providing only generic and non-specific advice. The NLP-based tools are mostly trained for FAQ-style interaction or predefined symptom-question mapping and hence fail to perform true semantic understanding of medical report contents. These tools do not support natural multi-turn conversations in which a user can upload a medical report, get a summary, ask follow-up questions and receive personalized answers based on both their medical report and personal information. The limited availability of medical professionals in remote and rural areas underscores the need for AI-based systems that can assist users in understanding their medical reports and receiving timely, personalized health guidance.
Therefore, there is a pressing need for a system that can analyze and interpret medical reports, process user-specific contextual data and generate personalized health, lifestyle and dietary recommendations and provide interactive explanation and query resolution through a conversational interface.

ADVANTAGES OF THE INVENTION OVER THE EXISTING STATE OF THE ART:
The present invention enables users to upload actual medical reports in image format, which are automatically processed and interpreted without requiring any prior medical knowledge. It provides personalized, easy-to-understand summaries and health recommendations tailored to each user's profile, including age, symptoms, medical history and lifestyle. The present invention enables dynamic, multi-turn conversations that allow users to ask specific follow-up questions and receive context-aware responses rooted in their own medical data. This bridges the gap between report generation and user comprehension, especially for people in rural, remote or underserved areas where professional medical advice is not readily available. The system operates autonomously on a local device, offering scalable, real-time interpretation of medical reports, health guidance and recommendations without requiring any interaction with medical professionals or external platforms.

OBJECT OF THE INVENTION
In order to obviate the drawbacks in the existing state of the art, the principal object of the present invention is to provide an intelligent system for the interpretation of medical reports and the generation of personalized health recommendations without requiring the involvement of medical professionals.
Another object of the present invention is to provide a system capable of receiving user-uploaded medical reports in image format and autonomously identifying medically relevant textual content therein.
Another object of the invention is to provide a framework configured to transform unstructured medical report data into structured, clinically relevant insights through automated language-based analysis.
Another object of the invention is to enable the collection of contextual user information, such as demographic data, symptoms, ongoing medication, lifestyle factors and dietary habits, to facilitate personalized reasoning in medical interpretation.
Yet another object of the invention is to generate personalized health recommendations including dietary suggestions, lifestyle adjustments, preventive health guidance, and chronic condition indicators based on combined medical and contextual data.
Another object of the invention is to facilitate interactive engagement through a conversational interface that allows users to ask queries related to their medical reports and receive natural language responses grounded in their uploaded data.
It is also an object of the invention to provide a simplified, accessible user interface intended for individuals with limited digital literacy, particularly in rural or resource-constrained environments.
Another object of the invention is to enable real-time, explainable and user-personalized medical insight delivery to reduce dependence on clinical consultation in non-emergency contexts.
Another object of the invention is to offer a modular, scalable solution that may be deployed across a range of digital platforms, including mobile devices, web applications and local edge computing environments.

SUMMARY OF THE INVENTION
The present invention provides an Intelligent medical report analysis and personalized health recommendation system. The system is implemented on a computing device comprising a processor configured to execute image preprocessing, text extraction, natural language processing, and generative AI modules. The system enables users to upload medical reports in image or scanned document format via an image acquisition module. These images are processed by an image preprocessing module using a computer vision library to perform grayscaling, resizing, de-skewing and smoothing. The preprocessed image is then sent to an optical character recognition (OCR) module, which extracts machine-readable text from the image using one or more OCR engines. The extracted text is processed by a text processing module configured to tokenize, normalize and clean the content and to identify medical parameters, test values and health-related entities using natural language processing.
A user input module in the system collects contextual user information, including demographic data, symptoms, medications, lifestyle habits and dietary preferences. This information is input to a generative AI module comprising a language interaction framework and a large language model. The generative AI module analyzes both the structured medical data and the contextual user profile to generate personalized recommendations. The system further includes a chat interaction module that enables the user to ask follow-up questions and receive context-sensitive responses based on the uploaded report and user data. A user interface module presents selectable options to the user, including submitting the medical report, interacting via chat, and accessing customer care support. The AI-generated dietary suggestions, disease risk alerts and lifestyle recommendations are provided in natural language, allowing non-expert users to easily understand their medical data and take informed action.
The technical advancement of the present invention lies in its integration of image processing, optical character recognition, natural language processing and generative AI within a unified, context-aware architecture executed locally on a computing device without relying on remote servers or cloud resources. The invention delivers a tangible technical effect by transforming unstructured medical report images into meaningful, personalized health insights thus enabling autonomous medical report interpretation and guidance without reliance on clinical professionals. This improves healthcare accessibility, particularly in low-resource or rural environments, and supports scalable deployment across digital platforms.

BRIEF DESCRIPTION OF DRAWINGS
Figure 1 illustrates a schematic block diagram of the intelligent medical report analysis and health recommendation system, showing the interconnection between the functional modules implemented on a user device.
Figure 2 illustrates the working methodology of the image preprocessing and OCR stages of the system, specifically depicting the use of computer vision techniques and optical character recognition for extracting text from medical report images.
Figure 3 depicts the user registration or sign-up interface screen, enabling entry of user details for profile creation and onboarding into the system.
Figure 4 depicts the home interface screen of the system, providing options such as uploading a medical report, accessing the chat interface and accessing customer support.
Figure 5 depicts the report upload screen of the user interface, through which users can select or capture a medical report image using the image acquisition module.
Figure 6 depicts the AI-generated recommendation screen, displaying health recommendations based on the analysis of the uploaded report and user data.
Figure 7 depicts the chat interface screen, showing the multi-turn interactive conversation between the user and the system for clarification of report values and follow-up guidance.
Figure 8 illustrates a process flow diagram representing the end-to-end method steps carried out by the system including image acquisition, OCR, NLP processing, LLM-based recommendation generation and response display.

DETAILED DESCRIPTION OF THE INVENTION WITH ILLUSTRATIONS AND NON-LIMITING EXAMPLES
While the invention has been disclosed with reference to certain embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted without departing from the scope of the invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the invention without departing from its scope. However, one of ordinary skill in the art will readily recognize that the present disclosure, including the definitions listed herein below, is not intended to be limited to the embodiments illustrated but is to be accorded the widest scope consistent with the principles and features described herein.
Throughout the specification and claims, the following terms take the meanings explicitly associated herein unless the context clearly dictates otherwise. The meaning of “a”, “an”, and “the” include plural references. Additionally, a reference to the singular includes a reference to the plural unless otherwise stated or inconsistent with the disclosure herein.
A person of ordinary skill in the art will readily ascertain that the illustrated steps detailed in the figures and herein below are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the way functions are performed. It is to be noted that the drawings are to be regarded as schematic representations and their elements are not necessarily shown to scale. Rather, the various elements are represented such that their function and general purpose become apparent to a person skilled in the art.
The present invention relates to an intelligent system for medical report analysis, interpretation and personalized health recommendation generation. The system utilizes Python based libraries and frameworks to implement image processing, text extraction, natural language understanding and AI driven interpretation and response generation.
In one exemplary embodiment, the system is implemented on a User device (D) which is a computing device comprising image acquisition capability, local memory, processing hardware, a graphical user interface, and wireless communication functionality. The user device (D) serves as the primary platform for receiving user inputs, acquiring medical reports, executing all processing functions of the system and displaying AI-generated outputs and health recommendations.
The entire processing pipeline of the system including image preprocessing, optical character recognition (OCR), text extraction, natural language processing (NLP) and generative AI inference, is configured to be executed locally on the user device (D). This localized architecture enables autonomous operation, enhances data privacy and ensures real-time responsiveness even in offline or low-connectivity settings.
As used herein, the term “computing device (D)” refers broadly to and includes, without limitation, smartphones, tablets, laptops, desktop computers, and other processor-enabled devices that integrate image acquisition capability, storage, wireless communication interfaces (e.g., Wi-Fi or cellular) and graphical user interaction functionality.
The system comprises the following functional modules:
Image Acquisition Module (101):
The image acquisition module is configured to receive medical reports in image or scanned document formats, such as JPEG, PNG, or PDF. The input may be obtained through file selection, drag-and-drop, or screenshot capture mechanisms. This module enables the user to upload scanned medical reports for automated analysis. The uploaded image is temporarily stored for subsequent preprocessing.
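For illustration, a minimal acceptance check of this kind might be sketched as follows; the helper name and the exact extension list are assumptions for illustration, not part of the specification:

```python
import os

# Hypothetical helper: accept only the formats named above (JPEG, PNG, PDF)
# before the upload is stored for preprocessing.
ALLOWED_EXTENSIONS = {".jpg", ".jpeg", ".png", ".pdf"}

def is_supported_report(filename: str) -> bool:
    """Return True when the uploaded file uses a supported report format."""
    _, ext = os.path.splitext(filename.lower())
    return ext in ALLOWED_EXTENSIONS
```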

Image Preprocessing Module (102):
The image preprocessing module is configured to enhance the quality of the uploaded medical report image and prepare it for accurate text recognition. It performs operations such as grayscaling, resizing, de-skewing and noise reduction. This module is implemented using one or more computer vision libraries such as OpenCV (cv2), to perform these preprocessing steps. The processed image output from this module is then passed to the OCR module for text extraction. These enhancements improve the readability and alignment of the document, thereby increasing the efficiency and accuracy of the OCR module that follows.
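In practice this step would call OpenCV directly (for example, cv2.cvtColor with cv2.COLOR_BGR2GRAY for grayscaling). The stdlib sketch below only illustrates the underlying idea of the grayscaling operation; the function name and the BT.601 luminance weights are illustrative choices:

```python
# Grayscaling collapses each RGB pixel to a single luminance value,
# which simplifies the image before OCR. OpenCV performs this (and
# resizing, de-skewing, smoothing) far more efficiently in practice.
def to_grayscale(rgb_image):
    """Convert a nested-list RGB image to grayscale using ITU-R BT.601 weights."""
    return [
        [round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
        for row in rgb_image
    ]
```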

Optical Character Recognition (OCR) Module (103):
The OCR module is configured to convert the preprocessed medical report image into machine-readable text. It performs character recognition to extract both structured and unstructured textual content from the medical report. This text extraction involves detecting characters, words and lines as well as recognizing the text itself. The OCR module is implemented using one or more OCR engines, such as Tesseract OCR via the Pytesseract interface in Python. The output of this module comprises the raw text of the uploaded medical report, which is forwarded to subsequent modules for structuring and clinical parameter extraction.
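The module itself would typically call pytesseract.image_to_string(image) for plain text, or pytesseract.image_to_data for word-level positions. As a hedged illustration of the line-detection step, the sketch below regroups word-level output (text plus coordinates) into text lines; the (text, x, y) data shape is an assumption for illustration:

```python
# Reassemble OCR word boxes into lines: words whose vertical positions
# fall within a tolerance band are treated as belonging to the same line.
def words_to_lines(words, line_tolerance=10):
    """Group (text, x, y) word tuples into text lines by vertical position."""
    lines = {}
    for text, x, y in words:
        key = round(y / line_tolerance)   # words with similar y share a line
        lines.setdefault(key, []).append((x, text))
    return [
        " ".join(t for _, t in sorted(ws))    # order words left-to-right
        for _, ws in sorted(lines.items())    # order lines top-to-bottom
    ]
```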

Text Processing Module (104):
The text processing module is configured to clean and structure the text extracted by the OCR module (103). It performs tokenization (breaking the text into smaller components such as words), normalization (standardizing spelling or converting everything to lowercase) and entity recognition (identifying entities such as names, ages, diseases and symptoms) using a Python-based natural language processing (NLP) library such as SpaCy. This prepares the extracted text for further analysis and understanding. To limit the processing of non-essential words, a machine learning library such as Scikit-learn is used to define stop words. The structured output generated by this module includes identified key medical parameters, such as glucose levels and lipid profile values, together with their associated units and ranges, enabling the system to map clinical indicators to health contexts.
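The actual module uses SpaCy entity recognition; as a minimal stdlib sketch of the parameter-extraction step, a regular expression can pull (parameter, value, unit) triples out of the cleaned text. The pattern and the recognized units here are illustrative assumptions:

```python
import re

# Illustrative extraction of key medical parameters (e.g. glucose,
# hemoglobin) with their numeric values and units from raw report text.
PARAM_PATTERN = re.compile(
    r"(?P<name>[A-Za-z ]+?)\s*[:\-]?\s*(?P<value>\d+(?:\.\d+)?)\s*(?P<unit>mg/dL|g/dL|%)",
    re.IGNORECASE,
)

def extract_parameters(text):
    """Return a list of (parameter, value, unit) triples found in the text."""
    return [
        (m.group("name").strip(), float(m.group("value")), m.group("unit"))
        for m in PARAM_PATTERN.finditer(text)
    ]
```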

User Input Module (105):
The user input module (105) is configured to collect contextual information from the user that supplements the medical report data and its interpretation. Inputs may include age, sex, weight, height, dietary preferences, physical activity levels, existing medical conditions and reported symptoms. This module captures user specific variables and data points that may not be directly stated in the medical report but are essential for personalized recommendation generation by the AI module.
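An illustrative data shape for the contextual inputs this module collects is sketched below; the class and field names are assumptions chosen to mirror the description, not a disclosed schema:

```python
from dataclasses import dataclass, field

# Contextual user data supplementing the medical report: demographics,
# symptoms, existing conditions and dietary preference.
@dataclass
class UserProfile:
    name: str
    age: int
    sex: str
    weight_kg: float
    height_cm: float
    symptoms: list = field(default_factory=list)
    conditions: list = field(default_factory=list)
    diet: str = "unspecified"
```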

Generative AI Module (106):
The generative AI module (106) is responsible for generating natural language outputs as personalized health recommendations based on the structured medical report and user inputs. It comprises:
• a language interaction framework, implemented using LangChain, which handles prompt structuring, session flow and context management; and
• a large language model, such as LLaMA 3, which may be locally hosted using an LLM platform, such as Ollama, configured to process structured inputs and generate intelligent, medically relevant outputs.
This module forms the core of the system’s recommendation engine, producing dietary advice, lifestyle suggestions, and potential health risk assessments personalized to the user’s profile.
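Before the locally hosted LLM is invoked via the language interaction framework, the structured report findings and the user profile must be folded into a single prompt. The sketch below shows that prompt-assembly step in plain Python; the template wording and function name are assumptions about the internal format, not disclosed details:

```python
# Illustrative prompt template combining report findings and user context.
PROMPT_TEMPLATE = (
    "You are a health assistant. Patient profile: {profile}.\n"
    "Report findings: {findings}.\n"
    "Give diet and lifestyle recommendations in plain language."
)

def build_prompt(profile: dict, findings: list) -> str:
    """Render the recommendation prompt from profile data and report findings."""
    findings_text = "; ".join(f"{name} = {value} {unit}" for name, value, unit in findings)
    profile_text = ", ".join(f"{k}: {v}" for k, v in profile.items())
    return PROMPT_TEMPLATE.format(profile=profile_text, findings=findings_text)
```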

Chat Interaction Module (107):
The chat interaction module is configured to facilitate a natural language conversation between the user and the system. It supports multi-turn interactions, query clarification and follow-up dialogue, enabling an intuitive experience for report interpretation and health query resolution. It receives questions from the user regarding their medical report or health condition and generates context-aware responses using the generative AI module. This component ensures that recommendations are not static but evolve based on user interaction, thereby enhancing usability and trust.
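The contextual memory behind such multi-turn interaction can be sketched as a session object that accumulates exchanges and trims older turns to bound the context passed to the generative AI module. The class name and trimming policy here are illustrative assumptions:

```python
# Each (role, message) exchange is appended to a history that is replayed
# to the model on the next turn; only the most recent turns are retained.
class ChatSession:
    def __init__(self, max_turns: int = 10):
        self.max_turns = max_turns
        self.history = []           # list of (role, message) tuples

    def add(self, role: str, message: str) -> None:
        self.history.append((role, message))
        # Keep only the most recent exchanges within the context window.
        excess = len(self.history) - 2 * self.max_turns
        if excess > 0:
            self.history = self.history[excess:]

    def context(self) -> str:
        """Serialize the retained history for inclusion in the next prompt."""
        return "\n".join(f"{role}: {msg}" for role, msg in self.history)
```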

User Interface Module (108):
The user interface module (108) provides a front-end for user engagement with the system. It enables users to upload medical reports, enter personal health data, initiate chat sessions and view system-generated outputs. The interface is designed to be intuitive and minimal, catering to users with no technical or medical background. It also provides access to customer support details, if needed.

Method and Working of the invention:
In one exemplary embodiment, the adopted pipeline for information extraction and interpretation of the medical report is illustrated in Figure 1 showing the interconnection between various functional modules of the system implemented on a user device (D). The system (100) begins operation when a user uploads a medical report in image format through the image acquisition (101) module of the user device (D). The uploaded image is processed by the image preprocessing module (102) using a computer vision library, such as OpenCV (cv2), to enhance readability through operations such as smoothing, grayscaling, de-skewing and resizing. The image preprocessing module may be implemented using any image processing toolkit capable of performing grayscaling, resizing, de-skewing, and smoothing.
The processed image is then passed to the OCR module (103), which employs an optical character recognition (OCR) engine, such as Tesseract via the pytesseract wrapper, to convert the visual report into machine-readable text, as illustrated in Figure 2. This process includes detection of characters, words, and lines using language models embedded in the OCR engine. The resulting raw text is forwarded to the text processing module (104), where a natural language processing (NLP) library, such as SpaCy, performs tokenization, normalization and entity recognition. To filter out non-essential terms, a machine learning library, such as Scikit-learn is used to define stop words, resulting in structured extraction of relevant medical information from the report.
To enable contextual query resolution and meaningful response generation, the system integrates a generative AI module (106) comprising a language interaction framework, implemented using LangChain, and a large language model, such as LLaMA 3 which is locally hosted using an LLM platform, such as Ollama. The language interaction framework manages the interaction flow and formats the input prompt, while also importing and interfacing with the LLM. The large language model analyzes the cleaned and processed text and generates responses based on report data and contextual inputs. These outputs may include dietary suggestions, identification of chronic disease risks, suggested medications, or answers to follow-up queries. The AI-generated recommendations are personalized by combining the extracted report content with prior user inputs.
The user interface module (108) enables initialization of the system through three selectable options: import medical report, chat with us and customer care. The system prompts the user to enter personal data such as name, age, sex, height and weight (Figure 3), followed by the options to upload a report, chat, or access customer support (Figure 4). Upon selecting the report submission option, the user is prompted to either upload an image of their medical report or capture one using their device (D). This image is then processed by the OCR module (103) to extract relevant textual information. Simultaneously, the system (100) collects preliminary user data, such as symptoms, known medical conditions, current medications, smoking or alcohol habits, dietary preferences and exercise routine through the user input module (105), as illustrated in Figure 5. This combined input is subsequently analyzed by the LLM to generate a personalized interpretation of the medical report, along with tailored health recommendations (refer to Figure 6). The chat interaction module (107) is activated to allow multi-turn dialogue with the user for query-based follow-ups. Alternatively, if the user selects the Chat option directly, the system (100) allows the user to input general medical questions, which are handled by the same generative AI module (refer to Figure 7). The third option, customer care, reveals call and email contact details for user support. This option facilitates manual assistance for technical or medical queries as needed. A detailed flow diagram of the system representing the end-to-end method steps carried out by the system including image acquisition, OCR, NLP processing, LLM-based recommendation generation and response display is illustrated in Figure 8.
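The end-to-end flow of the report-submission path can be summarized structurally as follows. Each stage is a stub standing in for the corresponding module; the function names and return shapes are assumptions for illustration only:

```python
# Structural sketch of the pipeline: preprocess -> OCR -> NLP structuring
# -> LLM-based recommendation, with each stage injected as a callable.
def analyze_report(image, profile,
                   preprocess, extract_text, structure, recommend):
    """Run the pipeline described by Figure 8 and return the recommendation."""
    cleaned_image = preprocess(image)          # image preprocessing module (102)
    raw_text = extract_text(cleaned_image)     # OCR module (103)
    findings = structure(raw_text)             # text processing module (104)
    return recommend(findings, profile)        # generative AI module (106)
```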
In one embodiment, the system is implemented as a locally hosted application using a web-based interface framework configured to enable user interaction and real-time display of AI-generated responses. The web-based interface may be constructed using any suitable front-end development framework or toolkit that supports interactive data exchange and real-time output visualization such as, but not limited to, the Streamlit API for uploading of medical reports, entry of contextual user data and interaction with the system. Although the present invention is designed to operate locally on user device (D), the system (100) architecture is structured to also support cloud-based deployment, enabling broader accessibility, scalability and integration with external platforms.
Claims:
1. An intelligent medical report analysis and health recommendation system (100) comprising:
 an image acquisition module (101) configured to receive a medical report in image or scanned document format from a user device (D);
 an image preprocessing module (102) configured to process the image received from the image acquisition module by performing at least one of grayscaling, resizing, de-skewing or smoothing;
 an optical character recognition (OCR) module (103) configured to convert the preprocessed image into machine-readable text;
 a text processing module (104) configured to tokenize, normalize, and clean the extracted text and to identify medical entities, test values and context using natural language processing (NLP);
 a user input module (105) configured to collect user-specific contextual data including age, sex, lifestyle, symptoms and medical history;
 a generative AI module (106) comprising
• a language interaction framework configured to manage conversation flow and prompt formatting; and
• a large language model configured to process the cleaned text and contextual user inputs to generate personalized health recommendations;
 a chat interaction module (107) configured to provide interactive responses to user queries based on the uploaded medical report and the associated context; and
 a user interface module (108) configured to display selectable options to the user including uploading a medical report, initiating chat interaction, and accessing customer support;
wherein the system (100) is configured to integrate image-based medical report processing, NLP assisted structuring, contextual user input and generative AI inference within a unified architecture to enable automated, personalized and interactive health guidance including diet and lifestyle recommendations, based on the uploaded report, without requiring any medical professional intervention.
2. The system (100) as claimed in claim 1, wherein the user device (D) is a computing device comprising image acquisition capability, local memory, processing hardware, a graphical user interface and wireless communication functionality and is configured to locally execute image preprocessing, optical character recognition, text processing, generative AI and chat interaction functions of the system (100).
3. The system (100) as claimed in claim 1, wherein the image preprocessing module (102) is implemented using one or more computer vision libraries configured for image enhancement and noise reduction.
4. The system as claimed in claim 1, wherein the optical character recognition (OCR) module (103) comprises one or more OCR engines configured to extract textual data from medical report images.
5. The system as claimed in claim 1, wherein the text processing module (104) is implemented using natural language processing (NLP) to clean and structure the extracted medical text by performing tokenization, normalization, entity recognition and defining stop words.
6. The system as claimed in claim 1, wherein the user input module (105) is configured to receive structured data including name, age, sex, height, weight, symptoms, medications, exercise habits and dietary preferences from the user.
7. The system as claimed in claim 1, wherein the large language model is locally hosted on the user device (D) using an LLM platform configured to support on-device generative AI inference.
8. The system as claimed in claim 1, wherein the chat interaction module (107) maintains contextual memory across multiple user interactions to clarify medical report values, provide report-specific query responses and follow-up health recommendations in natural language.
9. The system as claimed in claim 1, wherein the user interface module (108) is configured to display the received data input forms, present AI-generated outputs comprising health recommendations and display customer care support options for technical support or unresolved queries.
10. A method for intelligent medical report analysis and health recommendation using the system (100) as claimed in claim 1, the method comprising the steps of:
 receiving a medical report from a user device (D) in image or scanned document format;
 preprocessing the image to enhance visual quality using one or more computer vision libraries;
 extracting text from the preprocessed image using one or more OCR engines;
 using natural language processing (NLP) to clean and structure the extracted text;
 receiving contextual user information including demographic attributes, symptoms and medical history;
 generating personalized health recommendations using a locally hosted large language model based on the structured report data and user inputs;
 displaying the generated personalized health recommendations through a graphical user interface of the computing device (D); and
 enabling interactive, context-sensitive responses to user queries via a chat interface linked to the uploaded report;
wherein the method enables automated and personalized health guidance from unstructured medical reports without requiring any medical professional intervention.
11. The method as claimed in claim 10, wherein the image preprocessing comprises one or more image enhancement techniques including grayscaling, resizing, de-skewing or noise reduction of uploaded images.
12. The method as claimed in claim 10, wherein the optical character recognition (OCR) comprises converting the pre-processed image into machine-readable text using said OCR engines.
13. The method as claimed in claim 10, wherein the NLP processing comprises tokenization, normalization, entity recognition and removal of irrelevant text by defining stop words.
14. The method as claimed in claim 10, wherein generating personalized recommendations comprises processing the structured report data and user contextual inputs using a generative AI model configured to generate personalized health recommendations in natural language.

Documents

Application Documents

# Name Date
1 202541059692-STATEMENT OF UNDERTAKING (FORM 3) [21-06-2025(online)].pdf 2025-06-21
2 202541059692-FORM FOR SMALL ENTITY(FORM-28) [21-06-2025(online)].pdf 2025-06-21
3 202541059692-FORM 1 [21-06-2025(online)].pdf 2025-06-21
4 202541059692-FIGURE OF ABSTRACT [21-06-2025(online)].pdf 2025-06-21
5 202541059692-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [21-06-2025(online)].pdf 2025-06-21
6 202541059692-EVIDENCE FOR REGISTRATION UNDER SSI [21-06-2025(online)].pdf 2025-06-21
7 202541059692-EDUCATIONAL INSTITUTION(S) [21-06-2025(online)].pdf 2025-06-21
8 202541059692-DRAWINGS [21-06-2025(online)].pdf 2025-06-21
9 202541059692-DECLARATION OF INVENTORSHIP (FORM 5) [21-06-2025(online)].pdf 2025-06-21
10 202541059692-COMPLETE SPECIFICATION [21-06-2025(online)].pdf 2025-06-21
11 202541059692-FORM-9 [22-06-2025(online)].pdf 2025-06-22
12 202541059692-FORM 18 [22-06-2025(online)].pdf 2025-06-22
13 202541059692-FORM-26 [17-09-2025(online)].pdf 2025-09-17