Abstract: The present invention relates to an AI-enabled Parawise reply generation system and method for generating responses to communications using a generative AI model (110) hosted on an LLM server (106) connected to a computing device (102). Users can choose between a manual "Reply Mode" and an AI-assisted "Auto Reply Mode". In Auto Reply Mode, contextual documents (130) are segmented (204) using a paragraph Fragmentor (118), vectorized (208) by a vectorizer (120), and processed by a prompt generator (122) to generate responses (310, 316) via the LLM model. These responses appear in editable fields (706), and can be reviewed (616), digitally signed (620), and submitted as a secure link (630). The system supports multiple file formats (DOCX, PDF, TXT) and stores all data in a document database (126) accessible through case room documents (128). The invention enhances productivity, reduces drafting time, and ensures accuracy by using contextual document analysis through the LLM interface (112).
Description:
FIELD OF THE INVENTION
[001] This invention relates to systems, methods, and implementations for automated Parawise reply generation using artificial intelligence, natural language processing, and domain-specific contextual document processing, particularly applicable in legal, compliance, and contractual communications.
BACKGROUND OF THE INVENTION
[002] In modern digital communication, particularly in legal, business, and contractual contexts, it is often necessary to respond to lengthy, paragraph-structured communications such as notices, claims, contractual drafts, or formal correspondences. Current response drafting methods require the recipient to manually review the incoming communication alongside reference documents, then compose a response paragraph by paragraph.
[003] Although numerous AI portals and text generation tools exist, they typically respond to a single user query and do not provide a mechanism to automatically generate Parawise replies aligned with the structure of the received communication. Such tools lack integrated processing of structured contextual document sets, resulting in inefficiencies, higher risk of omissions, and longer turnaround times.
[004] Further, existing systems do not offer a combination of manual drafting with contextual support and AI-assisted Parawise drafting, integrated within a single interface and workflow. Accordingly, there is a need for a system and method capable of:
a. Analysing the structure of received communications;
b. Processing a defined set of contextual documents (including PDFs, DOCX, XLSX, CSV, TXT, RTF, PPTX, and others);
c. Automatically correlating each paragraph of the received communication with relevant contextual information;
d. Producing accurate Parawise replies either manually (with user assistance), automatically using generative AI models, or through a combination of both; and
e. Combining personalization with efficiency techniques such as model compression and caching, so that the system responds more accurately to user-specific needs, uses less computational power, and generates outputs faster owing to lighter models and intelligent caching.
OBJECT OF THE INVENTION
[005] The primary object of the present invention is to provide a computer-implemented system and method for generating Parawise responses to received documents or communications using AI language models (LLMs), such that the responses are contextually accurate, efficient, and user-friendly.
[006] Another object of the invention is to enable automated fragmentation, vectorization, and contextual prompt generation from a set of documents, which are then utilized to create AI-generated responses aligned Parawise to the user's original communication.
[007] It is a further object of the invention to render the AI-generated responses in a structured, editable user interface, allowing for manual revisions, draft management, digital signing, and secure sharing.
[008] Yet another object of the invention is to facilitate integration with domain-specific templates, especially for legal, compliance, or formal contractual communication, by embedding LLM outputs into predefined document fields.
[009] A still further object is to allow for the continuous improvement of prompts through the retention of previous queries and responses, and to enable the use of multi-modal input formats including scanned documents, audio, or video content, where applicable.
[0010] An additional object of the invention is to ensure a transparent and auditable drafting process by incorporating features such as change tracking, metadata tagging, and case association.
[0011] Another object of the invention is to provide improved accuracy, reduced latency, and lower resource consumption.
SUMMARY OF THE INVENTION
[0012] The invention discloses a computing architecture comprising a client device, a document server with a document generation engine, and an LLM server. A received document set is fragmented into token-optimized segments, vectorized for contextual retrieval, and processed together with user queries to generate AI-assisted Parawise replies. The system supports a manual reply mode and an editable auto-reply mode, and integrates template population, real-time editing, secure digital signing, and case-room document grouping.
[0013] One of the embodiments of the present invention provides a computer-implemented method, system, and interface for generating Parawise responses to communications using generative AI. A user may select between:
a. Reply Mode – The received communication is displayed with blank text boxes beneath each paragraph for manual drafting. A split-screen interface enables the user to access reference materials, draft collaboratively, and manage rights-based document sharing.
b. Auto Reply Mode – The received communication is analysed by the system, which prompts the user to upload relevant contextual documents. The generative AI (LLM) processes these documents to draft Parawise replies automatically, which the user may review, edit, digitally sign, and forward.
[0014] The backend employs a document processing pipeline comprising the following components (an illustrative sketch of these components follows the list):
a. Paragraph Fragmentor – splits document sets into manageable fragments based on LLM token limits;
b. Vectorizer – converts fragments into document vectors for contextual matching;
c. Prompt Generator – creates AI prompts combining queries, document vectors, and prior conversation context; and
d. Text Generator – generates structured replies using predefined templates.
By integrating personalization, model optimization, and intelligent caching, the invention delivers enhanced accuracy for individual users while minimizing computational resource consumption and output latency:
e. Personalized Responses – tailors the AI output to align with individual user preferences and behaviours, thereby improving relevance and quality;
f. Reduced Computational Load – employs model compression techniques, such as pruning, quantization, or distillation, to reduce the size and complexity of the AI model, thus lowering processor demand during inference; and
g. Minimized Latency – utilizes efficient caching mechanisms to store and reuse frequently accessed computations or personalized model adjustments, accelerating response generation.
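By way of a non-limiting illustration only, the pipeline of components (a) to (d) above may be sketched as follows. This is a minimal, self-contained Python sketch; all function names are hypothetical, and the simplified stand-in stages would be replaced in practice by the paragraph Fragmentor, vectorizer, prompt generator, and LLM interface described with reference to FIGS. 1 to 3.

```python
# Minimal sketch of pipeline stages (a)-(d); all names are hypothetical.

def fragment(documents, max_chars=2000):
    """(a) Paragraph Fragmentor: split on paragraph breaks, then cap each
    piece at an approximate per-fragment size (a stand-in for token limits)."""
    pieces = []
    for doc in documents:
        for para in doc.split("\n\n"):
            pieces += [para[i:i + max_chars] for i in range(0, len(para), max_chars)]
    return pieces

def vectorize(fragments):
    """(b) Vectorizer: toy bag-of-words vectors keyed by fragment ID."""
    vocab = sorted({w for f in fragments for w in f.lower().split()})
    pos = {w: i for i, w in enumerate(vocab)}
    vectors = {}
    for fid, frag in enumerate(fragments):
        v = [0] * len(vocab)
        for w in frag.lower().split():
            v[pos[w]] += 1
        vectors[fid] = v
    return vectors

def build_prompt(query, fragments, history):
    """(c) Prompt Generator: combine prior turns, context, and the query."""
    prior = "\n".join(f"Q: {q}\nA: {a}" for q, a in history)
    return prior + "\nContext:\n" + "\n".join(fragments) + "\nQuery: " + query

def parawise_reply(documents, query, llm_generate, history=()):
    """(d) Text Generator: obtain a structured reply from the LLM."""
    frags = fragment(documents)
    vectorize(frags)  # vectors would drive contextual retrieval in practice
    return llm_generate(build_prompt(query, frags, list(history)))
```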
BRIEF DESCRIPTION OF THE DRAWINGS
[0015] The technology is described in detail below with reference to the attached drawing figures.
[0016] FIG. 1 illustrates an example of the backend process in which the concept and technology may be deployed, in accordance with an aspect described herein.
[0017] FIG. 2 illustrates an example of a process for generating document vectors from document fragments, in accordance with an aspect described herein.
[0018] FIG. 2A illustrates the flow of information from input to output, in accordance with an aspect described herein.
[0019] FIG. 3 illustrates an example of the process of providing a prompt to an LLM model for generating an output, in accordance with an aspect described herein.
[0020] FIG. 4A illustrates an original document received as input, and FIG. 4B and FIG. 4C illustrate an example of a template for drafting the response, in accordance with the aspects described herein.
[0021] FIG. 5A and FIG. 5B illustrate the flow diagram of the process for implementing aspects of the technology, in accordance with an aspect described herein.
[0022] FIG. 6 illustrates the front-end process flow for the “Reply Mode”.
[0023] FIG. 7 illustrates the front-end process flow for the “Auto Reply Mode” using generative AI.
DETAILED DESCRIPTION OF THE INVENTION
[0024] Some embodiments of the present technology provide a novel mechanism for generating upfront, accurate, editable Parawise replies using a deep learning and natural language processing–powered system. The technology transforms legal or contextual replies into an interactive and dynamic experience, analogous to supplying and receiving documents in legal processes such as court proceedings, arbitral tribunals, filing of suits, and negotiation. Users may interactively edit or remove contextual replies, triggering real-time updates.
[0025] In certain embodiments, the system can generate context-aware targeted replies based on objects identified by AI. For example, if the AI detects a conflicting text/vector, the system may prompt the user to confirm whether disassembly is required.
[0026] Some embodiments include the capability to generate follow-up questions automatically based on visual data provided by users. For instance, during a moving scenario, the AI may detect that an external clearance from other government offices or certifications is also required and ask the user whether additional services are needed, and then adjust the user's response.
[0027] Certain embodiments allow users to correct or update responses made by the system. If a reply is misidentified, the user may manually update the reply in Auto Reply mode, and the output estimate is adjusted in substantially real time. Moreover, if a user wishes to exclude a reply and reserve it for future submission, they can remove it from the response, and the system will reflect the change promptly. The current methods of computer-generated documents are limited to using a query to interrogate a dataset, recall information from a large database, and generate an analysed response in the form of a document.
[0028] The present invention reduces output latency and enhances accuracy by adaptively learning from the user, thereby decreasing the use of processors and AI algorithms, through the following techniques. Smaller specialized models: a heavy general-purpose model is customized into a fine-tuned smaller model that runs faster, thereby reducing inference latency. Model pruning and quantization: techniques that shrink model size (by pruning or quantizing weights) lower computational demand, leading to faster response times and less processor load. Cache-based optimization: adaptive caching, that is, recall of recent user queries or response patterns, avoids redundant computations and speeds up outputs; a response already accepted by the user against a prompt in previous drafts will generate the same response with custom modifications (a minimal caching sketch follows this paragraph). Personalization: models that gradually adapt to the user's preferences, such as recognizing writing style or frequently used terms, deliver more relevant and accurate outputs over time. Domain tuning: by training on user-specific data (such as preferred formatting or phrasing), the model better aligns with the user's needs, reducing irrelevant or off-target responses.
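As one hedged illustration of the cache-based optimization above, a reply already generated for an identical prompt can be replayed without a further model call. This is a minimal sketch assuming an in-memory dictionary keyed by a hash of the prompt; a deployed system might instead use an LRU policy or a persistent store.

```python
import hashlib

# Illustrative response cache: identical prompts reuse the stored reply
# instead of triggering a new inference pass. Names are hypothetical.
_response_cache: dict[str, str] = {}

def cached_generate(prompt: str, llm_generate) -> str:
    key = hashlib.sha256(prompt.encode("utf-8")).hexdigest()
    if key in _response_cache:        # cache hit: no processor-heavy inference
        return _response_cache[key]
    output = llm_generate(prompt)     # cache miss: single model call
    _response_cache[key] = output
    return output
```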
[0029] The present invention uses trained Large Language Models (LLMs), such as generative AI models, to generate a Parawise response using information from a defined document set. This concept can be deployed in a variety of use case scenarios, such as legal processes, business contractual communication, negotiation of instruments, and so forth.
[0030] The disclosed system is adaptable to multiple operational domains, including legal documentation, business correspondence, technical report drafting, and structured data-based communication. By integrating para-wise contextual drafting with automated document parsing and AI-assisted response generation, the system reduces manual drafting time by an estimated 40–60% in tested scenarios, while increasing content accuracy through contextual reference validation.
[0031] Referring to FIG. 1, the system comprises a computing device (102) connected via a network (104) to an LLM server (106) and a document server (114). The LLM server includes an LLM engine (112) accessing an LLM model (110) stored in an LLM database (108). The document server includes a document generation engine (116) connected to a document database (126).
[0032] In FIG. 2, a paragraph Fragmentor (118) divides the uploaded document set (202) into fragments (204a–204d). These are vectorized by vectorizer (120) to generate a document vector set (212) linked to fragment IDs (210). This enables efficient contextual retrieval during prompt generation.
Prompt Generation and Output
[0033] LLM model 110 comprises a generative AI model with a request for the user to provide contextually relevant input in the form of text and data. Some examples of generative AI models that are LLMs are ChatGPT, Gemini, etc. The proposed Auto Reply mode will also use an LLM model.
[0034] With reference now to FIG. 1, an example operating environment 100 in which aspects of the technology may be employed is provided. Among other components or engines not shown, operating environment 100 comprises computing device 102, in communication via network 104 with LLM server 106 and document server 114.
[0035] Network 104 may include one or more networks (e.g., public network or virtual private network [VPN]), as shown with network 104. Network 104 may include, without limitation, one or more local area networks (LANs), wide area networks (WANs), or any other communication network or method.
[0036] It is noted and emphasized that any additional or fewer components, in any arrangement, may be employed to achieve the desired functionality within the scope of the present disclosure. Although the various components of FIG. 1 are shown with lines for the sake of clarity, in reality, delineating various components is not so clear, and metaphorically, the lines may more accurately be grey or fuzzy. Although some components of FIG. 1 are depicted as single components, the depictions are intended as examples in nature and in number and are not to be construed as limiting for all implementations of the present disclosure. The functionality of operating environment 100 can be further described based on the functionality and features of its components. Other arrangements and elements (e.g., machines, interfaces, functions, orders, and groupings of functions, etc.) can be used in addition to or instead of those shown, and some elements may be omitted altogether.
[0037] Further, some of the elements described in relation to FIG. 1, such as those described in relation to document generation engine 116 and those executed by LLM server 106, are functional entities that may be implemented as discrete or distributed components or in conjunction with other components, and in any suitable combination and location. Various functions described herein as being performed by one or more entities may be carried out by hardware, firmware, or software. For instance, various functions may be carried out by a processor executing computer-executable instructions stored in memory, such as LLM database 108 and document database 126. Moreover, functions of document generation engine 116 or other functions described in the disclosure may be performed by computing device 102, LLM server 106, document server 114, or any other component, and in any combination. Thus, it will be realized that in other suitable operating environment arrangements, functions may be executed in various combinations, orders, and devices. Thus, for the sake of example, while document generation engine 116 is shown as executed by document server 114, some of these functions could be executed by computing device 102 or LLM server 106. Likewise, functions that will be described as performed by LLM engine 112 of LLM server 106 may be performed by computing device 102 or document server 114, and so forth.
[0038] Continuing with the example illustrated in FIG. 1, computing device 102 generally communicates with document server 114 to input information and receive outputs. As with other components of FIG. 1, computing device 102 is intended to represent one or more computing devices. One suitable example of a computing device that can be employed as computing device 102 is described in connection with FIG. 6. In implementations, computing device 102 is a client-side or front-end device.
[0039] A user can use computing device 102 to input a document set. A document set may comprise one or more documents that relate to an event. For instance, in the context of using the technology to generate a legal reply, the document set can include all of the documents related to a single case. These documents may be combined into a single file or stored as multiple files. In some cases, document sets may comprise hundreds or thousands of pages, although all size document sets are contemplated.
[0040] In an aspect, computing device 102 may be used by a user to input queries that request information derived from a document set, such as requesting contextually relevant information responsive to a query, or requesting generation of a document responsive to the query and comprising the contextually relevant information. Such information or documents may be received by computing device 102 and rendered at a display device for providing to a user.
[0041] Documents in a document set may comprise any form of document. These include text, image, and audio-based files. Some example document types intended to be within the scope of documents that may be found in the document set include DOC/DOCX, PDF (Portable Document Format), XLS/XLSX, CSV (Comma Separated Values), TXT (plain text file), RTF (Rich Text Format), PPT/PPTX, ODT, ODS, HTML (HyperText Markup Language), JSON (JavaScript Object Notation), XML (eXtensible Markup Language), MP4 (MPEG-4), MOV, WMV (Windows Media Video), AVI (Audio Video Interleaved), MP3 (MPEG Audio Layer III), AAC (Advanced Audio Coding), WMA (Windows Media Audio), and so forth.
[0042] In general, document server 114 receives information from computing device 102, such as a document set, and generates outputs or documents with information derived from the document set. It does so, for example, by communicating with LLM server 106 to employ LLM model 110 to derive the information.
[0043] Referring briefly to LLM server 106, LLM server 106 generally employs LLM engine 112. LLM engine 112 accesses LLM model 110 stored in LLM database 108 to receive inputs, such as prompts, and generate outputs by providing the prompts to LLM model 110, which generates an output in accordance with its training.
[0044] LLM database 108 generally stores information, including data, computer instructions (e.g., software program instructions, routines, or services), or models used in embodiments of the described technologies. For instance, such stored information may be used by LLM engine 112. Although depicted as a single database component, LLM database 108 may be embodied as one or more databases or may be in the cloud. In aspects, LLM database 108 is representative of a distributed ledger network. While illustrated as part of LLM server 106, in another configuration, LLM database 108 is remote from LLM server 106. In connection with FIG. 6, memory 612 describes some example hardware suitable for use as LLM database 108.
[0045] LLM model 110 may comprise a generative AI model. Generative AI starts with a prompt that could be in the form of text, images, videos, or audio, or any input that LLM model 110 can process based on its training and model configuration. LLM model 110 then generates and returns new content in response to the prompt. In general, LLMs are advanced machine learning models that can understand natural language inputs and provide contextually relevant natural language outputs. Content can include a myriad of contextual information, which are responsive to the inputs, such as prompts or other information, such as a document set. Some example outputs include contextually relevant text, solutions to problems, or realistic images or audio.
[0046] Some example generative AI models that are LLMs and may be suitable for use with the current technology include ChatGPT, Bard, DALL-E, Midjourney, DeepMind, and the like. LLM model 110 may be a single LLM model or may be multiple models working in coordination to generate an output.
[0047] LLM model 110 can be trained so that it generates a response in accordance with its training. During training, the model learns to predict a target output (like the next word in a sequence or a masked word) based on input vectors. The “knowledge” of the model is encoded in the weights that define how it transforms and combines the input vectors to make its prediction.
[0048] As an example, one suitable model for LLM model 110 comprises a transformer architecture, having an encoder to process an input, and a decoder to process the output, e.g., a generative pre-trained transformer. The model can be pre-trained using a large document corpus. Some commonly used textual datasets are Common Crawl, The Pile, MassiveText, Wikipedia, and GitHub. The datasets may run up to 10 trillion words in size. The text can be split into tokens, e.g., words or characters. The transformer architecture can then be trained to predict the next token in a sequence based on the training data. For instance, this may be done via backpropagation, which calculates the gradient of the loss with respect to the model parameters, and an optimisation algorithm, which adjusts the parameters to minimise the loss. The Adam optimisation algorithm may be used for this process.
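For illustration only, the next-token training step described above can be sketched in PyTorch as follows; the toy model, random token batch, and hyperparameters are placeholders rather than the actual pre-training corpus or architecture.

```python
import torch
import torch.nn as nn

# Schematic next-token prediction step: cross-entropy loss is
# backpropagated and the Adam optimiser adjusts the parameters.
vocab_size, dim = 1000, 64
model = nn.Sequential(nn.Embedding(vocab_size, dim), nn.Linear(dim, vocab_size))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

tokens = torch.randint(0, vocab_size, (8, 33))    # toy batch of token ids
inputs, targets = tokens[:, :-1], tokens[:, 1:]   # predict the next token

logits = model(inputs)                            # (batch, seq, vocab)
loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))
loss.backward()                                   # backpropagation of the loss gradient
optimizer.step()                                  # Adam parameter update
optimizer.zero_grad()
```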
[0049] The pre-trained model can be fine-tuned using supervised learning. Here, a dataset is generated with input-output pairs that are known. For natural language processing, word and sentence structures can be used as the dataset, providing a natural language input and a known appropriate response. In some cases, a dataset corresponding to a field of a document set for which the model is used may be suitable for fine tuning the model. That is, if the model is to be used to draft legal documents, fine-tuning may be done using a corpus of legal documents, and likewise for other fields. The fine-tuned model may then be subject to further optimization processes and provided for use as LLM model 110.
[0050] One example training process suitable for training LLM model 110 is described in Training Language Models to Follow Instructions with Human Feedback, Long Ouyang, et al., 4 Mar. 2022, available at https://doi.org/10.48550/arXiv.2203.02155, which is hereby expressly incorporated by reference in its entirety.
[0051] Having trained LLM model 110, LLM model 110 is stored at LLM database 108 for use by LLM engine 112 when employed by LLM server 106. Document server 114, in the illustrated example, communicates with LLM server 106 to provide prompts, e.g., inputs to LLM model 110, and receive outputs for use by document generation engine 116 in generating information or documents that are provided to computing device 102, as will be further described.
[0052] At a high level, document server 114 is a computing device that implements functional aspects of operating environment 100, such as one or more functions of document generation engine 116 to generate information or documents and provide them to computing device 102. One suitable example of a computing device that can be employed as document server 114 is described in connection with FIG. 6. In implementations, document server 114 represents a back-end or server-side device.
[0053] Components of document server 114 may interface with components of LLM server 106 to perform certain functions. Generally, LLM server 106 is a computing device that implements functional aspects of operating environment 100, such as one or more functions of LLM engine 112, to receive inputs and output contextually relevant information by employing LLM model 110. This can be utilized by components of document server 114 to generate contextually relevant information from a document set and generate documents using such information. One suitable example of a computing device that can be employed as LLM server 106 is described in connection with FIG. 6. While document server 114 and LLM server 106 are illustrated as separate servers employing separate engines, in other aspects of the invention one or more servers may be used to implement the described functionality.
[0054] To generate contextually relevant information derived from a document set, document server 114 may employ document generation engine 116.
[0055] In the example illustrated, document generation engine 116 comprises paragraph Fragmentor 118, vectorizer 120, Prompt Generator 122, and text generator 124. These components may access or otherwise store information within document database 126.
[0056] Document database 126 generally stores information, including data, computer instructions (e.g., software program instructions, routines, or services), or models used in embodiments of the described technologies. For instance, such stored information may be used by document generation engine 116. Although depicted as a single database component, document database 126 may be embodied as one or more databases or may be in the cloud. In aspects, document database 126 is representative of a distributed ledger network. While illustrated as part of document server 114, in another configuration, document database 126 is remote from document server 114. In connection with FIG. 6, memory 612 describes some example hardware suitable for use as document database 126.
[0057] In some aspects of the technology, document server 114 provides case room features to computing device 102. In doing so, documents pertaining to a specific event may be associated with one another. A user may access a case room, and in doing so, document generation engine 116 of document server 114 may access case room documents 128, which are documents corresponding to the specific event. As an example, in a legal use case, the case room documents may be grouped based on a single case. In an insurance use case, specific documents may be grouped based on a single claim. Some of these documents may include the document set, such as document set 130, as well as LLM queries and outputs, such as LLM queries and outputs 134, as will be further described.
[0058] In general, document generation engine 116 can generate contextually relevant information from a document set, such as document set 130. In aspects, the information generated by document generation engine 116 is provided to computing device 102 or may be used to generate a document, which may be provided to computing device 102. To do so, document set 130, having been received from computing device 102 and stored in document database 126, may be accessed and divided into a plurality of document fragments using paragraph Fragmentor 118.
[0059] In an implementation, paragraph Fragmentor 118 is configured to divide document set 130 into a plurality of document fragments using a recursive character text splitter. A recursive character text splitter is a function or algorithm that uses recursion to divide a given text string into smaller units based on certain conditions or delimiters, such as spaces, commas, or other characters. Recursive splitting can be used to parse sentences, identify grammatical structures, or handle nested structures in text data.
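A minimal sketch of such a recursive character text splitter follows, assuming a fixed delimiter hierarchy from paragraph breaks down to spaces; this is illustrative and not any particular library's implementation.

```python
# Recursive character text splitting: try the coarsest delimiter first and
# recurse with finer delimiters whenever a piece still exceeds the limit.
def recursive_split(text, limit, delimiters=("\n\n", "\n", ". ", " ")):
    if len(text) <= limit:
        return [text]
    if not delimiters:  # no delimiter left: hard cut at the size limit
        return [text[i:i + limit] for i in range(0, len(text), limit)]
    head, *rest = delimiters
    pieces = []
    for part in text.split(head):
        pieces.extend(recursive_split(part, limit, tuple(rest)))
    return [p for p in pieces if p]
```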
[0060] In an aspect, document set 130 is subject to OCR (optical character recognition) to determine text within the document. Audio and video may be provided in their native audio or video formats, or may be transcribed to text and divided using paragraph Fragmentor 118.
[0061] In some cases, the document division is based on a limitation of LLM model 110. While LLM model 110 will be further described, some LLM models that are suitable are computationally expensive to employ, meaning they use large amounts of computing resources. As such, some LLM models, such as commonly used generative AI models like ChatGPT, have a token limitation. Thus, paragraph Fragmentor 118 may divide document set 130 into a plurality of document fragments based on the token limitation of LLM model 110.
[0062] For example, in the context of using generative AI, such as ChatGPT, tokens refer to the units into which the input text is divided for processing. In natural language processing (NLP), a token can represent a single character or a word, depending on the granularity chosen.
[0063] To provide an example, consider the sentence: “I love ice cream.” In a character-level tokenization, each character (including spaces) would be treated as a separate token: ['I', ' ', 'l', 'o', 'v', 'e', ' ', 'i', 'c', 'e', ' ', 'c', 'r', 'e', 'a', 'm', '.']. In a word-level tokenization, each word would be treated as a separate token: ['I', 'love', 'ice', 'cream', '.'].
[0064] Generative AI models like ChatGPT have a maximum limit on the number of tokens they can process in one go. For instance, at the time of drafting this disclosure, the token limit for gpt-35-turbo is 4096 tokens, whereas the token limits for gpt-4 and gpt-4-32k are 8192 and 32768, respectively. These limits include the token count from both the message array sent and the model response. If the input text exceeds this limit, it needs to be truncated or split into smaller chunks to fit within the model’s capacity.
[0065] Thus, based on identifying the token limit for the particular model being employed as LLM model 110, Paragraph Fragmentor 118 splits document set 130 into the plurality of document fragments. In doing so, each fragment may comprise less than the total token limitation, such as the limitation of characters or words determined by the token limitation.
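Purely as an illustration of checking a fragment against a model's token capacity, token counts can be measured with a tokenizer before dispatch. The sketch below assumes the tiktoken library as an example BPE encoder; the deployed LLM model 110 would impose its own tokenization.

```python
import tiktoken  # example BPE tokenizer; the deployed model's own applies

enc = tiktoken.get_encoding("cl100k_base")

def fits(fragment: str, token_limit: int = 4096, response_reserve: int = 512) -> bool:
    """Keep the fragment within the model's capacity (e.g., 4096 tokens for
    gpt-35-turbo as noted above), reserving headroom for the response."""
    return len(enc.encode(fragment)) <= token_limit - response_reserve
```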
[0066] In some cases, paragraph Fragmentor 118 identifies the document fragments based on delimiters. That is, Paragraph Fragmentor 118 may use recursive character text splitting to identify a specific delimiter before a threshold number of characters or words, e.g., a threshold corresponding to a token capacity of LLM model 110, or other determined or received threshold value. In this way, paragraph Fragmentor 118 may reduce the chance that text is divided between document fragments in a manner that reduces the context of the divided text. For instance, the delimiter chosen may be a period, paragraph return, section heading, page break, and so forth. This aids in keeping contextually similar text grouped within each document fragment.
[0067] In some aspects, paragraph Fragmentor 118 divides document set 130 into document fragments based on a data size of the fragment. For example, a threshold data size value may be determined or otherwise received. Paragraph Fragmentor 118 may divide document set 130 such that each document fragment has a data size equal to or less than the threshold data size. For instance, this may be done to identify and provide document fragments (or vectors thereof, as will be described) to LLMs that may receive image, audio, or video, while it also may be suitable for text-based document sets as well. Any threshold data value may be used, although some examples could include 50 KB (kilobytes), 100 KB, 250 KB, 500 KB, 1 MB (megabyte), 50 MB, 100 MB, 500 MB, 1 GB (gigabyte), 5 GB, 10 GB, 50 GB, and so forth.
[0068] In some aspects, paragraph Fragmentor 118 divides document set 130 by content. In such cases, each document fragment is divided so that each document fragment includes related content. That is, the content in a document fragment is all related to a same subject. For example, this may be done by dividing document set 130 based on pages, file type, section headings, or other like subject matter delimiters.
[0069] Having divided the document into a plurality of document fragments, the document fragments can be represented as a vector using vectorizer 120. A document vector is a mathematical or computational structure that comprises an ordered list of numbers. Thus, each document fragment is represented as a point in a multidimensional vector space, and each dimension corresponds to a feature derived from text (such as a specific word or phrase), images, audio, or video within the document fragment.
[0070] Vectorizer 120 may utilize a vectorizing algorithm. Some examples that may be suitable for use include Bag of Words (BoW), TF-IDF (Term Frequency-Inverse Document Frequency), and Doc2Vec. The document vectors can be used to identify similarity between document fragments, classify or cluster document fragments, or serve as input to machine learning models, such as LLM model 110, as will be further described, among other uses.
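By way of illustration, TF-IDF vectorization and similarity-based retrieval of document fragments can be sketched with scikit-learn as follows; the two fragments and the query are placeholders.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Vectorize fragments with TF-IDF and retrieve the fragment most similar
# to a query; fragment IDs here are simply list positions.
fragments = ["Clause 4 fixes the delivery date.", "Payment is due in 30 days."]
vectorizer = TfidfVectorizer()
fragment_vectors = vectorizer.fit_transform(fragments)   # document vector set

query_vector = vectorizer.transform(["When is payment due?"])
scores = cosine_similarity(query_vector, fragment_vectors)[0]
best = int(scores.argmax())                              # most relevant fragment ID
print(fragments[best], scores[best])
```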
[0071] FIG. 2 illustrates a process performed by Paragraph Fragmentor 118 and vectorizer 120. Here, document set 202 may be a document set such as those previously described, and may contain one or more pages of documents of one or more file types. Document set 202 may be a text only document, or may include other forms of media, such as images, audio, or video.
[0072] Document set 202 is provided to paragraph Fragmentor 118, which divides document set 202 into document fragments 204. Document fragments 204 comprise a plurality of document fragments that includes document fragment 206a, document fragment 206b, document fragment 206c, and document fragment 206d. While illustrated as four document fragments, it is contemplated that document fragments 204 could include any number of document fragments within the plurality of document fragments.
[0073] Each document fragment of document fragments 204 is provided to vectorizer 120, which generates document vectors 208. In this example, an index that includes document fragment ID 210 corresponding to each document fragment of document fragments 204 is shown in respective association with corresponding document vectors of document vector set 212. It will be realised that document vectors 208 may be represented or stored in other forms that can be accessed by a computing device, and the one illustrated with respect to FIG. 2 is only one example. Document vectors 208 may be stored for future use by computing devices, such as storing document vectors in document database 126 for use by LLM server 106 and components thereof. For example, vectors representing document fragments generated from document set 130 are stored in case room documents 128 as document vector set 132.
[0074] Prompt Generator 122 generally generates a prompt for prompting LLM server 106. The prompt is provided to LLM model 110 by LLM engine 112 to generate an output, which is received by document server 114. As will be described, the information generated by LLM engine 112 may be new content that is derived based on document set 130.
[0075] Prompts may include any natural language text string (Hindi, Tamil & Telugu) with information or a request for information.
[0076] Depending on the LLM model 110, a prompt may also include other media, such as images, video, or audio. Some prompts may include requests for document generation.
[0077] Prompts generated by Prompt Generator 122 can include queries received from computing device 102. That is, a user can input a query at computing device 102, which can include any natural language query with a request for information or document generation. Queries received from computing device 102 may also include images, video, or audio in some cases. Queries from computing device 102 may comprise a document set identifier, such as a case number, claim number, or other type of identifier. In an aspect, the document set identifier may be based on the user inputting the query in a case room. This identifier may be used by Prompt Generator 122 to identify case room documents 128, including document set 130, document vector set 132, LLM queries and outputs 134, or other case room related documents.
[0078] When generating an output responsive to a prompt, LLM engine 112 may provide a document set, or vectors thereof, to LLM model 110, which generates and provides an output based on the document set. That is, the content generated as part of the output may be contextually relevant to the query provided in the prompt and derived from the document set, such as document set 130. In this way, prompts may include queries that request information that is derived from a document set, such as summaries of the document set, questions related to the content of the document set, and so forth. LLM model 110 processes the prompt to determine the natural language context and may provide a natural language output satisfying the prompt using the information derived from the document set. Thus, for instance, Prompt Generator 122 may receive a query from computing device 102, generate a prompt having the query along with document vector set 132 (which corresponds to the vectors of the document set identified in relation to the query), and provide a prompt to LLM server 106 for processing by LLM engine 112. The output responsive to the prompt may be provided by LLM server 106 and received by document server 114.
[0079] FIG. 3 illustrates an example prompt that may be generated by Prompt Generator 122. Here, first prompt 302 comprises first query 304, which may be received from a computing device, such as computing device 102. First prompt 302 further comprises document vector set 306, which includes a set of vectors that each correspond to a document fragment generated from a document set, as was described with reference to paragraph Fragmentor 118 and vectorizer 120. First prompt 302 is provided as an input to LLM model 308. LLM model 110 is an example usable as LLM model 308. In response, LLM model 308 provides a first output 310, which is responsive to first query 304 and comprises information contextually relevant to first query 304, as derived from a document set from which document vector set 306 was generated. In some cases, first prompt 302, or components thereof, such as the query, and the first output 310 may be stored for later use by components of document generation engine 116. For instance, these may be stored in document database 126 as part of LLM queries and outputs 134.
[0080] A user may provide subsequent queries related to the same document set, e.g., by using a case room or identifying the document set in another particular manner. In doing so, the response to the query may be derived from the document set, such as document set 130, in addition to being responsive to the context of previous queries and outputs, such as those generated and illustrated in FIG. 3. That is, some LLM models suitable for use as LLM model 110 not only provide contextually relevant responses to a query, but also do so in the context of prior queries and prior outputs. This versatility reduces the number of inputs a user has to provide for the model to understand the contextual relevance of the query. It allows the user to provide inputs that are more akin to a natural language discussion. Another prompt may be generated by Prompt Generator 122 in this manner. In this example, second prompt 312 comprises second query 314, which is received from a computing device, such as computing device 102. In an aspect, second prompt 312 is a prompt subsequent to first prompt 302. It may be a request for information or to generate a document in which context is needed from a prior prompt to output a contextually relevant response. As such, second prompt 312 may relate to the same document set as first prompt 302, and therefore second prompt 312 further comprises document vector set 306. So that LLM model 308 can provide an output based on a prior query, second prompt 312 also comprises first query 304 and first output 310.
[0081] Having received second prompt 312 as an input, LLM model 308 outputs second output 316. Second output 316 is responsive to second query 314 and provides contextually relevant information derived from the document set and the prior queries and outputs, such as first query 304 and first output 310. It will be realised that any number of prior queries and outputs may be provided to LLM model 308. In doing so, LLM model 308 can provide outputs having information derived from a document set with the context of any prior queries and outputs related to the document set.
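The prompt structure of FIG. 3 may be sketched as follows; the dictionary fields and example strings are illustrative assumptions, showing how a second prompt carries the new query together with the document vector set and the prior query/output pair.

```python
# Illustrative prompt assembly: each prompt bundles the query, the document
# vector identifiers, and any prior conversation turns.
def assemble_prompt(query, document_vector_ids, history):
    return {
        "query": query,
        "document_vectors": document_vector_ids,
        "conversation": [{"query": q, "output": o} for q, o in history],
    }

first_prompt = assemble_prompt("Summarise the notice.", ["v1", "v2"], history=[])
# ... first output received from the LLM ...
second_prompt = assemble_prompt(
    "Draft a Parawise reply to paragraph 3.", ["v1", "v2"],
    history=[("Summarise the notice.", "<first output 310>")],
)
```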
[0082] In an aspect, a query received by computing device 102 requests information derived from document set 130. In such cases, once the output is received from LLM server 106, the output may be provided by document server 114 to computing device 102 as a response to the query. In other cases, the information may be used to generate a document.
[0083] To generate a document, document generation engine 116 may employ text generator 124. In general, text generator 124 generates a document based on information received from LLM server 106, responsive to a prompt.
[0084] In one example method, text generator 124 uses a document template to generate a document. The document template may be accessed from document templates 136 stored in document database 126. In some cases, a document template comprises fields. Each field is a location where text or images may be inserted to complete the document generated using the document template. In some cases, a field has a corresponding descriptor that identifies the content that should be placed into the field. Put another way, the descriptor describes the input to the field.
[0085] FIG. 4A illustrates an example document template 400. This particular example is for a legal complaint. It includes various text fields, along with their corresponding descriptors indicating the information to be placed within each field. One example is document template field 402, which has a corresponding descriptor 404 identifying the information to input into field 402 when generating a document from document template 400.
[0086] In an example case, to generate a document, a descriptor may serve as a query to include within a prompt generated by Prompt Generator 122. The prompt may comprise further queries from a computing device, document vectors, or any other prior queries and outputs related to a document set for which information is derived when generating the document.
[0087] The prompt may be provided for input to a model, such as LLM model 110. The output provided by the model is inserted into the field corresponding to the query to generate the document. FIG. 4B illustrates an example of document template 400 of FIG. 4A having outputs from a model inserted into fields to generate a document. One example illustrated includes field 402 and output 406, which has been inserted into field 402.
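A minimal sketch of this template-population step follows, assuming a hypothetical template string whose field descriptors are posed as queries and whose fields are filled with the model's outputs.

```python
# Illustrative template population: each field's descriptor becomes a query,
# and the model output is inserted into the corresponding field.
TEMPLATE = "IN THE COURT OF {court}\n\nCOMPLAINT\n\n1. {parties}\n2. {relief}"

DESCRIPTORS = {
    "court": "Name the court with jurisdiction over this matter.",
    "parties": "Identify the parties from the document set.",
    "relief": "State the relief sought by the complainant.",
}

def generate_document(llm_generate):
    outputs = {field: llm_generate(desc) for field, desc in DESCRIPTORS.items()}
    return TEMPLATE.format(**outputs)
```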
[0088] It will be understood that this is one example method in which a document may be generated using the technology. In another aspect, an LLM model, such as LLM model 110, is trained on documents of the same document type as the document requested to be generated. Thus, as an example, an LLM model may be trained using a dataset that comprises legal documents, including complaints, that have been indicated as such within the training data. Based on this, the model may generate documents of the same document type. The generated document may have new content generated by the LLM to complete the generated document, where the new content is derived from a document set. The document may be generated in response to a query from a computing device, and a prompt comprising the query and a document vector set is input to the model. The generated document may be provided to the computing device and rendered at a display.
[0089] Generated documents may be stored for future use. As illustrated in FIG. 1, a document generated using text generator 124 may be stored as generated documents 138.
[0090] Turning now to FIG. 5, flow diagram 500 having an example process for implementing aspects of the technology is provided. As will be understood from further discussion, flow diagram 500 is illustrated with reference numerals to aid in describing the process. The order of the reference numerals is not intended to impart any particular order or sequence of the process. The illustration in FIG. 5 is an example, and it will be realized by those of ordinary skill in the art that other processes having more or fewer operations can be performed from the described technology, just as those operations in FIG. 5 may depart from the order in which they are illustrated in some aspects of the technology.
[0091] Flow diagram 500 starts at block 502 and proceeds to initialize an application at block 504. In general, an application is a software program that comprises instructions for performing one or more of the operations described throughout this disclosure. The application may be stored locally at a computing device, such as computing device 102, or may be remote from computing device 102, or may comprise one or more applications that are local, remote, or both.
[0092] Upon initializing the application at block 504, user interface elements may be rendered and displayed at a computing device. At block 506, an interface element is displayed that permits upload of a document set to a server, such as document server 114. Here, this is illustrated as displaying a sidebar permitting API key input or file uploading. If a selection is made for an API key input, for instance, one that identifies a case room or other document set identifier, then at block 508 an API session is opened corresponding to the API key input. If a selection is made to upload a file, a file is uploaded and submitted at block 510.
[0093] At block 512, document vectors are retrieved. At block 514, a chat memory is created. This may include saving queries and outputs as previously described. The AI queries and outputs may be grouped or otherwise saved with respect to a particular document set, e.g., a case room where subsequent queries, which continue from prior queries and outputs, can be generated for the document set.
[0094] At block 516, application tools are created or otherwise initialized. Some example application tools include a search tool at block 518, a write file tool at block 520, and a read file tool at block 522. These tools provide the user with various functionalities, respectively: the ability to search for document sets and other information related to the document set, such as prior queries and responses; the ability to create new document sets or case rooms for document sets; and the ability to access a document set; and the like.
[0095] At block 524, a communication link with an LLM server is established. At block 526, an initial prompt to the LLM server is provided to initiate a message thread with an LLM model.
[0096] At block 528, files are uploaded. For instance, this may be a document set. These may be uploaded at the computing device via the user interface. At block 530, the uploaded file is parsed, and at block 532, the parsed file or the document set is checked to determine a file type. Based on the file type, various functions may be used to parse the document. For example, if the document is a PDF, a PDF parsing function is used at block 534; if the document is a DOCX file, a DOCX parsing function is used at block 536; if the document is a text document (TXT), a TXT parsing function may be used at block 538; and so forth, based on the file type.
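A minimal sketch of the file-type check at blocks 532 to 538 follows; the parser functions are hypothetical stubs standing in for format-specific libraries.

```python
from pathlib import Path

# Illustrative dispatch on file type: the extension selects a parser.
def parse_pdf(path):  raise NotImplementedError("would call a PDF library")
def parse_docx(path): raise NotImplementedError("would call a DOCX library")
def parse_txt(path):  return Path(path).read_text(encoding="utf-8")

PARSERS = {".pdf": parse_pdf, ".docx": parse_docx, ".txt": parse_txt}

def parse_file(path: str) -> str:
    suffix = Path(path).suffix.lower()
    if suffix not in PARSERS:
        raise ValueError(f"unsupported file type: {suffix}")
    return PARSERS[suffix](path)
```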
[0097] The parsed document is cleaned at block 540. For instance, duplicate documents may be removed. OCR (optical character recognition) may be applied to the document to determine characters and words present in the document. Irrelevant documentation may be removed.
[0098] Moreover, some file types may be converted into another type of file for vectorization, which may be dependent on input requirements for the algorithm vectorizing the document. In one example, at block 542, files (e.g., the document set) are converted to DOCX files.
[0099] At block 544, the document set is divided into document fragments. This can be done using recursive character text splitting, as described. At block 546, the document fragments are created based on the division determined at block 544, and the document fragments are tagged with metadata to indicate the portion of the document set from which the document fragments were divided. They may be tagged with an identifier identifying the document fragment or document set, among other metadata. The document fragments may be indexed at block 548.
[00100] At block 550, vectors are generated for each document fragment. These may be provided to an LLM model at block 554. If there is an error with LLM server 106, then an error message may be displayed at the computing device, as illustrated at block 552.
[00101] A prompt is generated at block 556. The prompt may include a query related to the document set that is received from the computing device. The prompt may further comprise the document vector set. Various other elements may be added when generating the prompt. Some examples include constraints at block 558, tools at block 560, resources at block 562, and performance evaluation at block 564. From these, a prompt string is generated at block 566, and the request is executed at block 568, e.g., by communicating the generated prompt string to an LLM server, such as LLM server 106.
[00102] Responsive to communicating the prompt at block 568, an output is received from the model at block 570. Tokens corresponding to the model capability are counted at block 572. Where the answer references source documents, e.g., from the document set, the source documents may be retrieved at block 574. Source documents may be retrieved by identifying them via the index. Other documents may be retrieved from a data store or via a network, such as an intranet or the Internet. At block 576, the output is communicated to the computing device as the answer, where the computing device displays the answer in response. The output (e.g., the answer) may be stored for future use, such as being included in subsequent prompts, at block 578. The process finishes at block 580.
[00103] As noted, a user query may be included in the prompt. At block 582, a user query is received and processed. The API key and document configuration are checked at block 584. For instance, it may be determined whether the query is associated with a particular document set. If the API is open and the document set is available, the query is included in the prompt at block 566. If API communication is not established with the model server, an error message may be displayed at block 586.
[00104] FIG. 6 illustrates the process of the receiver choosing the “Reply Mode” option and drafting the response without any generative AI tool. References 602 and 604 represent the receiver using “Reply Mode” to open the document received from the sender. At 606, text boxes appear below each of the paragraphs received from the sender. Reference 608 indicates the process of the receiver manually drafting a Parawise response to the document received. Reference 610 allows the receiver to submit the document, initiating at 618 and 620 the API for digital signatures, while 612 allows a draft version of the Parawise responses drafted by the receiver to be saved into the database of the portal 624. Through 614, the receiver can recall the saved draft, edit it at 616 if necessary, and then initiate at 610, 618, and 620 the digital signature API. Through 622, the digitally signed document is saved as a draft in the database of the portal 624, from which it can be recalled and edited at 626 and, at 628, submitted and forwarded to the original sender as a link 630 for accessing the digitally signed document as a reply. The existing response drafting methods are limited to using the communication received from the sender only as a reference point: the receiver drafts a response manually by reviewing the available information given in the contextual documents, without the support of any generative AI interface.
[00105] REPLY MODE
a. Creation of Parawise Text Boxes for manually filling in the responses
b. Facility of split screen into two parts: the left half containing the original document with Parawise text boxes, and the right half providing the following facilities for the user;
c. The User will be able to access the following subset of features as a part of the split screen within the “Reply Mode”;
d. Create a new Blank Document on the right-hand side of the split screen
e. Recall reference document/text/transcript/image/sound/video from the local or external database, including but not limited to platforms such as WhatsApp/Twitter (X), etc.
f. Option to select, copy and paste textual content from the reference document on the right-side pane onto the reply mode document on the left-hand side
g. Share the working screen and permission for collaborative drafting with any other person logged into the application
h. Grant the following options of rights to the other person for collaborative drafting; View /View & Edit
i. Save the document with rights embedded such as; Viewing/Editing/Printing/Copying/Set Password
j. Share the saved document link via WhatsApp/Email or as an attachment
[00106] Turning to FIG. 7, this illustrates the process of the receiver choosing the “Auto Reply Mode” option and drafting the response with the assistance of a generative AI tool.
[00107] References 702 and 704 represent the process of the receiver using “Auto Reply Mode” to open the document received from the sender. At 706, text boxes appear below each of the paragraphs received from the sender.
[00108] Reference 712 indicates the process of the receiver, assisted by AI, drafting a Parawise response to the document received. Reference 714 allows the receiver to submit the document, initiating at 720 and 724 the API for digital signatures, while 716 allows a draft version of the responses drafted by the receiver to be saved into the database of the portal 722. Through 718, the receiver can recall the saved draft, edit it if required, and then initiate at 714, 720, and 724 the digital signature API. Through 726, the digitally signed document is saved as a draft in the database of the portal 722, from which it can be recalled and edited at 728 and, at 730, submitted and forwarded to the original sender as a link 732 for accessing the digitally signed document as a reply.
[00109] The current methods of computer-generated documents are limited to using a query to interrogate a dataset, recall information from a large database, and generate an analysed response in the form of a document.
[00110] The proposed concept is to use aspects of the technology employing trained Large Language Models (LLMs), such as generative AI models, to generate a Parawise response using information from a defined document set. This concept can be deployed in a variety of use case scenarios, such as legal processes, business contractual communication, negotiation of instruments, and so forth.
[00111] The LLMs will aid in the drafting of the response based on contextual information from the predefined documents shared. The LLMs can correlate the context of the communication received from the sender with the information derived from the document set shared and draft an appropriate Parawise response.
[00112] The Document set the LLMs would be able to process using the generative AI interface would include text and data. Some examples of documents intended to be used for contextual reference by the LLMs would include PDF (Portable Document Format), XLS/XLSX, CSV (Comma Separated Values), TXT (Plain Text file); RTF (Rich Text Format), PPT/PPTX, DOC/DOCX and so forth.
[00113] There are several problems with such a process: it is specific to a single query and does not undertake the task of generating a Parawise response to a communication received from the sender. In this respect, the features referenced in FIG. 6 and FIG. 7 are unique and different from any available generative AI models.
[00114] AUTO REPLY MODE
a. Creation of Parawise text boxes for the generative AI auto-drafting the response based on review of specific contextual documents as submitted by the responder, including but not limited to auto-generated transcripts of virtual interactions drawn from social media platforms such as WhatsApp/Twitter (X), etc.
b. Facility of split screen into two parts: the left half containing the original document with Parawise text boxes, and the right half providing the following facilities for the user;
c. The User will be able to access the following subset of features as a part of the split screen within the “Auto Reply Mode”;
d. Create a new Blank Document on the right-hand side of the split screen
e. Recall reference document/text/transcript/image/sound/video from the local or external database
f. Option to select, copy and paste textual content from the reference document on the right-side pane onto the reply mode document on the left-hand side
g. Share the working screen and grant permission for collaborative drafting to any other person logged into the application
h. Grant the other person one of the following rights for collaborative drafting: View, or View & Edit
i. Save the document with embedded rights such as Viewing, Editing, Printing, Copying, or password protection (see the sketch after this list)
j. Share the saved document link via WhatsApp or email, or as an attachment
k. Present the reply alongside the petition it responds to, so that a comparative view of the paragraphs of both the plaintiff and the respondent is available simultaneously, thereby increasing efficiency.
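The embedding of access rights and password protection at save time (item i above) might be sketched as follows; this is a hedged illustration in which the view/edit/print/copy rights are kept as application-level metadata for the portal to enforce, and pypdf's encryption stands in for the Set Password option. None of the field names are prescribed by the invention.

```python
from dataclasses import dataclass, asdict

from pypdf import PdfReader, PdfWriter

@dataclass
class AccessRights:
    """Application-level rights embedded with the saved document (item i)."""
    viewing: bool = True
    editing: bool = False
    printing: bool = False
    copying: bool = False

def save_with_rights(src_pdf, dst_pdf, rights, password=None):
    """Save a reply PDF with embedded rights metadata and an optional password."""
    writer = PdfWriter()
    for page in PdfReader(src_pdf).pages:
        writer.add_page(page)
    # Record the rights in the document metadata for the portal to enforce.
    writer.add_metadata({"/PortalRights": str(asdict(rights))})
    if password:  # the "Set Password" option
        writer.encrypt(password)
    with open(dst_pdf, "wb") as f:
        writer.write(f)
```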
[00115] The system enables sharing of the drafted response, including combined responses, via print or digital means, with optional screen sharing to third parties. It supports manual review and content matching for improved efficiency, thereby reducing labour-intensive processes. Users can edit the presented data manually, reference specific points within the source material, and access a subset of documents alongside split-screen playback of video or photographs.
[00116] The system further provides automatic indexation of documents by page, and can automatically rephrase indexed content where required. Flow diagrams (FIG. 6, FIG. 7) illustrate the operational steps for both modes, including document upload, OCR processing, vector creation, AI prompt execution, and output delivery. FIG. 7 demonstrates the front-end process for the Auto Reply Mode. This approach enhances accuracy, reduces drafting time, and ensures the reply is directly tied to the paragraph structure of the original communication.
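For illustration only, the page-wise indexation of fragments (as also contemplated in claim 12, which tags fragments with metadata for retrieval) could look like this minimal sketch; the field names are assumptions.

```python
def index_fragments(doc_name, pages):
    """Tag each fragment with metadata (document name, page number, position)
    so it can be indexed, identified, and retrieved from a vector store."""
    tagged = []
    for page_no, page_text in enumerate(pages, start=1):
        for pos, fragment_text in enumerate(page_text.split("\n\n")):
            tagged.append({"doc": doc_name, "page": page_no,
                           "pos": pos, "text": fragment_text})
    return tagged

# Example: recover every fragment drawn from page 3 of a given document.
index = index_fragments("claim_notice.pdf",
                        pages=["page 1 text", "page 2 text", "para A\n\npara B"])
page_three = [f for f in index if f["page"] == 3]
```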
TECHNICAL ADVANTAGES
[00117] The present invention offers the following key technical advantages:
[00118] Automated Parawise Drafting – Reduces manual effort by automatically generating structured replies aligned with the paragraph structure of incoming communications.
[00119] Contextual Document Integration – Uses vectorized document fragments to ensure replies are based on relevant portions of predefined contextual data.
[00120] Dual Operational Modes – Offers both manual and AI-assisted drafting within a unified interface.
[00121] Template-driven Output – Ensures consistency and accuracy in legal/business document formatting.
[00122] Token-optimized Processing – Handles large datasets efficiently by splitting them according to LLM token limits.
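Illustratively, and using character count as a crude stand-in for tokens (a real system would count model tokens), the recursive character text splitter recited in claim 4 can be sketched as:

```python
def recursive_split(text, max_len=2000, separators=("\n\n", "\n", ". ", " ")):
    """Recursively split text on progressively finer separators until every
    piece fits within the model's limit (character length as a token proxy)."""
    if len(text) <= max_len:
        return [text]
    for sep in separators:
        if sep in text:
            pieces = []
            for part in text.split(sep):
                pieces.extend(recursive_split(part, max_len, separators))
            return pieces
    # No separator available: hard-wrap as a last resort.
    return [text[i:i + max_len] for i in range(0, len(text), max_len)]
```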
[00123] Collaborative Drafting Features – Supports real-time multi-user collaboration and editing with granular access controls.
[00124] Multi-format Document Support – Accepts and processes varied file types, including text, spreadsheet, presentation, and multimedia formats (such as MP3, MP4, MOV).
[00125] Secure Digital Signature Integration – Allows direct signing and secure sharing of final replies.
Claims:
We Claim
1. A computer-implemented method for generating a paragraph-wise response to a received document, the method comprising:
a. receiving, at a computing device (102), a document set (130) comprising contextual documents;
b. segmenting the document set (130) into a plurality of document fragments (204) using a paragraph Fragmentor (118);
c. generating document vectors (208) from said fragments (204) using a vectorizer (120);
d. receiving a communication from a sender at said computing device (102), said communication comprising a plurality of paragraphs;
e. generating, by a prompt generator (122), a prompt including the document vector set (132) and said communication;
f. transmitting the prompt to a large language model (110) hosted on an LLM server (106);
g. receiving a paragraph-wise response generated by the LLM (110); and
h. rendering said response on a user interface in editable text boxes (706) corresponding to each paragraph of the sender communication.
2. A system for drafting paragraph-wise replies to communication using artificial intelligence-generated content, the system comprising:
a. a computing device (102) for receiving sender communication and a document set (130);
b. a paragraph Fragmentor (118) adapted to segment said document set into document fragments (204);
c. a vectorizer (120) adapted to convert each fragment into a document vector (208);
d. a prompt generator (122) configured to construct a prompt including said document vector set (132) and said communication;
e. a large language model engine (112) hosted on a server (106) for generating contextually relevant paragraph-wise responses; and
f. a document generation engine (116) having a text generator (124) for embedding the responses into a predefined document template (400).
3. A non-transitory computer-readable medium storing computer-executable instructions which, when executed by a processor, cause a system to perform steps comprising:
a. receiving a communication from a sender and contextual documents from a user;
b. segmenting said contextual documents into document fragments (204);
c. generating vector representations (208) from said fragments using a vectorizer (120);
d. generating a prompt incorporating the document vector set (132) and said communication using a prompt generator (122);
e. transmitting said prompt to a large language model (110);
f. receiving a paragraph-wise response from said model; and
g. embedding said response into a structured document using a text generator (124).
4. The method as claimed in claim 1, wherein the paragraph Fragmentor (118) utilizes a recursive character text splitter that splits the document set (130) based on token limitations of the large language model (110).
5. The method as claimed in claim 1, wherein the vectorizer (120) employs one or more algorithms selected from Bag of Words (BoW), Term Frequency-Inverse Document Frequency (TF-IDF), or Doc2Vec.
6. The method as claimed in claim 1, wherein the large language model (110) comprises a transformer-based architecture selected from ChatGPT, Gemini, Bard, DeepMind, or equivalents thereof.
7. The method as claimed in claim 1, wherein the paragraph-wise response is displayed in editable text boxes (706) that are rendered below each paragraph of the received communication.
8. The method as claimed in claim 1, wherein the generated paragraph-wise response is saved as a draft (612), edited (616), digitally signed using an API (620), and forwarded to the sender via a secure link (630).
9. The system as claimed in claim 2, wherein the document generation engine (116) populates fields (402) of a document template (400) using output (406) received from the large language model (110).
10. The system as claimed in claim 2, wherein the computing device (102) provides selectable modes comprising: a Reply Mode for manual drafting (FIG. 6), and an Auto Reply Mode (FIG. 7) for AI-assisted drafting.
11. The system as claimed in claim 2, wherein previous queries and outputs (134) generated by the large language model (110) are stored and appended to subsequent prompts to maintain context continuity.
12. The method as claimed in claim 1, wherein document fragments (204) are tagged with metadata for indexing (548), identification, and retrieval from a vector store.
13. The method as claimed in claim 1, wherein the communication is related to legal, compliance, or contractual matters and the paragraph-wise response is generated for use in formal documentation.
14. The method as claimed in claim 1, wherein the document set (130) includes one or more file formats selected from: PDF, DOCX, XLSX, CSV, TXT, RTF, PPTX, JSON, HTML, or XML.
15. The method as claimed in claim 1, wherein the LLM-generated response is transmitted to the sender through a hyperlink or secure document link (630) embedded within a communication.
16. The system as claimed in claim 2, further comprising an audit trail module configured to track and log changes made by the user to the AI-generated draft before final submission.
17. The method as claimed in claim 1, wherein the contextual documents comprise transcribed or OCR-processed documents derived from scanned, audio, or video files.
18. The method as claimed in claim 1, wherein the system is hosted on a cloud-based infrastructure and accessed via an API key (508) integrated into a third-party communication or document management platform.
19. The system as claimed in claim 2, wherein the user interface comprises a split-screen layout, with a first pane for displaying paragraph-wise editable text boxes and a second pane for accessing one or more reference documents, transcripts, media files, or blank drafts for contextual assistance during drafting.
20. The method as claimed in claim 1, wherein the user selects, copies, and pastes content from the second pane into the editable text boxes of the first pane using a drag-and-drop or clipboard mechanism.
21. The system as claimed in claim 2, wherein the user is enabled to create and work on a new blank document within the second pane of the split screen simultaneously while generating or editing responses in the first pane.
22. The system as claimed in claim 2, wherein the split-screen user interface includes functionality to recall one or more reference files comprising document, transcript, image, audio, video, or web-based data from a local or external database, including social media or messaging platforms.
23. The system as claimed in claim 2, wherein the computing device (102) includes a collaborative drafting module that enables sharing of the document screen with a second user and assigns user-specific access rights selected from: view only, or view and edit.
24. The method as claimed in claim 1, wherein the user saves the generated or edited document with embedded access control rights selected from: viewing, editing, printing, copying, or password protection.
25. The system as claimed in claim 2, wherein the saved document or a link to the document is shareable through one or more communication platforms including WhatsApp, email, or third-party APIs.
26. The method as claimed in claim 1, wherein the user is permitted to selectively enable or disable collaborative editing and restrict access permissions to ensure secure document handling.
27. The system as claimed in claim 2, wherein the blank document created in the second pane is populated using AI-suggested inputs retrieved from the referenced contextual documents.
28. The system as claimed in claim 2, wherein the collaborative drafting environment logs edits made by all users and includes a version control module to track revision history.
| # | Name | Date |
|---|---|---|
| 1 | 202511084415-STATEMENT OF UNDERTAKING (FORM 3) [05-09-2025(online)].pdf | 2025-09-05 |
| 2 | 202511084415-REQUEST FOR EXAMINATION (FORM-18) [05-09-2025(online)].pdf | 2025-09-05 |
| 3 | 202511084415-REQUEST FOR EARLY PUBLICATION(FORM-9) [05-09-2025(online)].pdf | 2025-09-05 |
| 4 | 202511084415-PROOF OF RIGHT [05-09-2025(online)].pdf | 2025-09-05 |
| 5 | 202511084415-POWER OF AUTHORITY [05-09-2025(online)].pdf | 2025-09-05 |
| 6 | 202511084415-FORM-9 [05-09-2025(online)].pdf | 2025-09-05 |
| 7 | 202511084415-FORM FOR SMALL ENTITY(FORM-28) [05-09-2025(online)].pdf | 2025-09-05 |
| 8 | 202511084415-FORM FOR SMALL ENTITY [05-09-2025(online)].pdf | 2025-09-05 |
| 9 | 202511084415-FORM 18 [05-09-2025(online)].pdf | 2025-09-05 |
| 10 | 202511084415-FORM 1 [05-09-2025(online)].pdf | 2025-09-05 |
| 11 | 202511084415-FIGURE OF ABSTRACT [05-09-2025(online)].pdf | 2025-09-05 |
| 12 | 202511084415-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [05-09-2025(online)].pdf | 2025-09-05 |
| 13 | 202511084415-DRAWINGS [05-09-2025(online)].pdf | 2025-09-05 |
| 14 | 202511084415-DECLARATION OF INVENTORSHIP (FORM 5) [05-09-2025(online)].pdf | 2025-09-05 |
| 15 | 202511084415-COMPLETE SPECIFICATION [05-09-2025(online)].pdf | 2025-09-05 |
| 16 | 202511084415-MSME CERTIFICATE [21-11-2025(online)].pdf | 2025-11-21 |
| 17 | 202511084415-FORM28 [21-11-2025(online)].pdf | 2025-11-21 |
| 18 | 202511084415-FORM 18A [21-11-2025(online)].pdf | 2025-11-21 |