Abstract: A method for modifying an input prompt is provided. The method includes receiving the input prompt from a selected system (102), and generating a raw output based on the input prompt by a conversational system (100). Further, the method includes selecting a validator from a plurality of validators for validating accuracy of the raw output, and comparing input data specified in the raw output with the same input data retrieved from a master data source system using the selected validator. Furthermore, the method includes identifying one or more errors in the input prompt based on the comparison, and generating an error correction instruction based on the one or more identified errors and the input data retrieved from the master data source system. In addition, the method includes generating a refined input prompt based on the error correction instruction, and generating a final response to the input prompt based on the refined input prompt.
Description:
INTELLIGENT CONVERSATIONAL SYSTEM AND ASSOCIATED METHOD FOR PROVIDING HEALTHCARE INFORMATION
RELATED ART
[0001] Embodiments of the present specification relate generally to an intelligent conversational system, and more particularly to a generative artificial intelligence (Gen-AI) powered conversational system that can be used by healthcare professionals and patients to obtain desired healthcare information.
[0002] Access to accurate medical information is of paramount importance, both to healthcare professionals, such as doctors, lab technicians, nurses, researchers, and medical device developers, and to patients. For example, medical device developers need access to medical compliance information and medical standards to develop medical devices that adhere to the necessary standards. In another example, doctors may need access to the latest medical and/or clinical information to devise suitable treatment plans for their patients per the latest medical standards and advancements.
[0003] Accordingly, in recent times, several medical information providing systems have been developed to provide both patients and healthcare professionals with access to desired medical information. Such systems generally include a chatbot. The chatbot generally provides a text- or voice-based interface that allows a healthcare professional to submit a query to the chatbot and to receive an answer to the query automatically generated by the chatbot based on pre-programmed responses. However, the chatbot may output inaccurate or unsatisfactory results if the query submitted to the chatbot includes, for example, a typo error, a spelling error, a grammatical error, or inaccurate information.
[0004] Certain present-day chatbots are known to review input queries for spelling errors and to learn from past corrections manually performed by the user, so that such errors can be mitigated and the query subsequently refined automatically. For example, US patent application US20230177263A1 describes a chatbot system that automatically refines a query having errors based on corrections made previously to the same query by the user. Though the chatbot system refines the query, it requires user intervention and relies on the corrections made manually by the user in the past for refining the query. Such a chatbot system, thus, is not truly an autonomous system that can automatically perform query refinement and output accurate results. Further, the chatbot system needs to be trained frequently, whenever the user refines queries, to enable the chatbot system to learn the corrections made to those queries by the user.
[0005] Accordingly, there remains a need for an improved chat system or conversational system that provides desired healthcare information to users accurately, without requiring manual intervention or frequent retraining of the conversational system.
BRIEF DESCRIPTION
[0006] It is an objective of the present disclosure to provide a method for automatically modifying an input prompt to a conversational system. The method includes receiving the input prompt from a selected system by the conversational system, and generating a raw output based on the received input prompt by the conversational system for further validation using a large language model-based validator system (110). Further, the method includes selecting a validator from a plurality of validators associated with the conversational system based on one or more keywords in the input prompt for validating accuracy of the raw output using the large language model-based validator system. Furthermore, the method includes comparing a set of input data related to an entity specified in the raw output with the same set of input data related to the entity retrieved from a master data source system by the conversational system using the selected validator. In addition, the method includes identifying one or more errors in the input prompt based on the comparison. The one or more identified errors correspond to one or more of a data mismatch between the set of input data specified in the raw output and the same set of input data retrieved from the master data source system, missing data, a contextual error in the input prompt, and a typo error in the input prompt. The method further includes generating an error correction instruction based on the one or more errors identified in the input prompt and the set of input data retrieved from the master data source system using the conversational system.
[0007] Further, the method includes generating a refined input prompt by automatically correcting the one or more identified errors in the input prompt based on the error correction instruction using a large language model-based prompt refinement system. The error correction instruction is generated based on the same set of input data retrieved from the master data source system. The method further includes generating a final response to the input prompt based on the refined input prompt using the conversational system, and outputting the final response by the conversational system in response to the input prompt initially received by the conversational system.
[0008] Generating the raw output by the conversational system includes receiving the input prompt including clinical notes of a patient in a natural language along with an instruction to convert the clinical notes of the patient to a structured format by the conversational system. The structured format includes one of JavaScript Object Notation format, a continuity of care document format, and a fast healthcare interoperability resources format. Further, the method includes converting the clinical notes of the patient in the natural language to the structured format based on the instruction, and generating the raw output including the clinical notes of the patient in the structured format. Selecting the validator from the plurality of validators includes comparing a keyword selected from the input prompt including the clinical notes of the patient with a set of keywords mapped to each of the plurality of validators in a knowledge database. Further, the method includes identifying a matching keyword stored in the knowledge database that matches with the keyword selected from the input prompt and further identifying a particular validator that is mapped to the matching keyword in the knowledge database.
[0009] Furthermore, the method includes validating accuracy of the raw output generated by the conversational system using the identified validator. The plurality of validators includes a domain-specific application context validator, a structured query language query error validator, a business rules validator, a regulatory compliance validator, a python code debug validator, a semantic specification validator, and a personal identifiable information validator. The entity corresponds to a patient and the master data source system corresponds to a hospital information system that stores electronic health records of a plurality of patients.
[0010] Generating the error correction instruction includes identifying a subset of input data selected from the set of input data that is incorrectly specified in the raw output generated by the conversational system. Further, the method includes generating the error correction instruction to correct the subset of input data that is incorrectly specified in the raw output based on the same subset of input data retrieved from the hospital information system. Generating the final response to the input prompt includes one or more of identifying if the final response generated by the conversational system includes personal identifiable information using the personal identifiable information validator, and excluding the personal identifiable information from the final response. Further, the method may include identifying if the input prompt relates to a request for information related to a specific domain or other information that is unrelated to the specific domain by analyzing one or more keywords in the input prompt using the domain-specific application context validator, and generating the final response to the input prompt only when the input prompt is identified to relate to the request for information related to the specific domain. Furthermore, the method may include converting the input prompt received in a natural language from the selected system into a structured query language query and validating if the structured query language query includes a syntax error using the structured query language query error validator.
[0011] In addition, the method may include performing semantic analysis on the input prompt using the semantic specification validator, and automatically generating the refined input prompt by refining the input prompt based on the semantic analysis. Moreover, the method may include identifying if the input prompt relating to a request for a regulatory information includes an error using the regulatory compliance validator based on a prior training provided to the conversational system, and automatically refining the input prompt to eliminate the error in the input prompt based on the prior training. Receiving the input prompt from the selected system includes receiving a complaints file including a list of complaints related to a medical device, and an instruction provided by a user in a natural language to automatically identify and retrieve complaints that include incomplete information from the complaints file. The complaints including incomplete information correspond to one or more of a blank device identifier, a blank device manufacturer identifier, and a blank entry that fails to specify a type of the medical device. Generating the raw output includes converting the instruction provided by the user in the natural language to a structured query language (SQL) query.
[0012] Generating the error correction instruction includes identifying if the SQL query generated by the conversational system specifies a table name and a column name needed by the conversational system for retrieving the complaints with incomplete information from the complaints file. Further, the method includes identifying that the SQL query includes one or more errors when one or more of the table name and the column name are missing in the SQL query generated by the conversational system. Furthermore, the method includes determining one or more of the table name and the column name that are missing in the SQL query from a copy of the complaints file stored in a complaints database. Moreover, the method includes generating the error correction instruction based on one or more of the table name and the column name determined from the copy of the complaints file. In addition, the method includes automatically refining the SQL query generated by the conversational system to generate a refined SQL query including one or more of the table name and the column name specified in the error correction instruction generated by the conversational system. The method further includes automatically outputting the complaints that include incomplete information in response to the input prompt initially received by the conversational system based on the refined SQL query including one or more of the table name and the column name specified in the error correction instruction.
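By way of a non-limiting illustration, the following Python sketch shows one way a validator could detect table or column names missing from a generated SQL query and produce an error correction instruction; the schema, the query text, and the function name are assumptions made purely for this example and are not part of the claimed method.

# Minimal sketch, assuming a hypothetical complaints-file schema.
REQUIRED_SCHEMA = {
    "complaints": ["device_id", "manufacturer_id", "device_type"],
}

def build_error_correction(raw_sql: str):
    """Return an error correction instruction listing missing names, or None if complete."""
    missing = []
    for table, columns in REQUIRED_SCHEMA.items():
        if table not in raw_sql:
            missing.append(f"table '{table}'")
        missing.extend(f"column '{column}'" for column in columns if column not in raw_sql)
    if not missing:
        return None
    return "Regenerate the SQL query so that it references " + ", ".join(missing) + "."

# The raw query below omits the column names needed to find incomplete complaints.
raw_query = "SELECT * FROM complaints WHERE incomplete = 1"
print(build_error_correction(raw_query))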
[0013] It is another objective of the present disclosure to provide a conversational system that automatically modifies an input prompt. The conversational system includes a selected system that is communicatively coupled to the conversational system via a communications link. The selected system includes an application including one or more associated graphical user interfaces. The one or more associated graphical user interfaces provide a chatbox for enabling a user to submit the input prompt to the conversational system. Further, the conversational system includes a large language model that receives the input prompt from the selected system, and generates a raw output based on the received input prompt for further validation using the large language model. Furthermore, the conversational system includes a large language model-based validator system that selects a validator from a plurality of associated validators based on one or more keywords in the input prompt and one or more keywords stored in a knowledge database for validating accuracy of the raw output. Further, the validator system compares a set of input data related to an entity specified in the raw output with the same set of input data related to the entity retrieved from a master data source system using the selected validator. Furthermore, the validator system identifies one or more errors in the input prompt based on the comparison. The one or more identified errors correspond to one or more of a data mismatch between the set of input data specified in the raw output and the same set of input data retrieved from the master data source system, missing data, a contextual error in the input prompt, and a typo error in the input prompt.
[0014] Moreover, the validator system generates an error correction instruction based on the one or more errors identified in the input prompt and the set of input data retrieved from the master data source system. The conversational system also includes a prompt refinement system that generates a refined input prompt by automatically correcting the one or more identified errors in the input prompt based on the error correction instruction. The error correction instruction is generated based on the same set of input data retrieved from the master data source system. Further, the prompt refinement system generates a final response to the input prompt based on the refined input prompt, and outputs the final response in response to the input prompt initially received by the large language model.
[0015] The knowledge database includes a set of keywords mapped to each of the plurality of associated validators. The plurality of associated validators includes one or more of a personal identifiable information validator that identifies if the final response generated by the conversational system includes personal identifiable information and excludes the personal identifiable information from the final response, and a domain-specific application context validator that identifies if the input prompt relates to a request for information related to a specific domain or other information that is unrelated to the specific domain by analyzing one or more keywords in the input prompt. Further, the domain-specific application context validator generates the final response to the input prompt only when the input prompt is identified to relate to the request for information related to the specific domain.
[0016] The plurality of associated validators may further include a structured query language query error validator that converts the input prompt received in a natural language from the selected system into a structured query language query and validates if the structured query language query includes any syntax errors, a python code debug validator that validates if the raw output corresponding to one or more python strings generated by the large language model includes any syntax errors, a semantic specification validator that performs semantic analysis on the input prompt and automatically generates the refined input prompt by refining the input prompt based on the semantic analysis, a regulatory compliance validator that identifies if the input prompt relating to a request for regulatory information includes an error based on a prior training provided to the conversational system and automatically refines the input prompt to eliminate the error in the input prompt based on the prior training, and a business rules validator that identifies if the raw output generated by the large language model adheres to one or more predefined rules and further identifies the input prompt to include an error when the raw output fails to adhere to one or more of the predefined rules. The selected system corresponds to one of a hospital system, a medical device company system, a healthcare system, an automotive system, an industrial system, an aerospace system, a rail system, and a media and entertainment system. The master data source system corresponds to one of a hospital information system that stores electronic health records of a plurality of patients, a complaints database that stores complaints associated with a medical device in a complaints file, and a domain-specific database system that stores domain-specific information.
BRIEF DESCRIPTION OF DRAWINGS
[0017] These and other features, aspects, and advantages of the claimed subject matter will become better understood when the following detailed description is read with reference to the accompanying drawings in which like characters represent like parts throughout the drawings, wherein:
[0018] FIG. 1 illustrates a system diagram depicting an exemplary conversational system that may be used by a medical practitioner to automatically generate clinical notes of a patient in a structured format, in accordance with aspects of the present disclosure;
[0019] FIGS. 2A-B illustrate a flow diagram depicting an exemplary method for accurately generating clinical notes of the patient in the structured format using the conversational system of FIG. 1, in accordance with aspects of the present disclosure;
[0020] FIG. 3 illustrates a user interface view of a healthcare application depicting a chatbox through which an exemplary user prompt including clinical notes of the patient to be converted into the structured format is submitted as an input to the conversational system of FIG. 1, in accordance with aspects of the present disclosure;
[0021] FIG. 4 illustrates a final output including an exemplary response that is accurately generated by the conversational system of FIG. 1 in response to the exemplary user prompt submitted as the input to the conversational system of FIG. 1, in accordance with aspects of the present disclosure;
[0022] FIG. 5 illustrates an exemplary response that is inaccurately generated by a conventional conversational system in response to the exemplary user prompt of FIG. 3 submitted as the input to the conventional conversational system; and
[0023] FIG. 6 illustrates another system diagram depicting another embodiment of the conversational system of FIG. 1 that may be used by a representative of a medical device company to automatically retrieve complaints related to a medical device from a complaints file, in accordance with aspects of the present disclosure.
DETAILED DESCRIPTION
[0024] The following description presents an exemplary generative artificial intelligence (generative AI)-powered conversational system, such as a chatbot system, that receives a user prompt including a specific request for healthcare information from a user, and provides the requested healthcare information in one or more of a textual format and an audio format. Particularly, embodiments described herein disclose a conversational system that intelligently identifies the context of the input user prompt to identify any errors present therein. The conversational system then automatically refines the user prompt to provide accurate healthcare information without requiring manual intervention or multiple re-trainings. Thus, the present conversational system is capable of outputting accurate and non-hallucinated results to the users without requiring highly specialized and expensive processing systems.
[0025] The rise of generative AI, in particular large language models (LLMs) such as ChatGPT, has revolutionized how users search for and consume information. Despite its spectacular potential, generative AI is not without shortcomings. For example, LLMs are designed to serve probabilistic answers from training data and can be too eager to provide answers, resulting in inaccurate or hallucinated outputs. Additionally, the quality of the outputs is only as good as the quality of the input prompts, as generative AI needs clear and specific instructions to understand what it is expected to do. Accordingly, user prompts that include one or more errors, such as a grammatical error, a spelling error, a contextual error, or only partial information, may result in the LLM outputting false or misleading information that is grammatically correct but factually incorrect or nonsensical. As a result, prompt engineering has become a significant skill that determines the quality and relevance of the output, enabling effective use of the LLMs. Such prompt engineering solutions are of specific importance to healthcare data systems, where there is no room for trivial mistakes or mismanagement because healthcare decisions directly affect a patient's life.
[0026] Typical patient medical records stored in the healthcare data systems include medical history, medications, past and current illnesses, treatment history, and vitals that are used for accurately diagnosing and treating patients. However, patient data may become inaccurate, inconsistent, insufficient, outdated, duplicated, or corrupted due to many factors, including human error, misspellings, and outdated software. Research shows that one out of five patients may not be matched entirely to electronic health records (EHR). This results in the medical staff either having to search through the records, which costs time, or creating another patient record, which leads to a duplicate record in the system and causes further confusion. Reports show that around 32% of organizations have such duplicate records. As a result, patients may be misidentified and may receive improper treatment and ineffective medications. Incorrect or missing medical data may also lead to claims of fraudulence and misconduct, which in turn may result in significant delays or denials of reimbursements, or even legal action including penalties and closure of the healthcare institution. Statistics show that incorrect medical data thus costs a healthcare institution an average of 15% to 25% of its revenue.
[0027] A primary reason for incorrect medical data is manual error. For example, a user may erroneously input the same patient identification number (ID) for two different patients as part of a user prompt to a conversational system to retrieve corresponding medical reports. For example, the user prompt may include "Get me medical reports of a cancer patient1_ID#ABC123 having a contact number +91-8011234567 and a cancer patient2_ID#ABC123 having a contact number +91-9287653210." In this scenario, conventional generative AI based conversational systems, such as ChatGPT 4.0, fail to recognize that the user has wrongly entered the same patient ID for both patients and fail to grasp the associated context of the request. Further, the conventional generative AI based conversational systems will fetch the same medical records corresponding to the patient ID "ABC123" twice from an electronic health records (EHR) database as output, thereby requiring the user to realize his or her error and rectify the user prompt.
[0028] Thus, the conventional chatbot systems require user intervention and need users to manually correct errors in user prompts. Certain conventional chatbot systems may learn patterns from the corrections made manually by the users, and automatically perform corrections to user prompts incorrectly submitted by the users in the future based on the learnt patterns. However, this necessitates such conventional chatbot systems to undergo frequent trainings to learn new corrections made whenever the users refine and resubmit the user prompts.
[0029] In contrast, the conversational system described in the present disclosure realizes the context behind the user prompt submitted by the user, and automatically refines the user prompt when the user prompt includes one or more errors to output non-hallucinated and accurate results to the user. Thus, the present conversational system neither requires user intervention nor needs to undergo frequent trainings for outputting accurate and non-hallucinated results to the users. For instance, with reference to the previously noted example, the present conversational system compares the contact number “+91-8011234567” of the first patient with details of a plurality of patients stored in the EHR database and identifies that the correct patient ID of the first patient corresponding to the contact number “+91-8011234567” is “ABC124”. Further, the conversational system identifies the patient ID “ABC123” of the second patient mentioned in the user prompt as correct when the patient ID and the contact number of the second patient mentioned in the user prompt match with corresponding details of the second patient stored in the EHR database. Accordingly, in the previously noted example, the conversational system identifies that only the patient ID of the first patient in the user prompt is incorrect, while the patient ID of the second patient is correct.
[0030] Subsequently, the conversational system automatically refines the user prompt initially submitted by the user to "Get me medical reports of a cancer patient1_ID#ABC124 having a contact number +91-8011234567 and a cancer patient2_ID#ABC123 having a contact number +91-9287653210." Further, the conversational system correctly fetches two different medical records corresponding to the patient ID "ABC124" and the patient ID "ABC123" from the EHR database based on the refined user prompt, and provides an accurate output to the user.
[0031] It may be noted that different embodiments of the present conversational system may be used by different healthcare professionals to obtain desired healthcare information. For example, the conversational system may be used by a patient to predict a diabetic score based on his or her vitals. Alternatively, the conversational system may be used by a doctor to obtain information related to various patients undergoing a particular treatment and an associated success rate. Further, the conversational system may also be used by the doctor to obtain information on latest developments in the medical domain to devise suitable treatment plans for patients.
[0032] The conversational system may also be used by a medical practitioner to automatically convert clinical notes of a patient entered in a textual format into a specific desired format for enabling the patient to easily share the clinical notes with various hospitals on an as-needed basis. Examples of the specific desired format include a JavaScript Object Notation (JSON) format, a continuity of care document (CCD) file format, and a fast healthcare interoperability resources (FHIR) format. Further, the conversational system provides a medical device manufacturer with quick and easy access to the medical compliance requirements and standards information required for developing compliant medical devices. In addition to healthcare-related applications, the conversational system may also be used with other selected systems for automatically refining user prompts when the user prompts include errors and for outputting non-hallucinated and accurate results to users. Examples of such selected systems include a conversational system associated with an automotive system, an industrial system, an aerospace system, a rail system, and a media and entertainment system. For clarity, an embodiment of the present conversational system is described herein in greater detail with reference to a conversational system that accurately provides each of a medical practitioner and a medical device manufacturer with desired healthcare information per their needs.
[0033] FIG. 1 illustrates an exemplary generative AI based conversational system (100) that may be used, for example, by a medical practitioner to automatically generate clinical notes of a patient in a structured format. Generally, in a hospital, healthcare professionals such as doctors, nurses, and/or physician assistants who provide necessary medical attention to a patient maintain a medical record of the patient. The medical record of the patient includes clinical notes entered by various healthcare professionals over a period of time. Such clinical notes entered by the healthcare professionals include, for example, demographic information of the patient, medical history of the patient, symptoms, diagnosis, treatments, and any other information pertinent to the care of the patient.
[0034] Typically, the hospital maintains the medical record including the clinical notes entered by the healthcare professionals as physical hardcopies or as digital records in a repository of the hospital. A user, for example, a medical practitioner working in the hospital, converts the clinical notes entered by the various healthcare professionals into a structured format, such as a JSON format, a CCD file format, or a FHIR format, using a software application that is implemented with a FHIR interface. Further, the medical practitioner stores the clinical notes, thus converted into the structured format, in a patient access application, for example, in Apple's HealthKit application, in order to allow the patient to access and share his or her medical information via the patient access application with a different hospital, if needed.
[0035] The present generative AI based conversational system (100) aids the medical practitioner in converting the clinical notes of the patient into a structured format automatically, while ensuring that the clinical notes are transcribed with accurate data and context. To that end, the generative AI based conversational system (100) is communicatively coupled to a selected system (102), for example, a hospital system (102), via a communications link (104). Examples of the communications link (104) include a content delivery network, a broadcast network, a Wi-Fi network, an Ethernet, and a cellular data network. In one embodiment, the generative AI based conversational system (100) corresponds to a chatbot system, which employs retrieval augmented generation (RAG) along with large language models (LLMs), such as those offered by OpenAI, to output accurate healthcare information to the user. Further, the generative AI based conversational system (100) may be developed using one or more frameworks such as the LangChain framework and the Pinecone framework. The LangChain and Pinecone frameworks, along with one or more advanced large language models (LLMs), facilitate the generative AI based conversational system (100) in having meaningful and contextually appropriate conversations with users. The LLMs include, for example, Google's Bidirectional Encoder Representations from Transformers (BERT), XLNet developed by Carnegie Mellon University and Google, Azure OpenAI Generative Pre-trained Transformer 3 (GPT-3), and/or ChatGPT.
[0036] To that end, the LLMs are trained on vast amounts of healthcare-related data to generate a response to a user input that is contextually relevant and grammatically correct. Specifically, the generative AI based conversational system (100) uses deep learning techniques and architectures, such as transformer networks, to automatically refine a user prompt if the user prompt includes errors and to further generate a response that is grammatically and contextually accurate based on the refined user prompt. The generative AI based conversational system (100) may also specifically employ LLMs such as Google's BERT, BARD, XLNet, or ChatGPT 4.0 for healthcare-related document summarization tasks, as ChatGPT 4.0 provides relevant and concise information while summarizing documents. For the sake of simplicity, the generative AI based conversational system (100) is simply referred to hereinafter as the conversational system (100) throughout various embodiments of the present disclosure.
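For illustration only, the following framework-agnostic Python sketch outlines how such a retrieval augmented generation pipeline could be wired together; the callables embed, vector_search, and call_llm are placeholders standing in for an embedding model, a vector index such as Pinecone, and an LLM, respectively, and are assumptions rather than the actual LangChain or Pinecone APIs.

from typing import Callable, List

def rag_answer(prompt: str,
               embed: Callable[[str], List[float]],
               vector_search: Callable[[List[float], int], List[str]],
               call_llm: Callable[[str], str],
               top_k: int = 3) -> str:
    """Retrieve supporting records for the prompt and ground the LLM answer in them."""
    query_vector = embed(prompt)                  # encode the user prompt as a vector
    records = vector_search(query_vector, top_k)  # e.g. matching EHR snippets from a vector index
    context = "\n".join(records)
    grounded_prompt = (
        "Answer strictly from the patient records below.\n"
        f"Records:\n{context}\n\nRequest:\n{prompt}"
    )
    return call_llm(grounded_prompt)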
[0037] In one embodiment, the medical practitioner may submit an input prompt (101), including clinical notes of a patient to be converted into the structured format, to the conversational system (100) using the hospital system (102). Examples of the hospital system (102) include a smartphone, a desktop, a laptop, and a tablet. Specifically, the hospital system (102) includes a healthcare application (106) that includes one or more associated graphical user interfaces (GUIs) (108). The GUIs (108) of the healthcare application (106) include a chatbox (109) through which the medical practitioner may submit the input prompt (101) including the clinical notes to be converted into the structured format to the conversational system (100). For the sake of simplicity, the input prompt (101) is simply referred to hereinafter as a user prompt (101) throughout various embodiments of the present disclosure.
[0038] In one embodiment, the chatbox (109) in the GUIs (108) enables the medical practitioner to submit the user prompt (101) as a textual or speech input to the conversational system (100). To that end, the hospital system (102) includes a speech-to-text converter, which automatically converts the user prompt (101) provided as the speech input by the medical practitioner into the textual input, and further submits the converted textual input to the conversational system (100) via the chatbox (109).
[0039] Upon receiving the user prompt (101) including the clinical notes of the patient from the hospital system (102), a validator system (110) in the conversational system (100) analyzes one or more keywords in the user prompt (101). Specifically, the validator system (110) analyzes the keywords in the user prompt (101) submitted by the medical practitioner to validate whether the medical practitioner is requesting healthcare-specific information or any other information that is irrelevant to the healthcare domain. For example, the medical practitioner may submit a user prompt (101) as "What is the weather forecast for today," which is obviously not a question related to the healthcare domain. In this example, the validator system (110) identifies that the medical practitioner is not asking for healthcare domain specific information based on one or more keywords in the user prompt (101), as described further in detail with reference to FIGS. 2A-B.
[0040] Accordingly, the conversational system (100) outputs a message such as "I'm afraid I don't have enough context to know the weather today." Thus, the conversational system (100) described in the present disclosure is specifically customized to answer questions related only to the healthcare domain. Specifically, the conversational system (100) answers healthcare-specific questions submitted by users by obtaining data from a known and reliable external master data source system, such as a hospital information system (112). As a result, the answers provided by the conversational system (100) are highly reliable, accurate, and satisfactory, and do not include any hallucinated results typically seen with conventional generative AI chatbot systems that use varied and often unreliable sources.
[0041] In certain embodiments, the validator system (110) successfully validates that the medical practitioner is requesting healthcare-specific information when the user prompt (101) submitted by the medical practitioner to the conversational system (100) includes the clinical notes of the patient. Subsequently, the conversational system (100) generates a raw output in the structured format, for example the JSON format, per the request of the medical practitioner. The validator system (110) then verifies if the raw output generated by the conversational system (100) is accurate using a 'SQL query error' validator. Specifically, the validator system (110) verifies if clinical details of the patient mentioned in the raw output match with clinical details of the same patient stored in a reliable master data source system such as the hospital information system (112).
[0042] In one embodiment, the hospital information system (112) corresponds to a hospital database that stores and maintains EHR of a plurality of patients. For example, the validator system (110) successfully verifies the raw output as accurate when the clinical details of the patient such as the patient’s ID, contact number, symptoms, gender, age, and treatment details mentioned in the raw output match with the corresponding clinical details of the patient stored in the hospital information system (112). The conversational system (100) then transmits the verified raw output to the hospital system (102) via the communications link (104). The hospital system (102) subsequently displays the verified raw output generated by the conversational system (100) in an associated display unit (114) as a response to the user prompt (101) submitted by the medical practitioner.
[0043] Conversely, when the clinical details of the patient mentioned in the raw output do not match with the clinical details of the patient stored in the hospital information system (112), the validator system (110) identifies that the raw output generated by the conversational system (100) is incorrect and likely includes one or more errors. Accordingly, in this scenario, the conversational system (100) uses an associated prompt refinement system (116) to refine the user prompt (101) based on accurate details of the patient retrieved from the hospital information system (112), as described in a greater detail with reference to FIGS. 2A-B. Further, the prompt refinement system (116) submits the refined user prompt to the conversational system (100) such that the conversational system (100) generates a refined output including the clinical notes of the patient in the JSON format accurately without any errors, as described in a greater detail with reference to FIGS. 2A-B.
[0044] Subsequently, the conversational system (100) transmits the refined output including the clinical notes of the patient in the JSON format to the hospital system (102) via the communications link (104). The hospital system (102) then displays the refined output received from the conversational system (100) in the associated display unit (114) as the response to the user prompt (101) submitted by the medical practitioner. Subsequently, the medical practitioner may store the refined output including clinical information of the patient in a patient access application (118) for providing access to the patient to his or her own clinical information via the patient access application (118). An example of the patient access application (118) includes Apple’s HealthKit, Healow, Mediassist or any other proprietary patient access application. In one embodiment, the patient access application (118) also allows the patient to easily share access to his or her clinical information stored in the patient access application (118) with one or more selected medical practitioners in another hospital as and when needed to enable the medical practitioners in that hospital to become quickly aware of existing medical conditions of the patient.
[0045] In one embodiment, the conversational system (100) and associated systems, including the validator system (110) and the prompt refinement system (116), for example, may include one or more of general-purpose processors and specialized processors to refine user prompts as needed and to generate and output accurate responses to user queries without any errors. In certain embodiments, the conversational system (100), the validator system (110), and the prompt refinement system (116) may also include one or more of graphical processing units, microprocessors, programmable logic arrays, field-programmable gate arrays, integrated circuits, systems on chips, and/or other suitable computing devices. Additionally, certain operations of the conversational system (100), the validator system (110), and the prompt refinement system (116) may be implemented by suitable code on a processor-based system, such as a general-purpose or a special-purpose computer. A specific example of the conversational system (100) that may be used by a medical practitioner or user to obtain desired healthcare information from the conversational system (100) is subsequently described with reference to FIGS. 2A-B.
[0046] FIGS. 2A-B illustrate a flow diagram depicting an exemplary method (200) for accurately generating clinical notes of a patient in a structured format using the conversational system (100) of FIG. 1. The order in which the exemplary method is described is not intended to be construed as a limitation, and any number of the described blocks may be combined in any order to implement the method disclosed herein, or an equivalent alternative method. Additionally, certain blocks may be deleted from the method or augmented by additional blocks with added functionality without departing from the claimed scope of the subject matter described herein.
[0047] At step (202), the conversational system (100) receives a user prompt in a natural language from a hospital system (102). In one embodiment, a medical practitioner may submit the user prompt to the conversational system (100) via a chatbox (109) provided in the GUIs (108) of the healthcare application (106). The user prompt thus submitted by the medical practitioner includes clinical notes of a patient in a natural language to be converted into a structured format by the conversational system (100). The user prompt may also specify a particular output format such as a JSON, a CCD file, or a FHIR format in which the clinical notes of the patient have to be generated by the conversational system (100).
[0048] In a particular scenario, a user prompt (302) submitted by the medical practitioner in a natural language via the chatbox (109) may include clinical notes of the patient in English, while further specifying the output to be generated in the JSON format. The user prompt (302), for example, may include "152-YEAR-OLD FEMALE, CONTACT NUMBER: +12739926, WITH PATIENT ID #110764 HAS CHRONIC MACULAR RASH TO FACE AND HAIR, WORSE IN BEARD, EYEBROWS AND NARES. THE RASH IS ITCHY, FLAKY AND SLIGHTLY SCALY. MODERATE RESPONSE TO OTC STEROID CREAM. PATIENT HAS BEEN USING CREAM FOR 2 WEEKS AND ALSO SUFFERS FROM DIABETES. PLEASE GIVE A STRUCTURED RESPONSE IN A JSON FORMAT."
[0049] Accordingly, at step (204), the conversational system (100) generates an instruction based on the output format in which a response to the user prompt (302) has to be generated, and adds the generated instruction to the user prompt (302). For example, with reference to the previously noted user prompt (302), the conversational system (100) generates the instruction as “You are a helpful assistant able to purely express yourself in JSON. Given the following doctor's notes on a patient, compare the details with existing patient record, and strictly generate a structured JSON as an output.” The conversational system (100) then adds the generated instruction to the original user prompt (302) to generate the refined user prompt.
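For illustration, a simple Python sketch of how the format-specific instruction could be prepended to the user prompt is given below; the instruction text mirrors the example above, while the dictionary and function names are illustrative assumptions rather than the claimed implementation.

FORMAT_INSTRUCTIONS = {
    "JSON": ("You are a helpful assistant able to purely express yourself in JSON. "
             "Given the following doctor's notes on a patient, compare the details with "
             "existing patient record, and strictly generate a structured JSON as an output."),
    # Instructions for the CCD and FHIR formats would be defined analogously.
}

def add_format_instruction(user_prompt: str, output_format: str = "JSON") -> str:
    """Return the user prompt with the output-format instruction prepended."""
    return FORMAT_INSTRUCTIONS[output_format] + "\n\n" + user_prompt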
[0050] To that end, in one embodiment, the conversational system (100) includes an orchestrator that, in turn, includes a LangChain framework. The LangChain framework automatically interacts with an external system, such as the hospital information system (112), for retrieving details of the patient from the hospital information system (112). In addition to the LangChain framework, the conversational system (100) may also employ a retrieval-augmented generation (RAG) framework. The RAG framework integrates dense retrieval of information with sequence-to-sequence models, including LLMs, for executing natural language processing tasks such as question answering, text completion, and/or conversing with end users.
[0051] In one exemplary scenario, the RAG framework includes two main components including a retriever component and a generator component. The retriever component automatically retrieves details of the patient from the hospital information system (112) using associated application programming interfaces (APIs). Subsequently, the retriever component provides the retrieved patient details as an input to the generator component for further processing.
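The following Python sketch illustrates one possible split of responsibilities between the retriever component and the generator component; the fetch_patient and call_llm callables are hypothetical placeholders for the hospital information system API and the LLM, respectively, and are not documented interfaces.

import json
from typing import Callable

def retrieve_patient_details(patient_id: str, fetch_patient: Callable[[str], dict]) -> dict:
    """Retriever component: pull the patient's record from the hospital information system."""
    return fetch_patient(patient_id)

def generate_response(prompt: str, patient_record: dict, call_llm: Callable[[str], str]) -> str:
    """Generator component: produce the answer conditioned on the retrieved record."""
    grounded = prompt + "\n\nExisting patient record:\n" + json.dumps(patient_record, indent=2)
    return call_llm(grounded)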
[0052] At step (206), the conversational system (100) generates a raw output including a preliminary response to the user prompt in the output format using the RAG framework. For example, the conversational system (100) generates a raw output in the JSON format including the following information.
[0053] {"PATIENT_ID": "#110764",
[0054] "GENDER": "FEMALE",
[0055] "AGE": "152",
[0056] "CONTACT NUMBER": "+12739926",
[0057] "SYMPTOMS": [{"SYMPTOM": "CHRONIC MACULAR RASH", "AFFECTED_AREA": "FACE AND HAIR, WORSE IN BEARD, EYEBROWS AND NARES"}, {"SYMPTOM": "ITCHY", "AFFECTED_AREA": "WHOLE BODY"}, {"SYMPTOM": "FLAKY", "AFFECTED_AREA": "WHOLE BODY"}, {"SYMPTOM": "SLIGHTLY SCALY", "AFFECTED_AREA": "WHOLE BODY"}],
[0058] "CURRENT_MEDS": [{"MEDICATION": "OTC STEROID CREAM", "RESPONSE": "MODERATE RESPONSE"}],
[0059] "MISCELLANEOUS": "PATIENT HAS BEEN USING CREAM FOR 2 WEEKS AND ALSO SUFFERS FROM DIABETES"}.
[0060] Subsequently, at step (208), the validator system (110) intelligently selects one or more validators from a plurality of associated validators to validate if the raw output generated by the conversational system (100) includes any error. In certain embodiments, the validator system (110) includes a number of associated validators designed to validate if the raw output generated by the conversational system (100) includes any error. These validators are developed, for example, using .NET or Python language functions to automatically identify if the user prompt and the raw output generated by the conversational system (100) include any errors such as grammatical errors, missing information, incorrect information, contextual errors, spelling errors, and typo errors. In addition, these validators automatically refine the user prompt and eliminate errors, if any, in the user prompt to enable the conversational system (100) to output accurate healthcare information to a user. For the sake of simplicity, the validator system (110) is subsequently described with reference to three different validators: a 'domain-specific application context' validator referred to as 'Healthcare-app context', an 'SQL query error' validator, and a 'Business rules' validator. However, it is to be understood that the validator system (110) can have any number and types of validators depending upon the specific type of healthcare information to be provided as an output by the conversational system (100).
[0061] For example, the validator system (110) may also include additional validators such as 'regulatory compliance,' 'python code debug,' 'semantic specification,' and 'personal identifiable information (PII)' validators to identify if a user prompt submitted by a user or the raw output generated by the conversational system (100) includes any errors. For example, the conversational system (100) including the 'regulatory compliance' validator allows a medical device company to develop a medical device that adheres to medical standards. In this example, the conversational system (100) is previously trained with different medical standards such as ISO 10993, ISO 14971, ISO 13485, IEC 62304, and IEC 60601. When a medical device developer submits a user prompt including a query related to a medical device to the conversational system (100), the 'regulatory compliance' validator validates if the user prompt includes any error based on the medical standards related data used previously to train the conversational system (100). For example, a user prompt submitted by the medical device developer to the conversational system (100) may correspond to "Does usage of a specific type of plastic material in making a petri dish make the petri dish biocompatible per the IEC 62304 standard?" In this example, the 'regulatory compliance' validator identifies, based on the trainings previously provided to the conversational system (100), that regulatory information concerning the biocompatibility of a medical device is generally described in ISO 10993 and not in IEC 62304. Further, in this example, the conversational system (100) automatically refines the user prompt by modifying the reference to the standard in the user prompt from "IEC 62304" to "ISO 10993" to provide an accurate response to the user prompt initially submitted by the medical device developer.
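The standard-reference correction described in this example may be visualized with the following Python sketch; the topic-to-standard mapping is an illustrative assumption made for the example and is not an authoritative association of topics with standards.

TOPIC_TO_STANDARD = {
    "biocompat": "ISO 10993",        # biocompatibility-related prompts
    "risk management": "ISO 14971",
    "software lifecycle": "IEC 62304",
}

def correct_standard_reference(prompt: str) -> str:
    """If the prompt cites a standard that does not govern its topic, swap in the governing one."""
    for topic, governing_standard in TOPIC_TO_STANDARD.items():
        if topic in prompt.lower():
            for cited_standard in TOPIC_TO_STANDARD.values():
                if cited_standard in prompt and cited_standard != governing_standard:
                    return prompt.replace(cited_standard, governing_standard)
    return prompt

print(correct_standard_reference(
    "Does this plastic make the petri dish biocompatible per IEC 62304 standard?"))
# -> the reference is rewritten to ISO 10993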
[0062] In another example, the 'python code debug' validator validates if a raw output corresponding to python strings includes any errors. Specifically, the 'python code debug' validator parses the python strings into an abstract syntax tree and determines if there are any syntax errors in the python strings. The 'python code debug' validator then identifies the determined syntax errors as errors in the raw output generated by the conversational system (100). In yet another example, the 'semantic specification' validator performs semantic analysis on a user prompt submitted by a user. For instance, a user may submit a user prompt to the conversational system (100) as "Provide me the list of medicines that have both caffeine and paracetamol as active ingredients." In this example, the 'semantic specification' validator identifies what the user is asking for by performing semantic analysis on the user prompt. Further, the 'semantic specification' validator automatically refines the initially submitted user prompt, for example, to "Provide me the list of medicines that have caffeine + paracetamol as active ingredients." The refined user prompt including the '+' sign indicates to the conversational system (100) that the conversational system (100) needs to provide names of medicines that mandatorily have both the caffeine and paracetamol drug combination.
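A minimal sketch of the abstract-syntax-tree based syntax check performed by the 'python code debug' validator may look as follows; only the standard library ast module is used, and the function name is illustrative.

import ast

def find_python_syntax_error(code_string: str):
    """Parse the generated Python string; return an error description, or None if it is valid."""
    try:
        ast.parse(code_string)
        return None
    except SyntaxError as error:
        return f"Syntax error at line {error.lineno}: {error.msg}"

print(find_python_syntax_error("def f(:\n    return 1"))   # reports the malformed signature
print(find_python_syntax_error("def f(x):\n    return x"))  # prints None for valid code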
[0063] Further, the 'PII' validator identifies if the raw output or a final response generated by the conversational system (100) includes any personally identifiable information of a user. For example, the 'PII' validator identifies if the raw output or the final response generated by the conversational system (100) includes, for example, a name of the patient and/or a contact number of the patient that reveals the identity of the patient. Further, the 'PII' validator selectively excludes such personal information of the patient from the final response generated by the conversational system (100) to ensure that the privacy of the patient is not compromised.
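For illustration, the sketch below shows a simple pattern-based redaction of the kind the 'PII' validator could perform on a final response; the regular expressions are simplified assumptions, and a production validator would rely on more robust detection.

import re

# Simplified, assumed patterns for a contact number and a "Name:" field.
PII_PATTERNS = [
    (re.compile(r"\+?\d[\d\- ]{7,}\d"), "[REDACTED CONTACT NUMBER]"),
    (re.compile(r"(?i)name\s*:\s*[^,]+"), "NAME: [REDACTED]"),
]

def redact_pii(response_text: str) -> str:
    """Replace detected personally identifiable information before the response is output."""
    for pattern, replacement in PII_PATTERNS:
        response_text = pattern.sub(replacement, response_text)
    return response_text

print(redact_pii("Name: Jane Doe, contact number +1273992645, rash improving."))
# -> "NAME: [REDACTED], contact number [REDACTED CONTACT NUMBER], rash improving."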
[0064] In one embodiment, the validator system (110) selects one or more validators that are suitable for validating the raw output from the plurality of associated validators based on one or more keywords in the user prompt (302) submitted by the medical practitioner. To that end, the validator system (110) includes a knowledge database (120) that stores a plurality of keywords mapped to each of the plurality of validators. For example, the 'Healthcare-app context' validator may be mapped in the knowledge database (120) to keywords that are not specific to the healthcare domain, such as weather, traffic, places to visit, and restaurants. The 'SQL query error' validator may be mapped in the knowledge database (120) to keywords including numerical values such as patient IDs and patient contact numbers. The 'Business rules' validator may be mapped in the knowledge database (120) to business-related keywords such as business, complaints, finance, revenue, and income.
[0065] When the medical practitioner submits a user prompt that is completely irrelevant to healthcare domain, for example, as ‘Please suggest me some good restaurants nearby’, the validator system (110) compares a keyword such as ‘restaurants’ in the user prompt with the keywords mapped to each of the validators in the knowledge database (120). Based on the comparison, the validator system (110), for example, identifies that the keyword “restaurants” in the user prompt matches with the same keyword “restaurants” mapped to the validator ‘Healthcare-app context’ in the knowledge database (120). Accordingly, in this example, the validator system (110) selects the validator ‘Healthcare-app context’ as a suitable validator that is to be used for validating a raw output generated by the conversational system (100).
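A compact Python sketch of this keyword-to-validator selection is given below; the keyword sets mirror the examples above, but the exact contents of the knowledge database (120) are illustrative assumptions.

# Assumed keyword mapping, following the examples in the description.
VALIDATOR_KEYWORDS = {
    "Healthcare-app context": {"weather", "traffic", "places to visit", "restaurants"},
    "SQL query error": {"patient id", "contact number"},
    "Business rules": {"business", "complaints", "finance", "revenue", "income"},
}

def select_validators(user_prompt: str):
    """Return the validators whose mapped keywords appear in the user prompt."""
    prompt_lower = user_prompt.lower()
    return [name for name, keywords in VALIDATOR_KEYWORDS.items()
            if any(keyword in prompt_lower for keyword in keywords)]

print(select_validators("Please suggest me some good restaurants nearby"))
# -> ['Healthcare-app context']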
[0066] Further, in the previously noted example, the selected validator ‘Healthcare-app context’ identifies that the user prompt submitted by the medical practitioner is not related to healthcare domain based on the keyword “restaurants” specified in the user prompt. Accordingly, in this example, the conversational system (100) generates and displays a response to the user prompt as “I'm afraid I don't have enough context to suggest good restaurants closer to you”, on the display unit (114) of the hospital system (102) instead of presenting inaccurate or nonsensical responses.
[0067] In another example, the medical practitioner submits the previously noted user prompt (302) including numerical values such as the patient's ID and the patient's contact number. Further, the validator system (110) identifies that the keywords "patient's ID" and "patient's contact number" mentioned in the user prompt (302) match with the keywords mapped to the 'SQL query error' validator in the knowledge database (120). Accordingly, in this example, the validator system (110) selects the 'SQL query error' validator as the suitable validator to be used for validating the raw output. It is to be understood that the validator system (110) may similarly select the 'Business rules' validator as the suitable validator for validating the raw output when the user prompt submitted by the medical practitioner includes business-related keywords that match with the keywords mapped to the 'Business rules' validator in the knowledge database (120).
[0068] Thus, the validator system (110) described in the present disclosure does not merely select and execute all available validators such as ‘Healthcare-app context’, ‘SQL query error’, and ‘Business rules validators’ for all user prompts. The validator system (110) intelligently identifies and selects one or more suitable validators from the plurality of associated validators based on the keywords in the user prompt, and uses only the selected validators for validating the raw output generated by the conversational system (100).
[0069] In certain embodiments, at step (210), the validator system (110) validates if the raw output generated by the conversational system (100) includes any errors using the selected validators. For example, the validator system (110) selects the 'SQL query error' validator as the suitable validator to be used for validating the raw output generated by the conversational system (100), as noted previously. The validator system (110) then compares a set of input data related to an entity, such as patient details related to a patient specified in the raw output, with the patient details related to the same patient retrieved from a master data source system. An example of the master data source system is the hospital information system (112). For instance, the validator system (110) compares the patient details specified in the raw output with the patient details retrieved from the hospital information system (112) using the LangChain framework, the RAG framework, and/or APIs, and further identifies if the raw output generated by the conversational system (100) includes any error based on the comparison. Further, the validator system (110) automatically refines the user prompt based on the retrieved patient details when the validator system (110) identifies that the generated raw output includes an error. Further, the generator component of the RAG framework generates a final response to the user prompt initially submitted by the medical practitioner based on the refined user prompt.
[0070] For example, the conversational system (100) retrieves an ID of the patient “#110764” from the raw output generated by the conversational system (100). Subsequently, the conversational system (100) compares the retrieved patient ID “#110764” with patient IDs of a plurality of patients stored in the hospital information system (112) to identify a specific patient ID that matches the retrieved patient ID “#110764”. The conversational system (100) then retrieves electronic health record (EHR) data associated with the matched patient ID.
[0071] In certain embodiments, the validator system (110) uses the retrieved EHR data as golden reference data. The validator system (110) compares the raw output generated by the conversational system (100) with the retrieved EHR data, and identifies whether there are any mismatches between the patient details mentioned in the raw output and the patient details mentioned in the retrieved EHR data. When the validator system (110) identifies any mismatches between the patient details included in the raw output and the patient details stored in the retrieved EHR data, the validator system (110) identifies those mismatches as errors in the raw output and in the user prompt (302). In one embodiment, the conversational system (100) uses the Pinecone framework, which corresponds to a cloud-based vector database, to compare the patient details specified in the raw output with the retrieved EHR data. The Pinecone framework represents data as vectors and allows for quick search and identification of data similarity between information obtained from two different data sources, making it suitable for natural language processing and patient record matching tasks.
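The role of the vector representation in this comparison may be sketched, purely for illustration, with a plain cosine-similarity computation. The toy character-count embedding below stands in for the learned embeddings and the vector index that a deployed system would use; none of it is the actual Pinecone client interface.

# Toy vector-similarity sketch; a deployed system would store learned embeddings
# in a vector database such as Pinecone rather than use this character-count proxy.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-characters 'embedding' used only to keep the sketch self-contained."""
    return Counter(text.lower())

def cosine_similarity(a: Counter, b: Counter) -> float:
    dot = sum(a[key] * b[key] for key in a)
    norm_a = math.sqrt(sum(value * value for value in a.values()))
    norm_b = math.sqrt(sum(value * value for value in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

raw_field = "contact number +12739926"
ehr_field = "contact number +1273992645"
# A high-but-imperfect similarity suggests a near-match worth flagging as a mismatch.
print(round(cosine_similarity(embed(raw_field), embed(ehr_field)), 3))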
[0072] For example, the age of the patient mentioned in the raw output is ‘152’. However, the actual age of the patient is only 52, which is correctly mentioned in the EHR data. In this example, the validator system (110) compares the age of the patient mentioned in the raw output with the age of the patient mentioned in the EHR data using the selected validator ‘SQL query error’ and the Pinecone framework. Based on the comparison, the validator system (110) identifies that the age of the patient is incorrectly mentioned in the raw output as ‘152’, and further identifies this mismatch in the age of the patient as a first error in the raw output.
[0073] In another example, the contact number of the patient erroneously mentioned in the raw output is ‘+12739926’. However, the actual contact number of the patient is ‘+1273992645’, which is correctly mentioned in the EHR data. In this example, the validator system (110) identifies that the contact number of the patient is incorrectly mentioned as ‘+12739926’ in the raw output by comparing the contact number of the patient mentioned in the raw output with the contact number of the patient mentioned in the EHR data. Further, the validator system (110) identifies the contact number of the patient ‘+12739926’ incorrectly mentioned in the raw output as a second error in the raw output.
[0074] In addition to identifying errors related to numerical values, such as the errors related to the incorrect age and contact number of the patient mentioned in the raw output, the validator system (110) is also capable of identifying contextual errors. For example, the raw output erroneously mentions the gender of the patient as ‘Female’. Further, the raw output mentions one of the patient symptoms as “Chronic macular rash that is worse in beard.” In this example, the conversational system (100) concludes that female patients generally do not have beards, based on training previously provided to the conversational system (100). Accordingly, the conversational system (100) identifies that the gender of the patient is incorrectly mentioned as ‘female’ in the raw output. The conversational system (100) also identifies this incorrect gender of the patient in the raw output as a third error in the raw output. Subsequently, the validator system (110) retrieves the correct gender of the patient from the EHR data, which, for example, records the correct gender of the patient as ‘Male.’
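A contextual check of this kind may be approximated, for illustration only, by a simple rule; the fixed term list below is an assumption, whereas the described system relies on its prior training rather than such a rule.

# Illustrative rule-based stand-in for the contextual gender/symptom check.
MALE_ONLY_TERMS = {"beard"}  # hypothetical list of terms inconsistent with a female record

def has_gender_context_error(gender: str, symptom_text: str) -> bool:
    """Flag a contextual error when a record marked 'female' mentions male-only terms."""
    words = set(symptom_text.lower().split())
    return gender.upper() == "FEMALE" and bool(MALE_ONLY_TERMS & words)

print(has_gender_context_error("Female", "Chronic macular rash that is worse in beard"))
# -> True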
[0075] Further, the raw output indicates that rash-related symptoms such as “itchy”, “flaky”, and “slightly scaly” exist all over the body of the patient. However, in reality, the patient does not have these symptoms throughout the whole body. The EHR data stored in the hospital information system (112) also does not particularly specify body areas affected by these symptoms. In this example, the validator system (110) compares the details of these symptoms mentioned in the raw output with the details of the symptoms mentioned in the EHR data, and identifies that no affected areas are specifically mentioned in the EHR data for these symptoms. Accordingly, in this example, the validator system (110) identifies that the description of the symptoms included in the raw output is incorrect. Further, the validator system (110) identifies this incorrect description of the symptoms included in the raw output as a fourth error in the raw output.
[0076] At step (212), the conversational system (100) generates an error correction instruction upon identifying one or more errors in the raw output generated by the conversational system (100). For instance, the conversational system (100) identifies that the raw output generated by the conversational system (100) includes four different errors, as noted previously. The first error is related to incorrect age of the patient included in the raw output, and the second error is related to incorrect contact number of the patient included in the raw output. The third error is related to incorrect gender of the patient included in the raw output, and the fourth error is related to incorrect description of the patient symptoms included in the raw output.
[0077] With reference to the previously noted examples, the conversational system (100) generates the error correction instruction based on correct details of the patient retrieved from the EHR data, which is stored in the hospital information system (112). For example, the error correction instruction generated based on correct details of the patient retrieved from the EHR data indicates to the conversational system (100) that the correct age of the patient should be 52, the correct contact number of the patient should be ‘+1273992645’, and the correct gender of the patient should be male. Further, the error correction instruction indicates to the conversational system (100) that the correct description of the patient symptoms should be ‘Affected areas for the symptoms such as itchy, flaky, and slightly scaly should be null and should not be whole body as the EHR data has no information on affected areas for these symptoms.’
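The assembly of the error correction instruction from the identified mismatches may be sketched as follows; the mismatch structure and wording are assumptions made for the sketch only.

# Sketch of building an error correction instruction from detected mismatches.
def build_error_correction_instruction(mismatches: dict) -> str:
    lines = [f"The correct {field.replace('_', ' ')} of the patient should be {golden}."
             for field, (raw_value, golden) in mismatches.items()]
    return " ".join(lines)

mismatches = {"age": (152, 52),
              "contact number": ("+12739926", "+1273992645"),
              "gender": ("FEMALE", "MALE")}
print(build_error_correction_instruction(mismatches))
# -> "The correct age of the patient should be 52. The correct contact number of the
#     patient should be +1273992645. The correct gender of the patient should be MALE."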
[0078] Subsequently, at step (214), the prompt refinement system (116) automatically refines the initial user prompt (302) based on the error correction instruction generated by the conversational system (100) to generate a refined user prompt. An example of the generated refined user prompt with correct patient details includes, “52 y/o male, contact number: +1273992645, with patient id #110764 has chronic macular rash to face and hair, worse in beard, eyebrows and nares. The rash is itchy, flaky and slightly scaly. Moderate response to OTC steroid cream. Patient has been using cream for 2 weeks and also suffers from diabetes. Please give structured response in JSON format. No affected areas are mentioned for the symptoms itchy, flaky and scaly in the hospital information system.”
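One possible way in which the prompt refinement system (116) could hand the original prompt and the error correction instruction to the underlying large language model is sketched below; the wrapper text is an assumption and the actual model call is left abstract.

# Sketch of composing a refinement request for the underlying language model.
def build_refinement_prompt(original_prompt: str, correction_instruction: str) -> str:
    return (
        "Rewrite the following clinical prompt so that it is factually consistent "
        "with the corrections listed, changing nothing else.\n\n"
        f"Original prompt:\n{original_prompt}\n\n"
        f"Corrections:\n{correction_instruction}\n"
    )

original = "152 y/o female, contact number: +12739926, with patient id #110764 ..."
corrections = ("The correct age of the patient should be 52. "
               "The correct gender of the patient should be male. "
               "The correct contact number of the patient should be +1273992645.")
print(build_refinement_prompt(original, corrections))
# The resulting text would be submitted to the model to obtain the refined user prompt.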
[0079] At step (216), the conversational system (100) generates the response to the initial user prompt (302) based on the refined user prompt generated by the prompt refinement system (116). For example, the conversational system (100) generates a final response (402) as
[0080] {"PATIENT_ID": "#110764",
[0081] "GENDER": "MALE",
[0082] "AGE": 52,
[0083] "CONTACT NUMBER": "+1273992645",
[0084] "SYMPTOMS": [{"SYMPTOM": "CHRONIC MACULAR RASH", "AFFECTED_AREA": "FACE"}, {"SYMPTOM": "CHRONIC MACULAR RASH", "AFFECTED_AREA": "HAIR"}, {"SYMPTOM": "ITCHY", "AFFECTED_AREA": "NULL"}, {"SYMPTOM": "FLAKY", "AFFECTED_AREA": "NULL"}, {"SYMPTOM": "SLIGHTLY SCALY", "AFFECTED_AREA": "NULL"}],
[0085] "CURRENT_MEDS": [{"MEDICATION": "OTC STEROID CREAM", "RESPONSE": "MODERATE RESPONSE"}],
[0086] "MISCELLANEOUS": "PATIENT HAS BEEN USING CREAM FOR 2 WEEKS AND ALSO SUFFERS FROM DIABETES"}
[0087] It may be noted that the final response (402) generated based on the refined user prompt includes the clinical notes of the patient in the JSON format as initially requested by the medical practitioner. Also, it may be noted that all patient-related details, such as the age, contact number, gender, and description of the patient symptoms mentioned in the generated response (402), are accurate and consistent with the EHR data of the patient stored in the hospital information system (112). Thus, the conversational system (100) is capable of providing accurate information to a user even when the user prompt (302) submitted by the user includes multiple errors. The conversational system (100) automatically refines the user prompt (302) having errors based on information obtained from a reliable external master data source system, such as the hospital information system (112), and outputs accurate information to the user without requiring any user intervention.
[0088] In contrast, conventional conversational systems, such as ChatGPT4.0, generate a response to a user prompt without analyzing if the user prompt includes any errors. For example, providing the previously noted user prompt (302) with errors to a conventional conversational system such as to ChatGPT4.0 results in an inaccurate and unsatisfactory response (502), as depicted in FIG. 5. For example, the ChatGPT4.0 response (502) includes,
[0089] “PATIENT INFORMATION:
ID: #110764
AGE: 152 YEARS
GENDER: FEMALE
CONTACT NO: +12739926
[0090] MEDICAL HISTORY: DIABETES
[0091] PRESENTING COMPLAINT: CHRONIC MACULAR RASH ON THE FACE AND HAIR, PARTICULARLY EXACERBATED IN THE BEARD, EYEBROWS, AND NARES (NOSTRILS).
[0092] SYMPTOMS: ITCHY, FLAKY, SLIGHTLY SCALY RASH.
[0093] AFFECTED AREA: WHOLE-BODY
[0094] CURRENT TREATMENT: OVER-THE-COUNTER (OTC) STEROID CREAM. DURATION OF USE: 2 WEEKS. RESPONSE: MODERATE IMPROVEMENT NOTED.”
[0095] It may be noted that the response (502) generated as an output by the conventional conversational system incorrectly displays the age of the patient as 152, the gender of the patient as female, the contact number of the patient as +12739926, and the area affected by the rash as whole body.
[0096] As conventional conversational systems neither verify nor validate user prompts for errors, nor refine the user prompts to eliminate those errors, the responses generated by such systems are often inaccurate and unsatisfactory, thereby necessitating multiple inputs from the user to extract the desired response. However, the conversational system (100) described in the present disclosure automatically identifies errors, if any, in the user prompt submitted by the user, refines the user prompt to eliminate those errors from the user prompt, and outputs accurate results to the user without requiring user intervention.
[0097] Further, at step (218), the conversational system (100) transmits the response (402) that is generated based on the refined user prompt to the hospital system (102) via the communications link (104). The hospital system (102) then displays the response (402) including the clinical notes of the patient in the JSON format to the medical practitioner via the associated display unit (114). The medical practitioner may then store the response (402) received as a reply from the conversational system (100) in the patient access application (118) to provide the patient access to his or her clinical information via the patient access application (118).
[0098] Use of the present conversational system (100), thus, prevents erroneous, missing, or unwanted duplication of data in medical records, which ensures patient safety as diagnosis and treatment are based on accurate medical information. The present conversational system (100) also reduces revenue loss associated with processing inefficiencies by the medical staff, while aiding adherence to applicable data privacy, regulatory, and compliance laws.
[0099] In an alternative embodiment, the conversational system (100) may also be used by a medical device company to retrieve complaints raised by various users regarding an associated medical device. For example, FIG. 7 illustrates a system diagram depicting the exemplary conversational system (100) that allows a representative of the medical device company to automatically retrieve complaints related to a particular medical device from a complaints file, for example, stored in Microsoft Excel format.
[00100] In this exemplary embodiment, a particular type of medical device (601), for example, a specific model of a medical imaging device (601) such as an X-ray machine or a computed tomography machine deployed in various hospitals and/or scanning centers, is communicatively coupled to a medical device company (MDC) system (602). Examples of the MDC system (602) include a dedicated computer server, a smartphone, a desktop, a laptop, and a tablet. When hospitals and/or scanning centers face any issues with the medical device (601), they may share the complaints with the MDC system (602) via the communications link (104).
[00101] The MDC system (602) receives the complaints related to the medical device (601) raised by various healthcare professionals over a period of time and generates the complaints file. Typically, the medical device company maintains the complaints file that includes information such as an ID of the medical device (601), an ID of the manufacturer, information related to a type of the medical device (601), and complaints raised by various users using the medical device (601), arranged in various rows and columns of the complaints file. Sometimes, the size of the data stored in the complaints file is in terabytes. Currently, a user, for example, a service engineer working for the medical device company, needs to manually review details in the rows and columns of the complaints file to identify issues such as missing data in the rows and columns. Such manual checking of the complaints file is a time-consuming process and is prone to manual error.
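For illustration, a small excerpt of such a complaints file, and the manual blank-device-ID check it implies, might look as follows; the column names, sample rows, and the use of the pandas library are assumptions made for the sketch.

# Hypothetical excerpt of a complaints file; in practice it would be read with
# pd.read_excel("complaints.xlsx") rather than constructed in memory.
import pandas as pd

complaints = pd.DataFrame({
    "device_id":       ["DX-100", None, "DX-102", None],
    "manufacturer_id": ["M-01", "M-01", "M-02", "M-02"],
    "device_type":     ["X-ray", "X-ray", "CT", "CT"],
    "complaint":       ["Tube overheating", "Blurry images",
                        "Gantry noise", "Console freeze"],
})
print(complaints["device_id"].isna().sum(), "complaints have a blank device ID")
# -> 2 complaints have a blank device ID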
[00102] Instead of using such an error-prone manual checking approach, the medical device company may use the present conversational system (100), for example, to automatically retrieve complaints including incomplete or inaccurate details from the complaints file. To that end, the conversational system (100) is communicatively coupled to the selected system (102), for example, to a medical device company (MDC) system (602), via the communications link (104), as noted previously. Specifically, the MDC system (602) includes a medtech complaints management (MCM) application (604) that includes one or more associated graphical user interfaces (GUIs) (606).
[00103] The GUIs (606) of the MCM application (604) include a chatbox (607) through which the user submits the complaints file and a user prompt (609) to the conversational system (100) for retrieving the complaints that include incomplete or inaccurate details from the complaints file. For example, the user submits the user prompt (609) in a natural language to the conversational system (100) through the chatbox (607) as “Fetch me the complaints that include blank device ID.”
[00104] Upon receiving the complaints file and the user prompt (609) submitted by the user, the conversational system (100) converts the user prompt (609) submitted by the user into a SQL query, for example, using ChatGPT 3.5. For instance, the conversational system (100) converts the user prompt (609) “Fetch me complaints that have blank device ID from the complaints file” submitted in English by the user into the SQL query, “SELECT DISTINCT device_id FROM table_name FROM column_name.” The SQL query, thus generated from the user prompt (609), corresponds to a raw output of the conversational system (100).
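A minimal sketch of this natural-language-to-SQL conversion, assuming the OpenAI Python SDK's chat-completions interface, is shown below; the model name, system instruction, and surrounding code are assumptions rather than the implementation described in this disclosure.

# Sketch of converting a user prompt into an SQL query via a large language model.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def prompt_to_sql(user_prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Translate the user's request into a single SQL query. "
                        "Return only the SQL."},
            {"role": "user", "content": user_prompt},
        ],
    )
    return response.choices[0].message.content.strip()

raw_sql = prompt_to_sql("Fetch me complaints that have blank device ID from the complaints file")
# raw_sql corresponds to the 'raw output' that the validator system (110) then checks.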
[00105] After generating the raw output including the SQL query, the conversational system (100) retrieves a copy of the complaints file stored in a complaints database (608) that resides within the MDC system (602). Further, the conversational system (100) compares and verifies whether the data in each cell of the complaints file submitted by the user as an input to the conversational system (100) matches the data in each corresponding cell of the complaints file retrieved from the complaints database (608). If the data in the complaints file submitted by the user exactly matches the data in the complaints file retrieved from the complaints database (608), the conversational system (100) uses the complaints file submitted by the user to retrieve the complaints with a blank device ID. Otherwise, the conversational system (100) uses the complaints file retrieved from the complaints database (608) for retrieving the complaints with a blank device ID.
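The cell-by-cell check between the user-submitted file and the stored copy may be sketched as follows; the use of the pandas library and the fallback behaviour are assumptions for illustration.

# Sketch of deciding which copy of the complaints file to trust.
import pandas as pd

def pick_trusted_copy(submitted: pd.DataFrame, stored: pd.DataFrame) -> pd.DataFrame:
    """Use the user-submitted file only when every cell matches the stored copy."""
    if submitted.shape == stored.shape and submitted.fillna("").equals(stored.fillna("")):
        return submitted
    return stored  # fall back to the master copy in the complaints database (608)

# submitted = pd.read_excel("uploaded_complaints.xlsx")
# stored    = pd.read_sql("SELECT * FROM device_complaints", connection)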
[00106] In certain embodiments, the conversational system (100) subsequently identifies a suitable validator for validating the raw output including the SQL query generated by the conversational system (100). With reference to one of the previously noted examples, the validator system (110) compares the keyword “complaints” retrieved from the user prompt (609) with keywords mapped to each of the different validators in the knowledge database (120). Specifically, the validator system (110), for example, identifies that the keyword “complaints” in the user prompt (609) matches with one of the keywords mapped to the validator “Business rules validators” in the knowledge database (120). Accordingly, in the previously noted example, the validator system (110) selects the validator “Business rules validators” as the suitable validator to be used for validating the raw output including the SQL query generated by the conversational system (100).
[00107] The validator system (110) then validates the raw output “SELECT DISTINCT device_id FROM table_name FROM column_name” generated by the conversational system (100) using the selected validator “Business rules validators.” Specifically, the validator system (110) checks if the raw output generated by the conversational system (100) adheres to each of a plurality of predefined rules using the validator “Business rules validators.” For example, a first predefined rule may indicate that the raw output generated by the conversational system (100) should include the word “select”. Further, a second predefined rule may indicate that the generated raw output should not include the words such as “delete” and “update” as presence of such words in the raw output may cause the conversational system (100) to change data in the complaints file, which is not intended. Furthermore, a third predefined rule may indicate that a set of input data including the ‘table_name’ and ‘column_name’ mentioned in the raw output should match with the same set of input data including the ‘table_name’ and ‘column_name’, respectively mentioned in the complaints file retrieved from the complaints database (608).
[00108] The validator system (110) validates whether the raw output “SELECT DISTINCT device_id FROM table_name FROM column_name” generated by the conversational system (100) adheres to each of the exemplary first, second, and third predefined rules noted previously. For example, the validator system (110) may identify the generated raw output as adhering to the first and second predefined rules as the raw output includes only the word “select” and does not include the words “delete” and “update.” However, the validator system (110) may also identify the generated raw output as not adhering to the third predefined rule when the raw output fails to include the exact details of ‘table_name’ and ‘column_name’ needed for retrieving the complaints with a blank device ID.
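For illustration, the three predefined rules applied by the ‘Business rules validators’ validator may be sketched as simple checks over the generated SQL string; the known table and column names are assumptions about the complaints database schema.

# Sketch of the three predefined business rules applied to the raw SQL output.
import re

KNOWN_TABLES = {"device_complaints"}   # assumed schema of the complaints database
KNOWN_COLUMNS = {"column_b"}

def validate_sql(raw_sql: str) -> list:
    sql = raw_sql.lower()
    errors = []
    if "select" not in sql:                                     # first rule
        errors.append("query must contain SELECT")
    if re.search(r"\b(delete|update)\b", sql):                  # second rule
        errors.append("query must not modify data")
    tokens = set(re.findall(r"\w+", sql))
    if not (tokens & KNOWN_TABLES and tokens & KNOWN_COLUMNS):  # third rule
        errors.append("table name and column name are missing or unknown")
    return errors

print(validate_sql("SELECT DISTINCT device_id FROM table_name FROM column_name"))
# -> ['table name and column name are missing or unknown']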
[00109] As the validator system (110) fails to successfully validate the third predefined rule, the validator system (110) generates an error message. The generated error message indicates to the conversational system (100) that the ‘name of the table’ and the ‘name of the column in the complaints file’ that the conversational system (100) needs for retrieving the complaints are missing in the raw output. Subsequently, the conversational system (100) retrieves these missing names from the complaints file obtained from the complaints database (608), and generates an error correction instruction based on the retrieved names. For example, the conversational system (100) identifies from the complaints file that the ‘name of the table’ missing in the raw output is ‘device_complaints’ and the ‘name of the column’ missing in the raw output is ‘column_B.’ Subsequently, the conversational system (100) generates the error correction instruction indicating to the conversational system (100) that the correct table name missing in the raw output is ‘device_complaints’ and the correct column name missing in the raw output is ‘column_B.’
[00110] The prompt refinement system (116) then refines the user prompt (609) initially submitted by the user and generates a refined user prompt based on the generated error correction instruction. For example, the prompt refinement system (116) refines the original user prompt (609) “Fetch me complaints that have blank device ID from the complaints file” initially submitted by the user and generates a refined user prompt as “Fetch me complaints that have blank device ID from column_B of the device_complaints table” based on the example error correction instruction noted previously.
[00111] The conversational system (100) then converts the refined user prompt “Fetch me complaints that have blank device ID from column_B of the device_complaints table” into a refined SQL query, for example, as “SELECT DISTINCT device_ID FROM column_B FROM device_complaints table.” Further, the conversational system (100) executes the refined SQL query and filters a list of complaints that include a blank device ID in ‘column_B’ of the ‘device_complaints’ table in the complaints file. Furthermore, the conversational system (100) transmits the filtered list of complaints that include a blank or missing device ID to the MDC system (602) via the communications link (104). The MDC system (602) then displays the filtered list of complaints including a blank device ID on an associated display unit (610) for enabling the user to visually perceive the filtered list of complaints and take necessary actions as appropriate. Thus, the conversational system (100) described in the present disclosure automatically refines the user prompt including incorrect information and/or values, errors, and missing information, and subsequently re-runs the refined user prompt to output accurate and reliable healthcare information to users.
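Purely as an illustration of this final step, the refined query may be executed against a small in-memory table as sketched below. The schema, sample rows, and the syntactically regularised form of the refined query ("... WHERE column_B IS NULL") are assumptions made for the sketch and are not the exact query text recited above.

# Illustrative execution of a refined query against an in-memory SQLite table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE device_complaints (column_B TEXT, complaint TEXT)")
conn.executemany("INSERT INTO device_complaints VALUES (?, ?)",
                 [("DX-100", "Tube overheating"),
                  (None, "Blurry images"),
                  (None, "Console freeze")])

refined_sql = "SELECT complaint FROM device_complaints WHERE column_B IS NULL"
print(conn.execute(refined_sql).fetchall())
# -> [('Blurry images',), ('Console freeze',)]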
[00112] The conversational system (100) described in the present disclosure employs generative AI techniques to automatically identify errors in a user prompt submitted as an input to the conversational system (100) by a user. Examples of such errors identified in the user prompt by the generative AI conversational system (100) include, but are not limited to, grammatical errors, contextual errors, and errors in numerical values. Further, the generative AI conversational system (100) generates the error correction instruction based on the errors identified in the user prompt and information retrieved from a reliable external master data source system, such as the hospital information system (112) and/or the complaints database (608), as noted previously with reference to FIGS. 2A-B.
[00113] In addition, the generative AI conversational system (100) automatically refines the user prompt to eliminate the identified errors from the user prompt based on the generated error correction instruction, and generates the response requested by the user based on the refined user prompt. Hence, the response generated by the generative AI conversational system (100) is accurate, non-hallucinated, and reliable. Further, the generative AI conversational system (100) can be used with various selected systems, such as healthcare systems, automotive systems, industrial systems, aerospace systems, rail systems, and media and entertainment systems, to generate responses to user prompts that are accurate, non-hallucinated, and reliable. However, conventional generative AI based conversational systems do not automatically refine user prompts that include errors and instead execute such user prompts as-is. Hence, the responses generated by the conventional generative AI based conversational systems based on such user prompts are generally inaccurate, unreliable, and may include hallucinated data. The generative AI based conversational system (100) described in the present disclosure mitigates these issues by automatically refining user prompts that include errors and provides accurate and non-hallucinated results as outputs to users.
[00114] Although specific features of various embodiments of the present systems and methods may be shown in and/or described with respect to some drawings and not in others, this is for convenience only. It is to be understood that the described features, structures, and/or characteristics may be combined and/or used interchangeably in any suitable manner in the various embodiments shown in the different figures.
[00115] While only certain features of the present systems and methods have been illustrated and described herein, many modifications and changes will occur to those skilled in the art. It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes.
LIST OF NUMERAL REFERENCES:
100 Conversational system
101, 302, 609 User prompt
109, 607 Chatbox
102 Hospital system
104 Communications link
106 Healthcare application
108, 606 Graphical user interfaces
110 Validator system
112 Hospital information system
114, 610 Display unit
116 Prompt refinement system
118 Patient access application
120 Knowledge database
200-218 Steps of a method for accurately generating clinical notes of a patient in a structured format using the conversational system (100)
402 Exemplary response generated by the conversational system (100) for the user prompt (302) submitted by a user
502 Exemplary response generated by a conventional conversational system for the user prompt (302) submitted by a user
601 Medical device
602 Medical device company system
604 Medtech complaints management application
608 Complaints database
Claims:
We claim:
1. A method for automatically modifying an input prompt to a conversational system (100), comprising:
receiving the input prompt from a selected system (102) by the conversational system (100);
generating a raw output based on the received input prompt by the conversational system (100) for further validation using a large language model-based validator system (110);
selecting a validator from a plurality of validators associated with the conversational system (100) based on one or more keywords in the input prompt for validating accuracy of the raw output using the large language model-based validator system (110);
comparing a set of input data related to an entity specified in the raw output with the same set of input data related to the entity retrieved from a master data source system by the conversational system (100) using the selected validator;
identifying one or more errors in the input prompt based on the comparison, wherein the one or more identified errors correspond to one or more of a data mismatch between the set of input data specified in the raw output and the same set of input data retrieved from the master data source system, missing data, a contextual error in the input prompt, and a typo error in the input prompt;
generating an error correction instruction based on the one or more errors identified in the input prompt and the set of input data retrieved from the master data source system using the conversational system (100);
generating a refined input prompt by automatically correcting the one or more identified errors in the input prompt based on the error correction instruction using a large language model-based prompt refinement system (116), wherein the error correction instruction is generated based on the same set of input data retrieved from the master data source system;
generating a final response to the input prompt based on the refined input prompt using the conversational system (100); and
outputting the final response by the conversational system (100) in response to the input prompt initially received by the conversational system (100).
2. The method as claimed in claim 1, wherein generating the raw output by the conversational system (100) comprises:
receiving the input prompt comprising clinical notes of a patient in a natural language along with an instruction to convert the clinical notes of the patient to a structured format by the conversational system (100), wherein the structured format comprises one of JavaScript Object Notation format, a continuity of care document format, and a fast healthcare interoperability resources format;
converting the clinical notes of the patient in the natural language to the structured format based on the instruction; and
generating the raw output comprising the clinical notes of the patient in the structured format.
3. The method as claimed in claim 2, wherein selecting the validator from the plurality of validators comprises:
comparing a keyword selected from the input prompt comprising the clinical notes of the patient with a set of keywords mapped to each of the plurality of validators in a knowledge database (120);
identifying a matching keyword stored in the knowledge database (120) that matches with the keyword selected from the input prompt and further identifying a particular validator that is mapped to the matching keyword in the knowledge database (120); and
validating accuracy of the raw output generated by the conversational system (100) using the identified validator, wherein the plurality of validators comprises a domain-specific application context validator, a structured query language query error validator, a business rules validator, a regulatory compliance validator, a python code debug validator, a semantic specification validator, and a personal identifiable information validator.
4. The method as claimed in claim 3, wherein the entity corresponds to a patient, and wherein the master data source system corresponds to a hospital information system (112) that stores electronic health records of a plurality of patients.
5. The method as claimed in claim 4, wherein generating the error correction instruction comprises:
identifying a subset of input data selected from the set of input data that is incorrectly specified in the raw output generated by the conversational system (100); and
generating the error correction instruction to correct the subset of input data that is incorrectly specified in the raw output based on the same subset of input data retrieved from the hospital information system (112).
6. The method as claimed in claim 5, wherein generating the final response to the input prompt comprises one or more of:
identifying if the final response generated by the conversational system (100) comprises personal identifiable information using the personal identifiable information validator, and excluding the personal identifiable information from the final response;
identifying if the input prompt relates to a request for information related to a specific domain or other information that is unrelated to the specific domain by analyzing one or more keywords in the input prompt using the domain-specific application context validator, and generating the final response to the input prompt only when the input prompt is identified to relate to the request for information related to the specific domain;
converting the input prompt received in a natural language from the selected system (102) into a structured query language query and validating if the structured query language query includes a syntax error using the structured query language query error validator;
performing semantic analysis on the input prompt using the semantic specification validator, and automatically generating the refined input prompt by refining the input prompt based on the semantic analysis; and
identifying if the input prompt relating to a request for a regulatory information comprises an error using the regulatory compliance validator based on a prior training provided to the conversational system (100), and automatically refining the input prompt to eliminate the error in the input prompt based on the prior training.
7. The method as claimed in claim 1, wherein receiving the input prompt from the selected system (102) comprises receiving a complaints file comprising a list of complaints related to a medical device, and an instruction provided by a user in a natural language to automatically identify and retrieve complaints that comprise incomplete information from the complaints file, wherein the incomplete information corresponds to one or more of a blank device identifier, a blank device manufacturer identifier, and a blank entry that fails to specify a type of the medical device.
8. The method as claimed in claim 7, wherein generating the raw output comprises converting the instruction provided by the user in the natural language to a structured query language (SQL) query.
9. The method as claimed in claim 8, wherein generating the error correction instruction comprises:
identifying if the SQL query generated by the conversational system (100) specifies a table name and a column name needed by the conversational system (100) for retrieving the complaints with incomplete information from the complaints file;
identifying that the SQL query comprises one or more errors when one or more of the table name and the column name are missing in the SQL query generated by the conversational system (100);
determining one or more of the table name and the column name that are missing in the SQL query from a copy of the complaints file stored in a complaints database (608);
generating the error correction instruction based on one or more of the table name and the column name determined from the copy of the complaints file;
automatically refining the SQL query generated by the conversational system (100) to generate a refined SQL query including one or more of the table name and the column name specified in the error correction instruction generated by the conversational system (100); and
automatically outputting the complaints that comprise incomplete information in response to the input prompt initially received by the conversational system (100) based on the refined SQL query including one or more of the table name and the column name specified in the error correction instruction.
10. A conversational system (100) that automatically modifies an input prompt, comprising:
a selected system (102) that is communicatively coupled to the conversational system (100) via a communications link (104), wherein the selected system (102) comprises an application (106) comprising one or more associated graphical user interfaces (108), wherein the one or more associated graphical user interfaces (108) provide a chatbox (109) for enabling a user to submit the input prompt to the conversational system (100);
a large language model that receives the input prompt from the selected system (102), and generates a raw output based on the received input prompt for further validation using the large language model;
a large language model-based validator system (110) that:
selects a validator from a plurality of associated validators based on one or more keywords in the input prompt and one or more keywords stored in a knowledge database (120) for validating accuracy of the raw output;
compares a set of input data related to an entity specified in the raw output with the same set of input data related to the entity retrieved from a master data source system using the selected validator;
identifies one or more errors in the input prompt based on the comparison, wherein the one or more identified errors correspond to one or more of a data mismatch between the set of input data specified in the raw output and the same set of input data retrieved from the master data source system, missing data, a contextual error in the input prompt, and a typo error in the input prompt;
generates an error correction instruction based on the one or more errors identified in the input prompt and the set of input data retrieved from the master data source system;
a prompt refinement system (116) that:
generates a refined input prompt by automatically correcting the one or more identified errors in the input prompt based on the error correction instruction, wherein the error correction instruction is generated based on the same set of input data retrieved from the master data source system; and
generates a final response to the input prompt based on the refined input prompt; and
outputs the final response in response to the input prompt initially received by the large language model.
11. The conversational system (100) as claimed in claim 10, wherein the knowledge database (120) comprises a set of keywords mapped to each of the plurality of associated validators, wherein the plurality of associated validators comprises one or more of:
a personal identifiable information validator that identifies if the final response generated by the conversational system (100) comprises personal identifiable information and excludes the personal identifiable information from the final response;
a domain-specific application context validator that identifies if the input prompt relates to a request for information related to a specific domain or other information that is unrelated to the specific domain by analyzing one or more keywords in the input prompt, and generates the final response to the input prompt only when the input prompt is identified to relate to the request for information related to the specific domain;
a structured query language query error validator that converts the input prompt received in a natural language from the selected system (102) into a structured query language query and validates if the structured query language query includes any syntax errors;
a python code debug validator that validates if the raw output corresponding to one or more python strings generated by the large language model comprises any syntax errors;
a semantic specification validator that performs semantic analysis on the input prompt and automatically generates the refined input prompt by refining the input prompt based on the semantic analysis;
a regulatory compliance validator that identifies if the input prompt relating to a request for a regulatory information comprises an error based on a prior training provided to the conversational system (100), and automatically refines the input prompt to eliminate the error in the input prompt based on the prior training; and
a business rules validator that identifies if the raw output generated by the large language model adheres to one or more predefined rules, and further identifies the input prompt to comprise an error when the raw output fails to adhere to one or more of the plurality of predefined rules.
12. The conversational system (100) as claimed in claim 11, wherein the selected system (102) corresponds to one of a hospital system (102), a medical device company system (602), a healthcare system, an automotive system, an industrial system, an aerospace system, a rail system, and a media and entertainment system, and wherein the master data source system corresponds to one of a hospital information system (112) that stores electronic health records of a plurality of patients, a complaints database (608) that stores complaints associated with a medical device in a complaints file, and a domain-specific database system that stores domain-specific information.