Abstract: Disclosed herein is a system (102) for handling a topic-drift during a conversation. The system monitors the conversation between a user and an agent in real-time. The system identifies the topic-drift during the conversation by providing conversation cycles pertaining to the conversation to a first pre-trained model to obtain one or more topic-wise probability distributions at different instants of time. The system compares one topic-wise probability distribution obtained at one instant of time with another topic-wise probability distribution obtained at a subsequent instant of time, and further calculates a divergence value between the one and the other topic-wise probability distributions based on the comparison. The topic-drift is notified to the first agent when the divergence value is greater than a first threshold value. The system provides the first agent with information relevant to the new topic, or re-routes the conversation to a second agent based on a request from the first agent.
FORM 2
THE PATENTS ACT, 1970
(39 OF 1970)
AND
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
“SYSTEM AND METHOD FOR HANDLING A TOPIC-DRIFT IN AN ONGOING
CONVERSATION”
Name and Address of the Applicant: Zensar Technologies Limited, Zensar Knowledge Park, Plot#4, MIDC, Kharadi, Off Nagar
Road, Pune, Maharashtra – 411014, India
Nationality: India
The following specification particularly describes the invention and the manner in which it is to
be performed.
TECHNICAL FIELD
The present invention relates to the field of conversation analysis technique, and more particularly to analyzing a conversation for handling a topic-drift during the conversation.
BACKGROUND OF INVENTION
Customer satisfaction plays a pivotal role for an organization in establishing relationships with its customers and thereby attaining profitability. Business Process Outsourcing (BPO) is a measure taken by many organizations towards establishing such relationships and attaining their profitability objectives. Every time a customer encounters some challenge with the operability of the products/services offered by the organization, he/she tends to call the customer center to seek a resolution from a customer agent to the encountered malfunctioning.
It has been noticed that sometimes the customer tends to drift from an ongoing topic and suddenly prompts the customer agent to respond to the drifted topic's query[ies]. In such an instance, there is a likelihood that the customer agent may not be aware of, or may not have expertise in, the drifted topic. Thus, in such scenarios, the customer agent may not be able to get the relevant information pertaining to the drifted topic, leading to customer dissatisfaction. This becomes the pivotal point where the relationship with the customers is actually established or hampered, and thereby profitability is impacted in the longer run. Another challenge is to analyse the real-time data, i.e., the ongoing conversation, and keep the agent prepared for handling any topic-drift. Since the conversation data is continuously streamed on the network, yet another challenge is to prevent the system's overhead by efficiently clearing the call traffic. Thus, on one hand the challenge is to efficiently cater to the customer's needs, and on the other hand to improve the efficiency of the agent's system handling the customer queries.
In view of the above, there exists a need for a system and method to overcome the challenges existing in the art and to automatically identify such topic-drifts and provide relevant information on the screen for consumption by the customer agent, in real-time, so as to respond to the drifted topic's query[ies] on the fly.
The information disclosed in this background of the disclosure section is only for enhancement of understanding of the general background of the invention and should not be taken as an acknowledgement or any form of suggestion that this information forms the prior art already known to a person skilled in the art.
SUMMARY OF THE INVENTION
The present disclosure overcomes one or more shortcomings of the prior art and provides additional advantages discussed throughout the present disclosure. Additional features and advantages are realized through the techniques of the present disclosure. Other embodiments and aspects of the disclosure are described in detail herein and are considered a part of the claimed disclosure.
In one embodiment of the present disclosure, a method for handling a topic-drift during a conversation is disclosed. The method comprises monitoring the conversation between a user and a first agent in real-time. Further, the method comprises identifying the topic-drift during the conversation by providing one or more conversation cycles pertaining to the conversation to a first pre-trained model to obtain one or more topic-wise probability distributions at different instants of time, wherein each conversation cycle indicates a pair of sentences exchanged between the user and the first agent, comparing one topic-wise probability distribution obtained at one instant of time (t1) with another topic-wise probability distribution obtained at a subsequent instant of time (t2), calculating a divergence value between the one topic-wise probability distribution and the other topic-wise probability distribution based on the comparison, and notifying the first agent of a new topic when the divergence value is greater than a first threshold value. The method further comprises performing, based on the notifying, at least one of: providing the first agent with information relevant to the new topic, and re-routing the conversation to a second agent automatically or upon a request from the first agent.
In one embodiment of the present disclosure, a system for handling a topic-drift during a conversation is disclosed. The system comprises a monitoring unit configured to monitor the conversation between a user and a first agent in real-time. The system further comprises an identification unit which, to identify the topic-drift during the conversation, is configured to,
provide one or more conversation cycles pertaining to the conversation to a first pre-trained model to obtain one or more topic-wise probability distributions at different instants of time, wherein each conversation cycle indicates a pair of sentences exchanged between the user and the first agent, compare one topic-wise probability distribution obtained at one instant of time (t1) with another topic-wise probability distribution obtained at a subsequent instant of time (t2), calculate a divergence value between the one topic-wise probability distribution and the other topic-wise probability distribution based on the comparison, and notify the first agent of a new topic when the divergence value is greater than a first threshold value. The system further comprises an execution unit configured to perform, based on the notifying, at least one of: providing the first agent with information relevant to the new topic, and re-routing the conversation to a second agent automatically or upon a request from the first agent.
The foregoing summary is illustrative only and is not intended to be in any way limiting. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features will become apparent by reference to the drawings and the following detailed description.
BRIEF DESCRIPTION OF DRAWINGS
The embodiments of the disclosure itself, as well as a preferred mode of use, further objectives and advantages thereof, will best be understood by reference to the following detailed description of an illustrative embodiment when read in conjunction with the accompanying drawings. One or more embodiments are now described, by way of example only, with reference to the accompanying drawings in which:
Figure 1 shows an exemplary environment 100 of a system for handling a topic-drift during a conversation, in accordance with an embodiment of the present disclosure;
Figure 2 shows a block diagram 200 illustrating a system for handling a topic-drift during a conversation, in accordance with an embodiment of the present disclosure;
Figure 3 shows a method 300 for handling a topic-drift during a conversation, in accordance with an embodiment of the present disclosure; and
Figure 4 shows a block diagram of an exemplary computer system 400 for implementing the embodiments consistent with the present disclosure.
The figures depict embodiments of the disclosure for purposes of illustration only. One skilled in the art will readily recognize from the following description that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles of the disclosure described herein.
DETAILED DESCRIPTION OF THE INVENTION
The foregoing has broadly outlined the features and technical advantages of the present disclosure in order that the detailed description of the disclosure that follows may be better understood. It should be appreciated by those skilled in the art that the conception and specific embodiment disclosed may be readily utilized as a basis for modifying or designing other structures for carrying out the same purposes of the present disclosure.
The novel features which are believed to be characteristic of the disclosure, both as to its organization and method of operation, together with further objects and advantages will be better understood from the following description when considered in connection with the accompanying figures. It is to be expressly understood, however, that each of the figures is provided for the purpose of illustration and description only and is not intended as a definition of the limits of the present disclosure.
Disclosed herein is a system and method for handling a topic-drift during a conversation. With the advent of technology, user-agent conversations have seen an exponential growth in the past decades. Having a customer support facility is now a common practice in every industry. Customer handling also plays a vital role in the growth of any company. To provide an effective assistance mechanism, it is important to correctly and timely understand the customers' queries and intentions and provide them with relevant information. Any delay in either understanding the customer's query or in providing relevant information may result in the customer losing patience during the call.
The present disclosure addresses this issue by providing a mechanism to not only efficiently understand the customer's query but also provide the customer with the relevant information. Usually, the customer calls a customer care agent regarding some topic. During the initial conversation, the agents are generally able to understand the query. However, as the conversation advances, there might be a possibility of a topic-drift. That is, the customer and the agent may start talking about other topics which may or may not be related to the initial topic, and hence the topic-drift happens. The present disclosure thus provides a mechanism to handle such topic-drifts during the conversation and timely help the agent with the relevant information. The present disclosure also discloses a mechanism of re-routing the call to another suitable agent capable of handling the query related to the detected topic-drift. The detailed working of the system is explained in the upcoming paragraphs.
Figure 1 shows an exemplary environment 100 of a system for handling a topic-drift during a conversation, in accordance with an embodiment of the present disclosure. It must be understood by a person skilled in the art that the system may also be implemented in various environments other than as shown in Fig. 1.
The exemplary environment 100 is explained in conjunction with Figure 2 that shows a block diagram 200 of a system 102 for handling a topic-drift during a conversation, in accordance with an embodiment of the present disclosure. Although the present disclosure is explained considering that the system 102 is implemented on a server, it may be understood that the system 102 may be implemented in a variety of computing systems, such as a laptop computer, a desktop computer, a notebook, a workstation, a mainframe computer, a server, a network server, or a cloud-based computing environment.
In one implementation, the system 102 may comprise an I/O interface 202, a processor 204, a memory 206 and the units 208. The memory 206 may be communicatively coupled to the processor 204 and the units 208. The memory 206 further stores a first pre-trained model 210, a second pre-trained model 212, topic-wise probability distributions 214, and agent distributions 216. The memory 206 may also store data blocks 218. However, according to an embodiment, the data blocks 218 may be external to the system 102. The significance and use of each of the stored quantities is explained in the upcoming paragraphs of the specification. The processor 204 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the processor 204 is configured to fetch and execute computer-readable instructions stored in the memory 206. The I/O interface 202 may include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, and the like. The I/O interface 202 may enable the system 102 to communicate with other computing devices, such as web servers and external data servers (not shown). The I/O interface 202 may facilitate multiple communications within a wide variety of networks and protocol types, including wired networks, for example, LAN, cable, etc., and wireless networks, such as WLAN, cellular, or satellite. The I/O interface 202 may include one or more ports for connecting a number of devices to one another or to another server.
In one implementation, the units 208 may comprise a monitoring unit 220, an identification unit 222, an execution unit 224, and a conversion unit 226. According to embodiments of the present disclosure, these units 220-226 may comprise hardware components such as a processor, a microprocessor, microcontrollers, or an application-specific integrated circuit for performing various operations of the system 102. It must be understood by a person skilled in the art that the processor 204 may perform all the functions of the units 220-226 according to various embodiments of the present disclosure.
Now referring back to Figure 1, the environment 100 shows a system 102 handling a topic-drift during the conversation between the user and the agent. When a call is initiated from the user's side, the system 102 allocates a first agent for attending the call. At the start, the agent allocation happens easily as the user himself/herself chooses a topic, which helps the system 102 to select a suitable agent. The example shown in Fig. 1 is related to an environment in which the user calls a bank's customer care service for his/her query. As the call progresses, the
monitoring unit 220 starts monitoring the conversation between the user and the first agent in real-time. The intent of monitoring the conversation is to keep a track on possible topics being discussed between the user and the first agent.
The conversation is basically a series of conversation cycles such that each conversation cycle indicates a pair of sentences exchanged between the user and the first agent, mostly in a speech format. Hence, for analyzing the conversation, the conversion unit 226, which may be a speech-to-text converter, converts the one or more conversation cycles into a textual format. However, the conversation captured may also be in a text format or a combination of audio and text formats.
Post conversion, the identification unit 222 performs the following operations for identifying the topic-drift. Firstly, the identification unit 222 provides the conversation cycles pertaining to the conversation to a first pre-trained model 210 to obtain one or more topic-wise probability distributions at different instants of time. The training of the first pre-trained model 210 is explained herein:
Training of first pre-trained model 210
During the training phase, the identification unit 222 obtains, from the one or more data blocks 218, textual documents comprising information used by the one or more "past agents" to handle "past user queries". For example, the textual documents may comprise documents containing information about topics such as In-out case, Loans, Credit Cards, and Trading. Further, the textual documents may also include information pertaining to frequently asked questions (FAQs), standard operating procedures (SOPs) or any document which could help in training the model. The textual documents are processed to identify a plurality of topics within each textual document. Further, a plurality of probability distributions is generated corresponding to the plurality of topics within each textual document. This way, the first pre-trained model 210 is trained with the probability distributions, which are used later while analyzing the conversation in real-time.
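The training and inference described above can be sketched as follows. This is a minimal illustration only: the specification does not name a particular topic-modeling technique, so Latent Dirichlet Allocation (via scikit-learn) is assumed here, and the example documents and conversation cycle are invented for demonstration.

```python
# Hedged sketch: training a topic model on agent help documents and
# obtaining a topic-wise probability distribution for a conversation cycle.
# The choice of LDA and the sample documents are assumptions, not the
# specification's mandated method.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Illustrative "textual documents" (FAQs/SOPs) for three banking topics.
documents = [
    "credit card statement billing limit credit card annual fee",
    "loan interest emi loan tenure prepayment loan",
    "trading demat account stock trading brokerage trading",
]

vectorizer = CountVectorizer()
doc_term = vectorizer.fit_transform(documents)

# Train an LDA model with one component per illustrative topic.
lda = LatentDirichletAllocation(n_components=3, random_state=0)
lda.fit(doc_term)

# At inference time, a conversation cycle maps to a topic-wise
# probability distribution whose entries sum to 1.
cycle = ["my credit card limit was reduced"]
distribution = lda.transform(vectorizer.transform(cycle))[0]
print(distribution.sum())  # ~1.0
```

In a deployment, one such distribution would be produced per conversation cycle, giving the sequence of distributions compared at successive instants of time.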
In Fig. 1, it can be observed that the topic-wise probability distributions (P) and (Q) at time "t" and "t+1" respectively are obtained for the topics "Credit Card", "Loans" and
"Trading". Though this specification, for simplicity and consistency, will consider only the aforecited topics which are related to banks, those of ordinary skill in the art will appreciate that the disclosed subject matter is also applicable to other domains and their respective topics.
The identification unit 222 compares the one topic-wise probability distribution (P) obtained at one instant of time "t" with another topic-wise probability distribution (Q) obtained at a subsequent instant of time "t+1". Based on the comparison, the identification unit 222 calculates a divergence value between the one and the other topic-wise probability distributions using the Kullback-Leibler divergence (DKL) technique as shown below:

DKL(P || Q) = Σx P(x) · log(P(x) / Q(x))
In the above technique, "P" is the distribution at time "t", and "Q" is the distribution at time "t+1". If the divergence value is greater than a first threshold value, then such divergence indicates a new topic, i.e., a topic-drift. According to the example shown in Fig. 1, the divergence value between the distribution P(0.2, 0.5, 0.3) and the distribution Q(0.1, 0.2, 0.7) is 0.148783. Considering the threshold value as 0.05, the divergence value is greater than the threshold value, and therefore a topic-drift is detected. The new topic detected is notified to the first agent who is currently handling the call with the user. According to an embodiment, the topic-drift may be detected among the three topics shown in Fig. 1 or may even be a new topic not shown in Fig. 1.
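The worked example above can be reproduced with a short computation. Note that obtaining the stated value 0.148783 from the Fig. 1 distributions requires base-10 logarithms, which is therefore assumed here:

```python
import math

def kl_divergence(p, q):
    """Kullback-Leibler divergence D_KL(P || Q) using base-10 logarithms,
    which reproduces the worked example in the specification."""
    return sum(pi * math.log10(pi / qi) for pi, qi in zip(p, q) if pi > 0)

P = [0.2, 0.5, 0.3]  # topic-wise distribution at time t
Q = [0.1, 0.2, 0.7]  # topic-wise distribution at time t+1

value = kl_divergence(P, Q)
print(round(value, 6))            # 0.148783
FIRST_THRESHOLD = 0.05            # illustrative threshold from the example
print(value > FIRST_THRESHOLD)    # True -> the first agent is notified
```

Since the divergence exceeds the first threshold value, the topic-drift notification would be raised for this conversation cycle.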
Post notifying, the execution unit 224 checks whether the first agent is capable of handling the new topic. If the first agent is not capable, the execution unit 224 identifies a second agent capable of handling the new topic and re-routes the call to him/her. The identification of the second agent is explained in the subsequent paragraphs of the specification. However, if the first agent is capable, the execution unit 224 determines an intent of the user pertaining to the new topic by employing a second pre-trained model 212, for example, an intent recognition model.
Training of second pre-trained model 212
The "textual documents" referred to in the above paragraph under the heading "Training of first
pre-trained model 210" are also used for training the second pre-trained model 212. According to an embodiment, the execution unit 224 employs a natural language processing (NLP) technique on the textual documents to identify a plurality of topic-specific intents. This way, the second pre-trained model 212 is used for determining the current intent of the user pertaining to the new topic in real-time.
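One way such an intent recognition model could be realized is sketched below. The specification does not prescribe a specific NLP technique, so a simple TF-IDF plus logistic-regression classifier is assumed, and the training phrases and intent labels are invented for illustration:

```python
# Hedged sketch of a "second pre-trained model" for intent recognition.
# Classifier choice, phrases, and intent labels are all assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

training_phrases = [
    "increase my credit card limit",
    "block my lost credit card",
    "what is the interest rate on a home loan",
    "how do I prepay my loan",
]
intent_labels = [
    "credit_card.limit_increase",
    "credit_card.block",
    "loan.interest_query",
    "loan.prepayment",
]

intent_model = make_pipeline(TfidfVectorizer(), LogisticRegression())
intent_model.fit(training_phrases, intent_labels)

# Real-time determination of the user's current intent for the new topic.
predicted_intent = intent_model.predict(["please block my credit card"])[0]
print(predicted_intent)
```

At run time, the latest user utterance from the conversation cycle would be passed to the trained pipeline to obtain a topic-specific intent label.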
Further, the new topic and the intent of the user are correlated by the execution unit 224 to retrieve information relevant to the new topic from the one or more data blocks 218. According to an embodiment, the execution unit 224 searches the data blocks 218 based on the new topic and the intent to retrieve a set of relevant information using a search technique such as Elasticsearch. Further, the search technique automatically ranks the set of relevant information based on a combination of scores, such as Lucene scores. The relevant information having the highest rank among the set of relevant information is presented to the first or the second agent.
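The retrieval step described above could be expressed as an Elasticsearch query body in the standard query DSL. The field names ("topic", "content"), the size, and the index layout below are assumptions for illustration; an actual deployment would execute this body against an Elasticsearch index, and the hits would carry Lucene relevance scores used for the ranking:

```python
# Hedged sketch of the correlated topic + intent search query.
# Field names and sizes are assumptions, not the specification's schema.
new_topic = "Loans"
user_intent = "loan.prepayment"

query_body = {
    "query": {
        "bool": {
            "must": [
                # Restrict to documents tagged with the detected new topic.
                {"match": {"topic": new_topic}},
                # Match document content against the recognized intent terms.
                {"match": {"content": user_intent.replace(".", " ").replace("_", " ")}},
            ]
        }
    },
    "size": 5,  # retrieve the top-ranked relevant documents
}
print(query_body["query"]["bool"]["must"][0])
```

The top-ranked hit returned for this query would correspond to the relevant information presented to the first or the second agent.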
Now referring to the scenario in which the first agent is not capable of handling the new topic, the execution unit 224 identifies the second agent, among the available agents, capable of handling the new topic and re-routes the call to him/her. The execution unit 224 re-routes the call based on the following operations.
In the first step, the execution unit 224 identifies one or more future conversation cycles by providing the one or more conversation cycles pertaining to the current conversation to the first pre-trained model 210. With the help of the first pre-trained model 210, in the next step, the execution unit 224 generates a predicted topic-wise probability distribution based on the one or more future conversation cycles. The predicted topic-wise probability distribution is compared vis-a-vis one or more agent distributions corresponding to one or more agents to calculate one or more divergence scores such that each divergence score corresponds to each agent of the one or more agents. According to an embodiment, the one or more agent distributions are calculated based on the following technique.
Agent selection
Customer agents have different areas of expertise in topics. For example, one agent
is good at handling credit card related queries and another agent is good at loan related queries. Parameters such as resolution time (in seconds), number of queries resolved, customer feedback (converted to a range from 1 to 5), training score, etc. are considered for quantifying an agent's expertise in a particular topic. Each parameter is assigned a weight to generate a score for an agent for each topic. The agent score may be generated using a function such as Z = w1·x1 + w2·x2 + w3·x3, where x1, x2, x3 are the parameters mentioned above and w1, w2, w3 are the weights.
Further, to generate the probability distribution among topics for a given agent, i.e., the "agent topic distribution", a function such as the SoftMax function may be applied as shown below:

σ(Z)i = e^(Zi) / Σj e^(Zj)
In the above function, Zi is the score for the ith topic. Referring back to the 3 topics shown in Fig. 1, an agent matrix is calculated for, let's say, 3 agents as shown in Table 1 below.
Agents Credit Card Loan Trading Availability
Agent 1 0.4 0.6 0 1
Agent 2 0.1 0.4 0.5 1
Agent 3 0.1 0.45 0.45 0
Table 1 - Agent topic distributions
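The agent scoring and SoftMax steps above can be sketched as follows. The parameter values and weights are invented for illustration; the specification only states that weighted parameters (resolution time, number of queries resolved, customer feedback, training score) yield a per-topic score Zi, which the SoftMax function normalizes into an agent topic distribution:

```python
import math

def agent_score(params, weights):
    """Weighted sum w1*x1 + w2*x2 + ... quantifying an agent's
    expertise in one topic (parameter values are illustrative)."""
    return sum(w * x for w, x in zip(weights, params))

def softmax(scores):
    """SoftMax over per-topic scores Z_i, yielding a probability
    distribution across topics for a given agent."""
    exps = [math.exp(z) for z in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Illustrative per-topic scores for one agent: the three parameters are
# (resolution time, queries resolved, feedback) with assumed weights.
weights = [0.2, 0.05, 0.5]
scores = [agent_score([3.0, 40, 4.5], weights),   # Credit Card
          agent_score([2.0, 55, 4.8], weights),   # Loan
          agent_score([6.0, 10, 3.0], weights)]   # Trading

distribution = softmax(scores)
print(sum(distribution))  # 1.0 (up to floating point)
```

Applying this procedure per agent yields rows of the agent matrix of Table 1, up to the illustrative parameter choices.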
Now, the execution unit 224 compares the predicted topic-wise probability distribution (shown in Fig. 1) vis-a-vis the one or more agent distributions (as shown in Table 1) corresponding to the one or more agents to calculate one or more divergence scores such that each divergence score corresponds to each agent of the one or more agents. According to an embodiment, the divergence scores calculated for the one or more agents are shown in Table 2 below.
Agents Divergence Score
Agent 1 0.3777
Agent 2 0.2449
Agent 3 0.1870
Table 2 - Agent's divergence score
Based on the above scores, the execution unit 224 selects Agent 3, which has the least divergence score, as the second agent for re-routing the call. However, if Agent 3 is not available, the call is re-routed to Agent 2, i.e., the next best available agent. The relevant information extracted in relation to the new topic is also sent to the selected agent for handling the topic-drift.
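The selection step above can be sketched as follows. The predicted topic-wise distribution used here is assumed (the specification does not state it), and a small epsilon guards against agent distributions containing a zero entry, so the resulting scores are illustrative rather than a reproduction of Table 2; the availability flags follow Table 1:

```python
import math

# Hedged sketch of re-routing: compare an assumed predicted topic-wise
# distribution against each agent's topic distribution (Table 1) and pick
# the available agent with the least divergence.
def kl_divergence(p, q, eps=1e-9):
    """KL divergence with an epsilon floor to tolerate zero entries."""
    return sum(pi * math.log10(pi / max(qi, eps)) for pi, qi in zip(p, q) if pi > 0)

agents = {  # (Credit Card, Loan, Trading) distribution, availability
    "Agent 1": ([0.4, 0.6, 0.0], True),
    "Agent 2": ([0.1, 0.4, 0.5], True),
    "Agent 3": ([0.1, 0.45, 0.45], False),  # unavailable per Table 1
}
predicted = [0.1, 0.2, 0.7]  # assumed predicted topic-wise distribution

best = min(
    (name for name, (dist, available) in agents.items() if available),
    key=lambda name: kl_divergence(predicted, agents[name][0]),
)
print(best)  # Agent 2, the next best available agent
```

With Agent 3 unavailable, the call is routed to the available agent whose topic distribution diverges least from the predicted distribution.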
Figure 3 depicts a method 300 for handling a topic-drift during a conversation, in accordance with an embodiment of the present disclosure.
As illustrated in Figure 3, the method 300 includes one or more blocks illustrating a method for handling a topic-drift during a conversation. The method 300 may be described in the general context of computer executable instructions. Generally, computer executable instructions may include routines, programs, objects, components, data structures, procedures, modules, and functions, which perform specific functions or implement specific abstract data types.
The order in which the method 300 is described is not intended to be construed as a limitation, and any number of the described method blocks may be combined in any order to implement the method. Additionally, individual blocks may be deleted from the methods without departing from the spirit and scope of the subject matter described.
At block 302, the method 300 may include monitoring the conversation between a user and a first agent in real-time.
At block 304, the method 300 may include identifying the topic-drift during the conversation based on blocks 306-312.
At block 306, the method 300 may include providing one or more conversation cycles pertaining to the conversation to a first pre-trained model 210 to obtain one or more topic-wise probability distributions (214) at different instants of time, wherein each conversation cycle indicates a pair of sentences exchanged between the user and the first agent.
At block 308, the method 300 may include comparing one topic-wise probability distribution obtained at one instant of time with another topic-wise probability distribution obtained at a subsequent instant of time.
At block 310, the method 300 may include calculating a divergence value between the one and the other topic-wise probability distributions based on the comparison.
At block 312, the method 300 may include notifying the first agent of a new topic when the divergence value is greater than a first threshold value.
At block 314, the method 300 may include providing the first agent with information relevant to the new topic based on the notifying of block 312.
At block 316, the method 300 may include re-routing the conversation to a second agent based on a request from the first agent based on the notifying of block 312.
Computer System
Figure 4 illustrates a block diagram of an exemplary computer system 400 for implementing embodiments consistent with the present disclosure. It may be understood by a person skilled in the art that the computer system 400 and its components are similar to the system 102 referred to in Fig. 2. In an embodiment, the computer system 400 may be a peripheral device, which is used for handling a topic-drift during a conversation. The computer system 400 may include a central processing unit ("CPU" or "processor") 402. The processor 402 may comprise at least one data processor for executing program
components for executing user or system-generated processes. The processor 402 may include specialized processing units such as integrated system (bus) controllers, memory management control units, floating point units, graphics processing units, digital signal processing units, etc.
The processor 402 may be disposed in communication with one or more input/output (I/O) devices via the I/O interface 401. The I/O interface 401 may employ communication protocols/methods such as, without limitation, audio, analog, digital, stereo, IEEE-1394, serial bus, Universal Serial Bus (USB), infrared, PS/2, BNC, coaxial, component, composite, Digital Visual Interface (DVI), High-Definition Multimedia Interface (HDMI), Radio Frequency (RF) antennas, S-Video, Video Graphics Array (VGA), IEEE 802.11a/b/g/n/x, Bluetooth, cellular (e.g., Code-Division Multiple Access (CDMA), High-Speed Packet Access (HSPA+), Global System for Mobile Communications (GSM), Long-Term Evolution (LTE) or the like), etc. Using the I/O interface 401, the computer system 400 may communicate with one or more I/O devices.
In some embodiments, the processor 402 may be disposed in communication with a communication network 414 via a network interface 403. The network interface 403 may communicate with the communication network 414. The network interface 403 may employ connection protocols including, without limitation, direct connect, Ethernet (e.g., twisted pair 10/100/1000 Base T), Transmission Control Protocol/Internet Protocol (TCP/IP), token ring, IEEE 802.11a/b/g/n/x, etc.
The communication network 414 may be implemented as one of the several types of networks, such as intranet or Local Area Network (LAN) and such within the organization. The communication network 414 may either be a dedicated network or a shared network, which represents an association of several types of networks that use a variety of protocols, for example, Hypertext Transfer Protocol (HTTP), Transmission Control Protocol/Internet Protocol (TCP/IP), Wireless Application Protocol (WAP), etc., to communicate with each other. Further, the communication network 414 may include a variety of network devices, including routers, bridges, servers, computing devices, storage devices, etc.
In some embodiments, the processor 402 may be disposed in communication with a memory 405 (e.g., RAM 412, ROM 413, etc. as shown in FIG. 4) via a storage interface 404. The storage interface 404 may connect to the memory 405 including, without limitation, memory drives, removable disc drives, etc., employing connection protocols such as Serial Advanced Technology Attachment (SATA), Integrated Drive Electronics (IDE), IEEE-1394, Universal Serial Bus (USB), fiber channel, Small Computer Systems Interface (SCSI), etc. The memory drives may further include a drum, magnetic disc drive, magneto-optical drive, optical drive, Redundant Array of Independent Discs (RAID), solid-state memory devices, solid-state drives, etc.
The memory 405 may store a collection of program or database components, including, without limitation, user/application data, an operating system, a web browser, a mail client, a mail server, a web server, and the like. In some embodiments, the computer system 400 may store user/application data, such as the data, variables, records, etc. as described in this invention. Such databases may be implemented as fault-tolerant, relational, scalable, secure databases such as Oracle® or Sybase®.
The operating system may facilitate resource management and operation of the computer system. Examples of operating systems include, without limitation, APPLE MACINTOSH® OS X, UNIX®, UNIX-like system distributions (e.g., BERKELEY SOFTWARE DISTRIBUTION™ (BSD), FREEBSD™, NETBSD™, OPENBSD™, etc.), LINUX DISTRIBUTIONS™ (e.g., RED HAT™, UBUNTU™, KUBUNTU™, etc.), IBM™ OS/2, MICROSOFT™ WINDOWS™ (XP™, VISTA™/7/8, 10 etc.), APPLE® IOS™, GOOGLE® ANDROID™, BLACKBERRY® OS, or the like. A user interface may facilitate display, execution, interaction, manipulation, or operation of program components through textual or graphical facilities. For example, user interfaces may provide computer interaction interface elements on a display system operatively connected to the computer system, such as cursors, icons, check boxes, menus, windows, widgets, etc. Graphical User Interfaces (GUIs) may be employed, including, without limitation, APPLE MACINTOSH® operating systems, IBM™ OS/2, MICROSOFT™ WINDOWS™ (XP™, VISTA™/7/8, 10 etc.), Unix® X-Windows, web interface libraries (e.g., AJAX™, DHTML™, ADOBE® FLASH™, JAVASCRIPT™, JAVA™, etc.), or the like.
Furthermore, one or more computer-readable storage media may be utilized in implementing embodiments consistent with the present invention. A computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored. Thus, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein. The term "computer-readable medium" should be understood to include tangible items and exclude carrier waves and transient signals, i.e., be non-transitory. Examples include Random Access Memory (RAM), Read-Only Memory (ROM), volatile memory, non-volatile memory, hard drives, Compact Disc Read-Only Memories (CD-ROMs), Digital Video Discs (DVDs), flash drives, disks, and any other known physical storage media.
A description of an embodiment with several components in communication with each other does not imply that all such components are required. On the contrary, a variety of optional components are described to illustrate the wide variety of possible embodiments of the invention.
When a single device or article is described herein, it will be clear that more than one device/article (whether or not they cooperate) may be used in place of a single device/article. Similarly, where more than one device or article is described herein (whether or not they cooperate), it will be clear that a single device/article may be used in place of the more than one device or article, or a different number of devices/articles may be used instead of the shown number of devices or programs. The functionality and/or the features of a device may be alternatively embodied by one or more other devices which are not explicitly described as having such functionality/features. Thus, other embodiments of the invention need not include the device itself.
Finally, the language used in the specification has been principally selected for readability and instructional purposes, and it may not have been selected to delineate or
circumscribe the inventive subject matter. It is therefore intended that the scope of the invention be limited not by this detailed description, but rather by any claims that issue on an application based hereon. Accordingly, the embodiments of the present invention are intended to be illustrative, but not limiting, of the scope of the invention, which is set forth in the following claims.
While various aspects and embodiments have been disclosed herein, other aspects and embodiments will be apparent to those skilled in the art. The various aspects and embodiments disclosed herein are for purposes of illustration and are not intended to be limiting, with the true scope and spirit being indicated by the following claims.
Advantages of the embodiments of the present disclosure are illustrated herein:
Efficiently handling the topic drift during the call between the agent and the customer and providing relevant information to the agent.
Enhancing the customer's experience and also minimizing the system's overhead by efficiently addressing the customer's query in real-time.
Reference Numerals:
| Reference Numeral | Description |
|---|---|
| 100 | Exemplary environment of a system for handling a topic-drift |
| 102 | System |
| 200 | Block diagram of the system |
| 202 | I/O Interface |
| 204 | Processor |
| 206 | Memory |
| 208 | Units |
| 210 | First pre-trained model |
| 212 | Second pre-trained model |
| 214 | Topic-wise probability distributions |
| 216 | Agent distributions |
| 218 | Data blocks |
| 220 | Monitoring unit |
| 222 | Identification unit |
| 224 | Execution unit |
| 226 | Conversion unit |
| 302–316 | Method steps for handling a topic-drift |
We Claim:
1. A method for handling a topic-drift during a conversation, comprising:
monitoring the conversation between a user and a first agent in real-time; identifying the topic-drift during the conversation by:
providing one or more conversation cycles pertaining to the conversation to a first pre-trained model (210) to obtain one or more topic-wise probability distributions (214) at different instants of time, wherein each conversation cycle indicates a pair of sentences exchanged between the user and the first agent;
comparing one topic-wise probability distribution obtained at one instant of time (t1) with another topic-wise probability distribution obtained at a subsequent instant of time (t2);
calculating a divergence value between the one topic-wise probability distribution and the other topic-wise probability distribution based on the comparison; and
notifying the first agent of a new topic when the divergence value is greater than a first threshold value; based on the notifying, performing at least one of:
providing the first agent with information relevant to the new topic, and
re-routing the conversation to a second agent automatically or upon a request from the first agent.
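The drift test recited in claim 1 can be illustrated with a small sketch. The claims do not name a particular divergence measure, so Kullback-Leibler divergence is assumed here as one possibility; the topic distributions and the threshold value are purely illustrative.

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """Kullback-Leibler divergence D(p || q) between two discrete
    topic-wise probability distributions (equal-length sequences).
    A small epsilon guards against log-of-zero."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def drift_detected(dist_t1, dist_t2, threshold=0.5):
    """Compare the distribution at time t1 with the one at time t2 and
    return (divergence, drifted); drift is flagged when the divergence
    exceeds the first threshold value."""
    d = kl_divergence(dist_t1, dist_t2)
    return d, d > threshold

# Conversation stays on topic: divergence remains small, no notification.
same = drift_detected([0.7, 0.2, 0.1], [0.65, 0.25, 0.1])

# Dominant topic shifts: divergence exceeds the threshold, agent is notified.
shifted = drift_detected([0.7, 0.2, 0.1], [0.1, 0.15, 0.75])
```

A symmetric measure such as Jensen-Shannon divergence could be substituted without changing the threshold-based logic.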
2. The method as claimed in claim 1, wherein providing the first agent with information relevant to the new topic further comprises:
determining an intent of the user pertaining to the new topic by employing a second pre-trained model (212) on the conversation; and
correlating the new topic and the intent of the user to retrieve information relevant to the new topic from one or more data blocks (218).
3. The method as claimed in claim 1, wherein re-routing the conversation to the second agent
based on the request from the first agent further comprises:
identifying one or more future conversation cycles by providing the one or more conversation cycles to the first pre-trained model (210);
generating a predicted topic-wise probability distribution based on the one or more future conversation cycles;
comparing the predicted topic-wise probability distribution vis-à-vis one or more agent distributions (216) corresponding to one or more agents to calculate one or more divergence scores such that each divergence score corresponds to each agent of the one or more agents; and
selecting the second agent having a least divergence score.
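The re-routing steps of claim 3 reduce to scoring the predicted topic-wise distribution against each candidate agent's distribution and selecting the agent with the least divergence score. The sketch below assumes Jensen-Shannon divergence (the claims leave the measure unspecified); the agent names and distributions are hypothetical.

```python
import math

def js_divergence(p, q):
    """Jensen-Shannon divergence: symmetric and bounded, convenient for
    ranking candidate agents against a predicted topic distribution."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    def kl(a, b):
        return sum(ai * math.log(ai / bi) for ai, bi in zip(a, b) if ai > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def select_second_agent(predicted_dist, agent_distributions):
    """Score each agent distribution against the predicted distribution
    and return the agent with the least divergence score."""
    scores = {agent: js_divergence(predicted_dist, dist)
              for agent, dist in agent_distributions.items()}
    return min(scores, key=scores.get), scores

predicted = [0.1, 0.1, 0.8]          # conversation predicted to move to topic 3
agents = {
    "agent_billing":  [0.7, 0.2, 0.1],
    "agent_returns":  [0.2, 0.6, 0.2],
    "agent_warranty": [0.1, 0.2, 0.7],
}
best, scores = select_second_agent(predicted, agents)
# the agent whose distribution lies closest to the prediction is chosen
```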
4. The method as claimed in claim 1, further comprising converting the one or more conversation cycles into a textual format when the one or more conversation cycles are in a speech format.
5. The method as claimed in claim 1, wherein to obtain one or more topic-wise probability distributions (214) at different instants of time (t1 and t2), the first pre-trained model is trained by:
obtaining, from the one or more data blocks (218), textual documents comprising information used by one or more past agents to handle past user queries;
processing the textual documents to identify a plurality of topics within each textual document; and
generating a plurality of probability distributions corresponding to the plurality of topics within each textual document.
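The training steps of claim 5 produce, for each textual document, a probability distribution over topics. As a minimal sketch, the snippet below scores documents against fixed topic vocabularies; a production system would instead learn the topics with a topic model such as LDA, and the keyword lists here are invented for illustration only.

```python
from collections import Counter

# Hypothetical topic vocabularies; in practice these would be learned
# from the textual documents rather than hard-coded.
TOPIC_KEYWORDS = {
    "billing":  {"invoice", "payment", "charge", "refund"},
    "shipping": {"delivery", "courier", "tracking", "package"},
    "warranty": {"repair", "replacement", "defect", "warranty"},
}

def topic_distribution(document):
    """Return a topic-wise probability distribution for one textual
    document, based on keyword hits per topic (a simplification of
    the claimed topic-identification step)."""
    words = Counter(document.lower().split())
    hits = {topic: sum(words[w] for w in vocab)
            for topic, vocab in TOPIC_KEYWORDS.items()}
    total = sum(hits.values()) or 1   # avoid division by zero
    return {topic: count / total for topic, count in hits.items()}

doc = "customer asked about invoice payment and a refund for the charge"
dist = topic_distribution(doc)
```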
6. The method as claimed in claim 1, further comprising training the second pre-trained model (212) by employing a natural language processing (NLP) technique on the textual documents to identify a plurality of topic-specific intents.
7. A system (102) for handling a topic-drift during a conversation, comprising:
a monitoring unit (220) configured to monitor the conversation between a user and a first agent in real-time;
an identification unit (222) configured to identify the topic-drift during the conversation, the identification unit (222) being configured to:
provide one or more conversation cycles pertaining to the conversation to a first pre-trained model (210) to obtain one or more topic-wise probability distributions (214) at different instants of time, wherein each conversation cycle indicates a pair of sentences exchanged between the user and the first agent;
compare one topic-wise probability distribution obtained at one instant of time (t1) with another topic-wise probability distribution obtained at a subsequent instant of time (t2);
calculate a divergence value between the one topic-wise probability distribution and the other topic-wise probability distribution based on the comparison; and
notify the first agent of a new topic when the divergence value is greater than a first threshold value;
based on the notifying, an execution unit (224) is configured to perform at least one of:
provide the first agent with information relevant to the new topic, and
re-route the conversation to a second agent automatically or upon a request from the first agent.
8. The system (102) as claimed in claim 7, wherein to provide the first agent with information
relevant to the new topic, the execution unit (224) is further configured to:
determine an intent of the user pertaining to the new topic by employing a second pre-trained model (212); and
correlate the new topic and the intent of the user to retrieve information relevant to the new topic from one or more data blocks (218).
9. The system (102) as claimed in claim 7, wherein to re-route the conversation to the second
agent based on the request from the first agent, the execution unit (224) is further
configured to:
identify one or more future conversation cycles by providing the one or more conversation cycles to the first pre-trained model (210);
generate a predicted topic-wise probability distribution based on the one or more future conversation cycles;
compare the predicted topic-wise probability distribution vis-à-vis one or more agent distributions (216) corresponding to one or more agents to calculate one or more divergence scores such that each divergence score corresponds to each agent of the one or more agents; and
select the second agent having a least divergence score.
10. The system (102) as claimed in claim 7, further comprises a conversion unit (226) configured to convert the one or more conversation cycles into a textual format, when the one or more conversation cycles are in a speech format.
11. The system (102) as claimed in claim 7, wherein to train the first pre-trained model (210), the identification unit (222) is further configured to:
obtain, from the one or more data blocks (218), textual documents comprising information used by one or more past agents to handle past user queries;
process the textual documents to identify a plurality of topics within each textual document; and
generate a plurality of probability distributions corresponding to the plurality of topics within each textual document.
12. The system (102) as claimed in claim 7, wherein to train the second pre-trained model (212), the execution unit (224) is further configured to employ a natural language processing (NLP) technique on the textual documents to identify a plurality of topic-specific intents.
| # | Name | Date |
|---|---|---|
| 1 | 202021036190-CLAIMS [16-03-2023(online)].pdf | 2023-03-16 |
| 2 | 202021036190-STATEMENT OF UNDERTAKING (FORM 3) [21-08-2020(online)].pdf | 2020-08-21 |
| 3 | 202021036190-US(14)-HearingNotice-(HearingDate-05-05-2025).pdf | 2025-04-03 |
| 4 | 202021036190-COMPLETE SPECIFICATION [16-03-2023(online)].pdf | 2023-03-16 |
| 5 | 202021036190-PROVISIONAL SPECIFICATION [21-08-2020(online)].pdf | 2020-08-21 |
| 6 | 202021036190-POWER OF AUTHORITY [21-08-2020(online)].pdf | 2020-08-21 |
| 7 | 202021036190-FER_SER_REPLY [16-03-2023(online)].pdf | 2023-03-16 |
| 8 | 202021036190-FORM 1 [21-08-2020(online)].pdf | 2020-08-21 |
| 9 | 202021036190-FER.pdf | 2022-09-23 |
| 10 | 202021036190-FORM 18 [17-05-2022(online)].pdf | 2022-05-17 |
| 11 | 202021036190-DRAWINGS [21-08-2020(online)].pdf | 2020-08-21 |
| 12 | Abstract1.jpg | 2022-01-15 |
| 13 | 202021036190-DECLARATION OF INVENTORSHIP (FORM 5) [21-08-2020(online)].pdf | 2020-08-21 |
| 14 | 202021036190-Proof of Right [28-01-2021(online)].pdf | 2021-01-28 |
| 15 | 202021036190-COMPLETE SPECIFICATION [30-06-2021(online)].pdf | 2021-06-30 |
| 16 | 202021036190-DRAWING [30-06-2021(online)].pdf | 2021-06-30 |
| 17 | 202021036190-CORRESPONDENCE-OTHERS [30-06-2021(online)].pdf | 2021-06-30 |
| 18 | 202021036190-FORM-26 [02-05-2025(online)].pdf | 2025-05-02 |
| 19 | 202021036190-Correspondence to notify the Controller [02-05-2025(online)].pdf | 2025-05-02 |
| 20 | 202021036190-Written submissions and relevant documents [19-05-2025(online)].pdf | 2025-05-19 |
| 21 | 202021036190-PatentCertificate24-06-2025.pdf | 2025-06-24 |
| 22 | 202021036190-IntimationOfGrant24-06-2025.pdf | 2025-06-24 |
| 23 | SearchHistoryE_22-09-2022.pdf | |