
Customer Engagement Sentiment Tracking System (CESTS)

Abstract: The present invention, the Customer Engagement Sentiment Tracking System (CESTS), provides a comprehensive system and method for real-time sentiment analysis of customer interactions during support calls. The system takes voice input from customers, converts it to text using speech recognition, and applies Natural Language Processing (NLP) techniques along with machine learning algorithms to analyze the customer's emotional state, including tone, word choice, and intonation. By classifying sentiment into categories (positive, negative, neutral) and assigning sentiment scores, CESTS offers real-time feedback to customer service agents via a user interface. This feedback is tailored to improve interactions through adaptive communication, personalized suggestions, and enhanced customer satisfaction. The invention also enables identification of points where sentiment shifts may occur, allowing for more thoughtful and informed responses from the customer service representative. Additionally, the system provides a star rating for the customer service experience based on the sentiment analysis. The system ensures better customer engagement and reduces noise in service interactions, ultimately leading to improved service outcomes.


Patent Information

Application #
202441069324
Filing Date
13 September 2024
Publication Number
38/2024
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Parent Application

Applicants

SR UNIVERSITY
ANANTHSAGAR, HASANPARTHY (M), WARANGAL URBAN, TELANGANA - 506371, INDIA

Inventors

1. MR. RADHAKRISHNAN P
SR UNIVERSITY, ANANTHASAGAR, WARANGAL, TELANGANA-506371, INDIA
2. MR. S. DEEPAN
SR UNIVERSITY, ANANTHASAGAR, WARANGAL, TELANGANA-506371, INDIA
3. DR. R. ARCHANA REDDY
SR UNIVERSITY, ANANTHASAGAR, WARANGAL, TELANGANA-506371, INDIA
4. MRS. M. R. THAMIZHKKANAL
ASSISTANT PROFESSOR, TAKSHASHILA UNIVERSITY, PONDY MAIN ROAD, TINDIVANAM, VILLUPURAM DISTRICT, 604102
5. MR. MANCHALA SRI NITHIN
UG SCHOLAR, SR UNIVERSITY, ANANTHASAGAR, WARANGAL, TELANGANA-506371, INDIA
6. MR. PABBA RITEESH
UG SCHOLAR, SR UNIVERSITY, ANANTHASAGAR, WARANGAL, TELANGANA-506371, INDIA
7. MR. MURUPOJU YASHWANTH KUMAR
UG SCHOLAR, SR UNIVERSITY, ANANTHASAGAR, WARANGAL, TELANGANA-506371, INDIA
8. MS. TAMILSELVI P
UG SCHOLAR, GSS JAIN COLLEGE FOR WOMEN, 96 VEPERY HIGH ROAD, CHENNAI 600 007, TAMIL NADU

Specification

Description:

FIELD OF THE INVENTION
This invention relates to a method and system for a machine learning-based voice assistant with sentiment analysis for enhanced customer service interactions.
BACKGROUND OF THE INVENTION
The goal of the Customer Engagement Sentiment Tracking System (CESTS) is to enhance customer satisfaction and service quality by analysing sentiment in incoming calls and providing personalized feedback through voice assistants, reflected as a star rating.
The proposed solution stands out from previous methods by offering a more comprehensive understanding of customer emotions through sentiment analysis, leading to personalized assistance tailored to individual needs and emotions. This personalized approach fosters empathy from helpdesk agents and creates positive customer experiences characterized by empathetic communication and efficient issue resolution. Integration of sentiment analysis streamlines issue resolution, ensuring quicker resolutions and higher satisfaction levels. The emphasis on clearer communication reduces misunderstandings, resulting in smoother interactions and heightened satisfaction, setting it apart from solutions that overlook emotional intelligence in customer interactions.
SUMMARY OF THE INVENTION
This summary is provided to introduce a selection of concepts, in a simplified format, that are further described in the detailed description of the invention.
This summary is neither intended to identify key or essential inventive concepts of the invention, nor is it intended for determining the scope of the invention.
The Customer Engagement Sentiment Tracking System (CESTS) is a system that analyzes the sentiments of incoming customer service calls to improve interactions. The system uses machine learning, NLP, and voice-to-text conversion techniques to process and interpret the tone, words, phrases, and intonation of the caller's voice. The goal is to classify and understand customer emotions, offering customer service agents real-time feedback on sentiment, thus allowing them to adjust their responses accordingly.
CESTS comprises several components, including:
Voice Input Conversion: A mechanism for converting customer voice input into text format.
Sentiment Analysis Engine: A model trained to detect customer emotions from call transcripts using natural language and audio features.
Real-time Sentiment Feedback: A feedback system that informs the agent of the customer’s emotional state, such as anger, frustration, satisfaction, or happiness, by assigning sentiment scores and star ratings.
Machine Learning Voice Assistant: A system that interacts with the customer, analyzes sentiment, and provides feedback or suggestions to the customer service agent to improve the conversation.
NLP Module: An NLP-based text formatting system to enhance the tone, phrasing, and wording of responses based on the emotional tone detected in the customer’s voice.
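The component boundaries above can be expressed as a minimal set of interfaces. The following Python sketch is illustrative only: the type and method names (`SentimentResult`, `transcribe`, `analyze`, and so on) are assumptions for exposition, not identifiers from the specification.

```python
from dataclasses import dataclass
from typing import Protocol


@dataclass
class SentimentResult:
    label: str     # "positive", "negative", or "neutral"
    emotion: str   # e.g. "anger", "frustration", "satisfaction"
    score: float   # sentiment score in [-1.0, 1.0]


class VoiceInputConverter(Protocol):
    """Converts customer speech into a text transcript."""
    def transcribe(self, audio: bytes) -> str: ...


class SentimentAnalysisEngine(Protocol):
    """Detects sentiment and emotion from a transcript."""
    def analyze(self, transcript: str) -> SentimentResult: ...


class VoiceAssistant(Protocol):
    """Recommends a response strategy to the agent."""
    def suggest(self, result: SentimentResult) -> str: ...


class FeedbackSystem(Protocol):
    """Displays the sentiment score and star rating to the agent."""
    def display(self, result: SentimentResult, stars: int) -> None: ...
```

Expressing the modules as protocols keeps each one independently replaceable, which matches the description's separation between conversion, analysis, assistance, and feedback.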
To further clarify advantages and features of the present invention, a more particular description of the invention will be rendered by reference to specific embodiments thereof, which are illustrated in the appended drawings. It is appreciated that these drawings depict only typical embodiments of the invention and are therefore not to be considered limiting of its scope. The invention will be described and explained with additional specificity and detail with the accompanying drawings.
The aim of the patent "Customer Engagement Sentiment Tracking System (CESTS)" is to enhance customer service by analyzing the sentiment of incoming calls. Currently, there are limited measures to improve customer interactions apart from basic protocols. This project aims to utilize sentiment analysis techniques to understand the emotions and attitudes of customers during support calls, promoting more effective and thoughtful communication. Additionally, the project seeks to identify specific regions or situations where customer sentiment may vary, allowing for tailored responses and ultimately contributing to a more positive customer experience and reduced noise in customer service interactions.
Other objectives of the invention are:
1. Develop a machine learning-based voice assistant system to improve customer service interactions.
2. Implement a conversion mechanism to translate user inputs into text for analysis.
3. Train a sentiment analysis model on call transcripts to identify and understand customer sentiments, including words, phrases, tone, and intonation.
4. Utilize Natural Language Processing (NLP) techniques to adjust text formatting based on the customer's voice, enhancing the sophistication and effectiveness of the voice assistant.
5. Enhance customer satisfaction and interactions by creating a sophisticated voice assistant that provides personalized feedback, reflected as a star rating.
BRIEF DESCRIPTION OF THE DRAWINGS
The illustrated embodiments of the subject matter will be understood by reference to the drawings, wherein like parts are designated by like numerals throughout. The following description is intended only by way of example, and simply illustrates certain selected embodiments of devices, systems, and methods that are consistent with the subject matter as claimed herein, wherein:
FIGURE 1: SYSTEM ARCHITECTURE
The figures depict embodiments of the present subject matter for the purposes of illustration only. A person skilled in the art will easily recognize from the following description that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles of the disclosure described herein.
DETAILED DESCRIPTION OF THE INVENTION
The detailed description of various exemplary embodiments of the disclosure is described herein with reference to the accompanying drawings. It should be noted that the embodiments are described herein in such detail as to clearly communicate the disclosure. However, the level of detail provided herein is not intended to limit the anticipated variations of embodiments; on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the scope of the present disclosure as defined by the appended claims.
It is also to be understood that various arrangements may be devised that, although not explicitly described or shown herein, embody the principles of the present disclosure. Moreover, all statements herein reciting principles, aspects, and embodiments of the present disclosure, as well as specific examples, are intended to encompass equivalents thereof.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes" and/or "including," when used herein, specify the presence of stated features, integers, steps, operations, elements and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof.
It should also be noted that in some alternative implementations, the functions/acts noted may occur out of the order noted in the figures. For example, two figures shown in succession may, in fact, be executed concurrently or may sometimes be executed in the reverse order, depending upon the functionality/acts involved.
In addition, the descriptions of "first", "second", "third", and the like in the present invention are used for the purpose of description only, and are not to be construed as indicating or implying their relative importance or implicitly indicating the number of technical features indicated. Thus, features defining "first" and "second" may include at least one of the features, either explicitly or implicitly.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which example embodiments belong. It will be further understood that terms, e.g., those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
1. Components:
1. Voice Input Conversion Module:
o Converts the customer’s speech input during the call into text using speech-to-text technologies.
o This module is responsible for creating accurate call transcripts that will be analyzed further.
2. Sentiment Analysis Engine:
o Employs a sentiment analysis model trained on labeled call transcripts, audio features such as tone and pitch, and natural language cues like word choice and phrasing.
o Capable of identifying positive, neutral, or negative sentiments and mapping them to specific emotions like anger, frustration, happiness, or satisfaction.
3. Machine Learning Voice Assistant:
o This ML-based system interacts with both the customer and the agent during calls.
o It tracks sentiment and provides real-time recommendations based on the customer’s mood and emotional state.
o The assistant can prompt the agent to adapt their language or offer solutions tailored to the sentiment detected.
4. Natural Language Processing (NLP) Module:
o Enhances text and speech analysis by interpreting the customer’s words and matching them with their emotional tone.
o It adjusts text responses based on intonation, speed, and choice of words, helping service representatives provide more thoughtful and aligned responses.
5. Real-Time Sentiment Feedback System:
o This system displays real-time sentiment insights to the customer service agent, showing emotions like “angry,” “happy,” or “frustrated” based on a sentiment score.
o It presents a star rating system for each interaction, allowing service representatives to gauge the emotional outcome of the conversation.
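As one illustration of how the Sentiment Analysis Engine's score could drive the star rating described above, the following sketch scores a transcript against a toy word lexicon and maps the score linearly onto one to five stars. The lexicon, thresholds, and mapping are assumptions for exposition; the specification instead calls for a trained machine learning model over both text and audio features.

```python
# Illustrative lexicon-based scoring; a production engine would use a
# trained ML model over tone, pitch, and word choice as described above.
POSITIVE = {"great", "thanks", "happy", "perfect", "resolved"}
NEGATIVE = {"angry", "terrible", "broken", "frustrated", "useless"}


def sentiment_score(transcript: str) -> float:
    """Score in [-1, 1]: +1 for all-positive cues, -1 for all-negative."""
    words = [w.strip(".,!?") for w in transcript.lower().split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total


def star_rating(score: float) -> int:
    """Map a [-1, 1] sentiment score onto a 1-5 star rating."""
    return round((score + 1) / 2 * 4) + 1


print(star_rating(sentiment_score("thanks, the issue is resolved, great")))  # 5
print(star_rating(sentiment_score("this is terrible and I am angry")))       # 1
```

A transcript with no emotional cues scores 0.0 and yields the neutral midpoint of three stars.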
2. Connections and Integration:
• The Voice Input Conversion Module is integrated with the customer service communication system, capturing the customer’s voice in real-time and converting it into text.
• The Sentiment Analysis Engine processes the text and audio data simultaneously. It uses machine learning models to analyze tone, pitch, and choice of words, identifying the underlying emotion.
• The Machine Learning Voice Assistant interacts with the sentiment analysis engine and provides personalized suggestions to customer service agents based on detected sentiments.
• The NLP Module continuously improves the response system by adapting text formatting and wording based on real-time emotional feedback from the customer’s tone of voice.
• The Real-Time Feedback System connects to the agent's dashboard, displaying the sentiment score and star rating throughout the call. This information helps agents modulate their responses for a more positive outcome.
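In a minimal rule-based sketch, the personalized suggestions that the Machine Learning Voice Assistant pushes to the agent's dashboard could be a lookup from detected emotion to a prompt. The mapping below is hypothetical; the specification describes an ML-driven assistant rather than fixed rules.

```python
# Hypothetical emotion-to-prompt mapping; stands in for the ML-driven
# recommendation logic described in the specification.
SUGGESTIONS = {
    "anger": "Acknowledge the problem first and keep a calm, measured tone.",
    "frustration": "Apologize for the inconvenience and offer a concrete next step.",
    "satisfaction": "Confirm the resolution and ask if anything else is needed.",
    "neutral": "Continue with the standard resolution workflow.",
}


def agent_suggestion(emotion: str) -> str:
    """Return a real-time prompt for the agent given the detected emotion."""
    return SUGGESTIONS.get(emotion, SUGGESTIONS["neutral"])


print(agent_suggestion("frustration"))
```

Unrecognized emotions fall back to the neutral prompt, so the agent always receives some guidance.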
3. Working of the System:
1. Step 1: Voice Capture and Conversion
o When a customer initiates a call, the Voice Input Conversion Module captures their voice input and converts it into text using speech-to-text technology.
o This transcript is processed in real-time by the system for analysis.
2. Step 2: Sentiment Detection
o The Sentiment Analysis Engine analyzes both the call transcript and audio features like tone, pitch, and volume to determine the customer’s sentiment.
o The model is trained to detect positive, negative, and neutral sentiments, and to classify emotions like frustration, anger, or happiness.
3. Step 3: Sentiment Feedback to Agent
o The detected sentiment is presented in real-time on the agent’s dashboard, offering insight into the customer’s emotional state.
o A star rating system is also displayed to indicate the overall sentiment of the conversation.
4. Step 4: Machine Learning Voice Assistant Interaction
o The Machine Learning Voice Assistant continuously interacts with the sentiment analysis model and provides recommendations for the agent.
o For example, if the system detects frustration, the assistant may suggest a calming tone, propose solutions, or recommend appropriate responses.
5. Step 5: NLP-based Text Adaptation
o The NLP Module adjusts text-based responses according to the emotional tone detected. If a customer is frustrated, the system ensures the responses are phrased more empathetically.
6. Step 6: Continuous Learning and Optimization
o The system continuously learns from interactions, improving its accuracy and ability to detect and classify sentiments.
o It refines the ML models and NLP algorithms to better assess customer emotions and improve the system’s response suggestions.
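Steps 1 to 5 above can be wired together in a single pass per customer utterance. The sketch below stubs out speech-to-text and the trained sentiment model with toy stand-ins (every function and threshold here is an illustrative assumption); Step 6, continuous retraining, happens offline and is omitted.

```python
def transcribe(audio: bytes) -> str:
    """Stand-in for a speech-to-text engine (Step 1); assumes UTF-8 audio stub."""
    return audio.decode("utf-8")


def score(transcript: str) -> float:
    """Toy stand-in for the trained sentiment model (Step 2)."""
    neg = {"angry", "broken", "terrible"}
    pos = {"thanks", "great", "resolved"}
    words = transcript.lower().split()
    n = sum(w in neg for w in words)
    p = sum(w in pos for w in words)
    return 0.0 if n + p == 0 else (p - n) / (p + n)


def handle_utterance(audio: bytes) -> dict:
    """Run Steps 1-5 for one customer utterance and build the dashboard payload."""
    text = transcribe(audio)                                    # Step 1
    s = score(text)                                             # Step 2
    label = "positive" if s > 0 else "negative" if s < 0 else "neutral"
    stars = round((s + 1) / 2 * 4) + 1                          # Step 3: 1-5 stars
    hint = {"negative": "Use a calming, empathetic tone.",      # Step 4
            "positive": "Confirm the resolution with the customer.",
            "neutral": "Proceed with the standard workflow."}[label]
    return {"transcript": text, "sentiment": label,             # Step 5: dashboard
            "stars": stars, "suggestion": hint}


print(handle_utterance(b"the app is broken and I am angry"))
```

Keeping each step a separate function mirrors the modular architecture, so any stand-in can later be swapped for the trained model without touching the pipeline.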
Claims:
1. A customer sentiment tracking system that comprises a Voice Input Conversion Module, Sentiment Analysis Engine, Machine Learning Voice Assistant, and NLP Module for analyzing customer emotions and tailoring responses.
2. The system as claimed in claim 1, wherein the Voice Input Conversion Module converts customer speech into text in real-time.
3. The system as claimed in claim 1, wherein the Sentiment Analysis Engine analyzes customer sentiment by detecting tone, pitch, and word choice.
4. The system as claimed in claim 1, wherein the Machine Learning Voice Assistant offers real-time suggestions to customer service agents based on the detected customer sentiment.
5. The system as claimed in claim 1, wherein the NLP Module adjusts response text formatting based on the detected emotional tone of the customer.
6. The system as claimed in claim 1, wherein the Real-Time Feedback System displays customer sentiment scores and star ratings to the agent during the interaction.
7. A method for tracking and analyzing customer sentiment during support calls, comprising the steps of:
a. Capturing voice input from the customer during a call using a voice input system;
b. Converting the captured voice input into text using a speech-to-text conversion process;
c. Analyzing the text and audio features using a sentiment analysis engine to detect emotional cues, including tone, pitch, and word choice;
d. Classifying the detected sentiment into categories such as positive, negative, neutral, or specific emotions like frustration, anger, or happiness; and
e. Providing real-time feedback to a customer service agent, displaying the customer’s sentiment and assigning a sentiment score.
8. The method as claimed in claim 7, wherein the step of capturing voice input further comprises the use of a microphone integrated with a customer service communication system to record the customer’s speech in real-time.
9. The method as claimed in claim 7, wherein the step of converting voice input into text involves the application of a speech recognition algorithm to generate a transcript of the conversation.
10. The method as claimed in claim 7, wherein the step of analyzing sentiment includes the use of a machine learning model trained on audio and text data, which detects emotions by evaluating features such as intonation, word choice, and sentence structure.

Documents

Application Documents

# Name Date
1 202441069324-STATEMENT OF UNDERTAKING (FORM 3) [13-09-2024(online)].pdf 2024-09-13
2 202441069324-REQUEST FOR EARLY PUBLICATION(FORM-9) [13-09-2024(online)].pdf 2024-09-13
3 202441069324-POWER OF AUTHORITY [13-09-2024(online)].pdf 2024-09-13
4 202441069324-FORM-9 [13-09-2024(online)].pdf 2024-09-13
5 202441069324-FORM FOR SMALL ENTITY(FORM-28) [13-09-2024(online)].pdf 2024-09-13
6 202441069324-FORM 1 [13-09-2024(online)].pdf 2024-09-13
7 202441069324-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [13-09-2024(online)].pdf 2024-09-13
8 202441069324-EVIDENCE FOR REGISTRATION UNDER SSI [13-09-2024(online)].pdf 2024-09-13
9 202441069324-EDUCATIONAL INSTITUTION(S) [13-09-2024(online)].pdf 2024-09-13
10 202441069324-DRAWINGS [13-09-2024(online)].pdf 2024-09-13
11 202441069324-DECLARATION OF INVENTORSHIP (FORM 5) [13-09-2024(online)].pdf 2024-09-13
12 202441069324-COMPLETE SPECIFICATION [13-09-2024(online)].pdf 2024-09-13
13 202441069324-FORM 18 [18-02-2025(online)].pdf 2025-02-18