
Sarcasm Detection System With Contextual Understanding

Abstract: A sarcasm detection system (100) with contextual understanding is disclosed. The system (100) comprises: a data acquisition unit (104) adapted to receive textual snippets from a computing device (102), and a processing unit (106) in communication with the data acquisition unit (104). The processing unit (106) is configured to: extract and preprocess textual snippets received by the data acquisition unit (104); apply a Bidirectional Encoder Representations from Transformers (BERT) model to generate contextual embeddings from the extracted textual snippets; leverage self-attention mechanisms within the BERT model to capture linguistic nuances and contextual dependencies in the extracted textual snippets; and classify the processed textual snippets as sarcastic or non-sarcastic using a SoftMax classification layer. The system (100) achieves a high validation accuracy of 91% and a weighted F1-score of 0.912, outperforming conventional sarcasm detection methods in both accuracy and reliability. Claims: 10, Figures: 6


Patent Information

Application #
Filing Date
24 March 2025
Publication Number
17/2025
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Email
Parent Application

Applicants

SR University
SR University, Ananthasagar, Warangal, Telangana, India 506371; patent@sru.edu.in; 08702818333

Inventors

1. Ramakrishna Bodige
SR University, Ananthasagar, Hasanparthy (PO), Warangal, Telangana, India-506371.
2. Ramesh babu Akarapu
SR University, Ananthasagar, Hasanparthy (PO), Warangal, Telangana, India-506371.
3. Pramod kumar Poladi
SR University, Ananthasagar, Hasanparthy (PO), Warangal, Telangana, India-506371.

Specification

Description:
BACKGROUND
Field of Invention
[001] Embodiments of the present invention generally relate to language processing systems and particularly to a sarcasm detection system with contextual understanding.
Description of Related Art
[002] An ability to accurately interpret human language is a fundamental challenge in natural language processing (NLP). Sarcasm detection, in particular, poses significant difficulties due to its reliance on contextual cues, implicit meanings, and cultural variations. Traditional computational methods often fail to capture these nuances, as they primarily depend on explicit word patterns, sentiment lexicons, or rule-based approaches. While early machine learning models improved detection by leveraging statistical patterns and handcrafted features, they still struggled with the subtle complexities of sarcasm, such as irony, exaggeration, and contradiction.
[003] Recent advancements in deep learning, particularly transformer-based models, have led to notable improvements in natural language understanding. Large-scale pre-trained language models like Bidirectional Encoder Representations from Transformers (BERT) have shown remarkable success in tasks requiring contextual awareness. However, sarcasm detection remains challenging due to the need for deep semantic comprehension, discourse-level context, and robustness to adversarial inputs. Many existing approaches fail to generalize across diverse linguistic styles, leading to inconsistencies in sarcasm classification across different demographics and communication settings.
[004] Furthermore, current sarcasm detection solutions lack scalability and adaptability when confronted with ambiguous statements, informal speech, or variations in cultural expressions. Most models exhibit overfitting due to limited training data diversity and insufficient adversarial testing. This highlights the need for more robust approaches that integrate advanced contextual representations, adaptive learning mechanisms, and enhanced classification techniques to achieve a more reliable and generalized sarcasm detection system.
[005] There is thus a need for an improved and advanced transformer-based sarcasm detection system with contextual understanding that can address the aforementioned limitations in a more efficient manner.
SUMMARY
[006] Embodiments in accordance with the present invention provide a transformer-based sarcasm detection system with contextual understanding. The system comprises a data acquisition unit adapted to receive textual snippets from a computing device. The system further comprises a processing unit in communication with the data acquisition unit. The processing unit is configured to: extract and preprocess textual snippets received by the data acquisition unit; apply a Bidirectional Encoder Representations from Transformers (BERT) model to generate contextual embeddings from the extracted textual snippets; leverage self-attention mechanisms within the BERT model to capture linguistic nuances and contextual dependencies in the extracted textual snippets; and classify the processed textual snippets as sarcastic or non-sarcastic using a SoftMax classification layer.
[007] Embodiments in accordance with the present invention further provide a method for detecting sarcasm with contextual understanding. The method comprises the steps of: extracting and preprocessing textual snippets received by the data acquisition unit; applying a Bidirectional Encoder Representations from Transformers (BERT) model to generate contextual embeddings from the extracted textual snippets; leveraging self-attention mechanisms within the BERT model to capture linguistic nuances and contextual dependencies in the extracted textual snippets; and classifying the processed textual snippets as sarcastic or non-sarcastic using a SoftMax classification layer.
[008] Embodiments of the present invention may provide a number of advantages depending on their particular configuration. First, embodiments of the present application may provide a transformer-based sarcasm detection system with contextual understanding.
[009] Next, embodiments of the present application may provide a transformer-based sarcasm detection that effectively captures deep semantic relationships to allow for improved detection of sarcasm, irony, and implied meanings in a text.
[0010] Next, embodiments of the present application may provide a transformer-based sarcasm detection that is resilient to misleading or ambiguous sarcasm-laden statements.
[0011] Next, embodiments of the present application may provide a transformer-based sarcasm detection that adapts to different communication styles, ensuring accurate sarcasm detection across diverse user groups.
[0012] Next, embodiments of the present application may provide a transformer-based sarcasm detection that minimizes overfitting and enhances its ability to perform well on unseen data.
[0013] Next, embodiments of the present application may provide a transformer-based sarcasm detection that achieves a high validation accuracy of 91% and a weighted F1-score of 0.912, outperforming conventional sarcasm detection methods in both accuracy and reliability.
[0014] These and other advantages will be apparent from the present application of the embodiments described herein.
[0015] The preceding is a simplified summary to provide an understanding of some embodiments of the present invention. This summary is neither an extensive nor exhaustive overview of the present invention and its various embodiments. The summary presents selected concepts of the embodiments of the present invention in a simplified form as an introduction to the more detailed description presented below. As will be appreciated, other embodiments of the present invention are possible utilizing, alone or in combination, one or more of the features set forth above or described in detail below.
BRIEF DESCRIPTION OF THE DRAWINGS
[0016] The above and still further features and advantages of embodiments of the present invention will become apparent upon consideration of the following detailed description of embodiments thereof, especially when taken in conjunction with the accompanying drawings, and wherein:
[0017] FIG. 1A illustrates a block diagram of a sarcasm detection system with contextual understanding , according to an embodiment of the present invention;
[0018] FIG. 1B illustrates a model of the sarcasm detection system with contextual understanding, according to an embodiment of the present invention;
[0019] FIG. 1C illustrates a performance of the sarcasm detection system with contextual understanding, according to an embodiment of the present invention;
[0020] FIG. 1D illustrates a matrix of categorization of the sarcasm detection system with contextual understanding, according to an embodiment of the present invention;
[0021] FIG. 2 illustrates a block diagram of a processing unit of the sarcasm detection system with contextual understanding, according to an embodiment of the present invention; and
[0022] FIG. 3 depicts a flowchart of a method for detecting sarcasm with contextual understanding, according to an embodiment of the present invention.
[0023] The headings used herein are for organizational purposes only and are not meant to be used to limit the scope of the description or the claims. As used throughout this application, the word "may" is used in a permissive sense (i.e., meaning having the potential to), rather than the mandatory sense (i.e., meaning must). Similarly, the words “include”, “including”, and “includes” mean including but not limited to. To facilitate understanding, like reference numerals have been used, where possible, to designate like elements common to the figures. Optional portions of the figures may be illustrated using dashed or dotted lines, unless the context of usage indicates otherwise.
DETAILED DESCRIPTION
[0024] The following description includes the preferred best mode of one embodiment of the present invention. It will be clear from this description of the invention that the invention is not limited to these illustrated embodiments but that the invention also includes a variety of modifications and embodiments thereto. Therefore, the present description should be seen as illustrative and not limiting. While the invention is susceptible to various modifications and alternative constructions, it should be understood, that there is no intention to limit the invention to the specific form disclosed, but, on the contrary, the invention is to cover all modifications, alternative constructions, and equivalents falling within the scope of the invention as defined in the claims.
[0025] In any embodiment described herein, the open-ended terms "comprising", "comprises", and the like (which are synonymous with "including", "having", and "characterized by") may be replaced by the respective partially closed phrases "consisting essentially of", "consists essentially of", and the like, or the respective closed phrases "consisting of", "consists of", and the like.
[0026] As used herein, the singular forms “a”, “an”, and “the” designate both the singular and the plural, unless expressly stated to designate the singular only.
[0027] FIG. 1A illustrates a block diagram of a sarcasm detection system 100 (hereinafter referred to as the system 100) with contextual understanding, according to an embodiment of the present invention. The system 100 may be adapted to receive textual snippets. Further, the system 100 may be adapted to detect a presence of sarcasm in the received textual snippets. Moreover, the system 100 may further be adapted to detect a presence of slang, misnomer, humor, and so forth in the received textual snippets. Embodiments of the present invention are intended to include or otherwise cover any form of the textual snippets, including known, related art, and/or later developed technologies.
[0028] According to the embodiments of the present invention, the system 100 may be configured to classify the detected slang from the received textual snippets. Based on the classified slang, the system 100 may be configured to generate real-time classification results that may be configured to be integrated into real-world applications such as sentiment analysis, content moderation, chatbot enhancements, generative engines, document categorization, book categorization, customer feedback systems, and so forth. Embodiments of the present invention are intended to include or otherwise cover any real-world application of the generated real-time classification results from the textual snippets, including known, related art, and/or later developed technologies.
[0029] According to the embodiments of the present invention, the system 100 may incorporate non-limiting hardware components to enhance the processing speed and efficiency such as the system 100 may comprise a computing device 102, a data acquisition unit 104, and a processing unit 106. In an embodiment of the present invention, the hardware components of the system 100 may be integrated with computer-executable instructions for overcoming the challenges and the limitations of the existing systems.
[0030] In an embodiment of the present invention, the computing device 102 may be adapted to upload the textual snippets to the system 100. The computing device 102 may be, but not limited to, a laptop, a mobile, and so forth. Embodiments of the present invention are intended to include or otherwise cover any type of the computing device 102, including known, related art, and/or later developed technologies.
[0031] In an embodiment of the present invention, the data acquisition unit 104 may be adapted to receive the textual snippets from the computing device 102.
[0032] In an embodiment of the present invention, the processing unit 106 may be in communication with the data acquisition unit 104. The processing unit 106 may further be configured to execute computer-executable instructions to generate an output relating to the system 100. According to embodiments of the present invention, the processing unit 106 may be, but not limited to, a Programmable Logic Control (PLC) unit, a microprocessor, a development board, and so forth. Embodiments of the present invention are intended to include or otherwise cover any type of the processing unit 106, including known, related art, and/or later developed technologies.
[0033] In an embodiment of the present invention, the system 100 may further comprise specialized hardware configuration of the processing unit 106 that may be implemented to ensure efficient execution of computationally intensive tasks. The processing unit 106 may comprise dedicated artificial intelligence (AI) accelerators, graphics processing units (GPUs), tensor processing units (TPUs), field programmable gate arrays (FPGAs), or application-specific integrated circuits (ASICs) to optimize the performance of deep learning models such as the BERT model. The integration of hardware components ensures real-time processing and scalability, making the system adaptable to various computing environments, including cloud-based servers, edge computing devices, and embedded systems. Optimization for low-power devices, high-performance computing clusters, and parallel processing architectures ensures optimal resource utilization and efficiency in execution. The specialized hardware configuration of the processing unit 106 may provide a technically enhanced framework for enabling operation as an integrated hardware-software solution rather than a mere computational algorithm, according to an embodiment of the present invention.
[0034] In an embodiment of the present invention, the processing unit 106 may be configured to communicate with an external system 108 via an application programming interface (API) 110 to provide real-time classification results. The API 110 may facilitate seamless integration with third-party applications for businesses and/or developers to leverage sarcasm detection capabilities in diverse use cases. In an embodiment of the present invention, the processing unit 106 may further be explained in conjunction with FIG. 2.
[0035] In an embodiment of the present invention, the external system 108 may be configured to receive processed classification results from the processing unit 106 and utilize the data for various downstream applications. The external system 108 may include, but is not limited to, customer service platforms, social media monitoring tools, content moderation systems, sentiment analysis engines, and conversational AI models. The external system 108 may be configured to operate on cloud-based servers, edge computing devices, or on-premises enterprise solutions, ensuring flexible deployment and scalability across different industries and domains.
[0036] In an embodiment of the present invention, the application programming interface (API) 110 may be configured to act as a bridge between the processing unit 106 and the external system 108 for facilitating seamless data exchange and interoperability. The API 110 may be implemented using RESTful web services, GraphQL, or WebSocket protocols for allowing efficient, low-latency communication for real-time sarcasm classification. The API 110 may be configured to support batch processing, streaming data pipelines, webhook-based event-driven architectures, and so forth for allowing businesses and developers to integrate sarcasm detection into existing workflows, chatbots, recommendation engines, sentiment analytics tools, and so forth.
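The request/response contract such an API 110 might expose can be sketched as a plain handler function. This is a hedged illustration: the payload fields (`text`, `label`, `confidence`) and the stubbed `classify_snippet` helper are assumptions for illustration, not the system's actual interface.

```python
import json

def classify_snippet(text: str):
    """Hypothetical stand-in for the processing unit's BERT pipeline.

    Stubbed with a fixed result; a real deployment would invoke the
    trained model here.
    """
    return ("sarcastic", 0.92)

def classify_handler(request_body: str) -> str:
    """Decode a JSON request, classify the text, and encode a JSON response."""
    payload = json.loads(request_body)
    text = payload["text"]
    label, confidence = classify_snippet(text)
    return json.dumps({"text": text, "label": label, "confidence": confidence})
```

Wrapping this handler in a RESTful route, a GraphQL resolver, or a WebSocket message callback, as the paragraph above contemplates, changes only the transport; the JSON contract stays the same.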
[0037] FIG. 1B illustrates a model 112 of the system 100, according to an embodiment of the present invention. The model 112 depicts the data flow in the system 100 along with the output, sarcastic or non-sarcastic, for a corresponding textual input.
[0038] In an embodiment of the present invention, a dataset utilized for training and operation of the system 100 may comprise 235,480 samples collected from digital sources such as, but not limited to, social media, chatrooms, articles, video subtitles, and so forth. Embodiments of the present invention are intended to include or otherwise cover any source for samples in the dataset, including known, related art, and/or later developed technologies. In an embodiment of the present invention, the Bidirectional Encoder Representations from Transformers (BERT) embedding may comprise token embedding, word embedding, position embedding, and so forth. Embodiments of the present invention are intended to include or otherwise cover any type of embedding, including known, related art, and/or later developed technologies.
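The embedding composition described above can be sketched in miniature. This is a hedged illustration only: the table sizes below are toy values rather than BERT's actual dimensions, and the random initialization stands in for learned parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, max_len, d_model = 100, 16, 8   # toy dimensions, not BERT's real ones

# Learned lookup tables (randomly initialized here purely for illustration)
token_table = rng.normal(size=(vocab_size, d_model))
position_table = rng.normal(size=(max_len, d_model))

def embed(token_ids):
    """Sum token and position embeddings, mirroring a BERT-style input layer."""
    positions = np.arange(len(token_ids))
    return token_table[token_ids] + position_table[positions]

# Three hypothetical token ids produce a (3, d_model) embedding matrix
x = embed(np.array([5, 12, 42]))
```

Each row of `x` carries both what the token is (token lookup) and where it sits in the sequence (position lookup), which is what lets the attention layers downstream reason about word order.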
[0039] In an embodiment of the present invention, the system 100 may deploy two Bidirectional Encoder Representations from Transformers (BERT) models, each comprising internal layers. The two BERT models may be operated in cohesion, and this cohesive operation may be projected as a double BERT model. The double BERT model may integrate a dual-transformer architecture. Further, the dual-transformer architecture may enable robust contextual embeddings along with advanced classification techniques, leading to superior sarcasm detection across diverse linguistic, cultural, and adversarial datasets with exceptional accuracy and scalability.
[0040] The internal layers may be, but not limited to, an attention layer, a self-attention layer, a linear attention layer, a layer normalization layer, a linear inter-attention layer, a pooling layer, or a combination thereof. Embodiments of the present invention are intended to include or otherwise cover any type of the internal layers, including known, related art, and/or later developed technologies.
[0041] In an embodiment of the present invention, the system 100 may further provide output in a concatenated format comprising an input (IN), a SoftMax layer, and an output (OUT).
[0042] FIG. 1C illustrates a performance chart 114 of the system 100, according to an embodiment of the present invention. When sarcasm is absent from the textual snippets, the system 100 may report a precision score of 0.92, a recall score of 0.91, and an F1 score of 0.92. When sarcasm is present in the textual snippets, the system 100 may report a precision score of 0.89, a recall score of 0.92, and an F1 score of 0.90.
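The per-class F1 scores in the performance chart follow from the harmonic mean of the reported precision and recall, which can be checked directly:

```python
def f1(precision: float, recall: float) -> float:
    """F1 is the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Sarcastic class: precision 0.89, recall 0.92
f1_sarcastic = f1(0.89, 0.92)        # ≈ 0.905, matching the reported 0.90
# Non-sarcastic class: precision 0.92, recall 0.91
f1_non_sarcastic = f1(0.92, 0.91)    # ≈ 0.915, consistent with the reported 0.92
                                     # given that precision/recall are themselves rounded
```

The weighted F1 of 0.912 reported in the abstract is the class-frequency-weighted average of these per-class scores.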
[0043] FIG. 1D illustrates a matrix 116 of categorization of the system 100, according to an embodiment of the present invention. In an embodiment of the present invention, the extracted textual snippets may be categorized as matrix elements in the matrix 116. The matrix elements of the matrix 116 may be, but not limited to, a contextual understanding, a nuanced interpretation, an ambiguous statement, a linguistic and cultural variability, a slang and informal language, a role of emotion, a conversational context, and so forth. Embodiments of the present invention are intended to include or otherwise cover any matrix elements of the matrix 116 for categorization of the textual snippets, including known, related art, and/or later developed technologies. In an exemplary scenario, a textual snippet "I can't believe how wonderfully you handled that mess!" may be categorized as the nuanced interpretation in the matrix 116. Further, the system 100 may classify the textual snippet as 'sarcastic'. Similarly, another textual snippet "You really nailed that presentation, didn't you?" may be categorized as the linguistic and cultural variability in the matrix 116. Further, the system 100 may classify the textual snippet as 'sarcastic'.
[0044] FIG. 2 illustrates a block diagram of the processing unit 106 of the system 100, according to an embodiment of the present invention. The processing unit 106 may comprise the computer-executable instructions in form of programming modules such as a data extraction module 200, a data execution module 202, and a data employment module 204.
[0045] In an embodiment of the present invention, the data extraction module 200 may be configured to extract and preprocess textual snippets received by the data acquisition unit 104. The textual snippets may be extracted by employing self-attention and inter-attention layers to enhance classification accuracy. The data extraction module 200 may further be configured to transmit the textual snippets to the data execution module 202.
[0046] The data execution module 202 may be activated upon receipt of the textual snippets from the data extraction module 200. In an embodiment of the present invention, the data execution module 202 may be configured to apply the bidirectional encoder representations from transformers (BERT) model to generate contextual embeddings from the extracted textual snippets. The data execution module 202 may be configured to leverage self-attention mechanisms within the BERT model to capture linguistic nuances and contextual dependencies in the extracted textual snippets.
[0047] In an embodiment of the present invention, the self-attention mechanisms may be configured to analyze the relationships between words in a sentence by assigning attention scores to each token based on its relevance to others. The self-attention mechanisms may include scaled dot-product attention, which computes attention weights using key, query, and value matrices, and multi-head attention, which allows the BERT model to process multiple representations of the input text simultaneously. By incorporating relative positional encoding, the data execution module 202 may be configured to retain word-order information while maintaining flexibility in understanding contextual meaning. The self-attention mechanisms may enhance sarcasm detection by recognizing contradictions, unexpected tonal shifts, and sentiment reversals within textual snippets.
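The scaled dot-product attention described in the preceding paragraph can be sketched with NumPy. The dimensions below are toy values chosen for illustration; a real BERT layer would also add multiple heads and learned projection matrices.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)          # token-to-token relevance scores
    weights = softmax(scores, axis=-1)       # attention distribution per token
    return weights @ V, weights

rng = np.random.default_rng(0)
n_tokens, d_k = 4, 8                         # toy sizes for illustration
Q = rng.normal(size=(n_tokens, d_k))
K = rng.normal(size=(n_tokens, d_k))
V = rng.normal(size=(n_tokens, d_k))
out, w = scaled_dot_product_attention(Q, K, V)
```

Each row of `w` is a probability distribution over all tokens, so every output row is a relevance-weighted mixture of the value vectors; it is this mixing that lets the model relate "Oh great" to "another Monday" when scoring contradiction.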
[0048] Further, the data execution module 202 may be configured to execute the BERT model for refinement of the generated contextual embeddings. The refinement process may involve fine-tuning through additional training layers, ensuring domain-specific optimization and improved classification accuracy. The generated contextual embeddings may comprise the token embedding, word embedding, and position embedding, enabling a comprehensive understanding of sentence structure and intent.
[0049] The data execution module 202 may further be configured to transmit the generated contextual embeddings to the data employment module 204 for classification and further processing. The data employment module 204 may be activated upon receipt of the generated contextual embeddings from the data execution module 202. In an embodiment of the present invention, the data employment module 204 may be configured to classify the processed textual snippets as sarcastic or non-sarcastic using a SoftMax classification layer. The classified textual snippets may be taken as input for further analysis, sentiment assessment, or integration into various downstream applications, such as social media monitoring, customer feedback analysis, chatbot enhancements, and content moderation.
[0050] In an embodiment of the present invention, the data employment module 204 may be configured to generate a confidence score corresponding to the classification outcome. The confidence score may indicate the probability of the textual snippet being sarcastic or non-sarcastic, based on the learned contextual embeddings. The generated confidence scores may assist in decision-making processes by providing an additional layer of interpretability and reliability.
[0051] Furthermore, the data employment module 204 may be configured to store the classified textual snippets along with their respective confidence scores in a structured database. The stored data may be utilized for iterative model training, performance improvement, and benchmarking against new datasets to enhance classification accuracy over time. In an exemplary scenario of the present invention, the data extraction module 200 may receive user input in the form of documents, files, paragraphs, sentences, image text, social media posts, chat messages, emails, reviews, transcripts, or any other textual content.
[0052] For example, when the received user input is "Oh great, another Monday! Just what I needed!", the data extraction module 200 may be configured to preprocess the textual input by applying tokenization, normalization, and feature extraction techniques. The data extraction module 200 may further employ self-attention and inter-attention layers to analyze linguistic dependencies and contextual relationships, thereby enhancing classification accuracy. The extracted textual snippet may be categorized into matrix elements of the matrix 116, including but not limited to, contextual understanding, ambiguous statements, linguistic variability, slang usage, emotional tone, conversational context, and so forth. The processed textual snippet may then be transmitted to the data execution module 202 for further analysis.
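A minimal sketch of the tokenization and normalization step is shown below. Note this is an assumption-laden simplification: plain lowercasing, punctuation stripping, and whitespace splitting stand in for the WordPiece tokenizer a BERT pipeline would actually use.

```python
import re

def preprocess(text: str) -> list[str]:
    """Lowercase, strip punctuation, and whitespace-tokenize.

    Deliberately simple stand-in for illustration; the described
    system would use a BERT subword tokenizer instead.
    """
    text = text.lower()                     # normalization
    text = re.sub(r"[^\w\s']", " ", text)   # replace punctuation with spaces
    return text.split()                     # whitespace tokenization

tokens = preprocess("Oh great, another Monday! Just what I needed!")
# → ['oh', 'great', 'another', 'monday', 'just', 'what', 'i', 'needed']
```

The resulting token sequence is what the embedding layer would map to token and position vectors before the attention stack runs.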
[0053] Upon receiving the processed textual snippet, the data execution module 202 may be configured to apply the Bidirectional Encoder Representations from Transformers (BERT) model to generate contextual embeddings. The generated embeddings may include token embedding, word embedding, and position embedding, enabling a refined understanding of implicit sentiment and sarcasm. For example, "Oh great" typically conveys a positive sentiment, but when juxtaposed with "another Monday," a contradiction arises that signals potential sarcasm. The refined contextual embeddings may then be transmitted to the data employment module 204 for classification.
[0054] The data employment module 204 may be configured to classify the processed textual snippet using a softmax classification layer. A probability distribution over possible classification outcomes may be computed to determine whether the given input is sarcastic or non-sarcastic. Based on the contextual embeddings, the statement may be classified as sarcastic with a 92% confidence score. The classified textual snippet, along with its corresponding confidence score, may be stored in a structured database (not shown) for iterative learning, performance enhancement, and benchmarking. The data employment module 204 may be configured to trigger communication with the external system 108 (as shown in the FIG. 1A) via the application programming interface (API) 110 (as shown in the FIG. 1A) for real-time classification results to be integrated into the applications such as the sentiment analysis, the content moderation, the chatbot enhancements, the generative engines, the document categorizations, the books categorizations, the customer feedback systems, and so forth.
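The SoftMax classification and confidence score described above reduce to a probability distribution over two labels. In this sketch the logit values are hypothetical, chosen only to illustrate a high-confidence "sarcastic" outcome like the one in the example.

```python
import numpy as np

def classify(logits):
    """Map a two-way logit vector to (label, confidence) via softmax."""
    z = logits - np.max(logits)             # subtract max for numerical stability
    probs = np.exp(z) / np.exp(z).sum()     # softmax: probabilities summing to 1
    labels = ["non-sarcastic", "sarcastic"]
    idx = int(np.argmax(probs))
    return labels[idx], float(probs[idx])   # confidence = winning probability

# Hypothetical logits for "Oh great, another Monday! Just what I needed!"
label, confidence = classify(np.array([-1.2, 1.25]))   # confidence ≈ 0.92
```

The confidence score is simply the SoftMax probability of the winning class, which is what the data employment module 204 would store alongside the label in the structured database.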
[0055] FIG. 3 depicts a flowchart of a method 300 for detecting sarcasm with contextual understanding, according to an embodiment of the present invention.
[0056] At step 302, the system 100 may extract the textual snippets fed into the data acquisition unit 104.
[0057] At step 304, the system 100 may execute the Bidirectional Encoder Representations from Transformers (BERT) to process the textual snippets and generate contextual embeddings.
[0058] At step 306, the system 100 may leverage the self-attention mechanisms within the Bidirectional Encoder Representations from Transformers (BERT) to capture linguistic nuances and contextual dependencies in the extracted textual snippets.
[0059] At step 308, the system 100 may classify the processed textual snippets as sarcastic or non-sarcastic using the SoftMax classification layer.
[0060] While the invention has been described in connection with what is presently considered to be the most practical and various embodiments, it is to be understood that the invention is not to be limited to the disclosed embodiments, but on the contrary, is intended to cover various modifications and equivalent arrangements included within the scope of the appended claims.
[0061] This written description uses examples to disclose the invention, including the best mode, and also to enable any person skilled in the art to practice the invention, including making and using any devices or systems and performing any incorporated methods. The patentable scope of the invention is defined in the claims, and may include other examples that occur to those skilled in the art. Such other examples are intended to be within the scope of the claims if they have structural elements that do not differ from the literal language of the claims, or if they include equivalent structural elements within substantial differences from the literal language of the claims.
Claims:
CLAIMS
I/We Claim:
1. A sarcasm detection system (100) with contextual understanding, the system (100) comprising:
a data acquisition unit (104) configured to receive textual snippets from a computing device (102); and
a processing unit (106) communicatively coupled to the data acquisition unit (104); characterized in that the processing unit (106) is configured to:
extract and preprocess textual snippets received by the data acquisition unit (104);
apply a Bidirectional Encoder Representations from Transformers (BERT) to generate contextual embeddings from the extracted textual snippets;
leverage self-attention mechanisms within the Bidirectional Encoder Representations from Transformers (BERT) to capture linguistic nuances and contextual dependencies in the extracted textual snippets; and
classify the processed textual snippets as sarcastic or non-sarcastic using a SoftMax classification layer.
2. The system (100) as claimed in claim 1, wherein the extracted textual snippets are categorized as matrix elements of a matrix (116), selected from a contextual understanding, a nuanced interpretation, an ambiguous statement, a linguistic and cultural variability, a slang and informal language, a role of emotion, a conversational context, or a combination thereof.
3. The system (100) as claimed in claim 1, comprising a Bidirectional Encoder Representations from Transformers (BERT) model for refinement of the generated contextual embeddings.
4. The system (100) as claimed in claim 1, wherein the generated contextual embeddings comprise token embedding, word embedding, position embedding, or a combination thereof.
5. The system (100) as claimed in claim 1, wherein the textual snippets are extracted by employing self-attention and inter-attention layers to enhance classification accuracy.
6. A method (300) for detecting sarcasm with contextual understanding, the method (300) is characterized by steps of:
extracting and preprocessing textual snippets received by the data acquisition unit (104);
applying a Bidirectional Encoder Representations from Transformers (BERT) to generate contextual embeddings from the extracted textual snippets;
leveraging self-attention mechanisms within the Bidirectional Encoder Representations from Transformers (BERT) to capture linguistic nuances and contextual dependencies in the extracted textual snippets; and
classifying the processed textual snippets as sarcastic or non-sarcastic using a SoftMax classification layer.
7. The method (300) as claimed in claim 6, comprising a Bidirectional Encoder Representations from Transformers (BERT) model for refinement of the generated contextual embeddings.
8. The method (300) as claimed in claim 6, wherein the generated contextual embeddings comprise token embedding, word embedding, position embedding, or a combination thereof.
9. The method (300) as claimed in claim 6, wherein the textual snippets are extracted by employing self-attention and inter-attention layers to enhance classification accuracy.
10. The method (300) as claimed in claim 6, wherein the extracted textual snippets are categorized as matrix elements of a matrix (116), selected from a contextual understanding, a nuanced interpretation, an ambiguous statement, a linguistic and cultural variability, a slang and informal language, a role of emotion, a conversational context, or a combination thereof.
Date: March 21, 2025
Place: Noida
Dr. Keerti Gupta
Agent for the Applicant
(IN/PA-1529)

Documents

Application Documents

# Name Date
1 202541027372-STATEMENT OF UNDERTAKING (FORM 3) [24-03-2025(online)].pdf 2025-03-24
2 202541027372-REQUEST FOR EARLY PUBLICATION(FORM-9) [24-03-2025(online)].pdf 2025-03-24
3 202541027372-POWER OF AUTHORITY [24-03-2025(online)].pdf 2025-03-24
4 202541027372-OTHERS [24-03-2025(online)].pdf 2025-03-24
5 202541027372-FORM-9 [24-03-2025(online)].pdf 2025-03-24
6 202541027372-FORM FOR SMALL ENTITY(FORM-28) [24-03-2025(online)].pdf 2025-03-24
7 202541027372-FORM 1 [24-03-2025(online)].pdf 2025-03-24
8 202541027372-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [24-03-2025(online)].pdf 2025-03-24
9 202541027372-EDUCATIONAL INSTITUTION(S) [24-03-2025(online)].pdf 2025-03-24
10 202541027372-DRAWINGS [24-03-2025(online)].pdf 2025-03-24
11 202541027372-DECLARATION OF INVENTORSHIP (FORM 5) [24-03-2025(online)].pdf 2025-03-24
12 202541027372-COMPLETE SPECIFICATION [24-03-2025(online)].pdf 2025-03-24