Irony And Stereotype Detection System

Abstract: Disclosed herein is an irony and stereotype detection system (100) that comprises a user device (102) configured to acquire real-time and batch data from social media platforms, historical user activity logs, and associated metadata, a microprocessor (110) configured to process data, which further comprises a data ingestion module (112) configured to receive data, a pre-processing module (114) configured to clean and normalise the received data, a feature extraction module (116) configured to extract features, a classification module (118) configured to classify the data as neutral and non-neutral data, an irony detection module (120) configured to detect the irony using natural language processing models, a stereotype identification module (122) configured to identify stereotypes using semantic similarity models, a profiling module (128) configured to generate behaviour profiles, an insight generation module (130) configured to generate insights, and an output module (132) configured to transmit the output to the user device (102).

Patent Information

Application #: 202541048894
Filing Date: 21 May 2025
Publication Number: 23/2025
Publication Type: INA
Invention Field: COMPUTER SCIENCE
Status:
Email:
Parent Application:

Applicants

SR UNIVERSITY
ANANTHSAGAR, HASANPARTHY (M), WARANGAL URBAN, TELANGANA - 506371, INDIA

Inventors

1. SRAVAN KUMAR DEVULAPALLI
SR UNIVERSITY, ANANTHSAGAR, HASANPARTHY (M), WARANGAL URBAN, TELANGANA - 506371, INDIA
2. SURESH KUMAR MANDALA
SR UNIVERSITY, ANANTHSAGAR, HASANPARTHY (M), WARANGAL URBAN, TELANGANA - 506371, INDIA

Specification

Description:FIELD OF DISCLOSURE
[0001] The present disclosure generally relates to detection systems and, more specifically, to an irony and stereotype detection system based on deep learning.
BACKGROUND OF THE DISCLOSURE
[0002] Irony and stereotypes are pervasive elements of human communication, often shaping how individuals express opinions, humour, and critique. Irony is especially common on social media platforms, where users frequently use sarcasm or an ironic tone to respond to events, express dissatisfaction, or engage in satire. Stereotypes, by contrast, can appear in casual or humorous contexts, yet they frequently perpetuate bias and reinforce social inequalities. The detection and interpretation of irony and stereotypes in digital communication pose significant challenges for both human readers and artificial intelligence systems, due to their reliance on context, tone, and cultural understanding. As social media becomes an increasingly dominant space for public discourse, analysing these elements is essential for understanding online behaviour, preventing harm, and designing more ethical smart systems.
[0003] Traditional systems for detecting irony and stereotypes in text still face notable limitations. Irony detection systems often rely heavily on surface-level lexical cues and fail to capture the contextual subtleties required to identify ironic or sarcastic expressions. Such systems may misclassify ironic statements as positive or neutral due to the literal meanings of the words, ignoring the underlying intent. Similarly, existing stereotype detection systems typically depend on predefined keyword lists or static rule-based approaches, which struggle to identify implicit or context-dependent biases. Additionally, many previous models are trained on limited or biased datasets that do not reflect the diversity and complexity of real-world language use, particularly in informal environments like social media. As a result, they often underperform in detecting nuanced expressions and may inadvertently reinforce the very biases they aim to mitigate. Moreover, a significant shortcoming of traditional systems is the lack of an integrated framework capable of handling both irony and stereotype detection within a single system. Most research efforts and tools address these challenges separately, missing opportunities to leverage their interdependence. These limitations highlight the need for more sophisticated, context-aware, and ethically designed models that can better interpret human language in all its complexity.
[0004] The present invention offers a unified approach to address the limitations of the prior art by introducing an irony and stereotype detection system to detect and profile irony and stereotypes, particularly within informal and context-rich environments such as social media. Unlike conventional systems that treat these phenomena separately, the present invention leverages shared linguistic and contextual features to improve detection accuracy and interpretability. By integrating irony and stereotype analysis into a single framework, the present invention captures complex interactions between sarcastic tone and biased language, which are often overlooked in prior art. Additionally, the present invention demonstrates improved adaptability across diverse domains and dialects, reducing reliance on static keyword lists and enhancing robustness in real-world applications. These advantages collectively enable more effective content moderation, social behaviour analysis, and ethical smart design, offering practical value to industries such as media monitoring, public policy, and online platform governance.
[0005] Thus, in light of the above-stated discussion, there exists a need for an irony and stereotype detection system.
SUMMARY OF THE DISCLOSURE
[0006] The following is a summary description of illustrative embodiments of the invention. It is provided as a preface to assist those skilled in the art to more rapidly assimilate the detailed design discussion which ensues and is not intended in any way to limit the scope of the claims which are appended hereto in order to particularly point out the invention.
[0007] According to illustrative embodiments, the present disclosure focuses on an irony and stereotype detection system which overcomes the above-mentioned disadvantages or provides the users with a useful or commercial choice.
[0008] An objective of the present disclosure is to develop an irony and stereotype detection system based on advanced technology.
[0009] Another objective of the present disclosure is to provide a system for real-time and accurate detection of irony and stereotypes while preventing misinterpretation.
[0010] Another objective of the present disclosure is to create a system for enabling irony detection and stereotype identification within a single framework.
[0011] Another objective of the present disclosure is to provide a scalable and automated system for identifying and profiling user-generated content across social platforms.
[0012] Yet another objective of the present disclosure is to offer a system to support social media platforms, forums, and news sites in moderating harmful and misleading content.
[0013] In light of the above, in one aspect of the present disclosure, an irony and stereotype detection system is disclosed herein. The system comprises a user device configured to acquire real-time and batch data from social media platforms, historical user activity logs, and associated metadata. The system includes a microprocessor connected to the user device via a communication network and configured to process data for irony detection and stereotype identification, wherein the microprocessor further comprises a data ingestion module configured to receive data from the user device, a pre-processing module configured to clean and normalise the received data, a feature extraction module configured to extract features relevant to irony and stereotype analysis, a classification module configured to analyse the extracted features and classify as neutral and non-neutral data, an irony detection module configured to detect the irony in the classified non-neutral data using natural language processing models, a stereotype identification module configured to identify stereotypes in the classified non-neutral data using semantic similarity models, a profiling module configured to generate behaviour profiles based on pattern analysis of the detected irony and identified stereotypes, an insight generation module configured to generate insights based on the detected irony, identified stereotypes, and the generated behaviour profiles, and an output module configured to transmit the classified neutral and non-neutral data, detected irony, identified stereotypes, generated behaviour profiles and insights to the user device.
[0014] In one embodiment, the system further comprises a cloud database configured to store and manage received data, extracted features, classification and profiling results, detected and identified data, generated insights and stereotype knowledge graphs.
[0015] In one embodiment, the classified neutral data is devoid of irony, stereotypes and contains objective, factual information, while the classified non-neutral data expresses irony, stereotypes and reflects non-objective, emotive content.
[0016] In one embodiment, the irony detection module utilises transformer-based natural language processing models.
[0017] In one embodiment, the stereotype identification module utilises semantic similarity models in combination with a stereotype knowledge graph.
[0018] In one embodiment, the microprocessor further comprises a stereotype categorisation module configured to categorise the identified stereotypes into stereotype categories including gender, racial, religious and cultural stereotypes.
[0019] In one embodiment, the microprocessor further comprises an evaluation module configured to evaluate the severity, frequency and potential impact of each detected irony and identified stereotype.
[0020] In one embodiment, the insight generation module assigns a risk score to each detected irony and identified stereotype based on the severity, frequency and potential impact.
[0021] In one embodiment, the user device receives and displays the classified neutral and non-neutral data, detected irony, identified stereotypes, generated behaviour profiles and insights through a user interface.
[0022] In light of the above, in one aspect of the present disclosure, a method for developing an irony and stereotype detection system is disclosed herein. The method includes acquiring real-time and batch data from social media platforms, historical user activity logs, and associated metadata via a user device. The method also includes receiving data from the user device via a data ingestion module. The method further includes cleaning and normalising the received data via a pre-processing module. Furthermore, the method includes extracting features relevant to irony and stereotype analysis via a feature extraction module. Additionally, the method includes analysing the extracted features and classifying as neutral and non-neutral data via a classification module. Moreover, the method includes detecting the irony in the classified non-neutral data using natural language processing models via an irony detection module. Further, the method includes identifying stereotypes in the classified non-neutral data using semantic similarity models via a stereotype identification module. Also, the method includes analysing patterns in the detected irony and the identified stereotypes to generate behaviour profiles via a profiling module. In addition, the method includes generating insights based on the detected irony, identified stereotypes, and the generated behaviour profiles via an insight generation module. Finally, the method includes transmitting the classified neutral and non-neutral data, detected irony, identified stereotypes, generated behaviour profiles and insights to the user device via an output module.
[0023] These and other advantages will be apparent from the present application of the embodiments described herein.
[0024] The preceding is a simplified summary to provide an understanding of some embodiments of the present invention. This summary is neither an extensive nor exhaustive overview of the present invention and its various embodiments. The summary presents selected concepts of the embodiments of the present invention in a simplified form as an introduction to the more detailed description presented below. As will be appreciated, other embodiments of the present invention are possible utilizing, alone or in combination, one or more of the features set forth above or described in detail below.
[0025] These elements, together with the other aspects of the present disclosure and various features are pointed out with particularity in the claims annexed hereto and form a part of the present disclosure. For a better understanding of the present disclosure, its operating advantages, and the specified object attained by its uses, reference should be made to the accompanying drawings and descriptive matter in which there are illustrated exemplary embodiments of the present disclosure.
BRIEF DESCRIPTION OF THE DRAWINGS
[0026] To describe the technical solutions in the embodiments of the present disclosure or in the prior art more clearly, the following briefly describes the accompanying drawings required for describing the embodiments or the prior art. Apparently, the accompanying drawings in the following description merely show some embodiments of the present disclosure, and a person of ordinary skill in the art can derive other implementations from these accompanying drawings without creative efforts. All of the embodiments or the implementations shall fall within the protection scope of the present disclosure.
[0027] The advantages and features of the present disclosure will become better understood with reference to the following detailed description taken in conjunction with the accompanying drawing, in which:
[0028] FIG. 1 illustrates a block diagram of an irony and stereotype detection system, in accordance with an embodiment of the present disclosure; and
[0029] FIG. 2 illustrates a flowchart of a method, outlining the sequential steps for developing an irony and stereotype detection system, in accordance with an embodiment of the present disclosure.
[0030] Like reference numerals refer to like parts throughout the description of the several views of the drawings.
[0031] The irony and stereotype detection system is illustrated in the accompanying drawings, in which like reference letters indicate corresponding parts in the various figures. It should be noted that the accompanying figures are intended to present illustrations of exemplary embodiments of the present disclosure. These figures are not intended to limit the scope of the present disclosure. It should also be noted that the accompanying figures are not necessarily drawn to scale.
DETAILED DESCRIPTION OF THE DISCLOSURE
[0032] The following is a detailed description of embodiments of the disclosure depicted in the accompanying drawings. The embodiments are in such detail as to communicate the disclosure. However, the amount of detail offered is not intended to limit the anticipated variations of embodiments; on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the scope of the present disclosure.
[0033] In the following description, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the present disclosure. It may be apparent to one skilled in the art that embodiments of the present disclosure may be practiced without some of these specific details.
[0034] Various terms as used herein are shown below. To the extent a term is used, it should be given the broadest definition persons in the pertinent art have given that term as reflected in printed publications and issued patents at the time of filing.
[0035] The terms “a” and “an” herein do not denote a limitation of quantity, but rather denote the presence of at least one of the referenced items.
[0036] The terms “having”, “comprising”, “including”, and variations thereof signify the presence of a component.
[0037] Reference is now made to FIG. 1 and FIG. 2 to describe various exemplary embodiments of the present disclosure. FIG. 1 illustrates a block diagram of an irony and stereotype detection system 100, in accordance with an embodiment of the present disclosure.
[0038] The system 100 may include a user device 102, a user interface 104, a communication network 106, a cloud database 108, a microprocessor 110 which further comprises a data ingestion module 112, a pre-processing module 114, a feature extraction module 116, a classification module 118, an irony detection module 120, a stereotype identification module 122, a stereotype categorisation module 124, an evaluation module 126, a profiling module 128, an insight generation module 130, and an output module 132.
[0039] The user device 102 is configured to acquire real-time and batch data from social media platforms, historical user activity logs, and associated metadata. The user device 102 collects user-generated real-time and batch social media data, such as tweets, posts, comments, user activity logs and user metadata such as timestamps and post history through web scraping tools.
[0040] In one embodiment of the present invention, the user device 102 may include, but is not limited to, a smartphone, laptop, computer, or smart wearable device.
[0041] In one embodiment of the present invention, the user device 102 receives and displays the classified neutral and non-neutral data, detected irony, identified stereotypes, generated behaviour profiles and insights through the user interface 104. The categorised stereotypes and evaluated data are also displayed on the user interface 104 of the user device 102.
[0042] The microprocessor 110 is connected to the user device 102 via a communication network 106 and is configured to process data for irony detection and stereotype identification, wherein the microprocessor 110 further comprises several modules. The microprocessor 110 executes detection and ensures real-time processing.
[0043] In one embodiment of the present invention, the communication network 106 enables seamless data transmission within the system 100.
[0044] In one embodiment of the present invention, the communication network 106 may include wired and wireless networks.
[0045] The data ingestion module 112 is configured to receive data from the user device 102.
[0046] The pre-processing module 114 is configured to clean and normalise the received data. The module 114 removes noise such as emojis and URLs, and performs tokenisation, lemmatisation, and language standardisation.
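By way of a non-limiting illustration, the cleaning and normalisation performed by the pre-processing module 114 could be sketched in Python as follows; the regular expressions, the use of spaCy, and its small English model are assumptions made purely for illustration and do not form part of the disclosure.

# Illustrative pre-processing sketch: strips URLs and emojis, lower-cases,
# tokenises and lemmatises a post. spaCy's small English model is assumed
# to be installed (python -m spacy download en_core_web_sm).
import re
import spacy

nlp = spacy.load("en_core_web_sm")

URL_RE = re.compile(r"https?://\S+|www\.\S+")
EMOJI_RE = re.compile(r"[\U0001F300-\U0001FAFF\u2600-\u27BF]")  # rough emoji ranges

def preprocess(text: str) -> list[str]:
    """Clean and normalise one piece of user-generated text."""
    text = URL_RE.sub(" ", text)          # remove URLs
    text = EMOJI_RE.sub(" ", text)        # remove most emoji characters
    text = re.sub(r"\s+", " ", text).strip().lower()
    doc = nlp(text)
    # keep lemmas of alphabetic tokens only (drops punctuation and numbers)
    return [tok.lemma_ for tok in doc if tok.is_alpha]

print(preprocess("Oh GREAT, another Monday 🙄 https://example.com"))
# e.g. ['oh', 'great', 'another', 'monday']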
[0047] The feature extraction module 116 is configured to extract features relevant to irony and stereotype analysis. The module 116 converts text into numerical features, which are further used by deep learning algorithms, and extracts syntactic and semantic features as well as sentiment scores.
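As a hedged illustration of how the feature extraction module 116 might combine lexical features with sentiment scores, the following Python sketch uses scikit-learn's TF-IDF vectoriser and NLTK's VADER analyser; both library choices and the example texts are assumptions and not part of the disclosed system.

# Illustrative feature-extraction sketch: combines TF-IDF lexical features
# with a VADER sentiment score per text into one numeric feature matrix.
# (The VADER lexicon must be downloaded once: nltk.download("vader_lexicon").)
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from nltk.sentiment import SentimentIntensityAnalyzer

texts = [
    "The new policy was announced on Monday.",
    "Oh sure, waiting three hours was exactly what I needed today.",
]

vectorizer = TfidfVectorizer(max_features=5000, ngram_range=(1, 2))
lexical = vectorizer.fit_transform(texts).toarray()        # shape: (n_texts, n_terms)

sia = SentimentIntensityAnalyzer()
sentiment = np.array([[sia.polarity_scores(t)["compound"]] for t in texts])

features = np.hstack([lexical, sentiment])                  # final numeric feature matrix
print(features.shape)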
[0048] The classification module 118 is configured to analyse the extracted features and classify as neutral and non-neutral data. The module 118 tags the type of content for deep analysis.
[0049] In one embodiment of the present invention, the classified neutral data is devoid of irony, stereotypes and contains objective, factual information, while the classified non-neutral data expresses irony, stereotypes and reflects non-objective, emotive content.
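A minimal sketch of the neutral versus non-neutral classification performed by the classification module 118 is given below, assuming a simple linear classifier over text features; the toy training data and the choice of logistic regression are illustrative assumptions only.

# Illustrative neutral vs non-neutral classifier: a linear model trained on
# text features. Labels and examples are toy placeholders; a real deployment
# would train on an annotated corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "The meeting starts at 10 am.",                    # neutral
    "Results were published in the annual report.",    # neutral
    "Wow, another delay, how wonderful.",               # non-neutral (ironic)
    "People like that never work hard anyway.",         # non-neutral (stereotype)
]
train_labels = ["neutral", "neutral", "non-neutral", "non-neutral"]

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
clf.fit(train_texts, train_labels)

print(clf.predict(["Oh great, the server is down again."]))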
[0050] The irony detection module 120 is configured to detect the irony in the classified non-neutral data using natural language processing models. The module 120 applies attention mechanisms to detect linguistic cues such as contradiction, sentiment shift, and exaggeration. Further, the module 120 performs syntactic and semantic analysis to improve context-sensitive irony detection.
[0051] In one embodiment of the present invention, the irony detection module 120 utilises transformer-based natural language processing models.
[0052] In one embodiment of the present invention, the transformer-based natural language processing models may include, but are not limited to, BERT and RoBERTa.
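One possible, non-authoritative realisation of the transformer-based irony detection performed by the module 120 is sketched below using the Hugging Face transformers library; the specific checkpoint named is an assumption, and any BERT- or RoBERTa-based model fine-tuned for irony or sarcasm detection could be substituted.

# Illustrative irony detection sketch using a RoBERTa-based classifier from
# the Hugging Face hub. The model name is an assumption made for illustration.
from transformers import pipeline

irony_clf = pipeline(
    "text-classification",
    model="cardiffnlp/twitter-roberta-base-irony",  # assumed publicly available checkpoint
)

posts = [
    "I just love being stuck in traffic for two hours.",
    "The library opens at 9 am on weekdays.",
]
for post in posts:
    result = irony_clf(post)[0]                     # e.g. {'label': ..., 'score': ...}
    print(f"{result['label']:>12}  {result['score']:.2f}  {post}")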
[0053] The stereotype identification module 122 is configured to identify stereotypes in the classified non-neutral data using semantic similarity models. The semantic similarity models may include, but are not limited to, SBERT and RoBERTa. These models match user-generated text against known stereotype patterns pre-saved in the cloud database 108.
[0054] In one embodiment of the present invention, the stereotype identification module 122 utilises semantic similarity models in combination with a stereotype knowledge graph.
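A minimal sketch of semantic-similarity-based stereotype matching, as described for the stereotype identification module 122, is given below using the sentence-transformers library; the model name, the small in-memory pattern list standing in for the patterns stored in the cloud database 108, and the similarity threshold are all assumptions for illustration.

# Illustrative stereotype identification sketch: SBERT embeddings of incoming
# text are compared against a small set of known stereotype patterns.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")   # assumed SBERT checkpoint

stereotype_patterns = [
    "women are bad at driving",
    "people from that country are always late",
    "members of that group are lazy",
]
pattern_emb = model.encode(stereotype_patterns, convert_to_tensor=True)

def find_stereotypes(text: str, threshold: float = 0.6):
    """Return stereotype patterns whose cosine similarity to the text exceeds the threshold."""
    text_emb = model.encode(text, convert_to_tensor=True)
    scores = util.cos_sim(text_emb, pattern_emb)[0]   # one score per stored pattern
    return [
        (stereotype_patterns[i], float(scores[i]))
        for i in range(len(stereotype_patterns))
        if scores[i] >= threshold
    ]

print(find_stereotypes("my neighbours from that country never show up on time"))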
[0055] In one embodiment of the present invention, the microprocessor 110 further comprises a stereotype categorisation module 124 configured to categorise the identified stereotypes into stereotype categories including gender, racial, religious and cultural stereotypes. The module 124 leverages a stereotype knowledge graph to map text against structured stereotype categories.
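The mapping of matched patterns onto stereotype categories through a knowledge graph, as performed by the stereotype categorisation module 124, could be sketched as follows; the toy graph built with networkx is an assumption and merely stands in for the actual stereotype knowledge graph.

# Illustrative categorisation sketch: a minimal "knowledge graph" linking
# matched stereotype patterns to the categories named in the disclosure
# (gender, racial, religious, cultural). Graph contents are toy placeholders.
import networkx as nx

kg = nx.DiGraph()
kg.add_edge("women are bad at driving", "gender", relation="instance_of")
kg.add_edge("people from that country are always late", "cultural", relation="instance_of")
kg.add_edge("members of that group are lazy", "racial", relation="instance_of")

def categorise(matched_patterns):
    """Map each matched stereotype pattern to its category via the graph."""
    return {p: list(kg.successors(p)) for p in matched_patterns if p in kg}

print(categorise(["people from that country are always late"]))
# {'people from that country are always late': ['cultural']}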
[0056] In one embodiment of the present invention, the microprocessor 110 further comprises an evaluation module 126 configured to evaluate the severity, frequency and potential impact of each detected irony and identified stereotype. The module 126 evaluates the severity of each instance, tracks the frequency of such content, and assesses its potential social and emotional impact. Also, the module 126 measures the reliability of classification and detection for accuracy.
[0057] The profiling module 128 is configured to generate behaviour profiles based on pattern analysis of the detected irony and identified stereotypes. The profiling module 128 constructs behaviour profiles that reflect the user's communication style, attitude, and potential ideological and social tendencies.
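A minimal sketch of how the profiling module 128 might aggregate per-user detection results into a behaviour profile is given below; the profile fields and update logic are illustrative assumptions rather than a definitive implementation.

# Illustrative profiling sketch: aggregates detection results into a simple
# per-user behaviour profile. Field names are assumptions for illustration.
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class BehaviourProfile:
    user_id: str
    total_posts: int = 0
    irony_count: int = 0
    stereotype_counts: Counter = field(default_factory=Counter)

    def update(self, is_ironic: bool, stereotype_categories: list[str]) -> None:
        """Fold one analysed post into the profile."""
        self.total_posts += 1
        self.irony_count += int(is_ironic)
        self.stereotype_counts.update(stereotype_categories)

    @property
    def irony_ratio(self) -> float:
        return self.irony_count / self.total_posts if self.total_posts else 0.0

profile = BehaviourProfile(user_id="user_42")
profile.update(is_ironic=True, stereotype_categories=[])
profile.update(is_ironic=False, stereotype_categories=["gender"])
print(profile.irony_ratio, dict(profile.stereotype_counts))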
[0058] The insight generation module 130 is configured to generate insights based on the detected irony, identified stereotypes, and the generated behaviour profiles. The module 130 generates summaries, statistics, and trends from analysis to support decision-making.
[0059] In one embodiment of the present invention, the insight generation module 130 assigns a risk score to each detected irony and identified stereotype based on the severity, frequency and potential impact.
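The disclosure does not prescribe a particular risk-scoring formula; one simple possibility, assuming severity, frequency and potential impact are each normalised to the range 0 to 1 and combined with illustrative weights, is sketched below.

# Illustrative risk-score sketch: a weighted combination of severity,
# frequency and potential impact. The weights are assumptions only.
def risk_score(severity: float, frequency: float, impact: float,
               weights=(0.4, 0.25, 0.35)) -> float:
    """Return a risk score in [0, 1] from three inputs normalised to [0, 1]."""
    w_sev, w_freq, w_imp = weights
    score = w_sev * severity + w_freq * frequency + w_imp * impact
    return round(min(max(score, 0.0), 1.0), 3)

# e.g. a severe but infrequent stereotype with moderate reach
print(risk_score(severity=0.9, frequency=0.2, impact=0.6))   # 0.62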
[0060] The output module 132 is configured to transmit the classified neutral and non-neutral data, detected irony, identified stereotypes, generated behaviour profiles and insights to the user device 102.
[0061] In one embodiment of the present invention, the system 100 further comprises a cloud database 108 configured to store and manage received data, extracted features, classification and profiling results, detected and identified data, generated insights and stereotype knowledge graphs.
[0062] FIG. 2 illustrates a flowchart of a method 200, outlining the sequential steps for developing an irony and stereotype detection system 100, in accordance with an embodiment of the present disclosure.
[0063] At step 202, real-time and batch data from social media platforms, historical user activity logs, and associated metadata are acquired via a user device 102.
[0064] At step 204, data is received from the user device 102 via the data ingestion module 112.
[0065] At step 206, the received data is cleaned and normalised via the pre-processing module 114.
[0066] At step 208, features relevant to irony and stereotype analysis are extracted via the feature extraction module 116.
[0067] At step 210, the extracted features are analysed and classified as neutral and non-neutral data via the classification module 118.
[0068] At step 212, the irony in the classified non-neutral data is detected using natural language processing models via the irony detection module 120.
[0069] At step 214, stereotypes in the classified non-neutral data are identified using semantic similarity models via the stereotype identification module 122.
[0070] At step 216, patterns in the detected irony and the identified stereotypes are analysed to generate behaviour profiles via the profiling module 128.
[0071] At step 218, insights are generated based on the detected irony, identified stereotypes, and the generated behaviour profiles via the insight generation module 130.
[0072] At step 220, the classified neutral and non-neutral data, detected irony, identified stereotypes, generated behaviour profiles and insights are transmitted to the user device 102 via the output module 132.
[0073] In the best mode of operation of the present invention, the user device 102 collects diverse user-generated content such as tweets, posts, and comments along with metadata including timestamps and historical logs using web scraping tools through the user interface 104. This data is transmitted through the communication network 106 to the microprocessor 110 for processing. Initially, the data ingestion module 112 receives the input, and the pre-processing module 114 standardises and cleans the received data by removing noise such as emojis and URLs, and applies linguistic normalisation techniques such as lemmatisation and tokenisation. The cleaned data is then processed by the feature extraction module 116, which derives syntactic, semantic, and sentiment-based features, converting text into numerical features usable by deep learning models. These features are evaluated by the classification module 118, which categorises the data as neutral or non-neutral, where non-neutral data is indicative of emotive and subjective content. The irony detection module 120, using transformer-based models like BERT or RoBERTa, analyses non-neutral data for linguistic cues such as contradiction and exaggeration to detect irony. Simultaneously, the stereotype identification module 122 applies semantic similarity models such as SBERT and RoBERTa, together with a stereotype knowledge graph, to match the data with pre-saved historical stereotype patterns stored in the cloud database 108. Identified stereotypes are further classified into structured categories such as gender, racial, religious and cultural by the stereotype categorisation module 124. The evaluation module 126 assesses each instance of detected irony and identified stereotypes for severity, frequency, and social impact. The profiling module 128 constructs user behaviour profiles based on recurring linguistic tendencies, communication style, and ideological inclinations. The insight generation module 130 generates summaries, trend analyses, and risk scores to aid strategic decision-making. Finally, the output module 132 transmits all processed results, including classified data, detected irony, stereotypes, behaviour profiles, and insights, to the user device 102 for real-time display and user engagement.
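For orientation only, the overall flow described above can be mirrored by the following Python sketch, in which each module function is a stub standing in for the fuller sketches given earlier; all function names and placeholder rules are assumptions, intended solely to make the control flow explicit.

# Illustrative end-to-end flow: ingest -> pre-process -> classify -> detect
# irony and stereotypes -> profile -> output. Stubs replace the real models.
def preprocess(text: str) -> str:
    return text.lower().strip()

def classify(text: str) -> str:
    # placeholder rule; a trained classifier would be used in practice
    cues = ("great", "never", "always")
    return "non-neutral" if any(cue in text for cue in cues) else "neutral"

def detect_irony(text: str) -> bool:
    return "great" in text                            # placeholder for the transformer model

def identify_stereotypes(text: str) -> list:
    return ["generalisation"] if ("never" in text or "always" in text) else []

def run_pipeline(posts: list) -> dict:
    results = []
    for post in posts:
        clean = preprocess(post)
        label = classify(clean)
        ironic = detect_irony(clean) if label == "non-neutral" else False
        stereotypes = identify_stereotypes(clean) if label == "non-neutral" else []
        results.append({"text": post, "class": label, "ironic": ironic, "stereotypes": stereotypes})
    profile = {"irony_ratio": sum(r["ironic"] for r in results) / len(results)}
    return {"results": results, "profile": profile}

print(run_pipeline(["Oh great, another outage.", "The office opens at 9 am."]))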
[0074] The present invention provides a highly accurate, context-aware, and real-time solution for detecting irony and identifying stereotypes in user-generated content. The present invention leverages advanced natural language processing techniques, including transformer-based models and semantic similarity algorithms. The system 100 allows both batch and real-time analysis across diverse social media platforms. The integration of a stereotype knowledge graph and behavioural profiling enhances the ability of the system 100 to not only detect but also categorise stereotypes and evaluate their potential social impact. Furthermore, the system 100 is capable of generating detailed behavioural profiles and actionable insights to make informed decisions.
[0075] While the invention has been described in connection with what is presently considered to be the most practical and various embodiments, it will be understood that the invention is not to be limited to the disclosed embodiments, but on the contrary, is intended to cover various modifications and equivalent arrangements included within the scope of the appended claims.
[0076] A person of ordinary skill in the art may be aware that, in combination with the examples described in the embodiments disclosed in this specification, units and algorithm steps may be implemented by electronic hardware, computer software, or a combination thereof.
[0077] The foregoing descriptions of specific embodiments of the present disclosure have been presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the present disclosure to the precise forms disclosed, and many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described to best explain the principles of the present disclosure and its practical application, and to thereby enable others skilled in the art to best utilize the present disclosure and various embodiments with various modifications as are suited to the particular use contemplated. It is understood that various omissions and substitutions of equivalents are contemplated as circumstances may suggest or render expedient, but such omissions and substitutions are intended to cover the application or implementation without departing from the scope of the present disclosure.
[0078] Disjunctive language such as the phrase “at least one of X, Y, Z,” unless specifically stated otherwise, is otherwise understood with the context as used in general to present that an item, term, etc., may be either X, Y, or Z, or any combination thereof (e.g., X, Y, and/or Z). Thus, such disjunctive language is not generally intended to, and should not, imply that certain embodiments require at least one of X, at least one of Y, or at least one of Z to each be present.
[0079] In a case that no conflict occurs, the embodiments in the present disclosure and the features in the embodiments may be mutually combined. The foregoing descriptions are merely specific implementations of the present disclosure, but are not intended to limit the protection scope of the present disclosure. Any variation or replacement readily figured out by a person skilled in the art within the technical scope disclosed in the present disclosure shall fall within the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.
Claims:I/We Claim:
1. An irony and stereotype detection system (100), the system (100) comprising:
a user device (102) configured to acquire real-time and batch data from social media platforms, historical user activity logs, and associated metadata;
a microprocessor (110) connected to the user device (102) via a communication network (106) and configured to process data for irony detection and stereotype identification, wherein the microprocessor (110) further comprises:
a data ingestion module (112) configured to receive data from the user device (102);
a pre-processing module (114) configured to clean and normalise the received data;
a feature extraction module (116) configured to extract features relevant to irony and stereotype analysis;
a classification module (118) configured to analyse the extracted features and classify as neutral and non-neutral data;
an irony detection module (120) configured to detect the irony in the classified non-neutral data using natural language processing models;
a stereotype identification module (122) configured to identify stereotypes in the classified non-neutral data using semantic similarity models;
a profiling module (128) configured to generate behaviour profiles based on pattern analysis of the detected irony and identified stereotypes;
an insight generation module (130) configured to generate insights based on the detected irony, identified stereotypes, and the generated behaviour profiles; and
an output module (132) configured to transmit the classified neutral and non-neutral data, detected irony, identified stereotypes, generated behaviour profiles and insights to the user device (102).
2. The system (100) as claimed in claim 1, wherein the system (100) further comprises a cloud database (108) configured to store and manage received data, extracted features, classification and profiling results, detected and identified data, generated insights and stereotype knowledge graphs.
3. The system (100) as claimed in claim 1, wherein the classified neutral data is devoid of irony, stereotypes and contains objective, factual information, while the classified non-neutral data expresses irony, stereotypes and reflects non-objective, emotive content.
4. The system (100) as claimed in claim 1, wherein the irony detection module (120) utilises transformer-based natural language processing models.
5. The system (100) as claimed in claim 1, wherein the stereotype identification module (122) utilises semantic similarity models in combination with a stereotype knowledge graph.
6. The system (100) as claimed in claim 1, wherein the microprocessor (110) further comprises a stereotype categorisation module (124) configured to categorise the identified stereotypes into stereotype categories including gender, racial, religious and cultural stereotypes.
7. The system (100) as claimed in claim 1, wherein the microprocessor (110) further comprises an evaluation module (126) configured to evaluate the severity, frequency and potential impact of each detected irony and identified stereotype.
8. The system (100) as claimed in claim 1, wherein the insight generation module (130) assigns a risk score to each detected irony and identified stereotype based on the severity, frequency and potential impact.
9. The system (100) as claimed in claim 1, wherein the user device (102) receives and displays the classified neutral and non-neutral data, detected irony, identified stereotypes, generated behaviour profiles and insights through a user interface (104).
10. A method (200) for developing an irony and stereotype detection system (100), the method (200) comprising:
acquiring real-time and batch data from social media platforms, historical user activity logs, and associated metadata via a user device (102);
receiving data from the user device (102) via a data ingestion module (112);
cleaning and normalising the received data via a pre-processing module (114);
extracting features relevant to irony and stereotype analysis via a feature extraction module (116);
analysing the extracted features and classifying as neutral and non-neutral data via a classification module (118);
detecting the irony in the classified non-neutral data using natural language processing models via an irony detection module (120);
identifying stereotypes in the classified non-neutral data using semantic similarity models via a stereotype identification module (122);
analysing patterns in the detected irony and the identified stereotypes to generate behaviour profiles via a profiling module (128);
generating insights based on the detected irony, identified stereotypes, and the generated behaviour profiles via an insight generation module (130); and
transmitting the classified neutral and non-neutral data, detected irony, identified stereotypes, generated behaviour profiles and insights to the user device (102) via an output module (132).

Documents

Application Documents

# Name Date
1 202541048894-STATEMENT OF UNDERTAKING (FORM 3) [21-05-2025(online)].pdf 2025-05-21
2 202541048894-REQUEST FOR EARLY PUBLICATION(FORM-9) [21-05-2025(online)].pdf 2025-05-21
3 202541048894-POWER OF AUTHORITY [21-05-2025(online)].pdf 2025-05-21
4 202541048894-FORM-9 [21-05-2025(online)].pdf 2025-05-21
5 202541048894-FORM FOR SMALL ENTITY(FORM-28) [21-05-2025(online)].pdf 2025-05-21
6 202541048894-FORM 1 [21-05-2025(online)].pdf 2025-05-21
7 202541048894-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [21-05-2025(online)].pdf 2025-05-21
8 202541048894-DRAWINGS [21-05-2025(online)].pdf 2025-05-21
9 202541048894-DECLARATION OF INVENTORSHIP (FORM 5) [21-05-2025(online)].pdf 2025-05-21
10 202541048894-COMPLETE SPECIFICATION [21-05-2025(online)].pdf 2025-05-21
11 202541048894-Proof of Right [30-05-2025(online)].pdf 2025-05-30