
System And Method For Reviewing Content

Abstract: The present disclosure provides a system and a method for reviewing content. The proposed content review system 104 receives content transmitted by a router 102 and identifies one or more words from the received content based on one or more recognition techniques. One recognition technique compares the words of the received content with a pre-configured dataset comprising a list of words to be identified in the content. In another recognition technique, one or more words are identified from the received content based on interpretation of a context associated with the received content, using one or more classifiers of a CNN. The identified words are either filtered out or substituted with pre-determined words. The reviewed content is then transmitted to the server 106.


Patent Information

Application #
Filing Date
31 January 2020
Publication Number
32/2021
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Parent Application
Patent Number
Legal Status
Grant Date
2025-02-16
Renewal Date

Applicants

Chitkara Innovation Incubator Foundation
SCO: 160-161, Sector -9c, Madhya Marg, Chandigarh- 160009, India.

Inventors

1. GERA, Ashish
Third Year Student, Department of Computer Science and Engineering, Chitkara University, Punjab, Punjab-Patiala National Highway (NH4), Patiala, Punjab-140401, India.
2. GARG, Vedanshi
Second Year Student, Department of Computer Science and Engineering, 573/165, Bhartiya Colony, New Mandi Muzaffarnagar - 251001, Uttar Pradesh, India.
3. ROY, Arina
Second Year CSE Student, Chitkara Institute of Engineering and Technology (Baddi), Atal Shiksha Kunj, Pinjore-Nalagarh National Highway (NH-21A), Kalujhinda, Distt, Baddi, 174103, Himachal Pradesh, India.

Specification

The present disclosure relates to the field of communication. In particular, the
present disclosure provides a system and method for reviewing content.
BACKGROUND
[0002] The background description includes information that may be useful in
understanding the present invention. It is not an admission that any of the information provided herein is prior art or relevant to the presently claimed invention, or that any publication specifically or implicitly referenced is prior art.
[0003] Keeping pace with a fast-moving world, communication technology has also gone through many advancements and risen to the next level. In today's world, communicating with one another has become effortless. One can communicate through telephone, mobile, e-mail, and various social media applications, where communication can be carried out using text, audio, video, etc. Besides one-to-one communication, group communication and mass communication have also become effortless tasks. Day-to-day advancements in communication technology have made it possible, and moreover affordable, for people to communicate with one another even while living continents apart. People are able to propagate their ideologies, collaborate with one another to complete a task or project, manage one or more tasks at hand while sitting at a remote location, etc.
[0004] On the one hand, technology has opened many ways for people to communicate with each other, making their lives a lot easier; on the other hand, it has served as a means to spread hate crimes and terrorism, propagate lethal ideologies, and provoke one section or group of people against another through various text and audio messages. Some ill-mannered people use toxic and abusive words and deliver hate speech to provoke others. For example, some people may abuse the customer care executives of a company to vent their anger over poor services or poor products delivered by the company. This creates a very difficult and intolerable situation for the customer care executives.

[0005] There is, therefore, a need in the art to provide an efficient and cost-effective
system to overcome the above-mentioned problems and provide a means for filtering out hate-spreading and abusive words.
OBJECTS OF THE PRESENT DISCLOSURE
[0006] Some of the objects of the present disclosure, which at least one embodiment
herein satisfies are as listed herein below.
[0007] It is an object of the present disclosure to provide a system and method for filtering toxic and abusive words in a content.
[0008] It is another object of the present disclosure to provide a system and method for reviewing the content before transmitting the content to a server.
[0009] It is another object of the present disclosure to provide a system and method for generating a report based on the reviewed content, and transmitting the report to the server, a user, and a corresponding government authority.
[0010] It is another object of the present disclosure to provide a cost-effective, efficient, and accurate system.
[0011] These and other objects of the present invention will become readily apparent
from the following detailed description taken in conjunction with the accompanying drawings.
SUMMARY
[0012] The present disclosure relates to the field of communication. In particular, the
present disclosure provides a system and method for reviewing content.
[0013] An aspect of the present disclosure pertains to a method for reviewing a content,
the method comprising the steps of: receiving, at one or more processors of a processing engine configured with a Convolutional Neural Network (CNN), a first set of data packets from a first input means, wherein the first set of data packets may pertain to the content associated with any or a combination of one or more text strings and one or more verbal/orally spoken words; and identifying, at the one or more processors, a second set of data packets selected from the received first set of data packets based on one or more recognition techniques, wherein at least one of the one or more recognition techniques may comprise comparing the received first set of data packets with a pre-configured dataset comprising a list of words to be identified in the content.
[0014] In an aspect, the at least one of the one or more recognition techniques may comprise interpreting a context associated with the received first set of data packets by using one or more classifiers of a CNN.
[0015] In an aspect, the method may comprise a step of filtering, at the one or more
processors, the identified second set of data packets from the received first set of data packets,
before transmitting the received first set of data packets to a first output means.
[0016] In an aspect, the method may comprise a step of substituting, at the one or more
processors, at least one of the identified second set of data packets with corresponding one or more pre-determined set of data packets.
[0017] In an aspect, the method may comprise a step of appending, at the one or more
processors, any or a combination of the pre-configured dataset and the one or more pre-determined set of data packets, through a second input means.
[0018] In an aspect, the method may comprise a step of appending, at the one or more
processors, the pre-configured dataset based on a testing-and-training set of data packets.
[0019] In an aspect, the method may comprise a step of appending the pre-configured dataset based on the testing-and-training set of data packets, executed through an interrogation process.
[0020] In an aspect, the method may comprise a step of authenticating, at the one or more
processors, an encrypted code entered through the second input means, to enable the appending of any or a combination of the pre-configured set of data packets and the one or more pre-determined set of data packets.
[0021] An aspect of the present disclosure pertains to a content review system, the
system comprising: a processing unit comprising one or more processors and coupled with a memory, the memory storing instructions executable by the one or more processors configured with a Convolutional Neural Network (CNN) and configured to: receive a first set of data packets from a first input means, wherein the first set of data packets may pertain to a content associated with any or a combination of one or more text strings and one or more verbal/orally spoken words; and identify a second set of data packets selected from the received first set of data packets based on one or more recognition techniques, wherein at least one of the one or more recognition techniques may comprise comparing the received first set of data packets with a pre-configured dataset comprising a list of words to be identified in the content.
[0022] In an aspect, the system may be configured between a router and a server, and
wherein the first input means and the first output means may comprise any or a combination of the router and the server.
BRIEF DESCRIPTION OF THE DRAWINGS
[0023] The accompanying drawings are included to provide a further understanding of
the present disclosure, and are incorporated in and constitute a part of this specification. The
drawings illustrate exemplary embodiments of the present disclosure and, together with the
description, serve to explain the principles of the present disclosure.
[0024] The diagrams are for illustration only, and thus are not a limitation of the present disclosure, wherein:
[0025] FIG. 1 illustrates an exemplary architecture of the proposed content review system to
illustrate its overall working in accordance with an embodiment of the present disclosure.
[0026] FIG. 2 illustrates exemplary engines of the proposed content review system in
accordance with an exemplary embodiment of the present disclosure.
[0027] FIG. 3 is a flow diagram illustrating a method for reviewing, in accordance with
an embodiment of the present disclosure.
[0028] FIG. 4 illustrates an exemplary computer system in which or with which
embodiments of the present invention can be utilized in accordance with embodiments of the
present disclosure.
DETAILED DESCRIPTION
[0029] In the following description, numerous specific details are set forth in order to
provide a thorough understanding of embodiments of the present invention. It will be apparent to one skilled in the art that embodiments of the present invention may be practiced without some of these specific details.
[0030] Embodiments of the present invention may be provided as a computer program
product, which may include a machine-readable storage medium tangibly embodying thereon instructions, which may be used to program a computer (or other electronic devices) to perform a process. The machine-readable medium may include, but is not limited to, fixed (hard) drives, magnetic tape, floppy diskettes, optical disks, compact disc read-only memories (CD-ROMs), magneto-optical disks, and semiconductor memories, such as ROMs, random access memories (RAMs), programmable read-only memories (PROMs), erasable PROMs (EPROMs), electrically erasable PROMs (EEPROMs), flash memory, magnetic or optical cards, or other types of media/machine-readable medium suitable for storing electronic instructions (e.g., computer programming code, such as software or firmware).
[0031] Various methods described herein may be practiced by combining one or more
machine-readable storage media containing the code according to the present invention with appropriate standard computer hardware to execute the code contained therein. An apparatus for practicing various embodiments of the present invention may involve one or more computers (or one or more processors within a single computer) and storage systems containing, or having network access to, computer program(s) coded in accordance with various methods described herein, and the method steps of the invention could be accomplished by engines, routines, subroutines, or subparts of a computer program product.
[0032] If the specification states a component or feature "may", "can", "could", or
"might" be included or have a characteristic, that particular component or feature is not required to be included or have the characteristic.
[0033] As used in the description herein and throughout the claims that follow, the
meaning of "a," "an," and "the" includes plural reference unless the context clearly dictates otherwise. Also, as used in the description herein, the meaning of "in" includes "in" and "on" unless the context clearly dictates otherwise.
[0034] The recitation of ranges of values herein is merely intended to serve as a
shorthand method of referring individually to each separate value falling within the range. Unless otherwise indicated herein, each individual value is incorporated into the specification as if it were individually recited herein. All methods described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The use of any and all examples, or exemplary language (e.g., "such as") provided with respect to certain embodiments herein is intended merely to better illuminate the invention and does not pose a limitation on the scope of the invention otherwise claimed. No language in the specification should be construed as indicating any non-claimed element essential to the practice of the invention.
[0035] Groupings of alternative elements or embodiments of the invention disclosed
herein are not to be construed as limitations. Each group member can be referred to and claimed individually or in any combination with other members of the group or other elements found herein. One or more members of a group can be included in, or deleted from, a group for reasons of convenience and/or patentability. When any such inclusion or deletion occurs, the specification is herein deemed to contain the group as modified thus fulfilling the written description of all groups used in the appended claims.
[0036] Exemplary embodiments will now be described more fully hereinafter with
reference to the accompanying drawings, in which exemplary embodiments are shown. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. These embodiments are provided so that this disclosure will be thorough and complete and will fully convey the scope of the invention to those of ordinary skill in the art. Moreover, all statements herein reciting embodiments of the invention, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof. Additionally, it is intended that such equivalents include both currently known equivalents as well as equivalents developed in the future (i.e., any elements developed that perform the same function, regardless of structure).
[0037] The present disclosure relates to the field of communication. In particular, the
present disclosure provides a system and method for reviewing content.
[0038] According to an aspect, the present disclosure pertains to a method for reviewing a content, the method including the steps of: receiving, at one or more processors of a processing engine configured with a Convolutional Neural Network (CNN), a first set of data packets from a first input means, wherein the first set of data packets can pertain to the content associated with any or a combination of one or more text strings and one or more verbal/orally spoken words; and identifying, at the one or more processors, a second set of data packets selected from the received first set of data packets based on one or more recognition techniques, wherein at least one of the one or more recognition techniques can include comparing the received first set of data packets with a pre-configured dataset including a list of words to be identified in the content.

[0039] In an embodiment, the at least one of the one or more recognition techniques can
include interpreting a context associated with the received first set of data packets by using one or more classifiers of a CNN.
[0040] In an embodiment, the method can include a step of filtering, at the one or more
processors, the identified second set of data packets from the received first set of data packets,
before transmitting the received first set of data packets to a first output means.
[0041] In an embodiment, the method can include a step of substituting, at the one or
more processors, at least one of the identified second set of data packets with corresponding one or more pre-determined set of data packets.
[0042] In an embodiment, the method can include a step of appending, at the one or more
processors, any or a combination of the pre-configured dataset and the one or more pre-determined set of data packets, through a second input means.
[0043] In an embodiment, the method can include a step of appending, at the one or more
processors, the pre-configured dataset based on a testing-and-training set of data packets.
[0044] In an embodiment, the method can include a step of appending the pre-configured dataset based on the testing-and-training set of data packets, executed through an interrogation process.
[0045] In an embodiment, the method can include a step of authenticating, at the one or
more processors, an encrypted code entered through the second input means, to enable the appending of any or a combination of the pre-configured set of data packets and the one or more pre-determined set of data packets.
[0046] According to an aspect, the present disclosure pertains to a content review system, the system including: a processing unit comprising one or more processors and coupled with a memory, the memory storing instructions executable by the one or more processors configured with a Convolutional Neural Network (CNN) and configured to: receive a first set of data packets from a first input means, wherein the first set of data packets can pertain to a content associated with any or a combination of one or more text strings and one or more verbal/orally spoken words; and identify a second set of data packets selected from the received first set of data packets based on one or more recognition techniques, wherein at least one of the one or more recognition techniques can include comparing the received first set of data packets with a pre-configured dataset comprising a list of words to be identified in the content.

[0047] In an aspect, the system can be configured between a router and a server, and
wherein the first input means and the first output means can include any or a combination of the router and the server.
[0048] FIG. 1 illustrates exemplary architecture of the proposed content review system to
illustrate its overall working in accordance with an embodiment of the present disclosure.
[0049] As illustrated in FIG. 1, according to an embodiment of the present disclosure,
the proposed content review system 104 can be coupled to one or more networks, and facilitate
reviewing of content across the one or more networks. The proposed content review system 104
can receive a first set of data packets from a first input means 102. The first set of data packets
can pertain to a content associated with any or a combination of one or more text strings and one
or more verbal/ orally spoken words. The proposed content review system 104 can transmit the
reviewed first set of data packets to a first output means 106. In an embodiment, the first input
means 102 and the first output means 106 can include any or a combination of one or more
routers and one or more servers associated with the one or more networks.
[0050] In an embodiment, the process of reviewing the content associated with the received first set of data packets can be carried out by executing steps such as identification of words 108, filtration of identified words 110, and substitution of identified words 112 at the proposed content review system 104.
[0051] In an embodiment, in the step of identification of words 108, the proposed content
review system 104 can operate with one or more recognition techniques, such as an optical word recognition technique and a speech recognition technique, to identify a second set of data packets pertaining to one or more words selected from the received first set of data packets. In an embodiment, in a first recognition technique, for identifying the second set of data packets amongst the received first set of data packets, the proposed content review system 104 can compare the received first set of data packets with a pre-configured dataset including a list of words to be recognized from the content. In an embodiment, in a second recognition technique, for identifying the second set of data packets amongst the received first set of data packets, the proposed content review system 104 can facilitate interpreting a context associated with the received first set of data packets by using one or more classifiers of a CNN, and the second set of data packets can be identified based on the interpretation of the context.
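By way of a non-limiting illustration only, the first recognition technique (comparison against the pre-configured dataset) can be sketched in Python; the word list, function name, and whitespace tokenization below are assumptions made for illustration, not the claimed implementation:

```python
# Illustrative sketch of the first recognition technique: comparing incoming
# text against a pre-configured dataset of flagged words. All names and the
# sample word list here are hypothetical.
FLAGGED_WORDS = {"ugly", "spoiled", "hacking"}  # stands in for the pre-configured dataset

def identify_words(content, flagged=FLAGGED_WORDS):
    """Return the 'second set of data packets': flagged words found in content."""
    tokens = content.lower().split()
    # Strip simple punctuation so tokens like "hacking," still match "hacking".
    cleaned = [t.strip(".,!?\"'") for t in tokens]
    return [t for t in cleaned if t in flagged]
```

A dataset lookup of this kind is a plain set-membership test, which is why the comparison technique scales well as the pre-configured dataset grows.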

[0052] A person skilled in the art will appreciate that the proposed content review system
104 can incorporate the first recognition technique, the second recognition technique, and other recognition techniques simultaneously, for the identification of the second set of data packets amongst the received first set of data packets, without deviating from the spirit and scope of the invention.
[0053] In an embodiment, in the step of filtration of identified words 110, the proposed
content review system 104 can filter the identified second set of data packets from the received
first set of data packets, before transmitting the received first set of data packets to the first
output means 106. The filtration 110 of the identified second set of data packets can include any
or a combination of removing, encrypting, and hiding of the identified second set of data packets.
[0054] In an embodiment, in the step of substitution of identified words 112, the
proposed content review system 104 can substitute at least one of the identified second set of data packets with corresponding one or more pre-determined set of data packets, amongst the received first set of data packets, before transmitting the received first set of data packets to the first output means 106. The one or more pre-determined set of data packets can be retrieved from the proposed content review system 104, or derived from a third source, such as a cloud, network, server, laptop, mobile, external memory, and the likes.
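The filtration options recited above (removing, encrypting, and hiding an identified word) can be pictured with a short, purely illustrative Python sketch; the mode names are assumptions, and the ROT13 transform stands in for whatever real encryption an implementation would use:

```python
import codecs

def filter_word(word, mode="remove"):
    """Apply one of the illustrative filtration options to an identified word."""
    if mode == "remove":
        return None                 # drop the word from the content entirely
    if mode == "hide":
        return "*" * len(word)      # mask the word while keeping its length
    if mode == "encrypt":
        # ROT13 is a trivial reversible encoding used here only as a stand-in
        # for real encryption of the identified word.
        return codecs.encode(word, "rot13")
    raise ValueError("unknown filtration mode: %s" % mode)
```

The caller then either skips the word (`None`), emits the mask, or emits the encoded form in place of the original token.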
[0055] In an embodiment, during appending of dataset 114, the proposed content review
system 104 can facilitate appending of any or a combination of the pre-configured dataset and the one or more pre-determined set of data packets, through a second input means (not shown). The second input means can include any or a combination of a keyboard, microphone, mobile, laptop, and the like. In another embodiment, the proposed content review system 104 can automatically suggest appending of the pre-configured dataset based on a testing-and-training set of data packets. The testing-and-training set of data packets can be obtained through the CNN. The appending of the pre-configured dataset based on the testing-and-training set of data packets can be executed through an interrogation process, in which a user can approve or disapprove the suggested appending of the pre-configured dataset. For instance, if five words are suggested by the proposed content review system 104, and the user approves two of the five suggested words and rejects the other three, then only the two approved words are added to the pre-configured dataset by the proposed content review system 104. The user can also add a sixth word, different from the five suggested words, based on the user's own understanding, to append the pre-configured dataset.
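The interrogation process described above can be sketched as a simple approve/reject loop; the function names and the callback-based approval are illustrative assumptions, not the patented mechanism:

```python
# Hypothetical sketch of the interrogation process: the system suggests words,
# the user approves or rejects each, and only approved words (plus any words
# the user adds independently) are appended to the pre-configured dataset.

def interrogate(suggested, approve):
    """Keep only the suggested words the user approves."""
    return [word for word in suggested if approve(word)]

def append_dataset(dataset, approved, user_added=()):
    """Append approved suggestions and user-supplied words to the dataset."""
    dataset.update(approved)
    dataset.update(user_added)
    return dataset
```

For example, if five words are suggested and the user approves two of them and types in a sixth word of their own, exactly those three words reach the dataset.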
[0056] In an embodiment, during authentication 116, the proposed content review system
104 can allow appending of any or a combination of the pre-configured dataset and the one or more pre-determined set of data packets after authenticating an encrypted code, which can be entered through the second input means. The encrypted code can be any or a combination of a password set by the user, a key-code, a real-time password, and the likes.
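The authentication 116 gate can be pictured, under stated assumptions, as a hash comparison before any append is permitted; hashing the code with SHA-256 and comparing in constant time is an illustrative choice, not a detail drawn from the disclosure:

```python
import hashlib
import hmac

def authenticate(entered_code, stored_hash):
    """Allow appending only if the entered code matches the stored hash.

    Illustrative sketch: the code is hashed with SHA-256 and compared in
    constant time; a real deployment would choose its own scheme.
    """
    digest = hashlib.sha256(entered_code.encode()).hexdigest()
    return hmac.compare_digest(digest, stored_hash)
```

Only when this check succeeds would the appending engine accept new dataset entries from the second input means.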
[0057] In an illustrative embodiment, the proposed content review system 104 can
generate a report based on the reviewed content, and can transmit the generated report to any or a
combination of the server, the user, a corresponding government authority, and the likes. The
generated report can include, but is not limited to, any or a combination of the IP address of a system or input means 102 from which the first set of data packets is generated, words pertaining to the recognized second set of data packets, a count of such words, the IP address of a system or output means 106 to which the received first set of data packets is transmitted, and the like.
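A minimal sketch of such a report, with hypothetical field names chosen only to mirror the items listed above, could look like this:

```python
from collections import Counter

def generate_report(src_ip, dst_ip, identified_words):
    """Build an illustrative review report; field names are assumptions."""
    return {
        "source_ip": src_ip,                 # IP of the input means 102
        "destination_ip": dst_ip,            # IP of the output means 106
        "identified_words": sorted(set(identified_words)),
        "word_counts": dict(Counter(identified_words)),  # count of such words
    }
```

The resulting dictionary could then be serialized and transmitted to the server, the user, or the corresponding authority.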
[0058] In an illustrative implementation, in an instance, the input means 102 can transmit
the first set of data packets to the proposed content review system 104, where the first set of data packets pertains to a sentence "Charging ports provided at several places could surreptitiously copy sensitive data from a smart phone, tablet or any computer device, such malicious hacking is called Juice jacking — a type of cyber-attack involving a charging port that doubles up as a data connection/ collection point." In another instance, the second set of data packets can pertain to words such as "surreptitiously", "malicious", "hacking", "sensitive", and "cyber-attack". In an embodiment, the proposed content review system 104 can identify the second set of data packets selected from the received first set of data packets based on the one or more recognition techniques, and can filter the words associated with the identified second set of data packets from the sentence associated with the received first set of data packets. In an example, the pre-determined set of data packets can pertain to one or more substitution words or alphanumeric characters corresponding to the words associated with the identified second set of data packets, for example, "hijacking" corresponding to "hacking", and "***" corresponding to "cyber-attack". Then, the proposed content review system 104 can substitute the identified second set of data packets with the corresponding one or more pre-determined set of data packets. In another example, the words associated with the identified second set of data packets with no corresponding pre-determined set of data packets can be eliminated from the sentence associated
with the received first set of data packets. So, the reviewed content generated by the proposed
content review system 104 can be "Charging ports provided at several places could copy data
from a smart phone, tablet or any computer device, such hijacking is called Juice jacking — a
type of *** involving a charging port that doubles up as a data connection/ collection point."
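The worked example above can be traced end to end with a small Python sketch; the word lists and substitution map below simply restate the example's data, while the tokenizer is an illustrative assumption:

```python
# Illustrative end-to-end sketch of the review in [0058]: identified words with
# a substitute are replaced, and identified words without one are eliminated.
IDENTIFIED = {"surreptitiously", "malicious", "hacking", "sensitive", "cyber-attack"}
SUBSTITUTIONS = {"hacking": "hijacking", "cyber-attack": "***"}  # pre-determined set

def review_sentence(sentence):
    out = []
    for token in sentence.split():
        bare = token.strip(".,\"'").lower()
        if bare in IDENTIFIED:
            if bare in SUBSTITUTIONS:
                out.append(SUBSTITUTIONS[bare])  # substitute the identified word
            # otherwise the identified word is eliminated entirely
        else:
            out.append(token)
    return " ".join(out)
```

Run on the example sentence, this drops "surreptitiously", "malicious", and "sensitive", and swaps in "hijacking" and "***", matching the reviewed content shown above.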
[0059] FIG. 2 illustrates exemplary engines of the proposed content review system in
accordance with an exemplary embodiment of the present disclosure.
[0060] As illustrated, the proposed content review system 104 can include one or more
processor(s) 202. The one or more processor(s) 202 can be implemented as one or more
microprocessors, microcomputers, microcontrollers, digital signal processors, central processing
units, logic circuitries, and/or any devices that manipulate data based on operational instructions.
Among other capabilities, the one or more processor(s) 202 are configured to fetch and execute
computer-readable instructions stored in a memory 204 of the proposed content review system
104. The memory 204 can store one or more computer-readable instructions or routines, which
may be fetched and executed to create or share the data units over a network service. The
memory 204 can include any non-transitory storage device including, for example, volatile
memory such as RAM, or non-volatile memory such as EPROM, flash memory, and the like.
[0061] In an embodiment, the proposed content review system 104 can also include an
interface(s) 206. The interface(s) 206 may include a variety of interfaces, for example, interfaces for data input and output devices, referred to as I/O devices, storage devices, and the like. The interface(s) 206 may facilitate communication of the proposed content review system 104 with various devices coupled to the proposed content review system 104. The interface(s) 206 may also provide a communication pathway for one or more components of the proposed content review system 104. Examples of such components include, but are not limited to, processing engine(s) 208 and data 210.
[0062] In an embodiment, the processing engine(s) 208 can be implemented as a
combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processing engine(s) 208. In examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processing engine(s) 208 may be processor-executable instructions stored on a non-transitory machine-readable storage medium, and the hardware for the processing engine(s) 208 may include a processing resource (for
example, one or more processors), to execute such instructions. In the present examples, the
machine-readable storage medium may store instructions that, when executed by the processing
resource, implement the processing engine(s) 208. In such examples, the proposed content
review system 104 can include the machine-readable storage medium storing the instructions and
the processing resource to execute the instructions, or the machine-readable storage medium may
be separate but accessible to the proposed content review system 104 and the processing
resource. In other examples, the processing engine(s) 208 may be implemented by electronic
circuitry. The data 210 can include data that is either stored or generated as a result of
functionalities implemented by any of the components of the processing engine(s) 208.
[0063] In an embodiment, the processing engine(s) 208 can include a filtering engine
212, an appending engine 214, and other engine(s) 216. The other engine(s) 216 can implement functionalities that supplement applications or functions performed by the proposed content review system 104 or the processing engine(s) 208.
[0064] In an embodiment, the filtering engine 212 of the proposed content review system
104 can enable filtration of a second set of data packets from a first set of data packets. The first set of data packets, pertaining to a content associated with any or a combination of one or more text strings and one or more verbal/orally spoken words and transmitted from an input means 102, can be received by the proposed content review system 104. The filtering engine 212 can facilitate identification of the second set of data packets pertaining to one or more words, selected from the received first set of data packets based on one or more recognition techniques, such as an optical word recognition technique and a speech recognition technique. In an illustrative embodiment, in a first recognition technique, for identifying the second set of data packets amongst the received first set of data packets, the received first set of data packets can be compared with a pre-configured dataset including a list of words to be recognized from the content. In an embodiment, in a second recognition technique, for identifying the second set of data packets amongst the received first set of data packets, a context associated with the received first set of data packets can be interpreted, by using one or more classifiers of a CNN, and the second set of data packets can be identified based on the interpretation of the context. For example, in a case where the received first set of data packets pertains to a spoken sentence "You are an ugly and spoiled brat", the context associated with the received first set of data packets is interpreted, by using one or more classifiers of a CNN, and the identified second set of data packets can pertain to the words "ugly" and "spoiled".
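The first recognition technique described above can be sketched as a simple comparison against the pre-configured word list. This is a purely illustrative sketch, not the disclosed implementation: the function and variable names are hypothetical, the sketch operates on a plain text string rather than data packets, and the CNN-based second technique is omitted.

```python
# Illustrative sketch of the first recognition technique: comparing the
# words of the received content against a pre-configured dataset (word list).
# Names are hypothetical; a real system would operate on data packets and
# could combine this with a CNN-based context classifier.

def identify_flagged_words(content, preconfigured_dataset):
    """Return the words of `content` that appear in the dataset."""
    words = content.lower().split()
    return [w for w in words if w in preconfigured_dataset]

dataset = {"ugly", "spoiled"}
print(identify_flagged_words("You are an ugly and spoiled brat", dataset))
# -> ['ugly', 'spoiled']
```

For the example sentence above, the sketch flags exactly the two words "ugly" and "spoiled" that the disclosure identifies.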
[0065] In another embodiment, the filtering engine 212 can filter the identified second set
of data packets from the received first set of data packets, before transmitting the received first set of data packets to a first output means 106. The filtration 110 of the identified second set of data packets can include any or a combination of removing, encrypting, and hiding of the identified second set of data packets. So, in the above example, the identified second set of data packets, pertaining to the words "ugly" and "spoiled", can be filtered out.
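The removal option of the filtration step can be sketched as dropping the flagged words before the content is passed on. This is an assumption-laden illustration only: names are hypothetical, and the disclosure's encrypting and hiding options are not shown.

```python
# Hedged sketch of the filtration step: removing identified words from the
# content before transmission to the output means. Removal is one option;
# the disclosure also mentions encrypting or hiding. Names are illustrative.

def filter_content(content, flagged):
    """Remove flagged words from the content string."""
    kept = [w for w in content.split() if w.lower() not in flagged]
    return " ".join(kept)

print(filter_content("You are an ugly and spoiled brat", {"ugly", "spoiled"}))
# -> You are an and brat
```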
[0066] In another embodiment, the filtering engine 212 can enable substitution of at least
one of the identified second set of data packets with corresponding one or more pre-determined sets of data packets. The one or more pre-determined sets of data packets can be retrieved from the data 210 of the proposed content review system 104, or derived from a third source, such as a cloud, network, server, laptop, mobile, external memory, and the like. In an example, the one or more pre-determined sets of data packets can pertain to a pre-defined substitution sound, say "beep-beep", corresponding to the identified second set of data packets pertaining to the word "ugly". In another example, if there is no pre-determined set of data packets for the word "spoiled", then it may be eliminated or silenced from the received first set of data packets. There can be a silence for a first time-duration, where the first time-duration can be associated with the time taken to pronounce a word associated with the second set of data packets. Then, the reviewed sentence generated by the proposed content review system 104 can be "You are an beep-beep and (silence for the first time-duration) brat".
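The substitution behaviour described above can be sketched as follows: a flagged word that has a pre-determined replacement becomes that replacement ("beep-beep"), while a flagged word without one is silenced, shown here as a "(silence)" placeholder standing in for the first time-duration. All names are hypothetical and the sketch works on text rather than audio data packets.

```python
# Illustrative sketch of the substitution step: flagged words with a
# pre-determined replacement are substituted; flagged words without one
# are silenced for the time taken to pronounce them, represented here by
# a "(silence)" placeholder. Names are hypothetical.

def substitute_content(content, flagged, replacements):
    out = []
    for word in content.split():
        if word.lower() in flagged:
            # use the pre-determined replacement if one exists, else silence
            out.append(replacements.get(word.lower(), "(silence)"))
        else:
            out.append(word)
    return " ".join(out)

sentence = "You are an ugly and spoiled brat"
print(substitute_content(sentence, {"ugly", "spoiled"}, {"ugly": "beep-beep"}))
# -> You are an beep-beep and (silence) brat
```

This reproduces the reviewed sentence of the example, with "(silence)" abbreviating "(silence for the first time-duration)".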
[0067] In an embodiment, the appending engine 214 of the proposed content review
system 104 can enable appending of any or a combination of the pre-configured dataset and the one or more pre-determined sets of data packets, through a second input means (not shown). The second input means can include any or a combination of a keyboard, microphone, mobile, laptop, and the like. In another embodiment, the proposed content review system 104 can automatically suggest appending of the pre-configured dataset based on a testing-and-training set of data packets. The testing-and-training set of data packets can be obtained through the CNN. The appending of the pre-configured dataset based on the testing-and-training set of data packets can be executed through an interrogation process. In the interrogation process, a user can approve or disapprove the suggested appending of the pre-configured dataset. For example, in a case where three words "bomb", "missile", and "rocket" are suggested by the proposed content review system 104, and the user approves only one of the three suggested words, say "bomb", and disapproves or rejects the other two suggested words, say "missile" and "rocket", then only the word "bomb" is added to the pre-configured dataset by the proposed content review system 104. The user can also add a new word, independent of the suggested words and based on the user's own understanding, to append the pre-configured dataset. For instance, if the user adds a word "disaster" on his own, then the appended pre-configured dataset includes the words "bomb" and "disaster", along with the pre-existing list of words to be recognized from the content.
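The interrogation process can be sketched as putting each suggested word to the user for approval, while also accepting words the user adds independently. This is a minimal sketch under assumptions: the `approve` callback stands in for the second input means, and all names are hypothetical.

```python
# Sketch of the interrogation process: each CNN-suggested word is put to
# the user for approval; the user may also add words independently.
# The `approve` callback is a stand-in for the second input means.

def append_dataset(dataset, suggestions, approve, user_words=()):
    """Return the dataset extended with approved suggestions and user words."""
    approved = {w for w in suggestions if approve(w)}
    return dataset | approved | set(user_words)

existing = {"ugly", "spoiled"}
updated = append_dataset(existing,
                         suggestions=["bomb", "missile", "rocket"],
                         approve=lambda w: w == "bomb",  # user approves only "bomb"
                         user_words=["disaster"])        # added independently
print(sorted(updated))
# -> ['bomb', 'disaster', 'spoiled', 'ugly']
```

As in the example, "missile" and "rocket" are rejected, while "bomb" and the user's own word "disaster" join the pre-existing list.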
[0068] In another embodiment, the appending engine 214 can perform the appending
after authentication of an encrypted code, which can be entered through the second input means. The encrypted code can be any or a combination of a password set by the user, a key-code, a real-time password, and the like. In an illustrative embodiment, a report can be generated based on the reviewed content, and the generated report can be transmitted to any or a combination of the server, the user, a corresponding government authority, and the like. The generated report can include, but is not limited to, any or a combination of the IP address of a system or input means 102 from which the first set of data packets is generated, the words pertaining to the recognized second set of data packets, a count of such words, the IP address of a system or output means 106 to which the received first set of data packets is transmitted, and the like.
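The contents of such a report can be sketched as a simple record of the source and destination addresses, the recognized words, and their counts. The field names and example IP addresses below are illustrative assumptions, not part of the disclosure.

```python
# Sketch of the generated report: source/destination addresses, the
# recognized words, and their counts. Field names are illustrative only.

from collections import Counter

def build_report(source_ip, dest_ip, flagged_words):
    counts = Counter(flagged_words)
    return {
        "source_ip": source_ip,           # input means 102
        "destination_ip": dest_ip,        # output means 106
        "recognized_words": sorted(counts),
        "word_counts": dict(counts),
    }

report = build_report("192.0.2.10", "192.0.2.20", ["ugly", "spoiled", "ugly"])
print(report["word_counts"])
# -> {'ugly': 2, 'spoiled': 1}
```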
[0069] FIG. 3 is a flow diagram illustrating a method for reviewing, in accordance with
an embodiment of the present disclosure.
[0070] As illustrated, in an embodiment, the proposed method for reviewing of a content
can include a step 302 of receiving, at one or more processors of a processing engine 104
configured with a Convolutional Neural Network (CNN), a first set of data packets from a first
input means 102. The first set of data packets pertains to the content associated with any or a
combination of one or more text strings and one or more verbal/orally spoken words.
[0071] In an embodiment, the proposed method for reviewing of a content includes a step
304 of identifying, at the one or more processors, a second set of data packets pertaining to one or more words, selected from the first set of data packets received in the step 302. The second set of data packets can be identified based on one or more recognition techniques, such as an optical word recognition technique and a speech recognition technique. In an embodiment, in a first recognition technique, for identifying the second set of data packets amongst the received first set of data packets, the received first set of data packets can be compared with a pre-configured dataset including a list of words to be recognized from the content. In an embodiment, in a second recognition technique, identification of the second set of data packets amongst the received first set of data packets can be based on an interpretation of a context associated with the received first set of data packets through one or more classifiers of a CNN.
[0072] A person skilled in the art will appreciate that the first recognition technique and
the second recognition technique can be utilized simultaneously for the identification of the second set of data packets amongst the received first set of data packets, without deviating from the spirit and scope of the invention.
[0073] In an embodiment, the method can include a step of filtering, at the one or more
processors, of the identified second set of data packets from the received first set of data packets, before transmitting the received first set of data packets to a first output means 106. The filtration 110 of the identified second set of data packets can include any or a combination of removing, encrypting, and hiding of the identified second set of data packets.
[0074] In an embodiment, the method can include a step of substituting, at the one or
more processors, at least one of the identified second set of data packets with corresponding one or more pre-determined sets of data packets, amongst the received first set of data packets, before transmitting the received first set of data packets to the first output means 106. The one or more pre-determined sets of data packets can be retrieved from the data 210, or derived from a third source, such as a cloud, network, server, laptop, mobile, external memory, and the like.
[0075] In an embodiment, the method can include a step of appending, at the one or more
processors, any or a combination of the pre-configured dataset and the one or more pre-determined sets of data packets, through a second input means. The second input means can include any or a combination of a keyboard, microphone, mobile, laptop, and the like.
[0076] In an embodiment, appending of the pre-configured dataset can be automatically
suggested based on a testing-and-training set of data packets. The testing-and-training set of data packets can be obtained through the CNN. The appending of the pre-configured dataset based on the testing-and-training set of data packets can be executed through an interrogation process. In the interrogation process, a user can approve or disapprove the suggested appending of the pre-configured dataset. In another embodiment, the user can add a word to the pre-configured dataset based on the understanding of the user.
[0077] FIG. 4 illustrates an exemplary computer system in which or with which
embodiments of the present invention can be utilized in accordance with embodiments of the present disclosure.
[0078] As shown in FIG. 4, computer system includes an external storage device 410, a
bus 420, a main memory 430, a read only memory 440, a mass storage device 450, communication port 460, and a processor 470. A person skilled in the art will appreciate that the computer system may include more than one processor and communication ports. Examples of processor 470 include, but are not limited to, an Intel® Itanium® or Itanium 2 processor(s), or AMD® Opteron® or Athlon MP® processor(s), Motorola® lines of processors, FortiSOC™ system on a chip processors or other future processors. Processor 470 may include various engines associated with embodiments of the present invention. Communication port 460 can be any of an RS-232 port for use with a modem-based dial-up connection, a 10/100 Ethernet port, a Gigabit or 10 Gigabit port using copper or fiber, a serial port, a parallel port, or other existing or future ports. Communication port 460 may be chosen depending on a network, such as a Local Area Network (LAN), Wide Area Network (WAN), or any network to which the computer system connects.
[0079] In an embodiment, the memory 430 can be Random Access Memory (RAM), or
any other dynamic storage device commonly known in the art. Read only memory 440 can be any static storage device(s), e.g., but not limited to, Programmable Read Only Memory (PROM) chips for storing static information, e.g., start-up or BIOS instructions for processor 470. Mass storage 450 may be any current or future mass storage solution, which can be used to store information and/or instructions. Exemplary mass storage solutions include, but are not limited to, Parallel Advanced Technology Attachment (PATA) or Serial Advanced Technology Attachment (SATA) hard disk drives or solid-state drives (internal or external, e.g., having Universal Serial Bus (USB) and/or Firewire interfaces), e.g., those available from Seagate (e.g., the Seagate Barracuda 7102 family) or Hitachi (e.g., the Hitachi Deskstar 7K1000), one or more optical discs, Redundant Array of Independent Disks (RAID) storage, e.g., an array of disks (e.g., SATA arrays), available from various vendors including Dot Hill Systems Corp., LaCie, Nexsan Technologies, Inc. and Enhance Technology, Inc.

[0080] In an embodiment, the bus 420 communicatively couples processor(s) 470 with
the other memory, storage and communication blocks. Bus 420 can be, e.g. a Peripheral
Component Interconnect (PCI) / PCI Extended (PCI-X) bus, Small Computer System Interface
(SCSI), USB or the like, for connecting expansion cards, drives and other subsystems as well as
other buses, such as a front side bus (FSB), which connects processor 470 to the software system.
[0081] In another embodiment, operator and administrative interfaces, e.g. a display,
keyboard, and a cursor control device, may also be coupled to bus 420 to support direct operator interaction with computer system. Other operator and administrative interfaces can be provided through network connections connected through communication port 460. External storage device 410 can be any kind of external hard-drives, floppy drives, IOMEGA® Zip Drives, Compact Disc - Read Only Memory (CD-ROM), Compact Disc - Re-Writable (CD-RW), Digital Video Disk - Read Only Memory (DVD-ROM). Components described above are meant only to exemplify various possibilities. In no way should the aforementioned exemplary computer system limit the scope of the present disclosure.
[0082] Thus, it will be appreciated by those of ordinary skill in the art that the diagrams,
schematics, illustrations, and the like represent conceptual views or processes illustrating systems and methods embodying this invention. The functions of the various elements shown in the figures may be provided through the use of dedicated hardware as well as hardware capable of executing associated software. Similarly, any switches shown in the figures are conceptual only. Their function may be carried out through the operation of program logic, through dedicated logic, through the interaction of program control and dedicated logic, or even manually, the particular technique being selectable by the entity implementing this invention. Those of ordinary skill in the art further understand that the exemplary hardware, software, processes, methods, and/or operating systems described herein are for illustrative purposes and, thus, are not intended to be limited to any particular named manufacturer.
[0083] While embodiments of the present invention have been illustrated and described,
it will be clear that the invention is not limited to these embodiments only. Numerous
modifications, changes, variations, substitutions, and equivalents will be apparent to those skilled
in the art, without departing from the spirit and scope of the invention, as described in the claims.
[0084] In the foregoing description, numerous details are set forth. It will be apparent,
however, to one of ordinary skill in the art having the benefit of this disclosure, that the present

invention may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form, rather than in detail, to avoid obscuring the present invention.
[0085] As used herein, and unless the context dictates otherwise, the term "coupled to" is
intended to include both direct coupling (in which two elements that are coupled to each other contact each other) and indirect coupling (in which at least one additional element is located between the two elements). Therefore, the terms "coupled to" and "coupled with" are used synonymously. Within the context of this document, the terms "coupled to" and "coupled with" are also used euphemistically to mean "communicatively coupled with" over a network, where two or more devices are able to exchange data with each other over the network, possibly via one or more intermediary devices.
[0086] It should be apparent to those skilled in the art that many more modifications
besides those already described are possible without departing from the inventive concepts herein. The inventive subject matter, therefore, is not to be restricted except in the spirit of the appended claims. Moreover, in interpreting both the specification and the claims, all terms should be interpreted in the broadest possible manner consistent with the context. In particular, the terms "comprises" and "comprising" should be interpreted as referring to elements, components, or steps in a non-exclusive manner, indicating that the referenced elements, components, or steps may be present, or utilized, or combined with other elements, components, or steps that are not expressly referenced. Where the specification or claims refer to at least one of something selected from the group consisting of A, B, C, ..., N, the text should be interpreted as requiring only one element from the group, not A plus N, or B plus N, etc.
[0087] While the foregoing describes various embodiments of the invention, other and
further embodiments of the invention may be devised without departing from the basic scope thereof. The scope of the invention is determined by the claims that follow. The invention is not limited to the described embodiments, versions or examples, which are included to enable a person having ordinary skill in the art to make and use the invention when combined with information and knowledge available to the person having ordinary skill in the art.

ADVANTAGES OF THE PRESENT DISCLOSURE
[0088] The present disclosure provides a system and a method for filtering toxic and abusive
words in a content.
[0089] The present disclosure provides a system and a method for reviewing the content
before transmitting the content to a server.
[0090] The present disclosure provides a system and a method for generating a report based
on the reviewed content, and transmitting the report to the server, a user, and a corresponding
government authority.
[0091] The present disclosure provides a cost-effective, efficient, and accurate system.

We Claim:

1. A method for reviewing a content, the method comprising the steps of:
receiving, at one or more processors of a processing engine configured with a Convolutional Neural Network (CNN), a first set of data packets from a first input means, wherein the first set of data packets pertains to the content associated with any or a combination of one or more text strings and one or more verbal/ orally spoken words; and
identifying, at the one or more processors, a second set of data packets selected from the received first set of data packets based on one or more recognition techniques, wherein at least one of the one or more recognition techniques comprises comparing of the received first set of data packets with a pre-configured dataset comprising a list of words to be identified out from the content.
2. The method as claimed in claim 1, wherein the at least one of the one or more recognition techniques comprises interpreting of a context associated with the received first set of data packets by using one or more classifiers of a CNN.
3. The method as claimed in claim 2, wherein the method comprises a step of filtering, at the one or more processors, the identified second set of data packets from the received first set of data packets, before transmitting the received first set of data packets to a first output means.
4. The method as claimed in claim 2, wherein the method comprises a step of substituting, at the one or more processors, at least one of the identified second set of data packets with corresponding one or more pre-determined set of data packets.
5. The method as claimed in claim 4, wherein the method comprises a step of appending, at the one or more processors, any or a combination of the pre-configured dataset and the one or more pre-determined set of data packets, through a second input means.
6. The method as claimed in claim 1, wherein the method comprises a step of appending, at the one or more processors, the pre-configured dataset based on a test-and-train set of data packets.
7. The method as claimed in claim 6, wherein the method comprises a step of appending of the pre-configured dataset based on a testing-and-training set of data packets, executed through an interrogation process.

8. The method as claimed in claim 6, wherein the method comprises a step of authenticating, at the one or more processors, an encrypted code entered by the second input means, to enable the appending of any or a combination of the pre-configured set of data packets and the one or more pre-determined set of data packets.
9. A content review system, the system comprising:
a processing unit comprising one or more processors, and coupled with a memory, the memory storing instructions executable by the one or more processors configured with a Convolutional Neural Network (CNN) and configured to:
receive a first set of data packets from a first input means, wherein the first set of data pertains to a content associated with any or a combination of one or more text strings and one or more verbal/orally spoken sentences;
identify a second set of data packets selected from the received first set of data packets based on one or more recognition techniques, wherein at least one of the one or more recognition techniques comprises comparing of the received first set of data packets with a pre-configured dataset comprising a list of words to be identified out from the content.
10. The system as claimed in claim 9, wherein the system is configured between a router and
a server, and wherein the first input means and the first output means comprise any or a
combination of the router and the server.

Documents

Application Documents

# Name Date
1 202011004408-CLAIMS [12-11-2022(online)].pdf 2022-11-12
2 202011004408-Correspondence to notify the Controller [27-12-2024(online)]-1.pdf 2024-12-27
3 202011004408-IntimationOfGrant16-02-2025.pdf 2025-02-16
4 202011004408-STATEMENT OF UNDERTAKING (FORM 3) [31-01-2020(online)].pdf 2020-01-31
5 202011004408-Correspondence to notify the Controller [27-12-2024(online)].pdf 2024-12-27
6 202011004408-CORRESPONDENCE [12-11-2022(online)].pdf 2022-11-12
7 202011004408-FORM FOR STARTUP [31-01-2020(online)].pdf 2020-01-31
8 202011004408-PatentCertificate16-02-2025.pdf 2025-02-16
9 202011004408-Annexure [11-02-2025(online)].pdf 2025-02-11
10 202011004408-DRAWING [12-11-2022(online)].pdf 2022-11-12
11 202011004408-FORM FOR SMALL ENTITY(FORM-28) [31-01-2020(online)].pdf 2020-01-31
12 202011004408-FORM-26 [27-12-2024(online)].pdf 2024-12-27
13 202011004408-FER_SER_REPLY [12-11-2022(online)].pdf 2022-11-12
14 202011004408-FORM 1 [31-01-2020(online)].pdf 2020-01-31
15 202011004408-US(14)-HearingNotice-(HearingDate-30-12-2024).pdf 2024-12-16
16 202011004408-Written submissions and relevant documents [11-02-2025(online)].pdf 2025-02-11
17 202011004408-FER.pdf 2022-05-13
18 202011004408-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [31-01-2020(online)].pdf 2020-01-31
19 202011004408-Correspondence to notify the Controller [20-01-2025(online)].pdf 2025-01-20
20 202011004408-US(14)-ExtendedHearingNotice-(HearingDate-27-01-2025)-1030.pdf 2025-01-18
21 202011004408-FORM 18 [17-09-2021(online)].pdf 2021-09-17
22 202011004408-EVIDENCE FOR REGISTRATION UNDER SSI [31-01-2020(online)].pdf 2020-01-31
23 202011004408-DRAWINGS [31-01-2020(online)].pdf 2020-01-31
24 202011004408-FORM-26 [03-03-2020(online)].pdf 2020-03-03
25 202011004408-DECLARATION OF INVENTORSHIP (FORM 5) [31-01-2020(online)].pdf 2020-01-31
26 202011004408-Proof of Right [03-03-2020(online)].pdf 2020-03-03
27 202011004408-COMPLETE SPECIFICATION [31-01-2020(online)].pdf 2020-01-31
28 abstract.JPG 2020-02-05

Search Strategy

1 202011004408hatespeechrecognitionusingCNNE_13-05-2022.pdf
