
System And Method For Detection Of An Anomaly In A Network

Abstract: A system (200) and processor-implemented method (500) for determining an anomaly in a network are provided. The method includes comparing (502), by an analysis engine (306), one or more current values of queries with a learned normal behaviour pattern. The method (500) further includes indicating (504), by the analysis engine (306), a presence of an anomaly upon detecting a deviation from the normal behaviour, and defining (506) at least one of one or more thresholds and one or more rules based on a result of detection. [FIG. 4]


Patent Information

Application #
Filing Date: 14 July 2023
Publication Number: 03/2025
Publication Type: INA
Invention Field: COMPUTER SCIENCE
Status
Parent Application

Applicants

JIO PLATFORMS LIMITED
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.

Inventors

1. BHATNAGAR, Aayush
Tower-7, 15B, Beverly Park, Sector-14 Koper Khairane, Navi Mumbai - 400701, Maharashtra, India.
2. MURARKA, Ankit
W-16, F-1603, Lodha Amara, Kolshet Road, Thane West - 400607, Maharashtra, India.
3. KOLARIYA, Jugal Kishore
C 302, Mediterranea CHS Ltd, Casa Rio, Palava, Dombivli - 421204, Maharashtra, India.
4. KUMAR, Gaurav
1617, Gali No. 1A, Lajjapuri, Ramleela Ground, Hapur - 245101, Uttar Pradesh, India.
5. SAHU, Kishan
Ajay Villa, Gali No. 2, Ambedkar Colony, Bikaner - 334003, Rajasthan, India.
6. VERMA, Rahul
A-154, Shradha Puri Phase-2, Kanker Khera, Meerut - 250001, Uttar Pradesh, India.
7. MEENA, Sunil
D-29/1, Chitresh Nagar, Borkhera, District - Kota - 324001, Rajasthan, India.
8. GURBANI, Gourav
I-1601, Casa Adriana, Downtown, Palava Phase 2, Dombivli - 421204 Maharashtra, India.
9. CHAUDHARY, Sanjana
Jawaharlal Road, Muzaffarpur - 842001, Bihar, India.
10. GANVEER, Chandra Kumar
Village - Gotulmunda, Post - Narratola, Dist. - Balod - 491228, Chhattisgarh, India.
11. DE, Supriya
G2202, Sheth Avalon, Near Jupiter Hospital Majiwada, Thane West - 400601, Maharashtra, India.
12. KUMAR, Debashish
Bhairaav Goldcrest Residency, E-1304, Sector 11, Ghansoli, Navi Mumbai - 400701, Maharashtra, India.
13. TILALA, Mehul
64/11, Manekshaw Marg, Manekshaw Enclave, Delhi Cantonment, New Delhi - 110010, India.
14. KALIKIVAYI, Srinath
3-61, Kummari Bazar, Madduluru Village, S N Padu Mandal, Prakasam District, Andhra Pradesh - 523225, India
15. PANDEY, Vitap
D 886, World Bank Barra, Kanpur - 208027, Uttar Pradesh, India.

Specification

FORM 2
THE PATENTS ACT, 1970
THE PATENTS RULES, 2003
COMPLETE SPECIFICATION
APPLICANT
JIO PLATFORMS LIMITED
of Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India; Nationality: India
The following specification particularly describes
the invention and the manner in which
it is to be performed

RESERVATION OF RIGHTS
[0001] A portion of the disclosure of this patent document contains material which is subject to intellectual property rights such as, but not limited to, copyright, design, trademark, Integrated Circuit (IC) layout design, and/or trade dress protection, belonging to Jio Platforms Limited (JPL) or its affiliates (hereinafter referred to as the owner). The owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all rights whatsoever. All rights to such intellectual property are fully reserved by the owner.
FIELD OF THE INVENTION
[0002] The present disclosure relates to troubleshooting faults in a communication network. More particularly, the present disclosure relates to a system and a processor-implemented method for detecting an anomaly in a network.
DEFINITION
[0003] “User Profiles” refers to a collection of information associated with a particular user. A user profile can be defined as the explicit digital representation of the identity of the user with respect to different parameters, such as the policies or rules that monitor a particular use case and the thresholds associated with those policies. The user profile depends on the user and the use case. For a given user, the profile can be created based on the kind of data that user visualizes.
[0004] “Use cases” refer to methodologies and processes used in network deployment, mobility, software development, product design, and other fields to describe how a system can be used to achieve specific goals or tasks. The use cases include monitoring minutes of usage (MOU), attempted calls, and answered calls for a pre-defined duration (such as the previous hour) and aggregating the plurality of profiles based on each circle and each quarter. As the name suggests, various key performance indicators (KPIs) are linked or attached to different use cases. These KPIs effectively become use cases when they cover end-to-end scenarios.
[0005] “Policy” refers to a statement of intent and is implemented as a procedure or protocol. The Policy Control Function in 5G networks is a key component that enables efficient policy control and management, facilitating network behavior control, network slicing, UE activities, and communication with other 5G core network functions. A policy is a combination of one or more use cases and one or more rules, and is applied on data.
[0006] “Rules” refer to clauses or definitions which are defined for these KPIs. For example, a rule may state that the answer-seizure ratio (ASR), as a KPI, should be between 40 and 50. The answer-seizure ratio (ASR) is a measurement of network quality and call success rates in telecommunication. It is the percentage of answered telephone calls with respect to the total call volume.
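By way of illustration, the ASR rule above can be expressed as a simple band check on a computed KPI. The following Python sketch is not part of the specification; the function names and the sample call counts are hypothetical.

```python
# Minimal sketch (illustrative only): computing the ASR KPI and evaluating the
# rule "ASR should be between 40 and 50". The function names and sample values
# are hypothetical and not taken from the specification.

def answer_seizure_ratio(answered_calls: int, total_calls: int) -> float:
    """ASR = answered calls as a percentage of total call volume."""
    if total_calls == 0:
        return 0.0
    return 100.0 * answered_calls / total_calls


def rule_violated(kpi_value: float, lower: float, upper: float) -> bool:
    """A rule is violated when the KPI falls outside its allowed band."""
    return not (lower <= kpi_value <= upper)


if __name__ == "__main__":
    asr = answer_seizure_ratio(answered_calls=3800, total_calls=10000)  # 38.0
    print(f"ASR = {asr:.1f}%, violated = {rule_violated(asr, 40.0, 50.0)}")
```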
BACKGROUND ART
[0007] The following description of related art may be intended to provide background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section is to be used only to enhance the understanding of the reader with respect to the present disclosure, and not as an admission of prior art.
[0008] Typically, anomalies in network behavior represent deviations from what is normal, standard, or expected. To detect network anomalies, network owners must have a concept of expected or normal behavior. Detection of anomalies in network behavior demands the continuous monitoring of a network for unexpected trends or events. In order to quickly rectify a problem in a network, or to keep the downtime of a part or the whole of the network to a minimum, there may be a requirement to monitor various types of data and to determine which of the data may be relevant to diagnosis. Conventionally, a policy or a set of rules may be applied by a user to a system, and only the data related to the applied rules may be monitored. In case a user needs to monitor a different use case or a different set of data, the user may need to generate a separate set of policies or rules and apply them to a troubleshooting system. Such a process may be time-consuming and cumbersome and may lead to a longer downtime of the network.
[0009] To address these challenges, there is a need in the art for a means to find anomalies in real or near-real time, and to quickly deploy countermeasures to rectify any anomaly that is detected.
OBJECT OF THE INVENTION
[0010] Some of the objects of the present disclosure, which at least one embodiment herein satisfies, are listed herein below.
[0011] A primary object of the embodiments of the present invention is to provide a system for policy management.
[0012] Yet another object of the embodiments of the present invention is to provide a system where a user is allowed to define their own profile in which the user defines their own rules to monitor a particular use case.
[0013] Yet another object of the embodiments of the present invention is to provide a system where policies are scheduled at a scheduling layer and triggered as per user requirement.
[0014] Yet another object of the embodiments of the present invention is to provide a system where an artificial intelligence (AI)/machine learning (ML) engine may be configured to update one or more of a plurality of policies and a user can define more than one profile to track multiple use cases, and where each profile threshold can be updated after applying a trained AI model.
[0015] Yet another object of the embodiments of the present invention is to provide a system where an AI/ML engine may update thresholds for each policy of the user in real or near-real time.

[0016] Yet another object of the embodiments of the present invention is to provide a
system where an AI/ML engine may assist in deploying countermeasures for a fault
detected.
[0017] These and other objectives and advantages of the embodiments of the present
invention will become readily apparent from the following detailed description taken
in conjunction with the accompanying drawings.
SUMMARY OF THE INVENTION
[0018] The following details present a simplified summary of the embodiments of the present invention to provide a basic understanding of the several aspects of the embodiments of the present invention. This summary is not an extensive overview of the embodiments of the present invention. It is not intended to identify key/critical elements of the embodiments of the present invention or to delineate the scope of the embodiments of the present invention. Its sole purpose is to present the concepts of the embodiments of the present invention in a simplified form as a prelude to the more detailed description that is presented later.
[0019] The other objects and advantages of the embodiments of the present invention will become readily apparent from the following description taken in conjunction with the accompanying drawings. It should be understood, however, that the following descriptions, while indicating preferred embodiments and numerous specific details thereof, are given by way of illustration and not of limitation. Many changes and modifications may be made within the scope of the embodiments of the present invention without departing from the spirit thereof, and the embodiments of the present invention include all such modifications.
[0020] Embodiments herein relate to a system for detecting an anomaly in a network. The system may provide a means to automatically detect anomalies in the network using trained AI models, define policies or rules on queries or workflows, and take necessary actions. The policies may be scheduled, such that they may be tracked in real or near-real time. Further, the system may provide a means to notify responsible entities, such as end users or other authorized entities, for better understanding. A user may define a plurality of rules for different use cases in a single policy and compare the data. The user may receive a notification if some unexpected trends appear in the data. Further, an artificial intelligence (AI)/machine learning (ML) layer may be configured to update, in real time or near real time, the profiles of the users by updating thresholds associated with respective policies. The AI engine may facilitate appropriate countermeasures to be deployed.
[0021] According to an aspect of the present technology, a processor-implemented method for detecting an anomaly in a network is provided. The method includes comparing, by an analysis engine, one or more current values of queries with a learned normal behaviour pattern to identify a deviation from the normal behaviour. As used herein, the term “learned normal behaviour pattern” refers to a pattern comprising learned actions adopted by the user at a specific time based on the user’s own learning motivations and beliefs. The method further includes indicating, by the analysis engine, a presence of an anomaly upon detecting a deviation from the normal behaviour, and defining at least one of one or more thresholds or one or more rules based on a result of the detection. The one or more thresholds specify a severity level of the anomaly based on the deviation from the learned normal behaviour pattern.
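As a hedged illustration of how a severity level could be derived from the deviation between a current value and a learned normal pattern, the sketch below maps a normalized deviation to severity bands. The baseline statistics, band edges, and severity labels are assumptions for illustration, not the claimed algorithm.

```python
# Minimal sketch (assumption, not the claimed algorithm): mapping the deviation of
# a current query value from a learned baseline to a severity level. The baseline
# statistics, band edges, and severity labels are hypothetical.

from dataclasses import dataclass


@dataclass
class Baseline:
    mean: float   # learned "normal" value for the query/KPI
    std: float    # learned spread of that value


def severity(current_value: float, baseline: Baseline) -> str:
    """Larger deviations from the learned normal pattern yield higher severity."""
    if baseline.std == 0:
        return "critical" if current_value != baseline.mean else "normal"
    z = abs(current_value - baseline.mean) / baseline.std
    if z < 2.0:
        return "normal"
    if z < 3.0:
        return "minor"
    if z < 4.0:
        return "major"
    return "critical"


print(severity(current_value=12.5, baseline=Baseline(mean=10.0, std=1.0)))  # minor
```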
[0022] According to one embodiment of the present technology, a policy is scheduled at a scheduling layer and the scheduled policy is triggered based on a user requirement.
[0023] According to one embodiment of the present technology, the thresholds are one of statically defined or dynamically adjusted.
[0024] According to one embodiment of the present technology, the method further includes updating a policy output with one or more latest values.
[0025] According to another aspect of the present technology, a system for detecting an anomaly in a network in real or near-real time is provided. The system includes a processor configured to fetch and execute computer-readable instructions stored in a memory of the system. The system further includes a memory configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, fetched and executed to create or share data packets over a network service. The system further includes an interface to provide a communication pathway for one or more components of the system. The system further includes a user interface for configuring a plurality of profiles for one or more use cases. The system further includes an artificial intelligence (AI) engine for updating a plurality of profiles comprising a plurality of thresholds, for keeping the plurality of profiles updated in one of: real time or near-real time. The system further includes an analysis engine for comparing one or more current values of queries with a learned normal behaviour pattern to identify a deviation from a normal behaviour, indicating a presence of an anomaly upon detecting a deviation from the normal behaviour, and defining one of one or more thresholds or one or more rules based on a result of the detection.
[0026] According to one embodiment of the present technology, the analysis engine is further configured to schedule a policy at a scheduling layer and trigger the scheduled policy based on a user requirement.
[0027] According to one embodiment of the present technology, the analysis engine is further configured to update a policy output with one or more latest values.
[0028] The present disclosure discloses a user equipment (UE) configured for detecting an anomaly in a network. The user equipment includes a processor and a computer-readable storage medium storing programming for execution by the processor. The programming includes instructions to compare one or more current values of queries with a learned normal behaviour pattern to identify a deviation from a normal behaviour, indicate a presence of an anomaly upon detecting a deviation from the normal behaviour, and define at least one of: one or more thresholds or one or more rules based on a result of the detection, where the one or more thresholds specify a severity level of the anomaly based on the deviation from the normal behaviour.
[0029] According to yet another aspect of the present technology, a computer program product comprising a non-transitory computer-readable medium is provided. The non-transitory computer-readable medium comprises instructions that, when executed by one or more processors, cause the one or more processors to perform a method for detecting an anomaly in a network. The method includes comparing, by an analysis engine, one or more current values of queries with a learned normal behaviour pattern to identify a deviation from a normal behaviour. As used herein, the term “learned normal behaviour pattern” refers to a pattern comprising learned actions adopted by the user at a specific time based on the user’s own learning motivations and beliefs. The method further includes indicating, by the analysis engine, a presence of an anomaly upon detecting a deviation from the normal behaviour, and defining at least one of one or more thresholds or one or more rules based on the detection. The one or more thresholds specify a severity level of the anomaly based on the deviation from the learned normal behaviour.
[0030] The various embodiments of the present technology offer automatic detection of anomalies using trained models and the definition of policies or rules on queries or workflows so that necessary actions can be taken. The policies are scheduled so that they can be tracked in near real time and the responsible person is notified for better understanding. Further, the user can define multiple rules for different use cases in a single policy and compare the data. The user gets a notification if some unexpected trends appear in the data. Using the present technology, the user can configure multiple profiles for different use cases. Moreover, there is no need to define multiple policies for the same user who wants to monitor multiple clauses. For example, a user may want to monitor MOU, attempted calls, and answered calls for the previous hour and aggregate the results based on each circle and each quarter. These features give the end user flexibility to create a policy without knowing the previous trends. The models compare the current values of the queries with the learned normal patterns. When a deviation from the normal behavior is detected, the present system indicates the presence of an anomaly and defines thresholds or rules based on the anomaly detection results. These thresholds or rules determine when the anomaly is triggered and specify the severity level of the anomaly based on the deviation from the normal behavior. The thresholds can be statically defined or dynamically adjusted based on historical data or statistical methods. The present invention provides a system for policy management. The present technology provides a system where a plurality of policies may be provided to the system. The present technology provides a system where any one or more of a plurality of policies may be configured or implemented by a user. The present technology provides a system where an artificial intelligence (AI)/machine learning (ML) engine may be configured to update one or more of a plurality of policies. The present technology provides a system where an AI/ML engine may update thresholds for each policy of the user in real or near-real time. The present technology provides a system where an AI/ML engine may assist in deploying countermeasures for a detected fault.
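As a hedged illustration of the statement that thresholds may be statically defined from historical data or statistical methods, the sketch below derives a fixed minimum/maximum band from a window of historical KPI samples. The statistic chosen (mean plus/minus a multiple of the standard deviation), the sample values, and the names are assumptions, not taken from the specification.

```python
# Minimal sketch (assumption): deriving static min/max thresholds for a KPI from
# historical samples, using simple statistics. A deployed system could instead use
# the dynamically adjusted thresholds described later in the specification.

import statistics


def static_thresholds(history: list[float], k: float = 3.0) -> tuple[float, float]:
    """Return a (minimum, maximum) band as mean +/- k standard deviations."""
    mean = statistics.fmean(history)
    std = statistics.pstdev(history)
    return mean - k * std, mean + k * std


hourly_mou = [410.0, 395.0, 402.0, 420.0, 415.0, 399.0, 408.0]  # hypothetical history
low, high = static_thresholds(hourly_mou)
print(f"MOU band: {low:.1f} .. {high:.1f}")
```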
[0031] The foregoing description of the specific embodiments will so fully reveal the general nature of the embodiments of the present invention that others can, by applying current knowledge, readily modify and/or adapt such specific embodiments for various applications without departing from the generic concept, and, therefore, such adaptations and modifications should be and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments.
BRIEF DESCRIPTION OF THE DRAWINGS
[0032] The accompanying drawings, which are incorporated herein and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes the disclosure of electrical components, electronic components, or circuitry commonly used to implement such components.
[0033] FIG. 1 illustrates an exemplary network architecture, in which or with which embodiments of the present disclosure may be implemented;
[0034] FIG. 2 illustrates an exemplary block diagram of a system for determining an anomaly in a network and triggering a countermeasure, in accordance with an embodiment of the present disclosure;
[0035] FIG. 3 illustrates an exemplary schematic diagram of the system for policy management, in accordance with an embodiment of the present disclosure;
[0036] FIG. 4 illustrates a sequential flow diagram depicting the operation of the system for policy management, in accordance with an embodiment of the present disclosure;
[0037] FIG. 5 illustrates a flowchart of a processor-implemented method for determining an anomaly in a network and triggering a countermeasure, in accordance with an embodiment of the present disclosure; and
[0038] FIG. 6 illustrates an exemplary computer system in which or with which embodiments of the present disclosure may be implemented.
LIST OF REFERENCE NUMERALS
100 – Network Architecture
102 – User
104 – User Equipment
106 – Network
108 – System
112 – Centralized Server
202 – Processor
204 – Memory
206 – Interface
210 – Processing Engines
220 – Database
212 – Artificial Intelligence (AI) Engine
216 – Other Engines
302 – User Interface
304 – Load Balancer
214, 306 – Analysis Engine
308 – Scheduling Layer
312 – Distributed Data Lake
600 – Computer System
610 – External Storage Device
620 – Bus
630 – Main Memory
640 – Read Only Memory
650 – Mass Storage Device
660 – Communication Port
670 – Computer System Processor
DETAILED DESCRIPTION OF THE EMBODIMENTS
[0039] In the following description, for the purposes of explanation, various specific details are set forth in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter can each be used independently of one another or with any combination of other features. An individual feature may not address all of the problems discussed above or might address only some of the problems discussed above. Some of the problems discussed above might not be fully addressed by any of the features described herein.
[0040] The ensuing description provides exemplary embodiments only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the disclosure as set forth.
[0041] Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, networks, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail. In other instances, well-known circuits, processes, algorithms, structures, and techniques may be shown without unnecessary detail in order to avoid obscuring the embodiments.
[0042] Also, it is noted that individual embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed but could have additional steps not included in a figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination can correspond to a return of the function to the calling function or the main function.
[0043] The word “exemplary” and/or “demonstrative” is used herein to mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive in a manner similar to the term “comprising” as an open transition word without precluding any additional or other elements.
[0044] Reference throughout this specification to “one embodiment” or “an embodiment” or “an instance” or “one instance” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
[0045] The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items.
[0046] According to an embodiment, a system for detecting an anomaly in a network is disclosed. The system may provide a means to automatically detect anomalies in a network using trained models, define policies or rules on queries or workflows, and take necessary actions. As used herein, the term “anomaly” refers to network behavior that deviates from what is normal, standard, or expected. To detect network anomalies, network owners must have a concept of expected or normal behavior. The detection of anomalies in network behavior demands the continuous monitoring of a network for unexpected trends or events. The policies may be scheduled, such that they may be tracked in real or near-real time. Further, the system may provide a means to notify responsible entities, such as end users or other authorized entities, for better understanding. In some embodiments, the user may define a plurality of rules for different use cases in a single policy and compare the data. The user may receive a notification when some unexpected trends appear in the data. Further, an artificial intelligence (AI)/machine learning (ML) layer may be configured to update, in real time or near real time, profiles of the users by updating one or more thresholds associated with respective policies. The AI engine may facilitate appropriate countermeasures to be deployed. Using this feature, users can take immediate action before major failures originate in the network. The policy management uses minimum and maximum threshold values coming from machine learning algorithms, depending on time and geography. For example, users in Mumbai may have a different threshold value than users in Uttar Pradesh: users in Uttar Pradesh wake up early and start making calls while users in Mumbai wake up later, so the MOU (minutes of usage) will be higher for Uttar Pradesh users in the early morning.
[0047] The feature automatically detects anomalies using trained artificial intelligence models, defines policies or rules on queries or workflows, and takes necessary actions. The policies are scheduled so that they can be tracked in near real time and the responsible person is notified for better understanding. The user can define multiple rules for different use cases in a single policy and compare the data. The user gets a notification if some unexpected trends appear in the data. Using the present technology, the user can configure multiple profiles for different use cases. The user does not need to define multiple policies for the same user who wants to monitor multiple clauses. For example, the user wants to monitor MOU, attempted calls, and answered calls for the previous hour and aggregate the results based on each circle and each quarter. This feature gives the end user flexibility to create policies without knowing the previous trends. The models compare the current values of the queries with the learned normal patterns. If a deviation from the normal behavior is detected, the present system indicates the presence of an anomaly and then defines thresholds or rules based on the anomaly detection results. These thresholds or rules determine when an anomaly is triggered and specify the severity level of the anomaly based on the deviation from the normal behavior. The thresholds can be statically defined or dynamically adjusted based on historical data or statistical methods. In some embodiments, the user defines their own profile in which the user defines their own rules to monitor a particular use case. In some embodiments, the user can define more than one profile to track multiple use cases, and each profile threshold will be updated after applying a trained machine learning model and will be updated in near real time as the policy is executed. The policy will be scheduled at the scheduling layer and triggered as per user requirements. The policy output will be kept updated with the latest values.
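To make the profile/policy relationship described above concrete, the following Python sketch models a user-defined profile that tracks several use cases aggregated per circle under a single policy. All class and field names are illustrative assumptions; the specification does not prescribe this structure.

```python
# Minimal sketch (assumption): one policy holding several rules for different use
# cases, grouped under a user-defined profile. Field names are hypothetical.

from dataclasses import dataclass, field


@dataclass
class Rule:
    kpi: str              # e.g. "MOU", "attempted_calls", "answered_calls"
    minimum: float        # lower threshold (may be updated by the AI/ML engine)
    maximum: float        # upper threshold


@dataclass
class Profile:
    name: str
    circle: str           # telecom circle, e.g. "Mumbai" or "Uttar Pradesh"
    rules: list[Rule] = field(default_factory=list)


@dataclass
class Policy:
    name: str
    schedule: str         # e.g. "every 15 minutes" for per-quarter aggregation
    profiles: list[Profile] = field(default_factory=list)


policy = Policy(
    name="hourly-traffic-watch",
    schedule="every 15 minutes",
    profiles=[
        Profile(
            name="mumbai-voice",
            circle="Mumbai",
            rules=[Rule("MOU", 300.0, 500.0), Rule("answered_calls", 8000.0, 12000.0)],
        )
    ],
)
print(policy.name, len(policy.profiles[0].rules))
```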
[0048] The various embodiments of the present disclosure will be explained in detail with reference to FIGs. 1 – 6.
[0049] FIG. 1 illustrates an exemplary network architecture 100, in which or with which embodiments of the present disclosure may be implemented. Referring to FIG. 1, the network architecture 100 may include one or more computing devices or user equipment (104-1, 104-2…104-N) associated with one or more users (102-1, 102-2…102-N) in an environment. A person of ordinary skill in the art will understand that the one or more users (102-1, 102-2…102-N) may be individually referred to as the user 102 and collectively referred to as the users 102. Similarly, a person of ordinary skill in the art will understand that the one or more user equipment (104-1, 104-2…104-N) may be individually referred to as the user equipment 104 and collectively referred to as the user equipment 104. A person of ordinary skill in the art will appreciate that the terms “computing device(s)” and “user equipment” may be used interchangeably throughout the disclosure. Although three user equipment 104 are depicted in FIG. 1, any number of the user equipment 104 may be included without departing from the scope of the ongoing description.
[0050] In an embodiment, the user equipment 104 may include, but is not limited to, a handheld wireless communication device (e.g., a mobile phone, a smart phone, a phablet device, and so on), a wearable computer device (e.g., a head-mounted display computer device, a head-mounted camera device, a wristwatch computer device, and so on), a Global Positioning System (GPS) device, a laptop computer, a tablet computer, or another type of portable computer, a media playing device, a portable gaming system, and/or any other type of computer device with wireless communication capabilities, and the like. In an embodiment, the user equipment 104 may include, but is not limited to, any electrical, electronic, or electro-mechanical equipment, or a combination of one or more of the above devices, such as virtual reality (VR) devices, augmented reality (AR) devices, a laptop, a general-purpose computer, a desktop, a personal digital assistant, a tablet computer, a mainframe computer, or any other computing device, wherein the user equipment 104 may include one or more in-built or externally coupled accessories including, but not limited to, a visual aid device such as a camera, an audio aid, a microphone, a keyboard, and input devices for receiving input from the user 102 or the entity, such as a touch pad, a touch-enabled screen, an electronic pen, and the like. A person of ordinary skill in the art will appreciate that the user equipment 104 may not be restricted to the mentioned devices and various other devices may be used.
[0051] Referring to FIG. 1, the user equipment 104 may communicate with a system 200, for example, a system for policy management, through a network 106. In an embodiment, the network 106 may include at least one of a fifth generation (5G) network, a sixth generation (6G) network, or the like. The network 106 may enable the user equipment 104 to communicate with other devices in the network architecture 100 and/or with the system 108. The network 106 may include a wireless card or some other transceiver connection to facilitate this communication. In another embodiment, the network 106 may be implemented as, or include, any of a variety of different communication technologies such as a wide area network (WAN), a local area network (LAN), a wireless network, a mobile network, a Virtual Private Network (VPN), the Internet, the Public Switched Telephone Network (PSTN), or the like.
[0052] In another exemplary embodiment, the centralized server 112 may include or comprise, by way of example but not limitation, one or more of: a stand-alone server, a server blade, a server rack, a bank of servers, a server farm, hardware supporting a part of a cloud service or system, a home server, hardware running a virtualized server, one or more processors executing code to function as a server, one or more machines performing server-side functionality as described herein, at least a portion of any of the above, or some combination thereof.
[0053] Although FIG. 1 shows exemplary components of the network architecture 100, in other embodiments, the network architecture 100 may include fewer components, different components, differently arranged components, or additional functional components than depicted in FIG. 1. Additionally, or alternatively, one or more components of the network architecture 100 may perform functions described as being performed by one or more other components of the network architecture 100.
[0054] FIG. 2 illustrates an exemplary block diagram of the system 200 for determining an anomaly in a network and triggering countermeasures, in accordance with an embodiment of the present disclosure. The system includes a processor 202, a memory 204, an interface 206, a processing engine 210, an AI/ML engine 212, a database 220, an analysis engine 214, and other engines 216. The system 200 may include one or more processors 202 and a memory 204 communicably coupled to the one or more processors 202. The one or more processor(s) 202 may be implemented as one or more microprocessors, microcomputers, microcontrollers, edge or fog microcontrollers, digital signal processors, central processing units, logic circuitries, and/or any devices that process data based on operational instructions. Among other capabilities, the one or more processor(s) 202 may be configured to fetch and execute computer-readable instructions stored in the memory 204 of the system 200. The memory 204 may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory 204 may include any non-transitory storage device including, for example, volatile memory such as Random-Access Memory (RAM), or non-volatile memory such as Erasable Programmable Read-Only Memory (EPROM), flash memory, and the like.
[0055] In an embodiment, the system 200 may include an interface(s) 206. The interface(s) 206 may include a variety of interfaces, for example, interfaces for data input and output devices, referred to as I/O devices, storage devices, and the like. The interface(s) 206 may facilitate communication of the system 200. The interface(s) 206 may also provide a communication pathway for one or more components of the system 200. Examples of such components include, but are not limited to, the processing unit/engine(s) 210 and a database 220.
[0056] The processing unit/engine(s) 210 may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processing engine(s) 210. In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processing engine(s) 210 may be processor-executable instructions stored on a non-transitory machine-readable storage medium, and the hardware for the processing engine(s) 210 may comprise a processing resource (for example, one or more processors) to execute such instructions. In the present examples, the machine-readable storage medium may store instructions that, when executed by the processing resource, implement the processing engine(s) 210. In such examples, the system 200 may include the machine-readable storage medium storing the instructions and the processing resource to execute the instructions, or the machine-readable storage medium may be separate but accessible to the system 200 and the processing resource. In other examples, the processing engine(s) 210 may be implemented by electronic circuitry.
[0057] In some embodiments, the processing engine 210 may include an artificial intelligence/machine learning (AI/ML) engine 212. The AI/ML engine 212 may be configured to perform functions implementable by the processing engine 210. The AI/ML engine 212 may further be trained on real-life or simulated datasets. The AI/ML engine 212 updates each of the plurality of profiles comprising a plurality of thresholds, keeping the profiles updated in one of: real time or near-real time.
[0058] The system 200 further includes a user interface 302 for configuring a plurality of profiles for one or more use cases. According to one embodiment of the present technology, the one or more use cases include at least one of monitoring minutes of usage (MOU), attempted calls, and answered calls for the previous hour and aggregating the plurality of profiles based on each circle and each quarter.
[0059] The system 200 further includes an analysis engine 306 for comparing one or more current values of queries with a learned normal behaviour pattern to identify a deviation from a normal behaviour. In an example, the learned normal behaviour pattern includes learning actions adopted by the user at a specific time based on the user’s own learning motivations and beliefs. The analysis engine 306 is also configured to indicate a presence of an anomaly upon detecting a deviation from the normal behaviour and define at least one of: one or more thresholds or one or more rules based on a result of the detection. The one or more thresholds specify a severity level of the anomaly based on the deviation from the normal behaviour.
[0060] In an embodiment, the analysis engine 306 is responsible for creating policies based on the user-defined parameters and storing the policies in a distributed data lake. The analysis engine 306 also handles the validation of policy creation and scheduling requests, sending notifications to the user via the user interface 302 in case of any failures, along with specific reasons for the failure.
[0061] According to one embodiment of the present technology, the analysis engine 306 is further configured to schedule a policy at a scheduling layer and trigger the scheduled policy based on a user requirement. In an embodiment, the scheduling layer parses the scheduling requests received from the analysis engine 306, creates jobs based on the specified scheduling frequency and policy details, and monitors the execution of the policies by the analysis engine 306. When a policy is successfully scheduled and executed, the scheduling layer sends a success notification to the user via the user interface 302, including a severity level indicating the deviation from normal behavior and enabling the user to take immediate action if necessary.
[0062] In an embodiment, the AI/ML engine 212 determines and dynamically updates the policy thresholds for each data profile associated with the created policies. The AI/ML engine 212 retrieves historical data from the distributed data lake, preprocesses it, and trains the ML model to learn normal behavior patterns. The AI/ML engine 212 computes the dynamic policy thresholds based on various factors such as seasonality, trend, data distribution, time, and geography. The AI/ML engine 212 stores these thresholds in the distributed data lake 320 for use during policy execution. The AI/ML engine 212 continuously adjusts the thresholds in near real time by applying the trained ML model to incoming real-time data streams, detecting anomalies, comparing them against the dynamic thresholds, triggering breach alerts when anomalies exceed the thresholds, and updating the thresholds based on recent data patterns and detected anomalies. In an embodiment, the profile is updated with one or more latest values of the plurality of profiles. The thresholds can be one of: statically defined or dynamically adjusted.
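One way to picture the near-real-time adjustment described above is to keep exponentially weighted statistics over the incoming stream and recompute the band after each sample. The sketch below is an assumption for illustration only; the specification does not commit to this particular statistical method, and the decay factor, band width, and names are hypothetical.

```python
# Minimal sketch (assumption): dynamic thresholds maintained over a real-time
# stream using exponentially weighted mean/variance. A breach alert fires when a
# sample leaves the current band; the band then adapts to recent data.

class DynamicThreshold:
    def __init__(self, alpha: float = 0.1, width: float = 3.0):
        self.alpha = alpha          # weight of the newest sample
        self.width = width          # band half-width in standard deviations
        self.mean = None
        self.var = 0.0

    def update(self, value: float) -> bool:
        """Return True if `value` breaches the current band, then adapt the band."""
        if self.mean is None:
            self.mean = value
            return False
        std = self.var ** 0.5
        breach = abs(value - self.mean) > self.width * std if std > 0 else False
        # Exponentially weighted updates keep the band tracking recent patterns.
        delta = value - self.mean
        self.mean += self.alpha * delta
        self.var = (1 - self.alpha) * (self.var + self.alpha * delta * delta)
        return breach


threshold = DynamicThreshold()
for sample in [100, 102, 99, 101, 150, 103]:   # hypothetical KPI stream
    if threshold.update(sample):
        print(f"breach alert at value {sample}")
```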
[0063] FIG. 3 illustrates an exemplary schematic diagram 300 of the system 200 for determining an anomaly in a network, in accordance with an embodiment of the present disclosure. To implement the functionalities of the system 200, in some embodiments, the system 200 may include a user interface 302, a load balancer 304, an analysis engine 306, a scheduling layer 308, and an AI/ML engine 212.
[0064] The load balancer 304 is used for load balancing. Load balancing refers to efficiently distributing incoming network traffic across a group of backend servers. In an embodiment, the load balancer 304 may be configured to distribute incoming network traffic across the plurality of servers 112. The load balancer 304 may be configured to adjust the distribution of requests dynamically to ensure optimal resource utilization. The load balancer 304 is configured to determine a server 112 based on a number of parameters and forward the received request to the determined server 112. In an example, the parameters include a number of active requests served by each application server 112 and a threshold of requests to be served.
[0065] In an embodiment, the load balancer 304 acts as an entry point for policy creation and scheduling requests from the user interface 302. The load balancer 304 receives these requests, authenticates them by validating user credentials, and transfers the authenticated requests to the analysis engine 306 for further processing. Additionally, the load balancer 304 sends a confirmation of the transfer back to the user interface 302, keeping the user informed about the status of their request.
[0066] In an embodiment, the analysis engine 306 is responsible for creating policies based on the user-defined parameters received from the load balancer 304. The analysis engine 306 validates the created policies against a predefined schema and rules to ensure their integrity and consistency. Once validated, the analysis engine 306 stores the policies in the distributed data lake and passes the relevant information, including a unique policy identifier, to the policy execution module within itself. The analysis engine 306 also handles the validation of policy creation and scheduling requests, sending notifications to the user via the user interface 302 in case of any failures, along with specific reasons for the failure.
[0067] In an embodiment, the scheduling layer 308 plays a vital role in scheduling the execution of the created policies. The scheduling layer 308 parses the scheduling requests received from the analysis engine 306, creates jobs based on the specified scheduling frequency and policy details, and monitors the execution of the policies by the analysis engine 306. When a policy is successfully scheduled and executed, the scheduling layer 308 sends a success notification to the user via the user interface 302, including a severity level indicating the deviation from normal behavior, enabling the user to take immediate action if necessary.
[0068] In an embodiment, the AI/ML engine 212 is the core component responsible for determining and dynamically updating the policy thresholds for each data profile associated with the created policies. The AI/ML engine 212 retrieves historical data from the distributed data lake 320, preprocesses it, and trains the ML model to learn normal behavior patterns. The AI/ML engine 212 computes the dynamic policy thresholds based on various factors such as seasonality, trend, data distribution, time, and geography. The AI/ML engine 212 stores these thresholds in the distributed data lake 320 for use during policy execution. The AI/ML engine 212 continuously adjusts the thresholds in near real time by applying the trained ML model to incoming real-time data streams, detecting anomalies, comparing them against the dynamic thresholds, triggering breach alerts when anomalies exceed the thresholds, and updating the thresholds based on recent data patterns and detected anomalies.
[0069] FIG. 4 illustrates a sequential flow diagram 400 depicting the operation of the system 200 for policy management, in accordance with an embodiment of the present disclosure. At step 402, a policy creation or schedule request is sent from the user interface 302 to the load balancer 304. The user interface 302 allows users 350 to create policies, define data profiles, and specify rules for anomaly detection. It provides a user-friendly and intuitive means for users to input their requirements and receive notifications and policy outputs. The load balancer 304 receives the request and authenticates the request by validating the user credentials. Once the request is authenticated, the load balancer 304 transfers it to the analysis engine for further processing. The communication between the load balancer 304 and the analysis engine 306 is facilitated via a Hyper Text Transfer Protocol (HTTP) request, ensuring secure and efficient data transfer.
[0070] At step 404, a policy creation or schedule request is sent from the load balancer 304 to the analysis engine 306. At step 406, the analysis engine 306 performs policy creation using a JavaScript Object Notation (JSON) payload and creates a policy based on the user-defined parameters. These parameters include data profiles, thresholds, scheduling frequency, and configurations for multiple profiles to monitor different metrics for different use cases. At step 408, the policy information from the analysis engine 306 is stored in a database including the distributed data lake 320. The analysis engine 306 validates the created policy against a predefined schema and rules to ensure its integrity and consistency. If the validation is successful, the analysis engine 306 stores the validated policy in the distributed data lake 320. The distributed data lake 320 serves as a centralized repository for storing policies, historical data, and other relevant information. The analysis engine 306 and the distributed data lake 320 are connected via a transmission control protocol (TCP) connection, ensuring reliable and efficient data transfer. At step 410, a policy schedule request is sent from the analysis engine 306 to the scheduling layer 308, which is responsible for scheduling the execution of the created policies based on the scheduling frequency specified in the request. At step 412, a “policy scheduled successfully” message is sent from the scheduling layer 308 to the analysis engine 306. At steps 414 and 416, a “policy created successfully” message is sent from the analysis engine 306 to the user interface 302 via the load balancer 304. In case the validation fails, at steps 418 and 420, a “validation failed to create or schedule policy” message is sent from the analysis engine 306 to the user interface 302 via the load balancer 304. At step 422, a policy execution request is sent from the scheduling layer 308 to the analysis engine 306.
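To illustrate the kind of JSON payload the analysis engine might receive at step 406, the sketch below builds and lightly validates a hypothetical policy-creation request. The field names and the validation checks are assumptions; the specification mentions JSON, data profiles, thresholds, and scheduling frequency but does not define a schema.

```python
# Minimal sketch (assumption): a hypothetical policy-creation payload and a basic
# schema check, standing in for the JSON validation described at steps 406-408.

import json

policy_request = {
    "policy_name": "hourly-traffic-watch",          # hypothetical
    "scheduling_frequency": "PT15M",                 # every quarter hour
    "profiles": [
        {
            "profile_name": "mumbai-voice",
            "circle": "Mumbai",
            "rules": [
                {"kpi": "MOU", "min": 300, "max": 500},
                {"kpi": "answered_calls", "min": 8000, "max": 12000},
            ],
        }
    ],
}

REQUIRED_KEYS = {"policy_name", "scheduling_frequency", "profiles"}


def validate(request: dict) -> list[str]:
    """Return a list of validation failure reasons (empty when valid)."""
    errors = [f"missing key: {key}" for key in REQUIRED_KEYS - request.keys()]
    for profile in request.get("profiles", []):
        for rule in profile.get("rules", []):
            if rule["min"] > rule["max"]:
                errors.append(f"rule for {rule['kpi']}: min exceeds max")
    return errors


print(json.dumps(policy_request))
print("validation errors:", validate(policy_request))
```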
[0071] At step 424, the policy thresholds are updated using the AI/ML engine 212. When the analysis engine 306 executes the created policy, it receives real-time data streams and sends them to the AI/ML engine 212. The AI/ML engine 212 applies the trained ML model to the real-time data streams, comparing the current values of queries with learned normal patterns to detect anomalies.
[0072] At step 426, the policy is executed with the AI/ML thresholds. At step 428, a policy output is written into a file and the output is notified to the respective owners. If the detected anomalies exceed the dynamic policy thresholds, the AI/ML engine 212 triggers a policy breach alert and notifies the analysis engine 306. The analysis engine 306 then sends the notification to the user via the user interface 302, along with the updated policy output containing the latest metrics and KPIs.
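A simple way to picture steps 426–428 is a function that writes the latest policy output to a file and notifies the policy owners when a breach alert was raised. The file layout, the notification stub, and all names below are assumptions for illustration only.

```python
# Minimal sketch (assumption): persist the policy output and notify owners on a
# breach, standing in for steps 426-428. The notification is a stub; a real system
# might push to the user interface or another channel instead.

import json
from datetime import datetime, timezone


def notify(owner: str, message: str) -> None:
    print(f"[notify {owner}] {message}")            # placeholder channel


def write_policy_output(path: str, metrics: dict, breach: bool, owners: list[str]) -> None:
    output = {
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "metrics": metrics,
        "breach": breach,
    }
    with open(path, "w", encoding="utf-8") as handle:
        json.dump(output, handle, indent=2)
    if breach:
        for owner in owners:
            notify(owner, f"policy breach detected; output written to {path}")


write_policy_output(
    "policy_output.json",
    metrics={"MOU": 620, "answered_calls": 7100},   # hypothetical latest KPIs
    breach=True,
    owners=["noc-team"],
)
```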
[0073] Referring to FIGs. 3 and 4, the user 350 may configure a plurality of profiles for different use cases. There may not be a requirement to define a plurality of policies for the same user (e.g., the user 350) that is monitoring different or varied clauses. For instance, a user who is monitoring MOU, attempted calls, and answered calls for a duration of time, and aggregates based on one or more durations of time, may not need different profiles created for them. Hence, the user 350 may have flexibility for creating a policy without knowing previous trends. The system 200 may compare current values of the queries with the learned normal patterns. If a deviation from the normal behavior is detected, the system 200 may determine that the deviation may be indicative of the presence of an anomaly. The system 200 may further define thresholds or rules based on the anomaly detection results. Such thresholds or rules determine when an anomaly is triggered and specify a severity level of the anomaly based on the deviation from the normal behavior. For example, the greater the deviation, the greater may be the severity of the anomaly. The thresholds may be statically defined or may be dynamically adjusted based on historical data or statistical methods.
[0074] In an example scenario, the user 350 may define a policy name and a designated entity that may provide countermeasures in the event of detection of an anomaly in the network. Further, the user 350 may define their own profile including rules for monitoring a predefined set of use cases. The user 350 may define one or more profiles to track a corresponding one or more use cases. Each profile may include different thresholds that may be updated by the AI/ML engine 212. The AI/ML engine 212 may keep the profiles updated in real or near-real time, as policies are executed. Furthermore, the policy may be scheduled by the scheduling layer 308 and may be triggered as per requirements. The policy output may correspond to a latest updated policy.
[0075] Thus, the present system 200 may provide a means to automatically detect anomalies in a network using trained models, define policies or rules on queries or workflows, and take necessary actions. The policies may be scheduled, such that they may be tracked in real or near-real time. Further, the system 200 may provide a means to notify responsible entities, such as end users or other authorized entities, for better understanding. The user 350 may define a plurality of rules for different use cases in a single policy and compare the data. The user may receive a notification if some unexpected trends appear in the data. Such a feature may allow a user or an authorized entity to deploy countermeasures quickly before the anomaly in the network worsens. The policy management may use minimum and maximum threshold values determined by the AI engine depending on time and/or geography.
[0076] FIG. 5 illustrates a flowchart of a processor-implemented method 500 for detecting an anomaly in a network, in accordance with an embodiment of the present disclosure. At step 502, one or more current values of queries are compared, by an analysis engine associated with the user interface, with a learned normal behaviour pattern to identify a deviation from the normal behaviour. The learned normal behaviour pattern includes learning actions adopted by the user at a specific time based on the user’s own learning motivations and beliefs. At step 504, a presence of an anomaly is indicated by the analysis engine upon detecting a deviation from the normal behaviour, and at step 506 the analysis engine defines at least one of: one or more thresholds or one or more rules based on the detection. The one or more thresholds specify a severity level of the anomaly based on the deviation from the normal behaviour.
[0077] According to one embodiment of the present technology, the method further includes scheduling a policy at a scheduling layer and triggering the scheduled policy based on a user requirement.
[0078] According to one embodiment of the present technology, the thresholds are one of: statically defined or dynamically adjusted based on one of: historical data or statistical methods.
[0079] In an exemplary embodiment, the present disclosure discloses a user equipment which is configured to detect an anomaly in a network. The user equipment includes a processor, and a computer-readable storage medium storing programming instructions for execution by the processor. Under the programming instructions, the processor is configured to compare one or more current values of queries with a learned normal behaviour pattern to identify a deviation from a normal behaviour, indicate a presence of an anomaly upon detecting a deviation from the normal behaviour, and define at least one of: one or more thresholds or one or more rules based on a result of the detection, where the one or more thresholds specify a severity level of the anomaly based on the deviation from the normal behaviour.

[0080] FIG. 6 illustrates an exemplary computer system 600 in which or with which
embodiments of the present disclosure may be implemented. The computer system 600
may include an external storage device 610, a bus 620, a main memory 630, a read¬
only memory 640, a mass storage device 650, a communication port(s) 660, and a pro-
5 cessor 670. A person skilled in the art will appreciate that the computer system 600
may include more than one processor and communication ports. The processor 670
may include various modules associated with embodiments of the present disclosure.
The communication port(s) 660 may be any of an RS-232 port for use with a modem-
based dialup connection, a 10/100 Ethernet port, a Gigabit or 10 Gigabit port using
10 copper or fiber, a serial port, a parallel port, or other existing or future ports. The com-
munication ports(s) 660 may be chosen depending on a network, such as a Local Area Network (LAN), Wide Area Network (WAN), or any network to which the computer system 500 connects. [0081] In an embodiment, the main memory 630 may be Random Access Memory
(RAM), or any other dynamic storage device commonly known in the art. The read-only memory 640 may be any static storage device(s), e.g., but not limited to, a Programmable Read Only Memory (PROM) chip for storing static information, e.g., start-up or basic input/output system (BIOS) instructions for the processor 670. The mass storage device 650 may be any current or future mass storage solution, which can be used to store information and/or instructions. Exemplary mass storage solutions include, but are not limited to, Parallel Advanced Technology Attachment (PATA) or Serial Advanced Technology Attachment (SATA) hard disk drives or solid-state drives (internal or external, e.g., having Universal Serial Bus (USB) and/or Firewire interfaces).
[0082] In an embodiment, the bus 620 may communicatively couple the processor(s) 670 with the other memory, storage, and communication blocks. The bus 620 may be, e.g., a Peripheral Component Interconnect (PCI) / PCI Extended (PCI-X) bus, Small Computer System Interface (SCSI), USB, or the like, for connecting expansion cards,

drives, and other subsystems, as well as other buses, such as a front side bus (FSB), which connects the processor 670 to the computer system 600.
[0083] In another embodiment, operator and administrative interfaces, e.g., a display,
keyboard, and cursor control device may also be coupled to the bus 620 to support
direct operator interaction with the computer system 600. Other operator and adminis-
trative interfaces can be provided through network connections connected through the
communication port(s) 660. Components described above are meant only to exemplify
various possibilities. In no way should the aforementioned exemplary computer system
600 limit the scope of the present disclosure.
[0084] The present disclosure provides a technical advancement related to automatically detecting an anomaly using trained models, defining policies or rules on queries or workflows, and taking necessary actions. The policies are scheduled so that they can be tracked in near real time and the responsible person is notified for better understanding. The user can define multiple rules for different use cases in a single policy and compare the data, and the user is notified if unexpected trends appear in the data.
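By way of illustration only, the following sketch shows one possible shape of such a policy carrying multiple rules for different use cases, re-evaluated on a schedule with a notification when a rule triggers; the Policy and Rule classes, the print-based notifier, and the evaluation interval are assumptions and do not reflect an actual implementation of the disclosure.

```python
# Hedged sketch: a single policy holding multiple rules for different use cases,
# re-evaluated on a schedule so it can be tracked in near real time, with a
# notification when a rule triggers. All names and the interval are assumptions.
import threading


class Rule:
    def __init__(self, name, metric, threshold):
        self.name, self.metric, self.threshold = name, metric, threshold

    def violated(self, data):
        return data.get(self.metric, 0) > self.threshold


class Policy:
    def __init__(self, name, rules, notify):
        self.name, self.rules, self.notify = name, rules, notify

    def evaluate(self, data):
        for rule in self.rules:
            if rule.violated(data):
                self.notify(f"policy '{self.name}': rule '{rule.name}' triggered")


def schedule(policy, fetch_data, interval_seconds=60):
    """Periodically re-evaluate the policy against freshly fetched data."""
    def run():
        policy.evaluate(fetch_data())
        threading.Timer(interval_seconds, run).start()
    run()


# Example: one policy with two rules; the responsible person is notified via print.
policy = Policy(
    "voice-kpis",
    [Rule("mou-spike", "mou", 300), Rule("unanswered-calls", "unanswered", 50)],
    notify=print,
)
policy.evaluate({"mou": 320, "unanswered": 10})   # notifies on the MOU rule only
```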
[0085] While the foregoing describes various embodiments of the present disclosure,
other and further embodiments of the present disclosure may be devised without departing from the basic scope thereof. The scope of the present disclosure is determined by the claims that follow. The present disclosure is not limited to the described embodiments, versions or examples, which are included to enable a person having ordinary
skill in the art to make and use the present disclosure when combined with information
and knowledge available to the person having ordinary skill in the art.
TECHNICAL ADVANTAGES
[0086] The present disclosure described herein above has several technical advancements including, but not limited to, the realization of the system and the method that:
1. provides a system for policy management.
2. provides a system where a plurality of policies may be provided to the system.

3. provides a system where any one or more of a plurality of policies may be configured or implemented by a user.
4. provides a system where an artificial intelligence (AI)/machine learning (ML) engine may be configured to update one or more of a plurality of policies.
5. provides a system where an AI/ML engine may update thresholds for each policy of
the user in real or near-real time.
6. provides a system where an AI/ML engine may assist in deploying countermeasures for a fault detected, as illustrated in the sketch following this list.
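By way of illustration of advantages 5 and 6 above, and not by way of limitation, the sketch below shows a simple loop in which a per-policy threshold is refreshed from incoming data in near real time and a countermeasure hook is invoked when a fault is detected; the function names and the exponentially weighted update rule are illustrative assumptions only.

```python
# Hedged sketch of advantages 5 and 6: a threshold refreshed from incoming data
# in near real time and a countermeasure hook invoked when a fault is detected.
# The function names and the EWMA-style update rule are assumptions.
def update_threshold(old_threshold, observed, alpha=0.2, margin=1.5):
    """Exponentially weighted refresh of a per-policy threshold from new data."""
    return (1 - alpha) * old_threshold + alpha * observed * margin


def deploy_countermeasure(policy_name, value):
    """Placeholder countermeasure hook, e.g. raising a ticket or rerouting traffic."""
    print(f"countermeasure for {policy_name}: value {value} exceeded the threshold")


threshold = 150.0
for observed in (140, 145, 160, 320):            # the last sample represents a fault
    if observed > threshold:
        deploy_countermeasure("mou-policy", observed)
    threshold = update_threshold(threshold, observed)
```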
[0087] While considerable emphasis has been placed herein on the preferred embodi-
ments, it will be appreciated that many embodiments can be made and that many changes can be made in the preferred embodiments without departing from the principles of the disclosure. These and other changes in the preferred embodiments of the disclosure will be apparent to those skilled in the art from the disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be interpreted merely as illustrative of the disclosure and not as a limitation.

WE CLAIM:
1. A processor-implemented method (500) for determining an anomaly in a network using artificial intelligence, the method comprising:
comparing (502), by an analysis engine (306), one or more current values of queries with a learned normal behaviour pattern to identify a deviation from a normal behaviour;
indicating (504) a presence of an anomaly, by the analysis engine (306), upon detecting the deviation from the normal behaviour; and
defining (506), by the analysis engine (306), at least one of: one or more thresholds or one or more rules based on a result of the detection, wherein the one or more thresholds specify a severity level of the anomaly based on the deviation from the normal behaviour.
2. The processor-implemented method (500) as claimed in claim 1, further comprising: scheduling a policy at a scheduling layer and triggering the scheduled policy based on a user requirement.
3. The processor-implemented method (500) as claimed in claim 1, wherein the one or more thresholds are defined statically or dynamically based on one of: historical data or statistical methods.
4. The processor-implemented method (500) as claimed in claim 1, further comprising updating a policy output with one or more latest values.
5. The processor-implemented method (500) as claimed in claim 1, wherein the one or more current values of queries include current values associated with one or more parameters of minutes of usage (MOU), attempted calls, answered calls and aggregated values of the one or more parameters.
6. A system (200) for determining an anomaly in a network, the system (200) comprising:
a memory (204); and
a processing engine (210) comprising an analysis engine (306) configured to:
compare one or more current values of queries with a learned normal behaviour pattern to identify a deviation from a normal behaviour;
indicate a presence of an anomaly upon detecting the deviation from the normal behaviour; and
define at least one of: one or more thresholds or one or more rules based on a result of the detection, wherein the one or more thresholds specify a severity level of the anomaly based on the deviation from the normal behaviour.
7. The system (200) as claimed in claim 6, wherein the analysis engine (306) is further configured to:
schedule a policy at a scheduling layer and trigger the scheduled policy based on a user requirement.
8. The system (200) as claimed in claim 6, wherein the analysis engine (306) is further
configured to update a policy output with one or more latest values.
9. The system (200) as claimed in claim 6, wherein the one or more current values
of queries include current values associated with one or more parameters of minutes of
usage (MOU), attempted calls, answered calls and aggregated values of the one or more
parameters.

10. A user equipment (UE) (104) configured for determining an anomaly in a network
(106), the UE (104) comprising:
a processor; and
a computer readable storage medium storing programming for execution by the processor, the programming including instructions to:
compare one or more current values of queries with a learned normal behaviour pattern to identify a deviation from a normal behaviour;
indicate a presence of an anomaly upon detecting the deviation from the normal behaviour; and
define at least one of: one or more thresholds or one or more rules based on a result of the detection, wherein the one or more thresholds specify a severity level of the anomaly based on the deviation from the normal behaviour.
11. The user equipment (UE) of claim 10, further configured for:
scheduling a policy at a scheduling layer and triggering the scheduled policy based on a user requirement.

Documents

Application Documents

# Name Date
1 202321047449-STATEMENT OF UNDERTAKING (FORM 3) [14-07-2023(online)].pdf 2023-07-14
2 202321047449-PROVISIONAL SPECIFICATION [14-07-2023(online)].pdf 2023-07-14
3 202321047449-FORM 1 [14-07-2023(online)].pdf 2023-07-14
4 202321047449-DRAWINGS [14-07-2023(online)].pdf 2023-07-14
5 202321047449-DECLARATION OF INVENTORSHIP (FORM 5) [14-07-2023(online)].pdf 2023-07-14
6 202321047449-FORM-26 [13-09-2023(online)].pdf 2023-09-13
7 202321047449-POA [29-05-2024(online)].pdf 2024-05-29
8 202321047449-FORM 13 [29-05-2024(online)].pdf 2024-05-29
9 202321047449-AMENDED DOCUMENTS [29-05-2024(online)].pdf 2024-05-29
10 202321047449-Power of Attorney [04-06-2024(online)].pdf 2024-06-04
11 202321047449-Covering Letter [04-06-2024(online)].pdf 2024-06-04
12 202321047449-ORIGINAL UR 6(1A) FORM 26-120624.pdf 2024-06-20
13 202321047449-ENDORSEMENT BY INVENTORS [09-07-2024(online)].pdf 2024-07-09
14 202321047449-DRAWING [09-07-2024(online)].pdf 2024-07-09
15 202321047449-CORRESPONDENCE-OTHERS [09-07-2024(online)].pdf 2024-07-09
16 202321047449-COMPLETE SPECIFICATION [09-07-2024(online)].pdf 2024-07-09
17 202321047449-CORRESPONDENCE(IPO)-(WIPO DAS)-06-08-2024.pdf 2024-08-06
18 Abstract-1.jpg 2024-08-12
19 202321047449-FORM 18 [26-09-2024(online)].pdf 2024-09-26
20 202321047449-FORM 3 [04-11-2024(online)].pdf 2024-11-04