
Method And System For Detecting Anomalies At Edge Locations In A Network

Abstract: The present disclosure provides a method (300) and a system (108) for detecting anomalies at edge locations in a network based on an Artificial Intelligence (AI) model. The method includes receiving (302) a structured data corresponding to an edge location. The method includes processing (304) the structured data to extract a current set of features from the structured data. The method includes comparing (306) each of the current set of features with a corresponding training feature of a plurality of training features. The method includes identifying (308) a deviation within one or more of the current set of features from the corresponding training feature in response to comparing. The method includes detecting (310) one or more anomalies within the one or more of the current set of features based on the identified deviation and a pre-defined threshold. Figure 3


Patent Information

Filing Date: 21 July 2023
Publication Number: 04/2025
Publication Type: INA
Invention Field: COMPUTER SCIENCE

Applicants

JIO PLATFORMS LIMITED
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.

Inventors

1. BHATNAGAR, Aayush
Tower-7, 15B, Beverly Park, Sector-14 Koper Khairane, Navi Mumbai - 400701, Maharashtra, India.
2. MURARKA, Ankit
W-16, F-1603, Lodha Amara, Kolshet Road, Thane West - 400607, Maharashtra, India.
3. SAXENA, Gaurav
B1603, Platina Cooperative Housing Society, Casa Bella Gold, Kalyan Shilphata Road, Near Xperia Mall Palava City, Dombivli, Kalyan, Thane - 421204, Maharashtra, India.
4. SHOBHARAM, Meenakshi
2B-62, Narmada, Kalpataru, Riverside, Takka, Panvel, Raigargh - 410206, Maharashtra, India.
5. BHANWRIA, Mohit
39, Behind Honda Showroom, Jobner Road, Phulera, Jaipur - 303338, Rajasthan, India.
6. GAYKI, Vinay
259, Bajag Road, Gadasarai, District -Dindori - 481882, Madhya Pradesh, India.
7. KUMAR, Durgesh
Mohalla Ramanpur, Near Prabhat Junior High School, Hathras, Uttar Pradesh -204101, India.
8. BHUSHAN, Shashank
Fairfield 1604, Bharat Ecovistas, Shilphata, NH48, Thane - 421204, Maharashtra, India.
9. KHADE, Aniket Anil
X-29/9, Godrej Creek Side Colony, Phirojshanagar, Vikhroli East - 400078, Mumbai, Maharashtra, India.
10. KOLARIYA, Jugal Kishore
C 302, Mediterranea CHS Ltd, Casa Rio, Palava, Dombivli - 421204, Maharashtra, India.
11. VERMA, Rahul
A-154, Shradha Puri Phase-2, Kanker Khera, Meerut - 250001, Uttar Pradesh, India.
12. KUMAR, Gaurav
1617, Gali No. 1A, Lajjapuri, Ramleela Ground, Hapur - 245101, Uttar Pradesh, India.
13. MEENA, Sunil
D-29/1, Chitresh Nagar, Borkhera District-Kota, Rajasthan - 324001, India.
14. SAHU, Kishan
Ajay Villa, Gali No. 2 Ambedkar Colony, Bikaner, Rajasthan - 334003, India.
15. RAJANI, Manasvi
C-22, Old Jawahar Nagar, Kota, Rajasthan - 324005, India.
16. GANVEER, Chandra Kumar
Village - Gotulmunda, Post - Narratola, Dist. - Balod - 491228, Chhattisgarh, India.
17. KUMAR, Yogesh
Village-Gatol, Post-Dabla, Tahsil-Ghumarwin, District-Bilaspur, Himachal Pradesh - 174021, India.
18. TALGOTE, Kunal
29, Nityanand Nagar, Nr. Tukaram Hosp., Gaurakshan Road, Akola - 444004, Maharashtra, India.
19. GURBANI, Gourav
I-1601, Casa Adriana, Downtown, Palava Phase 2, Dombivli, Maharashtra - 421204, India.
20. VISHWAKARMA, Dharmendra Kumar
Ramnagar, Sarai Kansarai, Bhadohi - 221404, Uttar Pradesh, India.
21. SONI, Sajal
K. P. Nayak Market Mauranipur, Jhansi, Uttar Pradesh - 284204, India.
22. PATNAM, Niharika
Plot No. 170, Dattaterya Colony, Yellammabanda, Kukatpally, Hyderabad, Telangana - 500072, India.
23. KUSHWAHA, Avinash
SA 18/127, Mauza Hall, Varanasi - 221007, Uttar Pradesh, India.
24. CHAUDHARY, Sanjana
Jawaharlal Road, Muzaffarpur - 842001, Bihar, India.

Specification

FORM 2
THE PATENTS ACT, 1970
(39 of 1970) & THE PATENTS RULES, 2003
COMPLETE SPECIFICATION
TITLE OF THE INVENTION
METHOD AND SYSTEM FOR DETECTING ANOMALIES AT EDGE LOCATIONS IN A NETWORK
APPLICANT
JIO PLATFORMS LIMITED, Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India; Nationality: India
The following specification particularly describes the invention and the manner in which it is to be performed.

RESERVATION OF RIGHTS
[001] A portion of the disclosure of this patent document contains material which is subject to intellectual property rights such as, but not limited to, copyright, design, trademark, integrated circuit (IC) layout design, and/or trade dress protection, belonging to Jio Platforms Limited (JPL) or its affiliates (hereinafter referred to as the owner). The owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all rights whatsoever. All rights to such intellectual property are fully reserved by the owner.
TECHNICAL FIELD
[002] The present disclosure relates to a wireless network, and specifically
to a method and a system for detecting anomalies at edge locations in a network based on Artificial Intelligence (AI).
DEFINITION
[003] As used in the present disclosure, the following terms are generally
intended to have the meaning as set forth below, except to the extent that the context in which they are used indicates otherwise.
[004] The term ‘structured data’ as used herein refers to clear code data, that is, clean, valid, and expected data that is transmitted from various data sources in a network.
[005] The term ‘anomalies’ as used herein, refers to events, behaviours, or
patterns that deviate from normal or expected operation.
[006] The term ‘data sources’ as used herein includes network elements or
user equipments associated with the network.
[007] The term ‘coherent data’ as used herein includes the part of the structured data that is within the pre-defined threshold.
BACKGROUND
[008] The following description of related art is intended to provide
background information pertaining to the field of the disclosure. This section may
include certain aspects of the art that may be related to various features of the
present disclosure. However, it should be appreciated that this section is to be used only to enhance the understanding of the reader with respect to the present disclosure, and not as an admission of prior art.
[009] Current anomaly detection processes used for detecting anomalies
in a wireless network (e.g., a fourth generation (4G) network, a fifth generation (5G) network) face notable challenges, primarily centered around the centralization of raw and unprocessed data for analysis. This is because these traditional processes necessitate gathering all data at a central location before initiating an anomaly detection process. However, this centralized approach, while widely adopted, presents significant challenges, notably in terms of latency and high bandwidth utilization. Consequently, it proves to be inefficient, particularly in scenarios where real-time or near-real-time anomaly detection is imperative.
[0010] Furthermore, an absence of anomaly detection capabilities at edge locations represents an inefficiency of these traditional approaches. Without the ability to perform anomaly detection at the edge locations, data gathered at the edge locations is needlessly relayed to a central system for further analysis and enrichment. This unnecessary forwarding of the data not only consumes valuable resources but also prolongs the time required to detect and respond to anomalies, potentially compromising the effectiveness of the anomaly detection process.
[0011] Due to the above-discussed limitations of the traditional processes used for detecting anomalies, there is a clear need for a more efficient and decentralized approach for overcoming the deficiencies of the prior art.
OBJECTS OF THE PRESENT DISCLOSURE
[0012] It is an object of the present disclosure to provide a method and a
system for detecting anomalies at an edge location in a network using an Artificial
Intelligence (AI) model.
[0013] It is an object of the present disclosure to perform real-time anomaly detection by supporting real-time or near-real-time anomaly detection at the edge location. This ensures that anomalies are detected and addressed promptly, without the need for the data (i.e., the structured data) to be transmitted to a central location (i.e., a centralized database).
[0014] It is an object of the present disclosure to reduce latency and bandwidth consumption by performing anomaly detection at the edge location, hence minimizing the latency and the bandwidth requirements. This optimization allows for a faster response time and an efficient utilization of network resources in the network.
[0015] It is an object of the present disclosure to perform resource and time optimization by implementing anomaly detection at the edge location, which prevents unnecessary data forwarding to subsequent layers in the network for further processing and enrichment. This optimization reduces the network resource consumption and the processing time, improving the overall efficiency of the network.
SUMMARY
[0016] In one embodiment, a method for detecting anomalies at edge
locations in a network based on Artificial Intelligence (AI) is disclosed. The method includes receiving a structured data corresponding to an edge location. The method includes processing the structured data to extract a current set of features from the
structured data. The method includes comparing each of the current set of features
with a corresponding training feature of a plurality of training features. The method includes identifying a deviation within one or more of the current set of features from the corresponding training feature in response to comparing. The method includes detecting one or more anomalies within the one or more of the current set
of features based on the identified deviation and a pre-defined threshold.
[0017] In an embodiment, each of the one or more anomalies is detected as
an anomaly when the deviation within the one or more of the current set of features
is determined to be above the pre-defined threshold.
[0018] In an embodiment, the method further includes discarding the one or
more anomalies from the structured data. The method further includes generating a
coherent data in response to discarding the one or more anomalies.
[0019] In an embodiment, the method further includes storing the coherent
data in a centralized repository.
[0020] In an embodiment, the method further includes an AI model
configured for detecting anomalies at the edge location in the network.
[0021] In an embodiment, the AI model is trained based on a training dataset
associated with a plurality of edge locations, and the training dataset includes a
plurality of anomalies and the plurality of training features corresponding to the
plurality of edge locations.
[0022] In an embodiment, the method further includes re-training the AI
model based on the structured data and the current set of features associated with
the edge location.
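By way of illustration only, a minimal Python sketch of the method of steps (302) to (310) is given below. The function name, the dictionary-based data layout, and the per-feature deviation formula are assumptions made for this example; the disclosure does not prescribe a particular implementation.

from typing import Dict, List

def detect_anomalies(structured_data: Dict[str, float],
                     training_features: Dict[str, float],
                     threshold: float = 0.5) -> List[str]:
    """Illustrative sketch of steps 302-310: receive the structured data, extract the
    current set of features, compare each with its training feature, identify the
    deviation, and flag features whose deviation exceeds the pre-defined threshold."""
    current_features = dict(structured_data)  # step 304 (extraction is trivial here)
    anomalies = []
    for name, current_value in current_features.items():
        trained_value = training_features.get(name)  # step 306: corresponding training feature
        if trained_value is None or current_value == 0:
            continue  # nothing to compare against for this feature
        deviation = abs(current_value - trained_value) / abs(current_value)  # step 308 (assumed formula)
        if deviation > threshold:  # step 310: pre-defined threshold (e.g., 50%)
            anomalies.append(name)
    return anomalies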
[0023] In another embodiment, a system for detecting anomalies at edge
locations in a network based on Artificial Intelligence (AI) is disclosed. The system includes a memory and a processing engine communicatively coupled to the
memory. The processing engine is configured to receive a structured data
corresponding to the edge location. The processing engine is configured to process the structured data to extract a current set of features from the structured data. The processing engine is configured to compare each of the current set of features with a corresponding training feature of a plurality of training features. The processing
engine is configured to identify a deviation within one or more of the current set of
features from the corresponding training feature in response to comparing. The processing engine is configured to detect one or more anomalies within the one or more of the current set of features based on the identified deviation and a pre-defined threshold.
[0024] In an embodiment, each of the one or more anomalies is detected as
an anomaly when the deviation within the one or more of the current set of features is determined to be above the pre-defined threshold.
[0025] In an embodiment, the processing engine is further configured to
discard the one or more anomalies from the structured data. The processing engine
is further configured to generate a coherent data in response to discarding the one
or more anomalies.
[0026] In an embodiment, the processing engine is further configured to
store the coherent data in a centralized repository.
[0027] In an embodiment, the processing engine further includes an AI
model configured for detecting anomalies at the edge location in the network.
[0028] In an embodiment, the AI model is trained based on a training dataset
associated with a plurality of edge locations, and the training dataset includes a
plurality of anomalies and the plurality of training features corresponding to the
plurality of edge locations.
[0029] In an embodiment, the processing engine is further configured to re-train the AI model based on the structured data and the current set of features associated with the edge location.
[0030] Other objects and advantages of the present disclosure will be more
apparent from the following description, which is not intended to limit the scope of
the present disclosure.
BRIEF DESCRIPTION OF THE DRAWINGS
[0031] The accompanying drawings, which are incorporated herein, and
constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same
parts throughout the different drawings. Components in the drawings are not
necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such
drawings includes the disclosure of electrical components, electronic components
or circuitry commonly used to implement such components.
[0032] FIG. 1 illustrates an exemplary network architecture in which or with
which embodiments of the present disclosure may be implemented.
[0033] FIG. 2 illustrates an exemplary block diagram of a system
configured for detecting anomalies at edge locations in a network based on
Artificial Intelligence (AI), in accordance with an embodiment of the present
disclosure.
[0034] FIG. 3 illustrates an exemplary flow diagram of a method for
detecting anomalies at edge locations in a network based on AI, in accordance with
an embodiment of the present disclosure.
[0035] FIG. 4 illustrates an exemplary process flow of detecting anomalies
at edge locations in a network based on AI, in accordance with an embodiment of the present disclosure.
[0036] FIG. 5 illustrates an exemplary control logic for detecting anomalies
at edge locations in a network based on AI, in accordance with an embodiment of
the present disclosure.
[0037] FIG. 6 illustrates an exemplary computer system in which or with
which embodiments of the present disclosure may be implemented.
LIST OF REFERENCE NUMERALS
(100) - Network architecture
(102-1, 102-2…102-N) - One or more users
(104-1, 104-2…104-N) - One or more computing devices or user equipments
(106) - Network
(108) - System
(200) - Exemplary block diagram
(202) - Processor(s)
(204) - Memory
(206) - Interface(s)
(208) - Processing engine(s)
(210) - Database
(212) - Artificial Intelligence (AI) Model
(600) - Exemplary computer system
(610) - External storage device
(620) - Bus
(630) - Main memory
(640) - Read only memory
(650) - Mass storage device
(660) - Communication port(s)
(670) - Processor
DETAILED DESCRIPTION
[0038] In the following detailed description, a reference is made to the
accompanying drawings that form a part hereof, and in which the specific embodiments that may be practiced are shown by way of illustration. These embodiments are described in sufficient detail to enable those skilled in the art to
practice the embodiments and it is to be understood that other changes may be made
without departing from the scope of the embodiments. The following detailed description is therefore not to be taken in a limiting sense.
[0039] In the following description, for the purposes of explanation, various
specific details are set forth in order to provide a thorough understanding of
embodiments of the present disclosure. It will be apparent, however, that
embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter can each be used independently of one another or with any combination of other features. An individual feature may not address all of the problems discussed above or might address only some of the
problems discussed above. Some of the problems discussed above might not be
fully addressed by any of the features described herein.
[0040] The ensuing description provides exemplary embodiments only, and
is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those
skilled in the art with an enabling description for implementing an exemplary
embodiment. It should be understood that various changes may be made in the
function and arrangement of elements without departing from the spirit and scope
of the disclosure as set forth.
[0041] Specific details are given in the following description to provide a
thorough understanding of the embodiments. However, it will be understood by one
of ordinary skill in the art that the embodiments may be practiced without these
specific details. For example, circuits, systems, networks, processes, and other
components may be shown as components in block diagram form in order not to
obscure the embodiments in unnecessary detail. In other instances, well-known
circuits, processes, algorithms, structures, and techniques may be shown without
unnecessary detail in order to avoid obscuring the embodiments.
[0042] Also, it is noted that individual embodiments may be described as a
process that is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in
parallel or concurrently. In addition, the order of the operations may be re-arranged.
A process is terminated when its operations are completed but could have additional steps not included in a figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination can correspond to a return of the function to the calling
function or the main function.
[0043] The word “exemplary” and/or “demonstrative” is used herein to
mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” and/or “demonstrative” is not
necessarily to be construed as preferred or advantageous over other aspects or
designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive like the term
“comprising” as an open transition word without precluding any additional or other
elements.
[0044] Reference throughout this specification to “one embodiment” or “an
embodiment” or “an instance” or “one instance” means that a particular feature, structure, or characteristic described in connection with the embodiment is included
in at least one embodiment of the present disclosure. Thus, the appearances of the
phrases “in one embodiment” or “in an embodiment” in various places throughout
this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
[0045] The terminology used herein is to describe particular embodiments
only and is not intended to limit the disclosure. As used herein, the singular
forms “a”, “an”, and “the” are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or
components, but do not preclude the presence or addition of one or more other
features, integers, steps, operations, elements, components, and/or groups thereof.
As used herein, the term “and/or” includes any combinations of one or more of the
associated listed items.
[0046] It should be noted that the terms “mobile device”, “user equipment”,
“user device”, “communication device”, “device” and similar terms are used
interchangeably for the purpose of describing the invention. These terms are not intended to limit the scope of the invention or imply any specific functionality or limitations on the described embodiments. The use of these terms is solely for convenience and clarity of description. The invention is not limited to any particular
type of device or equipment, and it should be understood that other equivalent terms
or variations thereof may be used interchangeably without departing from the scope of the invention as defined herein.
[0047] As used herein, an “electronic device”, or “portable electronic
device”, or “user device” or “communication device” or “user equipment” or
“device” refers to any electrical, electronic, electromechanical, and computing
device. The user device is capable of receiving and/or transmitting one or more parameters, performing function/s, communicating with other user devices, and transmitting data to the other user devices. The user equipment may have a processor, a display, a memory, a battery, and an input-means such as a hard keypad
and/or a soft keypad. The user equipment may be capable of operating on any radio
access technology including, but not limited to, IP-enabled communication, ZigBee,
Bluetooth, Bluetooth Low Energy, Near Field Communication, Z-Wave, Wi-Fi,
Wi-Fi direct, etc. For instance, the user equipment may include, but is not limited to,
a mobile phone, smartphone, virtual reality (VR) devices, augmented reality (AR)
devices, laptop, a general-purpose computer, desktop, personal digital assistant,
tablet computer, mainframe computer, or any other device as may be obvious to a
person skilled in the art for implementation of the features of the present disclosure.
[0048] Further, the user device may also comprise a “processor” or
“processing engine” including a processing unit. The processor refers to any logic circuitry for processing instructions. The processor may be a general-purpose
processor, a special purpose processor, a conventional processor, a digital signal
processor, a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits, Field Programmable Gate Array circuits, any other type of integrated circuits, etc. The processor may perform signal coding, data processing,
input/output processing, and/or any other functionality that enables the working of
the system according to the present disclosure. More specifically, the processor is a hardware processor.
[0049] As portable electronic devices and wireless technologies continue to
improve and grow in popularity, the advancing wireless technologies for data
transfer are also expected to evolve and replace older generations of wireless
technologies. In the field of wireless data communications, a dynamic advancement of various generations of cellular technology is also seen. The development, in this respect, has been incremental in the order of a second generation (2G), a third
more such generations are expected to continue in the forthcoming time.
[0050] While considerable emphasis has been placed herein on the
components and component parts of the preferred embodiments, it will be appreciated that many embodiments can be made and that many changes can be made in the preferred embodiments without departing from the principles of the
disclosure. These and other changes in the preferred embodiment as well as other
embodiments of the disclosure will be apparent to those skilled in the art from the
disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be interpreted merely as illustrative of the disclosure and not as a limitation.
[0051] Embodiments herein relate to a method and a system for detecting
anomalies at edge locations in a network (i.e., a wireless network, such as a Fourth
Generation (4G) network, a Fifth Generation (5G) network, etc.) based on Artificial Intelligence (AI). In particular, the system includes an AI model that is configured for detecting the anomalies at the edge locations in the network. The AI model may be deployed at the edge locations for detecting the anomalies. For this, the AI model
is configured to receive a structured data corresponding to an edge location in the
network. Further, the AI model processes this structured data to detect one or more anomalies at the edge location.
[0052] The various embodiments of the present disclosure will be explained
in detail with reference to FIGS. 1 to 6.
[0053] FIG. 1 illustrates an exemplary network architecture (100) in which
or with which a system (108) for detecting anomalies at edge locations in a network
based on an Artificial Intelligence (AI) is implemented, in accordance with
embodiments of the present disclosure.
[0054] Referring to FIG. 1, the network architecture (100) may include one
or more computing devices or user equipments (104-1, 104-2…104-N) associated
with one or more users (102-1, 102-2…102-N) in an environment. A person of ordinary skill in the art will understand that the one or more users (102-1, 102-2…102-N) may be individually referred to as a user (102) and collectively referred to as the users (102). Similarly, a person of ordinary skill in the art will understand
that one or more user equipments (104-1, 104-2…104-N) may be individually
referred to as a user equipment (104) and collectively referred to as the user equipment (104). A person of ordinary skill in the art will appreciate that the terms “computing device(s)” and “user equipment” may be used interchangeably throughout the disclosure. Although three user equipments (104) are depicted in
FIG. 1, any number of the user equipments (104) may be included without
departing from the scope of the ongoing description.
[0055] In an embodiment, the user equipment (104) may include smart
devices operating in a smart environment, for example, an Internet of Things (IoT)
system. In such an embodiment, the user equipment (104) may include, but is not
limited to, smart phones, smart watches, smart sensors (e.g., mechanical, thermal,
electrical, magnetic, etc.), networked appliances, networked peripheral devices, a
networked lighting system, communication devices, networked vehicle accessories, networked vehicular devices, smart accessories, tablets, smart television (TV), computers, a smart security system, a smart home system, other devices for monitoring or interacting with or for the user (102) and/or an entity (110), or any
combination thereof. A person of ordinary skill in the art will appreciate that the
user equipment (104) may include, but is not limited to, intelligent, multi-sensing, network-connected devices, that can integrate seamlessly with each other and/or with a central server or a cloud-computing system or any other device that is network-connected.
[0056] In an embodiment, the user equipment (104) may include, but is not
limited to, a handheld wireless communication device (e.g., a mobile phone, a smart phone, a phablet device, and so on), a wearable computer device (e.g., a head-mounted display computer device, a head-mounted camera device, a wristwatch computer device, and so on), a Global Positioning System (GPS) device, a laptop
computer, a tablet computer, or another type of portable computer, a media playing
device, a portable gaming system, and/or any other type of a computer device with wireless communication capabilities, and the like. In an embodiment, the user equipment (104) may include, but is not limited to, any electrical, electronic, electro-mechanical, or an equipment, or a combination of one or more of the above
devices such as virtual reality (VR) devices, augmented reality (AR) devices, a
laptop, a general-purpose computer, a desktop, a personal digital assistant, a tablet computer, a mainframe computer, or any other computing device. The user equipment (104) may include one or more in-built or externally coupled accessories including, but not limited to, a visual aid device such as a camera, an audio aid, a
microphone, a keyboard, and input devices for receiving input from the user (102)
or the entity (110) such as a touch pad, a touch enabled screen, an electronic pen,
and the like. A person of ordinary skill in the art will appreciate that the user equipment (104) may not be restricted to the mentioned devices and various other devices may be used.
[0057] In an embodiment, the user equipment (104) is configured to
communicate with the system (108) through a network (106). The system (108)
may deploy an AI model at the edge locations for detecting the anomalies at the
edge locations in the network (e.g., the network (106)). The network, for example
may be a wireless network, such as a Fourth Generation (4G) network, a Fifth
Generation (5G) network, a Sixth Generation (6G) network, and the like. The
network (106) may enable the user equipment (104) to communicate with other
devices in the network architecture (100) and/or with the system (108). The network
(106) may include a wireless card or some other transceiver connection to facilitate
this communication. In another embodiment, the network (106) may be
implemented as, or include any of a variety of different communication
technologies such as a wide area network (WAN), a local area network (LAN), a
wireless network, a mobile network, a Virtual Private Network (VPN), an Internet,
a Public Switched Telephone Network (PSTN), or the like.
[0058] In another exemplary embodiment, a centralized server (112) may
include or comprise, by way of example but not limitation, one or more of: a stand-
alone server, a server blade, a server rack, a bank of servers, a server farm, a
hardware supporting a part of a cloud service or the system (108), a home server, a
hardware running a virtualized server, one or more processors executing code to
function as a server, one or more machines performing server-side functionality as
described herein, at least a portion of any of the above, or some combination thereof.
[0059] Although FIG. 1 shows exemplary components of the network
architecture (100), in other embodiments, the network architecture (100) may
include fewer components, different components, differently arranged components,
or additional functional components than depicted in FIG. 1. Additionally, or
alternatively, one or more components of the network architecture (100) may
perform functions described as being performed by one or more other components
of the network architecture (100).
[0060] FIG. 2 illustrates an exemplary block diagram (200) of the system
(108) configured for detecting the anomalies at the edge location in the network
based on the AI, in accordance with an embodiment of the present disclosure. The
network may correspond to the wireless network, such as, the 4G network, the 5G
network, the 6G network, and the like. FIG. 2 is explained in conjunction with FIG.
1.
[0061] In an embodiment, in order to detect the anomalies at the edge
location, the system (108) may be deployed at the edge location. In an aspect, the system (108) may include one or more processor(s) (202). The one or more
processor(s) (202) may be implemented as one or more microprocessors,
microcomputers, microcontrollers, edge or fog microcontrollers, digital signal processors, central processing units, logic circuitries, and/or any devices that process data based on operational instructions. Among other capabilities, one or more processor(s) (202) may be configured to fetch and execute computer-readable
instructions stored in a memory (204) of the system (108). The memory (204) may
be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to detect the anomalies at the edge locations in the network. The memory (204) may comprise any non-transitory storage device including, for example, a
volatile memory such as a Random-Access Memory (RAM), or a non-volatile
memory such as an Erasable Programmable Read-Only Memory (EPROM), a flash memory, and the like.
[0062] In an embodiment, the system (108) may include an interface(s)
(206). The interface(s) (206) may include a variety of interfaces, for example,
interfaces for data input and output devices, referred to as I/O devices, storage
devices, and the like. The interface(s) (206) may facilitate communication of the system (108). The interface(s) (206) may also provide a communication pathway for one or more components of the system (108). Examples of such components include, but are not limited to, a processing unit/engine (208) and a database (210).
[0063] The processing engine (208) may be implemented as a combination
of hardware and programming (for example, programmable instructions) to
implement one or more functionalities of the processing engine (208). In examples
described herein, such combinations of hardware and programming may be
implemented in several different ways. For example, the programming for the
processing engine (208) may be processor-executable instructions stored on a non-
transitory machine-readable storage medium and the hardware for the processing
engine (208) may include a processing resource (for example, one or more
processors), to execute such instructions. In the present examples, the machine-
readable storage medium may store instructions that, when executed by the
processing resource, implement the processing engine (208). In such examples, the
system (108) may include the machine-readable storage medium storing the
instructions and the processing resource to execute the instructions, or the machine-readable storage medium may be separate but accessible to the system (108) and the processing resource. In other examples, the processing engine (208) may be implemented by an electronic circuitry.
[0064] In an embodiment, the processing engine (208) may include an
anomaly detection module (214) and an AI model (212) for detecting the anomalies at the edge location in the network. As already known to the person skilled in art, the edge location refers to a point within a network infrastructure that is physically close to end-users having the user equipments (104) accessing the network.
[0065] The anomaly detection module (214) is configured to detect the
anomalies (i.e., the one or more anomalies) within the one or more relevant features using the pre-defined threshold (e.g., 50%). The anomaly detection module (214) includes the AI model (212) to perform anomaly detection. The AI model (212) is configured to receive the structured data from an edge location. In an embodiment,
the structured data corresponds to a clear code data. The clear code data refers to a
clean, valid, and expected data that is transmitted from various data sources in the network. In one embodiment, the data sources correspond to network elements such as, Internet of Things (IoT) gateways, routers, switches, firewalls, Access Points (APs), modems, Network Interface Cards (NICs), and the like. In another
embodiment, the data sources correspond to user equipments (i.e., the user
equipments (104)). Further, examples of the clear code data include, but are not
limited to, readings of temperature sensors, readings of humidity sensors, readings
of pressure sensors, a status information of the network elements of the user
equipments (104), a network traffic, an application data associated with the user
equipments (104), and the like. Upon receiving the structured data, the AI model
(212) is configured to process the structured data to detect one or more anomalies
at the edge location in the network. A method of processing the structured data is further explained in detail in conjunction with FIG. 3.
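As a purely hypothetical illustration of the kind of clear code data the AI model (212) may receive from an edge data source, a sample record is sketched below in Python; the field names and values are invented for the example and are not taken from the disclosure.

# Hypothetical clear code record from an edge data source (field names and values are illustrative).
structured_data = {
    "source": "iot-gateway-07",   # network element at the edge location
    "temperature_c": 41.5,        # temperature sensor reading
    "humidity_pct": 38.0,         # humidity sensor reading
    "bandwidth_mbps": 100.0,      # observed network traffic
    "login_attempts": 4,          # access pattern for a user account
    "status": "OK",               # status information of the network element
}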
[0066] Once the one or more anomalies are detected, the AI model (212) is
configured to discard the one or more anomalies from the structured data to generate
a coherent data. The coherent data may correspond to clean data that excludes
anomalous or irrelevant observations and represents expected patterns or behaviours of the data sources in the network. The AI model (212) further transmits and stores the coherent data in a centralized repository for further analysis. The centralized repository may correspond to a database present within the centralized
server (112).
[0067] It should be noted that the AI model (212) is trained based on a
training dataset associated with a plurality of edge locations to detect the anomalies at the edge location in the network. The training dataset includes a plurality of anomalies and the plurality of training features corresponding to the plurality of
edge locations. Examples of the plurality of anomalies include, but are not limited to, a spike in a network traffic, unusual data patterns, unusual access patterns associated with the user equipments (104), an abnormal behaviour of network elements, and the like. Examples of the plurality of training features include, but are not limited to, numerical values indicating the readings of the temperature sensors, the pressure sensors, the humidity sensors, voltage sensors, or any other similar measurable quantity, error codes or status indicators corresponding to events such as network connectivity issues, data transmission errors, resource exhaustions, configuration errors, security breaches, hardware failures, etc., a frequency of the events, a rate of change of the numerical values over a period of time (e.g., in 15 days), and the like. It should be noted that the AI model (212) may
continuously evolve based on the structured data and a current set of features
associated with the edge location. In other words, the AI model (212) may perform its retraining based on the structured data received from the edge location and the current set of features identified within the structured data.
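A minimal sketch of how the plurality of training features might be derived from a training dataset gathered across a plurality of edge locations is shown below; the averaging scheme, the function name, and the field names are assumptions for illustration, since the disclosure does not fix a particular training procedure.

from collections import defaultdict
from typing import Dict, List

def build_training_features(training_dataset: List[Dict[str, float]]) -> Dict[str, float]:
    """Aggregate historical records from several edge locations into one
    training feature (here simply the mean value) per feature name."""
    sums, counts = defaultdict(float), defaultdict(int)
    for record in training_dataset:
        for name, value in record.items():
            if isinstance(value, (int, float)):
                sums[name] += value
                counts[name] += 1
    return {name: sums[name] / counts[name] for name in sums}

# Example: historical records yield baseline bandwidth and login-attempt features.
training_features = build_training_features([
    {"bandwidth_mbps": 10.0, "login_attempts": 3},
    {"bandwidth_mbps": 10.0, "login_attempts": 3},
])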
[0068] In an embodiment, the database (210) may include data that may be
either stored or generated as a result of functionalities implemented by any of the
components of the processor(s) (202), the processing engine (208), or the system (108).
[0069] Although FIG. 2 shows the exemplary block diagram (200) of the
system (108), in other embodiments, the system (108) may include fewer
components, different components, differently arranged components, or additional
functional components than depicted in FIG. 2. Additionally, or alternatively, one
or more components of the system (108) may perform functions described as being
performed by one or more other components of the system (108).
[0070] FIG. 3 illustrates an exemplary flow diagram of a method (300) for
detecting the anomalies at the edge locations in the network based on the AI, in
accordance with an embodiment of the present disclosure. The network may correspond to the wireless network, such as, the 4G network, the 5G network, the 6G network, and the like. FIG. 3 is explained in conjunction with FIGS. 1 – 2. Each step of the method (300) may be executed by the processing engine (208) using the
AI model (212). The AI model (212) is trained for detecting the anomalies at the
edge location based on the training dataset associated with the plurality of edge
locations. The training dataset includes the plurality of anomalies and the plurality
of training features corresponding to the plurality of edge locations.
[0071] In order to detect the anomalies at the edge location in the network,
initially at step 302, the structured data corresponding to the edge location is
received. In particular, the AI model (212) is configured for receiving the structured data from the edge location. In an embodiment, the structured data corresponds to a clear code data. The clear code data refers to a clean, a valid, and an expected data that is transmitted from various data sources in the network. In one embodiment,
the data sources correspond to network elements such as, Internet of Things (IoT)
gateways, routers, switches, firewalls, Access Points (APs), modems, Network
Interface Cards (NICs), and the like. In another embodiment, the data sources
correspond to user equipments (i.e., the user equipments (104)). Examples of the
clear code data include, but are not limited to, readings of temperature sensors,
readings of humidity sensors, readings of pressure sensors, a status information of
the network elements of the user equipments (104), a network traffic, an application
data associated with the user equipments (104), and the like. Examples of the anomalies include, but are not limited to, a spike in a network traffic, an unusual data pattern, an unusual access pattern associated with the user equipment (104), an abnormal behaviour of a network element (e.g., a router), and the like.
[0072] Upon receiving the structured data, at step (304), the structured data
is processed to extract a current set of features from the structured data. In particular, the AI model (212) is configured to process the structured data to extract the current set of features. The current set of features, for example, includes a sudden spike in the network traffic (for example: a value of the network traffic with
a bandwidth utilization of 100 Megabits per second (Mbps)) due to an event, e.g.,
a Black Friday sale, and an unusual access pattern, e.g., multiple login attempts (4
login attempts) involving a specific user account associated with the user equipment
(104) corresponding to the edge location in the network.
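A small sketch of the feature extraction of step (304) is shown below, reusing the bandwidth and login-attempt examples; treating every numeric field of the record as a feature is an assumption made only for the illustration.

from typing import Dict

def extract_features(structured_data: Dict[str, object]) -> Dict[str, float]:
    """Step 304 (illustrative): keep the numeric fields of the clear code record
    as the current set of features; non-numeric status fields are dropped."""
    return {name: float(value)
            for name, value in structured_data.items()
            if isinstance(value, (int, float))}

current_features = extract_features(
    {"bandwidth_mbps": 100.0, "login_attempts": 4, "status": "OK"}
)
# current_features == {"bandwidth_mbps": 100.0, "login_attempts": 4.0}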
[0073] Once the current set of features are extracted, at step (306), each of
the current set of features is compared with a corresponding training feature of the
plurality of training features. In particular, the AI model (212) is configured to compare the current set of features with the corresponding training features. By way of an example, the bandwidth utilization of 100 Mbps spike in the network traffic may be compared with a corresponding training feature that includes a bandwidth
utilization of 10 Mbps. Similarly, the unusual access pattern that includes 4 login
attempts is compared with a corresponding training feature that includes 3 login attempts.
[0074] Further, based on the comparison, at step (308), a deviation within
one or more of the current set of features from the corresponding training feature
may be identified. In particular, the AI model (212) is configured to identify the
deviation. In continuation to the above example, the deviation, for example, a
deviation of the bandwidth utilization of 100 Mbps from the bandwidth utilization of 10 Mbps, and a deviation of the 4 login attempts from the 3 login attempts, is identified based on the comparison.
[0075] Upon identifying the deviation, at step (310), one or more anomalies
are detected within the one or more of the current set of features based on the
identified deviation and a pre-defined threshold. For example, the pre-defined threshold is set to 50%. In this case, any feature in the current set of features that deviates beyond 50% may be detected as an anomaly. In particular, the AI model (212) is configured to detect the one or more anomalies. In an embodiment, each of
the one or more anomalies is detected as an anomaly when the deviation within the
one or more of the current set of features is determined to be above the pre-defined threshold. In continuation of the above example, in the first case, suppose the bandwidth utilization of 100 Mbps deviates by 90% from the bandwidth utilization of 10 Mbps, which is more than the pre-defined threshold, i.e., 50%; in this case, the spike in the
network traffic is considered as the anomaly. In the second case, suppose the 4 login
attempts deviate by 10% from the 3 login attempts, which is less than the pre-defined
threshold, i.e., 50%; in this case, the unusual access pattern may not be detected as
the anomaly.
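The disclosure does not state the exact deviation formula. The sketch below assumes a relative deviation taken with respect to the current value, which reproduces the 90% figure of the bandwidth example; under the same assumption the login-attempt case comes out at 25% rather than the 10% quoted above, but it stays below the 50% threshold either way, so the outcome is unchanged.

def relative_deviation(current: float, trained: float) -> float:
    """Assumed deviation measure: |current - trained| relative to the current value."""
    return abs(current - trained) / abs(current)

threshold = 0.5  # pre-defined threshold of 50%

bandwidth_dev = relative_deviation(100.0, 10.0)  # 0.90 -> above 50% -> anomaly
login_dev = relative_deviation(4.0, 3.0)         # 0.25 -> below 50% -> not an anomaly

anomalous_features = [name for name, dev in
                      [("bandwidth_mbps", bandwidth_dev), ("login_attempts", login_dev)]
                      if dev > threshold]
# anomalous_features == ["bandwidth_mbps"]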
[0076] Once the one or more anomalies are detected, the AI model (212) is
configured to discard the one or more anomalies from the structured data. Further,
the AI model (212) is configured to generate the coherent data in response to discarding the one or more anomalies. In an embodiment, the coherent data corresponds to clean data that excludes anomalous or irrelevant observations and represents expected patterns or behaviours of the data sources in the network. In
25 particular, the coherent data includes data of the structured data that is within the
pre-defined threshold. In continuation to the above example, the AI model (212) may discard data associated with the sudden spike in the network traffic from the structured data to generate the coherent data. Furthermore, once the coherent data is generated, the AI model (212) is configured to store the coherent data in the
centralized repository for further processing. The AI model (212) performs its
training based on the structured data and the current set of features. In this way, the
AI model (212) continuously evolves and learns to consistently enhance its anomaly detection performance over time.
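A minimal sketch of discarding the detected anomalies and persisting the resulting coherent data is given below; appending records to a local JSON-lines file merely stands in for the centralized repository and is an assumption made for the example.

import json
from typing import Dict, List

def make_coherent(structured_data: Dict[str, float], anomalies: List[str]) -> Dict[str, float]:
    """Discard the anomalous features so that only data within the pre-defined
    threshold remains (the coherent data)."""
    return {name: value for name, value in structured_data.items() if name not in anomalies}

def store_coherent(coherent: Dict[str, float], path: str = "centralized_repository.jsonl") -> None:
    """Append the coherent record to a file standing in for the centralized repository."""
    with open(path, "a", encoding="utf-8") as repo:
        repo.write(json.dumps(coherent) + "\n")

coherent_data = make_coherent({"bandwidth_mbps": 100.0, "login_attempts": 4.0},
                              anomalies=["bandwidth_mbps"])
store_coherent(coherent_data)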
[0077] FIG. 4 illustrates an exemplary process flow (400) of detecting the
anomalies at the edge locations in the network based on the AI, in accordance with
an embodiment of the present disclosure. FIG. 4 is explained in conjunction with
FIGS. 1 - 3. Each step of the method (300) may be executed by the processing engine (208) using the AI model (212).
[0078] In order to detect the anomalies, the process flow (400) starts at step
(402). Further, at step (404), a raw clear code data is received at an end point. The raw
clear code data corresponds to the structured data. The end point may correspond
to a network boundary between the data sources and the AI model (212). In one embodiment, the data sources correspond to network elements such as, the IoT gateways, the routers, the switches, the firewalls, the APs, the modems, the NICs, and the like. In another embodiment, the data sources correspond to the user
equipments (104).
[0079] Upon receiving the raw clear code data, at step (406), the AI model
(212) may process the raw clear code data to extract relevant features (i.e., the current set of features) using an inbuilt relevant feature extraction module. Once the relevant features are extracted, at step (408), the relevant features may be transferred to an
anomaly detection module (214) that includes the AI model (212) for detecting the
anomalies. Further, at step (412), the anomaly detection module (214) may be configured to compare the relevant features with the plurality of training features to identify the deviation in one or more relevant features from the corresponding training feature.
[0080] Further, based on the identified deviation, the anomaly detection
module (214) detects the anomalies (i.e., the one or more anomalies) within the one or more relevant features using the pre-defined threshold (e.g., 50%). Once the anomalies are detected, the anomaly detection module (214) discards the anomalies from the raw clear code data to generate the coherent data. Further, at step (414), the
coherent data is stored in the centralized repository that is transferred to subsequent
layers (e.g., Integration and Orchestration Layer, Monitoring and Logging Layer,
etc.) in the network. At step (416), the subsequent layers in the network use the
coherent data to perform further performance analysis corresponding to the
network. In an embodiment, upon receiving the relevant features, the relevant
features are transferred to the anomaly detection module (214), as depicted via step
(410), for enhancing the anomaly detection process. Upon receiving the relevant
features for enhancing the anomaly detection process, the anomaly detection
module (214) performs retraining of the AI model (212) based on the relevant
features and the raw clear code data.
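One way the retraining of step (410) could look is sketched below, where the stored training features are blended with the newly extracted relevant features using an exponential moving average; the update rule and the learning-rate value are assumptions, as the disclosure only states that the AI model (212) is re-trained on the raw clear code data and the relevant features.

from typing import Dict

def retrain(training_features: Dict[str, float],
            current_features: Dict[str, float],
            learning_rate: float = 0.1) -> Dict[str, float]:
    """Blend newly observed features into the stored training features
    (exponential moving average) so the model keeps evolving at the edge."""
    updated = dict(training_features)
    for name, value in current_features.items():
        if name in updated:
            updated[name] = (1 - learning_rate) * updated[name] + learning_rate * value
        else:
            updated[name] = value  # a previously unseen feature becomes a new baseline
    return updated

training_features = retrain({"bandwidth_mbps": 10.0, "login_attempts": 3.0},
                            {"bandwidth_mbps": 100.0, "login_attempts": 4.0})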
[0081] FIG. 5 illustrates an exemplary control logic (500) for detecting the
anomalies at the edge locations in the network based on the AI, in accordance with
an embodiment of the present disclosure. FIG. 5 is explained in conjunction with FIGS. 1 - 4.
[0082] As depicted via the control logic (500), each Machine Learning
probe, i.e., a first ML probe (504-2), a second ML probe (504-4), and a third ML probe
(504-6) is configured for receiving a first clear code data (502-2), a second clear
code data (502-4), and a third clear code data (502-6) respectively. As already known to the person skilled in art, each ML probe typically refers to a component or a module within a system (i.e., the system) that is responsible for monitoring and analyzing a performance, a behavior, and an output of a machine learning model (i.e.,
the AI model (212)). Further, the first clear code data (502-2), the second clear code
data (502-4), and the third clear code data (502-6) correspond to the structured
data. Further, each ML probe may be deployed at a different edge location.
[0083] Further, as depicted via the control logic (500), the first ML probe
(504-2) includes a first ML model (506-2) that is in communication with a first
computation cluster (508-2). Similarly, the second ML probe (504-4) includes a
second ML model (506-4) that is in communication with a second computation cluster (508-4), and the third ML probe (504-6) includes a third ML model (506-6) that is in communication with a third computation cluster (508-6). Each of the first ML model (506-2), the second ML model (506-4), the third ML model (506-6) may
correspond to the AI model (212). In an embodiment, for example, the first ML
probe (504-2) including the first ML model (506-2) may be deployed at a first edge
location, the second ML probe (504-4) including the second ML model (506-4) may
be deployed at a second edge location, and the third ML probe (504-6) including
the third ML model (506-6) may be deployed at a third edge location. Further, each
computation cluster, i.e., the first computation cluster (508-2), the second
computation cluster (508-4), and the third computation cluster (508-6) may include
a set of three workers and a computation vector 1, a computation vector 2, and a
computation vector 3, respectively, as depicted via the control logic (500).
[0084] Upon receiving the first clear code data (502-2), the first ML model
(506-2) within the first ML probe (504-2) may be configured for processing the first
clear code data (502-2) in conjunction with the first computation cluster (508-2) to
detect the anomalies in the first clear code data (502-2). For this, the first ML model (506-2) in conjunction with the first computation cluster (508-2) extract the current set of features and compares the current set of features with the corresponding training feature. Further, based on the comparison, the deviation is identified within
the one or more current set of features and the anomalies are detected within the
current set of features using the pre-defined threshold (e.g., 50%). In an embodiment, the anomalies are detected when the deviation is above the pre-defined threshold. Once the anomalies are detected, the anomalies are discarded from the first clear code data (502-2) and the coherent data including data within the pre-defined threshold is
generated. Further, as depicted via an arrow (510-2), the generated coherent data is
transferred and stored in a distributed data lake (512) (same as the centralized repository).
[0085] In a similar manner, upon receiving the second clear code data (502-
4), the second ML model (506-4) within the second ML probe (504-4) may be
configured for processing the second clear code data (502-4) in conjunction with
the second computation cluster (508-4) to detect the anomalies in the second clear code data (502-4). Based on the anomalies detected within the second clear code data (502-4), the coherent data may be generated that is further transferred and stored in the distributed data lake (512). Similarly, upon receiving the third clear
code data (502-6), the third ML model (506-6) within the third ML probe (504-6)
may be configured for processing the third clear code data (502-6) in conjunction
with the third computation cluster (508-6) to detect the anomalies in the third clear
code data (502-6). Based on the anomalies detected in the third clear code data (502-
6), the coherent data may be generated that is further transferred and stored in the
distributed data lake (512).
[0086] Furthermore, a distributed computing orchestrator (514) is
configured to use the coherent data received from the first ML probe (504-2), the second ML probe (504-4), and the third ML probe (504-6) to perform further performance analysis corresponding to the network. In some embodiments, the distributed computing orchestrator (514) may include the subsequent layers
corresponding to the network.
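To illustrate the control logic of FIG. 5, the sketch below runs two independent probes, each cleaning its own clear code data and appending the coherent result to a shared list that stands in for the distributed data lake (512); an orchestration step then works over the pooled records. The class, the variable names, and the aggregation performed are invented for the example.

from typing import Dict, List

class MLProbe:
    """One edge-side probe: holds its own training features and cleans incoming clear code data."""

    def __init__(self, training_features: Dict[str, float], threshold: float = 0.5):
        self.training_features = training_features
        self.threshold = threshold

    def process(self, clear_code_data: Dict[str, float]) -> Dict[str, float]:
        anomalies = [name for name, value in clear_code_data.items()
                     if name in self.training_features and value != 0
                     and abs(value - self.training_features[name]) / abs(value) > self.threshold]
        return {name: value for name, value in clear_code_data.items() if name not in anomalies}

data_lake: List[Dict[str, float]] = []  # stands in for the distributed data lake (512)

probes = [MLProbe({"bandwidth_mbps": 10.0}), MLProbe({"bandwidth_mbps": 12.0})]
clear_code_batches = [{"bandwidth_mbps": 100.0, "login_attempts": 4.0},
                      {"bandwidth_mbps": 11.0, "login_attempts": 3.0}]

for probe, batch in zip(probes, clear_code_batches):
    data_lake.append(probe.process(batch))

# The orchestrator (514) can now analyse the pooled coherent data, e.g. average bandwidth.
average_bandwidth = sum(record.get("bandwidth_mbps", 0.0) for record in data_lake) / len(data_lake)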
[0087] FIG. 6 illustrates an exemplary computer system (600) in which or
with which embodiments of the present disclosure may be implemented. As shown in FIG. 6, the computer system (600) may include an external storage device (610), a bus (620), a main memory (630), a read only memory (640), a mass storage device
(650), a communication port(s) (660), and a processor (670). A person skilled in
the art will appreciate that the computer system (600) may include more than one
processor (670) and communication ports (660). The processor (670) may include
various modules associated with embodiments of the present disclosure.
[0088] In an embodiment, the communication port(s) (660) may be any of
an RS-232 port for use with a modem-based dialup connection, a 10/100 Ethernet
port, a Gigabit or 10 Gigabit port using copper or fiber, a serial port, a parallel port, or other existing or future ports. The communication port(s) (660) may be chosen depending on a network, such as a Local Area Network (LAN), a Wide Area Network (WAN), or any network to which the computer system (600) connects.
[0089] In an embodiment, the memory (630) may be Random Access
Memory (RAM), or any other dynamic storage device commonly known in the art. Read-only memory (640) may be any static storage device(s), e.g., but not limited to, Programmable Read Only Memory (PROM) chips for storing static information, e.g., start-up or Basic Input/Output System (BIOS) instructions for the
processor (670).
[0090] In an embodiment, the mass storage (650) may be any current or
future mass storage solution, which may be used to store information and/or
instructions. Exemplary mass storage solutions include, but are not limited to,
Parallel Advanced Technology Attachment (PATA) or Serial Advanced
Technology Attachment (SATA) hard disk drives or solid-state drives (internal or
external, e.g., having Universal Serial Bus (USB) and/or Firewire interfaces), one
or more optical discs, Redundant Array of Independent Disks (RAID) storage, e.g., an array of disks (e.g., SATA arrays).
[0091] In an embodiment, the bus (620) communicatively couples the processor (670) with the other memory, storage, and communication blocks. The bus (620) may be, e.g., a Peripheral Component Interconnect (PCI)/PCI Extended (PCI-X) bus, Small Computer System Interface (SCSI), Universal Serial Bus (USB), or the like, for connecting expansion cards, drives, and other subsystems, as well as other buses, such as a front side bus (FSB), which connects the processor (670) to the computer system (600).
[0092] Optionally, operator and administrative interfaces, e.g., a display, keyboard, joystick, and a cursor control device, may also be coupled to the bus (620) to support direct operator interaction with the computer system (600). Other operator and administrative interfaces may be provided through network connections connected through the communication port(s) (660). Components described above are meant only to exemplify various possibilities. In no way should the aforementioned exemplary computer system (600) limit the scope of the present disclosure.
[0093] While the foregoing describes various embodiments of the present disclosure, other and further embodiments of the present disclosure may be devised without departing from the basic scope thereof. The scope of the present disclosure is determined by the claims that follow. The present disclosure is not limited to the described embodiments, versions or examples, which are included to enable a person having ordinary skill in the art to make and use the present disclosure when combined with information and knowledge available to the person having ordinary skill in the art.
[0094] The present disclosure provides a technical advancement related to detecting and managing anomalies at edge locations. The disclosure overcomes the limitations of conventional approaches, which include gathering all data before initiating the anomaly detection process and the lack of real-time or near-real-time anomaly detection at edge locations. The conventional approaches are bandwidth-heavy, inefficient, and time-consuming. The present disclosure describes performing real-time or near-real-time anomaly detection at edge locations, which saves bandwidth, enables efficient anomaly detection, and saves time.
ADVANTAGES OF THE PRESENT DISCLOSURE
[0095] The present disclosure provides a system and a method to detect
anomalies at an edge location in a network using an Artificial Intelligence (AI)
model.
[0096] The present disclosure supports real-time or near-real-time anomaly detection at the edge location. This ensures that anomalies are detected and addressed promptly, without a need for the data (i.e., the structured data) to be transmitted to a central location (i.e., a centralized database).
[0097] The present disclosure reduces latency and bandwidth consumption by performing anomaly detection at the edge location, thereby minimizing the latency and bandwidth requirements. This optimization allows for a faster response time and an efficient utilization of network resources in the network.
[0098] The present disclosure performs resource and time optimization by implementing anomaly detection at the edge location, which prevents unnecessary data forwarding to subsequent layers in the network for further processing and enrichment. This optimization reduces network resource consumption and processing time, improving the overall efficiency of the network.
WE CLAIM:
1. A method (300) for detecting anomalies at edge locations in a network based
on Artificial Intelligence (AI), the method comprising:
receiving (302), by a processing engine (208), a structured data corresponding to an edge location;
processing (304), by the processing engine (208), the structured data to
extract a current set of features from the structured data;
comparing (306), by the processing engine (208), each of the current set of features with a corresponding training feature of a plurality of training features;
identifying (308), by the processing engine (208), a deviation within one or
more of the current set of features from the corresponding training feature in
response to comparing; and
detecting (310), by the processing engine (208), one or more anomalies within the one or more of the current set of features based on the identified deviation and a pre-defined threshold.
2. The method (300) as claimed in claim 1, wherein each of the one or more
anomalies is detected as an anomaly when the deviation within the one or more of the current set of features is determined to be above the pre-defined threshold.
3. The method (300) as claimed in claim 1, further comprising:
discarding, by the processing engine (208), the one or more anomalies from
the structured data; and
generating, by the processing engine (208), a coherent data in response to discarding the one or more anomalies.
4. The method (300) as claimed in claim 1, further comprising storing, by the
processing engine (208), the coherent data in a centralized repository.
5. The method (300) as claimed in claim 1, wherein the processing engine
(208) comprises an AI model (212) configured for detecting anomalies at the edge
location in the network.
6. The method (300) as claimed in claim 5, wherein the AI model (212) is
trained based on a training dataset associated with a plurality of edge locations, and
wherein the training dataset comprises a plurality of anomalies and the plurality of
training features corresponding to the plurality of edge locations.
7. The method (300) as claimed in claim 5, further comprising:
re-training, by the processing engine (208), the AI model (212) based on the
structured data and the current set of features associated with the edge location.
8. A system (108) for detecting anomalies at edge locations in a network based on Artificial Intelligence (AI), the system (108) comprising:
a memory (204); and
a processing engine (208) communicatively coupled with the memory (204),
configured to:
receive (302) a structured data corresponding to the edge location;
process (304) the structured data to extract a current set of features from the structured data;
compare (306) each of the current set of features with a
corresponding training feature of a plurality of training features;
identify (308) a deviation within one or more of the current set of
features from the corresponding training feature in response to comparing; and
detect (310) one or more anomalies within the one or more of the
current set of features based on the identified deviation and a pre-defined
threshold.
9. The system (108) as claimed in claim 8, wherein each of the one or more
anomalies is detected as an anomaly when the deviation within the one or more of the current set of features is determined to be above the pre-defined threshold.
10. The system (108) as claimed in claim 8, wherein the processing engine (208)
is further configured to:
discard the one or more anomalies from the structured data; and
generate a coherent data in response to discarding the one or more anomalies.
11. The system (108) as claimed in claim 8, wherein the processing engine (208)
is further configured to store the coherent data in a centralized repository.
12. The system (108) as claimed in claim 8, wherein the processing engine (208) comprises
an AI model (212) configured for detecting anomalies at the edge location in the
network.
13. The system (108) as claimed in claim 12, wherein the AI model (212) is trained
based on a training dataset associated with a plurality of edge locations, and wherein
the training dataset comprises a plurality of anomalies and the plurality of training
features corresponding to the plurality of edge locations.
14. The system (108) as claimed in claim 12, wherein the processing engine (208)
is further configured to re-train the AI model (212) based on the structured data and
the current set of features associated with the edge location.
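As a minimal illustrative sketch of the steps recited in claims 1 to 3, assuming a feature-wise relative deviation, illustrative feature names, and an illustrative threshold value (all of which are assumptions introduced here rather than details taken from the claims), the detection and discarding steps might be organized as follows:

```python
# Illustrative sketch only: the steps (302)-(310) expressed as a per-feature
# comparison against training features, followed by the discarding step of claim 3.
# Feature names, the structured-data layout, the relative-deviation measure, and
# the threshold value are assumptions for illustration.

TRAINING_FEATURES = {"cpu_load": 0.45, "latency_ms": 20.0, "error_rate": 0.01}
PREDEFINED_THRESHOLD = 0.5  # maximum tolerated relative deviation (illustrative)


def extract_features(structured_data: dict) -> dict:
    # Step (304): extract the current set of features from the structured data.
    return {k: structured_data[k] for k in TRAINING_FEATURES if k in structured_data}


def detect_anomalies(structured_data: dict) -> tuple[dict, list[str]]:
    current = extract_features(structured_data)                      # (304)
    anomalies = []
    for name, value in current.items():                              # (306) compare each feature
        baseline = TRAINING_FEATURES[name]
        deviation = abs(value - baseline) / (abs(baseline) or 1.0)   # (308) identify deviation
        if deviation > PREDEFINED_THRESHOLD:                         # (310) threshold check
            anomalies.append(name)
    coherent = {k: v for k, v in current.items() if k not in anomalies}  # discard (claim 3)
    return coherent, anomalies


coherent_data, anomalous_features = detect_anomalies(
    {"cpu_load": 0.47, "latency_ms": 85.0, "error_rate": 0.011}
)
print(anomalous_features)  # ['latency_ms'] under these illustrative numbers
```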

Documents

Application Documents

# Name Date
1 202321049334-STATEMENT OF UNDERTAKING (FORM 3) [21-07-2023(online)].pdf 2023-07-21
2 202321049334-PROVISIONAL SPECIFICATION [21-07-2023(online)].pdf 2023-07-21
3 202321049334-FORM 1 [21-07-2023(online)].pdf 2023-07-21
4 202321049334-DRAWINGS [21-07-2023(online)].pdf 2023-07-21
5 202321049334-DECLARATION OF INVENTORSHIP (FORM 5) [21-07-2023(online)].pdf 2023-07-21
6 202321049334-FORM-26 [19-10-2023(online)].pdf 2023-10-19
7 202321049334-FORM-26 [25-04-2024(online)].pdf 2024-04-25
8 202321049334-FORM 13 [25-04-2024(online)].pdf 2024-04-25
9 202321049334-AMENDED DOCUMENTS [25-04-2024(online)].pdf 2024-04-25
10 202321049334-FORM-26 [30-04-2024(online)].pdf 2024-04-30
11 202321049334-Request Letter-Correspondence [03-06-2024(online)].pdf 2024-06-03
12 202321049334-Power of Attorney [03-06-2024(online)].pdf 2024-06-03
13 202321049334-Covering Letter [03-06-2024(online)].pdf 2024-06-03
14 202321049334-CORRESPONDENCE(IPO)-(WIPO DAS)-10-07-2024.pdf 2024-07-10
15 202321049334-ORIGINAL UR 6(1A) FORM 26-100724.pdf 2024-07-15
16 202321049334-FORM-5 [18-07-2024(online)].pdf 2024-07-18
17 202321049334-DRAWING [18-07-2024(online)].pdf 2024-07-18
18 202321049334-CORRESPONDENCE-OTHERS [18-07-2024(online)].pdf 2024-07-18
19 202321049334-COMPLETE SPECIFICATION [18-07-2024(online)].pdf 2024-07-18
20 Abstract-1.jpg 2024-09-30
21 202321049334-FORM 18 [01-10-2024(online)].pdf 2024-10-01
22 202321049334-FORM 3 [04-11-2024(online)].pdf 2024-11-04