
Method And System For Implementing Data Caching In A Network

Abstract: The present disclosure relates to a method [300] and a system [200] for implementing data caching in a network. The method [300] comprises receiving, by a system [200] from a User Interface (UI) [202], a query execution request for requesting a set of network attributes, wherein the set of network attributes are indicative of network performance metrics associated with the network. The method [300] further comprises fetching, by the system [200], a hash code corresponding to the query execution request and comparing, by the system [200], the fetched hash code with a plurality of pre-stored hash codes to determine availability of the requested set of network attributes in a caching unit [210], wherein the plurality of hash codes correspond to a plurality of previously received query execution requests, and obtaining the requested set of network attributes. [FIG. 3]


Patent Information

Filing Date
19 July 2023
Publication Number
04/2025
Publication Type
INA
Invention Field
COMPUTER SCIENCE

Applicants

Jio Platforms Limited
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.

Inventors

1. Ankit Murarka
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
2. Aayush Bhatnagar
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
3. Jugal Kishore
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
4. Gaurav Kumar
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
5. Kishan Sahu
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
6. Rahul Kumar
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
7. Sunil Meena
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
8. Gourav Gurbani
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
9. Sanjana Chaudhary
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
10. Chandra Ganveer
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
11. Supriya Kaushik De
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
12. Debashish Kumar
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
13. Mehul Tilala
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
14. Dharmendra Kumar Vishwakarma
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
15. Yogesh Kumar
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
16. Niharika Patnam
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
17. Harshita Garg
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
18. Avinash Kushwaha
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
19. Sajal Soni
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
20. Kunal Telgote
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
21. Manasvi Rajani
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India

Specification

FORM 2
THE PATENTS ACT, 1970 (39 OF 1970) & THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
“METHOD AND SYSTEM FOR IMPLEMENTING DATA CACHING IN A NETWORK”
We, Jio Platforms Limited, an Indian National, of Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
The following specification particularly describes the invention and the manner in which it is to be performed.

METHOD AND SYSTEM FOR IMPLEMENTING DATA CACHING IN A NETWORK
FIELD OF INVENTION
[0001] The present disclosure generally relates to network performance management systems. More particularly, the present disclosure relates to a method and system for implementing data caching in a network to reduce overall request execution time.
BACKGROUND
[0002] The following description of the related art is intended to provide background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section is used only to enhance the understanding of the reader with respect to the present disclosure, and not as an admission of prior art.
[0003] Network performance management systems typically track network elements and data from network monitoring tools and combine and process such data to determine key performance indicators (KPI) of the network. An Integrated Performance Management (IPM) system provides the means to visualize the network performance data so that network operators and other relevant stakeholders are able to identify the service quality of the overall network and individual/ grouped network elements. By having an overall as well as detailed view of the network performance, the network operators can detect, diagnose and remedy actual service issues, as well as predict potential service issues or failures in the network and take precautionary measures accordingly.

[0004] In IPM systems, various requests are executed by users/operators on a periodic basis to obtain various parameters such as, including but not limited to, network performance graphs, minutes of usage, traffic load graphs, data consumption graphs, and the like. Execution of any of the aforementioned requests takes time and utilizes resources, which leads to the generation of load on the communication network or the network systems. If multiple/duplicate queries pertaining to the same request are generated by the users, this leads to the generation of an unwanted/surplus load on the communication network and degradation of the network performance. Therefore, there is a requirement for an efficient and effective network performance management solution that can deal with the unwanted load conditions and the degradation of the network performance.
[0005] The existing network performance management solutions for the aforementioned problems have many limitations. For instance, these existing solutions involve manual verification of the requests to make sure that multiple/duplicate requests are not generated. However, such manual verification is a time-consuming, cumbersome, and error-prone task.
[0006] Thus, there exists an imperative need in the art for a solution that can provide data caching in a network performance management system to reduce overall request execution time and overcome the above stated and other limitations of the existing solutions.
OBJECTS OF THE DISCLOSURE
[0008] Some of the objects of the present disclosure, which at least one embodiment disclosed herein satisfies, are listed herein below.
[0008] It is an object of the present disclosure to provide data caching in a network performance management system to reduce overall request execution time.

[0009] It is also an object of the present disclosure to provide a solution that can avoid the unnecessary execution of the same type of queries, reports, and tasks.
[0010] It is another object of the present disclosure to provide a solution that has the capability to store frequently used data, such as circle mapping data and executed queries or reports.
[0011] It is another object of the present disclosure to provide a solution that can improve the system performance by getting the output data from a caching unit through machine learning techniques.
[0012] It is yet another object of the present disclosure to provide a solution that encompasses the use of an AI/ML model to identify redundant requests coming from a user interface and to serve those requests by using the caching unit, thereby saving end-user time when showing data that has already been computed by the same user or a different user.
SUMMARY
[0013] This section is provided to introduce certain aspects of the present disclosure in a simplified form that are further described below in the detailed description. This summary is not intended to identify the key features or the scope of the claimed subject matter.
[0014] An aspect of the present disclosure may relate to a method for implementing data caching in a network. The method comprises receiving, by a system from a User Interface (UI), a query execution request for requesting a set of network attributes. Further, the method comprises fetching, by the system, a hash code corresponding to the query execution request. Further, the method comprises comparing, by the system, the fetched hash code with a plurality of pre-stored hash codes to determine availability of the requested set of network attributes in a caching unit, wherein the plurality of hash codes correspond to a plurality of previously received query execution requests, and wherein the caching unit comprises a plurality of sets of network attributes corresponding to a plurality of query execution requests. Thereafter, the method comprises obtaining the requested set of network attributes.
[0015] In an exemplary aspect of the present disclosure, the system receives the query execution request from the UI via an Elastic Load Balancer (ELB).
[0016] In an exemplary aspect of the present disclosure, the set of network attributes comprises at least one of a network performance graph, Minutes of Usage (MOU), a traffic load graph, a data consumption graph, central processing unit (CPU) utilization, memory utilization, throughput, session count, or any combination thereof.
[0017] In an exemplary aspect of the present disclosure, the method comprises transmitting, by the system, the obtained set of network attributes to the UI.
[0018] In an exemplary aspect of the present disclosure, fetching the hash code corresponding to the query execution request comprises transmitting the received query execution request and a hash code fetch request, and, based on the transmitted hash code fetch request, receiving the hash code, wherein the hash code is fetched using an Artificial Intelligence (AI)/Machine Learning (ML) algorithm.
[0019] In an exemplary aspect of the present disclosure, the method comprises, based on the comparison, when the fetched hash code is similar to one of the plurality of pre-stored hash codes, obtaining the requested set of network attributes from the caching unit.
[0020] In an exemplary aspect of the present disclosure, the method comprises, in an event the fetched hash code is dissimilar to each of the plurality of pre-stored hash codes, obtaining the requested set of network attributes from a data repository.

[0021] In an exemplary aspect of the present disclosure, the method comprises storing the obtained set of network attributes in the caching unit.
[0022] Another aspect of the present disclosure may relate to a system for implementing data caching in a network. The system is configured to: receive, from a User Interface (UI), a query execution request for requesting a set of network attributes; fetch a hash code corresponding to the query execution request; compare the fetched hash code with a plurality of pre-stored hash codes to determine availability of the requested set of network attributes in a caching unit, wherein the plurality of hash codes correspond to a plurality of previously received query execution requests, and wherein the caching unit comprises a plurality of sets of network attributes corresponding to a plurality of query execution requests; and obtain the requested set of network attributes.
[0023] Yet another aspect of the present disclosure may relate to a user equipment (UE) comprising a transceiver unit configured to transmit, to a system, a query execution request for requesting a set of network attributes, wherein the set of network attributes are indicative of network performance metrics associated with the network. The transceiver unit is further configured to receive, from the system, the set of network attributes, wherein the set of network attributes are received from the system configured to receive, from a User Interface (UI), by an Integrated Performance Management (IPM) module, a query execution request for requesting a set of network attributes. The system is further configured to fetch a hash code corresponding to the query execution request. The system is further configured to compare the fetched hash code with a plurality of pre-stored hash codes to determine availability of the requested set of network attributes in a caching unit, wherein the plurality of hash codes correspond to a plurality of previously received query execution requests, and wherein the caching unit comprises a plurality of sets of network attributes corresponding to a plurality of query execution requests. The system is further configured to obtain the requested set of network attributes.

[0024] Yet another aspect of the present disclosure may relate to a non-transitory computer-readable storage medium storing instructions for implementing data caching in a network, the storage medium comprising executable code which, when executed by one or more units of a system, causes the system to receive, from a User Interface (UI), by an Integrated Performance Management (IPM) module, a query execution request for requesting a set of network attributes. Further, the executable code which, when executed by one or more units of a system, causes the system to fetch a hash code corresponding to the query execution request. Further, the executable code which, when executed by one or more units of a system, causes the system to compare the fetched hash code with a plurality of pre-stored hash codes to determine availability of the requested set of network attributes in a caching unit, wherein the plurality of hash codes correspond to a plurality of previously received query execution requests, and wherein the caching unit comprises a plurality of sets of network attributes corresponding to a plurality of query execution requests. Further, the executable code which, when executed by one or more units of a system, causes the system to obtain the requested set of network attributes.
BRIEF DESCRIPTION OF THE DRAWINGS
[0025] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Also, the embodiments shown in the figures are not to be construed as limiting the disclosure, but the possible variants of the method and system according to the disclosure are illustrated herein to highlight the advantages of the disclosure. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components or circuitry commonly used to implement such components.
[0026] FIG. 1 illustrates an exemplary block diagram of a computing device upon which the features of the present disclosure may be implemented in accordance with exemplary implementation of the present disclosure.
[0027] FIG. 2 illustrates an exemplary block diagram of a system for implementing data caching in a network, in accordance with exemplary implementation of the present disclosure.
[0028] FIG. 3 illustrates an exemplary method flow diagram indicating the process of implementing data caching in a network, in accordance with exemplary implementation of the present disclosure.
[0029] FIG. 4a illustrates an exemplary block diagram of a system architecture for implementing data caching in a network, in accordance with exemplary implementation of the present disclosure.
[0030] FIG. 4b illustrates an exemplary sequence flow diagram indicating the process of implementing data caching in a network, in accordance with exemplary implementation of the present disclosure.
[0031] The foregoing shall be more apparent from the following more detailed description of the disclosure.
DETAILED DESCRIPTION
[0032] In the following description, for the purposes of explanation, various specific details are set forth in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter may each be used independently of one another or with any combination of other features. An individual feature may not address any of the problems discussed above or might address only some of the problems discussed above.
[0033] The ensuing description provides exemplary embodiments only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the disclosure as set forth.
[0034] Further, in accordance with the present disclosure, it is to be acknowledged that the functionality described for the various components/units can be implemented interchangeably. While specific embodiments may disclose a particular functionality of these units for clarity, it is recognized that various configurations and combinations thereof are within the scope of the disclosure. The functionality of specific units as disclosed in the disclosure should not be construed as limiting the scope of the present disclosure. Consequently, alternative arrangements and substitutions of units, provided they achieve the intended functionality described herein, are considered to be encompassed within the scope of the present disclosure.
[0035] Also, it is noted that individual embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations may be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed but could have additional steps not included in a figure.
[0036] The word “exemplary” and/or “demonstrative” is used herein to mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive in a manner similar to the term “comprising” as an open transition word without precluding any additional or other elements.
[0037] As used herein, a “processing unit” or “processor” or “operating processor” includes one or more processors, wherein processor refers to any logic circuitry for processing instructions. A processor may be a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors in association with a Digital Signal Processing (DSP) core, a controller, a microcontroller, Application Specific Integrated Circuits, Field Programmable Gate Array circuits, any other type of integrated circuits, etc. The processor may perform signal coding, data processing, input/output processing, and/or any other functionality that enables the working of the system according to the present disclosure. More specifically, the processor or processing unit is a hardware processor.
[0038] As used herein, “a user equipment”, “a user device”, “a smart-user-device”, “a smart-device”, “an electronic device”, “a mobile device”, “a handheld device”, “a wireless communication device”, “a mobile communication device”, “a communication device” may be any electrical, electronic and/or computing device or equipment capable of implementing the features of the present disclosure. The user equipment/device may include, but is not limited to, a mobile phone, smart phone, laptop, a general-purpose computer, desktop, personal digital assistant, tablet computer, wearable device or any other computing device which is capable of implementing the features of the present disclosure. Also, the user device may contain at least one input means configured to receive an input from unit(s) which are required to implement the features of the present disclosure.
[0039] As used herein, “storage unit” or “memory unit” refers to a machine or computer-readable medium including any mechanism for storing information in a form readable by a computer or similar machine. For example, a computer-readable medium includes read-only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory devices or other types of machine-accessible storage media. The storage unit stores at least the data that may be required by one or more units of the system to perform their respective functions.
[0040] As used herein, “interface” or “user interface” refers to a shared boundary across which two or more separate components of a system exchange information or data. The interface may also refer to a set of rules or protocols that define the communication or interaction of one or more modules or one or more units with each other, which also includes the methods, functions, or procedures that may be called.
[0041] All modules, units, components used herein, unless explicitly excluded herein, may be software modules or hardware processors, the processors being a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASIC), Field Programmable Gate Array circuits (FPGA), any other type of integrated circuits, etc.
[0042] As used herein, the transceiver unit includes at least one receiver and at least one transmitter configured respectively for receiving and transmitting data, signals, information or a combination thereof between units/components within the system and/or connected with the system.
[0043] As discussed in the background section, the current known solutions for an efficient and effective network performance management solution that can deal with the unwanted load conditions and the degradation of the network performance have several shortcomings, such as the involvement of manual verification of the requests to make sure that multiple/duplicate requests are not generated, which is a time-consuming, cumbersome, and error-prone task. The present disclosure aims to overcome the above-mentioned and other existing problems in this field of technology by providing a caching unit in a network performance management system, providing the capability for the network performance management system to store data such as, including but not limited to, circle mapping data, executed queries or reports, and the like. This allows improvement in the network performance management system by getting the output data from the caching unit to confirm whether the requested query has already been executed by some other user or a trained model, wherein the data received from the caching unit is received using a trained model, wherein the trained model is a Machine Learning (ML) or an Artificial Intelligence (AI) based system that is trained using historical data or real-time data in the caching unit. To implement the features of the present disclosure, for every query/report execution request created by the user/operator, a unique hash code is assigned, which is then used to analyse the duplicity or uniqueness of such request.
[0044] For example, if a user “A” executes a query “X” for obtaining Minutes of Usage (MOU) for Day 1 between 7-8 A.M., and after some time a user “B” logs in to a user interface and requests the same MOU query “X” to fetch the same data, the network performance management system automatically detects, by using the trained model, that the same type of data has already been requested by the user A when the hash code comparison comes out to be duplicate, and returns to the user B the already executed query/report from the caching unit or using the data from a distributed data lake.
[0045] Hereinafter, exemplary embodiments of the present disclosure will be described with reference to the accompanying drawings.
[0046] The present disclosure can be implemented on a computing device [100] as shown in FIG. 1. FIG. 1 illustrates an exemplary block diagram of the computing device [100] upon which the features of the present disclosure may be implemented in accordance with exemplary implementation of the present disclosure. In an implementation, the computing device [100] may also implement a method for implementing data caching in a network utilising the system [200]. In another implementation, the computing device [100] itself implements the method for implementing data caching in a network using one or more units configured within the computing device [100], wherein said one or more units are capable of implementing the features as disclosed in the present disclosure.
[0047] The computing device [100] may include a bus [102] or other communication mechanism for communicating information, and a hardware processor [104] coupled with the bus [102] for processing information. The hardware processor [104] may be, for example, a general-purpose microprocessor. The computing device [100] may also include a main memory [106], such as a random-access memory (RAM), or other dynamic storage device, coupled to the bus [102] for storing information and instructions to be executed by the processor [104]. The main memory [106] also may be used for storing temporary variables or other intermediate information during execution of the instructions to be executed by the processor [104]. Such instructions, when stored in non-transitory storage media accessible to the processor [104], render the computing device [100] into a special-purpose machine that is customized to perform the operations specified in the instructions. The computing device [100] further includes a read only memory (ROM) [108] or other static storage device coupled to the bus [102] for storing static information and instructions for the processor [104].
[0048] A storage device [110], such as a magnetic disk, optical disk, or solid-state drive, is provided and coupled to the bus [102] for storing information and instructions. The computing device [100] may be coupled via the bus [102] to a display [112], such as a cathode ray tube (CRT), Liquid Crystal Display (LCD), Light Emitting Diode (LED) display, Organic LED (OLED) display, etc. for displaying information to a computer user. An input device [114], including alphanumeric and other keys, touch screen input means, etc. may be coupled to the bus [102] for communicating information and command selections to the processor [104]. Another type of user input device may be a cursor controller [116], such as a mouse, a trackball, or cursor direction keys, for communicating direction information and command selections to the processor [104], and for controlling cursor movement on the display [112]. This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allow the device to specify positions in a plane.
[0049] The computing device [100] may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which in combination with the computing device [100] causes or programs the computing device [100] to be a special-purpose machine. According to one implementation, the techniques herein are performed by the computing device [100] in response to the processor [104] executing one or more sequences of one or more instructions contained in the main memory [106]. Such instructions may be read into the main memory [106] from another storage medium, such as the storage device [110]. Execution of the sequences of instructions contained in the main memory [106] causes the processor [104] to perform the process steps described herein. In alternative implementations of the present disclosure, hard-wired circuitry may be used in place of or in combination with software instructions.
[0050] The computing device [100] also may include a communication interface [118] coupled to the bus [102]. The communication interface [118] provides a two-way data communication coupling to a network link [120] that is connected to a local network [122]. For example, the communication interface [118] may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, the communication interface [118] may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, the communication interface [118] sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.
[0051] The computing device [100] can send messages and receive data, including program code, through the network(s), the network link [120] and the communication interface [118]. In the Internet example, a server [130] might transmit a requested code for an application program through the Internet [128], the ISP [126], the host [124], the local network [122] and the communication interface [118]. The received code may be executed by the processor [104] as it is received, and/or stored in the storage device [110], or other non-volatile storage for later execution.
[0052] Referring to FIG. 2, an exemplary block diagram of a system [200] for implementing data caching in a network is shown, in accordance with the exemplary implementations of the present disclosure. The system [200] comprises at least one IPM module [206], at least one AI/ML module [208], at least one caching unit [210], and at least one data repository [212]. The system [200] may be in communication with at least one user interface (UI) [202] and at least one load balancer [204]. Also, all of the components/units of the system [200] are assumed to be connected to each other unless otherwise indicated below. As shown in FIG. 2, all units shown within the system should also be assumed to be connected to each other. Also, in FIG. 2 only a few units are shown; however, the system [200] may comprise multiple such units, or the system [200] may comprise any such number of said units, as required to implement the features of the present disclosure. In another implementation, the system [200] may reside in a server or a network entity.
[0053] In one implementation, the system [200] is configured for implementing data caching in a network, with the help of the interconnection between the components/units of the system [200].
[0054] In an exemplary aspect of the present disclosure, the system [200] is configured to receive, from a User Interface (UI) [202], a query execution request for requesting a set of network attributes, wherein the set of network attributes are indicative of network performance metrics associated with the network. It is to be noted that the system [200] is to receive the query execution request from the UI [202] via the load balancer [204].
[0055] In an exemplary implementation of the present disclosure, the query execution request for requesting a set of network attributes is received by the IPM module [206] of the system [200]. The IPM module [206] receives the request via the load balancer [204].
[0056] The IPM module [206] monitors and analyses performance counters of network elements. The IPM module [206] performs steps including: collecting performance counter data from various nodes within the network; processing and aggregating said performance counter data and storing it in a Distributed Data Lake; calculating Key Performance Indicators (KPIs) for each network element based on the processed performance counter data; segregating the calculated KPIs based on the required level of aggregation and storing the KPI data in the Distributed Data Lake; providing real-time performance monitoring and visualization of the performance counter data and KPI data; executing tasks at predefined intervals, executing search queries, and storing output data; and analysing and troubleshooting the network performance and distributing incoming network traffic across a group of backend servers using a Load Balancer.
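By way of a non-limiting illustration of the counter-to-KPI aggregation described above, the following Python sketch aggregates raw performance-counter samples per network element and then across the network. The counter names, the sample layout, and the aggregate_kpi helper are illustrative assumptions and are not taken from the disclosure.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical raw performance-counter samples collected from network nodes.
# Each sample is a (node_id, counter_name, value) triple.
samples = [
    ("node-1", "cpu_utilization", 62.0),
    ("node-1", "cpu_utilization", 71.0),
    ("node-2", "cpu_utilization", 55.0),
    ("node-1", "session_count", 1200),
    ("node-2", "session_count", 900),
]

def aggregate_kpi(samples, counter, reducer=mean):
    """Aggregate one counter per network element, then across elements."""
    per_node = defaultdict(list)
    for node_id, name, value in samples:
        if name == counter:
            per_node[node_id].append(value)
    node_kpis = {node: reducer(values) for node, values in per_node.items()}
    network_kpi = reducer(node_kpis.values()) if node_kpis else None
    return node_kpis, network_kpi

node_kpis, network_kpi = aggregate_kpi(samples, "cpu_utilization")
print(node_kpis)    # {'node-1': 66.5, 'node-2': 55.0}
print(network_kpi)  # 60.75
```

A deployed IPM system would of course compute many KPIs over streaming counter data; the reducer argument merely hints at how different aggregation types (sum, mean, percentile) could be swapped in.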
[0057] The UI [202] is a point of human-computer interaction and communication in a device/application/website. The UI [202] allows the users/operators to register a request, view executed queries, view results of the executed queries, and the like. The UI [202], in communication with the system [200], allows the users/operators to register the request or view results of the executed queries.
[0058] The Elastic Load Balancer (ELB) or load balancer [204] is a vital component of the network performance management system [200], designed to efficiently distribute incoming network traffic across a multitude of backend servers or microservices. Its purpose is to ensure the even distribution of data requests, leading to optimized server resource utilization, reduced latency, and improved overall system performance. This ensures efficient, reliable, and prompt handling of requests, contributing to the robustness and resilience of the overall system.
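The disclosure does not fix a balancing algorithm for the load balancer [204]; purely as a sketch under that caveat, a round-robin distribution of incoming requests across backend workers could look as follows, where the backend names and the dispatch helper are illustrative assumptions.

```python
import itertools

# Hypothetical backend instances of the IPM service behind the load balancer.
backends = ["ipm-backend-1", "ipm-backend-2", "ipm-backend-3"]
rotation = itertools.cycle(backends)

def dispatch(request: dict) -> str:
    """Pick the next backend in round-robin order for an incoming request."""
    target = next(rotation)
    # A real load balancer would forward the request over the network;
    # here we only report which backend would serve it.
    return f"{target} serves query {request['query_id']}"

for query_id in range(4):
    print(dispatch({"query_id": query_id}))  # backends repeat: 1, 2, 3, 1
```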
[0059] The query execution request may refer to a formalized message or command sent from the UI [202] to the system [200], requesting the execution of a query. A query is a request for information from a database or data storage system. The query execution request comprising network attributes may also include, but is not limited to, network parameters such as cell identifier, node identifier, circle, location, performance counter, and time range. The query execution request may also include an aggregation type or operations to be performed on the above-mentioned network attributes. The aggregation type specifies how the sub-indicators, or the lower-level variables of the KPIs, should be calculated together. Further, the query execution request may also include a request for a report related to the above network attributes.
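As a non-limiting sketch only, the request fields named above can be pictured as a small record; the concrete field names and types below are assumptions, since the specification lists the attributes without fixing a schema.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass(frozen=True)
class QueryExecutionRequest:
    """Illustrative container for the request fields named in the disclosure."""
    cell_id: Optional[str]        # cell identifier, if the query targets a cell
    node_id: Optional[str]        # node identifier, if the query targets a node
    circle: str                   # telecom circle, i.e. a regional grouping
    location: str
    performance_counter: str      # counter or KPI being requested
    time_range: Tuple[str, str]   # start and end of the requested window
    aggregation: str = "sum"      # how lower-level KPI variables combine
    report: bool = False          # whether a rendered report is also requested

request = QueryExecutionRequest(
    cell_id=None, node_id="node-1", circle="Gujarat", location="Ahmedabad",
    performance_counter="minutes_of_usage",
    time_range=("Day1 07:00", "Day1 08:00"),
)
print(request.performance_counter)  # minutes_of_usage
```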
[0060] In an exemplary aspect of the present disclosure, the set of network attributes may comprise Key Performance Indicators (KPIs) such as, but not limited to, a network performance graph, Minutes of Usage (MOU), traffic load graphs, data consumption graphs, or aggregated performance counters of each network element, or a combination thereof. The KPIs help to assess the performance, reliability, and efficiency of the network infrastructure. The aggregated performance counters may include a central processing unit (CPU) utilization, memory utilization, throughput, session count, etc.
[0061] KPIs are quantifiable metrics that provide actionable insights into the effectiveness and efficiency of network operations and help stakeholders make informed decisions to improve performance. KPIs can include metrics such as uptime, throughput, latency, packet loss, customer satisfaction scores, and revenue generated.
[0062] MOU refers to the total duration of time that a particular service or resource is utilized within a given period, typically measured in minutes.
[0063] Network performance graphs are visual representations of various performance metrics related to a network over time. Network performance graphs can include metrics such as throughput, latency, packet loss, and utilization plotted against time.
[0064] Data consumption graphs illustrate the amount of data consumed by users or applications over time.

[0065] Traffic load graphs depict the volume of data being transferred through a network over time. These graphs show peaks and valleys in data transmission, helping to visualize patterns of network usage.
[0066] In an exemplary aspect of the present disclosure, the system [200] is further configured to fetch a hash code corresponding to the query execution request, wherein fetching the hash code corresponding to the query execution request comprises transmitting the received query execution request and a hash code fetch request, and, based on the transmitted hash code fetch request, receiving the hash code, wherein the hash code is fetched using an Artificial Intelligence (AI)/Machine Learning (ML) module [208].
[0067] It is to be noted that the hash code serves as a unique identifier for the request, allowing the caching unit [210] to efficiently store, retrieve, and manage cached data associated with that request. It enables the caching unit [210] to quickly determine if it has previously processed and cached the same query execution request without having to compare the entire request data.
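As a conventional baseline for such an identifier (the AI/ML-assisted scoring of the following paragraphs goes further by grouping similar requests), a deterministic hash code can be derived by canonicalising the request fields and digesting them. The use of SHA-256 below is an assumption for illustration, not the disclosure's specific hashing scheme.

```python
import hashlib
import json

def hash_code(request: dict) -> str:
    """Derive a deterministic cache key from a query execution request.

    Sorting the keys makes logically identical requests hash identically
    even if their fields arrive in a different order.
    """
    canonical = json.dumps(request, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

q1 = {"counter": "mou", "day": 1, "from": "07:00", "to": "08:00"}
q2 = {"to": "08:00", "from": "07:00", "day": 1, "counter": "mou"}
assert hash_code(q1) == hash_code(q2)  # duplicate requests collide by design
print(hash_code(q1)[:16])              # stable prefix of the cache key
```

Such exact-match keys only detect literal duplicates; the score-based grouping described below is what additionally lets the system serve near-duplicate requests.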
[0068] The AI/ML module [208] may act as a trained model that applies artificial intelligence or machine learning techniques to fetch the hash code. A trained model is trained using historical data or real-time data such as, but not limited to, user requests, circle mapping data, executed queries, and the like. The system [200] includes a trained model, which is the AI/ML module [208] or an AI/ML Layer [406], to compare data from the caching unit [210] or caching layer [504] against the user’s query execution request, and to fetch data from the caching unit [210].
[0069] Traditional hash functions are designed to uniformly distribute data across a space to minimize collisions. However, AI/ML can provide a more intelligent hashing mechanism which can group similar request queries together. This type of grouping is achieved by giving similar scores to the queries having similar aggregation and other similar significant parameters in the query execution request. This kind of scoring is incorporated in the AI/ML module [208] during training of the module itself by providing weights to the parameters according to their importance in the query execution request. Let's take an example where a query execution request is executed for a KPI at a time level. In the training dataset, an integer value score of 1250024 is assigned to the above query. Now, with a change of 1000 in the above integer value, a score of 1251024 is assigned to a new query received. These query inputs along with their expected integer values comprise the training dataset. Outputs expected by the user for these requests are stored in the caching layer. Later, when a new request is fired, its value is identified by the AI/ML module [208], which is then compared with the ones stored in the caching layer, and output is provided from the nearest value present in the caching layer. A configurable margin can be provided to identify the nearest value to the current value. Apart from providing value inference, the AI/ML module [208] also gets trained side by side for the values which haven't been encountered before. This significantly decreases the processing time, as similar queries can be answered by looking at the same or nearby value buckets.
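A minimal sketch of the score-based grouping described above follows; the weight values, the linear scoring formula, and the example encodings are assumptions, since the disclosure states only that parameters are weighted by importance during training and that a configurable margin selects the nearest stored score.

```python
# Integer weights reflecting the assumed importance of each parameter.
WEIGHTS = {"kpi": 1_000_000, "aggregation": 10_000, "time_level": 100}

def score(request: dict) -> int:
    """Map a query to an integer so that similar queries land close together."""
    return sum(WEIGHTS[key] * request[key] for key in WEIGHTS)

def nearest_cached(value: int, cache: dict, margin: int = 1000):
    """Return the cached output whose score is within `margin`, else None."""
    if not cache:
        return None
    best = min(cache, key=lambda stored: abs(stored - value))
    return cache[best] if abs(best - value) <= margin else None

cache = {1250024: "MOU report, Day 1, 7-8 A.M."}      # pre-computed outputs
new_query = {"kpi": 1, "aggregation": 25, "time_level": 10}
print(score(new_query))                               # 1251000
print(nearest_cached(score(new_query), cache))        # hit: score within margin
print(nearest_cached(2_000_000, cache))               # None: outside the margin
```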
[0070] The caching unit [210] may include a high-speed data storage layer which temporarily holds data that is likely to be reused, to improve the speed and performance of data retrieval. By storing frequently accessed data in such a high-speed data storage layer, the system [200] significantly reduces the time taken to access this data, improving overall system [200] efficiency and performance.
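A minimal sketch of such a storage layer follows, keyed by hash code. The in-memory dictionary and the time-to-live eviction are illustrative assumptions; the disclosure does not fix a storage technology or an eviction policy for the caching unit [210].

```python
import time

class CachingUnit:
    """Tiny in-memory cache keyed by hash code, with per-entry expiry."""

    def __init__(self, ttl_seconds: float = 3600.0):
        self._ttl = ttl_seconds
        self._store = {}  # hash_code -> (expiry_timestamp, attributes)

    def get(self, key: str):
        entry = self._store.get(key)
        if entry is None:
            return None                  # never cached: miss
        expiry, attributes = entry
        if time.monotonic() > expiry:    # stale entry: drop it and miss
            del self._store[key]
            return None
        return attributes

    def put(self, key: str, attributes) -> None:
        self._store[key] = (time.monotonic() + self._ttl, attributes)

cache = CachingUnit(ttl_seconds=5.0)
cache.put("abc123", {"mou_minutes": 42})
print(cache.get("abc123"))  # {'mou_minutes': 42} while fresh, None once stale
```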
[0071] In an implementation of the present disclosure, the IPM module [206], after receiving the query execution request for requesting a set of network attributes, transmits the received query execution request and a hash code fetch request to the AI/ML module [208]. Further, based on the transmitted hash code fetch request, the IPM module [206] receives the hash code, wherein the hash code is fetched using the AI/ML module [208].

[0072] In an exemplary aspect of the present disclosure, the system [200] is further configured to compare the fetched hash code with a plurality of pre-stored hash codes to determine availability of the requested set of network attributes in a caching unit [210], wherein the plurality of hash codes correspond to a plurality of previously received query execution requests, and wherein the caching unit [210] comprises a plurality of sets of network attributes corresponding to a plurality of query execution requests.
[0073] In an implementation of the present disclosure, to identify whether the query execution request is a duplicate request or a previously executed request, the IPM module [206] utilizes the AI/ML module [208], which fetches a unique hash code for the query execution request and returns the same to the IPM module [206]. Further, the IPM module [206] compares and verifies the duplicity or uniqueness of the query execution request by comparing the fetched hash code with the previously existing hash codes in the caching unit [210].
[0074] In an exemplary aspect of the present disclosure, the system [200] is further configured to obtain the requested set of network attributes from the caching unit [210], based on the comparison, when the fetched hash code is similar to one of the plurality of pre-stored hash codes.
[0075] In an implementation of the present disclosure, if, based on the comparing by the IPM module [206], duplicity is found in the fetched hash code, then the IPM module [206] obtains the set of network attributes for the query execution request from the already executed query stored in the caching unit [210].
[0076] For example, if a user “A” executes a query “X” to obtain Minutes of Usage (MOU) for Day 1 between 7-8 A.M., and after some time a user “B” logs in to a user interface and requests the same MOU query “X” to fetch the same data, the system [200] automatically detects it by using the AI/ML trained model. When the hash code comparison comes out to be a duplicate or similar, in such a scenario the system [200] returns to the user B the already executed query/report from the caching unit [210].
[0077] In an exemplary aspect of the present disclosure, the system [200] is further configured to obtain the requested set of network attributes from a data repository, based on the comparison, when the fetched hash code is dissimilar to each of the plurality of pre-stored hash codes.
[0078] In an exemplary aspect of the present disclosure, the data repository is a distributed data lake [404].
[0079] The distributed data lake [404] is a data storage repository that centralizes, organizes, and protects large amounts of structured, semi-structured, and unstructured data from multiple sources in a communication network. The data in the distributed data lake can be structured at query-time based on a user’s needs. In the system [200], the raw data and the processed information, based on the user requests, are stored in the distributed data lake [404]. The distributed data lake [404] is a centralized, scalable, and flexible storage solution that allows for easy access and analysis of the data.
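The following sketch illustrates the query-time structuring mentioned above in a schema-on-read style; the record layout and the query_lake helper are illustrative assumptions, not the disclosure's data lake interface.

```python
import json

# Hypothetical raw, schema-less records as they might sit in the data lake.
raw_records = [
    '{"node": "node-1", "counter": "mou", "day": 1, "hour": 7, "value": 42}',
    '{"node": "node-2", "counter": "mou", "day": 1, "hour": 9, "value": 17}',
    '{"node": "node-1", "counter": "cpu", "day": 1, "hour": 7, "value": 63}',
]

def query_lake(records, counter, day, hour):
    """Apply structure at query time: parse, filter, and project raw rows."""
    for line in records:
        row = json.loads(line)
        if row["counter"] == counter and row["day"] == day and row["hour"] == hour:
            yield {"node": row["node"], "value": row["value"]}

print(list(query_lake(raw_records, "mou", day=1, hour=7)))
# [{'node': 'node-1', 'value': 42}]
```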
[0080] In an implementation of the present disclosure, if, based on the comparing by the IPM module [206], the uniqueness of the query execution request is found, then the IPM module [206] obtains the set of network attributes for the query execution request using the data from the data repository [212].
[0081] For example, if a user “A” executes a query “X” for obtaining Minutes of Usage (MOU) for Day 1 between 7-8 A.M., and after some time a user “B” logs in to the user interface and requests another MOU query “Y” to fetch data for Day 1 between 9-10 A.M., the system [200] automatically detects, by using the AI/ML trained model, that the same type of query has not already been requested by any user, and the hash code comparison comes out to be unique. Then the system [200] returns to the user B the newly executed query/report using the data from the data repository [212] or the distributed data lake [404].
[0082] In an exemplary aspect of the present disclosure, the system [200] is further configured to obtain the requested set of network attributes. The system [200] is further configured to transmit the obtained set of network attributes to the UI [202]. The system [200] is further configured to store the obtained set of network attributes in the caching unit [210].
[0083] In an implementation of the present disclosure, the obtained set of network attributes is shared back from the IPM module [206] to the UI [202] via the load balancer [204] to allow the user to check the query execution results. Further, the IPM module [206] stores the query execution results in the caching unit [210].
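Pulling paragraphs [0054] to [0083] together, the following sketch shows one way the cache-first flow could be wired; every class and function name below is an illustrative stand-in, not the disclosure's API.

```python
class DictCache:                    # stands in for the caching unit [210]
    def __init__(self):
        self._data = {}
    def get(self, key):
        return self._data.get(key)
    def put(self, key, value):
        self._data[key] = value

class StubAIML:                     # stands in for the AI/ML module [208]
    def fetch_hash_code(self, request):
        return str(sorted(request.items()))  # deterministic toy hash code

class StubRepository:               # stands in for the data repository [212]
    def execute(self, request):
        return {"report": f"computed for {request}"}

def handle_request(request, aiml, cache, repo):
    key = aiml.fetch_hash_code(request)      # fetch the hash code
    attributes = cache.get(key)              # compare against cached requests
    if attributes is None:                   # unique request: cache miss
        attributes = repo.execute(request)   # obtain from the data repository
        cache.put(key, attributes)           # store back for later users
    return attributes                        # shared back towards the UI

cache, aiml, repo = DictCache(), StubAIML(), StubRepository()
query = {"counter": "mou", "day": 1, "from": "07:00", "to": "08:00"}
first = handle_request(query, aiml, cache, repo)   # user A: computed, cached
second = handle_request(query, aiml, cache, repo)  # user B: served from cache
assert first == second
```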
[0084] Reference is now made to FIG. 3, which illustrates an exemplary method flow diagram indicating the method [300] of implementing data caching in a network, in accordance with the exemplary implementations of the present disclosure. In an implementation, the method [300] is performed by the system [200]. Further, in an implementation, the system [200] may be present in a server device to implement the features of the present disclosure. Also, as shown in FIG. 3, the method [300] starts at step [302].
[0085] At step [304], the method comprises receiving, by the system [200] from a User Interface (UI) [202], a query execution request for requesting a set of network attributes, wherein the set of network attributes are indicative of network performance metrics associated with the network. It is to be noted that the system [200] is to receive the query execution request from the UI [202] via a load balancer [204].
[0086] In an exemplary implementation of the present disclosure, the query execution request for requesting a set of network attributes is received by the IPM module [206] of the system [200]. The IPM module [206] receives the request via the load balancer [204].
[0087] The IPM module [206] monitors and analyses performance counters of network elements. The IPM module [206] performs steps including: collecting performance counter data from various nodes within the network; processing and aggregating said performance counter data and storing it in a data repository [212] or Distributed Data Lake [404]; calculating Key Performance Indicators (KPIs) for each network element based on the processed performance counter data using a KPI Engine; and segregating the calculated KPIs based on the required level of aggregation and storing the KPI data in the data repository [212] or the Distributed Data Lake [404].
[0088] The load balancer [204] is a managed service that automatically distributes incoming application or network traffic across multiple targets. The query execution request may refer to a formalized message or command sent from the UI [202] to the system [200], requesting the execution of a query. A query is a request for information from a database or data storage system.
[0089] The UI [202] is a point of human-computer interaction and communication in a device/application/website. The UI [202] allows the users/operators to register a request, view executed queries, view results of the executed queries, and the like. The UI [202], in communication with the system [200], allows the users/operators to register the request or view results of the executed queries.
[0090] The Elastic Load Balancer (ELB) or load balancer [204] is a vital component of the network performance management system [200], designed to efficiently distribute incoming network traffic across a multitude of backend servers or microservices. Its purpose is to ensure the even distribution of data requests, leading to optimized server resource utilization, reduced latency, and improved overall system performance. This ensures efficient, reliable, and prompt handling of requests, contributing to the robustness and resilience of the overall system.
[0091] The query execution request may refer to a formalized message or command sent from the UI [202] to the system [200], requesting the execution of a query. A query is a request for information from a database or data storage system. The query execution request comprising network attributes may also include, but is not limited to, network parameters such as cell identifier, node identifier, circle, location, performance counter, and time range. The query execution request may also include an aggregation type or operations to be performed on the above-mentioned network attributes. The aggregation type specifies how the sub-indicators, or the lower-level variables of the KPIs, should be calculated together. Further, the query execution request may also include a request for a report related to the above network attributes.
[0092] The set of network attributes may comprise Key Performance Indicators (KPIs) such as, but not limited to, a network performance graph, Minutes of Usage (MOU), traffic load graphs, data consumption graphs, or aggregated performance counters of each network element, or a combination thereof. The KPIs help to assess the performance, reliability, and efficiency of the network infrastructure. The aggregated performance counters may include a central processing unit (CPU) utilization, memory utilization, throughput, session count, etc.
[0093] At step [306], the method comprises fetching, by the system [200], a hash code corresponding to the query execution request. The fetching further comprises transmitting the received query execution request and a hash code fetch request; and, based on the transmitted hash code fetch request, receiving the hash code, wherein the hash code is fetched using an Artificial Intelligence (AI)/Machine Learning (ML) module [208]. It is to be noted that the hash code serves as a unique identifier for the request, allowing the caching unit [210] to efficiently store, retrieve, and manage cached data associated with that request. It enables the caching unit [210] to quickly determine if it has previously processed and cached the same query execution request without having to compare the entire request data.
[0094] The AI/ML module [208] may act as a trained model that applies artificial intelligence or machine learning techniques to fetch the hash code. A trained model is trained using historical data or real-time data such as, but not limited to, user requests, circle mapping data, executed queries, and the like. The system [200] includes a trained model, which is the AI/ML module [208] or an AI/ML Layer [406], to compare data from the caching unit [210] or caching layer [504] against the user’s query execution request, and to fetch data from the caching unit [210].
[0095] The caching unit [210] may include a high-speed data storage layer which temporarily holds data that is likely to be reused, to improve the speed and performance of data retrieval. By storing frequently accessed data in such a high-speed data storage layer, the system [200] significantly reduces the time taken to access this data, improving overall system [200] efficiency and performance.
[0096] In an implementation of the present disclosure, the IPM module [206], after receiving the query execution request for requesting a set of network attributes, transmits the received query execution request and a hash code fetch request to the AI/ML module [208]. Further, based on the transmitted hash code fetch request, the IPM module [206] receives the hash code, wherein the hash code is fetched using the AI/ML module [208].
[0097] Traditional hash functions are designed to uniformly distribute data across a space to minimize collisions. However, AI/ML can provide a more intelligent hashing mechanism which can group similar request queries together. This type of grouping is achieved by giving similar scores to the queries having similar aggregation and other similar significant parameters in the query execution request. This kind of scoring is incorporated in the AI/ML module [208] during training of the module itself by providing weights to the parameters according to their importance in the query execution request. Let's take an example where a query execution request is executed for a KPI at a time level. In the training dataset, an integer value score of 1250024 is assigned to the above query. Now, with a change of 1000 in the above integer value, a score of 1251024 is assigned to a new query received. These query inputs along with their expected integer values comprise the training dataset. Outputs expected by the user for these requests are stored in the caching layer. Later, when a new request is fired, its value is identified by the AI/ML module [208], which is then compared with the ones stored in the caching layer, and output is provided from the nearest value present in the caching layer. A configurable margin can be provided to identify the nearest value to the current value. Apart from providing value inference, the AI/ML module [208] also gets trained side by side for the values which haven't been encountered before. This significantly decreases the processing time, as similar queries can be answered by looking at the same or nearby value buckets.
[0098] At step [308], the method comprises comparing, by the system [200], the fetched hash code with a plurality of pre-stored hash codes to determine availability of the requested set of network attributes in a caching unit [210], wherein the plurality of hash codes correspond to a plurality of previously received query execution requests, and wherein the caching unit [210] comprises a plurality of sets of network attributes corresponding to a plurality of query execution requests.
[0099] In an implementation of the present disclosure, to identify whether the query execution request is a duplicate request or a previously executed request, the IPM module [206] utilizes the AI/ML module [208], which fetches a unique hash code for the query execution request and returns the same to the IPM module [206]. Further, the IPM module [206] compares and verifies the duplicity or uniqueness of the query execution request by comparing the fetched hash code with the previously existing hash codes in the caching unit [210].
[0100] The method [300] further comprises, based on the comparison, in an event the fetched hash code is similar to one of the plurality of pre-stored hash codes (i.e., when the fetched hash code matches with one of the plurality of pre-stored hash codes), obtaining the requested set of network attributes from the caching unit [210].
[0101] In an implementation of the present disclosure, if, based on the comparing by the IPM module [206], duplicity is found in the fetched hash code, then the IPM module [206] obtains the set of network attributes for the query execution request from the already executed query stored in the caching unit [210]. It is to be noted that the caching unit [210] stores the requested data/report against each pre-stored hash code.
[0102] For example, if a user “A” executes a query “X” to obtain Minutes of Usage (MOU) for Day 1 between 7-8 A.M., and after some time a user “B” logs in to the user interface and requests the same MOU query “X” to fetch the same data, the system [200] automatically detects this by using the trained AI/ML model. When the hash code comparison comes out to be a duplicate, the system [200] returns to user B the already executed query/report from the caching unit [210].
[0103] The method [300] further comprises, based on the comparison, in an event the fetched hash code is dissimilar to each of the plurality of pre-stored hash codes, obtaining the requested set of network attributes from a data repository [212]. In an exemplary aspect of the present disclosure, the data repository is a distributed data lake [404].
[0104] The distributed data lake [404] is a data storage repository that centralizes, organizes, and protects large amounts of structured, semi-structured, and unstructured data from multiple sources in a communication network. The data in the distributed data lake can be structured at query-time based on a user’s needs. In the system [200], the raw data and the processed information, based on the user requests, are stored in the distributed data lake [404]. The distributed data lake [404] is a centralized, scalable, and flexible storage solution that allows for easy access and analysis of the data.
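As one hedged illustration of such query-time structuring (schema-on-read), the sketch below assumes the data lake [404] stores raw performance counters as Parquet files under a hypothetical path and reads them with Apache Arrow's dataset API; the disclosure does not specify the lake's format or layout.

```python
# Illustrative only: path, columns, and Parquet format are assumptions.
import pyarrow.dataset as ds

# Hypothetical location of raw performance counters in the data lake [404].
lake = ds.dataset("/data/lake/performance_counters", format="parquet")

# Structure is imposed at query time: read only the columns and rows the
# user's query needs, e.g. MOU per cell for Day 1 between 7 and 8 A.M.
table = lake.to_table(
    columns=["cell_id", "mou"],
    filter=(ds.field("day") == 1) & (ds.field("hour") == 7),
)
```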
[0105] In an implementation of the present disclosure, if, based on the comparison by the IPM module [206], the query execution request is found to be unique, then the IPM module [206] obtains the set of network attributes for the query execution request using the data from the data repository [212].
[0106] For example, if a user “A” executes a query “X” for obtaining Minutes of Usage (MOU) for Day 1 between 7-8 A.M., and after some time user B logs in to the user interface and requests another MOU query “Y” to fetch data for Day 1 between 9-10 A.M., the system [200] automatically detects, by using the trained AI/ML model, that the same type of query has not already been requested by any user, and the hash code comparison comes out to be unique. The system [200] then returns to user B the newly executed query/report using the data from the data repository [212] or the distributed data lake [404].
[0107] At step [310], the method further comprises, based on the comparison, obtaining the requested set of network attributes. The method [300] is further configured to transmit the obtained set of network attributes to the UI [202]. The method [300] is further configured to store the obtained set of network attributes in the caching unit [210].
[0108] In an implementation of the present disclosure, the obtained set of network attributes is transmitted from the IPM module [206] back to the UI [202] via the load balancer [204] to allow the user to check the query execution results. Further, the IPM module [206] stores the query execution results in the caching unit [210].
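The overall hit/miss flow of paragraphs [0100]-[0108] can be summarized in the following sketch. It assumes an in-memory dict for the caching unit [210], and the helper names fetch_hash_code and execute_on_data_lake are hypothetical stand-ins for the AI/ML module [208] and the data repository [212], not names from the disclosure.

```python
# Hypothetical sketch of steps [306]-[310]; helper names are assumptions.
cache: dict[int, object] = {}  # pre-stored hash code -> executed query/report

def fetch_hash_code(request: dict) -> int:
    # Stand-in for the AI/ML module [208]; a real deployment would use the
    # trained similarity-scoring model sketched earlier.
    # (The request is assumed to be a flat dict of hashable values.)
    return hash(frozenset(request.items()))

def execute_on_data_lake(request: dict) -> dict:
    # Stand-in for executing the query against the data repository [212].
    return {"request": request, "rows": []}

def handle_query(request: dict) -> dict:
    code = fetch_hash_code(request)         # fetch hash code (step [306])
    if code in cache:                       # duplicate request (step [308])
        return cache[code]                  # serve from the caching unit [210]
    report = execute_on_data_lake(request)  # unique request: query the lake
    cache[code] = report                    # store for future reuse (step [310])
    return report                           # transmitted back to the UI [202]
```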
[0109] Thereafter, the method [300] terminates at step [312].
[0110] Referring to FIG. 4a, an exemplary system architecture [400] for implementing data caching in a network is explained in conjunction with a signal flow diagram as shown in FIG. 4b, in accordance with the exemplary implementations of the present disclosure. The system architecture [400] is intended to be read in conjunction with the exemplary block diagram of the system [200] as shown in FIG. 2.
[0111] As shown in FIG. 4a, the system architecture [400] comprises an Integrated Performance Management (IPM) module [206], a caching layer [402], an Artificial Intelligence (AI)/Machine Learning (ML) layer [406], and a distributed data lake [404]. The process flow diagram of the system architecture [400] also comprises a User Interface [202] and a load balancer [204] in communication with the IPM module [206]. Further, a user [401] is in communication with the UI [202] to input a query execution request and also to view the results/report of the query execution request.
[0112] With reference to FIG. 4a and FIG. 4b, the first step involves the user [401] registering a query/report execution request by using the UI [202] and sending the query/report execution request to the IPM module [206] via the load balancer [204] to fetch relevant data or a set of network attributes from the IPM module [206]. This is shown as steps S1 and S2 in FIG. 4b. Such data or set of network attributes may include various parameters such as, but not limited to, KPIs such as network performance graphs, Minutes of Usage (MOU), traffic load graphs, data consumption graphs, and the like. The query execution request may also include network parameters such as, but not limited to, cell identifier, node identifier, circle, location, performance counter, and time range. The query execution request may further include the aggregation type, or the operations to be performed on the above-mentioned network attributes. The aggregation type specifies how the sub-indicators, or the lower-level variables of the KPIs, should be calculated together. Further, the query execution request may also include a request for a report related to the above network attributes.
[0113] The UI [202] allows users/operators [401] to register a query execution request, view executed queries, view results of the executed queries, and the like.
[0114] In an exemplary aspect, the IPM module [206] may receive the query execution request/query for specified information, such as Minutes of Usage (MOU) in the network for network devices for a specified time period, such as the previous day.
[0115] The load balancer [204] is a vital component of the system [200], designed to efficiently distribute incoming network traffic across a multitude of backend servers or microservices. Its purpose is to ensure the even distribution of data requests, leading to optimized server resource utilization, reduced latency, and improved overall system performance.
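Purely as an illustration of this role, the sketch below distributes incoming requests across several backend IPM instances using round-robin; the disclosure does not specify the balancing policy, and the backend addresses are hypothetical.

```python
# Hypothetical round-robin load balancer [204]; policy and addresses assumed.
import itertools

class RoundRobinBalancer:
    def __init__(self, backends: list[str]):
        self._cycle = itertools.cycle(backends)  # endless rotation of backends

    def route(self, request: dict) -> str:
        """Pick the backend IPM instance that should handle this request."""
        return next(self._cycle)

balancer = RoundRobinBalancer(["ipm-1:8080", "ipm-2:8080", "ipm-3:8080"])
```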
[0116] Further, the next step involves identifying whether the report execution request is a duplicate or previously executed request. Before a request is sent to the AI/ML layer [406], the IPM module [206] formulates a JSON from the query/report execution request and sends the JSON to the AI/ML layer [406]. This is shown as step S3 in FIG. 4b. A hash code fetch request is then transmitted to the AI/ML layer [406] from the IPM module [206]. The AI/ML layer [406] applies artificial intelligence or machine learning techniques/solutions to fetch the hash code corresponding to the query execution request and, based on the hash code fetch request, returns the fetched hash code to the IPM module [206]. The IPM module [206] then sends the query/report execution request to the caching layer [402] to check if a similar request has already been placed or executed by the IPM module [206]. These steps are shown as steps S4, S5 and S6 in FIG. 4b.
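One possible shape of the JSON formulated at step S3 is sketched below; the field names are assumptions, chosen only to reflect the attribute kinds listed in paragraph [0112], and are not prescribed by the disclosure.

```python
# Hypothetical JSON body for a query execution request (field names assumed).
import json

query_request = {
    "kpi": "minutes_of_usage",
    "cell_id": "C-1042",
    "node_id": "N-17",
    "circle": "Gujarat",
    "performance_counter": "mou",
    "time_range": {"day": 1, "from": "07:00", "to": "08:00"},
    "aggregation": "sum",
}
payload = json.dumps(query_request)  # serialized and sent to the AI/ML layer [406]
```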
[0117] Further, in the next step, the IPM module [206] verifies the duplicity or uniqueness of the query/report execution request by comparing the fetched hash code with the previously existing hash codes in the caching layer [402].
[0118] In one implementation, the IPM module [206] compares the fetched hash code with a plurality of hash codes pre-stored in the caching layer [402], to determine the availability of a set of network attributes in the caching layer [402] (or the caching unit [210] as shown in FIG. 2), wherein the pre-stored hash codes correspond to a plurality of report execution requests received in the past, and the caching layer [402] comprises information obtained upon execution of a plurality of report execution requests received in the past.
[0119] Based on the comparison that the fetched hash code is similar to one of the plurality of pre-stored hash codes, the IPM module [206] obtains the data related to the requested set of network attributes from the caching layer [402], wherein the obtained set of network attributes may include, but is not limited to, Key Performance Indicators (KPIs) such as a network performance graph, Minutes of Usage (MOU), traffic load graphs, data consumption graphs, or aggregated performance counters of each network element, or a combination thereof. This is shown as steps S7 and S8 in FIG. 4b. The IPM module [206] further transmits the obtained set of network attributes to the load balancer [204], which further sends the obtained set of network attributes to the UI [202] to be displayed to the user [401]. These steps are shown as steps S10 and S11 in FIG. 4b.
[0120] Based on the comparison that the fetched hash code is dissimilar to each of the plurality of pre-stored hash codes, the IPM module [206] obtains the data related to the requested set of network attributes from the distributed data lake [404] (or the data repository [212] as shown in FIG. 2). This is shown as step S9 in FIG. 4b. The IPM module [206] transmits the data related to the obtained set of network attributes to the load balancer [204], which further sends this data to the UI [202] and simultaneously stores this data in the caching layer [402] for future usage or for accessing the same report for a later query execution request. This is shown as steps S10, S11 and S12 in FIG. 4b.
[0121] It is to be noted that the caching layer [402] plays a significant role in data management and optimization. The caching layer [402] is a high-speed data storage layer which temporarily holds data that is likely to be reused, to improve the speed and performance of data retrieval. By storing frequently accessed data in the caching layer [402], the system [400] significantly reduces the time taken to access this data, improving overall system efficiency and performance. Further, the caching layer [402] serves as an intermediate layer between the data sources and the sub-systems, such as the analysis engine, correlation engine, service quality manager, and streaming engine.
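Since the caching layer [402] only holds data temporarily, one simple realization is an in-memory store with a time-to-live, sketched below; the TTL value and dict-based storage are assumptions and not part of the disclosure.

```python
# Hypothetical TTL-based caching layer [402]; TTL and eviction are assumed.
import time

class TTLCache:
    def __init__(self, ttl_seconds: float = 3600.0):
        self._ttl = ttl_seconds
        self._store: dict[int, tuple[float, object]] = {}

    def put(self, code: int, report: object) -> None:
        """Hold a report against its hash code, stamped with insert time."""
        self._store[code] = (time.monotonic(), report)

    def get(self, code: int):
        """Return the report if still fresh; expire it otherwise."""
        entry = self._store.get(code)
        if entry is None:
            return None
        stored_at, report = entry
        if time.monotonic() - stored_at > self._ttl:
            del self._store[code]  # stale entry: treat as a cache miss
            return None
        return report
```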
[0122] The present invention uses the AI/ML model to identify the redundant requests coming from the user interface [202] and serves those requests by using the caching layer [402]. This feature saves the end user's time by showing data which was already computed by the same user or a different user, thereby saving execution time and making the debugging process faster.
[0123] Yet another aspect of the present disclosure may relate to a user equipment (UE) comprising a transceiver unit configured to transmit, to a system [200], a query execution request for requesting a set of network attributes, wherein the set of network attributes are indicative of network performance metrics associated with the network. The transceiver unit is further configured to receive, from the system [200], the set of network attributes, wherein the set of network attributes are received from the system [200] configured to receive, from a User Interface (UI) [202], by an Integrated Performance Management (IPM) module [206], a query execution request for requesting a set of network attributes. The system [200] is further configured to fetch a hash code corresponding to the query execution request. The system [200] is further configured to compare the fetched hash code with a plurality of pre-stored hash codes to determine availability of the requested set of network attributes in a caching unit [210], wherein the plurality of hash codes correspond to a plurality of previously received query execution requests, and wherein the caching unit [210] comprises a plurality of sets of network attributes corresponding to a plurality of query execution requests. The system [200] is further configured to obtain the requested set of network attributes.
[0124] Yet another aspect of the present disclosure may relate to a non-transitory computer-readable storage medium storing instructions for implementing data caching in a network, the storage medium comprising executable code which, when executed by one or more units of a system [200], causes the system [200] to receive, from a User Interface (UI) [202], by an Integrated Performance Management (IPM) module [206], a query execution request for requesting a set of network attributes, wherein the set of network attributes are indicative of network performance metrics associated with the network. Further, the executable code, when executed by one or more units of the system [200], causes the system [200] to fetch a hash code corresponding to the query execution request. Further, the executable code, when executed by one or more units of the system [200], causes the system [200] to compare the fetched hash code with a plurality of pre-stored hash codes to determine availability of the requested set of network attributes in a caching unit [210], wherein the plurality of hash codes correspond to a plurality of previously received query execution requests, and wherein the caching unit [210] comprises a plurality of sets of network attributes corresponding to a plurality of query execution requests. Further, the executable code, when executed by one or more units of the system [200], causes the system [200] to obtain the requested set of network attributes.
[0125] As evident from the above, the present disclosure provides significant technical advancement over the known solutions. Various technical advantages of the present disclosure include:

- Reduced unwanted/surplus burden on the communication network by eliminating duplicate query/report execution requests.
- Enhanced network performance of the communication network.
- Faster visibility of the results to the users in case of placement of duplicate query/report execution requests.
- Overall time saving for the user when a duplicate query/report execution request is placed.
[0126] While considerable emphasis has been placed herein on the disclosed embodiments, it will be appreciated that many embodiments can be made and that many changes can be made to the embodiments without departing from the principles of the present disclosure. These and other changes in the embodiments of the present disclosure will be apparent to those skilled in the art, whereby it is to be understood that the foregoing descriptive matter is illustrative and non-limiting.
[0127] Further, in accordance with the present disclosure, it is to be acknowledged that the functionality described for the various components/units can be implemented interchangeably. While specific embodiments may disclose a particular functionality of these units for clarity, it is recognized that various configurations and combinations thereof are within the scope of the disclosure. The functionality of specific units as disclosed in the disclosure should not be construed as limiting the scope of the present disclosure. Consequently, alternative arrangements and substitutions of units, provided they achieve the intended functionality described herein, are considered to be encompassed within the scope of the present disclosure.

We Claim:
1. A method [300] of implementing data caching in a network, the method comprising:
- receiving, by a system [200] from a User Interface (UI) [202], a query execution request for requesting a set of network attributes;
- fetching, by the system [200], a hash code corresponding to the query execution request;
- comparing, by the system [200], the fetched hash code with a plurality of pre-stored hash codes to determine availability of the requested set of network attributes in a caching unit [210], wherein the plurality of hash codes correspond to a plurality of previously received query execution requests, and wherein the caching unit [210] comprises a plurality of sets of network attributes corresponding to a plurality of query execution requests; and
- obtaining, by the system [200], the requested set of network attributes.

2. The method [300] as claimed in claim 1, wherein the system [200] is to receive the query execution request from the UI [202] via a Load Balancer [204].
3. The method [300] as claimed in claim 1, wherein the set of network attributes comprises at least Network Performance Graphs, Minutes of Usage, Traffic Load Graphs, Data Consumption Graphs, or a combination thereof.
4. The method [300] as claimed in claim 1, further comprising:
transmitting, by the system [200], the obtained set of network attributes to the UI [202].
5. The method [300] as claimed in claim 1, wherein fetching the hash code corresponding
to the query execution request comprises:
transmitting the received query execution request and a hash code fetch request; and based on the transmitted hash code fetch request, receiving the hash code, wherein the hash code is fetched using an Artificial Intelligence (AI)/Machine Learning (ML) module [208].
6. The method [300] as claimed in claim 1, wherein the comparing further comprises:

in an event the fetched hash code is similar to one of the plurality of pre-stored hash codes, obtaining the requested set of network attributes from the caching unit [210].
7. The method [300] as claimed in claim 1, further comprising:
in an event the fetched hash code is dissimilar to each of the plurality of pre-stored hash codes, obtaining the requested set of network attributes from a data repository [212].
8. The method [300] as claimed in claim 7, further comprising: storing the obtained set of network attributes in the caching unit [210].
9. A system [200] for implementing data caching in a network, the system [200] configured to:

- receive, from a User Interface (UI) [202], by an Integrated Performance Management (IPM) module [206], a query execution request for requesting a set of network attributes;
- fetch a hash code corresponding to the query execution request;
- compare the fetched hash code with a plurality of pre-stored hash codes to determine availability of the requested set of network attributes in a caching unit [210], wherein the plurality of hash codes correspond to a plurality of previously received query execution requests, and wherein the caching unit [210] comprises a plurality of sets of network attributes corresponding to a plurality of query execution requests; and
- obtain the requested set of network attributes.

10. The system [200] as claimed in claim 9, wherein the system [200] is to receive the query execution request from the UI [202] via a Load Balancer [204].
11. The system [200] as claimed in claim 9, wherein the set of network attributes comprises at least Network Performance Graphs, Minutes of Usage, Traffic Load Graphs, Data Consumption Graphs, or a combination thereof.
12. The system [200] as claimed in claim 9, wherein the system [200] is configured to further:
transmit the obtained set of network attributes to the UI [202].

13. The system [200] as claimed in claim 9, wherein the system [200] to fetch the hash code
corresponding to the query execution request, is configured to:
transmit the received query execution request and a hash code fetch request; and based on the transmitted hash code fetch request, receive the hash code, wherein the hash code is fetched using an Artificial Intelligence (AI)/Machine Learning (ML) module [208].
14. The system [200] as claimed in claim 9, wherein the system [200] is configured to: obtain the requested set of network attributes from the caching unit [210], based on the comparison that the fetched hash code is similar to one of the plurality of pre-stored hash codes.
15. The system [200] as claimed in claim 9, wherein the system [200] is configured to further:
obtain the requested set of network attributes from a data repository [212], based on the comparison that the fetched hash code is dissimilar to each of the plurality of pre-stored hash codes.
16. The system [200] as claimed in claim 15, wherein the system [200] is configured to
further:
store the obtained set of network attributes in the caching unit [210].
17. The system [200] as claimed in claim 15, wherein the data repository [212] is a distributed data lake [404].
18. A user equipment (UE) comprising:

- a transceiver unit configured to:
- transmit, to a system [200], a query execution request for requesting a set of network attributes, wherein the set of network attributes are indicative of network performance metrics associated with the network;
- receive, from the system [200], the set of network attributes, wherein the set of network attributes are received from the system [200], wherein the system [200] is configured to:

o receive, from a User Interface (UI) [202], by an Integrated Performance Management (IPM) module [206], a query execution request for requesting a set of network attributes;
o fetch a hash code corresponding to the query execution request;
o compare the fetched hash code with a plurality of pre-stored hash codes to determine availability of the requested set of network attributes in a caching unit [210], wherein the plurality of hash codes correspond to a plurality of previously received query execution requests, and wherein the caching unit [210] comprises a plurality of sets of network attributes corresponding to a plurality of query execution requests; and
o obtain the requested set of network attributes.

Documents

Application Documents

# Name Date
1 202321048374-STATEMENT OF UNDERTAKING (FORM 3) [19-07-2023(online)].pdf 2023-07-19
2 202321048374-PROVISIONAL SPECIFICATION [19-07-2023(online)].pdf 2023-07-19
3 202321048374-FORM 1 [19-07-2023(online)].pdf 2023-07-19
4 202321048374-FIGURE OF ABSTRACT [19-07-2023(online)].pdf 2023-07-19
5 202321048374-DRAWINGS [19-07-2023(online)].pdf 2023-07-19
6 202321048374-FORM-26 [18-09-2023(online)].pdf 2023-09-18
7 202321048374-Proof of Right [23-10-2023(online)].pdf 2023-10-23
8 202321048374-ORIGINAL UR 6(1A) FORM 1 & 26)-041223.pdf 2023-12-09
9 202321048374-FORM-5 [16-07-2024(online)].pdf 2024-07-16
10 202321048374-ENDORSEMENT BY INVENTORS [16-07-2024(online)].pdf 2024-07-16
11 202321048374-DRAWING [16-07-2024(online)].pdf 2024-07-16
12 202321048374-CORRESPONDENCE-OTHERS [16-07-2024(online)].pdf 2024-07-16
13 202321048374-COMPLETE SPECIFICATION [16-07-2024(online)].pdf 2024-07-16
14 Abstract-1.jpg 2024-09-04
15 202321048374-FORM 18 [27-01-2025(online)].pdf 2025-01-27