
Method And System To Automatically Assign Restricted Data To A User

Abstract: The present disclosure relates to a method and a system to automatically assign a restricted data to a user. The method includes receiving, at an Integrated Performance Management (IPM) unit from a load balancer, a restricted data request associated with the user. The method includes transmitting, from the IPM unit to a trained model in the network, a hash code request associated with the restricted data request. The method includes receiving, at the IPM unit from the trained model, a unique hash code based on the hash code request. The method includes fetching, at the IPM unit from a caching layer, the restricted data upon receiving the unique hash code. The method includes automatically assigning, from the IPM unit to the user via the load balancer, the restricted data associated with the restricted data request. [FIG. 4]


Patent Information

Application #
Filing Date
22 August 2023
Publication Number
09/2025
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Parent Application

Applicants

Jio Platforms Limited
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.

Inventors

1. Aayush Bhatnagar
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
2. Ankit Murarka
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
3. Jugal Kishore
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
4. Gaurav Kumar
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
5. Kishan Sahu
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
6. Rahul Kumar
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
7. Sunil Meena
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
8. Gourav Gurbani
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
9. Sanjana Chaudhary
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
10. Chandra Ganveer
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
11. Supriya Kaushik De
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
12. Debashish Kumar
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
13. Mehul Tilala
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
14. Dharmendra Kumar Vishwakarma
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
15. Yogesh Kumar
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
16. Niharika Patnam
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
17. Harshita Garg
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
18. Avinash Kushwaha
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
19. Sajal Soni
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
20. Kunal Telgote
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
21. Manasvi Rajani
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.

Specification

FORM 2
THE PATENTS ACT, 1970
(39 OF 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
“METHOD AND SYSTEM TO AUTOMATICALLY ASSIGN
RESTRICTED DATA TO A USER”
We, Jio Platforms Limited, an Indian National, of Office - 101, Saffron, Nr.
Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat,
India.
The following specification particularly describes the invention and the manner in
which it is to be performed.
METHOD AND SYSTEM TO AUTOMATICALLY ASSIGN
RESTRICTED DATA TO A USER
TECHNICAL FIELD
[0001] Embodiments of the present disclosure generally relate to network
performance management systems. More particularly, embodiments of the present
disclosure relate to automatically assigning a restricted data to a user.
BACKGROUND
[0002] The following description of the related art is intended to provide background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section is used only to enhance the understanding of the reader with respect to the present disclosure, and not as an admission of prior art.
[0003] Network performance management systems typically track network elements and data using network monitoring tools. Further, the network performance management systems combine and process such data to determine key performance indicators (KPI) of the network. Integrated performance management systems provide the means to visualize the network performance data so that network operators and other relevant stakeholders are able to identify the service quality of the overall network, and individual/grouped network elements. By having an overall as well as detailed view of the network performance, the network operator can detect, diagnose, and remedy actual service issues, as well as predict potential service issues or failures in the network and take precautionary measures accordingly.
[0004] In network performance management systems, management of the network via multiple dashboards leads to problems like delayed decision-making and inaccurate assessment, as the network operators have to manage and monitor performance on multiple dashboards. The problem arises when the same dashboard has to be assigned to different users, where the users have different permissions and access. One way to solve this issue is creating multiple dashboards, each with specific permissions assigned to the users. Another way is sending permission-specific data to fulfil the administrator requirement. But both of these solutions are cumbersome tasks that require assigning and keeping track of multiple dashboards while maintaining these dashboards. Further, creating and maintaining multiple dashboards may become labour-intensive and complex, especially with an increase in the number of users or their roles.
[0005] Thus, there exists an imperative need in the art to provide a solution that can overcome these and other limitations of the existing solutions.
SUMMARY
[0006] This section is provided to introduce certain aspects of the present disclosure in a simplified form that are further described below in the detailed description. This summary is not intended to identify the key features or the scope of the claimed subject matter.
[0007] An aspect of the present disclosure may relate to a method to automatically assign a restricted data to a user. The method includes receiving, by a transceiver unit at an Integrated Performance Management (IPM) unit from a load balancer in a network, a restricted data request associated with the user. The restricted data request is at least one of a restricted dashboard request and a restricted report execution request. The method further includes transmitting, by the transceiver unit from the IPM unit to a trained model in the network, a hash code request associated with the restricted data request. Furthermore, the method includes receiving, by a processing unit at the IPM unit from the trained model, a unique hash code based on the hash code request. Further, the method includes fetching, by the processing unit at the IPM unit from a caching layer, the restricted data associated with the restricted data request, upon receiving the unique hash code. Further, the method includes automatically assigning, by the processing unit from the IPM unit to the user via the load balancer in the network, the restricted data associated with the restricted data request.
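The request-to-assignment flow described above (including the cache-miss fallback to the computational layer described in the following paragraphs) can be sketched in a few lines. This is a minimal illustrative sketch, not the specification's implementation: the names IPMUnit, hash_model, cache, computation_layer, and handle_request are all hypothetical stand-ins for the trained model, caching layer, and computational layer.

```python
class IPMUnit:
    """Toy stand-in for the Integrated Performance Management (IPM) unit."""

    def __init__(self, hash_model, cache, computation_layer):
        self.hash_model = hash_model            # stands in for the trained model
        self.cache = cache                      # stands in for the caching layer
        self.computation_layer = computation_layer

    def handle_request(self, request):
        """Resolve a restricted-data request received via the load balancer."""
        # Step 1: ask the trained model for a unique hash code for this request.
        code = self.hash_model(request)
        # Step 2: try to fetch the restricted data from the caching layer.
        data = self.cache.get(code)
        # Step 3: on a cache miss, generate the data via the computational layer
        # and cache it for subsequent requests.
        if data is None:
            data = self.computation_layer(request)
            self.cache[code] = data
        # Step 4: return the data; the load balancer relays it to the user.
        return data
```

Under this sketch, a repeated request for the same restricted dashboard is served from the caching layer without re-invoking the computational layer.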
[0008] In an exemplary aspect of the present disclosure, the method further includes generating, by the processing unit via a computational layer, a set of computed restricted data based on at least the restricted report execution request, wherein the set of computed restricted data is generated in an event the restricted data associated with the restricted data request is not detected at the caching layer.
[0009] In an exemplary aspect of the present disclosure, the method further includes automatically assigning, by the processing unit from the IPM unit to the user via the load balancer in the network, the set of computed restricted data associated with the restricted data request.
[0010] In an exemplary aspect of the present disclosure, the unique hash code associated with the restricted data request is generated via the trained model, wherein the model is trained using a machine learning technique.
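The specification leaves the trained model's internals open. As a simple deterministic stand-in for the model's hash-code generation, one could derive a repeatable code from the request's attributes, so that logically identical requests always map to the same cache key. The field names and the use of SHA-256 here are illustrative assumptions, not details from the disclosure.

```python
import hashlib

def unique_hash_code(request: dict) -> str:
    """Derive a repeatable hash code for a restricted-data request.

    Keys are sorted so that identical requests hash identically regardless
    of the order in which their fields were supplied.
    """
    canonical = "|".join(f"{k}={request[k]}" for k in sorted(request))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()
```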
[0011] Another aspect of the present disclosure may relate to a system to automatically assign a restricted data to a user. The system includes a transceiver unit. The transceiver unit is configured to receive, at an Integrated Performance Management (IPM) unit from a load balancer in a network, a restricted data request associated with the user. The restricted data request is at least one of a restricted dashboard request and a restricted report execution request. The transceiver unit is further configured to transmit, from the IPM unit to a trained model in the network, a hash code request associated with the restricted data request. The system includes a processing unit connected to at least the transceiver unit. The processing unit is configured to receive, at the IPM unit from the trained model, a unique hash code based on the hash code request. The processing unit is further configured to fetch, at the IPM unit from a caching layer, the restricted data associated with the restricted data request, upon reception of the unique hash code. The processing unit is further configured to automatically assign, from the IPM unit to the user via the load balancer in the network, the restricted data associated with the restricted data request.
[0012] Yet another aspect of the present disclosure may relate to a non-transitory computer readable storage medium storing instructions to automatically assign a restricted data to a user, the instructions including executable code which, when executed by one or more units of a system, cause a transceiver unit of the system to receive, at an Integrated Performance Management (IPM) unit from a load balancer in a network, a restricted data request associated with the user. The restricted data request is at least one of a restricted dashboard request and a restricted report execution request. The instructions when executed by the system further cause the transceiver unit to transmit, from the IPM unit to a trained model in the network, a hash code request associated with the restricted data request. The instructions when executed by the system further cause a processing unit to receive, at the IPM unit from the trained model, a unique hash code based on the hash code request. The instructions when executed by the system further cause the processing unit to fetch, at the IPM unit from a caching layer, the restricted data associated with the restricted data request, upon reception of the unique hash code. The instructions when executed by the system further cause the processing unit to automatically assign, from the IPM unit to the user via the load balancer in the network, the restricted data associated with the restricted data request.
OBJECTS OF THE INVENTION
[0013] Some of the objects of the present disclosure, which at least one
embodiment disclosed herein satisfies are listed herein below.
[0014] It is an object of the present disclosure to share restrictive access to
information on the dashboard with at least one user via assigning counters.
[0015] It is another object of the present disclosure to create a KPI (Key Performance Indicator) and track the performance of a network via the counters.
[0016] It is yet another object of the present disclosure to debug and visualize the
KPI data using the counters.
[0017] It is yet another object of the present disclosure to automatically assign the restricted data to the user based on a request received to assign the restricted data.
DESCRIPTION OF THE DRAWINGS
[0018] The accompanying drawings, which are incorporated herein and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Also, the embodiments shown in the figures are not to be construed as limiting the disclosure, but the possible variants of the method and system according to the disclosure are illustrated herein to highlight the advantages of the disclosure. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components or circuitry commonly used to implement such components.
[0019] FIG. 1 illustrates an exemplary block diagram of a network performance
management system.
[0020] FIG. 2 illustrates an exemplary block diagram of a computing device upon which the features of the present disclosure may be implemented, in accordance with an exemplary implementation of the present disclosure.
[0021] FIG. 3 illustrates an exemplary block diagram of a system to automatically assign a restricted data to a user, in accordance with exemplary implementations of the present disclosure.
[0022] FIG. 4 illustrates a method flow diagram to automatically assign a restricted
data to a user, in accordance with exemplary implementations of the present
disclosure.
[0023] FIG. 5 illustrates an exemplary system architecture to automatically assign
a restricted data to a user, in accordance with exemplary implementations of the
present disclosure.
[0024] FIG. 6 illustrates a sequence flow diagram to automatically assign a restricted data to a user, in accordance with exemplary implementations of the present disclosure.
[0025] The foregoing shall be more apparent from the following more detailed description of the disclosure.
DETAILED DESCRIPTION
[0026] In the following description, for the purposes of explanation, various specific details are set forth in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter may each be used independently of one another or with any combination of other features. An individual feature may not address any of the problems discussed above or might address only some of the problems discussed above.
[0027] The ensuing description provides exemplary embodiments only, and is not
intended to limit the scope, applicability, or configuration of the disclosure. Rather,
the ensuing description of the exemplary embodiments will provide those skilled in
the art with an enabling description for implementing an exemplary embodiment.
It should be understood that various changes may be made in the function and
arrangement of elements without departing from the spirit and scope of the
disclosure as set forth.
[0028] Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail.
[0029] Also, it is noted that individual embodiments may be described as a process
which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure
diagram, or a block diagram. Although a flowchart may describe the operations as
a sequential process, many of the operations may be performed in parallel or
concurrently. In addition, the order of the operations may be re-arranged. A process
is terminated when its operations are completed but could have additional steps not
included in a figure.
[0030] The words “exemplary” and/or “demonstrative” are used herein to mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms “include,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive—in a manner similar to the term “comprising” as an open transition word—without precluding any additional or other elements.
[0031] As used herein, a “processing unit” or “processor” or “operating processor” includes one or more processors, wherein processor refers to any logic circuitry for processing instructions. A processor may be a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors in association with a Digital Signal Processing (DSP) core, a controller, a microcontroller, Application Specific Integrated Circuits, Field Programmable Gate Array circuits, any other type of integrated circuits, etc. The processor may perform signal coding, data processing, input/output processing, and/or any other functionality that enables the working of the system according to the present disclosure. More specifically, the processor or processing unit is a hardware processor.
[0032] As used herein, “a user equipment”, “a user device”, “a smart-user-device”, “a smart-device”, “an electronic device”, “a mobile device”, “a handheld device”, “a wireless communication device”, “a mobile communication device”, or “a communication device” may be any electrical, electronic and/or computing device or equipment capable of implementing the features of the present disclosure. The user equipment/device may include, but is not limited to, a mobile phone, smart phone, laptop, a general-purpose computer, desktop, personal digital assistant, tablet computer, wearable device, or any other computing device which is capable of implementing the features of the present disclosure. Also, the user device may contain at least one input means configured to receive an input from at least one of a transceiver unit, a processing unit, a storage unit, and any other such unit(s) which are required to implement the features of the present disclosure.
[0033] As used herein, “storage unit” or “memory unit” refers to a machine or computer-readable medium including any mechanism for storing information in a form readable by a computer or similar machine. For example, a computer-readable medium includes read-only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory devices or other types of machine-accessible storage media. The storage unit stores at least the data that may be required by one or more units of the system to perform their respective functions.
[0034] As used herein, “interface” or “user interface” refers to a shared boundary across which two or more separate components of a system exchange information or data. The interface may also be referred to as a set of rules or protocols that define the communication or interaction of one or more modules or one or more units with each other, which also includes the methods, functions, or procedures that may be called.
[0035] All modules, units, and components used herein, unless explicitly excluded herein, may be software modules or hardware processors, the processors being a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASIC), Field Programmable Gate Array circuits (FPGA), any other type of integrated circuits, etc.
[0036] As used herein, the transceiver unit includes at least one receiver and at least one transmitter configured respectively for receiving and transmitting data, signals, information or a combination thereof between units/components within the system and/or connected with the system.
[0037] As discussed in the background section, the currently known solutions have several shortcomings. Assigning the same dashboard to different users with different permissions and access is a major issue during the monitoring and management of the network parameters. Monitoring and managing multiple dashboards for different users with different access and permissions leads to problems like delayed decision-making and inaccurate assessment, as the administrative user has to manage and monitor performance on multiple dashboards. The present disclosure aims to overcome the above-mentioned and other existing problems in this field of technology by providing a method and a system to automatically assign a restricted data to a user based on a request of the user.
[0038] FIG. 1 illustrates an exemplary block diagram of a network performance management system [100], in accordance with the exemplary embodiments of the present invention. Referring to FIG. 1, the network performance management system [100] comprises various sub-systems such as: an integrated performance management unit [100a], a normalization layer [100b], a computation layer [100d], an anomaly detection layer [100o], a streaming engine [100l], a load balancer [100k], an operations and management system [100p], an API gateway system [100r], an analysis engine [100h], a parallel computing framework [100i], a forecasting engine [100t], a distributed file system [100j], a mapping layer [100s], a distributed data lake [100u], a scheduling layer [100g], a reporting engine [100m], a message broker [100e], a graph layer [100f], a caching layer [100c], a service quality manager [100q] and a correlation engine [100n]. Exemplary connections between these subsystems are also shown in FIG. 1. However, it will be appreciated by those skilled in the art that the present disclosure is not limited to the connections shown in the diagram, and any other connections between various subsystems that are needed to realise the effects are within the scope of this disclosure.
[0039] Following are the various components of the system [100], as shown in FIG. 1:
[0040] Integrated Performance Management (IPM) unit [100a] is associated with a
performance management engine [100v] and a Key Performance Indicator (KPI)
Engine [100w].
[0041] Performance Management Engine [100v]: The Performance Management engine [100v] is a crucial component of the IPM system [100a], responsible for collecting, processing, and managing performance counter data from various data sources within the network (e.g., a 5G network). As used herein, the counter data includes metrics such as connection speed, latency, data transfer rates, and many others. The counter data is then processed and aggregated as required, forming a comprehensive overview of network performance. The processed information is then stored in the Distributed Data Lake [100u]. The distributed data lake [100u] is a centralized, scalable, and flexible storage solution, allowing for easy access and further analysis. The Performance Management engine [100v] also enables the reporting and visualization of the performance counter data, thus providing network administrators with a real-time, insightful view of the network's operation. Through these visualizations, operators can monitor the network's performance, identify potential issues, and make informed decisions to enhance network efficiency and reliability. An operator in the IPM system [100a] may be an individual, a device, an administrator, and the like who may interact with or manage the network.
[0042] Key Performance Indicator (KPI) Engine [100w]: The Key Performance Indicator (KPI) Engine [100w] is a dedicated component tasked with managing the KPIs of all the network elements. The Key Performance Indicator (KPI) Engine [100w] uses the performance counters, which are collected and processed by the Performance Management engine [100v] from various data sources. These counters, encapsulating crucial performance data, are harnessed by the KPI engine [100w] to calculate essential KPIs. These KPIs may include at least one of: data throughput, latency, packet loss rate, and more. Once the KPIs are computed, the KPIs are segregated based on the aggregation requirements, offering a multi-layered and detailed understanding of the network performance. The processed KPI data is then stored in the Distributed Data Lake [100u], ensuring a highly accessible, centralized, and scalable data repository for further analysis and utilization. Similar to the Performance Management engine [100v], the KPI engine [100w] is also responsible for reporting and visualization of KPI data. This functionality allows network administrators to gain a comprehensive, visual understanding of the network's performance, thus supporting informed decision-making and efficient network management.
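The derivation of a KPI from raw performance counters, and its segregation by aggregation level, can be illustrated with a short sketch. The counter names (packets_sent, packets_lost) and the packet-loss-rate KPI are illustrative assumptions; the disclosure does not fix any particular counter schema.

```python
def packet_loss_rate(counters: dict) -> float:
    """Per-element KPI: fraction of packets lost, computed from raw counters."""
    sent = counters["packets_sent"]
    lost = counters["packets_lost"]
    return lost / sent if sent else 0.0

def aggregate_packet_loss_rate(per_element: list) -> float:
    """Network-wide KPI: the same metric aggregated over all element counters,
    so that per-element and aggregate views can be reported side by side."""
    sent = sum(c["packets_sent"] for c in per_element)
    lost = sum(c["packets_lost"] for c in per_element)
    return lost / sent if sent else 0.0
```

Summing the raw counters before dividing (rather than averaging per-element rates) keeps the aggregate KPI correct when elements carry unequal traffic volumes.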
[0043] Ingestion layer: The Ingestion layer (not shown in FIG. 1) forms a key part of the IPM system [100a]. The ingestion layer primarily performs the function to establish an environment capable of handling diverse types of incoming data. This data may include Alarms, Counters, Configuration parameters, Call Detail Records (CDRs), Infrastructure metrics, Logs, and Inventory data, all of which are crucial for maintaining and optimizing the network's performance. Upon receiving this data, the Ingestion layer processes the data by validating the data integrity and correctness to ensure that the data is fit for further use. Following the validation, the data is routed to various components of the IPM system [100a], including the Normalization layer [100b], Streaming Engine [100l], Streaming Analytics, and Message Brokers [100e]. The destination is chosen based on where the data is required for further analytics and processing. By serving as the first point of contact for incoming data, the Ingestion layer plays a vital role in managing the data flow within the system, thus supporting comprehensive and accurate network performance analysis.
[0044] Normalization layer [100b]: The Normalization Layer [100b] serves to standardize, enrich, and store data in the appropriate databases. It takes in data that has been ingested and adjusts it to a common standard, making it easier to compare and analyze. This process of "normalization" reduces redundancy and improves data integrity. Upon completion of normalization, the data is stored in various databases like the Distributed Data Lake [100u], Caching Layer [100c], and Graph Layer [100f], depending on its intended use. The choice of storage determines how the data can be accessed and used in the future. Additionally, the Normalization Layer [100b] produces data for the Message Broker [100e], a system that enables communication between different parts of the integrated performance management unit [100a] through the exchange of data messages. Moreover, the Normalization Layer [100b] supplies the standardized data to several other subsystems. These include the Analysis Engine [100h] for detailed data examination, the Correlation Engine [100n] for detecting relationships among various data elements, the Service Quality Manager [100q] for maintaining and improving the quality of services, and the Streaming Engine [100l] for processing real-time data streams. These subsystems depend on the normalized data to perform their operations effectively and accurately, demonstrating the Normalization Layer's [100b] critical role in the entire system.
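The adjustment of ingested records to a common standard can be pictured as a field-mapping step. The vendor-specific and canonical field names below are hypothetical examples, intended only to show what "adjusting data to a common standard" might look like in practice.

```python
# Map assumed vendor-specific counter names onto one assumed common schema.
FIELD_MAP = {
    "lat_ms": "latency_ms",        # hypothetical vendor-A naming
    "latencyMillis": "latency_ms", # hypothetical vendor-B naming
}

def normalize(record: dict) -> dict:
    """Rename vendor-specific fields to the common standard; pass others through."""
    return {FIELD_MAP.get(key, key): value for key, value in record.items()}
```

Once every record carries the same field names, downstream consumers such as the analysis and correlation engines can compare data from different sources directly.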
[0045] Caching layer [100c]: The Caching Layer [100c] in the IPM system [100a] plays a significant role in data management and optimization. During the initial phase, the Normalization Layer [100b] processes incoming raw data to create a standardized format, enhancing consistency and comparability. The Normalization Layer then inserts this normalized data into various databases. One such database is the Caching Layer [100c]. The Caching Layer [100c] is a high-speed data storage layer which temporarily holds data that is likely to be reused, to improve the speed and performance of data retrieval. By storing frequently accessed data in the Caching Layer [100c], the system significantly reduces the time taken to access this data, improving overall system efficiency and performance. Further, the Caching Layer [100c] serves as an intermediate layer between the data sources and the sub-systems, such as the Analysis Engine, Correlation Engine [100n], Service Quality Manager, and Streaming Engine. The Normalization Layer [100b] is responsible for providing these sub-systems with the necessary data from the Caching Layer [100c].
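A caching layer that "temporarily holds data likely to be reused" is commonly realized with time-bounded entries. The sketch below is an illustrative in-process stand-in; a deployment of the kind described here would more plausibly use a dedicated in-memory cache service, and the TTLCache name and interface are assumptions.

```python
import time

class TTLCache:
    """Minimal time-to-live cache: entries expire after ttl_seconds."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (expiry_time, value)

    def put(self, key, value):
        self._store[key] = (time.monotonic() + self.ttl, value)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None  # never cached
        expiry, value = entry
        if time.monotonic() >= expiry:
            del self._store[key]  # expired: evict and report a miss
            return None
        return value
```

Expiry bounds how stale a cached dashboard or report can be, while hits on fresh entries avoid recomputation entirely.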
[0046] Computation layer [100d]: The Computation Layer [100d] in the IPM system [100a] serves as the main hub for complex data processing tasks. In the initial stages, raw data is gathered, normalized, and enriched by the Normalization Layer [100b]. The Normalization Layer [100b] then inserts this standardized data into multiple databases including the Distributed Data Lake [100u], Caching Layer [100c], and Graph Layer [100f], and also feeds it to the Message Broker [100e]. Within the Computation Layer [100d], several powerful sub-systems such as the Analysis Engine [100h], Correlation Engine [100n], Service Quality Manager [100q], and the Streaming Engine [100l] utilize the normalized data. These systems are designed to execute various data processing tasks. The Analysis Engine [100h] performs in-depth data analytics to generate insights from the data. The Correlation Engine [100n] identifies and understands the relations and patterns within the data. The Service Quality Manager [100q] assesses and ensures the quality of the services. The Streaming Engine [100l] processes and analyses the real-time data feeds. In essence, the Computation Layer [100d] is where all major computation and data processing tasks occur. It uses the normalized data provided by the Normalization Layer [100b], processing it to generate useful insights, ensure service quality, understand data patterns, and facilitate real-time data analytics.
[0047] Message broker [100e]: The Message Broker [100e], an integral part of the
IPM system [100a], operates as a publish-subscribe messaging system. It
orchestrates and maintains the real-time flow of data from various sources and
applications. At its core, the Message Broker [100e] facilitates communication
between data producers and consumers through message-based topics. This creates
an advanced platform for contemporary distributed applications. With the ability to
accommodate a large number of permanent or ad-hoc consumers, the Message
Broker [100e] demonstrates immense flexibility in managing data streams.
Moreover, it leverages the filesystem for storage and caching, boosting its speed
and efficiency. The design of the Message Broker [100e] is centered around
reliability. It is engineered to be fault-tolerant and mitigate data loss, ensuring the
integrity and consistency of the data. With its robust design and capabilities, the
Message Broker [100e] forms a critical component in managing and delivering
real-time data in the system.
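The topic-based publish-subscribe pattern described above may be sketched as follows. This is a minimal illustration, not the actual implementation of the Message Broker [100e]; the class, topic, and consumer names are hypothetical.

```python
from collections import defaultdict

class MessageBroker:
    """Minimal topic-based publish-subscribe broker (illustrative only)."""

    def __init__(self):
        # topic name -> list of subscriber callbacks
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        """Register a consumer callback for a topic."""
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        """Deliver a message to every consumer subscribed to the topic."""
        for callback in self._subscribers[topic]:
            callback(message)

# Usage: two consumers on the same topic both receive the published event.
broker = MessageBroker()
received = []
broker.subscribe("alarms", lambda m: received.append(("analysis", m)))
broker.subscribe("alarms", lambda m: received.append(("correlation", m)))
broker.publish("alarms", {"severity": "critical"})
```

A production broker would additionally persist messages to the filesystem and tolerate consumer failures, as the paragraph above notes.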
[0048] Graph layer [100f]: The Graph Layer [100f] plays a pivotal role in the IPM
system [100a]. It can model a variety of data types, including alarm, counter,
configuration, CDR data, Infra-metric data, Probe Data, and Inventory data.
Equipped with the capability to establish relationships among diverse types of data,
the Graph Layer [100f] acts as a Relationship Modeler that offers extensive
modeling capabilities. For instance, it can model Alarm and Counter data, or Vprobe
and Alarm data, elucidating their interrelationships. Moreover, the Relationship
Modeler adapts to the processing steps provided in the model and delivers the
results to the requesting system, whether it be a Parallel Computing system,
Workflow Engine, Query Engine, Correlation Engine [100n], Performance
Management Engine, or KPI Engine [100w]. With its powerful modelling and
processing capabilities, the Graph Layer [100f] forms an essential part of the
system, enabling the processing and analysis of complex relationships between
various types of network data.
[0049] Scheduling layer [100g]: The Scheduling Layer [100g] serves as a key
element of the IPM System [100a], endowed with the ability to execute tasks at
predetermined intervals set according to user preferences. A task might be an
activity performing a service call, an API call to another microservice, or the
execution of an Elastic Search query and the storing of its output in the Distributed
Data Lake [100u] or Distributed File System or sending it to another micro-service.
A micro-service refers to a single system architecture providing multiple functions;
microservices communicate via mechanisms such as API calls and remote
procedure calls. The versatility of the Scheduling Layer [100g] extends to
facilitating graph traversals via the Mapping Layer to execute tasks. This crucial
capability enables seamless and automated operations within the system, ensuring
that various tasks and services are performed on schedule, without manual
intervention, enhancing the system's efficiency and performance. In sum, the
Scheduling Layer [100g] orchestrates the systematic and periodic execution of
tasks, making it an integral part of the efficient functioning of the entire system.
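The interval-based execution described above may be illustrated with a small sketch. A simulated clock is used here for determinism; the task names and intervals are hypothetical, and a real scheduler would use timers or a cron-like service rather than an explicit event loop.

```python
import heapq
import itertools

class Scheduler:
    """Sketch of interval-based task scheduling over a simulated clock."""

    def __init__(self):
        self._queue = []
        self._counter = itertools.count()  # tie-breaker so tasks never compare

    def every(self, interval, task):
        """Register `task` to run once per `interval` time units."""
        heapq.heappush(self._queue, (interval, next(self._counter), interval, task))

    def run_until(self, end_time):
        """Execute all scheduled runs up to `end_time`, in time order."""
        while self._queue and self._queue[0][0] <= end_time:
            run_at, _, interval, task = heapq.heappop(self._queue)
            task(run_at)
            heapq.heappush(
                self._queue, (run_at + interval, next(self._counter), interval, task)
            )

# Usage: two hypothetical periodic tasks with different intervals.
runs = []
scheduler = Scheduler()
scheduler.every(10, lambda t: runs.append(("report_export", t)))
scheduler.every(15, lambda t: runs.append(("es_query", t)))
scheduler.run_until(30)
```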
[0050] Analysis Engine [100h]: The Analysis Engine [100h] forms a crucial part
of the IPM System [100a], designed to provide an environment where users can
configure and execute workflows for a wide array of use-cases. This facility aids in
the debugging process and facilitates a better understanding of call flows. With the
Analysis Engine [100h], users can perform queries on data sourced from various
subsystems or external gateways. This capability allows for an in-depth overview
of data and aids in pinpointing issues. The system's flexibility allows users to
configure specific policies aimed at identifying anomalies within the data. When
these policies detect abnormal behaviour or policy breaches, the system sends
notifications, ensuring swift and responsive action. In essence, the Analysis Engine
[100h] provides a robust analytical environment for systematic data interrogation,
facilitating efficient problem identification and resolution, thereby contributing
significantly to the system's overall performance management.
[0051] Parallel Computing Framework [100i]: The Parallel Computing
Framework [100i] is a key aspect of the Integrated Performance Management unit
[100a], providing a user-friendly yet advanced platform for executing computing
tasks in parallel. The parallel computing framework [100i] highlights both
scalability and fault tolerance, crucial for managing vast amounts of data. Users can
input data via Distributed File System (DFS) [100j] locations or Distributed Data
Lake (DDL) indices. The framework supports the creation of task chains by
interfacing with the Service Configuration Management (SCM) Sub-System. Each
task in a workflow is executed sequentially, but multiple chains can be executed
simultaneously, optimizing processing time. To accommodate varying task
requirements, the service supports the allocation of specific host lists for different
computing tasks. The Parallel Computing Framework [100i] is an essential tool for
enhancing processing speeds and efficiently managing computing resources,
significantly improving the system's performance management capabilities.
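The chain model described above, where tasks within a chain run sequentially while independent chains run in parallel, may be sketched as follows. The chain contents are hypothetical illustrations.

```python
from concurrent.futures import ThreadPoolExecutor

def run_chain(chain):
    """Execute a chain's tasks sequentially, feeding each result forward."""
    value = None
    for task in chain:
        value = task(value)
    return value

def run_chains(chains):
    """Execute independent chains concurrently, preserving input order."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(run_chain, chains))

# Two hypothetical chains: each task receives the previous task's output.
chain_a = [lambda _: 10, lambda v: v * 2]
chain_b = [lambda _: "raw", lambda v: v.upper()]
results = run_chains([chain_a, chain_b])
```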
[0052] Distributed File System [100j]: The Distributed File System (DFS) [100j]
is a critical component of the Integrated Performance Management unit [100a],
enabling multiple clients to access and interact with data seamlessly. The
Distributed File System [100j] is designed to manage data files that are partitioned
into numerous segments known as chunks. In the context of a network with vast
data, the DFS [100j] effectively allows for the distribution of data across multiple
nodes. This architecture enhances both the scalability and redundancy of the
system, ensuring optimal performance even with large data sets. DFS [100j] also
supports diverse operations, facilitating the flexible interaction with and
manipulation of data. This accessibility is paramount for a system that requires
constant data input and output, as is the case in a robust performance management
system.
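The chunk-based partitioning and multi-node distribution described above may be illustrated with a minimal sketch. The chunk size, node names, and round-robin replica placement are assumptions for illustration, not the DFS [100j] implementation.

```python
def split_into_chunks(data: bytes, chunk_size: int):
    """Partition a file's bytes into fixed-size chunks."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

def place_chunks(chunks, nodes, replicas=2):
    """Assign each chunk to `replicas` nodes round-robin for redundancy."""
    placement = {}
    for index in range(len(chunks)):
        placement[index] = [nodes[(index + r) % len(nodes)] for r in range(replicas)]
    return placement

# A 10-byte "file" split into 4-byte chunks, replicated across three nodes.
chunks = split_into_chunks(b"abcdefghij", chunk_size=4)
placement = place_chunks(chunks, ["node1", "node2", "node3"])
```

Storing each chunk on more than one node is what provides the redundancy the paragraph refers to: losing a single node leaves every chunk reachable on a replica.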
[0053] Load Balancer [100k]: The Load Balancer (LB) [100k] is a vital
component of the Integrated Performance Management unit [100a], designed to
efficiently distribute incoming network traffic across a multitude of backend servers
or microservices. Its purpose is to ensure the even distribution of data requests,
leading to optimized server resource utilization, reduced latency, and improved
overall system performance. The LB [100k] implements various routing strategies
to manage traffic, including round-robin scheduling, header-based request dispatch,
and context-based request dispatch. Round-robin scheduling is a simple method of
rotating requests evenly across available servers. In contrast, header- and
context-based dispatching allow for more intelligent, request-specific routing.
Header-based dispatching routes requests based on data contained within the
headers of the Hypertext Transfer Protocol (HTTP) requests. Context-based
dispatching routes traffic based on the contextual information about the incoming
requests. For example, in an event-driven architecture, the LB [100k] manages
events and event acknowledgments, forwarding requests or responses to the specific
microservice that has requested the event. This system ensures efficient, reliable,
and prompt handling of requests, contributing to the robustness and resilience of
the overall performance management system.
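The round-robin and header-based dispatch strategies described above may be sketched as follows. The backend names and the `X-Request-Type` header are hypothetical; a real load balancer would also track server health and connection counts.

```python
import itertools

class LoadBalancer:
    """Sketch of round-robin and header-based request dispatch."""

    def __init__(self, servers):
        self._rotation = itertools.cycle(servers)
        self._header_rules = {}  # (header, value) -> target server

    def add_header_rule(self, header, value, server):
        """Route requests carrying header=value to a dedicated server."""
        self._header_rules[(header, value)] = server

    def route(self, request):
        """Prefer a matching header rule; otherwise fall back to round-robin."""
        headers = request.get("headers", {})
        for (header, value), server in self._header_rules.items():
            if headers.get(header) == value:
                return server
        return next(self._rotation)

# Usage: plain requests rotate across backends; report requests are pinned.
lb = LoadBalancer(["backend-1", "backend-2"])
lb.add_header_rule("X-Request-Type", "report", "report-service")
first = lb.route({"headers": {}})
second = lb.route({"headers": {}})
report = lb.route({"headers": {"X-Request-Type": "report"}})
```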
[0054] Streaming Engine [100l]: The Streaming Engine [100l], also referred to as
Stream Analytics, is a critical subsystem in the Integrated Performance
Management unit [100a]. This engine is specifically designed for high-speed data
pipelining to the User Interface (UI). Its core objective is to ensure real-time data
processing and delivery, enhancing the system's ability to respond promptly to
dynamic changes. Data is received from various connected subsystems and
processed in real-time by the Streaming Engine [100l]. After processing, the data is
streamed to the UI, fostering rapid decision-making and responses. The Streaming
Engine [100l] cooperates with the Distributed Data Lake [100u], Message Broker
[100e], and Caching Layer [100c] to provide seamless, real-time data flow. Stream
Analytics is designed to perform required computations on incoming data instantly,
ensuring that the most relevant and up-to-date information is always available at
the UI. Furthermore, this system can also retrieve data from the Distributed Data
Lake [100u], Message Broker [100e], and Caching Layer [100c] as per the
requirement and deliver it to the UI in real-time. The streaming engine [100l] is
configured to provide fast, reliable, and efficient data streaming, contributing to the
overall performance of the Integrated Performance Management unit [100a].
[0055] Reporting Engine [100m]: The Reporting Engine [100m] is a key
subsystem of the Integrated Performance Management unit [100a]. The
fundamental purpose of designing the Reporting Engine [100m] is to dynamically
create report layouts of API data, cater to individual client requirements, and deliver
these reports via the Notification Engine. The Reporting Engine [100m] serves as
the primary interface for creating custom reports based on the data visualized
through the client's dashboard. These custom dashboards, created by the client
through the User Interface (UI), provide the basis for the Reporting Engine [100m]
to process and compile data from various interfaces. The main output of the
Reporting Engine [100m] is a detailed report generated in Excel format. The
Reporting Engine's [100m] unique capability to parse data from different subsystem
interfaces, process it according to the client's specifications and requirements, and
generate a comprehensive report makes it an essential component of this
performance management system. Furthermore, the Reporting Engine [100m]
integrates seamlessly with the Notification Engine to ensure timely and efficient
delivery of reports to clients via email, ensuring the information is readily
accessible and usable, thereby improving overall client satisfaction and system
usability.
[0056] FIG. 2 illustrates an exemplary block diagram of a computing device [200]
upon which the features of the present disclosure may be implemented in
accordance with exemplary implementation of the present disclosure. In an
implementation, the computing device [200] may also implement a method to
automatically assign a restricted data to a user, utilizing the system. In another
implementation, the computing device [200] itself implements the method to
automatically assign a restricted data to a user, using one or more units configured
within the computing device [200], wherein said one or more units are capable of
implementing the features as disclosed in the present disclosure.
[0057] The computing device [200] may include a bus [202] or other
communication mechanism for communicating information, and a hardware
processor [204] coupled with the bus [202] for processing information. The hardware
processor [204] may be, for example, a general-purpose microprocessor. The
computing device [200] may also include a main memory [206], such as a
random-access memory (RAM), or other dynamic storage device, coupled to the
bus [202] for storing information and instructions to be executed by the processor
[204]. The main memory [206] also may be used for storing temporary variables or
other intermediate information during execution of the instructions to be executed
by the processor [204]. Such instructions, when stored in non-transitory storage
media accessible to the processor [204], render the computing device [200] into a
special-purpose machine that is customized to perform the operations specified in
the instructions. The computing device [200] further includes a read only memory
(ROM) [208] or other static storage device coupled to the bus [202] for storing
static information and instructions for the processor [204].
[0058] A storage device [210], such as a magnetic disk, optical disk, or solid-state
drive, is provided and coupled to the bus [202] for storing information and
instructions. The computing device [200] may be coupled via the bus [202] to a
display [212], such as a cathode ray tube (CRT), Liquid Crystal Display (LCD),
Light Emitting Diode (LED) display, Organic LED (OLED) display, etc., for
displaying information to a computer user. An input device [214], including
alphanumeric and other keys, touch screen input means, etc., may be coupled to the
bus [202] for communicating information and command selections to the processor
[204]. Another type of user input device may be a cursor controller [216], such as
a mouse, a trackball, or cursor direction keys, for communicating direction
information and command selections to the processor [204], and for controlling
cursor movement on the display [212]. The input device typically has two degrees
of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allow
the device to specify positions in a plane.
[0059] The computing device [200] may implement the techniques described
herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware
and/or program logic which in combination with the computing device [200] causes
or programs the computing device [200] to be a special-purpose machine.
According to one implementation, the techniques herein are performed by the
computing device [200] in response to the processor [204] executing one or more
sequences of one or more instructions contained in the main memory [206]. Such
instructions may be read into the main memory [206] from another storage medium,
such as the storage device [210]. Execution of the sequences of instructions
contained in the main memory [206] causes the processor [204] to perform the
process steps described herein. In alternative implementations of the present
disclosure, hard-wired circuitry may be used in place of or in combination with
software instructions.
[0060] The computing device [200] also may include a communication interface
[218] coupled to the bus [202]. The communication interface [218] provides a
two-way data communication coupling to a network link [220] that is connected to a
local network [222]. For example, the communication interface [218] may be an
integrated services digital network (ISDN) card, cable modem, satellite modem, or
a modem to provide a data communication connection to a corresponding type of
telephone line. As another example, the communication interface [218] may be a
local area network (LAN) card to provide a data communication connection to a
compatible LAN. Wireless links may also be implemented. In any such
implementation, the communication interface [218] sends and receives electrical,
electromagnetic, or optical signals that carry digital data streams representing
various types of information.
[0061] The computing device [200] can send messages and receive data, including
program code, through the network(s), the network link [220], and the
communication interface [218]. In the Internet example, a server [230] might
transmit a requested code for an application program through the Internet [228], the
ISP [226], the local network [222], the host [224], and the communication interface
[218]. The received code may be executed by the processor [204] as it is received,
and/or stored in the storage device [210], or other non-volatile storage for later
execution.
[0062] The present disclosure is implemented by a system [300] (as shown in FIG.
3). In an implementation, the system [300] may include the computing device [200]
(as shown in FIG. 2). It is further noted that the computing device [200] is able to
perform the steps of a method [400] (as shown in FIG. 4).
[0063] Referring to FIG. 3, an exemplary block diagram of a system [300] to
automatically assign a restricted data to a user is shown, in accordance with the
exemplary implementations of the present disclosure. The system [300] comprises
at least one transceiver unit [302], at least one processing unit [304], and at least
one storage unit [306]. Also, all of the components/units of the system [300] are
assumed to be connected to each other unless otherwise indicated below. As shown
in the figures, all units shown within the system should also be assumed to be
connected to each other. Also, in FIG. 3 only a few units are shown; however, the
system [300] may comprise multiple such units, or the system [300] may comprise
any such number of said units, as required to implement the features of the present
disclosure. Further, in an implementation, the system [300] may be present in a user
device to implement the features of the present disclosure. The system [300] may
be a part of the user device, or may be independent of but in communication with
the user device (which may also be referred to herein as a UE). In another
implementation, the system [300] may reside in a server or a network entity. In yet
another implementation, the system [300] may reside partly in the server/network
entity and partly in the user device.
[0064] The system [300] is configured to automatically assign a restricted data to a
user, with the help of the interconnection between the components/units of the
system [300].
[0065] The system [300] includes a transceiver unit [302]. The transceiver unit
[302] is configured to receive, at an Integrated Performance Management (IPM)
unit [100a] from a load balancer [100k] in a network, a restricted data request
associated with the user. The restricted data request is at least one of a restricted
dashboard request and a restricted report execution request.
[0066] As used herein, the restricted dashboard request refers to a request received
for the dashboard with specific/restricted data, such as geographical region-specific
data. For instance, a call performance dashboard may aggregate the call
performance data in each circle of a network. The circle refers to a specific
geographic area or region. If user A has restricted access to the city X circle, user A
may only get data for city X while the call performance dashboard is configured for
the whole country in which city X is located. Therefore, the output for the restricted
dashboard request or the restricted report execution request will depend on the type
of access the user has.
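The circle-based restriction described above amounts to filtering the dashboard's data down to the circles a user is permitted to view. A minimal sketch, with hypothetical field names and metrics:

```python
def filter_dashboard(records, permitted_circles):
    """Return only the rows for circles the user is permitted to view."""
    return [row for row in records if row["circle"] in permitted_circles]

# A country-wide call performance dashboard with two circles.
dashboard = [
    {"circle": "city_X", "call_success_rate": 98.2},
    {"circle": "city_Y", "call_success_rate": 96.7},
]

# A user restricted to the city X circle sees only that circle's data.
visible = filter_dashboard(dashboard, permitted_circles={"city_X"})
```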
[0067] As used herein, the restricted report execution request refers to the execution
or implementation of the request to generate a report based on user requirements.
For example, if the report execution request is for city X, the report may be executed
for city X only, comprising all the required information about the requested data,
such as call performance data. The report may include graphs, charts, and tables to
represent the requested data. The transceiver unit [302] may further transmit, from
the IPM unit [100a] to a trained model in the network, a hash code request
associated with the restricted data request. The hash code request refers to the
request for the assignment of a unique hash code to the restricted data request. The
unique hash code may help to identify duplicate requests sent to the transceiver unit
[302].
[0068] As used herein, a unique hash code refers to a distinct identifier generated
by the trained model. The unique hash code, such as a fixed-size string of characters,
is generated for the restricted data request. The hash code is unique for each unique
input, thereby differentiating requests based on different hash codes.
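The properties described above, a fixed-size code that is identical for identical requests and distinct for distinct ones, can be illustrated with a deterministic hash. SHA-256 is used here purely as a stand-in for the trained model's generator, and the request fields are hypothetical.

```python
import hashlib
import json

def hash_code_for_request(request: dict) -> str:
    """Derive a fixed-size, deterministic code for a restricted data request
    (SHA-256 stands in for the trained model's hash code generator)."""
    canonical = json.dumps(request, sort_keys=True)  # key order must not matter
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

req_1 = {"user": "A", "dashboard": "call_performance", "circle": "city_X"}
req_2 = {"circle": "city_X", "dashboard": "call_performance", "user": "A"}
req_3 = {"user": "A", "dashboard": "call_performance", "circle": "city_Y"}

code_1 = hash_code_for_request(req_1)
# Identical requests map to the same code, exposing duplicates...
duplicate = code_1 == hash_code_for_request(req_2)
# ...while different requests receive distinct codes.
distinct = code_1 != hash_code_for_request(req_3)
```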
[0069] The system [300] further includes a processing unit [304] connected to at
least the transceiver unit [302]. The processing unit [304] is configured to receive,
at the IPM unit [100a] from the trained model, the unique hash code based on the
hash code request. The unique hash code is generated via the trained model. The
trained model is trained via the Artificial Intelligence (AI)/Machine Learning (ML)
layer. More particularly, the trained model is trained using machine learning
techniques. The machine learning technique refers to a method that may create a
model to generate unique integer values (or unique hash code) for every restricted
data request.
[0070] The processing unit [304] is further configured to fetch, at the IPM unit
[100a] from a caching layer [506], the restricted data associated with the restricted
data request, upon reception of the unique hash code. In an implementation of the
present disclosure, once the unique hash code is assigned to the restricted data
request, the restricted data request may be executed at the IPM unit [100a]. The
IPM unit [100a] may use the unique hash code at the caching layer [506] to retrieve
the restricted data associated with the restricted data request.
[0071] The transceiver unit [302] is further configured to automatically assign,
from the IPM unit [100a] to the user via the load balancer [100k] in the network,
the restricted data associated with the restricted data request. In an implementation
of the present disclosure, once the restricted data is retrieved from the caching layer
[506], the transceiver unit [302] may automatically assign the restricted data to the
user. For instance, the unique hash code assigned to the restricted data request for
accessing call performance data of city X is XYZ. The processing unit [304] may
retrieve the restricted data associated with the call performance of city X instead of
the complete data of the country where city X is located. After retrieving the
restricted data, the processing unit [304] automatically assigns the restricted data to
the user so that the user may check only the call performance data of a particular
city (say X) via a user interface. Therefore, the user may not have access to city Y,
which exists on the same call performance dashboard. The user can access only the
restricted data for which the unique hash code is assigned. In an embodiment of the
present disclosure, the user only has read-only access to the dashboard with
restricted data (such as call performance data of city X). The read-only access
provides only the ability to view the requested restricted data; the user may not be
able to make any changes to the dashboard.
[0072] The processing unit [304] is further configured to generate, via a
computation layer [100d], a set of computed restricted data based on at least the
restricted report execution request. The set of computed restricted data is generated
in an event the restricted data associated with the restricted data request is not
detected at the caching layer [506] (also referred to as the caching engine). The set
of computed restricted data refers to the processed data based on the request of the
user. The requested data is first received at the computation layer in a raw format,
and then the computation layer performs computation or processing on the
received data to provide the user with the processed or final output data in the form
of computed restricted data. The processing unit [304] is further configured to
automatically assign, from the IPM unit [100a] to the user via the load balancer
[100k] in the network, the set of computed restricted data associated with the
restricted data request. In an implementation of the present disclosure, if the
restricted data associated with the restricted data request is not present in the
caching layer, the IPM unit [100a] may send the request to the computation layer
[100d] using the unique hash code. The computation layer [100d] may compute the
data based on the unique hash code and send the computed restricted data to the
IPM unit [100a]. Further, the system includes a storage unit [306]. The storage unit
[306] is connected to at least the transceiver unit [302] and the processing unit
[304]. The storage unit [306] is configured to store the data required for the
implementation of the features of the present invention, such as, but not limited to,
restricted data, training data, report data, and dashboard data.
[0073] Referring to FIG. 4, an exemplary method flow diagram [400] to
automatically assign a restricted data to a user, in accordance with exemplary
implementations of the present disclosure, is shown. In an implementation, the
method [400] is performed by the system [300]. Further, in an implementation, the
system [300] may be present in a server device to implement the features of the
present disclosure. To explain FIG. 4, reference to the components (e.g., the caching
layer) is also taken from subsequent FIG. 5 for a better understanding of the
invention. Also, as shown in FIG. 4, the method [400] starts at step [402].
[0074] At step [404], the method includes receiving, by a transceiver unit [302] at
an Integrated Performance Management (IPM) unit [100a] from a load balancer
[100k] in a network, a restricted data request associated with the user. The restricted
data request is at least one of a restricted dashboard request and a restricted report
execution request. For example, the restricted dashboard request refers to a
dashboard with performance parameters of the network for a specific/restricted
geographical area. For instance, a call performance dashboard may aggregate the
call performance in each circle of a network. The circle refers to a specific
geographic area. The restricted report execution request refers to the execution or
implementation of the request to generate a report based on the report execution
request. For example, if the report execution request is for city X, the report may
be executed and displayed to the user for city X only.
[0075] Next, at step [406], the method includes transmitting, by the transceiver unit
[302] from the IPM unit [100a] to a trained model in the network, a hash code
request associated with the restricted data request. The hash code request refers to
the request for the assignment of a unique hash code to the restricted data request.
The unique hash code may help to identify duplicate requests sent to the transceiver
unit [302].
[0076] Next, at step [408], the method includes receiving, by a processing unit
[304] at the IPM unit [100a] from the trained model, a unique hash code based on
the hash code request. The unique hash code associated with the restricted data
request is generated via the trained model. The trained model is trained using a
machine learning technique.
[0077] Next, at step [410], the method includes fetching, by the processing unit
[304] at the IPM unit [100a] from a caching layer [100c, 506], a restricted data
associated with the restricted data request, upon receiving the unique hash code. In
an implementation of the present disclosure, once the unique hash code is assigned
to the restricted data request, the restricted data request may be executed at the IPM
unit [100a]. The IPM unit [100a] may search the restricted data using the unique
hash code at the caching layer [100c, 506] and retrieve the restricted data for the
user based on the unique hash code. For instance, the restricted data request is to
obtain data associated with the internet speed of a 5G network in an area of city Y.
The processing unit [304] may retrieve the restricted data, i.e., the internet speed
data of the 5G network in city Y, and display it to the user either via the dashboard
or in the form of the report. The report may be downloaded by the user based on a
request of the user to download the report.
[0078] At step [412], the method includes automatically assigning, by the
processing unit [304] from the IPM unit [100a] to the user via the load balancer
[100k] in the network, the restricted data associated with the restricted data request.
In an implementation of the present disclosure, once the restricted data is retrieved
from the caching layer [100c], the transceiver unit [302] may automatically assign
the restricted data to the user.
[0079] The method further includes generating, by the processing unit [304] via a
computation layer [100d], a set of computed restricted data based on at least the
restricted report execution request. The set of computed restricted data is generated
in an event the restricted data associated with the restricted data request is not
detected at the caching layer [506]. The method further includes automatically
assigning, by the processing unit [304] from the IPM unit [100a] to the user via the
load balancer [100k] in the network, the set of computed restricted data associated
with the restricted data request. In an implementation of the present disclosure, if
the restricted data associated with the restricted data request is not present in the
caching layer, the IPM unit [100a] may send the request to the computation layer
[100d] to compute the restricted data and assign it to the user.
[0080] The method terminates at step [414].
[0081] Referring to FIG. 5, an exemplary system architecture to automatically
assign a restricted data to a user, in accordance with exemplary implementations of
the present disclosure, is shown.
[0082] The exemplary system architecture [500] includes, but is not limited to, a
User Interface (UI) [502], the load balancer [100k], the IPM unit [100a], a caching
layer [506], an Artificial Intelligence/Machine Learning (AI/ML) model [508], the
computation layer [100d], the distributed file system [100j], and the distributed data
lake [100u]. In an implementation of the present disclosure, the caching layer [506]
is similar to the caching layer [100c].
[0083] To automatically assign a restricted data to the user, the user may initiate a
restricted data request at the UI [502]. The restricted data request is at least one of
a restricted dashboard request and a restricted report execution request. The
restricted data request is sent to the load balancer [100k].
[0084] The load balancer [100k] may efficiently distribute incoming network
traffic across backend servers or microservices. The load balancer [100k] ensures
the even distribution of data requests, leading to optimized server resource
utilization, reduced latency, and improved overall system performance.
[0085] The load balancer [100k] may forward the restricted data request to the
Integrated Performance Management (IPM) unit [100a]. The IPM unit [100a] may
send a request for assigning a unique hash code to the restricted data request, for
identification of any duplicate request, to the AI/ML model [508]. The AI/ML model
[508] is trained using a machine learning technique.
[0086] After the AI/ML model [508] is applied to generate the unique hash code,
the unique hash code is assigned to the restricted data request, and the unique hash
code is shared with the IPM unit [100a]. In an implementation of the present
solution, the unique hash code is assigned to the restricted data request. The user
can access only the data for which the unique hash code is assigned. The user may
only have access to execute the request on the dashboard but may not modify the
dashboard.
[0087] Further, the IPM unit [100a] may fetch the restricted data associated with
the restricted data request from the caching layer [506], if the data requested is
present in the caching layer [506]. The restricted data may be fetched based on the
unique hash code. The caching layer [506] may send the restricted data to the IPM
unit [100a].
[0088] If the restricted data is not present in the caching layer [506], but is present
in the distributed data lake [100u], the restricted data request may be executed
through the distributed data lake [100u]. The restricted data request may be sent to
the distributed data lake [100u] and, based on the unique hash code, the restricted
data associated with the restricted data request may be received at the IPM unit
[100a].
[0089] If the restricted data is not present in either the caching layer [506] or the
distributed data lake [100u], then the IPM unit [100a] may send the restricted data
request to the computation layer [100d]. The computation layer [100d] may
compute the data from the distributed file system [100j], based on the unique hash
code, and send the computed restricted data to the IPM unit [100a].
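The three-tier lookup described in paragraphs [0087] to [0089] can be sketched as a simple fallback chain. The dictionary-backed tiers and the `compute` callable below are stand-ins for the caching layer [506], the distributed data lake [100u], and the computation layer [100d]; warming the cache after a compute is an assumption, not stated in the disclosure:

```python
def fetch_restricted_data(hash_code, cache, data_lake, compute):
    """Resolve a restricted-data request by its unique hash code, trying the
    fastest tier first: caching layer, then data lake, then computation."""
    if hash_code in cache:
        return cache[hash_code], "cache"
    if hash_code in data_lake:
        return data_lake[hash_code], "data_lake"
    result = compute(hash_code)  # computation layer derives the data
    cache[hash_code] = result    # assumption: warm the cache for next time
    return result, "computed"


# Illustrative tiers: "h2" is already in the data lake, "h3" must be computed.
cache = {}
data_lake = {"h2": "lake-data"}

def compute(code):
    return f"computed-{code}"

data1, tier1 = fetch_restricted_data("h2", cache, data_lake, compute)
data2, tier2 = fetch_restricted_data("h3", cache, data_lake, compute)
data3, tier3 = fetch_restricted_data("h3", cache, data_lake, compute)
```

The repeated request for "h3" is served from the cache on the second call, showing why the IPM unit checks the cheapest tier first.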
[0090] The IPM unit [100a] may send the restricted data to the load balancer
[100k]. The load balancer [100k] may forward the restricted data to the UI [502]
for the user. In an implementation of the present disclosure, based on the computed
restricted data received at the IPM unit [100a], the user may have access only to
the restricted data.
[0091] Referring to FIG. 6, an exemplary sequence flow diagram to automatically
assign a restricted data to a user, in accordance with exemplary implementations of
the present disclosure, is shown.
[0092] In step 1, a restricted data request initiated by the user may be sent to the
load balancer [100k] via the User Interface (UI) [502]. The restricted data request
is at least one of a restricted dashboard request and a restricted report execution
request. The restricted dashboard request refers to a dashboard for a
specific/restricted geographical area. The restricted report execution request refers
to the execution or implementation of the request to generate a report based on the
report execution request. The load balancer [100k] may efficiently distribute
incoming network traffic across backend servers or microservices. The load
balancer [100k] ensures the even distribution of data requests, leading to optimized
server resource utilization, reduced latency, and improved overall system
performance. In an implementation of the present solution, the UI [502] may
contain a dashboard comprising the set of information. The set of information is
associated with a unique hash code.
[0093] In step 2, the load balancer [100k] may forward the restricted data request
to the Integrated Performance Management (IPM) unit [100a].
[0094] Next, in step 3, the IPM unit [100a] may send, to the AI/ML model [508],
a request to assign a unique hash code to the restricted data request so that
duplicate requests can be identified. The request for assigning the unique hash
code refers to the request for the assignment of a unique integer value to the
restricted data request.
[0095] Further, in step 4, after the application of the AI/ML at the AI/ML model
[508], the unique hash code is assigned and received at the IPM unit [100a]. In an
implementation of the present solution, the unique hash code is assigned to the
restricted data request. The user can access only the data for which the unique hash
code is assigned. The user may only have access to execute the request on the
dashboard but may not modify the dashboard.
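The execute-only access model described above can be sketched as a small permission table keyed by the assigned hash code. The class and method names here are illustrative assumptions, not part of the disclosure:

```python
class RestrictedAccess:
    """Grants a user execute-only access to data tied to an assigned hash code:
    the user may run the request on the dashboard but never modify it."""

    def __init__(self):
        self._assignments = {}  # user -> set of assigned hash codes

    def assign(self, user, hash_code):
        """Record that this hash code's data is assigned to the user."""
        self._assignments.setdefault(user, set()).add(hash_code)

    def can_execute(self, user, hash_code):
        """A user may execute only requests whose hash code was assigned."""
        return hash_code in self._assignments.get(user, set())

    def can_modify(self, user, hash_code):
        """Modification of the dashboard is never permitted in this model."""
        return False


acl = RestrictedAccess()
acl.assign("u1", "h1")  # user u1 may execute the request coded "h1"
```

Checking the hash code rather than the raw request means the same table also serves the duplicate-request path: a second identical request carries the same code and hits the same permission entry.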
[0096] Next, in step 5, the method includes the IPM unit [100a] fetching the
restricted data from the caching layer [506], if the requested data is present in the
caching layer [506]. The restricted data may be fetched based on the unique hash
code. In an implementation of the present disclosure, once the unique hash code is
assigned to the restricted data request, the restricted data request may be executed
at the IPM unit [100a]. The IPM unit [100a] may search for the restricted data via
the unique hash code at the caching layer [506] and retrieve the restricted data
associated with that hash code. For instance, the restricted data request is to obtain
data for call performance in the circle of city X from the call performance
dashboard, where the call performance dashboard is for the country.
[0097] Next, in step 6, the caching layer [506] may send the restricted data to the
IPM unit [100a]. For instance, the IPM unit [100a] may receive the restricted data
from the caching layer [506], i.e., the call performance data in the circle of city X
from the dashboard.
[0098] Further, in step 7, if the restricted data is not present in either the caching
layer [506] or the distributed data lake [100u], then the IPM unit [100a] may send
the request to the computation layer [100d].
[0099] In step 8, the computation layer [100d] may compute the restricted data
based on the unique hash code and send the computed restricted data to the IPM
unit [100a]. The process of computation of the restricted data includes receiving the
restricted data at the computation layer [100d] for analysis of the restricted data and
generating the computed restricted data.
[0100] In step 9, if the requested data is present in the distributed data lake [100u],
the restricted data request may be executed through the distributed data lake [100u]
by sending the restricted data request to the distributed data lake [100u] and
receiving the restricted data at the IPM unit [100a].
[0101] Next, in step 9, the IPM unit [100a] may send the restricted data to the
load balancer [100k].
[0102] In step 10, the load balancer [100k] may forward the restricted data to the
UI [502] for the user. In an implementation of the present disclosure, based on the
computed data received at the IPM unit [100a], the user may have access to the
restricted data, in this instance, the call performance data in a single dashboard,
along with other performance management and monitoring data. The user may be
able to analyse the restricted data and make accurate decisions for improving the
call performance.
[0103] The present disclosure further discloses a non-transitory computer readable
storage medium storing instructions to automatically assign a restricted data to a
user, the instructions including executable code which, when executed by one or
more units of a system, causes a transceiver unit [302] of the system to receive, at
an Integrated Performance Management (IPM) unit [100a] from a load balancer
[100k] in a network, a restricted data request associated with the user. The
restricted data request is at least one of a restricted dashboard request and a
restricted report execution request. The instructions, when executed by the system,
further cause the transceiver unit [302] to transmit, from the IPM unit [100a] to a
trained model in the network, a hash code request associated with the restricted
data request. The instructions, when executed by the system, further cause a
processing unit [304] to receive, at the IPM unit [100a] from the trained model, a
unique hash code based on the hash code request. The instructions, when executed
by the system, further cause the processing unit [304] to fetch, at the IPM unit
[100a] from a caching layer [506], the restricted data associated with the restricted
data request, upon reception of the unique hash code. The instructions, when
executed by the system, further cause the processing unit [304] to automatically
assign, from the IPM unit [100a] to the user via the load balancer [100k] in the
network, the restricted data associated with the restricted data request.
[0104] As is evident from the above, the present disclosure provides a technically
advanced solution to automatically assign a restricted data to a user. The present
solution allows the sharing of restricted access to information on the dashboard
with a group of users via assigning counters. The present solution further allows a
user to create a KPI (Key Performance Indicator) and track the performance of a
network via the counters. Furthermore, the present solution allows the user to
debug and visualize the KPI data using the counters.
[0105] Further, in accordance with the present disclosure, it is to be acknowledged
that the functionality described for the various components/units can be
implemented interchangeably. While specific embodiments may disclose a
particular functionality of these units for clarity, it is recognized that various
configurations and combinations thereof are within the scope of the disclosure. The
functionality of specific units as disclosed in the disclosure should not be construed
as limiting the scope of the present disclosure. Consequently, alternative
arrangements and substitutions of units, provided they achieve the intended
functionality described herein, are considered to be encompassed within the scope
of the present disclosure.
[0106] While considerable emphasis has been placed herein on the disclosed
implementations, it will be appreciated that many implementations can be made
and that many changes can be made to the implementations without departing from
the principles of the present disclosure. These and other changes in the
implementations of the present disclosure will be apparent to those skilled in the
art, whereby it is to be understood that the foregoing descriptive matter is
illustrative and non-limiting.

We Claim:

1. A method to automatically assign a restricted data to a user, the method
comprising:
receiving, by a transceiver unit [302] at an Integrated Performance
Management (IPM) unit [100a] from a load balancer [100k] in a network,
a restricted data request associated with the user, wherein the restricted data
request is at least one of a restricted dashboard request and a restricted
report execution request;
transmitting, by the transceiver unit [302] from the IPM unit [100a]
to a trained model in the network, a hash code request associated with the
restricted data request;
receiving, by a processing unit [304] at the IPM unit [100a] from the
trained model, a unique hash code based on the hash code request;
fetching, by the processing unit [304] at the IPM unit [100a] from a
caching layer [506], a restricted data associated with the restricted data
request, upon receiving the unique hash code; and
automatically assigning, by the processing unit [304] from the IPM
unit [100a] to the user via the load balancer in the network, the restricted
data associated with the restricted data request.

2. The method as claimed in claim 1 further comprises generating, by the
processing unit [304] via a computation layer [100d], a set of computed
restricted data based on at least the restricted report execution request, wherein
the set of computed restricted data is generated in an event the restricted data
associated with the restricted data request is not detected at the caching layer
[506].

3. The method as claimed in claim 2 further comprises automatically assigning,
by the processing unit [304] from the IPM unit [100a] to the user via the load

balancer [100k] in the network, the set of computed restricted data associated
with the restricted data request.

4. The method as claimed in claim 1, wherein the unique hash code associated
with the restricted data request is generated via the trained model, wherein the
model is trained using a machine learning technique.
5. A system to automatically assign a restricted data to a user, the system
comprises:
a transceiver unit [302], wherein the transceiver unit [302] is configured to:
receive, at an Integrated Performance Management (IPM) unit
[100a] from a load balancer [100k] in a network, a restricted data request
associated with the user, wherein the restricted data request is at least one
of a restricted dashboard request and a restricted report execution request;
transmit, from the IPM unit [100a] to a trained model in the network,
a hash code request associated with the restricted data request;
a processing unit [304] connected to at least the transceiver unit, wherein
the processing unit is configured to:
receive, at the IPM unit [100a] from the trained model, a unique hash
code based on the hash code request;
fetch, at the IPM unit [100a] from a caching layer [506], the
restricted data associated with the restricted data request, upon reception of
the unique hash code; and
wherein the transceiver unit [302] is further configured to:
automatically assign, from the IPM unit [100a] to the user via the
load balancer [100k] in the network, the restricted data associated with the
restricted data request.

6. The system as claimed in claim 5, wherein the processing unit [304] is further
configured to generate, via a computation layer [100d], a set of computed
restricted data based on at least the restricted report execution request, wherein
the set of computed restricted data is generated in an event the restricted data
associated with the restricted data request is not detected at the caching layer
[506].

7. The system as claimed in claim 6, wherein the processing unit [304] is further
configured to automatically assign, from the IPM unit [100a] to the user via the
load balancer [100k] in the network, the set of computed restricted data
associated with the restricted data request.

8. The system as claimed in claim 5, wherein the unique hash code associated with
the restricted data request is generated via the trained model, wherein the trained
model is trained using a machine learning technique.

Dated this the 22nd Day of August, 2023

Documents

Application Documents

# Name Date
1 202321056267-STATEMENT OF UNDERTAKING (FORM 3) [22-08-2023(online)].pdf 2023-08-22
2 202321056267-PROVISIONAL SPECIFICATION [22-08-2023(online)].pdf 2023-08-22
3 202321056267-FORM 1 [22-08-2023(online)].pdf 2023-08-22
4 202321056267-FIGURE OF ABSTRACT [22-08-2023(online)].pdf 2023-08-22
5 202321056267-DRAWINGS [22-08-2023(online)].pdf 2023-08-22
6 202321056267-FORM-26 [05-09-2023(online)].pdf 2023-09-05
7 202321056267-Proof of Right [10-01-2024(online)].pdf 2024-01-10
8 202321056267-ORIGINAL UR 6(1A) FORM 1 & 26-300124.pdf 2024-02-03
9 202321056267-FORM-5 [20-08-2024(online)].pdf 2024-08-20
10 202321056267-ENDORSEMENT BY INVENTORS [20-08-2024(online)].pdf 2024-08-20
11 202321056267-DRAWING [20-08-2024(online)].pdf 2024-08-20
12 202321056267-CORRESPONDENCE-OTHERS [20-08-2024(online)].pdf 2024-08-20
13 202321056267-COMPLETE SPECIFICATION [20-08-2024(online)].pdf 2024-08-20
14 202321056267-FORM 3 [21-08-2024(online)].pdf 2024-08-21
15 Abstract 1.jpg 2024-08-29
16 202321056267-Request Letter-Correspondence [30-08-2024(online)].pdf 2024-08-30
17 202321056267-Power of Attorney [30-08-2024(online)].pdf 2024-08-30
18 202321056267-Form 1 (Submitted on date of filing) [30-08-2024(online)].pdf 2024-08-30
19 202321056267-Covering Letter [30-08-2024(online)].pdf 2024-08-30
20 202321056267-CERTIFIED COPIES TRANSMISSION TO IB [30-08-2024(online)].pdf 2024-08-30