
System And Method For Load Balancing Between Instances In A Network

Abstract: A method and system for performing intelligent load distribution is disclosed. The method includes ingesting data (502) by a normalizer instance (210), receiving health information (504) of normalizer instances by an AI/ML model (212), and determining an optimal normalizer instance (506) using the AI/ML model (212). The traffic is delegated (508) to the optimal normalizer instance (210) by a traffic delegation module (214); a processing module (216) processes (510) the ingested data and stores (512) the processed data in a database (222) or provides it to an external system (306) for further analysis. The AI/ML model (212) is trained on performance parameters of normalizer instances (210) under diverse conditions. The health information includes metrics related to CPU usage, memory usage, and current load. The traffic delegation adjusts dynamically based on real-time load changes. [FIG. 3]


Patent Information

Application #: 202321047048
Filing Date: 12 July 2023
Publication Number: 42/2024
Publication Type: INA
Invention Field: COMPUTER SCIENCE
Grant Date: 2025-06-24

Applicants

JIO PLATFORMS LIMITED
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.

Inventors

1. BHATNAGAR, Aayush
Tower-7, 15B, Beverly Park, Sector-14 Koper Khairane, Navi Mumbai - 400701, Maharashtra, India.
2. MURARKA, Ankit
W-16, F-1603, Lodha Amara, Kolshet Road, Thane West - 400607, Maharashtra, India.
3. KOLARIYA, Jugal Kishore
C 302, Mediterranea CHS Ltd, Casa Rio, Palava, Dombivli - 421204, Maharashtra, India.
4. KUMAR, Gaurav
1617, Gali No. 1A, Lajjapuri, Ramleela Ground, Hapur - 245101, Uttar Pradesh, India.
5. SAHU, Kishan
Ajay Villa, Gali No. 2, Ambedkar Colony, Bikaner - 334003, Rajasthan, India.
6. VERMA, Rahul
A-154, Shradha Puri Phase-2, Kanker Khera, Meerut - 250001, Uttar Pradesh, India.
7. MEENA, Sunil
D-29/1, Chitresh Nagar, Borkhera, District - Kota - 324001, Rajasthan, India.
8. GURBANI, Gourav
I-1601, Casa Adriana, Downtown, Palava Phase 2, Dombivli - 421204, Maharashtra, India.
9. CHAUDHARY, Sanjana
Jawaharlal Road, Muzaffarpur - 842001, Bihar, India.
10. GANVEER, Chandra Kumar
Village - Gotulmunda, Post - Narratola, Dist. - Balod - 491228, Chhattisgarh, India.
11. DE, Supriya
G2202, Sheth Avalon, Near Jupiter Hospital Majiwada, Thane West - 400601, Maharashtra, India.
12. KUMAR, Debashish
Bhairaav Goldcrest Residency, E-1304, Sector 11, Ghansoli, Navi Mumbai - 400701, Maharashtra, India.
13. TILALA, Mehul
64/11, Manekshaw Marg, Manekshaw Enclave, Delhi Cantonment, New Delhi - 110010, India.
14. KALIKIVAYI, Srinath
3-61, Kummari Bazar, Madduluru Village, S N Padu Mandal, Prakasam District, Andhra Pradesh - 523225, India.
15. PANDEY, Vitap
D 886, World Bank Barra, Kanpur - 208027, Uttar Pradesh, India.

Specification

FORM 2
THE PATENTS ACT, 1970 (39 of 1970) THE PATENTS RULES, 2003
COMPLETE SPECIFICATION
APPLICANT
JIO PLATFORMS LIMITED, Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India; Nationality: India
The following specification particularly describes
the invention and the manner in which
it is to be performed

RESERVATION OF RIGHTS
[0001] A portion of the disclosure of this patent document contains material which is subject to intellectual property rights such as, but not limited to, copyright, design, trademark, Integrated Circuit (IC) layout design, and/or trade dress protection, belonging to Jio Platforms Limited (JPL) or its affiliates (hereinafter referred to as the owner). The owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all rights whatsoever. All rights to such intellectual property are fully reserved by the owner.
FIELD OF INVENTION
[0002] The embodiments of the present disclosure generally relate to load
balancing. More particularly, the present disclosure relates to a system and a method for load balancing between instances in a network for adapted troubleshooting operation management.
BACKGROUND OF THE INVENTION
[0003] The following description of related art is intended to provide
background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section is to be used only to enhance the understanding of the reader with respect to the present disclosure, and not as admissions of prior art.
[0004] In the field of telecommunications, load balancing is essential for
providing efficient operation of the network. Moreover, load balancing reduces latency without compromising data integrity within the network. Traditionally, if the load on a database or application exceeds a threshold value, manual intervention is required to manage the operations, resulting in inefficient load management and potential application malfunctions. There is, therefore, a need in the art to provide a system and a method that can mitigate the problems associated with traditional load balancing techniques.

[0005] To overcome the aforementioned challenges, there is a need for a
robust and adaptive load balancing solution based on advanced AI/ML techniques for optimal data distribution and resource management in telecommunications networks. There is a need for a system and method that can ensure dynamic adaptability to changing workloads, improved scalability, enhanced performance, and reduced maintenance requirements, leading to better resource utilization and optimized system performance.
OBJECTS OF THE INVENTION
[0006] Some of the objects of the present disclosure, which at least one
embodiment herein satisfies are as listed herein below.
[0007] An object of the present disclosure is to provide a system and a
method for load balancing between instances in a network.
[0008] Another object of the present disclosure is to reduce time and efforts
in application management/operations.
[0009] Another object of the present disclosure is to optimize the resource
allocation while normalizing the data.
[0010] Another object of the present disclosure is to enable better scalability
and adaptability in handling varying workloads.
[0011] Another object of the present disclosure is to provide a system and a
method that is economical and easy to implement.
SUMMARY
[0012] In an exemplary embodiment, a method for performing intelligent
load distribution is described. The method includes the steps of ingesting, by a normalizer instance, data for processing, and receiving, by an Artificial Intelligence/Machine Learning (AI/ML) model, health information and historical performance data of a plurality of normalizer instances. The method includes determining, by the AI/ML model, at least one optimal instance of the normalizer to process the ingested data based on the received health information and historical performance data, and delegating, by a traffic delegation module, the ingested data to the determined at least one optimal normalizer instance. The optimal normalizer instance then processes the ingested data corresponding to the delegated traffic using a processing module. The method further includes storing, by the processing module, the processed data in a database or providing the processed data to an external system for further analysis.
[0013] In some embodiments, the AI/ML model is trained on performance
parameters of the normalizer instances under diverse conditions of success and
failure.
[0014] In some embodiments, the health information includes metrics
related to Central Processing Unit (CPU) usage, memory usage, and current load of
each normalizer instance.
[0015] In some embodiments, the traffic delegation is adjusted dynamically
based on real-time load changes of the optimal normalizer instances.
[0016] In some embodiments, the method further includes updating the
AI/ML model with new performance data to improve accuracy and efficiency over time. The new performance data includes new loads, changes in resources, workloads, new Service Level Objectives, new accuracy requirements, etc. The new performance data may be provided by a user, or it may be generated based on historical performance data.
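By way of illustration only, the following Python sketch shows one way such an update could behave. The disclosure does not specify the update algorithm, so an exponentially weighted moving average over a hypothetical per-instance latency estimate stands in for retraining the AI/ML model; all names here are assumptions.

```python
# Illustrative sketch only: the disclosure does not specify the update
# algorithm. An exponentially weighted moving average stands in for
# retraining the AI/ML model with new performance data.

class PerformanceEstimator:
    """Keeps a per-instance estimate of processing latency (seconds)."""

    def __init__(self, alpha: float = 0.2):
        self.alpha = alpha   # weight given to each new observation
        self.latency = {}    # instance id -> estimated latency

    def update(self, instance_id: str, observed_latency: float) -> None:
        """Fold a new performance observation into the running estimate."""
        old = self.latency.get(instance_id, observed_latency)
        self.latency[instance_id] = (
            self.alpha * observed_latency + (1 - self.alpha) * old
        )

estimator = PerformanceEstimator()
estimator.update("normalizer-1", 0.120)  # new performance data point
estimator.update("normalizer-1", 0.450)  # a load spike is gradually absorbed
print(estimator.latency["normalizer-1"])
```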
[0017] In another exemplary embodiment, a system for performing
intelligent load distribution is described. The system includes a normalizer instance configured to ingest data and an Artificial Intelligence/Machine Learning (AI/ML) model embedded in the normalizer. The AI/ML model is configured to receive and analyze health information and historical performance data of a plurality of instances of the normalizer. The AI/ML model is configured to determine an optimal instance of the normalizer to process the ingested data based on the received health information and the historical performance data. A traffic delegation module is configured to delegate traffic to the optimal normalizer instance. The optimal normalizer instance processes the ingested data corresponding to the delegated traffic using a processing module. The system further includes a storage module configured to store the processed data in a database or provide the processed data to an external system for further analysis.
[0018] In some embodiments, the AI/ML model is trained on performance
parameters of the plurality of instances of the normalizer under diverse conditions of success and failure.
[0019] In some embodiments, the health information includes metrics
related to Central Processing Unit (CPU) usage, memory usage, and current load of each normalizer instance.
[0020] In some embodiments, the traffic delegation module is configured to
adjust the delegation dynamically based on real-time load changes of the optimal normalizer instances.
[0021] In some embodiments, an update module is configured to update the
AI/ML model with new performance data to enhance accuracy and efficiency over time.
[0022] A user equipment (UE) is communicatively coupled with a network,
the coupling comprising the steps of: receiving, by the network, a connection request from the UE, and sending, by the network, an acknowledgment of the connection request to the UE. The network transmits a plurality of signals in response to the connection request and is configured to perform the method for performing intelligent load distribution.
[0023] A computer program product comprising a non-transitory computer-
readable medium comprising instructions that, when executed by one or more processors, cause the one or more processors to perform the steps for intelligent load distribution. A normalizer instance of a plurality of normalizer instances ingests data, wherein the data is ingested for processing. An Artificial Intelligence/Machine Learning (AI/ML) model receives health information and historical performance data of the plurality of normalizer instances. The AI/ML model determines at least one optimal instance of the normalizer to process the ingested data based on the received health information and the historical performance data. A traffic delegation module delegates the ingested data to the determined at least one optimal instance of the normalizer. A processing module associated with the at least one optimal instance of the normalizer processes the delegated ingested data. The processing module stores the processed data in a database or provides the processed data to an external system for further analysis.
BRIEF DESCRIPTION OF DRAWINGS
[0024] The accompanying drawings, which are incorporated herein, and
constitute a part of this disclosure, illustrate exemplary embodiments of the
disclosed methods and systems, in which like reference numerals refer to the same
parts throughout the different drawings. Components in the drawings are not
necessarily to scale, emphasis instead being placed upon clearly illustrating the
principles of the present disclosure. Some drawings may indicate the components
using block diagrams and may not represent the internal circuitry of each
component. It will be appreciated by those skilled in the art that disclosure of such
drawings includes the disclosure of electrical components, electronic components,
or circuitry commonly used to implement such components.
[0025] FIG. 1 illustrates an example network architecture 100 for
implementing a proposed system 108, in accordance with an embodiment of the
present disclosure.
[0026] FIG. 2 illustrates an example block diagram 200 of a proposed
system 108, in accordance with an embodiment of the present disclosure.
[0027] FIG. 3 illustrates a flow diagram 300 representing a method for load
balancing between instances in a network, in accordance with some embodiments
of the present disclosure.
[0028] FIG. 4 illustrates an exemplary representation of flow diagram 400
representing a method for load balancing between instances in a network, in
accordance with some embodiments of the present disclosure.
[0029] FIG. 5 illustrates a flow diagram 500 representing a method for
performing intelligent load distribution, in accordance with some embodiments of
the present disclosure.
[0030] FIG. 6 illustrates an example computer system 600 in which or with

which the embodiments of the present disclosure may be implemented.
[0031] The foregoing shall be more apparent from the following more
detailed description of the disclosure.
LIST OF REFERENCE NUMERALS
100 – Network Architecture
102-1, 102-2…102-N – Users
104-1, 104-2…104-N – User Equipments (UEs)
106 – Network
108 – System
110 – Entity
112 – Centralized Server
202 – One or more processor(s)
204 – Memory
206 – Interface(s)
208 – Processing Engine
210 – Normalizer
212 – AI/ML Module
214 – Traffic Delegation Module
216 – Processing Module
218 – Storage Module
220 – Other Modules
222 – Database
302 – AI/ML Model
304-1, 304-2, 304-3 – Normalizer Instances
306 – External System
308 – Data Lake
400 – Flow Diagram for Load Balancing
402 – File System
402-1, 402-2, 402-3 – Components of File System
404 – Normalizer Instances

600 – Computer System
610 – External Storage Device
620 – Bus
630 – Main Memory
640 – Read Only Memory
650 – Mass Storage Device
660 – Communication Port(s)
670 – Processor
DETAILED DESCRIPTION
[0032] In the following description, for explanation, various specific details
are outlined in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter can each be used independently of one another or with any combination of other features. An individual feature may not address all of the problems discussed above or might address only some of the problems discussed above. Some of the problems discussed above might not be fully addressed by any of the features described herein.
[0033] The ensuing description provides exemplary embodiments only and
is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the disclosure as set forth.
[0034] Specific details are given in the following description to provide a
thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, networks, processes, and other

components may be shown as components in block diagram form in order not to
obscure the embodiments in unnecessary detail. In other instances, well-known
circuits, processes, algorithms, structures, and techniques may be shown without
unnecessary detail to avoid obscuring the embodiments.
[0035] Also, it is noted that individual embodiments may be described as a
process that is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged.
A process is terminated when its operations are completed but could have additional
steps not included in a figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination can correspond to a return of the function to the calling function or the main function.
[0036] The word “exemplary” and/or “demonstrative” is used herein to
mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or
designs, nor is it meant to preclude equivalent exemplary structures and techniques
known to those of ordinary skill in the art. Furthermore, to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive like the term “comprising” as an open transition word without precluding any additional or other
elements.
[0037] Reference throughout this specification to “one embodiment” or “an
embodiment” or “an instance” or “one instance” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, the appearances of the
phrases “in one embodiment” or “in an embodiment” in various places throughout
this specification are not necessarily all referring to the same embodiment.

Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
[0038] The terminology used herein is to describe particular embodiments
only and is not intended to be limiting the disclosure. As used herein, the singular
forms “a”, “an”, and “the” are intended to include the plural forms as well, unless
the context indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other
features, integers, steps, operations, elements, components, and/or groups thereof.
As used herein, the term “and/or” includes any combinations of one or more of the associated listed items.
[0039] Embodiments herein relate to a system and method for load balancing
in the process of normalizing the ingested data. The data source transmits the data
to the ingestion system, which ingests the data and sends the data to the proposed
system that normalizes the data. During the normalization of the data, the system may generate a plurality of normalizing instances to normalize the data. The instances balance load among each other without traditional load balancing techniques. In particular, the system uses AI/ML algorithms to sense the load of
other instances and intelligently delegate traffic to less-loaded instances. This
method deploys machine learning models to dynamically distribute the data workload and to optimize resource utilization. Thus, the dynamic load balancing is achieved without relying on traditional load balancing techniques, thereby enabling better scalability and adaptability in handling varying workloads.
[0040] The various embodiments throughout the disclosure will be
explained in more detail with reference to FIGs. 1-6.
[0041] FIG. 1 illustrates an exemplary network architecture in which or
with which a system (108) for load balancing in the process of normalizing the ingested data is implemented, in accordance with embodiments of the present
disclosure.
[0042] Referring to FIG. 1, the network architecture (100) includes one or

more computing devices or user equipments (104-1, 104-2…104-N) associated
with one or more users (102-1, 102-2…102-N) in an environment. A person of
ordinary skill in the art will understand that one or more users (102-1, 102-2…102-
N) may be individually referred to as the user (102) and collectively referred to as
the users (102). Similarly, a person of ordinary skill in the art will understand that
one or more user equipments (104-1, 104-2…104-N) may be individually referred to as the user equipment (104) and collectively referred to as the user equipment (104). A person of ordinary skill in the art will appreciate that the terms “computing device(s)” and “user equipment” may be used interchangeably throughout the
disclosure. Although three user equipments (104) are depicted in FIG. 1,
any number of the user equipments (104) may be included without departing from the scope of the ongoing description.
[0043] In an embodiment, the user equipment (104) includes smart devices
operating in a smart environment, for example, an Internet of Things (IoT) system.
In such an embodiment, the user equipment (104) may include, but is not limited
to, smart phones, smart watches, smart sensors (e.g., mechanical, thermal, electrical, magnetic, etc.), networked appliances, networked peripheral devices, networked lighting system, communication devices, networked vehicle accessories, networked vehicular devices, smart accessories, tablets, smart television (TV),
computers, smart security system, smart home system, other devices for monitoring
or interacting with or for the users (102) and/or entities, or any combination thereof. A person of ordinary skill in the art will appreciate that the user equipment (104) may include, but is not limited to, intelligent, multi-sensing, network-connected devices, that can integrate seamlessly with each other and/or with a central server
or a cloud-computing system or any other device that is network-connected.
[0044] In an embodiment, the user equipment (104) includes, but is not
limited to, a handheld wireless communication device (e.g., a mobile phone, a smart phone, a phablet device, and so on), a wearable computer device (e.g., a head-mounted display computer device, a head-mounted camera device, a wristwatch
computer device, and so on), a Global Positioning System (GPS) device, a laptop
computer, a tablet computer, or another type of portable computer, a media playing

device, a portable gaming system, and/or any other type of computer device with
wireless communication capabilities, and the like. In an embodiment, the user
equipment (104) includes, but is not limited to, any electrical, electronic, or electro-mechanical equipment, or a combination of one or more of the above devices
such as virtual reality (VR) devices, augmented reality (AR) devices, laptop, a
general-purpose computer, desktop, personal digital assistant, tablet computer, mainframe computer, or any other computing device, wherein the user equipment (104) may include one or more in-built or externally coupled accessories including, but not limited to, a visual aid device such as a camera, an audio aid, a microphone,
a keyboard, and input devices for receiving input from the user (102), or the entity
(110) such as touch pad, touch enabled screen, electronic pen, and the like. A person
of ordinary skill in the art will appreciate that the user equipment (104) may not be
restricted to the mentioned devices and various other devices may be used.
[0045] Referring to FIG. 1, the user equipment (104) communicates with a
system (108), for example, a load balancing system normalizing the ingested data,
through a network (106). In an embodiment, the network (106) includes at least one of a Fifth Generation (5G) network, 6G network, or the like. The network (106) enables the user equipment (104) to communicate with other devices in the network architecture (100) and/or with the system (108). The network (106) includes a
wireless card or some other transceiver connection to facilitate this communication.
In another embodiment, the network (106) is implemented as, or includes, any of a variety of different communication technologies such as a wide area network (WAN), a local area network (LAN), a wireless network, a mobile network, a Virtual Private Network (VPN), the Internet, the Public Switched Telephone
Network (PSTN), or the like.
[0046] In another exemplary embodiment, the centralized server (112)
includes or comprises, by way of example but not limitation, one or more of: a stand-alone server, a server blade, a server rack, a bank of servers, a server farm, hardware supporting a part of a cloud service or system, a home server, hardware running a
virtualized server, one or more processors executing code to function as a server,
one or more machines performing server-side functionality as described herein, at

least a portion of any of the above, or some combination thereof.
[0047] In an embodiment, the system (108) may generate a plurality of
instances to normalize the data received from the ingestion system. Each instance may have the information or characteristics of the other instances. An instance may continuously sense the load on the other instances. When the load on a particular instance is greater than a particular threshold, i.e., the instance is overloaded, the system (108) may then perform load balancing, which includes transferring the load from one instance to another so as to optimize the resources and enable effective utilization in the network.
[0048] The system uses AI/ML logic to sense the load of other instances and
intelligently delegate traffic to less-loaded instances. When data is ingested in the normalizer, the embedded AI/ML model, which continuously receives a feed of the health of all normalizer instances, decides the best normalizer instance to serve the data processing in the most efficient way. This is achieved due to the large amount of data on which the model is trained, which includes all the performance parameters of normalizer instances in diverse situations of success and failure. The instance assigned as per the model output then processes the incoming data to store it in the database or provide it to any external system for further analysis.
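As a rough illustration of this decision, the Python sketch below uses a fixed weighted health score in place of the trained AI/ML model; the metric names, weights, and instance identifiers are assumptions, not taken from the disclosure.

```python
# Hypothetical sketch of the delegation decision described above: a weighted
# health score stands in for the trained AI/ML model's output. Field names
# and weights are assumed for illustration.

def health_score(cpu: float, memory: float, load: float) -> float:
    """Lower is better: combine CPU, memory, and current load (all 0.0-1.0)."""
    return 0.4 * cpu + 0.3 * memory + 0.3 * load

def pick_instance(health: dict[str, dict[str, float]]) -> str:
    """Return the id of the instance the model deems least loaded."""
    return min(health, key=lambda i: health_score(**health[i]))

feed = {  # continuous health feed of all normalizer instances
    "normalizer-1": {"cpu": 0.92, "memory": 0.70, "load": 0.88},
    "normalizer-2": {"cpu": 0.35, "memory": 0.40, "load": 0.20},
    "normalizer-3": {"cpu": 0.55, "memory": 0.60, "load": 0.45},
}
print(pick_instance(feed))  # -> "normalizer-2"
```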
[0049] FIG. 2 illustrates an example block diagram (200) of a proposed
system (108), in accordance with an embodiment of the present disclosure.
[0050] Referring to FIG. 2, in an embodiment, the system (108) may include
one or more processor(s) (202). The one or more processor(s) (202) may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, logic circuitries, and/or any
devices that process data based on operational instructions. Among other
capabilities, the one or more processor(s) (202) may be configured to fetch and execute computer-readable instructions stored in a memory (204) of the system (108). The memory (204) may be configured to store one or more computer-readable instructions or routines in a non-transitory computer readable storage
medium, which may be fetched and executed to create or share data packets over a
network service. The memory (204) may comprise any non-transitory storage

device including, for example, volatile memory such as random-access memory (RAM), or non-volatile memory such as erasable programmable read only memory (EPROM), flash memory, and the like.
[0051] In an embodiment, the system (108) may include an interface(s)
(206). The interface(s) (206) may comprise a variety of interfaces, for example,
interfaces for data input and output devices (I/O), storage devices, and the like. The interface(s) (206) may facilitate communication through the system (108). The interface(s) (206) may also provide a communication pathway for one or more components of the system (108). Examples of such components include, but are not
limited to, processing engine(s) (208) and a database (222). Further, the processing
engine(s) (208) may include one or more engine(s) such as, but not limited to, an
input/output engine, an identification engine and an optimization engine.
[0052] In an embodiment, the processing engine(s) (208) may be
implemented as a combination of hardware and programming, for example,
programmable instructions, to implement one or more functionalities of the
processing engine(s) (208). In examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processing engine(s) (208) may be processor-executable instructions stored on a non-transitory machine-readable storage
medium and the hardware for the processing engine(s) (208) may comprise a
processing resource, for example, one or more processors, to execute such instructions. In the present examples, the machine-readable storage medium may store instructions that, when executed by the processing resource, implement the processing engine(s) (208). In such examples, the system may comprise the
machine-readable storage medium storing the instructions and the processing
resource to execute the instructions, or the machine-readable storage medium may be separate but accessible to the system and the processing resource. In other examples, the processing engine(s) (208) may be implemented by electronic circuitry.
[0053] In an embodiment, the local database (222) may comprise data that
may be either stored or generated as a result of functionalities implemented by any

of the components of the processor (202) or the processing engine (208). In an
embodiment, the local database (222) may be separate from the system (108).
[0054] In an exemplary embodiment, the processing engine (208) may
include a normalizer (210), an AI/ML module (212), a traffic delegation module
(214), a processing module (216), a storage module (218), and other modules (220)
having functions that may include but are not limited to testing, storage, and
peripheral functions, such as wireless communication units for remote operation,
audio units for alerts, and the like.
[0055] The normalizer (210) is configured to ingest data. In an aspect, the
normalizer (210) performs the function of ingesting data and ensuring that this data
is available for processing by other components within the system (108). The data ingested by the normalizer (210) typically includes any data that needs to be processed and managed by the system. The data can encompass a wide variety of data types depending on the specific application and context of use. Generally, the
data may include:
User Data: Information related to user activities, transactions, or interactions within a network or application.
Sensor Data: Data collected from various sensors in an IoT system, including mechanical, thermal, electrical, and other types of sensors.
Application Data: Data generated or required by different applications running within the network, including logs, metrics, and user-generated content.
Network Data: Information related to network performance, such as bandwidth usage, latency, packet loss, and other network metrics.
Operational Data: Data related to the operations and performance of different system components, including CPU usage, memory usage, and current load metrics.
[0056] The AI/ML module (212) uses this ingested data to optimize its
distribution and ensure efficient load balancing across the network. The AI/ML model (212) uses this data to make informed decisions about load distribution, and dynamically adjusts traffic delegation to maintain optimal performance.
[0057] The AI/ML module (212) is further configured to receive and
analyze health information of normalizer instances. In one aspect of the present

embodiment, the AI/ML module (212) continuously monitors metrics, such as CPU
usage, memory usage, and the current load of each normalizer instance. The AI/ML
module (212) utilizes this information, along with historical performance data, to
determine one or more optimal instances of the normalizer for processing the
ingested data, thereby performing intelligent load distribution.
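A minimal sketch of the health record such a feed might carry is shown below; the disclosure names only the metrics (CPU usage, memory usage, current load), so the schema and field names are assumptions for illustration.

```python
# Minimal sketch of a health record the AI/ML module (212) might consume.
# The exact schema is not given in the disclosure; these fields mirror only
# the metrics named in the text.

from dataclasses import dataclass
import time

@dataclass
class HealthSnapshot:
    instance_id: str
    cpu_usage: float     # fraction of CPU in use, 0.0-1.0
    memory_usage: float  # fraction of memory in use, 0.0-1.0
    current_load: int    # number of records currently being normalized
    timestamp: float     # when the snapshot was taken

snapshot = HealthSnapshot("normalizer-1", 0.62, 0.48, 1200, time.time())
print(snapshot)
```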
[0058] The traffic delegation module (214) takes the output from the AI/ML
module (212) and delegates traffic to the determined optimal instances of the normalizer. This delegation is dynamic and adjusts in real-time based on the changing loads and performance metrics of the normalizer instances.
[0059] The processing module (216) is configured to process the delegated
traffic corresponding to the optimal normalizer instances. In one aspect of the
present embodiment, the processing module (216) ensures that the data ingested by
the normalizer (210) is processed efficiently.
[0060] The storage module (218) is configured to store the processed data
in a database or provide the processed data to an external system for further
analysis. In one aspect of the present embodiment, the storage module (218) ensures that once the data has been processed by the normalizer instance, it is either stored in a database for future reference or provided to external systems for additional processing or analysis.
[0061] Other modules (220) may include various additional functionalities
that support the overall operation of the system (108). These may include testing modules to validate the performance and accuracy of the AI/ML models, storage modules for maintaining historical data, and peripheral modules for communication and alerting purposes. These modules enhance the capability of the system (108) to
manage and distribute data loads efficiently.
[0062] Although FIG. 2 shows exemplary components of the system (108),
in other embodiments, the system (108) may include fewer components, different components, differently arranged components, or additional functional components than depicted in FIG. 2. Additionally, or alternatively, one or more components of
the system (108) may perform functions described as being performed by one or
more other components of the system (108).

[0063] FIG. 3 illustrates a flow diagram (300) representing a method for
load balancing between normalizer instances in a network, in accordance with some embodiments of the present disclosure.
[0064] As illustrated, the system comprises an AI/ML model (302)
embedded in an application server. The AI/ML model (302) continuously receives
a live feed of health information of all exemplary normalizer instances (304-1),
(304-2), and (304-3). This health information includes metrics, such as CPU usage,
memory usage, and current load of each normalizer instance.
[0065] The AI/ML model (302) analyzes the received health information to
determine one or more optimal instances of the normalizer to process the ingested data. The AI/ML model (302) uses historical performance data and the current health metrics to make this determination, ensuring that the workload is dynamically and intelligently distributed among the normalizer instances (304-1), (304-2), and (304-3). This historical performance data includes, but is not limited to, the quantity of data processed, the time taken to process the data, accuracy, delay, etc.
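The following sketch illustrates, under assumed weights, how current health and historical performance metrics of this kind could be combined into a single ranking score; it is a stand-in heuristic, not the trained model described in the disclosure.

```python
# Sketch of combining current health with historical performance metrics.
# The weighting scheme is an assumption made only for illustration.

def instance_score(current_load: float,
                   avg_throughput: float,
                   avg_latency: float,
                   accuracy: float) -> float:
    """Higher is better: favor low load/latency, high throughput/accuracy."""
    return (0.35 * (1.0 - current_load)   # current health metric
            + 0.25 * avg_throughput      # historical: normalized records/sec
            + 0.20 * (1.0 - avg_latency) # historical: normalized delay
            + 0.20 * accuracy)           # historical: normalization accuracy

candidates = {
    "304-1": instance_score(0.20, 0.90, 0.10, 0.99),
    "304-2": instance_score(0.75, 0.80, 0.30, 0.98),
    "304-3": instance_score(0.60, 0.85, 0.25, 0.97),
}
best = max(candidates, key=candidates.get)
print(best)  # -> "304-1", the least-loaded instance in this example
```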
[0066] When data is ingested into the system, the normalizer (210), with the
help of the AI/ML model (302), identifies the one or more optimized instances of the normalizer to handle the incoming data. This decision is made in real-time, based on the continuous analysis of the health metrics of each normalizer instance.
For example, if normalizer instance (304-1) is determined to be less loaded
compared to normalizer instances (304-2) and (304-3), the AI/ML model (302)
provides an output to the traffic delegation module (214) to delegate the traffic to
normalizer instance (304-1).
[0067] The optimal normalizer instance, such as (304-1), processes the
ingested data and either stores the processed data in a data lake (308) or provides it
to an external system (306) for further analysis. The data lake (308) acts as a storage repository where the processed data can be saved for future reference or additional processing. The data lake (308) and the database (222) refer to the same central repository and are used interchangeably.
[0068] The external system (306) may consume the processed data for
various applications, such as analytics, reporting, or other external processes that

require processed data inputs. This dynamic load balancing approach ensures
efficient utilization of system resources, improved scalability, and enhanced
performance.
[0069] The method (300) for load balancing between instances using an
AI/ML model to dynamically distribute the data workload is described. The method
(300) thus ensures effective resource utilization and optimized system performance,
contributing to an improved user experience.
[0070] FIG. 3 thus provides a detailed representation of how the AI/ML
model (302), normalizer instances (304-1), (304-2), and (304-3), data lake (308),
and external system (306) interact to achieve intelligent load distribution and data
processing within the system (108).
[0071] FIG. 4 illustrates an exemplary representation of a flow diagram
(400) representing a method for load balancing between instances in a network, in
accordance with some embodiments of the present disclosure. In an example, the
proposed method is applicable to the file system.
[0072] The system comprises a file system (402), normalizer instances
(404), and a database (222). The file system (402) includes multiple components,
each interacting with an instance of the normalizer.
[0073] Each normalizer instance (404) (i.e., normalizer instance 1,
normalizer instance 2, and normalizer instance 3) is configured to ingest data from
the corresponding file system. The normalizer instances (404) perform load
balancing using AI/ML algorithms.
[0074] The AI/ML logic embedded in the normalizer instances (404)
continuously monitors the health and performance of each normalizer instance. This
includes metrics, such as CPU usage, memory usage, and current load. The AI/ML
model determines if any instance is overloaded and, if so, intelligently delegates the
traffic to a less-loaded instance.
[0075] For example, when data is ingested by normalizer instance 1 from a
first file system, the embedded AI/ML model assesses the current load and health
metrics of all normalizer instances. If normalizer instance 1 is determined to be
overloaded, the AI/ML model will redistribute the data load to normalizer instance

2 or normalizer instance 3, which have lower loads. This delegation is dynamic and adjusts in real-time based on changing load conditions.
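A simple way to express this redistribution rule is sketched below; the overload threshold of 0.80 and the fallback to the least-loaded peer are assumed values chosen only for illustration.

```python
# Illustrative redistribution rule for the example above; the threshold and
# the fallback choice are assumptions, not values from the disclosure.

OVERLOAD_THRESHOLD = 0.80  # assumed fraction of capacity in use

def route(ingesting_instance: str, loads: dict[str, float]) -> str:
    """Keep data on the ingesting instance unless it is overloaded."""
    if loads[ingesting_instance] < OVERLOAD_THRESHOLD:
        return ingesting_instance
    # Redistribute to the least-loaded peer instance.
    return min(loads, key=loads.get)

loads = {"instance-1": 0.95, "instance-2": 0.40, "instance-3": 0.55}
print(route("instance-1", loads))  # instance-1 overloaded -> "instance-2"
```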
[0076] Once the optimal normalizer instance processes the ingested data,
the processed data is stored in the database (222). The database (222) acts as a
central repository for storing the processed data, ensuring that it is available for
future reference or further analysis by external systems.
[0077] The flow diagram (400) effectively demonstrates how the normalizer
instances (404) work together to achieve intelligent load distribution, based on AI/ML models to optimize resource utilization and system performance. Such
method ensures that data processing is efficient, scalable, and adaptable to changing
workloads.
[0078] FIG. 5 illustrates a flow diagram (500) representing a method for
performing intelligent load distribution, in accordance with some embodiments of the present disclosure.
[0079] The method begins with ingesting, by a normalizer instance of a
plurality of normalizer instances, data, at step (502). The normalizer instance (210) is configured to receive incoming data that needs to be processed. This data ingestion step ensures that the data is available for further processing within the system.
[0080] Next, the method involves receiving, by an Artificial
Intelligence/Machine Learning (AI/ML) model embedded in the normalizer, health information and historical performance data of normalizer instances, at step (504). The AI/ML model (212) continuously monitors various health metrics of the normalizer instances (304), such as CPU usage, memory usage, and current load.
This real-time feed of health information allows the AI/ML model to have an up-
to-date view of the system's performance.
[0081] The method then includes determining, by the AI/ML model (212),
an optimal instance of the normalizer to process the ingested data based on the received health information and historical performance data, at step (506). The
AI/ML model, based on the analysis, identifies the one or more normalizer instances
to handle the current data load. This decision-making process considers both the

current health metrics and historical data to ensure optimal performance.
[0082] Following this, the method involves delegating, by a traffic
delegation module (214), traffic to the determined optimal instances of the normalizer, at step (508). The traffic delegation module (214) directs the incoming data to the selected one or more normalizer instances that have been identified as optimal by the AI/ML model (212). This step ensures that the data is processed by the most suitable instances, improving efficiency and load distribution.
[0083] The optimal normalizer instances then process the ingested data
corresponding to the delegated traffic, at step (510) using a processing module
(216). This step involves the actual data processing performed by the selected
normalizer instances, ensuring that the data is handled efficiently and accurately.
[0084] Finally, the method includes storing, by the processing module in the
delegated normalizer instance, the processed data in a database or providing the processed data to an external system for further analysis, at step (512). The storage module (218) stores the processed data in the database (222) for future reference or additional processing. Alternatively, the processed data can be provided to an external system (306) for further analysis, enabling various applications such as reporting, analytics, or other data-driven processes.
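Putting steps (502) to (512) together, the sketch below traces the control flow with stub objects standing in for the normalizer instances, the AI/ML model, and the database; all class and method names here are hypothetical, since the disclosure does not define the normalization logic or the storage interface.

```python
# End-to-end sketch of steps 502-512 from FIG. 5, with stubs standing in for
# the real modules. All class and method names are hypothetical.

class StubInstance:
    def __init__(self, name, load):
        self.name, self.load = name, load
    def process(self, data):
        return [d.upper() for d in data]  # placeholder normalization

class StubModel:
    def select_instance(self, instances):        # steps 504-506
        return min(instances, key=lambda i: i.load)

class StubDatabase:
    def save(self, data):                        # step 512
        print("stored:", data)

def intelligent_load_distribution(raw_data, instances, model, database):
    ingested = list(raw_data)                    # step 502: ingest data
    optimal = model.select_instance(instances)   # step 506: determine instance
    processed = optimal.process(ingested)        # steps 508-510: delegate, process
    database.save(processed)                     # step 512: store (or hand off)
    return processed

instances = [StubInstance("n1", 0.9), StubInstance("n2", 0.2)]
intelligent_load_distribution(["a", "b"], instances, StubModel(), StubDatabase())
```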
[0085] FIG. 6 illustrates an example computer system 600 in which or with
which the embodiments of the present disclosure may be implemented.
[0086] As shown in FIG. 6, the computer system 600 may include an
external storage device 610, a bus 620, a main memory 630, a read-only memory 640, a mass storage device 650, communication port(s) 660, and a processor 670. A person skilled in the art will appreciate that the computer system 600 may include more than one processor and communication ports. The processor 670 may include various modules associated with embodiments of the present disclosure. The communication port(s) 660 may be any of an RS-232 port for use with a modem-based dialup connection, a 10/100 Ethernet port, a Gigabit or 10 Gigabit port using copper or fiber, a serial port, a parallel port, or other existing or future ports. The communication port(s) 660 may be chosen depending on a network, such as a

Local Area Network (LAN), Wide Area Network (WAN), or any network to which the computer system 600 connects.
[0087] In an embodiment, the main memory 630 may be Random Access
Memory (RAM), or any other dynamic storage device commonly known in the art. The read-only memory 640 may be any static storage device(s) e.g., but not limited to, a Programmable Read Only Memory (PROM) chip for storing static information e.g., start-up or basic input/output system (BIOS) instructions for the processor 670. The mass storage device 650 may be any current or future mass storage solution, which can be used to store information and/or instructions. Exemplary mass storage solutions include, but are not limited to, Parallel Advanced Technology Attachment (PATA) or Serial Advanced Technology Attachment (SATA) hard disk drives or solid-state drives (internal or external, e.g., having Universal Serial Bus (USB) and/or Firewire interfaces).
[0088] In an embodiment, the bus 620 may communicatively couple the
processor(s) 670 with the other memory, storage, and communication blocks. The bus 620 may be, e.g., a Peripheral Component Interconnect (PCI) / PCI Extended (PCI-X) bus, Small Computer System Interface (SCSI), Universal Serial Bus (USB), or the like, for connecting expansion cards, drives, and other subsystems, as well as other buses, such as a front side bus (FSB), which connects the processor 670 to the computer system 600.
[0089] In another embodiment, operator and administrative interfaces, e.g.,
a display, keyboard, and cursor control device may also be coupled to the bus 620 to support direct operator interaction with the computer system 600. Other operator and administrative interfaces can be provided through network connections connected through the communication port(s) 660. Components described above are meant only to exemplify various possibilities. In no way should the aforementioned exemplary computer system 600 limit the scope of the present disclosure.
[0090] While considerable emphasis has been placed herein on the preferred
embodiments, it will be appreciated that many embodiments can be made and that many changes can be made in the preferred embodiments without departing from

the principles of the disclosure. These and other changes in the preferred embodiments of the disclosure will be apparent to those skilled in the art from the disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be implemented merely as illustrative of the disclosure and not as a limitation.
ADVANTAGES OF THE INVENTION
[0091] The present disclosure provides a system and a method for load
balancing between instances in a network.
[0092] The present disclosure provides a system and a method that reduce
time and effort in application management/operations.
[0093] The present disclosure provides a system and a method that optimize
the resource allocation while normalizing the data.
[0094] The present disclosure provides a system and a method that enable
better scalability and adaptability in handling varying workloads.
[0095] The present disclosure provides a system and a method that are
economical and easy to implement.

WE CLAIM:
1. A method for performing intelligent load distribution, the method
comprising:
ingesting data (502) by a normalizer instance (210, 304) of a plurality of normalizer instances (210, 304), wherein the data is ingested for processing of the data;
receiving (504), by an Artificial Intelligence/Machine Learning (AI/ML) model (212), health information and historical performance data of the plurality of normalizer instances (210, 304);
determining (506), by the Artificial Intelligence/Machine Learning (AI/ML) model (212), at least one optimal instance of the normalizer (210, 304) to process the ingested data based on the received health information and the historical performance data;
delegating (508), by a traffic delegation module (214), the ingested data to the determined at least one optimal instance of the normalizer (210, 304);
processing (510), by a processing module (216) associated with the at least one optimal instance of the normalizer (210, 304), the delegated ingested data; and
storing (512), by the processing module (216), the processed data in a database (222) or providing, by the processing module (216), the processed data to an external system (306) for further analysis.
2. The method as claimed in claim 1, wherein the AI/ML model (212) is trained on performance parameters of the normalizer instances (210, 304) under diverse conditions of success and failure.
3. The method as claimed in claim 1, wherein the health information includes metrics related to Central Processing Unit (CPU) usage, memory usage, and current load of each normalizer instance (210, 304).
4. The method as claimed in claim 1, wherein the traffic delegation is adjusted dynamically based on real-time load changes of the at least one

optimal normalizer instance (210).
5. The method as claimed in claim 1, further comprising updating the AI/ML model (212) with new performance data to improve accuracy and efficiency over time.
6. A system for performing intelligent load distribution, the system comprising:
a normalizer instance (210, 304) of a plurality of normalizer instances (210, 304) configured to ingest data, wherein the data is ingested for processing of the data;
an Artificial Intelligence/Machine Learning (AI/ML) model (212) configured to receive and analyze health information and historical performance data of the plurality of instances of the normalizer (210, 304);
the Artificial Intelligence/Machine Learning (AI/ML) model (212) configured to determine at least one optimal instance of the normalizer (210, 304) to process the ingested data based on the received health information and the historical performance data;
a traffic delegation module (214) configured to delegate the ingested data to the determined at least one optimal instance of the normalizer (210, 304);
a processing module (216) configured to process the delegated ingested data; and
a storage module (218) configured to store the processed data in a database (222) or provide the processed data to an external system (306) for further analysis.
7. The system as claimed in claim 6, wherein the AI/ML model (212) is trained on performance parameters of the plurality of instances of the normalizer (210, 304) under diverse conditions of success and failure.
8. The system as claimed in claim 6, wherein the health information includes metrics related to Central Processing Unit (CPU) usage, memory usage, and current load of each instance of the normalizer (210, 304).
9. The system as claimed in claim 6, wherein the traffic delegation module

(214) is configured to adjust the delegation dynamically based on real-time load changes of the optimal instances of the normalizer (210, 304).
10. The system as claimed in claim 6, further comprising an update module
configured to update the AI/ML model (212) with new performance data
to enhance accuracy and efficiency over time.
11. A user equipment (UE) (104) communicatively coupled with a network
(106), the coupling comprising the steps of:
receiving, by the network (106), a connection request from the UE (104);
sending, by the network (106), an acknowledgment of the connection request to the UE (104); and
transmitting a plurality of signals in response to the connection request, wherein the network (106) is configured for performing a method for performing intelligent load distribution as claimed in claim 1.

Documents

Application Documents

# Name Date
1 202321047048-STATEMENT OF UNDERTAKING (FORM 3) [12-07-2023(online)].pdf 2023-07-12
2 202321047048-PROVISIONAL SPECIFICATION [12-07-2023(online)].pdf 2023-07-12
3 202321047048-FORM 1 [12-07-2023(online)].pdf 2023-07-12
4 202321047048-DRAWINGS [12-07-2023(online)].pdf 2023-07-12
5 202321047048-DECLARATION OF INVENTORSHIP (FORM 5) [12-07-2023(online)].pdf 2023-07-12
6 202321047048-FORM-26 [13-09-2023(online)].pdf 2023-09-13
7 202321047048-FORM-26 [05-03-2024(online)].pdf 2024-03-05
8 202321047048-FORM 13 [08-03-2024(online)].pdf 2024-03-08
9 202321047048-AMENDED DOCUMENTS [08-03-2024(online)].pdf 2024-03-08
10 202321047048-Request Letter-Correspondence [03-06-2024(online)].pdf 2024-06-03
11 202321047048-Power of Attorney [03-06-2024(online)].pdf 2024-06-03
12 202321047048-Covering Letter [03-06-2024(online)].pdf 2024-06-03
13 202321047048-CORRESPONDANCE-WIPO CERTIFICATE-14-06-2024.pdf 2024-06-14
14 202321047048-ENDORSEMENT BY INVENTORS [01-07-2024(online)].pdf 2024-07-01
15 202321047048-DRAWING [01-07-2024(online)].pdf 2024-07-01
16 202321047048-CORRESPONDENCE-OTHERS [01-07-2024(online)].pdf 2024-07-01
17 202321047048-COMPLETE SPECIFICATION [01-07-2024(online)].pdf 2024-07-01
18 202321047048-ORIGINAL UR 6(1A) FORM 26-020924.pdf 2024-09-09
19 202321047048-FORM-9 [16-10-2024(online)].pdf 2024-10-16
20 202321047048-FORM 18A [17-10-2024(online)].pdf 2024-10-17
21 202321047048-FORM 3 [07-11-2024(online)].pdf 2024-11-07
22 202321047048-FER.pdf 2025-01-21
23 202321047048-FORM 3 [23-01-2025(online)].pdf 2025-01-23
24 202321047048-FORM 3 [23-01-2025(online)]-1.pdf 2025-01-23
25 202321047048-Proof of Right [04-02-2025(online)].pdf 2025-02-04
26 202321047048-ORIGINAL UR 6(1A) FORM 1-170225.pdf 2025-02-19
27 202321047048-OTHERS [06-03-2025(online)].pdf 2025-03-06
28 202321047048-FER_SER_REPLY [06-03-2025(online)].pdf 2025-03-06
29 202321047048-COMPLETE SPECIFICATION [06-03-2025(online)].pdf 2025-03-06
30 202321047048-US(14)-HearingNotice-(HearingDate-17-04-2025).pdf 2025-03-19
31 202321047048-Correspondence to notify the Controller [09-04-2025(online)].pdf 2025-04-09
32 202321047048-Written submissions and relevant documents [01-05-2025(online)].pdf 2025-05-01
33 202321047048-FORM-26 [01-05-2025(online)].pdf 2025-05-01
34 202321047048-US(14)-ExtendedHearingNotice-(HearingDate-02-06-2025)-1100.pdf 2025-05-15
35 202321047048-ORIGINAL UR 6(1A) FORM 26-130525.pdf 2025-05-17
36 202321047048-Correspondence to notify the Controller [27-05-2025(online)].pdf 2025-05-27
37 202321047048-Written submissions and relevant documents [13-06-2025(online)].pdf 2025-06-13
38 202321047048-PatentCertificate24-06-2025.pdf 2025-06-24
39 202321047048-IntimationOfGrant24-06-2025.pdf 2025-06-24

Search Strategy

1 SearchHistoryE_31-12-2024.pdf

ERegister / Renewals

3rd: 24 Sep 2025

From 12/07/2025 - To 12/07/2026