
System And Method For Configuring Load Factor Based Data Push Mechanism

Abstract: The present disclosure discloses a method (500) for managing data transfers to a plurality of destinations (116) for load balancing, including monitoring (502) the plurality of destinations (116) for capturing metrics associated with the plurality of destinations (116) while transferring data to the plurality of destinations (116); analysing (504) the captured metrics using an analysing technique for determining a pattern of the corresponding metrics; comparing (506) the pattern of the metrics of a corresponding destination (116) with a predefined pattern; determining (508) a current load state on the corresponding destination (116) by using the analysing technique when the pattern of a corresponding metric deviates from the predefined pattern in the corresponding destination (116); and controlling (510) a transfer of the data to the corresponding destination (116) for the load balancing on the corresponding destination (116) when the current load state of the corresponding destination (116) is identified as an overload state. Figure 3


Patent Information

Application #
Filing Date
13 July 2023
Publication Number
50/2024
Publication Type
INA
Invention Field
COMMUNICATION
Status
Email
Parent Application

Applicants

JIO PLATFORMS LIMITED
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.

Inventors

1. BHATNAGAR, Aayush
Tower-7, 15B, Beverly Park, Sector-14 Koper Khairane, Navi Mumbai - 400701, Maharashtra, India.
2. MURARKA, Ankit
W-16, F-1603, Lodha Amara, Kolshet Road, Thane West - 400607, Maharashtra, India.
3. KOLARIYA, Jugal Kishore
C 302, Mediterranea CHS Ltd, Casa Rio, Palava, Dombivli - 421204, Maharashtra, India.
4. KUMAR, Gaurav
1617, Gali No. 1A, Lajjapuri, Ramleela Ground, Hapur - 245101, Uttar Pradesh, India.
5. SAHU, Kishan
Ajay Villa, Gali No. 2, Ambedkar Colony, Bikaner - 334003, Rajasthan, India.
6. VERMA, Rahul
A-154, Shradha Puri Phase-2, Kanker Khera, Meerut - 250001, Uttar Pradesh, India.
7. MEENA, Sunil
D-29/1, Chitresh Nagar, Borkhera, District - Kota - 324001, Rajasthan, India.
8. GURBANI, Gourav
I-1601, Casa Adriana, Downtown, Palava Phase 2, Dombivli - 421204 Maharashtra, India.
9. CHAUDHARY, Sanjana
Jawaharlal Road, Muzaffarpur - 842001, Bihar, India.
10. GANVEER, Chandra Kumar
Village - Gotulmunda, Post - Narratola, Dist. - Balod - 491228, Chhattisgarh, India.
11. DE, Supriya
G2202, Sheth Avalon, Near Jupiter Hospital Majiwada, Thane West - 400601, Maharashtra, India.
12. KUMAR, Debashish
Bhairaav Goldcrest Residency, E-1304, Sector 11, Ghansoli, Navi Mumbai - 400701, Maharashtra, India.
13. TILALA, Mehul
64/11, Manekshaw Marg, Manekshaw Enclave, Delhi Cantonment, New Delhi - 110010, India.
14. KALIKIVAYI, Srinath
3-61, Kummari Bazar, Madduluru Village, S N Padu Mandal, Prakasam District, Andhra Pradesh - 523225, India.
15. PANDEY, Vitap
D 886, World Bank Barra, Kanpur - 208027, Uttar Pradesh, India.

Specification

FORM 2
THE PATENTS ACT, 1970 (39 of 1970) THE PATENTS RULES, 2003
COMPLETE SPECIFICATION
SYSTEM AND METHOD FOR CONFIGURING LOAD FACTOR BASED DATA PUSH MECHANISM
APPLICANT
JIO PLATFORMS LIMITED, Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India; Nationality: India
The following specification particularly describes the invention and the manner in which it is to be performed.

RESERVATION OF RIGHTS
[0001] A portion of the disclosure of this patent document contains
material, which is subject to intellectual property rights such as, but not limited to, copyright, design, trademark, integrated circuit (IC) layout design, and/or trade dress protection, belonging to Jio Platforms Limited (JPL) or its affiliates (hereinafter referred to as the owner). The owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all rights whatsoever. All rights to such intellectual property are fully reserved by the owner.
TECHNICAL FIELD
[0002] The present disclosure relates to a field of data management and
network systems, and specifically to a system and a method for configuring load factor-based data push mechanism for managing data transfers to destinations for load balancing.
BACKGROUND
[0003] The following description of related art is intended to provide
background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section is to be used only to enhance the understanding of the reader with respect to the present disclosure, and not as admissions of prior art.
[0004] In distributed computing environments, data is often transferred
from sources to destinations for further processing, storage, or analysis. As data volumes and system complexities increase, ensuring performance and reliability of these data transfers becomes increasingly challenging. Overloading destinations can lead to system failures, slowdowns, and degraded performance. In order to manage troubleshooting operations, data is pushed to the destinations using a load balancer. Essentially, the load balancer balances a load on each of the destinations,
which may require manual intervention to monitor metrics corresponding to each of the destinations. In conventional methods and systems, it may be a challenge to capture the metrics of the destinations and take appropriate action(s) based on the metrics without the load balancer or manual intervention. Traditional monitoring and data transfer systems often fall short of addressing these issues dynamically and proactively.
[0005] Various technologies have been introduced to address aspects of data
transfer management and system monitoring; however, they exhibit significant limitations. In one conventional approach, an open-source monitoring system is provided to monitor systems, applications, services, and business processes using predefined thresholds and alerts. However, such a system relies heavily on static thresholds and manual configurations. Also, it lacks the ability to dynamically adjust data transfer rates based on real-time conditions, making it less adaptive to fluctuating workloads. Further, in another conventional approach, a monitoring and management service is provided that collects and tracks metrics, collects log files, and sets alarms. Such a system provides robust monitoring capabilities but often relies on predefined actions and lacks the ability to dynamically adjust data transfers in real time based on system performance.
[0006] Thus, there is a need to provide a system and a method that can
manage the data transfers efficiently and overcome the deficiencies of the prior arts.
OBJECTS OF THE PRESENT DISCLOSURE
[0007] It is an object of the present disclosure to provide a system and a
method to balance the load of destinations without a load balancer, thereby leading to efficient resource management.
[0008] It is an object of the present disclosure to provide a system and a
method to balance load of destinations without manual intervention.

[0009] It is an object of the present disclosure to use advanced artificial
intelligence (AI) or machine learning (ML) based models to take actions in real-time based on metrics captured from destinations.
[0010] It is an object of the present disclosure to proactively track and
monitor the behavior of a destination and automatically start/stop a data push to that destination, thereby saving the time required to push data from the source to the destination.
SUMMARY
[0011] In an exemplary embodiment, the present invention discloses a
method for managing data transfers to a plurality of destinations for load balancing. The method includes a step of monitoring, by a monitoring unit, the plurality of destinations for capturing one or more metrics associated with the plurality of destinations while transferring data to the corresponding plurality of destinations. The method further includes a step of analysing, by a processing unit, the one or more captured metrics using an analysing technique for determining a pattern of corresponding one or more metrics. The method further includes a step of comparing, by the processing unit, the pattern of the corresponding one or more metrics of a corresponding destination of the plurality of destinations with a predefined pattern of the corresponding one or more metrics. The method further includes a step of determining, by the processing unit, a current load state on the corresponding destination of the plurality of destinations by using the analysing technique when the pattern of a corresponding metric of the one or more metrics deviates from the predefined pattern of the corresponding metric of the one or more metrics in the corresponding destination of the plurality of destinations. The method further includes a step of controlling, by the processing unit, transfer of data to the corresponding destination of the plurality of destinations for the load balancing in real time when the determined current load state of the corresponding destination of the plurality of destinations is identified as an overload state.

[0012] In some embodiments, the one or more metrics includes interface
level metrics, network level metrics, Internet Protocol (IP) level metrics, or a combination thereof.
[0013] In some embodiments, the analysing technique includes an Artificial
Intelligence (AI)/Machine Learning (ML) based analysing technique.
[0014] In some embodiments, the method includes a step of feeding, by the
processing unit, the one or more metrics into an AI/ML algorithm to generate a trained AI/ML based model for determining the current load state on each of the plurality of destinations.
[0015] In some embodiments, controlling the transfer of the data to the
corresponding destination of the plurality of destinations for the load balancing includes stopping the transfer of the data to the corresponding destination of the plurality of destinations for the load balancing.
[0016] In some embodiments, the method includes a step of automatically
transferring, by the processing unit, the data to the corresponding destination of the plurality of destinations when the determined current load state of the corresponding destination of the plurality of destinations is identified as one of, a normal state or an underload state.
[0017] In another exemplary embodiment, the present invention discloses a
system for managing data transfers to a plurality of destinations for load balancing. The system includes: a monitoring unit configured to monitor the plurality of destinations for capturing one or more metrics associated with the plurality of destinations while transferring data to the corresponding plurality of destinations. The system further includes a processing unit communicatively coupled to the monitoring unit. The processing unit is configured to: analyze the one or more captured metrics using an analysing technique for determining a pattern of the corresponding one or more metrics. The processing unit is further configured to
compare the pattern of the corresponding one or more metrics of a corresponding destination of the plurality of destinations with a predefined pattern of the corresponding one or more metrics. The processing unit is further configured to determine a current load state on the corresponding destination of the plurality of destinations by using the analysing technique when the pattern of a corresponding metric of the one or more metrics deviates from the predefined pattern of the corresponding metric of the one or more metrics in the corresponding destination of the plurality of destinations. The processing unit is further configured to control transfer of data to the corresponding destination of the plurality of destinations for the load balancing in real time when the determined current load state of the corresponding destination of the plurality of destinations is identified as an overload state.
[0018] In some embodiments, the one or more metrics includes interface
level metrics, network level metrics, Internet Protocol (IP) level metrics, or a combination thereof.
[0019] In some embodiments, the analysing technique includes an Artificial
Intelligence (AI)/Machine Learning (ML) based analysing technique.
[0020] In some embodiments, the processing unit is configured to feed the
one or more metrics into an AI/ML algorithm to generate a trained AI/ML based model for determining the current load state on each of the plurality of destinations.
[0021] In some embodiments, the processing unit is configured to control
the transfer of the data to the corresponding destination of the plurality of destinations by stopping the transfer of the data to the corresponding destination of the plurality of destinations.
[0022] In some embodiments, the processing unit is configured to
automatically transfer the data to the corresponding destination of the plurality of destinations when the determined current load state of the corresponding
destination of the plurality of destinations is identified as one of, a normal state or an underload state.
[0023] In an exemplary embodiment, the present invention discloses a user
equipment configured to manage data transfers to a plurality of destinations for load balancing. The user equipment includes: a processing unit. The user equipment further includes: a computer readable storage medium storing programming for execution by the processing unit, the programming including instructions to receive one or more captured metrics associated with the plurality of destinations at a user interface; analyze the one or more received metrics using an analysing technique for determining a pattern of the corresponding one or more metrics; compare the pattern of the corresponding one or more metrics of a corresponding destination of the plurality of destinations with a predefined pattern of the corresponding one or more metrics; determine a current load state on the corresponding destination of the plurality of destinations by using the analysing technique when the pattern of a corresponding metric of the one or more metrics deviates from the predefined pattern of the corresponding metric of the one or more metrics in the corresponding destination of the plurality of destinations; and control transfer of data to the corresponding destination of the plurality of destinations for the load balancing in real time when the determined current load state of the corresponding destination of the plurality of destinations is identified as an overload state.
[0024] In some embodiments, the one or more metrics includes interface
level metrics, network level metrics, Internet Protocol (IP) level metrics, or a combination thereof.
[0025] In some embodiments, the analysing technique includes an Artificial
Intelligence (AI)/Machine Learning (ML) based analysing technique.

[0026] In some embodiments, the processing unit is configured to feed the
one or more metrics into an AI/ML algorithm to generate a trained AI/ML based model for determining the current load state on each of the plurality of destinations.
[0027] In some embodiments, the processing unit is configured to control
the transfer of the data to the corresponding destination of the plurality of destinations by stopping the transfer of the data to the corresponding destination of the plurality of destinations.
[0028] In some embodiments, the processing unit is configured to
automatically transfer the data to the corresponding destination of the plurality of destinations when the determined current load state of the corresponding destination of the plurality of destinations is identified as one of, a normal state or an underload state.
BRIEF DESCRIPTION OF THE DRAWINGS
[0029] In the figures, similar components and/or features may have the
same reference label. Further, various components of the same type may be distinguished by following the reference label with a second label that distinguishes among the similar components. If only the first reference label is used in the specification, the description is applicable to any one of the similar components having the same first reference label irrespective of the second reference label.
[0030] The diagrams are for illustration only, which thus is not a limitation
of the present disclosure, and wherein:
[0031] FIG. 1A illustrates an exemplary network architecture in which or
with which embodiments of the present disclosure may be implemented;
[0032] FIG. 1B illustrates an exemplary system architecture, in accordance
with an embodiment of the present disclosure;

[0033] FIG. 2 illustrates an exemplary block diagram of a system, in
accordance with an embodiment of the present disclosure;
[0034] FIG. 3 illustrates an exemplary flow diagram representation for
implementing the system, in accordance with an embodiment of the present disclosure;
[0035] FIG. 4 illustrates an exemplary computer system in which or with
which embodiments of the present disclosure may be implemented; and
[0036] FIG. 5 illustrates a flowchart of a method for managing data transfers
to destinations for load balancing, in accordance with an embodiment of the present disclosure.
LIST OF REFERENCE NUMERALS
100 – Network architecture
102-1, 102-2…102-N – User Equipment
104-1, 104-2…104-N – Users
106 – System
108 – Network
110 – AI/ML Engine
112 – System Architecture
114-1, 114-N – Sources
116 – Destinations
116a – Destination 1
116b – Destination 2
116c – Destination 3
118 – User Interface
120 – First Database
122 – Second Database
124 – Ingestion Layer
200 – Block diagram
202 – Monitoring Unit
204 – Memory
206 – Interfacing Unit
208 – Processing Unit
210 – Database
212 – Training Module
214 – Analysing Module
216 – Load Determination Module
218 – Data Transfer Module
300 – Flow Diagram of System
400 – Computer system
410 – External storage device
420 – Bus
430 – Main memory
440 – Read only memory
450 – Mass storage device
460 – Communication port(s)
470 – Processor
500 – Method
DETAILED DESCRIPTION
[0037] The following is a detailed description of embodiments of the
disclosure depicted in the accompanying drawings. The embodiments are in such detail as to clearly communicate the disclosure. However, the amount of detail offered is not intended to limit the anticipated variations of embodiments; on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the present disclosure as defined by the appended claims.

[0038] Generally, a system operates under heavy load (i.e., the data volume is huge) while transferring data to multiple destinations. Therefore, the present disclosure may provide a system that captures metrics (such as latency, bandwidth, reachability, etc.), monitors the behaviour of each destination, and tracks the pattern of its metrics. Based on the metrics and their pattern, if the system finds that the load on a particular destination is increasing, that the particular destination is getting overloaded, or that the destination is not responding well to the incoming data, then the system takes an action and automatically stops transferring the data to that destination. Once the destination returns to its normal metrics, the system automatically restarts the data push to that destination.
[0039] Various embodiments of the present disclosure will be explained in
detail with reference to FIGs. 1 to 5.
[0040] FIG. 1A illustrates an exemplary network architecture (100) in
which or with which embodiments of the present disclosure may be implemented.
[0041] Referring to the FIG. 1A, the network architecture (100) may include
one or more user equipment (102-1, 102-2…102-N) associated with one or more users (104-1, 104-2…104-N) and a system (106) in an environment. In an embodiment, the one or more user equipment (102-1, 102-2…102-N) may communicate with the system (106) through a network (108). A person of ordinary skill in the art will understand that the one or more user equipment (102-1, 102-2…102-N) may be individually referred to as the user equipment (102) and collectively referred to as the user equipment (102). A person of ordinary skill in the art will appreciate that the terms “computing device(s)” and “user equipment” may be used interchangeably throughout the disclosure. Although three user equipment (102) are depicted in the FIG. 1A, any number of the user equipment (102) may be included without departing from the scope of the ongoing description. Similarly, a person of ordinary skill in the art will understand that the one or more users (104-1, 104-2…104-N) may be individually referred to as the user (104) and collectively referred to as the users (104).

[0042] In an embodiment, the user equipment (102) may include smart
devices operating in a smart environment, for example, an Internet of Things (IoT) system. In such an embodiment, the user equipment (102) may include, but not limited to, smartphones, smart watches, smart sensors (e.g., mechanical, thermal, electrical, magnetic, etc.), networked appliances, networked peripheral devices, networked lighting system, communication devices, networked vehicle accessories, networked vehicular devices, smart accessories, tablets, smart television (TV), computers, a smart security system, a smart home system, other devices for monitoring or interacting with or for the users (104) and/or entities, or any combination thereof. A person of ordinary skill in the art will appreciate that the user equipment (102) may include, but not limited to, intelligent multi-sensing, network-connected devices, that can integrate seamlessly with each other and/or with a central server or a cloud-computing system or any other device that is network-connected.
[0043] In an embodiment, the user equipment (102) may include, but not
limited to, a handheld wireless communication device (e.g., a mobile phone, a smart phone, a phablet device, and so on), a wearable computer device (e.g., a head-mounted display computer device, a head-mounted camera device, a wristwatch computer device, and so on), a Global Positioning System (GPS) device, a laptop, a tablet computer, or another type of portable computer, a media playing device, a portable gaming system, and/or any other type of computer device with wireless communication capabilities, and the like.
[0044] In an embodiment, the user equipment (102) may include, but is not
limited to, any electrical, electronic, electro-mechanical, or an equipment, or a combination of one or more of the above devices such as virtual reality (VR) devices, augmented reality (AR) devices, a general-purpose computer, a desktop, a personal digital assistant, a mainframe computer, or any other computing device. In another embodiment, the user equipment (102) may include one or more in-built or
externally coupled accessories including, but not limited to, a visual aid device such
as a camera, an audio aid, a microphone, a keyboard, and input devices for receiving
input from the user (104) or the entity such as a touch pad, a touch enabled screen,
an electronic pen, and the like. A person of ordinary skill in the art will appreciate
that the user equipment (102) may not be restricted to the mentioned devices and
various other devices may be used.
[0045] Referring to the FIG. 1A, the user equipment (102) may
communicate with the system (106), for example, a load balancing system, through
the network (108). In accordance with embodiments of the present disclosure, the
system (106) may be designed and configured for monitoring and capturing one or more metrics of destinations (116) (as shown in FIG. 1B) while transferring data from sources (114-1... 114-N) (as shown in FIG. 1B) to the destinations (116). In an embodiment, the data may be, but not limited to, transactional data, sensor data,
user data, analytical data, business data, and so forth. Embodiments of the present
invention are intended to include or otherwise cover any type of the data.
[0046] In some embodiments, the metrics corresponding to each of the
destinations (116) may be, but are not limited to latency, bandwidth, reachability,
and the like. Embodiments of the present invention are intended to include or
otherwise cover any type of the metrics. Further, in some embodiments, the metrics may be, but not be limited to, interface level metrics, network level metrics, Internet Protocol (IP) level metrics, and so forth.
[0047] In accordance with embodiments of the present disclosure, the system (106) may include an Artificial Intelligence (AI)/Machine Learning (ML) engine (110) that may be executed to analyse the real time metrics and make intelligent decisions about data transfers to ensure optimal performance of the system (106). In an exemplary embodiment, the AI/ML engine (110) may be utilized to continuously analyze the metrics of the destinations (116) and predict overload conditions by analysing patterns of the corresponding metrics. Based on the predicted overload in real time, the AI/ML engine (110) may automatically adjust data transfer rates for transferring the data to the destinations (116). In an aspect, if one of the destinations (116) is approaching the overload condition, the AI/ML engine (110) may reduce or stop the data transfers. Once the metrics of the corresponding destination (116) reach a normal state, the AI/ML engine (110) may automatically resume the data transfers. Therefore, the present disclosure enables the system (106) to proactively track and monitor a behaviour of the destinations (116), and based on the metrics, the system (106) may take necessary actions on the fly. In an embodiment, the system (106) may be explained in detail in conjunction with FIG. 2.
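As a non-limiting illustration of the rate adjustment performed by the AI/ML engine (110), a predicted overload score could be mapped to a push rate as sketched below; the score source, the rate values, and the thresholds are assumptions made only for illustration.

```python
# Illustrative sketch only: mapping a predicted overload score to a data push
# rate, as an analogue of the AI/ML engine's rate adjustment. The score and
# rate values are hypothetical.
def adjust_transfer_rate(overload_score: float, max_rate_mbps: float = 100.0) -> float:
    """Return the push rate for a destination given an overload score in [0, 1]."""
    if overload_score >= 0.9:
        return 0.0                   # stop the push entirely
    if overload_score >= 0.6:
        return 0.25 * max_rate_mbps  # throttle while approaching overload
    return max_rate_mbps             # normal or underload: full rate

if __name__ == "__main__":
    for score in (0.2, 0.7, 0.95):
        print(score, "->", adjust_transfer_rate(score), "Mbps")
```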
[0048] In an embodiment, the network (108) may include at least one of a
5G network, 6G network, or the like. The network (108) may enable the user
equipment (102) to communicate with other devices in the network architecture
(100) and/or with the system (106). The network (108) may include a wireless card
or some other transceiver connection to facilitate this communication. In another
embodiment, the network (108) may be implemented as, or include any of a variety of different communication technologies such as a Wide Area Network (WAN), a Local Area Network (LAN), a wireless network, a mobile network, a Virtual Private Network (VPN), the Internet, a Public Switched Telephone Network (PSTN), or
the like.
[0049] Although the FIG. 1A shows exemplary components of the network
architecture (100); however, in other embodiments, the network architecture (100)
may include fewer components, different components, differently arranged
components, or additional functional components than depicted in the FIG. 1A.
Additionally, or alternatively, one or more components of the network architecture (100) may perform functions described as being performed by one or more other components of the network architecture (100).
[0050] FIG. 1B illustrates an exemplary system architecture (112), in
accordance with an embodiment of the present disclosure.

[0051] Referring to the FIG. 1B, the system architecture (112) may be
implemented to fetch data from the sources (114-1,…114-N) (hereinafter
collectively referred to as the sources (114) and individually referred to as the
source (114)), transfer the fetched data to the corresponding destinations (116) and
normalize the data by the destinations (116) using policy configurations received
from a user interface (118).
[0052] In another embodiment, the system architecture (112) may be
implemented to fetch the data from the sources (114), normalize the data using the
policy configurations received from the user interface (118) and transfer the
normalized data to the corresponding destinations (116).
[0053] The system architecture (112) may include the user interface (118)
that may allow users (104) (as shown in the FIG. 1A) to interact with the system
(106). The user interface (118) may allow the users (104) to onboard new sources
(114) and configure policies for data normalization. The system architecture (112) may also include a first database (120) associated with the system (106) to store metadata-related information. The metadata-related information may be, but not limited to, a source identification (ID), a timestamp indicating date and time when
the data is fetched, data fetching parameters indicating how the data was fetched,
and so forth. The system architecture (112) may also include a second database (122) associated with the destination (116) to store the normalized data. In an embodiment, the first database (120) and the second database (122) may be different databases. In another embodiment, the first database (120) and the second
database (122) may be the same databases.
[0054] In an exemplary embodiment, the system (106) may include an
ingestion layer (124) that may be configured to fetch the data from the sources (114)
using one of, an Application Programming Interface (API), database queries, or any
other method. The sources (114) may include, but are not limited to, a database, an
application, or any other system. In some embodiments, the system (106) may pull
the data from the different sources (114) using the user interface (118) based on the configuration of the sources (114).
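A minimal sketch of such an ingestion fetch, assuming an HTTP API source and the Python standard library, is given below; the endpoint URL and response format are hypothetical, and the specification equally covers database queries or any other fetch method.

```python
# Illustrative sketch only: an ingestion layer pulling records from a source
# over HTTP. The endpoint URL and JSON response shape are assumptions.
import json
import urllib.request

def fetch_from_source(api_url: str, timeout: float = 5.0) -> list[dict]:
    """Fetch a batch of records from a configured source endpoint."""
    with urllib.request.urlopen(api_url, timeout=timeout) as response:
        return json.loads(response.read().decode("utf-8"))

if __name__ == "__main__":
    # Hypothetical source endpoint configured via the user interface.
    try:
        records = fetch_from_source("http://127.0.0.1:8080/records")
        print(f"fetched {len(records)} records")
    except OSError as exc:
        print(f"source unreachable: {exc}")
```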
[0055] The system (106) may further transfer the data to the corresponding
destination (116) for further processing or storage by way of load balancing, i.e.
without any manual intervention or without using a load balancer, thereby leading to efficient resource management and time efficiency. The destination (116) may be, but not limited to, the database, the application or any other system. In an exemplary embodiment, the data transfer may be performed by using one of, the
API, file transfers, or any other methods depending on the requirements of the
destination (116). Further, the destination (116) may have a normalization layer (not shown) to normalize the data based on the policy configurations received via the user interface (118) from the corresponding source (114) or the user. In an embodiment, the policy configurations may include, but are not limited to, data
format conversion, data cleaning, data enrichment rules, and so forth. Further, the
normalized data from the destination (116) may be stored in the second database (122).
[0056] In another embodiment, the system (106) may have the normalization layer (not shown) that may be configured to normalize the data prior to sending the data to the destination (116) based on the policy configurations received via the user interface (118) from the corresponding source (114). In such an embodiment, the system (106) may be configured to transmit the normalized data to the corresponding destinations (116) for further processing.
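A minimal sketch of applying such policy configurations (format conversion, data cleaning, and enrichment rules) to a fetched record is given below; the field names and the policy structure are assumptions made only for illustration.

```python
# Illustrative sketch only: applying policy configurations (format conversion,
# cleaning, enrichment) to records before or after transfer. Field names and
# the policy layout are hypothetical.
from datetime import datetime, timezone
from typing import Optional

POLICY = {
    "timestamp_format": "%Y-%m-%dT%H:%M:%S",    # format conversion rule
    "drop_if_missing": ["source_id"],           # data cleaning rule
    "enrich": {"pipeline": "load-factor-push"}, # data enrichment rule
}

def normalize(record: dict, policy: dict) -> Optional[dict]:
    """Return the normalized record, or None if it is cleaned out."""
    if any(record.get(field) in (None, "") for field in policy["drop_if_missing"]):
        return None
    out = dict(record)
    out["fetched_at"] = datetime.now(timezone.utc).strftime(policy["timestamp_format"])
    out.update(policy["enrich"])
    return out

if __name__ == "__main__":
    print(normalize({"source_id": "src-1", "value": 42}, POLICY))
    print(normalize({"value": 7}, POLICY))  # dropped: missing source_id
```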
[0057] FIG. 2 illustrates an exemplary block diagram (200) of the system
(106), in accordance with an embodiment of the present disclosure.
[0058] In an embodiment, the system (106) may include a monitoring unit
(202), a memory (204), an interfacing unit (206), a processing unit (208) and a
database (210). The processing unit (208) further comprises a training module
(212), an analysing module (214), a load determination module (216) and a data transfer module (218).
[0059] In an embodiment, the monitoring unit (202) may be configured to
monitor the destinations (116) for capturing the metrics associated with the
corresponding destinations (116) while transferring the data to the corresponding
destinations (116). In such embodiment, the monitoring unit (202) may be
configured to continuously capture the real-time metrics from each of the
corresponding destinations (116) such that the metrics may be, but not limited to,
the interface level metrics, the network level metrics, the Internet Protocol (IP) level
metrics, and so forth. Further, in an exemplary embodiment, the metrics may be,
but not limited to, the latency, the bandwidth, the reachability, and so forth. In an
embodiment, the monitoring unit (202) may be configured to store the captured
metrics to the database (210).
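As one possible, non-limiting way of capturing reachability and latency for a destination endpoint, the monitoring step could be sketched as follows; the endpoint address, port, and the use of a plain TCP connect as the probe are assumptions made only for illustration.

```python
# Illustrative sketch only: one way a monitoring unit could capture
# reachability and latency for a destination. The host/port and TCP-connect
# probe are assumptions; other interface, network, or IP level metrics could
# be captured similarly.
import socket
import time

def capture_destination_metrics(host: str, port: int, timeout: float = 2.0) -> dict:
    start = time.perf_counter()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            latency_ms = (time.perf_counter() - start) * 1000.0
            return {"reachable": True, "latency_ms": latency_ms}
    except OSError:
        return {"reachable": False, "latency_ms": None}

if __name__ == "__main__":
    # Hypothetical destination endpoint.
    print(capture_destination_metrics("127.0.0.1", 80))
```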
[0060] The memory (204) may be configured to store computer-readable
instructions or routines in a non-transitory computer readable storage medium. In
an aspect, the memory (204) may be configured to store program instructions that
may be executed to perform tasks associated with the system (106). The memory
(204) may include any non-transitory storage device including, for example, but not
limited to, a volatile memory such as a Random-Access Memory (RAM), or a non-volatile memory such as an Erasable Programmable Read Only Memory (EPROM), a flash memory, and the like. Embodiments of the present invention are intended to include or otherwise cover any type of the memory (204) including known related art and/or
later developed technologies.
[0061] In an embodiment, the interfacing unit (206) may comprise a variety
of interfaces, for example, interfaces for data input and output devices (I/O), storage
devices, and the like. The interfacing unit (206) may facilitate communication
through the system (106). The interfacing unit (206) may also provide a
communication pathway for various other units/modules of the system (106).

[0062] In an embodiment, the database (210) offers functionality to manage,
capture, store, and retrieve data. In an embodiment, the database (210) is configured
to serve as a centralized repository for storing the captured metrics, the normalized
data associated with the destinations (116), and the metadata-related information.
The database (210) may also be configured to store predefined patterns of the
metrics. The database (210) is designed to interact seamlessly with other
components of the system (106), such as the training module (212), the analysing
module (214), the load determination module (216), and the data transfer module
(218), to support the functionality of the system (106) effectively. The database
(210) may store data that may be either stored or generated as a result of
functionalities implemented by any of the components of the processing unit (208). In an embodiment, the database (210) may be separate from the system (106).
[0063] The modules are controlled by the processing unit (208) which
executes the computer-readable instructions retrieved from the memory (204). The
processing unit (208) further interacts with the interfacing unit (206) to facilitate
user interaction and to provide options for managing and configuring the system
(106). The processing unit (208) may be implemented as one or more
microprocessors, microcomputers, microcontrollers, digital signal processors,
central processing units, logic circuitries, and/or any devices that process data based
on operational instructions.
[0064] In an embodiment, the training module (212) may be configured to
access the metrics stored in the database (210). Further, the training module (212)
may be configured to utilize the stored metrics as training data for training an
AI/ML model. In an exemplary embodiment of the present invention, the training
module (212) may be configured to feed the stored metrics into the AI/ML
algorithm for generating a trained AI/ML-based model for determining the current
load state of each of the destinations (116).
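A minimal sketch of such training, assuming a scikit-learn logistic regression over latency and bandwidth features labelled with load states, is given below; the specification does not prescribe any particular algorithm, feature set, or labelling.

```python
# Illustrative sketch only: training a model on stored metrics to classify a
# destination's load state. The use of scikit-learn logistic regression, the
# features, and the labels are assumptions; the specification only requires
# feeding metrics into an AI/ML algorithm to obtain a trained model.
from sklearn.linear_model import LogisticRegression

# Historical metrics: [latency_ms, bandwidth_mbps]; labels: 1 = overload, 0 = normal.
X = [[50, 800], [80, 700], [400, 90], [350, 120], [60, 900], [500, 60]]
y = [0, 0, 1, 1, 0, 1]

model = LogisticRegression().fit(X, y)

def current_load_state(latency_ms: float, bandwidth_mbps: float) -> str:
    prediction = model.predict([[latency_ms, bandwidth_mbps]])[0]
    return "overload" if prediction == 1 else "normal/underload"

if __name__ == "__main__":
    print(current_load_state(450, 80))   # expected: overload
    print(current_load_state(40, 850))   # expected: normal/underload
```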
[0065] The analysing module (214) may be configured to analyse the
captured metrics by using an analysing technique, in an embodiment. The analysing
technique may include, but is not limited to, a machine learning algorithm, an AI
algorithm, and so forth. In an embodiment, the analysing module (214) may be
configured to analyse the captured metrics by using the analysing technique for
determining a pattern of the metrics. In an exemplary embodiment, the analysing
module (214) may be configured to compare the pattern of the corresponding
metrics of a corresponding destination of the destinations (116) with the predefined
pattern of the corresponding metrics. In an aspect, the predefined pattern of the
metrics may be a normal pattern that may be established for each metric based on
historical analysis from the destinations (116). The predefined pattern may be
stored in the database (210).
[0066] In an embodiment, the analysing module (214) may be configured
to generate a signal of load determination for the corresponding destination (116), when the pattern of at least one metric deviates from the predefined pattern of the
corresponding metric in the corresponding destination (116). The analysing module
(214) may be configured to transmit the generated signal to the load determination module (216). In another embodiment, the analysing module (214) may be configured to enable the monitoring unit (202) to continue capturing the metrics of the destinations (116) when the pattern of all the metrics lies within the predefined
pattern of the corresponding metrics in the corresponding destination (116).
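A minimal sketch of such a deviation check, assuming the predefined pattern is summarised by a historical mean and standard deviation with a three-sigma band, is given below; these statistics and the threshold are assumptions made only for illustration.

```python
# Illustrative sketch only: detecting when a captured metric deviates from a
# predefined ("normal") pattern. The mean/standard-deviation band and the
# three-sigma threshold are assumptions; the specification only requires a
# comparison against a predefined pattern from historical analysis.
from statistics import mean, stdev

def deviates(history: list[float], current: float, n_sigma: float = 3.0) -> bool:
    """Signal load determination when `current` leaves the historical band."""
    mu, sigma = mean(history), stdev(history)
    return abs(current - mu) > n_sigma * max(sigma, 1e-9)

if __name__ == "__main__":
    latency_history_ms = [48, 52, 50, 47, 53, 49, 51]
    print(deviates(latency_history_ms, 51))    # False: within the normal pattern
    print(deviates(latency_history_ms, 420))   # True: triggers load determination
```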
[0067] The load determination module (216) may be configured to
determine the current load state on the corresponding destination (116) by using the analysing technique based on the signal received from the analysing module (214).
In an exemplary embodiment, the load determination module (216) may be
configured to compare the current load state of the destination (116) with a predefined level of load stored in the database (210). In an embodiment, the load determination module (216) may be configured to generate a control signal when the current load state of the destination (116) exceeds the predefined level of load.
Such a condition may indicate that the current load state of the destination (116) is
an overload state. The load determination module (216) may be configured to transmit the generated control signal to the data transfer module (218).

[0068] In another embodiment, the load determination module (216) may
be configured to generate a start signal when the current load state of the destination
(116) falls within the predefined level of load. Such a condition may indicate that
the current load state on the destination (116) is a normal state or an underloaded
state. The load determination module (216) may be configured to transmit the generated start signal to the data transfer module (218).
[0069] The data transfer module (218) may be configured to control the
transfer of the data to the corresponding destination (116) based on the received
stop signal, in an embodiment. In such an embodiment, the data transfer module (218)
may be configured to control the transfer of the data to the corresponding
destination (116) by stopping the transfer of the data to the corresponding
destination (116). In another embodiment, the data transfer module (218) may be
configured to automatically transfer or push the data to the corresponding
destination (116) based on the received start signal.
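A minimal sketch of this stop/start control, assuming a numeric load fraction and a predefined level of 0.8, is given below; the load-state values and thresholds are assumptions made only for illustration.

```python
# Illustrative sketch only: turning a determined load state into stop/start
# control of the data push, analogous to the load determination and data
# transfer modules. The enum values and thresholds are assumptions.
from enum import Enum

class LoadState(Enum):
    UNDERLOAD = "underload"
    NORMAL = "normal"
    OVERLOAD = "overload"

PREDEFINED_LEVEL = 0.8  # assumed fraction of capacity treated as overload

def determine_load_state(current_load: float) -> LoadState:
    if current_load > PREDEFINED_LEVEL:
        return LoadState.OVERLOAD
    return LoadState.NORMAL if current_load > 0.3 else LoadState.UNDERLOAD

def control_push(destination: str, current_load: float, pushing: bool) -> bool:
    """Return the new pushing flag: stop on overload, (re)start otherwise."""
    state = determine_load_state(current_load)
    if state is LoadState.OVERLOAD:
        print(f"stop signal -> {destination}")
        return False
    if not pushing:
        print(f"start signal -> {destination}")
    return True

if __name__ == "__main__":
    pushing = True
    pushing = control_push("destination-1", 0.92, pushing)  # overload: stop
    pushing = control_push("destination-1", 0.40, pushing)  # normal: resume
```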
[0070] Although the FIG. 2 shows an exemplary block diagram (200) of the
system (106); however, in other embodiments, the system (106) may include fewer
components, different components, differently arranged components, or additional
functional components than depicted in the FIG. 2. Additionally, or alternatively, one or more components of the system (106) may perform functions described as being performed by one or more other components of the system (106).
[0071] FIG. 3 illustrates an exemplary flow diagram representation (300)
for implementing the system (106), in accordance with an embodiment of the present disclosure.
[0072] Referring to the FIG. 3, the system (106) may capture the metrics
corresponding to a destination 1 (116a), a destination 2 (116b) and a destination 3
(116c). As discussed above, the metrics may include, but not be limited to, the
latency, the bandwidth, the reachability, and so forth. Based on the captured
metrics, the system (106) may utilize the AI/ML engine (110) to analyze the
captured metrics and may determine the current load on each of the destination 1
(116a), the destination 2 (116b) and the destination 3 (116c). Further, based on the
determined current load, the system (106) may automatically balance the load on
each of the destination 1 (116a), the destination 2 (116b) and the destination 3
(116c) in real time by stopping transfer of the data to the destination 1 (116a) and
by continuing to transfer the data to the destination 2 (116b) and the destination 3
(116c). Referring to the FIG. 3, it is shown that the determined current load on the
destination 1 (116a) is above the predefined level of load, due to which the system
(106) stops the transfer of the data to the destination 1 (116a).
[0073] Therefore, the present disclosure does not require a load balancer to
be put in place to determine the load on each of the destination 1 (116a), the
destination 2 (116b) and the destination 3 (116c). The system (106), in itself, using
the AI/ML engine (110), may be able to automatically analyse the metrics of the
destination 1 (116a), destination 2 (116b) and destination 3 (116c), determine the load, balance the load, and transfer or stop the data to each of the destination 1 (116a), destination 2 (116b) and destination 3 (116c).
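As a non-limiting worked example of the FIG. 3 scenario, assumed load values for the three destinations (116a, 116b, 116c) can be compared against an assumed predefined level to reproduce the stop/continue decisions described above.

```python
# Illustrative sketch only: the FIG. 3 scenario with three destinations, where
# the push to destination 1 stops because its determined load exceeds the
# predefined level. The numeric loads and the 0.8 level are assumed values.
PREDEFINED_LEVEL = 0.8

determined_load = {
    "destination 1 (116a)": 0.93,  # above the predefined level
    "destination 2 (116b)": 0.55,
    "destination 3 (116c)": 0.40,
}

for dest, load in determined_load.items():
    action = "stop data transfer" if load > PREDEFINED_LEVEL else "continue data transfer"
    print(f"{dest}: load={load:.2f} -> {action}")
```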
[0074] FIG. 4 illustrates an exemplary computer system (400) in which or
with which embodiments of the present disclosure may be implemented. As shown in FIG. 4, the computer system (400) may include an external storage device (410), a bus (420), a main memory (430), a read only memory (440), a mass storage device (450), a communication port (460), and a processor (470). A person skilled in the
art will appreciate that the computer system (400) may include more than one
processor (470) and the communication ports (460). The processor (470) may include various modules associated with embodiments of the present disclosure.
[0075] In an embodiment, the external storage device (410) may be any
device that is commonly known in the art such as, but not limited to, a memory
card, a memory stick, a solid-state drive, a hard disk drive (HDD), and so forth.

[0076] In an embodiment, the bus (420) may be communicatively coupled
with the processor(s) (470) and with the other memory, storage, and communication
blocks. The bus (420) may be, e.g., a Peripheral Component Interconnect (PCI)/PCI
Extended (PCI-X) bus, a Small Computer System Interface (SCSI), a Universal
Serial Bus (USB) or the like, for connecting expansion cards, drives and other subsystems as well as other buses, such as a front side bus (FSB), which connects the processor (470) to the computer system (400).
[0077] In an embodiment, the main memory (430) may be a Random Access
Memory (RAM), or any other dynamic storage device commonly known in the art. The Read-only memory (440) may be any static storage device(s) e.g., but not limited to, Programmable Read Only Memory (PROM) chips for storing static information e.g., start-up or Basic Input/Output System (BIOS) instructions for the
processor (470).
[0078] In an embodiment, the mass storage device (450) may be any current
or future mass storage solution, which may be used to store information and/or
instructions. Exemplary mass storage solutions include, but are not limited to, a
Parallel Advanced Technology Attachment (PATA) or a Serial Advanced
Technology Attachment (SATA) hard disk drives or solid-state drives (internal or
external, e.g., having Universal Serial Bus (USB) and/or Firewire interfaces), one
or more optical discs, Redundant Array of Independent Disks (RAID) storage, e.g.,
an array of disks (e.g., SATA arrays).
[0079] Further, the communication port (460) may be any of an RS-232 port
for use with a modem-based dialup connection, a 10/100 Ethernet port, a Gigabit
or 10 Gigabit port using copper or fiber, a serial port, a parallel port, or other
existing or future ports. The communication port (460) may be chosen depending
on the network (108), such as a Local Area Network (LAN), Wide Area Network
(WAN), or any network to which the computer system (400) connects.

[0080] Optionally, operator and administrative interfaces, e.g., a display, a
keyboard, a joystick, and a cursor control device, may also be coupled to the bus
(420) to support a direct operator interaction with the computer system (400). Other
operator and administrative interfaces may be provided through network
connections connected through the communication port (460). Components
described above are meant only to exemplify various possibilities. In no way should the aforementioned exemplary computer system (400) limit the scope of the present disclosure.
[0081] FIG. 5 illustrates a flowchart of a method (500) for managing data
transfers to destinations (116) for load balancing, in accordance with an embodiment of the present disclosure.
[0082] Step (502) includes a step of monitoring a plurality of destinations
(116) for capturing one or more metrics associated with the plurality of destinations
(116) while transferring data to the corresponding plurality of destinations (116).
[0083] Step (504) includes a step of analysing the one or more captured
metrics using an analysing technique for determining a pattern of the corresponding
one or more metrics.
[0084] Step (506) includes a step of comparing the pattern of the
corresponding one or more metrics of a corresponding destination of the plurality
of destinations (116) with a predefined pattern of the corresponding one or more
metrics.
[0085] Step (508) includes a step of determining a current load state on the
corresponding destination of the plurality of destinations (116) by using the
analysing technique when the pattern of a corresponding metric of the one or more
metrics deviates from the predefined pattern of the corresponding metric of the one
or more metrics in the corresponding destination of the plurality of destinations (116).

[0086] Step (510) includes a step of controlling a transfer of the data to the
corresponding destination of the plurality of destinations (116) for the load
balancing in real time when the determined current load state of the corresponding
destination of the plurality of destinations (116) is identified as an overload state.
[0087] In an embodiment, the one or more metrics includes
interface level metrics, network level metrics, Internet Protocol (IP) level metrics,
or a combination thereof.
[0088] In an embodiment, the analysing technique includes an Artificial
Intelligence (AI)/Machine Learning (ML) based analysing technique.
[0089] In an embodiment, the method (500) includes a step of feeding the
one or more metrics into an AI/ML algorithm to generate a trained AI/ML based
model for determining the current load state on each of the plurality of destinations (116).
[0090] In an embodiment, the controlling of the transfer of the data to the
corresponding destination of the plurality of destinations (116) for the load
balancing includes stopping the transfer of the data to the corresponding destination of the plurality of destinations (116) for the load balancing.
[0091] In an embodiment, the method (500) includes a step of automatically
transferring the data to the corresponding destination of the plurality of destinations
(116) when the determined current load state of the corresponding destination of the plurality of destinations (116) is identified as one of, a normal state or an underload state.
[0092] In an embodiment, the present disclosure discloses a user equipment
(UE) configured to manage data transfers to destinations for load balancing. The user equipment comprising: a processing unit. The user equipment further comprising: a computer readable storage medium storing programming for
execution by the processing unit, the programming including instructions to receive
one or more captured metrics associated with the plurality of destinations at a user
interface; analyse the one or more received metrics using an analysing technique
for determining a pattern of the corresponding one or more metrics; compare the
pattern of the corresponding one or more metrics of a corresponding destination of
the plurality of destinations with a predefined pattern of the corresponding one or more metrics; determine a current load state on the corresponding destination of the plurality of destinations by using the analysing technique when the pattern of a corresponding metric of the one or more metrics deviates from the predefined
pattern of the corresponding metric of the one or more metrics in the corresponding
destination of the plurality of destinations; and control a transfer of data to the corresponding destination of the plurality of destinations for the load balancing in real time when the determined current load state of the corresponding destination of the plurality of destinations is identified as an overload state.
[0093] The present disclosure provides technical advancement related to a
field of data management and network systems. This advancement addresses limitations of existing solutions by dynamically managing data transfers between sources and destinations using real-time performance metrics and a push data
mechanism. The disclosure provides inventive aspects such as an integration of an
AI/ML engine to analyze metrics and make intelligent decisions about data transfer rates, which offers significant improvements in performance and reliability. By implementing this invention, the present system enhances data transfer efficiency and prevents destination overloads, resulting in a more stable and optimized data
flow.
[0094] While the foregoing describes various embodiments of the present
disclosure, other and further embodiments of the present disclosure may be devised
without departing from the basic scope thereof. The scope of the present disclosure
is determined by the claims that follow. The present disclosure is not limited to the
described embodiments, versions or examples, which are included to enable a
person having ordinary skill in the art to make and use the present disclosure when combined with information and knowledge available to the person having ordinary skill in the art.
[0095] While considerable emphasis has been placed herein on the preferred
embodiments, it will be appreciated that many embodiments can be made and that many changes can be made in the preferred embodiments without departing from the principles of the disclosure. These and other changes in the preferred embodiments of the disclosure will be apparent to those skilled in the art from the disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be implemented merely as illustrative of the disclosure and not as a limitation.
ADVANTAGES OF THE PRESENT DISCLOSURE
[0096] The present disclosure provides a system and a method to balance
load of destinations without a load balancer, thereby leading to efficient resource
management.
[0097] The present disclosure provides a system and a method to balance
load of destinations without manual intervention.
[0098] The present disclosure uses advanced artificial intelligence (AI)
or machine learning (ML) based models to take actions in real-time based on metrics captured for destinations.
[0099] The present disclosure proactively tracks and monitors a behaviour
of a destination and automatically starts/stops a data push to that destination, thereby saving time required to push data from a source to the destination.

WE CLAIM:
1. A method (500) for managing data transfers to a plurality of destinations (116)
for load balancing, wherein the method (500) comprising steps of:
monitoring (502), by a monitoring unit (202), the plurality of destinations (116) for capturing one or more metrics associated with the plurality of destinations (116) while transferring data to the corresponding plurality of destinations (116);
analysing (504), by a processing unit (208), the one or more captured metrics using an analysing technique for determining a pattern of the corresponding one or more metrics;
comparing (506), by the processing unit (208), the pattern of the corresponding one or more metrics of a corresponding destination of the plurality of destinations (116) with a predefined pattern of the corresponding one or more metrics;
determining (508), by the processing unit (208), a current load state on the corresponding destination of the plurality of destinations (116) by using the analysing technique when the pattern of a corresponding metric of the one or more metrics deviates from the predefined pattern of the corresponding metric of the one or more metrics in the corresponding destination of the plurality of destinations (116); and
controlling (510), by the processing unit (208), transfer of data to the corresponding destination of the plurality of destinations (116) for the load balancing in real time when the determined current load state of the corresponding destination of the plurality of destinations (116) is identified as an overload state.
2. The method (500) as claimed in claim 1, wherein the one or more metrics
comprise interface level metrics, network level metrics, Internet Protocol (IP)
level metrics, or a combination thereof.

3. The method (500) as claimed in claim 1, wherein the analysing technique comprises an Artificial Intelligence (AI)/Machine Learning (ML) based analysing technique.
4. The method (500) as claimed in claim 1, comprising a step of feeding, by the processing unit (208), the one or more metrics into an AI/ML algorithm to generate a trained AI/ML based model for determining the current load state on each of the plurality of destinations (116).
5. The method (500) as claimed in claim 1, wherein controlling the transfer of the data to the corresponding destination of the plurality of destinations (116) for the load balancing comprises stopping the transfer of the data to the corresponding destination of the plurality of destinations (116) for the load balancing.
6. The method (500) as claimed in claim 1, comprising a step of automatically transferring, by the processing unit (208), the data to the corresponding destination of the plurality of destinations (116) when the determined current load state of the corresponding destination of the plurality of destinations (116) is identified as one of, a normal state or an underload state.
7. A system (106) for managing data transfers to a plurality of destinations (116) for load balancing, wherein the system (106) comprising:
a monitoring unit (202) configured to monitor the plurality of destinations (116) for capturing one or more metrics associated with the plurality of destinations (116) while transferring data to the corresponding plurality of destinations (116); and
a processing unit (208) communicatively coupled to the monitoring unit (202), wherein the processing unit (208) is configured to:

analyze the one or more captured metrics using an analysing technique for determining a pattern of the corresponding one or more metrics;
compare the pattern of the corresponding one or more metrics of a corresponding destination of the plurality of destinations (116) with a predefined pattern of the corresponding one or more metrics;
determine a current load state on the corresponding destination of the plurality of destinations (116) by using the analysing technique when the pattern of a corresponding metric of the one or more metrics deviates from the predefined pattern of the corresponding metric of the one or more metrics in the corresponding destination of the plurality of destinations (116); and
control transfer of the data to the corresponding destination of the plurality of destinations (116) for the load balancing in real time when the determined current load state of the corresponding destination of the plurality of destinations (116) is identified as an overload state.
8. The system (106) as claimed in claim 7, wherein the one or more metrics comprises interface level metrics, network level metrics, Internet Protocol (IP) level metrics, or a combination thereof.
9. The system (106) as claimed in claim 7, wherein the analysing technique comprises an Artificial Intelligence (AI)/Machine Learning (ML) based analysing technique.
10. The system (106) as claimed in claim 7, wherein the processing unit (208) is configured to feed the one or more metrics into an AI/ML algorithm to generate a trained AI/ML based model for determining the current load state on each of the plurality of destinations (116).

11. The system (106) as claimed in claim 7, wherein the processing unit (208) is configured to control the transfer of the data to the corresponding destination of the plurality of destinations (116) by stopping the transfer of the data to the corresponding destination of the plurality of destinations (116).
12. The system (106) as claimed in claim 7, wherein the processing unit (208) is configured to automatically transfer the data to the corresponding destination of the plurality of destinations (116) when the determined current load state of the corresponding destination of the plurality of destinations (116) is identified as one of, a normal load or an underload.
13. A user equipment (102) configured to manage data transfers to a plurality of destinations (116) for load balancing, the user equipment (102) comprising:
a processing unit (208); and
a computer readable storage medium storing programming for execution by the processing unit (208), the programming including instructions to:
receive one or more metrics associated with the plurality of destinations (116) at a user interface (118);
analyze the received one or more metrics using an analysing technique for determining a pattern of the corresponding one or more metrics;
compare the pattern of the corresponding one or more metrics of a corresponding destination of the plurality of destinations (116) with a predefined pattern of the corresponding one or more metrics;
determine a current load state on the corresponding destination of the plurality of destinations (116) by using the analysing technique when the pattern of a corresponding metric of the one or more metrics deviates from the predefined pattern of the corresponding metric of the one or more metrics in the corresponding destination of the plurality of destinations (116); and

control transfer of data to the corresponding destination of the plurality of destinations (116) for the load balancing in real time when the determined current load state of the corresponding destination of the plurality of destinations (116) is identified as an overload state.
14. The user equipment (102) as claimed in claim 13, wherein the one or more metrics comprises interface level metrics, network level metrics, Internet Protocol (IP) level metrics, or a combination thereof.
15. The user equipment (102) as claimed in claim 13, wherein the analysing technique comprises an Artificial Intelligence (AI)/Machine Learning (ML) based analysing technique.
16. The user equipment (102) as claimed in claim 13, wherein the processing unit (208) is configured to feed the one or more metrics into an AI/ML algorithm to generate a trained AI/ML based model for determining the current load state on each of the plurality of destinations (116).
17. The user equipment (102) as claimed in claim 13, wherein the processing unit (208) is configured to control the transfer of the data to the corresponding destination of the plurality of destinations (116) by stopping the transfer of the data to the corresponding destination of the plurality of destinations (116).
18. The user equipment (102) as claimed in claim 13, wherein the processing unit (208) is configured to automatically transfer the data to the corresponding destination of the plurality of destinations (116) when the determined current load state of the corresponding destination of the plurality of destinations (116) is identified as one of, a normal state or an underload state.

Documents

Application Documents

# Name Date
1 202321047104-STATEMENT OF UNDERTAKING (FORM 3) [13-07-2023(online)].pdf 2023-07-13
2 202321047104-PROVISIONAL SPECIFICATION [13-07-2023(online)].pdf 2023-07-13
3 202321047104-FORM 1 [13-07-2023(online)].pdf 2023-07-13
4 202321047104-DRAWINGS [13-07-2023(online)].pdf 2023-07-13
5 202321047104-DECLARATION OF INVENTORSHIP (FORM 5) [13-07-2023(online)].pdf 2023-07-13
6 202321047104-FORM-26 [13-09-2023(online)].pdf 2023-09-13
7 202321047104-POA [29-05-2024(online)].pdf 2024-05-29
8 202321047104-FORM 13 [29-05-2024(online)].pdf 2024-05-29
9 202321047104-AMENDED DOCUMENTS [29-05-2024(online)].pdf 2024-05-29
10 202321047104-Request Letter-Correspondence [03-06-2024(online)].pdf 2024-06-03
11 202321047104-Power of Attorney [03-06-2024(online)].pdf 2024-06-03
12 202321047104-Covering Letter [03-06-2024(online)].pdf 2024-06-03
13 202321047104-ENDORSEMENT BY INVENTORS [26-06-2024(online)].pdf 2024-06-26
14 202321047104-DRAWING [26-06-2024(online)].pdf 2024-06-26
15 202321047104-CORRESPONDENCE-OTHERS [26-06-2024(online)].pdf 2024-06-26
16 202321047104-COMPLETE SPECIFICATION [26-06-2024(online)].pdf 2024-06-26
17 202321047104-ORIGINAL UR 6(1A) FORM 26-270624.pdf 2024-07-01
18 202321047104-CORRESPONDENCE(IPO)-(WIPO DAS)-12-07-2024.pdf 2024-07-12
19 Abstract.jpg 2024-10-09
20 202321047104-FORM-9 [16-10-2024(online)].pdf 2024-10-16
21 202321047104-FORM 18A [18-10-2024(online)].pdf 2024-10-18
22 202321047104-FORM 3 [04-11-2024(online)].pdf 2024-11-04
23 202321047104-FER.pdf 2024-12-26
24 202321047104-FORM 3 [26-02-2025(online)].pdf 2025-02-26
25 202321047104-FORM 3 [26-02-2025(online)]-1.pdf 2025-02-26
26 202321047104-FER_SER_REPLY [06-03-2025(online)].pdf 2025-03-06
27 202321047104-Proof of Right [19-03-2025(online)].pdf 2025-03-19
28 202321047104-ORIGINAL UR 6(1A) FORM 1-270325.pdf 2025-03-28
29 202321047104-US(14)-HearingNotice-(HearingDate-01-12-2025).pdf 2025-11-04

Search Strategy

1 SearchStrategy202321047104E_23-12-2024.pdf