Abstract: The present disclosure relates to a method and a system for increasing throughput of data transactions by implementing a storage system. The disclosure encompasses: receiving, at the storage system [300a], transaction requests from application processes [312a, 312b, 312c] for accessing data from the storage system [300a]; identifying, a corresponding database ring from a plurality of database rings [310a, 310b, 310c] of the storage system [300a] for serving the transaction requests from application processes [312a, 312b, 312c] based on a unique data type associated with each of the transaction requests from the application processes [312a, 312b, 312c]; and facilitating, an access of the identified corresponding database ring from the plurality of database rings [310a, 310b, 310c] for serving the transaction requests from application processes [312a, 312b, 312c] based on the unique data type associated with each transaction request. [FIG. 4]
FORM 2
THE PATENTS ACT, 1970 (39 OF 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
“METHOD AND SYSTEM FOR INCREASING THROUGHPUT OF
DATA TRANSACTIONS BY IMPLEMENTING A STORAGE SYSTEM”
We, Jio Platforms Limited, an Indian National, of Office - 101, Saffron, Nr. Centre Point,
Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
The following specification particularly describes the invention and the manner in which
it is to be performed.
METHOD AND SYSTEM FOR INCREASING THROUGHPUT OF DATA
TRANSACTIONS BY IMPLEMENTING A STORAGE SYSTEM
FIELD OF INVENTION
[0001] Embodiments of the present disclosure generally relate to the field of
wireless communication systems. More particularly, the embodiments of the
present disclosure relate to methods and systems for increasing throughput of data
transactions by implementing a storage system.
BACKGROUND
[0002] The following description of related art is intended to provide background
information pertaining to the field of the disclosure. This section may include
certain aspects of the art that may be related to various features of the present
disclosure. However, this section should be used only to enhance the understanding
of the reader with respect to the present disclosure, and not as an admission of
prior art.
[0003] Wireless communication technology has rapidly evolved over the past few
decades, with each generation bringing significant improvements and
advancements. The first generation of wireless communication technology was
based on analog technology and offered only voice services. However, with the
advent of the second-generation (2G) technology, digital communication and data
services became possible, and text messaging was introduced. 3G technology
marked the introduction of high-speed internet access, mobile video calling, and
location-based services. The fourth-generation (4G) technology revolutionized
wireless communication with faster data speeds, better network coverage, and
improved security. Currently, the fifth-generation (5G) technology is being
deployed, promising even faster data speeds, low latency, and the ability to connect
multiple devices simultaneously. With each generation, wireless communication
technology has become more advanced, sophisticated, and capable of delivering
more services to its users.
[0004] The nodes in the existing wireless communication systems, such as the
access and mobility management function (AMF) node and the session management
function (SMF) node, use a database for storing logical data in persistence mode
for different data types. For example, an AMF node has a distributed process
architecture wherein multiple core processes executing business logic run on
multiple nodes or containers that are connected to a database server. These also
need timely responses from the database. This also means that various processors
are connected to the databases and running at the backend at the same time, and
multiple threads may take application requests at the same time and may connect to
the database (DB) at the same time. These nodes may generate very high rates of
transactions or data traffic towards the database (DB). This may also lead to several
other limitations such as timeouts, buffering, and queueing of requests due to a large
volume of traffic at the same time.
[0005] As each database has a limited capacity or rate of handling transactions, this
limits fulfilment of the applications' requirement of a very high transaction rate. This
limitation, in many cases, causes lags or delays in response. This, in turn, affects the
application server's capacity to support a parallel transaction rate. Also, on application
start or stop, different data types stored in the DB need specific handling to be preserved
or cleared, which is not served by using a single DB residing inside a single
container.
[0006] Thus, there exists an imperative need in the art to provide a method and a
system for increasing throughput of data transactions by implementing a storage
system, which the present disclosure aims to address.
SUMMARY
[0007] This section is provided to introduce certain aspects of the present disclosure
in a simplified form that are further described below in the detailed description.
This summary is not intended to identify the key features or the scope of the claimed
subject matter.
[0008] An aspect of the present disclosure may relate to a method for increasing
throughput of data transactions by implementing a storage system. The method
includes receiving, by a processing unit at the storage system, one or more
transaction requests from one or more application processes for accessing data from
the storage system. Next, the method includes identifying, by the processing unit, a
corresponding database ring from a plurality of database rings of the storage system
for serving the one or more transaction requests from one or more application
processes based on a unique data type associated with each of the one or more
transaction requests from the one or more application processes. Thereafter, the
method includes facilitating, by the processing unit, an access of the identified
corresponding database ring from the plurality of database rings for serving the one
or more transaction requests from one or more application processes based on the
unique data type associated with each transaction request.
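By way of a purely illustrative sketch, the receiving, identifying, and facilitating steps above may be expressed as follows. The class names, the dictionary-based request format, and the example data types ("session", "subscriber") are assumptions made for illustration and are not taken from the disclosure.

```python
# Illustrative sketch only. All names and the request format are assumptions.

class DatabaseRing:
    """A database setup dedicated to one unique logical data type."""

    def __init__(self, data_type):
        self.data_type = data_type
        self.store = {}  # stands in for this ring's separate database instance

    def serve(self, request):
        # Serve the transaction against this ring's own database instance.
        if request["op"] == "write":
            self.store[request["key"]] = request["value"]
            return "OK"
        return self.store.get(request["key"])


class ProcessingUnit:
    """Receives requests, identifies the ring by data type, and serves them."""

    def __init__(self, rings):
        # One ring per unique data type (the plurality of database rings).
        self.rings = {ring.data_type: ring for ring in rings}

    def handle(self, request):
        # Step 1: receive the transaction request from an application process.
        # Step 2: identify the corresponding ring from the request's data type.
        ring = self.rings[request["data_type"]]
        # Step 3: facilitate access to the identified ring to serve the request.
        return ring.serve(request)


unit = ProcessingUnit([DatabaseRing("session"), DatabaseRing("subscriber")])
unit.handle({"data_type": "session", "op": "write", "key": "ue1", "value": "ctx"})
print(unit.handle({"data_type": "session", "op": "read", "key": "ue1"}))  # prints "ctx"
```

Because the ring is selected purely from the request's data type, requests for different data types never contend for the same database instance in this sketch.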
[0009] In an exemplary aspect of the present disclosure, the application processes
are associated with one or more network nodes.
[0010] In an exemplary aspect of the present disclosure, each database ring
of the plurality of database rings is connected to a separate database instance.
[0011] In an exemplary aspect of the present disclosure, the method further
comprises performing, by each database ring of the plurality of database rings, one
or more target actions for a corresponding data type handled by each database ring.
[0012] In an exemplary aspect of the present disclosure, the one or more target
actions are performed based on at least one of a start operation, a stop operation,
and a restart operation associated with the one or more application processes.
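The per-data-type target actions may be sketched, under assumed policies, as a simple lookup table. The data types and the preserve/clear values shown are hypothetical examples for illustration, not requirements of the disclosure.

```python
# Hypothetical sketch: the policy table contents are illustrative assumptions.

PRESERVE, CLEAR = "preserve", "clear"

# Each ring handles one data type and can therefore apply its own target
# action on application start/stop/restart, which a single shared database
# serving all data types cannot do per type.
RING_POLICIES = {
    "subscriber": {"start": PRESERVE, "stop": PRESERVE, "restart": PRESERVE},
    "session": {"start": CLEAR, "stop": CLEAR, "restart": PRESERVE},
}


def target_action(data_type, operation):
    """Return the action the ring for data_type performs on the operation."""
    return RING_POLICIES[data_type][operation]


print(target_action("session", "stop"))        # prints "clear"
print(target_action("subscriber", "restart"))  # prints "preserve"
```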
[0013] Another aspect of the present disclosure may relate to a storage system. The
storage system comprises a database container comprising a plurality of database rings,
wherein each database ring of the plurality of database rings is connected to one or
more application processes, and wherein each database ring of the plurality of
database rings is configured to handle a unique data type of data associated with
one or more transaction requests from the one or more application processes.
[0014] Another aspect of the present disclosure may relate to a system for
increasing throughput of data transactions by implementing a storage system. The
system comprises a processing unit configured to receive, at the storage system, one
or more transaction requests from one or more application processes for accessing
data from the storage system; the processing unit configured to identify a
corresponding database ring from a plurality of database rings of the storage system
for serving the one or more transaction requests from one or more application
processes based on a unique data type associated with each of the one or more
transaction requests from the one or more application processes; and the processing
unit configured to facilitate an access of the identified corresponding database ring
from the plurality of database rings for serving the one or more transaction requests
from one or more application processes based on the unique data type associated
with each transaction request.
[0015] Yet another aspect of the present disclosure may relate to a non-transitory
computer readable storage medium storing instructions for increasing throughput
of data transactions by implementing a storage system, the instructions including
executable code which, when executed by one or more units of a system, causes a
processing unit of the system to: receive, at the storage system, one or more
transaction requests from one or more application processes for accessing data from
the storage system; identify a corresponding database ring from a plurality of
database rings of the storage system for serving the one or more transaction requests
from one or more application processes based on a unique data type associated with
each of the one or more transaction requests from the one or more application
processes; and facilitate an access of the identified corresponding database ring from
the plurality of database rings for serving the one or more transaction requests from
one or more application processes based on the unique data type associated with
each transaction request.
OBJECTS OF THE INVENTION
[0016] Some of the objects of the present disclosure, which at least one
embodiment disclosed herein satisfies, are listed herein below.
[0017] It is an object of the present disclosure to provide a system and a method for
implementing a storage system that minimizes response time for serving application
requests.
[0018] It is another object of the present disclosure to provide a solution that
provides multiplexed throughput capacity on the same set of hardware.
DESCRIPTION OF THE DRAWINGS
[0019] The accompanying drawings, which are incorporated herein and constitute
a part of this disclosure, illustrate exemplary embodiments of the disclosed methods
and systems in which like reference numerals refer to the same parts throughout the
different drawings. Components in the drawings are not necessarily to scale,
emphasis instead being placed upon clearly illustrating the principles of the present
disclosure. Some drawings may indicate the components using block diagrams and
may not represent the internal circuitry of each component. It will be appreciated
by those skilled in the art that disclosure of such drawings includes disclosure of
electrical components, electronic components or circuitry commonly used to
implement such components.
[0020] FIG. 1 illustrates an exemplary block diagram representation of 5th
generation core (5GC) network architecture.
[0021] FIG. 2 illustrates an exemplary block diagram of a computing device upon
which the features of the present disclosure may be implemented in accordance with
exemplary implementation of the present disclosure.
[0022] FIG. 3 illustrates an exemplary block diagram of a system for increasing
throughput of data transactions by implementing a storage system, in accordance
with exemplary implementations of the present disclosure.
[0023] FIG. 4 illustrates a method flow diagram for increasing throughput of data
transactions by implementing a storage system, in accordance with exemplary
implementations of the present disclosure.
[0024] FIG. 5 illustrates an exemplary block diagram of an existing solution or
prior art for implementing a storage system.

[0025] The foregoing shall be more apparent from the following more detailed
description of the disclosure.
DETAILED DESCRIPTION
[0026] In the following description, for the purposes of explanation, various
specific details are set forth in order to provide a thorough understanding of
embodiments of the present disclosure. It will be apparent, however, that
embodiments of the present disclosure may be practiced without these specific
details. Several features described hereafter may each be used independently of one
another or with any combination of other features. An individual feature may not
address any of the problems discussed above or might address only some of the
problems discussed above.
[0027] The ensuing description provides exemplary embodiments only, and is not
intended to limit the scope, applicability, or configuration of the disclosure. Rather,
the ensuing description of the exemplary embodiments will provide those skilled in
the art with an enabling description for implementing an exemplary embodiment.
It should be understood that various changes may be made in the function and
arrangement of elements without departing from the spirit and scope of the
disclosure as set forth.
[0028] Specific details are given in the following description to provide a thorough
understanding of the embodiments. However, it will be understood by one of
ordinary skill in the art that the embodiments may be practiced without these
specific details. For example, circuits, systems, processes, and other components
may be shown as components in block diagram form in order not to obscure the
embodiments in unnecessary detail.
[0029] Also, it is noted that individual embodiments may be described as a process
which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure
diagram, or a block diagram. Although a flowchart may describe the operations as
a sequential process, many of the operations may be performed in parallel or
concurrently. In addition, the order of the operations may be re-arranged. A process
is terminated when its operations are completed but could have additional steps not
included in a figure.
[0030] The word “exemplary” and/or “demonstrative” is used herein to mean
serving as an example, instance, or illustration. For the avoidance of doubt, the
subject matter disclosed herein is not limited by such examples. In addition, any
aspect or design described herein as “exemplary” and/or “demonstrative” is not
necessarily to be construed as preferred or advantageous over other aspects or
designs, nor is it meant to preclude equivalent exemplary structures and techniques
known to those of ordinary skill in the art. Furthermore, to the extent that the terms
“includes,” “has,” “contains,” and other similar words are used in either the detailed
description or the claims, such terms are intended to be inclusive—in a manner
similar to the term “comprising” as an open transition word—without precluding
any additional or other elements.
[0031] As used herein, a “processing unit” or “processor” or “operating processor”
includes one or more processors, wherein processor refers to any logic circuitry for
processing instructions. A processor may be a general-purpose processor, a special
purpose processor, a conventional processor, a digital signal processor, a plurality
of microprocessors, one or more microprocessors in association with a (Digital
Signal Processing) DSP core, a controller, a microcontroller, Application Specific
Integrated Circuits, Field Programmable Gate Array circuits, any other type of
integrated circuits, etc. The processor may perform signal coding, data processing,
input/output processing, and/or any other functionality that enables the working of
the system according to the present disclosure. More specifically, the processor or
processing unit is a hardware processor.
[0032] As used herein, “a user equipment”, “a user device”, “a smart-user-device”,
“a smart-device”, “an electronic device”, “a mobile device”, “a handheld device”,
“a wireless communication device”, “a mobile communication device”, “a
communication device” may be any electrical, electronic and/or computing device
or equipment, capable of implementing the features of the present disclosure. The
user equipment/device may include, but is not limited to, a mobile phone,
smartphone, laptop, a general-purpose computer, desktop, personal digital assistant,
tablet computer, wearable device or any other computing device which is capable
of implementing the features of the present disclosure. Also, the user device may
contain at least one input means configured to receive an input from at least one of
a transceiver unit, a processing unit, a storage unit, a detection unit and any other
such unit(s) which are required to implement the features of the present disclosure.
[0033] As used herein, “storage unit” or “memory unit” refers to a machine or
computer-readable medium including any mechanism for storing information in a
form readable by a computer or similar machine. For example, a computer-readable
medium includes read-only memory (“ROM”), random access memory (“RAM”),
magnetic disk storage media, optical storage media, flash memory devices or other
types of machine-accessible storage media. The storage unit stores at least the data
that may be required by one or more units of the system to perform their respective
functions.
[0034] As used herein “interface” or “user interface” refers to a shared boundary
across which two or more separate components of a system exchange information
or data. The interface may also be referred to as a set of rules or protocols that define
communication or interaction of one or more modules or one or more units with
each other, which also includes the methods, functions, or procedures that may be
called.
[0035] All modules, units, components used herein, unless explicitly excluded
herein, may be software modules or hardware processors, the processors being a
general-purpose processor, a special purpose processor, a conventional processor,
a digital signal processor (DSP), a plurality of microprocessors, one or more
microprocessors in association with a DSP core, a controller, a microcontroller,
Application Specific Integrated Circuits (ASIC), Field Programmable Gate Array
circuits (FPGA), any other type of integrated circuits, etc.
[0036] As used herein the transceiver unit includes at least one receiver and at least
one transmitter configured respectively for receiving and transmitting data, signals,
information or a combination thereof between units/components within the system
and/or connected with the system.
[0037] As discussed in the background section, the current known solutions have
several shortcomings such as those related to limited throughput, high response
time for serving application requests, timeouts, different handling on application
cluster restart, and delays in response at the application, etc. The present disclosure
aims to overcome the above-mentioned and other existing problems in this field of
technology by providing methods and systems for increasing throughput of data
transactions by implementing a storage system. The present disclosure minimizes
response time for serving application requests. In the present solution, each data
type is served by a separate database ring. In the existing solutions related to
database setup, a set of threads accepts incoming traffic from application
processes, performs the required processing for a request, and responds within the
required minimal response time. In the present solution, there are multiple database
setups/rings in the same container, where each ring-specific setup handles one type
of logical data, providing multiplexed throughput capacity on the same set of
hardware.
[0038] In general, an application with a multiple distributed process architecture
needs very high database transaction rate support from database applications.
Database applications have a limit of finite transaction rate support. In order to cater
to the high transaction rate need, multiple setups/instances may be created inside the
same container, where container resource usage is shared across all the multiple
setups, which in turn makes multiple threads available for handling application
traffic, producing multiplexed throughput on the same hardware resource as per the
hardware's maximum limits in terms of throughput. Also, in such an approach, all
types of logical data may need separate handling on cluster stop, which is possible
in the present disclosure's approach.
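The multiplexing idea in the paragraph above can be sketched as follows, assuming each ring-specific setup owns its own pool of worker threads inside the shared container; the names, data types, and pool sizes are illustrative assumptions, not taken from the disclosure.

```python
# Illustrative sketch of the multiplexing idea: each ring-specific setup in
# the same container owns its own pool of worker threads, so per-ring
# transaction-rate capacity adds up on the shared hardware. All names and
# pool sizes here are assumptions for illustration.

from concurrent.futures import ThreadPoolExecutor


class RingSetup:
    def __init__(self, data_type, workers=4):
        self.data_type = data_type
        # Each ring-specific setup has its own threads accepting traffic,
        # instead of one thread set serving every data type.
        self.pool = ThreadPoolExecutor(max_workers=workers)

    def submit(self, transaction):
        # Queue a transaction (a callable) onto this ring's own worker pool.
        return self.pool.submit(transaction)


# Three ring setups inside one container share its resources while each
# handles only its own logical data type.
container = {t: RingSetup(t) for t in ("session", "subscriber", "policy")}

future = container["session"].submit(lambda: "served")
print(future.result())  # prints "served"
```

Because each pool serves a single data type, a burst of traffic for one type queues only on its own ring rather than behind every other type's requests.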
[0039] FIG. 1 illustrates an exemplary block diagram representation of 5th
generation core (5GC) network architecture, in accordance with exemplary
implementation of the present disclosure. As shown in FIG. 1, the 5GC network
architecture [100] includes a user equipment (UE) [102], a radio access network
(RAN) [104], an access and mobility management function (AMF) [106], a Session
Management Function (SMF) [108], a Service Communication Proxy (SCP) [110],
an Authentication Server Function (AUSF) [112], a Network Slice Specific
Authentication and Authorization Function (NSSAAF) [114], a Network Slice
Selection Function (NSSF) [116], a Network Exposure Function (NEF) [118], a
Network Repository Function (NRF) [120], a Policy Control Function (PCF) [122],
a Unified Data Management (UDM) [124], an application function (AF) [126], a
User Plane Function (UPF) [128], a data network (DN) [130], wherein all the
components are assumed to be connected to each other in a manner as obvious to
the person skilled in the art for implementing features of the present disclosure.
[0040] Radio Access Network (RAN) [104] is the part of a mobile
telecommunications system that connects user equipment (UE) [102] to the core
network (CN) and provides access to different types of networks (e.g., 5G network).
It consists of radio base stations and the radio access technologies that enable
wireless communication.
25 [0041] Access and Mobility Management Function (AMF) [106] is a 5G core
network function responsible for managing access and mobility aspects, such as UE
registration, connection, and reachability. It also handles mobility management
procedures like handovers and paging.
[0042] Session Management Function (SMF) [108] is a 5G core network function
responsible for managing session-related aspects, such as establishing, modifying,
and releasing sessions. It coordinates with the User Plane Function (UPF) [128] for
data forwarding and handles IP address allocation and QoS enforcement.
[0043] Service Communication Proxy (SCP) [110] is a network function in the 5G
core network that facilitates communication between other network functions by
providing a secure and efficient messaging service. It acts as a mediator for service-based interfaces.
[0044] Authentication Server Function (AUSF) [112] is a network function in the
5G core responsible for authenticating UEs during registration and providing
security services. It generates and verifies authentication vectors and tokens.
[0045] Network Slice Specific Authentication and Authorization Function
(NSSAAF) [114] is a network function that provides authentication and
authorization services specific to network slices. It ensures that UEs can access only
the slices for which they are authorized.
[0046] Network Slice Selection Function (NSSF) [116] is a network function
responsible for selecting the appropriate network slice for a UE based on factors
such as subscription, requested services, and network policies.
[0047] Network Exposure Function (NEF) [118] is a network function that exposes
capabilities and services of the 5G network to external applications, enabling
integration with third-party services and applications.
[0048] Network Repository Function (NRF) [120] is a network function that acts
as a central repository for information about available network functions and
services. It facilitates the discovery and dynamic registration of network functions.
[0049] Policy Control Function (PCF) [122] is a network function responsible for
policy control decisions, such as QoS, charging, and access control, based on
subscriber information and network policies.
[0050] Unified Data Management (UDM) [124] is a network function that
centralizes the management of subscriber data, including authentication,
authorization, and subscription information.
[0051] Application Function (AF) [126] is a network function that represents
external applications interfacing with the 5G core network to access network
capabilities and services.
[0052] User Plane Function (UPF) [128] is a network function responsible for
handling user data traffic, including packet routing, forwarding, and QoS
enforcement.
[0053] Data Network (DN) [130] refers to a network that provides data services to
user equipment (UE) in a telecommunications system. The data services may
include, but are not limited to, Internet services and private data network related services.
[0054] FIG. 2 illustrates an exemplary block diagram of a computing device [200]
(also referred herein as a computer system [200]) upon which the features of the
present disclosure may be implemented in accordance with exemplary
implementation of the present disclosure. In an implementation, the computing
device [200] may also implement a method for increasing throughput of data
transactions by implementing a storage system utilising the system. In another
implementation, the computing device [200] itself implements the method for
increasing throughput of data transactions by implementing a storage system using
one or more units configured within the computing device [200], wherein said one
or more units are capable of implementing the features as disclosed in the present
disclosure.
[0055] The computing device [200] may include a bus [202] or other
communication mechanism for communicating information, and a hardware
processor [204] coupled with bus [202] for processing information. The hardware
processor [204] may be, for example, a general-purpose microprocessor. The
computing device [200] may also include a main memory [206], such as a random-access memory (RAM), or other dynamic storage device, coupled to the bus [202]
for storing information and instructions to be executed by the processor [204]. The
main memory [206] also may be used for storing temporary variables or other
intermediate information during execution of the instructions to be executed by the
processor [204]. Such instructions, when stored in non-transitory storage media
accessible to the processor [204], render the computing device [200] into a special-purpose machine that is customized to perform the operations specified in the
instructions. The computing device [200] further includes a read only memory
(ROM) [208] or other static storage device coupled to the bus [202] for storing static
information and instructions for the processor [204].
[0056] A storage device [210], such as a magnetic disk, optical disk, or solid-state
drive is provided and coupled to the bus [202] for storing information and
instructions. The computing device [200] may be coupled via the bus [202] to a
display [212], such as a cathode ray tube (CRT), Liquid crystal Display (LCD),
Light Emitting Diode (LED) display, Organic LED (OLED) display, etc. for
displaying information to a computer user. An input device [214], including
alphanumeric and other keys, touch screen input means, etc. may be coupled to the
bus [202] for communicating information and command selections to the processor
[204]. Another type of user input device may be a cursor controller [216], such as
a mouse, a trackball, or cursor direction keys, for communicating direction
information and command selections to the processor [204], and for controlling
cursor movement on the display [212]. The input device typically has two degrees
of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allow
the device to specify positions in a plane.
[0057] The computing device [200] may implement the techniques described
herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware
and/or program logic which in combination with the computing device [200] causes
or programs the computing device [200] to be a special-purpose machine.
According to one implementation, the techniques herein are performed by the
computing device [200] in response to the processor [204] executing one or more
sequences of one or more instructions contained in the main memory [206]. Such
instructions may be read into the main memory [206] from another storage medium,
such as the storage device [210]. Execution of the sequences of instructions
contained in the main memory [206] causes the processor [204] to perform the
process steps described herein. In alternative implementations of the present
disclosure, hard-wired circuitry may be used in place of or in combination with
software instructions.
[0058] The computing device [200] also may include a communication interface
[218] coupled to the bus [202]. The communication interface [218] provides a two-way data communication coupling to a network link [220] that is connected to a
local network [222]. For example, the communication interface [218] may be an
integrated services digital network (ISDN) card, cable modem, satellite modem, or
a modem to provide a data communication connection to a corresponding type of
telephone line. As another example, the communication interface [218] may be a
local area network (LAN) card to provide a data communication connection to a
compatible LAN. Wireless links may also be implemented. In any such
implementation, the communication interface [218] sends and receives electrical,
electromagnetic or optical signals that carry digital data streams representing
various types of information.
[0059] The computing device [200] can send messages and receive data, including
program code, through the network(s), the network link [220] and the
communication interface [218]. In the Internet example, a server [230] might
transmit a requested code for an application program through the Internet [228], the
ISP [226], the local network [222], host [224] and the communication interface
[218]. The received code may be executed by the processor [204] as it is received,
and/or stored in the storage device [210], or other non-volatile storage for later
execution.
[0060] The computing device [200] encompasses a wide range of electronic
devices capable of processing data and performing computations. Examples of
computing devices [200] include, but are not limited to, personal computers,
laptops, tablets, smartphones, servers, and embedded systems. The devices may
operate independently or as part of a network and can perform a variety of tasks
such as data storage, retrieval, and analysis. Additionally, computing devices [200]
may include peripheral devices, such as monitors, keyboards, and printers, as well
as integrated components within larger electronic systems, showcasing their
versatility in various technological applications.
[0061] Referring to FIG. 5, an exemplary block diagram [500] of an existing
solution or prior art for implementing a storage system (such as a non-cluster
database) is shown. As per the traditional approach implemented by the system,
container resources are utilized by a single database application catering to all types
of logical data, or all ring traffic is managed by a single database instance, which
delays the response, affects the application server's capacity to support a parallel
transaction rate, and limits the applications' requirement of a very high transaction rate.
[0062] Referring to FIG. 3, an exemplary block diagram of a system [300] for
increasing throughput of data transactions by implementing a storage system, is
shown, in accordance with the exemplary implementations of the present
disclosure. The system [300] comprises at least one storage system [300a], at least
one processing unit [304], at least one storage unit [306] and at least one application
process [308]. The storage system [300a] further comprises at least one Database
Container [302], and Database Ring [310] (Database Ring 1 [310a], Database Ring
5 2 [310b] and Database Ring 3 [310c]). The application process [308] comprises
Process [312] (Process 1 [312a], Process 2 [312b] and Process 3 [312c]). Also, all
of the components/ units of the system [300] are assumed to be connected to each
other unless otherwise indicated below. As shown in the figures all units shown
within the system should also be assumed to be connected to each other. Also, in
10 FIG. 3 only a few units are shown, however, the system [300] may comprise
multiple such units or the system [300] may comprise any such numbers of said
units, as required to implement the features of the present disclosure. Further, in an
implementation, the system [300] may be present in a user device to implement the
features of the present disclosure. The system [300] may be a part of the user device
15 or may be independent of but in communication with the user device (may also
referred herein as a UE). In another implementation, the system [300] may reside
in a server or a network entity. In yet another implementation, the system [300] may
reside partly in the server/ network entity and partly in the user device.
[0063] The system [300] is configured for increasing throughput of data
transactions by implementing a storage system, with the help of the interconnection
between the components/units of the system [300].
[0064] The system [300] comprises a storage system [300a].
[0065] The storage system [300a] comprises a database container [302] comprising a plurality of database rings [310a, 310b, 310c]. In an exemplary implementation, the database container [302] is configured to run at least one database instance in a virtualized environment. Examples of the database may include, but are not limited to, MySQL, MongoDB, PostgreSQL, and the like. A database ring refers to an instance of a database inside the container [302]. The database ring is a logical partition for each data type. Examples of data types include, but are not limited to, a string data type, a hash data type, a list data type, and an object data type.
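The container-and-ring arrangement described above can be sketched in code. This is a minimal illustration only, assuming hypothetical class names (`DatabaseRing`, `DatabaseContainer`) that the disclosure does not prescribe; it shows one logical partition (ring) per data type inside a single container.

```python
from dataclasses import dataclass, field

# Hypothetical names for illustration; the disclosure does not prescribe an API.
@dataclass
class DatabaseRing:
    """A logical partition (database instance) dedicated to one data type."""
    ring_id: str
    data_type: str          # e.g. "string", "hash", "list", "object"
    store: dict = field(default_factory=dict)

@dataclass
class DatabaseContainer:
    """A container [302] hosting several database rings [310a, 310b, 310c]."""
    rings: dict = field(default_factory=dict)   # data_type -> DatabaseRing

    def add_ring(self, ring: DatabaseRing) -> None:
        self.rings[ring.data_type] = ring

container = DatabaseContainer()
for i, dtype in enumerate(["string", "hash", "list"], start=1):
    # ring ids 310a, 310b, 310c mirror the reference numerals of FIG. 3
    container.add_ring(DatabaseRing(ring_id=f"310{chr(96 + i)}", data_type=dtype))

print(sorted(container.rings))   # each data type has its own ring
```

Because each ring is keyed by its data type, adding a further data type amounts to adding a further ring to the same container, which is the scalability property the disclosure relies on.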
[0066] Each database ring of the plurality of database rings [310a, 310b, 310c] is connected to one or more application processes [312a, 312b, 312c]. The one or more application processes [312a, 312b, 312c] are associated with one or more network nodes. Examples of the application processes can include, but are not limited to, core processes such as a registration process, a load balancing process, a mobility process, and the like. In an exemplary implementation, the one or more network nodes may be such as, but not limited to, an access and mobility management function (AMF) and a session management function (SMF). Further, each database ring of the plurality of database rings [310a, 310b, 310c] is connected to a separate database instance. Each separate database instance may have a unique identifier. In an exemplary implementation, each separate database instance may have the same or different configuration, data types or traffic data handling capacity.
[0067] Each database ring of the plurality of database rings [310a, 310b, 310c] is
configured to perform one or more target actions for a corresponding data type
handled by each database ring. In an exemplary implementation, the one or more
target actions are performed based on at least one of a start operation, a stop
operation, and a restart operation associated with the one or more application
processes [312a, 312b, 312c].
[0068] In an exemplary aspect, in the start operation, the system initiates a new transaction process for handling the data. The system may identify a database ring from the plurality of database rings [310a, 310b, 310c] that may handle the specific type of data. The start operation ensures that the necessary resources (such as network nodes and database instances) are prepared and engaged for the new task.
[0069] In an exemplary aspect, in the stop operation, the system halts or terminates an ongoing transaction process, for example, once the data has been successfully processed and is no longer needed, or if an error occurs that requires stopping the current process. The stop operation ensures that the resources are freed up and not wasted on a completed or faulty transaction.
[0070] In an exemplary aspect, the restart operation occurs when the system needs to stop and then immediately restart a transaction process, for example, if a process has failed or needs to be refreshed due to a system condition (such as an update or an error). The restart operation enables the system to resume the transaction from where it left off or begin it afresh, minimizing disruption in data handling and maintaining system efficiency.
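The start, stop, and restart target actions described above can be sketched as a small state machine. The `RingProcess` class and its state names are illustrative assumptions, not part of the disclosure; the sketch only shows how restart composes stop followed by start.

```python
# Hypothetical lifecycle sketch for the start/stop/restart target actions;
# class and state names are illustrative only.
class RingProcess:
    """A transaction process bound to one database ring."""

    def __init__(self, ring_id: str):
        self.ring_id = ring_id
        self.state = "idle"

    def start(self) -> None:
        # Prepare and engage resources (network nodes, database instance).
        self.state = "running"

    def stop(self) -> None:
        # Free resources once processing completes or an error occurs.
        self.state = "stopped"

    def restart(self) -> None:
        # Stop and immediately re-engage, resuming or beginning afresh.
        self.stop()
        self.start()

proc = RingProcess("310a")
proc.start()
proc.restart()
print(proc.state)  # "running"
```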
[0071] Furthermore, each database ring of the plurality of database rings [310a, 310b, 310c] is configured to handle a unique data type of a data associated with one or more transaction requests from the one or more application processes [312a, 312b, 312c]. In an exemplary implementation, examples of unique data types include, but are not limited to, a string data type, a hash data type, a list data type, and an object data type. In an implementation, handling the one or more transaction requests may be associated with, but is not limited to, at least one of processing, storing and fetching data to/from each database ring of the plurality of database rings [310a, 310b, 310c]. In an implementation, the transaction requests may be associated with one or more network nodes, such as, but not limited to, the AMF and the SMF. In an implementation, the transaction requests may have different or the same data and data type for the AMF and the SMF. In an exemplary implementation, the system manages data corresponding to mobility of a user device for the AMF. Further, the system manages data corresponding to sessions related to the user device, etc., for the SMF.
[0072] Further, as shown in FIG. 3, the one or more application processes [312a, 312b, 312c] are connected to the plurality of database rings [310a, 310b, 310c] of the database container [302]. In an implementation, each data type of the application processes [312a, 312b, 312c] is handled by a separate database ring [310a, 310b, 310c]. The database ring 1 [310a] may handle data type 1, the database ring 2 [310b] may handle data type 2, and the database ring 3 [310c] may handle data type 3. Also, there may be any number of database rings created in the database container [302] as may be required to implement the features of the present disclosure.
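The data-type-to-ring mapping of FIG. 3 suggests a simple routing step. The sketch below is an assumption-laden illustration: the ring identifiers reuse the figure's reference numerals, and `infer_data_type` is a hypothetical helper, since the disclosure does not specify how the unique data type is detected.

```python
# Minimal routing sketch, assuming the data-type-to-ring mapping of FIG. 3;
# ring identifiers and the infer_data_type helper are hypothetical.
RING_BY_TYPE = {
    "string": "310a",   # database ring 1 handles data type 1
    "hash":   "310b",   # database ring 2 handles data type 2
    "list":   "310c",   # database ring 3 handles data type 3
}

def infer_data_type(payload) -> str:
    """Classify a transaction payload into one of the ring data types."""
    if isinstance(payload, str):
        return "string"
    if isinstance(payload, dict):
        return "hash"
    return "list"

def route(payload) -> str:
    """Identify the corresponding database ring for a transaction payload."""
    return RING_BY_TYPE[infer_data_type(payload)]

print(route("subscriber-imsi"))   # a string payload goes to ring 310a
print(route({"session": 42}))     # a hash payload goes to ring 310b
```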
[0073] Due to the multiple database rings' setups, the count of threads catering to application traffic may be increased, and the parallel request processing capacity may be multiplexed by the number of instances, i.e., the database rings [310a, 310b, 310c], which increases the throughput of data transactions. The increased number of instances may be sufficient to support the required transaction rate. Further, the plurality of database rings [310a, 310b, 310c] are created inside the same container, where container resource usage is shared across all the database rings' setups, which in turn makes multiple threads available for handling application traffic, producing multiplexed throughput on the same physical resource. In an implementation, the setups for the plurality of database rings with predefined data types need separate handling on cluster stop, which is enabled by the approach of the present disclosure, for example, a database flush on cluster stop for a specific type of data or a specific database ring.
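The multiplexing argument above can be made concrete with a toy sketch: one worker thread per ring serves its own queue, so requests for different data types proceed in parallel inside the same process, and each ring can be stopped individually (analogous to a ring-specific flush on cluster stop). All names here are illustrative assumptions.

```python
# Illustrative-only sketch of the throughput claim: one worker thread per
# ring serves its own queue inside the same container process.
import queue
import threading

rings = {"310a": queue.Queue(), "310b": queue.Queue(), "310c": queue.Queue()}
served = {ring_id: [] for ring_id in rings}

def worker(ring_id: str, q: queue.Queue) -> None:
    while True:
        item = q.get()
        if item is None:          # sentinel: stop handling for this ring only
            break
        served[ring_id].append(item)

threads = [threading.Thread(target=worker, args=(rid, q)) for rid, q in rings.items()]
for t in threads:
    t.start()
for n in range(6):                # spread six requests across three rings
    ring_id = ["310a", "310b", "310c"][n % 3]
    rings[ring_id].put(f"req-{n}")
for q in rings.values():          # per-ring stop, one sentinel per queue
    q.put(None)
for t in threads:
    t.join()
print({rid: len(items) for rid, items in served.items()})
# prints {'310a': 2, '310b': 2, '310c': 2}
```

With a single ring, the six requests would queue behind one worker; with three rings they are drained by three workers sharing the same process resources, which is the multiplexed-throughput effect the paragraph describes.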
[0074] The storage unit [306] is configured to store the data handled by the database
container [302] and other such information so as to facilitate the implementation of
the features of the present disclosure.
[0075] Referring to FIG. 4, an exemplary method flow [400] diagram for increasing
throughput of data transactions by implementing a storage system, in accordance
with exemplary implementations of the present disclosure, is shown. In an
implementation the method [400] is performed by the system [300]. As shown in
FIG. 4, the method [400] starts at step [402].
[0076] At step [404], the method [400] as disclosed by the present disclosure comprises receiving, by a processing unit [304] at the storage system [300a], one or more transaction requests from one or more application processes [312a, 312b, 312c] for accessing data from the storage system [300a]. The method [400] implemented by the processing unit [304] at the storage system [300a] may receive the one or more transaction requests from the one or more application processes [312a, 312b, 312c] for accessing the data. The one or more application processes [312a, 312b, 312c] are associated with one or more network nodes, such as an access and mobility management function (AMF) and a session management function (SMF). The one or more transaction requests from the one or more application processes [312a, 312b, 312c] may be associated with, but are not limited to, at least one of processing, storing, and fetching data to/from each database ring of the plurality of database rings [310a, 310b, 310c]. Examples of the application processes can include, but are not limited to, core processes such as a registration process, a load balancing process, a mobility process, and the like. In an implementation, the transaction requests may have different or the same data and data type for the AMF and the SMF. In an exemplary implementation, the system manages data corresponding to mobility of a user device for the AMF. Further, the system manages data corresponding to sessions related to the user device, etc., for the SMF.
[0077] Next, at step [406], the method [400] as disclosed by the present disclosure comprises identifying, by the processing unit [304], a corresponding database ring from a plurality of database rings [310a, 310b, 310c] of the storage system [300a] for serving the one or more transaction requests from the one or more application processes [312a, 312b, 312c] based on a unique data type associated with each of the one or more transaction requests from the one or more application processes [312a, 312b, 312c]. Further, the processing unit [304] may identify the unique data type associated with each of the one or more transaction requests and the corresponding database ring for serving the transaction requests. In an implementation, the processing unit [304] may identify the unique data type associated with the application processes [312a, 312b, 312c] handled by a separate database ring [310a, 310b, 310c]. The database ring 1 [310a] may handle a data type 1, the database ring 2 [310b] may handle a data type 2, and the database ring 3 [310c] may handle a data type 3 of the transaction requests from the one or more application processes [312a, 312b, 312c]. In an implementation, each database ring of the plurality of database rings [310a, 310b, 310c] is connected to a separate database instance. Each separate database instance may have a unique identifier. In an exemplary implementation, each separate database instance may have the same or different configuration, data types or traffic data handling capacity.
[0078] Next, at step [408], the method [400] as disclosed by the present disclosure comprises facilitating, by the processing unit [304], an access of the identified corresponding database ring from the plurality of database rings [310a, 310b, 310c] for serving the one or more transaction requests from the one or more application processes [312a, 312b, 312c] based on the unique data type associated with each transaction request. After identifying each database ring from the plurality of database rings [310a, 310b, 310c] of the storage system [300a], the processing unit [304] may facilitate the access of the identified corresponding database ring from the plurality of database rings [310a, 310b, 310c] for serving the one or more transaction requests from the one or more application processes [312a, 312b, 312c]. Furthermore, each database ring of the plurality of database rings [310a, 310b, 310c] may handle the unique data type of a data associated with the one or more transaction requests from the one or more application processes [312a, 312b, 312c]. In an exemplary implementation, the unique data types include, but are not limited to, a string data type, a hash data type, a list data type, and an object data type. In an implementation, handling the one or more transaction requests may be associated with, but is not limited to, at least one of processing, storing and fetching data to/from each database ring of the plurality of database rings [310a, 310b, 310c]. Further, each database ring of the plurality of database rings [310a, 310b, 310c] may perform one or more target actions for a corresponding data type handled by each database ring. In an exemplary implementation, the one or more target actions are performed based on at least one of a start operation, a stop operation, and a restart operation associated with the one or more application processes [312a, 312b, 312c].
[0079] In an exemplary aspect, in the start operation, the system initiates a new transaction process for handling the data. The system may identify a database ring from the plurality of database rings [310a, 310b, 310c] that may handle the specific type of data. The start operation ensures that the necessary resources (such as network nodes and database instances) are prepared and engaged for the new task.
[0080] In an exemplary aspect, in the stop operation, the system halts or terminates an ongoing transaction process, for example, once the data has been successfully processed and is no longer needed, or if an error occurs that requires stopping the current process. The stop operation ensures that the resources are freed up and not wasted on a completed or faulty transaction.
[0081] In an exemplary aspect, the restart operation occurs when the system needs to stop and then immediately restart a transaction process, for example, if a process has failed or needs to be refreshed due to a system condition (such as an update or an error). The restart operation enables the system to resume the transaction from where it left off or begin it afresh, minimizing disruption in data handling and maintaining system efficiency.
[0082] Thereafter, the method [400] terminates at step [410].
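Steps [404] to [408] of the method [400] can be condensed into a short end-to-end sketch. All function and field names below are illustrative assumptions (the disclosure does not define a request schema); the sketch only shows the receive-identify-access sequence on in-memory ring stores.

```python
# Hedged end-to-end sketch of method [400], steps [404]-[408]: receive a
# transaction request, identify the ring by unique data type, and access it.
# All names and the request schema are illustrative, not from the disclosure.
RINGS = {"string": {}, "hash": {}, "list": {}}   # data_type -> ring storage

def receive(request: dict) -> dict:              # step [404]: receive request
    return request

def identify(request: dict) -> str:              # step [406]: identify data type
    return request["data_type"]

def access(request: dict):                       # step [408]: access the ring
    ring = RINGS[identify(request)]
    if request["op"] == "store":
        ring[request["key"]] = request["value"]
        return None
    return ring.get(request["key"])              # fetch

req = receive({"process": "312a", "data_type": "hash",
               "op": "store", "key": "ue-1", "value": {"session": 7}})
access(req)
print(access({"process": "312b", "data_type": "hash", "op": "fetch", "key": "ue-1"}))
# prints {'session': 7}
```

Because the ring is selected purely from the request's data type, a fetch issued by a different application process (here `312b`) reaches the same ring that served the store, which is the access facilitation of step [408].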
[0083] The present disclosure may relate to a system [300] for increasing throughput of data transactions by implementing a storage system [300a]. The system [300] comprises a processing unit [304] configured to receive, at the storage system [300a], one or more transaction requests from one or more application processes [312a, 312b, 312c] for accessing data from the storage system [300a]; the processing unit [304] configured to identify a corresponding database ring from a plurality of database rings [310a, 310b, 310c] of the storage system [300a] for serving the one or more transaction requests from the one or more application processes [312a, 312b, 312c] based on a unique data type associated with each of the one or more transaction requests from the one or more application processes [312a, 312b, 312c]; and the processing unit [304] configured to facilitate an access of the identified corresponding database ring from the plurality of database rings [310a, 310b, 310c] for serving the one or more transaction requests from the one or more application processes [312a, 312b, 312c] based on the unique data type associated with each transaction request.
[0084] The present disclosure may relate to a non-transitory computer readable storage medium storing instructions for increasing throughput of data transactions by implementing a storage system [300a], the instructions including executable code which, when executed by one or more units of a system [300], causes a processing unit [304] of the system to: receive, at the storage system [300a], one or more transaction requests from one or more application processes [312a, 312b, 312c] for accessing data from the storage system [300a]; identify a corresponding database ring from a plurality of database rings [310a, 310b, 310c] of the storage system [300a] for serving the one or more transaction requests from the one or more application processes [312a, 312b, 312c] based on a unique data type associated with each of the one or more transaction requests from the one or more application processes [312a, 312b, 312c]; and facilitate an access of the identified corresponding database ring from the plurality of database rings [310a, 310b, 310c] for serving the one or more transaction requests from the one or more application processes [312a, 312b, 312c] based on the unique data type associated with each transaction request.
[0085] As is evident from the above, the present disclosure provides a technically
advanced solution for implementing a storage system (such as non-cluster
database). The present solution minimizes response time for serving application
requests. Further, the present solution comprises creating multiple setups/instances
inside the same container where container resource usage is shared across all
multiple setups, which makes multiple threads available for handling application
traffic producing multiplexed throughput on the same hardware resource.
[0086] While considerable emphasis has been placed herein on the disclosed
embodiments, it will be appreciated that many embodiments can be made and that
many changes can be made to the embodiments without departing from the
principles of the present disclosure. These and other changes in the embodiments
of the present disclosure will be apparent to those skilled in the art, whereby it is to
be understood that the foregoing descriptive matter to be implemented is illustrative
and non-limiting.
[0087] Further, in accordance with the present disclosure, it is to be acknowledged that the functionality described for the various components/units can be implemented interchangeably. While specific embodiments may disclose a particular functionality of these units for clarity, it is recognized that various configurations and combinations thereof are within the scope of the disclosure. The functionality of specific units as disclosed in the disclosure should not be construed as limiting the scope of the present disclosure. Consequently, alternative arrangements and substitutions of units, provided they achieve the intended functionality described herein, are considered to be encompassed within the scope of the present disclosure.
We Claim:
1. A storage system [300a], the storage system [300a] comprising:
- a database container [302] comprising a plurality of database rings
[310a, 310b, 310c],
wherein each database ring of the plurality of database rings [310a,
310b, 310c] is connected to one or more application processes [312a,
312b, 312c], and
wherein each database ring of the plurality of database rings [310a,
310b, 310c] is configured to handle a unique data type of a data
associated with one or more transaction requests from the one or more
application processes [312a, 312b, 312c].
2. The storage system as claimed in claim 1, wherein the application processes [312a,
312b, 312c] are associated with one or more network nodes.
3. The storage system as claimed in claim 1, wherein each database ring of the
plurality of database rings [310a, 310b, 310c] is connected to a separate
database instance.
4. The storage system as claimed in claim 1, wherein each database ring of the
plurality of database rings [310a, 310b, 310c] is configured to perform one
or more target actions for a corresponding data type handled by each of the
plurality of database rings.
5. The storage system as claimed in claim 4, wherein the one or more target actions
are performed based on at least one of a start operation, a stop operation,
and a restart operation associated with the one or more application processes
[312a, 312b, 312c].
6. A method for increasing throughput of data transactions by implementing a
storage system [300a], the method comprising:
- receiving, by a processing unit [304] at the storage system [300a], one
or more transaction requests from one or more application processes
[312a, 312b, 312c] for accessing data from the storage system [300a];
- identifying, by the processing unit [304], a corresponding database ring
from a plurality of database rings [310a, 310b, 310c] of the storage
system [300a] for serving the one or more transaction requests from one
or more application processes [312a, 312b, 312c] based on a unique data
type associated with each of the one or more transaction requests from
the one or more application processes [312a, 312b, 312c]; and
- facilitating, by the processing unit [304], an access of the identified
corresponding database ring from the plurality of database rings [310a,
310b, 310c] for serving the one or more transaction requests from one
or more application processes [312a, 312b, 312c] based on the unique
data type associated with each of the one or more transaction requests.
7. The method as claimed in claim 6, wherein the application processes [312a,
312b, 312c] are associated with one or more network nodes.
8. The method as claimed in claim 6, wherein each database ring of the
plurality of database rings [310a, 310b, 310c] is connected to a separate
database instance.
9. The method as claimed in claim 6, wherein the method comprises
performing, by each database ring of the plurality of database rings [310a,
310b, 310c], one or more target actions for a corresponding data type
handled by each of the plurality of database rings.
10. The method as claimed in claim 9, wherein the one or more target actions
are performed based on at least one of a start operation, a stop operation,
and a restart operation associated with the one or more application processes
[312a, 312b, 312c].
11. A system [300] for increasing throughput of data transactions by
implementing a storage system [300a], the system comprising:
- a processing unit [304] configured to:
o receive at the storage system [300a], one or more transaction
requests from one or more application processes [312a, 312b,
312c] for accessing data from the storage system [300a];
o identify a corresponding database ring from a plurality of
database rings [310a, 310b, 310c] of the storage system [300a]
for serving the one or more transaction requests from one or
more application processes [312a, 312b, 312c] based on a unique
data type associated with each of the one or more transaction
requests from the one or more application processes [312a, 312b,
312c]; and
o facilitate an access of the identified corresponding database ring
from the plurality of database rings [310a, 310b, 310c] for
serving the one or more transaction requests from one or more
application processes [312a, 312b, 312c] based on the unique
data type associated with each of the one or more transaction
requests.
12. The system as claimed in claim 11, wherein the application processes [312a,
312b, 312c] are associated with one or more network nodes.
13. The system as claimed in claim 11, wherein each database ring of the
plurality of database rings [310a, 310b, 310c] is connected to a separate
database instance.
14. The system as claimed in claim 11, wherein the processing unit is
configured to perform, by each database ring of the plurality of database
rings [310a, 310b, 310c], one or more target actions for a corresponding data
type handled by each of the plurality of database rings.
15. The system as claimed in claim 14, wherein the one or more target actions
are performed based on at least one of a start operation, a stop operation,
and a restart operation associated with the one or more application processes
[312a, 312b, 312c].
| # | Name | Date |
|---|---|---|
| 1 | 202321063842-STATEMENT OF UNDERTAKING (FORM 3) [22-09-2023(online)].pdf | 2023-09-22 |
| 2 | 202321063842-PROVISIONAL SPECIFICATION [22-09-2023(online)].pdf | 2023-09-22 |
| 3 | 202321063842-POWER OF AUTHORITY [22-09-2023(online)].pdf | 2023-09-22 |
| 4 | 202321063842-FORM 1 [22-09-2023(online)].pdf | 2023-09-22 |
| 5 | 202321063842-FIGURE OF ABSTRACT [22-09-2023(online)].pdf | 2023-09-22 |
| 6 | 202321063842-DRAWINGS [22-09-2023(online)].pdf | 2023-09-22 |
| 7 | 202321063842-Proof of Right [22-01-2024(online)].pdf | 2024-01-22 |
| 8 | 202321063842-FORM-5 [21-09-2024(online)].pdf | 2024-09-21 |
| 9 | 202321063842-ENDORSEMENT BY INVENTORS [21-09-2024(online)].pdf | 2024-09-21 |
| 10 | 202321063842-DRAWING [21-09-2024(online)].pdf | 2024-09-21 |
| 11 | 202321063842-CORRESPONDENCE-OTHERS [21-09-2024(online)].pdf | 2024-09-21 |
| 12 | 202321063842-COMPLETE SPECIFICATION [21-09-2024(online)].pdf | 2024-09-21 |
| 13 | 202321063842-FORM 3 [07-10-2024(online)].pdf | 2024-10-07 |
| 14 | 202321063842-Request Letter-Correspondence [08-10-2024(online)].pdf | 2024-10-08 |
| 15 | 202321063842-Power of Attorney [08-10-2024(online)].pdf | 2024-10-08 |
| 16 | 202321063842-Form 1 (Submitted on date of filing) [08-10-2024(online)].pdf | 2024-10-08 |
| 17 | 202321063842-Covering Letter [08-10-2024(online)].pdf | 2024-10-08 |
| 18 | 202321063842-CERTIFIED COPIES TRANSMISSION TO IB [08-10-2024(online)].pdf | 2024-10-08 |
| 19 | Abstract.jpg | 2024-10-19 |
| 20 | 202321063842-ORIGINAL UR 6(1A) FORM 1 & 26-311224.pdf | 2025-01-04 |