
Method And System For Real Time Monitoring Of Container Statistics

Abstract: The present disclosure relates to method and system for real-time monitoring of container statistics. The method comprises, receiving, by a transceiver unit [202] via a User Interface (UI), an information associated with one or more hosts having a set of containers. Thereafter, fetching, by a management unit [204] via a collector module [204A], a set of details associated with a set of target containers from the set of containers to determine a set of performance metrics of the set of target containers. Thereafter, transmitting, by the management unit [204], the determined set of performance metrics from the collector module [204A] to a stream module [204B]. Thereafter, extracting, by an extraction unit [206] via one or more manager nodes [206A], the transmitted set of performance metrics in one or more predefined batch sizes, and then storing, by the extraction unit [206], the extracted set of performance metrics into a database [208]. [FIG. 3]


Patent Information

Application #:
Filing Date: 12 July 2023
Publication Number: 03/2025
Publication Type: INA
Invention Field: COMPUTER SCIENCE
Status:
Parent Application:

Applicants

Jio Platforms Limited
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India

Inventors

1. Sandeep Bisht
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
2. Lokesh Poonia
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
3. Aayush Bhatnagar
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
4. Shubham Kumar Naik
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
5. Munir Sayyad
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
6. Anup Patil
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India

Specification

FORM 2
THE PATENTS ACT, 1970 (39 OF 1970) & THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
“METHOD AND SYSTEM FOR REAL-TIME MONITORING OF
CONTAINER STATISTICS”
We, Jio Platforms Limited, an Indian National, of Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
The following specification particularly describes the invention and the manner in which it is to be performed.

METHOD AND SYSTEM FOR REAL-TIME MONITORING OF CONTAINER
STATISTICS
TECHNICAL FIELD
[0001] Embodiments of the present disclosure generally relate to network performance management systems. More particularly, embodiments of the present disclosure relate to methods and systems for real-time monitoring of container statistics.
BACKGROUND
[0002] The following description of the related art is intended to provide background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section is used only to enhance the understanding of the reader with respect to the present disclosure, and not as admissions of the prior art.
[0003] Conventionally, servers were configured to run only one application per server to reduce conflicts between multiple applications running on the same server, the associated risks with running multiple applications together, upgradeability issues, complexities around backups and diagnostics, etc. Consequently, the running of one application per server resulted in wasted potential of hardware due to low resource utilisation. Further, setting up a new dedicated server for each application proved to be very expensive.
[0004] This was remedied by the introduction of virtual machine environments, wherein a single server could use a virtual machine software to allocate different parts of hardware to run different operating systems on a single server, in the same manner as if they were running on dedicated machines. The different operating systems, running different applications concurrently are separated from each other. This allowed the running of multiple applications on a server to an extent; however, Virtual Machine solutions tend to be very resource intensive, thereby limiting the number and types of applications that may be run concurrently. This led to an increased popularity in container systems, wherein, a container is a single package of an application, along with all of the dependencies required to run the said application. With this solution, servers can make use of a container engine, i.e., Docker, to run multiple containers on a single server, and on a single operating system, wherein, each container contains a separate
application. This allows for a more resource-efficient, faster, and simpler method for running multiple applications on a single server without running into the aforementioned issues.
[0005] However, with a higher number of applications running concurrently on a single server, it is important to actively monitor the resource usage for each container environment on a host server, because over utilisation can lead to a crash of the system. Further, due to multiple applications running on a single operating system, and a shared kernel, container environments are less secure. This is because a vulnerability in one of the applications running concurrently on a single server may compromise the entire operating system, thereby compromising the rest of the applications in the process. In this light, it is important to monitor the hardware performance and utilisation metrics in real-time when running container applications on one or more host servers, to assess the resource load of each container, and the host server capacity. It is also important to provide a method to make container systems more secure.
[0006] The conventional solutions provide that the container engine may be used to obtain performance metrics, via one or more commands, for all the containers on a host server, or the user may manually select one or more containers using a Container ID and collect performance metrics for each container manually. However, the conventional solutions fail to allow automated and real-time monitoring of the performance metrics associated with one or more containers. Further, the conventional solutions do not provide real-time monitoring of performance metrics based on one or more desired categorizations, i.e., as a group of Host servers, a resource group, or a group of select containers that may be associated with one or more specific host servers.
[0007] Furthermore, the performance metrics for containers provided by conventional solutions are returned as raw data, thereby requiring manual analysis of data. This leads to time inefficiency and additional effort to analyse the data, especially in environments with a large number of host servers that are running an even larger number of container applications. Furthermore, the conventional solutions do not allow a user to configure one or more parameters related to monitoring of container statistics, such as intervals for fetching details or determining metrics, batch sizes or classification of the performance metrics data, or the flush intervals to insert performance metrics data into a database. Further, the conventional solutions do not provide a mechanism to validate the host IPs before they may be assigned to a node for monitoring the performance metrics. As a consequence, this may lead to one or more issues arising from the addition and mapping of incorrect Host IPs.

[0008] For example, where an underlying Host IP mapping is faulty, the performance metrics corresponding to a first set of one or more containers may instead be displayed as the performance metrics of a second set of one or more containers, or simply, there may be a mismatch between the performance metrics and the corresponding container. This means that the system may display the performance metrics of Container A under the name of Container B. Alternatively, or in combination with the aforementioned problem, a faulty mapping of Host IPs may also result in the display of entirely inaccurate or incomplete data. This may prompt the operation engineers of the network environment to expend efforts and resources in the wrong direction, thereby leading to a significant wastage of time and resources. Further, the underlying problem which requires troubleshooting may not be rectified for an extended period of time for the same reason.
[0009] Yet another drawback of the conventional solutions is that the performance metrics data is not available in a simplified format that may be segregated into different relevant sections to allow faster and easier understanding of the available data by the user or the host operator, or to allow the analysis and comparison between different hosts and container environments.
[0010] Therefore, in light of the foregoing discussion, there exists a need to overcome the aforementioned drawbacks. The present disclosure proposes a system and method to efficiently manage at least some of the abovementioned drawbacks associated with the conventional solutions in the field of network performance management systems.
SUMMARY
[0011] This section is provided to introduce certain aspects of the present disclosure in a simplified form that are further described below in the detailed description. This summary is not intended to identify the key features or the scope of the claimed subject matter.
[0012] An aspect of the present disclosure may relate to a method for real-time monitoring of container statistics. The method comprises, receiving, by a transceiver unit, via a user interface (UI), an information associated with one or more hosts having a set of containers. Thereafter, fetching, by a management unit, via a collector module, a set of details associated with a set of target containers from the set of containers to determine a set of performance metrics of the set of target containers. Thereafter, the method further comprises, transmitting, by the management
unit, the determined set of performance metrics from the collector module to a stream module. Thereafter, the method further comprises, extracting, by an extraction unit, via one or more manager nodes, the transmitted set of performance metrics in one or more predefined batch sizes, and storing, by the extraction unit, the extracted set of performance metrics into a database.
[0013] In an exemplary aspect of the present disclosure, the method further comprises that the set of performance metrics comprises at least one of a central processing unit (CPU) utilization, a hard disk drive (HDD) activity, and a block input/output (I/O) operation.
[0014] In an exemplary aspect of the present disclosure, the present disclosure further comprises, automatically adjusting, by the extraction unit, at least one of: one or more batch sizes to extract the set of performance metrics, and one or more extraction intervals of the one or more manager nodes, wherein said automatic adjusting is performed dynamically.
[0015] In an exemplary aspect of the present disclosure, the information comprises at least one of one or more host names, one or more Internet Protocol (IP) addresses associated with the one or more hosts, and one or more stream channel designations.
[0016] In an exemplary aspect of the present disclosure, the present disclosure further comprises, validating, by the management unit, each container of the set of containers, wherein the validation comprises a verification of the one or more host names, the one or more IP addresses, and the one or more stream channel designations.
[0017] In an exemplary aspect of the present disclosure, each target container from the set of target containers is in a running state.
[0018] In an exemplary aspect of the present disclosure, the set of details comprises at least one of a container name and a container identifier corresponding to each target container from the set of target containers.
[0019] In an exemplary aspect of the present disclosure, the present disclosure further comprises, running, by the collector module, one or more schedulers, to identify via the one or more schedulers the set of target containers from the set of containers, and to determine via the one or more schedulers the set of performance metrics of the set of target containers.

[0020] Another aspect of the present disclosure may relate to a system for real-time monitoring of container statistics. The system comprises, a transceiver unit configured to receive, via a User Interface (UI), an information associated with one or more hosts having a set of containers. The system further comprises, a management unit connected at least to the transceiver unit, wherein the management unit is configured to: fetch, via a collector module, a set of details associated with a set of target containers from the set of containers to determine a set of performance metrics of the set of target containers, and to transmit, the determined set of performance metrics from the collector module to a stream module. The system further comprises, an extraction unit connected at least to the management unit, wherein the extraction unit is configured to: extract, via one or more manager nodes, the transmitted set of performance metrics in one or more predefined batch sizes, and to store, the extracted set of performance metrics into a database.
[0021] Yet another aspect of the present disclosure may relate to a User Equipment (UE) for real-time monitoring of container statistics. The UE comprises, a User Interface (UI) configured to transmit, to a system, an information associated with one or more hosts having a set of containers, wherein the information is transmitted for a storage of a set of performance metrics of a set of target containers; and to receive, from the system, an indication of the storage of the set of performance metrics into a database, wherein the storage is based on: receiving, by a transceiver unit of the system via the user interface (UI), the information; fetching, by a management unit of the system via a collector module, a set of details associated with the set of target containers from the set of containers to determine the set of performance metrics of the set of target containers; transmitting, by the management unit, the determined set of performance metrics from the collector module to a stream module; extracting, by an extraction unit of the system via one or more manager nodes, the transmitted set of performance metrics in one or more predefined batch sizes; and storing, by the extraction unit, the extracted set of performance metrics into the database.
[0022] Yet another aspect of the present disclosure may relate to a non-transitory computer readable storage medium storing instructions for real-time monitoring of container statistics, wherein, the instructions include an executable code which, when executed by one or more units of a system, causes: a transceiver unit to receive, via a User Interface (UI), an information associated with one or more hosts having a set of containers. The instructions, when executed, further cause, a management unit to fetch via a collector module, a set of details associated
with a set of target containers from the set of containers to determine a set of performance
metrics of the set of target containers; and to transmit, the determined set of performance
metrics from the collector module to a stream module. The instructions, when executed, further
cause, an extraction unit to extract, via one or more manager nodes, the transmitted set of
performance metrics in one or more predefined batch sizes, and to store, the extracted set of
performance metrics into a database.
OBJECTS OF THE INVENTION
[0023] Some of the objects of the present disclosure, which at least one embodiment disclosed
herein satisfies, are listed herein below.
[0024] It is an object of the present disclosure to provide a system and a method for real-time monitoring of container statistics.
[0025] It is another object of the present disclosure to provide a system and method for real-
time monitoring of container statistics that offers a scalable and fault-tolerant system. It is
designed to handle a large number of hosts and is capable of automatically reassigning tasks if
a node goes down.
[0026] It is another object of the present disclosure to provide a system and method for real-
time monitoring of container statistics that provides performance metrics view in an optimized
and simplified chart format, which makes it easier for users to understand, compare and analyse
the data.
[0027] It is another object of the present disclosure to provide a system and method for real-
time monitoring of container statistics that offers a high level of configurability to users. This
includes the ability to configure the interval for fetching container stats, batch sizes, and the
flush interval to insert the container statistics data into a database.
[0028] It is another object of the present disclosure to provide a system and method for real-time monitoring of container statistics that validates host IPs before they are uploaded or
assigned to manager nodes. This is to ensure the integrity and accuracy of the data being monitored.
[0029] It is another object of the present disclosure to provide a system and method for real-
time monitoring of container statistics that offers a sophisticated filtering system, allowing
users to view container statistics of a group of host servers, a particular host server, or a
particular container. This enhances the flexibility and customization for users when trying to
analyse specific data sets.
[0030] It is another object of the present disclosure to provide a system and method for real-
time monitoring of container statistics that efficiently manages inactive containers, preventing unnecessary get stats requests.
BRIEF DESCRIPTION OF THE DRAWINGS
[0031] The accompanying drawings, which are incorporated herein, and constitute a part of
this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Also, the embodiments shown in the
figures are not to be construed as limiting the disclosure, but the possible variants of the method
and system according to the disclosure are illustrated herein to highlight the advantages of the disclosure. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components or circuitry commonly used to implement such components.
[0032] FIG. 1 illustrates an exemplary block diagram of a computing device upon which the features of the present disclosure may be implemented, in accordance with exemplary implementations of the present disclosure.
[0033] FIG. 2 illustrates an exemplary block diagram of a system for real-time monitoring of
container statistics, in accordance with exemplary implementations of the present disclosure.

[0034] FIG. 3 illustrates a flow diagram of a method for real-time monitoring of container statistics, in accordance with exemplary implementations of the present disclosure.
[0035] FIG. 4 illustrates a flow diagram of an exemplary method for real-time monitoring of
container statistics, in accordance with exemplary implementations of the present disclosure.
[0036] FIG. 5 illustrates a flow diagram of an exemplary process for real-time monitoring of container statistics, in accordance with exemplary implementations of the present disclosure.
[0037] The foregoing shall be more apparent from the following more detailed description of the disclosure.
DETAILED DESCRIPTION
[0038] In the following description, for the purposes of explanation, various specific details are set forth in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter may each be used
independently of one another or with any combination of other features. An individual feature
may not address any of the problems discussed above or might address only some of the problems discussed above.
[0039] The ensuing description provides exemplary embodiments only, and is not intended to
limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description
of the exemplary embodiments will provide those skilled in the art with an enabling description
for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the disclosure as set forth.
[0040] Specific details are given in the following description to provide a thorough
understanding of the embodiments. However, it will be understood by one of ordinary skill in
the art that the embodiments may be practiced without these specific details. For example, circuits, systems, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail.
[0041] Also, it is noted that individual embodiments may be described as a process which is
depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block
diagram. Although a flowchart may describe the operations as a sequential process, many of the operations may be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed but could have additional steps not included in a figure.
[0042] The word “exemplary” and/or “demonstrative” is used herein to mean serving as an
example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary
structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent
that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive—in a manner similar to the term “comprising” as an open transition word—without precluding any additional or other elements.
[0043] As used herein, a “processing unit” or “processor” or “operating processor” includes
one or more processors, wherein processor refers to any logic circuitry for processing instructions. A processor may be a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors in association with a (Digital Signal Processing) DSP core, a controller, a
microcontroller, Application Specific Integrated Circuits, Field Programmable Gate Array
circuits, any other type of integrated circuits, etc. The processor may perform signal coding, data processing, input/output processing, and/or any other functionality that enables the working of the system according to the present disclosure. More specifically, the processor or processing unit is a hardware processor.
[0044] As used herein, “a user equipment”, “a user device”, “a smart-user-device”, “a smart-
device”, “an electronic device”, “a mobile device”, “a handheld device”, “a wireless
communication device”, “a mobile communication device”, “a communication device” may
be any electrical, electronic and/or computing device or equipment, capable of implementing
the features of the present disclosure. The user equipment/device may include, but is not limited
to, a mobile phone, smart phone, laptop, a general-purpose computer, desktop, personal digital
assistant, tablet computer, wearable device or any other computing device which is capable of
implementing the features of the present disclosure. Also, the user device may contain at least one input means configured to receive an input from at least one of a transceiver unit, a processing unit, a storage unit, a detection unit and any other such unit(s) which are required to implement the features of the present disclosure.
[0045] As used herein, “storage unit” or “memory unit” refers to a machine or computer-
readable medium including any mechanism for storing information in a form readable by a computer or similar machine. For example, a computer-readable medium includes read-only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory devices or other types of machine-accessible storage media. The
storage unit stores at least the data that may be required by one or more units of the system to
perform their respective functions.
[0046] As used herein “interface” or “user interface” refers to a shared boundary across which
two or more separate components of a system exchange information or data. The interface may
also refer to a set of rules or protocols that define communication or interaction of one
or more modules or one or more units with each other, which also includes the methods,
functions, or procedures that may be called.
[0047] All modules, units, components used herein, unless explicitly excluded herein, may be
software modules or hardware processors, the processors being a general-purpose processor, a
special purpose processor, a conventional processor, a digital signal processor (DSP), a
plurality of microprocessors, one or more microprocessors in association with a DSP core, a
controller, a microcontroller, Application Specific Integrated Circuits (ASIC), Field Programmable Gate Array circuits (FPGA), any other type of integrated circuits, etc.
[0048] As used herein the transceiver unit includes at least one receiver and at least one
transmitter configured respectively for receiving and transmitting data, signals, information or
a combination thereof between units/components within the system and/or connected with the
system.

[0049] As used herein, “host(s)”, or “host server(s)”, or “server(s)” may be used to refer to one or more servers that are hosting the one or more applications in a container environment.
[0050] As discussed in the background section, the current known solutions have several
shortcomings. The present disclosure aims to overcome the above-mentioned and other
existing problems in this field of technology by providing a novel solution for real-time
monitoring of container statistics. The novel solution of the present disclosure enables a user
and/or a host server operator to monitor container statistics comprising one or more
performance metrics associated with one or more containers, in real-time, and at user-
configurable intervals. Furthermore, the present solution enables the determination of the state
of a container associated with a host, which thereby allows the system of the present disclosure
to focus hardware resources to monitor only those containers that may be in a running state. In
addition to these advancements, the present disclosure discloses the novel solution that allows
the user and/or the host server operator to filter and categorize different containers and host
groups, thereby allowing them to customize the list of containers to be monitored.
[0051] The present solution further enables the container statistics, that are usually fetched in
the form of raw data from the container engine, to be stored, viewed, analysed and compared
in a simplified format to save time and resources that may be otherwise used in processing raw
data in large-scale server environments. The present disclosure also provides additional novel
and technically advanced features to remedy the drawbacks associated with the conventional
solutions in the field of the present disclosure, and the same are discussed in detail herein below.
[0052] FIG. 1 illustrates an exemplary block diagram of a computing device [100] upon which the features of the present disclosure may be implemented in accordance with exemplary
implementation of the present disclosure. In an implementation, the computing device [100]
may also implement a method for real-time monitoring of container statistics, utilising the system [200]. In another implementation, the computing device [100] itself implements the method for real-time monitoring of the container statistics, using one or more units configured within the computing device [100], wherein said one or more units are capable of implementing
the features as disclosed in the present disclosure.

[0053] The computing device [100] may include a bus [102] or other communication
mechanism for communicating information, and a processor [104] coupled with bus [102] for
processing information. The processor [104] may be, for example, a general-purpose
microprocessor. The computing device [100] may also include a main memory [106], such as
a random-access memory (RAM), or other dynamic storage device, coupled to the bus [102]
for storing information and instructions to be executed by the processor [104]. The main
memory [106] also may be used for storing temporary variables or other intermediate
information during execution of the instructions to be executed by the processor [104]. Such
instructions, when stored in non-transitory storage media accessible to the processor [104],
10 render the computing device [100] into a special-purpose machine that is customized to
perform the operations specified in the instructions. The computing device [100] further includes a read only memory (ROM) [108] or other static storage device coupled to the bus [102] for storing static information and instructions for the processor [104].
[0054] A storage device [110], such as a magnetic disk, optical disk, or solid-state drive is
provided and coupled to the bus [102] for storing information and instructions. The computing
device [100] may be coupled via the bus [102] to a display [112], such as a Cathode Ray Tube
(CRT), Liquid Crystal Display (LCD), Light Emitting Diode (LED) display, Organic LED
(OLED) display, etc. for displaying information to a computer user. An input device [114],
including alphanumeric and other keys, touch screen input means, etc. may be coupled to the
bus [102] for communicating information and command selections to the processor [104].
Another type of user input device may be a cursor controller [116], such as a mouse, a trackball,
or cursor direction keys, for communicating direction information and command selections to
the processor [104], and for controlling cursor movement on the display [112]. This input
device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis
(e.g., y), that allow the device to specify positions in a plane.
[0055] The computing device [100] may implement the techniques described herein using
customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic
which in combination with the computing device [100] causes or programs the computing
device [100] to be a special-purpose machine. According to one implementation, the techniques
herein are performed by the computing device [100] in response to the processor [104]
executing one or more sequences of one or more instructions contained in the main memory [106]. Such instructions may be read into the main memory [106] from another storage
medium, such as the storage device [110]. Execution of the sequences of instructions contained in the main memory [106] causes the processor [104] to perform the process steps described herein. In alternative implementations of the present disclosure, hard-wired circuitry may be used in place of or in combination with software instructions.
[0056] The computing device [100] also may include a communication interface [118] coupled
to the bus [102]. The communication interface [118] provides a two-way data communication
coupling to a network link [120] that is connected to a local network [122]. For example, the
communication interface [118] may be an integrated services digital network (ISDN) card,
cable modem, satellite modem, or a modem to provide a data communication connection to a
corresponding type of telephone line. As another example, the communication interface [118]
may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, the communication interface [118] sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.
[0057] The computing device [100] can send messages and receive data, including program
code, through the network(s), the network link [120] and the communication interface [118]. In the Internet example, a server [130] might transmit a requested code for an application program through the Internet [128], the ISP [126], the local network [122], the host [124] and the communication interface [118]. The received code may be executed by the processor [104]
as it is received, and/or stored in the storage device [110], or other non-volatile storage for later
execution.
[0058] Referring to FIG. 2, an exemplary block diagram of a system [200] for real-time monitoring of container statistics, is shown, in accordance with the exemplary implementations of the present disclosure. The system [200] comprises at least one transceiver unit [202], at
least one management unit [204], at least one extraction unit [206], and at least one database
[208]. The management unit [204], as used herein, may further comprise, a collector module [204A] and a stream module [204B]. Further, the extraction unit [206], as used herein, may further comprise one or more manager nodes [206A]. Also, all of the components/ units of the system [200] are assumed to be connected to each other unless otherwise indicated below. As
shown in the figures, all units shown within the system should also be assumed to be connected
to each other.

[0059] Also, in FIG. 2 only a few units are shown, however, the system [200] may comprise
multiple such units or the system [200] may comprise any such numbers of said units, as
required to implement the features of the present disclosure. Further, in an implementation, the
system [200] may be present in a user device to implement the features of the present
disclosure. The system [200] may be a part of the user device or may be independent of, but
in communication with, the user device (may also be referred to herein as a UE). In another implementation, the system [200] may reside in a server or a network entity. In yet another implementation, the system [200] may reside partly in the server/ network entity and partly in the user device.
[0060] Further, in accordance with the present disclosure, it is to be acknowledged that the
functionality described for the various components/units can be implemented interchangeably. While specific embodiments may disclose a particular functionality of these units for clarity, it is recognized that various configurations and combinations thereof are within the scope of the disclosure. The functionality of specific units as disclosed in the disclosure should not be
construed as limiting the scope of the present disclosure. Consequently, alternative
arrangements and substitutions of units, provided they achieve the intended functionality described herein, are considered to be encompassed within the scope of the present disclosure.
[0061] The system [200] is configured for real-time monitoring of the container statistics, with the help of the interconnection between the components/units of the system [200].
[0062] More particularly, the system [200] may comprise, the transceiver unit [202] configured
to receive, via a User Interface (UI), an information associated with one or more hosts having a set of containers. As used herein, the information comprises at least one of one or more host names, one or more Internet Protocol (IP) addresses associated with the one or more hosts, and one or more stream channel designations.
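By way of a non-limiting illustration only, the information received via the User Interface (UI) at this step may resemble the following Python sketch; the field names ("host_name", "ip_address", "stream_channel") are assumptions introduced for illustration and are not prescribed by the present disclosure.

```python
# Illustrative only: these field names are assumptions for this sketch and are
# not prescribed by the disclosure.
host_information = {
    "hosts": [
        {
            "host_name": "host-a",            # hypothetical host name
            "ip_address": "10.0.0.21",        # IP address of the host
            "stream_channel": "channel-Y",    # stream channel designation
        },
        {
            "host_name": "host-b",
            "ip_address": "2001:db8::7",      # IPv6 addresses are also possible
            "stream_channel": "channel-Z",
        },
    ]
}
```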
[0063] It is to be noted that the set of containers as used herein in the present disclosure may
be a container or set of containers of a network function, such as AMF (Access and Mobility Management Function), SMF (Session Management Function), UDM (Unified Data Management), or any other application for any domain. Further, the set of containers may be associated with any other entity that is obvious to a person skilled in the art to implement the
solution of the present disclosure.

[0064] As used herein, one or more stream channels may be used to refer to one or more data
streams, wherein a data stream corresponds to a fixed flow of data packets from one system
module to another. A stream channel may be used to signify a transmission route for a set of
data, i.e., a set of performance metrics, wherein the user may specify from where, and in what
manner, the set of data is to be transmitted from one module to another designated module.
[0065] Further, as used herein, the one or more stream channel designations may comprise an allocation of a set of performance metric data for at least one of, one or more target containers associated with a host server, one or more target containers associated with a specified host group, and one or more of any target containers which may be selected to be monitored by a
user and/or a host server operator, to one or more stream channels. To illustrate with an
example, a user X may provide a stream channel designation wherein a set of performance metric data for a set of target containers associated with a host server A may be allocated to a stream channel Y. Now, as will be explained in more detail in the paragraphs herein below, when the extraction unit [206] extracts the set of performance metrics from the stream module
[204B] and stores them in the database [208], then the set of performance metrics for target
containers associated with server A will be extracted and transmitted via the stream channel Y, as per the stream channel designation provided by the user X.
[0066] The system [200] may further comprise, the management unit [204] connected at least to the transceiver unit [202], wherein the management unit [204] further comprises, the
collector module [204A] and the stream module [204B]. As used herein, the management unit
[204] may be configured to fetch, via the collector module [204A], a set of details associated with a set of target containers, from the set of containers, to determine the set of performance metrics of the set of target containers. The management unit [204] may be further configured to transmit, the determined set of performance metrics from the collector module [204A] to the
stream module [204B].
[0067] As used herein, the set of details may comprise at least one of a container name and a
container identifier corresponding to each target container from the set of target containers,
wherein a container name may comprise a name corresponding to at least one of, the name of
the application running inside the container, a system-generated name, or any other name as
may be configured by the user or the host server operator. Further, a container identifier, as
used herein, may comprise a unique alphanumeric identifier that may be used to identify a container.

[0068] Further, the container name may be a user-defined label assigned to each target
container from the set of target containers based on a purpose and/or an application of said each
target container. Further, the container name is used to identify and manage said each target
container in a container environment. Furthermore, the container name may be a unique label
in a pre-defined format such as "nginx" or "my-web-app", associated with the host server.
[0069] As used herein the “container environment” refers to the use of one or more containers
to package and deploy applications and services in networks, based on a docker container
engine. Further, a docker may be an open-source platform that allows one to build, deploy, and run
applications in lightweight, portable, and self-sufficient containers associated with the
networks.
[0070] Further, the container identifier may be an automatically generated unique identifier
assigned to each target container from the set of target containers by a container engine.
Further, the container identifier associated with each target container is a hexadecimal string
like "f81ec7464732" that may be used to uniquely identify said each target container across all
host servers.
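A minimal sketch of how the set of details (container name and container identifier) may be fetched is given below, assuming the Docker SDK for Python and a locally reachable docker container engine; it is an illustrative approximation and not necessarily the implementation of the collector module [204A].

```python
import docker

def fetch_container_details():
    """Return the name and short identifier of each running (target) container."""
    client = docker.from_env()
    details = []
    # filters={"status": "running"} restricts the listing to containers in a
    # running state, mirroring the notion of "target containers".
    for container in client.containers.list(filters={"status": "running"}):
        details.append({
            "container_name": container.name,      # e.g. "nginx" or "my-web-app"
            "container_id": container.short_id,    # short hexadecimal identifier
        })
    return details

if __name__ == "__main__":
    print(fetch_container_details())
```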
[0071] Further, as used herein, the set of performance metrics comprises at least one of a central processing unit (CPU) utilization, a hard disk drive (HDD) activity, and a block input/output (I/O) operation.
[0072] As used herein “CPU Utilization” signifies a percentage of CPU resources used by the
set of target containers, to indicate an extent of a processing power that is utilized by the set of
target containers. Further, a high CPU utilization may indicate resource constraints or intense computational workloads.
[0073] As used herein “HDD Activity” signifies a measurement of one or more read/write
operations (in bytes or input/output operations per second (IOPS)) between the set of target
containers and a hard disk drive of the host server. The host server monitors one or more storage
access patterns of the set of target containers in order to identify one or more potential bottlenecks or one or more storage capacity issues.
[0074] As used herein, “block I/O operation” signifies a count of the number of block-level
input/output operations (reads and writes) which are performed by the set of target containers.
The block I/O operation provides an analysis of the storage access patterns associated with the
set of target containers in order to identify potential storage performance issues and/or storage resource constraints.
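The following sketch illustrates how CPU utilization and block I/O counters may be derived from a single Docker stats sample; the field names follow the Docker Engine stats API, and the computation is an illustrative approximation rather than the exact formula used by the disclosed system.

```python
import docker

def sample_metrics(container):
    stats = container.stats(stream=False)          # one snapshot instead of a live stream
    pre = stats.get("precpu_stats", {})
    cpu_delta = (stats["cpu_stats"]["cpu_usage"]["total_usage"]
                 - pre.get("cpu_usage", {}).get("total_usage", 0))
    system_delta = (stats["cpu_stats"].get("system_cpu_usage", 0)
                    - pre.get("system_cpu_usage", 0))
    online_cpus = stats["cpu_stats"].get("online_cpus", 1)
    cpu_pct = (cpu_delta / system_delta) * online_cpus * 100.0 if system_delta > 0 else 0.0

    # Block I/O: bytes read from / written to block devices by the container.
    blkio = stats["blkio_stats"].get("io_service_bytes_recursive") or []
    read_bytes = sum(e["value"] for e in blkio if e["op"].lower() == "read")
    write_bytes = sum(e["value"] for e in blkio if e["op"].lower() == "write")

    return {"cpu_pct": round(cpu_pct, 2),
            "blkio_read_bytes": read_bytes,
            "blkio_write_bytes": write_bytes}

client = docker.from_env()
for c in client.containers.list():                 # running containers only, by default
    print(c.name, sample_metrics(c))
```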
[0075] Further, in an exemplary implementation, the stream module [204B], may be
configured to transmit the determined set of performance metrics, based on the one or more
stream channels received from the user, to one or more manager nodes [206A] of the extraction
unit [206], for extraction.
[0076] In an exemplary implementation, each target container from the set of containers may
be a container that is in a running state. As used herein, the running state may refer to a state
of a particular container being active, wherein the determination of a container being in an
active or inactive state may be made on the basis of a set of pre-defined container state
determination rules. By determining whether the particular container is in the running state or an inactive state, the present disclosure intelligently prevents the sending of requests to get the performance metrics from an inactive container, thereby saving system resources, and reducing redundant data in the total data pool.
[0077] In another exemplary implementation, the collector module [204A] may be configured
to run one or more schedulers to identify, via the one or more schedulers, the set of target containers from the set of containers, and to determine via the one or more schedulers, the set of performance metrics of the set of target containers.
[0078] Further, in another exemplary embodiment, the collector module [204A], via one or
more schedulers, may fetch the details associated with the set of target containers at a pre-
configured interval and store them in a cache, wherein the details may be updated periodically at pre-configured intervals as set by the user or the host server operator, as required.
[0079] In yet another exemplary implementation, the management unit [204] may be further configured to validate each container of the set of containers, wherein the validation comprises
a verification of the one or more host names, the one or more IP addresses, and the one or more
stream channel designations. As used herein, the verification of the one or more host names may comprise, checking the version of Host IP, i.e., IPv4 or IPv6 for each host server, and matching, the Host IP for each host server to check if they are segregated into the correct host group and comprise the correct IP version, as per the information received by the user via the
User Interface (UI), or not.
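A minimal sketch of such a host IP validation, assuming the expected IP version of a host group is known and using the Python standard ipaddress module, is given below; the expected_version argument is an assumption introduced for illustration.

```python
import ipaddress

def validate_host(host_name: str, ip: str, expected_version: int) -> bool:
    """Return True when the IP parses and matches the host group's IP version."""
    try:
        parsed = ipaddress.ip_address(ip)
    except ValueError:
        print(f"{host_name}: '{ip}' is not a valid IP address")
        return False
    if parsed.version != expected_version:
        print(f"{host_name}: expected IPv{expected_version}, got IPv{parsed.version}")
        return False
    return True

print(validate_host("host-a", "10.0.0.21", 4))    # True
print(validate_host("host-b", "2001:db8::7", 4))  # False, wrong IP version
```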

[0080] The system [200] may further comprise, an extraction unit [206] connected at least to the management unit [204], the extraction unit [206] configured to extract, via one or more manager nodes [206A], the transmitted set of performance metrics in one or more predefined batch sizes, and store, the extracted set of performance metrics into a database [208].
[0081] As used herein, the pre-defined batch size for the set of performance metrics may
comprise a plurality of parts of the total performance metric data divided into a plurality of
batches based on the total size of the data i.e., 10 Mb, or a batch of data representing the
container statistics for 2 Host Groups, or 10 Running Containers. It may be acknowledged by
a person skilled in the art that the aforementioned description of batch size metrics is merely
exemplary in nature, and any other format, as may be known by a person ordinarily skilled in
the art, may be employed to segregate data into multiple batches of defined size based on such format.
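A minimal sketch of segregating metric records into batches is given below; the count-based and size-based limits are arbitrary illustrative values, not values prescribed by the present disclosure.

```python
import json

def batch_records(records, max_count=100, max_bytes=10 * 1024 * 1024):
    """Yield lists of records, closing a batch when either limit would be exceeded."""
    batch, batch_bytes = [], 0
    for record in records:
        record_bytes = len(json.dumps(record).encode("utf-8"))
        if batch and (len(batch) >= max_count or batch_bytes + record_bytes > max_bytes):
            yield batch
            batch, batch_bytes = [], 0
        batch.append(record)
        batch_bytes += record_bytes
    if batch:
        yield batch
```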
[0082] Further, in an exemplary implementation of the present disclosure, the extraction unit [206], may be configured to poll the stream module [204B] of the management unit [204], to
retrieve the container statistics data, comprising the set of performance metrics at a pre-
configured fixed interval. As used herein, the polling interval of the extraction unit [206] may be user configurable. For example, a user X may configure the polling interval for the extraction unit [206] to be 100 milliseconds. Now, the extraction unit [206] will poll, or simply check with the stream module [204B] at an interval of every 100ms for the availability of
updated performance metrics, wherein, it will perform the extraction operation as defined in
method [300] in case of availability of an updated performance metric data. In this manner, the present disclosure facilitates that the container statistics associated with one or more containers to be monitored by the user may be updated in real-time.
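A minimal sketch of such a polling loop is given below; the stream_client object and its poll() method are hypothetical placeholders for the stream module [204B] interface and are not part of the present disclosure.

```python
import time

POLL_INTERVAL_SECONDS = 0.1  # user-configurable, e.g. 100 milliseconds

def run_poll_loop(stream_client, handle_batch):
    """Check the stream for new metric records at a fixed, configurable interval."""
    while True:
        records = stream_client.poll()     # hypothetical call on the stream module
        if records:                        # extract only when new data is available
            handle_batch(records)
        time.sleep(POLL_INTERVAL_SECONDS)
```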
[0083] Further, in an exemplary implementation, the extraction unit [206] may be further
configured to automatically adjust at least one of: one or more batch sizes to extract the set of
performance metrics, and one or more extraction intervals of the one or more manager nodes
[206A], wherein said automatic adjustment is performed dynamically. Further, the automatic
adjustment performed by the extraction unit [206] may either be performed automatically based
on one or more parameters determined by the system [200] or based on the pre-set parameters
provided by the user or the host server operator. It is to be noted that performing the automatic
adjustment dynamically refers to an ability to automatically adjust, in real-time and without manual intervention, at least one of: the one or more batch sizes, i.e., an amount of data
processed at once, and the one or more extraction intervals, i.e., a frequency at which the set of performance metrics is collected.
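One possible, purely illustrative adjustment policy is sketched below; the thresholds and limits are assumptions, and the present disclosure does not prescribe any particular policy.

```python
def adjust(batch_size, interval_s, backlog_records):
    """Grow the batch when a backlog builds up; relax polling when data is sparse."""
    if backlog_records > 10 * batch_size:       # falling behind: take bigger bites
        batch_size = min(batch_size * 2, 5000)
    elif backlog_records < batch_size // 4:     # mostly idle: poll less aggressively
        interval_s = min(interval_s * 2, 5.0)
    else:                                       # steady load: poll more frequently
        interval_s = max(interval_s / 2, 0.1)
    return batch_size, interval_s
```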
[0084] In yet another exemplary implementation, the data comprising the extracted set of
performance metrics, that is stored in the database [208], may be displayed to the user or the
host server operator, via the User Interface (UI), in a simplified format in real-time, as it is
added to the database. Further, any previous data stored in the database may also be displayed,
via the User Interface (UI) to the user or the host server operator, on demand, in a simple
format. As used herein, a simple format to display the data stored in the database may comprise
any format for the easy analysis of data, such as a chart, a graph, a graphical or visual
representation of the data etc.
[0085] Referring to FIG. 3, wherein a flow diagram of a method [300] for real-time monitoring
of container statistics, in accordance with exemplary implementations of the present disclosure
is shown. In an implementation, the method [300] is performed by the system [200]. Further,
in an implementation, the system [200] may be present in a server device to implement the
features of the present disclosure. Also, as shown in FIG. 3, the method [300] starts at step
[302].
[0086] At step [304], the method [300] may comprise, receiving, by a transceiver unit [202] via a User Interface (UI), an information associated with one or more hosts having a set of containers.
[0087] It is to be noted that the set of containers as used herein in the present disclosure may
be a container or set of containers of a network function, such as AMF (Access and Mobility Management Function), SMF (Session Management Function), UDM (Unified Data Management), or any other application for any domain. Further, the set of containers may be associated with any other entity that is obvious to a person skilled in the art to implement the
solution of the present disclosure.
[0088] As used herein, the information associated with the one or more hosts may comprise at least one of one or more host names, one or more Internet Protocol (IP) addresses associated with the one or more hosts, and one or more stream channel designations.
[0089] As used herein, one or more stream channels may be used to refer to one or more data
streams, wherein a data stream corresponds to a fixed flow of data packets from one system
module to another. A stream channel may signify a transmission route to define the route for a set of data, i.e., one or more performance metrics, wherein the user may specify from where, and in what manner, the set of data is to be transmitted from one module to another designated module.
[0090] Further, as used herein, the one or more stream channel designations may comprise an
allocation of a set of performance metric data for at least one of, one or more target containers associated with a host server, one or more target containers associated with a specified host group, and one or more of any target containers which may be selected to be monitored by a user and/or a host server operator, to one or more stream channels. To illustrate with an
example, a user X may provide a stream channel designation wherein a set of performance
metric data for a set of target containers associated with a host server A may be allocated to a stream channel Y. Now, as will be explained in more detail in the paragraphs herein below, when the extraction unit [206] extracts the set of performance metrics from the stream module [204B] and stores them in the database [208], then the set of performance metrics for target
containers associated with server A will be extracted and transmitted via the stream channel Y,
as per the stream channel designation provided by the user X.
[0091] At step [306], the method [300] may comprise, fetching, by a management unit [204] via a collector module [204A], a set of details associated with a set of target containers, from the set of containers, to determine a set of performance metrics of the set of target containers.
[0092] As used herein, the set of details may comprise, at least one of a container name and a
container identifier corresponding to each target container from the set of target containers, wherein a container name may comprise a name corresponding to at least one of, the name of the application running inside the container, a system-generated name, or any other name as may be configured by the user or the host server operator. Further, a container identifier, as
used herein, may comprise a unique alphanumeric identifier that may be used to identify a
container.
[0093] Further, the container name may be a user-defined label assigned to each target
container from the set of target containers based on a purpose and/or an application of said each
target container. Further, the container name is used to identify and manage said each target
container in a docker environment. Furthermore, the container name may be a unique label in
a pre-defined format such as "nginx" or "my-web-app", associated with the host server.

[0094] Further, the container identifier may be an automatically generated unique identifier
assigned to each target container from the set of target containers by a container engine.
Further, the container identifier associated with each target container is a hexadecimal string
like "f81ec7464732" that may be used to uniquely identify said each target container across all
host servers.
[0095] Further, as used herein, the set of performance metrics comprises at least one of a central processing unit (CPU) utilization, a hard disk drive (HDD) activity, and a block input/output (I/O) operation.
[0096] As used herein “CPU Utilization” signifies a percentage of CPU resources used by the
set of target containers, to indicate an extent of a processing power that is utilized by the set of
target containers. Further, a high CPU utilization may indicate resource constraints or intense computational workloads.
[0097] As used herein “HDD Activity” signifies a measurement of one or more read/write
operations (in bytes or input/output operations per second (IOPS)) between the set of target
containers and a hard disk drive of the host server. The host server monitors one or more storage
access patterns of the set of target containers in order to identify one or more potential bottlenecks or one or more storage capacity issues.
[0098] As used herein, “block I/O operation” signifies a count of the number of block-level
input/output operations (reads and writes) which are performed by the set of target containers.
The block I/O operation provides an analysis of the storage access patterns associated with the
set of target containers in order to identify the one or more potential storage performance issues and/or the one or more storage resource constraints.
[0099] In an exemplary implementation, each target container from the set of containers may
be a container that is in a running state. As used herein, the running state may refer to a state
of a particular container being active, wherein the determination of a container being in an active or inactive state may be made on the basis of a set of pre-defined container state determination rules. By determining whether the particular container is in the running state or an inactive state, the present disclosure intelligently prevents the sending of requests to get the performance
metrics from an inactive container, thereby saving system resources, and reducing redundant data in the total data pool.
[0100] In another exemplary implementation, the method [300] may comprise, running, by the
collector module [204A], one or more schedulers to identify, via the one or more schedulers
the set of target containers from the set of containers, and to determine, via the one or more
schedulers, the set of performance metrics of the set of target containers.
[0101] Further, in another exemplary embodiment, the method [300] may further comprise
fetching, by the collector module [204A], via one or more schedulers, the details associated with
the set of target containers at a pre-configured interval and storing them in a cache, wherein the
details may be updated periodically at the pre-configured interval as set by the user or the host
server operator, as required.
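A minimal sketch of such a scheduler with a periodically refreshed cache is given below, reusing the hypothetical fetch_container_details() helper from the earlier sketch; the refresh interval shown is an arbitrary illustrative value.

```python
import threading
import time

class DetailCache:
    """Periodically refresh and serve the cached set of container details."""

    def __init__(self, fetch_fn, refresh_interval_s=30):
        self._fetch_fn = fetch_fn
        self._interval = refresh_interval_s
        self._details = []
        self._lock = threading.Lock()

    def start(self):
        threading.Thread(target=self._loop, daemon=True).start()

    def _loop(self):
        while True:
            fresh = self._fetch_fn()            # e.g. fetch_container_details
            with self._lock:
                self._details = fresh
            time.sleep(self._interval)

    def get(self):
        with self._lock:
            return list(self._details)

# Usage: cache = DetailCache(fetch_container_details); cache.start(); cache.get()
```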
[0102] Further, in yet another exemplary implementation, the method [300] may further
comprise, validating, by the management unit [204], each container of the set of containers,
wherein the validation comprises a verification of the one or more host names, the one or more
IP addresses, and the one or more stream channel designations. As used herein, the verification
of the one or more host names may comprise checking the version of the Host IP, i.e., IPv4 or IPv6, for each host server, and matching the Host IP for each host server to check whether the host servers are segregated into the correct host group and comprise the correct IP version, as per the information received from the user via the User Interface (UI).
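A minimal sketch of this Host IP verification, using Python's standard ipaddress module, is given below; the host-group structure and the idea of an expected IP version per host group are assumptions used only to illustrate the check.

# Illustrative sketch: verifying that each Host IP parses correctly and
# matches the IP version (IPv4 or IPv6) declared for its host group.
import ipaddress

def verify_host_ip(host_ip: str, expected_version: int) -> bool:
    """Return True if host_ip is a valid address of the expected version."""
    try:
        addr = ipaddress.ip_address(host_ip)
    except ValueError:
        return False                      # not a valid IPv4 or IPv6 address
    return addr.version == expected_version

# Example: an IPv4 entry passes, a mismatched IPv6 entry is rejected.
print(verify_host_ip("10.20.30.40", expected_version=4))    # True
print(verify_host_ip("2001:db8::1", expected_version=4))    # False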
[0103] At step [308], the method [300] may comprise, transmitting, by the management unit
[204], the determined set of performance metrics from the collector module [204A] to a stream module [204B].
[0104] In an exemplary implementation, the method [300] may comprise, transmitting, by the
stream module [204B], the determined set of performance metrics, based on the one or more
stream channels received from the user, to one or more manager nodes [206A] of the extraction
unit [206], for extraction.
[0105] At step [310], the method [300] may comprise, extracting, by an extraction unit [206] via one or more manager nodes [206A], the transmitted set of performance metrics in one or more predefined batch sizes.
[0106] As used herein, the pre-defined batch size for the set of performance metrics may
comprise a plurality of parts of the total performance metric data divided into a plurality of
batches based on the total size of the data, e.g., 10 MB, or a batch of data representing the
container statistics for 2 Host Groups, or 10 running containers. It may be acknowledged by a
person skilled in the art that the aforementioned description of batch size metrics is merely
exemplary in nature, and any other format, as may be known by a person ordinarily skilled in the art, may be employed to segregate data into multiple batches of defined size based on such format.
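The following Python sketch illustrates, in a non-limiting manner, how collected performance-metric records could be segregated into batches of a pre-defined size; the record layout and the batch size of 10 are assumptions for the example only.

# Illustrative sketch: splitting performance-metric records into batches
# of a pre-defined size before extraction or storage.
from typing import Iterable, Iterator, List

def into_batches(metrics: Iterable[dict], batch_size: int = 10) -> Iterator[List[dict]]:
    """Yield the metric records in fixed-size batches."""
    batch: List[dict] = []
    for record in metrics:
        batch.append(record)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:                              # flush the final, smaller batch
        yield batch

records = [{"container_id": f"c{i}", "cpu_pct": 1.0} for i in range(25)]
print([len(b) for b in into_batches(records, batch_size=10)])   # [10, 10, 5]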
[0107] Further, in an exemplary implementation of the present disclosure, the extraction unit
[206] may poll the stream module [204B] of the management unit [204], to retrieve the
container statistics data comprising the set of performance metrics, at a pre-configured fixed
interval. As used herein, the polling interval of the extraction unit [206] may be user
configurable. For example, a user X may configure the polling interval for the extraction unit
[206] to be 100 milliseconds. The extraction unit [206] will then poll, or simply check with, the
stream module [204B] at an interval of every 100 ms for the availability of updated performance
metrics, and will perform the extraction operation as defined in the method [300] in case updated performance metric data is available. In this manner, the present disclosure facilitates that the container statistics associated with the one or more containers to be monitored by the user may be updated in real-time.
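By way of illustration only, the polling behaviour described above may be sketched in Python as follows, with an in-memory queue standing in for the stream module [204B]; the 100-millisecond interval mirrors the example of user X and is not mandated by the disclosure.

# Hypothetical sketch: the extraction side polls the stream at a fixed,
# user-configurable interval and extracts whatever metrics are available.
import queue
import time

stream_channel: "queue.Queue[dict]" = queue.Queue()
POLL_INTERVAL_SECONDS = 0.1               # e.g. the 100 ms interval set by user X

def poll_stream_once() -> list:
    """Drain all performance-metric records currently available."""
    available = []
    while True:
        try:
            available.append(stream_channel.get_nowait())
        except queue.Empty:
            return available

def extraction_loop(max_cycles: int = 5) -> None:
    for _ in range(max_cycles):
        metrics = poll_stream_once()
        if metrics:                        # extract only when updated data exists
            print(f"extracting {len(metrics)} metric records")
        time.sleep(POLL_INTERVAL_SECONDS)

extraction_loop()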
[0108] In an exemplary implementation, the method [300] may further comprise, automatically
adjusting, by the extraction unit [206], at least one of: one or more batch sizes to extract the set of performance metrics, and one or more extraction intervals of the one or more manager nodes [206A], wherein said automatic adjusting is performed dynamically. Further, the automatic adjustment performed by the extraction unit [206] may either be performed automatically based
on one or more parameters determined by the system [200] or based on the pre-set parameters
provided by the user or the host server operator.
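One possible (purely illustrative) heuristic for such dynamic adjustment is sketched below in Python; the thresholds, scaling factors and limits are assumptions and are not prescribed by the disclosure.

# Hypothetical sketch: adapt batch size and extraction interval from the
# observed backlog of pending metric records.
def adjust(batch_size: int, interval_s: float, backlog: int) -> tuple:
    """Grow batches / shorten the interval under load, relax both when idle."""
    if backlog > 2 * batch_size:          # falling behind: extract more, more often
        batch_size = min(batch_size * 2, 1000)
        interval_s = max(interval_s / 2, 0.05)
    elif backlog < batch_size // 4:       # nearly idle: save system resources
        batch_size = max(batch_size // 2, 10)
        interval_s = min(interval_s * 2, 5.0)
    return batch_size, interval_s

print(adjust(batch_size=100, interval_s=0.5, backlog=500))   # heavier load
print(adjust(batch_size=100, interval_s=0.5, backlog=10))    # light load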
[0109] At step [312], the method [300] may comprise, storing, by the extraction unit [206], the extracted set of performance metrics into a database [208].
[0110] In an exemplary implementation, the data comprising the extracted set of performance
metrics, that is stored in the database [208], may be displayed to the user or the host server
operator, via the User Interface (UI), in a simplified format in real-time, as it is added to the
database. Further, any previous data stored in the database may also be displayed, via the User
Interface (UI), to the user or the host server operator, on demand, in a simple format. As used
herein, a simple format to display the data stored in the database may comprise any format for
the easy analysis of data, such as a chart, a graph, a graphical or visual representation of the
data, etc.
[0111] At step [314], the method [300] terminates.
[0112] Referring to FIG. 4, wherein a flow diagram of an exemplary method [400] for real-time monitoring of container statistics is shown. As depicted in FIG. 4, the method [400] starts at step [402].
[0113] At step [404], the method [400] comprises, receiving, one or more Host Groups created
by the user, the Host Groups containing a list of one or more Hosts to be monitored, and one or more stream channel names, to which data will be pushed.
[0114] As used herein, one or more stream channels may be used to refer to one or more data
streams, wherein a data stream corresponds to a fixed flow of data packets from one system
module to another. A stream channel may be used to define the route for a set of data, i.e., the
one or more performance metrics, wherein the user may specify from where, and in what manner, a set of data is to be communicated from one module, and received, for storage or otherwise, at another.
[0115] Further, as used herein, the list of one or more Host Groups may be received from the
user in the form of a sheet, i.e., a spreadsheet, wherein the sheet may comprise details of the
Host Groups and the related values, such as an IP Address associated with the Host server, IP
version details, hardware details/configuration, list of containers associated with a Host,
respective container IDs, etc. It may be acknowledged by a person ordinarily skilled in the art
that the aforementioned description of the list of one or more host groups, and the parameters
included therein are merely exemplary in nature, and any other format for the list of one or
more Host Groups, or the parameters associated therewith, may be employed by the user, either separately or in addition to the aforementioned examples, to implement the features of the present disclosure.
[0116] At step [406], the method [400] comprises assigning the one or more Host Groups to a
collector.
[0117] As used herein, a collector may be a collector module [204A] encompassed in the
management unit [204] of the system [200]. The method of the present disclosure encompasses
that there may be one or more collectors, wherein the different Hosts, and/or Host Groups from
among the one or more Host Groups may be assigned to different Collectors. Further, as used
herein, the assigning of the one or more Host Groups may comprise, allocating, the one or more
Host Groups to one of the one or more Collectors, to fetch running container (i.e., one or more target containers) details and container stats for running containers. Further, the set of performance metrics for the running containers may be extracted from the container stats of the running containers.
[0118] At step [408], the method [400] comprises, fetching, by the collector, names of running
containers and respective Container IDs, for all of the assigned Host Groups and, storing, the names of running containers and the respective Container IDs, in a cache.
[0119] It may be acknowledged by a person skilled in the art that for the purpose of this invention, the collector may be further configured to fetch any other parameters, in addition to
or other than, the name of running containers and the respective Container IDs. Further, as used
herein, the cache may comprise a cache memory that may be used for high-speed short-term storage of data. In this case, the names of the running containers and the respective Container IDs, may be stored by the collector on the cache memory to perform one or more features of the present disclosure that may require the use of the aforementioned data in real-time, such as
calculating the one or more container stats.
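A non-limiting Python sketch of this step is given below, in which the collector contacts each Host of its assigned Host Groups and caches the names and Container IDs of running containers; the host-group dictionary, the TCP endpoints and the use of the Docker SDK for Python are illustrative assumptions.

# Illustrative sketch: fetch names and Container IDs of running containers
# for every Host in the assigned Host Groups, and store them in a cache.
import docker

assigned_host_groups = {
    "group-a": ["tcp://10.0.0.11:2375", "tcp://10.0.0.12:2375"],   # example hosts
}
cache = {}                                 # host address -> {container id: name}

def refresh_running_containers() -> None:
    for group, hosts in assigned_host_groups.items():
        for host in hosts:
            client = docker.DockerClient(base_url=host)
            running = client.containers.list(filters={"status": "running"})
            cache[host] = {c.id: c.name for c in running}
            client.close()

if __name__ == "__main__":
    refresh_running_containers()
    print(cache)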
[0120] At step [410], the method [400] comprises updating, the details of the running containers of the assigned Host Groups that are stored in the cache, by the Collector, via one or more schedulers, at a configured interval.
[0121] As used herein, the updating of the details of the running containers, i.e., the names of
running containers and the respective Container IDs, of the assigned Host Groups that are
stored in cache, may be performed to keep the system [200] updated as to which of the
containers associated with one or more hosts are in a running state, and which of the containers
are inactive. This prevents the expenditure of resources on monitoring the one or more
inactive containers and ensures that the details in the cache used to determine running
containers for the monitoring of resources only correspond to running containers, and not to
inactive containers.
[0122] Further, as used herein, one or more schedulers may refer to a process scheduler
component that may be used to allocate the order of execution, and the time interval, for the
tasks performed by a processor of a computing device. For updating the container details in a
cache, the scheduler may be a short-term process scheduler, a medium-term process scheduler,
or a long-term process scheduler, depending upon the implementation of the present disclosure.
[0123] Further, in an exemplary implementation of the present disclosure, the configured interval for updating the container details may refer to an interval that may be defined by the user or host server operator, as required. For example, the configured interval may be a fixed time interval such as 10 minutes.
[0124] At step [412], the method [400] comprises fetching, by the collector, via the one or
more schedulers, the container stats of all the running containers.
[0125] As used herein, the performance metrics fetched by the collector comprise raw container statistics data fetched from a container engine corresponding to the one or more target containers.
[0126] At step [414], the method [400] comprises calculating, by the Collector, one or more
central processing unit (CPU), hard disk drive (HDD) and Block input/output (I/O) data, using the container stats fetched from all of the running containers.
[0127] As used herein, the performance metrics determined by the Collector may comprise both the parameters of resource utilization by individual containers, and the collective
utilization of the resources by the running containers from the one or more Host Groups that
may be monitored at any given time. It may be further understood that the present disclosure encompasses that the performance metrics data determined by the collector module [204A] may be representative of resource utilization in various formats, such as usage percentage of the available resources, or component-specific utilization metrics, e.g., reads and writes of the HDD
in Mbps. This is different from the raw data fetched from the one or more containers
and may comprise a plurality of representations of the data in different formats, each or any of which may be utilized for a dedicated use case of viewing, comparing and analysing the data.
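As a non-limiting illustration, the sketch below derives a CPU utilization percentage and Block I/O byte counts from the raw stats returned by a Docker engine; the field names follow the Docker stats API and the formulae shown are one common way of computing these figures, not a requirement of the disclosure.

# Illustrative sketch: deriving CPU % and Block I/O totals from raw
# container stats fetched from the container engine.
import docker

def cpu_percent(stats: dict) -> float:
    """Approximate host CPU percentage used since the previous sample."""
    cpu_delta = (stats["cpu_stats"]["cpu_usage"]["total_usage"]
                 - stats["precpu_stats"]["cpu_usage"]["total_usage"])
    system_delta = (stats["cpu_stats"].get("system_cpu_usage", 0)
                    - stats["precpu_stats"].get("system_cpu_usage", 0))
    online_cpus = stats["cpu_stats"].get("online_cpus", 1)
    if system_delta <= 0 or cpu_delta < 0:
        return 0.0
    return (cpu_delta / system_delta) * online_cpus * 100.0

def block_io_bytes(stats: dict) -> tuple:
    """Total bytes read from and written to block devices by the container."""
    read = write = 0
    for entry in stats.get("blkio_stats", {}).get("io_service_bytes_recursive") or []:
        op = entry.get("op", "").lower()
        if op == "read":
            read += entry.get("value", 0)
        elif op == "write":
            write += entry.get("value", 0)
    return read, write

client = docker.from_env()
for container in client.containers.list(filters={"status": "running"}):
    raw = container.stats(stream=False)    # one raw stats snapshot
    print(container.name, cpu_percent(raw), block_io_bytes(raw))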
[0128] At step [416], the method [400] comprises pushing, by the collector, the one or more
CPU, HDD, and Block I/O data to the respective stream channels. The Collector may push the
data into the respective stream channels via the stream module [204B], wherein the stream
module [204B] may then route the received data into designated stream channels based on the stream channel information received from the user or the host server operator.
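The push-and-route behaviour of this step may be sketched, in a purely illustrative way, with an in-memory queue per named stream channel; a production deployment might instead use a message broker, and the channel name shown is an assumption.

# Hypothetical sketch: the collector pushes derived metric records to the
# stream channel designated for its Host Group; the stream side routes by
# channel name.
import queue
from collections import defaultdict

stream_channels = defaultdict(queue.Queue)        # channel name -> queue of records

def push_metrics(channel_name: str, metric_record: dict) -> None:
    """Route one metric record to the channel named in the Host Group sheet."""
    stream_channels[channel_name].put(metric_record)

push_metrics("group-a-stats", {"container_id": "f81ec7464732", "cpu_pct": 12.5})
print(stream_channels["group-a-stats"].qsize())   # 1 record awaiting a manager node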
[0129] At step [418], the method [400] comprises pulling, by one or more manager nodes, the data from the assigned stream channels.
[0130] At step [420], the method [400] comprises flushing, by the one or more manager nodes,
the data, by adding it to a queue, wherein the data is flushed to the database, via the queue, in batches.
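A minimal Python sketch of this pull-queue-flush sequence is given below, with SQLite standing in for the database [208]; the table schema, batch size and record layout are assumptions made only for illustration.

# Hypothetical sketch: a manager node drains its assigned channel and
# flushes the queued records to the database in batches.
import sqlite3
import queue

def flush_in_batches(channel: "queue.Queue[tuple]", db_path: str, batch_size: int = 100) -> None:
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS container_stats "
                 "(container_id TEXT, cpu_pct REAL, blkio_read INTEGER, blkio_write INTEGER)")
    batch = []
    while not channel.empty():
        batch.append(channel.get_nowait())
        if len(batch) == batch_size:               # flush a full batch
            conn.executemany("INSERT INTO container_stats VALUES (?, ?, ?, ?)", batch)
            conn.commit()
            batch = []
    if batch:                                      # flush the final partial batch
        conn.executemany("INSERT INTO container_stats VALUES (?, ?, ?, ?)", batch)
        conn.commit()
    conn.close()

pulled: "queue.Queue[tuple]" = queue.Queue()
pulled.put(("f81ec7464732", 12.5, 1024, 4096))
flush_in_batches(pulled, "stats.db")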
[0131] In an exemplary implementation of the present disclosure, the data flushed to the
database may be stored in the database and may be retrieved for reference and analysis at any
10 point of time. In another exemplary implementation, the data may be stored in the database for
a pre-configured period of time, wherein the pre-configured period of time may be set by the user or host server operator, as required. For example, 1 month.
[0132] As used herein, a manager node (also known as a manager or master node) is a specialized container host that oversees and coordinates the activities of a cluster of nodes, i.e.,
containers. The manager nodes are responsible for scheduling containers across the one or more
Host Groups, monitoring node and container performance metrics, maintaining cluster state and configuration, providing a centralized Application Programming Interface (API) for container management and orchestration, collecting and aggregating performance data from the one or more Host Groups, processing and analyzing performance metrics to identify trends,
bottlenecks, or issues, triggering alerts, scaling, or other actions based on performance metric
thresholds, and providing a unified view of cluster performance and container health.
[0133] In yet another exemplary implementation, the data stored in the database may be
displayed to the user or the host server operator, via the User Interface (UI), in a simplified
format in real-time, as it is added to the database. Further, any previous data stored in the
database may also be displayed, via the User Interface (UI), to the user or the host server
operator, on demand, in a simple format. As used herein, a simple format to display the data stored in the database may comprise any format for the easy analysis of data, such as a chart, a graph, a graphical or visual representation of the data etc.
[0134] At step [422], the method [400] terminates.
[0135] Referring to FIG. 5, wherein a flow diagram of an exemplary process [500] for real-time monitoring of container statistics, based on an exemplary implementation of the present disclosure is shown.
[0136] At step [S1], a user, who may also be a host server operator, creates one or more Host
Groups and transmits the one or more Host Groups to a Management Unit [204] for validation
of Host IPs and the assignment of the one or more Host Groups to a Collector Module [204A] of the Management Unit [204].
[0137] As used herein, a Host Group may be transmitted by the user, to the management unit
[204], in the form of a Host Group sheet, comprising a list of one or more Host Groups, a list
of one or more Hosts corresponding to each Host Group, the Host IPs associated with each
Host from a Host Group, a stream channel name, a batch size, and one or more container details for all containers associated with each of the one or more Hosts from the one or more Host Groups.
[0138] Further, in an additional step [S1A], the user may also create one or more Resource
Groups and transmit the one or more Resource Groups to the management unit [204] along
with the one or more Host Groups. As used herein, the one or more Resource Groups may refer
to a filter comprising a Host Group name from the one or more Host Groups, or a Resource
Group may comprise a Host Group further comprising one or more Host servers from different
Host Groups. As used herein, the one or more Resource Groups may be created and transmitted
by the user in addition to the one or more Host Groups, wherein the resource group may be
created and transmitted alongside the Host Groups, or separately, at any point of time, by the user, via a User Interface (UI). Further, a Resource Group filter allows the user to get the container stats of only the containers associated with the Host Group that is included within the one or more Resource Groups.
[0139] Next, at step [S2], the Management Unit [204] may store all the information related to
the one or more Host Groups from the Host Group sheet, and the one or more Resource Groups from the Resource Group sheet, into the database [208].
[0140] Thereafter, at step [S3], the Management Unit [204] assigns the one or more Host Groups to a Collector Module, based on the Host Group details received in step [S1].
[0141] Thereafter, at step [S4], the collector module [204A] fetches the details of the one or more target containers, i.e., the containers in a running state, from the one or more Host Groups assigned to the collector module [204A].
[0142] Thereafter, at step [S5], the collector module [204A] determines performance metrics
of the one or more target containers, wherein the metrics may comprise one or more central
processing unit (CPU), hard disk drive (HDD) and Block input/output (I/O) data. The performance metrics determined by the collector module [204A] may be derived from the raw data of container statistics fetched from the container engine corresponding to the one or more target containers. As used herein, the performance metrics determined by the collector module
[204A] may comprise both the parameters of resource utilization by individual containers, and
the collective utilization of the resources by the running containers from the one or more Host Groups that may be monitored at any given time. It may be further understood that the present disclosure encompasses that the performance metrics data determined by the collector module [204A] may be representative of resource utilization in various formats, such
as usage percentage of the available resources, or component-specific utilization metrics, e.g.,
reads and writes of the HDD in Mbps.
[0143] Thereafter, at step [S6], the collector module [204A] transmits the determined performance metrics, comprising the one or more CPU, HDD and Block I/O data to the stream module [204B].
[0144] Thereafter, at step [S7], the stream module [204B] transmits the performance metrics
data, based on the stream channel information provided by the user in the Host Groups sheet, to one or more designated Manager Nodes [206A] associated with the one or more stream channels, for extraction.
[0145] Lastly, at step [S8A], the one or more manager nodes [206A] divide the performance
metrics data into batches, based on the user-defined batch size for the sets of data, and thereafter
transmit it to the database [208] for storage, wherein the stored batch-wise performance data may be displayed to the user in a simplified format, or stored for use and analysis at a later point of time.
[0146] Alternatively, at step [S8B], the one or more manager nodes [206A] may raise an alarm,
indicating one or more faults. The alarm raised by the one or more manager nodes may initiate
a fault manager, wherein the fault manager may resolve the one or more faults detected by the
one or more manager nodes, if possible, or display a relevant fault-information to the user for diagnostic and resolution purposes.
[0147] As used herein, the one or more faults may comprise, one or more errors associated
with performance metrics received at the one or more manager nodes [206A], or one or more
errors associated with the division of the performance metrics data into batches, or the storage
of the batch-wise data in the database [208].
[0148] The present disclosure further discloses a User Equipment (UE) for real-time monitoring of container statistics. The UE comprises, a User Interface (UI) configured to transmit, to a system [200], an information associated with one or more hosts having a set of
containers, wherein the information is transmitted for a storage of a set of performance metrics
of a set of target containers; and to receive, from the system [200], an indication of the storage of the set of performance metrics into a database [208], wherein the storage is based on: receiving, by a transceiver unit [202] of the system [200] via the User Interface (UI), the information; fetching, by a management unit [204] of the system [200] via a collector module
[204A], a set of details associated with the set of target containers from the set of containers to
determine the set of performance metrics of the set of target containers; transmitting, by the management unit [204], the determined set of performance metrics from the collector module [204A] to a stream module [204B]; extracting, by an extraction unit [206] of the system via one or more manager nodes [206A], the transmitted set of performance metrics in one or more
predefined batch sizes; and storing, by the extraction unit [206], the extracted set of
performance metrics into the database [208].
[0149] The present disclosure further discloses a non-transitory computer readable storage medium storing instructions for real-time monitoring of container statistics, wherein the instructions include an executable code which, when executed by one or more units of a
system, causes: a transceiver unit [202] to receive, via a User Interface (UI), an information
associated with one or more hosts having a set of containers. The instructions, when executed, further cause, a management unit [204] to fetch, via a collector module [204A], a set of details associated with a set of target containers from the set of containers to determine a set of performance metrics of the set of target containers; and to transmit, the determined set of
performance metrics from the collector module [204A] to a stream module [204B]. The
instructions, when executed, further cause, an extraction unit [206] to extract, via one or more
manager nodes [206A], the transmitted set of performance metrics in one or more predefined batch sizes, and to store, the extracted set of performance metrics into a database [208].
[0150] As is evident from the above, the present disclosure provides a technically advanced
solution for real-time monitoring of container statistics. The present solution enables the user
to validate one or more host servers via Host IP validation. Further, the present solution enables
the user to monitor the container statistics in real-time. The present solution also enables the
user to identify target containers which are in a running state, and only determine performance
metrics for such target containers, thereby saving system resources. In addition to identifying
and only monitoring running containers, the present solution further enables a user to filter and
categorize host servers into various host groups, wherein the user may specify a group of any
of the hosts, or the associated containers, to monitor the container statistics collectively. As already discussed herein above, the present disclosure provides a solution that enables a number of other novel aspects for the real-time monitoring of container statistics as well.
[0151] While considerable emphasis has been placed herein on the disclosed implementations,
it will be appreciated that many implementations can be made and that many changes can be
made to the implementations without departing from the principles of the present disclosure. These and other changes in the implementations of the present disclosure will be apparent to those skilled in the art, whereby it is to be understood that the foregoing descriptive matter to be implemented is illustrative and non-limiting.

We Claim:
1. A method for real-time monitoring of container statistics, comprising the steps of:
receiving, by a transceiver unit [202] via a user interface (UI), an information associated with one or more hosts having a set of containers;
fetching, by a management unit [204] via a collector module [204A], a set of details associated with a set of target containers from the set of containers to determine a set of performance metrics of the set of target containers;
transmitting, by the management unit [204], the determined set of performance metrics from the collector module [204A] to a stream module [204B];
extracting, by an extraction unit [206] via one or more manager nodes [206A], the transmitted set of performance metrics in one or more predefined batch sizes; and
storing, by the extraction unit [206], the extracted set of performance metrics into a database [208].
2. The method as claimed in claim 1, wherein the set of performance metrics comprises at least one of a central processing unit (CPU) utilization, a hard disk drive (HDD) activity, and a block input/output (I/O) operation.
3. The method as claimed in claim 1, the method comprises automatically adjusting, by the extraction unit [206], at least one of: one or more batch sizes to extract the set of performance metrics, and one or more extraction intervals of the one or more manager nodes [206A], wherein said automatic adjusting is performed dynamically.
4. The method as claimed in claim 1, wherein the information comprises at least one of one or more host names, one or more Internet Protocol (IP) addresses associated with the one or more hosts, and one or more stream channel designations.
5. The method as claimed in claim 4, the method comprises validating, by the management unit [204], each container of the set of containers, wherein the validation comprises a verification of the one or more host names, the one or more IP addresses, and the one or more stream channel designations.

6. The method as claimed in claim 1, wherein each target container from the set of target containers is in a running state.
7. The method as claimed in claim 1, wherein the set of details comprises at least one of a container name and a container identifier corresponding to each target container from the set of target containers.
8. The method as claimed in claim 1, the method comprises running by the collector module [204A], one or more schedulers to:

- identify via the one or more schedulers the set of target containers from the set of containers, and
- determine via the one or more schedulers the set of performance metrics of the set of target containers.
9. A system for real-time monitoring of container statistics, the system comprises:
a transceiver unit [202] configured to receive, via a user interface (UI), an information associated with one or more hosts having a set of containers;
a management unit [204] connected at least to the transceiver unit [202], the management unit [204] configured to:
fetch, via a collector module [204A], a set of details associated with a set of target containers from the set of containers to determine a set of performance metrics of the set of target containers, and
transmit, the determined set of performance metrics from the collector module [204A] to a stream module [204B]; and
an extraction unit [206] connected at least to the management unit [204], the extraction unit [206] configured to:
extract, via one or more manager nodes [206A], the transmitted set of performance
metrics in one or more predefined batch sizes, and
store, the extracted set of performance metrics into a database [208].
10. The system as claimed in claim 9, wherein the set of performance metrics comprises at
least one of a central processing unit (CPU) utilization, a hard disk drive (HDD) activity,
and a block input/output (I/O) operation.

11. The system as claimed in claim 9, wherein the extraction unit [206] is configured to automatically adjust at least one of: one or more batch sizes to extract the set of performance metrics, and one or more extraction intervals of the one or more manager nodes, wherein said automatic adjustment is performed dynamically.
12. The system as claimed in claim 9, wherein the information comprises at least one of one or more host names, one or more Internet Protocol (IP) addresses associated with the one or more hosts, and one or more stream channel designations.
13. The system as claimed in claim 12, wherein the management unit [204] is configured to validate each container of the set of containers, wherein the validation comprises a verification of the one or more host names, the one or more IP addresses, and the one or more stream channel designations.
14. The system as claimed in claim 9, wherein each target container from the set of target containers is in a running state.
15. The system as claimed in claim 9, wherein the set of details comprises at least one of a container name and a container identifier corresponding to each target container from the set of target containers.
16. The system as claimed in claim 9, wherein the collector module [204A] is configured to run one or more schedulers to:

- identify via the one or more schedulers the set of target containers from the set of containers, and
- determine via the one or more schedulers the set of performance metrics of the set of target containers.
17. A User Equipment (UE), the UE comprises:
- a User Interface (UI) configured to:

transmit, to a system [200], an information associated with one or more hosts having a set of containers, wherein the information is transmitted for a storage of a set of performance metrics of a set of target containers; and
receive, from the system [200], an indication of the storage of the set of performance metrics into a database, wherein the storage is based on:
receiving, by a transceiver unit [202] of the system [200] via the User Interface (UI), the information,
fetching, by a management unit [204] of the system via a collector module [204A], a set of details associated with the set of target containers from the set of containers to determine the set of performance metrics of the set of target containers,
transmitting, by the management unit [204], the determined set of performance metrics from the collector module [204A] to a stream module [204B],
extracting, by an extraction unit [206] of the system [200] via one or more manager nodes [206A], the transmitted set of performance metrics in one or more predefined batch sizes, and
storing, by the extraction unit [206], the extracted set of performance metrics into the database [208].

Documents

Application Documents

# Name Date
1 202321047022-STATEMENT OF UNDERTAKING (FORM 3) [12-07-2023(online)].pdf 2023-07-12
2 202321047022-PROVISIONAL SPECIFICATION [12-07-2023(online)].pdf 2023-07-12
3 202321047022-FORM 1 [12-07-2023(online)].pdf 2023-07-12
4 202321047022-FIGURE OF ABSTRACT [12-07-2023(online)].pdf 2023-07-12
5 202321047022-DRAWINGS [12-07-2023(online)].pdf 2023-07-12
6 202321047022-FORM-26 [19-09-2023(online)].pdf 2023-09-19
7 202321047022-Proof of Right [06-10-2023(online)].pdf 2023-10-06
8 202321047022-ORIGINAL UR 6(1A) FORM 1 & 26)-231023.pdf 2023-11-06
9 202321047022-ENDORSEMENT BY INVENTORS [07-07-2024(online)].pdf 2024-07-07
10 202321047022-DRAWING [07-07-2024(online)].pdf 2024-07-07
11 202321047022-CORRESPONDENCE-OTHERS [07-07-2024(online)].pdf 2024-07-07
12 202321047022-COMPLETE SPECIFICATION [07-07-2024(online)].pdf 2024-07-07
13 202321047022-FORM 3 [02-08-2024(online)].pdf 2024-08-02
14 Abstract-1.jpg 2024-08-09
15 202321047022-Request Letter-Correspondence [14-08-2024(online)].pdf 2024-08-14
16 202321047022-Power of Attorney [14-08-2024(online)].pdf 2024-08-14
17 202321047022-Form 1 (Submitted on date of filing) [14-08-2024(online)].pdf 2024-08-14
18 202321047022-Covering Letter [14-08-2024(online)].pdf 2024-08-14
19 202321047022-CERTIFIED COPIES TRANSMISSION TO IB [14-08-2024(online)].pdf 2024-08-14