
System And Method For Discovery Of At Least One Asset In A Network

Abstract: The present invention relates to a system (108) and a method (500) for discovery of at least one asset in a network (106). The method (500) includes the steps of retrieving metrics data related to each of a plurality of assets (110) present in a list; training a model (214) utilizing the retrieved metrics data pertaining to the plurality of assets present in the list; scheduling one or more discovery time slots for each of the plurality of assets utilizing one or more scheduling logics based on an output generated by the trained model (214); categorizing the plurality of assets (110) into one or more batches of assets based on the scheduled one or more discovery time slots for each of the plurality of assets (110); and initiating a discovery process for each batch according to the respective scheduled one or more discovery time slots. Ref. Fig. 2


Patent Information

Application #
Filing Date
20 July 2023
Publication Number
04/2025
Publication Type
INA
Invention Field
COMMUNICATION
Status
Parent Application

Applicants

JIO PLATFORMS LIMITED
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi,

Inventors

1. Aayush Bhatnagar
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi,
2. Ankit Murarka
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi,
3. Rizwan Ahmad
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi,
4. Kapil Gill
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi,
5. Rahul Verma
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi,
6. Arpit Jain
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi,
7. Shashank Bhushan
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi,
8. Kamal Malik
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi,
9. Prakash Gaikwad
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi,
10. Sameer Magu
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi,
11. Supriya De
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi,
12. Tilala Mehul
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi,
13. Kumar Debashish
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi,

Specification

FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2003

COMPLETE SPECIFICATION
(See section 10 and rule 13)
1. TITLE OF THE INVENTION
SYSTEM AND METHOD FOR DISCOVERY OF AT LEAST ONE ASSET IN A NETWORK
2. APPLICANT(S)
NAME NATIONALITY ADDRESS
JIO PLATFORMS LIMITED INDIAN OFFICE-101, SAFFRON, NR. CENTRE POINT, PANCHWATI 5 RASTA, AMBAWADI, AHMEDABAD 380006, GUJARAT, INDIA
3.PREAMBLE TO THE DESCRIPTION

THE FOLLOWING SPECIFICATION PARTICULARLY DESCRIBES THE NATURE OF THIS INVENTION AND THE MANNER IN WHICH IT IS TO BE PERFORMED.

FIELD OF THE INVENTION
[0001] The present invention relates to the field of wireless communication systems, and more particularly to a method and a system for discovery of at least one asset in a network.
BACKGROUND OF THE INVENTION
[0002] A server is a computer program or device that provides a service to another computer program and its user, also known as the client. In a data-center, a physical computer that a server program runs on is also frequently referred to as a server. A server is a powerful machine designed to compute, store, and manage data, devices, and systems over a network. In computing, the server is a piece of computer hardware or software (computer program) that provides functionality for other programs or devices, called "clients".
[0003] Servers can provide various functionalities, often called "services", such as sharing data or resources among multiple clients or performing computations for a client. A single server can serve multiple clients, and a single client can use multiple servers. Typical servers are database servers, file servers, mail servers, print servers, web servers, game servers, and application servers, etc. For example, Google Web Server (GWS) is proprietary web server software that Google uses for its web infrastructure. GWS is used exclusively inside Google's ecosystem for website hosting.
[0004] A server farm or server cluster is a collection of computer servers maintained by an organization to supply server functionality far beyond the capability of a single device. The number of servers keeps on increasing as new servers are added, and for a large organization the number of servers may be 5000 odd.
[0005] Server discovery is a feature that allows client applications to find servers on the network, after which the server information is manually configured. This is time consuming. If there are, for example, over 5000 servers, then the discovery process will take a long time, which is not desired. Further, this places an extra load on the system. In the prior art, this load is not managed; it is generally random and not distributed for optimal utilization of resources.
[0006] In the prior art, discoveries are done in a single go and are manually scheduled. For example, if we have 100 servers and the discovery is scheduled for, say, 1 pm on Friday, the discovery will be performed every Friday at 1 pm. As the requirements of the organization continually increase, new servers keep being added to the system and the number of servers keeps increasing. In the example above, the weekly discovery will be performed at the scheduled time even when the number of servers increases to, say, 500 or 1000. This will overburden the resources and does not provide optimal utilization of resources, which is undesirable.
[0007] It is desired that despite the increasing number of servers in the system, the process of discovery is dynamically and automatically scheduled to optimize the resources in the system for better throughput in the discovery process.
[0008] There is therefore a need for a solution that overcomes the above challenges and provides a system and a method for efficiently managing the process of discovery such as, but not limited to, server discovery which reduces manual intervention, distributes load and provides optimal usage of system resources.
SUMMARY OF THE INVENTION
[0009] One or more embodiments of the present disclosure provide a method and a system for discovery of at least one asset in a network.
[0010] In one aspect of the present invention, a method for discovery of at least one asset in the network is disclosed. The method includes the step of retrieving, by one or more processors, metrics data related to each of a plurality of assets present in a list. The method further includes the step of training, by the one or more processors, a model utilizing the retrieved metrics data pertaining to the plurality of assets present in the list. The method further includes the step of scheduling, by the one or more processors, one or more discovery time slots for each of the plurality of assets utilizing one or more scheduling logics based on an output generated by the trained model. The method further includes the step of categorizing, by the one or more processors, the plurality of assets into one or more batches of assets based on the scheduled one or more discovery time slots for each of the plurality of assets. The method further includes the step of initiating, by the one or more processors, a discovery process for each batch according to the respective scheduled one or more discovery time slots.
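The five claimed steps can be sketched as a minimal pipeline. This is an illustrative sketch only; the function name, the pluggable helper callables, and their signatures are assumptions, not part of the claimed method.

```python
from typing import Callable, Dict, List

def discover_assets(
    assets: List[str],
    retrieve_metrics: Callable[[str], dict],
    train_and_score: Callable[[Dict[str, dict]], Dict[str, int]],
    run_discovery: Callable[[int, List[str]], None],
) -> Dict[int, List[str]]:
    """Run the five claimed steps with caller-supplied helpers."""
    # Step 1: retrieve metrics data for each asset present in the list.
    metrics = {asset: retrieve_metrics(asset) for asset in assets}
    # Steps 2-3: train the model and let it yield a discovery time slot per asset.
    slots = train_and_score(metrics)
    # Step 4: categorize assets into batches keyed by their scheduled slot.
    batches: Dict[int, List[str]] = {}
    for asset, slot in slots.items():
        batches.setdefault(slot, []).append(asset)
    # Step 5: initiate the discovery process batch by batch, in slot order.
    for slot in sorted(batches):
        run_discovery(slot, batches[slot])
    return batches
```

The helpers are injected so that any model or scheduling logic can be plugged in without changing the pipeline itself.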
[0011] In another embodiment, each of the plurality of assets includes at least one of a server and an Internet Protocol (IP) asset.
[0012] In yet another embodiment, the one or more processors create the list by identifying the plurality of assets present in the network and adding the plurality of assets to the list.
[0013] In yet another embodiment, each of the scheduled discovery time slots is non-overlapping with the other scheduled discovery time slots.
[0014] In yet another embodiment, the scheduled discovery time slot is based upon the traffic/load pertaining to each of the plurality of assets present in the list.
[0015] In yet another embodiment, the retrieved metrics data includes at least one of a memory utilization, a central processing unit (CPU) usage, and a response time related to each of the plurality of assets.
[0016] In yet another embodiment, the scheduling logics are developed by the one or more processors in order to intelligently schedule the one or more discovery time slots for each of the plurality of assets.
[0017] In another aspect of the present invention a system for discovery of at least one asset in the network is disclosed. The system includes a retrieving unit configured to retrieve metrics data related to each of a plurality of assets present in a list. The system includes a training unit, configured to train a model utilizing the retrieved metrics data pertaining to the plurality of assets present in the list. The system further includes a scheduler configured to schedule one or more discovery time slots for each of the plurality of assets utilizing one or more scheduling logics based on an output generated by the trained model. The system further includes a categorizing unit configured to categorize the plurality of assets into one or more batches of assets based on the scheduled one or more discovery time slots for each of the plurality of assets. The system further includes an asset discovery manager, configured to initiate a discovery process for each batch according to the respective scheduled one or more discovery time slots.
[0018] In yet another aspect of the present invention, a non-transitory computer-readable medium having stored thereon computer-readable instructions that, when executed by a processor, cause the processor to perform the following operations is disclosed. The processor is configured to retrieve metrics data related to each of a plurality of assets present in a list. The processor is further configured to train a model utilizing the retrieved metrics data pertaining to the plurality of assets present in the list. The processor is further configured to schedule one or more discovery time slots for each of the plurality of assets utilizing one or more scheduling logics based on an output generated by the trained model. The processor is further configured to categorize the plurality of assets into one or more batches of assets based on the scheduled one or more discovery time slots for each of the plurality of assets. The processor is further configured to initiate a discovery process for each batch according to the respective scheduled one or more discovery time slots.
[0019] Other features and aspects of this invention will be apparent from the following description and the accompanying drawings. The features and advantages described in this summary and in the following detailed description are not all-inclusive, and particularly, many additional features and advantages will be apparent to one of ordinary skill in the relevant art, in view of the drawings, specification, and claims hereof. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes and may not have been selected to delineate or circumscribe the inventive subject matter, resort to the claims being necessary to determine such inventive subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
[0020] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0021] FIG. 1 is an exemplary block diagram of an environment for discovery of at least one asset in a network, according to one or more embodiments of the present invention;
[0022] FIG. 2 is an exemplary block diagram of a system for discovery of at least one asset in the network, according to one or more embodiments of the present invention;
[0023] FIG. 3 is an exemplary architecture of the system of FIG. 2, according to one or more embodiments of the present invention;
[0024] FIG. 4 is an exemplary architecture diagram illustrating the flow for discovery of at least one asset in the network, according to one or more embodiments of the present disclosure; and
[0025] FIG. 5 is a flow diagram of a method for discovery of at least one asset in the network, according to one or more embodiments of the present invention.
[0026] The foregoing shall be more apparent from the following detailed description of the invention.
DETAILED DESCRIPTION OF THE INVENTION
[0027] Some embodiments of the present disclosure, illustrating all its features, will now be discussed in detail. It must also be noted that as used herein and in the appended claims, the singular forms "a", "an" and "the" include plural references unless the context clearly dictates otherwise.
[0028] Various modifications to the embodiment will be readily apparent to those skilled in the art and the generic principles herein may be applied to other embodiments. However, one of ordinary skill in the art will readily recognize that the present disclosure including the definitions listed here below are not intended to be limited to the embodiments illustrated but is to be accorded the widest scope consistent with the principles and features described herein.
[0029] A person of ordinary skill in the art will readily ascertain that the illustrated steps detailed in the figures and here below are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0030] The present invention discloses a system and a method for discovery of at least one asset in a network. More particularly, a system and a method for minimizing the impact on real-time traffic during an asset discovery are disclosed. The proposed invention offers a novel solution to automatically schedule the discovery of at least one asset in the network by intelligently dividing the assets into batches of assets and scheduling one or more time slots for discoveries. By automating the discovery process, the invention substantially reduces the time taken to complete the discoveries and minimizes the effect on the performance of the assets. The system and method use an Artificial Intelligence/Machine Learning (AI/ML) model to analyze the metrics of each asset and determine the optimal time slot for performing the discovery process.
[0031] Referring to FIG. 1, FIG. 1 illustrates an exemplary block diagram of an environment 100 for discovery of at least one asset in the network, according to one or more embodiments of the present invention. The environment 100 includes a User Equipment (UE) 102, a server 104, a network 106, a system 108, and a plurality of assets 110. A user interacts with the system 108 utilizing the UE 102.
[0032] For the purpose of description and explanation, the description will be provided with respect to one or more User Equipments (UEs) 102, or more specifically with respect to a first UE 102a, a second UE 102b, and a third UE 102c, and should nowhere be construed as limiting the scope of the present disclosure. Each of the at least one UE 102, namely the first UE 102a, the second UE 102b, and the third UE 102c, is configured to connect to the server 104 via the network 106.
[0033] In an embodiment, each of the first UE 102a, the second UE 102b, and the third UE 102c is one of, but not limited to, any electrical, electronic, or electro-mechanical equipment, or a combination of one or more of such devices, such as a smartphone, a Virtual Reality (VR) device, an Augmented Reality (AR) device, a laptop, a general-purpose computer, a desktop, a personal digital assistant, a tablet computer, a mainframe computer, or any other computing device.
[0034] The network 106 includes, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof. The network 106 may include, but is not limited to, a Third Generation (3G), a Fourth Generation (4G), a Fifth Generation (5G), a Sixth Generation (6G), a New Radio (NR), a Narrow Band Internet of Things (NB-IoT), an Open Radio Access Network (O-RAN), and the like.
[0035] The network 106 may also include, by way of example but not limitation, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, process, or a combination thereof, etc. one or more messages, packets, signals, waves, voltage or current levels, some combination thereof, or so forth.
[0036] The environment 100 includes the server 104 accessible via the network 106. The server 104 may include, by way of example but not limitation, one or more of a standalone server, a server blade, a server rack, a bank of servers, a server farm, hardware supporting a part of a cloud service or system, a home server, hardware running a virtualized server, a processor executing code to function as a server, one or more machines performing server-side functionality as described herein, at least a portion of any of the above, or some combination thereof. In an embodiment, the entity operating the server 104 may include, but is not limited to, a vendor, a network operator, a company, an organization, a university, a lab facility, a business enterprise, a defense facility, or any other facility that provides a service.
[0037] The environment 100 includes the plurality of assets 110 which are communicably coupled to the server 104 via the network 106. In one embodiment, the plurality of assets 110 includes at least one of, but not limited to, a server and an Internet Protocol (IP) asset. The at least one asset such as the server may include by way of example but not limitation, one or more of a standalone server, a server blade, a server rack, a bank of servers, a server farm, hardware supporting a part of a cloud service or system, a home server, hardware running a virtualized server, a processor executing code to function as a server, one or more machines performing server-side functionality as described herein, at least a portion of any of the above, some combination thereof.
[0038] The plurality of assets 110 are any equipment or devices which are used in the network 106. The plurality of assets 110 are essential components that make up the intricate infrastructure of the network 106. The plurality of assets 110 play a vital role in facilitating communication, ensuring connectivity, and supporting the transmission of data in the telecommunications industry. The plurality of assets 110 includes at least one of, but not limited to, one or more servers, one or more base stations, one or more databases, one or more antennas, etc.
[0039] The environment 100 further includes the system 108 communicably coupled to the server 104, the plurality of assets 110, and the UE 102 via the network 106. The system 108 is adapted to be embedded within the server 104 or deployed as an individual entity.
[0040] Operational and construction features of the system 108 will be explained in detail with respect to the following figures.
[0041] FIG. 2 is an exemplary block diagram of the system 108 for discovery of at least one asset in the network 106, according to one or more embodiments of the present invention.
[0042] As per the illustrated and preferred embodiment, the system 108 for discovery of at least one asset in the network 106 includes one or more processors 202, a memory 204, and a storage unit 206. The one or more processors 202, hereinafter referred to as the processor 202, may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, single board computers, and/or any devices that manipulate signals based on operational instructions. However, it is to be noted that the system 108 may include multiple processors as per the requirement without deviating from the scope of the present disclosure. Among other capabilities, the processor 202 is configured to fetch and execute computer-readable instructions stored in the memory 204.
[0043] As per the illustrated embodiment, the processor 202 is configured to fetch and execute computer-readable instructions stored in the memory 204 as the memory 204 is communicably connected to the processor 202. The memory 204 is configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to discover at least one asset in the network 106. The memory 204 may include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as disk memory, EPROMs, FLASH memory, unalterable memory, and the like.
[0044] As per the illustrated embodiment, the storage unit 206 is configured to store a list of the plurality of assets 110 in the network 106 and metrics data related to each of the plurality of assets 110 present in the list. The storage unit 206 is one of, but not limited to, a centralized database, a cloud-based database, a commercial database, an open-source database, a distributed database, an end-user database, a graphical database, a Not-only Structured Query Language (NoSQL) database, an object-oriented database, a personal database, an in-memory database, a document-based database, a time series database, a wide column database, a key-value database, a search database, a cache database, and so forth. The foregoing examples of storage unit 206 types are non-limiting and may not be mutually exclusive; e.g., the database can be both commercial and cloud-based, or both relational and open-source, etc.
[0045] As per the illustrated embodiment, the system 108 includes the processor 202 for discovery of at least one asset in the network 106. The processor 202 includes a creation unit 208, a retrieving unit 210, a training unit 212, an Artificial Intelligence/Machine Learning (AI/ML) model 214, a scheduler 216, a categorizing unit 218, and an asset discovery manager 220. The processor 202 is communicably coupled to the one or more components of the system 108 such as the storage unit 206, and the memory 204. In an embodiment, operations and functionalities of the creation unit 208, the retrieving unit 210, the training unit 212, the AI/ML model 214, the scheduler 216, the categorizing unit 218, the asset discovery manager 220, and the one or more components of the system 108 can be used in combination or interchangeably.
[0046] In an embodiment, the creation unit 208 of the processor 202 is configured to create the list pertaining to the plurality of assets 110. In particular, the creation unit 208 identifies the plurality of assets present in the network 106 and adds the identified plurality of assets in a sequential manner in order to create the list of the plurality of assets present in the network 106. For example, let us assume there are 100 servers in the network 106. The creation unit 208 then creates the list as Server 1, Server 2, ..., Server 100. In one embodiment, the creation unit 208 adds the metrics data related to each of the plurality of assets present in the network 106 to the created list. Thereafter, the creation unit 208 stores the created list in the storage unit 206, which ensures that the created list is available for at least one of, but not limited to, retrieval and analysis.
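The list-creation behavior described for the creation unit 208 can be sketched as follows; the function name, the `network_scan` iterable, and the `metrics_source` callable are illustrative assumptions.

```python
def create_asset_list(network_scan, metrics_source):
    """Build the sequential asset list and attach each asset's metrics,
    mirroring the described behavior of the creation unit 208."""
    asset_list = []
    for index, asset_id in enumerate(network_scan, start=1):
        asset_list.append({
            "name": f"Server {index}",            # sequential naming: Server 1, Server 2, ...
            "asset_id": asset_id,
            "metrics": metrics_source(asset_id),  # metrics data added to the created list
        })
    return asset_list
```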
[0047] In an embodiment, the retrieving unit 210 of the processor 202 is configured to retrieve the metrics data related to each of the plurality of assets 110 present in the list. In particular, the retrieving unit 210 retrieves the metrics data related to each of the plurality of assets 110 from the list created by the creation unit 208. In an alternate embodiment, the retrieving unit 210 retrieves the metrics data related to each of the plurality of assets 110 from the storage unit 206 which includes the list created by the creation unit 208. In one embodiment, the metrics data are measures of quantitative assessment commonly used for comparing and tracking performance of the plurality of assets 110 present in the network 106. The retrieved metrics data includes at least one of, but not limited to, a memory utilization, a Central Processing Unit (CPU) usage, a request per second served, a latency in serving the request, and a response time related to each of the plurality of assets 110.
[0048] In an embodiment, the training unit 212 of the processor 202 is configured to train the AI/ML model 214 utilizing the retrieved metrics data pertaining to the plurality of assets 110 present in the list. In particular, the retrieved metrics data is fed to the AI/ML model 214 by the training unit 212. While training, the AI/ML model 214 learns at least one of, but not limited to, trends, patterns, and behavior related to the metrics data of the plurality of assets 110. In one embodiment, the system 108 selects an appropriate AI/ML model 214 from a set of available AI/ML models. Thereafter, the selected AI/ML model 214 is trained using the retrieved metrics data. In one embodiment, the AI/ML model 214 is trained by the training unit 212 using historical metrics data pertaining to the plurality of assets 110.
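As a deliberately simple stand-in for the trained AI/ML model 214, one could predict each asset's near-term load from its recent metric samples; the moving-average approach and all names below are assumptions for illustration, not the claimed model.

```python
def predict_load(history, window=3):
    """Predict each asset's near-term load as the mean of its most recent
    `window` metric samples (a simple stand-in for the trained model)."""
    return {
        asset: sum(samples[-window:]) / len(samples[-window:])
        for asset, samples in history.items()
    }
```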
[0049] In an alternate embodiment, the retrieved metrics data is normalized by a normalizer before the AI/ML model 214 is trained on it. The normalizer may be included in the processor 202. In particular, the normalizer preprocesses the retrieved metrics data, which includes at least one of, but not limited to, reorganizing the retrieved metrics data, removing redundant data from the retrieved metrics data, formatting the retrieved metrics data, and removing null values from the retrieved metrics data. The main goal of the normalizer is to achieve a standardized data format across the entire system 108. The normalizer ensures that the normalized data is stored appropriately in the storage unit 206. In particular, the training unit 212 receives the normalized data from the normalizer and trains the AI/ML model 214 utilizing the normalized metrics data.
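The normalizer's preprocessing steps can be sketched in a few lines; the record layout (`asset`, `metric`, `value` fields) is an assumption made for illustration.

```python
def normalize_metrics(raw_rows):
    """Preprocess retrieved metrics as described for the normalizer: drop
    rows containing null values, drop redundant (duplicate) rows, and
    standardize the value format."""
    seen = set()
    cleaned = []
    for row in raw_rows:
        if any(value is None for value in row.values()):
            continue                      # remove null values
        key = (row["asset"], row["metric"])
        if key in seen:
            continue                      # remove redundant data
        seen.add(key)
        cleaned.append({"asset": row["asset"],
                        "metric": row["metric"],
                        "value": float(row["value"])})  # standardized format
    return cleaned
```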
[0050] In an embodiment, upon training of the AI/ML model 214, the trained AI/ML model 214 generates an output based on the retrieved metrics data fed to the AI/ML model 214. In an alternate embodiment, based on the generated output, the training unit 212 feeds regular updates related to the generated output back to the AI/ML model 214 in order to generate a more accurate output. Thereafter, the scheduler 216 of the processor 202 is configured to schedule one or more discovery time slots for each of the plurality of assets 110 utilizing one or more scheduling logics based on the output generated by the trained AI/ML model 214. The one or more discovery time slots are the range of time slots in which the processor 202 can perform a discovery process for each of the plurality of assets present in the network 106. In an alternate embodiment, the one or more discovery time slots are the range of time slots during which the plurality of assets 110 are not busy. In particular, the one or more scheduling logics refer to systematic and analytical methods used to make decisions and calculations in order to schedule the one or more discovery time slots based on the output generated by the trained AI/ML model 214.
[0051] In one embodiment, the one or more scheduling logics include at least one of, but not limited to, k-means clustering, hierarchical clustering, Principal Component Analysis (PCA), Independent Component Analysis (ICA), deep learning logics such as Artificial Neural Networks (ANNs), Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Long Short-Term Memory networks (LSTMs), and Generative Adversarial Networks (GANs), Q-learning, Deep Q-Networks (DQNs), reinforcement learning logics, etc.
[0052] The trained AI/ML model 214 generates the output pertaining to at least one of, but not limited to, the traffic/load on each of the plurality of assets 110 present in the list created by the creation unit 208. Further, the one or more scheduling logics, which are developed by the scheduler 216, intelligently schedule the one or more discovery time slots for each of the plurality of assets 110 based on the traffic/load on each of the plurality of assets 110. In one embodiment, the one or more discovery time slots are scheduled by the scheduler 216 in such a way that the scheduled discovery time slots for each of the plurality of assets 110 do not overlap with each other.
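One of the listed logics, k-means clustering, can be sketched over one-dimensional predicted load values, with lighter-loaded clusters assigned earlier slots. This tiny implementation and the slot-assignment rule are illustrative assumptions (it assumes at least as many assets as slots), not the claimed scheduling logic.

```python
def kmeans_1d(values, k, iters=20):
    """Tiny 1-D k-means over predicted per-asset load values; a minimal
    stand-in for the clustering logics listed above."""
    # Seed centroids with evenly spaced values from the sorted input.
    centroids = sorted(values)[:: max(1, len(values) // k)][:k]
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for v in values:
            nearest = min(range(len(centroids)), key=lambda i: abs(v - centroids[i]))
            clusters[nearest].append(v)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

def schedule_slots(predicted_load, n_slots):
    """Map each asset to a non-overlapping slot index: the lightest-loaded
    cluster receives the earliest slot."""
    centroids = sorted(kmeans_1d(list(predicted_load.values()), n_slots))
    return {asset: min(range(n_slots), key=lambda i: abs(load - centroids[i]))
            for asset, load in predicted_load.items()}
```

Assets in the same cluster share a slot, so discovery load stays grouped by similar traffic profiles rather than being scheduled at random.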
[0053] In an embodiment, the categorizing unit 218 of the processor 202 is configured to categorize the plurality of assets 110 into one or more batches of assets based on the scheduled one or more discovery time slots for each of the plurality of assets 110. The one or more batches are groups of the plurality of assets 110. In particular, the categorizing unit 218 divides the total number of the plurality of assets 110 present in the network 106 into smaller groups or batches, for example, Batch 1, Batch 2, ..., Batch N, each of which includes a subset of the plurality of assets 110.
[0054] In an embodiment, the scheduler 216 and the categorizing unit 218 simultaneously schedule the one or more discovery time slots for each of the plurality of assets 110 and categorize the plurality of assets 110 into the one or more batches based on the scheduled one or more discovery time slots for each of the plurality of assets 110. In particular, the scheduler 216 and the categorizing unit 218 may be a single entity performing their respective functions without deviating from the scope of the present disclosure.
[0055] In an embodiment, the asset discovery manager 220 of the processor 202 is configured to initiate the discovery process for each batch of the plurality of assets 110 according to the respective scheduled one or more discovery time slots. In one embodiment, the discovery process discovers multiple instances of the plurality of assets 110 running on a host. In another embodiment, the discovery process allows users to find the plurality of assets 110 on the network 106 and then discover how to connect with them. In the discovery process, the system 108 may retrieve information pertaining to the plurality of assets 110, such as at least one of, but not limited to, an IP address and a port number or address. The IP address is the unique identifying number assigned to each device of the plurality of assets 110 connected to the network 106. The port number is a number assigned to uniquely identify a connection endpoint and to direct data to a specific service. Once the user has the information of the plurality of assets 110, the user stores the information and uses the stored information to connect directly to at least one of the plurality of assets 110 without going through the discovery process again.
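The "store once, connect directly thereafter" behavior described above can be sketched with a small cache; the `probe` callable standing in for the discovery step is an illustrative assumption.

```python
class DiscoveryCache:
    """Stores the connection information (IP address, port) learned during
    discovery so that later connections skip the discovery process."""

    def __init__(self, probe):
        self._probe = probe   # probe(asset) -> (ip_address, port); assumed interface
        self._known = {}

    def connect_info(self, asset):
        if asset not in self._known:          # discovery runs only once per asset
            self._known[asset] = self._probe(asset)
        return self._known[asset]             # later calls reuse the stored info
```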
[0056] For example, let us assume there are 1000 assets, such as servers, in the network 106, and the list created for the 1000 assets is stored in the storage unit 206. In particular, the system 108 is required to discover the 1000 assets in the network 106. Further, the system 108 retrieves the metrics data related to each of the 1000 assets/servers present in the list. Based on the retrieved metrics data, the system 108 trains the AI/ML model 214. Furthermore, the system 108 schedules the one or more discovery time slots for each of the 1000 assets utilizing the trained AI/ML model 214. Thereafter, the system 108 divides the 1000 assets into the one or more batches of assets based on the scheduled one or more discovery time slots. Then the system 108 initiates the discovery process for each batch according to the respective scheduled one or more discovery time slots. For example, the discovery of Batch 1, such as server 1 to server 300, will be performed every Friday at 1 pm. Similarly, the discovery of Batch 2, such as server 301 to server 600, will be performed every Saturday at 1 pm.
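The grouping of the 1000 servers by their assigned time slots may be sketched as follows; the slot labels and the `slot_for` assignment function are hypothetical stand-ins for the output of the trained AI/ML model 214.

```python
from collections import defaultdict

def schedule_and_batch(assets, slot_for):
    # Group assets by their assigned discovery time slot; each group
    # corresponds to one batch discovered together in that slot.
    batches = defaultdict(list)
    for asset in assets:
        batches[slot_for(asset)].append(asset)
    return dict(batches)

# Hypothetical model output: servers 1-300 on Friday 1 pm, the rest on Saturday 1 pm.
servers = ["server-%d" % i for i in range(1, 1001)]
slot_for = lambda s: "Friday 13:00" if int(s.split("-")[1]) <= 300 else "Saturday 13:00"
batches = schedule_and_batch(servers, slot_for)
```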
[0057] The creation unit 208, the retrieving unit 210, the training unit 212, the AI/ML model 214, the scheduler 216, the categorizing unit 218, and the asset discovery manager 220, in an exemplary embodiment, are implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor 202. In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processor 202 may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processor may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the memory 204 may store instructions that, when executed by the processing resource, implement the processor 202. In such examples, the system 108 may comprise the memory 204 storing the instructions and the processing resource to execute the instructions, or the memory 204 may be separate but accessible to the system 108 and the processing resource. In other examples, the processor 202 may be implemented by electronic circuitry.
[0058] FIG. 3 illustrates an exemplary architecture for the system 108, according to one or more embodiments of the present invention. More specifically, FIG. 3 illustrates the system 108 for discovery of at least one asset in the network 106. It is to be noted that the embodiment with respect to FIG. 3 is explained with respect to the UE 102 for the purpose of description and illustration, and should nowhere be construed as limiting the scope of the present disclosure.
[0059] FIG. 3 shows communication between the UE 102, the system 108, and the plurality of assets 110. For the purpose of description of the exemplary embodiment as illustrated in FIG. 3, the UE 102 and the plurality of assets 110 use a network protocol connection to communicate with the system 108. In an embodiment, the network protocol connection is the establishment and management of communication between the UE 102, the plurality of assets 110, and the system 108 over the network 106 (as shown in FIG. 1) using a specific protocol or set of protocols. The network protocol connection includes, but is not limited to, Session Initiation Protocol (SIP), System Information Block (SIB) protocol, Transmission Control Protocol (TCP), User Datagram Protocol (UDP), File Transfer Protocol (FTP), Hypertext Transfer Protocol (HTTP), Simple Network Management Protocol (SNMP), Internet Control Message Protocol (ICMP), Hypertext Transfer Protocol Secure (HTTPS), and Terminal Network (TELNET).
[0060] In an embodiment, the UE 102 includes a primary processor 302, a memory 304, and a User Interface (UI) 306. In alternate embodiments, the UE 102 may include more than one primary processor 302 as per the requirement of the network 106. The primary processor 302 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, single board computers, and/or any devices that manipulate signals based on operational instructions.
[0061] In an embodiment, the primary processor 302 is configured to fetch and execute computer-readable instructions stored in the memory 304. The memory 304 may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to discover at least one asset in the network 106. The memory 304 may include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as disk memory, EPROMs, FLASH memory, unalterable memory, and the like.
[0062] In an embodiment, the User Interface (UI) 306 includes a variety of interfaces, for example, a graphical user interface, a web user interface, a Command Line Interface (CLI), and the like. The User Interface (UI) 306 allows the user to request the system 108 for the discovery of at least one asset in the network 106. In one embodiment, the user may include at least one of, but not limited to, a network operator.
[0063] As mentioned earlier with reference to FIG. 2, the system 108 includes the processor 202, the memory 204, and the storage unit 206 for discovery of at least one asset in the network 106, which are already explained in FIG. 2. For the sake of brevity, a similar description related to the working and operation of the system 108 as illustrated in FIG. 2 has been omitted to avoid repetition.
[0064] Further, as mentioned earlier, the processor 202 includes the creation unit 208, the retrieving unit 210, the training unit 212, the AI/ML model 214, the scheduler 216, the categorizing unit 218, and the asset discovery manager 220, which are already explained in FIG. 2. Hence, for the sake of brevity, a similar description related to the working and operation of the system 108 as illustrated in FIG. 2 has been omitted to avoid repetition. The limited description provided for the system 108 in FIG. 3 should be read with the description provided for the system 108 in FIG. 2 above, and should not be construed as limiting the scope of the present disclosure.
[0065] FIG. 4 is an exemplary architecture diagram illustrating the flow for discovery of at least one asset in the network 106, according to one or more embodiments of the present disclosure. The architecture 400 includes the storage unit 206, the training unit 212 and an asset management module 402.
[0066] In one embodiment, the asset management module 402 is a single entity which performs the functions of the creation unit 208, the retrieving unit 210, the AI/ML model 214, the scheduler 216, the categorizing unit 218, and the asset discovery manager 220 without deviating from the scope of the present disclosure. Further, the architecture 400 includes the discovery data related to the plurality of assets 110.
[0067] In one embodiment, based on training done by the asset management module 402 utilizing the training unit 212, the asset management module 402 schedules the one or more discovery time slots for each of the plurality of assets 110 and further categorizes the plurality of assets 110 into one or more batches of the plurality of assets 110 based on the scheduled one or more discovery time slots for each of the plurality of assets 110.
[0068] For example, let us assume the plurality of assets 110 are a plurality of servers. Further, the plurality of servers includes servers in the thousands, for example 5000 servers (Server 1, Server 2, Server 3, Server 4… Server 5000). The asset management module 402 divides the 5000 servers into groups/lots/batches of nearly equal or unequal numbers of servers. For example, a Batch 1 has 10 servers (Server 1, Server 2, Server 3… Server 10), a Batch 2 has 40 servers (Server 11, Server 12, Server 13… Server 50), a Batch 3 has 50 servers (Server 51, Server 52, Server 53… Server 100), and so on.
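The division into nearly equal or unequal batches may be sketched as a simple list partition; the batch sizes below mirror the Batch 1/Batch 2/Batch 3 example and are illustrative only.

```python
def partition(servers, sizes):
    # Split the server list into consecutive batches of the given
    # (possibly unequal) sizes; any remaining servers form a final batch.
    batches, start = [], 0
    for size in sizes:
        batches.append(servers[start:start + size])
        start += size
    if start < len(servers):
        batches.append(servers[start:])
    return batches

servers = ["Server %d" % i for i in range(1, 101)]
# Batch 1: 10 servers, Batch 2: 40 servers, remainder (50 servers): Batch 3.
batches = partition(servers, [10, 40])
```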
[0069] In one embodiment, subsequent to categorizing the plurality of assets 110 into batches, the asset management module 402 executes the discovery process for each batch according to the respective scheduled one or more discovery time slots and collects the discovery data from the plurality of assets 110. The discovery data includes information pertaining to at least one of, but not limited to, a middleware 410, technologies/applications 412 and an operating system 408 related to the plurality of assets 110. The discovery data includes at least one of, but not limited to, version details of the middleware, location, installation paths, log paths, home/log paths, error log paths, process ID, and port details. The middleware 410 is software that acts as a bridge between an operating system 408 or storage unit 206 and applications 412, especially on the network 106. The operating system 408 is a software package that runs applications 412 and serves as a communication link (interface) between the system 108 and the user. The allocation of services and resources, like devices, memory, processors, and information, is the primary duty of the operating system 408.
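The collected discovery data may be represented, for illustration, by a per-asset record such as the following; the field names are assumptions about one possible schema, not the claimed data format.

```python
from dataclasses import dataclass, field

@dataclass
class DiscoveryRecord:
    # One possible shape for per-asset discovery data: operating system,
    # middleware details, running applications, paths, process ID, and port.
    asset_id: str
    operating_system: str = ""
    middleware: dict = field(default_factory=dict)   # e.g. {"name": ..., "version": ...}
    applications: list = field(default_factory=list)
    install_path: str = ""
    log_path: str = ""
    process_id: int = 0
    port: int = 0
```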
[0070] In an embodiment, the middleware 410 pertains to the plurality of assets 110, including at least one of Apache, Tomcat, JBoss, Oracle Hypertext Transfer Protocol Server (OHS), Webserver, and WebLogic. In one embodiment, the technologies/applications 412 running on the plurality of assets 110 include at least one of Redis, Kafka, Elasticsearch, Docker, Keydb, Aerospike, My Structured Query Language (MySQL), Oracle, Data Business Language (DBL), Spike, Cassandra, MongoDB, Hadoop, and Spark.
[0071] As such, the above techniques of the present disclosure provide multiple advantages, including reduced impact on real-time traffic by providing efficient resource utilization while efficiently handling a large number of assets during the discovery process. The invention also provides for minimal impact on remote assets and improves the performance of the plurality of assets 110.
[0072] FIG. 5 is a flow diagram of a method 500 for discovery of at least one asset in the network 106, according to one or more embodiments of the present invention. For the purpose of description, the method 500 is described with the embodiments as illustrated in FIG. 2 and should nowhere be construed as limiting the scope of the present disclosure.
[0073] At step 502, the method 500 includes the step of retrieving metrics data related to each of the plurality of assets 110 present in the list. In particular, the retrieving unit 210 is configured to retrieve metrics data related to each of the plurality of assets 110 present in the list. In one embodiment, the creation unit 208 creates the list to which each of the plurality of assets 110 is added by the creation unit 208. In an alternate embodiment, the metrics data related to each of the plurality of assets 110 may be included in the list. In one example, for 1000 servers present in the network 106, the retrieving unit 210 retrieves the metrics data, such as the memory utilization, the central processing unit (CPU) usage, and the response time pertaining to each of the 1000 servers, from at least one of the list created by the creation unit 208 and the storage unit 206.
[0074] At step 504, the method 500 includes the step of training the model 214 utilizing the retrieved metrics data pertaining to the plurality of assets 110 present in the list. In particular, the training unit 212 utilizes the retrieved metrics data in order to train the AI/ML model 214. The AI/ML model 214 learns at least one of, but not limited to, the trends, the patterns, and the behaviour of the plurality of assets 110. For example, based on training, the AI/ML model 214 figures out the traffic/load handled by the plurality of assets 110 on a particular day. In yet another example, based on training, the AI/ML model 214 identifies a time slot when each of the plurality of assets 110 is not busy serving one or more requests.
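As a toy stand-in for this training step, a model could aggregate the retrieved metrics per time slot and report the quietest one; the tuple format of the samples and the use of CPU usage alone are simplifying assumptions for illustration.

```python
from collections import defaultdict

def learn_idle_slot(samples):
    # samples: (day, hour, cpu_percent) tuples drawn from the metrics data.
    # Average the CPU usage per (day, hour) slot and return the slot with
    # the lowest average, i.e. when the asset is least busy serving requests.
    totals, counts = defaultdict(float), defaultdict(int)
    for day, hour, cpu in samples:
        totals[(day, hour)] += cpu
        counts[(day, hour)] += 1
    return min(totals, key=lambda slot: totals[slot] / counts[slot])
```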
[0075] At step 506, the method 500 includes the step of scheduling the one or more discovery time slots for each of the plurality of assets 110 utilizing one or more scheduling logics based on an output generated by the trained model 214. In particular, the scheduler 216 is configured to schedule the one or more discovery time slots for each of the plurality of assets 110 utilizing one or more scheduling logics based on an output generated by the trained model 214. For example, the trained model 214, such as the AI/ML model 214, generates output pertaining to the time slot when each of the plurality of assets 110, such as the servers in the network 106, is idle or less busy serving one or more requests. Let us assume that every Monday from 1 pm to 3 pm the servers are not busy. Based on the output of the AI/ML model 214, the scheduler 216 schedules the one or more discovery time slots, such as Monday from 1 pm to 3 pm, for each of the servers present in the network 106.
[0076] At step 508, the method 500 includes the step of categorizing the plurality of assets 110 into one or more batches of assets based on the scheduled one or more discovery time slots for each of the plurality of assets 110. In particular, the categorizing unit 218 is configured to categorize the plurality of assets 110 into one or more batches of assets based on the scheduled one or more discovery time slots for each of the plurality of assets 110. For example, let us assume there are 1000 servers in the network. Out of the 1000 servers in the network, the categorizing unit 218 divides the 1000 servers into smaller groups/batches, such as a Batch 1 of 300 servers, a Batch 2 of 300 servers, and a Batch 3 of 400 servers.
[0077] At step 510, the method 500 includes the step of initiating the discovery process for each batch according to the respective scheduled one or more discovery time slots. In particular, the asset discovery manager 220 is configured to initiate the discovery process for each batch according to the respective scheduled one or more discovery time slots. For example, the asset discovery manager 220 processes one batch at a time. The discovery process for the Batch 1 of 300 servers is performed in the one or more discovery time slots such as on Monday from 1 pm to 3 pm, the discovery process for the Batch 2 of 300 servers is performed in the one or more discovery time slots such as on Tuesday, and the discovery process for the Batch 3 of 400 servers is performed in the one or more discovery time slots such as on Tuesday, but at a different time from the Batch 2.
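The one-batch-at-a-time initiation may be sketched as follows; the `discover` callable is a placeholder for the actual per-asset discovery probe, and ordering the slots by their string labels is a simplification for illustration.

```python
def run_discovery(batch_schedule, discover):
    # batch_schedule maps a time-slot label to its batch of assets.
    # Process one batch at a time, applying the discovery callable to
    # each asset in the batch, and collect the results per asset.
    results = {}
    for slot in sorted(batch_schedule):
        for asset in batch_schedule[slot]:
            results[asset] = discover(asset)
    return results
```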
[0078] The present invention further discloses a non-transitory computer-readable medium having stored thereon computer-readable instructions. The computer-readable instructions are executed by the processor 202. The processor 202 is configured to retrieve metrics data related to each of a plurality of assets 110 present in a list. The processor 202 is further configured to train a model 214 utilizing the retrieved metrics data pertaining to the plurality of assets 110 present in the list. The processor 202 is further configured to schedule one or more discovery time slots for each of the plurality of assets 110 utilizing one or more scheduling logics based on an output generated by the trained model 214. The processor 202 is further configured to categorize the plurality of assets 110 into one or more batches of assets based on the scheduled one or more discovery time slots for each of the plurality of assets 110. The processor 202 is further configured to initiate a discovery process for each batch according to the respective scheduled one or more discovery time slots.
[0079] A person of ordinary skill in the art will readily ascertain that the illustrated embodiments and steps in description and drawings (FIG.1-5) are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0080] The present disclosure provides the technical advancement of flexibility to schedule the discoveries for a large number of assets in batches. This approach minimizes the impact on real-time traffic while efficiently utilizing the system resources. The system identifies batches and time slots for scheduled discoveries. It reduces the time taken to complete discoveries and nullifies the impact on remote assets. The AI/ML model achieves the best possible result in the shortest time without degrading the performance metrics of the discovered servers.
[0081] The present invention offers multiple advantages over the prior art and the above listed are a few examples to emphasize on some of the advantageous features. The listed advantages are to be read in a non-limiting manner.

REFERENCE NUMERALS

[0082] Environment - 100;
[0083] User Equipment (UE) - 102;
[0084] Server - 104;
[0085] Network- 106;
[0086] System -108;
[0087] Plurality of assets – 110;
[0088] Processor - 202;
[0089] Memory - 204;
[0090] Storage unit – 206;
[0091] Creation unit – 208;
[0092] Retrieving unit – 210;
[0093] Training unit – 212;
[0094] AI/ML model – 214;
[0095] Scheduler- 216;
[0096] Categorizing unit – 218;
[0097] Asset discovery manager – 220;
[0098] Primary Processor – 302;
[0099] Memory – 304;
[00100] User Interface (UI) – 306.

CLAIMS
We Claim:
1. A method (500) for discovery of at least one asset in a network (106), the method (500) comprising the steps of:
retrieving, by the one or more processors (202), metrics data related to each of a plurality of assets (110) present in a list;
training, by the one or more processors (202), a model (214) utilizing the retrieved metrics data pertaining to the plurality of assets (110) present in the list;
scheduling, by the one or more processors (202), one or more discovery time slots for each of the plurality of assets (110) utilizing one or more scheduling logics based on an output generated by the trained model (214);
categorizing, by the one or more processors (202), the plurality of assets (110) into one or more batches of assets based on the scheduled one or more discovery time slots for each of the plurality of assets (110); and
initiating, by the one or more processors (202), a discovery process for each batch according to the respective scheduled one or more discovery time slots.

2. The method (500) as claimed in claim 1, wherein each of the plurality of assets (110) includes at least one of, a server and an Internet Protocol (IP) asset.

3. The method (500) as claimed in claim 1, wherein the one or more processors (202) create the list by identifying the plurality of assets (110) present in the network (106) and adding the plurality of assets (110) to the list.

4. The method (500) as claimed in claim 1, wherein each of the scheduled discovery time slots is non-overlapping with the other scheduled discovery time slots.

5. The method (500) as claimed in claim 1, wherein the scheduled discovery time slot is based upon the traffic/load pertaining to each of the plurality of assets (110) present in the list.

6. The method (500) as claimed in claim 1, wherein the retrieved metrics data includes at least one of a memory utilization, a central processing unit (CPU) usage, and a response time related to each of the plurality of assets (110).

7. The method (500) as claimed in claim 1, wherein the one or more scheduling logics are developed by the one or more processors (202) in order to intelligently schedule the one or more discovery time slots for each of the plurality of assets (110).

8. A system (108) for discovery of at least one asset (110) in a network (106), the system (108) comprising:
a retrieving unit (210), configured to, retrieve, metrics data related to each of a plurality of assets (110) present in a list;
a training unit (212), configured to, train, a model (214) utilizing the retrieved metrics data pertaining to the plurality of assets (110) present in the list;
a scheduler (216), configured to, schedule, one or more discovery time slots for each of the plurality of assets (110) utilizing one or more scheduling logics based on an output generated by the trained model (214);
a categorizing unit (218), configured to, categorize, the plurality of assets (110) into one or more batches of assets based on the scheduled one or more discovery time slots for each of the plurality of assets (110); and
an asset discovery manager (220), configured to, initiate, a discovery process for each batch according to the respective scheduled one or more discovery time slots.

9. The system (108) as claimed in claim 8, wherein each of the plurality of assets (110) includes at least one of, a server and an Internet Protocol (IP) asset.

10. The system (108) as claimed in claim 8, wherein a creation unit (208) of the system (108) creates the list by identifying the plurality of assets (110) present in the network (106) and adding the plurality of assets (110) to the list.

11. The system (108) as claimed in claim 8, wherein each of the scheduled discovery time slots is non-overlapping with the other scheduled discovery time slots.

12. The system (108) as claimed in claim 8, wherein the scheduled discovery time slot is based upon the traffic/load pertaining to each of the plurality of assets (110) present in the list.

13. The system (108) as claimed in claim 8, wherein the retrieved metrics data includes at least one of a memory utilization, a central processing unit (CPU) usage, and a response time related to each of the plurality of assets (110).

14. The system (108) as claimed in claim 8, wherein the one or more scheduling logics are developed by the scheduler (216) in order to intelligently schedule the one or more discovery time slots for each of the plurality of assets (110).
