
Method And System For Recommending A Location For A Server Installation In A Network

Abstract: The present disclosure relates to a system (108) and a method (600) for recommending a location for a server (104) installation in a network (106). The system (108) includes a prediction unit (210) to predict, utilizing a trained model, whether the network (106) requires a server (104) expansion. The system (108) includes a transceiver (212) to receive a notification request pertaining to a server (104) expansion. The system (108) includes an extraction unit (214) to retrieve relevant data from a database (208). The system (108) includes an analysis engine (216) to analyze, utilizing a trained model, the relevant data to recommend the location for the server (104) installation in the network (106). The system (108) includes a recommendation unit (218) to recommend an optimal location for the server (104) installation in the network (106) to a user based on the analysis. Ref. Fig. 2


Patent Information

Application #
Filing Date
12 July 2023
Publication Number
03/2025
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Email
Parent Application

Applicants

JIO PLATFORMS LIMITED
OFFICE-101, SAFFRON, NR. CENTRE POINT, PANCHWATI 5 RASTA, AMBAWADI, AHMEDABAD - 380006, GUJARAT, INDIA

Inventors

1. Aayush Bhatnagar
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad, Gujarat - 380006, India
2. Ankit Murarka
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad, Gujarat - 380006, India
3. Rizwan Ahmad
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad, Gujarat - 380006, India
4. Kapil Gill
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad, Gujarat - 380006, India
5. Rahul Verma
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad, Gujarat - 380006, India
6. Arpit Jain
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad, Gujarat - 380006, India
7. Shashank Bhushan
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad, Gujarat - 380006, India
8. Kamal Malik
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad, Gujarat - 380006, India
9. Chaitanya V Mali
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad, Gujarat - 380006, India
10. Supriya De
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad, Gujarat - 380006, India
11. Kumar Debashish
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad, Gujarat - 380006, India
12. Tilala Mehul
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad, Gujarat - 380006, India
13. Kothagundla Vinay Kumar
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad, Gujarat - 380006, India

Specification

DESC:
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2003

COMPLETE SPECIFICATION
(See section 10 and rule 13)
1. TITLE OF THE INVENTION
METHOD AND SYSTEM FOR RECOMMENDING A LOCATION FOR A SERVER INSTALLATION IN A NETWORK
2. APPLICANT(S)
NAME NATIONALITY ADDRESS
JIO PLATFORMS LIMITED INDIAN OFFICE-101, SAFFRON, NR. CENTRE POINT, PANCHWATI 5 RASTA, AMBAWADI, AHMEDABAD 380006, GUJARAT, INDIA
3.PREAMBLE TO THE DESCRIPTION

THE FOLLOWING SPECIFICATION PARTICULARLY DESCRIBES THE NATURE OF THIS INVENTION AND THE MANNER IN WHICH IT IS TO BE PERFORMED.

FIELD OF THE INVENTION
[0001] The present invention relates to telecom servers and, more particularly, to a method and system for recommending a location for a server installation in a network.
BACKGROUND OF THE INVENTION
[0002] Telecommunications networks rely on servers to handle various tasks, including routing, data storage, and communication management. As network demands increase, telecom providers need to expand their infrastructure by adding new servers. The placement of these servers plays a critical role in ensuring reliable network connectivity, reducing latency, and meeting user demands for high-quality services.
[0003] Conventionally, the placement of telecom servers has been a complex and time-consuming process. It often involves manual planning, site surveys, and extensive network analysis to determine optimal locations. Existing methods lack a systematic approach to server placement, leading to suboptimal performance, inefficient resource utilization, and increased deployment costs.
[0004] Hence, there is a need in the art for a system and a method to optimize server placement, thereby reducing suboptimal performance and inefficient resource utilization, and decreasing deployment costs.
SUMMARY OF THE INVENTION
[0005] One or more embodiments of the present disclosure provide a method and a system for recommending a location for a server installation in a network.
[0006] In one aspect of the present invention, the system for recommending the location for the server installation in the network is disclosed. The system includes a prediction unit configured to predict utilizing a trained model, whether the network requires a server expansion. The system further includes a transceiver configured to receive a notification request pertaining to a server expansion based on the prediction that the network requires the server expansion. The system further includes an extraction unit configured to retrieve relevant data from a database based on the notification request received. The system further includes an analysis engine, configured to analyze utilizing a trained model the relevant data to recommend the location for the server installation in the network. The system further includes a recommendation unit configured to recommend an optimal location for the server installation in the network to a user based on the analysis.
[0007] In an embodiment, the prediction unit predicts, utilizing a trained model, a requirement of the server expansion in the network by: monitoring, utilizing the trained model, current one or more server details in the network; comparing, utilizing the trained model, the current one or more server details with a predefined threshold pertaining to the one or more server details; and, in response to detecting a deviation in the one or more server details exceeding the predefined threshold based on the comparison, predicting, utilizing the trained model, the requirement of the server expansion in the network.
[0008] In an embodiment, the predefined threshold is set by the prediction unit based on historical data pertaining to the one or more server details.
[0009] In an embodiment, the one or more server details include at least one of traffic patterns, server utilization, and performance metrics.
[0010] In an embodiment, the relevant data includes at least one of real-time data pertaining to a preferred server location, a preferred number of servers, existing infrastructure details in the network, transmission paths, installation sites, and capacity constraints.
[0011] In an embodiment, the trained model is an Artificial Intelligence/Machine Learning (AI/ML) model.
[0012] In an embodiment, the trained model is trained utilizing the historical data and the real-time relevant data.
[0013] In an embodiment, the trained model learns trends/patterns based on the historical data and the real-time relevant data.
[0014] In an embodiment, the analysis engine analyses, utilizing a trained model, the relevant data to recommend an optimal location for the server installation in the network by: determining one or more parameters pertaining to multiple locations; identifying an optimal location among the multiple locations for the server installation based on the determination; and, in response to identifying the optimal location for the server installation, recommending the optimal location for the server installation in the network to the user.
[0015] In an embodiment, the one or more parameters include at least one of a path loss, a transmission loss, and nearby existing infrastructure.
[0016] In an embodiment, the transceiver receives the notification request pertaining to the server expansion transmitted by the prediction unit.
[0017] In another aspect of the present invention, the method for recommending the location for the server installation in the network is disclosed. The method includes the step of predicting utilizing a trained model, whether the network requires a server expansion. The method further includes the step of receiving a notification request pertaining to the server expansion based on the prediction that the network requires the server expansion. The method further includes the step of retrieving relevant data from a database based on the notification request received. The method further includes the step of analyzing utilizing the trained model, the relevant data to recommend the location for the server installation in the network. The method further includes the step of recommending an optimal location for the server installation in the network to a user based on the analysis.
[0018] In another aspect of the invention, a non-transitory computer-readable medium having stored thereon computer-readable instructions is disclosed. The computer-readable instructions are executed by a processor. The processor is configured to predict, utilizing a trained model, whether the network requires a server expansion. The processor is further configured to receive a notification request pertaining to the server expansion based on the prediction that the network requires the server expansion. The processor is further configured to retrieve relevant data from a database based on the notification request received. The processor is further configured to analyze, utilizing the trained model, the relevant data to recommend the location for the server installation in the network. The processor is further configured to recommend an optimal location for the server installation in the network to a user based on the analysis.
[0019] In another aspect of the invention, a User Equipment (UE) is disclosed. The UE includes one or more primary processors communicatively coupled to one or more processors, the one or more primary processors being coupled with a memory. The one or more primary processors cause the UE to display the recommendation of the optimal location for the server installation in the network received from the one or more processors.
[0020] Other features and aspects of this invention will be apparent from the following description and the accompanying drawings. The features and advantages described in this summary and in the following detailed description are not all-inclusive, and particularly, many additional features and advantages will be apparent to one of ordinary skill in the relevant art, in view of the drawings, specification, and claims hereof. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes and may not have been selected to delineate or circumscribe the inventive subject matter, resort to the claims being necessary to determine such inventive subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
[0021] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0022] FIG. 1 is an exemplary block diagram of an environment for recommending a location for a server installation in a network, according to one or more embodiments of the present invention;
[0023] FIG. 2 is an exemplary block diagram of a system for recommending the location for the server installation in the network, according to one or more embodiments of the present invention;
[0024] FIG. 3 is a schematic representation of a workflow of the system of FIG. 1, according to the one or more embodiments of the present invention;
[0025] FIG. 4 is an exemplary architecture which can be implemented in the system of the FIG. 2, according to one or more embodiments of the present invention;
[0026] FIG. 5 is a signal flow diagram for recommending the location for the server installation in the network, according to one or more embodiments of the present invention; and
[0027] FIG. 6 is a schematic representation of a method for recommending the location for the server installation in the network, according to one or more embodiments of the present invention.
[0028] The foregoing shall be more apparent from the following detailed description of the invention.
DETAILED DESCRIPTION OF THE INVENTION
[0029] Some embodiments of the present disclosure, illustrating all its features, will now be discussed in detail. It must also be noted that as used herein and in the appended claims, the singular forms "a", "an" and "the" include plural references unless the context clearly dictates otherwise.
[0030] Various modifications to the embodiment will be readily apparent to those skilled in the art and the generic principles herein may be applied to other embodiments. However, one of ordinary skill in the art will readily recognize that the present disclosure including the definitions listed here below are not intended to be limited to the embodiments illustrated but is to be accorded the widest scope consistent with the principles and features described herein.
[0031] A person of ordinary skill in the art will readily ascertain that the illustrated steps detailed in the figures and here below are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0032] FIG. 1 illustrates an exemplary block diagram of an environment 100 for recommending a location for a server installation in a network, according to one or more embodiments of the present disclosure. In this regard, the environment 100 includes a User Equipment (UE) 102, a server 104, a network 106 and a system 108 communicably coupled to each other for recommending the location for the server 104 installation in the network 106.
[0033] As per the illustrated embodiment, and for the purpose of description and illustration, the UE 102 includes, but is not limited to, a first UE 102a, a second UE 102b, and a third UE 102c; this should nowhere be construed as limiting the scope of the present disclosure. In alternate embodiments, the UE 102 may include a plurality of UEs as per the requirement. For ease of reference, each of the first UE 102a, the second UE 102b, and the third UE 102c will hereinafter be collectively and individually referred to as the “User Equipment (UE) 102”.
[0034] In an embodiment, the UE 102 is, but is not limited to, any electrical, electronic, or electro-mechanical equipment, or a combination of one or more of the above devices, such as virtual reality (VR) devices, augmented reality (AR) devices, a laptop, a general-purpose computer, a desktop, a personal digital assistant, a tablet computer, a mainframe computer, or any other computing device.
[0035] The environment 100 includes the server 104 accessible via the network 106. The server 104 may include, by way of example but not limitation, one or more of a standalone server, a server blade, a server rack, a bank of servers, a server farm, hardware supporting a part of a cloud service or system, a home server, hardware running a virtualized server, one or more processors executing code to function as a server, one or more machines performing server-side functionality as described herein, at least a portion of any of the above, or some combination thereof. In an embodiment, the server 104 may be associated with an entity, which may include, but is not limited to, a vendor, a network operator, a company, an organization, a university, a lab facility, a business enterprise side, a defense facility side, or any other facility that provides service.
[0036] The network 106 includes, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof. The network 106 may include, but is not limited to, a Third Generation (3G), a Fourth Generation (4G), a Fifth Generation (5G), a Sixth Generation (6G), a New Radio (NR), a Narrow Band Internet of Things (NB-IoT), an Open Radio Access Network (O-RAN), and the like.
[0037] The network 106 may also include, by way of example but not limitation, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, or process one or more messages, packets, signals, waves, or voltage or current levels, or some combination thereof.
[0038] The environment 100 further includes the system 108 communicably coupled to the server 104 and the UE 102 via the network 106. The system 108 is configured to recommend the location for the server 104 installation in the network 106. As per one or more embodiments, the system 108 is adapted to be embedded within the server 104 or embedded as an individual entity.
[0039] Operational and construction features of the system 108 will be explained in detail with respect to the following figures.
[0040] FIG. 2 is an exemplary block diagram of the system 108 for recommending the location for the server 104 installation in the network 106, according to one or more embodiments of the present invention.
[0041] As per the illustrated embodiment, the system 108 includes one or more processors 202, a memory 204, a user interface 206, and a database 208. For the purpose of description and explanation, the description will be explained with respect to one processor 202 and should nowhere be construed as limiting the scope of the present disclosure. In alternate embodiments, the system 108 may include more than one processor 202 as per the requirement of the network 106. The one or more processors 202, hereinafter referred to as the processor 202, may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, single board computers, and/or any devices that manipulate signals based on operational instructions.
[0042] As per the illustrated embodiment, the processor 202 is configured to fetch and execute computer-readable instructions stored in the memory 204. The memory 204 may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory 204 may include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as disk memory, EPROMs, FLASH memory, unalterable memory, and the like.
[0043] In an embodiment, the user interface 206 includes a variety of interfaces, for example, interfaces for a graphical user interface, a web user interface, a Command Line Interface (CLI), and the like. The user interface 206 facilitates communication of the system 108. In one embodiment, the user interface 206 provides a communication pathway for one or more components of the system 108. Examples of such components include, but are not limited to, the UE 102 and the database 208.
[0044] The database 208 is one of, but not limited to, a centralized database, a cloud-based database, a commercial database, an open-source database, a distributed database, an end-user database, a graphical database, a No-Structured Query Language (NoSQL) database, an object-oriented database, a personal database, an in-memory database, a document-based database, a time series database, a wide column database, a key value database, a search database, a cache database, and so forth. The foregoing examples of database 208 types are non-limiting and may not be mutually exclusive; e.g., a database can be both commercial and cloud-based, or both relational and open-source.
[0045] In order for the system 108 to recommend the location for the server 104 installation in the network 106, the processor 202 includes one or more modules. In one embodiment, the one or more modules include, but are not limited to, a prediction unit 210, a transceiver 212, an extraction unit 214, an analysis engine 216, and a recommendation unit 218 communicably coupled to each other for recommending the location for the server 104 installation in the network 106.
[0046] In one embodiment, the one or more modules, including but not limited to the prediction unit 210, the transceiver 212, the extraction unit 214, the analysis engine 216, and the recommendation unit 218, can be used in combination or interchangeably for recommending the location for the server 104 installation in the network 106.
[0047] The prediction unit 210, the transceiver 212, the extraction unit 214, the analysis engine 216, and the recommendation unit 218 in an embodiment, may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor 202. In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processor 202 may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processor may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the memory 204 may store instructions that, when executed by the processing resource, implement the processor. In such examples, the system 108 may comprise the memory 204 storing the instructions and the processing resource to execute the instructions, or the memory 204 may be separate but accessible to the system 108 and the processing resource. In other examples, the processor 202 may be implemented by electronic circuitry.
[0048] In one embodiment, the prediction unit 210 is configured to predict whether the network 106 requires the server 104 expansion by utilizing a trained model. The trained model is an Artificial Intelligence/Machine Learning (AI/ML) model, where the model is at least one of, but is not limited to, a regression model, decision trees, random forests, naive Bayes, support vector machines, k-nearest neighbor (KNN), K-means, an ensemble model, a generative model, or exponential smoothing. The trained model is trained utilizing historical data and real-time relevant data. The trained model learns trends/patterns based on the historical data and the real-time relevant data.
[0049] The server 104 expansion refers to the process of increasing the capacity and capabilities of the server 104 to handle more tasks, data, or users. The server expansion is necessary to ensure that the server 104 can handle increased demand without performance degradation. The historical data includes, but is not limited to, past data on network usage, server performance metrics, user activity, and any other relevant information that has been collected over time. The trends/patterns identified in the historical data include, but are not limited to, usage peaks and troughs, growth trends, seasonal patterns, event-based spikes, and performance degradation. For example, the historical data shows that every December the server experiences a 50% increase in traffic due to holiday sales. The real-time relevant data includes, but is not limited to, current data being generated by the network and servers, such as current usage statistics, active user sessions, and real-time performance metrics. The trends/patterns identified in the real-time relevant data include, but are not limited to, current usage levels, immediate trends, system alerts and warnings, user behavior changes, and performance metrics. For example, the real-time data indicates that current traffic is already 30% higher than usual, with an increasing trend similar to the previous December.
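The seasonal comparison in paragraph [0049] can be sketched as a ratio of current traffic against the historical average for the same month. This is a minimal illustrative sketch only; the month-keyed history, traffic figures, and function names are assumptions not drawn from the disclosure.

```python
# Illustrative sketch of the seasonal-pattern check described in paragraph [0049].
# All data values and names below are assumptions for illustration only.
from statistics import mean

def seasonal_uplift(history: dict, month: str, current_traffic: float) -> float:
    """Ratio of current traffic to the historical average for the given month."""
    return current_traffic / mean(history[month])

# Hypothetical peak-traffic readings from past Decembers.
history = {"december": [1000.0, 1100.0, 1200.0]}

uplift = seasonal_uplift(history, "december", 1430.0)
print(round(uplift, 2))  # 1.3 -> traffic already 30% above the historical December average
```

A ratio above 1.0 would indicate traffic running ahead of the learned seasonal baseline, which is the kind of trend the trained model is described as learning.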
[0050] Further, by utilizing the trained model, the prediction unit 210 predicts a requirement of the server 104 expansion in the network 106 by monitoring current one or more server 104 details in the network 106. The one or more server 104 details include at least one of traffic patterns, server utilization, and performance metrics. The traffic patterns refer to the flow and volume of data being transmitted to and from the server 104 over the network 106. The traffic patterns include, but are not limited to, volume of requests, data throughput, peak times, user behavior, and geographical distribution. The server 104 utilization refers to how much of the server’s resources, such as Central Processing Unit (CPU), memory, disk space, and network bandwidth, are being used at any given time. The server 104 utilization includes, but is not limited to, CPU utilization, memory utilization, disk utilization, network utilization, and resource availability. The performance metrics of the one or more server 104 details include, but are not limited to, CPU usage, memory usage, storage usage, network bandwidth, latency, and error rates.
[0051] Upon monitoring the one or more server 104 details, the current one or more server 104 details are compared with a predefined threshold pertaining to the one or more server 104 details. The predefined threshold is set by the prediction unit 210 based on historical data pertaining to the one or more server 104 details. For example, if the historical data shows that server performance starts to degrade when CPU usage consistently exceeds 80%, the predefined threshold is set at 75% to allow for proactive measures before performance degradation occurs; if the current CPU usage is 78%, the prediction unit 210 detects that the current CPU usage of the server 104 exceeds the predefined threshold value.
[0052] In response to detecting deviation in the one or more server 104 details which are exceeding the predefined threshold based on comparison, the prediction unit 210 predicts the requirement of the server 104 expansion in the network 106 by utilizing the trained model.
[0053] Upon predicting the requirement of the server 104 expansion in the network 106, the transceiver 212 is configured to receive a notification request pertaining to the server 104 expansion based on the prediction that the network 106 requires the server 104 expansion.
[0054] Upon receiving the notification request pertaining to the server 104 expansion, the extraction unit 214 is configured to retrieve relevant data from the database 208 based on the notification request received. The relevant data includes at least one of real-time data pertaining to a preferred server location, a preferred number of servers, existing infrastructure details in the network, transmission paths, installation sites, and capacity constraints.
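The retrieval step of paragraph [0054] can be sketched as a keyed lookup driven by the notification request. The in-memory dictionary below merely stands in for the database 208, and every field name and value is a hypothetical example, not part of the disclosure.

```python
# Illustrative sketch of the extraction unit's retrieval step (paragraph [0054]).
# The dict stands in for the database 208; all field names are assumptions.

DATABASE = {
    "region-east": {
        "preferred_server_location": "datacenter-2",
        "preferred_number_of_servers": 4,
        "existing_infrastructure": ["core-router", "fiber-ring"],
        "capacity_constraints": {"power_kw": 120, "rack_units": 42},
    },
}

def retrieve_relevant_data(notification: dict) -> dict:
    """Fetch the records relevant to the region named in the notification request."""
    return DATABASE[notification["region"]]

record = retrieve_relevant_data({"type": "server_expansion", "region": "region-east"})
print(record["preferred_number_of_servers"])  # 4
```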
[0055] Subsequently, the analysis engine 216, by utilizing the trained model, analyses the relevant data to recommend the location for the server 104 installation in the network 106. Further, by utilizing the trained model, the analysis engine 216 analyses the relevant data to recommend an optimal location for the server 104 installation in the network 106 by determining one or more parameters pertaining to multiple locations. The one or more parameters include at least one of a path loss, a transmission loss, and nearby existing infrastructure. The path loss pertains to the loss of signal strength. The transmission loss refers to the loss of signal power as it travels through a transmission medium such as cables, fibers, or wireless channels. The nearby existing infrastructure refers to the already established facilities and resources, such as network components, power supply, cooling systems, and physical space, in the proximity of the potential server 104 installation location. Upon determining the one or more parameters pertaining to the multiple locations, the optimal location is identified among the multiple locations for the server 104 installation based on the determination. In response to identifying the optimal location for the server 104 installation, the optimal location for the server 104 installation in the network 106 is recommended to the user. In particular, the recommendation unit 218 is configured to recommend the optimal location for the server 104 installation in the network 106 to the user based on the analysis. Further, the optimal location refers to an exact position or coordinates within a city, a building, a floor, a bay, or a rack. Furthermore, the optimal location is the maximum-probability output from the trained model for one or more positions.
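The selection of paragraph [0055] can be sketched as scoring each candidate location on the three named parameters and taking the maximum, mirroring the "maximum probability output" described above. The linear scoring scheme, the weights, and all candidate names and values are assumptions for illustration; the disclosure does not specify how the trained model combines the parameters.

```python
# Illustrative sketch of the analysis engine's location ranking (paragraph [0055]).
# The weights and linear scoring scheme are assumptions, not the claimed model.

def score_location(path_loss_db: float, transmission_loss_db: float,
                   infra_proximity: float) -> float:
    """Higher is better: penalize signal losses, reward nearby existing infrastructure."""
    return -1.0 * path_loss_db - 1.0 * transmission_loss_db + 2.0 * infra_proximity

def recommend_optimal(candidates: dict) -> str:
    """Return the candidate location with the maximum score."""
    return max(candidates, key=lambda name: score_location(*candidates[name]))

candidates = {
    # name: (path loss dB, transmission loss dB, infrastructure proximity 0-1)
    "rack-A1": (80.0, 3.0, 0.9),
    "rack-B7": (85.0, 6.0, 0.4),
    "site-C":  (110.0, 2.0, 1.0),
}
print(recommend_optimal(candidates))  # rack-A1
```

A trained AI/ML model would replace the hand-set weights with learned ones, but the argmax over per-location scores is the same identification step the paragraph describes.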
[0056] Therefore, the system 108 has the ability to suggest and calculate the best location for installing new servers 104 in case of expansion requirements. The system 108 can quickly analyze and provide flexibility to accommodate growth while maintaining optimal network performance upon receiving the new expansion requests.
[0057] FIG. 3 describes a preferred embodiment of the system 108 of FIG. 2, according to various embodiments of the present invention. It is to be noted that the embodiment with respect to FIG. 3 will be explained with respect to the first UE 102a and the system 108 for the purpose of description and illustration, and should nowhere be construed as limiting the scope of the present disclosure.
[0058] As mentioned earlier in FIG. 1, each of the first UE 102a, the second UE 102b, and the third UE 102c may include an external storage device, a bus, a main memory, a read-only memory, a mass storage device, communication port(s), and a processor. The exemplary embodiment as illustrated in FIG. 3 will be explained with respect to the first UE 102a, without deviating from or limiting the scope of the present disclosure. The first UE 102a includes one or more primary processors 302 communicably coupled to the one or more processors 202 of the system 108.
[0059] The one or more primary processors 302 are coupled with a memory 304 storing instructions which are executed by the one or more primary processors 302. Execution of the stored instructions by the one or more primary processors 302 enables the first UE 102a to display the recommendation of the optimal location for the server 104 installation in the network 106.
[0060] As mentioned earlier in FIG. 2, the one or more processors 202 of the system 108 are configured for selecting one or more hyperparameter values for model training. As per the illustrated embodiment, the system 108 includes the one or more processors 202, the memory 204, the user interface 206, and the database 208. The operations and functions of the one or more processors 202, the memory 204, the user interface 206, and the database 208 are already explained in FIG. 2. For the sake of brevity, a similar description related to the working and operation of the system 108 as illustrated in FIG. 2 has been omitted to avoid repetition.
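The hyperparameter selection mentioned above could, as one minimal sketch, be an exhaustive grid search. The grid contents and the toy evaluation function below are illustrative assumptions; a real system would evaluate each combination by training and validating the AI/ML model:

```python
from itertools import product

def select_hyperparameters(grid, evaluate):
    """Exhaustive grid search: return the combination with the best evaluation score."""
    best_params, best_score = None, float("-inf")
    for values in product(*grid.values()):
        params = dict(zip(grid.keys(), values))
        score = evaluate(params)  # stands in for train-and-validate
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Hypothetical hyperparameter grid.
grid = {"learning_rate": [0.01, 0.1], "depth": [3, 5]}

# Toy evaluation function: peaks at learning_rate=0.1, depth=5.
best, score = select_hyperparameters(
    grid, lambda p: -abs(p["learning_rate"] - 0.1) - abs(p["depth"] - 5))
```

Grid search is the simplest consistent reading of "selecting one or more hyperparameter values"; random or Bayesian search would slot into the same interface by changing only the iteration strategy.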
[0061] Further, the processor 202 includes the prediction unit 210, the transceiver 212, the extraction unit 214, the analysis engine 216, and the recommendation unit 218. The operations and functions of the prediction unit 210, the transceiver 212, the extraction unit 214, the analysis engine 216, and the recommendation unit 218 are already explained in FIG. 2. Hence, for the sake of brevity, a similar description related to the working and operation of the system 108 as illustrated in FIG. 2 has been omitted to avoid repetition. The limited description provided for the system 108 in FIG. 3, should be read with the description as provided for the system 108 in the FIG. 2 above, and should not be construed as limiting the scope of the present disclosure.
[0062] FIG. 4 is an exemplary block diagram of an architecture 400 implemented in the system 108 for recommending the location for the server 104 installation in the network 106, according to one or more embodiments of the present invention.
[0063] The architecture 400 includes a forecasting engine 402, an Inventory Management (IM) module 404, a distributed cache 406, an AI/ML model 408, and the database 208.
[0064] The forecasting engine 402 determines or forecasts the need for a new server 104 based on the current node traffic, data loss, and latency. In an embodiment, the forecasting engine 402 is configured to use predictive analysis to determine the requirement for a new server 104. The forecasting engine 402 initiates a request to the IM 404 for determining the requirement for the new server 104 and the location and position for the installation of the new server 104. Upon receiving the request from the forecasting engine 402, the IM 404 retrieves the relevant data either from the distributed cache 406 or from the database 208.
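The cache-or-database retrieval described above follows the familiar cache-aside pattern. A minimal sketch, with plain dictionaries standing in for the distributed cache 406 and the database 208, and hypothetical keys and field names:

```python
def get_relevant_data(key, cache, database):
    """Cache-aside lookup: try the distributed cache first, fall back to the database.

    On a cache miss, the value is read from the authoritative store and the
    cache is populated so subsequent requests are served without a database hit.
    """
    value = cache.get(key)
    if value is None:
        value = database[key]   # authoritative store (database 208)
        cache[key] = value      # warm the cache (distributed cache 406)
    return value

cache = {}
database = {"site-7": {"transmission_paths": 4, "capacity": "2 racks"}}

first = get_relevant_data("site-7", cache, database)   # miss: read database, warm cache
second = get_relevant_data("site-7", cache, database)  # hit: served from cache
```

In a deployed architecture the dictionaries would be replaced by a distributed cache client and a database driver, but the miss-then-populate control flow is the same.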
[0065] Further, the IM 404 connects with the AI/ML model 408 for predicting the requirement for the new server 104 and the location and the position for the installation of the new server 104. The AI/ML model 408 is trained utilizing historical data and real time relevant data. The AI/ML model 408 learns trends/patterns based on the historical data and the real time relevant data.
[0066] Upon prediction of the requirement for the new server 104 and the location and the position for the installation of the new server 104 by the AI/ML model 408, the IM 404 receives the notification pertaining to the requirement of the new server 104 and the location and position for its installation.
[0067] FIG. 5 is a signal flow diagram for recommending the location for the server 104 installation in the network 106, according to one or more embodiments of the present invention.
[0068] At step 502, the forecasting engine 402 initiates the notification request pertaining to the requirement of the server 104 expansion and the location for the installation of the server 104 in the network 106.
[0069] At step 504, upon receiving the request, the IM 404 retrieves the relevant data from the database 208 based on the notification request received. The relevant data includes at least one of real time data pertaining to a preferred server location, preferred number of servers, existing infrastructure details in the network, transmission paths, installation sites and capacity constraints.
[0070] At step 506, the IM 404 analyzes the relevant data by using the AI/ML model 408. The AI/ML model is trained utilizing historical data and real time relevant data. The AI/ML model learns trends/patterns based on the historical data and the real time relevant data.
[0071] In an embodiment, the AI/ML model 408 predicts the requirement of the server 104 expansion in the network 106 by monitoring the current one or more server 104 details in the network 106. The one or more server 104 details include at least one of traffic patterns, server utilization, and performance metrics. Upon monitoring the one or more server 104 details, the current one or more server 104 details are compared with the predefined threshold pertaining to the one or more server 104 details. The predefined threshold is set by the prediction unit 210 based on historical data pertaining to the one or more server 104 details. In response to detecting a deviation in the one or more server 104 details exceeding the predefined threshold based on the comparison, the AI/ML model 408 predicts the requirement of the server 104 expansion in the network 106.
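The monitor-compare-detect flow above reduces to a threshold check per monitored detail. A minimal sketch, where the threshold values and metric names are illustrative assumptions (in the described system they would be derived from historical data):

```python
# Illustrative predefined thresholds; the system described above would set
# these from historical data (e.g. percentiles of past server details).
THRESHOLDS = {"traffic_rps": 10_000, "server_utilization": 0.85, "p99_latency_ms": 250}

def expansion_required(current_details, thresholds=THRESHOLDS):
    """Flag a server-expansion requirement if any monitored detail exceeds its threshold.

    Returns (needed, deviations) where deviations maps each exceeded metric
    to its current value, so the operator can see what triggered the prediction.
    """
    deviations = {k: v for k, v in current_details.items()
                  if k in thresholds and v > thresholds[k]}
    return bool(deviations), deviations

needed, why = expansion_required(
    {"traffic_rps": 12_500, "server_utilization": 0.78, "p99_latency_ms": 310})
```

Here traffic and latency exceed their thresholds while utilization does not, so an expansion requirement is flagged and the two deviating metrics are reported.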
[0072] In an embodiment, the AI/ML model 408 also recommends the location for the server 104 installation in the network 106. The AI/ML model 408 recommends the location of the server 104 installation in the network 106 by determining the one or more parameters pertaining to the multiple locations. Upon determining the one or more parameters, the location is identified among the multiple locations for the server 104 installation based on the determination. In response to identifying the location, the AI/ML model 408 recommends the location for the server 104 installation in the network 106 to the user.
[0073] At step 508, based on the analysis from the AI/ML model 408, the IM 404 recommends the requirement for the server 104 expansion and the location for the installation of the server 104 in the network 106 to the user.
[0074] FIG. 6 is a flow diagram of a method 600 for recommending the location for the server 104 installation in the network 106, according to one or more embodiments of the present invention. For the purpose of description, the method 600 is described with the embodiments as illustrated in FIG. 2 and should nowhere be construed as limiting the scope of the present disclosure.
[0075] At step 602, the method 600 includes the step of predicting, by the prediction unit 210 utilizing the trained model, whether the network 106 requires the server 104 expansion. The trained model is at least an Artificial Intelligence/Machine Learning (AI/ML) model. The trained model is trained utilizing the historical data and the real time relevant data, and learns trends/patterns based on the historical data and the real time relevant data.
[0076] The prediction unit 210, utilizing the trained model, predicts whether the network 106 requires the server 104 expansion by monitoring the current one or more server 104 details in the network 106. The one or more server 104 details include at least one of traffic patterns, server utilization, and performance metrics. Upon monitoring the one or more server 104 details, the current one or more server 104 details are compared with the predefined threshold pertaining to the one or more server 104 details. The predefined threshold is set by the prediction unit 210 based on historical data pertaining to the one or more server 104 details. In response to detecting a deviation in the one or more server 104 details exceeding the predefined threshold based on the comparison, the prediction unit 210 predicts the requirement of the server 104 expansion in the network 106.
[0077] At step 604, the method 600 includes the step of receiving the notification request pertaining to the server 104 expansion based on the prediction that the network 106 requires the server 104 expansion by the transceiver 212.
[0078] At step 606, the method 600 includes the step of retrieving the relevant data from the database 208 by the extraction unit 214 based on the notification request received.
[0079] At step 608, the method 600 includes the step of analyzing, by the analysis engine 216 utilizing the trained model, the relevant data to recommend the location for the server 104 installation in the network 106. The analysis engine 216, utilizing the trained model, recommends the optimal location for the server 104 installation in the network 106 by determining the one or more parameters pertaining to the multiple locations. Upon determining the one or more parameters, the location is identified among the multiple locations for the server 104 installation based on the determination. In response to identifying the location, the analysis engine 216 recommends the location for the server 104 installation in the network 106 to the user.
[0080] At step 610, the method 600 includes the step of recommending the optimal location for the server 104 installation in the network 106 to a user based on the analysis by the recommendation unit 218.
[0081] The present invention further discloses a non-transitory computer-readable medium having stored thereon computer-readable instructions. The computer-readable instructions are executed by the processor 202. The processor 202 is configured to predict utilizing the trained model, whether the network 106 requires the server 104 expansion. The processor 202 is further configured to receive the notification request pertaining to the server 104 expansion based on the prediction that the network 106 requires the server 104 expansion. The processor 202 is further configured to retrieve relevant data from the database 208 based on the notification request received. The processor 202 is further configured to analyze, utilizing the trained model, the relevant data to recommend the location for the server 104 installation in the network 106. The processor 202 is further configured to recommend the optimal location for the server 104 installation in the network 106 to the user based on the analysis.
[0082] A person of ordinary skill in the art will readily ascertain that the illustrated embodiments and steps in description and drawings (FIG.1-6) are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0083] The present disclosure incorporates the technical advancement of eliminating the need for manual analysis and decision-making, saving significant time and effort for end users by suggesting and calculating the best location for installing new servers in case of expansion requirements. Further, the present disclosure ensures that the servers are placed in the most efficient and effective positions, reducing unnecessary expenses and optimizing resource allocation. Further, the system can quickly analyze and provide flexibility to accommodate growth while maintaining optimal network performance.
[0084] The present invention offers multiple advantages over the prior art, and the above listed are a few examples to emphasize some of the advantageous features. The listed advantages are to be read in a non-limiting manner.
REFERENCE NUMERALS
[0085] Environment- 100
[0086] User Equipment (UE)- 102
[0087] Server- 104
[0088] Network- 106
[0089] System -108
[0090] Processor- 202
[0091] Memory- 204
[0092] User Interface- 206
[0093] Database- 208
[0094] Prediction Unit- 210
[0095] Transceiver- 212
[0096] Extraction Unit- 214
[0097] Analysis Engine- 216
[0098] Recommendation Unit- 218
[0099] One or more primary processors- 302
[00100] Memory- 304
[00101] Forecasting Engine – 402
[00102] Inventory Management (IM)- 404
[00103] Distributed cache- 406
[00104] AI/ML model – 408
CLAIMS:
We Claim:
1. A method (600) for recommending a location for a server (104) installation in a network (106), the method comprising the steps of:
predicting, by one or more processors (202), utilizing a trained model, whether the network (106) requires a server (104) expansion;
receiving, by the one or more processors (202), a notification request pertaining to the server (104) expansion based on the prediction that the network (106) requires the server (104) expansion;
retrieving, by the one or more processors (202), relevant data from a database (208) based on the notification request received;
analysing, by the one or more processors (202), utilizing the trained model, the relevant data to recommend the location for the server (104) installation in the network (106); and
recommending, by the one or more processors (202), an optimal location for the server (104) installation in the network (106) to a user based on the analysis.

2. The method (600) as claimed in claim 1, wherein the step of predicting, utilizing a trained model, whether the network (106) requires a server (104) expansion, includes steps of:
monitoring, by the one or more processors (202), utilizing the trained model, current one or more server (104) details in the network (106);
comparing, by the one or more processors (202), utilizing the trained model, the current one or more server (104) details with a predefined threshold pertaining to the one or more server (104) details;
in response to detecting deviation in the one or more server (104) details which are exceeding the predefined threshold based on comparison, predicting, by the one or more processors (202), utilizing the trained model, the requirement of the server (104) expansion in the network (106).

3. The method (600) as claimed in claim 2, wherein the predefined threshold is set by the one or more processors (202) based on historical data pertaining to the one or more server (104) details.

4. The method (600) as claimed in claim 2, wherein the one or more server (104) details includes at least one of, traffic patterns, server utilization, and performance metrics.

5. The method (600) as claimed in claim 1, wherein the relevant data includes at least one of real time data pertaining to a preferred server location, preferred number of servers, existing infrastructure details in the network, transmission paths, installation sites and capacity constraints.

6. The method (600) as claimed in claim 1, wherein the trained model is at least an Artificial Intelligence/Machine Learning (AI/ML) model (408).

7. The method (600) as claimed in claim 1, wherein the trained model is trained utilizing the historical data and the real time relevant data.

8. The method (600) as claimed in claim 1, wherein the trained model learns trends/patterns based on the historical data and the real time relevant data.

9. The method (600) as claimed in claim 1, wherein the step of analysing, utilizing the trained model, the relevant data to recommend an optimal location for the server (104) installation in the network (106), includes the steps of:
determining, by the one or more processors (202), one or more parameters pertaining to multiple locations;
identifying, by the one or more processors (202), an optimal location among the multiple locations for the server (104) installation based on the determination; and
in response to identifying the optimal location for the server (104) installation, recommending, by the one or more processors (202), the optimal location for the server (104) installation in the network (106) to the user.

10. The method (600) as claimed in claim 9, wherein the one or more parameters includes at least one of, a path loss, a transmission loss, and nearby existing infrastructure.

11. A system (108) for recommending a location for a server (104) installation in a network (106), the system (108) comprises:
a prediction unit (210), configured to, predict, utilizing a trained model, whether the network (106) requires a server (104) expansion;
a transceiver (212), configured to, receive, a notification request pertaining to a server (104) expansion based on the prediction that the network (106) requires the server (104) expansion;
an extraction unit (214), configured to, retrieve, relevant data from a database (208) based on the notification request received;
an analysis engine (216), configured to, analyse, utilizing a trained model, the relevant data to recommend the location for the server (104) installation in the network (106); and
a recommendation unit (218), configured to, recommend, an optimal location for the server (104) installation in the network (106) to a user based on the analysis.

12. The system (108) as claimed in claim 11, wherein the prediction unit predicts, utilizing a trained model, a requirement of the server (104) expansion in the network (106), by:
monitoring, utilizing the trained model, current one or more server (104) details in the network (106);
comparing, utilizing the trained model, the current one or more server (104) details with a predefined threshold pertaining to the one or more server (104) details; and
in response to detecting deviation in the one or more server (104) details which are exceeding the predefined threshold based on comparison, predicting, utilizing the trained model, the requirement of the server (104) expansion in the network (106).

13. The system (108) as claimed in claim 12, wherein the predefined threshold is set by the prediction unit based on historical data pertaining to the one or more server (104) details.

14. The system (108) as claimed in claim 12, wherein the one or more server (104) details include at least one of, traffic patterns, server utilization, and performance metrics.

15. The system (108) as claimed in claim 11, wherein the relevant data includes at least one of real time data pertaining to a preferred server location, preferred number of servers, existing infrastructure details in the network, transmission paths, installation sites and capacity constraints.

16. The system (108) as claimed in claim 11, wherein the trained model is at least an Artificial Intelligence/Machine Learning (AI/ML) model (408).

17. The system (108) as claimed in claim 11, wherein the trained model is trained utilizing the historical data and the real time relevant data.

18. The system (108) as claimed in claim 11, wherein the trained model learns trends/patterns based on the historical data and the real time relevant data.

19. The system (108) as claimed in claim 11, wherein the analysis engine (216) analyses, utilizing a trained model, the relevant data to recommend an optimal location for the server (104) installation in the network (106), by:
determining, one or more parameters pertaining to multiple locations;
identifying, an optimal location among the multiple locations for the server (104) installation based on the determination; and
in response to identifying the optimal location for the server (104) installation, recommending, the optimal location for the server (104) installation in the network (106) to the user.

20. The system (108) as claimed in claim 19, wherein the one or more parameters includes at least one of, a path loss, a transmission loss, and nearby existing infrastructure.

21. The system (108) as claimed in claim 11, wherein the transceiver (212) receives the notification request pertaining to the server expansion transmitted by the prediction unit (210).

22. A User Equipment (UE) (102), comprising:
one or more primary processors (302) communicatively coupled to one or more processors (202), the one or more primary processors (302) coupled with a memory (304), wherein said memory (304) stores instructions which when executed by the one or more primary processors (302) causes the UE (102) to:
display the recommendation of the optimal location for the server (104) installation in the network (106) received from the one or more processors (202); and
wherein the one or more processors (202) is configured to perform the steps as claimed in claim 1.

Documents

Application Documents

# Name Date
1 202321047035-STATEMENT OF UNDERTAKING (FORM 3) [12-07-2023(online)].pdf 2023-07-12
2 202321047035-PROVISIONAL SPECIFICATION [12-07-2023(online)].pdf 2023-07-12
3 202321047035-FORM 1 [12-07-2023(online)].pdf 2023-07-12
4 202321047035-FIGURE OF ABSTRACT [12-07-2023(online)].pdf 2023-07-12
5 202321047035-DRAWINGS [12-07-2023(online)].pdf 2023-07-12
6 202321047035-DECLARATION OF INVENTORSHIP (FORM 5) [12-07-2023(online)].pdf 2023-07-12
7 202321047035-FORM-26 [20-09-2023(online)].pdf 2023-09-20
8 202321047035-Proof of Right [08-01-2024(online)].pdf 2024-01-08
9 202321047035-DRAWING [13-07-2024(online)].pdf 2024-07-13
10 202321047035-COMPLETE SPECIFICATION [13-07-2024(online)].pdf 2024-07-13
11 Abstract-1.jpg 2024-09-02
12 202321047035-Power of Attorney [05-11-2024(online)].pdf 2024-11-05
13 202321047035-Form 1 (Submitted on date of filing) [05-11-2024(online)].pdf 2024-11-05
14 202321047035-Covering Letter [05-11-2024(online)].pdf 2024-11-05
15 202321047035-CERTIFIED COPIES TRANSMISSION TO IB [05-11-2024(online)].pdf 2024-11-05
16 202321047035-FORM 3 [28-11-2024(online)].pdf 2024-11-28