
System And Method For Predicting One Or More Instances Of Network Function In A Network

Abstract: A system (120) and a method (500) for predicting one or more instances of a Network Function (NF) (125) in a network (105) are disclosed. The system (120) includes an FMS interface (230) configured to retrieve data pertaining to a plurality of customers from one or more sources. The system (120) also includes an analyzing unit (240) configured to analyze the retrieved data utilizing one or more forecasting units (410) to identify trends and patterns pertaining to customer load on each of the one or more instances of the NF (125). The analyzing unit (240) is further configured to generate a prediction of a count of instances to be allocated to the NF (125) in the network (105) based on the identified trends and patterns. Ref. Fig. 2


Patent Information

Application #
Filing Date
31 January 2024
Publication Number
31/2025
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Email
Parent Application

Applicants

JIO PLATFORMS LIMITED
OFFICE-101, SAFFRON, NR. CENTRE POINT, PANCHWATI 5 RASTA, AMBAWADI, AHMEDABAD 380006, GUJARAT, INDIA

Inventors

1. Aayush Bhatnagar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
2. Ankit Murarkaq
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
3. Jugal Kishore
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
4. Chandra Ganveer
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
5. Sanjana Chaudhary
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
6. Gourav Gurbani
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
7. Yogesh Kumar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
8. Avinash Kushwaha
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
9. Dharmendra Kumar Vishwakarma
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
10. Sajal Soni
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
11. Niharika Patnam
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
12. Shubham Ingle
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
13. Harsh Poddar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
14. Sanket Kumthekar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
15. Mohit Bhanwria
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
16. Shashank Bhushan
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
17. Vinay Gayki
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
18. Aniket Khade
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
19. Durgesh Kumar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
20. Zenith Kumar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
21. Gaurav Kumar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
22. Manasvi Rajani
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
23. Kishan Sahu
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
24. Sunil Meena
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
25. Supriya Kaushik De
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
26. Kumar Debashish
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
27. Mehul Tilala
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
28. Satish Narayan
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
29. Rahul Kumar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
30. Harshita Garg
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
31. Kunal Telgote
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
32. Ralph Lobo
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
33. Girish Dange
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India

Specification

DESC:
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2003

COMPLETE SPECIFICATION
(See section 10 and rule 13)
1. TITLE OF THE INVENTION
SYSTEM AND METHOD FOR PREDICTING ONE OR MORE INSTANCES OF NETWORK FUNCTION IN A NETWORK
2. APPLICANT(S)
NAME NATIONALITY ADDRESS
JIO PLATFORMS LIMITED INDIAN OFFICE-101, SAFFRON, NR. CENTRE POINT, PANCHWATI 5 RASTA, AMBAWADI, AHMEDABAD 380006, GUJARAT, INDIA
3. PREAMBLE TO THE DESCRIPTION

THE FOLLOWING SPECIFICATION PARTICULARLY DESCRIBES THE NATURE OF THIS INVENTION AND THE MANNER IN WHICH IT IS TO BE PERFORMED.

FIELD OF THE INVENTION
[0001] The present invention relates to the field of wireless communication networks, and more particularly to a method and a system for predicting one or more instances of a Network Function (NF) in a wireless communication network.
BACKGROUND OF THE INVENTION
[0002] In wireless communication networks, the load on Network Functions (NFs) is increasing gradually. The load may include, but is not limited to, traffic, customer load demands, and customer data. The NFs may include, but are not limited to, load balancers, routers, switches, and firewalls. In the Fifth Generation Service Based Architecture (5G SBA), the NFs interact over the Service-Based Interface (SBI) using standard protocols such as Hypertext Transfer Protocol version 2 (HTTP/2) and Representational State Transfer (REST) Application Programming Interfaces (APIs). Each NF exposes specific services registered with a Network Repository Function (NRF), enabling dynamic discovery of and communication between the NFs. The NFs are implemented as microservices and are typically deployed in a cloud-native environment, leveraging technologies such as containerization, orchestration, and automation tools.
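The registration and discovery flow described above can be sketched with a minimal in-memory stand-in for the NRF. This is an illustrative sketch only: the profile fields and method names are assumptions, not the 3GPP-defined NRF API.

```python
# Minimal in-memory stand-in for an NRF: each NF registers a profile,
# and peers discover instances by NF type. Field names are illustrative
# assumptions, not the 3GPP TS 29.510 schema.
class SimpleNRF:
    def __init__(self):
        self._profiles = {}  # nf_instance_id -> profile dict

    def register(self, nf_instance_id, nf_type, services):
        """Register an NF instance and the services it exposes."""
        self._profiles[nf_instance_id] = {
            "nfType": nf_type,
            "nfServices": list(services),
        }

    def discover(self, nf_type):
        """Return instance ids of all registered NFs of the given type."""
        return [iid for iid, p in self._profiles.items()
                if p["nfType"] == nf_type]

nrf = SimpleNRF()
nrf.register("smf-1", "SMF", ["nsmf-pdusession"])
nrf.register("smf-2", "SMF", ["nsmf-pdusession"])
nrf.register("amf-1", "AMF", ["namf-comm"])
print(nrf.discover("SMF"))  # ['smf-1', 'smf-2']
```

In a real deployment the discovery call would be an HTTP/2 REST request to the NRF rather than an in-process lookup; the dictionary here only mirrors the register-then-discover pattern.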
[0003] In general, NFs are added to wireless communication networks based on manual prediction. As a result, the NFs may fail to efficiently handle the load caused by customer demands. An insufficient number of NFs in the network increases the load on the existing NFs, which leads to service degradations and delays that may affect the experience of the customers.
[0004] In order to handle the load on the NFs, an appropriate number of instances of the NFs may be added to the wireless communication network. However, manually predicting the number of instances required to be added is a time-consuming and cumbersome task, and may not yield an accurate prediction of the number of NF instances actually required.
[0005] In view of the above, there is a dire need for a system and method for predicting one or more instances of the NFs in the network, which ensures better handling of the load without degrading the efficiency of the services provided by the NFs.
SUMMARY OF THE INVENTION
[0006] One or more embodiments of the present disclosure provide a method and a system for predicting one or more instances of a Network Function (NF) in a network.
[0007] In one aspect of the present invention, the method for predicting the one or more instances of the NF in the network is disclosed. The method includes the step of retrieving, by one or more processors, data pertaining to a plurality of customers from one or more sources. The method includes the step of analyzing, by the one or more processors, the retrieved data utilizing one or more forecasting units to identify trends and patterns pertaining to customer load on each of one or more instances of the NF. The method includes the step of generating, by the one or more processors, prediction of a count of instances to be allocated to the NF in the network based on the identified trends and patterns.
[0008] In one embodiment, the data pertaining to the plurality of customers includes at least one of, onboarding data, service usage data, deactivation data, and historical data.
[0009] In yet another embodiment, the step of retrieving, by the one or more processors, the data pertaining to the plurality of customers from the one or more sources further includes the steps of pre-processing, by the one or more processors, the retrieved data in order to utilize the pre-processed data for the training of the one or more forecasting units, and storing, by the one or more processors, the pre-processed data.
[0010] In yet another embodiment, the data pertaining to the plurality of customers from the one or more sources is retrieved in real-time.
[0011] In yet another embodiment, the generated prediction of the count of instances to be allocated to the NF is stored in a centralized data repository.
[0012] In yet another embodiment, the method further includes the step of transmitting, by the one or more processors, a visual representation of the trends and patterns of the customer load of the one or more instances of the NF to a user device based on the analysis. The method further includes the step of displaying, by the one or more processors, the transmitted visual representation of the trends and patterns of the customer load of the one or more instances of the NF to a user via a user interface of the user device.
[0013] In another aspect of the present invention, the system for predicting the one or more instances of the NF in the network is disclosed. The system includes a retrieving unit configured to retrieve data pertaining to a plurality of customers from one or more sources. The system includes an analyzing unit configured to analyze the retrieved data utilizing one or more forecasting units to identify trends and patterns pertaining to customer load on each of one or more instances of the NF. The system includes a generating unit configured to generate a prediction of a count of instances to be allocated to the NF in the network based on the identified trends and patterns.
[0014] In another aspect of the present invention, a user device is disclosed. One or more primary processors are communicatively coupled to the one or more processors, and the one or more primary processors are coupled with a memory. The memory stores instructions which, when executed by the one or more primary processors, cause the user device to transmit a request by a user to the one or more processors for predicting the one or more instances of the NF.
[0015] In another aspect of the present invention, a non-transitory computer-readable medium having computer-readable instructions stored thereon is disclosed. The instructions, when executed by a processor, configure the processor to retrieve data pertaining to a plurality of customers from one or more sources, analyze the retrieved data utilizing one or more forecasting units to identify trends and patterns pertaining to customer load on each of one or more instances of the NF, and generate a prediction of a count of instances to be allocated to the NF in the network based on the identified trends and patterns.
[0016] Other features and aspects of this invention will be apparent from the following description and the accompanying drawings. The features and advantages described in this summary and in the following detailed description are not all-inclusive, and particularly, many additional features and advantages will be apparent to one of ordinary skill in the relevant art, in view of the drawings, specification, and claims hereof. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes and may not have been selected to delineate or circumscribe the inventive subject matter, resort to the claims being necessary to determine such inventive subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
[0017] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0018] FIG. 1 is an exemplary block diagram of an environment for predicting one or more instances of a Network Function (NF) in a network, according to one or more embodiments of the present disclosure;
[0019] FIG. 2 is an exemplary block diagram of a system for predicting the one or more instances of the NF in the network, according to the one or more embodiments of the present disclosure;
[0020] FIG. 3 is a schematic representation of a workflow of the system of FIG. 2 communicably coupled with a user device, according to the one or more embodiments of the present disclosure;
[0021] FIG. 4 is a block diagram of an architecture that can be implemented in the system of FIG. 2, according to the one or more embodiments of the present disclosure; and
[0022] FIG. 5 is a flow diagram illustrating a method for predicting the one or more instances of the NF in the network, according to the one or more embodiments of the present disclosure.
[0023] The foregoing shall be more apparent from the following detailed description of the invention.
DETAILED DESCRIPTION OF THE INVENTION
[0024] Some embodiments of the present disclosure, illustrating all its features, will now be discussed in detail. It must also be noted that as used herein and in the appended claims, the singular forms "a", "an" and "the" include plural references unless the context clearly dictates otherwise.
[0025] Various modifications to the embodiment will be readily apparent to those skilled in the art and the generic principles herein may be applied to other embodiments. However, one of ordinary skill in the art will readily recognize that the present disclosure including the definitions listed here below are not intended to be limited to the embodiments illustrated but is to be accorded the widest scope consistent with the principles and features described herein.
[0026] A person of ordinary skill in the art will readily ascertain that the illustrated steps detailed in the figures and here below are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope of the disclosed embodiments.
[0027] Referring to FIG. 1, FIG. 1 illustrates an exemplary block diagram of an environment 100 for predicting one or more instances of a Network Function (NF) 125 in a network 105, according to one or more embodiments of the present invention. The environment 100 includes the network 105, a user device 110, a server 115, and a system 120. The user device 110 aids a user to interact with the system 120 for transmitting a request by a user to one or more processors 205 (as shown in FIG. 2) for predicting the one or more instances of the NF 125. In an embodiment, the user is at least one of a network operator and a service provider. The NF 125 is a modular software-based functional component in a Fifth Generation Service-Based Architecture (5G SBA) that performs specific tasks essential for network operation. In an exemplary embodiment of the NF 125 in the 5G SBA, an Access and Mobility Management Function (AMF) manages user device 110 registration, connection, and mobility. A Session Management Function (SMF) handles session establishment and resource allocation. A User Plane Function (UPF) routes user traffic in a data plane. A Policy Control Function (PCF) manages policies for Quality of Service (QoS), charging, etc. A Network Slice Selection Function (NSSF) assists in selecting appropriate network slices for user sessions.
[0028] The one or more instances of the NF 125 in the network 105 refers to the use of models, or analytical techniques to estimate the state, behavior, or outcomes of specific NFs based on data, trends, or conditions in the network 105. The one or more instances of the NF 125 is a specific, operational deployment of the NF 125 within the 5G network. Multiple instances of the same NF 125 can coexist to provide scalability, redundancy, and load balancing. The one or more instances of the NF 125 prediction process involves analyzing historical or real-time data to anticipate performance, capacity, demand, or potential issues in the NF 125.
[0029] For the purpose of description and explanation, the description will be explained with respect to the user device 110, or to be more specific, with respect to a first user device 110a, a second user device 110b, and a third user device 110c, and should nowhere be construed as limiting the scope of the present disclosure. Each of the first user device 110a, the second user device 110b, and the third user device 110c is configured to connect to the server 115 via the network 105. In an embodiment, each of the first user device 110a, the second user device 110b, and the third user device 110c is any electrical, electronic, or electro-mechanical equipment, or a combination of one or more such devices, such as a smartphone, a Virtual Reality (VR) device, an Augmented Reality (AR) device, a laptop, a general-purpose computer, a desktop, a personal digital assistant, a tablet computer, a mainframe computer, or any other computing device.
[0030] The network 105 includes, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof. The network 105 may include, but is not limited to, a Third Generation (3G), a Fourth Generation (4G), a Fifth Generation (5G), a Sixth Generation (6G), a New Radio (NR), a Narrow Band Internet of Things (NB-IoT), an Open Radio Access Network (O-RAN), and the like.
[0031] The network 105 also includes, by way of example but not limitation, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, or process one or more messages, packets, signals, waves, or voltage or current levels, or some combination thereof. The network 105 may also include a Voice over Internet Protocol (VoIP) network.
[0032] The server 115 may include, by way of example but not limitation, one or more of a standalone server, a server blade, a server rack, a bank of servers, a server farm, hardware supporting a part of a cloud service or system, a home server, hardware running a virtualized server, one or more processors executing code to function as a server, one or more machines performing server-side functionality as described herein, at least a portion of any of the above, or some combination thereof. In an embodiment, the server 115 may be operated by an entity including, but not limited to, a vendor, a network operator, a company, an organization, a university, a lab facility, a business enterprise, a defense facility, or any other facility that provides content.
[0033] The environment 100 further includes the system 120 communicably coupled to the server 115 and each of the first user device 110a, the second user device 110b, and the third user device 110c via the network 105. The system 120 is configured for predicting the one or more instances of the NF 125 in the network 105. The system 120 is adapted to be embedded within the server 115 or deployed as an individual entity, as per multiple embodiments of the present invention.
[0034] Operational and construction features of the system 120 will be explained in detail with respect to the following figures.
[0035] FIG. 2 is an exemplary block diagram of the system 120 for predicting the one or more instances of the NF 125 in the network 105, according to one or more embodiments of the present disclosure.
[0036] The system 120 includes a processor 205, a memory 210, a user interface 215, an input device 220, and a centralized data repository 225. For the purpose of description and explanation, the description will be explained with respect to one or more processors 205, or to be more specific will be explained with respect to the processor 205 and should nowhere be construed as limiting the scope of the present disclosure. The one or more processors 205, hereinafter referred to as the processor 205 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, single board computers, and/or any devices that manipulate signals based on operational instructions. As per the illustrated embodiment, the system 120 includes one processor. However, it is to be noted that the system 120 may include multiple processors as per the requirement and without deviating from the scope of the present disclosure.
[0037] As per the illustrated embodiment, the processor 205 is configured to fetch and execute computer-readable instructions stored in the memory 210. The memory 210 is configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory 210 may include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as EPROM, flash memory, and the like.
[0038] The memory 210 may comprise any non-transitory storage device including, for example, volatile memory such as Random-Access Memory (RAM), or non-volatile memory such as Erasable Programmable Read-Only Memory (EPROM), flash memory, and the like. In an embodiment, the system 120 may include an interface(s). The interface(s) may comprise a variety of interfaces, for example, interfaces for data input and output devices, referred to as input/output (I/O) devices, storage devices, and the like. The interface(s) may facilitate communication for the system. The interface(s) may also provide a communication pathway for one or more components of the system. Examples of such components include, but are not limited to, processing unit/engine(s) and the centralized data repository 225. The processing unit/engine(s) may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processing engine(s).
[0039] The user interface 215 may include functionality similar to at least a portion of functionality implemented by one or more computer system interfaces such as those described herein and/or generally known to one having ordinary skill in the art. The user interface 215 may be rendered on a display, implemented using Liquid Crystal Display (LCD) display technology, Organic Light-Emitting Diode (OLED) display technology, and/or other types of conventional display technology. The display may be integrated within the system 120 or connected externally. Further, the input device(s) 220 may include, but are not limited to, a keyboard, buttons, scroll wheels, cursors, touchscreen sensors, audio command interfaces, a magnetic strip reader, an optical scanner, etc.
[0040] The centralized data repository 225 is communicably connected to the processor 205 and the memory 210. The centralized data repository 225 is configured to store and retrieve the data pertaining to a plurality of customers from the one or more sources. In another embodiment, the centralized data repository 225 may be outside the system 120 and communicated with through a wired medium or a wireless medium. The centralized data repository 225 is one of, but not limited to, a centralized database, a cloud-based database, a commercial database, an open-source database, a distributed database, an end-user database, a graphical database, a non-Structured Query Language (NoSQL) database, an object-oriented database, a personal database, an in-memory database, a document-based database, a time series database, a wide column database, a key value database, a search database, a cache database, and so forth. The foregoing examples of storage unit types are non-limiting and may not be mutually exclusive; e.g., a database can be both commercial and cloud-based, or both relational and open-source.
[0041] Further, the processor 205, in an embodiment, may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor 205. In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processor 205 may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for processor 205 may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the memory 210 may store instructions that, when executed by the processing resource, implement the processor 205. In such examples, the system 120 may comprise the memory 210 storing the instructions and the processing resource to execute the instructions, or the memory 210 may be separate but accessible to the system 120 and the processing resource. In other examples, the processor 205 may be implemented by electronic circuitry.
[0042] In order for the system 120 to predict the one or more instances of the NF 125 in the network 105, the processor 205 includes a Fault Management System (FMS) interface 230, a pre-processing unit 235, an analyzing unit 240, a transmitting unit 245, and a displaying unit 250 communicably coupled to each other. In an embodiment, operations and functionalities of the FMS interface 230, the pre-processing unit 235, the analyzing unit 240, the transmitting unit 245, and the displaying unit 250 can be used in combination or interchangeably.
[0043] The FMS interface 230 is configured to retrieve the data pertaining to the plurality of customers from the one or more sources. In an embodiment, the plurality of customers includes, but is not limited to, individual users accessing a streaming platform, businesses utilizing a cloud service, or subscribers of a telecommunication network. In an embodiment, the one or more sources include, but are not limited to, the centralized data repository 225, Application Programming Interfaces (APIs), files, and cloud-based repositories. In an embodiment, the data pertaining to the plurality of customers includes at least one of onboarding data, service usage data, deactivation data, and historical data. The onboarding data refers to the customer data collected and processed during an initial phase when the plurality of customers begins their interaction with a product, service, or platform. The onboarding data is typically used to establish the customer's account, preferences, or eligibility for the service. In an exemplary embodiment, the onboarding data includes, but is not limited to, customer name, contact details, plan selection, and activation date.
[0044] The service usage data refers to information collected about how the plurality of customers interacts with and utilizes the product, service, or platform. The service usage data captures the specifics of customer behavior, preferences, and patterns during their engagement with the service, providing valuable insights into performance, user needs, and potential areas for improvement. In an exemplary embodiment, the service usage data includes, but is not limited to, data usage, hours streamed, and call frequency. The deactivation data refers to the information collected and recorded when the plurality of customers terminates or suspends their use of the product, service, or platform. The deactivation data provides insights into the circumstances and reasons behind the customer's decision to discontinue the service, and it may include technical, behavioral, or feedback-related elements. In an exemplary embodiment, the deactivation data includes, but is not limited to, cancellation reason and termination date. Subsequently, a data integration unit 405 (as shown in FIG. 4) is configured to integrate the data generated from the FMS interface 230.
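The onboarding, service usage, and deactivation categories described above can be pictured as one record schema. The field names below are illustrative assumptions drawn from the examples in the text, not a schema defined by the specification:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CustomerRecord:
    """Illustrative record combining the three customer-data categories;
    field names are assumptions based on the examples in the text."""
    # Onboarding data
    customer_name: str
    plan_selection: str
    activation_date: str
    # Service usage data
    data_usage_gb: float = 0.0
    hours_streamed: float = 0.0
    call_frequency: int = 0
    # Deactivation data (absent while the customer remains active)
    termination_date: Optional[str] = None
    cancellation_reason: Optional[str] = None

    def is_active(self) -> bool:
        # A customer with no termination date is still active.
        return self.termination_date is None

rec = CustomerRecord("Asha", "5G-Unlimited", "2024-01-31",
                     data_usage_gb=42.5, hours_streamed=12.0)
print(rec.is_active())  # True
```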
[0045] Upon integrating the data by the data integration unit 405, the pre-processing unit 235 is configured to pre-process the integrated data. In an embodiment, the pre-processing includes data definition, data normalization, and data cleaning. The data normalization is the process of reorganizing data within the centralized data repository 225 so that the users can utilize the data for further queries and analysis. The data cleaning is the process of fixing or removing incorrect, corrupted, incorrectly formatted, duplicate, or incomplete data within the dataset.
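The cleaning and normalization steps described above can be sketched in a few lines of plain Python. The record fields (`customer_id`, `hour`, `load`) are illustrative assumptions:

```python
def clean(records):
    """Data cleaning: drop incomplete and duplicate records."""
    seen, out = set(), []
    for r in records:
        if r.get("load") is None:        # incomplete record
            continue
        key = (r["customer_id"], r["hour"])
        if key in seen:                  # duplicate record
            continue
        seen.add(key)
        out.append(r)
    return out

def min_max_normalize(values):
    """Data normalization: scale values into the [0, 1] range."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

raw = [{"customer_id": 1, "hour": 0, "load": 120},
       {"customer_id": 1, "hour": 0, "load": 120},   # duplicate
       {"customer_id": 2, "hour": 0, "load": None}]  # incomplete
print(len(clean(raw)))  # 1
```

Min-max scaling is only one of several common normalization choices; z-score standardization would serve equally well here.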
[0046] Upon pre-processing the data, the pre-processed data is transmitted to the one or more forecasting units 410 (as shown in FIG. 4) for training. For the purpose of description and explanation, the description will be explained with respect to the one or more forecasting units 410, or to be more specific, with respect to the forecasting unit 410, and should nowhere be construed as limiting the scope of the present disclosure. In an embodiment, the forecasting unit 410 includes, but is not limited to, an Artificial Intelligence/Machine Learning (AI/ML) model. The forecasting unit 410 is configured for identifying key predictors, including, but not limited to, hourly/daily traffic trends, user behavior during peak times, and service subscription plans. Based on the pre-processed data, the AI/ML model, which includes, but is not limited to, time series models, supervised learning models, and unsupervised learning models, is trained. The time series model predicts future values of the one or more instances of the NF 125 based on the historical trends. The supervised learning models are trained based on the historical data with known outcomes (for example, past instance counts). The supervised learning models include, but are not limited to, linear regression, random forests, and gradient boosting. The unsupervised learning models are useful for discovering patterns in the retrieved data. The unsupervised learning models include, but are not limited to, clustering (for example, k-means) and autoencoders.
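A minimal time-series forecaster of the kind named above can be sketched with an ordinary least-squares linear trend. This is an illustrative stand-in for the AI/ML models mentioned in the text, not the claimed implementation:

```python
def fit_linear_trend(series):
    """Ordinary least-squares fit of y = a + b*t over t = 0..n-1."""
    n = len(series)
    t_mean = (n - 1) / 2
    y_mean = sum(series) / n
    num = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(series))
    den = sum((t - t_mean) ** 2 for t in range(n))
    b = num / den                 # slope: load change per period
    a = y_mean - b * t_mean       # intercept
    return a, b

def forecast(series, steps_ahead):
    """Extrapolate the fitted trend `steps_ahead` periods past the data."""
    a, b = fit_linear_trend(series)
    return a + b * (len(series) - 1 + steps_ahead)

# Daily request counts growing steadily: forecast the next day.
print(forecast([10, 20, 30, 40], 1))  # 50.0
```

A production forecasting unit would use richer models (seasonal time-series models, gradient boosting, clustering), but the fit-then-extrapolate structure is the same.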
[0047] Upon training the forecasting unit 410, the centralized data repository 225 is configured to store the pre-processed data and the output of the forecasting unit 410. In an embodiment, the centralized data repository 225 is connected to a workflow manager 415 (as shown in FIG. 4) for transmitting the stored data to the analyzing unit 240 for analysis. The analyzing unit 240 is configured to analyze the retrieved data utilizing the forecasting unit 410 to identify the trends and the patterns pertaining to the customer load on each of the one or more instances of the NF 125. Identifying the trends and patterns in data involves analyzing historical and real-time data to extract insights about customer behavior, system performance, or resource usage. A trend refers to a long-term change or direction in data over time. The trends might reflect consistent growth, decline, or shifts in the customer load on the one or more instances of the NF 125. In an exemplary embodiment, a network service provider observes the data for the one or more instances of the NF 125 and finds that the customer load is increasing by approximately 10% every two weeks, indicating a steady adoption of services or an expanding customer base.
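The trend check in the exemplary embodiment above can be sketched as a period-over-period growth computation; the sample values below are hypothetical and chosen to exhibit the ~10% biweekly growth described.

```python
# Sketch of trend identification: detect a steady period-over-period
# growth rate in customer load (sample values are hypothetical).

def growth_rates(loads_per_period):
    """Fractional growth between consecutive periods (e.g. two-week windows)."""
    return [
        (curr - prev) / prev
        for prev, curr in zip(loads_per_period, loads_per_period[1:])
    ]

# Customer load sampled every two weeks, growing ~10% each period.
biweekly_load = [10000, 11000, 12100, 13310]
rates = growth_rates(biweekly_load)

# A consistent positive rate across periods indicates a growth trend.
steady_growth = all(abs(r - 0.10) < 0.01 for r in rates)
```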
[0048] Further, the analyzing unit 240 is configured to generate a prediction of the count of instances to be allocated to the NF 125 in the network 105 based on the identified trends. Continuing the exemplary embodiment, the analyzing unit 240 predicts that the customer load will reach 15,000 requests per day. Therefore, one or more additional instances of the NF 125 are provisioned to handle the upcoming customer load. In an alternate embodiment, the analyzing unit 240 predicts periods of reduced customer load. During these periods, the number of instances allocated to the NF 125 is scaled down accordingly to efficiently handle the customer demand.
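The allocation step above reduces to translating a predicted load into an instance count. In this sketch the per-instance capacity of 5,000 requests/day is an assumed figure, not taken from the disclosure.

```python
# Sketch of the allocation step: convert a predicted customer load into
# an NF instance count, scaling up or down as demand changes.
import math

CAPACITY_PER_INSTANCE = 5000  # assumed requests/day one NF instance handles

def instances_needed(predicted_load):
    """Smallest instance count that covers the predicted load (minimum 1)."""
    return max(1, math.ceil(predicted_load / CAPACITY_PER_INSTANCE))

peak = instances_needed(15000)  # predicted peak of 15,000 requests/day
quiet = instances_needed(4000)  # reduced-load period, scaled down
```

Under the assumed capacity, the 15,000 requests/day peak maps to three instances, and a quiet period scales down to a single instance.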
[0049] A pattern refers to recurring or cyclical behaviors observed in the data. The recurring or cyclical behaviors are linked to specific times, events, or customer habits. In an exemplary embodiment, the SMF might observe a recurring pattern of high data session initiation requests during specific times of the day, such as during the peak hours of 7:00 PM to 10:00 PM on weekends, particularly on Saturdays. The recurring pattern correlates with the increased use of streaming services, navigation apps, or video calls during these times. The PCF might adjust Quality of Service (QoS) policies to prioritize critical applications like navigation over less critical background updates. The UPF scales up the data handling capacity to manage the surge in traffic. The AMF might optimize resource allocation for handovers as more users transition between base stations during their commutes. The analyzing unit 240 recommends scaling up server capacity during the peak times. The forecasting unit 410 predicts a similar pattern for the upcoming holidays, prompting pre-emptive adjustments. The analyzing unit 240 is further configured to generate the prediction of the count of instances to be allocated to the NF 125 in the network 105 based on the identified trends and patterns. Predicting the count of instances involves using the historical and real-time data to estimate the number of network function instances (e.g., the AMF, the SMF, the PCF, the UPF, etc.) needed to handle the expected customer load effectively.
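The peak-hour pattern detection described above can be sketched by averaging request counts per hour of day and flagging the busiest hours. The sampled data below is hypothetical, shaped to mimic an evening peak.

```python
# Sketch of pattern detection: average session-initiation requests per
# hour of day, then report the busiest hours (hypothetical samples).
from collections import defaultdict

def peak_hours(samples, top_n=2):
    """samples: (hour_of_day, request_count) pairs; returns busiest hours."""
    totals, counts = defaultdict(float), defaultdict(int)
    for hour, reqs in samples:
        totals[hour] += reqs
        counts[hour] += 1
    averages = {h: totals[h] / counts[h] for h in totals}
    return sorted(averages, key=averages.get, reverse=True)[:top_n]

# Observed (hour, requests) pairs with an evening surge.
observed = [(19, 900), (20, 950), (20, 1000), (8, 200), (13, 300), (19, 850)]
busiest = peak_hours(observed)
```

The recurring evening peak surfaces as the top hours, which is the signal the analyzing unit would use to recommend scaling up capacity in that window.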
[0050] In an exemplary embodiment, by analyzing the historical data, the forecasting unit 410 predicts that the NF instance will experience a 30% increase in traffic during the next holiday weekend, requiring 3 additional instances to maintain service quality. By forecasting the required count of instances of the NF 125, resource provisioning and load-balancing adjustments are optimized to efficiently handle periods of high demand. The proactive approach not only prevents downtime but also enhances customer satisfaction by delivering a seamless and reliable service experience. The generated prediction of the count of instances to be allocated to the NF 125 in the network 105 is stored in the centralized data repository 225 for record-keeping and future reference. This ensures that the system 120 maintains a historical log of all predictions, which is used for auditing, refinement of forecasting models, or training future AI/ML models.
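The holiday-weekend example above can be worked through numerically. The baseline load and per-instance capacity here are assumed figures chosen so that a 30% surge yields the 3 additional instances the example describes.

```python
# Worked version of the holiday-weekend example: a 30% traffic increase
# converted into additional NF instances (assumed baseline and capacity).
import math

baseline_load = 50000  # requests/day currently handled (assumed)
capacity = 5000        # requests/day per NF instance (assumed)

current = math.ceil(baseline_load / capacity)  # instances today
surge_load = baseline_load * 1.30              # +30% holiday traffic
required = math.ceil(surge_load / capacity)    # instances needed at the surge
additional = required - current                # extra instances to provision
```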
[0051] Upon storing the generated prediction of the count of instances to be allocated to the NF 125 in the centralized data repository 225, the transmitting unit 245 is configured to transmit a visual representation of the trends and patterns related to the customer load of the one or more instances of the NF 125 to the user device 110 based on the analysis. The displaying unit 250 is configured to display the transmitted visual representation of the trends and patterns of the customer load of the one or more instances of the NF 125 to the user via a user interface 315 (as shown in FIG. 3) of the user device 110. The visual representation of the trends and patterns is in the form of line graphs, heatmaps, bar charts, and real-time dashboards.
[0052] By making the prediction of the required one or more instances of the NF 125, the system 120 not only predicts and stores one or more instance allocation counts but also provides a user-friendly visual representation of the trends and patterns of the customer load. By transmitting these insights to decision-makers, it enables proactive resource management, reduces resource wastage, and improves overall network performance. Further, the system 120 can dynamically scale the one or more instances of the NF 125, adapting to changing network conditions and customer demands.
[0053] FIG. 3 is a schematic representation of a workflow of the system of FIG. 2 communicably coupled with the user device 110, according to the one or more embodiments of the present disclosure. More specifically, FIG. 3 illustrates the system 120 configured for predicting the one or more instances of the NF 125. It is to be noted that the embodiment with respect to FIG. 3 will be explained with respect to the first user device 110a for the purpose of description and illustration and should nowhere be construed as limiting the scope of the present disclosure.
[0054] As mentioned earlier in FIG. 1, in an embodiment, the first user device 110a may encompass electronic apparatuses. These devices are illustrative of, but not restricted to, modems, routers, switches, laptops, tablets, smartphones, or other devices enabled for web connectivity. The scope of the first user device 110a explicitly extends to a broad spectrum of electronic devices capable of executing computing operations and accessing networked resources, thereby providing users with a versatile range of functionalities for both personal and professional applications. This embodiment acknowledges the evolving nature of electronic devices and their integral role in facilitating access to digital services and platforms. In an embodiment, the first user device 110a can be associated with multiple users. Each first user device 110a is communicatively coupled with the processor 205.
[0055] The first user device 110a includes one or more primary processors 305 communicably coupled to the one or more processors 205 of the system 120. The one or more primary processors 305 are coupled with a memory 310 storing instructions which are executed by the one or more primary processors 305. Execution of the stored instructions by the one or more primary processors 305 enables the first user device 110a to transmit the request to the one or more processors 205 for predicting the one or more instances of the NF 125. The visual representation of the trends and patterns related to the customer load of the one or more instances of the NF 125 is transmitted to the user device 110. The transmitted visual representation of the trends and patterns of the customer load of the one or more instances of the NF 125 is displayed to the user via the user interface 315 of the user device 110.
[0056] Furthermore, the one or more primary processors 305 within the first user device 110a are uniquely configured to execute a series of steps as described herein. This configuration underscores the capability of the processor 205 to delete the data from the centralized data repository 225. The coordinated functioning of the one or more primary processors 305 and the additional processors is directed by the executable instructions stored in the memory 310. The executable instructions facilitate seamless communication and compatibility among the one or more primary processors 305, optimizing performance and resource use.
[0057] As mentioned earlier in FIG.2, the system 120 includes the one or more processors 205, the memory 210, the user interface 215, and the centralized data repository 225. The operations and functions of the one or more processors 205, the memory 210, the user interface 215, and the centralized data repository 225 are already explained in FIG. 2. For the sake of brevity, a similar description related to the working and operation of the system 120 as illustrated in FIG. 2 has been omitted to avoid repetition.
[0058] Further, the processor 205 includes the FMS interface 230, the pre-processing unit 235, the analyzing unit 240, the transmitting unit 245, and the displaying unit 250 communicably coupled to each other. The operations and functions of the FMS interface 230, the pre-processing unit 235, the analyzing unit 240, the transmitting unit 245, and the displaying unit 250 are already explained in FIG. 2. Hence, for the sake of brevity, a similar description related to the working and operation of the system 120 as illustrated in FIG. 2 has been omitted to avoid repetition. The limited description provided for the system 120 in FIG. 3, should be read with the description provided for the system 120 in the FIG. 2 above, and should not be construed as limiting the scope of the present disclosure.
[0059] FIG. 4 is a block diagram of an architecture 400 that can be implemented in the system of FIG.2, according to the one or more embodiments of the present disclosure.
[0060] The architecture 400 of the system 120 includes the FMS interface 230, the data integration unit 405, the pre-processing unit 235, the forecasting unit 410, the workflow manager 415, the analyzing unit 240, and the user interface 215. The FMS interface 230 is configured to retrieve the data pertaining to the plurality of customers from the one or more sources. In an embodiment, the plurality of customers includes, but is not limited to, individual users accessing the streaming platform, businesses utilizing the cloud service, or subscribers of the telecommunication network. In an embodiment, the one or more sources include, but are not limited to, the centralized data repository 225, Application Programming Interfaces (APIs), files, and cloud-based repositories. In an embodiment, the data pertaining to the plurality of customers includes at least one of onboarding data, service usage data, deactivation data, and historical data. Further, the system architecture 400 includes the data integration unit 405 used for combining the data from the one or more sources into a single, unified view.
[0061] Upon integration of the data, the data is transmitted to the processor 205 by the user via a user login. Upon receiving the data from the data integration unit 405, the pre-processing unit 235 is configured to pre-process the retrieved data. In an embodiment, the pre-processing includes data definition, data normalization, and data cleaning. The data normalization is the process of reorganizing data within the centralized data repository 225 so that the users can utilize the data for further queries and analysis. The data cleaning is the process of fixing or removing incorrect, corrupted, incorrectly formatted, duplicate, or incomplete data within the dataset.
[0062] Upon pre-processing the data, the pre-processed data is transmitted to the one or more forecasting units 410 for training. For the purpose of description and explanation, the description will be provided with respect to the forecasting unit 410, and should nowhere be construed as limiting the scope of the present disclosure. In an embodiment, the forecasting unit 410 includes, but is not limited to, an Artificial Intelligence/Machine Learning (AI/ML) model. The forecasting unit 410 is configured to identify key predictors, including, but not limited to, hourly/daily traffic trends, user behavior during peak times, and service subscription plans. Based on the pre-processed data, the AI/ML model, including, but not limited to, time series models, supervised learning models, and unsupervised learning models, is trained. The time series models predict future values of the one or more instances of the NF 125 based on the historical trends. The supervised learning models are trained on the historical data with known outcomes (for example, past instance counts). The supervised learning models include, but are not limited to, linear regression, random forests, and gradient boosting. The unsupervised learning models are useful for discovering patterns in the retrieved data. The unsupervised learning models include, but are not limited to, clustering (for example, k-means) and autoencoders.
[0063] Upon training the forecasting unit 410, the centralized data repository 225 is configured to store the pre-processed data and the output of the forecasting unit 410. In an embodiment, the centralized data repository 225 is connected to the workflow manager 415 for transmitting the stored data to the analyzing unit 240 for analysis. In an embodiment, the workflow manager 415 receives the request from the user interface 215 and transmits the received request to the analyzing unit 240 to perform the analysis. The analyzing unit 240 is configured to analyze the retrieved data utilizing the forecasting unit 410 to identify the trends and the patterns pertaining to the customer load on each of the one or more instances of the NF 125. Identifying the trends and patterns in the data involves analyzing historical and real-time data to extract insights about customer behavior, system performance, or resource usage. A trend refers to a long-term change or direction in data over time. The trends might reflect consistent growth, decline, or shifts in the customer load on the one or more instances of the NF 125. The analyzing unit 240 is further configured to generate the prediction of the count of instances to be allocated to the NF 125 in the network 105 based on the identified trends.
[0064] A pattern refers to recurring or cyclical behaviors observed in the data. The recurring or cyclical behaviors are linked to specific times, events, or customer habits. The analyzing unit 240 recommends scaling up server capacity during these peak times. The forecasting unit 410 predicts a similar pattern for the upcoming holidays, prompting pre-emptive adjustments. The analyzing unit 240 is further configured to generate the prediction of the count of instances to be allocated to the NF 125 in the network 105 based on the identified trends and patterns. Predicting the count of instances involves using the historical and real-time data to estimate the number of network function instances (e.g., the AMF, the SMF, the PCF, etc.) needed to handle the expected customer load effectively.
[0065] By predicting the required count of instances of the NF 125, resource provisioning and load-balancing adjustments ensure that network functions can handle high-demand periods efficiently. The proactive approach not only prevents downtime but also enhances customer satisfaction by delivering a seamless and reliable service experience. The generated prediction of the count of instances to be allocated is stored in the centralized data repository 225 for record-keeping and future reference. This ensures that the system 120 maintains a historical log of all predictions, which is used for auditing, refinement of forecasting models, or training future AI/ML models.
[0066] Upon storing the generated prediction of the count of instances to be allocated in the centralized data repository 225, the analyzing unit 240 is configured to transmit the visual representation of the trends and patterns related to the customer load of the one or more instances of the NF 125 to the user device 110 based on the analysis. Further, the analyzing unit 240 is configured to display the transmitted visual representation of the trends and patterns of the customer load of the one or more instances of the NF 125 to the user via the user interface 315 of the user device 110. The visual representation of the trends and patterns is in the form of line graphs, heatmaps, bar charts, and real-time dashboards.
[0067] FIG. 5 is a flow diagram illustrating a method 500 for predicting the one or more instances of the NF 125 in the network 105, according to the one or more embodiments of the present disclosure.
[0068] At step 505, the method 500 includes the step of retrieving the data pertaining to the plurality of customers from the one or more sources by the FMS interface 230. In an embodiment, the plurality of customers includes, but is not limited to, individual users accessing the streaming platform, businesses utilizing the cloud service, or subscribers of the telecommunication network. In an embodiment, the one or more sources include, but are not limited to, the centralized data repository 225, Application Programming Interfaces (APIs), files, and cloud-based repositories. In an embodiment, the data pertaining to the plurality of customers includes at least one of onboarding data, service usage data, deactivation data, and historical data. Subsequently, the data integration unit 405 is configured to integrate the data that the FMS interface 230 receives from a Fault Management System (FMS). Upon integrating the data, the pre-processing unit 235 is configured to pre-process the data. In an embodiment, the pre-processing includes data definition, data normalization, and data cleaning. The data normalization is the process of reorganizing data within the centralized data repository 225 so that the users can utilize the data for further queries and analysis. The data cleaning is the process of fixing or removing incorrect, corrupted, incorrectly formatted, duplicate, or incomplete data within the dataset.
[0069] Upon pre-processing the data, the pre-processed data is transmitted to the forecasting unit 410 for training. Upon training of the forecasting unit 410, the centralized data repository 225 is configured to store the pre-processed data. In an embodiment, the centralized data repository 225 is connected to the workflow manager 415 for transmitting the stored data to the analyzing unit 240 for analysis.
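The retrieve, pre-process, analyze, and predict steps above can be tied together in a compact end-to-end sketch. All components here are simplified stand-ins for the units described in the disclosure; the naive "next period matches the recent peak" forecast and the 5,000 requests/day capacity are illustrative assumptions.

```python
# End-to-end sketch of the method flow: pre-process records, predict an
# instance count, and store the prediction, mirroring the steps above.
import math

class Pipeline:
    def __init__(self, capacity_per_instance=5000):
        self.capacity = capacity_per_instance
        self.repository = []  # stands in for the centralized data repository

    def preprocess(self, records):
        # Data cleaning: drop records with a missing load value.
        return [r for r in records if r.get("load") is not None]

    def predict_count(self, records):
        # Naive forecast: assume the next period matches the recent peak.
        peak = max(r["load"] for r in records)
        count = max(1, math.ceil(peak / self.capacity))
        self.repository.append(count)  # store prediction for future reference
        return count

pipe = Pipeline()
raw = [{"load": 9000}, {"load": None}, {"load": 14000}]
count = pipe.predict_count(pipe.preprocess(raw))
```

Storing each prediction in the repository mirrors the record-keeping step, giving the historical log used for auditing and model refinement.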
[0070] At step 510, the method 500 includes the step of analyzing the retrieved data utilizing the forecasting unit 410 to identify the trends and the patterns pertaining to the customer load on each of one or more instances of the NF 125 by the analyzing unit 240. Identifying the trends and patterns in data involves analyzing historical and real-time data to extract insights about customer behavior, system performance, or resource usage. The trend refers to a long-term change or direction in data over time. The trends might reflect consistent growth, decline, or shifts in the customer load on the one or more instances of the NF 125.
[0071] A pattern refers to recurring or cyclical behaviors observed in the data. The recurring or cyclical behaviors are linked to specific times, events, or customer habits. The analyzing unit 240 recommends scaling up server capacity during these peak times. The forecasting unit 410 predicts a similar pattern for the upcoming holidays, prompting pre-emptive adjustments.
[0072] At step 515, the method 500 includes the step of generating the prediction of the count of instances to be allocated to the NF 125 in the network 105 based on the identified trends and the patterns by the analyzing unit 240. Predicting the count of instances involves using the historical and real-time data to estimate the number of network function instances (e.g., the AMF, the SMF, the UPF, the PCF, etc.) needed to handle the expected customer load effectively.
[0073] By predicting the required count of instances of the NF 125, resource provisioning and load-balancing adjustments ensure that network functions can handle high-demand periods efficiently. The proactive approach not only prevents downtime but also enhances customer satisfaction by delivering a seamless and reliable service experience. The generated prediction of the count of instances to be allocated to the NF 125 is stored in the centralized data repository 225 for record-keeping and future reference. This ensures that the system 120 maintains a historical log of all predictions, which is used for auditing, refinement of forecasting models, or training future AI/ML models.
[0074] The method further includes the step of transmitting the visual representation of the trends and patterns related to the customer load of the one or more instances of the NF 125 to the user device 110 based on the analysis by the transmitting unit 245. The method further includes the step of displaying the transmitted visual representation of the trends and patterns of the customer load of the one or more instances of the NF 125 to the user via the user interface 315 of the user device 110 by the displaying unit 250. The visual representation of the trends and patterns is in the form of line graphs, heatmaps, bar charts, and real-time dashboards.
[0075] In another aspect of the embodiment, a non-transitory computer-readable medium has stored thereon computer-readable instructions that, when executed by a processor 205, cause the processor 205 to perform the following operations. The processor 205 is configured to retrieve data pertaining to a plurality of customers from one or more sources. The processor 205 is configured to analyze the retrieved data utilizing one or more forecasting units 410 to identify trends and patterns pertaining to customer load on each of one or more instances of the NF 125. The processor 205 is configured to generate a prediction of a count of instances to be allocated to the NF 125 in the network 105 based on the identified trends and patterns.
[0076] A person of ordinary skill in the art will readily ascertain that the illustrated embodiments and steps in description and drawings (FIGS.1-5) are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope of the disclosed embodiments.
[0077] The present disclosure provides technical advancement for seamlessly integrating the data generated from the FMS interface with the AI/ML model to accurately predict the one or more instances of the NFs required for load handling based on the customer data. By leveraging AI/ML predictions, the system not only predicts, and stores the required NF instance allocation counts but also provides a user-friendly visual representation of the customer load of the trends and patterns. These insights are transmitted to decision-makers, enabling proactive resource management, reducing resource wastage and improving overall network performance. Further, the system increases processing speed by optimizing processor usage, reduces memory requirements, and supports dynamic scaling of the NF instances, adapting to changing network conditions and customer demands.
[0078] The present invention offers multiple advantages over the prior art and the above listed are a few examples to emphasize on some of the advantageous features. The listed advantages are to be read in a non-limiting manner.

REFERENCE NUMERALS

[0079] Environment - 100
[0080] Network - 105
[0081] User equipment - 110
[0082] Server - 115
[0083] System - 120
[0084] Processor - 205
[0085] Memory - 210
[0086] User interface - 215
[0087] Input device - 220
[0088] Centralized data repository - 225
[0089] FMS interface - 230
[0090] Pre-processing unit - 235
[0091] Analyzing unit - 240
[0092] Transmitting unit - 245
[0093] Displaying unit - 250
[0094] Storage unit - 255
[0095] Primary processor - 305
[0096] Memory - 310
[0097] User interface - 315
[0098] Data integration unit - 405
[0099] Forecasting unit - 410
[00100] Workflow manager - 415
CLAIMS
We Claim:
1. A method (500) for predicting one or more instances of a Network Function (NF) (125) in a network (105), the method (500) comprising the steps of:
retrieving, by one or more processors (205), data pertaining to a plurality of customers from one or more sources;
analyzing, by the one or more processors (205), the retrieved data utilizing one or more forecasting units (410) to identify trends and patterns pertaining to customer load on each of one or more instances of the NF (125); and
generating, by the one or more processors (205), prediction of a count of instances to be allocated to the NF (125) in the network (105) based on the identified trends and patterns.

2. The method (500) as claimed in claim 1, wherein the data pertaining to the plurality of customers includes at least one of, onboarding data, service usage data, deactivation data, and historical data.

3. The method (500) as claimed in claim 1, wherein the step of retrieving, by one or more processors (205), data pertaining to the plurality of customers from one or more sources, further includes the steps of:
pre-processing, by the one or more processors (205), the retrieved data in order to utilize the pre-processed data for the training of the one or more forecasting units (410); and
storing, by the one or more processors (205), the pre-processed data.

4. The method (500) as claimed in claim 1, wherein the data pertaining to the plurality of customers from the one or more sources is retrieved in real-time.

5. The method (500) as claimed in claim 1, wherein the generated prediction of the count of instances to be allocated is stored in a centralized data repository (225).

6. The method (500) as claimed in claim 1, wherein the method (500) comprises the step of:
transmitting, by the one or more processors (205), a visual representation of the trends and patterns of the customer load at the one or more instances of NF (125) to a user device (110) based on the analysis; and
displaying, by the one or more processors (205), the transmitted visual representation of the trends and patterns of the one or more instances of the NF (125) to a user via a user interface (315) of the user device (110).

7. A system (120) for predicting one or more instances of a Network Function (NF) (125) in a network (105), the system (120) comprising:
a Fault Management System (FMS) interface (230), configured to, retrieve, data pertaining to a plurality of customers from one or more sources;
an analyzing unit (240), configured to, analyze, the retrieved data utilizing one or more forecasting units (410) to identify trends and patterns pertaining to customer load on each of one or more instances of the NF (125); and
the analyzing unit (240), configured to, generate, prediction of a count of instances to be allocated to the NF (125) in the network (105) based on the identified trends and patterns.

8. The system (120) as claimed in claim 7, wherein the data pertaining to the plurality of customers includes at least one of, onboarding data, service usage data, deactivation data and historical data.

9. The system (120) as claimed in claim 7, wherein upon retrieving, data pertaining to the plurality of customers from the one or more sources, the system (120) comprises:
a pre-processing unit (235), configured to, pre-process, the retrieved data in order to utilize the pre-processed data for the training of the one or more forecasting units (410); and
a centralized data repository (225), configured to, store, the pre-processed data.

10. The system (120) as claimed in claim 7, wherein the data pertaining to the plurality of customers from the one or more sources is retrieved in real-time.

11. The system (120) as claimed in claim 7, wherein the generated prediction of the count of instances to be allocated is stored in the centralized data repository (225).

12. The system (120) as claimed in claim 7, wherein the system (120) comprises:
a transmitting unit (245), configured to, transmit, a visual representation of the trends and patterns of the customer load of the one or more instances of the NF (125) to a user device (110) based on the analysis; and
a displaying unit (250), configured to, display, the transmitted visual representation of the trends and patterns of the one or more instances of the NF (125) to a user via a user interface (315) of the user device (110).

13. A user device (110), comprising:
one or more primary processors (305) communicatively coupled to one or more processors (205), the one or more primary processors (305) coupled with a memory (310), wherein said memory (310) stores instructions which when executed by the one or more primary processors (305) causes the user device (110) to:
transmit, a request by a user to the one or more processors (205) for predicting one or more instances of a Network Function (NF); and
wherein the one or more processors (205) is configured to perform the steps as claimed in claim 1.

Documents

Application Documents

# Name Date
1 202421006518-STATEMENT OF UNDERTAKING (FORM 3) [31-01-2024(online)].pdf 2024-01-31
2 202421006518-PROVISIONAL SPECIFICATION [31-01-2024(online)].pdf 2024-01-31
3 202421006518-POWER OF AUTHORITY [31-01-2024(online)].pdf 2024-01-31
4 202421006518-FORM 1 [31-01-2024(online)].pdf 2024-01-31
5 202421006518-FIGURE OF ABSTRACT [31-01-2024(online)].pdf 2024-01-31
6 202421006518-DRAWINGS [31-01-2024(online)].pdf 2024-01-31
7 202421006518-DECLARATION OF INVENTORSHIP (FORM 5) [31-01-2024(online)].pdf 2024-01-31
8 202421006518-FORM-26 [12-02-2024(online)].pdf 2024-02-12
9 202421006518-Proof of Right [04-06-2024(online)].pdf 2024-06-04
10 202421006518-DRAWING [17-01-2025(online)].pdf 2025-01-17
11 202421006518-CORRESPONDENCE-OTHERS [17-01-2025(online)].pdf 2025-01-17
12 202421006518-COMPLETE SPECIFICATION [17-01-2025(online)].pdf 2025-01-17
13 202421006518-Power of Attorney [24-01-2025(online)].pdf 2025-01-24
14 202421006518-Form 1 (Submitted on date of filing) [24-01-2025(online)].pdf 2025-01-24
15 202421006518-Covering Letter [24-01-2025(online)].pdf 2025-01-24
16 202421006518-CERTIFIED COPIES TRANSMISSION TO IB [24-01-2025(online)].pdf 2025-01-24
17 202421006518-FORM 3 [31-01-2025(online)].pdf 2025-01-31
18 Abstract-1.jpg 2025-03-13