Abstract: SYSTEM AND METHOD FOR MANAGING PROCESSING OF REQUESTS IN A NETWORK ENVIRONMENT. The present invention relates to a system (108) and a method (600) for managing processing of requests in a network (106) environment (100). The method (600) includes the step of receiving a plurality of requests from a user via one or more communication channels at a server (104). Further, creating an entry in a queue for each communication channel used for receiving a request from the plurality of requests at the server (104) from the user. Thereafter, computing an ejection time index for each request based on a time of receiving the request, a preconfigured ejection threshold time and a preconfigured ejection job interval. Furthermore, storing, at a storage unit (206), a reference pertaining to each request in the queue. Thereafter, processing the one or more requests as per the ejection time index. Ref. Fig. 2
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
1. TITLE OF THE INVENTION
SYSTEM AND METHOD FOR MANAGING PROCESSING OF REQUESTS IN A NETWORK ENVIRONMENT
2. APPLICANT(S)
NAME NATIONALITY ADDRESS
JIO PLATFORMS LIMITED INDIAN OFFICE-101, SAFFRON, NR. CENTRE POINT, PANCHWATI 5 RASTA, AMBAWADI, AHMEDABAD 380006, GUJARAT, INDIA
3. PREAMBLE TO THE DESCRIPTION
THE FOLLOWING SPECIFICATION PARTICULARLY DESCRIBES THE NATURE OF THIS INVENTION AND THE MANNER IN WHICH IT IS TO BE PERFORMED.
FIELD OF THE INVENTION
[0001] The present invention relates to the field of wireless communication systems, and more particularly to a method and a system for managing processing of requests in a network environment.
BACKGROUND OF THE INVENTION
[0002] In a communication network, server(s) receive multiple requests from users over a single time frame, process the received requests and deliver results to the user end. However, instances where multiple user requests are received, leading to overcrowding of requests, ultimately result in reduced performance of network servers: the system may fail, servers may hang, or the speed at which request processing occurs may slow down. In simpler terms, a server attending to a massive request influx may end up freezing or may even restart as a means of auto-recovery, resulting in loss of data. This is accompanied by redundant memory occupation, where the residue of previously processed requests (both successful and failed) is stored in dynamic storage, leaving less operational memory space for the server components. When a server is unable to process a user request, or the request gets timed out, i.e., the server is unable to respond to that request in time, then without a subsequent mechanism to release such requests, they crowd the server's dynamic memory and lead to a poor user experience as well as memory degradation due to data redundancy.
[0003] The server(s) in a network receive and process requests from clients via different channel/connection pathways. However, not every request received by the server is successful; some requests may fail or time out. The server has a definite dynamic memory in which to store received requests and process them. In case multiple requests stream into the server, the server may accumulate redundant data from requests cluttered in the dynamic memory, resulting in constriction of memory space, and thus the components of the server may experience degradation of performance. Without a suitable mechanism having the capability to monitor and process multiple requests from multiple channels/connections while preventing performance degradation, optimum network performance cannot be achieved.
[0004] Presently, there is no solution to eject the residue of processed requests, whether successful, failed or timed out, in order to free up the dynamic memory of a server. There is a need for a suitable mechanism to release any attached resources or residue data of a request when it fails or gets timed out. The ejection parameters, i.e., the timed-out threshold, should also be reconfigurable during runtime so that the server can adjust to the real-time behavior of the network.
[0005] Therefore, from the above cases, it becomes necessary to implement a system and method to set a threshold limit for ejection of processed requests, thereby de-cluttering the residual data from the server's dynamic memory in a repeated and regular manner, so as to prevent the possibility of server shutdowns and maintain server health. However, the currently available solutions are not able to offer an optimized ejection system and method with provision to reconfigure ejection threshold limits in real time.
SUMMARY OF THE INVENTION
[0006] One or more embodiments of the present disclosure provides a method and a system for managing processing of requests in a network environment.
[0007] In one aspect of the present invention, the method for managing processing of the requests in the network environment is disclosed. The method includes the step of receiving, by one or more processors, at a server, a plurality of requests from a user via one or more communication channels. The method further includes the step of creating, by the one or more processors, an entry in a queue for each communication channel used for receiving a request from the plurality of requests at the server from the user. The method further includes the step of computing, by the one or more processors, an ejection time index for each request based on a time of receiving the request, a preconfigured ejection threshold time and a preconfigured ejection job interval. The method further includes the step of storing, by the one or more processors, at a storage unit, a reference pertaining to each request in the queue, wherein the queue includes one or more other requests having an ejection time index similar to the ejection time index of the request. The method further includes the step of processing, by the one or more processors, the one or more requests as per the ejection time index.
[0008] In another embodiment, the plurality of requests pertain to availing one or more services from the server.
[0009] In yet another embodiment, for each communication channel, a separate queue is created by the one or more processors.
[0010] In yet another embodiment, the step of processing the one or more requests as per the ejection time index further comprises the steps of: if the status of the one or more requests is timed out, triggering, by the one or more processors, an ejection job to eject the one or more requests from the queue and releasing the associated one or more resources; and if the status of the one or more requests is success or failure, ejecting, by the one or more processors, the one or more requests from the queue and releasing the associated one or more resources.
[0011] In yet another embodiment, when the status of the one or more requests is timed out, the method further comprises the step of raising, by the one or more processors, an alert for the one or more requests being timed out to the user.
[0012] In yet another embodiment, the status of the one or more requests is inferred by the one or more processors based on a type of the response received from the server pertaining to the one or more requests.
[0013] In yet another embodiment, the method further comprises the step of updating, by the one or more processors, the queue dynamically based on changing network conditions or server load.
[0014] In yet another embodiment, the method further comprises the step of dynamically redistributing, by the one or more processors, one or more resources among active requests in the queue which are not released.
[0015] In yet another embodiment, the method further comprises the step of generating, by the one or more processors, performance reports based on ejection-related activities.
[0016] In another aspect of the present invention, the system for managing processing of the requests in the network environment is disclosed. The system includes a transceiver configured to receive, at a server, a plurality of requests from a user via one or more communication channels. The system includes a creating unit configured to create an entry in a queue for each communication channel used for receiving a request from the plurality of requests at the server from the user. The system further includes a computing unit configured to compute an ejection time index for each request based on a time of receiving the request, a preconfigured ejection threshold time and a preconfigured ejection job interval. The system further includes a storing unit configured to store, at a storage unit, a reference pertaining to each request in the queue, wherein the queue includes one or more other requests having an ejection time index similar to the ejection time index of the request. The system further includes a processing unit configured to process the one or more requests as per the ejection time index.
[0017] In yet another aspect of the present invention, a non-transitory computer-readable medium is disclosed, having stored thereon computer-readable instructions that, when executed by a processor, configure the processor as follows. The processor is configured to receive, at a server, a plurality of requests from a user via one or more communication channels. The processor is further configured to create an entry in a queue for each communication channel used for receiving a request from the plurality of requests at the server from the user. The processor is further configured to compute an ejection time index for each request based on a time of receiving the request, a preconfigured ejection threshold time and a preconfigured ejection job interval. The processor is further configured to store, at a storage unit, a reference pertaining to each request in the queue, wherein the queue includes one or more other requests having an ejection time index similar to the ejection time index of the request. The processor is further configured to process the one or more requests as per the ejection time index.
[0018] Other features and aspects of this invention will be apparent from the following description and the accompanying drawings. The features and advantages described in this summary and in the following detailed description are not all-inclusive, and particularly, many additional features and advantages will be apparent to one of ordinary skill in the relevant art, in view of the drawings, specification, and claims hereof. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes and may not have been selected to delineate or circumscribe the inventive subject matter, resort to the claims being necessary to determine such inventive subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
[0019] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0020] FIG. 1 is an exemplary block diagram of an environment for managing processing of requests in a network environment, according to one or more embodiments of the present invention;
[0021] FIG. 2 is an exemplary block diagram of a system for managing processing of the requests in the network environment, according to one or more embodiments of the present invention;
[0022] FIG. 3 is an exemplary architecture of the system of FIG. 2, according to one or more embodiments of the present invention;
[0023] FIG. 4 is an exemplary architecture for managing processing of the requests in the network environment, according to one or more embodiments of the present disclosure;
[0024] FIG. 5 is an exemplary signal flow diagram illustrating the flow for managing processing of the requests in the network environment, according to one or more embodiments of the present disclosure; and
[0025] FIG. 6 is a flow diagram of a method for managing processing of the requests in the network environment, according to one or more embodiments of the present invention.
[0026] The foregoing shall be more apparent from the following detailed description of the invention.
DETAILED DESCRIPTION OF THE INVENTION
[0027] Some embodiments of the present disclosure, illustrating all its features, will now be discussed in detail. It must also be noted that as used herein and in the appended claims, the singular forms "a", "an" and "the" include plural references unless the context clearly dictates otherwise.
[0028] Various modifications to the embodiment will be readily apparent to those skilled in the art, and the generic principles herein may be applied to other embodiments. However, one of ordinary skill in the art will readily recognize that the present disclosure, including the definitions listed herein below, is not intended to be limited to the embodiments illustrated but is to be accorded the widest scope consistent with the principles and features described herein.
[0029] A person of ordinary skill in the art will readily ascertain that the illustrated steps detailed in the figures and here below are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0030] Various embodiments of the present invention provide a system and a method for managing processing of requests in a network environment. More particularly, the system and the method provide a solution for ejecting residue data of a processed request from at least one of, but not limited to, a dynamic memory/storage of a server in the network. In other words, the present invention provides a unique approach to processing the requests to solve cluttering of the requests present in a queue. Therefore, the present invention is able to ensure that the performance of the server is not degraded.
[0031] Referring to FIG. 1, FIG. 1 illustrates an exemplary block diagram of an environment 100 for managing processing of requests in a network 106, according to one or more embodiments of the present invention. The environment 100 includes a User Equipment (UE) 102, a server 104, the network 106, and a system 108. Herein, managing processing of requests pertains to ejecting the requests from a queue and releasing one or more resources associated with the requests.
[0032] For the purpose of description and explanation, the description will be explained with respect to one or more User Equipments (UEs) 102 of a third party, or, to be more specific, with respect to a first UE 102a, a second UE 102b, and a third UE 102c, and should nowhere be construed as limiting the scope of the present disclosure. Each of the at least one UE 102, namely the first UE 102a, the second UE 102b, and the third UE 102c, is configured to connect to the server 104 via the network 106.
[0033] In an embodiment, each of the first UE 102a, the second UE 102b, and the third UE 102c is one of, but not limited to, any electrical, electronic or electro-mechanical equipment, or a combination of one or more of the above devices, such as smartphones, Virtual Reality (VR) devices, Augmented Reality (AR) devices, laptops, general-purpose computers, desktops, personal digital assistants, tablet computers, mainframe computers, or any other computing device.
[0034] The network 106 includes, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof. The network 106 may include, but is not limited to, a Third Generation (3G), a Fourth Generation (4G), a Fifth Generation (5G), a Sixth Generation (6G), a New Radio (NR), a Narrow Band Internet of Things (NB-IoT), an Open Radio Access Network (O-RAN), and the like.
[0035] The network 106 may also include, by way of example but not limitation, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, process, or a combination thereof, etc. one or more messages, packets, signals, waves, voltage or current levels, some combination thereof, or so forth.
[0036] The environment 100 includes the server 104 accessible via the network 106. The server 104 may include, by way of example but not limitation, one or more of a standalone server, a server blade, a server rack, a bank of servers, a server farm, hardware supporting a part of a cloud service or system, a home server, hardware running a virtualized server, a processor executing code to function as a server, one or more machines performing server-side functionality as described herein, at least a portion of any of the above, or some combination thereof. In an embodiment, the server 104 may be operated by an entity including, but not limited to, a vendor, a network operator, a company, an organization, a university, a lab facility, a business enterprise, a defense facility, or any other facility that provides services.
[0037] The environment 100 further includes the system 108 communicably coupled to the server 104 and the UE 102 via the network 106. The system 108 is adapted to be embedded within the server 104 or deployed as an individual entity.
[0038] Operational and construction features of the system 108 will be explained in detail with respect to the following figures.
[0039] FIG. 2 is an exemplary block diagram of the system 108 for managing processing of the requests in the network 106 environment 100, according to one or more embodiments of the present invention.
[0040] As per the illustrated and preferred embodiment, the system 108 for managing processing of the requests in the network 106 environment 100, includes one or more processors 202, a memory 204 and a storage unit 206. The one or more processors 202, hereinafter referred to as the processor 202, may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, single board computers, and/or any devices that manipulate signals based on operational instructions. However, it is to be noted that the system 108 may include multiple processors as per the requirement and without deviating from the scope of the present disclosure. Among other capabilities, the processor 202 is configured to fetch and execute computer-readable instructions stored in the memory 204.
[0041] As per the illustrated embodiment, the processor 202 is configured to fetch and execute computer-readable instructions stored in the memory 204 as the memory 204 is communicably connected to the processor 202. The memory 204 is configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed for managing processing of the requests in the network 106 environment 100. The memory 204 may include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as disk memory, EPROMs, FLASH memory, unalterable memory, and the like.
[0042] As per the illustrated embodiment, the storage unit 206 is configured to store a reference pertaining to each of the requests. In other words, the storage unit 206 stores an address of at least one of the queue and the memory 204 where the requests are present. The storage unit 206 is one of, but not limited to, a centralized database, a cloud-based database, a commercial database, an open-source database, a distributed database, an end-user database, a graphical database, a No-Structured Query Language (NoSQL) database, an object-oriented database, a personal database, an in-memory database, a document-based database, a time series database, a wide column database, a key-value database, a search database, a cache database, and so forth. The foregoing examples of storage unit 206 types are non-limiting and may not be mutually exclusive (e.g., the database can be both commercial and cloud-based, or both relational and open-source, etc.).
[0043] As per the illustrated embodiment, the system 108 includes the processor 202 for managing processing of the requests in the network 106 environment 100. The processor 202 includes a transceiver 208, a creating unit 210, a computing unit 212, a processing unit 214, an alert raising unit 216, an inferring unit 218, an updating unit 220, a redistributing unit 222, and a generating unit 224. The processor 202 is communicably coupled to the one or more components of the system 108 such as the memory 204 and the storage unit 206. In an embodiment, operations and functionalities of the transceiver 208, the creating unit 210, the computing unit 212, the processing unit 214, the alert raising unit 216, the inferring unit 218, the updating unit 220, the redistributing unit 222, the generating unit 224 and the one or more components of the system 108 can be used in combination or interchangeably.
[0044] Initially, a user utilizes the UE 102 to transmit a plurality of requests to the server 104 via one or more communication channels to avail one or more services from the server 104. In one embodiment, the transceiver 208 of the processor 202 receives the plurality of requests at the server 104 in the network 106. In one embodiment, the one or more communication channels between the UE 102, the server 104 and the system 108 within the network 106 include at least one of, but not limited to, a Transmission Control Protocol (TCP) connection.
[0045] TCP is a protocol for establishing a connection between a source and a destination, such as the UE 102 and the server 104, which ensures reliable data transmission therebetween. In other words, the one or more communication channels are the medium through which the communication between the UE 102, the server 104 and the system 108 takes place. In one embodiment, the one or more communication channels are designed to send packets across an internet and ensure the successful delivery of data and messages over the network 106.
[0046] In one embodiment, the one or more services include at least one of, but not limited to, a calling service, a text message service, a video calling service, and a call recording service. In particular, in order to avail the one or more services, the user needs to subscribe to the one or more services.
[0047] Upon receiving the plurality of requests from the UE 102 of the user via the one or more communication channels, the creating unit 210 of the processor 202 is configured to create an entry in the queue for each communication channel used for receiving a request from the plurality of requests at the server 104. In particular, creating the entry in the queue refers to adding the plurality of requests to the queue based on at least one of, but not limited to, a time of receiving the plurality of requests.
[0048] In one embodiment, the queue, in the context of telecommunications, refers to a waiting line where the received requests are held until they are processed or serviced by at least one of, but not limited to, the server 104. In an alternate embodiment, the queue may reside in at least one of, but not limited to, the memory 204, where the received plurality of requests is stored. In one embodiment, for each communication channel, a separate queue is created by the creating unit 210.
[0049] Upon creating the entry in the queue for each communication channel, the computing unit 212 of the processor 202 is configured to compute an ejection time index for each request from the plurality of requests based on at least one of, but not limited to, a time of receiving the request, a preconfigured ejection threshold time and a preconfigured ejection job interval. In one embodiment, the ejection time index represents the time, in various formats, at which the plurality of requests must be ejected from the queue and the associated one or more resources must be released. For example, the various formats include at least one of, but not limited to, a 12-hour time format and a 24-hour time format along with the date. In one embodiment, the one or more resources are anything that is used to perform a task or to achieve a goal. The one or more resources include at least one of, but not limited to, a Central Processing Unit (CPU), a memory, and a network bandwidth.
[0050] In one embodiment, the preconfigured ejection threshold time is a predefined time limit at which the plurality of requests is ejected. In one scenario, the plurality of requests is waiting in the queue in order to be served by the server 104. While waiting in the queue, if the preconfigured ejection threshold time is breached, then the plurality of requests is ejected, and the associated one or more resources are released.
[0051] In one embodiment, the preconfigured ejection job interval is a predefined time interval at which the preconfigured ejection threshold time is checked to determine the number of requests among the plurality of requests that have breached the preconfigured ejection threshold time. In particular, the ejection job interval is the frequency at which it is checked whether the plurality of requests has breached the preconfigured ejection threshold time.
[0052] In one embodiment, the preconfigured ejection threshold time and the preconfigured ejection job interval are predefined by the computing unit 212 based on historical data pertaining to the plurality of requests. In an alternate embodiment, the preconfigured ejection threshold time and the preconfigured ejection job interval are predefined or configured in real time by the user.
[0053] Upon computing the ejection time index for each request from the plurality of requests, the storage unit 206 is configured to store the reference pertaining to each request in the queue. In other words, instead of storing a copy of each request in the storage unit 206, the storage unit 206 stores the address of at least one of the queue and the memory 204 where the requests are located. In one embodiment, the queue includes one or more other requests having an ejection time index similar to the ejection time index of the plurality of requests. In other words, a request reference is stored according to the ejection time index in the queue along with the one or more other requests with the same ejection time index.
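By way of a non-limiting illustration, such a reference store may be sketched in Java as follows; the class and member names (e.g., ReferenceStore, storeReference) are hypothetical assumptions for illustration and are not mandated by the specification:

import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

// Illustrative sketch only: requests are not copied; the store keeps a
// lightweight reference (here, a request identifier) grouped under the
// shared ejection time index, mirroring paragraph [0053].
public final class ReferenceStore {
    private final Map<Long, List<String>> byEjectionIndex = new ConcurrentHashMap<>();

    // Store a reference to a request against its computed ejection time index.
    public void storeReference(long ejectionTimeIndex, String requestId) {
        byEjectionIndex
            .computeIfAbsent(ejectionTimeIndex, k -> new CopyOnWriteArrayList<>())
            .add(requestId);
    }

    // All references sharing a given ejection time index can be retrieved
    // (and later ejected) together.
    public List<String> referencesFor(long ejectionTimeIndex) {
        return byEjectionIndex.getOrDefault(ejectionTimeIndex, List.of());
    }
}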
[0054] Upon computing the ejection time index for the one or more received requests and storing the reference pertaining to each request in the queue, the processing unit 214 of the processor 202 is configured to process the one or more requests as per the ejection time index. In one embodiment, the processing unit 214 checks the status of the one or more requests in the queue. The status of the one or more requests is inferred by the inferring unit 218 of the processor 202 based on the type of response received from the server 104 pertaining to the one or more requests. In particular, the response may include at least one of, but not limited to, an ejection prompt for the one or more received requests. In one embodiment, the ejection prompt refers to a command or message which indicates the status of the one or more requests in the queue.
[0055] In particular, based on the status of the one or more requests in the queue, the processing unit 214 ejects the one or more requests from the queue and releases the associated one or more resources at the computed ejection time index. More particularly, if the status of the one or more requests is at least one of, but not limited to, a success, a failure, and a time out, then the said one or more requests are ejected from the queue and the associated one or more resources are released.
[0056] In one embodiment, if the status of the one or more requests is timed out, then an ejection job is triggered by the processing unit 214 to eject the one or more requests from the queue and release the associated one or more resources. Herein, the ejection job is a process for managing the one or more requests in the queue. More particularly, the ejection job removes the one or more requests from the queue. In one embodiment, the ejection job is triggered by the processing unit 214 at a predefined time interval. In another embodiment, if the status of the one or more requests is the success or the failure, the processing unit 214 ejects the one or more requests from the queue and releases the associated one or more resources. Advantageously, the accumulation of the one or more requests in the queue is prevented, due to which the memory 204 of the system 108 is conserved.
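One plausible realization of such a periodically triggered ejection job, sketched here with Java's ScheduledExecutorService, is given below; the class name EjectionJob and the callback wiring are assumptions for illustration only:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch of an ejection job that fires at the preconfigured ejection job
// interval; each tick ejects timed-out requests from the queue and releases
// their associated resources (the actual ejection logic is supplied as a task).
public final class EjectionJob {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    public void start(long jobIntervalMillis, Runnable ejectTimedOutRequests) {
        scheduler.scheduleAtFixedRate(
                ejectTimedOutRequests, jobIntervalMillis, jobIntervalMillis,
                TimeUnit.MILLISECONDS);
    }

    public void stop() {
        scheduler.shutdown();
    }
}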
[0057] In one embodiment, when the status of the one or more requests is timed out, the alert raising unit 216 of the processor 202 is configured to raise an alert for the one or more requests being timed out to the user. In particular, the alert raising unit 216 is configured to notify the user regarding the timed-out status of the one or more requests in the queue. In one embodiment, for the one or more requests which have timed out, the associated one or more resources are released or ejected by the processing unit 214. Advantageously, the accumulation of the timed-out one or more requests in the queue is prevented, due to which the memory 204 of the system 108 is conserved.
[0058] Upon processing the one or more requests within the ejection time index based on the status of the one or more requests, the updating unit 220 of the processor 202 is configured to update the queue of the one or more requests dynamically based on at least one of, but not limited to, changing network conditions or server load.
[0059] In one embodiment, upon updating the queue of the one or more requests, the redistributing unit 222 of the processor 202 is configured to dynamically redistribute the one or more resources among the active one or more requests in the queue which are not released.
[0060] In one embodiment, upon updating the queue and redistributing the one or more resources among the active requests in the queue, the generating unit 224 of the processor 202 is configured to generate performance reports based on the release- or ejection-related activities. Thereafter, the generated performance reports are provided to the user.
[0061] The transceiver 208, the creating unit 210, the computing unit 212, the processing unit 214, the alert raising unit 216, the inferring unit 218, the updating unit 220, the redistributing unit 222, and the generating unit 224, in an exemplary embodiment, are implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor 202. In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processor 202 may be processor-executable instructions stored on a non-transitory machine-readable storage medium, and the hardware for the processor may comprise a processing resource (for example, one or more processors) to execute such instructions. In the present examples, the memory 204 may store instructions that, when executed by the processing resource, implement the processor 202. In such examples, the system 108 may comprise the memory 204 storing the instructions and the processing resource to execute the instructions, or the memory 204 may be separate but accessible to the system 108 and the processing resource. In other examples, the processor 202 may be implemented by electronic circuitry.
[0062] FIG. 3 illustrates an exemplary architecture for the system 108, according to one or more embodiments of the present invention. More specifically, FIG. 3 illustrates the system 108 for managing processing of the requests in the network 106 environment 100. It is to be noted that the embodiment with respect to FIG. 3 will be explained with respect to the UE 102 for the purpose of description and illustration and should nowhere be construed as limiting the scope of the present disclosure.
[0063] FIG. 3 shows communication between the UE 102, the system 108, and the server 104. For the purpose of description of the exemplary embodiment as illustrated in FIG. 3, the UE 102 uses a network protocol connection or the communication channel to communicate with the system 108 and the server 104. In an embodiment, the network protocol connection or the communication channel is the establishment and management of communication between the UE 102, the system 108 and the server 104, over the network 106 (as shown in FIG. 1), using a specific protocol or set of protocols. The network protocol connection includes, but is not limited to, Session Initiation Protocol (SIP), System Information Block (SIB) protocol, Transmission Control Protocol (TCP), User Datagram Protocol (UDP), File Transfer Protocol (FTP), Hypertext Transfer Protocol (HTTP), Simple Network Management Protocol (SNMP), Internet Control Message Protocol (ICMP), Hypertext Transfer Protocol Secure (HTTPS) and Terminal Network (TELNET).
[0064] In an embodiment, the UE 102 includes a primary processor 302, a memory 304 and a User Interface (UI) 306. In alternate embodiments, the UE 102 may include more than one primary processor 302 as per the requirement of the network 106. The primary processor 302 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, single board computers, and/or any devices that manipulate signals based on operational instructions.
[0065] In an embodiment, the primary processor 302 is configured to fetch and execute computer-readable instructions stored in the memory 304. The memory 304 may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed for managing processing of the requests in the network 106 environment 100. The memory 304 may include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as disk memory, EPROMs, FLASH memory, unalterable memory, and the like.
[0066] In an embodiment, the User Interface (UI) 306 includes a variety of interfaces, for example, a graphical user interface, a web user interface, a Command Line Interface (CLI), and the like. The User Interface (UI) 306 of the UE 102 transmits the plurality of requests to the server 104 in order to avail the one or more services from the server 104. In one embodiment, the user may be, but is not limited to, a network operator.
[0067] For example, let us assume the system 108 continuously checks the status of the one or more received requests subsequent to the processing of requests in the network 106. Utilizing the calculated ejection time index, the processing of the one or more received requests is performed. Thereafter, the one or more received requests are ejected from the queue and the associated one or more resources are released according to the ejection time index based on the status of the one or more requests. Advantageously, the cluttering of the one or more requests in the queue or the memory 204 is prevented.
[0068] As mentioned earlier in FIG. 2, the system 108 includes the processor 202 and the memory 204 for managing processing of requests in the network 106 environment 100, which are already explained in FIG. 2. For the sake of brevity, a similar description related to the working and operation of the system 108 as illustrated in FIG. 2 has been omitted to avoid repetition.
[0069] Further, as mentioned earlier the processor 202 includes the transceiver 208, the creating unit 210, the computing unit 212, the processing unit 214, the alert raising unit 216, the inferring unit 218, the updating unit 220, the redistributing unit 222, and the generating unit 224 which are already explained in FIG. 2. Hence, for the sake of brevity, a similar description related to the working and operation of the system 108 as illustrated in FIG. 2 has been omitted to avoid repetition. The limited description provided for the system 108 in FIG. 3, should be read with the description provided for the system 108 in the FIG. 2 above, and should not be construed as limiting the scope of the present disclosure.
[0070] FIG. 4 is an exemplary architecture 400 of the system 108 for managing processing of the requests in the network 106 environment 100, according to one or more embodiments of the present disclosure.
[0071] As per the illustrated embodiment, the system 108 architecture 400 includes a Virtual Machine (VM) 402. The VM 402 enables the UE 102 to run programs written in at least one of, but not limited to, Java, as well as programs written in other languages that are compiled to bytecode. In the illustrated embodiment, the VM 402 is the environment in which an application 404 and a protocol stack module 406 are executed.
[0072] In one embodiment, the application 404 is at least one of, but not limited to, a Java application which utilizes the protocol stack module 406 to communicate between the UE 102 and the server 104 via the one or more protocols. The application 404 includes at least one of, but not limited to, desktop applications, web applications, mobile applications, and enterprise applications.
[0073] In an embodiment of the present invention, the protocol stack module 406 is a library, based on one or more programming languages, which operates within the network 106 to enable communication between the UE 102 and the server 104 via one or more network protocol connections.
[0074] In an embodiment, the one or more network protocol connections is the establishment and management of communication between two or more UEs 102 over the network 106 using a specific protocol or set of protocols. The one or more network protocol connections include, but are not limited to, Transmission Control Protocol (TCP), User Datagram Protocol (UDP), File Transfer Protocol (FTP), Hypertext Transfer Protocol (HTTP), Simple Network Management Protocol (SNMP), Internet Control Message Protocol (ICMP), Hypertext Transfer Protocol Secure (HTTPS) and Terminal Network (TELNET).
[0075] The protocol stack module 406 provides abstracted APIs (Application Programming Interfaces) for developers to build an application around it, with inbuilt features like connection management, log management, transport of HTTP/2 messages, overload protection, rate limit protection, etc. Further, the protocol stack module 406 has the capability to manage the one or more resources related to the one or more requests.
[0076] In one embodiment, the protocol stack module 406 includes at least one of, but not limited to, a creation module 406a, an ejection module 406b, and an audit module 406c. In one embodiment, the creation module 406a acts like the creating unit 210. The creation module 406a is configured to create individual queues for each communication channel/connection used for receiving the one or more requests from the plurality of requests at the server 104. Herein, the one or more requests are stored as per the ejection time index. In one embodiment, the ejection module 406b acts like the processing unit 214. The ejection module 406b is configured to carry out an ejection process at a specified interval through the queue for the failed and timed-out one or more requests. The ejection module 406b releases the associated one or more resources and ejects the failed and the timed-out one or more requests from the queue. In one embodiment, when a request is successful, the ejection module 406b is further configured to eject the request and release the associated one or more resources immediately. In one embodiment, the audit module 406c checks for the one or more requests left over or missed by the ejection module 406b, ensuring long-term stability of the system 108.
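For illustration only, the division of labour among the three modules could be captured by Java interfaces along the following lines; the interface and method names mirror the description above but are hypothetical and not part of the specification:

// Hypothetical interfaces mirroring the creation, ejection, and audit modules
// described above; the specification defines behaviour, not an API.
interface CreationModule {
    // Create a dedicated queue for a given communication channel/connection.
    void createQueueFor(String channelId);
}

interface EjectionModule {
    // Runs at the specified interval: ejects failed and timed-out requests
    // from the queue and releases their associated resources.
    void runEjectionPass();

    // Successful requests are ejected and their resources released immediately.
    void ejectOnSuccess(String requestId);
}

interface AuditModule {
    // Sweeps for leftover requests missed by the ejection module, ensuring
    // long-term stability of the system.
    void auditForStaleRequests();
}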
[0077] Further, the system 108 architecture 400 includes a network layer 408. The network layer 408 is capable of transmitting network packets from the UE 102 to the server 104. In particular, the network packets are the data or the plurality of requests transmitted via the network 106 from the UE 102 to the server 104. A request includes at least one of, but not limited to, data requests, service requests, and Hypertext Transfer Protocol (HTTP) requests.
[0078] FIG. 5 is a signal flow diagram illustrating the flow for managing processing of the requests in the network 106 environment 100, according to one or more embodiments of the present disclosure.
[0079] At step 502, the user transmits the plurality of requests to the server 104 using the UE 102 for availing the one or more services. Initially, the plurality of requests is received at the system 108 via the one or more communication channels.
[0080] At step 504, the system 108 creates the entry in the queue for each communication channel used for receiving the plurality of requests. Further, the system 108 calculates the ejection time index for each request among the received plurality of requests. Thereafter, the system 108 stores the reference pertaining to each request in the queue and forwards the plurality of requests to the server 104.
[0081] At step 506, the server 104 transmits a response to the system 108 related to the plurality of requests. In particular, the response may include the status of the one or more requests. The status includes at least one of the success, the failure, and the time-out of the plurality of requests.
[0082] At step 508, the system 108 ejects the one or more requests from the queue and releases the associated one or more resources as per the ejection time index, based on the status of the one or more requests. In an alternate embodiment, when the response is not received by the system 108, the ejection job is triggered by the system 108 to eject the one or more requests from the queue and release the associated one or more resources.
[0083] At step 510, the system 108 generates the report related to the one or more requests ejected from the queue and the associated one or more resources released. Further, the system 108 provides the generated report to the UE 102 of the user.
[0084] FIG. 6 is a flow diagram of a method 600 for managing processing of the requests in the network 106 environment 100, according to one or more embodiments of the present invention. For the purpose of description, the method 600 is described with the embodiments as illustrated in FIG. 2 and should nowhere be construed as limiting the scope of the present disclosure.
[0085] At step 602, the method 600 includes the step of receiving the plurality of requests from the user via the one or more communication channels at the server 104. In one embodiment, the transceiver 208 receives the plurality of requests from the user via the one or more communication channels. For example, let us consider that when the user wishes to avail the one or more services, the user transmits at least one request to the server 104 using the UE 102. In one embodiment, the request received from the UE 102 of the user is at least one of, but not limited to, an Application Programming Interface (API) call. API calls are the medium by which the user interacts with the server 104 within the network 106. In particular, the API call is a message sent to the server 104 to avail the one or more services.
[0086] At step 604, the method 600 includes the step of creating the entry in the queue for each communication channel used for receiving the request from the plurality of requests at the server 104 from the user. In one embodiment, the creating unit 210 creates the entry in the queue for each communication channel. In particular, when the user transmits the plurality of requests via the one or more communication channels, the creating unit 210 creates a separate queue for each communication channel. For example, let us consider two communication channels, a channel A and a channel B. When the plurality of requests is received via the channel A and the channel B, a separate queue is created for the plurality of requests received from the channel A. Similarly, a separate queue is created for the plurality of requests received from the channel B.
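A minimal Java sketch of this per-channel queue creation, assuming string channel identifiers (the names ChannelQueues and enqueue are hypothetical), is shown below:

import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

// Illustrative sketch: one queue per communication channel, created lazily on
// the first request seen on that channel.
public final class ChannelQueues {
    private final Map<String, Queue<String>> queues = new ConcurrentHashMap<>();

    public void enqueue(String channelId, String requestId) {
        // Requests arriving on "channel-A" and "channel-B" land in separate queues.
        queues.computeIfAbsent(channelId, id -> new ConcurrentLinkedQueue<>())
              .add(requestId);
    }

    public static void main(String[] args) {
        ChannelQueues cq = new ChannelQueues();
        cq.enqueue("channel-A", "req-1");
        cq.enqueue("channel-B", "req-2");
    }
}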
[0087] At step 606, the method 600 includes the step of computing the ejection time index for each request based on the time of receiving the request, the preconfigured ejection threshold time and the preconfigured ejection job interval. Herein, the computing unit 212 identifies the time of receiving the request in a particular format, but not limited thereto. In one embodiment, the computing unit 212 is configured to calculate the ejection time index for each request. In one embodiment, the preconfigured ejection job interval may be less than or equal to the preconfigured ejection threshold time. For example, if the preconfigured ejection threshold time is predefined as 5 seconds and the preconfigured ejection job interval is predefined as 2 seconds, then every 2 seconds the requests in the queue are checked to determine whether they have breached the ejection threshold time. In one embodiment, the ejection time index is calculated as: ejection time index = (time of receiving the request + preconfigured ejection threshold time) / preconfigured ejection job interval.
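As a worked, non-limiting example of the formula above in Java (the method name computeEjectionTimeIndex, the millisecond units, and the assumed receipt time are assumptions for illustration; the 5-second threshold and 2-second interval are taken from the example in paragraph [0087]):

// Minimal sketch of the ejection time index formula from paragraph [0087]:
// index = (time of receiving the request + ejection threshold time) / ejection job interval.
public final class EjectionTimeIndex {

    static long computeEjectionTimeIndex(long receivedAtMillis,
                                         long thresholdMillis,
                                         long jobIntervalMillis) {
        // Integer division buckets each request into the ejection-job tick at
        // (or just after) which its threshold expires.
        return (receivedAtMillis + thresholdMillis) / jobIntervalMillis;
    }

    public static void main(String[] args) {
        long receivedAt = 10_000L; // assumed receipt time: t = 10 s
        long threshold  = 5_000L;  // preconfigured ejection threshold time: 5 s
        long interval   = 2_000L;  // preconfigured ejection job interval: 2 s

        // (10000 + 5000) / 2000 = 7: the request belongs to the 7th job tick.
        System.out.println(computeEjectionTimeIndex(receivedAt, threshold, interval));
    }
}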
[0088] At step 608, the method 600 includes the step of storing the reference pertaining to each request in the queue. In one embodiment, the queue further includes one or more other requests having an ejection time index similar to the ejection time index of the one or more received requests. In one embodiment, the storage unit 206 is configured to store the reference pertaining to each request in the queue. For example, a single copy of each request may be stored in the memory 204, and the address or reference of each request is stored against the ejection time index in the storage unit 206.
[0089] At step 610, the method 600 includes the step of processing the one or more requests as per the ejection time index. In one embodiment, the processing unit 214 is configured to process the one or more requests. For example, in one scenario, when the one or more requests included in the queue are served by the server 104, the server 104 provides the response to the processing unit 214. Thereafter, the inferring unit 218 infers the status of the one or more requests from the response received from the server 104. When the status of the one or more requests is success or failure, the processing unit 214 ejects the one or more requests from the queue and releases the associated one or more resources.
[0090] In another scenario, when the one or more requests included in the queue are not served by the server 104, or the server 104 does not provide a response to the processing unit 214 for the one or more requests, the inferring unit 218 infers that the status of the one or more requests is timed out. Thereafter, the alert raising unit 216 raises an alert for the one or more requests being timed out and provides the alert to the user. When the status of the one or more requests is timed out, the ejection job is triggered by the processing unit 214 to eject the one or more requests from the queue and release the associated one or more resources.
[0091] In one embodiment, the processing unit 214 runs an auditor job for checking for any stale request within the queue. A stale request is a leftover request that is missed by the ejection job. While running the auditor job, if the processing unit 214 identifies any stale request within the queue, the processing unit 214 ejects the stale request from the queue and releases the associated one or more resources, which ensures long-term stability of the system 108.
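A hedged sketch of such an auditor pass, reusing the hypothetical index-to-references layout from the earlier ReferenceStore sketch (the class name AuditorJob and the callback are assumptions), might look as follows:

import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Illustrative auditor sketch: sweep every bucket and eject any stale
// reference whose ejection time index has already passed but which the
// ejection job missed, releasing the associated resources.
public final class AuditorJob {

    public void audit(Map<Long, List<String>> byEjectionIndex,
                      long currentTickIndex,
                      Consumer<String> ejectAndRelease) {
        Iterator<Map.Entry<Long, List<String>>> it =
                byEjectionIndex.entrySet().iterator();
        while (it.hasNext()) {
            Map.Entry<Long, List<String>> bucket = it.next();
            if (bucket.getKey() < currentTickIndex) {
                // Leftover bucket missed by the ejection job: stale requests.
                bucket.getValue().forEach(ejectAndRelease);
                it.remove();
            }
        }
    }
}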
[0092] Upon ejecting the one or more requests from the queue and releasing the associated one or more resources, the updating unit 220 updates the queue dynamically based on changing network conditions or server load. For example, let us consider a scenario in which the one or more requests are ejected from the queue, the one or more resources associated with the one or more requests are released, and the load on the server 104 pertaining to the one or more requests is increasing; then the updating unit 220 updates the queue with the one or more requests so that the load of the one or more requests is held until the server 104 serves the one or more requests. Thereafter, the redistributing unit 222 dynamically redistributes the released one or more resources among the active one or more requests in the queue which are not released.
[0093] In one embodiment, the performance reports are generated by the generating unit 224 based on the ejection-related activities. For example, the performance reports include details pertaining to at least one of, but not limited to, the status of the one or more requests, the released one or more resources, and the active one or more requests. In particular, the generated performance reports are provided to the user based on a request received from the user.
[0094] In yet another aspect of the present invention, a non-transitory computer-readable medium is disclosed, having stored thereon computer-readable instructions that, when executed by a processor 202, configure the processor 202 as follows. The processor 202 is configured to receive, at a server 104, a plurality of requests from a user via one or more communication channels. The processor 202 is further configured to create an entry in a queue for each communication channel used for receiving a request from the plurality of requests at the server 104 from the user. The processor 202 is further configured to compute an ejection time index for each request based on a time of receiving the request, a preconfigured ejection threshold time and a preconfigured ejection job interval. The processor 202 is further configured to store, at a storage unit 206, a reference pertaining to each request in the queue, wherein the queue includes one or more other requests having an ejection time index similar to the ejection time index of the request. The processor 202 is further configured to process the one or more requests as per the ejection time index.
[0095] A person of ordinary skill in the art will readily ascertain that the illustrated embodiments and steps in description and drawings (FIG.1-6) are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0096] The present disclosure provides technical advancements such as preventing cluttering of the plurality of requests in memory, which may otherwise lead to critical failures in the application. The received plurality of requests is arranged in a queue, leading to efficient processing of the plurality of requests and efficient management of the system's storage unit by sorting out already completed or timed-out requests and ejecting them from the storage unit. The present disclosure further identifies failed or timed-out requests and their related resources, and releases the failed or timed-out requests and the related resources.
[0097] The present invention offers multiple advantages over the prior art and the above listed are a few examples to emphasize on some of the advantageous features. The listed advantages are to be read in a non-limiting manner.
REFERENCE NUMERALS
[0098] Environment - 100;
[0099] User Equipment (UE) - 102;
[00100] Server - 104;
[00101] Network- 106;
[00102] System -108;
[00103] Processor - 202;
[00104] Memory - 204;
[00105] Storage unit – 206;
[00106] Transceiver– 208;
[00107] Creating unit – 210;
[00108] Computing unit – 212;
[00109] Processing unit – 214;
[00110] Alert raising unit – 216;
[00111] Inferring unit – 218;
[00112] Updating unit – 220;
[00113] Redistributing unit – 222;
[00114] Generating unit – 224;
[00115] Primary Processor – 302;
[00116] Memory – 304;
[00117] User Interface (UI) – 306;
[00118] Virtual Machine – 402;
[00119] Application – 404;
[00120] Protocol Stack – 406;
[00121] Creation module – 406a;
[00122] Ejection module – 406b;
[00123] Audit module – 406c;
[00124] Network Layer – 408.
CLAIMS
We Claim:
1. A method (600) for managing processing of requests in a network (106) environment (100), the method (600) comprising the steps of:
receiving, by one or more processors (202), at a server (104), a plurality of requests from a user via one or more communication channels;
creating, by one or more processors (202), an entry in a queue for each communication channel used for receiving a request from the plurality of requests at the server (104) from the user;
computing, by the one or more processors (202), an ejection time index for each request based on a time of receiving the request, a preconfigured ejection threshold time and a preconfigured ejection job interval;
storing, by the one or more processors (202), at a storage unit (206), a reference pertaining to each of the request in the queue, wherein the queue includes one or more other requests having the ejection time index similar to the ejection time index of the request; and
processing, by the one or more processors (202), the one or more requests as per the ejection time index.
2. The method (600) as claimed in claim 1, wherein the plurality of requests pertain to availing one or more services from the server (104).
3. The method (600) as claimed in claim 1, wherein for each communication channel, a separate queue is created by the one or more processors (202).
4. The method (600) as claimed in claim 1, wherein the step of processing the one or more requests as per the ejection time index includes the steps of:
if the status of the one or more requests is timed out, triggering, by the one or more processors (202), an ejection job to eject the one or more requests from the queue and releasing associated one or more resources; and
if the status of the one or more requests is success or failure, ejecting, by the one or more processors (202), the one or more requests from the queue and releasing the associated one or more resources.
5. The method (600) as claimed in claim 4, wherein when the status of the one or more requests is timed out, the method (600) further comprises the step of:
raising, by the one or more processors (202), an alert for the one or more requests being timed out, to the user.
6. The method (600) as claimed in claim 4, wherein the status of the one or more requests is inferred by the one or more processors (202) based on a type of the response received from the server (104) pertaining to the one or more requests.
7. The method (600) as claimed in claim 1, the method (600) further comprising a step of updating, by the one or more processors (202), the queue dynamically based on changing network conditions or server load.
8. The method (600) as claimed in claim 1, wherein the method (600) further comprises the step of:
dynamically redistributing, by the one or more processors (202), one or more resources among active requests in the queue which are not released.
9. The method (600) as claimed in claim 1, wherein the method (600) further comprises the step of:
generating, by the one or more processors (202), performance reports based on ejection related activities.
10. A system (108) for managing processing of requests in a network (106) environment (100), the system (108) comprising:
a transceiver (208), configured to, receive, at a server (104), a plurality of requests from a user via one or more communication channels;
a creating unit (210), configured to, create, an entry in a queue for each communication channel used for receiving a request from the plurality of requests at the server (104) from the user;
a computing unit (212), configured to, compute, an ejection time index for each request based on a time of receiving the request, a preconfigured ejection threshold time and a preconfigured ejection job interval;
a storing unit (206), configured to, store, a reference pertaining to each of the request in the queue, wherein the queue includes one or more other requests having the ejection time index similar to the ejection time index of the request; and
a processing unit (214), configured to, process, the one or more requests based on the ejection time index.
11. The system (108) as claimed in claim 10, wherein the plurality of requests pertain to availing one or more services from the server (104).
12. The system (108) as claimed in claim 10, wherein for each communication channel, a separate queue is created by the creating unit (210).
13. The system (108) as claimed in claim 10, wherein the processing unit (214) processes the one or more requests as per the ejection time index by:
triggering, an ejection job to eject the one or more requests from the queue and releasing associated one or more resources, if status of the one or more requests is timed out; and
ejecting, the one or more requests from the queue and releasing the associated one or more resources, if status of the one or more requests is success or failure.
14. The system (108) as claimed in claim 13, wherein when the status of the one or more requests is timed out, an alert raising unit (216) configured to:
raise, an alert for the one or more requests being timed out to the user.
15. The system (108) as claimed in claim 13, wherein the status of the one or more requests is inferred by an inferring unit (218) based on a type of response received from the server (104) pertaining to the one or more requests.
16. The system (108) as claimed in claim 10, wherein an updating unit (220) is configured to, update, the queue dynamically based on changing network conditions or server (104) load.
17. The system (108) as claimed in claim 10, wherein a redistributing unit (222) is configured to, dynamically redistribute, one or more resources among active requests in the queue which are not released.
18. The system (108) as claimed in claim 10, wherein a generating unit (224) is configured to, generate, performance reports based on ejection related activities.