
System And Method For Managing Devices And Preventing Network Congestion

Abstract: The present disclosure provides a system and method for managing devices and preventing network congestion. The system comprises an extended backoff indicator flag with an extended backoff time that facilitates computing devices to wait for a predetermined period. The computing devices are configured to wait, before pushing data again, for the time duration mentioned in a response sent by the system. The inclusion of an extended wait time flag, along with the time mentioned in the response packet sent by the server/load balancer, allows the system to reduce load over a network and increase the efficiency of the system.


Patent Information

Application #
Filing Date
27 May 2022
Publication Number
48/2023
Publication Type
INA
Invention Field
COMMUNICATION
Status
Parent Application

Applicants

JIO PLATFORMS LIMITED
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.

Inventors

1. KUMAR, Ranjit Ma
Flat No KV-105, Kanha Vertical Gopal Vihar, Jabalpur – 482002, Madhya Pradesh, India.
2. MARUF, Kazim Hanif
202, Plot 104, Krishna Sarang Galaxy, Sec 18, Ulwe, Navi Mumbai, 410206, Maharashtra, India.
3. GOYAL, Shubham
Pachbigha Road, Pansaari Gali, Joura, Morena - 476221, Madhya Pradesh, India.

Specification

RESERVATION OF RIGHTS
[0001] A portion of the disclosure of this patent document contains material which is subject to intellectual property rights such as, but not limited to, copyright, design, trademark, integrated circuit (IC) layout design, and/or trade dress protection, belonging to Jio Platforms Limited (JPL) or its affiliates (hereinafter referred to as the owner). The owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all rights whatsoever. All rights to such intellectual property are fully reserved by the owner.

FIELD OF INVENTION
[0002] The present disclosure generally relates to systems and methods for Internet of Things (IoT) use cases planned for deployment in a low-power wide area network (LPWAN). Further, the present disclosure relates to a bandwidth-constrained network, where a large volume of end points try to simultaneously report data to an application server using connection-oriented protocols and retries. More particularly, the present disclosure relates to a system and a method for managing devices and preventing network congestion caused by inefficient handling of incoming requests by the application servers.

BACKGROUND OF INVENTION
[0003] The following description of the related art is intended to provide background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section is used only to enhance the understanding of the reader with respect to the present disclosure, and not as admissions of the prior art.
[0004] Cost efficiency and reliability are the two key aspects that every Internet of Things (IoT) solution architect and business owner desires. Maintaining reliability of data while reducing cost is the most challenging part of every IoT solution. To achieve this, utilities, device original equipment manufacturers (OEMs), application server providers, telecom service providers, and server/cloud infrastructure providers including firewalls and load balancers follow best practices to make the solution work flawlessly and reliably, while saving cost across all the verticals. The adoption of low-power wide area network (LPWAN) technology and connection-oriented protocols, with an implementation of load balancers, enables reliable networks. The adoption of narrow band (NB)-IoT technology has been extremely fast because of the benefits it offers. NB-IoT is an LPWAN technology based on the existing fourth generation (4G)/long-term evolution (LTE) communication network. NB-IoT is standardized by the global third generation partnership project (3GPP) standards organization. There are various use cases that are best suited for the benefits that LPWAN networks like NB-IoT and category-M (CAT-M) technology offer, including advanced metering infrastructure (smart electric and water meters), smart street lighting, connected coolers, asset monitoring, etc. However, there are a few undiscovered territories that bring in a lot of challenges in implementing IoT solutions on LPWAN networks.
[0005] In use cases like smart metering, where the volume of the deployment is in a few millions, cloud/server infrastructure vendors and application server vendors design their infrastructure sizing considering the number of requests the application server would handle in a 24-hour window. The control is mostly given to the application server, which would use staggered schedulers that poll the devices in a 24-hour window and also use layer 4 and round-robin load balancers to balance the incoming load on the servers. However, multiple problems creep into a scaled-up solution due to scenarios which are part of the use case but cause significant overshooting of the requests to an unexpected extent. Problems arise from congestion-related performance degradation due to a sudden surge in demand for resources by the IoT end points (devices) in scenarios like reporting of alarms and events (specifically power outage or power restoration). Events like a power outage or the restoration of an entire grid/village/town are frequent and common in developing economies, where multiple devices try to establish a connection with the application server simultaneously to push the alarms.
[0006] Most of the devices are implemented with connection-oriented protocols like the transmission control protocol (TCP) to maintain reliability of data. The TCP layer and the application sitting on the devices are configured with multiple retries to ensure data availability in case of failures. These layers use the retries to report the event/alarm to the server. In turn, the whole infrastructure, including a Telecom Service Provider (TSP) network and a cloud/server infrastructure including load balancers, has to bear the instant load created by the surging requests coming in from the reporting end points (devices). The server sizing is usually not designed to support concurrent connections at this scale as it involves very high infrastructure cost. In such cases, most of the intermediate entities like load balancers use the TCP reset (RST) flag as a congestion control mechanism and reject connection requests by sending TCP-RST when the listening queue is full or has reached the maximum allowed concurrent connections. Several popular TCP implementations on the end points (devices) immediately resend a synchronizing (SYN) packet in response to an RST flag. This overloads the semantics of the reset message and inevitably leads to more aggressive behaviour from the TCP implementations in response to a reset. These retries from multiple devices generate a huge surge in the demand for network resources, causing severe radio-level congestion and further degrading the performance of the server. Real-life situations have been observed where retries in quick succession have created a 'radio storm', causing an increased received signal strength indicator (RSSI) at a base station. This phenomenon takes significant time to recover from, and this limitation defeats the whole purpose of considering an LPWAN like NB-IoT for implementing IoT solutions like a smart meter or a smart street light.
[0007] There is, therefore, a need in the art to provide a system and a method that can mitigate the problems associated with the prior art.


OBJECTS OF THE INVENTION
[0008] Some of the objects of the present disclosure, which at least one embodiment herein satisfies are listed herein below.
[0009] It is an object of the present disclosure to provide a system and a method that addresses severe performance issues like congestion caused by a sudden surge of requests coming from a reporting end in a low-power wide area network (LPWAN) and bandwidth-constrained networks like narrow band-Internet of Things (NB-IoT).
[0010] It is an object of the present disclosure to provide a system and a method that reduces performance issues by facilitating devices to wait for a configurable time period to report data to an application server.
[0011] It is an object of the present disclosure to provide a system and a method for IoT use cases planned for deployment in LPWAN and bandwidth-constrained networks like NB-IoT where a large volume of end points try to simultaneously report data to the application server.
[0012] It is an object of the present disclosure to provide a system and a method that uses an extended backoff indicator flag and an extended backoff time in a response sent by the application server to facilitate the devices to wait for a predetermined period of time.
[0013] It is an object of the present disclosure to provide a system and a method that addresses a server's limitation in handling a barrage of requests from multiple entities within the solution architecture of an IoT solution.

SUMMARY
[0014] This section is provided to introduce certain objects and aspects of the present disclosure in a simplified form that are further described below in the detailed description. This summary is not intended to identify the key features or the scope of the claimed subject matter.
[0015] In an aspect, the present disclosure relates to a system for managing one or more communication requests. The system may include a processor and a memory operatively coupled to the processor that stores instructions to be executed by the processor. The processor may receive a connection request from one or more users via one or more computing devices. The one or more users operate the one or more computing devices and are connected to the processor via a network. The connection request may be based on a simultaneous data transmission by the one or more computing devices. The processor may determine if the connection request may be allowed based on a predetermined condition. The processor may in response to a negative determination, facilitate the one or more computing devices to wait for a configurable time prior to a re-transmission of the connection request. The processor may in response to a positive determination, establish a connection with the one or more computing devices and allow the simultaneous data transmission from the one or more computing devices based on the established connection.
[0016] In an embodiment, the predetermined condition may be based on at least one of a listening queue length, a socket connection, and a number of connection requests.
[0017] In an embodiment, the processor may determine if a summation of the listening queue length and the socket connection is less than the number of connection requests.
[0018] In an embodiment, the processor may facilitate the one or more computing devices to wait for the configurable time via a backoff indicator flag comprising a time counter and an indicator flag.
[0019] In an embodiment, the processor may facilitate the one or more computing devices to wait while the indicator flag is in a high state and until an expiry of the time counter.
[0020] In an embodiment, the processor may establish the connection with the one or more computing devices by establishing a transmission control protocol (TCP) connection.
[0021] In an aspect, the present disclosure relates to a method for managing one or more communication requests. The method may include receiving, by a processor associated with a system, a connection request from one or more users. The connection request may be based on a simultaneous data transmission by one or more computing devices. The method may include determining, by the processor, if the connection request may be allowed based on a predetermined condition. The method may include, in response to a negative determination, facilitating, by the processor, the one or more computing devices to wait for a configurable time prior to a re-transmission of the connection request. The method may include, in response to a positive determination, establishing, by the processor, a connection with the one or more computing devices, and allowing, by the processor, the simultaneous data transmission from the one or more computing devices based on the established connection.
[0022] In an embodiment, the predetermined condition may be based on at least one of a listening queue length, a socket connection, and a number of connection requests.
[0023] In an embodiment, the method may include determining, by the processor, if a summation of the listening queue length and the socket connection is less than the number of connection requests.
[0024] In an embodiment, the method may include facilitating, by the processor, the one or more computing devices to wait for the configurable time via a backoff indicator flag comprising a time counter and an indicator flag.
[0025] In an embodiment, the method may include establishing, by the processor, the connection with the one or more computing devices by establishing a TCP connection.
[0026] In an aspect, a user equipment (UE) for managing one or more communication requests may include one or more processors communicatively coupled to a processor in a system. The one or more processors may be coupled with a memory. The memory may store instructions to be executed by the one or more processors that may cause the one or more processors to transmit a connection request to the processor via a network. The connection request may be based on a simultaneous data transmission by the UE and one or more UEs in the network. The processor may be configured to receive the connection request from the UE. The processor may determine if the connection request is allowed based on a predetermined condition. The processor may, in response to a negative determination, facilitate the UE and the one or more UEs to wait for a configurable time prior to a re-transmission of the connection request. The processor may, in response to a positive determination, establish a connection with the UE and the one or more UEs and allow the simultaneous data transmission from the UE and the one or more UEs based on the established connection.

BRIEF DESCRIPTION OF DRAWINGS
[0027] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems, in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes the disclosure of electrical components, electronic components, or circuitry commonly used to implement such components.
[0028] FIG. 1 illustrates an exemplary network architecture (100) for implementing a proposed system (108), in accordance with an embodiment of the present disclosure.
[0029] FIG. 2 illustrates an exemplary block diagram (200) of the proposed system (108), in accordance with an embodiment of the present disclosure.
[0030] FIG. 3 illustrates an exemplary smart metering architecture (300) of the system (108), in accordance with an embodiment of the present disclosure.
[0031] FIG. 4 illustrates an exemplary flow diagram (400) of a method for managing devices and reducing congestion, in accordance with an embodiment of the present disclosure.
[0032] FIG. 5 illustrates an exemplary computer system (500) in which or with which the embodiments of the present disclosure may be implemented.
[0033] The foregoing shall be more apparent from the following more detailed description of the disclosure.

DETAILED DESCRIPTION
[0034] In the following description, for the purposes of explanation, various specific details are set forth in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter can each be used independently of one another or with any combination of other features. An individual feature may not address all of the problems discussed above or might address only some of the problems discussed above. Some of the problems discussed above might not be fully addressed by any of the features described herein.
[0035] The ensuing description provides exemplary embodiments only and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the disclosure as set forth.
[0036] Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, networks, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail. In other instances, well-known circuits, processes, algorithms, structures, and techniques may be shown without unnecessary detail to avoid obscuring the embodiments.
[0037] Also, it is noted that individual embodiments may be described as a process that is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed but could have additional steps not included in a figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination can correspond to a return of the function to the calling function or the main function.
[0038] The word “exemplary” and/or “demonstrative” is used herein to mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive in a manner similar to the term “comprising” as an open transition word without precluding any additional or other elements.
[0039] Reference throughout this specification to “one embodiment” or “an embodiment” or “an instance” or “one instance” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
[0040] The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used herein, the singular forms “a”, “an”, and “the” are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items.
[0041] The various embodiments throughout the disclosure will be explained in more detail with reference to FIGs. 1-5.
[0042] FIG. 1 illustrates an exemplary network architecture (100) for implementing a proposed system (108), in accordance with an embodiment of the present disclosure.
[0043] As illustrated in FIG. 1, the network architecture (100) may include a system (108). The system (108) may be connected to one or more computing devices (104-1, 104-2…104-N) via a network (106). The one or more computing devices (104-1, 104-2…104-N) may be interchangeably specified as a user equipment (UE) (104) and be operated by one or more users (102-1, 102-2...102-N). Further, the one or more users (102-1, 102-2…102-N) may be interchangeably referred to as a user (102) or users (102).
[0044] In an embodiment, the computing devices (104) may include, but not be limited to, a mobile, a laptop, etc. Further, the computing devices (104) may include a smartphone, virtual reality (VR) devices, augmented reality (AR) devices, a general-purpose computer, desktop, personal digital assistant, tablet computer, and a mainframe computer. Additionally, input devices for receiving input from the user (102) such as a touch pad, touch-enabled screen, electronic pen, and the like may be used. A person of ordinary skill in the art will appreciate that the computing devices (104) may not be restricted to the mentioned devices and various other devices may be used.
[0045] In an embodiment, the network (106) may include, by way of example but not limitation, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, process, or a combination thereof, etc. one or more messages, packets, signals, waves, voltage or current levels, some combination thereof, or so forth. The network (106) may also include, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof.
[0046] In an embodiment, a high volume of end points or computing devices (104) may try to communicate with the system (108) simultaneously as part of the use case, but in an exaggerated way due to listening queue limitations and the multiple retry mechanisms configured in the devices (104). Further, multiple computing devices (104) under one feeder/substation may face a power outage/restoration, where the computing devices (104) have to report the alarms to the system (108).
[0047] Conventionally, whenever there is an event of power outage/power restoration on multiple deployed meters connected on one or more feeder/substation ports, the meters may report a real-time alarm to the application server. In an example, the system (108) may handle a maximum of 64000 socket connections over a transmission control protocol (TCP) port. However, as per server capacity and considering the resource utilization of the server, it may open up to 51200 concurrent sockets on one TCP port. Therefore, the application server (e.g., a Head End System) may receive a push alarm from 51200 devices simultaneously. To process this many requests, the application server may need to have the hardware capability (e.g., random access memory (RAM)/memory) and the processing power to process the simultaneous requests arriving at the listening queue.
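The capacity figures above follow from a short calculation. Note that the 80% utilization factor below is inferred from the stated numbers (51200 out of 64000) rather than given explicitly in the disclosure, so it should be treated as an assumption:

```python
# Worked example (assumption: the utilization factor is inferred from the
# figures in the text, not stated explicitly in the disclosure).
MAX_SOCKETS_PER_PORT = 64000   # maximum socket connections over one TCP port
UTILIZATION = 0.8              # assumed usable fraction given server resources

# Concurrent sockets the server may open on one TCP port.
concurrent_sockets = int(MAX_SOCKETS_PER_PORT * UTILIZATION)
print(concurrent_sockets)  # 51200
```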
[0048] In an embodiment, the system (108) may maintain a listening queue, where the incoming requests may be queued up and accepted depending on the availability of a TCP socket. For the requests that may not be accepted due to the ongoing requests and their processing time, a centralized server or a load balancer (not shown) may be implemented to send a reset-acknowledgment [RST, ACK] to the computing devices (104), which may indirectly reject the incoming request. However, in this case, the computing devices (104) may not follow a retransmission timeout (RTO) to back off and may retry sending the TCP-synchronizing (TCP-SYN) packet after a retransmit timer configured in the computing device (104). This phenomenon may lead to an instant and significant surge in the incoming requests from the computing devices (104), causing severe congestion issues.
[0049] In an embodiment, the system (108) may incorporate a resizing of the infrastructure, which may increase the listening queue length and handle the incoming requests. However, cost may be a major concern for various stakeholders. Hence, a change in the handling of the incoming request at the TCP layer may address the cost issue without resizing the infrastructure. Therefore, a timer in the [RST, ACK] response from the system (108) may facilitate the computing devices (104) to wait before they retry sending the [TCP-SYN] request again for reporting the alarm. This may significantly reduce the load on the Telecom Service Provider (TSP) network and the application server by letting the centralized server or the load balancer accept the allowed number of requests. Further, the centralized server or the load balancer may process requests from the computing devices (104) which have exceeded the maximum listening queue length, after the computing devices (104) wait for the time configured in the TCP-RST response.
[0050] In accordance with embodiments of the present disclosure, the system (108) may receive a connection request from one or more users (102). The connection request may be based on a simultaneous data transmission by the one or more computing devices (104). The system (108) may determine if the connection request may be allowed based on a predetermined condition. The predetermined condition may be based on, but not limited to, a listening queue length, a socket connection, and a number of connection requests.
[0051] In an embodiment, the system (108) may determine if a summation of the listening queue length and the socket connection is less than the number of connection requests.
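This summation check can be sketched as a small admission function; the function and parameter names are illustrative, not taken from the disclosure:

```python
def connection_allowed(listening_queue_len: int,
                       socket_connections: int,
                       num_connection_requests: int) -> bool:
    """Sketch of the predetermined condition.

    Negative determination (False) when the summation of the listening
    queue length and the socket connections is less than the number of
    incoming connection requests; the surplus devices would then be asked
    to back off for a configurable time.
    """
    return listening_queue_len + socket_connections >= num_connection_requests
```

For example, with a listening queue of 1000 and 51200 available sockets, 60000 simultaneous requests yield a negative determination and would trigger the backoff response, whereas 50000 requests can be absorbed.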
[0052] In an embodiment, the system (108) may, in response to a negative determination, facilitate the one or more computing devices (104) to wait for a configurable time prior to a re-transmission of the connection request. The system (108) may facilitate the one or more computing devices (104) to wait for the configurable time via a backoff indicator flag comprising a time counter and an indicator flag. Further, the one or more computing devices (104) may be configured to wait while the indicator flag is in a high state and until an expiry of the time counter.
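The device-side wait behaviour might be sketched as follows, assuming the indicator flag and the time counter have already been parsed out of the server's response (the function name and the polling interval are illustrative):

```python
import time

def wait_for_backoff(indicator_flag: bool, time_counter_s: float) -> float:
    """Hold off re-transmission while the backoff indicator flag is high.

    Returns the time the device was asked to wait. When the flag is low,
    the device may re-transmit immediately; otherwise it waits until the
    time counter received in the server's response expires.
    """
    if not indicator_flag:
        return 0.0
    deadline = time.monotonic() + time_counter_s
    while True:
        remaining = deadline - time.monotonic()
        if remaining <= 0:          # expiry of the time counter
            break
        time.sleep(min(0.05, remaining))
    return time_counter_s

# Simulated device honouring a 0.2 s extended backoff before retrying.
start = time.monotonic()
waited = wait_for_backoff(indicator_flag=True, time_counter_s=0.2)
elapsed = time.monotonic() - start
```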
[0053] In an embodiment, the system (108) may include an extended wait time flag along with a time mentioned in an integer format. The extended wait time flag with the time may be included in the TCP-RST response packet sent by the centralized server/load balancer. This may allow the system (108) to reduce load over a TSP network as well as the server and increase the efficiency of the centralized server.
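One way the extended wait time flag and the integer time might be laid out in a response payload is sketched below. The one-byte flag and 16-bit big-endian seconds field are assumptions, since the disclosure specifies only a flag along with "a time mentioned in an integer format":

```python
import struct

# Illustrative layout: 1-byte extended-backoff indicator flag followed by a
# 16-bit unsigned extended backoff time in seconds (network byte order).
# The exact field sizes are assumptions, not taken from the disclosure.
BACKOFF_FIELDS = struct.Struct("!BH")

def pack_backoff(flag: bool, wait_seconds: int) -> bytes:
    """Serialize the extended backoff fields for a TCP-RST style response."""
    return BACKOFF_FIELDS.pack(1 if flag else 0, wait_seconds)

def unpack_backoff(payload: bytes) -> tuple[bool, int]:
    """Parse the extended backoff fields on the device side."""
    flag, wait_seconds = BACKOFF_FIELDS.unpack(payload)
    return bool(flag), wait_seconds

payload = pack_backoff(True, 300)        # ask devices to wait 300 seconds
flag, wait = unpack_backoff(payload)
```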
[0054] In an embodiment, the system (108) may, in response to a positive determination, establish a connection with the one or more computing devices (104). Further, the system (108) may allow the simultaneous data transmission from the one or more computing devices (104) based on the established connection. In an embodiment, the system (108) may establish a TCP connection and allow the simultaneous data transmission with the one or more computing devices (104).
[0055] Although FIG. 1 shows exemplary components of the network architecture (100), in other embodiments, the network architecture (100) may include fewer components, different components, differently arranged components, or additional functional components than depicted in FIG. 1. Additionally, or alternatively, one or more components of the network architecture (100) may perform functions described as being performed by one or more other components of the network architecture (100).
[0056] FIG. 2 illustrates an exemplary block diagram (200) of the proposed system (108), in accordance with an embodiment of the present disclosure.
[0057] Referring to FIG. 2, the system (108) may comprise one or more processor(s) (202) that may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, logic circuitries, and/or any devices that process data based on operational instructions. Among other capabilities, the one or more processor(s) (202) may be configured to fetch and execute computer-readable instructions stored in a memory (204) of the system (108). The memory (204) may be configured to store one or more computer-readable instructions or routines in a non-transitory computer readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory (204) may comprise any non-transitory storage device including, for example, volatile memory such as random-access memory (RAM), or non-volatile memory such as erasable programmable read only memory (EPROM), flash memory, and the like.
[0058] In an embodiment, the system (108) may include an interface(s) (206). The interface(s) (206) may comprise a variety of interfaces, for example, interfaces for data input and output (I/O) devices, storage devices, and the like. The interface(s) (206) may also provide a communication pathway for one or more components of the system (108). Examples of such components include, but are not limited to, processing engine(s) (208) and a database (210), where the processing engine(s) (208) may include, but not be limited to, a data parameter engine (212) and a data analyzing engine (214).
[0059] In an embodiment, the processing engine(s) (208) may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processing engine(s) (208). In examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processing engine(s) (208) may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processing engine(s) (208) may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the machine-readable storage medium may store instructions that, when executed by the processing resource, implement the processing engine(s) (208). In such examples, the system (108) may comprise the machine-readable storage medium storing the instructions and the processing resource to execute the instructions, or the machine-readable storage medium may be separate but accessible to the system (108) and the processing resource. In other examples, the processing engine(s) (208) may be implemented by electronic circuitry.
[0060] In an embodiment, the processor (202) may receive a connection request from one or more users (e.g., 102) via the data parameter engine (212). The processor (202) may store the connection request in the database (210). The connection request may be based on a simultaneous data transmission by one or more computing devices (e.g., 104). The processor (202) may determine if the connection request may be allowed based on a predetermined condition. The predetermined condition may be based on, but not limited to, a listening queue length, a socket connection, and a number of connection requests.
[0061] In an embodiment, the processor (202) may determine if a summation of the listening queue length and the socket connection is less than the number of connection requests using the data analyzing engine (214). Further, the processor (202) may determine if the connection request may be allowed.
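The capacity comparison above may be sketched as follows. This is a minimal, illustrative reading (the function names are hypothetical, not from the disclosure): a request backlog is treated as within capacity while it does not exceed the sum of the listening queue length and the available socket connections, consistent with the n-(a+b) excess described later in the disclosure.

```python
# Hypothetical sketch of the predetermined condition: compare the number of
# pending connection requests 'n' against the listening queue length 'a' plus
# the available socket connections 'b'. Names are illustrative only.

def is_within_capacity(listen_queue_len: int,
                       socket_connections: int,
                       num_requests: int) -> bool:
    """True when the combined capacity (a + b) can absorb all n requests."""
    return num_requests <= listen_queue_len + socket_connections

def devices_to_back_off(listen_queue_len: int,
                        socket_connections: int,
                        num_requests: int) -> int:
    """Number of excess devices, n - (a + b), that would be told to wait."""
    return max(0, num_requests - (listen_queue_len + socket_connections))
```

Under this reading, the excess devices returned by `devices_to_back_off` are the ones that would receive the backoff indication rather than an established connection.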
[0062] In an embodiment, the processor (202) may, in response to a negative determination, facilitate the one or more computing devices (104) to wait for a configurable time prior to a re-transmission of the connection request. The processor (202) may facilitate the one or more computing devices (104) to wait for the configurable time via a backoff indicator flag comprising a time counter and an indicator flag. Further, the one or more computing devices (104) may be configured to wait while the indicator flag is in a high state and until an expiry of the time counter.
[0063] In an embodiment, the processor (202) may, in response to a positive determination, establish a connection with the one or more computing devices (104). Further, the processor (202) may allow the simultaneous data transmission from the one or more computing devices (104) based on the established connection. In an embodiment, the processor (202) may establish a TCP connection and allow the simultaneous data transmission with the one or more computing devices (104).
[0064] Although FIG. 2 shows exemplary components of the system (108), in other embodiments, the system (108) may include fewer components, different components, differently arranged components, or additional functional components than depicted in FIG. 2. Additionally, or alternatively, one or more components of the system (108) may perform functions described as being performed by one or more other components of the system (108).
[0065] FIG. 3 illustrates an exemplary smart metering architecture (300) of the system (108), in accordance with an embodiment of the present disclosure.
[0066] As illustrated in FIG. 3, in an embodiment, in case of an event like a power outage or power restoration, ‘n’ number of devices may simultaneously try to connect to the same server internet protocol (IP) address/uniform resource locator (URL)/port configured in a client. A centralized server/load balancer may have an upper limit of ‘a’ on the listening queue length. Further, the centralized server/load balancer may have an upper limit of ‘b’ on the concurrent sessions allowed, depending on the sizing of the infrastructure.
[0067] Further, in an embodiment, when TCP-SYN requests from ‘n’ number of devices fall on the centralized server/load balancer simultaneously, the centralized server may first fill up the listening queue of ‘a’ number of connections. The centralized server may monitor the availability of the TCP sockets and, depending on their availability, may accept the TCP connection by sending a TCP-SYN ACK to the computing devices (e.g., 104). Upon receiving a TCP-ACK from the computing devices (104), the centralized server may move the connection state from received to accepted, and further, move the connection state to an established TCP socket connection. Once the TCP connection is established, the application server may consume and process the data.
[0068] In an embodiment, the centralized server/load balancer may make way for new entries in the listening queue depending upon how quickly the existing socket connections are released. The centralized server/load balancer may send a [RST, ACK] in response to the [SYN] request for the n-(a+b) computing devices (104). The centralized server may include an ‘extended wait timer (t)’ flag with its value set to high and include an integer value of the timer in the [RST, ACK] packet. The packet may be consumed by the computing device (104) attempting to connect to the centralized server, which may be forced to wait for the included timer before retrying/retransmitting the [SYN] request. The extended wait timer integer value may be in the range of 0 to 9, and the resulting wait time may be more than a usual TCP retransmission timeout (RTO), which may be 2 seconds. The extended wait timer may be in multiples of 30 seconds, i.e., if t=0, the wait time may be 30 seconds; if t=1, the wait time may be 60 seconds. The maximum wait time may be 300 seconds (5 minutes).
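The timer encoding described above reduces to a small mapping. A minimal sketch (names are illustrative), assuming, per the description, that the wait time is (t+1) multiples of 30 seconds with t in the range 0 to 9:

```python
# Illustrative encoding of the extended wait timer carried in the [RST, ACK]
# packet: t=0 -> 30 s, t=1 -> 60 s, ..., t=9 -> 300 s (the stated maximum).

MAX_TIMER_VALUE = 9   # per the disclosure, t ranges from 0 to 9
STEP_SECONDS = 30     # wait time is expressed in multiples of 30 seconds

def wait_seconds(t: int) -> int:
    """Map the extended wait timer value t to seconds of mandatory backoff."""
    if not 0 <= t <= MAX_TIMER_VALUE:
        raise ValueError("extended wait timer must be in the range 0-9")
    return (t + 1) * STEP_SECONDS
```

Keeping t to a single decimal digit keeps the flag compact in the response packet while still spanning backoff intervals from 30 seconds up to 5 minutes.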
[0069] In an embodiment, the centralized server/load balancer may increment the timer values depending upon the number of requests that fall simultaneously. The computing devices (104) may consume the wait timer (t) from the [RST, ACK] packet received from the centralized server. The computing device (104) may wait for that amount of time before sending another [SYN] request/retransmission for the same push packet.
[0070] As illustrated in FIG. 3, the solution architecture (302) may include smart meters (344) connected to a control plane IP data bus (316). A home subscriber server (HSS) (318) may be connected to a mobility management entity (MME) (322). A base station/eNodeB (320) may receive inputs from the smart meters (344) and further process the inputs through the MME (322), a serving gateway (SGW) (324), and a packet data network gateway (PGW) (326). Further, the solution architecture (302) may include a head-end system (HES) (310) including a load balancer (312), a firewall (314), an HES user interface (UI) (304), a server 1 (306), and a server 2 (308). The servers (server 1 (306), server 2 (308)) may be connected to the load balancer (312) and the firewall (314). The firewall (314) may be further connected to the PGW (326).
[0071] Further, the solution architecture (302) may include a customer data centre (328). The customer data centre (328) may further include a meter data management system (MDMS) (330) and a billing software engine (332). The customer data centre (328) may be further connected to the HES (310) as well as to a partner HES (338). The partner HES (338) may further include servers (server 1 (334), server 2 (336)), a load balancer (342), and a firewall (340).
[0072] FIG. 4 illustrates an exemplary flow diagram (400) of a method for managing devices and reducing congestion, in accordance with an embodiment of the present disclosure.
[0073] As illustrated in FIG. 4, the following steps may be performed by the system (108) for managing devices and reducing congestion.
[0074] At step 402: The system (108) may start or initialize.
[0075] At step 404: Power outage/restoration may occur on “n” number of computing devices (e.g., 104)/meters (e.g., 344) simultaneously.
[0076] At step 406: All the “n” number of computing devices (104)/meters (344) may attempt to connect to the same server uniform resource locator (URL)/IP by sending a TCP [SYN] request.
[0077] At step 408: A centralized server may monitor a listening queue length “a” and socket connections “b.”
[0078] At step 410: The system (108) may determine if a summation of the listening queue length “a” and the socket connection “b” is less than the number of computing devices (104)/meters (344).
[0079] At step 412: Based on a negative determination from step 410, the centralized server may add an extended wait timer (t) flag and its integer value in the [RST, ACK].
[0080] At step 414: The computing device (104) may consume the extended wait timer (t) in [RST, ACK] received from the centralized server.
[0081] At step 416: The computing device (104) may send another [SYN] request/retransmission for the same push packet. Further, the system (108) may continue with step 410.
[0082] At step 418: Based on a positive determination from step 410 and a positive connection status, the centralized server may respond by sending a [SYN, ACK] packet in response to the TCP [SYN] packet from the computing device (104).
[0083] At step 420: Based on an accepted server connection status, the computing device (104) may respond by sending an [ACK] packet in response to the TCP [SYN, ACK] packet from the centralized server.
[0084] At step 422: Based on an established server connection status (TCP established status), the computing device (104) may send a [PSH, ACK] packet to the centralized server. The centralized server may send an [ACK] packet in response to the [PSH, ACK] packet received.
[0085] At step 424: The centralized server and the computing device (104) may close the TCP connection.
[0086] At step 426: The system (108) may terminate the process of managing devices and reducing congestion.
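The device-side portion of steps 410 through 422 may be sketched as a retry loop. This is a hypothetical illustration only: `send_syn`, the response dictionary fields, and the injectable `sleep` parameter are assumed stand-ins, not elements of the disclosure.

```python
import time

# Hypothetical device-side sketch: retry the [SYN] request, and when a
# [RST, ACK] carries the extended wait timer flag, wait (t + 1) * 30 seconds
# before retransmitting (per the encoding described in the disclosure).

def connect_with_backoff(send_syn, max_attempts: int = 5, sleep=time.sleep) -> bool:
    for _ in range(max_attempts):
        response = send_syn()                      # one connection attempt
        if response.get("type") == "SYN-ACK":      # server accepted (step 418)
            return True
        if response.get("extended_wait_flag"):     # consume timer (step 414)
            sleep((response["timer"] + 1) * 30)    # mandatory backoff
    return False                                   # capacity never freed up
```

Passing `sleep` as a parameter keeps the loop testable; in a real client the default `time.sleep` would enforce the backoff window before the retransmission of step 416.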
[0087] FIG. 5 illustrates an exemplary computer system (500) in which or with which the embodiments of the present disclosure may be implemented.
[0088] As shown in FIG. 5, the computer system (500) may include an external storage device (510), a bus (520), a main memory (530), a read-only memory (540), a mass storage device (550), a communication port(s) (560), and a processor (570). A person skilled in the art will appreciate that the computer system (500) may include more than one processor and communication ports. The processor (570) may include various modules associated with embodiments of the present disclosure. The communication port(s) (560) may be any of an RS-232 port for use with a modem-based dialup connection, a 10/100 Ethernet port, a Gigabit or 10 Gigabit port using copper or fiber, a serial port, a parallel port, or other existing or future ports. The communication port(s) (560) may be chosen depending on a network, such as a Local Area Network (LAN), a Wide Area Network (WAN), or any network to which the computer system (500) connects.
[0089] In an embodiment, the main memory (530) may be Random Access Memory (RAM), or any other dynamic storage device commonly known in the art. The read-only memory (540) may be any static storage device(s) e.g., but not limited to, a Programmable Read Only Memory (PROM) chip for storing static information e.g., start-up or basic input/output system (BIOS) instructions for the processor (570). The mass storage device (550) may be any current or future mass storage solution, which can be used to store information and/or instructions. Exemplary mass storage solutions include, but are not limited to, Parallel Advanced Technology Attachment (PATA) or Serial Advanced Technology Attachment (SATA) hard disk drives or solid-state drives (internal or external, e.g., having Universal Serial Bus (USB) and/or Firewire interfaces).
[0090] In an embodiment, the bus (520) may communicatively couple the processor(s) (570) with the other memory, storage, and communication blocks. The bus (520) may be, e.g., a Peripheral Component Interconnect (PCI)/PCI Extended (PCI-X) bus, a Small Computer System Interface (SCSI), a universal serial bus (USB), or the like, for connecting expansion cards, drives, and other subsystems, as well as other buses, such as a front side bus (FSB), which connects the processor (570) to the computer system (500).
[0091] In another embodiment, operator and administrative interfaces, e.g., a display, keyboard, and cursor control device may also be coupled to the bus (520) to support direct operator interaction with the computer system (500). Other operator and administrative interfaces can be provided through network connections connected through the communication port(s) (560). Components described above are meant only to exemplify various possibilities. In no way should the aforementioned exemplary computer system (500) limit the scope of the present disclosure.
[0092] While considerable emphasis has been placed herein on the preferred embodiments, it will be appreciated that many embodiments can be made and that many changes can be made in the preferred embodiments without departing from the principles of the disclosure. These and other changes in the preferred embodiments of the disclosure will be apparent to those skilled in the art from the disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be implemented merely as illustrative of the disclosure and not as a limitation.


ADVANTAGES OF THE INVENTION
[0093] The present disclosure provides a system and a method that addresses severe performance issues like congestion caused by a sudden surge of requests coming from a reporting end in a lower-power wide area network (LPWAN) and bandwidth constraint networks like a narrow band-Internet of Things (NB-IoT).
[0094] The present disclosure provides a system and a method that reduces performance issues by facilitating devices to wait for a configurable time period to report data to an application server.
[0095] The present disclosure provides a system and a method for IoT use cases planned for deployment in LPWAN and bandwidth constraint networks like NB-IoT where a large volume of end points try to simultaneously report data to the application server.
[0096] The present disclosure provides a system and a method that uses an extended backoff indicator flag and an extended backoff time in a response sent by the application server to facilitate the devices to wait for a predetermined period of time.
[0097] The present disclosure provides a system and a method that addresses a server’s limitation of handling bombarding requests from multiple entities within the solution architecture of an IoT solution.
[0098] The present disclosure provides a system and a method that significantly reduces network congestion and improves the performance of the system.
CLAIMS:
1. A system (108) for managing one or more communication requests, the system (108) comprising:
a processor (202); and
a memory (204) operatively coupled with the processor (202), wherein said memory (204) stores instructions, which when executed by the processor (202), cause the processor (202) to:
receive a connection request from one or more users (102) via one or more computing devices (104), wherein the one or more users (102) operate the one or more computing devices (104) and are connected to the processor (202) via a network (106), and wherein the connection request is based on a simultaneous data transmission by the one or more computing devices (104);
determine if the connection request is allowed based on a predetermined condition;
in response to a negative determination, facilitate the one or more computing devices (104) to wait for a configurable time prior to a re-transmission of the connection request; and
in response to a positive determination, establish a connection with the one or more computing devices (104), and allow the simultaneous data transmission from the one or more computing devices (104) based on the established connection.

2. The system (108) as claimed in claim 1, wherein the predetermined condition is based on at least one of: a listening queue length, a socket connection, and a number of connection requests.

3. The system (108) as claimed in claim 2, wherein the processor (202) is to determine if a summation of the listening queue length and the socket connection is less than the number of connection requests.
4. The system (108) as claimed in claim 1, wherein the processor (202) is to facilitate the one or more computing devices (104) to wait for the configurable time via a backoff indicator flag comprising a time counter and an indicator flag.

5. The system (108) as claimed in claim 4, wherein the processor (202) is to facilitate the one or more computing devices (104) to wait while the indicator flag is in a high state and until an expiry of the time counter.

6. The system (108) as claimed in claim 1, wherein the processor (202) is to establish the connection with the one or more computing devices (104) by establishing a transmission control protocol (TCP) connection.

7. A method for managing one or more communication requests, the method comprising:
receiving, by a processor (202) associated with a system (108), a connection request from one or more users (102), wherein the connection request is based on a simultaneous data transmission by one or more computing devices (104);
determining, by the processor (202), if the connection request is allowed based on a predetermined condition;
in response to a negative determination, facilitating, by the processor (202), the one or more computing devices (104) to wait for a configurable time prior to a re-transmission of the connection request; and
in response to a positive determination, establishing, by the processor (202), a connection with the one or more computing devices (104), and allowing, by the processor (202), the simultaneous data transmission from the one or more computing devices (104) based on the established connection.

8. The method as claimed in claim 7, wherein the predetermined condition is based on at least one of: a listening queue length, a socket connection, and a number of connection requests.

9. The method as claimed in claim 8, comprising determining, by the processor (202), if a summation of the listening queue length and the socket connection is less than the number of connection requests.

10. The method as claimed in claim 7, comprising facilitating, by the processor (202), the one or more computing devices (104) to wait for the configurable time via a backoff indicator flag comprising a time counter and an indicator flag.

11. The method as claimed in claim 7, wherein establishing, by the processor (202), the connection with the one or more computing devices (104) comprises establishing a transmission control protocol (TCP) connection.

12. A user equipment (UE) (104) for sending one or more communication requests, the UE (104) comprising:
one or more processors communicatively coupled to a processor (202) associated with a system (108), wherein the one or more processors are coupled with a memory, and wherein said memory stores instructions, which when executed by the one or more processors, cause the one or more processors to:
transmit a connection request to the processor (202) via a network (106), wherein the connection request is based on a simultaneous data transmission by the UE (104) and one or more other UEs in the network (106);
wherein the processor (202) is configured to:
receive the connection request from the UE (104);
determine if the connection request is allowed based on a predetermined condition;
in response to a negative determination, facilitate the UE (104) and the one or more other UEs to wait for a configurable time prior to a re-transmission of the connection request; and
in response to a positive determination, establish a connection with the UE (104) and the one or more other UEs, and allow the simultaneous data transmission from the UE (104) and the one or more other UEs based on the established connection.
