Abstract: Disclosed is a system and method (900) for executing network tests on an electronic device (110) over a network connection. The method comprises receiving, from a remote server (140), work orders to execute the network tests including a speed test and a web performance test, and scheduling an execution of the speed test and the web performance test based on the work orders. The method further comprises executing the speed test and the web performance test over the network connection, aggregating execution results of the speed test and the web performance test, and transmitting the execution results of each of the speed test and the web performance test to the remote server. FIG. 2
FORM 2
THE PATENTS ACT, 1970 (39 OF 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
SYSTEM AND METHOD FOR REMOTELY EXECUTING NETWORK TESTS OVER USER NETWORK CONNECTIONS
Jio Platforms Limited, an Indian company, having registered address at Office -102, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
The following specification particularly describes the invention and the manner in which it is to be performed.
TECHNICAL FIELD
[0001] The embodiments of the present disclosure generally relate to the field of wireless communications and network performance testing. More particularly, the present disclosure relates to a system and a method for remotely executing network tests on an electronic device over a network connection based on work orders.
BACKGROUND OF THE INVENTION
[0002] The subject matter disclosed in the background section should not be assumed or construed to be prior art merely because of its mention in the background section. Similarly, any problem statement mentioned in the background section or its association with the subject matter of the background section should not be assumed or construed to have been previously recognized in the prior art.
[0003] In the realm of wireless communication networks, with the increased usage of the Internet and the popularity of Over-The-Top (OTT) media content, there is an increase in demand for high-speed internet connectivity. Conventional protocols for evaluating the performance of communication networks face various challenges and limitations when it comes to performing a speed test to assess a Quality of Service (QoS) during peak usage periods. The challenges are more prevalent in regions where users frequently encounter inconsistent speeds in a communication network and a poor QoS during the peak usage periods. The poor QoS results in frustration and impacts an overall experience of users in the communication network.
[0004] Heretofore, conventional methods for evaluating the performance of the communication networks relied on manual or localized speed tests conducted by individual users. The manual speed tests are limited in scope and effectiveness, especially when attempting to assess conditions of the communication network across a broad geographic area. Thus, the conventional methods have not proven to be successful in providing comprehensive insights into root causes of degradation in the performance of the communication networks during the peak usage periods or at specific locations.
[0005] Therefore, to overcome the aforementioned challenges and limitations associated with the conventional methods, there lies a need for a system and a method that is capable of remotely executing network tests on a user network and identifying network performance issues effectively.
SUMMARY
[0006] The following embodiments present a simplified summary to provide a basic understanding of some aspects of the disclosed invention. This summary is not an extensive overview, and it is not intended to identify key/critical elements or to delineate the scope thereof. Its sole purpose is to present some concepts in a simplified form as a prelude to the more detailed description that is presented later.
[0007] According to an embodiment, a method for executing network tests on an electronic device over a network connection is described. The method comprises receiving, by a transceiver module from a remote server, one or more work orders to execute the network tests. The network tests include at least one speed test and at least one web performance test. Further, the method comprises scheduling, by a scheduling engine, an execution of the at least one speed test and the at least one web performance test based on the one or more work orders. Thereafter, the method comprises executing, by an execution engine, the at least one speed test over the network connection by selecting, from a plurality of test servers, a test server nearest to the electronic device based on a geographical location of the electronic device, transmitting and receiving test packets to and from the selected test server, and measuring one or more parameters associated with the at least one speed test based on a sample size of the test packets and a round trip time for the transmission and the reception of the test packets between the electronic device and the selected test server. Furthermore, the method comprises executing, by the execution engine, the at least one web performance test over the network connection by loading one or more Uniform Resource Locators (URLs) of a web page, and measuring one or more parameters associated with the at least one web performance test. Subsequently, the method comprises aggregating, by a data aggregation engine, execution results of each of the at least one speed test and the at least one web performance test, and transmitting, by the transceiver module, the execution results of each of the at least one speed test and the at least one web performance test to the remote server.
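By way of a non-limiting illustration, the nearest-server selection recited above may be sketched as follows, here assuming a great-circle (haversine) distance computed from latitude and longitude; the server names, coordinates, and helper functions are hypothetical and do not form part of the claimed method.

```python
# Illustrative sketch: selecting, from a plurality of test servers, the test
# server nearest to the electronic device. Distances are great-circle
# (haversine) distances; all names and coordinates are hypothetical.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    earth_radius_km = 6371.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    d_phi = math.radians(lat2 - lat1)
    d_lambda = math.radians(lon2 - lon1)
    a = (math.sin(d_phi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(d_lambda / 2) ** 2)
    return 2 * earth_radius_km * math.asin(math.sqrt(a))

def select_nearest_server(device_location, test_servers):
    """Return the test server with the smallest distance to the device."""
    lat, lon = device_location
    return min(test_servers,
               key=lambda s: haversine_km(lat, lon, s["lat"], s["lon"]))

servers = [
    {"name": "test-server-1", "lat": 19.07, "lon": 72.88},
    {"name": "test-server-2", "lat": 28.61, "lon": 77.21},
]
# A device located at (23.03, 72.58) selects test-server-1 as the nearest.
print(select_nearest_server((23.03, 72.58), servers)["name"])
```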
[0008] In one or more aspects, the one or more work orders include test configuration parameters, a scheduled start time for executing the at least one speed test and the at least one web performance test, a number of iterations, and a test duration for executing the at least one speed test and the at least one web performance test. The test configuration parameters include a type of the at least one speed test and the sample size of the test packets. The one or more parameters associated with the at least one speed test include one or more of a download speed, an upload speed, a latency, a jitter, or a packet loss.
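A minimal, non-limiting sketch of a work order carrying the fields recited above is shown below; the schema, field names, and values are assumptions for illustration only and do not form part of the specification.

```python
# Illustrative sketch of a work order payload; every field name and value
# here is a hypothetical example of the recited test configuration
# parameters, scheduled start time, number of iterations, and test duration.
import json

work_order = {
    "work_order_id": "WO-0001",                    # hypothetical identifier
    "tests": ["speed_test", "web_performance_test"],
    "test_configuration": {
        "speed_test_type": "tcp_download_upload",  # type of the speed test
        "sample_size_packets": 100,                # sample size of test packets
    },
    "scheduled_start_time": "2024-01-01T02:00:00Z",
    "iterations": 3,                               # number of iterations
    "test_duration_seconds": 60,                   # test duration
}
print(json.dumps(work_order, indent=2))
```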
[0009] In an aspect, the method comprises triggering, by the scheduling engine, the execution of the at least one speed test and the at least one web performance test at the scheduled start time by sending an Application Programming Interface (API) request to an alarm manager of the electronic device. Each of the at least one speed test and the at least one web performance test is executed iteratively based on the number of iterations specified in the one or more work orders.
[0010] In an aspect, the method comprises rescheduling, by the scheduling engine, the execution of the at least one speed test and the at least one web performance test in case the one or more work orders include an instruction to execute the at least one speed test and the at least one web performance test for two or more iterations.
[0011] In one or more aspects, the one or more parameters associated with the at least one web performance test comprise a total load time of loading the one or more URLs of the web page. The execution of the at least one web performance test over the network connection further comprises identifying, by the execution engine from the one or more URLs of the web page, one or more failed URLs that exceed a predefined load time threshold.
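By way of a non-limiting illustration, the web performance test described in this aspect may be sketched as follows; the fetching mechanism and the threshold value are illustrative assumptions.

```python
# Illustrative sketch of the web performance test: each URL of the web page
# is loaded, the total load time is accumulated, and any URL whose load time
# exceeds a predefined threshold (or that cannot be loaded) is reported as a
# failed URL. The threshold of 2.0 seconds is a hypothetical value.
import time
import urllib.request

def run_web_performance_test(urls, load_time_threshold_s=2.0):
    total_load_time_s = 0.0
    failed_urls = []
    for url in urls:
        start = time.monotonic()
        try:
            with urllib.request.urlopen(url, timeout=10) as response:
                response.read()  # fully load the asset
        except Exception:
            failed_urls.append(url)  # unreachable assets count as failed
            continue
        elapsed = time.monotonic() - start
        total_load_time_s += elapsed
        if elapsed > load_time_threshold_s:
            failed_urls.append(url)  # exceeds the predefined load time threshold
    return {"total_load_time_s": total_load_time_s, "failed_urls": failed_urls}

print(run_web_performance_test(["https://example.com/"]))
```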
[0012] In an aspect, the method comprises determining, by the execution engine, a variation in the latency between consecutive test packets based on the round trip time for the transmission and the reception of the consecutive test packets, and calculating, by the execution engine, a ratio of lost test packets to a total number of the test packets transmitted to the test server. The jitter is measured based on the determined variation in the latency between the consecutive test packets, and the packet loss is determined based on the calculated ratio.
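A minimal, non-limiting sketch of the jitter and packet loss computations recited in this aspect is shown below; the round trip time samples are made-up values, and a test packet that is never received back is marked as lost.

```python
# Illustrative sketch: jitter is derived from the variation in latency
# between consecutive received test packets, and packet loss is the ratio of
# lost test packets to the total number of test packets transmitted.
def measure_jitter_and_loss(round_trip_times_ms):
    """round_trip_times_ms: one entry per transmitted packet; None = lost."""
    received = [rtt for rtt in round_trip_times_ms if rtt is not None]
    diffs = [abs(b - a) for a, b in zip(received, received[1:])]
    jitter_ms = sum(diffs) / len(diffs) if diffs else 0.0
    packet_loss = round_trip_times_ms.count(None) / len(round_trip_times_ms)
    return jitter_ms, packet_loss

jitter, loss = measure_jitter_and_loss([20.1, 22.4, None, 21.0, 25.3])
print(f"jitter={jitter:.2f} ms, packet loss={loss:.0%}")  # 2.67 ms, 20%
```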
[0013] In one or more aspects, the at least one speed test and the at least one web performance test are performed in a background of the electronic device, and the electronic device corresponds to one of a User Equipment (UE) or a Set Top Box (STB).
[0014] In an aspect, the geographical location of the electronic device is acquired from a service provider serving the electronic device.
[0015] According to another embodiment, an electronic device for executing network tests over a network connection is described. The electronic device comprises a transceiver module, a scheduling engine, an execution engine, and a data aggregation engine. The transceiver module is configured to receive, from a remote server, one or more work orders to execute the network tests. The network tests include at least one speed test and at least one web performance test. The scheduling engine is configured to schedule an execution of the at least one speed test and the at least one web performance test based on the one or more work orders. The execution engine is configured to execute the at least one speed test and the at least one web performance test over the network connection. For executing the at least one speed test, the execution engine is configured to select, from a plurality of test servers, a test server nearest to the electronic device based on a geographical location of the electronic device, transmit and receive test packets to and from the selected test server, and measure one or more parameters associated with the at least one speed test based on a sample size of the test packets and a round trip time for the transmission and the reception of the test packets between the electronic device and the selected test server. Further, for executing the at least one web performance test, the execution engine is configured to load one or more URLs of a web page, and measure one or more parameters associated with the at least one web performance test. The data aggregation engine is configured to aggregate execution results of each of the at least one speed test and the at least one web performance test, and the transceiver module is further configured to transmit the execution results of each of the at least one speed test and the at least one web performance test to the remote server.
[0016] In an aspect, the scheduling engine is further configured to trigger the execution of the at least one speed test and the at least one web performance test at the scheduled start time by sending an API request to an alarm manager of the electronic device, wherein each of the at least one speed test and the at least one web performance test is executed iteratively based on the number of iterations specified in the one or more work orders.
[0017] In an aspect, the scheduling engine is further configured to reschedule the execution of the at least one speed test and the at least one web performance test in case the one or more work orders include an instruction to execute the at least one speed test and the at least one web performance test for two or more iterations.
[0018] In an aspect, for executing the at least one web performance test over the network connection, the execution engine is further configured to identify, from the one or more URLs of the web page, one or more failed URLs that exceed a predefined load time threshold.
[0019] In an aspect, the execution engine is further configured to determine a variation in the latency between consecutive test packets based on the round trip time for the transmission and the reception of the consecutive test packets, and calculate a ratio of lost test packets to a total number of the test packets transmitted to the test server. The jitter is measured based on the determined variation in the latency between the consecutive test packets, and the packet loss is determined based on the calculated ratio.
[0020] According to another embodiment, a method for creating one or more work orders for execution of network tests over a network connection is described. The method comprises establishing, by a communication unit, a connection with a network management device and one or more electronic devices, and controlling, by a display engine, an application interface of the network management device to display a work order window including a plurality of options for creating at least one work order. Further, the method comprises obtaining, by a reception engine, a set of inputs corresponding to the plurality of options displayed in the work order window. The network tests include at least one speed test and at least one web performance test, and the set of inputs includes information of test configuration parameters, a scheduled start time for executing the at least one speed test and the at least one web performance test, a number of iterations, and a test duration for executing the at least one speed test and the at least one web performance test. Thereafter, the method comprises creating, by a work order management engine, the at least one work order related to the at least one speed test and the at least one web performance test based on the obtained set of inputs, and storing, by the work order management engine, the at least one work order in a database. Furthermore, the method comprises retrieving, by the work order management engine, the at least one work order from the database at the scheduled start time, transmitting, by a transmitting engine over the network connection, the retrieved at least one work order to the one or more electronic devices to execute the at least one speed test and the at least one web performance test, and receiving, by the reception engine, data including execution results of the at least one speed test and the at least one web performance test from the one or more electronic devices. Subsequently, the method comprises generating, by a report generation engine based on the received data, a performance report including performance metrics and indicators related to the execution of the at least one speed test and the at least one web performance test.
BRIEF DESCRIPTION OF DRAWINGS
[0021] Various embodiments disclosed herein will become better understood from the following detailed description when read with the accompanying drawings. The accompanying drawings constitute a part of the present disclosure and illustrate certain non-limiting embodiments of inventive concepts. Further, components and elements shown in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. For consistency and ease of understanding, similar components and elements are annotated by reference numerals in the exemplary drawings.
[0022] FIG. 1 illustrates a block diagram depicting an example system for executing network tests over network connections, in accordance with an example embodiment of the present disclosure.
[0023] FIG. 2 illustrates an example system architecture of a user device, in accordance with an embodiment of the present disclosure.
[0024] FIG. 3 illustrates a block diagram depicting a system architecture of a server, in accordance with an example embodiment of the present disclosure.
[0025] FIG. 4 illustrates a flowchart depicting a method for creating work orders for execution of the network tests over the network connections of the user device, in accordance with an embodiment of the present disclosure.
[0026] FIG. 5 illustrates an example UI for navigating to a work order module, in accordance with an embodiment of the present disclosure.
[0027] FIG. 6 illustrates an example UI for navigating to an application work order, in accordance with an embodiment of the present disclosure.
[0028] FIG. 7 illustrates an example UI including an option for creating a work order for the user device, in accordance with an embodiment of the present disclosure.
[0029] FIG. 8 illustrates an example UI including options for receiving user inputs to create the work order, in accordance with an embodiment of the present disclosure.
[0030] FIG. 9 illustrates an example UI including options for receiving user inputs to create the work order for one or more user devices using device Identifiers (IDs) of the user devices, in accordance with an embodiment of the present disclosure.
[0031] FIG. 10 illustrates a flowchart depicting a method for executing network tests on the user device over a network connection of the user device, in accordance with an embodiment of the present disclosure.
[0032] FIG. 11 illustrates a flowchart depicting a method for operations performed by the user device for executing a speed test over the network connection, in accordance with an embodiment of the present disclosure.
[0033] FIG. 12 illustrates a flowchart depicting operations performed by the user device for executing a web performance test over the network connection, in accordance with an embodiment of the present disclosure.
DETAILED DESCRIPTION OF THE INVENTION
[0034] Inventive concepts of the present disclosure will now be described more fully hereinafter with reference to the accompanying drawings, in which examples of one or more embodiments of inventive concepts are shown. Inventive concepts may, however, be embodied in different forms and should not be construed as limited to the embodiments set forth herein. Further, the one or more embodiments disclosed herein are provided to describe the inventive concept thoroughly and completely, and to fully convey the scope of each of the present inventive concepts to those skilled in the art. Furthermore, it should be noted that the embodiments disclosed herein are not mutually exclusive concepts. Accordingly, one or more components from one embodiment may be tacitly assumed to be present or used in any other embodiment.
[0035] The following description presents various embodiments of the present disclosure. The embodiments disclosed herein are presented as teaching examples and are not to be construed as limiting the scope of the present disclosure. The present disclosure should in no way be limited to the illustrative implementations, drawings, and techniques illustrated below, including the exemplary design and implementation illustrated and described herein, but may be modified, omitted, or expanded upon without departing from the scope of the present disclosure.
[0036] The following description contains specific information pertaining to embodiments in the present disclosure. The detailed description uses the phrase “in some embodiments,” which may refer to one or more, or all, of the same or different embodiments. The term “some” as used herein is defined as “one, or more than one, or all.” Accordingly, the terms “one,” “more than one,” “more than one, but not all,” or “all” would all fall under the definition of “some.” In view of the same, the term, for example, “in an embodiment” refers to one embodiment, and the term, for example, “in one or more embodiments” refers to “at least one embodiment, or more than one embodiment, or all embodiments.”
[0037] The term “comprising,” when utilized, means “including, but not necessarily limited to;” it specifically indicates open-ended inclusion of the so-described one or more listed features or elements in a combination, unless otherwise stated with limiting language. Furthermore, to the extent that the terms “includes,” “has,” “have,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive in a manner similar to the term “comprising.”
[0038] In the following description, for the purposes of explanation, various specific details are set forth in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter can each be used independently of one another or with any combination of other features.
[0039] The description provided herein discloses exemplary embodiments only and is not intended to limit the scope, applicability, or configuration of the present disclosure. Rather, the following description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing any of the exemplary embodiments. Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it may be understood by one of ordinary skill in the art that the embodiments disclosed herein may be practiced without these specific details.
[0040] The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used herein in the description, the singular forms "a", "an", and "the" include plural forms unless the context of the invention indicates otherwise.
[0041] The terminology and structure employed herein are for describing, teaching, and illuminating some embodiments and their specific features and elements and do not limit, restrict, or reduce the scope of the present disclosure. Accordingly, unless otherwise defined, all terms, and especially any technical and/or scientific terms, used herein may be taken to have the same meaning as commonly understood by one having ordinary skill in the art.
[0042] Various aspects of the present disclosure illustrate a method for creating work orders for scheduling network tests, and a system and method for remotely executing network tests on user network connections in a background of a user device based on the created work orders.
[0043] The various aspects including the example aspects are now described more fully with reference to the accompanying drawings, in which the various aspects of the disclosure are shown. The disclosure may, however, be embodied in different forms and should not be construed as limited to the aspects set forth herein. Rather, these aspects are provided so that this disclosure is thorough and complete, and fully conveys the scope of the disclosure to those skilled in the art. In the drawings, the sizes of components may be exaggerated for clarity.
[0044] Various aspects of the present disclosure provide a system and a method that is capable of automating and remotely executing network tests on a user network to enable organizations and individuals to continuously monitor and assess performance of communication networks remotely. In another aspect of the present disclosure, the system and the method describe operations for creating the work orders for scheduling the network tests and operations for remotely executing the network tests on the user network to facilitate the organizations to proactively manage a quality of service (QoS) of the communication networks and enhance user satisfaction.
[0045] In the disclosure, various embodiments are described using terms used in some communication standards (e.g., 3rd Generation Partnership Project (3GPP)), but these are merely examples for description. Various embodiments of the disclosure may also be easily modified and applied to other communication systems.
[0046] In order to facilitate an understanding of the disclosed invention, a number of terms are defined below.
[0047] A “download speed” refers to a measurement of Downlink (DL) speed (in Megabits per second (Mbps)) indicating how fast data is transferred from a test server to the user device.
[0048] An “upload speed” refers to a measurement of Uplink (UL) speed (in Mbps) indicating how fast data is transferred from the user device to the test server.
[0049] A “latency” refers to a measurement of the time (in milliseconds) taken for a data packet to travel from the user device to the test server and back.
[0050] A “packet loss” refers to a percentage of data packets lost during transmission between the user device and the test server, impacting the quality of the network connection.
[0051] A “jitter” refers to a measurement in a variation in the latency over time, indicating instability in the network connection.
[0052] A “WebView option” refers to an option provided on an interface of the user device that allows web pages to be loaded and displayed within an application installed on the user device.
[0053] A “total load time” refers to a time taken for a web page to fully load all assets of the web page including, but not limited to, images, scripts, etc.
[0054] The following description provides specific details of certain aspects of the disclosure illustrated in the drawings to provide a thorough understanding of those aspects. It should be recognized, however, that the present disclosure can be reflected in additional aspects and the disclosure may be practiced without some of the details in the following description.
[0055] Embodiments of the present disclosure will be described below in detail with reference to the accompanying drawings. FIG. 1 through FIG. 12, discussed below, and the one or more embodiments used to describe the principles of the present disclosure are by way of illustration only and should not be construed in any way to limit the scope of the present disclosure. Those skilled in the art will understand that the principles of the present disclosure may be implemented in any suitably arranged system or device.
[0056] FIG. 1 illustrates a block diagram depicting an example system 100 for executing network tests over network connections, in accordance with an example embodiment of the present disclosure. The embodiment of the system 100 shown in FIG. 1 is for illustration only. Other embodiments of the system 100 may be used without departing from the scope of this disclosure.
[0057] As shown in FIG. 1, the system 100 includes a user device 110 (interchangeably referred to as “electronic device 110”), a network 120, a load balancer 130, a server 140 (interchangeably referred to as “remote server 140”), a database 150, a network management device 160, and a plurality of test servers 1 through N.
[0058] The user device 110 communicates with the server 140 via the network 120. In one or more embodiments, one or more applications are installed on the user device 110 to communicate with the server 140 and the test servers 1 through N. Examples of the user device 110 may include, but are not limited to, a Set Top Box (STB) and a User Equipment (UE) such as, but not limited to, smartphones, tablets, laptops, desktop computers, and the like.
[0059] The network 120 enables transmission of messages and acts as a communication medium between components of the system 100. The network 120 may correspond to one of the Internet, a proprietary Internet Protocol (IP) network, or another data network. The network 120 may include suitable logic, circuitry, and interfaces that may be configured to provide several network ports and several communication channels for transmission and reception of data related to operations of various entities of the system 100. Each network port may correspond to a virtual address (or a physical machine address) for transmission and reception of the communication data. For example, the virtual address may be an Internet Protocol version 4 (IPv4) address (or an IPv6 address) and the physical address may be a Media Access Control (MAC) address. The network 120 may be associated with an application layer for implementation of communication protocols based on one or more communication requests from the various entities of the system 100. The communication data may be transmitted or received via the communication protocols. Examples of the communication protocols may include, but are not limited to, Hypertext Transfer Protocol (HTTP), File Transfer Protocol (FTP), Simple Mail Transfer Protocol (SMTP), Domain Name System (DNS) protocol, Common Management Interface Protocol (CMIP), Transmission Control Protocol and Internet Protocol (TCP/IP), User Datagram Protocol (UDP), Long Term Evolution (LTE) communication protocols, or any combination thereof.
[0060] In some aspects of the present disclosure, the communication data may be transmitted or received via at least one communication channel of several communication channels in the network 120. Examples of the communication channels may include, but are not limited to, a wireless channel, a wired channel, or a combination thereof. The wireless or wired channel may be associated with a data standard which may be defined by one of a Local Area Network (LAN), a Personal Area Network (PAN), a Wireless Local Area Network (WLAN), a Wireless Sensor Network (WSN), a Wide Area Network (WAN), a Wireless Wide Area Network (WWAN), a Metropolitan Area Network (MAN), a satellite network, the Internet, an optical fiber network, a coaxial cable network, an Infrared (IR) network, a Radio Frequency (RF) network, or a combination thereof. Aspects of the present disclosure are intended to include or otherwise cover any type of communication channel, including known, related art, and/or later developed technologies.
[0061] The load balancer 130 is an intermediary between the network 120 and the server 140. The load balancer 130 is configured to distribute, to the server 140, incoming requests from the network management device 160 for creating the work orders for scheduling the network tests on multiple user devices (not shown in FIG. 1). In one or more embodiments, the incoming requests for creating the work order may be received from a plurality of network management devices (not shown in FIG. 1).
[0062] The server 140 may be a network of computers, a software framework, or a combination thereof, that may provide a generalized approach to create a server implementation. Examples of the server 140 may include, but are not limited to, personal computers, laptops, mini-computers, mainframe computers, any non-transient and tangible machine that can execute a machine-readable code, cloud-based servers, distributed server networks, or a network of computer systems. The server 140 may be realized through various web-based technologies such as, but not limited to, a Java web-framework, a .NET framework, a personal home page (PHP) framework, or any web-application framework. In other aspects of the present disclosure, the server 140 may be configured to perform one or more operations for creating work orders for scheduling network tests over the user network connections in the background of the user device 110, and determining overall performance of the user network connections based on execution of the network tests at the user device 110.
[0063] The database 150 may be configured to store data including, but not limited to, data collected as the results of the execution of the network tests, user profile data associated with the user device 110, and backend data associated with the network management device 160.
[0064] The network management device 160 corresponds to an electronic device used by network engineers or a network operations team for managing and optimizing network resources. The network management device 160 may include an application interface to facilitate display of a plurality of options for receiving inputs related to the creation of the work orders and the configurability of the network test parameters. The application interface may be further configured to facilitate communications between the network management device 160 and the server 140 for creating the work orders for scheduling the network tests at the user device 110. The application interface may be further configured to send one or more requests to the server 140 and receive data communications from the server 140 via the network 120. The application interface may be further configured to cause the network management device 160 to output a signal instructing the server 140 to initiate a process for creating the work orders for scheduling the network tests at the user device 110.
[0065] Although FIG. 1 illustrates one example of the system for remotely scheduling the speed tests on the user network, various changes may be made to FIG. 1. For example, the system may include any number of user devices and servers in any suitable arrangement. Further, in another example, the system may include any number of components in addition to the components shown in FIG. 1. Further, various components in FIG. 1 may be combined, further subdivided, or omitted and additional components may be added according to particular needs.
[0066] FIG. 2 illustrates an example system architecture of the user device 110, in accordance with an embodiment of the present disclosure. The embodiment of the system architecture of the user device 110 as shown in FIG. 2 is for illustration only. However, the user device 110 may come in a wide variety of configurations, and FIG. 2 does not limit the scope of the present disclosure to any particular system architecture of the user device 110.
[0067] As shown in FIG. 2, the user device 110 includes one or more processors 210 (hereinafter also referred to as “processor 210”), a memory 215, a transceiver module 220, an interface(s) 225, and a processing Engine(s)/module(s) 230. These components may be in electronic communication via one or more buses (e.g., communication bus 250). Depending on the network type, the term “user device” may refer to any electronic device such as an “STB,” “mobile station,” “subscriber station,” “remote terminal,” “wireless terminal,” or “receive point.” For the sake of convenience, the term “user device” used herein refers to an electronic device such as the UE or the STB that wirelessly accesses the server 140 via the network 120.
[0068] The one or more components of the user device 110 are communicatively coupled with the processor 210 (described below) to perform operations for executing the network tests over the network connection. The processor 210 may include various processing circuitry and may be configured to execute programs or computer-readable instructions stored in the memory 215. The processor 210 may also include an intelligent hardware device including, for example, and without limitation, a general-purpose processor such as a Central Processing Unit (CPU), an Application Processor (AP), a dedicated processor, a microcontroller, a Field-Programmable Gate Array (FPGA), a programmable logic device, a discrete hardware component, or any combination thereof. In some cases, the processor 210 may be configured to operate a memory array using a memory controller. In some cases, a memory controller may be integrated into the processor 210. The processor 210 may be configured to execute computer-readable instructions stored in a memory (e.g., the memory 215) to cause the user device 110 to perform various functions (e.g., schedule an execution of the network tests based on the work orders received from the remote server 140 and execute the network tests over the network connection at the scheduled time of execution of the network tests).
[0069] The memory 215 is communicatively coupled to the processor 210. A part of the memory 215 may include a Random-Access Memory (RAM), and another part of the memory 215 may include a flash memory or other Read-Only Memory (ROM). The memory 215 is configured to store a set of instructions required by the processor 210 for controlling overall operations of the user device 110. The memory 215 may include non-volatile storage elements. Examples of such non-volatile storage elements may include magnetic hard discs, optical discs, floppy discs, flash memories, or forms of electrically programmable memories (EPROM) or electrically erasable and programmable memories (EEPROM). In addition, the memory 215 may, in some examples, be considered a non-transitory storage medium. A "non-transitory" storage medium is not embodied in a carrier wave or a propagated signal. However, the term "non-transitory" should not be interpreted to mean that the memory 215 is non-movable. In some examples, the memory 215 can be configured to store larger amounts of information. In certain examples, a non-transitory storage medium may store data that can, over time, change (e.g., in the RAM or cache). The memory 215 can be an internal storage unit, or it can be an external storage unit of the user device 110, cloud storage, or any other type of external storage.
[0070] More specifically, the memory 215 may store computer-readable instructions including instructions that, when executed by a processor (e.g., the processor 210) cause the user device 110 to perform various functions described herein. In some cases, the memory 215 may contain, among other things, a BIOS which may control basic hardware or software operation such as the interaction with peripheral components or devices.
[0071] The transceiver module 220 may include one or more antennas, one or more Radio Frequency (RF) transceivers, a transmit processing circuitry, and a receive processing circuitry. The transceiver module 220 may be configured to receive incoming signals, such as signals transmitted by the test servers 1 through N and the server 140. The transceiver module 220 may down-convert the incoming signals to generate baseband signals which may be sent to the receive processing circuitry. The receive processing circuitry may transmit the processed baseband signals to the processor 210 for further processing. The transmit processing circuitry may receive analog or digital data from the processor 210 and may encode, multiplex, and/or digitize the outgoing baseband data to generate processed baseband signals. The transceiver module 220 may further receive the outgoing processed baseband signals from the transmit processing circuitry and up-convert the baseband signals to RF signals that may be transmitted to the server 140 and the network management device 160.
[0072] The interface 225 may include suitable logic, circuitry, a variety of interfaces, and/or codes that may be configured to receive input(s) and present output(s) on the application interface of the user device 110. The variety of interfaces may include interfaces for data input and output devices, referred to as I/O devices, storage devices, and the like. For example, the I/O interface may have an input interface and an output interface. The interface 225 may facilitate communication of the user device 110 with various devices and systems connected to it. The interface 225 may also provide a communication pathway for one or more components of the user device 110. Examples of such components include, but are not limited to, the processing Engine(s)/module(s) 230.
[0073] In one or more embodiments, the processing Engine(s)/module(s) 230 may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the user device 110. In non-limiting examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processing Engine(s)/module(s) 230 may be processor-executable instructions stored on a non-transitory machine-readable storage medium, and the hardware for the processor 210 may comprise a processing resource (for example, one or more processors) to execute such instructions. In the present examples, the machine-readable storage medium may store instructions that, when executed by the processing resource, implement the processing Engine(s)/module(s) 230. In such examples, the user device 110 may comprise both the machine-readable storage medium storing the instructions and the processing resource to execute the instructions, or the machine-readable storage medium may be separate from but accessible to the user device 110 and the processing resource. In other examples, the processing Engine(s)/module(s) 230 may be implemented using electronic circuitry.
[0074] In one or more embodiments, processing Engine(s)/module(s) 230 may include one or more units/modules selected from any of a scheduling engine 232, an execution engine 234, a data aggregation engine 236, and other engines/modules 238 (not shown).
[0075] In an embodiment, the processor 210, using the transceiver module 220, is configured to receive, from the remote server 140, one or more work orders to execute the network tests. The network tests may include, but are not limited to, a speed test and a web performance test. Further, the processor 210, using the scheduling engine 232, is configured to schedule an execution of the speed test and the web performance test based on the one or more work orders received from the remote server 140.
[0076] In an embodiment, the processor 210, using the execution engine 234, may be configured to execute the speed test and the web performance test over the network connection of the user device 110. The execution engine 234 may execute each of the speed test and the web performance test iteratively based on a number of iterations specified in the one or more work orders. The speed test and the web performance test are performed by the execution engine 234 in the background of the user device 110. The execution of the speed test and the web performance test in the background of the user device 110 refers to performing the speed test and the web performance test as a non-intrusive process that does not interfere with primary functions of the user device 110. For example, if the STB is streaming a video while executing the speed test or the web performance test, these tests run without disrupting the video streaming, ensuring that the user experience remains unaffected.
[0077] Furthermore, the processor 210, using the data aggregation engine 236, is configured to aggregate execution results of each of the speed test and the web performance test. For instance, once the speed test and the web performance test are executed by the execution engine 234, the data aggregation engine 236 aggregates the obtained execution results to generate a consolidated data set that reflects network performance over multiple test iterations. The aggregation process may comprise collecting individual test metrics such as download speed, upload speed, latency, packet loss, jitter, web page load time, and failed Uniform Resource Locator (URL) occurrences from each test execution cycle. These individual data points may be timestamped or categorized based on execution parameters (for example, network type, test location, and time of execution), and may be normalized to ensure consistency across different test conditions.
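A minimal, non-limiting sketch of such an aggregation is shown below; the metric names, categorization fields, and summary statistics are illustrative assumptions.

```python
# Illustrative sketch: per-iteration execution results are consolidated into
# a single timestamped, categorized data set with min/max/average statistics
# per metric, mirroring the aggregation process described above.
from datetime import datetime, timezone
from statistics import mean

METRICS = ("download_mbps", "upload_mbps", "latency_ms",
           "jitter_ms", "packet_loss", "page_load_time_s")

def aggregate_results(iteration_results, network_type, location):
    """iteration_results: list of dicts, one per test execution cycle."""
    consolidated = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "network_type": network_type,   # execution parameter (categorization)
        "location": location,           # execution parameter (categorization)
        "iterations": len(iteration_results),
    }
    for metric in METRICS:
        values = [r[metric] for r in iteration_results if metric in r]
        if values:  # normalize to a common min/max/avg summary per metric
            consolidated[metric] = {"min": min(values), "max": max(values),
                                    "avg": mean(values)}
    return consolidated

runs = [{"download_mbps": 92.5, "latency_ms": 21.3},
        {"download_mbps": 88.1, "latency_ms": 24.7}]
print(aggregate_results(runs, network_type="broadband", location="Ahmedabad"))
```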
[0078] In an embodiment, the processor 210, using the transceiver module 220, is configured to transmit the execution results of each of the speed test and the web performance test to the remote server 140.
[0079] In an embodiment, the processor 210, using the scheduling engine 232, is configured to trigger the execution of the speed test and the web performance test at a scheduled start time specified in the one or more work orders. The scheduling engine 232 may trigger the execution of the network tests by sending an Application Programming Interface (API) request to an alarm manager (not shown) of the user device 110. In some embodiments, the scheduling engine 232 may be configured to reschedule the execution of the speed test and the web performance test in case the one or more work orders include an instruction to execute the speed test and the web performance test for two or more iterations. Further, in an embodiment, the execution engine 234 may be configured to synchronize the execution results of the network tests with the remote server 140.
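By way of a non-limiting illustration, the trigger-and-reschedule behaviour described above may be sketched as follows. On a real device the trigger would be handed to the platform alarm manager through the API request mentioned above; here Python's standard sched module stands in for that mechanism, and the helper and field names are hypothetical.

```python
# Illustrative sketch: a work order is triggered at its scheduled start time
# and rescheduled while iterations remain, mirroring the scheduling engine's
# behaviour. The sched module is a stand-in for the device's alarm manager.
import sched
import time

scheduler = sched.scheduler(time.monotonic, time.sleep)

def execute_speed_test_and_web_performance_test(work_order):
    print("executing tests for", work_order["work_order_id"])  # stand-in

def run_tests(work_order, iterations_left):
    execute_speed_test_and_web_performance_test(work_order)
    if iterations_left > 1:
        # Reschedule when the work order asks for two or more iterations.
        scheduler.enter(work_order["interval_s"], 1, run_tests,
                        (work_order, iterations_left - 1))

def schedule_work_order(work_order):
    delay = max(0.0, work_order["start_time"] - time.time())
    scheduler.enter(delay, 1, run_tests,
                    (work_order, work_order["iterations"]))

schedule_work_order({"work_order_id": "WO-0001",
                     "start_time": time.time() + 1.0,
                     "iterations": 2, "interval_s": 1.0})
scheduler.run()  # blocks until all scheduled test runs have executed
```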
[0080] Although FIG. 2 illustrates one example of the system architecture of the user device 110, various changes may be made to FIG. 2. Further, the user device 110 may include any number of components in addition to those shown in FIG. 2, without deviating from the scope of the present disclosure. For example, the user device 110 may further include circuitry, programming, applications, or a combination thereof. Further, various components in FIG. 2 may be combined, further subdivided, or omitted, and additional components may be added according to particular needs.
[0081] In an alternate embodiment, each engine/module of the processing Engine(s)/module(s) 230 (i.e., the scheduling engine 232, the execution engine 234, the data aggregation engine 236, and other engines/modules 238) is configured to independently perform various operations of the processor 210, as described herein, without deviating from the scope of the present disclosure. Additionally, different engines/modules shown in FIG. 2 may be split into two or more engines/modules each operating independently in communication with one another, optionally in a distributive manner, with shared responsibilities. Furthermore, multiple instances of the engines/modules may be implemented for executing the network tests over the network connections or multiple modules can be combined into a single engine/module to perform all corresponding functions described herein.
[0082] FIG. 3 illustrates a block diagram depicting a system architecture of the server 140, in accordance with an example embodiment of the present disclosure. The embodiment of the server 140 shown in FIG. 3 is for illustration only. Other embodiments of the server 140 may be used without departing from the scope of this disclosure.
[0083] The server 140 may include an Input-Output (I/O) interface 310, a memory 320, a data processing circuitry 330, a communication unit 350, a console host 360, and a database 370 coupled to each other via a first communication bus 324.
[0084] The I/O interface 310 may include suitable logic, circuitry, interfaces, and/or codes that may be configured to receive input(s) and present (or display) output(s) of the server 140. For example, the I/O interface 310 may have an input interface (not shown) and an output interface (not shown). The input interface may be configured to enable a user to provide input(s) to trigger (or configure) the server 140 to create the work orders for remotely scheduling the network tests at the user device 110. Aspects of the present disclosure are intended to include or otherwise cover any type of the input interface including known, related art, and/or later developed technologies without deviating from the scope of the present disclosure. The output interface may be configured to display (or present) output(s) to the user by the server 140. In some aspects of the present disclosure, the output interface may provide the output(s) based on instruction(s) provided via the input interface. Examples of the output interface of the I/O interface 310 may include, but are not limited to, a digital display, an analog display, a touch screen display, an appearance of a desktop, and/or illuminated characters.
[0085] The memory 320 may be configured to store logic, instructions 320A, circuitry, interfaces, and/or codes of the data processing circuitry 330 for executing various operations. The memory 320 may further be configured to store data associated with the work orders, that may be utilized by various data processing engines (or processor(s)) of the data processing circuitry 330 to create the work orders for remotely scheduling the network tests over user network connections of the user device 110. Aspects of the present disclosure are intended to include and/or otherwise cover any type of the data associated with the work orders, user profile, and network performance without deviating from the scope of the present disclosure. Examples of the memory 320 may include, but are not limited to, a ROM, a RAM, a flash memory, a removable storage drive, a Hard Disc Drive (HDD), a solid-state memory, a magnetic storage drive, a Programmable Read-Only Memory (PROM), the EPROM, and/or the EEPROM.
[0086] The data processing circuitry 330 may include processor(s) (such as data processing engines) configured with suitable logic, instructions, circuitry, interfaces, and/or codes for executing one or more of various operations performed by the server 140. For example, the data processing circuitry 330 is configured to execute programs and other processes stored in the memory 320. The data processing circuitry 330 is further configured to move data into or out of the memory 320 as required by the execution process. Examples of the data processing circuitry 330 may include, but are not limited to, an Application-Specific Integrated Circuit (ASIC) processor, a Reduced Instruction Set Computing (RISC) processor, a Complex Instruction Set Computing (CISC) processor, an FPGA, and the like.
[0087] The data processing circuitry 330 may include data processor(s) (e.g., data processing engines) as presented in FIG. 3. According to an exemplary embodiment, the data processing circuitry 330 may include a display engine 312, a work order management engine 314, a reception engine 318, a transmitting engine 320, and a report generation engine 322 coupled to each other by way of a second communication bus 340.
[0088] In an embodiment, the display engine 312 is configured to control the application interface of the network management device 160 to display a work order window including options for creating the one or more work orders. The reception engine 318 is configured to obtain inputs corresponding to the options displayed in the work order window.
[0089] In an embodiment, the work order management engine 314 is configured to create the one or more work orders related to the speed test and the web performance test based on the inputs obtained by the reception engine 318. Further, the work order management engine 314 may be configured to store the created one or more work orders in the database 370. Furthermore, the work order management engine 314 may be configured to retrieve the one or more work orders from the database 370 at the scheduled start time specified in the obtained inputs.
[0090] Further, the transmitting engine 320 may be configured to transmit the retrieved one or more work orders to each user device 110 over the network connection to execute the speed test and the web performance test. Furthermore, the reception engine 318 may be configured to receive data including the execution results of the speed test and the web performance test from each user device 110.
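A minimal, non-limiting sketch of this server-side flow (store, retrieve, transmit, receive) is shown below; the in-memory dictionary and print-based transport are hypothetical stand-ins for the database 370 and the network connection.

```python
# Illustrative sketch of the server-side work order flow: a created work
# order is stored, retrieved by its identifier, transmitted to each target
# user device, and execution results are received back.
import json

database = {}            # hypothetical stand-in for the database 370
received_results = []    # hypothetical inbox for the reception engine

def store_work_order(order):
    database[order["work_order_id"]] = order

def transmit_to_devices(order, device_ids):
    # A real deployment would transmit over the network connection.
    for device_id in device_ids:
        print(f"sending {order['work_order_id']} to device {device_id}")

def receive_results(payload):
    received_results.append(json.loads(payload))

order = {"work_order_id": "WO-0001", "tests": ["speed_test"]}
store_work_order(order)
transmit_to_devices(database["WO-0001"], device_ids=["STB-123", "STB-456"])
receive_results('{"device_id": "STB-123", "download_mbps": 91.7}')
print(received_results)
```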
[0091] In an embodiment, the report generation engine 322 is configured to generate, based on the received data including the execution results, a performance report including performance metrics and indicators related to the execution of the speed test and the web performance test. For instance, based on the aggregated execution results, the report generation engine 322 generates the performance report including actionable insights into a quality and a stability of the network connection. The performance report may include, but is not limited to, a summary of network performance trends, geographical representations of speed test results, latency and jitter distributions, packet loss statistics, and web performance insights including average page load times and failed URL occurrences. Additionally, the performance report may highlight anomalies, such as, but not limited to, excessive latency spikes or consistent packet loss, that may indicate underlying network issues related to the network connection of the user device 110.
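By way of a non-limiting illustration, the derivation of such indicators and anomaly highlights from the aggregated results may be sketched as follows; the anomaly thresholds are illustrative assumptions only.

```python
# Illustrative sketch: a simple performance report is derived from aggregated
# metrics, flagging an excessive latency spike and consistent packet loss
# using hypothetical thresholds.
def build_report(aggregated):
    report = {"summary": aggregated, "anomalies": []}
    latency = aggregated.get("latency_ms", {})
    # Flag a spike when the worst latency is more than twice the average.
    if latency and latency.get("max", 0.0) > 2 * latency.get("avg", float("inf")):
        report["anomalies"].append("excessive latency spike")
    # Flag consistent loss when the average exceeds a hypothetical 5% threshold.
    if aggregated.get("packet_loss", {}).get("avg", 0.0) > 0.05:
        report["anomalies"].append("consistent packet loss above 5%")
    return report

print(build_report({"latency_ms": {"min": 18.0, "avg": 22.0, "max": 95.0},
                    "packet_loss": {"min": 0.0, "avg": 0.08, "max": 0.2}}))
```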
[0092] In a non-limiting example, a table within the performance report can represent these performance metrics and indicators in a structured format, allowing the end users or the network engineers to interpret and analyze the test data effectively. The table may include multiple columns representing different test execution parameters, such as the network type, test location, execution values, etc. An exemplary table is shown below:
Table 1
[0093] It should be understood that the above table is provided merely as a non-limiting example, and the performance report may include additional or alternative metrics, indicators, or representations as required for specific implementations. The structure and contents of the table may be modified based on the network conditions, test parameters, or reporting preferences without departing from the scope of the present disclosure.
[0094] The communication unit 350 includes an electronic circuit specific to a standard that enables wired or wireless communication. The communication unit 350 is configured to communicate internally between internal hardware components and with external devices via one or more networks. The communication unit 350 may be configured to enable the server 140 to communicate with various entities of the system 100 (such as the user device 110, the network management device 160, and in some scenarios external network devices) through backhaul connection (e.g., wired backhaul or wireless backhaul) or a network. Examples of the communication unit 350 may include, but are not limited to, a modem, a network interface such as an Ethernet card, a communication port, and/or a Personal Computer Memory Card International Association (PCMCIA) slot and card, an antenna, a radio frequency (RF) transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a coder-decoder (CODEC) chipset, and a local buffer circuit. It will be apparent to a person of ordinary skill in the art that the communication unit 350 may include any device and/or apparatus capable of providing wireless or wired communications between the network management device 160 and the server 140 and other components of the system 100.
[0095] The console host 360 may include suitable logic, circuitry, interfaces, and/or codes that may be configured to enable the I/O interface 310 to receive input(s) and/or present output(s). In some aspects of the present disclosure, the console host 360 may include suitable logic, instructions, and/or codes for executing various operations of one or more computer executable applications to host the console on the network management device 160, by way of which the network engineer can trigger the server 140 to create the work order and schedule the network tests on the user device 110. In some other aspects of the present disclosure, the console host 360 may provide a Graphical User Interface (GUI) for the server 140 for user interaction.
[0096] Various engines of the data processing circuitry 330 are presented to illustrate the functionality driven by the server 140. It will be apparent to a person having ordinary skill in the art that various engines in the data processing circuitry 330 are for illustrative purposes and not limited to any specific combination of hardware circuitry and/or software.
[0097] Although FIG. 3 illustrates one example of the system architecture of the server 140, various changes may be made to FIG. 3. Further, the server 140 may include any number of components in addition to those shown in FIG. 3, without deviating from the scope of the present disclosure. Further, various components in FIG. 3 may be combined, further subdivided, or omitted, and additional components may be added according to particular needs.
[0098] FIG. 4 illustrates a flowchart depicting a method 400 for creating the work orders for execution of the network tests over the network connections of the user device 110, in accordance with an exemplary embodiment of the present disclosure. The method 400 comprises a series of operation steps indicated by blocks 402 through 418.
[0099] Example blocks 402 through 418 of the method 400 are performed by one or more components of the server 140 as disclosed in FIG. 3, for creating the work orders and generating the performance report including the performance metrics and indicators related to the execution of the network tests over the network connections of the user device 110. Although the method 400 shows the example blocks of operation steps 402 through 418, in some embodiments, the method 400 may include additional steps, fewer steps, or steps in a different order than those depicted in FIG. 4. In other embodiments, the steps 402 through 418 may be combined or may be performed in parallel. The method 400 starts at block 402.
[0100] At block 402, the communication unit 350 establishes a connection with the network management device 160 and the user device 110. It will be apparent to a person having ordinary skill in the art that the communication unit 350 may also establish a connection with multiple user devices 110. For ease of explanation and for the sake of brevity of the present disclosure, only one user device 110 is depicted in FIG. 1 as an example.
[0101] At block 404, the display engine 312 controls the application interface of the network management device 160 to display a work order window including a plurality of options for creating the work orders for execution of the network tests over the network connection of the user device 110. Some example UIs depicting the process to navigate to the work order window are described below by referring to FIG. 5 through FIG. 8.
[0102] FIG. 5 illustrates an example UI 500 for navigating to a work order module, in accordance with an embodiment of the present disclosure. The UI 500 may include a cognitive platform having a plurality of options provided to an end user and/or the network engineer. Once the end user or the network engineer selects the option corresponding to the work order module, the display engine 312 controls the application interface of the network management device 160 to navigate to another page depicted by an example UI 600 (as shown in FIG. 6 described below).
[0103] FIG. 6 illustrates the example UI 600 for navigating to an application work order, in accordance with an embodiment of the present disclosure. Once the end user or the network engineer selects the option corresponding to the application work order shown in the UI 600, the display engine 312 controls the application interface of the network management device 160 to navigate to another page depicted by an example UI 700 (as shown in FIG. 7 described below).
[0104] FIG. 7 illustrates the example UI 700 including an option for creating the work orders for execution of the network tests, in accordance with an embodiment of the present disclosure. As shown in FIG. 7, the UI 700 provides an option 702 for creating the work orders for execution of the network tests at the user device 110. Once the end user or the network engineer selects the option 702 corresponding to the create work order shown in the example UI 700, the display engine 312 controls the application interface of the network management device 160 to navigate to another page depicted by an example UI 800 (as shown in FIG. 8 described below).
[0105] FIG. 8 illustrates the example UI 800 including options for receiving user inputs to create the work orders, in accordance with an embodiment of the present disclosure. The UI 800 provides a plurality of options to the end user or the network engineer for configuring a test script for executing the network tests such as the speed test and the web performance test. As shown in FIG. 8, the plurality of options may include, but is not limited to, an option for selecting recipe tests such as the speed test and the web performance test, an option for selecting the date and time period for executing the recipe test, an option for selecting a number of iterations for the recipe test, an option for setting a start time of the recipe test, and an option for setting a test duration of the recipe test.
[0106] FIG. 9 illustrates an example UI 900 including options for receiving user inputs to create the work order for one or more user devices using device Identifiers (IDs) of the user device 110, in accordance with an embodiment of the present disclosure. As shown in FIG. 9, the UI 900 provides multiple options to the end user or the network engineer for configuring the test script for executing the network tests. The options include, but are not limited to, an option for selecting a recipe test (i.e., the speed test), the option for selecting the date and time period for executing the speed test, the option for selecting the number of iterations for the speed test, the option for setting the start time of the speed test, the option for setting the test duration of the speed test, and an option for adding the user devices (i.e., STBs) manually by entering a unique device ID of each user device 110 or by uploading a file including a list of unique device IDs of the user devices 110. Once the end user or the network engineer selects all of these options and enters the device IDs of the user devices, the work order management engine 314 creates the test script (i.e., the work order) for executing the speed test and assigns the created test script to the user devices 110 in accordance with their unique device IDs as uploaded or entered by the network engineer. In an embodiment, the work order management engine 314 identifies each user device 110 based on the respective unique device IDs, and creates a user profile of one or more users associated with each user device 110 based on the identification and backend data associated with the identified user devices.
[0107] At block 406, the reception engine 318 obtains a set of inputs corresponding to the plurality of options displayed in the work order window. In a non-limiting example, the set of inputs may include, but is not limited to, information of test configuration parameters, a scheduled start time for executing the speed test and the web performance test, a number of iterations, and a test duration for executing the speed test and the web performance test. In another non-limiting example, the test configuration parameters include a type of the speed test and a sample size of the test packets. The set of inputs is received by the reception engine 318 when the end user or the network engineer provides input data via the input options displayed on the UI 800. Once the set of inputs is obtained by the reception engine 318, the flow of the method 400 proceeds to block 408.
[0108] At block 408, the work order management engine 314 creates the work orders related to the speed test and the web performance test based on the obtained set of inputs.
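By way of a non-limiting illustration, the work order created at block 408 from the inputs obtained at block 406 may be pictured as a simple data structure. The Kotlin sketch below is provided for clarity only; the type and field names (WorkOrder, testType, deviceIds, and so on) are assumptions of this illustration and are not identifiers used in the disclosure.

```kotlin
import java.time.Instant

// Illustrative work order structure; all names are assumptions, not disclosed identifiers.
data class WorkOrder(
    val workOrderId: String,
    val testType: String,          // e.g. "SPEED_TEST" or "WEB_PERFORMANCE_TEST"
    val sampleSizeBytes: Int,      // sample size of the test packets
    val startTime: Instant,        // scheduled start time
    val iterations: Int,           // number of iterations
    val testDurationSeconds: Int,  // test duration
    val deviceIds: List<String>    // unique device IDs of the target user devices
)

fun main() {
    // Inputs of the kind obtained at block 406, turned into a work order at block 408.
    val order = WorkOrder(
        workOrderId = "WO-0001",
        testType = "SPEED_TEST",
        sampleSizeBytes = 1_000_000,
        startTime = Instant.parse("2025-01-01T02:00:00Z"),
        iterations = 3,
        testDurationSeconds = 60,
        deviceIds = listOf("STB-12345", "STB-67890")
    )
    println(order)
}
```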
[0109] Further, the work order management engine 314 stores the created work orders in the database 370 (at block 410), and retrieves the created work orders from the database 370 at the scheduled start time (at block 412).
[0110] Furthermore, at block 414, the transmitting engine 320 transmits, over the network connection, the retrieved work orders to each user device 110 to execute the speed test and the web performance test.
[0111] Thereafter, at block 416, the reception engine 318 receives the data including the execution results of the speed test and the web performance test from each user device 110. The work order management engine 314 may also store, in the database 370, the measurement values included in the received data for performing data analytics to identify the trend and pattern of network performance.
[0112] Once the execution results of the speed test and the web performance test are received by the reception engine 318 of the server 140, then at block 418, the report generation engine 322 of the server 140 generates the performance report including performance metrics and indicators related to the execution of the speed test and the web performance test using the data received from each user device 110.
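As a non-limiting sketch of the kind of reduction block 418 may perform, per-device measurements can be collapsed into fleet-wide indicators for the performance report. The structure and names below (DeviceMetrics, reportSummary) are assumptions of this illustration, not disclosed identifiers.

```kotlin
// Illustrative per-device metrics; names are assumptions.
data class DeviceMetrics(val deviceId: String, val downloadMbps: Double, val latencyMs: Double)

// Reduce per-device results into fleet-wide indicators for the report.
// Assumes the list is non-empty; average() returns NaN otherwise.
fun reportSummary(results: List<DeviceMetrics>): Map<String, Double> = mapOf(
    "avgDownloadMbps" to results.map { it.downloadMbps }.average(),
    "avgLatencyMs" to results.map { it.latencyMs }.average(),
    "worstLatencyMs" to (results.maxOfOrNull { it.latencyMs } ?: 0.0)
)
```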
[0113] FIG. 10 illustrates a flowchart depicting a method 1000 for executing the network tests over the network connection of the user device 110, in accordance with an exemplary embodiment of the present disclosure. The method 1000 comprises a series of operation steps indicated by blocks 1002 through 1012.
[0114] Example blocks 1002 through 1012 of the method 1000 are performed by one or more components of the user device 110 as disclosed in FIG. 2, for executing the network tests over the network connections of the user device 110 based on the one or more work orders received from the remote server 140. Although the method 1000 shows the example blocks of operation steps 1002 through 1012, in some embodiments, the method 1000 may include additional steps, fewer steps, or steps in a different order than those depicted in FIG. 10. In other embodiments, the steps 1002 through 1012 may be combined or may be performed in parallel. The method 1000 starts at block 1002.
[0115] At block 1002, the transceiver module 220 receives the one or more work orders from the remote server 140 to execute the network tests including, but not limited to, the speed test and the web performance test.
[0116] At block 1004, the scheduling engine 232 schedules the execution of the speed test and the web performance test at the scheduled start time specified in the one or more work orders received from the remote server 140. Additionally, the scheduling engine 232 triggers the execution of the speed test and the web performance test at the scheduled start time by sending the API request to the alarm manager of the user device 110.
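Since the description refers to an alarm manager of the user device, block 1004 maps naturally onto the Android AlarmManager API. The following is a minimal sketch assuming an Android-based user device; the receiver class TestTriggerReceiver and the intent extra are hypothetical names introduced for the illustration, and this is not asserted to be the disclosed implementation.

```kotlin
import android.app.AlarmManager
import android.app.PendingIntent
import android.content.BroadcastReceiver
import android.content.Context
import android.content.Intent

// Hypothetical receiver that starts the scheduled tests (blocks 1006 and 1008);
// the class name and extra key are assumptions of this sketch.
class TestTriggerReceiver : BroadcastReceiver() {
    override fun onReceive(context: Context, intent: Intent) {
        val workOrderId = intent.getStringExtra("workOrderId")
        // Hand off to the execution engine here.
    }
}

fun scheduleNetworkTests(context: Context, startTimeMillis: Long, workOrderId: String) {
    val alarmManager = context.getSystemService(Context.ALARM_SERVICE) as AlarmManager
    val intent = Intent(context, TestTriggerReceiver::class.java)
        .putExtra("workOrderId", workOrderId)
    val pendingIntent = PendingIntent.getBroadcast(
        context, 0, intent,
        PendingIntent.FLAG_UPDATE_CURRENT or PendingIntent.FLAG_IMMUTABLE
    )
    // Fire at the scheduled start time from the work order, even if the device is idle.
    alarmManager.setExactAndAllowWhileIdle(AlarmManager.RTC_WAKEUP, startTimeMillis, pendingIntent)
}
```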
[0117] At block 1006, the execution engine 234 executes the speed test over the network connection of the user device 110 iteratively based on the number of iterations specified in the one or more work orders, and measures one or more parameters associated with the speed test.
[0118] At block 1008, the execution engine 234 executes the web performance test over the network connection of the user device 110 iteratively based on the number of iterations specified in the one or more work orders, and measures one or more parameters associated with the web performance test.
[0119] At block 1010, the data aggregation engine 236 aggregates the one or more parameters associated with each of the speed test and the web performance test as the execution results. In particular, the data aggregation engine 236 consolidates all the measurement results of the execution of the speed test and the web performance test as the execution results. In a non-limiting example, the measurement results such as the upload speed, download speed, latency, packet loss, and jitter may be captured and consolidated as the execution results for the speed test. Further, in another non-limiting example, the measurement results such as the total load time and failed URLs of a web page may be captured and consolidated as the execution results of the web performance test.
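The consolidation at block 1010 can be pictured as collecting per-iteration measurements into a single payload. The result types below are assumptions introduced for this illustration and are not disclosed structures.

```kotlin
// Illustrative result structures; field names are assumptions.
data class SpeedTestResult(
    val downloadMbps: Double,
    val uploadMbps: Double,
    val latencyMs: Double,
    val jitterMs: Double,
    val packetLossPercent: Double
)

data class WebTestResult(val url: String, val totalLoadTimeMs: Long, val failed: Boolean)

data class ExecutionResults(
    val workOrderId: String,
    val speedResults: List<SpeedTestResult>, // one entry per iteration
    val webResults: List<WebTestResult>      // one entry per URL per iteration
)

// Consolidate per-iteration measurements into the payload transmitted at block 1012.
fun aggregate(
    workOrderId: String,
    speed: List<SpeedTestResult>,
    web: List<WebTestResult>
): ExecutionResults = ExecutionResults(workOrderId, speed, web)
```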
[0120] At block 1012, the transceiver module 220 transmits the execution results of each of the speed test and the web performance test to the remote server 140 via a Hypertext Transfer Protocol (HTTP) API.
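Block 1012 names an HTTP API but does not fix its shape; the sketch below posts a JSON payload using the JDK's HttpURLConnection. The endpoint path /api/test-results and the JSON body are assumptions of this illustration, not disclosed details.

```kotlin
import java.net.HttpURLConnection
import java.net.URL

// Post the aggregated execution results to the remote server; the endpoint
// path and payload format are assumptions, not disclosed details.
fun postResults(serverBaseUrl: String, jsonBody: String): Int {
    val connection = URL("$serverBaseUrl/api/test-results").openConnection() as HttpURLConnection
    return try {
        connection.requestMethod = "POST"
        connection.doOutput = true
        connection.setRequestProperty("Content-Type", "application/json")
        connection.outputStream.use { it.write(jsonBody.toByteArray(Charsets.UTF_8)) }
        connection.responseCode // e.g. 200 on success
    } finally {
        connection.disconnect()
    }
}
```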
[0121] FIG. 11 illustrates a flowchart depicting operations performed by the user device 110 for executing the speed test 1006 over the network connection, in accordance with an embodiment of the present disclosure. The operation step 1006 comprises a series of sub operation steps indicated by blocks 1102 through 1110. Example blocks 1102 through 1110 are performed by the execution engine 234 of the user device 110 as disclosed in FIG. 2, for executing the speed test over the network connection of the user device 110.
[0122] At block 1102, the execution engine 234 selects, from the test servers 1 through N, a test server nearest to the user device 110 based on a geographical location of the user device 110. The execution engine 234 may acquire the geographical location of the user device 110 from an internet service provider serving the user device 110.
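The nearest-server selection at block 1102 can be approximated with a great-circle distance once each candidate test server has a known latitude and longitude; that the servers expose coordinates is an assumption of this sketch, as the disclosure does not specify how proximity is computed.

```kotlin
import kotlin.math.asin
import kotlin.math.cos
import kotlin.math.sin
import kotlin.math.sqrt

data class TestServer(val id: String, val latDeg: Double, val lonDeg: Double)

// Great-circle distance in kilometres (haversine formula).
fun haversineKm(lat1: Double, lon1: Double, lat2: Double, lon2: Double): Double {
    val r = 6371.0 // mean Earth radius in km
    val dLat = Math.toRadians(lat2 - lat1)
    val dLon = Math.toRadians(lon2 - lon1)
    val a = sin(dLat / 2) * sin(dLat / 2) +
            cos(Math.toRadians(lat1)) * cos(Math.toRadians(lat2)) *
            sin(dLon / 2) * sin(dLon / 2)
    return 2 * r * asin(sqrt(a))
}

// Pick the test server nearest to the device's geographical location.
fun nearestServer(deviceLat: Double, deviceLon: Double, servers: List<TestServer>): TestServer? =
    servers.minByOrNull { haversineKm(deviceLat, deviceLon, it.latDeg, it.lonDeg) }
```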
[0123] At block 1104, the execution engine 234 transmits the test packets of data to the selected test server using the transceiver module 220.
[0124] At block 1106, the execution engine 234 receives a set of test packets from the selected test server via the transceiver module 220 in response to the transmitted test packets of data.
[0125] At block 1108, the execution engine 234 measures the one or more parameters associated with the speed test based on a sample size of the test packets of data and a round trip time for the transmission and the reception of the test packets between the user device 110 and the selected test server. In a non-limiting example, the execution engine 234 may measure the download speed, the upload speed, the latency, the jitter, or the packet loss during the transmission and the reception of the test packets between the user device 110 and the selected test server.
[0126] In particular, the execution engine 234 may measure the download speed, the upload speed, and the latency by calculating the round trip time for the transmission and the reception of the test packets between the user device 110 and the selected test server. Further, the execution engine 234 may measure the jitter by determining a variation in the latency between the consecutive test packets. The execution engine 234 may determine the variation in the latency between consecutive test packets based on the round trip time for the transmission and the reception of the consecutive test packets. Furthermore, the execution engine 234 may calculate a ratio of lost test packets to a total number of the test packets transmitted from the user device 110 to the test server, and may determine the packet loss based on the calculated ratio.
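Paragraph [0126] defines jitter as the variation in latency between consecutive test packets and packet loss as a ratio of lost to transmitted packets. A direct translation into code follows, under the assumption (introduced here, not stated in the disclosure) that a lost packet is recorded as a null round-trip time.

```kotlin
import kotlin.math.abs

// Compute average latency, jitter, and packet-loss ratio from per-packet
// round-trip times; a null entry stands for a packet that never returned
// (an assumption of this sketch). Assumes at least one packet was received.
fun summarize(rttMs: List<Double?>): Triple<Double, Double, Double> {
    val received = rttMs.filterNotNull()
    val latencyMs = received.average()
    // Jitter: mean absolute difference between consecutive round-trip times.
    val jitterMs = received.zipWithNext { a, b -> abs(b - a) }
        .takeIf { it.isNotEmpty() }?.average() ?: 0.0
    // Packet loss: ratio of lost packets to total packets transmitted.
    val lossRatio = (rttMs.size - received.size).toDouble() / rttMs.size
    return Triple(latencyMs, jitterMs, lossRatio)
}
```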
[0127] At block 1110, the execution engine 234 may transmit, using the transceiver module 220, the measurement results of the speed test to the remote server 140 for the generation of the performance report.
[0128] FIG. 12 illustrates a flowchart depicting operations performed by the user device 110 for executing the web performance test 1008 over the network connection, in accordance with an embodiment of the present disclosure. The operation step 1008 comprises a series of sub operation steps indicated by blocks 1202 through 1208. Example blocks 1202 through 1208 are performed by the execution engine 234 of the user device 110 as disclosed in FIG. 2, for executing the web performance test over the network connection of the user device 110.
[0129] At block 1202, the execution engine 234 loads the one or more URLs of web pages in a web browser installed in the user device 110 using the “WebView” option.
[0130] At block 1204, the execution engine 234 measures the total load time for each web page.
[0131] At block 1206, the execution engine 234 identifies, from the one or more URLs of the web pages, one or more failed URLs that exceed a predefined load time threshold (for example, 5 seconds) by tracking network requests. For instance, the execution engine 234 may track the network requests to check whether any assets of a web page fail to load within 5 seconds.
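Blocks 1202 through 1206 time page loads against a 5-second threshold. A minimal Android sketch using WebViewClient callbacks is given below; timing page loads through onPageStarted/onPageFinished is one plausible realization and an assumption of this illustration, not the disclosed tracking mechanism.

```kotlin
import android.graphics.Bitmap
import android.os.SystemClock
import android.webkit.WebView
import android.webkit.WebViewClient

// Measures the total load time per URL and flags URLs that exceed the
// example 5-second threshold from the description.
class LoadTimeClient(
    private val onResult: (url: String, loadTimeMs: Long, failed: Boolean) -> Unit
) : WebViewClient() {
    private var startMs = 0L

    override fun onPageStarted(view: WebView?, url: String?, favicon: Bitmap?) {
        startMs = SystemClock.elapsedRealtime()
    }

    override fun onPageFinished(view: WebView?, url: String?) {
        val elapsed = SystemClock.elapsedRealtime() - startMs
        onResult(url.orEmpty(), elapsed, elapsed > 5_000)
    }
}

// Usage: webView.webViewClient = LoadTimeClient { url, ms, failed -> /* record result */ }
```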
[0132] At block 1208, the execution engine 234 transmits the measurement results including the measured total load time and the one or more failed URLs to the remote server 140 for the generation of the performance report.
[0133] Embodiments of the present technology may be described herein with reference to flowchart illustrations of methods and systems according to embodiments of the technology, and/or procedures, algorithms, steps, operations, formulae, or other computational depictions, which may also be implemented as computer program products. In this regard, each block or step of the flowchart, and combinations of blocks (and/or steps) in the flowchart, as well as any procedure, algorithm, step, operation, formula, or computational depiction can be implemented by various means, such as hardware, firmware, and/or software including one or more computer program instructions embodied in computer-readable program code. As will be appreciated, any such computer program instructions may be executed by one or more computer processors, including without limitation a general-purpose computer or special purpose computer, or other programmable processing apparatus to perform a group of operations comprising the operations or blocks described in connection with the disclosed methods.
[0134] Further, these computer program instructions, such as embodied in computer-readable program code, may also be stored in one or more computer-readable memory or memory devices (for example, the memory 215 or 320) that can direct a computer processor or other programmable processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory or memory devices produce an article of manufacture including instruction means which implement the function specified in the block(s) of the flowchart(s).
[0135] It will further be appreciated that the term “computer program instructions” as used herein refers to one or more instructions that can be executed by the one or more processors (for example, the processor 210 or data processing circuitry 330) to perform one or more functions as described herein. The instructions may also be stored remotely such as on a server, or all or a portion of the instructions can be stored locally and remotely.
[0136] One or more embodiments disclosed herein may provide one or more technical advantages and other advantages. The embodiments disclosed herein provide an efficient mechanism for automating execution of the speed tests on the user network, thereby helping organizations and individuals to continuously monitor and remotely assess the performance of the communication networks.
[0137] Further, in certain embodiments, the system utilizes the work orders created by the remote server to perform the network tests; thus, manual user intervention is not required for scheduling the network tests at a desired time. Further, the performance report, which is generated based on the measurement results of the speed test and the web performance test, provides a consolidated view of the network performance.
[0138] Furthermore, the performance report generated as a result of the remote execution of the speed tests helps organizations proactively manage a Quality of Service (QoS) of the communication networks without any user intervention, by scheduling the network tests in the background of the user device, and thus helps in enhancing user satisfaction by ensuring consistent and reliable internet connectivity. In particular, the execution of the network tests is typically performed in an automated manner and does not require any user interaction. The system utilizes the created test scripts to execute the network tests, simulating real-world user scenarios.
[0139] Moreover, the disclosed system and method measure network performance and web performance for an application installed in the user device and provide real-time insights into network speed and web load times. Additionally, an assessment of bandwidth, latency, and packet loss can be performed based on an analysis of the performance report generated using the measurement results, to ensure seamless delivery of content to the user devices.
[0140] Additionally, the disclosed system and method collect data associated with the performed network tests and analyze the collected data to identify metrics related to network speed. Based on the analysis, the system and method can efficiently identify performance issues, which may be logged and reported to the end user or the network engineer. This information may also help the network engineers in identifying and addressing problems associated with the network connection.
[0141] Certain embodiments of the present disclosure describe location-based selection of test servers for execution of the network tests, which helps in identifying areas where network performance is particularly problematic. Also, with continuous analysis of testing data, the disclosed system can identify and mark areas of network congestion or bottlenecks where the network speed degrades significantly, which can further be used in optimizing the network to avoid bottlenecks.
[0142] Those skilled in the art will appreciate that the methodology described in the present disclosure may be carried out in specific ways other than those set forth in the above-disclosed embodiments without departing from essential characteristics and features of the present invention. The above-described embodiments are therefore to be construed in all aspects as illustrative and not restrictive.
[0143] The drawings and the foregoing description give examples of embodiments. Those skilled in the art will appreciate that one or more of the described elements may well be combined into a single functional element. Alternatively, certain elements may be split into multiple functional elements. Elements from one embodiment may be added to another embodiment. For example, orders of processes described herein may be changed and are not limited to the manner described herein. Any combination of the above features and functionalities may be used in accordance with one or more embodiments.
[0144] In the present disclosure, each of the embodiments has been described with reference to numerous specific details which may vary from embodiment to embodiment. The foregoing description of the specific embodiments disclosed herein may reveal the general nature of the embodiments herein that others may, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications are intended to be comprehended within the meaning of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation.
LIST OF REFERENCE NUMERALS
[0145] The following list is provided for convenience and in support of the drawing figures and as part of the text of the specification, which describe innovations by reference to multiple items. Items not listed here may nonetheless be part of a given embodiment. For better legibility of the text, a given reference number is recited near some, but not all, recitations of the referenced item in the text. The same reference number may be used with reference to different examples or different instances of a given item. The list of reference numerals is:
100 - System
110 - User device
120 - Network
130 - Load balancer
140 - Server
150 - Database
160 - Network Management Device
1 through N - Test servers
210 - Processor
215 - Memory
220 - Transceiver module
225 - Interface(s)
230 - Processing Engine(s)/module(s)
232 - Scheduling engine
234 - Execution engine
236 - Data aggregation engine
238 - Other engines/modules
250 - Communication bus
310 - Input-Output (I/O) interface
320 - Memory
324 - First communication bus
330 - Data processing circuitry
312 - Display engine
314 - Work order management engine
318 - Reception engine
320 - Transmitting engine
322 - Report generation engine
340 - Second communication bus
350 - Communication Unit
360 - Console host
370 - Database
400 - Method for creating the work orders for execution of the network tests
500 - Example UI for navigating to a work order module
600 - Example UI for navigating to an application work order
700 - Example UI including option for creating the work orders
702 - Option for creating the work order
800 - Example UI including options for receiving user inputs
900 - Method for executing the network tests over the network connection of the user device
1000 - Method for executing the network tests over the network connection of the user device
1100 - Operations for executing the speed test
1200 - Operations for executing the web performance test
CLAIMS:
We claim:
1. A method (900) for executing network tests on an electronic device (110) over a network connection, the method (900) comprising:
receiving, by a transceiver module (220) from a remote server (140), one or more work orders to execute the network tests, wherein the network tests include at least one speed test and at least one web performance test;
scheduling, by a scheduling engine (232), an execution of the at least one speed test and the at least one web performance test based on the one or more work orders;
executing, by an execution engine (234), the at least one speed test over the network connection by:
selecting, from a plurality of test servers (1 to N), a test server nearest to the electronic device (110) based on a geographical location of the electronic device (110);
transmitting and receiving test packets to and from the selected test server; and
measuring one or more parameters associated with the at least one speed test based on a sample size of the test packets and a round trip time for the transmission and the reception of the test packets between the electronic device (110) and the selected test server;
executing, by the execution engine (234), the at least one web performance test over the network connection by:
loading one or more URLs of a web page; and
measuring one or more parameters associated with the at least one web performance test; and
aggregating, by a data aggregation engine (236), execution results of each of the at least one speed test and the at least one web performance test; and
transmitting, by the transceiver module (220), the execution results of each of the at least one speed test and the at least one web performance test to the remote server.
2. The method (900) as claimed in claim 1, wherein
the one or more work orders includes test configuration parameters, a scheduled start time for executing the at least one speed test and the at least one web performance test, a number of iterations, and a test duration for executing the at least one speed test and the at least one web performance test,
the test configuration parameters include a type of the at least one speed test and the sample size of the test packets, and
the one or more parameters associated with the at least one speed test includes one or more of a download speed, an upload speed, a latency, a jitter, or a packet loss.
3. The method (900) as claimed in claim 2, comprising:
triggering, by the scheduling engine (232), the execution of the at least one speed test and the at least one web performance test at the scheduled start time by sending an API request to an alarm manager of the electronic device,
wherein each of the at least one speed test and the at least one web performance test is executed iteratively based on the number of iterations specified in the one or more work orders.
4. The method (900) as claimed in claim 2, comprising:
rescheduling, by the scheduling engine (232), the execution of the at least one speed test and the at least one web performance test in case the one or more work orders includes an instruction to execute the at least one speed test and the at least one web performance test for two or more iterations.
5. The method (900) as claimed in claim 1, wherein
the one or more parameters associated with the at least one web performance test comprises a total load time of loading the web page, and
the execution of the at least one web performance test over the network connection further comprises identifying, by the execution engine from the one or more URLs of the web page, one or more failed URLs that exceed a predefined load time threshold.
6. The method (900) as claimed in claim 1, comprising:
determining, by the execution engine (234), a variation in the latency between consecutive test packets based on the round trip time for the transmission and the reception of the consecutive test packets; and
calculating, by the execution engine (234), a ratio of lost test packets to a total number of the test packets transmitted to the test server, wherein
the jitter is measured based on the determined variation in the latency between the consecutive test packets, and
the packet loss is determined based on the calculated ratio.
7. The method (900) as claimed in claim 1, wherein
the at least one speed test and the at least one web performance test are performed in background of the electronic device (110), and
the electronic device (110) corresponds to one of a User Equipment (UE) or a Set Top Box (STB).
8. The method (900) as claimed in claim 1, wherein the geographical location of the electronic device (110) is acquired from a service provider serving the electronic device (110).
9. An electronic device (110) for executing network tests over a network connection, the electronic device (110) comprising:
a transceiver module (220) configured to receive, from a remote server, one or more work orders to execute the network tests, wherein the network tests include at least one speed test and at least one web performance test;
a scheduling engine (232) configured to schedule an execution of the at least one speed test and the at least one web performance test based on the one or more work orders;
an execution engine (234) configured to execute the at least one speed test and the at least one web performance test over the network connection,
wherein, to execute the at least one speed test, the execution engine (234) is configured to:
select, from a plurality of test servers (1 to N), a test server nearest to the electronic device (110) based on a geographical location of the electronic device (110);
transmit and receive test packets to and from the selected test server; and
measure one or more parameters associated with the at least one speed test based on a sample size of the test packets and a round trip time for the transmission and the reception of the test packets between the electronic device (110) and the selected test server, and
wherein, to execute the at least one web performance test, the execution engine (234) is configured to:
load one or more URLs of a web page; and
measure one or more parameters associated with the at least one web performance test; and
a data aggregation engine (236) configured to aggregate execution results of each of the at least one speed test and the at least one web performance test,
wherein the transceiver module (220) is further configured to transmit the execution results of each of the at least one speed test and the at least one web performance test to the remote server.
10. The electronic device (110) as claimed in claim 9, wherein
the one or more work orders includes test configuration parameters, a scheduled start time for executing the at least one speed test and the at least one web performance test, a number of iterations, and a test duration for executing the at least one speed test and the at least one web performance test,
the test configuration parameters include a type of the at least one speed test and the sample size of the test packets, and
the one or more parameters associated with the at least one speed test includes one or more of a download speed, an upload speed, a latency, a jitter, or a packet loss.
11. The electronic device (110) as claimed in claim 10, wherein the scheduling engine (232) is further configured to trigger the execution of the at least one speed test and the at least one web performance test at the scheduled start time by sending an API request to an alarm manager of the electronic device (110), wherein each of the at least one speed test and the at least one web performance test is executed iteratively based on the number of iterations specified in the one or more work orders.
12. The electronic device (110) as claimed in claim 10, wherein the scheduling engine (232) is further configured to reschedule the execution of the at least one speed test and the at least one web performance test in case the one or more work orders includes an instruction to execute the at least one speed test and the at least one web performance test for two or more iterations.
13. The electronic device (110) as claimed in claim 9,
wherein the one or more parameters associated with the at least one web performance test comprises a total load time of loading the web page, and
wherein, to execute the at least one web performance test over the network connection, the execution engine (234) is further configured to identify, from the one or more URLs of the web page, one or more failed URLs that exceed a predefined load time threshold.
14. The electronic device (110) as claimed in claim 9, wherein the execution engine (234) is further configured to:
determine a variation in the latency between consecutive test packets based on the round trip time for the transmission and the reception of the consecutive test packets; and
calculate a ratio of lost test packets to a total number of the test packets transmitted to the test server, wherein
the jitter is measured based on the determined variation in the latency between the consecutive test packets, and
the packet loss is determined based on the calculated ratio.
15. The electronic device (110) as claimed in claim 9, wherein
the at least one speed test and the at least one web performance test are performed in background of the electronic device, and
the electronic device (110) corresponds to one of a User Equipment (UE) or a Set Top Box (STB).
16. The electronic device (110) as claimed in claim 9, wherein the geographical location of the electronic device (110) is acquired from a service provider serving the electronic device (110).
17. A method (400) for creating one or more work orders for execution of network tests over a network connection, the method (400) comprising:
establishing, by a communication unit (350), a connection with a network management device and one or more electronic devices;
controlling, by a display engine (312), an application interface of the network management device to display a work order window including a plurality of options for creating at least one work order;
obtaining, by a reception engine (318), a set of inputs corresponding to the plurality of options displayed in the work order window, wherein
the network tests include at least one speed test and at least one web performance test, and
the set of inputs includes information of test configuration parameters, a scheduled start time for executing the at least one speed test and the at least one web performance test, a number of iterations, and a test duration for executing the at least one speed test and the at least one web performance test;
creating, by a work order management engine (314), the at least one work order related to the at least one speed test and the at least one web performance test based on the obtained set of inputs;
storing, by the work order management engine (314), the at least one work order in a database; and
retrieving, by the work order management engine (314), the at least one work order from the database at the scheduled start time;
transmitting, by a transmitting engine (320) over the network connection, the retrieved at least one work order to the one or more electronic devices to execute the at least one speed test and the at least one web performance test;
receiving, by the reception engine (318), data including execution results of the at least one speed test and the at least one web performance test from the one or more electronic devices; and
generating, by a report generation engine (322) based on the received data, a performance report including performance metrics and indicators related to the execution of the at least one speed test and the at least one web performance test.