Abstract: The present disclosure relates to a system (100) and a method (400) for monitoring performance of networking devices (103) in a communication network (106). The method (400) includes receiving, from a user device (102), a user input for monitoring the performance of a networking device (103). The method (400) further includes identifying at least one performance test server (108) corresponding to at least one performance test, based on the user input. Furthermore, the method (400) includes generating a trigger signal for the at least one performance test server (108) to perform the at least one performance test on the networking device (103). Furthermore, the method (400) includes receiving a first test result from the at least one performance test server (108) and generating a first output signal to enable the user device (102) for rendering the first test result. FIG. 3
FORM 2
THE PATENTS ACT, 1970 (39 OF 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
SYSTEM AND METHOD FOR MONITORING PERFORMANCE OF NETWORKING DEVICES IN A COMMUNICATION NETWORK
Jio Platforms Limited, an Indian company, having registered address at Office -101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
The following complete specification particularly describes the disclosure and the manner in which it is performed.
TECHNICAL FIELD
[0001] The embodiments of the present disclosure generally relate to the field of communication networks and systems. More particularly, the present disclosure relates to a system and a method for monitoring performance of networking devices in a communication network.
BACKGROUND OF THE INVENTION
[0002] The subject matter disclosed in the background section should not be assumed or construed to be prior art merely due to its mention in the background section. Similarly, any problem statement mentioned in the background section or its association with the subject matter of the background section should not be assumed or construed to have been previously recognized in the prior art.
[0003] With recent advancements in telecommunication networks, the Passive Optical Network (PON) has emerged as a crucial access technology. PON technologies are constantly advancing toward increased capacity and increased end-user counts, and PONs are therefore developed to carry huge amounts of traffic in the network. A PON provides an economical way of transferring broadband information using a tree-like fiber-cable architecture with passive optical splitters at its nodes. One of the major components of a PON architecture is an Optical Network Terminal (ONT) device. The ONT device is a vital device in fiber-optic internet setups that converts optical signals to electrical signals. The ONT device communicates directly with an Internet Service Provider (ISP) to deliver a fiber-optic internet connection to customer premises, facilitating high-speed internet access.
[0004] With an increase in the number of end users, the ONT device is prone to failures such as power supply unit failures, loss of signal due to damaged fiber, loss of signal due to equipment failures, fluctuating signal strength due to environmental factors and signal attenuation, and hardware and firmware issues. ONT device failures cause service disruption, leading to revenue losses and customer dissatisfaction. Also, the ONT device may not be entirely visible to a Network Management System (NMS) for performing fault management operations. Hence, frequent monitoring of the ONT device is important for preventing network downtime and faults, and for maintaining fault tolerance and security during information transmission, so as to provide better customer service with operational and reliable data.
[0005] Therefore, in view of the challenges associated with the ONT device, there lies a need for a solution capable of monitoring and diagnosing the ONT device to identify factors responsible for degrading performance of the ONT device. Further, there is a need for a solution capable of rectifying the faults in the ONT device to enrich the customer experience.
SUMMARY
[0006] The following embodiments present a simplified summary in order to provide a basic understanding of some aspects of the disclosed invention. This summary is not an extensive overview, and it is not intended to identify key/critical elements or to delineate the scope thereof. Its sole purpose is to present some concepts in a simplified form as a prelude to the more detailed description that is presented later.
[0007] According to an embodiment, a method for monitoring performance of a networking device in a communication network is provided. The method includes receiving, by a data exchange engine from a user device, a user input for monitoring the performance of the networking device. The method further includes identifying, by a test server engine in response to a determination that the user input is associated with a real time instance, at least one performance test server from a plurality of performance test servers corresponding to at least one performance test, based on the user input. Furthermore, the method includes generating, by the test server engine, a trigger signal for the at least one performance test server to perform the at least one performance test on the networking device. Furthermore, the method includes receiving, by the test server engine in response to the trigger signal, a first test result from the at least one performance test server. Furthermore, the method includes generating, by the test server engine in response to receipt of the first test result, a first output signal to enable the user device for rendering the first test result.
[0008] In some aspects of the present disclosure, the user input comprises a device identifier of the networking device, a test request to perform the at least one performance test on the networking device, and a temporal value for performing the at least one performance test on the networking device.
[0009] In some aspects of the present disclosure, the method further includes determining, by a device identification engine, whether the device identifier matches with a device tag from a plurality of predefined device tags corresponding to one or more valid network devices in the communication network. Moreover, the method includes determining, by a temporal detection engine in response to the determination that the device identifier matches with the device tag, whether the temporal value matches with the real time instance. The match of the temporal value with the real time instance corresponds to the user input being associated with the real time instance.
[0010] In some aspects of the present disclosure, the method further includes generating, by the device identification engine, an error signal in response to a determination that the device identifier mismatches with each of the plurality of predefined device tags. Moreover, the method includes transmitting, by the device identification engine, the error signal to the user device. The error signal enables the user device to render an error notification.
[0011] In some aspects of the present disclosure, the method further includes determining, by the temporal detection engine in response to the determination that the device identifier matches with one of the plurality of predefined device tags, whether the temporal value is associated with a historical instance. Moreover, the method includes retrieving, by the test server engine in response to the determination that the temporal value is associated with the historical instance, a second test result from a memory. The second test result comprises at least one historical value of the at least one performance parameter associated with the networking device corresponding to the historical instance. Furthermore, the method includes generating, by the test server engine, a second output signal to enable the user device for rendering the second test results.
[0012] In some aspects of the present disclosure, the trigger signal enables the at least one performance test server to determine at least one real time value of at least one performance parameter corresponding to the networking device. The at least one performance parameter is associated with the at least one performance test.
[0013] In some aspects of the present disclosure, the first test result corresponds to the at least one real time value of the at least one performance parameter associated with the at least one performance test.
[0014] In some aspects of the present disclosure, each performance test server of the plurality of performance test servers corresponds to at least one of a plurality of performance test services comprising a speed test, a ping test, a web performance test, a trace route test, a memory test, a Wireless Local Area Network (WLAN) test, and a Local Area Network (LAN) test.
[0015] According to another embodiment of the present disclosure, a system to monitor performance of a networking device in a communication network is provided. The system includes a data exchange engine and a test server engine. The data exchange engine is configured to receive a user input from a user device for monitoring the performance of the networking device. The test server engine is configured to identify, in response to a determination that the user input is associated with a real time instance, at least one performance test server from a plurality of performance test servers corresponding to at least one performance test, based on the user input. Moreover, the test server engine is configured to generate a trigger signal for the at least one performance test server to perform the at least one performance test on the networking device. Furthermore, the test server engine is configured to receive, in response to the trigger signal, a first test result from the at least one performance test server. Furthermore, the test server engine is configured to generate, in response to receipt of the first test result, a first output signal to enable the user device for rendering the first test result.
BRIEF DESCRIPTION OF DRAWINGS
[0016] Various embodiments disclosed herein will become better understood from the following detailed description when read with the accompanying drawings. The accompanying drawings constitute a part of the present disclosure and illustrate certain non-limiting embodiments of inventive concepts. Further, components and elements shown in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. For the purpose of consistency and ease of understanding, similar components and elements are annotated by reference numerals in the exemplary drawings.
FIG. 1 illustrates a block diagram depicting a system to monitor performance of networking device(s) in a communication network, in accordance with an exemplary embodiment of the present disclosure.
FIG. 2 illustrates a block diagram depicting a data processing server, in accordance with an exemplary embodiment of the present disclosure.
FIG. 3 illustrates a block diagram depicting a functional environment of a Network Management System (NMS), in accordance with an exemplary embodiment of the present disclosure.
FIG. 4 presents a flow chart that depicts a method for monitoring the performance of the networking device(s), in accordance with an exemplary embodiment of the present disclosure.
LIST OF REFERENCE NUMERALS
102 – User Device
103 – Networking Devices
104 – Data Processing Server
106 – Network
108 – Performance Test Servers
110 – User Interface
112 – Processing Unit
114 – Device Memory
116 – Application Console
118 – Network Interface
120 – Data Processing Circuitry
122 – Server Memory
200 – Communication Interface
202 – Console Host
203 – First Communication Bus
204 – Data Exchange Engine
206 – Data Identification Engine
208 – Temporal Detection Engine
210 – Test Server Engine
212 – Historical Data Engine
214 – Report Generation Engine
216 – Instructions Repository
218 – Device Identifier Repository
220 – Test Server Data Repository
222 – Test Results Repository
226 – Second Communication Bus
302 – Network
304 – Users (Internet / Intranet)
306 – Shared Load Balancer
308 – Web Layer
310 – Web Server
312 – Application Layer – 1
314 – Application Server for Users (Internet / Intranet)
316 – Application Layer - 2
318 – Framework Server for Users (Internet / Intranet)
320 – Database Layer
322 – Cluster of Nodes
324 – Primary Databases
326 – Secondary Database
DETAILED DESCRIPTION OF THE INVENTION
[0017] Inventive concepts of the present disclosure will now be described more fully hereinafter with reference to the accompanying drawings, in which examples of one or more embodiments of inventive concepts are shown. Inventive concepts may, however, be embodied in different forms and should not be construed as limited to the embodiments set forth herein. Further, the one or more embodiments disclosed herein are provided to describe the inventive concept thoroughly and completely, and to fully convey the scope of each of the present inventive concepts to those skilled in the art. Furthermore, it should be noted that the embodiments disclosed herein are not mutually exclusive concepts. Accordingly, one or more components from one embodiment may be tacitly assumed to be present or used in any other embodiment.
[0018] The following description presents various embodiments of the present disclosure. The embodiments disclosed herein are presented as teaching examples and are not to be construed as limiting the scope of the present disclosure. The present disclosure should in no way be limited to the illustrative implementations, drawings, and techniques illustrated below, including the exemplary design and implementation illustrated and described herein, but may be modified, omitted, or expanded upon without departing from the scope of the present disclosure.
[0019] The following description contains specific information pertaining to embodiments in the present disclosure. The detailed description uses the phrases “in some embodiments” which may each refer to one or more or all of the same or different embodiments. The term “some” as used herein is defined as “one, or more than one, or all.” Accordingly, the terms “one,” “more than one,” “more than one, but not all” or “all” would all fall under the definition of “some.” In view of the same, the terms, for example, “in an embodiment” refers to one embodiment and the term, for example, “in one or more embodiments” refers to “at least one embodiment, or more than one embodiment, or all embodiments.”
[0020] The term “comprising,” when utilized, means “including, but not necessarily limited to;” it specifically indicates open-ended inclusion of the one or more listed features or elements in a combination, unless otherwise stated with limiting language. Furthermore, to the extent that the terms “includes,” “has,” “have,” “contains,” and other similar words are used in the detailed description, such terms are intended to be inclusive in a manner similar to the term “comprising.”
[0021] In the following description, for the purposes of explanation, various specific details are set forth to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter can each be used independently of one another or with any combination of other features.
[0022] The description provided herein discloses exemplary embodiments only and is not intended to limit the scope, applicability, or configuration of the present disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing any of the exemplary embodiments. Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it may be understood by one of ordinary skill in the art that the embodiments disclosed herein may be practiced without these specific details.
[0023] The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used herein in the description, the singular forms "a", "an", and "the" include plural forms unless the context clearly indicates otherwise.
[0024] The terminology and structure employed herein are for describing, teaching, and illuminating some embodiments and their specific features and elements and do not limit, restrict, or reduce the scope of the present disclosure. Accordingly, unless otherwise defined, all terms, and especially any technical and/or scientific terms, used herein may be taken to have the same meaning as commonly understood by one having ordinary skill in the art.
[0025] In the present disclosure, various embodiments are described using terms such as extensible radio access network (xRAN) and open radio access network (O-RAN) that are commonly used in communication standards (e.g., those of the 3rd Generation Partnership Project (3GPP)), but these are merely examples for description. Various embodiments of the disclosure may also be easily modified and applied to other communication systems.
[0026] The present disclosure relates to a system and a method for monitoring performance of networking devices in a communication network. The networking devices may be Optical Network Terminal (ONT) devices connected to the communication network. More specifically, the present disclosure is directed towards a system and a method for diagnosing the networking devices by performing performance test(s) on a networking device, such as a speed test, a ping test, a web performance test, a trace route test, a memory test, and a network connectivity test. The system may include an electronic portal that enables a user to initiate diagnosis of a networking device, to check its past performance, and to compare its real-time performance with its past performance. The portal also enables an end user to search for the networking device utilizing a unique device identifier. Further, the portal may include provisions to perform the one or more performance tests on the networking device to monitor and diagnose its performance. Further, the portal may allow the user to access results of the one or more tests either collectively or individually. The end user may also access the results of a previously performed test by accessing a history tab of the portal.
[0027] The following description provides specific details of certain aspects of the disclosure illustrated in the drawings to provide a thorough understanding of those aspects. It should be recognized, however, that the present disclosure can be reflected in additional aspects and the disclosure may be practiced without some of the details in the following description.
[0028] Embodiments of the present disclosure will be described below in detail with reference to the accompanying drawings. FIG. 1 through FIG. 4, discussed below, and the embodiments used to describe the principles of the present disclosure are by way of illustration only and should not be construed in any way to limit the scope of the present disclosure. Those skilled in the art will understand that the principles of the present disclosure may be implemented in any suitably arranged system or device.
[0029] Various aspects including the example aspects are now described more fully with reference to the accompanying drawings, in which the various aspects of the disclosure are shown. The disclosure may, however, be embodied in different forms and should not be construed as limited to the aspects set forth herein. Rather, these aspects are provided so that this disclosure is thorough and complete, and fully conveys the scope of the disclosure to those skilled in the art. In the drawings, the sizes of components may be exaggerated for clarity.
[0030] Various aspects of the present disclosure provide a system and a method for monitoring performance of networking devices in a communication network. In some aspects of the present disclosure, the system and the method relate to determining historical (or past) performance of a networking device in the communication network. In some other aspects of the present disclosure, the system and the method relate to diagnosing a networking device based on a comparison of its present performance value with its historical performance value. In yet other aspects of the present disclosure, the system and the method relate to performing one or more tests on the networking device for monitoring its performance and diagnosing issues impacting that performance.
[0031] FIG. 1 illustrates a block diagram depicting a system 100 to monitor performance of networking devices 103 in a communication network 106 (hereinafter interchangeably referred to and designated as ‘the network 106’), in accordance with an exemplary embodiment of the present disclosure. The embodiments of the system 100 shown in FIG. 1 are for illustration only. Other embodiments of the system 100 may be used without departing from the scope of this disclosure.
[0032] The system 100 includes a user device 102, networking devices 103 (i.e., presented by way of first through third networking devices 103a-103c), a data processing server 104, and performance test servers 108 (i.e., presented by way of first through third performance test servers 108a-108c). Various components of the system 100 are coupled to each other by way of the network 106.
[0033] Examples of the user device 102 may include, but are not limited to, portable handheld electronic devices such as a mobile phone, a tablet, a laptop, a smart watch, etc., or fixed electronic devices such as a desktop computer, a computing device, etc. In some aspects of the present disclosure, the user device 102 may act as a medium to provide input(s) to, and fetch output(s) from, the data processing server 104. More particularly, the user device 102 acts as a source to communicate details of the networking devices 103 (such as device identifiers, requests for performance test(s), and temporal values) to the data processing server 104. Based on the details, the data processing server 104 may identify a networking device 103 from the networking devices 103 for analysis. Moreover, the details provided by the user device 102 enable selective performance test server(s) 108 to initiate performance test(s) for the networking device 103. The user device 102 may further be configured to render result(s) of the performance test(s) for the identified networking device 103 to a user of the user device 102.
[0034] According to the exemplary embodiment presented through FIG. 1, the user device 102 may include a user interface 110, a processing unit 112, a device memory 114, an application console 116, and a network interface 118.
[0035] The user interface 110 may include an input interface (not shown) for receiving input(s) from the user. The input(s) from the user may include, but are not limited to, a device identifier, a request to perform performance test(s), and/or a temporal value. Examples of the input interface may include, but are not limited to, a touch interface, a mouse, a keyboard, a motion recognition unit, a gesture recognition unit, a voice recognition unit, or the like. Aspects of the present disclosure are intended to include or otherwise cover any type of the input interface including known, related art, and/or later developed technologies without deviating from the scope of the present disclosure. The user interface 110 may further include an output interface (not shown) for rendering output(s) to the user. In some aspects, the output interface may be configured to present result(s) provided by the data processing server 104 to the user. The result(s) may include, but are not limited to, results of the performance test(s) associated with the networking device 103 selected by the user. Examples of the output interface of the user interface 110 may include, but are not limited to, a digital display, an analog display, a touch screen display, a graphical user interface, a website, a webpage, a keyboard, a mouse, a light pen, an appearance of a desktop, and/or illuminated characters. Aspects of the present disclosure are intended to include or otherwise cover any type of the output interface including known, related art, and/or later developed technologies without deviating from the scope of the present disclosure.
[0036] The processing unit 112 may include suitable logic, instructions, circuitry, interfaces, and/or codes for executing various operations, such as the operations associated with the user device 102. In some aspects of the present disclosure, the processing unit 112 may utilize processor(s) such as an Arduino, a Raspberry Pi, and/or the like. Further, the processing unit 112 may be configured to control operation(s) executed by the user device 102 in response to the input received at the user interface 110 from the user. Examples of the processing unit 112 may include, but are not limited to, an application-specific integrated circuit (ASIC) processor, a reduced instruction set computing (RISC) processor, a complex instruction set computing (CISC) processor, a field-programmable gate array (FPGA), a Programmable Logic Control unit (PLC), and the like. Aspects of the present disclosure are intended to include or otherwise cover any type of the processing unit 112 including known, related art, and/or later developed processing units.
[0037] The device memory 114 may be configured to store logic, instructions, circuitry, interfaces, and/or codes of the processing unit 112, data associated with the user device 102, and data associated with the system 100. Examples of the device memory 114 may include, but are not limited to, a Read-Only Memory (ROM), a Random-Access Memory (RAM), a flash memory, a removable storage drive, a hard disk drive (HDD), a solid-state memory, a magnetic storage drive, a Programmable Read Only Memory (PROM), an Erasable PROM (EPROM), and/or an Electrically EPROM (EEPROM). Aspects of the present disclosure are intended to include or otherwise cover any type of the device memory 114 including known, related art, and/or later developed memories, without deviating from the scope of the present disclosure.
[0038] The application console 116 may be configured as a computer-executable application to be executed by the processing unit 112. The application console 116 may include suitable logic, instructions, and/or codes for executing multiple operations of the system 100 and may be controlled (or hosted) by the data processing server 104. The computer-executable application(s) may be stored in the device memory 114.
[0039] The network interface 118 may be configured to enable the user device 102 to communicate with the data processing server 104 over the network 106. Examples of the network interface 118 may include, but are not limited to, a modem, a network interface such as an Ethernet card, a communication port, and/or a Personal Computer Memory Card International Association (PCMCIA) slot and card, an antenna, a radio frequency (RF) transceiver, amplifier(s), a tuner, oscillator(s), a digital signal processor, a coder-decoder (CODEC) chipset, a Subscriber Identity Module (SIM) card, and a local buffer circuit. It will be apparent to a person of ordinary skill in the art that the network interface 118 may include any device and/or apparatus capable of providing wireless or wired communication between the user device 102 and the data processing server 104.
[0040] The networking devices 103 may include circuitry, logic, and code(s) that connect optical fibre cables (from the network 106) to other wiring such as Ethernet and phone lines by converting a communication signal from an optical energy form to an electrical energy form, and vice versa. Though the networking devices 103 draw power from an electrical source, they may also have battery backup options in case of a power outage. The networking devices 103 may typically be a part of a larger Gigabit Passive Optical Network (GPON) system that enables high-speed data connections for wired consumer technologies. In some aspects of the present disclosure, the networking devices 103 can be Optical Network Terminal (ONT) devices. In such a scenario, the networking devices 103 may include hardware components such as, but not limited to, a terminal point, an optical Modulator-Demodulator (MODEM), network cable(s), an optical router, transponder circuitry, and optical fiber cable(s), that when operated in co-operation, provide a wired and/or wireless communication link for external user device(s) in their vicinity to connect to the network 106. Aspects of the present disclosure are intended to include or otherwise cover any type of networking device as the networking devices 103 including known, related art, and/or later developed networking devices, without deviating from the scope of the present disclosure.
[0041] In some aspects of the present disclosure, the user device 102 and a networking device 103 (for which the user intends to check performance parameter(s)) may be independent entities communicatively coupled to each other, whereas in some other aspects of the present disclosure, the user device 102 and the networking device 103 may be enclosed in a single enclosure as a single electronic device.
[0042] Although, in the presented aspect of the present disclosure, FIG. 1 illustrates the presence of three networking devices 103 (i.e., the first through third networking devices 103a-103c), it will be apparent to a person of ordinary skill in the art that the scope of the present disclosure is not limited thereto. In various other aspects, the system 100 may include any number of networking devices 103, without deviating from the scope of the present disclosure. In such a scenario, each networking device 103 may be structurally and functionally similar to the first through third networking devices 103a-103c as disclosed herein.
[0043] The data processing server 104 may be a network of computers, a software framework, or a combination thereof, that may provide a generalized approach to create a server implementation. Examples of the data processing server 104 may include, but are not limited to, personal computers, laptops, mini-computers, mainframe computers, any non-transient and tangible machine that can execute a machine-readable code, cloud-based servers, distributed server networks, or a network of computer systems. The data processing server 104 may be realized through various web-based technologies such as, but not limited to, a Java web-framework, a .NET framework, a personal home page (PHP) framework, or any web-application framework. In various aspects of the present disclosure, the data processing server 104 may be configured to perform data processing and/or storage operations to enable monitoring of the performance of the networking devices 103.
[0044] The data processing server 104 may include data processing circuitry 120 and a server memory 122. The data processing circuitry 120 may include processor(s) (comprising data processing engines) configured with suitable logic, instructions, circuitry, interfaces, and/or codes for executing various operations performed by the data processing server 104 for computations and data processing related to monitoring of the performance of the networking devices 103. Examples of the data processing circuitry 120 may include, but are not limited to, an Application Specific Integrated Chip (ASIC) processor, a RISC processor, a CISC processor, a Field Programmable Gate Array (FPGA), and the like. The server memory 122 may be configured to store the logic, instructions, circuitry, interfaces, and/or codes of the data processing circuitry 120 for executing various operations of the system 100. Aspects of the present disclosure are intended to include and/or otherwise cover any type of the data associated with the data processing server 104, without deviating from the scope of the present disclosure. Examples of the server memory 122 may include, but are not limited to, a ROM, a RAM, a flash memory, a removable storage drive, a HDD, a solid-state memory, a magnetic storage drive, a PROM, an EPROM, and/or an EEPROM.
[0045] In some aspects of the present disclosure, the data processing server 104 may be supported by external data center(s) (not shown) to perform one or more data processing and/or data storage tasks associated with the operations of the system 100. The external data center(s) may include suitable logic, circuitry, and/or code(s) to store data and perform computational tasks to support the data processing server 104. Examples of the external data center(s) may include, but are not limited to, an Oracle Database, an Amazon Web Services (AWS) database, and the like. In some aspects of the present disclosure, the server memory 122 may further be configured to temporarily store data from the external data center(s).
[0046] The network 106 may include suitable logic, circuitry, and interfaces that may be configured to provide several network ports and several communication channels for transmission and reception of data related to operations of various entities of the system 100. Each network port may correspond to a virtual address (or a physical machine address) for transmission and reception of the communication data. For example, the virtual address may be an Internet Protocol Version 4 (IPv4) (or an IPv6) address and the physical address may be a Media Access Control (MAC) address. The network 106 may be associated with an application layer for implementation of communication protocols based on communication requests from the various entities of the system 100. The communication data may be transmitted or received via the communication protocols. Examples of the communication protocols may include, but are not limited to, Hypertext Transfer Protocol (HTTP), File Transfer Protocol (FTP), Simple Mail Transfer Protocol (SMTP), Domain Name System (DNS) protocol, Common Management Interface Protocol (CMIP), Transmission Control Protocol and Internet Protocol (TCP/IP), User Datagram Protocol (UDP), Long Term Evolution (LTE) communication protocols, or any combination thereof. In some aspects of the present disclosure, the communication data may be transmitted or received via at least one communication channel of several communication channels in the network 106. The communication channels may include, but are not limited to, a wireless channel, a wired channel, or a combination thereof. The wireless or wired channel may be associated with a data standard which may be defined by one of a Local Area Network (LAN), a Personal Area Network (PAN), a Wireless Local Area Network (WLAN), a Wireless Sensor Network (WSN), a Wide Area Network (WAN), a Wireless Wide Area Network (WWAN), a Metropolitan Area Network (MAN), a satellite network, the Internet, an optical fiber network, a coaxial cable network, an infrared (IR) network, a radio frequency (RF) network, or a combination thereof. Aspects of the present disclosure are intended to include or otherwise cover any type of communication channel, including known, related art, and/or later developed technologies.
[0047] The performance test servers 108 may include circuitry, logic, interfaces, and code(s) to implement one of a network of computers, a software framework, or a combination thereof, that may provide a generalized approach to create server implementation(s). Each performance test server 108 is configured to provide performance test service(s) based on which the performance test servers 108 may determine performance parameters for the networking devices 103, indicating their operational performance in the network 106. Examples of the performance test service(s) may include, but are not limited to, a speed test, a ping test, a web performance test, a trace route test, a memory test, a Wireless Local Area Network (WLAN) test, and a Local Area Network (LAN) test.
[0048] Although, in the presented aspect of the present disclosure, FIG. 1 illustrates the presence of three performance test servers 108 (i.e., the first through third performance test servers 108a-108c), it will be apparent to a person of ordinary skill in the art that the scope of the present disclosure is not limited thereto. In various other aspects, the system 100 may include any number of performance test servers 108 without deviating from the scope of the present disclosure. In such a scenario, each performance test server 108 may be structurally and functionally similar to the first through third performance test servers 108a-108c as disclosed herein.
[0049] In operation, the data processing server 104 receives the user input from the user device 102. The user input comprises the device identifier of the networking device 103, the test request to perform performance test(s) on the networking device 103, and the temporal value for performing the performance test(s) on the networking device 103. The data processing server 104 further determines, in response to a determination that the device identifier matches with one of the predefined device tags, whether the temporal value is associated with a real time instance. Furthermore, the data processing server 104 identifies, in response to the determination that the temporal value is associated with the real time instance, at least one performance test server of the performance test servers 108 corresponding to the performance test(s) associated with the request. Furthermore, the data processing server 104 generates a trigger signal for the identified performance test server(s) 108 (hereinafter interchangeably referred to and designated as ‘the performance test server(s) 108’). The trigger signal enables the performance test server(s) 108 to determine real time value(s) of performance parameter(s) corresponding to an identified networking device 103. Furthermore, in response to the trigger signal, the data processing server 104 receives a first test result from the performance test server(s) 108. The first test result corresponds to the real time value(s) of the performance parameter(s) associated with the identified networking device 103 (hereinafter interchangeably referred to and designated as ‘the networking device 103’). Furthermore, the data processing server 104 generates a first output signal to enable the user device 102 for rendering the first test result.
[0050] In another scenario, when the data processing server 104 determines that the device identifier mismatches with each of the predefined device tags, the data processing server 104 generates an error signal for the user device 102 that enables the user device 102 to render an error notification to the user.
[0051] In yet another scenario, when the data processing server 104 determines that the temporal value is associated with a historical instance, the data processing server 104 retrieves a second test result from the server memory 122. The second test result comprises historical value(s) of the performance parameter(s) associated with the networking device 103 corresponding to the historical instance. Furthermore, the data processing server 104 generates a second output signal to enable the user device 102 for rendering the second test results to the user.
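By way of non-limiting illustration only, the following Python sketch outlines the operational flow described above, i.e., validating the device identifier, branching on the temporal value, and returning either the first test result or the second test result. All function names, data structures, and values (e.g., handle_user_input, PREDEFINED_DEVICE_TAGS) are hypothetical assumptions introduced solely for explanatory purposes and do not form part of the disclosed system 100.

from dataclasses import dataclass
from typing import Optional

# Assumed registry of predefined device tags stored in the server memory 122.
PREDEFINED_DEVICE_TAGS = {"ONT-0001", "ONT-0002", "ONT-0003"}


@dataclass
class UserInput:
    device_identifier: str      # identifier of the networking device 103
    requested_tests: list       # e.g., ["speed", "ping"]
    temporal_value: str         # "real_time" or a historical timestamp


def run_performance_tests(user_input: UserInput) -> dict:
    # Placeholder for triggering the identified performance test server(s) 108.
    return {test: "real-time value" for test in user_input.requested_tests}


def fetch_historical_result(user_input: UserInput) -> Optional[dict]:
    # Placeholder for a timestamp-indexed lookup in the server memory 122.
    return None


def handle_user_input(user_input: UserInput) -> dict:
    # Validate the device identifier against the predefined device tags.
    if user_input.device_identifier not in PREDEFINED_DEVICE_TAGS:
        return {"error": "invalid device identifier"}        # error notification

    # Branch on whether the request targets a real time or a historical instance.
    if user_input.temporal_value == "real_time":
        return {"first_test_result": run_performance_tests(user_input)}
    historical = fetch_historical_result(user_input)
    if historical is None:
        return {"notice": "historical data not available"}
    return {"second_test_result": historical}


print(handle_user_input(UserInput("ONT-0001", ["speed", "ping"], "real_time")))
print(handle_user_input(UserInput("XYZ-9999", ["speed"], "real_time")))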
[0052] Although FIG. 1 illustrates one example of the system 100, various changes may be made to FIG. 1. Further, the system 100 may include any number of components in addition to the components shown in FIG. 1. Further, various components in FIG. 1 may be combined, further subdivided, or omitted and additional components may be added according to particular needs.
[0053] FIG. 2 illustrates a block diagram depicting the data processing server 104, in accordance with an exemplary embodiment of the present disclosure. The data processing server 104 may include the data processing circuitry 120, the server memory 122, a communication interface 200, and a console host 202 coupled to each other via a first communication bus 203.
[0054] The communication interface 200 may be configured to enable the data processing server 104 to communicate with various other entities of the system 100 via the network 106. Examples of the communication interface 200 may include, but are not limited to, a MODEM, a network interface such as an Ethernet card, a communication port, and/or a Personal Computer Memory Card International Association (PCMCIA) slot and card, an antenna, a radio frequency (RF) transceiver, amplifier(s), a tuner, oscillator(s), a digital signal processor, a coder-decoder (CODEC) chipset, a Subscriber Identity Module (SIM) card, and a local buffer circuit. It will be apparent to a person of ordinary skill in the art that the communication interface 200 may include any device and/or apparatus capable of providing wireless or wired communications between the data processing server 104 and various other entities of the system 100.
[0055] The console host 202 may include suitable logic, circuitry, interfaces, and/or codes that may be configured to enable the communication interface 200 to receive input(s) and/or present output(s). In some aspects of the present disclosure, the console host 202 may include suitable logic, instructions, and/or codes for executing various operations of computer executable applications to host the application console 116 on the user device 102, by way of which a user can trigger the data processing server 104 to monitor the performance of the networking device 103. In some other aspects of the present disclosure, the console host 202 may provide a Graphical User Interface (GUI) for the data processing server 104 for user interaction.
[0056] The data processing circuitry 120 may include data processor(s) (e.g., data processing engines) as presented in FIG. 2. According to an exemplary embodiment, the data processing circuitry 120 may include a data exchange engine 204, a device identification engine 206, a temporal detection engine 208, a test server engine 210, a historical data engine 212, and a report generation engine 214 coupled to each other by way of a second communication bus 226.
[0057] The data exchange engine 204 may be configured to enable transfer of data from the server memory 122 to various engines of the data processing circuitry 120. The data exchange engine 204 may further be configured to enable a transfer of data and/or instructions (by way of signal(s)) between various other engines of the data processing circuitry 120. Furthermore, the data exchange engine 204 may be configured to enable the data processing server 104 to receive the user input from the user device 102. More particularly, the data exchange engine 204 may enable the data processing circuitry 120 to receive the user input from the user device 102. The user input comprises the device identifier, the temporal value, and the request to initiate performance test(s). Preferably, a user interface may be generated through the application console 116 of the user device 102, which may be hosted by the console host 202. The user interface may allow a user of the user device 102 to select a networking device 103 from the networking devices 103 in the communication network. Additionally, the user may be enabled to provide input(s) for monitoring the performance of the selected networking device 103, which may be received by the data processing server 104 through the data exchange engine 204.
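By way of non-limiting illustration, the following sketch shows one assumed form of the user input exchanged between the user device 102 and the data exchange engine 204. The field names, values, and the JSON serialization are illustrative assumptions only; the present disclosure is not limited to any particular encoding.

import json

# Hypothetical user input payload; field names and values are illustrative only.
user_input_payload = {
    "device_identifier": "ONT-0001",           # identifier of the networking device 103
    "test_request": ["speed", "ping"],         # requested performance test(s)
    "temporal_value": "2024-05-01T10:30:00Z",  # real time marker or historical timestamp
}

# The payload may, for example, be serialized as JSON for transmission over the network 106.
encoded = json.dumps(user_input_payload)
decoded = json.loads(encoded)
assert decoded["device_identifier"] == "ONT-0001"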
[0058] The device identification engine 206 may be configured to receive the device identifier from the data exchange engine 204. Based on the device identifier, the device identification engine 206 may determine whether the device identifier received from the user device 102 is valid or invalid. Preferably, to check the validity of the device identifier, the device identification engine 206 may retrieve multiple predefined device tags from the server memory 122. Each of the predefined device tags is associated with a networking device 103 registered with the data processing server 104. The device identification engine 206 may further compare the device identifier with each of the predefined device tags. Each tag of the predefined device tags is unique.
[0059] In a scenario, when the device identifier matches with one of the predefined device tags, the device identification engine 206 may validate the device identifier. The device identification engine 206 may also identify the matching networking device from the networking devices 103 based on the match of the device identifier with one of the predefined device tags. Upon identifying that the device identifier is valid, the device identification engine 206 may generate a valid identifier signal for the data exchange engine 204, which enables the data exchange engine 204 to provide the temporal value to the temporal detection engine 208. Moreover, the device identification engine 206 may also send an identity of the networking device 103 associated with the device identifier (derived through the matched device tag) to the temporal detection engine 208 for further operations.
[0060] In another scenario, when the device identifier mismatches with all of the predefined device tags, the device identification engine 206 may identify the device identifier as invalid. Upon identification of the device identifier as invalid, the device identification engine 206 may generate the error signal for the user device 102. The device identification engine 206 may further transmit the error signal to the user device 102 to enable the user device 102 for rendering the error notification.
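The validation performed by the device identification engine 206 may be understood through the following non-limiting Python sketch, in which the tag registry, identifiers, and function names are hypothetical assumptions for explanation only.

# Assumed registry mapping predefined device tags to networking device identities.
DEVICE_TAG_REGISTRY = {
    "ONT-0001": "first networking device 103a",
    "ONT-0002": "second networking device 103b",
    "ONT-0003": "third networking device 103c",
}


def validate_device_identifier(device_identifier: str) -> dict:
    # Compare the received identifier with each predefined device tag.
    if device_identifier in DEVICE_TAG_REGISTRY:
        # Match: emit a valid-identifier signal together with the device identity.
        return {"signal": "valid_identifier",
                "identity": DEVICE_TAG_REGISTRY[device_identifier]}
    # Mismatch with every tag: emit an error signal for the user device 102.
    return {"signal": "error", "notification": "unknown networking device"}


print(validate_device_identifier("ONT-0002"))   # valid identifier
print(validate_device_identifier("XYZ-9999"))   # error notification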
[0061] The temporal detection engine 208 may be configured to receive the temporal value from the data exchange engine 204 and the identity of the networking device 103 from the device identification engine 206. Based on the temporal value and the identity, the temporal detection engine 208 may be configured to analyze the temporal value to identify the time instance (or time frame) for which the user intends to retrieve performance information of the networking device 103.
[0062] In a scenario, when the temporal value is associated with the real time instance, the temporal detection engine 208 may generate a performance test signal for the test server engine 210. The temporal detection engine 208 may further generate a test request signal for the data exchange engine 204 that enables the data exchange engine 204 to transmit the request for performance test(s) to the test server engine 210. Moreover, the temporal detection engine 208 may further send the identity of the networking device 103 to the test server engine 210 for further operations.
[0063] In another scenario, when the temporal value is associated with a historical instance, the temporal detection engine 208 may generate a data fetch signal for the historical data engine 212. Moreover, the temporal detection engine 208 may further send the identity of the networking device 103 to the historical data engine 212 for further operations.
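The branching performed by the temporal detection engine 208 may be illustrated by the following non-limiting sketch; the "real_time" marker, the timestamp format, and the signal names are assumptions made solely for explanatory purposes.

from datetime import datetime, timezone


def classify_temporal_value(temporal_value: str) -> str:
    # A dedicated "real_time" marker denotes a real time instance; an earlier
    # timestamp is treated as a historical instance.
    if temporal_value == "real_time":
        return "performance_test_signal"   # forwarded to the test server engine 210
    requested = datetime.fromisoformat(temporal_value)
    if requested >= datetime.now(timezone.utc):
        return "performance_test_signal"
    return "data_fetch_signal"             # forwarded to the historical data engine 212


print(classify_temporal_value("real_time"))
print(classify_temporal_value("2023-01-01T00:00:00+00:00"))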
[0064] The test server engine 210 may be configured to receive the request for performance test(s) from the data exchange engine 204 and the identity of the networking device 103 from the temporal detection engine 208. The test server engine 210 may further be configured to identify performance test server(s) 108 from the performance test servers 108, based on the request for performance test(s) and the identity of the networking device 103. In some aspects of the present disclosure, the test server engine 210 may determine performance test service(s) from multiple performance test services, corresponding to the request for performance test(s). Each performance test server 108 may be associated with performance test service(s). The test server engine 210 may identify the performance test service(s) associated with each performance test server 108 and may identify the performance test server(s) 108 best suited for the required performance test service(s) intended for the networking device 103. Specifically, the details of the performance test service(s) may be tagged with the corresponding performance test server(s) 108 and may be stored as an entry (e.g., in the form of a look-up table) in the server memory 122. The test server engine 210 may identify the desired performance test service(s) from the user input. Further, the test server engine 210 may retrieve the look-up table from the server memory 122 and may identify the performance test server(s) matching with the desired performance test service(s) in the user input. Examples of the performance test services may include, but are not limited to, a speed test, a ping test, a web performance test, a trace route test, a memory test, a Wireless Local Area Network (WLAN) test, and a Local Area Network (LAN) test.
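The look-up table described above may, for example, be realized as illustrated in the following non-limiting sketch, in which the mapping entries, server labels, and function names are hypothetical assumptions.

# Assumed look-up table mapping performance test services to performance test servers 108.
TEST_SERVER_LOOKUP = {
    "speed": "108a", "ping": "108a",
    "trace_route": "108b", "memory": "108b",
    "wlan": "108c", "lan": "108c",
}


def identify_test_servers(requested_services: list) -> dict:
    # Resolve each requested service to the performance test server tagged for it.
    selected = {}
    for service in requested_services:
        server = TEST_SERVER_LOOKUP.get(service)
        if server is not None:
            selected.setdefault(server, []).append(service)
    return selected


# e.g., {'108a': ['speed', 'ping'], '108c': ['wlan']}
print(identify_test_servers(["speed", "ping", "wlan"]))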
[0065] The test server engine 210 may further generate the trigger signal for the identified performance test server(s) 108. Furthermore, the test server engine 210 may transmit the trigger signal to the performance test server(s) 108 for initiation of the performance test(s) for the networking device 103. Preferably, the test server engine 210, upon identification of the desired performance test service(s) from the user input and the suitable performance test server(s) 108 (for performing the desired performance test service(s)), may generate trigger signals for each of the identified performance test server(s) 108. The trigger signal for each identified performance test server may include instructions to enable at least one of the desired performance test service(s). In response to the initiation of the performance test(s), the performance test server(s) 108 may determine real time value(s) of parameter(s) associated with the performance test service(s). The phrase “real time value(s) of the parameter(s)” as used herein refers to the instantaneous value(s) of the parameter(s) determined by the performance test server(s) 108 in response to the trigger signal from the test server engine 210. The performance test server(s) 108 may further generate the first test result by cumulating the real time value(s) of the parameter(s) associated with the performance test service(s) determined by the performance test server(s) 108. The test server engine 210 may also be configured to receive the first test result from the performance test server(s) 108. Further, the test server engine 210 may be configured to generate the first output signal for the user device 102. Furthermore, the test server engine 210 may be configured to transmit the first output signal to the user device 102, to enable the user device 102 for displaying the first test result to the user.
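By way of non-limiting illustration, the following sketch shows how trigger signals may be generated for the identified performance test server(s) 108 and how their real time values may be cumulated into the first test result; all structures, function names, and values shown are assumptions for explanation only.

def generate_trigger_signals(selected_servers: dict, device_identity: str) -> list:
    # One trigger signal per identified performance test server 108, carrying the
    # device identity and the performance test service(s) that server should run.
    return [{"server": server, "device": device_identity, "services": services}
            for server, services in selected_servers.items()]


def cumulate_first_test_result(server_responses: list) -> dict:
    # Merge the real time parameter values returned by each performance test
    # server into a single first test result for rendering on the user device 102.
    first_test_result = {}
    for response in server_responses:
        first_test_result.update(response)
    return first_test_result


triggers = generate_trigger_signals({"108a": ["speed", "ping"]}, "ONT-0001")
responses = [{"speed_mbps": 94.2}, {"ping_ms": 11.8}]   # assumed server replies
print(triggers)
print(cumulate_first_test_result(responses))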
[0066] The historical data engine 212 may be configured to receive the temporal value and the identity of the networking device 103. The historical data engine 212 may further be configured to check the availability of historical data for the historical instance associated with the temporal value.
[0067] In a scenario, when the historical data for the historical instance is available (i.e., an entry corresponding to the historical instance is present in the server memory 122), the historical data engine 212 may be configured to enable the test server engine 210 to fetch the second test result from the server memory 122 corresponding to the historical instance. The second test result may include historical values of the performance parameter(s) associated with the historical instance for the networking device 103. In some aspects of the present disclosure, historical entries for the historical instances are stored in the server memory 122 with corresponding timestamps. The test server engine 210 may identify a historical entry based on the timestamp corresponding to the historical instance. The test server engine 210 may further generate the second output signal for the user device 102 that enables the user device 102 for rendering the second test result to the user.
[0068] In another scenario, when the historical data corresponding to the historical instance is unavailable (i.e., the entry corresponding to the historical instance is absent in the server memory 122), the test server engine 210 may generate a non-availability signal for the user device 102 that enables the user device 102 for rendering a notification for non-availability of the historical data requested by the user.
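The retrieval of the second test result from the server memory 122 based on a timestamp, and the handling of non-availability, may be illustrated by the following non-limiting sketch; the storage structure, keys, and values are hypothetical assumptions.

# Assumed timestamp-indexed store of historical entries in the server memory 122.
HISTORICAL_RESULTS = {
    ("ONT-0001", "2024-04-01T10:00:00"): {"speed_mbps": 88.1, "ping_ms": 13.0},
}


def fetch_second_test_result(device_identifier: str, timestamp: str) -> dict:
    entry = HISTORICAL_RESULTS.get((device_identifier, timestamp))
    if entry is None:
        # No matching historical entry: a non-availability notification is rendered.
        return {"signal": "non_availability"}
    # Matching entry found: the second output signal carries the historical values.
    return {"signal": "second_output", "second_test_result": entry}


print(fetch_second_test_result("ONT-0001", "2024-04-01T10:00:00"))
print(fetch_second_test_result("ONT-0001", "2024-03-15T09:00:00"))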
[0069] The report generation engine 214 may be configured to generate report(s) for each scenario (as described hereinabove) and store the report(s) into the server memory 122 for future reference.
[0070] Various engines of the data processing circuitry 120 are presented to illustrate the functionality driven by the data processing server 104. It will be apparent to a person having ordinary skill in the art that various engines in the data processing circuitry 120 are for illustrative purposes and not limited to any specific combination of hardware circuitry and/or software.
[0071] The server memory 122 may be configured to store data corresponding to the system 100. In some aspects of the present disclosure, the server memory 122 may be segregated into multiple repositories, each of which may be configured to store a specific type of data. In the exemplary embodiment as presented through FIG. 2, the server memory 122 includes an instructions repository 216, a device identifier repository 218, a test server data repository 220, and a test results repository 222.
[0072] The instructions repository 216 may be configured to store instructions and/or codes for operation(s) to be performed by various components of the data processing server 104. The device identifier repository 218 may be configured to store the predefined device tags. The test server data repository 220 may be configured to store details of the various performance test servers 108 such as specifications, locations, supported service(s), etc. The test results repository 222 may be configured to store the values of the performance test(s), such as the first and second test results, historical data, and the like.
[0073] According to an embodiment of the present disclosure, the instructions repository 216 may be configured to store computer program instructions corresponding to the operation(s) performed by various engines in the data processing circuitry 120. In an embodiment of the present disclosure, the instructions repository 216 may be configured as a non-transitory storage medium. Examples of the instructions repository 216 configured as the non-transitory storage medium include hard drives, solid-state drives, flash drives, Compact Disks (CDs), Digital Video Disks (DVDs), and the like. Aspects of the present disclosure are intended to include or otherwise cover any type of non-transitory storage medium as the instructions repository 216, without deviating from the scope of the present disclosure. As will be appreciated, any such computer program instructions stored in the instructions repository 216 may be executed by one or more computer processors, including without limitation a general-purpose computer or special purpose computer, or other programmable processing apparatus to produce a machine, such that the computer program instructions which execute on the computer processor(s) or other programmable processing apparatus create means for implementing the function(s) specified.
[0074] It will be apparent to a person of ordinary skill in the art that the repositories in the server memory 122 are presented based on the functionality of the data processing server 104 and are not limited to those disclosed. The server memory 122 may have any configuration, combination and/or count of repositories without deviating from the scope of the present disclosure. Although FIG. 2 illustrates one example of the data processing server 104, various changes may be made to FIG. 2. Further, the data processing server 104 may include any number of components in addition to those shown in FIG. 2, without deviating from the scope of the present disclosure. Further, various components in FIG. 2 may be combined, further subdivided, or omitted and additional components may be added according to particular needs.
[0075] FIG. 3 illustrates a block diagram depicting a functional environment 300 of a Network Management System (NMS), in accordance with an exemplary embodiment of the present disclosure. The functional environment 300 may include a network 302, users including Intranet users 304-1 and Internet users 304-2 (hereinafter collectively referred to as “end users 304”), a shared load balancer 306, a web layer 308, a web server 310, a first application layer 312, application servers 314 including an application server for Intranet users 314-1 and an application server for Internet users 314-2, a second application layer 316, framework servers 318 including a framework server for Intranet users 318-1 and a framework server for Internet users 318-2, a database layer 320, a cluster of nodes 322, a first primary database 324-1, a second primary database 324-2 (hereinafter collectively referred to as the “database 324”), and a secondary database 326. The secondary database 326 may store updated data of the database 324. Further, the load balancer 306 may distribute incoming network traffic across multiple servers, thereby preventing any single server from being overloaded.
[0076] It should be noted that the network 302 as shown in FIG. 3 is similar to the network 106 of FIG. 1, and the application server 314 as shown in FIG. 3 is similar to the data processing server 104 of FIG. 1. The database 324 is similar to the server memory 122. Therefore, a detailed description of the same is omitted herein for the sake of brevity of the present disclosure.
[0077] In an embodiment, the web layer 308 may serve as an entry point for users accessing a speed test service from both the intranet and the internet. The web layer 308 may provide the user interface where the users may initiate the performance test(s) for the networking device 103 and view the results. The first and the second application layers 312, 316 (hereinafter also collectively referred to as ‘the application layers 312, 316’) may be used for handling authentication and encryption. The authentication may ensure that only authorized users can access the system, protecting it from unauthorized access.
[0078] The encryption may ensure that data transmitted between the user and the data processing server 104 is secure and cannot be easily intercepted or read by malicious entities. The database layer 320 may utilize different databases to store detailed information associated with the system 100.
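For illustration only, the following is a minimal sketch of the layered flow described in the two preceding paragraphs: the web layer accepts a request, the application layer authenticates it, and the database layer serves stored results. The token check, the stand-in stores, and all function names are hypothetical assumptions for this sketch and are not part of the present disclosure.

```python
# Minimal sketch of the web layer 308 / application layers 312, 316 / database layer 320.
AUTHORIZED_TOKENS = {"user-token-123"}            # application-layer credential store (hypothetical)
RESULT_STORE = {"ONT-0001": {"speed_mbps": 94}}   # database-layer stand-in (hypothetical)


def database_layer(device_id: str) -> dict:
    # Database layer: returns stored information for the requested device.
    return RESULT_STORE.get(device_id, {})


def application_layer(token: str, device_id: str) -> dict:
    # Application layer: authentication ensures only authorized users proceed.
    if token not in AUTHORIZED_TOKENS:
        raise PermissionError("unauthorized user")
    return database_layer(device_id)


def web_layer(token: str, device_id: str) -> dict:
    # Web layer: entry point where the user initiates the test and views results.
    return application_layer(token, device_id)


print(web_layer("user-token-123", "ONT-0001"))
```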
[0079] FIG. 4 presents a flow chart that depicts a method 400 for monitoring performance of the networking devices 103 in the communication network 106, in accordance with an exemplary embodiment of the present disclosure.
[0080] At block 402, the data processing server 104 may receive the user input from the user device 102. The user input comprises the device identifier of the networking device 103, the test request to perform performance test(s) on the networking device 103, and a temporal value for performing the at least one test on the networking device 103.
[0081] In some aspects of the present disclosure, the device identifier may correspond to the networking device 103. The temporal value may correspond to a particular time (or a time frame) at which the user of the user device 102 intends to check the performance of the networking device 103. The request to initiate the performance test(s) may correspond to test(s) that the user intends to perform on the networking device 103 to evaluate its performance.
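For illustration only, the following is a minimal sketch of the three-part user input received at block 402. The field names and the convention of encoding a real time request as a missing timestamp are hypothetical assumptions for this sketch and are not part of the present disclosure.

```python
# Minimal sketch of the user input: device identifier, test request, temporal value.
from dataclasses import dataclass
from datetime import datetime
from typing import Optional


@dataclass
class UserInput:
    device_identifier: str              # identifies the networking device 103
    test_request: list[str]             # e.g. ["speed test", "ping test"]
    temporal_value: Optional[datetime]  # None -> real time instance; timestamp -> historical instance


real_time_request = UserInput("ONT-0001", ["speed test"], None)
historical_request = UserInput("ONT-0001", ["ping test"], datetime(2024, 4, 22, 10, 0))
```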
[0082] At block 404, the data processing server 104 may determine whether the device identifier received from the user device 102 is valid or invalid. When the device identifier is determined as valid, the method 400 proceeds to block 408. Otherwise, when the device identifier is determined as invalid, the method 400 proceeds to block 406.
[0083] In some aspects of the present disclosure, to check the validity of the device identifier, the data processing server 104 may retrieve the predefined device tags. Each of the predefined device tags is associated with a networking device 103 registered with the system 100. The data processing server 104 may further compare the device identifier with each of the predefined device tags. In the scenario where the device identifier matches any of the predefined device tags, the data processing server 104 validates the device identifier and retrieves information of the networking device 103 from the matched tag. In the other scenario, where the device identifier does not match any of the predefined device tags, the data processing server 104 identifies the device identifier as invalid.
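For illustration only, the following is a minimal sketch of the validity check at blocks 404-408: the device identifier is compared against the predefined device tags, a match yields the registered device record, and a mismatch is treated as invalid. The tag registry contents and function names are hypothetical assumptions for this sketch and are not part of the present disclosure.

```python
# Minimal sketch of device identifier validation against predefined device tags.
from typing import Optional

PREDEFINED_DEVICE_TAGS = {
    "ONT-0001": {"model": "XGS-PON ONT", "location": "Ahmedabad"},
    "ONT-0002": {"model": "GPON ONT", "location": "Mumbai"},
}


def validate_device_identifier(device_identifier: str) -> Optional[dict]:
    # Compare the received identifier with the predefined tags (block 404).
    record = PREDEFINED_DEVICE_TAGS.get(device_identifier)
    if record is None:
        # Mismatch with every tag: block 406 would generate an error signal.
        return None
    # Match: block 408 identifies the registered networking device.
    return record


assert validate_device_identifier("ONT-0001") is not None
assert validate_device_identifier("ONT-9999") is None
```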
[0084] At block 406, the data processing server 104 may generate the error signal that represents invalidity of the device identifier. The data processing server 104 may further transmit the error signal to the user device 102 to enable the user device 102 for rendering the error notification.
[0085] At block 408, the data processing server 104 may identify the networking device 103 from the networking devices 103 based on the match of the device identifier with one of the predefined device tags.
[0086] At block 410, the data processing server 104 may retrieve the temporal value for performing the performance test(s) on the networking device 103 from the user input.
[0087] At block 412, the data processing server 104 may analyze the temporal value retrieved from the user input to identify the time instance (or the time frame) associated with the temporal value, for which the user of the user device 102 intends to retrieve performance information of the networking device 103. When the temporal value is associated with the real time instance, the method 400 proceeds to block 414. Otherwise, when the temporal value is associated with the historical instance, the method 400 proceeds to block 424.
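For illustration only, the following is a minimal sketch of the branch at block 412. Treating an absent timestamp as the real time instance, and the function name, are hypothetical assumptions for this sketch and are not part of the present disclosure.

```python
# Minimal sketch of routing on the temporal value (block 412).
from datetime import datetime
from typing import Optional


def route_by_temporal_value(temporal_value: Optional[datetime]) -> str:
    if temporal_value is None:
        return "real_time_path"   # blocks 414-422: trigger live performance test(s)
    return "historical_path"      # blocks 424-430: look up stored results


print(route_by_temporal_value(None))                   # real_time_path
print(route_by_temporal_value(datetime(2024, 4, 22)))  # historical_path
```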
[0088] At block 414, the data processing server 104 may retrieve the request for performing the performance test(s) on the networking device 103 from the user input.
[0089] At block 416, the data processing server 104 may identify the performance test server(s) 108 based on the request for the performance test(s) retrieved from the user input. In some aspects of the present disclosure, the data processing server 104 may determine the performance test service(s), from the performance test services registered with the data processing server 104, corresponding to the requested performance test(s). Each performance test server 108 may be associated with one or more performance test services. The data processing server 104 may identify the performance test service(s) associated with each performance test server 108 and may identify the performance test server(s) 108 best suited to the required performance test service(s) for the networking device 103. In some aspects of the present disclosure, the performance test services may include, but are not limited to, a speed test, a ping test, a web performance test, a trace route test, a memory test, a Wireless Local Area Network (WLAN) test, and a Local Area Network (LAN) test.
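For illustration only, the following is a minimal sketch of the selection at block 416: each registered performance test server advertises the services it offers, and the server(s) covering the requested test(s) are selected. The registry contents, server identifiers, and selection rule are hypothetical assumptions for this sketch and are not part of the present disclosure.

```python
# Minimal sketch of identifying performance test server(s) 108 for the requested test(s).
TEST_SERVER_REGISTRY = {
    "speedtest-01": {"services": {"speed test", "ping test"}, "location": "Mumbai"},
    "webperf-01": {"services": {"web performance test", "trace route test"}, "location": "Delhi"},
    "lan-01": {"services": {"WLAN test", "LAN test", "memory test"}, "location": "Ahmedabad"},
}


def identify_test_servers(requested_tests: set[str]) -> dict[str, set[str]]:
    # Map each matching server to the subset of requested tests it can run.
    selected: dict[str, set[str]] = {}
    for server_id, spec in TEST_SERVER_REGISTRY.items():
        covered = spec["services"] & requested_tests
        if covered:
            selected[server_id] = covered
    return selected


print(identify_test_servers({"speed test", "LAN test"}))
# {'speedtest-01': {'speed test'}, 'lan-01': {'LAN test'}}
```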
[0090] At block 418, the data processing server 104 may generate the trigger signal for the identified performance test server(s) 108. The data processing server 104 may further transmit the trigger signal to the performance test server(s) 108 to initiate the performance test(s) for the networking device 103. In response to the initiation of the performance test(s), the performance test server(s) 108 may determine real time value(s) of the parameter(s) associated with the performance test service(s). The performance test server(s) 108 may further generate the first test result by aggregating the determined real time value(s) of the parameter(s) associated with the performance test service(s).
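For illustration only, the following is a minimal sketch of blocks 418 to 420: each selected server is triggered, returns real time parameter values, and those values are aggregated into the first test result. The measurement values, parameter names, and function names are hypothetical placeholders for this sketch and are not part of the present disclosure.

```python
# Minimal sketch of triggering tests and aggregating the first test result.
def run_performance_test(server_id: str, test_name: str) -> dict:
    # Stand-in for a performance test server's real-time measurement; values are placeholders.
    simulated = {
        "speed test": {"download_mbps": 94.2, "upload_mbps": 41.7},
        "ping test": {"latency_ms": 6.3, "packet_loss_pct": 0.0},
    }
    return simulated.get(test_name, {})


def build_first_test_result(selected_servers: dict[str, set[str]]) -> dict:
    # Trigger each selected server and aggregate the returned parameter values
    # into a single first test result keyed by test name.
    result: dict = {}
    for server_id, tests in selected_servers.items():
        for test_name in tests:
            result.setdefault(test_name, {}).update(run_performance_test(server_id, test_name))
    return result


print(build_first_test_result({"speedtest-01": {"speed test", "ping test"}}))
```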
[0091] At block 420, the data processing server 104 may receive the first test result from the performance test server(s) 108.
[0092] At block 422, the data processing server 104 may generate the first output signal for the user device 102. The data processing server 104 may further transmit the first output signal to the user device 102 to enable the user device 102 for rendering the first test result to the user.
[0093] At block 424, the data processing server 104 may check the availability of historical data associated with the historical instance indicated by the temporal value. When the historical data for the historical instance is available (i.e., an entry corresponding to the historical instance is present in the server memory 122), the method 400 proceeds to block 428. Otherwise, when the historical data corresponding to the historical instance is unavailable (i.e., no entry corresponding to the historical instance is present in the server memory 122), the method 400 proceeds to block 426.
[0094] At block 426, upon determination of non-availability of the historical data corresponding to the historical instance, the data processing server 104 may generate the non-availability signal for the user device 102. The data processing server 104 may further transmit the non-availability signal to the user device 102 to enable the user device 102 to render the notification of non-availability of the historical data requested by the user.
[0095] At block 428, upon determination of availability of the historical data corresponding to the historical instance, the data processing server 104 may retrieve the second test result from the server memory 122 corresponding to the historical instance. The second test result may include historical values of the performance parameter(s) associated with the historical instance for the networking device 103. In some aspects of the present disclosure, historical entries for the historical instances are stored in the server memory 122 with corresponding timestamps. The data processing server 104 may identify a historical entry based on the timestamp corresponding to the historical instance.
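For illustration only, the following is a minimal sketch of blocks 424 to 428: historical entries are stored with timestamps, availability is checked first, and the matching entry is returned as the second test result. The storage layout, timestamps, and exact-match lookup rule are hypothetical assumptions for this sketch and are not part of the present disclosure.

```python
# Minimal sketch of retrieving the second test result by timestamp.
from datetime import datetime
from typing import Optional

HISTORICAL_ENTRIES = {
    datetime(2024, 4, 22, 10, 0): {"download_mbps": 88.5, "latency_ms": 7.1},
    datetime(2024, 4, 23, 10, 0): {"download_mbps": 91.0, "latency_ms": 6.8},
}


def retrieve_second_test_result(temporal_value: datetime) -> Optional[dict]:
    # Block 424: availability check; a miss corresponds to the non-availability
    # signal of block 426, a hit to the retrieval of block 428.
    return HISTORICAL_ENTRIES.get(temporal_value)


print(retrieve_second_test_result(datetime(2024, 4, 22, 10, 0)))  # stored values
print(retrieve_second_test_result(datetime(2024, 1, 1)))          # None -> non-availability
```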
[0096] At block 430, the data processing server 104 may generate the second output signal for the user device 102. The data processing server 104 may further transmit the second output signal to the user device 102 to enable the user device 102 for rendering the second test result to the user.
[0097] Now, referring to the technical abilities and advantageous effects of the present disclosure, operational advantages that may be provided by the above disclosed system 100 and method 400 include monitoring and diagnosing the performance of the networking devices 103 by performing test(s) on the networking devices 103 to identify factors degrading the performance of the networking device(s) 103. Another potential advantage of the one or more embodiments includes facilitating the system 100 to take remedial actions based on the test results (i.e., the first and second test results), thereby improving the performance of the networking device(s) 103 and the Quality of Service (QoS) of the network 106. Moreover, improved performance and QoS of the network 106 result in nearly flawless network operations that enhance customer experience and significantly reduce expenses on network maintenance and operations.
[0098] Those skilled in the art will appreciate that the methodology described herein in the present disclosure may be carried out in other specific ways than those set forth herein in the above disclosed embodiments without departing from essential characteristics and features of the present invention. The above-described embodiments are therefore to be construed in all aspects as illustrative and not restrictive.
[0099] The drawings and the foregoing description give examples of embodiments. Those skilled in the art will appreciate that one or more of the described elements may well be combined into a single functional element. Alternatively, certain elements may be split into multiple functional elements. Elements from one embodiment may be added to another embodiment. For example, the orders of processes described herein may be changed and are not limited to the manner described herein. Any combination of the above features and functionalities may be used in accordance with one or more embodiments.
[00100] In the present disclosure, each of the embodiments has been described with reference to numerous specific details which may vary from embodiment to embodiment. The foregoing description of the specific embodiments disclosed herein may reveal the general nature of the embodiments herein that others may, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications are intended to be comprehended within the meaning of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and is not limited in scope.
CLAIMS:
We Claim:
1. A method (400) for monitoring performance of a networking device (103) in a communication network (106), the method (400) comprising:
receiving, by a data exchange engine (204) from a user device (102), a user input for monitoring the performance of the networking device (103);
identifying, by a test server engine (210) in response to a determination that the user input is associated with the real time instance, at least one performance test server (108) from a plurality of performance test servers (108) corresponding to the at least one performance test, based on the user input;
generating, by the test server engine (210), a trigger signal for the at least one performance test server (108) to perform at least one performance test on the networking device (103);
receiving, by the test server engine (210) in response to the trigger signal from the at least one performance test server (108), a first test result from the at least one performance test server (108); and
generating, by the test server engine (210) upon receipt of the first test result, a first output signal to enable the user device (102) for rendering the first test result.
2. The method (400) as claimed in claim 1, wherein the user input comprises a device identifier of the networking device (103), a test request to perform at least one performance test on the networking device (103), and a temporal value for performing the at least one test on the networking device (103).
3. The method (400) as claimed in claim 2, further comprising:
determining, by a device identification engine (206), whether the device identifier matches with a device tag from a plurality of predefined device tags corresponding to one or more valid network devices (103) in the communication network; and
determining, by a temporal detection engine (208) in response to the determination that the device identifier matches with the device tag, whether the temporal value matches with the real time instance, wherein the match of the temporal value with the real time instance corresponds to the user input being associated with the real time instance.
4. The method (400) as claimed in claim 3, further comprising:
generating, by the device identification engine (206), an error signal in response to a determination that the device identifier mismatches with each of the plurality of predefined device tags; and
transmitting, by the device identification engine (206), the error signal to the user device (102), wherein the error signal enables the user device (102) to render an error notification.
5. The method (400) as claimed in claim 3, further comprising:
determining, by the temporal detection engine (208) in response to the determination that the device identifier matches with one of the plurality of predefined device tags, whether the temporal value is associated with a historical instance;
retrieving, by the test server engine (210) in response to the determination that the temporal value is associated with the historical instance, a second test result from a memory (122), wherein the second test result comprises at least one historical value of the at least one performance parameter associated with the networking device (103) corresponding to the historical instance; and
generating, by the test server engine (210), a second output signal to enable the user device (102) for rendering the second test result.
6. The method (400) as claimed in claim 1, wherein the trigger signal enables the at least one performance test server (108) to determine at least one real time value of at least one performance parameter corresponding to the networking device (103), wherein the at least one performance parameter is associated with the at least one performance test.
7. The method (400) as claimed in claim 6, wherein the first test result corresponds to the at least one real time value of the at least one performance parameter associated with the at least one performance test.
8. The method (400) as claimed in claim 1, wherein each performance test server (108) of the plurality of performance test servers (108) corresponds to at least one of a plurality of performance test services comprising a speed test, a ping test, a web performance test, a trace route test, a memory test, a Wireless Local Area Network (WLAN) test, and a Local Area Network (LAN) test.
9. A system (100) to monitor performance of a networking device (103) in a communication network (106), the system (100) comprising:
a data exchange engine (204) configured to receive a user input from a user device (102) for monitoring the performance of the networking device (103); and
a test server engine (210) configured to:
identify, in response to a determination that the user input is associated with the real time instance, at least one performance test server (108) from a plurality of performance test servers (108) corresponding to the at least one performance test, based on the user input;
generate a trigger signal for the at least one performance test server (108) to perform at least one performance test on the networking device (103);
receive, in response to the trigger signal from the at least one performance test server (108), a first test result from the at least one performance test server (108); and
generate, upon receipt of the first test result, a first output signal to enable the user device (102) for rendering the first test result.
10. The system (100) as claimed in claim 9, wherein the user input comprises a device identifier of the networking device (103), a test request to perform at least one performance test on the networking device (103), and a temporal value for performing the at least one test on the networking device (103).
11. The system (100) as claimed in claim 10, further comprising:
a device identification engine (206) configured to determine whether the device identifier matches with a device tag from a plurality of predefined device tags corresponding to one or more valid network devices (103) in the communication network; and
a temporal detection engine (208) configured to determine, in response to the determination that the device identifier matches with the device tag, whether the temporal value matches with the real time instance, wherein the match of the temporal value with the real time instance corresponds to the user input being associated with the real time instance.
12. The system (100) as claimed in claim 11, wherein the device identification engine (206) is further configured to:
generate an error signal in response to a determination that the device identifier mismatches with each of the plurality of predefined device tags; and
transmit the error signal to the user device (102), wherein the error signal enables the user device (102) to render an error notification.
13. The system (100) as claimed in claim 11, wherein:
the temporal detection engine (208) is further configured to determine, in response to the determination that the device identifier matches with one of the plurality of predefined device tags, whether the temporal value is associated with a historical instance; and
the test server engine (210) is further configured to:
retrieve a second test result from a memory (122) in response to the determination that the temporal value is associated with the historical instance, wherein the second test result comprises at least one historical value of the at least one performance parameter associated with the networking device (103) corresponding to the historical instance; and
generate a second output signal to enable the user device (102) for rendering the second test result.
14. The system (100) as claimed in claim 9, wherein the trigger signal enables the at least one performance test server (108) to determine at least one real time value of at least one performance parameter corresponding to the networking device (103), wherein the at least one performance parameter is associated with the at least one performance test.
15. The system (100) as claimed in claim 14, wherein the first test result corresponds to the at least one real time value of the at least one performance parameter associated with the at least one performance test.
16. The system (100) as claimed in claim 9, wherein each performance test server (108) of the plurality of performance test servers (108) corresponds to at least one of a plurality of performance test services comprising a speed test, a ping test, a web performance test, a trace route test, a memory test, a Wireless Local Area Network (WLAN) test, and a Local Area Network (LAN) test.