Abstract: Synthetic monitoring of a channel or a site uses a script or test that generates transactions simulating an action or a path taken on the site or the channel by an end-user. Current identification of anomalies from test data is heavily rule based and manually demanding. An input data associated with one or more network units is received through one or more routes at a network server. One or more clusters are obtained by segmenting a non-homogeneous data based on a sampling rate and a noise level. One or more anomalies are identified in the one or more clusters corresponding to each channel based on an unsupervised clustering model. One or more correlated anomalies in one or more KPIs are detected, based on clustering of the input data into one or more classes, as an actionable anomaly. A forecast model is generated based on one or more identification rules associated with the one or more correlated anomalies.
Description:FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See Section 10 and Rule 13)
Title of invention:
METHOD AND SYSTEM FOR ANOMALY DETECTION OF NETWORK KEY PERFORMANCE INDICATOR (KPI)
Applicant:
Tata Consultancy Services Limited
A company Incorporated in India under the Companies Act, 1956
Having address:
Nirmal Building, 9th Floor,
Nariman Point, Mumbai 400021,
Maharashtra, India
The following specification particularly describes the invention and the manner in which it is to be performed.
TECHNICAL FIELD
[001] The disclosure herein generally relates to network management, and, more particularly, to a method and system for anomaly detection of a network key performance indicator (KPI).
BACKGROUND
[002] Proactive or synthetic monitoring of a channel or a site (i.e., an application, software, or hardware) uses a script or a test that generates transactions which simulate an action or a path taken on the site or the channel by a customer or an end-user. These paths are then continuously monitored at specified intervals for performance, functionality, availability, and response time measures. Analysis of synthetic test data enables an operations team to identify problems and determine if a website or application is slow or experiencing downtime before that problem affects actual end-users or customers. Since synthetic monitoring does not need actual traffic, it enables companies to test applications continuously or test new applications prior to launching to live customers. Synthetic monitoring complements a passive monitoring approach to provide visibility on the health of the applications during off-peak hours when transaction volume is low. It can provide deeper visibility into end-to-end performance, regardless of where applications are running.
[003] In most cases, current identification of anomalies from the test data is heavily rule based and manually demanding. Rules based on a static threshold can generate a lot of false alarms and require experts' attention. Manual discretion is needed to decide whether an alert is an actual anomaly or not. Trouble tickets on anomalous data are raised mostly by the static threshold, and time-consuming manual reactive actions are taken to resolve the issues. An enormous amount of manual labor and time is needed to identify what constitutes an anomaly in the channel based on test data. Analytical inferences may be specific to clusters of homogeneous channels. Identification of issues by tracking them manually is a cumbersome task, and issues may be misclassified due to human errors. Data collection can be huge, as network key performance indicators (KPIs) can have sampling rates in seconds and minutes, and a huge amount of data must be available in a database for processing. Proper data management is another challenge. Higher memory and computational resources and other cloud computing facilities, without a paid version of the aggregators, can be challenging.
SUMMARY
[004] Embodiments of the present disclosure present technological improvements as solutions to one or more of the above-mentioned technical problems recognized by the inventors in conventional systems. For example, in one aspect, a processor implemented method of detecting an anomaly of the key performance indicator (KPI) of one or more network units is provided. The processor implemented method includes at least one of: receiving, via one or more hardware processors, an input data associated with one or more network units through one or more routes at a network server; obtaining, via the one or more hardware processors, a non-homogeneous data by ingesting the input data through an application programming interface (API) in an associated template format; obtaining, via the one or more hardware processors, one or more clusters by segmenting the non-homogeneous data based on a sampling rate and a noise level; identifying, via the one or more hardware processors, one or more anomalies in the one or more clusters corresponding to one or more channels based on an unsupervised clustering model; classifying, via the one or more hardware processors, one or more abnormal behaviors based on a set of customized filters to identify one or more actionable anomalies per channel; detecting, via the one or more hardware processors, one or more correlated anomalies in one or more KPIs based on clustering of the input data into one or more classes; and generating, via the one or more hardware processors, a forecast model based on a labeled data and one or more identification rules associated with the one or more correlated anomalies. The one or more classes correspond to a normal class or an abnormal class.
[005] In an embodiment, the input data corresponds to (a) parameters associated with the one or more routes, (b) a web monitoring data, and (c) a data associated with a key performance indicator (KPI) of the one or more network units. In an embodiment, the key performance indicator (KPI) of the one or more network units corresponds to one or more performance parameters which include (i) a latency, (ii) a packet loss, and (iii) a jitter, of each channel. In an embodiment, the non-homogeneous data corresponds to a multivariate input time series data streamed with variable sampling frequencies. In an embodiment, one or more model parameters are adjusted with a small set of annotated data or labelled data from the same channel. In an embodiment, the forecast model learns one or more seasonal patterns from historical data at a regular interval, and a flag is raised for possible actionable anomalies that are anticipated in the channel.
[006] In another aspect, there is provided a system for detection of an anomaly of the key performance indicator (KPI) of one or more network units. The system includes a memory storing instructions; one or more communication interfaces; and one or more hardware processors coupled to the memory via the one or more communication interfaces, wherein the one or more hardware processors are configured by the instructions to: receive an input data associated with one or more network units through one or more routes at a network server; obtain a non-homogeneous data by ingesting the input data through an application programming interface (API) in an associated template format; obtain one or more clusters by segmenting the non-homogeneous data based on a sampling rate and a noise level; identify one or more anomalies in the one or more clusters corresponding to one or more channels based on an unsupervised clustering model; classify one or more abnormal behaviors based on a set of customized filters to identify one or more actionable anomalies per channel; detect one or more correlated anomalies in one or more KPIs based on clustering of the input data into one or more classes; and generate a forecast model based on a labeled data and one or more identification rules associated with the one or more correlated anomalies. The one or more classes correspond to a normal class or an abnormal class.
[007] In an embodiment, the input data corresponds to (a) parameters associated with the one or more routes, (b) a web monitoring data, and (c) a data associated with a key performance indicator (KPI) of the one or more network units. In an embodiment, the key performance indicator (KPI) of the one or more network units corresponds to one or more performance parameters including (i) a latency, (ii) a packet loss, and (iii) a jitter, of each channel. In an embodiment, the non-homogeneous data corresponds to a multivariate input time series data streamed with variable sampling frequencies. In an embodiment, one or more model parameters are adjusted with a small set of annotated data or labelled data from the same channel. In an embodiment, the forecast model learns one or more seasonal patterns from historical data at a regular interval, and a flag is raised for possible actionable anomalies that are anticipated in the channel.
[008] In yet another aspect, there are provided one or more non-transitory machine readable information storage mediums comprising one or more instructions which, when executed by one or more hardware processors, cause at least one of: receiving an input data associated with one or more network units through one or more routes at a network server; obtaining a non-homogeneous data by ingesting the input data through an application programming interface (API) in an associated template format; obtaining one or more clusters by segmenting the non-homogeneous data based on a sampling rate and a noise level; identifying one or more anomalies in the one or more clusters corresponding to one or more channels based on an unsupervised clustering model; classifying one or more abnormal behaviors based on a set of customized filters to identify one or more actionable anomalies per channel; detecting one or more correlated anomalies in one or more KPIs based on clustering of the input data into one or more classes; and generating a forecast model based on a labeled data and one or more identification rules associated with the one or more correlated anomalies. The one or more classes correspond to a normal class or an abnormal class.
[009] In an embodiment, the input data corresponds to (a) parameters associated with the one or more routes, (b) a web monitoring data, and (c) a data associated with a key performance indicator (KPI) of the one or more network units. In an embodiment, the key performance indicator (KPI) of the one or more network units corresponds to one or more performance parameters including (i) a latency, (ii) a packet loss, and (iii) a jitter, of each channel. In an embodiment, the non-homogeneous data corresponds to a multivariate input time series data streamed with variable sampling frequencies. In an embodiment, one or more model parameters are adjusted with a small set of annotated data or labelled data from the same channel. In an embodiment, the forecast model learns one or more seasonal patterns from historical data at a regular interval, and a flag is raised for possible actionable anomalies that are anticipated in the channel.
[010] It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
BRIEF DESCRIPTION OF THE DRAWINGS
[011] The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles:
[012] FIG. 1 illustrates a system for anomaly detection of a key performance indicator (KPI) of one or more network units, according to an embodiment of the present disclosure.
[013] FIG. 2 illustrates an exemplary block diagram of the system of FIG.1, according to some embodiments of the present disclosure.
[014] FIG. 3A and FIG. 3B are exemplary flow diagrams illustrating a method of detecting the anomaly of the key performance indicator (KPI) of the one or more network units, according to an embodiment of the present disclosure.
DETAILED DESCRIPTION OF EMBODIMENTS
[015] Exemplary embodiments are described with reference to the accompanying drawings. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. While examples and features of disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the scope of the disclosed embodiments.
[016] There is a need to reduce false alarms and to identify anomaly epochs within variable windows in time, which currently requires a lot of time and manual labor. Embodiments of the present disclosure provide a method and system for anomaly detection of a network key performance indicator (KPI) based on machine learning. The embodiment of the present disclosure provides a protocol that is customizable to non-homogeneous clusters, thus catering to the specific unique behaviors of channels.
[017] Referring now to the drawings, and more particularly to FIGS. 1 through 3B, where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments and these embodiments are described in the context of the following exemplary system and/or method.
[018] FIG. 1 illustrates a system 100 for anomaly detection of a key performance indicator (KPI) of one or more network units, according to an embodiment of the present disclosure. In an embodiment, the system 100 includes one or more processor(s) 102, communication interface device(s) or input/output (I/O) interface(s) 106, and one or more data storage devices or memory 104 operatively coupled to the one or more processors 102. The memory 104 includes a database. The one or more processor(s) 102, the memory 104, and the I/O interface(s) 106 may be coupled by a system bus such as a system bus 108 or a similar mechanism. The one or more processor(s) 102 that are hardware processors can be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the one or more processor(s) 102 are configured to fetch and execute computer-readable instructions stored in the memory 104. In an embodiment, the system 100 can be implemented in a variety of computing systems, such as laptop computers, notebooks, hand-held devices, workstations, mainframe computers, servers, a network cloud, and the like.
[019] The I/O interface device(s) 106 can include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, and the like. The I/O interface device(s) 106 may include a variety of software and hardware interfaces, for example, interfaces for peripheral device(s), such as a keyboard, a mouse, an external memory, a camera device, and a printer. Further, the I/O interface device(s) 106 may enable the system 100 to communicate with other devices, such as web servers and external databases. The I/O interface device(s) 106 can facilitate multiple communications within a wide variety of networks and protocol types, including wired networks, for example, local area network (LAN), cable, etc., and wireless networks, such as Wireless LAN (WLAN), cellular, or satellite. In an embodiment, the I/O interface device(s) 106 can include one or more ports for connecting number of devices to one another or to another server.
[020] The memory 104 may include any computer-readable medium known in the art including, for example, volatile memory, such as static random-access memory (SRAM) and dynamic random-access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes. In an embodiment, the memory 104 includes a plurality of modules 110 and a repository 112 for storing data processed, received, and generated by the plurality of modules 110. The plurality of modules 110 may include routines, programs, objects, components, data structures, and so on, which perform particular tasks or implement particular abstract data types.
[021] Further, the database stores information pertaining to inputs fed to the system 100 and/or outputs generated by the system (e.g., data/output generated at each stage of the data processing) 100, specific to the methodology described herein. More specifically, the database stores information being processed at each step of the proposed methodology.
[022] Additionally, the plurality of modules 110 may include programs or coded instructions that supplement applications and functions of the system 100. The repository 112, amongst other things, includes a system database 114 and other data 116. The other data 116 may include data generated as a result of the execution of one or more modules in the plurality of modules 110. Further, the database stores information pertaining to inputs fed to the system 100 and/or outputs generated by the system (e.g., at each stage), specific to the methodology described herein. Herein, the memory for example the memory 104 and the computer program code configured to, with the hardware processor for example the processor 102, causes the system 100 to perform various functions described herein under.
[023] FIG. 2 illustrates an exemplary block diagram of the system 100 of FIG. 1, according to some embodiments of the present disclosure. The system 200 may be an example of the system 100 (FIG. 1). In an example embodiment, the system 200 may be embodied in, or is in direct communication with, the system, for example the system 100 (FIG. 1). The system 200 includes one or more network units 202A-N, one or more routes 204A-N, a network server 206, a data ingestion unit 208, a bronze layer 210, a silver layer 212, a golden layer 214, a behaviour detection unit 216, and a bot 218. An input data associated with the one or more network units 202A-N is received through the one or more routes 204A-N at the network server 206. The input data corresponds to (a) one or more parameters associated with the one or more routes 204A-N, (b) a web monitoring data, and (c) a data associated with a key performance indicator (KPI) of the one or more network units 202A-N. For example, the input data are as mentioned in Table 1:
[024] In an embodiment, the input data is based on web monitoring, specifically proactive or synthetic monitoring of a channel (e.g., a site, application, software, or hardware), which utilizes a test that generates simulated transactions mimicking actual transactions by a customer or end-user and scans the channels at regular intervals for functionality, availability, and response time measures. The data associated with the key performance indicator (KPI) of the one or more network units 202A-N corresponds to one or more performance parameters of each channel, such as (i) a latency, (ii) a packet loss, and (iii) a jitter. The data associated with the key performance indicator (KPI) is stored in the network server 206.
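A synthetic probe of the kind described above can be sketched as follows. This is an illustrative sketch only, not part of the claimed subject matter: the function name `probe`, the result fields, and the dummy action are all assumptions, and only the response-time and availability measures named in the text are recorded.

```python
import time

def probe(action, runs=3):
    """Run a simulated transaction several times and record KPI measures.

    `action` stands in for a scripted user path on the monitored channel;
    response time and availability are measured for each run.
    """
    latencies = []
    failures = 0
    for _ in range(runs):
        start = time.perf_counter()
        try:
            action()
        except Exception:
            failures += 1
        latencies.append(time.perf_counter() - start)
    return {
        "avg_latency_s": sum(latencies) / runs,
        "availability": (runs - failures) / runs,
    }

# A dummy "transaction" in place of a real page or API call.
result = probe(lambda: sum(range(1000)))
```

In practice the action would be the scripted path on the site or channel; here a trivial computation stands in so the sketch is self-contained.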
[025] The input data is streamed via the data ingestion unit 208 for data ingestion into the database through an application programming interface (API) in an associated template format. In an embodiment, the input data from the network server 206 is streamed through a Kafka queue, accumulated using a Python script, and prepared with an appropriate template for ingestion into an Elasticsearch database. The non-homogeneous data corresponds to a multivariate input time series data streamed with variable sampling frequencies, as the channels are not always monitored at constant sampling frequencies. In an embodiment, the data from the data ingestion unit 208 may be non-homogeneous, as one or more web monitoring tests that generated data for different channels may be collected with variable sampling frequencies. In an embodiment, some channels are monitored every thirty seconds, and some are monitored every one minute, two minutes, fifteen minutes, or thirty minutes, depending upon the priority assigned by specification from an end user. For example, all KPI data streamed at a two-minute sampling frequency falls in one segment for analysis.
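The grouping of streamed records into segments by sampling frequency may be sketched as below; the record layout (`channel`, `sampling_sec`, `latency_ms`) and the function name are illustrative assumptions, not part of the disclosure:

```python
from collections import defaultdict

def segment_by_sampling_rate(records):
    """Group KPI records into segments keyed by their sampling interval.

    All records streamed at the same sampling frequency fall into one
    segment, so each segment can be analysed on its own.
    """
    segments = defaultdict(list)
    for rec in records:
        segments[rec["sampling_sec"]].append(rec)
    return dict(segments)

# Streams monitored every 30 s, 2 min, and 15 min, as in the example intervals.
records = [
    {"channel": "ch1", "sampling_sec": 120, "latency_ms": 42},
    {"channel": "ch2", "sampling_sec": 120, "latency_ms": 55},
    {"channel": "ch3", "sampling_sec": 30, "latency_ms": 12},
    {"channel": "ch4", "sampling_sec": 900, "latency_ms": 80},
]
segments = segment_by_sampling_rate(records)
```

Here the two channels sampled every two minutes land in the same segment, matching the example in the text.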
[026] The bronze layer 210 further includes a data segmentation unit 210A which is configured to segment the non-homogeneous data based on a sampling rate and a noise level to obtain one or more clusters. In an embodiment, the segmentation is repeated for the inherent noise present in the channel. The noise present in these channels is also variable (for example: clean channels, moderately noisy channels, and highly noisy channels). What level of noise in a specific channel constitutes an actionable anomaly is understood based on the past data. The actionable anomaly is defined as an anomaly that is crucial to resolve as soon as it occurs, because it directly impacts end-user satisfaction. For example, for the actionable anomalies for resolution: for a chosen segment (i.e., for a specific sampling rate of two minutes), single occurrences of point anomalies are not considered actionable, as they resolve on their own most of the time (i.e., internet glitches), but for a collection of point anomalies that persists over a window of fifteen minutes (for example, a count of point anomalies greater than three), a ticket needs to be raised for resolution. Similarly, for another segment (i.e., for a specific sampling rate of fifteen minutes or thirty minutes), even point anomalies must be considered actionable; as there is no immediate measurement to cross-verify whether the anomalous behavior has settled or is persisting, even the point anomaly needs to be resolved via a ticket.
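The segment-dependent ticketing rule above can be sketched as follows, assuming the example thresholds from the text (more than three point anomalies within a fifteen-minute window for fine sampling; any point anomaly for fifteen- or thirty-minute sampling). The function and parameter names are illustrative:

```python
def is_actionable(point_anomaly_times, sampling_sec, window_sec=900, min_count=4):
    """Decide whether point anomalies constitute an actionable anomaly.

    For coarse sampling (>= 15 min between samples) a single point anomaly
    is actionable; for fine sampling, a ticket is warranted only when more
    than three point anomalies fall inside a fifteen-minute window.
    Timestamps are in seconds.
    """
    if sampling_sec >= 900:  # fifteen- or thirty-minute sampling
        return len(point_anomaly_times) >= 1
    times = sorted(point_anomaly_times)
    for i, start in enumerate(times):
        # Count anomalies inside the window starting at this anomaly.
        in_window = [t for t in times[i:] if t - start <= window_sec]
        if len(in_window) >= min_count:
            return True
    return False
```

For a two-minute segment, four point anomalies in quick succession are ticketed, while three are ruled out; for a thirty-minute segment a single point anomaly is enough.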
[027] The silver layer 212 is configured to identify one or more anomalies in the one or more clusters corresponding to one or more channels based on an unsupervised clustering model. In an embodiment, one or more specific rules are defined that tag one or more particular situations of KPI values as anomalies, and these rules are introduced directly on a processed dataset. In an embodiment, in one or more tests, including regular and noisy scenarios, normal behaviour and anomalous behaviour are separated and annotated, and the output is validated by a testing team that consists of SMEs as and when required. The silver layer 212 further includes a modelling unit 212A, an anomaly detection unit 212B, an anomaly annotation unit 212C, and a subject matter expert (SME) validation unit 212D. The modelling unit 212A is configured to run "n" unsupervised clustering models (i.e., M1,...,Mn from a global model set). The anomaly annotation unit 212C is configured to annotate regular and abnormal behaviors and to select the model based on an accepted trade-off (e.g., between execution time and performance). In an embodiment, one or more lower bound anomalies are removed using a unique median filter. In an embodiment, a selective window approach is implemented to remove instantaneous anomalies, flagging an anomaly as the actionable anomaly only if there are two or more anomalies in a twelve-minute window; otherwise, the anomaly is ruled out as instantaneous.
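The median-filter and selective-window steps above may be sketched as follows. A plain series median stands in for the "unique median filter" of the disclosure, and the two-anomalies-in-twelve-minutes rule is applied over sample indices; all names and the two-minute default sampling rate are assumptions:

```python
import statistics

def selective_window_flags(values, threshold, sampling_sec=120, window_sec=720):
    """Suppress lower-bound and instantaneous anomalies.

    A sample is a candidate only if it exceeds both the static threshold and
    the series median (removing lower-bound anomalies); a candidate is kept
    as actionable only if another candidate falls within a twelve-minute
    window, otherwise it is ruled out as instantaneous.
    """
    med = statistics.median(values)
    candidates = [i for i, v in enumerate(values) if v > threshold and v > med]
    win = max(1, window_sec // sampling_sec)  # window size in samples
    return sorted(i for i in candidates
                  if any(j != i and abs(j - i) <= win for j in candidates))

# Two nearby spikes are actionable; the lone spike at the end is ruled out.
flags = selective_window_flags(
    [1, 1, 9, 1, 8, 1, 1, 1, 1, 1, 1, 1, 9, 1], threshold=5)
```

With two-minute sampling, the twelve-minute window spans six samples, so the spikes at indices 2 and 4 reinforce each other while the isolated spike at index 12 is discarded.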
[028] In an embodiment, if the data is sampled at a larger sampling rate, for example only one sample available every thirty minutes, even a point anomaly can be marked as a business anomaly for a ticketing purpose. Similarly, if the data is sampled at a smaller sampling rate (for example, one sample available every thirty seconds or one minute), then the collective behaviour is examined in the context of a longer time window (for example, ten to fifteen minutes) to make sure that the point anomalies are repeated sufficiently many times to mark them as an actionable anomaly for the ticketing purpose.
[029] For example, if the channel is inherently noisy for a particular test, the SME can decide to do at least one of: (i) flag the entire channel as anomalous, being noisy in nature; (ii) flag the data samples above a chosen static threshold (mark packet loss above X as an anomaly), thus generating a lot of false alarms in a noisy channel, where X might be 5, 10, or 20 depending upon business needs; (iii) flag data samples above a statistical threshold (e.g., mark a deviation of ten from a local mean or median of a distribution); (iv) flag the data samples above a dynamic threshold set by a machine learning model as anomalies, to reduce the false alarms; (v) apply a customized combination of steps (i)-(iii) with step (iv) and adjust the thresholds to reach a good trade-off between false alarms and missed alarms.
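One possible customized combination of the static, statistical, and dynamic thresholds from options (ii)-(v) is sketched below. The particular combination (statistical evidence plus either the static cutoff or a model-supplied dynamic threshold), the deviation measure (median absolute deviation), and all names are assumptions for illustration:

```python
import statistics

def flag_anomalies(samples, static_x=10.0, k=3.0, dynamic=None):
    """Flag samples using a combination of thresholds.

    static_x is the static cutoff X; k is the number of robust deviations
    (around the median) for the statistical cutoff; dynamic, if given, is a
    per-sample dynamic threshold produced by a model.
    """
    med = statistics.median(samples)
    mad = statistics.median([abs(s - med) for s in samples]) or 1.0
    flags = []
    for i, s in enumerate(samples):
        static_hit = s > static_x
        stat_hit = (s - med) / mad > k
        dyn_hit = dynamic is not None and s > dynamic[i]
        # Require statistical evidence plus either the static cutoff or the
        # dynamic threshold, to trade off false and missed alarms.
        flags.append(stat_hit and (static_hit or dyn_hit))
    return flags

# Packet-loss-like series: only the clear spike at index 5 is flagged.
flags = flag_anomalies([0, 1, 0, 2, 1, 50, 0, 1])
```

Tightening or loosening `static_x` and `k` is the adjustment step (v) describes for reaching an acceptable false-alarm/missed-alarm trade-off.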
[030] The one or more clusters generated are further refined by including business logic provided by the SMEs specific to the tests per channel. The annotated anomalies are communicated for validation to an SME testing team. In an embodiment, unsupervised techniques are used, as the data acquired are in general unlabeled (i.e., manually labelled data is costly and labor intensive and often hard to obtain). One or more model parameters are adjusted with a small set of annotated/labelled data from the same channel. In an embodiment, if the training data depicts a clear separation of two classes, where one indicates normal and the other anomalous behavior (i.e., for all possible ranges of data, once validated by the SME), data that does not belong to the normal class can be given an anomalous flag (i.e., with 100 percent probability) because the data belongs to the anomalous cluster. For all other scenarios, when there is ambiguity as to whether the input data belongs to either cluster (i.e., the normal and anomalous clusters formed during training of the model), a probability for the data point to belong to the anomalous class is assigned (i.e., based on a softmax function), and the input data can be flagged as anomalous. In an embodiment, semi-supervised learning is used to validate and adjust the unsupervised output with actual anomalous behaviors identified by the SME to adjust the model parameters. The network KPIs are in general unlabeled and available in large quantities, whereas the labelled data is very expensive and much smaller compared to the unlabeled data. The abnormal behaviour is flagged, a check is made whether the behaviour qualifies for the actionable anomalies based on a set of customized filters (i.e., customized heuristics), and if yes, a flag is raised for auto ticketing via the bot 218.
In an embodiment, for each homogeneous group 'm', the unsupervised output is compared with a validation data set based on the ticketing team/SME, one or more hyperparameters of the unsupervised learning model are adjusted, and probabilities are assigned.
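The softmax-based probability assignment described above can be sketched for a one-dimensional KPI with two learned cluster centers. This is a minimal illustration under assumed names and a simple distance measure, not the disclosed model itself:

```python
import math

def anomaly_probability(x, normal_center, anomalous_center):
    """Probability that a point belongs to the anomalous cluster.

    Softmax over negative distances to the two cluster centers learned in
    training: the closer center receives the higher probability, and a
    point far from the normal cluster gets probability near 1.
    """
    d_norm = abs(x - normal_center)
    d_anom = abs(x - anomalous_center)
    e_norm = math.exp(-d_norm)
    e_anom = math.exp(-d_anom)
    return e_anom / (e_norm + e_anom)

# Clear separation: a point near the anomalous center is flagged with
# probability close to 1; a point near the normal center, close to 0.
p_clearly_bad = anomaly_probability(95.0, normal_center=1.0, anomalous_center=100.0)
p_clearly_ok = anomaly_probability(2.0, normal_center=1.0, anomalous_center=100.0)
```

A point equidistant from both centers receives probability 0.5, capturing the ambiguous case the text mentions.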
[031] Once the SME inputs and individual customization features are obtained for individual tests per channel, a binary classification problem is defined on whether a data point is anomalous or not. A set of multi-class, multi-cluster classification models can be used for the non-homogeneous dataset. The KPI (i.e., packet loss, jitter, and latency) is forecasted for each channel/path based on historical performance features (i.e., hourly, daily, monthly, and seasonal behaviors) using a forecast model, and on the forecasted KPI, the anomaly detection module runs and updates the anomaly flag. In an embodiment, the forecast model is generated based on labeled data and one or more identification rules associated with the one or more correlated anomalies. In an embodiment, the KPIs are used to identify the correlation. The forecast model learns one or more seasonal patterns from historical data at a regular interval (e.g., hourly, daily, monthly, and seasonal behaviors), and a flag is raised for possible actionable anomalies that are anticipated in the channel. The KPIs and anomalies can come in a correlated fashion, e.g., zero latency in a channel can indicate hundred percent packet loss and vice versa.
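A seasonal-naive forecast can serve as a minimal sketch of the forecast step: the per-position average of the seasonal pattern is learned from history and projected forward, after which the anomaly detection of the earlier paragraphs could run on the forecasted KPI. This simple model and its names are stand-ins, not the disclosed forecast model:

```python
def seasonal_forecast(history, period, horizon):
    """Seasonal-naive forecast of a KPI series.

    Learns the average value at each seasonal position (e.g., hour of day
    with period=24) from history and repeats that pattern for the next
    `horizon` steps.
    """
    means = []
    for pos in range(period):
        vals = history[pos::period]  # all samples at this seasonal position
        means.append(sum(vals) / len(vals))
    # Continue the seasonal pattern from where the history ends.
    return [means[(len(history) + h) % period] for h in range(horizon)]

history = [10, 12, 10, 12, 10, 12, 10, 12]  # alternating two-step "season"
future = seasonal_forecast(history, period=2, horizon=4)
```

An anomaly flag can then be raised in advance whenever the forecasted KPI crosses the channel's threshold, anticipating actionable anomalies as the text describes.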
[032] The bot 218 is configured to categorize the error reports and then to clock the issue in for further analysis. The report from the bot 218 can then be used to raise incident tickets, thereby notifying the client about possible anomalies and one or more measures to be taken.
[033] FIG. 3A and FIG. 3B are exemplary flow diagrams illustrating a method 300 of detecting the anomaly of the key performance indicator (KPI) of the one or more network units 202A-N, according to an embodiment of the present disclosure. In an embodiment, the system 100 comprises one or more data storage devices or the memory 104 operatively coupled to the one or more hardware processors 102 and is configured to store instructions for execution of steps of the method by the one or more processors 102. The flow diagram depicted is better understood by way of the following explanation/description. The steps of the method of the present disclosure will now be explained with reference to the components of the system as depicted in FIGS. 1 and 2.
[034] At step 302, an input data associated with the one or more network units 202A-N is received through one or more routes 204A-N at the network server 206. The input data corresponds to (a) parameters associated with the one or more routes 204A-N, (b) the web monitoring data, and (c) the data associated with the key performance indicator (KPI) of the one or more network units 202A-N. The key performance indicator (KPI) of the one or more network units 202A-N corresponds to one or more performance parameters of each channel: (i) a latency, (ii) a packet loss, and (iii) a jitter. At step 304, a non-homogeneous data is obtained by ingesting the input data through an application programming interface (API) in an associated template format. The non-homogeneous data corresponds to a multivariate input time series data streamed with variable sampling frequencies. At step 306, one or more clusters are obtained by segmenting the non-homogeneous data based on a sampling rate and a noise level. At step 308, one or more anomalies in the one or more clusters corresponding to one or more channels are identified based on an unsupervised clustering model. At step 310, one or more abnormal behaviors are classified based on a set of customized filters to identify one or more actionable anomalies per channel. At step 312, one or more correlated anomalies are detected in one or more KPIs based on clustering of the input data into one or more classes. The one or more classes correspond to a normal class or an abnormal class. In an embodiment, one or more model parameters are adjusted with a small set of annotated data or labelled data from the same channel. At step 314, a forecast model is generated based on labeled data and one or more identification rules associated with the one or more correlated anomalies.
The forecast model learns one or more seasonal patterns from the historical data at the regular interval, and a flag is raised for possible actionable anomalies that are anticipated in the channel.
[035] The embodiments of the present disclosure herein address the unresolved problem of identification and classification of abnormal events in the key performance indicators (KPIs) of the network. The embodiment of the present disclosure identifies point, contextual, and collective anomalies from network performance measures in a dynamic manner. The embodiment of the present disclosure provides custom-fit models for the non-homogeneous class of KPIs (i.e., sampling rate, noise level presence, and regular behavior for the channel). The embodiment of the present disclosure provides custom-fit anomaly flagging as per customer requirement (i.e., configurable static or statistical thresholds along with a dynamic threshold as a feature). The embodiment of the present disclosure enables a significant reduction in missed and false alarms in the channels compared to the regular scenarios of the static or the statistical thresholds alone. The embodiment of the present disclosure provides specific protocols for different types of network issues for the KPI data. The embodiment of the present disclosure provides a timely alert to the operations team for faster resolution via a smart bot, and identifies anomalies earlier than a human most of the time. The system is capable of classifying and clustering channels and paths, based on poor network connection, into very clean, somewhat decent, moderately noisy, significantly noisy, and totally corrupt channels. Alarms are raised based on measured dynamic thresholds (i.e., when network congestion goes beyond the expected dynamic threshold of the channel). The system 100 can generate channel-specific features (i.e., the time frame for which the channel is facing issues most, multivariate feature extractions from test data for a unique channel).
The embodiment of the present disclosure addresses proactive monitoring of a channel using synthetic web pinging for health monitoring at specified intervals for performance, functionality, availability, and response time measures. The embodiment of the present disclosure identifies deviation in (a) the utility of the application, specifically in a different subdomain of synthetic web pinging, along with (b) unsupervised learning via dynamic thresholding for recommending efficient trouble ticketing. Feedback from a subject matter expert (SME) acts as an input to further refine the anomalies and reduce the false/missed anomalies.
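Dynamic thresholding, as referenced above, can be sketched as a rolling statistical band per channel: a point is flagged when it exceeds the recent mean by a configurable multiple of the recent standard deviation. The window size and multiplier below are illustrative assumptions, consistent with the disclosure's configurable thresholds:

```python
import numpy as np

def dynamic_threshold_flags(series: np.ndarray, window: int = 30,
                            k: float = 3.0) -> np.ndarray:
    """Flag samples exceeding a rolling mean + k * rolling std band.
    `window` and `k` are illustrative; the disclosure leaves thresholds
    configurable per channel."""
    flags = np.zeros(len(series), dtype=bool)
    for t in range(window, len(series)):
        hist = series[t - window:t]
        mu, sigma = hist.mean(), hist.std()
        flags[t] = series[t] > mu + k * max(sigma, 1e-9)
    return flags

rng = np.random.default_rng(1)
latency = rng.normal(50, 2, 200)     # a channel's synthetic latency KPI
latency[150] = 90.0                  # congestion spike beyond the dynamic band
alerts = dynamic_threshold_flags(latency)
```

Unlike a static threshold, the band adapts to each channel's recent behavior, so a noisy channel tolerates wider swings before an alarm is raised.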
[036] The written description describes the subject matter herein to enable any person skilled in the art to make and use the embodiments. The scope of the subject matter embodiments is defined by the claims and may include other modifications that occur to those skilled in the art. Such other modifications are intended to be within the scope of the claims if they have similar elements that do not differ from the literal language of the claims or if they include equivalent elements with insubstantial differences from the literal language of the claims.
[037] It is to be understood that the scope of the protection is extended to such a program and in addition to a computer-readable means having a message therein; such computer-readable storage means contain program-code means for implementation of one or more steps of the method, when the program runs on a server or mobile device or any suitable programmable device. The hardware device can be any kind of device which can be programmed including e.g., any kind of computer like a server or a personal computer, or the like, or any combination thereof. The device may also include means which could be e.g., hardware means like e.g., an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a combination of hardware and software means, e.g., an ASIC and an FPGA, or at least one microprocessor and at least one memory with software processing components located therein. Thus, the means can include both hardware means, and software means. The method embodiments described herein could be implemented in hardware and software. The device may also include software means. Alternatively, the embodiments may be implemented on different hardware devices, e.g., using a plurality of CPUs.
[038] The embodiments herein can comprise hardware and software elements. The embodiments that are implemented in software include but are not limited to, firmware, resident software, microcode, etc. The functions performed by various components described herein may be implemented in other components or combinations of other components. For the purposes of this description, a computer-usable or computer readable medium can be any apparatus that can comprise, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
[039] The illustrated steps are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope of the disclosed embodiments. Also, the words “comprising,” “having,” “containing,” and “including,” and other similar forms are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items. It must also be noted that as used herein and in the appended claims, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise.
[040] Furthermore, one or more computer-readable storage media may be utilized in implementing embodiments consistent with the present disclosure. A computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored. Thus, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein. The term “computer-readable medium” should be understood to include tangible items and exclude carrier waves and transient signals, i.e., be non-transitory. Examples include random access memory (RAM), read-only memory (ROM), volatile memory, nonvolatile memory, hard drives, CD ROMs, DVDs, flash drives, disks, and any other known physical storage media.
[041] It is intended that the disclosure and examples be considered as exemplary only, with a true scope of disclosed embodiments being indicated by the following claims.
Claims:
1. A processor implemented method (300), comprising:
receiving, via one or more hardware processors, an input data associated with a plurality of network units through a plurality of routes at a network server (302);
obtaining, via the one or more hardware processors, a non-homogeneous data by ingesting the input data through an application programming interface (API) in an associated template format (304);
obtaining, via the one or more hardware processors, a plurality of clusters by segmenting the non-homogeneous data based on a sampling rate and a noise level (306);
identifying, via the one or more hardware processors, at least one anomaly in the plurality of clusters corresponding to a plurality of channels based on an unsupervised clustering model (308);
classifying, via the one or more hardware processors, at least one abnormal behaviour based on a set of customized filters to identify at least one actionable anomaly per channel (310);
detecting, via the one or more hardware processors, at least one correlated anomaly in a plurality of KPIs based on clustering of the input data into at least one class, wherein the at least one class corresponds to a normal class or an abnormal class (312); and
generating, via the one or more hardware processors, a forecast model based on a labeled data and a plurality of identification rules associated with the at least one correlated anomaly (314).
2. The processor implemented method (300) as claimed in claim 1, wherein the input data corresponds to (a) parameters associated with the plurality of routes, (b) a web monitoring data, and (c) a data associated with a key performance indicator (KPI) of the plurality of network units, and wherein the key performance indicator (KPI) of the plurality of network units corresponds to a plurality of performance parameters comprising (i) a latency, (ii) a packet loss, and (iii) a jitter, of each channel.
3. The processor implemented method (300) as claimed in claim 1, wherein the non-homogeneous data corresponds to a multivariate input time series data streamed with variable sampling frequencies.
4. The processor implemented method (300) as claimed in claim 1, wherein a plurality of model parameters is adjusted with a small set of annotated data or a labelled data from the same channel.
5. The processor implemented method (300) as claimed in claim 1, wherein the forecast model learns a plurality of seasonal patterns at a regular interval and a flag is raised for possible actionable anomalies that are anticipated in the channel.
6. A system (100), comprising:
a memory (104) storing instructions;
one or more communication interfaces (106); and
one or more hardware processors (102) coupled to the memory (104) via the one or more communication interfaces (106), wherein the one or more hardware processors (102) are configured by the instructions to:
receive, an input data associated with a plurality of network units through a plurality of routes at a network server;
obtain, a non-homogeneous data by ingesting the input data through an application programming interface (API) in an associated template format;
obtain, a plurality of clusters by segmenting the non-homogeneous data based on a sampling rate and a noise level;
identify, at least one anomaly in the plurality of clusters corresponding to a plurality of channels based on an unsupervised clustering model;
classify, at least one abnormal behaviour based on a set of customized filters to identify at least one actionable anomaly per channel;
detect, at least one correlated anomaly in a plurality of KPIs based on clustering of the input data into at least one class, wherein the at least one class corresponds to a normal class or an abnormal class; and
generate, a forecast model based on a labeled data and a plurality of identification rules associated with the at least one correlated anomaly.
7. The system (100) as claimed in claim 6, wherein the input data corresponds to (a) parameters associated with the plurality of routes, (b) a web monitoring data, and (c) a data associated with a key performance indicator (KPI) of the plurality of network units, and wherein the key performance indicator (KPI) of the plurality of network units corresponds to a plurality of performance parameters comprising (i) a latency, (ii) a packet loss, and (iii) a jitter, of each channel.
8. The system (100) as claimed in claim 6, wherein the non-homogeneous data corresponds to a multivariate input time series data streamed with variable sampling frequencies.
9. The system (100) as claimed in claim 6, wherein a plurality of model parameters is adjusted with a small set of annotated data or a labelled data from the same channel.
10. The system (100) as claimed in claim 6, wherein the forecast model learns a plurality of seasonal patterns from a historical data at a regular interval and a flag is raised for possible actionable anomalies that are anticipated in the channel.
| # | Name | Date |
|---|---|---|
| 1 | 202221056578-STATEMENT OF UNDERTAKING (FORM 3) [01-10-2022(online)].pdf | 2022-10-01 |
| 2 | 202221056578-REQUEST FOR EXAMINATION (FORM-18) [01-10-2022(online)].pdf | 2022-10-01 |
| 3 | 202221056578-FORM 18 [01-10-2022(online)].pdf | 2022-10-01 |
| 4 | 202221056578-FORM 1 [01-10-2022(online)].pdf | 2022-10-01 |
| 5 | 202221056578-FIGURE OF ABSTRACT [01-10-2022(online)].pdf | 2022-10-01 |
| 6 | 202221056578-DRAWINGS [01-10-2022(online)].pdf | 2022-10-01 |
| 7 | 202221056578-DECLARATION OF INVENTORSHIP (FORM 5) [01-10-2022(online)].pdf | 2022-10-01 |
| 8 | 202221056578-COMPLETE SPECIFICATION [01-10-2022(online)].pdf | 2022-10-01 |
| 9 | 202221056578-FORM-26 [24-11-2022(online)].pdf | 2022-11-24 |
| 10 | Abstract1.jpg | 2022-12-09 |
| 11 | 202221056578-Proof of Right [10-01-2023(online)].pdf | 2023-01-10 |
| 12 | 202221056578-FER.pdf | 2025-06-16 |
| 13 | 202221056578-FORM 3 [14-07-2025(online)].pdf | 2025-07-14 |
| 14 | Search056578E_06-01-2025.pdf | |