Abstract: A method and system for dictionary learning based power waveform disaggregation are disclosed. The system and method enable disaggregation of individual power profiles from aggregate power measurements. A measurement model based on a union of dictionaries aids in collecting aggregate data relevant for detection and estimation of the individual appliance power profiles. This measurement model is a stable and invertible sampling operator. An estimation of the individual sources based on analysis-sparsity-based optimization yields results with low approximation errors. The scaling effect in the estimation of individual sources can be controlled by calibrating the weights in the block sparse approximation and the penalty in the analysis sparse optimization. Estimation errors can be reduced by reducing the cross coherence of the dictionaries and designing them with a condition number close to unity.
DESC:
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See Section 10 and Rule 13)
Title of invention:
METHOD AND SYSTEM FOR DICTIONARY LEARNING BASED POWER WAVEFORM DISAGGREGATION
Applicant:
Tata Consultancy Services Limited
A company incorporated in India under the Companies Act, 1956
having address:
Nirmal Building, 9th floor,
Nariman point, Mumbai 400021,
Maharashtra, India
The following specification particularly describes the embodiments and the manner in which it is to be performed.
FIELD OF THE INVENTION
[001] The present application generally relates to energy and power systems. Particularly, the application provides a method and system for dictionary learning based power waveform disaggregation.
BACKGROUND OF THE INVENTION
[002] In order to manage generation, transmission and consumption of power efficiently, the power consumption of electrical appliances must be controlled and monitored dynamically. However, due to the large number of appliances at a location, it is difficult to monitor each appliance individually. Hence there is a need to monitor the aggregate power consumption of a location and infer the power consumption of individual appliances from this aggregate power.
[003] Traditional approaches learn features from individual appliances and attempt to infer their presence in a measurement containing a mixture of features from different appliances. Techniques from signal processing and machine learning have been applied to achieve this goal.
[004] However, the techniques suffer from several drawbacks, including computational complexity, large training time, accuracy of inferences, limitation on real time capabilities and scalability and robustness to take into account different load signatures (temporal/structural features).
[005] Prior art literature has illustrated several methods for monitoring and controlling power consumption by electrical appliances; however, due to several drawbacks, including those mentioned above, this is still considered a challenge in the technical domain.
SUMMARY OF THE INVENTION
[006] Before the present methods, systems, and hardware enablement are described, it is to be understood that this invention is not limited to the particular systems, and methodologies described, as there can be multiple possible embodiments of the present invention which are not expressly illustrated in the present disclosure. It is also to be understood that the terminology used in the description is for the purpose of describing the particular versions or embodiments only, and is not intended to limit the scope of the present invention which will be limited only by the appended claims.
[007] In an aspect, a method for dictionary learning based power waveform disaggregation of at least one appliance is provided, said method comprising the processor implemented steps of: acquiring data from each of the at least one appliance, using a data acquisition module (208A); generating a data model, wherein the generated data model extracts features of each of the at least one appliance by implementing feature extraction techniques, using a data model generation module (208B); generating a measurement model and calibrating the measurement model to measure the data based on at least one from a group of a type of power to be measured, a power level resolution and a sampling frequency, using a measurement model generation module (208C); and generating and implementing an inference model for estimating individual signals and classification of power feature data based on prior knowledge of statistical and structural properties of the data, wherein the presence or absence of events representative of load features is inferred on the acquired data based on a predetermined test statistic, using an inference model generation module (208D).
[008] In another aspect, a system (102) for dictionary learning based power waveform disaggregation of at least one appliance is provided. The system comprises a processor (202) and a memory (204) operatively coupled with said processor. According to an aspect of the disclosed invention, the system (102) further comprises a data acquisition module (208A) configured to acquire data from each of the at least one appliance. The system (102) further comprises a data model generation module (208B) configured to generate a data model, wherein the generated data model captures all the essential features of each of the at least one appliance, by implementing feature extraction techniques. Further, the disclosed system comprises a measurement model generation module (208C) configured to generate a measurement model and calibrate the measurement model to measure the data based on at least one from a group of a type of power to be measured, a power level resolution and a sampling frequency; and an inference model generation module (208D) configured to generate an inference model and implement said inference model for estimating individual signals and classification of power feature data based on prior knowledge of statistical and structural properties of the data, wherein the presence or absence of events representative of load features is inferred on the acquired data based on a predetermined test statistic.
BRIEF DESCRIPTION OF THE DRAWINGS
[009] The foregoing summary, as well as the following detailed description of preferred embodiments, are better understood when read in conjunction with the appended drawings. For the purpose of illustrating the invention, there is shown in the drawings exemplary constructions of the invention; however, the invention is not limited to the specific methods and system disclosed. In the drawings:
[0010] Figure 1: illustrates a network implementation of a system 102 according to an embodiment of the subject matter disclosed herein;
[0011] Figure 2: illustrates the system 102, in accordance with an embodiment of the subject matter disclosed herein;
[0012] Figure 3: illustrates an exemplary functional framework for the working of an embodiment disclosed herein;
[0013] Figure 4: illustrates an exemplary operational framework for the working of an embodiment disclosed herein;
[0014] Figure 5: illustrates a flowchart 500 for working of the system, in accordance with an embodiment of the subject matter disclosed herein;
[0015] Figure 6: illustrates a graphical representation of an implementation of the subject matter disclosed herein on a refrigerator;
[0016] Figure 7 illustrates a graphical representation of an implementation of the subject matter disclosed herein on a microwave oven;
[0017] Figure 8 illustrates a detection test statistic for evaluation of test parameters, in accordance with an exemplary embodiment of the disclosed subject matter;
[0018] Figure 9 is an expanded illustration of the window index 70 as illustrated in figure 7;
[0019] Figure 10 illustrates a graphical representation of the overlap of the two power profiles, of the refrigerator and the microwave oven, in accordance with an embodiment of the disclosed subject matter;
[0020] Figure 11 illustrates the estimated power profile of the refrigerator on the time interval (145,146) minutes in accordance with an embodiment of the disclosed subject matter;
[0021] Figure 12 illustrates the estimated power profile of the microwave oven on the time interval (145,146) minutes; and
[0022] Figure 13 illustrates a plot of the variation of the penalty weights as a function of iteration index, when solving the block sparse optimization problem in accordance with an embodiment of the disclosed subject matter.
DETAILED DESCRIPTION OF THE INVENTION
[0023] Some embodiments of this invention, illustrating all its features, will now be discussed in detail.
[0024] The words "comprising," "having," "containing," and "including," and other forms thereof, are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items.
[0025] It must also be noted that as used herein and in the appended claims, the singular forms "a," "an," and "the" include plural references unless the context clearly dictates otherwise. Although any systems and methods similar or equivalent to those described herein can be used in the practice or testing of embodiments of the present invention, the preferred systems and methods are now described.
[0026] The disclosed embodiments are merely exemplary of the invention, which may be embodied in various forms.
[0027] The elements illustrated in the Figures inter-operate as explained in more detail below. Before setting forth the detailed explanation, however, it is noted that all of the discussion below, regardless of the particular implementation being described, is exemplary in nature, rather than limiting. For example, although selected aspects, features, or components of the implementations are depicted as being stored in memories, all or part of the systems and methods consistent with the disclosed system and method may be stored on, distributed across, or read from other machine-readable media.
[0028] The techniques described above may be implemented in one or more computer programs executing on (or executable by) a programmable computer including any combination of any number of the following: a processor, a storage medium readable and/or writable by the processor (including, for example, volatile and non-volatile memory and/or storage elements), plurality of input units, and plurality of output devices. Program code may be applied to input entered using any of the plurality of input units to perform the functions described and to generate an output displayed upon any of the plurality of output devices.
[0029] Each computer program within the scope of the claims below may be implemented in any programming language, such as assembly language, machine language, a high-level procedural programming language, or an object-oriented programming language. The programming language may, for example, be a compiled or interpreted programming language. Each such computer program may be implemented in a computer program product tangibly embodied in a machine-readable storage device for execution by a computer processor.
[0030] Method steps of the invention may be performed by one or more computer processors executing a program tangibly embodied on a computer-readable medium to perform functions of the invention by operating on input and generating output. Suitable processors include, by way of example, both general and special purpose microprocessors. Generally, the processor receives (reads) instructions and data from a memory (such as a read-only memory and/or a random access memory) and writes (stores) instructions and data to the memory. Storage devices suitable for tangibly embodying computer program instructions and data include, for example, all forms of non-volatile memory, such as semiconductor memory devices, including EPROM, EEPROM, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROMs. Any of the foregoing may be supplemented by, or incorporated in, specially-designed ASICs (application-specific integrated circuits) or FPGAs (Field-Programmable Gate Arrays). A computer can generally also receive (read) programs and data from, and write (store) programs and data to, a non-transitory computer-readable storage medium such as an internal disk (not shown) or a removable disk.
[0031] Any data disclosed herein may be implemented, for example, in one or more data structures tangibly stored on a non-transitory computer-readable medium. Embodiments of the invention may store such data in such data structure(s) and read such data from such data structure(s).
[0032] The present application provides a computer implemented method and system for dictionary learning based power waveform disaggregation.
[0033] Fig. 1 illustrates a network implementation 100 of a system 102 for dictionary learning based power waveform disaggregation of appliances according to an embodiment of the present disclosure. In one aspect of the system 102, one or more features of an appliance may be inferred from aggregate power measurements. In one aspect, the steps for inferring features of an appliance from aggregate power measurements comprise a) data acquisition, b) appliance feature extraction and c) inference and learning. Further, the features may be classified as a) steady state features and b) transient features. In another aspect of the subject matter disclosed herein, an inference mechanism needs to distinguish between steady state features and transient features of an appliance and also be able to distinguish these features for multiple appliances. In an aspect of the system 102, the distinction between steady state and transient features is made without trading off the sampling rate, hence reducing data acquisition and component costs. Further, this leads to a reduction in training demands, thereby reducing data storage and data processing costs.
[0034] Although the present subject matter is explained considering that the system 102 is implemented on a server, it may be understood that the system 102 may also be implemented in a variety of computing systems, such as a laptop computer, a desktop computer, a notebook, a workstation, a mainframe computer, a server, a network server, and the like. In one implementation, the system 102 may be implemented in a cloud-based environment. In another embodiment, it may be implemented as custom built hardware designed to efficiently perform the invention disclosed. It will be understood that the system 102 may be accessed by multiple users through one or more user devices 104-1, 104-2…104-N, collectively referred to as user devices 104 hereinafter, or applications residing on the user devices 104. Examples of the user devices 104 may include, but are not limited to, a portable computer, a personal digital assistant, a handheld device, and a workstation. The user devices 104 are communicatively coupled to the system 102 through a network 106.
[0035] In one implementation, the network 106 may be a wireless network, a wired network or a combination thereof. The network 106 can be implemented as one of the different types of networks, such as intranet, local area network (LAN), wide area network (WAN), the internet, and the like. The network 106 may either be a dedicated network or a shared network. The shared network represents an association of the different types of networks that use a variety of protocols, for example, Hypertext Transfer Protocol (HTTP), Transmission Control Protocol/Internet Protocol (TCP/IP), Wireless Application Protocol (WAP), and the like, to communicate with one another. Further the network 106 may include a variety of network devices, including routers, bridges, servers, computing devices, storage devices, and the like.
[0036] In one embodiment the present invention, Fig. 2 with reference to Fig. 1, describes a detailed working of the various components of the system 102. In one embodiment, the system 102 may include at least one processor 202, an input/output (I/O) interface 204, and a memory 206. The at least one processor 202 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the at least one processor 202 is configured to fetch and execute computer-readable instructions stored in the memory 206.
[0037] The I/O interface 204 may include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, and the like. The I/O interface 204 may allow the system 102 to interact with the user directly or through the client devices 104. Further, the I/O interface 204 may enable the system 102 to communicate with other computing devices, such as web servers and external data servers (not shown). The I/O interface 204 can facilitate multiple communications within a wide variety of networks and protocol types, including wired networks, for example, LAN, cable, etc., and wireless networks, such as WLAN, cellular, or satellite. The I/O interface 204 may include one or more ports for connecting a number of devices to one another or to another server.
[0038] The memory 206 may include any computer-readable medium and computer program product known in the art including, for example, volatile memory, such as static random access memory (SRAM) and dynamic random access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes. The memory 206 may include modules 208 and data 210. In an embodiment the modules 208 comprise a data acquisition module 208A configured to collect data from the appliances; a data model generation module 208B configured to derive a data model which captures all the essential features of an appliance in an efficient manner; a measurement model module 208C configured to derive a measurement model and calibrate the measurement model for enabling measurement on data; and an inference model generation module 208D configured to make inferences on data based on prior knowledge of statistical and structural properties of the data, thereby balancing the tradeoff between detection and estimation accuracy, wherein the presence or absence of events representative of load features is inferred on the acquired data based on a predetermined test statistic.
[0039] The modules 208 include routines, programs, objects, components, data structures, etc., which perform particular tasks or implement particular abstract data types. The modules 208 may include programs or coded instructions that supplement applications and functions of the system 102. The modules 208 described herein may be implemented as software modules that may be executed in the cloud-based computing environment of the system 102.
[0040] The data 210, amongst other things, serves as a repository for storing data processed, received, and generated by one or more of the modules 208. The data 210 may include data generated as a result of the execution of one or more modules.
[0041] Referring now to figure 3, a functional architecture for the subject matter disclosed herein is illustrated. In an embodiment the element 302 is used for data acquisition, the functional block 304 is used for data model learning, the functional block 306 is used for measurement model construction, the constructed measurement model is then calibrated using the functional block 308, the functional block 310 derives inferences based on the calibrated measurement model, and the functional block 312 estimates individual signals for appliances.
[0042] Referring now to figure 4, an illustration of the operational architecture of the disclosed subject matter is shown. At the element 402 data is acquired, which is used for training for data model learning at the element 404. The acquired data is used for event detection at the step 406, wherein event detection is used during the testing phase for identification of relevant events. At the element 408 feature estimation for each appliance is performed, and the element 410 represents signal estimation for each appliance.
[0043] In operation, a flowchart 500 for dictionary learning based power waveform disaggregation of at least one appliance is shown in Figure 5 according to an embodiment of the disclosure. Initially at step 502, data from each of the at least one appliance is acquired using a data acquisition module (208A). At step 504, a data model is generated, wherein the generated data model extracts features of each of the at least one appliance by implementing feature extraction techniques, using a data model generation module (208B). At step 506, a measurement model is generated and calibrated to measure the data based on at least one from a group of a type of power to be measured, a power level resolution and a sampling frequency, using a measurement model generation module (208C). Finally, at step 508, an inference model is generated and implemented for estimating individual signals and classification of power feature data based on prior knowledge of statistical and structural properties of the data, wherein the presence or absence of events representative of load features is inferred on the acquired data based on a predetermined test statistic, using an inference model generation module (208D).
[0044] In an embodiment it is useful to consider appliance operational states, load signatures, and data acquisition strategies to measure the aggregate power consumption of a home for making inferences on power profiles of individual appliances. As per the disclosed subject matter, the steps involved in characterizing power profiles of individual appliances include the following. It should be appreciated that the following steps should be read in conjunction with figure 3, figure 4 and figure 5:
a) A data acquisition strategy: The data acquisition strategy determines the sampling rate and power measurement resolution requirements.
b) Data pre-processing: Data pre-processing allows feature extraction. Features extracted from individual appliances are represented in terms of data dependent orthonormal basis functions.
c) Pattern recognition/classification of power features: The presence/absence of events representative of load features is inferred on test data based on an appropriate test statistic for hypothesis testing.
[0045] The above steps involve a development of a) A data model, b) A measurement model and c) An inference model as explained in the flowchart of Fig. 5 wherein the data acquisition and data pre-processing steps may be enabled by using a measurement model and data model respectively and the recognition accuracy of features will depend on the inference model.
[0046] According to an embodiment of the disclosed subject matter a measurement model as described in Fig. 3 can be characterized in terms of three key factors of measured power:
a) The type of power: Measured power can be classified into real power (due to resistive loads) and reactive power (due to inductive loads). Reactive power can be used to differentiate between loads that cause similar real power levels. This becomes critical at lower sampling rates.
b) Power level resolution: For a wide range of appliances, a power measurement resolution of 0.1W is typically required. This power level resolution depends on the resolution of the A/D converter. Typical power measurement meters are approximately constrained to a power level resolution of 10W to meet billing requirements. Higher power level resolutions allow better recognition of appliance features from power measurements.
c) Sampling frequency: The sampling rate determines the power level resolution. This in turn depends on the data converter available for data acquisition. Lower sampling rates allow a lower power consumption on the data converter.
[0047] According to an embodiment of the present subject matter disaggregation of power features of individual appliances from aggregate power measurements involves the steps of data acquisition, data pre-processing and pattern recognition. Towards this, measurement, data and inference models need to be developed for aggregate data. Improved disaggregation algorithms are needed to improve robustness, and accuracy of appliance identification by the algorithms, while reducing data acquisition, data processing and training requirements.
[0048] In an embodiment, the data model generation module 208B is configured to derive a data model by learning a dictionary of orthonormal basis functions. In an aspect, a linear combination of these basis functions may lead to an efficient representation of appliance features. In an embodiment the dictionary enables efficient representation and discrimination of features simultaneously. In an embodiment the dictionary is flexible, and may adapt to spatial, temporal and usage pattern variability. Furthermore, the dictionary is calibrated to the data.
[0049] The system 102 makes inferences on data of individual appliances from aggregate power measurements based on data models. The data from each appliance is modelled in terms of a dictionary of basis functions, D. A linear combination of these basis functions represents the features of the appliance. The generic optimization problem to be solved is min_{D,X} ||S − DX||_F^2 + λ_D p(D) + λ_X q(X), where p(D) is a penalty for the dictionary atoms (in terms of coherence) and q(X) is a penalty for the coefficient vectors (in terms of sparsity).
[0050] In an embodiment a data model is defined in terms of a learnt dictionary. According to an aspect, given a signal s ∈ R^N, a dictionary D ∈ R^{N×K} consists of K atoms (basis functions) whose linear combinations, given by Dx with x ∈ R^K, are representative of significant features of the N-dimensional signal. Further, the vector of coefficients x, which is used to weight each atom of the dictionary, is an encoding vector. A sparse and distinct support set for the encoding vector of each appliance helps in differentiating between features of different appliances from a linear mixture of the power consumption profiles of all appliances. In another aspect an approximately distinct support set of the encoding vectors can be enforced by developing a data model which allows a sparse approximation for an appliance power feature.
[0051] A sparse approximation is formulated as a regularized regression problem, which may be solved using equation (1):
x̂ = argmin_x ||s − Dx||_2^2 + λ||x||_1, ……………. (1)
[0052] In another embodiment the signal is approximated by a sparse linear combination of basis functions as s ≈ Dx̂, with a constraint on the ℓ0 norm of the coefficient vector, ||x||_0 ≤ k. The ℓ1 norm is typically used as the regularizer in (1) as an approximation to the ℓ0 norm, making the problem convex in x while still encouraging sparse solutions.
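By way of a non-limiting illustration, the regularized regression problem of equation (1) may be solved with a simple iterative soft-thresholding (ISTA) scheme. The dictionary, signal and penalty below are illustrative stand-ins, not the trained appliance models of the disclosure:

```python
import numpy as np

def ista(D, s, lam=0.01, n_iter=500):
    """Solve min_x ||s - D x||_2^2 + lam * ||x||_1 by iterative soft thresholding."""
    L = np.linalg.norm(D, 2) ** 2              # squared spectral norm bounds the curvature
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = x - D.T @ (D @ x - s) / L          # gradient step on the quadratic term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / (2 * L), 0.0)  # soft threshold
    return x

# Toy example: a 2-sparse encoding in a random unit-norm dictionary
rng = np.random.default_rng(0)
D = rng.standard_normal((32, 64))
D /= np.linalg.norm(D, axis=0)                 # unit-norm atoms
x_true = np.zeros(64)
x_true[3], x_true[20] = 1.5, -0.8
s = D @ x_true                                 # noiseless observation
x_hat = ista(D, s)
```

In this setting the recovered encoding concentrates on the true support, illustrating why a sparse, distinct support per appliance aids later disaggregation.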
[0053] In an aspect the dictionary is characterized by properties including a) Mutual coherence, represented by equation (2):
μ(D) = max_{i≠j} |d_i^T d_j| / (||d_i||_2 ||d_j||_2) …………………………. (2)
b) Sparsity: The coherence of a dictionary influences the recovery of the sparse coding support (the set of atoms associated with the non-zero coding coefficients). If an observation has a k-sparse coding in D, the support is recovered accurately by solving (1) if equation (3) holds:
k < (1/2)(1 + 1/μ(D)) ……………. (3)
c) Condition number: The coherence and sparsity determine the condition number of the dictionary, given by
κ(D) = σ_max(D) / σ_min(D) ……………………….. (4)
[0054] In an aspect a well-conditioned dictionary with a condition number close to unity is robust to noise and outliers in the data.
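By way of a non-limiting illustration, the mutual coherence of equation (2) and the condition number of equation (4) can be computed directly; the random dictionary below is a stand-in for a learnt appliance dictionary:

```python
import numpy as np

def mutual_coherence(D):
    """Largest absolute inner product between distinct unit-normalized atoms (eq. 2)."""
    Dn = D / np.linalg.norm(D, axis=0)
    G = np.abs(Dn.T @ Dn)
    np.fill_diagonal(G, 0.0)                   # ignore self-correlations
    return G.max()

def condition_number(D):
    """Ratio of the largest to the smallest singular value (eq. 4)."""
    sv = np.linalg.svd(D, compute_uv=False)
    return sv[0] / sv[-1]

rng = np.random.default_rng(1)
D = rng.standard_normal((32, 32))
D /= np.linalg.norm(D, axis=0)
mu = mutual_coherence(D)
kappa = condition_number(D)

# An orthonormal dictionary has (near) zero coherence and condition number one,
# the well-conditioned extreme referred to in the paragraph above.
Q, _ = np.linalg.qr(rng.standard_normal((32, 32)))
```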
d) Stability: The dictionary for each appliance can be interpreted as a linear measurement operator on the data. The measurement operator needs to satisfy the properties of invertibility and stability. A measurement operator D is invertible if for every x_1, x_2, Dx_1 = Dx_2 implies x_1 = x_2. An invertible operator is well calibrated to the data. The measurement operator is stable if there exist constants α > 0 and β < ∞ such that for every x, equation (5) holds true:
α||x||_2^2 ≤ ||Dx||_2^2 ≤ β||x||_2^2 ………………… (5)
[0055] A measurement operator with the restricted isometry property approximately preserves the Euclidean geometry of the encoding vectors x in its measurements Dx. The constants α and β are the stability bounds, and the ratio β/α provides a measure of stability of the sampling operator. The measurement operator satisfies the Restricted Isometry Property (RIP) of order k if there is a constant δ_k such that equation (6) holds true for all x with ℓ0 norm ||x||_0 ≤ k:
(1 − δ_k)||x||_2^2 ≤ ||Dx||_2^2 ≤ (1 + δ_k)||x||_2^2 …………………….. (6)
[0056] In an embodiment D acts as an approximate isometry on the set of vectors that are k-sparse. A small isometry constant δ_k ensures stability. The constant is related to the coherence and sparsity as δ_k ≤ (k − 1)μ(D). Hence the dictionary operator can be made stable by reducing its coherence, or the sparsity measure of the encoding vector.
e) Robustness: For a model approximation error ε, with a k-term approximation of the optimization problem in (1), an upper bound on the error of the encoding vector (measurement) is dependent on the isometry constant/coherence such that
||x̂ − x||_2 ≤ C(δ_k) ||ε||_2 ………………………………… (7)
where the constant C(δ_k) grows with the isometry constant. For small isometry constants, measurement errors are robust to perturbations.
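The stability bounds α and β of equation (5) can be probed empirically for a candidate dictionary by sampling random sparse vectors and recording the extreme ratios ||Dx||²/||x||². This is an illustrative numerical check, not the analytical RIP constant:

```python
import numpy as np

def empirical_stability_bounds(D, k, trials=500, rng=None):
    """Estimate (alpha, beta) of eq. (5) over random k-sparse vectors."""
    rng = rng or np.random.default_rng(0)
    N, K = D.shape
    lo, hi = np.inf, 0.0
    for _ in range(trials):
        x = np.zeros(K)
        idx = rng.choice(K, size=k, replace=False)   # random sparse support
        x[idx] = rng.standard_normal(k)
        ratio = np.linalg.norm(D @ x) ** 2 / np.linalg.norm(x) ** 2
        lo, hi = min(lo, ratio), max(hi, ratio)
    return lo, hi

rng = np.random.default_rng(2)
# Gaussian dictionary scaled so columns have unit norm in expectation
D = rng.standard_normal((64, 128)) / np.sqrt(64)
alpha, beta = empirical_stability_bounds(D, k=4, rng=rng)
```

For such a random dictionary the observed ratios cluster around one, i.e. the operator is an approximate isometry on sparse vectors, consistent with the RIP discussion above.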
[0057] A measurement operator (a well calibrated dictionary) which is stable, invertible and robust to perturbations is useful for disaggregation of linear mixtures of sparse sources (source separation). However, when computing sparse approximations using a dictionary, there is an inherent tradeoff between the approximation error, stability, and the sparsity measure k, leading to a tradeoff between data approximation and disaggregation accuracy.
[0058] Overcomplete dictionaries with K > N are useful for capturing significant features (transient and steady state) of the power waveforms and allow sparse representations. Typically, the coherence of overcomplete dictionaries is high (close to unity), making them unsuitable for source separation. The tradeoff between sparsity and coherence can be managed by an appropriate choice of the size of the dictionary (K). The steps to learn an overcomplete dictionary with bounded coherence comprise:
Learning an initial dictionary D using the K-SVD algorithm on multiple measurements of the data in the matrix S.
Computing the Gram matrix G = D^T D and applying two constraints:
i) A structural constraint which thresholds the off-diagonal elements G_ij:
G_ij ← sign(G_ij) · min(|G_ij|, μ_0), i ≠ j ………………………. (8)
ii) A spectral constraint on the eigenvalues of the Gram matrix is applied.
[0059] In an embodiment the spectral constraint on the eigenvalues of the Gram matrix is applied such that the eigenvalue decomposition of the Gram matrix is calculated, G = VΛV^T with Λ = diag(λ_1, …, λ_K), and a new diagonal matrix Λ_N is obtained by retaining only the N largest eigenvalues from Λ and thresholding the remaining eigenvalues to zero.
[0060] In an embodiment of the subject matter disclosed herein the non-zero eigenvalues lie within the bounds of equation (9):
1 − (K − 1)μ_0 ≤ λ_i ≤ 1 + (K − 1)μ_0 ………………….. (9)
[0061] In an embodiment the steps to learn an overcomplete dictionary with bounded coherence further comprise updating the dictionary using Λ_N and V_N (the eigenvectors of the N retained eigenvalues) as per equation (10),
D ← Λ_N^{1/2} V_N^T ………………………..…… (10)
and applying a rotation on the dictionary.
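The structural and spectral constraints of equations (8)-(10) can be rendered as a short numerical sketch. The clipping scheme, rank truncation and renormalization below are simplified, illustrative choices and not necessarily the exact projection of the disclosure:

```python
import numpy as np

def coherence(D):
    G = np.abs(D.T @ D)
    np.fill_diagonal(G, 0.0)
    return G.max()

def decorrelate(D, mu0):
    """One projection of the Gram matrix per eqs. (8)-(10), simplified."""
    N, K = D.shape
    G = D.T @ D
    # (8) structural constraint: clip off-diagonal entries to [-mu0, mu0]
    off = G - np.diag(np.diag(G))
    G = np.clip(off, -mu0, mu0) + np.eye(K)
    # spectral constraint: keep the N largest eigenvalues, zero the rest
    w, V = np.linalg.eigh(G)                   # eigenvalues in ascending order
    w[:-N] = 0.0
    # (10) factor the rank-N Gram matrix back into an N x K dictionary
    D_new = (V * np.sqrt(np.maximum(w, 0.0))).T[-N:]
    D_new /= np.linalg.norm(D_new, axis=0)     # renormalize atoms to unit norm
    return D_new

rng = np.random.default_rng(3)
D = rng.standard_normal((16, 32))
D /= np.linalg.norm(D, axis=0)
mu_before = coherence(D)
D2 = decorrelate(D, mu0=0.5 * mu_before)       # tighten the coherence threshold
mu_after = coherence(D2)
```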
[0062] In an embodiment applying rotation on the dictionary comprises the steps of a) computing the encoding matrix by solving the regularized regression problem to enforce sparsity on the encoding matrix as per equation (11),
X̂ = argmin_X ||S − DX||_F^2 + λ||X||_1 ……………………… (11)
b) computing the covariance between the observations and their current approximation as per equation (12),
C = S(DX̂)^T ……………….. (12)
c) computing the Singular Value Decomposition (SVD) of the covariance such that C = UΣV^T,
d) computing W = UV^T and applying this rotation operator to obtain the dictionary using equation (13),
D ← WD ………………….. (13)
where W is an orthonormal rotation operator.
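The rotation steps a)-d) above can be sketched numerically. For brevity, the sparse coding of equation (11) is replaced here by a simple per-column thresholding of analysis coefficients, an illustrative stand-in for a full ℓ1 solver:

```python
import numpy as np

rng = np.random.default_rng(6)
N, K, T = 16, 32, 100
S = rng.standard_normal((N, T))                    # training observations
D = rng.standard_normal((N, K))
D /= np.linalg.norm(D, axis=0)                     # unit-norm atoms

# a) sparse coding: keep roughly the 10% largest coefficients per column
#    (a crude surrogate for solving the lasso of eq. 11)
X = D.T @ S
thresh = np.quantile(np.abs(X), 0.9, axis=0)
X[np.abs(X) < thresh] = 0.0

# b) covariance between observations and their current approximation (eq. 12)
C = S @ (D @ X).T                                  # N x N

# c) SVD of the covariance: C = U Sigma V^T
U, _, Vt = np.linalg.svd(C)

# d) rotation operator W = U V^T applied to the dictionary (eq. 13)
W = U @ Vt
D_rot = W @ D
```

A rotation preserves atom norms and the Gram matrix, so the coherence achieved by the projection steps is untouched while the dictionary is re-aligned with the data.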
[0063] In an embodiment, the steps of computing the Gram matrix, updating the dictionary and applying rotation on the dictionary are repeated on the rotated dictionary for a predetermined number of iterations. At every iteration, the coherence threshold μ_0 is reduced by a step until the coherence comes close to the Welch (lower) bound on the coherence of a dictionary, given by equation (14):
………………….. (14)
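Assuming equation (14) is the standard Welch bound for K unit-norm atoms in dimension n, namely sqrt((K − n)/(n(K − 1))), the stopping criterion above can be checked numerically. The sizes 125 and 130 below are taken from Table 1; the helper names are illustrative.

```python
import numpy as np

def welch_bound(n, K):
    """Standard Welch lower bound on the coherence of K unit-norm atoms in R^n (K >= n)."""
    return np.sqrt((K - n) / (n * (K - 1)))

def coherence(D):
    """Largest absolute inner product between distinct normalized atoms of D."""
    Dn = D / np.linalg.norm(D, axis=0)
    G = np.abs(Dn.T @ Dn)
    np.fill_diagonal(G, 0.0)
    return G.max()

# with the Table 1 sizes (measurement vector 125, dictionary cardinality 130)
mu_min = welch_bound(125, 130)
```

For a square orthonormal basis the bound is zero and is attained, which is why an overcomplete dictionary (K > n) always has strictly positive coherence.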
[0064] The final dictionary allows a sparse approximation of the power waveform of an appliance, with a sparsity measure dependent on the final value of coherence used in the dictionary learning steps described in the aforementioned paragraphs.
[0065] In an embodiment once the data models for each appliance are obtained, and interpreted as measurement models, the models are then calibrated to the data in order to make inferences based on the measurements. In an embodiment calibration of measurement models is required due to appliance component, load and usage pattern variation. In another embodiment calibration involves a systematic adjustment of model parameters so that model outputs accurately reflect external benchmarks. In an exemplary embodiment when the data model in terms of a dictionary is interpreted as a measurement model, the model parameters are sparsity, coherence and isometry. The benchmarks are set in terms of tolerance and accuracy by the training signals used to derive the data model. The calibration involves estimating the model parameters, running the model (reconstructing the signal from its measurement), assessing the results in terms of the benchmark tolerance and accuracy, and adjusting the model parameters accordingly.
[0066] In an embodiment of the subject matter disclosed herein, after defining data and measurement models for data from appliances, data and measurement models for aggregate data from all appliances are defined. In an embodiment, in order to detect features on aggregate data for classification, a test statistic on the measurements, based on the measurement model, is defined. This measurement model characterizes a frame of reference which can give measurements from which the observations can be estimated optimally. The right frame of reference (representational basis) aids in the detection and estimation of features of the appliances. An accurate detection of a linear mixture of features aids in an optimal estimation of the features of individual appliances.
[0067] In an aspect of the present subject matter, the data model and measurement model for aggregate data are disclosed. In an embodiment, for representation/measurement of an aggregate of sources which are sparse in their learnt dictionaries (subspaces), a union of subspaces is used as a measurement model for measurement of the aggregate signal. This measurement model constructed as a union of subspaces needs to be invertible and stable, wherein invertibility and stability are related to the restricted isometry property (RIP) of the measurement/sampling operator. A sampling operator based on a union of subspaces needs to satisfy the block RIP property. This block RIP property is satisfied if the individual subspaces are well conditioned (condition number close to unity).
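The well-conditioned-block requirement above is easy to test: the condition number of a sub-dictionary is the ratio of its largest to smallest singular value. A short sketch, with an orthonormalized random block as an illustrative example:

```python
import numpy as np

def condition_number(D):
    """Ratio of largest to smallest singular value of a sub-dictionary;
    the block RIP for the union holds when every block is well conditioned."""
    s = np.linalg.svd(D, compute_uv=False)
    return s[0] / s[-1]

# an orthonormalized block attains the ideal condition number of unity
Q, _ = np.linalg.qr(np.random.default_rng(6).standard_normal((8, 4)))
```

A block with condition number near unity distorts no direction of its subspace more than any other, which is what makes the union sampling operator stably invertible.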
[0068] In an embodiment, the measurement model for two sources may be constructed as per the following equation (15).
……… (15)
Further, the union of dictionaries is constructed as:
………………. (16)
where and is a zero matrix of size .
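One plausible reading of equation (15) is that the aggregate observation is the concatenated dictionary applied to the stacked encoding vectors of the two sources. The sketch below uses hypothetical sizes; the variable names are illustrative, not from the specification.

```python
import numpy as np

rng = np.random.default_rng(2)
n, K = 6, 9                               # illustrative sizes
D1 = rng.standard_normal((n, K)); D1 /= np.linalg.norm(D1, axis=0)
D2 = rng.standard_normal((n, K)); D2 /= np.linalg.norm(D2, axis=0)

# union of the two subspaces as one concatenated measurement operator:
# y = D_union @ [x1; x2] models the aggregate of the two sources
D_union = np.hstack([D1, D2])

# an aggregate observation is the linear mixture of the two sources
x1 = np.zeros(K); x1[0] = 2.0             # appliance 1 uses one atom
x2 = np.zeros(K); x2[3] = -1.0            # appliance 2 uses another
y = D1 @ x1 + D2 @ x2
```

The block structure of the stacked encoding vector is what later lets a block sparse solver attribute portions of the aggregate signal to individual appliances.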
[0069] In another embodiment the condition number of the two sub-spaces may be made close to unity by reducing their coherence. In an embodiment the condition number may be made close to unity in a manner similar to the process described for data models. The measurement model constructed over a union of subspaces yields measurements which can be used to define a test statistic to detect events of overlap of operational modes of the two appliances. In an aspect these regions of overlap may be used to disaggregate power profiles of individual appliances.
[0070] In an aspect, on aggregate power observations, the following optimization problem may be solved, ……………. (17). Further, the norm of the estimated encoding vector (measurement) over the union of subspaces may be used as a detection test statistic.
[0071] In an embodiment, where is the detection threshold, the following two hypotheses, as shown in equation (18), may be tested.
……………….. (18)
[0072] The choice of detection threshold has the following implications. Under the decision (a misdetection), there is a distortion in the estimates of features in , and under the decision (a false alarm), there is a residual interference in the estimation of features in .
[0073] In accordance with an embodiment of the present subject matter, in order to achieve a balance in the tradeoff between the effect of distortion and the effect of residual interference, the value of T may be chosen at the point of intersection of the distributions of the norm test statistic under the two hypotheses in equation (18). This minimizes the probabilities of false alarm and misdetection.
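Choosing T at the intersection of the two test-statistic distributions can be sketched with synthetic data. The Gaussian shapes, means, and variances below are illustrative assumptions; in practice the two distributions would be estimated from training windows with and without overlap.

```python
import numpy as np

rng = np.random.default_rng(7)
# synthetic test-statistic samples under the two hypotheses (assumed Gaussian)
s_h0 = rng.normal(1.0, 0.3, 5000)         # H0: no overlap of operating modes
s_h1 = rng.normal(3.0, 0.5, 5000)         # H1: overlap present

# histogram estimates of the two densities on a common grid
bins = np.linspace(0.0, 5.0, 200)
p0, _ = np.histogram(s_h0, bins=bins, density=True)
p1, _ = np.histogram(s_h1, bins=bins, density=True)
centers = 0.5 * (bins[:-1] + bins[1:])

# pick T where the densities cross, searching between the two means
mid = (centers > 1.0) & (centers < 3.0)
T = centers[mid][np.argmin(np.abs(p0[mid] - p1[mid]))]
```

Samples above `T` are declared overlaps; by construction this threshold balances the false-alarm mass of H0 against the misdetection mass of H1.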
[0074] In an aspect of the present subject matter, the waveform features of individual appliances are estimated by solving, over the union of subspaces, an optimization problem for the representation coefficients of each appliance's data model.
[0075] In another aspect, the optimization problem to be solved for estimating data model representational coefficients with sparse and approximately disjoint encoding supports is given by equation (19).
…………….. (19)
[0076] Equation (19) is a weighted-norm block sparse optimization problem. This weighted regularized regression formulation induces sparsity on the individual coefficient estimates for a representation of on . An appropriate choice of the penalizing weights results in estimates of the encoding coefficients with approximately disjoint and sparse supports, which allows disaggregation of the power profiles of individual appliances.
[0077] In accordance with the subject matter disclosed herein, the weights are estimated by a method comprising the following iterative steps: (i) initializing the weights; (ii) solving the block sparse optimization problem of equation (19); (iii) using the representation residual error to re-compute the weights for each block as shown in equation (20); and (iv) re-solving equation (19) with the updated weights.
……… (20)
[0078] In an embodiment the iteration is stopped after the weights for the two blocks are maximally separated. Further, once the optimal synthesis coefficients are estimated, they may be used to reconstruct the individual features using an analysis sparse optimization approach. In an embodiment the individual features may further be used to achieve a trade-off between a) approximation of data within a class , and b) approximation of the aggregate data , .
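The iterative re-weighting of paragraph [0077] can be sketched with a toy example. This is a stand-in under stated assumptions: the block sparse problem of equation (19) is replaced by a weighted block-ridge step, the two sub-dictionaries are orthogonal slices of the identity, and the residual-based weight update of equation (20) is approximated by `1 + residual`.

```python
import numpy as np

n = 6
D1 = np.eye(n)[:, :3]                     # toy orthogonal sub-dictionaries (hypothetical)
D2 = np.eye(n)[:, 3:]
blocks, K = [D1, D2], 3
y = D1 @ np.array([3.0, 0.0, 0.0])        # only appliance 1 is active

lam = 1e-3
w = np.ones(2)                            # (i) initialize the block weights
for _ in range(5):
    # (ii) weighted block-ridge step, an l2 stand-in for equation (19)
    Dall = np.hstack(blocks)
    P = np.diag(np.repeat(lam * w, K))    # per-block penalty
    z = np.linalg.solve(Dall.T @ Dall + P, Dall.T @ y)
    x = [z[:K], z[K:]]
    # (iii) recompute each block weight from its representation residual:
    #       a block that explains y poorly receives a larger penalty
    r = np.array([np.linalg.norm(y - blocks[b] @ x[b]) for b in range(2)])
    w = 1.0 + r                           # (iv) re-solve with the updated weights
```

The inactive block's weight grows while the active block's weight stays near one, so the weights separate and the inactive block's coefficients are driven to zero, which is the stopping condition described above.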
[0079] Further, once the representative coefficients of the individual sources are estimated from (19), the individual power waveform features are reconstructed based on an analysis sparsity approach. The optimization problem to be solved to reconstruct features of each appliance is given by equation (21):
…………. (21)
wherein is the Moore-Penrose pseudo-inverse operator computed as per equation (22):
……………. (22).
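Assuming equation (22) is the usual normal-equations form of the Moore-Penrose pseudo-inverse for a full-column-rank operator, it can be verified against NumPy's SVD-based implementation:

```python
import numpy as np

rng = np.random.default_rng(5)
n, K = 8, 5
D = rng.standard_normal((n, K))           # full column rank with probability one

# Moore-Penrose pseudo-inverse via the normal equations, one reading of equation (22)
D_pinv = np.linalg.inv(D.T @ D) @ D.T
```

Applied on the left, this pseudo-inverse recovers the encoding exactly (`D_pinv @ D` is the identity), which is what the analysis sparse reconstruction of equation (21) relies on.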
[0080] The analysis sparsity based reconstruction in equation (21) is well suited for observations with lower sampling rates, as there is a norm regularization on the estimates. However, the estimate of the disaggregated signal has three errors in its approximation, as per equation (23), …………….. (23)
[0081] is the error due to interference from a partial reconstruction of features of appliance 2 based on the dictionary of appliance 1 or vice versa. This can be reduced by reducing the cross-coherence of atoms from the dictionaries of the two appliances. This cross-coherence is defined as per equation (24).
…………………. (24)
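Assuming equation (24) is the usual definition of cross-coherence, namely the maximum absolute inner product between unit-norm atoms drawn from the two dictionaries, it can be computed as:

```python
import numpy as np

def cross_coherence(D1, D2):
    """Maximum absolute inner product between unit-norm atoms of two
    dictionaries (one reading of equation (24))."""
    A = D1 / np.linalg.norm(D1, axis=0)
    B = D2 / np.linalg.norm(D2, axis=0)
    return np.max(np.abs(A.T @ B))
```

Orthogonal sub-dictionaries have cross-coherence zero (no interference between the two appliances' features), while identical dictionaries have cross-coherence one; the disclosed approach reduces this quantity to shrink the interference error in equation (23).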
[0082] is the error due to noise in the measurements. This can be reduced by designing a well-conditioned dictionary with a small self-coherence (approaching the Welch bound). is the error due to numerical artifacts of the block sparse optimization. In an embodiment the weights used for the block sparse approximation in equation (19) and the sparsity penalty on the analysis sparsity optimization problem in equation (21) may be tuned to the training data for reducing the approximation errors in equation (23).
[0083] It may be understood that the data and measurement models described can be extended to more than two appliances. In an exemplary embodiment the detection and estimation of three appliances can be formulated in terms of a general measurement model constructed as
………………… (25)
wherein the weights may be computed as described in the preceding paragraphs.
[0084] In an example of working the subject matter disclosed herein, appliance power waveforms were extracted from the Reference Energy Disaggregation Dataset (REDD), at a sampling rate of 1 sample/sec. Dictionaries were learnt for each appliance. A union of dictionaries was constructed as a measurement model for detection of overlaps of power profiles over aggregate data. Once overlapped power profiles were detected, a block sparse approximation was solved to estimate the representational coefficients of each appliance. An analysis sparse optimization problem was then solved using the estimated coefficients, to estimate the power profile features of each appliance. Table 1 gives the numerical values of the parameters used to derive the data and measurement models for detection and estimation of the Refrigerator and Microwave power profiles.
| Parameter | Parameter value |
|---|---|
| Measurement vector size | 125 |
| Dictionary cardinality | 130 |
| Coherence bound | 0.01 |
| Sparsity | 100 |
| Size of union of dictionaries | |
| Detection threshold | 80000 |
| Penalty on block sparse synthesis optimization | |
| Number of iterations to reach optimal penalty weights | 20 |
| Sparsity penalty for analysis | |
Table 1
[0085] In the example the power data for refrigerator and microwave were collected from the REDD data set at a sampling rate of 1 sample/sec.
[0086] Referring to figure 6, a graphical representation of the implementation of the subject matter disclosed herein on a refrigerator is illustrated. As shown in figure 6, a training signal is provided for a refrigerator and a dictionary-based signal is later reconstructed for the refrigerator. As illustrated in the figure, the reconstructed signal very closely resembles the training signal. Similarly, referring to figure 7, a graphical representation of the implementation of the disclosed subject matter on a microwave oven is illustrated. A training signal for a microwave is provided and a dictionary-based signal is later reconstructed; again, the reconstructed signal captures all the properties of the training signal.
[0087] Figure 8 illustrates the detection test statistic for evaluation of test parameters. In an embodiment the norm magnitude is computed on the measured aggregate power for the example implementation of the disclosed subject matter, wherein the graph represents the norm of the encoding vector over the union of dictionaries and the detection test statistic is maximum at window index 70. Power waveforms at window index 70 are taken for disaggregation as illustrated in figure 9.
[0088] Referring to figure 9, window index 70 from figure 8 is illustrated in accordance with an embodiment of the disclosed subject matter. According to the graphical representation of figure 9, in the example, the number of samples per window is 125. The window index of 70 corresponds to 145 minutes, and the overlap of power profiles is seen at 145 minutes in the figure. As per the example, the linear mixture of power profiles in this window is further used for disaggregation.
[0089] Figure 10 illustrates a graphical representation of the overlap of the two power profiles, of the refrigerator and the microwave oven. In the example the overlap region of the power profiles in the time interval of (145,146) minutes is shown. As per the subject matter disclosed herein, the signals over this window add linearly to give the aggregate power profile.
[0090] Further, the disaggregation algorithm was applied to the aggregate power of the refrigerator and the microwave oven. The results are illustrated in graphical form in figure 11 and figure 12.
[0091] Referring to figure 11, the estimated power profile of the refrigerator on the time interval (145,146) minutes is shown, wherein the test signal and the disaggregated signal for the refrigerator are plotted in a power (watts) versus time graph. Referring further to figure 12, the estimated power profile of the microwave oven over the interval (145,146) minutes is shown as a dashed line. In an embodiment the slight scaling of the estimate may be controlled by changing the weighting in the block sparse optimization algorithm.
[0092] Figure 13 illustrates a plot of the variation of the penalty weights as a function of the iteration index when solving the block sparse optimization problem, as per the embodiment of the disclosed subject matter. As shown in figure 13, in the example, maximal separation of the weights is reached over 7 iterations.
[0093] It may be understood by a person skilled in the art that although the subject matter disclosed herein is illustrated with reference to certain embodiments, this is in no way intended to limit the scope of the subject matter disclosed herein, which is limited only by the following claims; further, the method and system disclosed may be implemented in embodiments other than those disclosed in this application.
CLAIMS:
1. A method for dictionary learning based power waveform disaggregation of at least one appliance, said method comprising the processor implemented steps of:
acquiring data from each of the at least one appliance, using a data acquisition module (208A);
generating a data model wherein the generated data model extracts features of each of the at least one appliance, by implementing feature extraction techniques, using a data model generation module (208B);
generating a measurement model and calibrating the measurement model to measure the data based on at least one from a group of a type of power to be measured, a power level resolution and a sampling frequency, using a measurement model generation module (208C); and
generating an inference model and implementing said inference model for estimating individual signals and classification of power features data based on prior knowledge of statistical and structural properties of the data, wherein the presence or absence of events representative of load features is inferred on the acquired data based on a predetermined test statistic, using an inference model generation module (208D).
2. The method according to claim 1 wherein data acquisition is based on a predetermined data acquisition strategy wherein the predetermined data acquisition strategy determines the sampling rate and power measurement resolution requirements.
3. The method according to claim 1 wherein the features extracted from each of the at least one appliance are represented in terms of data dependent orthonormal basis functions.
4. The method according to claim 1 wherein the data model is generated by learning a dictionary of orthonormal basis functions wherein a linear combination of the orthonormal basis functions leads to an efficient representation of appliance features using the data model generation module.
5. The method according to claim 4 wherein the dictionary of orthonormal basis functions is flexible such that the dictionary adapts to spatial, temporal and usage pattern variability.
6. A system (102) for dictionary learning based power waveform disaggregation of at least one appliance, comprising a processor (202) and a memory (204) operatively coupled with said processor, the system comprising:
a data acquisition module (208A) configured to acquire data from each of the at least one appliance;
a data model generation module (208B) configured to generate a data model wherein the generated data model captures all the essential features of each of the at least one appliance, by implementing feature extraction techniques;
a measurement model generation module (208C) configured to generate a measurement model and calibrate the measurement model to measure the data based on at least one from a group of a type of power to be measured, a power level resolution and a sampling frequency; and
an inference model generation module (208D) configured to generate an inference model and implement said inference model for estimating individual signals and classification of power features data based on prior knowledge of statistical and structural properties of the data, wherein the presence or absence of events representative of load features is inferred on the acquired data based on a predetermined test statistic.
7. The system according to claim 6 wherein the data acquisition module (208A) is configured to acquire data based on a predetermined data acquisition strategy wherein the predetermined data acquisition strategy determines the sampling rate and power measurement resolution requirements.
8. The system according to claim 6 wherein the data model generation module (208B) is configured to generate the data model by learning a dictionary of orthonormal basis functions wherein a linear combination of the orthonormal basis functions leads to an efficient representation of appliance features using the data model generation module.
| # | Name | Date |
|---|---|---|
| 1 | Form 3 [11-03-2016(online)].pdf | 2016-03-11 |
| 2 | Drawing [11-03-2016(online)].pdf | 2016-03-11 |
| 3 | Description(Provisional) [11-03-2016(online)].pdf | 2016-03-11 |
| 4 | Form 3 [10-03-2017(online)].pdf | 2017-03-10 |
| 5 | Form 18 [10-03-2017(online)].pdf | 2017-03-10 |
| 6 | Drawing [10-03-2017(online)].pdf | 2017-03-10 |
| 7 | Description(Complete) [10-03-2017(online)].pdf_282.pdf | 2017-03-10 |
| 8 | Description(Complete) [10-03-2017(online)].pdf | 2017-03-10 |
| 9 | Assignment [10-03-2017(online)].pdf | 2017-03-10 |
| 10 | ABSTRACT 1.jpg | 2018-08-11 |
| 11 | 201621008625-FORM 26-130416.pdf | 2018-08-11 |
| 12 | 201621008625-FORM 1-130416.pdf | 2018-08-11 |
| 13 | 201621008625-CORRESPONDENCE-130416.pdf | 2018-08-11 |
| 14 | 201621008625-OTHERS [26-05-2021(online)].pdf | 2021-05-26 |
| 15 | 201621008625-FER_SER_REPLY [26-05-2021(online)].pdf | 2021-05-26 |
| 16 | 201621008625-COMPLETE SPECIFICATION [26-05-2021(online)].pdf | 2021-05-26 |
| 17 | 201621008625-CLAIMS [26-05-2021(online)].pdf | 2021-05-26 |
| 18 | 201621008625-FER.pdf | 2021-10-18 |
| 19 | 201621008625-US(14)-HearingNotice-(HearingDate-13-02-2023).pdf | 2023-01-13 |
| 20 | 201621008625-FORM-26 [08-02-2023(online)].pdf | 2023-02-08 |
| 21 | 201621008625-FORM-26 [08-02-2023(online)]-1.pdf | 2023-02-08 |
| 22 | 201621008625-Correspondence to notify the Controller [08-02-2023(online)].pdf | 2023-02-08 |
| 23 | 201621008625-Written submissions and relevant documents [23-02-2023(online)].pdf | 2023-02-23 |
| 24 | 201621008625-PatentCertificate27-12-2023.pdf | 2023-12-27 |
| 25 | 201621008625-IntimationOfGrant27-12-2023.pdf | 2023-12-27 |