
Method Of Storing And Retrieving Input Data In An Electronic Device And The Electronic Device Thereof

Abstract: The present invention provides a method and system for encoding and decoding of sparse and non-sparse data using compressed sensing. The method includes designing measurement matrices for compressed sensing using prime number theory, modifying the designed measurement matrices, encoding non-sparse data and decoding the original input data with zero error for integer data-types using compressed sensing. The method further includes encoding non-sparse data and decoding the original input data with very high accuracy for real-valued data-types using compressed sensing. Figure 1


Patent Information

Application #:
Filing Date: 03 March 2014
Publication Number: 52/2015
Publication Type: INA
Invention Field: COMPUTER SCIENCE
Status:
Email: bangalore@knspartners.com
Parent Application:
Patent Number:
Legal Status:
Grant Date: 2023-02-24
Renewal Date:

Applicants

SAMSUNG R&D INSTITUTE INDIA – BANGALORE PRIVATE LIMITED
# 2870, ORION Building, Bagmane Constellation Business Park, Outer Ring Road, Doddanakundi Circle, Marathahalli Post, Bangalore -560037, Karnataka, India

Inventors

1. KIZHAKKEMADAM, Sriram
Employed at Samsung R&D Institute India – Bangalore Private Limited, having its office at # 2870, ORION Building, Bagmane Constellation Business Park, Outer Ring Road, Doddanakundi Circle, Marathahalli Post, Bangalore -560037, Karnataka, India
2. NAGARAJ, Nithin
Employed at Samsung R&D Institute India – Bangalore Private Limited, having its office at # 2870, ORION Building, Bagmane Constellation Business Park, Outer Ring Road, Doddanakundi Circle, Marathahalli Post, Bangalore -560037, Karnataka, India

Specification

RELATED APPLICATION

Benefit is claimed to Indian Provisional Application No. 1071/CHE/2014 titled "METHOD AND SYSTEM FOR ENCODING AND DECODING OF SPARSE AND NON-SPARSE DATA USING COMPRESSED SENSING" filed on 3 March 2014, which is herein incorporated in its entirety by reference for all purposes.

FIELD OF INVENTION

The present invention relates to the field of data processing and more particularly relates to a method and apparatus for storing and retrieving input data in an electronic device using compressed sensing.

BACKGROUND OF THE INVENTION

Data is stored in electronic devices, such as image processing devices, after being compressed. Consider a system in which the collected data has some sparsity and has to be compressed efficiently. Such examples arise in fields such as digital imaging, medical imaging, etc. In particular, in the field of medical imaging, the very large data size produced by the sensing apparatus (such as MRI) brings about the necessity of large data servers to store the data of several patients. Compression helps decrease the overhead of storing the sensed data. Recently, Compressed Sensing (CS) has been proposed as a mechanism to decrease data storage requirements. When data is acquired using compressed sensing, it needs to be sparse in order to achieve perfect recovery at the receiver. Non-sparse data cannot be recovered by existing CS decoders.

There are several sensing matrices for Compressed Sensing. These matrices need to satisfy a property known as the Restricted Isometry Property (RIP). However, all known designs in the literature have drawbacks, such as the inability to sense and perfectly recover non-sparse data, and the requirement of a sparsifying transformation at the receiver when recovering compressed measurements of non-sparse data. Such a transformation necessarily results in a loss of recovery at the receiver, however small the loss may be. Likewise, the minimum number of measurements needed to recover sparse data is roughly around four times the sparsity.

Hence, there exists a need for a method and a device for effective compressed storage of data and perfect recovery.

SUMMARY

An objective of present invention is to design a method and system for storing and retrieving input data in an electronic device. An aspect of present invention discloses a method of storing and retrieving input data in an electronic device. The method comprises encoding the input data based on a super additive sequence having a pre-defined length, storing the encoded input data as a vector of compressed sensing measurement, decoding the stored vector of compressed sensing measurement using a L1 normalization and retrieving the input data from the decoded vector of compressed sensing measurement. The input data is one of sparse and non-sparse data. The super additive sequence is a sequence of increasing integers such that every element is greater than the sum of all its previous elements.

In the present invention, encoding the input data based on a super additive sequence having a pre-defined length comprises selecting a pre-defined length for the super additive sequence based on the weight of the input data, generating a sensing matrix having a pre-defined number of rows and a pre-defined number of columns, replacing one or more pre-defined elements of the sensing matrix with the elements of the super additive sequence, and performing compressed sensing in order to generate the vector of compressed sensing measurement. Likewise, replacing one or more pre-defined elements of the sensing matrix with the elements of the super additive sequence according to one embodiment of present invention comprises determining the number of ones and zeros in each row of the sensing matrix and replacing the ones in the one or more rows where the number of ones is less than the pre-defined length of the super additive sequence. Also, decoding the stored vector of compressed sensing measurement using a L1 normalization in view of present invention comprises regenerating the sensing matrix from the stored vector of compressed sensing measurement to compute an input vector corresponding to the input data, performing the L1 minimization of a residue vector in order to estimate the input data, and retrieving the input data by computing the sum of the input vector and the residue vector.

Another aspect of present invention teaches a method of storing and retrieving input data in an electronic device. The method comprises encoding the input data using a prime number matrix having one or more rows and one or more columns, storing the encoded input data in the electronic device and decoding the stored input data by computing an absolute of the prime number matrix with the stored input data. The number of rows and the number of columns in the prime number matrix are selected based on characteristics of the input data to be stored. The method further comprises selecting N prime numbers for the prime number matrix based on the input data to be stored. Yet another aspect of present invention discloses an electronic device for storing and retrieving compressed data comprising an encoder adapted for encoding the input data based on a super additive sequence having a pre-defined length, a storage module coupled with the encoder for storing the encoded input data as a vector of compressed sensing measurement, and a decoder for decoding the stored vector of compressed sensing measurement using a L1 normalization.

BRIEF DESCRIPTION OF THE ACCOMPANYING DRAWINGS

The aforementioned aspects and other features of the present invention will be explained in the following description, taken in conjunction with the accompanying drawings, wherein:

Figure 1 is a flow chart illustrating a method of storing and retrieving input data from an electronic device, according to one embodiment of present invention.

Figure 2 is a flow chart illustrating a method of encoding the input data based on a super additive sequence having a pre-defined length, according to one embodiment of present invention.

Figure 3 is a flow chart illustrating a method of decoding the stored vector of compressed sensing measurement using a L1 normalization, according to one embodiment of present invention.

Figure 4 is a flow chart illustrating a method of storing and retrieving input data from an electronic device, according to another embodiment of present invention.

Figure 5 is a block diagram illustrating an apparatus for compressed sensing, according to one embodiment of present invention.

Figure 6 is a block diagram of an electronic device for storing and retrieving input data, according to one embodiment of present invention.

Figure 7 is a graphical representation illustrating a comparison of the performance of the RIP matrices - Gaussian, BSM and SNM - for non-sparse binary data with number of measurements M = K = N/2 and N taking values of 50, 100, 500 and 1000.

Figure 8 is a graphical representation illustrating the scenario of measuring the consistency of perfect recovery as we change the sparsity setting from sparse to non-sparse, wherein N=100, M=40, K = M and is varied from 0 to 1.3M, thereby changing the binary data vector from sparse to non-sparse.

Figure 9 is a graphical representation illustrating the scenario of measuring the consistency of perfect recovery as we change the sparsity setting from sparse to non-sparse, wherein N=500, M=200, K = M and is varied from 0 to 1.3M, thereby changing the binary data vector from sparse to non-sparse.

Figure 10 is a graphical representation illustrating BER v/s SNR on link from BS1 to MS when SNR on BS2 to MS is at 0 dB for an AWGN channel.

DETAILED DESCRIPTION OF THE INVENTION

The embodiments of the present invention will now be described in detail with reference to the accompanying drawings. However, the present invention is not limited to these embodiments and can be modified in various forms. Thus, the embodiments of the present invention are only provided to explain the present invention more clearly to those of ordinary skill in the art. In the accompanying drawings, like reference numerals are used to indicate like components. The specification may refer to "an", "one" or "some" embodiment(s) in several locations. This does not necessarily imply that each such reference is to the same embodiment(s), or that the feature only applies to a single embodiment. Single features of different embodiments may also be combined to provide other embodiments.

As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless expressly stated otherwise. It will be further understood that the terms "includes", "comprises", "including" and/or "comprising", when used in this specification, specify the presence of stated features, integers, steps, operations, elements and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may be present. Furthermore, "connected" or "coupled" as used herein may include operatively connected or coupled. As used herein, the term "and/or" includes any and all combinations and arrangements of one or more of the associated listed items.

Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure pertains. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.

Figure 1 is a flow chart illustrating a method of storing and retrieving input data from an electronic device, according to one embodiment of present invention. In one embodiment of present invention, non-sparse data can also be encoded and decoded using the compressed sensing framework. The method according to the present invention proposes a new design of the CS matrix. The aforesaid design is based on incorporating special number-theoretic properties into the sensing matrix. Both sparse and non-sparse data can be acquired and perfectly decoded (under certain conditions) according to one embodiment of the present invention. Here, the invention focuses on the recovery of binary data, but with suitable modifications, the design can be adapted for non-binary data as well. In the present embodiment, at step 101 the input data is encoded based on a super additive sequence having a pre-defined length. The method of encoding the input data is explained in detail in Figure 2. As mentioned earlier, the input data is one of sparse and non-sparse data. Likewise, the super additive sequence is a sequence of increasing integers such that every element is greater than the sum of all its previous elements. At step 102, the encoded input data is stored as a vector of compressed sensing measurement. Further, the stored data is decoded and retrieved as indicated in steps 103 and 104. At step 103, the stored vector of compressed sensing measurement is decoded using a L1 normalization; and at step 104 the input data is retrieved from the decoded vector of compressed sensing measurement.

In view of the prior art, there exist Binary Sparse Measurement matrices (BSM) which satisfy the Restricted Isometry Property (RIP) and which can be used as the sensing matrix in a CS framework. Consider a matrix A, where A has M rows, N columns and all entries are binary. The matrix A may have d 1's in every column, where d can be set by the user. Typical values of d are 8, 16 or 32. For each column of the sensing matrix A, d random numbers between 1 and M are generated and 1s are placed in that column, in the rows that correspond to the d random integers. If any of the d numbers is duplicated, the generation is repeated until all d integers are distinct. It is to be noted that no two columns of the binary sensing matrix A are identical. These matrices are binary and sparse, satisfy a weaker form of RIP (RIP-p) that is sufficient for LP-decoding and tolerant to measurement noise, and have an efficient update time equal to the sparsity parameter d. Experimental evidence shows that these matrices are as good as or better than random Gaussian or Fourier matrices, and faster. According to one embodiment, the design of the sensing matrix A incorporates a number-theoretic structure by using prime numbers. However, notice that the matrix A thus designed need not always satisfy RIP. The number of rows of the matrix A is reduced if the input data is sparse, and the number-theoretic structure is then applied to some of the rows of the BSM. In other words, the number-theoretic structure can be used to estimate some of the locations of the 1s in the sparse vector x. The design of the sensing matrix according to one embodiment of present invention satisfies RIP and also has the number-theoretic structure built into some of the rows, and hence is called a Sparse Number-theoretic Matrix (SNM).
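
By way of illustration only, the following Python sketch constructs such a binary sparse measurement matrix under the assumptions stated above (M, N and d chosen by the user, with d not exceeding M); the function name make_bsm and the use of NumPy are assumptions for this sketch and are not part of the patent.

    import numpy as np

    def make_bsm(M, N, d, rng=None):
        """M x N binary matrix with exactly d ones per column and no two columns identical."""
        rng = np.random.default_rng(rng)
        A = np.zeros((M, N), dtype=int)
        seen = set()
        for j in range(N):
            while True:
                rows = tuple(sorted(rng.choice(M, size=d, replace=False)))  # d distinct random rows
                if rows not in seen:        # redraw until this column differs from all earlier ones
                    seen.add(rows)
                    break
            A[list(rows), j] = 1
        return A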

The present invention proposes Super-Additive Sequences (SAS) for compressing the input data in one embodiment of present invention. A Super-Additive Sequence (SAS) F is a sequence of increasing integers such that every element is greater than the sum of all its previous elements. For instance, consider F = {1, 2, 4, 8, 16, 32, 64, 128, 256}. The advantage of such a sequence is that any number V between 0 and max(F) can be decomposed as a sum of members of F in a unique manner. F = {1, 3, 5, 10, 20, 40, 80, 160} is another example of an SAS. Not all numbers in the range 0 to max(F) can be decomposed; for example, the number 2 cannot be decomposed in this second set. However, those numbers which can be decomposed are decomposed in a unique manner. Given the SAS F and a value V, the number V can be decomposed using the Greedy Algorithm as stated below. In order to decompose V as the sum of distinct members of F, the following steps are implemented.

Step 1: Subtract from V the largest number which is less than or equal to V and which belongs to F. Call this number "a". This number definitely belongs to the decomposition. Add "a" to the list D.

Step 2: Replace V with V - a.

Step 3: Repeat Steps 1 and 2 until V becomes zero. At this stage, the list D contains the unique decomposition of V. For instance, let F = {1, 2, 4, 8, 16, 32, 64, 128, 256} and V = 31. Application of the Greedy Algorithm yields V = 16 + 8 + 4 + 2 + 1, which is the unique decomposition of the number 31; a code sketch of this procedure is given below. The aforementioned super additive sequences are used for compressed sensing of input data according to one embodiment of present invention.
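
A minimal Python sketch of this greedy decomposition follows; the function name greedy_decompose is assumed for illustration and is not taken from the patent.

    def greedy_decompose(F, V):
        """Decompose V into a sum of distinct members of the super-additive sequence F."""
        D = []
        for a in sorted(F, reverse=True):   # Step 1: largest member of F not exceeding V
            if a <= V:
                D.append(a)                 # a definitely belongs to the decomposition
                V -= a                      # Step 2: replace V with V - a
        return D if V == 0 else None        # Step 3 ends when V reaches zero; None if V is not representable

    # Example: greedy_decompose([1, 2, 4, 8, 16, 32, 64, 128, 256], 31) returns [16, 8, 4, 2, 1]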

Figure 2 is a flow chart illustrating a method of encoding the input data based on a super additive sequence having a pre-defined length, according to one embodiment of present invention. For a given N, M and d, first construct a BSM as described in Figure 1. Encoding the input data based on a super additive sequence includes selecting a pre-defined length for the super additive sequence based on the weight of the input data at step 201. Then select an SAS F whose cardinality is C, where C is selected by the user based on the weight of the input data to be stored. For example, let F = {1, 2, 4, 8, 16, 32, 64} with C = 7. Further, at step 202, a sensing matrix A is generated having a pre-defined number of rows and a pre-defined number of columns. Then one or more pre-defined elements of the sensing matrix A are replaced with the elements of the super additive sequence at step 203. For each row of the matrix A which contains fewer than C ones, replace every 1 in that row with the next successive element of F. Such a row will have all of its non-zero entries taken from F in place of 1s, and the remaining entries will all be zero. Further, the matrix will be sparse since we started with a BSM. Then, at step 204, compressed sensing is performed with the sensing matrix A in order to generate the vector of compressed sensing measurement. The compressed sensing of the binary data vector x is performed as y = Ax, where y is the vector of CS measurements.
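
Steps 201-204 can be sketched as follows. This is an assumed illustration (the function name sas_encode, the sas_rows bookkeeping, NumPy, and the make_bsm helper from the earlier sketch are ours), not the patent's reference implementation.

    import numpy as np

    def sas_encode(x, M, d, F, rng=None):
        """Encode binary vector x: build a BSM, place SAS elements, then measure y = Ax."""
        N = len(x)
        A = make_bsm(M, N, d, rng).astype(float)   # step 202 (make_bsm from the sketch above)
        C = len(F)
        sas_rows = []                              # remember which rows carry SAS entries
        for i in range(M):                         # step 203: rows with fewer than C ones
            ones = np.flatnonzero(A[i] == 1)
            if 0 < len(ones) < C:
                A[i, ones] = F[:len(ones)]         # replace the 1s with successive SAS elements
                sas_rows.append(i)
        y = A @ np.asarray(x, dtype=float)         # step 204: vector of CS measurements
        return A, y, sas_rows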

Figure 3 is a flow chart illustrating a method of decoding the stored vector of compressed sensing measurement using L1 normalization, according to one embodiment of present invention. In the present invention, the sensing matrix is regenerated from the stored vector of compressed sensing measurement to compute an input vector corresponding to the input data, and at step 302 the L1 minimization of a residue vector is performed in order to approximate the input data. The input data is then retrieved by computing the sum of the input vector and the residue vector. For instance, consider xrec as the input vector of length N x 1, which is computed from the regenerated sensing matrix A. Corresponding to each row-index i of the matrix A for which the number of ones was less than C, perform the Greedy Algorithm on the compressed sensing measurement value V = y(i) to uniquely decompose it into members of F. Record the indices in the i-th row of A which are in the decomposition; let this set of indices be denoted by I. Then set xrec at each index in I to 1, repeating this for every such row i = 1 to M. Having partially estimated xrec, the remaining 1s are found by L1 minimization. It is assumed that the residue vector x1 to be determined is sparse and such that x = xrec + x1. The residue vector x1 is determined based on L1 minimization as follows: x1 = argmin ||x1||_1 subject to A(xrec + x1) = y, where ||x1||_1 indicates the L1-norm of the vector x1.
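
A hedged sketch of this decoding procedure is given below. It reuses greedy_decompose and the sas_rows bookkeeping from the earlier sketches and casts the L1 minimization as a linear program solved with SciPy's linprog; the solver choice and helper names are assumptions for illustration, not part of the patent.

    import numpy as np
    from scipy.optimize import linprog

    def sas_decode(A, y, F, sas_rows):
        """Decode per Figure 3: greedy on SAS rows, then L1-minimise the residue as an LP."""
        M, N = A.shape
        x_rec = np.zeros(N)
        for i in sas_rows:                                               # rows whose entries come from F
            for a in greedy_decompose(F, int(round(float(y[i])))) or []:
                x_rec[np.flatnonzero(A[i] == a)[0]] = 1                  # recovered 1s of x
        # Residue x1: minimise ||x1||_1 subject to A(x_rec + x1) = y, written as x1 = u - v, u, v >= 0.
        r = y - A @ x_rec
        res = linprog(np.ones(2 * N), A_eq=np.hstack([A, -A]), b_eq=r,
                      bounds=(0, None), method="highs")
        x1 = res.x[:N] - res.x[N:]
        return np.rint(x_rec + x1).astype(int)                           # x = x_rec + residue, rounded

For a binary vector x encoded as A, y, sas_rows = sas_encode(x, M, d, F), calling sas_decode(A, y, F, sas_rows) returns the recovered vector under the conditions discussed above.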

Figure 4 is a flow chart illustrating a method of storing and retrieving input data from an electronic device, according to another embodiment of present invention. According to this embodiment, a prime number matrix is used. At step 401, the input data is encoded using a prime number matrix having one or more rows and one or more columns. A binary sparse measurement matrix (BSM), as suggested in the previous section, is modified appropriately to yield a Sparse Number-theoretic Matrix (SNM) which also satisfies RIP. Let x be an input data vector consisting of n integer entries: [x(1), x(2), ..., x(n)]^T. Further, let x be K-sparse, which means that only K < n values are non-zero. For each of these K non-zero values, let the i-th prime number not divide x(i). Then x can be accurately recovered from just K+1 linear measurements. Let the list of prime numbers be denoted by the set P = {2, 3, ..., P(n)}, where P(j) is the j-th prime number. The number of rows and the number of columns in the prime number matrix are selected based on characteristics of the input data to be stored. Construct the prime number matrix (linear measurement matrix) A with 1 row and n columns as: A = [3·5·7···P(n), 2·5·7···P(n), ..., 2·3···P(n-1)].

Note that every column of A consists of the product of exactly n-1 distinct prime numbers from the set P, with the j-th prime number missing from the j-th column. Perform the linear measurement y = Ax. Here, since A is a 1 x n matrix and x is an n x 1 vector, y is an integer value. We have thus performed a single linear measurement. At step 402, the value y is stored in the electronic device. Then, at step 403, the stored input data (y) is decoded by computing an absolute of the prime number matrix with the stored input data. In order to decode, given the integer value y and the matrix A, the K non-zero values of the vector x are to be recovered. For that, the K locations are first recovered as follows. Since, for all i from 1 to n, P(i) does not divide x(i) (for non-zero x(i)), the K locations of the non-zero values of x are recovered as follows. For i = 1 to n, compute ξ(i) = mod(y, P(i)). If the i-th value x(i) is zero, then the i-th column of A does not contribute to y, and every other column that contributes to y contains the i-th prime number; this implies that y is divisible by P(i). Thus, the only way for y not to be divisible by P(i) is for the i-th entry of x to be non-zero. Now, sample x at those locations for which ξ(i) is not equal to zero. There are only K such values. This can be done with a new sensing matrix having K rows and n columns, where each row has a single 1 with the rest of the entries zero; the 1s are placed at the K non-zero indices of the array ξ. Thus, the number of linear measurements required is K + 1. Note that there is no condition on the size of K for this result to be true. In other words, K can be any number from 0 to n and the method works, implying that x could be non-sparse as well.
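
For the binary case discussed next, the single-measurement scheme can be sketched as follows; the hard-coded prime list and the function names are assumptions for illustration only.

    from math import prod

    def prime_row(P):
        """One-row measurement matrix: column j is the product of all primes in P except P[j]."""
        total = prod(P)
        return [total // p for p in P]

    def prime_encode(x, A_row):
        return sum(a * xi for a, xi in zip(A_row, x))   # single integer measurement y = Ax

    def prime_decode(y, P):
        """x(i) is non-zero exactly when P(i) does not divide y (binary x)."""
        return [0 if y % p == 0 else 1 for p in P]

    P = [2, 3, 5, 7, 11, 13]            # first n = 6 primes, hard-coded for illustration
    A_row = prime_row(P)                # [3*5*7*11*13, 2*5*7*11*13, ...]
    x = [1, 0, 1, 1, 0, 0]
    y = prime_encode(x, A_row)
    assert prime_decode(y, P) == x      # a single measurement recovers the binary vector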

If x has only binary entries and is also K-sparse, then the condition that the i-th prime number should not divide x(i) for the K non-zero locations is trivially satisfied, since x(i) is 1 in these K locations and no prime number divides 1. The number of linear measurements required is just 1. For instance, if x is a binary vector, then a single measurement value y obtained from the sensing matrix A above is sufficient for perfect recovery. For the rest of the discussion, we deal with binary x unless otherwise specified. If n is large, then the A matrix given in the above method produces very large integer entries in the encoded input data. Hence, the number of primes in the set P is reduced while the number of rows of A is increased (the measurement is now a vector y = Ax). For example, let us say that we limit P to the first three primes, P = {2, 3, 5}. Now, the A matrix can be suitably modified into a block form in which each row carries one block [3·5, 2·5, 2·3] and all other entries are zero:

A = [ 3·5  2·5  2·3   0    0    0    0    0    0
       0    0    0   3·5  2·5  2·3   0    0    0
       0    0    0    0    0    0   3·5  2·5  2·3 ]

In the above example, the number of rows of A is n/3 (assuming that n is divisible by 3). The number of columns of A is n, as before. For the binary data vector x, the K unknowns are the K locations of the 1s. The number of linear measurements required for perfect recovery is thus n/3. By increasing the number of primes in the set P, the number of rows of A can be reduced, thereby reducing the number of measurements required, but this also increases the dynamic range of the values of the vector y.
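
A sketch of this block variant, assuming n divisible by 3 and the illustrative function names below, is as follows; each measurement y(r) is decoded by the same divisibility test applied within its block.

    import numpy as np

    def block_prime_matrix(n, P=(2, 3, 5)):
        """n/len(P) x n matrix; each row holds one block [3*5, 2*5, 2*3] and zeros elsewhere."""
        k = len(P)
        total = 1
        for p in P:
            total *= p
        block = [total // p for p in P]                  # [15, 10, 6] for P = (2, 3, 5)
        A = np.zeros((n // k, n), dtype=int)
        for r in range(n // k):
            A[r, r * k:(r + 1) * k] = block
        return A

    def block_decode(y, P, n):
        k = len(P)
        x = []
        for r in range(n // k):
            x += [0 if y[r] % p == 0 else 1 for p in P]  # divisibility test within each block
        return x

    A = block_prime_matrix(9)                            # n = 9, so n/3 = 3 measurements
    x = [1, 0, 1, 0, 0, 1, 1, 1, 0]
    y = A @ np.array(x)
    assert block_decode(y, (2, 3, 5), 9) == x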

Figure 5 is a block diagram illustrating an apparatus 500 for compressed sensing, according to one embodiment of present invention. The electronic device (not shown in the figure) for storing and retrieving compressed data comprises a compressed sensing apparatus 500. The compressed sensing apparatus 500 includes an encoder 501 adapted for encoding the input data based on a super additive sequence having a pre-defined length, a storage module 502 coupled with the encoder 501 for storing the encoded input data as a vector of compressed sensing measurement, and a decoder 503 for decoding the stored vector of compressed sensing measurement using a L1 normalization. The encoder 501 is further configured for selecting a pre-defined length for the super additive sequence based on the weight of the input data, generating a sensing matrix having a pre-defined number of rows and a pre-defined number of columns, replacing one or more pre-defined elements of the sensing matrix with the elements of the super additive sequence, and performing compressed sensing in order to generate the vector of compressed sensing measurement. Replacing one or more pre-defined elements of the sensing matrix with the elements of the super additive sequence by the encoder 501 comprises determining the number of ones and zeros in each row of the sensing matrix and replacing the ones in the one or more rows where the number of ones is less than the pre-defined length of the super additive sequence.

The decoder 503 for decoding the stored vector of compressed sensing measurement using the L1 normalization is further configured for regenerating the sensing matrix from the stored vector of compressed sensing measurement to compute an input vector corresponding to the input data, performing the L1 minimization of a residue vector in order to approximate the input data, and retrieving the input data by computing the sum of the input vector and the residue vector.

Figure 6 is a block diagram of an electronic device 600 for storing and retrieving input data, according to one embodiment of present invention. The components described above in Figure 5 may be implemented on any electronic device that stores data using compressive sensing, such as a computer or an MRI scanner, with sufficient processing capability, memory resources and network throughput to handle the necessary workload placed upon it. The electronic device 600 includes a processor 602 (which may be referred to as a central processor unit or CPU) that is in communication with memory devices including secondary storage 603, read only memory (ROM) 605, random access memory (RAM) 606, input/output (I/O) devices 601, and a compressed sensing apparatus 604. The processor 602 may be implemented as one or more CPU chips, or may be part of one or more application specific integrated circuits (ASICs). The secondary storage 603 typically comprises one or more disk drives or tape drives and is used for non-volatile storage of data and as an overflow data storage device if RAM 606 is not large enough to hold all working data. Secondary storage 603 may be used to store programs that are loaded into RAM 606 when such programs are selected for execution. The ROM 605 is used to store instructions and perhaps data that are read during program execution. ROM 605 is a non-volatile memory device that typically has a small memory capacity relative to the larger memory capacity of secondary storage 603. The RAM 606 is used to store volatile data and perhaps to store instructions. Access to both ROM 605 and RAM 606 is typically faster than to secondary storage 603.

Figure 7 is a graphical representation illustrating a comparison of the performance of the RIP matrices - Gaussian, BSM and SNM - for non-sparse binary data with number of measurements M = K = N/2 and N taking values of 50, 100, 500 and 1000. The figure shows the performance of the three RIP matrices - random Gaussian, BSM and SNM - for non-sparse binary data. In each instance, the sparsity is equal to N/2, which corresponds to a uniformly distributed binary vector. The number of measurements in each case is M = N/2. This is the worst-case scenario of non-sparsity, and SNM is able to perfectly recover in a consistent manner for increasing values of the dimension of the input signal x, whereas Gaussian and BSM fail. Also, as expected, BSM outperforms Gaussian.

Figure 8 is a graphical illustration of the scenario of measuring the consistency of perfect recovery as we change the sparsity setting from sparse to non-sparse, wherein N=100, M=40, K = M and is varied from 0 to 1.3M, thereby changing the binary data vector from sparse to non-sparse.

Figure 9 is a graphical illustration of the scenario of measuring the consistency of perfect recovery as we change the sparsity setting from sparse to non-sparse, where N=500, M=200, K = M and is varied from 0 to 1.3M, thereby changing the binary data vector from sparse to non-sparse.

Figure 10 is a graphical representation illustrating BER v/s SNR on the link from BS1 to MS when the SNR on the link from BS2 to MS is at 0 dB for an AWGN channel. The figure depicts the perfect recovery for the three kinds of matrices. SNM performs comparably to the Gaussian RIP matrix, with the added benefit of fast encoding and recovery. For SNM, we have used d = 4 and C = 7. For sparsity factors in the range 0.4 < p < 0.9, SNM clearly outperforms both random Gaussian and BSMs. Consider a system model with two base stations and a single-user downlink setup in which BS2 assists BS1 in transmitting a code word to the MS. At BS2, an SNM is used and its performance is compared with the earlier random Gaussian measurement matrix A. As before, a mother code of rate 1/3 is used when the link from BS2 has a received SNR of 0 dB and the SNR on the link from BS1 is varied. Although the method, apparatus and device of the invention have been described in connection with the embodiments illustrated in the accompanying drawings, the invention is not limited thereto. It will be apparent to those skilled in the art that various substitutions, modifications and changes may be made thereto without departing from the scope and spirit of the invention.

We claim:

1. A method of storing and retrieving input data in an electronic device comprising: encoding the input data based on a super additive sequence having a pre-defined length; storing the encoded input data as a vector of compressed sensing measurement; decoding the stored vector of compressed sensing measurement using a L1 normalization; and retrieving the input data from the decoded vector of compressed sensing measurement.

2. The method as claimed in claim 1, wherein the input data is one of sparse and non-sparse data.

3. The method as claimed in claim 1, wherein the super additive sequence is a sequence of increasing integers such that every element is greater than the sum of all its previous elements.

4. The method as claimed in claim 1, wherein encoding the input data based on a super additive sequence having a pre-defined length comprises: selecting a pre-defined length for the super additive sequence based on the weight of the input data; generating a sensing matrix having a pre-defined number of rows and a pre-defined number of columns; replacing one or more pre-defined elements of the sensing matrix with the elements of the super additive sequence; and performing compressed sensing in order to generate the vector of compressed sensing measurement.

5. The method as claimed in claim 4, wherein replacing one or more pre-defined elements of the sensing matrix with the elements of the super additive sequence comprises: determining the number of ones and zeros in each row of the sensing matrix; and replacing the ones in the one or more rows, where the number of ones is less than the pre-defined length of the super additive sequence.

6. The method as claimed in claim 1, wherein decoding the stored vector of compressed sensing measurement using a L1 normalization comprises: regenerating the sensing matrix from the stored vector of compressed sensing measurement to compute an input vector corresponding to the input data; performing the L1 minimization of a residue vector in order to estimate the input data; and retrieving the input data by computing the sum of the input vector and the residue vector.

7. A method of storing and retrieving input data in an electronic device comprising: encoding the input data using a prime number matrix having one or more rows and one or more columns; storing the encoded input data in the electronic device; and decoding the stored input data by computing an absolute of the prime number matrix with the stored input data.

8. The method as claimed in claim 7, wherein the number of rows and number of columns in the prime number matrix is selected based on characteristics of the input data to be stored.

9. The method as claimed in claim 7, wherein the input data is one of sparse and non-sparse data.

10. The method as claimed in claim 7, further comprising: selecting N prime numbers for the prime number matrix based on the input data to be stored.

11. The method as claimed in claim 7, wherein the Nth element of the prime number matrix comprises all selected prime numbers other than the Nth prime number.

12. An electronic device for storing and retrieving compressed data comprising: an encoder adapted for encoding the input data based on a super additive sequence having a pre-defined length; a storage module coupled with the encoder for storing the encoded input data as a vector of compressed sensing measurement; and a decoder for decoding the stored vector of compressed sensing measurement using a L1 normalization.

13. The device as claimed in claim 12, wherein the input data is one of sparse and non-sparse data.

14. The device as claimed in claim 12, wherein the super additive sequence is a sequence of increasing integers such that every element is greater than the sum of all its previous elements.

15. The device as claimed in claim 12, wherein the encoder is further configured for: selecting a pre-defined length for the super additive sequence based on the weight of the input data; generating a sensing matrix having a pre-defined number of rows and a pre-defined number of columns; replacing one or more pre-defined elements of the sensing matrix with the elements of the super additive sequence; and performing compressed sensing in order to generate the vector of compressed sensing measurement.

16. The device as claimed in claim 12, wherein replacing one or more pre-defined elements of the sensing matrix with the elements of the super additive sequence by the encoder further comprises: determining the number of ones and zeros in each row of the sensing matrix; and replacing the ones in the one or more rows, where the number of ones is less than the pre-defined length of the super additive sequence.

17. The device as claimed in claim 12, wherein the decoder for decoding the stored vector of compressed sensing measurement using the L1 normalization is further configured for: regenerating the sensing matrix from the stored vector of compressed sensing measurement to compute an input vector corresponding to the input data; performing the L1 minimization of a residue vector in order to estimate the input data; and retrieving the input data by computing the sum of the input vector and the residue vector.

Documents

Application Documents

# Name Date
1 POA_Samsung R&D Institute India-new.pdf 2014-03-04
2 2013_SMG_789_Provisional Specification_final draft.pdf 2014-03-04
3 2013_SMG_789_Drawings.pdf 2014-03-04
4 1071-CHE-2014 FORM-5 27-02-2015.pdf 2015-02-27
5 1071-CHE-2014 FORM-2 27-02-2015.pdf 2015-02-27
6 1071-CHE-2014 FORM-1 27-02-2015.pdf 2015-02-27
7 1071-CHE-2014 DRAWINGS 27-02-2015.pdf 2015-02-27
8 1071-CHE-2014 DESCRIPTION(COMPLETE) 27-02-2015.pdf 2015-02-27
9 1071-CHE-2014 CORRESPONDENCE OTHERS 27-02-2015.pdf 2015-02-27
10 1071-CHE-2014 CLAIMS 27-02-2015.pdf 2015-02-27
11 1071-CHE-2014 ABSTRACT 27-02-2015.pdf 2015-02-27
12 abstract- 1071-CHE-2014.jpg 2015-05-22
13 1071-CHE-2014-FORM-26 [03-08-2019(online)].pdf 2019-08-03
14 1071-CHE-2014-FORM 13 [17-08-2019(online)].pdf 2019-08-17
15 1071-CHE-2014-FER.pdf 2019-11-21
16 1071-CHE-2014-OTHERS [18-05-2020(online)].pdf 2020-05-18
17 1071-CHE-2014-FER_SER_REPLY [18-05-2020(online)].pdf 2020-05-18
18 1071-CHE-2014-COMPLETE SPECIFICATION [18-05-2020(online)].pdf 2020-05-18
19 1071-CHE-2014-CLAIMS [18-05-2020(online)].pdf 2020-05-18
20 1071-CHE-2014-US(14)-HearingNotice-(HearingDate-17-01-2023).pdf 2023-01-03
21 1071-CHE-2014-FORM-26 [13-01-2023(online)].pdf 2023-01-13
22 1071-CHE-2014-Correspondence to notify the Controller [13-01-2023(online)].pdf 2023-01-13
23 1071-CHE-2014-FORM-26 [17-01-2023(online)].pdf 2023-01-17
24 1071-CHE-2014-PETITION UNDER RULE 137 [31-01-2023(online)].pdf 2023-01-31
25 1071-CHE-2014-Written submissions and relevant documents [01-02-2023(online)].pdf 2023-02-01
26 1071-CHE-2014-PatentCertificate24-02-2023.pdf 2023-02-24
27 1071-CHE-2014-IntimationOfGrant24-02-2023.pdf 2023-02-24

Search Strategy

1 2020-10-2113-35-23AE_21-10-2020.pdf
2 1071CHE2014SearchStrategy_19-11-2019.pdf
