Abstract: Distributed optimization is an important class of problems in which the objective function as well as the constraints are distributed among a set of parties, which are required to compute the optimization function without revealing their inputs to other parties. However, conventional methods involve computations at multiple parties, which adds computational and memory overheads. The present application provides systems and methods that overcome this disadvantage by using a fully homomorphic encryption (FHE) scheme, avoid communications between the cloud and the target node, and thereby improve the efficiency as well as the security of the above conventional methods while ensuring privacy preserving distributed optimization. More specifically, an FHE scheme is used by the system and method of the present disclosure instead of PHE schemes for encryption of the agents' data. All the parameters and inputs can be encrypted using the FHE scheme, which increases the security level of the private data supplied by the parties. [To be published with FIG. 2]
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION (See Section 10 and Rule 13)
Title of invention: PRIVACY PRESERVING DISTRIBUTED OPTIMIZATION
Applicant
Tata Consultancy Services Limited A company Incorporated in India under the Companies Act, 1956
Having address:
Nirmal Building, 9th floor,
Nariman point, Mumbai 400021,
Maharashtra, India
Preamble to the description:
The following specification particularly describes the invention and the manner in which it is to be performed.
CROSS-REFERENCE TO RELATED APPLICATIONS AND PRIORITY [001] The present application claims priority from Indian provisional application no. 202121024973, filed on June 4, 2021. The entire contents of the aforementioned application are incorporated herein by reference.
TECHNICAL FIELD [002] The disclosure herein generally relates to quadratic problems optimization, and, more particularly, to privacy preserving distributed optimization.
BACKGROUND [003] Distributed optimization is an important class of problems in which the objective function as well as the constraints are distributed among a set of parties. Examples include trajectory optimization for formation control of vehicles, decentralized control of power systems, distributed control of large-scale systems, and problems of distributed optimization on sensor networks. With the advent of AI/ML algorithms on one side and various data protection laws on the other, protecting the data privacy of individual parties (during the optimization process) has become an important aspect. To give an example consider the problem of estimating the state of a dynamical system from privacy-sensitive sensor measurements. For instance, such a problem arises in smart houses where the temperature or energy readings of the sensors are aggregated by a cloud controller and can reveal whether the owners are in the building. The untrusted cloud must collect the measurements and output an initial state estimate of the system to a target agent, while maintaining the privacy of the sensor data and final output. In scenarios like these all the concerned parties need to compute an optimization function without revealing their inputs to other parties. However, involving computations at multiple parties adds additional computational and memory overheads.
SUMMARY
[004] Embodiments of the present disclosure present technological improvements as solutions to one or more of the above-mentioned technical problems recognized by the inventors in conventional systems.
[005] For example, in one aspect, there is provided a processor implemented method for privacy preserving distributed optimization. The method comprises obtaining an input comprising (i) a plurality of encrypted matrices, and (ii) a plurality of encrypted vectors from one or more client devices; encrypting an initialized error parameter value (n) to obtain an encrypted error parameter value (n*); selecting a random vector (μ0) with a pre-defined size and encrypting the random vector (μ0) to obtain an encrypted random vector; and iteratively performing, until a last iteration of a number of pre-defined iterations being initialized: computing nabla (∇(μi*)) based on the plurality of encrypted matrices and the plurality of encrypted vectors; updating the random vector (μ0*) based on the encrypted error parameter value (n*) and the computed nabla ∇(μi*) to obtain an output vector μi*; and updating the output vector based on a vReLU function computed for the output vector μi* to obtain a final vector.
[006] In an embodiment, the vReLU function of the output vector μi* is computed using a two's complement method and a polynomial approximation method.
[007] In an embodiment, the vReLU function of the output vector μi* is computed using the two's complement method by: deriving a two's complement of each vector comprised in the output vector μi*; extracting a most significant bit (MSB) for the derived two's complement of each vector comprised in the output vector μi*; converting the MSB of the one or more vectors into a first domain to obtain first domain based MSB vectors; summing the first domain based MSB vectors to obtain a final value; computing a difference based on a first domain based encryption of a pre-defined value and the final value; representing the computed difference as a pre-defined form in a second domain; computing a logical operation on an MSB of the pre-defined form and a pre-determined value to obtain a logical operation-based output; and computing the vReLU function of the output vector based on the logical operation-based output and the output vector.
[008] In an embodiment, the logical operation-based output as 0 indicates presence of at least one negative number in the output vector, and the logical operation-based output as 1 indicates presence of positive numbers in the output vector.
[009] In an embodiment, the vReLU function of the output vector is computed using the two's complement method when the output vector comprises binary vectors.
[010] In an embodiment, the vReLU function of the output vector is computed using the polynomial approximation method when the output vector comprises integer values.
[011] In an embodiment, the vReLU function of the output vector is computed using the polynomial approximation method by: computing a multiplicative output of vectors comprised in the output vector; computing delta as inverse of (i) the multiplicative output of vectors comprised in the output vector, and (ii) a pre-determined constant ∈; updating a value of delta based on the multiplicative output to obtain an updated delta value; and obtaining the vReLU function of the output vector based on the updated delta value and the output vector.
[012] In another aspect, there is provided a processor implemented system for privacy preserving distributed optimization. The system comprises: a memory storing instructions; one or more communication interfaces; and one or more hardware processors coupled to the memory via the one or more communication interfaces, wherein the one or more hardware processors are configured by the instructions to: obtain an input comprising (i) a plurality of encrypted matrices, and (ii) a plurality of encrypted vectors from one or more client devices; encrypt an initialized error parameter value (n) to obtain an encrypted error parameter value (n*); select a random vector (μ0) with a pre-defined size and encrypt the random vector (μ0) to obtain an encrypted random vector; and iteratively perform, until a last iteration of a number of pre-defined iterations being initialized: compute nabla (∇(μi*)) based on the plurality of encrypted matrices and the plurality of encrypted vectors; update the random vector (μ0*) based on the encrypted error parameter value (n*) and the computed nabla ∇(μi*) to obtain an output vector μi*; and update
the output vector based on a vReLU function computed for the output vector μi* to obtain a final vector.
[013] In an embodiment, the vReLU function of the output vector μi* is computed using a two's complement method and a polynomial approximation method.
[014] In an embodiment, the vReLU function of the output vector μi* is computed using the two's complement method by: deriving a two's complement of each vector comprised in the output vector μi*; extracting a most significant bit (MSB) for the derived two's complement of each vector comprised in the output vector μi*; converting the MSB of the one or more vectors into a first domain to obtain first domain based MSB vectors; summing the first domain based MSB vectors to obtain a final value; computing a difference based on a first domain based encryption of a pre-defined value and the final value; representing the computed difference as a pre-defined form in a second domain; computing a logical operation on an MSB of the pre-defined form and a pre-determined value to obtain a logical operation-based output; and computing the vReLU function of the output vector based on the logical operation-based output and the output vector.
[015] In an embodiment, the logical operation-based output as 0 indicates presence of at least one negative number in the output vector, and the logical operation-based output as 1 indicates presence of positive numbers in the output vector.
[016] In an embodiment, the vReLU function of the output vector is computed using the two's complement method when the output vector comprises binary vectors.
[017] In an embodiment, the vReLU function of the output vector is computed using the polynomial approximation method when the output vector comprises integer values.
[018] In an embodiment, the vReLU function of the output vector is computed using the polynomial approximation method by: computing a multiplicative output of vectors comprised in the output vector; computing delta as inverse of (i) the multiplicative output of vectors comprised in the output vector,
and (ii) a pre-determined constant ∈; updating a value of delta based on the multiplicative output to obtain an updated delta value; and obtaining the vReLU function of the output vector based on the updated delta value and the output vector.
[019] In yet another aspect, there are provided one or more non-transitory machine-readable information storage mediums comprising one or more instructions which when executed by one or more hardware processors cause privacy preserving distributed optimization by obtaining an input comprising (i) a plurality of encrypted matrices, and (ii) a plurality of encrypted vectors from one or more client devices; encrypting an initialized error parameter value (n) to obtain an encrypted error parameter value (n*); selecting a random vector (μ0) with a pre-defined size and encrypting the random vector (μ0) to obtain an encrypted random vector; and iteratively performing, until a last iteration of a number of pre-defined iterations being initialized: computing nabla (∇(μi*)) based on the plurality of encrypted matrices and the plurality of encrypted vectors; updating the random vector (μ0*) based on the encrypted error parameter value (n*) and the computed nabla ∇(μi*) to obtain an output vector μi*; and updating the output vector based on a vReLU function computed for the output vector μi* to obtain a final vector.
[020] In an embodiment, the vReLU function of the output vector μi* is computed using a two's complement method and a polynomial approximation method.
[021] In an embodiment, the vReLU function of the output vector μi* is computed using the two's complement method by: deriving a two's complement of each vector comprised in the output vector μi*; extracting a most significant bit (MSB) for the derived two's complement of each vector comprised in the output vector μi*; converting the MSB of the one or more vectors into a first domain to obtain first domain based MSB vectors; summing the first domain based MSB vectors to obtain a final value; computing a difference based on a first domain based encryption of a pre-defined value and the final value; representing the computed difference as a pre-defined form in a second domain; computing a logical operation on an MSB of the pre-defined form and a pre-determined value to obtain a logical operation-based output; and computing the vReLU function of the output vector based on the logical operation-based output and the output vector.
[022] In an embodiment, the logical operation-based output as 0 indicates presence of at least one negative number in the output vector, and the logical operation-based output as 1 indicates presence of positive numbers in the output vector.
[023] In an embodiment, the vReLU function of the output vector is computed using the two's complement method when the output vector comprises binary vectors.
[024] In an embodiment, the vReLU function of the output vector is computed using the polynomial approximation method when the output vector comprises integer values.
[025] In an embodiment, the vReLU function of the output vector is computed using the polynomial approximation method by: computing a multiplicative output of vectors comprised in the output vector; computing delta as inverse of (i) the multiplicative output of vectors comprised in the output vector, and (ii) a pre-determined constant ∈; updating a value of delta based on the multiplicative output to obtain an updated delta value; and obtaining the vReLU function of the output vector based on the updated delta value and the output vector.
[026] It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
BRIEF DESCRIPTION OF THE DRAWINGS [027] The accompanying drawings, which are incorporated in and
constitute a part of this disclosure, illustrate exemplary embodiments and, together
with the description, serve to explain the disclosed principles:
[028] FIG. 1 illustrates an exemplary block diagram of a system for
privacy preserving distributed optimization, in accordance with an embodiment of
the present disclosure.
[029] FIG. 2 illustrates an exemplary flow diagram of a method for privacy preserving distributed optimization using the system of FIG. 1, in accordance with an embodiment of the present disclosure.
DETAILED DESCRIPTION OF EMBODIMENTS
[030] Exemplary embodiments are described with reference to the accompanying drawings. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. While examples and features of disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the scope of the disclosed embodiments.
[031] Distributed optimization is an important class of problems in which the objective function as well as the constraints are distributed among a set of parties. Examples include trajectory optimization for formation control of vehicles, decentralized control of power systems, distributed control of large-scale systems, and problems of distributed optimization on sensor networks. With the advent of artificial intelligence/machine learning (AI/ML) algorithms on one side and various data protection laws on the other, protecting the data privacy of individual parties (during the optimization process) has become an important aspect. To give an example consider the problem of estimating the state of a dynamical system from privacy-sensitive sensor measurements. For instance, such a problem arises in smart houses where the temperature or energy readings of the sensors are aggregated by a cloud controller and can reveal whether the owners are in the building. The untrusted cloud must collect the measurements and output an initial state estimate of the system to a target agent, while maintaining the privacy of the sensor data and final output. In scenarios like these all the concerned parties need to compute an optimization function without revealing their inputs to other parties. However, involving computations at multiple parties adds additional computational and memory overheads.
[032] Quadratic optimization problems frequently occur in various areas such as system control, portfolio optimization, dynamical systems, etc. The energy optimization problem stated in the previous section is an example of a quadratic optimization problem. In this kind of problem, the objective function is a quadratic function and the constraints are given by linear equations/in-equations. A simple example is:
Minimize X² + Y² + 2X - Y (1)
subject to constraints
2X - 3Y ≤ 1
5X + 2Y ≤ 5 (2)
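By way of a purely illustrative, non-limiting sketch (not part of the disclosed encrypted method), the small quadratic program of equations (1) and (2) can be solved numerically in the plaintext domain, for example using the scipy library; the solver choice and starting point below are assumptions made only for illustration:

import numpy as np
from scipy.optimize import minimize

def objective(v):
    x, y = v
    return x**2 + y**2 + 2*x - y                               # equation (1)

constraints = [
    {"type": "ineq", "fun": lambda v: 1 - (2*v[0] - 3*v[1])},  # 2X - 3Y <= 1
    {"type": "ineq", "fun": lambda v: 5 - (5*v[0] + 2*v[1])},  # 5X + 2Y <= 5
]

result = minimize(objective, x0=np.zeros(2), method="SLSQP", constraints=constraints)
print(result.x)  # close to (-1, 0.5), the unconstrained minimizer, which already satisfies (2)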
[033] Distributed optimization problems arise when there are multiple parties who want to collaborate and optimize some objective function using their private data. All the parties contribute to the problem solving and in the end benefit from the optimized function. Parties involved in the optimization process are divided into three categories:
The Agents A1,…, Ap who own private data related to optimization such
as objective function, constraints, and the like.
The cloud C who does all computations.
The target T who owns the output of computation. It is possible that T is
one of the agents.
The objective is to find x ∈ Rn that minimizes:
(1/2) xᵀ Qc x + cAᵀ x (3)
subject to:
Ac x ≤ b, for a constraint-bound vector b ∈ Rm,
where
Qc is an n × n positive definite matrix known to C and possibly proprietary.
Ac ∈ Rm×n is also owned by C. In some cases, one or both of Qc, Ac could be public.
cA = (cA1, …, cAp), where cAi ∈ Rni is owned by agent Ai such that n1 + … + np = n.
[034] The formulation given in equation (3) is a standard way of writing any quadratic optimization problem. For instance, the quadratic optimization problem given by equations (1) and (2) can be expressed as:
Minimize: (1/2) [X Y] [[2, 0], [0, 2]] [X Y]ᵀ + [2 -1] [X Y]ᵀ
subject to constraints: [[2, -3], [5, 2]] [X Y]ᵀ ≤ [1 5]ᵀ
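For intuition only, the following plaintext sketch (using numpy, and assuming the standard-form matrices written above, which are reconstructed purely for illustration) checks that the matrix form reproduces the scalar objective of equation (1):

import numpy as np

Q = np.array([[2.0, 0.0], [0.0, 2.0]])    # assumed standard-form data for equations (1)-(2)
c = np.array([2.0, -1.0])
A = np.array([[2.0, -3.0], [5.0, 2.0]])
b = np.array([1.0, 5.0])

def f_scalar(x, y):
    return x**2 + y**2 + 2*x - y           # objective of equation (1)

for x, y in [(0.0, 0.0), (1.0, -2.0), (-1.0, 0.5)]:
    v = np.array([x, y])
    assert np.isclose(0.5 * (v @ Q @ v) + c @ v, f_scalar(x, y))
    # the constraints of equation (2) read A @ v <= b in matrix form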
[035] Attempts have been made for privacy preserving distributed optimization wherein in many cases multiple parties may be interested in optimizing certain functions collaboratively but cannot share their data with each other. For example, in case of supply chain partners, all parties are interested in keeping their costs to minimum, but due to competition it may not be possible to fully share their data with others. This leads to sub optimal efficiency in their total business process. Thus, there is a need for distributed optimization methods that can guarantee data privacy of each party without compromising the optimization process.
[036] In a research work by Andreea et al. (e.g., refer “Andreea B Alexandru, Konstantinos Gatsis, Yasser Shoukry, Sanjit A Seshia, Paulo Tabuada, and George J Pappas. Cloud-based quadratic optimization with partially homomorphic encryption. IEEE Transactions on Automatic Control, 2020.”), authors proposed a method of solving privacy preserving distributed quadratic optimization using partially homomorphic encryption (PHE). In this method, each party encrypts its private data (using a PHE scheme) and uploads it to a third-party space/entity (e.g., cloud computing device or cloud). The cloud then runs an optimization solver on encrypted data and returns the output to other parties. Broadly speaking PHE schemes are encryption schemes which allow only a few types of computations on encrypted data. This method uses only PHE schemes which are known to be efficient in terms of memory/computations required. Its main disadvantage is that it requires communication between cloud and the target node as many times as the number of iterations N (which is a predetermined integer value parameter). More specifically, there are computations happening at both
cloud and target node. The main challenge observed in the above conventional work is that it requires comparison operations over encrypted data and hence depends on the target node. A second important point is its threat model.
[037] Say, at some stages, the cloud runs iterations. In each iteration it updates the encrypted value of some parameter (e.g., μi*) using a gradient descent method. After that it needs to know if all the values of μi are non-negative or not. If all values are non-negative the iterations continue with the same value, else with the zero vector. Since μi is in encrypted format this task is very challenging. The above description can be better understood by way of the following example. Suppose that μi = (-1, 0, 3). But the cloud knows only μi*, which looks like μi* = (a, b, c) where a, b, c are arbitrary numbers which are encryptions of -1, 0, 3. There is no way for the cloud to know if all the entries of μi are non-negative just based on the encrypted entries of μi*.
[038] To solve this problem, the cloud invokes say an algorithm in a specific step. Since target node knows the decryption key (of μi*) the cloud simply sends the value to it and asks it to execute the algorithm. This approach solves the problem but at extra cost of communications. If the cloud is running optimization iterations for (say) 1000 times, then it must interact with target node 1000 times to run the algorithm. This is very inefficient because of the communication overhead.
[039] Threat Model: Another downside of this algorithm is the threat model it supports. All the agents encrypt their private data with the public key of the target node. If the target node and the cloud are malicious/compromised, then they can know all the private data of other agents. In this case the target node can also cheat others by giving them wrong result of optimization solution.
[040] Encryption of Parameters: In this scheme some parameters like the matrix Q need to be public. This is because the Paillier encryption does not support all operations on encrypted data.
[041] Embodiments of the present disclosure overcome this disadvantage by using a fully homomorphic encryption (FHE) scheme, avoiding communications between the cloud and target node, and thereby improving the efficiency as well as the security of the above conventional algorithm. More specifically, an FHE scheme
is used by the system and method of the present disclosure instead of PHE schemes for encryption of the agents' data. All the parameters and inputs can be encrypted using the FHE scheme. This increases the security level of private data supplied by the parties.
[042] Referring now to the drawings, and more particularly to FIG. 1 through 2, where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments and these embodiments are described in the context of the following exemplary system and/or method.
[043] FIG. 1 illustrates an exemplary block diagram of a system 100 for privacy preserving distributed optimization, in accordance with an embodiment of the present disclosure. The system 100 may also be referred as a cloud computing device and may be interchangeably used herein. In an embodiment, the system 100 includes one or more processors 104, communication interface device(s) or input/output (I/O) interface(s) 106, and one or more data storage devices or memory 102 operatively coupled to the one or more processors 104. The one or more processors 104 may be one or more software processing modules and/or hardware processors. In an embodiment, the hardware processors can be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the processor(s) is configured to fetch and execute computer-readable instructions stored in the memory. In an embodiment, the device 100 can be implemented in a variety of computing systems, such as laptop computers, notebooks, hand-held devices, workstations, mainframe computers, servers, a network cloud and the like.
[044] The I/O interface device(s) 106 can include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, and the like and can facilitate multiple communications within a wide variety of networks N/W and protocol types, including wired networks, for example, LAN, cable, etc., and wireless networks, such as WLAN, cellular, or satellite. In an
embodiment, the I/O interface device(s) can include one or more ports for connecting a number of devices to one another or to another server.
[045] The memory 102 may include any computer-readable medium known in the art including, for example, volatile memory, such as static random access memory (SRAM) and dynamic random access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes. In an embodiment a database 108 can be stored in the memory 102, wherein the database 108 may comprise, but are not limited to inputs received from one or more client devices (e.g., client nodes, target nodes, computing devices, and the like) such as encrypted matrices, and encrypted vector(s). In an embodiment, the memory 102 may store information pertaining to number of iterations, the one or more modeling technique(s), initialized random vector for which the distributed optimization problem needs to be solved, and the like. The memory 102 further comprises (or may further comprise) information pertaining to input(s)/output(s) of each step performed by the systems and methods of the present disclosure. In other words, input(s) fed at each step and output(s) generated at each step are comprised in the memory 102 and can be utilized in further processing and analysis.
[046] FIG. 2, with reference to FIG. 1, illustrates an exemplary flow diagram of a method for privacy preserving distributed optimization using the system 100 of FIG. 1, in accordance with an embodiment of the present disclosure. In an embodiment, the system(s) 100 comprises one or more data storage devices or the memory 102 operatively coupled to the one or more hardware processors 104 and is configured to store instructions for execution of steps of the method by the one or more processors 104. The steps of the method of the present disclosure will now be explained with reference to the components of the system 100 as depicted in FIG. 1, and the flow diagram. In an embodiment of the present disclosure, at step 202, the one or more hardware processors 104 comprised in a cloud computing device obtain an input comprising (i) a plurality of encrypted matrices, and (ii) a plurality of encrypted vectors from one or more client devices.
[047] The inputs are encrypted using multi-key FHE. Multi-key FHE allows multiple users to encrypt the data under their secret keys, without the need to share the secret keys, and still allows the cloud to compute on the encrypted data. Each user i has a pair of a public key and a secret key (pki, ski). Each user i encrypts the inputs using the public key pki and sends them to the cloud. After the cloud does the optimization computation, each client i can decrypt the results using his/her secret key ski.
[048] The present disclosure considers a distributed optimization problem related to financial investment strategy, which is a well known example of quadratic optimization. In the current example, two parties P1 and P2 are considered, wherein P1 is an investor and P2 is an investment bank. There are two variables x1 and x2, which are:
x1 = amount to be invested by P1 in Company 1
x2 = amount to be invested by P1 in Company 2
Objective Function: Party P2 (the investment bank) knows historical data and has calculated that the risk-return equation for the two companies is given by:
F(x1, x2) = 20x1 + 16x2 - (3x1² + 2x2² + 2x1x2)
The above equation shows the returns an investor can expect to gain if he invests $x1 and $x2 in Company 1 and Company 2 respectively. The investment banks come up with these equations by doing continuous market research and using proprietary algorithms. However, the end result is always a quadratic equation in the above form.
Constraints: The party P1 (the investor) wants to invest $5000 in the two companies such that he gets maximum returns with minimum risk. In other words, P1 wants to maximize the value of F(x1, x2) subject to his constraints as follows:
x1 + x2 ≤ 5000 which simply says that sum of investments of P1 is not more than $5000. Additionally, he also wants to ensure that his investments in the two companies are limited as follows:
x1 ≤ 4000 and x2 ≤ 3000
In simple terms these constraints mean that P1 does not want to invest more than $4000 in Company 1 and $3000 in Company 2.
The above problem and constraints can be written in standard form as follows. The maximization problem is required to be converted into a minimization problem by taking the negative of F(x1, x2):
Minimize -F(x1, x2) = 3x1² + 2x2² + 2x1x2 - 20x1 - 16x2
subject to
x1 + x2 ≤ 5000
x1 ≤ 4000
x2 ≤ 3000
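By way of a purely illustrative, non-limiting sketch (plaintext only; the disclosed method instead operates on encrypted data), the investor's problem above can be solved numerically, for example with the scipy library; the solver and starting point below are assumptions made only for illustration:

import numpy as np
from scipy.optimize import minimize

def neg_returns(x):
    x1, x2 = x
    return 3*x1**2 + 2*x2**2 + 2*x1*x2 - 20*x1 - 16*x2         # -F(x1, x2)

constraints = [
    {"type": "ineq", "fun": lambda x: 5000 - (x[0] + x[1])},   # x1 + x2 <= 5000
    {"type": "ineq", "fun": lambda x: 4000 - x[0]},            # x1 <= 4000
    {"type": "ineq", "fun": lambda x: 3000 - x[1]},            # x2 <= 3000
]

sol = minimize(neg_returns, x0=np.zeros(2), method="SLSQP", constraints=constraints)
print(sol.x)  # about (2.4, 2.8); the budget constraints are slack at this optimum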
[049] The plurality of encrypted matrices comprises at least a first encrypted matrix (Q*), a second encrypted matrix (A*), a third encrypted matrix (Q-1*), and a fourth encrypted matrix (At*). The plurality of encrypted vectors comprises at least a first encrypted vector (b*) and a second encrypted vector (c*). The first encrypted matrix (Q*) may be expressed as follows by way of example:
For the sake of brevity, and for better understanding of the embodiments of the present disclosure, consider matrix Q being expressed as:
It is to be noted that the matrix Q is not known to the cloud computing device.
Similarly, the inverse of matrix Q (denoted by Q-1) is not known to the cloud computing device, and the encrypted inverse of matrix Q (denoted by Q-1* - also referred to as the third encrypted matrix) can be expressed as:
For the sake of brevity, and for better understanding of the embodiments of the present disclosure, consider inverse of matrix Q (denoted by Q-1) being expressed as:
Similarly, the second encrypted matrix (A*) may be expressed as follows by way of example:
The transpose of the second encrypted matrix (At* - also referred to as the fourth encrypted matrix) may be expressed as follows by way of example:
Similarly, the first encrypted vector b* may be expressed as follows by way of example:
The second encrypted vector c* may be expressed as follows by way of example:
[050] It is to be understood by person having ordinary skill in the art or person skilled in the art that original vectors b and c at the client device have values different from as represented above and are not known to the cloud computing device. For instance, the vectors b and c at the client devices can be expressed as follows:
and
[051] It is to be understood by a person having ordinary skill in the art or person skilled in the art that the above example of financial investment serving as a quadratic optimization problem shall not be construed as limiting the scope of the present disclosure. Referring to FIG. 2, at step 204 of the present disclosure, the one or more hardware processors 104 of the cloud computing device encrypt an initialized error parameter value (n) to obtain an encrypted error parameter value (n*). Prior to encrypting the error parameter value (n), the error parameter value (n) may be initialized. For instance, the error parameter value (n) (also referred to as a pre-defined error parameter value and interchangeably used herein) may take a value, say '0.005', whose encrypted value is denoted by n* = 0.0025. In an embodiment, the error parameter value is one of a pre-determined value or an empirically determined value.
[052] At step 206 of the present disclosure, the one or more hardware processors 104 of the cloud computing device select a random vector (μ0) with a size based on the second encrypted matrix, and the selected random vector is then encrypted to obtain an encrypted random vector. The random vector (μ0) is selected such that its size is equal to the number of rows of the second encrypted matrix A*. For instance, the random vector (μ0) may be expressed as μ0 = [7, 5, 9] and the encrypted random vector (μ0*) is expressed as follows:
μ0* = [54874,12557,92547]
[053] The encrypted random vector (μ0*) can also be referred as an initial solution (μ0*) and may be interchangeably used herein.
[054] At step 208 of the present disclosure, the one or more hardware processors 104 of the cloud computing device iteratively perform, until a last iteration of a number of pre-defined iterations being initialized, a plurality of steps 208a through 208c. In an embodiment, the number of pre-defined iterations N is one of a pre-determined value or an empirically determined value. Assume that the number of pre-defined iterations N = 5. More specifically, at step 208a, the one
or more hardware processors 104 of the cloud computing device compute nabla (∇(μi*)) based on the plurality of encrypted matrices and the plurality of encrypted vectors. More specifically, the one or more hardware processors 104 compute nabla based on the second encrypted matrix A*, the encrypted inverse of matrix Q (denoted by Q-1*), the encrypted random vector μ0*, the first encrypted vector b* and the second encrypted vector c*. In an embodiment of the present disclosure, the computation of nabla (∇(μi*)) can be expressed by way of the following exemplary equation:
∇(μi*) ← -A*·Q-1*·(At*·μi* + c*) - b*
[055] At step 208b, the one or more hardware processors 104 of the cloud computing device update the random vector (μ0*) based on the encrypted error parameter value (n*) and the computed nabla ∇(μi*) to obtain an output vector μi*. The output vector μi* is expressed as follows:
μi* ← μi* + n*·∇(μi*)
[056] At step 208c of the present disclosure, the one or more hardware processors 104 of the cloud computing device update the output vector based on a vReLU function of the output vector to obtain a final vector. The output vector is updated by performing a comparison of the output vector with a pre-defined vector (e.g., a pre-defined zero vector) to obtain the final vector. The updation of the output vector can be expressed as follows:
μi+1* ← vReLU(0, μi*)
[057] The steps 208a through 208c are iteratively performed or repeated until the last iteration (e.g., in this case until 5 iterations) to obtain a final vector. The above steps 208a through 208c are better understood by way of the following description, wherein an example is discussed for a first iteration (or one iteration). For example, for i = 0,
∇(μ0*) = [90564, 68194, 24308]
μ0* = [57592, 66065, 26216] and
μ1* = vReLU([ 65022, 30759, 61009], [ 57592, 66065, 26216 ]) =
[ 57592,66065,26216 ]
[058] At the end of 5 iterations, the final vector is returned by the cloud computing device, say μ5* = [25874, 56981, 51987]. This value of μ5* is communicated to the client devices which require the optimum value.
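For intuition only, the iteration of steps 208a through 208c can be mirrored in the plaintext domain as sketched below (numpy is used only for illustration; in the disclosed method every one of these operations acts on FHE ciphertexts, and the example matrices in the trailing comments are assumed values, not data of the disclosure):

import numpy as np

def vrelu(mu):
    # zero the whole vector if any entry is negative, else keep it unchanged
    return mu if np.all(mu >= 0) else np.zeros_like(mu)

def dual_iterations(Q_inv, A, b, c, mu0, eta=0.005, iterations=5):
    mu = mu0.astype(float)
    for _ in range(iterations):
        nabla = -A @ Q_inv @ (A.T @ mu + c) - b   # step 208a
        mu = mu + eta * nabla                     # step 208b
        mu = vrelu(mu)                            # step 208c
    return mu

# e.g., with assumed plaintext matrices for the investment example:
# Q_inv = np.linalg.inv(np.array([[6.0, 2.0], [2.0, 4.0]]))
# A = np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
# b = np.array([5000.0, 4000.0, 3000.0]); c = np.array([-20.0, -16.0])
# mu = dual_iterations(Q_inv, A, b, c, np.array([7.0, 5.0, 9.0]))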
[059] In an embodiment of the present disclosure, the vReLU function of the output vector is obtained using one of a polynomial approximation method or a two's (2's) complement method. An introduction to the vReLU function is provided by way of the following description:
[060] Broadly, the vReLU function is defined as below. For any
μ* = (μ1*, …, μn*) ∈ Rn,
vReLU(μ*) = μ* if μi* ≥ 0 for every i = 1, …, n; and 0 otherwise. (4)
Example: vReLU(-1, 0, 3) = (0, 0, 0) and vReLU(1, 0, 3) = (1, 0, 3). Then the present disclosure uses approximation polynomials to evaluate this function on encrypted inputs. The method of the present disclosure is based on this concept. To begin with, the method described herein introduces the notion of the ReLU and vReLU functions and describes how they can be computed on encrypted inputs. The present disclosure first discusses the usual one-variable Rectified Linear Unit (ReLU) function. The simple version of this is defined as follows: For any x ∈ R,
ReLU(x) = x if x ≥ 0, and 0 otherwise. (5)
For example, ReLU(-1) = 0 and ReLU(10) = 10. This function can be analytically expressed as
ReLU(x) = (x + |x|)/2 (6)
[061] ReLU is extensively used as an activation function in modern ML algorithms as known in the art. In the last few years preserving privacy of user’s data/information has become very important and accordingly researchers are (re)designing ML applications. This involved implementing various ML algorithms using state of the art privacy preserving computational techniques such as Fully Homomorphic Encryption (FHE) or Secure Multiparty Computations. In this context, the present disclosure’s focus is on the function ReLU as expressed in equation (5). Several efficient implementations of (5) are known, including those
that can compute over homomorphically encrypted inputs. The function (5) can be extended to vectors as defined in equation (4). This extended ReLU occurs in several applications, especially in connection with optimization problems. However, not much is known about computing (4) on encrypted inputs. In the present disclosure, a method implementing an algorithm is described to solve this problem.
[062] Referring to steps of FIG. 2, the computation of vReLU function of the output vector using the polynomial approximation method may be better understood by way of following description.
[063] Modern FHE techniques enable computation of a large class of functions over encrypted inputs. They are designed to prioritize security over other factors such as usability or efficiency [in terms of resource consumption]. As a result, even if an adversary has access to FHE encrypted data and all the computations done on them, the security of the plaintext messages is guaranteed. A consequence of these security guarantees is that it is not possible to compute comparisons over FHE encrypted data. This is explained using the computation of equation (6).
[064] Example 1: Suppose a client device has x ∈ R and the client device sends x* = h.E(x) to a cloud (e.g., the cloud computing device) to compute ReLU(x) using equation (6). To compute |x| the cloud must be able to verify if x > 0 or x < 0 using x* only. On the other hand, if x* leaks this information then it violates the semantic security of the underlying encryption scheme, making the scheme completely insecure. Thus, there is no way an h.E scheme can be secure and still enable comparison over encrypted data.
[065] A way out of this impasse is to compute (5) using polynomial approximation techniques (e.g., techniques known in the art). These techniques allow us to find polynomials POLY(x) defined over R such that values of ReLU(x) can be approximated by values of POLY(x) for x ∈ R. Symbolically, this is expressed as:
ReLU(x) ≈ POLY(x), x ∈ R (7)
[066] Polynomial approximation techniques are well studied and hence the details are skipped here for the sake of brevity. Some of the polynomials frequently used to approximate function (5) are:
POLY1(x) = 0.1488 + 0.4993x + 0.3007x² - 0.0003x³ - 0.0168x⁴
POLY2(x) = 0.1500 + 0.5012x + 0.2981x² - 0.0004x³ - 0.0288x⁴
The above expressions together can be referred to as equation (8).
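For intuition only, the quality of the approximation in equation (8) can be checked in the plaintext domain as sketched below (plain Python; the input range [-1, 1] is an assumed example range, since low-degree approximations of this kind are accurate only on a limited interval):

def relu(x):
    return (x + abs(x)) / 2           # analytic form of equation (6)

def poly1(x):
    return 0.1488 + 0.4993*x + 0.3007*x**2 - 0.0003*x**3 - 0.0168*x**4

for x in [-1.0, -0.5, 0.0, 0.5, 1.0]:
    print(x, relu(x), poly1(x))       # poly1 roughly tracks ReLU on [-1, 1]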
[067] Note that these are low degree polynomials and most of the FHE schemes can evaluate these kinds of polynomials easily. For the sake of clarity, the present disclosure provides an exemplary algorithm as shown below that captures the process of computing ReLU on encrypted inputs. Exemplary algorithm:
1. procedure Compute (ReLU(x*), x ∈ R)
2. Input: x*
3. Choose a polynomial from {POLY1,POLY2} as approximation polynomial. Denote it by POLY
4. Compute RES = POLY(x*), where RES is the result of the computation done by the cloud.
5. return RES
6. end procedure
[068] Going back to the situation of Example 1, the problem of computing ReLU(x) securely can be solved as follows. The client sends x* to the cloud, which in turn uses one of the polynomials POLY1, POLY2 to evaluate at x* and sends back the output. This technique provides an efficient and secure solution to the problem of comparing encrypted values.
[069] In the study of general constrained optimization problems the present disclosure encounters the following function, which can be considered as an extension of ReLU to vectors. The present disclosure denotes this by vReLU and defines it as:
For any μ = (μ1, …, μn) ∈ Rn,
vReLU(μ) = μ if μi ≥ 0 for every i = 1, …, n; and 0 otherwise. (9)
Example 2: vReLU(-1, 0, 3) = (0, 0, 0) and vReLU(1, 0, 3) = (1, 0, 3).
[070] Note that in the conventional algorithm's steps, the function evaluated is precisely the vReLU function, which is based on comparison operations. If all the data is in the plaintext domain, then it can be easily computed as seen in Example 2 above. In the present disclosure, vReLU is evaluated on encrypted data. Input to the vector ReLU computation is the encrypted vector μ* = [μ1*, μ2*, …, μn*]. The cloud computing device also initializes ∈ to 10^-t, where t is a number chosen by the one or more client devices. The below steps/algorithm illustrate computation of the vReLU function of the output vector:
1. Compute ReLU(μi*) for i = 1, 2, …, n, using polynomial approximation.
2. Compute δ(μ*) = Π(i=1 to n) ReLU(μi*)
3. Compute delta as the inverse of the sum of δ(μ*) and ∈, i.e., delta = 1/(δ(μ*) + ∈)
4. Update delta by the product of delta and δ(μ*)
5. Return vReLU(μ*) as the scalar-vector product of delta and μ*
[071] The above steps can be better understood by a simple numerical example as illustrated below. Let μ* = (1, 3, 5), and set ∈ = 1/10000.
1. Computing ReLU of each vector value gives ReLU(1) = 1, ReLU(3) = 3 and ReLU(5) = 5
2. Compute δ(μ*) = 1·3·5 = 15
3. Compute delta as the inverse of the sum of δ(μ*) and ∈: delta = 1/(15 + 1/10000) = 0.0666666
4. Update delta by the product of delta and δ(μ*): delta = 0.0666666 × 15 = 0.999999
5. Return vReLU(μ*) as the scalar-vector product of delta and μ*: (delta·μ1*, delta·μ2*, delta·μ3*) = (0.999999, 2.99999, 4.99999)
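For intuition only, the five steps above can be mirrored in the plaintext domain as sketched below (the exact ReLU is used here in place of the approximation polynomial, and numpy is used only for illustration; in the disclosure every operation acts on FHE ciphertexts):

import numpy as np

def vrelu_poly_style(mu, eps=1e-4):
    relu_vals = np.maximum(mu, 0.0)   # step 1 (exact ReLU instead of POLY)
    prod = np.prod(relu_vals)         # step 2: delta(mu*)
    scale = 1.0 / (prod + eps)        # step 3: inverse of delta(mu*) + eps
    scale = scale * prod              # step 4: close to 1 if all entries are positive, close to 0 otherwise
    return scale * mu                 # step 5: scalar-vector product

print(vrelu_poly_style(np.array([1.0, 3.0, 5.0])))   # approximately (1, 3, 5)
print(vrelu_poly_style(np.array([-1.0, 0.0, 3.0])))  # (0, 0, 0)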
Correctness of the above vReLU computation via the polynomial approximation method:
[072] Observe that: δ(μ) = 0* if and only if ReLU(μi) = 0* (i.e., μi ≤ 0) for at least one index i. Next, define a function χ: Rn → Rn as follows:
χ(μ) = [δ(μ)/(δ(μ) + ∈)]·μ
From these, it is easy to observe the following: if μi < 0 for some 1 ≤ i ≤ n, then:
vReLU(μ) = χ(μ) = 0*
On the other hand, if μi > 0 for every 1 ≤ i ≤ n, then
vReLU(μ) ≈ χ(μ)
Therefore, it is observed that, using step 2 of the vReLU function and the above expression for χ(μ), vReLU(μ) can be computed for any μ ∈ Rn using the encrypted components μi*.
[073] More specifically, the above steps of computation of the vReLU function of the output vector using the polynomial approximation method comprise: computing a multiplicative output of vectors comprised in the output vector (refer step 2 of the above algorithm); computing delta as inverse of (i) the multiplicative output of vectors comprised in the output vector, and (ii) a pre-determined constant ∈ (refer step 3 of the above algorithm); updating a value of delta based on the multiplicative output to obtain an updated delta value (refer step 4 of the above algorithm); and obtaining the vReLU function of the output vector based on the updated delta value and the output vector (refer step 5 of the above algorithm). The inverse of the sum of (i) the multiplicative output of vectors comprised in the output vector and (ii) the pre-determined constant ∈, which yields delta, is computed by way of the following description.
[074] Computation of the inverse using polynomial approximation: A key
computation in the above steps is computing delta as the inverse of the sum of δ(μ*) and ∈, which involves computing the inverse of homomorphically encrypted numbers, which is a challenging task. The same is computed using the method described in the below algorithm:
Algorithm for computation of the inverse 1/b using polynomial approximation:
1. procedure Compute (1/b using Newton's Root Finding Algorithm)
2. Input b, b > 0, where b is a non-zero integer value.
3. Choose the least positive integer k such that b ≤ 2^k
e.g., if b = 33 then k = 6 since 33 ≤ 2^6 = 64
4. Initialize z0 = 2^-(k-1), e.g., if b = 33 then z0 = 2^-5, where z0 refers to the initial value
5. for iterations i = 1,2,… N do
6. zi+1 ← zi·(2 - b·zi)
7. end for
8. return zi
9. end procedure
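For intuition only, the Newton iteration above can be sketched in the plaintext domain as below (plain Python; the number of iterations and the assumption that b is not an exact power of two are illustrative choices, not fixed by the disclosure):

def approximate_inverse(b, iterations=20):
    # least positive k with b <= 2**k (assumes b is not an exact power of two)
    k = 1
    while 2**k < b:
        k += 1
    z = 2.0 ** -(k - 1)          # initial value z0
    for _ in range(iterations):
        z = z * (2 - b * z)      # Newton step; z converges towards 1/b
    return z

print(approximate_inverse(33))   # approximately 0.030303... = 1/33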
[075] The vReLU function of the output vector is computed using the polynomial approximation method when the output vector comprises integer values, in an embodiment of the present disclosure.
[076] Further, the present disclosure describes the computation of the vReLU function of the output vector using the 2's complement method, which may be better understood by way of the following description:
[077] In the above description, the present disclosure discussed an approach that computes vReLU(μ*) on the homomorphically encrypted input vector μ* = (μ1*, …, μn*). Here it is noted that the components μi (which are numbers) are encrypted using FHE schemes that take μi as plaintext and output another number μi* as a ciphertext (or encrypted text). On the other hand, if a bit-encryption FHE scheme is considered, to compute vReLU, the present disclosure implements another approach to solve the privacy preserving quadratic optimization problem using FHE schemes. The approach being now described is referred to as the 2's complement method. Encoding of real numbers using the 2's complement method is well known and extensively used in modern computers. It is considered suitable for efficiently performing arithmetic operations on real numbers. In this method every real number is represented as a sequence of bits with its most significant bit (MSB) indicating if the number is positive or negative. More precisely, the following is observed:
[078] Fact 1: Suppose x ∈ R and its expansion in 2's complement in a t-bit register is xt xt-1 … x0, where xt is the MSB. Then x < 0 if and only if MSB(x) = 1. Consider that a bit-encryption FHE scheme is used with bH.E/bH.D as the encryption/decryption algorithms. With notations as in Fact 1, bH.E(x) is represented as a vector as below: bH.E(x) = (bH.E(xt), …, bH.E(x0)). For the sake of simplicity, this is noted as: bH.E(x) = x* = (xt*, …, x0*)
In this case, xt* is an encryption of 0 or 1 depending on whether x ≥ 0 or x < 0. Below is an example which is represented using the 2's complement method.
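For intuition only, Fact 1 can be checked with ordinary (unencrypted) bit manipulation as sketched below (plain Python, no encryption involved; the register width t = 4 is an assumed example):

def twos_complement_bits(x, t=4):
    x &= (1 << t) - 1                                    # reduce x to a t-bit register (wraps negative values)
    return [(x >> i) & 1 for i in range(t - 1, -1, -1)]  # bits listed MSB first

def msb(x, t=4):
    return twos_complement_bits(x, t)[0]

print(twos_complement_bits(-3), msb(-3))  # [1, 1, 0, 1] and MSB 1: the number is negative
print(twos_complement_bits(3), msb(3))    # [0, 0, 1, 1] and MSB 0: the number is non-negative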
[079] To use the method of 2's complement, the system 100 implements a technique that can convert ciphertexts from the bit-encryption domain to the integer-encryption domain, which is explained in the following examples:
[080] Suppose two encryption bits of bit 1 are considered as bH.E(1) = 5687 and bH.E(1) = 8741. If the ciphertexts are added, i.e., 5687 + 8741, the addition results in 14428, which is then decrypted as bH.D(14428) = 0 (i.e., 1 + 1 mod 2), because the scheme bH.E is a bitwise FHE. However, the method of the present disclosure requires obtaining the sum 1 + 1 = 2 upon decryption. This can be achieved by means of a 'switching technique'.
[081] Switch-to-IH.E: This method, when implemented by the present disclosure, takes as input a ciphertext that is a bit-encryption of a bit and switches it to an integer-encryption ciphertext. For instance, in case of the above example, the same technique is used on 5687 and 8741. The method obtains Switch-to-IH.E(5687) = 5124 and Switch-to-IH.E(8741) = 3687 such that their sum 8811 decrypts to 1 + 1 = 2.
[082] Switch-to-bH.E: This method, when implemented by the present disclosure, works as the inverse of the Switch-to-IH.E method. It takes as input a ciphertext coming from Switch-to-IH.E and switches it to a ciphertext that belongs to a bH.E scheme. For example, from the above text, it takes 8811 as input and outputs Switch-to-bH.E(8811) = (4587, 5587, 1254, 5874), where the four ciphertexts 4587, 5587, 1254, 5874 can be decrypted into 0, 0, 1, and 0, respectively, which gives 2 = 0010 in 2's complement.
Computation of vReLU:
[083] To compute vReLU(μ*) given a bit-encrypted vector μ* = (μ1*, …, μn*) where μi ∈ R, the method of the present disclosure uses the bit encoding of μi in the 2's complement method, denoted by (μi,t, μi,t-1, …, μi,0) with μi,j ∈ {0, 1}, where μi,t is the MSB of μi. Then the corresponding ciphertext tuple is denoted by:
μi* = (μi,t*, …, μi,0*)
Further, it is known that,
μi,t* = bH.E(1) <=> μi < 0
[084] Since the system and method are using bit-encryption based FHE, the values μi,t* behave like bits which can only be xored, in one example embodiment. However, these values are needed as integers, which can be added. The MSBs are now converted from bit-encrypted ciphers to integer encryption, which is expressed as below:
Switch-to-IH.E(μi,t*) = Bi,t*
where Bi,t* encrypts the MSB μi,t with an integer plaintext space. Next, consider the sum σ(μ*) defined as:
σ(μ*) = B1,t* + … + Bn,t*
[085] Here σ(μ*) is an encrypted integer that is the addition of the encrypted MSBs of μ. If μi,t = 0 for all i then σ(μ*) = IH.E(0). On the other hand, even if one of the μi,t = 1, then σ(μ*) encrypts a positive number s, which is the number of MSBs which are 1, i.e., the number of negative numbers μi.
[086] Next, D = IH.E(0) - σ(μ*) needs to be computed. This integer-encrypted ciphertext now needs to be converted to a bit-encrypted ciphertext as follows:
Switch-to-bH.E(D) = (d't, …, d'0)
where d't is the bit-encrypted MSB of D. The system (and/or method) of the present disclosure performs a logical operation (e.g., xor) of d't with 1* (e.g., a pre-determined value), where 1* = bH.E(1), and uses the following observation: d't ⊕ 1* = 0* <=> μi < 0 for some i. Thus, vReLU(μ*) is given by:
vReLU(μ*) = ([d't ⊕ 1*]·μ1*, …, [d't ⊕ 1*]·μn*).
[087] The above explanation may be further better understood by way of simplified steps which depict the computation of vReLU(μ*) using the 2's complement method as described below:
1. Deriving a two’s complement of each vector comprised in the output vector
μ*i ;
2. Extracting a most significant bit (MSB) for the derived two’s complement of each vector comprised in the output vector μ*i ;
3. Converting the MSB of the one or more vectors into a first domain to obtain a first domain based MSB vectors (integer based MSB vectors or MSB vectors in a first domain);
4. Summing the first domain based MSB vectors to obtain a final value;
5. Computing (refer step 6 in the below algorithm for the 2's complement method) a difference based on a first domain based encryption of a pre-defined value (zero) and the final value;
6. Representing the computed difference as a pre-defined form in a second domain (binary domain) - (refer step 7 in the below algorithm for the 2's complement method);
7. Computing a logical operation (xor) on an MSB of the pre-defined form and a pre-determined value (e.g., 1) to obtain a logical operation-based output - (refer step 8 in the below algorithm for the 2's complement method); and
8. Computing the vReLU function of the output vector based on the logical operation-based output and the output vector.
[088] If the logical operation-based output is 0, it indicates presence of at least one negative number in the output vector. If the logical operation-based output is 1, it indicates presence of positive numbers in the output vector. In an
embodiment, the vReLU function of the output vector is computed using the above two's complement method when the output vector comprises binary vectors. In other words, binary vectors (or vectors in the binary domain) serve as input to the two's complement method for computation of the vReLU function of the output vector. The first domain is an integer domain and the second domain is the binary domain. [089] Using the above notations, the method of the present disclosure is represented in the algorithm below. More specifically, an exemplary algorithm/pseudo code for the 2's complement method implemented by the present disclosure is described below:
1. procedure Compute (vReLU(μ*), μ ∈ Rn)
2. Input μ* = (μ1*, …, μn*) where μi* = bH.E(μi)
3. Extract MSBi* = MSB(μi*)
4. Switch-to-IH.E(MSBi*) = Bi
5. Compute σ(B) = B1 + … + Bn
6. Compute D = IH.E(0) - σ(B)
7. Switch-to-bH.E(D) = (d't, …, d'0)
8. Compute [d't ⊕ 1*]
9. Compute RES = ([d't ⊕ 1*]·μ1*, …, [d't ⊕ 1*]·μn*)
10. return RES
11. end procedure
[090] The above algorithm can be better understood by way of following example:
[091] For simplicity, the computations are shown in the plaintext domain; however, it is to be understood by a person having ordinary skill in the art or person skilled in the art that the same is/can be realized in an encrypted domain.
1. Input μ = (1, 0, -3). Since -3 < 0, vReLU(1, 0, -3) = (0, 0, 0). The vReLU is computed using the above algorithm.
2. For extracting the MSB from the 2's complement representation, refer to Table 1 below (which illustrates ReLU on vectors).
Table 1
Step 1 (Write the numbers) | Step 2 (2's complement) | Step 3 (MSB) | Step 4 (MSB ⊕ 1)
1 | 0001 | 0 | 1
0 | 0000 | 0 | 1
-3 | 1101 | 1 | 0
Product of the Step 4 column (MSB ⊕ 1) = 0
3. Here, d = -2 is obtained, hence the MSB in 2's complement is MSB(d) = dt = 1.
4. Then dt ⊕ 1 = 0.
5. The output vReLU(μ) = (0·1, 0·0, 0·(-3)) = (0, 0, 0).
To compute vReLU(1, 0, -3), first the product Δ(1, 0, -3) = 0 is computed and hence vReLU(1, 0, -3) = (0·1, 0·0, 0·(-3)) = (0, 0, 0).
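For intuition only, a plaintext analogue of the above procedure and example is sketched below (plain Python; the MSB extraction, domain switching and xor act here on ordinary integers, whereas in the disclosure they act on FHE ciphertexts, and the register width t = 8 is an assumed example):

def vrelu_twos_complement_style(mu, t=8):
    mask = (1 << t) - 1
    msbs = [((x & mask) >> (t - 1)) & 1 for x in mu]  # step 3: MSB of each entry in t-bit two's complement
    sigma = sum(msbs)                                 # steps 4-5: add the MSBs as integers
    d = 0 - sigma                                     # step 6
    d_msb = ((d & mask) >> (t - 1)) & 1               # step 7: MSB of D
    gate = d_msb ^ 1                                  # step 8: 1 only if no entry was negative
    return [gate * x for x in mu]                     # step 9

print(vrelu_twos_complement_style([1, 0, -3]))  # [0, 0, 0]
print(vrelu_twos_complement_style([1, 0, 3]))   # [1, 0, 3]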
[092] Correctness proof of the above vReLU computation via the 2's complement method:
[093] Let μ = (μ1, …, μn) ∈ Rn be an arbitrary vector. If μi < 0 for some i, then δi = 0* (refer step 4). Consequently, Δ(μ) = 0*, which implies that RES = (0*, …, 0*) and hence, when decrypted, bH.D(RES) = (0, …, 0) is obtained as required. On the other hand, if μi ≥ 0 for all i = 1, …, n, then δi = 1* for every i. In this case Δ(μ) = 1* and hence RES = (1*·μ1*, …, 1*·μn*). Thus, when decrypted, bH.D(RES) = (1·μ1, …, 1·μn) = (μ1, …, μn) is obtained as required.
[094] Below algorithm/pseudo code illustrates an overall method of FIG. 2 of the present disclosure, and is provided by way of example:
1. procedure (e.g., say cloud-based optimization using FHE scheme)
Input: Encrypted inputs from one or more client devices (e.g., also referred
to as agents), wherein inputs are received from each agent Ai for i = 1, 2, …, p. Public key
of target node T, pkT
Output: The value of x as in equation (3)
Phase 1: Input encryption. The homomorphic properties which are common to all FHE algorithms, as the case may be, are used.
2. All agents encrypt their data using public key of T (or target node)
3. All agents send their data to cloud C
Phase 2: Problem formulation stage. Here the cloud C uses the encrypted inputs and formulates the problem in a required format. It also initializes parameter values required during the computation.
4. C constructs A*, Q*, Q-1*, and the vectors b* and c* as in equation (3), where * denotes that the input is encrypted using an FHE scheme.
5. C selects a value for error parameter n (e.g., say n = 0.005) and then computes n* = h.EpkT(n)
6. C chooses a m × 1 initial vector μ0 with all positive entries and then computes μ0* = h.EpkT(μ0)
Phase 3: In this phase, the cloud runs main set of computations.
7. for iterations i = 1,2,…, N do
8. C updates ∇(μi*) ← -A*·Q-1*·(At*·μi* + c*) - b*
9. C updates μi* ← μi* + n*·∇(μi*)
10. C uses the vReLU function computation algorithm (either the polynomial approximation method or the 2's complement method) to obtain μi+1* as below:
μi+1* ← vReLU(0, μi*)
11. end for
Phase 4: This is the final phase of optimization. In this phase, the cloud stops the computations and sends the encrypted output vector/output to T (or target node), which decrypts the output and obtains the result.
12. Cloud C sends the encrypted output to T (or target node).
13. T (or target node) decrypts the output and gets required result.
14. end procedure
[095] The written description describes the subject matter herein to enable any person skilled in the art to make and use the embodiments. The scope of the subject matter embodiments is defined by the claims and may include other
modifications that occur to those skilled in the art. Such other modifications are intended to be within the scope of the claims if they have similar elements that do not differ from the literal language of the claims or if they include equivalent elements with insubstantial differences from the literal language of the claims.
[096] As mentioned above, privacy preserving distributed optimization is a class of problems in which multiple parties collaborate with each other to solve an optimization problem with the guarantee that the input data of each party remains private/secure. These problems arise in several areas such as engineering design and control, logistics, scheduling, etc. One of the known techniques to solve this problem involves random transformations of inputs and then delegating computations to the cloud. This approach suffers from weak security guarantees. In another conventional technique, Partially Homomorphic Encryption techniques were used to solve quadratic optimization problems. This technique, though secure, has high communication complexity. Embodiments of the present disclosure provide a system and method that implement a quadratic optimization method based on a Fully Homomorphic Encryption (FHE) scheme. More specifically, the private data is encrypted using an FHE scheme and serves as an input to a computing device such as a cloud, server, and the like. The computing device (e.g., cloud, server, etc.) executes the optimization solver on the encrypted data and sends back the encrypted output. In performing the above method, the present disclosure ensures that there is low communication complexity while providing standard security.
[097] It is to be understood that the scope of the protection is extended to such a program and in addition to a computer-readable means having a message therein; such computer-readable storage means contain program-code means for implementation of one or more steps of the method, when the program runs on a server or mobile device or any suitable programmable device. The hardware device can be any kind of device which can be programmed including e.g., any kind of computer like a server or a personal computer, or the like, or any combination thereof. The device may also include means which could be e.g., hardware means like e.g., an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a combination of hardware and software means, e.g., an ASIC
and an FPGA, or at least one microprocessor and at least one memory with software processing components located therein. Thus, the means can include both hardware means and software means. The method embodiments described herein could be implemented in hardware and software. The device may also include software means. Alternatively, the embodiments may be implemented on different hardware devices, e.g., using a plurality of CPUs.
[098] The embodiments herein can comprise hardware and software elements. The embodiments that are implemented in software include but are not limited to, firmware, resident software, microcode, etc. The functions performed by various components described herein may be implemented in other components or combinations of other components. For the purposes of this description, a computer-usable or computer readable medium can be any apparatus that can comprise, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
[099] The illustrated steps are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope of the disclosed embodiments. Also, the words “comprising,” “having,” “containing,” and “including,” and other similar forms are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items. It must also be noted that as used herein and in the appended claims, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise.
[100] Furthermore, one or more computer-readable storage media may be utilized in implementing embodiments consistent with the present disclosure. A computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored. Thus, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein. The term “computer-readable medium” should be understood to include tangible items and exclude carrier waves and transient signals, i.e., be non-transitory. Examples include random access memory (RAM), read-only memory (ROM), volatile memory, nonvolatile memory, hard drives, CD ROMs, DVDs, flash drives, disks, and any other known physical storage media.
[101] It is intended that the disclosure and examples be considered as exemplary only, with a true scope of disclosed embodiments being indicated by the following claims.
We Claim:
1. A processor implemented method, comprising:
obtaining an input comprising (i) a plurality of encrypted matrices, and (ii) a plurality of encrypted vectors from one or more client devices (202);
encrypting an initialized error parameter value (n) to obtain an encrypted error parameter value (n*) (204);
selecting a random vector (μ0) with a pre-defined size and encrypting the random vector (μ0) to obtain an encrypted random vector (206); and
iteratively performing, until a last iteration of a number of pre-defined iterations is reached (208),
computing nabla (Δμi*) based on the plurality of encrypted matrices, and the plurality of encrypted vectors (208a);
updating the random vector (μ0*) based on the encrypted error parameter value (n*) and the computed nabla (Δμi*) to obtain an output vector μi* (208b); and
updating the output vector based on a vReLU function computed for the output vector μi* to obtain a final vector (208c).
2. The processor implemented method of claim 1, wherein the vReLU function of the output vector μi* is computed using a two’s complement method and a polynomial approximation method.
3. The processor implemented method of claim 2, wherein the vReLU function of the output vector μi* is computed using the two’s complement method by:
deriving a two’s complement of each vector comprised in the output vector μi*;
extracting a most significant bit (MSB) for the derived two’s complement of each vector comprised in the output vector μi*;
converting the MSB of the one or more vectors into a first domain to obtain first domain based MSB vectors;
summing the first domain based MSB vectors to obtain a final value;
computing a difference based on a first domain based encryption of a pre-defined value and the final value;
representing the computed difference as a pre-defined form in a second domain;
computing a logical operation on an MSB of the pre-defined form and a pre-determined value to obtain a logical operation-based output; and
computing the vReLU function of the output vector based on the logical operation-based output and the output vector.
4. The processor implemented method of claim 3, wherein the logical operation-based output as 0 indicates presence of at least one negative number in the output vector, and wherein the logical operation-based output as 1 indicates presence of positive numbers in the output vector.
5. The processor implemented method of claim 2, wherein the vReLU function of the output vector is computed using the two’s complement method when the output vector comprises binary vectors.
6. The processor implemented method of claim 2, wherein the vReLU function of the output vector is computed using the polynomial approximation method when the output vector comprises integer values.
7. The processor implemented method of claim 2, wherein the vReLU function of the output vector is computed using the polynomial approximation method by:
computing a multiplicative output of vectors comprised in the output vector;
computing delta as an inverse of (i) the multiplicative output of vectors comprised in the output vector, and (ii) a pre-determined constant ε;
updating a value of delta based on the multiplicative output to obtain an updated delta value; and
obtaining the vReLU function of the output vector based on the updated delta value and the output vector.
8. A system (100), comprising:
a memory (102) storing instructions;
one or more communication interfaces (106); and
one or more hardware processors (104) coupled to the memory (102) via the one or more communication interfaces (106), wherein the one or more hardware processors (104) are configured by the instructions to:
obtain an input comprising (i) a plurality of encrypted matrices, and (ii) a plurality of encrypted vectors from one or more client devices;
encrypt an initialized error parameter value (n) to obtain an encrypted error parameter value (n*);
select a random vector (μ0) with a pre-defined size and encrypt the random vector (μ0) to obtain an encrypted random vector; and
iteratively perform, until a last iteration of a number of pre-defined iterations is reached,
compute nabla (Δμi*) based on the plurality of encrypted matrices, and the plurality of encrypted vectors;
update the random vector (μ0*) based on the encrypted error parameter value (n*) and the computed nabla (Δμi*) to obtain an output vector μi*; and
update the output vector based on a vReLU function computed for the output vector μi* to obtain a final vector.
9. The system of claim 8, wherein the vReLU function of the output vector μi* is computed using a two’s complement method and a polynomial approximation method.
10. The system of claim 9, wherein the vReLU function of the output vector μi* is computed using the two’s complement method by:
deriving a two’s complement of each vector comprised in the output vector μi*;
extracting a most significant bit (MSB) for the derived two’s complement of each vector comprised in the output vector μi*;
converting the MSB of the one or more vectors into a first domain to obtain first domain based MSB vectors;
summing the first domain based MSB vectors to obtain a final value;
computing a difference based on a first domain based encryption of a pre-defined value and the final value;
representing the computed difference as a pre-defined form in a second domain;
computing a logical operation on an MSB of the pre-defined form and a pre-determined value to obtain a logical operation-based output; and
computing the vReLU function of the output vector based on the logical operation-based output and the output vector.
11. The system of claim 10, wherein the logical operation-based output as 0 indicates presence of at least one negative number in the output vector, and wherein the logical operation-based output as 1 indicates presence of positive numbers in the output vector.
12. The system of claim 9, wherein the vReLU function of the output vector is computed using the two’s complement method when the output vector comprises binary vectors.
13. The system of claim 9, wherein the vReLU function of the output vector is computed using the polynomial approximation method when the output vector comprises integer values.
14. The system of claim 9, wherein the vReLU function of the output vector is computed using the polynomial approximation method by:
computing a multiplicative output of vectors comprised in the output vector;
computing delta as an inverse of (i) the multiplicative output of vectors comprised in the output vector, and (ii) a pre-determined constant ε;
updating a value of delta based on the multiplicative output to obtain an updated delta value; and
obtaining the vReLU function of the output vector based on the updated delta value and the output vector.