Abstract: A system for video transformation to deter object recognition is disclosed. The system includes an object detection subsystem configured to extract a plurality of pixels from at least one detected object, and a transformation matrix generation subsystem configured to generate a transformation matrix to apply on the plurality of pixels. The transformation matrix generation subsystem includes a bottleneck warp vector generation subsystem configured to generate a warp vector for transforming the plurality of pixels, and a warp vector modulation subsystem configured to modulate the generated warp vector for generating the transformation matrix. The system also includes a transformation subsystem configured to apply the generated transformation matrix on each of the set of rows associated with the plurality of pixels, and to resize at least one of the set of rows associated with the plurality of pixels. FIG. 1
Claims: WE CLAIM:
1. A method (380) for transforming video to deter object recognition comprising:
extracting, by an object detection subsystem, a plurality of pixels from at least one detected object using one or more object detection techniques, wherein at least one object is detected from one or more videos or images; (390)
generating, by a transformation matrix generation subsystem, a transformation matrix to apply on the plurality of pixels; (400)
generating, by a bottleneck warp vector generation subsystem, a warp vector for transforming the plurality of pixels associated with the set of rows; (410)
modulating, by a warp vector modulation subsystem, generated bottleneck warp vector using a predefined blending function for generating a transformation matrix; (420)
applying, by a transformation subsystem, generated transformation matrix on each of at least one of the set of rows and the set of columns associated with the plurality of pixels; and (430)
resizing, by the transformation subsystem, at least one of the set of rows and the set of columns associated with the plurality of pixels upon replication of corresponding plurality of pixels in the image. (440)
2. The method (380) as claimed in claim 1, further comprising rotating, by the transformation subsystem, the plurality of pixels by an angle -alpha to align the plurality of pixels in an XY plane upon detection of a tilted angle alpha of the at least one detected object using a tilt detection technique.
3. The method (380) as claimed in claim 1, further comprising:
assigning, by an assigning subsystem, a bottleneck warp vector to a predefined set of middle rows associated with the transformation matrix;
assigning, by the assigning subsystem, a modulated bottleneck warp vector to a predefined set of middle to upper and lower rows associated with the transformation matrix; and
assigning, by the assigning subsystem, a uniform distribution vector to a predefined set of lower rows and a predefined set of upper rows associated with the transformation matrix.
4. The method (380) as claimed in claim 1, further comprising assembling, by the warp vector modulation subsystem, the warp vectors associated with the predefined set of rows associated with the transformation matrix.
5. The method (380) as claimed in claim 1, further comprising applying, by the transformation subsystem, transformation matrix on the plurality of pixels, wherein the plurality of pixels is replicated based on the transformation matrix.
6. The method (380) as claimed in claim 1, further comprising resizing, by the transformation subsystem, at least one of the set of rows and the set of columns to the width associated with the plurality of pixels upon replication of corresponding plurality of pixels in the image.
7. A system (100) for video transformation to deter object recognition comprising:
an object detection subsystem (110) configured to extract a plurality of pixels from at least one detected object using one or more object detection techniques, wherein at least one object is detected from one or more videos or images;
a transformation matrix generation subsystem (120) operatively coupled to the object detection subsystem (110), and configured to generate a transformation matrix to apply on the plurality of pixels, wherein the transformation matrix generation comprises:
a bottleneck warp vector generation subsystem (130) configured to generate a warp vector for transforming the plurality of pixels associated with the set of rows,
a warp vector modulation subsystem (140) operatively coupled to the bottleneck warp vector generation subsystem (130), and configured to modulate generated bottleneck warp vector using a predefined blending function for generating a transformation matrix;
a transformation subsystem (150) operatively coupled to the transformation matrix generation subsystem (120), and configured to:
apply generated transformation matrix on each of at least one of the set of rows and set of columns associated with the plurality of pixels, and
resize at least one of the set of rows and the set of columns associated with the plurality of pixels upon replication of corresponding plurality of pixels in the image.
8. The system (100) as claimed in claim 7, wherein the transformation subsystem is also configured to rotate the plurality of pixels by an angle -alpha to align the plurality of pixels in an XY plane upon detection of a tilted angle alpha of the at least one detected object using a tilt detection technique.
9. The system (100) as claimed in claim 7, further comprising an assigning subsystem configured to:
assign a bottleneck warp vector to a predefined set of middle rows associated with the transformation matrix;
assign a modulated bottleneck warp vector to a predefined set of middle to upper and lower rows associated with the transformation matrix; and
assign a uniformly distributed vector to a predefined set of lower rows and a predefined set of upper rows associated with the transformation matrix.
10. The system (100) as claimed in claim 7, wherein the warp vector modulation subsystem is also configured to assemble the warp vectors associated with the predefined set of rows associated with the transformation matrix.
11. The system (100) as claimed in claim 7, wherein the transformation subsystem is also configured to apply transformation matrix on the plurality of pixels, wherein the plurality of pixels is replicated based on the transformation matrix.
12. The system (100) as claimed in claim 7, wherein the transformation subsystem is also configured to resize at least one of the set of rows and the set of columns to the width associated with the plurality of pixels upon replication of corresponding plurality of pixels in the image.
Dated this 1st day of August 2019
Signature
Vidya Bhaskar Singh Nandiyal
Patent Agent (IN/PA-2912)
Agent for the Applicant
Description: FIELD OF INVENTION
Embodiments of a present disclosure relate to object recognition, and more particularly to a system and method for video or image transformation to deter object recognition.
BACKGROUND
Object recognition is a computer vision technique which perceives the physical properties of objects of a certain class in digital images or videos. Nowadays, there has been a huge surge in the amount of video acquired all over the world. This can violate privacy, as there is often no control over the amassing and distribution of the videos. There is therefore a need to deter object recognition by transforming the images or videos. Various systems are available for video transformation to deter object recognition.
At present, the systems available to transform video rely on blurring or distortion, which cannot preserve the intent of the original image and the detected object while modifying the object. Also, there are no existing systems which transform the detected object in the image or video by applying a warp vector.
Hence, there is a need for an improved system and method for video or image transformation to deter object recognition in order to address the aforementioned issues.
BRIEF DESCRIPTION
In accordance with an embodiment of the disclosure, a system for video transformation to deter object recognition is disclosed. The system includes an object detection subsystem configured to extract a plurality of pixels from at least one detected object using one or more object detection techniques, wherein at least one object is detected from one or more videos or images. The system also includes a transformation matrix generation subsystem operatively coupled to the object detection subsystem. The transformation matrix generation subsystem is configured to generate a transformation matrix to apply on the plurality of pixels. The transformation matrix generation subsystem includes a bottleneck warp vector generation subsystem configured to generate a warp vector for transforming the plurality of pixels associated with the set of rows. The transformation matrix generation subsystem also includes a warp vector modulation subsystem operatively coupled to the bottleneck warp vector generation subsystem. The warp vector modulation subsystem is configured to modulate the generated bottleneck warp vector using a predefined blending function for generating the transformation matrix. The system also includes a transformation subsystem operatively coupled to the transformation matrix generation subsystem. The transformation subsystem is configured to apply the generated transformation matrix on each of the set of rows and the set of columns associated with the plurality of pixels. The transformation subsystem is also configured to resize at least one of the set of rows and the set of columns associated with the plurality of pixels upon replication of the corresponding plurality of pixels in the image.
In accordance with another embodiment of the disclosure, a method for transforming video to deter object recognition is provided. The method includes extracting a plurality of pixels from at least one detected object using one or more object detection techniques, wherein at least one object is detected from one or more videos or images. The method also includes generating a transformation matrix to apply on the plurality of pixels. The method also includes generating a warp vector for transforming the plurality of pixels associated with the set of rows. The method also includes modulating the generated bottleneck warp vector using a predefined blending function for generating the transformation matrix. The method also includes applying the generated transformation matrix on each of the set of rows associated with the plurality of pixels. The method also includes resizing at least one of the set of rows associated with the plurality of pixels upon replication of the corresponding plurality of pixels in the image.
To further clarify the advantages and features of the present disclosure, a more particular description of the disclosure will follow by reference to specific embodiments thereof, which are illustrated in the appended figures. It is to be appreciated that these figures depict only typical embodiments of the disclosure and are therefore not to be considered limiting in scope. The disclosure will be described and explained with additional specificity and detail with the appended figures.
BRIEF DESCRIPTION OF THE DRAWINGS
The disclosure will be described and explained with additional specificity and detail with the accompanying figures in which:
FIG. 1 is a block diagram representation of a system for video transformation to deter object recognition in accordance with an embodiment of the present disclosure;
FIG. 2 is a block diagram representation of an embodiment of the system for video transformation to deter object recognition of FIG. 1 in accordance with an embodiment of the present disclosure;
FIG. 3 is a block diagram representation of another embodiment of the system for video transformation to deter object recognition of FIG. 1 in accordance with an embodiment of the present disclosure;
FIG. 4 is a block diagram representation of a general computer system in accordance with an embodiment of the present disclosure;
FIG. 5 is a flow diagram representing steps involved in a method for transforming video to deter object recognition in accordance with an embodiment of the present disclosure; and
FIG. 6 is an image representation of a transformation matrix for transforming video to deter object recognition in accordance with an embodiment of the present disclosure.
Further, those skilled in the art will appreciate that elements in the figures are illustrated for simplicity and may not have necessarily been drawn to scale. Furthermore, in terms of the construction of the device, one or more components of the device may have been represented in the figures by conventional symbols, and the figures may show only those specific details that are pertinent to understanding the embodiments of the present disclosure so as not to obscure the figures with details that will be readily apparent to those skilled in the art having the benefit of the description herein.
DETAILED DESCRIPTION
For the purpose of promoting an understanding of the principles of the disclosure, reference will now be made to the embodiment illustrated in the figures and specific language will be used to describe them. It will nevertheless be understood that no limitation of the scope of the disclosure is thereby intended. Such alterations and further modifications in the illustrated system, and such further applications of the principles of the disclosure as would normally occur to those skilled in the art are to be construed as being within the scope of the present disclosure.
The terms "comprises", "comprising", or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a process or method that comprises a list of steps does not include only those steps but may include other steps not expressly listed or inherent to such a process or method. Similarly, one or more devices or sub-systems or elements or structures or components preceded by "comprises... a" does not, without more constraints, preclude the existence of other devices, sub-systems, elements, structures, components, additional devices, additional sub-systems, additional elements, additional structures or additional components. Appearances of the phrase "in an embodiment", "in another embodiment" and similar language throughout this specification may, but not necessarily do, all refer to the same embodiment.
Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by those skilled in the art to which this disclosure belongs. The system, methods, and examples provided herein are only illustrative and not intended to be limiting.
In the following specification and the claims, reference will be made to a number of terms, which shall be defined to have the following meanings. The singular forms “a”, “an”, and “the” include plural references unless the context clearly dictates otherwise.
Embodiments of the present disclosure relate to a system and method for video transformation to deter object recognition. The system includes an object detection subsystem configured to extract a plurality of pixels from at least one detected object using one or more object detection techniques, wherein at least one object is detected from one or more videos or images. The system also includes a transformation matrix generation subsystem operatively coupled to the object detection subsystem. The transformation matrix generation subsystem is configured to generate a transformation matrix to apply on the plurality of pixels. The transformation matrix generation subsystem includes a bottleneck warp vector generation subsystem configured to generate a warp vector for transforming the plurality of pixels associated with the set of rows. The transformation matrix generation subsystem also includes a warp vector modulation subsystem operatively coupled to the bottleneck warp vector generation subsystem. The warp vector modulation subsystem is configured to modulate the generated bottleneck warp vector using a predefined blending function for generating the transformation matrix. The system also includes a transformation subsystem operatively coupled to the transformation matrix generation subsystem. The transformation subsystem is configured to apply the generated transformation matrix on each of the set of rows and the set of columns associated with the plurality of pixels. The transformation subsystem is also configured to resize at least one of the set of rows associated with the plurality of pixels upon replication of the corresponding plurality of pixels in the image.
FIG. 1 is a block diagram representation of a system (100) for video transformation to deter object recognition in accordance with an embodiment of the present disclosure. The system (100) includes an object detection subsystem (110) configured to extract a plurality of pixels from at least one detected object using one or more object detection techniques, wherein at least one object is detected from one or more videos or images. As used herein, the term “video” refers to a moving picture formed by combining a sequence of images. In one embodiment, the one or more object detection techniques may include a deep learning based neural network, such as a region-based convolutional neural network, a fast region-based convolutional neural network, and the like.
In one embodiment, the plurality of pixels may be resized to a predefined value. In such embodiment, the predefined value corresponds to a width of each of the plurality of pixels and a height of each of the plurality of pixels.
The system (100) also includes a transformation matrix generation subsystem (120) operatively coupled to the object detection subsystem (110).
The transformation matrix generation subsystem (120) is configured to generate a transformation matrix to apply on the plurality of pixels corresponding to the at least one detected object. In some embodiments, the transformation matrix generation subsystem (120) may also be configured to apply the transformation matrix on the plurality of pixels to transform the plurality of pixels associated with a set of rows. The generation of the transformation matrix is described in the embodiments below.
Further, the transformation matrix generation subsystem (120) includes a bottleneck warp vector generation subsystem (130) configured to transform the plurality of pixels associated with the set of rows by generating a warp vector, wherein the warp vector is associated with each of the set of rows. In one embodiment, each element of the warp vector corresponds to a numerical value. As used herein, the term “warp vector” is an advanced distortion effect driven by a (color) vector bitmap rather than a grayscale map.
In some embodiments, the warp vector may be configured to convey a pixel density distribution. In such embodiments, the numerical values in the warp vector may include small values and large values. A small value in the warp vector represents a compressed image, and a large value represents a decompressed image.
In some embodiments, the length of the warp vector may be the same as the width of each of the plurality of pixels. In one embodiment, the values of the warp vector represent the number of times the corresponding pixel in the image may be replicated. In some embodiments, an uneven distribution of the plurality of pixels, followed by resizing to the width associated with the plurality of pixels, may cause a warp behaviour.
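For illustration only, the replication-and-resize behaviour described above may be sketched in Python; the function name, nearest-neighbour resampling, and integer warp values are assumptions of this sketch, not part of the disclosure:

```python
def warp_row(row, warp_vector):
    """Replicate each pixel warp_vector[j] times, then resample the result
    back to the original width (nearest-neighbour)."""
    assert len(row) == len(warp_vector)
    replicated = []
    for pixel, count in zip(row, warp_vector):
        replicated.extend([pixel] * count)   # small count = compression
    width = len(row)
    scale = len(replicated) / width
    return [replicated[int(j * scale)] for j in range(width)]
```

Small warp values compress a region of the row while large values stretch it; resampling the unevenly replicated row back to its original width is what produces the warp effect.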
In one embodiment, at a first instance, the bottleneck warp vector generation subsystem (130) may be configured to apply a predefined step function along one of a horizontal axis or a vertical axis of the image. At a second instance, the bottleneck warp vector generation subsystem (130) may be configured to apply the predefined step function along the axis opposite to the already applied axis.
More specifically, in one exemplary embodiment, at the first instance, the bottleneck warp vector generation subsystem (130) may be configured to apply the predefined step function along the horizontal axis of each of the set of rows to obtain a compression or decompression effect. As used herein, the compression or decompression represents one or more warps.
At the second instance, the bottleneck warp vector generation subsystem (130) may be configured to apply the predefined step function along the vertical axis of each of the set of rows to obtain a compression or decompression effect. In another embodiment, the bottleneck warp vector generation subsystem (130) may be configured to apply the predefined step function in a vice versa manner.
The predefined step function may be applied along the horizontal and vertical axes for N intervals to obtain a compression or decompression effect on the at least one detected object.
In one embodiment, the step function may be defined by the equation f(x) = Σ_{i=0}^{N} a_i g_{Ki}(x) for all real numbers x, where N is the number of intervals (K0, K1, …, KN) and a_i is the interval amplitude. Also, g_{Ki}(x) = 1 for x ∈ Ki and g_{Ki}(x) = 0 for x ∉ Ki. Further, the step function may be smoothed to provide more aesthetic results.
For example, the step function along the horizontal axis includes intervals such as K0 = [−∞, 0], K1 = [0, 25], K2 = [26, 44], K3 = [45, 83], K4 = [84, 102], K5 = [103, 127], K6 = [128, ∞], wherein the K2 and K4 intervals represent a compression effect and the K3 interval represents a decompression effect. Further, the corresponding amplitudes are a0 = 0, a1 = 100, a2 = 40, a3 = 50, a4 = 40, a5 = 100, a6 = 0. Further, the generated step function may be filtered through a smoothing filter to smoothen the transitions.
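As an illustrative sketch only, the step function and smoothing above may be written in Python; clipping the open-ended K0 and K6 intervals (amplitude 0) to a 128-pixel row and the 3-tap moving average are assumptions of this sketch:

```python
def step_warp_vector(width, intervals, amplitudes):
    """Evaluate f(x) = sum_i a_i * g_Ki(x) at each integer x in [0, width)."""
    vec = []
    for x in range(width):
        value = 0
        for (lo, hi), a in zip(intervals, amplitudes):
            if lo <= x <= hi:
                value += a
        vec.append(value)
    return vec

def smooth(vec, k=3):
    """Moving-average filter to soften the transitions between intervals."""
    half = k // 2
    out = []
    for i in range(len(vec)):
        window = vec[max(0, i - half):i + half + 1]
        out.append(round(sum(window) / len(window)))
    return out

# Intervals and amplitudes from the example above; the open-ended K0 and K6
# (amplitude 0) fall outside the 128-pixel row and are omitted here.
intervals = [(0, 25), (26, 44), (45, 83), (84, 102), (103, 127)]
amplitudes = [100, 40, 50, 40, 100]
warp = smooth(step_warp_vector(128, intervals, amplitudes))
```

The resulting vector carries large values (100) near the edges, dips to 40 in the compression intervals K2 and K4, and rises to 50 in the decompression interval K3, with the moving average blurring each step boundary.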
In another embodiment, the bottleneck warp vector generation subsystem (130) may be configured to generate the bottleneck warp vector using a Gaussian distribution. The bottleneck warp vector generation subsystem (130) may be configured to generate a plurality of Gaussian vectors based on the width of each of the plurality of pixels for generating the bottleneck warp vector. Further, the bottleneck warp vector generation subsystem (130) may be configured to add the plurality of generated Gaussian vectors to obtain a resultant vector. The resultant vector may be subtracted from the maximum value of the Gaussian vectors to generate the bottleneck warp vector.
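A minimal Python sketch of this Gaussian construction follows; the number, placement, and spread of the Gaussian vectors are assumptions chosen for illustration:

```python
import math
import random

def gaussian_bottleneck_vector(width, n_vectors=2, seed=0):
    """Sum several Gaussian-shaped vectors across the row, then subtract the
    sum from its maximum so the warp is smallest where the Gaussians peak."""
    rng = random.Random(seed)
    total = [0.0] * width
    for _ in range(n_vectors):
        # Placement and spread of each Gaussian are assumptions of this sketch.
        mu = width / 2 + rng.uniform(-width / 8, width / 8)
        sigma = width / 6
        for x in range(width):
            total[x] += math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))
    peak = max(total)
    # Subtracting from the maximum inverts the bell shape into a bottleneck.
    return [peak - v for v in total]
```

Because the Gaussians peak near the centre, the subtraction leaves small warp values (compression) in the middle of the row and large values towards its edges, which is the bottleneck shape.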
Furthermore, the transformation matrix generation subsystem (120) also includes a warp vector modulation subsystem (140) operatively coupled to the bottleneck warp vector generation subsystem (130). The warp vector modulation subsystem (140) is configured to modulate the generated bottleneck warp vector using a predefined blending function for generating the transformation matrix.
In one embodiment, the warp vector modulation subsystem (140) may be configured to modulate the generated bottleneck warp vector to maximize the warping for a predefined row from the set of rows using the predefined blending function. In such embodiment, the warp vector modulation subsystem (140) may be configured to taper the warping towards the beginning and ending rows corresponding to the plurality of pixels, enabling the plurality of pixels to blend back smoothly into the original image.
In one embodiment, rows with the maximum warping may be considered bottleneck rows. In another embodiment, rows with no warping may be considered to have a uniform distribution.
Further, the warp vector modulation subsystem (140) may be configured to generate the transformation matrix by initializing the transformation matrix to zero based on a height of the plurality of pixels and a width of the plurality of pixels.
Furthermore, the system (100) may include an assigning subsystem configured to assign the bottleneck warp vector to a predefined set of middle rows associated with the transformation matrix. In one embodiment, the predefined set of middle rows may include rows from BN_T (as shown in FIG. 6) (480) to BN_B (as shown in FIG. 6) (490). In such embodiment, the BN_T (480) corresponds to the bottleneck top row. In some embodiments, the BN_B (490) corresponds to the bottleneck bottom row.
Further, the assigning subsystem may be configured to assign a modulated bottleneck warp vector to a predefined set of middle to upper and lower rows associated with the transformation matrix.
Furthermore, the assigning subsystem may also be configured to assign a uniformly distributed vector to a predefined set of lower rows and a predefined set of upper rows associated with the transformation matrix. In one embodiment, the predefined set of lower rows may include rows from BL_B (as shown in FIG. 6) (500) to CH (as shown in FIG. 6) (510). In such embodiment, BL_B (500) is the blending bottom row, where the warp vector modulation ends so as to obtain the uniform distribution of the plurality of pixels. In one embodiment, CH (510) represents the height associated with the plurality of pixels.
In some embodiments, the predefined set of upper rows may include rows from 0 (as shown in FIG. 6) (460) to BL_T (as shown in FIG. 6) (470). In such embodiment, the BL_T (470) is the blending top row, where the warp vector modulation ends so as to obtain the uniform distribution of the plurality of pixels.
Moreover, the warp vector modulation subsystem (140) may be configured to assemble the warp vectors associated with the predefined set of rows associated with the transformation matrix.
In one embodiment, the transformation matrix may be generated using the equations defined below. The warp vectors for the rows below BL_B and above BL_T may be initialized with a uniform distribution as defined in the equation
transform_matrix[i] = CW · U(j, 0, CW), i ∈ [0, BL_T − 1] ∪ [BL_B, CH − 1], j ∈ [0, CW − 1]
where i is the row number of the image, j is the column number of the image, U(x, a, b) = 1 / (b − a) for a ≤ x < b, CW is the width of the plurality of pixels and CH is the height of the plurality of pixels.
In continuation with the above-mentioned equation, the rows from BN_T to BN_B may be generated by using the equation:
transform_matrix[i] = bottleneck_warp_vector, i ∈ [BN_T, BN_B]
The deltas are defined as
delta_BT = (max(bottleneck_warp_vector) − min(bottleneck_warp_vector)) / (BN_T − BL_T)
delta_BB = (max(bottleneck_warp_vector) − min(bottleneck_warp_vector)) / (BL_B − BN_B)
For a vector X of length VLEN, the clip function is defined as
clip(X): if X[j] > N, then X[j] = N, else X[j] = X[j], for j ∈ [0, VLEN − 1]
where clip(X) is a function that clips each value of X to a maximum defined value N when the value is greater than N, and otherwise leaves it unchanged. Furthermore, the warp vectors for the rows from BL_T to BN_T are calculated by the equation:
transform_matrix[i − 1] = clip(transform_matrix[i] + delta_BT), i ∈ (BL_T, BN_T]
where delta_BT is a scalar added to all values of the vector transform_matrix[i], which in turn achieves modulation from the bottleneck warp vector at location BN_T to a uniform distribution at location BL_T.
The warp vectors for the rows below BN_B till BL_B are calculated by the equation:
transform_matrix[i + 1] = clip(transform_matrix[i] + delta_BB), i ∈ [BN_B, BL_B)
which in turn achieves modulation from the bottleneck warp vector at location BN_B to a uniform distribution at location BL_B.
Furthermore, the system (100) also includes a transformation subsystem (150) operatively coupled to the transformation matrix generation subsystem (120). The transformation subsystem (150) may include a transformation function configured to apply the generated transformation matrix on the plurality of pixels associated with each of the set of rows to deter object recognition.
In one embodiment, the transformation subsystem (150) may also be configured to apply a tilt detection technique on the plurality of pixels corresponding to the at least one detected object to detect the tilted angle alpha of the detected object prior to transforming the plurality of pixels corresponding to the at least one detected object. The transformation subsystem (150) may be configured to rotate the plurality of pixels by angle -alpha to align the plurality of pixels in an XY plane upon detection of the tilted angle alpha of the at least one detected object.
In one embodiment, the transformation subsystem (150) may be configured to transform the plurality of pixels associated with each of the set of rows by replicating each of the plurality of pixels corresponding to each of the set of rows based on the values associated with the warp vectors of the transformation matrix. In some embodiments, the size of each of the set of rows may be increased upon replicating each of the plurality of pixels.
Further, in one exemplary embodiment, the transformation subsystem (150) may be configured to obtain a transformed set of rows upon resizing the replicated set of rows to the predefined size. In one specific embodiment, upon performing the above-described functions on at least one of the set of rows, the same functions are subsequently performed on at least one of the set of columns.
The transformation subsystem (150) may be configured to rotate the plurality of pixels of the image associated with the transformation matrix by 90 degrees to transform the plurality of pixels associated with each of a set of columns, by replicating each of the pixels corresponding to each of the set of columns based on the values associated with the warp vector. In some embodiments, the size of each of the set of columns may be increased upon replicating each of the plurality of pixels. Further, the transformation subsystem (150) is also configured to resize at least one of the set of rows to the width associated with the plurality of pixels upon replication of the corresponding plurality of pixels in the image.
The transformation subsystem (150) may be configured to obtain a transformed set of columns upon resizing the replicated set of columns to the predefined size. Further, the plurality of pixels associated with the matrix may be rotated by −(90 − alpha) degrees to obtain the original image.
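The overall row-then-column scheme described above may be sketched as follows; transposition stands in for the 90-degree rotation, and the nearest-neighbour resampling and reuse of per-row warp vectors are assumptions of this sketch:

```python
def warp_rows(img, matrix):
    """Warp every row: replicate pixels per the row's warp vector, then
    resample the row back to its original width (nearest-neighbour)."""
    out = []
    for row, wv in zip(img, matrix):
        rep = [p for p, c in zip(row, wv) for _ in range(c)]
        w = len(row)
        out.append([rep[int(j * len(rep) / w)] for j in range(w)])
    return out

def transpose(img):
    """Swap rows and columns; stands in for the 90-degree rotation."""
    return [list(col) for col in zip(*img)]

def warp_rows_and_columns(img, row_matrix, col_matrix):
    # Warp the rows, transpose so columns become rows, warp again with the
    # column matrix, then transpose back to the original orientation.
    return transpose(warp_rows(transpose(warp_rows(img, row_matrix)), col_matrix))
```

With all-ones (uniform) warp matrices, the transform leaves the image unchanged, which matches the uniform-distribution rows blending back into the original image.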
FIG. 2 is a block diagram of an embodiment of the system (160) for video transformation to deter object recognition of FIG. 1 in accordance with an embodiment of the present disclosure. In one exemplary embodiment, a face (190) as an object is detected from an input image (170) by finding the location and size of the face. Further, a plurality of pixels is extracted from the face (190) by the object detection subsystem (180) to further process the detected face (190). Furthermore, the plurality of pixels associated with the face (190) is resized based on the height and width of the plurality of pixels.
Further, the tilt detection technique is applied on the plurality of pixels corresponding to the at least one detected object to detect the tilted angle alpha of the detected object prior to transforming the plurality of pixels corresponding to the at least one detected object.
Furthermore, upon detection of the tilted angle alpha of the at least one detected object, the plurality of pixels is rotated by the transformation matrix generation subsystem (200) by angle -alpha to align the plurality of pixels in an XY plane.
Afterwards, transformation of the face is performed by generating a transformation matrix. In such case, the transformation matrix is generated by the transformation matrix generation subsystem (200) to transform the face (190) by applying it to the plurality of pixels associated with the face (190). For generating the transformation matrix, the warp vector is first generated by the bottleneck warp vector generation subsystem (210) and applied to a first row including the plurality of pixels. Similarly, a warp vector is generated for each row, where the warp vector includes numeric values which represent the number of times the corresponding pixel in the detected face (190) is replicated.
Further, each row is resized to the width associated with the plurality of pixels. Furthermore, the amount of transformation is controlled by the warp vector modulation subsystem (220), by varying the warp vector for each row such that the warping is maximum at a predefined row associated with the plurality of pixels.
Moreover, tapering is performed towards the beginning and ending rows of the plurality of pixels to blend back the plurality of pixels smoothly into the original image.
Further, the plurality of pixels is rotated by 90 degrees for performing the same process, which results in warping of the columns associated with the plurality of pixels. Upon transforming all the rows and columns of the plurality of pixels, a transformed face (240) is obtained. Furthermore, the plurality of pixels is rotated by –(90-alpha) degrees to restore the original position and is plugged back into the entire image.
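The row-then-column pass described above can be sketched in Python. This is a minimal illustration and not the patented implementation: it assumes a square, already axis-aligned greyscale patch (alpha = 0), an integer-valued transformation matrix, and nearest-neighbour resizing; `warp_image_rows` and `transform_patch` are hypothetical names.

```python
import numpy as np

def warp_image_rows(patch, transform_matrix):
    """Warp every row: replicate pixels per the row's warp vector,
    then resize the row back to the patch width."""
    h, w = patch.shape[:2]
    out = np.empty_like(patch)
    for i in range(h):
        # Replicate pixel j of row i transform_matrix[i][j] times.
        stretched = np.repeat(patch[i], transform_matrix[i].astype(int))
        # Resize back to the original width (nearest-neighbour sampling).
        idx = (np.arange(w) * len(stretched) / w).astype(int)
        out[i] = stretched[idx]
    return out

def transform_patch(patch, transform_matrix):
    """Warp the rows, rotate 90 degrees so the same routine warps the
    columns, then rotate back."""
    rows_done = warp_image_rows(patch, transform_matrix)
    rotated = np.rot90(rows_done)
    cols_done = warp_image_rows(rotated, transform_matrix)
    return np.rot90(cols_done, -1)

# With an all-ones matrix every pixel is replicated once, so the
# patch passes through unchanged.
patch = np.arange(16).reshape(4, 4)
identity = np.ones((4, 4))
out = transform_patch(patch, identity)
```

An all-ones transformation matrix acts as the identity, which makes it a convenient sanity check before plugging in a real bottleneck warp vector.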
Furthermore, the object detection subsystem (180), the transformation matrix generation subsystem (200), the bottleneck warp vector generation subsystem (210), the warp vector modulation subsystem (220) and the transformation subsystem (230) are substantially similar to an object detection subsystem (110), a transformation matrix generation subsystem (120), a bottleneck warp vector generation subsystem (130), a warp vector modulation subsystem (140) and a transformation subsystem (150) of FIG. 1.
FIG. 3 is a block diagram of another embodiment of the system (250) for video transformation to deter object recognition of FIG. 1 in accordance with an embodiment of the present disclosure. In such embodiment, a car as an object is detected from the streamed video and above process is performed in a same manner to transform all the detected cars from the streamed video.
Furthermore, the object detection subsystem (270), the transformation matrix generation subsystem (290), the bottleneck warp vector generation subsystem (300), the warp vector modulation subsystem (310) and the transformation subsystem (320) are substantially similar to an object detection subsystem (110), a transformation matrix generation subsystem (120), a bottleneck warp vector generation subsystem (130), a warp vector modulation subsystem (140) and a transformation subsystem (150) of FIG. 1.
FIG. 4 is a block diagram of a general computer system (340) in accordance with an embodiment of the present disclosure. The computer system (340) includes processor(s) (350), and memory (360) coupled to the processor(s) (350) via a bus (370).
The processor(s) (350), as used herein, means any type of computational circuit, such as, but not limited to, a microprocessor, a microcontroller, a complex instruction set computing microprocessor, a reduced instruction set computing microprocessor, a very long instruction word microprocessor, an explicitly parallel instruction computing microprocessor, a digital signal processor, or any other type of processing circuit, or a combination thereof.
The memory (360) includes a plurality of subsystems stored in the form of executable program which instructs the processor (350) to perform the configuration of the system illustrated in FIG. 1. The memory (360) has following subsystems: an object detection subsystem (110), a transformation matrix generation subsystem (120), a bottleneck warp vector generation subsystem (130), a warp vector modulation subsystem (140) and a transformation subsystem (150) of FIG. 1.
Computer memory elements may include any suitable memory device(s) for storing data and executable program, such as read only memory, random access memory, erasable programmable read only memory, electrically erasable programmable read only memory, hard drive, removable media drive for handling memory cards and the like. Embodiments of the present subject matter may be implemented in conjunction with program subsystems, including functions, procedures, data structures, and application programs, for performing tasks, or defining abstract data types or low-level hardware contexts. Executable program stored on any of the above-mentioned storage media may be executable by the processor(s) (350).
The object detection subsystem (110) instructs the processor(s) (350) to extract a plurality of pixels from at least one detected object using one or more object detection techniques, wherein at least one object is detected from one or more videos or images.
The transformation matrix generation subsystem (120) instructs the processor(s) (350) to generate a transformation matrix to apply on the plurality of pixels.
The bottleneck warp vector generation subsystem (130) instructs the processor(s) (350) to generate a warp vector for transforming the plurality of pixels associated with the set of rows.
The warp vector modulation subsystem (140) instructs the processor(s) (350) to modulate a generated bottleneck warp vector using a predefined blending function for generating a transformation matrix.
The transformation subsystem (150) instructs the processor(s) (350) to apply generated transformation matrix on each of at least one of the set of rows associated with the plurality of pixels.
The transformation subsystem (150) instructs the processor(s) (350) to resize at least one of the set of rows associated with the plurality of pixels upon replication of corresponding plurality of pixels in the image.
FIG. 5 is a flow diagram representing steps involved in a method (380) for transforming video to deter object recognition in accordance with an embodiment of the present disclosure. The method includes extracting, by an object detection subsystem, a plurality of pixels from at least one detected object using one or more object detection techniques, wherein at least one object is detected from one or more videos or images in step 390. In one embodiment, extracting the plurality of pixels from at least one detected object using the one or more object detection techniques may include extracting the plurality of pixels from at least one detected object using a deep learning based neural network such as a region based convolutional neural network, a fast region based convolutional neural network, and the like.
In one embodiment, the method (380) may include resizing the plurality of pixels to a predefined value. In such embodiment, resizing the plurality of pixels to the predefined value may include resizing the plurality of pixels to a width of each of the plurality of pixels and a height of each of the plurality of pixels.
The method (380) also includes generating, by a transformation matrix generation subsystem, a transformation matrix to apply on the plurality of pixels corresponding to the at least one detected object in step 400. In some embodiment, generating the transformation matrix to apply on the plurality of pixels may include generating the transformation matrix to apply the transformation matrix on the plurality of pixels to transform the plurality of pixels associated with a set of rows.
Further, the method (380) may include generating, by a bottleneck warp vector generation subsystem, a warp vector for transforming the plurality of pixels associated with the set of rows, wherein the warp vector is associated with each of the set of rows in step 410. In one embodiment, generating the warp vector may include generating a warp vector which corresponds to a numerical value.
In some embodiment, the method (380) may include conveying, by warp vector, a pixel density distribution. In some embodiment, generating the warp vector may include generating the warp vector of length same as the width of each of the plurality of pixels. In such embodiment, the method may include representing, by value of the warp vector, a number of times the corresponding pixel in the image may be replicated.
In one embodiment, the method (380) may include causing a warp behaviour through an uneven distribution of the plurality of pixels followed by resizing to the width associated with the plurality of pixels.
In one embodiment, the method (380) may include applying, by the bottleneck warp vector generation subsystem, the predefined step function along one of a horizontal axis and a vertical axis of the image. At a second instance, the method (380) may also include applying, by the bottleneck warp vector generation subsystem, the predefined step function along the axis orthogonal to the already applied axis.
More specifically, in one exemplary embodiment, the method (380) may include applying, by the bottleneck warp vector generation subsystem, the predefined step function along the horizontal axis of each of the set of rows to obtain a compression or decompression effect. In such embodiment, the method (380) may also include applying the predefined step function along the vertical axis to obtain a compression or decompression effect at a second instance. In another embodiment, the method (380) may include applying the predefined step function in the reverse order.
In some embodiment, applying the predefined step function along the horizontal and vertical axes may include applying the predefined step function along the horizontal and vertical axes for N intervals for obtaining a compression or decompression effect on the at least one detected object.
In one embodiment, the method (380) may include defining the step function by using an equation
f(x) = Σ_{i=0}^{N} a_i g_{K_i}(x) for all real numbers x, where N is a number of intervals and a_i is an interval amplitude. Also, g_{K_i}(x) = 1 for x ∈ K_i and g_{K_i}(x) = 0 for x ∉ K_i. Further, the step function may be smoothed to provide more aesthetic results.
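The interval step function above can be sketched directly in Python. The interval boundaries and amplitudes below are illustrative values, not taken from the specification, and `step_function` is a hypothetical helper name.

```python
def step_function(x, intervals, amplitudes):
    """Evaluate f(x) = sum_i a_i * g_Ki(x), where g_Ki(x) is 1 when x
    falls inside the half-open interval K_i and 0 otherwise."""
    total = 0.0
    for (lo, hi), a_i in zip(intervals, amplitudes):
        if lo <= x < hi:        # g_Ki(x) = 1 for x in K_i
            total += a_i        # contributes nothing outside K_i
    return total

# Example: three intervals over [0, 30) with a larger middle amplitude,
# giving the stepped compression/decompression profile described above.
intervals = [(0, 10), (10, 20), (20, 30)]
amplitudes = [1.0, 3.0, 1.0]
values = [step_function(x, intervals, amplitudes) for x in (5, 15, 25)]
```

Because the raw profile is piecewise constant, the specification's subsequent smoothing-filter step is what removes the visible seams between intervals.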
Further, the method (380) may include filtering the generated step function through a smoothing filter to smoothen the transition. In another embodiment, the method (380) may include generating the bottleneck warp vector using a Gaussian distribution. In such embodiment, generating the bottleneck warp vector using the Gaussian distribution may include generating a plurality of Gaussian vectors based on the width of each of the plurality of pixels. Further, generating the bottleneck warp vector may include adding the plurality of generated Gaussian vectors to obtain a resultant vector. Furthermore, generating the bottleneck warp vector may include subtracting the resultant vector from the maximum value of the Gaussian vectors to generate the bottleneck warp vector.
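The Gaussian-based generation just described can be sketched as follows: sum a few Gaussian vectors over the row width, then subtract the resultant from its maximum value. The centres and sigma below are illustrative assumptions, not values from the specification, and `bottleneck_warp_vector` is a hypothetical name.

```python
import numpy as np

def bottleneck_warp_vector(width, centres=(0.3, 0.7), sigma=0.1):
    """Sketch of the Gaussian-distribution variant described above."""
    x = np.linspace(0.0, 1.0, width)
    resultant = np.zeros(width)
    for c in centres:                                  # one Gaussian vector per centre
        resultant += np.exp(-((x - c) ** 2) / (2 * sigma ** 2))
    # Subtract the resultant from its maximum value, so replication
    # dips where the Gaussian bumps peak; shift by 1 so every pixel
    # is replicated at least once (an assumption for a usable vector).
    return np.rint(resultant.max() - resultant + 1).astype(int)

vec = bottleneck_warp_vector(100)
```

The vector length equals the row width, matching the requirement that the warp vector have the same length as the width of the plurality of pixels.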
The method (380) may also include modulating, by a warp vector modulation subsystem, a generated bottleneck warp vector to maximize the warping for a predefined row from the set of rows using a predefined blending function for generating a transformation matrix in step 420. In some embodiment, the method (380) may include tapering, by the warp vector modulation subsystem, the warping towards the beginning and ending rows corresponds to the plurality of pixels for enabling the plurality of pixels to blend back smoothly to the original image.
In one embodiment, warping with a maximum value may be considered as a bottleneck row. In another embodiment, warping with a maximum value may be considered as a uniform distribution.
Further, the method (380) may include generating, by warp vector modulation subsystem, the transformation matrix by initializing the transformation matrix to zero based on a height of the plurality of pixels and a width of the plurality of pixels.
In one embodiment, the method (380) may include assigning, by the assigning subsystem, the bottleneck warp vector to a predefined set of middle rows associated with the transformation matrix. In one embodiment, assigning the bottleneck warp vector to the predefined set of middle rows may include assigning the bottleneck warp vector to rows from BN_T to BN_B. In such embodiment, assigning the bottleneck warp vector to rows from BN_T to BN_B may include assigning the bottleneck warp vector to rows from start of the bottleneck top row to end of the bottleneck bottom row.
Further, the method (380) may include assigning, by the assigning subsystem, a uniformly distributed vector to a predefined set of lower rows and a predefined set of upper rows associated with the transformation matrix.
In one embodiment, assigning the uniformly distributed vector to the predefined set of lower rows may include assigning the uniformly distributed vector to the rows from BL_B to CH. In such embodiment, assigning the uniformly distributed vector to the row BL_B may include assigning the uniformly distributed vector to the blending bottom row where warp vector transitions may be performed to obtain the uniform distribution of the plurality of pixels. In one embodiment, assigning the uniformly distributed vector to the row CH may include assigning the uniformly distributed vector to the row CH which represents the height associated with the plurality of pixels.
In some embodiment, assigning the uniformly distributed vector to the predefined set of upper rows may include assigning the uniformly distributed vector to the rows from 0 to BL_T. In such embodiment, assigning the uniformly distributed vector to the rows from 0 to BL_T may include assigning the uniformly distributed vector to the rows from 0 to BL_T, where BL_T denotes the blending top row.
Further, the method (380) may include assigning, by assigning subsystem, a modulated bottleneck warp vector to a predefined set of middle to upper and lower rows associated with the transformation matrix.
Moreover, the method (380) may include assembling, by the warp vector modulation subsystem, the warp vectors associated with the predefined set of rows associated with the transformation matrix.
In one embodiment, the method (380) may include generating the transformation matrix by using the defined equations. The method (380) may include initializing the warp vectors for rows below BL_B and above BL_T with a uniform distribution as defined in the equation
transform_matrix[i] = CW × U(j, 0, CW), i ∈ [0, BL_T − 1] ∪ [BL_B, CH − 1], j ∈ [0, CW − 1]
where i is the row number of the image, j is the column number of the image, U(x, a, b) = 1/(b − a) for a ≤ x < b, CW is the width of the plurality of pixels and CH is the height of the plurality of pixels.
In continuation with the above-mentioned equation, the method (380) may include generating the rows from BN_T to BN_B by using an equation:
transform_matrix[i] = bottleneck_warp_vector, i ∈ [BN_T, BN_B]
The deltas used for blending are defined as:
delta_BT = (bottleneck_warp_vector − min(bottleneck_warp_vector)) / (BN_T − BL_T)
delta_BB = (bottleneck_warp_vector − min(bottleneck_warp_vector)) / (BL_B − BN_B)
For a vector “X” of length “VLEN”, clip is defined as:
clip(X): if X[j] > N, then X[j] = N, else X[j] = X[j], j ∈ [0, VLEN − 1]
where clip(X) is a function that clips each value of “X” to a maximum defined value “N” if it is greater than “N”, else it returns the value unchanged.
Furthermore, the method (380) may include calculating warp vectors for rows BL_T to BN_T by the equation:
transform_matrix[i − 1] = clip(transform_matrix[i] + delta_BT), i ∈ [BL_T + 1, BN_T], where delta_BT is the increment added to all values of the vector transform_matrix[i], which in turn achieves modulation of the bottleneck warp vector at location BN_T to a uniform distribution at location BL_T.
Afterwards, the method (380) may include calculating the warp vectors for rows below BN_B till BL_B by the equation:
transform_matrix[i + 1] = clip(transform_matrix[i] + delta_BB), i ∈ [BN_B, BL_B − 1], which in turn achieves modulation of the bottleneck warp vector at location BN_B to a uniform distribution at location BL_B.
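The assembly steps described above can be transcribed into NumPy as a sketch. The row indices (BL_T, BN_T, BN_B, BL_B), the clip ceiling N and the bottleneck vector used here are illustrative assumptions, and the blending update follows the equations as written in this section.

```python
import numpy as np

def clip_vec(x, n):
    """clip(X): cap every element at the maximum defined value N."""
    return np.minimum(x, n)

def build_transform_matrix(bottleneck, ch, cw, bl_t, bn_t, bn_b, bl_b, n_max):
    # Initialise to zero based on the patch height CH and width CW.
    m = np.zeros((ch, cw))
    # Uniform rows outside the blend region: CW * U(j, 0, CW) = 1.
    m[:bl_t] = 1.0
    m[bl_b:] = 1.0
    # Middle rows carry the bottleneck warp vector unchanged.
    m[bn_t:bn_b + 1] = bottleneck
    # Per-row increments used while blending toward the boundary rows.
    delta_bt = (bottleneck - bottleneck.min()) / (bn_t - bl_t)
    delta_bb = (bottleneck - bottleneck.min()) / (bl_b - bn_b)
    # Blend upward from BN_T to BL_T and downward from BN_B toward BL_B.
    for i in range(bn_t, bl_t, -1):
        m[i - 1] = clip_vec(m[i] + delta_bt, n_max)
    for i in range(bn_b, bl_b - 1):
        m[i + 1] = clip_vec(m[i] + delta_bb, n_max)
    return m

bottleneck = np.full(20, 3.0)   # illustrative constant warp vector
matrix = build_transform_matrix(bottleneck, 40, 20, 5, 15, 25, 35, 8)
```

With a constant bottleneck vector the deltas are zero, so the blend rows simply carry the bottleneck value outward, which is a convenient way to check the row bookkeeping before using a shaped vector.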
Furthermore, the method (380) may include applying, by the transformation subsystem, the generated transformation matrix on the plurality of pixels associated with each of the set of rows to deter object recognition in step 430.
In some embodiment, the method (380) may include applying, by the transformation matrix generation subsystem, a tilt detection technique on the plurality of pixels corresponding to the at least one detected object to detect the tilted angle alpha of the detected object prior to transforming the plurality of pixels corresponding to the at least one detected object. In one embodiment, the method (380) may include rotating the plurality of pixels by angle -alpha to align the plurality of pixels in an XY plane upon detection of the tilted angle alpha of the at least one detected object.
In one embodiment, the method (380) may include transforming, by transformation subsystem, the plurality of pixels associated with each of the set of rows by replicating each of the plurality of pixels corresponding to each of the set of rows based on the value associated with a warp vector of the transformation matrix. In some embodiment, the method (380) may include increasing size of each of the set of rows upon replicating each of the plurality of pixels. Further, the method (380) includes resizing, by the transformation subsystem, the at least one of the set of rows to the width associated with the plurality of pixels upon completion of replication process in step 440.
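The replicate-then-resize step for a single row can be sketched as follows. This is a minimal illustration: nearest-neighbour resampling is assumed since the specification does not name a resampling method, and `warp_row` is a hypothetical helper name.

```python
import numpy as np

def warp_row(row, warp_vector):
    """Replicate each pixel per its warp-vector entry (which increases
    the row size), then resize back to the original width."""
    width = len(row)
    stretched = np.repeat(row, warp_vector)   # pixel j repeated warp_vector[j] times
    # Resize back to the original width by nearest-neighbour sampling.
    idx = (np.arange(width) * len(stretched) / width).astype(int)
    return stretched[idx]

row = np.arange(8)
warp = np.array([1, 1, 3, 3, 3, 3, 1, 1])    # replicate the middle pixels more
warped = warp_row(row, warp)
```

The middle pixels survive the resize at a higher density while the edge pixels are squeezed, which is exactly the uneven-distribution warp behaviour the method relies on.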
Further, the method (380) may include obtaining, by the transformation subsystem, a transformed set of rows upon resizing the replicated set of rows to the predefined size. In one specific embodiment, the method (380) may include performing the above-described functions on at least one of the set of columns subsequent to performing the above-described functions on at least one of the set of rows.
The method (380) may include rotating, by the transformation subsystem, the plurality of pixels of the image associated with the transformation matrix by 90 degrees to transform the plurality of pixels associated with each of a set of columns by replicating each of the pixels corresponding to each of the set of columns based on the value associated with the warp vector. In some embodiment, the method (380) may include increasing the size of each of the set of columns upon replicating each of the plurality of pixels. The method (380) may also include obtaining a transformed set of columns upon resizing the replicated set of columns to the predefined size. Furthermore, the method (380) may include rotating the plurality of pixels associated with the matrix by –(90-alpha) degrees to obtain an original image.
Various embodiments of the system ensure that the original image and detected object intent remain preserved after video or image transformation. Also, the proposed system ensures that the aesthetics of the videos will remain preserved. Moreover, such a system is able to control the warping extent for various kinds of objects using a control vector, by which the resulting video will appear coherent.
While specific language has been used to describe the disclosure, any limitations arising on account of the same are not intended. As would be apparent to a person skilled in the art, various working modifications may be made to the method in order to implement the inventive concept as taught herein.
The figures and the foregoing description give examples of embodiments. Those skilled in the art will appreciate that one or more of the described elements may well be combined into a single functional element. Alternatively, certain elements may be split into multiple functional elements. Elements from one embodiment may be added to another embodiment. For example, the order of processes described herein may be changed and is not limited to the manner described herein. Moreover, the actions of any flow diagram need not be implemented in the order shown; nor do all of the acts need to be necessarily performed. Also, those acts that are not dependent on other acts may be performed in parallel with the other acts. The scope of embodiments is by no means limited by these specific examples.
| # | Name | Date |
|---|---|---|
| 1 | 201941031222-STATEMENT OF UNDERTAKING (FORM 3) [01-08-2019(online)].pdf | 2019-08-01 |
| 2 | 201941031222-PROOF OF RIGHT [01-08-2019(online)].pdf | 2019-08-01 |
| 3 | 201941031222-POWER OF AUTHORITY [01-08-2019(online)].pdf | 2019-08-01 |
| 4 | 201941031222-FORM FOR STARTUP [01-08-2019(online)].pdf | 2019-08-01 |
| 5 | 201941031222-FORM FOR SMALL ENTITY(FORM-28) [01-08-2019(online)].pdf | 2019-08-01 |
| 6 | 201941031222-FORM 1 [01-08-2019(online)].pdf | 2019-08-01 |
| 7 | 201941031222-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [01-08-2019(online)].pdf | 2019-08-01 |
| 8 | 201941031222-EVIDENCE FOR REGISTRATION UNDER SSI [01-08-2019(online)].pdf | 2019-08-01 |
| 9 | 201941031222-DRAWINGS [01-08-2019(online)].pdf | 2019-08-01 |
| 10 | 201941031222-DECLARATION OF INVENTORSHIP (FORM 5) [01-08-2019(online)].pdf | 2019-08-01 |
| 11 | 201941031222-COMPLETE SPECIFICATION [01-08-2019(online)].pdf | 2019-08-01 |
| 12 | Correspondence by Agent_Form-1, 3, 5, 28, DIPP Certificate, GPOA_13-08-2019.pdf | 2019-08-13 |
| 13 | 201941031222-FORM-9 [26-08-2019(online)].pdf | 2019-08-26 |