Abstract: The various embodiments herein disclose a method and a system for rendering a spin view of an object in a three-dimensional mode. The method comprises receiving, by a data manager, a plurality of input data from a sensor unit, determining, by a direction estimation module, a current user movement direction for capturing the spin view image of the object with respect to a central axis of the object, comparing the current user movement direction with an allowed initial user movement direction for image capturing, determining if one or more key frames corresponding to the spin view images are captured in the initial user movement direction of image capturing, generating one or more image poses corresponding to the captured one or more key frames, performing image correction and pose centering on the generated image poses, and rendering the image of the object in spin view.
FORM 2
THE PATENTS ACT, 1970
[39 of 1970]
&
THE PATENTS RULES, 2003
COMPLETE SPECIFICATION
(Section 10; Rule 13)
METHOD AND SYSTEM FOR GENERATING A SPIN VIEW OF AN OBJECT USING A MOBILE DEVICE
SAMSUNG R&D INSTITUTE INDIA – BANGALORE PRIVATE LIMITED,
#2870, Orion Building, Bagmane Constellation Business Park,
Outer Ring Road, Doddanekundi Circle,
Marathahalli Post, Bangalore – 560037,
Karnataka, India,
an Indian Company
The following Specification particularly describes the invention and the manner in which it is to be performed
RELATED APPLICATION
The present invention claims benefit of the Indian Provisional Application No. 1431/CHE/2015 titled "A SYSTEM AND METHOD FOR PRODUCING SPIN VIEW RENDERING OF A GIVEN OBJECT OF INTEREST USING HAND-HELD MOBILE DEVICE" by Samsung R&D Institute India – Bangalore Private Limited, filed on 20th March 2015, which is herein incorporated in its entirety by reference for all purposes.
FIELD OF THE INVENTION
The present invention generally relates to image processing systems and methods. More particularly, the present invention relates to a system and method for generating a spin view, which is a three-dimensional view, of an object using a mobile device.
BACKGROUND OF THE INVENTION
Nowadays, most hand-held mobile devices are equipped with cameras, both a front camera and a rear camera with high-quality lenses. Mobile devices with cameras capture digital representations of images and can be configured to capture multiple images over a designated time. The process of image capturing becomes more complex or difficult when the image or video has to be captured in a spin or rotated motion, as it is difficult to keep the image clarity constant. While it can be appreciated that software running on computing devices can perform 3D scans of objects, such methods are time consuming. Conventional videos shot by going around the object also suffer from limitations and do not give the illusion of a spin view (a view around a single axis passing through the object).
Current systems and methods used for generating a spin view of an object do not provide images or videos of adequate clarity and quality to the user of the mobile device, and fail in many real-life scenarios involving human-induced inconsistencies in video capture, since the angular velocity while obtaining the spin view image is not constant.
In view of the foregoing, there is a need for a system and method which can produce a spin view of a given object according to the user specifications and overcome the aforementioned drawbacks.
The above-mentioned shortcomings, disadvantages and problems are addressed herein, as will be understood by reading and studying the following specification.
SUMMARY OF THE INVENTION
The various embodiments herein disclose a method of rendering a spin view of an object in a three-dimensional (3D) mode. The method comprises receiving, by a data manager, a plurality of input data from a sensor unit, determining, by a direction estimation module, a current user movement direction for capturing the spin view image of the object with respect to a central axis of the object, comparing the current user movement direction with an allowed initial user movement direction for image capturing, determining if one or more key frames corresponding to the spin view images are captured in the initial user movement direction of image capturing, generating one or more image poses corresponding to the captured one or more key frames, performing image correction and pose centering on the generated image poses, and rendering the image of the object in spin view.
According to an embodiment of the present invention, generating the one or more image poses comprises steps of modeling movement of an image capturing device as motion in three dimensions, and generating image poses of the object in 3D using at least one of visual sensor data and inertial sensor data.
According to an embodiment of the present invention, performing the image correction comprises steps of computing a trajectory and centering estimation of the captured spin view images of the one or more key frames captured, and performing a path correction estimation for rendering the 3D spin view of the object.
According to an embodiment of the present invention, the method of rendering a three-dimensional (3D) spin view of an object further comprises steps of determining, by the data manager, a time synchronization between the plurality of input data, and re-aligning the input data in time order with respect to each other on a global time scale through a Time Stamp Remapping.
According to an embodiment of the present invention, the input data comprises at least one of a video frame, a camera preview frame, a gyroscopic sensor value, an accelerometer value, and a magnetometer value, along with associated time information. According to another embodiment of the present invention, the input data is provided in the form of an encoded data format or a pre-captured visual data format.
According to an embodiment of the present invention, determining the current user movement direction comprises steps of performing estimation of drift in motion sensors, where the estimation of drift is performed based on the data obtained by one of calibration of drift in the motion sensors and fusion of data from various sensors used for detection of a stationary state of the motion sensors, estimating a drift correction based on information of a current stationary state and a previous stationary state of the motion sensors, employing the estimated drift correction for correction of drift in readings from other motion sensors, providing the drift-corrected motion sensor data as an input to the direction estimation module, and estimating the current user movement direction using a set of readings and an amount of motion sensed by the motion sensors.
According to an embodiment of the present invention, determining the current user movement direction further comprises of determining the current user movement direction using one or more predefined thresholds if the user movement direction has not been estimated for capturing the spin view images of the object.
According to an embodiment of the present invention, selecting the one or more key frames is based on an amount of change in an Euler angle and a number of frames captured.
According to an embodiment of the present invention, performing a trajectory and centering estimation of the captured spin view images comprises steps of determining an inter-spatial relationship between the plurality of input data, determining the temporal and spatial modifications required for the plurality of images to appear jitter free when viewed together in order, and generating poses corresponding to the shooting device location at the point of frame capture.
The foregoing has outlined, in general, the various aspects of the invention and is to serve as an aid to better understanding the more complete detailed description which is to follow. In reference to such, there is to be a clear understanding that the present invention is not limited to the method or application of use described and illustrated herein. It is intended that any other advantages and objects of the present invention that become apparent or obvious from the detailed description or illustrations contained herein are within the scope of the present invention.
BRIEF DESCRIPTION OF THE DRAWINGS
The other objects, features and advantages will occur to those skilled in the art from the following description of the preferred embodiment and the accompanying drawings in which:
Figure 1 is a flow chart illustrating a method for generating spin view of an object using a mobile device, according to an embodiment of the present invention.
Figure 2 is a block diagram illustrating the functional components of a spin view generation system, according to an embodiment of the present invention.
Figure 3a is a schematic block diagram depicting components of a Guidance and Selection Unit depicted in Figure 2, according to an embodiment of the present invention.
Figure 3b is a schematic flow diagram depicting a process of drift correction performed by the drift correction module depicted in Figure 3a, according to an embodiment of the present invention.
Figure 3c is a schematic flow chart depicting a process of estimating user direction by the direction estimation module depicted in Figure 3a, according to an embodiment of the present invention.
Figure 3d is a schematic flow diagram depicting a process of frame selection using the frame selection unit depicted in Figure 3a, according to an embodiment of the present invention.
Figure 4 is a schematic flow diagram illustrating the functions of a processing unit depicted in Figure 2, according to an embodiment of the present invention.
Figure 5a is a schematic flow diagram illustrating the process of object centering depicted in Figure 4, according to an embodiment of the present invention.
Figure 5b is a schematic flow diagram illustrating the processes of 2D (in-plane) pose generation and correction depicted in Figure 4, according to an embodiment of the present invention.
Figure 5c is a schematic flow diagram illustrating a process of 3D pose generation depicted in Figure 4, according to an embodiment of the present invention.
Figure 5d is a schematic flow diagram illustrating a process of zoom correction depicted in Figure 4, according to an embodiment of the present invention.
Figures 6a and 6b are schematic block diagrams illustrating the functional components of the output unit depicted in Figure 2, according to an embodiment of the present invention.
Figure 7 is a schematic block diagram illustrating the functional components of an output unit depicted in Figure 2, according to another embodiment of the present invention.
Figure 8 is a schematic diagram illustrating orientation of the camera position and the object for capturing the image, according to an embodiment of the present invention.
Figures 9a-9d illustrate schematic representations of the image capturing device in a portrait orientation, a reverse portrait orientation, a landscape orientation and a reverse landscape orientation respectively, according to an embodiment of the present invention.
Figure 10 is a schematic diagram illustrating GOP based buffering mechanism, according to an embodiment of the present invention.
Although specific features of the present invention are shown in some drawings and not in others, this is done for convenience only, as each feature may be combined with any or all of the other features in accordance with the present invention.
DETAILED DESCRIPTION OF THE INVENTION
The present invention provides a system and method for rendering a three-dimensional (3D) spin view of an object in a hand-held mobile device. In the following detailed description of the embodiments of the invention, reference is made to the accompanying drawings that form a part hereof, and in which are shown by way of illustration specific embodiments in which the invention may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the invention, and it is to be understood that other embodiments may be utilized and that changes may be made without departing from the scope of the present invention. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of the present invention is defined only by the appended claims.
The specification may refer to “an”, “one” or “some” embodiment(s) in several locations. This does not necessarily imply that each such reference is to the same embodiment(s), or that the feature only applies to a single embodiment. Single features of different embodiments may also be combined to provide other embodiments.
As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless expressly stated otherwise. It will be further understood that the terms “includes”, “comprises”, “including” and/or “comprising” when used in this specification, specify the presence of stated features, integers, steps, operations, elements and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term “and/or” includes any and all combinations and arrangements of one or more of the associated listed items.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure pertains. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
The present invention provides a system and method for generating a spin view, which is a 3D view of the object of interest around an imaginary central axis passing through the object of interest, using a hand-held device such as a mobile device. The embodiments herein fuse the visual sensor data with inertial sensor data to generate a capture video through a semi-circular or circular scan of the target object with an approximately constant angular velocity.
The present invention is described with respect to a handheld device, but the present method can be used in other image capturing systems such as, but not limited to, mobile phones, laptops, cameras, PDAs, tablets, and the like, without departing from the scope of the invention. The present invention is best described along with supporting use cases and embodiments to illustrate the working of the present methods, and the person ordinarily skilled in the art can understand that the description of the present methods does not limit the scope of the present invention.
According to the present invention, a method of rendering a three-dimensional (3D) spin view of an object, comprises steps of receiving, by a data manager, a plurality of input data from a sensor unit. A handheld device comprises of a data manager that receives one or more input data from the one or more sensor units. In an embodiment of the present invention, the one or more sensor units can be visual sensor units, non-visual sensor units or both. In another embodiment of the present invention, the input data captured by the one or more sensor units can comprise at least one of, but not limited to, a video frame, camera preview frame, gyroscopic sensor value, accelerometer value, magnetometer value along with associated time information, and the like.
In another embodiment of the present invention, the visual sensor units capture input data such as one or more image frames, video frames, camera preview frames and the like, while the non-visual sensor units can capture input data such as, but not limited to, gyroscopic sensor values, accelerometer values and magnetometer values along with associated time information, and the like. In another embodiment of the present invention, the input data is provided in the form of an encoded data format or a pre-captured visual data format. The person having ordinary skill in the art can understand that visual sensors and non-visual sensors are part of the data manager and can capture the plurality of input data, without departing from the scope of the invention.
Further, the method comprises of determining, by a direction estimation module, a current user movement direction for capturing the spin view image of the object with respect to a central axis of the object. The handheld device comprises of the direction estimation module, wherein the direction estimation module identifies one or more objects in the preview image obtained from the sensor unit and identifies an imaginary central axis around the object.
Based on the identified imaginary central axis of the object, the direction estimation module estimates direction in which user can move around the object to capture spin view image. In an embodiment of the present invention, the spin view image can be of 180 degrees spin around the central axis of the object. In another embodiment of the present invention, the spin view image can be of 360 degrees spin around the central axis of the object. The person ordinarily skilled in the art can understand that the spin view can be any degrees that can cover the objects around its central axis in a 3D mode, without departing from the scope of the invention.
In an embodiment of the present invention, determining the current user movement direction comprises of performing estimation of drift in motion sensors, wherein the estimation of drift is performed based on the data obtained by one of calibration of drift in the motion sensors and fusion of data from various sensors used for detection of stationary state of motion sensors. Based on identified estimated drift, a drift correction can be estimated, wherein the estimation of the drift correction is based on information of a current stationary state and a previous stationary state of the motion sensors. Further, the method comprises of employing the estimated drift correction for correction of drift in readings from other motion sensors. Further, the method comprises providing the drift corrected motion sensor data as an input to the direction estimation module. Based on the received set of readings of the drift corrected motion sensor data and an amount of motion sensed by the motion sensors, the direction estimation module estimates the current user movement direction.
In an embodiment of the present invention, one or more predefined thresholds are identified for determination of the current user movement direction, wherein these one or more thresholds can be used for capturing the spin view images of the object if the user movement direction has not been estimated.
Further, the method comprises of comparing the current user movement direction with an allowed initial user movement direction for image capturing. The direction estimation module estimates the current user movement direction, as described herein above, which is suitable for capturing the spin view image. Further, the direction estimation module compares the actual movement of the user with the allowed initial user movement direction for capturing the spin view image. In an embodiment of the present invention, the handheld device can display a direction of movement on a device display to allow the user to move in the particular direction and angle so that the user can capture images for providing a better spin view. In another embodiment of the present invention, the handheld device can also provide audio assistance along with direction information to the user so that the user can move forward and backward, tilt the camera to the required angle and increase or decrease the movement speed to capture the spin view images, without departing from the scope of the invention.
Further, the method comprises of determining if one or more key frames corresponding to the spin view images are captured in the initial user movement direction of image capturing. Based on the movement of the user, the handheld device captures one or more spin view images along the initial user movement direction. The direction estimation module identifies the user movement along the direction and identifies that the one or more key frames corresponding to the spin view images are captured in the initial user movement direction of image capturing. In an embodiment of the present invention, the key frames can be selected based on factors such as, but not limited to, a predetermined amount of rotation around the spin axis, the number of frames captured, and the like, without departing from the scope of the invention.
Further, the method comprises generating one or more image poses corresponding to the captured one or more key frames. From the obtained key frames, the handheld device can generate image poses in three dimensions (3D) for the obtained one or more key frames. In an embodiment of the present invention, generating the one or more image poses comprises steps of modeling the movement of an image capturing device as motion in three dimensions, and generating image poses of the object in 3D using at least one of the visual sensor data and the inertial sensor data.
The method further comprises of performing image correction and pose centering on the generated image poses. Based on the obtained image poses, the handheld device computes a trajectory along which the spin view can be obtained and a centering estimation of the captured spin view images of the one or more key frames captured. In an embodiment of the present invention, the trajectory and centering estimation of the captured spin view images comprises of determining an inter-spatial relationship between the plurality of input data. Further, the temporal and spatial modifications required for the plurality of images to appear jitter free when viewed together in order can also be determined. Based on the temporal and spatial modifications, poses can be generated corresponding to the shooting device location at the point of frame capture. Based on the obtained trajectory and estimated centering of the spin view images of the one or more captured key frames, the handheld device performs path correction estimation for rendering the 3D spin view of the object.
Further, the method comprises of rendering the image of the object in spin view. Based on the trajectory and estimated path correction of the key frames, the handheld device renders the image of the object in the spin view. In an embodiment of the present invention, the rendered image of the object in the spin view can be a preview image. In another embodiment of the present invention, the rendered image can be a processed image displaying a 3D spin view of the object, without departing from the scope of the invention.
In an embodiment of the present invention, the method of rendering a three-dimensional (3D) spin view of an object further comprises of determining time synchronization between pluralities of the input data through synchronization and re-aligning the input data for in-time-order with respect to each other on a global time scale through a Time Stamp Remapping.
Figure 1 is a flow chart 100 depicting a method for generating a 3D spin view of an object using a mobile device, according to an embodiment of the present invention. According to the flow chart 100, at step 102 the process of image capturing starts. At step 104, the input data corresponding to one or more captured images is received from one or more sensors. In an embodiment of the present invention, the one or more sensors comprise a visual sensor unit, a non-visual sensor unit, or both. The types of sensor units involved and the type of input data captured by the sensor units are described herein above and thus not described herein again to avoid repetition.
At step 106, based on the received input data from the sensor units, a user movement direction to be prompted to the user is estimated. Further, at step 108, it is checked whether the estimated user movement direction is an allowed direction for capturing the plurality of images. In case the direction has not been estimated, then the collected data is used for determination of the direction using thresholds. These thresholds for direction estimation are estimated offline. In case the direction has been estimated at step 106, the set of N readings is used for estimation of the current direction and the amount of motion sensed by the motion sensors. The current estimated direction is used for comparison with the initial estimated direction. If the estimated user movement direction is not in an allowed direction, then the process moves to step 110, wherein the handheld device determines one or more key frames from the captured images in the allowed direction. The process of determining one or more key frames is described above and hence not described herein again to avoid repetition.
At step 112, the handheld device checks whether the captured image is a key frame or not. If no, then the process again moves to step 104 and starts capturing images and sensor data associated with the captured images. If yes, then the process moves to step 114, wherein the trajectory and object centering can be estimated. Further, at step 116, based on the estimated trajectory and object centering information, path correction information can be estimated, which allows the system to select the one or more images which can be used for rendering the spin view image of the object. At step 118, the selected images and estimated path correction information can be stored in a database for rendering of the spin view image of the object. In an embodiment of the present invention, the storage can be any storage unit, including, but not limited to, inbuilt device memory such as RAM, ROM, a hard disk and the like, and external memory units such as a pen drive, an external hard disk, a CD, a DVD, a database, a cloud, a server, and the like, and the person having ordinary skill in the art can understand that the captured images along with information pertaining to the captured images can be stored in any of the known storage units, without departing from the scope of the invention.
Figure 2 is a schematic block diagram illustrating a 3D spin view generation system 200, according to an embodiment of the present invention. According to Figure 2, the spin view generation system 200 comprises of a guided capture unit 202, a processing unit 250 and an output unit 260. The guided capture unit 202 further comprises of a data manager unit 210, a visual sensor unit 220, a non-visual sensor unit 230, and a guidance and selection unit 240. The guided capture unit 202 is basically responsible for the capture of images and information related to the one or more images.
The data manager unit 210 receives the plurality of input data from the visual sensor unit 220 and the non-visual sensor unit 230. In an embodiment, the received data or input data consists of basic data such as, but not limited to, a video frame, a camera preview frame, a gyroscopic sensor value, and the like. Further, the data received at the data manager unit 210 is associated with time information. In another embodiment, the received data can consist of only the basic data. The received data or input data is described above in detail and not described herein again to avoid repetition. The data manager unit 210 further determines the time synchronization between the plurality of data through Synchronization 214 and re-aligns the data in time order with respect to each other on a global time scale through Time Stamp Remapping 212.
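By way of a non-limiting illustration, the following Python sketch shows one possible form of the Time Stamp Remapping and Synchronization performed by the data manager unit 210; the function names and the nanosecond time base are assumptions made only for the example.

```python
import bisect

def remap_to_global_time(samples, source_epoch_ns, global_epoch_ns):
    """Time Stamp Remapping: shift per-sensor timestamps onto a shared global time scale."""
    offset = global_epoch_ns - source_epoch_ns
    return [(t + offset, value) for t, value in samples]

def synchronize(frame_timestamps_ns, sensor_samples):
    """For every camera frame, collect the (remapped) sensor samples whose timestamps fall
    between the previous frame and the current frame, so that the two data streams are
    re-aligned in time order with respect to each other."""
    sensor_samples = sorted(sensor_samples)            # (timestamp_ns, value) pairs
    times = [t for t, _ in sensor_samples]
    synced, prev = [], float("-inf")
    for ft in sorted(frame_timestamps_ns):
        lo = bisect.bisect_right(times, prev)
        hi = bisect.bisect_right(times, ft)
        synced.append((ft, [v for _, v in sensor_samples[lo:hi]]))
        prev = ft
    return synced
```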
In an embodiment of the present invention, the visual sensor unit 220 can comprise a camera capturing unit which provides the visual data in a bare raw format or in any encoded format. In another embodiment of the present invention, the visual sensor unit 220 can comprise a system that provides pre-captured visual data, without departing from the scope of the invention.
The non-visual sensor unit 230 comprises of units or modules that provide data that are not visual in nature. In an embodiment, the non-visual data can comprise the information provided by units such as, but not limited to, gyroscope, accelerometer, magnetometer, and the like that are non-visual sensors present on the mobile device. The spin view generation system 200 can use one or more such non-visual data and may combine them in any order, without departing from the scope of the invention.
Further, the guidance and selection unit 240 receives the processed data from data manager unit 210, and determines the suitable input data among the received plurality of input data for further processing. Further, the processing unit 250 receives the determined suitable input data, and determines the inter-spatial relationship between the pluralities of visual data. The processing unit 250 further determines the temporal and spatial modifications (as Transformation) required for plurality of visual data in order to represent them as “jitter free” when seen together in order.
The output unit 260 receives the plurality of visual data and corresponding non-visual data information from the guidance and selection unit 240 and, for each visual data, the associated transformation information from the processing unit 250. In an embodiment of the present invention, the output unit 260 can produce the output data by transforming the visual data. In another embodiment of the present invention, the output unit 260 can represent the received information in any standard or proprietary format, in which case, while viewing, the instantaneous transformation is applied and shown.
Figure 3a is a schematic block diagram depicting components of the guidance and selection unit 240 depicted in Figure 2, according to an embodiment of the present invention. According to the block diagram, the guidance and selection unit 240 comprises of a drift correction module 302, a direction estimation module 304 and a frame selection unit 306. The drift correction module 302 receives the raw input data from one or more motion sensors or non-visual sensor data units 230. The raw input data from the motion sensors, such as gyroscope, is primarily affected by drift. Hence in order for reliable use of this data, the drift needs to be estimated and compensated accordingly. In an embodiment of the present invention, calibration of motion sensor data can be used for estimation of the drift.
In another embodiment of the same, fusion of data from various sensors can be used for detection of stationary state of motion sensors. Thus the data obtained can be used for estimation of drift in the motion sensors. For every image frame captured, one of the motion sensors provides an indication for stationary state.
Further, the direction estimation module 304 receives drift corrected motion sensor data as input from the drift correction module 302. Here, corresponding to every image frame captured, a collection of N motion sensor data samples is recorded. In case the direction has not been estimated, then the collected data is used for determination of the direction using thresholds. These thresholds for direction estimation can be estimated offline, without departing from the scope of the invention. In case direction has been estimated, then the set of N readings are used for estimation of current direction and amount of motion sensed by the motion sensors. The current estimated direction is used for comparison with the initial estimated direction. The result of this comparison along with the amount of motion sensed can be given as a feedback to the user via a display.
Further, the frame selection unit 306 receives the Euler angles corresponding to each frame and the number of frames captured as an input. The Euler angle represents the amount of rotation around the spin axis. Based on the Euler angle reaching a pre-determined amount of rotation around the spin axis, the frame selection unit 306 selects one or more image frames from the captured image frames for further processing.
Figure 3b is a schematic flow diagram 300 depicting a process of drift correction performed by the drift correction module depicted in Figure 3a, according to an embodiment of the present invention. In another embodiment, fusion of data from various sensors can be used for detection of a stationary state of the motion sensors, and the data thus obtained can be used for estimation of drift in the motion sensors. For every frame, one of the motion sensors provides an indication of the stationary state. According to the flow diagram 300, raw data is received by the drift correction module, and at step 312 it is checked whether sensor A is in a stationary state or not. If sensor A is in a stationary state, then at step 314, the drift correction module 302 further checks whether sensor B is in a stationary state or not. If sensor B is not in a stationary state, then at step 316 the drift correction module 302 updates the correction parameter. At step 318, the updated correction parameter is applied to the readings of sensor B. If sensor B is in a stationary state, then at step 318, the correction is applied to sensor B. Further, if sensor A is not in a stationary state, then the process moves to step 318, wherein the correction is applied to sensor B to obtain the drift-corrected motion sensor data of the captured images.
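Purely by way of example, the branching of Figure 3b may be sketched in Python as follows; the use of a running mean of the at-rest gyroscope readings as the correction parameter (bias) is an assumption of the sketch and not a limitation of the invention.

```python
import numpy as np

class DriftCorrector:
    """Minimal sketch: while sensor A (e.g. the accelerometer) reports a stationary state
    and sensor B (the gyroscope) does not, the mean gyroscope reading is taken as the
    current drift (bias); the bias is subtracted from every gyroscope sample."""

    def __init__(self):
        self.bias = np.zeros(3)
        self._rest_buffer = []

    def update(self, gyro_sample, sensor_a_stationary, sensor_b_stationary):
        gyro_sample = np.asarray(gyro_sample, dtype=float)
        if sensor_a_stationary and not sensor_b_stationary:
            # Step 316: update the correction parameter from samples collected at rest.
            self._rest_buffer.append(gyro_sample)
            self.bias = np.mean(self._rest_buffer, axis=0)
        else:
            self._rest_buffer.clear()
        # Step 318: apply the current correction to the sensor-B reading.
        return gyro_sample - self.bias
```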
Figure 3c is a schematic flow chart 320 depicting a process of estimating the user movement direction by the direction estimation module depicted in Figure 3a, according to an embodiment of the present invention. According to the flow chart 320, at step 322, the direction estimation module checks whether the session direction is pre-estimated or not. If yes, then at step 324, the current frame direction is estimated. If no, then at step 326, the session direction is estimated for the current frame. Once the direction estimation for the current frame is performed at step 324, at step 328, it is checked whether the pre-estimated session direction is the same as the current frame direction. If both directions are the same, then at step 332, the direction estimation module identifies that the device session is moving in the right direction. If the directions are different, then at step 330, the direction estimation module identifies that the session is moving in the wrong direction.
Further, after estimating the session direction at step 326, at step 334, the estimated session direction is recorded. Based on the recorded session direction, at step 332, the direction estimation module understands that the device session is moving in the right direction, and thus forwards the data to the frame selection unit for further processing.
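A non-limiting Python sketch of the direction estimation of Figure 3c is given below; the sign convention and the value of the motion threshold are assumptions made only for illustration.

```python
import numpy as np

def estimate_direction(gyro_yaw_rates, motion_threshold=0.02):
    """Estimate the movement direction from N drift-corrected gyroscope yaw-rate samples.
    Returns +1 or -1 for the two possible spin directions, or 0 when the sensed motion is
    below the (offline-estimated) threshold."""
    motion = float(np.sum(gyro_yaw_rates))          # accumulated rotation over the window
    if abs(motion) < motion_threshold:
        return 0
    return 1 if motion > 0 else -1

def check_session_direction(session_direction, current_samples):
    """If no session direction has been estimated yet, record the newly estimated one;
    otherwise compare the current frame direction against it and report whether the
    motion is in the allowed direction."""
    current = estimate_direction(current_samples)
    if session_direction == 0:
        return current, True                        # record the session direction
    return session_direction, (current == 0 or current == session_direction)
```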
Figure 3d is a schematic flow diagram 340 depicting a process of frame selection using the frame selection unit depicted in Figure 3a, according to an embodiment of the present invention. According to the flow diagram 340, the frame selection unit receives the Euler angles from the direction estimation module. At step 342, the amount of change in the Euler angles is calculated using the input. Further, it is checked whether the amount of angular change and the number of frames captured are greater than 0. If yes, then at step 344, the frame is selected, and the image data is compared with a background model to estimate the visual changes in the image frame. At step 346, based on the sensor's estimation of motion, the threshold for visual changes is updated so that the thresholds for visual-based selection adapt to the visual content of the scene. Upon updating the threshold estimation, at step 348, the frame is used to update the background model. At step 350, the frame is then selected for further processing.
If the amount of angular change and the number of frames captured are not greater than 0, then at step 352, the image data is compared with the background model to measure the difference, which takes care of motions which cannot be estimated accurately by the device's sensors. At step 354, it is checked whether the difference of the frame with respect to the background model is more than a preset threshold or not. If the difference is more than the threshold, then at step 348, the background model is updated, and at step 350, the frame is selected for further processing. If the difference of the frame with respect to the background model is less than the threshold, then the frame is discarded.
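The frame selection of Figure 3d may, for example, be sketched as follows in Python; the angle step, the adaptive-threshold update rule and the mean-absolute-difference background model are assumptions of the sketch rather than requirements of the invention.

```python
import numpy as np

class FrameSelector:
    """Minimal sketch: a frame is selected either when the Euler angle has changed by at
    least angle_step degrees around the spin axis, or when the visual difference from a
    running background model exceeds an adaptive threshold."""

    def __init__(self, angle_step=2.0, visual_threshold=12.0, alpha=0.1):
        self.angle_step = angle_step
        self.visual_threshold = visual_threshold
        self.alpha = alpha                      # update rate for model and threshold
        self.last_angle = None
        self.background = None

    def process(self, gray_frame, euler_angle_deg):
        gray = gray_frame.astype(np.float32)
        angle_change = (abs(euler_angle_deg - self.last_angle)
                        if self.last_angle is not None else float("inf"))
        diff = (float(np.mean(np.abs(gray - self.background)))
                if self.background is not None else float("inf"))
        if angle_change >= self.angle_step:
            # Sensor-driven selection; adapt the visual threshold to the scene content.
            if np.isfinite(diff):
                self.visual_threshold = ((1 - self.alpha) * self.visual_threshold
                                         + self.alpha * diff)
            selected = True
        else:
            # Sensed motion too small; fall back to the background-difference test.
            selected = diff > self.visual_threshold
        if selected:
            # Use the selected frame to update the background model.
            self.background = (gray if self.background is None else
                               (1 - self.alpha) * self.background + self.alpha * gray)
            self.last_angle = euler_angle_deg
        return selected
```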
Figure 4 is a schematic flow diagram illustrating the functions of a processing unit 250 depicted in Figure 2, according to an embodiment of the present invention. The processing unit 250 determines the inter-spatial relationship between the pluralities of visual data. The processing unit further determines the temporal and spatial modifications, as transformation required for plurality of visual data in order to represent them as “jitter free” when seen together in order. The processing unit 250 comprises of a pose generation unit, a pose correction unit and a transformation matrix generation unit. The input to the processing unit 250 consists of the motion sensor data and camera frames data along with time stamps for both inputs corresponding to time of capture of each data sample.
According to the flow diagram, at step 402, synchronization of the motion sensor data, camera frames and time stamps is performed for the input data. At step 404, for the synchronized data, it is checked whether a zoom correction has been enabled or not. If no, then at step 406, object centering is performed, and at step 408, in-plane 2D pose correction is performed. Further, simultaneously with steps 406 and 408, at step 410, estimation of the 3D rotation pose is performed, and at step 412, 3D rotation pose correction is performed. Here, steps 406 and 408 are performed to obtain pose-corrected 2D images and steps 410 and 412 are performed to obtain pose-corrected 3D images. At step 414, both the pose-corrected 2D images obtained from the in-plane 2D pose correction of step 408 and the pose-corrected 3D images obtained from the 3D rotational pose correction of step 412 are combined to obtain the transformation data. At step 416, the transformation data is written to a file for further processing requirements.
If the zoom correction is enabled for the synchronized data, then at step 418, it is assumed that the images are to be processed in 3D and thus 3D pose estimation is performed. At step 420, the 3D pose correction is performed for the pose estimated images. At step 422, zoom correction and object centering is performed for the pose corrected image to obtain the transformation data. At step 424, the transformation data is written to the file. The pose estimation, pose correction and object centering process are described herein in detail and hence not described herein again to avoid repetition.
Figure 5a is a schematic flow diagram 500 illustrating a process of object centering depicted in Figure 4, according to an embodiment of the present invention. The process of object centering is performed by an object centering module. According to the flow diagram 500, at step 502, the object centering module identifies a region of interest (ROI) around the image center. At step 504, the object centering module receives the next image frame. At step 506, the object centering module checks whether the image frame received is the first frame or not. If the image is the first frame, then the process moves to step 504 and receives the next frame. If no, then the process moves to step 508, wherein the object centering module tracks feature points from the previous frame ROI. Further, the object centering module updates the ROI position for the current frame. Further, at step 510, feature points in the current frame ROI are detected by the object centering module. At step 512, it is checked whether the object centering module has received all the image frames in the sequence. If yes, then the process ends; otherwise, the process again moves to step 504 and starts checking for the next image frames.
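By way of example only, the object centering loop of Figure 5a may be sketched with OpenCV as shown below; tracking by pyramidal Lucas-Kanade optical flow and shifting the ROI by the median feature displacement are assumptions made for the illustration.

```python
import cv2
import numpy as np

def track_object_center(frames, roi_size=200):
    """Sketch of the object-centering loop (Figure 5a): detect feature points inside a
    region of interest (ROI) around the centre of the first frame, track them into each
    following frame with pyramidal Lucas-Kanade optical flow, and shift the ROI by the
    median feature displacement. Returns the estimated object centre for every frame."""
    centers, prev_gray, prev_pts, center = [], None, None, None
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev_gray is None:
            h, w = gray.shape
            center = np.array([w / 2.0, h / 2.0])       # ROI starts at the image centre
        elif prev_pts is not None and len(prev_pts) > 0:
            pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, prev_pts, None)
            ok = status.ravel() == 1
            if np.any(ok):
                shift = np.median(pts[ok] - prev_pts[ok], axis=0).ravel()
                center = center + shift                  # update the ROI position
        # Detect feature points inside the updated ROI for the next frame.
        x, y = int(center[0] - roi_size / 2), int(center[1] - roi_size / 2)
        mask = np.zeros_like(gray)
        mask[max(y, 0):y + roi_size, max(x, 0):x + roi_size] = 255
        prev_pts = cv2.goodFeaturesToTrack(gray, maxCorners=200, qualityLevel=0.01,
                                           minDistance=7, mask=mask)
        prev_gray = gray
        centers.append((float(center[0]), float(center[1])))
    return centers
```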
Figure 5b is a schematic flow diagram 520 illustrating the processes of 2D (in-plane) pose generation and correction depicted in Figure 4, according to an embodiment of the present invention. The process of in-plane 2D pose generation and correction is performed by the in-plane 2D pose correction module. According to the flow diagram 520, at step 522, the 2D pose correction module receives the next image frame. At step 524, the feature points present in the received image frame are computed. At step 526, it is checked whether the image frame for which the feature points are computed is the first frame. If no, then at step 528, the pose correction module tracks and filters the feature points of the image frame, and at step 530, the global pose trajectory is updated, after which the process moves to step 532. If the image frame for which the feature points are computed is the first frame, then the process moves directly to step 532.
At step 532, the pose correction module further checks whether the sequence of image frames is over. If yes, then the process goes to step 534; if no, then the process moves to step 522 and starts receiving the next image frames. At step 534, the filtering strength is set. At step 536, the pose correction module performs trajectory filtering. At step 538, the pose correction unit checks whether the output crop of the filtered trajectory is greater than the predefined threshold or not. If no, then the process is terminated. If yes, then at step 540, the filtering strength of the pose correction unit is adjusted and the process moves back to step 536 to perform trajectory filtering.
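A non-limiting Python sketch of the trajectory filtering loop of steps 534-540 is given below; the Gaussian smoothing kernel and the halving of the filtering strength are assumptions made only for the example.

```python
import numpy as np

def gaussian_smooth(traj, sigma):
    """Smooth a 1-D trajectory with an edge-padded Gaussian kernel."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-0.5 * (x / sigma) ** 2)
    kernel /= kernel.sum()
    padded = np.pad(traj, radius, mode="edge")
    return np.convolve(padded, kernel, mode="valid")

def filter_trajectory(traj_x, traj_y, frame_size, max_crop_ratio=0.1, sigma=30.0):
    """Sketch of steps 534-540: filter the global pose trajectory and, whenever the crop
    implied by the correction exceeds the allowed ratio, reduce the filtering strength
    and filter again."""
    w, h = frame_size
    traj_x, traj_y = np.asarray(traj_x, float), np.asarray(traj_y, float)
    while sigma >= 1.0:
        sx, sy = gaussian_smooth(traj_x, sigma), gaussian_smooth(traj_y, sigma)
        corr_x, corr_y = sx - traj_x, sy - traj_y       # per-frame correction offsets
        crop = max(np.max(np.abs(corr_x)) / w, np.max(np.abs(corr_y)) / h)
        if crop <= max_crop_ratio:
            return corr_x, corr_y
        sigma *= 0.5                                     # adjust the filtering strength
    return np.zeros_like(traj_x), np.zeros_like(traj_y)
```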
Figure 5c is a schematic flow diagram 550 illustrating a process of 3D pose generation depicted in Figure 4, according to an embodiment of the present invention. According to the flow diagram 550, a set of images and camera intrinsic parameters, along with non-visual sensor data, are provided to the 3D pose correction module. At step 552, the 3D pose correction module performs feature detection on each input image and performs matching of feature points across pairs of images. Any suitable feature detection and matching method can be used herein without departing from the scope of the invention. Subsequently, feature tracks are built that map feature points to a plurality of images among the input images. At step 554, an initial image pair with the highest number of matches is selected. At step 556, the three-dimensional rotation and translation of the camera between the image pair is estimated. The person having ordinary skill in the art can use state-of-the-art structure-from-motion (SfM) methods to solve the problem of 3D rotation and translation, without departing from the scope of the invention.
At step 558, a triangulation is performed for each feature point so as to project it into 3D space and obtain its true 3D coordinate that minimizes the projection error with respect to each image in the pair. At step 560, filtering of the feature points is performed to remove those points that subtend a small projection angle or result in a high re-projection error in each image of the pair. At step 562, it is checked whether a sufficient overall number of points has been successfully reconstructed or not. If no, then the process moves to step 554. If yes, then at step 564, bundle adjustment is performed and the refined camera poses and point cloud are stored.
At step 566, the system further checks whether there are any pending frames or not. If no frames are pending, then the process is completed. If there are pending frames, then at step 568, the tracks which are visible in a new frame are obtained from the reconstructed scene. At step 570, a PnP problem is solved to get R and T for the new projection plane, which is the new frame. At step 572, a triangulation is performed for each feature point in the new frame so as to project it into 3D space and obtain its true 3D coordinate that minimizes the projection error with respect to each image in the pair. At step 574, the unwanted points are evaluated and filtered from the new frame. Further, the method goes to step 564 to perform bundle adjustment, refine the camera poses and store the point cloud. The steps 564-574 run iteratively and add the remaining images to the 3D point cloud, one at a time.
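Purely as an illustrative sketch, the reconstruction of the initial image pair (steps 552-558) may be realized with OpenCV as shown below; the use of ORB features, brute-force matching and a RANSAC threshold of one pixel are assumptions of the example, and the incremental PnP loop, filtering and bundle adjustment of steps 564-574 are omitted for brevity.

```python
import cv2
import numpy as np

def initial_pair_reconstruction(img1, img2, K):
    """Sketch of steps 552-558 for the initial image pair: detect and match features,
    estimate the relative camera rotation R and translation t from the essential matrix,
    and triangulate the inlier matches into 3D. K is the 3x3 camera intrinsic matrix."""
    orb = cv2.ORB_create(4000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Estimate the essential matrix and recover the relative pose of the second camera.
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, pose_mask = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)

    # Triangulate the inlier correspondences; the first camera sits at the origin.
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    good = pose_mask.ravel() > 0
    pts4d = cv2.triangulatePoints(P1, P2, pts1[good].T, pts2[good].T)
    points3d = (pts4d[:3] / pts4d[3]).T
    return R, t, points3d
```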
Figure 5d is a schematic flow diagram 580 illustrating a process of zoom correction depicted in Figure 4, according to an embodiment of the present invention. While there is a lot of prior art that addresses 2D affine-model-based stabilization and 3D rotational stabilization, very little attention has been paid to the correction of translation and zoom factors in 3D space. However, in a 3D scanning operation where the user captures a series of images going around an object, rendering without zoom and translation correction results in a very poor end-user experience.
Also, the person having ordinary skill in the art appreciates that, while zoom correction and object centering can be automatically handled in 3D scan and modelling applications by rendering a 3D model as output, such rendering is not feasible in this use case due to limitations in processing speed. The description below explains the process of zoom correction and object centering without using the 3D model of the object for rendering.
At step 582, the plurality of camera positions are projected onto a planar surface in the 3D world; since the surface may not be parallel to the ground, a best-fit algorithm is used to perform the projection. In an embodiment of the present invention, the projection is done using Principal Component Analysis (PCA). At step 584, a circle fit lying in the said projected plane is estimated. In an embodiment of the present invention, any least squares method can be used to estimate the fit, without departing from the scope of the invention.
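The projection of the camera positions onto a best-fit plane and the least-squares circle fit of steps 582-584 may, for example, be sketched as follows; the algebraic (Kasa) form of the circle fit is only one possible least-squares method.

```python
import numpy as np

def project_to_best_fit_plane(camera_positions):
    """Fit a plane to the 3D camera positions with PCA and express each position in the
    2D coordinates of that plane (the two principal directions)."""
    X = np.asarray(camera_positions, dtype=float)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean)          # rows of Vt are the principal directions
    basis = Vt[:2]                              # plane spanned by the first two components
    return (X - mean) @ basis.T, mean, basis

def fit_circle_least_squares(points_2d):
    """Algebraic (Kasa) least-squares circle fit: solve a*x + b*y + c = -(x^2 + y^2),
    giving the centre (-a/2, -b/2) and the radius."""
    x, y = points_2d[:, 0], points_2d[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x ** 2 + y ** 2)
    (a_c, b_c, c_c), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -a_c / 2.0, -b_c / 2.0
    radius = np.sqrt(cx ** 2 + cy ** 2 - c_c)
    return (cx, cy), radius
```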
The 3D point cloud represents a sparse collection of feature points with associated track lengths and 3D world coordinates. Each of these tracks originates from different objects. At step 586, an initial depth-based filtering is performed for these tracks. It is assumed that the object of interest lies at the center of the first image. Based on this heuristic, all 3D coordinates are projected to the first image, and those tracks that do not lie within a central window of the image are filtered out. From the remaining tracks, a simple clustering of the depth values is done, and the dominant cluster is picked. At step 588, each of the filtered tracks is projected to the 3D plane generated at step 582.
At step 590, a simple filtering is performed by picking only the 3D points that lie within the circle in the 2D plane, as estimated at step 584. The person having ordinary skill in the art understands that the projection of the 3D coordinates of points belonging to the object of interest will lie inside the circle, without departing from the scope of the invention. At step 592, a final filtering is performed by projecting each of the remaining 3D points onto each image, and picking only those points that lie within each and every image. It is assumed that the object of interest remains in the field of view of the camera throughout the scanning. The steps 586-592 help to estimate the geometric center of the object being scanned.
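A non-limiting sketch of the track filtering of steps 586-588 is given below; the size of the central window and the use of a median-based grouping as the dominant depth cluster are assumptions made only for the example.

```python
import numpy as np

def filter_object_tracks(points3d, P_first, image_size, center_window_ratio=0.4):
    """Sketch of steps 586-588: project every track point into the first image with its
    3x4 projection matrix P_first, keep only the points falling inside a central window,
    and then keep the dominant depth cluster (here approximated by points within 1.5
    median absolute deviations of the median depth of the windowed points)."""
    w, h = image_size
    X = np.asarray(points3d, dtype=float)
    Xh = np.hstack([X, np.ones((len(X), 1))])
    proj = (P_first @ Xh.T).T
    depth = proj[:, 2]
    uv = proj[:, :2] / depth[:, None]

    half_w, half_h = w * center_window_ratio / 2, h * center_window_ratio / 2
    in_window = ((np.abs(uv[:, 0] - w / 2) < half_w) &
                 (np.abs(uv[:, 1] - h / 2) < half_h) & (depth > 0))
    if not np.any(in_window):
        return X[:0]

    d = depth[in_window]
    med = np.median(d)
    mad = np.median(np.abs(d - med)) + 1e-9
    dominant = np.abs(depth - med) < 1.5 * mad
    return X[in_window & dominant]
```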
At step 594, based on the estimated trajectory circle and the estimated geometric center of the object in 3D, zoom correction of the image is performed. At step 596, object centering is performed on the zoom-corrected image obtained from step 594. In an embodiment of the present invention, any of the embodiments from the pose generation unit is combined with any of the embodiments from the pose correction unit to generate the corrected poses, without departing from the scope of the invention.
Figures 6a and 6b are schematic block diagrams illustrating the functional components of the output unit 260 depicted in Figure 2, according to an embodiment of the present invention. The output unit 260 receives the plurality of visual data and corresponding non-visual data information from the guidance and selection unit 240 and, for each visual data, the associated transformation information from the processing unit 250. In an embodiment of the present invention, the output unit 260 generates the output data by transforming the visual data. In another embodiment of the present invention, the output unit 260 can represent the received information in any standard or proprietary format, in which case, while viewing, the instantaneous transformation is applied and shown.
Figure 6a is a schematic block diagram 600 illustrating the functioning of the Output Unit 260 according to an embodiment of the present invention. According to the Figure 6a, the output unit 260 receives the visual data as input from the guidance and selection unit 240, and transformation information from processing unit 250. The transformation information comprises of the modification/transformation relationship required for generating the desired output. In an embodiment of the present invention, the transformation information can also include the further altered/modified visual data, without departing from the scope of the invention. The transforming unit 262 combines the transformation information and visual data and generates visual output data 264 which is smooth and jitter free.
Figure 6b is a schematic block diagram 610 illustrating the functioning of the Output Unit 260 according to another embodiment of the present invention. According to Figure 6b, the output unit 260 arranges the received visual data 240 and transformation information 250, and packs them into single unit as output data 266. The output data 266 can be either arranged sequentially as per the corresponding type or in any pre-determined order which can be sequential or intermixed.
Figure 7 is a schematic block diagram 700 illustrating the functional components of an output unit depicted in Figure 2, according to another embodiment of the present invention. According to the block diagram 700, the output unit 260 is responsible for decoding the contents of the generated media file and rendering the same on a device display. The output unit comprises of a user event handler 701, a frame renderer 702, a controller 703, a fusion sensor 704, a buffer manager 705, and a decoder and demuxer 706.
The user event handler 701 receives touch-based input from a user for rendering of frames. The user event handler 701 interacts with the frame renderer 702 for rendering of the frames. The user event handler 701 also comprises of a touch sensor that is used to get touch events, and once the event parameters cross a defined threshold, events are sent to the controller 703, wherein the threshold can vary from device to device depending on the display screen resolution. The amount of touch movement is used to calculate the number of frames to be sent for rendering.
If the user swipes on the screen from Point A to Point B, the number of frames to be rendered is calculated using the following formula:
frameCount = (AbsoluteValueOf(distance) * numFrames) / DisplayWidth
where distance denotes the pixel distance from Point A to Point B, numFrames denotes the total number of frames present in the content which needs to be rendered, and DisplayWidth denotes the device display width in pixels. This frameCount, along with a relative touch direction, is sent to the controller 703.
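The formula above may be expressed as the following non-limiting Python example; the integer division and the sign-based direction are illustrative choices.

```python
def frames_to_render(point_a_x, point_b_x, num_frames, display_width_px):
    """Compute frameCount = (|distance| * numFrames) / DisplayWidth for a swipe from
    Point A to Point B, together with the relative touch direction for the controller."""
    distance = point_b_x - point_a_x                       # pixel distance of the swipe
    frame_count = (abs(distance) * num_frames) // display_width_px
    direction = 1 if distance > 0 else -1 if distance < 0 else 0
    return frame_count, direction

# Example: a 270-pixel swipe on a 1080-pixel-wide display over 180-frame content yields
# (270 * 180) / 1080 = 45 frames to render.
```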
Further, the frame renderer module 702 handles requests for frames for rendering. For every frame, a corresponding 3x3 warping matrix is passed along with orientation parameters. Every frame is warped, cropped, color converted (YUV to RGB) and rendered.
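By way of example, the per-frame rendering path may be sketched with OpenCV as follows; the NV21 pixel layout, the fixed crop ratio and the ordering of the colour conversion before warping are assumptions made only for the illustration.

```python
import cv2
import numpy as np

def render_frame(yuv_nv21, size, warp_3x3, crop_ratio=0.05):
    """Sketch of the per-frame rendering path: colour-convert the decoded YUV frame to
    RGB, warp it with its 3x3 matrix, and crop a fixed border introduced by the
    stabilising warp."""
    w, h = size
    rgb = cv2.cvtColor(yuv_nv21.reshape(h * 3 // 2, w), cv2.COLOR_YUV2RGB_NV21)
    warped = cv2.warpPerspective(rgb, warp_3x3, (w, h))
    dx, dy = int(w * crop_ratio), int(h * crop_ratio)
    return warped[dy:h - dy, dx:w - dx]
```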
Further, the fusion sensor 704 specifies the number of frames to be displayed, along with a relative direction, either positive or negative, to the controller 703. A composite sensor is used as the fusion sensor 704 that provides a unified device spatial orientation based on the fusion of raw sensor values such as, but not limited to, magnetometer, accelerometer and gyroscopic sensor values. The composite sensor reports the device spatial orientation with respect to roll, pitch and azimuth values in degrees.
The buffer manager 705 works on a buffering mechanism which is decided based on the amount of heap memory permitted to be used on the device and the resolution of the generated spin view content. There are two kinds of buffering mechanisms being used. The first mechanism buffers all frames and stores them in a list for the complete duration of the viewer session. The second mechanism uses Group of Pictures (GOP) based buffering.
According to the GOP based buffering mechanism, the buffer manager 705 maintains a list of decoded frames with N GOPs of frame data. The request sent to the decoder and demuxer 706 will be for either the previous GOP or the next GOP. Decoding one GOP altogether has less overhead compared to frame-by-frame decoding, especially in the case of reverse playback.
The value of N decides the memory footprint of the buffering mechanism, so its value depends upon the system heap memory available, the kind of requests the user wants the system to cater to, and the performance the user wants to achieve from the system. The value of N is chosen such that a new request can be catered to without overburdening the system resources and with less processing. The value of N is also chosen such that there are always two different points of decision, both for the previous and the next GOP request. Further, there should be sufficient GOPs (buffers) in memory (henceforth called n) in the direction of the new request, until the new GOP request is fulfilled by the decoder and demuxer 706. The request for the next GOP is made when the following equation is satisfied:
F = (N - n) × S - X (1)
wherein F is the buffer number requested in the list, S is the GOP size and X is the playback speed, such as 1x, 2x, etc. The request for the previous GOP is made when the following equation is satisfied:
F = n × S + X (2)
Between the two points of decision there will always be (N - 2n) GOPs in memory. Requests for buffers in those GOPs can easily be catered to without making any request to the decoder. A request to the decoder for a GOP is made on a separate thread which maintains a self-updating queue. The queue updates itself so that it holds only the latest N requests for next or previous GOPs and removes old requests. A Least Recently Used (LRU) concept is applied to reuse the already allocated memory.
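Equations (1) and (2) may, for example, be applied as threshold tests as in the following non-limiting sketch; the margin n = 1 matches the example of Figure 10, and the specific values are illustrative only.

```python
def gop_request(frame_index, N, S, n, X):
    """Decision sketch for equations (1) and (2): given the buffer (frame) number
    requested within the list of N buffered GOPs of size S, the playback speed X, and
    the margin n of GOPs kept in the direction of travel, decide whether to ask the
    decoder and demuxer for the next GOP, the previous GOP, or nothing."""
    if frame_index >= (N - n) * S - X:       # equation (1): approaching the forward edge
        return "REQUEST_NEXT_GOP"
    if frame_index <= n * S + X:             # equation (2): approaching the backward edge
        return "REQUEST_PREVIOUS_GOP"
    return "NO_REQUEST"                      # still within the (N - 2n) GOPs in memory

# Example with N = 4 GOPs, GOP size S = 30, margin n = 1 and speed X = 1:
# a request for frame 89 triggers the next-GOP fetch, since (4 - 1) * 30 - 1 = 89.
```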
The controller 703 is the core of the spin view rendering system, as it communicates with the other modules to handle their requests. Initially, depending on the touch input or the data from the fusion sensor 704, the controller 703 requests the buffer manager 705 to get decoded frames. The controller 703 gets the content direction from the SEF generated media file and compares it with the direction sent by either the fusion sensor 704 or the user event handler 701. If the directions do not match, then the content will not be played; otherwise, the frame buffer is sent to the frame renderer 702 along with the corresponding warping matrix for rendering.
The decoder and demuxer 706 decodes the generated media file. The file container is parsed using the demuxer and the video data is decoded into raw format using a decoder module. In addition, metadata is also parsed and stored. Depending upon the next/previous GOP request, the demuxer pointer is moved back and forth. The buffer manager 705 creates a list of buffer pointers and passes it to the decoder and demuxer 706, which populates the buffers by decoding the corresponding frames.
Figure 8 is a schematic diagram 800 illustrating the orientation of the camera position and the object, according to an embodiment of the present invention. According to the diagram 800, a composite sensor is used as the fusion sensor 704 of the output unit 260 that provides a unified device spatial orientation based on the fusion of raw sensor values such as, but not limited to, magnetometer, accelerometer and gyroscopic sensor values. The composite sensor reports the device spatial orientation with the following three values in degrees:
Roll: 0 degrees when the device is level, increasing to 90 degrees as the device is tilted up onto its left side, and decreasing to -90 degrees when the device is tilted up onto its right side.
Pitch: 0 degrees when the device is level, increasing to 90 degrees as the device is tilted so that its top is pointing down, and then decreasing to 0 degrees as it gets turned over. Similarly, as the device is tilted so that its bottom points down, the pitch decreases to -90 degrees, and then increases to 0 degrees as it gets turned all the way over.
Azimuth: 0 degrees when the top of the device is pointing north, 90 degrees when it is pointing east, 180 degrees when it is pointing south, 270 degrees when it is pointing west, and so on.
Figures 9a-9d illustrate a schematic representation of the image capturing device in a portrait orientation, a reverse portrait orientation, a landscape orientation and a reverse landscape orientation, according to an embodiment of the present invention. According to Figures 9a-9d, the value to be used among the three sensor values (roll, pitch and azimuth) for computation of the device spatial orientation is decided based on the criteria discussed below.
Figures 9a and 9b illustrate the Portrait (708) and Reverse Portrait (709) cases respectively. According to Figures 9a and 9b, in the normal case, where the pitch value p > (-90 + t) or p < (-90 - t), the current sensor value is set to the roll value. If the pitch value is in the range p > (-90 - t) and p < (-90 + t), which occurs when the device is held up straight in a purely vertical position, then the current sensor value is set to the azimuth value after normalizing.
Figures 9c and 9d illustrate the Landscape (710) and Reverse Landscape (711) cases respectively. According to Figures 9c and 9d, in the normal case, where the roll value r < (90 - t) or r > (90 + t), the current sensor value is set to the pitch value. If the roll value is in the range r > (90 - t) and r < (90 + t), which occurs when the device is held up straight in a purely horizontal position, then the current sensor value is set to the azimuth value (after normalizing).
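The selection criterion of Figures 9a-9d may be summarised in the following sketch, in which t is the tolerance around the 90-degree position and normalize_azimuth is an assumed helper for wrapping the azimuth into a continuous signed range.

```python
def select_sensor_value(orientation, roll, pitch, azimuth, t=10.0):
    """Pick which fused sensor value drives the spin view, per Figures 9a-9d."""

    def normalize_azimuth(a):
        return (a + 180.0) % 360.0 - 180.0          # wrap azimuth into [-180, 180)

    if orientation in ("portrait", "reverse_portrait"):     # Figures 9a, 9b
        if -90.0 - t < pitch < -90.0 + t:                    # pitch within +/-t of -90 degrees
            return normalize_azimuth(azimuth)
        return roll                                          # normal case: use roll
    else:                                                    # Figures 9c, 9d (landscape)
        if 90.0 - t < roll < 90.0 + t:                       # roll within +/-t of 90 degrees
            return normalize_azimuth(azimuth)
        return pitch                                         # normal case: use pitch
```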
Based on this, the difference Δ(SensorValue) between the current sensor value and the previous reference value is calculated. This difference is then compared with a defined threshold to eliminate noisy values, and if the difference crosses that threshold, an event for further processing is sent to the controller 703. The number of frames to be sent for rendering, frameCount, is calculated from this sensor value difference and the maximum view angle:
frameCount = Δ(SensorValue) × FramePerDegree
wherein FramePerDegree is the number of frames to be rendered per degree of change. For example, if 180 frames need to be rendered within a view angle of 45 degrees, then FramePerDegree will be 4.
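A minimal sketch of this computation, assuming illustrative parameter names total_frames and max_view_angle for the captured frame count and the maximum view angle:

```python
def frames_to_render(sensor_delta_degrees, total_frames=180, max_view_angle=45.0):
    """frameCount = |Δ(SensorValue)| × FramePerDegree."""
    frames_per_degree = total_frames / max_view_angle     # e.g. 180 / 45 = 4
    return int(round(abs(sensor_delta_degrees) * frames_per_degree))


# Example: a 3-degree change in the selected sensor value yields 12 frames to render.
assert frames_to_render(3.0) == 12
```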
In addition, sensor spike correction is also handled. If the current sensor value varies in direction from the previous sensor samples within a defined window, the current sensor value is dropped, as it is perceived to be a possible spike.
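One possible reading of this spike-rejection rule is sketched below; window_size is an assumed tuning parameter, and a sample is dropped only when its direction of change disagrees with every recent non-zero change in the window.

```python
from collections import deque


class SpikeFilter:
    """Drop a sample whose direction of change contradicts the recent window."""

    def __init__(self, window_size=4):
        self.history = deque(maxlen=window_size)   # recent deltas of the sensor value

    def accept(self, previous_value, current_value):
        delta = current_value - previous_value
        trend = [d for d in self.history if d != 0]
        if trend and all(d * delta < 0 for d in trend):
            return False            # direction flips against the whole window: possible spike
        self.history.append(delta)
        return True
```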
Figure 10 is a schematic diagram illustrating the GOP-based buffering mechanism, according to an embodiment of the present invention. In this example, the request for the next GOP is made when the frame counter traverses 75% of the queue (N = 4). Similarly, the request for the previous GOP is made when the frame counter traverses in reverse and hits the 25th-percentile GOP in the queue.
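Using the GopBufferPolicy sketch given after equation (2), the thresholds of Figure 10 can be reproduced with N = 4 and the assumed values n = 1, S = 30 frames per GOP, and 1x playback speed (Figure 10 itself only specifies N = 4).

```python
policy = GopBufferPolicy(total_gops_in_memory=4, gops_ahead=1, gop_size=30, playback_speed=1)

# Next-GOP request near 75% of the 120 buffered frames: (4 - 1) * 30 - 1 = 89.
print(policy.should_request_next(89))       # True

# Previous-GOP request near the 25th percentile: 1 * 30 + 1 = 31.
print(policy.should_request_previous(31))   # True
```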
The present embodiments have been described with reference to specific example embodiments; it will be evident that various modifications and changes may be made to these embodiments without departing from the broader spirit and scope of the various embodiments. Furthermore, the various devices, modules, and the like described herein may be enabled and operated using hardware circuitry, for example, complementary metal oxide semiconductor based logic circuitry, firmware, software and/or any combination of hardware, firmware, and/or software embodied in a machine readable medium. For example, the various electrical structures and methods may be embodied using transistors, logic gates, and electrical circuits, such as an application specific integrated circuit.
Although the embodiments herein are described with various specific embodiments, it will be obvious to a person skilled in the art that the invention may be practiced with modifications. However, all such modifications are deemed to be within the scope of the claims. It is also to be understood that the following claims are intended to cover all of the generic and specific features of the embodiments described herein and all the statements of the scope of the embodiments which as a matter of language might be said to fall therebetween.
We Claim:
1. A method of rendering a spin view of an object, the method comprising:
receiving, by a data manager, a plurality of input data from a sensor unit;
determining, by a direction estimation module, a current user movement direction for capturing the spin view image of the object with respect to a central axis of the object;
comparing the current user movement direction with an allowed initial user movement direction for image capturing;
determining, if one or more key frames corresponding to the spin view images are captured in the initial user movement direction of image capturing;
generating one or more image poses corresponding to the captured one or more key frames;
performing image correction and pose centering on the generated image poses; and
rendering the image of the object in spin view.
2. The method of claim 1, wherein generating the one or more image poses comprises of:
modeling movement of an image capturing device as motion in 3 dimensions; and
generating image poses of the object in 3D using at least one of visual sensor data and inertial sensor data.
3. The method of claim 1, wherein performing the image correction comprises of:
computing a trajectory and centering estimation of the captured spin view images of the one or more key frames captured; and
performing a path correction estimation for rendering the 3D spin view of the object.
4. The method of claim 1, further comprises of:
determining, by the data manager, a time synchronization between the plurality of input data; and
re-aligning the input data in time order with respect to each other on a global time scale through a Time Stamp Remapping.
5. The method of claim 1, wherein the input data comprises at least one of a video frame, a camera preview frame, a gyroscopic sensor value, an accelerometer value, and a magnetometer value, along with associated time information.
6. The method of claim 1, wherein the input data is provided in the form of an encoded data format or a pre-captured visual data format.
7. The method of claim 1, wherein determining the current user movement direction comprises of:
performing estimation of drift in motion sensors, wherein the estimation of drift is performed based on the data obtained by one of calibration of drift in the motion sensors and fusion of data from various sensors used for detection of a stationary state of the motion sensors;
estimating a drift correction based on information of a current stationary state and a previous stationary state of the motion sensors;
employing the estimated drift correction for correction of drift in readings from other motion sensors;
providing the drift corrected motion sensor data as an input to the direction estimation module; and
estimating the current user movement direction using a set of readings and an amount of motion sensed by the motion sensors.
8. The method of claim 7, further comprises of determining the current user movement direction using one or more predefined thresholds if the user movement direction has not been estimated for capturing the spin view images of the object.
9. The method of claim 1, wherein determining the one or more key frames is based on a predetermined amount of rotation around the spin axis and a number of frames captured.
10. The method of claim 1, wherein performing a trajectory and centering estimation of the captured spin view images comprises of:
determining an inter-spatial relationship between the plurality of input data;
determining temporal and spatial modifications required for the plurality of images to appear jitter-free when viewed together in order; and
generating poses corresponding to the shooting device location at a point of frame capture.
11. A system of rendering a spin view of an object in a three-dimensional mode, the system comprising:
a data manager unit to receive a plurality of input data from a sensor unit;
a direction estimation module adapted for:
determining a current user movement direction for capturing the spin view image of the object with respect to a central axis of the object;
comparing the current user movement direction with an allowed initial user movement direction for image capturing;
a frame estimation module adapted for:
determining, if one or more key frames corresponding to the spin view images are captured in the initial user movement direction of image capturing;
a pose generation module adapted for generating one or more image poses corresponding to the captured one or more key frames;
an image correction module adapted for performing an image correction and pose centering of the generated image poses; and
an output module adapted for rendering the image of the object in spin view.
Dated this the 17th day of March 2016
Signature
KEERTHI J S
Patent agent
Agent for the applicant