Abstract: TITLE OF INVENTION: SYSTEM AND METHOD FOR DETECTING DEFORMITY IN AN OBJECT USING 3D IMAGE A system (100) for detecting a deformity in an object is disclosed. An object detection module (201) generates a query point cloud corresponding to a query object based upon a 3D image of a scene including the query object. A pose estimation module (202) estimates a pose of the query object based upon the query point cloud. A pose alignment module (203) generates an aligned query point cloud based upon the estimated pose of the query object and a standard point cloud. A coarse estimation module (204) determines a set of likely deformed candidate voxels of the aligned query point cloud based upon a plurality of voxels of the aligned query point cloud and the standard point cloud. For each query point in the set of candidate voxels, a fine estimation module (205) identifies a nearest point in the standard point cloud; and determines the query point as a deformed point based upon a distance between the query point and the nearest point. Fig. 2
Description:FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2003
COMPLETE SPECIFICATION
(Section 10 and Rule 13)
TITLE OF THE INVENTION:
SYSTEM AND METHOD FOR DETECTING DEFORMITY IN AN OBJECT USING 3D IMAGE
APPLICANT
e-Infochips Private Limited, an Indian company of the address Building No. 2, Aaryan Corporate Park Near Shilaj Railway Crossing, Thaltej-Shilaj Road, Ahmedabad Gujarat 380054, India
The following specification particularly describes the invention and the manner in which it is to be performed:
FIELD OF INVENTION
[001] The present disclosure relates to image processing. More particularly, the present disclosure relates to a system and method for detecting a deformity in an object using 3D images.
BACKGROUND OF INVENTION
[002] Maintaining the quality of products is an essential aspect in manufacturing and production industries for several reasons. The quality of a product directly impacts customer satisfaction, brand reputation, and overall business success. Further, it is also important for ensuring adherence to industry standards and regulations, thereby mitigating financial and legal risks arising out of non-compliance with quality standards. Traditionally, the quality checks are done manually. However, manual quality checks are quite labor intensive and prone to errors. Automated defect detection systems are, therefore, replacing the labor-intensive manual quality check processes.
[003] Following advancements in computer vision, systems using computer vision-based defect detection are becoming popular for automated quality checks. A conventional defect detection system in a manufacturing set-up uses an RGB camera installed above a conveyer belt to capture an image(s) of an object. These images are processed using various algorithms to detect defects (for example, dents) in the object. However, the conventional defect detection systems suffer from several drawbacks. For example, in the most common scenario, a dent deforms the shape of an object without altering its color. In such a case, the dent can only be measured using depth information. However, RGB cameras only capture two-dimensional images and the depth information is lost. Therefore, the conventional defect detection systems that use RGB cameras are not capable of capturing such defects. Moreover, due to the susceptibility of RGB cameras to environmental conditions like lighting and illumination, the conventional defect detection systems may produce incorrect results in the case of improper lighting or a reflective property of an object.
[004] Therefore, there is a need for a defect detection system and method which can overcome the shortcomings of the defect detection systems known in the art.
SUMMARY OF INVENTION
[005] Particular embodiments of the present disclosure are described herein below with reference to the accompanying drawings; however, it is to be understood that the disclosed embodiments are mere examples of the disclosure, which may be embodied in various forms. Well-known functions or constructions are not described in detail to avoid obscuring the present disclosure in unnecessary detail. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a basis for the claims and as a representative basis for teaching one skilled in the art to variously employ the present disclosure in virtually any appropriately detailed structure.
[006] The present disclosure relates to a system and method for detecting a deformity in an object. In an embodiment, the system includes an object detection module, a pose estimation module, a pose alignment module, a coarse estimation module and a fine estimation module. The object detection module is configured to generate a query point cloud corresponding to a query object based upon a three-dimensional (3D) image of a scene comprising the query object, captured by a 3D imaging device. The pose estimation module is configured to estimate a pose of the query object based upon the query point cloud, wherein the pose of the query object indicates a rotation and a translation of the query object in a 3D space. The pose alignment module is configured to generate an aligned query point cloud based upon the estimated pose of the query object and a standard point cloud, wherein the standard point cloud corresponds to a point cloud of a 3D model of the query object, wherein the aligned query point cloud represents the query point cloud aligned with the standard point cloud. The coarse estimation module is configured to: generate a grid comprising a plurality of voxels of the aligned query point cloud, each voxel of the plurality of voxels comprising a plurality of points of the aligned point cloud; and determine a set of likely deformed candidate voxels of the aligned query point cloud based upon the plurality of voxels of the aligned query point cloud and a corresponding plurality of voxels of the standard point cloud. For each query point in the set of candidate voxels of the aligned query point cloud, the fine estimation module is configured to: identify a nearest point in the standard point cloud to the query point; and determine the query point as a deformed point based upon a distance between the query point and the nearest point in the standard point cloud.
[007] In an embodiment, the method includes generating, by an object detection module, a query point cloud corresponding to a query object based upon a three-dimensional (3D) image of a scene comprising the query object, captured by a 3D imaging device. The method further includes estimating, by a pose estimation module, a pose of the query object based upon the query point cloud, wherein the pose of the query object indicates a rotation and a translation of the query object in a 3D space. The method further includes generating, by a pose alignment module, an aligned query point cloud based upon the estimated pose of the query object and a standard point cloud, wherein the standard point cloud corresponds to a point cloud of a 3D model of the query object, wherein the aligned query point cloud represents the query point cloud aligned with the standard point cloud. The method further includes generating, by a coarse estimation module, a grid comprising a plurality of voxels of the aligned query point cloud, each voxel of the plurality of voxels comprising a plurality of points of the aligned point cloud. The method further includes determining, by the coarse estimation module, a set of likely deformed candidate voxels of the aligned query point cloud based upon the plurality of voxels of the aligned query point cloud and a corresponding plurality of voxels of the standard point cloud. The method further includes, for each query point in the set of candidate voxels of the aligned query point cloud: identifying, by a fine estimation module, a nearest point in the standard point cloud to the query point; and determining, by the fine estimation module, the query point as a deformed point based upon a distance between the query point and the nearest point.
BRIEF DESCRIPTION OF THE DRAWINGS
[008] The summary above, as well as the following detailed description of illustrative embodiments, is better understood when read in conjunction with the accompanying drawings. For the purpose of illustrating the present disclosure, exemplary constructions of the disclosure are shown in the drawings. However, the disclosure is not limited to the specific methods and instrumentalities disclosed herein. Moreover, those skilled in the art will understand that the drawings are not to scale.
[009] Fig. 1 depicts an exemplary environment 150 in which a system 100 for detecting a deformity in an object may be deployed, according to an embodiment of the present disclosure.
[0010] Fig. 2 depicts a block diagram of the system 100, according to an embodiment of the present disclosure.
[0011] Fig. 3 depicts a block diagram of a pose estimation module 202, according to an embodiment of the present disclosure.
[0012] Fig. 3a depicts an example implementation of a first neural network 221 of the pose estimation module 202, according to an embodiment of the present disclosure.
[0013] Fig. 4 depicts a method 400 for detecting a deformity in an object, according to an embodiment of the present disclosure.
[0014] Fig. 5a depicts an object point cloud of an exemplary object having a dent, according to an embodiment of the present disclosure.
[0015] Fig. 5b depicts an exemplary point cloud of a 3D model of the object, according to an embodiment of the present disclosure.
[0016] Fig. 5c depicts an exemplary aligned query point cloud, according to an embodiment of the present disclosure.
[0017] Fig. 5d illustrates an exemplary point cloud having a set of candidate voxels post a coarse estimate stage, according to an embodiment of the present disclosure.
[0018] Fig. 5e illustrates an exemplary point cloud having a plurality of deformed points, according to an embodiment of the present disclosure.
[0019] Fig. 5f illustrates an exemplary image of the object showing undented and dented portions of the object, according to an embodiment of the present disclosure.
DETAILED DESCRIPTION OF THE ACCOMPANYING DRAWINGS
[0020] Prior to describing the invention in detail, definitions of certain words or phrases used throughout this patent document will be defined: the terms "include" and "comprise", as well as derivatives thereof, mean inclusion without limitation; the term "or" is inclusive, meaning and/or; the phrases "coupled with" and "associated therewith", as well as derivatives thereof, may mean to include, be included within, interconnect with, contain, be contained within, connect to or with, couple to or with, be communicable with, cooperate with, interleave, juxtapose, be proximate to, be bound to or with, have a property of, or the like. Definitions of certain words and phrases are provided throughout this patent document, and those of ordinary skill in the art will understand that such definitions apply in many, if not most, instances to prior as well as future uses of such defined words and phrases.
[0021] Reference throughout this specification to “one embodiment,” “an embodiment,” or similar language means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, appearances of the phrases “in one embodiment,” “in an embodiment,” and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment, but mean “one or more but not all embodiments” unless expressly specified otherwise. The terms “including,” “comprising,” “having,” and variations thereof mean “including but not limited to” unless expressly specified otherwise. An enumerated listing of items does not imply that any or all of the items are mutually exclusive and/or mutually inclusive, unless expressly specified otherwise. The terms “a,” “an,” and “the” also refer to “one or more” unless expressly specified otherwise.
[0022] Although the operations of exemplary embodiments of the disclosed method may be described in a particular, sequential order for convenient presentation, it should be understood that the disclosed embodiments can encompass an order of operations other than the particular, sequential order disclosed. For example, operations described sequentially may in some cases be rearranged or performed concurrently. Further, descriptions and disclosures provided in association with one particular embodiment are not limited to that embodiment, and may be applied to any embodiment disclosed herein. Moreover, for the sake of simplicity, the attached figures may not show the various ways in which the disclosed system, method, and apparatus can be used in combination with other systems, methods, and apparatuses.
[0023] The embodiments are described below with reference to block diagrams and/or data flow illustrations of methods, apparatus, systems, and computer program products. It should be understood that each block of the block diagrams and/or data flow illustrations, respectively, may be implemented in part by computer program instructions, e.g., as logical steps or operations executing on a processor in a computing system. These computer program instructions may be loaded onto a computer, such as a special purpose computer or other programmable data processing apparatus to produce a specifically-configured machine, such that the instructions which execute on the computer or other programmable data processing apparatus implement the functions specified in the data flow illustrations or blocks.
[0024] These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including computer-readable instructions for implementing the functionality specified in the flowchart block or blocks. The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process such that the instructions that execute on the computer or other programmable apparatus provide operations for implementing the functions specified in the data flow illustrations or blocks.
[0025] Accordingly, blocks of the block diagrams and data flow illustrations support various combinations for performing the specified functions, combinations of operations for performing the specified functions and program instructions for performing the specified functions. It should also be understood that each block of the block diagrams and flowchart illustrations, and combinations of blocks in the block diagrams and flowchart illustrations, can be implemented by special purpose hardware-based computer systems that perform the specified functions or operations, or combinations of special purpose hardware and computer instructions. Further, applications, software programs or computer readable instructions may be referred to as components or modules. Applications may be hardwired or hardcoded in hardware or take the form of software executing on a general-purpose computer such that when the software is loaded into and/or executed by the computer, the computer becomes an apparatus for practicing the disclosure, or they are available via a web service. Applications may also be downloaded in whole or in part through the use of a software development kit or a toolkit that enables the creation and implementation of the present disclosure. In this specification, these implementations, or any other form that the disclosure may take, may be referred to as techniques. In general, the order of the steps of disclosed processes may be altered within the scope of the disclosure.
[0026] Furthermore, the described features, advantages, and characteristics of the embodiments may be combined in any suitable manner. One skilled in the relevant art will recognize that the embodiments may be practiced without one or more of the specific features or advantages of a particular embodiment. In other instances, additional features and advantages may be recognized in certain embodiments that may not be present in all embodiments. These features and advantages of the embodiments will become more fully apparent from the following description and appended claims, or may be learned by the practice of embodiments as set forth hereinafter.
[0027] The present disclosure relates to a system and a method for detecting a deformity in an object. The proposed system and method use a 3D image of the object and a 3D model of the object to detect a deformity. According to the teachings of the present disclosure, the system performs deformity detection in two stages, namely coarse estimation and fine estimation. In an embodiment, at the coarse estimation stage, voxels of the object point cloud that have a likelihood of the presence of a deformity are identified. Such voxels are processed further in the fine estimation stage, and the remaining voxels of the object point cloud that are not likely to have a deformity are eliminated from further processing. In the fine estimation stage, each point in the voxels identified in the coarse estimation stage is analyzed to determine whether the point represents a deformity. Such a two-stage approach for detecting a deformity in an object is computationally less expensive. Lower computational power requirements enable the proposed system and method to be implemented even on portable devices or edge devices. Also, the two-stage approach reduces the time required to determine whether the object is deformed or not. This makes the system and method more efficient. Consequently, the proposed system and method can be used in applications requiring real-time decision making, e.g., quality inspection of objects on a conveyer belt.
[0028] Referring now to the figures, Fig. 1 illustrates an environment 150 in which a system 100 for detecting a deformity in an object is deployed, according to an embodiment of the present disclosure. In an embodiment, the deformity in the object may be a dent. The system 100 is configured to identify a deformity in the object (also referred to as a query object) based upon a 3D image of the object and a 3D model of the object without any deformity (hereinafter referred to as a 3D standard model) and classify the object as deformed or non-deformed. According to an embodiment, the system 100 may also be configured to generate a score representative of a degree of deformity. The score helps to determine the severity of the deformation. The system 100 may classify the object as defective or non-defective based upon the score. The system 100 may also generate an image of the object having a heat map indicating the deformity in the object. The object may be transparent or non-transparent. The object may be made of any material, for example, metal, glass, plastic, leather, etc., or combinations thereof. Examples of the object include, without limitation, a bottle, a chair, a table, a ball, a car, an airplane, etc. It should be understood that the teachings of the present disclosure may be applied to any product or a part thereof.
[0029] A 3D imaging device 101 is configured to capture the 3D image of a scene having the object and generate a depth map of the scene. The 3D imaging device 101 may include an active 3D image sensor or a passive 3D image sensor. Examples of an active 3D image sensor include a Light Detection and Ranging (LiDAR) sensor, a Time of Flight (ToF) sensor, and the like. A passive 3D imaging sensor typically includes two imaging sensors, for example, a left and a right imaging sensor, to capture respective images of the object. The distance of the object is calculated using the pixel disparity between the images captured by the two imaging sensors. Stereoscopic cameras are an example of passive 3D image sensors. Other types of 3D imaging techniques that provide depth information of the object can also be used without deviating from the scope of the present disclosure. In an embodiment, the 3D imaging device 101 outputs a point cloud for a scene including the object. In another embodiment, the 3D imaging device 101 outputs a depth map which is processed by a computing device (not shown) to generate the point cloud for a scene having the object. The 3D imaging device 101 may be communicatively coupled to the system 100 using a first data interface or over a first network. Examples of the first data interface include, without limitation, Gigabit Multimedia Serial Link (GMSL), Universal Serial Bus (USB), Camera Serial Interface (CSI), etc. The first network may include, without limitation, a local area network, a wide area network, a private network, a wireless network (such as Wi-Fi or Bluetooth), a cellular network (such as 2G, 3G, 4G, 5G, etc.), the Internet, or any combinations thereof. Though the present disclosure has been explained with the help of a 3D image captured using a single 3D imaging device 101, the teachings of the present disclosure can be extended to processing 3D images captured using multiple 3D imaging devices without deviating from the scope of the present disclosure.
[0030] A storage unit 102 is configured to store the standard model of the object. Alternately or in addition, the storage unit 102 is configured to store a point cloud corresponding to the 3D standard model (hereinafter referred to as a standard point cloud) of the object. In an embodiment, the standard model and the standard point cloud may be generated in the following manner. An object for which the system 100 is to be used is kept on a rotating platform rotatable by 360 degrees. A 3D imaging device (such as the 3D imaging device 101) may be used to capture a 3D image of the object and generate a corresponding point cloud. The rotating platform may be rotated as desired to capture the 3D images of the object from different angles or perspectives. Environmental conditions (e.g., ambient light, background, color, noise, and the like) may be adjusted to capture 3D images of the same object under different lighting conditions. Based upon the multiple 3D images thus captured, the standard model and the standard point cloud may be generated using, for example, the Iterative Closest Point (ICP) algorithm. The aforesaid process may be repeated for other objects for which the system 100 is to be used.
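By way of illustration only, the following Python sketch shows one possible way to fuse turn-table scans into a standard point cloud using Open3D's ICP registration. The file paths, voxel size, ICP distance threshold, and the assumption that successive scans are roughly pre-aligned (e.g., from known turntable angles) are illustrative assumptions rather than part of the disclosure.

import numpy as np
import open3d as o3d

def build_standard_point_cloud(scan_paths, voxel_size=0.002, icp_dist=0.01):
    # Load all turn-table scans; the first scan defines the reference frame.
    scans = [o3d.io.read_point_cloud(p) for p in scan_paths]
    reference = scans[0]
    merged = o3d.geometry.PointCloud(reference)
    for scan in scans[1:]:
        # Register each scan to the reference view with point-to-point ICP,
        # assuming the scans are already roughly aligned (identity initial guess).
        result = o3d.pipelines.registration.registration_icp(
            scan, reference, icp_dist, np.eye(4),
            o3d.pipelines.registration.TransformationEstimationPointToPoint())
        merged += scan.transform(result.transformation)
    # Downsample to remove duplicated surface points from overlapping views.
    return merged.voxel_down_sample(voxel_size)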
[0031] The storage unit 102 may be a database, a hard disc, a flash memory, or any other non-volatile storage device capable of storing data. The storage unit 102 may be communicatively coupled to the system 100 using a second data interface or over a second network. Examples of the second data interface include, without limitation, Internet Small Computer System Interface (iSCSI), Fibre Channel SAN (FC-SAN), Network File System (NFS), etc. The second network may include, without limitation, a local area network, a wide area network, a private network, a wireless network (such as Wi-Fi or Bluetooth), a cellular network (such as 2G, 3G, 4G, 5G, etc.), the Internet, or any combinations thereof. The first data interface and the second data interface may be the same or different data interfaces. The first network and the second network may be the same or different networks. In an embodiment, the storage unit 102 is a database storing standard models of different objects. The storage unit 102 may be a local database or a remote database (e.g., a cloud database) accessible by the system 100 over the Internet. The 3D imaging device 101 and/or the storage unit 102 may be a part of the system 100. In other words, the system 100 may include at least one of: the 3D imaging device 101 and the storage unit 102.
[0032] According to an example implementation, the system 100 may be deployed on a server capable of obtaining the 3D image and/or the point cloud of the scene from the 3D imaging device 101 over the first network and the storage unit 102 may be a local data store or a remote database (e.g., a cloud database) accessible by the server. In another example implementation, the 3D imaging device 101 and the storage unit 102 may be integrated with the system 100, e.g., in a quality inspection robot in a factory, or as a portable unit.
[0033] The system 100 may be deployed in various use cases. In an example use case, the system 100 may be deployed in a production line at a manufacturing or a logistics facility. In one example, the system 100 may be a part of a quality system used for detecting the quality of objects moving on a conveyer belt. In this case, the 3D imaging device 101 may be installed at a desired position (e.g., above the conveyer belt) to capture 3D images of moving objects and send the 3D images to the system 100. The system 100 may be deployed as a host machine interfaced with the 3D imaging device 101 or deployed on a cloud server. The system 100 may provide results of deformity detection to a display (e.g., a display of the host machine or a display of a user device of quality personnel). In another example, the system 100 may be integrated with a robot used to identify and remove defective products. The system 100 may send a control signal to an arm of the robot in response to detecting that an object is defective. The arm of the robot may remove the defective object from the conveyer belt based upon the control signal. In another use case, the system 100 may be used by insurance companies, law enforcement agencies, vehicle manufacturers, vehicle service providers, etc. for an on-the-spot assessment of dents in a vehicle, for example, due to an accident. In this case, a portable device having the 3D imaging device 101 may capture a 3D image of a dent in the vehicle and send the 3D image over the Internet to a server deploying the system 100. The system 100 may determine the score denoting the severity of the dent and send the score to the portable device. The score may be sent to the portable device for displaying to a user. It should be appreciated that the aforesaid use cases and implementations are exemplary and the system 100 may be deployed in other use cases in different implementation configurations without deviating from the scope of the present disclosure.
[0034] Fig. 2 illustrates a block diagram of the system 100, according to an embodiment. The system 100 includes an object detection module 201, a pose estimation module 202, a pose alignment module 203, a coarse estimation module 204, a fine estimation module 205, a deformity visualization module 206, a memory 207 and a processing unit 208. The aforesaid modules of the system 100 are communicatively coupled to each other.
[0035] The object detection module 201 is communicatively coupled to the 3D imaging device 101. In an embodiment, the object detection module 201 is configured to obtain a 3D image of a scene from the 3D imaging device 101. The scene includes an object of interest (hereinafter, a query object). The object detection module 201 is configured to generate a point cloud corresponding to the query object based upon the 3D image of the scene. Hereinafter, the point cloud corresponding to the query object is referred to as a query point cloud. The object detection module 201 may use any technique known in the art capable of generating the query point cloud using either a depth map or a point cloud of the scene. For example, the object detection module 201 may use a point cloud segmentation algorithm such as region growing algorithms, clustering algorithms (e.g., k-means clustering, density-based spatial clustering of applications with noise (DBSCAN), ordering points to identify the clustering structure (OPTICS), etc.), graph-based segmentation, or deep-learning based segmentation algorithms using, for example, convolutional neural networks, PointNet, graph convolutional networks, etc., to generate a query point cloud of the query object based on the 3D image of the scene. In an exemplary embodiment, the object detection module 201 generates the query point cloud from the depth map of the scene using a 3D active region generation technique disclosed in an Indian patent application having application number 202321073421, which is incorporated herein by reference.
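As a non-limiting illustration of one such segmentation approach, the following Python sketch extracts a query point cloud with Open3D by removing the dominant plane (e.g., a conveyer surface) and keeping the largest DBSCAN cluster. The eps, min_points and plane-fitting parameters, and the assumption that the query object is the largest remaining cluster, are assumptions for illustration only.

import numpy as np
import open3d as o3d

def extract_query_point_cloud(scene_pcd, eps=0.01, min_points=30):
    # Remove the dominant plane so clustering sees only objects above it.
    _, plane_inliers = scene_pcd.segment_plane(distance_threshold=0.005,
                                               ransac_n=3, num_iterations=500)
    objects = scene_pcd.select_by_index(plane_inliers, invert=True)
    # Cluster the remaining points and keep the largest cluster as the query object.
    labels = np.asarray(objects.cluster_dbscan(eps=eps, min_points=min_points))
    valid = labels[labels >= 0]
    if valid.size == 0:
        return None  # no object found in the scene
    largest = np.bincount(valid).argmax()
    return objects.select_by_index(np.where(labels == largest)[0])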
[0036] The pose estimation module 202 is communicatively coupled to the object detection module 201. In an embodiment, the pose estimation module 202 is configured to obtain the query point cloud and determine a pose of the query object based upon the query point cloud. The pose of the query object may correspond to the rotation and translation of the query object in a three-dimensional space, i.e., a 6 degree of freedom (DOF) pose. The pose estimation module 202 may output the estimated pose of the query object in the form of a matrix having translation and rotation values in each of the three dimensions. According to an embodiment, the pose estimation module 202 uses a deep learning architecture to estimate the pose of the query object and includes a first neural network 221 and a second neural network 222 (shown in Fig. 3). The first neural network 221 is configured to estimate the rotation of the query object in the three-dimensional space based upon the query point cloud and generate a matrix defining a rotational axis and a rotational angle (hereinafter, a rotation matrix) corresponding to the estimated rotation. The second neural network 222 is configured to estimate the translation of the query object in the three-dimensional space based upon the query point cloud and generate a matrix defining the translation in each of the three dimensions (hereinafter, a translation matrix) corresponding to the estimated translation. The estimates of the rotation and the translation of the query object, in the form of the respective matrices, represent the pose of the query object. The pose estimation module 202 may store the estimated pose (e.g., the rotation and the translation matrices) in the memory 207.
[0037] The first neural network 221 and the second neural network 222 may be implemented using a deep neural network-based architecture. It should be understood that the first neural network 221 and the second neural network 222 may be implemented using any other neural network architecture. Fig. 3a illustrates an exemplary implementation of the first neural network 221. The implementation of the second neural network 222 may be understood from the architecture shown in Fig. 3a and is not repeated for the sake of brevity. The first neural network 221 may include an input 231 configured to receive the query point cloud having n points. In an example implementation, n is 2048. A shared multi-layer perceptron (MLP) 232 is coupled to the input 231 and receives the query point cloud. In an embodiment, the shared MLP 232 has 5 layers of sizes 64, 64, 64, 256 and 1028. Each layer of the shared MLP 232 includes batch normalization and a rectified linear unit (ReLU), though any other suitable non-linear activation function may be used. The shared MLP 232 is configured to convert the query point cloud from a low dimensional space to a higher dimensional space, which enables a more robust estimation of the pose. A global max pooler 233 is coupled to the shared MLP 232 and is configured to aggregate all points in the higher dimensional space into a global feature vector 234. An MLP 235 is coupled to the global max pooler 233 and receives the global feature vector 234 as an input. The MLP 235 is configured to output the rotation matrix (or the translation matrix when a similar architecture is used to implement the second neural network 222) based upon the global feature vector 234. In an embodiment, the MLP 235 has 3 layers of sizes 256, 512, and 1024. It should be understood that the number of layers in the shared MLP 232 and the MLP 235, the corresponding layer sizes, and the number of points (n) in the query point cloud given herein are merely exemplary and may be selected based upon one or more factors such as error tolerance, type of sensor, computational complexity, latency, etc. Similarly, the second neural network 222 may be configured to receive the query point cloud and generate the translation matrix. In the depicted embodiment, the first neural network 221 and the second neural network 222 are implemented using an identical architecture; however, it is also possible to implement them using different architectures.
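The following PyTorch sketch illustrates one possible realization of the shared-MLP architecture described above; it is provided for illustration only. The layer sizes follow the example values given in the text, while the nine-value output head that is reshaped into a 3x3 rotation matrix is an assumed output format and is not prescribed by the disclosure.

import torch
import torch.nn as nn

class RotationNet(nn.Module):
    def __init__(self, shared_sizes=(64, 64, 64, 256, 1028),
                 head_sizes=(256, 512, 1024)):
        super().__init__()
        layers, in_ch = [], 3
        for out_ch in shared_sizes:
            # A 1x1 Conv1d applies the same MLP weights to every point (shared MLP).
            layers += [nn.Conv1d(in_ch, out_ch, 1),
                       nn.BatchNorm1d(out_ch), nn.ReLU()]
            in_ch = out_ch
        self.shared_mlp = nn.Sequential(*layers)
        head, in_f = [], shared_sizes[-1]
        for out_f in head_sizes:
            head += [nn.Linear(in_f, out_f), nn.ReLU()]
            in_f = out_f
        head.append(nn.Linear(in_f, 9))  # assumed: 9 values reshaped into a 3x3 matrix
        self.head = nn.Sequential(*head)

    def forward(self, points):                         # points: (batch, n, 3)
        x = self.shared_mlp(points.transpose(1, 2))    # per-point features: (batch, C, n)
        global_feat = torch.max(x, dim=2).values       # global max pooling over points
        return self.head(global_feat).view(-1, 3, 3)

# Example usage: RotationNet()(torch.rand(4, 2048, 3)) returns a (4, 3, 3) tensor.
A network for the translation estimate could use the same backbone with a three-value output head.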
[0038] The first neural network 221 and the second neural network 222 may be trained during a training phase of the system 100. A dataset including a plurality of point clouds of different objects in different poses may be used to train the first neural network 221 and the second neural network 222. In an embodiment, a 3D model of each of the different objects may be created using a 3D modeling tool. The 3D model may be in the form of a point cloud. The 3D model may then be updated to incorporate variations corresponding to different poses and other environmental conditions (e.g., ambient lighting, background, color, etc.) for the object using a domain randomization technique. The resulting point clouds may be labeled with rotation and translation values using a synthetic data simulator. The labeled point clouds may then be used to train the first neural network 221 and the second neural network 222. The aforesaid approach may be followed to overcome the lack of an adequate 3D training dataset. Once the first neural network 221 and the second neural network 222 converge, the corresponding models may be registered and stored in the memory 207.
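As a minimal, non-limiting sketch in the spirit of the domain-randomization approach described above, the following Python function generates one labelled training sample by applying a random rotation and translation (and assumed sensor noise) to a model point cloud. The noise level and translation range are illustrative assumptions and not values taken from the disclosure.

import numpy as np
from scipy.spatial.transform import Rotation

def make_training_sample(model_points, noise_std=0.001, trans_range=0.1):
    # Random pose labels for one synthetic sample.
    rotation = Rotation.random().as_matrix()                             # 3x3 rotation label
    translation = np.random.uniform(-trans_range, trans_range, size=3)   # translation label
    # Apply the pose to the canonical model and add simulated sensor noise.
    posed = model_points @ rotation.T + translation
    posed += np.random.normal(scale=noise_std, size=posed.shape)
    return posed.astype(np.float32), rotation, translation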
[0039] The pose alignment module 203 is communicatively coupled to the pose estimation module 202. In an embodiment, the pose alignment module 203 is configured to obtain the estimated pose (e.g., the rotation and translation matrices) and align (or overlap) the query point cloud with the standard point cloud based upon the estimated pose. That is, the pose alignment module 203 is configured to generate an aligned query point cloud based upon the estimated pose of the query object and the standard point cloud. The aligned query point cloud represents the query point cloud having a pose aligned with the pose of the standard point cloud. In an embodiment, the pose alignment module 203 may obtain the rotation and translation matrices of the query point cloud from the pose estimation module 202 or from the memory 207. Further, the pose alignment module 203 obtains the standard point cloud from the memory 207 or from the storage unit 102. The standard point cloud may be pre-stored in the memory 207 during the set-up of the system 100. In an example implementation, the storage unit 102 may be the same as the memory 207.
[0040] According to an embodiment, the pose alignment module 203 is configured to compute a transformation matrix based upon the estimated pose of the query object (i.e., the rotation and translation matrices) and the standard point cloud using any technique known in the art, for example, an affine projection, and to apply the transformation matrix to the query point cloud to generate the aligned query point cloud. Optionally, the pose alignment module 203 may refine the transformation matrix using an iterative closest point (ICP) algorithm. Hereinafter, the standard point cloud and the aligned query point cloud may be collectively referred to as a pair of aligned point clouds. The pose alignment module 203 may store the aligned query point cloud in the memory 207.
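The following Python sketch illustrates one way the alignment step could be realized. It assumes the estimated pose (R, t) maps the standard model into the query frame, so applying the inverse transform brings the query point cloud into the standard frame; this pose convention and the ICP distance threshold are assumptions rather than requirements of the disclosure.

import numpy as np
import open3d as o3d

def align_query_to_standard(query_pcd, standard_pcd, R, t, icp_dist=0.005):
    # Build a 4x4 homogeneous transform that undoes the estimated pose.
    T = np.eye(4)
    T[:3, :3] = R.T
    T[:3, 3] = -R.T @ t
    aligned = o3d.geometry.PointCloud(query_pcd).transform(T)
    # Optional refinement with point-to-point ICP, seeded with the identity.
    result = o3d.pipelines.registration.registration_icp(
        aligned, standard_pcd, icp_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return aligned.transform(result.transformation)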
[0041] The coarse estimation module 204 is communicatively coupled to the pose alignment module 203. The coarse estimation module 204 is configured to obtain the standard point cloud and the aligned query point cloud from the pose alignment module 203 and/or from the memory 207. According to an embodiment, the coarse estimation module 204 is configured to generate, for each of the standard point cloud and the aligned query point cloud, a grid including a plurality of voxels using any voxelization technique known in the art. Each voxel of the aligned point cloud includes a plurality of points in the aligned point cloud and each voxel of the standard point cloud includes a plurality of points in the standard point cloud. In an embodiment, instead of generating the grid including the voxels for the standard point cloud during run-time, the coarse estimation module 204 may be configured to obtain the voxels of the standard point cloud from the memory 207 and/or from the storage unit 102. In this case, a grid including the voxels of the standard point cloud may be generated during the set-up of the system 100. This reduces the computational requirements during the run-time of the system 100. In an embodiment, the voxels of the standard point cloud may be stored in a search-optimized data structure format. For example, the voxels of the standard point cloud may be arranged in a search-optimized tree, such as an octree, a k-d tree and the like. This reduces computational requirements and decreases the time required for detecting a deformity. The voxels of the aligned point cloud and the standard point cloud correspond to each other, i.e., a pair of corresponding voxels of the aligned point cloud and the standard point cloud have the same position and volume in the 3D space. The size and number of voxels in the aligned query point cloud and the standard point cloud are the same. The coarse estimation module 204 is further configured to determine a set of likely deformed candidate voxels of the aligned point cloud based upon the voxels of the aligned point cloud and corresponding voxels of the standard point cloud. Herein, the set of likely deformed candidate voxels (or the set of candidate voxels) represents voxels in the query point cloud that are likely to have a deformity. The set of candidate voxels may include one or more voxels of the plurality of voxels of the aligned point cloud. In an embodiment, the coarse estimation module 204 is configured to calculate a statistical average of each voxel of the aligned query point cloud and the standard point cloud. The statistical average may include a mean, a median, a mode, a weighted average, or any other statistical descriptor. In an example implementation, the coarse estimation module 204 calculates a centroid of each voxel of the aligned query point cloud and the standard point cloud. Further, for each voxel of the aligned point cloud, the coarse estimation module 204 is configured to calculate an absolute difference between the statistical average of the voxel of the aligned query point cloud and the statistical average of the corresponding voxel of the standard point cloud. The coarse estimation module 204 may be configured to compare the absolute difference for each voxel of the aligned point cloud with a first threshold. The coarse estimation module 204 may be configured to determine the voxel in the aligned query point cloud as a candidate voxel based upon the comparison.
For example, the coarse estimation module 204 may determine a voxel of the query point cloud as a candidate voxel when the absolute difference between the statistical average of the voxel of the aligned query point cloud and the statistical average of the corresponding voxel of the standard point cloud is greater than or equal to the first threshold. Thus, the coarse estimation module 204 determines a set of candidate voxels in the query point cloud based upon the aligned query point cloud and the standard point cloud. In other words, the set of candidate voxels represents voxels in the query point cloud that are likely to have a deformity. The first threshold may be set depending upon requirements. For example, the first threshold may be chosen based upon the dimensions of the product(s) to be scanned for detecting deformities, acceptable manufacturing tolerances in manufacturing such product(s), and so on. Further, both the accuracy with which the coarse estimation module 204 may determine the set of candidate voxels and the computational requirements depend upon the number of voxels into which the aligned query point cloud and the standard point cloud are divided. The higher the number of voxels, the higher the accuracy of the coarse estimation module 204 and the higher the computational requirements, and vice versa. Thus, there is a trade-off between accuracy and computational requirements, and the number of voxels may be chosen as per the accuracy requirement and the available computational power.
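As a non-limiting illustration of the coarse stage, the following NumPy sketch voxelizes both aligned clouds on a common grid, compares per-voxel centroids, and returns the points of the aligned query point cloud that fall inside candidate voxels. Taking the Euclidean distance between centroids as the "absolute difference", ignoring voxels without a counterpart in the standard point cloud, and the voxel size and first-threshold values are all illustrative assumptions.

import numpy as np

def coarse_candidate_points(aligned_pts, standard_pts, voxel_size=0.01,
                            first_threshold=0.003):
    # Use a common grid origin so voxel indices correspond across both clouds.
    origin = np.minimum(aligned_pts.min(axis=0), standard_pts.min(axis=0))

    def voxel_centroids(points):
        idx = np.floor((points - origin) / voxel_size).astype(int)
        keys, inverse = np.unique(idx, axis=0, return_inverse=True)
        inverse = inverse.ravel()
        sums = np.zeros((len(keys), 3))
        np.add.at(sums, inverse, points)              # accumulate points per voxel
        counts = np.bincount(inverse).reshape(-1, 1)
        return dict(zip(map(tuple, keys), sums / counts)), idx

    query_centroids, query_idx = voxel_centroids(aligned_pts)
    standard_centroids, _ = voxel_centroids(standard_pts)
    # A voxel is a candidate when its centroid shift meets the first threshold;
    # voxels without a counterpart in the standard cloud are ignored here.
    candidates = {key for key, c in query_centroids.items()
                  if key in standard_centroids
                  and np.linalg.norm(c - standard_centroids[key]) >= first_threshold}
    in_candidate = np.array([tuple(v) in candidates for v in query_idx])
    return aligned_pts[in_candidate]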
[0042] The fine estimation module 205 is configured to detect a deformity in the query object by processing the set of candidate voxels and the voxels of the standard point cloud. In an embodiment, the fine estimation module 205 is configured to determine a plurality of points in the set of candidate voxels of the query point cloud having a deformity (hereinafter referred to as a plurality of deformed points). The plurality of deformed points represents locations where the query object has a deformity. In an embodiment, for each point (hereinafter referred to as a query point) in the set of candidate voxels of the aligned query point cloud, the fine estimation module 205 is configured to determine a point in the standard point cloud (hereinafter referred to as a nearest point) that is nearest to the query point. The fine estimation module 205 may be configured to calculate a distance between the query point and the nearest point, and determine the query point as a deformed point or an undeformed point based upon the distance. In an embodiment, the fine estimation module 205 compares the distance with a second threshold and determines the query point as a deformed point or as an undeformed point based upon the comparison. For example, when the distance between the query point and the nearest point is greater than or equal to the second threshold, the query point is determined as a deformed point, and when the distance between the query point and the nearest point is less than the second threshold, the query point is determined as an undeformed point. In an embodiment, distances between various points as described herein may correspond to Euclidean distances. It should be understood that any other distance measure may also be used without deviating from the scope of the present disclosure. The second threshold may be chosen based upon requirements. For example, the second threshold may be chosen depending upon various use cases, the size of the products to be scanned for detection of a deformity, manufacturing tolerances and the like. The fine estimation module 205 may identify the nearest point in the standard point cloud using any technique known in the art. In an embodiment, the fine estimation module 205 uses a space partitioning technique to identify the nearest point within the standard point cloud. Examples of the space partitioning technique include, without limitation, a kd-tree, an octree, an R-tree, a ball tree and the like. Each of the voxels of the standard point cloud may be structured in the form of a tree-like graph and stored in the memory 207. This significantly reduces the time required to identify the nearest point. In another embodiment, the fine estimation module 205 may use a linear search approach, approximation methods (such as greedy search in proximity neighborhood graphs, or locality sensitive hashing), etc. to identify the nearest point. In one example implementation, the fine estimation module 205 uses the kd-tree technique to identify the nearest point. In another example implementation, the fine estimation module 205 uses the octree technique to identify the nearest point. According to another embodiment, the fine estimation module 205 may identify the nearest point in the standard point cloud as follows.
According to an embodiment, the fine estimation module 205 is configured to define a spherical region having a predefined radius ‘r’ around the query point, with the query point being the center of the spherical region. The fine estimation module 205 is configured to identify a set of neighboring points in the standard point cloud within the predefined radius ‘r’ from the query point using any technique known in the art. The fine estimation module 205 may be configured to determine the nearest point based upon a distance between the query point and each of the set of neighboring points. In response to finding at least one neighboring point within the predefined radius ‘r’, the fine estimation module 205 determines the query point to be an undeformed point. In response to finding no neighboring points within the predefined radius ‘r’, the fine estimation module 205 may be configured to update the predefined radius ‘r’ by increasing the predefined radius ‘r’ by a pre-defined distance. The fine estimation module 205 identifies the set of neighboring points in the standard point cloud within the updated predefined radius ‘r’ and determines the nearest point in a similar manner as explained earlier. This process may be repeated until at least one neighboring point is found within the predefined radius or for a pre-set number of iterations. In an embodiment, the initial value of the predefined radius ‘r’ is equal to the second threshold. The pre-defined distance and the pre-set number of iterations are chosen based upon the application scenario of the system 100, the available computational power, the desired time limit, etc. The fine estimation module 205 may optionally be configured to calculate a deformity score based upon the plurality of deformed points. The deformity score indicates the severity of the deformity for the plurality of deformed points. The measure of the deformity score may be chosen based upon requirements or applications where the system 100 may be deployed. In an embodiment, the deformity score is equal to a sum of the distances between each of the plurality of deformed points and the corresponding nearest point in the standard point cloud. Such a measure may be used in applications where the depth of the deformity is more important, for example, product manufacturing, product inspection, surface inspection, detecting bone deformity in medical inspections, parts assembly in the manufacturing industry, etc. In another embodiment, the deformity score is equal to an area encompassed by the plurality of deformed points. Such a measure may be used where the area of the deformity is more important, for example, damage to a vehicle during insurance assessment, inspection of aerospace materials, etc. Further, the fine estimation module 205 may be configured to classify the query object as deformed or non-deformed based upon the deformity score.
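The following Python sketch illustrates the fine stage under the kd-tree embodiment described above, using SciPy's cKDTree to find the nearest standard point for every candidate query point and computing the sum-of-distances variant of the deformity score. The second-threshold value is an assumption for illustration only.

import numpy as np
from scipy.spatial import cKDTree

def fine_estimation(candidate_points, standard_pts, second_threshold=0.002):
    tree = cKDTree(standard_pts)
    # Distance from each candidate query point to its nearest standard point.
    distances, _ = tree.query(candidate_points, k=1)
    deformed_mask = distances >= second_threshold
    deformed_points = candidate_points[deformed_mask]
    deformed_distances = distances[deformed_mask]
    # Sum-of-distances variant of the deformity score described above.
    deformity_score = float(deformed_distances.sum())
    return deformed_points, deformed_distances, deformity_score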
[0043] The deformity visualization module 206 is configured to generate an image of the query object based upon the plurality of deformed points and the deformity score to visualize the query object along with the locations where the deformities are present and the severity of the deformity. In an embodiment, the image includes a heatmap of the query object, wherein a color for each of the plurality of deformed points represents the severity of the deformity. The deformity visualization module 206 may be configured to display the image of the query object having the heatmap of the deformity on a display (not shown). The display may be a part of the system 100 or may be a display of a user device (not shown). The user device may be communicatively coupled to the deformity visualization module 206.
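As a non-limiting visualization sketch, the following Python function paints the deformed points onto the aligned query point cloud with colors proportional to their nearest-neighbour distances using a matplotlib colormap, and displays the result with Open3D. The base color, the choice of colormap, and the KD-tree lookup used to locate each deformed point in the query cloud are illustrative assumptions.

import numpy as np
import open3d as o3d
import matplotlib as mpl

def show_deformity_heatmap(aligned_pcd, deformed_points, deformed_distances):
    if len(deformed_points) == 0:
        print("No deformity detected")
        return
    # Start from a neutral base color for all points of the query cloud.
    colors = np.tile([0.7, 0.7, 0.7], (len(aligned_pcd.points), 1))
    severity = deformed_distances / deformed_distances.max()
    heat = mpl.colormaps["jet"](severity)[:, :3]
    # Paint each deformed point with a color proportional to its severity.
    tree = o3d.geometry.KDTreeFlann(aligned_pcd)
    for point, color in zip(deformed_points, heat):
        _, idx, _ = tree.search_knn_vector_3d(point, 1)
        colors[idx[0]] = color
    aligned_pcd.colors = o3d.utility.Vector3dVector(colors)
    o3d.visualization.draw_geometries([aligned_pcd])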
[0044] In an embodiment, the object detection module 201, the pose estimation module 202, the pose alignment module 203, the coarse estimation module 204, the fine estimation module 205, and the deformity visualization module 206 are executed by the processing unit 208. The processing unit 208 may be a microprocessor, a personal computer, a graphical processing unit (GPU), an application-specific integrated circuit (ASIC), a server, or any other computing device capable of executing instructions.
[0045] In an embodiment, the object detection module 201, the pose estimation module 202, the pose alignment module 203, the coarse estimation module 204, the fine estimation module 205, and the deformity visualization module 206 are stored in the memory 207 accessible by the processing unit 208. The memory 207 may be a flash drive, a hard disc, a read-only memory, or any other computer readable storage medium.
[0046] Fig. 4 illustrates a flowchart of a method 400 for detecting a deformity in an object, according to an embodiment of the present disclosure.
[0047] At step 401, a 3D image of a scene including a query object is obtained. In an embodiment, the 3D image of the scene is obtained by the object detection module 201 from the 3D imaging device 101. The 3D image of the scene may include a depth map and/or a point cloud of the scene.
[0048] At step 402, a query point cloud is generated based upon the depth map or the point cloud of the scene, for example, by the object detection module 201. The query point cloud represents a point cloud corresponding to the query object. The query point cloud may be generated in a similar manner as described earlier. Fig. 5a illustrates an exemplary query point cloud showing a bottle having a dent.
[0049] At step 403, a pose of the query object is estimated based upon the query point cloud, for example, by the pose estimation module 202. The pose of the query object may correspond to a 6 DOF pose and may be represented in the form of a matrix having translation and rotation values in each of the three dimensions. For example, the estimated pose of the query object may include a translation matrix indicating the estimated translation of the query object in the three-dimensional space and a rotation matrix indicating the estimated rotation of the query object in the three-dimensional space. In an embodiment, the pose of the query object is estimated as follows. The rotation matrix is generated by the pose estimation module 202 based upon the query point cloud using the first neural network 221. Further, the translation matrix is generated by the pose estimation module 202 based upon the query point cloud using the second neural network 222.
[0050] At step 404, an aligned query point cloud is generated by the pose alignment module 203 based upon the estimated pose of the query object and the standard point cloud so that the query point cloud and the standard point cloud are aligned together. The standard point cloud corresponds to a point cloud of a 3D model of the query object. Fig. 5b represents an exemplary standard point cloud showing a 3D model of a bottle. In an embodiment, a transformation matrix is computed based upon the estimated pose of the query object, for example, using the translation matrix and the rotation matrix. The transformation matrix is then applied to the query point cloud to generate the aligned query point cloud. The pose of the aligned point cloud may correspond to the pose of the standard point cloud. In an embodiment, the transformation matrix may, optionally, be further refined using the ICP technique. Fig. 5c represents an exemplary aligned query point cloud corresponding to the query point cloud illustrated in Fig. 5a and aligned with the standard point cloud illustrated in Fig. 5b.
[0051] At step 405, a set of likely deformed candidate voxels in the aligned query point cloud is identified based upon the aligned query point cloud and the standard point cloud, for example, by the coarse estimation module 204. In an embodiment, the set of candidate voxels is determined as follows. The aligned query point cloud is divided by the coarse estimation module 204 to generate a grid including a plurality of voxels using a voxelization technique. Similarly, the standard point cloud may be divided to generate a grid including a plurality of voxels. In an embodiment, the coarse estimation module 204 may generate the grid having the plurality of voxels of the standard point cloud using a voxelization technique. In another embodiment, the grid having the plurality of voxels of the standard point cloud may be generated at the time of set-up of the system 100 and stored in the storage unit 102 and/or in the memory 207, and obtained by the coarse estimation module 204 at this step. The set of candidate voxels is determined based upon the plurality of voxels of the aligned query point cloud and the standard point cloud. In an embodiment, a statistical average (e.g., a mean, a centroid, a weighted average, etc.) is calculated for each voxel of the aligned query point cloud and the standard point cloud. An absolute difference between the statistical average of each voxel of the aligned query point cloud and the statistical average of the corresponding voxel of the standard point cloud is calculated and compared with a first threshold. A voxel in the aligned query point cloud is determined as a candidate voxel of the set of candidate voxels based upon the comparison, for example, when the absolute difference between the statistical average of the voxel of the aligned query point cloud and the statistical average of the corresponding voxel of the standard point cloud is greater than or equal to the first threshold. Fig. 5d illustrates an exemplary set of candidate voxels and the associated point cloud for the query point cloud of the bottle illustrated in Fig. 5a.
[0052] At step 406, a plurality of deformed points in the set of candidate voxels is identified, for example, by the fine estimation module 205, based upon the set of candidate voxels and the voxels of the standard point cloud. According to an embodiment, the plurality of deformed points is determined as follows. For each point (referred to as a query point) in the set of candidate voxels, a nearest point in the standard point cloud is identified and the query point is determined as a deformed point or an undeformed point based upon the distance between the query point and the nearest point. For example, the query point is determined as a deformed point when the distance between the query point and the corresponding nearest point is greater than or equal to a second threshold, and as an undeformed point when the distance between the query point and the nearest point is less than the second threshold. In an embodiment, the nearest point in the standard point cloud is identified in a similar manner as explained earlier. Fig. 5e illustrates an exemplary point cloud including the plurality of deformed points.
[0053] Further, a deformity score may be calculated for the plurality of deformed points. The deformity score is indicative of the severity of the deformity. Various embodiments for calculating the deformity score are explained earlier. The query object may be classified as deformed or non-deformed based upon the deformity score for the plurality of deformed points.
[0054] Optionally, at step 407, an image of the query object is generated, for example, by the deformity visualization module 206. The image helps to visualize the deformity in the query object. The image shows the deformity in the query object at the locations identified by the plurality of deformed points and the degree of deformity identified by the corresponding deformity scores. In an embodiment, the image includes a heatmap for the plurality of deformed points, where a color of the heatmap represents the severity of the deformity. Fig. 5f illustrates an exemplary image of the bottle showing non-deformed and deformed portions of the bottle.
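For illustration only, the following Python sketch ties steps 401 to 407 together. The helper names (extract_query_point_cloud, align_query_to_standard, coarse_candidate_points, fine_estimation, show_deformity_heatmap) refer to the hypothetical sketches given earlier in this description, and the score limit used to classify the query object is an assumed value; sampling the query point cloud to a fixed number of points before pose estimation is omitted for brevity.

import numpy as np
import torch

def detect_deformity(scene_pcd, standard_pcd, rotation_net, translation_net,
                     score_limit=0.01):
    query_pcd = extract_query_point_cloud(scene_pcd)                    # step 402
    pts = torch.as_tensor(np.asarray(query_pcd.points),
                          dtype=torch.float32).unsqueeze(0)
    with torch.no_grad():
        R = rotation_net(pts)[0].numpy()                                # step 403
        t = translation_net(pts)[0].numpy()
    aligned = align_query_to_standard(query_pcd, standard_pcd, R, t)    # step 404
    aligned_pts = np.asarray(aligned.points)
    standard_pts = np.asarray(standard_pcd.points)
    candidate_pts = coarse_candidate_points(aligned_pts, standard_pts)  # step 405
    deformed_pts, deformed_dists, score = fine_estimation(
        candidate_pts, standard_pts)                                    # step 406
    show_deformity_heatmap(aligned, deformed_pts, deformed_dists)       # step 407
    return score >= score_limit, score, deformed_pts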
[0055] The proposed system presents several advantages over conventional deformity detection systems. Since the proposed system uses a 3D image of an object for detecting a deformity, the proposed system is more robust in the presence of variations in environmental conditions (e.g., improper lighting, illumination intensity, etc.) and/or characteristics of the surface of the object (e.g., the reflective property of a metal object, a transparent object, etc.) compared to the conventional systems, which use 2D images captured using 2D RGB cameras. Further, the use of 3D images enables the proposed system to determine the severity of a deformity, which is not possible with conventional systems. Further, the proposed system employs a two-stage approach, coarse estimation followed by fine estimation, to identify the deformity. Such an approach is computationally more efficient and reduces the time taken to identify the deformity. Consequently, the proposed system can also be implemented on devices having less computational power, e.g., a portable or a handheld device. Further, the proposed system is more scalable and can be easily customized to detect a deformity in any desired object by providing a 3D model and a corresponding point cloud of the object.
[0056] The scope of the invention is only limited by the appended patent claims. More generally, those skilled in the art will readily appreciate that all parameters, dimensions, materials, and configurations described herein are meant to be exemplary and that the actual parameters, dimensions, materials, and/or configurations will depend upon the specific application or applications for which the teachings of the present invention are used.
Claims:WE CLAIM
1. A system (100) for detecting a deformity in an object, the system (100) comprising:
a. an object detection module (201) configured to generate a query point cloud corresponding to a query object based upon a three-dimensional (3D) image of a scene comprising the query object, captured by a 3D imaging device (101);
b. a pose estimation module (202) configured to estimate a pose of the query object based upon the query point cloud, wherein the pose of the query object indicates a rotation and a translation of the query object in a 3D space;
c. a pose alignment module (203) configured to generate an aligned query point cloud based upon the estimated pose of the query object and a standard point cloud, wherein the standard point cloud corresponds to a point cloud of a 3D model of the query object, wherein the aligned query point cloud represents the query point cloud aligned with the standard point cloud;
d. a coarse estimation module (204) configured to:
i. generate a grid comprising a plurality of voxels of the aligned query point cloud, each voxel of the plurality of voxels comprising a plurality of points of the aligned query point cloud; and
ii. determine a set of likely deformed candidate voxels of the aligned query point cloud based upon the plurality of voxels of the aligned query point cloud and a corresponding plurality of voxels of the standard point cloud; and
e. a fine estimation module (205) configured to, for each query point in the set of candidate voxels of the aligned query point cloud:
i. identify a nearest point in the standard point cloud to the query point; and
ii. determine the query point as a deformed point based upon a distance between the query point and the nearest point in the standard point cloud.
2. The system (100) as claimed in claim 1, wherein the pose estimation module (202) comprises:
a. a first neural network (221) configured to receive the query point cloud and generate a rotation matrix indicating the rotation of the query object; and
b. a second neural network (222) configured to receive the query point cloud and generate a translation matrix indicating the estimated translation of the query object.
3. The system (100) as claimed in claim 1, wherein the pose alignment module (203) is configured to:
a. compute a transformation matrix based upon the estimated pose of the query object and the standard point cloud; and
b. apply the transformation matrix to the query point cloud to generate the aligned query point cloud.
4. The system (100) as claimed in claim 3, wherein the pose alignment module (203) is configured to refine the transformation matrix using an iterative closest point (ICP) technique.
5. The system (100) as claimed in claim 1, wherein the coarse estimation module (204) is configured to:
a. calculate a statistical average for each voxel of the plurality of voxels of the aligned query point cloud;
b. calculate a statistical average for each voxel of the plurality of voxels of the standard point cloud; and
c. for each voxel of the aligned query point cloud,
i. calculate an absolute difference between the statistical average of the voxel and the statistical average of a corresponding voxel of the standard point cloud;
ii. compare the absolute difference with a first threshold; and
iii. determine the voxel as a candidate voxel of the set of candidate voxels based upon the comparison.
6. The system (100) as claimed in claim 1, wherein the coarse estimation module (204) is configured to generate a grid comprising the plurality of voxels of the standard point cloud.
7. The system (100) as claimed in claim 1, wherein the fine estimation module (205) is configured to determine the query point as the deformed point when the distance between the query point and the nearest point in the standard point cloud is greater than or equal to a second threshold.
8. The system (100) as claimed in claim 1, wherein the fine estimation module (205) is configured to, for each query point in the set of candidate voxels:
a. identify a set of neighboring points in the standard point cloud within a predefined radius of the query point; and
b. determine the nearest point based upon a distance between the query point and each of the set of neighboring points.
9. The system (100) as claimed in claim 8, wherein the fine estimation module (205) is configured to:
a. in response to identifying no neighboring points within the predefined radius, update the predefined radius by increasing the predefined radius by a pre-defined distance; and
b. identify the set of neighboring points in the standard point cloud within the updated predefined radius.
10. The system (100) as claimed in claim 1, wherein the fine estimation module (205) is configured to identify the nearest point in the standard point cloud using a space partitioning technique.
11. The system (100) as claimed in claim 1, wherein the fine estimation module (205) is configured to calculate a deformity score indicating a severity of deformity for a plurality of deformed points.
12. The system (100) as claimed in claim 11, wherein the system (100) comprises a deformity visualization module (206) configured to generate an image comprising a heatmap of the query object, based upon the plurality of deformed points and the deformity score, wherein a color for each of the plurality of deformed points represents the severity of deformity.
13. A method (400) for detecting a deformity in an object, the method (400) comprising:
a. generating, by an object detection module (201), a query point cloud corresponding to a query object based upon a three-dimensional (3D) image of a scene comprising the query object, captured by a 3D imaging device (101);
b. estimating, by a pose estimation module (202), a pose of the query object based upon the query point cloud, wherein the pose of the query object indicates a rotation and a translation of the query object in a 3D space;
c. generating, by a pose alignment module (203), an aligned query point cloud based upon the estimated pose of the query object and a standard point cloud, wherein the standard point cloud corresponds to a point cloud of a 3D model of the query object, wherein the aligned query point cloud represents the query point cloud aligned with the standard point cloud;
d. generating, by a coarse estimation module (204), a grid comprising a plurality of voxels of the aligned query point cloud, each voxel of the plurality of voxels comprising a plurality of points of the aligned query point cloud;
e. determining, by the coarse estimation module (204), a set of likely deformed candidate voxels of the aligned query point cloud based upon the plurality of voxels of the aligned query point cloud and a corresponding plurality of voxels of the standard point cloud;
f. for each query point in the set of candidate voxels of the aligned query point cloud:
i. identifying, by a fine estimation module (205), a nearest point in the standard point cloud to the query point; and
ii. determining, by the fine estimation module (205), the query point as a deformed point based upon a distance between the query point and the nearest point.
14. The method (400) as claimed in claim 13, wherein the step of estimating the pose of the query object comprises:
a. generating, by the pose estimation module (202), a rotation matrix indicating the rotation of the query object using a first neural network (221); and
b. generating, by the pose estimation module (202), a translation matrix indicating the estimated translation of the query object using a second neural network (222).
15. The method (400) as claimed in claim 13, wherein the step of generating the aligned query point cloud comprises:
a. computing, by the pose alignment module (203), a transformation matrix based upon the estimated pose of the query object and the standard point cloud; and
b. applying, by the pose alignment module (203), the transformation matrix to the query point cloud to generate the aligned query point cloud.
16. The method (400) as claimed in claim 15, wherein the step of generating the aligned query point cloud comprises refining, by the pose alignment module (203), the transformation matrix using an iterative closest point (ICP) technique.
17. The method (400) as claimed in claim 13, wherein the step of determining the set of candidate voxels comprises:
a. calculating, by the coarse estimation module (204), a statistical average for each voxel of the plurality of voxels of the aligned query point cloud;
b. calculating, by the coarse estimation module (204), a statistical average for each voxel of the plurality of voxels of the standard point cloud; and
c. for each voxel of the aligned query point cloud,
i. calculating, by the coarse estimation module (204), an absolute difference between the statistical average of the voxel and the statistical average of a corresponding voxel of the standard point cloud;
ii. comparing, by the coarse estimation module (204), the absolute difference with a first threshold; and
iii. determining, by the coarse estimation module (204), the voxel as a candidate voxel of the set of candidate voxels based upon the comparison.
18. The method (400) as claimed in claim 13, wherein the method comprises generating, by the coarse estimation module (204), a grid comprising the plurality of voxels of the standard point cloud.
19. The method (400) as claimed in claim 13, wherein the query point is determined as the deformed point when the distance between the query point and the nearest point in the standard point cloud is greater than or equal to a second threshold.
20. The method (400) as claimed in claim 13, wherein the step of identifying the nearest point comprises, for each query point in the set of candidate voxels:
a. identifying, by the fine estimation module (205), a set of neighboring points in the standard point cloud within a predefined radius of the query point; and
b. identifying, by the fine estimation module (205), the nearest point based upon a distance between the query point and each of the set of neighboring points.
21. The method (400) as claimed in claim 20, wherein the step of identifying the set of neighboring points comprises:
a. in response to identifying no neighboring points within the predefined radius, updating, by the fine estimation module (205), the predefined radius by increasing the predefined radius by a pre-defined distance; and
b. identifying, by the fine estimation module (205), the set of neighboring points within the updated predefined radius.
22. The method (400) as claimed in claim 13, wherein the nearest point is identified using a space partitioning technique.
23. The method (400) as claimed in claim 13, wherein the method (400) comprises calculating, by the fine estimation module (205), a deformity score indicating a severity of deformity for a plurality of deformed points.
24. The method (400) as claimed in claim 23, wherein the method (400) comprises generating, by a deformity visualization module (206), an image comprising a heatmap of the query object, based upon the plurality of deformed points and the deformity score, wherein a color for each of the plurality of deformed points represents the severity of deformity.