Specification
DESC:FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See Section 10 and Rule 13)
Title of invention:
METHOD AND SYSTEM FOR POINT CLOUD BASED GRASP PLANNING FRAMEWORK
Applicant:
Tata Consultancy Services Limited
A company Incorporated in India under the Companies Act, 1956
Having address:
Nirmal Building, 9th Floor,
Nariman Point, Mumbai 400021,
Maharashtra, India
The following specification particularly describes the invention and the manner in which it is to be performed.
CROSS-REFERENCE TO RELATED APPLICATIONS AND PRIORITY
The present application claims priority from Indian provisional application no. 202221046849, filed on August 17, 2022. The entire contents of the aforementioned application are incorporated herein by reference.
TECHNICAL FIELD
The disclosure herein generally relates to the field of image processing and, more particularly, to a method and system for point cloud based grasp planning framework.
BACKGROUND
Universal Picking (UP) is defined as the ability of robots to pick diverse and novel objects reliably. It is a much desired skill to facilitate automation in manufacturing units, warehouses, retail stores, home services, etc., for bin picking. Other desirable attributes of UP are real-time execution, ease of deployment, and reconfiguration requiring minimal technical expertise. In the context of autonomous robotic manipulation, the problem becomes highly challenging if the target objects are lying together randomly in a pile. Additionally, objects in the real world have unlimited combinations of color, texture, shape, size, materials, etc. Sensor noise, errors in calibration, and the inherent uncertainty in robot actuation further complicate the problem.
Bin picking (picking objects from a bin) solutions can be categorized based on the level of clutter used for experiments, namely no-clutter (isolated objects), semi-clutter (a few well separated objects lying together) and dense-clutter (objects in heavy clutter as a random pile). Designing bin-picking solutions for unseen objects in the dense-clutter category is a challenging task. For example, it is quite difficult to properly segment unseen objects or estimate their pose in clutter due to occlusions and the unlimited variation and diversity amongst real-world objects.
Conventional methods for bin picking in a dense clutter environment initially sample a number of candidate grasp poses within the workspace and then evaluate them using some grasp quality index to select the best among them for the grasp action. Some other approaches learn hand-eye coordination by training a large Convolutional Neural Network (CNN) that predicts a grasp success probability given the task space motion of the gripper. However, training such a CNN requires collecting thousands of data samples, which is a time consuming process. This limitation was mitigated by learning a CNN for the grasp quality index entirely over a simulated dataset generated using the depth scans of adversarial training objects. However, the grasp quality of CNN models trained in this way is found to be sensitive to certain parameters used during dataset generation, such as the robot gripper, the depth camera, and the distance between the camera and the workspace. Thus, any change in the above parameters would require repeating the entire training procedure to obtain the same level of performance. Some deep learning-based methods have also been used conventionally. However, in general, all the methods discussed above are domain-dependent, i.e., these methods often fail to perform equally well on a target domain if it is somewhat different from the source domain they are trained upon.
SUMMARY
Embodiments of the present disclosure present technological improvements as solutions to one or more of the above-mentioned technical problems recognized by the inventors in conventional systems. For example, in one embodiment, a method for a point cloud based grasp planning framework is provided. The method includes receiving, by one or more hardware processors, an input image pertaining to a surface in a robotic bin picking environment, wherein the surface comprises a plurality of heterogeneous unseen objects. Further, the method includes generating, by the one or more hardware processors, a plurality of sampled grasp poses in a random configuration based on the input image, using a baseline grasp planning technique, wherein each of the plurality of sampled grasp poses is represented as a rectangle. Furthermore, the method includes computing, by the one or more hardware processors, a depth difference value for each of a plurality of pixels corresponding to each of the plurality of sampled grasp poses, based on a comparison between each of the plurality of pixels corresponding to each of the plurality of sampled grasp poses and a corresponding center pixel. Furthermore, the method includes generating, by the one or more hardware processors, a binary map for each of the plurality of sampled grasp poses based on the corresponding depth difference value by assigning a binary value one to a plurality of pixels with a depth difference value greater than a predefined depth threshold and zero otherwise. Furthermore, the method includes obtaining, by the one or more hardware processors, a plurality of subregions corresponding to each of the plurality of sampled grasp poses based on the corresponding binary map, wherein the plurality of subregions comprises a contact region, a free region and a collision region, by (i) identifying a left starting point and a left ending point of a left free region of each of the plurality of sampled grasp poses based on the corresponding binary map, wherein the left free region is a region with the binary value one, (ii) identifying a right starting point and a right ending point of a right free region of each of the plurality of sampled grasp poses based on the corresponding binary map, and (iii) computing the plurality of subregions based on the left starting point, the left ending point, the right starting point and the right ending point using a subregion computation technique. Furthermore, the method includes selecting, by the one or more hardware processors, a plurality of feasible grasp poses from the plurality of sampled grasp poses based on the plurality of subregions and a plurality of conditions. Furthermore, the method includes refining, by the one or more hardware processors, each of the plurality of feasible grasp poses by (i) shifting a center corresponding to each of the plurality of feasible grasp poses along the width of the corresponding grasp pose such that the corresponding contact region is divided into two equal halves, and (ii) adjusting the width corresponding to each of the plurality of feasible grasp poses such that the corresponding collision region is excluded. Finally, the method includes obtaining, by the one or more hardware processors, an optimum grasp pose for a robotic arm based on the refined plurality of feasible grasp poses using a Grasp Quality Score (GQS).
In another aspect, a system for a point cloud based grasp planning framework is provided. The system includes at least one memory storing programmed instructions, one or more Input/Output (I/O) interfaces, and one or more hardware processors operatively coupled to the at least one memory, wherein the one or more hardware processors are configured by the programmed instructions to receive an input image pertaining to a surface in a robotic bin picking environment, wherein the surface comprises a plurality of heterogeneous unseen objects. Further, the one or more hardware processors are configured by the programmed instructions to generate a plurality of sampled grasp poses in a random configuration based on the input image, using a baseline grasp planning technique, wherein each of the plurality of sampled grasp poses is represented as a rectangle. Furthermore, the one or more hardware processors are configured by the programmed instructions to compute a depth difference value for each of a plurality of pixels corresponding to each of the plurality of sampled grasp poses, based on a comparison between each of the plurality of pixels corresponding to each of the plurality of sampled grasp poses and a corresponding center pixel. Furthermore, the one or more hardware processors are configured by the programmed instructions to generate a binary map for each of the plurality of sampled grasp poses based on the corresponding depth difference value by assigning a binary value one to a plurality of pixels with a depth difference value greater than a predefined depth threshold and zero otherwise. Furthermore, the one or more hardware processors are configured by the programmed instructions to obtain a plurality of subregions corresponding to each of the plurality of sampled grasp poses based on the corresponding binary map, wherein the plurality of subregions comprises a contact region, a free region and a collision region, by (i) identifying a left starting point and a left ending point of a left free region of each of the plurality of sampled grasp poses based on the corresponding binary map, wherein the left free region is a region with the binary value one, (ii) identifying a right starting point and a right ending point of a right free region of each of the plurality of sampled grasp poses based on the corresponding binary map, and (iii) computing the plurality of subregions based on the left starting point, the left ending point, the right starting point and the right ending point using a subregion computation technique. Furthermore, the one or more hardware processors are configured by the programmed instructions to select a plurality of feasible grasp poses from the plurality of sampled grasp poses based on the plurality of subregions and a plurality of conditions. Furthermore, the one or more hardware processors are configured by the programmed instructions to refine each of the plurality of feasible grasp poses by (i) shifting a center corresponding to each of the plurality of feasible grasp poses along the width of the corresponding grasp pose such that the corresponding contact region is divided into two equal halves, and (ii) adjusting the width corresponding to each of the plurality of feasible grasp poses such that the corresponding collision region is excluded. Finally, the one or more hardware processors are configured by the programmed instructions to obtain an optimum grasp pose for a robotic arm based on the refined plurality of feasible grasp poses using a Grasp Quality Score (GQS).
In yet another aspect, a computer program product including a non-transitory computer-readable medium having embodied therein a computer program for a point cloud based grasp planning framework is provided. The computer readable program, when executed on a computing device, causes the computing device to receive an input image pertaining to a surface in a robotic bin picking environment, wherein the surface comprises a plurality of heterogeneous unseen objects. Further, the computer readable program, when executed on a computing device, causes the computing device to generate a plurality of sampled grasp poses in a random configuration based on the input image, using a baseline grasp planning technique, wherein each of the plurality of sampled grasp poses is represented as a rectangle. Furthermore, the computer readable program, when executed on a computing device, causes the computing device to compute a depth difference value for each of a plurality of pixels corresponding to each of the plurality of sampled grasp poses, based on a comparison between each of the plurality of pixels corresponding to each of the plurality of sampled grasp poses and a corresponding center pixel. Furthermore, the computer readable program, when executed on a computing device, causes the computing device to generate a binary map for each of the plurality of sampled grasp poses based on the corresponding depth difference value by assigning a binary value one to a plurality of pixels with a depth difference value greater than a predefined depth threshold and zero otherwise. Furthermore, the computer readable program, when executed on a computing device, causes the computing device to obtain a plurality of subregions corresponding to each of the plurality of sampled grasp poses based on the corresponding binary map, wherein the plurality of subregions comprises a contact region, a free region and a collision region, by (i) identifying a left starting point and a left ending point of a left free region of each of the plurality of sampled grasp poses based on the corresponding binary map, wherein the left free region is a region with the binary value one, (ii) identifying a right starting point and a right ending point of a right free region of each of the plurality of sampled grasp poses based on the corresponding binary map, and (iii) computing the plurality of subregions based on the left starting point, the left ending point, the right starting point and the right ending point using a subregion computation technique. Furthermore, the computer readable program, when executed on a computing device, causes the computing device to select a plurality of feasible grasp poses from the plurality of sampled grasp poses based on the plurality of subregions and a plurality of conditions. Furthermore, the computer readable program, when executed on a computing device, causes the computing device to refine each of the plurality of feasible grasp poses by (i) shifting a center corresponding to each of the plurality of feasible grasp poses along the width of the corresponding grasp pose such that the corresponding contact region is divided into two equal halves, and (ii) adjusting the width corresponding to each of the plurality of feasible grasp poses such that the corresponding collision region is excluded. Finally, the computer readable program, when executed on a computing device, causes the computing device to obtain an optimum grasp pose for a robotic arm based on the refined plurality of feasible grasp poses using a Grasp Quality Score (GQS).
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles:
FIG. 1 is a functional block diagram of a system for point cloud based grasp planning framework, in accordance with some embodiments of the present disclosure.
FIG. 2A illustrates a functional architecture of the system of FIG. 1, for point cloud based grasp planning framework, in accordance with some embodiments of the present disclosure.
FIG. 2B illustrates an example robotic bin picking environment for point cloud based grasp planning framework, in accordance with some embodiments of the present disclosure.
FIG. 3 is an exemplary flow diagram illustrating a processor implemented method 300 for point cloud based grasp planning framework implemented by the system of FIG. 1 according to some embodiments of the present disclosure.
FIGS. 4A and 4B illustrate example input and sampled grasp poses for the processor implemented method for point cloud based grasp planning framework implemented by the system of FIG. 1 according to some embodiments of the present disclosure.
FIG. 4C illustrates example subregion computation for the processor implemented method for point cloud based grasp planning framework implemented by the system of FIG. 1 according to some embodiments of the present disclosure.
FIG. 4D illustrates example sampled grasp poses after subregion computation for the processor implemented method for point cloud based grasp planning framework implemented by the system of FIG. 1 according to some embodiments of the present disclosure.
FIG. 4E illustrates example feasible grasp poses for the processor implemented method for point cloud based grasp planning framework implemented by the system of FIG. 1 according to some embodiments of the present disclosure.
FIG. 5 is an exemplary flow diagram illustrating a method 500 for optimum grasp pose selection implemented by the system of FIG. 1 according to some embodiments of the present disclosure.
FIG. 6 illustrates sample Grasp Quality Score (GQS) for the processor implemented method for point cloud based grasp planning framework implemented by the system of FIG. 1 according to some embodiments of the present disclosure.
DETAILED DESCRIPTION OF EMBODIMENTS
Exemplary embodiments are described with reference to the accompanying drawings. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. While examples and features of disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the spirit and scope of the disclosed embodiments.
A fully automated and reliable picking of a diverse range of previously unseen objects in clutter is a challenging problem. This becomes even more difficult given the inherent uncertainty in sensing, control, and interaction physics. Conventional methods for bin picking (picking objects from a bin by a robot or robotic arm) in a dense clutter environment are domain-dependent, i.e., these methods often fail to perform equally well on a target domain if it is somewhat different from the source domain they are trained upon.
Embodiments herein provide a method and system for a point cloud based grasp planning framework to obtain an optimum grasp pose to pick an object from a bin by a robotic arm. The present disclosure provides a domain independent novel grasp planning framework that is based on the depth data coming from a Red Green Blue-Depth (RGB-D) sensor. Further, the present disclosure includes an unsupervised clustering-based grasp pose sampler, a grasp pose validation step based on a grasp feasibility map, a grasp pose refinement, and a grasp pose quality ranking scheme to obtain the optimum grasp pose.
Initially, the system receives an input image pertaining to a surface in a robotic bin picking environment. The surface includes a plurality of heterogeneous unseen objects. Further, a plurality of sampled grasp poses are generated in a random configuration based on the input image using a baseline grasp planning technique, wherein each of the plurality of sampled grasp poses is represented as a rectangle. After generating the sampled grasp poses, a depth difference value is computed for each of a plurality of pixels corresponding to each of the plurality of sampled grasp poses based on a comparison between each of the plurality of pixels corresponding to each of the plurality of sampled grasp poses and a corresponding center pixel. Further, a binary map is generated for each of the plurality of sampled grasp poses based on the corresponding depth difference value by assigning a binary value one to a plurality of pixels with a depth difference value greater than a predefined depth threshold and zero otherwise. After generating the binary map, a plurality of subregions are obtained corresponding to each of the plurality of sampled grasp poses based on the corresponding binary map. The plurality of subregions comprises a contact region, a free region and a collision region. Further, a plurality of feasible grasp poses are selected from the plurality of sampled grasp poses based on the plurality of subregions and a plurality of conditions. Further, each of the plurality of feasible grasp poses is refined by shifting its center so that the contact region is divided into two equal halves and by adjusting its width to exclude the collision region. Finally, an optimum grasp pose is obtained based on the refined plurality of feasible grasp poses using a Grasp Quality Score (GQS), as sketched below.
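For illustration only, the following Python outline shows how the above stages compose; every helper name in it (sample_grasp_poses, depth_binary_map, subregions, is_feasible, refine_pose, grasp_quality_score) is a hypothetical stand-in for the corresponding stage described above and is not a function from the present disclosure:

def plan_grasp(depth_image):
    # Hypothetical composition of the disclosed stages (names are stand-ins).
    poses = sample_grasp_poses(depth_image)             # grasp pose sampling
    feasible = []
    for pose in poses:
        B = depth_binary_map(depth_image, pose)         # depth difference + thresholding
        regions = subregions(B)                         # contact / free / collision
        if is_feasible(regions):                        # feasibility conditions
            feasible.append(refine_pose(pose, regions)) # center shift and width adjustment
    # rank the refined candidates by Grasp Quality Score and return the best
    return max(feasible, key=grasp_quality_score, default=None)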
Referring now to the drawings, and more particularly to FIGS. 1 through 6, where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments and these embodiments are described in the context of the following exemplary system and/or method.
FIG. 1 is a functional block diagram of a system 100 for point cloud based grasp planning framework, in accordance with some embodiments of the present disclosure. The system 100 includes or is otherwise in communication with hardware processors 102, at least one memory such as a memory 104, and an I/O interface 112. The hardware processors 102, the memory 104, and the Input/Output (I/O) interface 112 may be coupled by a system bus such as a system bus 108 or a similar mechanism. In an embodiment, the hardware processors 102 can be one or more hardware processors.
The I/O interface 112 may include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, and interfaces for peripheral device(s), such as a keyboard, a mouse, an external memory, a printer and the like. Further, the I/O interface 112 may enable the system 100 to communicate with other devices, such as web servers and external databases.
The I/O interface 112 can facilitate multiple communications within a wide variety of networks and protocol types, including wired networks, for example, local area network (LAN), cable, etc., and wireless networks, such as Wireless LAN (WLAN), cellular, or satellite. For this purpose, the I/O interface 112 may include one or more ports for connecting several computing systems or devices with one another or to another server.
The one or more hardware processors 102 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, node machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the one or more hardware processors 102 is configured to fetch and execute computer-readable instructions stored in the memory 104.
The memory 104 may include any computer-readable medium known in the art including, for example, volatile memory, such as static random access memory (SRAM) and dynamic random access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes. In an embodiment, the memory 104 includes a plurality of modules 106. The memory 104 also includes a data repository (or repository) 110 for storing data processed, received, and generated by the plurality of modules 106.
The plurality of modules 106 include programs or coded instructions that supplement applications or functions performed by the system 100 for the point cloud based grasp planning framework. The plurality of modules 106, amongst other things, can include routines, programs, objects, components, and data structures, which perform particular tasks or implement particular abstract data types. The plurality of modules 106 may also be used as signal processor(s), node machine(s), logic circuitries, and/or any other device or component that manipulates signals based on operational instructions. Further, the plurality of modules 106 can be implemented in hardware, by computer-readable instructions executed by the one or more hardware processors 102, or by a combination thereof. The plurality of modules 106 can include various sub-modules (not shown). In an embodiment, the modules 106 include a sampled grasp poses generation module (shown in FIG. 2A), a depth difference computation module (shown in FIG. 2A), a binary map generation module (shown in FIG. 2A), a subregion computation module (shown in FIG. 2A), a feasible grasp pose selection module (shown in FIG. 2A), a feasible grasp pose refinement module and an optimum grasp pose selection module (shown in FIG. 2A). In an embodiment, FIG. 2A illustrates a functional architecture of the system of FIG. 1, for the point cloud based grasp planning framework, in accordance with some embodiments of the present disclosure.
The data repository (or repository) 110 may include a plurality of abstracted pieces of code for refinement and data that is processed, received, or generated as a result of the execution of the plurality of modules in the module(s) 106.
Although the data repository 110 is shown internal to the system 100, it will be noted that, in alternate embodiments, the data repository 110 can also be implemented external to the system 100, where the data repository 110 may be stored within a database (repository 110) communicatively coupled to the system 100. The data contained within such an external database may be periodically updated. For example, new data may be added into the database (not shown in FIG. 1) and/or existing data may be modified and/or non-useful data may be deleted from the database. In one example, the data may be stored in an external system, such as a Lightweight Directory Access Protocol (LDAP) directory and a Relational Database Management System (RDBMS). Working of the components of the system 100 is explained with reference to the method steps depicted in FIG. 3 and FIG. 5.
FIG. 2B illustrates an example robotic bin picking environment for point cloud based grasp planning framework, in accordance with some embodiments of the present disclosure. Now referring to FIG. 2B, the robotic bin picking environment includes the plurality of heterogeneous unseen objects 224 to be picked by a robotic arm 222 (for example, a Universal Robots 6-Degrees of Freedom (UR5 6-DOF) manipulator arm), at least one image capturing device 230 (for example, a RealSense D435i camera), a gripper 232 (for example, a WSG-50 Schunk gripper), a bin 228 with the plurality of heterogeneous unseen objects 224, and a receptacle for the object drop location (not shown in FIG. 2B). In an embodiment, the bin 228 is designed with slanted edges (at an angle of approximately 45 degrees) so that not only do the objects remain within the workspace during the operation but also the chances of the gripper colliding with the bin edges are lower compared to bins with vertical edges. In an embodiment, 226 is the object picked by the robotic arm 222. In an embodiment, the robotic arm 222 is connected to the system 100 through the I/O interface 112 to send data from the at least one image capturing device 230 and to receive instructions (for example, the instruction can be either to pick an object from the plurality of heterogeneous unseen objects or to disperse the plurality of heterogeneous unseen objects) from the one or more hardware processors 102 of the system 100.
FIG. 3 is an exemplary flow diagram illustrating a method 300 for point cloud based grasp planning framework implemented by the system of FIG. 1 according to some embodiments of the present disclosure. In an embodiment, the system 100 includes one or more data storage devices or the memory 104 operatively coupled to the one or more hardware processor(s) 102 and is configured to store instructions for execution of steps of the method 300 by the one or more hardware processors 102. The steps of the method 300 of the present disclosure will now be explained with reference to the components or blocks of the system 100 as depicted in FIG. 1 and the steps of flow diagram as depicted in FIG. 3. The method 300 may be described in the general context of computer executable instructions. Generally, computer executable instructions can include routines, programs, objects, components, data structures, procedures, modules, functions, etc., that perform particular functions or implement particular abstract data types. The method 300 may also be practiced in a distributed computing environment where functions are performed by remote processing devices that are linked through a communication network. The order in which the method 300 is described is not intended to be construed as a limitation, and any number of the described method blocks can be combined in any order to implement the method 300, or an alternative method. Furthermore, the method 300 can be implemented in any suitable hardware, software, firmware, or combination thereof.
At step 302 of the method 300, the one or more hardware processors 102 are configured by the programmed instructions to receive the input image pertaining to the surface. The surface includes the plurality of heterogeneous unseen objects as shown in FIG. 4A. In another embodiment, the surface may include homogeneous unseen objects.
At step 304 of the method 300, the sampled grasp poses generation module 202 executed by one or more hardware processors 102 is configured by the programmed instructions to generate the plurality of sampled grasp poses G_i in a random configuration based on the input image using a baseline grasp planning technique.
G_i = (p, θ_i, W_i, Q) …………………..(1)
where, p = (x, y) refers to the center point of the grasp pose in the image coordinates, θ_i denotes the angle of the grasp pose with respect to the horizontal axis in the image plane, W_i refers to the width of the grasp pose rectangle, and Q denotes the grasp quality index. For the execution of the grasp pose G_i, it needs to be converted in accordance with the robot's world Cartesian frame. For this conversion, intrinsic and extrinsic camera parameters are utilized, which are obtained by a standard calibration procedure. The converted grasp pose G_r can be defined as follows:
G_r = (p, θ_r, W_r, Q) …………………..(2)
where, p = (x, y, z) refers to the center point of the grasp pose in Cartesian space, θ_r represents the gripper's rotation around the z-axis, W_r denotes the required opening width of the gripper bounded by the maximum opening of the gripper, and the quantity Q is the same as defined in equation (1). In an embodiment, the depth values used in the pseudocode are expressed in the camera reference frame. The camera is set above the bin workspace at a fixed distance, facing downwards. N candidate grasp poses are sampled using a depth filtering and clustering-based approach.
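As an illustration of the conversion from equation (1) to equation (2), the following Python sketch applies the standard pinhole camera model; the 3x3 intrinsic matrix K, the 4x4 camera-to-base transform T_cam2base, and the GraspPose layout are assumptions introduced here for illustration, with only the overall conversion step taken from the description above:

import numpy as np
from dataclasses import dataclass

@dataclass
class GraspPose:
    p: tuple        # (x, y) in image coordinates or (x, y, z) in Cartesian space
    theta: float    # grasp angle θ_i, or gripper rotation θ_r about the z-axis
    W: float        # grasp rectangle width W_i, or gripper opening width W_r
    Q: float        # grasp quality index

def image_to_robot(g, depth, K, T_cam2base):
    # Deproject the pixel with the pinhole intrinsics, then transform the
    # resulting camera-frame point into the robot's world Cartesian frame.
    u, v = g.p
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    pc = np.array([(u - cx) * depth / fx, (v - cy) * depth / fy, depth, 1.0])
    pb = T_cam2base @ pc            # homogeneous point in the robot base frame
    # pixel-to-metric scaling of the width W is omitted here for brevity
    return GraspPose((pb[0], pb[1], pb[2]), g.theta, g.W, g.Q)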
For example, each of the plurality of sampled grasp poses is represented as a rectangle as shown in FIGS. 4A and 4B. Now referring to FIG. 4A, 402 is an object from the plurality of heterogeneous unseen objects and 404 is the sampled grasp pose associated with the object 402. FIG. 4B illustrates the plurality of sampled grasp poses, wherein a rectangle is generated corresponding to each of the plurality of heterogeneous unseen objects shown in FIG. 4A.
At step 306 of the method 300, the depth difference computation module 204 executed by the one or more hardware processors 102 is configured by the programmed instructions to compute the depth difference value for each of a plurality of pixels corresponding to each of the plurality of sampled grasp poses based on a comparison between each of the plurality of pixels corresponding to each of the plurality of sampled grasp poses and a corresponding center pixel.
At step 308 of the method 300, the binary map generation module 206 executed by the one or more hardware processors 102 is configured by the programmed instructions to generate the binary map for each of the plurality of sampled grasp poses based on the corresponding depth difference value by assigning the binary value one to a plurality of pixels with the depth difference value greater than the predefined depth threshold and zero otherwise.
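A minimal Python sketch of steps 306 and 308 is given below; it assumes the depth patch has already been cropped and rotated to the grasp pose rectangle, with rows indexing the gw width positions and columns the gb height positions, matching B_ik in Pseudocode 1 later in this description:

import numpy as np

def binary_map(grasp_patch, depth_threshold):
    # grasp_patch: depth values inside one grasp pose rectangle (assumed layout)
    gw, gb = grasp_patch.shape
    center = grasp_patch[gw // 2, gb // 2]   # depth at the center pixel
    diff = grasp_patch - center              # step 306: per-pixel depth difference
    # step 308: binary value one where the pixel lies deeper than the center
    # by more than the predefined depth threshold, zero otherwise
    return (diff > depth_threshold).astype(np.uint8)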
At step 310 of the method 300, the subregion computation module 208 executed by the one or more hardware processors 102 is configured by the programmed instructions to obtain the plurality of subregions corresponding to each of the plurality of sampled grasp poses based on the corresponding binary map. In an embodiment, the plurality of subregions includes a contact region (R_ct), a free region (R_fs) and a collision region (R_cl).
FIG. 4C illustrates example subregion computation for the processor implemented method for point cloud based grasp planning framework implemented by the system of FIG. 1 according to some embodiments of the present disclosure. Now referring to FIG. 4C, initially, a left starting point (L_s) and a left ending point (L_e) of a left free region 412 of each of the plurality of sampled grasp poses are identified based on the corresponding binary map. Here, the free region is a region with the binary value one. Further, a right starting point (R_s) and a right ending point (R_e) of a right free region 414 of each of the plurality of sampled grasp poses are identified based on the corresponding binary map. Finally, the plurality of subregions are computed based on the left starting point, the left ending point, the right starting point and the right ending point using a subregion computation technique. Here, 416 indicates the contact region (R_ct). In an embodiment, FIG. 4D illustrates the plurality of sampled grasp poses after subregion computation. Now referring to FIG. 4D, the black region in each rectangle is the contact region and the white region is the free region.
In an embodiment, the subregion computation is performed using Pseudocode 1 given below. Here, L_s and L_e mark the starting point and the ending point of the free space in the left half of the grasp pose rectangle. Similarly, the points R_s and R_e designate the free space in the right half of the grasp pose rectangle. Since random sensor errors, occurring as spurious 0 values in the depth map, may affect the output of Pseudocode 1, the depth map is preprocessed by a median filter of size 3x3. The three regions within the grasp pose rectangle can now be mathematically defined as in equations (3), (4) and (5).
R_ct = [(L_s + 1, 0), (R_s − 1, gb)] …………………………(3)
R_fs = [(L_e, 0), (L_s, gb)] + [(R_s, 0), (R_e, gb)] …………………………(4)
R_cl = R_(G_i) − (R_ct + R_fs) ……………………………………………….(5)
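Reading each bracketed pair in equations (3) to (5) as a span of width positions covering the full rectangle height gb (an interpretation assumed here for illustration only), the subregions follow directly from the four boundary points, as in this Python sketch:

def compute_subregions(L_s, L_e, R_s, R_e, gw):
    # Width spans for the three subregions; each span covers the full height gb.
    contact = (L_s + 1, R_s - 1)                   # R_ct, equation (3)
    free = [(L_e, L_s), (R_s, R_e)]                # R_fs, equation (4)
    # R_cl, equation (5): the remainder of the grasp rectangle R_(G_i)
    collision = [(0, L_e - 1), (R_e + 1, gw - 1)]
    return contact, free, collision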
Pseudocode 1
Data: Binary map B of size gw × gb
Result: Free-space boundary points L_s, L_e, R_s, R_e
L_s ← 0; L_e ← 0; R_s ← gw; R_e ← gw;
cx ← int(gw/2);
for i ← (cx − 1) to 0 do
  if ∑_k B_ik = gb then
    L_s ← i; break;
  end
end
for j ← (i − 1) to 0 do
  if ∑_k B_jk < gb then
    L_e ← j + 1; break;
  end
end
for i ← (cx + 1) to (gw − 1) do
  if ∑_k B_ik = gb then
    R_s ← i; break;
  end
end
for j ← (i + 1) to (gw − 1) do
  if ∑_k B_jk < gb then
    R_e ← j − 1; break;
  end
end
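The boundary-point search of Pseudocode 1 can be written compactly in Python. In the sketch below the right-half loops mirror the left half (the source listing breaks off after the left half, so the right-half part is an assumed symmetric completion), and a width position is treated as free when all gb pixels in it equal one:

import numpy as np

def free_space_boundaries(B):
    # B has shape (gw, gb): rows index width positions i and columns index
    # height positions k, matching B_ik in Pseudocode 1.
    gw, gb = B.shape
    row_free = B.sum(axis=1) == gb        # true where sum_k B_ik equals gb
    cx = gw // 2
    L_s, L_e, R_s, R_e = 0, 0, gw, gw
    for i in range(cx - 1, -1, -1):       # scan leftwards from the center
        if row_free[i]:
            L_s = i                       # start of the left free region
            break
    for j in range(L_s - 1, -1, -1):      # continue until the free space ends
        if not row_free[j]:
            L_e = j + 1                   # end of the left free region
            break
    for i in range(cx + 1, gw):           # scan rightwards from the center
        if row_free[i]:
            R_s = i                       # start of the right free region
            break
    for j in range(R_s + 1, gw):
        if not row_free[j]:
            R_e = j - 1                   # end of the right free region
            break
    return L_s, L_e, R_s, R_e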