Specification
Claims: A processor implemented method (200) comprising:
identifying a plurality of surfaces associated with objects in a 3D point cloud of an environment under consideration using smoothness of surface normals to identify natural boundaries of the objects, the 3D point cloud being obtained from a single view (202), the step of identifying a plurality of surfaces comprises iteratively performing for each seed point in the 3D point cloud the following steps:
identifying, in the 3D point cloud, a current seed point with a spherical neighborhood having a plurality of points representing neighboring points for the current seed point (202a);
obtaining a pair of thresholds representing an upper limit (θ_high) and a lower limit (θ_low) for defining a current region which forms part of at least one of the plurality of surfaces (202b);
computing a smoothness criterion for each of the neighboring points of the current seed point, wherein the smoothness criterion is an angle between the surface normals of the current seed point and each of the neighboring points (202c);
including to the current region, the neighboring points of the current seed point having the angle representing the smoothness criterion less than the defined lower limit, and thereby growing the current region to form at least one of the plurality of surfaces (202d);
excluding from the current region, the neighboring points of the current seed point having the angle representing the smoothness criterion more than the defined upper limit (202e); and
including to a list of seed points to be used in a next iteration, the neighboring points (i) having the angle representing the smoothness criterion between the lower limit and the upper limit and (ii) identified as non-edge points (202f) thereby terminating the iterative steps (202a through 202e) at an edge of a corresponding surface from the plurality of surfaces.
The processor implemented method of claim 1, wherein the neighboring points are identified as edge points if a ratio C_R/m > k, where 0 < k < 1, C_R being the number of neighboring points whose smoothness criterion exceeds the upper limit and m being the total number of neighboring points.
The gripper requires sufficient free space around a target object to avoid collision with non-target objects while making a grasping manoeuvre. Each object surface is associated with three principal axes, namely, n̂ normal to the surface and two principal axes: the major axis â, orthogonal to the plane of finger motion (the gripper closing plane), and the minor axis f̂, which is orthogonal to the other two axes, as shown in FIG.2B.
The grasp pose detection method takes a 3D point cloud C ⊂ ℝ³ and a geometric model of the gripper as input and produces a six-dimensional (6D) grasp pose handle H ∈ SE(3), wherein SE(3) represents the 6-DOF (degrees of freedom) pose space. The 6D pose is represented by a vector p = [x, y, z, θ_x, θ_y, θ_z], where (x, y, z) is the point where a closing plane of the gripper and the object surface seen by a robot camera intersect and (θ_x, θ_y, θ_z) is the orientation of the gripper handle with respect to a global coordinate frame. Searching for a suitable 6-DOF grasp pose is a computationally intensive task and hence, in accordance with the present disclosure, the search space is reduced by applying several constraints. For instance, it is assumed that the gripper approaches the object along a plane which is orthogonal to the object surface seen by the robot camera. In other words, the closing plane of the gripper is normal to the object surface as shown in FIG.2B. Since the mean depth of the object surface is known, the pose detection problem becomes a search for three-dimensional (l×b×e) bands along the major axis â, where l is a minimum depth necessary for holding the object. Hence, the grasp pose detection becomes a one-dimensional search problem once the object surface is identified.
In accordance with the present disclosure, the technical problem of detecting precision graspable affordances in cluttered environment is solved in two steps: (i) identifying surfaces in 3D point clouds and (ii) applying geometric constraints of a two finger parallel-jaw gripper to reduce the search space for finding suitable gripper pose.
FIG.3A through FIG.3C illustrate exemplary flow charts for a computer implemented method for detection of precision graspable affordances in cluttered environment, in accordance with an embodiment of the present disclosure. In an embodiment, the system 100 includes one or more data storage devices or memory 102 operatively coupled to the one or more processors 104 and is configured to store instructions configured for execution of steps of the method 200 by the one or more processors 104. The steps of the method 200 will now be explained in detail with reference to the components of the system 100 of FIG.1. Although process steps, method steps, techniques or the like may be described in a sequential order, such processes, methods and techniques may be configured to work in alternate orders. In other words, any sequence or order of steps that may be described does not necessarily indicate a requirement that the steps be performed in that order. The steps of processes described herein may be performed in any order practical. Further, some steps may be performed simultaneously.
Accordingly, in an embodiment of the present disclosure, the one or more processors 104 are configured to firstly identify, at step 202, a plurality of surfaces in the 3D point cloud of an environment under consideration, the 3D point cloud being obtained from a single view, using smoothness of surface normals to identify natural boundaries of objects in the environment. Furthermore, the one or more processors 104 are configured to detect in real time, graspable affordances, at step 204, in the identified plurality of surfaces for each of the objects in the environment by applying geometric constraints in the form of physical constraints of a gripper to be used for gripping the objects in the environment and constraints of the environment.
In accordance with an embodiment of the present disclosure, step 202 involves identifying several surface patches in the 3D point cloud using a modified region growing algorithm. The angle between surface normals is taken as a smoothness criterion and is denoted by the symbol θ. In accordance with the conventional region growing algorithm known in the art, the step starts from one seed point and the points in its neighborhood are added to the current region (or label) if the angle between the surface normals of a neighboring point and that of the seed point is less than a user-defined threshold. The procedure is repeated with all the neighboring points as new seed points until all points have been labeled to one region or the other. The quality of segmentation heavily depends on the choice of the user-defined threshold value. A very low value may lead to over-segmentation and a very high value may lead to under-segmentation. The presence of sensor noise further exacerbates this problem, leading to spurious edges when only one threshold is used. This limitation of the conventional region growing algorithm is overcome in the method of the present disclosure by defining edge points and using a pair of thresholds instead of one.
FIG.4 illustrates an edge point defined in accordance with an embodiment of the present disclosure. Consider a current seed point s ∈ C with its own spherical neighbourhood N(s), shown as a circle in FIG.4. It is further assumed that the neighbourhood consists of m points (p_i, i = 1, 2, …, m) in the 3D point cloud. Mathematically, the neighbourhood may be represented as follows:
N(s) = {p_i ∈ C | ‖s − p_i‖ ≤ r}; i = 1, 2, …, m
(1)
where r is a user-defined radius of the spherical neighborhood. In accordance with an embodiment of the present disclosure, the one or more processors 104 are configured to identify in the 3D point cloud, at step 202a, the current seed point with a spherical neighborhood having a plurality of points representing neighboring points for the current seed point.
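The neighborhood query of equation (1) may be sketched in Python as follows; the function name and the brute-force search are illustrative assumptions, not part of the disclosed method (a practical implementation would typically use a k-d tree for large clouds):

```python
import math

def neighborhood(seed, cloud, r):
    """Spherical neighborhood N(s) of Eq. (1): all points of the cloud whose
    Euclidean distance from the seed is at most the radius r."""
    return [p for p in cloud if math.dist(seed, p) <= r]

# Tiny illustrative cloud: two points near the seed, one far away.
cloud = [(0.0, 0.0, 0.0), (0.05, 0.0, 0.0), (0.0, 0.08, 0.0), (1.0, 1.0, 1.0)]
seed = (0.0, 0.0, 0.0)
print(neighborhood(seed, cloud, r=0.1))   # the seed and its two nearby points
```

Note that the seed itself satisfies ‖s − p‖ = 0 ≤ r and is therefore returned as part of its own neighborhood.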
Each neighboring point p_i has an associated surface normal N_i which makes an angle θ_i with the normal N_s associated with the seed. As stated earlier, θ_i is the smoothness criterion for the modified region growing algorithm of the present disclosure. Also, in accordance with the present disclosure, the one or more processors 104 are configured to obtain, at step 202b, a pair of thresholds representing a lower limit θ_low and an upper limit θ_high for defining a current region for the neighboring point which forms part of at least one of the plurality of surfaces and for creating new seed points for further propagation. In an embodiment, the pair of thresholds representing the upper limit and the lower limit is defined empirically.
In accordance with the present disclosure, the one or more processors 104 are configured to compute, at step 202c, the smoothness criterion for each of the neighboring points of the current seed point.
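As a minimal illustration of step 202c, the angle between two unit surface normals may be computed as below; the function name and the use of degrees are assumptions for illustration only:

```python
import math

def angle_between_normals(n_s, n_i):
    """Smoothness criterion theta_i: angle between the unit surface normals of
    the seed point and a neighboring point, in degrees."""
    dot = sum(a * b for a, b in zip(n_s, n_i))
    dot = max(-1.0, min(1.0, dot))   # guard against floating-point drift
    return math.degrees(math.acos(dot))

print(angle_between_normals((0, 0, 1), (0, 0, 1)))   # ~0: smooth, same surface
print(angle_between_normals((0, 0, 1), (1, 0, 0)))   # ~90: sharp edge
```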
Let Q_s be a set of new seed points which may be used in a next iteration of the modified region growing algorithm and R(s) be the set of neighboring points p_i of the current seed point s for which θ_i > θ_high:
R(s) = {p_i ∈ N(s) | θ_i > θ_high, i = 1, 2, …, m}
(2)
In accordance with the present disclosure, a point is identified as an edge point if
C_R/m > k; 0 < k < 1
(3)
where C_R is the cardinality of the set R(s) and m is the number of points in the neighborhood N(s). FIG.5A through FIG.5D illustrate a modified region growing algorithm in accordance with an embodiment of the present disclosure. FIG.5A particularly shows an edge point as point B. An edge point is different from a non-edge point in the sense that the latter lies away from an edge and its neighbors have surface normals more or less in the same direction. One such non-edge point is shown as point A in FIG.5A. Even with sensor noise, the neighboring points around such a seed point will have surface normals with smaller values of angles with respect to the surface normal of the seed point, i.e., θ_i < θ_high.
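The edge point criterion C_R/m > k may be sketched as follows, assuming unit normals and a user-chosen fraction k; the function name and the synthetic normals are illustrative only:

```python
import math

def is_edge_point(seed_normal, neighbor_normals, theta_high_deg, k):
    """A point is flagged as an edge point when the fraction C_R/m of its
    neighbors whose normals deviate from the seed normal by more than
    theta_high exceeds the user-defined fraction k."""
    def angle(n_a, n_b):
        dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(n_a, n_b))))
        return math.degrees(math.acos(dot))
    m = len(neighbor_normals)
    c_r = sum(1 for n in neighbor_normals if angle(seed_normal, n) > theta_high_deg)
    return c_r / m > k

# A seed on a flat patch: all neighbor normals nearly parallel -> not an edge.
flat = [(0.0, 0.0, 1.0)] * 8
# A seed near a 90-degree fold: half the neighbors point sideways -> edge.
fold = [(0.0, 0.0, 1.0)] * 4 + [(1.0, 0.0, 0.0)] * 4
print(is_edge_point((0.0, 0.0, 1.0), flat, theta_high_deg=30.0, k=0.3))  # False
print(is_edge_point((0.0, 0.0, 1.0), fold, theta_high_deg=30.0, k=0.3))  # True
```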
In accordance with the present disclosure, the modified region growing algorithm starts with the current seed point s and the region L{p_i} for a neighboring point p_i ∈ N(s) is defined as follows:
if θ_i < θ_low, then L{p_i} = L{s} and p_i ∈ Q_s
if θ_i > θ_high, then L{p_i} ≠ L{s} and p_i ∉ Q_s
(4)
where the notation p_i ∈ Q_s indicates that the point p_i is added to the list of seed points which may be used by the modified region growing algorithm in a next iteration. However, if the angle between the normals lies between the two thresholds, i.e., θ_low ≤ θ_i < θ_high, the region for the neighboring point is assigned as follows:
if s ∉ E, then L{p_i} = L{s} and p_i ∈ Q_s
if s ∈ E, then L{p_i} = L{s} and p_i ∉ Q_s
(5)
where E denotes the set of edge points.
Equation (5) states that while the neighboring point p_i is assigned the same region as that of the seed point s, it is not considered as a new seed point if the current seed point is an edge point. This allows the modified region growing algorithm to terminate at the edges of each surface where there is a sudden and large change in the direction of surface normals thereby obtaining the natural boundaries of the objects.
FIG.5A through FIG.5C demonstrate the modified region growing algorithm pictorially. In accordance with the present disclosure, the one or more processors 104 are configured to include, at step 202d, the neighboring points of the current seed point having the angle θ_i representing the smoothness criterion less than the defined lower limit θ_low to the current region, thereby growing the current region to form at least one of the plurality of surfaces. Further, the one or more processors 104 are configured to exclude, at step 202e, the neighboring points of the current seed point having the angle θ_i representing the smoothness criterion more than the defined upper limit θ_high. Furthermore, the one or more processors 104 are configured to include to a list of seed points to be used in a next iteration, at step 202f, the neighboring points (i) having the angle θ_i representing the smoothness criterion between the lower limit θ_low and the upper limit θ_high and (ii) identified as non-edge points, thereby terminating the iterative steps (202a through 202e) at an edge of a corresponding surface from the plurality of surfaces. FIG.5A through FIG.5C also show how a pair of thresholds as provided by the present disclosure is effective in dealing with sensor noise, thereby eliminating spurious edges. The modified region growing algorithm of the present disclosure does not require any training phase, is implemented in real time and enables finding graspable affordances for rectangular box-type objects, which was difficult in the art.
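The iterative steps 202a through 202f above may be sketched as follows. This is a simplified illustration under assumed inputs (a point list with precomputed unit normals, brute-force neighbor search), not the claimed implementation:

```python
import math
from collections import deque

def grow_regions(points, normals, radius, th_low, th_high, k):
    """Modified region growing with a pair of angle thresholds and an
    edge-point test that stops seed propagation at surface edges."""
    def angle(i, j):
        dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(normals[i], normals[j]))))
        return math.degrees(math.acos(dot))
    def nbrs(i):
        return [j for j in range(len(points))
                if j != i and math.dist(points[i], points[j]) <= radius]
    def is_edge(i):
        n = nbrs(i)
        return bool(n) and sum(1 for j in n if angle(i, j) > th_high) / len(n) > k
    label = [None] * len(points)
    region = 0
    for start in range(len(points)):
        if label[start] is not None:
            continue
        label[start] = region
        queue = deque([start])
        while queue:
            s = queue.popleft()
            for j in nbrs(s):
                if label[j] is not None:
                    continue
                th = angle(s, j)
                if th > th_high:        # step 202e: exclude across an edge
                    continue
                label[j] = region       # steps 202d/202f: join current region
                if th < th_low or not is_edge(s):
                    queue.append(j)     # becomes a seed for the next iteration
        region += 1
    return label

# Two flat patches meeting at a right-angle fold between x = 2 and x = 3.
pts = [(float(x), 0.0, 0.0) for x in range(5)]
nrm = [(0.0, 0.0, 1.0)] * 3 + [(1.0, 0.0, 0.0)] * 2
print(grow_regions(pts, nrm, radius=1.5, th_low=10.0, th_high=45.0, k=0.3))
# [0, 0, 0, 1, 1]
```

In this toy cloud, the 90-degree jump in normal direction at the fold exceeds θ_high, so growth stops there and the two patches receive different labels.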
Once the plurality of surfaces is identified in the 3D point cloud, a search is initiated for suitable handles that may be used by the gripper for picking the objects. As explained above, in accordance with the present disclosure, the search for a suitable 6-DOF grasp pose is reduced to a one-dimensional search problem. Again, the gripper is assumed to approach the object in a direction opposite to the surface normal of the object. It is also assumed that the gripper closing plane coincides with the minor axis of the surface under consideration as shown in FIG.2B. However, it is still necessary to identify graspable regions from the plurality of surfaces (identified at step 202) that may fit within the fingers of the gripper while ensuring that the gripper does not collide with neighboring objects. Accordingly, in the illustration of FIG.2A through FIG.2C, a search has to be performed for a 3D cuboid of dimensions (l×b×e) around the centroid of the object as shown in FIG.2B. This requires carrying out a linear search along the three principal axes of the surface to find regions that meet the bounding box constraint. These regions are the graspable affordances for the object to be picked by the gripper.
After step 202, say S surfaces are created in the 3D point cloud C ⊂ ℝ³. In accordance with an embodiment of the present disclosure, the one or more processors 104 are configured to extract, at step 204a, a plurality of parameters for each of the identified plurality of surfaces, wherein the plurality of parameters include the following:
centroid of each of the plurality of surfaces, µ_s = [µ_x^s, µ_y^s, µ_z^s]
surface normal vectors associated with each of the surfaces, n̂_s ∈ ℝ³
two dominant directions (â and f̂ in FIG.2B) obtained using Principal Component Analysis (PCA) and their corresponding lengths.
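The parameter extraction of step 204a may be sketched with PCA as follows; the function name, the eigenvector-ordering convention (NumPy's `eigh` returns eigenvalues in ascending order) and the use of the point spread as "length" are assumptions for illustration:

```python
import numpy as np

def surface_parameters(points):
    """Centroid, surface normal and the two dominant in-plane directions of a
    surface patch via PCA. The eigenvector with the smallest eigenvalue
    approximates the normal; the other two are the minor and major axes."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    centered = pts - centroid
    cov = centered.T @ centered / len(pts)
    eigvals, eigvecs = np.linalg.eigh(cov)      # ascending eigenvalues
    normal = eigvecs[:, 0]                      # least variance -> normal
    minor, major = eigvecs[:, 1], eigvecs[:, 2]
    lengths = [np.ptp(centered @ major), np.ptp(centered @ minor)]
    return centroid, normal, major, minor, lengths

# A flat 4 x 2 rectangle of points in the z = 0 plane.
rect = [(x, y, 0.0) for x in (0, 1, 2, 3, 4) for y in (0, 1, 2)]
c, n, a, f, (la, lf) = surface_parameters(rect)
print(c)          # centroid of the rectangle, (2, 1, 0)
print(abs(n[2]))  # ~1.0: the recovered normal is aligned with z
```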
In accordance with an embodiment of the present disclosure, the one or more processors 104 are configured to initiate searching, at step 204b, handles for gripping the objects by starting at the centroid µ_s of each of the plurality of surfaces and proceeding along the three principal axes including a major axis â, a minor axis f̂ and a surface normal n̂. In accordance with the present disclosure, at step 204b-1, the points on a surface s identified in the 3D point cloud are projected onto the new axes (â, f̂, n̂). FIG.6 illustrates scalar projection of a 3D point cloud to a new coordinate frame in accordance with an embodiment of the present disclosure. Every point p_O = (x_1, y_1, z_1) in the original coordinate system (x̂, ŷ, ẑ, O) that lies within a sphere of radius d/2 results in a vector q_O′ = (f_1, a_1, n_1) in the new coordinate system (f̂, â, n̂, O′), wherein d represents a maximum hand aperture of the gripper. The third axes n̂ and ẑ are not displayed in the figure since they are normal to the plane of illustration.
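The scalar projection of step 204b-1 may be sketched as follows; the function name and the frame vectors below are hypothetical unit axes chosen for illustration:

```python
import numpy as np

def project_to_surface_frame(points, centroid, f_hat, a_hat, n_hat):
    """Express each cloud point in the surface-aligned frame (f, a, n)
    centred at the surface centroid via scalar projection."""
    basis = np.stack([f_hat, a_hat, n_hat], axis=1)   # columns = new axes
    return (np.asarray(points, dtype=float) - centroid) @ basis

# Hypothetical frame: minor axis along y, major along x, normal along z.
out = project_to_surface_frame([(3.0, 1.0, 0.5)], centroid=(2.0, 1.0, 0.0),
                               f_hat=(0, 1, 0), a_hat=(1, 0, 0), n_hat=(0, 0, 1))
print(out)   # f = 0.0, a = 1.0, n = 0.5
```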
The search is first performed along the directions â and n̂ respectively. All the points that lie within the radius e/2 around the centroid µ_s are considered to be a part of the gripper handle. In accordance with an embodiment of the present disclosure, the one or more processors 104 are configured to fix a first boundary along the major axis â, at step 204b-2, at a distance of e/2 on either side of the centroid µ_s, wherein e represents the width of the fingers of the gripper. Similarly, all points of the surface that lie within the radius l along the direction of n̂ are considered to be part of the gripper handle. In accordance with an embodiment of the present disclosure, the one or more processors 104 are configured to fix a second boundary along the normal axis n̂, at step 204b-3, at a radius l from the centroid µ_s, wherein l represents the length of the fingers of the gripper. Once the two boundaries are fixed, a horizontal patch of points extending along the minor axis f̂ is obtained. The next step involves finding a boundary along the minor axis f̂ to check if it would fit within the gripper finger gap.
In accordance with an embodiment of the present disclosure, the one or more processors 104 are configured to search for a gap along the minor axis f̂, at step 204b-4, on either side of the centroid µ_s. The gap needs to be bigger than a user defined threshold, which itself depends on the thickness of the gripper finger, to ensure that there is sufficient gap between two objects to avoid collision with neighboring objects. Accordingly, in an embodiment of the present disclosure, the one or more processors 104 are configured to stop searching, at step 204b-5, if the gap is greater than or equal to the user defined threshold, and output a patch forming a part of at least one of the plurality of surfaces. In an embodiment, the one or more processors 104 are configured to identify, at step 204b-6, the patch as a valid grasping handle for each of the objects in the environment when a total length of the patch along the minor axis f̂ is less than the maximum hand aperture d of the gripper. In accordance with the present disclosure, it is possible to obtain multiple grasping handles on a same object, which is useful as a robot motion planner may not be able to provide a valid end-effector trajectory for a given graspable affordance. The pose detection problem is thus reduced to a simple 1-D search problem which is solved much faster, without using complex optimization methods, in step 204 and obtains a remarkable improvement over state-of-the-art methods.
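Steps 204b-2 through 204b-6 may be sketched as a one-dimensional search over the projected points. The parameter names follow the description above, while the function name and the synthetic points are illustrative assumptions:

```python
def find_handle(projected, e, l, d, gap_threshold):
    """Search for a valid handle among points already expressed in the
    surface frame (f, a, n) centred at the centroid. Here e is the finger
    width, l the finger length and d the maximum hand aperture."""
    # Steps 204b-2 and 204b-3: keep the band within e/2 along the major
    # axis and within l along the surface normal.
    f_vals = sorted(f for f, a, n in projected if abs(a) <= e / 2 and abs(n) <= l)
    if not f_vals:
        return None
    # Step 204b-4: walk outward from the centroid along the minor axis and
    # stop at the first gap wider than the user-defined threshold.
    lo = max((f for f in f_vals if f <= 0), default=0.0)
    hi = min((f for f in f_vals if f >= 0), default=0.0)
    for f in sorted((f for f in f_vals if f < lo), reverse=True):
        if lo - f >= gap_threshold:
            break
        lo = f
    for f in (f for f in f_vals if f > hi):
        if f - hi >= gap_threshold:
            break
        hi = f
    # Steps 204b-5/204b-6: the patch is a valid handle only if its total
    # length along the minor axis fits within the hand aperture d.
    return (lo, hi) if hi - lo < d else None

# Points every 0.5 units along f, plus an outlier beyond a 3-unit gap.
pts = [(f / 2.0, 0.0, 0.0) for f in range(-4, 5)] + [(5.0, 0.0, 0.0)]
print(find_handle(pts, e=2.0, l=4.0, d=6.0, gap_threshold=1.0))   # (-2.0, 2.0)
```

The outlier at f = 5.0 lies beyond a gap wider than the threshold, so the search stops and the reported patch excludes it; shrinking the aperture d below the patch length would instead reject the handle.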
EXPERIMENTAL RESULTS
The input to a method of the present disclosure is a 3D point cloud obtained from an RGBD or a range sensor and the output is a set of graspable affordances comprising graspable regions and the gripper pose required to pick the objects. The method was particularly tested on datasets obtained using Kinect, RealSense and Ensenso depth sensors. An additional smoothing pre-processing step was applied to the Ensenso point clouds, which are otherwise quite noisy compared to those obtained using either Kinect or RealSense sensors. The method of the present disclosure was tested in an extremely cluttered environment. The performance of the method was compared with other methods on seven different datasets, namely (1) Big Bird dataset, (2) Cornell grasping dataset, (3) ECCV dataset, (4) Kinect dataset, (5) Willow Garage dataset, (6) in-house dataset1 and (7) in-house dataset2. The in-house dataset1 contains 382 frames, each having only a single object in its view inside the bin of a rack where the view could be slightly constrained due to poor illumination. Similarly, the in-house dataset2 consists of 40 frames with multiple objects in an extreme clutter environment. Each dataset contains RGB images, point cloud data (as .pcd files) and annotations in text format. These datasets exhibit more difficult real world scenarios compared to what is available in existing datasets. The method was implemented on a Linux laptop with an i7 processor and 16GB RAM.
Performance measure: Different parameters may be used to evaluate the performance of a method; for instance, recall at high precision or accuracy. In some cases, accuracy may not be a good measure for grasping methods because the number of true negatives in a grasping dataset is usually much larger than the number of true positives. So, the accuracy could be high even when the number of true positives (actual handles detected) is low (or the precision is low). Success rate is another performance measure, defined as the number of times a robot is able to successfully pick an object in a physical experiment. The success rate is usually directly linked to the precision of the method as false detections or mistakes could be detrimental to the robot operation. In other words, a grasping method with high precision is expected to yield a high success rate. The precision is usually defined as the fraction of the total number of handles detected which are true. However, in a cluttered scenario, the precision may not always provide an effective measure to evaluate the performance of the grasping algorithm. For instance, it is possible to detect multiple handles for some objects and no handles at all for some others, without affecting the total precision score. In other words, the fact that no handles are detected for a set of objects may not have any effect on the final score as long as there are other objects for which more than one handle is detected. In the case of the method of the present disclosure, the precision is considered to be 100% as any handle that does not satisfy the gripper and the environment constraints is rejected. In order to address the concerns mentioned above, recall at high precision is used as the measure of the performance of the method of the present disclosure, which is defined as the fraction of the total number of graspable objects for which at least one valid handle is detected.
Mathematically, it may be written as:
recall % = (No. of objects for which at least one handle is detected) / (Total number of graspable objects) × 100
The total number of graspable objects includes objects which could be actually picked up by the robot gripper in a real world experiment. It excludes the objects in the clutter which cannot be picked up due to substantial occlusion. This forms the ground truth for the experiment. It may be noted that the above definition is slightly different from the conventional definition of recall in the sense that the latter may include multiple handles for a given object, which are not considered in the above definition. The performance of the method of the present disclosure is analyzed and compared with an existing state-of-the-art algorithm using the new metric defined above.
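The recall measure defined above may be computed as follows on a hypothetical per-object handle count; the list encoding (one entry per graspable object, giving the number of valid handles detected for it) is an assumption for illustration:

```python
def recall_at_high_precision(handles_per_object):
    """Fraction of graspable objects with at least one valid handle, as a
    percentage; multiple handles on one object do not raise the score."""
    detected = sum(1 for n in handles_per_object if n >= 1)
    return 100.0 * detected / len(handles_per_object)

# Six graspable objects: one carries three handles, one is missed entirely.
print(recall_at_high_precision([3, 1, 0, 1, 1, 1]))   # ~83.3
```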
Grasping of individual objects: Table 1 shows the performance of the method of the present disclosure when compared with Platt’s method using in-house dataset1. This dataset has 382 frames along with the corresponding 3D point cloud data and annotations for ground truth. It may be noted from Table 1 that the method of the present disclosure is able to find graspable affordances for objects in a larger number of frames and hence is more robust compared to Platt’s method.
Table 1: Performance comparison for in-house dataset1 – Individual objects
Object            Total no.     % of frames where a valid handle is detected
                  of frames     Platt’s method    Method of the present disclosure
Toothpaste        40            38                90
Cup               50            70                96
Dove Soap         40            25                100
Fevicol           40            75                92
Battery           50            36                98
Clips             21            45                90
Cleaning brush    40            30                90
Sprout brush      21            63                95
Devi Coffee       40            76                93
Tissue paper      40            40                96
Total             382           51                94
On an average, the method of the present disclosure is able to detect handles in 94% of the frames compared to Platt’s method which can detect handles in only 51% of the frames. This could be attributed to the fact that Platt’s method primarily relies on surface curvature to find handles and hence cannot deal with rectangular objects with flat surfaces. Platt’s method tries to overcome this limitation by training an SVM classifier to detect valid grasps out of a number of hypotheses created using histogram of oriented gradients (HoG) features. Compared to this approach, the method of the present disclosure is much simpler to implement as it does not require any training and can be implemented in real-time. It also does not depend on image features which are more susceptible to various photometric effects.
Grasping objects in a clutter: In-house dataset2, containing 40 frames with each one showing multiple objects in an extreme clutter situation, was used. The objects in the clutter have different shapes and sizes and may exhibit partial or full occlusion. Table 2 provides a quantitative comparison between Platt’s method and the method of the present disclosure.
Table 2: Performance comparison for in-house dataset2 – Multiple objects in cluttered environment
Frame No.   No. of graspable        Platt’s method                             Method of the present disclosure
            objects in the frame    Max. no. of handles    % Recall            Max. no. of handles    % Recall
#1          8                       2                      25                  6                      75
#3          8                       3                      38                  6                      75
#5          6                       3                      50                  6                      100
#7          7                       2                      28                  7                      100
#10         6                       3                      50                  5                      83
#12         7                       2                      28                  7                      100
#13         7                       2                      28                  7                      100
#16         8                       1                      13                  6                      75
#20         8                       2                      25                  6                      75
#23         9                       2                      22                  8                      89
#24         6                       3                      50                  5                      83
#26         5                       3                      60                  3                      60
#28         5                       2                      40                  5                      100
#30         6                       2                      33                  6                      100
#32         6                       2                      33                  5                      83
#37         5                       1                      20                  5                      100
#38         2                       2                      100                 2                      100
#39         4                       2                      50                  3                      75
#36         5                       1                      20                  4                      80
Total       118                     40                     33                  102                    86
Table 2 shows that the method of the present disclosure achieves 86% recall on this dataset compared to the 33% recall achieved with Platt’s method.
The performance of the two methods on various publicly available datasets is summarized in Table 3.
Table 3: Performance comparison on various publicly available datasets
S.No.   Dataset             % Recall
                            Method of the present disclosure    Platt’s method
1       Big Bird            99%                                 85%
2       Cornell dataset     95.7%                               93.7%
3       ECCV                93%                                 53%
4       Kinect dataset      91%                                 52%
5       Willow garage       98%                                 60%
6       In-house dataset1   94%                                 48%
7       In-house dataset2   85%                                 34%
The Cornell Grasping Dataset contains a single object per frame and a grasping rectangle as ground truth. Their best reported result (93.7%) is in terms of accuracy, whereas the recall from the method of the present disclosure is 96% at 100% precision. The Big Bird dataset consists of segmented individual objects and yields a maximum recall of 99%. This high level of performance is due to the fact that the object point cloud is segmented and processed for noise removal. This dataset, as such, does not include clutter and has been included in this section for the sake of completeness. The ECCV dataset, Kinect dataset and the Willow garage dataset have multiple objects in one frame and may exhibit a low level of clutter. All of these datasets were created for either segmentation or pose estimation purposes; therefore, ground truth for grasping is not provided. The performance was evaluated (as reported in Table 3) using manual annotation. The extent of clutter in these datasets is not comparable to what one would encounter in a real world scenario. The in-house dataset1 and in-house dataset2 were created for these reasons. As seen in Table 2, the method of the present disclosure provides better grasping performance compared to the current state-of-the-art reported in literature.
Computation time: The computational performance of the method of the present disclosure may be assessed by analyzing the Table 4.
Table 4: Average computation time per frame. All values are reported on a per-frame basis and are averaged over all frames.
Dataset             # data in      Time for region         # segments    # valid handles    Handle detection
                    point cloud    growing method (sec)    detected      detected           time (sec)
In-house dataset1   37050          0.729                   77            10                 0.055
In-house dataset2   42461          0.82                    182           43                 0.171
This table shows the average computation time per frame for the two in-house datasets. As observed, the bulk of the time is taken by the region growing method, which is the first step of the method of the present disclosure. This time is proportional to the size of the point cloud data. The second stage of the method of the present disclosure detects valid handles by applying geometric constraints on the surface segments found in the first step. This step is considerably faster compared to the first step. Many of the segments created in the first step are rejected in the second step to identify valid grasping handles, as can be seen in the 4th and 5th columns of Table 4. The computation time for each valid handle for the two datasets is 4 ms and 5 ms respectively. The total processing time for a complete frame with around 40K data points is approximately 800 ms to 1 second. This is quite reasonable in the sense that the robot can process around 60 frames per minute, which is very good for most industrial applications. This time can be further reduced by detecting a particular ROI within the image, thereby reducing the number of points to be processed in the frame. The computation time per frame can also be reduced significantly by down-sampling the point cloud. There is a limit to the extent of down-sampling allowed as it is directly linked to the quality and quantity of handles detected. For high speed applications, one may use an FPGA or GPU based embedded computing platform.
Thus, in accordance with the present disclosure, systems and methods of the present disclosure are directed to finding graspable affordances needed for picking objects using a two finger parallel-jaw gripper in an extreme clutter environment. These affordances are extracted from a single view 3D point cloud obtained from an RGBD or a range sensor without any a priori knowledge of object geometry. The problem is solved by first creating surface segments using a modified version of the region growing method based on a surface smoothness condition. The modified version of the region growing algorithm makes use of a pair of empirically defined thresholds and a concept called edge point to discard false boundaries arising out of sensor noise. The problem of real-time pose detection is simplified by transforming the 6D search problem to a 1D search problem through scalar projection and exploiting the geometry of the gripper.
The written description describes the subject matter herein to enable any person skilled in the art to make and use the embodiments. The scope of the subject matter embodiments is defined by the claims and may include other modifications that occur to those skilled in the art. Such other modifications are intended to be within the scope of the claims if they have similar elements that do not differ from the literal language of the claims or if they include equivalent elements with insubstantial differences from the literal language of the claims.
It is to be understood that the scope of the protection is extended to such a program and in addition to a computer-readable means having a message therein; such computer-readable storage means contain program-code means for implementation of one or more steps of the method, when the program runs on a server or mobile device or any suitable programmable device. The hardware device can be any kind of device which can be programmed including e.g. any kind of computer like a server or a personal computer, or the like, or any combination thereof. The device may also include means which could be e.g. hardware means like e.g. an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a combination of hardware and software means, e.g. an ASIC and an FPGA, or at least one microprocessor and at least one memory with software modules located therein. Thus, the means can include both hardware means and software means. The method embodiments described herein could be implemented in hardware and software. The device may also include software means. Alternatively, the embodiments may be implemented on different hardware devices, e.g. using a plurality of CPUs.
The embodiments herein can comprise hardware and software elements. The embodiments that are implemented in software include but are not limited to, firmware, resident software, microcode, etc. The functions performed by various modules described herein may be implemented in other modules or combinations of other modules. For the purposes of this description, a computer-usable or computer readable medium can be any apparatus that can comprise, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
The illustrated steps are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments. Also, the words “comprising,” “having,” “containing,” and “including,” and other similar forms are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items. It must also be noted that as used herein and in the appended claims, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise.
Furthermore, one or more computer-readable storage media may be utilized in implementing embodiments consistent with the present disclosure. A computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored. Thus, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein. The term “computer-readable medium” should be understood to include tangible items and exclude carrier waves and transient signals, i.e., be non-transitory. Examples include random access memory (RAM), read-only memory (ROM), volatile memory, nonvolatile memory, hard drives, CD ROMs, DVDs, flash drives, disks, and any other known physical storage media.
It is intended that the disclosure and examples be considered as exemplary only, with a true scope and spirit of disclosed embodiments being indicated by the following claims.