Abstract: Systems, apparatuses and methods may provide for technology that identifies a plurality of segments based on semantic features and instance features associated with a scene, fuses the plurality of segments into a plurality of instances, and selects classification labels for the plurality of instances. In one example, the plurality of segments is fused into the plurality of instances via a learnable self-attention based network.
Description:
RELATED APPLICATION
[0001] The present application claims priority to U.S. Non-Provisional Patent Application No. 17/582,390, filed on 24 January 2022 and titled “SEGMENT FUSION BASED ROBUST SEMANTIC SEGMENTATION OF SCENES,” the entire disclosure of which is hereby incorporated by reference.
TECHNICAL FIELD
[0001] Embodiments generally relate to scene segmentation. More particularly, embodiments relate to segment fusion based robust semantic segmentation of scenes.
BACKGROUND OF THE DISCLOSURE
[0002] Three-dimensional (3D) semantic segmentation typically involves labeling each point in 3D point cloud data with a classification attribute (e.g., chair, table, etc.), where the semantic segmentation task may be useful in various applications such as autonomous driving, robotics, and indoor scene understanding. Conventional semantic segmentation solutions, however, may partially misclassify objects, involve complex and heuristic-driven post-processing, be limited to specific models, networks and/or scenes, and/or focus solely on the strongest clues in the scene.
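The per-point labeling task described above reduces, at inference time, to assigning each 3D point the class with the highest predicted score. A minimal NumPy sketch follows; the `CLASS_NAMES` list and the `label_points` helper are illustrative assumptions, not part of the disclosed embodiment, and the scores would come from a trained segmentation network in practice.

```python
import numpy as np

# Illustrative label set (assumed for this sketch).
CLASS_NAMES = ["chair", "table", "floor"]

def label_points(point_scores):
    """Assign each 3D point the class with the highest score (argmax).

    point_scores: (N, C) array of per-point, per-class scores, e.g.
    the output of a segmentation network's classification head.
    Returns a list of N class-name strings.
    """
    idx = np.argmax(point_scores, axis=1)
    return [CLASS_NAMES[i] for i in idx]
```

Because each point is labeled independently here, a single object can receive a mix of labels, which is exactly the partial-misclassification problem the embodiments address by fusing segments before selecting labels.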
BRIEF DESCRIPTION OF THE DRAWINGS
[0003] The various advantages of the embodiments will become apparent to one skilled in the art by reading the following specification and appended claims, and by referencing the following drawings, in which:
[0004] FIG. 1 is a block diagram of an example of a segmentation pipeline according to an embodiment;
[0005] FIG. 2 is a comparative block diagram of an example of a conventional encoder block and an enhanced encoder block according to an embodiment;
[0006] FIG. 3 is an illustration of an example of the application of an instance loss function to segments according to an embodiment;
[0007] FIG. 4 is an illustration of an example of the application of a segment loss function to segment pairs according to an embodiment;
[0008] FIG. 5 is a comparative illustration of an example of a conventional classification result and an enhanced classification result according to an embodiment;
[0009] FIG. 6 is a flowchart of an example of a method of segmenting a scene according to an embodiment;
[0010] FIG. 7 is a flowchart of an example of a method of selecting classification labels for a plurality of instances according to an embodiment;
[0011] FIG. 8 is a block diagram of an example of a performance-enhanced computing system according to an embodiment;
[0012] FIG. 9 is an illustration of an example of a semiconductor package apparatus according to an embodiment;
[0013] FIG. 10 is a block diagram of an example of a processor according to an embodiment; and
[0014] FIG. 11 is a block diagram of an example of a multi-processor based computing system according to an embodiment.
DETAILED DESCRIPTION
[0015] Previous scene segmentation solutions can be classified into two-dimensional (2D, e.g., working on 2D projected data) solutions and 3D solutions (e.g., working on 3D data). Additionally, 3D processing solutions can be broadly categorized into point-based solutions and voxel-based solutions.
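The point-based versus voxel-based distinction comes down to the input representation: point-based pipelines consume the raw (N, 3) coordinates directly, while voxel-based pipelines first quantize points into discrete grid cells. A minimal sketch of that quantization step, assuming a hypothetical `voxelize` helper and an arbitrary 5 cm cell size:

```python
import numpy as np

def voxelize(points, voxel_size=0.05):
    """Quantize 3D points into voxel grid cells.

    points: (N, 3) array of raw point coordinates.
    Returns the occupied cells and, for each point, the index of
    the cell it falls into.
    """
    voxel_coords = np.floor(points / voxel_size).astype(np.int64)
    cells, point_to_cell = np.unique(voxel_coords, axis=0, return_inverse=True)
    return cells, point_to_cell
```

Points closer together than one cell width collapse into the same voxel, which trades spatial resolution for a regular grid that convolutional networks can process efficiently.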
Claims:
1. A computing system comprising:
a network controller to obtain data corresponding to a scene;
a processor coupled to the network controller; and
a memory including a set of instructions, which, when executed by the processor, cause the processor to:
identify a plurality of segments based on semantic features, instance features and point cloud data associated with the scene,
fuse the plurality of segments into a plurality of instances, and
select classification labels for the plurality of instances.
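The fuse step recited above is, per the abstract, performed via a learnable self-attention based network. The following NumPy sketch illustrates the idea with a single untrained attention head and a cosine-similarity merge rule; the function names, the similarity threshold, and the greedy merging are illustrative assumptions, whereas the claimed embodiment would use a trained network.

```python
import numpy as np

def refine_with_self_attention(seg_feats):
    """Scaled dot-product self-attention over per-segment features.

    seg_feats: (S, D) array, one feature row per segment. Each
    segment's feature is re-expressed as an attention-weighted
    mix of all segments' features, so related segments converge.
    """
    d = seg_feats.shape[1]
    scores = seg_feats @ seg_feats.T / np.sqrt(d)
    scores -= scores.max(axis=1, keepdims=True)   # softmax stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)
    return attn @ seg_feats

def fuse_segments(seg_feats, threshold=0.9):
    """Merge segments whose refined features have high cosine similarity."""
    refined = refine_with_self_attention(seg_feats)
    refined /= np.linalg.norm(refined, axis=1, keepdims=True)
    sim = refined @ refined.T
    instance_of = np.arange(len(seg_feats))       # each segment starts alone
    for i in range(len(seg_feats)):
        for j in range(i + 1, len(seg_feats)):
            if sim[i, j] > threshold:
                instance_of[instance_of == instance_of[j]] = instance_of[i]
    return instance_of                            # per-segment instance id
```

After fusion, selecting one classification label per instance (rather than per segment) suppresses the partial misclassifications noted in the background.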
| # | Name | Date |
|---|---|---|
| 1 | 202244072883-FORM 1 [16-12-2022(online)].pdf | 2022-12-16 |
| 2 | 202244072883-DRAWINGS [16-12-2022(online)].pdf | 2022-12-16 |
| 3 | 202244072883-DECLARATION OF INVENTORSHIP (FORM 5) [16-12-2022(online)].pdf | 2022-12-16 |
| 4 | 202244072883-COMPLETE SPECIFICATION [16-12-2022(online)].pdf | 2022-12-16 |
| 5 | 202244072883-Correspondence-Letter [19-12-2022(online)].pdf | 2022-12-19 |
| 6 | 202244072883-FORM 3 [12-06-2023(online)].pdf | 2023-06-12 |
| 7 | 202244072883-FORM-26 [19-06-2023(online)].pdf | 2023-06-19 |
| 8 | 202244072883-Proof of Right [08-09-2023(online)].pdf | 2023-09-08 |
| 9 | 202244072883-FORM 3 [12-12-2023(online)].pdf | 2023-12-12 |