
Urban Digital Twin Models: Automatic Extraction of Features from Panoramic Imagery and LiDAR Data

Abstract: The present invention relates to a system (100) and method for automatic extraction of real-world geographic features from street-level 360-degree panoramic imagery acquired using a terrestrial 360-degree panoramic imaging system, augmented by LiDAR datasets acquired from aerial or terrestrial mobile mapping systems. The invention is focused on achieving better locational accuracy for physical features in the real world through automation, using multiple methods: 360-degree panoramic imaging, LiDAR datasets, and triangulation. This enables quick extraction of features from the raw and processed data with high geographic accuracy, saving considerable time and manual effort.


Patent Information

Application #
202121047009
Filing Date
14 October 2021
Publication Number
16/2023
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Parent Application

Applicants

GENESYS INTERNATIONAL CORPORATION LTD.
73 A, SDF III, SEEPZ, Andheri East, Mumbai 400 096, INDIA

Inventors

1. Dr Aniruddha Roy
Genesys International Corporation Ltd. HO- 73 A, SDF III, SEEPZ, Andheri East, Mumbai 400 096, INDIA
2. Ashitosh Prabhu
Genesys International Corporation Ltd. HO- 73 A, SDF III, SEEPZ, Andheri East, Mumbai 400 096, INDIA

Specification

DESCRIPTION OF INVENTION
[0013] Exemplary embodiments are described with reference to the accompanying drawings. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. While examples and features of disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the spirit and scope of the disclosed embodiments. It is intended that the following detailed description be considered as exemplary only, with the true scope and spirit being indicated by the following claims.
[0014] The various implementations may be found in a method and/or a system for providing virtual services. The system (100) comprises a processor (102) and a memory (104) communicatively coupled to the processor. The memory (104) stores processor-executable instructions which, on execution, cause the processor (102) to generate a 3D model/digital twin of street assets. Digital twins are 3D digital replicas of physical real-world features; extracting 3D information from various input sources manually is a time-consuming process that requires substantial human effort. These digital twins are created using aerial surveys by manned aircraft carrying LiDAR, optical, and Near Infrared (NIR) sensors (mounted at nadir and oblique angles) as payload. At the ground level, surveys are conducted using terrestrial mobile mapping vans equipped with LiDAR sensors and 360-degree panoramic imaging systems. The datasets generated from all these sensors, coupled with ground-truth datasets, are fused to create an "Urban Digital Twin" with high accuracy and Level of Detail (LOD) of the physical assets on the ground. The method describes an automated way to extract real-world coordinates of street furniture, such as traffic signs, street poles, manholes, and bus stops, from non-georeferenced equi-rectangular street-view imagery captured at eye level, using computer vision.
[0015] FIG. 1 is a block diagram of a system (100) for generating an AI model/3D model, in accordance with some embodiments of the present disclosure. The processor (102) may include suitable logic, circuitry, interfaces, and/or code that may be configured to execute a set of instructions stored in the memory (104). The processor (102) may be configured to execute commands of the artificial intelligence and machine learning (AI/ML) module (106). The processor (102) may be further configured to receive datasets as input for the AI/ML module. To this end, the processor (102) may perform steps comprising extracting the coordinate point cloud data to obtain the 3D locations of all street assets. Examples of the processor (102) may be an X86-based processor, a Reduced Instruction Set Computing (RISC) processor, an Application-Specific Integrated Circuit (ASIC) processor, a Complex Instruction Set Computing (CISC) processor, and/or other processors.
[0016] Further, the memory (104) may include suitable logic, circuitry, and/or interfaces that may be configured to store machine code and/or a computer program with at least one code section executable by the processor (102). Examples of implementation of the memory (104) may include, but are not limited to, Random Access Memory (RAM), Read Only Memory (ROM), Hard Disk Drive (HDD), and/or a Secure Digital (SD) card.
[0017] In an implementation, the memory (104) may be integrated with a plurality of databases, such as a services database, an executive database, a user database, and/or a multimedia database. The services database may be configured to store the details of data points generated by the system. The database may be configured to store the details of the coordinate point cloud data from which the 3D model of all street assets is obtained. A person of ordinary skill in the art will appreciate that the data stored in the databases described above may be stored in a structured or an unstructured data format. Examples of implementation of the databases described above may include, but are not limited to, secure databases such as Amazon Web Services Cloud (AWS®), Microsoft Azure®, Cosmos DB®, Oracle® Database, Sybase®, MySQL®, Microsoft Access®, Microsoft SQL Server®, FileMaker Pro™, and dBASE®. A person of ordinary skill in the art will appreciate that in an alternate implementation, the databases described above may be implemented as an entity that is separate from the memory, without limiting the scope of this disclosure.
[0018] Further, the datasets generated may be periodically updated using unmanned aerial vehicles and ancillary data sources. The new data products serve to solve governance problems and also offer citizen-centric applications in desktop and web-based environments. Through automation techniques using artificial intelligence and machine learning, the datasets acquired from the panoramic imagery and the LiDAR point clouds are processed so that geographic features are not only identified based on their size, shape, pattern, etc., but are also located accurately in the derived maps and 3D models that form part of the Urban Digital Twin. The artificial intelligence and machine learning module is configured to generate 3D locations from the global positioning system (GPS) and LiDAR datasets. The module uses open-source libraries based on PyTorch to perform instance segmentation and to map the resulting masks to 3D point clouds.
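As an illustration of the instance-segmentation step, the minimal sketch below runs torchvision's pre-trained Mask R-CNN over a single panoramic frame. The model choice, file path, and variable names are assumptions for illustration; the disclosure only states that open-source, PyTorch-based libraries are used.

```python
# Hypothetical sketch of instance segmentation on an equi-rectangular frame
# using torchvision's Mask R-CNN (a stand-in for the PyTorch-based models
# described in the disclosure).
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# Assumed input path: one equi-rectangular street-view frame.
image = Image.open("pano_frame.jpg").convert("RGB")
with torch.no_grad():
    outputs = model([to_tensor(image)])[0]

masks = outputs["masks"]    # (N, 1, H, W) soft instance masks
scores = outputs["scores"]  # (N,) confidence values in the 0-1 range
labels = outputs["labels"]  # (N,) predicted class indices
```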
[0019] FIG. 2 is a block diagram of an artificial intelligence and machine learning module (AI/ML) (106), in accordance with some embodiments of the present disclosure. The artificial intelligence and machine learning module (AI/ML) (106) is a deep learning neural network with multiple layers. It is a deep neural network or convolutional neural network in which multiple layers of processing are used to extract higher-level features from image data, with PyTorch used as the library. The processing is divided into a two-stage process/model to distinguish between different objects. The first stage comprises obtaining a 360-degree view of equi-rectangular imagery, at step 202. The steps further comprise creating an annotated dataset from the equi-rectangular imagery, at step 204, and training street furniture models for instance segmentation from the annotated dataset, at step 206. The steps further comprise assessing the model accuracy on the segmented dataset, at step 208. If the accuracy of the model is improving, the model is saved and instance segmentation is performed; if the accuracy is reduced, the dataset is sent back to the step of creating the annotated dataset until model accuracy within a predetermined range is achieved. The steps further comprise masking the results of the model at a pre-defined threshold, at step 210. In one embodiment, every prediction has an associated confidence value between 0 and 100. Predictions are retained only if the confidence value is above a certain value called the threshold. It should be noted that this value is different for every class.
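A minimal sketch of the per-class threshold filtering of step 210 follows. The class names and threshold values here are assumptions, not the configuration used in the disclosure; confidence is kept on the 0-100 scale described above.

```python
# Hypothetical per-class confidence thresholds on the 0-100 scale from [0019].
CLASS_THRESHOLDS = {
    "traffic_sign": 80.0,
    "street_pole": 70.0,
    "manhole": 60.0,
    "bus_stop": 75.0,
}

def filter_predictions(predictions, default_threshold=50.0):
    """Keep only predictions whose confidence exceeds their class threshold.

    Each prediction is assumed to be a dict with a "class_name" string,
    a "confidence" value in 0-100, and a "mask" array.
    """
    kept = []
    for pred in predictions:
        threshold = CLASS_THRESHOLDS.get(pred["class_name"], default_threshold)
        if pred["confidence"] >= threshold:
            kept.append(pred)
    return kept
```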
[0020] Further, the second stage comprises collecting a mobile LiDAR point cloud, at step 216. The steps further comprise identifying the LiDAR point cloud files via overlapping imagery, at step 218, and overlaying the LiDAR point cloud with the imagery by combining the identified LiDAR point cloud files and the mask results of the model, at step 220.
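The overlay of step 220 can be sketched as projecting LiDAR points, once transformed into the panoramic camera's frame, onto the equi-rectangular image so that they can be tested against the predicted masks. The axis convention (y up, z forward) is an assumption; the disclosure does not specify one.

```python
import numpy as np

def project_to_equirectangular(points, width, height):
    """Map 3D points (N, 3) in the camera frame to (u, v) pixel coordinates.

    Assumes y points up and z points forward; azimuth spans the full
    horizontal extent of the equi-rectangular image.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    radius = np.linalg.norm(points, axis=1)
    azimuth = np.arctan2(x, z)          # -pi .. pi around the vertical axis
    elevation = np.arcsin(y / radius)   # -pi/2 .. pi/2 above/below horizon
    u = (azimuth / (2 * np.pi) + 0.5) * width
    v = (0.5 - elevation / np.pi) * height
    return np.stack([u, v], axis=1)
```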
[0021] Further, the steps comprise generating a ray from the car position to each masked pixel, using the point cloud overlaid with the imagery and the car's GPS sensor information, at step 212. The steps further comprise extracting the coordinate point cloud data to obtain the 3D model of all street assets, at step 214.
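Steps 212 and 214 can be sketched as the inverse mapping: a masked pixel is converted into a ray from the car's GPS position, and the LiDAR point nearest that ray yields the asset's 3D coordinate. The 1-metre tolerance and the frame convention are assumed parameters, consistent with the projection sketch above.

```python
import numpy as np

def pixel_to_ray(u, v, width, height):
    """Inverse equi-rectangular mapping: pixel -> unit direction vector."""
    azimuth = (u / width - 0.5) * 2 * np.pi
    elevation = (0.5 - v / height) * np.pi
    return np.array([
        np.cos(elevation) * np.sin(azimuth),  # x: right
        np.sin(elevation),                    # y: up
        np.cos(elevation) * np.cos(azimuth),  # z: forward
    ])

def nearest_point_on_ray(origin, direction, cloud, tolerance=1.0):
    """Return the closest cloud point within `tolerance` metres of the ray."""
    rel = cloud - origin                  # (N, 3) vectors from car to points
    t = rel @ direction                   # signed distance along the ray
    off_ray = np.linalg.norm(rel - np.outer(t, direction), axis=1)
    candidates = np.where((t > 0) & (off_ray < tolerance))[0]
    if candidates.size == 0:
        return None                       # ray misses the cloud entirely
    return cloud[candidates[np.argmin(t[candidates])]]
```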
[0022] In an embodiment, the coordinate point cloud data is presented on screen for 3D asset extraction from street-view data (300), as shown in FIG. 3. The artificial intelligence (AI) and machine learning (ML) technologies, together with other advanced analytics, may then process this data to deliver insights and model future scenarios in areas such as emergency management, safety and security, flood modelling, telecommunications, infrastructure project planning and monitoring, utility network planning and maintenance, asset management, tax compliance, and last-mile connectivity.
[0023] Further, the data is acquired from street-level 360-degree panoramic imagery captured using a terrestrial 360-degree panoramic imaging system and augmented by LiDAR datasets acquired from aerial or terrestrial mobile mapping systems. The invention is focused on achieving better locational accuracy for physical features in the real world through automation, using multiple methods: 360-degree panoramic imaging, LiDAR datasets, and triangulation. This enables quick extraction of the features from the raw and processed data with high geographic accuracy, saving considerable time and manual effort.
[0024] In practice, the components used, as well as their numbers, shapes, and sizes, may vary according to the technical requirements. The scope of protection of the invention is therefore defined by the attached claims.
CLAIMS

We Claim:

1. A system (100) for automatic extraction of 3D location coordinates and identification of features as they exist in the real world, the system (100) comprising:
a processor (102); and
a memory (104) storing instructions for execution by the processor (102), wherein the processor (102) is configured by the instructions for:
executing an artificial intelligence and machine learning module (AI/ML) (106), including:
obtaining a 360-degree view of equi-rectangular imagery, at step 202;
creating an annotated dataset from the equi-rectangular imagery, at step 204;
training street furniture models for instance segmentation from the annotated dataset, at step 206;
assessing the model accuracy on the segmented dataset, at step 208, wherein if the accuracy of the model is improving, the model is saved and instance segmentation is performed, and if the accuracy is reduced, the dataset is sent back to the step of creating the annotated dataset until model accuracy within a predetermined range is achieved;
masking results of the model at a pre-defined threshold, at step 210;
generating a ray from the car position to a masked pixel, using the point cloud overlaid with the imagery and the car's GPS sensor information, at step 212; and
extracting the coordinate point cloud data to obtain the 3D model of all street assets, at step 214.

2. The system (100) of claim 1, wherein the artificial intelligence and machine learning module (106) is further configured for:
collecting a mobile LiDAR point cloud, at step 216;
identifying the LiDAR point cloud files via overlapping imagery, at step 218; and
overlaying the LiDAR point cloud with the imagery by combining the identified LiDAR point cloud files and the mask results of the model, at step 220.

3. The system (100) of claim 1, wherein the artificial intelligence and machine learning module (106) is a deep learning neural network with multiple layers.

4. The system (100) of claim 1, wherein the artificial intelligence and machine learning module (106) is configured to generate 3D locations from the global positioning system (GPS) and LiDAR datasets.

Documents

Application Documents

# Name Date
1 202121047009-STATEMENT OF UNDERTAKING (FORM 3) [14-10-2021(online)].pdf 2021-10-14
2 202121047009-PROVISIONAL SPECIFICATION [14-10-2021(online)].pdf 2021-10-14
3 202121047009-PROOF OF RIGHT [14-10-2021(online)].pdf 2021-10-14
4 202121047009-POWER OF AUTHORITY [14-10-2021(online)].pdf 2021-10-14
5 202121047009-FORM 1 [14-10-2021(online)].pdf 2021-10-14
6 202121047009-FIGURE OF ABSTRACT [14-10-2021(online)].jpg 2021-10-14
7 202121047009-DRAWINGS [14-10-2021(online)].pdf 2021-10-14
8 202121047009-DECLARATION OF INVENTORSHIP (FORM 5) [14-10-2021(online)].pdf 2021-10-14
9 202121047009-DRAWING [14-10-2022(online)].pdf 2022-10-14
10 202121047009-CORRESPONDENCE-OTHERS [14-10-2022(online)].pdf 2022-10-14
11 202121047009-COMPLETE SPECIFICATION [14-10-2022(online)].pdf 2022-10-14
12 Abstract1.jpg 2022-11-18
13 202121047009-Proof of Right [28-12-2022(online)].pdf 2022-12-28
14 202121047009-FORM-26 [28-12-2022(online)].pdf 2022-12-28
15 202121047009-FORM 18 [13-10-2025(online)].pdf 2025-10-13