
Solid State Semiconductor Lidar Sensor For Smart Visual Docking Guidance System

Abstract: Disclosed is a solid-state semiconductor light detection and ranging (LiDAR) sensor for a smart visual docking guidance system (SVDGS) at an airport having at least one parking bay. The disclosed sensor comprises a solid-state emitter module adapted to emit laser signals towards an approaching aircraft in the at least one parking bay, such that a portion of the emitted laser signals is reflected by the aircraft. Further, the disclosed sensor comprises a solid-state receiver module adapted to receive the reflected laser signals from the approaching aircraft. Furthermore, the disclosed sensor comprises a controller operably coupled to the solid-state emitter module and the solid-state receiver module. The controller is configured to control the emission of the laser signals using a beam steering mechanism. The controller is further adapted to obtain the received reflected laser signals and process them to generate one or more depth scans associated with the aircraft.


Patent Information

Application #: 202311015751
Filing Date: 09 March 2023
Publication Number: 37/2024
Publication Type: INA
Invention Field: PHYSICS

Applicants

INXEE SYSTEMS PRIVATE LIMITED
144 Udyog Vihar Phase 1, Sector 20, Gurgaon, 122 016, Haryana, India

Inventors

1. NUDURUPATI, Srinath
364 FF, Sector 23, Gurgaon 122017, Haryana, India

Specification

TECHNICAL FIELD
[0001] Embodiments of the present disclosure are generally directed to the field of docking aircraft at an airport, and more particularly relate to a smart visual docking guidance system (SVDGS) and a solid-state semiconductor light detection and ranging (LiDAR) sensor to be used in the SVDGS.
BACKGROUND
[0002] For many years, aircraft docking systems have been playing a significant role in the aviation sector. The aircraft docking systems ensure that an aircraft is securely connected to ground power and other services. Further, the aircraft docking systems are used to guide the aircraft safely and accurately to its parking position at the terminal or gate.
[0003] A conventional aircraft docking system includes a laser transceiver, a camera, and/or a radar, which are utilized to acquire real-time data for docking aircraft. Additionally, the system includes a control unit for processing the acquired real-time data. Further, the system includes a display unit to provide a visual display and guidance to pilots for docking the aircraft in the designated parking bay. During an active docking operation, the aircraft docking system usually displays information related to the guidance of the aircraft's docking procedure.
[0004] With recent advancements, aircraft docking systems have been implemented using sensor fusion technology. The sensor fusion in the advanced aircraft docking systems includes a camera sensor, a radar sensor, and an analog light detection and ranging (LiDAR) sensor. The currently implemented sensor fusion technology in the advanced aircraft docking systems requires high-performance hardware to run complex software algorithms, resulting in increased costs and inferior performance.
[0005] Moreover, traditional LiDAR-based systems are electromechanical and rely on moving parts that must be precise and accurate in order to obtain measurements suitable for autonomous navigation. Such measurements are obtained from laser photons, which are reflected by the surface of an object and concentrated into a collector that then determines the distance of the object from the system. In particular, the laser and the collector are required to rotate in order to scan the area around the system.
[0006] The inclusion of moving parts imposes limits on the size of the system. Making the moving parts small and compact would not only increase the challenges in achieving the necessary precision during manufacturing but would also drive up the cost of production of the complete system.
[0007] Accordingly, there lies a need to provide an improved aircraft docking system to overcome the above-described limitations.
SUMMARY
[0008] This summary is provided to introduce a selection of concepts, in a simplified format, that are further described in the detailed description of the invention. This summary is neither intended to identify essential inventive concepts of the invention nor is it intended for determining the scope of the invention.
[0009] According to one embodiment of the present disclosure, disclosed herein is a solid-state semiconductor light detection and ranging (LiDAR) sensor for a smart visual docking guidance system (SVDGS) at an airport having at least one parking bay. The solid-state semiconductor LiDAR sensor comprises a solid-state emitter module adapted to emit laser signals towards an approaching aircraft in the at least one parking bay, such that a portion of the emitted laser signals is reflected by the aircraft. Further, the solid-state semiconductor LiDAR sensor comprises a solid-state receiver module adapted to receive the reflected laser signals from the approaching aircraft. Furthermore, the solid-state semiconductor LiDAR sensor comprises a controller operably coupled to the solid-state emitter module and the solid-state receiver module. The controller is configured to control the emission of the laser signals using a beam steering mechanism. The controller is further adapted to obtain the received reflected laser signals and process them to generate one or more depth scans associated with the aircraft using artificial intelligence and machine learning (AI/ML) and time-of-flight techniques.
[0010] According to another embodiment of the present disclosure, disclosed is a smart visual docking guidance system (SVDGS) at an airport having at least one parking bay. The SVDGS comprises a solid-state semiconductor LiDAR sensor and a control unit coupled with the solid-state semiconductor LiDAR sensor. The solid-state semiconductor LiDAR sensor comprises a solid-state emitter module adapted to emit laser signals towards an approaching aircraft in the at least one parking bay, such that a portion of the emitted laser signals is reflected by the aircraft. Further, the solid-state semiconductor LiDAR sensor comprises a solid-state receiver module adapted to receive the reflected laser signals from the approaching aircraft. Furthermore, the solid-state semiconductor LiDAR sensor comprises a controller operably coupled to the solid-state emitter module and the solid-state receiver module. The controller is configured to control the emission of the laser signals using a beam steering mechanism. The controller is further adapted to obtain the received reflected laser signals and process them to generate one or more depth scans associated with the aircraft using advanced AI/ML and time-of-flight techniques.
[0011] Further, the control unit is equipped with an advanced AI/ML engine and configured to receive the one or more depth scans generated by the solid-state semiconductor LiDAR sensor, process the one or more depth scans to monitor the movement of the approaching aircraft in the at least one parking bay, and analyze the one or more depth scans to determine whether the approaching aircraft has docked at the at least one parking bay or departed from the at least one parking bay.
[0012] To further clarify the advantages and features of the present invention, a more particular description of the invention will be rendered by reference to specific embodiments thereof, which are illustrated in the appended drawings. It is appreciated that these drawings depict only typical embodiments of the invention and are therefore not to be considered limiting of its scope. The invention will be described and explained with additional specificity and detail in the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
[0013] These and other features, aspects, and advantages of the present invention will become better understood when the following detailed description is read with reference to the accompanying drawings in which like characters represent like parts throughout the drawings, wherein:
[0014] FIG. 1 is a pictorial diagram depicting an exemplary environment for implementing a smart visual docking guidance system (SVDGS) at an airport, according to an embodiment of the present disclosure;
[0015] FIG. 2 is a schematic diagram depicting an exemplary SVDGS, according to an embodiment of the present disclosure; and
[0016] FIG. 3 is a flow diagram depicting an exemplary operation of the SVDGS, according to an embodiment of the present disclosure.
Further, skilled artisans will appreciate that elements in the drawings are illustrated for simplicity and may not necessarily have been drawn to scale. For example, the flow charts illustrate the method in terms of the most prominent steps involved to help improve understanding of aspects of the present invention. Furthermore, in terms of the construction of the device, one or more components of the device may have been represented in the drawings by conventional symbols, and the drawings may show only those specific details that are pertinent to understanding the embodiments of the present invention so as not to obscure the drawings with details that will be readily apparent to those of ordinary skill in the art having the benefit of the description herein.
DETAILED DESCRIPTION
[0017] For the purpose of promoting an understanding of the principles of the invention, reference will now be made to the various embodiments and specific language will be used to describe the same. It should be understood at the outset that although illustrative implementations of the embodiments of the present disclosure are illustrated below, the present invention may be implemented using any number of techniques, whether currently known or in existence. The present disclosure is not necessarily limited to the illustrative implementations, drawings, and techniques illustrated below, including the exemplary design and implementation illustrated and described herein, but may be modified within the scope of the present disclosure.
[0018] It will be understood by those skilled in the art that the foregoing general description and the following detailed description are explanatory of the invention and are not intended to be restrictive thereof.
[0019] Reference throughout this specification to “an aspect”, “another aspect” or similar language means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, appearances of the phrase “in an embodiment”, “in another embodiment” and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment.
[0020] It is to be understood that as used herein, terms such as, “includes,” “comprises,” “has,” etc. are intended to mean that the one or more features or elements listed are within the element being defined, but the element is not necessarily limited to the listed features and elements, and that additional features and elements may be within the meaning of the element being defined. In contrast, terms such as, “consisting of” are intended to exclude features and elements that have not been listed.
[0021] The embodiments herein and the various features and advantageous details thereof are explained more fully with reference to the non-limiting embodiments that are illustrated in the accompanying drawings and detailed in the following description. Descriptions of well-known components and processing techniques are omitted to not unnecessarily obscure the embodiments herein. Also, the various embodiments described herein are not necessarily mutually exclusive, as some embodiments can be combined with one or more other embodiments to form new embodiments. The term “or” as used herein, refers to a non-exclusive or unless otherwise indicated. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments herein can be practiced and to further enable those skilled in the art to practice the embodiments herein. Accordingly, the examples should not be construed as limiting the scope of the embodiments herein.
[0022] As is traditional in the field, embodiments may be described and illustrated in terms of blocks that carry out a described function or functions. These blocks, which may be referred to herein as units or modules or the like, are physically implemented by analog or digital circuits such as logic gates, integrated circuits, microprocessors, microcontrollers, memory circuits, passive electronic components, active electronic components, optical components, hardwired circuits, or the like, and may optionally be driven by firmware and software. The circuits may, for example, be embodied in one or more semiconductor chips, or on substrate supports such as printed circuit boards and the like. The circuits constituting a block may be implemented by dedicated hardware, by a processor (e.g., one or more programmed microprocessors and associated circuitry), or by a combination of dedicated hardware to perform some functions of the block and a processor to perform other functions of the block. Each block of the embodiments may be physically separated into two or more interacting and discrete blocks without departing from the scope of the invention. Likewise, the blocks of the embodiments may be physically combined into more complex blocks without departing from the scope of the invention.
[0023] The accompanying drawings are used to help easily understand various technical features and it should be understood that the embodiments presented herein are not limited by the accompanying drawings. As such, the present disclosure should be construed to extend to any alterations, equivalents, and substitutes in addition to those which are particularly set out in the accompanying drawings. Although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are generally only used to distinguish one element from another.
[0024] Traditional LiDAR-based aircraft docking systems rely on the movement of internal components such as the laser and the collector. In traditional LiDAR-based aircraft docking systems, the laser and the collector are required to rotate to scan the surrounding area. Such operations of the internal components constrain the size of the aircraft docking systems, resulting in high manufacturing and maintenance costs. Therefore, an object of the present disclosure is to overcome the limitations of the traditional LiDAR-based aircraft docking systems.
[0025] The present disclosure achieves the above-described objectives by providing a single-chip-based solid-state semiconductor light detection and ranging (LiDAR) sensor to be used in a smart visual docking guidance system (SVDGS) at an airport having at least one parking bay. Accordingly, the present disclosure also provides the smart visual docking guidance system (SVDGS) at an airport having the at least one parking bay.
[0026] Embodiments of the disclosed invention will be described below in detail with reference to the accompanying drawings.
[0027] FIG. 1 is a pictorial diagram depicting an exemplary environment 100 for implementing a smart visual docking guidance system (SVDGS) 101 at an airport, according to an embodiment of the present disclosure. The environment 100 may include the SVDGS 101 installed at an airport having at least one parking bay. The SVDGS 101 utilizes a solid-state semiconductor LiDAR sensor and artificial intelligence (AI) based techniques to aid in the safe maneuvering of an aircraft 103 during the process of docking and undocking.
[0028] The SVDGS 101 includes the solid-state semiconductor LiDAR sensor, which may be a 3-dimensional (3D) LiDAR sensor built on a semiconductor silicon chip. The 3D LiDAR of the aircraft docking system may detect the aircraft 103 approaching in at least one parking bay of the airport. The 3D LiDAR may be configured to generate one or more depth scans of the approaching aircraft 103 based on laser scanning using a beam steering mechanism. Further, the SVDGS 101 may be configured to process the one or more depth scans to monitor the movement of the approaching aircraft 103 and determine whether the approaching aircraft has docked at the at least one parking bay or departed from the at least one parking bay.
[0029] Moreover, the SVDGS 101 may be configured to generate a guidance instruction for the docking and undocking of the aircraft 103.
[0030] According to the embodiments of the present disclosure, the SVDGS 101 is capable of providing precise information to identify the approaching aircraft 103 and other obstacles accurately based on the number of laser reflections captured per second. Thus, the SVDGS 101 with the 3D LiDAR, according to the embodiments of the present disclosure, is configured to provide accurate information for guiding aircraft to the desired docking position. One or more components of the SVDGS 101 are now described below in conjunction with FIG. 2.
[0031] FIG. 2 is a schematic diagram 200 depicting an exemplary SVDGS, according to an embodiment of the present disclosure. The SVDGS 101 may include a solid-state semiconductor LiDAR sensor 201, a display unit 203, and a control unit 205. The control unit 205 is operably coupled with the solid-state semiconductor LiDAR sensor 201 and the display unit 203 to monitor the movement of the aircraft 103 and provide guiding instructions for docking and/or undocking of the same.
[0032] According to the embodiments of the present disclosure, the solid-state semiconductor LiDAR sensor 201 may be a 3D LiDAR incorporated on a single silicon chip. The solid-state semiconductor LiDAR sensor 201 may include a solid-state emitter module 207, a solid-state receiver module 209, and a controller 211.
[0033] In an embodiment, the 3D LiDAR incorporated on a single silicon chip may include an on-chip planar lens for steering in the horizontal direction and a grating to scan in the vertical direction. The solid-state emitter module may be edge-coupled into an optical waveguide to guide and confine the emitted laser light along a desired optical path.
[0034] An initial optical waveguide may be fed into a switch matrix, which then expands into a tree pattern. The switch matrix may comprise a plurality of photonic waveguides and a plurality of active phase shifters to route the emitted laser light into the desired waveguide. Each waveguide may provide a distinct path that feeds into the on-chip planar lens and corresponds to one beam in free space.
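The specification does not spell out the switch topology, so the following is only a minimal sketch, assuming a binary-tree matrix in which each stage is a 1x2 switch (e.g., a phase-shifter-controlled cell) addressed by one bit of the target output-port index; none of these details come from the disclosure:

```python
# Illustrative model of a binary-tree photonic switch matrix (assumed
# topology: each stage is a 1x2 switch addressed by one bit of the
# target output-port index; the patent does not specify this scheme).

def switch_settings(target_port: int, num_ports: int) -> list[int]:
    """Return the 1x2 switch state (0 = upper arm, 1 = lower arm) for
    each stage of a log2(num_ports)-deep tree that routes the input
    waveguide to `target_port`."""
    stages = num_ports.bit_length() - 1          # tree depth for a power-of-two N
    assert 1 << stages == num_ports, "num_ports must be a power of two"
    assert 0 <= target_port < num_ports
    # The most significant bit sets the first (root) switch, and so on down.
    return [(target_port >> (stages - 1 - s)) & 1 for s in range(stages)]

# Route the beam to output waveguide 5 of a 16-port tree:
print(switch_settings(5, 16))   # -> [0, 1, 0, 1]
```

Under that assumption, routing to any of N output waveguides requires only log2(N) active switch settings, which is one reason a tree pattern scales well.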
[0035] The on-chip planar lens may consist of a stack of complementary metal-oxide-semiconductor (CMOS) compatible materials. Such CMOS-compatible materials may have thicknesses on the nanoscale, typically in the range of tens to hundreds of nanometres. The stack of CMOS-compatible materials may be designed to collimate the laser light fed in by each waveguide. The collimation creates a flat wavefront for the light, allowing minimal spreading and, therefore, reduced loss, which lets the light propagate over long distances.
[0036] The 3D LiDAR on the single silicon chip may be configured such that the beams emitted from adjacent waveguides overlap in the far-field, thereby allowing an entire horizontal field-of-view to be covered by beams emitted from a discrete number of waveguides.
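As a back-of-the-envelope illustration of that coverage claim (the field of view and divergence figures below are assumed; the specification gives no numbers):

```python
import math

# Assumed example values; the specification does not state a beam
# divergence or a field of view, so these numbers are purely illustrative.
fov_deg = 80.0          # horizontal field of view the stand must cover
divergence_deg = 0.5    # far-field divergence of one collimated beam

# Adjacent beams overlap when their angular pitch does not exceed the
# divergence, so a discrete set of waveguides tiles the whole FOV:
num_waveguides = math.ceil(fov_deg / divergence_deg)
print(num_waveguides)   # -> 160
```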
[0037] The solid-state emitter module 207 may be adapted to emit laser signals toward the aircraft 103 approaching in the at least one parking bay, such that a portion of the emitted laser signals is reflected by the aircraft. In an embodiment of the present disclosure, the laser signals are emitted using a beam steering mechanism. The beam steering mechanism may be utilized to control the direction or trajectory of the emitted laser signals.
[0038] The solid-state receiver module 209 may be adapted to receive the reflected laser signals from the approaching aircraft 103. The solid-state receiver module 209 may be further adapted to transmit the received reflected laser signals to the controller 211.
[0039] According to the embodiments of the present disclosure, the controller 211 may be operably coupled to the solid-state emitter module 207 and the solid-state receiver module 209. The controller 211 may be configured to implement the beam steering mechanism to control the emission of the laser signals by the solid-state emitter module 207.
[0040] The controller 211 may be further configured to obtain the received reflected laser signals and process the obtained received reflected laser signals to generate one or more depth scans associated with the aircraft 103. In an embodiment, the one or more depth scans are generated in the form of 3D point cloud data associated with the approaching aircraft 103. The 3D point cloud data may correspond to a collection of numerous dots (data points) spread throughout a 3D space to accurately measure the surface of each object within a scanning area, for example, an aircraft in the at least one parking bay of the airport.
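A minimal sketch of how one return could be placed in such a point cloud: each measured range is tagged with the steering angles at which its beam was emitted and converted from spherical to Cartesian coordinates. The axis convention below is an assumption for illustration, not part of the disclosure:

```python
import math

def to_point(range_m: float, azimuth_deg: float, elevation_deg: float):
    """Convert one LiDAR return (range plus the beam's steering angles)
    into an (x, y, z) point. Assumed convention: x forward along the
    stand centreline, y to the left, z up."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)
    y = range_m * math.cos(el) * math.sin(az)
    z = range_m * math.sin(el)
    return (x, y, z)

# A nose return 42 m out, 2 degrees left of centreline, 1 degree up:
print(to_point(42.0, 2.0, 1.0))
```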
[0041] The control unit 205 may be equipped with an advanced artificial intelligence and machine learning (AI/ML) engine and include a memory 213 and a processor 215 coupled with the memory 213.
[0042] In an example, the processor(s) 215 may be a single processing unit or a number of units, all of which could include multiple computing units. The processor 215 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, logical processors, virtual processors, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the processor 215 is configured to fetch and execute computer-readable instructions and data stored in the memory 213.
[0043] The memory 213 may include any non-transitory computer-readable medium known in the art including, for example, volatile memory or Random Access Memory (RAM), such as static random access memory (SRAM) and dynamic random access memory (DRAM), and/or non-volatile memory, such as read-only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes.
[0044] At least one of a plurality of operations of the SVDGS 101 may be implemented through the AI model. A function associated with AI may be performed through the non-volatile memory, the volatile memory, and the processor 215.
[0045] The processor 215 may include one or a plurality of processors. At this time, one or a plurality of processors may be a general purpose processor, such as a central processing unit (CPU), an application processor (AP), or the like, a graphics-only processing unit such as a graphics processing unit (GPU), a visual processing unit (VPU), and/or an AI-dedicated processor such as a neural processing unit (NPU).
[0046] The one or a plurality of processors control the processing of the input data in accordance with a predefined operating rule or AI model stored in the non-volatile memory and the volatile memory. The predefined operating rule or artificial intelligence model is provided through training or learning.
[0047] Here, being provided through learning means that, by applying a learning technique to a plurality of learning data, a predefined operating rule or AI model of a desired characteristic is made. The learning may be performed in a device itself in which AI according to an embodiment is performed, and/or may be implemented through a separate server/system.
[0048] The AI model may include a plurality of neural network layers. Each layer has a plurality of weight values and performs a layer operation through the calculation of a previous layer and an operation of a plurality of weights. Examples of neural networks include but are not limited to, convolutional neural network (CNN), deep neural network (DNN), recurrent neural network (RNN), restricted Boltzmann Machine (RBM), deep belief network (DBN), bidirectional recurrent deep neural network (BRDNN), generative adversarial networks (GAN), and deep Q-networks.
[0049] The learning technique is a method for training a predetermined target device (for example, a robot) using a plurality of learning data to cause, allow, or control the target device to make a determination or prediction. Examples of learning techniques include, but are not limited to, supervised learning, unsupervised learning, semi-supervised learning, or reinforcement learning.
[0050] The AI model may be obtained by training. Here, "obtained by training" means that a predefined operation rule or artificial intelligence model configured to perform a desired feature (or purpose) is obtained by training a basic AI model with multiple pieces of training data by a training technique. The AI model may include a plurality of neural network layers. Each of the plurality of neural network layers includes a plurality of weight values and performs neural network computation by computation between a result of computation by a previous layer and the plurality of weight values.
[0051] The display unit 203 may be communicatively and operatively coupled with the control unit 205 and adapted to receive a plurality of signals from the control unit 205. In addition, the display unit 203 may present the plurality of signals to facilitate the guiding and maneuvering of the aircraft 103 and its parking at the designated parking bay. The guidance instructions provided by the control unit 205 are displayed on the display unit 203 of the SVDGS 101.
[0052] The control unit 205 may be configured to receive the one or more depth scans generated by the solid-state semiconductor LiDAR sensor, and process, via the processor 215, the one or more depth scans to monitor the movement of the approaching aircraft in the at least one parking bay. In an embodiment, the one or more depth scans associated with the approaching aircraft are generated in the form of 3D point cloud data. In another embodiment, the 3D point cloud data may be visualized using the advanced AI engine of the control unit.
[0053] Further, the control unit 205 may be configured to analyze, via the processor 215, the one or more depth scans to determine whether the approaching aircraft has docked at the at least one parking bay or departed from the at least one parking bay. The one or more depth scans may be generated using advanced AI/ML and time-of-flight techniques. According to the time-of-flight technique, the range to each point in the one or more depth scans may be determined from the time difference between the transmission of a laser pulse and the reception of its reflection.
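The time-of-flight relation referred to above reduces to a single line: the measured range is half the round-trip delay multiplied by the speed of light. A minimal sketch:

```python
C = 299_792_458.0  # speed of light in m/s

def tof_range_m(t_transmit_s: float, t_receive_s: float) -> float:
    """Range from the round-trip time of a laser pulse: the light covers
    the sensor-to-aircraft distance twice, hence the factor of 1/2."""
    return C * (t_receive_s - t_transmit_s) / 2.0

# A pulse that returns 400 ns after emission corresponds to ~60 m:
print(tof_range_m(0.0, 400e-9))  # -> 59.958...
```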
[0054] Moreover, the control unit 205 may be configured to compare the approaching aircraft and the movement thereof with a designated parking bay based on the monitored movement of the approaching aircraft. Furthermore, the control unit 205 may be configured to generate a guidance instruction based on the comparison, for example as in the sketch below.
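A hedged sketch of that comparison step: the monitored nose position is compared against the designated bay's centreline and stop point, and a display instruction is derived from the deviation. The thresholds and instruction names are assumptions for illustration; the specification does not define them:

```python
def guidance_instruction(lateral_offset_m: float, distance_to_stop_m: float) -> str:
    """Map the aircraft's deviation from the designated bay's centreline
    and stop point to a display instruction (illustrative thresholds;
    not taken from the specification)."""
    if distance_to_stop_m <= 0.0:
        return "STOP"
    if lateral_offset_m > 0.5:
        return "TURN RIGHT"       # drifted left of the centreline
    if lateral_offset_m < -0.5:
        return "TURN LEFT"        # drifted right of the centreline
    if distance_to_stop_m < 10.0:
        return "SLOW"
    return "CONTINUE"

print(guidance_instruction(0.8, 25.0))   # -> TURN RIGHT
```

The operation of the SVDGS 101 is described below in greater detail in conjunction with FIG. 3.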
[0055] FIG. 3 is a flow diagram 300 depicting an exemplary operation of the SVDGS 101, according to an embodiment of the present disclosure.
[0056] At step 301, the SVDGS 101 detects an approaching aircraft 103. At step 303, a determination is made whether the approaching aircraft 103 is detected or not. If no approaching aircraft is detected, the SVDGS 101 continues to detect the aircraft 103 approaching the at least one parking bay in the airport.
[0057] At step 305, if the approaching aircraft 103 is detected at step 303, the SVDGS 101 actuates the solid-state semiconductor LiDAR sensor 201. In an embodiment of the present disclosure, the solid-state semiconductor LiDAR sensor 201 is a 3D LiDAR incorporated on a single silicon chip.
[0058] At step 307, after the solid-state semiconductor LiDAR sensor 201 is enabled, the solid-state emitter module of the solid-state semiconductor LiDAR sensor 201 initiates the emission of laser signals towards the approaching aircraft 103. The laser signals are emitted to scan the approaching aircraft, which reflects a portion of the laser signals incident on its surface.
[0059] At step 309, the solid-state receiver module of the solid-state semiconductor LiDAR sensor 201 receives the laser signals reflected from the approaching aircraft and transmits the received reflected laser signals to the controller of the solid-state semiconductor LiDAR sensor 201.
[0060] At step 311, the controller of the solid-state semiconductor LiDAR sensor 201 processes the obtained received reflected laser signals to generate one or more depth scans associated with the aircraft. In an embodiment, the one or more depth scans are generated in the form of 3D point cloud data. According to the embodiments of the present disclosure, the 3D point cloud data is visualized using the advanced AI engine of the control unit of the SVDGS 101.
[0061] At step 313, the control unit of the SVDGS 101 obtains the one or more depth scans generated by the solid-state semiconductor LiDAR sensor and processes the one or more depth scans to monitor the movement of the approaching aircraft in the at least one parking bay.
[0062] At step 315, the control unit of the SVDGS 101 analyzes the one or more depth scans to determine whether the approaching aircraft has docked at the at least one parking bay or departed from the at least one parking bay based on the monitored movement.
[0063] At step 317, if the aircraft is determined to have departed, the SVDGS 101 continues to step 301 to detect an aircraft approaching the at least one parking bay of the airport. In an alternative embodiment, if the aircraft has not departed, the SVDGS 101 continues to monitor the movement of the aircraft.
[0064] At step 319, if the aircraft is determined to have docked, the SVDGS 101 continues to step 321 to complete the aircraft docking procedure. In an alternative embodiment, if the aircraft has not docked, the SVDGS 101 continues to monitor the movement of the aircraft.
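The FIG. 3 flow maps naturally onto a simple polling loop. The outline below mirrors steps 301-321; every helper called on `sensor` and `control_unit` is a hypothetical placeholder standing in for the hardware and AI/ML engine described above, not an API from the disclosure:

```python
# Outline of the FIG. 3 flow as a polling loop; all helpers are
# hypothetical placeholders, not real interfaces from the patent.

def run_svdgs(sensor, control_unit):
    while True:
        # Steps 301/303: keep looking until an approaching aircraft is detected.
        if not sensor.detect_aircraft():
            continue
        sensor.actuate()                                  # step 305: enable the 3D LiDAR
        while True:
            sensor.emit_laser_signals()                   # step 307: scan the aircraft
            returns = sensor.receive_reflections()        # step 309: collect reflections
            scans = sensor.generate_depth_scans(returns)  # step 311: 3D point cloud
            control_unit.monitor_movement(scans)          # step 313: track movement
            state = control_unit.analyze(scans)           # step 315: docked / departed / moving
            if state == "departed":                       # step 317: resume detection
                break
            if state == "docked":                         # steps 319/321: finish docking
                control_unit.complete_docking()
                break
```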
[0065] At least by virtue of aforesaid, the present subject matter at least provides the following advantages:
[0066] The solid-state semiconductor LiDAR sensor in the SVDGS 101 described in the embodiments herein addresses the above-described limitations of existing aircraft docking systems.
[0067] The solid-state semiconductor LiDAR sensor in the SVDGS 101 described in the embodiments herein enables the generation of depth scans of the approaching aircraft in at least one parking bay of the airport without having to implement the rotational movement of the emitter and the collector (i.e., receiver) components, as in traditional LiDAR aircraft docking systems.
[0068] The solid-state semiconductor LiDAR sensor in the SVDGS 101 described in the embodiments herein increases the performance, reliability, and accuracy of the SVDGS 101.
[0069] Further, the solid-state semiconductor LiDAR sensor in the SVDGS 101 described in the embodiments herein requires minimal or no regular maintenance or servicing and minimal calibration, ensuring a prolonged operational lifespan at a reasonable cost with minimal or zero downtime.
[0070] Moreover, the solid-state semiconductor LiDAR sensor leaves a small footprint, provides high performance, and ensures low power consumption, high reliability, and an extended operational life.
[0071] While specific language has been used to describe the present subject matter, no limitation arising on account thereof is intended. As would be apparent to a person skilled in the art, various working modifications may be made to the method in order to implement the inventive concept as taught herein. The drawings and the foregoing description give examples of embodiments. Those skilled in the art will appreciate that one or more of the described elements may well be combined into a single functional element. Alternatively, certain elements may be split into multiple functional elements. Elements from one embodiment may be added to another embodiment.

CLAIMS:

1. A solid-state semiconductor light detection and ranging (LiDAR) sensor (201) to be used in a smart visual docking guidance system (SVDGS) at an airport having at least one parking bay, the solid-state semiconductor LiDAR sensor comprising:
a solid-state emitter module (207) adapted to emit laser signals towards an approaching aircraft in the at least one parking bay, wherein a portion of the emitted laser signals is reflected by the aircraft;
a solid-state receiver module (209) adapted to receive the reflected laser signals from the approaching aircraft; and
a controller (211) operably coupled to the solid-state emitter module and the solid-state receiver module, the controller configured to:
control the emission of laser signals using a beam steering mechanism;
obtain the received reflected laser signals; and
process the obtained received reflected laser signals to generate one or more depth scans associated with the aircraft using advanced artificial intelligence and machine learning (AI/ML) and time-of-flight techniques.

2. The solid-state semiconductor LiDAR sensor (201) as claimed in claim 1, wherein the solid-state semiconductor LiDAR sensor is a 3-dimensional (3D) LiDAR sensor.

3. The solid-state semiconductor LiDAR sensor (201) as claimed in claim 1, wherein the solid-state semiconductor LiDAR sensor is incorporated on a single silicon chip.

4. The solid-state semiconductor LiDAR sensor (201) as claimed in claim 1, wherein the controller is configured to:
generate the one or more depth scans in the form of 3D point cloud data associated with the approaching aircraft.

5. A smart visual docking guidance system (SVDGS) (101) at an airport having at least one parking bay, the SVDGS (101) comprising:
a solid-state semiconductor light detection and ranging (LiDAR) sensor (201) comprising:
a solid-state emitter module (207) adapted to emit laser signals towards an approaching aircraft in the at least one parking bay, wherein a portion of the emitted laser signals is reflected by the aircraft;
a solid-state receiver module (209) adapted to receive the reflected laser signals from the approaching aircraft; and
a controller (211) operably coupled to the solid-state emitter module and the solid-state receiver module, the controller configured to:
control the emission of laser signals using a beam steering mechanism;
obtain the received reflected laser signals; and
process the obtained received reflected laser signals to generate one or more depth scans associated with the aircraft using advanced artificial intelligence and machine learning (AI/ML) and time-of-flight techniques; and
a control unit (205) equipped with an advanced artificial intelligence (AI) engine and operably coupled with the solid-state semiconductor LiDAR sensor, wherein the control unit is configured to:
receive the one or more depth scans generated by the solid-state semiconductor LiDAR sensor;
process the one or more depth scans to monitor movement of the approaching aircraft in the at least one parking bay; and
analyze the one or more depth scans to determine whether the approaching aircraft has docked at the at least one parking bay or departed from the at least one parking bay.

6. The SVDGS (101) as claimed in claim 5, wherein the control unit is further configured to:
compare the approaching aircraft and the movement thereof with a designated parking bay based on the monitored movement of the approaching aircraft; and
generate a guidance instruction based on the comparison.

7. The SVDGS (101) as claimed in claim 5, wherein the one or more depth scans associated with the approaching aircraft are generated in the form of 3D point cloud data.

8. The SVDGS (101) as claimed in claim 7, wherein the 3D point cloud data is visualized using the advanced AI engine of the control unit.

Documents

Application Documents

# Name Date
1 202311015751-TRANSLATIOIN OF PRIOIRTY DOCUMENTS ETC. [09-03-2023(online)].pdf 2023-03-09
2 202311015751-STATEMENT OF UNDERTAKING (FORM 3) [09-03-2023(online)].pdf 2023-03-09
3 202311015751-PROVISIONAL SPECIFICATION [09-03-2023(online)].pdf 2023-03-09
4 202311015751-OTHERS [09-03-2023(online)].pdf 2023-03-09
5 202311015751-FORM FOR STARTUP [09-03-2023(online)].pdf 2023-03-09
6 202311015751-FORM FOR SMALL ENTITY(FORM-28) [09-03-2023(online)].pdf 2023-03-09
7 202311015751-FORM 1 [09-03-2023(online)].pdf 2023-03-09
8 202311015751-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [09-03-2023(online)].pdf 2023-03-09
9 202311015751-EVIDENCE FOR REGISTRATION UNDER SSI [09-03-2023(online)].pdf 2023-03-09
10 202311015751-DRAWINGS [09-03-2023(online)].pdf 2023-03-09
11 202311015751-DECLARATION OF INVENTORSHIP (FORM 5) [09-03-2023(online)].pdf 2023-03-09
12 202311015751-Proof of Right [22-05-2023(online)].pdf 2023-05-22
13 202311015751-FORM-26 [31-05-2023(online)].pdf 2023-05-31
14 202311015751-OTHERS [08-03-2024(online)].pdf 2024-03-08
15 202311015751-FORM FOR STARTUP [08-03-2024(online)].pdf 2024-03-08
16 202311015751-EVIDENCE FOR REGISTRATION UNDER SSI [08-03-2024(online)].pdf 2024-03-08
17 202311015751-DRAWING [08-03-2024(online)].pdf 2024-03-08
18 202311015751-CORRESPONDENCE-OTHERS [08-03-2024(online)].pdf 2024-03-08
19 202311015751-COMPLETE SPECIFICATION [08-03-2024(online)].pdf 2024-03-08