Abstract: Disclosed is an all-weather-capable smart visual docking guidance system (S-VDGS) (100). The S-VDGS (100) includes an imaging device (304), a processor (302), a plurality of sensors (312), and a plurality of communication modes (102). The imaging device (304) is configured to capture one of a 3D scanned image or a thermal image of the parking space at a predefined interval. The plurality of sensors (312) is adapted to collect real-time meteorological information. The plurality of communication modes (102) is configured to communicate real-time information to an aircraft. The processor (302) is communicatively connected to the imaging device (304), the plurality of sensors (312), and the plurality of communication modes (102). The processor (302) provides instructions for aircraft docking using outputs from the imaging device (304), the plurality of sensors (312), and the plurality of communication modes (102).
The present disclosure relates to aspects of an aircraft docking system and, more particularly, to a harsh-weather-capable smart visual docking guidance system (S-VDGS).
BACKGROUND
Airports have weather monitoring systems and protocols in place to track weather conditions and assess their potential impact on operations. The weather information is communicated to air traffic control (ATC), allowing them to make informed decisions related to docking of the aircraft or other activities. The ATC further contacts and guides the pilots to perform efficient docking of the aircraft.
However, conventional aircraft docking systems have several limitations that can result in operational inefficiencies, delays, and safety risks. For instance, a conventional aircraft docking system includes a laser-based sensor used to determine the position of the aircraft, which fails to perform during low-visibility weather because the laser is not able to detect the aircraft. To resolve this issue, a manual docking procedure is followed, which in turn is prone to human error due to the low visibility. Further, during fog conditions or extreme temperature fluctuations, the components of the aircraft docking system can malfunction, and condensation can form on the screens of the docking system, causing visibility issues. Ultrasonic waves can be used to overcome the condensation issue, but such an arrangement increases the overall cost of the aircraft docking system. Moreover, the conventional aircraft docking system does not include weather sensors, which may cause issues during pushback and startup procedures because the pilot cannot make an informed decision regarding the flying conditions. Furthermore, conventional aircraft docking systems communicate through only a single communication system; if that system malfunctions, there is no alternative, and the pilot loses communication with the ATC.
SUMMARY
This summary is provided to introduce a selection of concepts, in a simplified format, that are further described in the detailed description of the invention. This summary is neither intended to identify key or essential inventive concepts of the invention nor intended to determine the scope of the invention.
In an embodiment of the present disclosure, a system for all weather capable smart visual docking guidance system (S-VDGS) is disclosed. The S-VDGS is used for an aircraft parking space having a plurality of parking bays. The S-VDGS may include but is not limited to an imaging device and a processor control unit. The imaging device is configured to capture one of a 3D LIDAR scanned image or a thermal image of the parking space at a predefined interval. Further, the processor control unit is communicatively connected to the imaging device and is used to receive the image of the parking space from the imaging device. Moreover, the processor control unit is configured to process the image to detect the aircraft in the parking space and a movement thereof. Furthermore, the processor control unit is configured to compare the detected aircraft and the movement thereof with the pre-stored position information of the parking space. Additionally, the processor control unit is configured to provide a guidance instruction to the aircraft based on the comparison towards a designated parking bay.
In another embodiment of the disclosure, a smart-visual docking guidance system (S-VDGS) is disclosed. The S-VDGS may include but is not limited to a plurality of sensors and a processor control unit. The plurality of sensors is adapted to collect real-time meteorological information. Further, the processor control unit is communicatively connected to the plurality of sensors. Moreover, the processor is configured to receive the real-time meteorological information from the plurality of sensors. Furthermore, the processor is configured to determine whether the received real-time meteorological information is within a predetermined threshold range. Additionally, the processor is configured to provide a guidance instruction to an aircraft based on the determined real-time meteorological information.
In another embodiment of the disclosure, a smart-visual docking guidance system (S-VDGS) is disclosed. The S-VDGS may include but is not limited to a plurality of communication modes and a processor. The plurality of communication modes is configured to communicate real-time information to an aircraft. Further, the processor is communicatively connected to the plurality of communication modes. Moreover, the processor is configured to detect an availability of a set communication mode among the plurality of communication modes. Furthermore, the processor is configured to selectively switch, based on a predefined sequential order, between the plurality of communication modes when the set communication mode is unavailable to provide the real-time information to the aircraft.
According to the present disclosure, the S-VDGS includes a plurality of sensors to detect meteorological information, which helps in providing the pilot with information necessary for docking the aircraft. Further, the S-VDGS includes a plurality of communication modes which facilitate communication, when a mode of communication fails.
To further clarify the advantages and features of the present disclosure, a more particular description of the invention will be rendered by reference to specific embodiments thereof, which are illustrated in the appended drawings. It is appreciated that these drawings depict only typical embodiments of the invention and are therefore not to be considered limiting of its scope. The invention will be described and explained with additional specificity and detail in the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
These and other features, aspects, and advantages of the present disclosure will become better understood when the following detailed description is read with reference to the accompanying drawings in which like characters represent like parts throughout the drawings, wherein:
Figure 1 illustrates a communication architecture of a smart visual docking guidance system (S-VDGS), in accordance with an embodiment of the present disclosure;
Figure 2 illustrates an architecture of the S-VDGS, in accordance with an embodiment of the present disclosure;
Figure 3 illustrates an architecture of a processor control unit of the S-VDGS, in accordance with an embodiment of the present disclosure;
Figure 4 illustrates an isometric view of the S-VDGS, in accordance with an embodiment of the present disclosure;
Figure 5 illustrates a front view of the S-VDGS, in accordance with an embodiment of the present disclosure;
Figure 6 illustrates a side view of the S-VDGS, in accordance with an embodiment of the present disclosure;
Figure 7 illustrates a top view of the S-VDGS, in accordance with an embodiment of the present disclosure;
Figure 8 illustrates an imaging sensor assembly of the S-VDGS, in accordance with an embodiment of the present disclosure;
Figure 9 illustrates an output image of the thermal imaging device, in accordance with an embodiment of the present disclosure;
Figure 10 illustrates an output image of the High Definition (HD) imaging device, in accordance with an embodiment of the present disclosure; and
Figure 11 illustrates an output image of the 3D LIDAR sensor, in accordance with an embodiment of the present disclosure.
DETAILED DESCRIPTION OF FIGURES
For the purpose of promoting an understanding of the principles of the invention, reference will now be made to the embodiment illustrated in the drawings, and specific language will be used to describe the same. It will nevertheless be understood that no limitation of the scope of the invention is thereby intended; such alterations and further modifications in the illustrated system, and such further applications of the principles of the invention as illustrated therein, are contemplated as would normally occur to one skilled in the art to which the invention relates. Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The system, methods, and examples provided herein are illustrative only and not intended to be limiting.
The term “some” as used herein is defined as “none, or one, or more than one, or all.” Accordingly, the terms “none,” “one,” “more than one,” “more than one, but not all” or “all” would all fall under the definition of “some.” The term “some embodiments” may refer to no embodiments or to one embodiment or to several embodiments or to all embodiments. Accordingly, the term “some embodiments” is defined as meaning “no embodiment, or one embodiment, or more than one embodiment, or all embodiments.”
The terminology and structure employed herein are for describing, teaching and illuminating some embodiments and their specific features and elements and do not limit, restrict or reduce the spirit and scope of the claims or their equivalents.
More specifically, any terms used herein such as but not limited to “includes,” “comprises,” “has,” “consists,” and grammatical variants thereof do NOT specify an exact limitation or restriction and certainly do NOT exclude the possible addition of one or more features or elements, unless otherwise stated, and furthermore must NOT be taken to exclude the possible removal of one or more of the listed features and elements, unless otherwise stated with the limiting language “MUST comprise” or “NEEDS TO include.”
Whether or not a certain feature or element was limited to being used only once, either way, it may still be referred to as “one or more features” “one or more elements” “at least one feature” or “at least one element.” Furthermore, the use of the terms “one or more” or “at least one” feature or element does NOT preclude there being none of that feature or element, unless otherwise specified by limiting language such as “there NEEDS to be one or more . . . ” or “one or more element is REQUIRED.”
Unless otherwise defined, all terms, and especially any technical and/or scientific terms, used herein may be taken to have the same meaning as commonly understood by one having an ordinary skill in the art.
Reference is made herein to some “embodiments.” It should be understood that an embodiment is an example of a possible implementation of any features and/or elements presented in the attached claims. Some embodiments have been described for the purpose of illuminating one or more of the potential ways in which the specific features and/or elements of the attached claims fulfil the requirements of uniqueness, utility, and non-obviousness.
Use of the phrases and/or terms such as but not limited to “a first embodiment,” “a further embodiment,” “an alternate embodiment,” “one embodiment,” “an embodiment,” “multiple embodiments,” “some embodiments,” “other embodiments,” “a further embodiment”, “furthermore embodiment”, “additional embodiment” or variants thereof do NOT necessarily refer to the same embodiments. Unless otherwise specified, one or more particular features and/or elements described in connection with one or more embodiments may be found in one embodiment or may be found in more than one embodiment, or may be found in all embodiments, or may be found in no embodiments. Although one or more features and/or elements may be described herein in the context of only a single embodiment, or alternatively in the context of more than one embodiment, or further alternatively in the context of all embodiments, the features and/or elements may instead be provided separately or in any appropriate combination or not at all. Conversely, any feature and/or element described in the context of separate embodiments may alternatively be realized as existing together in the context of a single embodiment.
Any particular and all details set forth herein are used in the context of some embodiments and therefore should NOT be necessarily taken as limiting factors to the attached claims. The attached claims and their legal equivalents can be realized in the context of embodiments other than the ones used as illustrative examples in the description below.
Embodiments of the present disclosure will be described below in detail with reference to the accompanying drawings.
Further, skilled artisans will appreciate that elements in the drawings are illustrated for simplicity and may not necessarily have been drawn to scale. For example, the flow charts illustrate the method in terms of the most prominent steps involved to help improve the understanding of aspects of the present disclosure. Furthermore, in terms of the construction of the device, one or more components of the device may have been represented in the drawings by conventional symbols, and the drawings may show only those specific details that are pertinent to understanding the embodiments of the present disclosure, so as not to obscure the drawings with details that will be readily apparent to those of ordinary skill in the art having the benefit of the description herein.
A smart-visual docking guidance system (S-VDGS) is an aircraft docking system used to facilitate navigation and docking of the aircraft at an airport under adverse weather conditions, which may include foggy, snowy, and low-visibility conditions.
Figure 1 illustrates a communication architecture 102 of a smart visual docking guidance system (S-VDGS) 100, in accordance with an embodiment of the present disclosure. Figure 2 illustrates an architecture of the S-VDGS 100, in accordance with an embodiment of the present disclosure. For the sake of brevity, Figure 1 and Figure 2 have been explained together.
The S-VDGS 100 uses a plurality of communication modes 102 in order to communicate with the Airport Operations Control Center (AOCC) and interface with an Air Traffic Control (ATC) to receive real-time weather information and provide the real-time weather information to pilots at the parking space. The plurality of communication modes 102 is used to receive critical weather-related information from the pilot prior to take-off and transmit important docking-related information and status to the AOCC. The plurality of communication modes 102 may include but is not limited to a wired local area network (LAN) connection, a wireless fidelity (Wi-Fi) connection, a sub-gigahertz radio frequency (RF) connection, and a cellular connection. The communication architecture 102 has been used to explain the plurality of communication modes 102 in detail.
The wired LAN connection is established using a gigabit ethernet (GBE) switch 108. The GBE switch 108 is used to connect multiple devices together using physical cables to one of a single switch or a network of interconnected switches. Further, the GBE switch 108 receives a registered jack (RJ45) signal, which is an ethernet-based signal, from the S-VDGS 100. The GBE switch 108 then communicates the RJ45-based local network signal to the S-VDGS central server 116.
Further, the Wi-Fi connection is established using a Wi-Fi router 110, wherein the Wi-Fi router 110 exchanges a Wi-Fi signal with the S-VDGS 100. The Wi-Fi router 110 further communicates the RJ45-based local network signal to the S-VDGS central server 116.
Moreover, the sub-gigahertz RF connection is established using a LoRa modem 104, wherein the LoRa modem 104 serially exchanges a recommended standard RS232 signal with the S-VDGS 100. Further, the LoRa modem 104 converts the RS232 signal to a signal with a radio frequency in a range of 865 MHz to 867 MHz and exchanges the radio signal with a LoRa gateway 112. The LoRa gateway 112 further converts the received signal to the RJ45-based local network signal and communicates the same to the S-VDGS central server 116. In an embodiment, the sub-gigahertz RF connection may be established using a plurality of technologies which may include but are not limited to Wi-SUN.
Furthermore, the cellular connection is established using a fourth-generation long-term evolution (4G-LTE) modem 106, wherein the 4G-LTE modem 106 exchanges a communication signal with the S-VDGS 100 using a universal serial bus (USB). Further, the 4G-LTE modem 106 exchanges the communication signal with an internet cloud 114 over a cellular operator network. The internet cloud 114 further converts the received signal to an ISP network signal and communicates the same to the S-VDGS central server 116. In an embodiment, the cellular connection may be established using a fifth-generation (5G) or a sixth-generation (6G) cellular modem as well.
Additionally, the S-VDGS 100 also includes an operator panel 202, which receives an RS485 signal from the S-VDGS 100 and communicates the same to an external system. The S-VDGS central server 116 communicates all the data received from the plurality of communication modes 102 to a user interface (UI) 204. The UI 204 may be used to display the information received from the plurality of communication modes 102 in the ATC as well as on the S-VDGS 100. In addition, the processor is communicatively connected to the plurality of communication modes 102 and is used to detect an availability of a set communication mode among the plurality of communication modes 102. Further, the processor is used to selectively switch, based on a predefined sequential order, between the plurality of communication modes 102 when the set communication mode is unavailable, to provide the real-time information to the aircraft.
In an embodiment, the plurality of communication modes 102 provided by the communication architecture 102 constitutes a fail-safe arrangement that operates in the predefined sequential order. Further, the predefined sequential order may be a sequence of the LAN connection, then the Wi-Fi connection, then the sub-gigahertz RF connection, and finally the cellular connection.
In an example, the S-VDGS 100 first attempts communication using the LAN connection; in case the LAN connection fails or is disrupted, the communication is switched to the Wi-Fi connection. Further, in case the Wi-Fi connection fails or is disrupted, the communication is switched to the sub-gigahertz RF connection. Moreover, in case the sub-gigahertz RF connection fails or is disrupted, the communication is switched to the cellular connection. Furthermore, if all of the plurality of communication modes fail, the system indicates an error condition and shuts down.
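The fail-safe switching described in the example above can be sketched as follows. This is a minimal illustrative sketch only; the mode names and the `is_available`/`transmit` callbacks are assumptions standing in for the actual S-VDGS drivers, not part of the disclosure.

```python
# Predefined sequential order described in the disclosure:
# LAN -> Wi-Fi -> sub-gigahertz RF -> cellular.
MODE_ORDER = ["lan", "wifi", "sub_ghz_rf", "cellular"]

def send_with_failover(message, is_available, transmit):
    """Try each communication mode in the predefined order.

    is_available(mode) -> bool and transmit(mode, message) are
    hypothetical callbacks; returns the mode actually used.
    """
    for mode in MODE_ORDER:
        if is_available(mode):
            transmit(mode, message)
            return mode
    # All modes failed: indicate an error condition and shut down.
    raise RuntimeError("All communication modes unavailable")
```

For instance, if the LAN and Wi-Fi links are down but the sub-gigahertz RF link is up, the sketch transmits over the RF link without ever attempting the cellular mode.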
Figure 3 illustrates an architecture of a processor control unit of the S-VDGS 100, in accordance with an embodiment of the present disclosure. The S-VDGS 100 includes a plurality of sensors 312 and the processor 302 (hereinafter interchangeably referred to as a processor control unit 302), along with an imaging device 304. The plurality of sensors 312 is used to collect real-time meteorological information. The processor 302 is used to bridge the connection between the plurality of sensors 312, the imaging device 304, and the plurality of communication modes 102.
The plurality of sensors 312 may include but is not limited to a temperature sensor 306, a humidity sensor 306, an anemometer (not shown), a barometric pressure sensor (not shown), and a precipitation sensor (not shown). The temperature sensor 306 is used to sense an atmospheric temperature, whereas the humidity sensor 306 is used to sense a relative humidity of the atmosphere. Moreover, the anemometer is used to sense a wind speed and direction, and the barometric pressure sensor is used to sense an atmospheric pressure, while the precipitation sensor is used to sense a presence and intensity of rainfall, snow, sleet, freezing rain, or hail.
In an embodiment, the processor 302 is communicatively connected to the plurality of sensors 312, wherein the processor 302 is configured to receive the real-time meteorological information from the plurality of sensors 312. Further, the processor 302 is configured to determine whether the received real-time meteorological information is within a predetermined threshold range. Moreover, the processor 302 is configured to provide guidance instruction to the aircraft based on the determined real-time meteorological information.
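The threshold determination performed by the processor 302 can be sketched as below. The particular quantities and the numeric limits are illustrative assumptions only; the disclosure does not specify the predetermined threshold ranges.

```python
# Illustrative threshold ranges (assumed values, not from the disclosure),
# keyed by meteorological quantity: (lower bound, upper bound).
THRESHOLDS = {
    "temperature_c": (-20.0, 50.0),
    "humidity_pct": (0.0, 95.0),
    "wind_speed_mps": (0.0, 15.0),
    "pressure_hpa": (950.0, 1050.0),
}

def within_thresholds(readings):
    """Check real-time readings against their predetermined ranges.

    Returns (ok, violations), where violations maps each out-of-range
    quantity to its reading so a guidance instruction can reference it.
    """
    violations = {
        name: value
        for name, value in readings.items()
        if name in THRESHOLDS
        and not (THRESHOLDS[name][0] <= value <= THRESHOLDS[name][1])
    }
    return (len(violations) == 0, violations)
```

A guidance instruction could then be issued normally when `ok` is true, or flag the violating quantities to the pilot otherwise.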
The imaging device 304 is used to capture one of a three-dimensional (3D) scanned image, a High Definition (HD) image, or a thermal image of the parking space at a predefined interval. The imaging device 304 may include but is not limited to a 3D light detection and ranging (LIDAR) sensor 308, an HD imaging device 310, and a thermal imaging device 310. The HD imaging device 310 is used to capture a real-time image of the aircraft under normal weather conditions and provide the necessary information to the pilots. In case of failure of the HD imaging device 310, the 3D LIDAR sensor 308 is used to aid the docking of an approaching aircraft.
Further, the 3D LIDAR sensor 308 is used to capture the 3D scanned image and provide distance information of all objects in a 3D scanned space. For an approaching aircraft, the 3D LIDAR sensor 308 provides accurate distance ranging information (with centimeter-level precision) while the aircraft is approaching a stopping point. However, in dense foggy conditions, the 3D LIDAR sensor 308 stops providing the distance ranging information due to its inability to work normally in dense fog.
Additionally, under circumstances where the 3D LIDAR sensor 308 may fail, the S-VDGS 100 automatically switches from the 3D LIDAR sensor 308 to the thermal imaging device 310 for guiding the aircraft while docking. The thermal imaging device 310 is a high-resolution thermal sensor that works in infrared bands and may sense the heat signature of the aircraft and of obstacles in the parking area even during adverse weather conditions, and alert the aircraft and the ATC about the obstacles.
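The imaging fallback described above (HD imaging under normal conditions, 3D LIDAR on HD failure, thermal imaging when dense fog defeats the LIDAR) can be condensed into a small selection rule. The status flags below are assumptions for illustration; the actual switching logic of the S-VDGS is not specified at this level of detail.

```python
def select_imaging_source(hd_ok, lidar_ok, dense_fog):
    """Pick the active imaging source per the described fallback order.

    hd_ok / lidar_ok: hypothetical health flags for the HD imaging
    device and the 3D LIDAR sensor; dense_fog: low-visibility flag.
    """
    if dense_fog:
        # Dense fog defeats both visible-light imaging and LIDAR ranging;
        # the thermal imager senses heat signatures instead.
        return "thermal"
    if hd_ok:
        return "hd"          # normal weather, HD camera healthy
    if lidar_ok:
        return "lidar"       # HD failed; LIDAR aids the docking
    return "thermal"         # last resort in all other cases
```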
However, ranging and distancing may not be achieved using the thermal imaging device 310 alone. In order to achieve the ranging, an inbuilt artificial intelligence (AI)/machine learning (ML) algorithm is used together with pre-stored position information. The pre-stored position information may include but is not limited to a position of a center line, a position of a stop line, and a position of the designated parking bay. The pre-stored position information is mapped by the 3D LIDAR sensor 308 and is later used to train the AI/ML algorithm. The thermal imaging device 310, along with the trained AI/ML algorithm, uses the pre-stored position information to estimate a location and distance of the aircraft.
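One way a thermal detection could be turned into a distance estimate from the LIDAR-mapped positions is sketched below. The linear pixel-to-distance model is a deliberately simplified, illustrative stand-in for the trained AI/ML algorithm; all parameter names are assumptions.

```python
def estimate_distance(nose_pixel_row, stop_line_row, far_row, far_distance_m):
    """Estimate remaining distance to the stop line from an image row.

    stop_line_row and far_row are pixel rows of two reference positions
    pre-mapped by the 3D LIDAR sensor; far_distance_m is the known
    ground distance at far_row. A linear interpolation between the two
    calibrated rows yields the distance at the detected nose row.
    """
    if far_row == stop_line_row:
        raise ValueError("calibration rows must differ")
    frac = (nose_pixel_row - stop_line_row) / (far_row - stop_line_row)
    return frac * far_distance_m
```

For example, a nose detected exactly halfway (in pixel rows) between the two calibrated references would be estimated at half the calibrated far distance; a real deployment would replace this with the trained model and a proper camera calibration.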
In an embodiment, the thermal imaging device 310, along with the AI/ML algorithm, recognizes laser reflections from snow and fog particles that are sensed by the 3D LIDAR sensor 308, and filters the reflections out. The AI/ML algorithm of the S-VDGS 100 may also filter other forms of reflections to ensure uncompromised ranging performance of the S-VDGS 100 in safely docking aircraft at the parking stand.
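A simple heuristic version of such reflection filtering is sketched below: snow and fog returns tend to be weak and spatially isolated, so returns below an intensity floor or without nearby neighbours are dropped. The intensity/neighbour-count heuristic and all thresholds are assumptions standing in for the AI/ML filter described above.

```python
def filter_reflections(points, min_intensity=0.3,
                       neighbor_radius=0.5, min_neighbors=2):
    """Drop likely snow/fog returns from a list of LIDAR points.

    points: list of (x, y, z, intensity) tuples. A point is kept only
    if it is bright enough and has at least min_neighbors other points
    within neighbor_radius (isolated returns are treated as clutter).
    """
    def neighbor_count(i):
        xi, yi, zi, _ = points[i]
        return sum(
            1 for j, (x, y, z, _) in enumerate(points)
            if j != i
            and (x - xi) ** 2 + (y - yi) ** 2 + (z - zi) ** 2
                <= neighbor_radius ** 2
        )
    return [p for i, p in enumerate(points)
            if p[3] >= min_intensity and neighbor_count(i) >= min_neighbors]
```

The O(n²) neighbour search is for clarity only; a real point-cloud filter would use a spatial index.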
Further, the processor 302 is communicatively connected to the imaging device 304 and is used to receive the image of the parking space from the imaging device 304. The processor control unit 302 is configured to process the image to detect an aircraft in the parking space and a movement thereof. Moreover, the processor control unit 302 is configured to compare the detected aircraft and the movement thereof with the pre-stored position information of the parking space. Furthermore, the processor control unit 302 is configured to provide a guidance instruction to the aircraft, based on the comparison, towards a designated parking bay.
Figure 4 illustrates an isometric view of the S-VDGS 100, in accordance with an embodiment of the present disclosure. Figure 5 illustrates a front view of the S-VDGS 100, in accordance with an embodiment of the present disclosure. Figure 6 illustrates a side view of the S-VDGS 100, in accordance with an embodiment of the present disclosure. Figure 7 illustrates a top view of the S-VDGS 100, in accordance with an embodiment of the present disclosure. For the sake of brevity, Figure 4 to Figure 7 have been explained together.
The S-VDGS 100 may include but is not limited to a display unit 402, a first housing 404, a second housing 406, and a defogging mechanism. The display unit 402 is operably connected to the processor 302 and may be used to display the guidance instructions to the aircraft. The display unit 402 may be a light emitting diode (LED) based display unit that is used to provide visual signals in the form of guiding instructions.
In an embodiment, the display unit 402 uses a colour coding method to indicate the guiding instructions to the pilot for accurate docking in a case when there is extremely low visibility at the designated parking bay.
The first housing 404 is used to house the plurality of sensors 312 used to sense the real-time meteorological information. The first housing 404 may be made of a non-corrosive material to withstand the extreme weather conditions. Further, the first housing 404 may also include a first glass screen 408 in the front, which may be used by the imaging device 304 for capturing the 3D scanned image or the thermal image of the parking space at the predefined interval. Moreover, the first glass screen 408 may also be used for the transmission of laser beams from the 3D LIDAR sensor 308 to the aircraft, which is used to determine the location of the aircraft, while approaching or departing from the designated parking bay.
The second housing 406 is used to house a transceiver used to collect the real-time meteorological information. The second housing 406 may be made of a non-corrosive material to withstand the extreme weather conditions. Further, the second housing 406 may also include a second glass screen 410 in the front, which may be used by a radar sensor for capturing the reflected laser beams from the aircraft, which is used to determine the location of the aircraft, while approaching or departing from the designated parking bay.
The processor 302 may also include a defogging mechanism (not shown), which may include but is not limited to a heating element used to raise the temperature of the glass screens 408, 410 of the control unit 302 in order to remove fog from them. The heating element is embedded in the glass screens 408, 410, along with a preventive coating on the glass screens 408, 410 to prevent any fogging. The AI/ML algorithm is used to determine whether the glass screens 408, 410 have fogged, by sensing the humidity and temperature change within and outside the first housing 404 and the second housing 406. Under circumstances wherein relative humidity and temperature changes are sensed, the AI/ML algorithm switches on the defogging mechanism.
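A physics-based trigger for the defogging mechanism can be sketched from the sensed humidity and temperature: condensation forms when the screen surface cools to near the dew point. The Magnus dew-point approximation and the safety margin below are illustrative assumptions; the disclosure uses an AI/ML algorithm rather than this fixed rule.

```python
import math

def dew_point_c(temp_c, rel_humidity_pct):
    """Magnus approximation of the dew point in degrees Celsius."""
    a, b = 17.62, 243.12
    gamma = math.log(rel_humidity_pct / 100.0) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

def should_defog(screen_temp_c, ambient_temp_c, rel_humidity_pct,
                 margin_c=1.0):
    """Switch the heating element on when the glass screen is cold
    enough (within margin_c of the dew point) for fog to form."""
    return screen_temp_c <= dew_point_c(ambient_temp_c,
                                        rel_humidity_pct) + margin_c
```

At 100% relative humidity the dew point equals the air temperature, so a screen colder than the ambient air immediately triggers the heater; in dry air the dew point drops far below ambient and the heater stays off.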
Figure 8 illustrates an imaging sensor assembly of the S-VDGS 100, in accordance with an embodiment of the present disclosure. When the aircraft approaches the designated parking bay, the plurality of sensors 312 senses the position of the aircraft using laser reflection from the aircraft, which occurs at a predefined angle. Further, the reflected laser beam from the aircraft is sensed by the imaging device 304, which is used to determine the location of the aircraft. The output of the imaging device 304 is further communicated to the processor control unit 302, which then, using the trained AI/ML algorithm, determines the accurate position of the aircraft. The position of the aircraft is then communicated using the plurality of communication modes 102 to the S-VDGS 100 and the display unit 402.
Figure 9 illustrates an output image 900 of the thermal imaging device 310, in accordance with an embodiment of the present disclosure. Figure 10 illustrates an output image 1000 of the imaging device 304, in accordance with an embodiment of the present disclosure. Figure 11 illustrates an output image 1100 of the 3D LIDAR sensor 308, in accordance with an embodiment of the present disclosure. For the sake of brevity, Figure 9 to Figure 11 have been explained together.
Referring to Figure 9, the output image 900 of the thermal imaging device 310 shows the aircraft in an infrared spectrum with a colour gradient which may be specific to the heat signature of each component captured in the image. Further, the output image 900 also detects the positioning of the aircraft based on the pre-stored position information from the AI/ML algorithm.
Referring to Figure 10, the output image 1000 of the imaging device 304 shows a real-time image in the visible light spectrum of the aircraft which may also include the nearby vicinity of the aircraft. Further, the output image 1000 also detects the positioning of the aircraft based on the pre-stored position information from the AI/ML algorithm.
Referring to Figure 11, the output image 1100 of the 3D LIDAR sensor 308 shows a grayscale 3D scan of each component captured in the image, mainly focusing on the nose of the aircraft. Further, the output image 1100 also detects the positioning of the aircraft in the designated parking bay and provides data that is stored and used to train the AI/ML algorithm on the pre-stored position information.
The S-VDGS 100 provides various technical advancements, such as using the thermal imaging device 310 to aid visibility during low-visibility conditions and during malfunctioning of the 3D LIDAR sensor 308. Further, the S-VDGS 100 includes heating elements embedded in the glass screens 408, 410 and a defogging mechanism in order to prevent and overcome fogging on the glass screens 408, 410 and the processor 302. Moreover, the S-VDGS 100 includes a plurality of sensors 312 to detect meteorological information, which helps in providing the pilot with information necessary for docking the aircraft. Furthermore, the S-VDGS 100 includes a plurality of communication modes 102 which facilitate communication even when a mode of communication fails.
While specific language has been used to describe the present subject matter, any limitations arising on account thereto, are not intended. As would be apparent to a person in the art, various working modifications may be made to the method in order to implement the inventive concept as taught herein. The drawings and the foregoing description give examples of embodiments. Those skilled in the art will appreciate that one or more of the described elements may well be combined into a single functional element. Alternatively, certain elements may be split into multiple functional elements. Elements from one embodiment may be added to another embodiment.
CLAIMS:
1. A smart-visual docking guidance system (S-VDGS) (100) for an aircraft parking space having a plurality of parking bays, comprising:
an imaging device (304) configured to capture one of a 3D scanned image or a thermal image of the parking space at a predefined interval; and
a processor control unit (302) communicatively connected to the imaging device (304) and adapted to receive the image of the parking space from the imaging device (304), wherein the processor control unit (302) is configured to:
process the image to detect an aircraft in the parking space and a movement thereof;
compare the detected aircraft and the movement thereof with a pre-stored position information of the parking space; and
provide a guidance instruction to the aircraft based on the comparison towards a designated parking bay.
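For illustration only, the detect-compare-guide sequence recited in claim 1 could be sketched as follows. This is a minimal, hypothetical sketch: the `BayLayout` structure, coordinate conventions, and the 0.3 m tolerance are assumptions, not part of the claims.

```python
from dataclasses import dataclass

# Hypothetical pre-stored position information for one parking bay
# (field names and units are illustrative assumptions).
@dataclass
class BayLayout:
    center_line_x: float   # lateral position of the center line, metres
    stop_line_y: float     # longitudinal position of the stop line, metres

def guidance_instruction(aircraft_x: float, aircraft_y: float,
                         bay: BayLayout, tolerance: float = 0.3) -> str:
    """Compare the detected aircraft position with the pre-stored bay
    layout and return a guidance instruction (illustrative logic only)."""
    lateral_error = aircraft_x - bay.center_line_x
    remaining = bay.stop_line_y - aircraft_y
    if remaining <= 0:
        return "STOP"
    if lateral_error > tolerance:
        return "STEER LEFT"
    if lateral_error < -tolerance:
        return "STEER RIGHT"
    return "CONTINUE STRAIGHT"

bay = BayLayout(center_line_x=0.0, stop_line_y=25.0)
print(guidance_instruction(0.5, 10.0, bay))  # aircraft right of center line -> STEER LEFT
```

In practice, the detected position would be derived from the 3D scanned or thermal image rather than passed in directly; the comparison-to-instruction logic above is the claimed step being illustrated.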
2. The S-VDGS (100) as claimed in claim 1, comprising a display unit (402) operable by the processor control unit (302) to display the guidance instruction to the aircraft.
3. The S-VDGS (100) as claimed in claim 1, wherein the imaging device (304) is at least one of:
a 3D LIDAR sensor (308) adapted to capture the 3D scanned image; and
a High Definition imaging device (310) and a thermal imaging device (310) adapted to capture the HD image and the thermal image, respectively.
4. The S-VDGS (100) as claimed in claim 3, wherein the processor control unit (302) is configured to automatically switch from a 3D LIDAR image based operation to a thermal image based operation in the presence of thick fog conditions.
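The automatic switchover of claim 4 can be illustrated with a minimal sketch, assuming visibility is reduced to a single metric and a hypothetical 300 m fog threshold (neither value appears in the claims):

```python
def select_imaging_mode(visibility_m: float, fog_threshold_m: float = 300.0) -> str:
    """Choose the imaging source: fall back from the 3D LIDAR sensor to
    the thermal imaging device when visibility drops below an assumed
    fog threshold (illustrative logic only)."""
    return "thermal" if visibility_m < fog_threshold_m else "lidar_3d"

print(select_imaging_mode(100.0))   # thick fog -> thermal
print(select_imaging_mode(1000.0))  # clear conditions -> lidar_3d
```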
5. The S-VDGS (100) as claimed in claim 1, wherein the processor control unit (302) is configured to determine a thermal image and a 3D scanned image of an obstacle present within a trajectory of the aircraft to alert the aircraft.
6. The S-VDGS (100) as claimed in claim 1, wherein the pre-stored position information includes one or more of, a position of a center line, a position of a stop line, and a position of the designated parking bay.
7. The S-VDGS (100) as claimed in claim 1, comprising a light emitting diode (LED) based display unit (402) adapted to provide visual signals based on the guidance instruction to guide the aircraft towards the designated parking bay.
8. The S-VDGS (100) as claimed in claim 1, wherein the processor control unit (302) includes a defogging mechanism, the defogging mechanism comprising a heating element adapted to raise a temperature of glass screens (408, 410) within the processor control unit (302) to remove fog from the glass screens (408, 410).
9. The S-VDGS (100) as claimed in claim 8, wherein the processor control unit (302) is communicatively connected to the defogging mechanism and configured to activate the defogging mechanism when a relative humidity inside the processor control unit (302) is above a predefined threshold value.
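The humidity-triggered activation of claims 8 and 9 could be sketched as below. The hysteresis band (on above 80 %, off below 70 %) is an assumed design choice to avoid rapid on/off cycling near the threshold; the claims recite only a single predefined threshold value.

```python
def defogging_state(relative_humidity_pct: float, currently_on: bool,
                    on_above_pct: float = 80.0, off_below_pct: float = 70.0) -> bool:
    """Return whether the heating element should be on. Switches on above
    the assumed threshold and off only once humidity drops well below it,
    so the element does not chatter near the threshold (hypothetical values)."""
    if relative_humidity_pct > on_above_pct:
        return True
    if relative_humidity_pct < off_below_pct:
        return False
    return currently_on  # inside the hysteresis band: keep current state

print(defogging_state(85.0, currently_on=False))  # humid -> heater on
```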
10. A smart-visual docking guidance system (S-VDGS) (100) comprising:
a plurality of sensors (312) adapted to collect real-time meteorological information; and
a processor control unit (302) communicatively connected to the plurality of sensors (312), wherein the processor control unit (302) is configured to:
receive the real-time meteorological information from the plurality of sensors (312);
determine whether the received real-time meteorological information is within a predetermined threshold range; and
provide a guidance instruction to an aircraft based on the determined real-time meteorological information.
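The threshold determination of claim 10 could be sketched as a per-parameter range check. The parameter names and operating ranges below are illustrative assumptions only; the claims do not specify them.

```python
# Hypothetical predetermined threshold ranges per meteorological parameter
# (illustrative values, not recited in the claims).
THRESHOLDS = {
    "temperature_c": (-25.0, 45.0),
    "wind_speed_mps": (0.0, 15.0),
    "relative_humidity_pct": (0.0, 95.0),
}

def within_operating_range(readings: dict) -> dict:
    """Return, per parameter, whether the real-time reading falls inside
    its predetermined threshold range (only known parameters are checked)."""
    return {name: lo <= readings[name] <= hi
            for name, (lo, hi) in THRESHOLDS.items()
            if name in readings}

status = within_operating_range({"temperature_c": 50.0, "wind_speed_mps": 8.0})
print(status)  # temperature out of range, wind speed within range
```

A guidance instruction (e.g. hold pushback) would then be derived from any parameter flagged as out of range.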
11. The S-VDGS (100) as claimed in claim 10, wherein the plurality of sensors (312) includes any one or more of, a temperature sensor (306), a humidity sensor (306), an anemometer, a barometric pressure sensor, and a precipitation sensor.
12. The S-VDGS (100) as claimed in claim 11, wherein the real-time meteorological information includes any one or more of, an atmospheric temperature, a relative humidity, a wind speed and direction, an atmospheric pressure, a presence and intensity of rainfall, snow, sleet, freezing rain, or hail.
13. A smart-visual docking guidance system (S-VDGS) (100) comprising:
a plurality of communication modes (102) configured to communicate real-time information to an aircraft; and
a processor (302) communicatively connected to the plurality of communication modes (102), wherein the processor (302) is configured to:
detect an availability of a set communication mode among the plurality of communication modes (102); and
selectively switch, based on a predefined sequential order, between the plurality of communication modes (102) when the set communication mode is unavailable, to provide the real-time information to the aircraft.
14. The S-VDGS (100) as claimed in claim 13, wherein the plurality of communication modes (102) includes one or more of, a Local Area Network (LAN) connection, a Wi-Fi connection, a sub-gigahertz RF connection, and a cellular connection.
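The sequential failover of claims 13 and 14 could be sketched as follows, using the modes named in claim 14 as the predefined order; the availability set and the LAN-first preference are assumptions for illustration.

```python
from typing import Optional

# Predefined sequential order of communication modes (names from claim 14;
# the ordering itself is an assumed example).
COMM_MODES = ["LAN", "Wi-Fi", "sub-GHz RF", "cellular"]

def select_comm_mode(available: set, preferred: str = "LAN") -> Optional[str]:
    """Use the set communication mode if available; otherwise fall back
    through the predefined sequential order to the first available mode."""
    if preferred in available:
        return preferred
    for mode in COMM_MODES:
        if mode in available:
            return mode
    return None  # no communication mode currently available

print(select_comm_mode({"sub-GHz RF", "cellular"}))  # LAN down -> sub-GHz RF
```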
15. The S-VDGS (100) as claimed in claim 13, wherein the plurality of communication modes (102) is adapted to interface with an Air Traffic Control (ATC) to receive real-time weather information and provide the real-time weather information to pilots at the parking space.
16. The S-VDGS (100) as claimed in claim 13, adapted to receive critical weather related information from a pilot prior to take-off.
17. The S-VDGS (100) as claimed in claim 13, wherein the processor (302) comprises an artificial intelligence (AI)/machine learning (ML) algorithm adapted to detect and filter laser reflections from extreme snow and fog conditions.
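As a minimal stand-in for the AI/ML filter of claim 17, the heuristic below discards LIDAR returns that are both weak and spatially isolated, as backscatter from snow or fog typically is. The point format, intensity threshold, and neighborhood parameters are all assumed values; an actual trained model is not shown.

```python
import math

def filter_snow_returns(points, intensity_min=0.2, radius=0.5, min_neighbors=2):
    """Keep a return if it is strong, or if enough other returns lie within
    `radius` of it (weak, isolated points are treated as snow/fog backscatter).
    `points` is a list of (x, y, z, intensity) tuples; thresholds are assumed."""
    kept = []
    for i, (x, y, z, intensity) in enumerate(points):
        if intensity >= intensity_min:
            kept.append((x, y, z, intensity))
            continue
        neighbors = sum(
            1 for j, (px, py, pz, _) in enumerate(points)
            if j != i and math.dist((x, y, z), (px, py, pz)) <= radius)
        if neighbors >= min_neighbors:
            kept.append((x, y, z, intensity))
    return kept

scan = [(0.0, 0.0, 0.0, 0.1), (0.1, 0.0, 0.0, 0.1), (0.0, 0.1, 0.0, 0.1),
        (10.0, 10.0, 10.0, 0.05),  # weak, isolated: likely snow backscatter
        (5.0, 5.0, 5.0, 0.9)]      # strong return: kept regardless
print(len(filter_snow_returns(scan)))  # -> 4 (the isolated point is dropped)
```

The claimed system would instead apply a trained AI/ML model to the same end; this sketch only illustrates the filtering objective.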
| # | Name | Date |
|---|---|---|
| 1 | 202311015851-TRANSLATIOIN OF PRIOIRTY DOCUMENTS ETC. [09-03-2023(online)].pdf | 2023-03-09 |
| 2 | 202311015851-STATEMENT OF UNDERTAKING (FORM 3) [09-03-2023(online)].pdf | 2023-03-09 |
| 3 | 202311015851-PROVISIONAL SPECIFICATION [09-03-2023(online)].pdf | 2023-03-09 |
| 4 | 202311015851-OTHERS [09-03-2023(online)].pdf | 2023-03-09 |
| 5 | 202311015851-FORM FOR STARTUP [09-03-2023(online)].pdf | 2023-03-09 |
| 6 | 202311015851-FORM FOR SMALL ENTITY(FORM-28) [09-03-2023(online)].pdf | 2023-03-09 |
| 7 | 202311015851-FORM 1 [09-03-2023(online)].pdf | 2023-03-09 |
| 8 | 202311015851-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [09-03-2023(online)].pdf | 2023-03-09 |
| 9 | 202311015851-EVIDENCE FOR REGISTRATION UNDER SSI [09-03-2023(online)].pdf | 2023-03-09 |
| 10 | 202311015851-DRAWINGS [09-03-2023(online)].pdf | 2023-03-09 |
| 11 | 202311015851-DECLARATION OF INVENTORSHIP (FORM 5) [09-03-2023(online)].pdf | 2023-03-09 |
| 12 | 202311015851-Proof of Right [22-05-2023(online)].pdf | 2023-05-22 |
| 13 | 202311015851-FORM-26 [31-05-2023(online)].pdf | 2023-05-31 |
| 14 | 202311015851-FORM FOR STARTUP [08-03-2024(online)].pdf | 2024-03-08 |
| 15 | 202311015851-EVIDENCE FOR REGISTRATION UNDER SSI [08-03-2024(online)].pdf | 2024-03-08 |
| 16 | 202311015851-DRAWING [08-03-2024(online)].pdf | 2024-03-08 |
| 17 | 202311015851-CORRESPONDENCE-OTHERS [08-03-2024(online)].pdf | 2024-03-08 |
| 18 | 202311015851-COMPLETE SPECIFICATION [08-03-2024(online)].pdf | 2024-03-08 |