Abstract: A method for vehicle localization is provided. The method includes acquiring one or more images of a vehicle (112) in a designated area (104) using an acquisition device (102). Based on the acquired image, a first reference point of the vehicle (112) is determined. Subsequently, a relative position and an orientation of the vehicle (112) in the designated area (104) are identified with respect to a selected point of reference (105) corresponding to the designated area (104) based on the acquired image. An offset value of a second reference point of the vehicle (112) from the determined first reference point is interpolated based on one or more reference offset values corresponding to the relative position and the orientation of the vehicle (112). A location of the vehicle (112) in the designated area (104) is determined based on the interpolated offset value.
Claims:
1. A method for vehicle localization, the method comprising:
acquiring one or more images of a vehicle (112) in a designated area (104) using an acquisition device (102);
determining a first reference point of the vehicle (112) based on an acquired image;
identifying a relative position and an orientation of the vehicle (112) in the designated area (104) with respect to a selected point of reference (105) corresponding to the designated area (104) based on the acquired image;
interpolating an offset value of a second reference point of the vehicle (112) from the determined first reference point based on one or more reference offset values corresponding to the relative position and the orientation of the vehicle (112); and
determining a location of the vehicle (112) in the designated area (104) based on the interpolated offset value.
2. The method as claimed in claim 1, wherein the step of identifying the relative position of the vehicle (112) in the designated area (104) comprises:
identifying a position of the vehicle (112) in the acquired image; and
transforming the position of the vehicle (112) in the acquired image into the relative position of the vehicle (112) in the designated area (104) by scaling the position of the vehicle (112) with a homography matrix, wherein the homography matrix correlates the position of the vehicle (112) in the acquired image to the position of the vehicle (112) in the designated area (104).
3. The method as claimed in claim 2, wherein the orientation and the first reference point of the vehicle (112) are identified from the acquired image of the vehicle (112) using one or more of a principal component analysis and an ellipse fitting method.
4. The method as claimed in claim 3, wherein the first reference point corresponds to a centroid of the vehicle and the second reference point corresponds to a midpoint of a rear axle of the vehicle.
5. The method as claimed in claim 1, further comprising generating a list comprising the one or more reference offset values by positioning a test vehicle (302) within a field of view of the acquisition device (102) at a plurality of positions and at a plurality of orientations with respect to the selected point of reference (105), wherein the test vehicle (302) is a vehicle used for generating the list comprising the one or more reference offset values and is different from the vehicle (112) in the designated area (104), and wherein the test vehicle (302) comprises a first marking point (304) corresponding to a centroid of the test vehicle (302) and a second marking point (306) corresponding to a midpoint of a rear axle of the test vehicle (302).
6. The method as claimed in claim 5, wherein generating the list comprising the one or more reference offset values comprises:
iteratively positioning the test vehicle (302) within the field of view of the acquisition device (102) at the plurality of positions with respect to the selected point of reference (105) corresponding to the designated area (104) and at the plurality of orientations;
acquiring one or more images of the test vehicle (302) when the test vehicle (302) is placed at each of the plurality of positions and the plurality of orientations;
measuring a distance to the midpoint (306) of the rear axle of the test vehicle (302) from the selected point of reference (105) when the test vehicle (302) is positioned at each of the plurality of positions and the plurality of orientations;
determining the centroid (304) of the test vehicle (302) based on the one or more images of the test vehicle (302) when the test vehicle (302) is positioned at each of the plurality of positions and the plurality of orientations; and
computing an offset of the midpoint (306) of the rear axle with respect to the centroid (304) of the test vehicle (302) when the test vehicle (302) is positioned at each of the plurality of positions and the plurality of orientations to generate the list comprising the one or more reference offset values.
7. The method as claimed in claim 6, wherein the offset of the midpoint (306) of the rear axle with respect to the centroid (304) of the test vehicle (302) is computed in accordance with an equation:
Offset Distance = √((X1 - X2)² + (Y1 - Y2)²), wherein X1 and Y1 correspond to a position of the midpoint (306) of the rear axle of the test vehicle (302), and X2 and Y2 correspond to a position of the centroid (304) of the test vehicle (302) in the designated area (104).
8. The method as claimed in claim 6, further comprising:
identifying a length of the vehicle (112) in the designated area (104) based on the acquired image of the vehicle (112);
comparing the length of the vehicle (112) with a plurality of reference lengths to identify a group associated with the vehicle (112) in the designated area (104), wherein each length in the plurality of reference lengths is associated with a particular vehicle group in a plurality of vehicle groups;
computing a scale factor by dividing an average length of the identified group by a length of the test vehicle (302) used for generating the list comprising the one or more reference offset values;
computing an actual offset of the midpoint of the rear axle of the vehicle (112) with respect to the centroid of the vehicle (112) by scaling, based on the scale factor, the offset value that is obtained by interpolating the one or more reference offset values, wherein the offset value is interpolated based on a trilinear interpolation method; and
determining the midpoint of the rear axle of the vehicle (112) based on the actual offset.
9. The method as claimed in claim 8, further comprising:
receiving an image of the designated area (104) that comprises a selected vacant parking slot (702), wherein the designated area (104) is a parking space, and wherein the selected vacant parking slot (702) is located in a target zone of the designated area (104);
identifying a location of the selected vacant parking slot (702) with respect to the selected point of reference (105) of the designated area (104) based on the received image;
communicating a map of the designated area (104) to the vehicle (112) such that the vehicle (112) navigates towards the target zone based on the map;
tracking one or more heading directions of the vehicle (112) that navigates towards the target zone; and
determining a current position and a current orientation of the vehicle (112) at the target zone, wherein the current position of the vehicle (112) is defined by the midpoint of the rear axle of the vehicle (112).
10. The method as claimed in claim 9, further comprising:
identifying a distance between the current position of the vehicle (112) and a final target position of the vehicle (112) in the selected vacant parking slot (702);
identifying a relative difference in an orientation between the current orientation of the vehicle (112) and a final target orientation of the vehicle (112) in the selected vacant parking slot (702);
calculating a required displacement of the vehicle (112) to automatically park the vehicle (112) in the selected vacant parking slot (702) based on the identified distance and the identified difference in the orientation; and
communicating the required displacement to the vehicle (112) such that the vehicle (112) is automatically parked in the selected vacant parking slot (702).
11. A system (100) for automatically localizing a vehicle, the system comprising:
one or more acquisition devices (102) that are configured to acquire one or more images of the vehicle (112) in a designated area (104); and
a vehicle localizing system (108), wherein the vehicle localizing system (108) comprises a processor that is configured to:
determine a first reference point of the vehicle (112) based on an acquired image;
identify a relative position and an orientation of the vehicle (112) in the designated area (104) with respect to a selected point of reference (105) corresponding to the designated area (104) based on the acquired image;
interpolate an offset value of a second reference point of the vehicle (112) from the determined first reference point based on one or more reference offset values corresponding to the relative position and the orientation of the vehicle (112); and
determine a location of the vehicle (112) in the designated area (104) based on the interpolated offset value.
12. The system (100) as claimed in claim 11, wherein the vehicle localizing system (108) is further configured to:
identify a position of the vehicle (112) in the acquired image; and
transform the position of the vehicle (112) in the acquired image into the relative position of the vehicle (112) in the designated area (104) by scaling the position of the vehicle (112) with a homography matrix, wherein the homography matrix correlates the position of the vehicle (112) in the acquired image to the position of the vehicle (112) in the designated area (104).
13. The system (100) as claimed in claim 11, wherein the vehicle localizing system (108) is further configured to generate a list comprising the one or more reference offset values by positioning a test vehicle (302) within a field of view of the acquisition device (102) at a plurality of positions and at a plurality of orientations with respect to the selected point of reference (105), wherein the test vehicle (302) is a vehicle used for generating the list comprising the one or more reference offset values and is different from the vehicle (112) in the designated area (104), and wherein the test vehicle (302) comprises a first marking point (304) corresponding to a centroid of the test vehicle (302) and a second marking point (306) corresponding to a midpoint of a rear axle of the test vehicle (302).
14. The system (100) as claimed in claim 13, wherein the vehicle localizing system (108) is further configured to:
identify a length of the vehicle (112) in the designated area (104) based on the acquired image of the vehicle (112);
compare the length of the vehicle (112) with a plurality of reference lengths to identify a group associated with the vehicle (112) in the designated area (104), wherein each length in the plurality of reference lengths is associated with a particular vehicle group in a plurality of vehicle groups;
compute a scale factor by dividing an average length of the identified group by a length of the test vehicle (302) used for generating the list comprising the one or more reference offset values;
compute an actual offset of the second reference point with respect to the first reference point of the vehicle (112) by scaling, based on the scale factor, the offset value that is obtained by interpolating the one or more reference offset values, wherein the offset value is interpolated based on a trilinear interpolation method; and
determine a location of the second reference point of the vehicle (112) based on the actual offset.
15. The system (100) as claimed in claim 14, wherein the vehicle localizing system (108) is further configured to:
receive an image of the designated area (104) that comprises a selected vacant parking slot (702), wherein the designated area (104) is a parking space, and wherein the selected vacant parking slot (702) is located in a target zone of the designated area (104);
identify a location of the selected vacant parking slot (702) with respect to the selected point of reference (105) of the designated area (104) based on the received image;
communicate a map of the designated area (104) to the vehicle (112) such that the vehicle (112) navigates towards the target zone based on the map;
track one or more heading directions of the vehicle (112) that navigates towards the target zone;
determine a current position and a current orientation of the vehicle (112) at the target zone, wherein the current position of the vehicle (112) is defined by the midpoint of the rear axle of the vehicle (112);
identify a distance between the current position of the vehicle (112) and a final target position of the vehicle (112) in the selected vacant parking slot (702);
identify a relative difference in an orientation between the current orientation of the vehicle (112) and a final target orientation of the vehicle (112) in the selected vacant parking slot (702);
calculate a required displacement of the vehicle (112) to automatically park the vehicle (112) in the selected vacant parking slot (702) based on the identified distance and the identified difference in the orientation; and
communicate the required displacement to the vehicle (112) such that the vehicle (112) is automatically parked in the selected vacant parking slot (702).
Description:
BACKGROUND
[0001] Embodiments of the present specification relate generally to object localization. More particularly, the present specification relates to a system and method for automatic vehicle localization using one or more acquisition devices.
[0002] Generally, vehicles are used by humans to move from one place to another. With advancements in technology, semi-autonomous and autonomous vehicles have come into existence. Semi-autonomous vehicles are driven with driver assistance and are equipped to perform some functions, such as parking, autonomously. In contrast, autonomous vehicles are driven and parked automatically.
[0003] Both semi-autonomous and autonomous vehicles utilize various types of autonomous systems for driving or parking. One such autonomous parking system includes wireless sensors, which may be located in each parking slot in a designated parking area. Such wireless sensors provide real-time information on vacant parking slots to the vehicle. However, such wireless sensors are unable to localize the vehicles.
[0004] In another approach, the autonomous parking systems include ultrasonic sensors, 3D cameras, laser scanners, and/or radars located in the vehicle for autonomously parking the vehicle. However, such autonomous parking systems are not cost-efficient and require sensors installed on the vehicle as well as near the vacant parking slot in the parking area.
[0005] Moreover, such ultrasound sensor-based autonomous parking systems require the vehicle to be aligned in a predefined manner to detect the vacant parking slot, leading to additional effort. Furthermore, a driver of the vehicle is required to indicate which side of the vehicle to scan for a vacant parking slot, adding complexity to parking the vehicle. The driver is further required to manually specify a type of parking, such as parallel or perpendicular parking, based on an orientation of the vacant parking slot, adding to manual effort. Additionally, such autonomous parking systems are capable of identifying the vacant parking slots only when the vehicle is moving below a predefined threshold speed. If the speed of the vehicle exceeds the predefined threshold, the autonomous parking system may fail to detect the vehicle, leading to errors in identification of the vacant parking slot.
[0006] Certain other autonomous parking systems employ wireless sensors and a series of cameras located in the designated area for autonomously parking the vehicle. However, such autonomous parking systems entail use of an additional rear-view camera located in or over the vehicle to perform the autonomous parking of the vehicle.
[0007] Hence, there is a need for an improved system and method to address the aforementioned issues.
BRIEF DESCRIPTION
[0008] According to an exemplary aspect of the present specification, a method for vehicle localization is provided. The method includes acquiring one or more images of a vehicle in a designated area using an acquisition device. A first reference point of the vehicle is determined based on an acquired image. A relative position and an orientation of the vehicle in the designated area are identified with respect to a selected point of reference corresponding to the designated area based on the acquired image. An offset value of a second reference point of the vehicle from the determined first reference point is interpolated based on one or more reference offset values corresponding to the relative position and the orientation of the vehicle. The location of the vehicle in the designated area is determined based on the interpolated offset value.
[0009] A position of the vehicle may be identified in the acquired image. The position of the vehicle in the acquired image may be transformed into the relative position of the vehicle in the designated area by scaling the position of the vehicle with a homography matrix. The homography matrix may correlate the position of the vehicle in the acquired image to the position of the vehicle in the designated area. The orientation and the first reference point of the vehicle may be identified from the acquired image of the vehicle using one or more of a principal component analysis and an ellipse fitting method. The first reference point may correspond to a centroid of the vehicle and the second reference point may correspond to a midpoint of a rear axle of the vehicle.
[0010] A list including the one or more reference offset values may be generated by positioning a test vehicle within a field of view of the acquisition device at a plurality of positions and at a plurality of orientations with respect to the selected point of reference. The test vehicle may be a vehicle used for generating the list including the one or more reference offset values and may be different from the vehicle in the designated area. The test vehicle may include a first marking point corresponding to a centroid of the test vehicle and a second marking point corresponding to a midpoint of a rear axle of the test vehicle.
[0011] Generating the list including the one or more reference offset values may include iteratively positioning the test vehicle within the field of view of the acquisition device at the plurality of positions with respect to the selected point of reference corresponding to the designated area and at the plurality of orientations. The one or more images of the test vehicle may be acquired when the test vehicle is placed at each of the plurality of positions and the plurality of orientations. A distance to the midpoint of the rear axle of the test vehicle from the selected point of reference may be measured when the test vehicle is positioned at each of the plurality of positions and the plurality of orientations. The centroid of the test vehicle may be determined based on the one or more images of the test vehicle when the test vehicle is positioned at each of the plurality of positions and the plurality of orientations. An offset of the midpoint of the rear axle with respect to the centroid of the test vehicle may be computed when the test vehicle is positioned at each of the plurality of positions and the plurality of orientations to generate the list comprising the one or more reference offset values.
[0012] The offset of the midpoint of the rear axle with respect to the centroid of the test vehicle may be computed in accordance with an equation, Offset Distance = √((X1 - X2)² + (Y1 - Y2)²), where X1 and Y1 correspond to a position of the midpoint of the rear axle of the test vehicle, and X2 and Y2 correspond to a position of the centroid of the test vehicle in the designated area. A length of the vehicle in the designated area may be identified based on the acquired image of the vehicle. The length of the vehicle may be compared with a plurality of reference lengths to identify a group associated with the vehicle in the designated area. Each length in the plurality of reference lengths may be associated with a particular vehicle group in a plurality of vehicle groups. A scale factor may be computed by dividing an average length of the identified group by a length of the test vehicle used for generating the list comprising the one or more reference offset values. An actual offset of the midpoint of the rear axle of the vehicle with respect to the centroid of the vehicle may be computed by scaling the offset value that is obtained by interpolating the one or more reference offset values based on the scale factor. The midpoint of the rear axle of the vehicle may be determined based on the actual offset. The offset value may be interpolated based on a trilinear interpolation method.
[0013] An image of the designated area that comprises a selected vacant parking slot may be received. The designated area may be a parking space. The selected vacant parking slot may be located in a target zone of the designated area. A location of the selected vacant parking slot with respect to the selected point of reference of the designated area may be identified based on the received image. A map of the designated area may be communicated to the vehicle such that the vehicle navigates towards the target zone based on the map. One or more heading directions of the vehicle that navigates towards the target zone may be tracked. A current position and a current orientation of the vehicle may be determined at the target zone. The current position of the vehicle is defined by the midpoint of the rear axle of the vehicle.
[0014] A distance between the current position of the vehicle and the final target position of the vehicle in the selected vacant parking slot may be identified. A relative difference in an orientation between the current orientation of the vehicle and the final target orientation of the vehicle in the selected vacant parking slot may be identified. A required displacement of the vehicle to automatically park the vehicle in the selected vacant parking slot may be calculated based on the identified distance and the identified difference in the orientation. The required displacement may be communicated to the vehicle such that the vehicle is automatically parked in the selected vacant parking slot.
[0015] According to another exemplary aspect of the present specification, a system for automatically localizing a vehicle is provided. The system includes one or more acquisition devices and a vehicle localizing system. The one or more acquisition devices are configured to acquire one or more images of the vehicle in a designated area. The vehicle localizing system includes a processor that is configured to determine a first reference point of the vehicle based on an acquired image. A relative position and an orientation of the vehicle in the designated area are identified with respect to a selected point of reference corresponding to the designated area based on the acquired image. An offset value of a second reference point of the vehicle from the determined first reference point is interpolated based on one or more reference offset values corresponding to the relative position and the orientation of the vehicle. The location of the vehicle in the designated area is determined based on the interpolated offset value.
[0016] The vehicle localizing system may be further configured to identify a position of the vehicle in the acquired image and transform the position of the vehicle in the acquired image into the relative position of the vehicle in the designated area by scaling the position of the vehicle with a homography matrix. The homography matrix may correlate the position of the vehicle in the acquired image to the position of the vehicle in the designated area. The vehicle localizing system may generate a list including the one or more reference offset values by positioning a test vehicle within a field of view of the acquisition device at a plurality of positions and at a plurality of orientations with respect to the selected point of reference. The test vehicle may be a vehicle used for generating the list including the one or more reference offset values and may be different from the vehicle in the designated area. The test vehicle may include a first marking point corresponding to a centroid of the test vehicle and a second marking point corresponding to a midpoint of a rear axle of the test vehicle.
[0017] The vehicle localizing system may identify a length of the vehicle in the designated area based on the acquired image of the vehicle. The length of the vehicle may be compared with a plurality of reference lengths to identify a group associated with the vehicle in the designated area. Each length in the plurality of reference lengths may be associated with a particular vehicle group in a plurality of vehicle groups. A scale factor may be computed by dividing an average length of the identified group by a length of the test vehicle used for generating the list comprising the one or more reference offset values. An actual offset of the midpoint of the rear axle of the vehicle with respect to the centroid of the vehicle may be computed by scaling the offset value that is obtained by interpolating the one or more reference offset values based on the scale factor. The offset value may be interpolated based on a trilinear interpolation method. The midpoint of the rear axle of the vehicle may be determined based on the actual offset.
[0018] The vehicle localizing system may receive an image of the designated area that includes a selected vacant parking slot. The selected vacant parking slot may be located in a target zone of the designated area. A location of the selected vacant parking slot with respect to the selected point of reference of the designated area may be identified based on the received image. A map of the designated area may be communicated to the vehicle such that the vehicle navigates towards the target zone based on the map. One or more heading directions of the vehicle that navigates towards the target zone may be tracked. A current position and a current orientation of the vehicle may be determined at the target zone. The current position of the vehicle is defined by the midpoint of the rear axle of the vehicle.
[0019] The vehicle localizing system may identify a distance between the current position of the vehicle and the final target position of the vehicle in the selected vacant parking slot. A relative difference in an orientation between the current orientation of the vehicle and the final target orientation of the vehicle in the selected vacant parking slot may be identified. A required displacement of the vehicle to automatically park the vehicle in the selected vacant parking slot may be calculated based on the identified distance and the identified difference in the orientation. The required displacement may be communicated to the vehicle such that the vehicle is automatically parked in the selected vacant parking slot.
DRAWINGS
[0020] These and other features, aspects, and advantages of the claimed subject matter will become better understood when the following detailed description is read with reference to the accompanying drawings in which like characters represent like parts throughout the drawings, wherein:
[0021] FIG. 1 is a block diagram illustrating an exemplary object localization system.
[0022] FIG. 2A is a schematic diagram illustrating a designated area having a reference point.
[0023] FIG. 2B is a schematic diagram illustrating mapping of one or more points in the designated area with the one or more points in an acquired image to determine a homography matrix.
[0024] FIG. 3 is an exemplary schematic representation illustrating a test vehicle that is placed in the designated area at a first position and at a first orientation for identifying an offset value that corresponds to the first position and the first orientation.
[0025] FIG. 4 is a flow diagram illustrating an exemplary method for localizing a vehicle in the designated area.
[0026] FIG. 5 is a schematic diagram illustrating an exemplary trilinear interpolation map that is used for determining an offset value for a relative position and an orientation of the vehicle in the designated area.
[0027] FIG. 6 is a flow diagram illustrating an exemplary method for automatically parking the vehicle in a vacant parking slot in the parking space.
[0028] FIG. 7 is an exemplary schematic diagram illustrating a current position and a current orientation of the vehicle that is positioned at a target zone in the parking space.
[0029] FIG. 8 is an exemplary schematic diagram illustrating another application area of the object localization system.
DETAILED DESCRIPTION
[0030] For the purpose of promoting an understanding of the principles of the invention, reference will now be made to the embodiments illustrated in the figures, and specific language will be used to describe them. It will nevertheless be understood that no limitation of the scope of the invention is thereby intended. Such alterations and further modifications in the illustrated system, and such further applications of the principles of the invention as would normally occur to those skilled in the art, are to be construed as being within the scope of the present invention.
[0031] It will be understood by those skilled in the art that the foregoing general description and the following detailed description are exemplary and explanatory of the invention and are not intended to be restrictive thereof.
[0032] The terms "comprises", "comprising", or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a process or method that comprises a list of steps does not include only those steps but may include other steps not expressly listed or inherent to such a process or method. Similarly, one or more devices or sub-systems or elements or structures or components preceded by "comprises... a" does not, without more constraints, preclude the existence of other devices, sub-systems, elements, structures, components, additional devices, additional sub-systems, additional elements, additional structures or additional components. Appearances of the phrases "in an embodiment", "in another embodiment" and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment.
[0033] Embodiments of the present invention will be described below in detail with reference to the accompanying figures.
[0034] The following description presents exemplary systems and methods for localizing an object in a designated area. Particularly, embodiments described herein disclose systems and methods for identifying a current location of a vehicle in a designated area and tracking the vehicle as it moves in the designated area, using one or more images that are captured using one or more acquisition devices. An exemplary system that is used for localizing the object in the designated area is described in detail with reference to FIG. 1.
[0035] FIG. 1 is a block diagram illustrating an exemplary object localization system (100). It may be noted that different embodiments of the object localization system (100) may be used for identifying a location of an object in a designated area and for continuously tracking the object as it moves in the designated area. Though the object localization system (100) is capable of identifying the location of any object in the designated area, for clarity, the object localization system (100) will be described herein with reference to identification of a location of a vehicle in the designated area. Examples of the designated area include, but are not limited to, a parking area, a building structure, and a roadside environment. In certain embodiments, the identified location of the vehicle in the designated area can be used for various applications such as lane departure warning, autonomous driving, and other driver assistance or traffic regulatory functions.
[0036] For example, in a parking application, the identified location of the vehicle in a parking area can be used for providing navigation guidance to a vehicle to navigate to a vacant parking slot. In another example, the identified location of a vehicle may be used for navigation guidance when the vehicle is inside a building structure, where a global positioning system (GPS) signal is poor. In such a scenario, the object localization system (100) identifies a current location of the vehicle within the building structure and communicates the current location information to the vehicle. Certain exemplary components of the object localization system (100) that are utilized for localizing the vehicle in the designated area are described in subsequent sections.
[0037] In one embodiment, the object localization system (100) includes one or more acquisition devices (102) that are deployed in a designated area (104). In certain embodiments, the one or more acquisition devices (102) are image acquisition devices. The one or more acquisition devices (102) are selected from one or more of a monocular camera, a stereo camera, and a multiple optical sensor system. In one embodiment, the one or more acquisition devices (102) are mounted on one or more structures in the designated area (104). According to aspects of the present disclosure, a reference point (105) corresponding to the designated area (104) is selected. In one embodiment, the reference point (105) is an origin of the designated area (104). However, in other embodiments, the reference point may be any selected reference position within or outside the designated area (104). Based on a size of the designated area (104), the number of acquisition devices used for localizing vehicles may vary. For example, a smaller designated area may require only one camera if the camera's field of view is sufficient to cover at least a selected portion of the entire designated area. If the designated area is larger, multiple cameras may be required to cover selected portions of the designated area.
[0038] The object localization system (100) further includes a server (106) and a vehicle localizing system (108). The server (106) is in communication with the one or more acquisition devices (102) through a network (110). The network (110) may be any wired or wireless network capable of conducting communication between components of the object localization system (100).
[0039] In certain embodiments, the vehicle localizing system (108) resides in the one or more acquisition devices (102) or in the server (106). In an exemplary embodiment, in which the vehicle localizing system (108) resides in the server (106), the one or more image acquisition devices (102) acquire one or more images of a vehicle (112) in the designated area (104) and communicate the one or more acquired images to the server (106). The vehicle localizing system (108) processes the one or more acquired images for identifying a location of the vehicle (112) with respect to the reference point (105) of the designated area (104). In certain embodiments, the server (106) is a cloud server.
[0040] In another exemplary embodiment, in which the vehicle localizing system (108) resides in the one or more acquisition devices (102), the acquisition devices (102) process the acquired images for identifying a location of the vehicle (112) with respect to the reference point (105) of the designated area (104) without sending the acquired images to the server (106). To that end, the one or more acquisition devices (102) are provided with one or more processors to perform image processing and to localize the vehicle (112) based on the image processing.
[0041] In certain embodiments, the location of the vehicle (112) in the designated area (104) is defined by a location of a midpoint of a rear axle of the vehicle (112). This is because vehicles such as autonomous or semi-autonomous vehicles use the midpoint of the rear axle as a point of reference for planning, tracking, and controlling vehicle motion. Hence, for localizing the vehicle (112) in the designated area (104) and for tracking the motion of the vehicle (112) in the designated area (104), the vehicle localizing system (108) identifies the midpoint of the rear axle of the vehicle (112) based on one or more images acquired by the one or more acquisition devices (102).
[0042] In certain embodiments, the vehicle localizing system (108) identifies the midpoint of the rear axle of the vehicle (112) by identifying a first reference point (shown in FIG. 3) of the vehicle (112) from the one or more acquired images and an offset value of a second reference point (shown in FIG. 3) from the first reference point. In one embodiment, the first reference point corresponds to a centroid of the vehicle (112). However, in other embodiments, the first reference point may correspond to any point within or on the vehicle (112). Further, the second reference point corresponds to the midpoint of the rear axle of the vehicle (112). Hereinafter, throughout the description of various embodiments, the second reference point is referred to as the rear axle reference point. The centroid and the rear axle reference point of the vehicle (112) are points that lie on a vehicle axis (shown in FIG. 3). More particularly, the vehicle localizing system (108) identifies the centroid of the vehicle from the one or more acquired images. Subsequently, the vehicle localizing system (108) determines an offset of the rear axle reference point from the centroid of the vehicle to identify the midpoint of the rear axle of the vehicle (112).
[0043] In one embodiment, the vehicle localizing system (108) identifies the centroid of the vehicle (112) from the one or more acquired images using a principal component analysis (PCA) technique. However, it is to be understood that the vehicle localizing system (108) can use other image processing techniques instead of the PCA technique for identifying the centroid of the vehicle (112). An example of such image processing techniques includes an ellipse fitting method. In the ellipse fitting method, the vehicle localizing system (108) fits an ellipse over a vehicle blob in the acquired image, identifies major and minor axes of the fitted ellipse, and finds the centroid of the vehicle blob by calculating a mean of pixel coordinates of the vehicle blob. Accuracy associated with the ellipse fitting method can be improved by taking a pixel intensity gradient of the vehicle blob into consideration to distinguish a vehicle from a vehicle shadow. For identifying the offset of the rear axle reference point from the centroid of the vehicle (112), the vehicle localizing system (108) requires position information of the vehicle (112) with respect to the reference point (105) of the designated area (104).
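By way of illustration only, the PCA-based identification of the centroid and orientation just described may be sketched as follows in Python. The sketch assumes the vehicle blob has already been segmented into a binary mask; the function and variable names are illustrative and are not prescribed by the present specification.

```python
import numpy as np

def centroid_and_orientation(mask):
    """Estimate a vehicle blob's centroid and orientation via PCA.

    mask: 2D array in which nonzero pixels belong to the vehicle blob
    (an illustrative assumption; segmentation is described elsewhere).
    Returns the centroid in pixel coordinates and the orientation of
    the blob's major axis in radians.
    """
    ys, xs = np.nonzero(mask)                       # blob pixel coordinates
    pts = np.column_stack([xs, ys]).astype(float)
    centroid = pts.mean(axis=0)                     # mean of pixel coordinates
    cov = np.cov((pts - centroid).T)                # 2x2 covariance of the blob
    eigvals, eigvecs = np.linalg.eigh(cov)
    major_axis = eigvecs[:, np.argmax(eigvals)]     # principal (major) axis
    orientation = np.arctan2(major_axis[1], major_axis[0])
    return centroid, orientation
```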
[0044] To that end, the vehicle localizing system (108) is configured to identify the position information of the vehicle (112) in the designated area (104) from the one or more acquired images using a homography matrix, as described in greater detail with reference to FIG. 2A and FIG. 2B. The homography matrix assists in mapping a position of the vehicle (112) in an acquired image to a position of the vehicle (112) in the designated area (104). In certain embodiments, for identifying the vehicle's position in the designated area from an acquired image, the homography matrix needs to be determined only once, at the time of an initial setup of the one or more acquisition devices (102) in the designated area (104). A procedure associated with determining the homography matrix is described with reference to FIG. 2A and FIG. 2B.
[0045] FIG. 2A is a schematic diagram illustrating the designated area (104) having the reference point (105). To determine the homography matrix, one or more points (202) are first marked in the designated area (104). Subsequently, a horizontal distance and a vertical distance of each point with respect to the reference point (105) are measured to obtain position information of each point. Following that, an acquisition device (102) is configured to acquire an image of the designated area (104) such that the reference point (105) and the one or more points (202) are clearly visible in the acquired image. An exemplary image (204) of the designated area (104) acquired using the acquisition device (102) is shown in FIG. 2B. Further, FIG. 2B depicts mapping of the points (202) in the designated area (104) to the points (202) in the acquired image (204). The vehicle localizing system (108) identifies position information of each of the points (202) in the acquired image (204). In certain embodiments, based on the known position information of the points (202) in the designated area (104) and the known position information of the points (202) in the acquired image (204), the homography matrix is determined using a simple least-squares method in accordance with the equation X = Hx, where 'X' represents the position information of the one or more points (202) with respect to the reference point (105) in the designated area (104), 'x' represents the position information of the one or more points (202) in the acquired image (204), and 'H' represents the homography matrix.
[0046] After the initial setup of the one or more acquisition devices (102) in the designated area (104) and determination of the homography matrix, the position information of the vehicle (112) with respect to the reference point (105) of the designated area (104) is identified from an acquired image. To that end, the vehicle localizing system (108) identifies position coordinates of the vehicle (112) in the acquired image. The vehicle localizing system (108) then multiplies the identified position coordinates with the determined homography matrix to obtain the position information of the vehicle (112) with respect to the reference point (105).
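For illustration, the determination and application of the homography matrix may be sketched as follows, assuming four or more marked points with measured designated-area coordinates and their pixel coordinates in the acquired image. The numeric values below are hypothetical and serve only to show the shape of the data.

```python
import numpy as np
import cv2

# Hypothetical correspondences: pixel positions of the marked points in the
# acquired image, and their measured positions (in metres) with respect to
# the reference point (105) of the designated area.
image_pts = np.array([[210, 440], [590, 452], [240, 300], [560, 310]], dtype=np.float32)
area_pts = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 10.0], [5.0, 10.0]], dtype=np.float32)

# Least-squares homography H satisfying X = Hx (image -> designated area).
H, _ = cv2.findHomography(image_pts, area_pts)

def image_to_area(pt_px, H):
    """Map pixel coordinates to designated-area coordinates using H."""
    x = np.array([pt_px[0], pt_px[1], 1.0])
    X = H @ x
    return X[:2] / X[2]     # normalise the homogeneous coordinates

# Example: position of the vehicle's centroid in the designated area.
vehicle_xy = image_to_area((415, 380), H)
```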
[0047] In certain embodiments, the one or more acquisition devices (102) are mounted on one or more structures in the designated area (104) at a designated height from a ground level such that the acquisition devices (102) can clearly acquire a perspective view of the vehicle (112) in the designated area (104). In certain embodiments, the one or more acquisition devices (102) are mounted on the structures at a fixed angle. In one embodiment, the image acquired by the acquisition devices (102) includes the reference point (105) or the one or more points (202) that are used for determining the homography matrix.
[0048] In certain embodiments, the vehicle localizing system (108) requires the position information of the vehicle (112) as well as an orientation of the vehicle (112) in the designated area (104) for identifying the offset of the rear axle reference point from the centroid of the vehicle (112). This is because a value of the offset depends on the position information and the orientation of the vehicle (112) in the designated area (104). For example, consider two scenarios. In a first scenario, the vehicle (112) is far away from an acquisition device (102). In a second scenario, the vehicle (112) is near the acquisition device (102). Accordingly, for an acquisition device (102) having a particular field of view, the offset value associated with the first scenario may be less than the offset value associated with the second scenario. In reality, however, the offset of the rear axle reference point from the centroid of a vehicle is constant for that particular vehicle model. Hence, at the time of initial setup of the one or more acquisition devices (102) in the designated area (104), a list of offset values is generated by placing a test vehicle in different positions and orientations in the designated area (104), as described in greater detail with reference to FIG. 3. Subsequently, when a vehicle enters the designated area (104), the vehicle localizing system (108) identifies a position and an orientation of the vehicle (112), and interpolates an offset value corresponding to the identified position and orientation based on the list of generated reference offset values.
[0049] FIG. 3 is an exemplary schematic representation (300) illustrating a test vehicle (302) that is placed in the designated area (104) at a first position and at a first orientation for identifying an offset value that corresponds to the first position and the first orientation. Before positioning the test vehicle (302) in a field of view of an acquisition device (102), a first reference point (304) and a second reference point (306) are marked, for example using color markings, on the top of the test vehicle (302) along a selected vehicle axis (308). The first reference point (304) corresponds to a centroid of the test vehicle (302) and the second reference point (306) corresponds to a midpoint of a rear axle of the test vehicle (302). Subsequently, the test vehicle (302) is placed in the field of view of the acquisition device (102) at the first position with respect to the reference point (105) in the designated area (104) and at the first orientation. Following that, the acquisition device (102) acquires an image of the test vehicle (302) such that the centroid (304) and the rear axle reference point (306) are clearly visible in the acquired image.
[0050] In certain embodiments, a distance in terms of 'X' and 'Y' coordinates between the reference point (105) (shown in FIG. 2A) and the rear axle reference point (306) is manually measured. From the acquired image, the vehicle localizing system (108) identifies an orientation of the test vehicle (302) and position coordinates of the centroid (304) using a principal component analysis (PCA) technique. The vehicle localizing system (108) then identifies a distance in terms of 'X' and 'Y' coordinates between the reference point (105) and the centroid (304) by multiplying the position coordinates of the centroid (304) with the determined homography matrix. Subsequently, in one embodiment, the vehicle localizing system (108) identifies the offset of the rear axle reference point (306) from the centroid (304) in accordance with an equation, Offset Distance = √((X1 - X2)² + (Y1 - Y2)²), where X1 and Y1 represent the measured distance between the reference point (105) and the rear axle reference point (306), and X2 and Y2 represent the identified distance between the reference point (105) and the centroid (304).
[0051] Thus, the offset of the rear axle reference point (306) from the centroid (304) is identified when the test vehicle (302) is placed at the first position and at the first orientation with respect to the reference point (105) of the designated area (104). Similarly, it is to be understood that the test vehicle (302) is iteratively placed at multiple positions and orientations in the designated area (104) to generate a list of reference offset values associated with a plurality of positions and orientations of the test vehicle (302). With the list generated at the initial setup of the acquisition devices (102) in the designated area (104), the vehicle localizing system (108) identifies the offset of the rear axle reference point from the centroid of the vehicle (112), and thereby a location of the vehicle (112), as described in greater detail with reference to FIG. 4.
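A minimal sketch of how one calibration entry of the reference list may be computed and stored is given below. The pose keys and coordinate values are hypothetical, and the simple dictionary layout is an illustrative choice rather than a requirement of the specification.

```python
import math

def offset_distance(rear_axle_xy, centroid_xy):
    """Offset of the rear axle reference point from the centroid, both
    expressed in designated-area coordinates (the equation above)."""
    return math.hypot(rear_axle_xy[0] - centroid_xy[0],
                      rear_axle_xy[1] - centroid_xy[1])

# Illustrative reference list: each key is a test-vehicle pose
# (x position, y position, orientation in radians) and each value is the
# offset computed at that pose. Real entries come from the calibration runs.
reference_offsets = {
    (2.0, 4.0, math.radians(0)): offset_distance((2.9, 4.0), (2.0, 4.0)),
    (2.0, 4.0, math.radians(30)): offset_distance((2.8, 4.4), (2.0, 4.0)),
}
```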
[0052] FIG. 4 is a flow diagram illustrating an exemplary method (400) for localizing the vehicle (112) in the designated area (104) using the object localization system (100) of FIG. 1. At step (402), the one or more acquisition devices (102) acquire one or more images of the vehicle (112) in the designated area (104). For clarity, the exemplary embodiment of localizing the vehicle (112) will be described with reference to a single acquisition device (102). The acquisition device (102) acquires an image of the vehicle (112) in the designated area (104). At step (404), the vehicle localizing system (108) determines a first reference point and an orientation of the vehicle (112) based on the acquired image. The first reference point, for example, corresponds to the centroid of the vehicle (112). In one embodiment, the vehicle localizing system (108) determines the centroid and the orientation of the vehicle (112) based on the acquired image using the principal component analysis technique. At step (406), the vehicle localizing system (108) identifies a relative position of the vehicle (112) with respect to the reference point (105) of the designated area by multiplying position coordinates of the vehicle (112) in the acquired image with the homography matrix that is determined as described with reference to FIG. 2A and FIG. 2B.
[0053] At step (408), the vehicle localizing system (108) interpolates an offset value of a rear axle reference point of the vehicle (112) from the centroid based on one or more reference offset values associated with a plurality of positions and orientations of the test vehicle (302) in a predetermined list. Specifically, the vehicle localizing system (108) interpolates the offset value based on one or more reference offset values associated with positions and orientations in the list that correspond to the relative position and the orientation of the vehicle (112). In certain embodiments, the vehicle localizing system (108) uses a linear interpolation method for estimating the offset of the rear axle reference point from the centroid. In one embodiment, the linear interpolation method is a trilinear interpolation method. The vehicle localizing system (108) applies a trilinear interpolation method that includes curve fitting using a linear polynomial expression to estimate the offset value associated with the vehicle (112) based on the generated list including the known offset values corresponding to multiple positions and multiple orientations. An exemplary linear polynomial expression used for curve fitting and for estimating the offset value associated with the vehicle (112) for the identified position and the determined orientation is p(x, y, z) = c0 + c1·Δx + c2·Δy + c3·Δz + c4·Δx·Δy + c5·Δy·Δz + c6·Δz·Δx + c7·Δx·Δy·Δz.
[0054] In the previously noted linear polynomial expression, 'p' represents a value at a point (x, y, z) on a trilinear interpolation map (500) that is shown in FIG. 5. The value of the point 'p' provides the offset of the rear axle reference point from the centroid of the vehicle (112). Here, 'x' and 'y' represent the identified position of the vehicle (112) with respect to the reference point (105) of the designated area (104), and 'z' represents the determined orientation of the vehicle (112) in the designated area (104). Δx, Δy, and Δz represent relative distances of the query point with respect to the nearest available calibration data points. In one embodiment, Δx, Δy, and Δz are determined in accordance with the following equations (1)-(3):
Δx = (x - x0)/(x1 - x0) (1)
Δy = (y - y0)/(y1 - y0) (2)
Δz = (z - z0)/(z1 - z0) (3)
[0055] Further, coefficients (cj), where j varies from 0 to 7, are determined from values associated with vertices of the trilinear interpolation map (500) based on the following set of equations (5).
c0 = p000
c1 = (p100 - p000)
c2 = (p010 - p000)
c3 = (p001 - p000)
c4 = (p110 - p100 - p010 + p000)
c5 = (p011 - p010 - p001 + p000)
c6 = (p101 - p100 - p001 + p000)
c7 = (p111 - p110 - p101 - p011 + p100 + p010 + p001 - p000) (5)
[0056] In one embodiment, the trilinear interpolation map (500) is a cuboid around a point to be interpolated. Thus, the vehicle localizing system (108) identifies the interpolated offset of the rear axle reference point from the centroid of the vehicle (112) using the trilinear interpolation method. At step (410), the vehicle localizing system (108) computes an actual offset of the rear axle reference point of the vehicle (112) from the centroid based on the interpolated offset. The actual offset associated with the vehicle (112) is obtained by scaling up or scaling down the interpolated offset value. This is because an offset value of the rear axle reference point from the centroid varies from vehicle to vehicle based on vehicle type and model. The trilinear interpolation actually provides the offset value associated with the test vehicle (302) and not the offset value associated with the vehicle (112), as the data (i.e., the list) used for interpolation are initially generated using the test vehicle (302). Accordingly, in order to identify the offset value associated with the vehicle (112) from the interpolated offset value, the interpolated offset value is scaled up or scaled down using a scale factor, as described in a subsequent paragraph.
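Under the coefficient definitions in equations (1)-(5) above, the trilinear interpolation step may be sketched as follows. The corner values p[i, j, k] are the reference offsets at the eight vertices of the cuboid surrounding the query point; representing them as a dictionary is an illustrative assumption.

```python
def trilinear_offset(p, dx, dy, dz):
    """Interpolate an offset inside a cuboid of reference offsets.

    p: mapping from corner indices (i, j, k), each 0 or 1, to the reference
       offset at that vertex of the (x, y, orientation) cuboid.
    dx, dy, dz: normalised distances of the query point from the corner
       (x0, y0, z0), each in [0, 1], per equations (1)-(3).
    """
    c0 = p[0, 0, 0]
    c1 = p[1, 0, 0] - p[0, 0, 0]
    c2 = p[0, 1, 0] - p[0, 0, 0]
    c3 = p[0, 0, 1] - p[0, 0, 0]
    c4 = p[1, 1, 0] - p[1, 0, 0] - p[0, 1, 0] + p[0, 0, 0]
    c5 = p[0, 1, 1] - p[0, 1, 0] - p[0, 0, 1] + p[0, 0, 0]
    c6 = p[1, 0, 1] - p[1, 0, 0] - p[0, 0, 1] + p[0, 0, 0]
    c7 = (p[1, 1, 1] - p[1, 1, 0] - p[1, 0, 1] - p[0, 1, 1]
          + p[1, 0, 0] + p[0, 1, 0] + p[0, 0, 1] - p[0, 0, 0])
    return (c0 + c1 * dx + c2 * dy + c3 * dz
            + c4 * dx * dy + c5 * dy * dz + c6 * dz * dx
            + c7 * dx * dy * dz)
```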
[0057] The vehicle localizing system (108) identifies a length of the vehicle (112) in the designated area (104) by processing the acquired image of the vehicle (112). In one embodiment, the image processing includes employing the principal component analysis (PCA) technique to measure a major axis length of the vehicle (112) in world coordinates based on the acquired image. Subsequently, the vehicle localizing system (108) compares the identified length of the vehicle (112) with reference lengths in a database of vehicle groups that are pre-classified based on lengths of a plurality of vehicles to identify a group associated with the vehicle (112). In certain embodiments, the database of vehicle groups is stored in a memory associated with the acquisition device (102) or the server (106). Further, the vehicle localizing system (108) computes a scale factor, which is used for scaling up or scaling down the interpolated offset value, by dividing an average length of the identified group by a length of the test vehicle (302) used for generating the list including the offset values. With the computed scale factor, the vehicle localizing system (108) computes the actual offset by multiplying the interpolated offset value with the scale factor. At step (412), the vehicle localizing system (108) determines a midpoint of a rear axle of the vehicle (112) based on the computed actual offset value. The computed actual offset value provides an offset of the rear axle reference point from the centroid of the vehicle (112). Thus, the vehicle localizing system (108) identifies a midpoint of the rear axle of the vehicle (112) that corresponds to the rear axle reference point and thus aids in identifying the location of the vehicle (112) in the designated area (104).
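A hedged sketch of the group lookup and scaling described above follows. The group average lengths are hypothetical values standing in for the pre-classified database of vehicle groups, and nearest-length matching is one plausible form of the comparison, not a rule stated in the specification.

```python
# Hypothetical database of vehicle groups, pre-classified by length (metres).
GROUP_AVG_LENGTHS = [3.8, 4.5, 5.2]   # e.g. compact, sedan, SUV (illustrative)
TEST_VEHICLE_LENGTH = 4.3             # length of the calibration test vehicle

def actual_offset(interpolated_offset, measured_vehicle_length):
    """Scale the interpolated test-vehicle offset to the observed vehicle.

    The vehicle is assigned to the group whose average length is closest to
    its measured major-axis length; the scale factor is that group's average
    length divided by the test vehicle's length.
    """
    group_avg = min(GROUP_AVG_LENGTHS,
                    key=lambda g: abs(g - measured_vehicle_length))
    scale_factor = group_avg / TEST_VEHICLE_LENGTH
    return interpolated_offset * scale_factor
```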
[0058] As previously noted, the object localization system (100) that determines a midpoint of a rear axle of a vehicle can be used in many application areas. One such application area is an automated parking scenario that requires localization of the vehicle (112) in a parking space and continuous tracking of the vehicle (112) for successfully parking the vehicle (112) in a vacant parking slot, as described with reference to FIG. 6. When used for automated parking, the object localization system (100) includes one or more acquisition devices that are deployed in a parking space. The parking space may include a reference point. During an initial setup of the acquisition devices (102) at the parking space, a homography matrix is determined, as described with reference to FIG. 2A and FIG. 2B. Following that, the list of reference offset values is generated by placing the test vehicle (302) at various positions and orientations with respect to the reference point of the parking space, as described with reference to FIG. 3. The acquisition devices (102) are in communication with the server (106) via the network (110). In one embodiment, the acquisition device (102) acquires an image of the parking space. In certain embodiments, the object localization system (100) may include a vacant slot detection system that detects one or more vacant parking slots based on the acquired image. In one embodiment, the vacant slot detection system uses a histogram comparison method for detecting the one or more vacant slots. The vacant slot detection system compares current histograms of occupied parking slots and vacant parking slots in the acquired image with reference histograms in a reference image. The reference histograms correspond to completely vacant parking slots. The reference histograms are created and made available in a memory device associated with the acquisition device (102) or the server (106) at the time of initial setup of the acquisition device (102) at the parking space. The vacant slot detection system compares the current histograms in the acquired image with the reference histograms in the reference image to detect one or more vacant parking slots in the parking space.
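The histogram comparison may be sketched as follows, assuming slot regions have already been cropped from the current frame and from the reference image. The correlation metric and the decision threshold are illustrative choices rather than requirements of the specification.

```python
import cv2

def slot_is_vacant(slot_img, reference_img, threshold=0.9):
    """Compare a slot's current colour histogram against the reference
    histogram of the same slot when completely vacant.

    slot_img and reference_img are 3-channel (BGR) crops of the same slot;
    threshold is an illustrative value: a high correlation with the
    empty-slot reference histogram suggests the slot is still vacant.
    """
    h_cur = cv2.calcHist([slot_img], [0, 1, 2], None, [8, 8, 8],
                         [0, 256, 0, 256, 0, 256])
    h_ref = cv2.calcHist([reference_img], [0, 1, 2], None, [8, 8, 8],
                         [0, 256, 0, 256, 0, 256])
    cv2.normalize(h_cur, h_cur)
    cv2.normalize(h_ref, h_ref)
    score = cv2.compareHist(h_cur, h_ref, cv2.HISTCMP_CORREL)
    return score >= threshold
```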
[0059] Though the vacant slot detection system uses the histogram comparison method for detecting the vacant slots, it is to be understood that the vacant slot detection system can use other techniques (e.g., machine learning techniques) for detecting the vacant slots. The vacant slot detection system detects availability of vacant parking slots in the parking space and updates the availability information stored in the server (106) in near real-time. Using the vacant parking slot availability information, the vehicle localizing system (108) provides navigation guidance to a vehicle that enters the parking space, or is already in the parking space, to navigate to a vacant parking slot, as described in greater detail with reference to FIG. 6.
[0060] FIG. 6 is a flow diagram illustrating an exemplary method (600) for automatically parking the vehicle (112) in a vacant parking slot in the parking space. In certain embodiments, the vehicle (112) is an autonomous vehicle and therefore the exemplary method (600) will be described with reference to automated parking of the autonomous vehicle (112) in the vacant parking slot. However, it is to be understood that the exemplary method (600) is also applicable to semi-autonomous and manually controlled vehicles.
[0061] At step (602), the one or more acquisition devices (102) continually acquire image frames of the parking space to detect if there are any vehicles (e.g., the vehicle (112)) that enter the parking space. In one embodiment, the vehicle localizing system (108) processes the acquired image frames and detects if there are any vehicles that enter the parking space using a color-based method. The color-based method is used for detecting a vehicle area in an image and uses simple thresholds for segmenting desired objects in the image. To ensure robustness against illumination variations, color thresholding may be done in the hue and saturation channels. Red, Green, Blue (RGB) image frames acquired from the one or more acquisition devices (102) are converted to the hue, saturation, and value (HSV) color space for color-based thresholding. One or more binary images obtained after color thresholding are filtered using morphological operations. Such color-based methods for detecting one or more vehicles in acquired image frames are already known in the art. Though the vehicle localizing system (108) uses the color-based method for detecting vehicles, it is to be understood that the vehicle localizing system (108) can use other image processing techniques instead of the color-based method for detecting one or more vehicles in the acquired image frames.
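The color-based segmentation described in this step might be implemented along the following lines with OpenCV; the HSV range shown is purely illustrative:

```python
import cv2
import numpy as np

def detect_vehicle_mask(frame_bgr, lower_hsv, upper_hsv):
    """Segment a vehicle area by color thresholding in HSV space and clean the
    result with morphological operations, as outlined in paragraph [0061]."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Threshold on hue and saturation for robustness to illumination changes.
    mask = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    # Opening removes small noise; closing fills small holes in the vehicle blob.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    return mask

# Hypothetical HSV range for a red vehicle (values are illustrative only).
mask = detect_vehicle_mask(cv2.imread("frame.jpg"), (0, 120, 70), (10, 255, 255))
```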
[0062] At step (604), the vehicle localizing system (108) receives a selected vacant parking slot from the vacant slot detection system. In certain embodiments, the vacant slot detection system selects the vacant parking slot from the available vacant parking slots based on a dimension of the vehicle (112). At step (606), the vehicle localizing system (108) identifies a location of the selected vacant parking slot with respect to the reference point of the parking space by multiplying position coordinates of the selected vacant parking slot in an acquired image with the homography matrix.
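The multiplication by the homography matrix at step (606) amounts to a projective mapping of pixel coordinates into world coordinates. A minimal sketch, assuming the 3x3 matrix H was determined at the initial setup:

```python
import numpy as np

def image_to_world(points_px, H):
    """Map image pixel coordinates to world coordinates in the parking space
    using the homography matrix H (paragraph [0062])."""
    pts = np.asarray(points_px, dtype=float)
    # Homogeneous coordinates: append 1 to each (u, v) pixel position.
    homogeneous = np.hstack([pts, np.ones((pts.shape[0], 1))])
    mapped = (H @ homogeneous.T).T
    # Divide by the third component to return to Cartesian coordinates.
    return mapped[:, :2] / mapped[:, 2:3]

# Illustrative usage: corners of the selected vacant slot in the image,
# with H assumed known from the setup described for FIG. 2A and FIG. 2B.
# slot_world = image_to_world([(412, 305), (480, 305)], H)
```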
[0063] At step (608), the vehicle localizing system (108) communicates a map of the parking space to the vehicle (112). In one embodiment, the map provides a navigation path for the vehicle (112) from its current position in the parking space to a target zone where the selected vacant parking slot is available. At step (610), as previously noted, the vehicle localizing system (108) determines a centroid and an orientation of the vehicle (112) based on the image frames acquired by the acquisition device (102) using the principal component analysis (PCA) technique. At step (612), as the vehicle (112) navigates to the target zone using the map, the vehicle localizing system (108) continuously tracks the heading direction of the vehicle (112) by processing a series of images of the vehicle (112) acquired by the acquisition device (102). In certain embodiments, a heading angle that represents the heading direction of the vehicle (112) is estimated from each image based on an orientation of the vehicle (112) in that particular image and a heading angle associated with a previous image.
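One plausible way to combine the per-image orientation with the previous heading angle is to resolve the 180-degree ambiguity inherent in a PCA-derived axis; the following Python sketch is an assumption about how such an estimate could work, not the disclosed implementation:

```python
import math

def update_heading(pca_orientation, previous_heading):
    """Estimate the heading angle from the PCA orientation of the current
    image and the heading angle of the previous image (paragraph [0063]).
    PCA yields the vehicle axis only up to 180 degrees; the previous heading
    resolves which end of the axis is the front."""
    def wrapped_diff(a, b):
        # Absolute angular difference wrapped into [0, pi].
        return abs(math.atan2(math.sin(a - b), math.cos(a - b)))

    candidates = (pca_orientation, pca_orientation + math.pi)
    # Pick the candidate closest to the previous heading.
    return min(candidates, key=lambda c: wrapped_diff(c, previous_heading))
```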
[0064] At step (614), the vehicle localizing system (108) determines a midpoint of a rear axle of the vehicle (112), as previously described with reference to FIG. 4. As the vehicle (112) navigates to the target zone using the map, the vehicle localizing system (108) continuously tracks the location of the midpoint of the rear axle with respect to the reference point of the parking space to track the location of the vehicle (112) in the parking space. When the vehicle (112) reaches the target zone, the vehicle localizing system (108) may receive a request from the vehicle (112) for a final target position and orientation required to park the vehicle (112) in the selected vacant parking slot. Alternatively, the vehicle localizing system (108) may automatically determine that the vehicle (112) has reached the target zone based on the acquired images. At step (616), the vehicle localizing system (108) receives an image of the vehicle (112) that is currently positioned at the target zone where the selected vacant parking slot is available. At step (618), the vehicle localizing system (108) determines a current position and a current orientation of the vehicle (112) at the target zone based on the acquired image. At step (620), the vehicle localizing system (108) calculates a displacement required to automatically manoeuvre the vehicle (112) from the current position and orientation to the final target position and orientation, as shown and described in detail with reference to FIG. 8.
[0065] FIG. 8 is an exemplary schematic diagram (700) illustrating a current position (X0, Y0) and a current orientation (θ) of the vehicle (112) that is currently positioned at the target zone of the parking space. FIG. 8 also illustrates a final target position (X1, Y1) and a final orientation associated with the selected vacant parking slot (702). In the embodiment depicted in FIG. 8, the final orientation corresponds to a position of the vehicle (112) when a vehicle axis (704) aligns with a Y-axis (706). The vehicle localizing system (108) calculates a distance (i.e., ΔX, ΔY) between the current position (X0, Y0) and the final target position (X1, Y1) and a difference in orientation (i.e., Δθ) between the current orientation (θ) and the final orientation. Subsequently, the vehicle localizing system (108) calculates a displacement required to automatically manoeuvre the vehicle (112) from the current position and orientation to the final target position and orientation based on the calculated distance and the calculated difference in orientation. Referring again to FIG. 6, at step (622), the vehicle localizing system (108) communicates the required displacement to the vehicle (112) through the network (110). A control system associated with the vehicle (112) receives the required displacement and controls one or more of a steering angle, direction, speed, and application of brakes, such that the vehicle (112) is successfully parked in the selected vacant parking slot (702).
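A minimal sketch of the displacement calculation of FIG. 8, assuming poses are given as (X, Y, orientation) tuples in world coordinates with angles in radians; the numeric values are illustrative only:

```python
import math

def required_displacement(current, target):
    """Distance (delta-X, delta-Y) and orientation difference (delta-theta)
    between the current pose (X0, Y0, theta) and the final target pose
    (X1, Y1, final orientation), as described for FIG. 8."""
    (x0, y0, theta0), (x1, y1, theta1) = current, target
    dx, dy = x1 - x0, y1 - y0
    # Wrap the orientation difference into (-pi, pi].
    dtheta = math.atan2(math.sin(theta1 - theta0), math.cos(theta1 - theta0))
    return dx, dy, dtheta

# Example: vehicle at (2.0 m, 5.0 m) heading 80 degrees; the slot requires
# alignment with the Y-axis (90 degrees) at (2.5 m, 8.0 m).
print(required_displacement((2.0, 5.0, math.radians(80)),
                            (2.5, 8.0, math.radians(90))))
```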
[0066] FIG. 9 is an exemplary schematic diagram (800) illustrating another application area of the object localization system (100). The application area includes Vehicle-to-everything (V2X) communication, such as Vehicle-to-Infrastructure (V2I), Vehicle-to-Vehicle (V2V), Vehicle-to-Pedestrian (V2P), Vehicle-to-Device (V2D), and Vehicle-to-Grid (V2G) communication. In one embodiment, the object localization system (100) includes two acquisition devices, a first acquisition device (802) and a second acquisition device (804). The acquisition devices (802) and (804) are deployed in a roadside environment (806) and monitor vehicles that pass through the roadside environment (806). When the vehicles pass through the roadside environment (806), the vehicle localizing system (108) determines the midpoints of the rear axles of the vehicles, and the positions and orientations of the vehicles with respect to a central line (808) marked in the roadside environment (806).
[0067] In certain embodiments, the vehicles are autonomous vehicles. The positions and orientations of the vehicles with respect to the central line (808) can be used for alerting vehicles that are behind a leading vehicle about a sudden change of lanes by the leading vehicle. For example, FIG. 9 depicts a first vehicle (810) that is followed by a second vehicle (812). The vehicle localizing system (108), residing in the first acquisition device (802), determines a midpoint (814) of a rear axle of the first vehicle (810), and a sudden positional shift (X1) and an orientation shift (θ1) of the first vehicle (810) with respect to the central line (808). Based on the sudden positional shift (X1) and the orientation shift (θ1), the vehicle localizing system (108) determines that the first vehicle (810) is attempting to change from a right lane to a left lane in the roadside environment (806) and communicates this information to a V2X system. The V2X system alerts the second vehicle (812) about the change of lane of the first vehicle (810).
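The lane-change inference described above could, under assumed thresholds, reduce to a simple test on the positional and orientation shifts; the threshold values below are illustrative assumptions, not part of the disclosure:

```python
def detect_lane_change(lateral_shift, orientation_shift,
                       shift_threshold=0.5, angle_threshold=0.1):
    """Flag a sudden lane change when both the lateral shift from the central
    line (X1, in metres) and the orientation shift (theta1, in radians)
    exceed illustrative thresholds (paragraph [0067])."""
    return (abs(lateral_shift) > shift_threshold
            and abs(orientation_shift) > angle_threshold)

# Example: a 0.8 m lateral shift combined with a 0.15 rad orientation shift
# would trigger an alert to following vehicles via the V2X system.
if detect_lane_change(0.8, 0.15):
    print("Alert: leading vehicle changing lanes")
```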
[0068] Though the exemplary embodiment of FIG. 9 describes localizing and tracking a vehicle in a roadside environment, it is to be understood that the object localization system (100) can be used in any known environment for localizing and tracking vehicles. Examples of such environments include city roads, driving lanes inside a building, a restricted compound, and a road inside a tunnel. Generally, when vehicles pass through such environments, the GPS reception associated with the vehicles would be poor. In such a scenario, the object localization system (100) identifies a current location of a vehicle in the known environment and communicates the current location information to the vehicle such that the vehicle identifies its current location.
[0069] The embodiments described herein enable identification of the midpoint of the rear axle of the vehicle (112) from one or more images acquired using the one or more acquisition devices (102) to determine an exact location of the vehicle (112) in the designated area (104). Unlike typical localization approaches that use expensive on-board sensors (e.g., ultrasonic sensors) for finding a vacant parking slot, the present embodiments utilize the acquisition devices (102) deployed in a parking area for identifying the vacant parking slot. Moreover, the vehicle localizing system (108) tracks the position of the vehicle (112) continuously in the designated area (104) and enables successful parking of the vehicle (112) in the vacant parking slot. In addition, unlike typical approaches using ultrasonic sensors, which require vehicles to move below certain threshold speeds when finding vacant slots, the vehicle localizing system (108) accurately identifies vacant parking slots and guides vehicles to them irrespective of vehicle speed. Further, the embodiments described herein do not require driver inputs on the types of available parking slots (e.g., parallel, perpendicular, or angular parking) because the vehicle localizing system (108) autonomously calculates the final position and the final orientation required to park the vehicle (112) in the selected vacant parking slot.
[0070] Although specific features of various embodiments of the present systems and methods may be shown in and/or described with respect to some drawings and not in others, this is for convenience only. It is to be understood that the described features, structures, and/or characteristics may be combined and/or used interchangeably in any suitable manner in the various embodiments shown in the different figures.
[0071] While only certain features of the present systems and methods have been illustrated and described herein, many modifications and changes will occur to those skilled in the art. It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes as fall within the true spirit of the claimed invention.