
System And Method For Validation Of A Stereo Vision System

Abstract: An emulation system (100) for validating a device under test (DUT) (102) including a first and second data acquisition unit (DAU 108, 110) is disclosed. The system (100) includes a first and second display device (104, 106) to display a first and second laterally shifted view (LSV) corresponding to a test scenario. The system (100) includes focusing elements (112) that restrict a field of view (FOV) of the first DAU (108) to a first display area where first LSV is displayed, and FOV of the second DAU (110) to a second display area where second LSV is displayed. The system (100) includes a processing subsystem (122) communicatively coupled to the DUT (102) configured to determine depth information for objects in the test scenario based on first and second LSVs. The processing subsystem (122) validates the DUT (102) based on a comparison of the depth information and a corresponding expected value.


Patent Information

Application #: 201641038802
Filing Date: 14 November 2016
Publication Number: 20/2018
Publication Type: INA
Invention Field: PHYSICS
Status: Granted
Email: shery.nair@tataelxsi.co.in
Grant Date: 2023-08-11

Applicants

TATA ELXSI LIMITED
ITPB Road, Whitefield, Bangalore, India

Inventors

1. JIHAS KHAN
Tata Elxsi Limited, ITPB Road, Whitefield, Bangalore - 560048
2. ARAVIND RAVEENDRANADH NATH
Tata Elxsi Limited, ITPB Road, Whitefield, Bangalore - 560048
3. MANU MURALI
Tata Elxsi Limited, ITPB Road, Whitefield, Bangalore - 560048
4. RANJITH SANKARANARAYANAN
Tata Elxsi Limited, ITPB Road, Whitefield, Bangalore - 560048

Specification

Claims:
1. An emulation system (100) for validating a device under test (102) that comprises a first data acquisition unit (108) and a second data acquisition unit (110), the emulation system (100) comprising:
at least a first display device (104) and a second display device (106) configured to display a first laterally shifted view and a second laterally shifted view corresponding to a desired test scenario configured for validating the device under test (102);
one or more focusing elements (112) configured to restrict a field of view of the first data acquisition unit (108) to be within a first designated display area where the first laterally shifted view is configured to be displayed on the first display device (104), and to restrict a field of view of the second data acquisition unit (110) to be within a second designated display area where the second laterally shifted view is configured to be displayed on the second display device (106); and
a processing subsystem (122) communicatively coupled to the device under test (102) that is configured to determine depth information corresponding to one or more objects in the desired test scenario based on the first laterally shifted view and the second laterally shifted view, wherein the processing subsystem (122) is configured to validate a function of the device under test (102) based on a comparison of the determined depth information and a corresponding expected value.

2. The system (100) as claimed in claim 1, wherein the one or more focusing elements (112) comprise a mirror assembly (112), the mirror assembly (112) comprising:
at least a first mirror (118) positioned relative to the first display device (104) to reflect a visual displayed within the first designated area towards the field of view of the first data acquisition unit (108); and
at least a second mirror (116) positioned relative to the second display device (106) to reflect a visual displayed within the second designated area towards the field of view of the second data acquisition unit (110).

3. The system (100) as claimed in claim 1, wherein the one or more focusing elements (112) comprise a mirror and prism assembly (300), the mirror and prism assembly (300) comprising:
at least one prism (310);
at least a first mirror (306) positioned relative to the first display device (302) to reflect a visual displayed within the first designated area towards the prism (310),
at least a second mirror (308) positioned relative to the second display device (304) to reflect a visual displayed within the second designated area towards the prism (310);
wherein the prism (310) is positioned to reflect the visual reflected by the first mirror (306) towards the field of view of the first data acquisition unit (312) and the visual reflected by the second mirror (308) towards the field of view of the second data acquisition unit (316) using total internal reflection.

4. The system (100) as claimed in claim 1, wherein the one or more focusing elements (112) comprise a prism assembly (400), the prism assembly (400) comprising:
at least first, second, and third prisms (406), (408), (410), wherein the first and second prisms (406), (408) are positioned relative to the first display device (402) and the second display device (404) to reflect one or more visuals displayed within the first designated area and the second designated area, respectively, towards the third prism (410), and the third prism (410) is positioned to reflect the visuals reflected by the first and second prisms (406), (408) towards the field of view of the first data acquisition unit (412) and the second data acquisition unit (414), respectively, using total internal reflection.

5. The system (100) as claimed in claim 1, wherein the one or more focusing elements (112) comprise an aluminum foil, panda film, synthetic films such as Mylar, Dureflect, polished anodized aluminum, acrylic mirror, astro-foil, prism, diamond, dielectric mirror, a suitable reflective element, or combinations thereof.

6. The system (100) as claimed in claim 1, wherein the one or more focusing elements (112) are configured to move in at least a horizontal direction and a vertical direction, and wherein one or more of an angular position and a distance between the one or more focusing elements (112), the first display device (104), the second display device (106), and the device under test (102) is selected to provide maximum reflection of the first laterally shifted view and the second laterally shifted view towards the field of view of the first data acquisition unit (108) and the second data acquisition unit (110), respectively.

7. The system (100) as claimed in claim 1, wherein the processing subsystem (122) is configured to:
process the first laterally shifted view and the second laterally shifted view to include effect of one or more test vectors, wherein the first laterally shifted view and the second laterally shifted view are pre-recorded using the device under test (102), and wherein the test vectors comprise one or more of noise, distortion, color balance, white balance, chrominance noise, sharpness, tilt, velocity, vibrations, and other environmental effects;
stream the processed first laterally shifted view to the first display device (104) and the processed second laterally shifted view to the second display device (106).

8. The system (100) as claimed in claim 1, wherein the processing subsystem (122) is configured to:
emulate a virtual environment corresponding to the desired test scenario,
design a virtual device under test positioned in the virtual environment;
generate a first virtual video stream corresponding to the first laterally shifted view and a second virtual video stream corresponding to the second laterally shifted view using the virtual device under test positioned in the virtual environment;
process the first virtual video stream and the second virtual video stream to include effect of one or more test vectors, wherein the test vectors comprise one or more of noise, distortion, color balance, white balance, chrominance noise, sharpness, tilt, velocity, vibrations, and other environmental effects; and
stream the processed first virtual video stream to the first display device (104) and the processed second virtual video stream to the second display device (106).

9. The system (100) as claimed in claim 1, further comprising a memory unit (124) communicatively coupled to the processing subsystem (122) and configured to store one or more test vectors corresponding to the desired test scenario, the first laterally shifted view, the second laterally shifted view, the determined depth information, a result of the comparison of the determined depth information and a corresponding expected value, or combinations thereof, for validating the function of the device under test (102).

10. A method for validating a device under test (102) that comprises a first data acquisition unit (108) and a second data acquisition unit (110), the method comprising:
displaying a first laterally shifted view corresponding to a desired test scenario on a first display device (104) and a second laterally shifted view corresponding to the desired test scenario on a second display device (106);
projecting the first laterally shifted view displayed on the first display device (104) towards the first data acquisition unit (108), and the second laterally shifted view displayed on the second display device (106) towards the second data acquisition unit (110) using one or more focusing elements (112) positioned relative to the first display device (104), the second display device (106), the first data acquisition unit (108), and the second data acquisition unit (110) such that the focusing elements (112) reflect the first laterally shifted view towards a field of view of the first data acquisition unit (108) and the second laterally shifted view towards a field of view of the second data acquisition unit (110) without overlap;
comparing a depth information corresponding to one or more objects in the desired test scenario with a corresponding expected value, wherein the depth information is determined by the device under test (102) based on the first laterally shifted view and the second laterally shifted view acquired by the first data acquisition unit (108) and the second data acquisition unit (110), respectively; and
validating a function of the device under test (102) based on an outcome of the comparison.

11. The method as claimed in claim 10, further comprising adjusting one or more of an angular position and a distance between the one or more focusing elements (112), the first display device (104), the second display device (106), and the device under test (102) to provide maximum reflection of the first laterally shifted view and the second laterally shifted view towards a field of view of the first data acquisition unit (108) and a field of view of the second data acquisition unit (110), respectively.

12. The method as claimed in claim 10, further comprising:
recording the first laterally shifted view and the second laterally shifted view using the device under test (102) in an actual implementation environment;
processing the first laterally shifted view and the second laterally shifted view to include effect of one or more test vectors, wherein the test vectors comprise one or more of noise, distortion, color balance, white balance, chrominance noise, sharpness, tilt, velocity, vibrations, and other environmental effects; and
streaming the processed first laterally shifted view to the first display device (104) and the processed second laterally shifted view to the second display device (106).

13. The method as claimed in claim 10, further comprising:
emulating a virtual environment corresponding to the desired test scenario,
designing a virtual device under test positioned in the virtual environment;
generating a first virtual video stream corresponding to the first laterally shifted view and a second virtual video stream corresponding to the second laterally shifted view using the virtual device under test positioned in the virtual environment;
processing the first virtual video stream and the second virtual video stream to include effect of one or more test vectors, wherein the test vectors comprise one or more of noise, distortion, color balance, white balance, chrominance noise, sharpness, tilt, velocity, vibrations, and other environmental effects; and
streaming the processed first virtual video stream to the first display device (104) and the processed second virtual video stream to the second display device (106).

Description:
BACKGROUND

[0001] Embodiments of the present specification relate generally to validation techniques, and more particularly to a system and method for validation of a stereo vision system.
[0002] Computer stereo vision utilizes human binocular vision principles to perceive depth and three-dimensional (3D) structure from laterally shifted image views of the same scene acquired by horizontally displaced image sensors. Generally, stereo vision systems estimate depth information based on an amount of lateral shift (disparity map) between left and right image views. Stereo vision systems, therefore, are widely used in the fields of consumer electronics, industrial applications, science and technology, engineering, entertainment, automated systems, photogrammetry, and remote sensing, where relative depths of objects in a real environment may be used to implement various functions and features.
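By way of example only, the depth estimate referenced above follows the standard stereo triangulation relation for a rectified image pair with focal length $f$ (in pixels), baseline $B$, and measured disparity $d$:

$$Z = \frac{f \cdot B}{d}$$

For instance, with $f = 1400$ pixels, $B = 0.12$ m, and $d = 21$ pixels, the estimated depth is $Z = (1400 \times 0.12)/21 = 8$ m. This is a textbook relation recited here for context; the disclosure itself does not prescribe particular values.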
[0003] For example, a stereo camera system may be used in an automobile to determine depths of objects in a vicinity of the automobile for triggering various driver assistance functions. To that end, the stereo camera system may include two lenses, two horizontally displaced image sensors configured to acquire one or more images of a desired field of view, and an image processing unit capable of extracting depth information of objects present in the desired field of view. The depth information may then be used for triggering safety features such as emergency braking, adaptive cruise control, and blind spot detection. As triggers for various safety features depend upon accuracy of the depth information estimated by the stereo camera system, various functions and features of the stereo camera system need to be verified and validated in extensive detail prior to installation in an actual automobile.
[0004] Conventional validation methods entail validation testing of a stereo vision system in a real-world environment. However, such testing often fails to emulate all possible test vectors and test scenarios that can occur in the real-world environment. For example, testing a stereo vision system mounted within an automobile on actual roads may not allow for many safety-critical scenarios that could endanger the driver. Moreover, testing in real-world environments often fails to provide accurate repeatability of test scenarios. Frequently, such testing may also be delayed due to non-availability of the final target in the early phases of development, which may result in rework and associated expenses to rectify issues identified during implementation.
[0005] Generally, during a camera validation test, a display device exposed to a camera may be used to play a prerecorded video. Specifically, the display device is positioned such that a field of view of the camera is restricted to be within a bezel of the display device. Such a configuration allows the camera to behave as if it were capturing the video directly. Although this is a good solution for validating a mono camera, it may not be optimal for validating a stereo camera, because the two mono cameras that form the integrated stereo camera system have overlapping fields of view. Accordingly, another method for validation of stereo cameras may be employed, which involves providing the video feed separately to each camera to avoid overlap between the two camera views that may result in inaccurate processing of test data. However, this method requires dismantling the integrated stereo camera system into individual cameras. Furthermore, the method does not validate the entire stereo system in its original form, which may lead to discrepancies.
[0006] Yet another validation method includes bypassing output feeds of both cameras and providing acquired video stream data to an image processing unit. Here again, the method fails to validate the entire stereo system in its original form, which may lead to incorrect validation of various features and functions.
[0007] Therefore, it may be desirable to develop a framework to validate performance of a stereo vision system in its original form without needing an actual target system that includes the stereo vision system to be available for testing. Additionally, there is a need for a safer and more cost-effective stereo vision validation system that provides exhaustive test coverage and repeatability of the test scenarios in a laboratory environment.

BRIEF DESCRIPTION

[0008] In accordance with an aspect of the present disclosure, an emulation system for validating a device under test that includes a first data acquisition unit and a second data acquisition unit is disclosed. The system includes at least a first display device and a second display device that are configured to display a first laterally shifted view and a second laterally shifted view, respectively, corresponding to a desired test scenario configured for validating the device under test. The system also includes one or more focusing elements configured to restrict a field of view of the first data acquisition unit to be within a first designated display area where the first laterally shifted view is configured to be displayed on the first display device. The focusing elements are further configured to restrict a field of view of the second data acquisition unit to be within a second designated display area where the second laterally shifted view is configured to be displayed on the second display device. Further, the system includes a processing subsystem communicatively coupled to the device under test that is configured to determine depth information corresponding to one or more objects in the desired test scenario based on the first laterally shifted view and the second laterally shifted view, wherein the processing subsystem is configured to validate a function of the device under test based on a comparison of the determined depth information and a corresponding expected value.
[0009] According to one aspect, the one or more focusing elements include a mirror assembly. The mirror assembly includes at least a first mirror positioned relative to the first display device to reflect a visual displayed within the first designated area towards the field of view of the first data acquisition unit. The mirror assembly also includes at least a second mirror positioned relative to the second display device to reflect a visual displayed within the second designated area towards the field of view of the second data acquisition unit.
[0010] According to another aspect, the one or more focusing elements include a mirror and prism assembly. The mirror and prism assembly includes at least one prism, at least a first mirror positioned relative to the first display device to reflect a visual displayed within the first designated area towards the prism, and at least a second mirror positioned relative to the second display device to reflect a visual displayed within the second designated area towards the prism. The prism is positioned to reflect the visual reflected by the first mirror towards the field of view of the first data acquisition unit and the visual reflected by the second mirror towards the field of view of the second data acquisition unit using total internal reflection.
[0011] According to yet another aspect, the one or more focusing elements include a prism assembly. The prism assembly includes at least first, second, and third prisms, where the first and second prisms are positioned relative to the first display device and the second display device to reflect one or more visuals displayed within the first designated area and the second designated area, respectively, towards the third prism. Additionally, the third prism is positioned to reflect the visuals reflected by the first and second prisms towards the field of view of the first data acquisition unit and the second data acquisition unit, respectively, using total internal reflection.
[0012] According to a further aspect, the one or more focusing elements include an aluminum foil, panda film, synthetic films such as Mylar, Dureflect, polished anodized aluminum, acrylic mirror, astro-foil, prism, diamond, dielectric mirror, a suitable reflective element, or combinations thereof.
[0013] Further, the one or more focusing elements are configured to move in at least a horizontal direction and a vertical direction. One or more of an angular position and a distance between the one or more focusing elements, the first display device, the second display device, and the device under test is selected to provide maximum reflection of the first laterally shifted view and the second laterally shifted view towards the field of view of the first data acquisition unit and the second data acquisition unit, respectively.
[0014] According to one aspect, the processing subsystem is configured to process the first laterally shifted view and the second laterally shifted view to include effect of one or more test vectors. The first laterally shifted view and the second laterally shifted view are pre-recorded using the device under test, and the test vectors include one or more of noise, distortion, color balance, white balance, chrominance noise, sharpness, tilt, velocity, vibrations, and other environmental effects. The processing subsystem is also configured to stream the processed first laterally shifted view to the first display device and the processed second laterally shifted view to the second display device.
[0015] According to another aspect, the processing subsystem is configured to emulate a virtual environment corresponding to the desired test scenario, and design a virtual device under test positioned in the virtual environment. The processing subsystem is also configured to generate a first virtual video stream corresponding to the first laterally shifted view and a second virtual video stream corresponding to the second laterally shifted view using the virtual device under test positioned in the virtual environment. Further, the processing subsystem is configured to process the first virtual video stream and the second virtual video stream to include effect of one or more test vectors, where the test vectors comprise one or more of noise, distortion, color balance, white balance, chrominance noise, sharpness, tilt, velocity, vibrations, and other environmental effects. Additionally, the processing subsystem is configured to stream the processed first virtual video stream to the first display device and the processed second virtual video stream to the second display device.
[0016] According to a further aspect, the emulation system further includes a memory unit communicatively coupled to the processing subsystem. The memory unit is configured to store one or more test vectors corresponding to the desired test scenario, the first laterally shifted view, the second laterally shifted view, the determined depth information, a result of the comparison of the determined depth information and a corresponding expected value, or combinations thereof, for validating the function of the device under test.
[0017] According to an aspect of the present disclosure, a method for validating a device under test that comprises a first data acquisition unit and a second data acquisition unit is presented. The method includes displaying a first laterally shifted view corresponding to a desired test scenario on a first display device and a second laterally shifted view corresponding to the desired test scenario on a second display device. The method further includes projecting the first laterally shifted view displayed on the first display device towards the first data acquisition unit, and the second laterally shifted view displayed on the second display device towards the second data acquisition unit using one or more focusing elements. The one or more focusing elements are positioned relative to the first display device, the second display device, the first data acquisition unit, and the second data acquisition unit such that the focusing elements reflect the first laterally shifted view towards a field of view of the first data acquisition unit and the second laterally shifted view towards a field of view of the second data acquisition unit without overlap. Additionally, the method includes comparing depth information corresponding to one or more objects in the desired test scenario with a corresponding expected value, where the depth information is determined by the device under test based on the first laterally shifted view and the second laterally shifted view acquired by the first data acquisition unit and the second data acquisition unit, respectively. Further, the method includes validating a function of the device under test based on an outcome of the comparison.
[0018] Moreover, the method includes adjusting one or more of an angular position and a distance between the one or more focusing elements, the first display device, the second display device, and the device under test to provide maximum reflection of the first laterally shifted view and the second laterally shifted view towards a field of view of the first data acquisition unit and a field of view of the second data acquisition unit, respectively.
[0019] According to certain aspects, the method includes recording the first laterally shifted view and the second laterally shifted view using the device under test in an actual implementation environment. The method also includes processing the first laterally shifted view and the second laterally shifted view to include effect of one or more test vectors, where the test vectors comprise one or more of noise, distortion, color balance, white balance, chrominance noise, sharpness, tilt, velocity, vibrations, and other environmental effects. The method further includes streaming the processed first laterally shifted view to the first display device and the processed second laterally shifted view to the second display device.
[0020] According to one aspect, the method includes emulating a virtual environment corresponding to the desired test scenario, and designing a virtual device under test positioned in the virtual environment. The method further includes generating a first virtual video stream corresponding to the first laterally shifted view and a second virtual video stream corresponding to the second laterally shifted view using the virtual device under test positioned in the virtual environment. The method also includes processing the first virtual video stream and the second virtual video stream to include effect of one or more test vectors, where the test vectors comprise one or more of noise, distortion, color balance, white balance, chrominance noise, sharpness, tilt, velocity, vibrations, and other environmental effects. Additionally, the method includes streaming the processed first virtual video stream to the first display device and the processed second virtual video stream to the second display device.

BRIEF DESCRIPTION OF THE FIGURES

[0021] These and other features, aspects, and advantages of the claimed subject matter will become better understood when the following detailed description is read with reference to the accompanying drawings, in which:
[0022] FIG. 1 illustrates a schematic view of an embodiment of an emulation system configured to validate functionality of a device under test (DUT) such as a stereo vision system;
[0023] FIG. 2A illustrates a graphical representation of an exemplary front view of the focusing elements including the mirror assembly depicted in FIG. 1;
[0024] FIG. 2B illustrates a graphical representation of an exemplary rear view of the focusing elements including the mirror assembly depicted in FIG. 1;
[0025] FIG. 2C illustrates a graphical representation of an exemplary rear view of the DUT fixture depicted in FIG. 1;
[0026] FIG. 3 illustrates an exemplary top view of an alternative embodiment of the emulation system of FIG. 1 including a mirror and prism assembly in lieu of the focusing elements depicted in FIG. 1;
[0027] FIG. 4 illustrates an exemplary isometric view of yet another embodiment of the emulation system of FIG. 1 including a prism assembly in lieu of the focusing elements depicted in FIG. 1;
[0028] FIG. 5 illustrates a flow diagram depicting an exemplary method for validating functionality of a stereo vision system using a pre-recorded test scenario, according to an embodiment of the present specification;
[0029] FIG. 6 illustrates an image depicting an example of a pre-recorded test scenario that may be validated using the system (100) of FIG. 1;
[0030] FIG. 7 illustrates a flow diagram depicting an exemplary method for validating functionality of a stereo vision system using a virtual test scenario, according to an embodiment of the present specification; and
[0031] FIG. 8 illustrates an image depicting an example of a virtual test scenario described with reference to FIG. 7.

DETAILED DESCRIPTION

[0032] The following description presents exemplary systems and methods for validating functionality of a stereo vision system in a laboratory environment. Particularly, the embodiments described herein disclose an exemplary emulation system that employs a novel system configuration that projects laterally shifted views displayed on two separate display devices towards two data acquisition units in the stereo vision system without any overlap for validating accuracy of depth information determined by the stereo vision system. The emulation system compares the determined depth information and/or a function or feature that is triggered based on the depth information with an expected value to validate performance of the stereo vision system. Although the embodiments described herein disclose a validation system for a stereo camera system for use in an automobile, in certain embodiments, the validation system may be used to validate functions and features of any stereo vision system. For example, certain embodiments of the present system may be used for the validation of microscopes, binoculars, robot navigation systems, aerial survey devices, mobile devices with built-in 3D cameras, or any other equipment that utilizes stereo vision techniques. An exemplary framework that is suitable for practicing various implementations of the present system and method is discussed in detail with reference to FIGs. 1-2.
[0033] FIG. 1 depicts an exemplary emulation system (100) configured to validate functionality of a stereo vision system (DUT (102)) in a laboratory environment. To that end, the emulation system (100) includes a left display device such as a monitor (104), and a right display device such as a monitor (106) for displaying a pre-recorded or a virtual video stream including laterally shifted views of a field of view (FOV) in a desired test scenario. Typically, the DUT (102) includes at least two data acquisition units, such as a left lens (108) and a right lens (110), configured to capture left and right laterally shifted views of a real 3D world scene in 2D format.
[0034] In one embodiment, the left lens (108) and the right lens (110) are exposed to the monitors (104) and (106) that are used to play the left and right laterally shifted video streams, respectively. The monitors (104) and (106), for example, may include a cathode ray tube (CRT) display, light-emitting diode (LED) display, electroluminescent display (ELD), electronic paper/E Ink display, plasma display panel (PDP), and/or liquid crystal display (LCD). Further, the monitors (104) and (106) may include a high-performance addressing (HPA) display, thin-film transistor (TFT) display, swept-volume display, varifocal mirror display, emissive volume display, laser display, holographic display, light field display, or any other suitable display device.
[0035] In certain embodiments, the lenses (108) and (110) are positioned with respect to the monitors (104) and (106) such that the FOV of each of the lenses (108) and (110) is restricted within a bezel of the corresponding monitor (104) or (106), causing the DUT (102) to treat the visuals captured by the lenses (108) and (110) as originating from a real environment. Particularly, a relative configuration of the monitors (104) and (106) and the lenses (108) and (110) is selected such that a 2D image is formed over a sensing region of each individual lens (108) and (110), similar to the 2D images formed during an actual on-road drive in a real environment.
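By way of a simple illustrative sketch (the dimensions below are hypothetical and not taken from the specification), the placement that makes a display's active area exactly fill a lens's horizontal field of view follows from basic trigonometry:

```python
import math

def display_distance(display_width_m: float, hfov_deg: float) -> float:
    """Distance at which a display of the given width exactly fills a
    camera's horizontal field of view (pinhole approximation)."""
    return display_width_m / (2.0 * math.tan(math.radians(hfov_deg) / 2.0))

# Hypothetical figures: a 0.53 m wide monitor and a lens with a 50-degree HFOV.
print(f"Place the display about {display_distance(0.53, 50.0):.2f} m away")  # ~0.57 m
```

Placing the monitor at or slightly inside this distance keeps the bezel outside the lens's FOV, satisfying the restriction described above.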
[0036] However, during validation testing, both lenses (108) and (110) cannot be exposed to the same video stream because the visuals captured by the lenses (108) and (110) need to be laterally shifted by a certain amount for depth determination. In conventional stereo camera systems, the distance between the two camera lenses, referred to as the baseline, is typically only a few centimeters. Therefore, when using a single monitor, the FOVs of the left and right lenses (108) and (110) may overlap, which, in turn, may cause overlapping information corresponding to the left and right laterally shifted views to be erroneously captured by the left and right lenses (108) and (110). Accordingly, as the two laterally shifted video streams cannot be played on a single monitor without acquisition of overlapping information, two monitors (104) and (106) are used for displaying the individual left and right video streams to be captured by the two lenses (108) and (110).
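A rough calculation illustrates how severe the overlap is for such small baselines (the figures below are hypothetical and for illustration only):

```python
import math

def fov_overlap_fraction(baseline_m: float, hfov_deg: float, dist_m: float) -> float:
    """Fraction of one lens's footprint on a plane dist_m away that is
    also visible to the other lens (parallel optical axes assumed)."""
    span = 2.0 * dist_m * math.tan(math.radians(hfov_deg) / 2.0)
    return max(0.0, 1.0 - baseline_m / span)

# Hypothetical figures: 12 cm baseline, 50-degree HFOV, monitor 0.6 m away.
print(f"{fov_overlap_fraction(0.12, 50.0, 0.6):.0%} of the footprints coincide")  # ~79%
```

With roughly four-fifths of the two footprints coinciding, a single monitor cannot present distinct left and right views, hence the two-monitor arrangement.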
[0037] To that end, the system (100) includes one or more focusing elements (112) configured to reflect the video streams being displayed on the monitors (104) and (106) towards the respective lens (108) or (110) of the DUT (102). In certain embodiments, the system (100) may include a fixture (114) that supports the DUT (102) and may be used to adjust a height, position, and/or orientation of the DUT (102) to achieve a desired FOV for acquiring information reflected by the focusing elements (112). In one embodiment, for example, the focusing elements (112) may include a V-shaped mirror assembly (hereinafter referred to as mirror assembly (112)) positioned in front of the DUT (102) to restrict a FOV of the left lens (108) to scenes displayed within a first designated display area such as within a bezel of the left monitor (104). Similarly, the V-shaped mirror assembly (112) restricts a FOV of the right lens (110) to scenes displayed within a second designated display area such as within a bezel of the right monitor (106). To that end, the mirror assembly (112) includes a right mirror (116) and a left mirror (118) disposed in holders that may be mounted on a rigid platform support (120). Particularly, the mirror holders may be capable of moving in both horizontal and vertical directions. The mirrors (118) and (116) are placed such that the video stream being played on the respective monitor (104) or (106) is reflected towards the FOV of the corresponding lens (108) or (110) without any loss in visual information. Specifically, a height and/or angular position of the monitors (104) and (106), the mirrors (118) and (116), the lenses (108) and (110), and distances between each of the monitors (104) and (106), mirrors (118) and (116), and the lenses (108) and (110) may be selected to provide maximum reflection of the visuals displayed on the monitors (104) and (106) towards the FOV of the individual lenses (108) and (110) for validating functionality of the DUT (102) in the desired test scenario. Certain exemplary components that allow for adjusting a position and/or orientation of the mirror assembly (112) are described in greater detail with reference to FIGs. 2A, 2B, and 2C.
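A minimal geometric sketch of this alignment, assuming flat mirrors and hypothetical coordinates (the specification prescribes no numeric layout), uses the law of reflection: the mirror normal must bisect the directions from the mirror towards the display and towards the lens:

```python
import numpy as np

def mirror_normal(display_pos, mirror_pos, lens_pos):
    """Unit normal a flat mirror at mirror_pos must have so that light from
    display_pos is reflected towards lens_pos (law of reflection)."""
    to_display = display_pos - mirror_pos
    to_lens = lens_pos - mirror_pos
    bisector = (to_display / np.linalg.norm(to_display)
                + to_lens / np.linalg.norm(to_lens))
    return bisector / np.linalg.norm(bisector)

# Hypothetical layout (meters): left monitor off to the side, lens straight ahead.
n = mirror_normal(np.array([-0.60, 0.0, 0.40]),
                  np.array([0.00, 0.0, 0.40]),
                  np.array([0.00, 0.0, 0.90]))
print(n)  # ~[-0.707, 0, 0.707], i.e. the mirror sits at 45 degrees
```

The adjustment knobs described with reference to FIGs. 2A-2C effectively let an operator realize such a mirror orientation mechanically.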
[0038] FIG. 2A illustrates a graphical representation of an exemplary front view (200) of the mirror assembly (112) depicted in FIG. 1. The mirror assembly (112) includes two knobs (202) and (204) used to adjust a horizontal distance (206) between the two mirrors (118) and (116).
[0039] Further, FIG. 2B illustrates a graphical representation of an exemplary rear view (208) of the mirror assembly (112) depicted in FIG. 1. The mirror assembly (112) may include an adjustment means such as knobs (210) and (212) that may be used to adjust both vertical and angular position of the mirrors (118) and (116), respectively. The vertical and angular adjustment of the mirrors (118) and (116) are indicated in FIG. 2B via reference numerals (214) and (216), respectively.
[0040] FIG. 2C illustrates a graphical representation of an exemplary rear view (218) of the fixture (114) depicted in FIG. 1 that is configured to support the DUT (102). In one embodiment, the fixture (114) includes a knob (220) for adjusting a height of the DUT (102), as indicated by the reference numeral (222), for setting a FOV for the DUT (102) for a desired test scenario.
[0041] Although, FIGs. 1, 2A, 2B, and 2C depict a mirror assembly (112) used for reflecting the visuals displayed on the monitors (104) and (106) towards the FOV of the individual lenses (108) and (110), in alternative embodiments, the system (100) may employ a different set of focusing elements (112). For example, focusing elements (112) such as an aluminum foil, panda film, synthetic films such as Mylar, Dureflect, polished anodized aluminum, acrylic mirror, astro-foil, prism, diamond, dielectric mirror, or any other suitable reflective element may be used in lieu of the mirror assembly (112) in the system (100) of FIG. 1. Certain exemplary configurations of the system (100) of FIG. 1 using alternative sets of focusing elements are depicted in FIGs. 3 and 4.
[0042] Particularly, FIG. 3 depicts an exemplary embodiment of the system (100) of FIG. 1 including a mirror and prism assembly (300) in lieu of the mirror assembly (112) depicted in FIG. 1. In the embodiment shown in FIG. 3, the system (100) includes a display device (302) configured to stream a pre-recorded or virtual laterally shifted left view corresponding to a desired test scenario. Further, the system (100) includes a display device (304) for streaming a pre-recorded or virtual laterally shifted right view corresponding to the desired test scenario. In the present embodiment, the system (100) includes mirror setups (306) and (308) that are positioned to reflect the light rays from the left display device (302) and right display device (304), respectively, towards a prism (310). The prism (310), in turn, reflects the light rays towards the corresponding lens of the stereo vision system via total internal reflection. Specifically, the prism (310) causes the light rays from the left display device (302) to be reflected towards the left data acquisition unit (312) of the stereo vision system (314) and the light rays from the right display device (304) to be reflected towards the right data acquisition unit (316). As prisms have higher reflectivity compared to mirrors, more visual information is captured using the mirror and prism assembly (300) as compared to the mirror assembly (112) of FIG. 1.
[0043] Further, FIG. 4 depicts an exemplary isometric view of an alternative embodiment of the system (100) of FIG. 1 including a prism assembly (400) in lieu of the focusing elements (112) of FIG. 1. In the embodiment depicted in FIG. 4, the system (100) includes a display device (402) for streaming a pre-recorded or virtual laterally shifted left view, and a display device (404) for streaming a pre-recorded or virtual laterally shifted right view corresponding to a desired test scenario. The prism assembly (400) includes prisms (406) and (408) that are positioned so as to reflect the light rays from the left display device (402) and right display device (404), respectively, towards a prism (410) using total internal reflection. The prism (410), in turn, reflects these light rays to the corresponding lens of the stereo vision system using total internal reflection. Specifically, the reflected light ray from left display device (402) is reflected towards the left data acquisition unit (412) and the reflected light ray from the right display device (404) is reflected towards the right data acquisition unit (414) of the stereo vision system (416). Since prisms have higher reflectivity compared to mirrors, more visual information is captured using the prism assembly (400) as compared to the mirror assembly (112) of FIG. 1.
[0044] It may be noted that the focusing elements shown in FIGs. 1-4 are merely exemplary. As previously noted, many other such focusing elements may be used to reflect the visuals displayed on the monitors (104) and (106) towards corresponding lenses (108) and (110) without overlapping in order to validate performance of the DUT (102).
[0045] Returning to FIG. 1, the desired test scenario may be validated using a pre-recorded test scenario and/or a virtual test scenario using an emulated environment. In both cases, the video feeds input to the monitors (104) and (106) are streamed such that the visuals begin displaying simultaneously on both monitors (104) and (106) without delay. In the case of a pre-recorded test scenario, the input feeds to the monitors (104) and (106) are recorded video streams captured using the DUT (102) from an actual environment for the desired test scenario. Based on specific requirements and/or aspects of the desired test scenario, ambient and/or external factors, such as noise, distortion, color balance, white balance, chrominance noise, sharpness, tilt, vibrations, and other environmental effects, may be added to the streaming video before playing it on the monitors (104) and (106). Thus, different environmental conditions are validated in a laboratory environment to significantly improve test coverage.
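By way of a hedged illustration of this pre-processing step (the function name and parameter values are hypothetical; the specification does not prescribe an implementation), per-frame effects such as sensor noise, defocus blur, and a white-balance shift might be injected as follows:

```python
import numpy as np
import cv2  # OpenCV, assumed available for frame processing

def apply_test_vectors(frame: np.ndarray,
                       noise_sigma: float = 4.0,
                       blur_kernel: int = 5,
                       wb_gains=(1.05, 1.00, 0.95)) -> np.ndarray:
    """Inject hypothetical environmental effects into one BGR video frame."""
    out = frame.astype(np.float32)
    out *= np.array(wb_gains, dtype=np.float32)           # white-balance shift
    out += np.random.normal(0.0, noise_sigma, out.shape)  # additive sensor noise
    out = np.clip(out, 0, 255).astype(np.uint8)
    return cv2.GaussianBlur(out, (blur_kernel, blur_kernel), 0)  # defocus/haze
```

Deterministic effects such as the white-balance shift would typically be applied identically to the left and right streams so that the disparity between them is preserved, while sensor noise may plausibly be drawn independently per camera.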
[0046] In the case of a virtual test scenario, an emulation environment is utilized to create a virtual test environment. In one embodiment, the system (100) may include a processing subsystem (122), which, in operative association with a memory unit (124), is configured to determine relevant parameters and generate one or more test vectors based on specific requirements and/or aspects of the desired test scenario. To that end, the processing subsystem (122), for example, may include one or more general-purpose processors, specialized processors, graphical processing units, microprocessors, programmable logic arrays, field-programmable gate arrays, and/or other suitable computing devices.
[0047] In certain embodiments, the processing subsystem (122) may be configured to emulate numerous test vectors and parameters to recreate a large number of test scenarios that may be unsafe or difficult to recreate during on-road testing. For example, one or more functions of an automobile may be repeatedly validated in the presence of different traffic, network, environmental, and terrain conditions during a series of tests by simply modifying one or more parameters of interest without extensive effort and expense. In certain embodiments, the memory unit (124) stores the parameters and vectors corresponding to different test scenarios in a scenario configuration file. To that end, the memory unit (124), for example, includes Random Access Memory (RAM), Read-Only Memory (ROM), volatile memory, non-volatile memory, hard drives, CD ROMs, DVDs, flash drives, solid-state drives, and any other known physical storage media. In one embodiment, the processing subsystem (122) may be configured to store each new test case configuration in a corresponding scenario configuration file stored in the memory unit (124). The scenario configuration file may store a common set of test vectors that may be selected and/or customized to quickly set up a predefined test scenario, for example, for validating performance of a lane departure warning system. Accordingly, the scenario configuration file may store test vectors indicative of vehicle characteristics, ambient noise levels, sources of signal distortion, stored roadway information, and terrain data that may be selected and/or customized to set up the test scenario for validating the lane departure warning system. The processing subsystem (122) may identify and retrieve the scenario configuration file associated with one or more test scenarios, when needed, to quickly set up the test scenarios for testing and validation studies.
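A scenario configuration file of the kind described might be organized as follows (a hypothetical sketch; the field names are illustrative and not taken from the specification):

```python
import json

lane_departure_scenario = {
    "scenario_id": "lane_departure_warning_rain",
    "environment": {"terrain": "highway", "light_lux": 400, "rain": True},
    "vehicle": {"speed_kmph": 80, "lateral_drift_mps": 0.3},
    "test_vectors": {"noise_sigma": 6.0, "distortion": "barrel",
                     "white_balance": [1.1, 1.0, 0.9], "vibration_hz": 12},
    "expected": {"warning_within_s": 1.5},
}

# Persist alongside other scenarios so a test run can retrieve it by id.
with open("scenario_lane_departure.json", "w") as f:
    json.dump(lane_departure_scenario, f, indent=2)
```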
[0048] Furthermore, in certain embodiments, the system (100) may be enclosed in a chamber (not shown) to prevent unexpected external interference during the validation studies. Thus, the system (100) may be used for safe, cost-effective, and repeatable validation of a stereo vision system. An exemplary method for validation of a stereo vision system using the system (100) of FIG. 1 is described in greater detail with reference to FIG. 5.
[0049] FIGs. 5-8 describe exemplary methods for validating functionality of a stereo vision system using left and right laterally shifted video streams. The order of the steps of the exemplary methods may change in practical implementation. Additionally, certain steps may be deleted, modified, or added to the exemplary methods. Moreover, the steps included in the exemplary methods may be performed sequentially, or in a distributed manner. Further, the steps may involve additional components or omit certain components in practical implementation. For clarity, the exemplary methods are described in FIGs. 5 and 7 with reference to elements of the system (100) of FIG. 1.
[0050] Particularly, FIG. 5 depicts a flow chart (500) illustrating an exemplary method for validating functionality of a stereo vision system using pre-recorded left and right laterally shifted video streams. At step (502), pre-recorded video streams of the left and right views corresponding to the test scenario are provided. Further, at step (504), the processing subsystem (122) adds requisite environmental and scenario configuration parameters to the pre-recorded left and right laterally shifted video streams. For example, in the case of an automotive application where a traffic sign recognition feature is to be validated, the left and right laterally shifted video streams corresponding to the test scenario may first be recorded using the actual DUT or any video recording device having physical parameters identical to those of the video recording device used in the DUT. During the test setup, the position of the recording device is adjusted to exactly match the position of the actual DUT in the final target. The left and right laterally shifted video streams may then be processed to include effects of various environmental conditions such as different light intensities, blurred views, and other environmental disturbances. Further, at step (506), the processed left and right laterally shifted video streams are streamed to the monitors (104) and (106), respectively. At step (508), the mirror assembly (112) reflects the displayed left and right video streams towards the left and right lenses (108) and (110) of the DUT (102). The lenses (108) and (110) acquire the reflected left and right laterally shifted video streams for determining the lateral shift, and in turn, the depth information corresponding to one or more objects present in the test scenario. At step (510), the processing subsystem (122) receives the depth information determined by the DUT (102). Subsequently, at step (512), the estimated depth information and/or a function or feature that is triggered based on the depth information is compared with an expected value to validate performance of the stereo vision system. If the estimated depth information matches the expected value, performance of the DUT in the test scenario is considered to have passed the validation criteria, as denoted by step (514). Alternatively, if the estimated depth information does not match the expected value, performance of the DUT in the test scenario is considered to have failed the validation criteria at step (516).
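The comparison at steps (512)-(516) amounts to checking the DUT's estimate against the known ground truth, typically within a tolerance. A minimal sketch follows; the 5% relative tolerance is a hypothetical choice, not one fixed by the specification:

```python
def validate_depth(estimated_m: float, expected_m: float,
                   tolerance: float = 0.05) -> bool:
    """Pass if the DUT's depth estimate lies within a relative tolerance
    of the ground-truth depth for the test scenario."""
    return abs(estimated_m - expected_m) <= tolerance * expected_m

results = [validate_depth(e, g) for e, g in [(9.8, 10.0), (25.9, 24.0)]]
print(results)  # [True, False] -> the second case fails the validation criteria
```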
[0051] FIG. 6 illustrates an image (600) depicting an example of a pre-recorded test scenario that may be validated using the system (100) of FIG. 1. The test scenario, for example, corresponds to validating a pedestrian detection algorithm. In an exemplary implementation, a DUT, such as a stereo vision system, is placed on a test bench (not shown in FIG. 6) that is similar to the system (100) of FIG. 1. The test bench may include two display devices (similar to monitors (104) and (106) of FIG. 1) streaming the pre-recorded test scenario. In order to validate performance of the DUT, the desired test scenario may be recorded using the DUT or a recording device having physical parameters identical to those of the DUT. Specifically, the DUT may record a test sequence involving a pedestrian moving towards and away from the DUT in predetermined distance steps such that an actual distance between the pedestrian and the DUT can be calculated. Particularly, left and right laterally shifted videos of the movement of the pedestrian may be recorded by the left and right data acquisition units in the DUT. It may be desirable to test effectiveness of the pedestrian detection algorithm in different prevailing conditions. Accordingly, noise, distortion, and other environmental conditions may be added to the video as required to emulate conditions in a real-world environment. The processed left and right laterally shifted videos may then be streamed to the respective left and right display devices. The DUT may process the left and right laterally shifted videos to determine the depth or distance of the pedestrian from a reference position relative to the DUT. This depth information may then be used to trigger one or more driver assistance features, such as issuing a warning, applying brakes, or switching to an automated defensive driving mode. The determined depth information may be compared with the actual distance to ascertain accuracy of the depth calculation by the DUT. Additionally, the time and nature of the triggered functionality may be compared with an expected outcome of the test scenario in different test conditions to ascertain the effectiveness of the pedestrian detection algorithm implemented by the DUT.
[0052] Similarly, FIG. 7 illustrates a flow chart (700) depicting another exemplary method for validating functionality of a stereo vision system such as the DUT (102) of FIG. 1 in a laboratory environment using a virtual test scenario. At step (702), the processing subsystem (122) may create an emulated environment in which a virtual stereo vision system identical to the DUT (102) is modeled in software and placed inside the virtual environment. Further, at step (704), the virtual stereo vision system generates and streams left and right laterally shifted videos corresponding to a desired test scenario in the emulated environment to the monitors (104) and (106), respectively. At step (706), the mirror assembly (112) projects the laterally shifted videos playing on the monitors (104) and (106) towards the corresponding lens in the actual DUT (102). At step (708), the DUT (102) receives and processes these laterally shifted videos to determine corresponding depth information. At step (710), the estimated depth information and/or a function or feature that is triggered based on the depth information is compared with an expected value to validate performance of the stereo vision system. If the estimated depth information matches the expected value, performance of the DUT in the test scenario is considered to have passed the validation criteria, as denoted by step (712). Alternatively, if the estimated depth information does not match the expected value, performance of the DUT in the test scenario is considered to have failed the validation criteria at step (714). Use of the virtual test scenario allows for testing of multiple combinations of test vectors that may not be safely and accurately emulated in a real-world environment. FIG. 8 illustrates an image (800) depicting an example of the virtual test scenario described with reference to FIG. 7.
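As a minimal sketch of how such a virtual rig produces the two laterally shifted views (a pinhole camera model with hypothetical parameters), each point in the emulated scene is projected through two camera centers separated by the baseline:

```python
import numpy as np

def project_stereo(points_xyz: np.ndarray, f_px: float = 1400.0,
                   baseline_m: float = 0.12):
    """Project scene points (N, 3) into left/right virtual pinhole cameras
    whose optical centers are offset by +/- baseline/2 along the x-axis."""
    half_b = baseline_m / 2.0
    x, y, z = points_xyz[:, 0], points_xyz[:, 1], points_xyz[:, 2]
    left = np.stack([f_px * (x + half_b) / z, f_px * y / z], axis=1)
    right = np.stack([f_px * (x - half_b) / z, f_px * y / z], axis=1)
    return left, right  # disparity = left_u - right_u = f * B / Z

pts = np.array([[0.0, 0.0, 8.0]])  # one point, 8 m ahead of the rig
L, R = project_stereo(pts)
print(L[0, 0] - R[0, 0])  # 21.0 px disparity, consistent with Z = f*B/d above
```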
[0053] Embodiments of the present systems and methods, thus, provide an efficient framework for validating functionality of a stereo vision system in a laboratory environment. Particularly, the embodiments described herein disclose an exemplary emulation system that employs a novel system configuration that projects pre-recorded or virtual laterally shifted views displayed on two separate display devices towards two data acquisition units in the stereo vision system without any overlap for determining depth information. The emulation system compares the determined depth information and/or a function or feature that is triggered based on the depth information with an expected value to validate performance of the stereo vision system. Particularly, the system aids in validating performance of a stereo vision system in its original form, without dismantling the stereo vision system or requiring an actual target system that includes the stereo vision system to be available for testing. Additionally, the system provides a safer and more cost-effective validation system that allows for exhaustive coverage and repeatability of the test scenarios in a laboratory environment.
[0054] Although specific features of various embodiments of the present systems and methods may be shown in and/or described with respect to one drawing and not in others, this is for convenience only. It is to be understood that the described features, structures, and/or characteristics, and any subset thereof, may be combined and/or used interchangeably in any suitable manner in the various embodiments.
[0055] While only certain features of the present systems and methods have been illustrated and described herein, many modifications and changes will occur to those skilled in the art. It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes as fall within the true spirit of the invention.

Documents

Application Documents

# Name Date
1 Form5_As Filed_14-11-2016.pdf 2016-11-14
2 Form3_As Filed_14-11-2016.pdf 2016-11-14
3 Form26_General Power Of Attorney_14-11-2016.pdf 2016-11-14
4 Form2 Title Page_Complete_14-11-2016.pdf 2016-11-14
5 Form18_Normal Request_14-11-2016.pdf 2016-11-14
6 Drawings_As Filed_14-11-2016.pdf 2016-11-14
7 Description Complete_As Filed_14-11-2016.pdf 2016-11-14
8 Claims_As Filed_14-11-2016.pdf 2016-11-14
9 Abstract_As Filed_14-11-2016.pdf 2016-11-14
10 abstract 201647039843 .jpg 2016-12-08
11 Correspondence by Applicant_Form1-Form5-general Power of Attorney_17-04-2017.pdf 2017-04-17
12 Form1_After Filing_17-04-2017.pdf 2017-04-17
13 Form26_General Power of Attorney_17-04-2017.pdf 2017-04-17
14 Form5_After Filing_17-04-2017.pdf 2017-04-17
15 201641038802-CLAIMS [13-10-2021(online)].pdf 2021-10-13
16 201641038802-ENDORSEMENT BY INVENTORS [13-10-2021(online)].pdf 2021-10-13
17 201641038802-FER_SER_REPLY [13-10-2021(online)].pdf 2021-10-13
18 201641038802-FER.pdf 2021-10-17
19 201641038802-PatentCertificate11-08-2023.pdf 2023-08-11
20 201641038802-IntimationOfGrant11-08-2023.pdf 2023-08-11

Search Strategy

1 SearchstrategyE_07-04-2021.pdf
