
Systems, Methods, And Devices For Head Pose Determination

Abstract: Estimating a head pose may include obtaining sensor data corresponding to a head and at least a portion of the body of a human subject and determining an estimate of a three-dimensional (3D) body pose using the obtained sensor data. The estimation can further include generating a first rendering of at least the human subject’s head using the obtained sensor data and generating a plurality of head pose sample data sets by applying the estimated 3D body pose to a head-pose generative model. Further, the head pose estimation can include generating a plurality of second renderings respectively from each of the plurality of head pose sample data sets; determining which of the plurality of second renderings is closest to the first rendering; and selecting the second rendering determined to be closest to the first rendering.
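The analysis-by-synthesis loop described in the abstract can be sketched as follows. This is an illustrative sketch only: the Gaussian sampler standing in for the head-pose generative model, the toy renderer, and all function names are assumptions, not the disclosed implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_head_poses(body_pose, n_samples):
    # Stand-in for the head-pose generative model: draw candidate head
    # orientations (yaw, pitch, roll) around the estimated body heading.
    # The disclosure's model is learned; this Gaussian is purely illustrative.
    return body_pose[:3] + rng.normal(scale=0.2, size=(n_samples, 3))

def render_head(pose):
    # Placeholder renderer: maps a pose vector to a tiny 2x2 "image".
    # A real system would rasterize a 3D head model instead.
    yaw, pitch, roll = pose
    return np.array([[np.sin(yaw), np.cos(pitch)],
                     [np.sin(roll), np.cos(yaw)]])

def estimate_head_pose(first_rendering, body_pose, n_samples=64):
    # Render every sampled head pose and keep the one whose rendering
    # is closest (here, in pixel-wise L2 distance) to the observation.
    candidates = sample_head_poses(body_pose, n_samples)
    distances = [np.linalg.norm(first_rendering - render_head(p))
                 for p in candidates]
    best = int(np.argmin(distances))
    return candidates[best], distances[best]
```

Given an observed rendering of the subject's head and an estimated body pose, `estimate_head_pose` returns the sampled head pose whose synthetic rendering best matches the observation, together with the residual distance.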


Patent Information

Application #
Filing Date
18 December 2020
Publication Number
25/2022
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Parent Application

Applicants

Intel Corporation
2200 Mission College Boulevard, Santa Clara, California 95054, USA

Inventors

1. Parual DATTA
23-56P, Devarabeesanahalli Sarjapur Outer Ring Road, Bellandur, Bangalore, Karnataka 560037
2. Nilesh AHUJA
7717 Rainbow Drive, Cupertino, CA 95014
3. Javier FELIP LEON
474 NW Tork Place, Hillsboro, OR 97006

Specification

Claims

1. A non-transitory computer-readable medium comprising instructions that, when executed by at least one processor, cause the at least one processor to:
obtain sensor data corresponding to a head and at least a portion of the body of a human subject;
determine an estimate of a body pose using the obtained sensor data;
generate a first rendering of at least the human subject’s head using the obtained sensor data;
generate at least one head pose data set with a generative model based on the estimated body pose;
generate at least one second rendering using the at least one head pose data set; and
determine at least one likeness factor between the first and the at least one second rendering.
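Claim 1 leaves the "likeness factor" open. As one hedged illustration (not the claimed method), a zero-mean normalized cross-correlation between the pixel values of the two renderings yields a score in [-1, 1]; the function name is an assumption for this sketch.

```python
import numpy as np

def likeness_factor(rendering_a, rendering_b):
    # Illustrative likeness factor: zero-mean normalized cross-correlation
    # of the two renderings' pixel values. 1.0 means identical up to
    # brightness/contrast; -1.0 means inverted.
    a = rendering_a.astype(float).ravel()
    b = rendering_b.astype(float).ravel()
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0
```

A score of this kind is invariant to global brightness and contrast changes between the first and second renderings, which is one reason correlation-style metrics are commonly used for image comparison.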
Description

Technical Field
[0001] Various aspects of this disclosure generally relate to head pose estimation/determination.

Background
[0002] Head-poses are a strong cue for inferring a person's intent and hence are useful in several domains or applications, such as autonomous driving (including pedestrian intent determination), human-robot collaboration, and social interactions (e.g., tracking student focus, attention, and intent in adaptive learning environments). Concerning autonomous vehicles, head pose estimates can be used for determining or predicting pedestrian intent in L1 to L5 autonomy vehicles and for monitoring driver attention in L2/L3/L4/ADAS vehicles. In the case of pedestrians, it is well known that pedestrian gaze is strongly indicative of pedestrian intent or heading. In road-crossing situations, for example, when a vehicle is far away, pedestrians look at the environment or the road space ahead of the vehicle; as the vehicle approaches, however, their gaze gradually shifts to the vehicle's windshield. Head-pose can be used as an effective proxy for gaze, particularly when the pedestrian is far from the vehicle and it is not possible to estimate gaze directly.
[0003] Accurately determining or estimating a person's head-pose is a challenging problem. Existing solutions rely on having access to a relatively high-resolution, frontal capture of a subject's face and assume the availability of facial landmarks. Hence, these solutions fail in various real-world scenarios in which the subject's head or face is not prominently visible, which may occur for several reasons. For example, a human subject's face might be partially occluded so that at least some facial features are obscured. As another example, a subject's head may be oriented in an atypical position, or the subject's face may not fully face the camera. In various situations, a human subject might not be very close to the camera, causing the face to occupy a relatively small area within the captured image. Another difficult case arises when the head or face is viewed at an odd angle, rendering different levels of detail for head or face features. In such cases, it may not be possible to extract sufficient head or facial detail even if the subject is directly facing the camera.
[0004] Further, lighting conditions in the real world can vary. The subject's environment may not illuminate the subject's face sufficiently well: the overall scene brightness might be low, shadows may be cast on the head region in an otherwise bright scene, the contrast may be high, or the camera sensor may perform poorly, e.g., against full sunlight.

Brief Description of the Drawings
[0005] In the drawings, like reference characters generally refer to the same parts throughout the different views. The drawings are not necessarily to scale, emphasis instead generally being placed upon illustrating the principles of the invention. In the following description, various embodiments of the invention are described with reference to the following drawings, in which:
FIG. 1 shows a diagram of an exemplary process for estimating a head pose in accordance with aspects of the present disclosure.
FIG. 2 shows a system according to exemplary aspects of the present disclosure.
FIG. 3 shows a method according to exemplary aspects of the present disclosure.

Description
[0006] The following detailed description refers to the accompanying drawings that show, by way of illustration, specific details and embodiments in which the invention may be practiced.
[0007] The word "exemplary" is used herein to mean "serving as an example, instance, or illustration". Any embodiment or design described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments or designs.
[0008] As used herein, unless otherwise specified, the use of the ordinal adjectives "first", "second", "third", etc., to describe a common object merely indicates that different instances of like objects are being referred to, and is not intended to imply that the objects so described must be in a given sequence, either temporally, spatially, in ranking, or in any other manner.
[0009] For the purposes of the present disclosure, the phrase "A and/or B" means (A), (B), or (A and B). For the purposes of the present disclosure, the phrase "A, B, and/or C" means (A), (B), (C), (A and B), (A and C), (B and C), or (A, B, and C). Reference to "one embodiment/aspect" or "an embodiment/aspect" in the present disclosure means that a particular feature, structure, or characteristic described in connection with the embodiment/aspect is included in at least one embodiment/aspect. The appearances of the phrases "for example," "in an example," or "in some examples" are not necessarily all referring to the same example.

Documents

Application Documents

# Name Date
1 202041055251-FORM 1 [18-12-2020(online)].pdf 2020-12-18
2 202041055251-DRAWINGS [18-12-2020(online)].pdf 2020-12-18
3 202041055251-COMPLETE SPECIFICATION [18-12-2020(online)].pdf 2020-12-18
4 202041055251-Request Letter-Correspondence [22-02-2021(online)].pdf 2021-02-22
5 202041055251-Power of Attorney [22-02-2021(online)].pdf 2021-02-22
6 202041055251-FORM-26 [22-02-2021(online)].pdf 2021-02-22
7 202041055251-Form 1 (Submitted on date of filing) [22-02-2021(online)].pdf 2021-02-22
8 202041055251-Covering Letter [22-02-2021(online)].pdf 2021-02-22
9 202041055251-FORM 3 [18-06-2021(online)].pdf 2021-06-18
10 202041055251-FORM 3 [17-12-2021(online)].pdf 2021-12-17
11 202041055251-FORM 18 [21-11-2024(online)].pdf 2024-11-21