Abstract: An indoor unit of an air conditioner is provided with an image pickup device that includes a human body detecting means for detecting the presence or absence of a person and an obstacle detecting means for detecting the presence or absence of an obstacle. A wind direction changing means mounted to the indoor unit is controlled based on a detection signal of the human body detecting means and that of the obstacle detecting means. When the air conditioner is not in operation, the image pickup device is covered with a portion of the indoor unit. The image pickup device is designed to face in the same direction at the start of operation of the air conditioner.
DESCRIPTION
Title of the Invention
Air Conditioner
Technical Field
The present invention relates to an air conditioner having an indoor unit that is provided with a human body detecting device for detecting the presence or absence of a person and an obstacle detecting device for detecting the presence or absence of an obstacle and, more particularly, to an air conditioner designed to control wind direction changing blades based on a detection result of the human body detecting device and a detection result of the obstacle detecting device.
Background Art
A conventional air conditioner has an indoor unit including a human position detecting means and an obstacle position detecting means wherein a wind direction changing means is controlled based on both of a detection signal of the human position detecting means and a detection signal of the obstacle position detecting means to thereby enhance the air-conditioning efficiency.
In this air conditioner, when a heating operation is started, a determination is first made by the human position detecting means as to whether a person is present or absent in a room. If no person is present, the obstacle position detecting means determines whether an obstacle is present or absent, and if no obstacle is present, the wind direction changing means is controlled to spread the air-conditioned air over an entire space within the room.
If no person is present but an avoidable obstacle has been detected, the wind direction changing means is so controlled as to be directed toward a direction in which no obstacle is present. On the other hand, if an unavoidable object has been detected, the wind direction changing means is controlled so as not to allow the air-conditioned air to directly impinge on the obstacle to thereby spread the air-conditioned air over the entire space within the room.
Further, if a person(s) is present, a determination is made as to whether or not a region of absence is present, and if the region of absence is not present, the wind direction changing means is controlled to allow the air-conditioned air to spread over the entire space within the room. If the region of absence is present, the presence or absence of an obstacle is determined in the region of absence, i.e., the region where no person is present. If an obstacle is present, the wind direction changing means is so controlled as to be directed toward a direction in which the obstacle is present so that the air-conditioned air may not strongly impinge on the obstacle, while if no obstacle is present, the wind direction changing means is so controlled as to be directed toward a direction in which no obstacle is present (see, for example, Patent Document 1). (Prior Art Documents) Patent Document 1: Japanese Laid-Open Utility Model Publication No. 3-72249
Summary of the Invention
Problems to be solved by the Invention
In the case of the air conditioner as disclosed in Patent Document 1, an ultrasonic sensor is employed as a distance measuring device for detecting a distance to a person or an obstacle, and an entire region of a room is scanned for detection of an obstacle by driving the ultrasonic sensor. However, a range of measurement of the distance measuring device such as the ultrasonic sensor is narrow and, hence, detection and recognition of a person or an obstacle in the entire region of the room requires complicated and time-consuming scanning.
Also, if the distance measuring device such as the ultrasonic sensor is exposed irrespective of whether or not the air conditioner is in operation, the distance measuring device is adversely affected by dirt, dust, cigarette smoke or the like, thus resulting in a reduction in recognition performance.
Further, if the distance measuring device such as the ultrasonic sensor is of a movable type, when the detection by the distance measuring device is stopped after the scanning of the entire region of the room, the distance measuring device may face a corner of the room, thereby giving a resident or residents a feeling of strangeness. In addition, if the distance measuring device faces in different directions each time the scanning by the distance measuring device such as the ultrasonic sensor is completed, the residents similarly feel strange.
The present invention has been developed to overcome the above-described disadvantages.
It is accordingly an objective of the present invention to provide an air conditioner that makes use of a fixed or driven image pickup device mounted on an indoor unit to detect the presence or absence of a person (human body detecting means) and the presence or absence of an obstacle (obstacle detecting means), wherein the image pickup device is not exposed when the air conditioner is not in operation. If the image pickup device is of the driven type, the image pickup device is designed to always face in the same direction at the start of operation of the air conditioner, thereby minimizing a reduction in recognition performance of the image pickup device and giving the residents a sense of ease.
Means to Solve the Problems
In accomplishing the above objective, the air conditioner according to the present invention makes use of the fixed or driven image pickup device mounted on the indoor unit to detect the presence or absence of a person (human body detecting means) and the presence or absence of an obstacle (obstacle detecting means), and includes wind direction changing blades mounted to the indoor unit so as to be controlled based on a detection result of the human body detecting means and a detection result of the obstacle detecting means. When the air conditioner is not in operation, the image pickup device is covered with a portion of the indoor unit. If the image pickup device is of the driven type, the image pickup device is designed to face in the same direction at the start of operation of the air conditioner.
Effects of the Invention
According to the present invention, when the air conditioner is not in operation, the image pickup device is not exposed, thereby minimizing a reduction in recognition performance of the image pickup device, while when operation of the air conditioner is started, the image pickup device is designed to face in the same direction, thereby giving a resident or residents a sense of ease. Also, this dispels the residents' unease about privacy, namely the feeling that an interior of a room may be imaged at all times if the image pickup device remains exposed while the air conditioner is stopped.
Brief Description of the Drawings
Fig. 1 is a front view of an indoor unit of an air conditioner according to the present invention.
Fig. 2 is a vertical sectional view of the indoor unit of Fig. 1.
Fig. 3A is a vertical sectional view of the indoor unit of Fig. 1, depicting a state in which a movable front panel opens a front opening and vertical wind direction changing blades open a discharge opening.
Fig. 3B is a vertical sectional view of the indoor unit of Fig. 1, depicting a state in which a lower blade constituting the vertical wind direction changing blades has been set downward.
Fig. 4 is a sectional view of an image pickup device mounted on the indoor unit of Fig. 1.
Fig. 5 is a flowchart indicating human position estimation processing in an embodiment of the present invention.
Figs. 6A to 6C are schematic views to explain background difference processing in the human position estimation in this embodiment.
Figs. 7A to 7C are schematic views to explain processing for creating a background image in the background difference processing.
Figs. 8A to 8C are schematic views to explain processing for creating a background image in the background difference processing.
Figs. 9A to 9C are schematic views to explain processing for creating a background image in the background difference processing.
Figs. 10A and 10B are schematic views to explain region division processing in the human position estimation in this embodiment.
Fig. 11 is a schematic view to explain two coordinate systems utilized in this embodiment.
Figs. 12A and 12B are schematic views indicating a distance from an image pickup sensor unit to a position of a center of gravity of a human body.
Figs. 13A and 13B are schematic views indicating human position discriminating regions that are detected by the image pickup sensor unit constituting a human body detecting means.
Figs. 14A and 14B are schematic views indicating the human position discriminating regions that are detected by the image pickup sensor unit constituting the human body detecting means, particularly depicting a case where a human body or human bodies are present.
Fig. 15 is a flowchart for setting region property to each region shown in Figs. 13A and 13B.
Fig. 16 is a flowchart for finally determining the presence or absence of a person in each region shown in Figs. 13A and 13B.
Fig. 17 is a schematic plan view of a house in which the indoor unit of Fig. 1 has been installed.
Fig. 18 is a graph indicating long-term cumulative results obtained by each image pickup sensor unit with respect to the house of Fig. 17.
Fig. 19 is a schematic plan view of another house in which the indoor unit of Fig. 1 has been installed.
Fig. 20 is a graph indicating long-term cumulative results obtained by each image pickup sensor unit with respect to the house of Fig. 19.
Fig. 21 is a flowchart indicating human position estimation processing in which processing for extracting a human-like region from a frame image is utilized.
Fig. 22 is a flowchart indicating human position estimation processing in which processing for extracting a face-like region from a frame image is utilized.
Fig. 23 is a schematic view of obstacle position discriminating regions that are detected by an obstacle detecting means.
Fig. 24 is a schematic view indicating how to detect an obstacle using a stereo method.
Fig. 25 is a flowchart indicating distance measurement processing to an obstacle.
Fig. 26 is a schematic view indicating a distance from the image pickup sensor unit to a point P.
Fig. 27 is an elevational view of a certain living space, particularly depicting measurement results of the obstacle detecting means.
Fig. 28 is a schematic view indicating the definition of a wind direction at each position of right and left blades constituting the horizontal wind direction changing blades.
Fig. 29 is a schematic plan view of a room to explain a wall detection algorithm that is used to obtain distance numbers of surrounding walls by measuring distances to them from the indoor unit.
Fig. 30 is a front view of an indoor unit of another air conditioner according to the present invention.
Fig. 31 is a schematic view indicating a relationship between the image pickup sensor unit and a light emitting portion.
Fig. 32 is a flowchart indicating distance measurement processing to an obstacle that is executed using the light emitting portion and the image pickup sensor unit.
Fig. 33 is a front view of an indoor unit of another air conditioner according to the present invention.
Fig. 34 is a flowchart indicating processing to be executed by a human body distance detecting means employing the human body detecting means.
Figs. 35A and 35B are schematic views indicating processing for estimating a distance from the image pickup sensor unit to a human body using a v-coordinate v1 of an uppermost portion of an image.
Fig. 36 is a flowchart indicating processing executed by the obstacle detecting means employing the human body detecting means.
Figs. 37A and 37B are schematic views indicating processing for estimating a height v2 of a human body on the image using distance information from the image pickup sensor unit to the human body that has been estimated by the human body distance detecting means.
Figs. 38A and 38B are schematic views indicating processing for estimating whether an obstacle is present or absent between the image pickup sensor unit and the human body.
Figs. 39A and 39B are schematic views indicating the processing for estimating whether an obstacle is present or absent between the image pickup sensor unit and the human body.
Detailed Description of the Embodiments
A first invention is directed to an air conditioner that makes use of a fixed or driven image pickup device mounted on an indoor unit to detect the presence or absence of a person (human body detecting means) and the presence or absence of an obstacle (obstacle detecting means), and includes wind direction changing blades mounted to the indoor unit so as to be controlled based on a detection result of the human body detecting means and a detection result of the obstacle detecting means. When the air conditioner is not in operation, the image pickup device is covered with a portion of the indoor unit.
This construction can not only minimize a reduction in recognition performance of the image pickup device but also give a resident or residents a sense of ease. Also, it dispels the residents' unease about privacy, namely the feeling that an interior of a room may be imaged at all times if the image pickup device remains exposed while the air conditioner is stopped.
In a second invention, the driven type image pickup device is designed to face in a same direction at the start of operation of the air conditioner.
In a third invention, the driven type image pickup device is designed to face in a direction in which a front face of the indoor unit faces.
By these constructions, the residents do not feel a sense of strangeness.
In a fourth invention, the driven type image pickup device is designed to face, when the air conditioner is brought into operation, in a direction in which an optical axis thereof extends forward so as to be substantially perpendicular to a surface of the indoor unit, on which the image pickup device is mounted, as viewed from above the indoor unit, thereby bringing about the same effect as in the second or third invention.
In a fifth invention, a direction of the driven type image pickup device is changeable in horizontal and vertical directions within a predetermined range of angles, and when operation of the air conditioner is started, the image pickup device is designed to face in a direction in which the image pickup device is located at an uppermost or lowermost position of the predetermined range of angles in the vertical direction, thereby bringing about the same effect as in the second or third invention.
If a viewing angle of the image pickup device is sufficient in the vertical direction, it is only necessary to horizontally drive the image pickup device. Similarly, if the viewing angle of the image pickup device is sufficient in the horizontal direction, it is only necessary to vertically drive the image pickup device. Also, if the viewing angle of the image pickup device is sufficient in both the horizontal and vertical directions, the image pickup device may be of the fixed type.
In a sixth invention, when the air conditioner is not in operation, the image pickup device is covered with a movable front panel or a vertical wind direction changing blade. This construction can prevent the image pickup device from being adversely affected by dirt, dust, cigarette smoke or the like, thus making it possible to minimize a reduction in recognition performance. Also, it dispels the residents' unease about privacy, namely the feeling that an interior of a room may be imaged at all times if the image pickup device remains exposed while the air conditioner is stopped.
Embodiments of the present invention are described hereinafter with reference to the drawings.
(Whole construction of air conditioner)
Air conditioners for use in ordinary households include an outdoor unit and an indoor unit connected to each other via refrigerant piping, and Figs. 1 to 4 depict an indoor unit of an air conditioner according to the present invention.
The indoor unit includes a main body 2 and a movable front panel (hereinafter referred to simply as "front panel") 4 to open and close front suction openings 2a defined in the main body 2. When the air conditioner is not in operation, the front panel 4 is held in close contact with the main body 2 to close the front suction openings 2a, while when the air conditioner is brought into operation, the front panel 4 moves away from the main body 2 to open the front suction openings 2a. Figs. 1 and 2 depict a state in which the front suction openings 2a have been closed by the front panel 4, and Figs. 3A and 3B depict a state in which the front suction openings 2a have been opened by the front panel 4.
As shown in Figs. 1 to 3B, the main body 2 accommodates therein a heat exchanger 6, an indoor fan 8 operable to blow out into a room indoor air, which has been sucked through the front suction openings 2a and upper suction openings 2b and then heat exchanged by the heat exchanger 6, vertical wind direction changing blades 12 operable to open and close a discharge opening 10, through which heat exchanged air is blown into the room, and also operable to vertically change the direction of air blown out from the discharge opening 10, and horizontal wind direction changing blades 14 operable to horizontally change the air direction. A filter 16 is disposed between the front and upper suction openings 2a, 2b and the heat exchanger 6 to remove dust contained in indoor air that has been sucked through the front suction openings 2a and the upper suction openings 2b.
The front panel 4 is connected at an upper portion thereof to an upper portion of the main body 2 via two arms 18, 20 provided on respective side portions thereof. The arm 18 is connected to a drive motor (not shown), and when the air conditioner is brought into operation, the front panel 4 is moved forward and obliquely upward from a position (where the front suction openings 2a are closed) at the time of stop of operation of the air conditioner by driving the drive motor.
The vertical wind direction changing blades 12 include an upper blade 12a and a lower blade 12b, both swingably mounted to a lower portion of the main body 2. The upper blade 12a and the lower blade 12b are connected to respective drive sources (for example, stepping motors), and angles thereof are independently controlled by a controller (for example, a microcomputer on a first substrate 48 described later) accommodated within the indoor unit. As can be seen from Figs. 3A and 3B, a range of angles within which the lower blade 12b is allowed to swing is so set as to be greater than a range of angles within which the upper blade 12a is allowed to swing.
A method of driving the upper blade 12a and the lower blade 12b is explained later. The vertical wind direction changing blades 12 may be made up of three blades or more. In this case, it is preferred that angles of at least two blades (in particular, an uppermost blade and a lowermost blade) be independently controlled.
The horizontal wind direction changing blades 14 are made up of a total of ten blades in groups of five each on right and left sides with respect to a center of the indoor unit.
These blades are swingably mounted to a lower portion of the main body 2. Each group of five blades is connected to a drive source (for example, a stepping motor) as a unit, and the angle thereof is controlled by the controller accommodated in the indoor unit. A method of driving the horizontal wind direction changing blades 14 is also explained later.
(Construction of human body detecting means)
As shown in Fig. 1, two image pickup devices 25 each incorporating an image pickup sensor unit 24, 26 therein are respectively mounted on both ends of an indoor unit body at a lower portion thereof as viewed from the front. Alternatively, only one image pickup device 25 incorporating an image pickup sensor unit 24 therein may be mounted on an end of the indoor unit body at a lower portion thereof. The image pickup device 25 is explained hereinafter with reference to Fig. 4.
The image pickup sensor unit 24 includes a circuit board 51, a lens 52 mounted on the circuit board 51, and an image pickup sensor 53 accommodated in the lens 52. The human body detecting means determines the presence or absence of a person based on, for example, difference processing (explained later) using the circuit board 51. That is, the circuit board 51 acts as a determination means for determining whether a person is present or absent.
The image pickup sensor unit 24 includes a spherical support (sensor holder) 54 for rotatably supporting the image pickup sensor 53 and an imaging direction changing means (drive means) for driving the image pickup sensor 53 to change a direction thereof so that a necessary field of view may be entirely scanned.
The support 54 includes a rotary shaft 55 for horizontal (transverse) rotation and a rotary shaft 56 for vertical rotation extending in a direction perpendicular to the rotary shaft 55.
The rotary shaft 55 is connected to and driven by a motor 57 for horizontal rotation, and the rotary shaft 56 is connected to and driven by a motor 58 for vertical rotation. That is, the imaging direction changing means is made up of the motor 57 for horizontal rotation, the motor 58 for vertical rotation, and the like to change and recognize the direction or angle of the image pickup sensor 53 in two dimensions.
(Human position estimation by image pickup sensor unit)
A method of estimating a position of a person using the image pickup sensor unit is explained hereinafter, supposing, for the sake of brevity, that the image pickup sensor is fixed, i.e., that drive of the image pickup sensor is not required (the field of view of the image pickup sensor is sufficiently ensured in both the horizontal and vertical directions).
A known difference method is utilized to estimate a position of a person using the image pickup sensor unit 24. In this method, difference processing is performed with respect to a background image in which no person is present and an image taken by the image pickup sensor unit 24, and it is estimated that a person is present in a region where a difference occurs.
Fig. 5 is a flowchart indicating human position estimation processing in this embodiment.
At step S101, background difference processing is utilized to detect pixels in which a difference occurs in a frame image. The background difference processing is a method of comparing a background image taken under specific conditions and an image taken under the same imaging conditions such as a field of view, a point of view, a focal length and the like of the image pickup sensor unit as those of the background image to detect an object that is not present in the background image but present in the image taken. To detect a person, an image in which no person is present is first created as the background image.
Figs. 6A to 6C are schematic views to explain background difference processing in the human position estimation according to this embodiment. Fig. 6A depicts a background image. The field of view is set so as to be nearly equal to a space to be air conditioned by the air conditioner. In this figure, 101 denotes a window present in the space to be air conditioned, and 102 denotes a door.
Fig. 6B depicts a frame image taken by the image pickup sensor unit. The field of view, the point of view, the focal length and the like of the image pickup sensor unit are the same as those of the background image shown in Fig. 6A. 103 denotes a person present in the space to be air conditioned. The background difference processing creates a difference image between Fig. 6A and Fig. 6B to detect the person.
Fig. 6C depicts the difference image in which white pixels denote pixels having no difference and black pixels denote pixels having a difference. It is found that a region of the person 103 not present in the background image but present in the frame image taken has been detected as a region 104 in which the difference has occurred. That is, the human region can be detected by extracting the region having the difference from the difference image.
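As a concrete illustration, the background difference step described above can be sketched as follows. This is a minimal NumPy sketch, not the specification's implementation; the threshold value and the tiny image size are illustrative assumptions.

```python
import numpy as np

def background_difference(background, frame, threshold=30):
    """Detect pixels that differ between the background image and the
    current frame; such pixels form the candidate human region."""
    # Absolute per-pixel difference between the two grayscale images.
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    # A pixel "has a difference" when it exceeds the noise threshold.
    return diff > threshold

# Minimal example: a 4x4 "background" and a frame in which two pixels changed.
bg = np.zeros((4, 4), dtype=np.uint8)
fr = bg.copy()
fr[1, 1] = 200
fr[2, 2] = 180
mask = background_difference(bg, fr)
print(mask.sum())  # 2 pixels detected as differing
```

In the difference image of Fig. 6C, the `True` entries of `mask` correspond to the black pixels and the `False` entries to the white pixels.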
The background image referred to above can be created by making use of inter-frame difference processing. Figs. 7 to 9 are schematic views to explain this processing. Figs. 7A to 7C are schematic views depicting images of three consecutive frames taken by the image pickup sensor unit 24 in a scene in which the person 103 is moving from right to left in front of the window 101. Fig. 7B depicts an image of the next frame of Fig. 7A, and Fig. 7C depicts an image of the next frame of Fig. 7B. Figs. 8A to 8C depict inter-frame difference images obtained by the inter-frame difference processing with the use of the images of Figs. 7A to 7C. White pixels denote pixels having no difference and black pixels 105 denote pixels having a difference. If an object that is moving within the field of view is only a person, it is conceivable that no person exists in a region where no difference has occurred in the inter-frame difference images. In view of this, in this embodiment, the background image is replaced with the image of the current frame in the region where no inter-frame difference has occurred, thereby automatically creating a new background image. Figs. 9A to 9C schematically depict renewal of the background images of the frames of Figs. 7A to 7C, respectively. A shaded region 106 denotes a region where the background image has been renewed, a black region 107 denotes a region where any background image has not been created yet, and a white region 108 denotes a region where the background image has not been renewed. That is, the sum of the black region 107 and the white region 108 in Figs. 9A to 9C is equal to the black region in Figs. 8A to 8C, respectively. As shown in Figs. 9A to 9C, if a person is moving, the black region 107 becomes gradually small and the background image is automatically created.
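The renewal rule just described — copy the current frame into the background wherever no inter-frame difference occurred — might be sketched as follows. The threshold and array contents are illustrative assumptions, not values from the specification.

```python
import numpy as np

def renew_background(background, prev_frame, curr_frame, threshold=10):
    """Where two consecutive frames show no inter-frame difference, assume
    no person is present there and copy the current frame into the
    background image; pixels with motion are left unrenewed."""
    interframe = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    static = interframe <= threshold            # pixels with no inter-frame difference
    updated = background.copy()
    updated[static] = curr_frame[static]        # renew the background only there
    return updated

# Example: a moving-person pixel appears at (0, 0); it is left out of the renewal.
bg = np.full((3, 3), 50, dtype=np.uint8)
prev = np.zeros((3, 3), dtype=np.uint8)
curr = prev.copy()
curr[0, 0] = 255
new_bg = renew_background(bg, prev, curr)
print(new_bg[0, 0], new_bg[1, 1])  # 50 0
```

Repeating this over many frames shrinks the not-yet-created black region 107, as in Figs. 9A to 9C.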
If a plurality of difference regions have been obtained, they are separated at the next step S102. That is, if a plurality of persons exist, they are separated as a plurality of difference regions using a known clustering method. By way of example, the difference images are separated as different regions in accordance with a rule in which a pixel having a difference and pixels similarly having a difference and existing in the vicinity thereof lie in the same region. Figs. 10A and 10B schematically depict this region separation processing. Fig. 10A depicts a difference image calculated by the difference processing, wherein black pixels 111 and 112 denote pixels in which a difference has occurred. When the difference image of Fig. 10A is obtained, a result as shown in Fig. 10B is subsequently obtained by the region separation in accordance with the aforementioned rule in which a pixel having a difference and pixels similarly having a difference and existing in the vicinity thereof lie in the same region. As can be seen from these figures, a determination is made that a horizontal-striped region 113 and a vertical-striped region 114 are different regions. For this determination, denoising processing such as, for example, morphological processing widely used in the field of image processing may be performed.
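The stated rule — a difference pixel and the difference pixels in its vicinity lie in the same region — is essentially connected-component labeling. A minimal sketch under a 4-connectivity assumption (the specification does not fix the neighbourhood definition):

```python
from collections import deque

import numpy as np

def label_regions(mask):
    """Separate a boolean difference mask into regions: each difference
    pixel is grouped with neighbouring difference pixels (4-connectivity)."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for sy, sx in zip(*np.nonzero(mask)):
        if labels[sy, sx]:
            continue                       # already assigned to a region
        current += 1
        queue = deque([(sy, sx)])
        labels[sy, sx] = current
        while queue:                       # breadth-first flood fill
            y, x = queue.popleft()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = current
                    queue.append((ny, nx))
    return labels, current

# Two separate clusters of difference pixels, as in Figs. 10A and 10B.
mask = np.array([[1, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 1]], dtype=bool)
labels, n = label_regions(mask)
print(n)  # 2 distinct regions
```

Each label then plays the role of one of the separated regions 113 and 114; in practice a library routine such as `scipy.ndimage.label` could do the same job.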
At the next step S103, the position of a person detected is determined by calculating the position of a center of gravity of each region obtained. In order to determine the position of a person from the position of the center of gravity of each image, the perspective projection conversion may be used.
Two coordinate systems are explained to explain the perspective projection conversion. Fig. 11 is a schematic view to explain the two coordinate systems. An image coordinate system is first considered. This is a two-dimensional coordinate system in an image taken. A top-left pixel in the image is set as an origin, and a rightward direction and a downward direction are indicated by "u" and "v", respectively. A camera coordinate system that is a three-dimensional coordinate system based on the camera is subsequently considered. In this coordinate system, the position of a focal point of the image pickup sensor unit is set as an origin, and an optical axis direction of the image pickup sensor unit is indicated by "Zc". Also, an upward direction and a leftward direction as viewed from the camera are indicated by "Yc" and "Xc", respectively.
Then, the following relationship is established by means of the perspective projection conversion. [Formula 1]
where f denotes a focal length [mm], (u0, v0) denotes an image center [pixel] in the image coordinate system, and (dpx, dpy) denotes a size [mm/pixel] of one pixel of an image pickup element. Because Xc, Yc and Zc are all unknowns, if a coordinate (u, v) on the image is known, Formula 1 means that a real three-dimensional position corresponding to that coordinate lies on a certain straight line passing through the origin of the camera coordinate system.
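Formula 1 itself is not reproduced in this text. A standard perspective projection consistent with the definitions above (u rightward, v downward, Xc leftward, Yc upward, Zc along the optical axis) would read as follows; this reconstruction is an assumption, not the document's own formula:

```latex
u - u_0 = -\frac{f}{d_{px}} \cdot \frac{X_c}{Z_c},
\qquad
v - v_0 = -\frac{f}{d_{py}} \cdot \frac{Y_c}{Z_c}
```

The minus signs arise because Xc points leftward while u increases rightward, and Yc points upward while v increases downward; scaling a solution (Xc, Yc, Zc) by any positive factor satisfies the same equations, which is why the point is only constrained to a ray from the origin.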
As shown in Figs. 12A and 12B, the position of the center of gravity of a person on the image is denoted by (ug, vg), and the three-dimensional position thereof in the camera coordinate system is denoted by (Xgc, Ygc, Zgc). Fig. 12A and Fig. 12B schematically depict a space to be air conditioned as viewed from the side and from above, respectively.
Also, an installation height of the image pickup sensor unit 24 is denoted by H. A direction Xc is horizontal, and an optical axis Zc forms an angle θ with the vertical direction. A direction of the image pickup sensor unit 24 is denoted by an angle α in the vertical direction (angle of elevation or angle as measured upward from a vertical line) and by an angle β in the horizontal direction (angle as measured rightward from a front reference line as viewed from the indoor unit). If a height of the center of gravity of the person is denoted by h, the three-dimensional position of the person in the space to be air conditioned, i.e., a distance L and a direction W from the image pickup sensor unit 24 to the center of gravity of the person, is calculated by the following formulas. [Formula 2]
Here, it is considered that the image pickup sensor unit 24 is generally installed at a height of H=about 2 meters and the height h of the center of gravity of a person is about 80 centimeters. In applications where the installation height H of the image pickup sensor unit 24 and the height h of the center of gravity of the person are determined, Formula 3 and Formula 5 mean that the position (L, W) of the center of gravity of the person in the space to be air conditioned is uniquely obtained from the position (ug, vg) of the center of gravity on the image.
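Under the geometry just described, (L, W) can be computed by casting the viewing ray through (ug, vg) and intersecting it with the horizontal plane at the height h of the center of gravity. The sketch below is a reconstruction under assumed parameter values — f, dpx, dpy, u0, v0 and θ are illustrative, and the specification's Formulas 2 to 5 are not reproduced in this text:

```python
import numpy as np

# Hypothetical camera parameters (not from the specification):
f, dpx, dpy = 3.0, 0.006, 0.006   # focal length [mm], pixel size [mm/pixel]
u0, v0 = 160.0, 120.0             # image center [pixel]
H, h = 2.0, 0.8                   # sensor height and center-of-gravity height [m]
theta = np.radians(60.0)          # angle of the optical axis from the vertical

def person_position(ug, vg):
    """Estimate the horizontal distance L and direction W of a person's
    center of gravity from its image coordinates (ug, vg)."""
    # Viewing ray in camera coordinates (Xc leftward, Yc upward, Zc optical axis).
    xc = -(ug - u0) * dpx / f
    yc = -(vg - v0) * dpy / f
    zc = 1.0
    # Rotate into room coordinates (X leftward, Y up, Z horizontal-forward);
    # the optical axis is tilted by theta from the vertical.
    dy = yc * np.sin(theta) - zc * np.cos(theta)
    dz = yc * np.cos(theta) + zc * np.sin(theta)
    # Intersect the ray, starting at height H, with the plane Y = h.
    t = (h - H) / dy
    x, z = t * xc, t * dz
    L = np.hypot(x, z)                  # horizontal distance to the person
    W = np.degrees(np.arctan2(-x, z))   # direction, rightward positive
    return L, W

L, W = person_position(160.0, 120.0)    # person at the image center
print(L, W)                             # roughly (H - h) * tan(theta) metres, dead ahead
```

As the text notes, once H and h are fixed, (L, W) is uniquely determined by (ug, vg), which is exactly what this ray-plane intersection expresses.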
Figs. 13A and 13B depict a plurality of regions A-G in the space to be air conditioned and, as shown therein, if the center of gravity of the person on the image exists somewhere in those regions A-G, a region where the person is present in the space to be air conditioned can be determined. Also, Figs. 14A and 14B schematically depict where a person or persons are present. In the case of Fig. 14A, because the centers of gravity of the persons are present in the regions A and F, a determination is made that the persons are present in the regions A and F shown in Fig. 13B. On the other hand, in the case of Fig. 14B, because the center of gravity of the person is present in the region D, a determination is made that the person is present in the region D shown in Fig. 13B.
Fig. 15 is a flowchart for setting region property (explained later) to each of the regions A-G using the image pickup sensor unit 24, and Fig. 16 is a flowchart for determining the presence or absence of a person in each region A-G using the image pickup sensor unit 24. A method of determining the position of a person is explained hereinafter with reference to these flowcharts.
At step S1, the presence or absence of a person in each region is first determined at predetermined intervals T1 (for example, 200 milliseconds if a frame rate of the image pickup sensor unit 24 is 5 fps) in the above-described manner. Based on results of such determination, the regions A-G are classified into a first region in which a person is frequently present (place of frequent presence), a second region in which a person is present for a short period of time (transit region such as a region through which the person merely passes, a region in which the person stays for a short period of time, or the like), and a third region in which a person is present for a considerably short period of time (non-living region such as walls, windows, or the like, in which nobody is present very often). The first, second and third regions are hereinafter sometimes referred to as living sections I, II and III, respectively, or as a region of region property I, a region of region property II, and a region of region property III, respectively. The living sections may be broadly classified depending on the frequency of the presence or absence of a person by referring to the living section I (region property I) and the living section II (region property II) as a living region (region in which a person(s) lives) and referring to the living section III (region property III) as a non-living region (region in which no person lives).
This determination is made after step S3 in the flowchart of Fig. 15 and explained hereinafter with reference to Figs. 17 and 18.
Fig. 17 depicts a layout of a house called "1LDK" consisting of a Japanese-style room and an LD (living and dining room), with the indoor unit of the air conditioner according to the present invention installed in the LD. Regions indicated by ovals in Fig. 17 indicate places where a subject is frequently present, as reported by the subject.
As described hereinabove, a determination is made as to whether a person is present or absent in each region A-G for every period T1. A response result of 1 (presence of response) or 0 (no response) is outputted after a lapse of each period T1 and, upon repetition of this a plurality of times, all sensor outputs are cleared at step S2.
At step S3, a determination is made as to whether or not a predetermined cumulative period of time of operation of the air conditioner has elapsed. If it is determined at step S3 that the predetermined period of time has not elapsed, the program returns to step S1, but if it is determined that the predetermined period of time has elapsed, each region A-G is determined as one of the living sections I, II, and III by comparing the response results of each region A-G accumulated for the predetermined period of time with two threshold values.
Detailed explanation is made with reference to Fig. 18 indicating long-term cumulative results. A first threshold value and a second threshold value less than the first threshold value are set with which the long-term cumulative results are compared. A determination is made at step S4 whether or not the long-term cumulative results of each region A-G are greater than the first threshold value. If it is determined that the long-term cumulative results are greater than the first threshold value, the region having such long-term cumulative results is determined as the living section I at step S5. On the other hand, if it is determined at step S4 that the long-term cumulative results of each region A-G are not greater than the first threshold value, a determination is made at step S6 whether or not the long-term cumulative results of each region A-G are greater than the second threshold value. If it is determined that the long-term cumulative results are greater than the second threshold value, the region having such long-term cumulative results is determined as the living section II at step S7, and if not, the region is determined as the living section III at step S8.
In the example of Fig. 18, the regions C, D and G are determined as the living section I, the regions B and F as the living section II, and the regions A and E as the living section III.
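The two-threshold classification of steps S4 through S8 can be sketched as follows; the threshold values used in the example are illustrative, as the text does not specify them.

```python
def classify_region(cumulative, first_threshold, second_threshold):
    """Assign a living section (region property) from long-term
    cumulative presence results, compared against two thresholds
    (first_threshold > second_threshold)."""
    if cumulative > first_threshold:
        return "I"    # place of frequent presence
    elif cumulative > second_threshold:
        return "II"   # transit region / short stays
    else:
        return "III"  # non-living region (walls, windows, etc.)
```

For instance, with illustrative thresholds of 100 and 30, a region whose long-term cumulative result is 150 is determined as the living section I, one with 50 as the living section II, and one with 10 as the living section III.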
Fig. 19 depicts a layout of another house having an LD in which the indoor unit of the air conditioner according to the present invention has been installed, and Fig. 20 indicates long-term cumulative results of each region A-G. In the example of Fig. 19, the regions B, C and E are determined as the living section I, the regions A and F as the living section II, and the regions D and G as the living section III.
Although the determination for the region property (living section) referred to above is repeated for every predetermined period of time, the results of determination hardly change unless sofas, tables and the like disposed inside the room to be determined are moved.
A final determination of the presence or absence of a person in each region A-G is explained hereinafter with reference to the flowchart of Fig. 16.
Because steps S21 and S22 are the same as steps S1 and S2 in the flowchart of Fig. 15, explanation thereof is omitted. It is determined at step S23 whether or not response results for a predetermined number M of (for example, 45) periods T1 have been obtained. If it is determined that the periods T1 do not reach the predetermined number M, the program returns to step S21, while if it is determined that the periods T1 have reached the predetermined number M, the number of a series of cumulative responses equal to a total of response results for periods T1 × M is calculated at step S24. The calculation of the series of cumulative responses is repeated a plurality of times, and it is determined at step S25 whether or not calculation results of a predetermined number of (for example, N=4) series of cumulative responses have been obtained. If it is determined that the calculation does not reach the predetermined number, the program returns to step S21, while if it is determined that the calculation has reached the predetermined number, the presence or absence of a person in each region A-G is estimated at step S26 based on the region property that has been already determined and the predetermined number of series of cumulative responses.
It is to be noted here that because the program returns to step S21 from step S27 at which 1 is subtracted from the number (N) of the series of cumulative responses, the calculation of the plurality of series of cumulative responses is repeated.
Table 1 indicates a record of a newest series of cumulative responses (periods T1 × M). In Table 2, ΣA0 denotes the number of a series of cumulative responses in the region A.
When the number of a series of cumulative responses immediately before ΣA0 is ΣA1, and the number of the series of cumulative responses immediately before ΣA1 is ΣA2, and so on, if N=4, the presence or absence of a person is determined based on the past four records (ΣA4, ΣA3, ΣA2, ΣA1). In the case of the living section I, if the past four records reveal that at least one series of cumulative responses exceeds 1, it is determined that a person is present. In the case of the living section II, if the past four records reveal that more than two series of cumulative responses exceed 1, it is determined that a person is present. In the case of the living section III, if the past four records reveal that more than three series of cumulative responses exceed 2, it is determined that a person is present.
After the period T1 × M from the determination of the presence or absence of a person referred to above, a subsequent determination of the presence or absence of a person is similarly made based on the next four records, the region property, and the predetermined number of series of cumulative responses.
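The per-section decision rules described above can be sketched as follows; the inequality readings (e.g. "more than two" as strictly greater than two) are taken literally from the text and may differ from the patent's intended counts.

```python
def person_present(section, past_series):
    """Decide presence from the past N (= 4) series of cumulative
    responses, with looser criteria for frequently occupied regions.

    section: "I", "II", or "III" (the region property / living section)
    past_series: the last four series-of-cumulative-responses counts
    """
    if section == "I":
        # Living section I: at least one series exceeding 1 suffices.
        return sum(1 for s in past_series if s > 1) >= 1
    elif section == "II":
        # Living section II: more than two series must exceed 1.
        return sum(1 for s in past_series if s > 1) > 2
    else:
        # Living section III: more than three series must exceed 2,
        # i.e. all four, so spurious responses near walls are ignored.
        return sum(1 for s in past_series if s > 2) > 3
```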
That is, the indoor unit of the air conditioner according to the present invention tries to obtain human position estimation results having a high probability by estimating the human position using the region property, which is obtained upon long-term accumulation of the region determination results for each predetermined period, and the past records indicating the number of N series of cumulative responses in each region, each series indicating the region determination results for a predetermined number of periods.
When the presence or absence of a person is determined in a manner as described above, if T1=0.2 seconds and M=45, a period of time required for estimation of the presence of a person and that required for estimation of the absence of a person are indicated in Table 2.
After an area that is to be air conditioned by the indoor unit of the air conditioner according to the present invention has been classified into a plurality of regions A-G in the above-described manner using the image pickup sensor unit 24, the region property (living section I, II, or III) of each region A-G is determined, and the period of time required for estimation of the presence of a person and that required for estimation of the absence of a person are changed depending on the region property of each region A-G.
That is, after the setting for air conditioning has been changed, about one minute is needed before the air-conditioned air reaches its target and, hence, if the setting for air conditioning is changed in a short period of time (for example, several seconds), comfort is lost. In addition, it is preferred in terms of energy saving that a place that will soon be empty is not much air conditioned. For this reason, the presence or absence of a person in each region A-G is first detected, and air conditioning is optimized particularly in a region where a person is present.
More specifically, the period of time required for estimation of the presence or absence of a person in a region determined as the living section II is set as a standard one, and the presence of a person is estimated in a shorter period of time in a region determined as the living section I than in the region determined as the living section II, while when the person has disappeared from the region, the absence of a person is estimated in a longer period of time in the region determined as the living section I than in the region determined as the living section II. In other words, the period of time required for estimation of the presence of a person is set shorter and that required for estimation of the absence of a person is set longer with respect to the region determined as the living section I. On the other hand, the presence of a person is estimated in a longer period of time in a region determined as the living section III than in the region determined as the living section II, while when the person has disappeared from the region, the absence of a person is estimated within a shorter period of time in the region determined as the living section III than in the region determined as the living section II. In other words, the period of time required for estimation of the presence of a person is set longer and that required for estimation of the absence of a person is set shorter with respect to the region determined as the living section III. Further, as described above, the living section set to each region changes depending on the long-term cumulative results, and the period of time required for estimation of the presence of a person and that required for estimation of the absence of a person are both variably set.
Although in the above discussion the difference method is used to estimate the position of a person by means of the image pickup sensor unit 24, any other method may be used.
For example, a human-like region, i.e., a region where a person is likely to be present, may be extracted from a frame image by making use of image data of an entire body of the person. A method of utilizing, for example, HOG (Histograms of Oriented Gradients) is widely known as such a method (N. Dalal and B. Triggs, "Histograms of Oriented Gradients for Human Detection", In Proc. IEEE Conf. on Computer Vision and Pattern Recognition, Vol. 1, pp. 886-893, 2005). HOG features are feature amounts focused on the edge strength in each edge direction in localized portions of an image, and the human region can be detected from the frame image by conducting learning and discrimination with respect to those feature amounts with the use of an SVM (Support Vector Machine) or the like.
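As an illustration of the feature computation the HOG method relies on, the sketch below builds a single-cell orientation histogram weighted by edge strength using NumPy. This is a simplified, hypothetical reduction of the Dalal-Triggs descriptor: a real HOG implementation computes many dense cells with block normalization, and the SVM learning stage is omitted here entirely.

```python
import numpy as np

def hog_like_descriptor(cell, n_bins=9):
    """Tiny HOG-style histogram for one image cell: gradient
    orientations (unsigned, 0-180 degrees) are binned and weighted
    by gradient magnitude, then L2-normalized."""
    gy, gx = np.gradient(cell.astype(float))   # gradients along rows, cols
    mag = np.hypot(gx, gy)                     # edge strength per pixel
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0
    hist = np.zeros(n_bins)
    bin_idx = np.minimum((ang / (180.0 / n_bins)).astype(int), n_bins - 1)
    for b, m in zip(bin_idx.ravel(), mag.ravel()):
        hist[b] += m
    # L2 normalization makes the descriptor robust to illumination scale.
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist
```

A horizontal intensity ramp, for example, yields a histogram dominated by the 0-degree orientation bin.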
Fig. 21 is a flowchart indicating human position estimation processing in which processing for extracting a human-like region from the frame image is utilized. In this figure, the same steps as those in Fig. 5 are designated by the same step numbers and detailed explanation thereof is omitted.
At step S104, the human-like region in the frame image is extracted as a human region by making use of the aforementioned HOG.
At step S103, the position of the person detected is determined by calculating the position of a center of gravity of the human region obtained. In order to determine the position of the person from the position of the center of gravity of the image, Formula 3 and Formula 5 can be used, as described above.
Instead of utilizing image data of the entire body of the person, a face-like region may be extracted from the frame image by making use of image data of a face of the person. A method of utilizing, for example, Haar-like features is widely known as such a method (P. Viola and M. Jones, "Robust real-time face detection", International Journal of Computer Vision, Vol. 57, No. 2, pp. 137-154, 2004). Haar-like features are feature amounts focused on a luminance difference between localized portions of an image, and the face region can be detected from the frame image by conducting learning and discrimination with respect to those feature amounts with the use of an SVM (Support Vector Machine) or the like.
Fig. 22 is a flowchart indicating human position estimation processing in which processing for extracting a face-like region from a frame image is utilized. In this figure, the same steps as those in Fig. 5 are designated by the same step numbers and detailed explanation thereof is omitted.
At step S105, the face-like region in the frame image is extracted as a face region by making use of the aforementioned Haar-like features.
At step S103, the position of the person detected is determined by calculating the position of a center of gravity of the face region obtained. In order to determine the position of the person from the position of the center of gravity of the image, the perspective projection conversion can be used, as described above. In this event, when the position of the person is determined from the position of the center of gravity thereof by making use of the region of the entire body of the person, the height of the center of gravity of the person is
set to h=about 80 centimeters, but when the face region is utilized, the height of the center of gravity of the face is set to h=about 160 centimeters and the position of the person is determined by making use of Formula 3 and Formula 5. (Construction of obstacle detecting means)
Obstacle detection is conducted by making use of the image pickup sensor unit 24 and this obstacle detecting means is explained hereinafter. The term "obstacle" as employed throughout this application is defined as an object that generally impedes an air flow blown out from the discharge opening 10 in the indoor unit to provide a resident or residents with a comfortable space, and collectively means objects other than residents such as, for example, a television set, an audio station, and furniture such as sofas, tables or the like.
In this embodiment, the floor face in the living space is divided into a plurality of regions as shown in Fig. 23 by the obstacle detecting means based on the vertical angle α and the horizontal angle β. Each of the plurality of regions so divided is defined as an obstacle position discriminating region or a "position" where the presence or absence of an obstacle is determined. An entire area covering all the positions shown in Fig. 23 substantially coincides with an entire area covering all the human position discriminating regions as shown in Fig. 13B. By making region boundaries of Fig. 13B substantially coincide with position boundaries of Fig. 23, and by making the regions correspond to the positions in the following manner, not only can air conditioning control be easily conducted, but the number of memories for storage of information can also be minimized.
Region A: Position A1+A2+A3
Region B: Position B1+B2
Region C: Position C1+C2
Region D: Position D1+D2
Region E: Position E1+E2
Region F: Position F1+F2
Region G: Position G1+G2
In the region division of Fig. 23, the number of the positions is so set as to be greater than the number of the human position discriminating regions, and at least two positions belong to each of the human position discriminating regions and are positioned side by side as viewed from the indoor unit. However, air conditioning control can be conducted with a region division in which at least one position belongs to each of the human position discriminating regions.
Also, in the region division of Fig. 23, each of the plurality of human position discriminating regions is divided depending on a distance to the indoor unit, and the number of the positions belonging to a human position discriminating region close to the indoor unit is set greater than the number of the positions belonging to another human position discriminating region remote from the indoor unit. However, the positions belonging to each human position discriminating region may be the same in number irrespective of the distance from the indoor unit. (Detecting operation and data processing by obstacle detecting means)
As described above, in the air conditioner according to the present invention, the presence or absence of a person in the regions A-G is detected by the human body detecting means, while the presence or absence of an obstacle in the positions A1-G2 is detected by the obstacle detecting means, and the vertical wind direction changing blades 12 and the horizontal wind direction changing blades 14 both constituting the wind direction changing means are controlled based on a detection signal (result detected) from the human body detecting means and that (result detected) from the obstacle detecting means, thereby providing a comfortable space.
The human body detecting means makes use of, for example, the fact that a person moves and can detect the presence or absence of the person by detecting an object that is moving in a space to be air conditioned, while the obstacle detecting means detects a distance to an obstacle by means of the image pickup sensor unit 24 and accordingly cannot distinguish between a human body and an obstacle.
If a human body is erroneously detected as an obstacle, a region in which a person is present cannot be air conditioned or air-conditioned air (air current) may directly impinge on the person, thus resulting in inefficient or uncomfortable air conditioning control.
For this reason, the obstacle detecting means is designed so as to detect only an obstacle by executing data processing explained below.
An obstacle detecting means employing image pickup sensor units is first explained. A stereo method is utilized to detect an obstacle with the use of the image pickup sensor units. The stereo method utilizes a plurality of image pickup sensor units 24, 26 to estimate a distance to an object by making use of a disparity between them.
Fig. 24 is a schematic view indicating how to detect an obstacle using the stereo method. In this figure, the image pickup sensor units 24, 26 are utilized to measure a distance to a point P where the obstacle is positioned. In Fig. 24, f denotes a focal length, B denotes a distance between a focal point of the image pickup sensor unit 24 and that of the image pickup sensor unit 26, u1 denotes a u-coordinate of the obstacle on an image of the image pickup sensor unit 24, u2 denotes a u-coordinate of a point corresponding to u1 on an image of the image pickup sensor unit 26, and X denotes a distance from the two image pickup sensor units 24, 26 to the point P. Assuming that a center position of the image of the image pickup sensor unit 24 and that of the image of the image pickup sensor unit 26 are the same, the distance X from the image pickup sensor units 24, 26 to the point P is obtained by the following formula. [Formula 6]
This formula reveals that the distance X from the image pickup sensor units 24, 26 to the point P or the obstacle depends on a disparity |u1-u2| between the two image pickup sensor units 24, 26.
A block matching method employing a template matching method may be used to search the corresponding point. As described above, the distance measurements (position detection of obstacles) are conducted using the image pickup sensor units.
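The stereo computation described above can be sketched as follows. The depth expression mirrors Formula 6, i.e. X = f·B/|u1 − u2|; the window size, search range, and sum-of-absolute-differences (SAD) criterion of the block matching step are assumptions for illustration, not values given in the text.

```python
import numpy as np

def depth_from_disparity(u1, u2, f, B):
    """Formula 6: X = f * B / |u1 - u2|, under the same optical-center
    assumption as in the text (returns None at zero disparity)."""
    d = abs(u1 - u2)
    return f * B / d if d else None

def match_disparity(row1, row2, u1, half=2, search=16):
    """Minimal 1-D block matching: slide a template taken around u1 in
    the first image's row along the second image's row and keep the
    offset with the smallest sum of absolute differences (SAD)."""
    tpl = row1[u1 - half:u1 + half + 1]
    best_u2, best_sad = u1, float("inf")
    for u2 in range(max(half, u1 - search),
                    min(len(row2) - half, u1 + search + 1)):
        sad = np.abs(tpl - row2[u2 - half:u2 + half + 1]).sum()
        if sad < best_sad:
            best_sad, best_u2 = sad, u2
    return abs(u1 - best_u2)  # disparity in pixels
```

As the text notes, when the template has no luminance variation every candidate offset yields a similar SAD, which is why a reliability evaluation of the disparity results is needed later.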
Formula 3, Formula 5 and Formula 6 reveal that the position of the obstacle can be estimated by means of the pixel position and the disparity. In Table 3, "i" and "j" indicate the pixel positions to be measured, and angles in the vertical direction and those in the horizontal direction indicate the angles of elevation α referred to above and angles β measured rightward from a front reference line as viewed from the indoor unit, respectively. That is, each pixel is set to fall within a range of 5 degrees to 80 degrees in the vertical direction and within a range of -80 degrees to 80 degrees in the horizontal direction as viewed from the indoor unit, and the image pickup sensor units measure the disparity of each pixel.
That is, the air conditioner conducts the distance measurements (position detection of obstacles) by measuring the disparity at each pixel from a pixel [14,15] to a pixel [142, 105].
A range of detection of the obstacle detecting means at the start of operation of the air conditioner may be limited to more than 10 degrees in the angle of elevation. The reason for this is that there is a high possibility that someone is present at the start of operation of the air conditioner, and data measured can be effectively utilized by conducting the distance measurements with respect to only regions where it is highly possible that nobody is detected, i.e., regions where walls exist (because a human body is not an obstacle, data obtained from a region where a person is present are not used, as described later).
The distance measurements to an obstacle are explained hereinafter with reference to Fig. 25.
If a determination is made at step S41 that no person is present in a region (any one of the regions A-G shown in Fig. 13) corresponding to the present pixel, the program advances to step S42, while if a determination is made at step S41 that a person is present, the program advances to step S43. Because a human body is not an obstacle, preceding distance data are used with respect to a pixel corresponding to the region where a determination has been made that a person is present without conducting the distance measurements (distance data are not updated). The distance measurements are conducted with respect to only a pixel corresponding to the region where a determination has been made that no person is present, and newly measured distance data are used (distance data are updated).
That is, in determining the presence or absence of an obstacle in each obstacle position discriminating region, a determination is made whether or not results of determination by the obstacle detecting means in each obstacle position discriminating region should be updated depending on results of determination of the presence or absence of a person in a human position discriminating region corresponding to each obstacle position discriminating region, thus resulting in an efficient determination of the presence or absence of the obstacle. More specifically, in an obstacle position discriminating region belonging to a human position discriminating region where it has been determined by the human body detecting means that no person is present, preceding results of determination by the obstacle detecting means are updated by current results of determination, while in an obstacle position discriminating region belonging to a human position discriminating region where it has been determined by the human body detecting means that a person is present, the preceding results of determination by the obstacle detecting means are not updated by the current results of determination.
At step S42, the disparity of each pixel is calculated by making use of the aforementioned block matching method, and the program advances to step S44.
At step S44, data are obtained eight times at the same pixel, and a determination is made whether or not distance measurements based on the data obtained have been completed.
If it is determined that the distance measurements have not been completed yet, the program returns to step S41. To the contrary, if it is determined at step S44 that the distance measurements have been completed, the program advances to step S45.
At step S45, the reliability of the distance measurements is evaluated to enhance the accuracy of the distance estimation. That is, if a determination is made that the distance measurements are reliable, distance number determining processing is executed at step S46. On the other hand, if a determination is made that the distance measurements are not reliable, another processing in which a distance number of an adjacent pixel is used as distance data of the pixel in process is executed at step S47.
Such processing is executed by the image pickup sensor units 24, 26 and, hence, the image pickup sensor units 24, 26 act as the obstacle position detecting means.
The distance number determining processing at step S46 is explained hereinafter, but the term "distance number" is first explained.
The "distance number" means an approximate distance from the image pickup sensor unit to a position P in a space to be air conditioned. As shown in Fig. 26, when the image pickup sensor unit has been placed two meters above a floor face, and the distance from the image pickup sensor unit to the position P is referred to as "distance X [m] corresponding to the distance number", the position P is represented by the following formulas. [Formula 7]
As indicated by Formula 6, the distance X corresponding to the distance number depends on the disparity between the image pickup sensor units 24, 26. The distance number is represented by an integer between two and twelve, and the distance corresponding to the distance number is set as shown in Table 4.
Table 4 shows the position P corresponding to an angle of elevation (α) that is determined by a v-coordinate value of each pixel based on the distance number and Formula 2. An area in black indicates positions under the floor where "h" takes a negative value (h<0).
Also, Table 4 is applied to an air conditioner having a capacity of 2.2 kW, and supposing that this air conditioner is solely installed in a six-mat room (width across corners = 4.50 m), a distance number = 9 is set as a limiting value (maximum value D). In the six-mat room, a position corresponding to a distance number 10 is positioned on the other side beyond a wall (outside the room). Although such a distance number can be applied to a room having a width across corners > 4.50 m, it has no meaning in the six-mat room and is indicated in black in Table 4.
Table 5 is applied to an air conditioner having a capacity of 6.3 kW, and supposing that this air conditioner is solely installed in a twenty-mat room (width across corners = 8.49 m), a distance number = 12 is set as a limiting value (maximum value D).
Table 6 indicates limiting distance numbers set depending on the capacity of the air conditioner and the angle of elevation of each pixel.
The reliability evaluation processing at step S45 and the distance number determining processing at step S46 are explained hereinafter.
As described above, the distance number has a limiting value depending on the capacity of the air conditioner and the angle of elevation of each pixel, and even if a distance number estimation result is N>maximum value D, unless all the measurement results are "distance number=N", the distance number is set to D.
Eight distance numbers are determined at each pixel. The two largest and the two smallest distance numbers are all removed, and an average of the four remaining distance numbers is determined as the distance number. In applications where the stereo method employing the block matching method is used, when an obstacle without any luminance change is detected, the disparity calculation is unstable and widely varying disparity results (distance numbers) are detected at every measurement. In view of this, the four remaining distance numbers are compared at step S45, and if a variation thereof is greater than or equal to a threshold value, a determination is made at step S47 that the distance numbers are not reliable, and the distance estimation at the pixel in process is abandoned. In this case, the distance number that has been estimated at an adjacent pixel is used. The average is rounded up to the next integer after the decimal point. The positions corresponding to the distance numbers so determined are shown in Table 4 or Table 5.
Although in this embodiment the distance number has been described as being obtained by determining eight distance numbers at each pixel, by removing two distance numbers from largest and two distance numbers from smallest, and by averaging the four remaining distance numbers, the number of distance numbers to be determined at each pixel is not limited to eight, and that to be averaged is not limited to four.
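The trimmed-mean procedure of steps S45 and S46 can be sketched as follows; the reliability threshold is an assumed value, since the text states only that the variation of the remaining values is compared with a threshold.

```python
import math

def distance_number(samples, n_trim=2):
    """Trimmed-mean distance number: drop the n_trim largest and
    n_trim smallest of the eight measurements, average the remaining
    four, and round the average up to an integer (as in the text).
    Returns the distance number and the kept values."""
    s = sorted(samples)
    kept = s[n_trim:len(s) - n_trim]
    return math.ceil(sum(kept) / len(kept)), kept

def is_reliable(kept, threshold=2):
    """Reliability check of step S45: reject the estimate when the
    variation of the remaining values reaches the threshold
    (the threshold value 2 is an assumption, not given in the text)."""
    return (max(kept) - min(kept)) < threshold
```

Trimming the extremes before averaging discards the widely varying disparity results that the block matching method produces on surfaces without luminance change.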
Although the preceding distance data are used at step S43 in the flowchart of Fig. 25, if a determination by the obstacle detecting means in each obstacle position discriminating region is a first one, a default value is used because no preceding data exist immediately after the air conditioner has been installed. The limiting value (maximum value D) described above is used as the default value.
Fig. 27 is a schematic elevation view (vertical sectional view passing through the image pickup sensor unit) of a living space, depicting measurement results when a floor face is located two meters below the image pickup sensor unit and there are obstacles such as tables at a level of 0.7-1.1m above the floor face. In this figure, meshing, upward-sloping hatching, and downward-sloping hatching indicate short-distance regions, intermediate-distance regions, and long-distance regions (these distances are described later) where the presence or absence of an obstacle is determined, respectively. (Learning control for obstacle detection)
As described above, the stereo method is likely to fail in obstacle detection depending on objects, for example, in applications where an obstacle without any luminance change is detected.
By way of example, considering a table such as, for example, a dining table having a planar top surface and no luminance change, if there is nothing on the table, the stereo method is likely to fail in disparity calculation and, hence, a determination of the position of the table is difficult. On the other hand, if there is living ware (tableware, a remote controller, a book, a newspaper, a box of tissues, or the like) on the table, a luminance difference (texture) occurs on the top surface of the table, thereby making the stereo method easily determine the position of the table.
In view of this, in this learning control, the obstacle detection is conducted not only with respect to an obstacle itself but also by making use of an interaction with objects adjacent to and around the obstacle. However, it is likely that the position of furniture installed in a room (actually, of living ware placed on the furniture rather than of the furniture itself) changes from day to day, and that the angle of the obstacle or the interaction with the objects adjacent to and around it changes accordingly. Detection errors can therefore be minimized by repeating the obstacle detection. As shown in a flowchart of Fig. 28, the learning control is intended to learn the position of the obstacle based on every result of scanning and to determine the position of the obstacle from the results of learning for a subsequent air current control explained later.
Fig. 28 is a flowchart indicating a determination of the presence or absence of an obstacle, which is conducted with respect to all the positions (obstacle position discriminating regions) as shown in Fig. 23. This flowchart is explained hereinafter, taking the case of position A1.
When obstacle detection by the image pickup sensor units 24, 26 is initiated, the obstacle detection (stereo method) is first conducted by the image pickup sensor units 24, 26 at a first pixel of position A1 at step S71, followed by step S72 at which the determination of the presence or absence of an obstacle referred to above is conducted. If a determination is made at step S72 that an obstacle is present, "1" is added to a first memory at step S73, while if a determination is made that no obstacle is present, "0" is added to the first memory at step S74.
At step S75, a determination is made whether or not the detection at all pixels of position A1 has been completed, and if the detection at all the pixels is not completed, the detection by the stereo method is conducted at the next pixel at step S76, and the program returns to step S72.
On the other hand, if the detection at all the pixels has been completed, a numerical value (a total of pixels where a determination has been made that an obstacle is present) recorded in the first memory is divided by the number of pixels of position A1 (division process is executed) at step S77. At the next step S78, a quotient obtained by the division process is compared with a predetermined value. If the quotient is greater than or equal to the predetermined value, a determination is temporarily made at step S79 that an obstacle is present in position A1, followed by step S80 at which "5" is added to a second memory. In contrast, if the quotient is less than the predetermined value, a determination is temporarily made at step S81 that no obstacle is present in position A1, followed by step S82 at which "-1" is added to the second memory ("1" is subtracted).
Because the obstacle detection by the image pickup sensor units 24, 26 becomes difficult as the distance from the image pickup sensor units 24, 26 to the obstacle increases, the threshold value used here is set, for example, as follows depending on the distance from the indoor unit.
Short distance: 0.4
Intermediate distance: 0.3
Long distance: 0.2
Also, because the obstacle detection is repeated every time the air conditioner is brought into operation, either "5" or "-1" is repeatedly added to the second memory. Accordingly, "10" and "0" are set as a maximum value and a minimum value of the numerical value recorded in the second memory, respectively.
At step S83, a determination is made whether or not the numerical value (total after addition) recorded in the second memory is greater than or equal to a determination reference value (for example, 5). If the former is greater than or equal to the latter, a final determination is made at step S84 that an obstacle is present in position A1, while if the former is less than the latter, another final determination is made at step S85 that no obstacle is present in position A1.
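Steps S71-S85 described above amount to a simple per-position accumulator. The following is a minimal sketch under the values stated in the text (thresholds by distance, the "+5"/"-1" increments, the 0-10 clamp on the second memory, and the determination reference value 5); the function and variable names are illustrative:

```python
# Values taken from the text above.
THRESHOLDS = {"short": 0.4, "intermediate": 0.3, "long": 0.2}
MAX_TOTAL, MIN_TOTAL, REFERENCE = 10, 0, 5

def detect_obstacle(pixel_hits, n_pixels, distance, second_memory):
    """One learning-control pass for a single position (steps S71-S85).

    pixel_hits -- pixels judged 'obstacle present' (the first memory)
    n_pixels   -- total number of pixels in the position
    Returns (final determination, updated second memory)."""
    quotient = pixel_hits / n_pixels                        # step S77
    if quotient >= THRESHOLDS[distance]:                    # step S78
        second_memory += 5                                  # steps S79-S80
    else:
        second_memory -= 1                                  # steps S81-S82
    second_memory = max(MIN_TOTAL, min(MAX_TOTAL, second_memory))
    return second_memory >= REFERENCE, second_memory        # steps S83-S85
```

For example, 50 obstacle pixels out of 100 at short distance gives a quotient of 0.5, which is not less than 0.4, so "5" is added to the second memory and, the total being 5, an obstacle is finally determined to be present.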
It is to be noted that upon completion of the obstacle detection in one position, the first memory can be cleared and reused as a memory for obstacle detection in the next position. In contrast, because the added values in one position are accumulated in the second memory each time the air conditioner is brought into operation (provided that maximum value ≥ total value ≥ minimum value), the same number of second memories as the positions are prepared.
In the learning control for obstacle detection referred to above, "5" is set as the determination reference value, and if a final determination that an obstacle is present is made in a first obstacle detection in a certain position, "5" is recorded in the second memory. Under this situation, if a final determination that no obstacle is present is made in the next obstacle detection, a value obtained by the addition of "-1" to "5" becomes less than the determination reference value, and this means that no obstacle exists in such a position.
However, if a final determination that an obstacle is present is also made in the next obstacle detection, a value "10" obtained by the addition of "5" to "5" is recorded in the second memory. Because this total value is greater than the determination reference value, it means that an obstacle exists in the position. Even if a determination that no obstacle is present is made in five obstacle detections after the second one, a value obtained by the addition of "-1 × 5" to "10" is "5" and means that an obstacle still exists in the position.
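The arithmetic in the two paragraphs above can be traced directly (a minimal sketch; the clamping to the 0-10 range follows the maximum and minimum values given earlier):

```python
def step(total, obstacle_detected):
    # add 5 for 'present', subtract 1 for 'absent', clamp to [0, 10]
    total += 5 if obstacle_detected else -1
    return max(0, min(10, total))

total = step(0, True)        # first detection: obstacle -> 5
assert total == 5            # 5 >= reference value 5: obstacle present
total = step(total, True)    # second detection: obstacle -> 10
for _ in range(5):           # five subsequent 'no obstacle' detections
    total = step(total, False)
assert total == 5            # still >= 5: an obstacle is still deemed present
```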
That is, the learning control for obstacle detection is characterized in that in finally determining the presence or absence of an obstacle based on a cumulative total value obtained by a plurality of additions (or a plurality of additions and subtractions), a value to be added when it has been determined that an obstacle is present is set far greater than a value to be subtracted when it has been determined that no obstacle is present. Such setting is prone to result in a determination that an obstacle is present.
Further, by setting the maximum value and the minimum value for the numerical value recorded in the second memory, even if the positions of obstacles change largely due to, for example, moving or rearrangement of furniture in a room, the air conditioner can follow the change as quickly as possible. If no maximum value were set, a determination that an obstacle is present made every time would cause the total value recorded in the second memory to increase progressively. Accordingly, even if an obstacle disappeared from a region where the determination that an obstacle is present had been made every time, owing to a change of the positions of obstacles caused by, for example, moving, it would take much time before the numerical value recorded in the second memory became less than the determination reference value. If no minimum value were set, the converse phenomenon would take place.
Fig. 29 is a modification of the learning control for obstacle detection as indicated in the flowchart of Fig. 28. Because only the processing at steps S100, S102 and S103 differs from that in the flowchart of Fig. 28, these steps are explained.
In this learning control, if a determination is temporarily made at step S99 that an obstacle is present in position A1, "1" is added to the second memory at step S100. On the other hand, if a determination is temporarily made at step S101 that no obstacle is present in position A1, "0" is added to the second memory at step S102.
Next, at step S103, the total value recorded in the second memory based on the past ten obstacle detections including the present obstacle detection is compared with a determination reference value (for example, 2). If the former is greater than or equal to the latter, a final determination that an obstacle is present in position A1 is made at step S104, while if the former is less than the latter, a final determination that no obstacle is present in position A1 is made at step S105.
That is, in the above-described learning control for obstacle detection, even if no obstacle is detected eight times in the past ten obstacle detections in a certain position, when an obstacle is detected twice, a final determination that an obstacle is present is made.
Accordingly, this learning control for obstacle detection is characterized in that the number (in this example, 2) of obstacle detections in which the final determination that an obstacle is present is made is set far smaller than the number of past obstacle detections to be referenced. Such setting is prone to result in a determination that an obstacle is present.
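The sliding-window variant of Fig. 29 (steps S100-S105) can be sketched as follows (illustrative only; using a bounded deque for the window of past results is an implementation choice, not taken from the patent):

```python
from collections import deque

WINDOW, REFERENCE = 10, 2   # past ten detections, reference value 2

def sliding_window_decision(history, obstacle_detected):
    """Record '1' (obstacle) or '0' (none) for the present temporary
    determination and compare the sum over the past WINDOW detections
    with REFERENCE; history is a deque(maxlen=WINDOW)."""
    history.append(1 if obstacle_detected else 0)
    return sum(history) >= REFERENCE
```

With this setting, even eight "no obstacle" results in the last ten detections do not overturn two "obstacle" results, matching the bias toward determining that an obstacle is present.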
The indoor unit body or remote controller may be provided with a reset button operable to reset data recorded in the second memory. In this case, the data are reset by depressing the reset button.
It is basically unlikely that the positions of obstacles or walls, which have a great influence on the air current control, change. However, if the installation position of the indoor unit changes due to, for example, moving, or if the positions of fittings change due to, for example, rearrangement of furniture in a room, the use of the data obtained up to that point is not suited for the air current control. The reason for this is that although the learning control can eventually optimize air conditioning for a room, it takes much time to realize an optimized control (this is conspicuous particularly when an obstacle has been removed from a region). Accordingly, if the positional relationship between the indoor unit and obstacles or walls changes, resetting the data obtained up to that point with the use of the reset button can prevent unsuitable air conditioning based on past doubtful data, and restarting the learning control from the beginning can realize an air conditioning control suited for the new situation within a short period of time.
(Obstacle avoiding control)
During heating, the vertical wind direction changing blades 12 and the horizontal wind direction changing blades 14, both employed as the wind direction changing means, are controlled in the following manner based on the determination of the presence or absence of an obstacle referred to above.
In the following discussion, the terms "block", "field", "short distance", "intermediate distance", and "long distance" are used, and these terms are first explained.
Each of the regions A-G shown in Fig. 13 belongs to the following block.
Block N: region A
Block R: region B, E
Block C: region C, F
Block L: region D, G
Each of the regions A-G belongs to the following field.
Field 1: region A
Field 2: region B, D
Field 3: region C
Field 4: region E, G
Field 5: region F
The distance from the indoor unit is defined as follows.
Short distance: region A
Intermediate distance: region B, C, D
Long distance: region E, F, G
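The block, field, and distance memberships listed above can be captured in simple lookup tables (an illustrative sketch; the table and function names are assumptions):

```python
# Region membership as listed above (regions A-G of Fig. 13).
BLOCKS = {"N": ["A"], "R": ["B", "E"], "C": ["C", "F"], "L": ["D", "G"]}
FIELDS = {1: ["A"], 2: ["B", "D"], 3: ["C"], 4: ["E", "G"], 5: ["F"]}
DISTANCES = {"short": ["A"],
             "intermediate": ["B", "C", "D"],
             "long": ["E", "F", "G"]}

def block_of(region):
    """Return the block to which a region belongs."""
    return next(b for b, regions in BLOCKS.items() if region in regions)
```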
Table 7 indicates target angles of five right-side blades and five left-side blades constituting the horizontal wind direction changing blades 14 at each position. Signs attached to the figures (angles) are defined such that a plus sign (+, no sign in Table 7) indicates a direction in which the right- or left-side blades are directed inwards, and a minus sign (−) indicates a direction in which the right- or left-side blades are directed outwards, as shown in Fig. 28.
"Heating region B" in Table 7 denotes the heating region where the obstacle avoiding control is conducted, and "Normal automatic wind direction control" denotes a wind direction control in which no obstacle avoiding control is conducted. A determination as to whether or not the obstacle avoiding control is conducted is based on the temperature of the indoor heat exchanger 6: a wind direction control not to cause a wind to impinge on a resident or residents is conducted when the temperature is low, a wind direction control at a maximum capacity position when it is too high, and the wind direction control for the heating region B when it is moderate. "Low temperatures", "too high temperatures", "wind direction control not to cause a wind to impinge on a resident or residents", and "wind direction control at a maximum capacity position" as used here have the following meanings.
Low temperatures: temperatures (for example, 32°C) below an optimum temperature of the heat exchanger 6, which is set equal to a cutaneous temperature (33-34°C)
Too high temperatures: temperatures of, for example, above 56°C
Wind direction control not to cause a wind to impinge on a resident or residents: wind direction control in which the angle of the vertical wind direction changing blades 12 is controlled to cause a wind to flow along a ceiling so that no wind may be conveyed to a space around the residents
Wind direction control at a maximum capacity position: wind direction control in which the resistance (loss) generated when the vertical wind direction changing blades 12 or the horizontal wind direction changing blades 14 bend an air current is made to approach zero as closely as possible (in the case of the horizontal wind direction changing blades 14, this position is a position where they are directed straight forward, and in the case of the vertical wind direction changing blades 12, this position is a position where they are directed 35 degrees downward from a horizontal line)
Table 8 indicates target angles of the vertical wind direction changing blades 12 in each field when the obstacle avoiding control is conducted. In Table 8, an angle (y1) of the upper blade and an angle (y2) of the lower blade are angles (angles of elevation) measured upward from a vertical line.
The obstacle avoiding control depending on a position of an obstacle is specifically explained hereinafter, and the terms "swing motion", "position swing motion with pause", and "block swing motion with pause" used in the obstacle avoiding control are first explained.
The "swing motion" is a motion of the horizontal wind direction changing blades 14 in which they swing right and left within a predetermined range of angles centering on a target position without any pause at right and left ends of the motion.
In the "position swing motion with pause", the target angles set at each position (angles indicated in Table 7) are modified using Table 9, and the modified angles are set as those at the right and left ends of the motion. In this motion, a time period of pause (time period for fixing the horizontal wind direction changing blades 14) is provided at each end of the motion. By way of example, when the time period of pause elapses at the left end, the horizontal wind direction changing blades 14 swing toward the right end and maintain the wind direction at the right end until the time period of pause elapses, and after a lapse of the time period of pause, the horizontal wind direction changing blades 14 swing toward the left end and repeat such motion. The time period of pause is set to, for example, 60 seconds.
That is, if an obstacle exists in a certain position and if the target angles set at such position are used without modification, a hot wind impinges on the obstacle, but the modification indicated in Table 9 allows the hot wind to reach a region where a person is present through the side of the obstacle.
In the "block swing motion with pause", the angles of the horizontal wind direction changing blades 14 corresponding to right and left ends of each block are determined based on, for example, Table 10. In this motion, a time period of pause is provided at respective ends of each block. By way of example, when the time period of pause elapses at the left end, the horizontal wind direction changing blades 14 swing toward the right end and maintain the wind direction at the right end until the time period of pause elapses, and after a lapse of the time period of pause, the horizontal wind direction changing blades 14 swing toward the left end and repeat such motion. The time period of pause is set to, for example, 60 seconds, as in the position swing motion with pause.
Because the right and left ends of each block coincide with those of a human position discriminating region corresponding to each block, the block swing motion with pause can be referred to as a "swing motion with pause in human position discriminating region".
It is to be noted that the position swing motion with pause and the block swing motion with pause are separately used depending on a size of the obstacle. If an obstacle in front of a person is small, the position swing motion with pause is performed centering on a position where the obstacle is present to thereby convey air-conditioned air while avoiding the obstacle. On the other hand, if an obstacle in front of a person is large and extends, for example, over a whole area in front of a region where the person is present, the block swing motion with pause is performed to convey air-conditioned air over a wide range.
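The swing motions with pause described above can be sketched as a simple angle schedule (illustrative only; the function signature and the angle values in the test are assumptions, with the 60-second pause taken from the text):

```python
def swing_with_pause(left_end, right_end, pause_s=60, cycles=2):
    """Generate a blade-angle schedule for a swing motion with pause:
    hold the blades at one end for pause_s seconds, then move to the
    other end, and repeat. Returns a list of (angle, hold time) pairs.
    A real controller would drive the blade motors from this schedule."""
    schedule, angle = [], left_end
    for _ in range(cycles * 2):
        schedule.append((angle, pause_s))   # fix blades at this end
        angle = right_end if angle == left_end else left_end
    return schedule
```

For the position swing motion with pause, the two ends would be the modified target angles of Table 9; for the block swing motion with pause, the ends of the block per Table 10.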
In this embodiment, the swing motion, the position swing motion with pause, and the block swing motion with pause are collectively referred to as a swing motion of the horizontal wind direction changing blades 14.
Specific examples of control of the vertical wind direction changing blades 12 and of the horizontal wind direction changing blades 14 are explained below. If it has been determined by the human body detecting means that a person is present in only one region, and if it has been determined by the obstacle detecting means that an obstacle is present in an obstacle position discriminating region positioned in front of the human position discriminating region where the person has been detected, an air current control is conducted to control the vertical wind direction changing blades 12 such that air-conditioned air may flow above the obstacle to avoid it. Also, if it has been determined by the obstacle detecting means that an obstacle is present in an obstacle position discriminating region belonging to a human position discriminating region where a person has been detected by the human body detecting means, one of a first air current control and a second air current control is selected. In the first air current control, the horizontal wind direction changing blades 14 are caused to swing within at least one obstacle position discriminating region belonging to the human position discriminating region where the person has been detected, and no time period for fixing the horizontal wind direction changing blades 14 is provided at the respective ends of the swing motion. In the second air current control, the horizontal wind direction changing blades 14 are caused to swing within at least one obstacle position discriminating region belonging to the human position discriminating region where the person has been detected or to another human position discriminating region adjacent to it, and a time period for fixing the horizontal wind direction changing blades 14 is provided at the respective ends of the swing motion.
Although in a discussion below the control of the vertical wind direction changing blades 12 and that of the horizontal wind direction changing blades 14 are separated, the control of the vertical wind direction changing blades 12 and that of the horizontal wind direction changing blades 14 are conducted in a combined fashion depending on the position of a person and that of an obstacle.
A. Control of vertical wind direction changing blades
(1) A case where a person is present in any one of the regions B-G, and an obstacle is present in a position A1-A3 in front of the region where the person is present
The set angles of the vertical wind direction changing blades 12 as indicated in the normal field wind direction control table (Table 8) are modified as indicated in Table 11 so that an air current control may be conducted in which the vertical wind direction changing blades 12 are set upward.
(2) A case where a person is present in any one of the regions B-G, and no obstacle is present in the region A in front of the region where the person is present (other than the case (1) above)
The normal automatic wind direction control is conducted.
B. Control of horizontal wind direction changing blades
B1. A case where a person is present in the region A (short distance)
(1) A case where the number of the positions where no obstacle is present is one in the region A
The first air current control is conducted in which the blades are caused to swing right and left centering on a target angle set at the position where no obstacle is present. By way of example, if an obstacle is present in the positions A1 and A3, and no obstacle is present in the position A2, the blades are caused to swing right and left centering on a target angle set at the position A2, thereby basically air conditioning the position A2 where no obstacle is present. However, because a person may be present in the position A1 or A3, the swing motion allows an air current to be conveyed to the positions A1 and A3 to some extent.
More specifically, because the target angles and modification angles (swing range of angles during the swing motion) at the position A2 are determined based on Table 7 and Table 9, both the right-side blades and the left-side blades continue swinging in a range of angles of ± 10 degrees centering on an angle of 10 degrees without pause. However, timing for a turn of the right-side blades and that for a turn of the left-side blades are set to be identical and, hence, the swing motion of the right-side blades and that of the left-side blades are synchronized.
(2) A case where the number of the positions where no obstacle is present is two in the region A, and the two positions adjoin each other (A1 and A2, or A2 and A3)
The first air current control is conducted in which the blades are caused to swing right and left with the target angles at the two positions where no obstacle is present employed as respective ends, thereby basically air conditioning the positions where no obstacle is present.
(3) A case where the number of the positions where no obstacle is present is two in the region A, and the two positions are spaced away from each other (A1 and A3)
The block swing motion with pause is performed with the target angles at the two positions where no obstacle is present employed as respective ends, thereby conducting the second air current control.
(4) A case where an obstacle is present in all the positions in the region A
Because the target position is not clear, the block swing motion with pause is performed with respect to the block N to conduct the second air current control. Rather than aiming at the entire region, the block swing motion with pause can create a wind having greater directivity that is likely to reach far while avoiding the obstacles. That is, even if the region A is dotted with obstacles, the block swing motion with pause can convey air-conditioned air through spaces between the obstacles.
(5) A case where no obstacle is present in each position in the region A
The normal automatic wind direction control is conducted with respect to the region A.
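The case analysis B1(1)-(5) above can be summarized as a small dispatch function (an illustrative sketch; the returned labels and the function name are assumptions, not from the patent):

```python
def control_for_region_A(obstacle):
    """Select the horizontal-blade control for a person at short
    distance; obstacle maps 'A1'..'A3' -> bool (obstacle present)."""
    free = [p for p in ("A1", "A2", "A3") if not obstacle[p]]
    if len(free) == 3:
        return "normal automatic wind direction control"        # case (5)
    if len(free) == 0:
        return "second control: block swing with pause over block N"  # (4)
    if len(free) == 1:
        return f"first control: swing centering on {free[0]}"   # case (1)
    if free == ["A1", "A3"]:                                    # case (3)
        return "second control: block swing with pause between A1 and A3"
    return f"first control: swing between {free[0]} and {free[1]}"    # (2)
```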
B2. A case where a person is present in any one of the regions B, C and D (intermediate distance)
(1) A case where an obstacle is present in only one of the two positions belonging to the region where the person is present
The first air current control is conducted in which the blades are caused to swing right and left centering on a target angle set at the position where no obstacle is present. By way of example, if a person is present in the region D, and an obstacle is present in only the position D2, the blades are caused to swing right and left centering on a target angle set at the position D1.
(2) A case where an obstacle is present in each of the two positions belonging to the region where a person is present
The block swing motion with pause is performed with respect to a block containing the region where the person is present to conduct the second air current control. By way of example, if the person is present in the region D, and an obstacle is present in each of the positions D1 and D2, the block swing motion with pause is performed with respect to the block L.
(3) A case where no obstacle is present in a region where a person is present
The normal automatic wind direction control is conducted with respect to the region where the person is present.
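Likewise, cases B2(1)-(3) reduce to a small dispatch on the two positions of the person's region (illustrative sketch; names are assumptions):

```python
def control_for_intermediate(region, obstacle, block_of_region):
    """Select the horizontal-blade control for a person at intermediate
    distance (region 'B', 'C' or 'D'); obstacle maps position -> bool."""
    positions = [region + "1", region + "2"]
    free = [p for p in positions if not obstacle[p]]
    if len(free) == 2:
        return "normal automatic wind direction control"         # case (3)
    if len(free) == 1:
        return f"first control: swing centering on {free[0]}"    # case (1)
    # both positions blocked: block swing with pause over the block (2)
    return ("second control: block swing with pause over block "
            + block_of_region[region])
```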
B3. A case where a person is present in any one of the regions E, F and G (long distance)
(1) A case where an obstacle is present in only one of the two positions belonging to an intermediate-distance region in front of the region where the person is present (for example, the person is present in the region E, the obstacle is present in the position B2, and no obstacle is present in the position B1)
(1.1) A case where no obstacle is present on respective sides of the position where the obstacle is present (for example, no obstacle is present in each of the positions B1 and C1)
(1.1.1) A case where no obstacle is present behind the position where the obstacle is present (for example, no obstacle is present in the position E2)
The position swing motion with pause is performed centering on the position where the obstacle is present, thereby conducting the second air current control. By way of example, if a person is present in the region E, an obstacle is present in the position B2, and no obstacle is present on respective sides of and behind the position B2, an air current can be conveyed to the region E by causing the air current to pass by the obstacle in the position B2 to avoid the obstacle.
(1.1.2) A case where an obstacle is present behind the position where the obstacle is present (for example, the obstacle is present in the position E2)
The first air current control is conducted in which the blades are caused to swing centering on a target angle set at a position where no obstacle is present and which belongs to an intermediate-distance region. By way of example, if a person is present in the region E, an obstacle is present in the position B2, no obstacle is present on respective sides thereof, but an obstacle is present behind the position B2, it is advantageous that an air current be conveyed through the position B1 where no obstacle is present.
(1.2) A case where an obstacle is present on one side of the position where the obstacle is present and no obstacle is present on the other side
The first air current control is conducted in which the blades are caused to swing centering on a target angle set at a position where no obstacle is present. By way of example, if a person is present in the region F, an obstacle is present in the position C2, another obstacle is present in the position D1 that is one of the two positions on respective sides of the position C2, and no obstacle is present in the position C1, an air current can be conveyed toward the region F through the position C1 where no obstacle is present while avoiding the obstacle in the position C2.
(2) A case where an obstacle is present in each of the two positions belonging to the intermediate-distance region in front of the region where the person is present
The block swing motion with pause is performed with respect to a block containing the region where the person is present to conduct the second air current control. By way of example, if the person is present in the region F, and an obstacle is present in each of the positions C1 and C2, the block swing motion with pause is performed with respect to the block C. In this case, because the obstacle is present in front of the person and accordingly unavoidable, the block swing motion with pause is performed irrespective of the presence or absence of an obstacle in a block adjacent to the block C.
(3) A case where no obstacle is present in each of the two positions belonging to the intermediate-distance region in front of the region where the person is present (for example, the person is present in the region F and no obstacle is present in each of the positions C1 and C2)
(3.1) A case where an obstacle is present in only one of two positions belonging to the region where the person is present
The first air current control is conducted in which the blades are caused to swing centering on a target angle set at the other of the two positions where no obstacle is present. By way of example, if a person is present in the region F, no obstacle is present in each of the positions C1, C2 and F1, and an obstacle is present in the position F2, a space in front of the region F where the person is present is open. Accordingly, the position F1 where no obstacle is present and that is a long-distance position is mainly air conditioned considering the obstacle in the long-distance position.
(3.2) A case where an obstacle is present in each of the two positions belonging to the region where the person is present
The block swing motion with pause is performed with respect to a block containing the region where the person is present to conduct the second air current control. By way of example, if the person is present in the region G, no obstacle is present in each of the positions D1 and D2, and an obstacle is present in each of the positions G1 and G2, the block swing motion with pause is performed with respect to the block L. The reason for this is that although the region G where the person is present is open on a front side thereof, the obstacles are present in this entire region and, hence, the target position is not clear.
(3.3) A case where no obstacle is present in each of the two positions belonging to the region where the person is present
The normal automatic wind direction control is conducted with respect to the region where the person is present.
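The top-level branching of cases B3(1)-(3) can be condensed as follows (illustrative only; the sub-cases of (1) depend on the neighbouring positions and the position behind the blocked one and are not expanded here):

```python
def control_for_long(front_free, own_free, block):
    """Condensed dispatch for a person at long distance.
    front_free -- obstacle-free positions (0-2) in the intermediate
                  region in front of the person
    own_free   -- obstacle-free positions (0-2) in the person's region
    block      -- block containing the person's region"""
    if front_free == 0:   # case (2): front fully blocked and unavoidable
        return f"second control: block swing with pause over block {block}"
    if front_free == 2:   # case (3): front open
        if own_free == 2:
            return "normal automatic wind direction control"        # (3.3)
        if own_free == 1:
            return "first control: swing centering on the free position"  # (3.1)
        return f"second control: block swing with pause over block {block}"  # (3.2)
    # case (1): one front position blocked; the choice depends on the
    # sub-cases (1.1.1), (1.1.2) and (1.2) described in the text
    return "case (1): resolve via sub-cases (1.1.1), (1.1.2), (1.2)"
```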
(Obstacle avoiding control based on only determination of presence or absence of obstacle)
This obstacle avoiding control is basically intended to convey air-conditioned air toward a region where a determination has been made by the obstacle detecting device that no obstacle is present while avoiding a region where a determination has been made that an obstacle is present. Specific examples of the obstacle avoiding control are explained hereinafter.
A. Control of vertical wind direction changing blades
(1) A case where an obstacle is present in the region A (short distance)
During heating, the vertical wind direction changing blades 12 are directed sharply downward so that the warm air, which is light and tends to rise, may not float up. If an obstacle is present in the region A under this condition, it is conceivable that the warm air remains in front of the obstacle (on the indoor unit side) or does not reach the floor after impinging on the obstacle.
In view of this, if an obstacle is detected at a location immediately below or adjacent to the indoor unit, the set angles of the vertical wind direction changing blades 12 as indicated in the normal field wind direction control table (Table 8) are modified as indicated in Table 11 so that an air current control may be conducted in which the vertical wind direction changing blades 12 are set upward, thereby conducting air conditioning from above the obstacle. If the air current is raised too much as a whole to avoid the obstacle, the warm air directly impinges on a face of a resident and makes him or her uncomfortable. Accordingly, the warm air is raised by the lower blade 12b to avoid the obstacle, while the floating up thereof is prevented by the upper blade 12a.
B. Control of horizontal wind direction changing blades
(1) A case where an obstacle is present in any one of the regions B, C and D (intermediate distance)
A direction in which no obstacle is present is mainly air conditioned. By way of example, if an obstacle is detected in the region C (the center of the room), the block swing motion with pause is conducted alternately with respect to the two blocks each including the region B or the region D where no obstacle is present, thereby making it possible to mainly air condition the regions where no obstacle is present (that is, where a person is likely to be present).
On the other hand, if an obstacle is detected in the region B or D (a corner or side of the room), the block swing motion with pause is conducted with respect to the blocks including the regions C and D or the regions B and C, respectively. In this case, if the horizontal wind direction changing blades 14 are also caused to swing with respect to the region B or D once every several cycles (for example, once per five cycles) of the block swing motion with pause over the regions C and D or the regions B and C, not only can a region where a person is likely to be present be mainly air conditioned, but the whole room can also be effectively air conditioned.
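The block selection described above can be sketched as follows; the region letters follow the text, but the function name and the list-of-blocks return structure are our own illustration, not the patent's control tables.

```python
def choose_swing_blocks(obstacle_region):
    """Select the block(s) for the block swing motion with pause, given
    which intermediate-distance region (B, C, or D) holds an obstacle.
    Region C is the center of the room; B and D are the two sides.
    Each inner list is one block of regions to sweep over."""
    if obstacle_region == "C":
        # Obstacle in the center: alternately swing the two blocks
        # covering the obstacle-free side regions B and D.
        return [["B"], ["D"]]
    if obstacle_region == "B":
        # Obstacle on one side: swing over the remaining regions C and D
        # (region B still receives an occasional sweep per the text).
        return [["C", "D"]]
    if obstacle_region == "D":
        return [["B", "C"]]
    return [["B", "C", "D"]]  # no obstacle detected: cover the whole width
```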
Although an area to be air conditioned may be divided into a plurality of positions (obstacle position discriminating regions) where the presence or absence of an obstacle is determined, as shown in Fig. 23, irrespective of the capacity of the air conditioner, the number of divisions may be changed because the size of the room in which the indoor unit is installed differs depending on the capacity of the air conditioner. For example, in the case of an air conditioner having a capacity of 4.0 kW or more, the division shown in Fig. 23 is employed, while in the case of an air conditioner having a capacity of 3.6 kW or less, the area to be air conditioned may be divided into three short-distance regions and six intermediate-distance regions without providing any long-distance regions.
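The capacity-dependent division can be sketched as below. The (3, 6, 6) counts assumed for the Fig. 23 layout are our illustration; only the small-capacity case (three short, six intermediate, no long-distance regions) is stated explicitly in the text.

```python
def region_division(capacity_kw):
    """Return the (short, intermediate, long) obstacle-region counts
    for a given unit capacity in kW.  Units of 4.0 kW or more use the
    full Fig. 23 division (counts assumed here); units of 3.6 kW or
    less drop the long-distance regions entirely."""
    if capacity_kw >= 4.0:
        return (3, 6, 6)   # assumed counts for the Fig. 23 layout
    return (3, 6, 0)       # smaller room: no long-distance regions
```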
Further, as shown in Fig. 23, when the area to be air conditioned is divided into the short-, intermediate-, and long-distance regions at equal intervals from the indoor unit upon recognition of the room in a radial fashion, the area of each region increases as the distance from the indoor unit increases. However, the sizes of all the regions can be made substantially uniform by increasing the number of the discriminating regions with an increase of the distance from the indoor unit, thereby facilitating the air current control.
(Person-wall proximity control)
If a person and a wall are present in the same region, the person is always positioned in front of and adjacent to the wall. In this case, during heating, warm air is apt to remain in proximity to the wall and make a room temperature in proximity to the wall higher than that in other space. A person-wall proximity control is conducted to avoid such a phenomenon.
In this control, disparities are calculated at pixels different from the pixels [i, j] shown in Table 3, and the distances thereto are then detected, thereby first recognizing the positions of a front wall, a right-side wall, and a left-side wall.
That is, a disparity of a pixel positioned substantially horizontally forward is first calculated using the image pickup sensor units 24, 26 to measure the distance to the front wall, thereby obtaining the distance number thereof. Further, a disparity of a pixel positioned substantially horizontally leftward is calculated to measure the distance to the left-side wall, thereby obtaining the distance number thereof. The distance number of the right-side wall is similarly obtained.
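The disparity-to-distance step can be sketched with the standard stereo range equation; the parameter names are ours, and the patent does not state its focal length or baseline values.

```python
def stereo_distance(focal_px, baseline_m, disparity_px):
    """Standard parallel-stereo triangulation: a feature seen with a
    pixel disparity d by two cameras of focal length f (in pixels)
    separated by baseline B lies at distance Z = f * B / d.  This is
    the generic relation behind measuring the wall distances from the
    disparity of a substantially horizontal pixel."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

The distance obtained this way would then be quantized into the distance numbers of Table 12.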
A detailed discussion is further made with reference to Fig. 29, which is a plan view of a room in which the indoor unit has been installed, depicting a case where a front wall WC, a left-side wall WL, and a right-side wall WR exist forward and on the right and left sides of the indoor unit, respectively. Numerals on the left side of Fig. 29 indicate distance numbers of corresponding squares, and Table 12 indicates distances from the indoor unit to a close point and to a distant point corresponding to each distance number.
As described above, the term "obstacle" as employed throughout this application refers to, for example, a television set, an audio system, and furniture such as tables and sofas, and considering the average heights of these obstacles, they are not detected in a range of angles of elevation of more than 75 degrees. Because it can be assumed that what is detected in this range of angles is a wall, in this embodiment, the distances to objects existing forward, rightward, and leftward of the indoor unit in the range of angles of elevation of more than 75 degrees are detected, and it is determined that the detected objects and objects lying on extensions thereof are walls.
It can also be assumed that, in terms of the view angle in the horizontal direction, the left-side wall WL exists at positions of angles of -80 and -75 degrees, the front wall WC exists at positions of angles of -15 to 15 degrees, and the right-side wall WR exists at positions of angles of 75 and 80 degrees. Of the pixels indicated in Table 3, those present within the above view angles in the horizontal direction and in the range of angles of elevation of more than 75 degrees are as follows.
Left end: [14, 15], [18, 15], [14, 21], [18, 21], [14, 27], [18, 27]
Front: [66, 15]-[90, 15], [66, 21]-[90, 21], [66, 27]-[90, 27]
Right end: [138, 15], [142, 15], [138, 21], [142, 21], [138, 27], [142, 27]
In determining the distance numbers of the front wall WC, the left-side wall WL, and the right-side wall WR, wall data are extracted at each pixel as indicated in Table 13.
[Table 13]
Next, as indicated in Table 14, unnecessary wall data are removed by discarding the maximum value and the minimum value from the wall data, and the distance numbers of the front wall WC, the left-side wall WL, and the right-side wall WR are determined based on the wall data obtained in this way.
The maximum values (WL=6, WC=5, WR=3) in Table 14 can be employed as the distance numbers of the left-side wall WL, the front wall WC, and the right-side wall WR. Employing the maximum values results in air conditioning for a larger room, having a front wall and right- and left-side walls each farther away than those of the actual room; that is, a wider space is set as the object to be air conditioned. However, the maximum values are not always employed, and average values may be employed instead.
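The Table 14 trimming and selection can be sketched as follows; the function name and the simple sorted-list trimming are our own rendering of the described procedure.

```python
def wall_distance_number(wall_data, use_average=False):
    """Determine a wall's distance number from its per-pixel wall data:
    the single maximum and minimum values are discarded as unnecessary
    data (Table 14), then either the maximum of the remainder (treating
    the room as larger, the default in the text) or the rounded average
    is taken as the distance number."""
    data = sorted(wall_data)
    trimmed = data[1:-1] if len(data) > 2 else data
    if use_average:
        return round(sum(trimmed) / len(trimmed))
    return max(trimmed)
```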
After the distance numbers of the left-side wall WL, the front wall WC, and the right-side wall WR have been determined in the above-described manner, the obstacle detecting means determines whether a wall is present or absent in an obstacle position discriminating region belonging to a human position discriminating region where a person has been detected by the human body detecting means. If it is determined that a wall is present, it is conceivable that the person is present in front of the wall and, hence, a temperature lower than a temperature set by the remote controller is newly set during heating.
The person-wall proximity control is explained hereinafter more specifically, taking the case of heating.
A. A case where a person is present in a short-distance region or an intermediate-distance region
Because the short-distance region or the intermediate-distance region is close to the indoor unit and has a small area, the degree of increase of the room temperature becomes high. Accordingly, a temperature lower than the temperature set by the remote controller by a first predetermined temperature (for example, 2°C) is newly set.
B. A case where a person is present in a long-distance region
Because the long-distance region is distant from the indoor unit and has a large area, the degree of increase of the room temperature is lower than that in the short-distance region or the intermediate-distance region. Accordingly, a temperature lower than the temperature set by the remote controller by a second predetermined temperature (for example, 1°C) less than the first predetermined temperature is newly set.
Further, because the long-distance region has a large area, even if a determination has been made that a person and a wall are present in the same human position discriminating region, it may be that the person and the wall are apart from each other. Accordingly, the person-wall proximity control is conducted only in the case of combinations as indicated in Table 15 to perform a temperature shift depending on a positional relationship between a person and a wall.
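The temperature shift of cases A and B can be sketched as below. The 2 °C and 1 °C offsets are the text's own examples; the `combination_allowed` flag stands in for the Table 15 combinations, which are not reproduced here.

```python
def shifted_set_temperature(set_temp_c, person_region,
                            wall_in_same_region, combination_allowed=True):
    """Person-wall proximity control during heating: when a person and
    a wall share a human position discriminating region, lower the set
    temperature by 2 degrees C in a short- or intermediate-distance
    region and by 1 degree C in a long-distance region.  The shift is
    applied only for allowed person-wall combinations (Table 15)."""
    if not (wall_in_same_region and combination_allowed):
        return set_temp_c
    if person_region in ("short", "intermediate"):
        return set_temp_c - 2.0   # first predetermined temperature
    if person_region == "long":
        return set_temp_c - 1.0   # second predetermined temperature
    return set_temp_c
```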
Although in this embodiment the distance detecting means employs the stereo method, a method of utilizing a light emitting portion 28 and an image pickup sensor unit 24 can be employed in place of the stereo method. This method is explained hereinafter.
Fig. 30 depicts an indoor unit having a main body 2, the light emitting portion 28 mounted on the main body 2, and the image pickup sensor unit 24 mounted on the main body 2. The light emitting portion 28 includes a light source (not shown) and a scanning portion (not shown), and an LED or a laser is used for the light source. The scanning portion employs a galvanomirror and can arbitrarily change a direction of light emission.
Fig. 31 is a schematic view indicating the relationship between the image pickup sensor unit 24 and the light emitting portion 28. In general, the direction of light emission has two degrees of freedom and the imaging area lies on a two-dimensional plane, but it is assumed for the sake of brevity that the direction of light emission has a single degree of freedom and the imaging area is a horizontally extending straight line. The light emitting portion 28 emits light in a direction p with respect to the optical axis direction of the image pickup sensor unit 24. The image pickup sensor unit 24 performs the difference processing on a frame image taken immediately before the light emitting portion 28 emits light and a frame image taken while the light is being emitted, to obtain the u-coordinate u1 on the image of a point P that reflects the light emitted from the light emitting portion 28. If the distance from the image pickup sensor unit 24 to the point P is represented by X, the following relationships are established.
[Formula 9]
[Formula 10]
That is, distance information in the space to be air conditioned can be obtained by detecting the point P of reflection of the light while changing the direction p of light emission of the light emitting portion 28. In Table 16, "i" and "j" indicate addresses to be scanned by the light emitting portion 28, and the angles in the vertical direction and those in the horizontal direction indicate the angles of elevation referred to above and the angles measured rightward from a front reference line as viewed from the indoor unit, respectively. That is, each address is set to fall within a range of 5 to 80 degrees in the vertical direction and a range of -80 to 80 degrees in the horizontal direction as viewed from the indoor unit, and the light emitting portion 28 scans the living space by measuring each address in turn.
[Table 16]
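Since Formulas 9 and 10 are not reproduced here, the following is only a generic active-triangulation sketch under our own assumed geometry: the emitter sits a baseline B beside the camera, emits at angle p to the camera's optical axis, and the reflection appears at image coordinate u1; intersecting the camera ray x = z·u1/f with the emitter ray x = B + z·tan(p) gives the range along the optical axis.

```python
import math

def triangulate_distance(baseline_m, emit_angle_rad, u1_px, focal_px):
    """Active triangulation for the single-camera, single-emitter
    arrangement of Fig. 31 (assumed geometry, not the patent's exact
    formulas).  Solving x = z*u1/f (camera ray) against
    x = B + z*tan(p) (emitter ray) yields z = B / (u1/f - tan(p))."""
    denom = u1_px / focal_px - math.tan(emit_angle_rad)
    if abs(denom) < 1e-9:
        raise ValueError("rays are parallel; no intersection")
    return baseline_m / denom
```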
The distance measurement to an obstacle is explained hereinafter with reference to the flowchart of Fig. 32. Because the flowchart of Fig. 32 is quite similar to that of Fig. 25, only the different steps are explained.
At step S48, if a determination is made that no person is present in a region (any one of the regions A-G shown in Fig. 13) corresponding to an address [i, j] to which light is emitted from the light emitting portion 28, the program advances to step S49, but if a determination is made that a person is present, the program advances to step S43.
Because a human body is not an obstacle, preceding distance data are used at a pixel corresponding to the region where a determination has been made that a person is present without conducting the distance measurements (distance data are not updated). The distance measurements are conducted only at a pixel corresponding to the region where a determination has been made that no person is present, and newly measured distance data are used (distance data are updated).
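The selective update of steps S48-S49 can be sketched as a per-address rule; the dict-based representation and names are our illustration.

```python
def update_distance_map(distance_map, new_measurements, person_present):
    """Update per-address distance data: addresses whose region holds a
    person keep the preceding distance data (a human body is not an
    obstacle, so the data are not updated); only person-free addresses
    take the newly measured value.  Keys are (i, j) scan addresses."""
    updated = dict(distance_map)
    for addr, value in new_measurements.items():
        if not person_present.get(addr, False):
            updated[addr] = value   # no person: use the new measurement
    return updated
```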
At step S49, the distance to the obstacle is estimated by obtaining, from the image pickup sensor unit 24, the point of reflection generated by the aforementioned light emitting processing. As described above, it is only necessary to execute the distance number determining processing in which the distance number is used.
The human body detecting means may be used as the distance detecting means. In this case, the human body detecting means serves as a human body distance detecting means and also as an obstacle detecting means. This processing is explained hereinafter.
Fig. 33 depicts an indoor unit having a main body 2 and a single image pickup sensor unit 24 mounted on the main body 2. Fig. 34 is a flowchart indicating processing to be executed by the human body distance detecting means using the human body detecting means. In this figure, the same steps as those in Fig. 5 are designated by the same step numbers and detailed explanation thereof is omitted.
At step S201, in each of the regions divided by the human body detecting means, the human body distance detecting means detects, from among the pixels having a difference, the pixel located at the uppermost portion of the image to obtain its v-coordinate v1.
At step S202, the human body distance detecting means estimates the distance from the image pickup sensor unit to a human body using the v-coordinate v1 of the pixel located at the uppermost portion of the image.
Figs. 35A and 35B are schematic views indicating this processing. Fig. 35A depicts a scene in which two persons 121, 122, close to and remote from the camera respectively, are present, and Fig. 35B depicts a difference image of an image taken by the image pickup sensor unit in the scene of Fig. 35A. The two regions 123, 124 in which a difference has occurred correspond to the two persons 121, 122, respectively. It is assumed that the heights h1 of all persons in the space to be air conditioned are known and substantially the same. As described above, because the image pickup sensor unit 24 is located at a height of two meters, it takes an image while looking down at the persons from above. In this case, the closer a person is to the image pickup sensor unit 24, the lower in the image the person appears, as shown in Fig. 35B. That is, the v-coordinate v1 of the uppermost portion of the image of the person and the distance from the image pickup sensor unit 24 to the person have a one-to-one relationship. Because of this, the distance to a human body can be detected using the human body detecting means by determining the relationship between the v-coordinate v1 of the uppermost portion of the image of the person and the distance from the image pickup sensor unit 24 to the person in advance.
Table 17 is an example in which the average height of persons is h1, and the relationship between the v-coordinate v1 of the uppermost portion of the image of the person and the distance from the image pickup sensor unit 24 to the person has been determined in advance. This table has been obtained using an image pickup sensor unit of VGA resolution as the image pickup sensor unit 24. From this table, if v1=70, for example, the distance from the image pickup sensor unit 24 to the person is estimated to be about two meters.
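The Table 17 lookup can be sketched as a nearest-entry search; the table entries below are invented for illustration, with only the v1=70 to roughly two meters pairing taken from the text.

```python
def distance_from_v1(v1, table=((40, 3.0), (70, 2.0), (120, 1.0))):
    """Estimate the camera-to-person distance in meters from the
    v-coordinate of the topmost person pixel, using a precomputed
    (v1, distance) table in the spirit of Table 17.  Returns the
    distance of the table row whose v1 is nearest the observed value."""
    return min(table, key=lambda row: abs(row[0] - v1))[1]
```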
The obstacle detecting means with the use of the human body detecting means is explained hereinafter.
Fig. 36 is a flowchart indicating the processing to be executed by the obstacle detecting means employing the human body detecting means.
At step S203, the obstacle detecting means estimates the height v2 of a person on the image by making use of the distance information from the image pickup sensor unit 24 to the person estimated by the human body distance detecting means. Figs. 37A and 37B are schematic views for explaining this processing, indicating the same scene as Figs. 35A and 35B. As described above, the heights h1 of all persons in the space to be air conditioned are known and substantially the same. Because the image pickup sensor unit 24 is located at a height of two meters, it takes an image while looking down at the persons from above, as shown in Fig. 37A. In this case, the closer a person is to the image pickup sensor unit 24, the bigger the person appears on the image, as shown in Fig. 37B. That is, the difference v2 between the v-coordinates of the uppermost and lowermost portions of the image of the person and the distance from the image pickup sensor unit 24 to the person have a one-to-one relationship. Because of this, if the distance from the image pickup sensor unit 24 to the person is known, the size of the person on the image can be estimated. In this case, it is only necessary to determine in advance the relationship between the difference v2 and the distance from the image pickup sensor unit 24 to the person.
Table 18 is an example in which the relationship between the difference v2 between the v-coordinates of the uppermost and lowermost portions of the image of the person and the distance from the image pickup sensor unit 24 to the person has been determined in advance. This table has been obtained using an image pickup sensor unit of VGA resolution as the image pickup sensor unit 24. From this table, if the distance from the image pickup sensor unit 24 to the person is two meters, the difference v2 is estimated to be 85.
At step S204, in each region of the difference image, the obstacle detecting means detects the pixel having a difference that is located at the uppermost portion of the image and the pixel having a difference that is located at the lowermost portion of the image, and calculates the difference v3 between their v-coordinates.
At step S205, whether an obstacle is present between the image pickup sensor unit 24 and the person is estimated by comparing the height v2 of the person on the image, estimated using the distance information from the image pickup sensor unit 24 to the person, with the height v3 of the person obtained from the real difference image.
Figs. 38A and 38B and Figs. 39A and 39B are schematic views indicating this processing. Figs. 38A and 38B depict a scene similar to that of Figs. 35A and 35B, but particularly one in which no obstacle is present between the image pickup sensor unit 24 and the person. On the other hand, Figs. 39A and 39B schematically depict a scene in which an obstacle is present. As shown in Figs. 38A and 38B, if no obstacle is present between the image pickup sensor unit 24 and the person, the height v2 of the person on the image, estimated using the distance information from the image pickup sensor unit 24 to the person, becomes nearly equal to the height v3 of the person 123 obtained from the real difference image. On the other hand, as shown in Figs. 39A and 39B, if an obstacle is present between the image pickup sensor unit 24 and the person, the person is partially screened, and no difference exists in the screened region. Taking notice of the fact that almost all obstacles in the space to be air conditioned are placed on the floor, it is conceivable that it is a lower part of the person that is screened. This means that even if an obstacle is present between the image pickup sensor unit 24 and the person, the distance to the person can be correctly obtained using the v-coordinate v1 of the uppermost portion of the image in the human region. On the other hand, if an obstacle is present between the image pickup sensor unit 24 and the person, it is estimated that the height v3 of the person 125 obtained from the real difference image becomes smaller than the height v2 of the person on the image estimated using the distance information from the image pickup sensor unit 24 to the person.
In view of this, if a determination is made at step S205 that v3 is sufficiently smaller than v2, the program advances to step S206, at which a determination is made that an obstacle is present between the image pickup sensor unit 24 and the person. In this event, the distance from the image pickup sensor unit 24 to the person is made equal to the distance from the image pickup sensor unit 24 to the person that has been obtained from the v-coordinate v1 of the uppermost portion of the image.
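The step S205 comparison can be sketched as below; the "sufficiently smaller" threshold is our assumption, since the text does not state one.

```python
def obstacle_between(v2_expected, v3_observed, ratio_threshold=0.7):
    """Occlusion test of steps S204-S206: compare the person's expected
    pixel height v2 (estimated from the distance to the person) with
    the height v3 measured in the real difference image.  If v3 is
    sufficiently smaller than v2 (here, below an assumed fraction of
    it), a floor-standing obstacle is judged to screen the person's
    lower part, so an obstacle lies between the camera and the person."""
    return v3_observed < ratio_threshold * v2_expected
```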
As described above, the distance detecting means can be realized by making use of a detection result of the human body detecting means.
Although in this embodiment the image pickup sensor unit 24 has been described as being of a fixed type having a sufficiently wide viewing angle, if the horizontal field of view of the image pickup sensor unit 24 is narrow, it is only necessary to horizontally reciprocate the image pickup sensor unit 24 to broaden the field of view. Similarly, if the vertical field of view of the image pickup sensor unit 24 is narrow, it is only necessary to vertically reciprocate it. If both the horizontal and vertical fields of view are narrow, they can be broadened by scanning the image pickup sensor unit 24 in both directions.
In this case also, the entire image obtained by driving the image pickup device can be used in each image processing. That is, only the number of pixels is different, and the approach is the same as that for the fixed image pickup device.
Even when the image pickup sensor unit 24 is driven in the manner referred to above, a resident or residents do not feel a sense of strangeness.
Also, in this embodiment, the image pickup device 25 disposed at the lower portion of the indoor unit is covered with a portion of the indoor unit, but the same applies to an image pickup sensor unit disposed at an upper portion of the indoor unit. Further, the image pickup sensor unit may also be left exposed at the time of stoppage of the air conditioner, or protected by, for example, a transparent cover, without covering the image pickup device 25 with a portion of the indoor unit. In this case also, a resident or residents do not feel a sense of strangeness when the image pickup sensor unit 24 is driven in the manner referred to above.
Naturally, only one image pickup device 25 may be disposed in the vicinity of a center of the indoor unit.
Industrial Applicability
The air conditioner according to the present invention can not only restrain a reduction in the recognition performance of the image pickup sensor but also give a resident or residents a sense of ease, and is accordingly effectively utilized particularly in air conditioners for general home use.
Explanation of reference numerals 2 indoor unit body, 2a front suction opening, 2b upper suction opening, 4 movable front panel, 6 heat exchanger, 8 indoor fan, 10 discharge opening, 12 vertical wind direction changing blade, 14 horizontal wind direction changing blade, 16 filter, 18, 20 arm for front panel, 24, 26 image pickup sensor unit, 25 image pickup device, 28 light emitting portion, 51 circuit board, 52 lens, 53 image pickup sensor, 54 support (sensor holder), 55 rotary shaft for horizontal (transverse) rotation, 56 rotary shaft for vertical rotation, 57 motor for horizontal rotation, 58 motor for vertical rotation.
CLAIMS
1. An air conditioner comprising:
an indoor unit;
a human body detecting means mounted to the indoor unit to detect presence or absence of a person;
an obstacle detecting means mounted to the indoor unit to detect presence or absence of an obstacle; and
wind direction changing blades mounted to the indoor unit so as to be controlled based on a detection result of the human body detecting means and a detection result of the obstacle detecting means;
wherein the human body detecting means and the obstacle detecting means are realized by a fixed or driven image pickup device, which is covered with a portion of the indoor unit when the air conditioner is not in operation.
2. The air conditioner according to claim 1, wherein the driven type image pickup device is designed to face in a same direction at start of operation of the air conditioner.
3. The air conditioner according to claim 2, wherein the same direction is a direction in which a front face of the indoor unit faces.
4. The air conditioner according to claim 2 or 3, wherein the same direction is a direction in which an optical axis of the image pickup device extends forward so as to be substantially perpendicular to a surface of the indoor unit, on which the image pickup device is mounted, as viewed from above the indoor unit.
5. The air conditioner according to any one of claims 2 to 4, wherein a direction of the image pickup device is changeable in horizontal and vertical directions within a predetermined range of angles, and the same direction is a direction in which the image pickup device is located at an uppermost or lowermost position of the predetermined range of angles in the vertical direction.
6. The air conditioner according to any one of claims 1 to 5, further comprising a movable front panel operable to open and close front suction openings defined in the indoor unit, and the wind direction changing blades comprise a vertical wind direction changing blade operable to open and close a discharge opening defined in the indoor unit, through which air is blown into a room, and to vertically change a direction of air blown out from the discharge opening, wherein the portion of the indoor unit comprises one of the movable front panel and the vertical wind direction changing blade.