Abstract: A silicon microphone includes a silicon microphone device (10), on which four acoustic transducers (11A, 11B, 11C, 11D) are integrated, an integrated circuit device (20a) and a package (30a) for housing the devices (10, 20a) in an inner space defined therein, and the four acoustic transducers have different values of sensitivity and, accordingly, different values of dynamic range; the analog acoustic signals (S1, S2, S3, S4) are supplied from the four acoustic transducers (11A, 11B, 11C, 11D) to the integrated circuit device (20a), and are converted to digital acoustic signals (DS1, DS2, DS3, DS4); the digital acoustic signals (DS1, DS2, DS3) output from the acoustic transducers (11A, 11B, 11C) with relatively high sensitivity are normalized with respect to the digital acoustic signal (DS4) output from the acoustic transducer (11D) with the lowest sensitivity, and the normalized digital acoustic signals are selectively formed into a composite acoustic signal (S5) depending upon the sound pressure of the sound waves so that the dynamic range is expanded without sacrifice of the high sensitivity in the low sound pressure range.
TITLE OF THE INVENTION
SENSITIVE SILICON MICROPHONE WITH WIDE DYNAMIC RANGE
FIELD OF THE INVENTION
This invention relates to a microphone and, more particularly, to an elec-
trostatic microphone fabricated on a semiconductor substrate.
DESCRIPTION OF THE RELATED ART
Growing research and development efforts are being made for miniature
microphones. Various approaches have been proposed. One of the ap-
proaches is disclosed in Japan Patent Application laid-open No. 2001-
169395. The prior art miniature microphone disclosed in the Japan Patent
Application laid-open is of the type optically converting vibrations of ex-
tremely thin vibratory plates to an electric signal, and, accordingly, is called
"an optical microphone".
The prior art optical microphone is hereinafter described. An inner
space is defined inside a package, and is divided into plural chambers by
means of photo-shield walls. The chambers are respectively assigned to
acoustic transducers, i.e., acoustic waves-to-electric signal converters, and
each of the acoustic transducers is constituted by a substrate of gallium ar-
senide and an extremely thin vibratory plate. A laser diode and photo-
diodes are integrated on the gallium arsenide substrate, and are opposed to
the extremely thin vibratory plate. The laser diode emits the light to the ex-
tremely thin vibratory plate, and the light is reflected thereon. The reflected
light is incident on the photo-diodes, and is converted to photo-current. The
acoustic waves give rise to vibrations of the extremely thin vibratory plates,
and cause the amount of incident light to be varied in the chambers. Ac-
cordingly, the amount of photo-current is varied together with the vibrations
of vibratory plates. The prior art optical microphone aims at response to the
sound waves in a wide frequency range. The acoustic converters have dif-
ferent frequency ranges partially overlapped with the adjacent frequency
ranges so as to make the prior art optical microphone respond to the wide
frequency range.
Recent development of MEMS (Micro Electro Mechanical Systems)
technologies makes it possible to fabricate an electrostatic microphone on a
silicon chip. The miniature electrostatic microphone is called "a silicon
microphone". A typical example of the silicon microphone has a diaphragm
and a back plate, both formed on a silicon chip through the micro fabrication
techniques. The diaphragm is spaced from the back plate by an air gap so as
to form a condenser together with the back plate. While sound waves are
exerting force on the diaphragm, the diaphragm is deformed, and, accord-
ingly, the condenser varies the capacitance together with the sound pressure.
An electric signal representative of the capacitance is taken out from the
condenser. Thus, the silicon microphone converts the sound waves to the
electric signal.
The silicon microphone makes the amplitude of the electric signal well
proportional to the sound pressure in so far as the sound pressure does not
exceed a critical value. However, the silicon microphone does not enlarge
the amplitude of electric signal after the sound pressure exceeds the critical
value. In other words, the electric signal is saturated.
The sound pressure at the critical value is called "saturated sound
pressure". Term "unsaturated sound pressure range" means the range of
sound pressure less than the saturated sound pressure, and is a synonym of
"dynamic range". When the sound pressure is exerting the sound pressure
equal to or greater than the saturated sound pressure, the silicon microphone
enters "saturated state".
In the following description, term "sound pressure" means an amplitude
of pressure or a difference between the highest value of pressure and the
next lowest value of the pressure, and is corresponding to the amplitude of
the electric signal taken out from an ideal microphone, which makes the am-
plitude of electric signal proportional to the sound pressure without the satu-
rated state. On the other hand, term "amplitude" is the difference between
the lowest peak value and the highest peak value of the electric signal output
from a real silicon microphone.
Term "sensitivity" is another figure expressing the capability of silicon
microphone, and is defined as "a rate of change of the amplitude of electric
signal in terms of a unit pressure of sound propagating medium." A silicon
microphone with a high sensitivity can convert faint sound to the electric
signal. However, the silicon microphone enters the saturated state at a rela-
tively small value of the saturated sound pressure. On the other hand, a sili-
con microphone with a lower sensitivity has a wide dynamic range. How-
ever, it is hard to convert faint sound to the electric signal. Thus, there is a
trade-off between the dynamic range and the sensitivity.
It is important for application designers to keep the silicon microphones
in the unsaturated state in the applications put in their operation environments.
However, it is difficult for designers of a general purpose silicon micro-
phone to forecast exactly all the operation environments.
Although plural acoustic transducers are found to form a prior art mi-
crophone device for giving directionality to the prior art microphone device,
the plural acoustic transducers make the prior art directional microphone de-
vice bulky. In other words, it is difficult to fabricate a compact directional
microphone device from the plural acoustic transducers.
SUMMARY OF THE INVENTION
It is therefore an important object of the present invention to provide a
semiconductor microphone, which has a wide dynamic range and a high sen-
sitivity in a relatively low sound pressure range.
It is another important object of the present invention to provide a signal
processing system, which forms a part of the semiconductor microphone.
It is yet another important object of the present invention to provide a
compact directional semiconductor microphone.
To accomplish the object, the present invention proposes to produce a
composite acoustic signal representative of the sound waves from intermedi-
ate acoustic signals output from plural acoustic transducers different in sen-
sitivity and saturated sound pressure.
In accordance with one aspect of the present invention, there is provided
a semiconductor microphone connected to a signal processor for converting
sound waves to plural intermediate acoustic signals, the signal processor
carries out a signal processing on said intermediate acoustic signals so as to
produce a composite acoustic signal, and the semiconductor microphone
comprises a housing having an inner space and formed with a sound hole,
which permits the sound waves to enter the inner space, and plural acoustic
transducers accommodated in the inner space, having respective values of
sensitivity different from one another and respective values of saturated
sound pressure of the sound waves different from one another, converting
the sound waves to the plural intermediate acoustic signals, respectively,
and providing the plural intermediate acoustic signals to the signal processor.
In accordance with another aspect of the present invention, there is pro-
vided a semiconductor microphone for converting sound waves to a compos-
ite acoustic signal comprising a housing having an inner space and formed
with plural sound holes which permit the sound waves to enter the inner
space, a partition wall structure provided in the inner space so as to divide
the inner space into plural compartments open to the outside of the housing
selectively through the plural sound holes, plural acoustic transducers re-
spectively provided in the plural compartments and converting the sound
waves to plural intermediate acoustic signals, and a signal processor con-
nected to the plural acoustic transducers, introducing delay into selected
ones of the plural intermediate acoustic signals so as to produce delayed
acoustic signals and forming a composite acoustic signal from the delayed
acoustic signals, thereby giving directivity to the semiconductor microphone.
BRIEF DESCRIPTION OF THE DRAWINGS
The features and advantages of the silicon microphone will be more
clearly understood from the following description taken in conjunction with
the accompanying drawings, in which
Fig. 1A is a plane view showing the arrangement of components of a sili-
con microphone of the present invention,
Fig. 1B is a block diagram showing the system configuration of the sili-
con microphone,
Fig. 2 is a cross sectional view taken along line III - III of figure 1A and
showing the structure of the silicon microphone,
Fig. 3 is a cross sectional view showing the structure of acoustic trans-
ducers incorporated in the silicon microphone,
Fig. 4 is a plane view showing diaphragms of the acoustic transducers,
Figs. 5A to 5C are cross sectional views showing a process for fabricat-
ing the silicon microphone device,
Fig. 6 is a cross sectional view showing the structure of acoustic trans-
ducers incorporated in another silicon microphone,
Fig. 7 is a block diagram showing the function of an information process-
ing system incorporated in the silicon microphone,
Fig. 8 is a graph showing relations between acoustic signals and sound
pressure,
Fig. 9 is a graph showing coefficients of cross fading in terms of time,
Figs. 10A to 10C are flowcharts showing a job sequence realized through
the execution of a computer program,
Fig. 11 is a block diagram showing the function of another silicon micro-
phone of the present invention,
Fig. 12 is a diagram showing intermediate acoustic signals of the silicon
microphone and composite acoustic signals produced from the intermediate
acoustic signals,
Fig. 13 is a block diagram showing the function of yet another silicon
microphone of the present invention,
Fig. 14 is a block diagram showing the function of still another silicon
microphone of the present invention,
Fig. 15 is a block diagram showing a normalizing function in the silicon
microphone,
Fig. 16A is a plane view showing the arrangement of yet another silicon
microphone of the present invention,
Fig. 16B is a block diagram showing the system configuration of an inte-
grated circuit device of the silicon microphone,
Fig. 17 is a cross sectional view taken along line V-V of figure 16A and
showing the structure of the silicon microphone,
Fig. 18 is a block diagram showing the function of an integrated circuit
device of the silicon microphone,
Fig. 19 is a diagram showing the concept of endowment of directivity,
and
Fig. 20 is a block diagram showing the function of the information proc-
essing system incorporated in still another silicon microphone of the present
invention.
DESCRIPTION OF THE PREFERRED EMBODIMENTS
A semiconductor microphone embodying the present invention is used
for converting sound waves to intermediate acoustic signals. A signal proc-
essor carries out a signal processing on the intermediate acoustic signals so
as to produce a composite acoustic signal.
The semiconductor microphone comprises housing and plural acoustic
transducers. The housing has an inner space, and is formed with a sound
hole. The plural acoustic transducers are accommodated in the inner space.
Since the sound hole permits the sound waves to enter the inner space, the
sound waves reach the plural acoustic transducers, and are converted to the
plural intermediate acoustic signals by means of the plural acoustic trans-
ducers.
The plural acoustic transducers have respective values of sensitivity dif-
ferent from one another and respective values of saturated sound pressure
different from one another. Namely, the plural acoustic transducers are dif-
ferent in sensitivity and saturated sound pressure from one another. A cer-
tain potential level of the plural intermediate acoustic signals is representa-
tive of different values of sound pressure. The plural intermediate acoustic
signals are provided from the plural acoustic transducers to the signal proc-
essor, and the composite acoustic signal is produced from the plural inter-
mediate acoustic signals through the data processing.
Since the plural acoustic transducers are different in satu-
rated sound pressure from one another, the composite acoustic signal has an
unsaturated region wider than each of the plural intermediate acoustic sig-
nals. Thus, the semiconductor microphone embodying the present invention
makes it possible to produce the composite acoustic signal with the wide un-
saturated region. Moreover, the plural acoustic transducers are integrated
inside the housing so that the semiconductor microphone is compact.
Another semiconductor microphone embodying the present invention
largely comprises a housing, a partition wall structure, plural acoustic trans-
ducers and a signal processor, and converts sound waves to a composite
acoustic signal. The housing has an inner space, and is formed with plural
sound holes. The partition wall structure is provided inside the housing so
that the inner space is divided into plural compartments. The plural sound
holes permit the sound waves to enter the inner space, i.e., the plural com-
partments. In other words, the plural compartments are acoustically open to
the outside of the housing through the plural sound holes.
The plural acoustic transducers are respectively provided in the plural
compartments, and convert the sound waves to plural intermediate acoustic
signals. The signal processor is connected to the plural acoustic transducers
so that the plural intermediate acoustic signals are supplied from the plural
acoustic transducers to the signal processor for the signal processing. The
signal processor introduces delay into selected ones of the plural intermedi-
ate acoustic signals so as to produce delayed acoustic signals. The signal
processor produces the composite acoustic signal from the delayed acoustic
signals. Since the plural sound holes are differently spaced from an origin
of the sound waves, the sound holes give directivity to the semiconductor
microphone.
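The delay-and-sum idea behind this directivity can be pictured with a short sketch. The sketch below is purely illustrative and is not taken from the specification: the function name, the sample-domain delays and the test signals are assumptions, and the composite acoustic signal is simply the sum of the delayed channel signals.

```python
# Purely illustrative delay-and-sum sketch of the directivity described above;
# the function name, the delays (in samples) and the test signals are assumptions.

def delay_and_sum(channels, delays_in_samples):
    """Delay each intermediate acoustic signal by its own number of samples
    and add the delayed signals into one composite acoustic signal."""
    length = min(len(ch) for ch in channels)
    composite = []
    for n in range(length):
        total = 0.0
        for ch, delay in zip(channels, delays_in_samples):
            if n - delay >= 0:
                total += ch[n - delay]
        composite.append(total)
    return composite

# Two compartments receiving the same wavefront one sample apart; delaying the
# second channel by one sample re-aligns the wavefronts before the addition.
ch1 = [0.0, 1.0, 0.0, -1.0, 0.0]
ch2 = [1.0, 0.0, -1.0, 0.0, 0.0]
print(delay_and_sum([ch1, ch2], [0, 1]))   # -> [0.0, 2.0, 0.0, -2.0, 0.0]
```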
In the following description on embodiments of the present invention,
term "sound pressure" means an amplitude of pressure or a difference be-
tween the highest value of pressure and the next lowest value of the pressure,
and is corresponding to the amplitude of the electric signal taken out from
an ideal microphone, which makes the amplitude of electric signal propor-
tional to the sound pressure without the saturated state. On the other hand,
term "amplitude" is the difference between the lowest peak value and the
highest peak value of the electric signal output from a real silicon micro-
phone.
Term "sensitivity" is another figure expressing the capability of silicon
microphone, and is defined as "a rate of change of the amplitude of electric
signal in terms of a unit pressure of sound propagating medium."
First Embodiment
Referring first to figures 1A and 1B of the drawings, a silicon micro-
phone 1a embodying the present invention largely comprises a silicon mi-
crophone device 10, an integrated circuit device 20a and a single package
30a. The silicon microphone 1a is, by way of example, provided in a mobile
telephone and a PDA (Personal Digital Assistant).
An inner space is defined inside the package 30a, and the silicon micro-
phone device 10 and integrated circuit device 20a are accommodated in the
package 30a. The package 30a is formed with a sound hole 34a. Since a lid
32a is removed from the package 30a shown in figure 1A, a dots-and-dash line
is indicative of the location of the sound hole 34a. The sound waves enter
the inner space through the sound hole 34a, and reach the silicon micro-
phone device 10.
The silicon microphone device 10 is electrically connected to the inte-
grated circuit device 20a. The sound waves are converted to four intermedi-
ate acoustic signals S1, S2, S3 and S4 through the silicon microphone device
10, and the intermediate acoustic signals S1, S2, S3 and S4 are supplied
from the silicon microphone device 10 to the integrated circuit device 20a.
The intermediate acoustic signals S1, S2, S3 and S4 are produced in the sili-
con microphone device 10 on the condition that the silicon microphone de-
vice 10 changes the sensitivity thereof among four different values. The in-
tegrated circuit device 20a produces a composite acoustic signal S5 on the
basis of the intermediate acoustic signals S1, S2, S3 and S4. While the
sound waves are exhibiting a relatively low value of sound pressure, the com-
posite acoustic signal S5 is equivalent to an acoustic signal produced
through a silicon microphone with a high sensitivity. On the other hand,
while the sound waves have relatively high values of sound pressure, the
composite acoustic signal S5 is equivalent to an acoustic signal produced
through a silicon microphone with a low sensitivity. Thus, the silicon mi-
crophone 1a exhibits a variable sensitivity, and achieves a wide dynamic
range by virtue of the variable sensitivity.
Figure 2 shows the structure of the silicon microphone 1a. The package
30a is broken down into a circuit board 31 and a lid 32a. A flat portion 31a,
a wall portion 31b and conductive leads (not shown) form in combination
the circuit board 31. A conductive pattern 31c is printed on the flat portion 31a,
and is connected to the conductive leads. The silicon microphone device 10
and integrated circuit device 20a are mounted on an inner surface of the flat
portion 31a, and the wall portion 31b projects from the periphery of the flat
portion 31a in the normal direction to the upper surface. Thus, the wall por-
tion 31b forms an opening. The opening is closed with the lid 32a so that
the silicon microphone device 10 and integrated circuit device 20a are ac-
commodated in the inner space. The lid 32a is spaced from an upper surface
of the silicon microphone device 10 and an upper surface of the integrated
circuit device 20a.
The sound hole 34a is located over the integrated circuit device 20a, and
the silicon microphone device 10 is offset from the sound hole 34a. This is
because the moisture in breath and saliva are liable to invade
the space beneath the sound hole 34a. The offset arrangement protects the
silicon microphone device 10 from the moisture and saliva.
Conductive pads (not shown) are formed on the upper surface of silicon
microphone device 10 and the upper surface of integrated circuit device 20a.
Several conductive pads on the silicon microphone device 10 are connected
through pieces of conductive wire 33 to the conductive pads, which serve as
signal input nodes, of the integrated circuit device 20a, and other conductive
pads, which serve as signal output nodes, are connected to the conductive
leads through the conductive pattern 31c. Electric power and ground poten-
tial are supplied to the silicon microphone device 10 and integrated circuit
device 20a through other conductive leads.
Thus, the silicon microphone device 10 and integrated circuit device 20a
are integrated on the substrate 31, and sound waves are converted to the
composite acoustic signal S5 through the cooperation between the silicon
microphone device 10 and the integrated circuit device 20a; the composite
acoustic signal S5 is output to the outside of the package 30a.
Structure of Silicon Microphone Device
The silicon microphone device 10 is fabricated on a silicon substrate
through the MEMS technologies. The silicon microphone device 10 largely
comprises a frame structure 10a and acoustic transducers 11A, 11B, 11C and
11D. In this instance, four acoustic transducers 11A, 11B, 11C and 11D are
integrated in the silicon microphone device 10. Four cylindrical hollow
spaces 14A, 14B, 14C and 14D are formed in quarter portions of the frame
structure 10a, and extend in parallel to the perpendicular direction of the
frame structure 10a. The four cylindrical hollow spaces 14A to 14D are re-
spectively assigned to the four acoustic transducers 11A to 11D, and the four
acoustic transducers 11A to 11D are supported by the frame structure 10a.
The four acoustic transducers 11A to 11D are independent of one another,
and convert the sound waves to the intermediate acoustic signals S1 to S4,
respectively. In other words, the four acoustic transducers 11A to 11D are
operative in parallel to one another for producing the intermediate acoustic
signals S1 to S4. The acoustic converters 11A to 11D are of the type con-
verting sound waves to the intermediate acoustic signals S1 to S4 through
variation of capacitance. The acoustic converters 11A to 11D are different
in sensitivity from one another so that the sound waves, which are propa-
gated to the four acoustic converters 11A to 11D, make the intermediate
acoustic signals S1 to S4 have amplitudes different from one another.
As shown in figures 3 and 4, the frame structure 10a includes a semicon-
ductor substrate 12 and a supporting layer 13 grown on the semiconductor
substrate 12. In this instance, the semiconductor substrate 12 is made of
single crystalline silicon, and the supporting layer 13 is made of silicon ox-
ide. As described hereinbefore, the cylindrical hollow spaces 14A to 14D
are formed in the frame structure 10a in such a manner as to penetrate the
supporting layer 13 and silicon substrate 12, and are different in diameter
from one another.
Each of the acoustic converters 11A to 11D includes a diaphragm 15 and
a back plate 16. The diaphragm 15 and back plate 16 are made of silicon.
Plural small through-holes 17 are formed in the back plate 16, and the dia-
phragm 15 and back plate 16 are supported in parallel to each other by the
supporting layer 13. The diaphragm 15 is spaced from the back plate 16 by
an extremely narrow gap 18, and the diaphragm 15 and back plate 16 serve
as electrodes of a capacitor. The diaphragm 15 is vibratory with respect
to the supporting layer 13, and the back plate 16 is stationary with respect
to the supporting layer 13. The sensitivity of the acoustic converters 11A to
11D is dependent on the area of the diaphragms 15 exposed to the sound
waves. Since the peripheral portions of the diaphragms 15 are embedded in
the supporting layer 13, the vibratory portions of diaphragms 15 are equal in
area to the cross sections of the cylindrical hollow spaces 14A, 14B, 14C
and 14D, and the cylindrical hollow spaces 14A to 14D are different in area
of cross section from one another. Thus, the cylindrical hollow spaces 14A
to 14D make the acoustic converters 11A to 11D different in sensitivity from
one another.
When the silicon microphone 1a is energized, the diaphragms 15 are bi-
ased to the associated back plates 16, and a potential difference takes place
between the diaphragms 15 and the associated back plates 16. Sound waves
are assumed to reach the acoustic converters 11A to 11D. While the sound
waves are exerting the sound pressure on the diaphragms 15, the sound pres-
sure gives rise to vibrations of the diaphragms 15. The vibrating diaphragms
15 cause the gaps 18 from the associated back plates 16 to be repeatedly varied,
and, accordingly, the capacitance of the acoustic converters 11A to 11D is var-
ied in dependence on the gaps 18 from the associated back plates 16. The
varied capacitance is taken out from the acoustic converters 11A to 11D as
the intermediate acoustic signals S1 to S4.
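For reference only (the specification does not state this formula), the capacitance of each converter can be pictured with the standard parallel-plate approximation, where ε0 is the permittivity of air, A is the vibratory area of the diaphragm 15 and g(t) is the instantaneous width of the gap 18:

```latex
C(t) = \frac{\varepsilon_0 A}{g(t)}
```

A sound-driven change of g(t) therefore changes C(t), and this capacitance variation is what is taken out as the intermediate acoustic signals S1 to S4.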
As described hereinbefore, the diameter of vibratory portions is different
from one another. Although the sound waves uniformly exert the sound
pressure on all the vibratory portions of diaphragms 15, the flexural rigidity
of which is assumed to be equal to one another, the amplitude of the vibra-
tions is different among the diaphragms 15 due to the difference in diameter.
The capacitance is varied in dependence on the gaps 18 and, accordingly, the
amplitude of vibrations. Thus, the intermediate acoustic signals S1 to S4
have different values of the amplitude in the presence of the same sound
waves. In other words, the four acoustic converters 11A to 11D exhibit the
sensitivity different from one another.
Fabrication Process for Acoustic Transducers
The silicon microphone device 10 is fabricated as follows. Figures 5A to
5C show a process for fabricating the silicon microphone device 10.
The process starts with preparation of a substrate 111 of single crystal-
line silicon. Silicon dioxide (SiO2) is deposited over the entire major
surface of the substrate 111 so as to form a silicon oxide layer 112,
and, thereafter, polycrystalline silicon is deposited over the entire surface of
the silicon oxide layer 112 so that a poly-silicon layer 113 is formed on the
silicon oxide layer 112.
The silicon oxide layer 112 serves as a sacrifice layer. In this instance,
the silicon oxide and polycrystalline silicon are grown through chemical
vapor deposition techniques. While the polycrystalline silicon is being de-
posited, n-type impurity such as, for example, phosphorus is doped into the
polycrystalline silicon by using an in situ doping technique. After the depo-
sition of the polycrystalline silicon, P2O5 is thermally diffused so that the
polycrystalline silicon is heavily doped with the n-type impurity. An ion-
implantation may be employed in the introduction of n-type impurity.
Photo-resist solution is spun onto the poly-silicon layer 113, and is
baked so as to form a photo-resist layer. A latent image of an etching mask
114 is optically transferred from a photo mask (not shown) to the photo-
resist layer, and the latent image is developed so that the etching mask 114
is left on the poly-silicon layer 113 as shown in figure 5A.
The etching mask 114 has four circular disk portions, which are corre-
sponding to the diaphragms 15. The poly-silicon layer 113 is partially
etched away in the presence of etchant, and the etching mask 114 protects
the diaphragms 15 from the etchant. As a result, the diaphragms 15 are left
on the silicon oxide layer 112. The etching mask 114 is stripped off.
Subsequently, silicon dioxide is deposited on the entire surface of the
resultant structure so as to form a silicon oxide layer 115, and, thereafter,
polycrystalline silicon is deposited over the silicon oxide layer 115. Boron
is also doped into the polycrystalline silicon through the in situ doping tech-
nique. The diaphragms 15 are covered with the silicon oxide layer 115, which
in turn is covered with a poly-silicon layer 116.
Photo-resist solution is spun onto the entire surface of the poly-silicon
layer 116, and is baked for forming a photo-resist layer. A latent image of
an etching mask 117 is optically transferred from a photo-mask to the photo-
resist layer, and is developed so that the etching mask 117 is left on the
poly-silicon layer 116 as shown in figure 5B. The etching mask 117 is
formed with through-holes over the areas where the small through-holes 17
are to be formed.
The etching mask 117 has circular disk portions corresponding to the
back plates 16. The resultant structure is exposed to etchant. Although the
poly-silicon layer 116 is partially removed in the presence of the etchant, the
etching mask 117 protects the back plates 16 from the etchant, and the back
plates 16 are left on the silicon oxide layer 115. The etching mask 117 is
stripped off.
Subsequently, a photo-resist etching mask (not shown) is patterned on
the reverse surface of the substrate 111, and the areas where the cylindrical
hollow spaces 14A to 14D are to be formed are not covered with the photo-
resist etching mask. The substrate 111 is subjected to a deep RIE (Reactive
Ion Etching), i.e., an anisotropic dry etching for achieving large aspect ratio.
The substrate 111 is partially etched away until the silicon oxide layer 112 is
exposed. Thus, the cylindrical hollow spaces 14A to 14D are formed in the
substrate 111. The etching mask is stripped off. The patterned substrate
111 serves as the substrate 12.
Subsequently, a photo-resist etching mask 118 is patterned on the back
plates 16. Although the peripheral areas of back plates 16 and peripheral ar-
eas of silicon oxide layer 115 around the back plates 16 are covered with the
photo-resist etching mask 118, the central areas of back plates 16 are uncov-
ered with the photo-resist etching mask 118. The small through-holes 17 are
formed in the central areas of back plates 16. The resultant structure is
dipped in wet etchant such as, for example, hydrofluoric acid solution. The
hydrofluoric acid solution penetrates into the small through-holes 17, and silicon
oxide below the back plates 16 is removed from the resultant structure. As a
result, the diaphragms 15 are spaced from the back plates 16 by the gaps 18.
The silicon oxide exposed to the cylindrical hollow spaces 14A to 14D is
also removed from the resultant structure, and the diaphragms 15 are ex-
posed to the cylindrical hollow spaces 14A to 14D. The photo-resist etching
mask 118 is stripped off, and the acoustic converters 11A to 11D, which are
supported by the frame structure 10a, are completed. The patterned silicon
oxide layers 112 and 115 form in combination the supporting layer 13.
Although the acoustic converters 11A to 11D are integrated on the sub-
strate 12 in the above-described embodiment, acoustic converters 11A', ...,
11C', ..., which are different in sensitivity from one another, of another
silicon microphone 1Aa may be respectively formed on substrates of indi-
vidual supporting structures 10A, ..., 10C, ..., as shown in figure 6. The sili-
con microphone 1Aa further comprises an integrated circuit device 20Aa and
a package 30Aa. A sound hole 34Aa is formed over the acoustic transducers
11A', ..., 11C', ..., and a conductive pattern 31Ac is different from the con-
ductive pattern 31c because of the physically independent acoustic transduc-
ers 11A', ..., 11C', .... Although the acoustic transducer 11C' is directly con-
nected to the integrated circuit device 20Aa, the other acoustic transducers
11A', ... are connected to the integrated circuit device 20Aa through pieces
of conductive wire 33 and the conductive pattern 31Ac. However, the other
features of the package 30Aa are similar to those of the package 30a. The
other component parts of package 30Aa are labeled with references designat-
ing the corresponding component parts of package 30a without detailed de-
scription. The integrated circuit device 20Aa is the same as the integrated cir-
cuit device 20a.
Integrated Circuit Device
As described hereinbefore, the acoustic transducers 11A to 11D are dif-
ferent in sensitivity from one another. The intermediate acoustic signals S1,
S2, S3 and S4 are swung substantially in proportion to the amplitude of vi-
brations of the diaphragms 15 and, accordingly, to the sound pressure. All the
intermediate acoustic signals S1 to S4 are saturated at a certain value. In
other words, a common dynamic range is found in all the intermediate
acoustic signals S1 to S4. The different values of sensitivity mean that the
intermediate acoustic signals S1 to S4 have respective values of the rate of
change of the amplitude of the intermediate acoustic signals S1 to S4 in terms of
a unit value of the sound pressure. For this reason, the saturated amplitude of
intermediate acoustic signals expresses different values of sound pressure,
i.e., different values of saturated sound pressure PA, PB, PC and PD.
The saturated sound pressure PA, PB, PC and PD are corresponding to
the maximum values of sound pressure detectable by the acoustic transduc-
ers 11A, 11B, 11C and 11D, respectively. The sensitivity of the acoustic trans-
ducers 11A, 11B, 11C and 11D is accompanied with "SA", "SB", "SC" and
"SD", respectively. The acoustic transducer 11A exhibits the highest sensi-
tivity, and the acoustic transducer 11D exhibits the lowest sensitivity. The
sensitivity SA is followed by the sensitivity SB, which in turn is followed by
the sensitivity SC, i.e., SA > SB > SC > SD. In other words, the wider the
diaphragm is, the higher the sensitivity is.
The integrated circuit device 20a receives the intermediate acoustic sig-
nals S1, S2, S3 and S4 from the acoustic transducers 11A to 11D, and carries
out data processing on pieces of sound pressure data for producing the com-
posite acoustic signal S5. While the sound pressure is small, the composite
acoustic signal S5 is produced from the pieces of sound pressure data output
from the acoustic transducer 11A with the highest sensitivity SA. While the
sound pressure is being increased from the small sound pressure region, the
acoustic transducer is changed from 11A through 11B and 11C to 11D, and
the composite acoustic signal S5 is produced from the pieces of sound pres-
sure data output from the selected one of the acoustic transducers 11B, 11C
or 11D. Thus, the dynamic range of the composite acoustic signal S5 is widened
without sacrifice of the high sensitivity in the small sound pressure region.
The integrated circuit device 20a produces the composite acoustic signal
S5 on the basis of the intermediate acoustic signals SI to S4. Analog-to-
digital converters 21 and an information processing system 22a are incorpo-
rated in the integrated circuit device 20a as shown in figure 1B. The acous-
tic converters 11A to 11D are connected to the analog-to-digital converters
21 so that discrete values on the waveforms of the intermediate acoustic sig-
nals S1 to S4 are periodically sampled at sampling intervals and converted
to digital acoustic signals DS1, DS2, DS3 and DS4.
Though not shown in the drawings, the information processing system
22a includes a microprocessor, signal input circuits, a program memory, a
non-volatile data storage, a working memory, peripheral processors, signal
output circuits and a shared bus system. The microprocessor, input circuits,
program memory, working memory, peripheral processors and signal output
circuits are connected to the shared bus system so that the microprocessor
communicates with the peripheral processors, signal input circuits, program
memory, working memory and signal output circuits through the shared bus
system. Thus, the microprocessor serves as a central processing unit so as to
supervise the other system components.
Critical values of the amplitude which are corresponding to the saturated
sound pressure are stored in the non-volatile data storage for the acoustic
transducers 11A, 11B and 11C. The acoustic transducer 11A has the small-
est value of saturated sound pressure. The value of saturated sound pressure
of the acoustic transducer 11B is larger than the value of saturated sound
pressure of the acoustic transducer 11A, and is smaller than the value of
saturated sound pressure of the acoustic transducer 11C. Accordingly, the
critical value for the acoustic transducer 11A is the smallest, and the critical
value for the acoustic transducer 11B is larger than the critical value for the
acoustic transducer 11A and smaller than the critical value for the acoustic
transducer 11C. The acoustic transducer 11D does not enter the saturated
state in the dynamic range of acoustic signal S4.
Relations between cross fading coefficients and time are further stored in
the non-volatile memory for the four acoustic transducers 11A to 11D. The
coefficients of cross fading will be hereinlater described in detail.
Function of Information Processing System
A computer program is stored in the program memory, and runs on the
microprocessor so as to realize a function shown in figure 7. The function,
which is achieved through the execution of computer program, is hereinafter
described with reference to figure 7.
The analog-to-digital converters 21 supply the digital acoustic signals
DS1 to DS4 to the signal input circuits. The microprocessor periodically
fetches the discrete values expressed by the digital acoustic signals DS1 to
DS4 from the signal input circuits, and the discrete values are temporarily
stored in the working memory.
The function is broken down into plural sub-functions, which are referred
to as "composition control 221a", "normalization "226aA, 226aB'and
226aC" and "composition 227a". The composition control 221a is further
broken down into sub-functions referred to as "acquisition of sound pressure
data 222", "selection of acoustic transducer 223a", "acquisition of saturated
sound pressure data 224" and "determination of cross-fading coefficients".
The sub-functions are hereinafter described in detail.
The digital acoustic signals DS1, DS2 and DS3 are normalized through
the normalization 226aA, 226aB and 226aC. As described hereinbefore, the
acoustic transducers 11A to 11D have different values of sensitivity SA,
SB, SC and SD. Figure 8 shows the amplitude of the acoustic signals S1 to S4
in terms of the sound pressure. Plots 11A', 11B', 11C' and 11D' stand for
the relations between the amplitude of the sound signals S1 to S4 output from
the acoustic transducers 11A to 11D and the sound pressure. As will be un-
derstood from the plots 11A' to 11D', even if the sound pressure is found at
a certain value, the acoustic signals S1 to S4 have different values of the
amplitude. The proportional relations on the plots 11A', 11B', 11C' and
11D' are destroyed at the values PA, PB and PC, and the saturated sound
pressure is also different among the acoustic transducers 11A to 11D. "PA",
"PB" and "PC" are indicative of the saturated sound pressure of the acoustic
transducers 11A, 11B and 11C, respectively, and are corresponding to the
critical values THA, THB and THC of amplitudes of the acoustic signals S1 to
S3. Pieces of saturated sound pressure data express the critical values
THA, THB and THC.
Turning back to figure 7, the normalization 226aA, 226aB and 226aC
makes the discrete values of the digital acoustic signals DS1 to DS3 changed to
normalized discrete values on the assumption that the acoustic transducers
11A, 11B and 11C are equal in sensitivity to the acoustic transducer 11D.
The normalization is a preliminary data processing before the composition
227a. The discrete values of the digital acoustic signals DS1 to DS3 are nor-
malized through amplification or multiplication by the ratio of sensitivity.
The discrete values of the digital acoustic signal DS1 are, by way of example,
amplified or multiplied by the ratio SD/SA. The discrete values of the other
digital acoustic signals are similarly amplified by SD/SB and SD/SC. Thus,
the acoustic transducer 11D serves as the standard. For this reason, the nor-
malization is not carried out for the digital acoustic signal DS4.
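The normalization can be pictured with a minimal sketch. The numerical sensitivities and the function name below are assumptions introduced only for illustration; the specification merely requires multiplication by the ratios SD/SA, SD/SB and SD/SC, with DS4 left untouched.

```python
# Purely illustrative sketch of the normalization 226aA to 226aC; the numerical
# sensitivities and the function name are assumptions, not values from the
# specification.

SA, SB, SC, SD = 8.0, 4.0, 2.0, 1.0   # assumed relative sensitivities, SA > SB > SC > SD

def normalize(ds1, ds2, ds3, ds4):
    """Scale DS1 to DS3 so that every channel expresses the sound pressure
    on the scale of the lowest-sensitivity channel DS4 (the standard)."""
    return (ds1 * SD / SA,
            ds2 * SD / SB,
            ds3 * SD / SC,
            ds4)                       # DS4 is not normalized

# The same sound pressure seen by all four transducers maps to one value:
print(normalize(8.0, 4.0, 2.0, 1.0))   # -> (1.0, 1.0, 1.0, 1.0)
```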
The composite acoustic signal S5 is produced through the composition
227a. As will be described hereinafter in conjunction with the selection of
acoustic transducer 223a, the composite acoustic signal S5 is partially pro-
duced from each of the digital acoustic signals DS1 to DS4 and partially
from selected two of the digital acoustic signals DS1 to DS4 under the su-
pervision of the composition control 221a.
In order to control the composition, the sound pressure of sound waves is
firstly determined through the acquisition of sound pressure data 222. The
acoustic transducer 11D has the widest detectable range of sound pressure so
that a current discrete value A, i.e., amplitude A, representative of the sound
pressure is determined on the basis of an envelope line of the discrete values of
the digital acoustic signal DS4.
As described hereinbefore, the critical values THA, THB and THC are
stored in the non-volatile data storage. The critical values THA, THB and
THC are read out from the non-volatile data storage through the acquisition
of saturated sound pressure data 224. The current value A of sound pressure
are compared with the critical values THA, THB and THC indicative of the
saturated sound pressure PA, PB and PC to see what acoustic transducer 11 A,
1 IB, 11C or 1 ID is to be selected. When the current value A of digital
acoustic signal DS4 is less than the critical value THA corresponding to the
saturated sound pressure PA, i.e., A < THA, the acoustic transducer 11A is
to be selected. When the current value A is fallen within the range equal to
or greater than the critical value THA and less than the critical value THB,
i.e., THA^A< THB, the acoustic transducer 1 IB is selected from the four.
If the current value A is fallen within the range equal to or greater than the
critical value THB and less than the critical value THC, i.e., THB^SA <
THC, the acoustic transducer 11C is selected from the four. When the cur-
rent value A is equal to or greater than the critical value THC, i.e., THC is A,
the acoustic transducer 11D is selected from the four.
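The selection rule may be summarized by a short sketch; the numerical critical values THA, THB and THC and the function name are placeholders, not figures from the specification.

```python
# Sketch of the transducer selection described above; THA, THB and THC are
# placeholder critical values.

THA, THB, THC = 0.25, 0.50, 0.75       # assumed critical amplitudes of DS4

def select_transducer(a):
    """Map the current value A (taken from the envelope of DS4) to the
    transducer whose signal should form the composite acoustic signal S5."""
    if a < THA:
        return "11A"                   # highest sensitivity, smallest saturated pressure
    elif a < THB:
        return "11B"
    elif a < THC:
        return "11C"
    else:
        return "11D"                   # lowest sensitivity, widest unsaturated range

for a in (0.10, 0.30, 0.60, 0.90):
    print(a, select_transducer(a))     # 11A, 11B, 11C, 11D
```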
While the current value A is not found in the vicinities of critical values
THA, THB and THC, the normalized value of digital acoustic signal DS1,
DS2, DS3 or DS4 is output from the information processing system 22a as
the composite acoustic signal S5. However, if the current value A is found
in the vicinity of one of the critical values THA, THB or THC, the corre-
sponding part of the composite acoustic signal S5 is produced through a
cross fading technique. In other words, when the current value A is found in
the vicinity of the critical value THA, THB or THC, the normalized value of
the digital acoustic signal DS1, DS2 or DS3 fades out, and the normalized
value of the digital acoustic signal DS2, DS3 or DS4 fades in. Coefficients are
required for the cross fading, and are determined through the sub-function
"determination of cross-fading coefficients".
Figure 9 shows the coefficients of cross fading in terms of time. Plots S1
stands for the coefficient to be applied to the normalized values of an acous-
tic transducer 11A, 11B or 11C presently used, and plots S2 stands for the
coefficients to be applied to the normalized values of an acoustic transducer
11B, 11C or 11D to be changed from the currently used acoustic transducer
11A, 11B or 11C. The coefficient on the plots S1 is decreased with time,
and the coefficient on the plots S2 is increased with time.
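Figure 9 is described only qualitatively, so the sketch below assumes, purely for illustration, complementary linear ramps over N sampling intervals; the actual coefficient tables are stored in the non-volatile memory, and N is a placeholder.

```python
# Assumed cross-fading coefficient tables and their application; the real
# tables are read out of the non-volatile memory.

N = 64                                             # assumed length of the coefficient tables

fade_out = [1.0 - i / (N - 1) for i in range(N)]   # plots S1: decreases with time
fade_in  = [i / (N - 1) for i in range(N)]         # plots S2: increases with time

def cross_fade(old_value, new_value, pointer):
    """Blend the normalized values of the outgoing and incoming transducers
    at the position indicated by the address pointer."""
    return fade_out[pointer] * old_value + fade_in[pointer] * new_value

print(cross_fade(0.40, 0.42, 0))        # start of the fade: only the old channel contributes
print(cross_fade(0.40, 0.42, N - 1))    # end of the fade: only the new channel remains
```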
Figures 10A to 10C show a job sequence of the computer program. The
computer program has a main routine and a subroutine. The main routine
periodically branches into the subroutine through timer interruptions. Each
of the timer interruptions takes place upon expiry of a predetermined time
period. The timer interruption takes place at the predetermined time inter-
vals approximately equal to the sampling intervals for the analog-to-digital
conversion by means of the analog-to-digital converters 21.
When the silicon microphone is powered, the computer program starts to
run on the microprocessor. The microprocessor firstly carries out a system
initialization as by step S1 in figure 10A. While the microprocessor is ini-
tializing the system, predetermined memory locations in the working mem-
ory are assigned to new discrete values, and address pointers are defined at
another memory location in the working memory. The address pointers are
indicative of the addresses where the values of coefficients are stored. The
value of coefficient on the plots S1 is read out from the address indicated by
one of the address pointers, and the value of coefficient on the plots S2 is
read out from the address indicated by the other address pointer. (See figure
9). The address pointers are incremented so that the value on plots S1 and
the value on plots S2 are respectively decreased and increased together with
time. When the address pointers are incremented to "1", the address point-
ers are indicative of the addresses where the values of coefficients at t1 are
stored.
Upon completion of the system initialization, the microprocessor
checks the working memory to see whether or not new discrete values are
stored in the predetermined memory locations as by step S11.
While the answer at step S11 is being given negative "No", the micro-
processor repeats the job at step S11, and waits for the change of answer at
step S11. When new discrete values are stored in the predetermined memory
locations, the answer at step S11 is changed to affirmative "Yes".
With the positive answer "Yes" at step S11, the microprocessor reads out
the new discrete values from the working memory as by step S12, and nor-
malizes the new discrete values of the digital acoustic signals DS1 to DS3 as
by step S13. The normalization has been already described in conjunction
with the function "Normalization" at the boxes 226aA, 226aB and 226aC in
figure 7. The microprocessor stores the normalized values in the working
memory as by step S14.
Subsequently, the microprocessor reads out the new discrete value A on
the digital acoustic signal DS4 from the working memory as by step S15,
and compares the new discrete value A with the critical values THA, THB
and THC to see what range the new discrete value is fallen into as by step
S16. The critical values THA, THB and THC and the comparison have been al-
ready described in conjunction with the boxes 223a and 224.
When the new discrete value A is found in the range less than the critical
value THA, the microprocessor temporarily selects the acoustic transducers
11A and 11B from the four as by step S17. When the new discrete value A
is found in the range equal to or greater than the critical value THA and less
than the critical value THB, the microprocessor temporarily selects the acoustic
transducers 11A, 11B and 11C from the four as by step S18.
When the new discrete value A is found in the range equal to or greater than
the critical value THB and less than the critical value THC, the microprocessor
temporarily selects the acoustic transducers 11B, 11C and 11D from the four as by
step S19. When the new discrete value A is found in the range equal to or
greater than the critical value THC, the microprocessor temporarily selects
the acoustic transducers 11C and 11D from the four as by step S20.
The acoustic transducers 11A and 11B are assumed to be selected at step
S17. The microprocessor checks the new discrete value A to see whether or
not the new discrete value A is fallen in the vicinity of the critical value
THA as by step S21. If the new discrete value A is fallen in the vicinity of
the critical value THA, the answer at step S21 is given affirmative "Yes",
and the microprocessor proceeds to step S32. On the other hand, when the
discrete value A is found to be outside of the vicinity of the critical value
THA, the answer at step S21 is given negative "No", and the microprocessor
proceeds to step S31.
The acoustic transducers 11A, 11B and 11C are assumed to be selected at
step S18. The microprocessor checks the new discrete value A to see
whether or not the new discrete value A is fallen in the vicinity of the criti-
cal value THA or THB as by step S23. If the new discrete value A is fallen
in the vicinity of the critical value THA or THB, the answer at step S23 is
given affirmative "Yes", and the microprocessor discards either acoustic
transducer 11A or 11C as by step S25. In detail, when the new discrete
value A is found in the vicinity of the critical value THA, the microprocessor
discards the acoustic transducer 11C. On the other hand, when the new dis-
crete value A is found in the vicinity of the critical value THB, the microproces-
sor discards the acoustic transducer 11A. However, if the new discrete value
A is found outside of the vicinities of critical values THA and THB, the an-
swer at step S23 is given negative "No", and the microprocessor discards
both of the acoustic transducers 11A and 11C as by step S24. Upon comple-
tion of job at step S24, the microprocessor proceeds to step S31. On the
other hand, when the microprocessor completes the job at step S25, the mi-
croprocessor proceeds to step S32.
The acoustic transducers 11B, 11C and 11D are assumed to be selected at
step S19. The microprocessor checks the new discrete value A to see
whether or not the new discrete value A is fallen in the vicinity of the criti-
cal value THB or THC as by step S26. If the new discrete value A is fallen
in the vicinity of the critical value THB or THC, the answer at step S26 is
given affirmative "Yes", and the microprocessor discards either acoustic
transducer 11B or 11D as by step S28. In detail, when the new discrete
value A is found in the vicinity of the critical value THB, the microprocessor
discards the acoustic transducer 11D. On the other hand, when the new dis-
crete value A is found in the vicinity of the critical value THC, the microproces-
sor discards the acoustic transducer 11B. However, if the new discrete value
A is found outside of the vicinities of the critical values THB and THC, the an-
swer at step S26 is given negative "No", and the microprocessor discards
both of the acoustic transducers 11B and 11D as by step S27. Upon comple-
tion of the job at step S27, the microprocessor proceeds to step S31. On the
other hand, when the microprocessor completes the job at step S28, the mi-
croprocessor proceeds to step S32.
The microprocessor is assumed to select the acoustic transducers 11C
and 11D at step S20. The microprocessor checks the new discrete value A to
see whether or not the new discrete value A is found in the vicinity of criti-
cal value THC as by step S29. If the new discrete value A is found in the
vicinity of the critical value THC, the answer at step S29 is given affirmative "Yes", and the mi-
croprocessor proceeds to step S32. On the other hand, if the new discrete
value is found to be outside of the vicinity of critical value THC, the answer
at step S29 is given negative "No", and the microprocessor discards the
acoustic transducer 11C as by step S30. Upon completion of the job at step
S30, the microprocessor proceeds to step S31. Thus, the microprocessor
proceeds to step S31 on the condition that the new discrete value A is found
to be outside of the vicinities of critical values THA, THB and THC, and
proceeds to step S32 on the condition that the new discrete value A is found
in the vicinity of either critical value THA, THB or THC.
When the new discrete value A is found to be outside of the vicinities of
critical values THA, THB and THC, no cross fading is required. For
this reason, the microprocessor transfers the new discrete value from the
working memory to the signal output circuit at step S31.
On the other hand, if the new discrete value A is found to be in the vicin-
ity of either critical value THA, THB or THC, the microprocessor carries out
the cross fading as follows.
First, the microprocessor checks the working memory to see whether or
not the previous discrete value was fallen in the vicinity as by step S32. If
the answer at step S32 is given negative "No", the microprocessor resets the
address pointers to zero as by step S33, and increments the address pointers.
When the address pointers are incremented from zero to "1", the address
pointers are indicative of the addresses where the values of coefficients at t1
are stored.
On the other hand, when the previous discrete value was found in the vi-
cinity, the addresses are to be incremented together with the lapse of time
from t1. For this reason, the microprocessor proceeds to step S34, and in-
crements the address pointers.
Thus, the values of coefficients are successively read out from the ad-
dresses in the nonvolatile memory as by step S35. The microprocessor reads
out the new discrete values from the working memory as by step S36, and
calculates a value of the composite acoustic signal S5 on the basis of the
new discrete values and the coefficients as by step S37. Upon completion
of the calculation, the microprocessor transfers the value of the composite
acoustic signal to the signal output circuit as by step S38. The microproces-
sor returns from step S31 or S38 to step S11. Thus, the microprocessor reit-
erates the loop consisting of steps S11 to S38. When the acoustic transduc-
ers are to be changed, the acoustic transducers are overlapped with one an-
other in the vicinities, and the values of composite acoustic signals are pro-
duced through the cross fading.
As will be understood from the flowchart shown in figures 10A to 10C,
the functions "normalization", "composition control" and "composition" are
realized through the execution of computer program.
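The loop of steps S11 to S38 can be outlined as follows. This is a rough sketch that reuses the hypothetical helpers normalize(), select_transducer() and cross_fade() and the constants THA, THB, THC and N from the earlier sketches; the vicinity half-width and the use of the raw DS4 value in place of its envelope are simplifying assumptions, and the fade is written for the case of a rising sound pressure.

```python
# Rough outline of one pass through steps S11 to S38, building on the
# normalization, selection and cross-fading sketches given earlier.

VICINITY = 0.02          # assumed half-width of the "vicinity" around a critical value
pointer = 0              # address pointer into the coefficient tables
was_in_vicinity = False  # remembers whether the previous value A was inside a vicinity

def in_vicinity(a):
    return any(abs(a - th) < VICINITY for th in (THA, THB, THC))

def process_sample(ds1, ds2, ds3, ds4):
    """Produce one value of the composite acoustic signal S5."""
    global pointer, was_in_vicinity
    n1, n2, n3, n4 = normalize(ds1, ds2, ds3, ds4)           # steps S12 to S14
    a = ds4                                                  # step S15 (envelope omitted)
    channels = {"11A": n1, "11B": n2, "11C": n3, "11D": n4}
    if not in_vicinity(a):                                   # steps S21, S23, S26, S29: "No"
        was_in_vicinity = False
        return channels[select_transducer(a)]                # step S31: no cross fading
    if was_in_vicinity:
        pointer = min(pointer + 1, N - 1)                    # step S34: advance the pointer
    else:
        pointer = 0                                          # step S33: restart the fade
    was_in_vicinity = True
    # steps S35 to S38: blend the channels on both sides of the nearest critical value
    order = ["11A", "11B", "11C", "11D"]
    ths = (THA, THB, THC)
    k = min(range(3), key=lambda i: abs(a - ths[i]))
    return cross_fade(channels[order[k]], channels[order[k + 1]], pointer)
```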
The silicon microphone 1a of the present invention has plural acoustic
transducers 11A to 11D different in sensitivity, and produces the composite
acoustic signal S5 from the acoustic signals S1, S2, S3 and S4 through the
composition. While the sound pressure is being relatively low, the compos-
ite acoustic signal S5 is produced from the acoustic signal S1, S2 or S3 out-
put from the acoustic transducer 11A, 11B or 11C with relatively high sensi-
tivity. When the discrete value indicative of the sound pressure is equal to
or greater than the critical value THC of the acoustic signal S3, the compos-
ite acoustic signal S5 is produced from the acoustic signal S4 output from
the acoustic transducer 11D with the lowest sensitivity. However, the
acoustic transducer 11D is responsive to the widest range of sound pressure. As
a result, the silicon microphone 1a of the present invention achieves the lin-
ear sound-to-signal converting characteristics in a wide sound pressure range
without sacrifice of the sensitivity at the relatively low sound pressure.
When the acoustic transducer is changed, the silicon microphone 1a of
the first embodiment carries out the cross fading on the pieces of sound
pressure data output from the acoustic transducers on both sides of the criti-
cal value THA, THB or THC so that the composite acoustic signal S5 is free
from undesirable noise. Since the sound pressure-to-electric signal charac-
teristics are partially overlapped with one another, the silicon microphone 1a
makes it possible to carry out the cross fading.
The pieces of sound pressure data from the acoustic transducers 11A,
11B and 11C are normalized with respect to the piece of sound pressure data
output from the acoustic transducer 11D with the lowest sensitivity. The
acoustic transducer 11D exhibits the wide dynamic range so that the optimum
acoustic transducer or transducers are selected from the plural acoustic
transducers.
The acoustic transducers 11A to 11D are integrated on the single sub-
strate so that the fabrication process is simplified.
Second Embodiment
Turning to figure 11 of the drawings, another silicon microphone 1b em-
bodying the present invention largely comprises plural acoustic transducers
11A, 11B, 11C and 11D and an integrated circuit device 22b. The acoustic
transducers 11A to 11D are the same as those of the first embodiment, and no
further description is hereinafter incorporated for the sake of simplicity.
The integrated circuit device 22b is adapted to achieve a function, i.e.,
"composition 227b", through which the intermediate acoustic signals SI to
S4 compose a composite acoustic signal DS5a. For the composition 227b,
the integrated circuit device 22b calculates a sum of values expressing the
amplitude of the intermediate acoustic signals S1 to S4 or a square root of
the sum of square values. In this instance, analog-to-digital converters and a
microcomputer are integrated on a single semiconductor chip, and the fol-
lowing function is realized through an execution of a computer program.
Figure 12 shows the function of the integrated circuit device 22b. PL11,
PL12, PL13 and PL14 stand for relations between the sound pressure and the
amplitude of the intermediate acoustic signals S1, S2, S3 and S4. Although the
actual intermediate acoustic signals S1 to S4 have proportional regions and
non-proportional regions as shown in figure 8, the amplitude of intermediate
35
acoustic signals shown in figure 12 is linearly increased until the saturated
state for the sake of simplification.
While the analog-to-digital converters are periodically outputting the
discrete values on the plots PL11 to PL14 to the microcomputer, the micro-
computer fetches the discrete values in synchronism with the analog-to-
digital conversions, and temporarily stores the discrete values in an internal
working memory. The discrete values are sequentially read out from the in-
ternal working memory, and are added to one another. As a result, the sum
of discrete values is left in an internal register. The sum is output from the
microcomputer as indicated by plots PL15a.
Otherwise, the discrete values are squared, and the square values are
added to one another. A value of square root is extracted from the sum of
square values. The square root of the sum of square values is output from
the microcomputer as indicated by plots PL15b.
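The two compositions of the second embodiment, namely the sum and the square root of the sum of squares, may be sketched as below; the sample values are hypothetical.

    import math

    # Illustrative composition of a composite value from four discrete values,
    # either as a plain sum (plots PL15a) or as the square root of the sum of
    # squares (plots PL15b).

    def compose_sum(samples):
        return sum(samples)

    def compose_root_sum_of_squares(samples):
        return math.sqrt(sum(s * s for s in samples))

    samples = [0.75, 0.5, 0.25, 0.125]       # hypothetical discrete values
    print(compose_sum(samples))              # -> 1.625
    print(compose_root_sum_of_squares(samples))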
Comparing the plots PL11 to PL14 with one another, it is understood that the amplitude of the intermediate acoustic signals S1 to S4 at a certain sound pressure is increased together with the sensitivity. The larger the sensitivity is, the higher the amplitude is. The discrete values on the plots PL11 occupy a substantial part of the sum on the plots PL15a and a substantial part of the square root of the sum of square values on the plots PL15b until the plots PL11 are saturated. After the saturation of the plots PL11, the discrete values on the plots PL12 are more influential on the sum and the square root of the sum of square values than the discrete values on the other plots PL11, PL13 and PL14 until the plots PL12 are saturated. However, the sum and the square root of the sum of square values are increased together with the discrete values on the plots PL14 after the saturation of the plots PL13. Thus, while a relatively low sound pressure is being input into the silicon microphone, the acoustic transducer with a relatively large sensitivity is more influential on the sum and the square root of the sum of square values than the acoustic transducer with a relatively small sensitivity, so that the composite acoustic signal DS5a is produced under the condition of a relatively large sensitivity. Although the influence of the acoustic transducers with relatively large sensitivity is reduced together with the increase of sound pressure, the sum and the square root of the sum of square values are increased until the plots PL14 are saturated. In other words, the plots PL15a and PL15b are not saturated before the plots PL14 are saturated. Thus, the silicon microphone is responsive to the wide sound pressure range without sacrifice of the sensitivity in the relatively low sound pressure region.
The addition of discrete values or the calculation of the square root of the sum of square values is desirable, because the value due to random noise carries less weight in the sum or in the square root of the sum of square values.
Third Embodiment
Turning to figure 13 of the drawings, yet another silicon microphone 1c embodying the present invention largely comprises plural acoustic transducers 11A to 11D and an integrated circuit device 22c. The acoustic transducers 11A to 11D are similar to those of the first embodiment, and, for this reason, are not hereinafter detailed. Boxes 221b, 222, 223b, 224, 225, 226aA to 226aC, 227c, 229A to 229C and 230 and circles 228A to 228C stand for functions of the integrated circuit device 22c.
The normalizations 226aA, 226aB and 226aC are similar to those of the first embodiment, and the composition 227a and composition control 221a are replaced with a composition 227c and composition control 221b, respectively. For this reason, description is focused on the composition 227c and the composition control 221b.
The composition control 221b is broken down into sub-functions of "ac-
quisition of sound pressure data 222", "selection of acoustic transducer
223b", "acquisition of saturated sound pressure data 224" and "determina-
tion of cross fading coefficients 225". The sub-functions of "acquisition of
sound pressure data 222", "acquisition of saturated sound pressure data 224"
and "determination of cross fading coefficients 225" are similar to those of
the first embodiment, and no further description is hereinafter incorporated
for avoiding repetition.
When the current discrete value A is determined through the acquisition of sound pressure data, the critical values THA, THB and THC, which are shown in figure 8, are transferred through the acquisition of saturated sound pressure data 224 to the selection of acoustic transducer 223b, and the current discrete value is compared with the critical values THA, THB and THC for selecting one of or more than one of the acoustic transducers 11A to 11D. When the current discrete value A is less than the critical value THA, i.e., A < THA, all of the acoustic transducers 11A to 11D are selected through the selection of acoustic transducer 223b. If the current discrete value A is equal to or greater than the critical value THA and less than the critical value THB, i.e., THA ≤ A < THB, the acoustic transducers 11B, 11C and 11D are selected from the four. If the current discrete value A is equal to or greater than the critical value THB and less than the critical value THC, i.e., THB ≤ A < THC, the acoustic transducers 11C and 11D are selected from the four. If the current discrete value A is equal to or greater than the critical value THC, i.e., THC ≤ A, only the acoustic transducer 11D is selected from the four.
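The selection of acoustic transducer 223b may be pictured, merely as an illustrative sketch, as follows; the function name and the numerical values are hypothetical.

    # Illustrative sketch: select the acoustic transducers contributing to the
    # composite acoustic signal, given the current discrete value A and the
    # critical values THA < THB < THC.

    def select_transducers(a, tha, thb, thc):
        if a < tha:
            return ["11A", "11B", "11C", "11D"]
        if a < thb:
            return ["11B", "11C", "11D"]
        if a < thc:
            return ["11C", "11D"]
        return ["11D"]

    print(select_transducers(0.3, 0.25, 0.5, 0.75))   # -> ['11B', '11C', '11D']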
The function "composition" is broken down into sub-functions "addi-
tion" 228A, 228B and 228C, "division" 229A, 229B and 229C and "cross
fading 230". The discrete value on the digital acoustic signal DS4 is added
to the discrete value on the digital acoustic signal DS3 through the sub-
function 228C, and the discrete value on the digital acoustic signal DS2 is
added to the sum of the discrete values on the digital acoustic signals DS4
and DS3 through the sub-function 228B. The discrete value on the digital
acoustic signal DS1 is added to the sum of the discrete values on the digital
acoustic signals DS4, DS3 and DS2 through the sub-function 228A.- The
sub-functions 228A, 228B and 228C are selectively realized depending upon
the acoustic transducers selected through the sub-function 223b.
The sum of discrete values is divided by the number of discrete values
added to one another through the sub-function 229A, 229B or 229C. Al-
39
though the quotient are per se output from the integrated circuit device 22d,
the quotients are subjected to the cross fading 230 on the condition that the
current discrete value A is fallen in the vicinities of critical values THA,
THB and THC. The cross fading 230 is similar to that described in conjunc-
tion with the composition 227a, and detailed description is omitted for
avoiding repetition.
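The addition 228A to 228C followed by the division 229A to 229C amounts to averaging the selected discrete values; a hypothetical sketch is given below.

    # Illustrative averaging of the selected discrete values: the values are
    # added to one another and the sum is divided by the number of values, so
    # that the composite signal stays in a relatively narrow numerical range.

    def compose_average(selected_values):
        return sum(selected_values) / len(selected_values)

    # Example: transducers 11C and 11D selected (THB <= A < THC).
    print(compose_average([0.5, 0.75]))      # -> 0.625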
While the sound waves are found in the small sound pressure region, the composite acoustic signal S5b is produced from the intermediate acoustic signal S1 of the acoustic transducer with a high sensitivity. This is because the acoustic signal S1 has the dominant influence on the composite acoustic signal S5b, which is produced from the acoustic signals S1 to S4. The acoustic transducers 11A to 11D are selectively used for producing the composite acoustic signal S5b in the sound pressure region between PA and PC. However, when the sound waves have a value of sound pressure greater than the saturated value PC, the composite acoustic signal is produced from the intermediate acoustic signal S4 of the acoustic transducer 11D with the widest dynamic range.
As will be understood from the foregoing description, the silicon micro-
phone of the third embodiment is responsive to a wide sound pressure range
without sacrifice of a high sensitivity in the small sound pressure region.
Since the addition 228A, 228B and 228C is followed by the division 229A, 229B or 229C, the composite acoustic signal S5b is varied in a relatively narrow numerical range, so that the composite acoustic signal S5b is easily processed in an application device.
Random noise is reduced through the addition of plural discrete values.
The cross fading eliminates noise from the composite acoustic signal S5b.
Fourth Embodiment
Figure 14 shows the function of still another silicon microphone 1d embodying the present invention. The silicon microphone 1d largely comprises plural acoustic transducers 11A to 11D and an integrated circuit device 22d. The acoustic transducers 11A to 11D are similar to those of the first embodiment, and, for this reason, are not hereinafter detailed. Boxes 221b, 222, 223b, 224, 225, 226bA to 226bC, 227c, 229A to 229C and 230 and circles 228A to 228C stand for functions of the integrated circuit device 22d.
The composition 227c and composition control 221b are similar to those of the third embodiment, and the normalizations 226aA, 226aB and 226aC are replaced with normalizations 226bA, 226bB and 226bC. For this reason, description is focused on the normalizations 226bA to 226bC.
Although the discrete values of the digital acoustic signals DS1, DS2 and DS3 are amplified by the fixed ratios SD/ SA, SD/ SB and SD/ SC in the normalizations 226aA, 226aB and 226aC, the ratios SD/ SA, SD/ SB and SD/ SC are variable in the normalizations 226bA, 226bB and 226bC.
In detail, figure 15 shows the function of the normalization 226bA. The other normalizations 226bB and 226bC are identical in function with the normalization 226bA. The function of the normalization 226bA is broken down into amplification 2261A, determination of the discrete value of digital acoustic signal 2262A and determination of the amplification factor 2263A.
The amplification factor is determined as follows. The current discrete value of the digital acoustic signal DS4 is relayed from the sub-function of "acquisition of sound pressure data 222" to the sub-function of "determination of amplification factor 2263A", and the normalized discrete value of the digital acoustic signal DS1 is relayed from the sub-function "read-out of discrete value 2262A" to the sub-function of "determination of amplification factor 2263A". The ratio SD/ SA is multiplied by the ratio DDS4/ DDS1 through the sub-function of "determination of amplification factor 2263A", where DDS4 and DDS1 are representative of the current discrete value of the digital acoustic signal DS4 and the normalized discrete value of the digital acoustic signal DS1, respectively. The product (SD/ SA × DDS4/ DDS1) is supplied from the sub-function "determination of amplification factor 2263A" to the sub-function "amplification 2261A" as the amplification factor. The discrete value of the digital acoustic signal DS1 is amplified by the amplification factor (SD/ SA × DDS4/ DDS1) through the sub-function "amplification 2261A", and the product (DDS1 × (SD/ SA × DDS4/ DDS1)) is supplied from the sub-function of "amplification 2261A" to the sub-function "addition 228A".
As will be understood from the foregoing description, the silicon microphone 1d is responsive to the sound waves in the wide sound pressure range without sacrifice of high sensitivity in the small sound pressure region as similar to the first to third embodiments.
Moreover, the amplification factor is corrected through the multiplication of the ratio (SD/ SA) by the ratio (DDS4/ DDS1). The ratio (SD/ SA) is the correction factor due to the difference in sensitivity between the acoustic transducer 11D and the acoustic transducer 11A, and the ratio (DDS4/ DDS1) is another correction factor due to the difference in current sound pressure represented by the digital acoustic signals DS4 and DS1. Thus, the discrete values of the digital acoustic signals DS1, DS2 and DS3 are exactly normalized with respect to the discrete value of the digital acoustic signal DS4.
Fifth Embodiment
Turning to figures 16A and 16B of the drawings, yet another silicon microphone 1e largely comprises a silicon microphone device 10b, an integrated circuit device 20b and a package 30b. The silicon microphone device 10b has plural acoustic transducers 11A, 11B, 11C and 11D as similar to the silicon microphone device 10a of the first embodiment, and sound waves are concurrently converted to intermediate acoustic signals S1, S2, S3 and S4 by means of the acoustic transducers 11A to 11D. The intermediate acoustic signals S1 to S4 are supplied from the silicon microphone device 10b to the integrated circuit device 20b, and are subjected to a predetermined signal processing in the integrated circuit device 20b. A composite acoustic signal is produced through the predetermined signal processing on the basis of the intermediate acoustic signals S1 to S4, and is output from the silicon microphone 1e.
Although the single sound hole 34a is formed in the package 30a for all the acoustic converters 11A to 11D, the inner space of the package 30b is divided into plural sub-spaces by means of partition walls 36, 37 and 38 as shown in figure 17. The package 30b is broken down into a circuit board 31 and a lid 32b. The partition wall 36 upwardly projects from the circuit board 31, and extends in the lateral direction as indicated by the dots-and-dash line in figure 16A. The partition wall 37 downwardly projects from the inner surface of the lid 32b, and is held in contact with the upper surface of the partition wall 36. Thus, the inner space of the package 30b is divided into two sub-spaces, and the silicon microphone device 10 and the integrated circuit device 20b are assigned to the sub-spaces, respectively. The acoustic transducers 11A to 11D are connected to a conductive pattern 31ec on the circuit board 31 through pieces of bonding wire 33, and the conductive pattern 31ec is further connected to pads on the integrated circuit device 20b through other pieces of bonding wire 33.
The partition walls 38 downwardly project from the inner surface of the lid 32b over the silicon microphone device 10, and cross each other at right angles. The lower surfaces of the partition walls 38 are held in contact with the upper surface of the silicon microphone device 10. As a result, the sub-space, which is assigned to the silicon microphone device 10, is further divided into four compartments. Thus, each of the compartments is isolated from the other compartments by means of the partition walls 38. The four compartments are respectively assigned to the acoustic transducers 11A to 11D. Although the four acoustic transducers 11A to 11D are integrated on the single silicon substrate, more than one silicon substrate may be used for the silicon microphone device 10b.
Sound holes 34bA, 34bB, 34bC and 34bD are formed in the lid 32b, and are slightly offset from the four acoustic transducers 11A to 11D in directions away from the center of the partition walls 38, respectively. The reason for the offset arrangement is that a time delay is introduced among the arrivals of the sound waves at the acoustic transducers 11A to 11D. The four compartments are open to the atmosphere through the four sound holes 34bA to 34bD, respectively. The sound waves pass through the four sound holes 34bA to 34bD, and reach the acoustic transducers 11A to 11D. The other features of the package 30b are similar to the corresponding features of the package 30a, and no further description is hereinafter incorporated for the sake of simplicity.
The integrated circuit device 20b includes analog-to-digital converters 21 and an information processing system 22E as shown in figure 16B. A computer program runs on a microprocessor of the information processing system 22E, and realizes a function of "expansion of dynamic range 22a" and another function of "endowment of directivity 23". The function of "expansion of dynamic range 22a" is similar to the function of the integrated circuit device described in conjunction with the first embodiment, second embodiment, third embodiment or fourth embodiment. The function of "endowment of directivity 23" is hereinafter described in detail.
The function of "endowment of directivity 23" is broken down into sub-
functions of "normalization 231", "directivity control 23.2", "introduction of
delay 233A, 233B, 233C and 233D" and "selection and composition of de-
layed signals 234" as shown in figure 18. The function 23 endows the sili-
con microphone le with the directivity so that the amplitude of composite
acoustic signal S5 is varied depending upon the direction of a source of
sound waves. The physically separated arrangement of acoustic transducers
11A to 1 ID makes it possible to endow the silicon microphone le with the
directivity.
The endowment of directivity is achieved by introducing delays into the digital acoustic signals DS1 to DS4. In detail, the digital acoustic signals DS1 to DS3 are firstly normalized with respect to the digital acoustic signal DS4 as if the acoustic transducers 11A, 11B and 11C had a sensitivity equal to the sensitivity of the acoustic transducer 11D. The sub-function of "normalization 231" is similar to the sub-function 226aA/ 226aB/ 226aC or 226bA/ 226bB/ 226bC, and, for this reason, no further description is hereinafter incorporated for avoiding repetition.
The acoustic transducers, which are to participate in the endowment of directivity, are selected from the four acoustic transducers 11A to 11D through the sub-function of "directivity control 232", and the direction of directivity is determined also through the sub-function of "directivity control 232". Thereafter, the amount of delay to be introduced into the selected acoustic transducers is determined on the basis of the direction of directivity through the sub-function of "directivity control 232".
The amount of delay is relayed from the sub-function of "directivity con-
trol 232" to the sub-function of "introduction of delay 233A, 233B, 233C
and 233D". The normalized discrete values are relayed from the sub-
function of "normalization 231" to the selected sub-functions of "introduc-
tion of delay 233A, 233B, 233C and 233D", and the amount of delay is in-
troduced into the propagation of each of the normalized discrete values.
Thus, digital delayed acoustic signals DS1', DS2', DS3' and DS4' are relayed from the sub-function "introduction of delay 233A, 233B, 233C and 233D" to the sub-function of "selection and composition of delayed signals 234".
Since the sub-function "directivity control" informs the sub-function of
"selection and composition of delayed signals 234" of the selected acoustic
transducers, a composite acoustic signal S5e is produced from the selected
ones of the digital delayed acoustic signals DS1' to DS4' through the sub-
function of "selection and composition of delayed signals 234". The com-
posite acoustic signal S5e is endowed with the directivity through the beam
steering or null steering. The beam steering makes sound waves in a par-
ticular direction emphasized, and the null steering makes sound waves in a
particular direction reduced.
In detail, figure 19 shows the concept of the endowment of directivity on the assumption that the acoustic transducers 11A and 11B are selected from the four acoustic transducers 11A to 11D. The center of the diaphragm 15 of the acoustic transducer 11A is spaced from the center of the diaphragm 15 of the other acoustic transducer 11B by a distance "d". The sound waves are assumed to be propagated as plane waves for the sake of simplification. The plane waves are propagated from a sound source to the acoustic transducers 11A and 11B in a direction DR. When the plane waves arrive at the diaphragm 15 of the acoustic transducer 11A, there remains a distance (d sin θ) until the diaphragm 15 of the acoustic transducer 11B. The delay time is expressed as (d sin θ)/c, where c is the acoustic velocity. Thus, the excitation of the diaphragm 15 of the acoustic transducer 11B is delayed from the excitation of the diaphragm 15 of the acoustic transducer 11A by (d sin θ)/c.
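Assuming plane waves, the delay time (d sin θ)/c may be evaluated as sketched below; the spacing, the angle, the sampling rate and the acoustic velocity are hypothetical values given merely for illustration.

    import math

    # Illustrative computation of the arrival delay (d * sin(theta)) / c between
    # two diaphragms spaced by d, for plane waves arriving from angle theta,
    # and its expression as a (fractional) number of samples.

    def delay_seconds(d_m, theta_deg, c_m_per_s=343.0):
        return d_m * math.sin(math.radians(theta_deg)) / c_m_per_s

    def delay_in_samples(d_m, theta_deg, sample_rate_hz=48000, c_m_per_s=343.0):
        return delay_seconds(d_m, theta_deg, c_m_per_s) * sample_rate_hz

    print(delay_seconds(0.002, 30.0))        # spacing 2 mm, angle 30 degrees
    print(delay_in_samples(0.002, 30.0))     # a fraction of one sample here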
When the amount of delay is adjusted to (d sin θ)/c for the sub-function of "introduction of delay 233A", the delay time between the acoustic transducer 11A and the acoustic transducer 11B is cancelled. As a result, the introduction of the delay (d sin θ)/c makes the digital delayed acoustic signals DS1' and DS2' express the plane waves propagated in the direction DR as if the plane waves simultaneously arrived at both of the acoustic transducers 11A and 11B. Of course, the introduction of the delay (d sin θ)/c is proper to only the plane waves in the direction DR. There remains delay time in the propagation of plane waves in a direction different from the direction DR, and the residual delay time is increased for the plane waves propagated in other directions when θ is around 90 degrees.
The sub-function "selection and composition of delayed signals" is
equivalent to sub-function "addition" and/ or "substitution". When the com-
posite acoustic signal S5e is endowed with the directivity in the direction
DR through the beam steering, the digital delayed acoustic signal DS2' is
added to the digital delayed acoustic signal DS1'. As a result, the discrete
value of composite acoustic signal S5e is twice as large as the discrete value
of digital acoustic signal DS1. On the other hand, the discrete values of
composite acoustic signal S5e, which express sound waves propagated from
directions different from the direction DR, are less than the discrete value of
composite acoustic signal S5e expressing the sound waves propagated in the
direction DR due to the actual delay time different from the delay time (d
sin 6 )/ c. Thus, the sound waves propagated in the direction DR are empha-
sized through the beam steering.
On the other hand, when the composite acoustic signal S5e is endowed with the directivity through the null steering, the sub-function "subtraction" is used for the endowment of directivity. The discrete value of the digital delayed acoustic signal DS2' is subtracted from the discrete value of the digital delayed acoustic signal DS1', so that the discrete value of the composite acoustic signal S5e is minimized to zero for the sound waves propagated in the direction DR. On the other hand, the discrete value of the composite acoustic signal S5e is greater than zero for sound waves propagated from other directions due to the remaining delay time. In an extreme case, the discrete value of the composite acoustic signal S5e is greater than the discrete value of the digital delayed acoustic signal DS1'. Thus, the sound waves propagated in the direction DR are reduced through the null steering.
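The beam steering and the null steering on two delay-compensated signals amount to addition and subtraction, respectively; a hypothetical sketch follows.

    # Illustrative beam steering (addition) and null steering (subtraction) on
    # the delay-compensated signals DS1' and DS2'; for plane waves from the
    # direction DR the two signals coincide, so the sum doubles and the
    # difference vanishes.

    def beam_steer(ds1_delayed, ds2_delayed):
        return [a + b for a, b in zip(ds1_delayed, ds2_delayed)]

    def null_steer(ds1_delayed, ds2_delayed):
        return [a - b for a, b in zip(ds1_delayed, ds2_delayed)]

    aligned = [0.1, 0.4, -0.2]
    print(beam_steer(aligned, aligned))      # -> [0.2, 0.8, -0.4]
    print(null_steer(aligned, aligned))      # -> [0.0, 0.0, 0.0]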
Another set of acoustic transducers such as, for example, the acoustic transducers 11C and 11D may be selected from the four through the sub-function "directivity control 232".
As will be understood from the foregoing description, the silicon microphone 1e is responsive to the sound waves in the wide sound pressure range without sacrifice of the high sensitivity in the small sound pressure region as similar to the first to fourth embodiments.
Moreover, the acoustic converters 11A to 11D are accommodated in the compartments physically separated from one another, and the compartments are open to the atmosphere through the individual sound holes 34bA, 34bB, 34bC and 34bD, respectively. For this reason, the sound waves give rise to the excitation of the diaphragms 15 at different times, and the sub-function "endowment of directivity" makes it possible to emphasize the sound waves propagated from a particular direction. Thus, the silicon microphone 1e produces the directive composite acoustic signal S5e from the intermediate acoustic signals S1 to S4.
Furthermore, the acoustic transducers 11A, 11B, 11C and 11D make the silicon microphone compact. The compact directional microphone of the present invention is expected to supersede the prior art bulky directional microphone.
Although the acoustic transducers 11A, 11B, 11C and 11D are different in sensitivity from one another, it is also possible to form a semiconductor directional microphone from plural acoustic transducers approximately equal in sensitivity to one another.
Sixth Embodiment
Referring to figure 20 of the drawings, still another silicon microphone 1f embodying the present invention largely comprises a silicon microphone device 10F, an integrated circuit device 20f and a package (not shown). The silicon microphone device 10F includes plural acoustic transducers 11A, 11B, 11C and 11D, which are similar in structure to those of the silicon microphone device 10. For this reason, detailed description is not made on the acoustic transducers 11A, 11B, 11C and 11D for the sake of simplicity.
The integrated circuit device 20f includes analog-to-digital converters
(not shown) and an information processing system 22f. The information
processing system 22f is similar in system configuration to the information
processing system 22a except for equalizers 250a, 250b, 250c and 250d. For
this reason, description is focused on the equalizers 250a to 250d for avoid-
ing repetition.
In general, an acoustic transducer with low sensitivity is suited to conversion from loud sound to an electric signal, and exhibits better sound-to-signal converting characteristics for low-frequency sound components than for high-frequency sound components. On the other hand, when sound is produced at small loudness, an acoustic transducer with high sensitivity is well responsive to the sound, and exhibits better sensitivity to high-frequency sound components than to low-frequency sound components. This phenomenon is observed among the acoustic transducers 11A, 11B, 11C and 11D.
As described in conjunction with the silicon microphone 1a, the composite acoustic signal S5 is produced from selected one or two of the intermediate acoustic signals S1 to S4 depending upon the loudness of sound. When faint sound reaches the silicon microphone 1a, the information processing system 22a selects the acoustic transducer 11A or 11B. The selected acoustic transducer 11A or 11B tends to emphasize the high frequency components of the faint sound. On the other hand, when loud sound is input to the silicon microphone 1a, the information processing system 22a selects the acoustic transducer 11D or 11C. The selected transducer 11D or 11C tends to emphasize the low frequency components of the loud sound. When the composite acoustic signal S5 is converted to sound, users feel the reproduced sound slightly different from the original sound.
In order to improve the quality of the reproduced sound, the equalizers 250a to 250c are connected between the normalizations 226aA to 226aC and the composition 227a, and the equalizer 250d is connected between the analog-to-digital converter (not shown) and the composition 227a. Each of the equalizers 250a to 250d is responsive to plural frequency bands of the intermediate acoustic signal DS1, DS2, DS3 or DS4, and the signal components of the intermediate acoustic signal are amplified with different values of gain. The different values of gain are memorized in application goods such as, for example, a mobile telephone, as default values. The users may change the gain from the default values to their own values through a man-machine interface of the application goods.
In this instance, the equalizer 250a has a larger value of gain for low frequency band components, such as 100 Hz to 500 Hz, than for high frequency band components, and the equalizer 250d has a larger value of gain for high frequency band components, such as 1.5 kHz to 2 kHz of voice and 2 kHz to 10 kHz of musical instrument sound, than for low frequency band components. Thus, the equalizers 250a to 250d compensate the distortion due to the sound-to-signal converting characteristics of the acoustic transducers 11A to 11D.
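A crude sketch of an equalizer gain table with different gains for low and high frequency bands is given below; the band edges and gain values are assumptions and not the default values of the embodiment.

    # Illustrative gain table of a simple equalizer: each frequency band of the
    # intermediate acoustic signal is amplified with its own gain; the band
    # edges and gains below are hypothetical default values.

    EQUALIZER_250A = [          # favours low-frequency components
        ((100, 500),   2.0),
        ((500, 10000), 1.0),
    ]
    EQUALIZER_250D = [          # favours high-frequency components
        ((100, 1500),   1.0),
        ((1500, 10000), 2.0),
    ]

    def band_gain(frequency_hz, table):
        for (low, high), gain in table:
            if low <= frequency_hz < high:
                return gain
        return 1.0              # bands outside the table pass unchanged

    print(band_gain(300, EQUALIZER_250A))    # -> 2.0
    print(band_gain(3000, EQUALIZER_250D))   # -> 2.0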
Another function of the equalizers 250a to 250d is to make the plural frequency band components output from the acoustic transducers 11A to 11D equalized or balanced at the composition 227a through regulation of the frequency band components. In the regulation, a certain value of sound pressure serves as a "reference" common to the acoustic transducers 11A to 11D. A mean value of sound pressure in a predetermined frequency band of voice may serve as the reference for the acoustic transducers 11A to 11D. The predetermined frequency band for voice may be 500 Hz to 10 kHz. Otherwise, a value of sound pressure at 1 kHz may serve as the reference.
After the regulation, the intermediate acoustic signals DS1 to DS4 are
supplied from the equalizers 250a to 250d to the composition 227a, and the
composition 227a produces the composite acoustic signal S5 from the regu-
lated intermediate acoustic signals DS1 to DS4. The regulation of frequency
band components is desirable, because the composition 227a keeps the com-
posite acoustic signal S5 stable at the change from one of the intermediate
acoustic signals DS1 to DS4 to another intermediate acoustic signal. Thus,
the users feel the reproduced sound natural at the change of the acoustic transducers 11A to 11D by virtue of the regulation of the frequency band components among the intermediate acoustic signals DS1 to DS4.
Although particular embodiments of the present invention have been
shown and described, it will be apparent to those skilled in the art that vari-
ous changes and modifications may be made without departing from the
spirit and scope of the present invention.
The silicon microphone device 10 and integrated circuit device 20a may
be mounted on a multi-layer board. In this instance, the conductive pads are
connected to a multi-layer interconnection of the multi-layer board. A con-
ductive layer of the multi-layer board and a lid serve as a shield structure.
The area of the diaphragms 15 is narrowed in the order of the acoustic transducers 11A, 11B, 11C, 11D so as to make the sensitivities SA to SD of the acoustic transducers 11A to 11D different in the above-described embodiments. However, other design factors, which have influences on the amplitude of vibrations, may also make the sensitivities SA to SD different. For this reason, the diaphragms 15 may be different in flexural rigidity, i.e., the geometrical moment of inertia and/or material, from one another. The thicker the diaphragm is, the lower the sensitivity is. The larger the stress in the diaphragm is, the lower the sensitivity is.
Although the integrated circuit device 22b realizes the composition
through the computer program, the microcomputer and computer program
may be replaced with a wired-logic circuit. For example, the digital acous-
tic signals DS1 to DS4 may be supplied to adders synchronized with one an-
other by means of a timing control signal from a frequency multiplier.
A DSP (Digital Signal Processor) is available for the information processing
system.
The normalization 226aA to 226aC may be carried out on the digital
acoustic signals DS1 to DS4 before the function 227b. In this instance, the
normalization makes it possible to enhance the fidelity of the composite
acoustic signal S5a.
The sum and the square root of the sum of square values do not set any limit to the technical scope of the present invention. In the case where an integrated circuit calculates the square root of the sum of square values, the polarity of the intermediate acoustic signals S1 to S4 is eliminated from the square values. In order to keep the piece of polarity data in the composite acoustic signal indicated by the plots PL15b, the integrated circuit device may determine the composite acoustic signal through the following steps, a sketch of which is given after the list.
1. Keep pieces of polarity data indicative of the positive sign or negative sign added to the discrete values of the digital acoustic signals DS1 to DS4 in memory locations of the working memory;
2. Square the positive discrete values and/or negative discrete values;
3. Add the piece of polarity data to the square values;
4. Add the positive square values and/or negative square values to one another;
5. Keep the piece of polarity data of the sum of square values in a memory location of the working memory;
6. Find the square root of the absolute value of the sum; and
7. Add the piece of polarity data to the square root.
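A minimal sketch of the seven steps listed above is given below; the function name is hypothetical.

    import math

    # Illustrative sketch of the listed steps: the polarity of the discrete
    # values is kept, the squares carry the re-attached signs, the signed
    # squares are summed, and the signed square root of the sum is returned.

    def signed_root_sum_of_squares(values):
        signed_squares = [math.copysign(v * v, v) for v in values]   # steps 1-3
        total = sum(signed_squares)                                   # step 4
        sign = 1.0 if total >= 0 else -1.0                            # step 5
        return sign * math.sqrt(abs(total))                           # steps 6-7

    print(signed_root_sum_of_squares([0.3, -0.4, 0.1]))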
Silicon does not set any limit to the technical scope of the present invention. The term "silicon" merely indicates a typical example of the semiconductor material. Another sort of semiconductor microphone device may form a part of a semiconductor microphone of the present invention.
The optical sound waves-to-electric signal converters on the gallium-
arsenide substrates disclosed in Japan Patent Application laid-open No.
2001-169395 may form a semiconductor microphone together with the inte-
grated circuit device 20a or 20b. Although the optical sound waves-to-
electric signal converters are used for expansion of bandwidth, it is possible
to redesign the vibratory plates for different values of sensitivity. The
acoustic transducers 11A to 11D are replaced with the optical sound waves-to-electric signal converters with the redesigned vibratory plates.
Two acoustic transducers, three acoustic transducers or more than four
acoustic transducers may be connected in parallel to the integrated circuit
device.
The silicon microphone devices 10 and 10A/ 10C may be housed in a package different from a package for the integrated circuit device 20a/ 20b.
The sub-function of "cross-fading" is not an indispensable feature of the
present invention. The discrete values of the digital acoustic signals may be simply formed into the composite acoustic signal without the cross fading. An interpolation may instead be employed in the vicinity of the critical values THA, THB and THC.
A single equalizer may be shared among more than one of the acoustic transducers 11A to 11D. In this instance, the single equalizer is accompanied with a selector, and a control signal is supplied from the selection of acoustic transducer 223a to the selector. When the selection of acoustic transducer 223a steers the selector from one of the intermediate acoustic signals to another one, the single equalizer carries out the compensation or regulation for the newly selected intermediate acoustic signal. The single equalizer makes the system configuration simple, and the manufacturer reduces the production cost.
The component parts and jobs described in the embodiments are correlated with the claim language as follows.
The packages 30a; 30Aa; 30b serve as "a housing". The silicon micro-
phones 1a, 1Aa, 1b, 1c, 1d and 1e serve as a "semiconductor microphone",
and the integrated circuit devices 20a, 20Aa and 20b and computer programs
running on the microprocessors of the integrated circuit devices 20a and 20b
as a whole constitute a "signal processor". The intermediate signals SI, S2,
S3 and S4 and digital intermediate acoustic signals DS1, DS2, DS3 and DS4
serve as "intermediate acoustic signals", and the composite acoustic signals
S5, S5a, S5b and S5e are corresponding to a "composite acoustic signal".
One of or two of the digital acoustic signals DS1 to DS4, which are selected
through the sub-function of "selection of acoustic transducer 223a/ 223b"
are "optimum acoustic signals". "A current value of the sound pressure of
said sound waves" is obtained through the sub-function of "acquisition of
sound pressure data 222.
The information processing system 22a/ 22b/ 22c/ 22d/ 22E and a part of
the computer program realizing the sub-functions of "acquisition of sound
pressure data 222", "selection of acoustic transducers 223a/ 223b", "acquisi-
tion of saturated sound pressure data 224" and "determination of cross fad-
ing coefficients 225" as a whole constitute "a composition controller", and
the information processing system 22a/ 22b/ 22c/ 22d/ 22E and another part
of the computer program realizing the sub-functions of "normalization
226aA/ 226aB/ 226aC or 226bA/ 226bB/ 226bC" and ^composition/ cross
fading 227a/ 227b/ 230" as a whole constitute a "composer"
58
The information processing system 22a/ 22b/ 22c/ 22d/ 22E and a part of
the computer program realizing the sub-functions of "acquisition of sound
pressure data 222", "selection of acoustic transducers 223a/ 223b" and "ac-
quisition of saturated sound pressure data 224" as a whole constitute a "se-
lector", and the information processing system 22a/ 22b/ 22c/ 22d/ 22E and
a part of the computer program realizing the sub-function "determination of
cross fading coefficients 225" as a whole constitute a "determiner". The
cross fading coefficients are "parameters".
The information processing system 22a/ 22b/ 22c/ 22d/ 22E and a part of
the computer program realizing the sub-function of "normalization 226aA/
226aB/ 226aC or 226bA/ 226bB/ 226bC" as a whole constitute a "normaliza-
tion unit", and the information processing system 22a/ 22b/ 22c/ 22d/ 22E
and another part of the computer program realizing the sub-function of
"composition/ cross fading 227a/ 227b/ 230" as a whole constitute a "merg-
ing unit".
The information processing system 22E and a part of the computer pro-
gram realizing the sub-functions of "directivity control 232", "introduction
of delay 233A, 233B, 233C and 233D" and "selection and composition of
delayed signals 234" as a whole constitute an "endower". The information
processing system 22E and a part of the computer program realizing the sub-
functions of "directivity control 232" as a whole constitute a "directivity
control unit", and the information processing system 22E and another part of
the computer program realizing the sub-functions of "introduction of delay
233A, 233B, 233C and 233D" as a whole constitute a "delay unit". The in-
formation processing system 22E and yet another part of the computer pro-
gram realizing the sub-functions of "selection and composition of delayed
signals 234" as a whole constitute an "emphasizing unit". The direction
"DR." is corresponding to a "particular direction".
The back plate 16 serves as a "stationary electrode", and the diaphragm
15 serves as "vibratory electrode".
WE CLAIM:
1. A semiconductor microphone (1a; 1Aa; 1b; 1c; 1d; 1e; 1f) connected to a
signal processor (20a; 20Aa; 20b) for converting sound waves to plural in-
termediate acoustic signals (S1, S2, S3, S4, DS1, DS2, DS3, DS4), said sig-
nal processor (20a; 20Aa, 20b; 20f) carrying out a signal processing on said
plural intermediate acoustic signals (S1, S2, S3, S4, DS1, DS2, DS3, DS4)
so as to produce a composite acoustic signal (S5; S5a; S5b; S5e), said semi-
conductor microphone (1a; 1Aa; 1b; 1c; 1d; 1e) comprising an acoustic transducer unit (10; 10A/ 10C) for converting said sound waves to said intermediate acoustic signals (S1, S2, S3, S4, DS1, DS2, DS3, DS4),
characterized by further comprising
a housing (30a; 30Aa; 30b) having an inner space, and formed with a
sound hole (34a; 34Aa; 34bA, 34bB, 34bC, 34bD) which permits said sound
waves to enter said inner space,
in that
said acoustic transducer unit (10; 10A/ 10C) includes plural acoustic transducers (11A, 11B, 11C, 11D; 11A', 11C) accommodated in said inner space, having respective values of sensitivity different from one another and respective values of saturated sound pressure of said sound waves different from one another, converting said sound waves to said plural intermediate acoustic signals (S1, S2, S3, S4, DS1, DS2, DS3, DS4), respectively, and providing said plural intermediate acoustic signals (S1, S2, S3, S4, DS1, DS2, DS3, DS4) to said signal processor (20a; 20Aa; 20b).
2. The semiconductor microphone as set forth in claim 1, in which said sig-
nal processor (20a; 20Aa; 20b) is accommodated in said inner space together
with said plural acoustic transducers (11A, 11B, 11C, 11D; 11A', 11C).
3. The semiconductor microphone as set forth in claim 2, in which said sig-
nal processor includes (20a; 20Aa; 20b)
a composition controller (22a/ 22b/ 22c/ 22d/ 22E/ 22f, 222, 223a/ 223b,
224, 225) selecting at least one optimum acoustic signal (DS1/ DS2/ DS3/
DS4) from said plural intermediate acoustic signals (S1, S2, S3, S4, DS1,
DS2, DS3, DS4) on the basis of a current value of the sound pressure of said
sound waves and changing said at least one optimum acoustic signal (DS1,
DS2, DS3, DS4) depending upon said current value of said sound pressure,
and
a composer (22a/ 22b/ 22c/ 22d/ 22E/ 22f, 226aA, 226aB, 226aC/
226bA, 226bB, 226bC) connected to said composition controller (22a/ 22b/
22c/ 22d/ 22E/ 22f, 222, 223a/ 223b, 224, 225) and producing said compos-
ite acoustic signal (S5; S5a; S5b; S5e) from said at least one optimum acous-
tic signal (DS1/ DS2/ DS3/ DS4).
4. The semiconductor microphone as set forth in claim 3, in which said
composition controller (22a/ 22b/ 22c/ 22d/ 22E/ 22f, 222, 223a/ 223b, 224,
225) includes
a selector (22a/ 22b/ 22c/ 22d/ 22E/ 22f, 222, 223a/ 223b, 224) select-
ing one of said plural intermediate acoustic signals (S1, S2, S3, S4, DS1,
DS2, DS3, DS4) as said at least one optimum acoustic signal (DS1/ DS2/
DS3/ DS4) in a sound pressure range except for vicinities of the values of
said saturated sound pressure (THA (PA), THB (PB), THC (PC)) and more
than one intermediate acoustic signal (S1, S2, S3, S4, DS1, DS2, DS3, DS4)
as said at least one optimum acoustic signal (DS1/ DS2/ DS3/ DS4) in said
vicinities of said values of said saturated sound pressure (THA (PA), THB
(PB), THC (PC)), and
a determiner (22a/ 22b/ 22c/ 22d/ 22E/ 22f, 225) connected to said se-
lector (22a/ 22b/ 22c/ 22d/ 22E/ 22f, 222, 223a/ 223b, 224) and supplying
said composer (22a/ 22b/ 22c/ 22d/ 22E/ 22f, 226aA, 226aB, 226aC/ 226bA,
226bB, 226bC) with parameters for merging said more than one optimum acoustic
signal (DS1/ DS2/ DS3/ DS4) into said composite acoustic signal (S5; S5a;
S5b; S5e).
5. The semiconductor microphone as set forth in claim 4, in which said pa-
rameters make said composer (22a/ 22b/ 22c/ 22d/ 22E/ 22f, 226aA, 226aB,
226aC/ 226bA, 226bB, 226bC) merge said more than one optimum acoustic
signal (DS1/ DS2/ DS3/ DS4) into said composite acoustic signal (S5; S5a;
S5b; S5e) through a fading technique while said current value of said sound
pressure is being found in said vicinities of said values of said saturated
sound pressure (THA (PA), THB (PB), THC (PC)).
6. The semiconductor microphone as set forth in claim 3, in which said
composer (22a/ 22b/ 22c/ 22d/ 22E/ 22f, 226aA, 226aB, 226aC/ 226bA,
226bB, 226bC) includes
a normalization unit (22a/ 22b/ 22c/ 22d/ 22E/ 22f, 226aA/ 226aB/
226aC; 226bA/ 226bB/ 226bC) normalizing said intermediate acoustic sig-
nals (DS1/ DS2/ DS3/ DS4) with respect to one of said plural intermediate
acoustic signals (DS4) serving as a reference signal on the basis of said val-
ues of said sensitivity, and
a merging unit (22a/ 22b/ 22c/ 22d/ 22E/ 22f, 227a/ 227b/ 230) produc-
ing said composite acoustic signal (S5; S5a; S5b; S5e) from one of the nor-
malized intermediate acoustic signals in a sound pressure range except for
vicinities of the values of said saturated sound pressure (THA (PA), THB
(PB), THC (PC)) and more than one normalized intermediate acoustic signal
in said vicinities of said values of said saturated sound pressure (THA (PA),
THB (PB), THC (PC)).
7. The semiconductor microphone as set forth in claim 6, in which said
merging unit adds (22b/ 22c, 227b/ 227c) values of said normalized interme-
diate acoustic signals (DS1/ DS2/ DS3) to a value of said reference signal
(DS4) for determining a sum, and divides said sum by the number of said in-
termediate acoustic signals (DS1/ DS2/ DS3/ DS4) so as to determine a cur-
rent value of said composite acoustic signal (S5a, S5b).
8. The semiconductor microphone as set forth in claim 6, in which said
normalizing unit (22d, 226bA/ 226bB/ 226bC) further carries out the nor-
malization on the basis of a value of said reference signal (DS4) and values
of the normalized intermediate acoustic signal (DS1/ DS2/ DS3) except for
said reference signal (DS4).
9. The semiconductor microphone as set forth in claim 3, in which said
signal processor further includes an endower (22E, 232, 233A/ 233B/ 233C/
233D, 234) endowing said composite acoustic signal (S5e) with a directivity.
10. The semiconductor microphone as set forth in claim 9, in which said
endower includes
a directivity control unit (22E, 232) determining more than one of said
plural acoustic transducers (11A, 11B, 11C, 11D) participating in the en-
dowment of said directivity for the sound waves propagated from a particu-
lar direction (DR) and calculating the amount of delay time excessively con-
sumed until said sound waves arrive at said more than one of said plural
acoustic transducers (11A, 11B, 11C, 11D) except for one of said plural
acoustic transducers serving as a reference transducer,
a delay unit (22E, 233A/ 233B/ 233C/ 233D) connected to said directiv-
ity control unit (22E, 232) and introducing said amount of delay time into
propagation of said intermediate acoustic signals (DS1, DS2, DS3, DS4) so
as to make said sound waves simultaneously arrive at said more than one of
said plural acoustic transducers (11A, 11B, 11C, 11D), and
an emphasizing unit (22E, 234) connected to said directivity control
unit (22E, 232) and said delay unit (22E, 233A/ 233B/ 233C/ 233D) and car-
rying out a calculation on the delayed, intermediate acoustic signals for em-
phasizing the sound waves propagating in said particular direction (DR).
11. The semiconductor microphone as set forth in claim 2, in which said
plural acoustic transducers (11A, 11B, 11C, 11D; 11A', 11C) are of the type having a stationary electrode (16) and a vibratory electrode (15) spaced from said stationary electrode (16), and make said sound waves converted to said intermediate acoustic signals (S1, S2, S3, S4) through variation of
capacitance between said stationary electrode (16) and said vibratory elec-
trode (15).
12. The semiconductor microphone as set forth in claim 11, in which said
plural acoustic transducers (11A, 11B, 11C, 11D) are fabricated on a single
semiconductor chip (10).
13. The semiconductor microphone as set forth in claim 12, in which said
single semiconductor chip (10) is packaged together with another semicon-
ductor chip where said signal processor (22a, S1 to S37) is fabricated.
14. The semiconductor microphone as set forth in claim 11, in which the vi-
bratory electrodes (15) of said plural acoustic transducers (11A, 11B, 11C, 11D; 11A', 11C) are different in dimensions from one another so as to
make said values of said sensitivity different from one another.
15. The semiconductor microphone as set forth in claim 1, further compris-
ing at least one equalizer (250a, 250b, 250c, 250d) provided in association
with said plural acoustic transducers (11A, 11B, 11C, 11D) for compensat-
ing distortion of sound-to-signal converting characteristics of said plural
acoustic transducers.
16. A semiconductor microphone (1e) for converting sound waves to a com-
posite acoustic signal, comprising:
a housing (30b) having an inner space;
plural acoustic transducers (11A, 11B, 11C, 11D) provided in said inner space, and converting said sound waves to plural intermediate acoustic
signals; and
a signal processor (20b) connected to said plural acoustic transducers
(11A, 11B, 11C, 11D) for producing a composite acoustic signal,
characterized by further comprising
a partition wall structure (36, 37, 38) provided in said inner space so as
to divide said inner space into plural compartments open to the outside of
said housing (30b) selectively through plural sound holes (34bA, 34bB,
34bC, 34bD) formed in said housing (30b),
and in that
said signal processor has an endower (23) introducing delay into selected
ones of said plural intermediate acoustic signals so as to produce delayed
acoustic signals and forming said composite acoustic signal from said de-
layed acoustic signals, thereby giving directivity to said semiconductor mi-
crophone.
Dated 14th February 2008