Abstract: The present invention provides a system and method that allows users to separate, choose and control the selective sounds they want to hear and those they do not. The invention provides a wearable device integrated with a smartphone via an application that can separate, choose and control selective sounds. All audio signal processing is done on the phone only. The device comprises two speakers and multiple microphones, which are required for signal processing. As the sound from different sources such as instruments, people, machines, noise etc. reaches the microphones, it is converted to the digital domain and fed to the phone. The application installed on the phone filters out and separates the sounds coming from each source and allows the user to choose which sounds to amplify/attenuate.
FIELD OF INVENTION:
The present invention in general relates to systems and methods for selective
audio enhancement and separation. More particularly, the present invention
relates to a smart earphone integrated with a smartphone via an application to amplify
and attenuate selective sounds, where the user decides which sound(s) he wants to
selectively hear and adjusts their volume. All audio signal processing is done on
the phone only.
BACKGROUND AND PRIOR ART:
In day-to-day life, when the sound from different sources such as instruments,
people, machines, noise etc. reaches the ears, it is difficult to follow the sound of
a particular source. Available hearing aids apply filtering/directional
amplification to enhance the quality of the sound by reducing the noise. Reference
may be made to the following:
Publication No. US2014286497 relates to methods, systems, and apparatuses for
improved multi-microphone source tracking and noise suppression. In
multi-microphone devices and systems, frequency domain acoustic echo cancellation is
performed on each microphone input, and microphone levels and sensitivity are
normalized. Methods, systems, and apparatuses are also described for improved
acoustic scene analysis and source tracking using steered null error transforms,
on-line adaptive acoustic scene modeling, and speaker-dependent information.
Switched super-directive beamforming reinforces desired audio sources and
closed-form blocking matrices suppress desired audio sources based on spatial
information derived from microphone pairings. Underlying statistics are tracked
and used to update filters and models. Automatic detection of single-user and
multi-user scenarios, and single-channel suppression using spatial information,
non-spatial information, and residual echo are also described.
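
By way of illustration, the following is a minimal delay-and-sum beamformer, a simpler relative of the super-directive beamforming referenced above. It is an illustrative sketch only: the microphone geometry, sample rate and steering direction are hypothetical inputs, not parameters taken from the cited publication.

```python
import numpy as np

def delay_and_sum(mics, fs, mic_positions, direction, c=343.0):
    """Steer an array toward a plane-wave `direction` (unit vector) by
    phase-aligning each channel so the target source adds coherently.

    mics:          (n_mics, n_samples) time-domain signals
    mic_positions: (n_mics, 3) coordinates in metres (hypothetical layout)
    """
    n_mics, n_samples = mics.shape
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    acc = np.zeros(len(freqs), dtype=complex)
    for m in range(n_mics):
        tau = mic_positions[m] @ direction / c   # per-mic propagation delay
        acc += np.fft.rfft(mics[m]) * np.exp(2j * np.pi * freqs * tau)
    return np.fft.irfft(acc / n_mics, n=n_samples)
```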
US Publication No. 20140328487 relates to a sound signal processing apparatus
including: an observed signal analysis unit that receives as an observed signal a
sound signal for a plurality of channels obtained by a sound signal input unit
formed of a plurality of microphones placed at different positions and estimates a
sound direction and a sound segment of a target sound which is sound to be
extracted; and a sound source extraction unit that receives the sound direction and
sound segment of the target sound estimated by the observed signal analysis unit
and extracts the sound signal for the target sound.
US Publication No. 20150117649 relates to a selective audio source enhancement
and extraction solution based on a methodology, referred to herein as Blind
Source Separation (BSS). Multichannel BSS is able to segregate the reverberated
signal contribution of each statistically independent source observed at the
microphones, or other sources of audio input. One possible application of BSS is
the blind source extraction (BSE) of a specific target source from the remaining
noise with a limited amount of distortion when compared to traditional
enhancement methods.
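
For illustration, the following minimal sketch performs blind source separation of an instantaneous two-channel mixture using FastICA from scikit-learn. The synthetic sources and mixing matrix are hypothetical, and the instantaneous-mixing assumption is a simplification of the reverberated case the publication addresses.

```python
import numpy as np
from sklearn.decomposition import FastICA

fs = 16000
t = np.arange(fs) / fs
s1 = np.sin(2 * np.pi * 440 * t)                 # a tone
s2 = np.sign(np.sin(2 * np.pi * 3 * t))          # a square wave
S = np.c_[s1, s2]                                # independent sources
A = np.array([[1.0, 0.5], [0.4, 1.0]])           # unknown mixing matrix
X = S @ A.T                                      # two-microphone observation

ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)                     # sources recovered up to
                                                 # permutation and scaling
```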
US Publication No. 20140226838 relates to a microphone with closely spaced
elements used to acquire multiple signals from which a signal from a desired
source is separated. For example, a signal from a desired source is separated from
background noise or from signals from specific interfering sources. The signal
separation approach uses a combination of direction-of-arrival information or
other information determined from variation such as phase, delay, and amplitude
among the acquired signals, as well as structural information for the signal from
the source of interest and/or for the interfering signals. Through this combination
of information, the elements may be spaced more closely than may be effective
for conventional beamforming approaches. In some examples, all the microphone
elements are integrated into a single micro-electro-mechanical system
(MEMS).
US Publication No. 20130294611 relates to audio signal processing and source
separation methods and apparatus utilizing independent component analysis
(ICA) in conjunction with acoustic echo cancellation (AEC).
Patent No. US8892432 relates to a signal processing system that localizes a
plurality of input signals containing varying proportions of signal components at
different positions in space by a multiple rendering section. This is processing for
reducing distortion at the cost of reduced performance of signal separation.
However, since performance of separation may be compensated by intrinsic
functionality of the human auditory organ, distortion may be reduced while
maintaining performance of signal separation.
Patent No. US8144896 relates to a computer-implemented audio blind source
separation system including a frequency transform component for transforming a
plurality of sensor signals to a corresponding plurality of frequency-domain
sensor signals. The system further includes a frequency domain blind source
separation component for estimating a plurality of source signals per frequency
band based on the plurality of frequency domain sensor signals and processing
matrices computed independently for each of a plurality of frequency bands.
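
As an illustration of the per-frequency-band structure described above, the sketch below applies an independent 2x2 demixing matrix to each STFT band. How the matrices are estimated is the substance of the cited patent and is not reproduced here; the function simply assumes they are given.

```python
import numpy as np
from scipy.signal import stft, istft

def per_band_demix(x, fs, demix):
    """Apply an independent 2x2 demixing matrix to each frequency band.

    x:     (2, n_samples) two-channel recording
    demix: (n_bands, 2, 2) per-band matrices; with nperseg=1024 the STFT
           has n_bands = 513. Estimating them is the job of the BSS
           algorithm and is not shown here.
    """
    f, t, X = stft(x, fs=fs, nperseg=1024)   # X: (2, n_bands, n_frames)
    Y = np.einsum('fij,jft->ift', demix, X)  # demix each band independently
    _, y = istft(Y, fs=fs, nperseg=1024)
    return y
```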
Patent No. US8874439 relates to a signal separation method that includes sampling
a first input signal, which is a mixture of different signals comprising signals from
at least a first signal source and a separate, second signal source, to obtain first
frequency components in the first input signal. A second input signal, which is a
mixture of different signals comprising signals from at least the first signal source
and the second signal source, is also sampled to obtain second frequency
components in the second input signal. Next, the first frequency components and
the second frequency components are processed to extract frequency dependency
information between the first and the second input signals. The extracted
frequency dependency information is then used to separate a signal originated
from the first signal source from a signal originated from the second signal
source.
Patent No. US7383178 relates to systems and methods for speech processing
useful to identify and separate desired audio signal(s), such as at least one speech
signal, in a noisy acoustic environment. The speech process operates on a
device(s) having at least two microphones, such as a wireless mobile phone,
headset, or cell phone. At least two microphones are positioned on the housing of
the device for receiving desired signals from a target, such as speech from a
speaker. The microphones are positioned to receive the target user's speech, but
also receive noise, speech from other sources, reverberations, echoes, and other
undesirable acoustic signals. Both microphones receive audio signals that
include the desired target speech and a mixture of other undesired acoustic
information. The mixed signals from the microphones are processed using a
modified ICA (independent component analysis) process. The speech process
uses a predefined speech characteristic to assist in
identifying the speech signal. In this way, the speech process generates a desired
speech signal from the target user, and a noise signal. The noise signal may be
used to further filter and process the desired speech signal.
Patent No. US7366662 relates to a process for generating an acoustically distinct
information signal based on recordings in a noisy acoustic environment. The
process uses a set of at least two spaced-apart transducers to capture noise and
information components. The transducer signals, which have both a noise and
information component, are received into a separation process. The separation
process generates one channel that is dominated by noise, and another channel
that is a combination of noise and information. An identification process is used
to identify which channel has the information component. The noise-dominant
signal is then used to set process characteristics that are applied to the
combination signal to efficiently reduce or eliminate the noise component. In this
way, the noise is effectively removed from the combination signal to generate a
good quality information signal. The information signal may be, for example, a
speech signal, a seismic signal, a sonar signal, or other acoustic signal.
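
The two-channel scheme described above, in which a noise-dominant channel is used to clean the combination channel, is in the spirit of classical adaptive noise cancellation. A minimal LMS sketch of that general idea follows; the tap count and step size are arbitrary illustrative values, not parameters from the cited patent.

```python
import numpy as np

def lms_noise_cancel(primary, noise_ref, n_taps=32, mu=0.01):
    """Subtract an adaptive estimate of the noise, derived from the
    noise-dominant reference channel, from the primary (signal + noise)
    channel. Classic LMS; tap count and step size are illustrative.
    """
    w = np.zeros(n_taps)
    out = np.zeros_like(primary, dtype=float)
    for n in range(n_taps, len(primary)):
        x = noise_ref[n - n_taps:n][::-1]        # most recent samples first
        e = primary[n] - w @ x                   # error = cleaned sample
        w += mu * e * x                          # LMS weight update
        out[n] = e
    return out
```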
Patent No. US7295972 relates to a method of separating mixture signals received
through two microphones into N source signals while eliminating noise from the
received signals in real time. There is provided a method of separating first and
second mixture signals received through two sensors into source signals,
including: (a) calculating a global signal absence probability for each frame of the
mixture signal and a local signal absence probability for each frequency band of a
frame for at least one of the first and second mixture signals; (b) estimating a
spectrum vector for each frequency band in which a noise signal is eliminated
using the global signal absence probability calculated in the calculating; (c)
determining a plurality of frequency bands including at least one of a noise
signal and a source signal using the local signal absence probability, and
generating a source label vector which consists of a plurality of frequency bands
assigned to each source, using an attenuation parameter and a delay parameter
generated for each of the determined frequency bands; and (d) multiplying the
spectrum vector estimated for each frequency band obtained in the estimating by
the source label vector obtained in the determining, and obtaining signals
separated according to the source signals.
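
The labelling of frequency bands by attenuation and delay parameters, followed by multiplication with a source label vector, resembles time-frequency masking. The crude sketch below labels STFT bins by the sign of an inter-channel delay estimate and masks accordingly; it is an assumption-laden simplification, not the probabilistic method of the cited patent.

```python
import numpy as np
from scipy.signal import stft, istft

def mask_by_delay(x1, x2, fs):
    """Assign each time-frequency bin to one of two sources from the sign
    of a crude inter-channel delay estimate, then mask (DUET-style sketch).
    """
    f, t, X1 = stft(x1, fs=fs, nperseg=512)
    _, _, X2 = stft(x2, fs=fs, nperseg=512)
    # Inter-channel phase difference -> per-bin delay estimate
    phase = np.angle(X2 * np.conj(X1))
    delay = -phase / (2 * np.pi * np.maximum(f[:, None], 1.0))
    label = delay > 0                            # binary source label vector
    _, src_a = istft(np.where(label, X1, 0), fs=fs, nperseg=512)
    _, src_b = istft(np.where(label, 0, X1), fs=fs, nperseg=512)
    return src_a, src_b
```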
Patent No. US7243060 relates to a method for recovering an audio signal
produced by a desired source from an audio channel in which audio signals from a
plurality of different sources are combined. The method includes the steps of
processing the audio channel with a joint acoustic modulation frequency
algorithm to separate audio signals from the plurality of different sources into
distinguishable components. Next, each distinguishable component corresponding
to any source that is not desired in the audio channel is masked, so that the
distinguishable component corresponding to the desired source remains
unmasked. The distinguishable component that is unmasked is then processed
with an inverse joint acoustic modulation frequency algorithm, to recover the
audio signal produced by the desired source.
Publication No. CN104053107 (A) relates to a hearing aid device and method, in
particular to the hearing aid device and method for separating and positioning
sound sources in noise environments. The hearing aid device comprises a
microphone array, a sound source position displayer, a sound source separating
and positioning module, a signal collector, an analog signal amplifier and a sound
source selection keyboard. The sound source separating and positioning module
adopts a cross-correlation method to process the collected microphone signals to
obtain eight initial positions of the sound sources relative to the microphone array.
The collected microphone signals are processed through a blind source separation
method based on space search to obtain sound sources containing the voices of a
talker and other persons. A user selects the sound sources containing the voice of
the talker through the sound source selection keyboard, and the accurate position
of the talker relative to the microphone array is computed according to the selected
sound source; the position of the sound source is displayed on the sound
source position displayer.
Publication No. CN202749088 (U) relates to a voice reinforcing system using a
blind source separation algorithm. The voice reinforcing system using the blind
source separation algorithm comprises multiple microphone elements, multiple
analog-to-digital converters, a DSP (digital signal processor) controller, a single-chip
microcomputer, a display module, a keyboard module, an analog-to-digital
converter and a sound equipment interface, wherein each microphone element is
connected with the DSP controller by one analog-to-digital converter, the DSP
controller is connected with the sound equipment interface by the analog-to-digital
converter, and the DSP controller is connected with the display module
and the keyboard module by the single-chip microcomputer.
Publication No. KR20010084482 (A) relates to a selective attention concentration
apparatus and method to separate a noise and a desired signal from a noise source
and selectively remove the noise using the separated signal.
Thus, in view of the above prior art, various techniques have been employed for
the extraction and separation of sources, e.g. blind source separation, acoustic scene
modelling or beamforming. A few publications use source extraction only for a
target sound, and a few of them use the structural information of the source for
source separation. Publications using the Blind Source Separation
technique rely on frequency domain analysis. There is no complete existing
solution in the form of a product/device that allows users to separate, choose and
control what they want to listen to. Additionally, in prior art solutions, all the signal
processing is done in the earpiece itself using DSPs and ADCs; it thus involves
active components in the earpiece, which makes it costly. Further, none of the
patents presented in the prior art provides a complete solution in the form of an
earphone which is controlled by a smartphone application.
Hence, there is a need to develop an improved system and method that can
separate, choose and control selective sounds based on the user's choice and that is
cost effective.
Thus, in order to solve one or more problems associated with the prior art
systems, the present invention provides a wearable device integrated with a smartphone
that can separate, choose and control selective sounds. All the signal processing is
done in the smartphone, thereby making the earphone very cost effective because
it does not involve any active components.
OBJECTS OF THE INVENTION:
The principal object of the present invention is to provide a system and method
that allows users to separate, choose and control the selective sounds they want
to hear and those they do not.
An object of the present invention is to provide a wearable device integrated with a
smartphone via an application, and a method which performs source extraction not
only for a target sound, but separates all the sources present in real time without
noticeable delay.
Another object of the present invention is to provide a wearable device integrated
with smartphone via an application that can separate, choose and control selective
sounds.
Another object of the present invention is to provide a wearable device that does
not have any active components in it.
Another object of the present invention is to provide a method for source
separation that does not use any structural information of the source.
Yet another object of the present invention is to provide a wearable device in the
form of an earpiece or earphone.
Still another object of the present invention is to provide an earphone which is
controlled by a smartphone application.
Further another object of the present invention is to provide an earphone
integrated with a smartphone, wherein all signal processing is done completely within
the smartphone application.
Yet another object of the present invention is to provide a cost effective earphone
because it does not involve any active components in it.
BRIEF DESCRIPTION OF THE INVENTION:
The present invention provides a system and method that allows users to separate,
choose and control the selective sounds they want to hear and those they do not,
using different source separation techniques. The invention
provides a wearable device integrated with a smartphone via an application that can
separate, choose and control selective sounds. The wearable device is in the form
of an earpiece or earphone that is useful for everyone to amplify and attenuate
selective sounds. The user decides which sound(s) he wants to selectively hear and can
also adjust their volume. All audio signal processing is done on the phone only. The
device comprises one or more speakers and multiple microphones, which are
required for signal processing. As the sound from different sources such as
instruments, people, machines, noise etc. reaches the microphones, it is converted
to the digital domain and fed to the phone. The application installed on the phone
filters out and separates all the sounds coming from each source and allows people
to choose which sounds they want to amplify/attenuate. This means that even if
there are multiple persons talking within a given range, the device separates out
all the different voices present in the audio and allows the user to control them. All the
processing is done in real time without noticeable delay. The software interface
shows the different sources that are present in the surroundings and has a volume
control for each of the sounds. This is an easy to use interface, so that
as soon as the application boots the device is ready to use.
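
Conceptually, one processing cycle of the described system can be summarised as follows. This is a hypothetical sketch: the function and parameter names are illustrative, and the separation algorithm itself is abstracted behind a callable.

```python
import numpy as np

def process_frame(mic_frame, separate, user_gains):
    """One processing cycle: the earphone streams a digitized multi-mic
    frame to the phone, the app separates it into per-source signals,
    applies the user's amplify/attenuate gains, and returns the mix that
    is sent back to the earpiece speakers.

    mic_frame:  (n_mics, n_samples) digitized microphone frame
    separate:   callable returning (n_sources, n_samples); the separation
                algorithm (BSS, ICA, ...) is abstracted here
    user_gains: (n_sources,) per-source volume settings from the UI
    """
    sources = separate(mic_frame)
    mixed = np.asarray(user_gains)[:, None] * sources
    return mixed.sum(axis=0)                     # mono output to speakers
```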
BRIEF DESCRIPTION OF THE DRAWINGS
It is to be noted, however, that the appended drawings illustrate only typical
embodiments of this invention and are therefore not to be considered as
limiting its scope, for the invention may admit to other equally effective
embodiments.
Figure 1 and 2 show a system and method to separate, choose and control
selective sounds according to the present invention;
Figure 3 shows a wearable device in the form of an earpiece or earphone;
Figure 4 shows a smartphone application performing all the signal processing
required for the separation of sources.
DESCRIPTION OF THE PREFERRED EMBODIMENTS:
The present invention provides a system and method that allows users to separate,
choose and control the selective sounds they want to hear and those they do not,
using different source separation techniques.
Referring to figs. 1 and 2, the invention provides a wearable device in the form
of an earpiece, earphone or the like (10) integrated with a smartphone (11) via an
application to separate, choose and control selective sounds. The device in
combination with a smartphone acts as an intelligent earphone containing two
microphones near the earpieces. The sound from different sources such as
instruments, people, machines, noise etc. reaches the microphones. It is then
converted to the digital domain and fed to the phone. After processing, individual
sources are identified and the user can decide which sounds to
amplify/attenuate. This means that even if there are multiple persons talking
within a given range, the device separates out all the different voices present in
the audio and allows the user to control them. All this is done in real time.
The smart earphone looks similar to a headphone, as shown in fig. 3. The key
difference is that while conventional earphones have a single microphone or none, this
device has two microphones, each located near an earpiece. With a single channel it is
not possible to improve the intelligibility and quality of the recovered signal
simultaneously; quality is improved at the expense of sacrificing intelligibility.
The only way to overcome this limitation is to add some spatial information to the
time/frequency information available in the single channel. This is done by using
multiple microphones in the device.
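
One common way to extract such spatial information from two microphones is the inter-channel time delay of the dominant source, for example via GCC-PHAT, as sketched below. This is an illustrative example of a spatial cue, not necessarily the method used by the invention.

```python
import numpy as np

def gcc_phat_delay(sig_l, sig_r, fs):
    """Estimate the inter-microphone time delay of the dominant source
    with GCC-PHAT, one standard way of turning a two-microphone recording
    into a spatial cue.
    """
    n = len(sig_l) + len(sig_r)
    L = np.fft.rfft(sig_l, n=n)
    R = np.fft.rfft(sig_r, n=n)
    cross = R * np.conj(L)
    cross /= np.abs(cross) + 1e-12               # PHAT weighting
    cc = np.fft.irfft(cross, n=n)
    shift = int(np.argmax(np.abs(cc)))
    if shift > n // 2:                           # wrap to negative lags
        shift -= n
    return shift / fs                            # delay in seconds
```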
The device is controlled by a smartphone application which performs all the
signal processing required for the separation of sources as shown in fig. 4. The
sound entering the device is sent to the smartphone and after the separation the
final output is fed to the speaker of the device. The application in the smartphone
displays a permutation of the separated sources. To filter any of the sources,
its sound level can be adjusted. The purpose behind using a smartphone to do the
digital signal processing is that modern smartphones have very powerful
processors, and leveraging the processing capabilities of the smartphone helps
lower the cost of the device. This also means that the device is passive and more
robust.
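
The per-source volume control could, for example, map a slider position to a linear gain as sketched below; the decibel range is a hypothetical design choice, as the application's actual mapping is not specified.

```python
def slider_to_gain(position, min_db=-60.0, max_db=12.0):
    """Map a 0..1 volume-slider position from the per-source control to a
    linear gain. The decibel range is a hypothetical design choice; a
    slider at the bottom mutes (filters out) the source entirely.
    """
    if position <= 0.0:
        return 0.0
    db = min_db + position * (max_db - min_db)
    return 10.0 ** (db / 20.0)
```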
Numerous modifications and adaptations of the system of the present
invention will be apparent to those skilled in the art, and thus it is intended
by the appended claims to cover all such modifications and adaptations
which fall within the true spirit and scope of this invention.
CLAIMS:
1. An improved system for amplifying and attenuating selective sounds
through different source separation techniques comprising more than one
speaker and more than one microphone and integrated with a smartphone.
2. The improved system for amplifying and attenuating selective sounds as
claimed in claim 1 wherein the system is in the form of wearable device
selected from earpiece, earphone or the like.
3. The improved system for amplifying and attenuating selective sounds as
claimed in claim 1 wherein the wearable device is integrated with a
smartphone via an application to separate, choose and control selective
sounds.
4. The improved system for amplifying and attenuating selective sounds as
claimed in claim 1 wherein the device in combination with a smartphone
acts as an intelligent earphone containing two microphones located near
the earpieces.
5. The improved system for amplifying and attenuating selective sounds as
claimed in claim 1 wherein the sound from different sources such as
instruments, people, machines, noise etc. reaches the microphones, is then
converted to the digital domain and fed to the phone, and after processing,
individual sources are identified and the user decides which sounds they
want to amplify/attenuate.
6. The improved system for amplifying and attenuating selective sounds as
claimed in claim 1 wherein the device is controlled by a smartphone
application which performs all the signal processing required for the
separation of sources.
7. The improved system for amplifying and attenuating selective sounds as
claimed in claim 1 wherein the sound entering the device is sent to the
smartphone and after the separation the final output is fed to the speaker of
the device.
8. The improved system for amplifying and attenuating selective sounds as
claimed in claim 1 wherein the application in the smartphone displays a
permutation of the separated sources and to filter any of the sources, the
sound level is adjusted.