
Method And System For Dynamic Noise Cancellation In A Vehicle

Abstract: Embodiments of the present disclosure relate to a method (300) and system (103) for dynamic noise cancellation in a vehicle (100). The method comprises receiving images and voice signals from one or more cameras (101) and one or more microphones (102a, 102b) of the vehicle (100) when a user is in a call in the vehicle (100). The method further comprises determining a location of the user in the vehicle (100) using the images and the voice signals, and determining a pitch of the voice signals associated with the user. Thereafter, one or more actions are performed to suppress voice signals having a pitch other than the determined pitch, based on the location of the user and using one or more filters. Figure 3


Patent Information

Application #
Filing Date
30 December 2021
Publication Number
26/2023
Publication Type
INA
Invention Field
COMMUNICATION
Status
Parent Application

Applicants

TATA MOTORS LIMITED
Bombay House, 24 Homi Mody Street, Hutatma Chowk, Mumbai 400 001, Maharashtra, INDIA

Inventors

1. Sanjay Patel
c/o TATA MOTORS LIMITED, of an Indian company having its registered office at Bombay House, 24 Homi Mody Street, Hutatma Chowk, Mumbai 400 001, Maharashtra, INDIA

Specification

Claims: We claim:

1. A dynamic noise cancellation method (300) in a vehicle (100), the method comprising:
receiving (301), by a processor, images and voice signals from one or more cameras (101) and one or more microphones (102a, 102b) of a vehicle (100), when a user is in a call in the vehicle (100);
determining (302), by the processor, a location of the user in the vehicle (100) using the images and the voice signals;
determining (303), by the processor, a pitch of the voice signals associated with the user;
performing (304), by the processor, one or more actions to suppress voice signals having a pitch other than the determined pitch, based on the location of the user and using one or more filters.

2. The method as claimed in claim 1, wherein the images and voice signals are used to identify the user speaking over the call using one or more image features and one or more voice features.

3. The method as claimed in claim 2, wherein the one or more image features comprise a face ID, a face direction and lips movement, and wherein the one or more voice features comprise at least one of predefined phrases.

4. The method as claimed in claim 1, wherein the one or more filters are one of an analog filter or a digital filter.

5. The method as claimed in claim 1, wherein performing the one or more actions includes at least one of:
controlling the one or more microphones (102a, 102b) based on the location of the user; and
filtering the voice signals to allow only the determined pitch using the one or more filters.

6. A dynamic noise cancellation system (103) in a vehicle (100), the system (103) comprising: a memory; and a processor; wherein the processor is configured to:
receive images and voice signals from one or more cameras (101) and one or more microphones (102a, 102b) of a vehicle (100), when a user makes a call in the vehicle (100);
determine a location of the user in the vehicle (100) using the images and the voice signals;
determine a pitch of the voice signals associated with the user;
perform one or more actions to suppress voice signals having a pitch other than the determined pitch, based on the location of the user and using one or more filters.

7. The system (103) as claimed in claim 6, wherein the images and voice signals are used to identify the user speaking over the call using one or more image features and one or more voice features.

8. The system (103) as claimed in claim 6, wherein the one or more filters comprise one of an analog filter or a digital filter.
Description:

TECHNICAL FIELD
[001] The present disclosure relates in general to automobiles. Particularly, the present disclosure relates to a system and method for dynamic noise cancellation in a vehicle.

BACKGROUND
[002] These days, attending business or personal calls in a car has become essential. People often have important conversations while travelling in the car. It is important not only to hear the person on the other side clearly, but also to have the spoken word go through as clearly as possible.

[003] Due to surrounding noise, the voice quality is not as clear or audible as it is in an office or at home. While driving a car along with other passengers, the passengers are normally in conversation with each other. When a driver or passenger attends a call in the meantime, this noise disturbs the quality of the call. Also, in traffic, the continuous noise of horns, vehicle engines and the like makes it difficult to attend calls inside the car. Hence, there is a need for a system and method for dynamic noise cancellation in vehicles that helps the driver and passengers have better-quality conversations over calls.

[004] The information disclosed in this background of the disclosure section is only for enhancement of understanding of the general background of the invention and should not be taken as an acknowledgment or any form of suggestion that this information forms the prior art already known to a person skilled in the art.

SUMMARY
[005] Additional features and advantages are realized through the techniques of the present disclosure. Other embodiments and aspects of the disclosure are described in detail herein and are considered a part of the claimed disclosure.

[006] Disclosed herein is a method for dynamic noise cancellation in a vehicle. The method comprises receiving images and voice signals from one or more cameras and one or more microphones of a vehicle when a user is in a call in the vehicle. The method further comprises determining a location of the user in the vehicle using the images and the voice signals, and determining a pitch of the voice signals associated with the user. Thereafter, one or more actions are performed to suppress voice signals having a pitch other than the determined pitch, based on the location of the user and using one or more filters.

[007] Further, the present disclosure discloses a dynamic noise cancellation system in a vehicle. The system comprises a memory and a processor. The processor is configured to receive images and voice signals from one or more cameras and one or more microphones of a vehicle when a user makes a call in the vehicle. The location of the user in the vehicle is determined using the images and the voice signals. Further, the pitch of the voice signals associated with the user is determined. Thereafter, one or more actions are performed to suppress voice signals having a pitch other than the determined pitch, based on the location of the user and using one or more filters.

[008] The foregoing summary is illustrative only and is not intended to be in any way limiting. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features may become apparent by reference to the drawings and the following detailed description.

BRIEF DESCRIPTION OF THE ACCOMPANYING DRAWINGS
[009] The novel features and characteristics of the disclosure are set forth in the appended claims. The disclosure itself, however, as well as a preferred mode of use, further objectives, and advantages thereof, may best be understood by reference to the following detailed description of an illustrative embodiment when read in conjunction with the accompanying drawings. The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. One or more embodiments are now described, by way of example only, with reference to the accompanying figures wherein like reference numerals represent like elements and in which:

[0010] Fig. 1 shows a system for dynamic noise cancellation in a vehicle, in accordance with some embodiments of the present disclosure;

[0011] Fig. 2 shows a detailed block diagram of a dynamic noise cancellation system in a vehicle, in accordance with some embodiments of the present disclosure;

[0012] Fig. 3 shows a flowchart illustrating method steps for dynamic noise cancellation in a vehicle, in accordance with some embodiments of the present disclosure; and

[0013] Fig. 4a shows an exemplary diagram of dynamic noise cancellation in a vehicle when one user is attending a call, in accordance with some embodiments of the present disclosure.

[0014] Fig. 4b shows an exemplary diagram of dynamic noise cancellation in a vehicle when two users are attending a same call, in accordance with some embodiments of the present disclosure.

[0015] Fig. 5 is a block diagram of a general-purpose computer capable of dynamic noise cancellation in a vehicle, in accordance with an embodiment of the present disclosure.

[0016] It should be appreciated by those skilled in the art that any block diagrams herein represent conceptual views of illustrative systems embodying the principles of the present subject matter. Similarly, it may be appreciated that any flow charts, flow diagrams, state transition diagrams, pseudo code, and the like represent various processes, which may be substantially represented in computer readable medium and executed by a computer or processor, whether or not such computer or processor is explicitly shown.

DETAILED DESCRIPTION
[0017] In the present document, the word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any embodiment or implementation of the present subject matter described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.

[0018] While the disclosure is susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and are described in detail below. It should be understood, however, that it is not intended to limit the disclosure to the particular forms disclosed; on the contrary, the disclosure is to cover all modifications, equivalents, and alternatives falling within the scope of the disclosure.

[0019] The terms “comprises”, “includes”, “comprising”, “including” or any other variations thereof are intended to cover a non-exclusive inclusion, such that a setup, device or method that comprises a list of components or steps does not include only those components or steps but may include other components or steps not expressly listed or inherent to such setup, device or method. In other words, one or more elements in a system or apparatus preceded by “comprises… a” or “includes… a” does not, without more constraints, preclude the existence of other or additional elements in the system or apparatus.

[0020] In the following detailed description of the embodiments of the disclosure, reference is made to the accompanying drawings that form a part hereof, and in which are shown by way of illustration specific embodiments in which the disclosure may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the disclosure, and it is to be understood that other embodiments may be utilized and that changes may be made without departing from the scope of the present disclosure. The following description is, therefore, not to be taken in a limiting sense.

[0021] Figure 1 illustrates a vehicle (100) having a dynamic noise cancellation system (103). In an embodiment, the vehicle (100) includes one or more cameras (101), one or more microphones (102a, 102b…102n) and a dynamic noise cancellation system (103) inside the vehicle (100). In an embodiment, the one or more cameras (101) capture images of the users/passengers in the vehicle (100) to identify a user speaking over a call. In an embodiment, the one or more microphones (102a, 102b) collect voice signals from the users/passengers to identify the user speaking over the call. The dynamic noise cancellation system (103) may be placed inside the vehicle (100), for example in an infotainment system of the vehicle (100), and is configured to collect the images and the voice signals from the one or more cameras (101) and the one or more microphones (102a, 102b). In an embodiment, the dynamic noise cancellation system (103) is also referred to as the system (103) in the present disclosure. Further, the system (103) suppresses the unwanted audio/voice signals received by the one or more microphones (102a, 102b) present inside the car. The system (103) determines signal characteristics specific to the user speaking over the call using the voice signals and cancels out noise other than the determined signal characteristics. The images, along with the voice signals, are then used to determine a location of the user speaking over the call and to perform one or more actions to suppress noise. The user device (104) may be a mobile device, laptop or tablet, which may be connected to the system (103) through, but not limited to, Wi-Fi, Bluetooth or a USB cable inside the vehicle (100). In an embodiment, the one or more cameras (101) and the one or more microphones (102) may be part of the system (103).

[0022] In an embodiment, the one or more cameras (101) and the one or more microphones (102a, 102b) may be installed at different locations in the vehicle (100). For example, a camera in the front view mirror may be configured to capture images of a driver and a front passenger. Another camera in the roof at the center of the vehicle (100) may be configured to capture images of rear passengers. Likewise, a first microphone may be configured on the driver side, a second on the front passenger side, a third on the left rear passenger side and a fourth on the right rear passenger side.
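The placement described above can be summarized as a simple lookup table. The sketch below is illustrative only; the seat names and device identifiers are assumptions, not taken from the disclosure.

```python
# Hypothetical seat-to-sensor layout mirroring the placement above:
# a front camera covers the two front seats, a roof camera covers the
# rear seats, and one microphone is assigned per seat. All identifiers
# here are illustrative.
SENSOR_LAYOUT = {
    "driver":          {"camera": "front_mirror_cam", "microphone": "mic_1"},
    "front_passenger": {"camera": "front_mirror_cam", "microphone": "mic_2"},
    "rear_left":       {"camera": "roof_center_cam",  "microphone": "mic_3"},
    "rear_right":      {"camera": "roof_center_cam",  "microphone": "mic_4"},
}

def sensors_for_seat(seat: str) -> dict:
    """Return the camera/microphone pair covering a given seat."""
    return SENSOR_LAYOUT[seat]
```

Such a table would let later stages map a located speaker directly to the microphone that should stay live.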

[0023] Figure 2 shows a detailed block diagram of a dynamic noise cancellation system (103) in a vehicle, in accordance with some embodiments of the present disclosure. The dynamic noise cancellation system (103) may include one or more Central Processing Units (“CPU” or “processor”) (203, 204) and a memory (202) storing instructions executable by the at least one processor (203, 204). The processor (203, 204) may include at least one data processor for executing program components for executing user or system-generated requests. The memory (202) is communicatively coupled to the processor (203, 204). The dynamic noise cancellation system (103) includes an Input/Output (I/O) interface (201). The I/O interface (201) is coupled with the processor (203, 204), through which an input signal and/or an output signal is communicated.

[0024] In an embodiment, the processor may include, for example, an image processor (203) and a signal processor (204) for executing program components for executing user or system-generated requests.

[0025] The image processor (203) may be configured to pre-process the images captured by the one or more cameras (101). Pre-processing may include at least one of normalizing the values associated with the received image data, reducing noise in the data, and formatting the received data. The input to the image processor (203) may be a low-quality image, and the output is a high-quality image. The image processor (203) is typically implemented in specialized hardware, such as Digital Signal Processor (DSP) or Field Programmable Gate Array (FPGA) chips.
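The normalization and noise-reduction steps above could be sketched as follows. This is a minimal software stand-in for the DSP/FPGA pipeline the disclosure describes; the 3x3 mean filter is my choice of example, not something the disclosure specifies.

```python
import numpy as np

def preprocess_frame(frame: np.ndarray) -> np.ndarray:
    """Normalize 8-bit pixel values to [0, 1] and apply a 3x3 mean
    filter to reduce noise. Illustrative only; a real image processor
    would use hardware-accelerated, more sophisticated filtering."""
    img = frame.astype(np.float32) / 255.0        # normalize values
    padded = np.pad(img, 1, mode="edge")          # replicate borders
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + 3, j:j + 3].mean()  # local average
    return out
```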

[0026] In an embodiment, the signal processor (204) may be used for processing the voice signals. The signal processor (204) may perform operations such as amplification, attenuation, suppression of unwanted noise and noise cancellation on the voice signals. The signal processor (204) attempts to enhance certain features in the voice signals and suppress other signals inside the vehicle (100).

[0027] In an embodiment, data (206) may be stored within the memory (202). The data (206) may include, for example, image features (207), voice features (208), and other data (209).

[0028] In an embodiment, the image features (207) may include, but are not limited to, a face ID, a face direction, lips movement and the like of the users/passengers in the vehicle (100). The face ID may be used for facial recognition of the user speaking over the call. The face direction may be the direction the user's face is turned while speaking over the call. The lips movement may be captured by the camera (101) to detect the user speaking over the call among other users in the vehicle (100). When lips movement is detected from the user, the location of the user is determined using the images and the voice signals. Several other image features not mentioned in the present disclosure may also fall within the list of image features captured by the camera (101), and such features may also be received by the dynamic noise cancellation system (103).
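The lip-movement cue could, for instance, be approximated by frame differencing on a pre-cropped mouth region. This is a toy sketch: the threshold value and the assumption of aligned, normalized mouth crops are mine, and a production system would use a trained lip-activity detector instead.

```python
import numpy as np

def lips_moving(mouth_frames, threshold=0.02):
    """Return True if consecutive mouth-region crops (float arrays with
    values in [0, 1]) differ enough to suggest the user is speaking.
    The threshold is an illustrative assumption."""
    diffs = [float(np.mean(np.abs(b - a)))
             for a, b in zip(mouth_frames, mouth_frames[1:])]
    return bool(diffs) and max(diffs) > threshold
```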

[0029] In an embodiment, the voice features (208) may include, but are not limited to, at least one of predefined telephonic phrases like “hello”, “hi”, “hey” and the like, the pitch of the user's audio, voice intensity and the user's voice direction, which is used to detect where the voice is generated and thereby localize the user. Several other voice features not mentioned in the present disclosure may also fall within the list of voice features recorded by the microphone (102), and such features may also be received by the dynamic noise cancellation system (103). The voice features (208) may help the system (103) recognize the user connected over the call.
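Matching the predefined telephonic phrases could be sketched as below, assuming an upstream speech-to-text stage has already produced a transcript of the first utterance; that stage, and the exact phrase list, are assumptions rather than details from the disclosure.

```python
# Illustrative trigger phrases; the disclosure gives "hello", "hi",
# "hey" as examples but does not fix the list.
TRIGGER_PHRASES = {"hello", "hi", "hey"}

def call_started(transcript: str) -> bool:
    """Return True if the utterance opens with a known telephonic
    phrase, signalling the system to lock onto this speaker's pitch."""
    words = transcript.lower().split()
    return bool(words) and words[0].strip(".,!?") in TRIGGER_PHRASES
```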

[0030] The other data (209) may include data of the user, such as the face ID and user device ID, which may be stored in the memory. These parameters may contribute to a user profile. The user profile may be permanently stored in the memory (202) for any-time access. For example, in a real-life scenario, when the user attends a call, the system may automatically retrieve the user details to offer noise cancellation inside the vehicle.

[0031] In an embodiment, the data (206) in the memory (202) is processed by modules (210). As used herein, the term module refers to an Application Specific Integrated Circuit (ASIC), an electronic circuit, a Field-Programmable Gate Array (FPGA), a Programmable System-on-Chip (PSoC), a combinational logic circuit, and/or other suitable components that provide the described functionality. The modules (210), when configured with the functionality defined in the present disclosure, result in novel hardware.

[0032] In one implementation, the modules (210) may include, for example, a communication module (211), a determining module (212), a noise suppression module (213) and other modules (214). It will be appreciated that the aforementioned modules (210) may be represented as a single module or a combination of different modules.

[0033] In an embodiment, the communication module (211) may be configured to facilitate communication between the dynamic noise cancellation system (103) and the one or more cameras (101), between the system (103) and the one or more microphones (102a, 102b…102n), and between the system (103) and the user device (104). The communication module (211) may receive the images and voice signals from the one or more cameras (101) and the one or more microphones (102a, 102b…102n) of the vehicle when the user is on a call in the vehicle. The images and voice signals are used by the dynamic noise cancellation system (103) to suppress the unwanted noise received while the user is attending the call.

[0034] In an embodiment, the determining module (212) may be configured to determine the location of the user in the vehicle using the images and the voice signals. When lips movement is detected from the user, the location of the user is determined using the images and the voice signals. In an embodiment, the determining module (212) may also be configured to determine a pitch of the voice signals associated with the user. The pitch of the voice signal may be recognized by using predefined telephonic phrases. For example, when the user starts attending the call, phrases like “hello”, “hi”, “hey” and the like may be uttered first, which helps the system recognize the user's pitch. In an embodiment, the pitch is the fundamental frequency of the voice signal. The pitch may be determined in the time domain or the frequency domain. The pitch may be determined using known techniques, for example by measuring the distance between zero-crossing points of the voice signals. Other techniques which determine the pitch of a voice signal having multiple sine waves can also be used. In the scenario of the vehicle (100), where multiple users are in conversation, techniques that can reliably determine the pitch of the user who is on the call may be preferred.
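The zero-crossing technique mentioned above can be sketched as follows. As the paragraph notes, this simple time-domain method is only reliable on a clean, roughly periodic voiced segment; real in-cabin audio with multiple talkers would need a more robust estimator such as autocorrelation.

```python
import numpy as np

def estimate_pitch_zero_crossing(signal, sample_rate):
    """Estimate the fundamental frequency by measuring the average
    spacing between upward zero-crossings of the waveform. Returns the
    pitch in Hz, or None if too few crossings are found."""
    signs = np.sign(signal)
    # indices where the waveform crosses zero going upward
    rising = np.where((signs[:-1] < 0) & (signs[1:] >= 0))[0]
    if len(rising) < 2:
        return None
    period = np.mean(np.diff(rising)) / sample_rate   # seconds per cycle
    return 1.0 / period                               # Hz
```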

[0035] In an embodiment, the noise suppression module (213) may be configured to perform one or more actions to suppress voice signals having a pitch other than the determined pitch, based on the location of the user. The one or more actions performed may include at least one of controlling the one or more microphones based on the location of the user and filtering the voice signals to allow only the determined pitch using one or more filters. The one or more microphones (102) and the one or more cameras (101) may be located at different locations to capture images and voice signals from different angles inside the vehicle (100), which helps the system (103) suppress the unwanted surrounding noise by suppressing the unwanted voice signals and allowing the determined voice signal of the user during the call. The pitch of the user may be the fundamental frequency of the voice signal of the user. The one or more microphones (102) are controlled based on the location of the user, and the voice signals are filtered to allow only the determined pitch using the one or more filters. For example, when the rear left passenger speaks over the call, the microphone placed nearest that passenger is unmuted, whereas the rest of the microphones are muted, so that only that particular voice signal is passed over the call while the rest of the voice signals are suppressed. The one or more filters may be hardware filters or software filters, which may include, but are not limited to, an analog filter or a digital filter.
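The microphone-control action above reduces to a mute map over the cabin microphones. The sketch below assumes illustrative microphone identifiers; driving the actual audio hardware is outside what the disclosure details.

```python
def control_microphones(mic_ids, active_mic):
    """Mute every microphone except the one nearest the located
    speaker. Returns a mic -> muted? map (True means muted); a real
    system would push these states to a hardware mixer."""
    return {mic: mic != active_mic for mic in mic_ids}
```

For the rear-left-passenger example above, only that passenger's microphone would remain live while the other three are muted.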

[0036] In an embodiment, the other modules (214) may include, but are not limited to, a preprocessing module and a filter module. In an embodiment, the preprocessing module may be part of the image processor (203) and the signal processor (204). The preprocessing module may be configured to preprocess the audio signals. Preprocessing may include, but is not limited to, converting analog voice signals into digital voice signals, removing noise from the images and the voice signals, and the like. The filter module may include software filters used for filtering out the unwanted noise and allowing only the desired voice signal. For example, the software filters may be a Butterworth filter, a Chebyshev filter, an elliptic filter and the like, as used in noise reduction methods and systems.
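A digital Butterworth band-pass of the kind named above might be applied around the estimated pitch as follows. SciPy is assumed for the filter design, and passing only a narrow band around the fundamental is a simplification: a practical voice system would also retain the speech harmonics, which the disclosure does not elaborate on.

```python
import numpy as np
from scipy.signal import butter, lfilter

def bandpass_around_pitch(signal, pitch_hz, sample_rate,
                          half_width_hz=50.0, order=4):
    """Keep a narrow band around the speaker's estimated pitch and
    suppress everything else, using a Butterworth band-pass (one of
    the software filter families named above). Band edges and order
    are illustrative choices."""
    nyq = sample_rate / 2.0
    low = max(pitch_hz - half_width_hz, 1.0) / nyq
    high = min(pitch_hz + half_width_hz, nyq - 1.0) / nyq
    b, a = butter(order, [low, high], btype="band")
    return lfilter(b, a, signal)
```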

[0037] Figure 3 shows a flowchart illustrating method (300) for dynamic noise cancellation in the vehicle (100), in accordance with some embodiments of the present disclosure.

[0038] As illustrated in Figure 3, the method (300) may comprise one or more steps. The method (300) may be described in the general context of computer executable instructions. Generally, computer executable instructions can include routines, programs, objects, components, data structures, procedures, modules, and functions, which perform particular functions or implement particular abstract data types.

[0039] The order in which the method (300) is described is not intended to be construed as a limitation, and any number of the described method blocks can be combined in any order to implement the method. Additionally, individual blocks may be deleted from the methods without departing from the scope of the subject matter described herein. Furthermore, the method can be implemented in any suitable hardware, software, firmware, or combination thereof.

[0040] At step (301), the processor (203, 204) receives images and voice signals from the one or more cameras (101) and the one or more microphones (102a, 102b…102n) of the vehicle (100) when a user is in a call in the vehicle (100). In an embodiment, the camera (101) captures the images to identify the user speaking over the call using one or more image features inside the vehicle (100). In an embodiment, the one or more microphones (102a, 102b) collect voice signals from the users/passengers to identify the user speaking over the call. The dynamic noise cancellation system (103), which may be placed inside the vehicle (100), for example in an infotainment system of the vehicle (100), is configured to collect the images and the voice signals from the one or more cameras (101) and the one or more microphones (102a, 102b). In an embodiment, the image features (207) may include, but are not limited to, a face ID, a face direction, lips movement and the like of a driver/passenger in the vehicle (100). When lips movement is detected from the user, the location of the user is determined using the images and the voice signals. In an embodiment, the voice features (208) may include, but are not limited to, at least one of predefined telephonic phrases like “hello”, “hi”, “hey” and the like, the pitch of the user's audio, voice intensity and the user's voice direction. The voice features (208) may help the system (103) recognize the user connected over the call. For example, when attending the call, the user may use phrases like “hello”, “hi”, “hey” and the like, which can be recognized by the system to identify the user's pitch and locate the user connected over the call.

[0041] At step (302), the processor (203, 204) determines the location of the user in the vehicle (100) using the images and the voice signals. In an embodiment, the location of the user is determined using the images and the voice signals captured by the one or more cameras (101) and the one or more microphones (102a, 102b…102n). For example, when lips movement is detected from the user, the location of the user is determined using the images and the voice signals. The images captured by the one or more cameras (101) include the image features (207), and the voice signals recorded by the one or more microphones (102a, 102b…102n) include the voice features (208). In an embodiment, the voice features (208) may include, but are not limited to, at least one of predefined telephonic phrases like “hello”, “hi”, “hey” and the like, the pitch of the user's audio, voice intensity and the user's voice direction. The voice features (208) may help the system (103) recognize the user connected over the call.
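As a toy stand-in for the image-plus-voice fusion in this step, the speaker could be attributed to the microphone channel carrying the most energy; the channel names are illustrative, and the disclosure's actual fusion of lip movement and voice direction is considerably richer than this.

```python
import numpy as np

def locate_speaker(mic_signals):
    """Pick the microphone whose channel has the highest RMS energy,
    a crude proxy for the speaker's seat in the cabin. `mic_signals`
    maps a channel name to a NumPy array of samples."""
    def rms(x):
        return float(np.sqrt(np.mean(np.square(x))))
    return max(mic_signals, key=lambda mic: rms(mic_signals[mic]))
```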

[0042] At step (303), the processor (203, 204) determines the pitch of the voice signals associated with the user. The pitch of the voice signal may be recognized by using predefined telephonic phrases. For example, when the user starts attending the call, phrases like “hello”, “hi”, “hey” and the like may be uttered first, which helps the system recognize the user's pitch and locate the user connected over the call. In an embodiment, the pitch is the fundamental frequency of the voice signal of the user. The pitch may be determined in the time domain or the frequency domain. The pitch may be determined using known techniques, for example by measuring the distance between zero-crossing points of the voice signals. Other techniques which determine the pitch of a voice signal having multiple sine waves can also be used. In the scenario of the vehicle (100), where multiple users are in conversation, techniques that can reliably determine the pitch of the user who is on the call may be preferred.
[0043] At step (304), the processor (203, 204) performs one or more actions to suppress voice signals having a pitch other than the determined pitch, based on the location of the user and using one or more filters. In an embodiment, performing the one or more actions includes at least one of controlling the one or more microphones (102a, 102b…102n) based on the location of the user and filtering the voice signals to allow only the determined pitch using the one or more filters. In an embodiment, the noise suppression module (213) may be configured to perform these actions. The one or more microphones (102a, 102b…102n) are controlled based on the location of the user, and the voice signals are filtered to allow only the determined pitch using the one or more filters. For example, when the rear left passenger speaks over the call, the microphone nearest that passenger is unmuted, whereas the rest of the microphones are muted, by which only that particular voice signal is passed over the call while the rest of the voice signals are suppressed. The one or more filters may be hardware filters or software filters, which may include, but are not limited to, an analog filter or a digital filter.

[0044] Figure 4a shows an exemplary diagram of dynamic noise cancellation in the vehicle (100) when one user is attending a call, in accordance with some embodiments of the present disclosure. As shown in Figure 4a, when the driver (401) receives the call, the microphone placed nearest the driver (401) is unmuted and the rest of the microphones are muted to avoid unwanted noise during the call. The microphones (102a, 102b…102n) and the cameras (101) are placed at different locations inside the vehicle (100) to record the image features and the voice features. When lips movement is detected from the user, the location of the user is determined using the images and the voice signals. Further, using the voice features, the system (103) recognizes the pitch of the driver (401). For example, when the driver (401) receives the call, the driver may use the phrase “hello” to start the conversation. The system (103) may be configured to recognize the phrase “hello” and locate the user in the vehicle (100) who is conversing on the call. The one or more cameras (101) may be configured to capture images of all the users in the vehicle, and the system (103) may use image processing techniques to identify which user has uttered the word “hello”. This information may be further correlated with the voice signals received from the one or more microphones (102) to locate the user, i.e., the driver in this example. Furthermore, already recorded voice signals or the voice signals captured from the driver's conversation can be used to determine the pitch of the driver. After determining the pitch, the system (103) controls the one or more microphones (102a, 102b…102n) based on the location of the driver (401) and filters the voice signals to allow only the determined pitch of the driver (401) using the one or more filters.
Based on the location of the driver (401) and the determined pitch of the driver (401), the noise-causing voice signals from the other passengers (402, 403, 404, 405) are suppressed by the system (103). That is, the microphones which are away from the driver are either deactivated or the voice signals collected from such microphones are suppressed, while the microphone closest to the driver is activated and maximum voice signals are collected from it. Thus, the noise is cancelled by the system (103).

[0045] Figure 4b shows an exemplary diagram of dynamic noise cancellation in the vehicle (100) when two users are attending the same call, in accordance with some embodiments of the present disclosure. As shown in Figure 4b, for example, when the driver (401) receives the call, the driver may use the phrase "hello" to start the conversation. The system (103) may be configured to recognize the phrase "hello" and locate the user in the vehicle (100) who is conversing in the call. Later, the passenger (405) starts speaking over the same call, also using the phrase "hello". The one or more cameras (101) may be configured to capture the images of all the users in the vehicle, and the system (103) may use image processing techniques to identify which users have uttered the word "hello". This information may be further correlated with the voice signals received from the one or more microphones (102) to locate the users, i.e., the driver (401) and the passenger (405) in this example. Furthermore, previously recorded voice signals or the voice signals captured from the conversation of the driver (401) and the passenger (405) can be used to determine their pitches. After determining the pitches, the system (103) controls the one or more microphones (102a, 102b…102n) based on the locations of the driver (401) and the passenger (405) and filters the voice signals to allow only the determined pitches of the driver (401) and of the passenger (405) using the one or more filters. That is, the microphones away from the driver (401) and the passenger (405) are either deactivated or the voice signals collected from such microphones are suppressed, while the microphones closer to the driver (401) and the passenger (405) are activated and the maximum voice signals are collected from them. Thus, the noise is cancelled by the system (103).
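The microphone control for the two-speaker case of paragraph [0045] — unmuting only the microphones nearest each located speaker and mixing their signals into the call uplink — can be sketched as below. The helper names (`select_active_microphones`, `mix_call_audio`), the seat/microphone coordinates, and the distance threshold are illustrative assumptions, not details from the specification.

```python
import numpy as np

def select_active_microphones(mic_positions, speaker_positions, max_distance=1.0):
    """Return a gain (0.0 or 1.0) per microphone: unmute only the
    microphone nearest each located speaker, mute all others."""
    gains = np.zeros(len(mic_positions))
    for spk in speaker_positions:
        dists = [np.linalg.norm(np.array(m) - np.array(spk))
                 for m in mic_positions]
        nearest = int(np.argmin(dists))
        if dists[nearest] <= max_distance:   # ignore speakers with no nearby mic
            gains[nearest] = 1.0
    return gains

def mix_call_audio(mic_frames, gains):
    """Apply per-microphone gains and sum the rows into one uplink signal."""
    mic_frames = np.asarray(mic_frames, dtype=float)
    return np.sum(mic_frames * np.asarray(gains)[:, None], axis=0)
```

With the driver (401) and the passenger (405) located at opposite ends of the cabin, only their two microphones receive a non-zero gain, so the conversation of the other passengers is excluded from the call before any pitch filtering is applied.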

[0046] The present disclosure provides a method and a system for dynamic noise cancellation in a vehicle (100), i.e., a noise cancellation method for when a user is in a call in the vehicle (100). The system (103) performs one or more actions to suppress voice signals having a pitch other than the determined pitch, based on the location of the user and using one or more filters. The proposed system (103) and method (300) remove surrounding noise inside the vehicle, such as the discussion noise of passengers seated inside the vehicle, engine noise, traffic noise, and other noises received by the system. Hence, the method and system provide a noise-free environment while attending a call and improve the quality of the call inside the vehicle (100).

COMPUTER SYSTEM
[0047] Figure 5 illustrates a block diagram of an exemplary computer system (500) for implementing embodiments consistent with the present disclosure. In an embodiment, the computer system (500) is used to implement the method (300) for dynamic noise cancellation in the vehicle (100). The computer system (500) may comprise a central processing unit ("CPU" or "processor") (502). The processor (502) may comprise at least one data processor for executing program components for dynamic resource allocation at run time. The processor (502) may include specialized processing units such as integrated system (bus) controllers, memory management control units, floating point units, graphics processing units, digital signal processing units, etc.

[0048] The processor (502) may be disposed in communication with one or more input/output (I/O) devices (not shown) via I/O interface (501). The I/O interface (501) may employ communication protocols/methods such as, without limitation, audio, analog, digital, monoaural, RCA, stereo, IEEE-1394, serial bus, universal serial bus (USB), infrared, PS/2, BNC, coaxial, component, composite, digital visual interface (DVI), high-definition multimedia interface (HDMI), RF antennas, S-Video, VGA, IEEE 802.n /b/g/n/x, Bluetooth, cellular (e.g., code-division multiple access (CDMA), high-speed packet access (HSPA+), global system for mobile communications (GSM), long-term evolution (LTE), WiMax, or the like), etc.

[0049] Using the I/O interface (501), the computer system (500) may communicate with one or more I/O devices. For example, the input device (510) may be an antenna, keyboard, mouse, joystick, (infrared) remote control, camera, card reader, fax machine, dongle, biometric reader, microphone, touch screen, touchpad, trackball, stylus, scanner, storage device, transceiver, video device/source, etc. The output device (511) may be a printer, fax machine, video display (e.g., cathode ray tube (CRT), liquid crystal display (LCD), light-emitting diode (LED), plasma, plasma display panel (PDP), organic light-emitting diode (OLED) display, or the like), audio speaker, etc.

[0050] In some embodiments, the computer system (500) is connected to the service operator through a communication network (509). The processor (502) may be disposed in communication with the communication network (509) via a network interface (503). The network interface (503) may communicate with the communication network (509). The network interface (503) may employ connection protocols including, without limitation, direct connect, Ethernet (e.g., twisted pair 10/100/1000 Base T), transmission control protocol/Internet protocol (TCP/IP), token ring, IEEE 802.11a/b/g/n/x, etc. The communication network (509) may include, without limitation, a direct interconnection, e-commerce network, a peer to peer (P2P) network, local area network (LAN), wide area network (WAN), wireless network (e.g., using Wireless Application Protocol), the Internet, Wi-Fi, etc. Using the network interface (503) and the communication network (509), the computer system (500) may communicate with the one or more service operators.

[0051] In some embodiments, the processor (502) may be disposed in communication with a memory (505) (e.g., RAM, ROM, etc. not shown in Figure 5) via a storage interface (504). The storage interface (504) may connect to memory (505) including, without limitation, memory drives, removable disc drives, etc., employing connection protocols such as serial advanced technology attachment (SATA), Integrated Drive Electronics (IDE), IEEE-1394, Universal Serial Bus (USB), fibre channel, Small Computer Systems Interface (SCSI), etc. The memory drives may further include a drum, magnetic disc drive, magneto-optical drive, optical drive, Redundant Array of Independent Discs (RAID), solid-state memory devices, solid-state drives, etc.

[0052] The memory (505) may store a collection of program or database components, including, without limitation, a user interface (506), an operating system (507), a web server (508), etc. In some embodiments, the computer system (500) may store user/application data, such as the data, variables, records, etc., as described in this disclosure. Such databases may be implemented as fault-tolerant, relational, scalable, secure databases such as Oracle or Sybase.

[0053] The operating system (507) may facilitate resource management and operation of the computer system (500). Examples of operating systems include, without limitation, Apple Macintosh OS X, Unix, Unix-like system distributions (e.g., Berkeley Software Distribution (BSD), FreeBSD, NetBSD, OpenBSD, etc.), Linux distributions (e.g., Red Hat, Ubuntu, Kubuntu, etc.), IBM OS/2, Microsoft Windows (XP, Vista/7/8, 10 etc.), Apple iOS, Google Android, Blackberry OS, or the like.

[0054] In some embodiments, the computer system (500) may implement a web browser (508) stored program component. The web browser (508) may be a hypertext viewing application, such as Microsoft Internet Explorer, Google Chrome, Mozilla Firefox, Apple Safari, etc. Secure web browsing may be provided using Secure Hypertext Transport Protocol (HTTPS), Secure Sockets Layer (SSL), Transport Layer Security (TLS), etc. Web browsers (508) may utilize facilities such as AJAX, DHTML, Adobe Flash, JavaScript, Java, Application Programming Interfaces (APIs), etc. In some embodiments, the computer system (500) may implement a mail server stored program component. The mail server may be an Internet mail server such as Microsoft Exchange, or the like. The mail server may utilize facilities such as ASP, ActiveX, ANSI C++/C#, Microsoft .NET, CGI scripts, Java, JavaScript, PERL, PHP, Python, WebObjects, etc. The mail server may utilize communication protocols such as Internet Message Access Protocol (IMAP), Messaging Application Programming Interface (MAPI), Microsoft Exchange, Post Office Protocol (POP), Simple Mail Transfer Protocol (SMTP), or the like. In some embodiments, the computer system (500) may implement a mail client stored program component. The mail client may be a mail viewing application, such as Apple Mail, Microsoft Entourage, Microsoft Outlook, Mozilla Thunderbird, etc. Remote devices (512) may be connected to the computer system (500). The remote devices may include third-party application servers, OEM servers, and one or more databases.

[0055] The terms "an embodiment", "embodiment", "embodiments", "the embodiment", "the embodiments", "one or more embodiments", "some embodiments", and "one embodiment" mean "one or more (but not all) embodiments of the invention(s)" unless expressly specified otherwise.

[0056] The terms "including", "comprising", “having” and variations thereof mean "including but not limited to", unless expressly specified otherwise.

[0057] The enumerated listing of items does not imply that any or all of the items are mutually exclusive, unless expressly specified otherwise. The terms "a", "an" and "the" mean "one or more", unless expressly specified otherwise.

[0058] A description of an embodiment with several components in communication with each other does not imply that all such components are required. On the contrary a variety of optional components are described to illustrate the wide variety of possible embodiments of the invention.

[0059] When a single device or article is described herein, it will be readily apparent that more than one device/article (whether or not they cooperate) may be used in place of a single device/article. Similarly, where more than one device or article is described herein (whether or not they cooperate), it will be readily apparent that a single device/article may be used in place of the more than one device or article or a different number of devices/articles may be used instead of the shown number of devices or programs. The functionality and/or the features of a device may be alternatively embodied by one or more other devices which are not explicitly described as having such functionality/features. Thus, other embodiments of the invention need not include the device itself.

[0060] The illustrated operations of Figure 4a, Figure 4b, and Figure 5 show certain events occurring in a certain order. In alternative embodiments, certain operations may be performed in a different order, modified or removed. Moreover, steps may be added to the above-described logic and still conform to the described embodiments. Further, operations described herein may occur sequentially or certain operations may be processed in parallel. Yet further, operations may be performed by a single processing unit or by distributed processing units.

[0061] Finally, the language used in the specification has been principally selected for readability and instructional purposes, and it may not have been selected to delineate or circumscribe the inventive subject matter. It is therefore intended that the scope of the invention be limited not by this detailed description, but rather by any claims that issue on an application based hereon. Accordingly, the disclosure of the embodiments of the invention is intended to be illustrative, but not limiting, of the scope of the invention, which is set forth in the following claims.

[0062] While various aspects and embodiments have been disclosed herein, other aspects and embodiments will be apparent to those skilled in the art. The various aspects and embodiments disclosed herein are for purposes of illustration and are not intended to be limiting, with the true scope being indicated by the following claims.

REFERRAL NUMERALS:
Reference number Description
100 Vehicle
101 Camera
102a Microphone-1
102b Microphone-2
103 Dynamic Noise Cancellation System
104 User Device
201 I/O interface
202 Memory
203 Image Processor
204 Signal Processor
206 Data
207 Image features
208 Voice features
209 Other data
210 Modules
211 Communication Module
212 Determining Module
213 Noise Suppression Module
214 Other Modules
219 Other modules
401 Driver
402 Passenger-1
403 Passenger-2
404 Passenger-3
405 Passenger-4
500 Computer system
501 I/O interface
502 Processor
503 Network interface
504 Storage interface
505 Memory
506 User interface
507 Operating system
508 Web server
509 Communication network
510 Input devices
511 Output devices
512 Remote devices

Documents

Application Documents

# Name Date
1 202121061706-STATEMENT OF UNDERTAKING (FORM 3) [30-12-2021(online)].pdf 2021-12-30
2 202121061706-REQUEST FOR EXAMINATION (FORM-18) [30-12-2021(online)].pdf 2021-12-30
3 202121061706-POWER OF AUTHORITY [30-12-2021(online)].pdf 2021-12-30
4 202121061706-FORM 18 [30-12-2021(online)].pdf 2021-12-30
5 202121061706-FORM 1 [30-12-2021(online)].pdf 2021-12-30
6 202121061706-DRAWINGS [30-12-2021(online)].pdf 2021-12-30
7 202121061706-DECLARATION OF INVENTORSHIP (FORM 5) [30-12-2021(online)].pdf 2021-12-30
8 202121061706-COMPLETE SPECIFICATION [30-12-2021(online)].pdf 2021-12-30
9 202121061706-FORM-8 [05-01-2022(online)].pdf 2022-01-05
10 202121061706-Proof of Right [21-02-2022(online)].pdf 2022-02-21
11 Abstract1.jpg 2022-03-22
12 202121061706-FER.pdf 2023-12-20
13 202121061706-CLAIMS [18-06-2024(online)].pdf 2024-06-18
14 202121061706-FER_SER_REPLY [18-06-2024(online)].pdf 2024-06-18
15 202121061706-OTHERS [18-06-2024(online)].pdf 2024-06-18
16 202121061706-8(i)-Substitution-Change Of Applicant - Form 6 [22-01-2025(online)].pdf 2025-01-22
17 202121061706-ASSIGNMENT DOCUMENTS [22-01-2025(online)].pdf 2025-01-22
18 202121061706-PA [22-01-2025(online)].pdf 2025-01-22

Search Strategy

1 202121061706E_14-12-2023.pdf