
Method And Personalized Audio Space Generation System For Generating Personalised Audio Space In A Vehicle

Abstract: The present disclosure relates to a method and system for generating a personalized audio space in a vehicle. Information related to the user in each region of the vehicle is collected and analyzed to determine the direction of the first directional speakers associated with the corresponding region. An audio space boundary for each region is identified based on the direction of the first directional speakers in that region. Further, the proposed method renders a first sound wave of a user-selected audio using the first directional speakers in the region and transmits a second sound wave corresponding to the first sound wave using the second directional speakers associated with the corresponding region. The second sound wave restricts rendering of the first sound wave beyond the audio space boundary of the region, thereby generating the personalized audio space in the vehicle. Figure 3


Patent Information

Application #:
Filing Date: 30 December 2017
Publication Number: 27/2019
Publication Type: INA
Invention Field: ELECTRICAL
Status:
Email: bangalore@knspartners.com
Parent Application:
Patent Number:
Legal Status:
Grant Date: 2021-10-20
Renewal Date:

Applicants

WIPRO LIMITED
Doddakannelli, Sarjapur Road, Bangalore 560035, Karnataka, India.

Inventors

1. CHAITANYA RAJENDRA ZANPURE
Malti Smruti Shri Prasad Socy. 481/2/C, Parvati, Pune 411009, Maharashtra, India.

Specification

Claims:

We Claim:

1. A method of generating a personalized audio space in a vehicle, the method comprising:
receiving, by a processor of a personalized audio space generation system, a user related information of a user in the vehicle from at least one image sensor associated with each of one or more regions in the vehicle;
determining, by the processor, direction of one or more first directional speakers associated with each of the one or more regions in the vehicle based on the received user related information;
identifying, by the processor, an audio space boundary for each of the one or more regions based on the direction of the one or more first directional speakers of corresponding region of the one or more regions;
rendering, by the processor, a first sound wave of a user selected audio in the determined direction using the one or more first directional speakers associated with one of the one or more regions where the user is seated; and
transmitting, by the processor, a second sound wave corresponding to the first sound wave along the identified audio space boundary of the one of the one or more regions using one or more second directional speakers associated with the one of the one or more regions, wherein the first sound wave and the second sound wave converge at the audio space boundary of the region such that the second sound wave restricts rendering of the first sound wave beyond the audio space boundary of the one of the one or more regions to generate the personalized audio space in the vehicle.
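
The steps recited in claim 1 can be sketched, purely for illustration, as a small pipeline. All names, the speaker geometry, and the numeric defaults below are hypothetical assumptions, not taken from the specification:

```python
import math
from dataclasses import dataclass

# Illustrative only: names, geometry, and numbers are hypothetical
# assumptions, not drawn from the patent specification.

@dataclass
class Region:
    occupant_height_m: float   # user height, from the region's image sensor
    occupant_xy: tuple         # user seat position in the cabin, metres

def aim_first_speakers(region: Region, speaker_height_m: float = 1.2) -> float:
    """Steps 1-2: derive a first-speaker tilt angle (degrees) toward the user."""
    dx = math.hypot(*region.occupant_xy)
    dz = region.occupant_height_m - speaker_height_m
    return math.degrees(math.atan2(dz, dx))

def identify_boundary(direction_deg: float, reach_m: float = 0.6) -> float:
    """Step 3: a boundary radius implied by the beam direction."""
    return reach_m * math.cos(math.radians(direction_deg))

def generate_personalized_audio_space(region: Region) -> dict:
    direction = aim_first_speakers(region)
    boundary = identify_boundary(direction)
    # Steps 4-5: render the first sound wave inside the region and
    # transmit an anti-phase second wave along the identified boundary.
    return {"direction_deg": direction, "boundary_m": boundary}
```

For example, a user seated at (0.4, 0.3) m with head height 1.1 m yields a slight downward tilt and a boundary radius just under the assumed 0.6 m reach.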

2. The method as claimed in claim 1, wherein the user related information includes a height of the user, a position of the user, material of interior of the vehicle, and acoustic characteristics associated with the material in each of the one or more regions.

3. The method as claimed in claim 1, wherein transmitting the second sound wave corresponding to the first sound wave comprises:
determining the first sound wave associated with the user selected audio via an input source associated with each of the one or more regions in the vehicle, wherein the user selected audio is played via the one or more first directional speakers associated with the one of the one or more regions; and
generating the second sound wave based on the first sound wave of the user selected audio, displacement of the first sound wave in the one of the one or more regions and a specific acoustic impedance for the first sound wave in the one of the one or more regions.

4. The method as claimed in claim 3, wherein the specific acoustic impedance for the first sound wave is determined as product of density and phase velocity of the first sound wave of the user selected audio.
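
The product in claim 4 can be checked with a back-of-envelope calculation: for air at about 20 °C (density ≈ 1.204 kg/m³, sound phase velocity ≈ 343 m/s), the specific acoustic impedance comes out to roughly 413 Pa·s/m (rayl). A minimal sketch:

```python
def specific_acoustic_impedance(density_kg_m3: float, phase_velocity_m_s: float) -> float:
    """Specific acoustic impedance z = rho * c (claim 4), in Pa*s/m (rayl)."""
    return density_kg_m3 * phase_velocity_m_s

# Air at ~20 degrees C: rho ~= 1.204 kg/m^3, c ~= 343 m/s
z_air = specific_acoustic_impedance(1.204, 343.0)
print(round(z_air))  # ~413 rayl
```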

5. The method as claimed in claim 3, wherein the second sound wave comprises amplitude and frequency equivalent to that of the first sound wave.

6. The method as claimed in claim 3, wherein the second sound wave is 180 degrees out of phase with respect to the first sound wave.
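
Claims 5 and 6 together describe destructive interference: a second wave with the same amplitude and frequency but shifted 180 degrees sums to (essentially) zero where the two waves converge. A minimal numerical check, with an assumed 440 Hz tone and 48 kHz sample rate:

```python
import math

def first_wave(t: float, amplitude: float = 1.0, freq_hz: float = 440.0) -> float:
    return amplitude * math.sin(2 * math.pi * freq_hz * t)

def second_wave(t: float, amplitude: float = 1.0, freq_hz: float = 440.0) -> float:
    # Same amplitude and frequency (claim 5), 180 degrees out of
    # phase (claim 6): sin(x + pi) == -sin(x).
    return amplitude * math.sin(2 * math.pi * freq_hz * t + math.pi)

# Where the waves converge (the audio space boundary), they cancel.
samples = [t / 48000 for t in range(480)]
residual = max(abs(first_wave(t) + second_wave(t)) for t in samples)
print(residual)  # essentially zero, up to floating-point noise
```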

7. The method as claimed in claim 1, wherein the first sound wave associated with the user selected audio is determined based on a metadata of the user selected audio stored in a media device connected to the one of one or more input sources configured in the vehicle.

8. The method as claimed in claim 1, wherein the determined direction of the first directional speaker is varied based on the user related information including acoustic characteristics of material of interior of the vehicle.

9. The method as claimed in claim 1, wherein direction of the second directional speaker is varied based on the audio space boundary in the one of the one or more regions.

10. A personalized audio space generation system, the system comprising:
a processor;
a memory, communicatively coupled with the processor, wherein the memory stores processor-executable instructions, which on execution cause the processor to:
receive a user related information of a user in the vehicle from at least one image sensor associated with each of one or more regions in the vehicle;
determine direction of one or more first directional speakers associated with each of the one or more regions in the vehicle based on the received user related information;
identify an audio space boundary for each of the one or more regions based on the direction of the one or more first directional speakers of corresponding region of the one or more regions;
render a first sound wave of a user selected audio in the determined direction using the one or more first directional speakers associated with one of the one or more regions where the user is seated; and
transmit a second sound wave corresponding to the first sound wave along the identified audio space boundary of the one of the one or more regions using one or more second directional speakers associated with the one of the one or more regions, wherein the first sound wave and the second sound wave converge at the audio space boundary of the region such that the second sound wave restricts rendering of the first sound wave beyond the audio space boundary of the one of the one or more regions to generate the personalized audio space in the vehicle.

11. The system as claimed in claim 10, wherein the user related information includes a height of the user, a position of the user, material of interior of the vehicle and acoustic characteristics associated with material of the vehicle in each of the one or more regions.

12. The system as claimed in claim 10, wherein the processor is configured to transmit the second sound wave corresponding to the first sound wave by:
determining the first sound wave associated with the user selected audio via an input source associated with each of the one or more regions in the vehicle, wherein the user selected audio is played via the one or more first directional speakers associated with the one of the one or more regions; and
generating the second sound wave based on the first sound wave of the user selected audio, displacement of the first sound wave in the one of the one or more regions and a specific acoustic impedance for the first sound wave in the one of the one or more regions.

13. The system as claimed in claim 12, wherein the specific acoustic impedance for the first sound wave is determined as product of density and phase velocity of the first sound wave of the user selected audio.

14. The system as claimed in claim 12, wherein the second sound wave comprises amplitude and frequency equivalent to that of the first sound wave.

15. The system as claimed in claim 12, wherein the second sound wave is 180 degrees out of phase with respect to the first sound wave.

16. The system as claimed in claim 10, wherein the first sound wave associated with the user selected audio is determined based on a metadata of the user selected audio stored in a media device connected to the one of one or more input sources configured in the vehicle.

17. The system as claimed in claim 10, wherein the direction of the first directional speaker is varied based on the user related information including acoustic characteristics of material of interior of the vehicle.

18. The system as claimed in claim 10, wherein direction of the second directional speaker is varied based on the audio space boundary in the one of the one or more regions.

Dated this 30th day of December, 2017

Madhusudan S.T
IN/PA-1297
Of K&S Partners
Agent for the Applicant
Description:

TECHNICAL FIELD
The present subject matter is related, in general, to audio processing systems and, more particularly but not exclusively, to a method and system for generating a personalized audio space in a vehicle.

Documents

Application Documents

# Name Date
1 201741047426-STATEMENT OF UNDERTAKING (FORM 3) [30-12-2017(online)].pdf 2017-12-30
2 201741047426-REQUEST FOR EXAMINATION (FORM-18) [30-12-2017(online)].pdf 2017-12-30
3 201741047426-REQUEST FOR CERTIFIED COPY [30-12-2017(online)].pdf 2017-12-30
4 201741047426-POWER OF AUTHORITY [30-12-2017(online)].pdf 2017-12-30
5 201741047426-FORM 18 [30-12-2017(online)].pdf 2017-12-30
6 201741047426-FORM 1 [30-12-2017(online)].pdf 2017-12-30
7 201741047426-DRAWINGS [30-12-2017(online)].pdf 2017-12-30
8 201741047426-DECLARATION OF INVENTORSHIP (FORM 5) [30-12-2017(online)].pdf 2017-12-30
9 201741047426-COMPLETE SPECIFICATION [30-12-2017(online)].pdf 2017-12-30
10 201741047426-REQUEST FOR CERTIFIED COPY [09-03-2018(online)].pdf 2018-03-09
11 201741047426-Proof of Right (MANDATORY) [14-05-2018(online)].pdf 2018-05-14
12 Correspondence by Agent_Form30,Form1_16-05-2018.pdf 2018-05-16
13 201741047426-FER.pdf 2019-12-27
14 201741047426-OTHERS [26-06-2020(online)].pdf 2020-06-26
15 201741047426-FER_SER_REPLY [26-06-2020(online)].pdf 2020-06-26
16 201741047426-DRAWING [26-06-2020(online)].pdf 2020-06-26
17 201741047426-CORRESPONDENCE [26-06-2020(online)].pdf 2020-06-26
18 201741047426-COMPLETE SPECIFICATION [26-06-2020(online)].pdf 2020-06-26
19 201741047426-CLAIMS [26-06-2020(online)].pdf 2020-06-26
20 201741047426-ABSTRACT [26-06-2020(online)].pdf 2020-06-26
21 201741047426-PatentCertificate20-10-2021.pdf 2021-10-20
22 201741047426-PROOF OF ALTERATION [16-11-2021(online)].pdf 2021-11-16
23 201741047426-RELEVANT DOCUMENTS [20-09-2023(online)].pdf 2023-09-20

Search Strategy

1 SS_06-12-2019.pdf

ERegister / Renewals

3rd: 16 Nov 2021

From 30/12/2019 - To 30/12/2020

4th: 16 Nov 2021

From 30/12/2020 - To 30/12/2021

5th: 16 Nov 2021

From 30/12/2021 - To 30/12/2022

6th: 07 Dec 2022

From 30/12/2022 - To 30/12/2023

7th: 19 Dec 2023

From 30/12/2023 - To 30/12/2024

8th: 18 Dec 2024

From 30/12/2024 - To 30/12/2025