
Masking of Dual Tone Multi-Frequency (DTMF) Tone in a Voice Based Communication

Abstract: Described herein are a method and a system for masking a dual tone multi-frequency (DTMF) tone in a voice based communication over a telecommunication network. For this, an audio signal is received at a computing system (104). The audio signal is then fragmented into a plurality of audio frames. From amongst the plurality of audio frames, an audio frame is identified as a tonal frame. Thereafter, a speech signal in the tonal frame is detected. Based on the detection, a complete spectral masking or a partial spectral masking of the tonal frame is performed. According to the partial spectral masking, a DTMF tone is masked while the speech signal in the tonal frame is left unmasked.


Patent Information

Application #
3490-MUM-2013
Filing Date
01 November 2013
Publication Number
30/2015
Publication Type
INA
Invention Field
COMMUNICATION
Status
Email
iprdel@lakshmisri.com
Parent Application
Patent Number
Legal Status
Grant Date
2021-02-19
Renewal Date

Applicants

TATA CONSULTANCY SERVICES LIMITED
Nirmal Building, 9th Floor, Nariman Point, Mumbai, Maharashtra 400021,

Inventors

1. SHUKLA, Manish
54-B, Hadapsar Industrial Estate, Pune 411013,
2. RADADIA, Purushotam
54-B, Hadapsar Industrial Estate, Pune 411013,
3. KARANDE, Shirish
54-B, Hadapsar Industrial Estate, Pune 411013,
4. KOPPARAPPU, Sunil
5G4, Yantra Park, Pokhran Road 2, Thane (West), 400601,
5. LODHA, Sachin
54B, Hadapsar Industrial Estate, Pune 411013,

Specification

CLAIMS: I/We claim:

1. A method for masking a dual tone multi-frequency (DTMF) tone in a voice based communication over a telecommunication network, the method comprising:
receiving an audio signal, wherein the audio signal includes a speech signal and a DTMF tone;
fragmenting the audio signal into a plurality of audio frames;
identifying an audio frame, from amongst the plurality of audio frames, as a tonal frame;
detecting the speech signal in the tonal frame; and
based on the detection, performing one of:
complete spectral masking of the tonal frame when no speech signal is detected in the identified tonal frame, and
partial spectral masking of the tonal frame when the speech signal is detected such that the DTMF tone is masked while the speech signal is left unmasked in the tonal frame.

2. The method as claimed in claim 1, wherein the identifying of the tonal frame is performed based on determining a number of peaks of the audio frame reaching a peak count threshold.

3. The method as claimed in claim 1, wherein the detecting of the speech signal is performed based on characteristic spectral shapes of the speech signal.

4. The method as claimed in claim 1, wherein after identifying the tonal frame, the method comprises confirming the identity of the audio frame as the tonal frame on determining a peak variance of absolute values of a collection of peaks reaching a peak variance threshold, and wherein for absolute values, a peak is obtained from a positive margin crossing to a negative margin crossing and a peak is obtained from a negative margin crossing to a positive margin crossing.

5. The method as claimed in claim 1, wherein after identifying the tonal frame, the method comprises identifying the DTMF tone in the tonal frame by:
forming a sparse approximation of the tonal frame in terms of reference vectors obtained on the basis of DTMF reference frequencies;
evaluating sparsity and accuracy of the sparse approximation to ensure that the sparse approximation complies with a structure of a DTMF tone; and
identifying the DTMF tone based on the evaluation.

6. The method as claimed in claim 5, wherein based on identifying the DTMF tone, the method comprises processing the DTMF tone for providing input to an intended application.

7. The method as claimed in claim 1, wherein the detecting the speech signal in the identified tonal frame comprises detecting overlapping of the speech signal on the DTMF tone by estimating boundaries of the DTMF tone and two frequencies of the DTMF tone.

8. The method as claimed in claim 1, wherein the complete spectral masking comprises:
determining the DTMF tone in the tonal frame;
replacing the DTMF tone with silence in the tonal frame; and
introducing a dummy low frequency tone in the tonal frame, wherein the dummy low frequency tone has a reference frequency of less than 1000 Hz.

9. The method as claimed in claim 1, wherein the partial spectral masking comprises:
introducing a dummy high frequency tone in the tonal frame, wherein the dummy high frequency tone has a reference frequency of greater than 1000 Hz and belongs to a DTMF dictionary comprising all DTMF tones;
performing high band pass filtering of the frequencies greater than 1000 Hz in the tonal frame;
cancelling a low frequency tone left in the tonal frame by an exact tone cancellation method; and
introducing a dummy low frequency tone in the tonal frame, wherein the dummy low frequency tone has a reference frequency of less than 1000 Hz.

10. A computing system (104) for masking a dual tone multi-frequency (DTMF) signal in a voice based communication over a telecommunication network, the computing system (104) comprising:
a processor (202);
a virtual audio device driver (108), coupled to the processor (202), the virtual audio device driver (108) comprising:
an audio receiving module (214) to receive an audio signal, wherein the audio signal includes a speech signal and a DTMF tone, and
a processing module (216) to:
fragment the audio signal into a plurality of audio frames;
identify an audio frame, from amongst the plurality of audio frames, as a tonal frame upon determining a number of peaks in the audio frame reaching a peak count threshold; and
an audio engine (110), coupled to the processor (202), to:
detect the speech signal in the tonal frame based on characteristic spectral shapes of the speech signal; and
based on the detection, perform one of:
complete spectral masking of the tonal frame when no speech signal is detected in the tonal frame; and
partial spectral masking of the tonal frame when the speech signal is detected such that the DTMF tone is masked while the speech signal is left unmasked in the tonal frame.

11. The computing system (104) as claimed in claim 10, wherein after the identification of the tonal frame, the processing module (216) confirms the identity of the audio frame as the tonal frame on determining a peak variance of absolute values of a collection of peaks reaching a peak variance threshold, and wherein for absolute values, a peak is obtained from a positive margin crossing to a negative margin crossing and a peak is obtained from a negative margin crossing to a positive margin crossing.

12. The computing system (104) as claimed in claim 10, wherein the processing module (216) comprises a scheduling processing sub-module (216-1) to perform random sampling of the plurality of audio frames, and wherein the random sampling comprises sub-sampling followed by pre-filtering of the plurality of audio frames.

13. The computing system (104) as claimed in claim 10, wherein the processing module (216) identifies the DTMF tone in the identified tonal frame by:
forming a sparse approximation of the tonal frame in terms of reference vectors obtained on the basis of DTMF reference frequencies;
evaluating sparsity and accuracy of the sparse approximation to ensure that the sparse approximation complies with a structure of a DTMF tone; and
identifying the DTMF tone based on the evaluation.

14. The computing system (104) as claimed in claim 13, wherein based on identification of the DTMF tone, the processing module (216) processes the DTMF tone for providing input to an intended application.

15. The computing system as claimed in claim 10, wherein the audio engine (110) detects overlapping of the speech signal on the DTMF tone by estimating boundaries of the DTMF tone and the two frequencies of the DTMF tone.

16. The computing system as claimed in claim 10, wherein the audio engine (110) performs complete spectral masking by:
determining the DTMF tone in the tonal frame;
replacing the DTMF tone with silence in the tonal frame; and
introducing a dummy low frequency tone in the tonal frame, wherein the dummy low frequency tone has a reference frequency of less than 1000 Hz.

17. The computing system as claimed in claim 10, wherein the audio engine (110) performs partial spectral masking by:
introducing a dummy high frequency tone in the tonal frame, wherein the dummy high frequency tone has a reference frequency of greater than 1000 Hz and belongs to a DTMF dictionary comprising all DTMF tones;
performing high band pass filtering of the frequencies greater than 1000 Hz in the tonal frame;
cancelling a low frequency tone left in the tonal frame by an exact tone cancellation method; and
introducing a dummy low frequency tone in the tonal frame, wherein the dummy low frequency tone has a reference frequency of less than 1000 Hz.

18. A non-transitory computer-readable medium having a set of computer readable instructions that, when executed, cause a processor (202) to:
receive an audio signal including a speech signal and a DTMF tone;
fragment the received audio signal into a plurality of audio frames;
identify an audio frame, from amongst the plurality of audio frames, as a tonal frame upon determining a number of peaks in the audio frame reaching a peak count threshold;
detect the speech signal in the identified tonal frame based on characteristic spectral shapes of the speech signal; and
based on the detection, perform one of:
complete spectral masking of the tonal frame when no speech signal is detected in the identified tonal frame; and
partial spectral masking of the tonal frame when the speech signal is detected such that the DTMF tone is masked while the speech signal is left unmasked in the tonal frame.
TECHNICAL FIELD
[0001] The present subject matter relates to masking of a dual tone multi-frequency (DTMF) tone in a voice based communication, in general, and, particularly, but not exclusively, to masking of a DTMF tone in Voice over Internet protocol (VoIP).
BACKGROUND
[0002] Nowadays, face-to-face transactions between a customer and a service provider are being replaced by transactions carried out remotely over telecommunication networks. This provides ease of access for the customer, and reduced costs for the service provider. For example, there has been a rapid rise in telephone based transactions including credit card validation, banking, off-track betting, stock market transactions, commodities transactions, placing reservations, ticketing, and retail and wholesale sales.
[0003] Generally, such telephone based transactions are facilitated in response to a customer dialing a telephone number of a service provider and then entering security sensitive information, including identification information and/or card information, using standard touch tones, i.e., dual tone multi-frequency (DTMF) tones, to represent the security sensitive information being entered and transmitted. However, when transmitted, such DTMF tones are subject to compromise by someone eavesdropping on the service provider’s computing system.
BRIEF DESCRIPTION OF THE FIGURES
[0004] The detailed description is described with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The same numbers are used throughout the figures to reference like features and components. Some embodiments of system and/or methods in accordance with embodiments of the present subject matter are now described, by way of example only, and with reference to the accompanying figures, in which:
[0005] Fig. 1 illustrates a Voice over internet protocol (VoIP) environment in accordance with an embodiment.
[0006] Fig. 2 illustrates a computing system for masking of a dual tone multi-frequency (DTMF) tone in a voice based communication, in accordance with an embodiment.
[0007] Fig. 3 illustrates a computing system for masking of a dual tone multi-frequency (DTMF) tone in a voice based communication, in accordance with another embodiment.
[0008] Fig. 4 illustrates a method for masking a dual tone multi-frequency (DTMF) tone in a voice based communication, in accordance with an embodiment.
[0009] In the present document, the word “exemplary” is used herein to mean “serving as an example, instance, or illustration”. Any embodiment or implementation of the present subject matter described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other embodiments.
DETAILED DESCRIPTION
[0010] Methods and systems for dual tone multi-frequency (DTMF) tone processing in a Voice over Internet Protocol (VoIP) based communication are described herein. The methods and systems may include one or more functions including identifying potential audio frames comprising DTMF tones, detecting and confirming the presence of tonal frames from amongst the potential audio frames, and masking DTMF tones in the tonal frames by removal or suppression of the DTMF tones present in the audio frame. The methods can be implemented in various systems communicating through various networks. Although the description herein is with reference to a VoIP network, the methods and systems may be implemented in other communication protocols, albeit with a few variations, as will be understood by a person skilled in the art but nevertheless included within the scope of the present subject matter.
[0011] Voice based communication services, such as Voice over Internet Protocol (VoIP) services provided over the Internet, allow delivery of audio signals and multimedia sessions over the Internet. The VoIP services may be utilized in a variety of services, such as phone banking, electronic commerce, insurance related transactions, and bill payments. In each of these services, both voice and data may be transmitted for completion of a transaction. For example, when a customer desires to purchase a product from a service provider through a user device, a call operator of the service provider may ask the customer during the voice based communication to enter personal or confidential information, including a product code and payment card details, using the keypad of the communication instrument being used. Typically, during use, an interaction, such as pressing a button corresponding to a number, produces an audible tone, such as a Dual Tone Multi-Frequency (DTMF) tone. Normally, such DTMF tones are produced using a standard keypad of the user device. In an example, the user device may include a VoIP phone, a smart phone having an application, a telephone, etc., for accessing VoIP services.
[0012] In conventional systems, the DTMF tones are generated using DTMF encoding techniques, which produce a tone from a combination of two frequencies. In such a case, one frequency is selected from a high frequency band group and the other frequency is selected from a low frequency band group. In standard telephone systems, the high frequency band group includes four high frequencies (1209, 1336, 1477, and 1633 Hz), while the low frequency band group includes four low frequencies (697, 770, 852, and 941 Hz). Each one of the four high frequencies corresponds to one of the four columns of keys on a standard extended telephone keypad, while each one of the low frequencies corresponds to one of the four rows of keys on the standard extended telephone keypad. Thus, by using the DTMF encoding technique, each different telephone key is represented by a signal including a unique combination of one frequency tone from the high frequency band group and one from the low frequency band group. For example, when the customer presses a key representing the number '1', a DTMF tone containing a high frequency of 1209 Hz and a low frequency of 697 Hz is transmitted.
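By way of illustration only, the following Python sketch reproduces this encoding scheme: it maps each key of a standard extended keypad to its (low, high) frequency pair and synthesizes the corresponding two-tone signal. The 8000 Hz sampling rate and the 100 ms tone duration are assumptions for the example, not values prescribed by the present subject matter.

import numpy as np

FS = 8000  # assumed telephony sampling rate, Hz

LOW = [697, 770, 852, 941]       # row (low group) frequencies, Hz
HIGH = [1209, 1336, 1477, 1633]  # column (high group) frequencies, Hz
KEYS = ["123A", "456B", "789C", "*0#D"]

# Each key maps to one (low, high) frequency pair.
KEY_TO_FREQS = {KEYS[r][c]: (LOW[r], HIGH[c])
                for r in range(4) for c in range(4)}

def dtmf_tone(key, duration=0.1):
    """Synthesize the DTMF tone for a keypad key as a sum of two sinusoids."""
    f_low, f_high = KEY_TO_FREQS[key]
    t = np.arange(int(FS * duration)) / FS
    return np.sin(2 * np.pi * f_low * t) + np.sin(2 * np.pi * f_high * t)

tone = dtmf_tone("1")  # a 697 Hz + 1209 Hz tone, as in the example above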
[0013] Payment card details and personal identification numbers (PINs) are security sensitive information. Thus, it may not be desirable to share such information over VoIP through the DTMF tones, unless the communication network is secure.
[0014] For ensuring security of the DTMF tones, Payment Card Industry Data Security Standard (PCI DSS) compliance is defined to handle payment card related information. According to the PCI DSS compliance, the DTMF tones keyed in through the VoIP device may not be transmitted to a system or an operator, or recorded by the system receiving the security sensitive information. However, the PCI DSS compliance is valid only for certain temporal intervals when a safe mode has been turned on by the call operator of the service provider.
[0015] Further, in one conventional technique, an in-call interactive voice response (IVR) transfer feature is employed for the security of the DTMF tones. The in-call IVR transfer feature allows the call operator handling a VoIP call of the customer to switch the voice based communication to an IVR system for transmission of the DTMF tones, and the call operator is then put on hold during the transmission of the DTMF tones. However, with the in-call IVR transfer feature, the call operator must detect when the DTMF tones are about to be transmitted and manually switch the voice based communication; any failure to switch before the DTMF transmission may cause the DTMF tones to be transmitted to the call operator. Further, call drops are known to occur during the switching of the call from the call operator to the IVR, which inconveniences the customer.
[0016] In another conventional technique, the call operator configures a computing system to suppress the DTMF tones during recording in a storage medium, by canceling or filtering the DTMF tones. However, during suppression, the call operator may receive the DTMF tones and, therefore, the call operator may be in a position to detect security sensitive information associated with the DTMF tones.
[0017] In yet another conventional technique, notch filtering is performed on an audio signal, including the voice and the DTMF tones, received at the computing system for determination and removal of a portion of the audio signal in which the DTMF tones may be present. However, such notch filtering may remove a portion of the audio signal in which the voice is present along with the DTMF tones. Further, the notch filtering of the entire audio signal received at the computing system leads to significant time overhead in the processing.
[0018] Further, integration of the aforesaid conventional techniques in an existing system may require introduction of special hardware or software applications or both. Such integration with the existing system may be cumbersome and undesirable.
[0019] Various embodiments described herein, in accordance with the present subject matter, include methods and systems for masking dual tone multi-frequency (DTMF) tone in a voice based communication. The methods and systems according to the present subject matter may be employed for processing of an audio signal received at a computing system. In an example, the audio signal includes a speech signal and a DTMF tone. The steps for processing include fragmenting the audio signal into a plurality of audio frames. In an example, the audio signal is fragmented based on time slots. For example, the computing system may fragment the audio signal by 20 ms time slots. After fragmentation of the audio signal, the computing system may be employed for identifying an audio frame, from amongst the plurality of audio frames, as a tonal frame. In an example, the tonal frame may be understood as an audio frame including a DTMF tone. The identification of the tonal frame is confirmed upon determining a number of peaks in the audio frame reaching a peak count threshold. Once the tonal frame is detected, a speech signal is detected in the tonal frame based on characteristic spectral shapes of the speech signal.
[0020] In this way, only the audio frames that are identified as tonal frames from amongst the plurality of audio frames are processed, rather than all of the plurality of audio frames, which in turn reduces the computational load on the computing system. Thus, the methods and the systems of the present subject matter perform significantly faster processing of the audio frames as compared to the techniques known in the art.
[0021] Subsequently, based on the detection of the speech signal in the tonal frame, the methods and the systems perform complete spectral masking or partial spectral masking of the tonal frame, or both. According to the complete spectral masking, the tonal frame is completely masked when no speech signal is detected in the tonal frame. Further, according to the partial spectral masking, the tonal frame is partially masked when the speech signal is detected, such that the DTMF tone is masked while the speech signal is left unmasked.
[0022] In this way, the present subject matter facilitates methods and systems that identify a tonal frame in which a speech signal overlaps a DTMF tone based on spectral analysis, and mask or suppress only the DTMF tone present in such a tonal frame. Thus, the methods and systems of the present subject matter perform accurate masking of the DTMF tone in a tonal frame based on the spectral analysis.
[0023] Further, the methods and the systems according to the present subject matter can be implemented as middleware or can be integrated into already existing computing systems. Thus, the application level need not undergo any changes for the integration of the methods and the systems proposed according to the present subject matter.
[0024] The aspects defined above and further aspects of the present subject matter are apparent from the examples of embodiment described hereinafter and are explained with reference to those examples. The present subject matter will be described in more detail hereinafter with reference to examples of embodiment, to which the present subject matter is, however, not limited.
[0025] Fig. 1 illustrates, as an example, a Voice over Internet Protocol (VoIP) environment 100 in accordance with an exemplary embodiment of the present subject matter. The VoIP environment 100 includes a plurality of user devices 102-1, 102-2 …, 102-N, hereinafter collectively referred to as user devices 102 and individually as user device 102. Examples of the user devices 102 may include, but are not limited to, VoIP phones, smart phones, mobiles, soft phones on laptops, and other smart devices having calling capabilities. Thus, normal phone or cell phone users would also be within the scope of the present subject matter.
[0026] Further, the VoIP environment 100 can include a plurality of computing systems out of which one computing system 104 is shown for the sake of simplicity. The computing system 104 may be implemented on various communication devices including but not limited to smart phones, VoIP phones, soft phones on laptops, etc. The computing system 104 may also be an independent system, capable of transmitting audio signals to a VoIP recipient after processing.
[0027] In an example, the computing system 104 and the user devices 102 are connected over a network 106 via wired, wireless, optical, or other types of telecommunication network connections. The network 106 may be a single network or a combination of multiple networks. In an example, the network 106 enables voice and data communication for the user devices 102 over a packet data telecommunication network, such as Internet. The network 106 comprises several network elements including modems, infrastructure elements for Internet, and VoIP servers.
[0028] In an example, a customer, from amongst a plurality of customers C1, C2, …, CN of the user devices 102, may communicate with a service provider for several types of transactions, such as phone banking, credit card related transactions, and electronic commerce. As an example, a customer, say a customer C2, may desire to conduct electronic commerce with a service provider having the possession of the computing system 104. For this, the customer C2 initiates the electronic commerce by dialing, from the user device 102-2, a telephone number of the service provider having the possession of the computing system 104. At the service provider's end, a call operator or the computing system 104 implemented as an interactive voice response (IVR) system may prompt the customer C2 during a real time voice based communication to enter security sensitive information, such as the account number, debit card number, and the personal identity number (PIN), using DTMF tones. Normally, such DTMF tones are produced using a keypad of the user device 102-2.
[0029] After receiving the prompt from the computing system 104 or the call operator, the customer C2 depresses appropriate keys on the keypad of the user device 102-2 to generate DTMF tones corresponding to the security sensitive information. Each key in the keypad of the user device 102-2 may represent a digit or an alphanumeric value. The digit or the alphanumeric value corresponding to the security sensitive information is encoded as DTMF tones. The DTMF tones may be considered to be generated from one or more DTMF codes. Each DTMF code uses a combination of two frequencies, one being a low frequency from a low frequency group and the other being a high frequency from a high frequency group. The low frequency group includes 697 Hz, 770 Hz, 852 Hz, and 941 Hz, while the high frequency group includes 1209 Hz, 1336 Hz, 1477 Hz, and 1633 Hz. The DTMF tones can be generated from a combination of any low frequency and any high frequency, giving a possibility of 16 different DTMF codes.
[0030] The DTMF tones generated by depressing appropriate keys of the user device 102-2 are then encoded, along with the speech signal, in an audio signal and transmitted within the audio signal from the user device 102-2 to the computing system 104. The audio signal is then received and acted upon by the computing system 104. In an example, the computing system 104 may determine the DTMF tones in the audio signal and convert the determined DTMF tones into a predetermined code, for example, standard ASCII (American Standard Code for Information Interchange) or EBCDIC (Extended Binary Coded Decimal Interchange Code), representative of alphanumeric data. The alphanumeric data is then inputted as the security sensitive information in an intended application running on the computing system 104. In an example, the intended application can be a VoIP application, an Interactive Voice Response (IVR) application, or a Session Initiation Protocol (SIP) application.
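Conversely, once the two frequencies of a received DTMF tone have been determined, an inverse lookup recovers the key, whose value can then be converted to the predetermined code. A minimal Python sketch of this decoding step follows; it is illustrative only and stands apart from the ASCII/EBCDIC conversion itself.

LOW = [697, 770, 852, 941]
HIGH = [1209, 1336, 1477, 1633]
KEYS = ["123A", "456B", "789C", "*0#D"]

# Detected (low, high) frequency pair -> keypad character.
FREQS_TO_KEY = {(LOW[r], HIGH[c]): KEYS[r][c]
                for r in range(4) for c in range(4)}

def decode(f_low, f_high):
    """Map a detected DTMF frequency pair to its alphanumeric key."""
    return FREQS_TO_KEY[(f_low, f_high)]

assert decode(697, 1209) == "1"
assert decode(941, 1336) == "0"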
[0031] After processing of the DTMF tones, the audio signal having the DTMF tones is required to be masked to prevent eavesdropping of the DTMF tones in an audio output of the computing system 104. For this, the computing system 104 includes a virtual audio device driver 108 and an audio engine 110. The virtual audio device driver 108 detects tonal frames in the audio signal. Tonal frames may be understood as audio frames comprising DTMF signals. Once the tonal frames are identified, the audio engine 110 performs masking of the DTMF tones present in the tonal frames to ensure the safety of the security sensitive information encoded in the DTMF tones. In an example, the audio engine 110 may mask the DTMF tones by suppressing or removing the DTMF tones present in the tonal frame, as is explained in detail later with reference to Fig. 2 and Fig. 3.
[0032] Fig. 2 illustrates exemplary components of the computing system 104, according to an exemplary embodiment of the present subject matter. In said embodiment, the computing system 104 includes a processor(s) 202, an interface(s) 204, and a memory 206. The processor(s) 202 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the processor(s) 202 is configured to fetch and execute computer-readable instructions stored in the memory 206.
[0033] The interface(s) 204 may include a variety of computer-readable instructions and hardware interfaces, for example, a web interface, a graphical user interface, etc., allowing the computing system 104 to interact with the user devices 102. Further, the interface(s) 204 may enable the computing system 104 to communicate with other computing devices, such as web servers and external data servers (not shown in figure). The interface(s) 204 can facilitate multiple communications within a wide variety of telecommunication networks and protocol types, including wired networks, for example, LAN, cable, etc., and wireless networks, such as WLAN, cellular, or satellite. The interface(s) 204 may include one or more ports for connecting a number of devices to each other or to another server.
[0034] The memory 206 can include any computer-readable medium known in the art including, for example, volatile memory (e.g., RAM), and/or non-volatile memory (e.g., EPROM, flash memory, etc.). In one embodiment, the computing system 104 includes module(s) 208 and data 210. The module(s) 208 usually includes routines, programs, objects, components, data structures, etc., that perform particular tasks or implement particular abstract data types.
[0035] In one implementation, the module(s) 208 includes a virtual audio device driver 108, an audio engine 110, and an actual audio device driver 212. The virtual audio device driver 108 includes other modules, for example, an audio receiving module 214, a processing module 216 having a scheduling processing sub-module 216-1, and an audio output module 218, for providing various functionalities of the computing system 104. Similarly, the audio engine 110 includes the masking module 220 to perform masking of the DTMF tones. In addition to these, the module(s) 208 may include other module(s) 222 to perform various functions of the computing system 104. It will be appreciated that such modules may be represented as a single module or a combination of different modules.
[0036] Additionally, the computing system 104 further includes the data 210 that serves, amongst other things, as a repository for storing data fetched, processed, received and generated by one or more of the modules 208. In one implementation, the data 210 may include, for example, an audio buffer 224 and other data 226. In an example, the data 210 may be stored in the memory 206 in the form of data structures. Additionally, the aforementioned data 210 can be organized using data models, such as relational or hierarchical data models. In another example, the audio buffer 224 may be implemented as a circular buffer in a shared memory that is sharable by a number of applications or processes running on the computing system 104.
[0037] The computing system 104 having the configurations according to the embodiment illustrated in Fig. 2 can be implemented to identify audio frames comprising the DTMF tones in the middle of a voice based communication, to detect the DTMF tones in the identified audio frames, and to mask the detected DTMF tones such that the speech quality is largely unaffected.
[0038] The working or operation of the computing system 104, illustrated in Fig. 2, is described in detail with reference to Fig. 3 in the description hereinafter.
[0039] As described earlier, in an example, a customer C2 may desire to conduct electronic commerce with a service provider having the possession of the computing system 104. For this, the customer C2 can dial from the user device 102-2 a telephone number of the service provider. At the service provider's end, a call operator or the computing system 104 implemented as an interactive voice response (IVR) system prompts the customer C2 during the voice based communication to enter security sensitive information, such as the account number, debit card number, and the personal identity number (PIN), using DTMF tones. In response to the prompt, the customer C2 may provide the security sensitive information using the DTMF tones by depressing appropriate keys provided on the user device 102-2. The DTMF tones, along with the speech signal, are then transmitted as an audio signal from the user device 102-2 to the computing system 104.
[0040] At the computing system 104, the audio signal is received by the audio receiving module 214 of the virtual audio device driver 108 through the interface 204. In the embodiment represented in Fig. 3, the virtual audio device driver 108 is adapted to be the default audio device driver of the computing system 104. This ensures that all audio signals received for an audio hardware 302, for example, an audio speaker, are intercepted through the virtual audio device driver 108.
[0041] In the embodiment represented in Fig. 3, the virtual audio device driver 108 runs in a kernel mode; however, other embodiments are not so limited, and the virtual audio device driver 108 may operate in a different environment than the kernel mode. In general, a computing system has several modes of operation relative to the notion of privilege. Privilege has to do with the degree of access that a program has to various functional capabilities and resources contained within a computing system. Kernel mode is typically a privileged mode in which a program has complete and unlimited access to all computer functional capabilities and resources. In contrast, user mode is typically a designated mode in which a program has limited access to computer resources and functions. Typically, user application programs operate in the user mode.
[0042] In an example, the virtual audio device driver 108 may be implemented with the processing module 216, the audio output module 218, and an application database (not shown in figure), in addition to the audio receiving module 214. The application database may have one or more intended applications that require processing or detection of the DTMF tones present in the audio signal received by the audio receiving module 214. According to the said implementation, the processing module 216 verifies whether the intended application for the audio signal is present in the application database. If the intended application is not identified, the processing module 216 may forward the audio signal in a default manner to the audio engine 110. Otherwise, if the intended application is identified in the application database, the processing module 216 may process the audio signal for determining DTMF tones. For determining the DTMF tones, the processing module 216 may fragment the received audio signal into a plurality of audio frames. In an example, the fragmentation may be carried out based on time slots. According to the example, each of the plurality of audio frames may be of a fixed time slot. In addition, and in accordance with the present subject matter, the processing module 216 may fragment the plurality of audio frames with an overlap. For example, if the processing module 216 fragments the audio signal by 20 ms time slots, then 0 ms to 20 ms forms the first time slot, 10 ms to 30 ms forms the second time slot, 20 ms to 40 ms forms the third time slot, and so on. As can be seen, each of the plurality of audio frames has a fixed size of 20 ms, but the time slots are overlapping. The overlap may be helpful in smoother processing because an audio signal may include security sensitive information, and that security sensitive information could include cohesive data. Cohesive data may include data that has to be read in conjunction, like a personal identification number, a card number, etc. Thus, overlapping of the audio frames increases the probability of cohesive data being available in one audio frame, and therefore ensures easier and more accurate processing.
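The overlapped fragmentation described above may be sketched in Python as follows; the 8000 Hz sampling rate is an assumption, while the 20 ms frame size and 10 ms hop follow the example in the preceding paragraph.

import numpy as np

def fragment(audio, fs=8000, frame_ms=20, hop_ms=10):
    """Fragment an audio signal into fixed-size overlapping frames:
    frame 0 covers 0-20 ms, frame 1 covers 10-30 ms, and so on."""
    frame_len = fs * frame_ms // 1000
    hop = fs * hop_ms // 1000
    frames = [audio[start:start + frame_len]
              for start in range(0, len(audio) - frame_len + 1, hop)]
    return np.stack(frames) if frames else np.empty((0, frame_len))

signal = np.random.randn(8000)  # 1 s of audio at the assumed 8 kHz rate
frames = fragment(signal)       # 99 overlapping frames of 160 samples each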
[0043] The plurality of audio frames are then processed by the processing module 216 to identify tonal frames from amongst the plurality of audio frames. The tonal frames may be understood as audio frames that include DTMF tones. The identification of the tonal frames is performed by comparing a total energy in each of the plurality of audio frames with an energy threshold. In an example, the energy threshold may be identified from historic data of the DTMF signals.
[0044] In an implementation, the processing module 216 identifies the tonal frames from amongst the plurality of audio frames using a peaks count method. That is, the computing system 104 identifies an audio frame as a tonal frame based on determining a number of peaks of a signal contained in an audio frame reaching a peak count threshold. If, on the basis of the peaks count method, the processing module 216 determines that the number of peaks of a signal in an audio frame is less than the peak count threshold, the processing module 216 may identify that the said audio frame does not comprise a DTMF tone. In such a case, the processing module 216 may transmit the said audio frame, without any further processing, to the audio engine 110. However, in case the processing module 216 determines that the number of peaks of a signal in an audio frame reaches the peak count threshold, the processing module 216 identifies that audio frame as a tonal frame and marks that audio frame for masking.
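A minimal sketch of such a peaks count check is given below in Python; the margin and the peak count threshold are illustrative assumptions, whereas the present subject matter derives the threshold from characteristics of the DTMF tones.

import numpy as np

PEAK_COUNT_THRESHOLD = 30  # assumed value for a 20 ms frame

def count_peaks(frame, margin=0.1):
    """Count peaks as local maxima of the signal magnitude that exceed
    a margin, so that low-level noise is not counted."""
    mag = np.abs(frame)
    is_peak = (mag[1:-1] > mag[:-2]) & (mag[1:-1] > mag[2:]) & (mag[1:-1] > margin)
    return int(np.sum(is_peak))

def is_candidate_tonal(frame):
    """Mark a frame for further checks when its peak count reaches the threshold."""
    return count_peaks(frame) >= PEAK_COUNT_THRESHOLD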
[0045] Further, once the peaks count method is performed, the processing module 216 confirms the identification of the tonal frame by using a peak variance method. In the peak variance method, one peak is determined by obtaining a maximum absolute value of the signal between a positive margin crossing and a negative margin crossing. Similarly, one peak is determined by obtaining a maximum absolute value of that signal between a negative margin crossing and a positive margin crossing. In an example, the maximum absolute value may correspond to a maximum amplitude of a signal between a positive margin crossing and a negative margin crossing. Once the absolute values of the peaks are obtained, the processing module 216 confirms the identification of the tonal frame by comparing the variance of the collected absolute values of the determined peaks, normalized by an average signal power in the tonal frame, with a peak variance threshold. In an example, the peak variance threshold may be obtained from historic data of the DTMF tones. In another example, the peak variance threshold may be identified based on multiple test trials which identify waveforms of the DTMF tones.
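The peak variance confirmation may be sketched as follows in Python; for simplicity the margin crossings are approximated by zero crossings, the threshold value is an assumption, and a low normalized variance is taken here to indicate the steady peak amplitudes of a two-tone frame.

import numpy as np

PEAK_VARIANCE_THRESHOLD = 0.05  # assumed value, not prescribed by the patent

def peak_variance(frame):
    """Collect one maximum-absolute-value peak per half-cycle (segment
    between successive sign changes), then return the variance of those
    peak values normalized by the average signal power in the frame."""
    flips = np.flatnonzero(np.diff(np.sign(frame)) != 0) + 1
    segments = np.split(np.abs(frame), flips)
    peaks = np.array([seg.max() for seg in segments if seg.size > 0])
    avg_power = np.mean(frame ** 2) + 1e-12  # guard against silent frames
    return np.var(peaks) / avg_power

def confirm_tonal(frame):
    """Confirm the tonal identification when the normalized peak variance
    stays within the threshold."""
    return peak_variance(frame) <= PEAK_VARIANCE_THRESHOLD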
[0046] Further, once the peak variance method is performed, the processing module 216 performs a sparse approximation method to further confirm the identification of the tonal frame and to determine the identity of an alphanumeric digit that corresponds to a tone present in the tonal frame. The sparse approximation method utilizes a reference dictionary, having reference vectors for all the DTMF reference frequencies, to form a sparse approximation of the tonal frame. For forming the sparse approximation, the tonal frame is treated as a signal vector, and weights are determined for the reference vectors such that a linear weighted combination of the reference vectors forms an approximation for the signal vector. Thus, the sparse approximation of the tonal frame is formed in terms of the reference vectors obtained on the basis of the DTMF reference frequencies. Then, the sparsity and accuracy of the so formed sparse approximation are evaluated to ensure that the sparse approximation complies with a structure of a DTMF tone. In case the so formed sparse approximation complies with a structure of a DTMF tone, the identification of the audio frame as the tonal frame is confirmed; otherwise, the processing module 216 may transmit the said audio frame, without any further processing, to the audio engine 110.
[0047] In an example, the reference dictionary includes the quadrature and in-phase components of all the DTMF reference frequencies. For a sampling frequency Fs, a 20 ms frame contains Fs*20/1000 samples, and the dictionary matrix has one in-phase (cosine) row and one quadrature (sine) row for each reference frequency f in {697, 770, 852, 941, 1209, 1336, 1477, 1633} Hz:
Dictionary_Matrix(f, k) = cos(2*pi*k*(1/Fs)*f) for the in-phase rows, and
Dictionary_Matrix(f, k) = sin(2*pi*k*(1/Fs)*f) for the quadrature rows,
where k = 0, 1, ..., (Fs*20/1000)-1 indexes the samples of the frame, giving a 16-row matrix in total.
[0048] Further, in said example, the sparse approximation of an audio frame is formed by solving a least squares problem to provide a solution for a weight vector as:
W = Audio_Frame_Vector * pseudo_inverse(Dictionary_Matrix);
That is, for the sparse approximation, a weight vector is determined by multiplying the vector formed by an audio frame with the pseudo-inverse of the reference dictionary that defines all DTMF tones. Further, in the said example, let us presume that:
W = [cL1, cL2, cL3, cL4, cH1, cH2, cH3, cH4, sL1, sL2, sL3, sL4, sH1, sH2, sH3, sH4]
and Wmag = [cL1*cL1+sL1*sL1, ..., cL4*cL4+sL4*sL4, cH1*cH1+sH1*sH1, ..., cH4*cH4+sH4*sH4].
The sparsity of Wmag is used to determine whether the tone exists or not.
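By way of illustration, the following Python sketch builds the 16-row reference dictionary of paragraph [0047] and evaluates the sparsity of Wmag as described in paragraph [0048]. The 8000 Hz sampling frequency and the dominance ratio used as the sparsity test are assumptions for the example.

import numpy as np

FS = 8000                 # assumed sampling frequency, Hz
N = FS * 20 // 1000       # samples per 20 ms frame
FREQS = [697, 770, 852, 941, 1209, 1336, 1477, 1633]

# In-phase (cosine) rows followed by quadrature (sine) rows, one per frequency.
k = np.arange(N)
D = np.vstack([np.cos(2 * np.pi * k * f / FS) for f in FREQS] +
              [np.sin(2 * np.pi * k * f / FS) for f in FREQS])  # 16 x N

def sparse_weights(frame):
    """Least-squares weight vector W = frame * pinv(Dictionary_Matrix)."""
    return frame @ np.linalg.pinv(D)

def identify_dtmf(frame, dominance=0.9):
    """Fold each frequency's cosine/sine weights into a magnitude (Wmag) and
    accept the frame as a DTMF tone only if a single low and a single high
    frequency dominate; returns the (low, high) pair or None."""
    w = sparse_weights(frame)
    wmag = w[:8] ** 2 + w[8:] ** 2           # per-frequency energies
    low, high = wmag[:4], wmag[4:]
    if (low.max() < dominance * low.sum()) or (high.max() < dominance * high.sum()):
        return None                          # not sparse enough: not a DTMF tone
    return FREQS[int(low.argmax())], FREQS[4 + int(high.argmax())]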
[0049] In accordance with an implementation of the present subject matter, in order to identify the tonal frame amongst the plurality of audio frames, the present subject matter may employ the peaks count method, the peak variance method, or the sparse approximation method, or any combination of the three.
[0050] In an alternative implementation of the present subject matter, the processing module 216 may include a scheduling processing sub-module 216-1 that is activated on demand for the intended application which needs the DTMF tone. In the said implementation, the intended application is hot patched or injected with an application tagger using custom dynamic link library (DLL) injection. The implementation of the DLL injection generally requires reserving a block of memory that can be shared between different applications or processes. In the said block, the audio buffer 224 is implemented as a circular buffer in the shared memory to facilitate the introduction of latency in reading audio frames from a process and returning masked audio frames from a previous time coordinate to the same or a different process. The read and write operations on the said block are the means of communication between two processes. Generally, the read and write operations cannot cross each other, as this may amount to an overflow event which may lead to incorrect audio play out or even a system crash. Further, the overflow event may also occur when the processing of audio frames present in the audio buffer 224 takes more computational time than a nominal computational time. Similarly, an underflow event may occur when a write operation has to wait before storing a newer audio frame into the audio buffer 224. The wait time anticipated during the underflow event can be better utilized for more accurate processing of the audio frame.
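A minimal Python sketch of such a circular buffer, with explicit overflow and underflow events, is given below. The shared-memory placement and the DLL injection mechanics are outside the scope of this sketch, and the class and its names are illustrative assumptions.

class CircularAudioBuffer:
    """Writer stores incoming frames; reader returns (masked) frames from an
    earlier time coordinate. The two positions must never cross."""

    def __init__(self, capacity):
        self.slots = [None] * capacity
        self.read_pos = 0
        self.write_pos = 0
        self.count = 0

    def write(self, frame):
        if self.count == len(self.slots):
            # Writer caught up with the reader: the overflow event.
            raise OverflowError("overflow: buffer full")
        self.slots[self.write_pos] = frame
        self.write_pos = (self.write_pos + 1) % len(self.slots)
        self.count += 1

    def read(self):
        if self.count == 0:
            # Reader caught up with the writer: the underflow event.
            raise BufferError("underflow: no frame available")
        frame = self.slots[self.read_pos]
        self.read_pos = (self.read_pos + 1) % len(self.slots)
        self.count -= 1
        return frame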
[0051] In the said alternative implementation, the scheduling processing sub-module 216-1 may be activated either to perform masking or to vary the sub-sampling rate so as to control the usage of the audio buffer 224 in the shared memory, in case the overflow event or the underflow event is anticipated.
[0052] In case the underflow event is anticipated, the scheduling processing sub-module 216-1 may set the most comprehensive detection and masking logic as the default for each of the tonal frames. Because of this, the processing or masking of the tonal frames present in the audio buffer 224 becomes slower; however, the accuracy and the quality of masking may improve.
[0053] Similarly, in case the overflow event is anticipated, the scheduling processing sub-module 216-1 performs random sampling of the plurality of audio frames, comprising sub-sampling followed by pre-filtering, to detect the presence of a DTMF tone. The randomly sampled audio frames are then down sampled by the scheduling processing sub-module 216-1. The down sampled audio frames are then provided to the processing module 216 to identify the tonal frames by employing the peaks count method, the peak variance method, or the sparse approximation method, or some combination of the three. In this way, the scheduling processing sub-module 216-1 may reduce the computational load of the processing module 216 in comparison to the load handled by the processing modules known in the art.
[0054] In an example, the scheduling processing sub-module 216-1 may be implemented as a VoIP application tagger that performs random sampling of audio frames, by sub-sampling followed by pre-filtering steps, to determine whether an audio frame has to be monitored strictly. For example, in case a DTMF tone is encountered in an audio frame from amongst the plurality of audio frames during random sampling of the audio frames, then every audio frame from that audio frame onwards is to be scrutinized.
[0055] A point to be noted in the context of the present subject matter is that the scheduling processing sub-module 216-1 is activated when one of the underflow event and the overflow event is anticipated. Otherwise, the processing module 216 may determine the tonal frames by employing the peaks count method, the peak variance method, or the sparse approximation method, or some combination of the three.
[0056] The processing module 216, after identification of the DTMF tone based on the sparse approximation method, converts the identified DTMF tone into security sensitive information and provides that security sensitive information to an intended application. In an example, the intended application can be a VoIP application, an Interactive Voice Response (IVR) application, or a Session Initiation Protocol (SIP) application, running on the computing system 104.
[0057] After providing the security sensitive information, the processing module 216 forwards the tonal frames, along with other audio frames, to the audio output module 218. The audio output module 218 then stores the tonal frames along with the other audio frames in the audio buffer 224 assigned to the audio engine 110. In an example, the audio engine 110 is a user-mode audio component through which applications can share access to the audio hardware 302, such as an audio speaker.
[0058] The tonal frames are then processed by the audio engine 110 to detect the speech signal in each of the tonal frames. In one implementation, a speech signal is detected in a tonal frame based on characteristic spectral shapes of the speech signal. However, in an alternative implementation, there are several techniques that can be used for the detection of the speech signal in the tonal frame. Examples include divergence distance between speech and noise histograms, the spectral slope, or a classifier trained on spectral coefficients.
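As an illustration of one such technique, the following Python sketch uses the spectral slope mentioned above as a crude speech detector. The decision threshold is an assumption for the example; a deployed system would calibrate it or use one of the other listed techniques.

import numpy as np

def spectral_slope(frame, fs=8000):
    """Slope of a straight line fitted to the log-magnitude spectrum, in
    dB/Hz. Voiced speech typically shows a pronounced negative tilt, while
    a two-tone DTMF frame concentrates its energy in two narrow peaks."""
    spectrum = np.abs(np.fft.rfft(frame)) + 1e-12
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    return np.polyfit(freqs, 20 * np.log10(spectrum), 1)[0]

def speech_present(frame, threshold=-0.005):
    """Flag speech when the spectral slope falls below an assumed threshold."""
    return spectral_slope(frame) < threshold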
[0059] Based on the detection of the speech signal, the masking module 220 of the audio engine 110 performs a complete spectral masking, a partial spectral masking, or both. For instance, the masking module 220 performs the complete spectral masking to mask the entire tonal frame when no speech signal is detected, and performs the partial spectral masking by masking a portion of a tonal frame when a speech signal is detected in the tonal frame. That is, in the partial spectral masking, the DTMF tone is masked while the speech signal is left unmasked or unaffected.
[0060] In an example, according to the complete spectral masking, the masking module 220 masks a DTMF tone in the tonal frame by determining the DTMF tone, replacing the DTMF tone with silence, and introducing a dummy low frequency tone in the tonal frame. In said example, the dummy low frequency tone may not belong to the DTMF dictionary and has a reference frequency of less than 1000 Hz. The dummy low frequency tone is introduced in the tonal frame to disguise the call operator of the service provider.
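A minimal Python sketch of this complete spectral masking step follows; the 500 Hz dummy frequency, its amplitude, and the 8000 Hz sampling rate are illustrative assumptions (any reference frequency below 1000 Hz and outside the DTMF set would serve).

import numpy as np

FS = 8000  # assumed sampling rate, Hz

def complete_mask(tonal_frame):
    """Replace the whole tonal frame with silence, then overlay a dummy
    low frequency tone to disguise the masking."""
    t = np.arange(len(tonal_frame)) / FS
    silence = np.zeros_like(tonal_frame)
    dummy_low = 0.1 * np.sin(2 * np.pi * 500 * t)  # 500 Hz is not a DTMF frequency
    return silence + dummy_low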
[0061] In an example, according to the partial spectral masking, the masking module 220 detects an overlapping of a speech signal and a DTMF tone by estimating boundaries of the DTMF tone and the two frequencies of the DTMF tone, before performing the masking. Then, the masking module 220 introduces a dummy high frequency tone in the tonal frame. In said example, the dummy high frequency tone may belong to the DTMF dictionary and has a reference frequency of greater than 1000 Hz. A basic assumption in the present context is that the frequencies of all DTMF tones are higher than any frequency of the speech signal. Taking this into consideration, the masking module 220 performs a high-band pass filtering of the frequencies greater than 1000 Hz, after the introduction of the dummy high frequency tone. Thereafter, the low frequency tone left in the tonal frame is cancelled by an exact tone cancellation method known in the art. Thereafter, a dummy low frequency tone may be introduced in the tonal frame to disguise the call operator of the service provider.
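The partial spectral masking sequence may be sketched as follows in Python, under stated assumptions: the 'high-band pass filtering' step is interpreted as suppressing all content above 1000 Hz, a least-squares tone fit stands in for the exact tone cancellation method known in the art, and the dummy tone frequencies and amplitudes are illustrative.

import numpy as np

FS = 8000  # assumed sampling rate, Hz

def partial_mask(tonal_frame, f_low_detected):
    n = len(tonal_frame)
    t = np.arange(n) / FS

    # Step 1: introduce a dummy high frequency tone from the DTMF dictionary.
    frame = tonal_frame + 0.1 * np.sin(2 * np.pi * 1336 * t)

    # Step 2: suppress all content above 1000 Hz, removing both the real and
    # the dummy high DTMF tones while keeping the (lower) speech band.
    spectrum = np.fft.rfft(frame)
    freqs = np.fft.rfftfreq(n, d=1.0 / FS)
    spectrum[freqs > 1000] = 0
    frame = np.fft.irfft(spectrum, n)

    # Step 3: cancel the remaining low DTMF tone by projecting the frame onto
    # that tone's sine/cosine pair and subtracting the fitted component.
    basis = np.vstack([np.cos(2 * np.pi * f_low_detected * t),
                       np.sin(2 * np.pi * f_low_detected * t)])
    coeffs, *_ = np.linalg.lstsq(basis.T, frame, rcond=None)
    frame = frame - basis.T @ coeffs

    # Step 4: overlay a dummy low tone (below 1000 Hz, outside the DTMF set).
    return frame + 0.05 * np.sin(2 * np.pi * 500 * t)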
[0062] Once masked, the audio engine 110 mixes the masked tonal frames with the other (original) audio frames to form an audio stream and provides that audio stream to the actual audio device driver 212. In one implementation of the present subject matter, the audio engine 110 introduces latency before transmitting the audio stream to the actual audio device driver 212. The latency introduced may correspond to a processing time for identification, determination, and masking of the DTMF tones of the tonal frames. The latency may allow for transmission of all the audio frames, including the audio frames which were not determined to be tonal frames and the processed tonal frames, in the same sequence without any disruption.
[0063] The actual audio device driver 212 processes all the audio frames received from the audio engine 110 to generate an audio output at the audio hardware 302. In an example, the audio output includes the speech signal, masked DTMF tones, and additional dummy tones.
[0064] In this way, the present subject matter facilitates methods and systems that identify a tonal frame in which a speech signal overlaps a DTMF tone based on spectral analysis, and mask or suppress only the DTMF tone present in such a tonal frame. Thus, the methods and the systems of the present subject matter perform accurate masking of the DTMF tone in a tonal frame based on the spectral analysis.
[0065] Fig. 4 illustrates a method 400 for masking of a dual tone multi-frequency (DTMF) tone in a voice based communication at a computing system, such as the computing system 104, in accordance with an embodiment of the present subject matter. The method 400 may be implemented in a variety of computing systems in several different ways. For example, the method 400, described herein, may be implemented using the computing system 104, as described above.
[0066] The method 400, completely or partially, may be described in the general context of computer executable instructions. Generally, computer executable instructions can include routines, programs, objects, components, data structures, procedures, modules, functions, etc., that perform particular functions or implement particular abstract data types. A person skilled in the art will readily recognize that steps of the method can be performed by programmed computers. Herein, some embodiments are also intended to cover program storage devices, for example, digital data storage media, which are machine or computer readable and encode machine-executable or computer-executable programs of instructions, wherein said instructions perform some or all of the steps of the described method 400.
[0067] The order in which the method 400 is described is not intended to be construed as a limitation, and any number of the described method blocks can be combined in any order to implement the method, or an alternative method. Additionally, individual blocks may be deleted from the method without departing from the spirit and scope of the subject matter described herein. Furthermore, the methods can be implemented in any suitable hardware, software, firmware, or combination thereof. It will be understood that even though the method 400 is described with reference to the computing system 104, the description may be extended to other systems as well.
[0068] At block 402, the computing system 104 receives an audio signal from a user device, say the user device 102-2, when a customer C2 wishes to perform electronic commerce with a service provider having the possession of the computing system 104. In an example, the audio signal received at the computing system 104 may include a speech signal and a DTMF tone. Further, in the context of the present subject matter, the computing system 104 can be implemented as an Interactive Voice Response (IVR) system.
[0069] At block 404, the received audio signal is fragmented into a plurality of audio frames. For example, the computing system 104 may fragment the received audio signal by 30 ms time slots.
[0070] At block 406, after fragmentation of the audio signal, the computing system 104 may be employed for identifying an audio frame from amongst the plurality of audio frames as a tonal frame. In an example, a tonal frame may be understood as an audio frame having a DTMF tone. The identification of the tonal frame is confirmed by employing the peaks count method, the peak variance method, or the sparse approximation method, or any combination of the three. By identifying tonal frames from amongst the plurality of audio frames using one of these methods, the computational load on the computing system 104 is substantially reduced in comparison to known techniques, such as band pass filtering, the Goertzel algorithm, or the Fast Fourier Transform (FFT). Thus, the methods and the systems of the present subject matter perform significantly faster processing of the audio frames as compared to the techniques known in the art.
[0071] At block 408, the computing system 104 performs detection of the speech signal in the tonal frame based on characteristic spectral shapes of the speech signal. However, in an alternative implementation, there are several techniques that can be used for the detection of the speech signal in the tonal frame. Examples include divergence distance between speech and noise histograms, the spectral slope, or a classifier trained on spectral coefficients.
[0072] At block 410, based on the detection of the speech signal in the tonal frame, the methods and the systems perform complete spectral masking or partial spectral masking of the tonal frame, or both. According to the complete spectral masking, the tonal frame is completely masked or removed when no speech signal is detected.
[0073] At block 412, the partial spectral masking is performed when the speech signal is detected. In the partial spectral masking, the DTMF tone present in the tonal frame is masked while the speech signal is not masked and left unaffected.
[0074] Although implementations for methods and systems for masking the DTMF tone in a computing system are described, it is to be understood that the present subject matter is not necessarily limited to the specific features or methods described. Rather, the specific features and methods are disclosed as implementations for masking the DTMF tone.

Documents

Orders

Section Controller Decision Date

Application Documents

# Name Date
1 SPEC IN.pdf 2018-08-11
2 FORM 5.pdf 2018-08-11
3 FORM 3.pdf 2018-08-11
4 FIGURES IN.pdf 2018-08-11
5 ABSTRACT.jpg 2018-08-11
6 3490-MUM-2013-FORM 26(2-1-2014).pdf 2018-08-11
7 3490-MUM-2013-FORM 1(30-4-2014).pdf 2018-08-11
8 3490-MUM-2013-CORRESPONDENCE(30-4-2014).pdf 2018-08-11
9 3490-MUM-2013-CORRESPONDENCE(2-1-2014).pdf 2018-08-11
10 3490-MUM-2013 FORM 18.pdf 2018-08-11
11 3490-MUM-2013-FER.pdf 2018-11-28
12 3490-MUM-2013-CLAIMS [05-04-2019(online)].pdf 2019-04-05
13 3490-MUM-2013-COMPLETE SPECIFICATION [05-04-2019(online)].pdf 2019-04-05
14 3490-MUM-2013-DRAWING [05-04-2019(online)].pdf 2019-04-05
15 3490-MUM-2013-FER_SER_REPLY [05-04-2019(online)].pdf 2019-04-05
16 3490-MUM-2013-OTHERS [05-04-2019(online)].pdf 2019-04-05
17 3490-MUM-2013-US(14)-HearingNotice-(HearingDate-03-08-2020).pdf 2020-07-02
18 3490-MUM-2013-Correspondence to notify the Controller [27-07-2020(online)].pdf 2020-07-27
19 3490-MUM-2013-Written submissions and relevant documents [18-08-2020(online)].pdf 2020-08-18
20 3490-MUM-2013-IntimationOfGrant19-02-2021.pdf 2021-02-19
21 3490-MUM-2013-PatentCertificate19-02-2021.pdf 2021-02-19
22 3490-MUM-2013-RELEVANT DOCUMENTS [27-09-2022(online)].pdf 2022-09-27
23 3490-MUM-2013-RELEVANT DOCUMENTS [26-09-2023(online)].pdf 2023-09-26

Search Strategy

1 error_28-11-2018.pdf

ERegister / Renewals

3rd: 22 Feb 2021

From 01/11/2015 - To 01/11/2016

4th: 22 Feb 2021

From 01/11/2016 - To 01/11/2017

5th: 22 Feb 2021

From 01/11/2017 - To 01/11/2018

6th: 22 Feb 2021

From 01/11/2018 - To 01/11/2019

7th: 22 Feb 2021

From 01/11/2019 - To 01/11/2020

8th: 22 Feb 2021

From 01/11/2020 - To 01/11/2021

9th: 22 Oct 2021

From 01/11/2021 - To 01/11/2022

10th: 31 Oct 2022

From 01/11/2022 - To 01/11/2023

11th: 31 Oct 2023

From 01/11/2023 - To 01/11/2024

12th: 30 Oct 2024

From 01/11/2024 - To 01/11/2025

13th: 31 Oct 2025

From 01/11/2025 - To 01/11/2026