
Systems And Methods For Performing Single Input Based Multifactor Authentication

Abstract: This disclosure relates to systems and methods for performing single input based multifactor authentication. Multifactor authentication refers to an authentication system with enhanced security which utilizes more than one form of authentication to validate the identity of a user. Conventionally, multifactor authentication is a serial process in which authentication information is input multiple times, which introduces delay into the execution of the authentication process. The method of the present disclosure addresses unresolved problems of multifactor authentication by enabling two or more factors to be assessed simultaneously, making the authentication process faster without sacrificing its robustness. Embodiments of the present disclosure analyze the spoken response of the user to a dynamically generated question for multifactor authentication. The system of the present disclosure is modulation, age, and language independent, enables remote authentication, and does not require any additional infrastructure, leading to reduced cost.


Patent Information

Application #
Filing Date
15 March 2022
Publication Number
38/2023
Publication Type
INA
Invention Field
COMMUNICATION
Status
Parent Application
Patent Number
Legal Status
Grant Date
24 January 2025
Renewal Date

Applicants

Tata Consultancy Services Limited
Nirmal Building, 9th Floor, Nariman Point, Mumbai - 400021, Maharashtra, India

Inventors

1. KOPPARAPU, Sunil Kumar
Tata Consultancy Services Limited, Yantra Park, Opp Voltas HRD Training Center, Subhash Nagar, Pokhran Road No. 2, Thane - 400601, Maharashtra, India
2. SHAH, Bimal Pravin
Tata Consultancy Services Limited, Olympus - A, Opp Rodas Enclave, Hiranandani Estate, Ghodbunder Road, Thane - 400607, Maharashtra, India

Specification

Claims:

1. A processor implemented method, comprising:
receiving (202), via one or more hardware processors, an initial identity information corresponding to a valid user requesting a service;
obtaining (204), via the one or more hardware processors, a plurality of attributes and a plurality of reference voice features corresponding to the valid user that are pre-stored in a dynamically updated system database;
dynamically generating (206), via the one or more hardware processors, a question based on the plurality of attributes corresponding to the valid user that are pre-stored in the dynamically updated system database using a probabilistic template based approach;
generating (208), via the one or more processors, an audible recitation of the dynamically generated question in a first language from a set of languages identified from the plurality of attributes, wherein the audible recitation includes an added environmental noise;
requesting (210), via the one or more processors, the valid user to provide a spoken response to the audible recitation of the dynamically generated question in a specific language among the set of languages, wherein the spoken response comprises a plurality of incoming voice features corresponding to the valid user and a knowledge possessed by the valid user pertaining to the plurality of attributes that are pre-stored in the dynamically updated system database;
transforming (212), via the one or more hardware processors, the plurality of incoming voice features and the plurality of reference voice features to maximize a separation between contours and a subset of high energy points in a spectrogram of the spoken response and a synthetically generated response; and
determining (214), via the one or more hardware processors, one or more authentication metrics to verify a final identity of the valid user by comparing (i) the plurality of incoming transformed voice features with the plurality of transformed reference voice features and (ii) the knowledge possessed by the valid user comprised in the spoken response with the plurality of attributes corresponding to the valid user that are pre-stored in the dynamically updated system database.

2. The processor implemented method of claim 1, wherein the step of receiving the initial identity information comprises a comparison of a received user identification number to a user identification number stored in a system database.

3. The processor implemented method of claim 1, wherein the plurality of attributes comprises speech properties of the valid user, personal identification information of the valid user, and information related to one or more past actions relevant to the service availed by the valid user.

4. The processor implemented method of claim 1, wherein the plurality of incoming voice features and the plurality of reference voice features comprise voice signatures.

5. The processor implemented method of claim 1, wherein the one or more metrics include quickness to respond to a dynamically generated question, pronunciation quality, and proximity of the speech properties of the spoken response with the speech properties comprised in the plurality of attributes corresponding to the valid user that are pre-stored in the dynamically updated system database.

6.
A system (100), comprising:
a memory (102) storing instructions;
one or more communication interfaces (106); and
one or more hardware processors (104) coupled to the memory (102) via the one or more communication interfaces (106), wherein the one or more hardware processors (104) are configured by the instructions to:
receive, via one or more hardware processors, an initial identity information corresponding to a valid user requesting a service;
obtain, via the one or more hardware processors, a plurality of attributes and a plurality of reference voice features corresponding to the valid user that are pre-stored in a dynamically updated system database;
dynamically generate, via the one or more hardware processors, a question based on the plurality of attributes corresponding to the valid user that are pre-stored in the dynamically updated system database using a probabilistic template based approach;
generate, via the one or more processors, an audible recitation of the dynamically generated question in a first language from a set of languages identified from the plurality of attributes, wherein the audible recitation includes an added environmental noise;
request, via the one or more processors, the valid user to provide a spoken response to the audible recitation of the dynamically generated question in a specific language among the set of languages, wherein the spoken response comprises a plurality of incoming voice features corresponding to the valid user and a knowledge possessed by the valid user pertaining to the plurality of attributes that are pre-stored in the dynamically updated system database;
transform, via the one or more hardware processors, the plurality of incoming voice features and the plurality of reference voice features to maximize a separation between contours and a subset of high energy points in a spectrogram of the spoken response and a synthetically generated response; and
determine, via the one or more hardware processors, one or more authentication metrics to verify a final identity of the valid user by comparing (i) the plurality of incoming transformed voice features with the plurality of transformed reference voice features and (ii) the knowledge possessed by the valid user comprised in the spoken response with the plurality of attributes corresponding to the valid user that are pre-stored in the dynamically updated system database.

7. The system of claim 6, wherein the step of receiving the initial identity information comprises a comparison of a received user identification number to a user identification number stored in a system database.

8. The system of claim 6, wherein the plurality of attributes comprises speech properties of the valid user, personal identification information of the valid user, and information related to one or more past actions relevant to the service availed by the valid user.

9. The system of claim 6, wherein the plurality of incoming voice features and the plurality of reference voice features comprise voice signatures.

10. The system of claim 6, wherein the one or more metrics include quickness to respond to a dynamically generated question, pronunciation quality, and proximity of the speech properties of the spoken response with the speech properties comprised in the plurality of attributes corresponding to the valid user that are pre-stored in the dynamically updated system database.

Description:

FORM 2
THE PATENTS ACT, 1970 (39 of 1970) & THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See Section 10 and Rule 13)
Title of invention: SYSTEMS AND METHODS FOR PERFORMING SINGLE INPUT BASED MULTIFACTOR AUTHENTICATION
Applicant: Tata Consultancy Services Limited, a company incorporated in India under the Companies Act, 1956, having address: Nirmal Building, 9th Floor, Nariman Point, Mumbai 400021, Maharashtra, India
The following specification particularly describes the invention and the manner in which it is to be performed.
TECHNICAL FIELD

The disclosure herein generally relates to the field of authentication, and, more particularly, to systems and methods for performing single input based multifactor authentication.

BACKGROUND

With the growth of internet connectivity and network applications in recent years, the use of the internet for business functions has increased. For example, applications involving online authentication, such as online banking and other related applications, are gaining momentum. Authentication refers to a process by which a user makes his or her identity known to a system or application which the user is attempting to access, and occasionally, also the process by which the user verifies the identity of the system being accessed. Authentication remains a persistent problem in the information technology industry. With the proliferation of untrusted applications and untrusted networks, authentication of a user based on a single authentication mechanism becomes unreliable. For better reliability, the concept of multifactor authentication becomes useful. "Multifactor authentication" refers to an authentication system which utilizes more than one form of authentication to validate the identity of a user. Conventionally, the process of multifactor authentication is a serial process which involves inputting authentication information multiple times. However, with conventional approaches, delay is introduced in the execution of the multifactor authentication process.

SUMMARY

Embodiments of the present disclosure present technological improvements as solutions to one or more of the above-mentioned technical problems recognized by the inventors in conventional systems. For example, in one embodiment, a processor implemented method is provided.
The method comprising: receiving, via one or more hardware processors, an initial identity information corresponding to a valid user requesting a service; obtaining, via the one or more hardware processors, a plurality of attributes and a plurality of reference voice features corresponding to the valid user that are pre-stored in a dynamically updated system database; dynamically generating, via the one or more hardware processors, a question based on the plurality of attributes corresponding to the valid user that are pre-stored in the dynamically updated system database using a probabilistic template based approach; generating, via the one or more processors, an audible recitation of the dynamically generated question in a first language from a set of languages identified from the plurality of attributes, wherein the audible recitation includes an added environmental noise; requesting, via the one or more processors, the valid user to provide a spoken response to the audible recitation of the dynamically generated question in a specific language among the set of languages, wherein the spoken response comprises a plurality of incoming voice features corresponding to the valid user and a knowledge possessed by the valid user pertaining to the plurality of attributes that are pre-stored in the dynamically updated system database; transforming, via the one or more hardware processors, the plurality of incoming voice features and the plurality of reference voice features to maximize a separation between contours and a subset of high energy points in a spectrogram of the spoken response and a synthetically generated response; and determining, via the one or more hardware processors, one or more authentication metrics to verify a final identity of the valid user by comparing (i) the plurality of incoming transformed voice features with the plurality of transformed reference voice features and (ii) the knowledge possessed by the valid user comprised in the spoken response with the plurality of attributes corresponding to the valid user that are pre-stored in the dynamically updated system database.

In another aspect, a system is provided. The system comprising a memory storing instructions; one or more communication interfaces; and one or more hardware processors coupled to the memory via the one or more communication interfaces, wherein the one or more hardware processors are configured by the instructions to: receive, via one or more hardware processors, an initial identity information corresponding to a valid user requesting a service; obtain, via the one or more hardware processors, a plurality of attributes and a plurality of reference voice features corresponding to the valid user that are pre-stored in a dynamically updated system database; dynamically generate, via the one or more hardware processors, a question based on the plurality of attributes corresponding to the valid user that are pre-stored in the dynamically updated system database using a probabilistic template based approach; generate, via the one or more processors, an audible recitation of the dynamically generated question in a first language from a set of languages identified from the plurality of attributes, wherein the audible recitation includes an added environmental noise; request, via the one or more processors, the valid user to provide a spoken response to the audible recitation of the dynamically generated question in a specific language among the set of languages, wherein the spoken response comprises a plurality of incoming voice features corresponding to the valid user and a knowledge possessed by the valid user pertaining to the plurality of attributes that are pre-stored in the dynamically updated system database; transform, via the one or more hardware processors, the plurality of incoming voice features and the plurality of reference voice features to maximize a separation between contours and a subset of high energy points in a spectrogram of the spoken response and a synthetically generated response; and determine, via the one or more hardware processors, one or more authentication metrics to verify a final identity of the valid user by comparing (i) the plurality of incoming transformed voice features with the plurality of transformed reference voice features and (ii) the knowledge possessed by the valid user comprised in the spoken response with the plurality of attributes corresponding to the valid user that are pre-stored in the dynamically updated system database.

In yet another aspect, a non-transitory computer readable medium is provided. The non-transitory computer readable medium, comprising: receiving, via one or more hardware processors, an initial identity information corresponding to a valid user requesting a service; obtaining, via the one or more hardware processors, a plurality of attributes and a plurality of reference voice features corresponding to the valid user that are pre-stored in a dynamically updated system database; dynamically generating, via the one or more hardware processors, a question based on the plurality of attributes corresponding to the valid user that are pre-stored in the dynamically updated system database using a probabilistic template based approach; generating, via the one or more processors, an audible recitation of the dynamically generated question in a first language from a set of languages identified from the plurality of attributes, wherein the audible recitation includes an added environmental noise; requesting, via the one or more processors, the valid user to provide a spoken response to the audible recitation of the dynamically generated question in a specific language among the set of languages, wherein the spoken response comprises a plurality of incoming voice features corresponding to the valid user and a knowledge possessed by the valid user pertaining to the plurality of attributes that are pre-stored in the dynamically updated system database; transforming, via the one or more hardware processors, the plurality of incoming voice features and the plurality of reference voice features to maximize a separation between contours and a subset of high energy points in a spectrogram of the spoken response and a synthetically generated response; and determining, via the one or more hardware processors, one or more authentication metrics to verify a final identity of the valid user by comparing (i) the plurality of incoming transformed voice features with the plurality of transformed reference voice features and (ii) the knowledge possessed by the valid user comprised in the spoken response with the plurality of attributes corresponding to the valid user that are pre-stored in the dynamically updated system database.

In accordance with an embodiment of the present disclosure, the step of receiving the initial identity information comprises a comparison of a received user identification number to a user identification number stored in a system database.

In accordance with an embodiment of the present disclosure, the plurality of attributes comprises speech properties of the valid user, personal identification information of the valid user, and information related to one or more past actions relevant to the service availed by the valid user.

In accordance with an embodiment of the present disclosure, the plurality of incoming voice features and the plurality of reference voice features comprise voice signatures.

In accordance with an embodiment of the present disclosure, the one or more metrics include quickness to respond to a dynamically generated question, pronunciation quality, and proximity of the speech properties of the spoken response with the speech properties comprised in the plurality of attributes corresponding to the valid user that are pre-stored in the dynamically updated system database.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.

BRIEF DESCRIPTION OF THE DRAWINGS

The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles:

FIG. 1 illustrates an exemplary system for performing single input based multifactor authentication according to some embodiments of the present disclosure.

FIG. 2 depicts an exemplary flow diagram illustrating a method for performing single input based multifactor authentication according to some embodiments of the present disclosure.

DETAILED DESCRIPTION OF EMBODIMENTS

Exemplary embodiments are described with reference to the accompanying drawings. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. While examples and features of disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the scope of the disclosed embodiments. It is intended that the following detailed description be considered as exemplary only, with the true scope being indicated by the embodiments described herein.

"Multifactor authentication" refers to an authentication system with enhanced security which utilizes more than one form of authentication to validate the identity of a user. Conventionally, the process of multifactor authentication is a serial process which involves inputting authentication information multiple times. However, with conventional approaches, delay is introduced in the execution of the multifactor authentication process.
For example, several multifactor authentication systems allow a user to input his/her login credentials as a first factor of authentication for accessing an application. The system asks the user to provide a second factor of authentication, in the form of an authentication code, a biometric, or other means, only once the first factor of authentication is completed. In multifactor authentication, a person can be verified based on what the person has (e.g., smart card, mobile phone), what the person knows (e.g., password, his/her personal details), and what the person is (e.g., fingerprint, voice, face). Here, "Has", "Knows", and "Is" are three factors of authentication. For example, smart-card ("Has") + PIN ("Knows") or smart-card ("Has") + fingerprint ("Is") is very common, with the underlying assumption that the person is at a certain location where the smart-card or fingerprint reader is present (say, an office entrance). However, in the case of remote verification, such as in a pandemic scenario, some of these multifactor authentications become infeasible because of the location of the person, and often a Rivest-Shamir-Adleman (RSA) token ("Has") + password/PIN ("Knows") is the go-to multifactor authentication adopted. Further, all these have an economic implication in terms of the RSA device, fingerprint scanner, etc. The present disclosure addresses unresolved problems of multifactor authentication by enabling two or more factors to be assessed simultaneously, in a parallel manner, making the authentication process faster without sacrificing the robustness of the authentication process. Embodiments of the present disclosure provide systems and methods for performing single input based multifactor authentication without using any special hardware infrastructure. In one non-limiting example, a person to be verified enters his user id (say, an employee number) on a web-page.
The system of the present disclosure generates a natural language sentence (say, "My father was born in the year 1970") and presents an incomplete sentence (say, "My father was born in the year ") in distorted text to the person. The person is required to complete the sentence and speak it to get authenticated. The distorted text ensures that the respondent is a person (a Turing test). The spoken voice verifies the "Is" factor, and the spoken content verifies the "Knows" factor, making it a multifactor authentication. In another example, a person calls a phone-banking system. His registered mobile number (RMN) enables the system of the present disclosure to generate a complete sentence (say, "My father was born in the year 1970"), speak an incomplete sentence (say, "My father was born in the year"), and ask the person to complete the sentence. Again, the spoken voice verifies the "Is" factor, and the spoken content verifies the "Knows" factor, making it a multifactor authentication.

In an embodiment, there are scenarios where, with the growing age of the valid user, changes in voice may occur. Further, in certain cases, a person may acquire fluency in a foreign language, resulting in a change in the accent of the person. The system of the present disclosure is capable of dealing with such scenarios. Thus, the system of the present disclosure is age independent, modulation independent, and language independent.

Referring now to the drawings, and more particularly to FIGS. 1 and 2, where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments, and these embodiments are described in the context of the following exemplary system and/or method. FIG. 1 illustrates an exemplary system 100 for performing single input based multifactor authentication according to some embodiments of the present disclosure.
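The single-input flow described above (complete the sentence, speak it, verify both factors from one utterance) can be sketched as follows. This is a minimal illustration only: the database contents, the feature-vector comparison, and the tolerance are all assumptions made for the sketch, not the disclosed implementation.

```python
# Illustrative sketch: one spoken input checks both the "Knows" and the "Is"
# factors. USER_DB, the voiceprint representation, and `tol` are assumptions.

USER_DB = {
    "emp-1042": {"father_birth_year": "1970", "voiceprint": [0.12, 0.84, 0.33]},
}

def generate_sentence(user_id):
    """Build the complete sentence and the incomplete prompt from stored attributes."""
    year = USER_DB[user_id]["father_birth_year"]
    complete = f"My father was born in the year {year}"
    prompt = "My father was born in the year "
    return complete, prompt

def verify(user_id, spoken_text, incoming_voiceprint, tol=0.1):
    """Single spoken input: check content ('Knows') and voice ('Is') together."""
    complete, _ = generate_sentence(user_id)
    knows_ok = spoken_text.strip() == complete               # "Knows" factor
    ref = USER_DB[user_id]["voiceprint"]
    dist = max(abs(a - b) for a, b in zip(ref, incoming_voiceprint))
    is_ok = dist < tol                                       # "Is" factor
    return knows_ok and is_ok

print(verify("emp-1042", "My father was born in the year 1970", [0.13, 0.85, 0.30]))
```

Both factors fail or pass from the same utterance, which is what makes the parallel assessment possible.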
In an embodiment, the system 100 includes one or more hardware processors 104, communication interface device(s) or input/output (I/O) interface(s) 106 (also referred as interface(s)), and one or more data storage devices or memory 102 operatively coupled to the one or more hardware processors 104. The one or more processors 104 may be one or more software processing components and/or hardware processors. In an embodiment, the hardware processors can be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the processor(s) is/are configured to fetch and execute computer-readable instructions stored in the memory. In an embodiment, the system 100 can be implemented in a variety of computing systems, such as laptop computers, notebooks, hand-held devices, workstations, mainframe computers, servers, a network cloud and the like. The I/O interface device(s) 106 can include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, and the like and can facilitate multiple communications within a wide variety of networks N/W and protocol types, including wired networks, for example, LAN, cable, etc., and wireless networks, such as WLAN, cellular, or satellite. In an embodiment, the I/O interface device(s) can include one or more ports for connecting a number of devices to one another or to another server. The memory 102 may include any computer-readable medium known in the art including, for example, volatile memory, such as static random access memory (SRAM) and dynamic random access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes. 
In an embodiment, a database 108 is comprised in the memory 102, wherein the system database 108 stores the plurality of input data/output, preprocessed data, personal identity information of one or more users requesting a service, a plurality of attributes and a plurality of reference voice features corresponding to the valid user, a plurality of incoming voice features, a plurality of dynamically generated questions and data associated with the plurality of dynamically generated questions, and/or the like. In an embodiment, the system database 108 is dynamically updated. The memory 102 further comprises (or may further comprise) information pertaining to input(s)/output(s) of each step performed by the systems and methods of the present disclosure. In other words, input(s) fed at each step and output(s) generated at each step are comprised in the memory 102 and can be utilized in further processing and analysis.

FIG. 2, with reference to FIG. 1, depicts an exemplary flow chart illustrating a method 200 for performing single input based multifactor authentication, using the system 100 of FIG. 1, in accordance with an embodiment of the present disclosure. Referring to FIG. 2, in an embodiment, the system(s) 100 comprises one or more data storage devices or the memory 102 operatively coupled to the one or more hardware processors 104 and is configured to store instructions for execution of steps of the method by the one or more processors 104. The steps of the method 200 of the present disclosure will now be explained with reference to components of the system 100 of FIG. 1, and the flow diagram as depicted in FIG. 2. In an embodiment, at step 202 of the present disclosure, the one or more hardware processors 104 are configured to receive an initial identity information corresponding to a valid user requesting a service.
In an embodiment, the service requested by the valid user may include, but is not limited to, accessing an online banking application, a financial transaction related application, retail based applications, and/or the like. In the context of the present disclosure, the expression 'valid user' refers to an actual existing user or an actual new user who is trying to access an application for requesting a service, and not an imposter. In an embodiment, the expressions 'valid user' and 'user' are used interchangeably. In an embodiment, the step of receiving the initial identity information comprises a comparison of a received user identification number to a user identification number stored in a system database. In an embodiment, the user identification number may include, but is not limited to, a registered mobile number, a user id such as an employee id, and/or the like. For example, a user X calls a bank from his mobile number and the system of the present disclosure reads the received mobile number. The received mobile number is mapped to the registered mobile numbers stored in the system database. Based on the mapping, the name of the person against the registered mobile number is identified from the system database.

In an embodiment, at step 204 of the present disclosure, the one or more hardware processors 104 are configured to obtain a plurality of attributes and a plurality of reference voice features corresponding to the valid user that are pre-stored in a dynamically updated system database. In an embodiment, the plurality of attributes comprises speech properties of the valid user, personal identification information of the valid user, and information related to one or more past actions relevant to the service availed by the valid user.
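The mobile-number mapping of step 202 described above amounts to a lookup of the received identifier against pre-stored identifiers. A minimal sketch, with purely illustrative numbers and names that are not from the disclosure:

```python
# Illustrative sketch of step 202: the received mobile number (or user id)
# is compared against identifiers pre-stored in the system database to
# identify the valid user. REGISTERED is assumed sample data.

REGISTERED = {
    "+91-9820000001": "User X",
    "+91-9820000002": "User Y",
}

def initial_identity(received_number):
    """Return the registered name for the caller, or None for unknown numbers."""
    return REGISTERED.get(received_number)

print(initial_identity("+91-9820000001"))
```

An unknown number yields no initial identity, and the authentication flow would stop before question generation.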
In an embodiment, the speech properties of the valid user may include, but are not limited to, response time, pitch variation, inter syllable duration, pronunciation quality, percentage of pauses, percentage of filler words, shimmer, fluency, and speaking rate. In an embodiment, the personal identification information may include, but is not limited to, the date of birth (DoB) of the valid user, the DoB of the user's wife, the names and DoB of the mother and father of the valid user, educational background, the number of languages spoken and understood by the valid user, and/or the like. In an embodiment, the information related to one or more past actions relevant to the service availed by the valid user may include, but is not limited to, purchase history, further including the date of the last purchase of an item, which item was purchased, and the quantity of the item purchased by the user.

In an embodiment, at step 206 of the present disclosure, the one or more hardware processors 104 are configured to dynamically generate a question based on the plurality of attributes corresponding to the valid user that are pre-stored in the dynamically updated system database using a probabilistic template based approach. It must be appreciated that the probabilistic template based approach is known in the art. In an embodiment, question generation is dynamic since it considers the last updated information stored in the system database. For example, a dynamic question is generated for the user X, initially verified based on his registered mobile number, by analyzing the past four transaction records of the user X. In an embodiment, the process of dynamic question generation is initiated by the system of the present disclosure when it is known that user X needs to be verified. The system of the present disclosure picks up the past four records (say N=4 records) associated with the user X, which may include, but are not limited to, credit card history and commerce transaction history.
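One way to sketch such probabilistic, template-based question generation over the N=4 past records is shown below. The record fields, the question templates, and the equal partitioning of [0, 1) for the random number r_1 are all assumptions made for illustration; the disclosure only states that a random number drives the selection.

```python
import random

# Illustrative sketch: a random number r_1 in [0, 1) picks one of the
# user's N=4 most recent records, and a question template is filled from
# that record. RECORDS and TEMPLATES are assumed sample data.

RECORDS = [
    {"item": "Parker pen", "price": 257, "shop": "Haldiram and Sons"},
    {"item": "notebook", "price": 120, "shop": "Crossword"},
    {"item": "umbrella", "price": 499, "shop": "D-Mart"},
    {"item": "coffee mug", "price": 199, "shop": "Crossword"},
]

TEMPLATES = [
    "How much did you pay for your {item}?",
    "From which shop did you buy your {item}?",
]

def generate_question(records, templates, rng=random.random):
    """Select a record by partitioning [0, 1) into N equal ranges for r_1."""
    r1 = rng()
    record = records[int(r1 * len(records))]   # r_1 in [k/N, (k+1)/N) -> record k
    template = templates[int(rng() * len(templates))]
    return record, template.format(**record)

record, question = generate_question(RECORDS, TEMPLATES)
print(question)
```

Because the records are the most recently updated ones in the database, each authentication attempt can produce a different question.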
The system of the present disclosure selects one of the N=4 transactions using the probabilistic template based approach, explained using the below example. In an embodiment, a random number r_1 between 0 and 1 is generated by the system of the present disclosure. If the value of r_1 falls in a given sub-range of [0, 1], the transaction associated with that sub-range is selected and a question about the selected transaction is posed to the user. The audio segment x_what (t), corresponding to the expected answer content within the spoken response x(t), is extracted between an estimated start time µ^s and an estimated end time µ^e as shown in equation (1) provided below:

x_what (t) = x(t)·(Θ(t−µ^s) − Θ(t−µ^e)) (1)

where Θ is a unit step function such that Θ(t−µ)=0 for t≤µ and Θ(t−µ)=1 for t>µ. Further, x_who (t) is determined by eliminating the x_what (t) part from the spoken response x(t) as shown in equation (2) provided below:

x_who (t) = x(t) − x_what (t) (2)

In an embodiment, at step 212 of the present disclosure, the one or more hardware processors 104 are configured to transform the plurality of incoming voice features and the plurality of reference voice features to maximize a separation between contours and a subset of high energy points in a spectrogram of the spoken response and a synthetically generated response. In an embodiment, the step of transformation is performed to analyze an audio segment x_what (t) which is a part of the spoken response x(t). The step of transformation is better understood by way of the following non-limiting example, provided as exemplary explanation. For example, a scenario is considered where a person ‘Ram’ has purchased a Parker pen from the “Haldiram and Sons” shop in Bandra on 01-Feb-2022 at 1800 hours using his credit card and has paid 257 rupees. In this scenario, the system of the present disclosure needs to ask a question to ‘Ram’. For example, the system of the present disclosure decides to ask the price of the item purchased in Hindi and expects a response in Telugu, since the system of the present disclosure is aware that ‘Ram’ can understand Hindi but speaks Telugu. So, the system of the present disclosure generates the question “How much did you pay for your Parker pen yesterday?” in Hindi and expects the answer “two hundred and fifty seven” in Telugu. Let the audio response of ‘Ram’ be x(t) = “I paid two hundred and fifty seven rupees yesterday for a pen” in Telugu.
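The segment decomposition described above can be sketched as follows, treating the spoken response as a list of samples indexed by time, with x_what(t) carved out between a start time µ^s and an end time µ^e by a unit step mask, and x_who(t) obtained as the remainder per equation (2). The toy signal values and sample-index times are assumptions for illustration.

```python
def unit_step(t, mu):
    """Unit step: 0 for t <= mu, 1 for t > mu."""
    return 1.0 if t > mu else 0.0

def split_response(x, mu_s, mu_e):
    """Split x(t) into (x_what, x_who): the answer segment and the remainder."""
    # Mask keeps samples between the start time mu_s and end time mu_e.
    x_what = [xi * (unit_step(t, mu_s) - unit_step(t, mu_e))
              for t, xi in enumerate(x)]
    # Equation (2): x_who(t) = x(t) - x_what(t).
    x_who = [xi - wi for xi, wi in zip(x, x_what)]
    return x_what, x_who

x = [0.1, 0.5, 0.9, 0.4, 0.2, 0.05]          # toy spoken-response samples
x_what, x_who = split_response(x, mu_s=1, mu_e=4)
print(x_what)  # non-zero only inside the answer segment
print(x_who)   # non-zero only outside it
```

Note that x_what(t) + x_who(t) reconstructs x(t) exactly, so no part of the spoken response is discarded; the two segments feed the "what you know" and "who you are" checks respectively.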
First, the system of the present disclosure determines if x(t) is in Telugu using a spoken language identification module. It must be appreciated that any spoken language identification module known in the art may be used for language identification. Further, using a Telugu Automatic Speech Recognition approach, x(t) is converted into Telugu text and the start time µ^s and the end time µ^e of the expected answer within x(t) are estimated. Speech utterances between µ^s and µ^e represent the audio segment x_what (t). The system of the present disclosure then computes a spectrogram of the audio segment x_what (t). Further, a contour plot from the spectrogram and the top 5 percent high energy points in the spectrogram of the audio segment x_what (t) are extracted. Furthermore, a synthetically generated response s_what (t) is generated by the system of the present disclosure using a personalized text to speech engine in ‘Ram’s’ voice. The system 100 further computes the spectrogram of the synthetically generated response s_what (t), and the contour plot from the spectrogram and the top 5 percent high energy points in the spectrogram of the synthetically generated response s_what (t) are extracted. Furthermore, a distance d, indicative of the separation between the contour plots and the top 5% high energy points in the spectrograms of x_what (t) and s_what (t), is determined. If the distance does not exceed a first predetermined threshold, as shown in equation (3) below, the system of the present disclosure verifies what ‘Ram’ knows. This means that the knowledge possessed by ‘Ram’ is verified.

d(s_what (t), x_what (t)) < β (3)

Here, β is the first pre-determined threshold that is selected depending on the application. In an embodiment, the system 100 performs analysis of the audio segment x_who (t). For the x_who (t) analysis, the system 100 of the present disclosure accesses an a priori feature vector of ‘Ram’, denoted {f_v (Ram)}.
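The "what the user knows" check above can be sketched as follows. The naive DFT spectrogram, the frame length, and the Jaccard-style distance over the top-5% high-energy time-frequency points are all assumptions for illustration; the disclosure does not fix a particular spectrogram implementation or distance measure.

```python
import math

def spectrogram(x, frame=8):
    """Magnitude spectrogram via a naive DFT over non-overlapping frames."""
    frames = [x[i:i + frame] for i in range(0, len(x) - frame + 1, frame)]
    mags = []
    for f in frames:
        row = []
        for k in range(frame // 2):
            re = sum(s * math.cos(-2 * math.pi * k * n / frame) for n, s in enumerate(f))
            im = sum(s * math.sin(-2 * math.pi * k * n / frame) for n, s in enumerate(f))
            row.append(math.hypot(re, im))
        mags.append(row)
    return mags

def top_energy_points(spec, fraction=0.05):
    """Keep the top-5% highest-energy (time, frequency) points of the spectrogram."""
    pts = [(t, k, v) for t, row in enumerate(spec) for k, v in enumerate(row)]
    pts.sort(key=lambda p: p[2], reverse=True)
    keep = max(1, int(len(pts) * fraction))
    return {(t, k) for t, k, _ in pts[:keep]}

def distance(x_what, s_what):
    """Assumed distance d: 1 minus the overlap of the two high-energy point sets."""
    a = top_energy_points(spectrogram(x_what))
    b = top_energy_points(spectrogram(s_what))
    return 1.0 - len(a & b) / len(a | b)   # 0 when the point sets coincide

tone = [math.sin(2 * math.pi * 0.25 * n) for n in range(64)]  # toy audio segment
beta = 0.5                                 # hypothetical first threshold
print(distance(tone, tone) < beta)         # an identical response passes the check
```

A spoken response matching the personalized synthetic rendering yields a small distance d below β, so the knowledge factor is verified; a mismatched answer pushes the high-energy points apart and the check fails.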
From the spoken response x(t), the system of the present disclosure extracts the audio segment x_who (t) using equation (2). Further, a feature vector f_v is extracted from the audio segment x_who (t). Here, the feature vector f_v comprises features such as syllables/sec, pauses/sec, pitch variations, inter-syllable distance, pronunciation quality, jitter, shimmer, and/or the like. Further, a cosine distance c_d between f_v and {f_v (Ram)} is determined. If the cosine distance c_d does not exceed a second predetermined threshold, as shown in equation (4) below, the system of the present disclosure verifies that the person is ‘Ram’.

c_d (f_v, {f_v (Ram)})
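The "who the user is" check can be sketched as a cosine distance between the incoming feature vector f_v and the enrolled vector {f_v(Ram)}. The feature values and the threshold value below are hypothetical; only the cosine-distance comparison itself comes from the disclosure.

```python
import math

def cosine_distance(u, v):
    """Cosine distance: 1 minus the cosine of the angle between u and v."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / norm

# Hypothetical enrolled and incoming feature vectors
# (e.g. syllables/sec, pauses/sec, pitch variation, jitter, shimmer).
f_v_ram = [4.2, 0.8, 12.5, 0.11, 0.92]   # a priori features of 'Ram'
f_v     = [4.1, 0.9, 12.0, 0.10, 0.90]   # features from the x_who(t) segment
threshold = 0.01                          # hypothetical second threshold

print(cosine_distance(f_v, f_v_ram) < threshold)  # speaker accepted as 'Ram'
```

Because the knowledge check on x_what(t) and the speaker check on x_who(t) operate on disjoint parts of the same single utterance, both factors are assessed from one spoken input, which is the core of the single-input multifactor scheme.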

Documents

Application Documents

# Name Date
1 202221013913-STATEMENT OF UNDERTAKING (FORM 3) [15-03-2022(online)].pdf 2022-03-15
2 202221013913-REQUEST FOR EXAMINATION (FORM-18) [15-03-2022(online)].pdf 2022-03-15
3 202221013913-FORM 18 [15-03-2022(online)].pdf 2022-03-15
4 202221013913-FORM 1 [15-03-2022(online)].pdf 2022-03-15
5 202221013913-FIGURE OF ABSTRACT [15-03-2022(online)].jpg 2022-03-15
6 202221013913-DRAWINGS [15-03-2022(online)].pdf 2022-03-15
7 202221013913-DECLARATION OF INVENTORSHIP (FORM 5) [15-03-2022(online)].pdf 2022-03-15
8 202221013913-COMPLETE SPECIFICATION [15-03-2022(online)].pdf 2022-03-15
9 202221013913-FORM-26 [11-04-2022(online)].pdf 2022-04-11
10 Abstract1.jpg 2022-07-13
11 202221013913-Proof of Right [05-09-2022(online)].pdf 2022-09-05
12 202221013913-CORRESPONDENCE(IPO)-(WIPO DAS)-22-02-2023.pdf 2023-02-22
13 202221013913-FORM 3 [30-05-2023(online)].pdf 2023-05-30
14 202221013913-Request Letter-Correspondence [14-07-2023(online)].pdf 2023-07-14
15 202221013913-Power of Attorney [14-07-2023(online)].pdf 2023-07-14
16 202221013913-Form 1 (Submitted on date of filing) [14-07-2023(online)].pdf 2023-07-14
17 202221013913-Covering Letter [14-07-2023(online)].pdf 2023-07-14
18 202221013913-CERTIFIED COPIES TRANSMISSION TO IB [14-07-2023(online)].pdf 2023-07-14
19 202221013913-FER.pdf 2024-01-16
20 202221013913-FORM 3 [04-04-2024(online)].pdf 2024-04-04
21 202221013913-FER_SER_REPLY [23-05-2024(online)].pdf 2024-05-23
22 202221013913-COMPLETE SPECIFICATION [23-05-2024(online)].pdf 2024-05-23
23 202221013913-CLAIMS [23-05-2024(online)].pdf 2024-05-23
24 202221013913-US(14)-HearingNotice-(HearingDate-18-12-2024).pdf 2024-11-14
25 202221013913-Correspondence to notify the Controller [11-12-2024(online)].pdf 2024-12-11
26 202221013913-FORM-26 [17-12-2024(online)].pdf 2024-12-17
27 202221013913-FORM-26 [17-12-2024(online)]-1.pdf 2024-12-17
28 202221013913-Written submissions and relevant documents [30-12-2024(online)].pdf 2024-12-30
29 202221013913-RELEVANT DOCUMENTS [02-01-2025(online)].pdf 2025-01-02
30 202221013913-PETITION UNDER RULE 137 [02-01-2025(online)].pdf 2025-01-02
31 202221013913-PatentCertificate24-01-2025.pdf 2025-01-24
32 202221013913-IntimationOfGrant24-01-2025.pdf 2025-01-24

Search Strategy

1 SearchHistory_202221013913E_15-01-2024.pdf

ERegister / Renewals

3rd: 03 Mar 2025

From 15/03/2024 - To 15/03/2025

4th: 03 Mar 2025

From 15/03/2025 - To 15/03/2026