
Artificial Intelligence (AI)-Based System and Method for Facilitating Assessment of Interviews of Candidates

Abstract: A system and method for facilitating assessment of interviews of candidates is disclosed. The method (700) includes obtaining an interview data associated with one or more candidates, transcribing one or more audios associated with one or more interview videos into one or more transcripts and determining type of the one or more interviews. Further, the method (700) includes generating a fuzzy score between the one or more questions and the one or more candidate answers and a technical score for each of the one or more technical interviews and detecting one or more mistakes in the one or more candidate answers. Furthermore, the method (700) includes generating one or more correct answers for the one or more questions and communication score for each of the one or more communication interviews. The method (700) includes outputting the generated technical score and the generated communication score to one or more users. FIG. 7


Patent Information

Filing Date: 17 June 2022
Publication Number: 27/2022
Publication Type: INA
Invention Field: COMPUTER SCIENCE
Email: filings@ipexcel.com
Grant Date: 2023-01-18

Applicants

E2EHIRING PRIVATE LIMITED
B 312 SAMHITA GREEN WOODS APTS, BESIDE VBGYOR HIGH SCHOOL, BANGALORE, KARNATAKA – 560066, INDIA

Inventors

1. MOHAMMED FAISAL
A 504, BANYAN, BRIGADE ORCHARDS, DEVANHALLI, BANGALORE, KARNATAKA, 562110, INDIA
2. GIRISH A KOUSHIK
NO. 430 2ND CROSS 2ND MAIN BEML LAYOUT 3RD STAGE RAJARAJESHWARI NAGAR, BENGALURU - 560098, KARNATAKA, INDIA

Specification

Description: FIELD OF INVENTION
[0001] Embodiments of the present disclosure relate to Artificial intelligence (AI)-based assessment systems and more particularly relate to an AI–based system and method for facilitating assessment of interviews of candidates.
BACKGROUND
[0002] With the advancements in technology, multiple job portals conduct online technical video interviews of one or more candidates. Further, one or more recruiters perform a manual evaluation to select or reject the one or more candidates for a given job profile based on the conducted online technical video interviews. However, selecting the right candidate in accordance with a job description is a complex and challenging task for the one or more recruiters. In addition, the one or more candidates for the same job profile are commonly evaluated by different recruiters under their individual subjective criteria and biases. Thus, there is a probability that the interview process is not fair, resulting in talent loss. Furthermore, each of the one or more recruiters is different in terms of leniency and rating tendency. Thus, there is a likelihood that a less qualified or less desirable candidate may be hired because he/she is interviewed by a lenient recruiter. Further, conducting technical interviews with many candidates to choose the right candidate is an expensive and time-consuming process.
[0003] Hence, there is a need for an improved system and method for facilitating assessment of interviews of candidates, in order to address the aforementioned issues.
SUMMARY
[0004] This summary is provided to introduce a selection of concepts, in a simple manner, which is further described in the detailed description of the disclosure. This summary is neither intended to identify key or essential inventive concepts of the subject matter nor to determine the scope of the disclosure.
[0005] In accordance with an embodiment of the present disclosure, an Artificial Intelligence (AI)-based computing system for facilitating assessment of interviews of candidates is disclosed. The AI-based computing system includes one or more hardware processors and a memory coupled to the one or more hardware processors. The memory includes a plurality of modules in the form of programmable instructions executable by the one or more hardware processors. The plurality of modules include a data obtaining module configured to obtain an interview data associated with one or more interviews of one or more candidates from a storage unit. The interview data includes a candidate ID, an invite ID and a video data. The video data includes a video type, one or more questions, a question ID, a video Uniform Resource Locator (URL) and one or more reference answers. The data obtaining module is also configured to obtain one or more interview videos associated with the one or more interviews from the storage unit based on the obtained interview data. The plurality of modules also include a data transcription module configured to transcribe one or more audios associated with the obtained one or more interview videos into one or more transcripts by using a transcription technique. The one or more transcripts correspond to one or more candidate answers for the one or more questions. Further, the plurality of modules include an interview determination module configured to determine type of the one or more interviews based on the obtained interview data upon transcribing the one or more audios. The type of the one or more interviews includes at least one of: one or more technical interviews and one or more communication interviews. Furthermore, the plurality of modules include a fuzzy score generation module configured to generate a fuzzy score between the one or more questions and the one or more candidate answers by using a score generation technique upon determining that the type of the one or more interviews is the one or more technical interviews. The plurality of modules further include a technical score generation module configured to generate a technical score for each of the one or more technical interviews by considering the generated fuzzy score, the one or more reference answers and the one or more candidate answers by using a trained technical score evaluation-based AI model. The trained technical score evaluation-based AI model is a deep learning model. The one or more reference answers are predefined correct answers for the one or more questions. The plurality of modules include a data detection module configured to detect one or more mistakes in the one or more candidate answers by using a trained sentence correction-based AI model upon determining that the type of the one or more interviews is the one or more communication interviews. The one or more mistakes comprise one or more grammatical mistakes and one or more spelling mistakes. The trained sentence correction-based AI model is the deep learning model. Further, the plurality of modules include an answer generation module configured to generate one or more correct answers for the one or more questions by updating the one or more candidate answers based on the detected one or more mistakes and a set of predefined correction rules by using the trained sentence correction-based AI model.
The plurality of modules include a communication score generation module configured to generate a communication score for each of the one or more communication interviews by comparing the one or more candidate answers with the generated one or more correct answers by using the trained sentence correction-based AI model. The plurality of modules includes a data output module configured to output the generated technical score and the generated communication score on user interface screen of one or more electronic devices associated with one or more users.
[0006] In accordance with another embodiment of the present disclosure, an AI-based method for facilitating assessment of interviews of candidates is disclosed. The AI-based method includes obtaining an interview data associated with one or more interviews of one or more candidates from a storage unit. The interview data includes a candidate ID, an invite ID and a video data. The video data includes a video type, one or more questions, a question ID, a video URL and one or more reference answers. The AI-based method also includes obtaining one or more interview videos associated with the one or more interviews from the storage unit based on the obtained interview data. The AI-based method further includes transcribing one or more audios associated with the obtained one or more interview videos into one or more transcripts by using a transcription technique. The one or more transcripts correspond to one or more candidate answers for the one or more questions. Further, the AI-based method includes determining type of the one or more interviews based on the obtained interview data upon transcribing the one or more audios. The type of the one or more interviews includes at least one of: one or more technical interviews and one or more communication interviews. Also, the AI-based method includes generating a fuzzy score between the one or more questions and the one or more candidate answers by using a score generation technique upon determining that the type of the one or more interviews is the one or more technical interviews. The AI-based method includes generating a technical score for each of the one or more technical interviews by considering the generated fuzzy score, the one or more reference answers and the one or more candidate answers by using a trained technical score evaluation-based AI model. The trained technical score evaluation-based AI model is a deep learning model. The one or more reference answers are predefined correct answers for the one or more questions. Furthermore, the AI-based method includes detecting one or more mistakes in the one or more candidate answers by using a trained sentence correction-based AI model upon determining that the type of the one or more interviews is the one or more communication interviews. The one or more mistakes include one or more grammatical mistakes and one or more spelling mistakes. The trained sentence correction-based AI model is the deep learning model. The AI-based method also includes generating one or more correct answers for the one or more questions by updating the one or more candidate answers based on the detected one or more mistakes and a set of predefined correction rules by using the trained sentence correction-based AI model. Further, the AI-based method includes generating a communication score for each of the one or more communication interviews by comparing the one or more candidate answers with the generated one or more correct answers by using the trained sentence correction-based AI model. The AI-based method includes outputting the generated technical score and the generated communication score on user interface screen of one or more electronic devices associated with one or more users.
[0007] To further clarify the advantages and features of the present disclosure, a more particular description of the disclosure will follow by reference to specific embodiments thereof, which are illustrated in the appended figures. It is to be appreciated that these figures depict only typical embodiments of the disclosure and are therefore not to be considered limiting in scope. The disclosure will be described and explained with additional specificity and detail with the appended figures.
BRIEF DESCRIPTION OF DRAWINGS
[0008] The disclosure will be described and explained with additional specificity and detail with the accompanying figures in which:
[0009] FIG. 1 is a block diagram illustrating an exemplary computing environment for facilitating assessment of interviews of candidates, in accordance with an embodiment of the present disclosure;
[0010] FIG. 2 is a block diagram illustrating an exemplary Artificial Intelligence (AI)–based computing system for facilitating assessment of interviews of candidates, in accordance with an embodiment of the present disclosure;
[0011] FIG. 3A is a block diagram illustrating an exemplary deep learning model architecture for generating a technical score, in accordance with an embodiment of the present disclosure;
[0012] FIG. 3B is a block diagram illustrating an exemplary deep learning model architecture for generating a communication score, in accordance with an embodiment of the present disclosure;
[0013] FIG. 4 is a schematic representation illustrating an exemplary process flow for training a customized deep learning model, in accordance with an embodiment of the present disclosure;
[0014] FIG. 5 is a schematic representation illustrating an exemplary process flow for deploying a trained customized deep learning model in a production instance, in accordance with an embodiment of the present disclosure;
[0015] FIG. 6 is a process flow diagram illustrating an exemplary operation of the AI-based computing system for facilitating assessment of interviews of candidates, in accordance with an embodiment of the present disclosure;
[0016] FIG. 7 is a process flow diagram illustrating an exemplary AI-based method for facilitating assessment of interviews of candidates, in accordance with an embodiment of the present disclosure; and
[0017] FIGs. 8A – 8B are graphical user interface screens of the AI-based computing system for facilitating assessment of interviews of candidates, in accordance with an embodiment of the present disclosure.
[0018] Further, those skilled in the art will appreciate that elements in the figures are illustrated for simplicity and may not have necessarily been drawn to scale. Furthermore, in terms of the construction of the device, one or more components of the device may have been represented in the figures by conventional symbols, and the figures may show only those specific details that are pertinent to understanding the embodiments of the present disclosure so as not to obscure the figures with details that will be readily apparent to those skilled in the art having the benefit of the description herein.
DETAILED DESCRIPTION OF THE DISCLOSURE
[0019] For the purpose of promoting an understanding of the principles of the disclosure, reference will now be made to the embodiment illustrated in the figures and specific language will be used to describe them. It will nevertheless be understood that no limitation of the scope of the disclosure is thereby intended. Such alterations and further modifications in the illustrated system, and such further applications of the principles of the disclosure as would normally occur to those skilled in the art are to be construed as being within the scope of the present disclosure. It will be understood by those skilled in the art that the foregoing general description and the following detailed description are exemplary and explanatory of the disclosure and are not intended to be restrictive thereof.
[0020] In the present document, the word “exemplary” is used herein to mean “serving as an example, instance, or illustration.” Any embodiment or implementation of the present subject matter described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other embodiments.
[0021] The terms “comprise”, “comprising”, or any other variations thereof, are intended to cover a non-exclusive inclusion, such that one or more devices or sub-systems or elements or structures or components preceded by “comprises” do not, without more constraints, preclude the existence of other devices, sub-systems, elements, structures, components or additional sub-modules. Appearances of the phrase "in an embodiment”, "in another embodiment" and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment.
[0022] Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by those skilled in the art to which this disclosure belongs. The system, methods, and examples provided herein are only illustrative and not intended to be limiting.
[0023] A computer system (standalone, client or server computer system) configured by an application may constitute a “module” (or “subsystem”) that is configured and operated to perform certain operations. In one embodiment, the “module” or “subsystem” may be implemented mechanically or electronically, so a module may include dedicated circuitry or logic that is permanently configured (within a special-purpose processor) to perform certain operations. In another embodiment, a “module” or “subsystem” may also comprise programmable logic or circuitry (as encompassed within a general-purpose processor or other programmable processor) that is temporarily configured by software to perform certain operations.
[0024] Accordingly, the term “module” or “subsystem” should be understood to encompass a tangible entity, be that an entity that is physically constructed, permanently configured (hardwired) or temporarily configured (programmed) to operate in a certain manner and/or to perform certain operations described herein.
[0025] Referring now to the drawings, and more particularly to FIGs. 1 through FIG. 8B, where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments and these embodiments are described in the context of the following exemplary system and/or method.
[0026] FIG. 1 is a block diagram illustrating an exemplary computing environment 100 for facilitating assessment of interviews of candidates, in accordance with an embodiment of the present disclosure. According to FIG. 1, the computing environment 100 includes an external server 102 communicatively coupled to an Artificial Intelligence (AI)-based computing system 104 via a network 106. The external server 102 is an employer’s online job portal configured to facilitate one or more interviews of one or more candidates. In an embodiment of the present disclosure, the one or more candidates are required to register at the employer’s online job portal to get into a virtual interview room. In the virtual interview room, the one or more candidates are required to record their answers to one or more questions prompted on user interface screen of one or more electronic devices 108. In an embodiment of the present disclosure, the answers are recorded in form of one or more interview videos. The one or more interview videos are saved in a storage unit. For example, the storage unit is Amazon Web Services (AWS) Simple Storage Service (S3) bucket.
[0027] The computing environment 100 includes the one or more electronic devices 108 associated with one or more users communicatively coupled to the AI-based computing system 104 via the network 106. For example, the one or more users are the one or more candidates, one or more recruiters and the like. The one or more electronic devices 108 are used by the one or more users to receive a technical score, a communication score and an interview status associated with each of the one or more interviews. In an exemplary embodiment of the present disclosure, the one or more electronic devices 108 may include a laptop computer, desktop computer, tablet computer, smartphone, wearable device, smart watch, and the like. Further, the network 106 may be the Internet or any other wireless network. The AI-based computing system 104 may be hosted on a central server, such as a cloud server or a remote server.
[0028] Further, the one or more electronic devices 108 include a local browser, a mobile application or a combination thereof. Furthermore, the one or more users may use a web application via the local browser, the mobile application, or a combination thereof to communicate with the AI-based computing system 104. In an embodiment of the present disclosure, the AI-based computing system 104 includes a plurality of modules 110. Details on the plurality of modules 110 have been elaborated in subsequent paragraphs of the present description with reference to FIG. 2.
[0029] In an embodiment of the present disclosure, the AI-based computing system 104 is configured to obtain an interview data associated with the one or more interviews of the one or more candidates from the storage unit. Further, the AI-based computing system 104 obtains the one or more interview videos associated with the one or more interviews from the storage unit based on the obtained interview data. The AI-based computing system 104 transcribes one or more audios associated with the obtained one or more interview videos into one or more transcripts by using a transcription technique. The one or more transcripts correspond to one or more candidate answers for the one or more questions. Furthermore, the AI-based computing system 104 determines type of the one or more interviews based on the obtained interview data upon transcribing the one or more audios. In an exemplary embodiment of the present disclosure, the type of the one or more interviews includes one or more technical interviews, one or more communication interviews or a combination thereof. The AI-based computing system 104 generates a fuzzy score between the one or more questions and the one or more candidate answers by using a score generation technique upon determining that the type of the one or more interviews is the one or more technical interviews. The AI-based computing system 104 generates the technical score for each of the one or more technical interviews by considering the generated fuzzy score, one or more reference answers and the one or more candidate answers by using a trained technical score evaluation-based AI model. The AI-based computing system 104 detects one or more mistakes in the one or more candidate answers by using a trained sentence correction-based AI model upon determining that the type of the one or more interviews is the one or more communication interviews. Further, the AI-based computing system 104 generates one or more correct answers for the one or more questions by updating the one or more candidate answers based on the detected one or more mistakes and a set of predefined correction rules by using the trained sentence correction-based AI model. The AI-based computing system 104 generates a communication score for each of the one or more communication interviews by comparing the one or more candidate answers with the generated one or more correct answers by using the trained sentence correction-based AI model. Furthermore, the AI-based computing system 104 outputs the generated technical score and the generated communication score on user interface screen of one or more electronic devices 108 associated with one or more users.
[0030] FIG. 2 is a block diagram illustrating an exemplary AI-based computing system 104 for facilitating assessment of interviews of candidates, in accordance with an embodiment of the present disclosure. Further, the AI-based computing system 104 includes one or more hardware processors 202, a memory 204 and a storage unit 206. The one or more hardware processors 202, the memory 204 and the storage unit 206 are communicatively coupled through a system bus 208 or any similar mechanism. The memory 204 comprises the plurality of modules 110 in the form of programmable instructions executable by the one or more hardware processors 202. Further, the plurality of modules 110 includes a data obtaining module 210, a data transcription module 212, an interview determination module 214, a fuzzy score generation module 216, a technical score generation module 218, a data detection module 220, an answer generation module 222, a communication score generation module 224, a data output module 226 and a status determination module 228.
[0031] The one or more hardware processors 202, as used herein, means any type of computational circuit, such as, but not limited to, a microprocessor unit, microcontroller, complex instruction set computing microprocessor unit, reduced instruction set computing microprocessor unit, very long instruction word microprocessor unit, explicitly parallel instruction computing microprocessor unit, graphics processing unit, digital signal processing unit, or any other type of processing circuit. The one or more hardware processors 202 may also include embedded controllers, such as generic or programmable logic devices or arrays, application specific integrated circuits, single-chip computers, and the like.
[0032] The memory 204 may be non-transitory volatile memory and non-volatile memory. The memory 204 may be coupled for communication with the one or more hardware processors 202, such as being a computer-readable storage medium. The one or more hardware processors 202 may execute machine-readable instructions and/or source code stored in the memory 204. A variety of machine-readable instructions may be stored in and accessed from the memory 204. The memory 204 may include any suitable elements for storing data and machine-readable instructions, such as read only memory, random access memory, erasable programmable read only memory, electrically erasable programmable read only memory, a hard drive, a removable media drive for handling compact disks, digital video disks, diskettes, magnetic tape cartridges, memory cards, and the like. In the present embodiment, the memory 204 includes the plurality of modules 110 stored in the form of machine-readable instructions on any of the above-mentioned storage media and may be in communication with and executed by the one or more hardware processors 202.
[0033] The storage unit 206 may be a cloud storage. For example, the storage unit 206 may be AWS S3 bucket, SQLite database and the like. The storage unit 206 may store the interview data, the one or more interview videos and the one or more transcripts. The storage unit 206 may also store the fuzzy score, the technical score, the one or more reference answers, the one or more mistakes, the one or more correct answers, the communication score, a predefined threshold score, the set of predefined correction rules, a predefined threshold fuzzy score, a set of predefined negative words, a set of predefined positive words, and the like.
[0034] In an embodiment of the present disclosure, the one or more candidates are required to register at the external server 102 to get into the virtual interview room. The external server 102 is an employer’s online job portal configured to facilitate the one or more interviews of the one or more candidates. In an exemplary embodiment of the present disclosure, the one or more candidates register with the external server 102 by providing user details, such as name, address, gender, number of years of experience, year of graduation, highest qualification, and the like. In the virtual interview room, the one or more candidates are required to record their answers to the one or more questions prompted on the user interface screen of the one or more electronic devices 108. In an embodiment of the present disclosure, the answers are recorded in form of the one or more interview videos. The one or more interview videos are saved in the storage unit 206. In an exemplary embodiment of the present disclosure, the one or more electronic devices 108 may include a laptop computer, desktop computer, tablet computer, smartphone, wearable device, smart watch, and the like.
[0035] The data obtaining module 210 is configured to obtain the interview data associated with the one or more interviews of the one or more candidates from the storage unit 206. In an exemplary embodiment of the present disclosure, the interview data includes a candidate ID, an invite ID, and a video data. For example, the video data includes a video type, one or more questions, a question ID, a video Uniform Resource Locator (URL), one or more reference answers and the like. Further, the data obtaining module 210 is configured to obtain the one or more interview videos associated with the one or more interviews from the storage unit 206 based on the obtained interview data. In an embodiment of the present disclosure, the one or more interview videos are obtained based on the obtained video URL.
[0036] The data transcription module 212 is configured to transcribe the one or more audios associated with the obtained one or more interview videos into the one or more transcripts by using the transcription technique. In an embodiment of the present disclosure, the one or more transcripts correspond to the one or more candidate answers for the one or more questions. In transcribing the one or more audios associated with the obtained one or more interview videos into the one or more transcripts by using the transcription technique, the data transcription module 212 extracts the one or more audios from the obtained one or more interview videos by using a Fast Forward Moving Picture Experts Group (FFmpeg) package. Further, the data transcription module 212 transcribes the extracted one or more audios into the one or more transcripts by using the transcription technique. In an embodiment of the present disclosure, the transcription technique corresponds to a natural language processing technique. For example, the one or more audios are extracted from the .mp4 file by using the ‘FFmpeg’ package. Further, the one or more audios may be sent to Azure speech-to-text for accurate transcription. In an embodiment of the present disclosure, the computational linguistics may be engineered as natural language processing.
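As an illustration of the transcription step described above, the following is a minimal sketch in Python, assuming FFmpeg is installed and an Azure speech-to-text subscription is available; the key, region and file names are placeholders, not values from the disclosure:

```python
# Minimal sketch of the transcription step: extract audio with FFmpeg,
# then transcribe it with Azure speech-to-text. Key, region and paths
# are illustrative placeholders.
import subprocess
import azure.cognitiveservices.speech as speechsdk

def extract_audio(video_path: str, audio_path: str) -> None:
    # -vn drops the video stream; 16 kHz mono PCM suits speech recognition.
    subprocess.run(
        ["ffmpeg", "-y", "-i", video_path, "-vn",
         "-acodec", "pcm_s16le", "-ar", "16000", "-ac", "1", audio_path],
        check=True,
    )

def transcribe(audio_path: str, key: str, region: str) -> str:
    speech_config = speechsdk.SpeechConfig(subscription=key, region=region)
    audio_config = speechsdk.audio.AudioConfig(filename=audio_path)
    recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config,
                                            audio_config=audio_config)
    result = recognizer.recognize_once()  # recognizes one utterance per call
    return result.text

# extract_audio("answer.mp4", "answer.wav")
# transcript = transcribe("answer.wav", key="<AZURE_KEY>", region="<REGION>")
```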
[0037] The interview determination module 214 is configured to determine the type of the one or more interviews based on the obtained interview data upon transcribing the one or more audios. In an exemplary embodiment of the present disclosure, the type of the one or more interviews includes the one or more technical interviews, the one or more communication interviews or a combination thereof. In an embodiment of the present disclosure, the type of the one or more interviews is determined based on the obtained video type variable.
[0038] The fuzzy score generation module 216 is configured to generate the fuzzy score between the one or more questions and the one or more candidate answers by using the score generation technique upon determining that the type of the one or more interviews is the one or more technical interviews. In an embodiment of the present disclosure, a fuzzy score is obtained using a FuzzyWuzzy package. The FuzzyWuzzy package is a library of Python which is used for string matching. In an embodiment of the present disclosure, fuzzy string matching is finding strings that match a given pattern. It uses Levenshtein distance to calculate differences between sequences. The Levenshtein distance is a string metric to calculate the difference between two different strings. The Levenshtein distance between two strings a, b (of length |a| and |b| respectively) is given by lev(a, b) as:

$$
\operatorname{lev}(a,b) =
\begin{cases}
|a| & \text{if } |b| = 0,\\
|b| & \text{if } |a| = 0,\\
\operatorname{lev}\big(\operatorname{tail}(a),\operatorname{tail}(b)\big) & \text{if } a[0] = b[0],\\
1 + \min\big\{\operatorname{lev}\big(\operatorname{tail}(a),b\big),\ \operatorname{lev}\big(a,\operatorname{tail}(b)\big),\ \operatorname{lev}\big(\operatorname{tail}(a),\operatorname{tail}(b)\big)\big\} & \text{otherwise.}
\end{cases}
$$
[0039] In an embodiment of the present disclosure, the tail of some string x is a string of all but the first character of x, and x[n] is the nth character of the string x starting with character 0.
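A short sketch of the score generation technique, assuming the FuzzyWuzzy package named above; the recursive lev() function mirrors the definition given for lev(a, b) and is written for clarity rather than efficiency (its complexity is exponential, whereas FuzzyWuzzy uses an optimized implementation):

```python
# Sketch of the fuzzy-score step. fuzz.ratio() returns a 0-100 similarity
# based on Levenshtein distance; lev() mirrors the recursive formula above.
from fuzzywuzzy import fuzz

def lev(a: str, b: str) -> int:
    if len(b) == 0:
        return len(a)
    if len(a) == 0:
        return len(b)
    if a[0] == b[0]:
        return lev(a[1:], b[1:])           # tails of a and b
    return 1 + min(lev(a[1:], b),          # deletion
                   lev(a, b[1:]),          # insertion
                   lev(a[1:], b[1:]))      # substitution

question = "Is Python dynamically typed?"
answer = "Is Python dynamically typed?"    # candidate merely repeats the question
fuzzy_score = fuzz.ratio(question, answer)  # 100 here; a score > 85 triggers
                                            # the repeated/binary-question checks
```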
[0040] The technical score generation module 218 is configured to generate the technical score for each of the one or more technical interviews by considering the generated fuzzy score, the one or more reference answers and the one or more candidate answers by using the trained technical score evaluation-based AI model. In an embodiment of the present disclosure, the one or more questions are one or more technical questions to assess Information Technology (IT) or technical skills of the one or more candidates. In an exemplary embodiment of the present disclosure, the trained technical score evaluation-based AI model is a deep learning model. In an embodiment of the present disclosure, the deep learning model is trained to evaluate IT or technical skills by comparing the one or more reference answers associated with the one or more technical questions to perform a semantic textual similarity between the one or more reference answers and the one or more candidate answers recorded by the one or more candidates. The trained technical score evaluation-based AI model is pre-trained with a huge training data set including real-time recorded interview videos to give a more accurate result. The trained technical score evaluation-based AI model is also called a fine-tuned model. In an exemplary embodiment of the present disclosure, the fine-tuned model has an accuracy of 80% and evaluates the one or more interviews without any manual interpretation. In an embodiment of the present disclosure, the one or more reference answers are predefined correct answers for the one or more questions. In generating the technical score for each of the one or more technical interviews by considering the generated fuzzy score, the one or more reference answers and the one or more candidate answers by using the trained technical score evaluation-based AI model, the technical score generation module 218 is configured to determine if the generated fuzzy score is greater than a predefined threshold fuzzy score by comparing the generated fuzzy score with the predefined threshold fuzzy score. In an exemplary embodiment of the present disclosure, the predefined threshold fuzzy score is 85. Further, the technical score generation module 218 splits the one or more candidate answers and the one or more questions into a set of tokens by using a tokenization technique upon determining that the generated fuzzy score is greater than the predefined threshold fuzzy score. In an embodiment of the present disclosure, the tokenization technique is breaking raw text into small chunks. The tokenization technique breaks the raw text into words or sentences, called tokens. These tokens help understand the context or develop the NLP model. The tokenization helps interpret the text's meaning by analyzing the sequence of the words. The technical score generation module 218 removes one or more stop-words from the one or more candidate answers and the one or more questions by using a stop-word removal technique upon splitting the one or more candidate answers and the one or more questions. In an embodiment of the present disclosure, stop-word removal is commonly used in preprocessing steps across different NLP applications. The stop-word removal includes removing the commonly occurring words across all the documents in the corpus. Further, articles and pronouns are generally classified as stop words. The stop words have no significance in some of the NLP tasks like information retrieval and classification, which means these words are not very discriminative. In an embodiment of the present disclosure, iteration is performed through each word in the input text, and if the word exists in the stop word set of the SpaCy language model, the word is removed. The technical score generation module 218 lemmatizes the one or more candidate answers by using a lemmatization technique upon removing the one or more stop-words. In an embodiment of the present disclosure, lemmatization refers to the use of a vocabulary and morphological analysis of words, normally aiming to remove inflectional endings only and to return the base or dictionary form of a word, known as the lemma. If confronted with the token saw, lemmatization may attempt to return either see or saw depending on whether the use of the token was as a verb or a noun. Lemmatization commonly only collapses the different inflectional forms of a lemma. Furthermore, the technical score generation module 218 detects one or more common words in the lemmatized one or more candidate answers by using a word detection technique. In an embodiment of the present disclosure, the lemmatized forms of the words from the reference and candidate answers are iterated through, and common words are found through one-to-one comparison. These common words are then added to a Python list. The technical score generation module 218 removes the detected one or more common words from the lemmatized one or more candidate answers. The technical score generation module 218 also determines if the length of the lemmatized one or more candidate answers upon removal of the detected one or more common words is zero. In an embodiment of the present disclosure, the one or more questions are repeated questions if the determined length is zero. Further, the technical score generation module 218 assigns a zero technical score to the one or more technical interviews upon determining that the length of the lemmatized one or more candidate answers upon removal of the detected one or more common words is zero.
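The preprocessing chain described in this paragraph (tokenization, stop-word removal against the SpaCy stop-word set, lemmatization and common-word removal) might be sketched as follows; the SpaCy model name and helper function names are illustrative assumptions:

```python
# Sketch of the preprocessing described above: tokenize, drop spaCy stop
# words, lemmatize, then strip words common to the question and the answer.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumed English SpaCy model

def lemmas(text: str) -> list[str]:
    doc = nlp(text)  # tokenization happens here
    return [tok.lemma_.lower() for tok in doc
            if not tok.is_stop and not tok.is_punct]

def remove_common(question: str, answer: str) -> list[str]:
    q, a = lemmas(question), lemmas(answer)
    common = [w for w in a if w in q]   # one-to-one comparison, per the text
    return [w for w in a if w not in common]

# A length of zero after common-word removal means the candidate merely
# repeated the question, so a technical score of zero is assigned.
```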
[0041] Further, in generating the technical score for each of the one or more technical interviews by considering the generated fuzzy score, the one or more reference answers and the one or more candidate answers by using the trained technical score evaluation-based AI model, the technical score generation module 218 is configured to compare the lemmatized one or more candidate answers and the one or more questions upon removal of the detected one or more common words with a set of predefined positive words and a set of predefined negative words upon determining that the length of the lemmatized one or more candidate answers upon removal of the detected one or more common words is not zero. Further, the technical score generation module 218 determines if one or more words of the set of predefined positive words, the set of predefined negative words or a combination thereof match with the lemmatized one or more candidate answers and the one or more questions based on the result of the comparison. For example, the set of predefined positive words are yes, yeah, true and the like. For example, the set of predefined negative words are no, nope, false and the like. The technical score generation module 218 assigns a technical score of 100 to the one or more technical interviews upon determining that the one or more words match with the lemmatized one or more candidate answers and the one or more questions. In an embodiment of the present disclosure, the one or more technical questions are binary questions if the one or more words match with the lemmatized one or more candidate answers and the one or more questions. The technical score generation module 218 assigns a technical score of 0 to the one or more technical interviews upon determining that the one or more words do not match with the lemmatized one or more candidate answers and the one or more questions. Furthermore, the technical score generation module 218 generates the technical score for each of the one or more technical interviews by comparing the one or more candidate answers with the one or more reference answers by using the trained technical score evaluation-based AI model upon determining that the generated fuzzy score is less than the predefined threshold fuzzy score. In an exemplary embodiment of the present disclosure, the trained technical score evaluation-based AI model is an all-mpnet-base-v1 model. In an embodiment of the present disclosure, the one or more technical questions are descriptive questions if the generated fuzzy score is less than the predefined threshold fuzzy score.
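Combining the checks above, a hedged sketch of the branch logic could look like the following; it reuses remove_common() from the preprocessing sketch, and model_score_fn stands in for the trained technical score evaluation-based AI model (an assumption):

```python
# Sketch of the branch logic above. remove_common() comes from the
# preprocessing sketch; model_score_fn stands in for the trained
# technical score evaluation-based AI model.
POSITIVE = {"yes", "yeah", "true"}    # set of predefined positive words
NEGATIVE = {"no", "nope", "false"}    # set of predefined negative words
THRESHOLD = 85                        # predefined threshold fuzzy score

def technical_score(question, answer, fuzzy_score, model_score_fn):
    if fuzzy_score > THRESHOLD:
        remaining = set(remove_common(question, answer))
        if not remaining:
            return 0    # candidate repeated the question
        if remaining & (POSITIVE | NEGATIVE):
            return 100  # binary question answered with a recognised word
        return 0        # no recognised word matched
    # descriptive question: defer to the finetuned evaluation model
    return model_score_fn(question, answer)
```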
[0042] Furthermore, in generating the technical score for each of the one or more technical interviews by comparing the one or more candidate answers with the one or more reference answers by using the trained technical score evaluation-based AI model upon determining that the generated fuzzy score is less than the predefined threshold fuzzy score, the technical score generation module 218 splits the one or more candidate answers and the one or more reference answers into the set of tokens by using the tokenization technique. Further, the technical score generation module 218 removes the one or more stop-words from the one or more candidate answers and the one or more reference answers by using the stop-word removal technique upon splitting the one or more candidate answers and the one or more reference answers. The technical score generation module 218 maps each of the one or more candidate answers and the one or more reference answers to a 3D vector space by using the trained technical score evaluation-based AI model upon removing the one or more stop-words. Furthermore, the technical score generation module 218 determines a vector space distance between the one or more candidate answers and the one or more reference answers by using the trained technical score evaluation-based AI model upon mapping each of the one or more candidate answers and the one or more reference answers to the 3D vector space. In an embodiment of the present disclosure, the determined vector space distance corresponds to the technical score associated with each of the one or more technical interviews.
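For the descriptive-question path, a minimal sketch using the public sentence-transformers checkpoint of the named 'all-mpnet-base-v1' model (assumed here to stand in for the finetuned model of the disclosure) is shown below; the cosine similarity of the sentence embeddings plays the role of the vector space distance:

```python
# Sketch of the descriptive-question path: embed the candidate answer and
# the reference answer and score their cosine similarity. The public
# checkpoint stands in for the patent's finetuned model.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/all-mpnet-base-v1")

def semantic_score(candidate_answer: str, reference_answer: str) -> float:
    emb = model.encode([candidate_answer, reference_answer])
    similarity = util.cos_sim(emb[0], emb[1]).item()   # in [-1, 1]
    return round(max(similarity, 0.0) * 100, 2)        # scaled to 0-100
```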
[0043] The data detection module 220 is configured to detect the one or more mistakes in the one or more candidate answers by using the trained sentence correction-based AI model upon determining that the type of the one or more interviews is the one or more communication interviews. In an exemplary embodiment of the present disclosure, the one or more mistakes include one or more grammatical mistakes, one or more spelling mistakes, and the like. In an embodiment of the present disclosure, the trained sentence correction-based AI model is the deep learning model. For example, the trained sentence correction-based AI model receives the one or more candidate answers to the one or more general questions and passes them through a grammar correction model to detect the grammatical mistakes in the sentences.
[0044] The answer generation module 222 is configured to generate the one or more correct answers for the one or more questions by updating the one or more candidate answers based on the detected one or more mistakes and the set of predefined correction rules by using the trained sentence correction-based AI model. For example, the set of predefined correction rules may be grammatical rules to generate the one or more correct answers. In an embodiment of the present disclosure, the one or more questions are one or more communication or general questions to assess communication skills of the one or more candidates.
[0045] The communication score generation module 224 is configured to generate the communication score for each of the one or more communication interviews by comparing the one or more candidate answers with the generated one or more correct answers by using the trained sentence correction-based AI model. In an embodiment of the present disclosure, the trained sentence correction-based AI model is a computational linguistics-based model. In an exemplary embodiment of the present disclosure, the trained sentence correction-based AI model is very accurate as it is trained on 1 million sentence pairs across 14 different datasets. The overall accuracy of the trained sentence correction-based AI model is around 80%.
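A sketch of the communication-scoring path under stated assumptions: a public grammar-correction checkpoint (vennify/t5-base-grammar-correction, chosen here purely for illustration, since the disclosure does not name its model) stands in for the trained sentence correction-based AI model, and the score is the cosine similarity between the transcript and its corrected form:

```python
# Sketch of the communication-scoring path. The grammar-correction
# checkpoint is an assumption standing in for the patent's trained
# sentence correction-based AI model.
from transformers import pipeline
from sentence_transformers import SentenceTransformer, util

corrector = pipeline("text2text-generation",
                     model="vennify/t5-base-grammar-correction")
embedder = SentenceTransformer("sentence-transformers/all-mpnet-base-v1")

def communication_score(transcript: str) -> float:
    # This checkpoint expects a "grammar: " task prefix.
    corrected = corrector("grammar: " + transcript,
                          max_length=256)[0]["generated_text"]
    emb = embedder.encode([transcript, corrected])
    return round(util.cos_sim(emb[0], emb[1]).item() * 100, 2)

# A transcript that is already grammatical is nearly identical to its
# corrected form, so it scores close to 100.
```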
[0046] In an embodiment of the present disclosure, the trained technical score evaluation-based AI model is trained to capture the contextualized meaning of words in the one or more candidate answers and generate a cosine similarity between the one or more candidate answers and the one or more reference answers to generate the technical score for technical or IT-based questions. Further, the trained sentence correction-based AI model is used to evaluate general communication-based skills. In an embodiment of the present disclosure, a Bidirectional Encoder Representations from Transformers (BERT)/Sentence-BERT (SBERT) pre-trained model is adopted and customized by providing an additional technical question-answer based historical training dataset to tackle and evaluate technical questions. The trained BERT or SBERT model can generate a vector representation of a sentence with a dimension of 768. In an embodiment of the present disclosure, the BERT model is based on a transformer architecture, which is very powerful. The BERT model is trained on 2,500 million words from Wikipedia and 800 million words from other books.
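The customization described above might proceed along the following lines with the sentence-transformers fit API; the training pairs and similarity labels are invented placeholders, since the disclosure's historical question-answer dataset is not public:

```python
# Sketch of customising an SBERT checkpoint with a historical technical
# question-answer dataset. The example pairs and labels are illustrative.
from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

model = SentenceTransformer("sentence-transformers/all-mpnet-base-v1")

train_examples = [
    InputExample(texts=["What is polymorphism?",
                        "It lets one interface work with many types."],
                 label=0.9),  # a good answer scores close to 1.0
    InputExample(texts=["What is polymorphism?",
                        "It is a database index."],
                 label=0.1),  # an off-topic answer scores close to 0.0
]
loader = DataLoader(train_examples, shuffle=True, batch_size=2)
loss = losses.CosineSimilarityLoss(model)

model.fit(train_objectives=[(loader, loss)], epochs=1, warmup_steps=10)
model.save("finetuned-technical-scorer")
```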
[0047] The data output module 226 is configured to output the generated technical score and the generated communication score on user interface screen of the one or more electronic devices 108 associated with the one or more users. For example, the one or more users are the one or more candidates, one or more recruiters and the like.
[0048] The status determination module 228 is configured to determine an interview status of the one or more interviews based on the generated technical score, the generated communication score, and a predefined threshold score. In an embodiment of the present disclosure, the predefined threshold score is a cut-off score. In an exemplary embodiment of the present disclosure, the interview status is qualified or not qualified. In an embodiment of the present disclosure, the determined interview status is outputted on a user interface screen of the one or more electronic devices 108 associated with the one or more users. For example, the generated communication score is compared with the cut-off score to determine if a candidate possesses the required communication skills for a job role. Similarly, the generated technical score is compared with the cut-off score to determine if the candidate possesses the required technical skills for the job role.
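A minimal sketch of the status determination follows; the disclosure compares each score against the employer-defined cut-off, while the combined check and the cut-off value of 60 are assumptions made here for illustration:

```python
# Minimal sketch of the status determination. The cut-off value and the
# combined check across both scores are illustrative assumptions.
CUTOFF = 60  # employer-defined cut-off (predefined threshold score)

def interview_status(technical_score: float, communication_score: float) -> str:
    qualified = technical_score >= CUTOFF and communication_score >= CUTOFF
    return "qualified" if qualified else "not qualified"
```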
[0049] FIG. 3A is a block diagram illustrating an exemplary deep learning model architecture for generating the technical score, in accordance with an embodiment of the present disclosure. In an embodiment of the present disclosure, the one or more candidate answers 302 are in the form of one or more videos. Further, the one or more audios associated with the one or more videos are transcribed into the one or more transcripts by using the speech-to-text 304. At step 306, a customized pre-trained deep learning model i.e., the trained technical score evaluation-based AI model generates a cosine similarity for each of the one or more technical interviews by comparing the one or more transcripts with the one or more reference answers 308. In an embodiment of the present disclosure, the cosine similarity corresponds to the technical score 310.
[0050] FIG. 3B is a block diagram illustrating an exemplary deep learning model architecture for generating the communication score, in accordance with an embodiment of the present disclosure. In an embodiment of the present disclosure, the one or more candidate answers 312 are in the form of one or more videos. Further, the one or more audios associated with the one or more videos are transcribed into the one or more transcripts by using the speech-to-text 314. Further, a grammar correction model 316 is used to detect the one or more mistakes in the one or more transcripts and generate the one or more correct answers for the one or more questions by updating the one or more transcripts based on the detected one or more mistakes and the set of predefined correction rules. At step 318, a pre-trained deep learning model i.e., the trained sentence correction-based AI model generates a cosine similarity for each of the one or more communication interviews by comparing the one or more transcripts with the one or more correct answers. In an embodiment of the present disclosure, the cosine similarity corresponds to the communication score 320.
[0051] FIG. 4 is a schematic representation illustrating an exemplary process flow for training a customized deep learning model, in accordance with an embodiment of the present disclosure. In an embodiment of the present disclosure, FIG. 4 shows the flow to train the deep learning model i.e., a technical score evaluation-based AI model with the intent to handle technical or IT question-answers. The data of the registered job seekers is stored in an AWS bucket in cloud storage. At 402, historical interview videos 404 at the cloud storage may be used to train a customized deep learning model i.e., the technical score evaluation-based AI model. In an embodiment of the present disclosure, a finetuned model 406 i.e., the trained technical score evaluation-based AI model, is obtained after training the customized deep learning model. To train the customized deep learning model with a huge number of datasets, a very high-performance computational device is required. So, for better technical effectiveness, a system with an A100-SXM4, 40GB single-GPU configuration is used. In this flow, the historical interview videos 404 are fetched from the AWS bucket and directly given as input to the developed predictive model i.e., the customized deep learning model on the computational device to obtain the finetuned model 406, which generates a cosine similarity score corresponding to a candidate score i.e., the technical score and the communication score in the interview. In an embodiment of the present disclosure, the employer may set the cut-off score to notify the job seeker without any human intervention.
[0052] FIG. 5 is a schematic representation illustrating an exemplary process flow for deploying a trained customized deep learning model in a production instance, in accordance with an embodiment of the present disclosure. When the trained technical score evaluation-based AI model 502 is obtained after training the customized deep learning model, the trained technical score evaluation-based AI model 502 is deployed in the production instance 504. In an embodiment of the present disclosure, the trained technical score evaluation-based AI model 502 may be wrapped into a docker container 506 image file to execute an application in the production instance 504 or environment. It is highly recommended to have a Graphics Processing Unit (GPU)-based computing system 104 to maintain the standard performance of the trained technical score evaluation-based AI model 502. In an exemplary embodiment of the present disclosure, a recommended system configuration of the production instance is an Intel Core i7-9700K Central Processing Unit (CPU) @ 3.60GHz processor, an NVIDIA Ray Tracing Texel eXtreme (RTX) 2080Ti single GPU, 32 Gigabytes (GB) of memory and 1 Terabyte (TB) of Solid State Drive (SSD) storage.
[0053] FIG. 6 is a process flow diagram illustrating an exemplary operation of the AI-based computing system 104 for facilitating assessment of interviews of candidates, in accordance with an embodiment of the present disclosure. At step 602, when a candidate clicks on the submit button in the virtual interview room, a get_score() API call is initiated from the backend. The candidate ID, the invite ID, and a dictionary called the video data, including video type, question, question ID, video URL, and reference, are sent as a part of the request. In an embodiment of the present disclosure, there is no reference answer for general video type questions. For example, the candidate ID, the invite ID, the video data, the video type, question, question ID, video URL and the reference are as below:
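The original example values are not reproduced in this record; the following Python dictionary is a hypothetical illustration of the request shape described above, with all field names and values invented for illustration:

```python
# Hypothetical shape of the get_score() request body. For 'communication'
# (general) video types, the reference field is absent.
payload = {
    "candidate_id": "CAND-001",
    "invite_id": "INV-2022-17",
    "video_data": [
        {
            "video_type": "technical",
            "question": "What is a Python decorator?",
            "question_id": "Q-101",
            "video_url": "https://<bucket>.s3.amazonaws.com/interviews/q101.mp4",
            "reference": "A decorator wraps a function to extend its behaviour.",
        },
        {
            "video_type": "communication",
            "question": "Describe your proudest project.",
            "question_id": "Q-102",
            "video_url": "https://<bucket>.s3.amazonaws.com/interviews/q102.mp4",
            # no reference answer for general/communication questions
        },
    ],
}
```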


[0054] The video type variable may be used to check for a technical or communication type of interview. Based on this condition, the process flow changes. In an embodiment of the present disclosure, the request may be obtained from the backend and stored in an SQLite database with an appropriate status flag. This helps when there is a server outage or other external factors that result in the production system not running properly. In case of a system failure, the request can be loaded from the database and processed. At step 604, the candidate information, the video details and the one or more questions are obtained from the SQLite database. At step 606, the one or more interview videos of the one or more interviews are fetched from the cloud AWS S3 bucket. Whether the video type is ‘technical’ or ‘communication’, the candidate interview video may be fetched from the AWS S3 bucket using the video URL from the request. At step 608, the one or more audios are extracted from the .mp4 file using the ‘FFmpeg’ package. Furthermore, the one or more audios associated with the one or more interviews may be sent to Azure Speech-to-Text for accurate transcription. In an embodiment of the present disclosure, the computational linguistics may be engineered as natural language processing. At step 610, it is determined if the video type is technical. At step 612, when the video type is technical, a fuzzy comparison between the one or more questions and the one or more candidate answers i.e., the one or more transcripts, may be performed to determine whether the candidate has repeated the question or whether the question is a binary question type i.e., true/false, yes/no and the like. At step 614, it is determined if the fuzzy score is greater than 85. When the fuzzy score is greater than 85, at step 616 the one or more questions and the one or more candidate answers may be sent through a custom function where they are tokenized, stop words are removed and finally, the one or more candidate answers are lemmatized. Further, the lemmatized sentences may be sent through a function to determine and remove the common words. If the length of the sentences after the common words removal is 0, the candidate has repeated the question, and a score of 0 is assigned. Furthermore, if the length of the sentences after common words removal is not 0, then the remaining words in the question and answer may be compared to a positive list (yes/yeah/true) and a negative list (no/nope/false). If the words match, the score is 100; else, it is 0. At step 618, the customized pre-trained deep learning model is used for scoring i.e., cosine similarity by comparing the one or more candidate answers with the one or more reference answers. At step 620, if the fuzzy score is less than 85, the one or more reference answers and the one or more candidate answers may be sent to the finetuned ‘all-mpnet-base-v1’ model for scoring. If the video type is not technical, at step 622, the candidate transcript is sent to the grammar correction model for spelling and grammar corrections. At step 624, the one or more corrected answers and the one or more candidate answers are forwarded to the pre-trained deep learning model i.e., the trained sentence correction-based AI model for scoring i.e., cosine similarity.
At step 626, once the scores are obtained, the obtained scores are sent to the backend through the call-back URL to be displayed on the virtual job portal.
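The crash-recovery behaviour described at the start of paragraph [0054] (persisting each request with a status flag so it can be reloaded after an outage) might be sketched with SQLite as follows; the table and column names are assumptions:

```python
# Sketch of crash-safe request logging: each incoming request row carries
# a status flag so unprocessed work can be reloaded after a server outage.
import json
import sqlite3

conn = sqlite3.connect("requests.db")
conn.execute("""CREATE TABLE IF NOT EXISTS score_requests (
                  invite_id TEXT PRIMARY KEY,
                  payload   TEXT,
                  status    TEXT DEFAULT 'pending')""")

def save_request(payload: dict) -> None:
    conn.execute("INSERT OR REPLACE INTO score_requests VALUES (?, ?, 'pending')",
                 (payload["invite_id"], json.dumps(payload)))
    conn.commit()

def pending_requests() -> list[dict]:
    # Reload everything not yet marked processed, e.g. after a restart.
    rows = conn.execute(
        "SELECT payload FROM score_requests WHERE status = 'pending'").fetchall()
    return [json.loads(r[0]) for r in rows]
```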
[0055] FIG. 7 is a process flow diagram illustrating an exemplary AI-based method 700 for facilitating assessment of interviews of candidates, in accordance with an embodiment of the present disclosure. In an embodiment of the present disclosure, one or more candidates are required to register at an external server 102 to get into a virtual interview room. The external server 102 is an employer’s online job portal configured to facilitate one or more interviews of one or more candidates. In an exemplary embodiment of the present disclosure, the one or more candidates register with the external server 102 by providing user details, such as name, address, gender, number of years of experience, year of graduation, highest qualification, and the like. In the virtual interview room, the one or more candidates are required to record their answers to the one or more questions prompted on the user interface screen of one or more electronic devices 108. In an embodiment of the present disclosure, the answers are recorded in form of the one or more interview videos. The one or more interview videos are saved in a storage unit 206. In an exemplary embodiment of the present disclosure, the one or more electronic devices 108 may include a laptop computer, desktop computer, tablet computer, smartphone, wearable device, smart watch, and the like.
[0056] At step 702, an interview data associated with the one or more interviews of the one or more candidates is obtained from the storage unit 206. In an exemplary embodiment of the present disclosure, the interview data includes a candidate ID, an invite ID, and a video data. For example, the video data includes a video type, one or more questions, a question ID, a video Uniform Resource Locator (URL), one or more reference answers and the like.
[0057] At step 704, one or more interview videos associated with the one or more interviews are obtained from the storage unit 206 based on the obtained interview data. In an embodiment of the present disclosure, the one or more interview videos are obtained based on the obtained video URL.
[0058] At step 706, one or more audios associated with the obtained one or more interview videos are transcribed into one or more transcripts by using a transcription technique. In an embodiment of the present disclosure, the one or more transcripts correspond to the one or more candidate answers for the one or more questions. In transcribing the one or more audios associated with the obtained one or more interview videos into the one or more transcripts by using the transcription technique, the AI-based method 700 includes extracting the one or more audios from the obtained one or more interview videos by using a Fast Forward Moving Picture Experts Group (FFmpeg) package. Further, the AI-based method 700 includes transcribing the extracted one or more audios into the one or more transcripts by using the transcription technique. In an embodiment of the present disclosure, the transcription technique corresponds to a natural language processing technique. For example, the one or more audios are extracted from the .mp4 file by using the ‘FFmpeg’ package. Further, the one or more audios may be sent to Azure speech-to-text for accurate transcription. In an embodiment of the present disclosure, the computational linguistics may be engineered as natural language processing.
[0059] At step 708, type of the one or more interviews is determined based on the obtained interview data upon transcribing the one or more audios. In an exemplary embodiment of the present disclosure, the type of the one or more interviews includes the one or more technical interviews, the one or more communication interviews or a combination thereof. In an embodiment of the present disclosure, the type of the one or more interviews is determined based on the obtained video type variable.
[0060] At step 710, a fuzzy score between the one or more questions and the one or more candidate answers is generated by using a score generation technique upon determining that the type of the one or more interviews is the one or more technical interviews. In an embodiment of the present disclosure, the fuzzy score is obtained using the FuzzyWuzzy package. The FuzzyWuzzy package is a Python library used for string matching. In an embodiment of the present disclosure, fuzzy string matching is the process of finding strings that match a given pattern. It uses the Levenshtein distance to calculate differences between sequences. The Levenshtein distance is a string metric that calculates the difference between two strings. The Levenshtein distance between two strings a, b (of length |a| and |b| respectively) is given by lev(a, b) as:

\[
\operatorname{lev}(a,b) =
\begin{cases}
|a| & \text{if } |b| = 0,\\
|b| & \text{if } |a| = 0,\\
\operatorname{lev}\big(\operatorname{tail}(a),\operatorname{tail}(b)\big) & \text{if } a[0] = b[0],\\
1 + \min
\begin{cases}
\operatorname{lev}\big(\operatorname{tail}(a),\,b\big)\\
\operatorname{lev}\big(a,\,\operatorname{tail}(b)\big)\\
\operatorname{lev}\big(\operatorname{tail}(a),\,\operatorname{tail}(b)\big)
\end{cases} & \text{otherwise.}
\end{cases}
\]
[0061] In an embodiment of the present disclosure, the tail of some string x is a string of all but the first character of x, and x[n] is the nth character of the string x starting with character 0.
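For illustration, a fuzzy score in the 0–100 range can be obtained with the FuzzyWuzzy package as sketched below; the disclosure does not name the exact scorer it uses, so fuzz.ratio() is an assumption for this sketch.

```python
from fuzzywuzzy import fuzz  # pip install fuzzywuzzy python-Levenshtein

question = "Is Python a dynamically typed language?"
answer = "Is Python a dynamically typed language? Yes, it is."

# fuzz.ratio() returns a Levenshtein-based similarity between 0 and 100;
# a value above the 85 threshold suggests the candidate largely repeated
# the question or answered a binary (yes/no, true/false) question.
score = fuzz.ratio(question.lower(), answer.lower())
print(score)
```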
[0062] At step 712, a technical score is generated for each of the one or more technical interviews by considering the generated fuzzy score, the one or more reference answers and the one or more candidate answers by using a trained technical score evaluation-based AI model. In an embodiment of the present disclosure, the one or more questions are one or more technical questions to assess Information Technology (IT) or technical skills of the one or more candidates. In an exemplary embodiment of the present disclosure, the trained technical score evaluation-based AI model is a deep learning model. In an embodiment of the present disclosure, the deep learning model is trained to evaluate IT or technical skills by comparing the one or more reference answers associated with the one or more technical questions to perform a semantic textual similarity between the one or more reference answers and the one or more candidate answers recorded by the one or more candidates. The trained technical score evaluation-based AI model is pre-trained with a large training data set including real-time recorded interview videos to give a more accurate result. The trained technical score evaluation-based AI model is also called a fine-tuned model. In an exemplary embodiment of the present disclosure, the fine-tuned model has an accuracy of 80% and evaluates the one or more interviews without any manual interpretation. In an embodiment of the present disclosure, the one or more reference answers are predefined correct answers for the one or more questions. In generating the technical score for each of the one or more technical interviews by considering the generated fuzzy score, the one or more reference answers and the one or more candidate answers by using the trained technical score evaluation-based AI model, the AI-based method 700 includes determining if the generated fuzzy score is greater than a predefined threshold fuzzy score by comparing the generated fuzzy score with the predefined threshold fuzzy score. In an exemplary embodiment of the present disclosure, the predefined threshold fuzzy score is 85. Further, the AI-based method 700 includes splitting the one or more candidate answers and the one or more questions into a set of tokens by using a tokenization technique upon determining that the generated fuzzy score is greater than the predefined threshold fuzzy score. In an embodiment of the present disclosure, the tokenization technique breaks raw text into small chunks, i.e., into words or sentences called tokens. These tokens help understand the context and develop the NLP model, and tokenization helps interpret the text’s meaning by analyzing the sequence of the words. The AI-based method 700 includes removing one or more stop-words from the one or more candidate answers and the one or more questions by using a stop-word removal technique upon splitting the one or more candidate answers and the one or more questions. In an embodiment of the present disclosure, stop-word removal is commonly used in preprocessing steps across different NLP applications. The stop-word removal includes removing the commonly occurring words across all the documents in the corpus. Further, articles and pronouns are generally classified as stop words. The stop words have no significance in some NLP tasks, such as information retrieval and classification, which means these words are not very discriminative.
In an embodiment of the present disclosure, iteration is performed through each word in the input text, and if the word exists in the stop word set of the SpaCy language model, the word is removed. The AI-based method 700 includes lemmatizing the one or more candidate answers by using a lemmatization technique upon removing the one or more stop-words. In an embodiment of the present disclosure, lemmatization refers to the use of a vocabulary and morphological analysis of words, normally aiming to remove inflectional endings only and to return the base or dictionary form of a word, known as the lemma. If confronted with the token saw, lemmatization may attempt to return either see or saw depending on whether the use of the token was as a verb or a noun. Lemmatization commonly only collapses the different inflectional forms of a lemma. Furthermore, the AI-based method 700 includes detecting one or more common words in the lemmatized one or more candidate answers by using a word detection technique. In an embodiment of the present disclosure, the lemmatized forms of the words from the reference and candidate answers are iterated through, and common words are found through one-to-one comparison. These common words are then added to a Python list. The AI-based method 700 includes removing the detected one or more common words from the lemmatized one or more candidate answers. The AI-based method 700 includes determining if the length of the lemmatized one or more candidate answers upon removal of the detected one or more common words is zero. In an embodiment of the present disclosure, the one or more questions are repeated questions if the determined length is zero. Further, the AI-based method 700 includes assigning a zero technical score to the one or more technical interviews upon determining that the length of the lemmatized one or more candidate answers upon removal of the detected one or more common words is zero.
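A minimal Python sketch of the tokenization, stop-word removal, lemmatization and common-word removal chain described above, using a SpaCy language model as the disclosure mentions; the en_core_web_sm pipeline and the helper names are assumptions for this sketch.

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # any English SpaCy pipeline would do

def preprocess(text: str) -> list[str]:
    """Tokenize, drop stop words and punctuation, and lemmatize."""
    return [tok.lemma_.lower() for tok in nlp(text)
            if not tok.is_stop and not tok.is_punct]

question_tokens = preprocess("Is Python a dynamically typed language?")
answer_tokens = preprocess("Is Python a dynamically typed language?")

# Remove words common to both question and answer; an empty remainder
# means the candidate merely repeated the question, which scores 0.
common = set(question_tokens) & set(answer_tokens)
remainder = [w for w in answer_tokens if w not in common]
technical_score = 0 if len(remainder) == 0 else None  # None: keep scoring
```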
[0063] Further, in generating the technical score for each of the one or more technical interviews by considering the generated fuzzy score, the one or more reference answers and the one or more candidate answers by using the trained technical score evaluation-based AI model, the AI-based method 700 includes comparing the lemmatized one or more candidate answers and the one or more questions upon removal of the detected one or more common words with a set of predefined positive words and a set of predefined negative words upon determining that the length of the lemmatized one or more candidate answers upon removal of the detected one or more common words is not zero. Further, the AI-based method 700 includes determining if one or more words of the set of predefined positive words, the set of predefined negative words or a combination thereof match with the lemmatized one or more candidate answers and the one or more questions based on result of comparison. For example, the set of predefined positive words are yes, yeah, true and the like. For example, the set of predefined negative words are no, nope, false and the like. The AI-based method 700 includes assigning a hundred technical score to the one or more technical interviews upon determining that the one or more words match with the lemmatized one or more candidate answers and the one or more questions. In an embodiment of the present disclosure, the one or more technical questions are binary questions if the one or more words match with the lemmatized one or more candidate answers and the one or more questions. The AI-based method 700 includes assigning a zero technical score to the one or more technical interviews upon determining that the one or more words do not match with the lemmatized one or more candidate answers and the one or more questions. Furthermore, the AI-based method 700 includes generating the technical score for each of the one or more technical interviews by comparing the one or more candidate answers with the one or more reference answers by using the trained technical score evaluation-based AI model upon determining that the generated fuzzy score is less than the predefined threshold fuzzy score. In an exemplary embodiment of the present disclosure, the trained technical score evaluation-based AI model is an all-mpnet-base-v1 model. In an embodiment of the present disclosure, the one or more technical questions are descriptive questions if the generated fuzzy score is less than the predefined threshold fuzzy score.
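The positive/negative word matching for binary questions can be sketched as follows; the rule (a match scores 100, otherwise 0) and the word lists are taken from the description above, while the function signature is illustrative.

```python
POSITIVE = {"yes", "yeah", "true"}
NEGATIVE = {"no", "nope", "false"}

def binary_score(remaining_words: list[str]) -> int:
    """Score the remainder of a binary-question answer.

    If any leftover word matches the predefined positive or negative list,
    the answer is treated as a valid binary response (score 100);
    otherwise the answer scores 0.
    """
    words = {w.lower() for w in remaining_words}
    return 100 if words & (POSITIVE | NEGATIVE) else 0

print(binary_score(["yes"]))    # 100
print(binary_score(["maybe"]))  # 0
```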
[0064] Furthermore, in generating the technical score for each of the one or more technical interviews by comparing the one or more candidate answers with the one or more reference answers by using the trained technical score evaluation-based AI model upon determining that the generated fuzzy score is less than the predefined threshold fuzzy score, the AI-based method 700 includes splitting the one or more candidate answers and the one or more questions into the set of tokens by using the tokenization technique. Further, the AI-based method 700 includes removing the one or more stop-words from the one or more candidate answers and the one or more questions by using the stop-word removing technique upon splitting the one or more candidate answers and the one or more questions. The AI-based method 700 includes mapping each of the one or more candidate answers and the one or more questions to a 3D vector space by using the trained technical score evaluation-based AI model upon removing the one or more stop-words. Furthermore, the AI-based method 700 includes determining a vector space distance between the one or more candidate answers and the one or more questions by using the trained technical score evaluation-based AI model upon mapping each of the one or more candidate answers and the one or more questions to the 3D vector space. In an embodiment of the present disclosure, the determined vector space distance corresponds to the technical score associated with each of the one or more technical interviews.
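For descriptive questions, a sketch of the cosine-similarity scoring with a sentence-transformers checkpoint is given below; the base all-mpnet-base-v1 model is used here as a stand-in for the finetuned model of the disclosure, and the scaling of the similarity to a 0–100 score is an assumption.

```python
from sentence_transformers import SentenceTransformer, util

# Stand-in for the finetuned 'all-mpnet-base-v1' named in the disclosure.
model = SentenceTransformer("sentence-transformers/all-mpnet-base-v1")

reference = "Python is an interpreted, dynamically typed language."
candidate = "Python is dynamically typed and interpreted."

ref_vec = model.encode(reference, convert_to_tensor=True)
cand_vec = model.encode(candidate, convert_to_tensor=True)

# Cosine similarity lies in [-1, 1]; scaling it to 0-100 yields a score
# comparable to the fuzzy and binary scores above.
similarity = util.cos_sim(ref_vec, cand_vec).item()
technical_score = round(max(similarity, 0.0) * 100)
print(technical_score)
```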
[0065] At step 714, one or more mistakes are detected in the one or more candidate answers by using a trained sentence correction-based AI model upon determining that the type of the one or more interviews is the one or more communication interviews. In an exemplary embodiment of the present disclosure, the one or more mistakes include one or more grammatical mistakes, one or more spelling mistakes, and the like. In an embodiment of the present disclosure, the trained sentence correction-based AI model is a deep learning model. For example, the trained sentence correction-based AI model receives the one or more candidate answers to the one or more general questions and passes them through a grammar correction model to detect the grammatical mistakes in the sentences.
[0066] At step 716, one or more correct answers are generated for the one or more questions by updating the one or more candidate answers based on the detected one or more mistakes and a set of predefined correction rules by using the trained sentence correction-based AI model. For example, the set of predefined correction rules may be grammatical rules to generate the one or more correct answers. In an embodiment of the present disclosure, the one or more questions are one or more communication or general questions to assess communication skills of the one or more candidates.
[0067] At step 718, a communication score is generated for each of the one or more communication interviews by comparing the one or more candidate answers with the generated one or more correct answers by using the trained sentence correction-based AI model. In an embodiment of the present disclosure, the trained sentence correction-based AI model is a computational linguistics-based model. In an exemplary embodiment of the present disclosure, the trained sentence correction-based AI model is very accurate as it is trained on 1 million sentence pairs across 14 different datasets. The overall accuracy of the trained sentence correction-based AI model is around 80%.
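A sketch of this communication-scoring path, assuming a publicly available T5-based grammar-correction checkpoint ('vennify/t5-base-grammar-correction' is an illustrative stand-in; the disclosure does not name its model) and reusing the sentence-embedding similarity from the sketch above.

```python
from transformers import pipeline
from sentence_transformers import SentenceTransformer, util

# Assumed public checkpoint; the disclosure's own corrector is unnamed.
corrector = pipeline("text2text-generation",
                     model="vennify/t5-base-grammar-correction")
embedder = SentenceTransformer("sentence-transformers/all-mpnet-base-v1")

candidate = "he go to office every days"
corrected = corrector("grammar: " + candidate,
                      max_length=64)[0]["generated_text"]

# Communication score follows the same cosine-similarity idea: the closer
# the original answer is to its corrected form, the fewer the mistakes.
sim = util.cos_sim(embedder.encode(candidate, convert_to_tensor=True),
                   embedder.encode(corrected, convert_to_tensor=True)).item()
communication_score = round(max(sim, 0.0) * 100)
print(corrected, communication_score)
```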
[0068] In an embodiment of the present disclosure, the trained technical score evaluation-based AI model is trained to capture the contextualized meaning of words in the one or more candidate answers and to generate a cosine similarity between the one or more candidate answers and the one or more reference answers, thereby generating the technical score for technical or IT-based questions. Further, the trained sentence correction-based AI model is used to evaluate general communication-based skills. In an embodiment of the present disclosure, a Bidirectional Encoder Representations from Transformers (BERT)/Sentence BERT (SBERT) pre-trained model is adopted and customized by providing an additional technical question-answer based historical training dataset to tackle and evaluate technical questions. The trained BERT or SBERT model can generate a 768-dimensional vector representation of a sentence. In an embodiment of the present disclosure, the BERT model is based on a transformer architecture, which is very powerful. The BERT model is trained on 2,500 million words from Wikipedia and 800 million words from book corpora.
[0069] At step 720, the generated technical score and the generated communication score are outputted on user interface screen of the one or more electronic devices 108 associated with the one or more users. For example, the one or more users are the one or more candidates, one or more recruiters and the like.
[0070] In an embodiment of the present disclosure, the AI-based method 700 includes determining an interview status of the one or more interviews based on the generated technical score, the generated communication score, and a predefined threshold score. In an embodiment of the present disclosure, the predefined threshold score is a cut-off score. In an exemplary embodiment of the present disclosure, the interview status is qualified or not qualified. In an embodiment of the present disclosure, the determined interview status is outputted on user interface screen of the one or more electronic devices 108 associated with the one or more users. For example, the generated communication score is compared with the cut-off score to determine if a candidate possesses the required communication skills for a job role. Similarly, the generated technical score is compared with the cut-off score to determine if the candidate possesses the required technical skills for the job role.
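The qualification check reduces to a threshold comparison; a sketch follows, with the 60-point cut-off being a placeholder, since the disclosure only refers to a predefined threshold score without fixing its value.

```python
def interview_status(technical_score: float,
                     communication_score: float,
                     cutoff: float = 60.0) -> str:
    """Compare both generated scores against a predefined cut-off score.

    The default cut-off of 60 is an illustrative assumption; the actual
    threshold would be configured per job role.
    """
    qualified = technical_score >= cutoff and communication_score >= cutoff
    return "qualified" if qualified else "not qualified"

print(interview_status(72, 65))  # qualified
print(interview_status(72, 40))  # not qualified
```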
[0071] The method 700 may be implemented in any suitable hardware, software, firmware, or combination thereof.
[0072] FIGs. 8A – 8B are graphical user interface screens of the AI-based computing system 104 for facilitating assessment of interviews of candidates, in accordance with an embodiment of the present disclosure. A graphical user interface screen 802 of FIG. 8A shows multiple options on the dashboard, such as jobs, campus, users, accounts, panels, candidates, rounds, vault, workflow, and the like. Further, the graphical user interface screen 802 depicts personal details of the candidate, final status, job details, summary, the one or more questions, skill, score, video questions, MC questions, results, and the like. Further, a graphical user interface screen 804 of FIG. 8B displays the one or more questions, the skill, score, reference, and the like.
[0073] Thus, various embodiments of the present AI-based computing system 104 provide a solution to facilitate assessment of interviews of candidates. The AI-based computing system 104 evaluates the one or more candidates or interviewees based on technical and communication skills. The AI-based computing system 104 captures the candidate interview video, obtains the transcript using Speech-To-Text, and processes the transcript for scoring based on the video type, i.e., technical or communication, to judge the candidate’s technical and communication skills. Further, the AI-based computing system 104 performs an automatic evaluation of candidates’ online video interviews and provides an accurate score. The machine learning model, i.e., the trained technical score evaluation-based AI model, takes the one or more reference answers for the one or more technical questions into consideration and performs scoring using customized natural language processing and deep learning-based models. The AI-based computing system 104 helps recruiters find the right candidate in a very short period of time. The AI-based computing system 104 also incorporates multithreading concepts to handle various interviews at the same point of time and evaluate them with the help of AI-based models. The AI-based computing system 104 directly supports the Digital India mission and contributes to digital empowerment in the field of technology. Furthermore, the AI-based computing system 104 enables employers to choose the right candidates without manual interpretation of the raised interview questions. In an embodiment of the present disclosure, the AI-based computing system 104 generates the technical score and the communication score quickly. It may also drastically reduce the recruitment cost of the employer. The proposed invention is extremely time efficient. A job seeker is also facilitated by saving the traveling cost and time to attend the interview. In addition, the job seeker may choose a flexible timing for the interview. Since the process is quick, job seekers are not required to wait for their results for the next round. The AI-based computing system 104 also updates the candidate on his/her performance in the interview.
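The multithreading remark above can be sketched with Python's concurrent.futures; the pool size, the request payloads and the pipeline placeholder are illustrative assumptions, not details from the disclosure.

```python
from concurrent.futures import ThreadPoolExecutor

def evaluate_interview(request: dict) -> dict:
    """Placeholder for the full fetch -> transcribe -> score pipeline."""
    return {"request_id": request["request_id"], "score": 0}

requests = [{"request_id": "r1"}, {"request_id": "r2"}]  # illustrative inputs

# Evaluate several interviews concurrently; the pool size of 8 is assumed.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(evaluate_interview, requests))
print(results)
```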
[0074] The written description describes the subject matter herein to enable any person skilled in the art to make and use the embodiments. The scope of the subject matter embodiments is defined by the claims and may include other modifications that occur to those skilled in the art. Such other modifications are intended to be within the scope of the claims if they have similar elements that do not differ from the literal language of the claims or if they include equivalent elements with insubstantial differences from the literal language of the claims.
[0075] The embodiments herein can comprise hardware and software elements. The embodiments that are implemented in software include, but are not limited to, firmware, resident software, microcode, etc. The functions performed by various modules described herein may be implemented in other modules or combinations of other modules. For the purposes of this description, a computer-usable or computer-readable medium can be any apparatus that can comprise, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
[0076] The medium can be an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system (or apparatus or device) or a propagation medium. Examples of a computer-readable medium include a semiconductor or solid-state memory, magnetic tape, a removable computer diskette, a random-access memory (RAM), a read-only memory (ROM), a rigid magnetic disk and an optical disk. Current examples of optical disks include compact disk-read only memory (CD-ROM), compact disk-read/write (CD-R/W) and DVD.
[0077] Input/output (I/O) devices (including but not limited to keyboards, displays, pointing devices, etc.) can be coupled to the system either directly or through intervening I/O controllers. Network adapters may also be coupled to the system to enable the data processing system to become coupled to other data processing systems or remote printers or storage devices through intervening private or public networks. Modems, cable modem and Ethernet cards are just a few of the currently available types of network adapters.
[0078] A representative hardware environment for practicing the embodiments may include a hardware configuration of an information handling/computer system in accordance with the embodiments herein. The system herein comprises at least one processor or central processing unit (CPU). The CPUs are interconnected via system bus 208 to various devices such as a random-access memory (RAM), read-only memory (ROM), and an input/output (I/O) adapter. The I/O adapter can connect to peripheral devices, such as disk units and tape drives, or other program storage devices that are readable by the system. The system can read the inventive instructions on the program storage devices and follow these instructions to execute the methodology of the embodiments herein.
[0079] The system further includes a user interface adapter that connects a keyboard, mouse, speaker, microphone, and/or other user interface devices such as a touch screen device (not shown) to the bus to gather user input. Additionally, a communication adapter connects the bus to a data processing network, and a display adapter connects the bus to a display device which may be embodied as an output device such as a monitor, printer, or transmitter, for example.
[0080] A description of an embodiment with several components in communication with each other does not imply that all such components are required. On the contrary, a variety of optional components are described to illustrate the wide variety of possible embodiments of the invention. When a single device or article is described herein, it will be apparent that more than one device/article (whether or not they cooperate) may be used in place of a single device/article. Similarly, where more than one device or article is described herein (whether or not they cooperate), it will be apparent that a single device/article may be used in place of the more than one device or article, or a different number of devices/articles may be used instead of the shown number of devices or programs. The functionality and/or the features of a device may be alternatively embodied by one or more other devices which are not explicitly described as having such functionality/features. Thus, other embodiments of the invention need not include the device itself.
[0081] The illustrated steps are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments. Also, the words “comprising,” “having,” “containing,” and “including,” and other similar forms are intended to be equivalent in meaning and be open-ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items or meant to be limited to only the listed item or items. It must also be noted that as used herein and in the appended claims, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise.
[0082] Finally, the language used in the specification has been principally selected for readability and instructional purposes, and it may not have been selected to delineate or circumscribe the inventive subject matter. It is therefore intended that the scope of the invention be limited not by this detailed description, but rather by any claims that issue on an application based hereon. Accordingly, the embodiments of the present invention are intended to be illustrative, but not limiting, of the scope of the invention, which is set forth in the following claims.
Claims:
WE CLAIM:
1. An Artificial Intelligence (AI)-based computing system (104) for facilitating assessment of interviews of candidates, the AI-based computing system (104) comprising:
one or more hardware processors (202); and
a memory (204) coupled to the one or more hardware processors (202), wherein the memory (204) comprises a plurality of modules (110) in the form of programmable instructions executable by the one or more hardware processors (202), and wherein the plurality of modules (110) comprises:
a data obtaining module (210) configured to:
obtain an interview data associated with one or more interviews of one or more candidates from a storage unit (206), wherein the interview data comprises a candidate ID, an invite ID and a video data, and wherein the video data comprises a video type, one or more questions, a question ID, a video Uniform Resource Locator (URL) and one or more reference answers;
obtain one or more interview videos associated with the one or more interviews from the storage unit (206) based on the obtained interview data;
a data transcription module (212) configured to transcribe one or more audios associated with the obtained one or more interview videos into one or more transcripts by using a transcription technique, wherein the one or more transcripts correspond to one or more candidate answers for the one or more questions;
an interview determination module (214) configured to determine type of the one or more interviews based on the obtained interview data upon transcribing the one or more audios, wherein the type of the one or more interviews comprises at least one of: one or more technical interviews and one or more communication interviews;
a fuzzy score generation module (216) configured to generate a fuzzy score between the one or more questions and the one or more candidate answers by using a score generation technique upon determining that the type of the one or more interviews is the one or more technical interviews;
a technical score generation module (218) configured to generate a technical score for each of the one or more technical interviews by considering the generated fuzzy score, the one or more reference answers and the one or more candidate answers by using a trained technical score evaluation-based AI model, wherein the trained technical score evaluation-based AI model is a deep learning model, and wherein the one or more reference answers are predefined correct answers for the one or more questions;
a data detection module (220) configured to detect one or more mistakes in the one or more candidate answers by using a trained sentence correction-based AI model upon determining that the type of the one or more interviews is the one or more communication interviews, wherein the one or more mistakes comprise one or more grammatical mistakes and one or more spelling mistakes, and wherein the trained sentence correction-based AI model is the deep learning model;
an answer generation module (222) configured to generate one or more correct answers for the one or more questions by updating the one or more candidate answers based on the detected one or more mistakes and a set of predefined correction rules by using the trained sentence correction-based AI model;
a communication score generation module (224) configured to generate a communication score for each of the one or more communication interviews by comparing the one or more candidate answers with the generated one or more correct answers by using the trained sentence correction-based AI model; and
a data output module (226) configured to output the generated technical score and the generated communication score on user interface screen of one or more electronic devices (108) associated with one or more users.

2. The AI-based computing system (104) as claimed in claim 1, wherein in generating the technical score for each of the one or more technical interviews by considering the generated fuzzy score, the one or more reference answers and the one or more candidate answers by using the trained technical score evaluation-based AI model, the technical score generation module (218) is configured to:
determine if the generated fuzzy score is greater than a predefined threshold fuzzy score by comparing the generated fuzzy score with the predefined threshold fuzzy score, wherein the predefined threshold fuzzy score is 85;
split the one or more candidate answers and the one or more questions into a set of tokens by using a tokenization technique upon determining that the generated fuzzy score is greater than the predefined threshold fuzzy score;
remove one or more stop-words from the one or more candidate answers and the one or more questions by using a stop-word removal technique upon splitting the one or more candidate answers and the one or more questions;
lemmatize the one or more candidate answers by using a lemmatization technique upon removing the one or more stop-words;
detect one or more common words in the lemmatized one or more candidate answers by using a word detection technique;
remove the detected one or more common words from the lemmatized one or more candidate answers;
determine if length of the lemmatized one or more candidate answers upon removal of the detected one or more common words is zero, wherein the one or more questions are repeated questions if the determined length is zero; and
assign a zero technical score to the one or more technical interviews upon determining that the length of the lemmatized one or more candidate answers upon removal of the detected one or more common words is zero.

3. The AI-based computing system (104) as claimed in claim 2, wherein in generating the technical score for each of the one or more technical interviews by considering the generated fuzzy score, the one or more reference answers and the one or more candidate answers by using the trained technical score evaluation-based AI model, the technical score generation module (218) is configured to:
compare the lemmatized one or more candidate answers and the one or more questions upon removal of the detected one or more common words with a set of predefined positive words and a set of predefined negative words upon determining that the length of the lemmatized one or more candidate answers upon removal of the detected one or more common words is not zero;
determine if one or more words of at least one of: the set of predefined positive words and the set of predefined negative words match with the lemmatized one or more candidate answers and the one or more questions based on result of comparison;
assign a hundred technical score to the one or more technical interviews upon determining that the one or more words match with the lemmatized one or more candidate answers and the one or more questions, wherein the one or more technical questions are binary questions if the one or more words match with the lemmatized one or more candidate answers and the one or more questions;
assign a zero technical score to the one or more technical interviews upon determining that the one or more words do not match with the lemmatized one or more candidate answers and the one or more questions; and
generate the technical score for each of the one or more technical interviews by comparing the one or more candidate answers with the one or more reference answers by using the trained technical score evaluation-based AI model upon determining that the generated fuzzy score is less than the predefined threshold fuzzy score, wherein the one or more technical questions are descriptive questions if the generated fuzzy score is less than the predefined threshold fuzzy score.

4. The AI-based computing system (104) as claimed in claim 3, wherein in generating the technical score for each of the one or more technical interviews by comparing the one or more candidate answers with the one or more reference answers by using the trained technical score evaluation-based AI model upon determining that the generated fuzzy score is less than the predefined threshold fuzzy score, the technical score generation module (218) is configured to:
split the one or more candidate answers and the one or more questions into the set of tokens by using the tokenization technique;
remove the one or more stop-words from the one or more candidate answers and the one or more questions by using the stop-word removing technique upon splitting the one or more candidate answers and the one or more questions;
map each of the one or more candidate answers and the one or more questions to a 3D vector space by using the trained technical score evaluation-based AI model upon removing the one or more stop-words; and
determine a vector space distance between the one or more candidate answers and the one or more questions by using the trained technical score evaluation-based AI model upon mapping each of the one or more candidate answers and the one or more questions to the 3D vector space, wherein the determined vector space distance corresponds to the technical score associated with each of the one or more technical interviews.

5. The AI-based computing system (104) as claimed in claim 1, wherein in transcribing the one or more audios associated with the obtained one or more interview videos into the one or more transcripts by using the transcription technique, the data transcription module (212) is configured to:
extract the one or more audios from the obtained one or more interview videos by using a Fast Forward Moving Picture Experts Group (FFmpeg) package; and
transcribe the extracted one or more audios into the one or more transcripts by using the transcription technique.

6. The AI-based computing system (104) as claimed in claim 1, further comprising a status determination module (228) configured to determine an interview status of the one or more interviews based on the generated technical score, the generated communication score and a predefined threshold score, wherein the interview status is one of: qualified and not qualified, and wherein the determined interview status is outputted on user interface screen of the one or more electronic devices (108) associated with the one or more users.

7. An Artificial Intelligence (AI)-based method for facilitating assessment of interviews of candidates, the AI-based method comprising:
obtaining, by one or more hardware processors (202), an interview data associated with one or more interviews of one or more candidates from a storage unit (206), wherein the interview data comprises a candidate ID, an invite ID and a video data, and wherein the video data comprises a video type, one or more questions, a question ID, a video Uniform Resource Locator (URL) and one or more reference answers;
obtaining, by the one or more hardware processors (202), one or more interview videos associated with the one or more interviews from the storage unit (206) based on the obtained interview data;
transcribing, by the one or more hardware processors (202), one or more audios associated with the obtained one or more interview videos into one or more transcripts by using a transcription technique, wherein the one or more transcripts correspond to one or more candidate answers for the one or more questions;
determining, by the one or more hardware processors (202), type of the one or more interviews based on the obtained interview data upon transcribing the one or more audios, wherein the type of the one or more interviews comprises at least one of: one or more technical interviews and one or more communication interviews;
generating, by the one or more hardware processors (202), a fuzzy score between the one or more questions and the one or more candidate answers by using a score generation technique upon determining that the type of the one or more interviews is the one or more technical interviews;
generating, by the one or more hardware processors (202), a technical score for each of the one or more technical interviews by considering the generated fuzzy score, the one or more reference answers and the one or more candidate answers by using a trained technical score evaluation-based AI model, wherein the trained technical score evaluation-based AI model is a deep learning model, and wherein the one or more reference answers are predefined correct answers for the one or more questions;
detecting, by the one or more hardware processors (202), one or more mistakes in the one or more candidate answers by using a trained sentence correction-based AI model upon determining that the type of the one or more interviews is the one or more communication interviews, wherein the one or more mistakes comprise one or more grammatical mistakes and one or more spelling mistakes, and wherein the trained sentence correction-based AI model is the deep learning model;
generating, by the one or more hardware processors (202), one or more correct answers for the one or more questions by updating the one or more candidate answers based on the detected one or more mistakes and a set of predefined correction rules by using the trained sentence correction-based AI model;
generating, by the one or more hardware processors (202), a communication score for each of the one or more communication interviews by comparing the one or more candidate answers with the generated one or more correct answers by using the trained sentence correction-based AI model; and
outputting, by the one or more hardware processors (202), the generated technical score and the generated communication score on user interface screen of one or more electronic devices (108) associated with one or more users.

8. The AI-based method as claimed in claim 7, wherein generating the technical score for each of the one or more technical interviews by considering the generated fuzzy score, the one or more reference answers and the one or more candidate answers by using the trained technical score evaluation-based AI model comprises:
determining if the generated fuzzy score is greater than a predefined threshold fuzzy score by comparing the generated fuzzy score with the predefined threshold fuzzy score, wherein the predefined threshold fuzzy score is 85;
splitting the one or more candidate answers and the one or more questions into a set of tokens by using a tokenization technique upon determining that the generated fuzzy score is greater than the predefined threshold fuzzy score;
removing one or more stop-words from the one or more candidate answers and the one or more questions by using a stop-word removal technique upon splitting the one or more candidate answers and the one or more questions;
lemmatizing the one or more candidate answers by using a lemmatization technique upon removing the one or more stop-words;
detecting one or more common words in the lemmatized one or more candidate answers by using a word detection technique;
removing the detected one or more common words from the lemmatized one or more candidate answers;
determining if length of the lemmatized one or more candidate answers upon removal of the detected one or more common words is zero, wherein the one or more questions are repeated questions if the determined length is zero; and
assigning a zero technical score to the one or more technical interviews upon determining that the length of the lemmatized one or more candidate answers upon removal of the detected one or more common words is zero.

9. The AI-based method as claimed in claim 8, wherein generating the technical score for each of the one or more technical interviews by considering the generated fuzzy score, the one or more reference answers and the one or more candidate answers by using the trained technical score evaluation-based AI model comprises:
comparing the lemmatized one or more candidate answers and the one or more questions upon removal of the detected one or more common words with a set of predefined positive words and a set of predefined negative words upon determining that the length of the lemmatized one or more candidate answers upon removal of the detected one or more common words is not zero;
determining if one or more words of at least one of: the set of predefined positive words and the set of predefined negative words match with the lemmatized one or more candidate answers and the one or more questions based on result of comparison;
assigning a hundred technical score to the one or more technical interviews upon determining that the one or more words match with the lemmatized one or more candidate answers and the one or more questions, wherein the one or more technical questions are binary questions if the one or more words match with the lemmatized one or more candidate answers and the one or more questions;
assigning a zero technical score to the one or more technical interviews upon determining that the one or more words do not match with the lemmatized one or more candidate answers and the one or more questions; and
generating the technical score for each of the one or more technical interviews by comparing the one or more candidate answers with the one or more reference answers by using the trained technical score evaluation-based AI model upon determining that the generated fuzzy score is less than the predefined threshold fuzzy score, wherein the one or more technical questions are descriptive questions if the generated fuzzy score is less than the predefined threshold fuzzy score.

10. The AI-based method as claimed in claim 9, wherein generating the technical score for each of the one or more technical interviews by comparing the one or more candidate answers with the one or more reference answers by using the trained technical score evaluation-based AI model upon determining that the generated fuzzy score is less than the predefined threshold fuzzy score comprises:
splitting the one or more candidate answers and the one or more questions into the set of tokens by using the tokenization technique;
removing the one or more stop-words from the one or more candidate answers and the one or more questions by using the stop-word removing technique upon splitting the one or more candidate answers and the one or more questions;
mapping each of the one or more candidate answers and the one or more questions to a 3D vector space by using the trained technical score evaluation-based AI model upon removing the one or more stop-words; and
determining a vector space distance between the one or more candidate answers and the one or more questions by using the trained technical score evaluation-based AI model upon mapping each of the one or more candidate answers and the one or more questions to the 3D vector space, wherein the determined vector space distance corresponds to the technical score associated with each of the one or more technical interviews.

11. The AI-based method as claimed in claim 7, wherein transcribing the one or more audios associated with the obtained one or more interview videos into the one or more transcripts by using the transcription technique comprises:
extracting the one or more audios from the obtained one or more interview videos by using a Fast Forward Moving Picture Experts Group (FFmpeg) package; and
transcribing the extracted one or more audios into the one or more transcripts by using the transcription technique.

12. The AI-based method as claimed in claim 7, further comprising determining an interview status of the one or more interviews based on the generated technical score, the generated communication score and a predefined threshold score, wherein the interview status is one of: qualified and not qualified, and wherein the determined interview status is outputted on user interface screen of the one or more electronic devices (108) associated with the one or more users.

Dated this the 17th day of June 2022
Signature

Vidya Bhaskar Singh Nandiyal
Patent Agent (IN/PA-2912)
Agent for the Applicant

Documents

Application Documents

# Name Date
1 202241034719-RELEVANT DOCUMENTS [15-09-2023(online)].pdf 2023-09-15
2 202241034719-RELEVANT DOCUMENTS [23-12-2024(online)].pdf 2024-12-23
3 202241034719-STATEMENT OF UNDERTAKING (FORM 3) [17-06-2022(online)].pdf 2022-06-17
4 202241034719-EVIDENCE FOR REGISTRATION UNDER SSI [17-12-2024(online)].pdf 2024-12-17
5 202241034719-IntimationOfGrant18-01-2023.pdf 2023-01-18
6 202241034719-PROOF OF RIGHT [17-06-2022(online)].pdf 2022-06-17
7 202241034719-FORM FOR SMALL ENTITY [17-12-2024(online)].pdf 2024-12-17
8 202241034719-FORM FOR SMALL ENTITY(FORM-28) [17-06-2022(online)].pdf 2022-06-17
9 202241034719-PatentCertificate18-01-2023.pdf 2023-01-18
10 202241034719-FORM FOR SMALL ENTITY [17-06-2022(online)].pdf 2022-06-17
11 202241034719-FORM 4 [16-12-2024(online)].pdf 2024-12-16
12 202241034719-Annexure [21-12-2022(online)].pdf 2022-12-21
13 202241034719-Written submissions and relevant documents [21-12-2022(online)].pdf 2022-12-21
14 202241034719-FORM 1 [17-06-2022(online)].pdf 2022-06-17
15 202241034719-FORM-26 [01-12-2022(online)].pdf 2022-12-01
16 202241034719-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [17-06-2022(online)].pdf 2022-06-17
17 202241034719-EVIDENCE FOR REGISTRATION UNDER SSI [17-06-2022(online)].pdf 2022-06-17
18 202241034719-Annexure [15-11-2022(online)].pdf 2022-11-15
19 202241034719-Correspondence to notify the Controller [15-11-2022(online)].pdf 2022-11-15
20 202241034719-DRAWINGS [17-06-2022(online)].pdf 2022-06-17
21 202241034719-DECLARATION OF INVENTORSHIP (FORM 5) [17-06-2022(online)].pdf 2022-06-17
22 202241034719-US(14)-HearingNotice-(HearingDate-06-12-2022).pdf 2022-11-14
23 202241034719-CLAIMS [09-11-2022(online)].pdf 2022-11-09
24 202241034719-COMPLETE SPECIFICATION [17-06-2022(online)].pdf 2022-06-17
25 202241034719-COMPLETE SPECIFICATION [09-11-2022(online)].pdf 2022-11-09
26 202241034719-MSME CERTIFICATE [06-07-2022(online)].pdf 2022-07-06
27 202241034719-DRAWING [09-11-2022(online)].pdf 2022-11-09
28 202241034719-FORM28 [06-07-2022(online)].pdf 2022-07-06
29 202241034719-FORM-9 [06-07-2022(online)].pdf 2022-07-06
30 202241034719-ENDORSEMENT BY INVENTORS [09-11-2022(online)].pdf 2022-11-09
31 202241034719-FER_SER_REPLY [09-11-2022(online)].pdf 2022-11-09
32 202241034719-FORM 18A [06-07-2022(online)].pdf 2022-07-06
33 202241034719-FORM 3 [09-11-2022(online)].pdf 2022-11-09
34 202241034719-FORM-26 [12-07-2022(online)].pdf 2022-07-12
35 202241034719-FER.pdf 2022-08-08
36 202241034719-OTHERS [09-11-2022(online)].pdf 2022-11-09
37 202241034719-FORM FOR SMALL ENTITY [17-06-2025(online)].pdf 2025-06-17
38 202241034719-EVIDENCE FOR REGISTRATION UNDER SSI [17-06-2025(online)].pdf 2025-06-17

Search Strategy

1 202241034719_searchE_08-08-2022.pdf

ERegister / Renewals

3rd: 17 Dec 2024

From 17/06/2024 - To 17/06/2025

4th: 17 Jun 2025

From 17/06/2025 - To 17/06/2026