Abstract: Various embodiments of the present disclosure provide a method and a system for analyzing a video. The method includes receiving a user selection of at least one menu option with respect to the video analysis to be performed. The method includes receiving the video comprising one or more of: an action performed by the user and an audio of the user. The method includes identifying one or more attributes of the user from the received video, wherein the attributes are based on the menu option selected by the user. The method includes assigning weights to the identified attributes to determine a rating for the user, wherein the rating is a combination of a positive score and a negative score. The method further includes transmitting notifications or alerts with personalized recommendations to the user, wherein the personalized recommendations or alerts are generated based on the determined rating of the user.
[0001] Example embodiments of the present disclosure generally relate to a video analyzing system and more particularly relate to a method and system for analyzing video of a user performing one or more tasks.
BACKGROUND
[0002] Communication skill is the ability of a person to convey their thoughts effectively to other people. Communication skills can be grouped into verbal communication and non-verbal communication. Verbal communication is a type of oral communication wherein a message is transmitted by a sender through spoken words. In verbal communication, the sender conveys his feelings, thoughts, ideas, and opinions and expresses them in the form of speeches, discussions, presentations, and conversations. Non-verbal communication is a type of communication that uses non-oral cues such as eye contact, facial expressions, gestures, posture, body movement, and the like. Both verbal and non-verbal communication are required in a workplace for effective completion of work. Therefore, organizations test communication skills of potential candidates before making a hiring decision. In addition, the organizations provide continuous training and development to employees on enhancing their communication abilities after hiring so that the employees are able to collaborate with co-workers across multiple departments in their organization. Conventionally, because communication skills include a non-verbal aspect, training for communication skills requires a classroom setup along with a trainer available on location to provide training, monitor the verbal and non-verbal communication of the employees, and provide feedback. Setting up the classroom along with arranging for the trainer to be physically present to conduct the trainings significantly increases the costs of conducting the training.
[0003] Similarly, job seekers who need to improve their communication skills enroll in various training centers that offer in-person coaching. However, the cost of in-person coaching is high. Although recent advancements in technology have provided alternatives to in-person classroom trainings, such as online coaching, these offerings still require the trainer to be physically or virtually present to monitor the performance of students. Further, the present technologies do not monitor or provide feedback on the non-verbal aspect of communication.
SUMMARY
[0004] Various embodiments provide a method and a system for analyzing a video. The present invention analyzes the video of a user and provides instant feedback in real time. Therefore, the users are able to obtain real-time feedback on their communication. The present invention also eliminates the need for a trainer to be physically present to monitor the users by implementing Artificial Intelligence (AI) based algorithms. The video analyzing system receives the video from a user. The user may be an employee or a job seeker who is interested in improving communication skills. The user may provide a sample video that includes both verbal and non-verbal aspects of communication. The present invention analyzes the received video based on an option chosen by the user at the time of providing the sample video. Subsequently, weights are assigned based on a plurality of attributes and a rating for the video is determined. Based on the rating, the present invention may then provide recommendations to the user for improving their communication skills. Thus, the present invention enables real-time feedback on the communication skills of the user by analyzing the video provided by the user, without requiring a trainer to be physically present.
[0005] According to some embodiments, a method for analyzing a video is disclosed. The method includes receiving, from a user device over a network, a user selection of at least one menu option with respect to the video analysis to be performed, and receiving, from the user device over the network, the video, wherein the video comprises one or more of: an action performed by the user and an audio of the user. The method includes identifying one or more attributes of the user from the received video, wherein the attributes are based on the menu option selected by the user. The method includes assigning weights to the identified attributes to determine a rating for the user, wherein the rating is a combination of a positive score and a negative score. The method further includes transmitting notifications or alerts with personalized recommendations to the user, wherein the personalized recommendations or alerts are generated based on the determined rating of the user.
[0006] According to some embodiments, the method further includes providing the positive score and the negative score for each of the identified attributes of the user.
[0007] According to some embodiments, to determine the rating of the user, the weights assigned to the positive score and the negative score are different for each of the identified attributes of the user.
[0008] According to some embodiments, the assigning of weights is performed by one or more of manual assignment and machine learning algorithms.
[0009] According to some embodiments, the attributes of the user comprise at least one of body language, vocabulary, and voice.
[0010] According to some embodiments, the menu option comprises one or more of type of video, number of people communicating with the user, and visibility of the user.
[0011] According to some embodiments, the rating for the user is determined by subtracting the negative score from the positive score.
[0012] According to some embodiments, a system for analyzing a video is disclosed. The system includes a memory storing executable instructions and a processor configured to execute the stored executable instructions. The processor is configured to receive from a user device over a network, a user selection of at least one menu option with respect to the video analysis to be performed. The processor is
configured to receive, from the user device over the network, the video, wherein the video comprises one or more of: an action performed by the user and an audio of the user. The processor is configured to identify one or more attributes of the user from the received video, wherein the attributes are based on the menu option selected by the user. The processor is configured to assign weights to the identified attributes to determine a rating for the user, wherein the rating is a combination of a positive score and a negative score. The processor is further configured to transmit notifications or alerts with personalized recommendations to the user, wherein the personalized recommendations or alerts are generated based on the determined rating of the user.
[0013] According to some embodiments, the processor is configured to assign weights by one or more of manual assignment and machine learning algorithms.
[0014] According to some embodiments, the processor is configured to process the machine learning algorithms in one or more of: a smart wearable device, a portable communication device, and a cloud server.
[0015] According to some embodiments, the processor includes a receiving module configured to receive the video from the user's device, a video analysis module configured to analyze body language of the user from the received video, and an audio analysis module configured to analyze vocabulary and voice of the user from the received video.
[0016] The foregoing summary is illustrative only and is not intended to be in any way limiting. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features will become apparent by reference to the drawings and the following detailed description.
BRIEF DESCRIPTION OF THE DRAWINGS

[0017] FIG. 1 illustrates an environment representation of a video analyzing system for analyzing a video, in accordance with an example embodiment of the present disclosure;
[0018] FIG. 2 shows a block diagram of the system of FIG. 1, in accordance with an example embodiment of the present disclosure;
[0019] FIGS. 3A-3C show exemplary attributes for analyzing the video, in accordance with an example embodiment of the present disclosure; and
[0020] FIG. 4 shows a flowchart depicting a method for analyzing a video, in accordance with an example embodiment of the present disclosure.
DETAILED DESCRIPTION
[0021] In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure. It will be apparent, however, to one skilled in the art that the present disclosure may be practiced without these specific details.
[0022] Throughout the following description, numerous references may be made regarding servers, services, or other systems formed from computing devices. It should be appreciated that the use of such terms is deemed to represent one or more computing devices having at least one processor configured or programmed to execute software instructions stored on a tangible, non-transitory computer-readable medium, also referred to as a processor-readable medium. For example, a server can include one or more computers operating as a web server, a data source server, a cloud computing server, a remote computing server, or another type of computer server in a manner to fulfill described roles, responsibilities, or functions. Within the context of this document, the disclosed modules are also deemed to comprise computing devices having a processor and a non-transitory memory storing instructions executable by the processor that cause the device to control, manage, or otherwise manipulate the features of the devices or systems.
[0023] The embodiments are described herein for illustrative purposes and are subject to many variations. It is understood that various omissions and substitutions of equivalents are contemplated as circumstances may suggest or render expedient, but these are intended to cover the application or implementation without departing from the spirit or the scope of the present disclosure. Further, it is to be understood that the phraseology and terminology employed herein are for the purpose of description and should not be regarded as limiting. Any heading utilized within this description is for convenience only and has no legal or limiting effect.
[0024] FIG. 1 illustrates an environment representation 100 of a system for analyzing a video, in accordance with an example embodiment of the present disclosure. The video includes one or more of: an action performed by a user 102 and an audio of the user 102. The video of the user 102 is captured by a plurality of sensors 108. The plurality of sensors 108 may correspond to one or more of a standalone camera, a web camera, a standalone microphone, smartphone sensors such as accelerometers, gyroscopes, compasses, microphones, and cameras, and a multitude of infrared, optical, or radio frequency links such as Bluetooth, Wi-Fi, ZigBee, and infrared sensors. The plurality of sensors 108 are configured to transmit the captured video to a video analyzing system 106 through a network 112. The video analyzing system 106 performs one or more analyses on the captured video. The video analyzing system 106 may be accessed by the user 102 via a user device 104. The user 102 may set up an account for analyzing the video via an application interface 104a in the user device 104. In an embodiment, the user device 104 may be a smartphone that is configured to capture the video of the user 102. The plurality of sensors 108, such as the camera, microphone, smartphone sensors, and the like, present in the user device 104 are configured to capture the video. The user device 104 is communicatively connected to the video analyzing system 106 through the network 112.
[0025] The network 112 may comprise suitable logic, circuitry, and interfaces that may be configured to provide a plurality of network ports and a plurality of communication channels for transmission and reception of data. Each network port may correspond to a virtual address (or a physical machine address) for transmission
and reception of the communication data. For example, the virtual address may be an Internet Protocol Version 4 (IPv4) (or an IPv6 address) and the physical address may be a Media Access Control (MAC) address. The network 112 may be associated with an application layer for implementation of communication protocols based on one or more communication requests from at least one of the one or more communication devices. The communication data may be transmitted or received, via the communication protocols. Examples of such wired and wireless communication protocols may include, but are not limited to, Transmission Control Protocol and Internet Protocol (TCP/IP), User Datagram Protocol (UDP), Hypertext Transfer Protocol (HTTP), File Transfer Protocol (FTP), ZigBee, EDGE, infrared (IR), IEEE 802.11, 802.16, cellular communication protocols, and/or Bluetooth (BT) communication protocols.
[0026] Examples of the network 112 may include, but are not limited to, a wireless channel, a wired channel, or a combination of wireless and wired channels thereof. The wireless or wired channel may be associated with a network standard which may be defined by one of a Local Area Network (LAN), a Personal Area Network (PAN), a Wireless Local Area Network (WLAN), a Wireless Sensor Network (WSN), a Wide Area Network (WAN), a Wireless Wide Area Network (WWAN), a Long Term Evolution (LTE) network, a plain old telephone service (POTS), and a Metropolitan Area Network (MAN). Additionally, the wired channel may be selected on the basis of bandwidth criteria. For example, an optical fiber channel may be used for high bandwidth communication. Further, a coaxial cable-based or Ethernet-based communication channel may be used for moderate bandwidth communication.
[0027] The video analyzing system 106 stores the captured video from the plurality of sensors 108 in a database 110. In one example embodiment, the database 110 may be embodied within the video analyzing system 106. In another example embodiment, the database 110 may be remotely connected to the video analyzing system 106 via the network 112. In some embodiments, the video analyzing system 106 may store information of the user 102 in the database 110. In some embodiments, the video analyzing system 106 may store a rating for the user 102 in the database 110. The rating for the user 102 is determined by the video analyzing system 106 by assigning weights to attributes identified from the video of the user 102 captured by the plurality of sensors 108.
[0028] The video analyzing system 106 may be embodied as a cloud-based service or within a cloud-based platform. The detailed description of the video analyzing system 106 is further provided with reference to FIG. 2.
[0029] FIG. 2 illustrates a detailed block diagram 200 of the video analyzing system 106 defined in FIG. 1, in accordance with an example embodiment of the present disclosure. The video analyzing system 106 (also referred to as "the system 106") may comprise a user interface (UI) module 202 for providing an interactive and intuitive interface to the user 102 through the user device 104 for accessing the system 106. The UI module 202 comprises a menu option module 202a for providing options to the user 102 when accessing the system 106. The menu option module 202a includes one or more options such as type of video, number of people in conversation, and visibility of hands and legs of the user 102. The user 102 may select one or more options from the menu option module 202a, and the system 106 is configured to perform analysis of the video based on the selected option. For example, when the user 102 selects the type of video option, the user 102 is presented by the menu option module 202a with different types of video options such as elevator pitch, interview, presentation, public speaking, VLOG, customized meeting, and the like. When the user 102 selects the number of people in conversation option, the user 102 is presented, by the menu option module 202a, with options like one-to-one and one-to-many, wherein the one-to-one option corresponds to one person in addition to the user 102 conducting the interview, whereas the one-to-many option corresponds to many persons in addition to the user 102 conducting the interview. When the user 102 selects the visibility option, the user 102 is presented by the menu option module 202a with options such as visibility of the hands in addition to hiding of the legs, visibility of the hands in addition to visibility of the legs, hiding of both the hands and the legs, total hiding of the user 102 where only the voice of the user 102 is audible, and the like. The user 102 may select various options from the menu option module 202a to customize the analysis as per requirements.
The system 106 further includes a memory module 204 for storing a plurality of information such as user data, account data, video data, and the like, and also for storing computer-executable instructions to analyze the video of the user 102. The system 106 further includes a processor 206 which enables carrying out of the various computer-executable instructions stored in the memory module 204 through various logic components and/or modules. The processor 206 (also referred to as the processing module 206) includes a receiving module 206a, a video analysis module 206b, and an audio analysis module 206c. The receiving module 206a is configured to receive the video of the user 102 from the user device 104 or from the plurality of sensors 108 connected to the system 106 via the network 112. The video comprises one or more of: an action performed by the user 102 and an audio of the user 102. The action performed by the user 102 is analyzed by the video analysis module 206b. The video analysis module 206b is configured to analyze the body language of the user 102 from the received video. The audio of the user 102 is analyzed by the audio analysis module 206c. The audio analysis module 206c is configured to analyze the vocabulary and voice of the user 102 from the received video.
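By way of illustration only, the module decomposition described above may be sketched as follows. All class and method names (e.g., ReceivingModule, analyze_video) are hypothetical and are not recited in this disclosure, and the returned attribute values are placeholders.

```python
from dataclasses import dataclass, field

@dataclass
class AnalysisResult:
    """Per-category findings extracted from one received video."""
    body_language: dict = field(default_factory=dict)
    vocabulary: dict = field(default_factory=dict)
    voice: dict = field(default_factory=dict)

class ReceivingModule:
    """Counterpart of the receiving module 206a: accepts the uploaded video."""
    def receive(self, video_bytes: bytes) -> bytes:
        # In practice: validate, decode, and persist the stream.
        return video_bytes

class VideoAnalysisModule:
    """Counterpart of the video analysis module 206b: body-language attributes."""
    def analyze(self, video_bytes: bytes) -> dict:
        # Placeholder percentages of video time each attribute was observed.
        return {"smile": 28.0, "eye_contact": 40.0, "head_movement": 22.0}

class AudioAnalysisModule:
    """Counterpart of the audio analysis module 206c: vocabulary and voice."""
    def analyze(self, video_bytes: bytes) -> dict:
        return {"filler_word_rate": 4.5, "vocal_tone": "calm"}

def analyze_video(video_bytes: bytes) -> AnalysisResult:
    """Wire the three modules together, mirroring processor 206."""
    received = ReceivingModule().receive(video_bytes)
    audio = AudioAnalysisModule().analyze(received)
    return AnalysisResult(
        body_language=VideoAnalysisModule().analyze(received),
        vocabulary={"filler_word_rate": audio["filler_word_rate"]},
        voice={"vocal_tone": audio["vocal_tone"]},
    )
```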
[0030] The video analysis module 206b and the audio analysis module 206c are configured to identify one or more attributes of the user 102 from the video obtained from the receiving module 206a. The attributes of the user 102 comprise at least one of body language, vocabulary, and voice of the user 102. The video analysis module 206b and the audio analysis module 206c are further configured to assign weights to each of the identified attributes from the received video. The weights are assigned to the identified attributes to determine a rating for the user 102. The rating is a combination of a positive score and a negative score provided for each of the identified attributes of the user 102 by the video analysis module 206b and the audio analysis module 206c of the processor 206. The positive score and the negative score provided for each identified attribute of the user 102 vary for different menu options selected by the user 102 from the menu option module 202a. For example, when the user 102 selects the menu option of visibility of hands in addition to visibility of legs, the scores are calculated such that the body language of the user 102 is given a weightage of 40%, the vocabulary, i.e., word power, of the user 102 is given a weightage of 25%, and the voice, i.e., vocal tone, of the user 102 is given a weightage of 35%. When the user 102 chooses the menu option of visibility of hands in addition to hiding of legs, the scores are calculated such that the body language of the user 102 is given a weightage of 35%, the vocabulary of the user 102 is given a weightage of 30%, and the voice of the user 102 is given a weightage of 35%. When the user 102 chooses the menu option of hiding both the hands and the legs, the scores are calculated such that the body language of the user 102 is given a weightage of 35%, the vocabulary of the user 102 is given a weightage of 30%, and the voice of the user 102 is given a weightage of 35%. When the user 102 chooses the menu option of total hiding of the user 102 where only the voice of the user 102 is audible, the scores are calculated such that the vocabulary of the user 102 is given a weightage of 30% and the voice of the user 102 is given a weightage of 60%; the body language of the user 102 is not calculated. The scores for each category, such as body language, word power, and voice, consist of positive and negative scores provided for a plurality of individual attributes of the user 102 under each of the categories. The detailed scoring methodology for each of the categories is explained in detail below.
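The visibility-dependent category weightages recited above lend themselves to a simple lookup. The following is a minimal sketch, assuming hypothetical option keys and category scores on a 0-100 scale; none of these identifiers are recited in the disclosure.

```python
# Category weightages per visibility menu option, as recited above.
# (For the fully hidden option the paragraph gives 30% + 60% with body
# language not scored, so the weights need not sum to 100%.)
CATEGORY_WEIGHTS = {
    "hands_and_legs_visible":    {"body_language": 0.40, "vocabulary": 0.25, "voice": 0.35},
    "hands_visible_legs_hidden": {"body_language": 0.35, "vocabulary": 0.30, "voice": 0.35},
    "hands_and_legs_hidden":     {"body_language": 0.35, "vocabulary": 0.30, "voice": 0.35},
    "fully_hidden_audio_only":   {"vocabulary": 0.30, "voice": 0.60},
}

def weighted_rating(category_scores: dict, visibility_option: str) -> float:
    """Combine per-category scores into one rating using the selected option."""
    weights = CATEGORY_WEIGHTS[visibility_option]
    return sum(weights[c] * category_scores[c] for c in weights)

# Example with hypothetical category scores.
print(weighted_rating({"body_language": 70, "vocabulary": 55, "voice": 80},
                      "hands_and_legs_visible"))  # 69.75
```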
[0031] The body language category measures gestures, posture, and movement of body parts of the user 102 in the received video. The body language is measured by the video analysis module 206b. The video analysis module 206b provides positive and negative scores for each of the identified attributes. The attributes of the body language category include, but are not limited to, smile criteria, eye contact, facial emotion, weight on legs, meaningful hand movements, movement of legs, and head movement of the user 102. The video analysis module 206b assigns weights for each of the identified attributes. When the user 102 is standing, i.e., both hands and legs are visible in the received video, the weights for positive scoring assigned by the video analysis module 206b are as follows: smile criteria 25%, eye contact 15%, facial emotions such as calm, happy, and surprised 10%, weight on legs 15%, meaningful hand movements 15%, movement of legs 10%, and head movement 10%. When the legs of the user 102 are invisible in the received video, the weights for positive scoring assigned by the video analysis module 206b are as follows: smile criteria 35%, eye contact 15%, facial emotions such as calm, happy, and surprised 15%, meaningful hand movements 20%, and head movement 15%. The movements of the legs are not measured as the legs are invisible. When the legs and hands of the user 102 are invisible in the received video, the weights for positive scoring assigned by the video analysis module 206b are as follows: smile criteria 35%, eye contact 25%, facial emotions such as calm, happy, and surprised 20%, and head movement 20%. A detailed positive scoring methodology for each of the identified attributes of the user 102, such as smile criteria, eye contact, facial emotion, meaningful hand movements, movement of legs, and head movements, is provided in the tables below.
| Score of 1 (Red) | Score of 2 (Red) | Score of 3 (Yellow) | Score of 4 (Green) | Score of 5 (Green) |
|---|---|---|---|---|
| <5% | 5.1% to 15% | 15.1% to 25% | 25.1% to 35% | >35% |

Table 1. Smile criteria and eye contact
[0032] The smile criteria and eye contact attributes are scored from 1 to 5, with 1 being the lowest and 5 being the highest. A score of 1 is provided when the user 102 smiles less than 5% of the time in the video, and a score of 5 is provided when the user 102 smiles more than 35% of the time in the video. A similar scoring methodology is applied to rate the eye contact attribute of the user 102. The scoring methodology for the other identified individual attributes of the user 102 is provided in the tables below.
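Table 1's banding may be expressed as a small threshold function. The sketch below is illustrative only; it assumes the input is the percentage of video time the attribute was detected, and the handling of exact band edges is an assumption.

```python
def smile_or_eye_contact_score(pct_of_time: float) -> int:
    """Map % of video time (Table 1) to a 1-5 score; 1 lowest, 5 highest."""
    if pct_of_time < 5.0:
        return 1
    if pct_of_time <= 15.0:
        return 2
    if pct_of_time <= 25.0:
        return 3
    if pct_of_time <= 35.0:
        return 4
    return 5

assert smile_or_eye_contact_score(4.0) == 1
assert smile_or_eye_contact_score(28.0) == 4
assert smile_or_eye_contact_score(36.0) == 5
```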
| Score of 1 | Score of 2 | Score of 3 | Score of 4 | Score of 5 |
|---|---|---|---|---|
| <35% | 35% to 44.9% | 45% to 54.9% | 55% to 64.9% | 65% or more |

Table 2. Facial emotion (calm, happy, and surprised)
| Score of 1 (Red) | Score of 2 (Red) | Score of 3 (Yellow) | Score of 4 (Green) | Score of 5 (Green) |
|---|---|---|---|---|
| <5% | 5% to 10% | 10.1% to 20% | 20.1% to 30% | >30% |

Table 3. Meaningful hand movement
| Score of 1 (Red) | Score of 3 (Yellow) | Score of 5 (Green) |
|---|---|---|
| <40% | 40% to 60% | >60% |

Table 4. Weight balanced on both legs
When providing a positive score for the movement of legs and movement of head attributes, the video analysis module 206b determines whether the user 102 is speaking in a one-to-one setup or in a one-to-many setup. The scoring is then calculated based on the determination, as provided in the tables below.
| Score of 1 (Red) | Score of 3 (Yellow) | Score of 5 (Green) |
|---|---|---|
| >5% of time | 2% to 5% | 0% to 2% |

Table 5a. Movement of legs (when speaking one-to-one)
| Score of 1 (Red) | Score of 3 (Yellow) | Score of 5 (Green) |
|---|---|---|
| >20% of time | <5% of time | 5.1% to 10% of time |

Table 5b. Movement of legs (when speaking one-to-many)
| Score of 5 | Score of 4 | Score of 3 | Score of 2 | Score of 1 |
|---|---|---|---|---|
| 20% to 40% | 10% to 19.99% | 5% to 9.99% | <5% | >40% |

Table 6a. Movement of head (when speaking one-to-one)
| Score of 5 | Score of 4 | Score of 3 | Score of 2 | Score of 1 |
|---|---|---|---|---|
| >50% | 40% to 50% | 30% to 40% | 20% to 30% | <20% |

Table 6b. Movement of head (when speaking one-to-many)
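Because the head-movement bands differ between the one-to-one and one-to-many setups (Tables 6a and 6b), the scorer selects the band table from the number-of-people menu option. The following is a hedged sketch of that selection, with band boundaries transcribed from the tables above and edge handling assumed.

```python
def head_movement_score(pct: float, one_to_one: bool) -> int:
    """Score head movement per Tables 6a/6b based on the conversation setup."""
    if one_to_one:  # Table 6a: moderate movement (20%-40%) scores best
        if pct > 40.0:
            return 1
        if pct >= 20.0:
            return 5
        if pct >= 10.0:
            return 4
        if pct >= 5.0:
            return 3
        return 2
    # Table 6b: one-to-many favours more head movement
    if pct > 50.0:
        return 5
    if pct >= 40.0:
        return 4
    if pct >= 30.0:
        return 3
    if pct >= 20.0:
        return 2
    return 1

print(head_movement_score(25.0, one_to_one=True))   # 5
print(head_movement_score(25.0, one_to_one=False))  # 2
```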
In addition to the positive scores, the video analysis module 206b provides negative scores for each of the identified attributes of the user 102. The video analysis module 206b checks the received video to determine whether the user 102 has exhibited attributes such as weight on one leg, hands crossed/locked, closed wrists/hands (wrist distance), facial emotion (sum of angry and disgusted), and facial emotion (sum of fearful, sad, and confused), which are discouraged when making a presentation or attending an interview. The receiving module 206a also checks whether the legs and hands of the user 102 are visible and adjusts the scoring accordingly. The negative score thus obtained is then subtracted from the total positive score to obtain a final rating for the user 102. The weights and formulas for calculating the negative score are provided below.
(i) Weightages for Negative Scoring for videos when legs and arms are visible
1. Weight on one leg (20%)
2. Hands Crossed/Locked (25%)
3. Closed Wrist/Hands (wrists distance) (20%)
4. Facial Emotion (Sum of Angry and Disgusted) (20%)
5. Facial Emotion (Sum of Fearful, Sad, and Confused) (15%)
Total negative score (Negative Scoring) = (1 + 2 + 3 + 0.5*4 + 0.5*5)%, to be subtracted from the Total Positive Score.
(ii) Weightages for Negative Scoring for videos when legs are not visible
1. Hands Crossed/Locked (35%)
2. Closed Wrists/Hands (wrists distance) (30%)
3. Facial Emotion (Sum of Angry and Disgusted) (20%)
4. Facial Emotion (Sum of Fearful, Sad, and Confused) (15%)
Total negative score (Negative Scoring) = (1 + 2 + 0.5*3 + 0.5*4)%, to be subtracted from the Total Positive Score.
(iii) Weightages for Negative Scoring for videos when legs and arms are not visible.
1. Facial Emotion (Sum of Angry and Disgusted) (70%)
2. Facial Emotion (Sum of Fearful, Sad, and Confused) (30%)
Total negative score (Negative Scoring) = (0.5*1 + 0.5*2)%, to be subtracted from the Total Positive Score.
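The three negative-scoring formulas (i)-(iii) differ only in which attributes are present and which carry a 0.5 multiplier. A minimal sketch follows, assuming each numbered item is supplied as a weighted sub-score in percent under hypothetical keys; the visibility labels are likewise assumptions.

```python
def total_negative_score(sub_scores: dict, visibility: str) -> float:
    """Combine negative sub-scores per formulas (i)-(iii) above.

    sub_scores holds a weighted percentage per attribute; the two
    facial-emotion groups always enter with a 0.5 multiplier.
    """
    if visibility == "legs_and_arms_visible":        # formula (i)
        return (sub_scores["weight_on_one_leg"]
                + sub_scores["hands_crossed"]
                + sub_scores["closed_wrists"]
                + 0.5 * sub_scores["angry_disgusted"]
                + 0.5 * sub_scores["fearful_sad_confused"])
    if visibility == "legs_hidden":                  # formula (ii)
        return (sub_scores["hands_crossed"]
                + sub_scores["closed_wrists"]
                + 0.5 * sub_scores["angry_disgusted"]
                + 0.5 * sub_scores["fearful_sad_confused"])
    # formula (iii): legs and arms hidden
    return (0.5 * sub_scores["angry_disgusted"]
            + 0.5 * sub_scores["fearful_sad_confused"])

# Final rating per this embodiment: positive total minus negative total.
final_rating = 72.0 - total_negative_score(
    {"weight_on_one_leg": 2.0, "hands_crossed": 1.5, "closed_wrists": 1.0,
     "angry_disgusted": 2.0, "fearful_sad_confused": 1.0},
    "legs_and_arms_visible")
print(final_rating)  # 66.0
```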
(iv) Weight on one leg (standing with weight on one leg or the other)

| Score of -5 (Red) | Score of -4 (Red) | Score of -3 (Yellow) | Score of -2 (Yellow) | Score of -1 (Yellow) | Score of 0 (Green) |
|---|---|---|---|---|---|
| >25% | 20% to 25% | 15% to 20% | 10% to 15% | 5% to 10% | <5% (NA) |

Table 7. Weight on one leg
(v) Hands crossed or locked

| Score of -5 (Red) | Score of -3 (Yellow) | Score of -1 (Yellow) | Score of 0 (Green) |
|---|---|---|---|
| >10% | 5% to 9% | 1% to 5% | <1% |

Table 8. Hands crossed or locked
(vi) Closed wrists/hands (wrist closed)

| Score of -5 (Red) | Score of -3 (Yellow) | Score of -1 (Yellow) | Score of 0 (Green) |
|---|---|---|---|
| >15% | 10.1% to 15% | 5% to 10% | <5% |

Table 9. Closed wrists/hands
(vii) Sum total of facial emotions (disgusted and angry only)

| Score of -5 | Score of -4 | Score of -3 | Score of -2 | Score of -1 |
|---|---|---|---|---|
| >25% | 20.1% to 25% | 15.1% to 20% | 10.1% to 15% | 5% to 10% |

Table 10. Sum total of facial emotions (disgusted and angry only)
(viii) Sum total of facial emotions (sad, fearful, and confused only)

| Score of -5 | Score of -4 | Score of -3 | Score of -2 | Score of -1 |
|---|---|---|---|---|
| >35% | 30.1% to 35% | 25.1% to 30% | 20.1% to 25% | 10% to 20% |

Table 11. Sum total of facial emotions (sad, fearful, and confused only)
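Tables 7 through 11 share one shape, percentage bands mapped to penalty scores, so they may be captured as data rather than branching code. The band boundaries below transcribe the tables above; the helper and its key names are assumptions, and handling of exact band edges is approximate.

```python
# (lower_bound_pct, score) pairs, checked from the highest band down.
NEGATIVE_BANDS = {
    "weight_on_one_leg":    [(25, -5), (20, -4), (15, -3), (10, -2), (5, -1), (0, 0)],  # Table 7
    "hands_crossed":        [(10, -5), (5, -3), (1, -1), (0, 0)],                        # Table 8
    "closed_wrists":        [(15, -5), (10, -3), (5, -1), (0, 0)],                       # Table 9
    "angry_disgusted":      [(25, -5), (20, -4), (15, -3), (10, -2), (5, -1), (0, 0)],   # Table 10
    "fearful_sad_confused": [(35, -5), (30, -4), (25, -3), (20, -2), (10, -1), (0, 0)],  # Table 11
}

def negative_attribute_score(attribute: str, pct_of_time: float) -> int:
    """Return the penalty band (Tables 7-11) for one observed percentage."""
    for lower_bound, score in NEGATIVE_BANDS[attribute]:
        if pct_of_time > lower_bound:
            return score
    return 0

print(negative_attribute_score("hands_crossed", 7.0))  # -3
```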
[0033] The word power, i.e., vocabulary, category measures the type of words, vocabulary, and grammatical correctness of the words used by the user 102 in the received video. The word power category is measured by the audio analysis module 206c. The audio analysis module 206c provides both positive and negative scores for each of the identified attributes of the user 102. The identified attributes include, but are not limited to, filler words, per words, unique words, sentence length, data used, "I" statements used, and the like. The detailed scoring methodology for the different types of videos is shown below.
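As one illustration of a word-power attribute, filler-word usage may be measured from a speech transcript. This sketch is an assumption; the disclosure does not enumerate a filler-word list or specify how the transcript is tokenized.

```python
import re

# Hypothetical filler-word list; the disclosure does not enumerate one.
FILLER_WORDS = {"um", "uh", "like", "basically", "actually", "literally"}

def filler_word_rate(transcript: str) -> float:
    """Percentage of transcript tokens that are filler words."""
    tokens = re.findall(r"[a-z']+", transcript.lower())
    if not tokens:
        return 0.0
    fillers = sum(1 for t in tokens if t in FILLER_WORDS)
    return 100.0 * fillers / len(tokens)

print(filler_word_rate("Um, I basically think, like, the plan works."))  # 37.5
```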
We Claim:
1. A method of analyzing a video, the method comprising:
receiving from a user device over a network, a user selection of at least one menu option with respect to the video analysis to be performed;
receiving from the user device over the network, the video, wherein the video comprises one or more of: an action performed by the user and an audio of the user;
identifying one or more attributes of the user from the received video, wherein the attributes are based on the menu option selected by the user;
assigning weights to the identified attributes to determine a rating for the user, wherein the rating is a combination of a positive score and a negative score; and
transmitting notifications or alerts with personalized recommendations to the user, wherein the personalized recommendations or alerts are generated based on the determined rating of the user.
2. The method of claim 1, further comprising providing the positive score and the negative score for each of the identified attributes of the user.
3. The method of claim 1, wherein to determine the rating of the user, the weights assigned to the positive score and the negative score are different for each of the identified attributes of the user.
4. The method of claim 1, wherein the assigning of weights is performed by one or more of manual assignment and machine learning algorithms.
5. The method of claim 1, wherein the attributes of the user comprise at least one of body language, vocabulary, and voice.
6. The method of claim 1, wherein the menu option comprises one or more of type of video, number of people communicating with the user, and visibility of the user.
7. The method of claim 1, wherein the rating for the user is determined by subtracting the negative score from the positive score.
8. A system for analyzing a video, the system comprising:
a memory configured to store executable instructions; and
a processor configured to execute the executable instructions stored in the memory, the processor configured to:
receive from a user device over a network, a user selection of at least one menu option with respect to the video analysis to be performed;
receive from the user device over the network, the video, wherein the video comprises one or more of: an action performed by the user and an audio of the user;
identify one or more attributes of the user from the received video, wherein the attributes are based on the menu option selected by the user;
assign weights to the identified attributes to determine a rating for the user, wherein the rating is a combination of a positive score and a negative score; and
transmit notifications or alerts with personalized recommendations to the user, wherein the personalized recommendations or alerts are generated based on the determined rating of the user.
9. The system of claim 8, wherein the processor is configured to assign weights by one or more of manual assignment and machine learning algorithms.
10. The system of claim 9, wherein the processor is configured to process the machine learning algorithms in one or more of: a smart wearable device, a portable communication device, and a cloud server.
11. The system of claim 8, wherein the processor comprises:
a receiving module configured to receive the video from the user's device;
a video analysis module configured to analyze body language of the user from the received video; and
an audio analysis module configured to analyze vocabulary and voice of the user from the received video.